EP3329419A1 - Method for capturing an object on a road in the environment of a motor vehicle, camera system and motor vehicle using the same - Google Patents

Method for capturing an object on a road in the environment of a motor vehicle, camera system and motor vehicle using the same

Info

Publication number
EP3329419A1
Authority
EP
European Patent Office
Prior art keywords
image
determined
image blocks
region
motor vehicle
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP16753598.8A
Other languages
German (de)
French (fr)
Inventor
Swaroop Kaggere Shivamurthy
Ciaran Hughes
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Connaught Electronics Ltd
Original Assignee
Connaught Electronics Ltd
Application filed by Connaught Electronics Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

Fig. 3 shows a schematic flow diagram of a method for capturing an object 10 in the environmental region 8 of the motor vehicle 1. In a step S1, the method is started. The functionality of the object recognition can for example be provided by means of corresponding configuration parameters; the object recognition process begins with the initialization of these configuration parameters. In a step S2, the required data for segmenting the image 9 is gathered. Herein, a confidence measure for performing the segmentation can be determined. In the segmentation, a road area 15, which is associated with a road or a roadway in the environmental region 8, is to be recognized in the image 9. Hereto, a texture-oriented method can for example be used. In a step S3, the means required for performing the segmentation can be determined and initialized by the object detector 11. These means can for example be intermediate storages, data structures or the like and can serve for analyzing the confidence measure for the segmentation. Further, the contiguous areas in the image 9 can each be provided with a corresponding identification; for example, the identification or ID "road" can be assigned to the road area 15.
In a step S4, a first stage of the object recognition is effected. Therein, the image 9 is divided into a plurality of image blocks 16. The image blocks 16 can for example be disposed next to each other in multiple lines and columns. The image blocks 16 can each include at least one pixel and in particular be selected equally sized. For each image block 16, an intensity value is determined, which describes the intensity of the respective image block 16. Further, it is checked if the intensity value of an image block 16 has a predetermined variation compared to an adjacent image block 16. If this is the case, the identification edge block 17 is associated with this image block in a step S5. The identification edge block 17 indicates that this image block 16 can include a severe edge.
Subsequently, a second stage of the object recognition is performed. Herein, a region of interest 18 is determined in the image 9 and divided into a plurality of columns 19. For each of the columns 19, the number of the image blocks 16 with the identification edge block 17 is determined. Based thereon, an object region 21 is then determined in the image.
Afterwards, a third stage of the object recognition is effected. Therein, the edge blocks 17 in the object region 21 are examined in more detail. For example, it can be checked if the edge blocks 17 have a predetermined pattern. Further, it can be determined if the object 10 is on the road. Finally, the recognized object 10 or the recognized objects 10 are correspondingly classified and marked in the image 9. Subsequently, the method is terminated in a step S9.
Fig. 4 shows a diagram, based on which step S4 and the first stage of the object recognition, respectively, are to be explained in more detail. Therein, a plurality of cost values 22 is preset; they are presented in the left column of the diagram. Presently, 64 cost values 22 are preset. The cost values 22 describe the difference of an intensity value of an image block 16 to the intensity value of the adjacent image block 16. A cost value 22 with the value of 0 corresponds to the case, in which the intensity values of adjacent image blocks 16 are identical. The cost value 22 with the value of 64 describes the case, in which the intensity values of adjacent image blocks 16 maximally differ. For each of the cost values 22, the respective number 23 of image blocks is presented. Starting from the lowest cost value, the respective number 23 of the image blocks 16 is added. If the accumulated sum Acc reaches a predetermined portion of the total number of the image blocks 16, that cost value 22 is determined, at which the accumulated sum Acc has reached this portion. This portion can be selected depending on the scene and can for example be 90 %. In the present example, the total number of the image blocks 16 can for example be 100. The predetermined portion of 90 % or the accumulated sum Acc of 90 is presently reached at the cost value 22 with the value of 3. Therein, the threshold value is determined such that the identification edge block 17 is assigned to all of the image blocks having a cost value greater than or equal to 4.
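This accumulation can be retraced with a short numeric sketch. The histogram counts below are illustrative stand-ins, not values taken from the drawing; they are merely chosen so that 100 image blocks reach the 90 % portion at the cost value 3:

```python
# Illustrative reconstruction of the accumulation of Fig. 4; the counts
# per cost value are assumed for demonstration purposes only.
counts = {0: 60, 1: 20, 2: 6, 3: 4, 4: 5, 5: 3, 6: 2}  # cost value -> number of blocks
total = sum(counts.values())        # 100 image blocks in the road area
portion = 0.90                      # scene-dependent portion, here 90 %

acc = 0
for cost in sorted(counts):
    acc += counts[cost]             # accumulated sum Acc
    if acc >= portion * total:      # 90 of 100 blocks reached ...
        threshold = cost + 1        # ... at cost value 3, so edge if cost >= 4
        break

print(threshold)  # 4: the identification edge block is assigned from this cost value on
```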
Fig. 5 shows an image 9, which was provided by one of the cameras 4. Therein, the individual image blocks 16 are apparent, as are the image blocks 16 with the identification edge block 17. The edge blocks 17 are disposed in the edge areas 24 of the road area 15 on the one hand. On the other hand, the edge blocks 17 are disposed in an area 25, in which the object 10 or a pedestrian 26 is located. Further, the edge blocks 17 are disposed in an area 27, which is associated with a damage of the roadway surface.
Fig. 6 shows an image 9 of a further scene or environment. Here, the road area 15 as well as the edge blocks 17 in the road area 15 are apparent, which presently are disposed in an area 28, which is associated with a roadway marking 29.
Fig. 7 shows a further flow diagram for explaining the method for recognizing the object 10 and in particular for explaining the step S4 of the method according to Fig. 3 in more detail. The method is started with step S1. In a step S2, the confidence measure for the segmentation of the image 9 and in particular for determining the road area 15 is gathered. In a step S3, the required means for segmenting are determined and initialized. In a step S10, it is checked if all of the image blocks 16 have already been processed. If this is not the case, the method is continued with a step S11, in which it is checked if the image block 16 is associated with the road area 15. If this is not the case, the method is again continued with step S10. Otherwise, in a step S12, the cost values 22 are defined and the image block 16 is associated with one of the cost values 22. In a step S13, it is checked if all of the cost values 22 have already been passed. If this is not the case, in a step S14, the accumulated sum Acc for the following cost value 22 is determined. In a step S15, it is checked if the accumulated sum Acc exceeds the predetermined portion. If this is satisfied, the threshold value is determined based on the current cost value 22 in a step S16. If the predetermined portion is not yet reached, the method is again continued with step S13. Finally, the method is terminated in a step S17.
Fig. 8 shows the image according to Fig. 5, wherein the region of interest 18 is divided into the plurality of columns 19. For each column 19, the number of edge blocks 17 in the column is determined. Based on the respective number of the edge blocks 17 per column 19, the object region 21 can then be defined in the region of interest 18. Thus, those columns having a similar number of edge blocks 17 can for example be combined.
These object regions 21 can then be examined in more detail. For example, a rectangle can be defined in the area of the object regions 21 by examining the position of the edge blocks 17 and the pattern which the edge blocks 17 form. Further, the individual image blocks 16 of the object region can be examined with respect to their brightness. Thus, it can for example be determined if it is a shadow or an area with increased illumination, for example as a result of solar radiation or the like. Moreover, the spatial extension of the object region 21 is examined. Thereby, it can for example be differentiated if it is a pedestrian 26 or a roadway marking 29. In the case of a pedestrian 26, the object region 21 has relatively low spatial dimensions. Further, it is usually apparent here that the edge blocks 17 are present in adjacent columns 19. If the pedestrian 26 is too far away from the motor vehicle 1, the edge blocks 17 will only be able to be found in a narrow object region 21. Even if the pedestrian 26 cannot be recognized herein, an area with severe edges can nevertheless be determined. The roadway markings 29 can for example be recognized based on their spatial extension. Therein, an imaging error caused by the lens of the camera 4 can be taken into account. Further, the roadway markings can be recognized by their brightness. With other objects 10 such as for example parked vehicles 30 or walls 31, usually, a low number of edge blocks 17 occurs. The following sketch combines these cues.
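A rough plausibility check along these lines might look as follows; all numeric limits are invented for illustration and are not specified by the patent:

```python
def classify_object_region(width_cols: int, edge_block_count: int,
                           mean_brightness: float) -> str:
    """Coarse, illustrative classification of an object region 21 based on
    the cues described for Figs. 8 and 9 (all thresholds are assumptions)."""
    if edge_block_count < 4:
        # parked vehicles 30 or walls 31 typically yield few edge blocks
        return "other object (e.g. parked vehicle or wall)"
    if width_cols <= 2:
        # pedestrians produce a narrow region over few adjacent columns
        return "pedestrian candidate"
    if mean_brightness > 200:
        # roadway markings stand out by brightness and spatial extension
        return "roadway marking candidate"
    return "unknown"
```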
Fig. 9 shows the image according to Fig. 8, wherein the classified objects 10 are correspondingly marked in the image 9. Here, for example, a parked vehicle 30 as well as the walls 31 are apparent. With the aid of the method, the objects 10 can be reliably recognized. In particular, objects 10 can be recognized within the road area 15, which first has been completely associated with the road in the segmentation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for capturing at least one object (10) in an environmental region (8) of a motor vehicle (1) including the steps of: providing at least one image (9) of the environmental region (8) by means of a camera (4), determining a road area (15) in the at least one image (9), which is associated with a road in the environmental region (8), dividing the road area (15) into a plurality of image blocks (16), determining a respective intensity value for the image blocks (16), which describes an intensity of at least one pixel of the image block (16), associating an identification edge block (17) with those of the image blocks (16), the respective intensity value of which has a predetermined variation to at least one adjacent of the image blocks (16), and capturing the at least one object (10) based on the image blocks (16) with the identification edge block (17).

Description

METHOD FOR CAPTURING AN OBJECT ON A ROAD IN THE ENVIRONMENT OF A MOTOR VEHICLE, CAMERA SYSTEM AND MOTOR VEHICLE USING THE SAME
The present invention relates to a method for capturing at least one object in an environmental region of a motor vehicle. Moreover, the present invention relates to a camera system for a motor vehicle. Finally, the present invention relates to a motor vehicle with such a camera system.
Camera systems for motor vehicles are known from the prior art. These camera systems can for example include a plurality of cameras, which are disposed distributed on the motor vehicle. An environmental region of the motor vehicle can then be captured by these cameras. In particular, image sequences or video data can be provided by the cameras, which describe or depict the environmental region. This video data can then be presented in the interior of the motor vehicle for example on a display device.
Furthermore, it is known from the prior art to use the images of the cameras to capture objects in the environmental region. For this purpose, for example, corresponding object recognition algorithms can be used. Furthermore, it is known from the prior art to perform a segmentation of the images for this purpose. Therein, areas contiguous in content in the image can be generated by combining adjacent pixels according to a certain homogeneity criterion. The object recognition algorithms based on segmentation can be divided into different categories. For example, so-called edge-oriented methods are known, in which edges or object transitions are searched for in the image. Here, methods such as the Sobel operator or the Canny edge detector can for example be used. Moreover, threshold-based methods such as, for example, the Otsu method are known. Moreover, region-oriented methods, cluster-based methods, so-called split and merge methods, graph-based methods and/or trainable methods are used.
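As a pointer to what such an edge-oriented method computes, the following is a minimal Sobel sketch; it is purely illustrative and not an operator prescribed by the patent:

```python
import numpy as np

def sobel_magnitude(gray: np.ndarray) -> np.ndarray:
    """Plain Sobel operator as an example of the edge-oriented prior-art
    methods mentioned above (illustrative reference implementation)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                # vertical gradient kernel
    h, w = gray.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2].astype(float)
            gx, gy = (patch * kx).sum(), (patch * ky).sum()
            out[y, x] = np.hypot(gx, gy)     # gradient magnitude = edge strength
    return out
```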
In most of the existing methods for segmenting, the detection of an area in the image is effected based on a limit value, which has to be adapted for example depending on changing requirements or scenes. In addition, a plurality of areas or regions are often recognized by the segmentation, which subsequently have to be correspondingly examined. In particular, these areas have to be associated with corresponding objects. Thus, these areas can be classified in the form of a classification, thus associated with semantics or significance. These solutions are computationally complex and therefore require high computational power. This entails the disadvantage that they can only be employed in restricted manner in camera systems for motor vehicles, which for example have a digital signal processor (DSP).
It is the object of the present invention to demonstrate a solution as to how an object in an environmental region can be captured more simply and reliably based on images of a camera.
According to the invention, this object is solved by a method, by a camera system as well as by a motor vehicle having the features according to the respective independent claims. Advantageous developments of the invention are the subject matter of the dependent claims, of the description and of the figures.
A method according to the invention serves for capturing at least one object in an environmental region of a motor vehicle. The method includes providing at least one image by means of the camera. Further, the method includes determining a road area in the at least one image, which is associated with a road in the environmental region. Further, the method involves dividing the road area into a plurality of image blocks as well as determining a respective intensity value for the image blocks, which describes an intensity of at least one pixel of the image block. Further, it is provided that an identification edge block is associated with those of the image blocks, the respective intensity value of which has a predetermined variation to at least one adjacent of the image blocks. Finally, the at least one object is captured based on the image blocks with the identification edge block.
With the aid of the method, one or more objects in the environmental region of the motor vehicle are to be captured or recognized. Thereto, at least one image is provided by means of at least one camera of the motor vehicle. In particular, it is provided that a plurality of images or an image sequence is provided by the camera, which describes or depicts the environmental region. The at least one image can then be processed or examined by means of a corresponding evaluation device. In particular, it is provided that the at least one image is subjected to an object recognition algorithm. Herein, a road area in the image is recognized or determined, which is associated with a road in the environmental region. In particular, the road is to be understood as the roadway on which the motor vehicle is currently located or on which the motor vehicle currently travels. When the road area in the image has been recognized, the road area is divided into a plurality of image blocks. The individual image blocks thus represent partial areas of the road area. The image blocks can for example be disposed next to each other in multiple lines and multiple columns. The respective image blocks can include at least one pixel of the image. In particular, the image blocks have the same size.
Furthermore, it is provided that an intensity value is respectively determined for each image block, which describes the intensity of at least one pixel of the respective image block. Thus, the intensity value of the pixel centrally disposed in the image block can for example be used. It can also be provided that the intensity is respectively determined for the individual pixels of the image block and the average value of the intensities of the pixels in the image block is used as the intensity value. Thus, the respective image block in the road area can be described by the intensity value. Further, it is provided that it is examined for each of the image blocks how the intensity value of the image block relates to the intensity value of at least one adjacent image block. In particular, a difference between the intensity value of the image block and at least one intensity value of the adjacent image block is determined. Thus, it is checked if the intensities of the adjacent image blocks have a predetermined variation. If the difference between the intensity value of the image block and the intensity value of the neighboring image block exceeds a predetermined variation, the identification edge block is assigned to this image block. The identification or ID edge block describes that this image block can include a severe edge. Overall, this identification is associated with each of the image blocks in the road area which has the predetermined intensity variation compared to the neighboring image block. This allows these edge blocks within the road area to be examined in more detail, as sketched below. In particular, these edge blocks are examined as to whether they are to be associated with an object. Thereby, an object located on the road in the environmental region of the motor vehicle can be captured.
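The block division and the edge-block flagging described above can be illustrated with a short sketch. The block size, the use of the block mean as the intensity value and the fixed variation threshold are assumptions chosen for illustration, not values prescribed by the patent:

```python
import numpy as np

def flag_edge_blocks(road: np.ndarray, block: int = 8,
                     variation: float = 12.0) -> np.ndarray:
    """Divide a grayscale road area into image blocks, describe each block
    by its mean intensity and flag blocks whose intensity deviates from a
    neighboring block by more than the predetermined variation."""
    h, w = road.shape
    rows, cols = h // block, w // block
    # intensity value per image block (here: average over the block's pixels)
    intensity = (road[:rows * block, :cols * block]
                 .reshape(rows, block, cols, block).mean(axis=(1, 3)))
    edge = np.zeros((rows, cols), dtype=bool)
    # compare each block with its right and its lower neighbor
    edge[:, :-1] |= np.abs(np.diff(intensity, axis=1)) > variation
    edge[:-1, :] |= np.abs(np.diff(intensity, axis=0)) > variation
    return edge
```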
For determining the road area, the at least one image is preferably segmented by means of a texture-oriented method or based on the intensity of the pixels of the image. In order to recognize the road area in the at least one image, which is associated with the road or the roadway in the environmental region of the motor vehicle, the at least one image is correspondingly segmented. Herein, it can be provided that a texture-oriented method is used hereto. Herein, it is taken into account that the road or the roadway has a typical texture due to its surface. Hereto, a corresponding method can for example be used, which is able to recognize these typical surface structures of the road or of the roadway in the image. Alternatively or additionally, it can be provided that the intensity values of at least a predetermined number of the pixels in the image are examined for segmenting the image or for recognizing the road area. Hereto, the individual pixels can be examined in more detail with respect to their intensity. For example, those pixels can be combined, which satisfy a predetermined homogeneity criterion. Hereto, the image can for example be examined by a corresponding line histogram and/or column histogram. Further, it can be provided that the respective brightness of the pixels is examined. In this manner, the road area depicting the road or the roadway in the image can be reliably captured.
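The patent leaves the concrete segmentation method open. As one possible intensity-based variant, adjacent pixels satisfying a homogeneity criterion can be combined by region growing; the seed position and the tolerance below are illustrative assumptions:

```python
import numpy as np
from collections import deque

def grow_road_mask(gray: np.ndarray, tol: float = 10.0) -> np.ndarray:
    """Region growing from a seed at the bottom center of the image: pixels
    are combined while they satisfy a simple homogeneity criterion relative
    to the running region mean (illustrative road segmentation)."""
    h, w = gray.shape
    seed = (h - 1, w // 2)          # assumption: road directly in front of the camera
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    mean, count = float(gray[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(gray[ny, nx]) - mean) <= tol):
                mask[ny, nx] = True
                count += 1
                mean += (float(gray[ny, nx]) - mean) / count  # update running mean
                queue.append((ny, nx))
    return mask
```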
In an embodiment, a plurality of cost values is preset, the image blocks are associated with one of the cost values depending on a difference of their intensity value to the intensity value of the at least one adjacent image block and the identification edge block is associated with those image blocks, the cost value of which exceeds a predetermined threshold value. Presently, those image blocks are examined, which are associated with the road area in the image. For example, the identification or the ID "road" can be associated with these image blocks. Further, a plurality of cost values is preset. These cost values can for example correspond to respective classes in a histogram. For example, 64 cost values in total can be preset. Subsequently, for each of the image blocks, the difference of its intensity value to the intensity value of at least one adjacent image block is determined. Depending on this variation of the intensity value, one of the predetermined cost values is associated with each of the image blocks. Therein, it can also be provided that multiple image blocks are associated with one of the cost values. Therein, the cost values can be determined such that a low cost value describes a low intensity variation and a high cost value describes a high intensity variation. Further, a threshold value is preset, and the identification edge block is associated with those image blocks, the cost value of which exceeds this predetermined threshold value; these image blocks are then referred to as edge blocks. This allows identifying the edge blocks in a simple manner and with low computational power. Thus, the association of the image blocks with the edge blocks can be performed with the aid of a digital signal processor or another corresponding evaluation device of the camera.
In a configuration, a number of the image blocks associated with the respective cost values is respectively determined and summed up one after the other starting from the lowest cost value. In addition, that of the cost values is determined, at which the sum of the image blocks reaches a predetermined portion with respect to the number of the image blocks in the road area, and the threshold value is determined based on the determined cost value. In other words, a corresponding histogram can be determined, in which the classes correspond to the cost values. Thereby, the respective number of the image blocks associated with the cost values can be summed up. Thus, starting from the lowest cost value, an accumulated sum of the number of the image blocks can be determined. Therein, the lowest cost value includes those image blocks, in which the intensity variation with respect to the adjacent image block is lowest. For example, those image blocks can be associated with the lowest cost value, in which the intensity value corresponds to the intensity value of the adjacent block. Subsequently, it is checked when or at which of the cost values the accumulated sum has reached a predetermined portion of the total number of the image blocks in the road area. This portion can be predetermined depending on the complexity of the scene in the image and can for example be 90 %. That cost value, at which the accumulated sum has reached the portion with respect to the total number of the image blocks, is then used for determining the threshold value. For example, those image blocks associated with cost values above this selected cost value can be referred to as edge blocks. Thereby, the edge blocks can be determined in a simple manner and with low computational effort, for example as in the following sketch.
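The 64 cost values and the 90 % portion below come from the examples in the description; the linear quantization of the intensity differences is an assumption made for illustration:

```python
import numpy as np

def cost_of(diff: float, max_diff: float, n_costs: int = 64) -> int:
    """Assumed quantization: map an absolute intensity difference linearly
    onto one of the n_costs cost values (histogram classes)."""
    if max_diff <= 0:
        return 0
    return min(int(abs(diff) / max_diff * (n_costs - 1)), n_costs - 1)

def adaptive_edge_threshold(costs: np.ndarray, n_costs: int = 64,
                            portion: float = 0.90) -> int:
    """Accumulate the number of blocks per cost value from the lowest cost
    value upwards and return the cost value at which the accumulated sum
    reaches the predetermined portion of all blocks in the road area."""
    counts = np.bincount(costs.ravel(), minlength=n_costs)
    acc = np.cumsum(counts)                   # accumulated sum Acc
    threshold = int(np.searchsorted(acc, portion * costs.size))
    return threshold                          # edge block if cost > threshold
```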
Furthermore, it is advantageous if a region of interest of the image is divided into multiple columns, wherein a width of the respective columns is determined such that they include at least one pixel, and for each of the columns, a number of image blocks contained in the column with the identification edge block is determined. In the at least one image, a predetermined part or a region of interest is selected. This region of interest can be selected such that it is disposed below a horizontal line in the image. The horizontal line in the image can be determined such that for example a sky is located above this horizontal line. In particular, the road area is in the selected part of the image. The selected part of the image is then divided into a plurality of columns. Herein, a predetermined number of columns can be provided. Therein, the column width is selected such that it includes at least one image block. For each of the columns, the number of the image blocks, with which the identification edge block is associated, is determined in the manner of a histogram. In other words, it is thus determined how many image blocks with the identification edge block are in the respective column. Thereby, it can be subsequently determined, in which area the object can possibly be located. In particular, it can be determined, in which area of the image, which has been associated with the road area in the segmentation, the object can be located. Alternatively or additionally, it can also be provided that the selected region of interest of the image is divided into a plurality of lines and the number of the image blocks contained in the line with the identification edge block is determined for each of the lines.
Preferably, based on the columns, in which the number of the contained image blocks with the identification edge block exceeds a predetermined limit value, at least one object region is determined within the image. By the examination of the predetermined columns in the manner of a vertical histogram, those areas within the image or the region of interest of the image can be recognized, in which the object can be located. Therein, those columns can be determined, in which the number of the contained image blocks with the identification edge block exceeds a predetermined limit value. Therein, it can also be provided that it is checked if this limit value is exceeded with columns disposed next to each other. Based on those columns, in which the limit value is exceeded, an object region can be defined in the image and in particular in the road area. This object region describes a part of the image or of the region of interest of the image, which first was determined as the road, in which an object can optionally be located. This object region can then be examined in more detail afterwards. In particular, herein, it can be determined if an object is actually present, which is for example on the road or on the roadway or if the supposed object is a part of the road.
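The column evaluation and the derivation of object regions might be sketched as follows; the number of columns, the limit value and the grouping rule are illustrative assumptions:

```python
import numpy as np

def object_regions_from_columns(edge: np.ndarray, n_cols: int = 32,
                                limit: int = 3) -> list[tuple[int, int]]:
    """Divide the region of interest (given as a boolean edge-block grid)
    into columns, count the edge blocks per column in the manner of a
    vertical histogram and merge adjacent columns exceeding the limit
    value into object regions (start column, end column)."""
    columns = np.array_split(edge, n_cols, axis=1)
    counts = [int(c.sum()) for c in columns]   # edge blocks per column
    regions, start = [], None
    for i, n in enumerate(counts + [0]):       # trailing 0 closes an open region
        if n > limit and start is None:
            start = i                          # limit value exceeded: region begins
        elif n <= limit and start is not None:
            regions.append((start, i - 1))     # region ends at the previous column
            start = None
    return regions
```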
In a configuration, a respective brightness of the image blocks in the object region and/or a respective position of the image blocks with the identification edge block in the object region are determined and the at least one object is recognized based on the determined brightness of the image blocks and/or the position of the image blocks with the identification edge block. Based on the respective position or arrangement of the image blocks with the identification edge block, a predetermined pattern can for example be recognized. Herein, it can be further examined if this pattern is to be associated with an object. Further, it can be checked if the edge blocks for example describe a boundary of the road or the roadway. Alternatively or additionally, it can be provided that the distribution of the brightness within the object region is examined. Thus, it can also be determined if a corresponding pattern is present within the object region. Furthermore, it can for example be recognized if areas illuminated by sunlight or shadow areas are present in the object region. Further, reflections on the road surface can for example be recognized. This is in particular the case if water is on the roadway surface. Thus, it can be examined if an object is actually located in the object region.
Furthermore, it is advantageous if a spatial extension of the object region is determined and the at least one object is captured based on the determined spatial extension. For recognizing the object, basically, the spatial extension or the dimensions of the object region can be taken into account. If the object is for example a pedestrian, it is usually the case that the object region has a low spatial extension along a horizontal direction. In this case, the object region for example only extends over a few columns disposed next to each other. Herein, how far the pedestrian is away from the motor vehicle can for example also be recognized based on the width of the object region in the horizontal direction. Further, it can for example be recognized based on the object region if the object is a roadway marking. Due to the optical distortion, roadway markings for delimiting the roadway or the road for example have a larger spatial extension than centerlines, which are for example centrally disposed in the image. In this manner, the object on the road can be reliably captured.
In a further configuration, at least two images are provided, for each of the at least two images, the object region is determined and the object is captured based on a comparison of the object regions in the at least two images. An image sequence with at least two images, which have been captured consecutively in time, can be provided by means of the camera. Therein, the object region can be recognized in each one. In the at least two images, the object regions corresponding to each other can then be compared to each other. Thus, it can for example be examined if the object region varies depending on the time. Thus, it can for example be determined if the object region has been recognized in multiple temporally consecutive images. Thus, it can for example be checked if the supposed object is a real object or for example momentarily present solar radiation or a momentarily present shadow. Therein, it can additionally be provided that the current traveling velocity and/or the direction of travel of the motor vehicle are acquired and taken into account in the comparison of the object regions of the at least two images. Thus, it can be checked in a reliable manner if a real object is present on the road or roadway.
Furthermore, it is advantageous if the at least one object is identified as a static or moved object based on a comparison of the object regions in the at least two images. The first image can for example describe the environmental region at a first point of time. The second image can describe the environmental region at a second point of time following the first point of time. If the object region in the second image decreases compared to the first image, it can for example be assumed that it is a moved object moving away from the motor vehicle. If the object region in the second image increases compared to the first image, it can be assumed that the object moves towards the motor vehicle. Thus, it can be recognized if an object is on the roadway, and additionally it can be determined if the object is a static, thus non-moved, object or a moved object. A comparison of this kind is sketched below.
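The size measure and the tolerance in this sketch are assumptions, and the sketch presumes that the traveling velocity and the direction of travel of the motor vehicle have already been compensated:

```python
def motion_state(area_t0: int, area_t1: int, tol: float = 0.1) -> str:
    """Compare the size of an object region (e.g. its number of edge
    blocks) in two consecutive images; the tolerance is illustrative and
    the vehicle's own motion is assumed to be compensated already."""
    if area_t1 > area_t0 * (1 + tol):
        return "moved object moving towards the motor vehicle"
    if area_t1 < area_t0 * (1 - tol):
        return "moved object moving away from the motor vehicle"
    return "static object"
```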
In particular, it is provided that the object region is determined within the road area. Thus, the area of the image, which was originally defined as the road in the segmentation, can be examined in more detail. Thus, it can be determined if the image blocks having the identification edge block describe an object located on the road. In addition, it can be examined if the image blocks with the identification edge block for example describe a part of the road, in particular a pothole or the like. In this manner, a confidence measure for the road area can be provided, which describes, with which probability the road area is actually a real road. Based on the object recognized on the road, a three-dimensional object recognition can be performed. Further, three-dimensional reconstructions can be performed. It can also be provided that a three-dimensional clustering method is performed. Further, the information with respect to the object can be used to determine an optical flow.
In addition, it is advantageous if the at least one object is classified as a pedestrian, as a vehicle, as a roadway marking or as a wall. Furthermore, the object can for example be classified as a pothole, as a shadow, as a puddle or the like. Thus, it can in particular be determined if the object recognized on the road constitutes an obstacle for the motor vehicle. This information can be provided for the driver assistance systems of the motor vehicle.
In a further configuration, the at least one captured object is registered in a digital environmental map describing the environmental region. For example, the relative position of the object to the motor vehicle can be registered in the environmental map. Therein, it can be provided that information on the object, which classifies it, is additionally recorded in the digital environmental map. This digital environmental map can then be used by the different driver assistance systems of the motor vehicle. For example, the motor vehicle can be maneuvered by a driver assistance system based on the digital environmental map such that a collision with the object is prevented. One possible form of such an entry is sketched below.
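The structure and the field names of this sketch are assumptions made for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class MapEntry:
    """Illustrative entry of the digital environmental map: the relative
    position of the object to the motor vehicle plus its classification."""
    rel_x_m: float              # longitudinal offset to the motor vehicle
    rel_y_m: float              # lateral offset to the motor vehicle
    label: str                  # e.g. "pedestrian", "roadway marking", "pothole"
    is_obstacle: bool = False   # whether the object constitutes an obstacle

@dataclass
class EnvironmentalMap:
    """Sketch of the digital environmental map describing the environmental region."""
    entries: list[MapEntry] = field(default_factory=list)

    def register(self, entry: MapEntry) -> None:
        # driver assistance systems can later query these entries, for
        # example to maneuver such that a collision with the object is prevented
        self.entries.append(entry)
```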
A camera system according to the invention for a motor vehicle is adapted to perform a method according to the invention. The camera system can for example include a plurality of cameras, which are disposed distributed on the motor vehicle. In addition, the camera system can have a control device, which is for example constituted by an electronic control unit of the motor vehicle, by which the images of the cameras can be evaluated.
A motor vehicle according to the invention includes a camera system according to the invention. The motor vehicle is in particular formed as a passenger car.
The preferred embodiments presented with respect to the method according to the invention and the advantages thereof apply in the same manner to the camera system according to the invention as well as to the motor vehicle according to the invention. Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or alone, without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention which are not explicitly shown in the figures and explained, but arise from and can be generated by separate feature combinations from the explained implementations.
Now, the invention is explained in more detail based on preferred embodiments as well as with reference to the attached drawings.
The figures show:
Fig. 1 in schematic illustration a motor vehicle according to an embodiment of the present invention, which includes a camera system with a plurality of cameras;
Fig. 2 a schematic illustration of a module architecture for capturing at least one object in an environmental region of the motor vehicle;
Fig. 3 a schematic flow diagram of a method for capturing the at least one object in the environmental region of the motor vehicle according to a further embodiment;
Fig. 4 a diagram showing the number of image blocks, which are associated with respective cost values;
Fig. 5 an image divided into multiple image blocks, wherein edge blocks are marked in a road area associated with a road;
Fig. 6 a further image, in which edge blocks are marked in the road area;
Fig. 7 a method for capturing the at least one object in the environmental region of the motor vehicle according to a further embodiment;
Fig. 8 the image according to Fig. 5, in which the number of edge blocks is determined for a plurality of columns; and
Fig. 9 the image according to Fig. 8, wherein classified objects are marked in the image.
In the figures, identical and functionally identical elements are provided with the same reference characters.
Fig. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view. In the present case, the motor vehicle 1 is formed as a passenger car. The motor vehicle 1 includes a camera system 2. Objects 10 in an environmental region 8 of the motor vehicle 1 can be captured by the camera system 2. The camera system 2 in turn includes a control device 3, which can for example be constituted by an electronic control unit of the motor vehicle 1.
The camera system 2 includes at least one camera 4, which can for example be formed as a CCD camera or as a CMOS camera. In the present embodiment, the camera system 2 includes four cameras 4, which are disposed distributed on the motor vehicle 1.
Presently, one of the cameras 4 is disposed in a rear area 5 of the motor vehicle 1, one of the cameras 4 is disposed in a front area 7 of the motor vehicle 1 and the remaining two cameras 4 are disposed in the respective lateral areas 6, in particular in an area of the wing mirrors, of the motor vehicle 1. Presently, the number and arrangement of the cameras 4 of the camera system 2 are to be understood as purely exemplary.
The environmental region 8 of the motor vehicle 1 can be captured by the cameras 4. Preferably, the four cameras 4 are formed identical in construction. In particular, an image sequence or video data can be provided by the cameras 4, which describes the environmental region 8. These image sequences can then be transmitted from the respective cameras 4 to the control device 3. Hereto, the cameras 4 are connected to the control device 3 by means of corresponding data lines or a vehicle data bus. Presently, the data lines are not illustrated for the sake of clarity. Then, the individual images 9 captured by the cameras 4 can be processed by means of the control device 3. In particular, objects 10 can be recognized in the respective images 9.

Fig. 2 shows a schematic illustration of a module architecture, which serves for capturing the object 10. According to this module architecture, the control device 3 of the camera system 2 can for example be operated. Therein, the module architecture is formed such that computational power can be saved upon recognition of the object 10. The module architecture includes an object detector 11, by means of which the object 10 in the image 9 can be recognized. Moreover, a segmenting block 12 is provided, by means of which the image 9 can be segmented. The segmenting block 12 can have an accumulated block complexity, by means of which individual pixels of the image 9 can be examined with respect to their homogeneity or intensity. Further, it can be checked whether individual pixels are combined. Moreover, corresponding corners or edges can be recognized in the image 9. In addition, a further block 13 is provided, by means of which the object can be described. If the object 10 has been recognized in the image 9 by the object detector 11, this can be registered in a digital environmental map 14. The digital environmental map 14 describes the environmental region 8 of the motor vehicle 1. The relative position between the motor vehicle 1 and the object 10 can be recorded in the digital environmental map 14.
Fig. 3 shows a schematic flow diagram of a method for capturing an object 10 in the environmental region 8 of the motor vehicle 1. In a step S1, the method is started. Herein, the functionality of the object recognition can for example be provided by means of corresponding configuration parameters. The object recognition process begins with the initialization of these configuration parameters.
In a step S2, the required data for segmenting the image 9 is gathered. In particular, a confidence measure for performing the segmentation can be determined herein. In the segmentation, in particular a road area 15 is to be recognized in the image 9, which is associated with a road or a roadway in the environmental region 8. For recognizing the road area 15 in the segmentation, a texture-oriented method can for example be used.
In a step S3, the means required for performing the segmentation can be determined and initialized by the object detector 11. These means can for example be intermediate storages, data structures or the like. These means can serve for analyzing the confidence measure for the segmentation. After the segmentation, the contiguous areas in the image 9 can each be provided with a corresponding identification. For example, the identification or ID "road" can be assigned to the road area 15.
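Purely as an illustrative sketch of this step, contiguous areas of a binary segmentation result could each be provided with an identification as follows; the use of scipy.ndimage.label and the construction of the mask are assumptions made for this example.

```python
# Illustrative sketch: assign an identification to every contiguous
# area of a binary segmentation mask (True = pixel classified as road).
import numpy as np
from scipy import ndimage

road_mask = np.zeros((6, 8), dtype=bool)  # dummy segmentation result
road_mask[2:, 1:7] = True                 # one contiguous "road" area

# Label every contiguous area of the mask with a numeric ID.
labels, n_areas = ndimage.label(road_mask)

# Map the numeric IDs to string identifications such as "road".
identifications = {area_id: "road" for area_id in range(1, n_areas + 1)}
```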
In a step S4, a first stage of the object recognition is effected. Hereto, the image 9 is divided into a plurality of image blocks 16. Therein, the image blocks 16 can for example be disposed next to each other in multiple lines and rows. The image blocks 16 can each include at least one pixel and can in particular be selected equally sized. For each of the image blocks 16, an intensity value is determined, which describes the intensity of the respective image block 16. Further, it is checked if the intensity value of an image block 16 has a predetermined variation compared to an adjacent image block 16. If the intensity value of one of the image blocks 16 has this predetermined variation to an adjacent image block 16, the identification edge block 17 is associated with this image block 16 in a step S5. The identification edge block 17 indicates that this image block 16 contains a severe edge.
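A minimal Python sketch of this first stage is given below, assuming a greyscale image, the mean intensity as the intensity value of a block and a fixed threshold for the predetermined variation; these choices as well as the function names are illustrative assumptions.

```python
# Illustrative sketch of the first recognition stage (steps S4/S5).
import numpy as np

def block_intensities(image: np.ndarray, block: int) -> np.ndarray:
    # Mean intensity of each block x block tile of the greyscale image.
    h, w = image.shape
    tiles = image[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))

def edge_blocks(intensity: np.ndarray, threshold: float) -> np.ndarray:
    # Flag a block as an edge block when its intensity value differs
    # from a horizontally or vertically adjacent block by more than
    # `threshold` (the predetermined variation).
    dx = np.abs(np.diff(intensity, axis=1))
    dy = np.abs(np.diff(intensity, axis=0))
    flags = np.zeros(intensity.shape, dtype=bool)
    flags[:, :-1] |= dx > threshold
    flags[:, 1:] |= dx > threshold
    flags[:-1, :] |= dy > threshold
    flags[1:, :] |= dy > threshold
    return flags

img = np.random.rand(64, 96)                    # dummy greyscale image
flags = edge_blocks(block_intensities(img, 8), threshold=0.1)
```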
In a step S6, a second stage of the object recognition is performed. Hereto, a region of interest 18 is determined in the image 9. The region of interest 18 is divided into a plurality of columns 19. Subsequently, the number of image blocks 16 with the identification edge block 17 is determined for each of the columns 19. Depending on the number of edge blocks 17 per column 19, an object region 21 is then determined in the image 9.
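Under the same assumptions as above, this column-wise evaluation of the second stage could be sketched as follows; the limit value is an assumed parameter.

```python
# Illustrative sketch of the second recognition stage (step S6).
import numpy as np

def edge_count_per_column(flags: np.ndarray) -> np.ndarray:
    # Each entry is the number of edge blocks in one column of blocks.
    return flags.sum(axis=0)

def object_columns(flags: np.ndarray, limit: int) -> np.ndarray:
    # Columns whose edge-block count exceeds the limit value are
    # candidates for an object region.
    return edge_count_per_column(flags) > limit

flags = np.zeros((5, 8), dtype=bool)    # dummy edge-block grid
flags[1:4, 2] = flags[0:5, 6] = True    # two columns with severe edges
print(edge_count_per_column(flags))     # [0 0 3 0 0 0 5 0]
print(object_columns(flags, limit=2))   # True at columns 2 and 6
```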
In a step S7, a third stage of the object recognition is effected. Herein, the edge blocks 17 in the object region 21 are examined in more detail. Here, it can for example be checked if the edge blocks 17 have a predetermined pattern. Thus, it can be determined if the object 10 is on the road. In a step S8, the recognized object 10 or the recognized objects 10 are correspondingly classified and marked in the image 9. Subsequently, the method is terminated in a step S9.
Fig. 4 shows a diagram, based on which step S4 and the first stage of the object recognition, respectively, are explained in more detail. Therein, a plurality of cost values 22 is preset. They are presented in the left column of the diagram. In the present embodiment, 64 cost values 22 are preset. The cost values 22 describe the difference of the intensity value of an image block 16 to the intensity value of the adjacent image block 16. Therein, the cost value with the value of 0 corresponds to the case, in which the intensity values of adjacent image blocks 16 are identical. The cost value 22 with the value of 64 describes the case, in which the intensity values of adjacent image blocks 16 differ maximally. In the right column of the diagram, the respective number 23 of image blocks 16 is presented for each of the cost values 22.
Therein, starting from the cost value 22 with the lowest value, i.e. the cost value 22 with the value of 0, the respective numbers 23 of the image blocks 16 are added up. If the accumulated sum Acc reaches a predetermined portion of the total number of the image blocks 16, that cost value 22 is determined at which the accumulated sum Acc has reached this portion. This portion can be selected depending on the scene and can for example be 90 %. In the present example, the total number of the image blocks 16 can for example be 100. The predetermined portion of 90 % or the accumulated sum Acc of 90 is presently reached at the cost value 22 with the value of 3. Therein, the threshold value is determined such that the identification edge block 17 is assigned to all of the image blocks 16 having a cost value greater than or equal to 4.
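Expressed as a sketch, this accumulation scheme amounts to a threshold derived from the cumulative histogram of the cost values; the formulation below is an illustrative reading of the description, with the portion of 90 % as the default.

```python
# Illustrative sketch: derive the edge-block threshold from the
# accumulated sum Acc over the cost values 0..64.
import numpy as np

def cost_threshold(costs: np.ndarray, portion: float = 0.9,
                   max_cost: int = 64) -> int:
    # Number 23 of image blocks associated with each cost value 22.
    counts = np.bincount(costs.ravel(), minlength=max_cost + 1)
    acc = np.cumsum(counts)  # accumulated sum Acc, lowest cost first
    # First cost value at which Acc reaches the predetermined portion
    # of the total number of image blocks in the road area.
    cut = int(np.argmax(acc >= portion * costs.size))
    # The identification edge block is assigned to all image blocks
    # whose cost value is greater than or equal to the returned value.
    return cut + 1

costs = np.random.randint(0, 65, size=(30, 40))  # dummy cost grid
threshold = cost_threshold(costs)
```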
Fig. 5 shows an image 9, which was provided by one of the cameras 4. Within the region of interest 18, the individual image blocks 16 are apparent. Further, the image blocks 16 with the identification edge block 17 are apparent. On the one hand, they are disposed in the edge areas 24 of the road area 15. On the other hand, the edge blocks 17 are disposed in an area 25, in which the object 10 or a pedestrian 26 is located. Further, the edge blocks 17 are disposed in an area 27, which is associated with damage to the roadway surface.
In comparison hereto, Fig. 6 shows an image 9 of a further scene or environment. Here too, the road area 15 is apparent. In addition, the edge blocks 17 in the road area 15 are apparent, which presently are disposed in an area 28, which is associated with a roadway marking 29.
Fig. 7 shows a further flow diagram for explaining the method for recognizing the object 10 and in particular for explaining the step S4 of the method according to Fig. 3 in more detail. Here too, the method is started with step S1. In a step S2, the confidence measure for the segmentation of the image 9 and in particular for determining the road area 15 is gathered. In step S3, the required means for segmenting are determined and initialized. In a step S10, it is checked if all of the image blocks 16 have already been processed. If this is not the case, the method is continued with a step S11. Herein, it is checked if the image block 16 is associated with the road area 15. If this is not the case, the method is again continued with step S10. If the image block 16 is associated with the road area 15, the method is continued with a step S12. Herein, the cost values 22 are defined and the image block 16 is associated with one of the cost values 22.
In a step S13, it is checked if all of the cost values 22 have already been processed. If this is not the case, in a step S14, the accumulated sum Acc for the following cost value 22 is determined. In a step S15, it is checked if the accumulated sum Acc exceeds the predetermined portion. If this is satisfied, the threshold value is determined based on the current cost value 22 in a step S16. If the predetermined portion is not yet reached, the method is again continued with step S13. Finally, the method is terminated in a step S17.
Fig. 8 shows the image according to Fig. 5, wherein the region of interest 18 is divided into the plurality of columns 19. For each column 19, the number of edge blocks 17 in the column is determined. Based on the respective number of the edge blocks 17 per column 19, the object region 21 can then be defined in the region of interest 18. Thus, those columns 19 having a similar number of edge blocks 17 can for example be combined.
Presently, this is illustrated by the rectangles 20.
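One conceivable way to combine adjacent columns with a similar number of edge blocks into object regions is sketched below; the similarity tolerance and the grouping rule are assumptions made for this example.

```python
# Illustrative sketch: merge adjacent columns with similar edge-block
# counts into object regions (corresponding to the rectangles 20).
import numpy as np

def group_columns(counts: np.ndarray, limit: int, tol: int = 2):
    # Open a new region whenever a column exceeds the limit value and
    # its count differs from the preceding column by more than `tol`.
    regions, start = [], None
    for i, c in enumerate(counts):
        if c > limit:
            if start is None:
                start = i
            elif abs(int(c) - int(counts[i - 1])) > tol:
                regions.append((start, i - 1))
                start = i
        elif start is not None:
            regions.append((start, i - 1))
            start = None
    if start is not None:
        regions.append((start, len(counts) - 1))
    return regions  # list of (first_column, last_column) pairs

print(group_columns(np.array([0, 3, 4, 4, 0, 0, 5, 6, 5, 0]), limit=2))
# -> [(1, 3), (6, 8)]
```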
These object regions 21 can then be examined in more detail. For example, a rectangle can be defined in the area of the object regions 21 by examining the position of the edge blocks 17. Thus, it can for example be examined if the edge blocks 17 form a predetermined pattern. Further, the individual image blocks 16 of the object region 21 can be examined with respect to their brightness. Thus, it can for example be determined if it is a shadow or an area with increased illumination, for example as a result of solar radiation or the like. Further, it can be provided that the spatial extension of the object region 21 is examined. Thus, it can for example be differentiated if it is a pedestrian 26 or a roadway marking 29.
With pedestrians 26, it is usually the case that the object region 21 has relatively small spatial dimensions. Further, it is usually apparent here that the edge blocks 17 are present in adjacent columns 19. If the pedestrian 26 is too far away from the motor vehicle 1, the edge blocks 17 can only be found in a narrow object region 21. Even if the pedestrian 26 cannot be recognized herein, an area with severe edges can nevertheless be determined.
The roadway markings 29 can for example be recognized based on their spatial extension. Therein, an imaging error caused by the lens of the camera 4 can be taken into account. Further, the roadway markings 29 can be recognized by their brightness. With other objects 10, such as for example parked vehicles 30 or walls 31, usually only a low number of edge blocks 17 occurs.
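The heuristics described above could be condensed into a simple rule-based classifier along the following lines; all thresholds are invented for illustration and would have to be tuned against real scenes.

```python
# Illustrative sketch: heuristic classification of one object region,
# described by its width in columns, its mean brightness (0..1) and
# its edge-block count. All thresholds are assumptions.
def classify_region(width_cols: int, mean_brightness: float,
                    edge_count: int) -> str:
    if edge_count < 3:
        return "vehicle_or_wall"   # few edges: parked vehicle or wall
    if mean_brightness > 0.8:
        return "roadway_marking"   # bright, elongated structure
    if width_cols <= 3:
        return "pedestrian"        # narrow region of adjacent columns
    return "unknown"

print(classify_region(width_cols=2, mean_brightness=0.4, edge_count=9))
# -> "pedestrian"
```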
Fig. 9 shows the image according to Fig. 8, wherein the classified objects 10 are correspondingly marked in the image 9. Here, a parked vehicle 30 as well as the walls 31 are for example apparent. Thus, the objects 10 can be reliably recognized with the aid of the method. In particular, objects 10 can be recognized within the road area 15, which was initially completely associated with the road in the segmentation.

Claims
1. Method for capturing at least one object (10) in an environmental region (8) of a motor vehicle (1) including the steps of:
- providing at least one image (9) of the environmental region (8) by means of a camera (4),
- determining a road area (15) in the at least one image (9), which is associated with a road in the environmental region (8),
- dividing the road area (15) into a plurality of image blocks (16),
- determining a respective intensity value for the image blocks (16), which describes an intensity of at least one pixel of the image block (16),
- associating an identification edge block (17) with those of the image blocks (16), the respective intensity value of which has a predetermined variation to at least one adjacent of the image blocks (16), and
- capturing the at least one object (10) based on the image blocks (16) with the identification edge block (17).
2. Method according to claim 1,
characterized in that
for determining the road area (15), the at least one image (9) is segmented by means of a texture-oriented method and/or based on the intensity of the pixels of the image (9).
3. Method according to claim 1 or 2,
characterized in that
a plurality of cost values (22) is preset, the image blocks (16) are associated with one of the cost values (22) depending on a difference of their respective intensity value to the intensity value of the at least one adjacent image block (16), and the identification edge block is associated with those image blocks (16), the cost value of which exceeds a certain threshold value.
4. Method according to claim 3,
characterized in that
a number (23) of the image blocks (16), which are associated with the respective cost values (22), is determined and consecutively summed up starting from the lowest cost value (22), one of the cost values (22) is determined, at which the sum of the image blocks (16) reaches a predetermined portion with respect to a total number of the image blocks (16) in the road area (15), and the threshold value is determined based on the determined cost value (22).
5. Method according to any one of the preceding claims,
characterized in that
a region of interest (18) of the image (9) is divided into multiple columns (19), wherein a width of the respective columns (19) is determined such that they include at least one image block (16), and a number of the image blocks (16) with the identification edge block (17) contained in the column (19) is determined for each of the columns (19).
6. Method according to claim 5,
characterized in that
based on the columns (19), in which the number of the contained image blocks (16) with the identification edge block (17) exceeds a predetermined limit value, at least one object region (21) is determined within the at least one image (9).
7. Method according to claim 6,
characterized in that
a respective brightness of the image blocks (16) in the object region (21) and/or a respective position of the image blocks (16) with the identification edge block (17) in the object region (21) are determined and the at least one object (10) is recognized based on the determined brightness of the image blocks (16) and/or the position of the image blocks (16) with the identification edge block (17).
8. Method according to claim 6 or 7,
characterized in that
a spatial extension of the object region (21) in the at least one image (9) is determined and the at least one object (10) is captured based on the determined spatial extension of the object region (21).
9. Method according to any one of claims 6 to 8,
characterized in that
at least two images (9) are provided, the object region (21) is determined in each of the at least two images (9) and the at least one object (10) is captured based on a comparison of the object regions (21) in the at least two images (9).
10. Method according to claim 9,
characterized in that
the at least one object (10) is recognized as a static or as a moved object (10) based on the comparison of the object regions (21) in the at least two images (9).
11. Method according to any one of claims 6 to 10,
characterized in that
the object region (21) is determined within the road area (15).
12. Method according to any one of claims 6 to 11,
characterized in that
the at least one object (10) is classified as a pedestrian (26), as a vehicle (30), as a roadway marking (29) or as a wall (31).
13. Method according to any one of the preceding claims,
characterized in that
the at least one captured object (10) is registered in a digital environmental map (14) describing the environmental region (8).
14. Camera system (2) for a motor vehicle (1), which is adapted to perform a method according to any one of the preceding claims.
15. Motor vehicle (1) with a camera system (2) according to claim 14.
EP16753598.8A 2015-07-29 2016-07-27 Method for capturing an object on a road in the environment of a motor vehicle, camera system and motor vehicle using the same Withdrawn EP3329419A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102015112389.4A DE102015112389A1 (en) 2015-07-29 2015-07-29 Method for detecting at least one object on a road in a surrounding area of a motor vehicle, camera system and motor vehicle
PCT/EP2016/067907 WO2017017140A1 (en) 2015-07-29 2016-07-27 Method for capturing an object on a road in the environment of a motor vehicle, camera system and motor vehicle using the same

Publications (1)

Publication Number Publication Date
EP3329419A1 true EP3329419A1 (en) 2018-06-06

Family

ID=56738078

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16753598.8A Withdrawn EP3329419A1 (en) 2015-07-29 2016-07-27 Method for capturing an object on a road in the environment of a motor vehicle, camera system and motor vehicle using the same

Country Status (3)

Country Link
EP (1) EP3329419A1 (en)
DE (1) DE102015112389A1 (en)
WO (1) WO2017017140A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017113794A1 (en) * 2017-06-22 2018-12-27 Connaught Electronics Ltd. Classification of static and dynamic image segments in a driver assistance device of a motor vehicle
DE102018123250A1 (en) * 2018-09-21 2020-03-26 Connaught Electronics Ltd. Method and device for tracking a trailer

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1504276B1 (en) * 2002-05-03 2012-08-08 Donnelly Corporation Object detection system for vehicle
US7702425B2 (en) * 2004-06-07 2010-04-20 Ford Global Technologies Object classification system for a vehicle
US8184159B2 (en) * 2007-03-26 2012-05-22 Trw Automotive U.S. Llc Forward looking sensor system
US9020263B2 (en) * 2008-02-15 2015-04-28 Tivo Inc. Systems and methods for semantically classifying and extracting shots in video
US8670592B2 (en) * 2008-04-24 2014-03-11 GM Global Technology Operations LLC Clear path detection using segmentation-based method
JP5592308B2 (en) * 2011-05-19 2014-09-17 富士重工業株式会社 Environment recognition device

Also Published As

Publication number Publication date
DE102015112389A1 (en) 2017-02-02
WO2017017140A1 (en) 2017-02-02


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20180206

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RIN1 Information on inventor provided before grant (corrected)

Inventor name: SHIVAMURTHY, SWAROOP KAGGERE

Inventor name: HUGHES, CIARAN

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20190222

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20200709