US10442438B2 - Method and apparatus for detecting and assessing road reflections


Info

Publication number
US10442438B2
US10442438B2 · Application US15/572,010 (US201615572010A)
Authority
US
United States
Prior art keywords
road
camera
vehicle
point
images
Prior art date
Legal status
Active, expires
Application number
US15/572,010
Other versions
US20180141561A1 (en)
Inventor
Stefan Fritz
Bernd Hartmann
Manuel Amthor
Joachim Denzler
Current Assignee
Continental Teves AG and Co OHG
Original Assignee
Continental Teves AG and Co OHG
Priority date
Filing date
Publication date
Application filed by Continental Teves AG and Co OHG filed Critical Continental Teves AG and Co OHG
Assigned to CONTINENTAL TEVES AG & CO. OHG and FRIEDRICH-SCHILLER-UNIVERSITAET JENA. Assignors: AMTHOR, Manuel; DENZLER, Joachim; FRITZ, Stefan; HARTMANN, Bernd.
Rights subsequently transferred to CONTINENTAL TEVES AG & CO. OHG.
Publication of US20180141561A1
Application granted
Publication of US10442438B2
Legal status: Active
Expiration adjusted

Classifications

    • G06V 20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 — Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06V 10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/464 — Salient features using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G06V 10/60 — Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06V 10/247 — Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects
    • G06T 7/55 — Depth or shape recovery from multiple images
    • G06T 2207/30252 — Vehicle exterior; vicinity of vehicle
    • B60W 40/06 — Estimation or calculation of non-directly measurable driving parameters related to ambient conditions: road conditions
    • B60W 2552/00 — Input parameters relating to infrastructure
    • B60W 2550/14
    • G06K 9/00791; G06K 9/00798; G06K 9/4661; G06K 9/4676; G06K 2009/363

Definitions

  • a digital camera is preferably provided, with which the at least two appearances can be directly recorded digitally and evaluated by means of digital image processing algorithms.
  • A mono camera or a stereo camera can be used to produce the images since, depending on the embodiment, depth information from the image can also be used for the algorithm. It is preferably envisaged in this connection that at least two digital images of the at least one point of the road are produced by means of the camera, wherein the images are produced from different recording perspectives with a stereo camera.
  • One particular advantage of the method according to the invention is that specular reflections can be reliably distinguished from shadows (diffuse reflections), as they show a different movement behavior in the image.
  • Diffuse and specular reflection are distinguished on the basis of digital image processing by differentiating between appearances fixed to the road and appearances independent of the road that are caused by a relative movement of the observer; this makes a reliable separation of shadows and infrastructure reflected on the road possible.
  • One advantageous embodiment of the method according to the invention comprises the additional method steps of communicating the item of road condition information to a driver assistance system of a vehicle and adjusting times for issuing an alert or for intervention by means of the driver assistance system on the basis of the item of road condition information.
  • the item of road condition information thus serves as an input for an accident-preventing driver assistance system of a vehicle, in order to be able to particularly effectively adjust times for issuing an alert or for intervention of the driver assistance system.
  • In ADAS (advanced driver assistance systems), the item of road condition information serves as important information regarding the driving surroundings during automation and is preferably fed to a corresponding system control device for autonomous driving.
  • the item of road condition information is incorporated into the function of an automated vehicle and the driving strategy as well as the establishment of handover points between an automated system and the driver are adjusted on the basis of the item of road condition information.
  • a further advantageous embodiment comprises the additional method steps of producing two digital images of a plurality of points of the road, which preferably form a trapezoidal region, from different perspectives by means of the camera and transforming the preferably trapezoidal region by means of an estimated homography into a rectangular top view.
  • In order to detect road reflections, a region comprising a plurality of points of the road, which region represents the road, is used in the images of the camera.
  • the region can also be a segmented detail.
  • a region in the form of a trapezoid is particularly preferable, wherein the trapezoidal region is transformed with the aid of an estimated homography into a rectangular top view (“bird's eye view”).
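The trapezoid-to-top-view transformation described above can be sketched as follows. This is a minimal illustration that estimates the homography with a direct linear transform; the corner coordinates and output size are invented for the example, not taken from the patent:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping four src points to four dst
    points (direct linear transform, exact for four correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply homography H to a single (x, y) point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Trapezoidal road region (illustrative corner coordinates, ordered
# top-left, top-right, bottom-right, bottom-left) mapped to a 200x400
# rectangular "bird's eye view".
trapezoid = [(270, 300), (370, 300), (600, 470), (40, 470)]
rectangle = [(0, 0), (200, 0), (200, 400), (0, 400)]
H = homography(trapezoid, rectangle)
```

In practice the warp would be applied to every pixel of the region (e.g. with an image-warping routine); here only the point mapping is shown.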
  • the camera is provided in a vehicle.
  • the first image is produced in a first position of the vehicle from a first recording perspective.
  • the vehicle is moved, e.g. driven, into a second position, wherein the second position differs from the first position, i.e. the first and the second positions do not coincide.
  • the at least second image is then produced in the at least second position of the vehicle from an at least second recording perspective.
  • the at least two images of the at least two different recording perspectives are subsequently transformed into a respective top view.
  • the at least two top views produced are then registered with digital image processing means, incorporating driving dynamics parameters of the vehicle, and the appearances of the at least one point of the road in the at least two registered top views are compared.
  • the registration can be carried out according to this embodiment example by a simple translation and rotation, as the scene has been transformed into a top view.
  • the compensation can preferably be carried out or supported by incorporating individual driving dynamics parameters, e.g. vehicle speed, steering angle, etc. or entire models, e.g. ground plane model and driving dynamics models.
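The registration of two top views by a simple translation and rotation derived from driving dynamics parameters can be sketched roughly as follows; the function name, its parameters (speed, yaw rate, pixels per metre), and the sign conventions are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def register_top_view(prev_top, speed_mps, yaw_rate_rps, dt, px_per_m):
    """Warp the previous top view towards the current one using only
    driving dynamics: forward translation from vehicle speed, rotation
    from yaw rate. Sign conventions depend on the top-view layout and
    are illustrative here."""
    dy = speed_mps * dt * px_per_m      # forward motion in pixels
    theta = yaw_rate_rps * dt           # rotation between the two views
    h, w = prev_top.shape[:2]
    cx, cy = w / 2.0, h - 1.0           # rotate about the vehicle position
    c, s = np.cos(theta), np.sin(theta)
    out = np.zeros_like(prev_top)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each current pixel, look up the previous pixel.
    x0 = c * (xs - cx) + s * (ys - cy + dy) + cx
    y0 = -s * (xs - cx) + c * (ys - cy + dy) + cy
    valid = (x0 >= 0) & (x0 < w) & (y0 >= 0) & (y0 < h)
    out[ys[valid], xs[valid]] = prev_top[y0[valid].astype(int),
                                         x0[valid].astype(int)]
    return out
```

With the two views registered this way, the appearance of the same road point can be compared directly between them.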
  • Features of the at least one point of the road or of the region, which capture the change in the appearance in the at least two registered top views, are advantageously extracted. This is advantageously done after assigning the individual points or regions to a sequence.
  • The extracting can be done in different ways, e.g. by means of the variance or the value progression in the form of a vector.
  • The individual features form a feature vector which is subsequently assigned to at least one class by a classification system (classifier), preferably the classes “wet/icy” and “dry/remainder”.
  • A classifier in this case is a mapping of a feature descriptor onto a discrete number that represents the classes to be detected.
  • a random decision forest is preferably used as a classifier.
  • Decision trees are hierarchical classifiers which break down the classification problem iteratively. Starting at the root, a path towards a leaf node where the final classification decision is made is followed based on previous decisions. Due to the high learning complexity, very simple classifiers, so-called decision stumps, which separate the input parameter space orthogonally to a coordinate axis, are preferred for the inner nodes.
  • Decision forests are collections of decision trees which contain randomized elements preferably at two points in the training of the trees. First, every tree is trained with a random selection of training data, and second, only one random selection of permissible dimensions is used for each binary decision. Class histograms are stored in the leaf nodes which allow a maximum likelihood estimation with respect to the feature vectors that reach the leaf node during the training. Class histograms store the frequency with which a feature descriptor of a specific road condition reaches the respective leaf node while traveling through the decision tree. As a result, each class can preferably be assigned a probability that is calculated from the class histograms.
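As an illustration of such a random decision forest, the following sketch uses scikit-learn's `RandomForestClassifier`, in which each tree is trained on a bootstrap sample and a random subset of feature dimensions per split, and whose leaf class histograms back the probabilities returned by `predict_proba`. The variance-based features and the class labels are synthetic stand-ins, not the patent's training data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic feature vectors: per-point appearance variance over the image
# sequence. Diffuse reflection (dry) -> low variance; specular reflection
# (wet/icy) -> high variance. The feature design is purely illustrative.
dry = rng.normal(loc=0.05, scale=0.02, size=(200, 8))
wet = rng.normal(loc=0.60, scale=0.20, size=(200, 8))
X = np.vstack([dry, wet])
y = np.array([0] * 200 + [1] * 200)   # 0 = "dry/remainder", 1 = "wet/icy"

# Random decision forest: bootstrap sampling per tree, random feature
# subset per split ("sqrt" of the dimensions), class histograms in leaves.
forest = RandomForestClassifier(n_estimators=50, max_features="sqrt",
                                random_state=0)
forest.fit(X, y)

# Class probabilities for a high-variance (specular-looking) sample.
proba = forest.predict_proba(np.full((1, 8), 0.65))
```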
  • The most probable class from the class histogram is preferably used as the current condition; other methods may also be used to transfer the information from the decision trees into a decision about the presence of reflection.
  • An optimization step preferably follows this decision per input image.
  • This optimization can take the temporal context or further information which is provided by the vehicle into account.
  • the temporal context is preferably taken into account by using the most frequent class from a previous time period or by establishing the most frequent class by means of a so-called hysteresis threshold value method.
  • the hysteresis threshold value method uses threshold values to control the change from one road condition into another. A change is made only when the probability of the new condition is high enough and the probability of the old condition is accordingly low.
  • Once the extracting described above has been effected for the individual points or regions, it is additionally possible to extract features for an entire image section of one of the images, in particular a transformed image section.
  • Various calculations are conceivable for this such as, for example, the concatenation of the individual point features with any dimension reduction measures (e.g. “principal component analysis”), description with the aid of statistical moments or even a “bag of visual words” approach, during which the occurrence of specific prototype values or value tuples (e.g. SIFT, HOG, LBPs etc.) is detected on the basis of a histogram.
  • Road reflections are particularly preferably assessed using the effects described above by means of an approximate approach, by means of which the robustness of the method can be increased, in particular with regard to image registration. At the same time, running times can be reduced, which is essential for the automotive sector.
  • The at least two images produced are averaged in order to obtain an average image, and an absolute or quadratic difference is calculated between each pixel in the average image and the associated column mean.
  • a basic assumption of this embodiment is that a region moves through the entire image area. In the process, a specific region per se is not observed but the path traveled by it. Therefore, more than two images are particularly preferably produced.
  • the individual images are averaged from the sequence in order to obtain an average image.
  • it is also possible to calculate a moving average. The absolute difference or the quadratic difference is subsequently calculated between each pixel in the average image and the associated column mean.
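The averaging and column-mean difference just described can be sketched as follows; the frame sizes and intensity values are synthetic, and the highlight pattern merely mimics a specular reflection that stays with the observer rather than with the road:

```python
import numpy as np

def column_mean_deviation(frames):
    """Approximate reflection cue: average the top-view frames of a
    straight-line drive, then measure how far each pixel of the average
    image deviates from its column mean. Diffuse (dry) roads stay close
    to the column mean; specular reflections deviate strongly."""
    avg = np.mean(np.asarray(frames, dtype=float), axis=0)   # average image
    col_mean = avg.mean(axis=0, keepdims=True)               # per-column mean
    return np.abs(avg - col_mean)                            # absolute difference

# Synthetic 4-frame sequence: a uniform (diffuse) road vs. one with a
# bright patch that stays fixed in the image (a specular highlight
# following the observer instead of the road surface).
diffuse = [np.full((40, 30), 100.0) for _ in range(4)]
specular = [np.full((40, 30), 100.0) for _ in range(4)]
for f in specular:
    f[10:14, 12:16] = 250.0

d_diffuse = column_mean_deviation(diffuse)
d_specular = column_mean_deviation(specular)
```

The diffuse sequence produces zero deviation everywhere, while the fixed highlight yields a strong deviation from its column means; only averages and subtractions are needed, which is what keeps the running time low.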
  • a “bag of visual words” approach is preferably applied, in which the occurrence of specific prototype values or value tuples is captured on the basis of a histogram.
  • the resulting image can be used in order to assess the presence of specular reflections such as, for example, by means of statistical moments or in a particular advantageous form by means of local features (preferably a “local binary pattern”) in a “bag of visual words” approach.
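A minimal sketch of such local binary patterns feeding a “bag of visual words” histogram might look as follows; the 8-neighbour LBP variant, the 256-word codebook, and the region contents are illustrative choices rather than the patent's specific features:

```python
import numpy as np

def lbp_image(gray):
    """8-neighbour local binary pattern for every interior pixel: each
    neighbour brighter than (or equal to) the centre contributes one bit."""
    c = gray[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy,
                  1 + dx:gray.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def bow_histogram(codes, n_words=256):
    """'Bag of visual words' over the LBP codes: the normalized histogram
    of pattern occurrences is the feature descriptor of the region."""
    hist = np.bincount(codes.ravel(), minlength=n_words).astype(float)
    return hist / hist.sum()

# A small synthetic grey-value region standing in for the difference image.
region = (np.arange(100).reshape(10, 10) % 7).astype(np.uint8)
hist = bow_histogram(lbp_image(region))
```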
  • the basis of this approximate approach is the assumption that regions passed on the road are very similar to the column mean during movement in a straight line in the case of diffuse reflection, whereas in the case of specular reflections the change in the appearance of the regions passed has considerable differences from the column means.
  • This method is substantially based—as mentioned above—on the assumption that the vehicle is moving in a straight line.
  • During cornering, the viewed image region can be adjusted accordingly by rotation or shearing, in order to ensure that the effects still operate column by column.
  • the individual regions are not directly tracked, so to speak, in order to assess their change in appearance, but the path traveled by them (image columns) is analyzed.
  • One advantage of this method is the robustness with respect to non-registered vehicle movements (pitching/rocking), which supplies error-free estimates in a particularly reliable manner.
  • Another advantage is the required computing time which is greatly reduced in contrast to the first method. The calculations are in this case limited to calculating the average and some subtractions.
  • camera parameters are preferably incorporated into the assessment of the change in the appearances.
  • the robustness of the method can be increased.
  • the ever-changing exposure time which causes changes in the appearance of the regions in the sequence (e.g. changes in brightness) and which can negatively affect the detection of reflections, is preferably taken into account.
  • the apparatus is in this case set up to assess differences in the appearances of the at least one point of the road using digital image processing algorithms and, as a result, detect diffuse reflections and specular reflections of the road and determine an item of road condition information on the basis of the detected reflection.
  • the vehicle according to a further aspect of the invention comprises the aforementioned apparatus according to the invention.
  • FIGS. 1 a and b show a schematic representation of an embodiment example of an apparatus according to the invention during the execution of an embodiment example of the method according to the invention.
  • the apparatus 1 according to the invention shown by FIGS. 1 a and 1 b comprises a digital camera 2 which is set up to record at least two digital images of a point 3 of the road from different recording perspectives, wherein the different recording perspectives are each shown by two different positions A and B of the respective camera 2 .
  • the camera 2 is arranged in a vehicle (not shown), and is in fact located behind the windshield thereof, so that the area in front of the vehicle can be captured in the way the driver of the vehicle perceives it.
  • By means of a travel movement of the vehicle, said vehicle is moved from a first position into a second position.
  • In the first position, in which the camera 2 covers the recording perspective A shown on the right in FIGS. 1 a and 1 b respectively, a first image of the point 3 of the road is recorded in each case.
  • Then the vehicle is moved into the second position, in which the recording perspective of the camera 2 is changed in such a manner that the recording perspective B shown on the left in FIGS. 1 a and 1 b respectively is covered, from which a second image of the point 3 of the road is recorded in each case.
  • As can be seen from FIG. 1 a, the image of the point 3 of the road does not change during the change of the recording perspective from A to B, because an incoming light beam 4 is reflected equally in all directions by a dry road surface 5.
  • The apparatus 1 compares the first and the second image with each other. Using digital image processing algorithms, the apparatus 1 detects that the first and the second image do not differ, or differ only to such a small extent, that a diffuse reflection must exist. Due to the detected diffuse reflection, the apparatus 1 determines an item of road condition information which includes the fact that the road surface 5 is dry. This information is transmitted to a driver assistance system (not shown).
  • the image of the point 3 of the road changes, as can be seen from FIG. 1 b , during the change of the recording perspective from A to B, because an incoming light ray 6 is only reflected in a particular direction by an icy or wet road surface 7 .
  • The apparatus 1 again compares the first and the second image with each other. Using digital image processing algorithms, the apparatus detects that the first and the second image differ so greatly from each other that a specular reflection must exist. Due to the detected specular reflection, the apparatus determines an item of road condition information which includes the fact that the road surface is wet or icy. This information is transmitted to a driver assistance system (not shown), which adjusts the times for issuing an alert or for intervention to the wet or icy road surface.
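The comparison logic of FIGS. 1 a and 1 b can be condensed into a small sketch; the mean-absolute-difference measure and the threshold value are illustrative assumptions, not the patent's specific criterion:

```python
import numpy as np

def classify_reflection(img_a, img_b, threshold=20.0):
    """Compare a registered road point/region between two recording
    perspectives. A small appearance change indicates diffuse reflection
    (dry road); a large change indicates specular reflection (wet/icy).
    The threshold is illustrative."""
    change = np.mean(np.abs(img_a.astype(float) - img_b.astype(float)))
    return "wet/icy (specular)" if change > threshold else "dry (diffuse)"

# Synthetic registered patches of the same road point from perspectives A and B.
patch_a = np.full((8, 8), 120.0)
verdict_small = classify_reflection(patch_a, patch_a + 2.0)    # barely changes
verdict_large = classify_reflection(patch_a, patch_a + 90.0)   # changes strongly
```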

Abstract

In a method for detecting and assessing reflections on a road (7), a camera (2) produces at least two digital images of at least one point (3) of the road, respectively from different recording perspectives (A, B) of the camera (2). Diffuse reflection and specular reflection of the road (7) are then detected by assessing differences in the appearances of the point (3) of the road in the at least two digital images using digital image processing algorithms. Road reflections are preferably assessed using an approximative approach. An item of road condition information is determined based on the detected reflection, preferably an item of road condition information which states whether the road is dry, wet, snow-covered or icy. Also provided are an apparatus (1) for carrying out the above-mentioned method, and a vehicle having such an apparatus.

Description

FIELD OF THE INVENTION
The invention relates to a method for detecting and assessing reflections on a road. The invention also relates to an apparatus for carrying out the above-mentioned method and to a vehicle having such an apparatus.
BACKGROUND OF THE INVENTION
Technological progress in the field of optical image acquisition allows the use of camera-based driver assistance systems which are located behind the windshield and capture the area in front of the vehicle in the way the driver perceives it. The functionality of these systems ranges from automatic headlights to the detection and display of speed limits, lane departure warnings, and imminent collision warnings.
Starting from just capturing the area in front of the vehicle to a full 360° panoramic view, cameras can now be found in various applications and different functions for driver assistance systems in modern vehicles. It is the primary task of digital camera image processing as a standalone function or in conjunction with radar or lidar sensors to detect, classify, and track objects in the image section. Classic objects typically include various vehicles such as cars, trucks, two-wheel vehicles, or pedestrians. In addition, cameras detect traffic signs, lane markings, guardrails, free spaces, or other generic objects.
Automatic learning and detection of object categories and their instances is one of the most important tasks of digital image processing and represents the current state of the art. Due to the methods which are now very advanced and which can perform these tasks almost as well as a person, the focus has now shifted from a coarse localization to a precise localization of the objects.
Modern driver assistance systems use different sensors including video cameras to capture the area in front of the vehicle as accurately and robustly as possible. This environmental information, together with driving dynamics information from the vehicle (e.g. from inertia sensors), provides a good impression of the current driving state of the vehicle and the entire driving situation. This information can be used to derive the criticality of driving situations and to initiate the respective driver information/alerts or driving dynamic interventions through the brake and steering system.
However, since the available friction coefficient or road condition is not provided or cannot be designated in driver assistance systems, the times for issuing an alert or for intervention are in principle determined based on a dry road with a high adhesion coefficient between the tire and the road surface. This results in the problem that accident-preventing or at least impact-weakening systems warn the driver or intervene so late that accidents are prevented or accident impacts acceptably weakened only if the road is really dry. If, however, the road provides less adhesion due to moisture, snow, or even ice, an accident can no longer be prevented and the reduction of the impact of the accident does not have the desired effect.
SUMMARY OF THE INVENTION
It can therefore be the object of the present invention to provide a method and an apparatus of the type indicated above, with which the road condition or even the available friction coefficient of the road can be established so that driver alerts as well as system interventions can accordingly be effected in a more targeted manner and the effectiveness of accident-preventing driver assistance systems can be increased.
The object is achieved by the subject matter of the independent claims. Preferred embodiments are the subject matter of the subordinate claims.
The method according to a first aspect of the invention is used for detecting and assessing reflections of at least one point on a road. According to one method step, a camera is provided and is used to produce at least two digital images of the at least one point of the road, wherein the images are produced from different recording perspectives of the camera. Diffuse reflection and specular reflection of the road are then distinguished by assessing differences in the appearances of the at least one point of the road in the at least two digital images using digital image processing algorithms. An item of road condition information, in particular an item of road condition information which describes the friction coefficient of the road or which states whether the road is dry, wet or icy, is determined on the basis of the detected reflection.
The invention takes advantage of the fact that reflections can generally be divided into three categories and that each gives rise to different visual effects in the event of a change in the viewing angle or the perspective. In this case, a distinction is made between diffuse, glossy and specular reflection, wherein in the present invention the difference between diffuse reflection, which is an indicator of a dry road, and specular reflection, which is an indicator of a wet and/or icy road, is particularly of interest. In this way, the method according to the invention makes it possible to distinguish between dry and wet/icy roads.
The method according to the invention and the apparatus according to the invention use digital image processing algorithms with the aim of robustly detecting road reflections, in order to detect moisture and ice in particular. In this case, the method according to the invention allows a conclusion to be drawn about the current road condition from the detection and assessment of the reflections of even a single point of the road, which point represents the road: using digital image processing algorithms, specific features which allow a conclusion to be drawn about the current road condition are sought in the images of this point produced by the camera from two different perspectives.
The method is preferably carried out in a sufficiently illuminated scene, which makes it possible to produce or record usable images. A prerequisite for the method is a change in the perspective within an image sequence of at least two images. In the case of diffuse reflection (an indicator of a dry road), a change in the viewing angle does not have any visual effect on a fixed point on the road, since the light is reflected equally in all directions; the appearance does not change for the observer when the perspective changes. In contrast thereto, in the case of specular reflection (an indicator of a wet and/or icy road), the light is not reflected back in a scattered manner, the consequence of which is a considerable change in the appearance of a fixed point on the road when the viewing angle changes: after the change in perspective, the reflections at a specific point on the road no longer reach the observer. In order to exploit this effect, it is basically necessary to track individual points, or all points or regions, in the image over a sequence of at least two images and to assess the change in their appearance.
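The distinction just described can be illustrated with a minimal sketch; the grey values and the decision threshold below are hypothetical illustrations, not values taken from the method:

```python
def classify_reflection(intensity_a, intensity_b, threshold=30):
    """Classify the reflection at one tracked road point from two views.

    intensity_a, intensity_b: grey values (0..255) of the same fixed road
    point seen from two different recording perspectives.  A small change
    indicates diffuse reflection (dry road); a large change indicates
    specular reflection (wet and/or icy road).  The threshold of 30 is an
    arbitrary illustrative choice.
    """
    return "specular" if abs(intensity_a - intensity_b) > threshold else "diffuse"

# Dry asphalt scatters light equally in all directions, so the grey value
# of the tracked point barely changes between perspectives.
print(classify_reflection(120, 124))  # diffuse
# A wet patch mirrors a bright light source from one perspective only.
print(classify_reflection(120, 230))  # specular
```

A real implementation would, of course, track many points over longer sequences and use learned features rather than a fixed threshold.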
The method according to the invention is preferably used in a vehicle. The camera can, in particular, be provided inside the vehicle, preferably behind the windshield, so that the area in front of the vehicle can be captured in the way the driver of the vehicle perceives it. The images can in this case be produced from two different perspectives in particular by a travel movement of the vehicle.
A digital camera is preferably provided, with which the at least two images can be recorded directly in digital form and evaluated by means of digital image processing algorithms. In particular, a mono camera or a stereo camera can be used to produce the images since, depending on the characteristic, depth information from the images can also be used for the algorithm. It is preferably envisaged in connection with this that the at least two digital images of the at least one point of the road are produced by means of a stereo camera, wherein the images are produced from the different recording perspectives of the imaging elements of the stereo camera.
One particular advantage of the method according to the invention is that specular reflections can be reliably distinguished from shadows (diffuse reflections), as they show a different movement behavior in the image.
It can also be advantageously envisaged that diffuse and specular reflection are distinguished on the basis of digital image processing by differentiating between appearances that are fixed to the road and appearances independent of the road which are caused by a relative movement of the observer; a reliable separation of shadows and infrastructure reflected on the road is thereby made possible.
One advantageous embodiment of the method according to the invention comprises the additional method steps of communicating the item of road condition information to a driver assistance system of a vehicle and adjusting times for issuing an alert or for intervention by means of the driver assistance system on the basis of the item of road condition information. The item of road condition information thus serves as an input for an accident-preventing driver assistance system of a vehicle, in order to be able to particularly effectively adjust times for issuing an alert or for intervention of the driver assistance system. The effectiveness of accident-preventing measures by such so-called advanced driver assistance systems (ADAS) can, as a result, be significantly increased.
Furthermore, it is also advantageously envisaged that the item of road condition information serves as important information regarding the driving surroundings during automated driving and is preferably fed to a corresponding system control device for autonomous driving. In this sense, it is envisaged according to another advantageous embodiment that the item of road condition information is incorporated into the function of an automated vehicle, and that the driving strategy as well as the establishment of handover points between the automated system and the driver are adjusted on the basis of the item of road condition information.
A further advantageous embodiment comprises the additional method steps of producing two digital images of a plurality of points of the road, which preferably form a trapezoidal region, from different perspectives by means of the camera and transforming the preferably trapezoidal region by means of an estimated homography into a rectangular top view. According to this embodiment, a region which comprises a plurality of points of the road is used in the images of the camera, which region represents the road, in order to detect road reflections. Depending on the running time and accuracy requirements the region can also be a segmented detail. However, a region in the form of a trapezoid is particularly preferable, wherein the trapezoidal region is transformed with the aid of an estimated homography into a rectangular top view (“bird's eye view”). Features which are particularly suitable for capturing the different appearance in this region on the basis of the presence of road reflections can be extracted with the aid of this transformation.
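The transformation of the trapezoidal road region into a rectangular top view can be sketched with a plain direct linear transform estimated from the four corner correspondences; the corner coordinates and the 200x400 top-view size below are hypothetical illustrations:

```python
def solve(A, b):
    """Solve the linear system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def homography(src, dst):
    """Estimate the 3x3 homography mapping four src points to four dst
    points (direct linear transform with the last entry fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, x, y):
    """Apply the homography to one image point."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Trapezoidal road region in the camera image (hypothetical pixel
# coordinates), mapped onto a rectangular 200x400 "bird's eye" view.
trapezoid = [(300, 400), (500, 400), (700, 600), (100, 600)]
rectangle = [(0, 0), (200, 0), (200, 400), (0, 400)]
H = homography(trapezoid, rectangle)
```

In practice the homography would be derived from the calibrated camera geometry; the four-point estimate here merely illustrates the principle.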
According to a particularly preferred embodiment, the camera is provided in a vehicle. The first image is produced in a first position of the vehicle from a first recording perspective. The vehicle is moved, e.g. driven, into a second position, wherein the second position differs from the first position, i.e. the first and the second positions do not coincide. The at least second image is then produced in the at least second position of the vehicle from an at least second recording perspective. The at least two images of the at least two different recording perspectives are subsequently transformed into a respective top view. The at least two top views produced are then registered with digital image processing means, incorporating driving dynamics parameters of the vehicle, and the appearances of the at least one point of the road in the at least two registered top views are compared. The registration can be carried out according to this embodiment example by a simple translation and rotation, as the scene has been transformed into a top view. The compensation can preferably be carried out or supported by incorporating individual driving dynamics parameters, e.g. vehicle speed, steering angle, etc. or entire models, e.g. ground plane model and driving dynamics models. The advantage of using this additional information is especially demonstrated in the case of a homogeneous or highly reflective road, in which case the vehicle movement can be misinterpreted on the basis of just image processing.
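The registration of two top views by translation and rotation, supported by driving dynamics parameters, can be sketched as follows; this is a simplified planar model with hypothetical parameter names, whereas a production system would use the full ground plane and driving dynamics models mentioned above:

```python
import math

def ego_motion_transform(speed_mps, yaw_rate_rps, dt):
    """Predict the translation and rotation between two top views from
    driving dynamics parameters (sketch; assumes a planar ground and
    metric top-view coordinates with the x axis pointing forward)."""
    dyaw = yaw_rate_rps * dt
    dx = speed_mps * dt * math.cos(dyaw / 2)  # forward displacement
    dy = speed_mps * dt * math.sin(dyaw / 2)  # lateral displacement
    return dx, dy, dyaw

def register_point(x, y, dx, dy, dyaw):
    """Map a road point from the first top view into the second by
    compensating the ego motion (translation, then rotation)."""
    c, s = math.cos(-dyaw), math.sin(-dyaw)
    xr, yr = x - dx, y - dy
    return c * xr - s * yr, s * xr + c * yr

# Straight-line travel at 20 m/s for 0.1 s moves every road point
# 2 m closer to the camera in the registered top view.
dx, dy, dyaw = ego_motion_transform(20.0, 0.0, 0.1)
print(register_point(10.0, 0.0, dx, dy, dyaw))  # (8.0, 0.0)
```

Using these vehicle parameters stabilizes the registration precisely in the homogeneous or highly reflective scenes where purely image-based estimation is ambiguous.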
Furthermore, features of the at least one point of the road or of the region, which features capture the change in the appearance in the at least two registered top views, are advantageously extracted. This is advantageously envisaged after assigning the individual points or regions to a sequence. The extraction can be effected in different ways, e.g. by means of the variance of the appearance values or by means of their value progression in the form of a vector.
The individual features form a feature vector which is subsequently assigned to at least one class by a classification system (classifier). The classes are preferably “wet/icy” and “dry/remainder”. A classifier in this case is a mapping of a feature descriptor on a discrete number that represents the classes to be detected.
A random decision forest is preferably used as the classifier. Decision trees are hierarchical classifiers which break down the classification problem iteratively. Starting at the root, a path is followed towards a leaf node, where the final classification decision is made; each step along the path depends on the preceding decisions. Due to the high learning complexity, very simple classifiers, so-called decision stumps, which separate the input parameter space orthogonally to a coordinate axis, are preferred for the inner nodes.
Decision forests are collections of decision trees which contain randomized elements preferably at two points in the training of the trees. First, every tree is trained with a random selection of training data, and second, only one random selection of permissible dimensions is used for each binary decision. Class histograms are stored in the leaf nodes which allow a maximum likelihood estimation with respect to the feature vectors that reach the leaf node during the training. Class histograms store the frequency with which a feature descriptor of a specific road condition reaches the respective leaf node while traveling through the decision tree. As a result, each class can preferably be assigned a probability that is calculated from the class histograms.
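A minimal random decision forest of the kind described, with decision stumps at the inner nodes and class histograms at the leaves, could look as follows. The one-dimensional toy feature, the training values and the misclassification-count split criterion are illustrative assumptions chosen for brevity:

```python
import random
from collections import Counter

def train_stump(samples, dims):
    """Find the axis-orthogonal split ("decision stump") over a random
    subset of feature dimensions that minimizes the number of samples
    misclassified by a majority vote on each side."""
    best = None  # (error, dimension, threshold)
    for d in dims:
        for x, _ in samples:
            t = x[d]
            left = [c for xx, c in samples if xx[d] <= t]
            right = [c for xx, c in samples if xx[d] > t]
            err = sum(len(side) - max(Counter(side).values())
                      for side in (left, right) if side)
            if best is None or err < best[0]:
                best = (err, d, t)
    return best[1], best[2]

def train_tree(samples, depth, n_dims, rng):
    """Grow one randomized decision tree; leaves store class histograms."""
    classes = [c for _, c in samples]
    if depth == 0 or len(set(classes)) == 1:
        return Counter(classes)                       # leaf: class histogram
    dims = rng.sample(range(n_dims), max(1, n_dims // 2))  # random dimensions
    d, t = train_stump(samples, dims)
    left = [s for s in samples if s[0][d] <= t]
    right = [s for s in samples if s[0][d] > t]
    if not left or not right:
        return Counter(classes)
    return (d, t,
            train_tree(left, depth - 1, n_dims, rng),
            train_tree(right, depth - 1, n_dims, rng))

def tree_probabilities(node, x):
    """Descend to a leaf and turn its class histogram into probabilities."""
    while isinstance(node, tuple):
        d, t, l, r = node
        node = l if x[d] <= t else r
    total = sum(node.values())
    return {c: n / total for c, n in node.items()}

def train_forest(samples, n_trees=5, depth=3, seed=0):
    """Train each tree on a bootstrap sample of the training data."""
    rng = random.Random(seed)
    n_dims = len(samples[0][0])
    return [train_tree(rng.choices(samples, k=len(samples)), depth, n_dims, rng)
            for _ in range(n_trees)]

def forest_predict(forest, x):
    """Sum the per-tree class probabilities; return the most probable class."""
    votes = Counter()
    for tree in forest:
        for c, p in tree_probabilities(tree, x).items():
            votes[c] += p
    return votes.most_common(1)[0][0]

# Toy training data: a one-dimensional feature (e.g. the variance of a
# tracked region's appearance); high variance suggests "wet/icy".
samples = [([0.10], "dry"), ([0.15], "dry"), ([0.20], "dry"), ([0.25], "dry"),
           ([0.85], "wet/icy"), ([0.90], "wet/icy"),
           ([0.95], "wet/icy"), ([1.00], "wet/icy")]
forest = train_forest(samples)
print(forest_predict(forest, [0.95]))  # wet/icy
```

Both sources of randomization from the description appear here: the bootstrap draw of the training data per tree and the random subset of dimensions per split.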
In order to make a decision about the presence of specular reflections for a feature descriptor, the most probable class from the class histogram is preferably used as the current condition; alternatively, other methods can be used to transfer the information from the decision trees into a decision about the presence of reflection.
An optimization step preferably follows this decision per input image. This optimization can take the temporal context or further information which is provided by the vehicle into account. The temporal context is preferably taken into account by using the most frequent class from a previous time period or by establishing the most frequent class by means of a so-called hysteresis threshold value method. The hysteresis threshold value method uses threshold values to control the change from one road condition into another. A change is made only when the probability of the new condition is high enough and the probability of the old condition is accordingly low.
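The hysteresis threshold value method can be sketched in a few lines; the threshold values 0.7 and 0.3 are illustrative assumptions, not values prescribed by the method:

```python
def hysteresis_update(current, probs, enter=0.7, leave=0.3):
    """Change the reported road condition only when the probability of the
    new condition is high enough and that of the old condition is
    accordingly low (the 0.7/0.3 thresholds are free parameters)."""
    best = max(probs, key=probs.get)
    if best != current and probs[best] >= enter and probs.get(current, 0.0) <= leave:
        return best
    return current

state = "dry"
for probs in ({"dry": 0.6, "wet/icy": 0.4},    # not confident enough: keep "dry"
              {"dry": 0.2, "wet/icy": 0.8}):   # confident: switch to "wet/icy"
    state = hysteresis_update(state, probs)
print(state)  # wet/icy
```

The hysteresis suppresses flickering between conditions when successive per-image classifications are borderline.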
If the extracting described above has been effected for the individual points or regions, it is additionally possible to extract features for an entire image section of one of the images, in particular a transformed image section. Various calculations are conceivable for this such as, for example, the concatenation of the individual point features with any dimension reduction measures (e.g. “principal component analysis”), description with the aid of statistical moments or even a “bag of visual words” approach, during which the occurrence of specific prototype values or value tuples (e.g. SIFT, HOG, LBPs etc.) is detected on the basis of a histogram.
Road reflections are particularly preferably assessed using the effects described above by means of an approximate approach, by means of which the robustness of the method can be increased in particular in comparison with the image registration described above. At the same time, running times can be reduced, which is essential for the automotive sector. In this sense, it is envisaged according to another advantageous embodiment that the at least two images produced, particularly preferably the top views produced, are averaged in order to obtain an average image, and that an absolute or quadratic difference is calculated between each pixel in the average image and the associated column mean. A basic assumption of this embodiment is that a region moves through the entire image area; a specific region is not observed per se, but rather the path traveled by it. Therefore, more than two images are particularly preferably produced. It is additionally assumed that a straight-line and steady change in the perspective exists, preferably a uniform straight-line movement of the vehicle. This assumption can preferably be confirmed by vehicle movement parameters as contextual knowledge. On this understanding, the individual images from the sequence, preferably the transformed individual images, are averaged in order to obtain the average image. In order to minimize memory requirements or to give more recent events a higher weighting, it is also possible to calculate a moving average. The absolute difference or the quadratic difference is subsequently calculated between each pixel in the average image and the associated column mean.
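The averaging and the column-mean difference of this approximate approach can be sketched as follows; plain nested lists stand in for an image type, purely for illustration:

```python
def average_image(images):
    """Pixel-wise mean of a sequence of top views (lists of pixel rows)."""
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(cols)]
            for r in range(rows)]

def column_mean_difference(avg, squared=False):
    """Absolute (or quadratic) difference of every pixel in the average
    image from its column mean.  Under straight-line motion, diffusely
    reflecting road regions stay close to the column mean, whereas
    specular reflections produce large residuals."""
    rows, cols = len(avg), len(avg[0])
    col_means = [sum(avg[r][c] for r in range(rows)) / rows
                 for c in range(cols)]
    diff = (lambda v, m: (v - m) ** 2) if squared else (lambda v, m: abs(v - m))
    return [[diff(avg[r][c], col_means[c]) for c in range(cols)]
            for r in range(rows)]

# Two tiny 2x2 "top views": a bright specular highlight appears in the
# second image, so its column deviates from the column mean.
images = [[[100, 100], [100, 100]],
          [[100, 200], [100, 100]]]
avg = average_image(images)            # [[100.0, 150.0], [100.0, 100.0]]
print(column_mean_difference(avg))     # [[0.0, 25.0], [0.0, 25.0]]
```

As the description notes, the whole computation reduces to one running average and a few subtractions per pixel, which keeps the running time very low.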
Features of the average image can be extracted, taking account of the column means, wherein a "bag of visual words" approach is preferably applied, in which the occurrence of specific prototype values or value tuples is captured on the basis of a histogram. The resulting image can be used in order to assess the presence of specular reflections, for example by means of statistical moments or, in a particularly advantageous form, by means of local features (preferably "local binary patterns") in a "bag of visual words" approach. The basis of this approximate approach is the assumption that, in the case of diffuse reflection, regions passed on the road are very similar to the column mean during movement in a straight line, whereas in the case of specular reflections the change in the appearance of the regions passed differs considerably from the column means.
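A local binary pattern combined with a histogram, as a minimal "bag of visual words" style descriptor of the resulting image, could be sketched as follows; the 3x3 neighbourhood and the 256-bin histogram are the simplest standard variant, chosen here purely for illustration:

```python
def lbp_3x3(img, r, c):
    """8-bit local binary pattern of the pixel at (r, c): one bit per
    neighbour, set when the neighbour is at least as bright as the
    center pixel."""
    center = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
                  img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    return sum(1 << i for i, v in enumerate(neighbours) if v >= center)

def bag_of_patterns(img):
    """Histogram of LBP codes over all interior pixels: the codes act as
    the 'visual words', the histogram as the image descriptor."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_3x3(img, r, c)] += 1
    return hist

# A perfectly homogeneous residual image (diffuse case) concentrates all
# mass in the all-ones pattern; textured residuals spread over many bins.
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(bag_of_patterns(flat)[255])  # 1
```

The resulting histogram is the feature descriptor that would then be passed to the classifier described above.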
This method is substantially based, as mentioned above, on the assumption that the vehicle is moving in a straight line. During cornering, the viewed image region can be adjusted accordingly by rotation or shearing, in order to ensure that the effects still operate column by column. In this approach, the individual regions are not directly tracked in order to assess their change in appearance; instead, the path traveled by them (the image columns) is analyzed. One advantage of this method is its robustness with respect to non-registered vehicle movements (pitching/rocking), so that error-free estimates are supplied in a particularly reliable manner. Another advantage is the required computing time, which is greatly reduced in comparison with the first method: the calculations are limited to calculating the average and some subtractions.
In addition, camera parameters are preferably incorporated into the assessment of the change in the appearances. As a result, the robustness of the method can be increased. The ever-changing exposure time, which causes changes in the appearance of the regions in the sequence (e.g. changes in brightness) and which can negatively affect the detection of reflections, is preferably taken into account.
The apparatus according to another aspect of the invention for detecting and assessing reflections of at least one point on a road comprises a camera which is set up to produce at least two digital images of the at least one point of the road from different recording perspectives. The apparatus is in this case set up to assess differences in the appearances of the at least one point of the road using digital image processing algorithms and, as a result, detect diffuse reflections and specular reflections of the road and determine an item of road condition information on the basis of the detected reflection.
With regard to the advantages and advantageous embodiments of the apparatus according to the invention, reference is made to the foregoing explanations in connection with the method according to the invention in order to avoid repetitions, wherein the apparatus according to the invention can have the necessary elements for this or can be set up for this in an extended manner.
Finally, the vehicle according to a further aspect of the invention comprises the aforementioned apparatus according to the invention.
BRIEF DESCRIPTION OF THE FIGURES
Embodiment examples of the invention will be explained in more detail below with reference to the drawing, wherein:
FIGS. 1a and b show a schematic representation of an embodiment example of an apparatus according to the invention during the execution of an embodiment example of the method according to the invention.
DETAILED DESCRIPTION OF EMBODIMENT EXAMPLES
The apparatus 1 according to the invention shown by FIGS. 1a and 1b comprises a digital camera 2 which is set up to record at least two digital images of a point 3 of the road from different recording perspectives, wherein the different recording perspectives are each shown by two different positions A and B of the respective camera 2.
The camera 2 is arranged in a vehicle (not shown), specifically behind the windshield thereof, so that the area in front of the vehicle can be captured in the way the driver of the vehicle perceives it. By means of a travel movement, the vehicle is moved from a first position into a second position. In the first position, in which the camera 2 covers the recording perspective A shown on the right in FIGS. 1a and 1b respectively, a first image of the point 3 of the road is recorded in each case. The vehicle is then moved into the second position, in which the recording perspective of the camera 2 is changed in such a manner that the recording perspective B shown on the left in FIGS. 1a and 1b respectively is covered, from which a second image of the point 3 of the road is recorded in each case.
As can be seen from FIG. 1a , the image of the point 3 of the road does not change during the change of the recording perspective from A to B, because an incoming light beam 4 is reflected equally in all directions by the dry road surface 5. This corresponds to a diffuse reflection, which is an indicator of a dry road surface. The apparatus 1 compares the first and the second image with each other. Using digital image processing algorithms, the apparatus 1 detects that the first and the second image are identical or differ only so slightly that a diffuse reflection must exist. Due to the detected diffuse reflection, the apparatus 1 determines an item of road condition information which includes the fact that the road surface 5 is dry. This item of information is transmitted to a driver assistance system (not shown).
On the other hand, as can be seen from FIG. 1b , the image of the point 3 of the road does change during the change of the recording perspective from A to B, because an incoming light ray 6 is reflected only in one particular direction by the icy or wet road surface 7. This corresponds to a specular reflection, which is an indicator of a wet or icy road surface. The apparatus 1 compares the first and the second image with each other. Using digital image processing algorithms, the apparatus detects that the first and the second image differ from each other so greatly that a specular reflection must exist. Due to the detected specular reflection, the apparatus determines an item of road condition information which includes the fact that the road surface is wet or icy. This item of information is transmitted to a driver assistance system (not shown), which adjusts its times for issuing an alert or for intervention to the wet or icy road surface.

Claims (14)

The invention claimed is:
1. A method for detecting and assessing reflections of at least one point on a road, comprising the method steps:
providing a camera;
producing at least two digital images of the at least one point of the road by the camera, wherein the images are produced from different recording perspectives of the camera;
distinguishing diffuse reflection and specular reflection of the road by assessing differences in appearances of the at least one point of the road in the at least two digital images using digital image processing algorithms; and
determining an item of road condition information based on the distinguished reflection;
wherein the at least two images are averaged to produce an average image, and wherein an absolute difference or a quadratic difference is calculated between each pixel in the average image and an associated image column mean value.
2. The method according to claim 1, characterized by producing the at least two digital images of the at least one point of the road by the camera which is a stereo camera, wherein the images are produced from the different recording perspectives of imaging elements of the stereo camera.
3. The method according to claim 1, characterized by
distinguishing between the diffuse reflection and the specular reflection based on digital image processing by differentiating between appearances fixed to the road and appearances independent of the road caused by the relative movement of the camera, therefore
reliably separating shadows and reflected infrastructure on the road.
4. The method according to claim 1, characterized by
communicating the item of road condition information to a driver assistance system of a vehicle, and
adjusting times for issuing an alert or for intervention by the driver assistance system based on the item of road condition information.
5. The method according to claim 1, characterized by
incorporating the item of road condition information into a function of an automated vehicle, and
adjusting a driving strategy and establishing handover points between an automated system and a driver based on the item of road condition information.
6. The method according to claim 1, characterized by
producing the at least two digital images of a plurality of the points of the road, which form a trapezoidal region, from the different recording perspectives by the camera, and
transforming the trapezoidal region by an estimated homography into a rectangular top view.
7. The method according to claim 1, characterized by
providing the camera in a vehicle;
producing a first one of the digital images in a first position of the vehicle from a first one of the recording perspectives;
moving the vehicle into a second position which differs from the first position;
producing a second one of the digital images in the second position of the vehicle from a second one of the recording perspectives;
transforming the at least two digital images from the different recording perspectives respectively into at least two top views;
registering the at least two top views produced with digital image processing, incorporating driving dynamics parameters of the vehicle; and
comparing the appearances of the at least one point of the road in the at least two registered top views.
8. The method according to claim 7, characterized by extracting features of the at least one point of the road, which features capture a change in the appearance in the at least two registered top views.
9. The method according to claim 8, characterized by
forming a feature vector from the extracted features, and
assigning the feature vector to a class by a classifier.
10. The method according to claim 7, characterized by
producing the average image by averaging the at least two top views.
11. The method according to claim 1, characterized by extracting features of the average image, taking account of the image column mean values, during which an occurrence of particular prototype values or value tuples is captured based on a histogram.
12. The method according to claim 1, characterized by incorporating camera parameters of the camera into the assessing of the differences in the appearances.
13. An apparatus for detecting and assessing reflections of at least one point on a road, comprising a camera which is configured to produce at least two digital images of the at least one point of the road from different recording perspectives,
wherein the apparatus is configured to:
assess differences in appearances of the at least one point of the road using digital image processing algorithms and, as a result, detect diffuse reflections and specular reflections of the road,
determine an item of road condition information based on the detected reflection,
average the at least two images to produce an average image, and
calculate an absolute difference or a quadratic difference between each pixel in the average image and an associated image column mean value.
14. A combination comprising the apparatus according to claim 13 and a vehicle.
US15/572,010 2015-05-06 2016-05-04 Method and apparatus for detecting and assessing road reflections Active 2036-08-25 US10442438B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
DE102015208429.9A DE102015208429A1 (en) 2015-05-06 2015-05-06 Method and device for detecting and evaluating road reflections
DE102015208429 2015-05-06
DE102015208429.9 2015-05-06
PCT/DE2016/200207 WO2016177371A1 (en) 2015-05-06 2016-05-04 Method and apparatus for detecting and assessing road reflections

Publications (2)

Publication Number Publication Date
US20180141561A1 (en) 2018-05-24
US10442438B2 (en) 2019-10-15

Family

ID=56097956

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/572,010 Active 2036-08-25 US10442438B2 (en) 2015-05-06 2016-05-04 Method and apparatus for detecting and assessing road reflections

Country Status (7)

Country Link
US (1) US10442438B2 (en)
EP (1) EP3292510B1 (en)
JP (1) JP6453490B2 (en)
KR (1) KR101891460B1 (en)
CN (1) CN107667378B (en)
DE (2) DE102015208429A1 (en)
WO (1) WO2016177371A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10787175B1 (en) * 2019-05-21 2020-09-29 Vaisala Oyj Method of calibrating an optical surface condition monitoring system, arrangement, apparatus and computer readable memory
US20210365750A1 (en) * 2016-01-05 2021-11-25 Mobileye Vision Technologies Ltd. Systems and methods for estimating future paths

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106686165B (en) * 2016-12-30 2018-08-17 维沃移动通信有限公司 A kind of method and mobile terminal of road conditions detection
WO2018122586A1 (en) * 2016-12-30 2018-07-05 同济大学 Method of controlling automated driving speed based on comfort level
DE102018203807A1 (en) 2018-03-13 2019-09-19 Continental Teves Ag & Co. Ohg Method and device for detecting and evaluating road conditions and weather-related environmental influences
FR3103303B1 (en) * 2019-11-14 2022-07-22 Continental Automotive Determination of a coefficient of friction for a vehicle on a road
EP3866055A1 (en) * 2020-02-12 2021-08-18 Aptiv Technologies Limited System and method for displaying spatial information in the field of view of a driver of a vehicle
US20220198200A1 (en) 2020-12-22 2022-06-23 Continental Automotive Systems, Inc. Road lane condition detection with lane assist for a vehicle using infrared detecting device
CN112597666B (en) * 2021-01-08 2022-05-24 北京深睿博联科技有限责任公司 Pavement state analysis method and device based on surface material modeling
DE102021101788A1 (en) 2021-01-27 2022-07-28 Zf Cv Systems Global Gmbh Method for spatially resolved determination of a surface property of a subsoil, processing unit and vehicle
JP6955295B1 (en) * 2021-02-16 2021-10-27 株式会社アーバンエックステクノロジーズ Identification device, identification program, and identification method
CN115201218A (en) * 2022-07-13 2022-10-18 鲁朗软件(北京)有限公司 Vehicle-mounted pavement disease intelligent detection method and system


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3626905B2 (en) * 2000-11-24 2005-03-09 富士重工業株式会社 Outside monitoring device
JP5761601B2 (en) * 2010-07-01 2015-08-12 株式会社リコー Object identification device

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3023444A1 (en) 1979-06-29 1981-01-22 Omron Tateisi Electronics Co Road surface condition monitoring system - uses IR signal reflected off road surface and sensed by array of detectors
US4690553A (en) 1979-06-29 1987-09-01 Omron Tateisi Electronics Co. Road surface condition detection system
US5163319A (en) 1987-11-11 1992-11-17 Messerschmitt-Bolkow-Blohm Gmbh Method and a device for recognizing the condition of a road
DE4235104A1 (en) 1992-10-17 1994-04-21 Sel Alcatel Ag Road condition detecting unit identifying state of road surface lying before moving motor vehicle - uses two adjustable light transmitting and receiving systems at different angles of elevation and supplying evaluation circuit correlating receiver signals
US20030141980A1 (en) * 2000-02-07 2003-07-31 Moore Ian Frederick Smoke and flame detection
US20020191837A1 (en) 2001-05-23 2002-12-19 Kabushiki Kaisha Toshiba System and method for detecting obstacle
JP2003057168A (en) 2001-08-20 2003-02-26 Omron Corp Road-surface judging apparatus and method of installing and adjusting the same
CN1809853A (en) 2003-03-14 2006-07-26 利瓦斯有限责任公司 A device for detection of road surface condition
WO2004081897A2 (en) 2003-03-14 2004-09-23 Liwas Aps A device for detection of road surface condition
US7652584B2 (en) 2003-03-14 2010-01-26 Liwas Aps Device for detection of surface condition data
CN102017601A (en) 2008-06-26 2011-04-13 松下电器产业株式会社 Image processing apparatus, image division program and image synthesising method
US8184194B2 (en) 2008-06-26 2012-05-22 Panasonic Corporation Image processing apparatus, image division program and image synthesising method
DE112010005669B4 (en) 2010-06-18 2017-10-26 Honda Motor Co., Ltd. Road surface reflectivity classification system
EP2551794A2 (en) 2011-07-28 2013-01-30 Hitachi Ltd. Onboard environment recognition system
US20130156301A1 (en) * 2011-12-19 2013-06-20 Industrial Technology Research Institute Method and system for recognizing images
US20150325014A1 (en) * 2013-01-21 2015-11-12 Kowa Company, Ltd. Image processing device, image processing method, image processing program, and recording medium storing said program
US20170112057A1 (en) * 2015-10-23 2017-04-27 Carnegie Mellon University System for evaluating agricultural material

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
Amthor, Manuel; Hartmann, Bernd; Denzler, Joachim, "Road Condition Estimation Based on Spatio-Temporal Reflection Models", in: Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, vol. 9358, Chap. 1, Springer, Berlin, Heidelberg, Nov. 3, 2015, ISBN: 978-3-642-17318-9, XP047333260, DOI: 10.1007/978-3-319-24947-6_1, pp. 3 to 15.
Chinese Office Action in Chinese Patent Application No. 201680026128.2, dated Jun. 5, 2019, 6 pages, with partial English translation, 5 pages.
English translation of PCT International Preliminary Report on Patentability of the International Searching Authority for International Application PCT/DE2016/200207, dated Nov. 9, 2017, 8 pages, International Bureau of WIPO, Geneva, Switzerland.
German Search Report for German Patent Application No. 10 2015 208 429.9, dated Mar. 22, 2016, 7 pages, Muenchen, Germany, with English translation, 5 pages.
International Search Report of the International Searching Authority for International Application PCT/DE2016/200207, dated Sep. 7, 2016, 4 pages, European Patent Office, HV Rijswijk, Netherlands.
Keiji Fujimura et al., "Road Surface Sensor", FUJITSU TEN GIHO (FUJITSU TEN Technical Report), FUJITSU TEN Kabushiki Kaisha, Kobe, Japan, Feb. 1, 1988, retrieved from the Internet on Dec. 4, 2012: URL:http://www.fujitsu-ten.com/business/technicaljournal/pdf/1-6E.pdf, XP002688511, ISSN: 0289-3789, pp. 64 to 72.
Manuel Amthor et al., "Road Condition Estimation Based on Spatio-Temporal Reflection Models", Correct System Design, Springer International Publishing, Computer Vision Group, Friedrich Schiller University Jena, Germany, Advanced Engineering, Continental Teves AG & Co. oHG Frankfurt a.M., Germany, Nov. 3, 2015, XP047333260, ISSN: 0302-9743, ISBN: 978-3-642-36616-1, pp. 3 to 15.
Kawai, Shohei; Takeuchi, Kazuya; Shibata, Keiji; Horita, Yuukou, "A Method to Distinguish Road Surface Conditions for Car-Mounted Camera Images at Night-Time", ITS Telecommunications (ITST), 2012 12th International Conference on, IEEE, Nov. 5, 2012, pp. 668 to 672, XP032327884, ISBN: 978-1-4673-3071-8, DOI: 10.1109/ITST.2012.6425265.
Shohei Kawai et al., "A Method to Distinguish Road Surface Conditions for Car-Mounted Camera Images at Night-Time", 2012 12th International Conference on ITS Telecommunications (ITST), IEEE, University of Toyama, Japan, Nov. 5, 2012, XP032327884, ISBN: 978-1-4673-3071-8, pp. 668 to 672.

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210365750A1 (en) * 2016-01-05 2021-11-25 Mobileye Vision Technologies Ltd. Systems and methods for estimating future paths
US11657604B2 (en) * 2016-01-05 2023-05-23 Mobileye Vision Technologies Ltd. Systems and methods for estimating future paths
US10787175B1 (en) * 2019-05-21 2020-09-29 Vaisala Oyj Method of calibrating an optical surface condition monitoring system, arrangement, apparatus and computer readable memory

Also Published As

Publication number Publication date
KR101891460B1 (en) 2018-08-24
DE102015208429A1 (en) 2016-11-10
EP3292510A1 (en) 2018-03-14
WO2016177371A1 (en) 2016-11-10
EP3292510B1 (en) 2021-07-07
JP2018516799A (en) 2018-06-28
KR20170127036A (en) 2017-11-20
US20180141561A1 (en) 2018-05-24
CN107667378A (en) 2018-02-06
DE112016002050A5 (en) 2018-01-11
CN107667378B (en) 2021-04-27
JP6453490B2 (en) 2019-01-16

Similar Documents

Publication Publication Date Title
US10442438B2 (en) Method and apparatus for detecting and assessing road reflections
EP3807128B1 (en) A rider assistance system and method
Rezaei et al. Robust vehicle detection and distance estimation under challenging lighting conditions
US20200406897A1 (en) Method and Device for Recognizing and Evaluating Roadway Conditions and Weather-Related Environmental Influences
US10262216B2 (en) Hazard detection from a camera in a scene with moving shadows
US10147002B2 (en) Method and apparatus for determining a road condition
Wang et al. Appearance-based brake-lights recognition using deep learning and vehicle detection
US8232872B2 (en) Cross traffic collision alert system
WO2016129403A1 (en) Object detection device
RU2568777C2 (en) Device to detect moving bodies and system to detect moving bodies
JP2018516799A5 (en)
US20180060676A1 (en) Method and device for detecting and evaluating environmental influences and road condition information in the vehicle surroundings
US9280900B2 (en) Vehicle external environment recognition device
CN109997148B (en) Information processing apparatus, imaging apparatus, device control system, moving object, information processing method, and computer-readable recording medium
US11691585B2 (en) Image processing apparatus, imaging device, moving body device control system, image processing method, and program product
US10885351B2 (en) Image processing apparatus to estimate a plurality of road surfaces
US9870513B2 (en) Method and device for detecting objects from depth-resolved image data
US20150310265A1 (en) Method and System for Proactively Recognizing an Action of a Road User
JP2021510227A (en) Multispectral system for providing pre-collision alerts
US9365195B2 (en) Monitoring method of vehicle and automatic braking apparatus
JPWO2019174682A5 (en)
JP7229032B2 (en) External object detection device
JP6171608B2 (en) Object detection device
JP2018088237A (en) Information processing device, imaging device, apparatus control system, movable body, information processing method, and information processing program

Legal Events

Date Code Title Description
AS Assignment

Owner name: CONTINENTAL TEVES AG & CO. OHG, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FRITZ, STEFAN;HARTMANN, BERND;AMTHOR, MANUEL;AND OTHERS;REEL/FRAME:044041/0938

Effective date: 20170731

Owner name: FRIEDRICH-SCHILLER-UNIVERSITAET JENA, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FRITZ, STEFAN;HARTMANN, BERND;AMTHOR, MANUEL;AND OTHERS;REEL/FRAME:044041/0938

Effective date: 20170731

Owner name: CONTINENTAL TEVES AG & CO. OHG, GERMANY

Free format text: REQUEST TO TRANSFER RIGHTS;ASSIGNORS:CONTINENTAL TEVES AG & CO. OHG;FRIEDRICH-SCHILLER-UNIVERSITAET JENA;REEL/FRAME:044648/0192

Effective date: 20170816

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4