WO2018234200A1 - Camera based wade assist - Google Patents

Camera based wade assist

Info

Publication number
WO2018234200A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
camera
wade
vehicle body
liquid
Prior art date
Application number
PCT/EP2018/066037
Other languages
French (fr)
Inventor
Senthil Kumar Yogamani
Sunil Chandra
Ciaran Hughes
Original Assignee
Connaught Electronics Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Connaught Electronics Ltd. filed Critical Connaught Electronics Ltd.
Publication of WO2018234200A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Definitions

  • the present invention refers to a method for wade level detection in a vehicle comprising at least one camera.
  • the present invention also refers to a driving assistance system for a vehicle, wherein the driving assistance system is adapted to perform the above method.
  • the present invention further refers to a vehicle with the above driving assistance system.
  • Fig. 1 shows a vehicle 10 known in the art.
  • the vehicle has a wing mirror 12, and an ultrasonic sensor 14 is mounted at the wing mirror 12.
  • the ultrasonic sensor 14 has a field of view a.
  • the vehicle is partially covered by water 16.
  • ultrasonic waves emitted from the ultrasonic sensor 14 are reflected from a surface 18 of the water 16.
  • the wade level is determined based on reflections of ultrasonic pulses emitted from the ultrasonic sensor 14.
  • the reflections are received by the ultrasonic sensor 14 and, based on a runtime of the ultrasonic waves, a distance to the surface 18 of the water 16 is determined using the known position of the ultrasonic sensor 14 at the vehicle 10.
  • Additional contact sensors 20 for detecting the water 16 based on contact are provided at a bottom of the vehicle 10.
  • the contact sensors 20 are e.g. used for validation of the ultrasonic sensors 14.
  • a vehicle comprises a system for aiding driver control of the vehicle when the vehicle is wading in a body of water.
  • the system comprises a measurement apparatus for determining a measured depth of water in which the vehicle is wading.
  • the measurement apparatus is positioned and arranged relative to the vehicle such that the measured depth is indicative of the depth of water in a first measurement region relative to the actual vehicle.
  • the processor is coupled to the measurement apparatus and is configured to calculate an estimated water depth in dependence upon the measured depth and in dependence upon the vehicle speed.
  • a vehicle comprises a system having a control unit and at least one remote sensor, which includes a first ultrasonic transducer sensor mounted to a left-side mirror of the vehicle, and a second ultrasonic transducer sensor mounted to a right-side mirror of the vehicle; the first and second ultrasonic transducer sensors are positioned on the vehicle.
  • the first and second ultrasonic transducer sensors are configured to emit and receive a pulsed ultrasonic signal.
  • the time of receipt of an echoed ultrasonic signal is indicative of a distance sensed between the ultrasonic transducer sensor and the surface level of a body of water in a measurement region adjacent to the vehicle.
  • the present invention provides a method for wade level detection in a vehicle comprising at least one camera with a field of view covering at least part of a vehicle body from the at least one camera in a direction downwards, comprising the steps of calibrating a position of the at least one camera with respect to the body of the vehicle prior to usage of the vehicle, learning a shape of the vehicle body from the at least one camera in a direction downwards, receiving a camera image covering the part of a vehicle body from the at least one camera in a direction downwards, identifying a part of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, comparing the part of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, to the shape of the vehicle body from the at least one camera in a direction downwards, and detecting a wade level based on the comparison.
  • the present invention also provides a driving assistance system for a vehicle, wherein the driving assistance system is adapted to perform the above method.
  • the present invention further provides a vehicle with the above driving assistance system.
  • the basic idea of the invention is to use at least one camera, typically at least one optical camera, to perform wade detection.
  • the usage of the at least one camera has the advantage that it has a field of view superior to that of a typical ultrasonic sensor. Due to the superior field of view, an increased area of the surface can be monitored, so that the reliability of the wade detection can be increased. Furthermore, the wide field of view enables analysis of the scene already prior to reaching a wade area. Furthermore, nowadays vehicles are frequently equipped with at least one camera. Hence, using this camera, the method can be performed without additional camera hardware. Compared to ultrasonic sensors, cameras provide a huge amount of sensor data, which enables a detailed analysis to implement a reliable wade detection, in particular a wade level detection.
  • Wade can be a desired or accepted feature, e.g. in the case of off-road vehicles.
  • the liquid is typically water or mud with a liquid characteristic.
  • the wade level refers to a height of the liquid in an area around the vehicle.
  • For off-road vehicles, wade levels of even more than a meter can be achieved, whereas for regular vehicles, a wade level of a few centimeters can already be dangerous because of possible damages to the vehicle, in particular to the motor, in particular due to water entering into the cylinder.
  • the wade level can be different e.g. on a left and right side of the vehicle, or at its front and at its rear.
  • the vehicle has at least one camera.
  • the vehicle has a surround view camera system with multiple cameras covering all directions around the vehicle. Hence, the wade level can be determined all around the vehicle.
  • the at least one camera has a field of view, which covers part of the vehicle body from the at least one camera in a direction downwards. Hence, this part of the vehicle body can be used as reference for the detection of a liquid level around the vehicle.
  • the at least one camera provides images covering the part of a vehicle body from the at least one camera in a direction downwards.
  • the images can be provided at any suitable rate, e.g. depending on a vehicle speed.
  • Calibrating a position of the at least one camera with respect to the body of the vehicle is required to generate an absolute reference for determining the wade level.
  • Different cameras can have different references.
  • the information can be combined based on the known reference positions of the camera.
  • Calibration has to be made at least once prior to usage of the vehicle.
  • the calibration step S100 can be repeated, e.g. in order to adapt to changed vehicle features, e.g. when the air pressure in the wheels of the vehicle changes or when the air pressure is changed based on current driving conditions.
  • off-road vehicles can require different air pressure when circulating on a road or in off-road conditions.
  • the step of learning a shape of the vehicle body from the at least one camera in a direction downwards comprises performing self-learning of the shape of the vehicle body from the at least one camera in a direction downwards.
  • the method can be easily applied to different types of vehicles. Changes in the appearance of the vehicle can be easily considered and do not lead to a false wade detection, since the vehicle can adapt to such changes, e.g. in case the color of the vehicle is changed, dirt or water drops reside at the vehicle body, stickers are attached to the vehicle body, or others. Since the vehicle body is a static object, it can be learned as background information during a simple training stage. The shape of the vehicle can be self-learned, i.e. outside the factory.
  • the step of learning a shape of the vehicle body has to be robust to handle illumination variations and presence of reflections on the body of the vehicle.
  • an initial training step is performed prior to usage of the vehicle.
  • the step of identifying a part of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, comprises modeling the liquid as a dynamic texture evolving in space with a color prior. Due to the nature of the liquid, e.g. water or even mud, its surface can move in an unpredictable manner. Liquids are dynamically evolving manifolds because of their fluidic nature. For example, waves can be formed on a surface of the liquid. This makes it in general difficult to determine a correct wade level. However, when adequately modeling the liquid, a correct wade level can be determined despite a movement of the liquid.
  • the step of modeling the liquid as a dynamic texture evolving in space with a color prior comprises modeling the liquid as a pixel-wise, temporally evolving Auto Regressive Moving Average process.
  • the Auto Regressive Moving Average is also referred to as ARMA process.
  • the ARMA process is used in statistical analysis of time series.
  • the ARMA process provides a parsimonious description of a (weakly) stationary stochastic process in terms of two polynomials, one for the auto regression and the second for the moving average.
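As a minimal illustration of the ARMA idea applied per pixel, the following numpy sketch simulates a single pixel's intensity over time as an ARMA(1,1) process. The function name and all parameter values (autoregressive coefficient, moving-average coefficient, mean intensity, noise level) are illustrative assumptions, not values from the publication.

```python
import numpy as np

def simulate_arma_pixel(n_steps, phi=0.7, theta=0.3, mean=120.0,
                        noise_std=4.0, seed=0):
    """Simulate one pixel's intensity as an ARMA(1,1) process:
    x_t - mean = phi * (x_{t-1} - mean) + eps_t + theta * eps_{t-1},
    i.e. one autoregressive term and one moving-average term."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, noise_std, n_steps)  # white-noise innovations
    x = np.empty(n_steps)
    x[0] = mean + eps[0]
    for t in range(1, n_steps):
        x[t] = mean + phi * (x[t - 1] - mean) + eps[t] + theta * eps[t - 1]
    return x

series = simulate_arma_pixel(500)
```

Because |phi| < 1, the simulated intensity fluctuates around its mean instead of drifting, which is the "(weakly) stationary" behavior referred to above.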
  • the method comprises a step of detecting ground water in a driving direction and a step of automatically activating the wade level detection upon detected ground water in the driving direction.
  • detection of ground water is enabled.
  • the ground water can typically be detected already well in advance of the vehicle, depending on a type of camera used and/or an orientation of the camera.
  • the wade level detection can already be started in advance, so that the wade level detection is already up and running, when the vehicle enters the water. A manual interaction to start wade detection can be omitted.
  • the step of detecting a wade level based on the comparison comprises performing a subtraction of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, from the shape of the vehicle body from the at least one camera in a direction downwards.
  • If the subtraction result is essentially zero, the current image is assumed to represent the shape of the vehicle body as previously learned. Hence, no water is present around the vehicle. The higher the subtraction result, the bigger the difference between the current image and the shape of the vehicle body as previously learned.
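The subtraction-based comparison above can be sketched as follows; the function name, the intensity threshold, and the synthetic grayscale data are assumptions of this sketch, not part of the publication.

```python
import numpy as np

def submerged_fraction(learned_body, current, diff_thresh=30):
    """Return the fraction of the learned vehicle-body region whose current
    appearance deviates from the learned appearance by more than the
    threshold (and is therefore assumed to be covered by liquid)."""
    diff = np.abs(current.astype(np.int16) - learned_body.astype(np.int16))
    return float((diff > diff_thresh).mean())

# Synthetic example: the lower half of the body patch is altered by water.
learned = np.full((100, 100), 100, dtype=np.uint8)
frame = learned.copy()
frame[50:, :] = 180
```

With an unchanged frame the result is essentially zero (no water); for the synthetic frame above, half the region deviates, suggesting the body is submerged up to mid-height of the patch.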
  • a learning stage has to be robust to handle illumination variations and presence of reflections on the body part.
  • the step of detecting a wade level based on the comparison further comprises a step of modeling image points remaining after the subtraction by K Gaussians with weights ω and parameters μ (mean) and σ² (variance).
  • The K Gaussians together form a Gaussian mixture model.
  • the step of modeling image points remaining after the subtraction by K Gaussians comprises performing an adaptive mixture of Gaussians, where the K Gaussians are sorted based on the ratio ω/σ², which favors the components with the least variance and therefore the most consistent Gaussians, and the top k Gaussians are chosen from the sorted order.
  • a Gaussian model with Zivkovic's adaptive mixture of Gaussians (MOG) is used. According to Zivkovic's adaptive mixture of Gaussians, background subtraction is analyzed with a pixel-level approach.
  • Recursive equations are used to constantly update the parameters but also to simultaneously select an appropriate number of components for each pixel.
  • the method comprises an additional step of tiling the image into various blocks, whereby the top k Gaussians are learnt separately for each block. Hence, for each block, a deviation can be calculated from the learnt model during training time.
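The tiling step can be sketched in numpy as follows; the block size and the choice of mean intensity as the per-block statistic are illustrative assumptions.

```python
import numpy as np

def tile_blocks(img, block=8):
    """Split a grayscale image into (block x block) tiles; edge pixels that
    do not fill a whole block are cropped. Result shape: (rows, cols, block, block)."""
    h, w = img.shape
    h2, w2 = h - h % block, w - w % block
    tiles = img[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return tiles.swapaxes(1, 2)

def block_deviation(tiles, learned_means):
    """Per-block absolute deviation of the mean intensity from the learnt model."""
    return np.abs(tiles.mean(axis=(2, 3)) - learned_means)
```

During training, a per-block model (here just the mean) is stored; at run time, `block_deviation` flags blocks whose appearance no longer matches the learnt model.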
  • Training data can be artificially augmented with various noisy effects like reflection, illumination changes, etc. We use this method in particular because it includes an evolving Gaussian function, which can model temporal variability of the liquid surface.
  • a probability of a pixel x being modeled by these K Gaussians can be calculated as the weighted sum P(x) = Σk=1..K ωk · η(x, μk, σk²), where η denotes the Gaussian probability density of the k-th component.
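A compact numpy sketch of the mixture probability and of the ω/σ² ordering described above; the function names and the restriction to scalar (grayscale) pixel values are assumptions of this sketch.

```python
import numpy as np

def pixel_probability(x, weights, means, variances):
    """P(x) = sum_k w_k * N(x; mu_k, sigma_k^2) under a K-component
    Gaussian mixture with scalar pixel values."""
    w, mu, var = (np.asarray(a, dtype=float) for a in (weights, means, variances))
    densities = np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
    return float(np.sum(w * densities))

def top_k_components(weights, variances, k):
    """Sort components by the ratio w / sigma^2 in descending order (high
    weight, low variance first) and keep the indices of the top k."""
    ratio = np.asarray(weights, dtype=float) / np.asarray(variances, dtype=float)
    return np.argsort(-ratio)[:k]
```

Components with high weight and low variance rank first, which matches the description of choosing the most consistent Gaussians as background.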
  • the method comprises an additional step of performing a vehicle to vehicle warning upon detection of a predefined wade level.
  • vehicles which do not support wade detection can be supplied with wade level information.
  • the vehicle comprises a communication device for communicating the wade level to other vehicles, either directly or via a server, which distributes wade detection information.
  • the communication device can be provided to communicate according to any suitable mobile communication standard including Bluetooth, WiFi, GPRS, UMTS, LTE, 5G, or others, just to name a few.
  • Vehicle to vehicle warning enables non-intrusive wade detection or even wade level detection for the warned vehicle.
  • the method comprises an additional step of performing a dynamic online background subtraction for other vehicles.
  • the at least one camera is used to determine if other vehicles are submerged.
  • a wade level can be estimated based on a known or estimated size of the other vehicle and by approximately performing background subtraction to see how much of the vehicle body is occluded by the liquid.
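A minimal sketch of this occlusion-based estimate for another vehicle; the function name and the assumption that the other vehicle's height is known (or estimated) are illustrative.

```python
def wade_level_from_occlusion(vehicle_height_m, visible_fraction):
    """Rough wade level on another vehicle: the part of its known or
    estimated body height that is hidden by the liquid."""
    return vehicle_height_m * (1.0 - visible_fraction)
```

For instance, if background subtraction shows only 80 % of a 1.5 m tall vehicle body above the surface, the estimated wade level at that vehicle is about 0.3 m.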
  • the method comprises an additional step of performing a displaced volume detection of liquid displaced by the vehicle.
  • a displaced volume detection determines an increase in the liquid level, e.g. the water level, based on the vehicle entering or moving within the liquid.
  • Given a surface of the ground water and the dimensions of the vehicle, it can be easily determined how much the liquid level will rise based on the presence of the vehicle and its inherent liquid displacement.
  • If the volume of displaced liquid is much larger, the immersion of the vehicle increases rapidly.
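The displaced-volume effect can be sketched with a simple conservation argument; the bounded, vertical-walled pool and incompressible liquid are simplifying assumptions of this sketch, not stated in the publication.

```python
def level_rise(submerged_volume_m3, pool_area_m2, vehicle_footprint_m2):
    """Liquid level rise when the vehicle displaces liquid in a bounded pool:
    the displaced volume spreads over the free surface area around the vehicle
    (assumption: vertical walls, incompressible liquid)."""
    free_area = pool_area_m2 - vehicle_footprint_m2
    if free_area <= 0:
        raise ValueError("vehicle footprint must be smaller than the pool area")
    return submerged_volume_m3 / free_area
```

For example, 2 m³ of submerged vehicle body in a 50 m² pool with a 10 m² footprint raises the level by about 5 cm, which is why the level around the vehicle can rise noticeably as it enters.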
  • a predictive model can be supplemented by using a particle filter, and camera based estimation offers tracking and localization because of a wide field of view (FoV).
  • the vehicle comprises an ultrasonic distance sensor.
  • the method comprises an additional step of fusing the detected wade level based on the comparison and a wade level detected by the ultrasonic distance sensor.
  • the fused information on the wade level increases reliability of the wade detection and in particular the wade level detection.
  • the ultrasonic distance sensor can be employed as known in the art to determine the wade level. Fusion can be performed by a heterogeneous Bayesian model, as data corresponding to the different sensors are very different and have different ranges.
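One simple instance of such Bayesian fusion is precision-weighted averaging of two Gaussian estimates; treating the camera-based and ultrasonic wade levels as independent Gaussian measurements is an assumption of this sketch, and the numeric variances in the usage example are illustrative.

```python
def fuse_wade_levels(mu_cam, var_cam, mu_us, var_us):
    """Precision-weighted (Gaussian) fusion of a camera-based and an
    ultrasonic wade-level estimate. Each estimate is weighted by the
    inverse of its variance; the fused variance is always smaller than
    either input variance."""
    w_cam, w_us = 1.0 / var_cam, 1.0 / var_us
    mu = (w_cam * mu_cam + w_us * mu_us) / (w_cam + w_us)
    var = 1.0 / (w_cam + w_us)
    return mu, var
```

For example, a camera estimate of 0.40 m (variance 0.01) fused with an ultrasonic estimate of 0.50 m (variance 0.04) yields 0.42 m, pulled towards the more certain camera measurement.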
  • Fig. 1 shows a schematic view of a vehicle known in the art with a driving assistance system based on ultrasonic sensors in a lateral view,
  • Fig. 2 shows a schematic view of a vehicle with a driving assistance system for detecting a wade condition using multiple cameras according to a first, preferred embodiment in a top view with additional camera views of the multiple cameras,
  • Fig. 3 shows a detailed schematic camera view of a left wing camera in accordance with Fig. 2,
  • Fig. 4 shows a schematic view of the vehicle with the driving assistance system according to the first embodiment for detecting a wade condition in a lateral view
  • Fig. 5 shows a schematic view of the vehicle with the driving assistance system according to the first embodiment for detecting a wade condition in a lateral view
  • Fig. 6 shows a schematic detailed camera view of a rear camera in accordance with Fig. 2, whereby the rear camera view is shown with individual blocks, in accordance with the first embodiment
  • Fig. 7 shows a perspective camera view of a front camera in accordance with Fig. 2, whereby the front camera view shows a submerged vehicle, in accordance with the first embodiment
  • Fig. 8 shows a flow chart indicating a method for performing wade level detection with the vehicle and the driving assistance system according to the first embodiment.
  • Figure 2 shows a vehicle 110 with a driving assistance system 112 according to a first, preferred embodiment of the present invention.
  • the driving assistance system 112 comprises a processing unit 114 and a surround view camera system 116, 118, 120, 122.
  • the surround view camera system 116, 118, 120, 122 comprises a front camera 116 covering a front direction of the vehicle 110, a rear camera 118 covering a rear direction of the vehicle 110, a right mirror camera 120 covering a right direction of the vehicle 110, and a left mirror camera 122 covering a left direction of the vehicle 110.
  • the cameras 116, 118, 120, 122 and the processing unit 114 are connected via a data bus connection 124.
  • Each camera 116, 118, 120, 122 has a field of view β, which can be seen e.g. in Fig. 4 or 5, and which covers part of a vehicle body 126 from the respective camera 116, 118, 120, 122 in a direction downwards, as can be seen in Fig. 2 as well as in Fig. 3.
  • the wade level refers to a height of a liquid 130, typically water, around the vehicle 110.
  • the wade condition refers to the presence of the liquid 130 around the vehicle 110.
  • the wade level refers to a height of a surface 132 of the liquid 130.
  • step S100 refers to calibrating a position of the cameras 116, 118, 120, 122 with respect to the body 126 of the vehicle 110.
  • Calibrating a position of the cameras 116, 118, 120, 122 with respect to the body 126 of the vehicle 110 refers to generating an absolute reference for determining the wade level.
  • the step of calibrating a position of the cameras 116, 118, 120, 122 with respect to the body 126 of the vehicle 110 is performed once prior to usage of the vehicle 110. Later on, step S100 can be repeated, e.g. due to changing driving conditions.
  • In step S110, a shape of the vehicle body 126 from the respective camera 116, 118, 120, 122 in a direction downwards is learned. This comprises performing self-learning of the shape of the vehicle body 126 from the cameras 116, 118, 120, 122 in a direction downwards. Since the vehicle body 126 is a static object, the shape is learned as background information during a training stage. Step S110 can be performed at essentially any time. Step S110 does not have to be performed continuously or every time the method is performed.
  • the step of learning a shape of the vehicle body 126 from the respective camera 116, 118, 120, 122 in a direction downwards is performed once prior to usage of the vehicle 110 as initial training. Later on, the training can be continued.
  • In step S120, ground water in a driving direction is detected.
  • Upon detection, the further wade level detection is automatically started. Detection of the ground water is performed using the camera 116, 118, 120, 122 facing in the driving direction. Most commonly, the front camera 116 is used to detect the ground water.
  • In step S130, the processing unit 114 starts receiving camera images from the cameras 116, 118, 120, 122, each of which covers the respective part of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards.
  • In step S140, a part of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards, which is not covered by liquid 130, is identified.
  • the liquid 130 is modeled as a dynamic texture evolving in space with a color prior. This comprises modeling the liquid 130 as a pixel-wise, temporally evolving Auto Regressive Moving Average process.
  • the Auto Regressive Moving Average is also referred to as ARMA process.
  • the ARMA process is used in statistical analysis of time series and provides a parsimonious description of a (weakly) stationary stochastic process in terms of two polynomials, one for the auto regression and the second for the moving average.
  • In step S150, the part of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards, which is not covered by liquid 130, is compared to the shape of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards.
  • the vehicle body 126 is used as reference for the detection of a liquid level around the vehicle 110.
  • the comparison is performed as a subtraction of the images, so that the remaining image content refers to a wade level.
  • an identical part of the image provided by the camera 116, 118, 120, 122 is subtracted from the learned shape of the vehicle body 126, so that only differing parts remain.
  • the wade level is detected in step S160. Accordingly, image points remaining after the subtraction are modeled by K Gaussians with weights ω and parameters μ (mean) and σ² (variance).
  • an adaptive mixture of Gaussians is performed, where the K Gaussians are sorted based on the ratio ω/σ², which favors the components with the least variance and therefore the most consistent Gaussians.
  • the top k Gaussians are chosen from the sorted order.
  • a Gaussian model with Zivkovic's adaptive mixture of Gaussians (MOG) is used. According to Zivkovic's adaptive mixture of Gaussians, background subtraction is analyzed with a pixel-level approach. Recursive equations are used to constantly update the parameters and also to simultaneously select an appropriate number of components for each pixel.
  • each image of each camera 116, 118, 120, 122 is tiled into various blocks 134, as can be seen by way of example in Fig. 6, whereby the top k Gaussians are learnt separately for each block 134.
  • a deviation is calculated from the learnt model during training time.
  • Training data is artificially augmented with various noisy effects like reflection, illumination changes, etc.
  • a displaced volume detection of liquid 130 displaced by the vehicle 110 is performed.
  • an increase in liquid level based on the vehicle 110 entering the liquid 130 and moving therein is determined.
  • Based on a surface 132 of the ground water and the dimensions of the vehicle 110, it is determined how much the wade level rises based on the presence of the vehicle 110 and its displacement of the liquid 130.
  • a dynamic online background subtraction for other vehicles is performed. Accordingly, the cameras 116, 118, 120, 122 are used to determine if other vehicles 138 are submerged, as can be seen with respect to Fig. 7.
  • a wade level can be estimated based on a known or estimated size of the other vehicle 138 and on how much of its vehicle body is occluded by the liquid 130.
  • a vehicle to vehicle warning is performed upon detection of the wade level, i.e. the warning is performed when the wade level is above a pre-defined wade level.
  • the driving assistance system 112 comprises a communication device for communicating the wade level to other vehicles, either directly or via a server, which distributes wade detection information.
  • the communication device is provided to communicate according to a suitable mobile communication standard including Bluetooth, WiFi, GPRS, UMTS, LTE, 5G, or others.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Emergency Alarm Devices (AREA)
  • Image Analysis (AREA)

Abstract

The present invention refers to a method for wade level detection in a vehicle (110) comprising at least one camera (116, 118, 120, 122) with a field of view (β) covering at least part of a vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, comprising the steps of calibrating a position of the at least one camera (116, 118, 120, 122) with respect to the body (126) of the vehicle (110) prior to usage of the vehicle (110), learning a shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, receiving a camera image covering the part of a vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, identifying a part of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130), comparing the part of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130), to the shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, and detecting a wade level based on the comparison. The present invention also refers to a driving assistance system (112) for a vehicle (110), wherein the driving assistance system (112) is adapted to perform the above method. Furthermore, the present invention also refers to a vehicle (110) with the above driving assistance system (112).

Description

Camera based wade assist
The present invention refers to a method for wade level detection in a vehicle comprising at least one camera.
The present invention also refers to a driving assistance system for a vehicle, wherein the driving assistance system is adapted to perform the above method.
The present invention further refers to a vehicle with the above driving assistance system.
Existing solutions for wade level detection in a vehicle are based on ultrasonic sensors, which are typically mounted on the side mirrors. The ultrasonic sensors are typically directed downwards towards the ground to detect a distance to a water surface above the ground.
Fig. 1 shows a vehicle 10 known in the art. The vehicle has a wing mirror 12, and an ultrasonic sensor 14 is mounted at the wing mirror 12. The ultrasonic sensor 14 has a field of view a. As can be seen in Fig. 1, the vehicle is partially covered by water 16. Hence, ultrasonic waves emitted from the ultrasonic sensor 14 are reflected from a surface 18 of the water 16. The wade level is determined based on reflections of ultrasonic pulses emitted from the ultrasonic sensor 14. The reflections are received by the ultrasonic sensor 14 and, based on a runtime of the ultrasonic waves, a distance to the surface 18 of the water 16 is determined using the known position of the ultrasonic sensor 14 at the vehicle 10. Additional contact sensors 20 for detecting the water 16 based on contact are provided at a bottom of the vehicle 10. The contact sensors 20 are e.g. used for validation of the ultrasonic sensors 14.
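The runtime-to-distance relation underlying this prior-art ultrasonic approach can be sketched as follows; the function names, the assumed speed of sound in air (343 m/s), and the example sensor height are illustrative assumptions, not values from the publication.

```python
def distance_to_surface(echo_runtime_s, speed_of_sound_ms=343.0):
    """Distance from the ultrasonic sensor to the reflecting water surface.
    The pulse travels to the surface and back, hence the factor 0.5."""
    return 0.5 * speed_of_sound_ms * echo_runtime_s

def wade_level_from_sensor(sensor_height_m, echo_runtime_s):
    """Wade level inferred from the sensor's known mounting height on the vehicle:
    mounting height minus measured distance to the water surface."""
    return sensor_height_m - distance_to_surface(echo_runtime_s)
```

For example, an echo runtime of 2 ms corresponds to a surface roughly 0.343 m below the sensor; with a mirror-mounted sensor at 1.0 m, that implies a wade level of about 0.657 m.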
However, such ultrasonic sensors have a limited field of view for detecting the water level, i.e. the surface of the water. Hence, the ultrasonic sensors merely perform a local measurement of the water level. Furthermore, the ultrasonic sensors typically require a flat surface, which limits the usability of the ultrasonic sensors. Still further, water is a liquid medium, so that its surface can move a lot in an unpredictable manner. Water is a dynamically evolving manifold because of its fluidic nature. This makes it difficult to determine the wade level using ultrasonic sensors. In this context, according to WO 2015/071170 A1, a vehicle comprises a system for aiding driver control of the vehicle when the vehicle is wading in a body of water. The system comprises a measurement apparatus for determining a measured depth of water in which the vehicle is wading. The measurement apparatus is positioned and arranged relative to the vehicle such that the measured depth is indicative of the depth of water in a first measurement region relative to the actual vehicle. The processor is coupled to the measurement apparatus and is configured to calculate an estimated water depth in dependence upon the measured depth and in dependence upon the vehicle speed. Hence, a vehicle comprises a system having a control unit and at least one remote sensor, which includes a first ultrasonic transducer sensor mounted to a left-side mirror of the vehicle, and a second ultrasonic transducer sensor mounted to a right-side mirror of the vehicle; the first and second ultrasonic transducer sensors are positioned on the vehicle. The first and second ultrasonic transducer sensors are configured to emit and receive a pulsed ultrasonic signal. The time of receipt of an echoed ultrasonic signal is indicative of a distance sensed between the ultrasonic transducer sensor and the surface level of a body of water in a measurement region adjacent to the vehicle.
It is an object of the present invention to provide a method for wade level detection in a vehicle, a driving assistance system for a vehicle, and a vehicle with such a driving assistance system, which overcome at least some of the above problems. In particular, it is an object of the present invention to provide a method for wade level detection in a vehicle, a driving assistance system for a vehicle, and a vehicle with such a driving assistance system, which enable a reliable wade level detection in a simple manner.
This object is achieved by the independent claims. Advantageous embodiments are given in the dependent claims.
In particular, the present invention provides a method for wade level detection in a vehicle comprising at least one camera with a field of view covering at least part of a vehicle body from the at least one camera in a direction downwards, comprising the steps of calibrating a position of the at least one camera with respect to the body of the vehicle prior to usage of the vehicle, learning a shape of the vehicle body from the at least one camera in a direction downwards, receiving a camera image covering the part of a vehicle body from the at least one camera in a direction downwards, identifying a part of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, comparing the part of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, to the shape of the vehicle body from the at least one camera in a direction downwards, and detecting a wade level based on the comparison.
The present invention also provides a driving assistance system for a vehicle, wherein the driving assistance system is adapted to perform the above method.
The present invention further provides a vehicle with the above driving assistance system.
The basic idea of the invention is to use at least one camera, typically at least one optical camera, to perform wade detection. The usage of the at least one camera has the advantage that it has a field of view superior to that of a typical ultrasonic sensor. Due to the superior field of view, an increased area of the surface can be monitored, so that the reliability of the wade detection can be increased. Furthermore, the wide field of view enables analysis of the scene already prior to reaching a wade area. Furthermore, nowadays vehicles are frequently equipped with at least one camera. Hence, using this camera, the method can be performed without additional camera hardware. Compared to ultrasonic sensors, cameras provide a huge amount of sensor data, which enables a detailed analysis to implement a reliable wade detection, in particular a wade level detection.
Wading can be a desired or accepted feature, e.g. in the case of off-road vehicles. However, also for regular vehicles, it is important to monitor a wade level to avoid damage due to the liquid. The liquid is typically water, or mud with a liquid characteristic.
The wade level refers to a height of the liquid in an area around the vehicle. For off-road vehicles, wade levels of even more than a meter can be achieved, whereas for regular vehicles, a wade level of a few centimeters can already be dangerous because of possible damage to the vehicle, in particular to the motor, in particular due to water entering into the cylinder. Depending on an orientation of the vehicle, the wade level can be different e.g. on a left and right side of the vehicle, or at its front and at its rear. The vehicle has at least one camera. Preferably, the vehicle has a surround view camera system with multiple cameras covering all directions around the vehicle. Hence, the wade level can be determined all around the vehicle.
The at least one camera has a field of view, which covers part of the vehicle body from the at least one camera in a direction downwards. Hence, this part of the vehicle body can be used as reference for the detection of a liquid level around the vehicle. The at least one camera provides images covering the part of a vehicle body from the at least one camera in a direction downwards. The images can be provided at any suitable rate, e.g. depending on a vehicle speed.
Calibrating a position of the at least one camera with respect to the body of the vehicle is required to generate an absolute reference for determining the wade level. Different cameras can have different references. However, also in this case, the information can be combined based on the known reference positions of the cameras. Calibration has to be made at least once prior to usage of the vehicle. The calibration step S100 can be repeated, e.g. in order to adapt to changed vehicle features, e.g. when the air pressure in the wheels of the vehicle changes or when the air pressure is changed based on current driving conditions. In particular, off-road vehicles can require different air pressures on the road and in off-road conditions.
According to a modified embodiment of the invention, the step of learning a shape of the vehicle body from the at least one camera in a direction downwards comprises performing self-learning of the shape of the vehicle body from the at least one camera in a direction downwards. Accordingly, the method can be easily applied to different types of vehicles. Changes in the appearance of the vehicle can be easily considered and do not lead to a false wade detection, since the vehicle can adapt to such changes, e.g. in case the color of the vehicle is changed, dirt or water drops reside at the vehicle body, stickers are attached to the vehicle body, or others. Since the vehicle body is a static object, it can be learned as background information during a simple training stage. The shape of the vehicle can be self-learned, i.e. outside the factory. The step of learning a shape of the vehicle body has to be robust to handle illumination variations and presence of reflections on the body of the vehicle. Typically, for each camera, only a part of the vehicle body will be visible and learned as vehicle body. Preferably, an initial training step is performed prior to usage of the vehicle.
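As an illustration of learning the static vehicle-body shape as background information, a minimal per-pixel running-average model is sketched below. The learning rate, grayscale representation, and toy image size are illustrative assumptions, not the invention's exact learning scheme; a small learning rate is what makes such a model tolerant to short-lived disturbances such as reflections or water drops on the body.

```python
# Sketch: learning the visible vehicle-body shape as a per-pixel background
# model via a running average (an assumed, simplified stand-in for the
# self-learning step described above).

def update_background(background, frame, alpha=0.05):
    """Blend a new frame into the per-pixel background model.

    background, frame: 2D lists of grayscale intensities (0..255).
    alpha: learning rate; small values make the model robust to
    short-lived disturbances such as reflections or water drops.
    """
    return [
        [(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
        for brow, frow in zip(background, frame)
    ]

# Example: a 2x2 body region; repeated updates converge to the static shape.
bg = [[0.0, 0.0], [0.0, 0.0]]
frame = [[100.0, 100.0], [100.0, 100.0]]
for _ in range(100):
    bg = update_background(bg, frame)
# bg now approaches the static body intensity of 100
```

In practice, each camera would maintain such a model only for the image region in which its part of the vehicle body 126 is visible.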
According to a modified embodiment of the invention, the step of identifying a part of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, comprises modeling the liquid as a dynamic texture evolving in space with a color priori. Due to the nature of the liquid, e.g. water or even mud, its surface can move in an unpredictable manner. Liquids are dynamically evolving manifolds because of their fluidic nature. For example, waves can be formed on a surface of the liquid. This makes it in general difficult to determine a correct wade level. However, when adequately modeling the liquid, a correct wade level can be determined despite a movement of the liquid.
According to a modified embodiment of the invention, the step of modeling the liquid as a dynamic texture evolving in space with a color priori comprises modeling the liquid as a pixel-wise, temporally evolving Auto Regressive Moving Average (ARMA) process. The ARMA process is used in the statistical analysis of time series. It provides a parsimonious description of a (weakly) stationary stochastic process in terms of two polynomials, one for the auto-regression and one for the moving average.
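The pixel-wise ARMA idea can be sketched as follows for a single pixel intensity. The ARMA(1,1) order, the coefficients, and the noise level are illustrative assumptions, not values prescribed by the invention; the point is that the process stays (weakly) stationary around a mean while fluctuating, like a liquid surface texture.

```python
import random

# Sketch: one pixel's intensity deviation on the liquid surface modeled as
# an ARMA(1,1) process, x_t = phi * x_{t-1} + e_t + theta * e_{t-1}.
# phi, theta, sigma are illustrative, assumed values.

def simulate_arma(n, phi=0.8, theta=0.3, sigma=1.0, seed=42):
    rng = random.Random(seed)  # fixed seed for reproducibility
    x, e_prev, series = 0.0, 0.0, []
    for _ in range(n):
        e = rng.gauss(0.0, sigma)       # white-noise innovation
        x = phi * x + e + theta * e_prev  # ARMA(1,1) recursion
        e_prev = e
        series.append(x)
    return series

samples = simulate_arma(500)
mean = sum(samples) / len(samples)  # stays near 0: weak stationarity
```

Fitting such a model per pixel (rather than simulating it) would allow pixels that behave like this dynamic texture to be classified as liquid rather than vehicle body.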
According to a modified embodiment of the invention, the method comprises a step of detecting ground water in a driving direction and a step of automatically activating the wade level detection upon detected ground water in the driving direction. With the at least one camera being directed into the driving direction, detection of ground water is enabled. The ground water can typically be detected already well in advance of the vehicle, depending on a type of camera used and/or an orientation of the camera.
However, also other environment sensors of the vehicle can be used to determine ground water in the driving direction. Accordingly, the wade level detection can already be started in advance, so that the wade level detection is already up and running, when the vehicle enters the water. A manual interaction to start wade detection can be omitted.
According to a modified embodiment of the invention, the step of detecting a wade level based on the comparison comprises performing a subtraction of the vehicle body from the at least one camera in a direction downwards, which is not covered by liquid, from the shape of the vehicle body from the at least one camera in a direction downwards. By performing the subtraction, it can be determined up to which height the water level reaches in the area of the camera. If the subtraction result is essentially zero, the current image is supposed to represent the shape of vehicle body as previously learned. Hence, no water is present around the vehicle. The higher the subtraction result, the bigger the difference between the current image and the shape of vehicle body as previously learned. Based on the subtraction, non-body parts can be easily eliminated for determining the wade level. A learning stage has to be robust to handle illumination variations and presence of reflections on the body part.
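The subtraction step can be sketched for one image column of the visible body part. The threshold, the column representation, and the assumption that liquid rises contiguously from the bottom of the image are illustrative simplifications, not the invention's exact procedure.

```python
# Sketch: estimating the wade level in one camera column by subtracting the
# current image from the learned body shape. Rows whose difference exceeds
# a threshold are assumed covered by liquid; counting contiguous covered
# rows from the bottom gives the level in image rows.

def wade_level_rows(learned_column, current_column, threshold=30):
    """Return the number of image rows (from the bottom) covered by liquid.

    learned_column / current_column: intensities from top (index 0) to
    bottom of the visible body part. threshold is an assumed value.
    """
    covered = [abs(l - c) > threshold
               for l, c in zip(learned_column, current_column)]
    # Liquid rises from the bottom: count contiguous covered rows upwards.
    count = 0
    for is_covered in reversed(covered):
        if not is_covered:
            break
        count += 1
    return count

learned = [120, 120, 120, 120, 120]       # dry body appearance
current = [121, 119, 60, 55, 50]          # lower rows occluded by water
level = wade_level_rows(learned, current)  # 3 rows submerged
```

With the calibrated camera position from step S100, such a row count could then be converted into a metric height on the vehicle body.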
According to a modified embodiment of the invention, the step of detecting a wade level based on the comparison further comprises a step of modeling the image points remaining after the subtraction by K Gaussians with weights ω_i and parameters μ (mean) and σ (variance). There are many variants of the Gaussian mixture model (GMM) available, which can be used to model the image points. Preferably, the step of modeling the image points remaining after the subtraction by K Gaussians comprises performing an adaptive mixture of Gaussians, where the K Gaussians are sorted based on the ratio ω_i/σ², which chooses the least variance and therefore the most consistent Gaussians, and the top k Gaussians are chosen from the sorted order. Further preferred, a Gaussian model with Zivkovic's adaptive mixture of Gaussians (MOG) is used. According to Zivkovic's adaptive mixture of Gaussians, background subtraction is performed with a pixel-level approach.
Recursive equations are used to constantly update the parameters and also to simultaneously select an appropriate number of components for each pixel.
According to a modified embodiment of the invention, the method comprises an additional step of tiling the image into various blocks, whereby the top k Gaussians are learnt separately for each block. Hence, for each block, a deviation can be calculated from the learnt model during training time. Training data can be artificially augmented with various noisy effects like reflections, illumination changes, etc. This method is used in particular because it includes an evolving Gaussian function, which can model the temporal variability of the liquid surface.
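The tiling step can be sketched as follows; the block size and the list-of-lists image representation are illustrative assumptions. Each returned tile would then get its own mixture model learnt during training.

```python
# Sketch: tiling a camera image into fixed-size blocks so that the top-k
# Gaussians can be learnt separately per block.

def tile_image(image, block_h, block_w):
    """Split a 2D list of pixels into (block_row, block_col, block) tiles."""
    tiles = []
    for r in range(0, len(image), block_h):
        for c in range(0, len(image[0]), block_w):
            block = [row[c:c + block_w] for row in image[r:r + block_h]]
            tiles.append((r // block_h, c // block_w, block))
    return tiles

# A 4x4 toy image split into four 2x2 blocks.
image = [[r * 10 + c for c in range(4)] for r in range(4)]
tiles = tile_image(image, 2, 2)
```

Per-block learning keeps the models local, so a deviation in the lower blocks (where liquid appears first) does not disturb the statistics of the upper, dry blocks.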
A probability of a pixel with intensity I being modeled by these K Gaussians can be calculated as

P(I) = \sum_{i=1}^{K} \omega_i \, \eta(I, \mu_i, \sigma_i^2),

where \eta is the Gaussian probability density function

\eta(I, \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(I - \mu)^2}{2\sigma^2} \right).

Parameter updates are performed as follows in the case a component i matches with I:

\omega_{i,t} = (1 - \alpha) \, \omega_{i,t-1} + \alpha,
\mu_t = (1 - \rho) \, \mu_{t-1} + \rho I,
\sigma_t^2 = (1 - \rho) \, \sigma_{t-1}^2 + \rho (I - \mu_t)^2,

where \alpha is the learning rate and

\rho = \alpha \, \eta(I \mid \mu_i, \sigma_i^2).

Parameter updates are performed as follows in the case a component i does not match with I: the mean and variance remain unchanged, and the weight decays as

\omega_{i,t} = (1 - \alpha) \, \omega_{i,t-1}.
According to a modified embodiment of the invention, the method comprises an additional step of performing a vehicle to vehicle warning upon detection of a predefined wade level. Hence, vehicles which do not support wade detection can be supplied with wade level information. In order to perform the vehicle to vehicle warning, the vehicle comprises a communication device for communicating the wade level to other vehicles, either directly or via a server, which distributes wade detection information. The communication device can be provided to communicate according to any suitable mobile communication standard including Bluetooth, WiFi, GPRS, UMTS, LTE, 5G, or others, just to name a few. Vehicle to vehicle warning enables non-intrusive wade detection or even wade level detection for the warned vehicle.
According to a modified embodiment of the invention, the method comprises an additional step of performing a dynamic online background subtraction for other vehicles. Accordingly, the at least one camera is used to determine if other vehicles are submerged. A wade level can be estimated based on a known or estimated size of the other vehicle and approximately performing a background subtraction to see how much of the vehicle body is occluded by the liquid.
According to a modified embodiment of the invention, the method comprises an additional step of performing a displaced volume detection of liquid displaced by the vehicle. Hence, an increase in e.g. the water level based on the vehicle entering or moving within the liquid can be considered. E.g. when a surface of the ground water is known, it can be easily determined, based on the dimensions of the vehicle, how much the liquid level will rise due to the presence of the vehicle and its inherent liquid displacement. For small pits and large vehicles, the volume of displaced liquid is comparatively large, and the immersion of the vehicle increases rapidly. Thus, a predictive model can be added by using a particle filter, and camera based estimation supports tracking and localization because of a wide field of view (FoV).
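A first-order estimate of the displaced-volume effect can be sketched as follows. The vehicle footprint and pit surface areas are illustrative assumptions, as is the simplification of a pit with vertical walls; the real displaced volume also grows as the level itself rises.

```python
# Sketch: how much the liquid level rises when the vehicle enters a pit of
# known free surface area (Archimedes-style, first-order estimate).

def level_rise(submerged_depth_m, vehicle_footprint_m2, pit_area_m2):
    """rise = displaced volume / free surface area (first order)."""
    free_area = pit_area_m2 - vehicle_footprint_m2
    displaced = submerged_depth_m * vehicle_footprint_m2
    return displaced / free_area

# Small pit, large vehicle: the level rises noticeably; in a large pond
# the same vehicle barely changes the level.
rise_small_pit = level_rise(0.3, 8.0, 20.0)
rise_large_pond = level_rise(0.3, 8.0, 200.0)
```

This is the effect the text describes: for small pits and large vehicles, the rise is large, which motivates extending the estimate with a predictive model such as a particle filter.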
According to a modified embodiment of the invention, the vehicle comprises an ultrasonic distance sensor, and the method comprises an additional step of fusing the detected wade level based on the comparison and a wade level detected by the ultrasonic distance sensor. The fused information on the wade level increases the reliability of the wade detection and in particular the wade level detection. The ultrasonic distance sensor can be employed as known in the Art to determine the wade level. Fusion can be performed by a heterogeneous Bayesian model, as the data corresponding to the different sensors are very different and have different ranges. These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter. Individual features disclosed in the embodiments can constitute, alone or in combination, an aspect of the present invention. Features of the different embodiments can be carried over from one embodiment to another embodiment.
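As a minimal illustration of sensor fusion, the Gaussian special case (inverse-variance weighting) is sketched below. The invention refers to a heterogeneous Bayesian model, which this does not reproduce; the variances assigned to the camera and ultrasonic estimates are illustrative assumptions.

```python
# Sketch: fusing a camera-based and an ultrasonic wade-level estimate by
# inverse-variance (precision) weighting.

def fuse_estimates(level_cam, var_cam, level_us, var_us):
    """Return the fused level and its variance (precision-weighted mean)."""
    w_cam, w_us = 1.0 / var_cam, 1.0 / var_us
    fused = (w_cam * level_cam + w_us * level_us) / (w_cam + w_us)
    fused_var = 1.0 / (w_cam + w_us)
    return fused, fused_var

# Camera says 0.40 m (less certain), ultrasonic says 0.30 m (more certain):
# the fused estimate leans towards the more certain sensor.
level, var = fuse_estimates(0.40, 0.04, 0.30, 0.01)
```

Note that the fused variance is always smaller than either input variance, which is the formal sense in which fusion "increases reliability".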
In the drawings:
Fig. 1 shows a schematic view of a vehicle known in the Art with a driving
assistance system for detecting a wade condition using an ultrasonic sensor in a lateral view,
Fig. 2 shows a schematic view of a vehicle with a driving assistance system for detecting a wade condition using multiple cameras according to a first, preferred embodiment in a top view with additional camera views of the multiple cameras,
Fig. 3 shows a detailed schematic camera view of a left wing camera in
accordance with Fig. 2, whereby the camera view is shown once with and once without wade condition, in accordance with the first embodiment,
Fig. 4 shows a schematic view of the vehicle with the driving assistance system according to the first embodiment for detecting a wade condition in a lateral view,
Fig. 5 shows a schematic view of the vehicle with the driving assistance system according to the first embodiment for detecting a wade condition in a lateral view,
Fig. 6 shows a schematic detailed camera view of a rear camera in accordance with Fig. 2, whereby the rear camera view is shown with individual blocks, in accordance with the first embodiment,
Fig. 7 shows a perspective camera view of a front camera in accordance with Fig. 2, whereby the front camera view shows a submerged vehicle, in accordance with the first embodiment, and
Fig. 8 shows a flow chart indicating a method for performing wade level detection with the vehicle and the driving assistance system according to the first embodiment.
Figure 2 shows a vehicle 110 with a driving assistance system 112 according to a first, preferred embodiment of the present invention.
The driving assistance system 112 comprises a processing unit 114 and a surround view camera system 116, 118, 120, 122. The surround view camera system 116, 118, 120, 122 comprises a front camera 116 covering a front direction of the vehicle 110, a rear camera 118 covering a rear direction of the vehicle 110, a right mirror camera 120 covering a right direction of the vehicle 110, and a left mirror camera 122 covering a left direction of the vehicle 110.
The cameras 116, 118, 120, 122 and the processing unit 114 are connected via a data bus connection 124.
Each camera 116, 118, 120, 122 has a field of view β, which can be seen e.g. in Fig. 4 or 5, and which covers part of a vehicle body 126 from the respective camera 116, 118, 120, 122 in a direction downwards, as can be seen in Fig. 2 as well as in Fig. 3.
Subsequently, a method for wade level detection in the vehicle 110 according to the first embodiment will be discussed. The wade level refers to a height of a liquid 130, typically water, around the vehicle 110. In particular, wade condition refers to the presence of the liquid 130 around the vehicle, and wade level refers to a height of a surface 132 of the liquid 130. The method will be discussed with reference to Fig. 8, which shows a flow chart of the inventive method. Apparently, some of the method steps can be performed in an order different to the order described.
The method starts with step S100, which refers to calibrating a position of the cameras 116, 118, 120, 122 with respect to the body 126 of the vehicle 110. Calibrating a position of the cameras 116, 118, 120, 122 with respect to the body 126 of the vehicle 110 refers to generating an absolute reference for determining the wade level. The step of calibrating a position of the cameras 116, 118, 120, 122 with respect to the body 126 of the vehicle 110 is performed once prior to usage of the vehicle 110. Later on, step S100 can be repeated, e.g. due to changing driving conditions.
In step S110, a shape of the vehicle body 126 in a direction downwards from the respective camera 116, 118, 120, 122 is learned. This comprises performing self-learning of the shape of the vehicle body 126 from the cameras 116, 118, 120, 122 in a direction downwards. Since the vehicle body 126 is a static object, the shape is learned as background information during a training stage. Step S110 can be performed at essentially any time and does not have to be performed continuously or every time the method is performed. The step of learning a shape of the vehicle body 126 in a direction downwards from the respective camera 116, 118, 120, 122 is performed once prior to usage of the vehicle 110 as initial training. Later on, the training can be continued.
In step S120, ground water in a driving direction is detected. Upon positive detection of ground water in the driving direction, the further wade level detection is automatically started. Detection of the ground water is performed using the camera 116, 118, 120, 122 facing in the driving direction. Most commonly, the front camera 116 is used to detect the ground water.
In step S130, the processing unit 114 starts receiving camera images from the cameras 116, 118, 120, 122, each of which covers the respective part of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards.
In step S140, a part of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards, which is not covered by liquid 130, is identified. In order to enable a detection of the surface 132, the liquid 130 is modeled as a dynamic texture evolving in space with a color priori. This comprises modeling the liquid 130 as a pixel-wise, temporally evolving Auto Regressive Moving Average (ARMA) process. The ARMA process is used in the statistical analysis of time series and provides a parsimonious description of a (weakly) stationary stochastic process in terms of two polynomials, one for the auto-regression and one for the moving average. According to step S150, the part of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards, which is not covered by liquid 130, is compared to the shape of the vehicle body 126 from the camera 116, 118, 120, 122 in a direction downwards. Hence, the vehicle body 126 is used as reference for the detection of a liquid level around the vehicle 110. The comparison is performed as a subtraction of the images, so that the remaining image content refers to the wade level. As can be seen in Fig. 3, an identical part 136 of the image provided by the camera 116, 118, 120, 122 is subtracted from the learned shape of the vehicle body 126, so that only the differing parts remain.
Based on the comparison, the wade level is detected in step S160. Accordingly, the image points remaining after the subtraction are modeled by K Gaussians with weights ω_i and parameters μ (mean) and σ (variance). In particular, an adaptive mixture of Gaussians is performed, where the K Gaussians are sorted based on the ratio ω_i/σ², which chooses the least variance and therefore the most consistent Gaussians. The top k Gaussians are chosen from the sorted order. Furthermore, a Gaussian model with Zivkovic's adaptive mixture of Gaussians (MOG) is used, in which background subtraction is performed with a pixel-level approach. Recursive equations are used to constantly update the parameters and also to simultaneously select an appropriate number of components for each pixel.
Accordingly, each image of each camera 116, 118, 120, 122 is tiled into various blocks 134, as can be seen by way of example in Fig. 6, whereby the top k Gaussians are learnt separately for each block 134. Hence, for each block 134, a deviation is calculated from the learnt model during training time. Training data is artificially augmented with various noisy effects like reflections, illumination changes, etc.
According to step S170, a displaced volume detection of liquid 130 displaced by the vehicle 110 is performed. Hence, an increase in the liquid level based on the vehicle 110 entering the liquid 130 and moving therein is determined. Based on the surface 132 of the ground water and the dimensions of the vehicle 110, it is determined how much the wade level rises due to the presence of the vehicle 110 and its displacement of the liquid 130. According to step S180, a dynamic online background subtraction for other vehicles is performed. Accordingly, the cameras 116, 118, 120, 122 are used to determine if other vehicles 138 are submerged, as can be seen with respect to Fig. 7. A wade level can be estimated based on a known or estimated size of the other vehicle 138 and approximately performing a background subtraction to see how much of the other vehicle 138 is occluded by the liquid 130.
According to step S190, a vehicle to vehicle warning is performed upon detection of the wade level, i.e. the warning is performed when the wade level is above a pre-defined wade level. The driving assistance system 112 comprises a communication device for communicating the wade level to other vehicles, either directly or via a server, which distributes wade detection information. The communication device is provided to communicate according to a suitable mobile communication standard including Bluetooth, WiFi, GPRS, UMTS, LTE, 5G, or others.
Reference signs list
10 vehicle (state of the Art)
12 wing mirror (state of the Art)
14 ultrasonic sensor (state of the Art)
16 water (state of the Art)
18 surface (state of the Art)
20 contact sensor (state of the Art)
110 vehicle
112 driving assistance system
114 processing unit
116 front camera
118 rear camera
120 right mirror camera
122 left mirror camera
124 data bus connection
126 vehicle body
130 liquid
132 surface
134 block
136 identical part

Claims

Patent claims
1. Method for wade level detection in a vehicle (110) comprising at least one camera (116, 118, 120, 122) with a field of view (β) covering at least part of a vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, comprising the steps of
calibrating a position of the at least one camera (116, 118, 120, 122) with respect to the body (126) of the vehicle (110) prior to usage of the vehicle (110),
learning a shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards,
receiving a camera image covering the part of a vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards,
identifying a part of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130),
comparing the part of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130), to the shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, and
detecting a wade level based on the comparison.
2. Method according to claim 1, characterized in that
the step of learning a shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards comprises performing self-learning of the shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards.
3. Method according to any of claims 1 or 2, characterized in that
the step of identifying a part of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130), comprises modeling the liquid (130) as a dynamic texture evolving in space with a color priori.
4. Method according to claim 3, characterized in that the step of modeling the liquid (130) as a dynamic texture evolving in space with a color priori comprises modeling the liquid (130) as a pixel-wise, temporally evolving Auto Regressive Moving Average process.
5. Method according to any of claims 1 to 4, characterized in that
the method comprises a step of detecting ground water in a driving direction and a step of automatically activating the wade level detection upon detected ground water in the driving direction.
6. Method according to any preceding claim, characterized in that
the step of detecting a wade level based on the comparison comprises performing a subtraction of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards, which is not covered by liquid (130), from the shape of the vehicle body (126) from the at least one camera (116, 118, 120, 122) in a direction downwards.
7. Method according to claim 6, characterized in that
the step of detecting a wade level based on the comparison further comprises a step of modeling image points remaining after the subtraction by K Gaussians with weights ω_i and parameters μ (mean) and σ (variance).
8. Method according to claim 7, characterized in that
the method comprises the step of tiling the image into various blocks, whereby the top k Gaussians are learnt separately for each block.
9. Method according to any preceding claim, characterized in that
the method comprises an additional step of performing a vehicle to vehicle warning upon detection of a pre-defined wade level.
10. Method according to any preceding claim, characterized in that
the method comprises an additional step of performing a dynamic online background subtraction for other vehicles.
11. Method according to any preceding claim, characterized in that the method comprises an additional step of performing a displaced volume detection of liquid (130) displaced by the vehicle (110).
12. Method according to any preceding claim, characterized in that
the vehicle comprises an ultrasonic distance sensor, and
the method comprises an additional step of fusing the detected wade level based on the comparison and a wade level detected by the ultrasonic distance sensor.
13. Driving assistance system (112) for a vehicle (110), characterized in that the driving assistance system (112) is adapted to perform the method according to any preceding claim.
14. Vehicle (110) with a driving assistance system (112) according to preceding claim 13.
PCT/EP2018/066037 2017-06-22 2018-06-18 Camera based wade assist WO2018234200A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102017113815.3A DE102017113815A1 (en) 2017-06-22 2017-06-22 Camera-based surveillance assistance procedure
DE102017113815.3 2017-06-22

Publications (1)

Publication Number Publication Date
WO2018234200A1 true WO2018234200A1 (en) 2018-12-27

Family

ID=62748945

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/066037 WO2018234200A1 (en) 2017-06-22 2018-06-18 Camera based wade assist

Country Status (2)

Country Link
DE (1) DE102017113815A1 (en)
WO (1) WO2018234200A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115482652A (en) * 2022-08-23 2022-12-16 东风柳州汽车有限公司 Vehicle water soaking early warning method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140293056A1 (en) * 2011-10-27 2014-10-02 Jaguar Land Rover Limited Wading apparatus and method
GB2518850A (en) * 2013-10-01 2015-04-08 Jaguar Land Rover Ltd Vehicle having wade sensing apparatus and system
WO2015071170A1 (en) 2013-11-12 2015-05-21 Jaguar Land Rover Limited Vehicle having wade sensing display and system therefor

Also Published As

Publication number Publication date
DE102017113815A1 (en) 2018-12-27


Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18734140; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 18734140; Country of ref document: EP; Kind code of ref document: A1)