GB2615568A - Vehicle camera system with horizon tracker - Google Patents

Vehicle camera system with horizon tracker

Info

Publication number
GB2615568A
Authority
GB
United Kingdom
Prior art keywords
camera
vehicle
image
road
horizon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2201801.4A
Other versions
GB202201801D0 (en)
Inventor
Leppin Heiko
Heinrich Stefan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Continental Autonomous Mobility Germany GmbH
Original Assignee
Continental Autonomous Mobility Germany GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Continental Autonomous Mobility Germany GmbH filed Critical Continental Autonomous Mobility Germany GmbH
Priority to GB2201801.4A priority Critical patent/GB2615568A/en
Publication of GB202201801D0 publication Critical patent/GB202201801D0/en
Priority to DE102023201025.9A priority patent/DE102023201025A1/en
Publication of GB2615568A publication Critical patent/GB2615568A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/26 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the rear of the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/18 Propelling the vehicle
    • B60W30/18009 Propelling the vehicle related to particular drive situations
    • B60W30/18163 Lane change; Overtaking manoeuvres
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/167 Driving aids for lane monitoring, lane changing, e.g. blind spot detection
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/804 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for lane monitoring
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403 Image sensing, e.g. optical camera
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2720/00 Output or target parameters relating to overall vehicle dynamics
    • B60W2720/10 Longitudinal speed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a vehicle camera system 200 for determining a geometric parameter in an image. The vehicle camera system comprises a first camera 202, a second camera 204 and a processing unit 208. The first camera is configured to capture a first image, wherein the first image is captured in a first orientation of the road with respect to the direction of travel of a vehicle 100. The second camera is configured to capture a second image, wherein the second image is captured in a second orientation of the road with respect to the direction of travel of the vehicle. The processing unit is configured to determine a characteristic of the road in the second image, to apply the determined characteristic to the first image, and to determine a geometric parameter in the first image using the detected characteristic of the road 102. The first image may contain a further vehicle 108; the characteristic may be a horizon and the geometric parameter a height above the horizon. The system may be used to determine the speed and/or distance of the further vehicle behind the system based upon the horizon characteristic in front of the system.

Description

Vehicle camera system with horizon tracker
Technical Field
The invention relates to a vehicle camera system for determining a geometric parameter of a road, a driving assistance system arranged to detect the speed and/or distance of rearward vehicles, a method for determining a geometric parameter of a road in a first direction of travel of a vehicle, a use of the camera system for autonomous lane merging, and a vehicle comprising a camera system.
Background
Autonomous vehicles have to merge lanes occasionally, for example, to enter or leave a highway or at a lane ending. To do so, it is necessary to detect other traffic participants approaching from the rear side. Due to large differential speeds, for example at a highway entrance, they have to be detected as early as 250-500 m (300-400 m) away. This detection can be performed with a lidar, a radar, and/or a (stereo) camera. To enable vehicles to merge into a faster or slower lane, it is therefore necessary to estimate the distance and speed of vehicles in the rear on this lane. For this purpose, a rear camera may be used. The distance of the vehicles may then be determined based on a geometrical frame, i.e. parameters of the static structure of the road. However, such parameters of the road or the lanes are difficult or impossible to detect under certain conditions, such as darkness.
Summary of the Invention
Therefore, one object of the invention can be seen in providing an improved camera system.
The problem is solved by the subject-matter of the independent claims. Embodiments are provided by the dependent claims, the following description and the accompanying figures.
The described embodiments similarly pertain to the vehicle camera system, the driving assistance system arranged to detect the speed and/or distance of rearward vehicles, the method for determining a geometric parameter of a road in a first direction of travel of a vehicle, the use of the camera system for autonomous lane merging, and the vehicle comprising the camera system. Synergetic effects may arise from different combinations of the embodiments although they might not be described in detail.
Furthermore, it shall be noted that all embodiments of the present invention concerning a method may be carried out with the steps in the order described; nevertheless, this does not have to be the only or essential order of the steps. The methods presented herein can be carried out with another order of the disclosed steps without departing from the respective method embodiment, unless explicitly stated to the contrary hereinafter.
A distinction is made in this application between the term "roadway" and the term "road". The term "roadway" is understood to mean a lane of the road, usually having markings. A road may have more than one such lane. Furthermore, the road may have additional markings, e.g. reflectors at the roadside, as well as guard rails or signs, which are not necessarily to be assigned to a lane. A "roadway" or "lane" is thus a part of a "road".
According to a first aspect, a vehicle camera system for determining a geometric parameter in an image is provided. The vehicle camera system comprises a first camera, a second camera and a processing unit. The first camera is configured to capture a first image, wherein the first image is captured in a first orientation of the road with respect to the direction of travel of a vehicle. The second camera is configured to capture a second image, wherein the second image is captured in a second orientation of the road with respect to the direction of travel of the vehicle. The processing unit is configured to determine a characteristic of the road in the second image and to apply the determined characteristic to the first image, and to determine a geometric parameter in the first image using the detected characteristic of the road.
The processing unit may be configured to use the geometric parameter for determining speed information and/or distance information of a further vehicle. It may further be configured to output and transmit the speed information and/or distance information to a control unit of a driving assistance system.
In this disclosure, the orientation is always seen with respect to the direction of travel. That is, for example, if the camera is a front camera, the orientation is in the direction of travel. If the camera is a rear camera, the orientation is opposite to the direction of travel. The field of view of the cameras is considered to be lower than 180°, such that the rear camera and the front camera have no or nearly no common field of view. Therefore, the orientation in the direction of travel is herein also referred to as "front", and the corresponding camera is referred to as a "front camera", regardless of its actual installation location at or on the vehicle. That is, the reference point is the camera itself. Accordingly, the orientation opposite to the direction of travel is herein also referred to as "rear", and the corresponding camera is referred to as a "rear camera", regardless of its actual installation location at or on the vehicle.
The first and the second camera may also be designed as a single camera, which may also be referred to as a "360° camera". Such cameras may comprise a first sensor in the first orientation and a second sensor in the second orientation, mechanically connected to each other. They may therefore be integrated in one housing, possibly implemented on the same PCB, and they may have one processing module for both sensors. In this case, the first sensor corresponds to the "first camera" and the second sensor corresponds to the "second camera" according to the nomenclature in this disclosure.
By "geometric parameter" of the road is meant a parameter that is related to the road as whole, or a part of the road such as the lane, or to a geometric relationship of the road to the vehicle or a further vehicle. The geometry is a geometry as it appears in the image, that is, the geometry from view of a camera. The processing unit may be configured to perform further calculations to determine, for example, a distance or a velocity. Since a digital camera is involved, the recorded pixels are evaluated for this purpose, whereby the quality of the evaluation depends, among other things, on the resolution of the camera.
The first camera may be set up to capture the image also for detecting the characteristic of the road. However, the image may not be able to provide this information, for example, in poor visibility conditions.
The "characteristic" is a common characteristic in both directions of the road, that is, to the rear and to the front of the vehicle. Due to the different viewing angle and depending on the viewing conditions, the feature may look the same on the two images, look slightly different, or -while present in principle -may not be visible on one of the images. The feature is, for example, the width of the road or the horizon of the road. The geometric parameter to be determined would be, for example, the "height" of an object, e.g. the further vehicle, in the image in relation to the horizon. Thus, features such as geometry, objects, horizon, etc., of the second image are transferred to the first image. Static objects at the roadside or features that are changing only slowly, such as the horizon, are particularly suitable for this purpose. In this way, objects such as vehicles can be geometrically related to these features, which could not be captured in the first image due to poor visibility conditions.
According to an embodiment, the first image contains a further vehicle, the characteristic is represented as at least one point and the geometric parameter is a distance of the further vehicle to the at least one point. The at least one point may be expressed, for example, as coordinates, as a polynomial or using other mathematical means. Of course, the coordinate systems of the two images, which may depend, among other things, on the mounting angle, are related to each other in a suitable way.
The further vehicle is a vehicle following the ego-vehicle and is seen by, e.g., the rear camera if the first camera is the rear camera. The "further" vehicle is also referred to as the "approaching" vehicle in some examples, although the invention is not limited to approaching vehicles.
The distance is a distance, such as a height, in the image. As an example, the image has a 2D coordinate system with a horizontal x-axis and a vertical y-axis. Then, in case the distance is a height, this height is the smallest difference between the y-coordinates of the further vehicle and the at least one point.
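For illustration, a minimal Python sketch of this computation, assuming the characteristic is given as polynomial coefficients of a horizon line y_h(x) and the further vehicle as a bounding box (all names and values are illustrative assumptions, not part of the patent):

    # Height of a detected vehicle above a horizon represented as a
    # polynomial y_h(x) in the 2D image coordinate system described above.
    import numpy as np

    def height_above_horizon(vehicle_box, horizon_poly):
        """vehicle_box: (x_min, y_min, x_max, y_max) in pixels, y-axis down.
        horizon_poly: coefficients of y_h(x), highest power first."""
        x_min, y_min, x_max, y_max = vehicle_box
        xs = np.linspace(x_min, x_max, num=16)    # sample columns across the vehicle
        y_horizon = np.polyval(horizon_poly, xs)  # horizon row at each column
        # smallest difference between the vehicle's y-coordinates and the
        # horizon; for a horizon above the vehicle this is at the top edge y_min
        return float(np.min(np.abs(y_min - y_horizon)))

    # Usage: a horizontal horizon at row 240 is the 0th-order polynomial [240.0]
    print(height_above_horizon((300, 260, 380, 320), np.array([240.0])))  # 20.0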
Alternatively, the second image including the characteristic is mapped onto the first image, and the characteristic is determined directly in the first image, which then comprises the characteristic.
According to an embodiment, the characteristic is a horizon and the geometric parameter is a height above the horizon.
In camera systems, one out of many parameters used to estimate the distance between an object such as a vehicle on the road and the ego-vehicle is the height of the object above the horizon in the image. The closer a vehicle appears to the horizon in the image, the greater is its distance from the camera. In daylight, steady structures along the roadside can be used to estimate the horizon, whereas at nighttime, in the reverse direction, such elements may not be visible in the image due to limited lighting and are thus not available for determining the height above the horizon. The combination as proposed in this embodiment allows a much better estimation of a rearward object distance based on the height above the horizon. For example, the forward-facing automotive camera detects the horizon at nighttime by using the retroreflectors and road features illuminated by the driving lights. Rearward looking, this feature is unavailable. Therefore, the first camera is used for the intended function of detecting objects and estimating their height above the horizon. The estimated height above the horizon can then be used to determine, e.g., the distance and, over a time interval, the velocity of vehicles on the rear side in a known manner. The required horizon is, therefore, extracted from the second camera, which may be the camera facing forward. In embodiments, this second camera may only be used for this purpose, i.e., to perform the horizon detection by detecting and tracking the illuminated retroreflectors along the roadside in case the second camera is the front camera.
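Under a flat-road pinhole model (an assumption for illustration; the patent itself only states the qualitative relation), the height above the horizon can be turned into a distance: with focal length $f$ in pixels, camera mounting height $h_{\text{cam}}$ above the road, horizon row $y_h$ and image row $y$ of the contact point of the further vehicle,

$$Z \approx \frac{f \, h_{\text{cam}}}{y - y_h}, \qquad v \approx \frac{Z(t_1) - Z(t_2)}{t_2 - t_1},$$

so the smaller the offset $y - y_h$, i.e. the closer the vehicle appears to the horizon, the larger the distance $Z$, and two such estimates over a time interval yield the velocity $v$.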
The horizon may be determined, for example, as the vanishing point of the reflectors along the road. The vanishing point may be detected in the image or constructed. For example, a first point may be determined or selected in the area where the distance between detected neighboring reflectors on the right roadside is lower than a threshold. The same is performed for the left roadside such that a second point is determined; the line through these points is used as the characteristic, i.e. the horizon, or a midpoint is calculated as the vanishing point. In the latter case, the horizon is simply assumed to be horizontal. There may be other methods to define or determine the horizon.
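A minimal sketch of such a vanishing-point construction, assuming reflector detections are already available as pixel coordinates (the function name and data below are illustrative assumptions):

    # Fit each roadside as a line through its detected reflectors and
    # intersect the two lines to obtain the vanishing point.
    import numpy as np

    def horizon_from_reflectors(left_pts, right_pts):
        lx, ly = np.asarray(left_pts, dtype=float).T
        rx, ry = np.asarray(right_pts, dtype=float).T
        ml, cl = np.polyfit(lx, ly, deg=1)   # left roadside: y = ml*x + cl
        mr, cr = np.polyfit(rx, ry, deg=1)   # right roadside: y = mr*x + cr
        x_vp = (cr - cl) / (ml - mr)         # intersection of the two lines
        y_vp = ml * x_vp + cl
        return x_vp, y_vp                    # y_vp is the horizon row if the
                                             # horizon is assumed horizontal

    left = [(100, 600), (160, 540), (220, 480), (300, 400)]
    right = [(1180, 600), (1120, 540), (1060, 480), (980, 400)]
    print(horizon_from_reflectors(left, right))   # approx. (640.0, 60.0)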
The horizon can be detected at short time intervals, so that in the event of pitching movements, for example of the cab of a truck, the horizon is always updated and the pitching movement can thus be compensated for when determining the geometric parameter.
According to an embodiment, the first camera is further configured to detect the characteristic of the road in the first orientation of the road with respect to the direction of travel, and the processing unit is further arranged to use the characteristic detected by the first camera in a supporting manner.
For example, visibility in one direction may be so poor, due to splashing water from vehicles ahead, due to a low sun when the road is wet, or due to poor lighting conditions at night, that features of a road or its lanes in one of the directions are difficult or impossible to detect. In the first case, i.e., when the feature captured by the first camera, e.g., the rear camera, is usable, the features captured by the front camera and the rear camera can be combined.
In the second case, where the feature captured by the first camera, for example the rear camera, is not usable (in one particular embodiment, but not limited to, at night), only the feature captured by the second camera, in the example the front camera, is used to determine the geometric parameter.
According to an embodiment, the first direction is the direction against the direction of travel and the second direction is the direction in the direction of travel.
This takes into account the above-described case in which the permanent structure of the road cannot be determined by the rear camera at night.
However, it may also happen that the conditions are reversed, and, for example, the view to the front is disturbed. Thus, the system may also be applied in cases where the first direction is the direction in direction of travel and the second direction is the direction against the direction of travel.
According to an embodiment, the first camera and/or the second camera are stereo cameras.
This means that the first camera may consist of two camera units. For example, the camera units are arranged on the left and right of the vehicle. For example, the camera units form the outer rearview mirrors or are mounted on the outer rearview mirrors. In other words, the two camera units logically form one stereo camera, and the terms "front camera" and "rear camera" each refer to such stereo cameras. A stereo camera allows for creating a disparity map and calculating a 3D point cloud from the disparity map, e.g. for detecting the height above the horizon. If only the front camera is used to determine the height above the horizon, it is not necessary that the rear camera is a stereo camera.
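A short sketch of this disparity-to-depth step, under the standard rectified-stereo relation Z = f·B/d (the baseline value and names are assumptions for illustration):

    # Convert a disparity map from a rectified stereo pair into a 3D point cloud.
    import numpy as np

    def point_cloud_from_disparity(disparity, f_px, baseline_m, cx, cy):
        """disparity: HxW array in pixels; f_px: focal length in pixels;
        baseline_m: distance between the two camera units in metres;
        (cx, cy): principal point. Returns an Nx3 array of (X, Y, Z) points."""
        h, w = disparity.shape
        v, u = np.mgrid[0:h, 0:w]
        valid = disparity > 0.5                    # skip invalid/far pixels
        z = f_px * baseline_m / disparity[valid]   # depth from disparity
        x = (u[valid] - cx) * z / f_px
        y = (v[valid] - cy) * z / f_px
        return np.column_stack((x, y, z))

    # Toy 2x2 disparity map; mirror-mounted camera units give a wide baseline:
    d = np.array([[8.0, 4.0], [0.0, 2.0]])
    print(point_cloud_from_disparity(d, f_px=1000.0, baseline_m=2.0, cx=1.0, cy=1.0))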
According to an embodiment, the processing unit comprises a neural network configured to create a disparity map, wherein the disparity map is used for determining the geometric parameter.
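As a rough illustration only (the patent does not specify an architecture, so every layer choice below is an assumption), such a network maps a rectified stereo pair to a dense disparity map:

    # Toy convolutional network regressing one disparity value per pixel
    # from two stacked RGB images; untrained, for shape illustration only.
    import torch
    import torch.nn as nn

    class TinyDisparityNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, 32, kernel_size=3, padding=1),  # 2 RGB images stacked
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 1, kernel_size=3, padding=1),  # 1 disparity per pixel
                nn.ReLU(inplace=True),                       # disparities are >= 0
            )

        def forward(self, left, right):
            return self.net(torch.cat([left, right], dim=1))

    left, right = torch.rand(1, 3, 128, 256), torch.rand(1, 3, 128, 256)
    print(TinyDisparityNet()(left, right).shape)  # torch.Size([1, 1, 128, 256])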
For example, a convolutional neural network, CNN, may be used to perform the forward-facing disparity map calculation.

According to an embodiment, the second camera comprises a horizon tracker configured to track a detected horizon, wherein the tracked horizon is used to determine the height above the horizon.
The horizon tracker can be used once the horizon has been successfully detected by the second camera. Tracking the horizon can also be less computationally intensive and more reliable in poor visibility conditions. The tracked horizon is then used to determine the height above the horizon. When available, the tracked horizon may be the only source of horizon information, or it may be combined with other, e.g. instantaneous, horizon determination methods, and/or with information from the first camera. The horizon tracker may be implemented in the camera. A horizon tracker may alternatively be implemented in the processing unit.
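A very simple tracker of this kind could, for example, exponentially smooth the detected horizon parameters over frames, which also damps pitching motion between detections (a sketch under that assumption; the patent does not prescribe a particular filter):

    # Exponential smoothing of the horizon row and slope over frames.
    class HorizonTracker:
        def __init__(self, alpha=0.2):
            self.alpha = alpha    # blending weight for new detections
            self.row = None       # smoothed horizon row (pixels)
            self.slope = None     # smoothed horizon slope

        def update(self, row, slope=0.0):
            if self.row is None:
                self.row, self.slope = row, slope
            else:
                self.row = (1 - self.alpha) * self.row + self.alpha * row
                self.slope = (1 - self.alpha) * self.slope + self.alpha * slope
            return self.row, self.slope

    tracker = HorizonTracker()
    for detected_row in (240.0, 244.0, 238.0):  # per-frame detections with jitter
        print(tracker.update(detected_row))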
According to an embodiment, depths represented on the disparity map are based on at least the characteristic captured by the second camera.
If the distance to a vehicle or to a stationary object has already been determined by a distance-measuring sensor, for example a radar or laser sensor, or by using ego-vehicle motion for determining distances, this can be used to "calibrate" the depth measurements towards all vehicles and objects. The determined horizon provides the reference on which the distance of another vehicle is determined, and the depths or distances of further objects are related to these already determined distances.
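Such a calibration can be as simple as a single scale factor anchored to the sensor-measured distance (an illustrative sketch, not the patent's method):

    # Scale camera-based depth estimates so that one object whose distance
    # is known from a radar or laser sensor acts as the reference.
    def calibrate_depths(estimated_depths, reference_index, reference_distance_m):
        scale = reference_distance_m / estimated_depths[reference_index]
        return [d * scale for d in estimated_depths]

    # Object 0 is known from radar to be 50 m away:
    print(calibrate_depths([40.0, 80.0, 120.0], reference_index=0,
                           reference_distance_m=50.0))  # [50.0, 100.0, 150.0]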
According to an embodiment, the first camera is a first stereo camera and the first camera is combined with a third camera, which is a stereo camera in the same orientation, to form a second stereo pair for creating the disparity map.
For example, the rear-facing stereo camera may be combined with a further rear-facing stereo camera to form a second stereo pair for creating the rear-facing disparity map. The camera units forming the third stereo camera may also be mounted on each side of the vehicle, with or without a forward horizon sensor.
The processing unit may be completely or partly integrated into one or both cameras. It may alternatively be located outside the cameras or, for example, in a computing unit of a driving assistance system.
According to a further aspect, a driving assistance system for a vehicle is provided. The driving assistance system comprises a vehicle camera system as described herein with a first and a second camera and a processing unit.
According to an embodiment, the driving assistance system further comprises a control unit. The control unit is configured to receive speed information and/or distance information of a further vehicle from the camera system and to use the received speed information and/or distance information for assisting a driver and/or an autonomous driving system to adapt the speed of the vehicle when changing lanes of a road.
Assisting a driver may further include, for example, displaying information with respect to the speed, distance, or differential speed.

According to a further aspect, a method for determining a geometric parameter of a road in a first direction of travel of a vehicle is provided.
The method comprises the following steps: In a first step, a first image is captured by a first camera, wherein the first image is captured in a first orientation of the road with respect to the direction of travel of a vehicle. In a second step, a second image is captured by a second camera, wherein the second image is captured in a second orientation of the road with respect to the direction of travel of the vehicle. In a third step, performed by a processing unit, a characteristic of the road in the second image is determined. The determined characteristic is applied to the first image, and a geometric parameter in the first image is determined using the detected characteristic of the road.
The first and the second step may be performed in any order, preferably simultaneously. The first camera, the second camera and the processing unit may be part of a vehicle camera system as described herein.
According to a further aspect, a use of the vehicle camera system as described herein for autonomous lane merging is provided.
According to a further aspect, a vehicle comprising a vehicle camera system as described herein is provided.
These and other features, aspects and advantages of the present invention will become better understood with reference to the accompanying figures and the following description.
Short Description of the Figures
Fig. 1 shows a diagram of a lane merging scenario.
Fig. 2 shows a diagram of the vehicle camera system.
Fig. 3 shows a diagram of a vehicle with a camera system according to an embodiment.
Fig. 4 shows a diagram of a truck with cameras according to an embodiment.
Fig. 5 shows a diagram of a typical rear-viewing image at night.
Fig. 6 shows a diagram of a sample forward facing image.
Fig. 7 shows a method for determining a geometric parameter of a road according to an embodiment.
Fig. 8 shows a diagram of a vehicle comprising a driving assistance system.
Detailed Description of Embodiments
Corresponding parts are provided with the same reference symbols in all figures.
Fig. 1 shows an example scenario in which the driving assistance system or the camera system 200 can be advantageously applied. A vehicle 100 comes from an access road 106 of a highway 102, which merges into a merge lane 107, and now has to merge from the merge lane 107 into the lane 104. In doing so, the vehicle 100 has to estimate the distance to and speed of the approaching vehicle 108 behind it so that it can match that speed and move into the lane 104 either ahead of or behind the vehicle 108. However, it is necessary to know the horizon of the road for this estimation. While this is possible without problems during daylight, it is usually not possible at night due to the poor lighting conditions.
Fig. 2 shows a driving assistance system 200, with a first camera 202, which is a rear camera 202, and a second camera 204, which is the front camera 204 of the vehicle 100 of Fig. 1. The direction of travel, i.e. the forward direction, is indicated by arrow 210. The driving assistance system 200 further comprises a processing unit 208. The dashed line between the cameras indicates the logical relationship of the two cameras to each other.
Physically, they are both connected to the processing unit 208, which accomplishes the linkage. The processing unit 208 may be installed or implemented in one of the cameras 202, 204 or in a separate device. Both cameras 202, 204, which may be camera sensors 202, 204, may be integrated into a single camera device.
In order to estimate the distance from the vehicle 100 to the vehicle 108 and the velocity of vehicle 108, the height of the vehicle 108 above the horizon in the first captured image shall be determined. For example, the closer a vehicle appears to the horizon in the image, the greater is its distance from the camera. At nighttime, only the lights of vehicle 108 are visible; in the rear view from vehicle 100, a horizon cannot be detected by the rear camera 202. On the other hand, the front camera 204 can detect light reflected from reflectors at the edge of the road and from road markings that are illuminated by the front headlights of vehicle 100. Based on these illuminations and reflections, the processing unit 208 detects the horizon to the front side and transfers the detected horizon to the image captured by the rear camera. In this way, the required horizon is available to determine the height of vehicle 108 above the horizon.
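An end-to-end sketch of this nighttime case, under the same flat-road pinhole assumption as above (focal length, camera height and pixel rows are illustrative; the taillight row stands in for the contact point):

    # The front camera supplies the horizon row; the rear camera supplies the
    # image row of the taillights 504 of vehicle 108; two frames give a speed.
    def rear_distance_m(light_row, horizon_row, f_px=1000.0, cam_height_m=1.5):
        return f_px * cam_height_m / (light_row - horizon_row)

    horizon_row = 242.0                  # from the front camera / horizon tracker
    z1 = rear_distance_m(light_row=252.0, horizon_row=horizon_row)  # frame at t1
    z2 = rear_distance_m(light_row=253.0, horizon_row=horizon_row)  # t2 = t1 + 0.5 s
    closing_speed = (z1 - z2) / 0.5      # m/s; positive: vehicle 108 approaches
    print(z1, z2, closing_speed)         # 150.0, ~136.4, ~27.3 m/s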
Fig. 3, in combination with Figs. 1 and 2, shows the vehicle 100 equipped with the rear camera 202 and the front camera 204 on both sides of the vehicle 100. The vehicle 100 may be, for example, a car, a motorcycle, or a truck. The cameras 202, 204 are respectively mounted on the exterior mirrors of the vehicle 100. The cameras 202, 204 may also be cameras that replace the exterior mirrors of the vehicle 100. As used herein, "rear camera 202" refers to the camera unit on the left mirror together with the camera unit on the right mirror of the vehicle 100. The two camera units together form a stereo camera 202 through which a three-dimensional disparity map can be created. The same applies to the front camera 204. The front camera 204 may have a lower resolution than the rear camera 202, and may be configured especially for horizon detection and horizon tracking. Therefore, this camera can also be referred to as a "horizon tracker" or "horizon sensor".
Fig. 4 shows a truck 100 as an example of a vehicle 100 in which the camera system is equipped with a horizon tracker 204 as the front camera 204, while the rear stereo camera 202 is not equipped with one. The rear stereo camera 202 shown in Fig. 4 is optionally combined with a further stereo camera 206 to enable vertical stereo for rear traffic monitoring. For this purpose, the front camera 204 has, for example, an equal or lower resolution than the rear cameras 202, 206. The two forward-facing camera units 204 are then used, for example, in addition to the rear camera 202, to form a forward-facing stereo camera pair. The front-facing camera 204 is further used to create a disparity map and to calculate a 3D point cloud from the disparity map. For example, a CNN (convolutional neural network) can be used to compute the disparity map for the forward direction.
Fig. 5 shows a typical night rear-viewing image with approaching vehicles 108 and characteristically missing static structures of the road. Only vehicle headlights 504 and their reflections 506 on the road may be visible.
Fig. 6, in combination with Fig. 5, shows a sample forward-facing image with the headlights of vehicle 100 illuminating the road. Both road markings 604 and roadside reflectors 606 are well visible for horizon estimation. The horizon is estimated in such forward images and is then used for the rear-facing camera 202 to measure the height of the lights 504 of the approaching vehicle 108 on the rear side with respect to the horizon.
Fig. 7 shows a method 700 for determining a geometric parameter of a road in a first direction of travel of a vehicle 100. In a first step 701, a first image, e.g. including another vehicle 108 that is approaching from behind, is captured by a first camera 202 in a first direction of the road with respect to the direction of travel. This direction is, for example, a direction to the rear side of the vehicle 100. In a second step 702, a second image in a second direction of the road with respect to the direction of travel is captured by the second camera 204. The second image contains, for example, reflectors at the left and right side of the road reflecting the light from the vehicle 100 and/or road features. In a third step 703, a characteristic of the road in the second image is determined. For example, a horizon is determined based on the reflectors and road features in the second image. The determined characteristic is applied to the first image, and a geometric parameter in the first image, for example a height of a vehicle above the horizon, is determined using the detected characteristic of the road. The determined geometric parameter can then be used to determine a distance between the vehicles and a speed of the approaching vehicle. The method can be used by a driving assistance system to adapt the speed where lanes are merging.
Fig. 8 shows a diagram of the vehicle 100 comprising a driving assistance system 800. The vehicle camera system 200 may be part of the driving assistance system 800, and the method 700 can be used by the driving assistance system 800 to adapt the speed where lanes are merging.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from the study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items or steps recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope of the claims.
List of Reference signs
100 vehicle
102 highway
104 lane
106 access road
107 merge lane
108 (approaching/further) vehicle on rear side
200 vehicle camera system
202 first camera / rear camera
204 second camera / front camera
206 third stereo camera
208 processing unit
210 arrow defining forward direction
504 headlights of (approaching) vehicle on rear side
506 reflections of headlights
604 road markings
606 roadside reflectors
700 method
701 first step of method
702 second step of method
703 third step of method
800 driving assistance system

Claims (15)

  1. A vehicle camera system (200) for determining a geometric parameter in an image comprising: a first camera (202) configured to capture a first image, wherein the first image is captured in a first orientation of the road with respect to the direction of travel of a vehicle (100); a second camera (204) configured to capture a second image, wherein the second image is captured in a second orientation of the road with respect to the direction of travel of the vehicle (100); and a processing unit (208) configured to determine a characteristic of the road in the second image and to apply the determined characteristic to the first image, and to determine a geometric parameter in the first image using the detected characteristic of the road (102).
  2. Vehicle camera system (200) according to claim 1, wherein the first image contains a further vehicle (108), the characteristic is represented as at least one point and the geometric parameter is a distance of the further vehicle (108) to the at least one point.
  3. The vehicle camera system (200) according to claim 1 or 2, wherein the characteristic is a horizon and the geometric parameter is a height above the horizon.
  4. The vehicle camera system (200) according to claim 1, wherein the first camera (202) is further configured to detect the characteristic of the road in the first orientation of the road with respect to the direction of travel; and the processing unit (208) is further arranged to use the characteristic detected by the first camera (202) in a supporting manner.
  5. The vehicle camera system (200) according to any one of the previous claims, wherein the first direction is the direction against the direction of travel and the second direction is the direction in the direction of travel.
  6. The vehicle camera system (200) according to any one of the previous claims, wherein the first camera (202) and/or the second camera (204) are stereo cameras.
  7. The vehicle camera system (200) according to any one of the previous claims, wherein the processing unit (208) comprises a neural network for creating a disparity map, wherein the disparity map is used for determining the geometric parameter.
  8. The vehicle camera system (200) according to any one of claims 3 to 6, wherein the second camera (204) comprises a horizon tracker configured to track a detected horizon, and wherein the tracked horizon is used to determine the height above the horizon.
  9. The vehicle camera system (200) according to any one of the previous claims, wherein the depths represented on the disparity map are based on at least the characteristic captured by the second camera (204).
  10. The vehicle camera system (200) according to any one of claims 6 to 9, wherein the first camera (202) is a first stereo camera and the first camera (202) is combined with a third camera (206), which is a stereo camera in the same orientation, to form a second stereo pair for creating the disparity map.
  11. A driving assistance system (800) for a vehicle (100), comprising a vehicle camera system (200) according to any one of the previous claims.
  12. The driving assistance system (800) according to claim 11, further comprising a control unit, wherein the control unit is configured to receive the speed information and/or distance information of a further vehicle from the camera system (200) and to use the received speed and/or distance for assisting a driver and/or an autonomous driving system to adapt the speed of the vehicle (100) when changing lanes of a road.
  13. Method (700) for determining a geometric parameter of a road in a first direction of travel of a vehicle (100), comprising the steps: capturing (701), by a first camera (202), a first image, wherein the first image is captured in a first orientation of the road with respect to the direction of travel of a vehicle (100); capturing (702), by a second camera (204), a second image, wherein the second image is captured in a second orientation of the road with respect to the direction of travel of the vehicle (100); and determining (703), by a processing unit (208), a characteristic of the road in the second image, applying the determined characteristic to the first image, and determining a geometric parameter in the first image using the detected characteristic of the road.
  14. Use of the vehicle camera system (200) according to any one of claims 1 to 10 for autonomous lane merging.
  15. A vehicle (100) comprising a vehicle camera system (200) according to any one of claims 1 to 10.
GB2201801.4A 2022-02-11 2022-02-11 Vehicle camera system with horizon tracker Pending GB2615568A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2201801.4A GB2615568A (en) 2022-02-11 2022-02-11 Vehicle camera system with horizon tracker
DE102023201025.9A DE102023201025A1 (en) 2022-02-11 2023-02-08 Vehicle camera system with horizon tracking facility

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2201801.4A GB2615568A (en) 2022-02-11 2022-02-11 Vehicle camera system with horizon tracker

Publications (2)

Publication Number Publication Date
GB202201801D0 GB202201801D0 (en) 2022-03-30
GB2615568A true GB2615568A (en) 2023-08-16

Family

ID=80820915

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2201801.4A Pending GB2615568A (en) 2022-02-11 2022-02-11 Vehicle camera system with horizon tracker

Country Status (2)

Country Link
DE (1) DE102023201025A1 (en)
GB (1) GB2615568A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100201814A1 (en) * 2009-02-06 2010-08-12 Gm Global Technology Operations, Inc. Camera auto-calibration by horizon estimation
EP3745714A1 (en) * 2018-01-25 2020-12-02 Clarion Co., Ltd. Display control device and display system

Also Published As

Publication number Publication date
GB202201801D0 (en) 2022-03-30
DE102023201025A1 (en) 2023-08-17


Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: CONTINENTAL AUTONOMOUS MOBILITY GERMANY GMBH

Free format text: FORMER OWNER: CONTINENTAL AUTOMOTIVE GMBH