EP4100871A1 - Vehicle lighting control using image processing of changes in ambient light intensity - Google Patents

Vehicle lighting control using image processing of changes in ambient light intensity

Info

Publication number
EP4100871A1
Authority
EP
European Patent Office
Prior art keywords
image
signal
vehicle
noise ratio
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21703141.8A
Other languages
German (de)
French (fr)
Inventor
Michael FLATMAN
Samuel A. ROBERTS
Thomas H. ABEL
Raymond A. WISE
Peter J. PETRANY
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perkins Engines Co Ltd
Original Assignee
Perkins Engines Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perkins Engines Co Ltd filed Critical Perkins Engines Co Ltd
Publication of EP4100871A1


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60Q ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00 Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/02 … the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments
    • B60Q1/04 … the devices being headlights
    • B60Q1/14 … headlights having dimming means
    • B60Q1/1415 Dimming circuits
    • B60Q1/1423 Automatic dimming circuits, i.e. switching between high beam and low beam due to change of ambient light or light level in road traffic
    • B60Q1/26 … the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic
    • B60Q1/30 … for indicating rear of vehicle, e.g. by means of reflecting surfaces
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems
    • B60R1/22 … for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 … with a predetermined field of view
    • B60R1/26 … with a predetermined field of view to the rear of the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/60 … relating to illumination properties, e.g. using a reflectance or lighting model
    • G06V20/00 Scenes; scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 … exterior to a vehicle by using sensors mounted on the vehicle
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56 … provided with illuminating means
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H04N23/74 … by influencing the scene brightness using illuminating means
    • B60Q2300/00 Indexing codes for automatically adjustable headlamps or automatically dimmable headlamps
    • B60Q2300/30 Indexing codes relating to the vehicle environment
    • B60Q2300/31 Atmospheric conditions
    • B60Q2300/314 Ambient light
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10 … characterised by the type of camera system used
    • B60R2300/20 … characterised by the type of display used
    • B60R2300/80 … characterised by the intended use of the viewing arrangement

Definitions

  • This disclosure relates to systems for processing image data to detect changes in ambient light intensity.
  • United States Patent No. 10,311,599B2 describes a mining truck with lights and cameras for imaging an illuminated area. The image is analysed to determine from the proportion of dark pixels whether the lights are working properly. The threshold light intensity for identifying a dark pixel may be selected to distinguish dark objects from objects that are not illuminated.
  • KR101087741 teaches controlling vehicle headlights responsive to an image from a forward-looking camera.
  • the brightness and exposure gain of different areas of the image are compared with reference values to distinguish day from night, and with one another to detect the presence of a tunnel and turn on the headlights as the vehicle enters the tunnel.
  • KR101789074 teaches dividing an image into regions, processing each region of the image to obtain a weighted contrast value, and comparing the weighted contrast values to distinguish between day and night.
  • United States Patent No. 6,677,986B1 teaches averaging pixel brightness in different measurement windows of an image to determine ambient brightness and brightness distribution ahead of a vehicle, e.g. to turn on the vehicle lights when entering a dark tunnel.
  • Some embodiments of the present disclosure provide an apparatus for detecting changes in ambient light intensity.
  • the apparatus includes a controller configured to receive image data from an image capture device.
  • the image data represents sequential images of at least a part of a field of view of the image capture device and includes, for each image, a pixel value for each of a plurality of pixels forming the respective image.
  • the controller is further configured to process the image data to determine, iteratively, a signal-to-noise ratio of the image data.
  • the signal-to-noise ratio is a ratio of average to variance of the pixel values of at least some of the plurality of pixels.
  • the controller is further configured to generate a control output responsive to changes in ambient light intensity, wherein the control output is based on the signal-to-noise ratio.
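The core calculation described above can be sketched as follows. This is a minimal illustration under stated assumptions: greyscale pixel values, and a hypothetical function name; the disclosure also permits the standard deviation in place of the variance.

```python
import numpy as np

def snr(pixel_values):
    """Signal-to-noise ratio defined as the ratio of the average
    pixel value to the variance of the pixel values."""
    values = np.asarray(pixel_values, dtype=float)
    variance = values.var()
    if variance == 0.0:
        return float("inf")  # a perfectly uniform set of pixels
    return values.mean() / variance

# A brightly, evenly lit region gives a high SNR; a dark, noisy
# "pepper-and-salt" region gives a low SNR.
assert snr([200, 202, 198, 201, 199]) > snr([5, 60, 0, 45, 10])
```

A controller would evaluate this ratio iteratively on incoming frames and derive the control output from its changes.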
  • the disclosure provides a vehicle including the apparatus, at least one image capture device mounted on the vehicle for generating the image data, and at least one light source mounted on the vehicle.
  • the control output is arranged to control the at least one light source responsive to changes in ambient light intensity.
  • the disclosure provides a method for detecting changes in ambient light intensity.
  • the method includes receiving image data from at least one image capture device, where the image data represents sequential images of at least a part of a field of view of the at least one image capture device and includes, for each image, a pixel value for each of a plurality of pixels forming the respective image.
  • the method further includes processing the image data to determine, iteratively, a signal-to-noise ratio of the image data.
  • the signal-to-noise ratio is a ratio of average to variance of the pixel values of at least some of the plurality of pixels.
  • the method further includes generating a control output responsive to changes in ambient light intensity, wherein the control output is based on the signal-to-noise ratio.
  • Fig. 1 shows a vehicle equipped with an apparatus, according to some embodiments of the present disclosure
  • Figs. 4, 5, 6 and 7 are four sequential images A, B, C and D respectively of the field of view of a camera mounted on a vehicle;
  • Fig. 8 shows the signal-to-noise ratio obtained by processing the image data of the four images A, B, C and D.
  • Reference numerals or characters appearing in more than one of the figures indicate the same or corresponding features in each of them.
  • the image capture device 2 may be any device capable of detecting light to produce a signal representing a sequence of images.
  • the image capture device 2 may be a camera, which may comprise for example an imaging lens and a charge coupled device.
  • the camera 2 may be a video camera producing the image data in the form of a video feed to the controller 11.
  • the at least one image capture device may comprise a plurality of such devices with different fields of view; for example, to provide images in different directions of the environment of the vehicle. Multiple images could be defined by one camera which captures the images over time, or by multiple cameras (e.g. two or more cameras) each of which captures a different one of the images, either simultaneously or sequentially.
  • the vehicle 1 may be a wheeled or tracked vehicle, for example, a work vehicle - which is to say, a vehicle configured to carry out work on the environment of the vehicle, e.g. by digging or moving loose material.
  • the vehicle may include a tool mounted on the vehicle, e.g. on a stick or boom and operable, for example, by hydraulic actuators.
  • the vehicle 1 could be configured for use in construction or mining operations.
  • the vehicle could be a rigid or articulated vehicle, for example, an articulated dump truck with a tipping body as shown, or, for example, an excavator or a wheel loader.
  • the apparatus 10 may include a display 3 which is configured to display images, generated by the at least one image capture device 2, of its field of view 2'.
  • the camera 2 may be arranged to provide a view of the environment around the vehicle, and/or to provide an external view of a part of the vehicle, for example, to monitor its interaction with the environment. The view could be provided to the driver of the vehicle via a display 3 mounted inside the vehicle, and/or to a remote display (e.g. via a wireless data link) to enable other personnel to remotely monitor or control the operation of the vehicle 1.
  • the camera 2 may provide a signal to a control system configured to control the vehicle 1 responsive to the signal, for example, for autonomous operation of the vehicle. In each case, more than one such camera may be provided.
  • a camera 2 which is provided principally to send images to the display 3 may be used also to detect ambient light intensity, to obviate the need to install a separate ambient light sensor.
  • the image capture device 2 may include a reversing camera 2, which is to say, a camera whose field of view 2’ is positioned behind the vehicle 1.
  • the vehicle 1 may further include a display 3 for displaying images of the field of view 2’, i.e. the environment behind the vehicle, generated by the reversing camera 2, to a driver of the vehicle.
  • the display 3 may be operable discontinuously by selecting reverse gear; however, the camera 2 may be arranged to provide a continuous or regular video feed to the controller 11.
  • the apparatus 10 may include at least one light source, wherein the control output 15 is arranged to control the at least one light source responsive to changes in ambient light intensity.
  • the light source may be mounted on the vehicle 1 as shown and may include, for example, a headlight, indicator, tail light or other external lighting system 16, an ambient lighting system 17 for illuminating the environment of the vehicle, and/or a light source that illuminates or forms part of a display 18 (e.g. a dashboard display or driver control panel) and/or another internal lighting system of the vehicle 1.
  • the control output 15 could be arranged to turn the or each light source on or off, or to adjust its intensity, responsive to changes in ambient light intensity.
  • the intensity (light output) of the tail lights or internal displays of the vehicle 1 could be reduced with reducing light levels, e.g. at dusk, and increased with increasing light levels, e.g. at dawn, as detected by the apparatus.
  • the apparatus may include a power control unit 19 for controlling the light source or any other powered system responsive to the control output 15 from the controller 11.
  • Fig. 3 shows a sequence of images 13 represented by the image data 12.
  • the images 13 are spaced apart in time T by short reference time periods dT and longer periods, which for convenience are referred to as standard reference time periods DT.
  • a standard reference time period DT may be a period of sufficient length to contain, from the point of view of an observer, substantive changes in the content of the image.
  • a standard reference time period DT could be, for example, at least 10 seconds, or at least one minute, or at least 10 minutes, or at least one hour or more in length.
  • by a short reference time period dT is meant a time period that contains two or more consecutive images or frames 13, but during which, from the point of view of an observer when the image shows, for example, the work environment of a vehicle, there will be little substantive change in the content of the image.
  • a short reference time period dT could be, for example, not more than 0.01 second, or not more than 0.1 second, or not more than 1.0 second in length.
  • the pixel value may be referred to as the numerical pixel value or pixel intensity.
  • the pixel value or pixel intensity may be represented on a standard scale from 0 to 255, wherein a pixel intensity of 0 corresponds to a black pixel, and a pixel intensity of 255 corresponds to a white pixel.
  • each of the three colour components is represented similarly on a scale from 0 to 255, so that a pixel intensity value of 0,0,0 corresponds to a black pixel, and a pixel intensity of 255, 255, 255 corresponds to a white pixel.
  • the signal value for each pixel can be calculated based on the R, G and B values for each pixel.
  • for any one pixel, its R, G and B values could be processed individually, or could be averaged to give the intensity or pixel value on a scale from 0 to 255, or could be summed to give the intensity or pixel value on a scale from 0 to 765.
  • the pixel value or intensity for each pixel could also be expressed for example on a scale from 0 to 1, wherein a value of 0 corresponds to a black pixel, and a value of 1 corresponds to a white pixel.
  • a grey pixel of median intensity would have a value of 0.5.
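The scale conversions described in the preceding bullets can be illustrated directly; the channel values below are arbitrary examples.

```python
# One RGB pixel, each channel on the standard 0-255 scale.
r, g, b = 120, 200, 40

avg_intensity = (r + g + b) / 3        # averaged: 0-255 scale
summed_intensity = r + g + b           # summed: 0-765 scale
normalised = avg_intensity / 255.0     # 0-1 scale (0 = black, 1 = white)

assert summed_intensity == 360
assert avg_intensity == 120.0
assert 0.0 <= normalised <= 1.0
```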
  • Fig. 2 shows three instances of one region R1 of an image 13, which comprises a matrix of 8 x 5 pixels 14 with different pixel values from 0 to 1 as represented by the shading.
  • the controller 11 is configured to process the image data 12 to determine, iteratively, a signal-to-noise ratio of the image data 12.
  • the control output 15 is based on the signal-to-noise ratio - which is to say, the control output 15 is generated responsive to changes in the signal-to-noise ratio which represent changes in the level (intensity) of ambient light, e.g. daylight.
  • the signal-to-noise ratio may be calculated continuously or at predefined time intervals.
  • the controller 11 may be configured to determine the signal-to-noise ratio iteratively over a standard reference time period DT, and to generate the control output 15 responsive to a change in the signal-to-noise ratio only if the change persists over the standard reference time period.
  • the standard reference time period DT could be, for example, at least one second, or at least ten seconds, or at least one minute, or at least ten minutes. This can help eliminate false responses due to transient conditions.
  • the period can be selected depending on the application of the apparatus - for example, a relatively long standard reference time period DT may be selected to determine the transition from day to night.
  • the control output 15 may be a change in the value of a continuous or regularly repeated signal that reflects the signal-to-noise ratio.
  • a change in the value of the signal forming the control output 15 reflects a change in the signal-to-noise ratio, corresponding to a change in the ambient light intensity.
  • control output 15 may be generated when the signal-to-noise ratio changes to a value above or below a predefined threshold value, corresponding to a predefined threshold value of ambient light intensity.
  • the control output 15 may be generated responsive to the signal-to-noise ratio falling below the predefined threshold value, or alternatively, responsive to the signal-to-noise ratio rising above the predefined threshold value.
  • the control output 15 may be arranged to turn on at least one light source responsive to the signal-to-noise ratio falling below the predefined threshold value, indicating a low ambient light intensity, and/or to turn off the at least one light source responsive to the signal-to-noise ratio rising above the predefined threshold value, indicating a higher ambient light intensity.
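The threshold-and-persistence control logic described above can be sketched as follows. The class name, threshold value, and the number of samples taken to span the standard reference period DT are illustrative assumptions, not values from the disclosure.

```python
from collections import deque

class LightController:
    """Emit 'lights_on' when the SNR stays below a threshold for a
    full window of samples (spanning DT), 'lights_off' when it stays
    above it; mixed (transient) windows produce no output."""
    def __init__(self, threshold, window):
        self.threshold = threshold
        self.history = deque(maxlen=window)

    def update(self, snr_value):
        self.history.append(snr_value)
        if len(self.history) < self.history.maxlen:
            return None                      # not enough samples yet
        if all(v < self.threshold for v in self.history):
            return "lights_on"               # low SNR persisted over DT
        if all(v >= self.threshold for v in self.history):
            return "lights_off"              # high SNR persisted over DT
        return None                          # transient change: suppressed

ctrl = LightController(threshold=1.0, window=3)
assert ctrl.update(0.2) is None   # window not yet full
assert ctrl.update(5.0) is None
assert ctrl.update(0.3) is None   # mixed window: transient, no output
assert ctrl.update(0.2) is None
assert ctrl.update(0.1) == "lights_on"
```

Requiring a full window of consistent samples is one simple way to realise the "change persists over the standard reference time period" condition.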
  • the signal-to-noise ratio is defined as a ratio of average to variance of the pixel values - which is to say, the ratio of average pixel value to variance in pixel value - of at least some of the plurality of pixels 14.
  • the at least some of the plurality of pixels on which the calculation is based may include all of the pixels 14 forming each image 13 captured by the camera 2, or only those pixels 14 forming a selected region or regions R1, R2, R3 of the image 13, which regions may be predefined or dynamically defined by the controller 11, as shown in Fig. 3 and further discussed below.
  • the average pixel value may be represented by the arithmetic mean, and the variance in pixel value may be represented by the standard deviation.
  • the pixel values may be defined for the purpose of the calculation in either a spatial domain or a time domain, as further explained below.
  • Contrast is defined as the difference between the maximum and minimum signal value, thus:
  • Contrast = (maximum pixel value) - (minimum pixel value);
  • Contrast is a measure of how pixel intensity varies across an image.
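For comparison with the signal-to-noise ratio, the contrast measure just defined can be computed as follows (example values are illustrative, on the 0-255 scale):

```python
def contrast(pixel_values):
    """Contrast = (maximum pixel value) - (minimum pixel value)."""
    return max(pixel_values) - min(pixel_values)

assert contrast([10, 40, 230, 75]) == 220
# A uniformly dark region has low contrast even in good light,
# which is why the disclosure prefers the SNR in such scenes.
assert contrast([12, 15, 10, 14]) == 5
```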
  • At least one region R1, R2, R3 may be selected from the field of view of the image capture device 2 to define the at least some of the plurality of pixels 14 for the purpose of the calculation, wherein the at least one region R1, R2, R3 represents less than all of the field of view 2'.
  • the controller may be configured to determine the signal-to-noise ratio in a temporal domain, as the ratio of average to variance of different pixel values for each pixel 14 of said at least some of the plurality of pixels, over a plurality of sequential images of the selected at least one region, over a short reference time period dT.
  • the average and variance are determined by comparing the value of each pixel 14 in the selected one or more regions R1, R2, R3 in a first image 13 with the value of that same pixel 14 in one or more subsequent images 13 to calculate the signal-to-noise ratio.
  • Fig. 2 shows a group of forty pixels 14 forming the region R1 of the image 13 as shown in Fig. 3 in three sequential frames or images 13, which is to say, the same pixels 14 as defined at three successive short time intervals dT, indicated respectively as R1(dT1), R1(dT2), and R1(dT3).
  • the pixel values, represented by the shading of each pixel, vary substantially between adjacent pixels and also fluctuate rapidly over time, which is characteristic of the low signal-to-noise ratio or "pepper-and-salt" image quality obtained in poor ambient light conditions, as further discussed below under the heading "Industrial Applicability".
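One reading of the temporal-domain calculation above can be sketched as follows. The per-pixel aggregation step is an assumption: the disclosure does not fix how per-pixel ratios are combined across the region.

```python
import numpy as np

def temporal_snr(frames):
    """For each pixel position, take its values across the sequential
    frames (captured within a short reference period dT), compute the
    per-pixel mean/variance ratio, and average over the region."""
    stack = np.asarray(frames, dtype=float)   # shape (T, H, W)
    per_pixel_mean = stack.mean(axis=0)
    per_pixel_var = stack.var(axis=0)
    eps = 1e-12                               # guards against zero variance
    return float((per_pixel_mean / (per_pixel_var + eps)).mean())

# Stable, well-lit frames score far higher than flickering noisy ones,
# reproducing the "pepper-and-salt" behaviour described above.
steady = [[[200, 200], [200, 200]]] * 3
noisy = [[[0, 255], [255, 0]],
         [[255, 0], [0, 255]],
         [[10, 240], [240, 10]]]
assert temporal_snr(steady) > temporal_snr(noisy)
```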
  • the at least one region R1, R2, R3 may include a reference surface 20 which forms a constant element of the field of view 2'.
  • the field of view 2' of the at least one image capture device 2 may include a reference surface 20 defined by a part of the vehicle 1.
  • the at least one light source may be arranged to illuminate the reference surface 20 defined by this part of the vehicle 1, as shown in Fig. 4.
  • the controller 11 may further be configured to monitor the operation of the vehicle lights by reference to the pixel values of the pixels 14 contained in the region R1, R2, R3 of the image 13 that is occupied by the reference surface 20. For example, the controller 11 may carry out an additional operation to compare said pixel values before energising the light source with the same pixel values after energising the light source.
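That before/after comparison might look like the following sketch; the function name and the minimum-increase threshold are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def lights_effective(before, after, min_increase=30.0):
    """Compare pixel values of the reference-surface region before and
    after energising the light source; if the average intensity does
    not rise by at least `min_increase` (0-255 scale), the lights may
    not be operating correctly."""
    rise = (np.asarray(after, dtype=float).mean()
            - np.asarray(before, dtype=float).mean())
    return rise >= min_increase

before = [20, 25, 18, 22]           # reference surface, lights off
after_working = [150, 160, 155, 148]
after_failed = [21, 24, 19, 23]

assert lights_effective(before, after_working)
assert not lights_effective(before, after_failed)
```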
  • the controller may be configured to determine the signal-to-noise ratio for each of the images 13 in a spatial domain, as the ratio of average to variance across the at least some of the plurality of pixels 14.
  • Each image 13 may be an image of all of the field of view 2’.
  • the at least some of the plurality of pixels on which the signal-to-noise ratio calculation for that image 13 is based may include all of the plurality of pixels 14 forming that respective image 13.
  • at least one region R1, R2, R3 may be selected from the field of view to define the at least some of the plurality of pixels, the at least one region R1, R2, R3 representing less than all of the field of view 2', as illustrated in Fig. 3.
  • for example, as illustrated in Fig. 3, the signal-to-noise ratio may be calculated for the image 13 at a single point in time, represented by the region R1(dT1), based on the average and variance of the pixel values of the forty pixels 14 in the selected region R1.
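The spatial-domain calculation for a single image can be sketched as follows; the region values are illustrative, on the 0-1 scale.

```python
import numpy as np

def spatial_snr(region):
    """SNR of one image at a single point in time: the average of the
    region's pixel values divided by their variance."""
    pixels = np.asarray(region, dtype=float)
    return float(pixels.mean() / pixels.var())

# An 8 x 5 region like R1 in Fig. 2: a well-lit, stable region scores
# much higher than a dark, noisy one.
rng = np.random.default_rng(0)
well_lit = rng.normal(0.8, 0.02, size=(8, 5))
dark_noisy = rng.uniform(0.0, 0.3, size=(8, 5))
assert spatial_snr(well_lit) > spatial_snr(dark_noisy)
```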
  • the at least one region may be pre-defined or may be selected by the controller 11 responsive to processing the received image data, as further discussed below.
  • the image capture device 2 may be configured to send the image data 12 to the controller 11 for part or all of its field of view 2'.
  • the image capture device 2 may be configured to send the image data 12 to the controller 11 only for that at least one region, or alternatively for the entire image of the entire field of view 2’.
  • the or each region R1, R2, R3 may be selected as a predefined area of the image (e.g. a predefined group of pixels 14 in the image, corresponding to a predefined area in the field of view 2' of the camera) before the image 13 is processed.
  • a predefined area of the image 13 could be selected based on a predefined component of the corresponding area in the field of view 2'.
  • the component could be, for example, the sky, which in a normal use position of the vehicle 1 will appear in that area of the field of view 2’.
  • the component could be for example a reference surface 20 forming part of the vehicle 1 which will appear in that area of the field of view.
  • the controller may be configured to process the image data 12 representing images 13 received from the image capture device 2, in accordance with an algorithm, to define the at least one region R1, R2, R3 of the field of view 2', based on a spatial distribution of different pixel values in the received images 13.
  • the at least one region R1, R2, R3 defines the at least some of the plurality of pixels for the purpose of calculating the signal-to-noise ratio, and represents less than all of the field of view 2'.
  • the algorithm may be, for example, a watershed algorithm, which is well known in the art of image processing and will not be further discussed here.
  • the signal-to-noise ratio may be calculated for the or each region R1, R2, R3 defined by the algorithm.
  • one or more regions R1, R2, R3 defined by the algorithm may be selected, and the signal-to-noise ratio calculated for the or each region so selected.
  • the one or more regions may be selected, either as part of the segmenting step which defines the regions, or as a separate step after the segmenting step.
  • the one or more regions R1, R2, R3 may be selected by processing the pixel values of each region identified by the algorithm to identify regions having pixel values that correspond to a predefined threshold value - for example, regions of high intensity or regions of low intensity. For example, the signal-to-noise ratio may be calculated for regions selected as having high intensity (high pixel values) over a predefined threshold. Alternatively or additionally, the one or more regions R1, R2, R3 may be selected by comparing the pixel values of the different regions of the image identified by the algorithm to identify regions having similar or different pixel values. For example, the signal-to-noise ratio may be calculated for regions selected as having different intensity (which is to say, the regions have different average pixel values when the pixel values of each region are averaged over the respective region).
  • the calculated signal-to-noise ratio for each region R1, R2, R3 may be averaged across the regions R1, R2, R3, or compared with that of the other regions R1, R2, R3.
  • changes in ambient light intensity may be detected by processing image data 12 from an image capture device 2 to determine, iteratively, a signal-to-noise ratio of the image data 12, wherein the signal-to-noise ratio is a ratio of average to variance of the pixel values of at least some of the pixels 14 forming the image 13.
  • a control output 15 is generated, based on the signal-to-noise ratio, responsive to changes in ambient light intensity.
  • image data 12 is received from at least one image capture device 2, wherein the image data 12 represents sequential images 13 of at least a part of a field of view 2’ of the at least one image capture device 2, and includes for each image 13 a pixel value for each of a plurality of pixels 14 forming the respective image 13.
  • the image data 12 is processed to determine, iteratively, a signal-to-noise ratio of the image data 12, wherein the signal-to-noise ratio is a ratio of average to variance of the pixel values of at least some of the plurality of pixels 14.
  • the control output 15 is generated responsive to changes in ambient light intensity, wherein the control output 15 is based on the signal-to-noise ratio.
  • the apparatus could be arranged other than on a vehicle.
  • Many further adaptations are possible within the scope of the claims.
  • signal-to-noise ratio may be used as a more reliable indicator of ambient light conditions than other parameters, such as contrast, particularly in situations where the image is dominated by dark surfaces, for example, a coal seam, a mine face, a worksite and/or the like.
  • the pixel value of each pixel (or selected groups of pixels) of the image might be processed to determine that there is low contrast - which is to say, most or all of the pixels of the image, or of each group, have a similar, low pixel value.
  • a similar result might be obtained from a relatively lighter coloured surface in conditions of low ambient light.
  • contrast might give a false indication of low ambient light levels where the image is dominated by a dark surface such as a coal seam.
  • the signal-to-noise ratio can better discriminate between such images.
  • in good ambient light conditions, a dark surface such as a coal seam will tend to produce a relatively higher signal-to-noise ratio than each of the same surface, and a lighter surface, in poor ambient light conditions.
  • Figs. 4 - 7 show four sequential images 13, designated respectively as A, B, C and D, generated by a camera 2 mounted on a work vehicle 1. The images are spaced apart in time by long time periods of at least about half an hour to one or more hours.
  • Each image A, B, C, D corresponds to frame no. 1 in a sequence of frames F taken over a much shorter time period, the frames being numbered in a sequence as shown in the graph of Fig. 8.
  • by a frame F is meant an image 13. About 1576 frames were generated by the camera 2 over each of the four discrete time periods covered by the graph.
  • the image data of all of the pixels 14 forming one individual frame F was analysed using the spatial domain method to generate the signal-to-noise ratio SNR which is plotted on the graph on a scale from 0 - 1.6.
  • the four different time periods commencing respectively with images A, B, C and D were spaced apart in sequence over a part of a day which began at sunrise (image A) and ended in full daylight (image D), in an open work environment where dark surfaces predominate.
  • the four traces A, B, C and D are defined by the four data sets starting with images A, B, C and D respectively. (Trace D is relatively short because the recording was interrupted during the test. The text that appears in trace D is an artefact of the camera and not relevant to this disclosure.)
  • each trace shows a signal which is consistent over the time period of the graph and, moreover, is clearly distinguished from the other traces.
  • images A, B, C and D represent progressively increasing ambient light levels in the environment of the work vehicle 1, while the traces A, B, C and D show a progressively increasing signal-to-noise ratio SNR.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Changes in ambient light intensity are detected by processing image data (12) from an image capture device (2) to determine, iteratively, a signal-to-noise ratio of the image data (12). The signal-to-noise ratio is a ratio of average to variance of pixel values for some of the pixels (14) forming the image (13). A control output (15) is generated, based on the signal-to-noise ratio (SNR), that is responsive to changes in ambient light intensity. The control output (15) is used to control a light source (16, 17, 18) of a vehicle (1).

Description

Description
VEHICLE LIGHTING CONTROL USING IMAGE PROCESSING OF
CHANGES IN AMBIENT LIGHT INTENSITY
Background
This disclosure relates to systems for processing image data to detect changes in ambient light intensity.
United States Patent No. 10,311,599B2 describes a mining truck with lights and cameras for imaging an illuminated area. The image is analysed to determine from the proportion of dark pixels whether the lights are working properly. The threshold light intensity for identifying a dark pixel may be selected to distinguish dark objects from objects that are not illuminated.
KR101087741 describes controlling vehicle headlights responsive to an image from a forward-looking camera. The brightness and exposure gain of different areas of the image are compared with reference values to distinguish day from night, and with one another to detect the presence of a tunnel and turn on the headlights as the vehicle enters the tunnel.
KR101789074 teaches dividing an image into regions, processing each region of the image to obtain a weighted contrast value, and comparing the weighted contrast values to distinguish between day and night.
United States Patent No. 6,677,986B1 teaches averaging pixel brightness in different measurement windows of an image to determine ambient brightness and brightness distribution ahead of a vehicle, e.g. to turn on the vehicle lights when entering a dark tunnel.
When such systems are used on vehicles that operate in an environment in which dark surfaces predominate, for example, in mining operations, it is found that the dark surfaces may result in a false indication of low light intensity.
Summary
Some embodiments of the present disclosure provide an apparatus for detecting changes in ambient light intensity.
The apparatus includes a controller configured to receive image data from an image capture device. The image data represents sequential images of at least a part of a field of view of the image capture device and includes, for each image, a pixel value for each of a plurality of pixels forming the respective image.
The controller is further configured to process the image data to determine, iteratively, a signal-to-noise ratio of the image data. The signal-to-noise ratio is a ratio of average to variance of the pixel values of at least some of the plurality of pixels.
The controller is further configured to generate a control output responsive to changes in ambient light intensity, wherein the control output is based on the signal-to-noise ratio.
In some embodiments, the disclosure provides a vehicle including the apparatus, at least one image capture device mounted on the vehicle for generating the image data, and at least one light source mounted on the vehicle. The control output is arranged to control the at least one light source responsive to changes in ambient light intensity.
In some embodiments, the disclosure provides a method for detecting changes in ambient light intensity.
The method includes receiving image data from at least one image capture device, where the image data represents sequential images of at least a part of a field of view of the at least one image capture device and includes, for each image, a pixel value for each of a plurality of pixels forming the respective image.
The method further includes processing the image data to determine, iteratively, a signal-to-noise ratio of the image data. The signal-to-noise ratio is a ratio of average to variance of the pixel values of at least some of the plurality of pixels.
The method further includes generating a control output responsive to changes in ambient light intensity, wherein the control output is based on the signal-to-noise ratio.
Brief Description of the Drawings
Further features and advantages will be apparent from the following illustrative embodiments which will now be described, purely by way of example and without limitation to the scope of the claims, and with reference to the accompanying drawings, in which:
Fig. 1 shows a vehicle equipped with an apparatus, according to some embodiments of the present disclosure;
Fig. 2 shows a region of the field of view comprising a plurality of pixels as contained in three sequential images;
Fig. 3 represents a sequence of images;
Figs. 4, 5, 6 and 7 are four sequential images A, B, C and D respectively of the field of view of a camera mounted on a vehicle; and
Fig. 8 shows the signal-to-noise ratio obtained by processing the image data of the four images A, B, C and D.
Reference numerals or characters appearing in more than one of the figures indicate the same or corresponding features in each of them.
Detailed Description
Fig. 1 shows a vehicle 1 including an apparatus 10 for detecting changes in ambient light intensity. The apparatus 10 includes a controller 11 configured to receive image data 12 from at least one image capture device 2, which is arranged to generate the image data and send it to the controller 11. The controller is configured to generate a control output 15 responsive to changes in ambient light intensity, as further discussed below. The controller 11 may be any device configured to process the image data 12 to generate the control output 15, and may include a processor or CPU and a memory, e.g. RAM, wherein the processor is configured to execute instructions defined by software stored in a non-transitory machine-readable storage medium, e.g. ROM.
As illustrated, the at least one image capture device 2 may be mounted on a vehicle 1 together with the apparatus 10.
The image capture device 2 may be any device capable of detecting light to produce a signal representing a sequence of images. The image capture device 2 may be a camera, which may comprise for example an imaging lens and a charge coupled device. The camera 2 may be a video camera producing the image data in the form of a video feed to the controller 11. The at least one image capture device may comprise a plurality of such devices with different fields of view; for example, to provide images in different directions of the environment of the vehicle. Multiple images could be defined by one camera which captures the images over time, or by multiple cameras (e.g. two or more cameras) each of which captures a different one of the images, either simultaneously or sequentially.
The vehicle 1 may be a wheeled or tracked vehicle, for example, a work vehicle - which is to say, a vehicle configured to carry out work on the environment of the vehicle, e.g. by digging or moving loose material. The vehicle may include a tool mounted on the vehicle, e.g. on a stick or boom and operable, for example, by hydraulic actuators. The vehicle 1 could be configured for use in construction or mining operations. The vehicle could be a rigid or articulated vehicle, for example, an articulated dump truck with a tipping body as shown, or, for example, an excavator or a wheel loader.
The apparatus 10 may include a display 3 which is configured to display images, generated by the at least one image capture device 2, of its field of view 2’. For example, the camera 2 may be arranged to provide a view of the environment around the vehicle, and/or to provide an external view of a part of the vehicle, for example, to monitor its interaction with the environment. The view could be provided to the driver of the vehicle via a display 3 mounted inside the vehicle, and/or to a remote display (e.g. via a wireless data link) to enable other personnel to remotely monitor or control the operation of the vehicle 1. Alternatively or additionally, the camera 2 may provide a signal to a control system configured to control the vehicle 1 responsive to the signal, for example, for autonomous operation of the vehicle. In each case, more than one such camera may be provided. Thus, a camera 2 which is provided principally to send images to the display 3 may be used also to detect ambient light intensity, to obviate the need to install a separate ambient light sensor.
As illustrated in Fig. 1, the image capture device 2 may include a reversing camera 2, which is to say, a camera whose field of view 2’ is positioned behind the vehicle 1. The vehicle 1 may further include a display 3 for displaying images of the field of view 2’, i.e. the environment behind the vehicle, generated by the reversing camera 2, to a driver of the vehicle. The display 3 may be operable discontinuously by selecting reverse gear; however, the camera 2 may be arranged to provide a continuous or regular video feed to the controller 11. The apparatus 10 may include at least one light source, wherein the control output 15 is arranged to control the at least one light source responsive to changes in ambient light intensity.
The light source may be mounted on the vehicle 1 as shown and may include, for example, a headlight, indicator, tail light or other external lighting system 16, an ambient lighting system 17 for illuminating the environment of the vehicle, and/or a light source that illuminates or forms part of a display 18 (e.g. a dashboard display or driver control panel) and/or another internal lighting system of the vehicle 1. The control output 15 could be arranged to turn the or each light source on or off, or to adjust its intensity, responsive to changes in ambient light intensity. For example, the intensity (light output) of the tail lights or internal displays of the vehicle 1 could be reduced with reducing light levels, e.g. at dusk, and increased with increasing light levels, e.g. at dawn, as detected by the apparatus. The apparatus may include a power control unit 19 for controlling the light source or any other powered system responsive to the control output 15 from the controller 11.
Referring now also to Figs. 2 and 3, the image data 12 represents sequential images 13 of at least a part of a field of view 2’ of the at least one image capture device 2, and includes, for each image 13, a pixel value for each of a plurality of pixels 14 forming the respective image 13. The images 13 may also be referred to as frames, when generated in a rapid sequence to form a video signal. Thus, it will be understood that a moving image as displayed as a live video feed on a screen will include many consecutive images or frames 13 in the sense of this specification. The frames may be captured at short time intervals; for example, at a rate of at least 10 frames per second, typically at least about 24 frames per second.
Fig. 3 shows a sequence of images 13 represented by the image data 12. The images 13 are spaced apart in time T by short reference time periods δT and longer periods, which for convenience are referred to as standard reference time periods ΔT.
A standard reference time period ΔT may be a period of sufficient length to contain, from the point of view of an observer, substantive changes in the content of the image. A standard reference time period ΔT could be, for example, at least 10 seconds, or at least one minute, or at least 10 minutes, or at least one hour or more in length.
By a short reference time period is meant a time period that contains two or more consecutive images or frames 13, but during which, from the point of view of an observer when the image shows, for example, the work environment of a vehicle, there will be little substantive change in the content of the image. However, over a short reference time period δT there may be a change in the individual pixel values of each of the pixels 14, which may be used to define the signal-to-noise ratio as further discussed below. A short reference time period δT could be, for example, not more than 0.01 second, or not more than 0.1 second, or not more than 1.0 second in length.
The pixel value may be referred to as the numerical pixel value or pixel intensity. For example, in a greyscale image, the pixel value or pixel intensity may be represented on a standard scale from 0 to 255, wherein a pixel intensity of 0 corresponds to a black pixel, and a pixel intensity of 255 corresponds to a white pixel. In a colour RGB image, each of the three colour components is represented similarly on a scale from 0 to 255, so that a pixel intensity value of 0,0,0 corresponds to a black pixel, and a pixel intensity of 255, 255, 255 corresponds to a white pixel. For colour images the signal value for each pixel can be calculated based on the R, G and B values for each pixel. For example, for any one pixel, its R, G and B values could be processed individually, or could be averaged to give the intensity or pixel value on a scale from 0 to 255, or could be summed to give the intensity or pixel value on a scale from 0 to 765. The pixel value or intensity for each pixel could also be expressed for example on a scale from 0 to 1, wherein a value of 0 corresponds to a black pixel, and a value of 1 corresponds to a white pixel. A grey pixel of median intensity would have a value of 0.5.
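By way of illustration, the conversions between the scales described above can be sketched in Python; the function name and the simple averaging of the three colour channels are assumptions for illustration, not taken from the disclosure:

```python
def pixel_intensity(rgb, scale="0-255"):
    """Reduce an (R, G, B) triple to a single pixel value.

    Averaging the three channels gives an intensity from 0 to 255;
    dividing by 255 maps it onto the 0 to 1 scale, where 0 is a black
    pixel, 1 is a white pixel and 0.5 is a grey pixel of median intensity.
    """
    value = sum(rgb) / 3.0  # average of the R, G and B values
    return value / 255.0 if scale == "0-1" else value

# A white pixel on both scales.
assert pixel_intensity((255, 255, 255)) == 255.0
assert pixel_intensity((255, 255, 255), scale="0-1") == 1.0
```

Summing the three channels instead of averaging them would give the alternative 0 to 765 scale mentioned above.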
By way of example, Fig. 2 shows three instances of one region R1 of an image 13, which comprises a matrix of 8 x 5 pixels 14 with different pixel values from 0 to 1 as represented by the shading.
The controller 11 is configured to process the image data 12 to determine, iteratively, a signal-to-noise ratio of the image data 12. The control output 15 is based on the signal-to-noise ratio - which is to say, the control output 15 is generated responsive to changes in the signal-to-noise ratio which represent changes in ambient light intensity.
By iteratively determining the signal-to-noise ratio for successive images 13 over time, the level (intensity) of ambient light (e.g. daylight) can be monitored and so changes can be detected. For example, the signal-to-noise ratio may be calculated continuously or at predefined time intervals.
The controller 11 may be configured to determine the signal-to-noise ratio iteratively over a standard reference time period ΔT, and to generate the control output 15 responsive to a change in the signal-to-noise ratio, only if the change persists over the standard reference time period.
The standard reference time period ΔT could be, for example, at least one second, or at least ten seconds, or at least one minute, or at least ten minutes. This can help eliminate false responses due to transient conditions. The period can be selected depending on the application of the apparatus - for example, a relatively long standard reference time period ΔT may be selected to determine the transition from day to night.
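A minimal sketch of this persistence check, assuming the standard reference time period is expressed as a fixed number of consecutive signal-to-noise ratio samples; the class name, threshold and sample count are illustrative, not taken from the disclosure:

```python
from collections import deque

class PersistenceGate:
    """Generate a lights-on trigger only if low SNR persists.

    `period_samples` stands in for the standard reference time period,
    expressed as a number of consecutive signal-to-noise ratio samples.
    """

    def __init__(self, threshold, period_samples):
        self.threshold = threshold
        self.history = deque(maxlen=period_samples)

    def update(self, snr):
        """Record one SNR sample; return True only when the window is
        full and every sample in it is below the threshold."""
        self.history.append(snr)
        if len(self.history) < self.history.maxlen:
            return False
        return all(s < self.threshold for s in self.history)

gate = PersistenceGate(threshold=0.5, period_samples=3)
# A transient dip does not trigger; a persistent one does.
assert gate.update(0.4) is False          # window not yet full
assert gate.update(0.9) is False          # dip did not persist
assert [gate.update(0.4) for _ in range(3)] == [False, False, True]
```

Lengthening the window filters out briefer transients, matching the observation that a longer reference period suits slow transitions such as day to night.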
The control output 15 may be a change in the value of a continuous or regularly repeated signal that reflects the signal-to-noise ratio.
Thus, a change in the value of the signal forming the control output 15 reflects a change in the signal-to-noise ratio, corresponding to a change in the ambient light intensity.
Alternatively, the control output 15 may be a signal that is generated only in response to a change in the signal-to-noise ratio, corresponding to a change in the ambient light intensity. For example, the control output 15 could be a binary signal for turning a light source or other system on or off responsive to changes in ambient light intensity.
In each case, the control output 15 may be generated when the signal-to-noise ratio changes, or when it changes beyond a predefined threshold range of values (e.g. by a predefined proportion of its previously calculated value), indicating a corresponding change in ambient light intensity.
Alternatively, the control output 15 may be generated when the signal-to-noise ratio changes to a value above or below a predefined threshold value, corresponding to a predefined threshold value of ambient light intensity.
The control output 15 may be generated responsive to the signal-to-noise ratio falling below the predefined threshold value, or alternatively, responsive to the signal-to-noise ratio rising above the predefined threshold value. For example, the control output 15 may be arranged to turn on at least one light source responsive to the signal-to-noise ratio falling below the predefined threshold value, indicating a low ambient light intensity, and/or to turn off the at least one light source responsive to the signal-to-noise ratio rising above the predefined threshold value, indicating a higher ambient light intensity.
In accordance with the disclosure, the signal-to-noise ratio is defined as a ratio of average to variance of the pixel values - which is to say, the ratio of average pixel value to variance in pixel value - of at least some of the plurality of pixels 14. The at least some of the plurality of pixels on which the calculation is based may include all of the pixels 14 forming each image 13 captured by the camera 2, or only those pixels 14 forming a selected region or regions R1, R2, R3 of the image 13, which regions may be predefined or dynamically defined by the controller 11, as shown in Fig. 3 and further discussed below.
For the purposes of the calculation, the average pixel value may be represented by the arithmetic mean, and the variance in pixel value may be represented by the standard deviation.
Thus, based on the numerical pixel value of each pixel in a group of pixels, the signal-to-noise ratio for the group of pixels may be calculated according to the formula:
Signal-to-noise ratio = (average pixel value) / (standard deviation)
The pixel values may be defined for the purpose of the calculation in either a spatial domain or a time domain, as further explained below.
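The formula above translates directly into code. The following numpy sketch is illustrative only (the function name is an assumption); it returns infinity for a perfectly uniform group of pixels, whose standard deviation is zero:

```python
import numpy as np

def signal_to_noise_ratio(pixel_values):
    """Ratio of the arithmetic mean of the pixel values to their
    standard deviation, per the formula above."""
    values = np.asarray(pixel_values, dtype=float)
    std = values.std()
    if std == 0.0:
        return float("inf")  # a perfectly uniform patch has no noise
    return float(values.mean() / std)

# Pixels fluctuating between 0 and 20: mean 10, standard deviation 10.
assert signal_to_noise_ratio([0, 20, 0, 20]) == 1.0
```

A uniform patch scores high while a noisy patch with the same average scores low, which is the discriminating behaviour relied on in the Industrial Applicability section.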
It will be appreciated that the signal-to-noise ratio is a different parameter from contrast. Contrast is defined as the difference between the maximum and minimum signal value, thus:
Contrast = (maximum pixel value) - (minimum pixel value)
Contrast is a measure of how pixel intensity varies across an image.
As illustrated in Fig. 3, at least one region R1, R2, R3 may be selected from the field of view of the image capture device 2 to define the at least some of the plurality of pixels 14 for the purpose of the calculation, wherein at least one region R1, R2, R3 represents less than all of the field of view 2’.
Where the at least some of the pixels 14 are defined within such a region or regions, the controller may be configured to determine the signal-to-noise ratio in a temporal domain, as the ratio of average to variance of different pixel values for each pixel 14 of said at least some of the plurality of pixels, over a plurality of sequential images of the selected at least one region, over a short reference time period δT.
That is to say, the average and variance are determined by comparing the value of each pixel 14 in the selected one or more regions R1, R2, R3 in a first image 13 with the value of that same pixel 14 in one or more subsequent images 13 to calculate the signal-to-noise ratio.
Illustrating this principle, Fig. 2 shows a group of forty pixels 14 forming the region R1 of the image 13 as shown in Fig. 3 in three sequential frames or images 13, which is to say, the same pixels 14 as defined at three successive short time intervals δT, indicated respectively as R1(δT1), R1(δT2), and R1(δT3). It can be seen that the pixel values, represented by the shading of each pixel, vary substantially between adjacent pixels and also fluctuate rapidly over time, which is characteristic of the low signal-to-noise ratio or “pepper-and-salt” image quality obtained in poor ambient light conditions, as further discussed below under the heading “Industrial Applicability”.
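A sketch of the temporal-domain calculation follows, assuming the per-pixel ratios are finally averaged into a single figure (the disclosure does not fix that last aggregation step, so the mean is an assumption here):

```python
import numpy as np

def temporal_snr(frames):
    """Temporal-domain signal-to-noise ratio of a region.

    `frames` holds the same region (e.g. R1) at successive short time
    intervals; for each pixel position the mean and standard deviation
    are taken over time, and the per-pixel ratios are then averaged.
    """
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    mean = stack.mean(axis=0)   # per-pixel average over time
    std = stack.std(axis=0)     # per-pixel fluctuation over time
    ratio = np.where(std > 0.0, mean / np.where(std > 0.0, std, 1.0), np.inf)
    finite = ratio[np.isfinite(ratio)]
    return float(finite.mean()) if finite.size else float("inf")

# Pixels flickering between 0 and 2 over two frames: mean 1, std 1.
assert temporal_snr([np.zeros((2, 2)), np.full((2, 2), 2.0)]) == 1.0
```

Steady pixels (zero fluctuation over time) yield an infinite per-pixel ratio and are excluded from the average, reflecting the absence of temporal noise at those positions.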
Where the at least some of the pixels 14 are defined within such a region or regions, the at least one region R1, R2, R3 may include a reference surface 20 which forms a constant element of the field of view 2’.
For example, where as illustrated the apparatus 10 is arranged on a vehicle, the field of view 2’ of the at least one image capture device 2 may include a reference surface 20 defined by a part of the vehicle 1.
Where at least one light source is provided, the at least one light source may be arranged to illuminate the reference surface 20 defined by this part of the vehicle 1, as shown in Fig. 4. In this case, the controller 11 may further be configured to monitor the operation of the vehicle lights by reference to the pixel values of the pixels 14 contained in the region R1, R2, R3 of the image 13 that is occupied by the reference surface 20. For example, the controller 11 may carry out an additional operation to compare said pixel values before energising the light source with the same pixel values after energising the light source.
As an alternative to calculating the signal-to-noise ratio in a temporal domain, the controller may be configured to determine the signal-to-noise ratio for each of the images 13 in a spatial domain, as the ratio of average to variance across the at least some of the plurality of pixels 14.
Each image 13 may be an image of all of the field of view 2’. In this case, the at least some of the plurality of pixels on which the signal-to-noise ratio calculation for that image 13 is based may include all of the plurality of pixels 14 forming that respective image 13. Alternatively, at least one region R1, R2, R3 may be selected from the field of view to define the at least some of the plurality of pixels, the at least one region R1, R2, R3 representing less than all of the field of view 2’, as illustrated in Fig. 3. For example, as illustrated in Fig. 2, the signal-to-noise ratio may be calculated for the image 13 at a single point in time, represented by the region R1(δT1), based on the average and variance of the pixel values of the forty pixels 14 in the selected region R1.
Where at least one region R1, R2, R3 is selected, then irrespective of whether the signal-to-noise ratio is calculated in a time domain or a spatial domain, the at least one region may be pre-defined or may be selected by the controller 11 responsive to processing the received image data, as further discussed below. The image capture device 2 may be configured to send the image data 12 to the controller 11 for part or all of its field of view 2’. For example, where at least one region R1, R2, R3 is predefined, the image capture device 2 may be configured to send the image data 12 to the controller 11 only for that at least one region, or alternatively for the entire image of the entire field of view 2’. The or each region R1, R2, R3 may be selected as a predefined area of the image (e.g. a predefined group of pixels 14 in the image, corresponding to a predefined area in the field of view 2’ of the camera) before the image 13 is processed.
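A corresponding sketch of the spatial-domain calculation over a whole frame or an optional rectangular region; the slice-based region representation is an assumption for illustration:

```python
import numpy as np

def spatial_snr(image, region=None):
    """Spatial-domain signal-to-noise ratio of a single image.

    `region`, if given, is a (row_slice, col_slice) pair selecting a
    region such as R1; otherwise all pixels of the frame are used.
    """
    pixels = np.asarray(image, dtype=float)
    if region is not None:
        rows, cols = region
        pixels = pixels[rows, cols]
    std = pixels.std()
    return float(pixels.mean() / std) if std > 0.0 else float("inf")

frame = np.full((5, 5), 30.0)
frame[0, 0] = 130.0                     # one bright outlier
# Excluding the outlier leaves a perfectly uniform region, whose
# spatial SNR is higher than that of the whole frame.
assert spatial_snr(frame) < spatial_snr(frame, (slice(1, 5), slice(0, 5)))
```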
A predefined area of the image 13 could be selected based on a predefined component of the corresponding area in the field of view 2’. The component could be, for example, the sky, which in a normal use position of the vehicle 1 will appear in that area of the field of view 2’. As mentioned above, the component could be for example a reference surface 20 forming part of the vehicle 1 which will appear in that area of the field of view.
The controller may be configured to process the image data 12 representing images 13 received from the image capture device 2, in accordance with an algorithm, to define the at least one region R1, R2, R3 of the field of view 2’, based on a spatial distribution of different pixel values in the received images 13. In this case it will be understood that the at least one region R1, R2, R3 defines the at least some of the plurality of pixels for the purpose of calculating the signal-to-noise ratio, and represents less than all of the field of view 2’. The algorithm may be, for example, a watershed algorithm as well known in the art of image processing, and will not be further discussed.
After segmenting the image in this way, the signal-to-noise ratio may be calculated for the or each region R1, R2, R3 defined by the algorithm.
Alternatively, one or more regions R1, R2, R3 defined by the algorithm may be selected, and the signal-to-noise ratio calculated for the or each region so selected. The one or more regions may be selected, either as part of the segmenting step which defines the regions, or as a separate step after the segmenting step.
The one or more regions R1, R2, R3 may be selected by processing the pixel values of each region identified by the algorithm to identify regions having pixel values that correspond to a predefined threshold value - for example, regions of high intensity or regions of low intensity. For example, the signal-to-noise ratio may be calculated for regions selected as having high intensity (high pixel values) over a predefined threshold. Alternatively or additionally, the one or more regions R1, R2, R3 may be selected by comparing the pixel values of the different regions of the image identified by the algorithm to identify regions having similar or different pixel values. For example, the signal-to-noise ratio may be calculated for regions selected as having different intensity (which is to say, the regions have different average pixel values when the pixel values of each region are averaged over the respective region).
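A sketch of this selection step, assuming a prior segmentation pass (such as the watershed algorithm mentioned above) has already produced boolean masks for the candidate regions; the masks, threshold and dictionary output are illustrative assumptions:

```python
import numpy as np

def snr_of_selected_regions(image, masks, threshold):
    """Return the SNR of each candidate region whose average pixel
    value exceeds a threshold (i.e. regions of high intensity).

    `masks` is a list of boolean arrays standing in for the output of
    a prior segmentation step such as a watershed algorithm.
    """
    image = np.asarray(image, dtype=float)
    selected = {}
    for index, mask in enumerate(masks):
        values = image[mask]
        if values.mean() > threshold:  # keep high-intensity regions only
            std = values.std()
            selected[index] = float(values.mean() / std) if std > 0.0 else float("inf")
    return selected

image = np.zeros((4, 4))
image[:, :2] = 200.0                      # a bright left half
left = np.zeros((4, 4), dtype=bool)
left[:, :2] = True
result = snr_of_selected_regions(image, [left, ~left], threshold=100.0)
assert list(result) == [0]                # only the bright region is kept
```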
If calculated for more than one region of an image 13 or of multiple images 13 from one or more image capture devices 2, the calculated signal-to-noise ratio for each region R1, R2, R3 may be averaged across the regions R1, R2, R3, or compared with that of the other regions R1, R2, R3.
In summary, changes in ambient light intensity may be detected by processing image data 12 from an image capture device 2 to determine, iteratively, a signal-to-noise ratio of the image data 12, wherein the signal-to-noise ratio is a ratio of average to variance of the pixel values of at least some of the pixels 14 forming the image 13. A control output 15 is generated, based on the signal-to-noise ratio, responsive to changes in ambient light intensity.
Thus, in accordance with a method, image data 12 is received from at least one image capture device 2, wherein the image data 12 represents sequential images 13 of at least a part of a field of view 2’ of the at least one image capture device 2, and includes for each image 13 a pixel value for each of a plurality of pixels 14 forming the respective image 13. The image data 12 is processed to determine, iteratively, a signal-to-noise ratio of the image data 12, wherein the signal-to-noise ratio is a ratio of average to variance of the pixel values of at least some of the plurality of pixels 14. The control output 15 is generated responsive to changes in ambient light intensity, wherein the control output 15 is based on the signal-to-noise ratio.
In alternative embodiments, the apparatus could be arranged other than on a vehicle. Many further adaptations are possible within the scope of the claims.
Industrial Applicability
The present disclosure recognises that signal-to-noise ratio may be used as a more reliable indicator of ambient light conditions than other parameters, such as contrast, particularly in situations where the image is dominated by dark surfaces, for example, a coal seam, a mine face, a worksite and/or the like.
In such a situation, the pixel value of each pixel (or selected groups of pixels) of the image might be processed to determine that there is low contrast - which is to say, most or all of the pixels of the image, or of each group, have a similar, low pixel value. A similar result might be obtained from a relatively lighter coloured surface in conditions of low ambient light. Thus, relying on contrast might give a false indication of low ambient light levels where the image is dominated by a dark surface such as a coal seam.
It has been found that the signal-to-noise ratio can better discriminate between such images. In good ambient light conditions, a dark surface such as a coal seam will tend to produce a relatively higher signal-to-noise ratio than each of the same surface, and a lighter surface, in poor ambient light conditions.
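This discrimination can be checked numerically with synthetic frames (these arrays are assumptions for illustration, not the patent's recorded data): the same dark surface has nearly the same average intensity in good and poor light, so average intensity alone cannot separate the two, but the average-over-variance SNR can:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: the same dark surface imaged in good ambient light
# (uniformly dark, little pixel-to-pixel noise) and in poor ambient light
# (grainy "pepper-and-salt" character, i.e. much larger variance).
dark_good_light = np.clip(30 + rng.normal(0, 2, (64, 64)), 0, 255)
dark_poor_light = np.clip(30 + rng.normal(0, 15, (64, 64)), 0, 255)

def snr(img):
    """Ratio of average to variance of the pixel values."""
    return img.mean() / img.var()

# The two frames have almost identical average intensity, yet the
# well-lit frame yields a markedly higher SNR.
print(snr(dark_good_light) > snr(dark_poor_light))  # True
```

This mirrors the behaviour of the traces discussed below, where increasing ambient light corresponds to increasing SNR.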
In poor ambient light conditions, the image will tend to exhibit a grainy character, also referred to as a “pepper-and-salt” effect because of the random juxtaposition of adjacent, lighter and darker pixels throughout the image. Even where the overall pixel intensity of the image is low - for example, where the whole image is dominated by a dark surface - good ambient light conditions result in a more uniform pixel intensity, so that adjacent pixels tend to have a similar, low pixel intensity - yielding a uniform, dark image as opposed to the “pepper-and-salt” effect observed from a comparable image of the same surface in low light conditions. By way of example, Figs. 4 - 7 show four sequential images 13, designated respectively as A, B, C and D, generated by a camera 2 mounted on a work vehicle 1. The images are spaced apart in time by long intervals, from about half an hour to one or more hours.
Each image A, B, C, D corresponds to frame no. 1 in a sequence of frames F taken over a much shorter time period, the frames being numbered in sequence as shown in the graph of Fig. 8. Here, a frame F means an image 13. About 1576 frames were generated by the camera 2 over each of the four discrete time periods covered by the graph. For each data point, the image data of all of the pixels 14 forming one individual frame F (so, for frame no. 1 in the sequence of frames F, the data of the entire image A, B, C or D respectively) was analysed using the spatial domain method to generate the signal-to-noise ratio SNR, which is plotted on the graph on a scale from 0 to 1.6.
As can be seen, the four different time periods commencing respectively with images A, B, C and D were spaced apart in sequence over a part of a day which began at sunrise (image A) and ended in full daylight (image D), in an open work environment where dark surfaces predominate. The four traces A, B, C and D are defined by the four data sets starting with images A, B, C and D respectively. (Trace D is relatively short because the recording was interrupted during the test. The text that appears in trace D is an artefact of the camera and not relevant to this disclosure.)
Despite the dark surfaces, each trace shows a signal which is consistent over the time period of the graph and, moreover, is clearly distinguished from the other traces. As can be seen, images A, B, C and D represent progressively increasing ambient light levels in the environment of the work vehicle 1, while the traces A, B, C and D show a progressively increasing signal-to-noise ratio SNR.
In the claims, reference numerals and characters are provided in parentheses, purely for ease of reference, and should not be construed as limiting features.

Claims
1. An apparatus (10) for detecting changes in ambient light intensity, comprising: a controller (11) configured to: receive image data (12) from at least one image capture device (2), the image data (12) representing sequential images (13) of at least a part of a field of view (2’) of the at least one image capture device (2) and including for each image (13) a pixel value for each of a plurality of pixels (14) forming the respective image (13); process the image data (12) in accordance with an algorithm to define at least one region (R1, R2, R3) of the field of view (2’) based on a spatial distribution of different pixel values in the received images (13), wherein the at least one region (R1, R2, R3) defines at least some of the plurality of pixels (14) and represents less than all of the field of view (2’); process the image data (12) of the at least one region (R1, R2, R3) to determine iteratively a signal-to-noise ratio (SNR) of the image data (12), wherein the signal-to-noise ratio (SNR) is a ratio of average to variance of the pixel values of at least some of the plurality of pixels (14); and generate a control output (15) responsive to changes in ambient light intensity, wherein the control output (15) is based on the signal-to-noise ratio (SNR).
2. An apparatus (10) according to claim 1, including at least one light source (16, 17, 18), wherein the control output (15) is arranged to control the at least one light source (16, 17, 18) responsive to changes in ambient light intensity.
3. An apparatus (10) according to claim 1, including a display (3) configured to display images of the field of view (2’), generated by the at least one image capture device (2).
4. An apparatus (10) according to claim 1, wherein at least one region (R1, R2, R3) is selected from the field of view (2’) to define said at least some of the plurality of pixels (14), the at least one region (R1, R2, R3) representing less than all of the field of view (2’).
5. An apparatus (10) according to claim 4, wherein the at least one region (R1, R2, R3) includes a reference surface (20) forming a constant element of the field of view (2’).
6. An apparatus (10) according to claim 4, wherein the controller (11) is configured to determine the signal-to-noise ratio (SNR) in a temporal domain, as the ratio of average to variance of different pixel values for each pixel (14) of said at least some of the plurality of pixels (14), over a plurality of sequential images (13) of the selected at least one region (R1, R2, R3), over a short reference time period (δT).
7. An apparatus according to claim 1, wherein the controller (11) is configured to determine the signal-to-noise ratio (SNR) in a spatial domain, as the ratio of average to variance across said at least some of the plurality of pixels (14), for each of the images (13).
8. An apparatus (10) according to claim 7, wherein at least one region (R1, R2, R3) is selected from the field of view (2’) to define said at least some of the plurality of pixels (14), the at least one region (R1, R2, R3) representing less than all of the field of view (2’).
9. An apparatus (10) according to claim 7, wherein each image (13) is an image of all of the field of view (2’), and for each image (13), said at least some of the plurality of pixels (14) include all of the plurality of pixels (14) forming the respective image (13).
10. An apparatus (10) according to claim 1, wherein the controller (11) is configured to determine the signal-to-noise ratio (SNR) iteratively over a standard reference time period (ΔT), and to generate the control output (15) responsive to a change in the signal-to-noise ratio (SNR) only if the change persists over the standard reference time period (ΔT).
11. A vehicle (1) including an apparatus (10) according to claim 1, at least one image capture device (2) mounted on the vehicle (1) for generating the image data (12), and at least one light source (16, 17, 18) mounted on the vehicle (1), wherein the control output (15) is arranged to control the at least one light source (16, 17, 18) responsive to changes in ambient light intensity.
12. A vehicle (1) according to claim 11, wherein the at least one image capture device includes a reversing camera (2), the field of view (2’) of the reversing camera (2) being positioned behind the vehicle (1), the vehicle (1) further including a display (3) for displaying images (13) of the field of view (2’), generated by the reversing camera (2), to a driver of the vehicle (1).
13. A vehicle (1) according to claim 11, wherein the field of view (2’) of the at least one image capture device (2) includes a part (20) of the vehicle (1).
14. A vehicle (1) according to claim 13, wherein the at least one light source (17) is arranged to illuminate said part of the vehicle (20).
15. A method for detecting changes in ambient light intensity, including: receiving image data (12) from at least one image capture device (2), the image data (12) representing sequential images (13) of at least a part of a field of view (2’) of the at least one image capture device (2) and including for each image (13) a pixel value for each of a plurality of pixels (14) forming the respective image (13); processing the image data (12) to determine, iteratively, a signal-to-noise ratio (SNR) of the image data (12), the signal-to-noise ratio (SNR) being a ratio of average to variance of the pixel values of at least some of the plurality of pixels (14); and generating a control output (15) responsive to changes in ambient light intensity, wherein the control output (15) is based on the signal-to-noise ratio (SNR).
16. The method for detecting changes in ambient light intensity according to claim 15, further comprising: using the control output (15) to control operation of a light source (17).
EP21703141.8A 2020-02-03 2021-01-28 Vehicle lighting control using image processing of changes in ambient light intensity Pending EP4100871A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2001431.2A GB2591518B (en) 2020-02-03 2020-02-03 Vehicle lighting control using image processing of changes in ambient light intensity
PCT/EP2021/025032 WO2021156002A1 (en) 2020-02-03 2021-01-28 Vehicle lighting control using image processing of changes in ambient light intensity

Publications (1)

Publication Number Publication Date
EP4100871A1 (en) 2022-12-14

Family

ID=69800213


Country Status (4)

Country Link
US (1) US20230049522A1 (en)
EP (1) EP4100871A1 (en)
GB (1) GB2591518B (en)
WO (1) WO2021156002A1 (en)



Legal Events

17P Request for examination filed

Effective date: 20220817

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR