WO2015055737A1 - Method and system for determining a reflection property of a scene - Google Patents


Info

Publication number
WO2015055737A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
scene
image
pixel
traffic
Application number
PCT/EP2014/072154
Other languages
French (fr)
Inventor
Kenneth Jonsson
David TINGDAHL
Original Assignee
Cipherstone Technologies Ab
Application filed by Cipherstone Technologies Ab filed Critical Cipherstone Technologies Ab
Priority to EP14789210.3A priority Critical patent/EP3058508A1/en
Publication of WO2015055737A1 publication Critical patent/WO2015055737A1/en

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats

Definitions

  • the present invention relates to a scene monitoring system and to a method of determining a reflection property of a scene.
  • In a typical configuration, a few dedicated light sensors (e.g. lux meters) provide light measurements for a whole city, so the lighting control system cannot take local variations in light levels into account. Local variations may occur due to variations in cloud coverage and shadowing introduced by buildings or vegetation.
  • JP2001-148295 discloses a system with a light source, a road luminance measuring device to measure the road luminance from a road segment illuminated by the light source, and control equipment to control the output of the lighting equipment so as to obtain a desired road luminance based on the sensing data from the road luminance measuring device.
  • a method of determining a reflection property of a scene is provided, comprising the steps of: acquiring a first image of the scene, the first image comprising a first plurality of pixels, each pixel having a pixel position and a pixel value; evaluating the pixel value of each of the pixels in respect of a pixel classification criterion; and determining, if a first set of the pixels fulfill the pixel classification criterion, the reflection property of at least a first sub-section of the scene based on at least one pixel comprised in the first set of pixels.
  • the first set of pixels may be the pixels that fulfill the pixel classification criterion or the pixels that fail to fulfill the pixel classification criterion.
  • the first set may, furthermore, be a sub-set, or may be a set including all of the evaluated pixels or all of the pixels of the first image.
  • the pixel classification criterion may be related to a property indicative of movement, and pixels fulfilling the pixel classification criterion may be pixels that are determined to correspond to stationary portions of the scene.
  • the first image may be discarded if it is determined that there is movement in the scene.
  • the reflection property may be any property indicative of a reflection of light from the scene, such as, for example, luminance or illuminance of the light reflected from the scene and/or reflectance of one or several surfaces in the scene.
  • the scene may be an outdoor scene or an indoor scene, or a scene that is partly outdoors and partly indoors.
  • the scene may advantageously be a traffic-related scene, and a reflection property, such as the luminance, of the surface on which vehicles or persons move may be determined through image analysis.
  • traffic scenes include roads, airport runways or taxi areas, pedestrian or cyclist areas, parking lots/car parks, indoor pedestrian areas such as in subway stations or malls etc.
  • a series of images may be acquired and the pixels of one or several of the images in the series may be classified in accordance with the above-mentioned pixel classification criterion.
  • the pixel classification criterion may, for example, be related to movement, color, texture, shape, object detection, or a combination of two or more thereof.
  • the still pixels may advantageously be identified through image analysis, in which the acquired image may be compared with one or several previously acquired images and/or a previously determined background representation of the scene.
  • the still pixels of the two-dimensional array of pixels may also be identified using any other suitable method.
  • the scene may additionally be monitored using another image sensor, or another monitoring method, such as radar, lidar or ultrasonic technology, which may be used to determine where in the scene motion occurs and/or has occurred.
  • the reflection property determined using the method according to embodiments of the present invention may, for instance, be used for controlling illumination settings and/or to communicate to vehicles to allow adaptation of various settings to the current reflection properties of the road surface and/or monitor light-source performance.
  • The present invention is based on the realization that an improved determination of a reflection property of a scene can be achieved by identifying still pixels, such as by classifying pixels of an image of the scene into still pixels and motion pixels, and then using a set of the still pixels for determining the reflection property.
  • the ground surface may, for example, be a road surface or the surface of an airport apron.
  • the method according to embodiments of the present invention may additionally include the steps of acquiring a second image of said scene, said second image comprising a second plurality of pixels, each pixel having a pixel position and a pixel value; evaluating at least said pixel value of each of said pixels in respect of said pixel classification criterion; determining, if a second set of said pixels fulfill said pixel classification criterion, said reflection property of at least a second sub-section of said scene different from said first sub-section based on at least one pixel comprised in said second set of pixels.
  • portions of the region of interest of the scene may be temporarily occluded.
  • the reflection property in one or several of the measurement points may be determined based on a first image, and the reflection property in one or several other measurement points may be determined based on a second image (in which the measurement point(s) is/are not occluded).
  • complete measurement data may be built up using several images, where the reflection property for different measurement points/regions of interest are determined based on different images.
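The build-up of complete measurement data across several images can be sketched as follows. This is a minimal illustration, not part of the patent; `accumulate_grid`, the per-image reading dicts and the point labels are hypothetical stand-ins for the output of the image analyzer.

```python
def accumulate_grid(readings_per_image, grid_points):
    """Build complete measurement data from several images.

    readings_per_image: iterable of dicts, each mapping a measurement
    point that was unoccluded in that image to its luminance reading.
    Returns a complete dict once every grid point has a value,
    or None if some point was occluded in every image.
    """
    luminance = {}
    for readings in readings_per_image:
        for point, value in readings.items():
            # Keep the first unoccluded reading obtained for each point.
            luminance.setdefault(point, value)
        if all(p in luminance for p in grid_points):
            return luminance
    return None
```

For the situation of Figs 4a-b, a first image would contribute points 9a-c and a second image would fill in the remaining point 9d.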
  • the scene monitoring system carrying out the method according to embodiments of the present invention may be mounted on a structure that may move in the wind (at least in case of strong wind).
  • the method may additionally include the steps of monitoring the visibility in the vicinity of the image sensor and providing a signal indicative of the visibility.
  • image analysis may be used to provide a measure of scattering in the medium between the image sensor and the scene. For example, the presence of fog or smoke changes the frequency content of the image.
  • The quality of the measurement may vary with factors such as the visibility and the speed and density of vehicles (at high speed it may be more difficult to determine the exact position of vehicles, especially when density estimation and luminance estimation are interleaved). Moreover, the quality or accuracy of the measurement may vary depending on the light level.
  • In the case where the scene is a traffic scene, the method may further comprise the steps of acquiring a third image of the scene, the third image being formed by a two-dimensional array of pixels; and determining a traffic density in the traffic scene based on the third image.
  • By traffic density should be understood not only the density of cars but also, depending on the field of application, the density of pedestrians, or of vehicles other than cars, such as trucks, airplanes or bicycles etc.
  • The third image used for determining the traffic density may be the same image used to determine the reflection property, i.e. the above-mentioned first image/second image, in which case the traffic density is determined based on an analysis of that image.
  • the method may further comprise the step of providing a signal indicative of the reflection property.
  • such a signal may advantageously be provided to a lighting control system, that may be global or local and/or wirelessly to receivers on passing vehicles to allow the lighting control system of the vehicles to adapt the properties of the light emitted by the headlights and/or other settings in the vehicle.
  • There is also provided a scene monitoring system for determining a reflection property of a scene, the scene monitoring system comprising: an image sensor arranged to receive light reflected from the scene; an image acquisition unit coupled to the image sensor for acquiring images, each being formed by an array of pixels, from the image sensor, wherein each pixel has a pixel position and a pixel value; a pixel classifier coupled to the image acquisition unit and configured to evaluate, in each of the images, at least the pixel value of each of the pixels in respect of a pixel classification criterion; and an image analyzer coupled to the pixel classifier and configured to determine the reflection property of at least a sub-section of the scene based on a set of pixels fulfilling the pixel classification criterion.
  • The image acquisition unit, the motion detector and the image analyzer may be realized as hardware components or as software running on processing circuitry.
  • processing circuitry may be provided as one device or several devices working together.
  • the present invention thus relates to a method of determining a reflection property of a scene, such as the luminance of a road surface.
  • the method comprises the steps of acquiring a first image of the scene, and classifying pixels of the first image into a first set of motion pixels corresponding to reflection of light from a portion of the scene that is in motion and a first set of still pixels corresponding to reflection of light from a portion of the scene that is stationary.
  • Subsequently, the reflection property, such as luminance, is determined based on still pixels of the first image corresponding to a first sub-section, which may for example correspond to a measurement point in a measurement grid according to the standard EN 13201.
  • Fig 1 schematically shows an exemplary embodiment of the illumination system according to the invention, arranged to provide adaptive illumination of a road;
  • Fig 2 is a schematic illustration of the illumination system in fig 1 ;
  • Fig 3 is a flow-chart outlining an embodiment of the method according to the present invention.
  • Figs 4a-b are schematic illustrations of a monitored section of the road with measurement points of a predetermined measurement grid.

Detailed Description of Example Embodiments
  • Examples of scenes include airport scenes, parking lots/car parks, pedestrian and cycle paths, and similar scenes where an indoor or outdoor region is illuminated and a flow of vehicles or people is expected across the region.
  • Examples of reflection properties include glare (estimation of loss of visibility in conditions with excessive contrast), uniformity (luminance/illuminance variations across or along the road surface), and surround ratio (the ratio between road surface measurements and surrounding surface measurements).
  • Fig 1 schematically illustrates an exemplary embodiment of the illumination system 1 according to the present invention arranged to provide adaptive illumination of a road 2.
  • the illumination system 1 comprises a luminance monitoring system 3 and controllable light-emitting devices 4a-d, here in the form of street lights.
  • The luminance monitoring system 3 is mounted on a supporting structure 5, which is also shown to support some of the light-emitting devices 4a-b. Note that, in some applications, it may be preferable to mount the luminance monitoring system 3 on a structure separate from the one supporting the light-emitting devices 4a-b.
  • The road 2, which is here shown as a motorway, is provided with markings 6 that separate the lanes.
  • The luminance of the road surface 7 may be measured in a measurement region 8 comprising a set of measurement points 9a-d distributed across the road surface 7.
  • A similar grid of measurement points is prescribed in the European standard EN 13201.
  • the illumination system 1 in fig 1 will now be described in greater detail with reference to the schematic block diagram in fig 2.
  • The illumination system 1 comprises a two-dimensional image sensor 11, an image acquisition unit 12, a memory 13, a motion detector 14, an image analyzer 15, an illumination control unit 16 and light-emitting devices 4a-d.
  • The image sensor 11, which is arranged to receive light reflected from the relevant scene, in this case the road surface 7, may be calibrated against a state-of-the-art luminance meter to provide highly accurate luminance estimates with respect to one of the human vision models, i.e. the so-called photopic, mesopic or scotopic models.
  • These photometric models explain how the photoreceptors in the human eye respond in daylight, twilight and nocturnal conditions.
  • With an RGB color sensor, we can obtain a response curve close to the luminosity function of a chosen photometric model by weighting the contributions from the individual color channels, possibly combined with an optical band-pass filter.
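As an illustration of such channel weighting, the sketch below uses the Rec. 709 luma coefficients, which approximate the CIE photopic luminosity function for a linear RGB sensor. The patent does not prescribe particular weights; a deployed system would calibrate them against a reference luminance meter as described above.

```python
import numpy as np

# Rec. 709 weights (0.2126 R + 0.7152 G + 0.0722 B) approximate the
# photopic luminosity function for linear RGB; an illustrative choice,
# not the calibrated weights the system would actually use.
PHOTOPIC_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def relative_luminance(rgb_image):
    """Collapse an (H, W, 3) linear RGB image to relative luminance."""
    return rgb_image @ PHOTOPIC_WEIGHTS
```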
  • The acquisition unit 12 is coupled to the image sensor 11 for acquiring images from the image sensor 11.
  • the acquisition unit 12 further provides acquired images to the memory 13 and to the motion detector 14.
  • the motion detector 14 is configured to classify pixels of images as motion pixels and still pixels based on a comparison with previously acquired images or a background representation stored in memory 13.
  • the motion detector 14 provides the acquired image and information about the pixel classification to the image analyzer 15, which determines the luminance of the measurement points 9a-d based on one or several images, and provides a signal indicative of the luminance of the measurement points 9a-d to the illumination control unit 16, which in turn controls the light-emitting devices 4a-d based on the determined luminance of the road surface 7.
  • the different computations involved in determining the luminance may be distributed between the various devices comprised in the illumination system 1 and a centralized server.
  • the only communication channel may be over a power line at low bandwidth and the processing of image data may be performed within the embedded environment of the image sensor.
  • the communication channel may allow the transmission of single images, image sequences, or partially processed data and some computations may be performed on a centralized server.
  • Partially processed data may include filtered, transformed or compressed images. It may also include extracted image features.
  • When mounted on a street light pole 5, as is schematically indicated in fig 1, or embedded in the housing of a luminaire, the light sensor may communicate with an illumination control unit placed inside the light pole. The control unit will then relay the information to the centralized lighting control system.
  • The communication protocol between the light sensor and the illumination control unit may be based on e.g. LonTalk (ISO/IEC 14908), TCP/IP, RS232 or DALI.
  • The protocol between the illumination control unit and the centralized control system may be based on e.g. LonTalk (ISO/IEC 14908), ZigBee (based on IEEE 802.15.4) or Z-Wave. In some applications, it may be possible to support multiple protocols, including both power line communication protocols and wireless protocols.
  • The image sensor comprised in the luminance monitoring system 3 may be mounted above the road 2, looking down at the measurement region 8 at an angle similar to the angle between the line of sight of a typical driver and the road surface 7. From this view, we can capture an image and warp the image to a fronto-parallel or bird's-eye view in which we can lay out the measurement grid 9a-d and perform the measurements.
  • The measurement region 8 may be detected using manual, automatic or semi-automatic techniques as part of the installation of the luminance monitoring system 3. For example, after mounting the image sensor 11 on e.g. an overhead gantry, the installation personnel may configure the image sensor 11 on site using a laptop computer or similar. Alternatively, the image sensor 11 may be configured remotely by an operator at a central command center. The configuration may include selecting boundary points of the region of interest in an image as presented in a graphical user interface. Also, the operator may provide the location of the light sources of interest through the graphical user interface. Alternatively, the system automatically localizes the regions and structures of interest and the operator is then given an opportunity to verify the result.
  • In a first step S1, images are captured by the image sensor 11 using a relatively short exposure time, such as 0.01-0.05 seconds (depending on the sensitivity of the pixel elements of the image sensor 11).
  • In step S2, the images are analyzed using the motion detector 14 and the image analyzer 15 to detect and track vehicles. Thereafter, in step S3, the current real-time traffic density is determined.
  • the detection of the vehicles may advantageously be performed by frame differencing (taking the difference between consecutive images to detect moving objects), background subtraction (subtracting a known background image from the input image to detect non-background objects), feature detection (detecting vehicles as groups of features with certain relationships), motion analysis (detecting vehicles as moving objects with a constrained motion), or a combination of these techniques.
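A minimal sketch of the frame-differencing technique mentioned above, flagging pixels that changed between consecutive grayscale frames; the function name and threshold value are illustrative assumptions, not from the patent:

```python
import numpy as np

def detect_motion_mask(frame, prev_frame, threshold=25):
    """Frame differencing: flag pixels whose intensity changed by more
    than `threshold` between two consecutive grayscale frames.
    Returns a boolean mask the same shape as the frames."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold
```

In practice, such a mask would be cleaned up (e.g. morphologically) and combined with background subtraction or feature detection, as the combination of techniques above suggests.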
  • The tracking may, for example, be performed using region-based methods (tracking vehicles as blobs), active contour-based methods (tracking vehicles by following their bounding contour), or feature-based methods (tracking distinctive features of the vehicles).
  • the tracking procedure may generate a motion trajectory for each vehicle in the sequence.
  • the background subtraction may be performed with a static background image or with a background representation updated over time to reflect e.g. seasonal variations (color of vegetation, snow coverage etc).
  • Real-time estimation of local traffic density using an image sensor involves a number of challenges including extreme weather conditions. For example, strong wind may cause the mounting pole 5 to move and the algorithms may have to detect the movements and apply compensation if required. This can be performed by e.g. continuously monitoring the location of structures that are expected to be fixed (e.g. road signs, road markings 6, road edge marker posts and light poles). If a movement is detected, the size of the movement can be estimated and the vehicle tracker (and luminance estimator) updated with the information.
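Movement of the mounting pole can, for example, be estimated by re-locating an image patch around a structure expected to be fixed (a road sign, marking or light pole). The sketch below does a brute-force sum-of-squared-differences search; the function name and search strategy are illustrative assumptions, and a deployed system might use e.g. phase correlation instead.

```python
import numpy as np

def estimate_shift(image, landmark, expected_row, expected_col, search=5):
    """Locate a small template `landmark` (e.g. a road-sign patch) near
    its expected position and return the (row, col) displacement.

    Brute-force SSD search over +/- `search` pixels around the
    expected position of the landmark's top-left corner."""
    h, w = landmark.shape
    best, best_shift = None, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = expected_row + dr, expected_col + dc
            patch = image[r:r + h, c:c + w]
            if patch.shape != landmark.shape:
                continue  # candidate window falls outside the image
            score = np.sum((patch.astype(float) - landmark) ** 2)
            if best is None or score < best:
                best, best_shift = score, (dr, dc)
    return best_shift
```

The estimated shift could then be fed to the vehicle tracker and luminance estimator as compensation, as described above.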
  • the traffic and luminance monitoring system 3 may include functionality for estimating the visibility and transmitting the visibility data to the control system allowing the control system to disregard sensor data when the visibility is too low. Also, the system 3 may output a confidence value for each measurement and the visibility may be incorporated in this value.
  • Perhaps the most important condition to detect and quantify for a visual sensor is the visibility. If the visibility is too low, the traffic and luminance monitoring system 3 cannot provide accurate measurements of e.g. light and traffic density. The visibility may be affected by mist, fog, heavy rain, snow and pollution.
  • One possible way of measuring visibility is to attempt to detect objects at known distances from the image sensor 11. If the distance to an object within the field-of-view of the light sensor has been measured or estimated when installing the device then, if the object can be detected in the image, the distance to the object is a lower bound on the visibility.
  • the accuracy of the measurements will depend on the number of objects available and their distribution across the distance interval of interest. If the image sensor 1 1 is mounted on a light pole 5 and the pole is visible in the image, then high-contrast markings on the pole could provide a way of accurately measuring the visibility, at least in the direction of the pole. Another possibility is to use road markings 6 such as road edge lines and center lines as these typically occur at regular intervals. Also, road edge marker posts or road signs may be used to estimate visibility.
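One hedged sketch of this idea: treat each marker as detected when its local contrast exceeds a threshold, and report the farthest detected marker's distance as a lower bound on the visibility. The Michelson-style contrast measure, threshold and patch encoding are illustrative choices, not from the patent.

```python
import numpy as np

def visibility_lower_bound(image, markers, contrast_threshold=0.05):
    """Estimate a lower bound on visibility from high-contrast markers
    at known distances (e.g. pole markings, road edge lines).

    `markers` is a list of (distance_m, (row0, row1, col0, col1))
    giving each marker's distance and its patch in a grayscale image.
    A marker counts as detected if its local contrast exceeds the
    threshold; the bound is the farthest detected distance (in m)."""
    bound = 0.0
    for distance, (r0, r1, c0, c1) in markers:
        patch = image[r0:r1, c0:c1].astype(float)
        hi, lo = patch.max(), patch.min()
        contrast = (hi - lo) / (hi + lo + 1e-9)  # Michelson contrast
        if contrast > contrast_threshold:
            bound = max(bound, distance)
    return bound
```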
  • Then, in step S4, a first luminance measurement image is acquired using a relatively long exposure time, such as 0.1-1 seconds.
  • Fig 4a is a schematic illustration of the state of the road 2 at the time when the first luminance measurement image is captured. As can be seen in fig 4a, three measurement points 9a-c are currently unoccluded (the road surface 7 is visible in these measurement points), while one measurement point 9d is occluded by a car 20.
  • ground pixels corresponding to the road surface 7 are identified in step S5.
  • The ground pixels correspond to stationary regions that were not previously moving, which excludes temporarily stationary objects (e.g. a vehicle that has stopped).
  • Regions with unfavorable reflectance characteristics, or regions occluded by permanent or semi-permanent structures, may be identified and taken into account in the identification of the ground pixels.
  • regions with unfavorable reflectance characteristics include pools of water which represent specular surfaces.
  • occluding structures may include equipment or signs used in road works.
  • In step S6, the luminance in the unoccluded measurement points 9a-c is determined.
  • In step S7, it is checked whether or not the luminance in all measurement points 9a-d could be determined. If this is not the case, the method again performs steps S1 to S6. This time, the state of the road 2 may be as illustrated in fig 4b, where three measurement points 9a, 9c and 9d are currently unoccluded (the road surface 7 is visible in these measurement points), while one measurement point 9b is occluded by a car 21.
  • Once the luminance in all measurement points 9a-d has been determined, the method proceeds to the next step S8 to provide a control signal to the street lights 4a-d based on the determined traffic density and the luminance in the measurement points 9a-d.
  • the illumination control system 1 may accumulate readings of the luminance and/or traffic density over some time before an adjustment of the illumination settings is made.
  • The traffic and luminance monitoring system 3 may be used for monitoring the condition of the lighting devices 4a-d comprised in the illumination system 1.
  • a computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.


Abstract

The present invention relates to a method of determining a reflection property of a scene, such as the luminance of a road surface. The method comprises the steps of acquiring a first image of the scene, and classifying pixels of the first image into a first set of motion pixels corresponding to reflection of light from a portion of the scene that is in motion and a first set of still pixels corresponding to reflection of light from a portion of the scene that is stationary. Subsequently, the reflection property, such as luminance, is determined based on still pixels of the first image corresponding to a first sub-section, which may for example correspond to a measurement point in a measurement grid according to the standard EN 13201.

Description

METHOD AND SYSTEM FOR DETERMINING A REFLECTION PROPERTY
OF A SCENE
Field of the Invention
The present invention relates to a scene monitoring system and to a method of determining a reflection property of a scene.
Background of the Invention
Illumination of various scenes, outdoors as well as indoors, is important in modern society. For instance, traffic safety is enhanced by adequate illumination of roads etc. However, this ubiquitous illumination is associated with substantial energy consumption, and there is an ongoing effort to reduce the energy consumption associated with illumination to reduce costs and CO2 emissions, while maintaining or increasing safety.
For example, some current intelligent control systems optimize the light source output based on statistics and non-local data. In a typical configuration, a few dedicated light sensors (e.g. lux meters) provide light measurements for a whole city and therefore the lighting control system cannot take local variations in light levels into account. Local variations may occur due to variations in cloud coverage and shadowing introduced by buildings or vegetation.
With the aim of reducing energy consumption while providing sufficient illumination in tunnels, JP2001-148295 discloses a system with a light source, a road luminance measuring device to measure the road luminance from a road segment illuminated by the light source, and control equipment to control the output of the lighting equipment so as to obtain a desired road luminance based on the sensing data from the road luminance measuring device.
Although the system according to JP2001-148295 apparently allows for more precise control of illumination than a system based on statistics and non-local data, there still appears to be room for an improved determination of the luminance of a scene.
Summary
In view of the above-mentioned and other drawbacks of the prior art, it is an object of the present invention to provide an improved system and method for determining a reflection property of a scene.
According to a first aspect of the present invention, it is therefore provided a method of determining a reflection property of a scene, comprising the steps of: acquiring a first image of the scene, the first image comprising a first plurality of pixels, each pixel having a pixel position and a pixel value; evaluating the pixel value of each of the pixels in respect of a pixel classification criterion; and determining, if a first set of the pixels fulfill the pixel classification criterion, the reflection property of at least a first sub-section of the scene based on at least one pixel comprised in the first set of pixels.
The first set of pixels may be the pixels that fulfill the pixel classification criterion or the pixels that fail to fulfill the pixel classification criterion.
The first set may, furthermore, be a sub-set, or may be a set including all of the evaluated pixels or all of the pixels of the first image.
According to one embodiment, the pixel classification criterion may be related to a property indicative of movement, and pixels fulfilling the pixel classification criterion may be pixels that are determined to correspond to stationary portions of the scene.
The reflection property may then be determined based on at least one of the pixels that have been determined to be "still" pixels.
Alternatively, the first image may be discarded if it is determined that there is movement in the scene.
The reflection property may be any property indicative of a reflection of light from the scene, such as, for example, luminance or illuminance of the light reflected from the scene and/or reflectance of one or several surfaces in the scene.
The scene may be an outdoor scene or an indoor scene, or a scene that is partly outdoors and partly indoors. According to embodiments of the present invention, the scene may advantageously be a traffic-related scene, and a reflection property, such as the luminance, of the surface on which vehicles or persons move may be determined through image analysis. Examples of traffic scenes include roads, airport runways or taxi areas, pedestrian or cyclist areas, parking lots/car parks, indoor pedestrian areas such as in subway stations or malls etc.
The first image may advantageously be acquired using a two-dimensional image sensor, such as a CCD or CMOS image sensor typically used in a digital camera. The acquired image may be monochrome or in color, such as RGB color. The acquired image may advantageously be formed by a two-dimensional array of pixels.
According to embodiments of the present invention, a series of images may be acquired and the pixels of one or several of the images in the series may be classified in accordance with the above-mentioned pixel classification criterion.
Moreover, the first image may be formed by capturing a plurality of images and combining these images, for example by summing the pixel values, to provide a combined first image with an improved signal-to-noise ratio.
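Such a combination might look as follows. The sketch averages rather than raw-sums the captures, which keeps the result on the same intensity scale while shrinking uncorrelated sensor noise by roughly the square root of the number of frames; this is an illustrative choice, not the patent's prescribed method.

```python
import numpy as np

def combine_images(frames):
    """Combine several captures of the same scene into one first image
    with improved signal-to-noise ratio by averaging pixel values."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)
```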
The pixel classification criterion may, for example, be related to movement, color, texture, shape, object detection, or a combination of two or more thereof.
According to various embodiments, the first set of pixels may be a first set of still pixels corresponding to reflection of light from a portion of the scene that is stationary.
The still pixels may advantageously be identified through image analysis, in which the acquired image may be compared with one or several previously acquired images and/or a previously determined background representation of the scene.
It should be noted, however, that the still pixels of the two-dimensional array of pixels may also be identified using any other suitable method. For instance, the scene may additionally be monitored using another image sensor, or another monitoring method, such as radar, lidar or ultrasonic technology, which may be used to determine where in the scene motion occurs and/or has occurred.
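A minimal sketch of such a comparison against a background representation, with a slow running-average update of the background so that gradual changes (e.g. seasonal variations) are absorbed; the threshold and update rate are illustrative assumptions:

```python
import numpy as np

def classify_still_pixels(image, background, threshold=20.0, alpha=0.05):
    """Classify pixels of a grayscale image as still or motion by
    comparison with a background representation of the scene.

    Pixels whose value is close to the background are 'still'; the
    background is then updated with a slow running average (alpha)
    at those pixels only. Returns (still_mask, updated_background)."""
    diff = np.abs(image.astype(float) - background.astype(float))
    still = diff <= threshold
    updated = background.astype(float).copy()
    updated[still] = ((1 - alpha) * updated[still]
                      + alpha * image[still].astype(float))
    return still, updated
```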
The reflection property determined using the method according to embodiments of the present invention may, for instance, be used for controlling illumination settings and/or to communicate to vehicles to allow adaptation of various settings to the current reflection properties of the road surface and/or monitor light-source performance.
The above-mentioned first sub-section of the scene may advantageously include a measurement point in a measurement grid according to the standard EN 13201. If the first sub-section of the scene is determined to be stationary, the reflection property, such as luminance, for the first sub-section is determined.
The present invention is based on the realization that an improved determination of a reflection property of a scene can be achieved by identifying still pixels, such as by classifying pixels of an image of the scene into still pixels and motion pixels, and then using a set of the still pixels for determining the reflection property.
In the exemplary case of the scene being a traffic scene, and the reflection property being the luminance of an illuminated road surface, the determination of the luminance can be improved since the luminance of moving objects, such as moving vehicles that temporarily cover a portion of the road surface, can conveniently be disregarded. This provides for a correct determination of the luminance of the road surface, allowing for more precise illumination control, which in turn provides for reduced energy consumption without compromising traffic safety.
According to various embodiments of the present invention, the method may further comprise the step of identifying, among the still pixels, ground pixels corresponding to reflection of light from a ground surface, and the determination of the reflection property may be based on ground pixels corresponding to the first sub-section of the scene.
The ground surface may, for example, be a road surface or the surface of an airport apron.
By identifying the ground pixels, and determining the reflection property based on a set of ground pixels, temporarily stationary objects such as parked cars may be disregarded in the determination of the reflection property. This provides for a further improvement in the determination of the reflection property.
The detection of ground pixels may exclude regions with unfavorable reflectance characteristics (e.g. pools of water and other regions with specular surfaces) or regions occluded by permanent or semi-permanent structures. Depending on the life-time of a specular surface, it may be detected using either the background model (as a deviation from the background) or using image analysis based on expected ground surface characteristics. Pixels with characteristics deviating from the expected characteristics may be excluded from the determination of the reflection property. Similarly, occluding structures may be detected using the background model or deviations from expected ground surface characteristics.
Furthermore, the method according to embodiments of the present invention may additionally include the steps of acquiring a second image of said scene, said second image comprising a second plurality of pixels, each pixel having a pixel position and a pixel value; evaluating at least said pixel value of each of said pixels in respect of said pixel classification criterion; determining, if a second set of said pixels fulfill said pixel classification criterion, said reflection property of at least a second sub-section of said scene different from said first sub-section based on at least one pixel comprised in said second set of pixels.
As was mentioned above, portions of the region of interest of the scene, such as some of the measurement points in a measurement grid according to the standard EN 13201, may be temporarily occluded. In such a case, the reflection property in one or several of the measurement points may be determined based on a first image, and the reflection property in one or several other measurement points may be determined based on a second image (in which the measurement point(s) is/are not occluded). In this manner, complete measurement data may be built up using several images, where the reflection properties for different measurement points/regions of interest are determined based on different images.
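Building up complete measurement data across several images may be sketched as follows; the grid labels, the occlusion test and the measurement function are placeholders invented for illustration only:

```python
def accumulate_measurements(grid_points, images, occluded, measure):
    """Fill in the reflection property for each grid point from the
    first image in which that point is unoccluded."""
    results = {}
    for idx, image in enumerate(images):
        for p in grid_points:
            if p not in results and not occluded(idx, p):
                results[p] = measure(image, p)
        if len(results) == len(grid_points):
            break
    return results

# Toy run: point 'd' is occluded in image 0 but visible in image 1.
grid = ['a', 'b', 'c', 'd']
images = [{'a': 1.0, 'b': 1.1, 'c': 0.9, 'd': None},
          {'a': 1.0, 'b': None, 'c': 0.9, 'd': 1.2}]
res = accumulate_measurements(
    grid, images,
    occluded=lambda i, p: images[i][p] is None,
    measure=lambda img, p: img[p])
```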
The scene monitoring system carrying out the method according to embodiments of the present invention (or at least the imaging sensor that is used) may be mounted on a structure that may move in the wind (at least in case of strong wind).
To provide for an improved detection of the reflection property in such cases, embodiments of the method according to the present invention may further comprise the steps of identifying, in the scene, a fixed structure expected to have a fixed location; detecting an apparent movement of the fixed structure; and determining a set of still pixels corresponding to the first sub-section based on the apparent movement.
By keeping track of relative movement between the imaging sensor and at least one fixed structure (at least more unaffected by wind than the structure on which the imaging sensor is mounted), movement of the imaging sensor can be compensated for. Thereby, a correct classification of still/motion pixels can be maintained. Examples of such fixed structures may include road signs, markings on the road, rocks, buildings, etc. Other examples may be objects, object parts and/or configuration of objects/parts and their appearance in the image. These objects may have been identified as fixed structures through a manual, automatic or semi-automatic analysis of data.
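One possible way of detecting such apparent movement is phase correlation between a reference view of a fixed structure and the current view. The sketch below assumes a pure integer-pixel translation (circular shift); a practical implementation would also handle sub-pixel shifts and partial scene changes:

```python
import numpy as np

def estimate_shift(ref, cur):
    """Estimate the (row, col) translation of `cur` relative to `ref`
    by phase correlation; integer circular shifts are recovered exactly."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half-range to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
cur = np.roll(ref, (3, -2), axis=(0, 1))   # camera sway: 3 px down, 2 px left
shift = estimate_shift(ref, cur)
```

The estimated shift can then be fed back to the vehicle tracker and luminance estimator to maintain a correct still/motion classification.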
In some embodiments, the method may additionally include the steps of monitoring the visibility in the vicinity of the image sensor and providing a signal indicative of the visibility.
The visibility may, for instance, be monitored by evaluating the change over time of the visibility of fixed structures such as buildings, markings in the road, distinctive features in the structure supporting the imaging sensor etc.
Alternatively, or in combination, image analysis may be used to provide a measure of scattering in the medium between the image sensor and the scene. For example, the presence of fog or smoke changes the frequency content of the image.
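A simple illustration of such a frequency-content measure is the fraction of spectral energy above a chosen spatial-frequency cutoff; the cutoff value and the crude box-blur model of fog below are illustrative assumptions:

```python
import numpy as np

def high_frequency_ratio(image, cutoff=0.25):
    """Fraction of spectral energy above `cutoff` cycles/pixel;
    scattering media such as fog or smoke suppress high spatial
    frequencies, so this ratio drops as visibility degrades."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))[None, :]
    radius = np.sqrt(fy ** 2 + fx ** 2)
    return spectrum[radius > cutoff].sum() / spectrum.sum()

rng = np.random.default_rng(2)
sharp = rng.random((64, 64))
# Crude stand-in for fog: circular convolution with a 7x7 box blur.
kernel = np.ones((7, 7)) / 49.0
foggy = np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, s=sharp.shape)).real
ratio_sharp = high_frequency_ratio(sharp)
ratio_foggy = high_frequency_ratio(foggy)
```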
Advantageously, a signal indicative of the quality of the determination of the reflection property may be provided.
The quality of the measurement may vary with factors such as the visibility, the speed and density of vehicles (at high speed it may be more difficult to determine the exact position of vehicles, especially in the case when density estimation and luminance estimation is interleaved). Moreover, the quality or accuracy of the measurement may vary depending on the light level.
According to various embodiments of the method of the present invention, the scene may be a traffic scene, and the method may further comprise the steps of acquiring a third image of the scene, the third image being formed by a two-dimensional array of pixels; and determining a traffic density in the traffic scene based on the third image. Traffic density should be understood to include not only the density of cars but also, depending on the field of application, the density of pedestrians or of vehicles other than cars, such as trucks, airplanes or bicycles.
For a traffic scene, the traffic density may be an important parameter which, in addition to the above-mentioned reflection property, can be used to determine suitable illumination settings to achieve adequate safety in an energy efficient manner.
The third image used for determining the traffic density may advantageously be captured using a substantially shorter exposure time than images used for determining the reflection property, to avoid motion blur.
Alternatively, the traffic density may be determined based on an analysis of the same image used to determine the reflection property, i.e. the above-mentioned first image/second image.
In some applications, it may be possible to find a compromise with respect to exposure time allowing us to measure both luminance and traffic density from the same images. Also, even in cases when the exposure time is long and significant motion blur is present, it may be possible to estimate traffic density by detecting the tracks of headlights and/or tail lights.
According to various embodiments, the step of determining the traffic density may advantageously comprise the steps of detecting individual moving objects; and tracking the individual moving objects through a series of images.
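These two steps may be sketched, under strong simplifying assumptions, as a greedy nearest-neighbour association of per-frame detections to tracks; the distance gate of 20 pixels and the detection coordinates are arbitrary illustrative values:

```python
import math

def track(prev_tracks, detections, max_dist=20.0):
    """Greedy nearest-neighbour association of new detections to
    existing tracks; unmatched detections start new tracks."""
    tracks = {tid: path[:] for tid, path in prev_tracks.items()}
    unmatched = list(detections)
    for tid, path in tracks.items():
        if not unmatched:
            break
        last = path[-1]
        best = min(unmatched, key=lambda d: math.dist(last, d))
        if math.dist(last, best) <= max_dist:
            path.append(best)
            unmatched.remove(best)
    next_id = max(tracks, default=-1) + 1
    for d in unmatched:
        tracks[next_id] = [d]
        next_id += 1
    return tracks

# One vehicle moving across three frames, a second appearing in frame 3.
tracks = {}
for frame_dets in [[(0.0, 0.0)], [(5.0, 0.0)], [(10.0, 1.0), (200.0, 50.0)]]:
    tracks = track(tracks, frame_dets)
```

The number of simultaneously active tracks, divided by the monitored road area, then gives a simple traffic-density estimate.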
Moreover, the method may further comprise the step of providing a signal indicative of the reflection property.
In the case of a traffic scene, such a signal may advantageously be provided to a lighting control system, that may be global or local and/or wirelessly to receivers on passing vehicles to allow the lighting control system of the vehicles to adapt the properties of the light emitted by the headlights and/or other settings in the vehicle.
According to a second aspect of the present invention, there is provided a scene monitoring system for determining a reflection property of a scene, the scene monitoring system comprising: an image sensor arranged to receive light reflected from the scene; an image acquisition unit coupled to the image sensor for acquiring images, each being formed by an array of pixels, from the image sensor, wherein each pixel has a pixel position and a pixel value; a pixel classifier coupled to the image acquisition unit and configured to evaluate, in each of the images, at least the pixel value of each of the pixels in respect of a pixel classification criterion; and an image analyzer coupled to the pixel classifier and configured to determine the reflection property of at least a sub-section of the scene based on a set of pixels fulfilling the pixel classification criterion.
It should be noted that one or several of the image acquisition unit, the pixel classifier and the image analyzer may be realized as hardware components or software running on processing circuitry. Such processing circuitry may be provided as one device or several devices working together.
The scene monitoring system may further comprise memory for storing previously acquired images and/or a previously determined representation of the background such as a background image. Moreover, we may store statistics compiled over time, e.g. statistics over luminance and traffic density estimates, and various system logging information allowing efficient maintenance and repair of the system.
Further embodiments of, and effects obtained through this second aspect of the present invention are largely analogous to those described above for the first aspect of the invention.
In summary, the present invention thus relates to a method of determining a reflection property of a scene, such as the luminance of a road surface. The method comprises the steps of acquiring a first image of the scene, and classifying pixels of the first image into a first set of motion pixels corresponding to reflection of light from a portion of the scene that is in motion and a first set of still pixels corresponding to reflection of light from a portion of the scene that is stationary. Subsequently, the reflection property, such as luminance, is determined based on still pixels of the first image corresponding to a first sub-section, which may for example correspond to a measurement point in a measurement grid according to the standard EN 13201 .
Brief Description of the Drawings
These and other aspects of the present invention will now be described in more detail, with reference to the appended drawings showing an example embodiment of the invention, wherein:
Fig 1 schematically shows an exemplary embodiment of the illumination system according to the present invention arranged to provide adaptive illumination of a road;
Fig 2 is a schematic illustration of the illumination system in fig 1;
Fig 3 is a flow-chart outlining an embodiment of the method according to the present invention; and
Figs 4a-b are schematic illustrations of a monitored section of the road with measurement points of a predetermined measurement grid.
Detailed Description of Example Embodiments
In the present detailed description, various embodiments of the system and method according to the present invention are mainly discussed with reference to a system and method for monitoring the luminance of a road surface and traffic density on the road.
It should be noted that this by no means limits the scope of the present invention, which equally well includes, for example, systems and methods for monitoring luminance only, as well as systems and methods for monitoring other types of scenes and/or other reflection properties.
Other types of scenes include airport scenes, parking lots/car parks, pedestrian and cycle paths, and similar scenes where an indoor or outdoor region is illuminated and a flow of vehicles or people is expected across the region. Other examples of reflection properties include glare (estimation of loss of visibility in conditions with excessive contrast), uniformity (luminance/illuminance variations across or along the road surface), surround ratio (ratio between road surface and surrounding surface measurements), average/minimum/maximum luminance/illuminance and other road surface measurements as defined in EN 13201 and other relevant standards.
Fig 1 schematically illustrates an exemplary embodiment of the illumination system 1 according to the present invention arranged to provide adaptive illumination of a road 2. As is schematically indicated in fig 1, the illumination system 1 comprises a luminance monitoring system 3 and controllable light-emitting devices 4a-d, here in the form of street lights. The luminance monitoring system 3 is mounted on a supporting structure 5, which is also shown to support some of the light-emitting devices 4a-b. Note that, in some applications, it may be preferable to mount the luminance monitoring system 3 on a separate structure from the one used for the light-emitting devices 4a-b.
The road 2, which is here shown as a motorway, is provided with markings 6 that separate the lanes. As is schematically indicated in fig 1, the luminance of the road surface 7 (or over a single lane of the road surface 7) may be measured in a measurement region 8 comprising a set of measurement points 9a-d distributed across the road surface 7. A similar grid of measurement points is prescribed in the European standard EN 13201.
The illumination system 1 in fig 1 will now be described in greater detail with reference to the schematic block diagram in fig 2.
Referring to fig 2, the illumination system 1 comprises a two-dimensional image sensor 11, an image acquisition unit 12, a memory 13, a motion detector 14, an image analyzer 15, an illumination control unit 16 and light-emitting devices 4a-d.
The image sensor 11, which is arranged to receive light reflected from the relevant scene, in this case the road surface 7, may be calibrated against a state-of-the-art luminance meter to provide highly accurate luminance estimates with respect to one of the human vision models, i.e. the so-called photopic, mesopic or scotopic models. These photometric models explain how the photoreceptors in the human eye respond in daylight, twilight and nocturnal conditions. Using an RGB color sensor, we can obtain a response curve close to the luminosity function of a chosen photometric model by weighting the contributions from the individual color channels, possibly combined with an optical band-pass filter.
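As a non-limiting sketch of such channel weighting, the Rec. 709 luma coefficients approximate the photopic luminosity function for linear RGB values; a deployed system would instead use weights obtained by calibration against a reference luminance meter for the chosen photometric model:

```python
def photopic_luminance(r, g, b):
    """Approximate photopic luminance as a weighted sum of linear RGB
    channels. The Rec. 709 weights below are a common starting point;
    calibrated, sensor-specific weights would replace them in practice."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A neutral gray pixel maps to the same luminance value:
y_gray = photopic_luminance(100.0, 100.0, 100.0)
```

Note how the green channel dominates, mirroring the eye's peak photopic sensitivity near 555 nm.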
The acquisition unit 12 is coupled to the image sensor 11 for acquiring images from the image sensor 11. The acquisition unit 12 further provides acquired images to the memory 13 and to the motion detector 14.
The motion detector 14 is configured to classify pixels of images as motion pixels and still pixels based on a comparison with previously acquired images or a background representation stored in memory 13.
The motion detector 14 provides the acquired image and information about the pixel classification to the image analyzer 15, which determines the luminance of the measurement points 9a-d based on one or several images, and provides a signal indicative of the luminance of the measurement points 9a-d to the illumination control unit 16, which in turn controls the light-emitting devices 4a-d based on the determined luminance of the road surface 7.
Depending on the application and the available bandwidth, the different computations involved in determining the luminance may be distributed between the various devices comprised in the illumination system 1 and a centralized server. In some applications, the only communication channel may be over a power line at low bandwidth and the processing of image data may be performed within the embedded environment of the image sensor. In other applications, the communication channel may allow the transmission of single images, image sequences, or partially processed data and some computations may be performed on a centralized server. Partially processed data may include filtered, transformed or compressed images. It may also include extracted image features.
When mounted on a street light pole 5 as is schematically indicated in fig 1, or embedded in the housing of a luminaire, the light sensor may communicate with an illumination control unit placed inside the light pole. The control unit will then relay the information to the centralized lighting control system. The communication protocol between the light sensor and the illumination control unit may be based on e.g. LonTalk (ISO/IEC 14908), TCP/IP, RS232 or DALI. The protocol between the illumination control unit and the centralized control system may be based on e.g. LonTalk (ISO/IEC 14908), ZigBee (based on IEEE 802.15.4) or Z-Wave. In some applications, it may be possible to support multiple protocols including both power line communication protocols and wireless protocols.
Referring again to fig 1, the image sensor comprised in the luminance monitoring system 3 may be mounted above the road 2 looking down at the measurement region 8 at an angle similar to the angle between the line of sight of a typical driver and the road surface 7. From this view, we can capture an image and warp the image to a fronto-parallel or birds-eye view in which we can lay out the measurement grid 9a-d and perform the determination of the luminance of the road surface.
The measurement region 8 may be detected using manual, automatic or semi-automatic techniques as part of the installation of the luminance monitoring system 3. For example, after mounting the image sensor 11 on e.g. an overhead gantry, the installation personnel may configure the image sensor 11 on site using a laptop computer or similar. Alternatively, the image sensor 11 may be configured remotely by an operator at a central command center. The configuration may include selecting boundary points of the region of interest in an image as presented in a graphical user interface. Also, the operator may provide the location of the light sources of interest through the graphical user interface. Alternatively, the system automatically localizes the regions and structures of interest and the operator is then given an opportunity to confirm their position.
An embodiment of the method according to the present invention will now be described with reference to the flow-chart in fig 3, the block diagram in fig 2, and also figs 4a-b.
In a first step S1, images are captured by the image sensor 11 using a relatively short exposure time, such as 0.01-0.05 seconds (depending on the sensitivity of the pixel elements of the image sensor 11). In the subsequent step S2, the images are analyzed using the motion detector 14 and image analyzer 15 to detect and track vehicles. Thereafter, in step S3, the current real-time traffic density is determined.
The detection of the vehicles may advantageously be performed by frame differencing (taking the difference between consecutive images to detect moving objects), background subtraction (subtracting a known background image from the input image to detect non-background objects), feature detection (detecting vehicles as groups of features with certain relationships), motion analysis (detecting vehicles as moving objects with a constrained motion), or a combination of these techniques.
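A minimal sketch combining two of these techniques, frame differencing and background subtraction, is given below; the threshold values are illustrative assumptions:

```python
import numpy as np

def detect_moving(prev_frame, frame, background, diff_thr=15.0, bg_thr=15.0):
    """Flag a pixel as belonging to a moving object only if it differs
    from both the previous frame (frame differencing) and the known
    background (background subtraction)."""
    frame = frame.astype(np.float64)
    moving = np.abs(frame - prev_frame) > diff_thr
    foreground = np.abs(frame - background) > bg_thr
    return moving & foreground

bg = np.full((5, 5), 40.0)
prev_frame = bg.copy()
frame = bg.copy()
frame[2, 2] = 180.0        # a vehicle enters one pixel
mask = detect_moving(prev_frame, frame, bg)
```

Requiring both cues suppresses false detections from sensor noise (which rarely exceeds both thresholds) and from stationary non-background objects (which fail the frame-differencing test).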
The tracking may, for example, be performed using region-based methods (tracking vehicles as blobs), active contour-based methods (tracking vehicles by following their bounding contour), feature-based methods (tracking vehicles as a group of features), model-based methods (tracking vehicles by matching a projected model to the image data), motion-based methods (detecting vehicles as moving objects with a constrained motion), or a combination of these techniques. The tracking procedure may generate a motion trajectory for each vehicle in the sequence.
The specific algorithmic steps and parameters involved in the above- described detection and tracking may vary from one implementation to another. For example, the background subtraction may be performed with a static background image or with a background representation updated over time to reflect e.g. seasonal variations (color of vegetation, snow coverage etc).
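Such a background representation updated over time may, for illustration, be maintained as an exponential running average applied only to still pixels; the learning rate `alpha` is an arbitrary assumed value:

```python
import numpy as np

def update_background(background, frame, still_mask, alpha=0.05):
    """Exponential running average, updated only where the scene is
    still, so slow changes (seasonal color, snow coverage) are absorbed
    while passing vehicles are not."""
    background = background.astype(np.float64).copy()
    background[still_mask] = ((1.0 - alpha) * background[still_mask]
                              + alpha * frame[still_mask])
    return background

# The road slowly brightening (e.g. light snow coverage building up):
bg = np.full((4, 4), 30.0)
frame = np.full((4, 4), 50.0)
still = np.ones((4, 4), dtype=bool)
for _ in range(100):
    bg = update_background(bg, frame, still)
```

A small `alpha` makes the model slow to absorb change, which is exactly the trade-off between tracking seasonal variation and not absorbing temporarily stationary vehicles.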
Note that, depending on the angle between the optical axis of the image sensor and the road surface plane, we may or may not use image analysis methods for detecting and tracking vehicles based on detecting license plates in live video feeds. When the scene geometry makes detection feasible, this may be an attractive way of detecting vehicles. One possibility is to use infrared panels to illuminate the scene without blinding drivers. The retroreflective license plates will give strong reflections allowing robust detection.
Many of the detection and tracking techniques detailed above may be applied to both vehicles and people.
Real-time estimation of local traffic density using an image sensor involves a number of challenges including extreme weather conditions. For example, strong wind may cause the mounting pole 5 to move and the algorithms may have to detect the movements and apply compensation if required. This can be performed by e.g. continuously monitoring the location of structures that are expected to be fixed (e.g. road signs, road markings 6, road edge marker posts and light poles). If a movement is detected, the size of the movement can be estimated and the vehicle tracker (and luminance estimator) updated with the information.
Moreover, mist/fog or pollution may reduce the visibility making traffic density estimation difficult or impossible. The traffic and luminance monitoring system 3 may include functionality for estimating the visibility and transmitting the visibility data to the control system allowing the control system to disregard sensor data when the visibility is too low. Also, the system 3 may output a confidence value for each measurement and the visibility may be incorporated in this value.
Perhaps the most important condition to detect and quantify for a visual sensor is the visibility. If the visibility is too low, the traffic and luminance monitoring system 3 cannot provide accurate measurements of e.g. light and traffic density. The visibility may be affected by mist, fog, heavy rain, snow and pollution. One possible way of measuring visibility is to attempt to detect objects at known distances from the image sensor 11. If the distance to an object within the field-of-view of the light sensor has been measured or estimated when installing the device then, if the object can be detected in the image, the distance to the object is a lower bound on the visibility.
Of course, the accuracy of the measurements will depend on the number of objects available and their distribution across the distance interval of interest. If the image sensor 11 is mounted on a light pole 5 and the pole is visible in the image, then high-contrast markings on the pole could provide a way of accurately measuring the visibility, at least in the direction of the pole. Another possibility is to use road markings 6 such as road edge lines and center lines as these typically occur at regular intervals. Also, road edge marker posts or road signs may be used to estimate visibility.
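The lower-bound idea may be sketched as follows; the landmark names, distances and detection outcome are invented for illustration:

```python
def visibility_lower_bound(landmarks, detected):
    """Landmarks are (name, distance_m) pairs surveyed at installation;
    the farthest currently detectable one bounds the visibility from below."""
    seen = [dist for name, dist in landmarks if detected(name)]
    return max(seen) if seen else 0.0

landmarks = [("pole_marking", 10.0), ("edge_post", 60.0), ("road_sign", 150.0)]
# In fog, suppose only the nearer landmarks remain detectable:
vis = visibility_lower_bound(landmarks, detected=lambda n: n != "road_sign")
```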
In the next step S4, a first luminance measurement image is acquired using a relatively long exposure time, such as 0.1-1 second.
Fig 4a is a schematic illustration of the state of the road 2 at the time when the first luminance measurement image is captured. As can be seen in fig 4a, three measurement points 9a-c are currently unoccluded (the road surface 7 is visible in these measurement points), while one measurement point 9d is occluded by a car 20.
Using information obtained from the images acquired in step S1, ground pixels corresponding to the road surface 7 are identified in step S5. The ground pixels correspond to stationary regions that were not previously moving, thereby excluding temporarily stationary objects (e.g. a vehicle that has stopped).
In addition to removing the traffic from the scene before determining luminance, additional regions with unfavorable reflectance characteristics or regions occluded by permanent or semi-permanent structures may be identified and considered in the identification of the ground pixels. Examples of regions with unfavorable reflectance characteristics include pools of water, which represent specular surfaces. Examples of occluding structures may include equipment or signs used in road works.
In the subsequent step S6, the luminance in the unoccluded measurement points 9a-c is determined.
In step S7, it is checked whether or not the luminance in all measurement points 9a-d could be determined. If this is not the case, the method again performs steps S1 to S6. This time, the state of the road 2 may be as illustrated in fig 4b, where three measurement points 9a, 9c and 9d are currently unoccluded (the road surface 7 is visible in these measurement points), while one measurement point 9b is occluded by a car 21.
Since the luminance in the previously occluded measurement point 9d can now be determined, there are luminance measurements for all measurement points 9a-d and the method proceeds to the next step S8 to provide a control signal to the street lights 4a-d based on the determined traffic density and the luminance in the measurement points 9a-d.
Thereafter, the method again returns to the first step S1 .
In general, it may be desirable to avoid rapid changes in light levels and the illumination control system 1 may accumulate readings of the luminance and/or traffic density over some time before an adjustment of the illumination settings is made.
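For illustration only, such accumulation of readings before adjustment may be sketched as a windowed average with a deadband; the window size, deadband and inverse-scaling control law are all assumptions of the sketch, not part of the invention:

```python
class IlluminationController:
    """Accumulate luminance readings and adjust only on a full window
    whose mean leaves a deadband, so single noisy frames cannot cause
    rapid, visible light-level changes."""

    def __init__(self, level=0.8, deadband=0.1, window=5):
        self.level = level          # current dimming level, 0..1
        self.deadband = deadband
        self.window = window
        self.readings = []

    def report(self, luminance_ratio):
        """luminance_ratio = measured luminance / target luminance."""
        self.readings.append(luminance_ratio)
        if len(self.readings) < self.window:
            return self.level
        mean = sum(self.readings) / len(self.readings)
        self.readings.clear()
        if abs(mean - 1.0) > self.deadband:
            # Scale the dimming level inversely with the averaged excess.
            self.level = min(1.0, max(0.0, self.level / mean))
        return self.level

ctrl = IlluminationController()
for ratio in [1.3, 1.2, 1.4, 1.3, 1.3]:   # road consistently over-lit
    level = ctrl.report(ratio)
```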
In addition to determining luminance and traffic density, the traffic and luminance monitoring system 3 may be used for monitoring the condition of the lighting devices 4a-d comprised in the illumination system 1 .
The person skilled in the art realizes that the present invention by no means is limited to the preferred embodiments described above. On the contrary, many modifications and variations are possible within the scope of the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.

Claims

1. A method of determining a reflection property of a traffic-related scene using an image sensor mounted on a stationary structure, comprising the steps of:
acquiring, using said image sensor, a first image of said traffic-related scene, said first image comprising a first plurality of pixels, each pixel having a pixel position and a pixel value;
evaluating said pixel value of each of said pixels in respect of a pixel classification criterion; and
determining, if a first set of said pixels fulfill said pixel classification criterion, said reflection property of at least a first sub-section of said traffic-related scene based on at least one pixel comprised in said first set of pixels.
2. The method according to claim 1, wherein said pixel classification criterion is based on at least one characteristic selected from a group comprising movement, color, shape, texture and object detection.
3. The method according to claim 1 or 2, wherein said first set of pixels is a first set of still pixels corresponding to reflection of light from a portion of the traffic-related scene that is stationary.
4. The method according to claim 3, further comprising the step of: identifying, among said still pixels, a first set of ground pixels corresponding to reflection of light from a ground surface,
wherein said reflection property is determined based on at least one pixel comprised in said first set of ground pixels.
5. The method according to claim 3 or 4, further comprising the steps of:
identifying, in said scene, a fixed structure expected to have a fixed location in relation to said image sensor;
detecting an apparent movement of said fixed structure; and determining said first set of still pixels based on said apparent movement.
6. The method according to claim 1 or 2, wherein said first set of pixels is a first set of ground pixels corresponding to reflection of light from a ground surface in said traffic-related scene.
7. The method according to any one of the preceding claims, wherein said image sensor is configured to acquire said image according to a human vision model.
8. The method according to any one of the preceding claims, further comprising the steps of:
acquiring a second image of said traffic-related scene using said image sensor, said second image comprising a second plurality of pixels, each pixel having a pixel position and a pixel value;
evaluating at least said pixel value of each of said pixels in respect of said pixel classification criterion;
determining, if a second set of said pixels fulfill said pixel classification criterion, said reflection property of at least a second sub-section of said traffic-related scene different from said first sub-section based on at least one pixel comprised in said second set of pixels.
9. The method according to any one of the preceding claims, further comprising the step of:
determining a traffic density in said traffic-related scene based on at least one acquired image.
10. The method according to claim 9, wherein said step of determining said traffic density comprises the steps of:
detecting individual moving objects; and
tracking said individual moving objects through a series of images.
11. The method according to any one of the preceding claims, further comprising the step of determining an illumination setting based on said reflection property.
12. The method according to claim 11, wherein said illumination setting is further based on a traffic density.
13. The method according to any one of the preceding claims, further comprising the step of:
providing a signal indicative of said reflection property to a lighting control system.
14. A scene monitoring system for determining a reflection property of a traffic-related scene, said scene monitoring system comprising:
an image sensor arranged to receive light reflected from said traffic-related scene;
an image acquisition unit coupled to said image sensor for acquiring images, each being formed by an array of pixels, from said image sensor, wherein each pixel has a pixel position and a pixel value;
a pixel classifier coupled to said image acquisition unit and configured to evaluate, in each of said images, at least said pixel value of each of said pixels in respect of a pixel classification criterion; and
an image analyzer coupled to said pixel classifier and configured to determine said reflection property of at least a sub-section of said traffic-related scene based on a set of pixels fulfilling said pixel classification criterion.
15. The scene monitoring system according to claim 14, wherein: said pixel classifier is a motion detector configured to identify, in each of said images, still pixels corresponding to reflection of light from a portion of the scene that is stationary; and
said image analyzer is configured to determine said reflection property based on still pixels corresponding to said sub-section.
16. The scene monitoring system according to claim 15, wherein said image analyzer is further configured to:
identify, among said still pixels, ground pixels corresponding to reflection of light from a ground surface; and
determine said reflection property based on ground pixels corresponding to said sub-section.
17. The scene monitoring system according to any one of claims 14 to 16, wherein said image analyzer is further configured to determine a traffic density based on at least one image acquired by said image acquisition unit.
18. An illumination system for illuminating a traffic-related scene, said illumination system comprising:
the scene monitoring system according to any one of claims 14 to 17;
a light-emitting device arranged to illuminate said traffic-related scene; and
an illumination control unit coupled to said light-emitting device and to the image analyzer of said scene monitoring system, said illumination control unit being configured to control operation of said light-emitting device based on said reflection property determined by the image analyzer.
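The pipeline claimed in claims 14 to 16 — classify still pixels via a motion detector, restrict to ground pixels, and derive a reflection property from them — can be sketched as follows. This is an illustrative reading of the claims, not the patent's actual implementation: the variance-based motion test, the threshold value, the externally supplied ground mask, and the use of mean intensity as a reflectance proxy are all assumptions.

```python
# Hedged sketch of the claimed still-pixel / ground-pixel pipeline.
# Assumptions: low temporal variance marks a pixel as "still"; the ground
# mask is supplied externally; mean intensity stands in for the reflection
# property. None of these specifics come from the patent text.
import numpy as np

def still_pixel_mask(frames, motion_threshold=10.0):
    """Classify as still those pixels whose value varies little across frames."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.std(axis=0) < motion_threshold

def estimate_reflection(frames, ground_mask, motion_threshold=10.0):
    """Estimate a reflection property as the mean intensity of still ground pixels.

    Returns None when no pixel qualifies (e.g. moving traffic covers the
    ground in every frame), so a controller can fall back to a default.
    """
    still = still_pixel_mask(frames, motion_threshold)
    selected = still & ground_mask
    if not selected.any():
        return None
    mean_image = np.stack([f.astype(np.float64) for f in frames]).mean(axis=0)
    return float(mean_image[selected].mean())
```

In the claimed illumination system (claim 18), the illumination control unit would then map such an estimate to a setting of the light-emitting device; that mapping is outside this sketch.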
PCT/EP2014/072154 2013-10-16 2014-10-15 Method and system for determining a reflection property of a scene WO2015055737A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP14789210.3A EP3058508A1 (en) 2013-10-16 2014-10-15 Method and system for determining a reflection property of a scene

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE1351224-9 2013-10-16
SE1351224 2013-10-16

Publications (1)

Publication Number Publication Date
WO2015055737A1 true WO2015055737A1 (en) 2015-04-23

Family

ID=52827693

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/072154 WO2015055737A1 (en) 2013-10-16 2014-10-15 Method and system for determining a reflection property of a scene

Country Status (2)

Country Link
EP (1) EP3058508A1 (en)
WO (1) WO2015055737A1 (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6044166A (en) * 1995-01-17 2000-03-28 Sarnoff Corporation Parallel-pipelined image processing system
JP2001148295A (en) * 1999-11-22 2001-05-29 Matsushita Electric Works Ltd Lighting apparatus

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GLOYER B ET AL: "Vehicle detection and tracking for freeway traffic monitoring", SIGNALS, SYSTEMS AND COMPUTERS, 1994. 1994 CONFERENCE RECORD OF THE TWENTY-EIGHTH ASILOMAR CONFERENCE ON PACIFIC GROVE, CA, USA 31 OCT.-2 NOV. 1994, LOS ALAMITOS, CA, USA, IEEE COMPUT. SOC, US, vol. 2, 31 October 1994 (1994-10-31), pages 970 - 974, XP010148723, ISBN: 978-0-8186-6405-2, DOI: 10.1109/ACSSC.1994.471604 *
MARTIN ROSER ET AL: "Camera-based bidirectional reflectance measurement for road surface reflectivity classification", INTELLIGENT VEHICLES SYMPOSIUM (IV), 2010 IEEE, IEEE, PISCATAWAY, NJ, USA, 21 June 2010 (2010-06-21), pages 340 - 347, XP031732290, ISBN: 978-1-4244-7866-8 *
S.G. NARASIMHAN ET AL: "Removing weather effects from monochrome images", PROCEEDINGS OF THE 2001 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION. CVPR 2001, vol. 2, 1 January 2001 (2001-01-01), pages II - 186, XP055172268, ISBN: 978-0-76-951272-3, DOI: 10.1109/CVPR.2001.990956 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3154318A1 (en) * 2015-10-09 2017-04-12 Cipherstone Technologies AB System for controlling illumination in a road tunnel and method of estimating a veiling luminance
CN105303844A (en) * 2015-10-26 2016-02-03 南京本来信息技术有限公司 Night highway agglomerate fog automatic detection device on the basis of laser and detection method thereof
WO2019092025A1 (en) * 2017-11-13 2019-05-16 Robert Bosch Gmbh Method and device for providing a position of at least one object
CN111417993A (en) * 2017-11-13 2020-07-14 罗伯特·博世有限公司 Method and device for providing a position of at least one object
US11250695B2 (en) 2017-11-13 2022-02-15 Robert Bosch Gmbh Method and device for providing a position of at least one object
WO2020139553A1 (en) * 2018-12-27 2020-07-02 Continental Automotive Systems, Inc. Stabilization grid for sensors mounted on infrastructure
US11277723B2 (en) 2018-12-27 2022-03-15 Continental Automotive Systems, Inc. Stabilization grid for sensors mounted on infrastructure
CN116699644A (en) * 2023-08-07 2023-09-05 四川华腾公路试验检测有限责任公司 Marking reliability assessment method based on three-dimensional laser radar
CN116699644B (en) * 2023-08-07 2023-10-27 四川华腾公路试验检测有限责任公司 Marking reliability assessment method based on three-dimensional laser radar

Also Published As

Publication number Publication date
EP3058508A1 (en) 2016-08-24

Similar Documents

Publication Publication Date Title
CN101918980B (en) Runway surveillance system and method
US8750564B2 (en) Changing parameters of sequential video frames to detect different types of objects
CN102317952B (en) Method for representing objects of varying visibility surrounding a vehicle on the display of a display device
US20240046689A1 (en) Road side vehicle occupancy detection system
EP3058508A1 (en) Method and system for determining a reflection property of a scene
CN109792829B (en) Control system for a monitoring system, monitoring system and method for controlling a monitoring system
KR101364727B1 (en) Method and apparatus for detecting fog using the processing of pictured image
US20110221906A1 (en) Multiple Camera System for Automated Surface Distress Measurement
CN103931172A (en) Systems and methods for intelligent monitoring of thoroughfares using thermal imaging
EP2659668A1 (en) Calibration device and method for use in a surveillance system for event detection
US11308316B1 (en) Road side vehicle occupancy detection system
JPH07210795A (en) Method and instrument for image type traffic flow measurement
CN112750170A (en) Fog feature identification method and device and related equipment
US20230177724A1 (en) Vehicle to infrastructure extrinsic calibration system and method
Hautiere et al. Meteorological conditions processing for vision-based traffic monitoring
Hautière et al. Daytime visibility range monitoring through use of a roadside camera
CN1573797A (en) Method and apparatus for improving the identification and/or re-identification of objects in image processing
CN106128112A (en) Bayonet vehicle identification at night grasp shoot method
CN110414392A (en) A kind of determination method and device of obstacle distance
CN109308809A (en) A kind of tunnel device for monitoring vehicle based on dynamic image characteristic processing
KR102435281B1 (en) Road accident detection system and method using a lamppost-type structure
CN113936501A (en) Intelligent crossing traffic early warning system based on target detection
KR101934345B1 (en) Field analysis system for improving recognition rate of car number reading at night living crime prevention
CN110942631B (en) Traffic signal control method based on flight time camera
JP2000348184A (en) Method and device for background picture generation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14789210

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2014789210

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014789210

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE