EP3058508A1 - Method and system for determining a reflection property of a scene - Google Patents

Method and system for determining a reflection property of a scene

Info

Publication number
EP3058508A1
EP3058508A1 (application EP14789210.3A)
Authority
EP
European Patent Office
Prior art keywords
pixels
scene
image
pixel
traffic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP14789210.3A
Other languages
English (en)
French (fr)
Inventor
Kenneth Jonsson
David TINGDAHL
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cipherstone Technologies AB
Original Assignee
Cipherstone Technologies AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cipherstone Technologies AB filed Critical Cipherstone Technologies AB
Publication of EP3058508A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats

Definitions

  • the present invention relates to a scene monitoring system and to a method of determining a reflection property of a scene.
  • some current intelligent control systems optimize the light source output based on statistics and non-local data.
  • other systems rely on a few dedicated light sensors (e.g. lux meters).
  • the lighting control system cannot take local variations in light levels into account. Local variations may occur due to variations in cloud coverage and shadowing introduced by buildings or vegetation.
  • JP2001148295 discloses a system with a light source, a road luminance measuring device that measures the road luminance of a road segment illuminated by the light source, and control equipment that controls the output of the lighting equipment, based on the sensed road luminance received from the measuring device, to obtain a desired road luminance.
  • a method of determining a reflection property of a scene, comprising the steps of: acquiring a first image of the scene, the first image comprising a first plurality of pixels, each pixel having a pixel position and a pixel value; evaluating the pixel value of each of the pixels in respect of a pixel classification criterion; and determining, if a first set of the pixels fulfills the pixel classification criterion, the reflection property of at least a first sub-section of the scene based on at least one pixel comprised in the first set of pixels.
  • the first set of pixels may be the pixels that fulfill the pixel classification criterion or the pixels that fail to fulfill the pixel classification criterion.
  • the first set may, furthermore, be a sub-set, or may be a set including all of the evaluated pixels or all of the pixels of the first image.
  • the pixel classification criterion may be related to a property indicative of movement, and pixels fulfilling the pixel classification criterion may be pixels that are determined to correspond to stationary portions of the scene.
  • the reflection property may then be determined based on at least one of the pixels that have been determined to be "still" pixels.
  • the first image may be discarded if it is determined that there is movement in the scene.
  • the reflection property may be any property indicative of a reflection of light from the scene, such as, for example, luminance or illuminance of the light reflected from the scene and/or reflectance of one or several surfaces in the scene.
  • the scene may be an outdoor scene or an indoor scene, or a scene that is partly outdoors and partly indoors.
  • the scene may advantageously be a traffic-related scene, and a reflection property, such as the luminance, of the surface on which vehicles or persons move may be determined through image analysis.
  • traffic scenes include roads, airport runways or taxi areas, pedestrian or cyclist areas, parking lots/car parks, indoor pedestrian areas such as in subway stations or malls etc.
  • the first image may advantageously be acquired using a two-dimensional image sensor, such as a CCD or CMOS image sensor typically used in a digital camera.
  • the acquired image may be monochrome or in color, such as RGB color.
  • the acquired image may advantageously be formed by a two-dimensional array of pixels.
  • a series of images may be acquired and the pixels of one or several of the images in the series may be classified in accordance with the above-mentioned pixel classification criterion.
  • the first image may be formed by capturing a plurality of images and combining these images, for example by summing the pixel values, to provide a combined first image with an improved signal-to-noise ratio.
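The combining step described in the preceding bullet can be sketched as follows. This is an illustrative example, not taken from the patent text: frames are modeled as plain Python 2-D lists of grayscale values, and the frames are averaged (rather than summed) so the combined result stays in the original value range while still improving the signal-to-noise ratio, roughly by the square root of the number of frames for independent noise.

```python
# Sketch (assumed helper names): combine several short-exposure frames into
# one "first image" by averaging pixel values.

def combine_frames(frames):
    """Average a list of equally sized 2-D frames pixel by pixel."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

# Three noisy 2x2 readings of the same static scene.
frames = [
    [[100, 102], [98, 101]],
    [[102, 100], [100, 99]],
    [[101, 101], [99, 100]],
]
combined = combine_frames(frames)
print(combined)  # [[101.0, 101.0], [99.0, 100.0]]
```

In a real system the frames would of course come from the image sensor rather than from hard-coded lists.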
  • the pixel classification criterion may, for example, be related to movement, color, texture, shape, object detection, or a combination of two or more thereof.
  • the first set of pixels may be a first set of still pixels corresponding to reflection of light from a portion of the scene that is stationary.
  • the still pixels may advantageously be identified through image analysis, in which the acquired image may be compared with one or several previously acquired images and/or a previously determined background representation of the scene.
  • the still pixels of the two-dimensional array of pixels may also be identified using any other suitable method.
  • the scene may additionally be monitored using another image sensor, or another monitoring method, such as radar, lidar or ultrasonic technology, which may be used to determine where in the scene motion occurs and/or has occurred.
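As a rough illustration of the classification idea above, a pixel can be labeled "still" when its value stays within a threshold of a previously determined background representation, and "motion" otherwise. The threshold and frame contents below are invented for the example; a deployed system would use a maintained background model, not a single hard-coded frame.

```python
# Illustrative sketch of still/motion pixel classification against a
# background representation (names and threshold are assumptions).

def classify_pixels(image, background, threshold=10):
    """Return a same-shaped boolean mask: True = still pixel."""
    return [[abs(p - b) <= threshold
             for p, b in zip(img_row, bg_row)]
            for img_row, bg_row in zip(image, background)]

background = [[50, 50, 50], [50, 50, 50]]
image      = [[52, 48, 90], [50, 95, 51]]  # 90 and 95: a passing vehicle

still_mask = classify_pixels(image, background)
print(still_mask)  # [[True, True, False], [True, False, True]]
```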
  • the reflection property determined using the method according to embodiments of the present invention may, for instance, be used for controlling illumination settings and/or to communicate to vehicles to allow adaptation of various settings to the current reflection properties of the road surface and/or monitor light-source performance.
  • the above-mentioned first sub-section of the scene may, for example, correspond to a measurement point in a predetermined measurement grid.
  • the reflection property, such as luminance, for the first sub-section is determined.
  • the present invention is based on the realization that an improved determination of a reflection property of a scene can be achieved by identifying still pixels, such as by classifying pixels of an image of the scene into still pixels and motion pixels, and then using a set of the still pixels for determining the reflection property.
  • the determination of the luminance can be improved since the luminance of moving objects, such as moving vehicles that temporarily cover a portion of the road surface, can conveniently be disregarded. This provides for a correct determination of the luminance of the road surface, allowing for more precise illumination control, which in turn provides for reduced energy consumption without compromising traffic safety.
  • the method may further comprise the step of identifying, among the still pixels, ground pixels corresponding to reflection of light from a ground surface, and the determination of the reflection property may be based on ground pixels corresponding to the first sub-section of the scene.
  • the ground surface may, for example, be a road surface or the surface of an airport apron.
  • ground pixels may exclude regions with unfavorable reflectance characteristics (e.g. pools of water and other regions with specular surfaces) or regions occluded by permanent or semi-permanent structures.
  • depending on the lifetime of a specular surface, it may be detected either using the background model (as a deviation from the background) or using image analysis based on expected ground surface characteristics. Pixels with characteristics deviating from the expected characteristics may be excluded from the determination of the reflection property.
  • occluding structures may be detected using the background model or using image analysis based on expected ground surface characteristics.
  • the method according to embodiments of the present invention may additionally include the steps of acquiring a second image of said scene, said second image comprising a second plurality of pixels, each pixel having a pixel position and a pixel value; evaluating at least said pixel value of each of said pixels in respect of said pixel classification criterion; determining, if a second set of said pixels fulfill said pixel classification criterion, said reflection property of at least a second sub-section of said scene different from said first sub-section based on at least one pixel comprised in said second set of pixels.
  • portions of the region of interest of the scene may be temporarily occluded.
  • the reflection property in one or several of the measurement points may be determined based on a first image, and the reflection property in one or several other measurement points may be determined based on a second image (in which the measurement point(s) is/are not occluded).
  • complete measurement data may be built up using several images, where the reflection property for different measurement points/regions of interest are determined based on different images.
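The idea of building up complete measurement data across several images can be sketched as follows. Point names, luminance values and occlusion flags are illustrative only: each measurement point keeps the value from the first image in which it was unoccluded.

```python
# Sketch of accumulating measurement data over several images
# (function and variable names are assumptions, not from the patent).

def update_measurements(measurements, image_values, occluded):
    """Fill in luminance for points not yet measured and not occluded."""
    for point, value in image_values.items():
        if point not in measurements and point not in occluded:
            measurements[point] = value
    return measurements

measurements = {}
# First image: point "9d" is occluded by a vehicle, so its value is skipped.
update_measurements(measurements, {"9a": 1.2, "9b": 1.1, "9c": 1.3, "9d": 5.0},
                    occluded={"9d"})
# Second image: point "9b" occluded, "9d" now free.
update_measurements(measurements, {"9a": 1.2, "9b": 4.8, "9c": 1.3, "9d": 1.0},
                    occluded={"9b"})
print(measurements)  # all four points measured across two images
```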
  • the scene monitoring system carrying out the method according to embodiments of the present invention may be mounted on a structure that may move in the wind (at least in case of strong wind).
  • embodiments of the method according to the present invention may further comprise the steps of identifying, in the scene, a fixed structure expected to have a fixed location; detecting an apparent movement of the fixed structure; and determining a set of still pixels corresponding to the first subsection based on the apparent movement.
  • Such fixed structures may include road signs, markings on the road, rocks, buildings, etc.
  • Other examples may be objects, object parts and/or configuration of objects/parts and their appearance in the image. These objects may have been identified as fixed structures through a manual, automatic or semi-automatic analysis of data.
  • the method may additionally include the steps of monitoring the visibility in the vicinity of the image sensor and providing a signal indicative of the visibility.
  • the visibility may, for instance, be monitored by evaluating the change over time of the visibility of fixed structures such as buildings, markings in the road, distinctive features in the structure supporting the imaging sensor etc.
  • image analysis may be used to provide a measure of scattering in the medium between the image sensor and the scene. For example, the presence of fog or smoke changes the frequency content of the image.
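A hedged sketch of the frequency-content idea: scattering media such as fog blur the image and lower its high-frequency content. A crude proxy for that content is the mean absolute difference between horizontally adjacent pixels; the images and any threshold one might apply to this measure are invented for the example.

```python
# Crude proxy for high-frequency image content (illustrative only;
# a real system might use a proper frequency transform instead).

def high_freq_energy(image):
    total, count = 0, 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count

sharp = [[10, 200, 10, 200], [200, 10, 200, 10]]   # strong edges
foggy = [[100, 105, 102, 104], [103, 101, 104, 102]]  # washed out

print(high_freq_energy(sharp) > high_freq_energy(foggy))  # True
```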
  • a signal indicative of the quality of the determination of the reflection property may be provided.
  • the quality of the measurement may vary with factors such as the visibility and the speed and density of vehicles (at high speed it may be more difficult to determine the exact position of vehicles, especially when density estimation and luminance estimation are interleaved). Moreover, the quality or accuracy of the measurement may vary depending on the light level.
  • the scene may be a traffic scene
  • the method may further comprise the steps of acquiring a third image of the scene, the third image being formed by a two-dimensional array of pixels; and determining a traffic density in the traffic scene based on a third image.
  • by traffic density should be understood not only the density of cars, but also, depending on the field of application, the density of pedestrians or of vehicles other than cars, such as trucks, airplanes or bicycles.
  • the traffic density may be an important parameter which, in addition to the above-mentioned reflection property, can be used to determine suitable illumination settings to achieve adequate safety in an energy efficient manner.
  • the third image used for determining the traffic density may be an image acquired separately from the above-mentioned first and second images.
  • the traffic density may be determined based on an analysis of the same image used to determine the reflection property, i.e. the above-mentioned first image/second image.
  • the step of determining the traffic density may advantageously comprise the steps of detecting individual moving objects; and tracking the individual moving objects through a series of images.
  • the method may further comprise the step of providing a signal indicative of the reflection property.
  • such a signal may advantageously be provided to a lighting control system, which may be global or local, and/or transmitted wirelessly to receivers on passing vehicles, allowing the lighting control system of each vehicle to adapt the properties of the light emitted by the headlights and/or other settings in the vehicle.
  • a scene monitoring system for determining a reflection property of a scene
  • the scene monitoring system comprising: an image sensor arranged to receive light reflected from the scene; an image acquisition unit coupled to the image sensor for acquiring images, each being formed by an array of pixels, from the image sensor, wherein each pixel has a pixel position and a pixel value; a pixel classifier coupled to the image acquisition unit and configured to evaluate, in each of the images, at least the pixel value of each of the pixels in respect of a pixel classification criterion; and an image analyzer coupled to the pixel classifier and configured to determine the reflection property of at least a sub-section of the scene based on a set of pixels fulfilling the pixel classification criterion.
  • the image acquisition unit, the motion detector and the image analyzer may be realized as hardware components or as software running on processing circuitry.
  • processing circuitry may be provided as one device or several devices working together.
  • the scene monitoring system may further comprise memory for storing previously acquired images and/or a previously determined representation of the background such as a background image. Moreover, we may store statistics compiled over time, e.g. statistics over luminance and traffic density estimates, and various system logging information allowing efficient maintenance and repair of the system.
  • the present invention thus relates to a method of determining a reflection property of a scene, such as the luminance of a road surface.
  • the method comprises the steps of acquiring a first image of the scene, and classifying pixels of the first image into a first set of motion pixels corresponding to reflection of light from a portion of the scene that is in motion and a first set of still pixels corresponding to reflection of light from a portion of the scene that is stationary.
  • the reflection property is determined based on still pixels of the first image corresponding to a first sub-section, which may for example correspond to a measurement point in a measurement grid according to the standard EN 13201.
  • Fig 1 schematically shows an exemplary embodiment of the invention: an illumination system arranged to provide adaptive illumination of a road;
  • Fig 2 is a schematic illustration of the illumination system in fig 1;
  • Fig 3 is a flow-chart outlining an embodiment of the method according to the present invention.
  • Figs 4a-b are schematic illustrations of a monitored section of the road with measurement points of a predetermined measurement grid.

Detailed Description of Example Embodiments
  • scenes include airport scenes, parking lots/car parks, pedestrian and cycle paths, and similar scenes where an indoor or outdoor region is illuminated and a flow of vehicles or people is expected across the region.
  • reflection properties include glare (estimation of loss of visibility in conditions with excessive contrast), uniformity (luminance/illuminance variations across or along the road surface), and surround ratio (the ratio between road surface and surrounding surface measurements).
  • Fig 1 schematically illustrates an exemplary embodiment of the illumination system 1 according to the present invention arranged to provide adaptive illumination of a road 2.
  • the illumination system 1 comprises a luminance monitoring system 3 and controllable light-emitting devices 4a-d, here in the form of street lights.
  • the luminance monitoring system 3 is mounted on a supporting structure 5, which is also shown to support some of the light-emitting devices 4a-b. Note that, in some applications, it may be preferable to mount the luminance monitoring system 3 on a structure separate from the one supporting the light-emitting devices 4a-b.
  • the road 2, which is here shown as a motorway, is provided with markings 6 that separate the lanes.
  • the luminance of the road surface 7 may be measured in a measurement region 8 comprising a set of measurement points 9a-d distributed across the road surface 7.
  • a similar grid of measurement points is prescribed in the European standard EN 13201.
  • the illumination system 1 in fig 1 will now be described in greater detail with reference to the schematic block diagram in fig 2.
  • the illumination system 1 comprises a two-dimensional image sensor 11, an image acquisition unit 12, a memory 13, a motion detector 14, an image analyzer 15, an illumination control unit 16 and light-emitting devices 4a-d.
  • the image sensor 11, which is arranged to receive light reflected from the relevant scene, in this case the road surface 7, may be calibrated against a state-of-the-art luminance meter to provide highly accurate luminance estimates with respect to one of the human vision models, i.e. the so-called photopic, mesopic or scotopic models.
  • These photometric models explain how the photoreceptors in the human eye respond in daylight, twilight and nocturnal conditions.
  • with an RGB color sensor, we can obtain a response curve close to the luminosity function of a chosen photometric model by weighting the contributions from the individual color channels, possibly combined with an optical band-pass filter.
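To illustrate the channel-weighting idea, the sketch below uses the Rec. 709 luma coefficients, a standard approximation of the photopic luminosity function for linear RGB. This is an illustrative choice only; a real deployment would calibrate the weights against a luminance meter as described above.

```python
# Illustrative sketch: approximating a photopic luminance response from an
# RGB sensor by weighting the (linear) color channels with the standard
# Rec. 709 coefficients. Not the calibration procedure from the text.

def relative_luminance(r, g, b):
    """Linear RGB components in [0, 1] -> approximate relative luminance."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(relative_luminance(1.0, 1.0, 1.0))  # white: weights sum to ~1.0
print(relative_luminance(0.0, 1.0, 0.0))  # green dominates the response
```

The green channel carries most of the weight because the human photopic response peaks in the green part of the spectrum.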
  • the acquisition unit 12 is coupled to the image sensor 11 for acquiring images from the image sensor 11.
  • the acquisition unit 12 further provides acquired images to the memory 13 and to the motion detector 14.
  • the motion detector 14 is configured to classify pixels of images as motion pixels and still pixels based on a comparison with previously acquired images or a background representation stored in memory 13.
  • the motion detector 14 provides the acquired image and information about the pixel classification to the image analyzer 15, which determines the luminance of the measurement points 9a-d based on one or several images, and provides a signal indicative of the luminance of the measurement points 9a-d to the illumination control unit 16, which in turn controls the light-emitting devices 4a-d based on the determined luminance of the road surface 7.
  • the different computations involved in determining the luminance may be distributed between the various devices comprised in the illumination system 1 and a centralized server.
  • the only communication channel may be over a power line at low bandwidth and the processing of image data may be performed within the embedded environment of the image sensor.
  • the communication channel may allow the transmission of single images, image sequences, or partially processed data and some computations may be performed on a centralized server.
  • Partially processed data may include filtered, transformed or compressed images. It may also include extracted image features.
  • When mounted on a street light pole 5, as schematically indicated in fig 1, or embedded in the housing of a luminaire, the light sensor may communicate with an illumination control unit placed inside the light pole. The control unit then relays the information to the centralized lighting control system.
  • the communication protocol between the light sensor and the illumination control unit may be based on e.g. LonTalk (ISO/IEC 14908), TCP/IP, RS232 or DALI.
  • the protocol between the illumination control unit and the centralized control system may be based on e.g. LonTalk (ISO/IEC 14908), ZigBee (based on IEEE 802.15) or Z-Wave. In some applications, it may be possible to support multiple protocols including both power line communication protocols and wireless protocols.
  • the image sensor comprised in the luminance monitoring system 3 may be mounted above the road 2 looking down at the measurement region 8 at an angle similar to the angle between the line of sight of a typical driver and the road surface 7. From this view, we can capture an image and warp the image to a fronto-parallel or birds-eye view in which we can lay out the measurement grid 9a-d and perform the luminance measurements.
  • the measurement region 8 may be detected using manual, automatic or semi-automatic techniques as part of the installation of the luminance monitoring system 3. For example, after mounting the image sensor 11 on e.g. an overhead gantry, the installation personnel may configure the image sensor 11 on site using a laptop computer or similar. Alternatively, the image sensor 11 may be configured remotely by an operator at a central command center. The configuration may include selecting boundary points of the region of interest in an image as presented in a graphical user interface. Also, the operator may provide the location of the light sources of interest through the graphical user interface. Alternatively, the system automatically localizes the regions and structures of interest and the operator is then given an opportunity to verify and adjust the result.
  • in a first step S1, images are captured by the image sensor 11 using a relatively short exposure time, such as 0.01-0.05 seconds (depending on the sensitivity of the pixel elements of the image sensor 11).
  • in step S2, the images are analyzed using the motion detector 14 and the image analyzer 15 to detect and track vehicles. Thereafter, in step S3, the current real-time traffic density is determined.
  • the detection of the vehicles may advantageously be performed by frame differencing (taking the difference between consecutive images to detect moving objects), background subtraction (subtracting a known background image from the input image to detect non-background objects), feature detection (detecting vehicles as groups of features with certain relationships), motion analysis (detecting vehicles as moving objects with a constrained motion), or a combination of these techniques.
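A minimal frame-differencing sketch, one of the detection techniques listed above: pixels whose value changes by more than a threshold between two consecutive frames are flagged as belonging to moving objects. The threshold and frame values are invented for the example.

```python
# Frame differencing (illustrative): flag pixels that changed significantly
# between two consecutive frames as candidate moving-object pixels.

def frame_difference(prev, curr, threshold=20):
    return [[abs(c - p) > threshold
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]

prev = [[30, 30, 30], [30, 30, 30]]
curr = [[30, 120, 30], [30, 125, 30]]  # a bright vehicle entering the frame

motion = frame_difference(prev, curr)
print(motion)  # [[False, True, False], [False, True, False]]
```

In practice the flagged pixels would then be grouped into blobs and handed to the tracker.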
  • the tracking may, for example, be performed using region-based methods (tracking vehicles as blobs), active contour-based methods (tracking vehicles by following their bounding contour), feature-based methods
  • the tracking procedure may generate a motion trajectory for each vehicle in the sequence.
  • the background subtraction may be performed with a static background image or with a background representation updated over time to reflect e.g. seasonal variations (color of vegetation, snow coverage etc).
  • Real-time estimation of local traffic density using an image sensor involves a number of challenges including extreme weather conditions. For example, strong wind may cause the mounting pole 5 to move and the algorithms may have to detect the movements and apply compensation if required. This can be performed by e.g. continuously monitoring the location of structures that are expected to be fixed (e.g. road signs, road markings 6, road edge marker posts and light poles). If a movement is detected, the size of the movement can be estimated and the vehicle tracker (and luminance estimator) updated with the information.
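The sway-compensation idea above can be sketched as follows: estimate the apparent image-space offset of structures that should be fixed (signs, markings), then shift the measurement grid by the same amount. All coordinates and names below are invented for the example.

```python
# Sketch of compensating for camera sway using fixed structures
# (assumed helper names; a real system would use feature matching).

def estimate_offset(expected, observed):
    """Average displacement (dx, dy) of fixed structures in the image."""
    n = len(expected)
    dx = sum(o[0] - e[0] for e, o in zip(expected, observed)) / n
    dy = sum(o[1] - e[1] for e, o in zip(expected, observed)) / n
    return dx, dy

def compensate(points, offset):
    """Shift measurement-grid points by the estimated sway offset."""
    dx, dy = offset
    return [(x + dx, y + dy) for x, y in points]

expected = [(100, 200), (300, 200)]   # calibrated road-sign positions
observed = [(103, 198), (303, 198)]   # positions in the current frame

offset = estimate_offset(expected, observed)
print(offset)                          # (3.0, -2.0)
print(compensate([(150, 400)], offset))
```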
  • the traffic and luminance monitoring system 3 may include functionality for estimating the visibility and transmitting the visibility data to the control system allowing the control system to disregard sensor data when the visibility is too low. Also, the system 3 may output a confidence value for each measurement and the visibility may be incorporated in this value.
  • Perhaps the most important condition to detect and quantify for a visual sensor is the visibility. If the visibility is too low, the traffic and luminance monitoring system 3 cannot provide accurate measurements of e.g. light and traffic density. The visibility may be affected by mist, fog, heavy rain, snow and pollution.
  • One possible way of measuring visibility is to attempt to detect objects at known distances from the image sensor 11. If the distance to an object within the field-of-view of the light sensor has been measured or estimated when installing the device then, if the object can be detected in the image, the distance to the object is a lower bound on the visibility.
  • the accuracy of the measurements will depend on the number of objects available and their distribution across the distance interval of interest. If the image sensor 11 is mounted on a light pole 5 and the pole is visible in the image, then high-contrast markings on the pole could provide a way of accurately measuring the visibility, at least in the direction of the pole. Another possibility is to use road markings 6 such as road edge lines and center lines, as these typically occur at regular intervals. Also, road edge marker posts or road signs may be used to estimate visibility.
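The visibility lower bound described above can be sketched in a few lines: among reference objects at known distances, the farthest one that can still be detected in the image bounds the current visibility from below. The distances and detection flags here are illustrative.

```python
# Sketch of the visibility lower bound from detectable reference objects
# (objects and flags invented for the example).

def visibility_lower_bound(objects):
    """objects: list of (distance_m, detected) pairs -> lower bound in m."""
    detected = [d for d, seen in objects if seen]
    return max(detected) if detected else 0.0

objects = [(25, True), (60, True), (120, True), (250, False)]
print(visibility_lower_bound(objects))  # 120: farthest detected marker
```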
  • in a subsequent step S4, a first luminance measurement image, captured using a relatively long exposure time such as 0.1-1 seconds, is acquired.
  • Fig 4a is a schematic illustration of the state of the road 2 at the time when the first luminance measurement image is captured. As can be seen in fig 4a, three measurement points 9a-c are currently unoccluded (the road surface 7 is visible in these measurement points), while one measurement point 9d is occluded by a car 20.
  • ground pixels corresponding to the road surface 7 are identified in step S5.
  • care may be taken that the ground pixels do not correspond to stationary regions that were previously moving (e.g. a vehicle that has stopped).
  • regions with unfavorable reflectance characteristics or regions occluded by permanent or semi-permanent structures may be identified and taken into account in the identification of the ground pixels.
  • regions with unfavorable reflectance characteristics include pools of water which represent specular surfaces.
  • occluding structures may include equipment or signs used in road works.
  • in step S6, the luminance in the unoccluded measurement points 9a-c is determined.
  • in step S7, it is checked whether or not the luminance in all measurement points 9a-d could be determined. If this is not the case, the method again performs steps S1 to S6. This time, the state of the road 2 may be as illustrated in fig 4b, where three measurement points 9a, 9c and 9d are currently unoccluded (the road surface 7 is visible in these measurement points), while one measurement point 9b is occluded by a car 21.
  • the method proceeds to the next step S8 to provide a control signal to the street lights 4a-d based on the determined traffic density and the luminance in the measurement points 9a-d.
  • the illumination control system 1 may accumulate readings of the luminance and/or traffic density over some time before an adjustment of the illumination settings is made.
  • the traffic and luminance monitoring system 3 may be used for monitoring the condition of the lighting devices 4a-d comprised in the illumination system 1.
  • a computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
EP14789210.3A 2013-10-16 2014-10-15 Verfahren und system zur bestimmung einer reflexionseigenschaft einer szene Withdrawn EP3058508A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE1351224 2013-10-16
PCT/EP2014/072154 WO2015055737A1 (en) 2013-10-16 2014-10-15 Method and system for determining a reflection property of a scene

Publications (1)

Publication Number Publication Date
EP3058508A1 true EP3058508A1 (de) 2016-08-24

Family

ID=52827693

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14789210.3A Withdrawn EP3058508A1 (de) 2013-10-16 2014-10-15 Verfahren und system zur bestimmung einer reflexionseigenschaft einer szene

Country Status (2)

Country Link
EP (1) EP3058508A1 (de)
WO (1) WO2015055737A1 (de)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE1551311A1 (en) * 2015-10-09 2017-04-10 Cipherstone Tech Ab System for controlling illumination in a road tunnel and method of estimating a veiling luminance
CN105303844B (zh) * 2015-10-26 2017-09-12 Nanjing Benlai Information Technology Co., Ltd. Laser-based automatic detection device and detection method for night-time expressway fog patches
DE102017220139A1 2017-11-13 2019-05-16 Robert Bosch Gmbh Method and device for providing a position of at least one object
US11277723B2 (en) 2018-12-27 2022-03-15 Continental Automotive Systems, Inc. Stabilization grid for sensors mounted on infrastructure
CN116699644B (zh) * 2023-08-07 2023-10-27 Sichuan Huateng Highway Test and Inspection Co., Ltd. Road marking reliability evaluation method based on three-dimensional lidar

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6044166A (en) * 1995-01-17 2000-03-28 Sarnoff Corporation Parallel-pipelined image processing system
JP4543459B2 (ja) * 1999-11-22 2010-09-15 Panasonic Electric Works Co., Ltd. Lighting device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None *
See also references of WO2015055737A1 *

Also Published As

Publication number Publication date
WO2015055737A1 (en) 2015-04-23

Similar Documents

Publication Publication Date Title
CN101918980B (zh) Runway surveillance system and method
US8750564B2 (en) Changing parameters of sequential video frames to detect different types of objects
CN102317952B (zh) Method for displaying objects of varying visibility in the surroundings of a vehicle on the display of a display device
EP3058508A1 (de) Method and system for determining a reflection property of a scene
CN109792829B (zh) Control system for a monitoring system, monitoring system, and method of controlling a monitoring system
KR101364727B1 (ko) Method and apparatus for detecting fog by processing captured images
US20240046689A1 (en) Road side vehicle occupancy detection system
US20110221906A1 (en) Multiple Camera System for Automated Surface Distress Measurement
CN103931172A (zh) System and method for intelligent street monitoring using thermal imaging
WO2012090200A1 (en) Calibration device and method for use in a surveillance system for event detection
US11308316B1 (en) Road side vehicle occupancy detection system
JPH07210795A (ja) Image-based traffic flow measurement method and device
Hautiere et al. Meteorological conditions processing for vision-based traffic monitoring
Hautière et al. Daytime visibility range monitoring through use of a roadside camera
CN1573797A (zh) Method and device for improving object recognition and/or re-recognition in image processing
CN106128112A (zh) Night-time checkpoint vehicle recognition and capture method
CN110414392A (zh) Method and device for determining the distance to an obstacle
CN109308809A (zh) Tunnel vehicle monitoring device based on dynamic image feature processing
KR102435281B1 (ko) System and method for detecting sudden road incidents using pole-mounted streetlight structures
CN113936501A (zh) Intelligent intersection traffic early-warning system based on object detection
KR101934345B1 (ko) On-site analysis system for improving the recognition rate of night-time license plates and specific crime-prevention images
CN110942631B (zh) Traffic signal control method based on a time-of-flight camera
JP2000348184A (ja) Background image generation device and method
Chiu et al. An embedded real-time vision system for 24-hour indoor/outdoor car-counting applications
Fascioli et al. Vision-based monitoring of pedestrian crossings

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20160330

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20180907

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20190319