CA2998175C - System and method for multipurpose traffic detection and characterization - Google Patents
System and method for multipurpose traffic detection and characterization
- Publication number: CA2998175C
- Authority: CA (Canada)
- Prior art keywords: vehicle, processing device, data processing, optical, return
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01S17/58 — Velocity or trajectory determination systems; sense-of-movement determination systems
- G01S17/66 — Tracking systems using electromagnetic waves other than radio waves
- G01S17/86 — Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/88 — Lidar systems specially adapted for specific applications
- G01S17/89 — Lidar systems specially adapted for mapping or imaging
- G01S7/4802 — Analysis of echo signal for target characterisation; target signature; target cross-section
- G01S7/4808 — Evaluating distance, position or velocity data
- G01S7/483, G01S7/484 — Details of pulse systems; transmitters
- G06T7/277 — Analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G06V10/255 — Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
- G06V20/54 — Surveillance or monitoring of traffic, e.g. cars on the road, trains or boats
- G08G1/0125 — Traffic data processing
- G08G1/015 — Distinguishing between two or more types of vehicles, e.g. between motor-cars and cycles
- G08G1/0175 — Identifying vehicles by photographing vehicles, e.g. when violating traffic rules
- G08G1/04 — Detecting movement of traffic using optical or ultrasonic detectors
- G08G1/054 — Photographing overspeeding vehicles
- G06T2200/04 — Indexing scheme involving 3D image data
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/30236 — Traffic on road, railway or crossing
- G06V2201/08 — Detecting or categorising vehicles
Abstract
A method for tracking and characterizing a plurality of vehicles simultaneously in a traffic control environment, comprising: providing a 3D optical emitter; providing a 3D optical receiver with a wide and deep field of view; driving the 3D optical emitter into emitting short light pulses; receiving a reflection/backscatter of the emitted light, thereby acquiring an individual digital full-waveform LIDAR trace for each detection channel of the 3D optical receiver; using the individual digital full-waveform LIDAR trace and the emitted light waveform, detecting a presence of a plurality of vehicles, a position of at least part of each vehicle and a time at which the position is detected; assigning a unique identifier to each vehicle; repeating the steps of driving, receiving, acquiring and detecting, at a predetermined frequency; tracking and recording an updated position of each vehicle and an updated time at which the updated position is detected.
Description
SYSTEM AND METHOD
FOR MULTIPURPOSE TRAFFIC DETECTION AND CHARACTERIZATION
TECHNICAL FIELD
[0001] The present invention relates to a system and method for traffic detection and more particularly to an optical system that detects the presence, location, lane position, direction and speed of vehicles in a traffic zone using an active three-dimensional sensor based on the time-of-flight ranging principle and an image sensor.
BACKGROUND OF THE ART
[0002] Growth in transportation demand has a major impact on traffic congestion and safety. To enhance on-road safety and efficiency, major investments in transport infrastructures, including capital, operation and maintenance, are made all over the world. Intelligent systems collecting and disseminating real-time traffic information are a key element for the optimization of traffic management.
[0003] Traffic monitoring can consist of different activities such as detecting the presence of a vehicle in a specific zone, counting the number of vehicles (volume), determining the lane position, classifying each vehicle, determining the direction of travel, estimating the occupancy and determining the speed.
[0004] Other traffic surveillance applications such as electronic toll collection and traffic enforcement require the same kind of information with a very high level of reliability.
[0005] In the United States, the FHWA has defined a vehicle classification based on 13 categories of vehicles, from motorcycles, passenger cars, buses and two-axle-six-tire single-unit trucks, up to seven-or-more-axle multi-trailer truck classes.
Several alternative classification schemes are possible. Often, the FHWA 13 classes are aggregated into 3 or 4 classes. Other countries have their own way to define a classification for vehicles.
[0006] In the case of speed infringement, determining the position and the lane, measuring accurately the speed of a specific vehicle in a multi-lane high-density highway, and associating this information without any ambiguity with the vehicle identified using an Automatic License Plate Recognition (ALPR) system is quite challenging.
[0007] A red light enforcement system has comparable requirements. There is a need for an automatic red light enforcement system but the high reliability required for this application is also challenging. It implies the detection of vehicles at specific locations, the tracking of each of these vehicles in dense traffic at the intersection, the identification of each of these vehicles with the ALPR system, the confirmation of a red light violation by a specific vehicle and the collection of all information to support the issuance of a traffic violation ticket to the registered owner of the vehicle without any ambiguity.
[0008] Different kinds of detectors are used to collect data for these applications.
Intrusive detectors such as inductive loop detectors are still common for detecting the presence of vehicles but have some disadvantages such as lengthy disruption to the traffic flow during installation and maintenance, inflexibility and inability to track a vehicle. Cameras with video processing have some drawbacks notably for speed measurement.
[0009] Radar technology is known to perform well for speed measurement but has some limitations in terms of lateral resolution, making it difficult to associate a speed measurement with the identification of a specific vehicle in dense traffic, for example, at an intersection. Radar technology presents difficulties in the correlation of a specific speed measurement to a specific vehicle when two or more vehicles traveling at different speeds simultaneously enter into the measurement beam.
This limitation has an impact for speed enforcement applications. In some countries, legislation requires that ambiguous situations simply be discarded to reduce errors in the process. Installation of radar technology for speed enforcement is demanding because it requires adjusting the angle of the axis of the main lobe of emission in both the horizontal and vertical directions with respect to the axis of the road, with accuracy typically less than one-half degree angle to limit the cosine effect.
[0010] Thus, there is a need for a method and system for reliable multipurpose traffic detection for traffic management and enforcement applications.
SUMMARY
[0011] According to one broad aspect of the present invention, there is provided a method for tracking and characterizing a plurality of vehicles simultaneously in a traffic control environment. The method comprises providing a 3D optical emitter at an installation height oriented to allow illumination of a 3D detection zone in the environment; providing a 3D optical receiver oriented to have a wide and deep field of view within the 3D detection zone, the 3D optical receiver having a plurality of detection channels in the field of view; driving the 3D optical emitter into emitting short light pulses toward the detection zone, the light pulses having an emitted light waveform; receiving a reflection/backscatter of the emitted light on the vehicles in the 3D detection zone at the 3D optical receiver, thereby acquiring an individual digital full-waveform LIDAR trace for each detection channel of the 3D optical receiver;
using the individual digital full-waveform LIDAR trace and the emitted light waveform, detecting a presence of a plurality of vehicles in the 3D detection zone, a position of at least part of each the vehicle in the 3D detection zone and a time at which the position is detected; assigning a unique identifier to each vehicle of the plurality of vehicles detected; repeating the steps of driving, receiving, acquiring and detecting, at a predetermined frequency; at each instance of the repeating step, tracking and recording an updated position of each vehicle of the plurality of vehicles detected and an updated time at which the updated position is detected, with the unique identifier.
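By way of illustration only (this sketch is not part of the patent's disclosure), the following Python code shows how a distance could be recovered from one digitized full-waveform trace and the emitted light waveform using the time-of-flight principle described above. The matched-filter approach, the sampling rate and all names are assumptions made for the example.

```python
# Illustrative sketch: recovering a target distance from one digitized
# full-waveform LIDAR trace by correlating it with the emitted pulse
# waveform. All parameter values here are assumptions.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def distance_from_trace(trace, emitted, sample_rate_hz):
    """Estimate target distance (m) for one detection channel."""
    # Matched filtering: the correlation peak marks the round-trip delay.
    corr = np.correlate(trace, emitted, mode="full")
    lag = np.argmax(corr) - (len(emitted) - 1)  # delay in samples
    round_trip_s = lag / sample_rate_hz
    return C * round_trip_s / 2.0               # one-way distance

# Toy example: an echo with a 200 ns round-trip delay (~30 m).
fs = 1e9                                          # assumed 1 GS/s ADC
emitted = np.exp(-((np.arange(50) - 25) ** 2) / 50.0)  # short Gaussian pulse
trace = np.zeros(1024)
trace[200:250] += 0.3 * emitted                   # attenuated echo at sample 200
print(round(distance_from_trace(trace, emitted, fs), 1))  # ~30.0 m
```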
[0012] In one embodiment, the traffic control environment is at least one of a traffic management environment and a traffic enforcement environment.
[0013] In one embodiment, detecting the presence includes extracting observations in the individual digital full-waveform LIDAR trace; using the location for the observations to remove observations coming from a surrounding environment;
extracting lines using an estimate line and a covariance matrix using polar coordinates; removing observations located on lines parallel to the x axis.
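The embodiment above extracts lines in polar coordinates. A minimal sketch of one way this could work, assuming a total-least-squares fit in the Hessian normal form rho = x*cos(theta) + y*sin(theta); the tolerance value and helper names are illustrative, not taken from the patent:

```python
# Hedged sketch: fit a line to 2D LIDAR observations in polar form, then
# flag lines roughly parallel to the x axis so they can be removed.
import numpy as np

def fit_line_polar(points):
    """Total-least-squares fit; returns (rho, theta) of the line's normal."""
    centroid = points.mean(axis=0)
    d = points - centroid
    # The smallest principal component of the scatter is the line normal.
    _, _, vt = np.linalg.svd(d)
    normal = vt[-1]
    theta = np.arctan2(normal[1], normal[0])
    rho = centroid @ normal
    if rho < 0:  # keep rho >= 0 by flipping the normal direction
        rho, theta = -rho, np.arctan2(-normal[1], -normal[0])
    return rho, theta

def parallel_to_x_axis(theta, tol_deg=10.0):
    """A line parallel to the x axis has its normal along the y axis."""
    return abs(abs(np.degrees(theta)) - 90.0) < tol_deg

pts = np.array([[x, 5.0 + 0.01 * x] for x in np.linspace(0, 10, 20)])
rho, theta = fit_line_polar(pts)
print(parallel_to_x_axis(theta))  # True: near-horizontal line, discard
```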
[0014] In one embodiment, detecting the presence includes extracting observations in the individual digital full-waveform LIDAR trace and intensity data for the observations; finding at least one blob in the observations; computing an observation weight depending on the intensity of the observations in the blob;
computing a blob gravity center based on the weight and a position of the observations in the blob.
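As a hedged illustration of the intensity-weighted "gravity center" computation described in this embodiment (the intensity-proportional weighting scheme is an assumption for the example):

```python
# Sketch: intensity-weighted gravity center of a blob of LIDAR observations.
import numpy as np

def blob_gravity_center(positions, intensities):
    """positions: (N, 2) array of (x, y) observations in one blob
    intensities: (N,) return intensities; stronger echoes pull harder."""
    w = np.asarray(intensities, dtype=float)
    w /= w.sum()  # normalize weights so they sum to 1
    return w @ np.asarray(positions, dtype=float)

pos = np.array([[2.0, 10.0], [2.2, 10.1], [2.4, 10.3]])
inten = np.array([1.0, 4.0, 1.0])  # middle observation is strongest
print(blob_gravity_center(pos, inten))  # biased toward [2.2, 10.1]
```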
[0015] In one embodiment, the method further comprises setting at least one trigger line location and recording trigger line trespassing data with the unique identifier.
[0016] In one embodiment, the method further comprises setting the trigger line location relative to a visible landmark in the environment.
[0017] In one embodiment, detecting the time at which the position is detected includes assigning a timestamp for the detecting the presence and wherein the timestamp is adapted to be synchronized with an external controller.
[0018] In one embodiment, the method further comprises obtaining a classification for each detected vehicle using a plurality of detections in the 3D detection zone caused by the same vehicle.
[0019] In one embodiment, detecting the presence further comprises detecting a presence of a pedestrian in the environment.
[0020] In one embodiment, the part of the vehicle is one of a front, a side and a rear of the vehicle.
[0021] In one embodiment, emitting short light pulses includes emitting short light pulses of a duration of less than 50 ns.
[0022] In one embodiment, the 3D optical emitter is at least one of an infrared LED
source, a visible-light LED source and a laser.
[0023] In one embodiment, providing the 3D optical receiver to have a wide and deep field of view includes providing the 3D optical receiver to have a horizontal field of view angle of at least 20° and a vertical field of view angle of at least .
[0024] In one embodiment, the method further comprises determining and recording a speed for each the vehicle using the position and the updated position of one of the instances of the repeating step and an elapsed time between the time of the position and the updated time of the updated position, with the unique identifier.
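A minimal sketch of the speed computation in this embodiment: the displacement between two tracked positions of the same unique identifier divided by the elapsed time (the coordinate frame and units are assumptions):

```python
# Sketch: speed from two successive tracked positions of one vehicle.
import math

def speed_m_s(pos_then, t_then, pos_now, t_now):
    """pos_*: (x, y) in metres; t_*: timestamps in seconds."""
    dx = pos_now[0] - pos_then[0]
    dy = pos_now[1] - pos_then[1]
    dt = t_now - t_then
    return math.hypot(dx, dy) / dt

v = speed_m_s((0.0, 30.0), 0.00, (1.4, 28.6), 0.10)  # frames 100 ms apart
print(f"{v:.1f} m/s = {v * 3.6:.1f} km/h")           # ~19.8 m/s = ~71.3 km/h
```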
[0025] In one embodiment, the method further comprises using a Kalman filter to determine an accuracy for the speed to validate the speed; comparing the accuracy to a predetermined accuracy threshold; if the accuracy is lower than the predetermined accuracy threshold, rejecting the speed.
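The patent does not spell out the filter design; the sketch below assumes a one-dimensional constant-velocity Kalman filter in which the filter's velocity standard deviation serves as the accuracy figure compared against the threshold (a large deviation means low accuracy, so the speed is rejected). Noise parameters, the state layout and the threshold are assumptions.

```python
# Hedged sketch: Kalman-filter-based speed validation.
import numpy as np

def track_speed(positions, dt, meas_var=0.05, accel_var=2.0):
    """positions: successive 1D positions (m) of one tracked vehicle."""
    x = np.array([positions[0], 0.0])          # state: [position, velocity]
    P = np.diag([meas_var, 100.0])             # large initial velocity variance
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],
                              [dt**3 / 2, dt**2]])   # process noise
    H = np.array([[1.0, 0.0]])                 # only position is measured
    for z in positions[1:]:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + meas_var             # innovation variance
        K = P @ H.T / S                        # Kalman gain
        x = x + (K * (z - H @ x)).ravel()      # update
        P = (np.eye(2) - K @ H) @ P
    return x[1], float(np.sqrt(P[1, 1]))       # speed and its std deviation

zs = [30.0 - 2.0 * 0.1 * k for k in range(10)]  # target receding at 2 m/s
speed, sigma = track_speed(zs, dt=0.1)
print(f"speed {speed:.2f} m/s, std {sigma:.2f} m/s")
if sigma > 0.5:                                  # assumed accuracy threshold
    print("speed measurement rejected")
```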
[0026] In one embodiment, the method further comprises retrieving a speed limit and identifying a speed limit infraction by comparing the speed recorded for each the vehicle to the speed limit.
[0027] In one embodiment, the method further comprises providing a 2D optical receiver, wherein the 2D optical receiver is an image sensor adapted to provide images of the 2D detection zone; driving the 2D optical receiver to capture a 2D image; using image registration to correlate corresponding locations between the 2D
image and the detection channels; extracting vehicle identification data from the 2D
image at a location corresponding to the location for the detected vehicle;
assigning the vehicle identification data to the unique identifier.
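One plausible reading of this registration step, sketched below purely for illustration, is a calibrated homography that maps ground-plane positions reported by the 3D sensor to pixel coordinates in the 2D image, so the image patch belonging to a tracked vehicle can be cropped for identification. The matrix values and helper names are placeholders, not calibration data from the patent.

```python
# Sketch: map a 3D-sensor ground position to a 2D image patch.
import numpy as np

H = np.array([[90.0,  0.0,   800.0],   # assumed ground-to-image homography
              [ 0.0, -20.0, 1100.0],
              [ 0.0,  0.002,   1.0]])

def ground_to_pixel(x, y):
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w                 # perspective division

def crop_around(image, x, y, half=64):
    u, v = ground_to_pixel(x, y)
    r, c = int(round(v)), int(round(u))
    return image[max(r - half, 0):r + half, max(c - half, 0):c + half]

frame = np.zeros((1200, 1600), dtype=np.uint8)  # stand-in for a camera frame
patch = crop_around(frame, x=1.5, y=25.0)       # vehicle detected at (1.5, 25)
print(patch.shape)                              # (128, 128)
```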
[0028] In one embodiment, the vehicle identification data is at least one of a picture of the vehicle and a license plate alphanumerical code present on the vehicle.
[0029] In one embodiment, the vehicle identification data includes the 2D
image showing a traffic violation.
[0030] In one embodiment, the method further comprises extracting at least one of a size of characters on the license plate and a size of the license plate and comparing the size among different instances of the repeating step to determine an approximate speed value.
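Under a pinhole-camera model, the apparent size of the license plate (or of its characters) is inversely proportional to distance, so the size change between two repeats yields the approximate speed value mentioned above. The sketch below assumes illustrative calibration values for the focal length and plate height:

```python
# Sketch: approximate speed from the change in apparent plate size.
FOCAL_PX = 2800.0      # assumed focal length in pixels
PLATE_H_M = 0.11       # assumed real plate height, ~11 cm

def plate_distance_m(plate_height_px):
    return FOCAL_PX * PLATE_H_M / plate_height_px

def approx_speed_m_s(h1_px, h2_px, dt_s):
    """Plate heights measured in two frames dt_s apart; positive result
    means the vehicle is approaching the camera."""
    return (plate_distance_m(h1_px) - plate_distance_m(h2_px)) / dt_s

# Plate grows from 14 px to 16 px over 100 ms: ~27.5 m/s (~99 km/h).
print(round(approx_speed_m_s(14.0, 16.0, 0.1), 1))
```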
[0031] In one embodiment, the method further comprises providing a 2D
illumination source oriented to allow illumination of a 2D detection zone in the 3D
detection zone and driving the 2D illumination source to emit pulses to illuminate the 2D detection zone and synchronizing the driving the 2D optical receiver to capture images with the driving the 2D illumination source to emit pulses to allow capture of the images during the illumination.
[0032] In one embodiment, driving the 2D illumination source includes driving the 2D illumination source to emit pulses of a duration between 10 μs and 10 ms.
[0033] In one embodiment, the 2D illumination source is at least one of a visible light LED source, an infrared LED light source and a laser.
[0034] In one embodiment, the 3D optical emitter and the 2D illumination source are provided by a common infrared LED light source.
[0035] In one embodiment, the vehicle identification data is at least two areas of high retroreflectivity apparent on the images, the detecting a presence includes extracting observations in the individual digital signals and intensity data for the observations, the method further comprising correlating locations for the areas of high retroreflectivity and high intensity data locations in the observations, wherein each the area of high retroreflectivity is created from one of a retroreflective license plate, a retro-reflector affixed on a vehicle and a retro-reflective lighting module provided on a vehicle.
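A hedged sketch of the correlation described in this embodiment: high-intensity 3D observations (from retroreflective plates or reflectors) are projected into the image and matched against nearby near-saturated image regions. The thresholds and the projected-pixel representation are assumptions for illustration:

```python
# Sketch: match strong LIDAR echoes to bright (retroreflective) image areas.
import numpy as np

def bright_regions(image, thresh=240):
    """Pixel coordinates (row, col) of near-saturated pixels."""
    return np.argwhere(image >= thresh)

def match_retroreflectors(lidar_obs, image, max_px=40):
    """lidar_obs: dicts with projected pixel position 'px' (u, v) and a
    normalized return 'intensity'; keep observations whose projection
    falls near a bright image region."""
    bright = bright_regions(image)
    matched = []
    for obs in lidar_obs:
        if obs["intensity"] < 0.8:        # keep only strong echoes
            continue
        u, v = obs["px"]
        d = np.hypot(bright[:, 0] - v, bright[:, 1] - u)
        if d.size and d.min() < max_px:
            matched.append(obs)
    return matched

img = np.zeros((480, 640), dtype=np.uint8)
img[300:310, 320:360] = 255               # a bright plate-like patch
obs = [{"px": (335, 303), "intensity": 0.95},
       {"px": (100, 100), "intensity": 0.90}]
print(len(match_retroreflectors(obs, img)))  # 1
```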
[0036] In one embodiment, the method further comprises combining multiples ones of the captured images into a combined image with the vehicle and the vehicle identification data apparent.
[0037] According to another broad aspect of the present invention, there is provided a system for tracking and characterizing a plurality of vehicles simultaneously in a traffic control environment, the system comprising: a 3D
optical emitter provided at an installation height and oriented to allow illumination of a 3D
detection zone in the environment; a 3D optical receiver provided and oriented to have a wide and deep field of view within the 3D detection zone, the 3D
optical receiver having a plurality of detection channels in the field of view; a controller for driving the 3D optical emitter into emitting short light pulses toward the detection zone, the light pulses having an emitted light waveform; the 3D optical receiver receiving a reflection/backscatter of the emitted light on the vehicles in the detection zone, thereby acquiring an individual digital full-waveform LIDAR
trace for each channel of the 3D optical receiver; a processor for detecting a presence of a plurality of vehicles in the 3D detection zone using the individual digital full-waveform LIDAR trace and the emitted light waveform, detecting a position of at least part of each the vehicle in the 3D detection zone, recording a time at which the position is detected, assigning a unique identifier to each vehicle of the plurality of vehicles detected and tracking and recording an updated position of each vehicle of the plurality of vehicles detected and an updated time at which the updated position is detected, with the unique identifier.
[0038] In one embodiment, the processor is further for determining and recording a speed for each the vehicle using the position and the updated position of one of the instances of the repeating step and an elapsed time between the time of the position and the updated time of the updated position, with the unique identifier.
[0039] In one embodiment, the system further comprises a 2D
optical receiver, wherein the 2D optical receiver is an image sensor adapted to provide images of the 2D detection zone; and a driver for driving the 2D optical receiver to capture a 2D
image; the processor being further adapted for using image registration to correlate corresponding locations between the 2D image and the detection channels and extracting vehicle identification data from the 2D image at a location corresponding to the location for the detected vehicle; and assigning the vehicle identification data to the unique identifier.
[0040] In one embodiment, the system further comprises a 2D illumination source provided and oriented to allow illumination of a 2D detection zone in the 3D
detection zone; a source driver for driving the 2D illumination source to emit pulses; a synchronization module for synchronizing the source driver and the driver to allow capture of the images while the 2D detection zone is illuminated.
[0041] According to another broad aspect of the present invention, there is provided a method for tracking and characterizing a plurality of vehicles simultaneously in a traffic control environment, comprising: providing a 3D
optical emitter; providing a 3D optical receiver with a wide and deep field of view;
driving the 3D optical emitter into emitting short light pulses; receiving a reflection/backscatter of the emitted light, thereby acquiring an individual digital full-waveform LIDAR
trace for each detection channel of the 3D optical receiver; using the individual digital full-waveform LIDAR trace and the emitted light waveform, detecting a presence of a plurality of vehicles, a position of at least part of each vehicle and a time at which the position is detected; assigning a unique identifier to each vehicle; repeating the steps of driving, receiving, acquiring and detecting, at a predetermined frequency;
tracking and recording an updated position of each vehicle and an updated time at which the updated position is detected.
[0041a] According to another broad aspect of the present invention, there is provided a vehicle detection system, comprising: an optical emitter configured to emit illumination signals for illuminating a field of illumination repeatedly over time, each illumination signal illuminating a portion of the field of illumination, the field of illumination including a moving vehicle; an optical receiver configured for capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the moving vehicle as a distance between the moving vehicle and the optical emitter changes, the optical receiver generating signal waveforms from the optical return signals; and a data processing device configured for estimating a length, a width and a volume of the moving vehicle from the optical return signals, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
[0041b] According to another broad aspect of the present invention, there is provided a vehicle detection system, comprising: a laser configured to emit illumination signals for illuminating a field of illumination repeatedly over time, the field of illumination including a target vehicle having a side, a top and a rear; an optical receiver configured for capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the vehicle as a distance between the vehicle and the laser changes, the optical receiver generating signal waveforms from the optical return signals; and a data processing device configured for estimating a length, a width and a volume of the vehicle, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
[0041c] According to another broad aspect of the present invention, there is provided a method for vehicle detection, comprising: emitting illumination signals from an optical emitter for illuminating a field of illumination repeatedly over time, the field of illumination including a moving vehicle, each illumination signal illuminating a portion of the field of illumination; capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the moving vehicle as a distance between the moving vehicle and the optical emitter changes; and estimating a length, a width and a volume of the moving vehicle from the optical return signals, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
[0041d] According to another broad aspect of the present invention, there is provided a method for vehicle detection, comprising: emitting illumination signals from a laser emitter for illuminating a field of illumination repeatedly over time, the field of illumination including a target vehicle having a side, a top and a rear;
capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the vehicle as a distance between the vehicle and the laser emitter changes; and estimating a length, a width and a volume of the vehicle from the optical return signals, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
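The aspects above estimate length and width by fitting two lines to subsets of the vehicle measurements, for instance points seen on the vehicle's side and points seen on its rear. A minimal sketch under those assumptions, with an assumed box height for the volume estimate (the subset selection and height value are illustrative):

```python
# Sketch: length/width from two fitted lines, plus a coarse volume estimate.
import numpy as np

def fit_line(points):
    """Returns (centroid, unit direction) of a total-least-squares line."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[0]

def box_from_subsets(side_pts, rear_pts, height_m=1.5):
    _, u_side = fit_line(side_pts)      # direction along the vehicle's length
    _, u_rear = fit_line(rear_pts)      # direction across the vehicle's width
    length = np.ptp(side_pts @ u_side)  # extent of side points along the line
    width = np.ptp(rear_pts @ u_rear)
    return length, width, length * width * height_m

side = np.array([[0.0, y] for y in np.linspace(0.0, 4.2, 15)])  # 4.2 m side
rear = np.array([[x, 0.0] for x in np.linspace(0.0, 1.8, 8)])   # 1.8 m rear
l, w, vol = box_from_subsets(side, rear)
print(f"length {l:.1f} m, width {w:.1f} m, volume {vol:.1f} m^3")
```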
[0041e] According to another broad aspect of the present invention, there is provided a vehicle-mountable data processing device for performing LIDAR, including: an input configured to capture a series of vehicle measurements from signal waveforms generated from optical return signals produced by reflection of laser illumination signals that repeatedly illuminate a field of illumination over time, the field of illumination including a moving vehicle, the optical return signals being produced by reflection of the laser illumination signals from multiple sides of the moving vehicle as a distance between the vehicle and the vehicle-mountable data processing device changes; at least one processor configured for estimating a length, a width and a volume of the moving vehicle from the optical return signals, wherein estimating comprises plotting a line by feature extraction using the vehicle measurements.
[0041f] According to another broad aspect of the present invention, there is provided a vehicle-mountable data processing device for performing LIDAR, including: an input configured to capture a series of vehicle measurements from optical return signals produced by reflection of laser illumination signals that repeatedly illuminate a field of illumination over time, the field of illumination including a target vehicle having a top, a side and a rear, the optical signals being produced by reflection of the laser illumination signals from the top, the side and the rear of the target vehicle as a distance between the target vehicle and the vehicle-mountable data processing device changes; at least one processor configured for estimating a length, a width and a volume of the target vehicle, wherein the estimating comprises plotting a line by feature extraction using a subset of the vehicle measurements.
[0041g] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising: an input configured for capturing a series of vehicle measurements from a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal for deriving estimates of the length, width and volume of the vehicle, wherein deriving estimates comprises plotting a line by feature extraction using the vehicle measurements.
[0041h] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a first and a second vehicle residing in a field of view, the first and second vehicles each having a side and a rear, the data processing device comprising: an input configured for capturing a series of vehicle measurements from a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the first and second vehicles and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal to estimate a length, width and volume of each of the first and second vehicles by plotting a line by feature extraction using the vehicle measurements.
[0041i] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising: an input configured for receiving a return signal conveying an optical return of a laser illumination signal, the illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal for: performing multiple detections of the vehicle from the plurality of optical reflections; deriving at least a portion of a physical boundary of the vehicle from the multiple reflections and estimating a volume of the vehicle using feature extraction based on the multiple detections.
[0041j] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising: an input configured for receiving a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal for: performing multiple detections of the vehicle from the plurality of optical reflections; deriving a geometric feature of the vehicle from the multiple detections and estimating a volume of the vehicle, using feature extraction based on the multiple detections.
[0041k] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising: an input configured for receiving a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal for: performing multiple detections of the vehicle from the plurality of optical reflections; detecting, using feature extraction on the basis of the plurality of optical reflections, a corner of the vehicle defined at an intersection of two sides of the vehicle; estimating a volume of the vehicle, using feature extraction based on the multiple detections.
[0041l] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising: an input configured for receiving a return signal conveying an optical return of an illumination signal, the illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal for:
performing multiple detections of the vehicle from the plurality of optical reflections;
processing the multiple detections of the vehicle to attempt to identify a physical boundary of the vehicle using feature extraction based on the multiple detections and derive at least in part on the basis of the physical boundary an extent of the vehicle and to estimate a volume of the vehicle from the extent; and to assign to the vehicle a default extent if the attempt is not successful.
[0042] Throughout this specification, the term "object" is intended to include a moving object and a stationary object. For example, it can be a vehicle, an environmental particle, a person, a pedestrian, a passenger, an animal, a gas, a liquid, a particle such as dust, a pavement, a wall, a post, a sidewalk, a ground surface, a tree, etc.
[0043] Throughout this specification, the term "vehicle" is intended to include any movable means of transportation for cargo, humans and animals, not necessarily restricted to ground transportation, including wheeled and unwheeled vehicles, such as, for example, a truck, a bus, a boat, a subway car, a train wagon, an aerial tramway car, a ski lift, a plane, a car, a motorcycle, a tricycle, a bicycle, a Segway™, a carriage, a wheelbarrow, a stroller, etc.
[0044] Throughout this specification, the term "environmental particle" is intended to include any particle detectable in the air or on the ground and which can be caused by an environmental, chemical or natural phenomenon or by human intervention. It includes fog, water, rain, liquid, dust, dirt, vapor, snow, smoke, gas, smog, pollution, black ice, hail, etc.
[0045] Throughout this specification, the term "red light" is intended to mean a traffic light (traffic signal, traffic lamp or signal light) which is currently signaling users of a road, at a road intersection, that they do not have the right of way into the intersection and that they should stop before entering the intersection.
Another color and/or symbol could be used to signal the same information to the user depending on the jurisdiction.
[0046] Throughout this specification, the term "green light" is intended to mean a traffic light (traffic signal, traffic lamp or signal light) which is currently signaling users of a road, at a road intersection, that they have the right of way into the intersection and that they should enter the intersection if it is safe to do so.
Another color and/or symbol could be used to signal the same information to the user depending on the jurisdiction.
BRIEF DESCRIPTION OF THE DRAWINGS
[0047] The accompanying drawings, which are included to provide a better understanding of the main aspects of the system and method and are incorporated in and constitute a part of this specification, illustrate different example embodiments.
The accompanying drawings are not intended to be drawn to scale. In the drawings:
[0048] FIG. 1 is a functional block diagram of an example of the multipurpose traffic detection system showing its main components and the way they are interconnected;
[0049] FIG. 2 is an example installation of the traffic detection system on the side of a 3-lane highway;
[0050] FIG. 3 shows an example installation of the traffic detection system on a gantry;
[0051] FIG. 4 shows the impact of the installation height of the system on the depth of the detection zone;
[0052] FIG. 5 shows an example casing for the multipurpose traffic detector;
[0053] FIG. 6 shows a top view of the detection zone on a 3-lane highway;
[0054] FIG. 7 shows a top view of the detection zone in a red light enforcement application;
[0055] FIGS. 8A and 8B are illustrative of example snapshots taken by the image sensor with the overlay of the 3D sensor displaying a vehicle in the detected zone with distance measurements;
[0056] FIG. 9A is illustrative of an example snapshot taken by the image sensor with the overlay of the 3D sensor at an intersection for red light enforcement application and FIG. 9B is a graph of data acquired by the detection system showing the range of detection of vehicles on 3 lanes in Cartesian coordinates;
[0057] FIG. 10 is a top view of an example road side installation with the tracking system installed next to a one-directional three-lane highway and for which the detection zone is apparent and covers, at least partly, each of the lanes, with all vehicles traveling in the same direction;
[0058] FIG. 11 is a top view of the example installation of FIG. 10 on which four vehicle detections are visible in some of the 16 separate channels with simultaneous acquisition capability;
[0059] FIG. 12 is a top view of the example installation of FIG. 10 on which a detection is visible between two trigger lines;
[0060] FIG. 13 includes FIGS. 13A, 13B, 13C, 13D, 13E and 13F, in which FIGS. 13A, 13C and 13E are illustrative of a few frames of vehicle tracking when vehicles arrive at an intersection with a red light and FIGS. 13B, 13D and 13F show a graph of data acquired by the detection system for each corresponding frame;
[0061] FIG. 14 includes FIGS. 14A, 14B, 14C, 14D, 14E and 14F, in which FIGS. 14A, 14C and 14E are illustrative of a few frames of vehicle tracking when vehicles depart the intersection of FIG. 13 at the green light and FIGS. 14B, 14D, and 14F show a graph of data acquired by the detection system for each corresponding frame;
SUMMARY
(00111 According to one broad aspect of the present invention, there is provided a method for tracking and characterizing a plurality of vehicles simultaneously in a traffic control environment. The method comprises providing a 3D optical emitter at an installation height oriented to allow illumination of a 3D detection zone in the environment; providing a 3D optical receiver or ieriled to have a wide and deep field of view within the 3D detection zone, the 3D optical receiver having a plurality of detection channels in the field of view; driving the 3D optical emitter into emitting short light pulses toward the detection zone, the light pulses having an emitted light waveform; receiving a reflection/backscatter of the emitted light on the vehicles in the 3D detection zone at the 3D optical receiver, thereby acquiring an individual digital full-waveform LIDAR trace for each detection channel of the 3D optical receiver;
using the individual digital full-waveform LIDAR trace and the emitted light waveform, detecting a presence of a plurality of vehicles in the 3D detection zone, a position of at least part of each the vehicle in the 3D detection zone and a time at which the position is detected; assigning a unique identifier to each vehicle of the plurality of vehicles detected; repeating the steps of driving, receiving, acquiring and detecting, at a predetermined frequency; at each instance of the repeating step, tracking and recording an updated position of each vehicle of the plurality of vehicles detected and an updated time at which the updated position is detected, with the unique identifier.
[0012] In one embodiment, the traffic control environment is at least one of a traffic management environment and a traffic enforcement environment.
[0013] In one embodiment, detecting the presence includes extracting observations in the individual digital full-waveform LIDAR trace; using the location for the observations to remove observations coming from a surrounding environment;
extracting lines using an estimate line and a covariance matrix using polar coordinates; removing observations located on lines parallel to the x axis.
[0014] In one embodiment, detecting the presence includes extracting observations in the individual digital full-waveform LIDAR trace and intensity data for the observations; finding at least one blob in the observations; computing an observation weight depending on the intensity of the observations in the blob;
computing a blob gravity center based on the weight and a position of the observations in the blob.
L0015.1 In one embodiment, the method further comprises setting at least one trigger line location and recording trigger line trespassing data with the unique identifier.
polq In one embodiment, the method further comprises setting the trigger line location relative to a visible landmark in the environment.
[0017] In one embodiment, detecting the time at which the position is detected includes assigning a timestamp for the detecting the presence and wherein the timestamp is adapted to be synchronized with an external controller.
[0018] In one embodiment, the method further comprises obtaining a classification for each detected vehicles using a plurality of detections in the 3D detection zone caused by the same vehicle.
=
=
[0019] In one embodiment, detecting the presence further comprises detecting a presence of a pedestrian in the environment.
[0020] In one embodiment, the part of the vehicle is one of a front, a side and a rear of the vehicle.
[0021] In one embodiment, emitting short light pulses includes emitting short light pulses of a duration of less than 50 ns.
[0022] In one embodiment, the 3D optical emitter is at least one of an infrared LED
source, a visible-light LED source and a laser.
[0023] In one embodiment, providing the 3D optical receiver to have a wide and deep field of view includes providing the 3D optical receiver to have a horizontal field of view angle of at least 20' and a vertical field of view angle of at least .
[0024] In one embodiment, the method further comprises determining and recording a speed for each the vehicle using the position and the updated position of one of the instances of the repeating step and an elapsed time between the time of the position and the updated time of the updated position, with the unique identifier.
[0025] In one embodiment, the method further comprises using a Kalman filter to determine an accuracy for the speed to validate the speed; comparing the accuracy to a predetermined accuracy threshold; if the accuracy is lower than the predetermined accuracy threshold, rejecting the speed.
[0026] In one embodiment, the method further comprises retrieving a speed limit and identifying a speed limit infraction by comparing the speed recorded for each the vehicle to the speed limit.
[0027] In one embodiment, the method further comprises providing a 2D optical receiver, wherein the 2D optical receiver being an image sensor adapted to provide images of the 2D detection zone; driving the 2D optical receiver to capture a image; using image registration to correlate corresponding locations between the 2D
image and the detection channels; extracting vehicle identification data from the 2D
image at a location corresponding to the location for the detected vehicle;
assigning the vehicle identification data to the unique identifier.
[0028] In one embodiment, the vehicle identification data is at least one of a picture of the vehicle and a license plate alphanumerical code present on the vehicle.
[0029] In one embodiment, the vehicle identification data includes the 2D
image showing a traffic violation.
[0030] In one embodiment, the method further comprises extracting at least one of a size of characters on the license plate and a size of the license plate and comparing one of the size among different instances of the repeating to determine an approximate speed value.
[0031] In one embodiment, the method further comprises providing a 2D
illumination source oriented to allow illumination of a 2D detection zone in the 3D
detection zone and driving the 2D illumination source to emit pulses to illuminate the 2D detection zone and synchronizing the driving the 2D optical receiver to capture images with the driving the 2D illumination source to emit pulses to allow capture of the images during the illumination.
[0032] In one embodiment, driving the 2D illumination source includes driving the 2D illumination source to emit pulses of a duration between 10 ps and 10 ms.
[0033] In one embodiment, the 2D illumination source is at least one of a visible light LED source, an infrared LED light source and laser.
[0034] In one embodiment, the 3D optical emitter and the 2D illumination source are provided by a common infrared LED light source.
[0035] In one embodiment, the vehicle identification data is at least two areas of high retroreflectivity apparent on the images, the detecting a presence includes extracting observations in the individual digital signals and intensity data for the observations, the method further comprising correlating locations for the areas of high retroreflectivity and high intensity data locations in the observations, wherein each the area of high retroreflectivity is created from one of a retroreflective license plate, a retro-reflector affixed on a vehicle and a retro-reflective lighting module provided on a vehicle.
[0036] In one embodiment, the method further comprises combining multiple ones of the captured images into a combined image with the vehicle and the vehicle identification data apparent.
[0037] According to another broad aspect of the present invention, there is provided a system for tracking and characterizing a plurality of vehicles simultaneously in a traffic control environment, the system comprising: a 3D
optical emitter provided at an installation height and oriented to allow illumination of a 3D
detection zone in the environment; a 3D optical receiver provided and oriented to have a wide and deep field of view within the 3D detection zone, the 3D
optical receiver having a plurality of detection channels in the field of view; a controller for driving the 3D optical emitter into emitting short light pulses toward the detection zone, the light pulses having an emitted light waveform; the 3D optical receiver receiving a reflection/backscatter of the emitted light on the vehicles in the detection zone, thereby acquiring an individual digital full-waveform LIDAR
trace for each channel of the 3D optical receiver; a processor for detecting a presence of a plurality of vehicles in the 3D detection zone using the individual digital full-waveform LIDAR trace and the emitted light waveform, detecting a position of at least part of each vehicle in the 3D detection zone, recording a time at which the position is detected, assigning a unique identifier to each vehicle of the plurality of vehicles detected and tracking and recording an updated position of each vehicle of the plurality of vehicles detected and an updated time at which the updated position is detected, with the unique identifier.
[0038] In one embodiment, the processor is further for determining and recording a speed for each vehicle using the position and the updated position of one of the instances of the repeating step and an elapsed time between the time of the position and the updated time of the updated position, with the unique identifier.
[0039] In one embodiment, the system further comprises a 2D
optical receiver, wherein the 2D optical receiver is an image sensor adapted to provide images of the 2D detection zone; and a driver for driving the 2D optical receiver to capture a 2D
image; the processor being further adapted for using image registration to correlate corresponding locations between the 2D image and the detection channels and extracting vehicle identification data from the 2D image at a location corresponding to the location for the detected vehicle; and assigning the vehicle identification data to the unique identifier.
[0040] In one embodiment, the system further comprises a 2D illumination source provided and oriented to allow illumination of a 2D detection zone in the 3D
detection zone; a source driver for driving the 2D illumination source to emit pulses; a synchronization module for synchronizing the source driver and the driver to allow capture of the images while the 2D detection zone is illuminated.
[0041] According to another broad aspect of the present invention, there is provided a method for tracking and characterizing a plurality of vehicles simultaneously in a traffic control environment, comprising: providing a 3D
optical emitter; providing a 3D optical receiver with a wide and deep field of view;
driving the 3D optical emitter into emitting short light pulses; receiving a reflection/backscatter of the emitted light, thereby acquiring an individual digital full-waveform LIDAR
trace for each detection channel of the 3D optical receiver; using the individual digital full-waveform LIDAR trace and the emitted light waveform, detecting a presence of a plurality of vehicles, a position of at least part of each vehicle and a time at which the position is detected; assigning a unique identifier to each vehicle; repeating the steps of driving, receiving, acquiring and detecting, at a predetermined frequency;
tracking and recording an updated position of each vehicle and an updated time at which the updated position is detected.
[0041a] According to another broad aspect of the present invention, there is provided a vehicle detection system, comprising: an optical emitter configured to emit illumination signals for illuminating a field of illumination repeatedly over time, each illumination signal illuminating a portion of the field of illumination, the field of illumination including a moving vehicle; an optical receiver configured for capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the moving vehicle as a distance between the moving vehicle and the optical emitter changes, the optical receiver generating signal waveforms from the optical return signals; and a data processing device configured for estimating a length, a width and a volume of the moving vehicle from the optical return signals, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
[0041b] According to another broad aspect of the present invention, there is provided a vehicle detection system, comprising: a laser configured to emit illumination signals for illuminating a field of illumination repeatedly over time, the field of illumination including a target vehicle having a side, a top and a rear; an optical receiver configured for capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the vehicle as a distance between the vehicle and the laser changes, the optical receiver generating signal waveforms from the optical return signals; and a data processing device configured for estimating a length, a width and a volume of the vehicle, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
[0041c] According to another broad aspect of the present invention, there is provided a method for vehicle detection, comprising: emitting illumination signals from an optical emitter for illuminating a field of illumination repeatedly over time, the field of illumination including a moving vehicle, each illumination signal illuminating a portion of the field of illumination; capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the moving vehicle as a distance between the moving vehicle and the optical emitter changes; and estimating a length, a width and a volume of the moving vehicle from the optical return signals, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
[0041d] According to another broad aspect of the present invention, there is provided a method for vehicle detection, comprising: emitting illumination signals from a laser emitter for illuminating a field of illumination repeatedly over time, the field of illumination including a target vehicle having a side, a top and a rear;
capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the vehicle as a distance between the vehicle and the laser emitter changes; and estimating a length, a width and a volume of the vehicle from the optical return signals, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
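The line-fitting step named in the aspects above can be pictured as follows. This is a minimal sketch assuming 2D point measurements already segmented into side and rear subsets; the segmentation, the point values and the box-shaped volume model are illustrative assumptions, not the patented method:

import numpy as np

def fit_line(points):
    """Least-squares line: unit direction (via SVD) and centroid."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[0], centroid

side = np.array([[1.9, 10.2], [2.0, 11.1], [2.1, 12.3], [2.0, 13.9]])  # metres
rear = np.array([[2.1, 10.0], [3.0, 10.1], [3.9, 10.0]])

dir_side, c_side = fit_line(side)   # first line: along the vehicle's length
dir_rear, c_rear = fit_line(rear)   # second line: across the vehicle's width

length = np.ptp((side - c_side) @ dir_side)  # spread of side points on line 1
width = np.ptp((rear - c_rear) @ dir_rear)   # spread of rear points on line 2
height = 1.5                                 # assumed height for a box volume
print(round(length, 2), round(width, 2), round(length * width * height, 1))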
[0041e] According to another broad aspect of the present invention, there is provided a vehicle-mountable data processing device for performing LIDAR, including: an input configured to capture a series of vehicle measurements from signal waveforms generated from optical return signals produced by reflection of laser illumination signals that repeatedly illuminate a field of illumination over time, the field of illumination including a moving vehicle, the optical return signals being produced by reflection of the laser illumination signals from multiple sides of the moving vehicle as a distance between the vehicle and the vehicle-mountable data processing device changes; at least one processor configured for estimating a length, a width and a volume of the moving vehicle from the optical return signals, wherein estimating comprises plotting a line by feature extraction using the vehicle measurements.
[0041f] According to another broad aspect of the present invention, there is provided a vehicle-mountable data processing device for performing LIDAR, including: an input configured to capture a series of vehicle measurements from optical return signals produced by reflection of laser illumination signals that repeatedly illuminate a field of illumination over time, the field of illumination including a target vehicle having a top, a side and a rear, the optical signals being produced by reflection of the laser illumination signals from the top, the side and the rear of the target vehicle as a distance between the target vehicle and the vehicle-mountable data processing device changes; at least one processor configured for estimating a length, a width and a volume of the target vehicle, wherein the estimating comprises plotting a line by feature extraction using a subset of the vehicle measurements.
[0041g] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising: an input configured for capturing a series of vehicle measurements from a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal for deriving estimates of the length, width and volume of the vehicle, wherein deriving estimates comprises plotting a line by feature extraction using the vehicle measurements.
[0041h] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a first and a second vehicle residing in a field of view, the first and the second vehicle each having a side and a rear, the data processing device comprising: an input configured for capturing a series of vehicle measurements from a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the first and second vehicles and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal to estimate a length, width and volume of each of the first and second vehicles by plotting a line by feature extraction using the vehicle measurements.
[0041i] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising: an input configured for receiving a return signal conveying an optical return of a laser illumination signal, the illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal for: performing multiple detections of the vehicle from the plurality of optical reflections; deriving at least a portion of a physical boundary of the vehicle from the multiple reflections and estimating a volume of the vehicle using feature extraction based on the multiple detections.
[0041j] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising: an input configured for receiving a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal for: performing multiple detections of the vehicle from the plurality of optical reflections; deriving a geometric feature of the vehicle from the multiple detections and estimating a volume of the vehicle, using feature extraction based on the multiple detections.
[0041k] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising: an input configured for receiving a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal for: performing multiple detections of the vehicle from the plurality of optical reflections; detecting, using feature extraction on the basis of the plurality of optical reflections, a corner of the vehicle defined at an intersection of two sides of the vehicle; estimating a volume of the vehicle, using feature extraction based on the multiple detections.
[0041l] According to another broad aspect of the present invention, there is provided a data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising: an input configured for receiving a return signal conveying an optical return of an illumination signal, the illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses; at least one processor configured to process the return signal for:
performing multiple detections of the vehicle from the plurality of optical reflections;
processing the multiple detections of the vehicle to attempt to identify a physical boundary of the vehicle using feature extraction based on the multiple detections and derive at least in part on the basis of the physical boundary an extent of the vehicle and to estimate a volume of the vehicle from the extent; and to assign to the vehicle a default extent if the attempt is not successful.
[0042] Throughout this specification, the term "object" is intended to include a moving object and a stationary object. For example, it can be a vehicle, an environmental particle, a person, a pedestrian, a passenger, an animal, a gas, a liquid, a particle such as dust, a pavement, a wall, a post, a sidewalk, a ground surface, a tree, etc.
[0043] Throughout this specification, the term "vehicle" is intended to include any movable means of transportation for cargo, humans and animals, not necessarily restricted to ground transportation, including wheeled and unwheeled vehicles, such as, for example, a truck, a bus, a boat, a subway car, a train wagon, an aerial tramway car, a ski lift, a plane, a car, a motorcycle, a tricycle, a bicycle, a Segway™, a carriage, a wheelbarrow, a stroller, etc.
[0044] Throughout this specification, the term "environmental particle" is intended to include any particle detectable in the air or on the ground and which can be caused by an environmental, chemical or natural phenomenon or by human intervention. It includes fog, water, rain, liquid, dust, dirt, vapor, snow, smoke, gas, smog, pollution, black ice, hail, etc.
[0045] Throughout this specification, the term "red light" is intended to mean a traffic light (traffic signal, traffic lamp or signal light) which is currently signaling users of a road, at a road intersection, that they do not have the right of way into the intersection and that they should stop before entering the intersection.
Another color and/or symbol could be used to signal the same information to the user depending on the jurisdiction.
[0046] Throughout this specification, the term "green light" is intended to mean a traffic light (traffic signal, traffic lamp or signal light) which is currently signaling users of a road, at a road intersection, that they have the right of way into the intersection and that they should enter the intersection if it is safe to do so.
Another color and/or symbol could be used to signal the same information to the user depending on the jurisdiction.
BRIEF DESCRIPTION OF THE DRAWINGS
[0047] The accompanying drawings, which are included to provide a better understanding of the main aspects of the system and method and are incorporated in and constitute a part of this specification, illustrate different example embodiments.
The accompanying drawings are not intended to be drawn to scale. In the drawings:
[0048] FIG. 1 is a functional block diagram of an example of the multipurpose traffic detection system showing its main components and the way they are interconnected;
[0049] FIG. 2 is an example installation of the traffic detection system on the side of a 3-lane highway;
[0050] FIG. 3 shows an example installation of the traffic detection system on a gantry;
[0051] FIG. 4 shows the impact on the depth of a detection zone of the height of installation of the system;
[0052] FIG. 5 shows an example casing for the multipurpose traffic detector;
[0053] FIG. 6 shows a top view of the detection zone on a 3-lane highway;
[0054] FIG. 7 shows a top view of the detection zone in a red light enforcement application;
[0055] FIG. 8A and 8B are illustrative of example snapshots taken by the image sensor with the overlay of the 3D sensor displaying a vehicle in the detected zone with distance measurements;
[0056] FIG. 9A is illustrative of an example snapshot taken by the image sensor with the overlay of the 3D sensor at an intersection for red light enforcement application and FIG. 9B is a graph of data acquired by the detection system showing the range of detection of vehicles on 3 lanes in Cartesian coordinates;
[0057] FIG. 10 is a top view of an example road side installation with the tracking system being installed next to a one-directional three-lane highway and for which the detection zone is apparent and covers, at least partly, each of the lanes, all vehicles traveling in the same direction;
[0058] FIG. 11 is a top view of the example installation of FIG. 10 on which four vehicle detections are visible in some of the 16 separate channels with simultaneous acquisition capability;
[0059] FIG. 12 is a top view of the example installation of FIG. 10 on which a detection is visible between two trigger lines;
[0060] FIG. 13 includes FIGS. 13A, 13B, 13C, 13D, 13E and 13F, in which FIGS. 13A, 13C and 13E are illustrative of a few frames of vehicle tracking when vehicles arrive at an intersection with a red light and FIGS. 13B, 13D and 13F show a graph of data acquired by the detection system for each corresponding frame;
[0061] FIG. 14 includes FIGS. 14A, 14B, 14C, 14D, 14E and 14F, in which FIGS. 14A, 14C and 14E are illustrative of a few frames of vehicle tracking when vehicles depart the intersection of FIG. 13 at the green light and FIGS. 14B, 14D, and 14F show a graph of data acquired by the detection system for each corresponding frame;
[0062] FIG. 15 is a flowchart illustrating an example method for tracking several vehicles based on a space-based tracking disjoint;
[0063] FIG. 16 is a flowchart illustrating an example method for tracking several vehicles for a red-light enforcement application, this algorithm uses a space-based tracking joint;
[0064] FIG. 17 is a flowchart illustrating the selection of appropriate measures among the detections;
[0065] FIG. 18 shows an example segment extraction line for a long vehicle;
[0066] FIG. 19 is a state diagram illustrating the tracking system used without a traffic light state;
[0067] FIG. 20 is a state diagram illustrating the tracking system used with a traffic light state;
[0068] FIG. 21 is a flowchart showing example steps performed to compute the vehicle position;
[0069] FIG. 22 is a flowchart showing example steps performed for object tracking without a traffic light state;
[0070] FIG. 23 is a flowchart showing example steps performed for object tracking with a traffic light state;
[0071] FIG. 24 is a flowchart illustrating an example classification process;
[0072] FIG. 25 includes FIGS. 25A, 25B and 25C which illustrate the relationship between the detections of a vehicle and its geometric features of width and length;
[0073] FIG. 26 illustrates the direct geometric relationship between height of the vehicle and distance of vehicle detection;
[0074] FIG. 27 includes FIGS. 27A, 27B, 27C and 27D which show top view frames of a vehicle detected by the LEDDAR sensor;
[0075] FIG. 28 includes FIGS. 28A, 28B, 28C and 28D which show corresponding side view frames of the vehicle of FIG. 27;
[0076] FIG. 29 is a flowchart illustrating an example segmentation algorithm based on a 3D bounding box;
[0077] FIG. 30 is a top view of an example scenario used for the analysis of Posterior Cramer-Rao lower bound;
[0078] FIG. 31 is a graph showing theoretical performance of the tracking algorithm given by the PCRB;
[0079] FIG. 32 includes FIG. 32A, 32B, 32C and 32D in which FIG. 32A is illustrative of an example snapshot taken by the image sensor during the day, FIGS.
32B, 32C and 32D are illustrative of a zoom in on license plates in the snapshot of FIG. 32A;
[0080] FIG. 33 includes FIG. 33A, 33B and 33C in which FIG. 33A is illustrative of an example snapshot taken by the image sensor at night without any light, FIG. 33B is illustrative of the same scene as FIG. 33A taken by the image sensor at night with an infrared light illumination, FIG. 33C is illustrative of a zoom in on a license plate extracted from the image of FIG. 33B;
[0081] FIG. 34 includes FIG. 34A, 34B, 34C and 34D in which FIG. 34A is illustrative of another example snapshot taken by the image sensor at night with infrared light, FIG. 34B is illustrative of a zoom in on a license plate extracted from the image of FIG. 34A, FIG. 34C is illustrative of an example snapshot taken by the image sensor with a shorter integration time at night with infrared light, FIG. 34D is illustrative of a zoom in on a license plate extracted from the image of FIG.
34C; and [0082] FIG. 35 is illustrative of an example panoramic snapshot taken by the image sensor using infrared illumination in which two vehicles are present in the
detection zone and on which the overlay of the 3D sensor is shown with dashed lines.
DETAILED DESCRIPTION
Description of the multipurpose traffic detection system [0083] Reference will now be made in detail to example embodiments. The system and method may, however, be embodied in many different forms and should not be construed as limited to the example embodiments set forth in the following description.
[0084] The functionalities of the various components integrated in an example multipurpose traffic detection system 10 can be better understood by referring to the functional block diagram shown in FIG. 1. The 3D Optical Emitter 12 (3DOE) emits short pulses of light, for example of a length less than 50 ns, within a predetermined zone. In the example embodiment, the 3DOE 12 is an IR LED illumination source determining a Field-of-Illumination FOI3D covering the 3D detection zone FOV3D.
The optical source of the 3DOE can also be based on Laser technology. The horizontal angles of the FOI3D and FOV3D are wide enough to cover at least one lane. For example, a system with a horizontal FOI / FOV of 35° would be able to cover 3 lanes, each lane having a width of 3.5 m, when installed at 15 m from the side of the detection zone.
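As a quick check of that example (assuming a symmetric FOV and measuring the coverage across the zone at the 15 m mark, which is a simplification of the real installation geometry):

import math

fov_deg = 35.0        # horizontal FOI / FOV from the example above
distance_m = 15.0     # distance from the side of the detection zone
coverage_m = 2 * distance_m * math.tan(math.radians(fov_deg / 2))
print(round(coverage_m, 1))   # ~9.5 m at the 15 m mark, approaching the
                              # 10.5 m of three 3.5 m lanes deeper in the zone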
[0085] An example mounting configuration of the multipurpose traffic detection system 10 can be seen in FIG. 2, which depicts a schematic view of a roadway with 3 lanes being shown. The traffic detection system 10 is shown mounted on a pole 27 with an orientation towards traffic direction. Pole 27 can be a new dedicated road
infrastructure for the sensor installation or an already existing road infrastructure such as a streetlight assembly, or other types of infrastructures like gantries or buildings. This exemplary roadway comprises three adjacent traffic lanes for vehicles. The traffic detection system is intended to detect any type of objects that may be present within the predetermined 3D detection zone.
[0086] The mounting height of the traffic detection system 10 is, for example, between 1 and 10 m, with a lateral distance from the nearest traffic lane of, for example, between 1 and 5 m. In FIG. 2, three vehicles travelling in the same direction on the traffic lanes enter the 3D detection zone. When the vehicles reach the 3D detection zone, the multipurpose traffic detection system is used for detection, localization, classification and measurement of the speed of the vehicles through the zone.
The system can also be installed over the roadway on a gantry as shown in FIG. 3.
The system can also detect vehicles traveling in opposite directions.
[0087] The detection system can be installed at different heights, from the ground up to 10 m. FIG. 4 shows the impact of the installation height on the longitudinal length of the detection zone. With a fixed starting distance of detection, the longitudinal length of the detection zone will be shorter with a system installed higher.
The vertical angles of the FOI3D and FOV3D have to be wide enough to detect and track vehicles over several meters, for example over at least 8 m. For example, a system installed at a height of 3.5 m with a vertical FOI / FOV of 6° and a detection zone beginning at 15 m from the detector will have a detection zone depth of approximately 13 m.
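The 13 m figure follows from flat-ground trigonometry; a minimal sketch, assuming the lower FOV edge is aimed exactly at the start of the zone:

import math

h = 3.5                       # installation height (m)
fov_v = math.radians(6.0)     # vertical field of view
d_start = 15.0                # start of the detection zone (m)

angle_near = math.atan2(h, d_start)   # depression angle to the zone start
angle_far = angle_near - fov_v        # angle of the upper FOV edge
d_end = h / math.tan(angle_far)       # ground distance of the far edge
print(round(d_end - d_start, 1))      # ~13.0 m of detection zone depth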
[0088] Referring back to FIG. 1, part of the light diffusively reflected by the vehicles and objects in the FOI3D is directed towards the collecting aperture of the 3D Optical Receiver 14 (3DOR) for its 3D optical detection and subsequent conversion into digital waveforms. To be detected, an object should appear within the FOV3D of the 3DOR, which is defined by its optics as well as by the dimensions of its optically sensitive device. The 3DOR is composed of one or more optical lenses, multichannel
optical detectors, for example photodiode arrays, an analog frontend and an analog-to-digital converter. Usually, the channels are digitized in parallel and the system implements full-waveform signal processing of the signal waveforms generated by the plurality of optical detection channels.
[0089] The multipurpose traffic detection system provides good accuracy in terms of lateral resolution and is less dependent on the angle of installation than Radar technology.
[0090] In FIG. 1, the 2D Optical Receiver 16 (2DOR) is at least one image sensor, for example a CMOS or CCD (including front end and AD conversion), which provides images of the portion of the roadway area that encompasses or overlaps at least a section of the FOI3D of the 3DOE and the FOV3D of the 3DOR. The 2DOR will be used during installation, to transmit video data, and, for some applications, to help identify vehicles using, for example, Automatic License Plate Recognition (ALPR) techniques. For applications requiring vehicle identification, the requirement for the image sensor in terms of resolution is high. An external image sensor or camera can also be used for this function. The average size of a character on a license plate is between 50 mm and 80 mm. It takes at least 16 pixels per character (height) to obtain good results with Optical Character Recognition (OCR) processing within an ALPR system. Based on that criterion, the identification of a license plate of a vehicle circulating on a 3-lane highway (3 lanes x 3.5 m) requires an image sensor with at least 5 Mpixels (2.5K x 2K). High resolution image sensors are expensive. One way to reduce the cost is to use at least two image sensors, each with lower resolution, and to combine the information coming from both images using image stitching techniques.
The synchronization, acquisition and image processing are performed by Control and processing unit 22.
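The 5 Mpixel figure can be sanity-checked as follows; the uniform coverage and the mid-range character height are assumptions for this rough estimate:

road_width_m = 3 * 3.5            # three 3.5 m lanes covered horizontally
h_pixels = 2500                   # horizontal resolution of a 2.5K x 2K sensor
gsd_mm = road_width_m * 1000 / h_pixels   # ~4.2 mm of road per pixel
char_height_mm = 70.0                     # mid-range license plate character
print(round(char_height_mm / gsd_mm, 1))  # ~16.7 pixels per character height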
[0091] The 2D Illumination 18 (2DI) is an optical source emitting infrared and/or visible light. The 2DI can be embedded in the sensor enclosure or can be an external module. In one example embodiment, the optical source of 2DI 18 is at least one
[0089] The multipurpose traffic detection system provides a good accuracy in terms of lateral resolution and is less dependent on the angle of installation than Radar technology.
[0090] In FIG. 1, the 2D Optical Receiver 16 (2DOR) is at least one image sensor, for example a CMOS or CCD (including front end and AD conversion) which provides images of the portion of the roadway area that encompasses or overlaps at least a section of the F0130 of the 3DOE and the FOV3D of the 300R. The 2DOR will be used during installation, to transmit video data, and, for some applications, to help identify vehicles using, for example, Automatic License Plate Recognition (ALPR) techniques. For applications requiring vehicle identification, the requirement for the image sensor in terms of resolution is high. An external image sensor or camera can also be used for this function. The average size of a character on a license plate is between 50 mm to 80 mm. It takes at least 16 pixels per character (height) to obtain good results with an Optical Character Recognition (OCR) processing within an ALPR
system. Based on that criterion, the identification of a license plate of a vehicle circulating on a 3-lane highway (3.5 m x 3 m) requires an image sensor with a least 5 Mpixels (2.5K x 2K). High resolution image sensors are expensive. One way to reduce the cost is to use at least two image sensors each with lower resolution and to combine the information coming from both images using image stitching techniques.
The synchronization, acquisition and image processing are performed by Control and processing unit 22.
[0091] The 2D Illumination 18 (2DI) is an optical source emitting infrared and/or visible light. The 2DI can be embedded in the sensor enclosure or can be an external module. In one example embodiment, the optical source of 2DI 18 is at least one
LED. LEDs are efficient and the FOI can be optimized with optical collimators and diffusors. The pulse width of the 2D illumination can be in the range of 10 µs to 10 ms and can be synchronized with the image capture (integration time) of the image sensor(s).
For vehicles traveling at high speed, the integration time can be in the range of 500 µs and less. A vehicle moving at 150 km/h will travel approximately 21 mm in 500 µs.
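That figure is simply speed multiplied by integration time; a throwaway helper (illustrative only) makes it easy to budget motion blur for other speeds:

def blur_mm(speed_kmh, integration_us):
    """Distance travelled (mm) during one integration time."""
    return speed_kmh / 3.6 * integration_us * 1e-6 * 1000

print(round(blur_mm(150, 500), 1))   # ~20.8 mm at 150 km/h over 500 µs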
[0092] A single set of infrared LEDs can be used for both the 3DOE and the 2D illumination. Very short, high-intensity pulses (for example <50 ns) for 3D detection can be mixed with longer pulses (for example 10 µs to 10 ms) for the 2D sensor(s). The LEDs can have a wavelength between 800 and 1000 nm, for example.
[0093] Source Driver Electronics (SDE) 20 uses dedicated electronics for driving the 3DOE 12 with current pulses having peak amplitude and duration suitable for effective implementation of the optical ranging principle on which the operation of the multipurpose traffic detection system is based. A pulsed voltage trigger signal forwarded by the Control and Processing Unit 22 commands the generation of each current pulse by the drive electronics. The operating conditions and performance requirements for the multipurpose traffic detection system call for the emission of short optical pulses having a duration in the range of 5 to 50 ns, for example.
Depending on the repetition rate at which the pulses are emitted, the duty cycle (relative ON time) of the optical emission can be as low as 0.1 %. In order to get the desired peak optical output power for the radiated light pulses, any lowering of the peak drive level of the LEDs or Laser can be compensated by mounting additional LED or Laser sources in the 3DOE 12 and appropriately duplicating their drive electronics.
[0094] The SDE 20 can also drive 2D illumination with current pulses having peak amplitude and duration suitable for effective illumination of the scene for the 2DOR
16. A pulsed voltage trigger signal forwarded by the Control and Processing Unit commands the generation of each current pulse by the drive electronics. The operating conditions and performance requirements for the multipurpose traffic
detection system call for the emission of 2D optical pulses having a duration in the range of 10 µs to 10 ms, for example.
[0095] The SDE 20 can control and receive information from 3DOE and 2D
illumination about the intensity of the current pulse, LEDs/Laser temperature, etc.
[0096] All of these modules exchange data and receive commands and signals from the control and processing unit 22. The Control and processing unit 22 can include digital logic (for example a Field-Programmable Gate Array (FPGA)) for pre-processing the 3D raw data and for the synchronization and control, a memory, and a processing unit. The processing unit can be a digital signal processing (DSP) unit, a microcontroller or an embedded personal computer (PC) board, as will be readily understood.
[0097] The primary objective of the 3D full-waveform processing is to detect, within a prescribed minimum detection probability, the presence of vehicles in a lane that is mapped to a number of adjacent detection channels. Because of the usual optical reflection characteristics of the vehicle bodies and of various constraints that limit the performances of the modules implemented in a traffic detection system, the optical return signals captured by the 3DOR are optimized by acquisition shifting techniques, accumulation techniques and filtering and correlation techniques to enhance the signal-to-noise ratio (SNR) of the useful signal echoes and detect a digital replica of the pulse emitted by the 3DOE. The properties (peak amplitude, shape, time/distance location) of the useful features present in the waveforms should ideally remain unchanged during the time period required to capture a complete set of waveforms that will be averaged. This condition may cause issues when attempting to detect vehicles that move rapidly, as this situation leads to signal echoes that drift more or less appreciably from waveform to waveform. The detrimental impacts of this situation can be alleviated by designing the traffic detection system so that it radiates light pulses at a high repetition rate (e.g., in the tens to hundreds of kHz range). Such high repetition rates will enable the capture of a very large number of waveforms
during a time interval sufficiently short to keep the optical echoes associated with a moving vehicle stationary. Detection information on each channel can then be updated, for example from a few tens to a few hundred times per second.
For example, with a multipurpose traffic detection system using a frame rate of 200 Hz, a car at 250 km/h would move forward by approximately 35 cm between frames.
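The trade-off described in paragraph [0097] can be made concrete with rough numbers; the repetition rate, frame rate and sqrt(N) averaging gain (valid for uncorrelated noise) are illustrative assumptions, not system specifications:

import math

pulse_rate_hz = 100_000   # "tens to hundreds of kHz", per the text above
frame_rate_hz = 200       # detection update rate
n_avg = pulse_rate_hz // frame_rate_hz    # 500 waveforms averaged per update

snr_gain_db = 10 * math.log10(n_avg)      # ~27 dB for uncorrelated noise
drift_m = (250 / 3.6) / frame_rate_hz     # echo drift of a 250 km/h vehicle
print(n_avg, round(snr_gain_db, 1), round(drift_m, 2))   # 500 27.0 0.35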
[0098] The Control and processing unit 22 has numerous functions in the operation of the multipurpose traffic detection system, one of these being the calibration of the system. This calibration process can be done by connecting a remote computer to the Control and processing unit 22 and communicating using a Power management and data Interface 24.
[0099] During normal operation of the multipurpose traffic detection system, Power management and data Interface 24 receives information from the external controller (including parameters like a speed limit) and also allows the Control and processing unit 22 to send data. The data sent can be related to the detection of each vehicle and can comprise information such as an accurate timestamp of the detection time synchronized with the external controller, a unique identifier (ID number), the lane and position of the vehicle (lateral and longitudinal) for each trigger event, the position of the vehicle in an image, video streaming, identification by ALPR, speed, classification, weather information, etc., to the external controller.
[00100] In another embodiment, part of the process and algorithms can be integrated in the external controller which receives the raw data from the Control and processing unit by the Power Management and Interface.
[00101] Several types of interfaces can be used to communicate with the external controller: Ethernet, RS-485, wireless link, etc. Power over Ethernet (PoE) may be used for its simplicity of connection including power, data and distance (up to 100 m).
[00102] The data information can also be stored in memory and retrieved later.
[00103] Power management and data Interface 24 can also send electrical trigger signals to synchronize events like the detection of the front or the rear of a vehicle at a specific position to other devices like an external camera, an external illuminator or other interface and external controller.
[00104] The Power Supply Management and Data Interface 24 can also be useful in transmitting images and videos to an external system or network to allow a remote operator to monitor different traffic events (e.g., accident, congestion, etc.). Video compression (e.g., MPEG) can be done by a processor to limit the bandwidth required for the video transmission.
[00105] The four optical modules can be rigidly secured to the attachment surface of an actuator assembly (not shown). The modules can then pivot in a controlled manner about up to three orthogonal axes to allow a precise alignment of their common line of sight after the multipurpose traffic detection unit has been installed in place and aligned in a coarse manner. The fine-tuning of the orientation of the line of sight is, for example, performed remotely by an operator via a computer device connected to the multipurpose traffic detection system, for example through PoE or a wireless data link.
[00106] FIG. 1 also shows a functional block labeled Sensors 26 for measuring different parameters. The internal temperature in the system enclosure can be monitored with a temperature sensor which can be used to control a heating/cooling device, not shown. The current orientation of the system can be monitored using an inclinometer/compass assembly. Such information may be useful for timely detection of a line of sight that has become misaligned. The sensor suite may also include an accelerometer for monitoring in real-time the vibration level to which the system is subjected, as well as a global positioning system (GPS) unit for real-time tracking of the location of the system and/or for having access to a real-time clock.
[00107] FIG. 5 shows an example casing with a window 28 for the multipurpose traffic detection system. The casing can house a more or less complete suite of monitoring instruments, each of them forwarding its output data signals to the control and processing unit for further processing or relay. In other configurations of the casing, lateral sections can be integrated to protect the window from road dust.
Use, set-up, basic principles, features and applications [00108] FIG. 6 shows a top view of an installation of the multipurpose detection system. The multichannel 3DOR detects vehicles present within a two-dimensional detection zone; the active nature of the traffic detection system provides an optical ranging capability that enables measurement of the instantaneous distances of the detected vehicles from the system. This optical ranging capability is implemented via the emission of light in the form of very brief pulses along with the recording of the time it takes the pulses to travel from the system to the vehicle and then to return to the system. Those skilled in the art will readily recognize that the optical ranging is performed via the so-called time-of-flight (TOF) principle, of widespread use in optical rangefinder devices. However, most optical rangefinders rely on analog peak detection of the light pulse signal reflected from a remote object followed by its comparison with a predetermined amplitude threshold level. In the present system, the traffic detection system numerically processes the signal waveform acquired for a certain period of time after the emission of a light pulse. The traffic detection system can therefore be categorized as a full-waveform LIDAR (Light Detection and Ranging) instrument. The system analyses the detection and distance measurements on several 3D channels and is able to track several vehicles at the same time in the detection zone. The system can determine the lane position, the distance from the detector and the speed, for each individual vehicle.
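At its core, the ranging reduces to range = c x round-trip time / 2, with the echo time read off the digitized waveform instead of an analog threshold. A deliberately simplified sketch (the sample values, the digitizer rate and the bare peak search are illustrative; actual full-waveform processing is considerably more elaborate):

C_MPS = 299_792_458.0        # speed of light (m/s)
SAMPLE_PERIOD_S = 2e-9       # assumed 2 ns digitizer sampling period

def range_from_waveform(waveform):
    """Range (m) of the strongest echo in one channel's digital waveform."""
    peak_index = max(range(len(waveform)), key=waveform.__getitem__)
    return C_MPS * (peak_index * SAMPLE_PERIOD_S) / 2

waveform = [3, 2, 4, 3, 2, 5, 40, 180, 90, 10, 4, 3]   # echo peaks at sample 7
print(round(range_from_waveform(waveform), 2))          # ~2.1 m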
[00109] As can be seen in FIG. 6, the detection system 10 is installed at a reference line 60, has a wide FOV 61, has a large and wide detection and tracking zone
covering several lanes and several meters of depth, and detects several vehicles on several lanes in a roadway.
[00110] The detection system can be configured with two trigger positions. The first trigger 63 is set in the first section of the detection zone and the second trigger 64 is set a few meters away, in this case close to the end of the detection zone. In this example, a first vehicle 65 was detected when entering the detection zone on lane 1, was tracked, was detected at the position of the first trigger 63, was continuously tracked and is now being detected at the position of the second trigger 64.
Information about its lane position, speed, etc., can be sent constantly or only when the vehicle reaches pre-established trigger positions. A
second vehicle 66 was detected when entering the detection zone on lane 2, was tracked, was detected at the position of the first trigger 63, and is continuously tracked until it reaches the position of the second trigger 64. A third vehicle 67 was detected when entering the detection zone on lane 3, was tracked, is detected at the position of the first trigger 63, will continue to be tracked and will reach the position of the second trigger 64.
[00111] The detection system has the capability to identify, track and send information about multiple vehicles at the same time, and its multiple receiver channels greatly reduce the cosine effect for speed measurement.
[00112] The system can capture several snapshots using the 2DOR at different levels of illumination using the 2DOE. Information about each vehicle (date/hour of an event, speed, position, photographs and identification based on Automatic License Plate Recognition) can be sent to the external controller. This is useful for applications like traffic management (for vehicle detection, volume, occupancy, speed measurement and classification), speed enforcement, red light enforcement, etc. The system can be permanently or temporarily installed. It can even be a mobile system, for example a system installed on a vehicle.
[00113] An example configuration for red light enforcement is shown in FIG. 7.
The capability of the system to detect, track, determine the lane position, measure the speed and take photographs (or videos) of each vehicle several meters away from the stop bar has great value for this application. Red light enforcement applications require the detection of a vehicle entering an intersection when the traffic light is in the red state and the automatic capture of several images of the vehicle as it crosses the stop bar and runs the red light. The detection system needs to provide unambiguous evidence that a violation occurred.
[00114] For most applications, detection rates should be high, for example on the order of 95% or more (without occlusion), and false detections should occur only very rarely. Images and information about the date and time of the infraction allow the authorities to issue a traffic infraction ticket. Identification of the driver and/or owner of the vehicle is generally made by the authorities using the information from the license plate of the vehicle. Since speed information is available, speed infractions can also be detected when the traffic light is green. As will be readily understood, the detection system can also be used for other detection applications such as stop line crossing and railway crossing.
[00115] In FIG. 7, the detection system is installed on the side of the road at an example distance of 15 to 25 m from the stop bar 70. The detection and tracking zone 71 starts a few meters before the stop bar 70 and covers several meters after the bar, allowing a large and deep zone for detecting and tracking any vehicle on several lanes (three lanes in that example), at different speeds (from 0 to more than 100 km/h), at a rate of up to ten vehicles detected per second. The detection system can take several images of a red light infraction including, for example, when the vehicle is located at a predetermined trigger distance, for example at the first trigger 72 when the back of the vehicle is close to the stop bar 70 and at a second trigger when the back of the vehicle is a few meters away from the stop bar 70. Optional
detection of the lane position is useful when a right turn on red is allowed at the intersection.
[00116] Speed enforcement is another application that requires providing evidence that a speed violation occurred. The correlation between the detected speed and the actual vehicle guilty of the infraction needs to be trustworthy. Sufficient information should be provided to allow identification of the vehicle owner, using information from the license plate, for example. The capability of the detection system to measure the speed of several vehicles at the same time with high accuracy and to associate each speed measurement with the specific identified vehicle is useful for traffic enforcement applications. This is made possible by, among others, the multiple FOVs, the robustness and accuracy of the sensor and the capability to store several images of a violation.
[00117] The detector can store speed limit data (which can be different for each lane) and determine the occurrence of the infraction.
[00118] The detector can be mounted on a permanent installation or can also be temporary, provided on a movable tripod for example. Detectors can also be installed at the entry and at the exit of a point-to-point enforcement system, allowing the measurement of the average speed of a vehicle by determining the time the vehicle takes to travel between the two points. The position of each vehicle and its classification are also information that the detector can transmit to the external controller. In some countries, lane restrictions can be enforced for specific vehicles, such as trucks for example.
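As an illustration of the point-to-point principle, a minimal sketch follows: the average speed is simply the known distance between the entry and exit detectors divided by the elapsed time for a matched vehicle. Function and parameter names are illustrative assumptions, not the patent's implementation.

```python
def average_speed_kmh(section_length_m, entry_time_s, exit_time_s):
    """Average speed over a point-to-point enforcement section.

    section_length_m: known distance between entry and exit detectors.
    entry_time_s, exit_time_s: timestamps of the same vehicle at each
    detector (matched, e.g., by license plate reading).
    """
    elapsed = exit_time_s - entry_time_s
    if elapsed <= 0:
        raise ValueError("exit must occur after entry")
    return (section_length_m / elapsed) * 3.6  # m/s -> km/h

# Example: a 2 km section covered in 60 s -> 120 km/h average.
print(average_speed_kmh(2000.0, 0.0, 60.0))  # 120.0
```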
[00119] Moreover, the multipurpose traffic detection system can fulfill more than one application at a time. For example, the system used for traffic management near an intersection can also be used for red light enforcement at that intersection.
Methods for alignment and detection of the traffic detection system
[00120] A method that allows a rapid and simple alignment step for the multipurpose traffic detection system after it has been set in place is provided.
[00121] FIGS. 8A and 8B show example images of a roadway captured by the 2DOR during the day. The image is overlaid with the perimeters of a set of 16 contiguous detection zones of the 3DOR. In FIG. 8A, a vehicle present in the first lane 32 would be detected by several adjacent channels at detected distances between 17.4 m and 17.6 m (see the numbers at the bottom of the overlay).
In FIG. 8B, the vehicle is detected in the second lane 34 between 24.0 m and 24.4 m.
Note that the overall detection zone is wide enough to cover more than two lanes. In some situations, depending on the context of the installation, some objects or even the ground can be detected by the system but can be filtered out and not considered as objects of interest.
[00122] FIG. 9A is illustrative of an installation for a red light enforcement application.
Some channels detect echo back signals from the ground (see the numbers at the bottom of the overlay) but the system is able to discriminate them as static objects.
FIG. 9B is a graph showing a top view of the 16 3D fields of view over a road with 3 lanes. In a Cartesian coordinate system, if the detection system represents the origin, the horizontal direction from left to right is taken as the positive x-axis and represents the width of the 3 lanes in meters, and the vertical direction from bottom to top is taken as the positive y-axis and represents the longitudinal distance from the sensor. To facilitate installation, the installation software will indicate the beginning and the end of the detection zone by showing a detection line as seen in FIG. 9B.
Multi-vehicle simultaneous detection and tracking for position determination, speed measurement and classification
[00123] FIG. 10 shows a top view of an example road facility equipped with a multipurpose traffic detection system 10. The system 10, mounted on an existing traffic infrastructure, is used to illuminate a detection zone 42. In this example, the mounting height is between 1 and 10 m, with a distance from the road between 1 and m. In FIG. 10, the vehicles 46 travel in lanes 43, 44 and 45 in a direction indicated by arrow A through the detection system illumination zone 42. The detection system 10 is used for detecting information on the rear surface of vehicles 46 entering the illumination zone 42. The detection system 10 is based on an IR LED illumination source with a multiple field-of-view detector.
[00124] In FIG. 11, the 16 fields of view covering a section of the road are shown. In a Cartesian coordinate system, if the detection system represents the origin 49, the horizontal direction from left to right is taken as the positive x-axis 50 and the vertical direction from bottom to top is taken as the positive y-axis 51, then each 3D detection 53 gives the distance between an object and the sensor.
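The mapping from a channel's range measurement to the (x, y) road coordinates described above can be sketched as follows; the channel-to-azimuth mapping and the total field-of-view value are illustrative assumptions, not values from the patent.

```python
import math

NUM_CHANNELS = 16
TOTAL_FOV_DEG = 48.0  # assumed horizontal field of view

def channel_azimuth_deg(channel):
    """Azimuth of a channel's center, 0 deg = sensor boresight (y-axis).

    Channels are assumed evenly spaced across the horizontal FOV.
    """
    step = TOTAL_FOV_DEG / NUM_CHANNELS
    return (channel + 0.5) * step - TOTAL_FOV_DEG / 2.0

def detection_to_xy(channel, range_m):
    """Convert a 3D detection (channel index, measured range) to
    Cartesian road coordinates with the sensor at the origin:
    x across the lanes, y along the longitudinal distance."""
    az = math.radians(channel_azimuth_deg(channel))
    return range_m * math.sin(az), range_m * math.cos(az)

# Example: a 25 m return on channel 12 lands to the right of boresight.
x, y = detection_to_xy(12, 25.0)
print(f"x = {x:.1f} m, y = {y:.1f} m")  # ~(5.8, 24.3)
```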
[00125] FIG. 12 shows the system in an example configuration with two trigger lines 56 and 57, located at a distance from the sensor of between 10 and 50 m.
The two trigger lines 56 and 57 are configured by the user. Blob 55 illustrates a detectable vehicle rear. When the blob reaches the trigger line, the system returns a trigger message.
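A trigger line reduces to a simple longitudinal-position test on the tracked blob between consecutive frames; the sketch below, with hypothetical names, illustrates the idea.

```python
def check_triggers(blob_y_m, previous_y_m, trigger_lines_m):
    """Return the trigger lines crossed by a tracked blob this frame.

    blob_y_m / previous_y_m: current and previous longitudinal position
    of the estimated vehicle rear. A trigger fires when the blob moves
    across a configured line between two consecutive frames, in either
    direction of travel.
    """
    return [line for line in trigger_lines_m
            if previous_y_m > line >= blob_y_m or previous_y_m < line <= blob_y_m]

# Example with two user-configured lines at 14 m and 22 m from the sensor:
print(check_triggers(13.6, 14.2, [14.0, 22.0]))  # [14.0] -> emit trigger message
```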
[00126] FIG. 13 and FIG. 14 show example data for vehicle tracking in the context of traffic light enforcement. Thanks to a projection of the field of view of the detection system on the real 2D image, the relationship between the top view (FIGS. 13B, 13D, 13F) and the scene (FIGS. 13A, 13C, 13E) is made apparent. The 3D detections are represented by dots in the top views. In this example, a small diamond in the top views shows the estimated position of the rear of each vehicle based on the 3D
detections. In this example, the small diamond represents the middle of the rear of the vehicle. The distance of detection is indicated under each detection channel in the scene image. The amplitude of the detection is also indicated below the distance of detection. In the top view, thin lines define the limits of the tracking area and dotted lines define two trigger lines configured by the user. When entering this area, a new vehicle is labeled with a unique identifier. In each frame, its estimated position is shown using a small diamond. As shown, the interactions between vehicle detections are managed by the tracking algorithm, allowing vehicles located in the detection area to be distinguished.
[00127] FIG. 15 shows the steps performed during the execution of an example tracking algorithm. At step 80, the tracking algorithm selects the reliable measurements located on the road. At step 81A, a generic Kalman filter for tracking a variable number of objects is used. At step 82, a road user classification based on geometric features is computed. Finally, at step 83, a message with position, speed, class and, if necessary, a trigger is sent at each frame for the vehicles located in the detection zone.
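The four steps of FIG. 15 map naturally onto a per-frame processing loop. The skeleton below is a hedged sketch of that structure only; the function names, data structures and message format are assumptions, and the filter and classifier bodies are elided.

```python
def process_frame(detections, tracker, classifier):
    """One iteration of the FIG. 15 pipeline (illustrative sketch).

    detections: raw 3D measurements for the current frame, as dicts.
    tracker / classifier: stand-ins for the Kalman-filter tracker (81A)
    and the geometric road-user classifier (82) described in the text.
    """
    # Step 80: keep only reliable measurements located on the road.
    on_road = [d for d in detections if d.get("on_road", False)]

    # Step 81A: update the variable-size set of tracked objects.
    tracks = tracker.update(on_road)

    # Steps 82-83: classify each track and emit one message per vehicle.
    messages = []
    for track in tracks:
        messages.append({
            "id": track["id"],
            "position": track["position"],
            "speed": track["speed"],
            "class": classifier.classify(track),
            "trigger": track.get("crossed_trigger"),  # None if no trigger
        })
    return messages
```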
[00128] FIG. 16 shows the steps performed during the execution of the tracking algorithm if the traffic light state 85 is known. Steps 80/800, 82 and 83 are unchanged. However, step 81B is different because the additional information allows working with a joint state-space tracker.
[00129] The selection of relevant measures 80 is described in FIG. 17. At step 100, the tracking algorithm reads the available observations. At step 101, the tracking algorithm removes each detection that is not located on the road. Step 101 is followed by step 102, where the tracking algorithm recognizes lines by a feature-based approach. Step 103 eliminates the points located on lines parallel to the x-axis 50, with the aim of extracting the characteristics relating to the side(s) of vehicles and keeping only the objects having a "vehicle rear signature".
[00130] The estimation of a line based on the covariance matrix using polar coordinates 102 is illustrated in FIG. 18. This estimation is based on feature extraction.
The strength of the feature-based approach lies in its abstraction from data type, origin and amount. In this application, line segments are considered as a basic primitive which later serves to identify and then remove the sides of vehicles.
Feature extraction is divided into two sub-problems: (i) segmentation, to determine which data points contribute to the line model, and (ii) fitting, to give an answer as to how these points contribute.
[00131] The polar form is chosen to represent a line model:
[00132] $x \cos\alpha + y \sin\alpha = r$

[00133] where $-\pi < \alpha \le \pi$ is the angle between the x-axis and the normal of the line, $r > 0$ is the perpendicular distance of the line to the origin, and $(x, y)$ are the Cartesian coordinates of a point on the line. The covariance matrix of the line parameters is:

[00134] $\mathrm{cov}(r, \alpha) = \begin{bmatrix} \sigma_r^2 & \sigma_{r\alpha} \\ \sigma_{r\alpha} & \sigma_\alpha^2 \end{bmatrix}$
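A hedged sketch of the fitting sub-problem under this polar model: the line direction is taken from the principal axis of the points, the normal angle alpha follows in closed form, and r is the projection of the centroid onto the normal. This is a standard total-least-squares fit shown for illustration; it is not claimed to be the patent's estimator.

```python
import math

def fit_polar_line(points):
    """Fit x*cos(a) + y*sin(a) = r to 2D points (total least squares).

    Returns (alpha, r) with alpha the normal angle in (-pi, pi] and
    r >= 0 the perpendicular distance of the line from the origin.
    """
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    # Scatter terms of the centered points.
    sxx = sum((x - cx) ** 2 for x, _ in points)
    syy = sum((y - cy) ** 2 for _, y in points)
    sxy = sum((x - cx) * (y - cy) for x, y in points)
    # Closed-form normal direction of the best-fit line (2D TLS).
    alpha = 0.5 * math.atan2(-2.0 * sxy, syy - sxx)
    r = cx * math.cos(alpha) + cy * math.sin(alpha)
    if r < 0:  # keep r >= 0 by flipping the normal direction
        r = -r
        alpha = alpha - math.pi if alpha > 0 else alpha + math.pi
    return alpha, r

# Example: points on the line y = 10 (a "vehicle rear" parallel to the x-axis).
pts = [(x, 10.0) for x in range(5)]
a, r = fit_polar_line(pts)
print(f"alpha = {math.degrees(a):.1f} deg, r = {r:.2f}")  # 90.0 deg, 10.00
```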
[00135] FIG. 19 shows a state diagram for the 3D real-time detection multi-object tracker. The core of the tracker 91A is based on a Kalman filter in all weather and lighting conditions. The observation model 90 is illustrated in FIG. 21, which presents an example method to compute the vehicle position by weighting each 3D observation according to its amplitude. This method improves the accuracy of the estimated position with respect to using only the x and y Cartesian positions.
[00136] Expression 301 computes the blob position as follows:
[00137] $\hat{P} = \sum_{n=1}^{N} \pi_n \, P_n$

[00138] where $\pi_n$ is the intensity weight for the observation $n$, $n \in [1, \dots, N]$, and $N$ is the number of observations grouped together. Step 301 is followed by computing the observation weight depending on the intensity at step 302.
[00139] The function 300 normalizes the weight $\pi_n$ according to the amplitude $A_n$ of the observation $P_n$:

[00140] $\pi_n = \dfrac{A_n}{\sum_{m=1}^{N} A_m}$
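Putting [00137] and [00140] together, a small sketch of the amplitude-weighted blob position follows; this is a plausible reading of the partly illegible source equations, offered with that caveat.

```python
def blob_position(observations):
    """Amplitude-weighted blob position per paragraphs [00137]-[00140].

    observations: list of (x, y, amplitude) tuples for the 3D returns
    grouped into one blob. Stronger echoes pull the estimate harder.
    """
    total_amp = sum(a for _, _, a in observations)
    x = sum(a * xi for xi, _, a in observations) / total_amp
    y = sum(a * yi for _, yi, a in observations) / total_amp
    return x, y

# Example: the strong (amplitude 9) return dominates the weaker ones.
obs = [(-0.4, 25.0, 9.0), (0.2, 25.2, 3.0), (0.5, 25.4, 1.0)]
print(blob_position(obs))  # ~(-0.19, 25.08)
```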
[00141] The state evolution model 92 is represented by the classical constant-velocity model. The kinematics model can be represented in matrix form by:
[00142] $P_{k+1} = F \cdot P_k + G \cdot V_k, \quad V_k \sim N(0, Q_k)$

[00143] where $P_k = (x_{obs}, \dot{x}_{obs}, y_{obs}, \dot{y}_{obs})^t$ is the target state vector, $F$ the transition matrix which models the evolution of $P_k$, $Q_k$ the covariance matrix of $V_k$, and $G$ the noise matrix, which is modeled by acceleration.

[00144] $F = \begin{bmatrix} 1 & \Delta T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \Delta T \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad G = \begin{bmatrix} \frac{\Delta T^2}{2} & 0 \\ \Delta T & 0 \\ 0 & \frac{\Delta T^2}{2} \\ 0 & \Delta T \end{bmatrix}, \quad Q_k = \begin{bmatrix} \sigma_{a_x}^2 & 0 \\ 0 & \sigma_{a_y}^2 \end{bmatrix}$
[00145] The observation equation can be written as:

[00146] $Z_k = H \cdot P_k + W_k, \quad W_k \sim N(0, R_k)$

[00147] where $Z_k = (x_{obs}, y_{obs})^t$ is the measurement vector, $H$ the measurement sensitivity matrix, and $R_k$ the covariance matrix of $W_k$.

[00148] $H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \quad R_k = \begin{bmatrix} \sigma_{obs_x}^2 & 0 \\ 0 & \sigma_{obs_y}^2 \end{bmatrix}$
[00149] The state space model 93A is based on a probabilistic framework where the evolution model is supposed to be linear and the observation model is supposed to be corrupted by Gaussian noise. In a 3D image, the system state encodes the information observed in the scene, e.g. the number of vehicles and their characteristics, as $x_k^N = (p_k^N, l_k^N)$ with $N$ the number of detected vehicles, where $p_k^N$ denotes the 2D position of object $N$ at iteration $k$ and $l_k^N$ gives identification, age, lane and the object classification.
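A compact sketch of this constant-velocity Kalman recursion, transcribing the F, G, Q and H, R matrices above; the time step and noise levels below are illustrative assumptions.

```python
import numpy as np

def make_cv_model(dt, sigma_ax, sigma_ay, sigma_ox, sigma_oy):
    """Build the constant-velocity model of paragraphs [00142]-[00148].

    State: (x, vx, y, vy). dt is the frame period; sigma_a* are the
    acceleration-noise levels, sigma_o* the measurement-noise levels.
    """
    F = np.array([[1, dt, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, dt],
                  [0, 0, 0, 1]], dtype=float)
    G = np.array([[dt**2 / 2, 0],
                  [dt, 0],
                  [0, dt**2 / 2],
                  [0, dt]], dtype=float)
    Q = G @ np.diag([sigma_ax**2, sigma_ay**2]) @ G.T  # process noise
    H = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
    R = np.diag([sigma_ox**2, sigma_oy**2])
    return F, Q, H, R

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle for a measurement z = (x_obs, y_obs)."""
    x = F @ x                       # predict state
    P = F @ P @ F.T + Q             # predict covariance
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)         # update with the innovation
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: track a vehicle receding at ~15 m/s along y.
F, Q, H, R = make_cv_model(dt=0.05, sigma_ax=1.0, sigma_ay=1.0,
                           sigma_ox=0.1, sigma_oy=0.1)
x, P = np.zeros(4), np.eye(4) * 10.0
for k in range(1, 11):
    z = np.array([0.0, 20.0 + 15.0 * 0.05 * k])
    x, P = kalman_step(x, P, z, F, Q, H, R)
print(x.round(2))  # the velocity estimate vy approaches ~15 m/s
```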
[00150] FIG. 20 shows a state diagram for the 3D real-time detection multi-object joint tracker. The core of 91B is based on a Kalman filter which addresses the issue of interacting targets, which cause occlusion issues. When an occlusion is present, 3D data alone can be unreliable and is not sufficient to detect the object of interest at each frame. If the algorithm uses the traffic light state 85, occlusions can be modeled with a joint state space model 93B. The multi-object joint tracker includes a multi-object interaction distance which is implemented by including an additional interaction factor in the vehicle position. The state space model 93B encodes the observations detected in the scene, e.g. the number of vehicles, the traffic light state and the interaction between the vehicles located in the same lane, by concatenating their configurations into a single super-state vector such as $X_k = (\theta_k, x_k^1, \dots, x_k^N)$, with $\theta_k$ the size of the state space at iteration $k$ and $x_k^N = (p_k^N, l_k^N)$ the state vector associated with the object $N$, where $p_k^N$ denotes the 2D position of the object $N$ at iteration $k$ and $l_k^N$ gives identification, age, lane, class, traffic light state and the object interaction.
[00151] Before integrating measures into the filter, a selection is made by a two-step procedure shown in FIGS. 22 and 23: first, at step 400, a validation gate; then, at step 401A/B, data association. The validation gate is the ellipsoid of size $N_z$ (the dimension of the measurement vector) defined such that:

[00152] $\tilde{\nu}^t S^{-1} \tilde{\nu} \le \gamma$

[00153] where $\tilde{\nu} = Z_k - H \hat{P}_{k|k-1}$ is the innovation, $S$ the covariance matrix of the predicted value of the measurement vector and $\gamma$ is obtained from the chi-square tables for $N_z$ degrees of freedom. This threshold represents the probability that the (true) measurement will fall in the gate. Step 400 is followed by step 401A/B, which makes the matching between a blob and a hypothesis. Then: (i) consider all entries as new blobs; (ii) find the corresponding entries to each blob by considering gating intervals around the predicted position of each hypothesis; (iii) choose the nearest entry of each interval as the corresponding final observation of each blob. At step 402, the tracking algorithm uses a track management module in order to change the number of hypotheses, as follows: (i) if, considering the existing hypotheses, an observation occurs that cannot be explained, the track management module proposes a new hypothesis; (ii) if a hypothesis does not find any observation after 500 ms, the track management module proposes to suppress it. In this case, an evolution model helps to guide the state space exploration of the Kalman filter algorithm with a prediction of the state. Finally, step 403 uses a Kalman framework to estimate the final position of the vehicle.
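A hedged sketch of the gate-then-associate step: the Mahalanobis test of [00152] filters candidate measurements, and the nearest gated candidate is kept for the hypothesis. The gate value and names are illustrative; 5.99 is the standard chi-square threshold for 2 degrees of freedom at 95%.

```python
import numpy as np

GAMMA = 5.99  # chi-square gate for N_z = 2 dof at 95% (from tables)

def gate_and_associate(z_pred, S, candidates):
    """Validation gate (step 400) + nearest-neighbor association (401).

    z_pred: predicted measurement H @ P_hat for one hypothesis.
    S: innovation covariance. candidates: candidate measurements.
    Returns the gated candidate with the smallest Mahalanobis distance,
    or None if nothing falls inside the gate.
    """
    S_inv = np.linalg.inv(S)
    best, best_d2 = None, GAMMA
    for z in candidates:
        nu = z - z_pred              # innovation
        d2 = float(nu @ S_inv @ nu)  # squared Mahalanobis distance
        if d2 <= best_d2:            # inside the gate and nearest so far
            best, best_d2 = z, d2
    return best

# Example: only the second candidate falls inside the gate.
S = np.diag([0.25, 0.25])
z_pred = np.array([0.0, 24.0])
cands = [np.array([3.0, 30.0]), np.array([0.2, 24.3])]
print(gate_and_associate(z_pred, S, cands))  # [0.2 24.3]
```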
[00154] In a 3D image, the system state encodes the information observed in the scene, i.e. the number of vehicles and their characteristics, as $X_k = (\theta_k, x_k^1, \dots, x_k^N)$ with $\theta_k$ the size of the state space (the number of detected vehicles) at iteration $k$ and $x_k^N = (p_k^N, l_k^N)$ the state vector associated with object $N$, where $p_k^N$ denotes the 2D position of object $N$ at iteration $k$ and $l_k^N$ gives identification, age, lane and the object classification. Steps 90 and 92 are unchanged.
[00155] FIG. 24 shows the steps performed during the execution of the classification algorithm. At step 500, the algorithm checks if a line is detected in the 3D image. If a line is detected, step 500 is followed by step 501, which computes the vehicle length.
The vehicle length is defined as the overall length of the vehicle (including attached trailers) from the front to the rear. In order to calculate the length, two different positions are used: $X_0$ and $X_1$. $X_0$ is given by the position of the first detected line and $X_1$ is given by the trigger line 1 (for example). Once the speed has been estimated, the vehicle length $l$ can be determined as:
[00156] $l\,[\mathrm{m}] = s\,[\mathrm{m/s}] \cdot (X_1(t)[\mathrm{s}] - X_0(t)[\mathrm{s}]) - (X_1(x)[\mathrm{m}] - X_0(x)[\mathrm{m}]) + Seg\,[\mathrm{m}] + TH\,[\mathrm{m}]$

where $s$ is the vehicle speed, $Seg$ is the length of the detected line and $TH$ is a calibration threshold determined from a large dataset.
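Under the sign convention reconstructed above (an assumption, since the source equation is partly illegible), the length computation might be sketched as:

```python
def vehicle_length_m(speed_ms, t0_s, t1_s, x0_m, x1_m, seg_m, th_m):
    """Vehicle length per [00156] (reconstructed sign convention).

    speed_ms: estimated speed; t0/t1 and x0/x1: times and longitudinal
    positions at X0 (first detected line) and X1 (trigger line 1);
    seg_m: detected line length; th_m: calibration threshold.
    """
    return speed_ms * (t1_s - t0_s) - (x1_m - x0_m) + seg_m + th_m

# Example: 20 m/s over 0.9 s with a 13.5 m position change:
# 18.0 - 13.5 + 0.4 + 0.1 = 5.0 m.
print(vehicle_length_m(20.0, 0.0, 0.9, 11.5, 25.0, 0.4, 0.1))
```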
[00157] If the line is not detected at step 500, step 500 is followed by step 502, which computes the vehicle height. The vehicle height is estimated during the entry into the sensor field of view. As shown in FIG. 26, for a known configuration of the detection system, there is a direct geometric relationship between the height of a vehicle 601 and the detection distance 600. The accuracy 602 is dependent on the half-size of the vertical FOV angle 603. The height measurement is validated if the accuracy is lower than a threshold.
[00158] Finally, step 502 is followed by step 503 which computes the vehicle width.
Over the vehicle blob, let $(y_l, x)$ be the leftmost pixel and $(y_r, x)$ be the rightmost pixel in the vehicle blob for a given $x$. Then the width $W$ of the object is determined from the following formula:

[00159] $W = |y_r - y_l|$
[00160] FIGS. 25A, 25B and 25C show results of vehicle classification based on the classification algorithm. For example, in FIG. 25A, the classification result is a heavy vehicle; in FIG. 25B, it is a four-wheeled lightweight vehicle and in FIG. 25C, it is a two-wheeled lightweight vehicle. The information from the detection system is flexible and can be adapted to different classification schemes. FIG. 25 illustrates graphically the basic elements of the concept of an object-box approach, which is detailed below and in FIG. 27 and FIG. 28.
[00161] The object-box approach is mainly intended for vehicles because this approach uses the vehicle geometry in a LEDDAR image. The vehicles are represented by a 3D rectangular box of detected length, width and height. The size of the rectangular box will vary depending on the detections in the FOV.
FIGS. 27A, 27B, 27C and 27D show top view frames of a vehicle detected by the LEDDAR sensor. FIGS. 28A, 28B, 28C and 28D show corresponding side view frames of the vehicle of FIG. 27. [00162] FIGS. 27A, 27B, 27C, 27D and FIGS. 28A, 28B, 28C, 28D show the changing 3D size of the rectangle 701 for four example positions of a vehicle 702 in the 3D sensor FOV 703. When a vehicle 702 enters the 3D sensor FOV 703, two detections are made on the side of the vehicle (see FIG. 27A) and one detection is made for the top of the vehicle (see FIG. 28A). The 3D rectangle is initialized with a length equal to 4 m, a width of 1.5 m and a height $O_{Hm}$ given by:
[00163] $O_{Hm} = H_s - dist \cdot \tan(\theta)$

[00164] where $H_s$ is the sensor height 704, $dist$ is the distance of the detected vehicle and $\theta$ is the sensor pitch.
[00165] FIG. 27B and FIG. 28B represent detections when the vehicle is three-fourths of the way into the detection FOV. Eight side detections are apparent on FIG. 27B and one top detection is apparent on FIG. 28B. The dimensions of the rectangle are calculated as follows:
[00166] The width is not yet adjusted because the vehicle back is not yet detected.
[00167] $O_l(k) = \max(L_2 - L_1,\ O_l(k-1))$

[00168] $O_h(k) = \max(O_{Hm},\ O_h(k-1))$

[00169] where the points of a segment are clockwise angle sorted, so $L_2$ is the point with the smallest angle and $L_1$ is the segment point with the largest angle. $O_l(k)$ and $O_h(k)$ are respectively the current length and height values at time $k$.
[00170] FIG. 27C and FIG. 28C represent detections when the back of the vehicle begins to enter the detection FOV. Eight side detections and two rear detections are apparent on FIG. 27C while one detection is apparent on FIG. 28C. The dimensions of the 3D rectangle are calculated as follows:

[00171] $O_l(k) = \max(L_2 - L_1,\ O_l(k-1))$

[00172] $O_h(k) = \max(O_{Hm},\ O_h(k-1))$

[00173] $O_w(k) = \max(L_4 - L_3,\ O_w(k-1))$

[00174] As for the horizontal segment representing the side of the vehicle, the points of the vertical segment representing the rear and/or the top of the vehicle are clockwise angle sorted, so $L_4$ is the point with the smallest angle and $L_3$ is the segment point with the largest angle. $O_l(k)$, $O_h(k)$ and $O_w(k)$ are respectively the current length, height and width values at time $k$.
[00175] FIG. 27D and FIG. 28D represent detections when the back of the vehicle is fully in the detection FOV. Six side detections and four rear detections are apparent on FIG. 27D while one detection is apparent on FIG. 28D. The width $O_w$ dimension is calculated as follows:

[00176] $O_w(k) = \alpha \cdot (L_4 - L_3) + (1 - \alpha) \cdot O_w(k-1)$

[00177] where $O_w(k)$ is the current width at time $k$ and $\alpha$ is the filtering rate.
[00178] The size of the vehicle can then be determined fully.
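The object-box updates of [00163] to [00177] can be collected into one small routine; the following is a hedged sketch with assumed data structures, including an assumed test for when the rear is considered fully visible.

```python
import math

def update_object_box(box, side_seg=None, rear_seg=None,
                      sensor_height=None, dist=None, pitch_rad=None,
                      alpha=0.3):
    """Update the 3D object box (length 'l', width 'w', height 'h').

    box: dict initialized to the 4 m / 1.5 m / O_Hm defaults.
    side_seg / rear_seg: (L_small, L_large) endpoints of the detected
    horizontal (side) and vertical (rear) segments, if present.
    alpha: filtering rate used once the rear is assumed fully visible.
    """
    if sensor_height is not None:  # [00163]: O_Hm = Hs - dist * tan(theta)
        box["h"] = max(box["h"], sensor_height - dist * math.tan(pitch_rad))
    if side_seg is not None:       # [00167]/[00171]: grow the length
        l2, l1 = side_seg
        box["l"] = max(abs(l2 - l1), box["l"])
    if rear_seg is not None:       # [00173]/[00176]: width from rear segment
        l4, l3 = rear_seg
        seen = abs(l4 - l3)
        if seen >= box["w"]:       # assumed "fully visible": low-pass filter
            box["w"] = alpha * seen + (1 - alpha) * box["w"]
        else:
            box["w"] = max(seen, box["w"])
    return box

# Example: initialize the box and feed one frame's segments.
box = {"l": 4.0, "w": 1.5, "h": 0.0}
box = update_object_box(box, side_seg=(30.2, 25.6), rear_seg=(1.9, 0.1),
                        sensor_height=3.0, dist=25.0, pitch_rad=math.radians(2))
print(box)  # length grows to 4.6 m, width filtered to ~1.6 m, height ~2.1 m
```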
[00179] The segmentation algorithm 800, based on a 3D bounding box for selection of the relevant measures, is illustrated in FIG. 29. The first three steps are identical to those of FIG. 17. If step 120 finds horizontal lines, then step 120 is followed by step 121. As explained above, the points of a segment are clockwise angle sorted, with $L_2$ the smallest angle and $L_1$ the largest angle. The segment length is given by $L_2 - L_1$.
Otherwise, the next step 123 initializes the 3D bounding box with a default vehicle length. Step 121 is followed by step 122, which considers that two segments have a common corner if there is a point of intersection $P_i$ between the two segments with $|P_i - L_1|$ and $|P_i - L_4|$ less than a distance threshold. If no corner is found, step 123 initializes the 3D bounding box with default values. Otherwise, step 124 computes the 3D bounding box dimensions from the equations presented above with respect to FIG. 27C.
[00180] It is of interest to derive minimum variance bounds on estimation errors, to have an idea of the maximum knowledge on the speed measurement that can be expected and to assess the quality of the results of the proposed algorithms compared with the bounds. In time-invariant statistical models, a commonly used lower bound is the Cramér-Rao lower bound (CRLB), given by the inverse of the Fisher information matrix. The posterior Cramér-Rao bound (PCRB) can be used for estimating kinematic characteristics of the target.
[00181] A simulation was done according to the scenario shown in FIG. 30. The vehicle 130 is moving at a speed of 60 m/s along a straight line in lane 3. The PCRB was applied. As shown in FIG. 31, the tracking algorithm converges at point 903 at about $\sigma_{KF} = 0.48$ km/h after 80 samples. From point 900, it is apparent that after 16 samples $\sigma_{KF} < 3$ km/h, from point 901 that after 28 samples $\sigma_{KF} < 1.5$ km/h, and from point 902 that after 39 samples $\sigma_{KF} < 1$ km/h. Experimental tests confirmed the utility and viability of this approach.
Image processing and applications
[00182] The multipurpose traffic detection system uses a high-resolution image sensor, or more than one image sensor with lower resolution. In the latter case, the control and processing unit has to perform image stitching, combining multiple images with different FOVs and some overlapping sections in order to produce a high-resolution image. Normally, during the calibration process, the system can determine the exact overlaps between image sensors and produce seamless results by controlling and synchronizing the integration time of each image sensor and the illumination timing, and by analyzing the overlap sections. Infrared and color image sensors can be used with optical filters.
[00183] At night, visible light is required to enhance the color of the image. A NIR flash is not visible to the human eye and does not blind drivers, so it can be used at any time of the day and night.
[00184] Image sensors can use electronic shutters (global or rolling) or mechanical shutters. In the case of rolling shutters, compensation for the distortions of fast-moving objects (skew effect) can be processed based on the information about the position and the speed of the vehicle. Other image sensor controls, like gamma and gain control, can be used to improve the quality of the image in different contexts of illumination.
[00185] FIG. 32A is illustrative of an example snapshot taken by a 5 Mpixels image sensor during the day. Vehicles are at a distance of approximately 25 m and the FOV at that distance covers approximately 9 m (almost equivalent to 3 lanes).
FIGS. 32B, 32C and 32D show the quality of the image and resolution of FIG. 32A by zooming in on the three license plates.
[00186] FIG. 33A is illustrative of an example snapshot taken by the image sensor at night without any light. This image is completely dark. FIG. 33B shows the same scene with infrared light. Two vehicles can be seen but the license plates are not readable even when zooming in, as seen in FIG. 33C. The license plate acts as a retro-reflector and saturates the image sensor. FIGS. 34A and 34B use the same lighting with a lower integration time. The vehicle is less clear but the image shows parts of the license plate becoming less saturated. FIGS. 34C and 34D decrease the integration time a little more and produce a readable license plate.
[00187] One way to get a visible license plate at night together with an image of the vehicle is to process several snapshots with different integration times (Ti). For example, when the 3D detection confirms the position of a vehicle in the detection zone, a sequence of acquisition of several snapshots (e.g., 4 snapshots with Ti1 = 50 µs, Ti2 = 100 µs, Ti3 = 250 µs and Ti4 = 500 µs), each snapshot taken at a certain frame rate (e.g., every 50 ms), permits gathering the information on a specific vehicle: information from the 3D sensor, a readable license plate of the tracked vehicle and an image of the context including the photo of the vehicle. If the system captures 4 images during 150 ms, a vehicle at 150 km/h would travel 6.25 m (one snapshot every 1.5 m).
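A sketch of such a triggered bracketing sequence follows; the camera interface is hypothetical and the timings are the example values above.

```python
import time

INTEGRATION_TIMES_US = [50, 100, 250, 500]  # Ti1..Ti4 from the example
FRAME_PERIOD_S = 0.050                       # one snapshot every 50 ms

def capture_bracketed_sequence(camera, vehicle_id):
    """Capture a bracketed snapshot series once the 3D detection confirms
    a vehicle in the zone. `camera` is a hypothetical interface with
    set_integration_time_us() and grab() methods."""
    frames = []
    for ti in INTEGRATION_TIMES_US:
        camera.set_integration_time_us(ti)
        frames.append({"vehicle": vehicle_id,
                       "integration_us": ti,
                       "image": camera.grab()})
        time.sleep(FRAME_PERIOD_S)  # pace the sequence at the frame rate
    # Short exposures favor the retro-reflective plate; long exposures
    # favor the vehicle and its context.
    return frames
```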
[00188] To enhance the quality of the image, high dynamic range (HDR) imaging techniques can be used to improve the dynamic range between the lightest and darkest areas of an image. HDR notably compensates for the loss of information in saturated sections by taking multiple pictures at different integration times and using a stitching process to make a better quality image.
[00189] The system can use Automatic License Plate Recognition (ALPR), based on Optical Character Recognition (OCR) technology, to identify vehicle license plates. This vehicle identification and measurement information is digitally transmitted to the external controller, or over the network to back-office servers, which process the information and can generate traffic violation alerts.
[00190] The multipurpose traffic detection system can be used day or night, in good or bad weather conditions, and also offers the possibility of providing weather information, like the presence of fog or snowing conditions. Fog and snow have an impact on the reflection of the radiated light pulses off the protective window. In the presence of fog, the peak amplitude of the first pulse exhibits sizable time fluctuations, by a factor that may reach 2 to 3 when compared to its mean peak amplitude level. Likewise, the width of the first pulse also shows time fluctuations during these adverse weather conditions, but with a reduced factor, for example by about 10 to 50%. During snow falls, the peak amplitude of the first pulse visible in the waveforms generally shows faster time fluctuations while the fluctuations of the pulse width are less intense. Finally, it can be noted that a long-lasting change in the peak amplitude of the first pulse can simply be due to the presence of dirt or snow deposited on the exterior surface of the protective window.
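These observations suggest a simple heuristic on the first-pulse statistics. The sketch below is an illustrative reading of the described behavior, with assumed thresholds and a hypothetical baseline calibration; it is not the patent's classifier.

```python
from statistics import mean, stdev

BASELINE_AMPLITUDE = 1.0  # amplitude with a clean window, assumed calibration

def classify_window_condition(peak_amplitudes):
    """Crude weather heuristic from first-pulse peak amplitudes over time.

    peak_amplitudes: recent history of the first pulse's peak amplitude
    (reflection off the protective window). Thresholds are assumptions.
    """
    m = mean(peak_amplitudes)
    fluctuation = stdev(peak_amplitudes) / m  # relative time fluctuation
    if fluctuation > 0.5:
        return "fog or snow suspected"   # strong time fluctuations
    if m > 2.0 * BASELINE_AMPLITUDE:
        return "dirt or snow on window"  # long-lasting amplitude change
    return "clear"

print(classify_window_condition([1.0, 2.8, 1.2, 3.1, 0.9]))  # fog or snow suspected
```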
[00191] FIG. 35 shows an example image taken with infrared illumination, with the overlay (dashed lines) representing the perimeter of the 16 contiguous detection zones of the 3DOR. Apparent on FIG. 35 are high intensity spots 140 coming from sections of the vehicle having a high retro-reflectivity characteristic. Such sections include the license plate, retro-reflectors installed on the car and lighting modules that can include retro-reflectors.
An object with a retro-reflectivity characteristic reflects light back to its source with minimum scattering. The return signal can be as much as 100 times stronger than a signal coming from a surface with Lambertian reflectance. This retro-reflectivity characteristic has the same kind of impact on the 3DOR. Each 3D channel detecting a retro-reflector at a certain distance in its FOV will acquire a waveform with a high peak amplitude at the distance of the retro-reflector. The numbers at the bottom of the overlay (in dashed lines) represent the distance measured by the multipurpose traffic detection system in each channel which contains a high peak in its waveform.
Then, with good image registration between the 2D image sensor and the 3D sensor, the 2D information (spot with high intensity) can be correlated with the 3D information (high amplitude at a certain distance). This link between 2D images and 3D detection ensures a match between the identification data based on reading license plates and the measurements of position and velocity from the 3D sensor.
[00192] The license plate identification process can also be used as a second alternative to determine the speed of the vehicle, with lower accuracy but useful as a validation or confirmation. By analyzing the size of the license plate and/or of its characters on successive images, the progression of the vehicle in the detection zone can be estimated and used to confirm the measured displacement.
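A hedged sketch of this validation idea under a pinhole-camera assumption: the plate's apparent width scales inversely with distance, so two frames give a coarse displacement and hence a coarse speed. The focal length and plate width below are assumed values, not taken from the patent.

```python
PLATE_WIDTH_M = 0.52      # physical plate width, assumed (EU-style plate)
FOCAL_LENGTH_PX = 2800.0  # camera focal length in pixels, assumed

def plate_speed_kmh(width_px_1, width_px_2, dt_s):
    """Coarse speed estimate from the apparent plate width in two frames.

    Pinhole model: distance = f * W / w_px, so the change in distance
    between frames divided by dt gives a radial speed estimate, used
    only to cross-check the 3D sensor's measurement.
    """
    d1 = FOCAL_LENGTH_PX * PLATE_WIDTH_M / width_px_1
    d2 = FOCAL_LENGTH_PX * PLATE_WIDTH_M / width_px_2
    return abs(d2 - d1) / dt_s * 3.6

# Example: the plate grows from 58 px to 66 px in 150 ms -> ~73 km/h.
print(round(plate_speed_kmh(58.0, 66.0, 0.150)))
```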
The embodiments described above are intended to be exemplary only. The scope of the invention is therefore intended to be limited solely by the appended claims.
[00121] FIGS. 8A and B show examples images of a roadway captured by the 2DOR during the day. The image is overlaid with the perimeters of a set of 16 contiguous detection zones of the 3DOR. In FIG. 8A, a vehicle present in the first lane 32 would be detected by several adjacent channels at a respective detected distance between 17.4 m to 17.6 m (see the numbers at the bottom of the overlay).
In FIG. 8B, the vehicle is detected in the second lane 34 between 24.0 m to 24.4 m.
Note that the overall detection zone is wide enough to cover more than two lanes. In some situations depending on the context of the installation, some objects or even the ground can be detected by the system but can be filtered out and not be considered as an object of interest.
[00122] FIG. 9A is illustrative of a red light enforcement application installation.
Some channels detect echo back signals from the ground (see the numbers at the bottom of the overlay) but the system is able to discriminate them as static objects.
FIG. 9B is a graph showing a top view of the 3D 16 field of view of a road with 3 lanes. In a Cartesian coordinate system, if the detection system represents the origin, the horizontal direction from left to right is taken as the positive x-axis and represents the width of the 3 lanes in meters, and the vertical direction from bottom to top is taken as the positive y-axis and represents the longitudinal distance from the sensor. To facilitation installation, the installation software will indicate the beginning and the end of the detection zone by showing a detection line as seen in FIG. 9B.
Multi-vehicle simultaneous detection and tracking for position determination, speed measurement and classification [00123] FIG. 10 shows a top view of an example road facility equipped with a multipurpose traffic detection system 10. The system 10 mounted on an existing traffic infrastructure is used to illuminate a detection zone 42. In this example, the mounting height is between 1 and 10 m with a distance from the road between 1 and m. In FIG. 10, the vehicles 46 travel in lanes 43, 44 and 45 in a direction indicated by arrow A through the detection system illumination zone 42. The detection system 5 10 is used for detecting information of the rear surface of vehicles 46 coming in the illumination zone 42. The detection system 10 is based on IR LED illumination source with a multiple field-of-view detector.
[00124] In FIG. 11, the 16 fields of view N covering a section of the road are shown. In a Cartesian coordinate system, if the detection system represents the origin 49, the horizontal direction from left to right is taken as the positive x-axis 50, and the vertical direction from bottom to top is taken as the positive y-axis 51 then, each 3D detection 53 gives the distance between an object and the sensor.
[00125] FIG. 12 shows the system in an example configuration with two trigger lines 56 and 57 located at a distance from the sensor between 10 and 50 m, for example.
The two trigger lines 56 and 57 are configured by the user. Blob 55 illustrates a detectable vehicle rear. When the blob reaches the trigger line, the system returns a trigger message.
[00126] FIG. 13 arid FIG. 14 show example data for vehicle tracking in the context of traffic light enforcement. Thanks to a projection of the field-of-view of the detection system on the real 2D image, the relationship between the top view (FIGS. 13B, 13D, 13F) and the scene (FIGS. 13A, 13C, 13E) is made apparent. The 3D detections are represented by dots in the top views. In this example, a small diamond in the top views shows the estimated position of the rear of each vehicle based on the 3D
detections. In this example, the small diamond represents the middle of the rear of the vehicle. The distance of detection is indicated under each detection channel in the scene image. The amplitude of the detection is also indicated below the distance of detection. On the top view, thin lines define the limits of the tracking area and dotted lines define two trigger lines configured by the user. When entering this area, a , new vehicle is labeled with a unique identifier. In each frame, its estimated position is shown using a small diamond. As shown, the interactions between vehicle detections are managed by the tracking algorithm allowing distinguishing vehicles located in the detection area.
[00127] FIG. 15 shows the steps performed during the execution of an example tracking algorithm. At step 80, the tracking algorithm selects the reliable measurements located on the road. At step 81A, the generic Kalman Filter for tracking a variable number of objects is used. At step 82, a road user classification based on geometric features is computed. Finally, step 83 sends to each frame, a message with position, speed, class and trigger if necessary for the vehicles located in the detection zone.
[00128] FIG. 16 shows the steps performed during the execution of the tracking algorithm if the traffic light state 85 is known. Steps 80/800, 82 and 83 are unchanged. However, step 81B is different because the additional information allows working in a space-based tracking joint.
[00129] The selection of relevant measures 80 is described in FIG. 17. At step the tracking algorithm reads the available observations. At step 101, the tracking algorithm removes each detection that is not located on the road. Step 101 is followed by step 102 where the tracking algorithm recognizes lines by a feature-based approach. Step 103 eliminates the points located on lines parallel to the x-axis 50 with the aim of extracting the characteristics relating to the side(s) of vehicles and to keep only the objects having a "vehicle rear signature".
[00130] The estimation of a line based on the covariance matrix using polar coordinate 102 is illustrated in FIG. 18. This estimation is based on feature extraction.
The strength of the feature-based approach lies in its abstraction from data type, origin and amount. In this application, line segments will be considered as a basic primitive which later serves to identify and then remove the side of vehicles.
Feature extraction is divided into two sub-problems: (1) segmentation to determine which data points contribute to the line model, and (ii) fitting to give an answer as to how these points contribute.
[00131] The polar form is chosen to represent a line model:
[00132] x cos a + y sin a r [00133] where -7/ < a 5 7r is the angle between the x axis and the normal of the line, r > 0 is the perpendicular distance of the line to the origin; (x, y) is the Cartesian coordinates of a point on the line. The covariance matrix of line parameters is:
[0 17ra]
[00134] cov(r, a) ,.2 Crra [0013q PIG 19 shows a state diagram for the 3D real-time detection multi-object tracker. The core of the tracker 91A is based on a Kalman Filter in all weather and lighting conditions. The observation model 90 is illustrated in FIG. 21 which presents an example method to compute the vehide position by weighting each 3D
observation according to its height amplitude. This method permits to improve tne accuracy of the estimated position with respect to using only the x and y Cartesian positions.
[00136] Expression 301 computes the blob position as follows:
[00137] =Tin = Pn [001313] where irn is the intensity weight for the observation n, n e [1,., /V), and N
is the number of observation grouped together. Step 301 is followed by computing the observation weight depending on the intensity at step 302.
RECTIFIED SHEET (RULE 91) [00139] The function 300 normalizes the weight 1r15 according to the amplitude All of the observation Pn:
An [00140] 7t15=
I An [00141] The state evolution model 92 is represented by the classical model called speed constant. Kinematics model can be represented in a matrix form by:
[00142] Pk--1 = F. pk + G.Vk: Vk¨N (0, Qk) [00143] where Pk = s (xobs,-obs, Yobs V is the target state vector, the transition , aim', matrix which models the evolution of Pk' Qkthe covariance matrix of V I, and G
the noise matrix which is modeled by acceleration.
Ar2 1 AT 0 0 ¨
AT 0 ux2 C
[00144] F = 00 10 01 AOT -G = Qk [0 a 2 }
[00145] The equation observation can be written as:
[00146] Zk = H. pk+ Wk, Wk (0, Rk) [0014.7] Where Zk = (xobs, ,y,õ)t is the measurement vector, H the measurement sensitivity matrix, and Rkthe covariance matrix of Wk.
{1 0 0 01 2 0 0 0 1 0 0 ClobSx [00148] 11 = 0 Rk =
0 0 1 0 ,r 2 -obs [00149] The state space model 93A is based on probabilistic framework where the evolution model is supposed to be linear and the observation model is supposed to be Gaussian noise. In a 3D image, the system state encodes the information RECTIFIED SHEET (RULE 91) observed in the scene, e.g. the number of vehicles and their characteristics is xPr = (pg, li) with AT as the number of detected vehicles, where iiPT denotes the 20 position of object N at iteration k, 0,1 gives identification, age, lane and the object classification.
[00150] FIG. 20 shows a state diagram for 3D real-time detection multi-object joint tracker. The core of 91B is based on a Kalman Filter which addresses the issue of interacting targets, which cause occlusion issues. When an occlusion is present, 3D
data alone can be unreliable, and is not sufficient to detect, at each frame, the object of interest. If the algorithm uses the traffic light state 85, occlusions can be modeled with a joint state space model 0313. The multi-object joint tracker includes a multi-object interaction distance which is implemented by including an additional interaction factor in the vehicle position. The state space model 936 encodes the observations detected in the scene, e.g. the number of vehicles, the traffic light state and the interaction between the vehicles located in the same lane by concatenating their configurations into a single super-state vector such as: Xk = (0k,4,...,4) with Ok the size of state space at iteration k and 4 = (e, In the state vector associated with the object N, where it denotes the 2D position of the object N at iteration k, gives identification, age, lane, class, traffic light state and the object interaction.
[00151] Before integrating measures into the filter, a selection is made by a two-step procedure shown in FIGS. 22 and 23 : first at step 400 validation gate, then at step 401A/B data association. The validation gate is the ellipsoid of size N, (dimension of vector) defined such as:
[00152] 19`.5-1. V y [00153] where Pt Zk H75-1, is the innovation, S the covariance matrix of the predicted value of the measurement vector and y is obtained from the chi-square tables for N, degree of freedom. This threshold represents the probability that the (true) measurement will fall in the gate. Step 400 is followed by step 401NB
which RECTIFIED SHEET (RULE 91) makes the matching between a blob and a hypothesis. Then, (i) consider all entries as new blobs; (ii) find the corresponding entries to each blob by considering gating intervals around the predicted position of each hypothesis, (iii) choose the nearest entry of each interval as the corresponding final observation of each blob. At step 402, the tracking algorithm uses a track management module in order to change the number of hypothesis. This definition is: (i) if, considering the existing assumption, there occurs an observation that cannot be explained, the track management module proposes a new observation; (ii) if an assumption does not find any observation after 500 ms, the track management module proposes to suppress the assumption. In this case, of course, an evolution model helps to guide state space exploration of the Kalman filter algorithm with a prediction of the state. Finally, step 403 uses a Kalman framework to estimate the final position of the vehicle.
[00154] In a 3D image, the system state encodes the information observed in the scene, the number of vehicles and their characteristics is Xk = (0k, x, 41) with Ok is the size of state space (number of detected vehicles) at iteration k and xg = (74/, the state vector associated with object N, where plcv denotes the 2D position of object N at iteration k, gives identification, age, lane and the object classification. Step 90 and 92 are unchanged.
[001551 FIG. 24 shows the steps performed during the execution of the classification algorithm. At step 300, the algorithm checks if a line is detected in the 3D
image. If a line is detected, step 500 is followed by step 501 which computes vehicle length.
Vehicle length is defined as the overall length of the vehicle (including attached trailers) from the front to the rear. In order to calculate the length, two different positions are used: K0 and X1.. X0 is given by the position of the first detected line and Xi is given by the trigger line 1 (for example). Once the speed has been estimated, the vehicle length i can be determined such as:
RECTIFIED SHEET (RULE 91) [00156] 1 [m] = s[rnIS) * (Xi (t)[s] - X0(t)[s]) - (Xi (x)[mj - X0(x)[m]) Seg [m] T HEm}
Where s is the vehicle speed, Seg is the length of the detected line and TH is a calibration threshold determined from a large dataset.
[001571 If the line is not detected at step 500, step 500 is followed by step 502 which computes the vehicle height. The vehicle height is estimated during the entry into the sensor field of view. As shown in FIG. 26, for a known configuration of the detection system, there is a direct geometric relationship between the height of a vehicle 601 and the detection distance 600. The accuracy 602 is dependent on the half-size of the vertical FOV angle 603. Height measurement is validated if the accuracy is lower to than a threshold.
[00158] Finally, step 502 is followed by step 503 which computes the vehicle width.
Over the vehicle blob, let (3).1, x) be leftmost pixel and (yr, x) be the rightmost pixel in the vehicle blob for a given x Then the width W of the object is determined from the following formula:
[00159] WlYrYtI ¨
[00160] FIGS. 25A, 25B and 25C shows a result of vehicle classification based on the classification algorithm. For example, in FIG. 25A, the classification result is a heavy vehicle; in FIG. 25B, it is a four-wheeled lightweight vehicle and in FIG. 25C, it is a two-wheeled lightweight vehicle. The information from the detection system is flexible and can be adapted to different schemes of classification. FIG. 25 illustrates graphically the basic elements of the concept of an object-box approach which is detailed below and in FIG 27 and FIG. 28.
[00161] The object-box approach is mainly intended for vehicles because this approach uses the vehicle geometry in a LEDDAR image. The vehicles are represented by a 3D rectangular box of detected length, width and height. The size of the rectangular box will vary depending on the detections in the FOV.
FIGS. 27A, 27B, 270 and 270 show top view frames of a vehicle detected by the RECTIFIED SHEET (RULE 91) LEDDAR sensor. FIGS. 28A, 28B, 28C and 28D show corresponding side view frames of the vehicle of FIG. 27, [00162] FIGS. 27A, 27B, 27C, 27D and FIGS. 28A, 28B, 28C, 28D show the changing 3D size of the rectangle 701 for four example positions of a vehicle 702 in the 3D sensor FOV 703. When a vehicle 702 enters the 3D sensor FOV 703, two detections are made on the side of the vehicle (see FIG. 27A) and one detection is made for the top of the vehicle (see FIG. 2BA). The 3D rectangle is initialized with a length equal to 4 m, a width of 1.5 m and a height 011n, given by:
[00163] Om, = Hs ¨ dist * tan(0) [00164] Where 115 is the sensor height 704, dist is the distance of the detected vehide and 0 is sensor pitch.
[00165] FIG. 27B and FIG. 238 represent detections when the vehicle is three-fourths of the way in the detection FOV. Eight side detections are apparent on FIG. 27B and one top detection is apparent on FIG. 28B. The dimensions of the rectangle are calculated as follows:
[00166] The width is not yet adjusted because the vehicle back is not yet detected.
[00167] 01(k) = rriax(L2 ¨ 01(k ¨ 1)) [00168] Oh(k) = max(OHm Oft (k 1)) [00169] Where the points of a segment are clockwise angle sorted so L2 is the point with the smallest angle and L, is the segment-point with the largest angle.
Or(k)and Oh(k) are respectively the current length and height value at time k.
[00170] FIG. 27C and FIG. 28C represent detections when the back of the vehicle begins to enter in the detection FOV. Eight side detections and two rear detections RECTIFIED SHEET (RULE 91) are apparent on FIG. 27C while one detection is apparent on FIG. 28C. The dimensions of the 30 rectangle are calculated as follows:
[00171] 01(k) = max(L2 ¨ L1, 01(k ¨ 1)) [00172] Oh(k) = max(011m, Oh(k ¨ 1)) [00173] Ow (k) = max(L4 ¨ L3 , Ow(k ¨ 1)) [00174] As for the horizontal segment representing the side of the vehicle, the points of the vertical segment representing the rear and/or the top of the vehicle are clockwise angle sorted, so L4 is the point with the smallest angle and L3 is the segment-point with the largest angle.0/(k), Oh(k)and 0,(k) are respectively the current length, height and width value at time k.
[00175] FIG. 27D arid FIG. 28D represent detections when the back of the vehicle is fully in the detection FOV. Six side detections and four rear detections are apparent on FIG. 27D while one detection is apparent on FIG. 28D. The width Dim dimension is calculated as follows:
[00176] Otm(k) = a,g (L4 ¨ L3) (1 a) * Ohn(k ¨ 1) [00177] Where 017,(k) is the current width at time k and a is the filtering rate.
[00178] The size of the vehicle can then be fully determined.
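For clarity, here is a short sketch of the object-box initialization and update rules of paragraphs [00162] to [00177]; treating the angle-sorted segment endpoints L1 to L4 as scalar positions along the side and rear segments is a simplification, and the default α is an assumed filtering rate.

```python
import math

# A sketch of the object-box rules in paragraphs [00162]-[00177].
class ObjectBox3D:
    def __init__(self, hs: float, pitch_rad: float, dist: float):
        # Default box at first detection: 4 m long, 1.5 m wide; height from
        # sensor geometry: O_Hm = H_s - dist * tan(theta).
        self.length = 4.0
        self.width = 1.5
        self.height = hs - dist * math.tan(pitch_rad)

    def update_side(self, l1: float, l2: float, o_hm: float) -> None:
        # O_l(k) = max(L2 - L1, O_l(k-1)); O_h(k) = max(O_Hm, O_h(k-1))
        self.length = max(l2 - l1, self.length)
        self.height = max(o_hm, self.height)

    def update_rear(self, l3: float, l4: float) -> None:
        # While the rear enters the FOV: O_w(k) = max(L4 - L3, O_w(k-1))
        self.width = max(l4 - l3, self.width)

    def refine_width(self, l3: float, l4: float, alpha: float = 0.3) -> None:
        # Rear fully visible: O_w(k) = alpha*(L4 - L3) + (1 - alpha)*O_w(k-1)
        self.width = alpha * (l4 - l3) + (1 - alpha) * self.width
```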
[00179] The segmentation algorithm 800 based on a 3D bounding box for selection of the relevant measures is illustrated in FIG. 29. The first three steps are identical to those of FIG. 17. If step 120 finds horizontal lines, then step 120 is followed by step 121. As explained above, the points of a segment are clockwise angle sorted, with L2 the smallest angle and L1 the largest angle. The segment length is given by L2 − L1.
Otherwise, the next step 123 initializes the 3D bounding box with a default vehicle length. Step 121 is followed by step 122, which considers that two segments have a common corner if there is a point of intersection Pi between the two segments with |Pi − L1| and |Pi − L4| both less than a distance threshold. If no corner is found, step 123 initializes the 3D bounding box with default values. Otherwise, step 124 computes the 3D bounding box dimensions from the equations presented above with respect to FIG. 27C.
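The corner test of step 122 can be sketched as follows, assuming 2D points for the segment endpoints; the 0.5 m threshold is an illustrative assumption.

```python
import numpy as np

# A sketch of the corner test in step 122: two fitted segments share a corner
# if their intersection point Pi lies within a distance threshold of the
# nearest endpoint of each segment (|Pi - L1| and |Pi - L4| small).

def line_intersection(p1, p2, p3, p4):
    """Intersection of line (p1,p2) with line (p3,p4); None if parallel."""
    d1, d2 = p2 - p1, p4 - p3
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    return p1 + t * d1

def has_common_corner(side_seg, rear_seg, threshold: float = 0.5) -> bool:
    """side_seg = (L2, L1), rear_seg = (L4, L3) as 2D numpy points."""
    pi = line_intersection(side_seg[0], side_seg[1], rear_seg[0], rear_seg[1])
    if pi is None:
        return False
    l1, l4 = side_seg[1], rear_seg[0]
    return (np.linalg.norm(pi - l1) < threshold and
            np.linalg.norm(pi - l4) < threshold)
```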
[00180] It is of interest to derive minimum variance bounds on estimation errors to have an idea of the maximum knowledge on the speed measurement that can be expected, and to assess the quality of the results of the proposed algorithms compared with the bounds. In time-invariant statistical models, a commonly used lower bound is the Cramér-Rao Lower Bound (CRLB), given by the inverse of the Fisher information matrix. For time-varying models such as target tracking, its sequential counterpart, the posterior Cramér-Rao bound (PCRB), can be used for bounding estimates of the kinematic characteristics of the target.
[00181] A simulation was done according to the scenario shown in FIG. 30. The vehicle 130 is moving at a speed of 60 m/s along a straight line in lane 3.
The PCRB was applied. As shown in FIG. 31, the tracking algorithm converges at point 903 to about σ_KF = 0.48 km/h after 80 samples. From point 900, it is apparent that after 16 samples, σ_KF < 3 km/h; from point 901, that after 28 samples, σ_KF < 1.5 km/h; and from point 902, that after 39 samples, σ_KF < 1 km/h. Experimental tests confirmed the utility and viability of this approach.
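In the linear-Gaussian case the PCRB reduces to a recursion on the Fisher information matrix, J(k+1) = (F J(k)^-1 F' + Q)^-1 + H' R^-1 H. A minimal sketch follows for a constant-velocity target observed in position; the sample period, noise levels and prior are assumptions chosen only to mimic the qualitative convergence of FIG. 31, not the patent's simulation parameters.

```python
import numpy as np

# PCRB recursion for a constant-velocity target with position measurements.
# All numeric parameters below are illustrative assumptions.
dt = 0.05                                   # sample period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition [pos, vel]
Q = np.diag([1e-4, 1e-3])                   # process noise covariance
H = np.array([[1.0, 0.0]])                  # position-only measurement
R = np.array([[0.01]])                      # measurement noise, (10 cm std)^2

J = np.linalg.inv(np.diag([1.0, 100.0]))    # vague prior information
for k in range(1, 81):
    P_pred = F @ np.linalg.inv(J) @ F.T + Q         # predicted covariance bound
    J = np.linalg.inv(P_pred) + H.T @ np.linalg.inv(R) @ H
    sigma_v_kmh = 3.6 * np.sqrt(np.linalg.inv(J)[1, 1])
    print(f"sample {k:2d}: speed-std bound {sigma_v_kmh:.2f} km/h")
```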
Image processing and applications
[00182] The multipurpose traffic detection system uses a high-resolution image sensor, or more than one image sensor with lower resolution. In the latter case, the control and processing unit has to perform image stitching, combining multiple images with different FOVs and some overlapping sections in order to produce a single high-resolution image. Normally, during the calibration process, the system can determine the exact overlaps between image sensors and produce seamless results by controlling and synchronizing the integration time of each image sensor and the illumination timing, and by analyzing the overlap sections. Infrared and color image sensors can be used with optical filters.
[00183] At night, visible light is required to enhance the color of the image. A NIR flash is not visible to the human eye and does not blind drivers, so it can be used at any time of the day or night.
[00184] Image sensors can use electronic shutters (global or rolling) or mechanical shutters. In the case of rolling shutters, compensation for the distortion of fast-moving objects (skew effect) can be applied based on the position and speed information of the vehicle. Other image sensor controls, such as gamma and gain control, can be used to improve the quality of the image in different illumination contexts.
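A minimal sketch of such skew compensation follows, assuming a known per-row readout time and a vehicle speed supplied by the 3D sensor; the pixels-per-metre scale is an assumed calibration value.

```python
import numpy as np

# Rolling-shutter deskew sketch: each image row is read out slightly later
# than the previous one, so a vehicle moving across the frame is sheared.
# Shifting row r back by speed * (r * row_time) undoes the shear.

def deskew(image: np.ndarray, speed_mps: float, row_time_s: float,
           px_per_m: float) -> np.ndarray:
    """Shift each row horizontally to undo motion skew of a rolling shutter."""
    out = np.empty_like(image)
    for r in range(image.shape[0]):
        shift_px = int(round(speed_mps * r * row_time_s * px_per_m))
        out[r] = np.roll(image[r], -shift_px, axis=0)
    return out
```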
[00185] FIG. 32A is illustrative of an example snapshot taken by a 5 Mpixel image sensor during the day. Vehicles are at a distance of approximately 25 m and the FOV at that distance covers approximately 9 m (almost equivalent to 3 lanes). FIGS. 32B, 32C and 32D show the quality of the image and the resolution of FIG. 32A by zooming in on the three license plates.
[00186] FIG. 33A is illustrative of an example snapshot taken by the image sensor at night without any light. This image is completely dark. FIG. 33B shows the same scene with infrared light. Two vehicles can be seen but the license plates are not readable, even when zooming in, as seen in FIG. 33C. The license plate acts as a retro-reflector and saturates the image sensor. FIGS. 34A and 34B use the same lighting with a lower integration time. The vehicle is less clear but the image shows part of the license plate becoming less saturated. FIGS. 34C and 34D decrease the integration time a little more and produce a readable license plate.
[00187] One way to get a visible license plate at night, together with an image of the vehicle, is to process several snapshots with different integration times (Ti). For example, when the 3D detection confirms the position of a vehicle in the detection zone, a sequence of several snapshots (e.g., 4 snapshots with Ti1 = 50 µs, Ti2 = 100 µs, Ti3 = 250 µs and Ti4 = 500 µs), each taken at a certain frame rate (e.g., one every 50 ms), will provide the information on a specific vehicle: information from the 3D sensor, a readable license plate of the tracked vehicle and an image of the context including a photo of the vehicle. If the system captures 4 images during 150 ms, a vehicle at 150 km/h would travel 6.25 m (roughly one snapshot every 1.5 m).
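A minimal sketch of this capture sequence follows; `camera.capture` and `vehicle_confirmed` are hypothetical interfaces standing in for the real sensor control, and only the integration times and 50 ms spacing come from the example above.

```python
import time

INTEGRATION_TIMES_US = [50, 100, 250, 500]   # Ti1..Ti4 from the example
FRAME_PERIOD_S = 0.050                       # one snapshot every 50 ms

def capture_plate_sequence(camera, vehicle_confirmed) -> list:
    """Grab one snapshot per integration time once the 3D sensor confirms a vehicle."""
    frames = []
    if vehicle_confirmed():                  # gate the sequence on 3D detection
        for ti in INTEGRATION_TIMES_US:
            frames.append(camera.capture(integration_time_us=ti))
            time.sleep(FRAME_PERIOD_S)
    return frames
```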
[00188] To enhance the quality of the image, high dynamic range (HDR) imaging techniques can be used to improve the dynamic range between the lightest and darkest areas of an image. HDR notably compensates for the loss of information in saturated sections by taking multiple pictures at different integration times and using a stitching process to produce a better quality image.
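As a simplified stand-in for a full HDR pipeline, the sketch below fuses the multi-exposure snapshots by down-weighting saturated and near-black pixels; the hat-shaped weight is a common exposure-fusion heuristic, not the patented method.

```python
import numpy as np

def fuse_exposures(images, times_us):
    """images: float arrays scaled to [0, 1]; times_us: integration times (us)."""
    acc = np.zeros_like(images[0])
    wsum = np.zeros_like(images[0])
    for img, t in zip(images, times_us):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-range pixels most
        acc += w * (img / t)                # exposure-normalized radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-6)     # weighted radiance estimate
```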
[00189] The system can use Automatic License Plate Recognition (ALPR), based on Optical Character Recognition (OCR) technology, to identify vehicle license plates.
This vehicle identification and measurement information is digitally transmitted to the external controller, or over the network to back-office servers, which process the information and can generate traffic violation alerts.
[00190] The multipurpose traffic detection system can be used day or night, in good or bad weather conditions, and also offers the possibility of providing weather information such as the presence of fog or snowing conditions. Fog and snow have an impact on the reflection of the radiated light pulses off the protective window. In the presence of fog, the peak amplitude of the first pulse exhibits sizable time fluctuations, by a factor that may reach 2 to 3 when compared to its mean peak amplitude level. Likewise, the width of the first pulse also shows time fluctuations during these adverse weather conditions, but with a reduced factor, for example by about 10 to 50%. During snow falls, the peak amplitude of the first pulse visible in the waveforms generally shows faster time fluctuations while the fluctuations of the pulse width are less intense. Finally, it can be noted that a long-lasting change in the peak amplitude of the first pulse can simply be due to the presence of dirt or snow deposited on the exterior surface of the protective window.
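One way to turn these observations into a heuristic is sketched below: monitor the first-pulse peak amplitude over a sliding window and flag fog-like fluctuation or a persistent drop. The window length, thresholds and clear-weather baseline are assumptions for illustration.

```python
from collections import deque

# A weather heuristic sketch based on paragraph [00190]: fog produces strong
# swings (factor ~2-3) in the first-pulse peak amplitude, while dirt or snow
# on the window produces a long-lasting amplitude drop.

class WeatherMonitor:
    def __init__(self, baseline_peak: float, window: int = 200):
        self.baseline = baseline_peak      # clear-weather first-pulse amplitude
        self.peaks = deque(maxlen=window)

    def update(self, first_pulse_peak: float) -> str:
        self.peaks.append(first_pulse_peak)
        if len(self.peaks) < self.peaks.maxlen:
            return "unknown"
        hi, lo = max(self.peaks), min(self.peaks)
        mean = sum(self.peaks) / len(self.peaks)
        if hi / max(lo, 1e-9) > 2.0:       # strong amplitude swings: fog-like
            return "fog suspected"
        if mean < 0.5 * self.baseline:     # long-lasting drop: dirt/snow on window
            return "window obstructed"
        return "clear"
```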
[00191] FIG. 35 shows an example image taken with infrared illumination, with the overlay (dashed lines) representing the perimeter of the 16 contiguous detection zones of the 3DOR. Apparent in FIG. 35 are high-intensity spots 140 coming from sections of the vehicle having a high retro-reflectivity characteristic. Such sections include the license plate, retro-reflectors installed on the car and lighting modules that can include retro-reflectors.
An object with a retro-reflectivity characteristic reflects light back to its source with minimum scattering. The return signal can be as much as 100 times stronger than a signal coming from a surface with Lambertian reflectance. This retro-reflectivity characteristic has the same kind of impact on the 3DOR: each 3D channel detecting a retro-reflector at a certain distance in its FOV will acquire a waveform with a high peak amplitude at the distance of the retro-reflector. The numbers at the bottom of the overlay (in dashed lines) represent the distance measured by the multipurpose traffic detection system in each channel which contains a high peak in its waveform.
Then, with good image registration between the 2D image sensor and the 3D sensor, the 2D information (spot with high intensity) can be correlated with the 3D information (high amplitude at a certain distance). This link between 2D images and 3D detection ensures a match between the identification data based on reading license plates and the measurements of position and velocity from the 3D sensor.
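A sketch of this 2D/3D correlation follows; `channel_to_pixel_box`, which maps a 3D channel and range to an image region, stands in for the registration obtained at calibration time and is a hypothetical helper, as are the other names.

```python
# Match high-intensity 2D image spots (retro-reflective plates) to the 3D
# channels whose waveforms contain a high-amplitude peak at a known distance.

def match_plate_to_range(spots_2d, channel_peaks, channel_to_pixel_box):
    """
    spots_2d: list of (x, y) centroids of high-intensity image spots.
    channel_peaks: dict {channel_index: distance_m} for channels whose
                   waveform contains a high (retro-reflective) peak.
    Returns (spot, distance_m) pairs linking plate pixels to 3D range.
    """
    matches = []
    for ch, dist in channel_peaks.items():
        x0, y0, x1, y1 = channel_to_pixel_box(ch, dist)  # calibrated mapping
        for (sx, sy) in spots_2d:
            if x0 <= sx <= x1 and y0 <= sy <= y1:
                matches.append(((sx, sy), dist))
    return matches
```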
[00192] The license plate identification process can also be used as a second alternative to determine the speed of the vehicle, with lower accuracy but useful as a validation or confirmation. By analyzing the size of the license plate and/or its characters on successive images, the progression of the vehicle in the detection zone can be estimated and used to confirm the measured displacement.
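Under a pinhole-camera assumption, the apparent plate width scales as f·W/Z, so range and hence speed can be inferred from plate pixel widths on successive frames; the plate width and focal length below are assumed example values, not system parameters.

```python
# Plate-size speed check sketch: invert the pinhole relation w_px = f * W / Z
# to recover range Z in two frames, then difference over the frame interval.

def speed_from_plate_width(w_px_1: float, w_px_2: float, dt_s: float,
                           plate_width_m: float = 0.52,
                           focal_px: float = 3500.0) -> float:
    """Estimate closing speed (m/s) from plate pixel widths in two frames."""
    z1 = focal_px * plate_width_m / w_px_1   # range at frame 1
    z2 = focal_px * plate_width_m / w_px_2   # range at frame 2
    return (z1 - z2) / dt_s                  # positive if vehicle approaches
```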
The embodiments described above are intended to be exemplary only. The scope of the invention is therefore intended to be limited solely by the appended claims.
Claims (143)
1. A vehicle detection system, comprising:
an optical emitter configured to emit illumination signals for illuminating a field of illumination repeatedly over time, each illumination signal illuminating a portion of the field of illumination, the field of illumination including a moving vehicle;
an optical receiver configured for capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the moving vehicle as a distance between the moving vehicle and the optical emitter changes, the optical receiver generating signal waveforms from the optical return signals; and a data processing device configured for estimating a length, a width and a volume of the moving vehicle from the optical return signals, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
2. The vehicle detection system of claim 1, wherein the optical receiver is a multi-channel optical receiver having a plurality of optical detection channels and configured to produce a signal waveform for each of the optical detection channels.
3. The vehicle detection system of claim 1, wherein the data processing device is configured to estimate a height of the vehicle.
4. The vehicle detection system of any one of claims 1 to 3, wherein the data processing device is configured to identify a corner point of the vehicle less than a threshold distance from points on both of the first and second lines.
5. The vehicle detection system of claim 4, wherein the data processing device is configured to define a three-dimensional bounding box corresponding to the vehicle based on detection of corners.
6. The vehicle detection system of claim 5, wherein the three-dimensional bounding box represents an estimate of bounding dimensions of the vehicle.
7. The vehicle detection system of claim 6, wherein the data processing device is further configured to refine the estimate of the bounding dimensions as the optical return signals are produced by reflection of the illumination signals from an increasing number of sides of the vehicle.
8. The vehicle detection system of any one of claims 1 to 7, wherein the optical return signals include optical return signals produced by reflection of the illumination signals from a complete side of the vehicle, and wherein the data processing device is configured to determine dimensions of a three-dimensional bounding box corresponding to the vehicle based at least on full-waveform signal processing of the signal waveforms from the optical return signals produced by reflection of the illumination signals from a complete side of the vehicle.
9. The vehicle detection system of any one of claims 1 to 8, wherein the distance between the vehicle and the optical emitter changes due to relative movement between the optical emitter and the vehicle.
10. The vehicle detection system of any one of claims 1 to 9, wherein the vehicle is assigned a classification based on a dimension of the vehicle.
11. The vehicle detection system of claim 10, wherein the classification is one of a motorcycle, a passenger car, a pickup, a panel, a van, a bus, a two-axle truck, a three-axle truck, a four-axle truck, a five-axle truck, a six-axle truck, and a seven or more axle truck.
12. The vehicle detection system of claim 10, wherein the classification is based on information related to a characteristic of the vehicle.
13. The vehicle detection system of any one of claims 1 to 12, wherein the data processing device is configured to estimate a distance from the optical emitter to the vehicle, based at least in part on a dimension of the vehicle.
14. The vehicle detection system of any one of claims 1 to 13, wherein the data processing device is configured to estimate a position of the vehicle, based at least in part on a dimension of the vehicle.
15. The vehicle detection system of any one of claims 1 to 14, wherein the data processing device is configured to estimate a velocity of the vehicle, based at least in part on a dimension of the vehicle.
16. The vehicle detection system of any one of claims 1 to 15, wherein the data processing device triggers an event based at least in part on a dimension of the vehicle.
17. A vehicle detection system, comprising:
a laser configured to emit illumination signals for illuminating a field of illumination repeatedly over time, the field of illumination including a target vehicle having a side, a top and a rear;
an optical receiver configured for capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the vehicle as a distance between the vehicle and the laser changes, the optical receiver generating signal waveforms from the optical return signals; and a data processing device configured for estimating a length, a width and a volume of the vehicle, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
18. The vehicle detection system of claim 17, wherein the optical receiver is a multi-channel optical receiver having a plurality of optical detection channels and configured to produce a signal waveform for each of the optical detection channels.
19. The vehicle detection system of claim 17, wherein the data processing device is configured to estimate a height of the vehicle.
20. The vehicle detection system of any one of claims 17 to 19, wherein the data processing device is configured to identify a corner point of the vehicle less than a threshold distance from points on both of the first and second lines.
21. The vehicle detection system of claim 20, wherein the data processing device is configured to define a three-dimensional bounding box corresponding to the vehicle based on detection of corners.
22. The vehicle detection system of claim 21, wherein the three-dimensional bounding box represents an estimate of bounding dimensions of the vehicle.
23. The vehicle detection system of any one of claims 17 to 22, wherein the distance between the vehicle and the laser changes due to relative movement between the laser and the vehicle.
24. The vehicle detection system of any one of claims 17 to 23, wherein the data processing device is further configured to assign a classification to the vehicle.
25. The vehicle detection system of claim 24, wherein the classification is one of a motorcycle, a passenger car, a pickup, a panel, a van, a bus, a two-axle truck, a three-axle truck, a four-axle truck, a five-axle truck, a six-axle truck, and a seven or more axle truck.
26. The vehicle detection system of claim 24, wherein the classification is based on information related to a characteristic of the vehicle.
27. The vehicle detection system of any one of claims 17 to 26, wherein the data processing device is configured to estimate a distance from the laser to the vehicle, based at least in part on the estimate of the at least one of a length, a width and a height of the vehicle.
28. The vehicle detection system of any one of claims 17 to 27, wherein the data processing device is configured to estimate a position of the vehicle, based at least in part on the estimate of the at least one of a length, a width and a height of the vehicle.
29. The vehicle detection system of any one of claims 18 to 28, wherein the data processing device is configured to estimate a velocity of the vehicle, based at least in part on the estimate of the length, or the width of the vehicle.
30. The vehicle detection system of any one of claims 18 to 29, wherein the data processing device triggers an event based at least in part on the estimate of the length, or the width of the vehicle.
31. A method for vehicle detection, comprising:
emitting illumination signals from an optical emitter for illuminating a field of illumination repeatedly over time, the field of illumination including a moving vehicle, each illumination signal illuminating a portion of the field of illumination;
capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the moving vehicle as a distance between the moving vehicle and the optical emitter changes; and estimating a length, a width and a volume of the moving vehicle from the optical return signals, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
32. The method of claim 31, wherein generating signal waveforms comprises producing signal waveforms for multiple optical detection channels.
33. The method of claim 31 or 32, wherein the geometric features comprise at least one of length, width and height of the moving vehicle.
34. The method of any one of claims 31 to 33, comprising identifying a corner of the moving vehicle less than a threshold distance from points on both of the first and second lines.
35. The method of claim 34, comprising defining a three-dimensional bounding box corresponding to the moving vehicle based on detection of corners.
36. The method of claim 35, wherein the three-dimensional bounding box represents an estimate of bounding dimensions of the moving vehicle.
37. The method of claim 36, comprising refining the estimate of the bounding dimensions as the optical return signals are produced by reflection of the illumination signals from an increasing number of sides of the moving vehicle.
38. The method of any one of claims 31 to 37, wherein the optical return signals include optical return signals produced by reflection of the illumination signals from a complete side of the moving vehicle, and wherein estimating information related to a size of the moving vehicle comprises determining dimensions of a three-dimensional bounding box corresponding to the moving vehicle based at least on full-waveform signal processing of the signal waveforms from the optical return signals produced by reflection of the illumination signals from a complete side of the moving vehicle.
39. The method of any one of claims 31 to 38, wherein the distance between the moving vehicle and the optical emitter changes due to relative movement between the optical emitter and the moving vehicle.
40. The method of any one of claims 31 to 39, wherein the moving vehicle is assigned a classification based on a dimension of the moving vehicle.
41. The method of claim 40, wherein the classification is one of a motorcycle, a passenger car, a pickup, a panel, a van, a bus, a two-axle truck, a three-axle truck, a four-axle truck, a five-axle truck, a six-axle truck, and a seven or more axle truck.
42. The method of claim 41, wherein the classification is based on information related to a characteristic of the moving vehicle.
43. The method of any one of claims 31 to 42, comprising estimating a distance from the optical emitter to the moving vehicle, based at least in part on a dimension of the moving vehicle.
44. The method of any one of claims 31 to 43, comprising estimating a position of the moving vehicle, based at least in part on the estimated length or width of the moving vehicle.
45. The method of any one of claims 31 to 44, comprising estimating a velocity of the moving vehicle, based at least in part on the estimated length or width of the moving vehicle.
46. The method of any one of claims 31 to 45, wherein the data processing device triggers an event based at least in part on the estimated length or width of the moving vehicle.
47. A method for vehicle detection, comprising:
emitting illumination signals from a laser emitter for illuminating a field of illumination repeatedly over time, the field of illumination including a target vehicle having a side, a top and a rear;
capturing a series of vehicle measurements from optical return signals produced by reflection of the illumination signals from the vehicle as a distance between the vehicle and the laser emitter changes; and estimating a length, a width and a volume of the vehicle from the optical return signals, wherein estimating a length comprises fitting a first line to a first subset of the vehicle measurements, and wherein estimating a width comprises fitting a second line to a second subset of the vehicle measurements.
48. The method of claim 47, wherein generating signal waveforms comprises producing a signal waveform for each of multiple optical detection channels.
49. The method of claim 47 or claim 48, wherein estimating the volume of the vehicle comprises identifying a corner point of the vehicle less than a threshold distance from points on both of the first and second lines.
50. The method of claim 49, wherein estimating the volume of the vehicle comprises defining a three-dimensional bounding box corresponding to the vehicle based on detection of corners.
51. The method of claim 50, wherein the three-dimensional bounding box represents an estimate of bounding dimensions of the vehicle.
52. The method of any one of claims 46 to 51, wherein the distance between the vehicle and the laser emitter changes due to relative movement between the laser emitter and the vehicle.
53. The method of any one of claims 46 to 52, wherein the vehicle is assigned a classification based on the estimate of at least one of a length, a width and a height of the vehicle.
54. The method of claim 53, wherein the classification is one of a motorcycle, a passenger car, a pickup, a panel, a van, a bus, a two-axle truck, a three-axle truck, a four-axle truck, a five-axle truck, a six-axle truck, and a seven or more axle truck.
55. The method of claim 54, wherein the classification is based on information related to a characteristic of the vehicle.
56. The method of any one of claims 46 to 55, comprising estimating a distance from the laser emitter to the vehicle, based at least in part on the estimate of the at least one of a length, a width and a height of the vehicle.
57. The method of any one of claims 46 to 56, comprising estimating a position of the vehicle, based at least in part on the estimate of the at least one of a length, a width and a height of the vehicle.
58. The method of any one of claims 46 to 57, comprising estimating a velocity of the vehicle, based at least in part on the estimate of the at least one of a length, a width and a height of the vehicle.
59. The method of any one of claims 46 to 58, wherein the data processing device triggers an event based at least in part on the estimate of the at least one of a length, a width and a height of the vehicle.
60. A vehicle-mountable data processing device for performing LIDAR, including:
- an input configured to capture a series of vehicle measurements from signal waveforms generated from optical return signals produced by reflection of laser illumination signals that repeatedly illuminate a field of illumination over time, the field of illumination including a moving vehicle, the optical return signals being produced by reflection of the laser illumination signals from multiple sides of the moving vehicle as a distance between the vehicle and the vehicle-mountable data processing device changes;
- at least one processor configured for estimating a length, a width and a volume of the moving vehicle from the optical return signals, wherein estimating comprises plotting a line by feature extraction using the vehicle measurements.
61. The vehicle-mountable data processing device of claim 60, further comprising a multi-channel optical receiver having a plurality of optical detection channels and configured to produce one of the signal waveforms for each of the optical detection channels.
62. The vehicle-mountable data processing device of claim 60 or claim 61, wherein estimating comprises fitting a first line to a first subset of the vehicle measurements and fitting a second line to a second subset of the vehicle measurements, wherein the at least one processor is further configured to identify a corner point of the vehicle less than a threshold distance from both of the first and second lines.
63. The vehicle-mountable data processing device of claim 62, wherein the at least one processor is further configured to define a three-dimensional bounding box corresponding to the vehicle based on detection of corners.
64. The vehicle-mountable data processing device of claim 63, wherein the three-dimensional bounding box represents an estimate of bounding dimensions of the vehicle.
65. The vehicle-mountable data processing device of claim 64, wherein the at least one processor is further configured to refine the estimate of the bounding dimensions as the optical return signals are produced by reflection of the laser illumination signals from an increasing number of sides of the vehicle.
66. The vehicle-mountable data processing device of any one of claims 60 to 65, wherein the optical return signals include optical return signals produced by reflection of the laser illumination signals from a complete side of the vehicle, and wherein the at least one processor is further configured to determine dimensions of a three-dimensional bounding box corresponding to the vehicle based at least on full-waveform signal processing of the signal waveforms from the optical return signals produced by reflection of the laser illumination signals from a complete side of the vehicle.
67. The vehicle-mountable data processing device of any one of claims 60 to 66, further comprising an optical emitter, wherein the distance between the vehicle and the optical emitter changes due to relative movement between the optical emitter and the vehicle.
68. The vehicle-mountable data processing device of any one of claims 60 to 67, wherein the vehicle is assigned a classification based on a dimension of the vehicle.
69. The vehicle-mountable data processing device of claim 68, wherein the classification is one of a motorcycle, a passenger car, a pickup, a panel, a van, a bus, a two-axle truck, a three-axle truck, a four-axle truck, a five-axle truck, a six-axle truck, and a seven or more axle truck.
70. The vehicle-mountable data processing device of claim 68, wherein the classification is based on information related to a characteristic of the vehicle.
71. The vehicle-mountable data processing device of claim 67, wherein the at least one processor is further configured to estimate the distance from the optical emitter to the vehicle, based at least in part on a dimension of the vehicle.
72. The vehicle-mountable data processing device of any one of claims 60 to 71, wherein the at least one processor is configured to estimate a position of the vehicle, based at least in part on a dimension of the vehicle.
73. The vehicle-mountable data processing device of any one of claims 60 to 72, wherein the at least one processor is further configured to estimate a velocity of the vehicle, based at least in part on a dimension of the vehicle.
74. The vehicle-mountable data processing device of any one of claims 60 to 73, wherein the at least one processor is further configured to trigger an event based at least in part on a dimension of the vehicle.
75. A vehicle-mountable data processing device for performing LIDAR, including:
- an input configured to capture a series of vehicle measurements from optical return signals produced by reflection of laser illumination signals that repeatedly illuminate a field of illumination over time, the field of illumination including a target vehicle having a top, a side and a rear, the optical signals being produced by reflection of the laser illumination signals from the top, the side and the rear of the target vehicle as a distance between the target vehicle and the vehicle-mountable data processing device changes;
- at least one processor configured for estimating a length, a width and a volume of the target vehicle, wherein the estimating comprises plotting a line by feature extraction using a subset of the vehicle measurements.
76. The vehicle-mountable data processing device of claim 75, further comprising a multi-channel optical receiver having a plurality of optical detection channels and configured to produce a signal waveform for each of the optical detection channels.
77. The vehicle-mountable data processing device of claim 75 or claim 76, wherein the estimating comprises fitting a first line to a first subset of the vehicle measurements and fitting a second line to a second subset of the vehicle measurements, and wherein the at least one processor is further configured to identify a corner point of the vehicle less than a threshold distance from both of the first and second lines.
78. The vehicle-mountable data processing device of claim 77, wherein the at least one processor is further configured to define a three-dimensional bounding box corresponding to the vehicle based on detection of corners.
79. The vehicle-mountable data processing device of claim 78, wherein the three-dimensional bounding box represents an estimate of bounding dimensions of the vehicle.
80. The vehicle-mountable data processing device of any one of claims 75 to 79, wherein the distance between the vehicle and the optical emitter changes due to relative movement between the optical emitter and the vehicle.
81. The vehicle-mountable data processing device of any one of claims 75 to 80, wherein the at least one processor is further configured to assign a classification to the vehicle.
82. The vehicle-mountable data processing device of claim 81, wherein the classification is one of a motorcycle, a passenger car, a pickup, a panel, a van, a bus, a two-axle truck, a three-axle truck, a four-axle truck, a five-axle truck, a six-axle truck, and a seven or more axle truck.
83. The vehicle-mountable data processing device of claim 81, wherein the classification is based on information related to a characteristic of the vehicle.
84. The vehicle-mountable data processing device of any one of claims 75 to 83, wherein the at least one processor is further configured to estimate a distance from the device to the vehicle, based at least in part on the estimate of the length or the width of the vehicle.
85. The vehicle-mountable data processing device of any one of claims 75 to 84, wherein the at least one processor is further configured to estimate a position of the vehicle, based at least in part on the estimate of the length or the width of the vehicle.
86. The vehicle-mountable data processing device of any one of claims 75 to 85, wherein the at least one processor is further configured to estimate a velocity of the vehicle, based at least in part on the estimate of the length or the width of the vehicle.
87. The vehicle-mountable data processing device of any one of claims 75 to 86, wherein the at least one processor is further configured to trigger an event based at least in part on the estimate of the at least one of a length, a width and a height of the vehicle.
88. A data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising:
a. an input configured for capturing a series of vehicle measurements from a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses;
b. at least one processor configured to process the return signal for deriving estimates of the length, width and volume of the vehicle, wherein deriving estimates comprises plotting a line by feature extraction using the vehicle measurements.
89. A data processing device as defined in claim 88, wherein the processor is configured to derive size information about the vehicle on the basis of a plurality of geometric features extracted from the return signal.
90. A data processing device as defined in claim 89, wherein plotting a line comprises fitting a line to a subset of the vehicle measurements.
91. A data processing device as defined in any one of claims 88 to 90, wherein the input is configured for connection to a multi-channel optical receiver outputting the return signal, the multi-channel optical receiver including multiple channels, each channel configured to output channel return data representative of an optical return captured by that channel, the return signal conveying the output channel return data of the multiple channels.
92. A data processing device as defined in claim 91, wherein the optical receiver is configured such that the multiple channels are arranged to form an array.
93. A data processing device as defined in claim 92, wherein the multiple channels of the array include individual sensing elements arranged linearly.
94. A data processing device as defined in claim 93, wherein the individual sensing elements are arranged horizontally.
95. A data processing device as defined in any one of claims 88 to 94, wherein the at least one processor is configured to perform full signal waveform processing of the optical reflections.
96. A data processing device for performing LIDAR detection of a first and a second vehicle residing in a field of view, the first and a second vehicle each having a side and a rear, the data processing device comprising:
a. an input configured for capturing a series of vehicle measurements from a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the first and second vehicles and corresponding to respective ones of the light pulses;
b. at least one processor configured to process the return signal to estimate a length, width and volume of each of the first and second vehicles by plotting a line by feature extraction using the vehicle measurements.
97. A data processing device as defined in claim 96, wherein the processor is configured to derive size information about the first vehicle and the second vehicle on the basis of a plurality of geometric features extracted from the return signal.
98. A data processing device as defined in claim 96 or claim 97, wherein the processor is configured to detect a line of the first vehicle and a line of the second vehicle in the return signal.
99. A data processing device as defined in any one of claims 96 to 98, wherein the processor is configured to perform multiple detections of the first vehicle and the second vehicle in the return signal.
100. A data processing device as defined in any one of claims 96 to 99, wherein the input is configured for connection to a multi-channel optical receiver outputting the return signal, the multi-channel optical receiver including multiple channels, each channel configured to output channel return data representative of an optical return captured by that channel, the return signal conveying the output channel return data of the multiple channels.
101. A data processing device as defined in claim 100, wherein the optical receiver is configured such that the multiple channels are arranged to form an array.
102. A data processing device as defined in claim 101, wherein the plurality of channels of the array include individual sensing elements arranged linearly.
103. A data processing device as defined in claim 102, wherein the individual sensing elements are arranged horizontally.
104. A data processing device as defined in any one of claims 96 to 103, wherein the at least one processor is configured to perform full signal waveform processing of the optical reflections.
105. A data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising:
a. an input configured for receiving a return signal conveying an optical return of a laser illumination signal, the illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses;
b. at least one processor configured to process the return signal for:
i. performing multiple detections of the vehicle from the plurality of optical reflections;
ii. deriving at least a portion of a physical boundary of the vehicle from the multiple reflections and estimating a volume of the vehicle using feature extraction based on the multiple detections.
106. A data processing device as defined in claim 105, wherein the at least one processor is configured to progressively develop the physical boundary as detections of the vehicle are being accumulated.
107. A data processing device as defined in any one of claims 105 to 106, wherein the at least one processor is configured to derive at least a portion of the physical boundary of the vehicle along at least one dimension of the vehicle.
108. A data processing device as defined in claim 107, wherein the at least one dimension of the vehicle is a length of the vehicle.
109. A data processing device as defined in claim 107, wherein the at least one dimension of the vehicle is a width of the vehicle.
110. A data processing device as defined in claim 107, wherein the at least one dimension of the vehicle is a height dimension of the vehicle.
111. A data processing device as defined in claim 107, wherein the at least one processor is configured to derive at least a portion of the physical boundary of the vehicle along at least two orthogonal dimensions of the vehicle.
112. A data processing device as defined in claim 107, wherein the at least one processor is configured to derive at least a portion of the physical boundary of the vehicle in three dimensions.
113. A data processing device as defined in any one of claims 105 to 112, wherein the at least one processing device is configured to perform a fit of the physical boundary into a predetermined geometric shape.
114. A data processing device as defined in claim 113, wherein the predetermined geometric shape is a box.
115. A data processing device as defined in any one of claims 105 to 114, wherein the physical boundary is a line.
116. A data processing device as defined in any one of claims 105 to 115, wherein the input is configured for connection to a multi-channel optical receiver outputting the return signal, the multi-channel optical receiver including multiple channels, each channel configured to output channel return data representative of an optical return captured by that channel, the return signal conveying the output channel return data of the multiple channels.
117. A data processing device as defined in claim 116, wherein the optical receiver is configured such that the multiple channels are arranged to form an array.
118. A data processing device as defined in claim 117, wherein the plurality of channels of the array include individual sensing elements arranged linearly.
119. A data processing device as defined in claim 118, wherein the individual sensing elements are arranged horizontally.
120. A data processing device as defined in any one of claims 105 to 119, wherein the at least one processor is configured to perform full signal waveform processing of the optical reflections.
121. A data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising:
a. an input configured for receiving a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses;
b. at least one processor configured to process the return signal for:
i. performing multiple detections of the vehicle from the plurality of optical reflections;
ii. deriving a geometric feature of the vehicle from the multiple detections and estimating a volume of the vehicle, using feature extraction based on the multiple detections.
122. A data processing device as defined in claim 121, wherein the at least one processor is configured to progressively develop the geometry as detections of the vehicle are being performed.
123. A data processing device as defined in any one of claims 121 to 122, wherein the at least one processor is configured to derive at least a portion of the geometry of the vehicle along at least one dimension of the vehicle.
124. A data processing device as defined in claim 123, wherein the at least one dimension of the vehicle is a length of the vehicle.
125. A data processing device as defined in claim 123, wherein the at least one dimension of the vehicle is a width of the vehicle.
126. A data processing device as defined in claim 123, wherein the at least one dimension of the vehicle is a height dimension of the vehicle.
127. A data processing device as defined in claim 123, wherein the at least one processor is configured to derive at least a portion of the geometry of the vehicle along at least two orthogonal dimensions of the vehicle.
128. A data processing device as defined in claim 123, wherein the at least one processor is configured to derive at least a portion of the geometry of the vehicle in three dimensions.
129. A data processing device as defined in any one of claims 121 to 128, wherein the at least one processing device is configured to perform a fit of the vehicle geometry into a predetermined shape.
130. A data processing device as defined in claim 129, wherein the predetermined shape is a box.
131. A data processing device as defined in any one of claims 121 to 130, wherein the input is configured for connection to a multi-channel optical receiver outputting the return signal, the multi-channel optical receiver including multiple channels, each channel configured to output channel return data representative of an optical return captured by that channel, the return signal conveying the output channel return data of the multiple channels.
132. A data processing device as defined in claim 131, wherein the optical receiver is configured such that the multiple channels are arranged to form an array.
133. A data processing device as defined in claim 132, wherein the plurality of channels of the array include individual sensing elements arranged linearly.
134. A data processing device as defined in claim 133, wherein the individual sensing elements are arranged horizontally.
135. A data processing device as defined in any one of claims 121 to 134, wherein the at least one processor is configured to perform full signal waveform processing of the optical reflections.
136. A data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising:
a. an input configured for receiving a return signal conveying an optical return of a laser illumination signal, the laser illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses;
b. at least one processor configured to process the return signal for:
i. performing multiple detections of the vehicle from the plurality of optical reflections;
ii. detecting, using feature extraction on the basis of the plurality of optical reflections, a corner of the vehicle defined at an intersection of two sides of the vehicle;
iii. estimating a volume of the vehicle, using feature extraction based on the multiple detections.
137. A data processing device as defined in claim 136, wherein the at least one processor is configured to initialize a bounding box when the corner of the vehicle is detected.
138. A data processing device as defined in claim 136 or claim 137, wherein the input is configured for connection to a multi-channel optical receiver outputting the return signal, the multi-channel optical receiver including multiple channels, each channel configured to output channel return data representative of an optical return captured by that channel, the return signal conveying the output channel return data of the multiple channels.
139. A data processing device as defined in claim 138, wherein the optical receiver is configured such that the multiple channels are arranged to form an array.
140. A data processing device as defined in claim 139, wherein the plurality of channels of the array include individual sensing elements arranged linearly.
141. A data processing device as defined in claim 140, wherein the individual sensing elements are arranged horizontally.
142. A data processing device as defined in any one of claims 136 to 141, wherein the at least one processor is configured to perform full signal waveform processing of the optical reflections.
143. A data processing device for performing LIDAR detection of a vehicle residing in a field of view, the vehicle having a side and a rear, the data processing device comprising:
a. an input configured for receiving a return signal conveying an optical return of an illumination signal, the illumination signal including a plurality of light pulses directed at the field of view, the optical return signal including a plurality of optical reflections produced by the vehicle and corresponding to respective ones of the light pulses;
b. at least one processor configured to process the return signal for:
i. performing multiple detections of the vehicle from the plurality of optical reflections;
ii. processing the multiple detections of the vehicle to attempt to identify a physical boundary of the vehicle using feature extraction based on the multiple detections and derive at least in part on the basis of the physical boundary an extent of the vehicle and to estimate a volume of the vehicle from the extent; and to assign to the vehicle a default extent if the attempt is not successful.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261605896P | 2012-03-02 | 2012-03-02 | |
US61/605,896 | 2012-03-02 | ||
CA2865733A CA2865733C (en) | 2012-03-02 | 2013-03-01 | System and method for multipurpose traffic detection and characterization |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA2865733A Division CA2865733C (en) | 2012-03-02 | 2013-03-01 | System and method for multipurpose traffic detection and characterization |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2998175A1 CA2998175A1 (en) | 2013-09-06 |
CA2998175C true CA2998175C (en) | 2020-11-10 |
Family
ID=49081727
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA2865733A Active CA2865733C (en) | 2012-03-02 | 2013-03-01 | System and method for multipurpose traffic detection and characterization |
CA2998175A Active CA2998175C (en) | 2012-03-02 | 2013-03-01 | System and method for multipurpose traffic detection and characterization |
CA3112113A Pending CA3112113A1 (en) | 2012-03-02 | 2013-03-01 | System and method for multipurpose traffic detection and characterization |
CA2998166A Active CA2998166C (en) | 2012-03-02 | 2013-03-01 | System and method for vehicle detection |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA2865733A Active CA2865733C (en) | 2012-03-02 | 2013-03-01 | System and method for multipurpose traffic detection and characterization |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA3112113A Pending CA3112113A1 (en) | 2012-03-02 | 2013-03-01 | System and method for multipurpose traffic detection and characterization |
CA2998166A Active CA2998166C (en) | 2012-03-02 | 2013-03-01 | System and method for vehicle detection |
Country Status (4)
Country | Link |
---|---|
US (2) | US9235988B2 (en) |
EP (1) | EP2820632B8 (en) |
CA (4) | CA2865733C (en) |
WO (1) | WO2013128427A1 (en) |
Families Citing this family (82)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2839194C (en) | 2011-06-17 | 2017-04-18 | Leddartech Inc. | System and method for traffic side detection and characterization |
CA2865733C (en) | 2012-03-02 | 2023-09-26 | Leddartech Inc. | System and method for multipurpose traffic detection and characterization |
US9336450B2 (en) * | 2013-06-05 | 2016-05-10 | Xerox Corporation | Methods and systems for selecting target vehicles for occupancy detection |
RU2559418C2 (en) * | 2013-10-15 | 2015-08-10 | Общество с ограниченной ответственностью "Симикон" | Method of determination of vehicle position and motion speed and complex to this end |
KR101551026B1 (en) * | 2013-12-24 | 2015-09-07 | 현대자동차주식회사 | Method of tracking vehicle |
US9495606B2 (en) * | 2014-02-28 | 2016-11-15 | Ricoh Co., Ltd. | Method for product recognition from multiple images |
DE102014103010A1 (en) * | 2014-03-06 | 2015-09-10 | Skidata Ag | digital camera |
WO2015149009A1 (en) * | 2014-03-27 | 2015-10-01 | Georgia Tech Research Corporation | Systems and methods for identifying traffic control devices and testing the retroreflectivity of the same |
US9396553B2 (en) * | 2014-04-16 | 2016-07-19 | Xerox Corporation | Vehicle dimension estimation from vehicle images |
DE102014010937A1 (en) * | 2014-07-28 | 2016-01-28 | S.M.S, Smart Microwave Sensors Gmbh | Method for determining a position and / or orientation of a sensor |
DE102014012285A1 (en) * | 2014-08-22 | 2016-02-25 | Jenoptik Robot Gmbh | Method and axle counting device for non-contact axle counting of a vehicle and axle counting system for road traffic |
JP6938371B2 (en) | 2014-09-09 | 2021-09-22 | レッダーテック インコーポレイテッド | Discretization of detection zones |
US9821813B2 (en) * | 2014-11-13 | 2017-11-21 | Nec Corporation | Continuous occlusion models for road scene understanding |
EP3026653A1 (en) * | 2014-11-27 | 2016-06-01 | Kapsch TrafficCom AB | Method of controlling a traffic surveillance system |
US10043091B2 (en) * | 2014-12-05 | 2018-08-07 | Magna Electronics Inc. | Vehicle vision system with retroreflector pattern recognition |
US10032370B2 (en) * | 2015-03-03 | 2018-07-24 | Honda Motor Co., Ltd. | Methods and apparatus for enabling mobile communication device based secure interaction from vehicles through motion signatures |
KR102614323B1 (en) | 2015-04-01 | 2023-12-14 | 바야비전 리미티드 | Create a 3D map of a scene using passive and active measurements |
PL3082119T3 (en) * | 2015-04-15 | 2023-03-20 | Vitronic Dr.-Ing. Stein Bildverarbeitungssysteme Gmbh | Distance measurement of vehicles |
EP3104357A1 (en) * | 2015-06-09 | 2016-12-14 | Kapsch TrafficCom AG | A device for detecting vehicles on a traffic area |
US9847022B2 (en) * | 2015-07-22 | 2017-12-19 | Ace/Avant Concrete Construction Co., Inc. | Vehicle detection system and method |
US10331956B2 (en) | 2015-09-23 | 2019-06-25 | Magna Electronics Inc. | Vehicle vision system with detection enhancement using light control |
CN108351402A (en) * | 2015-10-06 | 2018-07-31 | 日本先锋公司 | Light control device, control method, program and storage medium |
GB2543338B (en) * | 2015-10-15 | 2021-09-15 | Nal Ltd | Detector access chamber |
SG11201909024XA (en) | 2015-10-19 | 2019-11-28 | Skansense S L U | Obtaining data from targets using imagery and other remote sensing data |
CN105335723B (en) * | 2015-11-02 | 2019-04-19 | 招商局重庆交通科研设计院有限公司 | A vehicle type recognition method based on infrared structured light |
JP6452596B2 (en) * | 2015-11-30 | 2019-01-16 | 古河電気工業株式会社 | Radar device and radar device detection method |
CN105522988B (en) * | 2015-12-29 | 2018-01-30 | 同方威视技术股份有限公司 | Vehicle guidance system, vehicle orientation method and security inspection vehicle |
JP6672915B2 (en) * | 2016-03-15 | 2020-03-25 | オムロン株式会社 | Object detection device, object detection method, and program |
US9747793B1 (en) * | 2016-08-21 | 2017-08-29 | International Business Machines Corporation | Transportation vehicle traffic management |
US11017663B2 (en) | 2016-10-03 | 2021-05-25 | Murata Manufacturing Co., Ltd. | Ultra-low-power traffic-monitoring system |
KR20180060784A (en) | 2016-11-29 | 2018-06-07 | 삼성전자주식회사 | Method and apparatus for determining abnormal object |
US20180172807A1 (en) * | 2016-12-20 | 2018-06-21 | Analog Devices Global | Method of Providing Enhanced Range Accuracy |
US10114111B2 (en) | 2017-03-28 | 2018-10-30 | Luminar Technologies, Inc. | Method for dynamically controlling laser power |
US10254388B2 (en) * | 2017-03-28 | 2019-04-09 | Luminar Technologies, Inc. | Dynamically varying laser output in a vehicle in view of weather conditions |
US10677897B2 (en) | 2017-04-14 | 2020-06-09 | Luminar Technologies, Inc. | Combining lidar and camera data |
US10360796B2 (en) * | 2017-04-24 | 2019-07-23 | Futurewei Technologies, Inc. | Ticket-based traffic flow control at intersections for internet of vehicles |
US11288519B2 (en) * | 2017-06-30 | 2022-03-29 | FLIR Belgium BVBA | Object counting and classification for image processing |
US11250699B2 (en) | 2017-08-14 | 2022-02-15 | Cubic Corporation | System and method of adaptive traffic management at an intersection |
US11100336B2 (en) * | 2017-08-14 | 2021-08-24 | Cubic Corporation | System and method of adaptive traffic management at an intersection |
CN111492403A (en) * | 2017-10-19 | 2020-08-04 | 迪普迈普有限公司 | Lidar to camera calibration for generating high definition maps |
US10705216B2 (en) | 2017-12-07 | 2020-07-07 | Institute For Information Industry | Three-dimensional point cloud tracking apparatus and method using recurrent neural network |
US10690773B2 (en) | 2017-12-07 | 2020-06-23 | Velodyne Lidar, Inc. | Systems and methods for efficient multi-return light detectors |
US10803340B2 (en) * | 2017-12-08 | 2020-10-13 | Hunter Engineering Company | Method and apparatus for license plate recognition using multiple fields of view |
US11417111B2 (en) | 2017-12-22 | 2022-08-16 | Terra Scientia, Llc | Method, system and material for detecting objects of high interest with laser scanning systems |
EP3732665A4 (en) * | 2017-12-31 | 2021-09-01 | Axilion Ltd. | Method, device, and system of dynamic allocation of traffic resources |
US11341350B2 (en) | 2018-01-05 | 2022-05-24 | Packsize Llc | Systems and methods for volumetric sizing |
US11110895B2 (en) * | 2018-04-09 | 2021-09-07 | Cisco Technology, Inc. | Vehicle network intrusion detection system (IDS) using vehicle state predictions |
DE102018206751A1 (en) * | 2018-05-02 | 2019-11-07 | Continental Automotive Gmbh | CONTOUR RECOGNITION OF A VEHICLE BASED ON MEASUREMENT DATA OF AN ENVIRONMENT SENSOR SYSTEM |
JP7024610B2 (en) * | 2018-05-30 | 2022-02-24 | 株式会社Ihi | Detection device and detection system |
JP7028066B2 (en) * | 2018-05-30 | 2022-03-02 | 株式会社Ihi | Detection device and detection system |
US11668830B1 (en) | 2018-06-01 | 2023-06-06 | Vayavision Sensing Ltd. | System and method for performing active distance measurements |
US10627516B2 (en) | 2018-07-19 | 2020-04-21 | Luminar Technologies, Inc. | Adjustable pulse characteristics for ground detection in lidar systems |
US10746898B2 (en) | 2018-08-01 | 2020-08-18 | Infineon Technologies Ag | Method and device for object recognition and analysis |
TWI680898B (en) * | 2018-11-23 | 2020-01-01 | 財團法人車輛研究測試中心 | Lidar detection device and method for close obstacles |
JP2020118567A (en) * | 2019-01-24 | 2020-08-06 | ソニーセミコンダクタソリューションズ株式会社 | Distance measurement device, on-vehicle system, and distance measurement method |
CN111696365A (en) * | 2019-03-11 | 2020-09-22 | 杭州海康威视数字技术股份有限公司 | Vehicle tracking system |
JP7050020B2 (en) * | 2019-03-14 | 2022-04-07 | Kddi株式会社 | Vehicle detection device, vehicle detection method, and program |
DE102019206480A1 (en) * | 2019-05-06 | 2020-11-12 | Audi Ag | Motor vehicle having an outer surface and method of operating a motor vehicle |
US11448732B2 (en) | 2019-05-21 | 2022-09-20 | Northrop Grumman Systems Corporation | Frequency modulated scanning LIDAR with 360 degrees field of view |
US11531111B2 (en) | 2019-05-21 | 2022-12-20 | Northrop Grumman Systems Corporation | 360 degrees field of view scanning lidar with no movable parts |
US11555891B2 (en) | 2019-05-21 | 2023-01-17 | Northrop Grumman Systems Corporation | Methods for large angle field of view scanning LIDAR with no movable parts |
EP3754535A1 (en) * | 2019-06-17 | 2020-12-23 | Kapsch TrafficCom AG | Apparatus for recording license plates of vehicles |
KR102137038B1 (en) * | 2019-07-08 | 2020-07-23 | 엘지전자 주식회사 | System for identifying vehicle using laser sensor |
CN110533921B (en) * | 2019-08-06 | 2021-06-18 | 北京万集科技股份有限公司 | Triggering snapshot method and system for vehicle |
EP3789794A1 (en) * | 2019-09-04 | 2021-03-10 | Ibeo Automotive Systems GmbH | Method and device for distance-measuring |
CA3155593C (en) | 2019-09-22 | 2023-07-25 | Vayavision Sensing Ltd. | Methods and systems for training and validating a perception system |
CA3113473A1 (en) * | 2020-03-24 | 2021-09-24 | Arudi Srinivas Rajagopal | Traffic warning and data capture devices and methods |
CN111507427A (en) * | 2020-05-08 | 2020-08-07 | 湖南车路协同智能科技有限公司 | Vehicle classification method |
KR20230038717A (en) * | 2020-06-29 | 2023-03-21 | 아이 알 키네틱스 리미티드 | Systems and methods for an interactive vehicular transportation network |
US11328519B2 (en) | 2020-07-23 | 2022-05-10 | Waymo Llc | Detecting traffic signaling states with neural networks |
US11956693B2 (en) * | 2020-12-03 | 2024-04-09 | Mitsubishi Electric Corporation | Apparatus and method for providing location |
US20220188953A1 (en) | 2020-12-15 | 2022-06-16 | Selex Es Inc. | Systems and methods for electronic signature tracking |
US11482099B2 (en) * | 2021-02-02 | 2022-10-25 | Here Global B.V. | Method and apparatus for preventing traffic over-reporting via identifying misleading probe data |
US11393227B1 (en) * | 2021-02-02 | 2022-07-19 | Sony Group Corporation | License plate recognition based vehicle control |
KR20220126031A (en) * | 2021-03-08 | 2022-09-15 | 현대모비스 주식회사 | Vehicle tracking system and tracking method |
WO2022259344A1 (en) * | 2021-06-08 | 2022-12-15 | 三菱電機株式会社 | Laser radar device |
EP4285336A1 (en) | 2021-07-16 | 2023-12-06 | Huawei Technologies Co., Ltd. | Privacy compliant monitoring of objects |
EP4134933A1 (en) * | 2021-08-11 | 2023-02-15 | MOVYON S.p.A. | SYSTEM AND METHOD FOR DETECTING THE CLASS OF A VEHICLE IN TRANSIT |
EP4399700A1 (en) * | 2021-09-09 | 2024-07-17 | Selex ES Inc. | Systems and methods for electronic signature tracking and analysis |
US11842635B2 (en) * | 2021-10-22 | 2023-12-12 | Richard Wohler | Traffic light control assembly |
WO2023171672A1 (en) * | 2022-03-08 | 2023-09-14 | パイオニア株式会社 | Information processing device, control method, program, and storage medium |
FR3137483A1 (en) * | 2022-06-30 | 2024-01-05 | Idemia Identity & Security | Method for semi-automatic measurement of the speed of a vehicle from an image bank, computer program product and associated device |
Family Cites Families (293)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE2002012A1 (en) | 1969-01-21 | 1970-08-13 | Del Signore Dr Giovanni | Device and method for reporting obstacles and for displaying the distance of the obstacles |
US3967111A (en) | 1974-12-20 | 1976-06-29 | Scientific Technology Incorporated | Pulsed light source discriminator system |
JPS58211677A (en) | 1982-06-02 | 1983-12-09 | Nissan Motor Co Ltd | Optical radar device |
US4533242A (en) | 1982-09-28 | 1985-08-06 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Ranging system which compares an object-reflected component of a light beam to a reference component of the light beam |
US4733961A (en) | 1983-03-07 | 1988-03-29 | Texas Instruments Incorporated | Amplifier for integrated laser/FLIR rangefinder |
US4717862A (en) | 1984-11-19 | 1988-01-05 | The United States Government As Represented By The Secretary Of The Navy | Pulsed illumination projector |
US4766421A (en) | 1986-02-19 | 1988-08-23 | Auto-Sense, Ltd. | Object detection apparatus employing electro-optics |
US4808997A (en) | 1987-05-21 | 1989-02-28 | Barkley George J | Photoelectric vehicle position indicating device for use in parking and otherwise positioning vehicles |
US4891624A (en) | 1987-06-12 | 1990-01-02 | Stanley Electric Co., Ltd. | Rearward vehicle obstruction detector using modulated light from the brake light elements |
GB8727824D0 (en) | 1987-11-27 | 1987-12-31 | Combustion Dev Ltd | Monitoring means |
GB8826624D0 (en) | 1988-11-14 | 1988-12-21 | Martell D K | Traffic congestion monitoring system |
US4928232A (en) | 1988-12-16 | 1990-05-22 | Laser Precision Corporation | Signal averaging for optical time domain relectometers |
US5134393A (en) | 1990-04-02 | 1992-07-28 | Henson H Keith | Traffic control system |
GB9018174D0 (en) | 1990-08-17 | 1990-10-03 | Pearpoint Ltd | Apparatus for reading vehicle number-plates |
EP0476562A3 (en) | 1990-09-19 | 1993-02-10 | Hitachi, Ltd. | Method and apparatus for controlling moving body and facilities |
US5179286A (en) | 1990-10-05 | 1993-01-12 | Mitsubishi Denki K.K. | Distance measuring apparatus receiving echo light pulses |
JPH0833444B2 (en) | 1990-10-05 | 1996-03-29 | 三菱電機株式会社 | Distance measuring device |
JPH0827345B2 (en) | 1990-10-05 | 1996-03-21 | 三菱電機株式会社 | Distance measuring device |
JPH04172285A (en) | 1990-11-02 | 1992-06-19 | Mitsubishi Electric Corp | Distance measuring apparatus |
FR2671653B1 (en) | 1991-01-11 | 1995-05-24 | Renault | MOTOR VEHICLE TRAFFIC MEASUREMENT SYSTEM. |
US5357331A (en) | 1991-07-02 | 1994-10-18 | Flockencier Stuart W | System for processing reflected energy signals |
US5102218A (en) | 1991-08-15 | 1992-04-07 | The United States Of America As Represented By The Secretary Of The Air Force | Target-aerosol discrimination by means of digital signal processing |
US5270780A (en) | 1991-09-13 | 1993-12-14 | Science Applications International Corporation | Dual detector lidar system and method |
GB2264411B (en) | 1992-02-13 | 1995-09-06 | Roke Manor Research | Active infrared detector system |
FR2690519B1 (en) | 1992-04-23 | 1994-06-10 | Est Centre Etu Tech Equipement | DEVICE FOR ANALYZING THE PATH OF MOVING OBJECTS. |
US6304321B1 (en) * | 1992-11-23 | 2001-10-16 | Schwartz Electro-Optics, Inc. | Vehicle classification and axle counting sensor system and method |
US5546188A (en) | 1992-11-23 | 1996-08-13 | Schwartz Electro-Optics, Inc. | Intelligent vehicle highway system sensor and method |
US5793491A (en) * | 1992-12-30 | 1998-08-11 | Schwartz Electro-Optics, Inc. | Intelligent vehicle highway system multi-lane sensor and method |
DE4304298A1 (en) | 1993-02-15 | 1994-08-18 | Atlas Elektronik Gmbh | Method for classifying vehicles passing a given waypoint |
US5510800A (en) | 1993-04-12 | 1996-04-23 | The Regents Of The University Of California | Time-of-flight radio location system |
US5389921A (en) | 1993-05-17 | 1995-02-14 | Whitton; John M. | Parking lot apparatus and method |
US5381155A (en) | 1993-12-08 | 1995-01-10 | Gerber; Eliot S. | Vehicle speeding detection and identification |
US5552767A (en) | 1994-02-14 | 1996-09-03 | Toman; John R. | Assembly for, and method of, detecting and signalling when an object enters a work zone |
US5714754A (en) | 1994-03-04 | 1998-02-03 | Nicholas; John Jacob | Remote zone operation of lighting systems for above-ground enclosed or semi-enclosed parking structures |
JPH07280940A (en) | 1994-04-12 | 1995-10-27 | Mitsubishi Electric Corp | Radar for vehicle |
GB9408252D0 (en) | 1994-04-26 | 1994-06-15 | Towers Paul | A detection system |
US7209221B2 (en) | 1994-05-23 | 2007-04-24 | Automotive Technologies International, Inc. | Method for obtaining and displaying information about objects in a vehicular blind spot |
US7359782B2 (en) | 1994-05-23 | 2008-04-15 | Automotive Technologies International, Inc. | Vehicular impact reactive system and method |
JP3254928B2 (en) | 1994-09-12 | 2002-02-12 | 日産自動車株式会社 | Radar position detection sensor and radar using the same |
DE69409815T2 (en) | 1994-11-26 | 1998-08-13 | Hewlett Packard Gmbh | Optical time domain reflectometer and method for optical time domain reflectometry |
US5633629A (en) | 1995-02-08 | 1997-05-27 | Hochstein; Peter A. | Traffic information system using light emitting diodes |
DE19604338B4 (en) | 1995-02-18 | 2004-07-15 | Leich, Andreas, Dipl.-Ing. | Vehicle counting and classification device |
US6259862B1 (en) | 1995-04-11 | 2001-07-10 | Eastman Kodak Company | Red-eye reduction using multiple function light source |
WO1996034252A1 (en) | 1995-04-28 | 1996-10-31 | Schwartz Electro-Optics, Inc. | Intelligent vehicle highway system sensor and method |
DE19517001A1 (en) | 1995-05-09 | 1996-11-14 | Sick Optik Elektronik Erwin | Method and device for determining the light propagation time over a measuring section arranged between a measuring device and a reflecting object |
US5764163A (en) | 1995-09-21 | 1998-06-09 | Electronics & Space Corp. | Non-imaging electro-optic vehicle sensor apparatus utilizing variance in reflectance |
US5633801A (en) | 1995-10-11 | 1997-05-27 | Fluke Corporation | Pulse-based impedance measurement instrument |
FR2743150B1 (en) | 1996-01-02 | 1998-01-30 | Renault | PARKING SLOT DETECTION SYSTEM |
FR2743151B1 (en) | 1996-01-02 | 1998-01-30 | Renault | PARKING SLOT DETECTION AND MEASUREMENT SYSTEM |
JP3206414B2 (en) | 1996-01-10 | 2001-09-10 | トヨタ自動車株式会社 | Vehicle type identification device |
JP3379324B2 (en) | 1996-02-08 | 2003-02-24 | トヨタ自動車株式会社 | Moving object detection method and apparatus |
US5786772A (en) | 1996-03-22 | 1998-07-28 | Donnelly Corporation | Vehicle blind spot detection display system |
EP0798684B1 (en) | 1996-03-25 | 2001-01-10 | MANNESMANN Aktiengesellschaft | Method and system to obtain the traffic situation through fixed data-acquisition device |
EP0935764B1 (en) | 1996-04-01 | 2003-03-19 | Gatsometer B.V. | Method and apparatus for determining the speed and location of a vehicle |
US5838116A (en) | 1996-04-15 | 1998-11-17 | Jrs Technology, Inc. | Fluorescent light ballast with information transmission circuitry |
US5760887A (en) | 1996-04-30 | 1998-06-02 | Hughes Electronics | Multi-pulse, multi-return, modal range processing for clutter rejection |
US5777564A (en) | 1996-06-06 | 1998-07-07 | Jones; Edward L. | Traffic signal system and method |
FR2749670B1 (en) | 1996-06-11 | 1998-07-31 | Renault | DEVICE AND METHOD FOR MEASURING PARKING SPOTS OF A MOTOR VEHICLE |
IT1286684B1 (en) | 1996-07-26 | 1998-07-15 | Paolo Sodi | DEVICE AND METHOD FOR DETECTION OF ROAD INFRINGEMENTS WITH DYNAMIC POINTING SYSTEMS |
US20040083035A1 (en) | 1996-09-25 | 2004-04-29 | Ellis Christ G. | Apparatus and method for automatic vision enhancement in a traffic complex |
US20030154017A1 (en) | 1996-09-25 | 2003-08-14 | Ellis Christ G. | Apparatus and method for vehicle counting, tracking and tagging |
US5812249A (en) | 1996-09-26 | 1998-09-22 | Envirotest Systems Corporation | Speed and acceleration monitoring device using visible laser beams |
DE29617413U1 (en) | 1996-10-07 | 1996-11-21 | Mekra Lang Gmbh & Co Kg | Monitoring device for difficult or invisible zones around motor vehicles |
DE19643475C1 (en) | 1996-10-22 | 1998-06-25 | Laser Applikationan Gmbh | Speed measurement method based on the laser Doppler principle |
US20050169643A1 (en) | 1997-01-02 | 2005-08-04 | Franklin Philip G. | Method and apparatus for the zonal transmission of data using building lighting fixtures |
DE19701803A1 (en) | 1997-01-20 | 1998-10-01 | Sick Ag | Light sensor with light transit time evaluation |
US5995900A (en) | 1997-01-24 | 1999-11-30 | Grumman Corporation | Infrared traffic sensor with feature curve generation |
CN1246948A (en) | 1997-02-05 | 2000-03-08 | 西门子公司 | Motor vehicle detector |
EP0866434B1 (en) | 1997-02-19 | 2004-06-16 | ATX Europe GmbH | Device to collect data about moving objects |
DE19708014A1 (en) | 1997-02-27 | 1998-09-10 | Ernst Dr Hoerber | Device and method for detecting an object in a predetermined spatial area, in particular vehicles for traffic monitoring |
US5942753A (en) | 1997-03-12 | 1999-08-24 | Remote Sensing Technologies | Infrared remote sensing device and system for checking vehicle brake condition |
US6049171A (en) | 1998-09-18 | 2000-04-11 | Gentex Corporation | Continuously variable headlamp control |
GB9715166D0 (en) | 1997-07-19 | 1997-09-24 | Footfall Limited | Video imaging system |
CN1129883C (en) | 1997-07-22 | 2003-12-03 | 奥特-森斯有限公司 | multi-frequency photoelectric detection system |
US6548967B1 (en) | 1997-08-26 | 2003-04-15 | Color Kinetics, Inc. | Universal lighting network methods and systems |
US5828320A (en) | 1997-09-26 | 1998-10-27 | Trigg Industries, Inc. | Vehicle overheight detector device and method |
US7796081B2 (en) | 1997-10-22 | 2010-09-14 | Intelligent Technologies International, Inc. | Combined imaging and distance monitoring for vehicular applications |
US6363326B1 (en) | 1997-11-05 | 2002-03-26 | Robert Lawrence Scully | Method and apparatus for detecting an object on a side of or backwards of a vehicle |
ES2215329T3 (en) | 1997-11-24 | 2004-10-01 | Peugeot Citroen Automobiles Sa | DETECTION DEVICE FOR PHOTOELECTRIC CELLS. |
DE19804958A1 (en) | 1998-02-07 | 1999-08-12 | Itt Mfg Enterprises Inc | Evaluation concept for distance measuring methods |
DE19804957A1 (en) | 1998-02-07 | 1999-08-12 | Itt Mfg Enterprises Inc | Distance measurement method with adaptive amplification |
US6104314A (en) | 1998-02-10 | 2000-08-15 | Jiang; Jung-Jye | Automatic parking apparatus |
US6546119B2 (en) * | 1998-02-24 | 2003-04-08 | Redflex Traffic Systems | Automated traffic violation monitoring and reporting system |
US6404506B1 (en) | 1998-03-09 | 2002-06-11 | The Regents Of The University Of California | Non-intrusive laser-based system for detecting objects moving across a planar surface |
DE19816004A1 (en) | 1998-04-09 | 1999-10-14 | Daimler Chrysler Ag | Arrangement for road condition detection |
US6794831B2 (en) | 1998-04-15 | 2004-09-21 | Talking Lights Llc | Non-flickering illumination based communication |
US5953110A (en) | 1998-04-23 | 1999-09-14 | H.N. Burns Engineering Corporation | Multichannel laser radar |
AT406093B (en) | 1998-05-19 | 2000-02-25 | Perger Andreas Dr | METHOD FOR OPTICAL DISTANCE MEASUREMENT |
DE19823135A1 (en) | 1998-05-23 | 1999-11-25 | Bosch Gmbh Robert | Traffic data acquisition for respective control of light signal system |
US6044336A (en) | 1998-07-13 | 2000-03-28 | Multispec Corporation | Method and apparatus for situationally adaptive processing in echo-location systems operating in non-Gaussian environments |
US6142702A (en) | 1998-11-25 | 2000-11-07 | Simmons; Jason | Parking space security and status indicator system |
US6115113A (en) | 1998-12-02 | 2000-09-05 | Lockheed Martin Corporation | Method for increasing single-pulse range resolution |
DE19856478C1 (en) | 1998-12-02 | 2000-06-21 | Ddg Ges Fuer Verkehrsdaten Mbh | Parking space detection |
US6166645A (en) | 1999-01-13 | 2000-12-26 | Blaney; Kevin | Road surface friction detector and method for vehicles |
US6771185B1 (en) | 1999-02-03 | 2004-08-03 | Chul Jin Yoo | Parking guidance and management system |
US6107942A (en) | 1999-02-03 | 2000-08-22 | Premier Management Partners, Inc. | Parking guidance and management system |
JP2002536739A (en) | 1999-02-05 | 2002-10-29 | ブレット・ホール | Parking lot facility computer management system |
EP1043602B1 (en) | 1999-04-06 | 2003-02-05 | Leica Geosystems AG | Method for detecting the distance of at least one target |
DE19919061A1 (en) | 1999-04-27 | 2000-11-02 | Robot Foto Electr Kg | Traffic monitoring device with polarization filters |
DE19919925C2 (en) | 1999-04-30 | 2001-06-13 | Siemens Ag | Arrangement and method for the simultaneous measurement of the speed and the surface shape of moving objects |
US6285297B1 (en) | 1999-05-03 | 2001-09-04 | Jay H. Ball | Determining the availability of parking spaces |
DE19921449C1 (en) | 1999-05-08 | 2001-01-25 | Daimler Chrysler Ag | Guide assistance when changing the lane of a motor vehicle |
US6580385B1 (en) | 1999-05-26 | 2003-06-17 | Robert Bosch Gmbh | Object detection system |
GB2354898B (en) | 1999-07-07 | 2003-07-23 | Pearpoint Ltd | Vehicle licence plate imaging |
US6502011B2 (en) | 1999-07-30 | 2002-12-31 | Gerhard Haag | Method and apparatus for presenting and managing information in an automated parking structure |
US6946974B1 (en) | 1999-09-28 | 2005-09-20 | Racunas Jr Robert Vincent | Web-based systems and methods for internet communication of substantially real-time parking data |
US6411204B1 (en) | 1999-11-15 | 2002-06-25 | Donnelly Corporation | Deceleration based anti-collision safety light control for vehicle |
GB9927623D0 (en) | 1999-11-24 | 2000-01-19 | Koninkl Philips Electronics Nv | Illumination source |
US6927700B1 (en) | 2000-01-04 | 2005-08-09 | Joseph P. Quinn | Method and apparatus for detection and remote notification of vehicle parking space availability data |
US7123166B1 (en) | 2000-11-17 | 2006-10-17 | Haynes Michael N | Method for managing a parking lot |
ATE263380T1 (en) | 2000-01-26 | 2004-04-15 | Instro Prec Ltd | OPTICAL DISTANCE MEASUREMENT |
US6147624A (en) | 2000-01-31 | 2000-11-14 | Intel Corporation | Method and apparatus for parking management system for locating available parking space |
US20020033884A1 (en) | 2000-05-03 | 2002-03-21 | Schurr George W. | Machine vision-based sorter verification |
AU2001259640A1 (en) | 2000-05-08 | 2001-11-20 | Automotive Technologies International, Inc. | Vehicular blind spot identification and monitoring system |
US7852462B2 (en) | 2000-05-08 | 2010-12-14 | Automotive Technologies International, Inc. | Vehicular component control methods based on blind spot monitoring |
US6765495B1 (en) | 2000-06-07 | 2004-07-20 | Hrl Laboratories, Llc | Inter vehicle communication system |
US6502053B1 (en) | 2000-06-12 | 2002-12-31 | Larry Hardin | Combination passive and active speed detection system |
US6642854B2 (en) | 2000-06-14 | 2003-11-04 | Mcmaster Steven James | Electronic car park management system |
DE10034976B4 (en) | 2000-07-13 | 2011-07-07 | iris-GmbH infrared & intelligent sensors, 12459 | Detecting device for detecting persons |
US6492949B1 (en) | 2000-08-16 | 2002-12-10 | Raytheon Company | Slot antenna element for an array antenna |
US6665621B2 (en) | 2000-11-28 | 2003-12-16 | Scientific Technologies Incorporated | System and method for waveform processing |
ATE301861T1 (en) | 2000-12-30 | 2005-08-15 | Goddert Peters | TUNNEL MONITORING SYSTEM IN A TUNNEL |
US6753766B2 (en) | 2001-01-15 | 2004-06-22 | 1138037 Ontario Ltd. (“Alirt”) | Detecting device and method of using same |
US20020117340A1 (en) | 2001-01-31 | 2002-08-29 | Roger Stettner | Laser radar based collision avoidance system for stationary or moving vehicles, automobiles, boats and aircraft |
JP2004528629A (en) | 2001-02-07 | 2004-09-16 | ビークルセンス, インク. | Parking management system |
US6559776B2 (en) | 2001-02-15 | 2003-05-06 | Yoram Katz | Parking status control system and method |
JP4405155B2 (en) | 2001-04-04 | 2010-01-27 | インストロ プレシジョン リミテッド | Image analysis system |
JP2002342896A (en) | 2001-05-21 | 2002-11-29 | Seiko Epson Corp | Parking lot guiding system and parking lot guiding program |
WO2003000520A1 (en) | 2001-06-21 | 2003-01-03 | Tis, Inc. | Parking guidance and vehicle control system |
US6426708B1 (en) | 2001-06-30 | 2002-07-30 | Koninklijke Philips Electronics N.V. | Smart parking advisor |
AUPR631801A0 (en) | 2001-07-12 | 2001-08-02 | Luscombe, Andrew | Roadside sensor system |
ITBO20010571A1 (en) | 2001-09-20 | 2003-03-20 | Univ Bologna | VEHICLE TRAFFIC MONITORING SYSTEM AND CONTROL UNIT AND RELATED OPERATING METHOD |
US6556916B2 (en) | 2001-09-27 | 2003-04-29 | Wavetronix Llc | System and method for identification of traffic lane positions |
WO2003029046A1 (en) | 2001-10-03 | 2003-04-10 | Maryann Winter | Apparatus and method for sensing the occupancy status of parking spaces in a parking lot |
US7734500B1 (en) * | 2001-10-17 | 2010-06-08 | United Toll Systems, Inc. | Multiple RF read zone system |
US8331621B1 (en) * | 2001-10-17 | 2012-12-11 | United Toll Systems, Inc. | Vehicle image capture system |
US7725348B1 (en) * | 2001-10-17 | 2010-05-25 | United Toll Systems, Inc. | Multilane vehicle information capture system |
DE10163534A1 (en) | 2001-12-21 | 2003-07-10 | Siemens Ag | Device for monitoring areas of space |
US7287884B2 (en) | 2002-02-07 | 2007-10-30 | Toyota Jidosha Kabushiki Kaisha | Vehicle operation supporting device and vehicle operation supporting system |
KR100459475B1 (en) | 2002-04-04 | 2004-12-03 | 엘지산전 주식회사 | System and method for judge the kind of vehicle |
US6885312B1 (en) | 2002-05-28 | 2005-04-26 | Bellsouth Intellectual Property Corporation | Method and system for mapping vehicle parking |
US6821003B2 (en) | 2002-07-16 | 2004-11-23 | Visteon Global Technologies, Inc. | Vehicle lamp and vehicle illumination and data transmission system incorporating same |
DE60232378D1 (en) | 2002-07-17 | 2009-06-25 | Fico Mirrors Sa | DEVICE FOR THE ACTIVE MONITORING OF THE OUTER SAFETY LIMITATION OF A MOTOR VEHICLE |
EP1540373B1 (en) | 2002-08-05 | 2008-02-20 | Elbit Systems Ltd. | Vehicle mounted night vision imaging system and method |
US6783425B2 (en) | 2002-08-26 | 2004-08-31 | Shoot The Moon Products Ii, Llc | Single wire automatically navigated vehicle systems and methods for toy applications |
US7312856B2 (en) | 2002-09-12 | 2007-12-25 | Lockheed Martin Corporation | Programmable pulse capture device with automatic gain control |
JP3822154B2 (en) | 2002-09-12 | 2006-09-13 | 本田技研工業株式会社 | Vehicle detection device |
US20040051659A1 (en) | 2002-09-18 | 2004-03-18 | Garrison Darwin A. | Vehicular situational awareness system |
US6842231B2 (en) | 2002-09-30 | 2005-01-11 | Raytheon Company | Method for improved range accuracy in laser range finders |
DE10247290B4 (en) | 2002-10-10 | 2013-04-18 | Volkswagen Ag | Method and device for monitoring dead angles of a motor vehicle |
US6825778B2 (en) | 2002-10-21 | 2004-11-30 | International Road Dynamics Inc. | Variable speed limit system |
US20040254728A1 (en) * | 2002-10-25 | 2004-12-16 | Poropat George Vladimir | Collision warning system and method |
DE10251133B3 (en) | 2002-10-31 | 2004-07-29 | Gerd Reime | Device for controlling lighting, in particular for vehicle interiors, and method for controlling it |
DE10252756A1 (en) | 2002-11-13 | 2004-05-27 | Robert Bosch Gmbh | A / D converter with improved resolution |
DE10255015B4 (en) | 2002-11-25 | 2008-09-25 | Daimler Ag | Broadband lighting device |
JP2006521536A (en) | 2002-11-26 | 2006-09-21 | ジェームス エフ. マンロ | High-precision distance measuring apparatus and method |
US6860350B2 (en) | 2002-12-20 | 2005-03-01 | Motorola, Inc. | CMOS camera with integral laser ranging and velocity measurement |
US7426450B2 (en) | 2003-01-10 | 2008-09-16 | Wavetronix, Llc | Systems and methods for monitoring speed |
US7148813B2 (en) | 2003-03-20 | 2006-12-12 | Gentex Corporation | Light emitting traffic sign having vehicle sensing capabilities |
US6674394B1 (en) | 2003-03-28 | 2004-01-06 | Visteon Global Technologies, Inc. | Method for determining object location from side-looking sensor data |
US7081832B2 (en) | 2003-04-25 | 2006-07-25 | General Electric Capital Corporation | Method and apparatus for obtaining data regarding a parking location |
WO2004100105A1 (en) | 2003-05-07 | 2004-11-18 | Koninklijke Philips Electronics N.V. | Traffic information system for conveying information to drivers |
FR2854692B1 (en) | 2003-05-07 | 2006-02-17 | Peugeot Citroen Automobiles Sa | OPTICAL EXPLORATION DEVICE AND VEHICLE COMPRISING SUCH A DEVICE |
US6917307B2 (en) | 2003-05-08 | 2005-07-12 | Shih-Hsiung Li | Management method and system for a parking lot |
CN1698381A (en) | 2003-05-08 | 2005-11-16 | 西门子公司 | Method and device for detecting an object or a person |
ES2355384T3 (en) | 2003-05-22 | 2011-03-25 | Pips Technology Inc. | AUTOMATED SECURITY OF A SITE, MONITORING SYSTEM AND ACCESS CONTROL. |
US7026954B2 (en) | 2003-06-10 | 2006-04-11 | Bellsouth Intellectual Property Corporation | Automated parking director systems and related methods |
KR100464584B1 (en) | 2003-07-10 | 2005-01-03 | 에이앤디엔지니어링 주식회사 | Laser Rangefinder and method thereof |
DE102004035856A1 (en) | 2003-08-14 | 2005-03-10 | Roland Bittner | Electrical auxiliary device for use in a traffic system, e.g. a traffic data collection system or traffic warning system, whereby the device is mounted at least partially in a mounting tube or pipe of existing infrastructure |
US7821422B2 (en) | 2003-08-18 | 2010-10-26 | Light Vision Systems, Inc. | Traffic light signal system using radar-based target detection and tracking |
US20050083212A1 (en) | 2003-09-03 | 2005-04-21 | Stratech Systems Limited | Apparatus and method for locating, identifying and tracking vehicles in a parking area |
JP2005085187A (en) | 2003-09-11 | 2005-03-31 | Oki Electric Ind Co Ltd | Parking lot management system utilizing radio lan system |
US7688222B2 (en) | 2003-09-18 | 2010-03-30 | Spot Devices, Inc. | Methods, systems and devices related to road mounted indicators for providing visual indications to approaching traffic |
ITTO20030770A1 (en) | 2003-10-02 | 2005-04-03 | Fiat Ricerche | LONG-DETECTION DETECTOR LONG ONE |
EP1522870B1 (en) | 2003-10-06 | 2013-07-17 | Triple-IN Holding AG | Distance measurement |
US20050117364A1 (en) | 2003-10-27 | 2005-06-02 | Mark Rennick | Method and apparatus for projecting a turn signal indication |
US7230545B2 (en) | 2003-11-07 | 2007-06-12 | Nattel Group, Inc. | Automobile communication and registry system |
JP4449443B2 (en) | 2003-12-10 | 2010-04-14 | 日産自動車株式会社 | LED lamp device with radar function |
FR2864932B1 (en) | 2004-01-09 | 2007-03-16 | Valeo Vision | SYSTEM AND METHOD FOR DETECTING CIRCULATION CONDITIONS FOR A MOTOR VEHICLE |
WO2005072358A2 (en) | 2004-01-28 | 2005-08-11 | Canesta, Inc. | Single chip red, green, blue, distance (rgb-z) sensor |
US20050187701A1 (en) | 2004-02-23 | 2005-08-25 | Baney Douglas M. | Traffic communication system |
JP2005290813A (en) | 2004-03-31 | 2005-10-20 | Honda Motor Co Ltd | Parking guidance robot |
US7106214B2 (en) | 2004-04-06 | 2006-09-12 | Mongkol Jesadanont | Apparatus and method of finding an unoccupied parking space in a parking lot |
US7526103B2 (en) | 2004-04-15 | 2009-04-28 | Donnelly Corporation | Imaging system for vehicle |
JP4238766B2 (en) | 2004-04-15 | 2009-03-18 | 株式会社デンソー | Roundabout vehicle information system |
US7616293B2 (en) | 2004-04-29 | 2009-11-10 | Sigma Space Corporation | System and method for traffic monitoring, speed determination, and traffic light violation detection and recording |
US7323987B2 (en) | 2004-06-28 | 2008-01-29 | Sigma Space Corporation | Compact single lens laser system for object/vehicle presence and speed determination |
JP2006021720A (en) | 2004-07-09 | 2006-01-26 | Nissan Motor Co Ltd | Lamp device with distance measuring function |
EP1628278A1 (en) | 2004-08-16 | 2006-02-22 | Alcatel | Method and system for detecting available parking places |
US7405676B2 (en) | 2004-09-10 | 2008-07-29 | Gatsometer B.V. | Method and system for detecting with laser the passage by a vehicle of a point for monitoring on a road |
NL1027018C2 (en) | 2004-09-10 | 2006-03-13 | Gatsometer Bv | Method and system for observing with radar the passing of a point to be monitored on a road through a vehicle. |
WO2006031220A2 (en) | 2004-09-10 | 2006-03-23 | Darryll Anderson | Blind spot detector system |
DE102004047022A1 (en) | 2004-09-28 | 2006-04-06 | Siemens Ag | Device for monitoring room areas |
BRPI0516486A (en) | 2004-10-15 | 2008-09-09 | Trico Products Corp Of Tenness | object detection system with a vcsel diode array |
US7221288B2 (en) | 2004-10-25 | 2007-05-22 | The Chamberlain Group, Inc. | Method and apparatus for using optical signal time-of-flight information to facilitate obstacle detection |
EP1828958B1 (en) | 2004-12-01 | 2012-05-16 | Datalogic ADC, Inc. | Triggering illumination for a data reader |
JP2006172210A (en) | 2004-12-16 | 2006-06-29 | Matsushita Electric Works Ltd | Distance image sensor for vehicle, and obstacle monitoring device using the same |
US7233683B2 (en) | 2005-01-04 | 2007-06-19 | Deere & Company | Method and system for guiding a vehicle with vision-based adjustment |
US7610123B2 (en) | 2005-01-04 | 2009-10-27 | Deere & Company | Vision-aided system and method for guiding a vehicle |
ES2258399B1 (en) | 2005-02-04 | 2007-11-16 | Fico Mirrors, S.A. | METHOD AND SYSTEM TO IMPROVE THE SUPERVISION OF AN OUTSIDE ENVIRONMENT OF A MOTOR VEHICLE. |
US7242281B2 (en) | 2005-02-23 | 2007-07-10 | Quintos Mel Francis P | Speed control system |
JP4587301B2 (en) | 2005-02-23 | 2010-11-24 | 本田技研工業株式会社 | Vehicle recognition device |
ITTO20050138A1 (en) | 2005-03-04 | 2006-09-05 | Fiat Ricerche | EVALUATION SYSTEM OF THE FLUIDITY OF ROAD OR MOTORWAY TRAFFIC AND OF PREDICTION OF QUEUE FORMATION AND SLOWDOWN |
JP4210662B2 (en) | 2005-03-17 | 2009-01-21 | 本田技研工業株式会社 | Vehicle object detection device |
WO2006100672A2 (en) | 2005-03-21 | 2006-09-28 | Visonic Ltd. | Passive infra-red detectors |
GB0506722D0 (en) | 2005-04-02 | 2005-05-11 | Agd Systems Ltd | Detector systems |
EP1902338B1 (en) | 2005-07-06 | 2013-02-20 | Donnelly Corporation | Vehicle exterior mirror assembly with blind spot indicator |
DE202005010816U1 (en) | 2005-07-09 | 2005-11-03 | Owzar, Houtan, Dipl.-Ing.(FH) | Alarm system for dead angle area of motor vehicle has sensors mounted on side mirror or roof edge of both sides of vehicle |
US7714265B2 (en) | 2005-09-30 | 2010-05-11 | Apple Inc. | Integrated proximity sensor and light sensor |
GB0521713D0 (en) | 2005-10-25 | 2005-11-30 | Qinetiq Ltd | Traffic sensing and monitoring apparatus |
JP2007121116A (en) | 2005-10-28 | 2007-05-17 | Sharp Corp | Optical distance measuring device |
US7417718B2 (en) | 2005-10-28 | 2008-08-26 | Sharp Kabushiki Kaisha | Optical distance measuring apparatus |
US7889097B1 (en) | 2005-12-19 | 2011-02-15 | Wavetronix Llc | Detecting targets in roadway intersections |
US8248272B2 (en) | 2005-10-31 | 2012-08-21 | Wavetronix | Detecting targets in roadway intersections |
US7573400B2 (en) | 2005-10-31 | 2009-08-11 | Wavetronix, Llc | Systems and methods for configuring intersection detection zones |
GB2445767A (en) | 2005-11-24 | 2008-07-23 | Linda Long | Illuminated car park space indicator. |
CN2857132Y (en) | 2005-12-12 | 2007-01-10 | 上海高德威智能交通系统有限公司 | Central mode type vehicle information acquisition system |
US8242476B2 (en) * | 2005-12-19 | 2012-08-14 | Leddartech Inc. | LED object detection system and method combining complete reflection traces from individual narrow field-of-view channels |
US7855376B2 (en) | 2005-12-19 | 2010-12-21 | Institut National D'optique | Lighting system and method for illuminating and detecting object |
US7544945B2 (en) * | 2006-02-06 | 2009-06-09 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Vertical cavity surface emitting laser (VCSEL) array laser scanner |
CN101389895A (en) | 2006-02-20 | 2009-03-18 | 皇家飞利浦电子股份有限公司 | Portable illumination device |
ES2315078B1 (en) | 2006-03-06 | 2009-11-05 | Quality Informations System, S.A. | ESTIMATION SYSTEM FOR VEHICLE LOCATION IN PARKING. |
ITTO20060214A1 (en) | 2006-03-22 | 2007-09-23 | Kria S R L | VEHICLE DETECTION SYSTEM |
US7991542B2 (en) | 2006-03-24 | 2011-08-02 | Wavetronix Llc | Monitoring signalized traffic flow |
DE102006025020B4 (en) | 2006-05-26 | 2017-02-09 | PMD Technologie GmbH | displacement measuring system |
EP1901093B1 (en) | 2006-09-15 | 2018-11-14 | Triple-IN Holding AG | Capture of distance images |
RU2431774C2 (en) | 2006-09-25 | 2011-10-20 | Тони МЭЙЕР | Floodlight with fringe micro relief for observation system |
CN100561541C (en) | 2006-11-24 | 2009-11-18 | 鸿富锦精密工业(深圳)有限公司 | Traffic safety indicating system |
FR2910408B1 (en) | 2006-12-21 | 2009-09-11 | Valeo Vision Sa | NIGHT VISION METHOD ON ROAD. |
CA2674830A1 (en) * | 2007-01-05 | 2008-07-17 | Nestor, Inc. | Video speed detection system |
US9460619B2 (en) | 2007-01-17 | 2016-10-04 | The Boeing Company | Methods and systems for controlling traffic flow |
JP4893945B2 (en) * | 2007-02-06 | 2012-03-07 | 株式会社デンソー | Vehicle periphery monitoring device |
US7898433B2 (en) | 2007-03-29 | 2011-03-01 | Roberts Howard H | Traffic control system |
US7745771B2 (en) | 2007-04-03 | 2010-06-29 | Delphi Technologies, Inc. | Synchronous imaging using segmented illumination |
DE102007022372A1 (en) | 2007-05-07 | 2008-11-13 | Robot Visual Systems Gmbh | Method and device for determining the vehicle class of vehicles |
US7859432B2 (en) | 2007-05-23 | 2010-12-28 | Che Il Electric Wireing Devices Co., Ltd. | Collision avoidance system based on detection of obstacles in blind spots of vehicle |
US8319949B2 (en) | 2007-06-18 | 2012-11-27 | Leddartech Inc. | Method for detecting objects with visible light |
EP2158579B1 (en) | 2007-06-18 | 2014-10-15 | Leddartech Inc. | Lighting system with traffic management capabilities |
EP2160629B1 (en) | 2007-06-18 | 2017-04-26 | Leddartech Inc. | Lighting system with driver assistance capabilities |
IL184815A0 (en) | 2007-07-24 | 2008-11-03 | Elbit Systems Ltd | System and method for level of visibility determination and vehicle counting |
DE102007038973A1 (en) | 2007-08-17 | 2009-02-19 | GM Global Technology Operations, Inc., Detroit | Motor vehicle, has sensor e.g. parking space sensor, arranged on exterior mirror at passenger's side, and monitoring dead angle of exterior mirror, where monitoring is executable up to a definite speed of vehicle |
EP2048515B1 (en) | 2007-10-11 | 2012-08-01 | JENOPTIK Robot GmbH | Method for determining and documenting traffic violations at a traffic light |
US7640122B2 (en) | 2007-11-07 | 2009-12-29 | Institut National D'optique | Digital signal processing in optical systems used for ranging applications |
US8723689B2 (en) | 2007-12-21 | 2014-05-13 | Leddartech Inc. | Parking management system and method using lighting system |
EP2235561B8 (en) | 2007-12-21 | 2017-05-31 | Leddartech Inc. | Detection and ranging methods and systems |
ES2330499B1 (en) | 2007-12-31 | 2010-09-21 | Imagsa Technologies, S.A. | PROCEDURE AND SYSTEM FOR DETECTION OF MOVING OBJECTS. |
US7808401B1 (en) | 2008-01-11 | 2010-10-05 | Global Traffic Technologies, Llc | Light emitters for optical traffic control systems |
US8072346B2 (en) | 2008-01-11 | 2011-12-06 | Global Traffic Technologies, Llc | LED light bar for optical traffic control systems |
US7982631B2 (en) | 2008-06-16 | 2011-07-19 | Global Traffic Technologies, Llc | LED emitter for optical traffic control systems |
US7957900B2 (en) * | 2008-02-08 | 2011-06-07 | Gaurav Chowdhary | Tracking vehicle locations in a parking lot for definitive display on a GUI |
NL1035051C2 (en) | 2008-02-20 | 2009-08-24 | Markus Henricus Beuvink | Method, system and optical communication composition for obtaining traffic information. |
US7554652B1 (en) | 2008-02-29 | 2009-06-30 | Institut National D'optique | Light-integrating rangefinding device and method |
US8237791B2 (en) | 2008-03-19 | 2012-08-07 | Microsoft Corporation | Visualizing camera feeds on a map |
DE202008003979U1 (en) | 2008-03-20 | 2008-06-26 | Fraas, Alfred, Dipl.-Ing. | Measuring system for traffic flow analysis |
US8310353B2 (en) | 2008-03-31 | 2012-11-13 | Honda Motor Co., Ltd. | Vehicle blind spot detection and indicator system |
US7697126B2 (en) | 2008-04-02 | 2010-04-13 | Spatial Integrated Systems, Inc. | Three dimensional spatial imaging system and method |
EP2112465A1 (en) | 2008-04-24 | 2009-10-28 | Snap-on Equipment Srl a unico socio. | Parameter detection system for wheels |
DE202008007078U1 (en) | 2008-05-26 | 2008-09-04 | Signalbau Huber Gmbh | Video detection with PMD sensors |
US8249798B2 (en) * | 2008-05-29 | 2012-08-21 | Delphi Technologies, Inc. | Vehicle pre-impact sensing system having signal modulation |
JP5505761B2 (en) | 2008-06-18 | 2014-05-28 | 株式会社リコー | Imaging device |
US8073027B2 (en) | 2008-06-27 | 2011-12-06 | Institut National D'optique | Digital laser pulse shaping module and system |
US7635854B1 (en) | 2008-07-09 | 2009-12-22 | Institut National D'optique | Method and apparatus for optical level sensing of agitated fluid surfaces |
TWM353849U (en) | 2008-09-17 | 2009-04-01 | Jyh-Chiang Liou | Integrated driving assistance apparatus |
US7872572B2 (en) | 2008-09-17 | 2011-01-18 | International Business Machines Corporation | Method and system for vehicle mounted infrared wavelength information displays for traffic camera viewers |
NL2001994C (en) | 2008-09-19 | 2010-03-22 | Nedap Nv | PARKING DEVICE WITH AN AUTOMATIC VEHICLE DETECTION SYSTEM, AND METHOD FOR OPERATING AND MANAGING A PARKING DEVICE. |
US8044781B2 (en) | 2008-11-10 | 2011-10-25 | Volkswagen Ag | System and method for displaying a 3D vehicle surrounding with adjustable point of view including a distance sensor |
DE102008043880A1 (en) | 2008-11-19 | 2010-05-20 | Robert Bosch Gmbh | Lighting unit for a vehicle, vehicle and method therefor |
ATE504823T1 (en) | 2008-12-09 | 2011-04-15 | Fiat Ricerche | OPTICAL DEVICE FOR MOTOR VEHICLES FOR DETECTING THE CONDITION OF THE ROAD SURFACE |
EP2199806A1 (en) | 2008-12-18 | 2010-06-23 | Universität Zürich | Passive translational velocity measurement from optical information |
WO2010069002A1 (en) | 2008-12-19 | 2010-06-24 | Park Assist Pty Ltd | Method, apparatus and system for vehicle detection |
FR2940463B1 (en) | 2008-12-23 | 2012-07-27 | Thales Sa | PASSIVE IMAGING SYSTEM EQUIPPED WITH A TELEMETER |
JP5210233B2 (en) * | 2009-04-14 | 2013-06-12 | 日立オートモティブシステムズ株式会社 | Vehicle external recognition device and vehicle system using the same |
GB2469648A (en) | 2009-04-21 | 2010-10-27 | Clearview Traffic Group Ltd | Traffic counting device |
US8222591B2 (en) | 2009-07-07 | 2012-07-17 | Intersil Americas Inc. | Proximity sensors with improved ambient light rejection |
US8441622B2 (en) | 2009-07-28 | 2013-05-14 | Applied Concepts, Inc. | Lidar measurement device for vehicular traffic surveillance and method for use of same |
US8125622B2 (en) * | 2009-07-28 | 2012-02-28 | Applied Concepts, Inc. | Lidar measurement device with target tracking and method for use of same |
US8803967B2 (en) | 2009-07-31 | 2014-08-12 | Mesa Imaging Ag | Time of flight camera with rectangular field of illumination |
GB0913501D0 (en) | 2009-08-03 | 2009-09-16 | Hatton Traffic Man Ltd | Traffic control system |
GB2472793B (en) * | 2009-08-17 | 2012-05-09 | Pips Technology Ltd | A method and system for measuring the speed of a vehicle |
US8368559B2 (en) | 2009-08-26 | 2013-02-05 | Raytheon Company | Network of traffic behavior-monitoring unattended ground sensors (NeTBUGS) |
PL2306426T3 (en) | 2009-10-01 | 2013-05-31 | Kapsch Trafficcom Ag | Device for detecting vehicles on a traffic surface |
US9526149B2 (en) | 2009-11-03 | 2016-12-20 | Philips Lighting Holding B.V. | Object-sensing lighting network and control system therefor |
US8400511B2 (en) | 2009-12-04 | 2013-03-19 | Lockheed Martin Corporation | Optical detection and ranging sensor system for sense and avoid, and related methods |
CN102959599B (en) | 2009-12-22 | 2015-07-15 | 莱达科技股份有限公司 | Active 3D monitoring system for traffic detection |
DE102010012811B4 (en) * | 2010-03-23 | 2013-08-08 | Jenoptik Robot Gmbh | Method for measuring speeds and associating the measured speeds with appropriate vehicles by collecting and merging object tracking data and image tracking data |
EP2393295A1 (en) | 2010-06-07 | 2011-12-07 | Harman Becker Automotive Systems GmbH | Method and device for identifying driving situations |
CA2802487C (en) * | 2010-07-23 | 2016-06-28 | Leddartech Inc. | 3d optical detection system and method for a mobile storage system |
US9229106B2 (en) * | 2010-08-13 | 2016-01-05 | Ryan Dotson | Enhancement of range measurement resolution using imagery |
US8401231B2 (en) * | 2010-11-09 | 2013-03-19 | Biological Illumination, Llc | Sustainable outdoor lighting system for use in environmentally photo-sensitive area |
US8692690B2 (en) * | 2011-03-09 | 2014-04-08 | Xerox Corporation | Automated vehicle speed measurement and enforcement method and system |
US8908159B2 (en) | 2011-05-11 | 2014-12-09 | Leddartech Inc. | Multiple-field-of-view scannerless optical rangefinder in high ambient background light |
CA2839194C (en) | 2011-06-17 | 2017-04-18 | Leddartech Inc. | System and method for traffic side detection and characterization |
CA2865733C (en) | 2012-03-02 | 2023-09-26 | Leddartech Inc. | System and method for multipurpose traffic detection and characterization |
2013
- 2013-03-01 CA CA2865733A patent/CA2865733C/en active Active
- 2013-03-01 US US14/115,244 patent/US9235988B2/en not_active Ceased
- 2013-03-01 CA CA2998175A patent/CA2998175C/en active Active
- 2013-03-01 CA CA3112113A patent/CA3112113A1/en active Pending
- 2013-03-01 CA CA2998166A patent/CA2998166C/en active Active
- 2013-03-01 WO PCT/IB2013/051667 patent/WO2013128427A1/en active Application Filing
- 2013-03-01 US US15/867,995 patent/USRE48914E1/en active Active
- 2013-03-01 EP EP13755537.1A patent/EP2820632B8/en active Active
Also Published As
Publication number | Publication date |
---|---|
US20140159925A1 (en) | 2014-06-12 |
CA3112113A1 (en) | 2013-09-06 |
EP2820632B1 (en) | 2017-05-10 |
USRE48914E1 (en) | 2022-02-01 |
CA2865733A1 (en) | 2013-09-06 |
CA2998166A1 (en) | 2013-09-06 |
EP2820632A1 (en) | 2015-01-07 |
CA2998166C (en) | 2019-04-09 |
CA2865733C (en) | 2023-09-26 |
EP2820632B8 (en) | 2017-07-26 |
US9235988B2 (en) | 2016-01-12 |
EP2820632A4 (en) | 2015-08-12 |
WO2013128427A1 (en) | 2013-09-06 |
CA2998175A1 (en) | 2013-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2998175C (en) | System and method for multipurpose traffic detection and characterization | |
EP2721593B1 (en) | System and method for traffic side detection and characterization | |
US9946734B2 (en) | Portable vehicle monitoring system | |
US9472097B2 (en) | Roadway sensing systems | |
KR101967610B1 (en) | Multi lane monitoring system that can recognize vehicle velocity and license plate number of multi lane | |
US11914041B2 (en) | Detection device and detection system | |
US8294595B1 (en) | Speed detector for moving vehicles | |
JP7024610B2 (en) | Detection device and detection system | |
US20060111841A1 (en) | Method and apparatus for obstacle avoidance with camera vision | |
CN116601681A (en) | Estimating automatic exposure values of a camera by prioritizing objects of interest based on relevant inputs of a 3D map | |
JP2019207654A (en) | Detection device and detection system | |
TW201809670A (en) | Travelling vehicle measuring system and related method thereof measuring the travelling speed of a plurality of traffic vehicles on a road provided with at least one lane | |
JP2019207655A (en) | Detection device and detection system | |
KR101262044B1 (en) | Vehicle detection apparatus using thermal image and method thereof | |
US20240142609A1 (en) | Radar object classification method and system | |
KR102707641B1 (en) | Traffic information measurement and control system using radar | |
KR20240099675A (en) | Road traffic situation monitoring system and method | |
CN116468755A (en) | Vehicle speed detection method | |
KR20240006942A (en) | System for vehicle and event detection using ladar and camera | |
KR20230135733A (en) | Parking enforcement system using cctv camera and drone | |
RU2573629C2 (en) | Apparatus for controlling vehicular traffic flow in order to account for and control movement | |
Kamesh et al. | CHARACTERIZATION OF NUMBER PLATES THROUGH DIGITAL IMAGES | |
Ladiges et al. | Development of a new traffic enforcement product-service |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | Effective date: 20180314 |