EP0403193A2 - Traffic monitoring method and apparatus - Google Patents

Traffic monitoring method and apparatus

Info

Publication number
EP0403193A2
EP0403193A2 (application EP90306317A)
Authority
EP
European Patent Office
Prior art keywords
traffic
image
scene
cell
movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP90306317A
Other languages
English (en)
French (fr)
Other versions
EP0403193A3 (de)
Inventor
Neil Hoose
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University College London
Original Assignee
University College London
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University College London filed Critical University College London
Publication of EP0403193A2 publication Critical patent/EP0403193A2/de
Publication of EP0403193A3 publication Critical patent/EP0403193A3/de

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors

Definitions

  • This invention relates to a method of traffic monitoring.
  • Closed circuit television systems are increasingly being installed to allow the monitoring of critical sections of a highway. In some cases these act in tandem with a loop based incident detection system as in the HIOCC system described by J.F. Collins in "Automatic incident detection - experience with TRRL algorithm HIOCC", TRRL, U.K. 1983, Supplementary report 775.
  • An object of this invention is to mimic that ability in a computer, that is, to provide a procedure whereby some description, albeit qualitative, of the current traffic state over the image can be obtained directly from image data.
  • in an earlier paper the present inventor described a method of traffic monitoring which comprises forming at least first and second scene images of a scene in which traffic may be present, the images being formed at instants of time separated by a time interval, each scene image being an array of pixels, processing at least one of the first and second scene images to form an edge image representing the occurrence of edges in the scene, determining on the basis of the said edge image the presence or absence, and spatial location, of traffic in the scene, forming a difference image in which each pixel represents the difference between the intensity of the pixels of the first and second images at the corresponding point in the image, and determining from the distribution of pixels of different intensities in the difference image the presence or absence of movement in the scene, wherein the edge image and difference image are each subdivided into an array of cells, each edge cell and its related difference image cell corresponding to a given sub-area of the image constituting a scene image cell, the presence or absence of traffic and the presence or absence of movement being separately determined for each scene image cell.
  • a method as aforesaid characterised in that each of a plurality of scene image cells lying along the image of the line of a traffic lane is analysed as aforesaid, and the presence of predetermined traffic objects is detected on the basis thereof.
  • the above time interval between successive scene images should be short, preferably not more than 0.2 seconds.
  • images of the scene are repeatedly formed, for example by a video camera, at short intervals of, say, 80 ms, each image being examined for edges and each image being compared with its predecessor to form a difference image.
  • each of the edge and difference images is divided into a plurality of cells, and analysis is carried out separately for each cell.
  • a video camera image 512 x 512 pixels in size can be subdivided into square cells each 64 x 64 pixels in size.
  • although a rectangular grid of cells can be used, it is preferred to use a non-rectangular grid of cells, with the individual cells being of varying size depending on the position of the cell in the image.
  • the cells can be arranged along lines which reflect the direction within the image of a traffic lane along which the traffic being monitored is travelling, and the variation in the size of the cells can take account of the fact that the area of image required to represent a given area of the traffic lane decreases with increasing distance of the part of the lane concerned from the video camera or other image forming device.
  • the variation in the size of the cells can be such that each cell represents, at least approximately, the same area of lane.
  • cells are only analysed if more than a given percentage, say 50%, of their area represents road, as opposed to non-road parts of the image.
  • the cells themselves need not be square as suggested above, but may be rectangular or even some shape such as trapezoidal, a trapezoidal shape being advantageous in terms of subdividing the image of a lane, which image itself is trapezoidal except in the case of a lane perpendicular to the axis of vision of the camera.
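As a minimal sketch of the rectangular case described above, a 512 x 512 image can be divided into 64 x 64-pixel cells as follows (the function name and the slice-based cell representation are illustrative, not from the patent; lane-aligned or trapezoidal cells would instead need per-cell pixel masks):

```python
def make_square_cell_map(image_size=(512, 512), cell_size=64):
    """Divide an image into non-overlapping square cells, returning a
    list of (row_slice, col_slice) pairs, one pair per cell."""
    rows, cols = image_size
    cells = []
    for r in range(0, rows, cell_size):
        for c in range(0, cols, cell_size):
            cells.append((slice(r, r + cell_size), slice(c, c + cell_size)))
    return cells

cells = make_square_cell_map()   # an 8 x 8 grid, i.e. 64 cells
```

Each pair of slices can then be used to index the edge image and the difference image, so the two analyses see exactly the same sub-area.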
  • the above mentioned traffic objects preferably include at least gaps (a gap is a cell or a succession of cells along the line of a traffic lane in each of which no traffic is detected), and/or platoons (a platoon is a cell or a succession of cells along the line of a traffic lane in each of which moving traffic is detected), and/or blocks (a block is a cell or a succession of cells along the line of a traffic lane in each of which stationary traffic is detected).
  • the images may also be examined for the existence of other traffic objects, as is explained in more detail below.
  • a sensor e.g. a video camera
  • the actual length over which a digitised video image can be analysed depends upon several factors such as the relative position of the camera and the road, the camera's field of view and the size of the vehicles within the scene.
  • Occlusion can be considered to be the shadow area cast by a vehicle in a given position.
  • the amount of occlusion, or the size of the shadow area, varies with the size of the vehicle and its position relative to the camera. A large, tall vehicle will produce a large amount of occlusion compared to a 1.5 ton van. However, for a camera looking along a length of road the amount of occlusion caused by the latter will increase as it moves further away from the camera.
  • the values for Yoc, the distance at which partial occlusion occurs, will vary with the camera height, the traffic conditions and vehicle mix. Higher camera positions will increase this value.
  • the analysis used to estimate the camera range for resolution of vehicles can also be used to estimate the range at which movement would be detectable.
  • a digital image represents the pattern of light levels across the camera field of view. If the image data is represented as a three-dimensional graph where the vertical axis represents the brightness of a pixel at coordinate x, y, then bright regions will show as peaks and plateaux and dark regions will be seen as valleys and troughs. The steepness of a slope in this graph represents how rapidly the light intensity changes and, in general, a significant gradient corresponds to an edge in the image.
  • An image can be transformed so that the pixel intensity represents the size of the gradient at that location by performing a spatial convolution. The image thus produced is referred to herein as an edge image.
  • the spatial convolution referred to can be performed using, for example, a Laplacian operator as disclosed in "Digital Image Processing", by Gonzales & Wintz, published 1987 by Addison-Wesley Publishing Company (see pp 338-340).
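As a sketch of such a spatial convolution, using the 4-neighbour Laplacian kernel (the particular kernel, the absolute-value step and the border handling are illustrative choices; the patent simply cites Gonzales & Wintz for the operator):

```python
import numpy as np

def laplacian_edge_image(img):
    """Edge image: absolute response of the 4-neighbour Laplacian,
    so pixel intensity reflects how sharply brightness changes locally."""
    img = img.astype(float)
    out = np.zeros_like(img)
    # interior pixels only; the one-pixel border is left at zero
    out[1:-1, 1:-1] = np.abs(
        4 * img[1:-1, 1:-1]
        - img[:-2, 1:-1] - img[2:, 1:-1]   # pixels above and below
        - img[1:-1, :-2] - img[1:-1, 2:]   # pixels left and right
    )
    return out

flat = np.full((5, 5), 10)   # uniform region: no edges expected
step = np.zeros((5, 5))
step[:, 2:] = 10             # vertical step edge: strong response expected
```

A flat region produces zero response everywhere, while a step edge produces a band of high values along the edge, which is the increase in "edge content" the method looks for.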
  • In traffic scenes the edge image generally highlights vehicles as complex groups of edges. An individual vehicle will be made up of several regions of differing intensity which in turn are different from the background scene. In most cases the road area in the image has a relatively low edge content, chiefly road markings and kerb lines. The presence of vehicles can thus be detected by the increase in edge content within the road area.
  • Figure 4 shows two histograms in each of which the number of pixels in a given image cell whose intensity falls in a given range, is plotted against that intensity range.
  • the first histogram shows the case where there are few edges and the second shows the case where there are many edges.
  • the histograms are unsigned (plotted without reference to sign), i.e. no distinction is made between the cases where, in going from one pixel to the next, the intensity increases and the intensity decreases. However, a signed histogram could be used instead, and this would have the convenience of keeping the mean stationary.
  • the second histogram there is a much greater distribution of intensities, and this is the shape of histogram to be expected where vehicles are present.
  • the present invention also needs to detect movement. If we subtract an image taken at time t from an image taken at time t + Δt the differences will be due to four possible causes: movement of the camera, movement of objects within the scene, changes in lighting and electrical noise. For a fixed camera position subject to minimum vibration the first cause can be eliminated. If we choose Δt to be sufficiently small then changes in light levels in a real world scene will be negligible. This leaves differences due to moving objects to be differentiated from those due to noise.
  • the differences caused by moving objects are a result of regions of differing brightness covering or uncovering each other, e.g. a bright region moving over a dark one. If Δt is small then these differences will appear at the edges of the region. In real world images most edges are not simple steps but are sloped, and this is shown in Figure 5.
  • the first diagram in Figure 5 is a plot of intensity over a region of an image containing an edge, at two instants in time separated by an interval Δt.
  • the second diagram shows the result of subtracting one plot from the other. It can be seen that, as the distance moved increases, the size of the difference increases both in magnitude and in area.
  • An image, referred to herein as a difference image, can be created by subtracting, for each pixel, the intensity of the pixel at a time t + Δt from the intensity at a time t.
  • a histogram can then be constructed from the difference image, in which the number of pixels in a given image cell whose intensity falls in a given range, is plotted against that intensity range. Increasing movement causes the distribution of intensities to spread.
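A sketch of the difference image and a per-cell signed histogram (the bin layout, one bin per signed intensity value, is an illustrative choice):

```python
import numpy as np

def difference_image(frame_a, frame_b):
    """Signed per-pixel difference between two frames taken a short
    interval apart; cast to int so negatives are preserved."""
    return frame_b.astype(int) - frame_a.astype(int)

def cell_signed_histogram(diff, cell, lo=-255, hi=255):
    """Histogram of signed differences within one cell of the image,
    one unit-wide bin per possible difference value."""
    counts, _ = np.histogram(diff[cell].ravel(),
                             bins=hi - lo + 1, range=(lo, hi + 1))
    return counts

still = np.full((4, 4), 100)
diff = difference_image(still, still)   # static scene: all zeros
cell = (slice(0, 4), slice(0, 4))
counts = cell_signed_histogram(diff, cell)
```

For a static, noise-free cell every pixel lands in the zero bin; movement spreads the counts into the positive and negative tails, which is exactly the spreading the text describes.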
  • Figure 6 shows two signed histograms, the top histogram showing the distribution of signed differences due to noise, and the lower showing the histogram of differences when movement is taking place.
  • a histogram representing the edge content of the cell and a histogram representing the movement content of the cell then need to be analysed. This may be done in various ways. One way is to calculate the value of the variance for each histogram. The greater the edge content or movement content respectively, the greater will be the variance of the respective histogram. A variance above a predetermined threshold value is taken to represent the presence of vehicles and the presence of movement respectively. An alternative approach is to sum the area of the histogram above a predetermined positive intensity value and below the corresponding negative intensity value (this is for a signed histogram; for an unsigned histogram only the area above a predetermined positive value is required).
  • a summed area above a predetermined threshold value is taken to represent the presence of vehicles or the presence of movement, depending on which histogram is being considered.
  • the value of the variance or summed area, depending on which approach is being used, is referred to below as the edge parameter or movement parameter, as the case may be.
  • the threshold values, for either method of analysis, are designated below as T E for edges and T M for movement.
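Both analysis routes can be sketched as follows; the cutoff value and the choice between variance and tail area are left open by the text, so the constants here are illustrative:

```python
import numpy as np

def variance_parameter(cell_values):
    """Edge or movement parameter as the variance of the cell's values:
    more edge or movement content spreads the histogram and raises it."""
    return float(np.var(cell_values))

def tail_area_parameter(cell_values, cutoff):
    """Edge or movement parameter as the number of pixels whose value
    lies beyond +/-cutoff (for an unsigned histogram, use only the
    positive tail)."""
    return int(np.sum(np.abs(np.asarray(cell_values)) > cutoff))

quiet = np.array([0, 1, -1, 0, 1, -1])          # noise-like cell
busy = np.array([-60, 40, -35, 70, -20, 55])    # cell with edges/movement
```

Either parameter is then compared against T E or T M as appropriate; a value above the threshold represents the presence of vehicles or of movement.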
  • V1 represents the proportion of movement per unit area and V2 the proportion of movement normalised for the edge content in a cell. These values allow comparison between adjacent cells. V2 gives some measure of the speed of the traffic.
  • the movement and edge parameters are again compared with the threshold values but this time their closeness to the threshold value is assessed. If the movement parameter is significantly more than T M the cell state is adjudged to be "Moving". If this is not so the edge parameter is examined to see if it is within a set limit of T E and if it is the cell state is judged to be "None". If neither of these conditions is satisfied the cell state is undefined.
  • any residual undefined cells are examined for their value of V1. If this exceeds a threshold value the cell is designated as moving, otherwise it is set to none. “None” is a representation that no traffic is present in the cell, “Stop” is a representation that traffic is present and stopped, and “Moving” is a representation that traffic is present and moving.
  • Any cell classified as “Stop” is checked again to confirm the status by closer analysis of the relative values of the parameters and the thresholds.
  • the basis for this is that a “Stop” state is the one which generally triggers an alarm or action, and it is clearly desirable to ensure the correctness of the analysis before this occurs.
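The first-pass decision implied by the three states can be sketched as below; the confirmation pass for “Stop” cells, the closeness-to-threshold margins and the V1 fallback described above are deliberately omitted from this sketch:

```python
def classify_cell(edge_param, movement_param, T_E, T_M):
    """First-pass cell state: edge content above T_E implies traffic
    is present; movement content above T_M implies it is moving."""
    traffic_present = edge_param > T_E
    moving = movement_param > T_M
    if traffic_present and moving:
        return "Moving"
    if traffic_present:
        return "Stop"
    return "None"
```

For example, a cell with high edge content but a movement parameter below T M comes out as "Stop", which is the case the refinement pass then re-checks.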
  • the threshold values are preferably generated by carrying out a "training" run. In this, a number of pairs of images, say 50 or 100 pairs, are analysed in the manner described above and values generated for the movement and edge parameter of each cell. No state definitions are performed and the training run is carried out when the traffic is very light and is moving freely.
  • To determine the value of T M, a histogram is constructed of the frequency with which the movement parameter lies in a given range. The mode value of the histogram is determined and T M is set at a value slightly greater than the mode value.
  • T E is determined differently. What is done here is to consider for each cell only those images in which the value of the movement parameter is less than or equal to the mode value, i.e. for which it is assumed that no movement is taking place. Since the traffic is freely moving it follows that one is then only considering, for each cell, images where no vehicles are present. These cell images are examined and the maximum value of the edge parameter determined for these cells. The value of T E is then set at this value.
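A sketch of that training computation for one cell; the 10-bin histogram and the "slightly greater than the mode" margin of 10% are illustrative assumptions, not values from the patent:

```python
import numpy as np

def train_thresholds(movement_params, edge_params, mode_margin=1.1, bins=10):
    """From a light-traffic training run over one cell:
    T_M is set just above the mode of the movement-parameter histogram;
    T_E is the maximum edge parameter over images judged movement-free."""
    movement_params = np.asarray(movement_params, dtype=float)
    counts, edges = np.histogram(movement_params, bins=bins)
    k = int(np.argmax(counts))
    mode_value = 0.5 * (edges[k] + edges[k + 1])   # bin-centre estimate
    T_M = mode_margin * mode_value
    # keep only images at or below the mode: assumed to show an empty road
    quiet = movement_params <= mode_value
    T_E = float(np.max(np.asarray(edge_params, dtype=float)[quiet]))
    return T_M, T_E

moves = [1.0, 1.0, 1.0, 1.0, 9.0]      # one image with a passing vehicle
edge_vals = [2.0, 3.0, 2.0, 2.0, 50.0]
T_M, T_E = train_thresholds(moves, edge_vals)
```

The single high-movement image (parameter 9.0, edge parameter 50.0) is excluded from the T E calculation, so the edge threshold reflects only the empty-road background.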
  • a means can be provided whereby the values of T M and T E can be displayed to an operator and the values altered if desired.
  • a number of more complex "objects” can then be defined on the basis of the three basic "objects”. Two important ones are: QUEUE: BLOCK followed by a PLATOON with no GAP in between them. WAVE or HUMP: PLATOON followed by a BLOCK followed by either a GAP or another PLATOON.
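A sketch of deriving the basic objects by run-length encoding a lane's cell states, and of spotting a QUEUE (after run-length encoding, "BLOCK followed by a PLATOON with no GAP in between" reduces to adjacency of the two runs); the names are illustrative:

```python
def basic_objects(cell_states):
    """Run-length encode cell states along a lane into
    (object_type, length_in_cells) pairs:
    None -> GAP, Moving -> PLATOON, Stop -> BLOCK."""
    label = {"None": "GAP", "Moving": "PLATOON", "Stop": "BLOCK"}
    objects = []
    for state in cell_states:
        kind = label[state]
        if objects and objects[-1][0] == kind:
            objects[-1] = (kind, objects[-1][1] + 1)   # extend current run
        else:
            objects.append((kind, 1))                  # start a new run
    return objects

def queue_positions(objects):
    """Indices where a BLOCK is immediately followed by a PLATOON."""
    return [i for i in range(len(objects) - 1)
            if objects[i][0] == "BLOCK" and objects[i + 1][0] == "PLATOON"]

lane = ["Moving", "Moving", "Stop", "Stop", "Moving", "None"]
objs = basic_objects(lane)
```

A WAVE or HUMP could be found the same way, by matching a PLATOON run, then a BLOCK run, then either a GAP or another PLATOON.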
  • V1 and V2 can be compared between adjacent cells. If the difference in these values between adjacent cells exceeds a threshold value then an error in cell state classification is deemed to have occurred.
  • the data of the cell thus identified is then re-examined and the cell is re-classified according to its own data and the type of the object currently being considered.
  • the object classification for that lane is then repeated. This process continues recursively until a stable object classification is achieved.
  • Figure 8 shows the identification of traffic "objects" in four exemplary lengths of lane.
  • the system of the invention may be provided with means for displaying to an operator the presence of the traffic "objects”.
  • Figure 9 shows in summary the steps involved in the procedure of the invention. On the left, starting from the bottom, are the steps when the image is divided into a rectangular grid of cells, and on the right are the steps when the image is divided into cells whose size represents an equal area of traffic lane.
  • the division of the image into cells is referred to below as a cell map.
  • the letters P, G and Q stand for "platoon”, “gap” and “queue” respectively.
  • the analysis may be further refined to give a measure of the speed and acceleration of the travel or formation of the "object”. Also, an indication can be given of the location where each "object" starts and finishes.
  • the state of the traffic in each lane can be found by an analysis of the objects.
  • a lane is described by:
  • the spatial occupancy expresses the percentage of the length of a lane which is occupied by vehicles.
  • the distribution factor is a measure of the extent to which traffic is bunched together in a lane.
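With cells sized to cover roughly equal lengths of lane, spatial occupancy follows directly from the cell states; this sketch (names illustrative) simply counts occupied cells:

```python
def spatial_occupancy(cell_states):
    """Percentage of the lane's cells containing traffic, i.e. whose
    state is 'Stop' or 'Moving' rather than 'None'."""
    occupied = sum(1 for s in cell_states if s in ("Stop", "Moving"))
    return 100.0 * occupied / len(cell_states)
```

The distribution factor would additionally depend on how the occupied cells are grouped along the lane, which this sketch does not attempt.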
  • Each object type has an associated weight factor which is used in calculating a weight for each object.
  • Object weight: W = (factor x length) x (1 + number of “Stop” cells).
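The weight formula can be sketched directly; the per-type factors below are illustrative placeholders, since the text gives only the relative ordering (e.g. QUEUE above PLATOON):

```python
# illustrative per-type factors: only the relative ordering matters here
FACTORS = {"GAP": 0, "PLATOON": 1, "BLOCK": 2, "QUEUE": 3}

def object_weight(object_type, length_cells, n_stop_cells):
    """W = (factor x length) x (1 + number of 'Stop' cells)."""
    return (FACTORS[object_type] * length_cells) * (1 + n_stop_cells)
```

The (1 + stops) term means that, for a given length, adding stopped cells raises an object's weight, matching the composite QUEUE-plus-stop-cells curves of Figure 8a.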
  • QUEUE is more significant than PLATOON
  • a message giving lane number, spatial occupancy, distribution factor and most significant object for each lane is then displayed to the operator.
  • Figure 8a is a graph showing the way in which the value of W increases for various traffic objects with increasing numbers of cells in the object.
  • the graph includes three composite objects made up of a QUEUE plus 1, 2 or 4 stop cells.
  • the system of the present invention may be connected to means for taking action to control the traffic on the basis of the detection of particular traffic objects.
  • Two particular applications of the invention are in the detection of queues and in the detection of incidents
  • the term "incidents” is a recognised term in the study of traffic and denotes an abnormal event which has consequences for traffic flow.
  • Such an abnormal event might be, for example, an accident, a breakdown or the presence of debris on the road.
  • Data from the monitoring system of the present invention can be fed, for example, to roadside warning systems to provide drivers with an appropriate message concerning the state of the traffic ahead, to signals (for example traffic lights) controlling access to a road so as, for example, to prevent vehicles entering a road which was over-congested, or to a route guidance system on board a vehicle.
  • the traffic "objects" detected by each camera may then be combined with one another to provide composite "objects” representing the state of the whole road or at least of larger parts of it than the individually observed sections.
  • the "objects" detected by each of the individual cameras can be compared with one another and the operator can then be provided with information relating only to the most significant object, or indeed, with the visual image from just the cameras which has detected the most significant objects. In this way the operator is freed from having to try to observe simultaneously the images provided by a possibly large number of cameras.
  • FIG. 11 shows hardware which can be used for the present invention.
  • the software presents the operator with an option menu at which he is asked to specify whether what is to follow is a training run. If it is, the first task is to draw a cell map defining the cells into which the image of the scene is to be divided. If it is not a training run then the system is loaded with cell map data previously provided.
  • the system then proceeds to grab, digitise and store two successive video images, denoted as image A and image B, the images showing the scene at times separated from one another by approximately 0.2 seconds.
  • a difference image is then formed by subtracting image A from image B, and an edge convolution is performed on image A to form an edge image.
  • a histogram is calculated of the difference image, from which the movement parameter for the cell is determined, and a histogram of the edge image is calculated, from which the edge parameter for the cell is calculated.
  • the process passes to a "Break loop?" option which, if exercised, then causes the system to set the threshold values T E and T M for the edge and movement parameters. If the Break loop option is not exercised the process of analysing a pair of images is repeated. Typically, as mentioned above, 50 or 100 pairs of images are analysed before the data thus generated is used to set the above-mentioned thresholds.
  • the steps shown in Figure 10b are performed.
  • the cell state is determined, depending on the value of the movement and edge parameters in relation to the movement and edge threshold values. Once this has been done for each cell in the cell map then for each traffic lane in the cell map the line of cells covering that lane is analysed to ascertain the existence of the traffic "objects" described above.
  • the system determines the furthest downstream cell (which gives the start point of the object concerned), the length of the object in cells, the number of "Stop" cells which the object contains, the object weight, and the values of V1 and V2 as defined above. Once this has been done for each object the process then calculates the values for spatial occupancy and distribution factor (see above) for the lane. The overall process just described is repeated for each traffic lane in the cell map.
  • the data regarding the traffic objects is then output to a file, from where information as to the traffic objects detected in each lane is displayed to the operator.
  • the relevant data is sent from the file of object data to whatever equipment is carrying out the control.
  • the invention can be conveniently implemented on a commercially available image processing subsystem connected to a 80286 - based microcomputer.
  • the subsystem digitises and processes pixel data at high speed while the host microcomputer controls the programming of the subsystem and processes the cell data.
  • the procedure has been implemented using as the image processing subsystem a Series 151 Image Processor, a subsystem manufactured by Imaging Technology Inc. of Woburn, Massachusetts, U.S.A.
  • the subsystem is modular and comprises a set of cards connected by a VME bus and a proprietary video bus. Pixel data is transferred between the different function and memory cards via the video bus.
  • An example of the subsystem comprises five cards, a digitiser and controller card, two framestore cards each with 1MB of video storage, an arithmetic/logical processing card and a histogramming card.
  • the microcomputer host controls the subsystem via a card that connects the host bus to the subsystem VME bus. Instructions can be sent to, and data received from, the subsystem by this route.
  • the subsystem processes the image data up to the histogram stage with the host issuing the appropriate instructions for this lower level stage of the procedure. Analysis of the cell histograms and subsequent stages of the procedure are carried out within the host microcomputer.
  • One application for which the present invention may be regarded as particularly suited is to detect queues of stationary or slow moving traffic on high speed roads such as freeways.
  • the presence of slow moving or stationary traffic on this type of road represents a serious hazard due to the danger of fast moving vehicles running into the rear of the queue.
  • the build up of traffic can be very rapid with the tail end of the queue moving back along the road extremely quickly.
  • high speed roads generally have some constraints which simplify the analysis of the images.
  • the underlying background scene is relatively simple. Therefore, the presence of a vehicle can be inferred from an increase in scene complexity, i.e. an increase in the number of edges found using an edge detection operator.
  • the pattern of movement within the scene is also fairly straightforward, particularly if attention is restricted to a single carriageway. Movement is then in one general direction and useful information can be derived from the magnitude of such movement. If the road has a high speed limit then the detection of slow traffic is even easier.
  • a key feature of the approach described above is that its purpose is to provide a qualitative description of the spatial distribution of moving and stationary traffic within a scene.
  • the technique does not attempt to identify individual vehicles nor does it seek to follow the vehicle "clusters" that are identified as they move across the image. Instead, the strategy is to mimic the way in which a human observer might describe the pattern of traffic when viewing a CCTV monitor.
  • the first approach involves a more "microscopic" analysis of the image followed by a reconstruction of the traffic situation.
  • a more direct approach, with the aim of providing a more qualitative description of the traffic, has been adopted.
  • Another important feature of the approach is that the image interpretation is carried out on the spatial difference (i.e. gradient) and the temporal difference transforms of the scene and not from grey level values. This reduces the influence of changes in the light distribution across the scene caused either by changes in the ambient light or the action of the auto-iris. The effect of this is to increase the robustness of the technique over time and to widen the range of conditions under which it will perform satisfactorily.
  • Describing local regions of the image i.e. "cells" in terms of two parameter values is both the first data reduction step and the first abstraction from pixel data. These values are still characteristics of the image and the next stage is to interpret these values in a way that relates to vehicles and traffic. This is done by a comparison with predetermined background values for the parameters which specify a "state" for each "cell". The assumption that these states relate to vehicular movement is based on considering only those cells which are over the traffic lanes, which will be true at least for roads where pedestrian movement is prohibited.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
EP19900306317 1989-06-16 1990-06-11 Traffic monitoring method and apparatus Withdrawn EP0403193A3 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB8913946 1989-06-16
GB898913946A GB8913946D0 (en) 1989-06-16 1989-06-16 Method and apparatus for traffic monitoring

Publications (2)

Publication Number Publication Date
EP0403193A2 true EP0403193A2 (de) 1990-12-19
EP0403193A3 EP0403193A3 (de) 1991-12-11

Family

ID=10658600

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19900306317 Withdrawn EP0403193A3 (de) 1989-06-16 1990-06-11 Verkehrsüberwachungsverfahren und -vorrichtung

Country Status (2)

Country Link
EP (1) EP0403193A3 (de)
GB (1) GB8913946D0 (de)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0505858A1 (de) * 1991-03-19 1992-09-30 Mitsubishi Denki Kabushiki Kaisha Moving body measuring device and image processing arrangement for measuring traffic flow
FR2679682A1 (fr) * 1991-07-22 1993-01-29 Inrets Method for detecting changes in the occupancy state of a lane
US5296852A (en) * 1991-02-27 1994-03-22 Rathi Rajendra P Method and apparatus for monitoring traffic flow
WO1996007937A1 (de) * 1994-09-03 1996-03-14 Robert Bosch Gmbh Vorrichtung und verfahren zur erkennung von objekten
WO1996023290A1 (en) * 1995-01-24 1996-08-01 Minnesota Mining And Manufacturing Company Automated lane definition for machine vision traffic detector
US5999635A (en) * 1996-01-12 1999-12-07 Sumitomo Electric Industries, Ltd. Traffic congestion measuring method and apparatus and image processing method and apparatus
US6188778B1 (en) 1997-01-09 2001-02-13 Sumitomo Electric Industries, Ltd. Traffic congestion measuring method and apparatus and image processing method and apparatus
WO2001033503A1 (en) * 1999-11-03 2001-05-10 Cet Technologies Pte Ltd Image processing techniques for a video based traffic monitoring system and methods therefor
ES2169657A1 (es) * 2000-04-14 2002-07-01 Univ De Valencia Inst De Robot Sistema de deteccion automatica de incidentes de trafico en entornos urbanos.
EP1262933A1 (de) * 1999-12-27 2002-12-04 Sumitomo Electric Industries, Ltd. Image processing device, image processing method, and vehicle monitoring system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0277050A1 (de) * 1987-01-14 1988-08-03 Association Pour La Recherche Et Le Developpement Des Methodes Et Processus Industriels (Armines) Method for determining the path of a body movable on a road, and device for using this method
WO1988006326A1 (en) * 1987-02-17 1988-08-25 Regents Of The University Of Minnesota Vehicle detection through image processing for traffic surveillance and control

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
2ND INTERNATIONAL CONFERENCE ON ROAD TRAFFIC MONITORING, INSTITUTION OF ELECTRICAL ENGINEERS, February 9, 1989, pages 94-98; HOOSE: 'Queue detection using computer image processing' *
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, vol. 38, no. 3, 1989, NEW YORK US, pages 112-122; IÑIGO: 'Application of machine vision to traffic monitoring and control' *
SYSTEMS & COMPUTERS IN JAPAN, vol. 17, no. 1, February 1986, NEW YORK US, pages 62-72; YASUO KUDO: 'Traffic flow measurement system using image processing' *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5296852A (en) * 1991-02-27 1994-03-22 Rathi Rajendra P Method and apparatus for monitoring traffic flow
US5691902A (en) * 1991-03-09 1997-11-25 Mitsubishi Denki Kabushiki Kaisha Moving body measuring device and an image processing device for measuring traffic flows
EP0505858A1 (de) * 1991-03-19 1992-09-30 Mitsubishi Denki Kabushiki Kaisha Moving body measuring device and image processing arrangement for measuring traffic flow
US5313295A (en) * 1991-03-19 1994-05-17 Mitsubishi Denki Kabushiki Kaisha Moving body measuring device and an image processing device for measuring traffic flows
US5396283A (en) * 1991-03-19 1995-03-07 Mitsubishi Denki Kabushiki Kaisha Moving body measuring device and an image processing device for measuring traffic flows
US5598338A (en) * 1991-03-19 1997-01-28 Mitsubishi Denki Kabushiki Kaisha Device for detecting the existence of moving bodies in an image
FR2679682A1 (fr) * 1991-07-22 1993-01-29 Inrets Method for detecting changes in the occupancy state of a lane
WO1996007937A1 (de) * 1994-09-03 1996-03-14 Robert Bosch Gmbh Device and method for recognizing objects
US5621645A (en) * 1995-01-24 1997-04-15 Minnesota Mining And Manufacturing Company Automated lane definition for machine vision traffic detector
WO1996023290A1 (en) * 1995-01-24 1996-08-01 Minnesota Mining And Manufacturing Company Automated lane definition for machine vision traffic detector
US5999635A (en) * 1996-01-12 1999-12-07 Sumitomo Electric Industries, Ltd. Traffic congestion measuring method and apparatus and image processing method and apparatus
US6075874A (en) * 1996-01-12 2000-06-13 Sumitomo Electric Industries, Ltd. Traffic congestion measuring method and apparatus and image processing method and apparatus
US6188778B1 (en) 1997-01-09 2001-02-13 Sumitomo Electric Industries, Ltd. Traffic congestion measuring method and apparatus and image processing method and apparatus
WO2001033503A1 (en) * 1999-11-03 2001-05-10 Cet Technologies Pte Ltd Image processing techniques for a video based traffic monitoring system and methods therefor
CN100533482C (zh) * 1999-11-03 2009-08-26 CET Technologies Pte Ltd Image processing techniques for a video-based traffic monitoring system and methods therefor
EP1262933A1 (de) * 1999-12-27 2002-12-04 Sumitomo Electric Industries, Ltd. Image processing device, image processing method, and vehicle monitoring system
EP1262933A4 (de) * 1999-12-27 2004-03-31 Sumitomo Electric Industries Image processing device, image processing method, and vehicle monitoring system
ES2169657A1 (es) * 2000-04-14 2002-07-01 Univ De Valencia Inst De Robot System for automatic detection of traffic incidents in urban environments

Also Published As

Publication number Publication date
GB8913946D0 (en) 1989-08-02
EP0403193A3 (de) 1991-12-11

Similar Documents

Publication Publication Date Title
EP0567059B1 (de) Object recognition system using image processing
US5434927A (en) Method and apparatus for machine vision classification and tracking
US11380105B2 (en) Identification and classification of traffic conflicts
US6999004B2 (en) System and method for vehicle detection and tracking
US6404455B1 (en) Method for tracking entering object and apparatus for tracking and monitoring entering object
CN106373430A (zh) Computer-vision-based intersection traffic early-warning method
CN103021175A (zh) Method and device for video detection of pedestrians running red lights, based on the DaVinci architecture
EP0403193A2 (de) Traffic monitoring method and apparatus
CN112349087B (zh) Visual data input method based on holographic perception of intersection information
JP2003030776A (ja) Object detection system and method
CN106980810A (zh) Over-height vehicle approach detection method based on incremental image contours in a sensitive region
KR20040051778A (ko) Incident detection method
CN112906428B (zh) Method for obtaining an image detection area and determining space usage
CN112633228A (zh) Parking detection method, apparatus, device, and storage medium
KR100532058B1 (ko) Method and apparatus for extracting traffic information using camera calibration
Hilbert et al. A sensor for control of arterials and networks
Siyal et al. Image processing techniques for real-time qualitative road traffic data analysis
Hoose Queue detection using computer image processing
JPH11353581A (ja) Device and method for daytime vehicle type discrimination
KR101930429B1 (ko) Standardized incident monitoring system and method for analysing incident situations using the same
CN115402322A (zh) Intersection driving assistance method, system, electronic device, and storage medium
JP3771729B2 (ja) Traffic flow measurement system
JP4697761B2 (ja) Queue detection method and queue detection device
JP3508320B2 (ja) Monitoring system
Hoose Computer vision as a traffic surveillance tool

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH DE ES FR GB GR IT LI NL

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE CH DE ES FR GB GR IT LI NL

17P Request for examination filed

Effective date: 19920520

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19940104