GB2448617A - Traffic Detector System - Google Patents


Info

Publication number
GB2448617A
GB2448617A (application GB0807243A)
Authority
GB
United Kingdom
Prior art keywords
detector system
image
image processing
detection
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0807243A
Other versions
GB2448617B (en)
GB0807243D0 (en)
Inventor
Richard Diggory Jenkins
Current Assignee
AGD Systems Ltd
Original Assignee
AGD Systems Ltd
Priority date
Filing date
Publication date
Application filed by AGD Systems Ltd
Publication of GB0807243D0
Publication of GB2448617A
Application granted
Publication of GB2448617B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0075
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A traffic detector system comprises an imaging means (40, 42) directed generally downwardly to observe a detection zone and arranged to acquire spaced images from respective spaced viewpoints. An image processing device processes the spaced images acquired by said imaging means to detect the presence of an object of appreciable height in said detection zone. The image processing device acquires respective image frames (44, 46) each containing an array of pixels, and processes each of said arrays on a block-by-block basis to obtain a variance value which is dependent on the variation of the pixel values within the respective left and right blocks. The image processing device compares the values of corresponding pixels in the left and right blocks and outputs a detection active signal for that block if the differences between the pixels from the two blocks exceed a threshold that depends on said variance value. The detection active signals are monitored and a detection signal is output if more than a preset proportion of blocks show detection active signals.

Description

TRAFFIC DETECTOR SYSTEM
This invention relates to a traffic detector system and associated methods, and in particular, to systems and methods for discriminating objects of appreciable height on or adjacent to a carriageway or walkway.
There is a growing trend in traffic control systems to rely on passive image-based detection to determine the presence of pedestrians or vehicles in a detection zone (otherwise referred to as "objects of appreciable height"), as image-based detection provides useful advantages over traditional loop detectors, Doppler detectors, radar detectors, etc. However, one of the limitations of current passive image-based detection, both for pedestrian and vehicle detection, is its susceptibility to false detection caused by shadows within the detection zone. Headlamps illuminated at night and other surface features such as leaves and litter can also create images on the ground that give rise to false detections. Thus a puddle or pile of leaves in a detection zone may be seen by an existing image-based detection system as a person, and thus cause an inappropriate signal to be sent to a pedestrian crossing control.
We have realised that the potential ambiguities caused by such effects can be discriminated or resolved by using a stereoscopic image capture system to reject such false detections.
Accordingly, in one aspect, this invention provides a traffic detector system for detecting the presence of one or more objects of appreciable height in a detection zone on or adjacent a carriageway or walkway, said detector system in use being disposed in an elevated position relative to the detection zone, said detector system comprising: an imaging means directed generally downwardly to observe the detection zone and arranged to acquire spaced images from respective spaced viewpoints, and image processing means for processing the spaced images acquired by said imaging means to detect the presence of an object of appreciable height in said detection zone, wherein said image processing means acquires respective image frames each containing an array of pixels, said image processing means further including means for processing at least part of at least one of said arrays to obtain a variance value which is dependent on the variation of the pixel values within said array or a part thereof, and means for comparing the values of corresponding pixels in said arrays and outputting a detection signal for all or part of the array if the differences between the pixels from the two frames exceed a threshold that depends on said variance value.
In this manner, the spaced images are processed to allow a degree of depth perception within a kerbside scene, allowing rejection of false positive detection of ground images.
The imaging means may comprise a common sensor with suitable optics to acquire said spaced images; more conveniently, however, it may comprise two spaced sensors.
Preferably, said image processing means is operable to apply an optical transformation to the acquired images to at least partially correct for optical distortion therein. Thus many suitable sensors or imaging means employ a wide angle lens which produces a pronounced fish-eye effect, and this can be corrected by applying a suitable inverse transformation (here a pincushion distortion) to the captured image.
Preferably said image processing means is operable to at least partially compensate for registration differences between said viewpoints. We have found that nominally identical image sensors can typically differ by as much as 5% of the field of view. The registration differences may comprise differences in linear or angular position or in scale.
Preferably the image processing means is operable to perform at least a partial perspective correction function. In this manner the image of the zone plane effectively becomes a perpendicular (top plan) view, so that the spaced images are registered to the zone as delineated on the ground. Thus, for example, a square shape on the ground is restored to a square in the corrected image.
Preferably, the image processing means is operable to process the acquired images to derive data representative of a height map of objects in the detection zone. This therefore effectively allows the system to detect whether there are any objects of appreciable height in the zone. Alternatively, in a more basic arrangement, the system may simply look for significant differences in the two images.
Preferably, the image processing means is operable to filter out objects or artefacts in the height map which are below a preset minimum height or which are of apparent negative height. Thus, for example, slight variations in the level of the ground, leaves, low-level litter, etc. can be filtered out to leave just those objects of appreciable height in the map.
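As a minimal sketch of this filtering step (numpy assumed; the 0.2 m threshold is illustrative, chosen to match the roughly 200 mm ground-plane variation mentioned later in the description, and `filter_height_map` is a hypothetical name, not part of the patent):

```python
import numpy as np

def filter_height_map(height_map: np.ndarray, min_height: float = 0.2) -> np.ndarray:
    """Zero out artefacts below a preset minimum height or of apparent
    negative height, leaving only objects of appreciable height."""
    return np.where(height_map >= min_height, height_map, 0.0)

# Leaves (~0.05 m) and a puddle reflection (apparent -0.1 m height) are
# suppressed; a pedestrian (~1.7 m) survives the filter.
heights = np.array([[0.05, -0.1],
                    [1.70, 0.15]])
filtered = filter_height_map(heights)
```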
In order to enhance discrimination, the image processing means may be operable to apply edge detection to one or both of said acquired images or to the data derived therefrom.
The invention extends to a pedestrian crossing control including a traffic detector system as described above. Thus the traffic detector system may be used in conjunction with an image based detection system of the type described in the introduction to the specification.
In another aspect, this invention provides a method for detecting and/or discriminating one or more objects of appreciable height in a detection zone on a carriageway or walkway, which comprises acquiring from an elevated position on or adjacent to a carriageway or walkway respective images from spaced viewpoints, and processing said spaced images thereby to detect the presence of an object of appreciable height in said detection zone.
Whilst the invention has been described above, it extends to any inventive combinations of the features set out above or in the following description.
The invention may be performed in various ways and specific embodiments will now be described by way of example only, reference being made to the accompanying drawings in which:-
Figure 1 is a schematic view of an idealised detection zone configuration;
Figure 2 illustrates the relationship between binocular parallax, amount of depth discrimination, target height and lens separation distance;
Figure 3 represents pixel resolution distance at different ranges from the detector;
Figure 4 is a block diagram of an algorithm implementation for use in an embodiment of this invention;
Figures 5(a) and (b) show an image before and after perspective correction respectively;
Figures 6(a) and (b) show schematically horizontal differencing to provide a three-dimensional zone model;
Figure 7 is an example of a typical Sobel filter;
Figure 8 is a schematic diagram representing height perception by horizontal differencing of an extracted shape;
Figure 9 illustrates schematically the axial distortion due to height measurement along the detection axis, and correction thereof;
Figure 10 is a schematic block diagram of another embodiment of algorithm implementation;
Figure 11 is a schematic diagram of a further embodiment of algorithm implementation; and
Figure 12 is a schematic block diagram of an embodiment in which the detection threshold is adjusted according to the variance or contrast in the image.
Referring initially to Figures 1 to 3, in a typical installation a pedestrian detector 10 in accordance with this invention will be mounted on a pole 12 at a height of approximately 3 metres, viewing objects on the ground up to a range of 4-5 metres from the base of the pole.
As seen in Figure 2, the amount of depth discrimination due to binocular parallax is related to target height and the lens separation distance. Thus, with a detector 10 at a height of 3 metres and a lens separation "d", an object at a height of 1 metre will generate a parallax separation of d/2 between the two images, whereas, with the same camera separation, a target at a height of 2 metres will generate a parallax separation of 2d.
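The figures above follow from similar triangles: projecting the top of a target of height h through each lens (cameras at height H, separated by d) onto the ground plane gives a separation of d·h/(H-h). A one-line sketch (the function name is illustrative):

```python
def parallax_separation(d: float, camera_height: float, target_height: float) -> float:
    """Parallax separation between the two images for a target of height h
    seen from cameras at height H with lens separation d:
    separation = d * h / (H - h), by similar triangles."""
    return d * target_height / (camera_height - target_height)
```

With the detector at 3 metres, a 1 metre target gives d/2 and a 2 metre target gives 2d, matching the text.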
Figure 3 illustrates schematically the ground pixel resolution figures obtained from a conventional (monocular) detector (AGD 625). At a camera height of 3 metres, a tactile tile 16 on the ground measuring 400mm x 400mm will occupy a region of 27 x 27 pixels at a range of 0.5m, a region of 23 x 20 pixels at a range of 2.5m and a region of 17 x 10 pixels at a range of 5m.
For accurate digital processing of the binocular images to derive a height map or to discriminate objects of appreciable height - e.g. above 0.5m - we have found that there needs to be a parallax separation of typically 4 or more pixels between the two images. The parallax separation, as indicated above, is dependent on the separation of the cameras, the height of the target and the range of the target. Given a pole height of 3 metres and a maximum range of 5 metres, our studies have shown that a camera separation distance of 200mm would provide adequate separation to detect targets greater than 1 metre in height within a typical kerbside zone. It will of course be appreciated that these figures are given by way of example and smaller lens separation distances may be possible with cameras with higher resolution sensors.
Referring now to Figure 4, there is shown a schematic diagram of a first embodiment of detection system. In this embodiment, left- and right-hand image sensors 18, 20 respectively are mounted on a pole and supply images to an image acquisition module 22 which synchronises operation of the sensors so that the frames from each are synchronised, and also provides light level tracking. From the image acquisition module 22 the left and right images pass to an image correction module 24. In the image correction module 24 each image is subjected to a transformation to correct for any distortion in the original image. For example, the image sensors 18 and 20 may typically have wide angle or fish-eye lenses to provide a wide field of view, and this will produce a fish-eye distortion which is corrected in the image correction module by applying an inverse, pincushion transformation. A perspective correction transformation is also applied to linearise the image for each sensor.
Figures 5(a) and (b) show schematically an image as acquired and an image after transformation. The effect is to provide an image as viewed in plan view rather than from an offset perspective view. The image correction module also applies linear and/or angular shifting and scaling as necessary to ensure that there is initial image registration between the two cameras. From the image correction module 24 the two images are passed to a depth processing module 26 which provides a horizontal comparison between the left and right images using the digital processing techniques described below, thereby providing a depth assessment or height map of the viewed scene. The horizontal comparison may be a horizontal differencing algorithm which provides a crude three-dimensional model as seen in Figures 6(a) and (b). The distances between matching blocks of the image in the left and right images, as shown in Figure 6(a), are mapped into the three-dimensional model as shown in Figure 6(b). Suitable comparison functions may be taken from the field of MPEG video encoding, where they are used to compute difference vectors. These functions involve computing the minimum absolute differences (MAD) of blocks and pixels to find similar image areas within successive frames, and are available optimised for the TI DSP family (e.g. mad 8 x 8; mad 16 x 16). In this application, distance matching needs to be applied in the horizontal direction only. These functions may also be used to assist with zone registration.
From the depth processing module 26 the data is passed to a detection processing module 28 which implements a process of continuous monitoring and background decay in the same way as other standard detectors, with thresholding and "blob" processing to provide a detection status.
Having registered and linearised the left and right images, horizontal differences in the images are analysed to determine height values over the area of the zone. In these embodiments the left and right images are processed to produce an image frame which can represent the detection zone in three dimensions, for example using a two-dimensional image with the final pixel intensity representing the height above the ground. If required for calibration and/or operational purposes, a three-dimensional bar chart representation similar to Excel (RTM) may be provided as seen in Figure 6(b).
There are a number of different ways of analysing the horizontal differences to determine the height values referred to above. Two such functions are the "mad" (minimum absolute difference) and "sad" (sum of absolute differences). We have found that a mad 8 x 8 function performs well as a tool for finding matching pixel blocks, though a mad 16 x 16 function may be less susceptible to false positives. The mad functions return not only the location of the closest matching picture block but also the value of the difference. The difference value may be thresholded to allow for filtering and also to reduce false positives.
The image frames can also be processed for edge detection using a Sobel filter to extract shape information from the intensity domain. Figure 7 shows an example of a suitable Sobel filter. In addition, an embossing filter may be used to allow discrimination of positive and negative gradient edges, which typically indicate the left and right (or right and left) edges of targets.
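A minimal sketch of such vertical-edge extraction (numpy assumed; the kernel shown is the standard 3x3 Sobel kernel for vertical edges, which may differ in sign or scale from the one in Figure 7, and the naive loop stands in for an optimised DSP convolution):

```python
import numpy as np

# Standard 3x3 Sobel kernel responding to horizontal intensity gradients,
# i.e. highlighting vertical edges; sign gives the "embossed" polarity.
SOBEL_VERTICAL = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]])

def filter2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Minimal 'valid' 2-D filtering (cross-correlation, no padding),
    enough to illustrate edge extraction without external libraries."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A dark-to-light vertical step produces a strong positive response near
# the edge; a light-to-dark step would produce a negative one.
img = np.zeros((5, 6))
img[:, 3:] = 10.0
edges = filter2d_valid(img, SOBEL_VERTICAL)
```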
It will be appreciated that targets towards the left and right edges of the zone or frame will not always provide two matching blocks, since they may appear in only one of the cameras.
If the data zone frames are successfully matched (i.e. in terms of registration, light levels, etc.) then any block that does not match indicates height. This could be used on its own as a basic form of workable detector, but we prefer to provide a 3D zone representation to deliver a level of detection confidence and allow filtering of objects of insufficient height.
Figure 8 illustrates this process schematically. Thus the left frame and right frame are linearised as discussed above and then processed to extract a shape (here the left-hand edge of each). The shapes are then differenced to provide a height frame.
Various algorithms may be used to perform the differencing. In one option the target array is used as the reference frame rather than either the left or right source frames. The proposition is that the "true" position of the target block is halfway between the left frame and right frame co-ordinates for the matching blocks. In pseudocode the following operations are carried out:

    For each row in destination
        For each block in destination row
            Until edge of frame
                Repeat
                    Move 1 to left on left frame
                    Move 1 to right on right frame
                    Compare
            Select best match
            Convert to height value
            Place height in destination array

This technique may be better suited to the use of sad functions as opposed to mad functions.
A second option makes use of the mad function in that it starts by referencing the left source frame.
    For each row in left frame
        For each block in left row
            Find best matching block in right frame row
            Convert to height value
            Place height in destination array

This has the disadvantage of using the left frame as the dominant frame (analogous to eyesight) and so targets to the right of the zone/frame may not be correctly identified.
The third option is a variant on the second option in that the inverse process (from right to left) is added. As the mad function is limited to left-to-right operation, the frames may be flipped to provide a mirror image before going through the same process a second time and then fusing the results.
    For each row in left frame
        For each block in left row
            Find best matching block in right frame row
            Convert to height value
            Place height in left hand destination array
    Flip the left frame - mirror image
    For each row in right frame
        For each block in right row
            Find best matching block in left frame row
            Convert to height value
            Place height in right hand destination array
    Fuse left and right hand destination frames.
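The second option can be sketched in ordinary Python (numpy assumed; the 8-pixel block size and search range are illustrative, the mean-absolute-difference scoring stands in for the optimised TI DSP mad routines, and conversion of the offset to a height value is left to the parallax geometry described earlier):

```python
import numpy as np

def mad(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """MAD-style score: mean absolute difference between two pixel blocks
    (lower means a better match)."""
    return float(np.abs(block_a - block_b).mean())

def match_disparity(left: np.ndarray, right: np.ndarray,
                    block: int = 8, max_disp: int = 8) -> np.ndarray:
    """For each block in the left frame, find the best-matching block in
    the same row of the right frame and record the horizontal offset,
    which the text converts to a height value."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block]
            best_score, best_d = float("inf"), 0
            for d in range(min(max_disp, w - block - x) + 1):
                score = mad(ref, right[y:y + block, x + d:x + d + block])
                if score < best_score:
                    best_score, best_d = score, d
            disp[by, bx] = best_d
    return disp

# A synthetic ramp pattern that appears 4 pixels further right in the
# right frame, as a raised target would under parallax.
left = np.zeros((8, 16)); left[:, :8] = np.arange(8)
right = np.zeros((8, 16)); right[:, 4:12] = np.arange(8)
disparities = match_disparity(left, right)
```

The third option would run this twice, once on mirrored frames, and fuse the two disparity maps.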
Each of these algorithms provides the first-stage derivation of height, which is measured along the detection axis. This results in an axial distortion as seen in Figure 9. In many instances this will not be of significant concern, but it is possible to correct for this effect by applying correction factors for different height readings and moving detection blocks. If the detection frame resolution is fairly low then look-up tables may be applied to adjust the three-dimensional image result. This kind of correction may also be useful for confirming detection of vertical targets (i.e. most pedestrians).
Referring now to Figure 10, there is shown an alternative scheme which applies DCI detection to a single channel, applying edge detection to both channels before making a depth assessment and then using the depth assessment to confirm the original result. In this manner, extracting image features from the individual channels early in the process reduces the risk of mis-registration generating image noise and artefacts.
A third option is shown in Figure 11, in which threshold frame differencing is applied to filter out ground-plane features. In this option the detection algorithm compares heights derived from calculating the differences between left and right images against heights derived from applying the same process to periodically updated reference images. This process, therefore, looks for changes in relative calculated heights rather than absolute height values.
An important factor in the systems is the ability to reduce the effects of lens distortion, registration differences and perspective foreshortening in the background data image. Reducing such effects to less than ±1 pixel within the zone enables rejection of shadows and lighting effects and also litter, puddles and any other ground-plane issues.
Bilinear interpolation may be used for efficiently applying the lens and perspective corrections. Of course, other interpolations such as cubic interpolation could be used.
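A minimal sketch of bilinear sampling as it might be used when warping a frame through the lens and perspective corrections (numpy assumed; the function name and edge clamping are illustrative choices):

```python
import numpy as np

def bilinear_sample(image: np.ndarray, x: float, y: float) -> float:
    """Bilinearly interpolate the image value at a fractional pixel
    coordinate by blending the four surrounding pixels, clamping at the
    image border."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, image.shape[1] - 1)
    y1 = min(y0 + 1, image.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = image[y0, x0] * (1 - fx) + image[y0, x1] * fx
    bottom = image[y1, x0] * (1 - fx) + image[y1, x1] * fx
    return float(top * (1 - fy) + bottom * fy)

# Sampling midway between four pixels returns their average.
grid = np.array([[0.0, 10.0],
                 [20.0, 30.0]])
center = bilinear_sample(grid, 0.5, 0.5)
```

Cubic interpolation would replace the two linear blends with cubic ones over a 4x4 neighbourhood, at higher cost.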
Our research indicates that height variations in the zone ground plane will typically amount to less than 200mm and so be capable of being filtered out within the depth perception algorithm.
The required lens performance may be specified to reduce distortion due to optical defects. We have found image quality with cheaper small lenses to be acceptable. The registration performance between sensors can easily be verified by tests. The lens correction and perspective correction may be adjusted separately or by the same mathematical transformation.
A discrete cosine transformation (DCT) can assist in reducing potential problems and may reduce the need for co-ordinating the image acquisition exposure levels between the two sensors.
The horizontal block comparison (rather than a vertical process) can make reasonable use of caching and lends itself to an efficient pipelining process. Similarly, the lens and perspective correction functions lend themselves to a pipeline approach, provided that only a small degree of correction is required.
An important principle is to select appropriate image processing functions so as to abstract shape information early in the process and so decouple the detection performance from pixel intensity values (i.e. shade and colour).
Application of a thresholded edge-detection (e.g. Laplacian) convolution is an example where edge features can be highlighted in a binary bit map raster and subsequently analysed for depth perception.
The two-dimensional approach to algorithm/function selection is chosen in order to allow for low-cost optics and sensor, and for production cost and tooling limitations.
Provided zone registration can be achieved efficiently, then depth abstraction may be readily available through vertical edge detection. A single 1D horizontal edge convolution may be suitable in certain applications, but a signed variation may be less susceptible to noise/confusion. In one variation, a vertical Sobel filter produces an "embossed" image which may then be thresholded.
With conventional image registration techniques very accurate registration can be achieved, either by feature-based or area-based methods.
Images can be re-sampled, or inter-pixel values interpolated, to achieve sub-pixel registration accuracy. Within the constraints of a low-cost embedded solution, such an accurate interpolation scheme is impractical, as it is too computationally intensive. Referring now to the embodiment of Figure 12, a more pragmatic solution is to use 'closest match' pixel registration, where each pixel in the reference image is related to its corresponding spatial position by means of a Look-Up Table (LUT).
At run time each corresponding pixel pair can then be compared quickly and efficiently by simply indexing into the LUT for the appropriate address.
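A sketch of the run-time step (numpy assumed; the LUT layout, with each entry holding the (row, column) of the corresponding pixel in the other image, is one plausible encoding, and building the LUT itself is the separate registration step described in the text):

```python
import numpy as np

def register_with_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """'Closest match' pixel registration: lut[y, x] holds the (row, col)
    in `image` corresponding to reference pixel (y, x), programmed once at
    set-up; at run time registration is a single indexed gather."""
    return image[lut[..., 0], lut[..., 1]]

# Toy 2x2 example: the LUT swaps each row's two pixels, standing in for
# the small positional shifts found during registration.
img = np.array([[1, 2],
                [3, 4]])
lut = np.array([[[0, 1], [0, 0]],
                [[1, 1], [1, 0]]])
registered = register_with_lut(img, lut)
```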
The Sum of Absolute Differences (SAD) is a widely used, extremely simple block comparison method used for image block-matching in video compression (such as MPEG4). The difference between each corresponding pixel is calculated as an absolute value and totalled for the whole block. The SAD method is commonly used because it is simple and therefore fast to process.
The embodiment of stereo detector in Figure 12 uses two similar low-cost camera sensors mounted on the same PCB, so that the empty scene viewed by both cameras has similar exposure characteristics to allow for straightforward image registration.
An efficient detection scheme can then be performed by comparing the SAD values for corresponding blocks of pixels (typically blocks of 8 by 8 pixels, or 16 by 16) across the required detection zone within the previously registered image frame. Indeed the SAD method would be used initially as the means of finding the closest pixel match to configure the LUT for correspondence in the image registration process.
One problem of such a scheme is that registration is only accurate to within 1 pixel tolerance (± half a pixel), so any high-contrast ground-based features (such as a white line on tarmac) will register as a difference. High-contrast ground detail can be useful during the image registration process, as it can give high-confidence characteristics for good feature matching, but during the detection process it can mask detection of genuine target objects (i.e. differences due to parallax). Accordingly, in the embodiment of Figure 12, the pixel variance of each of the source blocks of pixels is calculated. Any high-contrast feature within the block will increase the variance value obtained.
We use this to reduce the detection sensitivity of the block by subtracting a proportion of the variance values obtained from the SAD value.
Block detection value = SAD(Block1, Block2) - k(VAR(Block1) + VAR(Block2))

The constant k can be fixed by trial and experiment; for UK kerbside pedestrian detection a value of 1.0 has been found suitable. The block detection value can then be compared against an adjustable threshold value of, say, 5% of maximum SAD value to indicate whether or not detection is taking place.
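The formula above can be sketched directly (numpy assumed; k = 1.0 and the 5%-of-maximum-SAD threshold are the example values from the text, shown for an 8x8 block of 8-bit pixels):

```python
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute differences between two registered pixel blocks."""
    return float(np.abs(a.astype(float) - b.astype(float)).sum())

def block_detection_value(block1: np.ndarray, block2: np.ndarray,
                          k: float = 1.0) -> float:
    """SAD desensitised by the pixel variance of each source block, so a
    high-contrast ground feature (e.g. a white line on tarmac) raises its
    own detection threshold rather than triggering falsely."""
    return sad(block1, block2) - k * (float(np.var(block1)) + float(np.var(block2)))

# 5% of the maximum SAD for an 8x8 block of 8-bit pixels (255 * 64).
THRESHOLD = 0.05 * 255 * 64

def block_detecting(block1: np.ndarray, block2: np.ndarray, k: float = 1.0) -> bool:
    return block_detection_value(block1, block2, k) > THRESHOLD

flat = np.zeros((8, 8))
bright = np.full((8, 8), 255.0)
stripe = np.zeros((8, 8)); stripe[:, 4:] = 255.0   # high-contrast ground mark
```

A genuinely different block pair detects; an identical high-contrast pair does not, because its own variance suppresses the score.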
In this embodiment, left and right cameras 40, 42 provide respective image frames 44, 46. The pixels from these frames are supplied to respective look-up tables 46, 48 which effectively map the pixels to left and right registration images. The outputs from the look-up tables are therefore the registered left and right images of the same viewed scene, and so the corresponding pixels in the arrays correspond to the same point in the image. The look-up tables may be stored during initial registration, having been programmed with a scene that has only the background defined in it, i.e. the ground. Assuming the background is unchanged, the look-up tables 46, 48 should give exactly the same output array of pixels 50, 52 for each image. If a person or other object of appreciable height now appears in the viewed scene, the images from the look-up tables will differ over several pixels due to parallax.
Having obtained the two output arrays 50, 52 from the look-up tables, these are now used as the left- and right-hand image arrays. The arrays are each processed on a block-by-block basis (typically 8x8) and respective variances obtained for each 8x8 block at 54, 56 respectively. The blocks from the left and right images are then compared to determine a difference value at 58. For example, a sum of absolute differences (SAD) may be used, which sums the absolute differences between corresponding pixels in the two arrays over the whole block. Other algorithms may be used. Then, at 60, a determination is made as to whether the block is "detecting"; this is done in this embodiment by subtracting a value dependent on the variances of the left and right blocks from the difference value (or SAD) and flagging that block as detecting if this modified SAD value is greater than a threshold. This process is repeated for all the blocks in the image, the result accumulated at 62, and a decision made on whether the detection is "active", at 64, e.g. by comparing the block detection value against an adjustable threshold value of, say, 5% of maximum SAD value to indicate whether or not detection is taking place.
The above embodiment enables relatively low resolution cameras to be used with a relatively low level of processing. Also, the method described reduces the possibility of error due to variations in automatic gain control or automatic exposure control between the two image arrays. An alternative scheme to reduce false detections would be to provide an image detector in which the left and right detectors had the same settings for exposure and gain at the instant the images are captured. Accordingly, in another aspect, this invention provides an image detector for capturing left- and right-hand images for use in a stereo-based height detection or discrimination system, said image detector having spaced image sensors, and means for controlling at least one of the exposure and gain such that the exposure and/or gain setting is the same for a left-hand and a right-hand image captured for processing by said detector.

Claims (13)

CLAIMS
1. A traffic detector system for detecting the presence of one or more objects of appreciable height in a detection zone on or adjacent a carriageway or walkway, said detector system in use being disposed in an elevated position relative to the detection zone, said detector system comprising: an imaging means directed generally downwardly to observe the detection zone and arranged to acquire spaced images from respective spaced viewpoints, and image processing means for processing the spaced images acquired by said imaging means to detect the presence of an object of appreciable height in said detection zone, wherein said image processing means acquires respective image frames each containing an array of pixels, said image processing means further including means for processing at least part of at least one of said arrays to obtain a variance value which is dependent on the variation of the pixel values within said array or a part thereof, and means for comparing the values of corresponding pixels in said arrays and outputting a detection signal for all or part of the array if the differences between the pixels from the two frames exceed a threshold that depends on said variance value.
2. A traffic detector system according to claim 1, wherein said image processing means processes a plurality of blocks which together make up a frame.
3. A traffic detector system according to claim 2, wherein said image processing means processes the image data on a block-by-block basis and, for each block, obtains the corresponding pixel data from the two images, calculates a variance value for each block, compares each block to obtain a value representing a difference between the two blocks, and outputs a detection signal for that block if the difference exceeds a threshold that is dependent on the variance values.
4. A traffic detector system according to any of claims 1 to 3, wherein the imaging means acquires spaced images and the respective image data values are supplied to respective look-up tables which output a registered image.
5. A traffic detector system according to Claim 1, wherein said image processing means is operable to apply an optical transformation to the acquired images to at least partially correct for optical distortion therein.
6. A traffic detector system according to Claim 1 or Claim 5, wherein said image processing means is operable to at least partially compensate for registration differences between said viewpoints.
7. A traffic detector system according to any of the preceding Claims, wherein said image processing means is operable to perform at least a partial perspective correction function.
8. A traffic detector system according to any of the preceding Claims, wherein the image processing means is operable to process the acquired images to derive data representative of a height map of objects in the detection zone.
9. A traffic detector system according to Claim 5, wherein said image processing means is operable to filter out objects or artefacts below a preset minimum height or of apparent negative height.
10. A traffic detector system as claimed in any of the preceding Claims, wherein said image processing means is operable to apply edge detection to one or both said acquired images or data derived therefrom.
11. A pedestrian crossing control including a traffic detector system according to any of the preceding Claims.
12. A method for detecting and/or discriminating one or more objects of appreciable height in a detection zone on a carriageway or walkway, which comprises acquiring from an elevated position on or adjacent a carriageway or walkway respective images from spaced viewpoints, and processing said spaced images thereby to detect the presence of an object of an appreciable height in said detection zone.
13. A traffic detector system for detecting the presence of one or more objects of appreciable height in a detection zone on or adjacent a carriageway or walkway, said detector system in use being disposed in an elevated position relative to the detection zone, said detector system comprising: an imaging means directed generally downwardly to observe the detection zone and arranged to acquire spaced images from respective spaced viewpoints, and image processing means for processing the spaced images acquired by said imaging means to detect the presence of an object of appreciable height in said detection zone.
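The height discrimination underlying Claims 8, 12 and 13 rests on the standard stereo relation between disparity and distance: for a calibrated downward-looking camera pair, the depth of a surface point from the cameras is z = f * b / d (focal length f in pixels, baseline b, disparity d), so its height above the ground is the mounting height minus z. A minimal sketch of that conversion, with illustrative parameter names not taken from the patent:

```python
def height_from_disparity(disparity_px, mount_height_m, baseline_m, focal_px):
    """Convert a stereo disparity (in pixels) to height above the road.

    Assumes a calibrated, downward-looking camera pair: depth from the
    cameras is focal_px * baseline_m / disparity_px, and object height
    is the camera mounting height minus that depth. Results below a
    preset minimum, or apparently negative, would be filtered out as
    in claim 9.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible surface")
    depth_m = focal_px * baseline_m / disparity_px
    return mount_height_m - depth_m
```

For example, with the cameras 6 m above the road, a 0.1 m baseline and a 1000 px focal length, the road surface itself appears at a disparity of about 16.7 px (height 0), while a 1.5 m tall pedestrian appears at about 22.2 px.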
GB0807243.1A 2007-04-21 2008-04-21 Traffic detector system Active GB2448617B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GBGB0707732.4A GB0707732D0 (en) 2007-04-21 2007-04-21 Traffic detector system

Publications (3)

Publication Number Publication Date
GB0807243D0 GB0807243D0 (en) 2008-05-28
GB2448617A true GB2448617A (en) 2008-10-22
GB2448617B GB2448617B (en) 2012-02-29

Family

ID=38135201

Family Applications (2)

Application Number Title Priority Date Filing Date
GBGB0707732.4A Ceased GB0707732D0 (en) 2007-04-21 2007-04-21 Traffic detector system
GB0807243.1A Active GB2448617B (en) 2007-04-21 2008-04-21 Traffic detector system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GBGB0707732.4A Ceased GB0707732D0 (en) 2007-04-21 2007-04-21 Traffic detector system

Country Status (1)

Country Link
GB (2) GB0707732D0 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL2014154A (en) * 2015-01-19 2016-09-26 Lumi Guide Fietsdetectie Holding B V System and method for detecting the occupancy of a spatial volume.

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0997395A (en) * 1995-09-29 1997-04-08 Nec Corp Stereoscopic image pickup type vehicle sensor and vehicle sensing method
WO1998015934A1 (en) * 1996-10-04 1998-04-16 Robert Bosch Gmbh Device and process for monitoring traffic zones
JP2000215299A (en) * 1999-01-27 2000-08-04 Toshiba Corp Image monitoring device
US6205242B1 (en) * 1997-09-29 2001-03-20 Kabushiki Kaisha Toshiba Image monitor apparatus and a method
US20010019356A1 (en) * 2000-02-29 2001-09-06 Nobuyuki Takeda Obstacle detection apparatus and method
US20040234124A1 (en) * 2003-03-13 2004-11-25 Kabushiki Kaisha Toshiba Stereo calibration apparatus and stereo image monitoring apparatus using the same



Also Published As

Publication number Publication date
GB2448617B (en) 2012-02-29
GB0707732D0 (en) 2007-05-30
GB0807243D0 (en) 2008-05-28

Similar Documents

Publication Publication Date Title
US20230336707A1 (en) Systems and Methods for Dynamic Calibration of Array Cameras
CN109813251B (en) Method, device and system for three-dimensional measurement
US9715734B2 (en) Image processing apparatus, imaging apparatus, and image processing method
KR100776649B1 (en) A depth information-based Stereo/Multi-view Stereo Image Matching Apparatus and Method
WO2011027564A1 (en) Parallax calculation method and parallax calculation device
WO2014073322A1 (en) Object detection device and object detection method
WO2014165244A1 (en) Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
AU2009311052A1 (en) Motion detection method, apparatus and system
WO2014132729A1 (en) Stereo camera device
Kim et al. Adaptive 3D sensing system based on variable magnification using stereo vision and structured light
US20130077825A1 (en) Image processing apparatus
KR101709317B1 (en) Method for calculating an object's coordinates in an image using single camera and gps
JP2004094640A (en) Intruding-object detection apparatus
JPH09297849A (en) Vehicle detector
JP2007316856A (en) Traveling object detecting device, computer program, and traveling object detection method
CN102713511A (en) Distance calculation device for vehicle
CN106131448B (en) The three-dimensional stereoscopic visual system of brightness of image can be automatically adjusted
GB2448617A (en) Traffic Detector System
JP2007195061A (en) Image processor
Hirahara et al. Detection of street-parking vehicles using line scan camera and scanning laser range sensor
WO2021124657A1 (en) Camera system
CN113838111A (en) Road texture feature detection method and device and automatic driving system
JP2004094707A (en) Method for estimating plane by stereo image and detector for object
JP5579297B2 (en) Parallax calculation method and parallax calculation device
Lee et al. Generic obstacle detection on roads by dynamic programming for remapped stereo images to an overhead view