AU760788B2 - An image acquisition system - Google Patents

An image acquisition system

Info

Publication number
AU760788B2
AU760788B2, AU22570/00A, AU2257000A
Authority
AU
Australia
Prior art keywords
image
aircraft
camera
image acquisition
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU22570/00A
Other versions
AU2257000A (en)
Inventor
Glen William Auty
Michael John Best
Timothy John Davis
Ashley John Dreier
Ian Barry Macintyre
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Commonwealth Scientific and Industrial Research Organization CSIRO
Original Assignee
Commonwealth Scientific and Industrial Research Organization CSIRO
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU21438/97A external-priority patent/AU720315B2/en
Application filed by Commonwealth Scientific and Industrial Research Organization CSIRO filed Critical Commonwealth Scientific and Industrial Research Organization CSIRO
Priority to AU22570/00A priority Critical patent/AU760788B2/en
Publication of AU2257000A publication Critical patent/AU2257000A/en
Application granted granted Critical
Publication of AU760788B2 publication Critical patent/AU760788B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Description

AN IMAGE ACQUISITION SYSTEM

The present invention relates to an image acquisition system and method and, in particular, to an image acquisition system of an aircraft detection system.
The International Civil Aviation Organisation (ICAO) has established regulations which require all civil aircraft to have registration markings beneath the port wing to identify an aircraft. The markings denote the nationality of an aircraft and its registration code granted by the ICAO. In some countries, airline operators do not follow the regulations and the markings appear on an aircraft's fuselage.
Owners of aircraft are charged for airport use, but a satisfactory system has not been developed to automatically detect aircraft and then, if necessary, administer a charge to the owner. Microwave signals for detecting an aircraft can interfere with microwave frequencies used for airport communications and, similarly, radar signals can interfere with those used for aircraft guidance systems. It is desired to provide a system which can be used to detect and/or acquire an image of at least part of an object using unobtrusive passive technology, or at least provide a useful alternative to existing systems.
In accordance with the present invention there is provided an image acquisition system including: at least one camera for acquiring an image of at least part of a moving object, in response to a trigger signal, and analysis means for processing said image to locate a region in said image including markings identifying said object and for processing said region to extract said markings for a recognition process, such that said analysis means subsamples said image, extracts lines exceeding a predetermined length and at predetermined angles, binarises the image, removes features smaller or larger than said markings, removes features not clustered as said markings, and locates said region using the remaining features.
The present invention also provides an image acquisition method including: acquiring an image of at least part of a moving object, in response to a trigger signal, using at least one camera, and processing said image to locate a region in said image including markings identifying said object and processing said region to extract said markings for a recognition process, said processing to locate said region including sub-sampling said image, extracting lines exceeding a predetermined length and at predetermined angles, binarising the image, removing features smaller or larger than said markings, removing features not clustered as said markings, and locating said region using the remaining features.
Preferred embodiments of the present invention are hereinafter described, by way of example only, with reference to the accompanying drawings, wherein:
Figure 1 is a block diagram of a preferred embodiment of an aircraft detection system;
Figure 2 is a schematic diagram of a preferred embodiment of the aircraft detection system;
Figure 3 is a block diagram of a connection arrangement for components of the aircraft detection system;
Figure 4 is a more detailed block diagram of a proximity detector and a tracking system for the aircraft detection system;
Figure 5 is a coordinate system used for the proximity detector;
Figures 6(a) and 6(b) are underneath views of discs of sensors of the tracking system;
Figure 7 is a schematic diagram of an image obtained by the tracking system;
Figures 8 and 9 are images obtained from a first embodiment of the tracking system;
Figure 10 is a graph of a pixel row sum profile for an image obtained by the tracking system;
Figure 11 is a graph of a difference profile obtained by subtracting successive row sum profiles;
Figure 12 is a diagram of a coordinate system for images obtained by the tracking system;
Figure 13 is a diagram of a coordinate system for the aircraft used for geometric correction of the images obtained by the tracking system;
Figure 14 is a diagram of a coordinate system used for predicting a time to generate an acquisition signal;
Figure 15 is a graph of aircraft position in images obtained by the tracking system over successive frames;
Figure 16 is a graph of predicted trigger frame number over successive image frames obtained by the tracking system;
Figure 17 is a schematic diagram of a pyroelectric sensor used in a second embodiment of the tracking system;
Figure 18 shows graphs of differential signatures obtained using the second embodiment of the tracking system;
Figures 19 and 20 are images obtained of an aircraft by high resolution cameras of an acquisition system of the aircraft detection system;
Figure 21 is a schematic diagram of an optical sensor system used for exposure control of the acquisition cameras;
Figure 22 is a flow diagram of a preferred character location process executed on image data obtained by the high resolution cameras;
Figure 23 is a diagram of images produced during the character location process; and
Figure 24 is a flow diagram of a character recognition process executed on a binary image of the characters extracted from an image obtained by one of the high resolution cameras.
An aircraft detection system 2, as shown in Figure 1, includes a proximity detector 4, a tracking sensor system 6, an image processing system 8, an image acquisition system 10 and an analysis system 12. A control system 14 can be included to control the image acquisition system 10 on the basis of signals provided by the image processing system 8, and also control an illumination unit 16.
The proximity detector 4 and the tracking sensor system 6 include sensors 3 which may be placed on or near an aircraft runway 5 to detect the presence of an aircraft 28 using visual or thermal imaging or aural sensing techniques. Also located on or near the runway 5 is at least one high resolution camera 7 of the image acquisition system 10. The sensors 3 and the acquisition camera 7 are connected by data and power lines 9 to an instrument rack 11, as shown in Figure 2, which may be located adjacent or near the runway 5. The instrument rack 11 may alternatively be powered by its own independent supply which may be charged by solar power. The instrument rack 11 includes control circuitry and image processing circuitry which is able to control activation of the sensors 3 and the camera 7 and perform image processing, as required. The instrument rack 11, the data and power lines 9, the sensors 3 and the acquisition camera 7 can be considered to form a runway module which may be located at the end of each runway of an airport. A runway module can be connected back to a central control system 13 using an optical fibre or other data link 15. Images provided by the sensors 3 may be processed and passed to the central system 13 for further processing, and the central system 13 would control triggering of the acquisition cameras 7. Alternatively, image processing for determining triggering of the acquisition camera 7 may be performed by each instrument rack 11. The central control system 13 includes the analysis system 12. One method of configuring connection of the instrument racks 11 to the central control system 13 is illustrated in Figure 3. The optical fibre link 15 may include dedicated optical fibres 17 for transmitting video signals to the central control system 13 and other optical fibres 19 dedicated to transmitting data to and receiving data from the central control system 13 using the Ethernet protocol or direct serial data communication. A number of different alternatives can be used for connecting the runway modules to the central control system 13. For example, the runway modules and the control system 13 may be connected as a Wide Area Network (WAN) using Asynchronous Transfer Mode (ATM) or Synchronous Digital Hierarchy (SDH) links. The runway modules and the central control system 13 may also be connected as a Local Area Network (LAN) using a LAN protocol, such as Ethernet. Physical connections may be made between the runway modules and the central control system 13 or alternatively wireless transmission techniques may be used, such as using infrared or microwave signals for communication.
The proximity detector 4 determines when an aircraft is within a predetermined region, and then on detecting the presence of an aircraft activates the tracking sensor system 6. The proximity detector 4, as shown in Figure 4, may include one or more pyroelectric devices 21, judiciously located at an airport, and a signal processing unit 23 and trigger unit 25 connected thereto in order to generate an activation signal to the tracking sensor system 6 when the thermal emission of an approaching aircraft exceeds a predetermined threshold. The proximity detector 4 may use one or more pyroelectric point sensors that detect the infrared radiation emitted from the aircraft 28.
A mirror system can be employed with a point sensor 70 to enhance its sensitivity to the motion of the aircraft 28. The point sensor 70 may consist of two or more pyroelectric sensors configured in a geometry and with appropriate electrical connections so as to be insensitive to the background infrared radiation and slowly moving objects. With these sensors the rate of motion of the image of the aircraft 28 across the sensor 70 is important. The focal length of the mirror 72 is chosen to optimise the motion of the image across the sensor 70 at the time of detection. As an example, if the aircraft at altitude H with glide slope angle θ_GS moves with velocity V and passes overhead at time t_0, as shown in Figure 5, then the position h of the image of the aircraft 28 on the sensor 70 is

    h = f H tan(θ_GS) / (V (t_0 - t))    (1)

where f is the focal length of a cylindrical mirror.
If the rate of motion of the image dh/dt is required to have a known value, then the focal length of the mirror 72 should be chosen to satisfy

    f = [V (t_0 - t)^2 / (H tan(θ_GS))] dh/dt    (2)

where t_0 - t is the time difference between the time t_0 at which the aircraft is overhead and the time t at which it is to be detected. Alternatively, the proximity detector 4 may include different angled point sensors to determine when an aircraft enters the monitored region and is about to land or take-off. In response to the activation signal, the tracking sensor system 6 exposes the sensor 3 to track the aircraft. Use of the proximity detector 4 allows the sensor 3 to be sealed in a housing when not in use and protected from damaging environmental conditions, such as hailstorms and blizzards or fuel. The sensor 3 is only exposed to the environment for a short duration whilst an aircraft is in the vicinity of the sensor 3. If the tracking system 6 is used in conditions where the sensor 3 can be permanently exposed to the environment or the sensor 3 can resist the operating conditions, then the proximity detector 4 may not be required. The activation signal generated by the proximity detector 4 can also be used to cause the instrument rack 11 and the central control system 13 to adjust the bandwidth allocated on the link 15 so as to provide an adequate data transfer rate for transmission of video signals from the runway module to the central system 13. If the bandwidth is fixed at an acceptable rate or the system 2 only uses local area network communications and only requires a reduced bandwidth, then again the proximity detector 4 may not be required.
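As a rough numerical illustration of equation (2), the sketch below computes the cylindrical-mirror focal length needed to produce a chosen image rate dh/dt at a chosen detection lead time. This is a minimal sketch in Python; all of the numbers are illustrative assumptions, not values taken from the patent.

```python
import math

# Illustrative assumptions (not values from the patent):
H = 300.0                      # aircraft altitude when overhead, metres
V = 70.0                       # aircraft speed, m/s
theta_gs = math.radians(3.0)   # glide-slope angle
lead_time = 5.0                # detect the aircraft 5 s before it is overhead (t0 - t)
dh_dt = 0.005                  # desired image rate across the sensor, m/s

# Equation (2): f = V (t0 - t)^2 / (H tan(theta_GS)) * dh/dt
f = V * lead_time**2 / (H * math.tan(theta_gs)) * dh_dt
print(f"mirror focal length f = {f:.2f} m")
```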
The tracking sensor system 6 includes one or more tracking or detection cameras 3 which obtain images of an aircraft as it approaches or leaves a runway.
From a simple image of the aircraft, aspect ratios, such as the ratio of the wingspan to the fuselage length, can be obtained. The tracking camera 3 used is a thermal camera which monitors thermal radiation received in the 10 to 14 μm wavelength range and is not dependent on lighting conditions for satisfactory operation. Use of the thermal cameras is also advantageous as the distribution of temperatures over the observed surfaces of an aircraft can be obtained, together with signatures of engine exhaust emissions and features in the fuselage or engines. The tracking camera 3 can obtain an instantaneous two-dimensional image I_n using all of the sensors in a CCD array of the camera, or alternatively one row of the array perpendicular to the direction of motion of the aircraft can be used to obtain a linear image at each scan and the linear image is then used to build up a two-dimensional image I_n for subsequent processing.
To allow operation of the tracking and acquisition cameras 3 and 7 in rain, a rotating disc system is employed. The use of a rotating disc for removing water drops from windows is established on marine vessels. A reflective or transparent disc is rotated at high speed in front of the window that is to be kept clear. Water droplets falling on the disc experience a large shear force related to the rotation velocity. The shear force is sufficient to atomise the water drop, thereby removing it from the surface of the disc.
A transparent disc of approximately 200 mm diameter is mounted to an electric motor and rotated at a frequency of 60 Hz. A camera with a 4.8 mm focal length lens was placed below a glass window which in turn was beneath the rotating disc. The results of inserting the rotating disc are illustrated in Figure 6(a), which shows the surface of a camera housing without the rotating disc, and in Figure 6(b), which shows the surface of a camera housing with the rotating disc activated and in rain conditions.
The image processing system 8 processes the digital images provided by the tracking sensor system 6 so as to extract in real-time information concerning the features and movement of the aircraft. The images provided to the image processing system, depending on the tracking cameras employed, provide an underneath view of the aircraft, as shown in Figure 7. The tips of the wings or wingspan points 18 of the aircraft are tracked by the image processor system 8 to determine when the image acquisition system 10 should be activated so as to obtain the best image of the registration markings on the port wing 20 of the aircraft. The image processing system 8 generates an acquisition signal using a trigger logic circuit 39 to trigger the camera of the image acquisition system 10. The image processing system 8 also determines and stores data concerning the wingspan 22 of the aircraft and other details concerning the size, shape and ICAO category (A to G) of the aircraft. The image processing system 8 classifies the aircraft on the basis of the size, which can be used subsequently when determining the registration markings on the port wing 20. The data obtained can also be used for evaluation of the aircraft during landing and/or takeoff.
Alternatively a pyroelectric sensor 27 can be used with a signal processing wing detection unit 29 to provide a tracking system 1 which also generates the acquisition signal using the trigger logic circuit 39, as shown in Figure 4 and described later.
Detecting moving aircraft in the field of view of the sensor 3 or 27 is based on forming a profile or signature of the aircraft that depends on a spatial coordinate y and time t. To eliminate features in the field of view that are secondary or slowly moving, a difference profile ΔP(y,t) is formed. The profile or signature can be differenced in time or in space because these differences are equivalent for moving objects. If the intensity of the light or thermal radiation from the object is not changing, then the time derivative of the profile obtained from this radiation is zero. A time derivative of a moving field can be written as a convective derivative involving partial derivatives, which gives the equation

    dP(y,t)/dt = ∂P(y,t)/∂t + v ∂P(y,t)/∂y = 0    (3)

where v is the speed of the object as observed in the profile. Rearranging equation (3) gives

    ∂P(y,t)/∂t = -v ∂P(y,t)/∂y    (4)

which shows that the difference in the profile in time is equivalent to the difference in the profile in space. This only holds for moving objects, when v ≠ 0. Equation (4) also follows from the simple fact that if the profile has a given value P(y_0,t_0) at the coordinate (y_0,t_0), then it will have this same value along the line

    y = y_0 + v(t - t_0)    (5)

To detect and locate a moving feature that forms an extremum in the profile, such as an aircraft wing, the profile can be differenced in space, Δ_y P(y,t). Then an extremum in the profile P(y,t) will correspond to a point where the difference profile Δ_y P(y,t) crosses zero.
In one method for detecting a feature on the aircraft, a profile P(y,t) is formed and a difference profile Δ_t P(y,t) is obtained by differencing in time, as described below.
According to equation (4) this is equivalent to a profile of a moving object that is differenced in space. Therefore the position y of the zero crossing point of Δ_t P(y,t) at time t is also the position of the zero crossing point of Δ_y P(y,t), which locates an extremum in P(y,t).
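To illustrate the zero-crossing idea of equations (3) to (5), the following toy sketch (an illustrative assumption-laden example, not the patent's implementation) builds a profile containing a moving peak and a stationary feature, differences it in time, and reads the peak position off the zero crossing of the difference profile.

```python
import numpy as np

y = np.arange(512, dtype=float)            # spatial coordinate (pixel rows)

def profile(t, v=4.0, y_start=100.0):
    moving = np.exp(-0.5 * ((y - (y_start + v * t)) / 8.0) ** 2)  # moving peak (e.g. a wing)
    static = 0.5 * np.exp(-0.5 * ((y - 400.0) / 30.0) ** 2)       # stationary clutter
    return moving + static

t = 20
dP = profile(t + 1) - profile(t)           # time difference ~ -v dP/dy, equation (4)

# The moving extremum lies where dP crosses zero between its negative and
# positive lobes; the stationary feature contributes almost nothing to dP.
lo, hi = sorted((int(np.argmin(dP)), int(np.argmax(dP))))
crossing = lo + int(np.argmin(np.abs(dP[lo:hi + 1])))
print("zero crossing at row", crossing, "- true midway peak position", 100.0 + 4.0 * (t + 0.5))
```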
In another method for detecting a feature on the aircraft, the difference between the radiation received by a sensor 27 from two points in space is obtained as a function of time, Δ_y S(t), as described below. If there are no moving features in the field of view, then the difference is constant. If any object in the field of view is moving, then the position of a point on the object is related to time using equation (5). This allows a profile or signature differenced in space,

    Δ_y S(t)    (6)

to be constructed and, as described above, allows an extremum corresponding to an aircraft wing to be located in the profile from the zero crossing point in the differential signature.
The image acquisition system 10 includes at least one high resolution camera 7 to obtain images of the aircraft when triggered. The images are of sufficient resolution to enable automatic character recognition of the registration code on the port wing 20 or elsewhere. The illumination unit 16 is also triggered simultaneously to provide illumination of the aircraft during adverse lighting conditions, such as at night or during inclement weather.
The acquired images are passed to the analysis system 12 which performs Optical Character Recognition (OCR) on the images to obtain the registration code.
The registration code corresponds to aircraft type and therefore the aircraft classification determined by the image processing system 8 can be used to assist the recognition process, particularly when characters of the code are obscured in an acquired image. The registration code extracted and any other information concerning the aircraft can then be passed to other systems via a network connection 24.
Once signals received from the pyroelectric sensors 21 indicate the aircraft 28 is within the field of view of the sensors 3 of the tracking sensor system 6, the tracking system 1 is activated by the proximity detector 4. The proximity detector 4 is usually the first stage detection system to determine when the aircraft is in the proximity of the more precise tracking system 1. The tracking system 1 includes the tracking sensor system 6 and the image processing system 8, and according to one embodiment the images from the detection cameras 3 of the sensor system 6 are used by the image processing system 8 to provide a trigger for the image acquisition system when some point in the image of the aircraft reaches a predetermined pixel position. One or more detection cameras 3 are placed in appropriate locations near the airport runway such that the aircraft passes within the field of view of the cameras 3. A tracking camera 3 provides a sequence of images, I_n. The image processing system 8 subtracts a background image from each image I_n of the sequence. The background image represents an average of a number of preceding images. This yields an image ΔI_n that contains only those objects that have moved during the time interval between images.
The image ΔI_n is thresholded at appropriate values to yield a binary image, i.e. one that contains only two levels of brightness, such that the pixels comprising the edges of the aircraft are clearly distinguishable. The pixels at the extremes of the aircraft in the direction perpendicular to the motion of the aircraft will correspond to the edges 18 of the wings of the aircraft. After further processing, described below, when it is determined that the pixels comprising the port edge pass a certain position in the image corresponding to the acquisition point, the acquisition system 10 is triggered, thereby obtaining an image of the registration code beneath the wing 20 of the aircraft.
Imaging the aircraft using thermal infrared wavelengths and detecting the aircraft by its thermal radiation renders the aircraft self-luminous so that it can be imaged both during the day and night primarily without supplementary illumination.
Infrared (IR) detectors are classified as either photon detectors (termed cooled sensors herein) or thermal detectors (termed uncooled sensors herein). Photon detectors (photoconductors or photodiodes) produce an electrical response directly as the result of absorbing IR radiation. These detectors are very sensitive, but are subject to noise due to ambient operating temperatures. It is usually necessary to cryogenically cool these detectors to maintain high sensitivity. Thermal detectors experience a temperature change when they absorb IR radiation, and an electrical response results from the temperature dependence of the material property. Thermal detectors are not generally as sensitive as photon detectors, but perform well at room temperature.
Typically, the cooled sensing devices, which are formed from Mercury Cadmium Telluride, offer far greater sensitivity than uncooled devices, which may be formed from Barium Strontium Titanate. Their Net Equivalent Temperature Difference (NETD) is also superior. However, with the uncooled sensor a chopper can be used to provide temporal modulation of the scene. This permits AC coupling of the output of each pixel to remove the average background. This minimises the dynamic range requirements for the processing electronics and amplifies only the temperature differences. This is an advantage for resolving differences between cloud, the sun, the aircraft and the background. The advantage of differentiation between objects is that it reduces the load on subsequent image processing tasks for segmenting the aircraft from the background and other moving objects such as the clouds.
Both cooled and uncooled thermal infrared imaging systems 6 have been used during day, night and foggy conditions. The system 6 produced consistent images of the aircraft in all these conditions, as shown in Figures 8 and 9. In particular, the sun in the field of view produced no saturation artefacts or flaring in the lens. At night, the entire aircraft was observable, not just the lights.
The image processing system 8 uses a background subtraction method in an attempt to eliminate slowly moving or stationary objects from the image, leaving only the fast moving objects. This is achieved by maintaining a background image that is updated after a certain time interval elapses. The update is an incremental one based on the difference between the current image and the background. The incremental change is such that the background image can adapt to small intensity variations in the scene but takes some time to respond to large variations. The background image is subtracted from the current image, a modulus is taken and a threshold applied. The result is a binary image containing only those differences from the background that exceed the threshold.
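A minimal sketch of the incremental background-subtraction scheme described above, assuming NumPy; the update gain, the threshold and the synthetic frames are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np

class BackgroundSubtractor:
    """Maintains a slowly adapting background and returns a binary difference image."""

    def __init__(self, first_frame, gain=0.02, threshold=25.0):
        self.background = first_frame.astype(np.float32)
        self.gain = gain            # small increment: adapts only to slow intensity changes
        self.threshold = threshold  # applied to the modulus of the difference

    def apply(self, frame):
        frame = frame.astype(np.float32)
        diff = np.abs(frame - self.background)           # modulus of (current - background)
        binary = (diff > self.threshold).astype(np.uint8)
        # incremental update: the background creeps towards the current frame
        self.background += self.gain * (frame - self.background)
        return binary

# usage with synthetic frames
frames = np.random.default_rng(0).integers(0, 40, size=(5, 240, 320), dtype=np.uint8)
subtractor = BackgroundSubtractor(frames[0])
masks = [subtractor.apply(f) for f in frames[1:]]
print(masks[-1].shape, masks[-1].dtype)
```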
One problem with this method is that some slow moving features, such as clouds, still appear in the binary image. The reason for this is that the method does not select on velocity but on a combination of velocity and intensity gradients. If the intensity in the image is represented by I(x,y,t), where x and y represent the position in rows and columns, respectively, and t represents the image frame number (time), and if the variation in the intensity due to ambient conditions is very small, then it can be shown that the time variation of the intensity in the image due to a feature moving with velocity v is given by

    ∂I(x,y,t)/∂t = -v·∇I(x,y,t)    (7)

In practice, the time derivative in equation (7) is performed by taking the difference between the intensity at (x,y) at different times. Equation (7) shows that the value of this difference depends on the velocity v of the feature at (x,y) and the intensity gradient. Thus a fast moving feature with low contrast relative to the background is identical to a slow moving feature with a large contrast. This is the situation with slowly moving clouds that often have very bright edges and therefore large intensity gradients there, and are not eliminated by this method. Since features in a binary image have the same intensity gradients, better velocity selection is obtained using the same method but applied to the binary image. In this sense, the background-subtraction method is applied twice, once to the original grey-scale image to produce a binary image as described above, and again to the subsequent binary image, as described below.
The output from the initial image processing hardware is a binary image B(x,y,t) where B(x,y,t) = 1 if a feature is located at (x,y) at time t, and B(x,y,t) = 0 represents the background. Within this image the fast moving features belong to the aircraft. To deduce the aircraft wing position the two-dimensional binary image can be compressed into one dimension by summing along each pixel row of the binary image,

    P(y,t) = ∫ B(x,y,t) dx    (8)

where the aircraft image moves in the direction of the image columns. This row-sum profile is easily analysed in real time to determine the location of the aircraft. An example of a profile is shown in Figure 10 where the two peaks 30 and 31 of the aircraft profile correspond to the main wings (large peak 30) and the tail wings (smaller peak 31).
In general, there are other features present, such as clouds, that must be identified or filtered from the profile. To do this, differences between profiles from successive frames are taken, which is equivalent to a time derivative of the profile.
Letting A(x,y,t) be the aircraft, where A(x,y,t) = 1 if (x,y) lies within the aircraft and 0 otherwise, and letting C(x,y,t) represent clouds or other slowly moving objects, then it can be shown that the time derivative of the profile is given by

    ∂P(y,t)/∂t = ∫ ∂A(x,y,t)/∂t dx + ∫ ∂C(x,y,t)/∂t dx - ∫ ∂[A(x,y,t)C(x,y,t)]/∂t dx
               = ∫ ∂A(x,y,t)/∂t [1 - C(x,y,t)] dx + ε(C)    (9)

where ε(C) ≈ 0 is a small error term due to the small velocity of the clouds. Equation (9) demonstrates an obvious fact: the time derivative of a profile gives information on the changes (such as motion) of feature A only when the changes in A do not overlap features C. In order to obtain the best measure of the location of a feature, the overlap between features must be minimised. This means that C(x,y,t) must cover as small an area as possible. If the clouds are present but do not overlap the aircraft, then apart from a small error term, the time difference between profiles gives the motion of the aircraft. The difference profile corresponding to Figure 10 is shown in Figure 11, where the slow moving clouds have been eliminated. The wing positions occur at the zero-crossing points 33 and 34. Note that the clouds have been removed, apart from small error terms.
The method is implemented using a programmable logic circuit of the image processing system 8 which is programmed to perform the row sums on the binary image and to output these as a set of integers after each video field. When taking the difference between successive profiles, the best results were obtained using differences between like fields of the video image, i.e. even-even and odd-odd fields.
The difference profile is analysed to locate valid zero crossing points corresponding to the aircraft wing positions. A valid zero crossing is one in which the difference profile initially rises above a threshold I_T for a minimum distance Y_T and falls through zero to below -I_T for a minimum distance Y_T. The magnitude of the threshold I_T is chosen to be greater than the error term ε(C), which is done to discount the effect produced by slow moving features, such as clouds.
In addition, the peak value of the profile, corresponding to the aircraft wing, can be obtained by summing the difference values when they are valid up to the zero crossing point. This method removes the contributions to the peak from the non-overlapping clouds. It can be used as a guide to the wing span of the aircraft.
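The following sketch puts equation (8) and the zero-crossing test together: it row-sums a binary image, differences successive profiles, and accepts a crossing only if the difference rises above a threshold I_T and then falls below -I_T, each for a minimum run Y_T. It assumes NumPy, and the thresholds and toy image are illustrative, not values from the patent.

```python
import numpy as np

def row_sum_profile(binary_image):
    # Equation (8): P(y, t) = sum over image columns x of B(x, y, t)
    return binary_image.sum(axis=1).astype(float)

def valid_zero_crossings(profile_prev, profile_curr, i_t=5.0, y_t=3):
    """Zero crossings of the difference profile that rise above +I_T and then
    fall below -I_T, each for at least Y_T rows (slow clutter is rejected)."""
    diff = profile_curr - profile_prev
    found = []
    for y in range(y_t, len(diff) - y_t):
        if (diff[y - y_t:y] > i_t).all() and (diff[y:y + y_t] < -i_t).all():
            found.append(y)
    return found

def toy_aircraft(center_row, rows=480, cols=640):
    """Binary image whose row sums form a triangular peak (a crude wing profile)."""
    img = np.zeros((rows, cols), dtype=np.uint8)
    for r in range(rows):
        w = max(0, 300 - 60 * abs(r - center_row))
        img[r, :w] = 1
    return img

prev_frame = toy_aircraft(center_row=215)
curr_frame = toy_aircraft(center_row=210)   # the wing peak has moved 5 rows up the image
print(valid_zero_crossings(row_sum_profile(prev_frame), row_sum_profile(curr_frame)))
# -> [213], close to the midway wing position of 212.5
```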
The changes in position of the aircraft in the row-sum profile are used to determine a velocity for the aircraft that can be used for determining the image acquisition or trigger time, even if the aircraft is not in view. This situation may occur if the aircraft image moves into a region on the sensor that is saturated, or if the trigger point is not in the field of view of the camera 3. However, to obtain a reliable estimate of the velocity, geometric corrections to the aircraft position are required to account for the distortions in the image introduced by the camera lens. These are described below using the coordinate systems for the image and for the aircraft as shown in Figures 12 and 13, respectively.
For an aircraft at distance Z and at a constant altitude Y_0, the angle from the horizontal to the aircraft in the vertical plane is tan(θ_Y) = Y_0/Z. Since Y_0 is approximately constant, a normalised variable Z_N = Z/Y_0 can be used. If y_0 is the coordinate of the centre of the image, f is the focal length of the lens and θ_c is the angle of the camera from the horizontal in the vertical plane, then

    (y - y_0)/f = tan(θ_Y - θ_c)    (10)

    tan(θ_Y - θ_c) = (tan(θ_Y) - tan(θ_c)) / (1 + tan(θ_Y) tan(θ_c))    (11)

where the tangent has been expanded using a standard trigonometric identity. Using (10) and (11) an expression for the normalised distance Z_N is obtained,

    Z_N = (1 - β(y - y_0) tan(θ_c)) / (tan(θ_c) + β(y - y_0))    (12)

where β = 1/f. This equation allows a point in the image at y to be mapped onto a true distance scale, Z_N. Since the aircraft altitude is unknown, the actual distance cannot be determined. Instead, all points in the image profile are scaled to be equivalent to a specific point, y_1, in the profile. This point is chosen to be the trigger line or image acquisition line. The change in the normalised distance Z_N(y_1) at y_1 due to an increment in pixel value Δy_1 is ΔZ_N(y_1) = Z_N(y_1 + Δy_1) - Z_N(y_1). The number of such increments over a distance Z_N(y_2) - Z_N(y_1) is M = (Z_N(y_2) - Z_N(y_1)) / ΔZ_N(y_1). Thus the geometrically corrected pixel position at y_2 is

    y_2* = M Δy_1 + y_1 = [(Z_N(y_2) - Z_N(y_1)) / ΔZ_N(y_1)] Δy_1 + y_1    (13)

For an aircraft at distance Z and at altitude Y_0, a length X on it in the X direction subtends an angle in the horizontal plane of

    tan(θ_X) = X / sqrt(Y_0^2 + Z^2) = X_N / sqrt(1 + Z_N^2)    (14)

where normalised values have been used. If x_0 is the location of the centre of the image and f is the focal length of the lens, then

    (x - x_0)/f = tan(θ_X)    (15)

Using (14) and (15) the normalised length X_N can be obtained in terms of x and y,

    X_N = β(x - x_0) sqrt(1 + Z_N(y)^2)    (16)

As with the y coordinate, the x coordinate is corrected to a value at y_1. Since X_N should be independent of position, a length x_2 - x_0 at y_2 has a geometrically corrected length of

    x_2c - x_0 = (x_2 - x_0) sqrt(1 + Z_N(y_2)^2) / sqrt(1 + Z_N(y_1)^2)    (17)

The parameter β = 1/f is chosen so that x and y are measured in terms of pixel numbers. If y_0 is the centre of the camera image and is equal to half the total number of pixels, and if θ_FOV is the vertical field of view of the camera, then

    β = tan(θ_FOV/2) / y_0    (18)

This relation allows β to be calculated without knowing the lens focal length and the dimensions of the sensor pixels.
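A short sketch of the geometric correction using equations (12), (13) and (18) as reconstructed above; the sensor size, field of view, camera angle and trigger row are illustrative assumptions.

```python
import math

def beta_from_fov(theta_fov_deg, y0):
    # Equation (18): beta = tan(theta_FOV / 2) / y0, with y0 half the pixel count
    return math.tan(math.radians(theta_fov_deg) / 2.0) / y0

def z_n(y, y0, beta, theta_c):
    # Equation (12): normalised distance corresponding to image row y
    t = math.tan(theta_c)
    return (1.0 - beta * (y - y0) * t) / (t + beta * (y - y0))

def corrected_pixel(y2, y1, y0, beta, theta_c, dy1=1.0):
    # Equation (13): map row y2 onto the distance scale of the trigger row y1
    dz1 = z_n(y1 + dy1, y0, beta, theta_c) - z_n(y1, y0, beta, theta_c)
    m = (z_n(y2, y0, beta, theta_c) - z_n(y1, y0, beta, theta_c)) / dz1
    return m * dy1 + y1

# Illustrative assumptions: 576-row sensor, 30 degree vertical field of view,
# camera elevated 45 degrees, trigger (acquisition) line at row 400.
y0 = 288.0
beta = beta_from_fov(30.0, y0)
theta_c = math.radians(45.0)
print(round(corrected_pixel(y2=350.0, y1=400.0, y0=y0, beta=beta, theta_c=theta_c), 1))
```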
The velocity of a feature is expressed in terms of the number of pixels moved between image fields (or frames). Then if the position of the feature in frame n is y_n, the velocity is given by v_n = y_n - y_{n-1}. Over N frames, the average velocity is then

    v = (1/N) Σ_{n=1..N} v_n = (1/N) Σ_{n=1..N} (y_n - y_{n-1}) = (y_N - y_0)/N    (19)

which depends only on the start and finish points of the data. This is sensitive to errors in the first and last values and takes no account of the positions in between. The error in the velocity due to an error δy_N in the value y_N is

    δv = δy_N / N    (20)
A better method of velocity estimation uses all the position data obtained between these values. A time t is maintained which represents the current frame number. Then the current position is given by

    y = y_0 + v t    (21)

where y_0 is the unknown starting point and v is the unknown velocity. The N valid positions y_n measured from the feature are each measured at a time t_n.
Minimising the mean square error

    χ^2 = Σ_{n=1..N} (y_n - y_0 - v t_n)^2    (22)

with respect to v and y_0 gives two equations for the unknown quantities y_0 and v.
Solving for v yields

    v = (Σ y_n Σ t_n - N Σ y_n t_n) / ((Σ t_n)^2 - N Σ t_n^2)    (23)

where each sum runs over n = 1..N. This solution is more robust in the sense that it takes account of all the motions of the feature, rather than the positions at the beginning and at the end of the observations.
If the time is sequential, so that t_n = nΔt where Δt is the time interval between image frames, then the error in the velocity due to an error δy_n in the value y_n is

    δv = [6(N + 1 - 2n) / (N(N - 1)(N + 1))] δy_n    (24)

which, for the same error δy_n, gives a smaller error than (20) for N > 5. In general, the error in (24) varies as 1/N^2, which is less sensitive to uncertainties in position than (19).
If the aircraft is not in view, then the measurement of the velocity v can be used to estimate the trigger time. If y_1 is the position of a feature on the aircraft that was last seen at a time t_1, then the position at any time t is estimated from

    y = y_1 + v(t - t_1)    (25)

Based on this estimate of position, the aircraft will cross the trigger point located at y_T at a time t_T estimated by

    t_T = t_1 + (y_T - y_1) / v    (26)

An alternative method of processing the images obtained by the camera 3 to determine the aircraft position, which also automatically accounts for geometric corrections, is described below. The method is able to predict the time for triggering the acquisition system 10 based on observations of the position of the aircraft 28.
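A small sketch of the least-squares velocity estimate of equation (23) and the trigger-time prediction of equation (26), assuming NumPy; the synthetic track and the trigger row are illustrative assumptions.

```python
import numpy as np

def velocity_ls(t, y):
    # Equation (23): least-squares slope through the (t_n, y_n) observations
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t)
    return (y.sum() * t.sum() - n * (y * t).sum()) / (t.sum() ** 2 - n * (t * t).sum())

def trigger_time(t1, y1, v, y_trigger):
    # Equation (26): time at which the tracked feature reaches the trigger row
    return t1 + (y_trigger - y1) / v

# synthetic track: the wing row moves ~6.0 rows/frame with a little measurement noise
rng = np.random.default_rng(1)
t_obs = np.arange(20)
y_obs = 40.0 + 6.0 * t_obs + rng.normal(0.0, 0.5, size=20)

v = velocity_ls(t_obs, y_obs)
print(round(v, 2), round(trigger_time(t_obs[-1], y_obs[-1], v, y_trigger=400.0), 1))
```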
To describe the location of an aircraft 28 and its position, a set of coordinates are defined such that the x axis points vertically upwards, the z axis points horizontally along the runway towards the approaching aircraft, and y is horizontal and perpendicular to the runway. The image 66 of the aircraft is located in the digitised image by pixel values (x_p, y_p), where x_p is defined to be the vertical pixel value and y_p the horizontal value. The lens on the camera inverts the image so that a light ray from the aircraft strikes the sensor at the corresponding pixel position, where the sensor is located at the coordinate origin. Figure 14 shows a ray 68 from an object, such as a point on the aircraft, passing through a lens of a focal length f, and striking the imaging sensor at a point (x_p, y_p), where x_p and y_p are the pixel values. The equation locating a point on the ray is given by

    r = x_p(z/f - 1) x̂_c + y_p(z/f - 1) ŷ_c + z ẑ_c    (27)

where z is the horizontal distance along the ray, and the subscript c refers to the camera coordinates. The camera axis ẑ_c is collinear with the lens optical axis. It will be assumed that z/f >> 1, which is usually the case.
Assume the camera is aligned so that ŷ_c is aligned with the runway y coordinate, but the camera is tilted from the horizontal by an angle θ. Then

    x̂_c = x̂ cos(θ) - ẑ sin(θ),   ẑ_c = x̂ sin(θ) + ẑ cos(θ)    (28)

and a point on the ray from the aircraft to its image is given by

    r = (z/f)[(x_p cos(θ) + f sin(θ)) x̂ + y_p ŷ + (f cos(θ) - x_p sin(θ)) ẑ]    (29)

Letting the aircraft trajectory be given by

    r(t) = (x_0 + z(t) tan(θ_GS)) x̂ + y_0 ŷ + z(t) ẑ    (30)

where z(t) is the horizontal position of the aircraft at time t, θ_GS is the glide-slope angle, and the aircraft is at altitude x_0 and has a lateral displacement y_0 at z(t_0) = 0.
Here, t = t_0 is the time at which the image acquisition system 10 is triggered, i.e. when the aircraft is overhead with respect to the cameras 7.
Comparing equations (29) and (30) allows z to be written in terms of z(t) and gives the pixel positions as

    x_p(t) = f (z(t)[tan(θ_GS) cos(θ) - sin(θ)] + x_0 cos(θ)) / (z(t)[tan(θ_GS) sin(θ) + cos(θ)] + x_0 sin(θ))    (31)

and

    y_p(t) = f y_0 / (z(t)[tan(θ_GS) sin(θ) + cos(θ)] + x_0 sin(θ))    (32)

Since x_p is the vertical coordinate and its value controls the acquisition trigger, the following discussion will be centred on equation (31). The aircraft position is given by

    z(t) = v(t_0 - t)    (33)

where v is the speed of the aircraft along the ẑ axis.
The aim is to determine t_0 from a series of values of x_p(t) at times t determined from the image of the aircraft. For this purpose, it is useful to rearrange (31) into the following form

    c - t + a x_p + b t x_p = 0    (34)

where

    a = (v t_0 (tan(θ_GS) + cot(θ)) + x_0) / (f v (1 - tan(θ_GS) cot(θ)))    (35)

    b = -(tan(θ_GS) + cot(θ)) / (f (1 - tan(θ_GS) cot(θ)))    (36)

and

    c = t_0 - x_0 cot(θ) / (v (1 - tan(θ_GS) cot(θ)))    (37)

The pixel value corresponding to the trigger point vertically upwards is x_T = f cot(θ). The trigger time, t_0, can be expressed in terms of the parameters a, b and c as

    t_0 = (c + a x_T) / (1 - b x_T)    (38)

The parameters a, b and c are unknown since the aircraft glide slope, speed, altitude and the time at which the trigger is to occur are unknown. However, it is possible to estimate these using equation (34) by minimising the chi-square statistic.
Essentially, equation (34) is a prediction of the relationship between the measured values x_p and t, based on a simple model of the optical system of the detection camera 3 and the trajectory of the aircraft 28. The parameters a, b and c are to be chosen so as to minimise the error of the model fit to the data, i.e. make equation (34) be as close to zero as possible.
Let x_n be the location of the aircraft in the image, i.e. the pixel value, obtained at time t_n. Then the chi-square statistic is

    χ^2 = Σ_{n=1..N} (c - t_n + a x_n + b t_n x_n)^2    (39)

for N pairs of data points. The optimum values of the parameters are those that minimise the chi-square statistic, i.e. those that best satisfy equation (34). For convenience, the following symbols are defined

    X = Σ x_n,  T = Σ t_n,  P = Σ x_n t_n,  Y = Σ x_n^2,
    Q = Σ x_n^2 t_n,  R = Σ x_n t_n^2,  S = Σ x_n^2 t_n^2    (40)

where each sum runs over n = 1..N. Then the values of a, b and c that minimise equation (39) are given by

    a = [(NP - XT)(NS - P^2) - (NR - PT)(NQ - PX)] / [(NY - X^2)(NS - P^2) - (NQ - PX)^2]    (41)

    b = [(NY - X^2)(NR - PT) - (NP - XT)(NQ - PX)] / [(NY - X^2)(NS - P^2) - (NQ - PX)^2]    (42)

and

    c = (T - aX - bP) / N    (43)

On obtaining a, b and c from equations (41) to (43), t_0 can then be obtained from equation (38).
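The sketch below generates synthetic pixel positions from equations (31) and (33), fits the linear model of equation (34) (the same least-squares problem as equations (39) to (43), here solved with a generic linear solver), and predicts the trigger time with equation (38). The camera and trajectory values are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Illustrative camera/trajectory assumptions (not values from the patent)
f = 1000.0                      # focal length in pixel units
theta = np.radians(60.0)        # camera tilt from the horizontal
theta_gs = np.radians(3.0)      # glide-slope angle
v, x0 = 70.0, 300.0             # aircraft speed (m/s) and altitude when overhead (m)
t0_true = 2.0                   # true trigger time (s): aircraft overhead

def x_pixel(t):
    # Equations (31) and (33)
    z = v * (t0_true - t)
    num = z * (np.tan(theta_gs) * np.cos(theta) - np.sin(theta)) + x0 * np.cos(theta)
    den = z * (np.tan(theta_gs) * np.sin(theta) + np.cos(theta)) + x0 * np.sin(theta)
    return f * num / den

t_obs = np.linspace(0.0, 0.6, 16)     # aircraft observed well before it is overhead
x_obs = x_pixel(t_obs) + np.random.default_rng(2).normal(0.0, 0.2, t_obs.size)

# Equation (34): c - t + a*x + b*t*x = 0  ->  t = c + a*x + b*(t*x); solve for (a, b, c)
A = np.column_stack([x_obs, t_obs * x_obs, np.ones_like(x_obs)])
(a, b, c), *_ = np.linalg.lstsq(A, t_obs, rcond=None)

x_trigger = f / np.tan(theta)                          # pixel value looking vertically upwards
t0_est = (c + a * x_trigger) / (1.0 - b * x_trigger)   # equation (38)
print(round(t0_est, 3), "vs true", t0_true)
```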
Using data obtained from video images of an aircraft landing at Melbourne airport, a graph of aircraft image position as a function of image frame number is shown in Figure 15. The data was processed using equations (41) to (43) and (38) to yield the predicted value for the trigger frame number, t_0 = 66, corresponding to trigger point 70. The predicted point 70 is shown in Figure 16 as a function of frame number.
The predicted value is t_0 = 66 ± 0.5 after 34 frames. In this example, the aircraft can be out of the view of the camera 3 for up to 1.4 seconds and the system 2 can still trigger the acquisition camera 7 to within 40 milliseconds of the correct time. For an aircraft travelling at 62.5 m/s, the system 2 captures the aircraft to within 2.5 metres of the required position.
The tracking system 6, 8 may also use an Area-Parameter Accelerator (APA) digital processing unit, as discussed in International Publication No. WO 93/19441, to extract additional information, such as the aspect ratio of the wing span to the fuselage length of the aircraft and the location of the centre of the aircraft.
The tracking system 1 can also be implemented using one or more pyroelectric sensors 27 with a signal processing wing detection unit 29. Each sensor 27 has two adjacent pyroelectric sensing elements 40 and 42, as shown in Figure 17, which are electrically connected so as to cancel identical signals generated by each element. A plate 44 with a slit 46 is placed above the sensing elements 40 and 42 so as to provide the elements 40 and 42 with different fields of view 48 and 50. The fields of view 48 and 50 are significantly narrower than the field of view of a detection camera discussed previously. If aircraft move above the runway in the direction indicated by the arrow 48, the first element 40 has a front field of view 48 and the second element 42 has a rear field of view 50. As an aircraft 28 passes over the sensor 27, the first element 40 detects the thermal radiation of the aircraft before the second element 42; the aircraft 28 will then be momentarily in both fields of view 48 and 50, and then only detectable by the second element 42. An example of the difference signals generated by two sensors 27 is illustrated in Figure 18, where graph 52 is for a sensor 27 which has a field of view directed at 90° to the horizontal and a sensor 27 which is directed at 75° to the horizontal. Graph 54 is an expanded view of the centre of graph 52. The zero crossing points of peaks 56 in the graphs 52 and 54 correspond to the point at which the aircraft 28 passes the sensor 27. Using the known position of the sensor 27, the time at which the aircraft passes, and the speed of the aircraft 28, a time can be determined for generating an acquisition signal to trigger the high resolution acquisition cameras 7. The speed can be determined from movement of the zero crossing points over time, in a similar manner to that described previously.
The image acquisition system 10, as mentioned previously, acquires an image of the aircraft with sufficient resolution for the aircraft registration characters to be obtained using optical character recognition. According to one embodiment of the acquisition system 10, the system 10 includes two high resolution cameras 7, each comprising a lens and a CCD detector array. Respective images obtained by the two cameras 7 are shown in Figures 19 and 20. The minimum pixel dimension and the focal length of the lens determine the spatial resolution in the image. If the dimension of a pixel is L_p, the focal length is f and the altitude of the aircraft is h, then the dimension of a feature W_min on the aircraft that is mapped onto a pixel is

    W_min = L_p h / f,   or   f = L_p h / W_min    (44)

The character recognition process used requires each character stroke to be mapped onto at least four pixels with contrast levels having at least 10% difference from the background. The width of a character stroke in the aircraft registration is regulated by the ICAO. According to the ICAO Report, Annex 7, sections 4.2.1 and 5.3, the character height beneath the port wing must be at least 50 centimetres and the character stroke must be 1/6th the character height. Therefore, to satisfy the character recognition criterion, the dimension of the feature on the aircraft that is mapped onto a pixel should be W_min = 2 centimetres, or less. Once the CCD detector is chosen, L_p is fixed and the focal length of the system 10 is determined by the maximum altitude of the aircraft at which the spatial resolution W_min = 2 centimetres is required.
The field of view of the system 10 at altitude h is determined by the spatial resolution W_min chosen at altitude h_max and the number of pixels N_pl along the length of the CCD,

    W_FOV = N_pl W_min h / h_max    (45)

For h = h_max and N_pl = 1552, the field of view is W_FOV = 31.04 metres.
To avoid blurring due to motion of the aircraft, the image must move a distance less than the size of a pixel during the exposure. If the aircraft velocity is v, then the time to move a distance equal to the required spatial resolution W_min is

    t = W_min / v    (46)

The maximum aircraft velocity that is likely to be encountered on landing or take-off is v = 160 knots ≈ 82 m/s. With W_min = 0.02 m, the exposure time to avoid excessive blurring is t ≈ 240 μs.
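The sketch below reproduces the figures quoted above from equations (44) to (46). The 9 μm pixel pitch (inferred from the 81 μm^2 pixel area quoted later) and the 700 m maximum altitude are illustrative assumptions.

```python
# Equations (44) to (46) with the figures quoted in the text.
L_p = 9e-6          # pixel dimension (m), assumed
h_max = 700.0       # maximum aircraft altitude (m), assumed
W_min = 0.02        # required spatial resolution on the aircraft (m)
N_pl = 1552         # pixels along the CCD
v_max = 82.0        # maximum aircraft speed (m/s)

f = L_p * h_max / W_min      # equation (44): required focal length
W_fov = N_pl * W_min         # equation (45) evaluated at h = h_max
t_exp = W_min / v_max        # equation (46): exposure time to avoid motion blur

print(f"f = {f*1000:.0f} mm, field of view = {W_fov:.2f} m, exposure <= {t_exp*1e6:.0f} us")
```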
The focal length of the lens in the system 10 can be chosen to obtain the required spatial resolution at the maximum altitude. This fixes the field of view.
Alternatively, the field of view may be varied by altering the focal length according to the altitude of the aircraft. The range of focal lengths required can be calculated from equation (44).
The aircraft registration, during daylight conditions, is illuminated by sunlight or scattered light reflected from the ground. The aircraft scatters the light that is incident, some of which is captured by the lens of the imaging system. The considerable amount of light reflected from aluminium fuselages of an aircraft can affect the image obtained, and is taken into account. The light power falling onto a pixel of the CCD is given by

    P_p = L_λ Δλ Ω_sun R_gnd R_A A_p / (8 f#^2)    (47)

where L_λ is the solar spectral radiance, Δλ is the wavelength bandpass of the entire configuration, Ω_sun is the solid angle subtended by the sun, R_gnd is the reflectivity of the ground, R_A is the reflectivity of the aircraft, A_p is the area of a pixel in the CCD detector and f# is the lens f-number.
The solar spectral radiance L_λ varies markedly with wavelength λ. The power falling on a pixel will therefore vary over a large range. This can be limited by restricting the wavelength range Δλ passing to the sensor and optimally choosing the centre wavelength of this range. The optimum range and centre wavelength are chosen to match the characteristics of the imaging sensor.
In one embodiment, the optimum wavelength range and centre wavelength are chosen in the near infrared waveband, 0.69 to 2.0 microns. This limits the variation in light power on a pixel in the sensor to within the useable limits of the sensor. A KODAK™ KAF-1600L imaging sensor (a monolithic silicon sensor with lateral overflow anti-blooming) was chosen that incorporated a mechanism to accommodate a thousandfold saturation of each pixel, giving a total acceptable range of light powers in each pixel of 10^5. This enables the sensor to produce a useful image of an aircraft when very bright light sources, for example the sun, are in its field of view.
The correct choice of sensor and the correct choice of wavelength range and centre wavelength enables an image to be obtained within a time interval that arrests the motion of the aircraft and that provides an image with sufficient contrast on the aircraft registration to enable digital image processing and recognition of the registration characters.
In choosing the wavelength range and centre wavelength, it was important to avoid dazzling light from the supplementary illumination of the illumination unit 16. The optimum wavelength range was therefore set to between 0.69 μm and 2.0 μm.
The power of sunlight falling onto a pixel directly from the sun is

    P_p-sun = π L_λ Δλ A_p / (4 f#^2)    (48)

The relative light power from the sun and from the aircraft registration falling onto a single pixel is

    P_p-sun / P_p = 2π / (Ω_sun R_gnd R_A)    (49)

With Ω_sun = 6.8 × 10^-5 steradians, R_gnd = 0.2 and R_A = 1, the ratio is P_p-sun/P_p ≈ 4.6 × 10^5. This provides an estimate of the relative contrast between the image of the sun and the image of the underneath of the aircraft on a CCD pixel. The CCD sensor and system electronics are chosen to accommodate this range of light powers.
In poor lighting conditions, the aircraft registration requires additional illumination from the illumination unit 16. The light source of the unit 16 needs to be sufficient to illuminate the aircraft at its maximum altitude. If the source is designed to emit light into a solid angle that just covers the field of view of the imaging system, then the light power incident onto a pixel of the imaging system 10 due to light emitted from the source and reflected from the aircraft is given by

    P_p = P_s R_A A_p / (8 A_A N_ptot f#^2)    (50)

where A_A is the area on the aircraft imaged onto a pixel of area A_p, P_s is the light power of the source, R_A is the aircraft reflectivity, N_ptot is the total number of pixels in the CCD sensor and f# is the f-number of the lens. The power of the source required to match the daytime reflected illumination is estimated by setting P_p = 7.3 × 10^-11 W, R_A = 1, A_p = 81 μm^2, N_ptot = 1552 × 1032, f# = 1.8 and noting that A_A = W_min^2, where W_min = 0.02 m. Then P_s = 1.50 × 10^4 W. For a Xenon flash lamp, the flash time is typically t = 300 μs, which compares favourably with the exposure time to minimise motion blurring. Then the source must deliver an energy of E_s = P_s t = 4.5 J. This is the light energy in a wavelength band Δλ = 0.1 μm. Xenon flash lamps typically have 10% of their light energy within this bandpass centred around λ = 0.8 μm. Furthermore, the flash lamp may only be 50% efficient. Thus the electrical energy required is approximately 90 J. Flash lamps that deliver energies of 1500 J in 300 μs are readily available. Illumination with such a flash lamp during the day reduces the contrast between the direct sun and the aircraft registration, thereby relaxing the requirement for over-exposure tolerance of the CCD sensor. This result depends on the flash lamp directing all of its energy into the field of view only and on the lens focal length being optimally chosen to image the region of dimension W_min = 0.02 m onto a single pixel.
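A quick check of the arithmetic in equations (49) and (50) using the values quoted in the text; the Xenon bandpass fraction and lamp efficiency are the figures stated above, and nothing else is assumed.

```python
import math

# Equation (49): contrast between direct sun and the registration on the aircraft
omega_sun, r_gnd, r_a = 6.8e-5, 0.2, 1.0
print(f"P_sun / P_p = {2 * math.pi / (omega_sun * r_gnd * r_a):.2e}")   # ~4.6e5

# Equation (50) rearranged for the source power P_s needed to match daylight
P_p = 7.3e-11            # required light power on a pixel (W)
A_p = 81e-12             # pixel area (m^2)
A_A = 0.02 ** 2          # area on the aircraft imaged onto one pixel (m^2)
N_ptot = 1552 * 1032     # total pixels in the CCD
f_number = 1.8
P_s = 8 * P_p * A_A * N_ptot * f_number ** 2 / (r_a * A_p)
E_light = P_s * 300e-6                         # 300 us flash
E_electrical = E_light / 0.10 / 0.50           # 10% in band, 50% lamp efficiency
print(f"P_s = {P_s:.3g} W, light energy = {E_light:.2f} J, electrical ~ {E_electrical:.0f} J")
```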
In one embodiment, the aperture of the lens on the acquisition camera 7 is automatically adjusted to control the amount of light on the imaging sensor in order to optimise the image quality for digital processing. In the image obtained, the intensity level of the registration characters relative to the underside of the aircraft needs to be maintained to provide good contrast between the two for OCR. The power P_s of the flash 16 is automatically adjusted in accordance with the aperture setting f# of the acquisition camera 7 to optimise the image quality and maintain the relative contrast between the registration characters and the underside of the aircraft, in accordance with the relationship expressed in equation (50). For example, during the day the aperture of the lens may be very small and the power of the flash may be increased to provide additional illumination of the underside, whereas during night conditions, the aperture may be fully opened and the power of the flash reduced considerably as additional illumination is not required. As an alternative, or in addition, the electrical gain of the electronic circuits connected to an acquisition camera 7 is adjusted automatically to optimise the image quality.
To appropriately set the camera aperture and/or gain, one or more point optical sensors 60, 62, as shown in Figure 21, are used to measure the ambient lighting conditions. The electrical output signals of the sensors 60, 62 are processed by the acquisition system 10 to produce the information required to control the camera aperture and/or gain. Two point sensors 60, 62 sensitive to the same optical spectrum as the acquisition cameras 7 can be used. One sensor 60 receives light from the sky that passes through a diffusing plate 64 onto the sensor 60. The diffusing plate 64 collects light from many different directions and allows it to reach the sensor 60. The second sensor 62 is directed towards the ground to measure the reflected light from the ground.
The high resolution images obtained of the aircraft by the acquisition system are submitted, as described previously, to the analysis system 12 which performs optical character recognition on the images to extract the registration codes of the aircraft.
The analysis system 12 processes the aircraft images obtained by a high resolution camera 7 according to an image processing procedure 100, as shown in Figure 22, which is divided into two parts 102 and 104. The first part 102 operates on a sub-sampled image 105, as shown in Figure 23, to locate regions that contain features that may be registration characters, whereas the second part 104 executes a similar procedure but using the full resolution of the original image, and is executed only on the regions identified by the first part 102. The sub-sampled image 105 is the original image with only one pixel in every four retained in both the row and column directions, resulting in a one-in-sixteen sampling ratio. Use of the sub-sampled image improves processing time sixteen-fold.
The first part 102 receives the sub-sampled image at step 106 and filters the image to remove features which are larger than the expected size of the registration characters at step 108. Step 108 executes a morphological operation of linear closings applied to a set of lines angled between 0 and 180°. The operation passes a kernel or window across the image 105 to extract lines which exceed a predetermined length and are at a predetermined angle. The kernel or window is passed over the image a number of times and each time the predetermined angle is varied. The lines extracted from all of the passes are then subtracted from the image 105 to provide a filtered difference image 109. The filtered difference image 109 is then thresholded or binarised at step 110 to convert it from a grey scale image to a binary scale image 111. This is done by setting to 1 all image values that are greater than a threshold and setting to 0 all other image values. The threshold at a given point in the image is determined from a specified multiple of standard deviations from the mean calculated from the pixel values within a window centred on the given point. The binarised image 111 is then filtered at step 112 to remove all features that have pixel densities in a bounding box that are smaller or larger than the expected pixel density for a bounded registration character. The image 111 is then processed at step 114 to remove all features which are not clustered together like registration characters. Step 114 achieves this by grouping together features that have similar sizes and that are close to one another. Groups of features that are smaller than a specified size are removed from the image to obtain a cleaned image 113. The cleaned image 113 is then used at step 116 to locate regions of interest. Regions of interest are obtained in step 116 from the location and extents of the groups remaining after step 114. Step 116 produces regions of interest which include the registration characters and areas of the regions that are bounded above and below, as for the region 115 shown in Figure 23.
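A minimal sketch of the filtering just described (linear closings over a set of angles, subtraction to leave character-sized features, and a local mean-plus-k-standard-deviations threshold), assuming NumPy/SciPy and dark characters on a lighter wing; the line length, angle step, window size and the constant k are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy import ndimage

def line_footprint(length, angle_deg):
    """Boolean footprint containing a line of the given length and angle."""
    fp = np.zeros((length, length), dtype=bool)
    c = (length - 1) / 2.0
    theta = np.radians(angle_deg)
    for s in np.linspace(-c, c, 2 * length):
        r, q = int(round(c + s * np.sin(theta))), int(round(c + s * np.cos(theta)))
        fp[r, q] = True
    return fp

def character_candidates(image, length=31, angles=range(0, 180, 15)):
    """Linear closings over a set of angles; the difference from the original
    suppresses long lines at any tested angle and keeps small dark features
    (candidate registration characters)."""
    img = image.astype(float)
    closed = np.min([ndimage.grey_closing(img, footprint=line_footprint(length, a))
                     for a in angles], axis=0)
    return closed - img

def adaptive_binarise(image, window=25, k=2.0):
    """Threshold each point at the local mean plus k local standard deviations."""
    img = image.astype(float)
    mean = ndimage.uniform_filter(img, size=window)
    sq = ndimage.uniform_filter(img ** 2, size=window)
    std = np.sqrt(np.clip(sq - mean ** 2, 0.0, None))
    return (img > mean + k * std).astype(np.uint8)

# usage on a sub-sampled grey-scale image `sub` (a hypothetical NumPy array):
# diff = character_candidates(sub)
# binary = adaptive_binarise(diff)
```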
The regions of interest obtained by the first part 102 of the procedure 100 are further processed individually using the full resolution of the original image and the second part 104 of the procedure. The second part 104 takes a region of interest 115 from the original image at step 120 and for that region filters out features larger than the expected character sizes at step 122, using the same morphological operation of linear closings applied to a set of lines angled between 0° and 180°, followed by image subtraction, as described above, to obtain image 117. The filtered image 117 is then binarised at step 124 by selecting a filter threshold that is representative of the pixel values at the edges of features. To distinguish the registration characters from the aircraft wing or body the filter threshold needs to be set correctly. A mask image of significant edges in image 117 is created by calculating edge-strengths at each point in image 117 and setting to 1 all points that have edge-strengths greater than a mask threshold and setting to 0 all other points. An edge-strength is determined by taking at each point the pixel gradients in two directions, Δx and Δy, and calculating √(Δx² + Δy²) to give the edge-strength at that point. The mask threshold at a given point is determined from a specified multiple of standard deviations from the mean calculated from the edge-strengths within a window centred on the given point. The filter threshold for each point in image 117 is then determined from a specified multiple of standard deviations from the mean calculated from the pixel values at all points within a window centred on the given point that correspond to non-zero values in the mask image. The binarised image 118 is then filtered at step 126 to remove features that are smaller than the expected character sizes. Features that have similar sizes, that are near to one another and that are associated with similar image values in image 117 are clustered together at step 128. At step 130 the clustered features that have sizes, orientations and relative positions that deviate too much from the averages for their clusters are filtered out, to leave features that form linear chains. Then at step 132, if the number of features remaining in the image produced by step 130 is greater than 3, a final image is created by rotating image 118 to align the linear chain of features with the image rows and by masking out features not belonging to the linear chain. The final image is passed to a character recognition process 200 to determine whether the features are registration characters and, if so, which characters.
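As an illustration of the edge-strength mask used to steer the binarisation threshold at step 124, the following sketch computes √(Δx² + Δy²) from simple gradient operators and keeps points that exceed the local mean by a multiple of the local standard deviation. The choice of gradient operator, the window size and the multiple are assumptions for the example only.

```python
import numpy as np
from scipy import ndimage

def significant_edge_mask(region, win=15, k_mask=2.0):
    """Mask of points whose edge-strength exceeds the local mean by k_mask std deviations."""
    region = region.astype(float)
    gx = ndimage.sobel(region, axis=1)               # gradient in x (operator is an assumption)
    gy = ndimage.sobel(region, axis=0)               # gradient in y
    strength = np.sqrt(gx ** 2 + gy ** 2)            # edge-strength sqrt(dx^2 + dy^2)

    mean = ndimage.uniform_filter(strength, size=win)
    mean_sq = ndimage.uniform_filter(strength ** 2, size=win)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return strength > (mean + k_mask * std)          # 1 at significant edges, 0 elsewhere
```

The filter threshold of step 124 would then be taken, in the same mean-plus-standard-deviations form, from the pixel values of image 117 at the non-zero points of this mask.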
The final image undergoes a standard optical character recognition process 200, as shown in Figure 24, to generate character string data which represents the ICAO characters on the port wing. The process 200 includes receiving the final image at step 202, which is produced by step 132 of the image processing procedure 100, and separating the characters of the image at step 204. The sizes of the characters are normalised at step 206, and at step 208 the alignment of the characters is corrected and further normalisation occurs. Character features are extracted at step 210 and at step 212 an attempt is made to classify the extracted character features.
Character rules are applied to the classified features at step 214 so as to produce a binary string representative of the registration characters at step 216.
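Purely as an illustrative sketch of the character separation and size normalisation of steps 204 to 208, and not the patent's implementation, connected components of the binarised final image can be taken left to right and resampled onto a fixed cell before feature extraction and classification. The cell size and the nearest-neighbour resampling are assumptions.

```python
import numpy as np
from scipy import ndimage

def separate_and_normalise(final_image, cell=(24, 16)):
    """Split the binarised final image into characters and resample each to a fixed cell."""
    labels, _ = ndimage.label(final_image > 0)
    boxes = list(enumerate(ndimage.find_objects(labels), start=1))
    boxes.sort(key=lambda pair: pair[1][1].start)    # left-to-right reading order

    characters = []
    for idx, box in boxes:
        glyph = (labels[box] == idx).astype(float)
        # Crude nearest-neighbour resize onto the fixed cell.
        rows = np.linspace(0, glyph.shape[0] - 1, cell[0]).astype(int)
        cols = np.linspace(0, glyph.shape[1] - 1, cell[1]).astype(int)
        characters.append(glyph[np.ix_(rows, cols)])
    return characters   # each entry is ready for feature extraction and classification
```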
Although the system 2 has been described above as being one which is particularly suitable for detecting an aircraft, it should be noted that many features of the system can be used for detecting and identifying other moving objects. For example, the embodiments of the tracking system 1 may be used for tracking land vehicles. The system 2 may be employed to acquire images of and identify automobiles at tollway points on a roadway.
Many modifications will be apparent to those skilled in the art without departing from the scope of the present invention as hereinbefore described with reference to the accompanying drawings.

Claims (18)

1. An image acquisition system including: at least one camera for acquiring an image of at least part of a moving object, in response to a trigger signal, and analysis means for processing said image to locate a region in said image including markings identifying said object and for processing said region to extract said markings for a recognition process, such that said analysis means sub-samples said image, extracts lines exceeding a predetermined length and at predetermined angles, binarises the image, removes features smaller or larger than said markings, removes features not clustered as said markings, and locates said region using the remaining features.
2. An image acquisition system as claimed in claim 1, wherein said camera images received radiation between 0.69 to 2.0 µm.
3. An image acquisition system as claimed in claim 2, wherein said camera has an exposure time of 240 µs.
4. An image acquisition system as claimed in claim 2, including an infrared flash having its power adjusted on the basis of the aperture setting of said camera.
5. An image acquisition system as claimed in claim 2, including optical sensor means positioned to obtain measurements of ambient direct light and reflected light for the field of view of said camera and adjust settings of said camera on the basis of said measurements.
6. An image acquisition system as claimed in claim 1, wherein said analysis means extracts said region from said image and processes said region by removing features larger than expected marking sizes, binarising said region, removing features smaller than expected marking sizes, removing features not clustered as identifying markings, and passing the remaining image for optical recognition if including more than a predetermined number of markings.
7. An image acquisition system as claimed in claim 2, wherein said moving object is an aircraft.
8. An image acquisition system as claimed in claim 7, wherein said aircraft is in flight.
9. An image acquisition method including: acquiring an image of at least part of a moving object, in response to a trigger signal, using at least one camera, and processing said image to locate a region in said image including markings identifying said object and processing said region to extract said markings for a recognition process, said processing to locate said region including sub-sampling said image, extracting lines exceeding a predetermined length and at predetermined angles, binarising the image, removing features smaller or larger than said markings, removing features not clustered as said markings, and locating said region using the remaining features.
10. An image acquisition method as claimed in claim 9, wherein said camera images received radiation between 0.69 to 2.0 µm.
11. An image acquisition method as claimed in claim 10, wherein said camera has an exposure time of 240 µs.
12. An image acquisition method as claimed in claim 10, including adjusting the power of an infrared flash for said camera on the basis of the aperture setting of said camera.
13. An image acquisition method as claimed in claim 10, including obtaining automatic measurements of ambient direct light and reflected light for the field of view of said camera and adjusting settings of said camera on the basis of said measurements.
14. An image acquisition method as claimed in claim 9, wherein said region processing includes extracting said region from said image, removing features larger than expected marking sizes, binarising said region, removing features with areas smaller or larger than expected marking areas, removing features not clustered as identifying markings, and passing the remaining image for optical recognition if including more than a predetermined number of markings.
15. An image acquisition method as claimed in claim 10, wherein said moving object is an aircraft.
16. An image acquisition method as claimed in claim 15, wherein said aircraft is in flight.
17. An image acquisition system substantially as hereinbefore described with reference to the accompanying drawings.
18. An image acquisition method substantially as hereinbefore described with reference to the accompanying drawings.

DATED this 20th day of March, 2000
COMMONWEALTH SCIENTIFIC AND INDUSTRIAL RESEARCH ORGANISATION
By its Patent Attorneys
DAVIES COLLISON CAVE
AU22570/00A 1996-03-29 2000-03-24 An image acquisition system Ceased AU760788B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU22570/00A AU760788B2 (en) 1996-03-29 2000-03-24 An image acquisition system

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AUPN9032 1996-03-29
AU21438/97A AU720315B2 (en) 1996-03-29 1997-03-27 An aircraft detection system
AU22570/00A AU760788B2 (en) 1996-03-29 2000-03-24 An image acquisition system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU21438/97A Division AU720315B2 (en) 1996-03-29 1997-03-27 An aircraft detection system

Publications (2)

Publication Number Publication Date
AU2257000A AU2257000A (en) 2000-06-08
AU760788B2 true AU760788B2 (en) 2003-05-22

Family

ID=3710696

Family Applications (1)

Application Number Title Priority Date Filing Date
AU22570/00A Ceased AU760788B2 (en) 1996-03-29 2000-03-24 An image acquisition system

Country Status (1)

Country Link
AU (1) AU760788B2 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1993019441A1 (en) * 1992-03-20 1993-09-30 Commonwealth Scientific And Industrial Research Organisation An object monitoring system
WO1993021617A1 (en) * 1992-04-16 1993-10-28 Traffic Technology Limited Vehicle monitoring apparatus
WO1996012265A1 (en) * 1994-10-14 1996-04-25 Airport Technology In Scandinavia Ab Aircraft identification and docking guidance systems

Also Published As

Publication number Publication date
AU2257000A (en) 2000-06-08

Similar Documents

Publication Publication Date Title
US5974158A (en) Aircraft detection system
US10043404B2 (en) Method and system for aircraft taxi strike alerting
US6678395B2 (en) Video search and rescue device
EP2856207B1 (en) Gated imaging using an adaptive depth of field
US20110279682A1 (en) Methods for Target Tracking, Classification and Identification by Using Foveal Sensors
GB2486947A (en) Determining a total number of people in an image obtained via an infra-red imaging system
CA2250927A1 (en) An aircraft detection system
US10902630B2 (en) Passive sense and avoid system
CN108682105B (en) One kind is based on multispectral transmission line forest fire exploration prior-warning device and method for early warning
Sudhakar et al. Imaging Lidar system for night vision and surveillance applications
Härer et al. PRACTISE–Photo rectification and classification software (V. 2.1)
Kölling et al. Aircraft-based stereographic reconstruction of 3-D cloud geometry
EP0597715A1 (en) Automatic aircraft landing system calibration
AU760788B2 (en) An image acquisition system
Gruber et al. Learning super-resolved depth from active gated imaging
AU720315B2 (en) An aircraft detection system
CN101173984A (en) Spaceborne target detection tracing camera in sun viewing blind zone
US10257472B2 (en) Detecting and locating bright light sources from moving aircraft
Beisley Spectral detection of human skin in VIS-SWIR hyperspectral imagery without radiometric calibration
Roberts et al. Suspended sediment concentration estimation from multi-spectral video imagery
EP0297665A2 (en) Radiation source detection
EP3428686A1 (en) A vision system and method for a vehicle
Daley et al. Detection of vehicle occupants in HOV lanes: exploration of image sensing for detection of vehicle occupants
US20200264047A1 (en) Hyperspectral imaging systems
US20230119179A1 (en) Inspection system and method for controlling the same, and storage medium

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
NB Applications allowed - extensions of time section 223(2)

Free format text: THE TIME IN WHICH TO PROVIDE SEARCH RESULTS UNDER S45(3) HAS BEEN EXTENDED TO 20040401