WO2000047511A1 - Obstruction detection system - Google Patents

Obstruction detection system Download PDF

Info

Publication number
WO2000047511A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
area
images
door
sill
Prior art date
Application number
PCT/NZ2000/000013
Other languages
French (fr)
Inventor
Russell Watson
Ian Woodhead
Harrie Visschedijk
Dave Burkitt
Original Assignee
Tl Jones Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tl Jones Limited filed Critical Tl Jones Limited
Priority to AU27019/00A priority Critical patent/AU2701900A/en
Priority to JP2000598438A priority patent/JP2003524813A/en
Priority to EP00905485A priority patent/EP1169255A4/en
Priority to CA002362326A priority patent/CA2362326A1/en
Publication of WO2000047511A1 publication Critical patent/WO2000047511A1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 13/00 Doors, gates, or other apparatus controlling access to, or exit from, cages or lift well landings
    • B66B 13/24 Safety devices in passenger lifts, not otherwise provided for, for preventing trapping of passengers
    • B66B 13/26 Safety devices in passenger lifts, not otherwise provided for, for preventing trapping of passengers between closing doors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection

Definitions

  • the present invention relates to obstruction detection systems. More particularly, but not exclusively, the present invention relates to methods and apparatus for detecting obstructions between or in the vicinity of elevator doors. The present invention may also be applied to obstruction detection in the context of industrial environments, safety applications, monitoring machinery activity, process control and the movement of people.
  • prior art techniques focus on using optical devices to detect the presence of an obstruction or obstructions within a lift door detection area.
  • These known systems typically use arrays of infrared (IR) emitters with corresponding receivers.
  • One prior art technique consists of "shining" a sequential array of IR beams across an elevator door entrance and an obstruction event is triggered by interrupting or breaking one or more of the beams. Such an event activates a switching device that reverses or stops movement of the elevator door.
  • An advantage of such systems is that they can be located along the edges of the moving doors and can thus be specifically adapted to deal with variable geometry entrance obstruction detection in the plane defined by one or more elevator doors.
  • the Otis imaging system collects images at two different times and then uses the difference between the two images to determine whether there is a moving object in the elevator obstruction detection zone.
  • This technique relies on the use of a reference image that is captured at a time before the second obstruction detection image is recorded.
  • the two images are then subtracted and thresholded to produce an image containing only the outlines of the objects that have moved during the interval between collecting the first and second image.
  • the system includes separate masks for the hall area and elevator sill.
  • the hall mask masks out variable portions of the image where the masked image size depends on whether any motion has been detected in that particular region or the viewing area.
  • the sill mask increases in size as the doors close thereby removing the doors from the image that is to be processed.
  • the invention provides for a method of detecting objects in an area, the method including obtaining one or more images of the area, using an edge detection technique in such a way as to highlight substantially dominant linear features in the image(s), and determining if any dominant linear features intersect linear features defining the area.
  • the area is an object detection zone, the area being separated into at least two zones; a primary zone, being the volume described by a door and a door sill; and a secondary zone, which may include the volume beyond the door through which a person using the door would pass.
  • the door and sill are the door(s) and sill of an elevator and the volume beyond the door is the landing/lobby where passengers may wait for the elevator.
  • the method includes a further step of detecting parallax in the two or more images, the parallax being produced by the presence of an object in an obstruction zone, more specifically in the secondary obstruction zone.
  • the invention provides for a method of detecting objects/obstructions in relation to the volume defined by a door and/or sill, said method including using edge detection techniques in such a way as to highlight the substantially dominant linear features in an image or image(s), and determining if any dominant linear features intersect linear features defining said door and/or sill.
  • the method may include a preliminary stage of characterising one or more images to establish the presence of any characteristic dominant linear features in the area. More preferably said characteristic dominant linear features are lines defining the door edges and/or sill and the location of said features may be stored for future reference.
  • the method may also include an operational stage which analyses one or more images to establish the presence of any uncharacteristic features in the volume, said uncharacteristic features representing potential object and/or obstructions in the area.
  • the preliminary stage includes at least two steps, a first step of detecting the location and dimensions of a door sill and a second step of detecting the location and dimensions of one or more door edge(s).
  • the first step includes: using substantially horizontal and/or substantially vertical edge detection filters to highlight the dominant vertical and/or horizontal lines in the part of the image where the sill is known to be approximately located; summing the intensity values along each row of pixels in the image(s) produced using the vertical and/or horizontal edge detection filters thus producing a vertical and/or horizontal function with maxima and/or minima corresponding to the position of horizontal linear features and/or vertical linear features, said linear features defining the spatial location of the door sill in terms of horizontal and vertical features in the image.
  • the second step includes: using knowledge of the spatial location of the sill and knowledge of the physical relationship between the sill and the door edge(s) to obtain a sub-image or sub-images of the door(s); subjecting the sub-image(s) to edge detection filters adapted to highlight edges oriented at angles which lie between some known bounds; manipulating the sub-image(s) to produce a binary image(s), the binary image(s) consisting of one or more linear features corresponding to the door edges; and deriving equations for the linear features in the binary image(s).
  • the known bounds are substantially vertical and substantially horizontal edges.
  • the second step may also include: manipulating the binary image by a ramp function which increases in magnitude in the vertical direction; further manipulating the images to clearly identify any dominant linear features in the binary image(s), the manipulation including applying a first filter to remove any substantially isolated features in the binary image(s), and applying a second filter to the binary image(s) to thin any substantially linear features in the image(s).
  • the equations of the linear features are obtained by locating the line(s) by means of a least squares, or similar, technique. There may be more than one dominant linear feature in the image(s) wherein once the equation for each linear feature has been determined, the linear feature is removed from the image and the next dominant linear feature equated.
  • a total weighting means is used to manipulate an estimate of the equation for each linear feature to improve the confidence of the equation for that linear feature, the total weighting means being found by normalising, and if necessary multiplying, one or more of: a first weighting means, wherein the derivative and variance of a linear feature are determined, and changes in the derivative, together with points of the feature whose distance lies outside a given parameter, represent breaks in the feature, the first weighting means down-weighting or eliminating said points from the estimate; and/or a second weighting means, wherein points in a feature further away from the image capture source are given a higher weighting than points in the same feature which are closer to the image capture source; and/or a third weighting means, wherein the third weighting means is the inverse of the derivative of the feature; and/or a fourth weighting means, wherein linear features which do not span any sub-image from vertical edge to vertical edge are weighted.
  • the edge detection may be effected by means of filters, differentiators and the like.
  • said edge detection is aimed at highlighting dominant lines orientated substantially horizontally, vertically and diagonally in the image(s). More preferably the diagonal lines are at substantially 45° and 135°.
  • the operational stage includes the steps of: capturing one or more real time operational images of the area; detecting the position of a door or doors in the image(s); detecting the presence of obstructions on the area of the image(s) representing a sill; and detecting the presence of obstructions in the area of the image(s) representing the door edges.
  • the position of the door(s) is obtained by detecting the intensity change in the substantially horizontal features of the sill, the intensity changes defining the spatial location of the door(s) in the image(s).
  • the presence of obstructions in the area of the image representing the sill is determined by at least using a substantially vertical edge detection filter to highlight predominantly vertical features in the image which intersect the linear features of the sill.
  • the presence of obstructions in the area of the image representing the door edges is determined by at least using an edge detection filter to highlight predominant features in the image which intersect the linear features of the door.
  • the operational step includes converting the edge detected image(s) to a histogram or histograms wherein peaks in the histograms represent features in the image(s), said features representing the door(s) and/or sill, and/or an obstruction or obstructions on the door edge(s) and/or sill.
  • the operational stage may use any of the image manipulation means described earlier. Preferably the operational stage may be repeated a plurality of times.
  • a method of detecting obstructions and/or movement of obstructions including the step of detecting parallax in two or more images of an obstruction detection area, the parallax produced by the presence of objects in the area.
  • the method may include the step of detecting temporal changes in the images of the area.
  • the method may include the step of detecting vertical and horizontal parallax produced by an object located in the area.
  • the invention provides for a method of detecting objects including the steps of aligning backgrounds of a plurality of images of an area and subtracting pairs of images so as to reveal, by way of parallax, the presence of objects in the area.
  • the invention provides for a method of detecting objects including the steps of aligning backgrounds of a first and second image of an area and subtracting the first image from the second, thereby revealing, by way of parallax, the presence of a three dimensional object.
  • the method includes the steps of: collecting a first image of an area from a first viewing point; collecting a second image of the area from a second viewing point; calculating the shift between the backgrounds of the two images; aligning the backgrounds of the two images; subtracting the two images to produce a third difference image; analysing the third difference image to detect parallax thereby revealing the presence of a 3-dimensional object in the area.
  • a thresholding step whereby the difference image is thresholded to exclude noise thus producing a binary image.
  • the third difference image is manipulated so as to contain substantially only the outlines of any 3-dimensional objects in the area.
  • the images are divided into background images and door edge images wherein calculation of the necessary shift between the backgrounds of the two images is based on the images of the background when no obstruction is present.
  • the shift is calculated using cross-correlation.
  • the images are blurred with gaussian, median or similar filters so as to reduce the effect of pixelation in the images.
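By way of illustration only, the parallax steps claimed above (blur, align backgrounds, subtract, threshold) can be sketched in Python. The use of scikit-image's phase_cross_correlation for the shift estimate, the gaussian blur parameters and the threshold value are assumptions made for this sketch, not details taken from the disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift
from skimage.registration import phase_cross_correlation

def parallax_difference(img_a, img_b, threshold=25.0, blur_sigma=1.0):
    """Align the backgrounds of two views and reveal 3-dimensional objects."""
    # Mild gaussian blur reduces pixelation artefacts at object boundaries
    a = gaussian_filter(img_a.astype(float), blur_sigma)
    b = gaussian_filter(img_b.astype(float), blur_sigma)
    # Shift required to register view b's background with view a's
    shift, _, _ = phase_cross_correlation(a, b)
    b_aligned = nd_shift(b, shift)
    # Subtract and threshold: the aligned background cancels, parallax survives
    diff = np.abs(a - b_aligned)
    return (diff > threshold).astype(np.uint8)
```

The returned binary image then contains substantially only the outlines of 3-dimensional objects in the area, as described above.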
  • the invention also provides for an apparatus for detecting obstructions in an obstruction detection area, said apparatus including at least one imaging means and a microprocessor apparatus adapted to manipulate said image(s) according to any part of the above description.
  • An apparatus for detecting objects in an area including: at least one imaging means adapted to image the same scene from at least two spatially separate viewing points; and microprocessor apparatus adapted to manipulate said images in such a way as to highlight substantially dominant linear features in said images and determine if any dominant linear features signify the presence of an object in the area.
  • the apparatus for detecting obstructions in an obstruction detection area includes: at least one imaging means adapted to image substantially the same scene from at least two spatially separate viewing points; and microprocessor apparatus adapted to manipulate said images in order to calculate the shift between the backgrounds of the two images or pairs of images, align the background images based on said shift, subtract the resulting images to produce a difference image thereby allowing the detection of parallax effects in the difference image thus signifying the presence of an object in the area.
  • the microprocessor is also adapted to manipulate the image or images to highlight substantially dominant linear features of the image(s).
  • the images may be manipulated optically, mathematically or in a like manner which reveals dominant linear features and/or parallax in the image(s) of the area.
  • the microprocessor is further adapted to threshold the difference image.
  • the microprocessor may be in the form of a solid state, optical or the like device.
  • the apparatus further includes an optical arm and reflection means adapted to relay an image from a viewing point that is displaced from the physical location of the camera.
  • parallax images may be effected by optical means including prisms, coherent optical fibre guides, and the like or alternatively the imaging means themselves may be translated or suitably displaced.
  • Figure 1 illustrates plan (a), end elevation (b) and side elevation (c) views of an elevator entrance with cameras according to the preferred embodiment of the invention
  • Figure 2 illustrates schematic views of two embodiments of a parallax imaging system according to the invention
  • Figure 3 illustrates a schematic representation of the connection of two imaging devices (cameras), computer and interface with the door controller;
  • Figure 4 illustrates the primary detection zone
  • Figure 5 illustrates a series of images captured by the lift cameras of the embodiment shown in Figure 1;
  • Figure 6 illustrates an edge detection technique as applied to determining the horizontal and vertical position, in an image, of an elevator door sill
  • Figure 7 illustrates schematically the steps in an algorithm used for locating positions of the door edges
  • Figure 8 illustrates the sub-images in Figure 5 when processed according to steps 4 and 5 of Figure 7;
  • Figure 9 illustrates a 9x9 filtering technique to remove isolated features of black and white images
  • Figure 10 illustrates broken door edge lines in the images
  • Figure 11 illustrates the application of a ramp to the black and white images
  • Figure 12 illustrates application of a weighting array to door line edges
  • Figure 13 illustrates application of the weighting array to broken line images
  • Figure 14 illustrates estimation of the line equations for the black and white images
  • Figure 15 illustrates the equations used to calculate the door vanishing points
  • Figure 16 illustrates the detection of door position by examining the intensity profile of the running clearance
  • Figure 17 illustrates how to determine the position of the doors based on histograms from vertical and horizontal edge detected images
  • Figure 18 illustrates an example of the construction of a histogram for determining both the door position and any objects on the sill or door edges;
  • Figure 19 illustrates how the histogram can be used to detect the door position
  • Figure 20 illustrates how the histogram can be used to locate both the doors and any objects or obstructions present
  • Figure 21 illustrates a flow chart showing the steps in a parallax-based method for detecting obstructions in an obstruction sensing area
  • Figure 22 illustrates data produced according to the method of Figure 21 as applied to a sample obstruction (a ladder) in an elevator door
  • Figure 23 illustrates the detection of machine recognisable parallax for a number of sample obstructions
  • Figure 24 illustrates the ability of filtering techniques to reduce artefacts produced by the pixelated nature of the detected images
  • Figure 25 illustrates sample data for a door edge obstruction event.
  • the following discussion is primarily directed towards obstruction detection in elevator door systems; this is to be understood as not being a limiting feature of the invention.
  • the apparatus and method of the present invention may be applied to obstruction detection applications, for example the monitoring of industrial machinery, security applications and the like.
  • the first is the critical volume bounded by the sill and both sets of door edges. This will be called the primary obstruction zone. Objects in this area must be detected with a high degree of reliability as a closing door will almost certainly strike any obstruction.
  • the second zone is the lobby/landing area in front of the elevator where people approach and wait for the elevator. This zone will be called the secondary obstruction zone.
  • the requirement for detection of obstructions in this area is less critical and the obstruction detection system must be able to mask out irrelevant or erroneous objects.
  • the obstruction detection system of the current invention is based on optical detection methods with difference imaging techniques used to provide the required level of obstruction detection for each zone.
  • the obstruction detection system uses an edge detection technique to determine if any objects (obstructions) are present between the door edges or on the sill (the sill is the section on the floor of the elevator car and landing that the car and landing doors run in).
  • the detection of edges in the context of elevator doors is particularly critical. Over time, people have developed the habit of placing their hand in the door gap, or stepping onto the sill, in order to stop the elevator doors closing. It is therefore important that any obstruction detection system can detect hands or other objects being put between the closing doors as well as objects on the door sill.
  • the system can accomplish this by determining whether any lines defining the edge of an obstruction intersect with the lines that describe the door or the edges of the sill.
  • the system could also use standard difference imaging techniques where a reference image is used to allow obstructions to be detected.
  • for detection of obstructions and objects in the secondary obstruction zone a parallax technique is used. This parallax technique uses the same optical images obtained for the edge detection technique but is concerned with identifying 3-dimensional objects present in the landing/lobby area. Parallax techniques can also be used to detect objects or obstructions in the primary zone. However, this has been found not to have the required accuracy for the critical zone. The reason for this is twofold: firstly, the door edge produces a substantial parallax effect which can potentially swamp the parallax produced by smaller three-dimensional objects; and secondly, the applicants have found that it might not be possible to detect objects less than 200mm above the sill using the parallax technique (this problem is described later).
  • the system is likely to consist of two cameras which view the lift opening from two separate viewing points.
  • the two separate viewing points allow the secondary detection means, based on parallax, to function.
  • This may be achieved by known optical imaging systems such as charge-coupled device (CCD) cameras, CMOS cameras (the applicant is currently using an Active Pixel Sensor or APS CMOS camera) or any other type of electronic imaging device.
  • a single camera may be used whereby one or more optical arms directs an image of a first view (viewed from a first vantage point) and second view (from a second vantage point) to the single camera, or coherent optical fibre guides could be used to transmit separate views to a single camera.
  • imaging could be controlled by an optical cell that would alternately interpose a reflector or other type of interception device into the field of view of the camera thus diverting the view to the spatially displaced vantage point.
  • the major drawbacks to such a system are: that the cameras must be sufficiently fast so that the doors appear stationary in each subsequently collected image; and the optical systems are likely to suffer from dust ingress in the elevator environment.
  • Figure 2 illustrates a simplified schematic of a possible embodiment of the invention using an electronic camera or cameras.
  • the upper embodiment of Figure 2 shows a single electronic camera 110 positioned to be capable of collecting images of the obstruction detection area from different viewing points. This is effected by mirrors 111, 112, 113 and 114.
  • the horizontal length of the optical arms has been shortened for clarity. If a charge-coupled device (CCD) camera were used it could comprise either two separate collection devices or a split CCD array.
  • the lower embodiment of Figure 2 illustrates a schematic of a single camera parallax detection device. Here, separate images of a scene pass through separate arms of the detector. The selection of the particular viewing point is controlled by electro-optical switches which are controlled by switch 119.
  • the camera collects images 'seen' through alternate optical arms and the parallax detection is made on the basis of switched viewing of the scene.
  • the optical arm is formed using mirrors 116, 117 and 118.
  • Figure 1a illustrates a plan view of the elevator entrance showing the lobby side 1 and car side 2. These two areas are separated by the sill 3 which is bisected by the running clearance 8. The door edges are shown by 9a, 9b and 10a, 10b.
  • Figure 1b illustrates an end elevation looking from the lobby 1 into the car 2. This Figure clearly shows two cameras 6 and 7 mounted on the entrance header 4. The cameras are arranged in a splayed configuration so that camera 6 is orientated towards a first side door edge 9a, 9b and camera 7 is orientated towards the second door edge 10a, 10b.
  • Figure 3 shows a schematic representation of the connection between the two cameras 6 and 7, the computer 11 and the interface with the door controller 12.
  • a triggering signal 13 from the interface 12 is transmitted to the door controller which, for example, can operate a relay which opens the elevator doors when the system detects the presence of an obstruction.
  • the first aspect of the present invention resides in the identification of linear features for use in primary obstruction detection, i.e. detecting when the elevator door edges and sill are obstructed. This is represented by the shaded area in Figure 4.
  • the edge detection technique is divided into two separate sections.
  • the first section is an automatic calibration algorithm, which is used to determine the position of the door edges and the sill in the image or images. It is anticipated that this algorithm will run when the unit is first installed and will provide a number of parameters that describe the lift door and camera geometry.
  • the second section of the edge detection technique is an operational algorithm which detects the presence of objects on the door edges and sill when the doors are closing. These algorithms will be known as the primary calibration algorithm and primary operational algorithm respectively.
  • the edge detection technique used in the primary calibration algorithm is divided into two steps.
  • the first step examines the image in order to detect the door sill, indicated by numeral 3 in Figures 1a and 4.
  • the second step identifies the edges of the doors, indicated by numerals 9a, 9b and 10a, 10b in Figures 1a, 1b and 4.
  • identifying linear features corresponding to the sill in the images involves fully opening the elevator doors and using horizontal and vertical edge detection filters to highlight the strong vertical and horizontal lines in the respective right (image 5a) and left (image 5b) sides of the images. This is where the sill is expected to be located in these images.
  • the image shown in Figure 6b is subjected to a vertical edge detection filter.
  • the resulting image is that shown in Figure 6c which emphasises the vertical lines that occur where the sill meets the door edges.
  • the intensities in each column of pixels of Figure 6c are summed to produce the function shown in Figure 6e.
  • the peaks in Figure 6e correspond to the horizontal position of the sill edges.
  • the above technique provides both the horizontal and vertical locations of the sill and it is thus possible to separate out the sill from the image (or images in the case of Figure 5a and Figure 5b).
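As an illustrative aside, the row and column summation technique of Figure 6 can be sketched with numpy and scipy. The Sobel filters below stand in for the unspecified edge detection filters, and the function name is invented for the sketch.

```python
import numpy as np
from scipy.ndimage import sobel

def sill_profiles(image):
    """Row/column intensity profiles whose peaks locate the sill (cf. Figure 6)."""
    img = image.astype(float)
    # Horizontal-edge response summed along each row peaks at the vertical
    # (row) positions of the sill's horizontal edges
    row_profile = np.abs(sobel(img, axis=0)).sum(axis=1)
    # Vertical-edge response summed along each column peaks at the horizontal
    # (column) positions where the sill meets the door edges
    col_profile = np.abs(sobel(img, axis=1)).sum(axis=0)
    return row_profile, col_profile
```

Peaks in row_profile give the vertical extent of the sill, and peaks in col_profile its horizontal extent, matching the functions of Figures 6d and 6e.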
  • Figure 7 contains a flow chart of the second stage of the primary calibration algorithm applied to an actual set of elevator doors. Each step will now be described in more detail.
  • the initial step is to use knowledge of the sill extents (obtained above) to subdivide the image into four sub-images which contain lines which slope either towards the top or bottom of the image. These sub-images are shown in Figure 5c, 5d, 5e and 5f.
  • the sub-images are subjected to edge detection filters (similar to those used to determine the sill extents) which are adapted to highlight edges oriented horizontal and at an angle approximating 45° or 135°.
  • the sub-images of the door edges are now converted to black and white (b/w) images by applying a threshold.
  • the results of thresholding are shown in Figures 8a and 8b which are the black and white images produced by thresholding the images in Figure 5e and 5d respectively.
  • the algorithm also applies routines to separate out the lines, particularly close to the sill where they can appear to join, and to remove any small isolated features (i.e. isolated black pixels) that are clearly not part of a line.
  • the erosion technique removes pixels from the boundary of an object. It does this by turning to white any black pixel that is adjacent to a white pixel.
  • the object is to remove any pixels that are bridging adjacent lines and to thin down the door edge lines.
  • the images in Figures 8c and 8d are the images in Figures 8a and 8b once they have been eroded. It can be seen that this has the effect of thinning down and separating out the lines that describe the door edges.
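A minimal sketch of this erosion step, assuming a boolean image in which True marks (black) line pixels; the choice of scipy's binary_erosion with a full 3x3 structuring element is an assumption matching the "adjacent to a white pixel" rule above.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def erode_lines(bw):
    """Thin the door edge lines and separate lines that have bridged together."""
    # Any line pixel with a background pixel anywhere in its 3x3 neighbourhood
    # is turned off, i.e. pixels on the boundary of an object are removed
    return binary_erosion(bw, structure=np.ones((3, 3), dtype=bool))
```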
  • a filter which operates on 9x9 sub-sections of the image is used. If the summation of all the elements in the 9x9 sub-section is less than nine then the centre pixel is set at zero, otherwise the output is the value of the centre pixel. Consequently, the algorithm checks whether a complete line is likely to be passing through the 9x9 sub-section.
  • with the chosen size of filter (i.e. 9x9 in our case), in the examples shown in Figures 9a and 9b the centre pixel will be set to zero.
  • Images in Figures 8e and 8f are the result after the 9x9 filter that removes isolated features is applied to the eroded images 8c and 8d. The ability of this filter to remove isolated features can most clearly be seen in image 8e.
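The 9x9 isolated-feature filter described above reduces to a windowed count; the convolution-based formulation below is one assumed implementation, not the only possible one.

```python
import numpy as np
from scipy.ndimage import convolve

def remove_isolated_features(bw):
    """Zero any pixel whose 9x9 neighbourhood holds fewer than nine line pixels."""
    counts = convolve(bw.astype(int), np.ones((9, 9), dtype=int), mode='constant')
    out = bw.copy()
    # A complete line crossing a 9x9 window contributes at least nine pixels,
    # so a window summing to less than nine cannot contain such a line
    out[counts < 9] = 0
    return out
```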
  • a ramp is now used to scale the black and white image to enable linear equations describing the lines produced by the door edges to be determined.
  • the ramp decreases in value with vertical displacement from the line of bisection used to create the sub-images.
  • the reason for applying the ramp in this manner is that the door edge lines in the sub-images closest to the line of bisection tend to be horizontal and span the sub-image from vertical edge to vertical edge.
  • the edge lines tend to slope upwards and for the lower portion of the door edge the edge lines slope downwards.
  • These sloping lines tend to be shorter than the horizontal lines as they begin at a point on the vertical edge that is in contact with the sill and they then end on either the top edge of the image (for the upper sub-image) or bottom edge of the sub-image (for the lower sub-image).
  • an example of the application of the ramp is illustrated by Figure 11, which shows stylised images of the upper left portion of the door and the lower left portion of the door.
  • Figures 11a and 11c are stylised images of the door edges after applying the edge detection, isolated pixel and erosion filters. The direction of the ramp slope is shown in columns A and B to the left of these Figures.
  • the application of the ramp to the filtered images is shown in Figures 11b and 11d, and it can be seen that the ramp slopes up towards the line of bisection between the two images.
  • the first column maximum value arrays (which are used by the least squares technique to produce the equations describing the lines) are shown.
  • the column maximum value arrays in Figures 11b and 11d define the door edge lines closest to the lines of bisection.
  • the stylised images in Figure 11 are representative of the type of images obtained when the images in Figures 5d and 5e are filtered and then multiplied by a ramp.
  • the images in 8g and 8h depict the images that result after the ramp is applied to the images in 8e and 8f.
  • the ramp scales the images in a linear fashion.
  • the ramp decreases from the top of the image for the images that are of the bottom of the lift doors (i.e. Figures 5e and 5f) and increases from the bottom of the images for the images of the top of the lift doors (i.e. Figures 5c and 5d).
  • the reason for this is that the performance of the algorithm is enhanced when the longest, and most well defined lines, are found first.
  • the ramp, which increases in value with the vertical image dimension and is constant with the horizontal image dimension, now has its maximum value along the line of bisection and then decreases in magnitude towards the bottom or top of the sub-images.
  • the line determination portion of the algorithm, i.e. steps 6-10, starts with the longest and most well defined lines, and moves onto those lines which are shorter and less well-defined.
  • the next step is to find the equation of each door edge line in the images. This starts with the column maximum array which defines the edge closest to the line of bisection of the images (see Figures 11b and 11d).
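A sketch of the ramp and the column maximum array, assuming a binary sub-image with 1 marking edge pixels; the orientation flag and the names are illustrative only.

```python
import numpy as np

def ramped_column_maximum(bw, bisection_at_bottom=True):
    """Apply the ramp and extract the column maximum array (cf. Figure 11)."""
    rows, _ = bw.shape
    # The ramp is constant along each row and largest at the line of bisection
    # (bottom row for upper-door sub-images, top row for lower-door sub-images)
    ramp = np.arange(1, rows + 1) if bisection_at_bottom else np.arange(rows, 0, -1)
    ramped = bw.astype(float) * ramp[:, None]
    # Row index and value of the brightest pixel in each column; the ramp
    # guarantees this selects the line closest to the line of bisection
    return ramped, ramped.argmax(axis=0), ramped.max(axis=0)
```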
  • the confidence with which the column maximum array is determined is affected by a number of factors. These factors include:
  • the number of points in the column maximum array may be less than the horizontal dimension of the sub-image. This happens when the line does not begin and end on a vertical edge of the image, but begins on a vertical edge (where it is attached to the sill) and then finishes on a horizontal edge of the image.
  • An example of this type of line can be seen in Figure 1 1 a where the top line finishes on the top edge of the image.
  • the maximum column array would be [ 5 4 2 2 1 1 1 1
  • the final factor which may contribute to maximum array confidence is noisy data.
  • by applying a weighting function that is the inverse of the derivative of the column maximum array it is possible to down-weight the noise. That is, as the line whose equation is being sought should be smooth, the column maximum array should also be smooth and consequently any sudden changes in derivative are likely to be noise.
  • individual weighting arrays are computed which overcome each of the above effects. These individual weighting arrays are known as the sill distance weighting array, short line weighting array, broken line weighting array and derivative weighting array. A total weighting array is found by normalising each of these component arrays, with respect to their largest element, and then multiplying them all together.
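The combination rule for the total weighting array translates directly to code; the argument names below are illustrative only.

```python
import numpy as np

def total_weighting(sill_dist_w, short_line_w, broken_line_w, deriv_w):
    """Normalise each component weighting array, then multiply them together."""
    total = np.ones_like(np.asarray(sill_dist_w), dtype=float)
    for w in (sill_dist_w, short_line_w, broken_line_w, deriv_w):
        w = np.asarray(w, dtype=float)
        total *= w / w.max()   # normalise with respect to the largest element
    return total
```

The resulting array would then be supplied as the weight vector to the weighted least squares pass described below.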
  • Figures 12a and 12b relate to a weighting estimate of the first line in the upper group of lines in Figure 13a which exits the top of the image rather than the right hand side of the image.
  • This line is called a short line and a weighting function is produced which ensures that the line equation estimate is only influenced by line data up to the point at which the line exits the top of the image.
  • the top plot of Figure 12a shows the column maximum array 20 for this short line with the first-pass linear equation estimate 21 laid over it.
  • the middle plot in 12a is of the derivative of the column maximum array and the bottom plot of 12a is a product of the short line and sill distance weights.
  • the sill distance weight sets the weighting function to zero at the point where the short line exits the top of the image, therefore data after this point has no influence on the linear equation estimate.
  • the plot also shows the sill distance weight which forces the linear equation estimator to place less emphasis on the data making up the current line as the line moves further away from the sill. It can be seen that the sill distance weighting function decreases the weight with increasing distance in a linear fashion.
  • a standard weighted least squares technique is used to determine the equations of the door edge lines in the image from the column maximum array.
  • the least squares algorithm is applied twice to each column maximum array.
  • the sill distance and short line weights are used to find a first estimate of the line equation.
  • the point of intersection of the line estimate and the column maximum array is determined. If the two "lines" do not intersect or the angle between the two lines is greater than some threshold then the estimate is said to be poor.
  • the computation of the broken line weight begins. This is done by starting at the point of intersection and moving out towards each end of the column maximum array.
  • the broken line weight then down-weights any points in the column maximum array that are a significant distance from the first-pass estimate of the line and where the derivative of the column maximum array has suddenly changed. If there is another sudden change in derivative of the column maximum array, and the distance between the points in the column maximum array and line estimate are small, then down-weighting stops. Thus, down-weighting is toggled on and off.
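A sketch of the two-pass estimation: a standard weighted least squares line fit plus a simplified broken line weight. The on/off toggling described above is reduced to a single mask here, and the tolerances are invented for illustration.

```python
import numpy as np

def weighted_line_fit(x, y, w):
    """Standard weighted least squares fit of y = a*x + b."""
    A = np.column_stack([x, np.ones_like(x, dtype=float)])
    AtW = A.T * w                      # apply the weight of each data point
    a, b = np.linalg.solve(AtW @ A, AtW @ y)
    return a, b

def broken_line_weight(y, y_first_pass, dist_tol=3.0, deriv_tol=2.0):
    """Zero-weight points far from the first-pass line where the derivative
    of the column maximum array jumps, i.e. likely breaks in the line."""
    y = np.asarray(y, dtype=float)
    jump = np.abs(np.gradient(y)) > deriv_tol
    far = np.abs(y - y_first_pass) > dist_tol
    w = np.ones_like(y)
    w[jump & far] = 0.0
    return w
```

In use, the first pass would fit with the sill distance and short line weights, and the second pass would refit with the broken line weight folded into the total weighting.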
  • Figure 13 shows removal of data associated with breaks in the current line that cause data from later lines to be included in the column maximum array.
  • Figures 13a, 13c and 13e show ramped images that contain lines which have breaks in various positions. In Figure 13a the break is at the very end of the line, in Figure 13c there are two breaks in the middle and one break at the end of the line and in Figure 13e the break is associated with the feature left over from a previous line.
  • Figures 13b, 13d and 13f are plots of the column maximum array with first-pass line estimates overlaid for each of Figures 13a, 13c and 13e respectively. The plots also show the derivative of the column maximum array that is used to find the breaks in the current line and the weighting function that is used to remove the data that is present at the breaks.
  • Figure 14 illustrates the estimation of linear equations and removal of lines from the ramped image of Figure 8h once an equation has been found for the current line.
  • Figure 14a shows the ramped image of Figure 8h.
  • the top plot of Figure 14b is the contents of the column maximum array (the value obtained by determining the maximum of each column of the image).
  • the bottom plot in Figure 14b is of values obtained from the linear equation estimator after its first pass. This is the data from the equation that describes the line at the very bottom of Figure 14a and is derived from applying a least squares routine to the data in the top plot of Figure 14b.
  • Figure 14c is a result that is obtained after the data relating to the line determined above is erased from the image in Figure 14a.
  • the top plot of Figure 14d is a column maximum array from Figure 14c and the bottom plot of 14d is the first estimate of the linear equation that describes the data in the top plot.
  • the process of obtaining line data and erasing each successive line is shown in Figures 14e to 14k.
  • the final Figure 14l is the original black and white image of the door edge with the calculated line estimates (in grey) overlaid.
  • Knowledge of the vanishing point is useful as it allows the position of the door edges to be tracked as the door closes.
  • the vanishing point remains stationary as the doors close and it is therefore possible, with knowledge of the position of the door on the sill, to determine the position of the door edges as the doors close. That is, if the point where the bottom of the door makes contact with the sill can be determined, then the door edges can be derived by drawing a line through this point and the vanishing point.
  • a technique can be developed to detect lines that have been incorrectly calculated and do not appear to pass through the vanishing point. If this were done the least squares estimate of the vanishing point would not be skewed by these lines.
  • a least squares algorithm is used to find an estimate of the point of intersection of all the lines previously calculated that describe the features on the door edges. That is, the point of intersection of the linear equations describing the door edge features, on each side of the door, is found by solving equations of the form shown in Figures 15a and 15b.
  • x is the horizontal position of the vanishing point
  • y is the vertical position of the vanishing point
  • aᵢ are the slopes of the equations
  • bᵢ are the intercepts of the equations
  • n is the number of equations.
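Reading the n door edge lines as y = aᵢx + bᵢ, the least squares vanishing point can be sketched as an overdetermined linear system. This is an assumed equivalent of the formulation in Figures 15a and 15b, which is not reproduced here.

```python
import numpy as np

def vanishing_point(slopes, intercepts):
    """Least squares intersection (x, y) of the n lines y = a_i*x + b_i."""
    # Each line contributes one row of the overdetermined system
    #   [-a_i  1] [x, y]^T = b_i
    a = np.asarray(slopes, dtype=float)
    A = np.column_stack([-a, np.ones_like(a)])
    (x, y), *_ = np.linalg.lstsq(A, np.asarray(intercepts, dtype=float), rcond=None)
    return x, y
```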
  • tape or stickers could be used to mark the centre of the door opening or to emphasise features, such as the tracks in which the door guides run or the line along which the sill meets the elevator doors.
  • the primary detection algorithm is divided into two separate sections.
  • the first section is an automatic calibration algorithm, which is used to determine the position, in the image, of the door edges and the sill as described above. It is anticipated that this algorithm will run when the unit is first installed and will provide a number of parameters that describe the lift door and camera geometry.
  • the second section is an operational algorithm that detects the presence of objects on the door edges and sill when the doors are closing. This primary operational algorithm is described below.
  • the primary operational algorithm consists of the following steps which will be described in detail later.
  • Step 1
  • Step 2 As with the sill itself, the vertical position of the running clearance (the gap, which bisects the sill, between the landing/lobby floor and the elevator car floor - it can be clearly seen when looking at images of the sill) and the door tracks (the groove in the sill which the door guides run in) remain in the same position in the operational images. It is therefore possible to extract sub-images of these features, from the sill image, by using the knowledge of the position of these features, which was gained during the calibration stage.
  • An alternative method of finding the door position uses the principle that the horizontal lines in the image are shortened as the doors close, and that the door edge lines become more vertical as the doors close.
  • the second technique involves:
  • the door-closed position (i.e. usually the centre of the lift) is found by applying the above algorithm when the doors are closed.
  • a parabola is fitted to the peak maxima and the points on either side of the peak where the peak's values approach the background level.
  • an example of the latter technique is given in Figure 17, where the original images of the sill area, as the doors close, are shown in Figures 17a, 17c, 17e and 17g; and the plots of the corresponding histograms are shown in Figures 17b, 17d, 17f and 17h.
  • the histogram plots consist of the histogram of the horizontal edge detected image, the histogram of the vertical edge detected image and the summation of the two histograms after energy equalisation. It can be seen that there is a sudden change in intensity in the summation histogram at the position corresponding to the door. Thus, it is possible to automatically detect the door position using this technique.
  • Step 3 The algorithm then needs to detect objects on the sill. Objects that are on the sill will cut one or both of the horizontal "lines" that define the vertical extent of the sill. There is also the possibility that they will cut the horizontal lines that describe the vertical position of the running clearance and tracks.
  • Detection of objects on the door edges: knowledge of the vanishing points (obtained during the calibration stage) and the position of the bottom of the doors on the sill (obtained immediately above) allows the equations defining the door edges to be modified as the doors close. Thus, as the doors close it is possible to determine where the door edges should be in the image.
  • a vertical edge detection filter is applied to the sub-images.
  • the vertical edge detection filter emphasises the strong vertical lines that these objects tend to produce due to their orientation with respect to the cameras. By over-laying the lines that define the door edges, it is possible to determine whether these lines are cut by any strong vertical lines associated with an object. Hence, it is possible to detect the object.
  • a new histogram is calculated from the product of the angled edge histogram and the vertical edge histogram, divided by the horizontal edge histogram.
  • the peaks in this histogram are tracked as the doors close. If the protection area is clear these peaks belong to the door position. If an object appears on either the sill or door edges large additional peaks appear in the histogram, in positions not corresponding to the door peaks, indicating the presence of an object.
  • This technique indicates the door position as the substantially diagonal edge detection emphasises the door edges resulting in a raised histogram level from the left (or right) side of the image to the door position.
  • the vertical edge detection also provides a peak aligned with the door position due to the edge that results where the bottom of the doors meet the sill.
  • the peak in the product of the two histograms indicates the door position.
  • when an object is placed across the sill or door edges the vertical histogram then contains significant peaks indicating the positions of the edges of the object. In this case the histogram product contains multiple peaks, some of which are due to the object and some due to the doors.
  • the histogram product is divided by the horizontal histogram as this has been shown to lower the background level in the histogram, and thereby emphasise the peaks.
  • the background level tends to be quite high when the image of the sill contains horizontal features that arise from the sill being textured in some way.
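The combined histogram just described lends itself to a short formulation. In this sketch a "histogram" is taken to be the column-wise sum of an edge detected image (consistent with the Figure 6 technique); that reading, and the small epsilon guarding the division, are assumptions.

```python
import numpy as np

def combined_histogram(edge45, edge_v, edge_h, eps=1e-9):
    """Histogram used to locate the doors and any objects (cf. Figure 18)."""
    h45 = np.abs(edge45).sum(axis=0)   # angled (45 degree) edge histogram
    hv = np.abs(edge_v).sum(axis=0)    # vertical edge histogram
    hh = np.abs(edge_h).sum(axis=0)    # horizontal edge histogram
    # Product of angled and vertical histograms divided by the horizontal
    # histogram, which lowers the background level from sill texture
    return (h45 * hv) / (hh + eps)
```

Peaks in the returned histogram can then be tracked as the doors close; large additional peaks away from the door positions indicate an object on the sill or door edges.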
  • Figure 18 demonstrates how the histograms of the various edge detected images combine to give a histogram that enables the door position to be detected.
  • Figures 18a, 18b and 18c are images of the sill after 45°, vertical, and horizontal edge detection filters have been applied respectively.
  • in Figure 18d the uppermost three plots are the raw histograms obtained from the edge detected images and the bottom plot is the combination histogram which is used to determine the door position.
  • Figure 19 demonstrates, using a number of images of the sill area as the doors close (Figures 19a, 19c, 19e and 19g) and the accompanying histograms (Figures 19b, 19d, 19f and 19h), how this further technique can be used to determine the door position.
  • the images and plots in Figure 20 demonstrate how the histograms combine to enable doors and objects to be located.
  • the objects in Figure 20 are a foot on the elevator sill and an arm on the elevator door edges.
  • the original images of the sill, at various stages of door closure, are in Figure 20a, 20c, 20e and 20g and the accompanying histograms are in Figure 20b, 20d, 20f, and 20h. It can be seen that with the objects used in this example, the peaks associated with the object are much larger than those associated with the door edges.
  • detection can be performed or confirmed using: (a) a method based on searching for breaks in the lines that describe the door edges, sill/running-clearance interfaces or sill/floor interfaces; and/or
  • the symmetry of the door opening or prediction methods can be used to provide confirmation of the door position provided by the algorithm. That is, the distance from the estimated left-hand door position to the centre line should be approximately equal to the estimated right- hand door position to the centre line. Furthermore, the knowledge of the current door position, direction of travel, and estimate of door speed could be used to provide confirmation of the door position in the next frame.
  • the above edge imaging technique allows for the determination of objects on the door edges and sill which might be struck by the closing doors. While this provides the required safety feature of the door obstruction detection, it would be advantageous to have some early warning of objects moving in the vicinity of the elevator doors. This would allow the elevator controller to anticipate a person wanting to enter the elevator car. It is envisaged that the parallax technique, described next, will serve as such an early warning and anticipatory device. Thus, the doors would reverse before objects appeared on the sill or door edges.
  • a key feature of the secondary detection technique resides in the application of the parallax effect to obstruction detection.
  • the two images are collected from spatially separate vantage points. These images correspond to the scene looking down into an elevator doorway from two different locations (see Figures 22a and 22b).
  • the views encompass the immediate vicinity of the elevator doorway - this being the area where users of the lift would normally approach the lift doors. This vicinity can be broken down into a primary obstruction zone (described earlier) and the wider, secondary obstruction zone through which users pass when approaching the lift (see Figure 4).
  • the parallax technique is illustrated by means of placing a ladder immediately outside the primary obstruction zone of a lift door (i.e. in the secondary obstruction zone).
  • Two images, 22a and 22b, are recorded from different vantage points. As a preliminary point, these two images have been taken with a different camera arrangement to that described in the earlier part of the specification.
  • the earlier camera arrangement used two unsplayed cameras that were placed 100mm apart. With the earlier camera arrangement the two images produced were those shown in Figures 5a and 5b.
  • the main point to note is that the images in Figure 22a and 22b show similar views of the door, whereas the images in Figures 5a and 5b show corresponding views of either side of the door. This has no bearing on the implementation of the following discussion as in practice once the calibration algorithm has determined the location of the door edges and sill, these areas would be masked out and only sub-images of the secondary obstruction zone (numbered 5 in Figure 4) are considered. These sub-images correspond to the upper middle of the images in Figures 22a and 22b or the top right and top left of the respective images in Figures 5a and 5b.
  • the shift between the backgrounds of the two images is calculated and used to align the background of one scene with the other.
  • the amount of alignment of the backgrounds would preferably be minimised by ensuring that the optics of the system are as precisely aligned as possible during their manufacture. Any minor imperfections in the alignment of the backgrounds could then be compensated for by a suitable mathematical image processing technique.
  • the technique for correcting for such imperfections is by way of cross-correlation or minimum energy.
  • the minimum energy technique involves 'shifting' the image (in two dimensions) by a pixel at a time (in an ordered manner in each direction). The resulting two images are subtracted and then all of the pixel values in the difference image are summed; the shift giving the smallest sum (the minimum energy) is taken as the shift that best aligns the backgrounds.
  • cross-correlation is a statistical technique which is generally more robust and faster than techniques based on minimum energy. Further, significant enhancements in processing speed have been found when cross-correlation is effected via fast Fourier transforms.
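A sketch of cross-correlation effected via fast Fourier transforms, in plain numpy; the wrap-around handling and the sign convention of the returned shift are assumptions that would need checking against the actual camera geometry.

```python
import numpy as np

def fft_correlation_shift(a, b):
    """Estimate the background shift between two views via FFT correlation."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    # Circular cross-correlation: corr[m] = sum over n of a[n + m] * b[n]
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # Peak positions past the midpoint wrap around to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    # Rolling `a` by (-dy, -dx) then aligns its background with `b`
    return dy, dx
```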
  • the error introduced by image alignment effects would depend on both the size of the 3-dimensional object relative to the background and the magnitude of the parallax that the object produces.
  • a section of the images containing no or minimal parallax and maximum background can be used to calculate the shift necessary to align the backgrounds of the images.
  • a further source of error in background shifting is pixelation of the elements of the picture.
  • Real images are, by their nature, discrete at their boundaries and as they are viewed from two different vantage points, it is not possible to align the backgrounds of the images exactly or cancel the backgrounds completely. This is due to the fact that the edges of objects within an image will not always lie precisely on a pixel boundary. The edge of an object will generally overlap the pixel boundary and therefore shifts will not always correspond to an integer number of pixels.
  • Errors due to image rotation can be largely reduced by accurately aligning the optics during manufacture. Illumination errors can be minimised by using a system that implements a single camera and hence the same exposure and aperture control system, in order to obtain two images which are unaffected by differences in lighting intensity. Parallax effects can then be obtained using a single camera in conjunction with a mirror/lens system to obtain spatially separate views whereby the resulting images are focused onto separate halves of the imaging device within a single camera. It is not necessary that the image be split onto separate halves of the imaging device. A switching means may be used to select the required image which is then focussed on the camera. This was discussed earlier.
  • the background shift would be calculated during the calibration stage.
  • the difference image will contain only outlines of the three dimensional objects.
  • the resulting parallax-highlighted image is as shown in Figure 22d. This has elements of the door and sill in it. As described earlier the location of these is known from the calibration stage and as a consequence they can be masked out.
  • the present technique has been found to be particularly useful in detecting people proximate to or entering an elevator. This is because as the height of an object increases, the parallax effect becomes more noticeable thereby allowing more accurate and clear identification of the obstruction.
  • Figure 23 illustrates the result of placing a variety of sample obstructions immediately outside an elevator door.
  • Figures 23a, 23d, 23g, 23j and 23m illustrate a box; a box on a rug; a cane; a soft toy (representing an animal); and the leg of an approaching user.
  • the corresponding difference images (Figures 23b, 23e, 23h, 23k and 23n) are shown along with the subsequently thresholded difference images (Figures 23c, 23f, 23i, 23l and 23o).
  • the existence of a patterned rug can hamper effective subtraction of a background.
  • the thresholding step significantly enhances the machine detectable position of the obstruction.
  • parallax is primarily produced by the parts of the image corresponding to the vertical edges of the box and not by the horizontal edges. This is due to the fact that the cameras are displaced horizontally at the top of the lift doorway and therefore horizontal parallax effects will be minimised.
  • the parallax produced by the right hand door edge is also clearly visible and it can be seen that the size of the parallax decreases and eventually vanishes as the door edge approaches the sill or floor area.
  • Figure 24 illustrates the ability of filtering techniques (discussed in detail earlier) to reduce pixelation artefacts for identical sample images to those shown in Figure 23.
  • Figures 24c, 24f and 24i illustrate that filtering reduces the level of the background cancellation remnants without suppressing the features produced by parallax. The effectiveness of this technique is evident as it can be seen that the previously visible horizontal lines due to the tracks on the door sill are now absent. This is desirable given that these features belong to the background and are not attributable to the parallax effect caused by an obstruction.
  • the parallax obstruction detection technique described can also be used to detect a hand or other obstruction on the door edge.
  • the parallax produced by a hand on the edge of the door was clearly machine detectable.
  • if this technique were to be implemented in a practical form, it would be necessary to be able to distinguish the parallax produced by the door edge itself from that produced by the presence of a hand or other obstruction.
  • the previously described technique for identifying the door edges in an image could be used for this purpose.
  • An additional technique that could be used to identify such obstructions is to obtain reference images of the elevator door edges in a situation where no obstructions exist. Such reference images could continually be compared with the images of the car door edges recorded when the lift is in use. If a hand is placed between the doors, the reference image could be subtracted from the newly obtained 'operative' image. If an obstruction is present, it will then be visible in the difference image otherwise the difference image should be zero.
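As an illustration of this reference-subtraction check (the threshold and the names are invented for the sketch):

```python
import numpy as np

def door_edge_clear(reference, operative, threshold=20):
    """Compare an 'operative' door edge image against a clear reference image."""
    diff = np.abs(operative.astype(int) - reference.astype(int))
    obstruction_mask = diff > threshold   # threshold out sensor noise
    # With no obstruction the difference image is (almost) everywhere zero
    return not obstruction_mask.any(), obstruction_mask
```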
  • An example of such a subtractive process is shown in Figures 25d to 25f.
  • the reference images 25b and 25e illustrate a non-obstruction situation and the images 25a and 25d respectively are 'operative' images.
  • the subtracted images 25c and 25f reveal the presence of the hand and its reflection in the edge of the door slamming post.
  • the present invention has been found to be capable of machine-detecting parallax for a reasonably large variety of objects.
  • the present invention provides for a significantly improved obstruction detection system which can reliably detect objects in both the door edge and wider protection zones. Changes in imaging parameters will only improve this detection threshold, particularly for the parallax technique.
  • the system can further reliably remove the majority of the background from the image to aid in further processing.
  • hands or other obstructions placed at the door edges can be reliably detected - this being done by separating the image into a primary obstruction zone and a secondary obstruction zone.
  • Numerous variations and modifications will be clear to one skilled in the art. These may include substituting different types of camera or imaging devices. Further, it may be possible to reduce the number of image collection devices to one by means of optical systems such as those described above. This may provide significant cost savings while still providing the two spatially separate viewing points.
  • although the present invention has been described in the context of elevator doors, it is possible that, with suitable modification, the invention may be applicable to other obstruction detection applications such as those involving heavy machinery, process control, safety and the like.


Abstract

The invention provides for a method of detecting objects in an area, the method including obtaining one or more images of the area, using an edge detection technique in such a way as to highlight substantially dominant linear features in the image(s), and determining if any dominant linear features intersect linear features defining the area. The method may also include detecting parallax in at least two images, the parallax being produced by the presence of 3-dimensional objects in the area.

Description

OBSTRUCTION DETECTION SYSTEM
Field of the Invention
The present invention relates to obstruction detection systems. More particularly, but not exclusively, the present invention relates to methods and apparatus for detecting obstructions between or in the vicinity of elevator doors. The present invention may also be applied to obstruction detection in the context of industrial environments, safety applications, monitoring machinery activity, process control and the movement of people.
Background Art
The following discussion will be primarily directed towards obstruction detection methods and apparatus for use in elevator door systems. However, it is to be understood that this is not intended to be a limiting application. In certain circumstances and with appropriate modification, the invention may be suitable for use in other obstruction detection situations discussed elsewhere in this specification.
To the present time, there are a large number of techniques and devices which may be used for detecting obstructions within either static volumes or variable locations. For a general discussion of such techniques, see applicant's International Application PCT/NZ95/00067.
Generally such prior art techniques focus on using optical devices to detect the presence of an obstruction or obstructions within a lift door detection area. These known systems typically use arrays of infrared (IR) emitters with corresponding receivers. One prior art technique consists of "shining" a sequential array of IR beams across an elevator door entrance and an obstruction event is triggered by interrupting or breaking one or more of the beams. Such an event activates a switching device that reverses or stops movement of the elevator door. An advantage of such systems is that they can be located along the edges of the moving doors and can thus be specifically adapted to deal with variable geometry entrance obstruction detection in the plane defined by one or more elevator doors.
Such techniques are generally satisfactory for detecting obstructions in the area directly between elevator doors. However, limiting obstruction detection to the door plane is now considered insufficient to meet contemporary industry safety standards. There is impetus in the lift industry to develop a lift door obstruction sensor that is not only capable of detecting obstructions in the area between the elevator doors, but also has detection capability that extends past the doors out to a predetermined distance into the lobby. Accordingly, there is a need to either be able to upgrade existing two-dimensional door obstruction sensing arrays to incorporate three-dimensional functionality or to provide a new integrated door plane and vicinity obstruction detection system.
Previous attempts to address the abovementioned industry trend include those described in US patent no. 5,387,768 (Otis Elevator Company). This patent describes a technique whereby an obstruction event is triggered by people approaching a lift as opposed to standing stationary in front of the lift. That is - this specification describes what is essentially a movement detector. The system uses masking techniques to remove regions of the lift/area image that are not relevant to obstruction detection. The system detects passengers, controls the movement of the doors and counts the number of passengers in an attempt to minimise waiting time between elevators.
To implement this functionality, the Otis imaging system collects images at two different times and then uses the difference between the two images to determine whether there is a moving object in the elevator obstruction detection zone. This technique relies on the use of a reference image that is captured at a time before the second obstruction detection image is recorded. The two images are then subtracted and thresholded to produce an image containing only the outlines of the objects that have moved during the interval between collecting the first and second image. The system includes separate masks for the hall area and elevator sill. The hall mask masks out variable portions of the image where the masked image size depends on whether any motion has been detected in that particular region of the viewing area. That is - if a person is standing at the back of the hall and not moving, that region of the image is masked out by virtue of the lack of movement detection. The sill mask increases in size as the doors close thereby removing the doors from the image that is to be processed.
Thus, the most relevant prior art describes techniques that detect objects and masks out regions of non-interest based on response to movement.
A number of other prior art techniques (see for example US patents Nos. 5,284,225 and 5,182,776) disclose systems which again use reference images that are compared with later collected "active images". Such techniques conventionally use image subtraction whereby the reference images are subtracted from those collected at a later time to determine whether any obstructions have entered the obstruction detection zone. These techniques exhibit a number of disadvantages in that time-based obstruction detection systems may not be sensitive to time intervals longer than a certain threshold. This may produce problems where there is rapid transitory movement in an object following a stationary period. Also, known techniques can be difficult to implement given the large degree of possible variation between elevator door environments. For example, variations in furniture (and other permanent fixtures), floor covering patterns and the like can hamper the detection of incremental changes in a visual scene based on time differences. This is particularly so if the reference image must be set at an early or fixed stage.
It is therefore an object of the present invention to provide a method and apparatus for detecting obstructions which overcomes or at least ameliorates some of the disadvantages of the prior art or at least provides the public with a useful choice.
Disclosure of the Invention
In its broadest aspect the invention provides for a method of detecting objects in an area, the method including obtaining one or more images of the area, using an edge detection technique in such a way as to highlight substantially dominant linear features in the image(s), and determining if any dominant linear features intersect linear features defining the area.
Preferably the area is an object detection zone, the area being separated into at least two zones: a primary zone, being the volume described by a door and a door sill; and a secondary zone, which may include the volume beyond the door through which a person using the door would pass.
Preferably the door and sill are the door(s) and sill of an elevator and the volume beyond the door is the landing/lobby where passengers may wait for the elevator.
Preferably there are at least two images and the method includes a further step of detecting parallax in the two or more images, the parallax being produced by the presence of an object in an obstruction zone, more specifically in the secondary obstruction zone.
In a first particular aspect the invention provides for a method of detecting objects/obstructions in relation to the volume defined by a door and/or sill, said method including using edge detection techniques in such a way as to highlight the substantially dominant linear features in an image or image(s), and determining if any dominant linear features intersect linear features defining said door and/or sill.
Preferably the method may include a preliminary stage of characterising one or more images to establish the presence of any characteristic dominant linear features in the area. More preferably said characteristic dominant linear features are lines defining the door edges and/or sill and the location of said features may be stored for future reference. The method may also include an operational stage which analyses one or more images to establish the presence of any uncharacteristic features in the volume, said uncharacteristic features representing potential objects and/or obstructions in the area.
The preliminary stage includes at least two steps, a first step of detecting the location and dimensions of a door sill and a second step of detecting the location and dimensions of one or more door edge(s).
Preferably the first step includes: using substantially horizontal and/or substantially vertical edge detection filters to highlight the dominant vertical and/or horizontal lines in the part of the image where the sill is known to be approximately located; summing the intensity values along each row of pixels in the image(s) produced using the vertical and/or horizontal edge detection filters thus producing a vertical and/or horizontal function with maxima and/or minima corresponding to the position of horizontal linear features and/or vertical linear features, said linear features defining the spatial location of the door sill in terms of horizontal and vertical features in the image.
Preferably, the second step includes: using knowledge of the spatial location of the sill and knowledge of the physical relationship between the sill and the door edge(s) to obtain a sub-image or sub-images of the door(s); subjecting the sub-image(s) to edge detection filters adapted to highlight edges oriented at angles which lie between some known bounds; manipulating the sub-image(s) to produce a binary image(s), the binary image(s) consisting of one or more linear features corresponding to the door edges; and deriving equations for the linear features in the binary image(s).
Preferably the known bounds are substantially vertical and substantially horizontal edges. Prior to deriving equations for the linear features in the binary image(s) the second step may also include: manipulating the binary image by a ramp function which increases in magnitude in the vertical direction; further manipulating the images to clearly identify any dominant linear features in the binary image(s), the manipulation including applying a first filter to remove any substantially isolated features in the binary image(s), and applying a second filter to the binary image(s) to thin any substantially linear features in the image(s).
Preferably the equations of the linear features are obtained by locating the line(s) by means of a least squares, or similar, technique. There may be more than one dominant linear feature in the image(s) wherein once the equation for each linear feature has been determined, the linear feature is removed from the image and the next dominant linear feature equated.
Preferably a total weighting means is used to manipulate an estimate of the equation for each linear feature to improve the confidence of the equation for that linear feature, the total weighting means being found by normalising, and if necessary multiplying, one or more of: a first weighting means, wherein the derivative and variance of a linear feature are determined, with changes in the derivative, and points of the feature whose distance from the estimate is outside a given parameter, representing breaks in the feature, the first weighting means down-weighting or eliminating said points from the estimate; and/or a second weighting means, wherein points in a feature further away from the image capture source are given a higher weighting than points in the same feature which are closer to the image capture source; and/or a third weighting means, wherein the third weighting means is the inverse of the derivative of the feature; and/or a fourth weighting means, wherein linear features which do not span any sub-image from vertical edge to vertical edge are weighted. The edge detection may be effected by means of filters, differentiators and the like.
Preferably said edge detection is aimed at highlighting dominant lines orientated substantially horizontally, vertically and diagonally in the image(s). More preferably the diagonal lines are at substantially 45° and 135°.
The operational stage includes the steps of: capturing one or more real time operational images of the area; detecting the position of a door or doors in the image(s); detecting the presence of obstructions on the area of the image(s) representing a sill; and detecting the presence of obstructions in the area of the image(s) representing the door edges.
Preferably the position of the door(s) is obtained by detecting the intensity change in the substantially horizontal features of the sill where the intensity changes defining the spatial location of the door(s) in the image(s).
Preferably the presence of obstructions in the area of the image representing the sill is determined by at least using a substantially vertical edge detection filter to highlight predominantly vertical features in the image which intersect the linear features of the sill.
Preferably the presence of obstructions in the area of the image representing the door edges is determined by at least using an edge detection filter to highlight predominant features in the image which intersect the linear features of the door.
Preferably the operational step includes converting the edge detected image(s) to a histogram or histograms wherein peaks in the histograms represent features in the image(s), said features representing the door(s) and/or sill, and/or an obstruction or obstructions on the door edge(s) and/or sill.
The operational stage may use any of the image manipulation means described earlier. Preferably the operational stage may be repeated a plurality of times.
In a further particular aspect of the invention there is a method of detecting obstructions and/or movement in obstructions, the method including the step of detecting parallax in two or more images of an obstruction detection area, the parallax produced by the presence of objects in the area.
The method may include the step of detecting temporal changes in the images of the area.
The method may include the step of detecting vertical and horizontal parallax produced by an object located in the area.
More particularly, the invention provides for a method of detecting objects including the steps of aligning backgrounds of a plurality of images of an area and subtracting pairs of images so as to reveal, by way of parallax, the presence of objects in the area.
Even more particularly, the invention provides for a method of detecting objects including the steps of aligning backgrounds of a first and second image of an area and subtracting the first image from the second, thereby revealing, by way of parallax, the presence of a three dimensional object.
In a preferred embodiment, the method includes the steps of: collecting a first image of an area from a first viewing point; collecting a second image of the area from a second viewing point; calculating the shift between the backgrounds of the two images; aligning the backgrounds of the two images; subtracting the two images to produce a third difference image; analysing the third difference image to detect parallax thereby revealing the presence of a 3-dimensional object in the area. Preferably following the subtraction step, and before the analysing step, there is a thresholding step whereby the difference image is thresholded to exclude noise thus producing a binary image.
Preferably the third difference image is manipulated so as to contain substantially only the outlines of any 3-dimensional objects in the area.
In an alternative embodiment, the images are divided into background images and door edge images wherein calculation of the necessary shift between the backgrounds of the two images is based on the images of the background when no obstruction is present.
Preferably the shift is calculated using cross-correlation.
Preferably, the images are blurred with Gaussian, median or similar filters so as to reduce the effect of pixelation in the images.
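By way of illustration only, the align-subtract-threshold sequence above might be sketched as follows, assuming grey-scale images supplied as NumPy arrays; the function name, the use of column-mean profiles for the cross-correlation and the threshold value are illustrative assumptions rather than the specification's own implementation:

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import correlate

def parallax_difference(left, right, threshold=30):
    """Sketch: align backgrounds, subtract and threshold two views."""
    # Median-blur first to reduce pixelation artefacts.
    left = median_filter(left.astype(float), size=3)
    right = median_filter(right.astype(float), size=3)
    # Estimate the horizontal shift between the two backgrounds from the
    # cross-correlation of the images' column-mean intensity profiles.
    lp = left.mean(axis=0) - left.mean()
    rp = right.mean(axis=0) - right.mean()
    corr = correlate(lp, rp, mode='full')
    shift = int(corr.argmax()) - (len(rp) - 1)
    # Align one image to the other (np.roll wraps at the edges, which a
    # practical implementation would mask out) and form the difference.
    diff = np.abs(left - np.roll(right, shift, axis=1))
    # Threshold to exclude noise; surviving pixels are parallax produced
    # by 3-dimensional objects in the area.
    return diff > threshold
```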
The invention also provides for an apparatus for detecting obstructions in an obstruction detection area, said apparatus including at least one imaging means and a microprocessor apparatus adapted to manipulate said image(s) according to any part of the above description.
In a further aspect the invention provides an apparatus for detecting objects in an area, the apparatus including: at least one imaging means adapted to image the same scene from at least two spatially separate viewing points; and microprocessor apparatus adapted to manipulate said images in such a way as to highlight substantially dominant linear features in said images and determine if any dominant linear features signify the presence of an object in the area.
The apparatus for detecting obstructions in an obstruction detection area includes: at least one imaging means adapted to image substantially the same scene from at least two spatially separate viewing points; and microprocessor apparatus adapted to manipulate said images in order to calculate the shift between the backgrounds of the two images or pairs of images, align the background images based on said shift, subtract the resulting images to produce a difference image thereby allowing the detection of parallax effects in the difference image thus signifying the presence of an object in the area.
Preferably the microprocessor is also adapted to manipulate the image or images to highlight substantially dominant linear features of the image(s).
The images may be manipulated optically, mathematically or in a like manner which reveals dominant linear features and/or parallax in the image(s) of the area.
Preferably the microprocessor is further adapted to threshold the difference image.
The microprocessor may be in the form of a solid state, optical or the like device.
In cases where a single camera is used, the apparatus further includes an optical arm and reflection means adapted to relay an image from a viewing point that is displaced from the physical location of the camera.
The collection of parallax images may be effected by optical means including prisms, coherent optical fibre guides, and the like or alternatively the imaging means themselves may be translated or suitably displaced.
In one embodiment there may be artificial features added to aid the microprocessor in highlighting substantially normal dominant features of the image(s). There may also be an input means, the input means enabling a user to input the location of normal dominant features into the microprocessor.
Further aspects of the invention will become apparent from the following description which is given by way of example only.
Brief Description of the Drawings
The invention will now be described by way of example only and with reference to the accompanying Drawings in which:
Figure 1: illustrates plan (a), end elevation (b) and side elevation (c) views of an elevator entrance with cameras according to the preferred embodiment of the invention;
Figure 2: illustrates schematic views of two embodiments of a parallax imaging system according to the invention;
Figure 3: illustrates a schematic representation of the connection of two imaging devices (cameras), computer and interface with the door controller;
Figure 4: illustrates the primary detection zone;
Figure 5: illustrates a series of images captured by the lift cameras of the embodiment shown in Figure 1;
Figure 6: illustrates an edge detection technique as applied to determining the horizontal and vertical position, in an image, of an elevator door sill;
Figure 7: illustrates schematically the steps in an algorithm used for locating positions of the door edges;
Figure 8: illustrates the sub-images in Figure 5 when processed according to steps 4 and 5 of Figure 7;
Figure 9: illustrates a 9x9 filtering technique to remove isolated features of black and white images;
Figure 10: illustrates broken door edge lines in the images;
Figure 11: illustrates the application of a ramp to the black and white images;
Figure 12: illustrates application of a weighting array to door line edges;
Figure 13: illustrates application of the weighting array to broken line images;
Figure 14: illustrates estimation of the line equations for the black and white images;
Figure 15: illustrates the equations used to calculate the door vanishing points;
Figure 16: illustrates the detection of door position by examining the intensity profile of the running clearance;
Figure 17: illustrates how to determine the position of the doors based on histograms from vertical and horizontal edge detected images;
Figure 18: illustrates an example of the construction of a histogram for determining both the door position and any objects on the sill or door edges;
Figure 19: illustrates how the histogram can be used to detect the door position;
Figure 20: illustrates how the histogram can be used to locate both the doors and any objects or obstructions present;
Figure 21: illustrates a flow chart showing the steps in a parallax-based method for detecting obstructions in an obstruction sensing area;
Figure 22: illustrates data produced according to the method of Figure 21 as applied to a sample obstruction (a ladder) in an elevator door;
Figure 23: illustrates the detection of machine recognisable parallax for a number of sample obstructions;
Figure 24: illustrates the ability of filtering techniques to reduce artefacts produced by the pixelated nature of the detected images; and
Figure 25: illustrates sample data for a door edge obstruction event.
Description of the Preferred Example
The following description will be given in the context of obstruction detection in elevator door systems. This is to be understood as not to be a limiting feature of the invention. The apparatus and method of the present invention may be applied to other obstruction detection applications, for example the monitoring of industrial machinery, security applications and the like.
In the context of elevator door obstruction detection systems there are two distinct zones. The first is the critical volume bounded by the sill and both sets of door edges. This will be called the primary obstruction zone. Objects in this area must be detected with a high degree of reliability as a closing door will almost certainly strike any obstruction. The second zone is the lobby/landing area in front of the elevator where people approach and wait for the elevator. This zone will be called the secondary obstruction zone. The requirement for detection of obstructions in this area is less critical and the obstruction detection system must be able to mask out irrelevant or erroneous objects. The obstruction detection system of the current invention is based on optical detection methods with difference imaging techniques used to provide the required level of obstruction detection for each zone.
For the primary obstruction zone the obstruction detection system uses an edge detection technique to determine if any objects (obstructions) are present between the door edges or on the sill (the sill is the section on the floor of the elevator car and landing that the car and landing doors run in). The detection of edges in the context of elevator doors is particularly critical. Over time, people have developed the habit of placing their hand in the door gap, or stepping onto the sill, in order to stop the elevator doors closing. It is therefore important that any obstruction detection system can detect hands or other objects being put between the closing doors as well as objects on the door sill.
The system can accomplish this by determining whether any lines defining the edge of an obstruction intersect with the lines that describe the door or the edges of the sill. The system could also use standard difference imaging techniques where a reference image is used to allow obstructions to be detected.
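A minimal sketch of such an intersection test follows; the representation of the lines as pixel-coordinate endpoints and the example coordinates are assumptions for illustration only:

```python
def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2 (2-D points)."""
    def cross(o, a, b):
        # z-component of the cross product (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    # The segments cross when each straddles the line through the other.
    return (cross(q1, q2, p1) * cross(q1, q2, p2) < 0 and
            cross(p1, p2, q1) * cross(p1, p2, q2) < 0)

# Hypothetical sill edge line and a detected obstruction edge (pixels):
sill = ((0, 200), (320, 200))
edge = ((150, 150), (160, 260))
print(segments_intersect(*sill, *edge))  # True -> possible obstruction
```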
For detection of obstructions and objects in the secondary obstruction zone a parallax technique is used. This parallax technique uses the same optical images obtained for the edge detection technique but is concerned with identifying 3-dimensional objects present in the landing/lobby area. Parallax techniques can also be used to detect objects or obstructions in the primary zone. However, this has been found not to have the required accuracy for the critical zone. The reason for this is twofold: firstly, the door edge produces a substantial parallax effect which can potentially swamp the parallax produced by smaller three-dimensional objects; and secondly, the applicants have found that it might not be possible to detect objects less than 200mm above the sill using the parallax technique (this problem is described later).
The Physical Layout
In the preferred embodiment the system is likely to consist of two cameras which view the lift opening from two separate viewing points. The two separate viewing points allow the secondary detection means, based on parallax, to function. This may be achieved by known optical imaging systems such as charge-coupled device (CCD) cameras, CMOS cameras (the applicant is currently using an Active Pixel Sensor or APS CMOS camera) or any other type of electronic imaging device.
Alternatively, a single camera may be used whereby one or more optical arms directs an image of a first view (viewed from a first vantage point) and second view (from a second vantage point) to the single camera, or coherent optical fibre guides could be used to transmit separate views to a single camera. Also, imaging could be controlled by an optical cell that would alternately interpose a reflector or other type of interception device into the field of view of the camera thus diverting the view to the spatially displaced vantage point. The major drawbacks to such a system are: that the cameras must be sufficiently fast so that the doors appear stationary in each subsequently collected image; and the optical systems are likely to suffer from dust ingress in the elevator environment.
Figure 2 illustrates a simplified schematic of a possible embodiment of the invention using an electronic camera or cameras. The upper embodiment of Figure 2 shows a single electronic camera 110 positioned to be capable of collecting images of the obstruction detection area from different viewing points. This is effected by mirrors 111, 112, 113 and 114. The horizontal length of the optical arms has been shortened for clarity. If a charge-coupled device (CCD) camera were used it could either be comprised of two separate collection devices or a split CCD array. The lower embodiment of Figure 2 illustrates a schematic of a single camera parallax detection device. Here, separate images of a scene pass through separate arms of the detector. The selection of the particular viewing point is controlled by electro-optical switches which are controlled by switch 119. The camera collects images 'seen' through alternate optical arms and the parallax detection is made on the basis of switched viewing of the scene. The optical arm is formed using mirrors 116, 117 and 118.
However, the system preferred by the applicants uses standard cameras in a splayed arrangement as shown in Figure 1. Figure 1a illustrates a plan view of the elevator entrance showing the lobby side 1 and car side 2. These two areas are separated by the sill 3 which is bisected by the running clearance 8. The door edges are shown by 9a, 9b and 10a, 10b. Figure 1b illustrates an end elevation looking from the lobby 1 into the car 2. This Figure clearly shows two cameras 6 and 7 mounted on the entrance header 4. The cameras are arranged in a splayed configuration so that camera 6 is orientated towards a first side door edge 9a, 9b and camera 7 is orientated towards the second door edge 10a, 10b. There is an overlap area 5 which covers the lobby area 1 and is used in the parallax technique. To allow a larger volume to be viewed and/or to increase the parallax area the positions of the cameras 6 and 7 could be interchanged. This would mean that camera 6 would look at door set 10 and camera 7 would look at door set 9. Figure 1c shows a side view of the elevator entrance arrangement.
The reason for the preference for standard electronic cameras in this arrangement is that it allows the majority of lift openings to be viewed with cameras of reasonable cost (lenses with wide fields of view are expensive, more difficult to obtain and more likely to introduce distortion into the captured image) and it provides an area for the parallax based secondary detection (i.e. the region where the fields of view overlap) in the centre of the lift opening. Examples of the images obtained from such a set-up are shown in Figures 5a and 5b. It should be noted in these two images that there is an overlap of the area in front of the elevator. This is the secondary obstruction zone for which the parallax technique is used, and corresponds to area 5 in Figures 1b and 1c.
Figure 3 shows a schematic representation of the connection between the two cameras 6 and 7, the computer 11 and the interface with the door controller 12. A triggering signal 13 from the interface 12 is transmitted to the door controller which, for example, can operate a relay which opens the elevator doors when the system detects the presence of an obstruction.
Edge Detection
The first aspect of the present invention resides in the identification of linear features for use in primary obstruction detection where the elevator door edges and sill are obstructed. This is represented by the shaded area in Figure 4.
The edge detection technique is divided into two separate sections. The first section is an automatic calibration algorithm, which is used to determine the position of the door edges and the sill in the image or images. It is anticipated that this algorithm will run when the unit is first installed and will provide a number of parameters that describe the lift door and camera geometry. The second section of the edge detection technique is an operational algorithm which detects the presence of objects on the door edges and sill when the doors are closing. These algorithms will be known as the primary calibration algorithm and primary operational algorithm respectively.
The primary calibration algorithm
The edge detection technique used in the primary calibration algorithm is divided into two steps. The first step examines the image in order to detect the door sill, indicated by numeral 3 in Figures 1a and 4. The second step identifies the edges of the doors, indicated by numerals 9a, 9b and 10a, 10b in Figures 1a, 1b and 4.
Referring to Figures 5a and 5b (which show images captured by cameras 6 and 7 of Figure 1): identifying linear features corresponding to the sill in the images involves fully opening the elevator doors and using horizontal and vertical edge detection filters to highlight the strong vertical and horizontal lines in the respective right (image 5a) and left (image 5b) sides of the images. This is where the sill is expected to be located in these images.
This filtering technique is illustrated by the image in Figure 6 which is taken by a single camera looking down onto the sill. However, the same technique can be applied to each of the images in Figure 5a and 5b. The horizontal door tracks can be seen in the lower part of Figure 6b. It can also be seen in Figure 6d that the horizontal edge detection emphasises the horizontal lines due to the sill and the door tracks.
Once the horizontal edge image of Figure 6d is obtained the intensity values along each row of pixels in the image are summed. As can be seen from Figure 6f, the result is a function which has peaks located at the positions of the tracks and edges of the sill (when scanned in the vertical direction). These peaks can be quite easily detected in order to provide the location of the sill.
To determine the width of the sill (the horizontal part of the elevator door zone), the image shown in Figure 6b is subjected to a vertical edge detection filter. The resulting image is that shown in Figure 6c which emphasises the vertical lines that occur where the sill meets the door edges. In a similar fashion as for the tracks, the intensities in each column of pixels of Figure 6c are summed to produce the function shown in Figure 6e. The peaks in Figure 6e correspond to the horizontal position of the sill edges.
The above technique provides both the horizontal and vertical locations of the sill and it is thus possible to separate out the sill from the image (or images in the case of Figures 5a and 5b).
In summary, the steps in determining the sill extents are:
1. Fully open the lift doors with no foreign objects present on the sill or door edges.
2. Use a horizontal edge detection filter to determine the vertical extent of the sill, the vertical position of the running clearance and the vertical position of any rails.
3. Use a vertical edge detection filter to determine the horizontal extent of the sill. That is, determine the vertical lines that describe where the sill finishes and the doors start.
4. Save to non-volatile memory the numbers that describe the position of the sill in each image.
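A minimal sketch of steps 2 and 3 follows, assuming a grey-scale NumPy image, with Sobel filters standing in for the specification's horizontal and vertical edge detection filters (the prominence fraction is an illustrative tuning value):

```python
import numpy as np
from scipy.ndimage import sobel
from scipy.signal import find_peaks

def locate_sill(image):
    """Return candidate rows and columns bounding the sill."""
    img = image.astype(float)
    # Horizontal edge filter emphasises the sill edges and door tracks;
    # summing each row gives peaks at their vertical positions.
    row_profile = np.abs(sobel(img, axis=0)).sum(axis=1)
    # Vertical edge filter emphasises where the sill meets the doors;
    # summing each column gives peaks at its horizontal extent.
    col_profile = np.abs(sobel(img, axis=1)).sum(axis=0)
    rows, _ = find_peaks(row_profile, prominence=0.3 * row_profile.max())
    cols, _ = find_peaks(col_profile, prominence=0.3 * col_profile.max())
    return rows, cols
```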
As the edges of the sill are attached to the lower part of the door edges, knowledge of the sill edges makes it possible to separate out sub-images of the left hand and right hand door edges. These sub-images are shown in Figures 5c, 5d, 5e and 5f. The sub-images are then used in the second step of the method for identifying the door edges. The following is a summary of how to determine the position of the door edges and the vanishing points.
1. Use the sill extents (determined earlier) to extract four sub-images in which the door edges are the major features. The four images consist of two left and two right door edge images (Figures 5c, 5d, 5e and 5f). The two images for each side consist of images where the door edges are at angles approximating 45° or alternatively approximating 135°.
2. Use edge detection filters capable of emphasising the door edges. For example - horizontal and 45° or 135° edge detection filters.
3. Take the absolute value of all the edge detected images created in the previous step and add them together (e.g. add the absolute value of the horizontal and the 45° or 135° edge detected images).
4. Threshold the edge detected image to obtain a black and white image of the door edges. Clean up the b/w image by excluding any isolated black points that are not attached to the door edge lines and eroding the door edge lines.
5. Apply a ramp, which decreases in value with vertical displacement from the sill, to the black and white image.
6. Find the maximum value in each column, that is, the points that describe the door edge line closest to the sill. The array containing the maximum value in each column is known as the maximum value array.
7. Obtain a weighting matrix that describes how accurately each of these points is known.
8. Use a least squares linear equation estimation algorithm to obtain a linear equation describing the line.
9. Determine the width of the current line and erase this line from the image.
10. Loop back to step 6 and continue to find lines until the maximum array is predominantly empty.
11. Use a least squares technique to find the point of intersection of all the door edge lines found on each side of the door opening. Two vanishing points are obtained using this technique; one for each camera.
Figure 7 contains a flow chart of the second stage of the primary calibration algorithm applied to an actual set of elevator doors. Each step will now be described in more detail.
Steps 1, 2 and 3
The initial step is to use knowledge of the sill extents (obtained above) to subdivide the image into four sub-images which contain lines which slope either towards the top or bottom of the image. These sub-images are shown in Figures 5c, 5d, 5e and 5f. The sub-images are subjected to edge detection filters (similar to those used to determine the sill extents) which are adapted to highlight edges oriented horizontally and at an angle approximating 45° or 135°.
Step 4
The sub-images of the door edges are now converted to black and white (b/w) images by applying a threshold. The results of thresholding are shown in Figures 8a and 8b which are the black and white images produced by thresholding the images in Figures 5e and 5d respectively. The algorithm also applies routines to separate out the lines, particularly close to the sill where they can appear to join, and to remove any small isolated features (i.e. isolated black pixels) that are clearly not part of a line.
To separate out the lines an erosion technique is used. In the invention the erosion technique removes pixels from the boundary of an object. It does this by turning to white any black pixel that is adjacent to a white pixel. The object is to remove any pixels that are bridging adjacent lines and to thin down the door edge lines. The images in Figures 8c and 8d are the images in Figures 8a and 8b once they have been eroded. It can be seen that this has the effect of thinning down and separating out the lines that describe the door edges.
To remove isolated features a filter, which operates on 9x9 sub-sections of the image, is used. If the summation of all the elements in the 9x9 sub-section is less than nine then the centre pixel is set at zero, otherwise the output is the value of the centre pixel. Consequently, the algorithm looks to see if at least a complete line is likely to be passing through the 9x9 sub-section. The size of the filter (i.e. 9x9 in our case) is somewhat arbitrary and could be varied to exclude smaller or include larger objects. This technique is illustrated by Figures 9a and 9b. In Figure 9b the centre pixel will be set to zero.
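The erosion and isolated-feature filtering might be sketched as below, assuming the thresholded image is a boolean NumPy array with True for edge (black) pixels; the use of scipy's default erosion structuring element is an assumption:

```python
import numpy as np
from scipy.ndimage import binary_erosion, convolve

def clean_edges(bw):
    """Thin the door edge lines and remove isolated features."""
    # Erosion: any edge pixel adjacent to a background pixel is removed,
    # thinning the lines and separating lines that appear joined.
    thinned = binary_erosion(bw)
    # 9x9 isolated-feature filter: a pixel survives only if at least
    # nine set pixels fall in the surrounding 9x9 window, i.e. a
    # complete line is likely to pass through the window.
    counts = convolve(thinned.astype(int), np.ones((9, 9), dtype=int),
                      mode='constant')
    return thinned & (counts >= 9)
```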
Images in Figures 8e and 8f are the result after the 9x9 filter that removes isolated features is applied to the eroded images 8c and 8d. The ability of this filter to remove isolated features can most clearly be seen in image 8e.
Step 5
A ramp is now used to scale the black and white image to enable linear equations describing the lines produced by the door edges to be determined. The ramp decreases in value with vertical displacement from the line of bisection used to create the sub-images. The reason for applying the ramp in this manner is that the door edge lines in the sub-images closest to the line of bisection tend to be horizontal and span the sub-image from vertical edge to vertical edge. Furthermore, in the sub-images of the upper portion of the door edges the edge lines tend to slope upwards and for the lower portion of the door edge the edge lines slope downwards. These sloping lines tend to be shorter than the horizontal lines as they begin at a point on the vertical edge that is in contact with the sill and they then end on either the top edge of the image (for the upper sub-image) or bottom edge of the sub-image (for the lower sub-image).
An example of the application of the ramp is illustrated by Figure 11 which shows stylised images of the upper left portion of the door and the lower left portion of the door. Figures 11a and 11c are stylised images of the door edges after applying the edge detection, isolated pixel and erosion filters. The direction of the ramp slope is shown in columns A and B to the left of these Figures. The application of the ramp to the filtered images is shown in Figures 11b and 11d, and it can be seen that the ramp slopes up towards the line of bisection between the two images. At the bottom of Figures 11b and 11d the first column maximum value arrays (which are used by the least squares technique to produce the equations describing the lines) are shown. The column maximum value arrays in Figures 11b and 11d define the door edge lines closest to the lines of bisection.
The stylised images in Figure 11 are representative of the type of images obtained when the images in Figures 5d and 5e are filtered and then multiplied by a ramp.
Referring back to Figure 8, the images in 8g and 8h depict the images that result after the ramp is applied to the images in 8e and 8f. As previously described, the ramp scales the images in a linear fashion. The ramp decreases from the top of the image for the images of the bottom of the lift doors (i.e. Figures 5e and 5f) and increases from the bottom of the image for the images of the top of the lift doors (i.e. Figures 5c and 5d). The reason for this is that the performance of the algorithm is enhanced when the longest and most well defined lines are found first.
Thus, the ramp, which increases in value with the vertical image dimension and is constant with the horizontal image dimension, now has its maximum value along the line of bisection and then decreases in magnitude towards the bottom or top of the sub-images. In this way the line determination portion of the algorithm (i.e. steps 6-10) starts with the longest and most well defined lines, and moves onto those lines which are shorter and less well-defined.
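A minimal sketch of the ramp and the column maximum array for a lower door edge sub-image (so the line of bisection is the top row); reading off the row index of each column's maximum, rather than the ramp value itself, is a simplifying assumption:

```python
import numpy as np

def column_maximum_array(bw_lower):
    """Trace the door edge line closest to the line of bisection."""
    rows, _ = bw_lower.shape
    # The ramp is maximal at the line of bisection and decreases linearly
    # with vertical displacement from it; constant across each row.
    ramp = np.linspace(rows, 1, rows).reshape(-1, 1)
    ramped = bw_lower.astype(float) * ramp
    # The maximum of each column picks out the set pixel nearest the
    # bisection line; its row index feeds the least squares line fit.
    return ramped.argmax(axis=0)
```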
Steps 7, 8 and 9
The next step is to find the equation of each door edge line in the images. This starts with the column maximum array which defines the edge closest to the line of bisection of the images (see Figures 11b and 11d).
Some of the points in the column maximum array that defines the line whose equation is currently being determined can be attributed greater confidence than other points. For this reason the algorithm produces a weighting array that influences the linear equation result depending on the confidence of the data points.
The confidence with which the column maximum array is determined is affected by a number of factors. These factors include:
1. Points further away from the sill have less contrast. That is, as the lines defining the door edge move from the sill to the top of the car they tend to become less well defined due to a loss of contrast. As a consequence those points on the lines closest to the sill are given a higher weight than those closest to the top of the car.
2. The number of points in the column maximum array may be less than the horizontal dimension of the sub-image. This happens when the line does not begin and end on a vertical edge of the image, but begins on a vertical edge (where it is attached to the sill) and then finishes on a horizontal edge of the image. An example of this type of line can be seen in Figure 11a where the top line finishes on the top edge of the image. In this case the maximum column array would be [5 4 2 2 1 1 1 1 1] where the last four values are 1 because each of these columns now completely contains zeros. These values would have the effect of rotating the line estimate upwards.
3. A further factor which affects the confidence of the weighting array points is lines that are broken into a number of sub-lines and therefore do not completely span the image. It is quite common for the lines to have small breaks in them. An example of this type of problem is shown in Figure 10 where it can be seen that the bottom-most line is broken into two sub-lines. The result is that the maximum column array now has two elements that are contributed from the upper-most line, which would obviously corrupt the linear equation computation. This can be seen in the 3rd and 4th elements of the column maximum array at the bottom of Figure 10b. If the derivative and variance of the maximum column array are computed, these sections can be found by a sudden change in derivative and their distance, from a previously calculated estimate of the line, being larger than the variance. These portions are removed from the linear equation estimation by down-weighting them.
4. The final factor which may contribute to maximum array confidence is noisy data. By generating a weighting function that is the inverse of the derivative of the column maximum array it is possible to down-weight the noise. That is, as the line whose equation is being sought should be smooth, the column maximum array should also be smooth and consequently any sudden changes in derivative are likely to be noise.
To overcome the above factors individual weighting arrays are computed which counteract each of the above effects. These individual weighting arrays are known as the sill distance weighting array, short line weighting array, broken line weighting array and derivative weighting array. A total weighting array is found by normalising each of these component arrays, with respect to their largest element, and then multiplying them all together.
Application of the short line and sill distance weighting arrays will be illustrated with reference to Figure 12. Figures 12a and 12b relate to a weighting estimate of the first line in the upper group of lines in Figure 13a which exits the top of the image rather than the right hand side of the image. This line is called a short line and a weighting function is produced which ensures that the line equation estimate is only influenced by line data up to the point at which the line exits the top of the image. The top plot of Figure 12a shows the column maximum array 20 for this short line with the first-pass linear equation estimate 21 laid over it. The middle plot in 12a is of the derivative of the column maximum array and the bottom plot of 12a is a product of the short line and sill distance weights. It can be seen that the short line weight sets the weighting function to zero at the point where the short line exits the top of the image, therefore data after this point has no influence on the linear equation estimate. The plot also shows the sill distance weight which forces the linear equation estimator to place less emphasis on the data making up the current line as the line moves further away from the sill. It can be seen that the sill distance weighting function decreases the weight with increasing distance in a linear fashion.
For the least squares determination of the lines, a standard weighted least squares technique is used to determine the equations of the door edge lines in the image from the column maximum array. Currently, the least squares algorithm is applied twice to each column maximum array. On the first application of the algorithm, only the sill distance and short line weights are used to find a first estimate of the line equation.
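A minimal sketch of the total weight and the weighted fit, assuming the component weighting arrays have already been computed; np.polyfit's per-point weights stand in for the specification's weighted least squares estimator:

```python
import numpy as np

def total_weight(*components):
    """Normalise each component weighting array (sill distance, short
    line, broken line, derivative) by its largest element and multiply."""
    w = np.ones_like(components[0], dtype=float)
    for c in components:
        w *= c / c.max()
    return w

def weighted_line_fit(col_max, weights):
    """Weighted least squares fit of row = a*col + b to the column
    maximum array, down-weighting low-confidence points."""
    cols = np.arange(len(col_max))
    a, b = np.polyfit(cols, col_max, deg=1, w=weights)
    return a, b
```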
To determine whether the first-pass of the line equation estimates is of reasonable accuracy the point of intersection of the line estimate and the column maximum array is determined. If the two "lines" do not intersect or the angle between the two lines is greater than some threshold then the estimate is said to be poor.
If the line estimate is good the computation of the broken line weight begins. This is done by starting at the point of intersection and moving out towards each end of the column maximum array. The broken line weight then down-weights any points in the column maximum array that are a significant distance from the first-pass estimate of the line and where the derivative of the column maximum array has suddenly changed. If there is another sudden change in derivative of the column maximum array, and the distance between the points in the column maximum array and the line estimate is small, then down-weighting stops. Thus, down-weighting is toggled on and off.
Application of this broken line down-weighting is illustrated in Figure 13 which shows removal of data associated with breaks in the current line that cause data from later lines to be included in the column maximum array. Figures 13a, 13c and 13e show ramped images that contain lines which have breaks in various positions. In Figure 13a the break is at the very end of the line, in Figure 13c there are two breaks in the middle and one break at the end of the line and in Figure 13e the break is associated with the feature left over from a previous line. Figures 13b, 13d and 13f are plots of the column maximum array with first-pass line estimates overlaid for each of Figures 13a, 13c and 13e respectively. The plots also show the derivative of the column maximum array that is used to find the breaks in the current line and the weighting function that is used to remove the data that is present at the breaks.
If the line estimate is poor an algorithm searches the column maximum array to find the longest line segment. It does this by looking for large changes in derivative. The computation of the broken line weight now begins from the middle of this longest line. A second line estimate is then computed using the column maximum array and the complete set of weights.
Referring back to Figure 12, the plots in Figure 12b show the total weighting function and its components (except the sill distance weight). This total weighting function is used by the linear equation estimator on its second pass to get an improved estimate of the equation of the line.
Figure 14 illustrates the estimation of linear equations and removal of lines from the ramped image of Figure 8h once an equation has been found for the current line. Figure 14a shows the ramped image of Figure 8h. The top plot of Figure 14b is the contents of the column maximum array (the value obtained by determining the maximum of each column of the image). The bottom plot in Figure 14b is of values obtained from the linear equation estimator after its first pass. This is the data from the equation that describes the line at the very bottom of Figure 14a and is derived from applying a least squares routine to the data in the top plot of Figure 14b. Figure 14c is the result that is obtained after the data relating to the line determined above is erased from the image in Figure 14a. This exposes the data from which the next line estimate can be computed. Comparing Figure 14a to Figure 14c shows that if there are interconnecting pixels between the current line and the neighbouring line then the lines are treated as one and both lines are then erased. The top plot in Figure 14d is a column maximum array from Figure 14c and the bottom plot of 14d is the first estimate of the linear equation that describes the data in the top plot. The process of obtaining line data and erasing each successive line is shown in Figures 14e to 14k. The final Figure 14l is the original black and white image of the door edge with the calculated line estimates (in grey) overlaid.
Step 11
Knowledge of the vanishing point is useful as it allows the position of the door edges to be tracked as the door closes. The vanishing point remains stationary as the doors close and it is therefore possible, with knowledge of the position of the door on the sill, to determine the position of the door edges as the doors close. That is, if the point where the bottom of the door makes contact with the sill can be determined, then the door edges can be derived by drawing a line through this point and the vanishing point. A technique can be developed to detect lines that have been incorrectly calculated and do not appear to pass through the vanishing point. If this were done the least squares estimate of the vanishing point would not be skewed by these lines. To obtain an estimate of the vanishing point for each camera, a least squares algorithm is used to find an estimate of the point of intersection of all the lines previously calculated that describe the features on the door edges. That is, the point of intersection of the linear equations describing the door edge features, on each side of the door, is found by solving equations of the form shown in Figures 15a and 15b. In these equations x is the horizontal position of the vanishing point, y is the vertical position of the vanishing point, the ai are the slopes of the line equations, the bi are their intercepts, and n is the number of equations.
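If each door edge line is written y = ai·x + bi, the vanishing point is the least squares solution of the resulting overdetermined system; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def vanishing_point(slopes, intercepts):
    """Least squares intersection of lines y = a_i*x + b_i, i.e. the
    point (x, y) minimising sum_i (a_i*x - y + b_i)**2."""
    a = np.asarray(slopes, dtype=float)
    b = np.asarray(intercepts, dtype=float)
    A = np.column_stack([a, -np.ones_like(a)])   # rows: [a_i, -1]
    (x, y), *_ = np.linalg.lstsq(A, -b, rcond=None)
    return x, y
```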
Once the data for each of the door edge lines and vanishing points is obtained it is stored in non-volatile memory for later use.
While the above description discusses an automated calibration technique which defines the sill, door edges, and vanishing point by using an automatic selection process, these features could instead be manually selected at installation by using, for example, a computer and mouse arrangement that is interfaced to the detection system.
It may also be possible to assist the automatic calibration technique by placing artificial landmarks on important features on the sill and door edges. For example, tape or stickers could be used to mark the centre of the door opening or to emphasise features such as the tracks in which the door guides run or the line along which the sill meets the elevator doors.
As described above, the primary detection algorithm is divided into two separate sections: an automatic calibration algorithm, which runs when the unit is first installed and provides a number of parameters that describe the lift door and camera geometry; and an operational algorithm that detects the presence of objects on the door edges and sill when the doors are closing. This primary operational algorithm is described below.
Primary Operational Algorithm
The primary operational algorithm consists of the following steps which will be described in detail later.
1 - Obtain a new (real time) image of the door opening;
2 - Detect the position of the doors:
• Use knowledge of the sill position to extract sub-images which contain predominately the sill.
  • Use knowledge of the running clearance and rail positions to extract sub-images of these features.
• Determine the door position by determining the position at which there is a sudden change in intensity in the horizontal direction. That is, the running clearance and tracks are almost always dark features and the intensity of these features changes at points where they intersect with the doors.
• Use a vertical edge detection filter on the sill to obtain confirmation of the door position if required.
3 - Detect objects on sill:
  • Using a vertical edge detection filter, determine whether there are any strong vertical lines in the sub-image of the sill that cut either of the lines that describe the vertical extent of the sill.
4 - Detect objects on door edges:
• Use prior knowledge of door position and the vanishing points to determine the door edge positions.
• Use edge detection techniques to confirm the position of door edges if required.
Details of the specific steps will now be described.
Step 1
Firstly an operational image of the elevator door, sill and lobby is obtained. As the cameras and lift sill are fixed relative to each other, their position in the images remains constant. Thus it is a relatively simple process to extract a sub-image of the sill from the images that are captured during the operation of the lift detection system. The location of these sub-images will have been determined during the calibration stage described above.
Step 2
As with the sill itself, the vertical positions of the running clearance (the gap, which bisects the sill, between the landing/lobby floor and the elevator car floor - it can be clearly seen when looking at images of the sill) and the door tracks (the groove in the sill in which the door guides run) remain in the same position in the operational images. It is therefore possible to extract sub-images of these features from the sill image by using the knowledge of the position of these features, which was gained during the calibration stage.
As the elevator doors close, the sub-images of the running clearance and door tracks are characterised by sudden intensity changes near their ends. This is due to the highly polished doors now obscuring these darker features. The intensity change seen is from dark to light, as shown in the stylised images in Figure 16. As the doors close, the position of this intensity change also moves and as a consequence it is possible to track the position of the bottom of the door. In Figure 16, number 31 represents the running clearance when the doors are fully open and 32 represents the running clearance when the doors are partially closed.
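A minimal sketch of this tracking step is given below (Python with NumPy assumed). The extraction of the strip, the grey-level jump taken to mean a "sudden" intensity change, and the function name are all illustrative assumptions.

```python
import numpy as np

def door_bottom_position(clearance_strip, jump=40):
    """Locate the dark-to-light transition along a strip of the running
    clearance, taken here as the horizontal position of the door bottom.

    clearance_strip: 1-D array of pixel intensities sampled along the
    running clearance sub-image. A jump of 40 grey levels is an assumed,
    illustrative threshold for a "sudden" change.
    """
    diffs = np.diff(clearance_strip.astype(int))
    candidates = np.flatnonzero(np.abs(diffs) > jump)
    return int(candidates[0]) if candidates.size else None
```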
An alternative method of finding the door position uses the principle that the horizontal lines in the image are shortened as the doors close, and that the door edge lines become more vertical as the doors close. This second technique involves the following steps (a sketch in code is given after the list):
1. Filtering the image with a horizontal edge detection filter.
2. Converting the edge detected image values to absolute values.
3. Summing the columns of the above image to produce the horizontal edge histogram.
4. Filtering the image with a vertical edge detection filter.
5. Converting the edge detected image values to absolute values.
6. Summing the columns of the above image to produce the vertical edge histogram.
7. The energy in the vertical and horizontal edge histograms is then equalised.
8. The two energy equalised histograms are then added together to produce a single histogram.
9. The peak in the histogram between the known door-open and door-closed positions is found, as this provides the new door position.
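A minimal sketch of steps 1 to 9 follows, in Python with NumPy and SciPy assumed. Sobel filters stand in for the unspecified edge detection filters, energy equalisation is interpreted as scaling the two histograms to equal total energy, and all function names are illustrative.

```python
import numpy as np
from scipy.ndimage import sobel

def combined_edge_histogram(sill_image):
    img = sill_image.astype(float)
    # Steps 1-3: horizontal edge detection, absolute values, column sums.
    h_hist = np.abs(sobel(img, axis=0)).sum(axis=0)
    # Steps 4-6: vertical edge detection, absolute values, column sums.
    v_hist = np.abs(sobel(img, axis=1)).sum(axis=0)
    # Step 7: equalise energy (one interpretation: scale the vertical
    # histogram so its total matches the horizontal one).
    v_hist = v_hist * (h_hist.sum() / v_hist.sum())
    # Step 8: add the energy-equalised histograms.
    return h_hist + v_hist, h_hist, v_hist

def door_position(combined, open_pos, closed_pos):
    # Step 9: the peak between the known door-open and door-closed
    # positions gives the new door position.
    lo, hi = sorted((open_pos, closed_pos))
    return lo + int(np.argmax(combined[lo:hi]))
```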
The door-closed position (i.e. usually the centre of the lift) is found by applying the above algorithm when the doors are closed. To obtain the global maximum and a good approximation of the door-closed position, a parabola is fitted to the peak maximum and the points on either side of the peak where the peak's values approach the background level.
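For illustration, a common three-point parabola fit to a histogram peak is sketched below. Note that the specification fits the parabola to the peak and the points where it approaches the background level, so this is a deliberate simplification.

```python
def refine_peak(hist, k):
    """Sub-sample refinement of a histogram peak at integer index k using a
    parabola through the peak and its immediate neighbours (a simplification
    of the fit described in the text)."""
    y0, y1, y2 = hist[k - 1], hist[k], hist[k + 1]
    denom = y0 - 2 * y1 + y2
    return k if denom == 0 else k + 0.5 * (y0 - y2) / denom
```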
The advantage of this technique over the first technique is that it is more robust with respect to noise and any movement in image features as the doors close. The reason for this is that the summation averages out any noise and reinforces the change in width of all the horizontal features on the sill as the door closes. In contrast the first technique is quite reliant on:
• The position of the running clearance remaining the same from image to image (if the imaging system is rigidly mounted this should not pose too much of a problem).
• No major changes in lighting occurring on the horizontal features as the doors close.
• Horizontal features in the image that individually may only be a few pixels in height.
• The horizontal features being oriented to the horizontal reasonably accurately (i.e. the images cannot be tilted too much with respect to the horizontal). However, the advantage of the first technique is that it is computationally much faster than the latter technique.
An example of the latter technique is given in Figure 17, where the original images of the sill area, as the doors close, are shown in Figures 17a, 17c, 17e and 17g; and the plots of the corresponding histograms are shown in Figures 17b, 17d, 17f and 17h. The histogram plots consist of the histogram of the horizontal edge detected image, the histogram of the vertical edge detected image and the summation of the two histograms after energy equalisation. It can be seen that there is a sudden change in intensity in the summation histogram at the position corresponding to the door. Thus, it is possible to automatically detect the door position using this technique.
Step 3
The algorithm then needs to detect objects on the sill. Objects that are on the sill will cut one or both of the horizontal "lines" that define the vertical extent of the sill. There is also the possibility that they will cut the horizontal lines that describe the vertical position of the running clearance and tracks.
By applying horizontal and vertical edge detection filters to the sill sub-image it is possible to emphasise any objects placed on the sill. Detection is then possible by looking for cuts in the horizontal lines that define the vertical sill extent, the running clearance and the door tracks. Once the vertical edge detection filter has been used, objects on the sill also tend to produce strong vertical lines (in contrast to the native sill, which tends to have strong horizontal lines). Thus, objects can also be detected by looking in the filtered sill image for strong vertical lines which intersect with the horizontal lines that define the sill, the running clearance or the door tracks. Knowledge of the door position gained in the above step means that the elevator doors will not be erroneously detected as an object.
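A sketch of this check, under the same assumptions as before (a Sobel vertical edge filter, an illustrative edge-strength threshold, and assumed names) might look as follows.

```python
import numpy as np
from scipy.ndimage import sobel

def objects_on_sill(sill_sub, line_rows, door_col, edge_thresh=100.0):
    """Report columns where a strong vertical line cuts one of the horizontal
    lines (sill extent, running clearance, door tracks) given by line_rows.

    door_col is the door position found in the previous step; columns behind
    the door are masked so the doors themselves are not flagged. Whether the
    door lies left or right of this column depends on the camera, so the
    masking direction here is an assumption.
    """
    v_edges = np.abs(sobel(sill_sub.astype(float), axis=1))
    hits = set()
    for row in line_rows:
        strong = v_edges[row] > edge_thresh
        strong[door_col:] = False   # assumed: door occupies columns >= door_col
        hits.update(np.flatnonzero(strong).tolist())
    return sorted(hits)
```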
Step 4
Detection of objects on the door edges: knowledge of the vanishing points (obtained during the calibration stage) and the position of the bottom of the doors on the sill (obtained immediately above) allows the equations defining the door edges to be modified as the doors close. Thus, as the doors close it is possible to determine where the door edges should be in the image.
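Since a door edge is simply the line joining the current door-bottom contact point to the fixed vanishing point, updating its equation is direct. A sketch, with coordinate conventions and names assumed:

```python
def door_edge_line(door_bottom, vanishing_point):
    """Slope and intercept of the door edge line y = a*x + b through the
    current door-bottom/sill contact point and the stored vanishing point.
    A nearly vertical edge (x0 close to xv) would need the line expressed
    as x = f(y) instead; that case is omitted from this sketch."""
    (x0, y0), (xv, yv) = door_bottom, vanishing_point
    a = (yv - y0) / (xv - x0)
    return a, y0 - a * x0
```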
This knowledge can be used to extract a sub-image that contains the door edges. A vertical edge detection filter is applied to the sub-images; it emphasises the strong vertical lines that objects on the door edges tend to produce due to their orientation with respect to the cameras. By overlaying the lines that define the door edges, it is possible to determine whether these lines are cut by any strong vertical lines associated with an object. Hence, it is possible to detect the object.
An extension of the alternative method for determining the positions of the doors, which was described above with reference to Figure 17, allows the door position and any objects present to be detected in a single operation. This method is based on histograms from vertical, horizontal and substantially diagonal (e.g. 45° or 135°) edge detected images. It involves calculating the horizontal and vertical edge histograms as described in steps 1 to 9 above, and then:
10. Calculating an edge histogram from an image processed by an edge detection filter designed to emphasise substantially diagonal lines in the image; for example, a 45° edge detected image for the left-hand door (when standing in the elevator looking outwards) and a 135° edge detected image for the right-hand door.
11. A new histogram is calculated from the product of the angled edge histogram and the vertical edge histogram, divided by the horizontal edge histogram.
12. The peaks in this histogram are tracked as the doors close. If the protection area is clear these peaks belong to the door position. If an object appears on either the sill or the door edges, large additional peaks appear in the histogram, in positions not corresponding to the door peaks, indicating the presence of an object.
This technique indicates the door position as the substantially diagonal edge detection emphasises the door edges resulting in a raised histogram level from the left (or right) side of the image to the door position. The vertical edge detection also provides a peak aligned with the door position due to the edge that results where the bottom of the doors meet the sill. Thus, the peak in the product of the two histograms indicates the door position. When an object is placed across the sill or door edges the vertical histogram then contains significant peaks indicating the positions of the edges of the object. In this case the histogram product contains multiple peaks, some of which are due to the object and some due to the doors.
The histogram product is divided by the horizontal histogram as this has been shown to lower the background level in the histogram, and thereby emphasise the peaks. The background level tends to be quite high when the image of the sill contains horizontal features that arise from the sill being textured in some way.
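Steps 10 and 11 might be sketched as follows, reusing the horizontal and vertical histograms computed earlier. The diagonal Sobel-style kernel and the small constant guarding the division are assumptions; a dedicated 135° kernel (the flipped variant) would serve the right-hand door.

```python
import numpy as np
from scipy.ndimage import convolve

# A Sobel-like kernel oriented to respond to roughly 45-degree edges.
DIAG_45 = np.array([[ 0,  1,  2],
                    [-1,  0,  1],
                    [-2, -1,  0]], dtype=float)

def detection_histogram(sill_image, h_hist, v_hist, eps=1.0):
    """Step 10: diagonal edge histogram. Step 11: (diagonal * vertical) /
    horizontal, which lowers the background level and emphasises the peaks.
    eps guards columns where the horizontal histogram is near zero."""
    d_hist = np.abs(convolve(sill_image.astype(float), DIAG_45)).sum(axis=0)
    return d_hist * v_hist / (h_hist + eps)
```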
The images in Figure 18 demonstrate how the histograms of the various edge detected images combine to give a histogram that enables the door position to be detected. Figures 18a, 18b and 18c are images of the sill after 45°, vertical, and horizontal edge detection filters have been applied respectively. In Figure 18d the uppermost three plots are the raw histograms obtained from the edge detected images and the bottom plot is the combination histogram which is used to determine the door position.
Figure 19 demonstrates, using a number of images of the sill area as the doors close (Figures 19a, 19c, 19e and 19g) and the accompanying histograms (Figures 19b, 19d, 19f and 19h), how this further technique can be used to determine the door position.
The images and plots in Figure 20 demonstrate how the histograms combine to enable doors and objects to be located. The objects in Figure 20 are a foot on the elevator sill and an arm on the elevator door edges. The original images of the sill, at various stages of door closure, are in Figure 20a, 20c, 20e and 20g and the accompanying histograms are in Figure 20b, 20d, 20f, and 20h. It can be seen that with the objects used in this example, the peaks associated with the object are much larger than those associated with the door edges.
In addition to the histogram method, described above, for detecting objects, detection can be performed or confirmed using: (a) a method based on searching for breaks in the lines that describe the door edges, sill/running-clearance interfaces or sill/floor interfaces; and/or
(b) a method that searches for lines that extend vertically from the horizontal lines on the sill or angled lines of the door edges.
With any of the above methods of determining the door position, the symmetry of the door opening or prediction methods can be used to provide confirmation of the door position given by the algorithm. That is, the distance from the estimated left-hand door position to the centre line should be approximately equal to the distance from the estimated right-hand door position to the centre line, as sketched below. Furthermore, knowledge of the current door position, the direction of travel, and an estimate of the door speed could be used to provide confirmation of the door position in the next frame.
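A sketch of the symmetry confirmation (the pixel tolerance is an illustrative assumption):

```python
def doors_symmetric(left_pos, right_pos, centre, tol=5):
    """The estimated left- and right-hand door positions should sit roughly
    the same distance either side of the centre line."""
    return abs((centre - left_pos) - (right_pos - centre)) <= tol
```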
The above edge imaging technique allows the detection of objects on the door edges and sill which might be struck by the closing doors. While this provides the required safety feature of door obstruction detection, it would be advantageous to have some early warning of objects moving in the vicinity of the elevator doors. This would allow the elevator controller to anticipate a person wanting to enter the elevator car. It is envisaged that the parallax technique, described next, will serve as such an early warning and anticipatory device; thus, the doors would reverse before objects appeared on the sill or door edges.
Parallax Detection Technique
A key feature of the secondary detection technique resides in the application of the parallax effect to obstruction detection. Referring to Figure 21, the steps in a preferred embodiment are shown. The two images are collected from spatially separate vantage points. These images correspond to the scene looking down into an elevator doorway from two different locations (see Figures 22a and 22b). The views encompass the immediate vicinity of the elevator doorway - this being the area where users of the lift would normally approach the lift doors. This vicinity can be broken down into the primary obstruction zone (described earlier) and the wider, secondary obstruction zone through which users pass when approaching the lift (see Figure 4). Referring to Figure 22, the parallax technique is illustrated by means of placing a ladder immediately outside the primary obstruction zone of a lift door (i.e. in the secondary obstruction zone). Two images, 22a and 22b, are recorded from different vantage points. As a preliminary point, these two images have been taken with a different camera arrangement to that described in the earlier part of the specification. The earlier camera arrangement used two unsplayed cameras that were placed 100mm apart; with that arrangement the two images produced were those shown in Figures 5a and 5b. The main point to note is that the images in Figures 22a and 22b show similar views of the door, whereas the images in Figures 5a and 5b show corresponding views of either side of the door. This has no bearing on the implementation of the following discussion because, in practice, once the calibration algorithm has determined the location of the door edges and sill, these areas would be masked out and only sub-images of the secondary obstruction zone (numbered 5 in Figure 4) are considered. These sub-images correspond to the upper middle of the images in Figures 22a and 22b, or the top right and top left of the respective images in Figures 5a and 5b.
Once the images in Figures 22a and 22b have been obtained, the shift between the backgrounds of the two images is calculated and used to align the background of one scene with the other. The amount of alignment required would preferably be minimised by ensuring that the optics of the system are as precisely aligned as possible during their manufacture. Any minor imperfections in the alignment of the backgrounds could then be compensated for by a suitable mathematical image processing technique. In the preferred embodiment the technique for correcting such imperfections is cross-correlation or minimum energy. The minimum energy technique involves 'shifting' one image (in two dimensions) a pixel at a time, in an ordered manner in each direction. The two images are subtracted and all of the pixel values in the difference image are summed. The shift required to align the images most accurately is then that which results in the minimum summation value; that is, when the difference image sum is minimised, alignment is optimised. Cross-correlation is a statistical technique which is generally more robust and faster than techniques based on minimum energy. Further, significant enhancements in processing speed have been found when cross-correlation is effected via fast Fourier transforms.
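A sketch of the minimum energy search is given below (the search radius and names are assumptions). As noted, cross-correlation, especially via fast Fourier transforms, would be the faster choice in practice.

```python
import numpy as np

def min_energy_shift(ref, img, max_shift=8):
    """Exhaustively shift img against ref within +/-max_shift pixels in each
    axis and keep the shift minimising the mean absolute difference of the
    overlapping regions; this is the shift that best aligns the backgrounds."""
    h, w = ref.shape
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ra = ref[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            rb = img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            energy = np.abs(ra.astype(int) - rb.astype(int)).mean()
            if energy < best:
                best, best_shift = energy, (dy, dx)
    return best_shift
```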
Complete cancellation of the background has been found to be impractical due to inaccuracies in determining the shift caused by parallax, incomplete cancellation caused by noise, the effects of pixelated images, non-linear distortion, rotation in the image plane of the cameras and the differences in lighting observed from the two different vantage points.
The error introduced by image alignment effects depends on both the size of the 3-dimensional object relative to the background and the magnitude of the parallax that the object produces. To minimise this error, a section of the images containing no or minimal parallax and maximum background can be used to calculate the shift necessary to align the backgrounds of the images.
The mathematical techniques that are used to compute the shift between the backgrounds of the images assume that the images are infinite. Real images are finite and as a result, these techniques tend to underestimate the shift. These effects can usually be overcome by forcing the image to zero at its perimeter before using a mathematical technique to compute the shift. This is achieved by multiplying the image by a function (such as a two dimensional cosine function) that is zero valued at its boundary (perimeter). When this is done, it has been found that the shifts tend to be correctly calculated and the difference images have only a relatively small contribution from background that is not completely cancelled.
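For example, a separable two-dimensional raised cosine (Hann) window is zero valued at the image perimeter; the sketch below applies one before the shift computation (function name assumed).

```python
import numpy as np

def cosine_window(img):
    """Taper the image to zero at its perimeter before computing the shift,
    so the finite image boundary does not bias the estimate."""
    h, w = img.shape
    wy = 0.5 * (1 - np.cos(2 * np.pi * np.arange(h) / (h - 1)))
    wx = 0.5 * (1 - np.cos(2 * np.pi * np.arange(w) / (w - 1)))
    return img * np.outer(wy, wx)
```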
Additional sources of error come from noise. However, the overall effect of this is generally minimal.
A further source of error in background shifting is pixelation of the elements of the picture. Real images are, by their nature, discrete at their boundaries and as they are viewed from two different vantage points, it is not possible to align the backgrounds of the images exactly or cancel the backgrounds completely. This is due to the fact that the edges of objects within an image will not always lie precisely on a pixel boundary. The edge of an object will generally overlap the pixel boundary and therefore shifts will not always correspond to an integer number of pixels.
This error can be largely overcome by blurring or smearing the image so that each pixel has an intensity value that is an average of its surrounding pixel values. In prototyping, it has been found that gaussian and median filtering are particularly effective in this regard.
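A sketch of such pre-blurring (SciPy assumed; the parameter values are illustrative):

```python
from scipy.ndimage import gaussian_filter, median_filter

def preblur(img, sigma=1.0, size=3):
    """Smear each pixel with its neighbours so that sub-pixel misregistration
    between the two views cancels more completely on subtraction."""
    return median_filter(gaussian_filter(img, sigma), size=size)
```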
Errors due to image rotation can be largely reduced by accurately aligning the optics during manufacture. Illumination errors can be minimised by using a system that implements a single camera and hence the same exposure and aperture control system, in order to obtain two images which are unaffected by differences in lighting intensity. Parallax effects can then be obtained using a single camera in conjunction with a mirror/lens system to obtain spatially separate views whereby the resulting images are focused onto separate halves of the imaging device within a single camera. It is not necessary that the image be split onto separate halves of the imaging device. A switching means may be used to select the required image which is then focussed on the camera. This was discussed earlier.
Once the shift between the backgrounds of the two initially collected images has been calculated, the information is stored in non-volatile memory for future use. As the cameras and lobby remain in a fixed location with respect to each other, there is no need to recalculate the background shift for each operational image, which saves processing time. Ideally the background shift would be calculated during the calibration stage.
The backgrounds are aligned and then subtracted in order to produce a difference image. This is shown in Figure 22c. As can be seen, the background and lift sill contribution to the image is substantially cancelled while the obstruction (in the present case a ladder) is emphasised. To enhance the parallax effect, the resulting image is preferably thresholded, producing the significantly more intense representation of the parallax shown in Figure 22d.
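The align-subtract-threshold sequence might be sketched as follows. The wrap-around of np.roll at the image border is a shortcut acceptable only for small shifts, and the threshold value is illustrative.

```python
import numpy as np

def parallax_map(left, right, shift, thresh=30):
    """Align the background of the right-hand view onto the left using the
    stored shift, subtract, and threshold; surviving pixels mark parallax
    produced by three-dimensional objects."""
    dy, dx = shift
    aligned = np.roll(np.roll(right, dy, axis=0), dx, axis=1)
    diff = np.abs(left.astype(int) - aligned.astype(int))
    return (diff > thresh).astype(np.uint8)
```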
Ideally, the difference image will contain only outlines of the three dimensional objects. In practice, however, the resulting parallax-highlighted image is as shown in Figure 22d, which has elements of the door and sill in it. As described earlier, the location of these is known from the calibration stage and as a consequence they can be masked out.
The present technique has been found to be particularly useful in detecting people proximate to or entering an elevator. This is because as the height of an object increases, the parallax effect becomes more noticeable thereby allowing more accurate and clear identification of the obstruction.
Figure 23 illustrates the result of placing a variety of sample obstructions immediately outside an elevator door. Figures 23a, 23d, 23g, 23j and 23m illustrate, respectively, a box; a box on a rug; a cane; a soft toy (representing an animal); and the leg of an approaching user. The corresponding difference images (Figures 23b, 23e, 23h, 23k and 23n) are shown along with the subsequently thresholded difference images (Figures 23c, 23f, 23i, 23l and 23o). As can be seen, the existence of a patterned rug can hamper effective subtraction of the background. However, even with significant background remaining, the thresholding step significantly enhances the machine detectable position of the obstruction.
Thus a machine recognisable parallax effect is produced when an obstruction is moved within or placed in the obstruction detection area. As Figures 23a, 23b and 23c illustrate, parallax is primarily produced by the parts of the image corresponding to the vertical edges of the box and not by the horizontal edges. This is because the cameras are displaced horizontally at the top of the lift doorway and therefore horizontal parallax effects will be minimised. The parallax produced by the right-hand door edge is also clearly visible, and it can be seen that the size of the parallax decreases and eventually vanishes as the door edge approaches the sill or floor area. Figure 24 illustrates the ability of filtering techniques (discussed in detail earlier) to reduce pixelation artefacts for sample images identical to those shown in Figure 23. Figures 24c, 24f and 24i (when compared with Figures 23f, 23i and 23o) illustrate that filtering reduces the level of the background cancellation remnants without suppressing the features produced by parallax. The effectiveness of this technique is evident in that the previously visible horizontal lines due to the tracks on the door sill are now absent. This is desirable given that these features belong to the background and are not attributable to the parallax effect caused by an obstruction.
It has been found that the parallax obstruction detection technique described can also be used to detect a hand or other obstruction on the door edge. In trials, the parallax produced by a hand on the edge of the door was clearly machine detectable. Clearly, if this technique were to be implemented in a practical form, it would be necessary to distinguish the parallax produced by the door edge itself from that produced by the presence of a hand or other obstruction. The previously described technique for identifying the door edges in an image could be used for this purpose.
An additional technique that could be used to identify such obstructions is to obtain reference images of the elevator door edges in a situation where no obstructions exist. Such reference images could continually be compared with the images of the car door edges recorded when the lift is in use. If a hand is placed between the doors, the reference image could be subtracted from the newly obtained 'operative' image. If an obstruction is present it will be visible in the difference image; otherwise the difference image should be zero. An example of such a subtractive process is shown in Figures 25a to 25f. The reference images 25b and 25e illustrate non-obstruction situations and the images 25a and 25d respectively are 'operative' images. The subtracted images 25c and 25f reveal the presence of the hand and its reflection in the edge of the door slamming post.
In trials, the present invention has been found to be capable of machine-detecting parallax for a reasonably large variety of objects. There are theoretical limits on the parallax which can be detected by the system given the various camera parameters (such as maximum camera separation and height). In particular, it appears that it would be difficult to detect objects located less than 200mm above the floor and objects placed in the top corners of the lift doorway. However, it is possible that this limit may be overcome by development of the camera optics and geometry.
The above-mentioned techniques have been developed to allow automatic detection of the sill and door edges. In terms of the requirements of the primary obstruction zone, both techniques - parallax and reference imaging - have been found to work well at producing machine identifiable features that correspond directly to an obstruction being placed in the door obstruction detection zone. However, the edge detection technique described earlier is much more robust and reliable for the critical zone. The generalised parallax technique has been found to be particularly effective at detecting objects in the secondary obstruction zone.
Thus the present invention provides for a significantly improved obstruction detection system which can reliably detect objects in both the door edge and wider protection zones. Changes in imaging parameters will only improve this detection threshold, particularly for the parallax technique. The system can further reliably remove the majority of the background from the image to aid in further processing. In addition, it has been found that hands or other obstructions placed at the door edges can be reliably detected - this being done by separating the image into a primary obstruction zone and a secondary obstruction zone. Numerous variations and modifications will be clear to one skilled in the art. These may include substituting different types of camera or imaging devices. Further, it may be possible to reduce the number of image collection devices to one by means of optical systems such as that described above. This may provide significant cost savings in terms of the requirements of providing two spatially separate viewing points.
Although the present invention has been described in the context of elevator doors, it is possible that, with suitable modification, the invention may be applicable to other obstruction detection applications such as those involved in heavy machinery, process control, safety and the like.
Where in the foregoing description, reference has been made to specific components or integers of the invention having known equivalents, then such equivalents are herein incorporated as if individually set forth.
Although the invention has been described by way of example and with reference to possible embodiments thereof, it is to be understood that modifications or improvements may be made thereto without departing from the scope or spirit of the invention.

Claims

1. A method of detecting objects in an area, the method including obtaining one or more images of the area, using an edge detection technique in such a way as to highlight substantially dominant linear features in the image(s), and determining if any dominant linear features intersect linear features defining the area.
2. A method of detecting objects in an area as claimed in claim 1 wherein the area is an object detection zone, the area being separated into at least two zones; a primary zone, being the volume described by a door and a door sill; and a secondary zone, which may include the volume beyond the door through which a person using the door would pass.
3. A method of detecting objects in an area as claimed in claim 2 wherein the primary zone is the door(s) and sill of an elevator and the secondary zone is the landing/lobby where passengers may wait for the elevator.
4. A method of detecting objects in an area as claimed in any one of claims 1 to 3 wherein there are at least two images and the method includes a further step of detecting parallax in the two or more images, the parallax being produced by the presence of an object in the area, more specifically in the secondary zone.
5. A method of detecting objects in an area defined by a door and/or sill, said method including using edge detection techniques in such a way so as to highlight the substantially dominant linear features in an image or image(s), and determining if any dominant linear features intersect linear features defining said door and/or sill.
6. A method of detecting objects in an area as claimed in claim 5 wherein the method includes a preliminary stage of characterising one or more images to establish the presence of any characteristic linear features in the area, said characteristic linear features are lines defining the door edges and/or sill and the location of said features is stored for future reference.
7. A method of detecting objects in an area as claimed in claim 5 wherein the method also includes an operational stage which analyses one or more images to establish the presence of any uncharacteristic features in the volume, said uncharacteristic features representing potential object and/or obstructions in the area.
8. A method of detecting objects in an area as claimed in claims 6 or 7 wherein the preliminary stage includes at least two steps, a first step of detecting the location and dimensions of a door sill and a second step of detecting the location and dimensions of one or more door edge(s).
9. A method of detecting objects in an area as claimed in claim 8 wherein the first step includes: using substantially horizontal and/or substantially vertical edge detection filters to highlight the dominant vertical and/or horizontal lines in the part of the image where the sill is known to be approximately located; summing the intensity values along each row of pixels in the image(s) produced using the vertical and/or horizontal edge detection filters thus producing a vertical and/or horizontal function with maxima and/or minima corresponding to the position of horizontal linear features and/or vertical linear features, said linear features defining the spatial location of the door sill in terms of horizontal and vertical features in the image.
10. A method of detecting objects in an area as claimed in claim 8 or claim 9 wherein the second step includes: using knowledge of the spatial location of the sill and knowledge of the physical relationship between the sill and the door edge(s) to obtain a sub-image or sub-images of the door(s); subjecting the sub-image(s) to edge detection filters adapted to highlight edges oriented at angles which lie between some known bounds; manipulating the sub-image(s) to produce a binary image(s), the binary image(s) consisting of one or more linear features corresponding to the door edges; and deriving equations for the linear features in the binary image(s).
11. A method of detecting objects in an area as claimed in claim 10 wherein the known bounds are substantially vertical and substantially horizontal edges.
12. A method of detecting objects in an area as claimed in claim 10 or 11 wherein prior to deriving equations for the linear features in the binary image(s) the second step may also include: manipulating the binary image by a ramp function which increases in magnitude in the vertical direction; further manipulating the images to clearly identify any dominant linear features in the binary image(s), the manipulation including applying a first filter to remove any substantially isolated features in the binary image(s), and applying a second filter to the binary image(s) to thin any substantially linear features in the image(s).
13. A method of detecting objects in an area as claimed in any one of claims 10 to 12 wherein the equations of the linear features are obtained by locating the line(s) by means of a least squares, or similar, technique; if there is more than one dominant linear feature in the image(s), once the equation for any one linear feature has been determined, that linear feature is removed from the image and the next dominant linear feature equated.
14. A method of detecting objects in an area as claimed in any one of claims 10 to 13 wherein a total weighting means is used to manipulate an estimate of the equation for each linear feature, thereby improving the confidence of the equation for that linear feature, the total weighting means being found by normalising, and if necessary multiplying, one or more of: a first weighting means, wherein the derivative and variance of a linear feature are determined, changes in the derivative and distance of points of the feature which are outside a given parameter representing breaks in the feature, the first weighting means down weighting or eliminating said points from the estimate; and/or a second weighting means, wherein points in a linear feature further away from the image capture source are given a higher weighting than points in the same feature which are closer to the image capture source; and/or a third weighting means, wherein the third weighting means is the inverse of the derivative of the feature; and/or a fourth weighting means, wherein linear features which do not span any sub-image from vertical edge to vertical edge are weighted.
15. A method of detecting objects in an area as claimed in any preceding claim wherein the edge detection may be effected by means of filters, differentiators and the like.
16. A method of detecting objects in an area as claimed in any preceding claim wherein said edge detection is aimed at highlighting dominant lines orientated substantially horizontal, vertical and substantially diagonal, more particularly the diagonal lines are at substantially 45° and 135°, in the image(s).
17. A method of detecting objects in an area as claimed in claim 7 wherein the operational stage includes the steps of: capturing one or more real time operational images of the area; detecting the position of a door or doors in the image(s); detecting the presence of objects in the area of the image(s) representing a sill; and detecting the presence of objects in the area of the image(s) representing the door edges.
18. A method of detecting objects in an area as claimed in claim 17 wherein the position of the doors is obtained by detecting the intensity change in the substantially horizontal features of the sill where the intensity changes define the spatial location of the door(s) in the image(s).
19. A method of detecting objects in an area as claimed in claim 17 wherein the presence of objects in the area of the image representing the sill is determined by at least using a substantially vertical edge detection filter to highlight predominantly vertical features in the image which intersect the linear features of the sill.
20. A method of detecting objects in an area as claimed in claim 17 wherein the presence of objects in the area of the image representing the door edges is determined by at least using an edge detection filter to highlight predominant features in the image which intersect the linear features of the door.
21. A method of detecting objects in an area as claimed in any one of claims 17 to 20 wherein the operational step includes converting the edge detected image(s) to a histogram or histograms wherein peaks in the histograms represent features in the image(s), said features representing the door(s) and/or sill, and/or an obstruction or obstructions on the door edge(s) and/or sill.
22. A method of detecting objects in an area as claimed in any one of claims 17 to 21 wherein the operational stage may be repeated a plurality of times.
23. A method of detecting objects and/or movement in objects, the method including the step of detecting parallax in two or more images of an area, the parallax produced by the presence of an object in the area.
24. A method as claimed in claim 23, the method including the step of detecting temporal changes in the images of the area.
25. A method as claimed in claims 23 and 24 wherein the method includes the step of detecting vertical and horizontal parallax produced by an object located in the area.
26. A method as claimed in claim 25 including the steps of aligning backgrounds of a plurality of images of an area and subtracting pairs of images so as to reveal, by way of parallax, the presence of objects in the area.
27. A method as claimed in claim 23 or claim 26 wherein the method of detecting objects includes the steps of aligning backgrounds of a first and second image of an area and subtracting the first image from the second, thereby revealing, by way of parallax, the presence of a three dimensional object.
28. A method as claimed in claim 23, 26 or 27 wherein the method includes the steps of: collecting a first image of an area from a first viewing point; collecting a second image of an area from a second viewing point; calculating the shift between the backgrounds of the two images; aligning the backgrounds of the two images; subtracting the two images to produce a third difference image; analysing the third difference image to detect parallax thereby revealing the presence of a 3-dimensional object in the area.
29. A method as claimed in claim 28 wherein following the subtraction step, and before the analysing step, there is a thresholding step whereby the difference image is thresholded to exclude noise thus producing a binary image.
30. A method as claimed in claims 28 or 29 wherein the third difference image is manipulated so as to contain substantially only the outlines of any 3-dimensional objects in the area.
31. A method as claimed in any one of claims 28 to 30 wherein the images are divided into background images and door edge images wherein calculation of the necessary shift between the backgrounds of the two images is based on the images of the background when no object is present.
32. A method as claimed in any one of claims 28 to 31 wherein the shift is calculated using cross-correlation.
33. A method as claimed in any one of claims 28 to 32 wherein the images are blurred with gaussian, median or similar filters so as to reduce the effect of pixelation in the images.
34. An apparatus for detecting objects in an area, said apparatus including at least one imaging means and a microprocessor apparatus adapted to carry out the method as claimed in any preceding claim.
35. An apparatus for detecting objects in an area, the apparatus including: at least one imaging means adapted to image the same scene from at least two spatially separate viewing points; and microprocessor apparatus adapted to manipulate said images in such a way as to highlight substantially dominant linear features in said images and determine if any dominant linear features signify the presence of an object in the area.
36. An apparatus for detecting objects in an area, the apparatus including: at least one imaging means adapted to image substantially the same scene from at least two spatially separate viewing points; and microprocessor apparatus adapted to manipulate said images in order to calculate the shift between the backgrounds of the two images or pairs of images, align the background images based on said shift, subtract the resulting images to produce a difference image thereby allowing the detection of parallax effects in the difference image thus signifying the presence of an object in the area.
37. An apparatus for detecting objects in an area as claimed in any one of claims 34 to 36 wherein the microprocessor is also adapted to manipulate the image or images to highlight substantially dominant linear features of the image(s).
38. An apparatus for detecting objects in an area as claimed in any one of claims 34 to 37 wherein the images may be manipulated optically, mathematically or in a like manner which reveals dominant linear features and/or parallax in the image(s) of the area.
39. An apparatus for detecting objects in an area as claimed in any one of claims 34 to 38 wherein the microprocessor is further adapted to threshold the image(s).
40. An apparatus for detecting objects in an area as claimed in any one of claims 33 to 39 wherein the microprocessor may be in the form of a solid state, optical or the like device.
41. An apparatus for detecting objects in an area as claimed in any one of claims 34 to 40 wherein a single camera is used and the apparatus includes an optical arm and reflection means adapted to relay an image from a viewing point that is displaced from the physical location of the camera.
42. An apparatus for detecting objects in an area as claimed in any one of claims 34 to 41 wherein the collection of two or more images may be effected by optical means including prisms, coherent optical fibre guides, and the like or alternatively the imaging means themselves may be translated or suitably displaced.
43. An apparatus for detecting objects in an area as claimed in any one of claims 34 to 42 wherein there may be artificial features added to aid the microprocessor in highlighting substantially normal dominant features of the image(s).
44. An apparatus for detecting objects in an area as claimed in any one of claims 34 to 42 wherein there may also be an input means, the input means enabling a user to input the location of normal dominant features into the microprocessor.
PCT/NZ2000/000013 1999-02-11 2000-02-11 Obstruction detection system WO2000047511A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
AU27019/00A AU2701900A (en) 1999-02-11 2000-02-11 Obstruction detection system
JP2000598438A JP2003524813A (en) 1999-02-11 2000-02-11 Obstacle detection device
EP00905485A EP1169255A4 (en) 1999-02-11 2000-02-11 Obstruction detection system
CA002362326A CA2362326A1 (en) 1999-02-11 2000-02-11 Obstruction detection system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
NZ33414499 1999-02-11
NZ334144 1999-02-11
NZ50203799 1999-12-23
NZ502037 1999-12-23

Publications (1)

Publication Number Publication Date
WO2000047511A1 true WO2000047511A1 (en) 2000-08-17

Family

ID=26652017

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NZ2000/000013 WO2000047511A1 (en) 1999-02-11 2000-02-11 Obstruction detection system

Country Status (6)

Country Link
EP (1) EP1169255A4 (en)
JP (1) JP2003524813A (en)
CN (1) CN1346327A (en)
AU (1) AU2701900A (en)
CA (1) CA2362326A1 (en)
WO (1) WO2000047511A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002044505A1 (en) * 2000-12-01 2002-06-06 Safenet I Harads Ab Camera supervised motion safety system
US9120646B2 (en) 2009-07-17 2015-09-01 Otis Elevator Company Systems and methods for determining functionality of an automatic door system
US10087048B2 (en) 2016-01-13 2018-10-02 Toshiba Elevator Kabushiki Kaisha Elevator system
EP3499413A1 (en) * 2017-12-15 2019-06-19 Toshiba Elevator Kabushiki Kaisha User detection system
CN111704013A (en) * 2019-03-18 2020-09-25 东芝电梯株式会社 User detection system of elevator
US20210357676A1 (en) * 2020-05-18 2021-11-18 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
CN114697603A (en) * 2022-03-07 2022-07-01 国网山东省电力公司信息通信公司 Meeting place picture detection method and system for video conference
DE102021115280A1 (en) 2021-06-14 2022-12-15 Agtatec Ag Automatic door assembly with sensor device and method for operating such an automatic door assembly

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5048912B2 (en) * 2002-11-06 2012-10-17 インベンテイオ・アクテイエンゲゼルシヤフト Surveillance and moving walkway video camera surveillance
WO2009140793A1 (en) * 2008-05-22 2009-11-26 Otis Elevator Company Video-based system and method of elevator door detection
JP5504881B2 (en) * 2009-12-25 2014-05-28 ソニー株式会社 Arithmetic apparatus, arithmetic method, arithmetic program, and microscope
CN102530690A (en) * 2012-01-07 2012-07-04 广州永日电梯有限公司 Elevator video light curtain system for preventing pinching touch
JP5969149B1 (en) * 2016-01-13 2016-08-17 東芝エレベータ株式会社 Elevator system
JP6092434B1 (en) * 2016-01-13 2017-03-08 東芝エレベータ株式会社 Elevator system
JP6046287B1 (en) * 2016-01-13 2016-12-14 東芝エレベータ株式会社 Elevator system
CN106081776B (en) * 2016-08-22 2018-09-21 日立楼宇技术(广州)有限公司 The method, apparatus and system of elevator safety monitoring
CA3037395A1 (en) 2016-10-03 2018-04-12 Sensotech Inc. Time of flight (tof) based detecting system for an automatic door
JP6742543B2 (en) * 2017-12-28 2020-08-19 三菱電機株式会社 Elevator door equipment
KR102001962B1 (en) * 2018-02-26 2019-07-23 세라에스이 주식회사 Apparatus for control a sliding door
CN108809400B (en) * 2018-03-05 2019-04-30 龙大(深圳)网络科技有限公司 Narrow space network relay system
JP7078461B2 (en) * 2018-06-08 2022-05-31 株式会社日立ビルシステム Elevator system and elevator group management control method
JP6702578B1 (en) * 2019-03-18 2020-06-03 東芝エレベータ株式会社 Elevator user detection system
JP6881853B2 (en) * 2019-08-09 2021-06-02 東芝エレベータ株式会社 Elevator user detection system
GB2589113B (en) 2019-11-20 2021-11-17 Kingsway Enterprises Uk Ltd Pressure monitor
CN111646349B (en) * 2020-06-10 2022-05-06 浙江德亚光电有限公司 Elevator protection method and device based on TOF image
GB202018613D0 (en) 2020-11-26 2021-01-13 Kingsway Enterprises Uk Ltd Anti-ligature device
CN112938719B (en) * 2021-03-09 2024-02-27 陕西省特种设备检验检测研究院 Anti-pinch flexible door for elevator

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5182776A (en) * 1990-03-02 1993-01-26 Hitachi, Ltd. Image processing apparatus having apparatus for correcting the image processing
US5387768A (en) * 1993-09-27 1995-02-07 Otis Elevator Company Elevator passenger detector and door control system which masks portions of a hall image to determine motion and court passengers
DE19522760A1 (en) * 1995-06-27 1997-04-10 Dorma Gmbh & Co Kg Automatic door operating system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2951814B2 (en) * 1993-02-25 1999-09-20 富士通株式会社 Image extraction method
US5410149A (en) * 1993-07-14 1995-04-25 Otis Elevator Company Optical obstruction detector with light barriers having planes of light for controlling automatic doors

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5182776A (en) * 1990-03-02 1993-01-26 Hitachi, Ltd. Image processing apparatus having apparatus for correcting the image processing
US5387768A (en) * 1993-09-27 1995-02-07 Otis Elevator Company Elevator passenger detector and door control system which masks portions of a hall image to determine motion and court passengers
DE19522760A1 (en) * 1995-06-27 1997-04-10 Dorma Gmbh & Co Kg Automatic door operating system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1169255A4 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002044505A1 (en) * 2000-12-01 2002-06-06 Safenet I Harads Ab Camera supervised motion safety system
US9120646B2 (en) 2009-07-17 2015-09-01 Otis Elevator Company Systems and methods for determining functionality of an automatic door system
US10087048B2 (en) 2016-01-13 2018-10-02 Toshiba Elevator Kabushiki Kaisha Elevator system
EP3499413A1 (en) * 2017-12-15 2019-06-19 Toshiba Elevator Kabushiki Kaisha User detection system
CN109928290A (en) * 2017-12-15 2019-06-25 东芝电梯株式会社 User's detection system
US10941019B2 (en) 2017-12-15 2021-03-09 Toshiba Elevator Kabushiki Kaisha User detection system and image processing device
CN109928290B (en) * 2017-12-15 2021-08-06 东芝电梯株式会社 User detection system
CN111704013A (en) * 2019-03-18 2020-09-25 东芝电梯株式会社 User detection system of elevator
US11643303B2 (en) 2019-03-18 2023-05-09 Toshiba Elevator Kabushiki Kaisha Elevator passenger detection system
US20210357676A1 (en) * 2020-05-18 2021-11-18 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
DE102021115280A1 (en) 2021-06-14 2022-12-15 Agtatec Ag Automatic door assembly with sensor device and method for operating such an automatic door assembly
CN114697603A (en) * 2022-03-07 2022-07-01 国网山东省电力公司信息通信公司 Meeting place picture detection method and system for video conference

Also Published As

Publication number Publication date
EP1169255A1 (en) 2002-01-09
CN1346327A (en) 2002-04-24
CA2362326A1 (en) 2000-08-17
EP1169255A4 (en) 2005-07-20
AU2701900A (en) 2000-08-29
JP2003524813A (en) 2003-08-19

Similar Documents

Publication Publication Date Title
EP1169255A1 (en) Obstruction detection system
US11232326B2 (en) System and process for detecting, tracking and counting human objects of interest
US7397929B2 (en) Method and apparatus for monitoring a passageway using 3D images
CN108622777B (en) Elevator riding detection system
US7400744B2 (en) Stereo door sensor
US7623674B2 (en) Method and system for enhanced portal security through stereoscopy
Terada et al. A method of counting the passing people by using the stereo images
KR101078474B1 (en) Uncleanness detecting device
Kim et al. Real-time vision-based people counting system for the security door
JP2008273709A (en) Elevator device
JP2010122078A (en) Height detection system, and automatic ticket gate using the same
JP2010262527A (en) Passing person counting device, passing person counting method and passing person counting program
Conrad et al. A real-time people counter
CN100339863C (en) Stereo door sensor
JP2004088599A (en) Image monitoring apparatus and method therefor
JPS63292386A (en) Counting device for moving object
JP6693624B2 (en) Image detection system
JP5069442B2 (en) Human backflow detection system
KR200256086Y1 (en) Apparatus for counting the number of entering object at the gate using image
Kim et al. Robust real-time people tracking system for security

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 00806120.3

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ CZ DE DE DK DK DM EE EE ES FI FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
ENP Entry into the national phase

Ref document number: 2362326

Country of ref document: CA

Ref document number: 2362326

Country of ref document: CA

Kind code of ref document: A

Ref document number: 2000 598438

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2000905485

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 514127

Country of ref document: NZ

Ref document number: 27019/00

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 09926004

Country of ref document: US

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 2000905485

Country of ref document: EP

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWW Wipo information: withdrawn in national office

Ref document number: 2000905485

Country of ref document: EP

DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)