AU2008243688B2 - Method and apparatus for three dimensional image processing and analysis - Google Patents


Info

Publication number
AU2008243688B2
Authority
AU
Australia
Prior art keywords
image
detection zone
image detection
images
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2008243688A
Other versions
AU2008243688A1 (en)
Inventor
Peter Davekumar Stephens
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GRYPHON SYSTEMS ENGINEERING Pty Ltd
HOWE FARMING CO Pty Ltd
Original Assignee
Gryphon Systems Eng Pty Ltd
HOWE FARMING CO Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2007902205A external-priority patent/AU2007902205A0/en
Application filed by Gryphon Systems Eng Pty Ltd, HOWE FARMING CO Pty Ltd filed Critical Gryphon Systems Eng Pty Ltd
Priority to AU2008243688A priority Critical patent/AU2008243688B2/en
Publication of AU2008243688A1 publication Critical patent/AU2008243688A1/en
Application granted granted Critical
Publication of AU2008243688B2 publication Critical patent/AU2008243688B2/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30128Food products

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An image analysis apparatus (10) including first and second image capture devices (18) and (20), each image capture device (18) and (20) having a field of view which includes an image detection zone (23); at least one light source (24) adapted to generate a series of parallel lines onto the image detection zone (23); positioning means (14) for locating an object (12) to be analysed in the image detection zone (23); processing means (11) adapted to cause the first and second image capture devices (18) and (20) to capture first and second images of the image detection zone (23), the first and second images comprised of the parallel lines as distorted by the object (12) located in the image detection zone (23), wherein the processing means (11) is further adapted to analyse the lines of the first and second captured images to establish the extent to which each parallel line has been distorted by the object (12), and the processing means (11) further adapted to establish at least one characteristic of the object (12) by analysis of the extent to which the parallel lines have been distorted.

Description

Method and apparatus for three dimensional image processing and analysis

Field of the invention

The present invention relates to a method and apparatus for image processing. The invention will be described with specific reference to the image processing of bananas, however it will be appreciated that this is a non-limiting application of the invention only.

Background of the invention

Many processes today either require or benefit from image processing. Such processes range from security applications whereby images of people are processed, to production line applications where image processing may be used for quality control purposes, and general sortation applications where image processing may be used to classify and sort objects.

While image processing systems and methods do exist, many making use of laser triangulation to calculate three dimensional coordinates that describe the image being processed, these systems are typically not capable of analysing anything but the simplest of objects in a real time scenario. Such real time scenarios include, for example, the analysis of objects moving along a conveyor belt for sortation or other purposes. Where objects are of complex shape, or where detailed information is required, the problem of real time analysis of the images has typically overwhelmed normal processing equipment. The sorting of fruit such as bananas would, for example, prove difficult to do using currently available image analysis systems, which would struggle with anything but the most rudimentary analysis.

Hands of bananas provide an example of a complex object that is difficult to analyse using current image processing techniques. Currently, after being picked, bananas are classified and sorted on the basis of, inter alia, size. This sortation is required for a number of reasons. For example, in Australia a large proportion of bananas are grown in Queensland and then exported to other states. Due to the danger of spreading fruit fly inherent in exportation, Australian authorities require that any bananas being exported from Queensland must meet certain size and condition requirements. Severe penalties are imposed for non-conformance with these requirements.

Further, bananas are picked while still green and are ripened artificially through the use of ethylene gas. Being able to sort bananas into fruit of substantially similar size is advantageous as fruits of the same general length and thickness can be ripened more evenly. This is desirable as it helps prevent fruit from being over/under ripened, which in turn provides for less wastage and damage.

Finally, consistently ripened and consistently sized fruit are generally more attractive visually. This allows better prices to be obtained from buyers than for inconsistently sized and ripened fruit.

Bananas, particularly when bunched together in a hand, are particularly difficult to analyse due to the fact that individual bananas have a length, diameter and radius of curvature which is often masked by adjacent bananas. The highly irregular shape of bananas, both in an absolute sense and in terms of the variation in shape between individual bananas, and the fact that bananas are conveyed in hands rather than single fruit pieces, create significant difficulties for automatic image analysis. As such, bananas have traditionally been, and continue to be, sorted by hand.
This is a time consuming and difficult process, and workers often tend to be inconsistent in the sortation analysis and processing, as well as damaging a proportion of the fruit being handled during the analysis and processing.

Accordingly, it is desirable to provide an improved method and apparatus for image processing analysis for irregularly shaped objects, such as hands of bananas.

Summary of the invention

In one aspect the present invention provides an image analysis apparatus, the apparatus including: first and second image capture devices, each image capture device having a field of view which includes an image detection zone; at least one light source adapted to generate a series of parallel lines onto the image detection zone; positioning means for locating an object to be analysed in the image detection zone; processing means adapted to cause the first and second image capture devices to capture first and second images of the image detection zone, the first and second images comprised of the parallel lines as distorted by the object located in the image detection zone, wherein the processing means is further adapted to analyse the lines of the first and second captured images to establish the extent to which each parallel line has been distorted by the object, and the processing means further adapted to establish at least one characteristic of the object by analysis of the extent to which the parallel lines have been distorted.

The processing means may be adapted to calculate a plurality of reference line masks during a system calibration phase, each reference line mask relating to the view of an unimpeded parallel line from one of the image capture devices, and wherein the reference line masks are used by the processing means to calculate the extent to which the parallel lines have been distorted in the first and second images.

The first image capture device may view the image detection zone from a first known viewpoint and the second image capture device may view the image detection zone from a second known viewpoint, and the processing means may use the first and second known viewpoints to map pixels of the first and second images to a common set of coordinates.

The processing means may combine the first and second images into a combined image in which pixel intensities are mapped to a z value, the z value corresponding to the vertical distance of the pixel away from a base plane of the image detection zone. The z values may be used by the processing means in an edge detection algorithm to extrapolate one or more edges present in the object.

The at least one characteristic may be selected from the group of the width of an element of the object, the length of an element of the object, and the height of an element of the object.
The object may be a hand of bananas and the element of the object may be a single banana.

The positioning means may be a conveyor. The processing means may be further adapted to control the positioning means to locate the object in the image detection zone.

The image capture devices may be selected from the group of CCD cameras and CMOS sensors. The light source may be a laser. The light source may be coupled to a line generator, the line generator adapted to generate the series of parallel lines onto the image detection zone.

The image detection zone may be housed in an analysis chamber constructed of opaque material.

The apparatus may further include a first detection sensor for detecting when an object is located in the image detection zone and notifying the processing means when an object is located in the image detection zone. The image analysis apparatus may further include a second detection sensor for detecting when no object is present in the image detection zone and notifying the processing means that no object is present in the image detection zone.

In a second aspect the present invention provides a method for using an image analysis apparatus to establish at least one characteristic of an object, the method including the steps of: illuminating an image detection zone with a series of parallel lines from a light source; positioning the object within the image detection zone, causing distortion of at least one of the parallel lines in the series of parallel lines; capturing a first image of the distorted lines and a second image of the distorted lines from first and second image capture devices respectively, each image capture device having a different view of the image detection zone; analysing the lines of the first and second captured images to establish the extent to which each parallel line has been distorted by the object; establishing at least one characteristic of the object by analysis of the extent to which the parallel lines have been distorted.

The method may further include the steps of: calculating a plurality of reference line masks during a calibration phase, each reference line mask relating to the view of an unimpeded parallel line from one of the image capture devices, and using the reference line masks to calculate the extent to which the parallel lines have been distorted in the first and second images.

The first image capture device may view the image detection zone from a first known viewpoint and the second image capture device may view the image detection zone from a second known viewpoint, and the method may include using the first and second known viewpoints to map pixels of the first and second images to a common set of coordinates.

The method may further include the step of combining the first and second images into a combined image in which pixel intensities are mapped to a z value, the z value corresponding to the vertical distance of the pixel away from a base plane of the image detection zone. The z values may be used in an edge detection algorithm to extrapolate one or more edges present in the object.

The at least one characteristic may be selected from the group of the width of an element of the object, the length of an element of the object, and the height of an element of the object. The object may be a hand of bananas and the element of the object may be a single banana.
The image capture devices may be selected from the group of CCD cameras and CMOS sensors.
The light source may be a laser. The light source may be coupled to a line generator, the line generator adapted to generate the series of parallel lines onto the image detection zone.

In a third aspect the present invention provides instructions executable by a computer processor to implement the method described above, and a computer readable storage medium for storing such instructions.

Brief description of the drawings

The present invention will now be described with specific reference to a system and method for the analysis and processing of images of hands of bananas, and sorting bananas on the basis of that analysis. Bananas have been selected as these highlight the ability of the apparatus and method to deal both with highly complex objects and objects that are composite (i.e. the analysis of individual bananas in a hand of bananas rather than a single banana on its own). It will, however, be appreciated that bananas are but one example of the numerous objects and items that the method and apparatus of the present invention may be used to analyse.

In the drawings:

Figure 1A provides a picture of a hand of bananas;
Figure 1B provides a picture of an individual banana;
Figure 2 provides a schematic representation of the banana analysing and sorting apparatus according to the preferred embodiment of the present invention;
Figure 3 provides a flowchart of the steps involved in transporting the bananas through the apparatus;
Figure 4A shows laser lines from the laser line generator 22 illuminating a surface unimpeded by any other objects;
Figure 4B shows laser lines from the laser line generator 22 illuminating a hand of bananas;
Figure 5 provides a flowchart of the steps involved in the image analysis of a hand of bananas according to the preferred embodiment of the present invention;
Figure 6A depicts a reference line mask around an unimpeded laser line;
Figure 6B depicts a locus mask around a laser line;
Figure 7 provides a side view of a simplified system set up with rays from the laser line generator striking a table unimpeded by any object;
Figure 8 shows the image captured by a camera of the scene of figure 7;
Figure 9 provides a side view representation of rays from the laser line generator as shown in figure 7 striking a single object;
Figure 10 shows the image captured by a camera of the scene in figure 9;
Figure 11 provides representations of the camera centre and the plane of a laser line according to the setup depicted in figure 7;
Figure 12 provides a representation of the point shown in figure 11 as seen by the camera;
Figure 13 shows a representation of the image of a single banana after analysis; and
Figure 14 shows how the edges of the banana of figure 13 are calculated.

Detailed description of the embodiments

By way of background, figure 1A provides a picture of a hand of bananas 2 and figure 1B provides a picture of a single banana 6. When analysed and classified manually, a hand 2 of bananas is classified according to the size of the middle or central banana 4. The central banana 4 is measured and these measures are taken as an indicator of the general size of the hand as a whole. The measurements of interest in this particular scenario are the length 7 of the banana, measured along the outer curved surface of the fruit, and the width 8 of the banana, measured at or near to the thickest portion of the fruit.
The present invention will, therefore, be described with reference to the analysis of a hand of bananas with a view to obtaining these particular measurements.

Referring to figure 2, the preferred embodiment of the apparatus 10 will now be described. The hand of bananas 12 which is to be analysed is transferred to a conveyor 14. This transfer may be directly from the back of a fruit picking truck or from another location, for example an upstream conveyor transporting bananas from previous operations. Where the bananas are transferred to the conveyor directly from fruit picking trucks or similar, individual hands of bananas are separated from each other along the conveyor by speeding up a section of the conveyor.

Once the banana hands have been separated along the conveyor 14, each individual hand 12 is conveyed through an analysis chamber 16 which is positioned over a section 15 of the conveyor. Before conveying a hand 12 into the analysis chamber, however, a system control processor 11 awaits a signal indicating the chamber 16 is free and analysis of the hand can be performed.
The section 15 of the conveyor inside the analysis chamber 16 can be stopped, started, or have its speed adjusted independently of the conveyor sections 14 outside of the analysis chamber 16.

Inside the analysis chamber 16 are two image capture devices, entry camera 18 and exit camera 20. The cameras 18 and 20 are mounted at either end of the analysis chamber such that their field of view includes an image detection zone 23. Entry camera 18 is mounted just above the entry to the analysis chamber 16 and exit camera 20 is mounted just above the exit of the chamber 16. In the preferred embodiment charge coupled device (CCD) cameras have been used, however any other type of imaging sensor (such as a complementary metal oxide semiconductor (CMOS) sensor, for example) may be used provided the sensor has sufficient sensitivity and signal-to-noise ratio for the imaging and analysis.

Also housed in the analysis chamber 16 is a light source in the form of a laser line generator 22. The laser line generator 22 is mounted so as to illuminate the image detection zone 23. The laser line generator 22 includes a 50 Watt diode laser 24 with a wavelength of 660 nanometers. It will be appreciated that a light source of different power and wavelength may be used, however the above parameters have been found to provide a suitable image brightness when considering the spectral sensitivity of cameras 18 and 20. The laser 24 is coupled to a line generator 26 which generates 19 parallel lines, each with a known inter-beam angle which allows calculation of the plane in which each laser line lies. The number of lines used is not critical provided sufficient resolution can be obtained for the imaging and analysis as discussed below.

Advantageously, the analysis chamber 16 itself is constructed out of opaque material. This prevents external light shining into the chamber 16, which aids in the analysis operations, as well as providing protection to people outside the chamber from the laser line generator 22.
The analysis chamber 16 is further fitted with a detection sensor 28. The detection sensor 28 is mounted on the side of conveyor section 15 and detects when a hand of bananas is in the image detection zone 23. Once the bananas 12 are in the image detection zone 23, the detection sensor 28 detects this and sends a signal to the system control processor 11. Once the system control processor 11 receives the signal from the detection sensor 28, the control processor 11 triggers the laser line generator 22 to flash as well as the simultaneous acquisition of images from cameras 18 and 20.

Once images have been acquired by cameras 18 and 20 the system control processor can perform the necessary image processing and analysis (as discussed below) to calculate dimensions of the fruit in the hand of bananas 12. Based on these dimensions the controller 11 then classifies the hand 12. After the required images have been acquired by cameras 18 and 20 the hand 12 travels along the conveyor section 15 to exit the analysis chamber 16 and rejoin the main section of the conveyor 14.

In order to provide the above mentioned signal which indicates whether the next hand of bananas can be conveyed into the detection chamber 16, a further sensor may be added at the chamber exit which signals the system control processor 11 when the hand leaves the chamber 16. Alternatively, the signal may be provided as soon as the required images have been acquired and processed by cameras 18 and 20.

After exiting the chamber 16 the hand 12 is sorted by conveying the hand 12 to a sortation conveyor (not shown) with lanes running off to either side. Each of the side lanes is set up to accept fruit belonging to a particular classification, and a sortation controller (which may be the same controller as the system control processor 11 or a separate controller) queries the classification ascribed to the hand 12 by the system control processor 11 and uses this classification to direct the hand 12 along the appropriate side lane. The sortation controller may determine the correct lane, for example, by reference to a sortation table which defines which lane a particular classification has been assigned to.
By way of a summary, figure 3 provides an overview 40 of the general work flow of the sorting apparatus of the preferred embodiment. The hand 12 to be analysed and sorted is first transferred to the conveyor 14 (step 42). The hand 12 of bananas is then separated (step 44) along the conveyor 14 and conveyed towards the analysis chamber 16. When a hand 12 of bananas arrives at the analysis chamber 16 (and the analysis chamber 16 is free) the system control processor 11 sends a signal (step 46) to convey the hand 12 through the analysis chamber 16. Once the hand 12 is in the chamber 16 the detection sensor 28 detects the position of the bananas 12 (step 48) when they reach the image detection zone and signals the system control processor 11. The system control processor 11 then triggers the laser line generator 22 and both cameras 18 and 20 to acquire the required images (step 50), and then analyses and classifies (step 52) the hand 12 based on the captured images. The hand 12 is then conveyed out of the analysis chamber 16 (step 54) before being sorted (step 56) according to the classification determined by the system control processor 11.

System calibration, image processing, and image analysis

1. Summary

Referring now to figures 4 and 5, the method 60 by which the images acquired by cameras 18 and 20 are processed will be described. As discussed above, the two measurements of primary interest for the purpose of this example are the length (measured along the outer curved surface) and width (measured at or towards the middle) of the middle banana of the hand 12.

Figure 4A shows laser lines from the laser line generator 22 illuminating a surface unimpeded by any other objects. Figure 4B shows laser lines from the laser line generator 22 illuminating a hand of bananas. As can be seen, the laser line generator 22 illuminates a hand of bananas in the image detection zone 23 with a number of parallel laser lines. It will be noted how, when the parallel lines fall on the hand of bananas, the lines are distorted as viewed by one of the cameras, and it is the extent of that distortion which allows the characteristics of the bananas to be determined by suitable processing means.

By way of summary of the image acquisition and processing, the images acquired by cameras 18 and 20 are processed and then combined to obtain a single two dimensional image. In this combined image pixel intensities are mapped to a z value. Edge detection algorithms as are known in the art can then be used to find the edges and/or tops of the fruit, and from the detected edges the required dimensional measurements can be extrapolated.

In order to combine the images acquired from cameras 18 and 20 in a useful way, and as described above, the cameras 18 and 20 are calibrated such that each camera has a different view of the image detection zone 23. Despite this different view, the system control processor 11 initialises the cameras 18 and 20 such that the view each camera has of the image detection zone 23 (and therefore any features within that detection zone) is mapped to coordinates based on the same real world coordinate system. After cameras 18 and 20 have acquired an image of the bananas illuminated by the parallel laser lines in the image detection zone 23, the processor 11 calculates the source plane of each line in the image in real world coordinates. Once the plane of each line has been determined the processor 11 can then determine the three dimensional coordinates of each of the parallel laser lines in the image. A single three-dimensional image is then formed based on the presence of an object in the image detection zone 23, which serves to occlude the view of one or the other of cameras 18 and 20 of the image detection zone 23.

A significant problem in this form of laser triangulation is the marrying of object lines to their source. In the preferred embodiment the line generator 26 splits the beam of the laser 24 into nineteen lines, and when an object is placed in the image detection zone 23 of the chamber 16 an image consisting of these laser lines is formed. By reliably associating each laser line in the image with the source of that laser line, the real world dimensions of each laser line are able to be calculated.
Referring now to figure 5, the steps involved in the image analysis 60 will now be described in detail.

2. System calibration

In order to provide for the efficient processing of the images captured by cameras 18 and 20, a number of system calibration calculations are performed. These include:

1. determination of camera calibration matrices;
2. calculation of planes corresponding to each laser line;
3. generation of reference line masks; and
4. generation of locus masks.

While these steps have been described as calibration steps to be done prior to image acquisition, it will be appreciated that the steps may be performed (or re-performed) at other times during the procedure.

2.1 Determination of camera calibration matrices

The data calculated during the system calibration include camera calibration matrices for each camera 18 and 20. The calculation of these matrices may be achieved, for example, according to the methods described in Shapiro and Stockman, Computer Vision, or in Hartley and Zisserman, Multiple View Geometry. Calculation of the calibration matrices in turn allows the determination of coordinates corresponding to each camera's centre - a three-dimensional point that does not exist on the image plane but through which all rays of light must pass before ending up on the image plane.

2.2 Calculation of planes corresponding to each laser line

Equations of the planes that correspond to each of the laser lines are also calculated and stored at system calibration. These planes may be calculated using known geometrical techniques.

2.3 Determination of reference line masks

In order to create the reference line masks, reference images of the image detection zone 23 are captured by cameras 18 and 20. For these reference images the image detection zone 23 is left empty, thus providing images which show the expected location of unimpeded laser lines in the image detection zone 23. For each unimpeded laser line a reference line mask is then calculated. In the case of nineteen laser lines, nineteen reference line masks are obtained for each camera. Figure 6A depicts a reference line mask 90 around an unimpeded laser line 92. Once calculated, the reference line masks are stored in an array for use during the image analysis as discussed below.

2.4 Determination of locus masks

To further aid processing, the image detection zone 23 is configured to deal specifically with the measurement of objects of a specified maximum height. For example, it may be specified that only objects up to 250 mm high will need to be measured. A block of the specified height is placed in the image detection zone 23 and images of the block are captured by cameras 18 and 20. From these images the locus of particles for each laser reference line is determined. The rectangle corresponding to the locus of particles originating from a particular laser line is determined and this is used to calculate the locus mask for each line. In the case of nineteen lines, nineteen locus masks are obtained for each camera. Figure 6B depicts a locus mask 94 around a laser line 96 impeded by a block of the specified maximum height.

If desired, the locus mask may be calculated without placing a physical block in the image detection zone 23. In this case the locus mask is calculated by using the specified height and the camera view angles. Once calculated, the locus masks are also stored in an array for use during the image analysis.

As discussed in more detail below, once an image has been captured by a camera that image is analysed iteratively using each of the locus masks as a sub-image in turn. Within the sub-image, any image particle that corresponds to an unimpeded laser line is interpreted as an indication that no object impeded that part of the laser line and therefore there is nothing of interest to analyse in that part of the image. In other words, particles that correspond to unimpeded laser lines are used to create exclusion corridors within the sub-image that do not require analysis.
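By way of non-limiting illustration, the calibration products described in sections 2.1 to 2.4 can be sketched in a few lines of Python using the NumPy and OpenCV libraries. The sketch is an editor's assumption rather than part of the specification: it recovers a camera centre from a 3x4 calibration (projection) matrix and builds per-line reference masks from an image of the empty detection zone; the function names, the Otsu threshold and the dilation width are illustrative choices only.

```python
import numpy as np
import cv2

def camera_centre(P):
    """Camera centre from a 3x4 projection matrix P.

    The centre is the right null vector of P (P @ C = 0): the single
    3D point through which every ray of light passes before reaching
    the image plane (section 2.1).
    """
    _, _, vt = np.linalg.svd(P)
    c = vt[-1]                       # homogeneous null vector
    return c[:3] / c[3]              # inhomogeneous 3D coordinates

def reference_line_masks(empty_scene, n_lines=19, dilate_px=5):
    """Per-line reference masks from an image of the empty zone (section 2.3).

    Assumes an 8-bit greyscale image already rotated so the laser lines
    are near-vertical; each bright connected component is treated as one
    unimpeded line and dilated to give the mask a tolerance band.
    """
    _, binary = cv2.threshold(empty_scene, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    keep = sorted(range(1, n), key=lambda i: -stats[i, cv2.CC_STAT_AREA])[:n_lines]
    keep.sort(key=lambda i: stats[i, cv2.CC_STAT_LEFT])   # left-to-right order
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    return [cv2.dilate((labels == i).astype(np.uint8) * 255, kernel)
            for i in keep]
```

The locus masks of section 2.4 could be produced in the same way from the image of the maximum-height block (or synthesised from the specified height and camera view angles), giving one reference mask and one locus mask per laser line for each camera.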
3. Image acquisition

As previously described, when a hand of bananas 12 enters the analysis chamber 16, the detection sensor 28 detects (step 62) when the hand 12 is in the image detection zone 23 and signals the control processor 11 accordingly. The control processor 11 then triggers the laser line generator 22 to be operated (step 64), just prior to triggering the simultaneous image acquisition from cameras 18 and 20 (step 66). From this an entry camera image from camera 18 and an exit camera image from camera 20 are obtained.

After the images have been acquired by cameras 18 and 20 the laser line generator 22 is turned off. This is done to reduce the risk of laser exposure and also serves to lengthen the service life of the laser diode.

The exposure parameters such as shutter speed and aperture for cameras 18 and 20 are selected to provide the maximum depth of field and image sharpness of the parallel laser lines. As an example, appropriate settings are an aperture of f2 with an exposure time of 1.24 milliseconds.

4. Image preprocessing
In order to accurately associate each particle in the scene images captured by each camera with the corresponding laser line source the following algorithm is used: Association of scene image particles with source laser lines (step 74) Calculation of three dimensional coordinates (step 76) 20 Use of three dimensional coordinates to combine entry and exit camera images (step 78) Calculation of object dimensions (step 80) Each of these steps will now be discussed in detail.
WO 2008/131474 PCT/AU2008/000538 17 5.1 Association of Scene Image Particles with Source Laser Lines (Step 74) In this step each of the camera images are analysed against the reference line masks and locus masks. The result of this step of the analysis is the generation of a total of N (where N is the number of reference laser lines) sub-images, each of which contains 5 only particles originating from a particular laser line, n. This step allows for the reliable association of an image particle with its corresponding laser source line which, in turn, allows for the determination of real world three-dimensional coordinates. The generation of the sub-images is achieved by the iterative analysis of each individual camera image. Each iteration focuses on identifying particles originating from a 10 particular laser line (referred to as n) according to the following steps: 1. Removal of unimpeded line portions from consideration. Each camera image is analysed against the reference line mask n corresponding to laser n. Any particles found within this mask are determined to be part of the unimpeded reference line and are then ignored as they do not provide relevant 15 information on the image. The height (corresponding in this instance to the width. of the particles and recalling that the preprocessing step involved rotation of the image to vertical) of these particles is then used to create an exclusion corridor in the locus mask for this line n. The result of this is that when the locus mask is analysed, corridors which are loci 20 for particles that could have originated from these unimpeded lines are excluded. This in turn assists in reducing the required image analysis. 2. Iterative identification of the boundary particle. In this step the rightmost or leftmost boundary particle of the scene image is iteratively identified. If the laser line is shifted to the left when impeded, the 25 rightmost particle is identified. If the laser line is shifted to the right, the leftmost particle is identified. The following will be discussed in relation to identifying the rightmost particle as seen from the left camera (in this case entry camera 18).
WO 2008/131474 PCT/AU2008/000538 18 After the unimpeded line portions have been removed as discussed above, the locus mask is analysed for the rightmost particle. Once identified this particle is copied to the output image for line n as the particle must originate from line n. The particle is then deleted from the scene image so as not to interfere with the 5 analysis of the scene for line n+1. Another exclusion corridor corresponding to this particle in light of the maximum height is created in the locus mask as no other particles originating from line n can exist to the left of this particle. This set of steps is repeated till there are no more particles found in the image. 10 In practice, a size threshold may be applied to the identification of particles by filtering very small particles from the image before each iteration. When no particles are returned, the analysis is repeated for line n+1. As can be seen, the result of this phase of the analysis is that the number of particles that need to be analysed in each iteration is decreased which assists in speeding up the 15 image analysis. 5.2 Calculation of 3D coordinates (Step 76) In this next step of the analysis, each of the N output images for each camera is analysed using standard triangulation methods. Appropriate methods may, for example, be similar to those described in Shapiro and Stockman, 'Computer Vision'; Hartley and 20 Zisserman, 'Multiple View Geometry';, or Trucco and Verri, 'Introductory Techniques for 3-D Computer Vision'. Generally speaking, each image is analysed by calculating the intersection of the plane corresponding to laser line n with the line passing through a pixel in the image plane (whose real world three dimensional coordinates are to be calculated) and the camera 25 centre.
WO 2008/131474 PCT/AU2008/000538 19 Once these steps have been repeated for all the N sub images from both cameras the output of the three-dimensional calculations are combined into a single image (step 78) in which intensity values of that image correspond to the real-world height of the objects being imaged. 5 5.3 Calculation of object dimensions (Step 80) Once the single three-dimensional image of the scene has been formed, edge detection methods are used to locate the transitions in height that correspond to the edges present in the object. For example, when considering a greyscale image a point of zero intensity (black) would correspond to a height of zero, and white as the highest part of 10 the object in the workspace. Established edge detection algorithms can then be used to pick out peaks and troughs. Further, discontinuities in the rate of change of greyscale can be used to detect the edges of the fruit. In this case the edges correspond to edge points of individual bananas in' the hand. Using known mathematical algorithms curves can be fitted to these edge points to 15 define an estimation of the actual edge of the fruit. The point at which the two curves intersect then provide the ends of the fruit (see example below). From these edges and end points the edges the length and thickness of the fruit can be calculated by simple measurement based on pixel distance and height (intensity). The knowledge of which laser plane each image particle is from (determined in step 74) 20 provides the basis for employing further processing to make meaningful groupings of particles. For example, by applying the rule that all image particles in an output image for a line n (see section 5.1(2) above) belong to different objects, logical groupings can be made for particles from laser lines n+1 and n-1. The nearest (Cartesian distance in 3 dimensions) particles from lines n+1 and n-1 may be deduced to belong to the same 25 object (fruit) if they also fulfil a threshold distance and quality of fit metrics. These rules would typically be applied from the highest particle in the image downwards to group particles together to identify a particular object of interest (the topmost single banana for example). This grouping of particles can then be used to correctly determine the length and thickness of a fruit.
WO 2008/131474 PCT/AU2008/000538 20 Simplified system example In order to more fully explain and aid in the understanding of the above, use of the system to analyse a single banana will now be described with reference to figures 7 to 15. It will be appreciated that these figures representative only and do not show all 5 features of the system. Figure 7 provides a side view of a simplified system set up with rays from the laser line generator 22 striking a table unimpeded by any object (in the simplified view a limited number of laser lines only have been depicted). Only the left hand camera 18 is depicted. In a real life situation, both entry and exit cameras would typically be required 10 in order to obtain accurate images. Figure 8 shows the image 101 of the unimpeded laser lines on the table as taken by camera 18. Figure 9 provides a side view representation of rays from the laser line generator 22 striking a single banana 100. Figure 10 shows the image of the laser lines falling on the banana 100 as taken by camera 18. As can be easily seen in the representation of the 15 image taken by the camera, the present of the banana 100 causes both breaks 102 in the original reference lines (the unimpeded laser lines shown in figure 8) as well as distortion/displacement 104 of segments of those lines from the perspective of the camera. In other words, because the camera 18 is at an angle of approximately 45 degrees to the image, the image particles 104, as captured by the camera, appear to 20 have shifted to the right. Figure 11 depicts the camera centre 106 (which is determined at calibration of the system in real world coordinates) as well as the plane 108 of a single laser line (also determined at calibration). Also shown is a single particle 110 on the banana 100 (this particle as seen by the camera is shown in figure 12). The real world coordinates of 25 particle 110 are calculated from the image by determination of the intersection of the line 112 passing through the camera centre 106 and the image plane 111 and the plane of the laser line 108.
WO 2008/131474 PCT/AU2008/000538 21 Points in the image taken by the camera (such as particle 110) correspond to interrupted laser lines and are used to calculate the three-dimensional coordinates of those parts of the banana 100 intersected by the laser lines. These particles are then transformed into an image where the x, y coordinates are normalised to real world units 5 and the z coordinate (corresponding to the real world height of the point) is proportional to the intensity of the point. As mentioned above, for camera 18, the left hand line of the set of 19 parallel lines is analysed first (i.e. the leftmost line is line number 1). If that line is unbroken it can be discarded for analysis purposes and analysis can begin on line number 2, and so on. 10 The first broken line (line n) must by definition be located towards the left hand edge of the banana. The processor then discards all those portions of line n which are unbroken, and, in that corridor where the line was broken, the processor shifts attention to the right of the image to locate the image particle from line n that has shifted to the right. Simple 15 trigonometry enables the processor to determine the height of that particle above the surface 115 of the detection zone. Once line n has been fully analysed, the processor discards the entire line, including the shifted image particles, and turns attention to line n+1. Because the entire line n has been discarded there can be no confusion between line n 20 and line n+1. This process then continues for the entire set of lines, from line 1 to line 19 in order to capture a set of three-dimensional data based on the deflected laser lines for the banana located in the detection zone. Figure 13 shows a representation of the image of a single banana after analysis. The intensity of points along each line (corresponding to the height of various points on the 25 line) is low at the ends of the lines and highest in the middle of the lines. The peaks and troughs can be used to identify the boundary of each fruit as well as the highest points on each fruit.
Figure 14 shows how the edges of a fruit can be calculated by fitting curves 114 through the end points of the lines and determining the intersections 116 of those curves. Once the curves 114 have been fitted and the intersections calculated, the width of the fruit can be calculated directly across the lines, and the length can be measured across the end points (where the fitted curves intersect) and the intensity mid points of each line segment.

It will be appreciated that while the invention has been discussed with reference to the analysis of images of bananas and hands, the invention may equally be applied to the analysis of other objects or items. Further, while the apparatus has been described in terms of the objects/items being analysed being carried through the analysis chamber by way of a conveyor, any means may be used to place and remove objects from the chamber.

It will be understood that the invention disclosed and defined in this specification extends to all alternative combinations of two or more of the individual features mentioned or evident from the text or drawings. All of these different combinations constitute various alternative aspects of the invention.

Claims (20)

1. An image analysis apparatus, the apparatus including: first and second image capture devices, each image capture device having a field of view which includes an image detection zone; at least one light source adapted to generate a series of parallel lines onto the image detection zone; positioning means for locating an object to be analysed in the image detection zone; processing means adapted to cause the first and second image capture devices to capture first and second images of the image detection zone, the first and second images comprised of the parallel lines as distorted by the object located in the image detection zone, wherein the processing means is further adapted to analyse the lines of the first and second captured images to establish the extent to which each parallel line has been distorted by the object, and the processing means further adapted to establish at least one characteristic of the object by analysis of the extent to which the parallel lines have been distorted.
2. The image analysis apparatus according to claim 1, wherein the processing means is adapted to calculate a plurality of reference line masks during a system calibration phase, each reference line mask relating to the view of an unimpeded parallel line from one of the image capture devices, and wherein the reference line masks are used by the processing means to calculate the extent to which the parallel lines have been distorted in the first and second images.
3. The image analysis apparatus according to claim 1 or claim 2, wherein the first image capture device views the image detection zone from a first known viewpoint and the second image capture device views the image detection zone from a second known viewpoint, and wherein the processing means uses the first and second known viewpoints to map pixels of the first and second images to a common set of coordinates.
4. The image analysis apparatus according to any one of the preceding claims, wherein the processing means combines the first and second images into a combined image in which pixel intensities are mapped to a z value, the z value corresponding to the vertical distance of the pixel away from a base plane of the image detection zone.
5. The image analysis apparatus according to claim 4, wherein the z values are used by the processing means in an edge detection algorithm to extrapolate one or more edges present in the object.
6. The image analysis apparatus according to any one of the preceding claims, wherein the at least one characteristic is selected from the group of the width of an element of the object, the length of an element of the object, and the height of an element of the object.
7. The image analysis apparatus according to claim 6, wherein the object is a hand of bananas and the element of the object is a single banana.
8. The image analysis apparatus according to any one of the preceding claims, wherein the processing means is further adapted to control the positioning means to locate the object in the image detection zone.
9. The image analysis apparatus according to any one of the preceding claims, wherein the light source is a laser.
10. The image analysis apparatus according to any one of the preceding claims, wherein the light source is coupled to a line generator, the line generator adapted to generate the series of parallel lines onto the image detection zone.
11. The image analysis apparatus according to any one of the preceding claims, further including a first detection sensor for detecting when an object is located in the image detection zone and notifying the processing means when an object is located in the image detection zone.
12. A method for using an image analysis apparatus to establish at least one characteristic of an object, the method including the steps of: illuminating an image detection zone with a series of parallel lines from a light source; positioning the object within the image detection zone, causing distortion of at least one of the parallel lines in the series of parallel lines; capturing a first image of the distorted lines and a second image of the distorted lines from first and second image capture devices respectively, each image capture device having a different view of the image detection zone; analysing the lines of the first and second captured images to establish the extent to which each parallel line has been distorted by the object; establishing at least one characteristic of the object by analysis of the extent to which the parallel lines have been distorted.
13. The method of claim 12 further including the steps of: calculating a plurality of reference line masks during a calibration phase, each reference line mask relating to the view of an unimpeded parallel line from one of the image capture devices, and using the reference line masks to calculate the extent to which the parallel lines have been distorted in the first and second images.
14. The method of either claim 12 or claim 13, wherein the first image capture device views the image detection zone from a first known viewpoint and the second image capture device views the image detection zone from a second known viewpoint, and wherein the method includes using the first and second known viewpoints to map pixels of the first and second images to a common set of coordinates.
15. The method of any one of claims 12 to 14 further including the step of combining the first and second images into a combined image in which pixel intensities are mapped to a z value, the z value corresponding to the vertical distance of the pixel away from a base plane of the image detection zone.
16. The method of claim 15, wherein the z values are used in an edge detection algorithm to extrapolate one or more edges present in the object.
17. The method of any one of claims 12 to 16, wherein the at least one characteristic is selected from the group of the width of an element of the object, the length of an element of the object, and the height of an element of the object.
18. The method of claim 17, wherein the object is a hand of bananas and the element of the object is a single banana.
19. The method of any one of claims 12 to 18, wherein the light source is coupled to a line generator, the line generator adapted to generate the series of parallel lines onto the image detection zone.
20. A computer readable storage medium for storing instructions executable by a computer processor to implement the method of any one of claims 12 to 19.
AU2008243688A 2007-04-26 2008-04-17 Method and apparatus for three dimensional image processing and analysis Ceased AU2008243688B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2008243688A AU2008243688B2 (en) 2007-04-26 2008-04-17 Method and apparatus for three dimensional image processing and analysis

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
AU2007902205A AU2007902205A0 (en) 2007-04-26 Method and apparatus for three dimensional image processing and analysis
AU2007902205 2007-04-26
PCT/AU2008/000538 WO2008131474A1 (en) 2007-04-26 2008-04-17 Method and apparatus for three dimensional image processing and analysis
AU2008243688A AU2008243688B2 (en) 2007-04-26 2008-04-17 Method and apparatus for three dimensional image processing and analysis

Publications (2)

Publication Number Publication Date
AU2008243688A1 AU2008243688A1 (en) 2008-11-06
AU2008243688B2 true AU2008243688B2 (en) 2013-12-12

Family

ID=39925092

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2008243688A Ceased AU2008243688B2 (en) 2007-04-26 2008-04-17 Method and apparatus for three dimensional image processing and analysis

Country Status (2)

Country Link
AU (1) AU2008243688B2 (en)
WO (1) WO2008131474A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE202011051565U1 (en) * 2011-10-06 2011-11-03 Leuze Electronic Gmbh & Co. Kg Optical sensor
JP6151562B2 (en) * 2013-05-24 2017-06-21 株式会社ブレイン Article identification system and its program
JP6230814B2 (en) * 2013-05-24 2017-11-15 株式会社ブレイン Article identification system and its program
CN104315977B (en) * 2014-11-11 2017-06-30 南京航空航天大学 Rubber stopper quality detection device and detection method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6072915A (en) * 1996-01-11 2000-06-06 Ushiodenki Kabushiki Kaisha Process for pattern searching and a device for positioning of a mask to a workpiece
WO2006013681A1 (en) * 2004-08-03 2006-02-09 Bridgestone Corporation Air bladder for safety tire

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2427913B (en) * 2005-06-24 2008-04-02 Aew Delford Systems Ltd Two colour vision system


Also Published As

Publication number Publication date
AU2008243688A1 (en) 2008-11-06
WO2008131474A1 (en) 2008-11-06


Legal Events

Date Code Title Description
DA3 Amendments made section 104

Free format text: THE NATURE OF THE AMENDMENT IS: AMEND THE NAME OF THE INVENTOR FROM STEPHENS, PETER TO STEPHENS, PETER DAVEKUMAR

FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired