GB2232764A - Processing an image signal of a 3-d object - Google Patents

Processing an image signal of a 3-d object

Info

Publication number
GB2232764A
Authority
GB
United Kingdom
Prior art keywords
colour
background
pixels
pixel
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9011678A
Other versions
GB9011678D0 (en)
Inventor
Robin Deirdre Tillett
Nigel James Bruce Mcfarlane
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Research Development Corp UK
Original Assignee
National Research Development Corp UK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Research Development Corp UK filed Critical National Research Development Corp UK
Publication of GB9011678D0 publication Critical patent/GB9011678D0/en
Publication of GB2232764A publication Critical patent/GB2232764A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method of analysing a visual image of a complex three dimensional object such as a plant, avoids misclassification of shadows as plant material. A colour video signal of plant material positioned against a background of substantially uniform colour is analysed to give colour coordinates for each pixel in three dimensional colour space. Each pixel is classified as representing background or plant material, according to whether the colour coordinates of the pixel fall inside or outside, respectively, a substantially cone shaped volume in colour space. The cone shaped volume is determined by extending a cone from a region representing the colour of the background surface, towards the origin of the colour coordinate system. Preferably pixels close to the origin are classified as plant material whether or not they lie within the cone.

Description

METHOD OF AND APPARATUS FOR PROCESSING AN IMAGE SIGNAL

The present invention relates to a method of, and apparatus for, processing a signal representing an image of an object, particularly, but not exclusively, for processing a visual image signal of a three dimensional object of complex shape which casts shadows that make the processing of a visual image signal of the object difficult. The invention has particular, but not exclusive, application in the processing of images of plant material, for example for the guidance of robotic apparatus for use in micropropagation of plant material.
The growth of plants from tissue culture, or micropropagation, is a technique which can produce large numbers of genetically identical plants, perhaps possessing a desirable quality such as disease resistance, in a short time. The tasks of dissecting and transplanting such plants are labour-intensive and repetitive, and the gains in speed, sterility and labour costs which could be achieved by the use of robots make automation an attractive prospect for the fast-expanding micropropagation industry. However, automation is difficult, requiring methods of robot guidance which can deal with the natural variability of biological objects.
Vision processing is a method of sensory control which has been applied to similar problems in other areas of agriculture, such as tomato sorting, fruit harvesting, and plant identification. However these techniques are difficult to apply in micropropagation because of the nature of the plant material and the features, such as nodes of the plant, which need to be located.
The problem to be solved in the context of micropropagation is that the plant material to be analysed is three dimensional, and in the mass of foliage shadows are cast on the background surface. It then becomes difficult to analyse the image: to distinguish between dark areas of plant and dark areas of shadowed background surface, and to distinguish between highlights on the plant and illuminated areas of the background surface.
According to the present invention there is provided a method of processing a signal representing an image of an object formed by electromagnetic radiation, comprising the steps of generating an image signal representing an image of a three dimensional object positioned against a background of a substantially uniform colour, analysing the colour of pixels of the image signal to give coordinates for the pixel representing the colour of the pixels, and classifying pixels as representing background or object, according to whether the coordinates of the pixel fall inside or outside, respectively, a predetermined region in colour space which represents the colour of the background throughout a range of different values of illumination of the background.
Where the term "colour" is used in connection with an image signal, what is meant is the frequency composition of the electromagnetic radiation represented by the image signal, whether the electromagnetic radiation defining that colour is in the visual range, or outside the visual range, for example in the infrared range or ultraviolet range.
In a preferred application, the method includes the step of producing an output signal containing positional information relating to the object and derived from the said classified pixels. Another preferred step comprises producing an output signal which represents an image of the said object containing information derived from the said classified pixels.
In a preferred form, the method includes analysing the colour of pixels to give coordinates for the pixels in three dimensional colour space, and classifying pixels as representing background or object, according to whether the colour coordinates of the pixel fall inside or outside, respectively, a predetermined volume in the said three dimensional colour space which encompasses approximately a region representing the colour of the background. Preferably the said predetermined volume is a substantially cone-shaped volume, encompassing approximately the background colour region, and leading towards the origin of the colour space.
In accordance with a preferred feature, the method includes the step of classifying as object, pixels lying closer to the origin than a predetermined threshold, whether or not such pixels fall within the said predetermined volume in colour space. This means that the said substantially cone-shaped volume in colour space is truncated close to the origin, and any pixels occurring beyond the truncated base of the cone will be identified as object. Even if by making this allocation, some gap in the object is identified as object, such an area will be very close to the object itself, since the dark shadow will be very close to the edge of the material of the object.
In one practical arrangement, the said substantially cone shaped volume can be defined by a cone leading toward the origin from the region in colour space which represents the colour of the background. It is found that if the colour of the background in normal illumination is represented approximately by a sphere in three dimensional colour space, then a cone taken from that sphere towards the origin of the three dimensional colour space, contains colour coordinates corresponding to shadowed areas of the background surface. If in the analysis of the visual image, any pixel which has a colour lying within this cone, is allocated to background, then it is possible to pick out much more clearly gaps or spaces in a complex object which would otherwise be confused with the material of the object itself.
In some uses of the method of the invention, the results of the classification of the pixels may be utilised to produce an output signal representing an outline of the object. In another use, the classification results may be utilised to derive spatial coordinates of locations on the object having predetermined, specified characteristics for example nodes of a plant.
In one preferred practical example of the method, the classifying step may comprise the steps of: (i) calculating for a pixel the value of cos α, where α is the angle between the mean vector in colour space of the background colour and the vector in colour space of the pixel, cos α being given by the equation:

r̄r + ḡg + b̄b = (r̄² + ḡ² + b̄²)^½ (r² + g² + b²)^½ cos α

where r̄, ḡ and b̄ are the red, green and blue colour coordinates of the mean vector of the background colour, and r, g and b are the colour coordinates of the vector of the pixel colour; (ii) determining for a pixel whether or not cos α is greater than cos φ, where φ is a predetermined constant representing the half angle of the said cone shaped volume taken from the region representing the background colour and leading towards the origin; and (iii) classifying the pixel as background if the inequality holds:

cos α > cos φ

In such an arrangement, the method may include classifying as object a pixel having colour coordinates of value below a predetermined intensity threshold, and using the steps specified in the preceding paragraph only for pixels above the predetermined intensity threshold.
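The classifying steps (i) to (iii) above can be sketched in Python (a minimal illustration; the background vector, pixel values and 10° half-angle are made-up examples, not figures from the patent):

```python
import math

def cos_alpha(mean_bg, pixel):
    """Cosine of the angle between the mean background vector and a pixel vector."""
    dot = sum(m * p for m, p in zip(mean_bg, pixel))
    norm_bg = math.sqrt(sum(m * m for m in mean_bg))
    norm_px = math.sqrt(sum(p * p for p in pixel))
    return dot / (norm_bg * norm_px)

def is_background(mean_bg, pixel, half_angle_deg=10.0):
    """Classify a pixel as background if its colour vector lies inside the cone."""
    return cos_alpha(mean_bg, pixel) > math.cos(math.radians(half_angle_deg))

# A bluish mean background vector; a shadowed-background pixel and a plant pixel
mean_bg = (60.0, 80.0, 140.0)
print(is_background(mean_bg, (30, 40, 70)))    # same direction, darker: True
print(is_background(mean_bg, (120, 140, 60)))  # greenish plant pixel: False
```

Note that only the direction of the pixel vector matters: the shadowed pixel (30, 40, 70) is exactly half the background vector and so lies on the cone axis.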
The method may include a learning phase, in which the parameters of a background region in colour space are calculated from a sample of pixels taken to be typical of the image background. In a practical application, this would only have to be performed once for a whole series of images with the same background.
To effect this learning phase, the method may include the preliminary step of calculating the parameters of the said predetermined region representing background, for example the substantially cone shaped region in colour space, by the steps of generating a calibration image signal representing a colour image of the background, analysing a sample of the pixels of the background to give colour coordinates of the background, and calculating from the sample colour coordinates the mean vector angle, and the approximate half angle, of a cone formed by joining to the origin in three dimensional colour space a volume approximately encompassing in colour space the colour coordinates of the sample of the background pixels.
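The learning phase might be sketched as follows (hypothetical sample values; the "smallest enclosing angle plus a margin" rule is one plausible way to set the half-angle, not the patent's prescribed method, which in practice fixed the half-angle as a constant):

```python
import math

def learn_background(sample_pixels, margin_deg=1.0):
    """Learning phase: compute the mean background vector and a half-angle
    just wide enough to enclose every sample pixel, plus a small margin."""
    n = len(sample_pixels)
    mean = tuple(sum(p[i] for p in sample_pixels) / n for i in range(3))

    def angle_deg(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        # Clamp to [-1, 1] to guard against rounding just outside acos's domain
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

    half_angle = max(angle_deg(mean, p) for p in sample_pixels) + margin_deg
    return mean, half_angle

# A sample of bright and shadowed blue-background pixels (illustrative values)
sample = [(60, 80, 140), (30, 41, 69), (90, 118, 212), (62, 78, 142)]
mean, half_angle = learn_background(sample)
```

As the description notes, this only has to be performed once for a whole series of images with the same background.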
Features of the invention which have been set out above with regard to the method of the invention, are applicable equally with regard to apparatus according to the invention.
In particular, there may be provided in accordance with the present invention apparatus for processing a signal representing an image of an object formed by electromagnetic radiation, comprising means for generating an image signal representing an image of a three dimensional object positioned against a background of a substantially uniform colour, and signal processing means for analysing the colour of pixels of the signal to give coordinates for the pixels representing the colour of the pixel, and for classifying pixels as representing background or object, according to whether the coordinates of the pixel fall inside or outside, respectively, a predetermined region in colour space which represents the colour of the background throughout a range of different values of illumination of the background.
An embodiment of the invention will now be described by way of example with reference to the accompanying drawings, in which: Figure 1 is a diagrammatic representation of apparatus embodying the invention for producing a visual image signal of a plant and for utilizing the signal to effect robotic micropropagation of the plant material; Figure 2 is a flow chart of a routine for processing a visual image signal in an embodiment of the invention; Figure 3 is a black and white image of a rhododendron microplant on a blue background surface; Figure 4 is a diagrammatic representation of three dimensional colour space, the dots shown in Figure 4 representing the colours of the pixels in a colour image of a plant against a background; Figures 4a, b and c show scatter diagrams which are two side views, and a plan view, respectively, of three dimensional colour space, showing clusters of pixels representing the plant and background shown in Figure 3; Figures 5a, b and c show scatter diagrams which are two side views, and a plan view, respectively, of three dimensional colour space, showing colour values for an image of a uniformly lit, blue background surface; Figures 6a, b and c are scatter diagrams corresponding to Figures 5a to c, for the same blue background surface, when under an illumination gradient to produce shadow; Figure 7 is a diagrammatic representation of three dimensional colour space, showing a cone of colour values to be used in accordance with the present invention; and Figure 8 is a histogram of cos α, where α for a pixel is the angle between the pixel vector and the mean background vector, calculated for all pixels in the image in Figure 3.
The embodiment of the invention will be described in the context of automatic or semi-automatic operations on plant tissue culture during micropropagation, for example the cutting of nodes from plant material. In Figure 1 a conveyor belt 11 carries plant material beneath a solid-state colour video camera 12 and then on below a Cartesian robotic system indicated generally at 13. A carriage 14 can be moved to any selected location over the conveyor belt 11 by means of motors 15 and 16 controlled by a control device 17. The speed of the conveyor 11 is controlled by a control device 18 so that the position of plant material observed by the camera 12 can be correlated with the subsequent position of the material beneath the carriage 14. The carriage 14 carries an operating device, for example a cutter 19 for cutting plant material, which is operated under control of a control device 20. The control devices 17, 18 and 20 are all controlled by a main micro-computer 21.
The video camera 12 feeds a first colour visual image signal to a signal processing means 22 which embodies the invention, and produces at its output a second image signal, which may for example represent a binary visual image of an outline of the plant material.
The second image signal is fed to a monitor 23, and also to the micro-computer 21. The classification of pixels in accordance with the invention (to be described hereinafter) gives information which is additional to the existing colour information. Therefore, although an improved binary output is one possibility, the greylevel or colour information may also be used and does not have to be discarded.
The operation of the system is either semi-automatic or fully automatic. When semi-automatic, an operator observes the image on the monitor 23, and by operating a keyboard of the micro-computer 21, selects locations on the monitor 23 to be cut by the cutter 19, for example so as to cut out a required node of the plant material. In a fully automatic system, the micro-computer 21 further processes the second image signal, and automatically locates the nodes and instructs the movement of the cutter 19. In some arrangements, the signal processing means 22 and the micro-computer 21 may be provided by a single signal processing computer for effecting all the functions required. In other arrangements, further processing may occur in the signal processing means 22, which will then pass on to the computer 21 only information such as positions of required locations in the image.
Figure 2 illustrates in diagrammatic form a flow chart setting out the main steps of the processing of the image signal in the signal processing means 22.
This chart will be described in detail below, after the general manner of operation of the embodiment has been described.
The video signal from the solid-state colour video camera is divided into its red, green and blue components by means of a PAL decoder. The red, green and blue components, as well as the original black and white video image, are digitised and placed in RAM by a Frame Grabber card. Each complete image consists of four component images, each measuring 256 x 256 pixels and with 255 grey levels available to each pixel. The colour and brightness of each pixel in the colour image are determined by its grey level values in the red, green and blue component images; for example, a red pixel of the colour image would have a higher value in the red component than in the green or blue.
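As a small illustration of how the three component images determine a pixel's colour (toy 2x2 data standing in for the 256 x 256 component images, not values from the described apparatus):

```python
# Each pixel of the colour image is described by its grey levels in the
# red, green and blue component images (illustrative 2x2 images).
red   = [[200,  40], [30,  60]]
green = [[ 50,  40], [30,  80]]
blue  = [[ 50,  40], [30, 140]]

def pixel_colour(row, col):
    """Return the (R, G, B) grey levels of one pixel."""
    return (red[row][col], green[row][col], blue[row][col])

print(pixel_colour(0, 0))  # (200, 50, 50): red dominates, so a red pixel
print(pixel_colour(1, 1))  # (60, 80, 140): blue dominates, so a bluish pixel
```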
In the example to be described the images used are of rhododendron microplants set against a background of a blue surface, this having been found to present the best contrast with the plants, which are mainly red and green. Figure 3 shows a typical black and white image.
A known method of classifying such an image is grey-level thresholding, in which pixels are labelled as 'dark' or 'bright' according to whether their grey-levels are above or below a threshold value. The technique does not work well when there is overlap between the grey-level values of the intended categories, such as plant and background, in the image.
For example, in Figure 3 some of the plant stems are brighter than the background, and some of the background shadows are as dark as the plant, so that the results of grey-level thresholding are many misclassified pixels.
More information, such as colour, is required to correctly classify them, as will be explained in accordance with the invention.
Because a colour image produced by a video camera is fully described by its red, green and blue components, each pixel may be represented as a vector or a point (R,G,B) in colour space. Figure 4 is a diagrammatic representation of three dimensional colour space, defined by Cartesian coordinates R,G,B representing red, green and blue. A totally black pixel would have zero values in all three components and would therefore be found at the point, (0,0,0) in colour space, whilst the brightest white pixel would be found at (255,255,255). Grey pixels, having equal values in all colours, would be found along the line R=G=B.
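The geometry just described (black at the origin, the brightest white at (255,255,255), grey pixels along the line R=G=B) can be illustrated with a small check; the helper name is ours, not the patent's:

```python
def is_grey(pixel, tol=0):
    """A pixel is grey when its red, green and blue values are (nearly) equal,
    i.e. when it lies on the line R = G = B in colour space."""
    r, g, b = pixel
    return max(r, g, b) - min(r, g, b) <= tol

print(is_grey((0, 0, 0)))        # black: on the grey line, True
print(is_grey((255, 255, 255)))  # brightest white: also on the grey line, True
print(is_grey((60, 80, 140)))    # a bluish background pixel: off the line, False
```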
The dots shown in Figure 4 represent a colour image of a plant (for example the plant shown in black and white in Figure 3) against a substantially uniform colour background. The representation shown in Figure 4 is achieved by taking each pixel of a colour image of a plant against a substantially uniform colour background, determining the colour coordinates of each pixel, and then plotting a dot representing that pixel at the appropriate coordinates in colour space. Two groups of dots are shown, indicated at X and Y, which indicate diagrammatically the colours of the background and the plant respectively. However, the groups of dots are purely diagrammatic, and do not correspond accurately in position to the representations of the plant of Figure 3 which are shown in subsequent Figures 4a, 4b and 4c.
A colour image equivalent to Figure 3 can be resolved into its red, green and blue components. The distribution of pixels in colour space can be best seen by means of scatter diagrams, shown in Figures 4a, 4b and 4c, in which the colour components of each pixel have been plotted against each other. The scatter diagrams are side and plan views of the colour space cube with the concentration of pixels at a point represented by the shading.
In the scatter diagrams in Figures 4a, 4b and 4c, two clusters can be seen, one corresponding to the plant and one to the background. The pixels from the bright background form a cluster further from the origin than the predominantly dark plant, because they have large values in all three colours. In the red-blue scatter diagram, Figure 4b, the background cluster is above the line R=B because of its blueness. The same leaning towards the blue axis is seen in the blue-green scatter diagram of Figure 4c, but the red-green scatter diagram of Figure 4a shows little difference between the red and green values within either cluster, because there is little difference between the red and green images in the plant of Figure 3.
Figures 5a, 5b and 5c are scatter diagrams showing the colour space distribution of a plain blue background surface, uniformly lit, and Figures 6a, 6b and 6c show that of the same background surface after shadows have been cast to produce an illumination gradient across the image. The distribution in Figures 5a to c is a small elliptical blob, showing that all the pixel vectors have similar red, green and blue levels close to a particular point in colour space. When shadows are cast on the image, as in Figures 6a to c, the ellipse grows a conical tail towards the origin, as would be expected if the colour vectors have changed their moduli but not their direction. This is consistent with the interpretation that the brightness or intensity of a colour vector is a function of its modulus, and its 'colour' a function of its direction. The conical distributions of the scatter diagrams 6a, b and c are illustrated diagrammatically in Figure 7 in a three dimensional representation of the colour space. The cone is represented at Z, and is of course simplified in shape from the actual experimental results shown in Figures 6a to c.
As seen in Figures 5a to c and 6a to c, pixels of the same colour but different intensities, due to shadows, form a cone-shaped cluster in colour space. In the proposed algorithm of the present embodiment of the invention, pixels are classified as background or object depending on whether their colour vectors fall inside or outside a cone defined by a sample background cluster.
The initial stage of the algorithm is a learning process in which the axial vector and half-angle of the cone are derived from a typical sample of background; in this case a wide strip around the edge of the image.
Since the mean background vector, whether the sample contains shadows or not, must lie on the axis of the cluster, the axial vector can be set to the mean of the sample. It is possible, in principle, to derive a suitable half-angle from the covariance matrix of the sample, but in practical embodiments the best value was found not to vary very much and was simply chosen as a constant before processing.
By taking the scalar product, it can be shown that the angle α between the mean background vector (r̄, ḡ, b̄) and an arbitrary vector (r, g, b) is given by

r̄r + ḡg + b̄b = (r̄² + ḡ² + b̄²)^½ (r² + g² + b²)^½ cos α ...(1)

and that the condition for the vector (r, g, b) to be inside the cone with axial vector (r̄, ḡ, b̄) and half-angle φ is

cos α > cos φ ...(2)

The main part of the algorithm consists of calculating cos α for each pixel from equation (1), comparing it to cos φ as in the inequality (2), and assigning the pixel to the background if inequality (2) holds. The amount of processing per pixel is not very great, because the r̄, ḡ and b̄ terms are all established as constants in the learning phase, the square-root operations can be performed through look-up tables, and cos φ is a pre-processing constant.
Figure 8 shows a histogram of cos α, as calculated from equation (1) for all pixels in the colour image corresponding to the plant image seen in Figure 3. This may be used to determine a practical value of the half angle φ. The peak at cos α = 1, or α = 0, is off the scale and has been removed. It will be noted how the different directions of the two clusters in colour space have been picked out, with a significant valley between them at cos α = 0.980, or α ≈ 11°. This value of α, since it marks the edge of the background cone, would be the best choice for the angle φ in equation (2).
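A rough sketch of deriving the half-angle from the valley of such a histogram (toy pixel data and a simple "emptiest interior bin" heuristic of our own devising; the patent reads the valley off a real histogram such as Figure 8):

```python
import math
from collections import Counter

def cos_alpha(mean_bg, p):
    dot = sum(m * x for m, x in zip(mean_bg, p))
    return dot / (math.sqrt(sum(m * m for m in mean_bg)) *
                  math.sqrt(sum(x * x for x in p)))

def half_angle_from_valley(mean_bg, pixels, bin_width=0.005):
    """Histogram cos(alpha) over all pixels and take the emptiest interior
    bin as the divide between the background and object populations."""
    counts = Counter(round(cos_alpha(mean_bg, p) / bin_width) for p in pixels)
    lo, hi = min(counts), max(counts)
    valley_bin = min(range(lo + 1, hi), key=lambda b: counts.get(b, 0))
    return math.degrees(math.acos(valley_bin * bin_width))

# Background pixels point near (0, 0, 1); object pixels point well away from it
pixels = [(0, 0, 10), (0, 1, 10), (1, 0, 10), (6, 0, 8), (0, 6, 8)]
half_angle = half_angle_from_valley((0.0, 0.0, 1.0), pixels)
```

With real data the histogram would be far denser and the valley correspondingly sharper; this toy version only shows the shape of the calculation.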
Some five images similar to Figure 3 were processed in the same way, and the valley was found to vary by no more than two degrees from α = 10°. The value would not be expected to be the same for images other than those of rhododendron microplants on a blue background, but for such images the variation between them is small, and the calculation can be simplified by fixing φ = 10° as a pre-processing constant.
The choice of the half angle affects the sensitivity of the classification, or what type of errors occur (background classified as object or vice versa). In theory the angle could be chosen from some study of the distribution in colour space of a sample of background pixels, and possibly a sample of object pixels. A very simple strategy would be to choose the smallest angle that still allowed all sample pixels to lie in the cone. A practical method is to try different angles experimentally and assess the segmentation by eye. Another practical method is to produce a histogram (as in Figure 8) and assume that the dip at about 0.98 represents a sensible divide between the two populations of object and background. In some cases it has been found possible to choose one universal value for the half-angle, but this will probably not always be the case. In general the half-angle can either be pre-set (possibly on the basis of experimentation) or set actively as part of the learning procedure.
There will now be described the operation of the steps labeled A to L in the flow chart of Figure 2.
This flow chart shows the main processing steps carried out by the processing means 22 of Figure 1. The first step carried out, at step A in Figure 2, is to derive sets of component signals from the input signal from the camera 12. The processing means is arranged to grab the input signal in the form of three component colour images, and to store it in memory. The next step at B is to reset row and column counters in the processing means 22 (corresponding to the rows and columns of pixels in the input video signal), so that the counters point to the top left hand pixel of each component image.
The centre part of the flow chart of Figure 2, at steps C to G, represents the processing of a pixel which is pointed to by the row and column counters. At step C the grey levels of the pixel in each colour component are fetched from memory. At step D the values are used to calculate cos α, where α is the angle between the vector from the origin to the position of the pixel in colour space and the vector from the origin to the position of the mean background colour in colour space. The value of cos α is tested at step E in accordance with the formula set out in the previous description, and the pixel is marked as background at step F or as object at step G depending on the result of the test at step E.
The next set of steps H to K cause the algorithm to step through the image, processing each pixel in sequence. At step H the column counter is incremented and the content of the counter is tested at step I. If the current row of pixels is not yet completed, the flow passes back to step C to process the next pixel. If the row is complete at step I, then the flow passes to step J where the row counter is incremented and the column counter is reset. When the row counter reaches 256, then all pixels have been processed. This is tested at step K. If the row counter has not reached 256, then the logic flow returns to step C. If all pixels have been processed in the three colour component images which have been grabbed, then the logic flow passes to step L which ends the algorithm for that particular set of three component colour images.
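Steps C to K of the flow chart amount to a double loop over the rows and columns of the component images. A compact sketch (illustrative function and data, with a tiny image in place of the 256 x 256 components):

```python
import math

def segment(red, green, blue, mean_bg, half_angle_deg=10.0):
    """Step through every pixel of the component images (flow-chart steps C
    to K) and mark it 1 for background or 0 for object."""
    cos_phi = math.cos(math.radians(half_angle_deg))
    norm_bg = math.sqrt(sum(c * c for c in mean_bg))
    rows, cols = len(red), len(red[0])
    mask = [[0] * cols for _ in range(rows)]
    for row in range(rows):
        for col in range(cols):
            p = (red[row][col], green[row][col], blue[row][col])
            norm_p = math.sqrt(sum(c * c for c in p)) or 1.0  # avoid 0/0 at origin
            cos_a = sum(m * c for m, c in zip(mean_bg, p)) / (norm_bg * norm_p)
            mask[row][col] = 1 if cos_a > cos_phi else 0
    return mask

# One row of two pixels: a shadowed-background pixel and a plant pixel
mask = segment([[30, 120]], [[40, 140]], [[70, 60]], (60.0, 80.0, 140.0))
print(mask)  # [[1, 0]]
```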
Results obtained with the cone algorithm avoid the misclassification of bright stems as background, and give good results with poor lighting, which may be unavoidable in a practical application. Most of the shadows are correctly classified, but mistakes may be made in the darkest areas of the picture, due to digitisation errors near the colour space origin. At a radius of 7 grey-level values from the origin, an error of one grey-level in the location of a pixel in colour space could make a difference of 8° in the angle α in equation (1). Since the angle between the object and background clusters in Figure 8 is only about 11°, this error would be enough to misclassify pixels from one cluster to the other. The problem is caused by a singularity in equation (1), which does not allow a unique definition of cos α when r=g=b=0, and which makes colour information unreliable near this point. Other non-linear definitions of colour, such as normalised colour, saturation and hue, feature the same singularity.
A modification can therefore be made in the algorithm in which the cone is truncated near its tip as shown diagrammatically in broken lines in Figure 7.
Equation (1) is then used only above a set intensity threshold, for example 30, and any pixels below the threshold are classified as plant. Better results are obtained by this method, improving on those obtained with a full cone.
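The truncated-cone modification might look like this (we read "below a set intensity threshold" as all three colour coordinates below the threshold, consistent with the earlier description; names and values are illustrative):

```python
import math

def classify_truncated(pixel, mean_bg, half_angle_deg=10.0, intensity_threshold=30):
    """Truncated-cone variant: pixels with all colour coordinates below an
    intensity threshold are classified as plant, since colour direction is
    unreliable near the origin; the cone test applies only above it."""
    if max(pixel) < intensity_threshold:
        return "plant"
    dot = sum(m * c for m, c in zip(mean_bg, pixel))
    norms = (math.sqrt(sum(m * m for m in mean_bg)) *
             math.sqrt(sum(c * c for c in pixel)))
    inside_cone = dot / norms > math.cos(math.radians(half_angle_deg))
    return "background" if inside_cone else "plant"

mean_bg = (60.0, 80.0, 140.0)
print(classify_truncated((10, 12, 15), mean_bg))   # dark: plant, even if in the cone
print(classify_truncated((30, 40, 70), mean_bg))   # shadowed background: background
print(classify_truncated((120, 140, 60), mean_bg)) # greenish pixel: plant
```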
It should be noted that the cone shown in Figure 7, and defined by the above algorithm, is defined as extending from the origin out until it reaches the limits of the notional colour 'cube'. Therefore the cone does not end in a disc or sphere. The sample of background pixels forms some shape in colour space.
This cluster of pixels may be approximated by a sphere which then, together with the origin, forms the basis of the cone, but the cone extends beyond the sphere.
Sometimes it may be desirable to have the sphere slightly smaller than the cluster. In such a case some background pixels will be misclassified, but more object pixels may be correctly classified.

Claims (14)

1. A method of processing a signal representing an image of an object formed by electromagnetic radiation, comprising the steps of generating an image signal representing an image of a three dimensional object positioned against a background of a substantially uniform colour, analysing the colour of pixels of the image signal to give coordinates for the pixels representing the colour of the pixels, and classifying pixels as representing background or object, according to whether the coordinates of the pixel fall inside or outside, respectively, a predetermined region in colour space which represents the colour of the background throughout a range of different values of illumination of the background.
2. A method according to claim 1 including the step of producing an output image signal containing positional information relating to the object and derived from the said classified pixels.
3. A method according to claim 1 or 2 including the step of producing an output signal which represents an image of the said object containing information derived from the said classified pixels.
4. A method according to any preceding claim, including analysing the colour of pixels to give coordinates for the pixels in three dimensional colour space, and classifying pixels as representing background or object, according to whether the colour coordinates of the pixel fall inside or outside, respectively, a predetermined volume in the said three dimensional colour space which encompasses approximately a region representing the colour of the background.
5. A method according to claim 4 in which the said predetermined volume is a substantially cone shaped volume, encompassing approximately the background colour region, and leading towards the origin.
6. A method according to claim 4 or 5 including the step of classifying as object, pixels lying closer to the origin than a predetermined threshold, whether or not such pixels fall within the said predetermined volume in colour space.
7. A method according to any preceding claim including utilising the results of the classification of the pixels to produce an output signal representing an outline of the object.
8. A method according to any preceding claim including utilising the results of the classification of the pixels to derive spatial coordinates of locations on the object having predetermined, specified characteristics.
9. A method according to any preceding claim in which the classifying step comprises the steps of: (i) calculating for a pixel the value of cos α, where α is the angle between the mean vector in three dimensional colour space of the background colour and the vector in colour space of the pixel, cos α being given by the equation: r̄r + ḡg + b̄b = (r̄² + ḡ² + b̄²)^½ (r² + g² + b²)^½ cos α, where r̄, ḡ and b̄ are the red, green and blue colour coordinates of the mean vector of the background colour, and r, g and b are the colour coordinates of the vector of the pixel colour, (ii) determining for a pixel whether or not cos α is greater than cos θ, where θ is a predetermined constant representing the half angle of the said cone shaped volume taken from the region representing the background colour and leading towards the origin, and (iii) classifying the pixel as background if the inequality holds: cos α > cos θ.
10. A method according to claim 9 including the step of classifying as object a pixel having colour coordinates of value below a predetermined intensity threshold, and using the steps specified in claim 9 only for pixels above the predetermined intensity threshold.
11. A method according to any preceding claim including the preliminary step of calculating the parameters of the said predetermined region representing background by the steps of: generating a calibration image signal representing a colour image of the background, analysing a sample of the pixels of the background to give colour coordinates of the background, calculating from the sample colour coordinates the mean vector angle, and the approximate half angle of a cone formed by joining to the origin in three dimensional colour space a volume approximately encompassing in colour space the colour coordinates of the sample of the background pixels.
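The preliminary calibration of claim 11 can be sketched as follows, with an illustrative simplification: the half angle is taken directly as the widest angle between the mean vector and any sample pixel, rather than derived from an approximating sphere as the claim describes; the function name and sample values are hypothetical.

```python
import math

def calibrate_cone(samples):
    """Estimate cone parameters from a sample of background pixels.
    Returns the mean background colour vector and cos(theta), where theta
    is the half angle of the smallest cone (apex at the origin, axis along
    the mean vector) containing every sample pixel."""
    n = len(samples)
    mean = tuple(sum(p[i] for p in samples) / n for i in range(3))
    mnorm = math.sqrt(sum(c * c for c in mean))
    min_cos = 1.0
    for r, g, b in samples:
        pnorm = math.sqrt(r * r + g * g + b * b)
        if pnorm == 0.0:
            continue  # skip the singular point at the origin
        cos_a = (mean[0] * r + mean[1] * g + mean[2] * b) / (mnorm * pnorm)
        min_cos = min(min_cos, cos_a)  # track the widest angle seen so far
    return mean, min_cos
```

The returned mean vector and cos(theta) are exactly the two constants required by the classification test of claims 9 and 10.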
12. Apparatus for processing a signal representing an image of an object formed by electromagnetic radiation, comprising means for generating an image signal representing an image of a three dimensional object positioned against a background of a substantially uniform colour, and signal processing means for analysing the colour of pixels of the image signal to give coordinates for the pixels representing the colour of the pixel, and for classifying pixels as representing background or object, according to whether the coordinates of the pixel fall inside or outside, respectively, a predetermined region in colour space which represents the colour of the background throughout a range of different values of illumination of the background.
13. A method of processing a signal representing an image of an object, substantially as hereinbefore described with reference to the accompanying drawings.
14. Apparatus for processing a signal representing an image of an object, substantially as hereinbefore described with reference to the accompanying drawings.
GB9011678A 1989-05-26 1990-05-24 Processing an image signal of a 3-d object Withdrawn GB2232764A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB898912226A GB8912226D0 (en) 1989-05-26 1989-05-26 Method of and apparatus for processing an image signal

Publications (2)

Publication Number Publication Date
GB9011678D0 GB9011678D0 (en) 1990-07-11
GB2232764A true GB2232764A (en) 1990-12-19

Family

ID=10657456

Family Applications (2)

Application Number Title Priority Date Filing Date
GB898912226A Pending GB8912226D0 (en) 1989-05-26 1989-05-26 Method of and apparatus for processing an image signal
GB9011678A Withdrawn GB2232764A (en) 1989-05-26 1990-05-24 Processing an image signal of a 3-d object

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB898912226A Pending GB8912226D0 (en) 1989-05-26 1989-05-26 Method of and apparatus for processing an image signal

Country Status (7)

Country Link
EP (1) EP0473661A1 (en)
JP (1) JPH04505817A (en)
AU (1) AU5732990A (en)
CA (1) CA2056985A1 (en)
GB (2) GB8912226D0 (en)
IL (1) IL94481A0 (en)
WO (1) WO1990014635A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5179598A (en) * 1990-05-31 1993-01-12 Western Atlas International, Inc. Method for identifying and displaying particular features of an object
JP3468877B2 (en) * 1994-10-27 2003-11-17 矢崎総業株式会社 Plant automatic diagnosis method and apparatus
DE102012010912B3 (en) * 2012-06-04 2013-08-14 Yara International Asa A method for contactless determination of the current nutritional status of a crop and for processing this information

Also Published As

Publication number Publication date
GB8912226D0 (en) 1989-07-12
JPH04505817A (en) 1992-10-08
WO1990014635A1 (en) 1990-11-29
EP0473661A1 (en) 1992-03-11
IL94481A0 (en) 1991-03-10
AU5732990A (en) 1990-12-18
GB9011678D0 (en) 1990-07-11
CA2056985A1 (en) 1990-11-27


Legal Events

Date Code Title Description
732 Registration of transactions, instruments or events in the register (sect. 32/1977)
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)