EP0289500A1 - Inspection apparatus - Google Patents

Inspection apparatus

Info

Publication number
EP0289500A1
Authority
EP
European Patent Office
Prior art keywords
features
inspection
interest
articles
inspection apparatus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP87900206A
Other languages
German (de)
French (fr)
Inventor
Emlyn Roy Davies
Adrian Ivor Clive Johnstone
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Research Development Corp UK
Original Assignee
National Research Development Corp UK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB858530928A external-priority patent/GB8530928D0/en
Priority claimed from GB858530929A external-priority patent/GB8530929D0/en
Application filed by National Research Development Corp UK filed Critical National Research Development Corp UK
Publication of EP0289500A1 publication Critical patent/EP0289500A1/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30128Food products

Abstract

The apparatus described permits the successive inspection of a plurality of articles having a common set of features of interest, and includes an edge detector for locating the position of individual articles. The edge detector is fed from a scanning device which captures an image of a region in the vicinity of a located article. The apparatus analyses the significance of the features detected by the scanning device and selectively controls the processing of the derived data in order preferentially to obtain data corresponding to the features of interest, thereby reducing processing time.

Description

INSPECTION APPARATUS

This invention relates to industrial inspection apparatus and, in particular, to apparatus adapted for the rapid inspection of a plurality of components.
Industrial inspection involves the identification, location, counting, scrutiny and measurement of products and components, often under conditions where they are moving at moderate speed along a conveyor system. Products therefore have to be examined in real time, and this imposes a major difficulty, since the processing rate required to analyse the necessary number of pixels is considerably greater than a single conventional serial processor can sustain. In practice the required processing rate is 50-100 times faster than a single central processor unit can cope with, so special hardware has to be designed for the purpose.
We have devised an Image-handling Multi-Processor (IMP) for this purpose. IMP consists of a Versatile Modular Eurocard (VME) bus and crate, which holds a set of special co-processor boards including a frame store for image processing. The co-processors operate rapidly and enable the system to perform industrial inspection tasks in real time. The system is designed in an integrated way, so that the data transactions on the VME bus give only a limited overhead in terms of speed. In addition, the memory sub-systems are designed to operate at the maximum speed of which the VME bus is capable.
According to the present invention there is provided inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest, comprising detector means for the positional location of individual articles, scanning means to capture an image of a region in the vicinity of a located article and analysis means to analyse the significance of features detected by said scanning means, wherein selection means coupled to said positional location means is operable selectively to control the processing of data derived from said scanning means in order preferentially to obtain data relevant to said features of interest.
There is also provided an image location device for use in inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest, comprising detector means for deriving a plurality of electrical signals corresponding to the intensity of the optical signal at a plurality of positions in the vicinity of a predetermined position on an optical image, combining means for combining pairs of symmetrically-weighted groups of said electrical signals in accordance with a predetermined algorithm to derive a pair of electrical signals dependent on the intensity of the optical signals at positions in the vicinity of said predetermined position and difference means to derive an electrical signal dependent on the difference between said pair of electrical signals.
The IMP system is built around a frame store typically containing four image-planes of 128 x 128 bytes. The memory is configured so that access is as rapid as VME protocol will allow, viz 150 nsec. A special technique is used to achieve this throughput.
The purpose of the IMP system is to permit industrial inspection tasks to be undertaken in real time and at reasonable expense. Images from a line-scan or TV (vidicon) camera are digitised and fed to the frame store under computer control: they are then processed rapidly by special processing boards. These contain co-processors which operate under control of a host computer and which can access image data autonomously via the VME bus. An arbitrator on the VME bus mediates between the host processor, the image display hardware and the various co-processors. IMP is a multi-processor system to the extent that a number of processors can perform their various functions concurrently, though only one processor may access the bus at any one time. This is not as severe a constraint as might be thought, since (a) pipelining of processes together with provision of local memory enables use to be made of the parallel processing capability of the co-processors; and (b) many real time industrial inspection applications demand speeds that are high but not so high that careful design of the processors will not permit useful tasks to be done by this sort of system. It should be noted that the design of the co-processor system has in this case not only been careful but also ingenious, so that the capability of IMP is substantially greater than that of many commercially available systems, while being significantly less complex and expensive.
The IMP system finds particular application for food product inspection. Considerable numbers of food products have the characteristic that they are round. In our inspection work we needed to ensure that we could locate such products rapidly and then scrutinise them effectively for crucial features and defects. Food products tend to be mass produced in large numbers on continuously moving product lines. These lines are typically 1-2 metres wide, and contain 12-20 products across the width of a conveyor, which may be moving at 30-50 cm per second. Thus the product flow rate past a given point in the line is typically 20 units per second. This means that one processor of the IMP type will have to cope with one item every 50 msec or so. It will have to find each product in the images it receives, and then scrutinise it. An adequate resolution for a single product will be such that the product occupies a square of side 60 to 80 pixels. Thus location of the product is non-trivial, and scrutiny even less trivial. It is seen that this sort of problem involves a lot of pixel accesses, but this need not be sufficient to 'tie up' the VME bus and cause data-flow problems.
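The throughput figures above can be reproduced with a short calculation (a sketch only; the centre-to-centre spacing of products along the belt is an assumed illustrative value, since the text gives only line width, product count and belt speed):

```python
# Representative line parameters quoted in the text.
products_across = 16        # 12-20 products across the conveyor width
belt_speed_cm_s = 40        # conveyor speed, 30-50 cm/s
product_pitch_cm = 30       # ASSUMED centre-to-centre spacing along the belt

# Rows of products passing a fixed point per second.
rows_per_s = belt_speed_cm_s / product_pitch_cm
flow_rate = rows_per_s * products_across        # units per second
time_budget_ms = 1000.0 / flow_rate             # processing budget per item

print(f"flow rate ~ {flow_rate:.0f} units/s, budget ~ {time_budget_ms:.0f} ms/item")
```

With these values the flow rate is about 21 units per second, consistent with the text's figure of typically 20 units per second and a budget of roughly 50 msec per item.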
Considerable attention has been devoted to the algorithms for locating and scrutinising the product and the types of processor needed to implement them. Although algorithms originally developed for round food product inspection were the first to be implemented in hardware, they have now been generalised so that they are suitable for a much wider variety of products. First, any products which are round or have round holes can be located rapidly and with ease. Second, all such products can be inspected and closely scrutinised. Third, a number of features even of non-circular products can be located directly in their own right, and then products scrutinised in the vicinity of these features, or at related positions. A characteristic of the system is that it relies on recognition of certain features which are capable of triggering the remainder of the system. In our experience, there are a very large number of object features that can simply be located - including holes, specks, dots, characters, corners, line intersections and so on. Thus the IMP system can be used in an extremely wide range of industrial inspection and robotics applications. In addition, there are cases when the object or some feature of it does not need to be located, since its presence and position is already known - e.g. in a robot gripper, at the bottom of a chute, or elsewhere. The IMP system is clearly capable of dealing with this simplified set of situations. Overall, it is seen that the IMP system is in practice of rather general utility. The functions presently performed by the various processors are: (1) edge detection and edge orientation; (2) construction of radial intensity histograms in the vicinity of special features; (3) counting of pixels in various ranges of intensity and with various tags already attached to them; and (4) correlation of thresholded patterns against internally generated parameters such as distance from the feature of interest.
Other features are: (5) construction of angular intensity histograms; (6) construction of overall object intensity histograms; (7) construction of intensity histograms within a specified area; and (8) more general grey-scale sub-image correlation. These functions permit the rapid estimation of product centres, and the measurement of perimeter and area, e.g. that of chocolate cover over a biscuit. A novel feature that permits much of the processing to be carried out rapidly and efficiently is that of autoscan by a particular processor of an area of an image relative to a given starting point, coupled with the use of an internal bus which holds (x,y) and (r,θ) co-ordinates of the currently accessed pixel relative to the starting pixel: this is aided by use of a look-up table on the processor board giving information related to the pixel (x,y) or (r,θ) co-ordinates.
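The construction of a radial intensity histogram around a located feature can be sketched in software as follows (a minimal illustration of the principle, not the hardware processor; the synthetic image, bin count and radius are invented for the example):

```python
import math

def radial_histogram(image, cx, cy, r_max, n_bins):
    """Mean pixel intensity versus radial distance from a reference
    point (cx, cy) - the role played in hardware by the autoscan unit
    and the (r, theta) information on the internal bus."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            r = math.hypot(x - cx, y - cy)     # radial co-ordinate of pixel
            if r < r_max:
                b = int(r * n_bins / r_max)    # which radial bin
                sums[b] += value
                counts[b] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# A synthetic 'round product': bright disc of radius 3 on a dark background.
img = [[255 if math.hypot(x - 4, y - 4) < 3 else 0 for x in range(9)]
       for y in range(9)]
hist = radial_histogram(img, 4, 4, r_max=4.5, n_bins=3)
```

For a well-formed round product the inner bins are uniformly bright and the outer bin dark; comparing such a histogram against that of an ideal object gives a rapid check of size and coverage.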
Because the system is designed to use simple functions that can be relocated in the image at will, it is highly flexible and efficient.
In addition to these advantages, the whole system is simply controlled in a high level language from a PDP-11 type of host processor, for which suitable interfaces have been devised. Alternatively, a 68000 or other host processor can be used. The important point here is that complex assembly level programming is not required at any stage. The only requirement imposed by the co-processors is that they should be initialised by RESET signals, that their operation should be started by suitable START pulses, and that certain information should be provided for them in specific registers and memory locations. Data may be read out of them by the host or other processors, or they may write their data to other locations. Control is via a minimal number of registers which may each be given a high level name for programming purposes. The co-processor functions listed above are special functions which have been found to take the bulk of the effort in practical industrial inspection systems. Since these functions are not fully general, they cannot on their own carry out the whole of an inspection task. Thus 'glue' functionality is required between the given functions. This may be provided by the host processor. If this becomes slow or overloaded, then several software co-processors may be added to the IMP system; at present this is envisaged to be possible via DEC T-11 processors, possibly working in conjunction with a bit-slice. It would not be possible for such processors to perform the whole function of IMP because they would operate too slowly for inspection purposes, unless at least 50 concurrently operating processors were added to the system. The IMP system is intended to overcome the need for this sort of solution by doing the bulk of the processing much more economically: thus a brute-force solution is replaced by a clever solution. However, for generality, one or two additional software co-processors form a useful adjunct to the remainder of the hardware.
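The register-level control style just described - RESET, named parameter registers, a START pulse, then result read-back - can be modelled as follows (a toy sketch; the register names and the function performed by the board are hypothetical, chosen only to show the shape of the protocol):

```python
class CoProcessor:
    """Toy model of the control protocol: the host initialises the
    board, loads named parameter registers, pulses START, and reads
    results back - no assembly-level programming required."""

    def __init__(self):
        self.registers = {}
        self.result = None

    def reset(self):                         # RESET signal
        self.registers.clear()
        self.result = None

    def write_register(self, name, value):   # high-level named registers
        self.registers[name] = value

    def start(self):                         # START pulse
        # Stand-in for the board's hardwired function: here it simply
        # reports the (hypothetical) scan start co-ordinates it was given.
        x0 = self.registers.get("X_START", 0)
        y0 = self.registers.get("Y_START", 0)
        self.result = (x0, y0)

    def read_result(self):                   # host reads the output memory
        return self.result

board = CoProcessor()
board.reset()
board.write_register("X_START", 40)
board.write_register("Y_START", 25)
board.start()
```

The host sees only a handful of named registers per board, which is what makes control from a high-level language straightforward.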
An embodiment of this invention will now be described by way of example with reference to the accompanying drawings in which:-
Figure 1 is a schematic view of an image-handling multiprocessor system;
Figure 2 is a schematic drawing of Processor II from the IMP system of Figure 1;
Figure 3 is a schematic drawing of Processor Module A for Processor II;
Figure 4 is a schematic drawing of Processor Module B for Processor II;
Figure 5 is a schematic drawing of Processor Module C for Processor II;
Figure 6 is a schematic drawing of Processor Module D for Processor III;
Figure 7 is a schematic drawing of Processor Module E for Processor IV;
Figure 8 is a schematic drawing of Processor Module F for Processor IV.
In order to carry out the inspection task, products first have to be located. Advantageously inspection can be carried out in two stages: (1) product location and (2) product scrutiny. Product location may conveniently be carried out using the generalised Hough transform. This may be performed directly on grey-scale images, for objects whose shapes are determined only from a look-up table. In order to perform the Hough transform, edge location must be carried out. Processor I has been designed for this purpose, and is therefore able to deliver data from which special reference points in an image containing product may be found. It is crucial to IMP that starting reference points should be located because of the general nature of the Hough transform. It has been found that special reference points can normally be located in an image containing product, and hence this constitutes a sufficiently general procedure to form the basic starting point for the process of inspection.
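The location stage just described can be illustrated for the round-product case with a simplified generalised Hough transform (a software sketch only; the edge list, radius and bright-disc polarity are invented for the example):

```python
import math
from collections import Counter

def hough_circle_centres(edge_points, radius):
    """Each edge point votes for a candidate centre lying a known
    radius away along its intensity-gradient direction; peaks in the
    vote accumulator are the located product centres."""
    votes = Counter()
    for x, y, theta in edge_points:          # theta: gradient orientation
        cx = round(x + radius * math.cos(theta))
        cy = round(y + radius * math.sin(theta))
        votes[(cx, cy)] += 1
    return votes

# Synthetic edge points on a circle of radius 10 centred at (20, 20).
# For a bright disc on a dark background the gradient at the boundary
# points inward, toward the centre.
edges = []
for k in range(36):
    ang = 2 * math.pi * k / 36
    x = 20 + 10 * math.cos(ang)
    y = 20 + 10 * math.sin(ang)
    edges.append((x, y, ang + math.pi))      # inward gradient direction

centre, n_votes = hough_circle_centres(edges, 10).most_common(1)[0]
```

Because every edge point on the boundary votes for the same accumulator cell, the centre stands out sharply even when some edge points are missing or displaced, which is why high-threshold edge screening (described later) can discard points freely without losing the product.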
Given that certain reference points have been located in an image, the next stage of inspection is to analyse the image in their vicinity, thereby enabling objects to be scrutinised. Processor II carries out this function, using an autoscanning module which scans systematically over the region of interest. Scanning over a sub-area of the image is profitable for two reasons: (1) it speeds up processing by eliminating irrelevant areas; (2) it enables a variety of matching processes to be carried out, since the reference point will be of one or other standard type: this means that comparison can be made with previously compiled data sets. The remainder of Processor II contains modules which aid methods by which matching may be achieved. Processor II contains an internal bus which carries information about the position (x,y) of the current pixel relative to that of the reference point. In particular this internal bus carries information on the (r,θ) co-ordinates of the current pixel relative to the reference point, which it has obtained from a downloadable look-up table held in RAM. Only one reference point may be used at any one time, but the look-up table may contain several sets of additional information, each set being relevant to a particular type of object or feature. For example, the additional information may include data on the ideal size of a particular type of object, or other details. Thus Processor II may scan the image successively looking at several objects of various types, and placing output information on each in its output memory. (The latter arrangement will often be used to save time on the VME bus.)
Use of the internal (x,y)/(r,θ)/information bus (which has been designated as the 'Relative Location' or RL-bus) feeds data to the various Modules within Processor II. In particular, these can build up valuable radial and angular intensity histograms which provide rapid means of comparing the region near a special reference point with that expected for an ideal object. Normal intensity histograms of various ranges can also be generated, to aid this analysis, and correlations can be performed for the distribution of particular intensity values, relative to standard distributions. The emphasis here is on rapidly scrutinising specific regions of the image in various standard ways, in order to save the host processor from the bulk of the processing. The host processor is still permitted to interrogate part of the image when Processor II leaves any detail unclear.
Processor I carries out a vital edge detection function: in fact it is optimised for the computation of Hough transforms. For this purpose it is insufficient to compute and threshold edge magnitude: edge orientation also has to be determined locally. Processor I is designed to determine both edge magnitude and edge orientation. In addition Processor I is used in a novel and unique manner, namely thresholding edge magnitude at a high level. The reasons for this are (a) to find a significantly reduced number of edge points, thereby speeding up the whole IMP system, and (b) to ensure that the edge points that are located are of increased accuracy relative to an average edge point. This strategy is enhanced by incorporating within Processor I a double threshold on the pixel intensity value, so that points which are not half way up the intensity scale are eliminated, thereby speeding up processing further.
Processor I has a more complex autoscan unit than Processor II, since it employs a 3 x 3 pixel window instead of a 1 x 1 window. It achieves additional speed-up by (a) saving input pixel data from the previous two pixels (i.e. it only takes in a new 1 x 3 sub-window for every new pixel), (b) pipelining its computation, and (c) saving its output data in its own local high-speed memory, where it is still accessible by the host processor.
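The column-saving scheme in (a) can be mimicked in software: for each new pixel only one fresh 1 x 3 column is fetched, the other two columns being reused from the previous step (a sketch of the principle; function and variable names are illustrative):

```python
def scan_row_windows(image, y):
    """Yield (x, window) pairs of successive 3x3 windows centred on
    row y, fetching only one new 1x3 column per step - the software
    analogue of Processor I's autoscan optimisation."""
    def column(x):
        return [image[y - 1][x], image[y][x], image[y + 1][x]]

    cols = [column(0), column(1)]               # prime with two columns
    for x in range(2, len(image[0])):
        cols.append(column(x))                  # one new 1x3 sub-window
        yield x - 1, [list(r) for r in zip(*cols)]  # current 3x3 window
        cols.pop(0)                             # discard the oldest column

# Small demonstration image: pixel value encodes (row, column).
img = [[r * 10 + c for c in range(5)] for r in range(4)]
windows = list(scan_row_windows(img, 1))
```

Each step thus performs three pixel fetches instead of nine, a factor-of-three reduction in bus traffic before pipelining is even considered.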
The priority levels on the VME bus, starting with the highest, are:

level 3: executive (host) processor, which acts as system controller
level 2: hardwired and micro-coded co-processors, including Processor I and Processor II
level 1: software co-processors, including DEC T-11 processors
level 0: video display circuitry
Software co-processors are more likely to be intelligent than hardware co-processors, since it is easier to build more complex functionality into software than into hardware. Thus hardware co-processors may not be interruptable and should if necessary be permitted to complete their assigned operations. Therefore they are assigned priority level 2 rather than level 1. Video display circuitry has the lowest priority, and is thus able to display images from VME bus memory only when no other activity is occurring on the bus.
Bus grants to processors at the same level of priority are daisy-chained, and those closest to the arbitrator module have highest resulting priority. During algorithm development, or in an industrial system when speed of processing is not at a premium, high level language notation of pixels is useful. The PPL2 notation for pixels within a 5 x 5 window is:
P15 P14 P13 P12 P11
P16 P4 P3 P2 P10
P17 P5 P0 P1 P9
P18 P6 P7 P8 P24
P19 P20 P21 P22 P23

In order to employ this notation for pixels around the location (X,Y) in an image, it is necessary to perform a re-mapping operation. If this is carried out purely in software it will slow access to a miserable level. In the IMP frame store re-mapping is carried out automatically in a look-up table: IMP actually copes with windows of size up to 7 x 7 by this method, rather than 5 x 5 as in the above example. Clearly, the look-up operation will reduce speed slightly. However, for the two instances cited above, the minimal reduction in speed resulting from direct RAM look-up will be immaterial, and the gains in ease of programming will be very worthwhile. In fact the automatic re-mapping procedure adopted here has the advantage of using absolute rather than indexed addressing, which itself leads to a speed-up in pixel access: this is not seen in currently available commercial systems where (say) a 68000 has direct access to all the image data in a huge block of contiguous memory.
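The re-mapping from PPL2 pixel names to frame-store offsets can be mimicked with a small look-up table built directly from the 5 x 5 layout above (a sketch of the principle only; the hardware uses RAM/EPROM look-up tables addressed by co-ordinate and placement bits, not a dictionary):

```python
# The 5 x 5 PPL2 layout quoted above, with P0 at the window centre.
layout = [
    [15, 14, 13, 12, 11],
    [16,  4,  3,  2, 10],
    [17,  5,  0,  1,  9],
    [18,  6,  7,  8, 24],
    [19, 20, 21, 22, 23],
]

# Look-up table: pixel name n -> (dx, dy) offset from the window centre.
OFFSETS = {layout[r][c]: (c - 2, r - 2)
           for r in range(5) for c in range(5)}

def pixel(image, x, y, n):
    """Return P<n> of the window centred at (x, y)."""
    dx, dy = OFFSETS[n]
    return image[y + dy][x + dx]

# Demonstration image: pixel value encodes (row, column).
img = [[10 * r + c for c in range(5)] for r in range(5)]
centre_val = pixel(img, 2, 2, 0)    # P0 of the window at (2, 2)
```

In the hardware the same effect is obtained in one absolute-addressed table access per axis, which is why the scheme costs so little speed.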
The size of look-up table required for re-mapping is that required to combine X with 6 bits of window placement information, and similarly for Y. For a 128 x 128 frame store this means two look-up tables each having 7 + 6 = 13 address bits and 7 co-ordinate data bits plus 1 over-range data bit; for a 256 x 256 frame store it means two tables each having 8 + 6 = 14 address bits and 8 + 1 data bits. For a 128 x 128 frame store two 8K x 8 EPROMs are sufficient for the purpose.

Advantageously, apparatus in accordance with the invention may be used for processing signals from an edge detector. This enhances speed of processing by rapidly selecting those pixels which provide accurate orientation and location information and ignoring other pixels. The principle used for selecting pixels giving high location and orientation accuracy is to look for those pixels where the intensity gradient is very uniform. This may be achieved by thresholding an intensity gradient uniformity parameter at a high level, or a non-uniformity parameter at a low level. One way is to take two symmetrically-weighted sums of pixels near the pixel location under consideration. Advantageously, these may be re-weighted so that they will be exactly equal if the locality has an intensity gradient which is exactly uniform. The difference of the sums then provides a convenient non-uniformity parameter which may be detected by a threshold detector set to a convenient value.
The method will be illustrated for a 3 x 3 neighbourhood, where the Sobel operator is being used to detect edges. Using the following notation to describe the pixel intensities in the neighbourhood

A B C
D E F
G H I

we estimate the (Sobel) x and y components of intensity gradient as
gx = (C + 2F + I) - (A + 2D + G)
gy = (A + 2B + C) - (G + 2H + I)
and intensity gradient can then be estimated as
g = [gx² + gy²]^½
or by a suitable approximation. Edge orientation may be deduced from the relative values of gx and gy using the arctangent function. Symmetric sums of pixel values that may be used for computing gradient uniformity are
s1 = A + C + G + I
s2 = B + D + F + H
s3 = 4E
Thus possible non-uniformity parameters would be
u1 = s2 - s3
u2 = s3 - s1
u3 = s1 - s2
It will be clear to one skilled in the art that the effect of the uniformity detector is to remove from consideration by a later object locator a good proportion of those edge points where noise is significant or the edge orientation measurement would otherwise be inaccurate, and at the same time to speed up the algorithm. It will, furthermore, operate in any size of neighbourhood, not just the 3 x 3 one illustrated by the Sobel edge detector. It is not restricted to use with a Sobel edge detector - others such as the Prewitt 3 x 3 edge detector may be employed. Other sets of symmetric combinations of sums of weights could be employed, and any linear or non-linear combination of these could be used to detect non-uniformity (e.g. in the 3 x 3 case one could well use |u1| + |u2| + |u3|). In larger neighbourhoods there are many possible uniformity operators.
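The screening scheme set out above can be sketched end-to-end for the 3 x 3 Sobel case (a software illustration of the principle; the ramp and noisy test neighbourhoods, the threshold value, and the use of the maximum absolute parameter as the combination are example choices):

```python
def sobel_with_uniformity(win):
    """win: 3x3 neighbourhood [[A,B,C],[D,E,F],[G,H,I]].
    Returns (gx, gy, (u1, u2, u3)) - the Sobel gradient components
    and the non-uniformity parameters built from symmetric sums."""
    (A, B, C), (D, E, F), (G, H, I) = win
    gx = (C + 2 * F + I) - (A + 2 * D + G)
    gy = (A + 2 * B + C) - (G + 2 * H + I)
    # Symmetric sums: equal to each other when the gradient is uniform.
    s1 = A + C + G + I
    s2 = B + D + F + H
    s3 = 4 * E
    return gx, gy, (s2 - s3, s3 - s1, s1 - s2)

def passes_screen(win, u_thresh):
    """Keep only edge points whose intensity gradient is locally uniform."""
    _, _, (u1, u2, u3) = sobel_with_uniformity(win)
    return max(abs(u1), abs(u2), abs(u3)) <= u_thresh

# A perfectly uniform intensity ramp: all non-uniformity parameters vanish.
ramp = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
# The same ramp with noise on one corner pixel: the screen rejects it.
noisy = [[0, 1, 2], [3, 4, 5], [6, 7, 20]]
```

On the uniform ramp the three parameters are all zero, so the point survives any threshold; the single noisy corner pixel disturbs s1 and is detected, so the point is withheld from the later object locator.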
One function of the uniformity detector is the elimination of noisy locations. Another is the exclusion from consideration of points which are not in their expected position due, for example, to malformation of an object which is being inspected. The uniformity detector would also be able to improve edge orientation accuracy by eliminating cases when a 'step' edge did not pass closely enough through the centre of a neighbourhood. If there is any offset, accuracy deteriorates, but the uniformity operator improves the probability of detecting this.
The uniformity detector may be used to eliminate edge locations (or apparent edge locations) which are subject to noise. This noise can arise within any of the pixels in the neighbourhood: e.g. with a Sobel detector, noise in any one of the pixels in the neighbourhood (except the central one!) would reduce the edge orientation accuracy, and the uniformity operator could attempt to detect this. (Of course, it might sometimes fail - e.g. if two pixels were subject to noise and the effect cancelled itself out in the uniformity operator but not in the edge orientation operator itself.)
It will furthermore be apparent that the screener can act by pre-screening or post-screening or simply in parallel. Parallel screening would be appropriate if special VLSI chips were to be designed, since this would be the fastest operating option. Otherwise pre- or post-screening would be useful for progressively cutting down the data to be handled by a later object detector. Note also that the whole operation could be done in one VLSI chip, rather than having one such chip for edge detection and one for uniformity screening. The important point is that the uniformity detector can be used to speed up the object detection algorithm by reducing the number of points considered.
Other features are that the invention permits use of radial histograms and correlation on the fly. By defining crucial areas in the picture which are relevant, the hardware pre-selects data for removal of redundant information, creating a progressive hierarchy of data extraction and thus speeding up the inspection process. It does not waste time, because it knows where to check any relevant features of the image, which are not necessarily only edge points. The computer recalculates edge points on the basis of stored data and of measurements from the scanner. Because it is aware of the background points it does not take them into consideration. The invention finds particular application in industrial inspection of mass-produced products such as food items (biscuits, pizzas, cakes, pies and other products of uniform size and shape). It may also be used for forensic tasks such as number plate recognition, and for optical mark and optical character recognition. It is not restricted to optical techniques for image capture and may employ other methods, such as sonar, infra-red or tactile sensing.

Claims

1. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest, comprising detector means for the positional location of individual articles, characterised in that it includes scanning means to capture an image of a region in the vicinity of a located article and analysis means to analyse the significance of features detected by said scanning means, wherein selection means coupled to said positional location means is operable selectively to control the processing of data derived from said scanning means in order preferentially to obtain data relevant to said features of interest.
2. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest as claimed in Claim 1 characterised in that said selection means includes an intensity gradient uniformity detector which serves as an intensity threshold detector.
3. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest as claimed in Claim 2 characterised in that it includes rejection means to reject from further analysis the signals derived from certain of said features.
4. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest as claimed in Claim 3 characterised in that the rejection means is adapted to reject signals which are subject to noise greater than a predetermined level.
5. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest as claimed in Claim 3 characterised in that the rejection means is adapted to reject signals which do not conform to a predetermined profile.
6. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest as claimed in Claim 5 characterised in that the profile is determined by the location of a step edge relative to the centre of a population of measured intensities.
7. An image location device for use in inspection apparatus in accordance with any one of the preceding claims characterised in that it comprises detector means for deriving a plurality of electrical signals corresponding to the intensity of the optical signal at a plurality of positions in the vicinity of a predetermined position on an optical image, combining means for combining pairs of symmetrically-weighted groups of said electrical signals in accordance with a predetermined algorithm to derive a pair of electrical signals dependent on the intensity of the optical signals at positions in the vicinity of said predetermined position and difference means to derive an electrical signal dependent on the difference between said pair of electrical signals.
EP87900206A 1985-12-16 1986-12-16 Inspection apparatus Withdrawn EP0289500A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB858530928A GB8530928D0 (en) 1985-12-16 1985-12-16 Image enhancer
GB8530929 1985-12-16
GB8530928 1985-12-16
GB858530929A GB8530929D0 (en) 1985-12-16 1985-12-16 Inspection apparatus

Publications (1)

Publication Number Publication Date
EP0289500A1 (en) 1988-11-09

Family

ID=26290124

Family Applications (1)

Application Number Title Priority Date Filing Date
EP87900206A Withdrawn EP0289500A1 (en) 1985-12-16 1986-12-16 Inspection apparatus

Country Status (4)

Country Link
EP (1) EP0289500A1 (en)
JP (1) JPS63503332A (en)
GB (1) GB2184233A (en)
WO (1) WO1987003719A1 (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0355377B1 (en) * 1988-08-05 1994-01-26 Siemens Aktiengesellschaft Method for testing optically flat electronic component assemblies
EP0364614B1 (en) * 1988-10-17 1993-12-22 Siemens Aktiengesellschaft Method of recognising the spatial position and orientation of already known objects
ES2047528T3 (en) * 1988-10-17 1994-03-01 Siemens Ag PROCEDURE FOR THE TWO-DIMENSIONAL RECOGNITION OF THE POSITION AND ORIENTATION OF BODIES PREVIOUSLY KNOWN.
GB8908507D0 (en) * 1989-04-14 1989-06-01 Fokker Aircraft Bv Method of and apparatus for non-destructive composite laminate characterisation
GB0318733D0 (en) * 2003-08-11 2003-09-10 Icerobotics Ltd Improvements in or relating to milking machines
US9161511B2 (en) 2010-07-06 2015-10-20 Technologies Holdings Corp. Automated rotary milking system
US8800487B2 (en) 2010-08-31 2014-08-12 Technologies Holdings Corp. System and method for controlling the position of a robot carriage based on the position of a milking stall of an adjacent rotary milking platform
US9149018B2 (en) 2010-08-31 2015-10-06 Technologies Holdings Corp. System and method for determining whether to operate a robot in conjunction with a rotary milking platform based on detection of a milking claw
US8720382B2 (en) 2010-08-31 2014-05-13 Technologies Holdings Corp. Vision system for facilitating the automated application of disinfectant to the teats of dairy livestock
US10111401B2 (en) 2010-08-31 2018-10-30 Technologies Holdings Corp. System and method for determining whether to operate a robot in conjunction with a rotary parlor
US8903129B2 (en) 2011-04-28 2014-12-02 Technologies Holdings Corp. System and method for filtering data captured by a 2D camera
US9357744B2 (en) 2011-04-28 2016-06-07 Technologies Holdings Corp. Cleaning system for a milking box stall
US10127446B2 (en) 2011-04-28 2018-11-13 Technologies Holdings Corp. System and method for filtering data captured by a 2D camera
US9265227B2 (en) 2011-04-28 2016-02-23 Technologies Holdings Corp. System and method for improved attachment of a cup to a dairy animal
US9107379B2 (en) 2011-04-28 2015-08-18 Technologies Holdings Corp. Arrangement of milking box stalls
US9049843B2 (en) 2011-04-28 2015-06-09 Technologies Holdings Corp. Milking box with a robotic attacher having a three-dimensional range of motion
US8746176B2 (en) 2011-04-28 2014-06-10 Technologies Holdings Corp. System and method of attaching a cup to a dairy animal according to a sequence
US9043988B2 (en) 2011-04-28 2015-06-02 Technologies Holdings Corp. Milking box with storage area for teat cups
US9681634B2 (en) 2011-04-28 2017-06-20 Technologies Holdings Corp. System and method to determine a teat position using edge detection in rear images of a livestock from two cameras
US8671885B2 (en) 2011-04-28 2014-03-18 Technologies Holdings Corp. Vision system for robotic attacher
US9161512B2 (en) 2011-04-28 2015-10-20 Technologies Holdings Corp. Milking box with robotic attacher comprising an arm that pivots, rotates, and grips
US10357015B2 (en) 2011-04-28 2019-07-23 Technologies Holdings Corp. Robotic arm with double grabber and method of operation
US9058657B2 (en) 2011-04-28 2015-06-16 Technologies Holdings Corp. System and method for filtering data captured by a 3D camera
US9215861B2 (en) 2011-04-28 2015-12-22 Technologies Holdings Corp. Milking box with robotic attacher and backplane for tracking movements of a dairy animal
US9258975B2 (en) 2011-04-28 2016-02-16 Technologies Holdings Corp. Milking box with robotic attacher and vision system
US8885891B2 (en) 2011-04-28 2014-11-11 Technologies Holdings Corp. System and method for analyzing data captured by a three-dimensional camera
US8683946B2 (en) 2011-04-28 2014-04-01 Technologies Holdings Corp. System and method of attaching cups to a dairy animal
US9107378B2 (en) 2011-04-28 2015-08-18 Technologies Holdings Corp. Milking box with robotic attacher

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2050849A5 (en) * 1969-06-26 1971-04-02 Automatisme Cie Gle
BE789062A (en) * 1971-09-23 1973-01-15 Nederlanden Staat AUTOMATIC ADDRESS DETECTION
CA1098209A (en) * 1975-10-20 1981-03-24 Billy J. Tucker Apparatus and method for parts inspection
US4450579A (en) * 1980-06-10 1984-05-22 Fujitsu Limited Recognition method and apparatus
GB2112130B (en) * 1981-12-04 1985-04-03 British Robotic Syst Component identification systems
US4685141A (en) * 1983-12-19 1987-08-04 Ncr Canada Ltd - Ncr Canada Ltee Method and system for finding image data associated with the monetary amount on financial documents
DE3587220T2 (en) * 1984-01-13 1993-07-08 Komatsu Mfg Co Ltd IDENTIFICATION METHOD OF CONTOUR LINES.

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO8703719A1 *

Also Published As

Publication number Publication date
JPS63503332A (en) 1988-12-02
WO1987003719A1 (en) 1987-06-18
GB8630026D0 (en) 1987-01-28
GB2184233A (en) 1987-06-17

Similar Documents

Publication Publication Date Title
EP0289500A1 (en) Inspection apparatus
US5318173A (en) Hole sorting system and method
US5933519A (en) Cytological slide scoring apparatus
US5796868A (en) Object edge point filtering system for machine vision
US5305894A (en) Center shot sorting system and method
US5214744A (en) Method and apparatus for automatically identifying targets in sonar images
CN110599544B (en) Workpiece positioning method and device based on machine vision
NL7902709A (en) AUTOMATIC IMAGE PROCESSOR.
KR20060100376A (en) Method and image processing device for analyzing an object contour image, method and image processing device for detecting an object, industrial vision apparatus, smart camera, image display, security system, and computer program product
US5140444A (en) Image data processor
CN109583306B (en) Bobbin residual yarn detection method based on machine vision
US7551762B2 (en) Method and system for automatic vision inspection and classification of microarray slides
CN116596921B (en) Method and system for sorting incinerator slag
CN113751332A (en) Visual inspection system and method of inspecting parts
EP1218851B1 (en) System and method for locating color and pattern match regions in a target image
US4246570A (en) Optical wand for mechanical character recognition
EP0525318A2 (en) Moving object tracking method
US7881538B2 (en) Efficient detection of constant regions of an image
CN114187269A (en) Method for rapidly detecting surface defect edge of small-sized device
Mabrouk et al. Automated statistical and morphological based gridding methods for noisy microarray image processing
EP0447541B1 (en) Image data processor system and method
Barth et al. Attentive sensing strategy for a multiwindow vision architecture
GB2333628A (en) Detection of contaminants in granular material
JPH067171B2 (en) Moving object detection method
JP3675366B2 (en) Image extraction processing device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19880524

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): CH DE FR LI NL

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19900703

RIN1 Information on inventor provided before grant (corrected)

Inventor name: DAVIES, EMLYN, ROY

Inventor name: JOHNSTONE, ADRIAN, IVOR, CLIVE