MXPA00004739A - Automatic lens inspection system - Google Patents

Automatic lens inspection system

Info

Publication number
MXPA00004739A
MXPA00004739A MXPA/A/2000/004739A
Authority
MX
Mexico
Prior art keywords
lens
image
further characterized
images
steps
Prior art date
Application number
MXPA/A/2000/004739A
Other languages
Spanish (es)
Inventor
Harvey E Rhody
Billy C Leung
David H Xu
Original Assignee
Wesley Jessen Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wesley Jessen Corporation
Publication of MXPA00004739A

Abstract

An automatic system for inspecting contact lenses (15) that are suspended in a saline solution (17) within a lens holder. In the manufacturing process a first electronic image is taken of each lens disposed in its holder. Thereafter a second image is taken of the lens after the holder has been rotated and the solution and lens have moved. The two images are compared and any dark spots on the first image that move with respect to the second image are eliminated as artifacts that are caused by contaminants in the solution or marks on the lens holder. The rim of the lens, optical zone, printed logo and colored iris area of the lens are automatically inspected by a computer program for defects. The lens is rejected if defect features are found in any of the inspected areas.

Description

AUTOMATIC LENS INSPECTION SYSTEM

TECHNICAL FIELD

The invention relates to an automated system for the inspection of lenses, for example contact lenses. In particular, the invention relates to a system in which digital images of a lens are automatically inspected by a computer to determine whether the lens has been satisfactorily manufactured or should be rejected as defective. This automated, computerized lens inspection system analyzes digital image data at the edges of the lens, at the optical center of the lens and in the color-printed regions of the lens. The inspection system is particularly useful for differentiating actual defects from apparent defects caused by dirt or other contaminants deposited on or adjacent to the lens.
BACKGROUND OF THE INVENTION

In the contact lens manufacturing industry, it is known that automated inspection of contact lenses offers the opportunity to reduce production costs while increasing the consistency of the product. Automated systems have been designed to examine and reject lenses with specific types of defects. The purpose of these automated systems has been to inspect for all of the defects that are of importance in the quality control of the product and to eliminate the need for human inspection, except as necessary to verify the ongoing performance of the automated system. It is generally understood that a well-designed automated inspection system is more consistent than human inspection because the automated system does not suffer from fatigue, distraction, or changes in subjective inspection criteria. In addition, an automated system offers the opportunity to gather a large amount of data for statistical analysis, thereby providing a means for objective quality control. This statistical base can be the foundation for continuous improvement in both the manufacturing and the inspection processes. Lens inspection systems for contact lenses may have individual containers that each carry a contact lens in a saline solution. In such systems, each hydrated lens is examined microscopically for defects, for example at the edge and at the optical center of the lens. A lens inspection system of that type is described in U.S. Patent No. 5,443,152, issued August 22, 1995 to Davis. That system uses dark field illumination to inspect a hydrated lens placed in a transparent frustoconical holder, and method steps are presented for inspecting various parts of the lens. In automated lens inspection systems, minute particles of dirt or other contaminants may enter the saline solution of the lens holder or may adhere to the base of the holder. The base of the lens holder can also become scratched in the manufacturing process. Scratches and contaminants on the lens holder and in the saline solution appear as dark spots in the contact lens image. An automatic inspection system can detect these dark spots and identify them as defects in the lenses. It is therefore necessary to provide a means whereby such artifacts can be ignored so that they do not affect the inspection process. The lens inspection system of the invention greatly reduces the number of such artifacts and therefore improves the reliability of the inspection procedure. An automatic contact lens inspection system should be particularly sensitive to defects such as small notches or tears in the periphery of the lens. Known systems have not provided a sufficiently accurate and robust method to detect such defects. The system of the invention employs a particularly suitable method for detecting defects at the edge of the lens. It is also important that a lens inspection system provide a reliable and accurate means to detect defects in the optical center of the lens and in the color printing of a portion of the lens iris. Furthermore, if a company logo or other reference mark is printed on the lens, the inspection system must be able to detect unacceptable defects in the printing of any such logo or mark. The inspection system of the invention achieves reliable and accurate detection of such defects using a brightness value inspection matrix for at least two images of each lens and robust inspection algorithms. The precision, speed and simplicity of the automated system of the invention have not heretofore been demonstrated in the art.
BRIEF DESCRIPTION OF THE INVENTION

In order to achieve the objects of the invention and to overcome the problems of the prior art, the improved automated lens inspection system of the invention records two or more images of each inspected lens. The first image of a lens shows dark spots that can be caused by true defects in the lens, by contaminants on the cuvette holding the lens, or by contaminants in the saline solution in which the lens is immersed. The second lens image is taken after the cuvette and its saline solution have been rotated, so that contaminants in the solution, and scratches or other marks on the cuvette, move relative to the location and position of the lens image. The stored images of the lens are registered and compared with each other. A resulting image is formed from the brighter of each pair of compared pixels (picture elements), and the dark spots caused by moving artifacts are thereby removed from the resulting image.
Any remaining dark spots correspond to true lens defects. The defects are therefore reliably detected, while the artifacts caused by contaminants and by scratches on the cuvette are eliminated. In the operation of the automated lens inspection system of the invention, the ambient light level is detected and the output of a CCD imaging device of an optical inspection station is normalized for variations in ambient light across the image field. This normalization procedure ensures that the image of the lens under inspection is not affected by variations in ambient light. In analyzing the resulting image of the lens, the center of the lens is first located by a computer algorithm that draws chords through the lens and takes the midpoints of the chords to determine the approximate position of the center of the lens. The light intensity of the lens pixels is recorded in polar coordinates. This polar coordinate light intensity information is stored in a matrix in which the light intensity values for increasing angles are listed along the rows of the matrix and the light intensity values for increasing radii are listed along the columns. This matrix presentation of the light intensity data facilitates the analysis of defects. The matrix is referred to hereinafter as the S-matrix. When the S-matrix data for the two images of a lens are compared and combined, the resulting S-matrix contains the pixel light intensities of the image with the artifacts removed. The iris pattern of the lens is analyzed for oversized white spots. The logo printed on the lens is also analyzed to determine whether the letters that make up the logo are completely formed. The optical zone of the lens is analyzed to locate black spots. The lens is rejected if black spots are located in the optical zone or if the ink pattern or logo has serious defects. An edge portion of the S-matrix data for each image of a lens is used to construct polynomial approximations of line segments that model the periphery of the lens. These polynomial approximations overlap and are combined to model the edge. The polynomial model of the edge of each of the images is compared with the actual edge data to detect abrupt transitions in the data. These transitions indicate tears or other edge defects. Edge defects detected in the two images are compared, and a defect is recognized if it is located in the same edge area in each lens image. The advantages and features of the invention will be apparent from the following description, claims and drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a plan view of a star wheel and associated conveyors and cuvettes as used in the automatic lens inspection procedure of the invention. Figure 2 is a partial cross-sectional view of a cuvette holding a hydrated lens in a saline solution. Figure 3 is a partial perspective view of the star wheel and support mechanism of Figure 1. Figure 4a is an illustration of a first bright field image of a hydrated lens placed in a cuvette. Figure 4b is an illustration of a second bright field image of the hydrated lens of Figure 4a after the cuvette has been rotated. Figure 5 is a diagrammatic, partial cross-sectional view of the cuvette assembly, lens inspection camera and strobe light of an inspection station of the lens inspection system of the invention. Figure 6 is an illustration of a bright field image of a hydrated lens placed in a cuvette, with a dark line added to show an edge location technique. Figure 7 is an illustration of the polar coordinate data arranged in an S-matrix for the lens image of Figure 4a. Figure 8 is a graph showing the brightness profile along a radial cut through a dark spot of the lens of Figure 4a.
Figure 9 is a graph showing a surface plot of the S-matrix in a region close to the contour of the lens of Figure 4a. Figure 10 is a chart showing the regions defined in the optical zone for detecting dark spots in that area. Figure 11 is a graph showing the brightness deviation for the optical zone regions shown in Figure 10. Figure 12a is an illustration of the first image of the lens of Figure 4a showing the polar coordinates of a cuvette scratch, a dirt particle, a lens spot defect and the logo. Figure 12b is an illustration of the second image of the lens of Figure 4b showing the polar coordinates of the cuvette scratch, dirt particle, lens spot defect and logo. Figure 13 is a diagrammatic illustration of the S-matrices for the two images of Figures 4a, 12a and 4b, 12b, and of the manner in which the information in these S-matrices is used to eliminate artifacts and detect the actual defects of the lens. Figure 14 illustrates an image of a lens with an iris ink pattern and a printed logo. Figure 15 illustrates an image of a lens that has a large defect in the printed area of the iris. Figure 16 illustrates an S-matrix representation of the light intensity information for the lens of Figure 14. Figure 17 illustrates a contrast-enhanced version of the S-matrix lens data of Figure 16. Figure 18 shows another contrast-enhanced version of the S-matrix lens data of Figure 16. Figure 19 shows another contrast-enhanced version of the S-matrix lens data of Figure 16. Figure 20 shows another contrast-enhanced version of the S-matrix lens data of Figure 16. Figure 21 shows a graph of brightness levels for an image of a lens with different contrast thresholds. Figure 22 shows contrast-enhanced S-matrix information in association with a scanning inspection box. Figure 23 illustrates another contrast-enhanced S-matrix with a 30 by 30 scanning inspection box. Figure 24 illustrates another contrast-enhanced S-matrix with a 60 by 30 scanning inspection box. Figure 25 illustrates another contrast-enhanced S-matrix with a 100 by 100 scanning inspection box. Figure 26 illustrates the Wesley Jessen WJ logo for a clear lens and for a color-printed lens. Figures 27a-e illustrate amplified views of defective portions of the Wesley Jessen logo. Figures 28a-d illustrate examples of defects on the contour of a contact lens.
Figure 29 illustrates a graph of a surface plot of the values of an S-matrix containing a portion of the contour having a defect. Figure 30 is a graph illustrating a radius-angle plane for points in the S-matrix data set at the edge of a lens. Figure 31 is a flowchart of the computer program functions for obtaining a first and a second image of each lens in the lens inspection procedure. Figure 32 is a flowchart of the computer program functions that eliminate artifacts caused by contaminants in the lens solution and marks on the cuvette. Figure 33 is a flowchart of the computer program functions for inspecting the optical zone of the lens. Figure 34 is a flowchart of the computer program functions for inspecting the printed iris area of the lens. Figure 35a is the first part of a flowchart of the computer program functions for inspecting a printed logo on the lens. Figure 35b is the second part of a flowchart of the computer program functions for inspecting a printed logo on the lens. Figure 36 is a flowchart of the computer program functions for inspecting the contour of the lens.
Figure 37 is a flowchart of the computer program functions for locating the center and contour of the lens.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The apparatus

Figure 1 illustrates a plan view of a star wheel 1 and the conveyors 3, 5 and 7 that interact in the lens inspection system to move transparent cuvettes 9 and contact lenses 15 with respect to the optical inspection stations 11 and 13, which take bright field images of the cuvettes and lenses. Figure 2 shows a partial cross-sectional view of a cuvette 9 holding a contact lens 15 in a saline solution 17. The cuvette 9 is made of an optically clear, water-tight material such as glass, polycarbonate or polymethylmethacrylate plastic. The contact lens 15 is a hydrogel such as is known in the art. An outer annular wall 21 of the cuvette forms an overflow container 23 that retains any small amount of the saline solution 17 that may be spilled during movement of the cuvette. The wall 21 also acts as a stop between cuvettes as they move through the inspection procedure. As shown in the figure, the hydrated lens 15 expands to its normal shape and falls to the bottom of the cuvette 9. The lower portion 19 of the cuvette 9 has a concave inner surface with a radius of curvature that allows the contact lens 15 to slide downward under the force of gravity to a position centered in the cuvette. The radius of curvature of the concave inner surface of the cuvette is selected to be as large as possible, to maximize the centering capacity of the cuvette, yet small enough to maintain contact with the center of the hydrated lens under inspection. This structure minimizes the distance a lens can move in the time typically required by the video cameras of the optical inspection stations 11, 13 to obtain a lens image in the inspection procedure. It is highly desirable to restrict the distance the lens can move in a single video frame time, e.g., 1/30 second, to a distance less than the smallest feature of the inspected lens. The optically clear lower portion 19 of the cuvette forms a lens. The concave inner and convex outer surfaces of the lower portion 19 of the cuvette in combination provide optical power and serve to focus light through the hydrated contact lens 15 into the entrance pupil of each camera of the optical inspection stations 11, 13. In the lens inspection system, a saline solution is injected into the cuvettes 9 as they move along the conveyor, and a contact lens is placed in the solution of each cuvette. The cuvettes with hydrated lenses move along the input conveyor 3 until they reach the end of the conveyor and are placed between teeth 25 formed in the star wheel 1, which is indexed, for example in the clockwise direction, at a rate of about one rotational position every 1.6 seconds. Although this indexing rate is satisfactory, a higher or lower rate may be used. Figure 3 illustrates a perspective view of two adjacent teeth 25 of the star wheel, which rotates over a stationary circular platform 27 and thus pushes the cuvettes from the input conveyor 3 in the clockwise direction. As shown in Figures 1 and 3, each pair of teeth forms an area that receives and holds one of the cuvettes. The star wheel 1 of Figure 1 is shown with 17 cuvette retention areas for simplicity of illustration. In practice, a star wheel having 32 such areas is used. A greater or lesser number of such areas may be used, depending on the requirements of the manufacturing system.
A washing station 26 sprays the bottom of each cuvette with a burst of water and compressed air supplied by a mixer 28, which combines filtered, deionized water with filtered air from a compressor 30 that supplies air at a pressure of 1.40 kg/cm2 or any other suitable pressure. The water spray removes contaminants and saline droplets from the bottom of the cuvette. A drying station 29 removes any water droplets that may adhere to the bottom of the cuvette. Station 29 blows dry, filtered air from the conventional air compressor 30 against the bottom of each cuvette. An in-line air filter 32 removes particles from the compressed air, and a moisture trap 34 removes moisture from the air before it is blown onto the cuvette. A conventional filtering-regulating-drying unit, for example a Vortec 208R, can be used to obtain dry, filtered air. The indexing rate of the star wheel and the timing of the washing and drying bursts are controlled by a conventional production-line programmable logic controller (PLC) 36, for example a DL-5 series PLC that is commercially available from the Allen-Bradley Company.
The PLC is configured and operated in a known manner. A rigid cuvette guide (not shown) is mounted to the stationary structure of the star wheel and disposed over the cuvettes in the washing and drying stations so that the air bursts at these stations do not lift the cuvettes out of the star wheel. Holes can be punched through the guide to allow the passage of air while preventing upward movement of the cuvettes. A rectangular piece of rigid plastic or any other suitable material, including metal, can be used for this purpose. In the lens inspection system, a camera of the optical inspection station 11 takes and stores a first magnified digital image of each of the lenses as they pass on the star wheel. Figure 4a shows an example of such an image. The outer circular bright region 31 is the cuvette containing the lens. The dark circular ring 33 is the edge of the lens, and the annular region 35 is printed with a colored iris pattern. The Wesley Jessen WJ logo 37 is printed in the colored iris region. The dark spot 39 is a lens defect. Spot 41 is a dirt particle in the saline solution, and line 43 is a scratch on the cuvette. As shown in Figure 1, a cuvette rotation element 45 on the periphery of the star wheel presses against and lightly adheres to each cuvette as it moves clockwise with the star wheel after its first image is taken. An adhesive tape can be used to impart a clockwise rotational movement to each cuvette as it moves with the star wheel. The rotation of the cuvette causes the saline solution 17 and the lens 15 to rotate inside the cuvette. The cuvette stops rotating after it passes the tape element 45, and the turning movement of the lens and saline solution slows and eventually stops by the time the cuvette reaches the second optical inspection station 13. It has been determined that this settling process can take up to 10 seconds. As a consequence of this settling time, the cuvettes pass through ten positions between the first and second optical inspection stations 11 and 13 after their first lens images are taken. The inspection system is not limited to 10 positions between the inspection stations; more or fewer positions may be used. At the second station 13, a camera takes a second image of each lens after the rotational movement has settled and stopped. Figure 4b shows the second image, in which the lens, the cuvette and the saline solution have moved with respect to their positions in the first image of Figure 4a. It can be seen that the relative positions of the lens defect 39 and the WJ logo 37 are the same, because both are fixed features of the lens. The changed location of the scratch 43 corresponds to the movement of the cuvette, and the changed location of the dirt particle 41 corresponds to the movement of the saline solution. It should now be understood that each cuvette has a first and a second associated image, with the second image having a lens, solution and cuvette displaced with respect to the first image. Particulate contaminants in the saline solution and spots or scratches on the bottom of the cuvette will appear as dark spots in the first image. These spots will move relative to the lens when the cuvette is rotated and will therefore have different positions in the second image, which becomes apparent if this image is oriented so that the contact lenses of the two images are aligned. The relative motion of the dark spots is used to distinguish such spots as artifacts rather than actual defects in the lens.
The artifact differentiation procedure will be explained in more detail after the following discussion of the system apparatus. Figure 5 illustrates a partial, diagrammatic cross-sectional view of each of the optical inspection stations 11 and 13, taken along the center line A-A of Figure 1 in the direction of the arrows. As shown in Figure 5, a strobe light 47 is placed below the circular platform 27 to provide a pulse of high light intensity. A light baffle tube 49 is provided to contain the light intensity pulse of the strobe 47. A model MVS 2020 strobe is commercially available from EG&G Electro-Optics of Salem, Massachusetts and can be used to provide the strobe light pulse. The strobe achieves a stop-action image of the lens 15 within the cuvette 9. That is, the strobe is of sufficient brightness to form an image of the contact lens in less time than it takes any part of the image to move by one pixel on the focal plane when the cuvette moves at the speed of the automated inspection system. The light pulse of the strobe is randomly scattered by a diffuser 51 to provide a pulse of diffuse light. The diffuse light from the strobe 47 and the diffuser 51 passes through a collimating orifice disc 53, which in a preferred embodiment is made of opaque black glass approximately 20 mm in diameter and 2 mm thick, with a grid of about 600,000 holes, each 20 μm in diameter. The disc is commercially available as Part No. 781-0009 from Collimated Holes, Inc. of Campbell, California. The light collimated by the disc 53 passes through a hole 55 formed in the circular platform 27. The light then passes through the lower lens portion 19 of the cuvette. The lens 19 focuses the collimated light through the hydrated contact lens 15 and through an aperture 57 of a digital video camera 59 having at least one lens 61 and an associated CCD imager 63. The cuvette and the collimating disc are preferably separated so that the disc is outside the camera's depth of field and only the focused image of the lens and the cuvette is obtained by the camera. The Kodak Megaplus camera, model XFH, and its control unit are commercially available from the Kodak Company and can be used with a Nikon MicroNikkor AF 60 mm lens for the lens inspection station. The CCD array of this camera provides an image of 1,000 by 1,000 pixels at 30 frames per second. The first and second optical inspection stations, each corresponding to Figure 5, thus take instantaneous strobe images of the contact lenses passing through the stations. As shown in Figure 1, these images are digitized and stored in the random access memory of an inspection computer 64 having, for example, a 200 MHz Pentium Pro processor. This computer is connected to the PLC 36 and controls the times at which the camera images are taken. When the cuvettes leave the second optical inspection station 13, the computer analyzes the first and second images for each lens and, based on this analysis, determines whether the lens has defects and therefore should be rejected. If the automatic analysis determines that the lens should be rejected, the computer 64 activates a solenoid plunger 65 that pushes the cuvette onto the reject conveyor 7. If the lens and the cuvette pass the automated inspection analysis, the cuvette moves clockwise along the star wheel and the computer 64 activates a solenoid plunger 67 that pushes the cuvette onto the pass-through conveyor 5.
Categories of defects

The automated lens inspection system can detect several types of defects and differentiate these defects from artifacts caused by contaminants or scratches on the cuvette. The defects that result in rejection of the lens are:
1.- Dark patches in the optical zone in the central area of the lens. Darkness in this area degrades the performance of the lens.
2.- Gross imperfections in the printed iris pattern. Some types of lenses have an iris pattern printed on an annular ring around the optical zone. If this pattern is incorrectly centered, or poorly printed in any of a variety of ways, it will reduce the attractiveness and functionality of the lens.
3.- Imperfections in printing the logo, particularly on clear lenses. The logo is used as a reference mark by the user to orient the lens. A damaged or missing logo is a defect.
4.- Mechanical defects in the lens material. These include tears, notches, cuts, holes, folds and other problems.
5.- Foreign material attached to the lens. Any type of dirt or "burr" that attaches to the lens can be a functional and safety problem and should be detected.
6.- Notches or cuts in the contour of the lens.
Normalization of illumination

The algorithms that have been developed detect the listed defects at an acceptable level. Each type of defect has a rejection threshold that can be changed in the algorithm to match changes in the detection criteria. Some components of the defect inspection algorithms must be able to perform a reliable analysis even when the illumination of the image is irregular. Irregular lighting may arise from variations in either the lighting system or the optical elements of the imaging system. Illumination variations are typically slow variations across the image, and their effect can be greatly reduced by using a normalization algorithm. The illumination variations can be modeled with a linear function in the coordinates of the image. Let (x, y) represent a point in the plane of the image. A linear illumination model is given by the equation

I(x, y) = ax + by + c

in which (a, b, c) are parameters. If a = b = 0, then the illumination does not change with position and has a uniform brightness given by the value of c. If there is a variation, then a and b give the magnitude of the change with respect to changes in location in the x and y directions, respectively.
The values of the parameters can be calculated by a simple least squares fit to brightness data sampled across the bright regions of the image. For example, pixel brightness measurements in the region of the cuvette outside the lens contour are detected and stored. These values vary if the illumination is not uniform. The values of the parameters of the equation are found by fitting a linear function to the brightness data. Let l_k be the brightness at the sample point (x_k, y_k), k = 1, 2, ..., n. Then the least squares solution for the parameters is found by solving a simple set of three equations in three unknowns:

a g_xx + b g_xy + c g_x = f_x
a g_xy + b g_yy + c g_y = f_y
a g_x + b g_y + c g_0 = f_0

where f_0 = Σ_k l_k, f_x = Σ_k x_k l_k, f_y = Σ_k y_k l_k, g_xx = Σ_k x_k², g_xy = Σ_k x_k y_k, g_yy = Σ_k y_k², g_x = Σ_k x_k, g_y = Σ_k y_k, and g_0 = n. A similar technique can be used to model the illumination variation with a quadratic function. In that case, terms are added for xy, x² and y². This requires solving for six parameters, which is more complicated but similar in technique. In practice, it has been found that linear modeling of the lighting variations is sufficient. It is assumed that the brightness value observed at a point (x, y) is due to illumination passing through a medium with transmission value T(x, y); that is, B(x, y) = T(x, y)·I(x, y). Dividing by a model of the illumination largely removes the variations in the observations due to irregular lighting and keeps the brightness variations due to variations in transmission, which is what is desired in the inspection procedure. In this way, the calculation T'(x, y) = B(x, y)/I(x, y), where I(x, y) is provided by the lighting model, provides a corrected image for further analysis. Normalization is therefore achieved by sampling the illumination at a number of points across the bright region of the image plane. A mathematical function is fitted to the lighting data, and for each point in the image the measured brightness is replaced with a corrected, normalized value determined by the fitted function. The lighting function at any fixed camera location can be expected to change only slowly from image to image. This makes it possible to accumulate the modeling data over a sequence of images as well as over each image. This procedure can be used to dynamically update the lighting model, which provides robustness, and to detect large changes in lighting in particular images, which may indicate a defect in the cuvette. The lighting can also be normalized separately for each optical inspection station, thus avoiding calibration issues and simplifying the normalization procedure in a production environment.
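As a minimal illustrative sketch (not the patent's own code), the linear fit and the division B/I described above might be written as follows in Python with NumPy; the function and variable names are assumptions introduced here for illustration.

    import numpy as np

    def fit_linear_illumination(xs, ys, brightness):
        """Fit I(x, y) = a*x + b*y + c to brightness samples taken from
        the bright cuvette region outside the lens contour."""
        xs = np.asarray(xs, dtype=float)
        ys = np.asarray(ys, dtype=float)
        # Each design-matrix row is [x_k, y_k, 1]; lstsq solves the same
        # normal equations (in the g and f sums) given in the text.
        A = np.column_stack([xs, ys, np.ones_like(xs)])
        (a, b, c), *_ = np.linalg.lstsq(A, np.asarray(brightness, dtype=float), rcond=None)
        return a, b, c

    def normalize_image(image, a, b, c):
        """Compute the corrected image T'(x, y) = B(x, y) / I(x, y)."""
        h, w = image.shape
        ys, xs = np.mgrid[0:h, 0:w]
        model = a * xs + b * ys + c
        return image / np.maximum(model, 1e-6)  # guard against division by zero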
Location of the edge of the lens

The first step in inspecting a lens is to locate it in each of the images. The lens is located by finding several points that may lie on the edge of the lens. The procedure for finding the points is subject to errors due to noisy variations in the image, and it must therefore be followed by a procedure that refines the set of edge points. The accuracy of the procedure increases as the number of points increases. The point search algorithm makes use of the knowledge that the lens edge will be the first dark zone encountered if the pixels are examined starting at one end of the image and moving towards the opposite end. A lens image is shown in Figure 6. In it, the contour of the lens can be seen as a dark ring 33 against a lighter background. The bright disc-shaped area outside the contour is created by light shining through the cuvette containing the lens in its saline solution. The black background 71 in the corners of the image is the region outside the cuvette. To find points on the edge, it is necessary to look for a place where the brightness along a column or row of the CCD image falls quickly after having been at a high level. A fall from a high level is required, rather than merely a low level, in order to pass over the black area outside the cuvette. On a scale of (0, 1), with 0 corresponding to black and 1 corresponding to white, the brightness is expected to change from a low level in the black periphery of the image to a level of 0.7 to 0.9 on moving into the bright area corresponding to the cuvette. The brightness is then expected to fall again, for example to around 0.3, indicating the presence of the edge of the lens. A point "a" on the left end of the lens can be found by detecting the minimum point of the narrow dark region at the contour of the lens. A point "b" on the right end can be found by searching from the right side of the profile towards the center. In Figure 6, the dark horizontal line 73 represents a row of pixels, and the points marked "a" and "b" represent points on the edge of the lens found by searching along that row of pixels.
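An illustrative sketch of the row-scanning search just described, assuming a brightness scale of 0 to 1 (the names and default thresholds below are hypothetical):

    def find_edge_point(row, high=0.7, low=0.3):
        """Scan one pixel row from the left and return the index of the
        first pixel that falls to `low` after the brightness has reached
        `high`. Requiring a prior high level skips the black region
        outside the cuvette. Returns None if no edge is found."""
        seen_bright = False
        for i, value in enumerate(row):
            if value >= high:
                seen_bright = True
            elif seen_bright and value <= low:
                return i
        return None

    # The point "b" on the right end is found by scanning the reversed row:
    # j = find_edge_point(row[::-1]); b = len(row) - 1 - j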
Location of the center

Points on the lens contour can be found by sweeping rows or columns of the CCD image as described above. A series of sweeps is performed on a set of rows and columns that normally cross the edge of the lens. If it is determined during a sweep that an edge is missing, then that row or column is discarded from the set for that lens. The points are collected in four sets: L, R, T and B, corresponding to the left, right, top and bottom edges. Each point has a row and column index. The center is estimated by averaging the coordinate values of the sets. The column index of the center, C_x, can be estimated using the column data of the sets L and R. Let {i_1, i_2, ..., i_m} be the row numbers for which there is a data point in both set L and set R. For any such row, C_x should lie midway between u_L(i_j) and u_R(i_j). The center can be calculated by averaging these estimates:

C_x = (1/2m) Σ_j [u_L(i_j) + u_R(i_j)]

Similarly, let {i_1, i_2, ..., i_n} be the set of column indices for which there is a data point in both set T and set B. The row index of the center can be calculated by

C_y = (1/2n) Σ_j [u_T(i_j) + u_B(i_j)]

The estimates of (C_x, C_y) can be influenced by spurious outlier points. These points can be removed after the location of the lens boundary is calculated, and the center can then be recalculated using the good points. The above formulas are used with the refined data.
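A sketch of the center estimate, assuming the four edge-point sets are stored as mappings from a row (or column) index to the edge coordinate found there; the representation is an assumption, but the averaging is the one in the formulas above:

    def estimate_center(L, R, T, B):
        """Estimate (C_x, C_y) from edge-point sets. L and R map a row
        index to the left/right edge column found in that row; T and B
        map a column index to the top/bottom edge row. Only indices
        present in both paired sets contribute."""
        rows = sorted(set(L) & set(R))
        cols = sorted(set(T) & set(B))
        cx = sum(L[i] + R[i] for i in rows) / (2 * len(rows))
        cy = sum(T[j] + B[j] for j in cols) / (2 * len(cols))
        return cx, cy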
Inspection matrix

The inspection matrix provides information about light and dark patterns on the surface of the lens, and is the central structure for various inspection functions. The lens image can be described naturally by a function B(r, θ) in polar coordinates with reference to the center of the lens. The patterns on the lens can be observed by taking samples of the brightness pattern on a set of coordinates (r, θ). Let (r_m, θ_n), 1 ≤ m ≤ M, 1 ≤ n ≤ N, be the sample coordinates. Then the array of values S = B(r_m, θ_n), 1 ≤ m ≤ M, 1 ≤ n ≤ N, forms the M x N matrix whose columns represent the brightness samples at a fixed angle and whose rows represent the brightness samples at a fixed radius. This is the inspection matrix, or S-matrix, which contains the essential brightness information in a rectangular data structure that is convenient for efficient processing. Although the S-matrix is rectangular, it contains information points that lie on a polar grid, and it represents a mapping from the polar to the rectangular format. In a lens image such as the one shown in Figure 4a, it is clear that the natural reference system is polar coordinates with reference to the center of the lens. In operation, a polar sampling grid is placed over the lens image and the S-matrix is the array of image values over these sample points. The grid is arranged to cover the lens from inside the printed area to outside the contour. The image formed by the S-matrix is shown in Figure 7. It is clear that the S-matrix contains the same information as the corresponding region of the original image. Brightness values anywhere in the sampled region can be found in the S-matrix. As an example, the values along a radial cut passing through the dark spot 39 on the lens are shown in the brightness profile graph of Figure 8. This particular cut is represented by a column of S-matrix data corresponding to an angle of approximately 290° relative to a 0° reference at the top of the lens. It clearly shows the effect of the defect. The S-matrix therefore contains information that is a good basis for inspection algorithms. A second example of a presentation that is available from the data in the S-matrix is shown in Figure 9. This is a surface plot formed by a section of the S-matrix covering the edge 33 and the dark spot 39 of the lens. The channel formed by the edge should be uniform and straight in this presentation. Variations in the cross section of the contour channel are the basis for locating contour defects.
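For illustration, a polar sampling of this kind might be written as follows; the nearest-neighbor sampling and the orientation convention are assumptions, and the defaults M = 150 and N = 600 match the sizes mentioned later for Figure 16:

    import numpy as np

    def build_s_matrix(image, cx, cy, r_min, r_max, M=150, N=600):
        """Sample the image on a polar grid centered at (cx, cy),
        returning an M x N array S with S[m, n] = B(r_m, theta_n):
        rows are fixed radii, columns are fixed angles."""
        radii = np.linspace(r_min, r_max, M)
        angles = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
        rr, aa = np.meshgrid(radii, angles, indexing="ij")
        cols = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, image.shape[1] - 1)
        rows = np.clip(np.round(cy - rr * np.sin(aa)).astype(int), 0, image.shape[0] - 1)
        return image[rows, cols]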
Optical zone inspection

Inspection of the central optical zone of the contact lens requires a method to remove dark spots due to foreign material. This is done by comparing two or more independently acquired images, which serves to remove from either image dark regions that do not match the other. A defect is a mark on the lens caused by a scratch, a bubble, a fixed particle of foreign material such as a burr, or a cut or perforation. Defects can be found by looking for dark regions in the optical zone, because all of these defects appear in the image as dark areas. However, darkened areas are also caused by dirt in the saline solution, marks on the surface of the cuvette, and other similar effects that are not related to the lens. These can be separated using the fact that features on the lens move with the lens and appear in the same location in both images, whereas things that are not fixed to the lens will be in different positions in the two images. Differentiating between artifacts and real defects requires that the two images of the same lens be geometrically matched. This can be done by image registration. After the images have been registered, the optical zone can be inspected. The images of the lens should be combined in a way that retains the features that are common and eliminates the features that are not common. This is achieved by the procedure of: (1) finding the points in each image that match in both; (2) performing a geometric transformation that brings the matching points into register; and (3) comparing all points in the registered image arrays to eliminate artifacts that are not part of the lens. Each of the three steps can be achieved by a variety of computational strategies. Each strategy leads to an algorithm and computer code implemented on the inspection platform. The following describes an algorithm that accomplishes each of these three steps.
Finding matching points

The geometric matching algorithm described below uses six parameters. The values of the parameters are calculated by substituting the coordinates of three matching points on the two images into a set of equations. An accurate calculation requires that the three matching points be known accurately. Matching points are found by locating particular features of each image. Exact matching requires that the features selected for comparison have specific detail so that they can be located accurately. Locating features accurately requires a detailed image search, and completing the search quickly is a must for a real-time application.
Lenses with ink patterns

The ink pattern on printed lenses provides an image that can be used as a source of points to be matched. Figure 4a shows such an ink pattern 35. The challenge is to find matching image fields in two different lens images in a fraction of a second. The search is performed by a hierarchical procedure that, at each step, refines the accuracy of the pattern location. The steps for locating the matching points on lenses with ink patterns are given below. A modification, described afterwards, is required for lenses without ink patterns.
1.- Locate the center of each lens image.
2.- Find a prominent reference point on each lens. The most common reference point is the lens logo. A line from the center through the reference point provides a zero-angle reference. The logo can be located with precision and reliability in the ink pattern by hierarchical pattern matching. The search can be performed on the lens data in polar coordinates or on the S-matrix of the lens in rectangular coordinates.
3.- Find three points in the ink pattern of the first lens image at angles separated by about 120 degrees. The radius to the ink micropatterns can be a fraction (for example 0.5) of the distance to the contour. Form a template by cutting a small section of data from the ink pattern at each location. The three micropattern arrays will be called A1, B1 and C1. Note that we are not selective about which micropatterns are chosen; the density of dots printed in the ink pattern makes it possible to obtain a suitable pattern for matching. A refinement would be to evaluate the quality of each micropattern at this step and to choose a replacement for any that fall below the standard for matching requirements.
4.- Locate the three locations for the comparison micropatterns on the second lens image. The three locations have the same polar coordinates relative to the reference line as were used in the first lens image.
5.- Perform a local pattern search with the micropattern A1 at location 1 and select the pixel with the maximum matching score; repeat for the other two micropatterns. A sketch of this local search appears below.
Note that this procedure requires two pattern searches to find the logos in the ink field and three micropattern searches in very localized regions to match the patterns of the first image with those of the second. Note also that the micropatterns are not preselected; the selection is simply the pattern cut from a well-spaced location in the ink pattern. Experience has shown that this search procedure is fast and that it allows matching points on 1000 x 1000 pixel images to be located to within a pixel.
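A sketch of the local micropattern search in step 5, using normalized cross-correlation over a small window; the scoring metric and window size are assumptions, since the patent does not specify the matching measure:

    import numpy as np

    def match_micropattern(image, template, center, search=10):
        """Find the best match for `template` near `center` = (row, col)
        by normalized cross-correlation over a (2*search+1)^2 window.
        Returns the (row, col) with the maximum matching score."""
        th, tw = template.shape
        t = template - template.mean()
        best_score, best_rc = -np.inf, center
        r0, c0 = center
        for r in range(r0 - search, r0 + search + 1):
            for c in range(c0 - search, c0 + search + 1):
                if r < 0 or c < 0:
                    continue
                patch = image[r:r + th, c:c + tw]
                if patch.shape != template.shape:
                    continue
                p = patch - patch.mean()
                denom = np.sqrt((p * p).sum() * (t * t).sum())
                score = (p * t).sum() / denom if denom > 0 else -np.inf
                if score > best_score:
                    best_score, best_rc = score, (r, c)
        return best_rc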
Lenses without ink patterns

The location of reference points on lenses that have no ink pattern cannot use micropatterns. The procedure is modified by (1) refining the location of the logo and (2) using points on the contour of the lens at known angles from the reference line. The procedure remains hierarchical, fast and accurate. The center of each lens image must be located first. Then:
1.- Find the logo mark on each lens. The logo can be located quickly and reliably on lenses without ink patterns by searching for the largest dark region. The search can be performed either on the lens data in polar coordinates or on the S-matrix of the lens in rectangular coordinates.
2.- Select the logo pattern from the first lens image. This selected logo pattern is the template for a refined logo location search in the second lens image.
3.- Locate the logo more precisely in the second lens image by matching this pattern against the logo of the first image. The two logos will now be located to within about 1 pixel on each lens image.
4.- The center point and the reference point on the logo form a reference line on each lens. Construction lines through the center make angles of ±120° with the reference. The intersections of these lines with the contour of the lens provide two additional points that can be used as geometric matching coordinates. A variation is to select the location on the contour given by extending the reference line until it intersects the contour; this third point is used in place of the logo coordinate reference.
This search system works well with lenses of both types. Thus, the method described for lenses without ink patterns can be adapted to lenses with ink patterns; it is not required that the dots of the ink pattern be used to derive the reference points.
Geometric comparison

The present implementation uses two images taken by two cameras, which requires one matching operation for each image pair. The same technique could be used to compare more images, but that is not necessary for this system. The image comparison is made under the following assumptions, which are reasonable for contact lens products and the imaging system.
1.- The lens behaves like a rigid body, so that all points on the lens move together. If the lens were significantly deformed between the images, this assumption would be invalid. However, the lens is in a liquid, which provides stable support over the short interval of 10 seconds or less between the images.
2.- The movement of the lens is in a plane. The lens can rotate and translate in the plane of the image, but its tilt does not change between images. This assumption is reasonable given the geometry of the cuvette, the geometry of the camera system, and the short time between images.
3.- Because the images are taken with different cameras, the image sizes may differ. Image rescaling is an inherent part of the transformation derived below.
It is well known that translation, rotation and scaling can be achieved in an image plane by a linear transformation of coordinates. For each point (x, y) in image 1 we wish to find the corresponding point (u, v) in image 2. The transformation, commonly known as an affine transformation, can be written as a pair of linear equations:

u = ax + by + e
v = cx + dy + f

These equations have six parameters, (a, b, c, d, e, f), whose numerical values must be determined. Once the parameter values have been determined, the equations can be used to map from one image plane to the other. The six parameters can be determined by finding three matching points on the images. The only requirement is that the points not lie on the same straight line. Let A, B and C denote the features of the first image, with coordinates (x_a, y_a), (x_b, y_b) and (x_c, y_c) respectively. The corresponding features on the second image are A', B' and C', with coordinates (u_a, v_a), (u_b, v_b) and (u_c, v_c). Substituting the corresponding coordinate pairs into the equations for u and v gives six equations, which can be solved for the parameter values. With the common denominator

D = x_a·y_b - x_b·y_a + x_c·y_a - x_c·y_b - x_a·y_c + x_b·y_c

the solution is

a = [u_a·(y_b - y_c) + u_b·(y_c - y_a) + u_c·(y_a - y_b)] / D
b = [u_a·(x_c - x_b) + u_b·(x_a - x_c) + u_c·(x_b - x_a)] / D
e = [u_a·(x_b·y_c - x_c·y_b) + u_b·(x_c·y_a - x_a·y_c) + u_c·(x_a·y_b - x_b·y_a)] / D

and c, d and f are given by the same three expressions with v_a, v_b and v_c in place of u_a, u_b and u_c.
Once the parameter values have been calculated, they can be used in the equations for u and v to register each point on the first image with the corresponding point on the second image.
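Rather than coding the closed-form expressions directly, the same six parameters can be obtained by solving the two 3 x 3 linear systems numerically. A sketch, with illustrative names:

    import numpy as np

    def solve_affine(src, dst):
        """Solve u = a*x + b*y + e, v = c*x + d*y + f from three point
        pairs. src and dst are sequences of three (x, y) and (u, v)
        points; the points must not be collinear (otherwise the system
        is singular). Returns (a, b, c, d, e, f)."""
        (xa, ya), (xb, yb), (xc, yc) = src
        A = np.array([[xa, ya, 1.0],
                      [xb, yb, 1.0],
                      [xc, yc, 1.0]])
        us = np.array([p[0] for p in dst], dtype=float)
        vs = np.array([p[1] for p in dst], dtype=float)
        a, b, e = np.linalg.solve(A, us)
        c, d, f = np.linalg.solve(A, vs)
        return a, b, c, d, e, f

    def apply_affine(params, x, y):
        """Map a point (x, y) of image 1 to the point (u, v) of image 2."""
        a, b, c, d, e, f = params
        return a * x + b * y + e, c * x + d * y + f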
Comparison of registered lenses

The purpose of the lens comparison is to remove dark pixels in the image that are not related to actual marks on the lens. This is done simply by creating a third, resulting lens image in which the value of each pixel is replaced by the maximum of the pixel values in the two compared images. If L1 and L2 are the compared lens images, then L = max(L1, L2) is an image in which each pixel is equal to the brighter of the values at that location in the two images. This procedure is effective at removing foreign matter from the image if the matter is darker than the true lens value. It is possible for a pixel on a particle of foreign matter to be lighter than a pixel on a lens mark; this would replace a darker real pixel with one that is slightly lighter, but it has little effect on the analysis of lens defects. It is important that the lenses be accurately registered so that the true dark pixels are compared with each other. If this were not the case, dark pixels due to scratches or other lens defects would be removed by the comparison procedure. The size of defect that could be removed in this way is closely related to the accuracy of the image matching. When the images are matched to within 1 or 2 pixels, the size of the defects that can be eliminated by the comparison procedure becomes small enough to be below the rejection threshold. The comparison algorithm can be used to "clean" selected regions of the lens instead of the complete lens image. This makes it convenient to combine the comparison step with, for example, the optical zone inspection step. The optical zone is inspected in small blocks that cover the vision region of the lens. If the region comparison algorithm is used to "clean" only those blocks that the inspection flags as suspicious, so that they can be re-evaluated, the computation time for the comparison can be greatly reduced in exchange for a small increase for the re-evaluation of suspicious regions. This trade-off can greatly reduce the overall inspection time, and it is the procedure used in the operational implementation of the system.
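The pixel-wise maximum and the region-limited variant described above reduce to very little code. A sketch, under the assumption that the two images are already registered as NumPy arrays:

    import numpy as np

    def remove_artifacts(lens1, lens2_registered):
        """L = max(L1, L2): each pixel takes the brighter of the two
        values. Dark spots that moved between images (floating dirt,
        cuvette scratches) are paired with bright pixels and vanish;
        true lens defects stay dark in both images and survive."""
        return np.maximum(lens1, lens2_registered)

    def clean_block(lens1, lens2_registered, r0, r1, c0, c1):
        """Apply the comparison to one suspicious block only, as
        suggested in the text, to reduce computation time."""
        out = lens1.copy()
        out[r0:r1, c0:c1] = np.maximum(lens1[r0:r1, c0:c1],
                                       lens2_registered[r0:r1, c0:c1])
        return out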
Optical zone inspection

With reference to Figure 4a, the optical zone (OZ) 74 is the clear area in the central region of the lens through which the user sees. It is important that this area be free of defects. Therefore, the OZ is inspected to find dark areas that correspond to some type of defect. The inspection of the optical zone is done by examining small regions that cover the area in an overlapping manner. The dimensions of the regions are selected so that the OZ is well covered by the layout. Each region is a square of size 34 x 34 = 1156 pixels. This size is large enough to comprise a good sample of brightness values. The current implementation uses 37 regions arranged as shown in the table of Figure 10. Cell c19 is over the center of the lens, and the cells at the outer ends are over the boundary of the optical zone. Any defect in the optical zone will appear as a dark spot in one or more of the regions. These defects can be found by evaluating the uniformity of brightness across each of the 37 regions. Let B_avg(n) be the average brightness across region n and let B_min(n) be the minimum brightness in region n. Then the difference D(n) = B_avg(n) - B_min(n) is a measure of the brightness deviation across the region. Figure 11 shows a graph of the difference D(n) against region number for a particular lens. This particular lens has defects in regions 21 and 22 and suspicious values in regions 8 and 14. A criterion for rejecting lenses based on optical zone defects can make use of a weighted scoring of the deviation values. Let T1 and T2 be threshold values, such as T1 = 50 and T2 = 60. All cells with D ≤ T1 are given a score of 0, all cells with T1 < D ≤ T2 a score of 2, and all cells with D > T2 a score of 6. Then any lens with a total score of 5 or more is rejected. A single bad region or three suspicious regions will produce a rejection. The actual threshold values and scores are given here as an illustration; the procedure can be tuned to obtain measurements that correlate with decisions made by human inspectors. The deviation measure D(n) = B_avg(n) - B_min(n) measures the variations in brightness values within a small region of the optical zone. It is therefore not susceptible to variations in illumination that occur on the larger scale of the full lens. Only if the variations in illumination significantly affected the computation within a small region would the inspection system be inoperable. In this way, the algorithm is robust and does not require additional illumination normalization to operate satisfactorily with lighting systems that are close to normal specifications.
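A sketch of the weighted scoring, using the illustrative thresholds T1 = 50, T2 = 60 and scores 0/2/6 from the text; extraction of the 34 x 34 regions is assumed to be done elsewhere:

    import numpy as np

    def score_optical_zone(regions, t1=50, t2=60, reject_at=5):
        """Score each optical-zone region by its brightness deviation
        D = B_avg - B_min, assign 0, 2 or 6 points for D <= t1,
        t1 < D <= t2 and D > t2, and reject the lens when the total
        reaches `reject_at`. Returns True to reject."""
        total = 0
        for region in regions:
            d = float(region.mean()) - float(region.min())
            if d > t2:
                total += 6
            elif d > t1:
                total += 2
        return total >= reject_at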
Example of artifact elimination

The lens inspection system is capable of detecting real defects in the lens by analytically eliminating the effects of scratches or stains on the cuvette and of dirt or other contaminants floating in the saline solution 17. Figures 12a and 12b illustrate the lens of Figures 4a and 4b and the movement of a lens spot defect 39, a scratch 43 on the cuvette and a floating dirt particle 41 in the saline solution when the cuvette rotates 45° clockwise and the lens rotates 90° clockwise from the first image of Figures 4a and 12a to the second image of Figures 4b and 12b. For the purpose of this discussion it will be assumed that each of these spots lies on a single pixel in the area. As will be explained in the example below, the automated lens inspection system will ignore the artifacts formed by the floating particle 41 and the cuvette scratch 43 and will recognize the spot 39 as a lens defect. Figure 4a illustrates an example of a first image of a lens taken at the first camera inspection station 11. As shown in Figure 4a, the image shows dark areas corresponding to the lens spot 39, the cuvette scratch 43 and the floating particle 41. Figure 4b shows the second image of the same lens, taken at the second camera inspection station 13 after the lens, the saline solution and the cuvette have been rotated by the cuvette rotation element 45. For the purpose of this discussion, it will be assumed that the cuvette has moved 45° clockwise and the lens has moved 90° clockwise from the first to the second image. This assumed movement is provided for illustrative purposes only. It should be understood that, in practice, any rotational movement will be acceptable. There may also be translational movement between the first and the second image; however, to simplify the illustration and to improve the understanding of the invention, only rotational movement is considered in this example. With reference to Figures 4a, 4b, 12a and 12b, the rotational movement of the lens in the second image with respect to the first image can be seen in the 90° clockwise offset of the WJ logo 37 that is printed on the lens. This logo is provided to identify the manufacturer, Wesley Jessen, and also to identify a fixed point on the lens that can be used as a reference for translating and rotating the second image with respect to the first image. For the purpose of this analysis, as shown in Figure 4a, the position of the logo 37 in the first image is at 60° from the vertical 0° angular reference of the S-matrix. The logo of Figures 4a and 12a is defined as the 0° reference for the first image. The lens of the second image of Figures 4b and 12b is rotated 90° clockwise with respect to the first image, and the 0° logo reference of the second image is located at 150° from the vertical reference of the S-matrix, as shown in Figure 4b. The logo of Figures 4b and 12b is defined as the 0° reference for the second image. In order to compare the positions of the apparent defects 39, 41 and 43 in the two images, it is necessary to mathematically transform the information of the second image until the lens of that image is registered with the lens of the first image.
When the images are in register, the positions of the black spots in each image can be compared, and the spots that move, due to the movement of the cuvette and the solution relative to the movement of the lens, will be recognized as artifacts rather than defects. As explained previously, in the lens inspection system the transformation of the second image with respect to the first image is achieved by an affine transformation of the S-matrix data. The S-matrix stores the light intensity values of the images in columns of fixed angle and rows of fixed radius relative to the vertical 0° axis. Therefore, as shown in Figure 13, in the S-matrix 75 for the first image of Figures 4a and 12a, the spot 39 on the lens is a dark area with coordinates (R2, θ), where R2 is the distance from the center of the image to the spot 39 and θ is the 230° angle corresponding to the position of the spot relative to a 0° reference line through the logo 37 of the first image. As shown in S-matrix 75, the image data for the spot is in a matrix cell 77 located in column 290 and row R2 of the S-matrix for the first image. This column and row correspond to the radial and angular positions of the spot 39 in the first image. Column 290 corresponds to 290° from the vertical reference of the S-matrix. This angular position corresponds to an angle of 230° (that is, 290° - 60°) relative to the 0° reference position of the logo 37 of the first image. Likewise, the floating dirt particle 41 is located in a cell 79 corresponding to a radius R3 and an angle in column 300 of the S-matrix, i.e., 240° relative to the logo. The scratch 43 on the cuvette is shown in cell 81, corresponding to a radius R1 and an associated angle in column 180 of the S-matrix, i.e., 120° relative to the logo. With reference to Figures 12b and 13, the S-matrix 83 of the second image shows the spot 39 stored in a cell 85 having a radius R2 and an angle in column 20 of the S-matrix, i.e., 230° relative to the logo of the second lens image. The spot is a lens defect and retains the same radius and the same angular displacement relative to the logo in both images. The floating dirt particle 41 is located in cell 87 of the S-matrix 83 of the second image at a radius R4 and an angle in column 340 of this matrix, i.e., 190° relative to the logo of the second lens image. This position occurs as a result of the clockwise movement of the solution and of the particle 41 within the solution. The scratch 43 on the cuvette is located in cell 89 of the S-matrix of the second image at a radius R1 and an angle in column 225 of the S-matrix, i.e., 75° relative to the logo of the second lens image. This angular position corresponds to the assumed 45° clockwise movement of the cuvette relative to the lens movement. The affine transformation is applied to the S-matrix 83 so that the 90° movement of the lens is subtracted from the data of the second image, rotating these data into register with the data of the first image. The 90° translation is provided to cancel the 90° clockwise movement of the second image with respect to the first image. The transformed S-matrix 84 of the second image is compared with the S-matrix 75 of the first image to produce a resulting S-matrix image 91 that retains only the brighter pixel of each compared pair from the two matrices 75 and 84.
Therefore, when the dark cell 81 of the matrix 75 for the first image is compared with the corresponding bright white cell of the transformed matrix 84, the bright white cell of the transformed matrix is copied to the resulting cell 80 of the S-matrix 91. Likewise, the dark cell 82 of the transformed matrix 84 is compared with the corresponding bright white cell of the matrix 75, and the bright white cell of the matrix 75 is copied to cell 86 of the resulting matrix 91. The dark spot corresponding to the scratch on the cuvette is ignored in this way. This dark spot is properly ignored because it moved relative to the lens and therefore cannot be a lens defect. Also, the dark cells 79 and 88 for the dirt particle 41 of the matrices 75 and 84 are compared with their corresponding bright white cells, and the bright white cells are copied to the cells 90 and 92 of the resulting S-matrix 91. The dark spot of the floating dirt particle 41 is thus removed in the S-matrix 91. This is correct, because the floating particle 41 moved with respect to the lens and therefore is not a lens defect. The only black spot that passes to the resulting S-matrix 91 is the spot 39, which is a defect fixed on the lens. This spot is copied to cell 94 of the matrix 91 because the dark cell 77 of the first-image matrix 75 matches the dark cell 96 in the same position (R, θ) in the transformed S-matrix 84. Figure 13 thus demonstrates the method whereby contaminants in the solution and scratches on the cuvette are removed in the automated lens inspection procedure. The resulting S-matrix contains only dark spots corresponding to lens defects. If any of these spots are located in the optical zone, they are detected as defects by analyzing overlapping regions as described above.
Ink pattern inspection The printed iris ink pattern can be inspected with a set of data that can be easily gathered once the center of the lens has been found. This data set is a set of image brightness values along radial lines at regular angular intervals around the circumference of the lens. The brightness values along each radial line are stored in the S-matrix. As explained previously, the columns of the S-matrix contain the values along paths of constant angle and the rows contain the values along paths of constant radius. The iris portion of the S-matrix represents a plot of the brightness values within a region bounded by radii r1 and r2 and angles θ1 and θ2, but shown in a rectangular row and column format. In this analysis r1 and r2 are respectively selected as the internal and external radius of the iris printing zone. The angles θ1 and θ2 are selected to cover the entire circle from 0 to 360 degrees. This description of the S-matrix assumes that the radial trajectories traverse a circular path centered on the center of the lens. A more sophisticated version of the algorithm can use a generated elliptical or polynomial trajectory that models the shape of the lens. The size of the S-matrix is MxN, where M is the number of points taken along a constant-angle trajectory and N is the number of angles. Experimentation can be done to find the best size for S. The trade-off is between more detail and more processing time.
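The construction of the S-matrix itself is a polar resampling of the camera image. The sketch below shows one way this sampling could be done, assuming a grayscale image indexed as img[y][x], an already-located lens center, samples that stay within the image bounds, and a y-axis pointing downward; the nearest-neighbor sampling, names and parameters are illustrative assumptions, not the patent's code.

    // Minimal sketch of S-matrix construction: M radial steps between r1 and
    // r2, N angular steps, one column per angle and one row per radius.
    #include <vector>
    #include <cstdint>
    #include <cmath>

    std::vector<std::vector<uint8_t>>
    buildSMatrix(const std::vector<std::vector<uint8_t>>& img,
                 double cx, double cy, double r1, double r2, int M, int N) {
        const double PI = std::acos(-1.0);
        std::vector<std::vector<uint8_t>> S(M, std::vector<uint8_t>(N));
        for (int j = 0; j < N; ++j) {                     // one column per angle
            double theta = 2.0 * PI * j / N;
            for (int i = 0; i < M; ++i) {                 // one row per radius
                double r = r1 + (r2 - r1) * i / (M - 1);
                int x = (int)std::lround(cx + r * std::cos(theta));
                int y = (int)std::lround(cy - r * std::sin(theta));
                S[i][j] = img[y][x];  // nearest-neighbor sample on the radial line
            }
        }
        return S;
    }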
Because the purpose is to find larger blank areas in the ink pattern, it is not necessary to use a large number of points in S. The best size to use can be determined empirically. An image of a lens is reproduced in Figure 14. This lens has a number of medium-sized spaces in the printed area, but this is part of the pattern design. This ink pattern is acceptable. In contrast, the lens of Figure 15 has a large unprinted region 93 that runs through the printed area. This lens should clearly be rejected. Figure 16 shows an S-matrix assembled from the lens of Figure 14 using 600 angles (about one every 0.6 degrees) and 150 radial steps. Clearly the information needed to judge print zone quality is preserved in the S-matrix. The contrast between the bright areas and the ink pattern could be improved by a standard contrast enhancement procedure. Such a procedure consists of applying a point function u = f(s) to each element s of the array in such a way that the gray levels are spread more evenly over the total scale of brightness values. However, this procedure takes time, with at least one multiplication per element of the S-matrix, and achieves nothing useful in terms of the objective of detecting the bright areas. Such a method is useful when the image of S is presented to a detector such as the human visual system, but is not useful in an automated detection method where the detector can be adjusted to the image. Instead, a threshold detection procedure is used, as described below. Threshold detection can be used to differentiate pixels that have bright values from those that do not. A threshold function can be defined by u = f(s), where f(s) = 1 if s ≥ T and f(s) = 0 if s < T. As an implementation matter, the U-matrix is scaled so that 0 is the minimum (black) and 1 is the maximum (white). Applying the threshold function to S will produce a new array, call it U, where all values are 0 or 1. A region of S that has a large number of bright pixels will appear in U as a cluster of ones. The U-matrix for the S-matrix of Figure 16 is shown in Figures 17-20 for different threshold settings. Figure 17 shows a U-matrix constructed with T = 235 on a brightness scale of (0, 255). Figure 18 shows a U-matrix constructed with T = 215. Figure 19 shows a U-matrix constructed with T = 200, and Figure 20 shows a U-matrix constructed with T = 150. The choice of threshold is important, but not particularly sensitive. It should lie between the highest gray level of a printed pixel and the brightness value of an unprinted pixel. There is a range of values that can be used for the threshold T. With reference to the histogram of the S-matrix brightness of Figure 21, it can be seen that the threshold could be in the range of 200 to 235. Note that using a value as low as T = 150 removes everything except the darkest regions, as shown in Figure 20. This is useful for an operation that looks for the printed pattern logo, but would not be useful for searching for spaces in the print. The value of T can be set automatically in an operational system by measuring the average brightness over the optical zone for several lenses in sequence. If the threshold were set from the current lens alone, a cloudy optical zone could lead to a value of T that is too low. T should be set about 20 brightness steps lower than the average. As an example, the average brightness in the central area of the lens can be 242, which would place the threshold at 222.
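A minimal sketch of this automatic threshold rule and of the U-matrix construction follows; the function name and the running-average argument (the optical-zone brightness averaged over several lenses, as recommended above) are illustrative assumptions.

    // Sketch of automatic threshold selection and U-matrix construction:
    // T is set 20 brightness steps below the average optical-zone brightness.
    #include <vector>
    #include <cstdint>

    std::vector<std::vector<uint8_t>>
    thresholdU(const std::vector<std::vector<uint8_t>>& S, double avgOpticalZone) {
        int T = (int)avgOpticalZone - 20;            // about 20 steps below average
        std::vector<std::vector<uint8_t>> U(S.size(),
            std::vector<uint8_t>(S[0].size()));
        for (size_t i = 0; i < S.size(); ++i)
            for (size_t j = 0; j < S[0].size(); ++j)
                U[i][j] = (S[i][j] >= T) ? 1 : 0;    // 1 = bright (unprinted), 0 = ink
        return U;
    }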
Using this automated technique ensures that printed pixels fall below T and will appear black in the U-matrix. Automatic adjustment of T is recommended for robust operation. The range of possible threshold values can be verified by examining the brightness histogram, an example of which is shown in Figure 21. Printed and unprinted pixels are clearly separated into two regions of brightness on either side of the minimum at approximately 215. A search can be made for bright areas by summing over a region. The sum is expected to be large in the bright areas and small elsewhere. The size of the block should be approximately the size of the smallest bright area in a printed region that will lead to a rejection. The step size is a trade-off between efficiency and the need to ensure that the window lands within the bright area. Let A be a matrix where A(i, j) is the sum of the values of U within the block located at position (i, j). Then A will have a large value when the block is over a bright area and a small value when it is over a dark area. A surface plot of A for a block size of 50 x 50 and a step size of 10 in each direction has a peak in column 21 and row 6. The corresponding U-matrix with the block located at this peak is highlighted in Figure 22. Note that the 50 x 50 block broadly covers the bright area in the printed region. Figures 23 and 24 respectively show the location of a bright area using a search box of 30 x 30 and 60 x 30. Notably, the same region is located regardless of the size of the search box. These print spaces should be accepted because they are part of the design of the pattern. Therefore, the size of the inspection box should be substantially greater than 60 x 60. To look for larger problems in the printed area it is necessary to increase the size of the inspection box. An example is shown in Figure 25. It can be seen that an inspection box of size 100 x 100 fits within the unprinted area. This provides an indication of the size that should be chosen for the inspection box. The analysis program has the following components: 1. Select the rows of the S-matrix that correspond to the printed area. The selected rows should cover the printed area but not pick up the bright areas in the optical zone and outside the printed area. 2. Construct a matrix U that is a thresholded version of S. The threshold should be about 20 brightness steps below the average brightness in the optical zone. 3. Sweep the U-matrix with an inspection box that is about the same size as the spaces in the printed area that are to be detected. 4. Reject the lens if any of the box positions has a sum greater than an acceptable threshold. This threshold should be around 80% of the size of the sweep box.
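A minimal sketch of steps 3 and 4, the inspection-box sweep, appears below. The box size, step size and 80% rejection fraction follow the illustrative figures in the text; the sweep is shown without the angular wrap-around that a production version would likely need, and all names are assumptions.

    // Sketch of the inspection-box sweep over U: sum the ones (bright pixels)
    // in each box position; a sum above ~80% of the box area flags a gap.
    #include <vector>
    #include <cstdint>

    bool hasBrightGap(const std::vector<std::vector<uint8_t>>& U,
                      int boxH = 100, int boxW = 100, int step = 10) {
        int rows = (int)U.size(), cols = (int)U[0].size();
        int rejectSum = (int)(0.8 * boxH * boxW);   // threshold: ~80% of box size
        for (int i = 0; i + boxH <= rows; i += step) {
            for (int j = 0; j + boxW <= cols; j += step) {
                int sum = 0;
                for (int r = i; r < i + boxH; ++r)
                    for (int c = j; c < j + boxW; ++c)
                        sum += U[r][c];
                if (sum > rejectSum) return true;   // bright area large enough to reject
            }
        }
        return false;
    }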
Logo inspection The purpose of logo inspection is to determine that the logo has been properly printed on the lens. The logo is used by the wearer to determine the orientation of the lens. If the logo is printed so poorly that it cannot be read, or is misplaced, then that function does not work. In addition, a poorly printed logo is an indication to the user of a poorly controlled production process and a reason for reduced confidence in the product. For these reasons it is necessary to inspect the logo and determine whether it has acceptable quality. An example of a lens logo is made up of the initials W and J, one after the other. This is shown in Figure 26, where the logo on the left is from a clear lens and the one on the right is from a lens with a pattern printed in color. It is necessary to be able to locate and inspect the logo on either type of lens. The biggest challenge in logo inspection is to determine whether the logo has the correct structure. It is not difficult to build a procedure to locate a region with a large dark object against a lighter background. It is more difficult to analyze the structure of the dark region to determine whether it has an acceptable form for the logo. Some examples of badly formed logos are shown in Figures 27(a)-(e). Examples (a), (b), (d) and (e) have defects that cause rejection. Example (c) is a borderline acceptable case because the user can determine the orientation from the relative location of the two parts. A logo inspection algorithm can be created by constructing a procedure to recognize the letters W and J. Once the letters have been located and recognized, their relative placement can be measured. If they are not recognized, or if their placement is incorrect, then the lens does not pass the inspection. The algorithm described below essentially follows this procedure, but without using a general recognition tool. Because there are only two patterns to be evaluated, it is possible to use a relatively simple structured geometric approach effectively. The steps in the algorithm are listed below. Note that the step of locating the lens and building the S-matrix is done elsewhere in an integrated inspection system, but it is listed here for completeness. 1. Locate the lens in the image and build the S-matrix. 2. Construct a U-matrix by applying a brightness threshold to the elements of S. The purpose is to separate the pixels of the dark logo from the printed background pixels on the basis of brightness. (The threshold can be adjusted dynamically in an automatic system.) 3. Locate the range of columns of U that contains the logo. This corresponds to finding the angular sector of the lens that contains the logo. Select that column range from U to form a reduced matrix A. 4. Select the largest contiguous set of dark pixels in A as the W section. Calculate the corners and the center of a box containing the W. 5. Select the second largest contiguous set of dark pixels in A as the J section. Calculate the corners and the center of a box containing the J. 6. Evaluate the geometric relationships provided by the W and J boxes in relation to accepted specifications. The geometric comparisons are based on parameters that are computed from the coordinates of the W and J boxes: 1. Distance D between the center of the W box and the center of the J box. It must be in the interval DWJMIN < D < DWJMAX. 2. Angular dimension DWA (width) of the W box. It must be in the interval DWAMIN < DWA < DWAMAX. 3. Radial dimension DWR (height) of the W box.
It must be in the interval DWRMIN < DWR < DWRMAX. 4. Angular dimension DJA (width) of the J box. It must be in the interval DJAMIN < DJA < DJAMAX. 5. Radial dimension DJR (height) of the J box. It must be in the interval DJRMIN < DJR < DJRMAX. 6. Aspect ratio WA = DWR / DWA of the W box. It must be in the interval WAMIN < WA < WAMAX. 7. Aspect ratio JA = DJR / DJA of the J box. It must be in the interval JAMIN < JA < JAMAX. 8. Area AW = DWA x DWR of the W box. It must be in the interval AWMIN < AW < AWMAX. 9. Area AJ = DJA x DJR of the J box. It must be in the interval AJMIN < AJ < AJMAX. 10. The center WC of the W must be within the radial limits for the location of the logo: WCMIN < WC < WCMAX. All ten of these tests must be passed for the logo to pass inspection. Each test includes a pair of parameters that define the endpoints of an acceptance interval. These twenty values determine the acceptance criteria and must be established from the specifications of the letters of the logo and their desired location on the lens. Experience with this inspection procedure has shown that it is fast and an effective inspection filter for the quality of the logo. It would be possible to gather statistical information on the calculated parameters D, DWR, DJA, WA, JA, AW and AJ and the location WC. Following these parameters during the production process would give an online indication of problems in the printing process and may allow corrective action before the procedure goes out of tolerance.
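A minimal sketch of the ten interval tests follows, assuming the W and J boxes have already been extracted from the U-matrix; the Box and Spec structures and every limit value are illustrative placeholders to be filled in from the logo specifications, not the patent's own values.

    // Minimal sketch of the ten logo acceptance tests. Box coordinates are in
    // S-matrix units (rows = radius, columns = angle).
    #include <cmath>

    struct Box { double cRow, cCol, height, width; };    // center and dimensions
    struct Limits { double lo, hi; };
    struct Spec { Limits D, DWA, DWR, DJA, DJR, WA, JA, AW, AJ, WC; };

    static bool inRange(double v, const Limits& l) { return l.lo < v && v < l.hi; }

    bool logoPasses(const Box& w, const Box& j, const Spec& s) {
        double D = std::hypot(w.cRow - j.cRow, w.cCol - j.cCol); // 1. center distance
        return inRange(D, s.D)
            && inRange(w.width,  s.DWA)            // 2. angular dimension of W
            && inRange(w.height, s.DWR)            // 3. radial dimension of W
            && inRange(j.width,  s.DJA)            // 4. angular dimension of J
            && inRange(j.height, s.DJR)            // 5. radial dimension of J
            && inRange(w.height / w.width, s.WA)   // 6. aspect ratio of W
            && inRange(j.height / j.width, s.JA)   // 7. aspect ratio of J
            && inRange(w.width * w.height, s.AW)   // 8. area of W
            && inRange(j.width * j.height, s.AJ)   // 9. area of J
            && inRange(w.cRow, s.WC);              // 10. radial location of W center
    }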
Edge inspection The edge of the lens should be smooth and continuous. Edge defects include nicks, cuts, missing portions and adhering foreign matter. The nicks that appear on the edge can be so small that they are almost invisible except under considerable magnification. Figures 28(a)-(d) illustrate examples of edge defects in the contour of the contact lens that must be detected and rejected. Automated inspection of the edge of the lens requires extensive modeling. Edge inspection consists of an analysis of profile cuts through the edge image that are constructed on the basis of a geometric model of the edge. The edge model can be a parametric ellipse or a polynomial fit of the edge in the inspection matrix. The polynomial fit is preferable. The edge inspection procedure is essentially a search for rapid variations in the edge profile. The edge profile can be seen in the S-matrix as a dark line that is almost horizontal in the matrix. Figure 7 shows an example of such a profile 33. The profile of a small section of an S-matrix over a portion of a lens edge containing a defect appears in Figure 30. The contour of the lens 33 appears as a long, deep channel in this surface plot. The bottom of the channel corresponds to the darkest part of the contour. The defect is visible in the surface plot as a bump 96 on the side of the channel. The inspection algorithm can detect this defect by finding rapid local variations in the edge profile. An inspection procedure that can locate defects by analyzing the contour profile in the S-matrix is explained next. There are many possible variations of this procedure, and each variation has its own adjustment parameters that can be set by a variety of means, either automatically or manually. 1. The location of the center of the lens and the construction of the inspection matrix S. The radial profile lines must extend across the contour of the lens. 2. The location of the band of matrix rows that contains the contour. This can be done in many ways, given the prominence of the dark band. 3. The use of a comparison metric to compare the profile values in the k-th column with a model. The model can be static or can be constructed from a subset of the edge profiles in the S-matrix. The comparison can be simple, or complex enough to adjust, for example, for changes in the location of the minimum point and in the depth of the profile. The result of each comparison is a set of one or more numbers for each column that describes how well that edge profile compares with the model. 4. The location of those profiles whose variation from the model makes them dubious. 5. The optional completion of an additional analysis in the region of dubious profiles. 6. The performance of the same edge-matching analysis on the second of a pair of lens images. 7. The comparison of the locations of dubious edge points. If the dubious points in the two images do not agree within a given tolerance in location, then it is assumed that the effect was caused by a moving (foreign) object. If the locations agree, then it is assumed to be a defect in the edge of the lens. This inspection algorithm can be used in a single-image system, in which case it will detect defects at the edges, but it is vulnerable to false lens rejection due to foreign matter in the image.
The use of this algorithm with two images, with comparison of the defects detected in each, is therefore preferred. If a dark spot is detected in the same edge area of both images, it is recognized as a defect in the lens.
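The two-image confirmation of step 7 can be sketched as follows, assuming each image has yielded a list of dubious-point angles already expressed relative to the lens (that is, with the rotation between images compensated); the tolerance value and names are assumptions of this sketch.

    // Sketch of the two-image edge comparison: a dubious point counts as a
    // real edge defect only if a dubious point at nearly the same angular
    // location exists in the other image. Angles in degrees.
    #include <vector>
    #include <cmath>
    #include <algorithm>

    int confirmedEdgeDefects(const std::vector<double>& dubious1,
                             const std::vector<double>& dubious2,
                             double tolDeg = 2.0) {
        int count = 0;
        for (double a1 : dubious1) {
            for (double a2 : dubious2) {
                double d = std::fabs(a1 - a2);
                d = std::min(d, 360.0 - d);          // wrap-around angular distance
                if (d <= tolDeg) { ++count; break; } // same place in both images
            }
        }
        return count;   // reject the lens if this exceeds the allowed number
    }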
Polynomial edge fitting Edge defects are detected in a preferred embodiment by comparing the edge data with a mathematical model of the edge. A preferred method of edge modeling is to fit the data points in the polar-coordinate representation with polynomials and then combine the polynomial representations. The basic approach can be described by viewing a plot of ρi versus βi, as in Figure 30. If the data are grouped into four sections corresponding to the angle intervals (0°, 90°), (90°, 180°), (180°, 270°) and (270°, 360°), then a polynomial approximation of ρ against β can be calculated for each section. The sections do not have to be as described here, or even be contiguous, for this approach to be used. One variation is to allow some overlap between the data sets, which facilitates the interpolation of values near the boundaries of the sets. The data sets will be denoted by Si, 1 ≤ i ≤ n, where n is the number of data sets. A polynomial fi(β) is fitted to the points in Si so that the sum of the squared differences is minimized. The coefficients of such a polynomial can be derived by standard techniques. If the polynomials are chosen to have the same degree, p, then they can be written in the form fi(β) = ci0 + ci1β + ci2β² + ... + cipβ^p, 1 ≤ i ≤ n. A model of the lens contour is provided by this set of polynomials, since the radius can be calculated for any angle. Given a value of β, the region into which it falls is chosen. Then the radius is calculated using the appropriate polynomial function. If the angle falls near a boundary, the radius is calculated at nearby angles in the adjoining regions and the radius is interpolated. Polynomial expressions should be used with care, and only in the regions for which they were calculated; polynomial approximations are notoriously inaccurate outside the region of definition. Allowing overlap between the regions makes it possible to reduce this edge effect. Polynomial modeling has an advantage over ellipsoidal modeling for lenses that are distorted in some way. This is the case, for example, in the inspection of partially hydrated lenses. In such cases the elliptical fit may be poor and cause rejection of the lens in the inspection procedure, while the polynomial method can adapt better to slight variations in the silhouette of the edge. The number of terms used in the polynomial fit depends on the amount of data available and on the rate of change of the data. In general, the number of coefficients should be no more than about 1/3 of the number of data points used in their calculation. In practice, a tenth-order polynomial has been used successfully, although polynomials of lesser or greater degree could be employed. After a model for the edge of a lens has been calculated, it is possible to go back and refine the data set of edge points. This can be done by calculating the distance of each point in the set to the modeled boundary and then rejecting those points that fall beyond a specified distance. The reduced data set can then be used to create a new estimate of the model parameters. This refinement procedure is necessary in any environment where alterations are likely. Alterations have many causes, including scratches on the cuvette and dirt in the fluid in the cuvette. Such alterations are not eliminated from the dark contour region by the "cleaning" procedure described above, so as a routine matter a refinement step should be used.
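A minimal sketch of the per-section least-squares fit follows, using the normal equations and Gaussian elimination. For a tenth-order fit the angles should first be scaled into a small interval such as [-1, 1], since the normal equations otherwise become badly conditioned; that scaling remark and all names are assumptions of this sketch, not the patent's code.

    // Least-squares polynomial fit of radius rho against angle beta for one
    // angular section. Returns coefficients c[0] + c[1]*beta + ... + c[p]*beta^p.
    #include <vector>
    #include <cmath>
    #include <utility>

    std::vector<double> polyFit(const std::vector<double>& beta,
                                const std::vector<double>& rho, int degree) {
        int m = degree + 1, n = (int)beta.size();
        // Normal equations A c = b, built as an augmented m x (m+1) matrix:
        // A[j][k] = sum(beta^(j+k)), A[j][m] = sum(rho * beta^j)
        std::vector<std::vector<double>> A(m, std::vector<double>(m + 1, 0.0));
        for (int i = 0; i < n; ++i) {
            std::vector<double> pw(2 * m - 1, 1.0);
            for (int k = 1; k < 2 * m - 1; ++k) pw[k] = pw[k - 1] * beta[i];
            for (int j = 0; j < m; ++j) {
                for (int k = 0; k < m; ++k) A[j][k] += pw[j + k];
                A[j][m] += rho[i] * pw[j];
            }
        }
        // Gaussian elimination with partial pivoting
        for (int col = 0; col < m; ++col) {
            int piv = col;
            for (int r = col + 1; r < m; ++r)
                if (std::fabs(A[r][col]) > std::fabs(A[piv][col])) piv = r;
            std::swap(A[col], A[piv]);
            for (int r = col + 1; r < m; ++r) {
                double f = A[r][col] / A[col][col];
                for (int k = col; k <= m; ++k) A[r][k] -= f * A[col][k];
            }
        }
        // Back substitution
        std::vector<double> c(m);
        for (int j = m - 1; j >= 0; --j) {
            double s = A[j][m];
            for (int k = j + 1; k < m; ++k) s -= A[j][k] * c[k];
            c[j] = s / A[j][j];
        }
        return c;
    }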
The refinement procedure can be used with either the elliptical or the polynomial method, and in fact with any other modeling approach. Once a contour model is available, it can be used to locate the model points at any angle around the center. The efficiency of the procedure can often be improved by the use of a lookup table. The lookup table is calculated once and then used for subsequent calculations. If a value that is not in the table is required, it can be interpolated from the nearest entries. The system has been implemented with a lookup table of 720 points at half-degree intervals around the edge of the lens. The radius at any other point is interpolated.
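A sketch of such a table is given below: 720 model radii at half-degree steps, with linear interpolation between entries. The EdgeLUT structure and the evalModel callback (whatever edge model, polynomial or ellipse, is in use) are illustrative assumptions.

    // 720-entry half-degree lookup table of model radii with linear
    // interpolation for intermediate angles.
    #include <vector>
    #include <cmath>
    #include <functional>

    struct EdgeLUT {
        std::vector<double> radius;                       // radius at k * 0.5 degrees
        explicit EdgeLUT(const std::function<double(double)>& evalModel)
            : radius(720) {
            for (int k = 0; k < 720; ++k) radius[k] = evalModel(0.5 * k);
        }
        double at(double angleDeg) const {                // interpolate between entries
            double t = std::fmod(angleDeg, 360.0) / 0.5;
            if (t < 0) t += 720.0;
            int k = (int)t;
            double f = t - k;
            return radius[k % 720] * (1.0 - f) + radius[(k + 1) % 720] * f;
        }
    };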
Ellipsoidal edge fitting Although the polynomial edge fit is preferred, an ellipse could also be used to model the edge. As an alternative, elliptical modeling has been used as an initial approximation to roughly locate the region of the lens contour data; a polynomial is then used to model the data. An ellipsoidal representation of the circular lens is useful because it approximates the appearance of the lens when it is tilted relative to the viewing angle. In addition, the lens may actually be slightly elliptical owing to variations in the manufacturing process. The edge-finding algorithm provides data points that fall approximately on the edge of the lens. Practical limitations of the imaging and edge-search procedures will cause noisy variation in the locations of the edge points, and some points will be found that are not even close to the edge. The edge modeling algorithm must tolerate this randomness in the edge data and still produce a good representation of the boundary. An ellipsoidal model should allow for reasonable variation in the imaging and fabrication procedures and should provide a good, compact parametric model. An ellipse can be described by five parameters, namely the center coordinates (Cx, Cy), the lengths of the major and minor axes, a and b, and the angle φ of the major axis from the horizontal axis. The simplest case is when the center is at the origin of the coordinate system and the major axis is in the horizontal direction. A point (x, y) on the ellipse has coordinate values that are related by x = a cos θ and y = b sin θ, where θ is an angular parameter. The distance from the center to the point (x, y) is given by the Euclidean distance formula.
Equation E1: d = √(x² + y²) = √(a²cos²θ + b²sin²θ), and the angle that a line from the point (x, y) to the center makes with the horizontal axis is α = tan⁻¹(y/x) = tan⁻¹((b sin θ)/(a cos θ)). The angle α is equivalent to the angular parameter θ only when the major and minor axes have the same length. Any geometric figure can be rotated about the origin of the coordinate system by means of a coordinate transformation. In effect, the figure is held static and the axes of the coordinate system are rotated. Let (x, y) be the location of a point P in one coordinate system and (u, v) the location of the same point P in the rotated system. The two coordinate systems have the same origin, but their axes are rotated by an angle φ. Then the coordinates of P in the two systems are related by Equation E2: u = x cos φ + y sin φ, v = −x sin φ + y cos φ. The same information can be expressed in matrix form with the rotation matrix [cos φ, sin φ; −sin φ, cos φ]. The origin of an ellipse can be shifted simply by adding the offset value to each coordinate. The equations of a rotated ellipse located at (Cx, Cy) are u = Cx + a cos θ cos φ + b sin θ sin φ and v = Cy − a cos θ sin φ + b sin θ cos φ. In this analysis it will be assumed that the center of the figure has been located by the center-finding algorithm, and it is only necessary to find the values of the three parameters {a, b, φ}. The observed data are a set of points (ui, vi), 1 ≤ i ≤ n.
Most of these points will fall on or near the edge of the lens, but some will fall away from the edge. The goal is to fit an ellipse to the observed data, using the good points and eliminating the outlying points that are not on the edge. The lengths of the major and minor axes and the angle of rotation must be found. The normal method for determining parameter values from data is statistical estimation. In this case, the parameters (a, b, φ) are related in a nonlinear way to the set of points (ui, vi), 1 ≤ i ≤ n. The values of the parameters must be determined in a real-time production procedure under demanding time constraints. Therefore, it is necessary to consider transformations of the points that will simplify the estimation procedure. The distance of a point (u, v) on an ellipse from the origin is given by Equation E3: R = √(u² + v²), and the angle that a line from the point to the center makes with the horizontal axis is β = tan⁻¹(v/u). It is possible to calculate the radius and angle values for each point in a data set. This corresponds to a transformation from rectangular to polar coordinates: (ui, vi) → (Ri, βi), 1 ≤ i ≤ n. It is not considered good practice to estimate parameter values from individual data points, such as the locations of extreme values; such calculations are vulnerable to noise and outliers. A calculation should make use of all the available information in the complete set of data points. Accordingly, some analytical tools that use all the values are developed next. It can be shown from Equation E2 that u² + v² = x² + y², where (x, y) is the point corresponding to (u, v) on an ellipse with its major axis in the horizontal direction. From Equation E1 we can see that R² = a²cos²θ + b²sin²θ. By substituting the trigonometric identities cos²θ = 1/2 + (1/2)cos 2θ and sin²θ = 1/2 − (1/2)cos 2θ, the above can be written as R² = (a² + b²)/2 + ((a² − b²)/2) cos 2θ. This indicates that the average value of R² and the amplitude of the variation of R² must be important in determining the values of the parameters (a, b). Therefore the intermediate parameters of Equation E4 are defined: C = (a² + b²)/2 and M = (a² − b²)/2, which correspond to the average value and the amplitude of R². It is known from the definition of R given in Equation E3 that the value of R² for each data point (ui, vi) can be calculated. If it is possible to use the values of Ri² to calculate the values of the parameters C and M, then a and b are found from a = √(C + M) and b = √(C − M). The angle φ corresponds to the phase angle of a sinusoid fitted to a plot of the squared radius Ri² versus the angle βi for the edge points that lie on or near the elliptical contour of the contact lens. It is well known that the plot of R² against β passes through two complete cycles over the range −180° < β ≤ 180°. Accordingly, the values are fitted with a curve of the form of Equation E5: fi = C1 cos 2βi + C2 sin 2βi + C3, where fi is the value obtained by substituting βi into the equation. It is desired to choose the values of the coefficients so that fi agrees as closely as possible with the actual value Ri² = ui² + vi². If the values of the coefficients {C1, C2, C3} can be found from the data, then the parameters can be found. The necessary relationships can be found by expressing Equation E5 as a constant term plus a single sinusoid.
This can be expressed in amplitude-angle form by defining A = √(C1² + C2²) and γ = (1/2) tan⁻¹(C2/C1). Then fi = A cos[2(βi − γ)] + C3. Now it is possible to relate the parameters of this equation to those of the plot of the squared radius R² against the angle β. The amplitude A corresponds to the amplitude M, and C3 corresponds to C of Equation E4. Also, γ is the angle at which the function reaches its maximum, and therefore it is equal to the rotation angle φ. This means that it is possible to calculate all the parameters {a, b, φ} once the values of {C1, C2, C3} are found. Explicitly, Equation E6: a = √(C3 + √(C1² + C2²)), b = √(C3 − √(C1² + C2²)), φ = (1/2) tan⁻¹(C2/C1).
It remains only to determine how to calculate the coefficients from the data.
One starts by calculating (Ri², βi) for each point (ui, vi) in the data set. Knowing βi at each point, fi can be calculated as a function of the coefficients. Therefore, if the coefficient values were known, the error between the model and the observed data could be calculated as e = Σ (Ri² − fi)², with the sum taken over i = 1 to n. To find the best values for the coefficients, we require ∂e/∂Cj = 0 for 1 ≤ j ≤ 3. This results in three equations in three unknowns, which can be written in matrix form as AC = B. All terms in the matrices A and B can be calculated from the observed data. The values of the coefficients {C1, C2, C3} can then be found by solving this set of equations. The matrix elements are obtained from the data points by means of the following calculations: A11 = Σ cos²2βi, A12 = A21 = Σ cos 2βi sin 2βi, A13 = A31 = Σ cos 2βi,
A22 = Σ sin²2βi, A23 = A32 = Σ sin 2βi, A33 = n, and B1 = Σ pi cos 2βi, B2 = Σ pi sin 2βi, where pi = Ri² and all sums run from i = 1 to n. The functions cos 2βi and sin 2βi can be calculated directly from the coordinate values of the data points without first calculating the angles. This is based on the fact that cos βi = ui/√(ui² + vi²) and sin βi = vi/√(ui² + vi²). The double-angle formulas then give cos 2βi = cos²βi − sin²βi = (ui² − vi²)/(ui² + vi²) and sin 2βi = 2 sin βi cos βi = 2uivi/(ui² + vi²). If a pair of parameters is defined as gi = (ui² − vi²)/(ui² + vi²) and hi = 2uivi/(ui² + vi²), then all the equations for the matrix elements can be expressed simply in terms of these parameters.
B3 = Σ pi. Finally, the solutions for the unknown coefficients can be found from C = A⁻¹B. The major and minor axes, as well as the angle, can then be calculated from the coefficient values using Equation E6. The ellipsoidal edge-fitting procedure does not require that the points on the edge of the lens be stored in any particular order. All points in the L, R, T and B sets that were found in the original edge search can be used. The values of (gi, hi, pi) are calculated for each point in the set, and the results are then combined in accordance with the previous equations to find the parameters.
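The whole closed-form fit condenses into a short routine. The sketch below follows the derivation above: it accumulates (gi, hi, pi) into the 3x3 normal equations, solves for {C1, C2, C3}, and recovers {a, b, φ} via Equation E6. The use of atan2 and of Cramer's rule, and the assumption of non-degenerate input points away from the origin, are implementation choices of this sketch, not of the patent.

    // Closed-form ellipse fit from edge points (u, v) relative to the center.
    #include <vector>
    #include <cmath>

    struct Ellipse { double a, b, phi; };

    Ellipse fitEllipse(const std::vector<double>& u, const std::vector<double>& v) {
        double A[3][3] = {{0}}, B[3] = {0};
        for (size_t i = 0; i < u.size(); ++i) {
            double q = u[i] * u[i] + v[i] * v[i];        // p = R^2
            double g = (u[i] * u[i] - v[i] * v[i]) / q;  // cos 2(beta)
            double h = 2.0 * u[i] * v[i] / q;            // sin 2(beta)
            double row[3] = {g, h, 1.0};
            for (int j = 0; j < 3; ++j) {                // accumulate A C = B
                for (int k = 0; k < 3; ++k) A[j][k] += row[j] * row[k];
                B[j] += q * row[j];
            }
        }
        // Solve the 3x3 system by Cramer's rule
        auto det3 = [](double m[3][3]) {
            return m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
                 - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
                 + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]);
        };
        double d = det3(A), C[3];
        for (int c = 0; c < 3; ++c) {
            double M[3][3];
            for (int j = 0; j < 3; ++j)
                for (int k = 0; k < 3; ++k) M[j][k] = (k == c) ? B[j] : A[j][k];
            C[c] = det3(M) / d;
        }
        double amp = std::hypot(C[0], C[1]);             // sqrt(C1^2 + C2^2)
        return { std::sqrt(C[2] + amp), std::sqrt(C[2] - amp),
                 0.5 * std::atan2(C[1], C[0]) };         // Equation E6
    }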
System flowcharts As already explained with respect to Figure 1, a computer 64 operates the apparatus that obtains the images of the cuvettes passing on the star wheel 1 and analyzes the images to detect lens defects that will cause rejection of the lens in manufacturing. The computer operates with a program that is described below with flowcharts. The computer program is preferably written in the C++ programming language. The program steps necessary to carry out the functions described in the flowcharts will be evident to those skilled in programming. Figure 31 is a flowchart of the functional steps for obtaining images of each contact lens as it moves around the star wheel in the manufacturing process. As shown in block 95, the moisture on the bottom of the cuvette is dried by an air jet when the cuvette moves to a start position 29, as shown in Figure 1. The star wheel 1 then moves forward one position, so that the cuvette moves to the first camera position 11, and a first image is taken of the contact lens in solution in the cuvette. Subsequently, the moisture on the bottom of the next cuvette is dried at the start position 29, the star wheel moves forward one position, and the cuvette that was in the first position moves to the second position, the tape 45 rotating it as it moves. A first image is then taken of the cuvette that is now placed in the first camera position 11. As illustrated in the flowchart, the described operation continues until a count of ten, at which time the cuvette whose first image was taken at the beginning has moved to the second camera position 13 illustrated in Figure 1. As shown in box 97 of Figure 31, the second camera takes a second image of the cuvette in the tenth position 13. Subsequently, the first and second images of the lens and cuvette in the tenth position are analyzed by the computer program and a pass/fail determination is made for the lens. The lens then moves to the accept or reject conveyor, depending on the result of the computer defect analysis. The procedure continues as the cuvettes and contact lenses move around the star wheel. The procedure is efficient because two imaging stations allow continuous movement of the cuvettes on the star wheel. The number of cells between the first and second image positions was set at ten in order to allow enough time for the rotational movement of the cuvette, the lens and the solution to stop before the second image is taken. The system is not limited to any particular number of optical inspection stations or lens positions between optical inspection stations. Indeed, a single optical inspection station could be used to take the first and second images for each cuvette. In that case there would need to be a time lag between the first and second images to allow the movement of the solution and the lens to stop before the second image is taken. The use of two separate optical inspection stations is preferred because it provides continuous movement of the cuvettes and thereby increases the throughput of the system.
Figure 32 illustrates a flowchart of the program functions that combine the first and second images of a cuvette and lens to provide a resulting image that excludes alterations caused by contaminants in the lens solution and by scratches or marks on the cuvette. With respect to box 98 of Figure 32, the first and second images are accessed for each cuvette as explained with respect to Figure 31. The light intensity information for these images is then normalized to compensate for variations in ambient light. The center of each image is then located in the manner described below, and the light intensity data are represented in polar coordinates that are stored in a rectangular S-matrix. Although alterations could be eliminated from the entire lens image, that computational effort is not absolutely necessary. In practice it has been found that only certain areas of the lens need to be corrected to eliminate alterations that are not defects. It has been found that areas such as the optical zone, the logo and the color-printed iris can be analyzed effectively by first eliminating the alterations. As illustrated in box 99, the program therefore requires that a particular area of the second image be selected for elimination of alterations. The pixels in this area are transformed, for example by the use of an affine mathematical transformation, to register the pixels of the second image with the corresponding pixels of the first image. As shown in decision box 101, the program compares each pair of pixels of the first and second images and generates a pixel of a resulting image that has the same intensity as the brighter of the compared pixels. The resulting image therefore eliminates any alteration that changed position with respect to the lens. Moving alterations, which are not defect features, are removed from the resulting image. The resulting image is then analyzed to detect defects. Figure 33 illustrates a flowchart of the computer functions that are used to test the optical zone in the center of the lens. As shown in box 103, the S-matrix is accessed for the N overlapping inspection regions of the optical zone. The brightness deviation BD is calculated over each region. The brightness deviation BD for each region is determined by the degree to which the dark features in the region vary from an average light intensity measured over the region. As shown in block 105, a high threshold T2 and a low threshold T1 are set to assign a score to the brightness deviation of each region. Subsequently, as shown in decision box 107, the brightness deviation score for each overlapping region is set to 0 if the brightness deviation is less than or equal to T1. If the brightness deviation for the region is greater than T1 and less than or equal to T2, the deviation score is set to 2. If the brightness deviation is greater than T2, the deviation score is set to 6. A score is assigned in this way for every inspection region in the optical zone and, as shown in box 109, the scores are summed; the lens is rejected if the sum is greater than or equal to 5, and passes the optical zone test if the sum of the region deviation scores is less than 5.
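A minimal sketch of this scoring rule follows, taking the per-region brightness deviations and the two thresholds as inputs; the function name is illustrative.

    // Optical-zone scoring per figure 33: each region's brightness deviation
    // maps to a score of 0, 2 or 6; the lens fails if the scores sum to 5+.
    #include <vector>

    bool opticalZonePasses(const std::vector<double>& brightnessDeviation,
                           double T1, double T2) {
        int total = 0;
        for (double bd : brightnessDeviation) {
            if (bd > T2)      total += 6;   // strong dark feature
            else if (bd > T1) total += 2;   // borderline feature
            // bd <= T1 scores 0
        }
        return total < 5;                   // reject when the sum reaches 5
    }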
Figure 34 illustrates the steps of the functional program that are used to analyze the ink pattern of the iris for unacceptable spaces. As shown in box 111, the resulting S-matrix, corrected for alterations, is accessed for the annular color ink pattern of the iris. A threshold T is then set 20 brightness steps below the average brightness at the center of the optical zone of the lens. A matrix U is then calculated from the threshold T and the matrix S. Each light intensity value of the matrix S is set to 1 and copied to the matrix U if it is greater than or equal to T, and is set to 0 and copied to the matrix U if it is less than T. The matrix U therefore contains a relatively high-contrast image of the ink pattern. The threshold T is selected to enhance the features of the ink pattern for inspection. An inspection box is then defined with an area corresponding to the smallest transparent space in the ink pattern that would result in rejection of the lens. A brightness threshold BT is then set for the inspection box. Subsequently, the box is swept over the matrix U and, if the sum of the brightness of the pixels in the inspection box is greater than the brightness threshold BT, the lens is rejected. If the sum of the brightness of the pixels in the inspection box is equal to or less than the brightness threshold BT, the sweeping of the box continues. If the box sweeps the entire matrix U without finding a sum of brightness pixels greater than the brightness threshold BT, the lens passes the ink pattern test. Figures 35a and 35b illustrate the steps of the functional program for analyzing the characters of the logo that is printed on the contact lens. As shown in box 113 of Figure 35a, the resulting S-matrix for the lens is accessed to provide data from which the alterations have been eliminated. A very high contrast threshold T is set, and a U-matrix is computed in the manner described above to provide a very high contrast image that shows essentially only the dark printed logo area of the lens. As illustrated in box 115 of Figure 35a, the logo is analyzed by selecting as the first character the largest contiguous set of dark pixels in the matrix U. A box B(1) is then measured to enclose this character. Subsequently, successive characters of decreasing size are selected, and boxes containing these characters are defined, until all characters have been detected. In the described embodiment only two characters, W and J, are detected. As shown beginning in decision box 117 of Figure 35b, selected geometric characteristics of each character box are verified. The lens is rejected if any of the required geometric characteristics is not met for a character. If all the characters have the required geometric characteristics, the lens passes. Figure 36 illustrates the functional steps required by the computer program to detect defects at the edge or contour of the lens. As illustrated in box 119, the S-matrix is accessed for the first and second images. The edge data for each image are defined by a dark, almost horizontal line of the S-matrix. These edge data are located in each S-matrix, and the center of the lens is determined for each image. The edge data can be located first by applying an ellipse approximation, and the most distant data points can be removed as described above.
Subsequently, tenth-order polynomials are defined to model the contour of each image in four overlapping segments. No particular polynomial order or number of overlapping segments is required; however, tenth order and four overlapping segments have proved adequate in experimental tests. The polynomial edge model for each image is then compared with the actual edge data, and profile numbers are generated to indicate any mismatch between the model and the contour data that could indicate a possible defect. The locations of possible defects in the first and second images are then compared to determine whether a mismatch occurs in the same area of each image. If there is a mismatch in the same area, the apparent defect is counted, and the location of mismatches, generation of profile numbers and comparison of apparent defects continues until all apparent defects have been found. If the number of apparent defects exceeds a predetermined threshold, the lens is rejected. The lens passes if it has no defects, or fewer defects than the predetermined threshold. Figure 37 illustrates the program functions that are necessary to locate the center and the edge or contour of the lens. As shown in box 121, the light intensity data from the charge-coupled devices of the cameras are scanned for each lens image from top and bottom and from left and right in search of dark pixels. Scanning inward from the dark edge of the image, the program detects where the light intensity of the image falls from a high intensity to a low intensity. This sharp fall in intensity locates the edge of the contour. As the scan continues in the same direction, the program detects a sharp rise in intensity from a low to a high value. The program selects "a" as the darkest point on the detected edge of the lens. The scan continues on the same horizontal line from the opposite edge of the image, and a point "b" is generated in the same way at the darkest point on the opposite edge of the lens. A plurality of pairs of contour points is thus obtained from the left and right sides and from the upper and lower sides of the lens for both lens images 1 and 2. This method locates points on or adjacent to the contour of the lens. The midpoint of each line a-b is calculated to define a number of lines that pass through the center of the lens. The location of the center of the lens is estimated from the average of the intersections of these lines. This estimate can be improved by removing the most distant points, for example by polynomial or ellipsoidal edge modeling. Although it is preferred to program the functions of the lens inspection system in the C++ language and in separate subroutines, as illustrated, to facilitate changes and corrections, the scope of the invention is not limited to any particular programming language or subroutine arrangement. The invention includes within its scope any programming language or subroutine arrangement that could be used to achieve the functions and features of the lens inspection system of the invention. The described models of manufacturing equipment, such as cameras, strobes, cuvettes, diffusers and collimators, could be changed without departing from the invention. For example, cuvettes or other types of lens holders of different shapes, and with or without lower lens surfaces, could be used without departing from the invention.
Also, the invention is not limited to the use of adhesive tape to rotate the cuvettes or other lens holders, or to the use of two cameras. One camera, or more than two cameras, could be used to take two or more lens images. The system of the invention can also be used to inspect lenses other than soft contact lenses. For example, hard contact lenses or rigid lenses of any type could be inspected by the system of the invention, with or without suspension solutions. The invention may therefore be embodied in other specific forms than those described without departing from its spirit or essential characteristics. These embodiments are accordingly to be considered in all respects as illustrative and not restrictive.

Claims (88)

NOVELTY OF THE INVENTION CLAIMS
1.- A method for automatically inspecting a plurality of lenses, comprising the steps of: placing each lens in a lens holder; taking a first image of each lens in the lens holder; rotating each lens holder; taking a second image of each lens in the lens holder that was rotated; registering each pixel of the second image with the corresponding pixel of the first image; comparing the pixels of at least a portion of each first image and the registered pixels of the corresponding portion of the second image; recognizing as lens defects the defective characteristics of the first image having corresponding defective characteristics in the second registered image; and rejecting as defective any lens having a predetermined number of recognized defects in the lens.
2. The method according to claim 1, further characterized in that the recognition step includes the generation of a resulting image with pixels having the same luminous intensity as the brighter of the two corresponding compared pixels.
3. The method according to claim 1, further characterized in that it includes the step of obtaining the first and second images with two separate optical inspection stations, each station using a strobe light to momentarily illuminate the lens and the lens holder and a digital camera to record an image of the lens and the lens holder.
4. The method according to claim 1, further characterized in that it includes the step of obtaining the first and second images with a single optical inspection station that: uses a strobe light to momentarily illuminate the lens and the lens holder; records the first image with a digital camera; rotates the lens holder; uses the strobe light to momentarily illuminate the displaced lens and lens holder; and records the second image with the digital camera.
5. The method according to claim 1, further characterized in that it includes the step of normalizing the light intensity of the first and second images to correct for variations in ambient light.
6. The method according to claim 1, further characterized in that it includes the step of providing a digital representation of the light intensity at each pixel of said images, locating the center of each image and storing the polar-coordinate light intensity data for each image in an S-matrix, where the data are stored in fixed-angle columns and fixed-radius rows.
7. The method according to claim 1, further characterized in that it includes the steps of: defining a plurality of light detection regions in the optical zone of the lens; determining the deviation in brightness across each region; assigning a numerical score to each region based on the magnitude of the deviation in brightness across the region; and rejecting the lens when the sum of the scores exceeds a predefined threshold.
8. The method according to claim 1, further characterized in that it includes the steps of: determining the light intensity of each pixel of a predefined portion of each image; defining a light intensity threshold to improve the contrast of the predefined portion; setting the pixels with light intensities equal to or greater than the threshold to "1" and those with light intensities below the threshold to "0"; and analyzing the resulting pixel values to detect areas of excessive brightness.
9. The method according to claim 8, further characterized in that the step for defining a light intensity threshold includes setting the threshold 20 brightness steps lower than the average brightness at the center of the optical zone of the lens.
10. The method according to claim 8, further comprising steps for analyzing an annular printed iris area of a lens, the steps including: defining an inspection box having a cross-sectional area corresponding in size to the smallest blank space in the printed iris that would cause lens rejection; sweeping the inspection box over a matrix of said resulting pixel values for the printed iris area; and rejecting the lens if the sum of the resulting pixel values in the inspection box is greater than a predefined inspection box threshold.
11. The method according to claim 10, further characterized in that the predefined threshold of the inspection box is 80% of the size of the inspection box. 12. The method according to claim 8, further characterized in that it includes steps for analyzing characters printed on the lens, the steps including: a) defining the light intensity threshold so as to separate the relatively dark printed characters from other dark patterns on the lens; b) forming a character matrix of the resulting pixel values for the area of said printed characters; c) selecting the first character as the largest contiguous set of dark pixels in the character matrix; d) calculating the corners and the center of a box containing the first character; e) selecting the next character as the next largest contiguous set of dark pixels in the character matrix; f) calculating the corners and the center of a box containing that next character; g) repeating steps (e) and (f) until all the characters have been selected; and h) evaluating at least one geometric characteristic of at least one of the boxes in relation to at least one acceptance specification to determine whether the characters are acceptable. 13. The method according to claim 12, further characterized in that the geometric feature includes the distance between the centers of at least two character boxes. 14. The method according to claim 12, further characterized in that the geometric feature includes the width of at least one of the boxes. 15. The method according to claim 12, further characterized in that the geometric feature includes the height of at least one of the boxes. 16. The method according to claim 12, further characterized in that the geometric feature includes the aspect ratio of the height and width of at least one of the boxes. 17. The method according to claim 12, further characterized in that the geometric feature includes the area of at least one of the boxes. 18. The method according to claim 12, further characterized in that the geometric feature includes the location of the center of at least one of the boxes. 19. The method according to claim 1, further characterized in that it includes the steps of: generating a mathematical model of the edge of the lens; comparing the measured light intensities of the pixels adjacent to the edge of the first and second images with the light intensities of the model at predefined angles around the edges of the images; generating an edge profile number for each angular edge comparison that is representative of how well the measured edge profile of the image compares with the model; locating the points on the edge profiles of the first and second images that have numbers representative of a mismatch; and rejecting the lens if a predefined number of mismatches for the first image correspond in position to mismatches for the second image. 20. The method according to claim 19, further characterized in that the generating step includes the generation of a polynomial model of the edge of the lens. 21. The method according to claim 19, further characterized in that the generating step includes the generation of an ellipsoidal model of the edge of the lens. 22.
A method for automatically inspecting a lens, comprising the steps of: placing at least one lens in solution in a lens holder; obtaining a first image of the lens; moving the lens, the solution and the lens holder with respect to one another; obtaining at least a second image of the lens; comparing the pixels of the first image with the corresponding registered pixels of the second image; and recognizing a defect in the lens only if a plurality of pixels of the first image and the corresponding plurality of registered pixels of the second image are dark. 23. A method for automatically inspecting a lens, comprising the steps of: placing a lens in solution in a lens holder; taking a first image of the lens in said holder; rotating the holder, lens and solution; taking a second image of the lens in said holder; comparing portions of the first image with the corresponding registered portions of the second image; and providing a resulting image that eliminates the differences between the first and second images. 24. A method for automatically inspecting at least one lens, comprising the steps of: a) placing a lens in solution in a lens holder; b) taking a first image of the lens in the holder; c) moving the holder, the lens and the solution; d) taking an image of the lens in the holder after the movement has stopped; e) repeating steps (c) and (d) to take a predetermined number of additional images of the lens in the holder; f) comparing at least a portion of the first image with the corresponding registered portions of each of the additional images; and g) recognizing as defects in the lens the defective characteristics of the first image having corresponding defective characteristics in the additional registered images. 25. A method for automatically inspecting a lens, comprising the steps of: placing the lens in solution in a lens holder; taking a first image of the lens in said holder; rotating the holder, lens and solution; taking a second image of the lens in the holder; comparing the light intensity of pixels of at least a portion of the first image with the light intensity of pixels of a corresponding portion of the second image; and providing a resulting image with pixels having the same light intensity as the brighter of each corresponding pair of compared pixels. 26. A method for automatically inspecting a lens, comprising the steps of: placing a lens in solution in a lens holder; taking a first image of the lens in the lens holder; providing relative movement between the lens and the lens holder; taking a second image of the lens and the lens holder; registering each pixel of the second image with the corresponding pixel of the first image; comparing the pixels of at least a portion of the first image and the registered pixels of the corresponding portion of the second image; and recognizing as defects in the lens the defective characteristics of the first image having corresponding defective characteristics in the second registered image. 27. The method according to claim 26, further characterized in that the recognition step includes the generation of a resulting image with pixels having the same light intensity as the brighter of the two corresponding compared pixels. 28. The method according to claim 26, further characterized in that it includes the step of obtaining the first and second images with two separate optical inspection stations, each station using a strobe light to momentarily illuminate the lens and the holder and a digital camera to record an image of the lens and the lens holder. 29.
The method according to claim 26, further characterized in that it includes the step of obtaining the first and second images with a single optical inspection station that: momentarily illuminates the lens and the holder; records the first image with a digital camera; rotates the holder; momentarily illuminates the displaced lens and holder; and records the second image with the digital camera. 30. The method according to claim 26, further characterized in that it includes the step of normalizing the light intensity of the first and second images to correct for variations in ambient light. 31. The method according to claim 26, further characterized in that it includes the step of providing a digital representation of the light intensity at each pixel of the images, locating the center of each of the images, and storing the polar-coordinate light intensity data for each image in an S-matrix, where the data are stored in fixed-angle columns and fixed-radius rows. 32. The method according to claim 26, further characterized in that it includes the steps of: defining a plurality of light-sensing regions in the optical zone of the lens; determining the deviation in brightness across each of the regions; assigning a numerical score to each region based on the magnitude of the deviation in brightness across the region; and rejecting the lens when the sum of the scores exceeds a predefined threshold. 33. The method according to claim 26, further characterized in that it includes the steps of: determining the light intensity of each pixel of a predefined portion of each of the images; defining a light intensity threshold to improve the contrast of the predefined portion; setting the pixels with light intensities equal to or greater than the threshold to "1" and those with light intensities below the threshold to "0"; and analyzing the resulting pixel values to detect areas of excessive brightness. 34.- The method according to claim 33, further characterized in that the step of defining a light intensity threshold includes setting the threshold 20 brightness steps lower than the average brightness at the center of the optical zone of the lens. 35.- The method according to claim 33, further characterized in that it includes steps for analyzing an annular printed iris area of a lens, the steps including: defining an inspection box having a cross-sectional area corresponding in size to the smallest blank space in said printed iris that would cause rejection of the lens; sweeping the inspection box over a matrix of the resulting pixel values for the printed iris area; and rejecting the lens if the sum of the resulting pixel values in the inspection box is greater than a predefined inspection box threshold. 36. The method according to claim 35, further characterized in that the predefined threshold of the inspection box is 80% of the size of the inspection box.
37. The method according to claim 33, further characterized in that it includes steps for analyzing characters printed on the lens, the steps comprising: a) defining the luminous intensity threshold to separate the relatively dark printed characters from other dark patterns in the lens; b) forming a character matrix of the resulting pixel values for the area of the printed characters; c) selecting the first character as the largest contiguous set of dark pixels in the character matrix; d) calculating the corners and the center of a box containing the first character; e) selecting the next character as the next largest contiguous set of dark pixels in the character matrix; f) calculating the corners and the center of a box containing that next character; g) repeating steps (e) and (f) until all the characters have been selected; and h) evaluating at least one geometric feature of at least one of the boxes against at least one acceptance specification to determine whether the characters are acceptable.

38. The method according to claim 37, further characterized in that the geometric feature includes the distance between the centers of at least two character boxes.

39. The method according to claim 37, further characterized in that the geometric feature includes the width of at least one of the boxes.

40. The method according to claim 37, further characterized in that the geometric feature includes the height of at least one of the boxes.

41. The method according to claim 37, further characterized in that the geometric feature includes the aspect ratio of the height and width of at least one of the boxes.

42. The method according to claim 37, further characterized in that the geometric feature includes the area of at least one of the boxes.

43. The method according to claim 37, further characterized in that the geometric feature includes the location of the center of at least one of the boxes.

44. The method according to claim 26, further characterized in that it includes the steps of: generating a polynomial model of the edge of the lens; comparing the measured luminous intensities of the pixels adjacent to the edge of the first and second images with the luminous intensities of the model at predefined angles around the edges of the images; generating an edge profile number for each angular edge comparison that is representative of how well the measured edge profile of the image matches the model; locating the points of the edge profiles for the first and second images that have numbers representative of a mismatch; and rejecting the lens if a predefined number of mismatches for the first image correspond in position to mismatches for the second image.

45. A method for automatically inspecting a plurality of lenses, comprising the steps of: obtaining at least one image of each lens; generating a polynomial model of the edge of the lens; comparing the measured luminous intensities of the pixels adjacent to the edge of each of the images with the luminous intensities of the model at predefined angles around the edges of the image; generating an edge profile number for each angular edge comparison that is representative of how well the measured edge profile of the image matches the model; locating the edge profile points for the image that have numbers representative of a mismatch; and rejecting the lens if the mismatches exceed a predefined number of mismatches.
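One way to realize the edge test of claims 44 and 45 is to extract the measured edge radius at each predefined angle (for instance from the S-matrix columns), fit a low-order polynomial as the edge model, and treat the per-angle residual as the edge profile number. The degree, the tolerance and the treatment of "corresponds in position" as equality of angular index below are assumptions for illustration:

```python
import numpy as np

def edge_mismatch_angles(edge_radius, degree=4, tolerance=2.0):
    """Return the indices of predefined angles where the measured edge
    departs from a polynomial edge model (claims 44-45).  edge_radius[k]
    is the measured edge radius in pixels at the k-th angle."""
    angles = np.linspace(0.0, 2.0 * np.pi, edge_radius.size, endpoint=False)
    coeffs = np.polyfit(angles, edge_radius, degree)     # polynomial edge model
    residual = np.abs(edge_radius - np.polyval(coeffs, angles))
    return np.flatnonzero(residual > tolerance)          # mismatch points

def reject_by_edge(mismatch_first, mismatch_second, predefined_number=3):
    """Claim 44: reject only when enough mismatches of the first image
    coincide in angular position with mismatches of the second image."""
    coincident = np.intersect1d(mismatch_first, mismatch_second)
    return coincident.size >= predefined_number
```

Requiring the mismatch to persist at the same angle in both images plays the same artifact-suppression role for the edge as the brighter-pixel merge does for the lens interior.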
46. The method according to claim 45, further characterized in that it includes the steps of: defining a plurality of light-sensing regions in the optical zone of the lens; determining the deviation in brightness across each region; assigning a numerical score to each region based on the magnitude of the deviation in brightness across the region; and rejecting the lens when the sum of the scores exceeds a predefined threshold.

47. The method according to claim 45, further characterized in that it includes the steps of: determining the luminous intensity of each pixel of a predefined portion of each of the images; defining a luminous intensity threshold to improve the contrast of the predefined portion; setting pixels with luminous intensities equal to or greater than the threshold to "1" and pixels with luminous intensities less than the threshold to "0"; and analyzing the resulting pixel values to detect areas of excessive brightness.

48. The method according to claim 47, further characterized in that the step of defining a luminous intensity threshold includes setting the threshold 20 brightness steps below the average brightness at the center of the optical zone of the lens.

49. The method according to claim 47, further characterized in that it includes steps for analyzing an annular iris area printed on a lens, the steps including: defining an inspection box having a cross-sectional area corresponding in size to the smallest transparent space in the printed iris that would cause rejection of the lens; scanning the inspection box over a matrix of the resulting pixel values for the printed iris area; and rejecting the lens if the sum of the resulting pixel values in the inspection box is greater than a predefined inspection box threshold.

50. The method according to claim 49, further characterized in that the predefined inspection box threshold is 80% of the size of the inspection box.

51. The method according to claim 47, further characterized in that it includes steps for analyzing characters printed on the lens, the steps including: a) defining the luminous intensity threshold to separate the relatively dark printed characters from other dark patterns in the lens; b) forming a character matrix of the resulting pixel values for the area of the printed characters; c) selecting the first character as the largest contiguous set of dark pixels in the character matrix; d) calculating the corners and the center of a box containing the first character; e) selecting the next character as the next largest contiguous set of dark pixels in the character matrix; f) calculating the corners and the center of a box containing that next character; g) repeating steps (e) and (f) until all the characters have been selected; and h) evaluating at least one geometric feature of at least one of the boxes against at least one acceptance specification to determine whether the characters are acceptable.

52. The method according to claim 51, further characterized in that the geometric feature includes the distance between the centers of at least two character boxes.

53. The method according to claim 51, further characterized in that the geometric feature includes the width of at least one of the boxes.

54. The method according to claim 51, further characterized in that the geometric feature includes the height of at least one of the boxes.
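The character analysis of claims 37-43 and 51-57 amounts to connected-component labelling followed by bounding-box geometry checks. A sketch using scipy's labelling follows; the dictionary of box features is an illustrative representation, not the patent's data structure:

```python
import numpy as np
from scipy import ndimage

def character_boxes(dark_mask, n_chars):
    """Select the n_chars largest contiguous sets of dark pixels and return
    the corners, center, width and height of a box around each one
    (claims 37-43 / 51-57).  dark_mask is True where a pixel fell below
    the character threshold."""
    labels, count = ndimage.label(dark_mask)
    sizes = ndimage.sum(dark_mask, labels, index=np.arange(1, count + 1))
    boxes = []
    for lbl in np.argsort(sizes)[::-1][:n_chars] + 1:   # largest component first
        ys, xs = np.nonzero(labels == lbl)
        top, bottom, left, right = ys.min(), ys.max(), xs.min(), xs.max()
        boxes.append({
            "corners": (top, left, bottom, right),
            "center": ((top + bottom) / 2.0, (left + right) / 2.0),
            "width": int(right - left + 1),
            "height": int(bottom - top + 1),
        })
    return boxes
```

Each geometric feature named in the dependent claims (center spacing, width, height, aspect ratio, area, center location) is then a one-line comparison of these box fields against the acceptance specification.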
55. The method according to claim 51, further characterized in that the geometric feature includes the aspect ratio of the height and width of at least one of the boxes.

56. The method according to claim 51, further characterized in that the geometric feature includes the area of at least one of the boxes.

57. The method according to claim 51, further characterized in that the geometric feature includes the location of the center of at least one of the boxes.

58. The method according to claim 45, further characterized in that it includes the step of normalizing the luminous intensity of each of the images to correct for variations in ambient light.

59. The method according to claim 45, further characterized in that it includes the steps of providing a digital representation of the luminous intensity of each pixel of each of the images, locating the center of each of the images, and storing the polar-coordinate luminous intensity data for each image in a rectangular S matrix.

60. A method for automatically inspecting a plurality of lenses, comprising the steps of: obtaining at least one image of each lens; providing a digital representation of the luminous intensity at each pixel of each of the images; locating the center of each of the images; storing the polar-coordinate luminous intensity data for each image in a rectangular S matrix; and using the data from the S matrix to analyze portions of each of the images for defects in the lens.

61. The method according to claim 60, further characterized in that the using step includes using the data to analyze the optical zone of each lens for defects.

62. The method according to claim 60, further characterized in that the using step includes using the data to analyze a logo of each lens for defects.

63. The method according to claim 60, further characterized in that the using step includes using the data to analyze a printed annular area of each lens for defects.

64. The method according to claim 60, further characterized in that the using step includes using the data to analyze the edge of each lens for defects.

65. The method according to claim 60, further characterized in that it includes the step of normalizing the luminous intensity of each of the images to correct for variations in ambient light.

66. A method for automatically inspecting a plurality of lenses, comprising the steps of: obtaining at least one image of each lens; defining a plurality of light-sensing regions in an optical zone portion of each of the images; determining the deviation in brightness across each of the regions; assigning a numerical score to each region based on the magnitude of the deviation in brightness across the region; and rejecting a lens when the sum of the scores of the regions of the image exceeds a predefined threshold.
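The optical-zone scoring of claims 32, 46 and 66 can be sketched directly on the S matrix: split the zone into light-sensing regions, measure the brightness deviation across each, and sum the per-region scores. The region count, the peak-to-peak deviation measure and the unit weight are assumptions; the claims require only a numerical score that grows with the deviation:

```python
import numpy as np

def optical_zone_score(zone, n_regions=24, weight=1.0):
    """Score the optical zone (claims 32/46/66): one angular region per
    split, scored by the magnitude of its brightness deviation."""
    total = 0.0
    for region in np.array_split(zone, n_regions, axis=1):  # angular sectors
        deviation = float(region.max()) - float(region.min())
        total += weight * deviation
    return total

# Rejection rule of the claims: reject when the summed score exceeds a
# predefined threshold, e.g.  if optical_zone_score(zone) > LIMIT: reject.
```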
67. A method for automatically inspecting a plurality of lenses, comprising the steps of: obtaining at least one image of each lens; determining the luminous intensity of each pixel of a predefined portion of each of the images; defining a luminous intensity threshold to improve the contrast of the predefined portion; setting pixels with luminous intensities equal to or greater than the threshold to "1" and pixels with luminous intensities below the threshold to "0"; and analyzing the resulting pixel values to detect areas of excessive brightness.

68. The method according to claim 67, further characterized in that the defining step includes setting the threshold 20 brightness steps below the average brightness at the center of the optical zone of the lens.

69. The method according to claim 67, further characterized in that it includes steps for analyzing an annular iris area printed on a lens, the steps including: defining an inspection box having a cross-sectional area corresponding in size to the smallest transparent space in the printed iris that would cause rejection of the lens; scanning the inspection box over a matrix of the resulting pixel values for the printed iris area; and rejecting the lens if the sum of the resulting pixel values in the inspection box is greater than a predefined inspection box threshold.

70. The method according to claim 69, further characterized in that the predefined inspection box threshold is 80% of the size of the inspection box.

71. The method according to claim 67, further characterized in that it includes steps for analyzing characters printed on the lens, the steps including: a) defining the luminous intensity threshold to separate the relatively dark printed characters from other dark patterns in the lens; b) forming a character matrix of the resulting pixel values for the area of the printed characters; c) selecting the first character as the largest contiguous set of dark pixels in the character matrix; d) calculating the corners and the center of a box containing the first character; e) selecting the next character as the next largest contiguous set of dark pixels in the character matrix; f) calculating the corners and the center of a box containing that next character; g) repeating steps (e) and (f) until all the characters have been selected; and h) evaluating at least one geometric feature of at least one of the boxes against at least one acceptance specification to determine whether the characters are acceptable.

72. The method according to claim 71, further characterized in that the geometric feature includes the distance between the centers of at least two character boxes.

73. The method according to claim 71, further characterized in that the geometric feature includes the width of at least one of the boxes.

74. The method according to claim 71, further characterized in that the geometric feature includes the height of at least one of the boxes.

75. The method according to claim 71, further characterized in that the geometric feature includes the aspect ratio of the height and width of at least one of the boxes.

76. The method according to claim 71, further characterized in that the geometric feature includes the area of at least one of the boxes.

77. The method according to claim 71, further characterized in that the geometric feature includes the location of the center of at least one of the boxes.
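Claims 67-70 combine the binary thresholding with the inspection-box scan over the printed iris: pixels at or above a threshold set 20 brightness steps below the mean brightness at the optical-zone center become "1", and the lens fails if any box position covers more than 80% ones. A direct, unoptimized sketch with illustrative names:

```python
import numpy as np

def iris_has_blank_spot(iris_pixels, center_mean, box_h, box_w):
    """Claims 67-70: binarize the printed-iris pixels and slide an
    inspection box sized to the smallest rejectable transparent gap;
    report failure when the box ever covers more than 80% ones."""
    threshold = center_mean - 20                # 20 brightness steps below center mean
    binary = (iris_pixels >= threshold).astype(np.int32)
    limit = 0.8 * box_h * box_w                 # 80% of the inspection box size
    rows, cols = binary.shape
    for i in range(rows - box_h + 1):
        for j in range(cols - box_w + 1):
            if binary[i:i + box_h, j:j + box_w].sum() > limit:
                return True
    return False
```

In production one would replace the nested window sums with an integral image so that each box position costs constant time, but the acceptance rule is unchanged.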
78. The method according to claim 67, further characterized in that it includes the steps of: generating a polynomial model of the edge of the lens; comparing the measured luminous intensities of the pixels adjacent to the edge of each of the images with the luminous intensities of the model at predefined angles around the edges of the images; generating an edge profile number for each angular edge comparison that is representative of how well the measured edge profile of the image matches the model; locating the points on the edge profiles for each of the images that have numbers representative of a mismatch; and rejecting a lens if the number of mismatches for the image exceeds a predefined number.

79. A method for automatically inspecting a plurality of lenses, comprising the steps of: placing each lens in solution in a cuvette; taking a first image of each lens in solution in the cuvette; rotating each cuvette together with the lens and the solution in the cuvette; taking a second image of each lens in solution in the rotated cuvette; normalizing the luminous intensity of each image; providing a digital representation of the luminous intensity at each pixel of the images; locating the center of each of the images; storing the polar-coordinate luminous intensity data for each image in a rectangular S matrix; comparing, by means of a transformation, the pixels of at least a portion of the S matrix of each first image with the transformed pixels of the corresponding portion of the S matrix of the second image; generating a resulting image for each lens whose pixels each have the luminous intensity of the brighter of the two corresponding compared pixels; and analyzing the resulting image for defects in an optical zone of each lens.

80. The method according to claim 79, further characterized in that it includes the step of analyzing the resulting image for defects in a logo pattern of each lens.

81. The method according to claim 79, further characterized in that it includes the step of analyzing the resulting image for defects in a color-printed area of each lens.

82. An apparatus for automatically inspecting lenses, comprising: a transparent cuvette for holding a contact lens in solution in a first position; means for moving the cuvette, the lens and the solution in relation to one another and stopping the movement in a second position; means for momentarily illuminating the lens and the cuvette in the first and second positions; means for recording a first image of the lens and the cuvette in the first position and a second image of the lens and the cuvette in the second position; means for comparing the first and second recorded images of each lens; and means for recognizing as defects in the lens those defective features of the first image that have corresponding defective features in the second image.

83. The apparatus according to claim 82, further characterized in that the recognition means include means for generating a resulting image from each pair of first and second images, the resulting image having pixels with the same luminous intensity as the brighter of the two corresponding compared pixels of the first and second images.

84. The apparatus according to claim 82, further characterized in that the means for momentarily illuminating include at least one strobe light.
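Claim 79 strings the earlier pieces into a single pass per lens. The sketch below reuses polar_intensity_matrix and optical_zone_score from the earlier sketches; find_center, the normalization to a fixed mean, and both constants are stand-ins, since the claim leaves those details open:

```python
import numpy as np

OPTICAL_ZONE_ROWS = 120   # assumed: inner S-matrix rows spanning the optical zone
SCORE_LIMIT = 900.0       # assumed: predefined rejection threshold

def find_center(image):
    """Stand-in center locator: centroid of the dark lens silhouette."""
    ys, xs = np.nonzero(image < image.mean())
    return float(ys.mean()), float(xs.mean())

def inspect_lens(img1, img2):
    """Claim 79 end to end: normalize both exposures, resample each into an
    S matrix about its own center, merge keeping the brighter pixel so that
    moving artifacts drop out, then score the optical-zone rows."""
    def normalize(img):
        # correct ambient-light variation by scaling to a common mean level
        return img * (128.0 / img.mean())

    s1 = polar_intensity_matrix(normalize(img1), find_center(img1))
    s2 = polar_intensity_matrix(normalize(img2), find_center(img2))
    merged = np.maximum(s1, s2)           # brighter-pixel comparison
    zone = merged[:OPTICAL_ZONE_ROWS]     # small radii = optical zone
    return optical_zone_score(zone) <= SCORE_LIMIT   # True means accept
```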
85. The apparatus according to claim 82, further characterized in that the means for momentarily illuminating include at least one strobe light to generate a flash of light; at least one diffuser to diffuse said flash of light; and at least one multi-hole collimator to collimate the diffused flash of light.

86. The apparatus according to claim 82, further characterized in that the means for recording include at least one digital camera.

87. The apparatus according to claim 82, further characterized in that the means for momentarily illuminating include two separate illumination stations, the first station associated with the first image and the second station associated with the second image, each station having a strobe light to generate a flash of light, a diffuser to diffuse said flash of light, and a multi-hole collimator to collimate the diffused flash of light.

88. The apparatus according to claim 82, further characterized in that the means for recording include two separate digital cameras, the first camera for recording the first image and the second camera for recording the second image of each lens.
MXPA/A/2000/004739A 1997-11-14 2000-05-15 Automatic lens inspection system MXPA00004739A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08971160 1997-11-14

Publications (1)

Publication Number Publication Date
MXPA00004739A (en) 2001-05-07

Similar Documents

Publication Publication Date Title
US6047082A (en) Automatic lens inspection system
CN110672617B (en) Method for detecting defects of silk-screen area of glass cover plate of smart phone based on machine vision
TWI698628B (en) System and method for inspection of wet ophthalmic lens
US7256881B2 (en) Systems and methods for inspection of ophthalmic lenses
JP5932110B2 (en) Method and apparatus for detecting / identifying rough inclusion
US5500732A (en) Lens inspection system and method
US8254659B2 (en) Method and apparatus for visually inspecting an object
US7684034B2 (en) Apparatus and methods for container inspection
CN107255641A (en) A kind of method that Machine Vision Detection is carried out for GRIN Lens surface defect
EP0486219A2 (en) Automated evaluation of painted surface quality
US11836912B2 (en) Grading cosmetic appearance of a test object based on multi-region determination of cosmetic defects
HUT65808A (en) A method for testing quality of an ophthalmic lens
CN105466953A (en) Steel ball surface defect detecting method based on reorganization of steel ball surface reflection pattern integrity
HUT65842A (en) Arrangement for testing ophtalmic lens
EP4217704A1 (en) Systems and methods for automatic visual inspection of defects in ophthalmic lenses
JPS62502358A (en) Panel surface inspection method and device
CA2151355A1 (en) System and method for inspecting lenses
MXPA00004739A (en) Automatic lens inspection system
CN116237266A (en) Flange size measuring method and device
CN116148277B (en) Three-dimensional detection method, device and equipment for defects of transparent body and storage medium
CN111344553B (en) Method and system for detecting defects of curved object
Guo et al. On the detection of defects on smooth free metallic paint surfaces
Killing Design and development of an intelligent neuro-fuzzy system for automated visual inspection