GB2408323A - Training an image-based inspection system - Google Patents
- Publication number
- GB2408323A (application GB0423390A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- training
- image data
- web
- images
- processes
- Prior art date
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/89—Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/93—Detection standards; Calibrating baseline adjustment, drift correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30124—Fabrics; Textile; Paper
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Textile Engineering (AREA)
- Health & Medical Sciences (AREA)
- Quality & Reliability (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- General Health & Medical Sciences (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
Abstract
A method (Figures 11-12) and system for training an inspection system. The method comprises the steps of: capturing images of a material, for example a web 22 or a sheet of material, using one or more cameras 20; auto-training one or more of a plurality of image processing techniques using the captured images; and using the trained image processing techniques to identify faults. The material is visually inspected to identify faults, and one or more parameters associated with one or more processes is varied to improve its fault detecting capability. The auto-training step may involve identifying and removing data associated with defects from the image data to provide new image data of non-defective material, and then determining one or more fault detection thresholds based on the new image data. Images of a plurality of materials may be captured, and the sensitivity of each process may be trained or optimised on a per-material basis. The material images may be recorded for replaying and inspection at a later stage.
Description
An On-line Inspection System
The present invention relates to an
on-line inspection system and method for inspecting material, for example webs or sheets of material, to ensure that quality standards can be met.
In the manufacture of webs of material, it is important to be able to detect flaws or faults in the finished product in order to ensure that quality specifications set by customers can be met. Currently, in many applications, web inspection is primarily done visually by highly skilled inspectors at the end of a production run. For large scale manufacturing processes, material can be produced at a rate of as much as 30-500 metres per minute. In contrast, the overall manual inspection rate for, say, laminated fabrics is significantly lower. Hence, the manual inspection stage causes a serious bottleneck and reduces the overall manufacturing rate.
To overcome the problems associated with manual inspection, some automatic web inspection systems have been proposed. To produce the same or better inspection results as would be provided by manual inspection, automatic web inspection systems have to be trained. For some manufacturing facilities there may be hundreds or thousands of different material types, which require individual settings for the inspection system to inspect them. This is especially true when the faults are hard to detect, because the system would need to employ a multitude of advanced image processes to detect the faults, and all these processes would need to be configured correctly for each material. For these applications, training the system to reliably detect faults can be very time consuming.
An object of the present invention is to provide an improved method for training a web inspection system that can handle demanding applications without requiring vast amounts of user time to train the system.
According to a first aspect of the invention, there is provided a method for training an inspection system comprising: capturing and storing images of a material, for example a web or sheet of material, using one or more cameras; auto-training one or more of a plurality of image processing techniques using the captured images to establish one or more suitable threshold levels for each process; using the trained image processing techniques to identify faults; visually inspecting the material to identify faults; identifying any discrepancies in the faults detected by the visual inspection and the image processing, and when necessary varying one or more parameters associated with the one or more processes to improve its fault detecting capability.
By training the processes using an automated and unsupervised technique, with reference to feedback from a manual inspection followed by re-auto-training the materials, the fault detection rate can be optimised for a large range of products with faults that are difficult to detect. Once the processes are optimised in this way, the system can be used to efficiently and quickly inspect webs of many types of material for faults. This greatly improves the overall inspection rate and reduces the time required for the training of the system.
Preferably, the step of auto-training involves identifying defects and removing data associated with these from the image data to provide new image data of non-defective material, and then determining a suitable threshold based on the new image data. To greatly reduce the need for operator input, the auto-training is designed to work unsupervised on the entire range of materials in the application. It does this by assuming the image data it receives is for the greatest part defect free, and that significant deviations in appearance from this data should be considered faults. However, since it is unsupervised, it has to automatically cope with faults that may appear in the training material image data without requiring operator assistance.
Preferably, the method further involves capturing images of a plurality of materials and grouping the materials together using the captured image data and according to pre-determined criteria. By doing this, the training technique can be optimised in situations where large numbers of materials have to be inspected using the same equipment. The grouping can be done using a tool to group the materials according to the similarity in the threshold levels of their processes. Preferably, each fault detection process is then trained or optimised on a group basis. The sensitivity of each process may be trained or optimised on a per material basis. Hence, for all materials within a group, the same process configurations are used, but for each material the sensitivity of the processes may be varied to take into account physical differences, such as minor differences in texture or colour. Because the materials are grouped together, the changes that have to be made are only in the process sensitivity thresholds.
According to another aspect of the invention, there is provided a method for training an inspection process that uses a plurality of image processing techniques, the method comprising: processing image data using the one or more image processing techniques to identify defects; removing as much data associated with the defects identified as possible to produce processed and substantially defect free image data, and using the processed and substantially defect free image data to establish one or more trained thresholds for the or each image processing technique.
Preferably, the method involves two phases. The first phase establishes threshold settings for a normal threshold image process, where the image is not filtered in any way, and one or two simple thresholds are applied. This process is used in the second phase to detect and remove any data from faults that exceeds those thresholds, to produce the processed image data that all the other image processes will be trained from.
According to yet another aspect of the present invention, there is provided a method for training an inspection system, the method comprising grouping together materials according to pre-determined criteria and training a set of processes for applying to that group. By grouping materials together, training can be optimised in situations where large numbers of materials have to be inspected using the same equipment.
According to still another aspect of the invention, there is provided a method for training an inspection system that has a plurality of image processing techniques for processing image data to identify faults, the method comprising recording images of a material for replaying at a later stage, and using the recorded image data to detect faults.
Providing a recording facility assists in the comparison of visually inspected results with those of the inspection system, as faults that are not seen by the system can be analysed and process changes made to detect them. This helps in the training of the system for difficult-to-detect defects.
According to still a further aspect of the invention, there is provided a material inspection system that is trained in accordance with any of the training methods in which the other aspects of the invention are embodied.
Various aspects of the invention will now be described by way of example only and with reference to the accompanying drawings, of which:
Figure 1 is a schematic view of a web inspection and training system;
Figure 2 is a diagrammatic view of the web inspection part of the system of Figure 1;
Figure 3 is an image of a dark stain defect plus noise;
Figure 4 is a plot of process output units versus position across the web for the stain of Figure 3;
Figure 5 is an image of the stain of Figure 3 after blurring;
Figure 6 is a plot of process output units versus position across the web for the image of Figure 5;
Figure 7 is a grey level histogram for an image that is relatively noisy;
Figure 8 is a grey level histogram for an image that is relatively noise free;
Figure 9 is a grey level histogram showing various threshold levels during the automatic training of process thresholds;
Figure 10 is a grey level histogram on which an extension caused by a defect is shown;
Figure 11 is a flow diagram of a process for grouping materials together;
Figure 12 is a flow diagram showing the overall training strategy, and
Figure 13 is a schematic view of a two camera set arrangement.
Figure 1 shows an on-line fault detection system 10 for detecting faults in a web of material. This has an on-line inspection system 12 for capturing images of a web of material and various processing tools. In order to maximize the sensitivity of the system, it firstly has to be trained to recognize typical faults in the material. To do this a first sample of the material is moved along the production line so that images of it can be captured. These images are recorded and stored for replaying by a web-corder tool 14.
This tool can be set automatically to record lots of small samples of products, so that they can be auto-trained at a later date. Also, to allow for a manual validation, the web-corder tool is operable to allow the recorded web information to be played back as though it were actually being inspected. The captured image data is processed using a plurality of different defect detection processes or algorithms to identify, for example, any differences in contrast that could be indicative of a defect.
To optimise the sensitivity of the defect detection processes, thereby to limit the number of false alarms and/or ensure that as many of the material defects as possible are detected, a web-trainer tool is provided. This is operable to automatically train the correct sensitivities for a given process. Owing to the need to train possibly thousands of products, the web-trainer is designed to work in an unsupervised manner, i.e. it does not need feedback from the user as to what is and is not a fault, provided that the configuration of the process it is training has been validated. It can also be used to give a quantitative value to each product so that the products can be compared for likeness in texture and aesthetics.
Once the web-trainer has varied the processes as deemed necessary, all the defects should be identified when the trained product is played through the web-corder and the trained detection processes are applied. To validate this, a manual inspection is carried out using the images stored in the web-corder and the results of the automated and manual inspections are compared. In the event that the validation identifies a fault that was not picked up by the automated process, the recorded image of the area that has that unidentified fault is isolated. At least one of the detection processes applied to that area is then varied until it is able to identify that particular fault. In this way, by using the stored images and a manual inspection of the material, a profile of the faults that are detected can be built up and the sensitivities of the detection process can be modified and where necessary improved.
Because some materials have similar properties, it can be useful to group them together.
To do this a web-grouper tool 18 is provided. This is operable to use the quantitative values determined by the web-trainer to group the materials together. Once materials are grouped together, a particular configuration of processes for detecting defects in all of the group members is selected or set. Then the sensitivity of these processes is fine-tuned.
By grouping products together in this way, the processing power and time needed to train the system to inspect many products can be minimized, whilst at the same time the efficiency and effectiveness can be maintained at a high level.
Figure 2 shows the on-line web inspection system of Figure 1 in more detail. This has a plurality of cameras 20 that are positioned to capture images of a web or sheet of material 22 that is being moved past. The cameras are grouped into one or more sets located at different viewing positions and so able to see different types of defects. For example, one set of cameras could be located to look at light placed behind the web, and another set of cameras could be located so as to look at light shining from the same side as the cameras or at low angles.
Associated with each camera set is a master PC that includes a plurality of algorithms for processing image data from the cameras. Optionally, one or more slave PCs may be connected to the master PC to forward image data from some cameras in the set to the master PC. The number of camera sets is determined by the application, but typically a system may include three sets of cameras, each set including between one and eight cameras. Each master PC includes a number of fault detection processes. In a preferred embodiment, each master PC includes seventeen fault detection processes. In practice, this means that there is typically a total of fifty-one independent and separately configurable detection processes.
Each master PC 24 is operable to store the image data from its associated cameras and
perform a flat field correction to take into account, for example, lens and illumination effects. Techniques for doing flat field corrections are known and so will not be described in detail. Each master PC 24 provides data to a core or front end processor or controller 26 that controls the overall functionality of the system, and in particular is operable to communicate with the master PCs. The front-end processor 26 typically includes a user interface for presenting data to the system user. Each master PC has an interface for training the camera set it belongs to, and for displaying all training data to a user. A limited amount of training data may also be displayed on the user interface of the front end processor 26.
The cameras 20 each have a line-scan charge coupled device (CCD), which has a row of very small light-sensitive sensors inside and uses lenses to image a thin strip of the web onto the photo-sensors, the outputs of which are then digitised and passed to a frame grabber within the associated master PC. In use, the web 22 is moving, so that a fixed number of consecutive lines retrieved from each camera 20 form an actual image of the web inside the frame grabber. This image is sometimes referred to as a frame. The digital images collected by the cameras 20 of Figure 2 consist of many thousands of small pixels, each with a given value of brightness or grey level. Since the image is monochrome (black/white), this grey level is normally 0 for black and 255 for white. Each pixel in the image is a representation of the amount of light falling onto one of the photo-sensors in the cameras. Once the frame grabbers have acquired an image of the moving web 22, they pass it into the computer software on the master PC 24, where it is processed.
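As a rough illustration of this frame-assembly step (the patent gives no code; the line count and all names below are assumptions made for this sketch), consecutive line-scan rows can be stacked into frames like this:

```python
import numpy as np

LINES_PER_FRAME = 512  # illustrative; the real value is application-dependent

def frames_from_linescan(lines):
    """Stack consecutive line-scan rows into 2-D frames.

    `lines` is an iterable of 1-D grey-level arrays (one row per camera
    read-out); every LINES_PER_FRAME consecutive rows form one frame of
    the moving web, much as a frame grabber would assemble them.
    """
    buffer = []
    for line in lines:
        buffer.append(np.asarray(line))
        if len(buffer) == LINES_PER_FRAME:
            yield np.vstack(buffer)  # one frame: shape (LINES_PER_FRAME, width)
            buffer.clear()
```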
From an image processing perspective, it is important to cause defects to have the strongest deviation from the background grey level. The defect signal is affected by, for example, the illumination intensity of the lighting system and the physical shade/colour difference between the defect and the web. It is also affected by the angle of the lighting and the type of defect. A further factor is the light sensitivity of the cameras 20. In practice, for large scale manufacturing operations, extremely sensitive cameras are needed. For example, the cameras of Figure 2 may be based on Dalsa P3 sensor technology. The focus of the cameras can also be important, because small defects require accurate focus of the cameras to remain sharp and well contrasted. The camera resolution also has to be selected to be of a suitable size relative to the defect size. If the defect is smaller than the pixel size, it will not completely fill any given pixel, and so will not have its shade fully represented in the final image. Hence defects similar in size to or smaller than the imaging pixel size have to have very strong contrast to be detected. All of these factors vary on a case-by-case basis and have to be taken into account when designing and setting up the system for an application.
As noted before, included in each of the master PCs 24 is a number of different defect detection processes that are implemented by algorithms and are operable to manipulate the incoming image from a camera. Each of the algorithms is designed to alter the image so that defects can more easily stand out from the surrounding material. For each process, there are one or two thresholds. Once the process has manipulated the image, every pixel is compared with a threshold value. If the pixel exceeds this threshold, then it is considered as belonging to a defect. Some processes have two thresholds if they are looking for light and dark defects at the same time.
Various different processing techniques could be used to identify faults or defects in the web. However, as an example, one of the processes used in the system of Figure 2 is a normal thresholding process. This can work at the highest resolution of the camera system. For this process, the basic principle is to apply two thresholds to the flat field corrected web image, such that dark defects fall below a lower threshold and bright defects rise above an upper threshold. The advantages of this include that it works at the full resolution of the system; it is relatively light on resource usage; it is good at putting reliable defect class names (A, B, C, D, etc.) to the defects, and it is not defect size sensitive, i.e. it will see defects of any size provided their contrast is great enough. However, a disadvantage is that defects that are subtle in contrast often cannot be seen reliably by this method.
For the normal thresholding method, there are two settings, upper level % and lower level %. If, for example, there is a background light level of 100 grey levels, then a 13% upper threshold level applies a threshold at 113. Similarly, a 13% lower threshold level applies a threshold at 87. For a background level of 200, a 13% upper threshold will apply a threshold at 226, and so on. In this way, the lower the levels, the more sensitive this process, since the thresholds are closer to the noise floor. It is possible to use different levels for the upper and lower thresholds. Such a situation would arise, for example, if it were necessary to have the system more sensitive to dark defects than to light defects. In this case, the lower threshold level would be smaller than the upper.
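A minimal sketch of this percentage-threshold arithmetic follows; it assumes a flat-field-corrected greyscale frame, and the function and parameter names are illustrative rather than taken from the patent:

```python
import numpy as np

def normal_threshold(image, background, upper_pct, lower_pct):
    """Flag pixels outside the percentage band around the background level.

    Returns a boolean mask in which True marks a candidate defect pixel.
    """
    upper = background * (1.0 + upper_pct / 100.0)  # e.g. 100 -> 113 at 13%
    lower = background * (1.0 - lower_pct / 100.0)  # e.g. 100 -> 87 at 13%
    return (image > upper) | (image < lower)

# A frame with a background grey level of ~100 and one dark defect pixel.
frame = np.full((4, 4), 100.0)
frame[2, 2] = 80.0  # dark defect
mask = normal_threshold(frame, background=100.0, upper_pct=13.0, lower_pct=13.0)
print(int(mask.sum()))  # -> 1 defect pixel flagged
```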
Another process used in the system of Figure 2 is operable to identify large area defects.
This is used for subtly contrasted defects that have a relatively large area. The defects can be seen above the noise by the fact that the grey level deviation of a defect exists over a larger area than normal system noise. To identify these defects, the image is blurred or low passed in spatial frequency terms. This can be done using, for example, a top-hat function, typically a top-hat convolution kernel. When the large area defect process is applied, the background noise of the image is blurred and cancelled, leaving only larger defects unchanged. This increases the signal to noise ratio for large defects.
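A sketch of this blur-then-threshold idea is given below. The patent names a top-hat convolution kernel; a plain box (uniform) filter is used here as a simple stand-in, and all names and values are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def large_area_defect_mask(image, kernel_size, threshold):
    """Blur away pixel-scale noise, then threshold deviations from the mean."""
    blurred = uniform_filter(image.astype(float), size=kernel_size)
    deviation = np.abs(blurred - blurred.mean())
    return deviation > threshold

# A subtle 20x20 stain (+5 grey levels) buried in +/-10 grey levels of noise.
rng = np.random.default_rng(0)
web = 100.0 + rng.uniform(-10.0, 10.0, size=(200, 200))
web[90:110, 90:110] += 5.0
mask = large_area_defect_mask(web, kernel_size=15, threshold=3.0)
print(bool(mask.any()))  # the stain survives the blur; the noise is cancelled
```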
Figure 3 shows a raw image of a subtle stain, in the presence of random background noise. Figure 4 shows the grey level for the stain of Figure 3. From this, it can be seen that there is no suitable level to place a threshold such that it would sufficiently detect this fault and avoid the noise peaks. Figure 5 shows this image after large area defect processing, i.e. after blurring the image. Although no clearer to the human eye, the stain can now be more easily identified by looking at the grey scale graph, as shown in Figure 6. From this, it can be seen that a threshold could easily be placed to detect the stain.
An advantage of blurring the image of a defect is that subtle defects that exist over a relatively large area, such as subtle stains, light/dark or thick/thin patches, can be seen.
The large area of these defects allows the computer to see them above the background noise. Also, this technique is very efficient on resources. Disadvantages are that this method is unable to see subtle defects smaller than its blurring size, since they are blurred and cancelled in the same way as the background noise is. The same goes for strongly contrasted defects that are much smaller than the blurring size.
Each process in the system of Figure 2 has a sensitivity setting that determines the position of the thresholds. A high sensitivity means that the thresholds are close to the normal texture values of the material, and so more likely to see subtle features in the image but at the same time more likely to cause false alarms. A low sensitivity means that the thresholds are set away from the texture of the material and it is much less likely to see false alarms, but at the same time subtle defects may not cross the thresholds either.
In order to optimize the sensitivity of the system, it is necessary to train it for the application for which it is to be used. This is important because in practice the type of fault the system has to be able to detect can vary significantly depending on the material that is being inspected. Where many materials are being run off the same production line, ensuring that the system is suitably trained becomes a significant problem. To this end, before the system can be used, each process has to be trained for particular materials by tuning its sensitivities. These sensitivity settings are stored within parameter sets, termed methods. The ability of the system to detect a defect depends in part on the defect contrast with the background. The amount of contrast that a defect needs in order to be recognized by the system as a defect and rejected depends on the sensitivity parameters.
The higher the sensitivity setting, the less contrasted the defect needs to be.
To train the system for a particular material, images of that material are firstly captured and recorded by the web-corder of Figure 1. The web-trainer is then used to automatically set its own thresholds and sensitivities on each product that it inspects. It does this by sampling the material images and calculating what thresholds are needed to avoid false alarms while still enabling each process to be as sensitive as required.
When auto-training, the system does not know what is a defect and what is not a defect until its sensitivities are set correctly. Hence, the auto-trainer needs to assume that the material it is seeing is defect free, so that it can set the sensitivities to detect features that look significantly different to the material it assumes is correct. This assumption can be problematic if a defect appears in the 'defect free' material. To overcome this, the automatic inspection includes two phases. The first is to establish threshold sensitivity settings for the normal threshold process. This is done using the equations below. In phase two, this trained normal threshold process is used to supervise the rest of the training for all of the other processes.
To supervise phase two, the normal threshold is applied to the frames of the image. In the event that it detects a defect in a given frame, that frame is removed from the auto-training for all of the other processes. This filters out serious defects of any size or shape from the training. However, it does not filter out more subtle defects that cannot be detected by the normal threshold. To do this, the web-trainer assumes that if a defect does occur, then it is not very common, and is significantly different to the rest of the material. The web-trainer is able to remove it from the training because of its difference from the surrounding material with respect to each process that is used to detect it. Of course, it helps the auto-training if a defect-free length of material is selected, although this is not practical when auto-training a large number of materials in an unsupervised fashion.
There are some settings for each process that determine whether the auto-training sets the sensitivities high or low for each process. These settings are held in a file and can be altered during validation of products. There is one file for every group of products. These alterations impart manual inspection knowledge and experience into the web-trainer, so that although it sets the sensitivities automatically for each product, it takes into account human inspection requirements as it does so.
To establish the upper and lower thresholds for each process, the web-trainer firstly takes line samples of each image frame, and builds up a collection of these samples into a new sample frame. If any frames are found to have defects on them using the normal threshold process set up in phase one, then the lines will not be taken from these frames. Each sample frame is then processed using each of the defect detection processes. At the end of the run, these sample frames contain a good representation of the different image values that are output from each process over the training distance. The web-trainer builds a histogram of these values for each sample frame, and so for each process.
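A compact sketch of this sampling stage is shown below, with the phase-one normal threshold passed in as a predicate; the sampling stride, bin count and all names are assumptions made for illustration:

```python
import numpy as np

def build_training_histogram(frames, process, is_defective, bins=256):
    """Histogram a process's output over frames the phase-one check passes.

    frames       -- iterable of 2-D grey-level arrays
    process      -- callable mapping a frame to its processed frame
    is_defective -- callable implementing the phase-one normal threshold
    """
    samples = []
    for frame in frames:
        if is_defective(frame):
            continue                             # phase one: skip flagged frames
        samples.append(process(frame)[::8, :])   # keep every 8th line as a sample
    values = np.concatenate([s.ravel() for s in samples])
    return np.histogram(values, bins=bins)

rng = np.random.default_rng(3)
frames = [100.0 + rng.normal(0.0, 3.0, (64, 256)) for _ in range(10)]
hist, edges = build_training_histogram(frames, process=lambda f: f,
                                       is_defective=lambda f: False)
```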
For noisy sample images, which may arise due to a strong material texture that is not filtered much, the web-trainer histogram is relatively wide in its breadth, see for example Figure 7. For smooth textures, or processes that filter much of the texture from the image, the histogram will be thin, with a smaller standard deviation, see Figure 8. The web-trainer uses the histograms to set the upper and lower thresholds for each process so that the thresholds do not identify parts of the normal material texture as a defect, i.e. the thresholds should be placed sufficiently far away from the tails of the histogram. More specifically, to set the upper threshold U, the web-trainer uses the following equation:

U = mean + u × (100 + SMUpper) / 100

Similarly, for the lower threshold L:

L = mean + l × (100 + SMLower) / 100

The parameters U, u, L and l are shown in Figure 9. Setting SMLower to 100 means that the threshold is to be twice as high as the top of the noise floor. The greater this value, the lower the relative sensitivity this threshold will be set to with respect to that texture, i.e. it will be less prone to false alarms. A setting of 0 would put the threshold onto the top of the noise floor, which would create thousands of false alarms within one frame. Typical values for these parameters are between 100 and 200%.
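These two equations can be applied directly once the mean and the noise-floor extents u and l have been measured from the histogram; the sketch below uses illustrative values:

```python
def train_thresholds(mean, u, l, sm_upper, sm_lower):
    """Apply the upper/lower threshold equations from the text.

    u (positive) and l (negative) are the distances from the histogram
    mean to the top and bottom of the noise floor (see Figure 9).
    """
    upper = mean + u * (100.0 + sm_upper) / 100.0
    lower = mean + l * (100.0 + sm_lower) / 100.0
    return upper, lower

# SMUpper = SMLower = 100 places each threshold at twice the noise-floor
# distance from the mean, as the text describes.
print(train_thresholds(mean=100.0, u=10.0, l=-10.0,
                       sm_upper=100.0, sm_lower=100.0))  # -> (120.0, 80.0)
```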
To help prevent defects that are not removed by the normal threshold process from affecting the training in phase two, another setting is defined per process. This is the 'zero level' and is defined as the normalised height at which the histogram is considered to be zero. In simple terms, the web-trainer takes the maximum value of the histogram and normalises this to 1. To find the right hand extreme of the histogram, i.e. the top of the noise floor, the web-trainer finds the position at which the right hand side of the normalised histogram crosses the 'zero level', as shown in Figure 10. The reason for this is that a defect in the sample frame is likely to show up as an uncharacteristic flat extension to the curve of the histogram.
The extension arises because defects have grey levels that are outside the normal histogram values. Since only a small area of the material is affected by the defect compared to the normal material, the extension is very small in height. The aim of the zero level threshold is to trim away these extensions so that the auto-training of any method is not affected by defects in the training material. Typical values for the zero level are between 0.01 and 0.001, but it is normally set to 0.01. This tells the web-trainer not to use extreme extensions to the histogram that are from defects, but gives the positions where the histogram will have near enough reached zero.
Using the zero level setting is useful because difficulties may be experienced in setting the SMUpper or SMLower values correctly so that defects are detected, but 'spikey' bits of the material texture are not. This can happen when the texture of the material is dotted with spikes, bright or dark spots, which are not faults in themselves and are part of the natural texture, but which do not occupy a large proportion of the image on an area basis.
These spots extend the arms of the histogram under the 0.01 zero level. By using the zero level trim, these extensions are not seen, and so the web-trainer does not take these spikes into account when training the process. In such situations, it may be wise to reduce the zero level to 0.005 or even 0.001. This means that the thresholds are set away from these spikes, as they have been taken into account as part of the histogram. As a general rule, the zero level should be decreased from 0.01 only if it can be seen from the raw frame that the material is quite spikey and, after the auto-training, there are too many false alarms on some processes. Not all processes need to have their zero level changed, but only those in which spikes would still be present after filtering. For example, the process for detecting large area faults would not need to have its zero level reduced, since the spikes are removed by the filtering.
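The zero-level trim can be sketched as below; the bin count, the synthetic data and the function name are assumptions for illustration only:

```python
import numpy as np

def noise_floor_extent(values, zero_level=0.01):
    """Measure u and l after trimming histogram bins below the zero level.

    A defect in the training data adds a long, very low extension to the
    histogram; bins whose normalised height falls under the zero level
    are ignored, so that extension does not stretch the trained thresholds.
    """
    hist, edges = np.histogram(values, bins=256)
    centres = (edges[:-1] + edges[1:]) / 2.0
    kept = centres[hist / hist.max() >= zero_level]  # trim the defect 'tail'
    mean = values.mean()
    return kept.max() - mean, kept.min() - mean      # (u, l)

rng = np.random.default_rng(2)
clean = rng.normal(100.0, 3.0, size=100_000)  # normal, defect-free texture
defect = np.full(15, 140.0)                   # tiny bright defect area
u, l = noise_floor_extent(np.concatenate([clean, defect]))
print(round(u, 1))  # stays near the clean-texture extent (~10), not 40
```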
As well as auto-training, where a large number of different materials is to be inspected, another strategy for reducing the training time is to group the materials together depending on their physical characteristics. Figure 11 shows the steps for doing this.
These steps are carried out by the web-grouper 18. Firstly, the desired camera set is chosen 30. Then the products that are to be grouped together are chosen 32. Initially, this would be all of the products. The web-grouper then compares all product method sensitivities with all other product method sensitivities within the same camera set and determines a numerical value for each comparison. This numerical value is preferably an accumulated sum of the squares of the differences of all the auto-trained process thresholds between any two products, although any other suitable value could be used. The lower the value, the closer the match, and therefore the stronger the similarity of texture for that material under that camera set.
Optionally, the system may be adapted to allow a trainer/user to select whether to constrain the number of groups or to apply a constrained comparison level 34. By constrained comparison level, it is meant that there is a pre-determined comparison level that two products must have to belong to the same group, i.e. a pre-determined closeness of the images in aspects important to the system. Alternatively, the system could be programmed to make this selection for the user, or programmed with only one or other of these options. In the event that a constrained comparison level is selected or effective, the products are grouped together according to the specified level 36. Where the number of groups is to be constrained, the products are grouped into the required number of groups by varying the comparison level until that number of groups or less is obtained 38. In either case, the results of the grouping are output to a file 40 and stored for use by the master PC of the selected camera set. This process is repeated for all of the camera sets that are provided, so that all the products are grouped for all of the camera sets. It should be noted that some products may be grouped together for one camera set, but not another.
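A sketch of the pairwise comparison and grouping described above follows. The sum-of-squared-differences metric is from the text, but the patent does not specify a clustering algorithm, so the simple seed-based grouping and all names here are assumptions:

```python
import numpy as np

def threshold_distance(a, b):
    """Accumulated sum of squared differences of auto-trained thresholds."""
    return float(np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def group_products(thresholds, level):
    """Group products whose distance to a group's seed is within `level`."""
    groups, seeds = [], []
    for name, vec in thresholds.items():
        for i, seed in enumerate(seeds):
            if threshold_distance(vec, seed) <= level:
                groups[i].append(name)
                break
        else:                         # no group is close enough: start a new one
            seeds.append(vec)
            groups.append([name])
    return groups

products = {"fabric_a": [113.0, 87.0, 120.0],
            "fabric_b": [114.0, 86.0, 121.0],
            "fabric_c": [150.0, 60.0, 160.0]}
print(group_products(products, level=10.0))
# -> [['fabric_a', 'fabric_b'], ['fabric_c']]
```

To constrain the number of groups instead, the comparison level can be varied until the grouping yields the required count or fewer, as the text describes.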
Each of the web-corder, the web-trainer and the web-grouper described above are used in an overall training strategy, as shown in Figure 12. Before starting a training session, for each new product a new method is created for each camera set. At this point it is worth noting that each camera set has to be regarded as a separate inspection system, with its own methods, own groups of methods, and own auto-train files. This method is a combination of process sensitivities (collectively termed the method sensitivities) and process configurations (collectively termed the method configuration) that determine exactly how a camera set inspects a product, for example the dimensions of the defect for the large area defect process. Initially, this method has the same method configuration as those in a development set product that has method configurations covering a wide and general set of filter sizes and process set-ups.
Firstly, images of samples of each product are captured and stored by the web-corder 46.
Then the processes are auto-trained by the web-trainer on a sample-by-sample basis using the common method configurations based on the development set method. The sensitivity parameter settings for each process and for each material are then stored. Similar products are then grouped together 50 for each camera set. This is done by the web-grouper and is based on the output of the auto-trainer. Then, an image of a full work order, e.g. a full roll of material, is captured 52. The image data is then processed using the processes that define the method configuration to identify any faults and their location 54. The sample is manually inspected, and the computer-identified faults and the manually identified faults are compared (validation) 56.
Each group needs to be validated at least once by comparing an automatic inspection report with a manual inspection of one or two of the products in the group. Due to the way it works, the web-trainer ensures that there are few, if any, false alarms in the defect report from a conventional image processing point of view. However, during the validation certain image defects may transpire to be cosmetic features of the fabric and not real faults, and therefore the image processes that detected them are set too sensitively or need to be disabled or changed in configuration. Hence, in the event that there is a discrepancy between the manual and the computer faults, one or more of the process sensitivities can be adjusted to see if they can more reliably identify faults 58. If this is sufficient to detect the fault, then the process sensitivities are stored and the method configuration is validated and used to auto-train all the products in the group. If it is not sufficient, then the method configurations are adjusted 60 and the image data is preferably replayed through the web-corder to see whether the fault can be identified. This process effectively transfers the understanding a manual inspector has of the fabric, and of the sort of features that are real faults, into the system.
The validation of a product from a group helps shape and adjust the method configuration and method sensitivities for that entire group of products. When a fault is not seen due to under-sensitivity, or a false alarm is seen due to over-sensitivity, the relevant sensitivity threshold should be changed accordingly. This percentage change needs to be reflected in the sensitivities of all processes used for that group. This ensures that all products within the group share this understanding of the nature of this material and its faults. In practice, this is achieved by making the relevant change to the SMUpper or SMLower parameters in the auto-train parameter file for that group and re-training the whole group at the end of the validation. If a particular fault is not seen and cannot be reliably seen by simply changing sensitivities, it requires a specific change in the method configuration to be seen. Since the products in a group share the same method configuration, this change will affect all products in the group automatically. The term 'establishing a group' is used to describe the way in which all products of a group are auto-trained a second time to include changes made to the auto-train parameter file and the method configuration from validation. This sets the method sensitivities to the required level for each product. If at a later stage a product has to be re-validated within a group, and it results in significant changes to either the method sensitivities or the group method configuration, then the process of establishing a group has to be repeated.
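As a minimal sketch of propagating a validation change across a group (the nested-dictionary layout and all names are assumptions; in the real system the change is made in the group's auto-train parameter file):

```python
def apply_validation_change(group_params, process, delta_pct):
    """Nudge SMUpper/SMLower for one process across every product in a group.

    A positive delta desensitises the process (fewer false alarms); a
    negative delta makes it more sensitive.
    """
    for product in group_params.values():
        product[process]["SMUpper"] += delta_pct
        product[process]["SMLower"] += delta_pct

group = {"fabric_a": {"large_area": {"SMUpper": 150.0, "SMLower": 150.0}},
         "fabric_b": {"large_area": {"SMUpper": 150.0, "SMLower": 150.0}}}
apply_validation_change(group, "large_area", delta_pct=20.0)
print(group["fabric_b"]["large_area"]["SMUpper"])  # -> 170.0
```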
At the end of the training routine for a large number of products, each material should be included in a group for which the method sensitivities and configurations have been optimised. As an example, consider Fabrics A, B and C on a two camera set system having a toplight set and a backlight set, as shown in Figure 13. This diagram shows that Fabric A, with its toplight method, belongs to toplight group 2. It may belong to a different group in its backlight set, because the toplight and backlight are completely independent in this respect. Fabric B also belongs to group 2 for the toplight camera set. However, the thresholds for each process are specific to the products. Fabric A and Fabric B are in the same toplight group, but their thresholds may be different, even though the process configurations are the same. By grouping materials together, the amount of work needed to train and maintain the system can be reduced, while at the same time allowing individual setting of the product sensitivities using the automated web-trainer. This is particularly advantageous for applications where large numbers of products have to be inspected.
The system and method in which the invention is embodied are able to inspect materials, for example sheets or webs of material that are produced continuously. They will operate with the same consistency of inspection at all times, and should never allow a defect that they are able to detect to pass as a good product. The level at which to accept or reject defects can be adjusted in a simple fashion. The level of sensitivity can be changed for different products, or even for different customers for the same product. The system is capable of inspecting a large number of products for any application, even if each product needs to be inspected according to delicate sensitivity settings.
A skilled person will appreciate that variations of the disclosed arrangements are possible without departing from the invention. For example, although the system of Figure 2 is described primarily with reference to a three camera set arrangement, it will be appreciated that the system is modular and so can be readily configured or adapted to suit many different applications. For example, the number of camera sets could be varied, as could the number of cameras per set and/or the number of processes for each master PC. Optionally, to allow for a real-time display of the results of a defect detection process on web data, each master PC may be adapted to present a process-training graph. This graph is generated using the results derived from the process and is scaled such that the upper threshold of the process is located midway between the graph top and the noise floor.
As yet another option, each master PC may include a master sensitivity control that can be varied by a user of the system, which is adapted to affect the sensitivity of all the defect detection processes for the associated set of cameras equally. Each detection process may have an individual sensitivity level that can be manually changed independently from all the others. In this case, the master sensitivity control does not override these levels, but adds to them. For example, if a process had its own sensitivity settings set to high, and the master sensitivity control was set low, then the overall sensitivity for that process would be medium.
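The text only states that a high per-process level combined with a low master setting yields a medium overall sensitivity, so the averaging rule below is an illustrative assumption:

```python
def overall_sensitivity(process_level, master_level):
    """Combine a process's own sensitivity with the master control.

    The master control adds to, rather than overrides, the per-process
    level; levels are on an arbitrary 0 (low) to 1 (high) scale.
    """
    return (process_level + master_level) / 2.0

print(overall_sensitivity(process_level=1.0, master_level=0.0))  # -> 0.5, medium
```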
Another manual sensitivity control that may be provided is a minimum defect area control, which allows a user to manually enter or change the minimum defect area for all or individual processes within a method configuration. Also, although the invention has been described primarily with reference to a flexible web or sheet of material, it will be appreciated that it could be adapted for training inspection systems for other types of material. Accordingly, the above description of a specific embodiment is made by way of example only and not for the purposes of limitation. It will be clear to the skilled person that minor modifications may be made without significant changes to the operation described.
Claims (15)
- Claims 1. A method for training an inspection system comprising: capturing images of a material, for example a web or sheet of material, using one or more cameras; auto-training one or more of a plurality of image processing techniques using the captured images; using the trained image processing techniques to identify faults; visually inspecting the material to identify faults; identifying any discrepancies in the faults detected by the visual inspection and the image processing, and when necessary varying one or more parameters associated with the one or more processes to improve its fault detecting capability.
- 2. A method as claimed in claim 1 wherein the step of auto-training involves identifying defects and removing data associated with these from the image data to provide new image data of non-defective material, and then determining one or more fault detection thresholds based on the new image data.
- 3. A method as claimed in claim 1 or claim 2 involving capturing images of a plurality of materials and grouping the materials together using the captured image data and according to pre-determined criteria.
- 4. A method as claimed in claim 3 comprising training or optimising each fault detection process on a group basis.
- 5. A method as claimed in any of the preceding claims wherein the sensitivity of each process is trained or optimised on a per material basis.
- 6. A method as claimed in any of the preceding claims comprising grouping together materials according to pre-determined criteria and training a set of processes for applying to that group.
- 7. A method as claimed in any of the preceding claims comprising recording visual images of a material for replaying at a later stage, and using the recorded image data to perform the visual inspection.
- 8. A system for training an inspection system comprising: means for capturing and storing images of a material, for example a web or sheet of material; means for auto training one or more of a plurality of image processing techniques using the captured images; means for using the trained image processing techniques to identify faults, and means for varying one or more parameters associated with the one or more processes to improve its fault detecting capability.
- 9. A system as claimed in claim 8 wherein the means for auto-training are operable to identify defects and remove data associated with these from the image data to provide new image data of non-defective material, and then determine one or more fault detection thresholds based on the new image data.
- 10. A system as claimed in claim 8 or claim 9 including means for grouping a plurality of materials together using captured image data and according to pre-determined criteria.
- 11. A system as claimed in claim 10 including means for training or optimising each fault detection process on a group basis.
- 12. A system as claimed in any of claims 8 to 11 including means for training or optimising the sensitivity of each process on a per material basis.
- 13. A system as claimed in any of claims 8 to 12 including means for grouping together materials according to pre-determined criteria and training a set of processes for applying to that group.
- 14. A system as claimed in any of claims 8 to 13 comprising means for recording visual images of a material for replaying at a later stage.
- 15. A material inspection system that is trained in accordance with the training methods and/or systems of any of the preceding claims.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0324638A GB0324638D0 (en) | 2003-10-22 | 2003-10-22 | An on-line inspection system |
Publications (2)
Publication Number | Publication Date |
---|---|
GB0423390D0 GB0423390D0 (en) | 2004-11-24 |
GB2408323A true GB2408323A (en) | 2005-05-25 |
Family
ID=29595603
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0324638A Ceased GB0324638D0 (en) | 2003-10-22 | 2003-10-22 | An on-line inspection system |
GB0423390A Withdrawn GB2408323A (en) | 2003-10-22 | 2004-10-21 | Training an image-based inspection system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0324638A Ceased GB0324638D0 (en) | 2003-10-22 | 2003-10-22 | An on-line inspection system |
Country Status (2)
Country | Link |
---|---|
GB (2) | GB0324638D0 (en) |
WO (1) | WO2005043141A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102009040755A1 (en) | 2008-09-11 | 2010-04-15 | Software Competence Center Hagenberg Gmbh | Method for automatically detecting a defect in an electronic representation |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6410459B2 (en) * | 2014-04-22 | 2018-10-24 | キヤノン株式会社 | Image inspection method and image inspection apparatus |
JP2023045076A (en) * | 2021-09-21 | 2023-04-03 | 株式会社東芝 | Determination device, determination system, judgment method, program, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0263473A2 (en) * | 1986-10-03 | 1988-04-13 | Omron Tateisi Electronics Co. | Apparatus for inspecting packaged electronic device |
US4823394A (en) * | 1986-04-24 | 1989-04-18 | Kulicke & Soffa Industries, Inc. | Pattern recognition system |
US5283443A (en) * | 1990-04-17 | 1994-02-01 | De Montfort University | Method for inspecting garments for holes having a contrasting background |
US20020038510A1 (en) * | 2000-10-04 | 2002-04-04 | Orbotech, Ltd | Method for detecting line width defects in electrical circuit inspection |
WO2002037407A1 (en) * | 2000-10-31 | 2002-05-10 | Lee Shih Jong | Automatic referencing for computer vision applications |
US6567542B1 (en) * | 1998-12-17 | 2003-05-20 | Cognex Technology And Investment Corporation | Automatic training of inspection sites for paste inspection by using sample pads |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5305392A (en) * | 1993-01-11 | 1994-04-19 | Philip Morris Incorporated | High speed, high resolution web inspection system |
US5475768A (en) * | 1993-04-29 | 1995-12-12 | Canon Inc. | High accuracy optical character recognition using neural networks with centroid dithering |
USH1616H (en) * | 1994-05-31 | 1996-12-03 | Minnesota Mining And Manufacturing Company | Web inspection system having enhanced video signal preprocessing |
US6934028B2 (en) * | 2000-01-20 | 2005-08-23 | Webview, Inc. | Certification and verification management system and method for a web inspection apparatus |
- 2003
- 2003-10-22 GB GB0324638A patent/GB0324638D0/en not_active Ceased
- 2004
- 2004-10-21 GB GB0423390A patent/GB2408323A/en not_active Withdrawn
- 2004-10-22 WO PCT/GB2004/004472 patent/WO2005043141A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
GB0324638D0 (en) | 2003-11-26 |
GB0423390D0 (en) | 2004-11-24 |
WO2005043141A1 (en) | 2005-05-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |