CN109146839A - Image processing method and defect detecting method - Google Patents

Image processing method and defect detecting method

Info

Publication number
CN109146839A
CN109146839A (application number CN201810677554.3A)
Authority
CN
China
Prior art keywords
image
pixel
value
difference
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810677554.3A
Other languages
Chinese (zh)
Other versions
CN109146839B (en)
Inventor
加藤嗣
梅崎太造
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TOKYO WELLS CO Ltd
Tokyo Weld Co Ltd
Original Assignee
TOKYO WELLS CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TOKYO WELLS CO Ltd filed Critical TOKYO WELLS CO Ltd
Publication of CN109146839A publication Critical patent/CN109146839A/en
Application granted granted Critical
Publication of CN109146839B publication Critical patent/CN109146839B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component

Abstract

An image processing method and a defect detection method. When the inspection target region is to be sorted out of a captured image of an object under inspection, the method reduces burdens on operating personnel, such as the need for skilled judgment, and allows thresholds to be set simply so that the inspection target region can be selected easily. The image processing method includes a difference calculation step: a monochrome original image having a 1st and a 2nd region that are line-symmetric about a reference line is divided by the reference line into the 1st and 2nd regions, and for each pair of original pixels arranged at positions symmetric about the reference line, a difference pixel value, equal to the difference between the pixel values of the two original pixels in the 1st and 2nd regions, is calculated. The method further includes a difference image generation step: difference pixels having the difference pixel values are arranged to generate a difference image, in which the difference pixel having the difference pixel value calculated from the original pixel at a 1st position in the 1st region and the original pixel at a 2nd position in the 2nd region is placed at both the 1st and the 2nd positions of the difference image.
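The abstract's difference calculation and difference image generation steps can be sketched as follows. This is a minimal illustration only, assuming the reference line is the image's vertical center line; the patent itself does not prescribe this implementation.

```python
import numpy as np

def symmetric_difference_image(original: np.ndarray) -> np.ndarray:
    """For an image whose content is assumed symmetric about its vertical
    center line, pair each original pixel with its mirror pixel and write
    the absolute difference of the pair's pixel values to BOTH mirrored
    positions, yielding the difference image described in the abstract."""
    a = original.astype(np.int32)
    # Reversing the columns pairs the pixel at (row, c) with its mirror
    # at (row, width-1-c); the difference lands at both positions.
    return np.abs(a - a[:, ::-1])
```

On a defect-free symmetric image the difference image is all zeros; an asymmetric defect produces nonzero difference pixels at the defect's position and at its mirror position.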

Description

Image processing method and defect detecting method
Technical field
The present invention relates to an image processing method that uses a captured image, obtained by photographing the surface of an object under inspection with an imaging device, to select the inspection target region on which defect inspection of that surface should be performed, and to a defect detection method using that image processing method. In particular, it relates to an image processing method that, for an object under inspection having line symmetry, can select the inspection target region easily and with high precision regardless of the type of defect, and to a defect detection method using it.
Background art
Defect inspection, in which a captured image obtained by photographing the surface of an object under inspection with an imaging device is used to check that surface for flaws, is widely performed. The applicant has previously proposed a defect detection method for detecting line defects known as cracks (Patent Document 1).
Besides the above cracks, the surface of an object under inspection may exhibit various other defects such as chips, notches, marks, and mismarking. To detect these defects, image processing algorithms (image processing methods) that digitally process the digitized captured image are used.
As the imaging device for photographing the object under inspection, a camera equipped with an image sensor such as a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor is used. When light emitted from the object enters the image sensor during shooting, the intensity of the light is converted into the intensity of an electrical signal and digitized, and the result is recorded as a digital image.
(1) Digital images
Here, digital images are explained. The smallest element constituting an image is called a pixel, and a digital image is composed of pixels arranged two-dimensionally. Each pixel independently holds, as color information, a numerical value expressed in binary as a combination of 0s and 1s; this value represents the intensity of the light emitted from the object under inspection and/or the color of its surface. The numerical value held by each pixel is called the pixel value, and images are classified into types such as color images and grayscale images.
In a color image, the color of one pixel is determined by the ratio of the three primary-color components R (red), G (green), and B (blue) that constitute the pixel. Accordingly, when expressing the pixel value of one pixel in a color image, 24 bits (= 8 bits x 3 colors) are commonly used, with each of the R, G, and B components expressed in 8 bits.
In contrast to a color image, an image expressed only in shades from white to black is called a grayscale image. The pixel value of one pixel in a grayscale image is expressed in, for example, 8 bits, and contains no color information, only luminance information. Dark pixels have low (small) pixel values, and bright pixels have high (large) pixel values. The number of such light-dark levels is called the number of gray levels, and it varies with the amount of information assigned to one pixel. Here, the unit of information is the bit: the greater the number of bits, the greater the possible number of gray levels. Specifically, with N bits the number of gray levels is 2^N. For example, since the grayscale image above uses 8 bits, the number of gray levels is 2^8 = 256. With 256 gray levels, the minimum pixel value in the grayscale image is 0, corresponding to pure black, and the maximum is 255, corresponding to pure white.
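The relation between bit depth and gray levels described above can be checked with a tiny helper, given here only to make the 2^N arithmetic concrete:

```python
def gray_levels(bits: int) -> int:
    """Number of gray levels representable with `bits` bits per pixel:
    2 to the power N, e.g. 8 bits gives 256 levels (0..255)."""
    return 2 ** bits
```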
In a color image, the colors are usually decomposed into the three primaries R (red), G (green), and B (blue) mentioned above, and the brightness of each color is expressed with the same number of gray levels. This is equivalent to generating three grayscale (monochrome) images from the color image. Applying the 256 gray levels (8 bits) of the grayscale image above to each of R, G, and B gives the color image a bit depth of 8 bits x 3 colors = 24 bits, as described above. The number of gray levels in this case is 2^24 = 16,777,216, and the digital image can display that many colors. A color image expressed in 24 bits shows colors that look extremely natural to the human eye. For this reason, a color image expressed in 24 bits is called a full-color image.
The method of generating monochrome images from a color image is not limited to decomposing the color image into the R (red), G (green), and B (blue) primaries to obtain three monochrome images as described above. There are also methods that generate a single monochrome image from a color image. One example uses the NTSC signal, known as a television broadcasting standard. Among the component signals used in the stage preceding the NTSC signal is the YIQ signal, whose Y component is the luminance value. Consider multiplying the pixel values E_R, E_G, and E_B of the R, G, and B signals of the color image by coefficients and summing them to generate a luminance value Y as the new pixel value. It is known that applying a weighted average using the NTSC coefficients gives a result closest to the brightness perceived by the human eye. Specifically, the luminance value Y is generated by the operation Y = 0.299 E_R + 0.587 E_G + 0.114 E_B.
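The NTSC weighted average above can be applied to a whole color image in a few lines. This is a straightforward sketch of the formula, not an implementation prescribed by the patent:

```python
import numpy as np

# NTSC luminance coefficients from Y = 0.299*E_R + 0.587*E_G + 0.114*E_B.
NTSC = np.array([0.299, 0.587, 0.114])

def rgb_to_luminance(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 uint8 color image to an 8-bit monochrome image
    using the NTSC weighted average described in the text."""
    y = rgb.astype(np.float64) @ NTSC   # weighted sum over the RGB axis
    return np.clip(np.round(y), 0, 255).astype(np.uint8)
```

Because the three coefficients sum to 1, a pure-white pixel (255, 255, 255) maps to luminance 255, and a pure-black pixel maps to 0.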
As stated above, to detect image defects of the kinds described, an image processing algorithm that digitally processes the digitized captured image is used. Such an algorithm is nothing other than computation performed on the pixel values described above. By devising the computation method, the region in the image that is the target of defect inspection can be selected based on the computation result. Image processing algorithms devised to have such a selecting effect are widely used as prior art.
(2) Image processing algorithm of the prior art
The image processing algorithm of the prior art is explained using Figs. 32 to 37. In the following description, the object under inspection that is subjected to surface defect inspection is referred to mainly as the workpiece. The region in the image that is the target of defect inspection is referred to as the inspection target region, and the region that is not a target of defect inspection, i.e. everything other than the inspection target region, is referred to as the exclusion region.
When performing defect inspection of a workpiece surface, the workpiece is photographed with an imaging device to obtain a captured image, and the image processing algorithm is applied to that captured image. Here, the captured image is taken to be a grayscale image. As for the shooting method: when the workpiece is a three-dimensional shape such as a hexahedron, the workpiece is placed on a horizontal stage and each face is photographed by an imaging device arranged at a position facing that face. When the workpiece is formed into a flat shape from paper, thin wood, and/or resin, it is placed on a horizontal stage and its upper surface is photographed by an imaging device arranged above the stage.
In the following description, for simplicity, the shape of the workpiece in the drawings, the marks printed on the workpiece surface, the shapes of defects present on the surface, and so on are represented schematically by simple figures such as circles, ellipses, rectangles, and squares.
Fig. 32(a) is a good-product image PG1 obtained by photographing one face of a workpiece that is a good product (good-product workpiece) WG1 with the imaging device; the background B1 and the workpiece WG1 are captured. The workpiece WG1 has a rectangular shape, and a circular mark MG1 is printed on its surface. The intersection of the diagonals of the workpiece WG1 approximately coincides with the center of the mark MG1.
Fig. 33(a) is a defective-product image PD1 obtained by photographing, with the imaging device, one face of a workpiece having a surface defect (defective-product workpiece) WD1. The difference between the good-product image PG1 and the defective-product image PD1 is that in PD1 a defect D1 exists on the workpiece WD1 and a defect D2 exists on the mark MD1.
Here, the good-product image PG1 of Fig. 32(a) and the defective-product image PD1 of Fig. 33(a) are grayscale digital images. Comparing the colors of the regions in PG1 and PD1 by visual inspection gives the following. In both PG1 and PD1, the background B1 is black, the workpieces WG1 and WD1 are dark gray, and the marks MG1 and MD1 are white. In the defective-product image PD1, the defect D1 on the workpiece WD1 is light gray, and the defect D2 on the mark MD1 is also light gray. However, although both are light gray, the defect D2 is slightly darker than the defect D1.
The comparison above is the result of an operator visually inspecting the defective-product image PD1 of Fig. 33(a); the actual PD1 is recorded in the imaging device as a digital image, as described above. Expressing the size relation of the pixel values of the regions of the recorded digital image, by region name, as an inequality gives

background B1 < workpiece WD1 < defect D2 < defect D1 < mark MD1 (1).
Here, each of the regions above consists of multiple pixels, and the pixel values held by the pixels of the same region in fact vary independently. For simplicity, however, in the following description all pixels constituting the same region are assumed to have the same pixel value.
The size relation of the pixel values shown in (1) means the following: when the defective-product image PD1 of Fig. 33(a) is inspected visually, the defect D1 looks brighter (whitish) than the normal part of the workpiece WD1. Likewise, the defect D2 looks darker (blackish) than the normal part of the mark MD1 on the workpiece. For the good-product image PG1 of Fig. 32(a), removing the pixel values related to the two types of defect from (1) gives

background B1 < workpiece WG1 < mark MG1 (2).
As concrete examples of (1) and (2) above, labels are assigned to the regions of the good-product image PG1 of Fig. 32(a) and the defective-product image PD1 of Fig. 33(a) and associated with the pixel values of those regions; these are shown in table form in Fig. 32(b) and Fig. 33(b), respectively. Below, for simplicity, the explanation is limited to the defective-product image PD1 of Fig. 33(a).
Next, the image processing algorithm used for defect inspection of the workpiece surface using the defective-product image PD1 of Fig. 33(a) is explained. In PD1, the background B1 and the workpiece WD1 are captured, and the target of defect inspection is the workpiece WD1. The position of the workpiece in the captured image varies from shot to shot. As an example, Fig. 34(a) shows, in the same way as Fig. 33(a), a defective-product image PD2 obtained by photographing a defective-product workpiece WD2 different from the workpiece WD1 above. As in Fig. 33, the labels assigned to the regions of PD2 in Fig. 34(a) are associated with the pixel values of those regions and shown in table form in Fig. 34(b).
The position of the workpiece WD2 in the defective-product image PD2 of Fig. 34(a) differs from that of the workpiece WD1 in the defective-product image PD1 of Fig. 33(a). It is therefore necessary to extract the workpiece correctly from each captured image regardless of its position. This step is named the workpiece extraction step. As an image processing algorithm used in the workpiece extraction step, the template matching method (TM method), which extracts a specific image pattern from the image under investigation, is known. It is an algorithm executed in the following order.
[Step 1] Prepare a predefined image pattern (template) as the specific image pattern.
[Step 2] Compare (match) the image under investigation against the template and search for the most consistent position.
[Step 3] Extract the most consistent position as the specific image pattern.
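The three steps above can be sketched as a brute-force search. This is illustrative only: it scores each position by the sum of squared differences (SSD), one common matching criterion; the patent does not specify a particular score, and a real system would use a library routine such as OpenCV's matchTemplate.

```python
import numpy as np

def match_template_ssd(image: np.ndarray, template: np.ndarray) -> tuple:
    """Slide the template over the image, score every position by SSD,
    and return the (row, col) of the best (lowest-SSD) match."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw].astype(np.int64)
            ssd = int(((patch - template.astype(np.int64)) ** 2).sum())
            if best is None or ssd < best:   # keep the most consistent position
                best, best_pos = ssd, (r, c)
    return best_pos
```

With a template cut from the good-product image, the returned position locates the workpiece in the image under investigation regardless of where the workpiece sits.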
For example, when the defective-product image PD1 of Fig. 33(a) is set as the image under investigation, the shape of the workpiece WG1 captured in the good-product image PG1 of Fig. 32(a) is set as the template. If PD1 is compared against the template, then even if the position of the workpiece WD1 in PD1 varies in all sorts of ways, for example WD1 lying roughly in the center as in Fig. 33(a) or at the bottom right as in the defective-product image PD1a of Fig. 35(a), the workpiece WD1 with the shape shown in Fig. 35(b) can still be extracted.
Next, for the extracted workpiece WD1, the inspection target region and the exclusion region are sorted. This step is named the inspection region sorting step. In the prior-art image processing algorithm used in the inspection region sorting step, the inspection target region and the exclusion region are sorted by the following [Condition 1].

[Condition 1] The pixel value of the inspection target region is larger or smaller than the pixel value of the exclusion region adjacent to that region.
Here, note that there are two cases for "the exclusion region adjacent to that region" stated in [Condition 1]. Specifically, there is the case where the exclusion region is adjacent to the outside of the inspection target region, and the case where it lies inside the inspection target region and is adjacent by being enclosed within it as a set of its parts. In the latter case, if the inspection target region enclosing the exclusion region is taken as the 1st inspection target region, the enclosed exclusion region may itself be layered as a 2nd inspection target region independent of the 1st inspection target region. As a result, an inspection target region can be composed of multiple inspection target regions combined and nested within one another. The inspection region sorting step including such layering of inspection target regions is explained using Figs. 35 and 36.
First, the workpiece extraction step using template matching is executed on the defective-product image PD1a of Fig. 35(a) to extract the workpiece WD1. Assume that the position of the extracted workpiece WD1 is at the bottom right of the defective-product image and that its outermost edge is rectangular. After the workpiece WD1 has been extracted in this way, the inspection region sorting step is executed.
In the 1st stage of the inspection region sorting step, a peripheral frame F1, whose size is larger by α than the maximum value of the dimensional specification of the outermost edge, is placed outside the workpiece WD1 so as to surround its outermost edge. The situation after the 1st stage is shown in Fig. 36(a). Here, the size of the outermost edge of the workpiece WD1 varies due to manufacturing error. That size has a specification defined by a maximum and a minimum value, and a workpiece WD1 satisfying the specification is judged to be a good product in terms of size. The workpiece extraction step with template matching is executed after photographing such a dimensionally good workpiece WD1. Therefore, the size of the outermost edge of WD1 obtained as a result of the workpiece extraction step can take any value from the maximum to the minimum of the dimensional specification. Thus, if a peripheral frame F1 larger by α than the maximum of the dimensional specification is placed, the frame F1 lies outside the outermost edge for every workpiece WD1.
In the 2nd stage of the inspection region sorting step, the inspection target region is sorted by focusing on the pixel values of the pixels constituting each region in Fig. 36(a). Assume that the pixel value of the pixels constituting the region between the peripheral frame F1 placed in the 1st stage and the outermost edge of the workpiece WD1 (hereinafter, the workpiece peripheral region) WDS1 is equal to the pixel value of the pixels constituting the background B. Now focus on the size relation of the pixel values of the workpiece peripheral region WDS1 and the workpiece WD1. From Fig. 33(b) and formula (1), the pixel value of WDS1 equals the pixel value 10 of the background B1, while the pixel value of WD1 is 100. Then, inside the peripheral frame F1 of Fig. 36(a), for example the pixel value 50 is set as the workpiece sorting threshold TWD1. By sorting the region whose pixel values are at or above TWD1, the interior of the workpiece WD1, with pixel value 100, can be sorted as the target candidate region, i.e. the candidate for the inspection target region.
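The 2nd-stage sorting described above amounts to a simple threshold comparison. A minimal sketch, using the pixel values of Fig. 33(b) (background 10, workpiece 100) and TWD1 = 50:

```python
import numpy as np

def sort_region(image: np.ndarray, threshold: int, above: bool = True) -> np.ndarray:
    """Return a boolean mask of the pixels at or above (or, with
    above=False, at or below) the given sorting threshold."""
    return image >= threshold if above else image <= threshold
```

Applying `sort_region(img, 50)` inside the peripheral frame marks exactly the workpiece interior as the target candidate region, since the workpiece peripheral region shares the background's pixel value of 10.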
Next, from the 3rd stage of the inspection region sorting step onward, the inspection target region is sorted from the target candidate region above. The reason this sorting is necessary is explained below. Comparing the pixel values of the normal region and the defect inside the workpiece WD1 in Fig. 36(a), Fig. 33(b) shows that the pixel value of WD1 (the normal region) is 100 and that of the workpiece defect D1 is 200. From this, one might think, for example, that setting the pixel value 140 as the workpiece defect threshold TD1 within the region inside WD1 and sorting the region at or above TD1 would detect the defect D1. However, inside WD1 there is also the mark MD1 on the workpiece (a normal region), whose pixel value is 250. Thus, if the region at or above the workpiece defect threshold of 140 were sorted within WD1, the mark MD1 with pixel value 250, although a normal region, would be sorted as a defect together with the defect D1 with pixel value 200. Such a result is clearly incorrect as a defect inspection. To prevent this, the region obtained by removing the region of the mark MD1 from the region inside the workpiece WD1, which is the target candidate region, is taken as the inspection target region. This is what the stages from the 3rd stage of the inspection region sorting step onward accomplish.
Here, the region inside the workpiece WD1 sorted in the 2nd stage of the inspection region sorting step is named the 1st target candidate region. The region of the mark MD1 on the workpiece that should be excluded from the 1st target candidate region is named the 1st exclusion region. Finally, the region obtained by removing the 1st exclusion region from the 1st target candidate region is named the 1st inspection target region. Specifically, in the 3rd stage of the inspection region sorting step, only the mark MD1 on the workpiece is sorted, using the same template matching method as in the workpiece extraction step.
Then, in the 4th stage, as in the 1st stage, a peripheral frame F2 larger by β than the maximum value of the dimensional specification of the outermost edge of the mark MD1 is placed outside MD1 as shown in Fig. 36(b), so as to surround the outermost edge of the mark MD1 on the workpiece.
In the 5th stage, as in the 2nd stage, note that the pixel value of the pixels constituting the region between the peripheral frame F2 and the outermost edge of the mark MD1 on the workpiece (hereinafter, the mark peripheral region) M1S1 is equal to the pixel value of the pixels constituting the workpiece WD1. From Fig. 36(b), the pixel value of the mark peripheral region M1S1 equals the pixel value 100 of the workpiece WD1, while the pixel value of the mark MD1 on the workpiece is 250. Then, inside the peripheral frame F2 of Fig. 36(b), for example the pixel value 200 is set as the mark sorting threshold TM1. By sorting the region at or above TM1, the interior of the mark MD1, with pixel value 250, can be sorted as the 1st exclusion region that should be excluded from the 1st target candidate region.
Then, as the 6th stage, the 1st exclusion region is excluded from the 1st target candidate region to obtain the 1st inspection target region. Next, attention turns to the mark MD1 on the workpiece that was sorted as the 1st exclusion region. As shown in Fig. 36(b), a defect D2 may exist in the mark MD1 on the workpiece. It is therefore appropriate to treat the mark MD1 as the 2nd target candidate region for detecting the defect D2. That is, from the 7th stage onward, after the mark MD1 on the workpiece has been sorted as the 2nd target candidate region, a 2nd exclusion region is sorted within the 2nd target candidate region if needed, and the region obtained by removing the 2nd exclusion region from the 2nd target candidate region is sorted as the 2nd inspection target region. From Fig. 36(b), however, when the mark MD1 is taken as the 2nd target candidate region, no 2nd exclusion region exists within it. The 2nd inspection target region is therefore the entire region inside the mark MD1 on the workpiece. The inspection target regions obtained as the result of executing the above steps on the defective-product image PD1a of Fig. 35(a) are two regions: the 1st inspection target region sorted in the 6th stage and the 2nd inspection target region sorted in the 7th stage. This completes the inspection region sorting step.
When the 1st and 2nd inspection target regions have been sorted in the inspection region sorting step, processing moves to the defect threshold setting step, which sets, for each inspection target region, a defect threshold capable of detecting defects in that region. In this defect threshold setting step, one specific pixel value that sorts each inspection target region into the normal region, which is not a defect, and the defect is set as the defect threshold. The normal region and the defect within an inspection target region are sorted by the following [Condition 2].

[Condition 2] The pixel value of a defect in an inspection target region is larger or smaller than the pixel value of the normal part of that region.
First in the 1st stage of defect threshold value setting procedure, the 1st defect threshold value is set as the 1st check object region The defects of threshold value.As described above, the 1st check object region is the region other than the mark MD1 on the workpiece in workpiece WD1. Also, according to Figure 33 (b), the pixel value in the 1st check object region is 100, and the pixel value of the defect D1 of workpiece is 200.According to This case, can be by being for example set as the 1st defect threshold value TD1 for pixel value 150 in the 1st check object region and dividing The region of the 1st defect threshold value TD1 or more is selected, only to sort defect D1.The 1st defect threshold value TD1 is set as 150.
Then in the 2nd stage of defect threshold value setting procedure, the 2nd defect threshold value is set as the 2nd check object region The defects of threshold value.2nd check object region is the region in the mark MD1 on workpiece.Also, according to Figure 33 (b), the 2nd The pixel value in check object region is 250, and the pixel value of the defects of mark MD1 on workpiece D2 is 180.According to this feelings Condition, can be by for example being set as the 2nd defect threshold value TD2 and sorting the 2nd to lack in the 2nd check object region by pixel value 210 The region below threshold value TD2 is fallen into, to sort defect D2.I.e., the 2nd defect threshold value TD2 is set as 210.So far, terminate defect threshold It is worth setting procedure.
As described above, in the 2nd stage of the inspection region sorting step, the pixel value 50 was set as the workpiece sorting threshold TWD1. Its purpose is to sort the interior of the workpiece WD1 from the background B1 inside the peripheral frame F1 of Fig. 36(a), sorting it as the 1st target candidate region, i.e. the target region of defect inspection. In the 5th stage, the pixel value 200 was set as the mark sorting threshold TM1. Its purpose is to sort the interior of the mark MD1 on the workpiece inside the peripheral frame F2 of Fig. 36(b) as the 1st exclusion region to be excluded from the 1st target candidate region.
Similarly, in the 1st stage of the defect threshold setting step, the pixel value 150 was set as the 1st defect threshold TD1, in order to sort defects in the 1st inspection target region. In the 2nd stage, the pixel value 210 was set as the 2nd defect threshold TD2, in order to sort defects in the 2nd inspection target region.
The various thresholds above are the thresholds best suited to defect inspection of the workpiece surface in the defective-product image PD1 of Fig. 33(a) and the defective-product image PD1a of Fig. 35(a). This is because the pixel values of the regions of those defective-product images take the values shown in Fig. 33(b). Now consider the pixel values of the regions in defective-product images other than PD1 of Fig. 33(a), that is, in captured images of other defective-product workpieces. The pixel value of each region is expected to vary over a certain range from workpiece to workpiece. As long as a defective-product image is inspected visually, the relative brightness or darkness of the regions is much the same for a captured image of any workpiece.
However, when considered as digital images, the differences in pixel values become a problem. For example, when the captured images of two different workpieces are inspected visually, the same region may look white in both. But when the two captured images are recorded as digital images, the pixel value of the white may be 240 in one and 220 in the other. Likewise, even if the same region looks dark gray in both, the pixel value of the dark gray may be 100 in one digital image and 80 in the other. In this way, the pixel value of each region varies to some degree from workpiece to workpiece. Therefore, if the various thresholds set in the inspection region sorting step and the defect threshold setting step using the defective-product image PD1 of Fig. 33(a) are used as they are, there is no guarantee that the inspection target region can be sorted reliably for every defective-product image, nor that the defects within that region can be sorted. It is therefore necessary to execute a threshold verification step that confirms the various thresholds are appropriate. The procedure of the threshold verification step is explained below.
First, in the 1st stage, B rejected product images (B is a natural number) obtained by shooting B rejected product workpieces are prepared. For each of these rejected product images, the sorting of the inspection object regions and the sorting of the defects are executed using the various threshold values set from the rejected product image PD1 of Figure 33(a). It is then confirmed that the same inspection object regions can be sorted out for all rejected product images, and that the defects, which differ from one rejected product image to another, are reliably sorted out. If a rejected product image is found for which the inspection object regions or the defects cannot be sorted out correctly, that rejected product image is compared with the pixel values (Figure 33(b)) of each region of the rejected product image PD1 of Figure 33(a) originally used for setting the threshold values, and the threshold values are corrected. Then, using the corrected threshold values, the inspection object regions and defects are sorted again for the B rejected product images. This confirmation of the sorting and correction of the threshold values is repeated until the same inspection object regions can be sorted out and the defects can be reliably sorted out for all rejected product images.
After the 1st stage, in the 2nd stage, it is confirmed that, when the various threshold values confirmed and corrected using the B rejected product images are applied to qualified product images, the same inspection object regions are sorted out for all qualified product images and no defects are sorted out. For this purpose, A qualified product images (A is a natural number) are prepared, and confirmation is carried out in the same manner as for the B rejected product images described above. If a qualified product image is found for which the inspection object regions are not sorted out correctly, or in which a defect is sorted out, the threshold values are corrected in the same way as in the confirmation using the B rejected product images. That is, the qualified product image is compared with the pixel values (Figure 33(b)) of each region of the rejected product image PD1 of Figure 33(a) used for the initial threshold value setting, and the threshold values are corrected. This confirmation and correction is repeated until the same inspection object regions are sorted out for all A qualified product images and no defects are sorted out. The values of B and A are determined using statistical techniques, considering, for example, the number of workpieces that become objects of defect inspection each day during volume production, the manufacturing variation of the workpieces, and so on.
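The two-stage loop above (confirm on rejected product images, confirm on qualified product images, correct on any failure, repeat) can be sketched as follows. This is an illustrative outline only: `sort_regions`, `sort_defects` and `correct_thresholds` are hypothetical stand-ins for the patent's region sorting, defect sorting and threshold correction operations, not part of the original disclosure.

```python
def verify_thresholds(thresholds, rejected_images, qualified_images,
                      sort_regions, sort_defects, correct_thresholds,
                      max_rounds=100):
    """Repeat confirmation and correction until every rejected image yields
    the expected regions and defects, and every qualified image yields the
    regions but no defect (the threshold value verification step)."""
    for _ in range(max_rounds):
        failures = []
        # 1st stage: all rejected product images must sort the same
        # inspection object regions and reliably sort their defects.
        for img in rejected_images:
            if not (sort_regions(img, thresholds) and sort_defects(img, thresholds)):
                failures.append(img)
        # 2nd stage: qualified product images must sort the regions
        # but must not sort out any defect.
        for img in qualified_images:
            if not sort_regions(img, thresholds) or sort_defects(img, thresholds):
                failures.append(img)
        if not failures:
            return thresholds  # thresholds confirmed suitable
        thresholds = correct_thresholds(thresholds, failures)
    raise RuntimeError("thresholds did not converge")
```

In this toy usage, an "image" is reduced to a single defect pixel value, so the correction simply raises the defect threshold until all rejected images are caught while the qualified image stays clean.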
When the various threshold values have thus been confirmed to be suitable, the threshold value verification step ends. The procedure then moves to an inspection execution step in which defect inspection is performed, using these suitable threshold values, on shot images of workpieces that are objects to be inspected. In the inspection execution step, the inspection region sorting step described above is executed on the shot image of the object to be inspected, and the inspection object regions are sorted out. Then, using the defect threshold values, the inspection object regions are checked for the presence or absence of defects. If there is no defect, the workpiece is judged to be a qualified product; if there is a defect, it is judged to be a rejected product.
Here, an example of a shot image whose pixel values differ from those of the rejected product image PD1 of Figure 33(a) is explained using the rejected product image PD11 shown in Figure 37(a). When the colors of the regions in the rejected product image PD11 of Figure 37(a) are compared visually, they are as follows. The background B11 is white, the workpiece WD11 is light gray, and the mark MD11 is black. In addition, the defect D11 on the workpiece WD11 is dark gray, and the defect D21 on the mark MD11 is light gray. However, although both are light gray, the defect D21 is slightly lighter than the workpiece WD11. When the rejected product image PD11 is recorded as a digital image, the magnitude relation of the pixel values of these regions becomes
Mark MD11 < defect D11 < workpiece WD11 < defect D21 < background B11 (3).
The magnitude relation of the pixel values shown in (3) means that, when the rejected product image PD11 of Figure 37(a) is observed visually, the defect D11 on the workpiece WD11 looks darker (blackish) than the normal part of the workpiece WD11. Similarly, it means that the defect D21 looks brighter (whitish) than the normal part of the mark MD11 on the workpiece WD11. These pixel values are shown in the form of a table in Figure 37(b).
When the inspection region sorting step and the defect threshold value setting step described above are executed on the rejected product image PD11 of Figure 37(a), the various threshold values and the magnitude relations between the threshold values and the pixel values of the regions to be sorted out are as follows.
First, in the 2nd stage of the inspection region sorting step, the workpiece sorting threshold value TWD11 is set. Its purpose is to sort out the inside of the workpiece WD11 from the background B11 as the 1st object candidate region, a candidate for the object region of defect inspection. Here, according to Figure 37(b), the pixel value of the workpiece WD11 is 130 and the pixel value of the background B11 is 250. Comparing these pixel values, it can be seen that if the workpiece sorting threshold value TWD11 is set to, for example, 180, and the region having pixel values at or below the workpiece sorting threshold value TWD11 is sorted out, the 1st object candidate region can be sorted out.
Next, in the 5th stage, the mark sorting threshold value TM11 is set. Its purpose is to sort out the inside of the mark MD11 on the workpiece as the 1st exclusion region, which should be excluded from the 1st object candidate region. Here, according to Figure 37(b), the pixel value of the workpiece WD11 is 130 and the pixel value of the mark MD11 is 40. Comparing these pixel values, it can be seen that if the mark sorting threshold value TM11 is set to, for example, 90, and the region having pixel values at or below the mark sorting threshold value TM11 is sorted out, the 1st exclusion region can be sorted out.
Similarly, in the 1st stage of the defect threshold value setting step, the 1st defect threshold value is set. Its purpose is to sort out the defect in the 1st inspection object region. Here, according to Figure 37(b), the pixel value of the workpiece WD11 is 130 and the pixel value of the defect D11 is 80. Comparing these pixel values, it can be seen that if the 1st defect threshold value TD11 is set to, for example, 100, and the region having pixel values at or below the 1st defect threshold value is sorted out, the defect D11 can be sorted out.
Furthermore, in the 2nd stage, the 2nd defect threshold value is set. Its purpose is to sort out the defect in the 2nd inspection object region. Here, according to Figure 37(b), the pixel value of the mark MD11 is 40 and the pixel value of the defect D21 is 170. Comparing these pixel values, it can be seen that if the 2nd defect threshold value TD21 is set to, for example, 100, and the region having pixel values at or above the 2nd defect threshold value is sorted out, the defect D21 can be sorted out.
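The four threshold comparisons above can be checked directly against the representative pixel values of Figure 37(b). The sketch below is illustrative only: it tests single representative values per region, whereas the actual sorting in the patent operates on connected regions of the image.

```python
# Threshold values chosen in the text for the rejected product image PD11.
TWD11, TM11, TD11, TD21 = 180, 90, 100, 100

# Representative pixel values per region, per Figure 37(b).
pixels = {"background": 250, "workpiece": 130, "mark": 40,
          "defect_D11": 80, "defect_D21": 170}

in_candidate = {k: v <= TWD11 for k, v in pixels.items()}  # 1st object candidate region
in_exclusion = {k: v <= TM11 for k, v in pixels.items()}   # 1st exclusion region (mark)
below_td11   = {k: v <= TD11 for k, v in pixels.items()}   # at or below 1st defect threshold
above_td21   = {k: v >= TD21 for k, v in pixels.items()}   # at or above 2nd defect threshold
```

As the text states, the background alone exceeds TWD11, only the mark falls at or below TM11, the defect D11 falls at or below TD11 while the normal workpiece does not, and the defect D21 is the only mark-region value at or above TD21.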
As described above, the setting of the various threshold values in the image processing algorithm of the prior art, and the magnitude relation between the threshold values and the pixel values of the regions to be sorted out, are determined on the basis of two pieces of information in the shot image. The 1st piece of information is the pixel value of each region when the shot image is recorded as a digital image. The 2nd piece of information is the result of comparing the brightness (degree of whiteness) and darkness (degree of blackness) of each region, i.e. the luminance information of each region, obtained when the operator visually observes the shot image. Particularly important in the 2nd piece of information is the comparison of the luminance information of the normal part and the defect in the inspection object region.
In the explanation so far, it has been assumed that the shot images used as the rejected product images and the qualified product images are grayscale images. When the shot image is a color image, the color image is decomposed into the three primary colors R (red), G (green) and B (blue), and a grayscale image is generated for each color. Then, from the three kinds of grayscale images generated from the rejected product image, the operator visually selects the one grayscale image in which the defect can be most clearly distinguished by comparing the luminance information of the regions. The image processing algorithm described above is applied to the selected grayscale image.
(3) Problems of the image processing algorithm of the prior art
The image processing algorithm of the prior art described above has the following problem: a large amount of visual work is required of the operator when applying the image processing algorithm to a shot image. As described above, when setting the various threshold values in the inspection region sorting step, the operator visually observes the shot image and compares the luminance information between the regions. Based on the comparison results, the operator judges whether there exists, inside the inspection object region to be sorted out, a normal region that should be made an exclusion region so that it is not identified as a defect.
For example, in the explanation above of the inspection region sorting step using the rejected product image PD1 shown in Figure 33(a), as shown in Figure 33(b), the pixel value of the workpiece WD1 is 100 and the pixel value of the defect D1 on the workpiece WD1 is 200. Further, the pixel value of the mark MD1 on the workpiece WD1 is 250. Therefore, in order to sort out the defect D1 on the workpiece WD1 correctly, the mark MD1 is excluded as the 1st exclusion region from the workpiece WD1, which is the 1st object candidate region, thereby sorting out the 1st inspection object region. In this work, the role played by the operator's visual observation is very large, and a skilled operator is required. In the case of the rejected product image PD1 shown in Figure 33(a), used as an example in the above explanation, there is one exclusion region and there are two inspection object regions. However, the numbers of exclusion regions and inspection object regions are not limited to these.
Depending on the number of parts constituting the workpiece surface, the parts where defects occur and their arrangement, the pixel values of each part and of the corresponding defects, and so on, the numbers of exclusion regions and inspection object regions may increase further. Moreover, as described above, when the shot image is a color image, the following work is added: the color image is decomposed into the three primary colors R (red), G (green) and B (blue), a grayscale image is generated for each color, and the operator visually compares these three kinds of grayscale images. When the numbers of exclusion regions and inspection object regions increase in this way, or when processing of color camera images becomes necessary and the number of threshold values for sorting regions increases, the time required for the operator's visual observation increases further. At the same time, the number of items the operator must judge increases, and the burden on the operator grows. Owing to this increase in the time required for visual observation and in the operator's burden, the inspection speed and the inspection accuracy decline.
Prior art documents
Patent documents
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2015-4538
Summary of the invention
Technical problems to be solved by the invention
The purpose of the present invention is to provide an image processing method which, when sorting out an inspection object region from a shot image of an object to be inspected having line symmetry, reduces the operator's visual work, reduces burdens such as judgments that would otherwise require a skilled operator, allows threshold values to be set simply and the inspection object region to be selected easily, contributes to an improvement in inspection speed, and is not easily affected by differences in the proficiency of operators; and a defect detection method using this image processing method.
Means for solving the problem
An image processing method as one technical aspect of the present invention is characterized by comprising: a difference calculation step of dividing a monochrome original image, which has a 1st region and a 2nd region that are line-symmetric with respect to a reference line, into the 1st region and the 2nd region by the reference line, and calculating, for each pair of two original pixels arranged at positions in the 1st region and the 2nd region that are symmetric with respect to the reference line, a differential pixel value which is the difference between the pixel values of the two original pixels; and a difference image generation step of generating a difference image in which differential pixels having the differential pixel values are arranged, wherein, for a differential pixel value calculated using the original pixel at a 1st position in the 1st region and the original pixel at a 2nd position in the 2nd region, a differential pixel having that differential pixel value is arranged at the 1st position and the 2nd position of the difference image to generate the difference image.
Further, the image processing method is characterized in that the reference line is a 1st straight line that divides the original image into the 1st region in the upper half and the 2nd region in the lower half, each containing an equal number of original pixels.
Further, the image processing method is characterized in that the reference line is a 2nd straight line that divides the original image into the 1st region in the left half and the 2nd region in the right half, each containing an equal number of original pixels.
Further, the image processing method is characterized in that, in the difference image generation step, a 1st difference image and a 2nd difference image different from the 1st difference image are generated as the difference images from the same original image, and the image processing method further includes an inspection region selection step of selecting the inspection object region in the original image using the 1st difference image and the 2nd difference image.
A defect detection method as one technical aspect of the present invention is a defect detection method for performing defect inspection on an object to be inspected using the image processing method, and is characterized by having a threshold value setting mode and an inspection execution mode. The threshold value setting mode includes: a 1st step of taking, as 1st original images, monochrome images generated from shot images obtained by shooting a plurality of objects to be inspected known to be qualified products, and generating a plurality of qualified product difference images from the plurality of generated 1st original images using the image processing method; and a 2nd step of taking, as 2nd original images, monochrome images generated from shot images obtained by shooting a plurality of objects to be inspected known to be rejected products, and generating a plurality of rejected product difference images from the plurality of generated 2nd original images using the image processing method. In the threshold value setting mode, one inspection region threshold value is set. This inspection region threshold value makes it possible to select, from the differential pixels of the plurality of rejected product difference images, deviating pixels whose differential pixel values deviate from the other differential pixel values by a predetermined value or more and which are arranged at the same position in each rejected product difference image, and makes it impossible to select, from the differential pixels of the plurality of qualified product difference images, the differential pixels arranged at the same position as the deviating pixels as such deviating pixels. In the inspection execution mode, a monochrome image generated from a shot image obtained by shooting the object to be inspected that is the object of defect inspection is taken as a 3rd original image, a difference image is generated from the generated 3rd original image using the image processing method, the inspection object region in the 3rd original image is selected using the difference image and the inspection region threshold value, and defect inspection is executed on the inspection object region.
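A possible form of the deviating-pixel selection described above is sketched below. The deviation criterion (distance from the image median), the data, and all names are invented for illustration; the patent only requires that, under the chosen threshold, deviating pixels appear at the same position in every rejected product difference image and at no position of the qualified product difference images.

```python
def deviating_positions(diff_image, threshold):
    """Positions whose differential pixel value deviates from the image
    median by at least the threshold (a simple stand-in criterion)."""
    flat = sorted(v for row in diff_image for v in row)
    median = flat[len(flat) // 2]
    return {(y, x)
            for y, row in enumerate(diff_image)
            for x, v in enumerate(row)
            if abs(v - median) >= threshold}

# Invented 2x2 difference images: qualified ones contain only noise,
# rejected ones share a large differential value at position (0, 1).
qualified_diffs = [[[2, 3], [1, 2]], [[3, 2], [2, 1]]]
rejected_diffs  = [[[2, 90], [1, 2]], [[3, 88], [2, 1]]]

threshold = 50
rej_hits = [deviating_positions(d, threshold) for d in rejected_diffs]
qual_hits = [deviating_positions(d, threshold) for d in qualified_diffs]
common = set.intersection(*rej_hits)  # same position in every rejected image
```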
Further, the defect detection method is characterized in that the shot image is an achromatic image, and one monochrome image is generated from each shot image.
Further, the defect detection method is characterized in that the shot image is a color image and two or more monochrome images are generated from each shot image; in the 1st step of the threshold value setting mode, each monochrome image is taken as a 1st original image and the plurality of qualified product difference images are generated from the plurality of generated 1st original images; in the 2nd step of the threshold value setting mode, each monochrome image is taken as a 2nd original image and the plurality of rejected product difference images are generated from the plurality of generated 2nd original images; in the threshold value setting mode, the one monochrome image for which the settable range of the inspection region threshold value is largest is selected, and the inspection region threshold value is set for the selected monochrome image; and in the inspection execution mode, from the two or more monochrome images generated from the shot image obtained by shooting the object to be inspected that is the object of defect inspection, the monochrome image of the same kind as the monochrome image selected in the threshold value setting mode is selected as the 3rd original image, the difference image is generated from the generated 3rd original image, the inspection object region is selected using the difference image and the inspection region threshold value, and defect inspection is executed on the inspection object region.
Further, the defect detection method is characterized by further including an inspected object extraction step of extracting the object to be inspected that is the object of defect inspection from the 3rd original image using a template matching method.
Effects of the invention
When the image processing method of the present invention is applied to an image of an inspection object workpiece (an object to be inspected that is the object of defect inspection) that is symmetric about a specific straight line, one threshold value can be set by a simple image processing operation, with almost no step in which the operator visually observes the image of the workpiece and makes judgments, and the inspection object region in the image of the inspection object workpiece can be selected easily. Therefore, unlike the prior art, a skilled operator is not required, and the burden on the operator is reduced. Moreover, the image processing method can be automated by software, and defect inspection of images of inspection object workpieces can be executed easily. Therefore, compared with the defect inspection of the prior art, the inspection speed is greatly improved, and the inspection is not easily affected by differences in the proficiency of operators.
Brief description of the drawings
Fig. 1(a)(b) is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 2 is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 3(a)(b)(c)(d)(e) is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 4(a)(b)(c) is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 5(a)(b)(c)(d)(e) is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 6(a)(b)(c) is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 7(a)(b) is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 8(a)(b)(c)(d)(e) is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 9(a)(b)(c) is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 10(a)(b)(c)(d)(e) is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 11(a)(b)(c) is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 12 is an explanatory diagram of the image processing algorithm of the present invention.
Fig. 13(a)(b) is an explanatory diagram of the image processing algorithm of the present invention.
Figure 14(a)(b) is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 15 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 16 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 17 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 18 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 19 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 20 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 21 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 22 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 23 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 24 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 25 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 26 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 27 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 28 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 29 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 30 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 31 is an explanatory diagram of a defect detection method using the image processing algorithm of the present invention.
Figure 32(a)(b) is an explanatory diagram of the image processing algorithm of the prior art.
Figure 33(a)(b) is an explanatory diagram of the image processing algorithm of the prior art.
Figure 34(a)(b) is an explanatory diagram of the image processing algorithm of the prior art.
Figure 35(a)(b) is an explanatory diagram of the image processing algorithm of the prior art.
Figure 36(a)(b) is an explanatory diagram of the image processing algorithm of the prior art.
Figure 37(a)(b) is an explanatory diagram of the image processing algorithm of the prior art.
Description of reference numerals
WD1, WD2 rejected product workpiece
MD1, MD2 mark
D1, D2, Da, Db defect
WG1 qualified product workpiece
MG1 mark
B1 background
F1, F2 peripheral frame
L1 1st straight line
L2 2nd straight line
L3 3rd straight line
Specific embodiment
Hereinafter, the embodiments of the present invention will be described with reference to the drawings.
(1) Image processing algorithm of the present invention
The basic image processing algorithm of the present invention is explained using Figures 1 to 6. Figure 1(a) is an explanatory diagram in which the rejected product workpiece WD1 captured in the rejected product image PD1 of Figure 33(a) is taken as the original image, i.e. the digital image that is the object of the image processing, and the image processing algorithm of the present invention is applied to this original image. Here, the rejected product workpiece WD1 of Figure 1(a) has been extracted from the rejected product image PD1 described above using a template matching method. The regions on the surface of the rejected product workpiece WD1 have already been explained using Figure 33(a), so their explanation is omitted here. In addition, Figure 1(b) shows, in the form of a table, the pixel value of each region in the original image shown in Figure 1(a). These pixel values are identical to those of the corresponding regions in Figure 33(b). That is, the original image of Figure 1(a) is also a grayscale image identical to Figure 33(a).
First, in Figure 1(a), attention is paid to the regions of the workpiece WD1 and the mark MD1. The defects D1 and D2 exist in these regions, respectively. If the region excluding the defects D1 and D2 is observed, i.e. the region identical to the qualified product workpiece WG1 in the qualified product image PG1 of Figure 32(a), it can be seen that this region has line symmetry. This region identical to the qualified product workpiece WG1 consists of the rejected product workpiece WD1 and the mark MD1 marked on the surface of the rejected product workpiece WD1. Moreover, as described above, the rejected product workpiece WD1 has a rectangular shape, and the mark MD1 is circular. In addition, the intersection of the diagonals of the workpiece WD1 roughly coincides with the center of the mark MD1. It can therefore be seen that the region consisting of the rejected product workpiece WD1 and the mark MD1 is line-symmetric about the two straight lines L1 and L2 shown in Figure 1(a). Specifically, the region is vertically symmetric with the 1st straight line L1 as the reference line, i.e. the axis of symmetry; the 1st straight line L1 divides the original image so that the upper half and the lower half contain equal numbers of pixels. Similarly, the region is horizontally symmetric with the 2nd straight line L2 as the reference line, i.e. the axis of symmetry; the 2nd straight line L2 divides the original image so that the left half and the right half contain equal numbers of pixels.
For an image having a region that is symmetric about one or more axes of symmetry in this way, the image processing algorithm of the present invention can easily select the inspection object region on which defect inspection should be performed. To explain the principle of the image processing algorithm of the present invention simply, 16 pixels arranged in a 4-by-4 square shape as shown in Figure 2 are used. In Figure 2, each pixel has a square shape, and the unique address indicating the position of the pixel is written inside the square. The address is given to each pixel as a two-dimensional representation combining X, the horizontal direction of Figure 2, and Y, the vertical direction. The specific way the addresses are assigned is explained below.
First, to fix the direction in which the address values increase in the X direction and the Y direction, the origin where X = 0 and Y = 0 is taken at the upper left. The X and Y of the two-dimensional address (X, Y) then increase in the directions of the arrows x and y of Figure 2, respectively. That is, the address of the pixel at the upper left corner is (0, 0), and the address of the pixel to its right is (1, 0), obtained by increasing the X address value by 1 from (0, 0). Similarly, the address of the pixel below the pixel at the upper left corner is (0, 1), obtained by increasing the Y address value by 1 from (0, 0). Changing the X address and Y address values in this way, the address of the pixel at the lower right corner, farthest from the origin, is (3, 3), obtained by increasing the X address value by 3 and the Y address value by 3 from (0, 0). In the following explanation, pixel addresses are shown by this method.
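The addressing scheme above can be checked with a few lines of code. This is a trivial illustrative sketch: the origin (0, 0) is at the upper left, X increases to the right, Y increases downward, over the 4-by-4 grid of Figure 2.

```python
# Enumerate the 16 addresses of the 4x4 grid in raster order
# (X varies fastest, matching the row-by-row reading of Figure 2).
WIDTH = HEIGHT = 4
addresses = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
```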
Figures 3 to 6 are diagrams explaining the principle of the image processing algorithm of the present invention. Figure 3(a) is a diagram in which the qualified product workpiece WG1 in the qualified product image PG1 of Figure 32(a) is modeled as the original image, using 16 pixels arranged in a 4-by-4 square shape as shown in Figure 2. In each pixel, a pixel value corresponding to the table of Figure 32(b) is written. Note that although the shape of the qualified product workpiece WG1 in Figure 32(a) is rectangular, it is modeled as a square in Figure 3(a) for simplicity. Also, although the mark MG1 in Figure 32(a) is circular, it is modeled by the 4 pixels at the addresses (1, 1), (2, 1), (1, 2), (2, 2), located at the center of the 16 pixels of Figure 3(a). In this case, the 12 pixels surrounding these 4 central pixels are the pixels obtained by modeling the qualified product workpiece WG1 of Figure 32(a).
Here, in Figure 32(b), the pixel value of the qualified product workpiece WG1 is 100 and the pixel value of the mark MG1 is 250. In Figure 3(a), however, several values scattered with some deviation around 100, the pixel value of the qualified product workpiece WG1, are assigned to the 12 pixels, and several values scattered with some deviation around 250, the pixel value of the mark MG1, are assigned to the 4 pixels. Such deviation of the pixel values corresponds, in an actual image, to the deviation produced by noise, for example.
Furthermore, in Figure 3(a), the brightness (degree of whiteness) and darkness (degree of blackness) of each of the 16 pixels are shown visually and relatively in accordance with their pixel values. For example, as described above, the pixels of Figure 3(a) corresponding to the mark MG1 of Figure 32(a) are the 4 central pixels at the addresses (1, 1), (2, 1), (1, 2), (2, 2). The pixel values of these pixels are around 250, which is visually white. Thus, these 4 central pixels are drawn as white squares with no pattern on the base. On the other hand, as described above, the pixels of Figure 3(a) corresponding to the qualified product workpiece WG1 of Figure 32(a) are the 12 pixels surrounding the 4 central pixels. The pixel values of these pixels are around 100, which is visually dark gray. Thus, these 12 pixels are drawn as squares in which the part of the base other than the number indicating the pixel value carries a pattern formed by many very short horizontal line segments. Figure 3(a), which represents these 16 pixels visually, can be understood visually in the same way as Figure 1(a): the central part is white and its surroundings are dark gray. Hereafter, in the explanation of the image processing algorithm of the present invention, the relative brightness and/or darkness of each pixel is shown visually by the same method for the models consisting of 16 pixels other than Figure 3(a).
In addition, for the simplicity of the text, the pixel arranged at the address (a, b) is written as the pixel of (a, b) in the following explanation. Similarly, the pixel value of the pixel arranged at the address (a, b) is written as the pixel value of (a, b). Here, in Figure 3(a), the 1st straight line L1 and the 2nd straight line L2, which serve as axes of symmetry, are drawn overlaid in the same way as in Figure 1. Using this Figure 3(a), the image processing algorithm of the present invention is explained below in order.
Image processing algorithm of the invention is schemed suitable for having as described above about the symmetry axis as reference line The 1st straight line L1 or the 2nd straight line L2 in 3 (a) become symmetrical region image (by the region of a side be known as the 1st region, The region of another party is known as the 2nd region).Here, the case where in Fig. 3 (a) using the 1st straight line L1 as symmetry axis, concern The symmetry of the pixel value of each pixel.Such as positioned at (0,0) pixel value 100 pixel and be located at the pixel value 103 of (0,3) Pixel is configured at symmetric position.Similarly, for example, positioned at (2,1) pixel value 255 pixel and be located at (2,2) pixel value 253 pixel is also configured at symmetric position.The pixel for constituting original image is named as original pixel later.
If the set of 16 original pixels of Fig. 3(a) is regarded as a square sheet of paper, then folding that sheet with the 1st straight line L1 as the crease superimposes the original pixels at symmetric positions on one another. This operation is called up-down folding. The image processing algorithm of the present invention first takes each pair of original pixels superimposed by the up-down folding, that is, each pair of original pixels arranged at positions symmetric about the 1st straight line L1, and calculates from their pixel values a differential pixel value, their difference, for all original pixels. In other words, treating the two original pixels superimposed by the up-down folding as one pair, a differential pixel value is calculated for each of all such pairs. This step is called the difference calculation step. After the difference calculation step, a differential pixel having the calculated differential pixel value is arranged at the same positions as the two original pixels used in calculating that differential pixel value, thereby generating a difference image (the 1st difference image). For example, when a differential pixel value has been calculated from the pair consisting of the original pixel at a 1st position in the 1st region and the original pixel at a 2nd position in the 2nd region, a differential pixel having that differential pixel value is arranged at both the 1st position and the 2nd position of the difference image. This step is called the difference image generation step.
Fig. 3(b) shows the 1st difference image generated by executing the difference calculation step and the difference image generation step on the original image of Fig. 3(a). For example, when the difference calculation step is executed on the original image of Fig. 3(a) using the original pixels at (0,0) and (0,3) mentioned above, the resulting differential pixel value is 103−100=3. Accordingly, when the difference image generation step is executed using the differential pixel having this differential pixel value 3, the pixel values at (0,0) and (0,3) in the 1st difference image of Fig. 3(b) become 3. Similarly, when the difference calculation step is executed using the original pixels at (2,1) and (2,2) mentioned above, the resulting differential pixel value is 255−253=2, and when the difference image generation step is executed using the differential pixel having this differential pixel value 2, the pixel values at (2,1) and (2,2) in the 1st difference image of Fig. 3(b) become 2.
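As a sketch, the difference calculation step and the difference image generation step for the up-down folding can be written as follows. Only the pixel values quoted in the text from Fig. 3(a) (100 and 103 at (0,0)/(0,3), 255 and 253 at (2,1)/(2,2), 110 at (3,0), 254 at (1,1), 251 at (1,2)) are taken from the patent; the remaining cells of the 4×4 grid, and the function name, are illustrative assumptions.

```python
# Sketch of the difference calculation and difference image generation steps.
# The 4x4 grid stands in for Fig. 3(a): only the pixel values quoted in the
# text are from the patent; the other cells are hypothetical fill-ins.

def fold_difference(image, axis):
    """Return the difference image for one symmetry axis.

    image -- list of rows, indexed image[y][x]
    axis  -- 'horizontal' (up-down folding) or 'vertical' (left-right folding)
    """
    h, w = len(image), len(image[0])
    diff = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Partner pixel at the position symmetric about the chosen axis.
            sy, sx = (h - 1 - y, x) if axis == "horizontal" else (y, w - 1 - x)
            # The differential pixel value (larger minus smaller) is placed at
            # both symmetric positions, so both members of a pair get it.
            diff[y][x] = abs(image[y][x] - image[sy][sx])
    return diff

# Hypothetical model image; addresses are (x, y) as in Fig. 2, so image[y][x].
original = [
    [100, 101, 102, 110],   # y = 0
    [105, 254, 255, 104],   # y = 1
    [107, 251, 253, 106],   # y = 2
    [103, 108, 103, 105],   # y = 3
]

d1 = fold_difference(original, "horizontal")   # 1st difference image
print(d1[0][0], d1[3][0])   # pair (0,0)/(0,3): |103 - 100| = 3 at both
print(d1[1][2], d1[2][2])   # pair (2,1)/(2,2): |255 - 253| = 2 at both
```

The same function with `axis="vertical"` performs the left-right folding about a vertical symmetry axis.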
In the 1st difference image shown in Fig. 3(b), the pixel values of all 16 pixels are small values very close to 0, the minimum pixel value. Such small pixel values arise because, as described above, Fig. 3(b) is generated by arranging the differential pixel of each pair of original pixels located at positions symmetric about the 1st straight line L1 in Fig. 3(a) at the same positions as those two original pixels. Since the two original pixels are at symmetric positions, their pixel values are very close, and so their differential pixel value is a value very close to 0. Viewed visually, the 16 pixels of Fig. 3(b) with such pixel values are black. Correspondingly, all 16 pixels in Fig. 3(b) are drawn as squares whose background, except for the numeral indicating the pixel value, is filled with a pattern of many oblique lines running toward the upper right.
Next, the image processing algorithm of the present invention considers the other symmetry axis in Fig. 3(a), the 2nd straight line L2. The 2nd straight line L2 divides the original image into a left half and a right half, each having 8 pixels. The 8 original pixels on the left side of the 2nd straight line L2 and the 8 original pixels on the right side are arranged at mutually symmetric positions. For example, the original pixel with pixel value 100 at (0,0) and the original pixel with pixel value 110 at (3,0) are arranged at symmetric positions. Similarly, the original pixel with pixel value 254 at (1,1) and the original pixel with pixel value 255 at (2,1) are also arranged at symmetric positions.
Here too, as in the case of symmetry about the 1st straight line L1 described above, if the set of 16 original pixels of Fig. 3(a) is regarded as a square sheet of paper, folding that sheet with the 2nd straight line L2 as the crease superimposes the original pixels at symmetric positions on one another. This operation is called left-right folding. For each pair of original pixels superimposed by the left-right folding, that is, for the pixel values of the two original pixels arranged at positions symmetric about the 2nd straight line L2, the difference calculation step of calculating the differential pixel value is executed for all original pixels. After the difference calculation step, the difference image generation step is executed, in which a differential pixel having the calculated differential pixel value is arranged at the same positions as the two original pixels used in calculating that differential pixel value, thereby generating a difference image (the 2nd difference image).
Fig. 3(c) shows the 2nd difference image generated by executing the difference calculation step and the difference image generation step on the original image of Fig. 3(a). For example, when the difference calculation step is executed using the original pixels at (0,0) and (3,0) mentioned above, the resulting differential pixel value is 110−100=10. Accordingly, when the difference image generation step is executed using the differential pixel having this differential pixel value 10, the pixel values at (0,0) and (3,0) in the 2nd difference image of Fig. 3(c) become 10. Similarly, when the difference calculation step is executed using the original pixels at (1,1) and (2,1) mentioned above, the resulting differential pixel value is 255−254=1, and the pixel values at (1,1) and (2,1) in the 2nd difference image of Fig. 3(c) become 1.
In the 2nd difference image shown in Fig. 3(c), the pixel values of all 16 pixels are likewise small values very close to 0, the minimum pixel value. As in the explanation of Fig. 3(b) above, this is because Fig. 3(c) is generated by arranging the differential pixel of each pair of original pixels located at positions symmetric about the 2nd straight line L2 in Fig. 3(a) at the same positions as those two original pixels. Since the two original pixels are at symmetric positions, their pixel values are very close, and their differential pixel value is therefore very close to 0. Viewed visually, the 16 pixels of Fig. 3(c) with such pixel values are black. Correspondingly, as in Fig. 3(b), all 16 pixels in Fig. 3(c) are drawn as squares whose background, except for the numeral indicating the pixel value, is filled with a pattern of many oblique lines running toward the upper right.
Here, the specific method by which the up-down folding and the left-right folding in the difference calculation step above are executed in software is explained supplementarily. The 16 pixels in Fig. 3(a) have each been assigned an address as shown in Fig. 2.
Accordingly, when the up-down folding is executed, the addresses of one end and the other end of the 1st straight line L1 in Fig. 3(a) are first determined on the basis of Fig. 2, and these addresses are then passed as input to the software that executes the difference calculation step (hereinafter referred to as the difference calculation software). The difference calculation software can thereby identify the symmetry axis, and can judge from the addresses of the two ends whether the folding is up-down folding.
Similarly, when the left-right folding is executed, the addresses of one end and the other end of the 2nd straight line L2 in Fig. 3(a) are determined on the basis of Fig. 2 and passed as input to the difference calculation software. The difference calculation software can thereby identify the symmetry axis, and can judge from the addresses of the two ends whether the folding is left-right folding.
The address of one end of the 1st straight line L1 in Fig. 3(a) corresponds to the position folded between the 2 pixels (0,1) and (0,2). This address is therefore (0,1.5), which is nothing other than the median of the addresses of the pixels (0,0) and (0,3) in the direction indicated by the arrow Y in Fig. 2.
Similarly, the address of the other end of the 1st straight line L1 in Fig. 3(a) corresponds to the position folded between the 2 pixels (3,1) and (3,2). This address is therefore (3,1.5), which is nothing other than the median of the addresses of the pixels (3,0) and (3,3) in the direction indicated by the arrow Y in Fig. 2.
When the 2 addresses (0,1.5) and (3,1.5) determined in this way are passed as input to the difference calculation software, the software, which holds the addresses of Fig. 2 in the form of a table, can refer to that table and generate the 1st straight line L1 as the symmetry axis. Further, because this symmetry axis is horizontal, the difference calculation software judges that up-down folding is to be executed; for example, it calculates the difference between the pixel values at (0,0) and (0,3) in Fig. 3(a) described above, and likewise the difference between the pixel values at (2,1) and (2,2).
Similarly, for the 2nd straight line L2 in Fig. 3(a), the median of the addresses (0,0) and (3,0), namely (1.5,0), is passed as input to the difference calculation software as the address of one end, and the median of the addresses (0,3) and (3,3), namely (1.5,3), is passed as the address of the other end. The difference calculation software that has received these addresses can, as in the case of the 1st straight line L1 above, refer to the address table of Fig. 2 and generate the 2nd straight line L2 as the symmetry axis. Further, because this symmetry axis is vertical, the difference calculation software judges that left-right folding is to be executed; for example, it calculates the difference between the pixel values at (0,0) and (3,0) in Fig. 3(a) described above, and likewise the difference between the pixel values at (1,1) and (2,1).
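The judgment described in the last two paragraphs can be sketched as follows. The endpoint addresses are the ones derived in the text; the function name and return strings are illustrative assumptions.

```python
# Sketch: judging the folding direction from the two endpoint addresses of a
# symmetry axis, as the difference calculation software is described as doing.

def folding_direction(end_a, end_b):
    """Classify a symmetry axis given its endpoint addresses (x, y)."""
    (xa, ya), (xb, yb) = end_a, end_b
    if ya == yb:        # horizontal axis -> fold the image top onto bottom
        return "up-down folding"
    if xa == xb:        # vertical axis -> fold the image left onto right
        return "left-right folding"
    raise ValueError("axis is neither horizontal nor vertical")

# 1st straight line L1: endpoints (0, 1.5) and (3, 1.5) -> horizontal.
print(folding_direction((0, 1.5), (3, 1.5)))   # up-down folding
# 2nd straight line L2: endpoints (1.5, 0) and (1.5, 3) -> vertical.
print(folding_direction((1.5, 0), (1.5, 3)))   # left-right folding
```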
After the 1st difference image shown in Fig. 3(b) and the 2nd difference image shown in Fig. 3(c) have been generated from the original image of Fig. 3(a) by the above steps, attention is next turned to the pixel values of the 16 pixels constituting each difference image. Specifically, among these 16 pixels, the pixels whose pixel values are significantly separated from the majority of the pixel values (such values are hereinafter referred to as outlier pixel values, and such pixels as outlier pixels) are selected. This is explained in order below using Fig. 3(d) to Fig. 4(c).
First, the pixel values of the 16 differential pixels shown in Fig. 3(b) are arranged in descending order. The result is shown as Fig. 3(d) directly below Fig. 3(b). Fig. 3(d) consists of 3 layers arranged one above the other. Numerical values are written in each layer, and the meaning of the values of each layer is noted as a title at its left end. Downward arrows are drawn between the values of adjacent layers to aid understanding of the step that generates the values of the lower layer from the values of the layer above it.
The upper layer of Fig. 3(d), titled "differential pixel values in descending order", is the result of arranging the pixel values of the 16 differential pixels shown in Fig. 3(b) above in descending order. The pixel values are of the 4 kinds 7, 3, 2, and 1; the maximum value 7 is placed at the left end, and progressively smaller pixel values are placed toward the right. The minimum value 1 is placed at the right end.
Next, the differences between adjacent differential pixel values in the upper layer are calculated and arranged in the middle layer. The middle layer is titled "difference values of adjacent differential pixel values", and these difference values are written there. The value 4 at the left end of the middle layer is the difference between the 7 at the left end of the upper layer and the 3 adjacent to its right. To indicate this, the pixel values 7 and 3 of the upper layer used in calculating the difference value 4 are each linked to the difference value 4 of the middle layer by a downward arrow. The same applies to the other values in the middle layer.
After the above steps, the maximum value among the values in the middle layer is selected. This selection is the step for selecting the outlier pixels mentioned above. In the case of Fig. 3(d), the value 4 at the left end of the middle layer is the maximum. The 2 pixel values of the upper layer that produce this maximum difference value are the 7 at the left end and the 3 adjacent to it. The interval between 7 and 3 is the interval in which the difference value of the middle layer is maximum; if the median of 7 and 3 is taken as a threshold within this interval, the outlier pixels can be selected. Indeed, looking at the 4 values written in the upper layer of Fig. 3(d), the 3 values other than the 7 at the left end are 3, 2, and 1, and the differences among these 3 values are comparatively small. In contrast, only the 7 at the left end is significantly separated, and it can be regarded as an outlier pixel value.
After the interval in which the difference value of the middle layer is maximum has been selected in this way, the median of the differential pixel values at the two ends of that interval is calculated and set as the threshold for selecting the outlier pixels. The result of this threshold calculation is shown in the lower layer of Fig. 3(d). The lower layer, titled "threshold in the interval of maximum difference value", records the value 5 (with fractions rounded up) as the median of the upper-layer pixel values 7 and 3, which were selected from the middle layer as producing the maximum difference value 4.
After the threshold has been set by the above steps, the pixel values larger than this threshold are selected from the pixel values of the upper layer. The selected pixel values are the outlier pixel values, and the pixels having them are the outlier pixels. As is clear from the above explanation, the selected pixel value is 7.
As a result, the pixel value "7" of the outlier pixel is separated from the other pixel values "3, 2, 1" by at least a predetermined value. An example of the predetermined value is the maximum difference value 4. In this case, the outlier pixel value "7" is separated from the pixel value "3" by 4, from the pixel value "2" by 5, and from the pixel value "1" by 6; it is thus separated from every other pixel value by the predetermined value (namely 4) or more.
In the same way as the steps of Fig. 3(d) applied to the 1st difference image shown in Fig. 3(b) above, a threshold is also set for the 2nd difference image shown in Fig. 3(c) by the steps of Fig. 3(e), and the outlier pixel value 10 is selected from the pixels of Fig. 3(c). The relationship between Fig. 3(c) and Fig. 3(e) is the same as the relationship between Fig. 3(b) and Fig. 3(d) described above. The meanings of the values in the upper, middle, and lower layers of Fig. 3(e), and of the arrows between the layers, are also the same as in Fig. 3(d). A detailed explanation of Fig. 3(e) is therefore omitted.
The outlier pixels selected in Fig. 3(b) and Fig. 3(c) by the above steps are shown in Fig. 4(a) and Fig. 4(b), respectively, with their pixel values enclosed in double frames. After the outlier pixels have been selected in the 2 difference images in this way, the next step is to select the positions that are outlier pixels in common in the 2 difference images. The selected common positions constitute the region of the original image in which a defect may exist, that is, the inspection target region on which defect inspection should be performed.
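Selecting the common part of the significantly separated pixels of the two difference images amounts to intersecting their sets of positions. A minimal sketch, with hypothetical position sets standing in for Fig. 4(a) and Fig. 4(b):

```python
# Sketch: the inspection target region is the set of positions selected as
# outlier pixels in every difference image. Positions are (x, y) tuples; the
# example sets are hypothetical placeholders for the non-defective case.

def inspection_region(*outlier_position_sets):
    """Intersect the outlier-pixel positions of all difference images."""
    region = set(outlier_position_sets[0])
    for s in outlier_position_sets[1:]:
        region &= set(s)
    return region

# For the non-defective model, the outlier positions of the two difference
# images share no position, so the region is empty: nothing to inspect.
outliers_1 = {(2, 0)}          # e.g. from the 1st difference image
outliers_2 = {(0, 1)}          # e.g. from the 2nd difference image
print(inspection_region(outliers_1, outliers_2))   # set()
```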
Here, there is no common outlier-pixel position in Fig. 4(a) and Fig. 4(b). The reason is that the original image of Fig. 3(a) is a model of the non-defective workpiece WG1 shown in Fig. 32(a), a workpiece having no defect, and is line-symmetric about both of its 2 symmetry axes, the 1st straight line L1 and the 2nd straight line L2. Accordingly, the original image of Fig. 3(a) contains no inspection target region on which defect inspection should be performed.
Fig. 4(c) shows the image obtained by enclosing in double frames the pixel values of the original pixels of Fig. 3(a) located at the same positions as the common outlier-pixel positions indicated by double frames in Fig. 4(a) and Fig. 4(b). As described above, the outlier pixels indicated by double frames in the 2 difference images of Fig. 4(a) and Fig. 4(b) have no common position. Therefore, no pixel whose pixel value is enclosed in a double frame exists in Fig. 4(c), which is thus identical to Fig. 3(a).
From the above explanation using the model of Fig. 3(a), it can be seen that when the image processing algorithm of the present invention is applied to the original image of a workpiece having symmetric regions, such as the non-defective workpiece WG1 shown in the non-defective image PG1 of Fig. 32(a), no inspection target region on which defect inspection should be performed is selected.
(2) Application example of the image processing algorithm of the present invention
Next, the case where the image processing algorithm of the present invention is applied to the image of a workpiece having symmetric regions, such as the defective workpiece WD1 shown in the defective image PD1 of Fig. 33(a), is explained. Fig. 5(a), like Fig. 3(a) above, is a diagram obtained by modeling the defective workpiece WD1 in the defective image PD1 of Fig. 33(a) using 16 pixels as the original image. A pixel value corresponding to the table of Fig. 33(b) is written in each pixel.
Here, the differences between Fig. 5(a) and Fig. 3(a), that is, the representation of the defects in Fig. 5(a), are explained. Fig. 5(a) differs from Fig. 3(a) in the pixel values of the pixels arranged at addresses (0,0) and (2,2). The pixel value at address (0,0) in Fig. 3(a) is 100, corresponding to the pixel value of the non-defective workpiece WG1 shown in Fig. 32(b). In contrast, the pixel value at (0,0) in Fig. 5(a) is 200, corresponding to the pixel value of the defect D1 on the defective workpiece WD1 shown in Fig. 33(b). Similarly, the pixel value at address (2,2) in Fig. 3(a) is 253, corresponding to the pixel value of the mark MG1 shown in Fig. 32(b). In contrast, the pixel value at address (2,2) in Fig. 5(a) is 180, corresponding to the pixel value of the defect D2 on the mark MD1 shown in Fig. 33(b).
Further, in Fig. 5(a), as in Fig. 3(a), the 12 pixels surrounding the 4 central pixels at addresses (1,1), (2,1), (1,2), and (2,2) constitute the defective workpiece WD1. The pixel values of the defective workpiece WD1 in Fig. 5(a) are identical to those in Fig. 3(a) except for the above-mentioned (0,0), which represents a defect. Similarly, the pixel values of the mark MD1 are identical to those in Fig. 3(a) except for the above-mentioned (2,2), which represents a defect. The relationship between the pixel value of each pixel shown in Fig. 5 and the background pattern of its square is the same as in Fig. 3.
Here, as described above, comparing the pixel values of Fig. 5(a) with those of Fig. 3(a), the pixel values at (0,0) and (2,2) in Fig. 5(a) are 200 and 180, respectively, differing from Fig. 3(a). Viewed visually, these pixel values appear as a light gray, as shown in Fig. 33(b). Pixels that appear visually as this light gray do not exist in Fig. 32(a), the non-defective image, and consequently do not exist in Fig. 3(a), which is obtained by modeling Fig. 32(a). Therefore, to represent this light gray, the pixels at (0,0) and (2,2) in Fig. 5(a) are drawn as squares whose background, except for the numeral indicating the pixel value, is filled with a pattern of many dots.
Fig. 5(b) shows the 1st difference image generated by executing the above-described difference calculation step and difference image generation step on the original image of Fig. 5(a). Fig. 5(c) shows the 2nd difference image generated by executing the same steps. Here, Fig. 5(b) and Fig. 5(c) are compared with Fig. 3(b) and Fig. 3(c), respectively. First, comparing Fig. 5(b) with Fig. 3(b), the two are seen to differ. The pixel values of the 16 pixels in Fig. 3(b) are all, as described above, very close to 0. In contrast, among the 16 pixel values in Fig. 5(b), the pixel values at (0,0) and (0,3) are 97, and the pixel values at (2,2) and (2,1) are 75. Compared with the pixel values of the other pixels, which are very close to 0, these pixel values are considerably larger than 0, that is, pixel values separated from 0.
The reason the pixel values of these 4 pixels are separated from 0 is explained below. In the original image of Fig. 5(a), the pixel value at (0,0), which corresponds to a defect, is 200. The pixel arranged at the position symmetric to the pixel at (0,0) about the 1st straight line L1 as the symmetry axis is the pixel at (0,3), which does not correspond to a defect and whose pixel value is 103. The 1st difference image of Fig. 5(b) is obtained by arranging the differential pixel value of these 2 pixel values at the positions (0,0) and (0,3) of the original image. Because the difference between the pixel value at (0,0), which corresponds to a defect, and the pixel value at (0,3), which does not, is large, the pixel values at (0,0) and (0,3) of Fig. 5(b) become large and are separated from 0. Likewise, in the original image of Fig. 5(a), the pixel value at (2,2), which corresponds to a defect, is 180. The pixel arranged at the position symmetric to the pixel at (2,2) about the 1st straight line L1 is the pixel at (2,1), which does not correspond to a defect and whose pixel value is 255. The 1st difference image of Fig. 5(b) is obtained by arranging the differential pixel value of these 2 pixel values at the positions (2,2) and (2,1) of the original image. Because the difference between the pixel value at (2,2), which corresponds to a defect, and the pixel value at (2,1), which does not, is large, the pixel values at (2,2) and (2,1) of Fig. 5(b) become large and are separated from 0.
Next, comparing Fig. 5(c) with Fig. 3(c), the two again differ. As described above, the pixel values of the 16 pixels in Fig. 3(c) are all very close to 0. In contrast, among the 16 pixel values in Fig. 5(c), the pixel values at (0,0) and (3,0) are 90, and the pixel values at (1,2) and (2,2) are 71. Compared with the pixel values of the other pixels, which are very close to 0, these are considerably larger pixel values, that is, pixel values separated from 0.
The reason the pixel values of these 4 pixels are separated from 0 is explained below. In the original image of Fig. 5(a), the pixel value at (0,0), which corresponds to a defect, is 200. The pixel arranged at the position symmetric to the pixel at (0,0) about the 2nd straight line L2 as the symmetry axis is the pixel at (3,0), which does not correspond to a defect and whose pixel value is 110. The 2nd difference image of Fig. 5(c) is obtained by arranging the differential pixel value of these 2 pixel values at the positions (0,0) and (3,0) of the original image. Because the difference between the pixel value at (0,0), which corresponds to a defect, and the pixel value at (3,0), which does not, is large, the pixel values at (0,0) and (3,0) of Fig. 5(c) become large and are separated from 0. Likewise, in the original image of Fig. 5(a), the pixel value at (2,2), which corresponds to a defect, is 180. The pixel arranged at the position symmetric to the pixel at (2,2) about the 2nd straight line L2 is the pixel at (1,2), which does not correspond to a defect and whose pixel value is 251. The 2nd difference image of Fig. 5(c) is obtained by arranging the differential pixel value of these 2 pixel values at the positions (2,2) and (1,2) of the original image. Because the difference between the pixel value at (2,2), which corresponds to a defect, and the pixel value at (1,2), which does not, is large, the pixel values at (2,2) and (1,2) of Fig. 5(c) become large and are separated from 0.
From the above, it can be seen that when the difference calculation step and the difference image generation step are executed, the differential pixel values generated from the pixel values of a region in which a defect may exist are significantly separated from the differential pixel values generated from the pixel values of regions in which no defect exists. Using this fact, if the outlier pixels having outlier pixel values are selected in the difference images of Fig. 5(b) and Fig. 5(c), it can be judged that a defect may exist at the positions in the original image where the original pixels that produced those outlier pixels are arranged. This step of selecting the outlier pixel values corresponds to the step, described above using Fig. 3(d) and Fig. 3(e), of setting the threshold in the interval in which the difference value of adjacent differential pixel values is maximum.
Here, the results of applying the same steps as in Fig. 3(d) and Fig. 3(e) to Fig. 5(b) and Fig. 5(c) are shown in Fig. 5(d) and Fig. 5(e). The notation of Fig. 5(d) and Fig. 5(e) is the same as that of Fig. 3(d) and Fig. 3(e), respectively, so a detailed explanation of Fig. 5(d) and Fig. 5(e) is omitted. The outlier pixel values selected from the pixels of Fig. 5(b) using the threshold recorded in the lower layer of Fig. 5(d) are 75 and 97. Similarly, the outlier pixel values selected from the pixels of Fig. 5(c) using the threshold recorded in the lower layer of Fig. 5(e) are 71 and 90.
The outlier pixels selected by the above steps are shown in Fig. 6(a) and Fig. 6(b), with their pixel values enclosed in double frames in Fig. 5(b) and Fig. 5(c), respectively. After the outlier pixels have been selected in the 2 difference images in this way, the next step is to select the positions that are outlier pixels in common in the 2 difference images. The common outlier-pixel positions in Fig. 6(a) and Fig. 6(b) are (0,0) and (2,2).
Fig. 6(c) shows the image obtained by enclosing in double frames the pixel values of the original pixels of Fig. 5(a) arranged at the same positions as these common positions. The double-framed pixels are identical to the pixels corresponding to the defects in Fig. 5(a). That is, when the image processing algorithm of the present invention is applied to the original image of the defective workpiece WD1 shown in the defective image PD1 of Fig. 33(a), which is symmetric except for the defects D1 and D2, the inspection target region on which defect inspection should be performed can be selected easily. Comparing these steps with the prior-art steps for sorting out the inspection target region described above shows that the number of steps is greatly reduced. It can also be seen that the steps in which an operator visually judges the original image are greatly reduced. These are the effects produced by using the image processing algorithm of the present invention when the original image has regions that are line-symmetric about a symmetry axis.
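The whole procedure for the defective model can be sketched end to end. Only the pixel values quoted in the text from Fig. 5(a) (200 at (0,0), 180 at (2,2), and the symmetric partners 103, 255, 110, and 251) are taken from the patent; the remaining cells are illustrative fill-ins, so the intermediate difference values are illustrative, while the resulting region matches the (0,0) and (2,2) of the text.

```python
import math

def fold_difference(image, axis):
    """Difference image for one symmetry axis ('horizontal' or 'vertical')."""
    h, w = len(image), len(image[0])
    return [[abs(image[y][x] -
                 (image[h - 1 - y][x] if axis == "horizontal"
                  else image[y][w - 1 - x]))
             for x in range(w)] for y in range(h)]

def outlier_positions(diff):
    """Positions (x, y) whose values exceed the max-gap midpoint threshold,
    following the three-layer procedure of Fig. 3(d)."""
    desc = sorted({v for row in diff for v in row}, reverse=True)  # value kinds
    gaps = [desc[i] - desc[i + 1] for i in range(len(desc) - 1)]
    i = gaps.index(max(gaps))
    t = math.ceil((desc[i] + desc[i + 1]) / 2)
    return {(x, y) for y, row in enumerate(diff)
            for x, v in enumerate(row) if v > t}

# Model of Fig. 5(a): the Fig. 3(a) model with defects 200 at (0,0) and
# 180 at (2,2). Only the quoted values are from the text; the rest are
# hypothetical fill-ins. Addresses are (x, y), so the grid is indexed [y][x].
defective = [
    [200, 101, 102, 110],   # y = 0
    [105, 254, 255, 104],   # y = 1
    [107, 251, 180, 106],   # y = 2
    [103, 108, 103, 105],   # y = 3
]

region = (outlier_positions(fold_difference(defective, "horizontal")) &
          outlier_positions(fold_difference(defective, "vertical")))
print(sorted(region))   # [(0, 0), (2, 2)] -> the inspection target region
```

With these values, the up-down folding flags (0,0)/(0,3) and (2,1)/(2,2), the left-right folding flags (0,0)/(3,0) and (1,2)/(2,2), and their intersection singles out exactly the two defect positions.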
In the above description, the original image used as the example was, as in Fig. 3(a), a model of a square workpiece bearing a square mark with a small circle at its center, a shape symmetric both up-down and left-right. In this case there are 2 symmetry axes as described above, and the number of difference images generated from the original image is 2, corresponding to the number of symmetry axes. However, the number of symmetry axes is not limited to 2 in the present invention. Letting C be a natural number, if the number of symmetry axes is C, the number of difference images generated by the above difference calculation step and difference image generation step is C. In this case, outlier pixels are selected in each of the C difference images, and the positions that are outlier pixels in common in all of them are set as the inspection target region of the original image on which defect inspection should be performed.
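For C symmetry axes, the common part is simply the intersection over the C sets of selected positions. A minimal sketch with hypothetical sets for C = 3:

```python
from functools import reduce

# Sketch: with C symmetry axes there are C difference images, and the
# inspection target region is the set of positions selected as outlier
# pixels in all C of them. The three sets below are hypothetical.

def common_outliers(outlier_sets):
    """Intersect outlier-pixel positions over any number C of difference images."""
    return reduce(lambda a, b: a & b, outlier_sets)

c_sets = [
    {(0, 0), (2, 2), (3, 1)},   # selected for axis 1
    {(0, 0), (2, 2), (1, 3)},   # selected for axis 2
    {(0, 0), (2, 2)},           # selected for axis 3
]
print(sorted(common_outliers(c_sets)))   # [(0, 0), (2, 2)]
```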
(3) Workpiece conveyance and image processing of the present invention: using the 1st straight line L1 and the 2nd straight line L2
However, in order to obtain captured images of multiple workpieces, the workpieces must be conveyed in sequence by a conveying unit to a position where the imaging device can capture them. At this point, even when the workpiece has the symmetry axes described above, depending on the outer shape of the workpiece and the arrangement of features on its surface, the method of conveying the workpiece in the conveying unit may prevent the direction of the symmetry axis of the workpiece from being fixed to a single direction across the multiple captured images. An example of this is explained using Fig. 7. For simplicity, a non-defective workpiece having 1 symmetry axis is used in the explanation.
Fig. 7(a) is a diagram showing the shape and size of the original image generated by capturing an image of the non-defective workpiece WG2 (hereinafter referred to as the workpiece WG2) and then extracting the workpiece WG2 using the template matching method described above. The workpiece WG2 is a square with side a, and a circular mark MG2 is marked so as to touch one side at its midpoint P0. Between the length a/2 from the corner of the workpiece WG2 forming one end of that side to the midpoint P0, and the diameter d of the mark MG2, the relation d < a/2 holds. As in Fig. 1(a), the 1st straight line L1 and the 2nd straight line L2 are drawn superimposed on this original image. In Fig. 7(a), the original image is clearly not line-symmetric about the 1st straight line L1, but is line-symmetric about the 2nd straight line L2. That is, the symmetry axis is the 2nd straight line L2.
Here, the conveying unit that conveys workpiece WG2 to the position where the imaging unit can shoot it is described using Fig. 7(b). Fig. 7(b) shows, from above, conveying unit T1 conveying workpiece WG2. Conveying unit T1 is installed horizontally and has a long, straight, linear feeder F1 that conveys workpieces placed on it and has substantially parallel edges E1 and E2 on both sides. Mechanisms that prevent a placed workpiece WG2 from protruding outside feeder F1 are sometimes attached at edges E1 and E2. Workpiece WG2 is placed on the upper surface of feeder F1 so that the face shown in Fig. 7(a) faces upward and two opposing sides of that face are substantially parallel to edges E1 and E2. A driving mechanism (not shown) then moves feeder F1 in the direction of arrow X1 shown in Fig. 7(b), whereby workpiece WG2 is conveyed along a straight path. The imaging unit (not shown) is installed above conveying unit T1; when workpiece WG2 is conveyed to the position below it, the upper surface of workpiece WG2 can be shot to obtain the shot image of Fig. 7(a).
In addition, Fig. 7(b) illustrates the four orientations W1 to W4 of workpiece WG2 placed on conveying unit T1. As described above, workpiece WG2 is square, and when it is placed on conveying unit T1 it is positioned so that the face shown in Fig. 7(a) faces upward and two opposing sides are substantially parallel to edges E1 and E2. Consequently, when workpiece WG2 is conveyed by conveying unit T1, mark MG2 can take four positional relationships with respect to the conveying direction (arrow X1). These four positional relationships are shown in Fig. 7(b) as the first conveyed workpiece W1, the second workpiece W2, the third workpiece W3, and the last workpiece W4.
Further, in Fig. 7(b), the 1st straight line L1 and the 2nd straight line L2, which may become symmetry axes in the shot images, are drawn overlaid on each of workpieces W1 to W4. From the lines L1 and L2 on workpieces W1 to W4 it can be seen that the shot images of W1 and W3 have the 2nd straight line L2 as their symmetry axis, while the shot images of W2 and W4 have the 1st straight line L1. That is, depending on the orientation in which workpiece WG2 is placed on the conveying unit, the direction of the symmetry axis of the original image generated by extracting the workpiece from the shot image is not fixed to one direction.
In such a case, having an operator visually examine each original image to determine which of the 1st straight line L1 and the 2nd straight line L2 is the symmetry axis would be extremely cumbersome. However, even when one of the 2 symmetry axes must be selected for each original image in this way, the image processing algorithm of the present invention can correctly select the inspection target region by generating difference images for both of the 2 symmetry axes, just as in the above explanations using Figs. 3 and 4 or Figs. 5 and 6. The procedure is verified and explained using Figs. 8 to 11.
Fig. 8(a) is the original image obtained by modeling the non-defective workpiece WG2 of Fig. 7(a) with 16 pixels, in the same way as Fig. 3(a). Mark MG2 of Fig. 7(a) is modeled by the 2 pixels (1,0) and (2,0) in Fig. 8(a); their pixel values are similar to those of the 4 central pixels (1,1), (2,1), (1,2), (2,2) in Fig. 3(a). The 14 pixels of Fig. 8(a) other than (1,0) and (2,0) model the part of workpiece WG2 other than mark MG2 in Fig. 7(a); their pixel values are similar to those of the 12 pixels surrounding the central pixels (1,1), (2,1), (1,2), (2,2) in Fig. 3(a). As in Fig. 3(a), the 1st straight line L1 and the 2nd straight line L2 are drawn overlaid on Fig. 8(a).
In the 16 squares representing the pixels in Figs. 8(b), 8(c), and 9(a) to 9(c) used in the following description, the patterns inside the squares, other than the numbers indicating the pixel values, are the same as in Fig. 3. The notation of Figs. 8(d) and 8(e) is also the same as that of Figs. 3(d) and 3(e). Detailed explanation of these is therefore omitted.
Fig. 8(b) shows the 1st difference image generated by executing the difference calculation step and the difference image generation step on the original image of Fig. 8(a), and Fig. 8(c) shows the 2nd difference image generated in the same way. The original image of Fig. 8(a) has no symmetry about the 1st straight line L1: the pixels at (1,0) and (2,0) differ considerably in value from the pixels at (1,3) and (2,3), which occupy the mirror positions about the 1st straight line L1. This is clear because Fig. 8(a) is the image obtained by modeling the non-defective workpiece WG2 of Fig. 7(a): the pixels at (1,0) and (2,0) in Fig. 8(a) correspond to mark MG2 in Fig. 7(a), whereas the pixels at (1,3) and (2,3) correspond to the plain surface of workpiece WG2.
Indeed, looking at Fig. 7(a) it is clear that there is no symmetry about the 1st straight line L1. Accordingly, in the 1st difference image of Fig. 8(b), generated by executing the up-down fold on the original image, the pixel values at (1,0) and (1,3), positions where the original image lacks the above symmetry, are 145, and the pixel values at (2,0) and (2,3), positions likewise lacking that symmetry, are 142. Relative to the other 12 pixel values, these become leave-pixel values. Pixel values in this neighborhood of 150 appear for the first time in Fig. 8(b); to reflect this visually, all 4 of these pixels are drawn as squares whose area, other than the number indicating the pixel value, is filled with a pattern of many diagonal lines slanting toward the lower right.
On the other hand, the original image of Fig. 8(a) has symmetry about the 2nd straight line L2, so all pixels of the 2nd difference image of Fig. 8(c), generated by executing the left-right fold on the original image, have values very close to 0. Next, Figs. 8(d) and 8(e) show the results of applying to Figs. 8(b) and 8(c) the same steps as in Figs. 3(d) and 3(e) above. If leave-pixel values are selected from the pixels of Fig. 8(b) using the threshold noted in the lower row of Fig. 8(d), they are 142 and 145. If leave-pixel values are likewise selected from the pixels of Fig. 8(c) using the threshold noted in the lower row of Fig. 8(e), they are 9 and 10.
Figs. 9(a) and 9(b) show the leave-pixels selected by the above steps, with double frames drawn around their pixel values in Figs. 8(b) and 8(c). After the leave-pixels have been selected in the 2 difference images in this way, the next step is to select the common portion of the leave-pixels of the 2 difference images. In Figs. 9(a) and 9(b) no common portion of the leave-pixels exists, for the same reason that none exists in Figs. 4(a) and 4(b) above. Therefore, if the image is generated that shows, with double frames in the original image of Fig. 8(a), the pixel values of the original pixels located at the same positions as the common portion, the result is Fig. 9(c), in which no pixel value is enclosed by a double frame. Thus the image processing algorithm of the invention can be applied to the original image of a non-defective workpiece that has symmetry about only one of the 2 symmetry axes, in the same way as to an original image having 2 symmetry axes.
Next, the defective workpiece obtained by adding defects to the non-defective workpiece WG2 of Fig. 7(a) is modeled as in Fig. 8(a), and the same steps as in Figs. 8(b) to 9(c) are executed. Fig. 10(a) is the original image of defective workpiece WD2, obtained by adding defects at 2 locations to the original image shown in Fig. 8(a). The pixels representing the defects are (0,1) and (2,3).
Here, observing Fig. 8(a), the original image corresponding to the non-defective workpiece WG2 of Fig. 7(a), the following can be seen. If the pixels at (1,0) and (2,0) and the pixels at (1,3) and (2,3), which occupy the mirror positions of those 2 pixels about the 1st straight line L1, are removed from Fig. 8(a), the original image takes a shape resembling the letter H. From the 12 pixel values composing this H shape, the original pixels of the H shape have symmetry about both the 1st straight line L1 and the 2nd straight line L2, whereas the region composed of the 4 removed pixels has symmetry only about the 2nd straight line L2.
The pixel at (0,1), to which a defect is attached in Fig. 10(a), belongs to the original pixels of the above H shape, i.e. the region having symmetry about both the 1st straight line L1 and the 2nd straight line L2. The pixel at (2,3), to which a defect is attached in Fig. 10(a), belongs to the removed original pixels, i.e. the region having symmetry only about the 2nd straight line L2. Fig. 10(a) is thus a model in which 2 defect pixels are placed in regions of the original image that differ in character from the viewpoint of symmetry. The pixels at (0,1) and (2,3) are located at positions corresponding to defects on the surface of the non-defective workpiece WG2 of Fig. 7(a) other than mark MG2. In Fig. 10(a), the pixel value at (0,1) is 22, considerably smaller than the value of about 100 on the workpiece surface other than mark MG2; visually it is a defect that looks darker (black) than the normal parts. The pixel value at (2,3) is 170, considerably larger than the value of about 100 on the workpiece surface other than mark MG2; visually it is a defect that looks lighter (white) than the normal parts.
Fig. 10(b) shows the 1st difference image generated by executing the above difference calculation step and difference image generation step on the original image of the defective workpiece of Fig. 10(a), and Fig. 10(c) shows the 2nd difference image generated by the same steps. Figs. 10(d) and 10(e) show the results of executing on Figs. 10(b) and 10(c) the same steps as in Figs. 8(d) and 8(e). If leave-pixel values are selected from the pixels of Fig. 10(b) using the threshold noted in the lower row of Fig. 10(d), they are 73, 83, 142, and 145. If leave-pixel values are likewise selected from the pixels of Fig. 10(c) using the threshold noted in the lower row of Fig. 10(e), they are 64 and 87.
Figs. 11(a) and 11(b) show the leave-pixels selected by the above steps, with double frames drawn around their pixel values in Figs. 10(b) and 10(c). After the leave-pixels have been selected in the 2 difference images in this way, the next step is to select the common portion of the leave-pixels of the 2 difference images. The common portion of the leave-pixels in Figs. 11(a) and 11(b) is (0,1), (1,3), and (2,3). Fig. 11(c) shows the image in which the pixel values of the original pixels located at the same positions as this common portion are enclosed by double frames in the original image of Fig. 10(a). The double-framed pixels in Fig. 11(c) are the inspection target region in which defect inspection should be performed, and they include the pixels corresponding to the defects in Fig. 10(a). Thus the image processing algorithm of the present invention can be applied to the original image of a defective workpiece that has symmetry about only one of the 2 symmetry axes, in the same way as to an original image having 2 symmetry axes, and the inspection target region can be selected easily.
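The worked example of Figs. 10 and 11 can be reproduced numerically. The pixel values below are hypothetical, chosen only to mimic Fig. 10(a) (mark pixels around 245 and 242, workpiece surface around 100, a dark defect of 22 at (0,1) and a bright defect of 170 at (2,3)); the threshold of 50 is likewise an assumption, not the value set by the claimed method.

```python
import numpy as np

# Hypothetical 4x4 model loosely following Fig. 10(a).
# Rows are the Y address, columns are the X address.
img = np.array([[101, 245, 242, 103],
                [ 22, 104, 102, 100],
                [105, 101, 103, 102],
                [100, 100, 170,  99]])

d1 = np.abs(img - np.flipud(img))   # 1st difference image (up-down fold)
d2 = np.abs(img - np.fliplr(img))   # 2nd difference image (left-right fold)

# Leave-pixels: differences above the assumed threshold in each image;
# their intersection is the common portion, i.e. the inspection region.
common = (d1 > 50) & (d2 > 50)
coords = sorted((int(y), int(x)) for y, x in zip(*np.nonzero(common)))
print(coords)   # → [(1, 0), (3, 1), (3, 2)] in (y, x) order
```

In the (x, y) addressing of the description these positions are (0,1), (1,3), and (2,3), matching the common portion stated for Figs. 11(a) and 11(b).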
(4) Workpiece conveyance and image processing of the present invention: using the 3rd straight line L3
In the above explanation of the image processing algorithm of the present invention using Figs. 3 to 6 and so on, the 1st or 2nd straight line serving as the symmetry axis of the original image was horizontal or vertical. The symmetry axis in the algorithm of the present invention, however, is not limited to these. As an example of another symmetry axis, an inclined symmetry axis is described using Figs. 12 and 13.
Fig. 12 shows the shape and dimensions of the original image generated by shooting non-defective workpiece WG3 (hereinafter, workpiece WG3) and extracting it with the above-described template matching method. Workpiece WG3 is a square with side length a, on which a circular mark MG31 of diameter d is drawn tangent to one side at a point P30 and tangent to the adjacent side at a point P31. The values of a and d are the same as for workpiece WG2 shown in Fig. 7(a). On workpiece WG3, a circular mark MG32 of the same size as mark MG31 is also drawn, tangent to the remaining two sides, to which mark MG31 is not tangent, at points P32 and P33 respectively. Between the length a/2 from a corner of workpiece WG3, which is one end of each side, to the midpoint P0 and the diameter d of marks MG31 and MG32, the relation d < a/2 holds. The workpiece WG3 of Fig. 12 is line-symmetric about the 3rd straight line L3, which connects its upper left corner and lower right corner and is inclined downward to the right.
Fig. 13(a) shows the original image obtained by modeling the workpiece WG3 of Fig. 12 in the same way as in Fig. 8(a). In Fig. 13(a), the 3 pixels (2,0), (3,0), (3,1) correspond to mark MG31 in Fig. 12, and likewise the 3 pixels (0,2), (0,3), (1,3) correspond to mark MG32. These 6 pixel values are similar to those of the pixels (1,0) and (2,0) in Fig. 8(a), and the 10 pixels of Fig. 13(a) other than these 6 have values similar to those of the 14 pixels other than (1,0) and (2,0) in Fig. 8(a). It can thus be seen that the pixels of Fig. 13(a) are arranged line-symmetrically with the 3rd straight line L3, which connects the upper left and lower right of the original image and is inclined downward to the right, as the symmetry axis. If the fold in this case is expressed in the same notation as the up-down fold and left-right fold of Fig. 3, it is an upper-right/lower-left fold.
When executing the difference calculation step and the difference image generation step on the original pixels of Fig. 13(a), as in Fig. 3, the addresses of the one end and the other end of the 3rd straight line L3 serving as the symmetry axis are sent to the difference calculation software, as noted above. In the case of the 3rd straight line L3, however, unlike the 1st straight line L1 of Fig. 3(a), the addresses of the one end and the other end coincide with addresses of pixels of the original image. Specifically, in Fig. 13(a) the address of one end of the 3rd straight line L3 is (0,0) and the address of the other end is (3,3). Furthermore, as Fig. 13(a) shows, the 3rd straight line L3 also passes through the pixels at (1,1) and (2,2) on its path, in addition to the one end and the other end. The difference calculation step and the difference image generation step for such a case, in which addresses passed through by the symmetry axis coincide with pixel addresses, are described below.
First, consider how the original image of Fig. 13(a) is divided by the 3rd straight line L3, as the symmetry axis, into equal numbers of pixels at the upper right and the lower left. As described above, the pixels at (0,0), (1,1), (2,2), (3,3) lie on the 3rd straight line L3. These 4 pixels can therefore be assigned neither to the upper right nor to the lower left of the 3rd straight line serving as the symmetry axis, and they are excluded from the objects of the difference calculation step. In the specific algorithm, the differential pixel values of these 4 pixels are set to 0. This differential pixel value of 0 means that the pixel is excluded from the objects of the difference calculation, and is not to be understood as the result of any calculation involving the pixel itself.
In Fig. 13(a), for the 12 pixels other than the 4 pixels coinciding with the 3rd straight line L3, the differences of the pixel values are calculated between the pixels arranged symmetrically at the upper right and lower left of the 3rd straight line. For example, the difference between the pixel values at (2,0) and (0,2) is calculated, and the difference between the pixel values at (2,1) and (1,2) is calculated. The difference image generated in this way is shown in Fig. 13(b). In Fig. 13(b), the pixel values at (0,0), (1,1), (2,2), (3,3) are all 0 as described above, and the other 12 pixel values are very close to 0 because the pixels are arranged symmetrically about the 3rd straight line L3.
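For the diagonal axis, the fold pairs pixel (x, y) with pixel (y, x), which for a square image is a transpose. A minimal sketch, with pixel values that are hypothetical approximations of Fig. 13(a) (two 3-pixel marks around 240 to 246, remaining surface around 100):

```python
import numpy as np

# Hypothetical model following Fig. 13(a); rows are Y, columns are X.
img = np.array([[100, 102, 243, 241],
                [101, 103, 104, 244],
                [245, 102, 101, 100],
                [242, 246, 103,  99]])

# Folding about the diagonal from (0,0) to (3,3) pairs pixel (x, y)
# with pixel (y, x). The four pixels on the axis itself are excluded
# from the difference calculation; their value is defined as 0.
diff = np.abs(img - img.T)
np.fill_diagonal(diff, 0)   # already 0 here; made explicit per the text

print(int(diff.max()))      # → 3, i.e. all differences very close to 0
```

As in Fig. 13(b), the on-axis pixels are 0 by definition and the remaining 12 differences stay near 0 because the model is symmetric about the diagonal.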
(5) Defect detecting method of the present invention
The defect detecting method for workpieces that uses the image processing algorithm of the present invention described so far is explained below using Figs. 14 to 31.
Fig. 14 is the general flowchart of the defect detecting method for workpieces using the image processing algorithm of the present invention. This general flowchart consists of 2 modes. Fig. 14(a) is the threshold setting mode: the steps of generating difference images from the original images of multiple defective workpieces using the image processing algorithm of the present invention, and of setting, using the generated difference images, the threshold for selecting the inspection target region of a workpiece. Fig. 14(b) is the inspection execution mode: the steps of selecting the inspection target region from the original image of a workpiece under inspection using the threshold set in the threshold setting mode, and then executing defect inspection on that region. The threshold setting mode and the inspection execution mode are connected by the connector numbered 101 shown at the bottom of Fig. 14(a) and the top of Fig. 14(b).
Figs. 15 to 31 are the detailed flowcharts of the defect detecting method for workpieces using the image processing algorithm of the present invention. Specifically, they show the detailed steps of the defined processings bearing the step numbers S1 to S14 in Fig. 14. In Figs. 15 to 31 the step name and step number are noted at the top; below the step name and step number of the defined processing as given in Fig. 14, a terminator labeled IN (input) is placed, indicating the entry of that defined processing. Following this terminator, the detailed steps of the defined processing are shown. At the bottom, a terminator labeled OUT (output) is placed; OUT indicates the exit by which, when all steps are finished, the flow leaves the defined processing and enters the next defined processing shown in Fig. 14.
The step numbers in Figs. 15 to 31 are assigned as follows: to indicate the correspondence with Fig. 14, the high 1 or 2 digits are the step number in Fig. 14, and the following low 2 digits of each step number in the detailed flowchart are assigned in ascending order from 01.
In addition, as described later, in the defect detecting method for workpieces using the image processing algorithm of the present invention, common processing executed in several of the defined processings shown in Fig. 14 is made into subroutines. The subroutines comprise subroutine 1 (Sub1) and subroutine 2 (Sub2); Fig. 20 shows the detailed flowchart of Sub1 and Fig. 23 shows the detailed flowchart of Sub2. As for the step numbers in these detailed flowcharts, the high 2 digits are set to 21 for Sub1 of Fig. 20 and to 22 for Sub2 of Fig. 23. The way the low digits are assigned differs between Fig. 20 and Fig. 23. In Sub1 of Fig. 20, the second digit from the last is set to 5 or 6, indicating correspondence with step S5 or S6 shown in Fig. 14(a), and the last digit is assigned in ascending order from 1 as the step number within step S5 or S6. In Sub2 of Fig. 23, by contrast, Sub2 corresponds only to step S7 shown in Fig. 14(a), so the second digit from the last is fixed at 0, and the lowest digit, as the step number within step S7, is assigned in ascending order from 1.
In the steps shown in Figs. 15 to 31 and the corresponding explanations, the expression "store an image in a register" is used several times. Storing an image in a register means: making the X address and Y address of each pixel, as defined in Fig. 2, correspond to the pixel value of that pixel and assembling them into one array element (for example a three-element arrangement), forming an array group whose number of elements matches the number of pixels composing the image, and storing it in a register.
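This storage convention can be sketched as follows; the function name and the list-of-triples representation are illustrative assumptions, not the claimed data layout.

```python
def to_register(img):
    """Flatten a 2-D pixel grid into one (x, y, value) triple per pixel,
    one array element per pixel as in the description."""
    return [(x, y, img[y][x])
            for y in range(len(img))
            for x in range(len(img[0]))]

# A 2x2 image produces a register of 4 triples.
reg = to_register([[200, 50],
                   [50, 200]])
```

The number of elements in the resulting array equals the number of pixels, and each element carries the pixel's X address, Y address, and value together.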
First, the threshold setting mode is explained using Fig. 14(a) and Figs. 15 to 26.
(5.1) Step S1
Step S1, which follows the terminator START in Fig. 14(a), is the known-workpiece shooting step: the step of preparing multiple workpieces known to be non-defective or defective and shooting each of them. The detailed flow of the known-workpiece shooting step (S1) is shown in Fig. 15.
In step S101 of Fig. 15, A non-defective workpieces are shot to obtain non-defective images; these A workpieces are known to be non-defective. The flow then advances to step S102, where the non-defective images shot in step S101 are given numbers from 1 to A. Next, in step S103, B defective workpieces are shot to obtain defective images; these B workpieces are known to be defective. The flow then advances to step S104, where the defective images shot in step S103 are given numbers from 1 to B. By the above steps S101 to S104, A non-defective images and B defective images, each given a unique number, are obtained.
Steps S105 to S107 are the preparation for executing steps S2 to S7 of Fig. 14(a), which follow step S1, on the B defective images. First, in step S105, 0 is stored in register J, which holds the number of the defective image being processed during the execution of steps S2 to S7, thereby initializing register J. Next, in step S106, 1 is added to the value of J. Then, in step S107, the image with the number specified by the value of J is taken out from among the defective images numbered 1 to B. Here J = 1, so the image taken out is defective image 1. Step S1 then ends and the flow advances to step S2 shown in Fig. 14(a).
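The bookkeeping of steps S102 to S107 can be sketched as follows; the function name and the placeholder image values are hypothetical, introduced only to illustrate the numbering and the counter register J.

```python
def number_images(images):
    """Give the shot images numbers 1..len(images), as in steps S102/S104."""
    return {i + 1: img for i, img in enumerate(images)}

# Hypothetical defective images, B = 3.
defective = number_images(["img_a", "img_b", "img_c"])

J = 0                    # S105: initialize register J
J += 1                   # S106: add 1 to J
current = defective[J]   # S107: take out the image numbered J (here image 1)
```

On later passes J is incremented again, so images 2 to B are taken out in turn.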
(5.2) Step S2
Step S2 in Fig. 14(a) is the inspection target workpiece extraction step: the step of extracting the workpiece from the shot image using the above-described template matching method. The detailed flow of the inspection target workpiece extraction step (S2) is shown in Fig. 16.
In step S201 of Fig. 16, the template matching method is used to search for and determine the outermost edge of the workpiece. Next, in step S202, the image within the outermost edge of the workpiece is made the inspection target workpiece image. Step S2 then ends and the flow advances to step S3 shown in Fig. 14(a).
(5.3) Step S3
Step S3 in Fig. 14(a) is the monochrome image generation step: the step of generating monochrome images for the inspection target workpiece extracted in step S2. The detailed flow of the monochrome image generation step (S3) is shown in Fig. 17.
In step S301 of Fig. 17, it is judged whether the image of the inspection target workpiece is a color image. If the judgment result is "Yes", i.e. the image is a color image, the flow advances to step S302. In step S302, K monochrome images are generated from the image of the inspection target workpiece, which is a color image; K is a natural number. For example, if, as described above, the color image is decomposed into the three primary colors R (red), G (green), B (blue) to generate monochrome images, then K = 3. If, for example, the Y component of the YIQ signal, a component signal generated in the stage before an NTSC signal is obtained, is generated, then K = 1. After K monochrome images are generated from the color image in step S302, the flow advances to the next step S303.
In step S303, the K monochrome images generated in step S302 are given numbers from 1 to K based on a predetermined criterion. As an example of such a criterion, consider the case where the color image is decomposed into the three primary colors R (red), G (green), B (blue) to generate 3 monochrome images. In this case, a criterion can be conceived by which 1 is given to the monochrome image generated from R (red), 2 to the monochrome image generated from G (green), and 3 to the monochrome image generated from B (blue). After numbers are given to the monochrome images in step S303, the flow advances to step S304.
Steps S304 to S306 are the preparation for executing steps S4 to S7 of Fig. 14(a), which follow step S3, on defective image 1. First, in step S304, 0 is stored in register N, which holds the number of the monochrome image being processed during the execution of steps S4 to S7, thereby initializing register N. Next, in step S305, 1 is added to the value of N. Then, in step S306, the monochrome image with the number specified by the value of N among the monochrome images numbered 1 to K (in this case the 1st monochrome image) is stored in the register named "original image". Step S3 then ends and the flow advances to step S4 shown in Fig. 14(a).
Steps S302 to S306 above are executed when the image of the inspection target workpiece is judged to be a color image in step S301. If the image of the inspection target workpiece is not a color image, i.e. the judgment in step S301 is "No", the flow advances from step S301 to step S307. In this case the image of the inspection target workpiece is not a color image but a monochrome image, and the monochrome image generated from the inspection target workpiece is the image of the inspection target workpiece itself; this simply means that the number of monochrome images is K = 1. Accordingly, in step S307 the image of the inspection target workpiece is set as the 1st monochrome image. Then, in step S308, 1 is stored in register K, which holds the number of monochrome images, and the flow advances to step S309. In step S309, 1 is stored in register N, which holds the number of the monochrome image processed in steps S4 to S7 of Fig. 14(a) following step S3. The flow then advances to step S306, where the Nth monochrome image, i.e. the 1st monochrome image, is stored in the register named "original image". Step S3 then ends and the flow advances to step S4 shown in Fig. 14(a).
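The two branches of step S3 can be sketched as follows. The function names are hypothetical; the 0.299/0.587/0.114 weights in the Y-component example are the standard NTSC luma coefficients, not values taken from this description.

```python
import numpy as np

def monochrome_images(img):
    """Step S3 sketch: derive K monochrome images from the workpiece image.
    A 2-D array is already monochrome (K = 1, step S307); a color array is
    split into its R, G, B planes (K = 3), numbered 1..3 as in step S303."""
    if img.ndim == 2:
        return [img]
    return [img[..., 0], img[..., 1], img[..., 2]]

def y_component(img):
    """Alternative K = 1 decomposition: the Y (luminance) component."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

color = np.zeros((4, 4, 3), dtype=np.uint8)
K = len(monochrome_images(color))   # K = 3 for the RGB decomposition
```

Each of the K images would then be stored in turn in the "original image" register as register N advances.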
(5.4) Step S4
Step S4 in Fig. 14(a) is the symmetry judgment step, in which the symmetry of the monochrome image generated in step S3 is judged. The detailed flow of the symmetry judgment step (S4) is shown in Fig. 18.
In step S401 of Fig. 18, the Nth monochrome image stored in the register named "original image" in step S306 of the monochrome image generation step (S3) shown in Fig. 17 is taken out, and it is judged whether or not it has a symmetry axis. If it has no symmetry axis, the judgment is "No", the flow advances to the terminator END, and the defect inspection ends. The reason the defect inspection ends when the original image has no symmetry axis is as follows. The defect detecting method whose general flowchart is shown in Fig. 14 uses the image processing algorithm of the present invention shown in Figs. 1 to 6, and as described above, that algorithm targets original images having regions that are line-symmetric about at least 1 symmetry axis. The image processing algorithm of the present invention therefore cannot be used for an original image without a symmetry axis, and the defect inspection can be terminated at that point.
On the other hand, if the Nth monochrome image has symmetry axes, the judgment in step S401 is "Yes" and the flow advances to step S402. In step S402, the number of symmetry axes of the original image is stored in register C.
Steps S403 to S406 are the preparation for executing steps S5 to S7 of Fig. 14(a), which follow step S4, on defective image 1. First, in step S403, 0 is stored in register M, which holds the number of the symmetry axis of the monochrome image being processed during the execution of steps S5 to S7, thereby initializing register M. Next, in step S404, 1 is added to the value of M. Then, in step S405, the coordinates of one end of the Mth symmetry axis are stored in the register named ONEM, and in step S406 the coordinates of the other end of the Mth symmetry axis are stored in the register named OTEM. ONEM and OTEM serve as the arguments representing the coordinates of the one end and the other end of the symmetry axis in the image processing algorithm of the present invention explained using Figs. 3(a), 5(a), and 13(a), and they are sent to subroutine Sub1 in the subsequent step S5. Step S4 then ends and the flow advances to step S5 shown in Fig. 14(a).
(5.5) Steps S5 and S6
Step S5 in Fig. 14(a) is the difference calculation step, and step S6, which follows step S5, is the difference image generation step. In these two steps the image processing algorithm of the present invention shown in Figs. 3 to 6 is used. Because this image processing algorithm is also used frequently in subsequent steps, the 2 steps are merged into one subroutine, Sub1, and defined as such in the detailed flowchart.
The difference calculation step (S5) and the difference image generation step (S6) are shown in Fig. 19. As described above, these 2 steps consist of the subroutine Sub1 defined as step S21. As shown in step S21, the arguments of subroutine Sub1 are the original image, the coordinates of one end of the Mth symmetry axis (ONEM), and the coordinates of the other end of the Mth symmetry axis (OTEM).
The original image shown as the 1st argument is the original image set in step S306 of Fig. 17, i.e. the Nth monochrome image, within step S3 of Fig. 14(a) of the main program. The symmetry of this original image, the Nth monochrome image set in step S306, is judged in step S4 of Fig. 14(a), specifically in step S401 shown in Fig. 18; if the judgment is "Yes", steps S402 to S406 are executed as described above. ONEM, shown as the 2nd argument, is the register in which the coordinates of one end of the Mth symmetry axis are stored in step S405 as described above. Likewise, OTEM, shown as the 3rd argument, is the register in which the coordinates of the other end of the Mth symmetry axis are stored in step S406 as described above.
With these three arguments, namely the original image, ONEM and OTEM, subroutine Sub1 is called from the main program, and, as described above, step S5 (difference calculation step) of Figure 14(a) and the following step S6 (difference image generation step) are executed.
The detailed flow of subroutine Sub1 is shown in Figure 20. In Figure 20, steps S2151 to S2153, which follow the start terminator Sub1 (original image, ONEM, OTEM), correspond to step S5 (difference calculation step) in Figure 14(a).
First, in step S2151, ONEM and OTEM are connected by a straight line to generate the symmetry axis. Generating the symmetry axis in this step was described, in the explanation of the image processing algorithm of the present invention using Figures 3(a), 5(a) and 13(a), as the step of connecting the coordinates of the one end and the other end of the 1st straight line L1, the 2nd straight line L2 and the 3rd straight line L3 with a straight line. After the symmetry axis is generated in step S2151 of Figure 20, processing advances to step S2152.
In step S2152, the original image is divided into two parts of equal pixel count on either side of the symmetry axis. This step corresponds, for example, to dividing the 16 original pixels shown in Figure 5(a) into the 8 original pixels above the 1st straight line L1 serving as the symmetry axis and the 8 original pixels below it. After the original pixels are divided into two parts in step S2152 of Figure 20, processing advances to step S2153.
In step S2153, for the original pixels PA and PB located at symmetric positions on either side of the symmetry axis, the differential pixel value BAB is calculated as the difference between their respective pixel values BA and BB. This step corresponds, for example, to the following: when the original pixel at (0,0) in Figure 5(a) (pixel value 200) is taken as original pixel PA and the original pixel at (0,3) (pixel value 103) is taken as original pixel PB, the difference between the respective pixel values 200 and 103 is calculated as 200-103=97, and this value 97 is taken as the differential pixel value BAB. When the differential pixel values have been calculated in step S2153 of Figure 20, step S5 (difference calculation step) ends, and processing advances within subroutine Sub1 to step S6 (difference image generation step) of Figure 14(a). The difference image generation step consists of step S2161, as shown in Figure 20.
In step S2161, differential pixels PAB having the differential pixel values BAB are placed at the positions of the original pixels PA and PB in the original image, thereby generating the difference image. This step corresponds, for example, to the following: when the differential pixel value BAB of 97 is generated from the original pixel PA at (0,0) and the original pixel PB at (0,3) in Figure 5(a), a differential pixel with the pixel value 97 is placed at the positions (0,0) and (0,3) of Figure 5(b). When step S2161 in Figure 20 ends, step S6 (difference image generation step) ends; at the same time, subroutine Sub1, i.e. step S21 of Figure 19, ends and control returns to the main program. At this point, the difference image generated in step S2161 of Figure 20 is passed to the main program as the return value. In Figure 20, this is shown as Return (difference image) in the terminator following step S2161. The destination of the return to the main program is the input of step S7, the threshold range setting step, written in Figure 14(a).
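The mirror-and-subtract operation of steps S2151 to S2161 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the symmetry axis is fixed here as the horizontal mid-line of an image with an even number of rows (corresponding to the 1st straight line L1 of Figure 5(a)), and the function and variable names are assumptions.

```python
# Sketch of subroutine Sub1: fold a small grayscale image across a
# horizontal symmetry axis, take the difference of each mirrored pixel
# pair (PA, PB), and write that differential value back to both positions.
def difference_image(original):
    """original: list of equal-length rows; the row count must be even."""
    rows, cols = len(original), len(original[0])
    diff = [[0] * cols for _ in range(rows)]
    for r in range(rows // 2):
        mr = rows - 1 - r                            # mirrored row across the axis
        for c in range(cols):
            bab = original[r][c] - original[mr][c]   # e.g. 200 - 103 = 97
            diff[r][c] = bab                         # differential pixel at PA's position
            diff[mr][c] = bab                        # and at PB's position
    return diff
```

With a 4x4 input whose top-left pixel is 200 and bottom-left pixel is 103, the differential value 97 appears at both positions, matching the Figure 5(a)/(b) example quoted in the text.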
(5.6) Step S7
Step S7 in Figure 14(a) is the threshold range setting step. As in Figures 3(d)(e) and/or Figures 5(d)(e) used in the explanation of the above-described image processing algorithm of the present invention, this is the step of arranging the differential pixel values of the difference image generated in step S6 in descending order, calculating the difference values between adjacent differential pixel values, and selecting the interval where that difference value is largest.
The detailed flow of the threshold range setting step (S7) is shown in Figures 21 to 23. In the first step S701 of Figure 21, the difference image generated in subroutine Sub1 shown in Figure 20 is stored in a register named difference image NJ. The two numbers J and N used in this register name are explained here.
The J in the name corresponds to the J of rejected product image J taken out in step S107 of the known workpiece photographing step (S1) shown in Figure 15. As described above, the image of the workpiece to be inspected is first extracted from rejected product image J by template matching in the inspection object workpiece extraction step (S2) shown in Figure 16, and one or more monochrome images are then generated from the image of the workpiece in the monochrome image generation step (S3) shown in Figure 17. These monochrome images are numbered from 1 to K (K being 1 or more) (steps S302 and S307), and in step S306 the N-th monochrome image is stored in a register named original image. Here, N is, as described above, the value of the register N that holds the number of the monochrome image being processed while steps S4 to S7 shown in Figure 14(a) are executed. This N coincides with the N in the register name difference image NJ. The original image stored in step S306, i.e. the N-th monochrome image, is processed in sequence, as described above, through the symmetry judgment step (S4) shown in Figure 18 and the difference calculation step (S5) and difference image generation step (S6) shown in Figure 19, and a difference image is generated. In summary, the register difference image NJ stores the difference image generated from the N-th monochrome image, which was in turn generated on the basis of rejected product image J.
After the difference image is stored in difference image NJ in step S701 as described above, processing advances to step S22. Step S22 is, as described above, a process commonly executed in several of the steps shown in Figure 14, and is therefore defined as subroutine Sub2. Subroutine Sub2 takes a difference image as its argument. In subroutine Sub1 shown in Figure 20, the difference image generated in step S2161 was returned to the main program as the return value; subroutine Sub2 receives that difference image from the main program as its argument. Using the received difference image, it executes the step of arranging the differential pixel values of the difference image in descending order, calculating the difference values between adjacent differential pixel values, and selecting the interval where the difference value is largest, as in Figures 5(d)(e) used in the explanation of the above-described image processing algorithm of the present invention. The detailed flow of subroutine Sub2 is shown in Figure 23.
Step S2201 in Figure 23 corresponds to the upper row of Figures 5(d)(e), titled "differential pixel values in descending order". That is, in step S2201, the X pixels constituting the difference image are first numbered from 1 to X. Then the pixel values PV(1) to PV(X) of the numbered pixels are stored, in descending order from the largest to the smallest, in X arrays AP(1) to AP(X). Writing out the X pixel values PV(1) to PV(X) stored in AP(1) to AP(X) in order from left to right gives exactly the numbers placed in the upper row of Figures 5(d)(e). As described above, the numbers of Figure 5(d) correspond to the difference image of Figure 5(b), generated from Figure 5(a) using the 1st straight line L1 as the symmetry axis, and the numbers of Figure 5(e) correspond to the difference image of Figure 5(c), generated from Figure 5(a) using the 2nd straight line L2 as the symmetry axis. After step S2201 is executed in this way, processing advances to step S2202.
In step S2202, 0 is stored in register S, which holds the number of the pixel processed in the subsequent step S2204, thereby initializing it. Then, in step S2203, 1 is added to the value held in register S. At this stage the value of register S is 1, indicating that the pixel assigned number 1 in step S2201 above is to be processed. Once the value of register S is determined, processing advances to step S2204.
Step S2204 corresponds to the middle row of Figures 5(d)(e), titled "difference values of adjacent differential pixel values". In step S2204, the differential value NP, i.e. the difference between the pixel values stored in two adjacent arrays, is calculated in order, from the largest pixel value stored in AP(1) down to the smallest stored in AP(X). Expressed with the value of register S determined in step S2203 above, AP(S+1)-AP(S) is computed and the result is stored in a register named NP(S). Since the total number of pixels is X, step S2204 is processed in order while the value of S is incremented by 1 from 1 up to (X-1). This subtraction and storing operation is written in software notation in step S2204 as NP(S) ← AP(S+1)-AP(S). The differential values stored in the registers NP(S) in step S2204 are exactly the numbers arranged in the middle row of Figures 5(d)(e), which the arrows relate to the numbers placed in the upper row. When step S2204 ends, processing advances to step S2205.
In step S2205, it is judged whether the number S of the pixel processed in step S2204 is equal to (X-1), i.e. one less than the total pixel count. In other words, it is judged whether the processing of step S2204 has been completed for all X pixels constituting the difference image. When the judgment is "No", processing returns to step S2203 and 1 is added to the value of register S, so that the number of the pixel to be processed in step S2204 increases by 1, and step S2204 is executed again for the pixel with the increased number. This is repeated, and when the processing of step S2204 has been completed for all X pixels, i.e. when S=X-1, the judgment of step S2205 becomes "Yes" and processing advances to step S2206.
Step S2206 corresponds, among the steps of the lower row of Figures 5(d)(e) titled "threshold within the largest interval of the difference values", to the step of selecting the largest interval. The setting of the threshold itself is performed in the main program, after subroutine Sub2 finishes and the high lower limit value and low upper limit value, described later, are returned to the main program as return values. In step S2206, the maximum difference value MXNP is selected from among the (X-1) differential values NP(1) to NP(X-1) generated in step S2204. This corresponds to selecting the maximum value 58 among the difference values of adjacent differential pixel values written in the middle row for the difference image of Figure 5(b), or the maximum value 66 among those written in the middle row for the difference image of Figure 5(c). When the maximum difference value MXNP has been selected in step S2206 in this way, processing advances to step S2207.
Step S2207 likewise belongs to the steps of the lower row of Figures 5(d)(e) titled "threshold within the largest interval of the difference values", and is the step that prepares for the threshold setting executed in the main program. In step S2207, the pixel values PV(MX+1) and PV(MX), stored in the two adjacent arrays AP(MX+1) and AP(MX) used in the calculation of the maximum difference value MXNP selected in step S2206, are defined as the high lower limit value HBP and the low upper limit value LTP, respectively. Specifically, PV(MX+1) is stored in a register named HBP, and PV(MX) is stored in a register named LTP. This storing operation is written in software notation in step S2207 as HBP ← PV(MX+1) and LTP ← PV(MX).
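The core of subroutine Sub2 (steps S2201 to S2207) can be sketched as follows. This is an illustrative sketch, not the patent's code: the sample value list is invented (it is not the actual pixel list of Figure 5(d)), and the indexing convention is expressed by taking the larger bound of the widest gap as the high lower limit HBP and the smaller bound as the low upper limit LTP, matching the Figure 5(d) example where HBP = 65 and LTP = 7.

```python
# Sketch of subroutine Sub2: sort the differential pixel values in
# descending order (step S2201), compute the gaps between adjacent
# sorted values (step S2204), locate the largest gap MXNP (step S2206),
# and return the two pixel values bounding it as HBP and LTP (step S2207).
def select_threshold_bounds(diff_pixels):
    ap = sorted(diff_pixels, reverse=True)                   # arrays AP(1)..AP(X)
    gaps = [ap[s] - ap[s + 1] for s in range(len(ap) - 1)]   # values NP(1)..NP(X-1)
    mx = gaps.index(max(gaps))                               # interval of MXNP
    hbp = ap[mx]        # lower bound of the high-value group (e.g. 65)
    ltp = ap[mx + 1]    # upper bound of the low-value group (e.g. 7)
    return hbp, ltp
```

For a hypothetical descending list such as [97, 90, 80, 71, 65, 7, 5, 3], the largest gap is 65-7=58, so the function returns (65, 7).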
Here, the problem that arises if a threshold generated from only one difference image, as in Figures 5(d)(e), is used as-is, and the steps taken to solve that problem, are explained.
As described above, the middle row of Figures 5(d)(e) shows the difference values of adjacent differential pixel values, and the lower row shows the threshold calculated within the largest interval of those difference values. The threshold in the lower row is a value chosen away from the differential pixel values written in the upper row, and is calculated as the median of the differential pixel values (upper row) at both ends of the largest interval. Specifically, in the case of Figure 5(d), 58 is selected as the largest value in the middle row, so the median of the differential pixel values at both ends of that interval, namely 65 and 7, is calculated in the lower row. That median is 36, so the threshold becomes 36. Similarly for Figure 5(e), the interval with the largest value, 66, is selected in the middle row, and the median of the differential pixel values at both ends of that interval, namely 71 and 5, is calculated in the lower row. That median is 38, so the threshold becomes 38.
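The lower-row calculation is a midpoint of the two interval endpoints; a one-line sketch (function name assumed, integer midpoint chosen to match the quoted values):

```python
# The threshold of the lower row of Figures 5(d)(e): the median of the
# two differential pixel values bounding the largest-gap interval.
def threshold_from_bounds(hbp, ltp):
    return (hbp + ltp) // 2   # integer midpoint of the interval

print(threshold_from_bounds(65, 7))   # 36, matching Figure 5(d)
print(threshold_from_bounds(71, 5))   # 38, matching Figure 5(e)
```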
However, Figures 5(d)(e) are explanatory diagrams of the image processing algorithm of the present invention, and only one original image is shown in Figure 5(a). The difference images processed by subroutine Sub2 shown in Figure 23, on the other hand, are plural, because they are generated by the following steps. First, as described above, one or more monochrome images are generated (step S302 or S307 of Figure 17) on the basis of one rejected product image J (step S107 of Figure 15) among the B rejected product images (step S103 of Figure 15). Then, taking the N-th of those one or more (K) monochrome images as the original image (step S306 of Figure 17), a difference image is generated from that original image (steps S2151 to S2161 of Figure 20).
The difference images generated in this way are written, as the argument, in the start terminator Sub2 at the top of the detailed flowchart of subroutine Sub2 shown in Figure 23. That is, when two or more monochrome images are generated on the basis of the same rejected product image J, subroutine Sub2 processes the difference images generated from each of the plural monochrome images, from the 1st monochrome image to the K-th. Moreover, the number of rejected product images J from which these K monochrome images are generated is not one but plural (B).
From the above it is clear that subroutine Sub2 processes plural difference images in succession. As described above, these plural difference images are each generated on the basis of one of the B rejected product images. Therefore, corresponding to the variation of the pixel values among the B rejected product images, the pixel values of the difference images processed by subroutine Sub2 also vary. That is, in Figures 5(d)(e), the differential pixel values shown in the upper row vary from difference image to difference image; hence the difference values of adjacent differential pixel values shown in the middle row also vary from difference image to difference image. Consequently, the problem arises that the threshold shown in the lower row varies with each difference image and is not fixed at one value. This is the "1st problem". To solve the 1st problem, it is desired to be able to absorb the variation of the difference values of the individual difference images and to set only one threshold for the plural difference images generated from all (B) rejected product images.
In addition, as described above, the difference images processed by subroutine Sub2 are generated from each of the plural (K) monochrome images generated from one rejected product image J. For example, when the rejected product image J, a color image, is decomposed into the three primary colors R (red), G (green) and B (blue) to generate monochrome images as described above, three (K=3) monochrome images are generated. These three monochrome images are, as described above, images in which each component is expressed by pixel values on a gray scale in accordance with the respective component ratios of the three primary colors in the color image.
Subroutine Sub2 then processes in succession all three kinds of difference images generated from the respective monochrome images. In this case, the largest interval of the difference values of adjacent differential pixel values shown in the middle row of Figures 5(d)(e) is not guaranteed to be the same for every difference image. The reason is that, as described above, the component ratios of the three primary colors R (red), G (green) and B (blue) in each pixel of rejected product image J are not the same for all pixels.
Therefore, the pixel values of pixels located at the same position in the K difference images differ depending on the above-described component ratios. Thus, for each of the K difference images generated from the same rejected product image J, the differential pixel values shown in the upper row of Figures 5(d)(e) differ, and when the difference values of adjacent differential pixel values shown in the middle row are calculated, the calculated results may also be entirely different for each of the K difference images. That is, the largest interval of the difference values of adjacent differential pixel values corresponding to the difference image generated from each of the plural monochrome images may be an entirely different interval for each monochrome image. In that case, when the threshold is set using the largest interval of the difference values, the problem arises that a criterion is needed for selecting the monochrome image on which the threshold setting is based. This is the "2nd problem". To solve the 2nd problem, it is desired to make that selection criterion clear and to select, from among the plural monochrome images generated on the basis of one rejected product image, the monochrome image best suited to setting the threshold.
In the threshold range setting step (S7) shown in Figures 21 to 23, the difference images are processed by the following steps in order to solve the above problems. First, in step S2207 shown in Figure 23, the pixel values PV(MX+1) and PV(MX) are defined as the high lower limit value HBP and the low upper limit value LTP, respectively, as described above. Then, as written in the terminator Return at the bottom of Figure 23, the high lower limit value HBP and the low upper limit value LTP are sent as return values to the main program of the threshold range setting step (S7) shown in Figure 21.
In Figure 21, the main program receives the return values from subroutine Sub2 of step S22 and executes the processing from step S702 onward. In this way, only one threshold can be set for the plural difference images generated from all (B) rejected product images. Furthermore, from among the K difference images corresponding to each of all the monochrome images generated from all (B) rejected product images (K per rejected product image), the difference image best suited to setting the above-described threshold is selected on the basis of a criterion. Selecting the best difference image is nothing other than selecting the best monochrome image.
Figures 21 and 22 are used below to explain the step of setting only one threshold for the plural difference images generated from all rejected product images, and the step of selecting, from among the difference images corresponding to each of all the monochrome images generated from all rejected product images, the difference image best suited to setting the above-described threshold.
In step S702 of the threshold range setting step (S7) shown in Figure 21, the high lower limit value HBP and the low upper limit value LTP are received as return values from step S22, i.e. subroutine Sub2 shown in Figure 23 (step S2207 of Figure 23 and the terminator Return at the bottom). The high lower limit value (HBP) among the return values is stored in a register named high lower limit value HBPMNJ. Similarly, the low upper limit value (LTP) among the return values is stored in a register named low upper limit value LTPMNJ. This storing operation is written in software notation in step S702 as high lower limit value HBPMNJ ← high lower limit value and low upper limit value LTPMNJ ← low upper limit value.
Here, the three numbers J, N and M used in the register names high lower limit value HBPMNJ and low upper limit value LTPMNJ are explained. The 2nd and 3rd numbers, N and J, in these names are the same as the N and J of the register name difference image NJ written in step S701. N and J have already been explained, so a detailed description is omitted. Stated briefly, they mean the high lower limit value HBP and low upper limit value LTP obtained from the difference image generated from the N-th monochrome image generated on the basis of rejected product image J.
Next, the 1st number M in the register names is explained. As described above, in step S401 of the symmetry judgment step (S4) shown in Figure 18, it is judged whether the N-th monochrome image (original image) has a symmetry axis. When the judgment is "Yes", the number of symmetry axes possessed by the original image is stored in register C in step S402. Then, in step S403, 0 is stored in register M to initialize it, and in step S404, 1 is added to register M. This register M holds which of the C symmetry axes possessed by the original image is to be processed in step S5 (difference calculation step) to step S7 (threshold range setting step) of Figure 14(a), which are the subsequent steps. In steps S405 and S406, the coordinates of the one end and the other end of the M-th symmetry axis corresponding to register M are stored in registers ONEM and OTEM.
From Figures 21 to 23, the detailed flowcharts of step S7 (threshold range setting step) shown in Figure 14(a), it is clear that the above-described register M indicates the number of the symmetry axis that subroutine Sub2 shown in Figure 23 has processed. Summarizing the above, high lower limit value HBPMNJ and low upper limit value LTPMNJ mean, respectively, the high lower limit value HBP and the low upper limit value LTP possessed by the difference image generated on the basis of the M-th symmetry axis from the N-th monochrome image generated on the basis of rejected product image J.
After the contents of the registers high lower limit value HBPMNJ and low upper limit value LTPMNJ are set in step S702 shown in Figure 21 as described above, processing advances to step S703. In step S703, it is judged whether the number of the symmetry axis held in register M has reached C, the number of symmetry axes. 1 is added to register M each time in step S404 of the symmetry judgment step (S4) shown in Figure 18. The original image corresponding to the value of register M after 1 is added is processed through Figures 19 to 23 as described above, and in step S702 of Figure 21 the contents of the registers high lower limit value HBPMNJ and low upper limit value LTPMNJ are set.
The judgment in step S703 of Figure 21 is nothing other than the judgment of whether the contents of the registers high lower limit value HBPMNJ and low upper limit value LTPMNJ have been set for all of the C symmetry axes possessed by the N-th monochrome image. When the judgment is "No", the register contents corresponding to all symmetry axes have not yet been set, so processing jumps, via the connector bearing jump destination number 204, to step S404 of Figure 18 (symmetry judgment step (S4)). In step S404, 1 is added to the value of register M, which holds the number of the symmetry axis. Then, in the same way as explained before, steps S405 and S406 of Figure 18 and the steps written in Figures 19 to 23 are executed again with the value of M increased by exactly 1. When adding 1 to the value of M has been repeated, the judgment of step S703 of Figure 21 (threshold range setting step (S7)) becomes "Yes", and processing advances to step S704.
In step S704, it is judged whether the number of the monochrome image held in register N has reached K, the number of monochrome images. 1 is added to register N each time in step S305 of the monochrome image generation step (S3) shown in Figure 17. The original image corresponding to the value of register N after 1 is added is processed through Figures 18 to 23 as described above, and in step S702 of Figure 21 the contents of the registers high lower limit value HBPMNJ and low upper limit value LTPMNJ are set. The judgment in step S704 of Figure 21 is nothing other than the judgment of whether the contents of those registers have been set for all K monochrome images generated on the basis of the J-th rejected product image. When the judgment is "No", the register contents corresponding to all monochrome images have not yet been set, so processing jumps, via the connector bearing jump destination number 203, to step S305 of Figure 17 (monochrome image generation step (S3)). In step S305, 1 is added to the value of register N, which holds the number of the monochrome image. Then, in the same way as explained before, step S306 of Figure 17 and the steps written in Figures 18 to 23 are executed again with the value of N increased by exactly 1. When adding 1 to the value of N has been repeated, the judgment of step S704 of Figure 21 (threshold range setting step (S7)) becomes "Yes", and processing advances to step S705.
In step S705, it is judged whether the number of the rejected product image held in register J has reached B, the number of rejected product images. 1 is added to register J each time in step S106 of the known workpiece photographing step (S1) shown in Figure 15. The rejected product image corresponding to the value of register J after 1 is added is processed through Figures 16 to 23 as described above, and in step S702 of Figure 21 the contents of the registers high lower limit value HBPMNJ and low upper limit value LTPMNJ are set. The judgment in step S705 of Figure 21 is nothing other than the judgment of whether the contents of those registers have been set for all B rejected product images. When the judgment is "No", the register contents corresponding to all rejected product images have not yet been set, so processing jumps, via the connector bearing jump destination number 202, to step S106 of Figure 15 (known workpiece photographing step (S1)). In step S106, 1 is added to the value of register J, which holds the number of the rejected product image. Then, in the same way as explained before, step S107 of Figure 15 and the steps written in Figures 16 to 23 are executed again with the value of J increased by exactly 1. When adding 1 to the value of J has been repeated, the judgment of step S705 of Figure 21 (threshold range setting step (S7)) becomes "Yes", and processing advances to step S706.
From the above explanation it is clear that the three judgments of steps S703 to S705 in Figure 21 are the following three. First, in step S703, it is judged whether the high lower limit value HBP and low upper limit value LTP returned from subroutine Sub2 have been generated for all symmetry axes possessed by the monochrome image. Next, in step S704, it is judged whether the return values from subroutine Sub2 have been generated for all monochrome images generated on the basis of the rejected product image. Then, in step S705, it is judged whether the return values from subroutine Sub2 have been generated for all rejected product images.
When the results of these three judgments have all become "Yes", the processing for all values of the three numbers J, N and M used in the names of the registers high lower limit value HBPMNJ and low upper limit value LTPMNJ is complete. That is, through the steps up to this point, the registers high lower limit value HBPMNJ and low upper limit value LTPMNJ hold the high lower limit values (HBP) and low upper limit values (LTP) of the difference images relating to all symmetry axes possessed by all monochrome images generated on the basis of all rejected product images.
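The triple loop implied by the judgments of steps S703 to S705 can be sketched as follows. The helper functions and the dictionary keyed by (M, N, J) are assumed data structures for illustration, not the registers of the patent's flowcharts.

```python
# Sketch of the loop structure closed by steps S703-S705: for every
# rejected product image J, every monochrome image N generated from it,
# and every symmetry axis M of that monochrome image, the return values
# of subroutine Sub2 are stored under the key (M, N, J).
def collect_limits(rejected_images, monochromes_of, axes_of, sub2):
    hbp_mnj, ltp_mnj = {}, {}
    for j, rej in enumerate(rejected_images, start=1):            # step S705 loop
        for n, mono in enumerate(monochromes_of(rej), start=1):   # step S704 loop
            for m, axis in enumerate(axes_of(mono), start=1):     # step S703 loop
                hbp, ltp = sub2(mono, axis)     # Sub2 return values (HBP, LTP)
                hbp_mnj[(m, n, j)] = hbp
                ltp_mnj[(m, n, j)] = ltp
    return hbp_mnj, ltp_mnj
```

When all three loops have run to completion, the two dictionaries hold one (HBP, LTP) pair per symmetry axis per monochrome image per rejected product image, mirroring the fully populated registers described above.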
Next, in steps S706 and S707 of Figure 21, register M, which holds the number of the symmetry axis, is initialized again, and then 1 is added to its value. Processing then jumps, via the connector bearing jump destination number 206, to step S708 of Figure 22 (threshold range setting step (S7)).
The threshold range setting step (S7) shown in Figure 22 is the step of processing the plural high lower limit values and low upper limit values stored in the registers high lower limit value HBPMNJ and low upper limit value LTPMNJ in step S702 of Figure 21 described above. As described above, this step has two purposes. The 1st purpose is to be able to set only one threshold for the plural difference images generated from all (B) rejected product images. The 2nd purpose is to select, from among the K difference images corresponding to each of all the monochrome images generated from all (B) rejected product images (K per rejected product image), the difference image best suited to setting the above-described threshold.
In the first step S708 in Figure 22, the minimum lower limit MNBMN, the smallest value, is selected from among the (B×C) high lower limit values HBPMNJ (with 1≤M≤C and 1≤J≤B for each N satisfying 1≤N≤K) calculated using the difference images generated on the basis of the M-th symmetry axis from each monochrome image of the K monochrome images corresponding to each rejected product image, from rejected product image 1 to rejected product image B.
Processing then advances to step S709, where the maximum upper limit MXTMN, the largest value, is selected from among the B low upper limit values LTPMNJ (with 1≤M≤C and 1≤J≤B for each N satisfying 1≤N≤K) calculated using the difference images generated on the basis of the M-th symmetry axis from each monochrome image of the K monochrome images corresponding to each rejected product image, from rejected product image 1 to rejected product image B.
The meaning of selecting, in this way, the minimum value from among the (B×C) high lower limit values HBPMNJ (with 1≤M≤C and 1≤J≤B for each N satisfying 1≤N≤K) in step S708, and the maximum value from among the B low upper limit values LTPMNJ (with 1≤M≤C and 1≤J≤B for each N satisfying 1≤N≤K) in step S709, is explained below.
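The min/max selection of steps S708 and S709 can be sketched as follows. This is an assumed illustration: it fixes one monochrome-image number N and one axis number M and aggregates over the rejected-product-image index J; the (M, N, J)-keyed dictionaries and function name are not the patent's data structures.

```python
# Sketch of steps S708-S709: take the minimum of the high lower limits
# (MNBMN) and the maximum of the low upper limits (MXTMN) over all
# rejected product images J, for a fixed axis M and monochrome image N.
def aggregate_limits(hbp_mnj, ltp_mnj, m, n, num_rejected):
    js = range(1, num_rejected + 1)
    mnb = min(hbp_mnj[(m, n, j)] for j in js)   # step S708: minimum lower limit MNBMN
    mxt = max(ltp_mnj[(m, n, j)] for j in js)   # step S709: maximum upper limit MXTMN
    return mnb, mxt
```

Taking the minimum over the high lower limits and the maximum over the low upper limits yields a single conservative interval that covers the variation among all rejected product images, which is the point of the "1st problem" discussed above.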
The high lower limit value HBP is generated in steps S2206 and S2207 of the subroutine Sub2 shown in Figure 23. As described above, step S2206 corresponds, in the threshold setting steps of Fig. 5(d)(e), to the step of selecting the maximum section, i.e., the section whose difference value recorded in the middle layer is the maximum. In the case of Fig. 5(d), for example, the maximum section is the section that yields the difference value 58, the maximum in the middle layer; that is, the section in the upper layer where the differential pixel values 65 and 7 are adjacent. In step S2207 of Figure 23, the pixel values PV(MX+1) and PV(MX), stored in the two adjacent array elements AP(MX+1) and AP(MX) used in calculating the maximum difference value MXNP, are set as the high lower limit value HBP and the low upper limit value LTP. When the processing of step S2207 is mapped onto Fig. 5(d), among the differential pixel values recorded in the upper layer of Fig. 5(d), 65 becomes the high lower limit value HBP and 7 becomes the low upper limit value LTP.
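The gap search performed by steps S2206 and S2207 can be sketched as follows. This is a minimal illustration, not the patented subroutine Sub2 itself: the function name, the flat-list input, and the return convention are assumptions made here. Given differential pixel values in which, after sorting, 7 and 65 are adjacent (as in Fig. 5(d)), it selects the adjacent pair with the largest gap and reports the larger value as the high lower limit value HBP and the smaller as the low upper limit value LTP.

```python
def find_max_gap(pixel_values):
    """Return (hbp, ltp, gap): the adjacent pair in the sorted value
    sequence with the largest difference, as in Fig. 5(d)."""
    ap = sorted(set(pixel_values))           # adjacent arrangement AP
    gaps = [ap[i + 1] - ap[i] for i in range(len(ap) - 1)]
    mx = gaps.index(max(gaps))               # index of the maximum section
    return ap[mx + 1], ap[mx], gaps[mx]      # HBP, LTP, maximum difference

# Values chosen so that, as in Fig. 5(d), 7 and 65 end up adjacent
hbp, ltp, gap = find_max_gap([0, 3, 7, 65, 70, 72])
print(hbp, ltp, gap)   # 65 7 58
```

The section between LTP and HBP is the widest empty interval in the value distribution, which is why a threshold placed inside it separates the two pixel populations most reliably.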
Here, as described above, the pixel values of the B rejected product images vary from image to image. Therefore, the pixel values of the K monochrome images generated from each rejected product image also vary according to the rejected product image. Consequently, when a difference image is generated from each monochrome image, the pixel values of the difference image also vary, in accordance with the pixel value variation of the rejected product image on which that difference image is based. When these varying difference images are processed by the subroutine Sub2 of Figure 23, the high lower limit value HBP and the low upper limit value LTP generated from each difference image also vary. That is, when the original image shown in Fig. 5(a), i.e., the N-th monochrome image (step S306 of Figure 17), changes from one of the B rejected product images to another, the numerical values in each layer of Fig. 5(d)(e) vary. As a result, the high lower limit value HBP and the low upper limit value LTP generated in step S2207 of Figure 23 vary in accordance with the pixel value variation of the B rejected product images.
Since the high lower limit value HBP and the low upper limit value LTP thus vary from one difference image to another, a threshold set as recorded in the lower layer of Fig. 5(d)(e) also varies from one difference image to another.
Therefore, a step is needed that can set the same threshold for the difference images generated from the N-th monochrome image, no matter which of the B rejected product images that N-th monochrome image was generated from. Steps S708 and S709 of Figure 22 are carried out as that step.
The high lower limit value HBPMNJ processed in step S708 corresponds, among the differential pixel values in the upper layer that correspond to the maximum section of the difference values of adjacent differential pixel values recorded in the middle layer of Fig. 5(d), to the larger pixel value, located on the left side. This pixel value is hereafter called the maximum-section large pixel value.
Similarly, the low upper limit value LTPMNJ processed in step S709 corresponds, among the differential pixel values in the upper layer that correspond to the maximum section of the difference values of adjacent differential pixel values recorded in the middle layer of Fig. 5(d), to the smaller pixel value, located on the right side. This pixel value is hereafter called the maximum-section small pixel value.
That is, in step S708 the minimum lower limit value MNBMN is selected as the minimum value among the maximum-section large pixel values. The minimum lower limit value MNBMN is, among the (B×C) maximum-section large pixel values calculated from the difference images generated using the M-th symmetry axis from each of the K monochrome images corresponding to the B rejected product images, the value closest to the corresponding maximum-section small pixel value.
Likewise, in step S709 the maximum upper limit value MXTMN is selected as the maximum value among the maximum-section small pixel values. The maximum upper limit value MXTMN is, among the (B×C) maximum-section small pixel values calculated from the difference images generated using the M-th symmetry axis from each of the K monochrome images corresponding to the B rejected product images, the value closest to the corresponding maximum-section large pixel value.
In other words, the combination of the minimum lower limit value MNBMN and the maximum upper limit value MXTMN is nothing other than the combination that minimizes the difference value of the maximum section of the difference values of adjacent differential pixel values shown in the middle layer of Fig. 5(d)(e). The significance of this difference value being a minimum is that it is minimized so that the range in which a threshold can be set per N-th monochrome image becomes common to all difference images generated from the N-th monochrome image. Thus, the section formed by the combination of the minimum lower limit value MNBMN and the maximum upper limit value MXTMN is a section in which a threshold can be reliably set for all of the (B×C) difference images mentioned above. After the minimum lower limit value MNBMN and the maximum upper limit value MXTMN have thus been selected, the process advances to step S710.
In step S710, for the minimum lower limit value MNBMN and the maximum upper limit value MXTMN corresponding to the K monochrome images, their difference, the range RN = minimum lower limit value MNBMN − maximum upper limit value MXTMN, is calculated. As described above, the range RN is, for all of the (B×C) difference images generated on the basis of the M-th symmetry axis from the K monochrome images corresponding to the B rejected product images, the difference value of the section in which a threshold can be reliably set within the maximum section of the difference values of adjacent differential pixel values (for example, the section with difference value 58 in the middle layer of Fig. 5(d)). Once the range RN is calculated in step S710, step S7 ends and the process advances to step S8 shown in Figure 14(a).
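Under the assumption that the high lower limit values and low upper limit values collected for one (M, N) pair are held in flat lists, the selection of steps S708 and S709 and the calculation of the range RN in step S710 can be sketched as:

```python
def threshold_range(high_lower_limits, low_upper_limits):
    """S708: smallest high lower limit; S709: largest low upper limit;
    S710: range RN = MNBMN - MXTMN. RN is positive when a common
    threshold interval exists for every difference image."""
    mnb = min(high_lower_limits)   # minimum lower limit value MNBMN
    mxt = max(low_upper_limits)    # maximum upper limit value MXTMN
    rn = mnb - mxt                 # range RN
    return mnb, mxt, rn

# Limits from three hypothetical difference images of one monochrome image
mnb, mxt, rn = threshold_range([65, 60, 63], [7, 9, 8])
print(mnb, mxt, rn)   # 60 9 51
```

The interval (MXTMN, MNBMN) is the intersection of the per-image threshold intervals, which is why a single threshold placed inside it works for every difference image at once.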
(5.7) step S8
In Figure 14(a), step S8 is the threshold image generation step, in which the threshold image optimal for setting the threshold used to select the check object region is selected and that threshold is set. The detailed process of the threshold image generation step (S8) is shown in Figure 24.
In step S801 of Figure 24, it is judged whether the number K of monochrome images generated from the rejected product image is 2 or more. This is the same judgment as that of whether the image of the check object workpiece is a color image, performed in step S301 shown in Figure 17 (the monochrome image generation step (S3)). That is, in step S801 of Figure 24, if the image of the check object workpiece is a color image, the number K of monochrome images is 2 or more; if it is not a color image, the number K of monochrome images is 1, so K is less than 2. If the judgment result of step S801 is "Yes", i.e., the number of monochrome images generated from the rejected product image is 2 or more, the process advances to step S802.
In step S802, the maximum value is selected from among the K ranges RN (1≤N≤K) calculated from all the monochrome images, and the difference image having that maximum range RN is selected as the M-th threshold image. The number M of the M-th threshold image is, as described above, the number M of the M-th symmetry axis possessed by the original image, i.e., the N-th monochrome image. That is, a threshold image corresponding to the number of the symmetry axis is selected.
Here, the meaning of selecting the difference image with the maximum range RN as the M-th threshold image is explained. As described above, the range RN is the difference value of the section in which a threshold can be reliably set for all of the ((B×C)) difference images generated from the N-th monochrome images generated from the B rejected product images. This matches the method for solving the first problem mentioned above, namely that the threshold varies from one difference image to another and cannot be fixed to a single value. Furthermore, selecting the maximum value from among the ranges RN calculated in this way is nothing other than comparing the ranges RN, each the smallest such difference value, and selecting from among them the maximum value, for which a threshold can be set most easily. Selecting the difference image having that maximum range RN matches the solution to the second problem mentioned above, namely the need for a criterion for choosing the monochrome image used for threshold setting.
Summarizing the above: to solve the first problem, namely that the threshold shown in the lower layer of Fig. 5(d)(e) varies from one difference image to another and cannot be fixed to a single value, the range RN is calculated in step S710 of Figure 22. This eliminates the variation of the difference values of the individual difference images and makes it possible to set only one threshold for all of the ((B×C)) difference images generated from the N-th monochrome images generated from the B rejected product images. Further, to solve the second problem mentioned above, namely the need for a criterion for choosing the monochrome image used for threshold setting, the maximum value is selected in step S802 of Figure 24 from among the K ranges RN corresponding to the number K of monochrome images, and the difference image having that maximum range RN is selected as the M-th threshold image. This makes the selection criterion explicit, so that in the subsequent step S803 the monochrome image optimal for threshold setting can be selected from among the multiple monochrome images generated from one rejected product image.
On the other hand, if the judgment result of step S801 is "No", i.e., the number of monochrome images generated from the rejected product image is 1, the process advances to step S804. In step S804, the difference image generated from that single monochrome image is set as the M-th threshold image. After the M-th threshold image has thus been selected, the process advances to step S803.
In step S803, the monochrome image from which the M-th threshold image was generated is selected as the M-th threshold monochrome image. The M-th threshold monochrome image is the monochrome image used, for the M-th symmetry axis possessed by the original image, in calculating the threshold by which the retained pixel values shown in the lower layer of Fig. 5(d)(e) are selected. After the M-th threshold monochrome image has been selected, the process advances to step S805.
In step S805, for the M-th threshold image, the median of the range RN, (minimum lower limit value MNBMN + maximum upper limit value MXTMN)/2, is calculated, and this median is set as the M-th inspection threshold, the threshold used for selecting the check object region. The M-th inspection threshold calculated in step S805 corresponds to the threshold of the maximum section of difference values shown in the lower layer of Fig. 5(d)(e). After the M-th inspection threshold has thus been set, the process advances to step S806.
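The selection of steps S802 and S805 can be sketched as follows; the tuple layout (RN, MNBMN, MXTMN), one entry per monochrome image, is an assumption made for illustration. The entry with the maximum range RN is chosen, and the M-th inspection threshold is the midpoint of its interval:

```python
def select_inspection_threshold(ranges):
    """ranges: list of (rn, mnb, mxt) tuples, one per monochrome image.
    S802: pick the entry with the maximum range RN.
    S805: the M-th inspection threshold is the median
    (MNBMN + MXTMN) / 2 of that entry's interval."""
    rn, mnb, mxt = max(ranges, key=lambda t: t[0])
    return (mnb + mxt) / 2

# Three hypothetical monochrome images; the second has the widest range
thr = select_inspection_threshold([(40, 50, 10), (51, 60, 9), (30, 45, 15)])
print(thr)   # 34.5
```

Taking the midpoint places the threshold as far as possible from both limit values, maximizing the margin against pixel value variation on either side.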
In step S806, it is judged whether the number of the symmetry axis stored in register M has reached the number C of symmetry axes. Register M is incremented by 1 each time step S707 of the threshold range setting step (S7) shown in Figure 21 is executed. The original image corresponding to the incremented value of register M is processed by Figures 22 and 24 as described above, and in step S805 of Figure 24 the M-th inspection threshold is set. The judgment in step S806 of Figure 24 is therefore nothing other than the judgment of whether M-th inspection thresholds have been set for all C symmetry axes possessed by the N-th monochrome image. If the judgment result is "No", the M-th inspection thresholds corresponding to all the symmetry axes have not yet been set, so the process jumps, via the connector marked with jump destination number 205, to step S707 of Figure 21 (the threshold range setting step (S7)). In step S707, 1 is added to the value of register M, which stores the number of the symmetry axis. Then, as before, the processing of Figures 22 and 24 is executed again with the value of M increased by 1. When 1 has been added to the value of M repeatedly, the judgment result of step S806 of Figure 24 (the threshold image generation step (S8)) becomes "Yes". In this case, step S8 ends and the process advances to step S9 shown in Figure 14(a).
(5.8) step S9
Step S9 in Figure 14(a) is the threshold verification step. This step confirms that the pixel values of all pixels of the difference images generated from the qualified product images are smaller than the M-th inspection thresholds described above; that is, it confirms that even if the M-th inspection thresholds are applied to a qualified product image, no check object region that should undergo defect inspection, as indicated by the double frame in Fig. 6(c), can be selected. The detailed process of the threshold verification step (S9) is shown in Figures 25 and 26.
In Figure 25, steps S901 and S902 prepare to execute the threshold verification step on the A qualified product images obtained by shooting in step S101 of the known workpiece shooting step (S1) shown in Figure 15. First, in step S901, 0 is stored in register I, which holds the number of the qualified product image being processed in the threshold verification step, thereby initializing register I. Then, in step S902, 1 is added to the value of I.
Then, in step S903, the image whose number equals the value of I is taken out from among the qualified product images numbered 1 through A; here I=1. The process then advances to step S904.
Steps S904 and S905 are identical to steps S201 and S202, respectively, of the check object workpiece extraction step (S2) shown in Figure 16. That is, in steps S904 and S905 the image of the check object workpiece is generated from qualified product image 1 using the template matching method. After the image of the check object workpiece has been generated, the process advances to step S906.
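As an illustration of the template matching referred to here, the following minimal sketch locates a template within an image by minimum sum of squared differences over small nested lists. It stands in for, and does not reproduce, the matching actually used in steps S904 and S905; the function name and data layout are assumptions.

```python
def match_template(image, template):
    """Return the position (row, col) where the template best matches
    the image, judged by minimum sum of squared differences (SSD)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):          # slide the template over
        for c in range(iw - tw + 1):      # every candidate position
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 9, 0],
       [0, 0, 0, 0]]
print(match_template(img, [[9, 8], [7, 9]]))   # (1, 1)
```

Once the best match position is known, the workpiece's outermost edge can be fixed relative to it and the check object workpiece image cropped out.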
Steps S906 and S907 prepare to execute the subsequent steps of the threshold verification step on the difference images generated from the monochrome images based on qualified product image 1. First, in step S906, 0 is stored in register M, which holds the number of the symmetry axis in the monochrome image being processed in the subsequent steps of the threshold verification step, thereby initializing register M. Then, in step S907, 1 is added to the value of M.
The process then advances to step S908, where it is judged whether the image of the check object workpiece is color. If the judgment is "Yes", i.e., the image is a color image, the process advances to step S909. In step S909, K monochrome images are generated from the image of the check object workpiece, which is a color image. Step S909 is identical to step S302 of the monochrome image generation step (S3) shown in Figure 17. After the K monochrome images have been generated, the process advances to step S910.
In step S910, from among the generated monochrome images, the monochrome image corresponding to the one from which the M-th threshold image was generated is selected as the M-th threshold monochrome image. As described above, selecting the monochrome image from which the M-th threshold image was generated as the M-th threshold monochrome image is step S803 of the threshold image generation step (S8) shown in Figure 24. That is, the M-th threshold monochrome image selected in step S803 corresponds to the M-th threshold monochrome image selected in step S910. The purpose of this correspondence is to confirm that even when the M-th inspection threshold set using the M-th threshold monochrome image is applied to a qualified product image, no check object region such as that shown by the double frame in Fig. 6(c), as described above, can be selected.
On the other hand, if the judgment in step S908 is "No", i.e., the image of the check object workpiece is a monochrome image, the process advances to step S911. In step S911, that monochrome image, i.e., the image of the check object workpiece, is set as the M-th threshold monochrome image. After the M-th threshold monochrome image has thus been selected in step S910 or S911, the process jumps, via the connector marked with jump destination number 209, to step S912 of Figure 26.
In step S912 of Figure 26, the M-th threshold monochrome image is stored in the register named "original image". The process then advances to step S913, where the coordinates of one end of the M-th symmetry axis are stored in the register named ONEM. Step S913 is identical to step S405 of the symmetry judgment step (S4) shown in Figure 18. The process then advances to step S914, where the coordinates of the other end of the M-th symmetry axis are stored in the register named OTEM. Step S914 is identical to step S406 of the symmetry judgment step (S4) shown in Figure 18. The process then advances to step S21, and the subroutine Sub1 shown in Figure 20 is executed. Subroutine Sub1 has already been explained, so a detailed description is omitted here.
By executing steps S912 through S21 (subroutine Sub1) of Figure 26, the M-th threshold monochrome image selected in step S910 or S911 of Figure 25 and the coordinates of one end and the other end of its symmetry axis are sent from the main program to subroutine Sub1 as arguments, and the main program receives from subroutine Sub1, as the return value, the difference image generated from the M-th threshold monochrome image.
After step S21 has been executed, step S915 is executed on the difference image received as the return value. In step S915, it is judged whether the pixel values of all pixels of the difference image are smaller than the M-th inspection threshold. If the judgment result is "Yes", it is confirmed that, as described above, no check object region such as that shown by the double frame in Fig. 6(c) can be selected from that difference image. In this case, the process advances to step S916.
In step S916, it is judged whether the number of the symmetry axis stored in register M has reached the number C of symmetry axes. Register M is incremented by 1 each time step S907 of Figure 25 is executed. The original image corresponding to the incremented value of register M is processed in steps S912 through S21 as described above, generating the difference image, and in step S915 it is judged whether the pixel values of all pixels of that difference image are smaller than the M-th inspection threshold. The judgment in step S916 is therefore nothing other than the judgment of whether the judgment of step S915 has been performed for all C symmetry axes possessed by the image of the check object workpiece. If the judgment result of step S916 is "No", the judgment of step S915 has not yet been performed on the difference images corresponding to all the symmetry axes, so the process jumps, via the connector marked with jump destination number 208, to step S907 of Figure 25. In step S907, 1 is added to the value of register M, which stores the number of the symmetry axis. Then, as before, steps S908 of Figure 25 through S915 of Figure 26 are executed again with the value of M increased by 1. When 1 has been added to the value of M repeatedly, the judgment result of step S916 becomes "Yes". In this case, the process advances to step S917.
In step S917, it is judged whether the number of the qualified product image stored in register I has reached the number A of qualified product images. Register I is incremented by 1 each time step S902 of Figure 25 is executed. The qualified product image corresponding to the incremented value of register I is processed in steps S903 of Figure 25 through S916 of Figure 26 as described above, and it is judged whether the pixel values of all pixels of the difference images corresponding to all symmetry axes of that qualified product image are smaller than the M-th inspection thresholds. The judgment in step S917 is therefore nothing other than the judgment of whether it has been confirmed, for all A qualified product images, that the pixel values of all pixels of the difference images corresponding to all symmetry axes are smaller than the M-th inspection thresholds. If the judgment result of step S917 is "No", this has not yet been confirmed for all A qualified product images, so the process jumps, via the connector marked with jump destination number 207, to step S902 of Figure 25. In step S902, 1 is added to the value of register I, which stores the number of the qualified product image.
Then, as before, steps S903 of Figure 25 through S916 of Figure 26 are executed again with the value of I increased by 1. When 1 has been added to the value of I repeatedly, the judgment result of step S917 of Figure 26 becomes "Yes". In this case, step S9 ends, and the process jumps, via the connector marked with jump destination number 101 shown at the bottom of Figure 14(a), to step S10 of Figure 14(b). As described above, Figure 14(a) is the threshold setting mode and Figure 14(b) is the inspection execution mode. Thus, the threshold setting mode ends at the stage where step S9 finishes, and the inspection execution mode begins.
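The double loop of the threshold verification step can be condensed as in the following sketch. The data layout is an assumption made for illustration: `diff_images_by_product[i][m]` stands in for the Sub1 return value for qualified product image i and symmetry axis m, flattened to a list of pixel values.

```python
def verify_thresholds(diff_images_by_product, inspection_thresholds):
    """inspection_thresholds[m] is the M-th inspection threshold.
    Returns True only when the step S915 test passes for every
    difference image of every qualified product image."""
    for diffs in diff_images_by_product:        # loop over I (S902..S917)
        for m, diff in enumerate(diffs):        # loop over M (S907..S916)
            if not all(p < inspection_thresholds[m] for p in diff):
                return False                    # S915 "No" -> step S918
    return True                                 # S917 "Yes" -> end of S9

ok = verify_thresholds([[[1, 2, 3], [0, 4, 2]],
                        [[2, 2, 1], [3, 3, 0]]], [5, 5])
print(ok)   # True
```

A `False` result corresponds to the step S918 branch: the cause must be investigated and the threshold setting mode re-executed.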
On the other hand, a judgment result of "No" in step S915 of Figure 26 means that if the M-th inspection threshold is applied to the qualified product image, a check object region is selected. In that case, the process advances to step S918, where the reason the check object region was selected is investigated and countermeasures to remove that cause are taken. After the countermeasures, the process jumps, via the connector marked with jump destination number 201, to step S103 of the known workpiece shooting step (S1) of Figure 15, and the threshold setting mode shown in Figure 14(a) is executed again. In this re-execution, the M-th threshold image and the M-th threshold monochrome image are selected and the M-th inspection threshold is set in steps S802 through S805 of the threshold image generation step (S8) of Figure 24. Then, in step S915 of Figure 26, it is again judged whether the pixel values of all pixels of the difference image are smaller than the M-th inspection threshold. The investigation and countermeasures of step S918 and the re-execution of the threshold setting mode are repeated until that judgment result becomes "Yes". Once the judgment result becomes "Yes", the threshold verification step (S9) is executed, as described above, until the judgments of step S916 and step S917 become "Yes". When the judgment result of S917 becomes "Yes", step S9 ends as described above and the threshold setting mode ends, and the process jumps to step S10 of the inspection execution mode shown in Figure 14(b).
The inspection execution mode is explained next, using Figure 14(b) and Figures 27 through 31.
(5.9) step S10
Step S10 in Figure 14(b) is the examined workpiece shooting step, in which the examined workpiece is shot. The detailed process of the examined workpiece shooting step (S10) is shown in Figure 27.
In step S1001 shown in Figure 27, the examined workpiece is shot. When step S1001 ends, step S10 ends and the process advances to step S2 shown in Figure 14(b). Step S2 in Figure 14(b) is the check object workpiece extraction step, the same step as step S2 shown in Figure 14(a). That is, the outermost edge of the workpiece is searched for and determined from the image of the examined workpiece using the template matching method, and the image of the check object workpiece is generated. When step S2 ends, the process advances to step S11 shown in Figure 14(b).
(5.10) step S11
Step S11 in Figure 14(b) is the examined monochrome image generation step, in which a monochrome image is generated from the image of the check object workpiece generated in step S2. The detailed process of the examined monochrome image generation step (S11) is shown in Figure 28.
In step S1101 shown in Figure 28, 0 is stored in register M, initializing register M. Here, register M is the register that holds the number of the symmetry axis in the monochrome image being processed in the subsequent step S1102 and in steps S12 and S13 shown in Figure 14(b), which follow the examined monochrome image generation step. The number of symmetry axes is set, for example, in step S4 (the symmetry judgment step) of the threshold setting mode of Figure 14(a), the preceding stage. Specifically, in step S402 of the symmetry judgment step (S4) shown in Figure 18, register C stores the number of symmetry axes. Once set in step S402, the value of register C is maintained.
Then, in step S1102, the image obtained by replacing the pixel values of all pixels of the image of the check object workpiece with the maximum pixel value is stored in the register named "previous image". The role of the previous image register is explained below.
The image processing algorithm of the present invention was explained using Fig. 5. In that explanation, the 1st straight line L1 and the 2nd straight line L2 shown in Fig. 5(a) were defined as two symmetry axes. The difference images corresponding to these symmetry axes are Fig. 5(b) and Fig. 5(c). The retained pixels selected from among all pixels of each difference image, i.e., Fig. 5(b) and Fig. 5(c), are shown in Fig. 6(a) and Fig. 6(b), their pixel values surrounded by double frames. After these retained pixels are selected, the common part of the retained pixels shown in Fig. 6(a) and Fig. 6(b) is selected, and that common part is set as the check object region. Selecting the common part of the retained pixels in Fig. 6 corresponds to the following concept: the difference images corresponding to the multiple symmetry axes are generated simultaneously, the retained pixels are selected from each difference image, and then the common part of the retained pixels is selected. However, executing the processing corresponding to multiple symmetry axes simultaneously in parallel by software is impractical: such parallel processing would require securing a huge storage area holding all the pixel values of the multiple difference images, together with steps for writing pixel values to and reading them from that storage area. Therefore, in the defect inspection method utilizing the image processing algorithm of the present invention, the following method is adopted in order to execute the steps of Fig. 5(b) through Fig. 6(c) by software.
First, the original image (the monochrome image generated from the image of the check object workpiece) and the coordinates of one end (ONEM) and the other end (OTEM) of the 1st symmetry axis are sent as arguments from the main program to the subroutine Sub1 shown in Figure 20. Subroutine Sub1 generates the difference image corresponding to the 1st symmetry axis based on the original image and returns that difference image to the main program as the return value. For the received difference image, the main program selects the retained pixels using the M-th inspection threshold set in the threshold setting mode. The positions of these retained pixels become the check object region corresponding to the 1st symmetry axis.
Next, 1 is added to register M, which stores the number of the symmetry axis, so that M=2. Accordingly, the arguments then sent from the main program to subroutine Sub1 become the quantities related to the 2nd symmetry axis. In subroutine Sub1, the difference image corresponding to the 2nd symmetry axis is likewise generated based on the original image, and in the main program the check object region corresponding to the 2nd symmetry axis is likewise selected.
Then, the common part of the check object region corresponding to the 2nd symmetry axis and the just-selected check object region corresponding to the 1st symmetry axis is set as the new check object region. That is, the common part of the two check object regions corresponding to the (M+1)-th symmetry axis and the M-th symmetry axis is set as the new check object region. For such a procedure, the check object region corresponding to the M-th symmetry axis must be held in a register during the period until the check object region corresponding to the (M+1)-th symmetry axis is selected. The step that secures the register named "previous image" and stores the image of the check object workpiece in it beforehand as the initial value is the above step S1102. Note that what is stored in step S1102 is the image obtained by replacing the pixel values of all pixels of the image of the check object workpiece with the maximum pixel value; the reason is described below.
As described above, the check object region is the region of the image of the check object workpiece in which a defect may exist and on which defect inspection should be performed. Therefore, as a reference image for performing defect inspection on the image of the check object workpiece, an image must be generated in which the check object region and the region outside it can be clearly distinguished. In the present invention, the inspection image is generated as this reference image. The inspection image is generated by placing pixels having the maximum pixel value in the check object region and pixels having the minimum pixel value in the region outside the check object region. Accordingly, an image in which all regions of the image of the check object workpiece constitute the check object region is first generated as the initial value of the previous image (hereinafter, the initial previous image). This initial previous image is stored in the previous image register in step S1102 of Figure 28. Then, as described above, in the subsequent steps, the retained pixels of the check object region corresponding to the 1st of the symmetry axes (C in number) possessed by the image of the check object workpiece are selected.
Next, the common part of the check object region corresponding to the 1st symmetry axis and the check object region of the initial previous image is selected, and a new previous image having that common part as its check object region is generated. The generation of this new previous image is hereafter called the update of the previous image. In the update of the previous image corresponding to the 1st symmetry axis, it must always be possible to select a common part of the check object region corresponding to the 1st symmetry axis and the check object region of the initial previous image. The condition for this is that all regions of the initial previous image constitute the check object region. For this reason, in the initial previous image stored in the register in step S1102, all pixel values are set to the maximum pixel value so that all regions become the check object region. After the previous image has been updated for the 1st symmetry axis, 1 is added to register M, which stores the number of the symmetry axis, and the same steps are executed. This is repeated until register M reaches the number C of symmetry axes; at the point when C is reached, the last-updated previous image is set as the inspection image.
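The sequential previous-image update described above can be sketched as follows, with the images reduced to flat Boolean masks (True standing for the maximum pixel value); the function and variable names are assumptions made for illustration:

```python
def build_inspection_image(region_masks):
    """region_masks[m] marks, for symmetry axis m, the pixels whose
    difference value exceeds the inspection threshold (True = part of
    the check object region). The previous image starts with every
    pixel at the maximum (all True, step S1102) and is intersected
    with each axis's region in turn; the last update is the
    inspection image."""
    n = len(region_masks[0])
    previous = [True] * n                      # initial previous image
    for mask in region_masks:                  # M = 1 .. C
        previous = [p and q for p, q in zip(previous, mask)]
    return previous

# Two hypothetical symmetry axes over a 6-pixel image
inspection = build_inspection_image([
    [True, True, False, True, False, True],
    [True, False, False, True, True, True],
])
print(inspection)   # [True, False, False, True, False, True]
```

Only one mask is held at a time, which is exactly the point of the previous-image register: the common part across all C axes is accumulated without ever storing all C difference images at once.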
After the preceding image has been initialized in the register in step S1102 of FIG. 28 in this way, the process advances to step S1103. In step S1103, 1 is added to the value of the register M that stores the symmetry-axis count. Then, in steps S1104 to S1108, an inspected monochrome image is generated and stored in the original-image register. Steps S1104 to S1107 are respectively identical to steps S908 to S911 of the threshold verification step (S9) shown in FIG. 25, so a detailed description is omitted. In step S1108, step S11 ends when the inspected monochrome image has been stored in the original-image register, and the process advances to step S5 shown in FIG. 14(b).
Step S5 in FIG. 14(b) is the difference calculation step, and step S6 following step S5 is the difference image generation step. As described above, these steps use the subroutine Sub1 of FIG. 19 and FIG. 20 to generate a difference image from the original image set in step S1108 of FIG. 28; a detailed description is therefore omitted. When steps S5 and S6 end, the process advances to step S12 in FIG. 14(b).
(5.11) Step S12
Step S12 in FIG. 14(b) is the inspection region selection step, in which a region candidate image having candidate regions for the inspection target region is generated using the difference image generated in step S6. The detailed flow of the inspection region selection step (S12) is shown in FIG. 29.
In step S1201 of FIG. 29, pixels having a pixel value larger than the M-th inspection threshold are selected from among all the pixels of the difference image as inspection-region pixels. This step has the same content as the selection of the deviating pixels shown in FIG. 5(d) and FIG. 5(e) from among all the pixels of the difference image. However, since step S1201 is a preparatory step for selecting the inspection target region, the selected pixels are here called inspection-region pixels. After the inspection-region pixels have been selected in step S1201, the process advances to step S1202.
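Step S1201 amounts to plain thresholding of the difference image; a minimal sketch, with assumed names and a nested-list image layout:

```python
def select_inspection_region_pixels(diff_image, inspection_threshold):
    # Step S1201: positions whose differential pixel value exceeds
    # the inspection threshold become inspection-region pixels.
    return [(r, c)
            for r, row in enumerate(diff_image)
            for c, value in enumerate(row)
            if value > inspection_threshold]

diff = [[0, 12, 3],
        [40, 2, 25]]
pixels = select_inspection_region_pixels(diff, 10)
```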
Step S1202 is a preliminary step for generating the image in which, as described above, the inspection target region and the region outside it are clearly distinguished, that is, the inspection image. First, the pixels of the difference image are divided into inspection-region pixels and the remaining pixels. Then, region-designating pixels having the maximum pixel value are placed at the positions of the inspection-region pixels, and outside-inspection pixels having the minimum pixel value are placed at the positions of the remaining pixels, thereby generating the region candidate image. With the region candidate image generated in this step, the inspection target region and the region outside it can be positively distinguished by comparing the pixel values of the inspection-region pixels with those of the outside-inspection pixels. When the region candidate image has been generated in step S1202, step S12 ends, and the process advances to step S13 shown in FIG. 15(b).
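Step S1202 then turns that selection into a binary image; a sketch under the same assumptions (255 and 0 standing in for the maximum and minimum pixel values, names ours):

```python
MAX_VAL, MIN_VAL = 255, 0

def region_candidate_image(diff_image, inspection_threshold):
    # Step S1202: region-designating pixels (MAX_VAL) at the
    # inspection-region positions, outside-inspection pixels
    # (MIN_VAL) everywhere else.
    return [[MAX_VAL if value > inspection_threshold else MIN_VAL
             for value in row]
            for row in diff_image]

candidate = region_candidate_image([[0, 12, 3], [40, 2, 25]], 10)
```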
(5.12) Step S13
Step S13 is the inspection image generation step, in which the inspection image is generated based on the region candidate image generated in step S12. The detailed flow of the inspection image generation step (S13) is shown in FIG. 30.
In step S1301 of FIG. 30, the common designating pixels, which are the common part of the region-designating pixels of the preceding image and of the region candidate image, are selected. As described above, this step selects, every time an inspection target region is generated in correspondence with a symmetry axis, the common part of that inspection target region and the preceding image, and updates the preceding image with it. After the common designating pixels, that is, the common part, have been selected in step S1301, the process advances to step S1302.
In step S1302, common designating pixels having the maximum pixel value are placed at the positions of the common designating pixels, and the pixel values of all other pixels are set to the minimum pixel value, thereby generating the update candidate image. In step S1302, regardless of the positions at which pixels having the maximum pixel value were placed in the preceding image, the pixel values of the pixels other than the common designating pixels selected in step S1301 are set to the minimum pixel value. That is, even at a position in the preceding image where a pixel having the maximum pixel value was placed, if no common designating pixel selected in step S1301 is placed at that position, the pixel value at that position is updated to the minimum pixel value. The update of the preceding image is completed in this way, and the updated image becomes the update candidate image. After the update candidate image has been generated in step S1302, the process advances to step S1303.
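The update of step S1302 is a pixel-wise intersection: a position keeps the maximum value only where both the preceding image and the region candidate image hold it, and every other position is forced to the minimum value even if the preceding image held the maximum there. A minimal sketch with assumed names:

```python
MAX_VAL, MIN_VAL = 255, 0

def update_candidate_image(pre_image, region_candidate):
    # Step S1301: a common designating pixel exists where both the
    # preceding image and the region candidate hold MAX_VAL.
    # Step S1302: every other position is set to MIN_VAL, regardless
    # of what the preceding image held there.
    return [[MAX_VAL if p == MAX_VAL and c == MAX_VAL else MIN_VAL
             for p, c in zip(pre_row, cand_row)]
            for pre_row, cand_row in zip(pre_image, region_candidate)]

pre       = [[255, 255], [0, 255]]
candidate = [[255, 0],   [0, 255]]
updated = update_candidate_image(pre, candidate)
```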
In step S1303, the update candidate image is stored in the preceding-image register. Then, in step S1304, it is judged whether the number of symmetry axes stored in the register M has reached the number C of symmetry axes. The register M is incremented by 1 each time in step S1103 of the inspected monochrome image generation step (S11) shown in FIG. 28. The original image corresponding to the incremented value of the register M is processed through steps S1104 to S1108 of FIG. 28 and steps S1301 to S1303 of FIG. 29 and FIG. 30 as described above, and the preceding image is updated in step S1303 of FIG. 30. The judgment in step S1304 is therefore nothing other than a judgment of whether the preceding image has been updated in correspondence with all C symmetry axes possessed by the inspected monochrome image. When the judgment result is "No", the preceding image has not yet been updated for all the symmetry axes, so the process jumps, via the connector labeled with destination number 210, to step S1103 of FIG. 28 (the inspected monochrome image generation step (S11)).
In step S1103, 1 is added to the value of the register M that stores the symmetry-axis count. Then, as explained above, with the value of M increased by 1, steps S1104 to S1108 of FIG. 28 and steps S1301 to S1303 described in FIG. 29 and FIG. 30 are executed again. When the incrementing of M and the inspection image generation step (S13) have been repeated, the judgment result of step S1304 of FIG. 30 becomes "Yes"; in that case the process advances to step S1305.
In step S1305, the update candidate image is stored in a register named the inspection image. The generation of the inspection image is thereby completed, step S13 ends, and the process advances to step S14 shown in FIG. 14(b).
(5.13) Step S14
Step S14 in FIG. 14(b) is the inspection execution step, in which defect inspection of the image of the inspection target workpiece is executed based on the inspection image generated in step S13. The detailed flow of the inspection execution step (S14) is shown in FIG. 31.
In step S1401 of FIG. 31, the image of the inspection target workpiece is taken out, and defect inspection is performed on the inspection target region while referring to the inspection image. Specifically, the region in which the maximum pixel value is placed in the inspection image is taken as the inspection target region of the inspection target workpiece, and defect inspection is executed on that region. Any method known to the operator may be used as the defect inspection method.
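Step S1401 in effect uses the inspection image as a mask: only positions holding the maximum value are passed to whatever defect check is chosen. A sketch with assumed names and a deliberately trivial placeholder check, since the patent leaves the inspection method itself open:

```python
MAX_VAL = 255

def inspect(workpiece_image, inspection_image, defect_check):
    # Step S1401: run defect_check only on pixels inside the
    # inspection target region (MAX_VAL in the inspection image).
    defects = []
    for r, (wrow, mrow) in enumerate(zip(workpiece_image, inspection_image)):
        for c, (value, mask) in enumerate(zip(wrow, mrow)):
            if mask == MAX_VAL and defect_check(value):
                defects.append((r, c))
    return defects

workpiece  = [[200, 90], [80, 210]]
inspection = [[255, 255], [0, 255]]
# Placeholder check (our assumption): call any dark pixel a defect.
found = inspect(workpiece, inspection, lambda v: v < 100)
```

Note that the dark pixel at (1, 0) is ignored because it lies outside the inspection target region.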
The defect inspection method using the image processing algorithm of the present invention described above is superior to conventional defect inspection methods in the following respects. First, there are few items that the operator must judge by visually examining the captured image, and the judgment criteria do not presuppose operator skill.
In a conventional defect inspection method, the operator visually examines the image of the workpiece subject to defect inspection, and confirms the inspection target region that is the object of defect inspection and the excluded region that is not. The operator also confirms the correlation of the arrangement of each region and the magnitude relation of the pixel values of the pixels constituting each region. Then, taking the correlation of the arrangement and the magnitude relation of the pixel values into account, the operator sets a threshold for sorting the inspection target region from the excluded region, and sorts out the inspection target region. Furthermore, when a plurality of monochrome images is generated from a color image, the operator visually examines each monochrome image, compares the brightness between the regions described above, and selects the single monochrome image judged to identify defects most reliably. The setting of the threshold and the sorting of the inspection target region described above are then performed on the selected monochrome image.
In contrast, the image processing algorithm of the present invention and the defect inspection method using it exploit the fact that the image of the workpiece is symmetric about a specific straight line. First, the difference between the pixel values of pixels arranged at positions symmetric about the symmetry axis is obtained, and a difference image is generated. Next, pixels having a pixel value larger than a predetermined threshold are selected from the pixels constituting the difference image, and the positions at which those pixels are arranged are selected as the inspection target region. When there is a plurality of symmetry axes, the common part of the inspection target regions corresponding to the respective symmetry axes is taken as the final inspection target region. Furthermore, when a plurality of monochrome images is generated from a color image, a predetermined operation is executed on the pixel values of the pixels of the difference image generated from each monochrome image, the operation results of the difference images are compared, and the single monochrome image best suited for threshold setting is selected. The threshold described above is then set using that monochrome image.
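The algorithm summarized above can be sketched end to end for a single vertical symmetry axis. This is our own minimal reading, not the patent's implementation; in particular, taking the absolute difference is one plausible interpretation of "the difference of the pixel values":

```python
def difference_image_vertical(image):
    # Difference of each pixel against its mirror partner about the
    # vertical centre line; the same value appears at both symmetric
    # positions, so the difference image is itself symmetric.
    w = len(image[0])
    return [[abs(row[c] - row[w - 1 - c]) for c in range(w)]
            for row in image]

def inspection_target_region(image, threshold):
    # Positions whose differential value exceeds the threshold form
    # the inspection target region.
    diff = difference_image_vertical(image)
    return [(r, c)
            for r, row in enumerate(diff)
            for c, v in enumerate(row)
            if v > threshold]

# A left/right symmetric workpiece image with one defect at (0, 3).
img = [[10, 20, 20, 60],
       [30, 40, 40, 30]]
region = inspection_target_region(img, 15)
```

Note that the defect at (0, 3) also flags its mirror position (0, 0): the differential value is placed at both symmetric positions, consistent with the difference image generation step of claim 1.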
With this algorithm and the defect inspection method using it, there is almost no step in which the operator visually examines the image of the workpiece to make any judgment. Therefore, operator skill is not required as it is in conventional defect inspection methods, and the burden on the operator is reduced. Moreover, the algorithm is easily automated by software to execute defect inspection, so the inspection speed is markedly improved compared with conventional defect inspection, and the inspection is less susceptible to differences in operator proficiency.

Claims (8)

1. An image processing method, characterized by comprising:
a difference calculation step of dividing a monochrome original image, which has a 1st region and a 2nd region that are symmetric about a reference line, into the 1st region and the 2nd region by the reference line, and calculating, for each pair of two original pixels arranged in the 1st region and the 2nd region at positions symmetric about the reference line, a differential pixel value that is the difference between the pixel values of the two original pixels; and
a difference image generation step of generating a difference image in which differential pixels having the differential pixel values are arranged, wherein the differential pixel having the differential pixel value calculated using the original pixel at a 1st position in the 1st region and the original pixel at a 2nd position in the 2nd region is arranged at both the 1st position and the 2nd position of the difference image, thereby generating the difference image.
2. The image processing method according to claim 1, characterized in that
the reference line is a 1st straight line that divides the original image into the 1st region of an upper half and the 2nd region of a lower half, each containing an equal number of the original pixels.
3. The image processing method according to claim 1, characterized in that
the reference line is a 2nd straight line that divides the original image into the 1st region of a left half and the 2nd region of a right half, each containing an equal number of the original pixels.
4. The image processing method according to claim 1, characterized in that
in the difference image generation step, a 1st difference image and a 2nd difference image different from the 1st difference image are generated from the same original image as the difference images,
the image processing method further comprising:
an inspection region selection step of selecting an inspection target region in the original image using the 1st difference image and the 2nd difference image.
5. A defect inspection method for performing defect inspection on an inspected object using the image processing method according to claim 1, characterized by having:
a threshold setting mode and an inspection execution mode,
the threshold setting mode comprising:
a 1st step of making monochrome images, generated from captured images obtained by capturing a plurality of inspected objects known to be non-defective products, the 1st original images, and generating a plurality of non-defective-product difference images from the generated plurality of 1st original images using the image processing method; and
a 2nd step of making monochrome images, generated from captured images obtained by capturing a plurality of inspected objects known to be defective products, the 2nd original images, and generating a plurality of defective-product difference images from the generated plurality of 2nd original images using the image processing method,
wherein, in the threshold setting mode, one inspection region threshold is set, the inspection region threshold being capable of selecting, from the differential pixels of the plurality of defective-product difference images, deviating pixels whose differential pixel values depart from the other differential pixel values by a predetermined value or more and which are arranged at the same position in each defective-product difference image, and being incapable of selecting, from the differential pixels of the plurality of non-defective-product difference images, the differential pixels arranged at the same positions as the deviating pixels as such deviating pixels, and
in the inspection execution mode, a monochrome image generated from a captured image obtained by capturing the inspected object subject to defect inspection is made the 3rd original image, a difference image is generated from the generated 3rd original image using the image processing method, an inspection target region in the 3rd original image is selected using the difference image and the inspection region threshold, and defect inspection is executed on the inspection target region.
6. The defect inspection method according to claim 5, characterized in that
the captured image is an achromatic image, and
one monochrome image is generated from each captured image.
7. The defect inspection method according to claim 5, characterized in that
the captured image is a color image,
two or more monochrome images are generated from each captured image,
in the 1st step of the threshold setting mode, each monochrome image is made a 1st original image, and the plurality of non-defective-product difference images is generated from the generated plurality of 1st original images,
in the 2nd step of the threshold setting mode, each monochrome image is made a 2nd original image, and the plurality of defective-product difference images is generated from the generated plurality of 2nd original images,
in the threshold setting mode, the one monochrome image for which the setting range of the inspection region threshold is largest is selected, and the inspection region threshold is set for the selected monochrome image, and
in the inspection execution mode, the monochrome image of the same type as the monochrome image selected in the threshold setting mode is selected, as the 3rd original image, from the two or more monochrome images generated from the captured image obtained by capturing the inspected object subject to defect inspection, the difference image is generated from the generated 3rd original image, the inspection target region is selected using the difference image and the inspection region threshold, and defect inspection is executed on the inspection target region.
8. The defect inspection method according to claim 5, characterized in that
it further comprises an inspected-object extraction step of extracting the inspected object subject to defect inspection from the 3rd original image using a template matching method.
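As one concrete, hedged reading of claim 1 with the 1st straight line of claim 2 as the reference line (a horizontal centre line splitting the image into equal upper and lower halves): each pair of vertically mirrored original pixels yields one differential pixel value, which is written at both the 1st and the 2nd position of the difference image. The absolute difference and all identifiers below are our assumptions, not text from the claims:

```python
def difference_image_horizontal(original):
    # Reference line: horizontal centre line (claim 2), dividing the
    # original image into an upper 1st region and a lower 2nd region
    # with equal numbers of original pixels.
    h = len(original)
    diff = [[0] * len(row) for row in original]
    for r in range(h // 2):
        mirror = h - 1 - r  # 2nd position paired with row r
        for c in range(len(original[r])):
            d = abs(original[r][c] - original[mirror][c])
            diff[r][c] = d        # placed at the 1st position
            diff[mirror][c] = d   # and at the 2nd position
    return diff

original = [[10, 20],
            [30, 20],
            [15, 90],
            [10, 20]]
diff = difference_image_horizontal(original)
```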
CN201810677554.3A 2017-06-28 2018-06-27 Defect inspection method Active CN109146839B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-126674 2017-06-28
JP2017126674A JP6879841B2 (en) 2017-06-28 2017-06-28 Image processing method and defect inspection method

Publications (2)

Publication Number Publication Date
CN109146839A true CN109146839A (en) 2019-01-04
CN109146839B CN109146839B (en) 2022-03-08

Family

ID=64802353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810677554.3A Active CN109146839B (en) 2017-06-28 2018-06-27 Defect inspection method

Country Status (4)

Country Link
JP (1) JP6879841B2 (en)
KR (1) KR102090568B1 (en)
CN (1) CN109146839B (en)
TW (1) TWI695165B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754504A (en) * 2020-07-01 2020-10-09 华能国际电力股份有限公司大连电厂 Machine vision-based chemical mixed bed layered detection method

Citations (5)

Publication number Priority date Publication date Assignee Title
US20060126914A1 (en) * 2004-12-13 2006-06-15 Akio Ishikawa Image defect inspection method, image defect inspection apparatus, and appearance inspection apparatus
CN101609500A (en) * 2008-12-01 2009-12-23 公安部第一研究所 Quality estimation method of exit-entry digital portrait photos
JP2010025708A (en) * 2008-07-17 2010-02-04 Fujifilm Corp Method and device for inspecting imaging element
US20140198239A1 (en) * 2013-01-17 2014-07-17 Sony Corporation Imaging apparatus and imaging method
US20160019689A1 (en) * 2014-07-15 2016-01-21 Nuflare Technology, Inc. Mask inspection apparatus and mask inspection method

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN1723384A (en) * 2003-01-15 2006-01-18 麦克罗尼克激光系统公司 Method to detect a defective element
US7969488B2 (en) * 2005-08-03 2011-06-28 Micron Technologies, Inc. Correction of cluster defects in imagers
JP2008003063A (en) * 2006-06-26 2008-01-10 Seiko Epson Corp Shading correction method, defect detection method, and defect detector and control method program thereof
JP4192975B2 (en) * 2006-07-26 2008-12-10 ソニー株式会社 Image processing apparatus and method, program, and recording medium
JP5310247B2 (en) * 2009-05-13 2013-10-09 ソニー株式会社 Image processing apparatus and method, and program
KR20140087606A (en) * 2012-12-31 2014-07-09 엘지디스플레이 주식회사 Method and apparatus of inspecting mura of flat display
JP6344593B2 (en) * 2013-06-19 2018-06-20 株式会社 東京ウエルズ Defect inspection method

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20060126914A1 (en) * 2004-12-13 2006-06-15 Akio Ishikawa Image defect inspection method, image defect inspection apparatus, and appearance inspection apparatus
JP2010025708A (en) * 2008-07-17 2010-02-04 Fujifilm Corp Method and device for inspecting imaging element
CN101609500A (en) * 2008-12-01 2009-12-23 公安部第一研究所 Quality estimation method of exit-entry digital portrait photos
US20140198239A1 (en) * 2013-01-17 2014-07-17 Sony Corporation Imaging apparatus and imaging method
US20160019689A1 (en) * 2014-07-15 2016-01-21 Nuflare Technology, Inc. Mask inspection apparatus and mask inspection method

Non-Patent Citations (2)

Title
YIH-CHIH CHIOU et al.: "Flaw detection of cylindrical surfaces in PU-packing by using machine vision technique", Measurement *
GAO Xiaobin et al.: "Improvement of machine-vision-based defect detection for printed matter", Printing Magazine *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN111754504A (en) * 2020-07-01 2020-10-09 华能国际电力股份有限公司大连电厂 Machine vision-based chemical mixed bed layered detection method
CN111754504B (en) * 2020-07-01 2024-03-19 华能国际电力股份有限公司大连电厂 Chemical mixed bed layering detection method based on machine vision

Also Published As

Publication number Publication date
KR20190001914A (en) 2019-01-07
TW201905443A (en) 2019-02-01
JP2019008739A (en) 2019-01-17
JP6879841B2 (en) 2021-06-02
TWI695165B (en) 2020-06-01
KR102090568B1 (en) 2020-03-18
CN109146839B (en) 2022-03-08

Similar Documents

Publication Publication Date Title
JP6924413B2 (en) Data generator, data generation method and data generation program
Naimpally et al. Topology with applications: topological spaces via near and far
CN107292885A (en) A kind of product defects classifying identification method and device based on autocoder
JPH01222381A (en) Method and device for high speed flaw detection
CN110274908A (en) Flaw detection apparatus, defect detecting method and computer readable recording medium
CN107274419A (en) A kind of deep learning conspicuousness detection method based on global priori and local context
CN107886089A (en) A kind of method of the 3 D human body Attitude estimation returned based on skeleton drawing
CN107093172A (en) character detecting method and system
CN108961220A (en) A kind of image collaboration conspicuousness detection method based on multilayer convolution Fusion Features
CN109829391A (en) Conspicuousness object detection method based on concatenated convolutional network and confrontation study
Martinez-Valpuesta et al. A morphological and statistical analysis of ansae in barred galaxies
CN109978815A (en) Detection system, information processing unit, evaluation method and storage medium
CN101149837B (en) Color processing method for identification of areas within an image corresponding to monetary banknotes
CN108038839A (en) Twisted-pair feeder lay real-time detection method on a kind of flow production line
CN109146839A (en) Image processing method and defect detecting method
CN110458178A (en) The multi-modal RGB-D conspicuousness object detection method spliced more
CN114021704B (en) AI neural network model training method and related device
CN109902751A (en) A kind of dial digital character identifying method merging convolutional neural networks and half-word template matching
CN109297971B (en) Defect inspection system and defect inspection method
CN109800809A (en) A kind of candidate region extracting method decomposed based on dimension
CN111784667A (en) Crack identification method and device
JPH04112391A (en) Pattern discriminating method
CN110335670A (en) Image processing method and device for the classification of epiphysis grade
CN107748899A (en) A kind of target classification of the two dimensional image based on LSTM sentences knowledge method
CN115345848A (en) Quality inspection method of display screen based on big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant