CN106503724A - Classifier generating apparatus, defective/defect-free determining apparatus and method - Google Patents

Classifier generating apparatus, defective/defect-free determining apparatus and method

Info

Publication number
CN106503724A
CN106503724A CN201610792290.7A CN201610792290A
Authority
CN
China
Prior art keywords
characteristic quantity
image
defective
target object
outward appearance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610792290.7A
Other languages
Chinese (zh)
Inventor
奥田洋志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Publication of CN106503724A
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a classifier generating apparatus, a defective/defect-free determining apparatus, and a method. To determine whether the appearance of an inspection target object is defective or defect-free, the classifier generating apparatus extracts feature quantities from each of at least two images of a target object whose appearance is known to be defective or defect-free, the images being captured under at least two different imaging conditions. From among the comprehensively extracted feature quantities, the classifier generating apparatus selects feature quantities for determining whether the target object is defective or defect-free, and generates, based on the selected feature quantities, a classifier for making that determination. Whether the appearance of the target object is defective or defect-free is then determined based on the extracted feature quantities and the classifier.

Description

Classifier generating apparatus, defective/defect-free determining apparatus and method
Technical field
Aspects of the invention relate generally to a classifier generating apparatus and to a defective/defect-free determining apparatus, method, and program, and more particularly to determining, based on captured images of a target object, whether the object is defective or defect-free.
Background technology
Usually, products manufactured in a factory are inspected, and whether a product is defective or defect-free is determined based on its appearance. If it is known in advance how defects appear in defective products (that is, the intensity, size, and position of the defects), a method can be provided that detects defects of an inspection target object through image processing performed on captured images of the object. In many cases, however, defects occur in unpredictable ways, and their intensity, size, and position can vary widely. Conventionally, therefore, appearance inspection has been carried out visually, and automated appearance inspection has rarely been put to practical use.
A known inspection method uses a large number of feature quantities to automate inspection for such unpredictable defects. Specifically, images of many defect-free products and defective products are captured as learning samples. A large number of feature quantities, for example the mean, variance, maximum, and contrast of pixel values, are extracted from these images, and a classifier for separating defect-free products from defective products is created in the multidimensional feature space. The classifier is then used to determine whether an actual inspection target object is a defect-free product or a defective product.
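As a concrete illustration of the kind of feature pool described above, the following sketch computes a few of the named pixel-value statistics for an image patch. The function name and the particular contrast definition (Michelson contrast) are illustrative assumptions; the patent does not specify exact formulas.

```python
import numpy as np

def candidate_features(image: np.ndarray) -> dict:
    """Compute a few simple statistics of the kind a large feature
    pool might contain (names and the exact set are illustrative)."""
    img = image.astype(np.float64)
    return {
        "mean": float(img.mean()),
        "variance": float(img.var()),
        "maximum": float(img.max()),
        # Michelson contrast is one common definition; the patent
        # does not say which contrast measure is used.
        "contrast": float((img.max() - img.min())
                          / (img.max() + img.min() + 1e-12)),
    }

patch = np.array([[10, 20], [30, 40]], dtype=np.uint8)
feats = candidate_features(patch)
```

In practice many such statistics would be computed per image and per pyramid level, yielding the "large number of feature quantities" the method relies on.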
If the number of feature quantities increases relative to the number of learning samples, the classifier fits the learning samples of defect-free and defective products excessively during learning (that is, it overfits), and problems such as generalization error increase for inspection target objects. If the number of feature quantities increases, redundant feature quantities may be included, and the processing time required for learning may increase. It is therefore desirable to adopt a method that reduces generalization error and accelerates arithmetic processing by selecting appropriate feature quantities from among the large set. According to the technique discussed in Japanese Patent Application Laid-Open No. 2005-309878, a plurality of feature quantities are extracted from a reference image, and feature quantities for evaluating an inspection image are selected from among them. Whether the inspection target object is defect-free or defective is then determined from the inspection image based on the selected feature quantities.
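The selection step motivated above can be sketched with a simple filter criterion. The Fisher-score ranking below is a generic stand-in chosen for illustration; neither the patent nor the cited publication prescribes this particular criterion.

```python
import numpy as np

def fisher_scores(X, y):
    """Rank features by a simple Fisher criterion:
    (difference of class means)^2 / (sum of class variances).
    A filter-style stand-in for the unspecified selection step."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

def select_top_k(X, y, k):
    """Return the indices of the k highest-scoring features."""
    return np.argsort(fisher_scores(X, y))[::-1][:k]

rng = np.random.default_rng(0)
y = np.array([0] * 50 + [1] * 50)          # 0 = defect-free, 1 = defective
X = rng.normal(size=(100, 5))              # five candidate features
X[y == 1, 2] += 3.0                        # make feature 2 discriminative
best = select_top_k(X, y, 1)
```

Discarding low-scoring (redundant) features in this way is what limits both the overfitting and the learning time mentioned above.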
One method for inspecting and classifying defects with higher sensitivity inspects the target object using images of it captured under a plurality of imaging conditions. According to the technique discussed in Japanese Patent Application Laid-Open No. 2014-149177, images are acquired under a plurality of imaging conditions, and partial images including defect candidates are extracted under those conditions. Feature quantities of the defect candidates in the partial images are then extracted, so that true defects are distinguished from the candidates based on the feature quantities of the defect candidates at the same coordinates under the different imaging conditions.
Generally, imaging conditions (for example, lighting conditions) and defect types are interrelated, so that different defects become visible under different imaging conditions. Hence, to determine with high accuracy whether an inspection target object is defective or defect-free, the object is imaged under a plurality of imaging conditions so that defects are visualized more clearly. In the technique described in Japanese Patent Application Laid-Open No. 2005-309878, however, images are not captured under a plurality of imaging conditions, so it is difficult to make the determination with high accuracy. In the technique described in Japanese Patent Application Laid-Open No. 2014-149177, although images are captured under a plurality of imaging conditions, feature quantities for separating defect-free products from defective products are not selected. If the techniques of the two publications were combined, inspection would be performed by capturing images under a plurality of imaging conditions, and would thus be executed as many times as there are imaging conditions, increasing the inspection time. Because different defects are visualized under different imaging conditions, a learning object image would have to be selected for each imaging condition. In addition, if selecting learning object images is difficult owing to how the defects are visualized, redundant feature quantities may be selected when feature quantities are to be selected. This would not only increase the inspection time but also degrade the performance of separating defective products from defect-free products.
Content of the invention
According to an aspect of the present invention, a classifier generating apparatus includes: a learning extraction unit configured to extract a plurality of image feature quantities from each of at least two images of a target object whose appearance is known to be defective or defect-free, the images being captured under at least two different imaging conditions; a selection unit configured to select, from among the extracted feature quantities, feature quantities for determining whether the target object is defective or defect-free; and a generating unit configured to generate, based on the selected feature quantities, a classifier for determining whether the target object is defective or defect-free.
A defective/defect-free determining apparatus includes: a learning extraction unit configured to extract feature quantities from each of at least two images of a target object whose appearance is known to be defective or defect-free, the images being captured under at least two different imaging conditions; a selection unit configured to select, from among the extracted feature quantities, feature quantities for determining whether the target object is defective or defect-free; a generating unit configured to generate, based on the selected feature quantities, a classifier for determining whether the target object is defective or defect-free; an inspection extraction unit configured to extract feature quantities from each of at least two images of a target object whose appearance is not yet known to be defective or defect-free, the images being captured under the at least two different imaging conditions; and a determining unit configured to determine whether the appearance of the target object is defective or defect-free by comparing the extracted feature quantities with the generated classifier.
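Under loose assumptions, the claimed arrangement of units (feature extraction from images captured under several imaging conditions, followed by classification) can be sketched end to end. Both the per-image feature (the image mean) and the nearest-centroid classifier are illustrative placeholders, not the patent's choices.

```python
import numpy as np

class DefectDetector:
    """Minimal sketch of the claimed pipeline: extract one feature per
    imaging condition, learn from labeled samples, then classify."""

    def extract(self, images):
        # One feature quantity per imaging condition: the image mean.
        return np.array([img.mean() for img in images])

    def fit(self, samples, labels):
        # samples: list of image lists (one image per imaging condition).
        feats = np.array([self.extract(imgs) for imgs in samples])
        labels = np.array(labels)
        self.centroids = {c: feats[labels == c].mean(axis=0)
                          for c in np.unique(labels)}
        return self

    def predict(self, images):
        # Assign the label of the nearest class centroid.
        f = self.extract(images)
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(f - self.centroids[c]))

good = [np.zeros((4, 4)), np.zeros((4, 4))]   # two imaging conditions
bad = [np.ones((4, 4)), np.ones((4, 4))]
det = DefectDetector().fit([good, bad], ["ok", "ng"])
```

The point of the sketch is the data flow, one feature vector per object spanning all imaging conditions, rather than the specific feature or classifier used.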
Further features of aspects of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Description of the drawings
Fig. 1 is a block diagram illustrating a hardware configuration for realizing a defective/defect-free determining apparatus.
Fig. 2 is a block diagram illustrating the functional structure of the defective/defect-free determining apparatus.
Fig. 3A is a flowchart illustrating processing executed by the defective/defect-free determining apparatus during learning.
Fig. 3B is a flowchart illustrating processing executed by the defective/defect-free determining apparatus during inspection.
Figs. 4A and 4B are diagrams illustrating a first example of the relation between an imaging apparatus and a target object.
Fig. 5 is a diagram illustrating examples of lighting conditions.
Fig. 6 is a diagram illustrating images of defective portions captured under the respective lighting conditions.
Fig. 7 is a diagram illustrating the structure of learning object images.
Fig. 8 is a diagram illustrating a method of creating pyramid-level images.
Fig. 9 is a diagram illustrating the pixel numbering used to describe the wavelet transform.
Fig. 10 is a diagram illustrating a method of computing a feature quantity that emphasizes scratch defects.
Fig. 11 is a diagram illustrating a method of computing a feature quantity that emphasizes unevenness defects.
Fig. 12 is a table illustrating a list of feature quantities.
Fig. 13 is a table illustrating a list of combined feature quantities.
Figs. 14A and 14B are diagrams illustrating operation flows with and without combined feature quantities.
Figs. 15A and 15B are diagrams illustrating a second example of the relation between the imaging apparatus and the target object.
Fig. 16 is a diagram illustrating, in three dimensions, the relation between the imaging apparatus and the target object illustrated in Figs. 15A and 15B.
Figs. 17A and 17B are diagrams illustrating a third example of the relation between the imaging apparatus and the target object.
Figs. 18A and 18B are diagrams illustrating a fourth example of the relation between the imaging apparatus and the target object.
Fig. 19 is a diagram illustrating a fifth example of the relation between the imaging apparatus and the target object.
Fig. 20 is a diagram illustrating a sixth example of the relation between the imaging apparatus and the target object.
Specific embodiment
Hereinafter, exemplary embodiments are described with reference to the drawings. In each of the following exemplary embodiments, learning and inspection are performed using image data of a target object captured under at least two different imaging conditions. The imaging conditions include, for example, at least one of a condition relating to the imaging apparatus, a condition relating to the surroundings of the imaging apparatus during image capture, and a condition relating to the target object. In the first exemplary embodiment, images of the target object captured under at least two different lighting conditions are used as a first example of the imaging conditions. In the second exemplary embodiment, images of the target object captured by at least two different imaging units are used as a second example. In the third exemplary embodiment, at least two different regions of the target object within the same image are used as a third example. In the fourth exemplary embodiment, images of at least two different portions of the same target object are used as a fourth example.
First, the first exemplary embodiment is described.
In the present exemplary embodiment, examples of the hardware configuration and functional structure of the defective/defect-free determining apparatus are described first. Next, the flowcharts (steps) of the learning and inspection processing are described. Finally, the effects of the exemplary embodiment are described.
<Hardware configuration and functional structure>
Fig. 1 illustrates an example of a hardware configuration for realizing the defective/defect-free determining apparatus according to the present exemplary embodiment. In Fig. 1, a central processing unit (CPU) 110 generally controls each connected device via a bus 100. The CPU 110 reads out and executes processing steps and programs stored in a read-only memory (ROM) 120. The various processing programs and device drivers according to the present exemplary embodiment, including the operating system (OS), are stored in the ROM 120, temporarily stored in a random access memory (RAM) 130, and executed by the CPU 110 as appropriate. An input interface (I/F) 140 receives input signals from an external apparatus, such as an imaging apparatus, in a form processable by the defective/defect-free determining apparatus. An output I/F 150 outputs signals in a form processable by an external apparatus such as a display apparatus.
Fig. 2 is a block diagram illustrating an example of the functional structure of the defective/defect-free determining apparatus according to the present exemplary embodiment. In Fig. 2, a defective/defect-free determining apparatus 200 according to the present exemplary embodiment includes an image acquisition unit 201, an image composition unit 202, a comprehensive feature extraction unit 203, a feature combination unit 204, a feature selection unit 205, a classifier generating unit 206, a selected-feature storage unit 207, and a classifier storage unit 208. The defective/defect-free determining apparatus 200 also includes a selected-feature extraction unit 209, a determining unit 210, and an output unit 211. The defective/defect-free determining apparatus 200 is connected to an imaging apparatus 220 and a display apparatus 230. The defective/defect-free determining apparatus 200 creates a classifier by executing machine learning on inspection target objects known to be defective or defect-free products, and uses the created classifier to determine whether the appearance of an inspection target object not yet known to be defective or defect-free is defective or defect-free. In Fig. 2, the order of operations during learning is indicated by solid arrows, and the order of operations during inspection by dotted arrows.
The image acquisition unit 201 acquires images from the imaging apparatus 220. In the present exemplary embodiment, the imaging apparatus 220 captures images of a single target object under at least two or more lighting conditions. The capturing operation is described in detail below. The user attaches a defective-product or defect-free-product label in advance to each target object captured by the imaging apparatus 220 during learning. During inspection, it is generally unknown whether the object captured by the imaging apparatus 220 is defective or defect-free. In the present exemplary embodiment, the defective/defect-free determining apparatus 200 is connected to the imaging apparatus 220 and acquires captured images of the target object from the imaging apparatus 220. The exemplary embodiment, however, is not limited to this. For example, previously captured target object images may be stored on a storage medium, so that the captured target object images can be read out and acquired from the storage medium.
The image composition unit 202 receives, from the image acquisition unit 201, target object images captured under at least two mutually different lighting conditions, and creates a composite image by combining these target object images. Herein, a captured image or composite image obtained during learning is referred to as a learning object image, and a captured image or composite image obtained during inspection is referred to as an inspection image. The image composition unit 202 is described in detail below.
The comprehensive feature extraction unit 203 executes learning extraction processing. Specifically, the comprehensive feature extraction unit 203 comprehensively extracts feature quantities, including statistics of the images, from each of at least two or more images among the learning object images acquired by the image acquisition unit 201 and the learning object images created by the image composition unit 202. The comprehensive feature extraction unit 203 is described in detail below. Among the learning object images acquired by the image acquisition unit 201 and those created by the image composition unit 202, only the learning object images acquired by the image acquisition unit 201 may be designated as the objects of feature extraction. Alternatively, only the learning object images created by the image composition unit 202 may be designated as the objects of feature extraction. Further, both the learning object images acquired by the image acquisition unit 201 and those created by the image composition unit 202 may be designated as the objects of feature extraction.
The feature combination unit 204 combines the feature quantities of the respective images extracted by the comprehensive feature extraction unit 203 into a single feature quantity. The feature combination unit 204 is described in detail below.
The feature selection unit 205 selects, from the feature quantities combined by the feature combination unit 204, feature quantities for separating defect-free products from defective products. The types of the feature quantities selected by the feature selection unit 205 are stored in the selected-feature storage unit 207. The feature selection unit 205 is described in detail below. The classifier generating unit 206 creates a classifier for classifying defect-free products and defective products using the feature quantities selected by the feature selection unit 205. The classifier generated by the classifier generating unit 206 is stored in the classifier storage unit 208. The classifier generating unit 206 is described in detail below.
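As one possible instantiation of such a classifier generating step, the sketch below fits a logistic-regression separator on already-selected feature columns. The patent does not name a classifier type, so this linear model is only an assumed example.

```python
import numpy as np

def train_linear_classifier(X, y, lr=0.1, epochs=500):
    """Fit a logistic-regression separator on the selected feature
    columns by plain gradient descent."""
    X = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid probabilities
        w -= lr * X.T @ (p - y) / len(X)       # gradient step
    return w

def predict(w, X):
    X = np.hstack([X, np.ones((len(X), 1))])
    return (X @ w > 0).astype(int)             # 1 = defective

# Toy selected feature: defect-free samples near 0, defective near 1.
X = np.array([[0.0], [0.1], [0.9], [1.0]])
y = np.array([0, 0, 1, 1])
w = train_linear_classifier(X, y)
```

Storing `w` would correspond to placing the generated classifier in the classifier storage unit for later use during inspection.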
The selected-feature extraction unit 209 executes inspection extraction processing. Specifically, from the inspection image acquired by the image acquisition unit 201 or the inspection image created by the image composition unit 202, the selected-feature extraction unit 209 extracts feature quantities of the types stored in the selected-feature storage unit 207, that is, the feature quantities selected by the feature selection unit 205. The selected-feature extraction unit 209 is described in detail below.
The determining unit 210 determines whether the appearance of the target object is defective or defect-free based on the feature quantities extracted by the selected-feature extraction unit 209 and the classifier stored in the classifier storage unit 208.
The output unit 211 transmits the determination result indicating the defective or defect-free appearance of the target object, via an interface (not illustrated), to the display apparatus 230 in a form displayable by the external display apparatus 230. The output unit 211 also transmits to the display apparatus 230 the inspection image used to determine whether the appearance of the target object is defective or defect-free, together with the determination result indicating the defective or defect-free appearance of the target object.
The display apparatus 230 displays the determination result, output by the output unit 211, indicating the defective or defect-free appearance of the target object. For example, the determination result indicating the defective or defect-free appearance of the target object can be displayed as text such as "defect-free" or "defective". The display mode of the determination result, however, is not limited to text. For example, "defect-free" and "defective" can be distinguished by color. In addition to, or instead of, such a display mode, "defect-free" and "defective" can be output by sound. A liquid crystal display or a cathode ray tube (CRT) display is an example of the display apparatus 230. The CPU 110 in Fig. 1 performs display control of the display apparatus 230.
<Flow chart>
Figs. 3A and 3B are flowcharts according to the present exemplary embodiment. Specifically, Fig. 3A is a flowchart illustrating an example of the processing executed by the defective/defect-free determining apparatus 200 during learning, and Fig. 3B is a flowchart illustrating an example of the processing executed by the defective/defect-free determining apparatus 200 during inspection. Hereinafter, examples of the processing executed by the defective/defect-free determining apparatus 200 are described with reference to the flowcharts in Figs. 3A and 3B. As illustrated in Figs. 3A and 3B, the processing executed by the defective/defect-free determining apparatus 200 according to the present exemplary embodiment consists substantially of two steps: a learning step S1 and an inspection step S2. Each of steps S1 and S2 is described in detail below.
<Step S101>
First, the learning step S1 illustrated in Fig. 3A is described. In step S101, the image acquisition unit 201 acquires, from the imaging apparatus 220, learning object images captured under a plurality of lighting conditions. Fig. 4A is a diagram illustrating an example top view of the imaging apparatus 220, and Fig. 4B is a diagram illustrating an example cross-sectional view of the imaging apparatus 220 (indicated by the dotted line in Fig. 4B) and a target object 450. Fig. 4B is a cross-sectional view taken along line I-I' in Fig. 4A.
As illustrated in Fig. 4B, the imaging apparatus 220 includes a camera 440. The optical axis of the camera 440 is arranged perpendicular to the plate surface of the target object 450. The imaging apparatus 220 further includes illuminations 410a to 410h, 420a to 420h, and 430a to 430h, arranged at eight azimuths in the circumferential direction and at different positions (heights) in the latitude direction. As described above, in the present exemplary embodiment, it is assumed that the imaging apparatus 220 captures images of the single target object 450 under at least two or more imaging conditions. For example, at least one of the following can be changed: which of the illuminations 410a to 410h, 420a to 420h, and 430a to 430h are used (that is, the illumination direction), the light quantity of the illuminations 410a to 410h, 420a to 420h, and 430a to 430h, and the exposure time of the image sensor in the camera 440. With this structure, images are captured under a plurality of lighting conditions. Examples of the lighting conditions are described below. An industrial camera capable of capturing monochrome or color images is used as the camera 440. In step S101, to obtain learning object images, the exterior of a product (the target object 450) known in advance to be a defect-free product or a defective product is imaged, and the images are acquired. The user notifies the defective/defect-free determining apparatus 200 in advance of whether the target object 450 is a defect-free product or a defective product. The target object 450 is formed of a single material.
<Step S102>
In step S102, the image acquisition unit 201 determines whether images have been acquired under all the lighting conditions set in advance in the defective/defect-free determining apparatus 200. If, as a result of the determination, images have not yet been acquired under all the lighting conditions (NO in step S102), the processing returns to step S101 and an image is captured again. Fig. 5 is a diagram illustrating examples of lighting conditions according to the present exemplary embodiment. As illustrated in Fig. 5, in the present exemplary embodiment, a description is given of an example in which the lighting condition is changed by changing which of the illuminations 410a to 410h, 420a to 420h, and 430a to 430h are used. In Fig. 5, the top view of the imaging apparatus 220 in Fig. 4A is illustrated in a simplified manner, with the illuminations that can be lit represented by filled rectangles. In the present exemplary embodiment, seven types of lighting conditions are provided.
Images are captured under a plurality of lighting conditions because defects such as scratches, dents, and coating unevenness are emphasized depending on the lighting condition. For example, scratch defects are emphasized in the images captured under lighting conditions 1 to 4, and unevenness defects are emphasized in the images captured under lighting conditions 5 to 7. Fig. 6 is a diagram illustrating examples of images of defective portions captured under the respective lighting conditions according to the present exemplary embodiment. In the images captured under lighting conditions 1 to 4, scratch defects extending in the direction perpendicular to the direction connecting the two lit illuminations are likely to be emphasized. This is because the illumination light, emitted from a low-latitude position perpendicular to the direction of the scratch defect, causes the reflectance to change significantly in the portion with the scratch defect. In Fig. 6, the scratch defect is visualized most in the image captured under lighting condition 3. On the other hand, unevenness defects are more likely to be emphasized in the images captured under lighting conditions 5 to 7. Because illumination is applied uniformly in the circumferential direction under lighting conditions 5 to 7, unevenness defects can be emphasized without illumination unevenness occurring. In Fig. 6, the unevenness defect is visualized most in the image captured under lighting condition 7. Under which of lighting conditions 5 to 7 an unevenness defect is emphasized most depends on the cause and type of the unevenness defect. When images have been captured under all seven lighting conditions, the processing proceeds to step S103. In the present exemplary embodiment, the lighting condition is changed by changing which of the illuminations 410a to 410h, 420a to 420h, and 430a to 430h are used. The lighting conditions, however, are not limited to the selection of the illuminations 410a to 410h, 420a to 420h, and 430a to 430h. As described above, for example, the lighting condition can also be changed by changing the light quantity of the illuminations 410a to 410h, 420a to 420h, and 430a to 430h or the exposure time of the camera 440.
<Step S103>
In step S103, the image acquisition unit 201 determines whether the number of target object images required for learning has been acquired. If, as a result of the determination, the required number of target object images has not yet been acquired (NO in step S103), the processing returns to step S101 and images are captured again. In the present exemplary embodiment, about 150 defect-free product images and 50 defective product images are acquired under each lighting condition as learning object images. Therefore, when the processing in step S103 is completed, 150 × 7 defect-free product images and 50 × 7 defective product images have been acquired as learning object images. When images of the above numbers have been acquired, the processing proceeds to step S104. The following processing in steps S104 to S107 is executed for each of the 200 target objects.
<Step S104>
In step S104, among the seven images captured under lighting conditions 1 to 7 for the same target object, the image composition unit 202 combines the images captured under lighting conditions 1 to 4. As described above, in the present exemplary embodiment, the image composition unit 202 combines the images captured under lighting conditions 1 to 4 and outputs the composite image as a learning object image, while the images captured under lighting conditions 5 to 7 are output directly as learning object images without composition. As described above, because the illumination direction under lighting conditions 1 to 4 depends on the azimuth, the direction in which scratch defects are emphasized may differ among lighting conditions 1 to 4. Therefore, by taking the sum of the pixel values at mutually corresponding positions in the images captured under lighting conditions 1 to 4 to generate a composite image, a composite image in which scratch defects at all angles are emphasized can be generated. Here, for simplicity, a method of creating the composite image by taking the sum of the images captured under lighting conditions 1 to 4 is described as an example, but the method is not limited to this. For example, a composite image in which defects are emphasized further can be generated through image processing using the four arithmetic operations. For example, in addition to or instead of the operation on the pixel values of the images captured under lighting conditions 1 to 4, a composite image can be generated by an operation using statistics of the images captured under lighting conditions 1 to 4 and statistics between a plurality of the images captured under lighting conditions 1 to 4.
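The pixel-wise summation described in this step might look as follows. Accumulating in float64 is an added safeguard against uint8 overflow (the patent does not discuss value ranges), and any rescaling back to a display range is omitted.

```python
import numpy as np

def composite_sum(images):
    """Combine images captured under different lighting conditions by
    summing pixel values at mutually corresponding positions."""
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.sum(axis=0)

# Four images standing in for lighting conditions 1-4.
a = np.full((2, 2), 100, dtype=np.uint8)
combined = composite_sum([a, a, a, a])
```

Because each of the four input images emphasizes scratches perpendicular to a different illumination azimuth, the summed image carries scratch responses at all angles, which is the stated purpose of the composition.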
Fig. 7 is a diagram illustrating a structural example of the learning target images. In Fig. 7, learning target image 1 is the composite of the images captured under lighting conditions 1 to 4, and learning target images 2 to 4 are the images captured under lighting conditions 5 to 7 as they are. As described above, in the present exemplary embodiment, a total of four learning target images 1 to 4 are created for the same target object.
<Step S105>
In step S105, the comprehensive feature extraction unit 203 comprehensively extracts feature amounts from the learning target images of one target object. The comprehensive feature extraction unit 203 creates pyramid layer images having different frequencies from the learning target images of the one target object, and extracts feature amounts by executing statistical operations and filtering processing on each of the pyramid layer images.
First, an example of the method of creating the pyramid layer images will be described in detail. In the present exemplary embodiment, the pyramid layer images are created by a wavelet transform (that is, a frequency transform). Fig. 8 is a diagram illustrating an example of the method of creating the pyramid layer images according to the present exemplary embodiment. First, using a learning target image obtained in step S104 as an original image 801, the comprehensive feature extraction unit 203 creates four images from the original image 801: a low-frequency image 802, a vertical frequency image 803, a horizontal frequency image 804, and a diagonal frequency image 805. These four images 802, 803, 804, and 805 are all reduced to one quarter of the size of the original image 801. Fig. 9 is a diagram illustrating the pixel numbering used to describe the wavelet transform. As shown in Fig. 9, the top-left, top-right, bottom-left, and bottom-right pixels are referred to as "a", "b", "c", and "d", respectively. In this case, the pixel value conversions expressed by the following formulas 1, 2, 3, and 4 are executed on the original image 801 to create the low-frequency image 802, the vertical frequency image 803, the horizontal frequency image 804, and the diagonal frequency image 805, respectively.
(a+b+c+d)/4...(1)
(a+b-c-d)/4...(2)
(a-b+c-d)/4...(3)
(a-b-c+d)/4...(4)
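A minimal sketch of the decomposition of formulas (1) to (4), assuming an even-sized single-channel image (NumPy usage is illustrative, not from the patent):

```python
import numpy as np

def wavelet_decompose(img):
    """One level of the 2x2 decomposition of formulas (1)-(4): every
    non-overlapping 2x2 block [[a, b], [c, d]] of the input yields one
    pixel in each of the four quarter-size output images."""
    a = img[0::2, 0::2].astype(float)  # top-left pixel of each block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    low        = (a + b + c + d) / 4   # (1) low-frequency image 802
    vertical   = (a + b - c - d) / 4   # (2) vertical frequency image 803
    horizontal = (a - b + c - d) / 4   # (3) horizontal frequency image 804
    diagonal   = (a - b - c + d) / 4   # (4) diagonal frequency image 805
    return low, vertical, horizontal, diagonal
```

Each output is one quarter of the input area, matching the size reduction described for images 802 to 805.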
Further, from the three images thus created, namely the vertical frequency image 803, the horizontal frequency image 804, and the diagonal frequency image 805, the comprehensive feature extraction unit 203 creates the following four images: a vertical frequency absolute value image 806, a horizontal frequency absolute value image 807, a diagonal frequency absolute value image 808, and a vertical/horizontal/diagonal frequency square-sum image 809. The vertical frequency absolute value image 806, the horizontal frequency absolute value image 807, and the diagonal frequency absolute value image 808 are created by taking the absolute values of the vertical frequency image 803, the horizontal frequency image 804, and the diagonal frequency image 805, respectively. The vertical/horizontal/diagonal frequency square-sum image 809 is created by calculating the sum of the squares of the vertical frequency image 803, the horizontal frequency image 804, and the diagonal frequency image 805. In other words, the comprehensive feature extraction unit 203 obtains the square values at respective positions (pixels) of the vertical frequency image 803, the horizontal frequency image 804, and the diagonal frequency image 805, and creates the vertical/horizontal/diagonal frequency square-sum image 809 by adding together the square values at mutually corresponding positions of these three images.
In Fig. 8, the eight images obtained from the original image 801, namely the low-frequency image 802 through the vertical/horizontal/diagonal frequency square-sum image 809, are referred to as the image group of the first layer.
Subsequently, the comprehensive feature extraction unit 203 executes, on the low-frequency image 802, the same image conversion as the conversion used to create the image group of the first layer, to create the above-described eight images as the image group of the second layer. Further, the comprehensive feature extraction unit 203 executes the same processing on the low-frequency image of the second layer to create the above-described eight images as the image group of the third layer. The processing of creating the eight images (that is, the image group of each layer) from the low-frequency image of the respective layer is repeated until the size of the low-frequency image becomes equal to or smaller than a certain value. This repeated processing is illustrated inside the dashed-line portion 810 in Fig. 8. By the above repetition, eight images are created at each layer. For example, repeating the above processing up to the tenth layer creates 81 images from a single image (1 original image + 10 layers × 8 images). The method of creating the pyramid layer images has been described above. In the present exemplary embodiment, the method of creating the pyramid layer images (images having frequencies different from the frequency of the original image 801) using a wavelet transform has been described as an example. However, the method of creating the pyramid layer images (images having frequencies different from the frequency of the original image 801) is not limited to the method using a wavelet transform. For example, the pyramid layer images may be created by executing a Fourier transform on the original image 801.
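The recursion above can be sketched as follows, assuming a power-of-two image size; the stopping size and function names are illustrative:

```python
import numpy as np

def decompose(img):
    """The 2x2 decomposition of formulas (1)-(4)."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a + b - c - d) / 4,
            (a - b + c - d) / 4, (a - b - c + d) / 4)

def build_pyramid(original, min_size=1):
    """Repeat the decomposition on each layer's low-frequency image,
    collecting the eight images of Fig. 8 per layer (images 802-809)."""
    layers = []
    low = original.astype(float)
    while low.shape[0] // 2 >= min_size and low.shape[1] // 2 >= min_size:
        low, v, h, d = decompose(low)
        layers.append([low, v, h, d,                     # 802-805
                       np.abs(v), np.abs(h), np.abs(d),  # 806-808
                       v**2 + h**2 + d**2])              # 809 square-sum
    return layers

layers = build_pyramid(np.zeros((1024, 1024)))
total = 1 + sum(len(group) for group in layers)  # original + 8 per layer
```

For a 1024 × 1024 input this produces ten layers, i.e. the 81 images (1 + 10 × 8) mentioned above.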
Next, the method of extracting feature amounts by executing statistical operations and filtering operations on each pyramid layer image will be described in detail.
First, the statistical operations will be described. The comprehensive feature extraction unit 203 calculates the mean, variance, kurtosis, skewness, maximum, and minimum of each pyramid layer image, and assigns these values as feature amounts. Statistics other than the above values may also be assigned as feature amounts.
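A sketch of the six statistics per pyramid layer image (the non-excess kurtosis convention is an assumption; the patent does not specify one):

```python
import numpy as np

def statistical_features(img):
    """Mean, variance, kurtosis, skewness, maximum, and minimum of one
    pyramid-layer image, assigned as feature amounts."""
    x = img.ravel().astype(float)
    mean, std = x.mean(), x.std()
    z = (x - mean) / std if std > 0 else np.zeros_like(x)
    return {
        "mean": mean,
        "variance": x.var(),
        "kurtosis": np.mean(z ** 4),  # non-excess (normal distribution -> 3)
        "skewness": np.mean(z ** 3),
        "max": x.max(),
        "min": x.min(),
    }

feats = statistical_features(np.array([[1.0, 2.0], [3.0, 4.0]]))
```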
Next, the feature amounts extracted by filtering processing will be described. Here, results calculated by two filtering processes, one for emphasizing scratch defects and one for emphasizing uneven defects, are assigned as feature amounts. Each process will be described in turn below.
First, the feature amount that emphasizes scratch defects will be described. In many cases, a scratch defect arises when the target object is scraped by some edge during production, and a scratch defect often has a linear shape extending in one direction. Fig. 10 is a schematic diagram illustrating an example of the method of calculating the feature amount that emphasizes scratch defects according to the present exemplary embodiment. In Fig. 10, the solid-line rectangular frame 1001 represents one of the pyramid layer images. For the rectangular frame (pyramid layer image) 1001, the comprehensive feature extraction unit 203 executes a convolution operation using a rectangular region 1002 (the dashed-line rectangular frame in Fig. 10) and a rectangular region 1003 having a linear shape elongated in one direction (the chain-dotted-line rectangular frame in Fig. 10). By this convolution operation, the feature amount that emphasizes scratch defects is extracted.
In the present exemplary embodiment, the comprehensive feature extraction unit 203 scans the whole rectangular frame (pyramid layer image) 1001 (see the arrows in Fig. 10). Then, the comprehensive feature extraction unit 203 calculates the ratio between the mean of the pixels in the rectangular region 1002 excluding the linear rectangular region 1003 and the mean of the pixels in the linear rectangular region 1003. The maximum and minimum of this ratio are then assigned as feature amounts. Because the rectangular region 1003 has a linear shape, a feature amount that further emphasizes scratch defects can be extracted. In Fig. 10, the rectangular frame (pyramid layer image) 1001 and the linear rectangular region 1003 are parallel to each other; however, linear defects can occur in all directions over 360 degrees. Therefore, for example, the comprehensive feature extraction unit 203 rotates the rectangular frame (pyramid layer image) 1001 in steps of 15 degrees over 24 directions and calculates the respective feature amounts. Further, feature amounts are provided for various filter sizes.
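A minimal sketch of the scanned ratio-of-means scratch filter, for one direction and one filter size (the region sizes are illustrative assumptions; the patent varies both size and direction):

```python
import numpy as np

def scratch_feature(img, outer=(9, 9), inner=(1, 7)):
    """Scan a window over the image; at each position take the ratio of the
    mean inside a line-shaped inner rectangle (region 1003) to the mean of
    the surrounding outer rectangle excluding it (region 1002). The maximum
    and minimum of the ratio become feature amounts."""
    oh, ow = outer
    ih, iw = inner
    y0, x0 = (oh - ih) // 2, (ow - iw) // 2
    mask = np.ones((oh, ow), dtype=bool)   # True = surround, False = line
    mask[y0:y0 + ih, x0:x0 + iw] = False
    ratios = []
    for y in range(img.shape[0] - oh + 1):
        for x in range(img.shape[1] - ow + 1):
            patch = img[y:y + oh, x:x + ow].astype(float)
            ratios.append(patch[~mask].mean() / patch[mask].mean())
    return max(ratios), min(ratios)

img = np.ones((13, 13))
img[6, :] = 10.0  # bright horizontal scratch
mx, mn = scratch_feature(img)
```

When the line-shaped inner region aligns with the scratch, the ratio is large; when the scratch falls only in the surround, the ratio drops below 1, which is why both the maximum and minimum are informative.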
Second, the feature amount that emphasizes uneven defects will be described. An uneven defect arises from uneven coating or uneven resin molding, and tends to occur over a wide area. Fig. 11 is a schematic diagram illustrating an example of the method of calculating the feature amount that emphasizes uneven defects according to the present exemplary embodiment. A rectangular region 1101 (the solid-line rectangular frame in Fig. 11) represents one of the pyramid layer images. For the rectangular region (pyramid layer image) 1101, the comprehensive feature extraction unit 203 executes a convolution operation using a rectangular region 1102 (the dashed-line rectangular frame in Fig. 11) and a rectangular region 1103 (the chain-dotted-line rectangular frame in Fig. 11). By this convolution operation, the feature amount that emphasizes uneven defects is extracted. Here, the rectangular region 1103 (the chain-dotted-line rectangular frame in Fig. 11) is the region within the rectangular region 1102 that contains the uneven defect.
In the present exemplary embodiment, the comprehensive feature extraction unit 203 scans the whole rectangular region 1101 (see the arrows in Fig. 11) to calculate the ratio between the mean of the pixels in the rectangular region 1102 excluding the rectangular region 1103 and the mean of the pixels in the rectangular region 1103. Then, the comprehensive feature extraction unit 203 assigns the maximum and minimum of this ratio as feature amounts. Because the rectangular region 1103 is the region containing the uneven defect, a feature amount that further emphasizes uneven defects can be calculated. Further, as with the feature amount for scratch defects, feature amounts are provided for various filter sizes.
Here, the calculation method has been described using the ratio of means as an example. However, the feature amount is not limited to the ratio of means. For example, the ratio of variances or standard deviations may be used as a feature amount, and a difference may be used as a feature amount instead of a ratio. Further, in the present exemplary embodiment, the maximum and minimum are calculated after the scan is executed. However, the maximum and minimum do not always have to be calculated; other statistics, for example the mean or variance, may be calculated from the scan result.
Further, in the present exemplary embodiment, the feature amounts are extracted by creating pyramid layer images. However, the pyramid layer images do not always have to be created. For example, the feature amounts may be extracted only from the original image. Further, the types of feature amounts are not limited to those described in the present exemplary embodiment. For example, a feature amount may be calculated by executing at least any one of a statistical operation, a convolution operation, binarization processing, and differentiation on the pyramid layer images or the original image 801.
The comprehensive feature extraction unit 203 assigns numbers to the feature amounts derived as described above, and temporarily stores the feature amounts together with their numbers in a memory. Fig. 12 is a table illustrating a list of the feature amounts according to the present exemplary embodiment. Because there are a large number of types of feature amounts, most of them are shown in simplified form in the table of Fig. 12. For the sake of the subsequent processing, it is assumed that a total of "N" feature amounts are extracted from one learning target image, the operations being executed up to the feature amount for an uneven defect with filter size "Z" contained in pyramid layer image "Y" of the X-th layer. As described above, the comprehensive feature extraction unit 203 comprehensively extracts approximately 4000 feature amounts from a learning target image (N = 4000).
<Step S106>
In step S106, the comprehensive feature extraction unit 203 determines whether the extraction of feature amounts executed in step S105 has been completed for the four learning target images 1 to 4 created in step S104. If, as a result of the determination, feature amounts have not yet been extracted from all four learning target images 1 to 4 ("No" in step S106), the processing returns to step S105 to extract feature amounts again. Then, if the comprehensive feature amounts have been extracted from all four learning target images 1 to 4 ("Yes" in step S106), the processing proceeds to step S107.
<Step S107>
In step S107, the feature amount combining unit 204 combines the comprehensive feature amounts of all four learning target images 1 to 4 extracted by the processing in steps S105 and S106. Fig. 13 is a table illustrating a list of the combined feature amounts. Here, the feature amounts are numbered from 1 to 4N. In the present exemplary embodiment, all of the feature amounts 1 to 4N are combined by the feature amount combining processing executed in step S107. However, all of the feature amounts 1 to 4N do not always have to be combined. For example, in a case where a feature amount is known in advance to be substantially unnecessary, that feature amount need not be combined.
<Step S108>
In step S108, the feature amount combining unit 204 determines whether the feature amounts of the number of target objects required for learning have been combined. If, as a result of the determination, the feature amounts of the required number of target objects have not yet been combined ("No" in step S108), the processing returns to step S104, and the processing in steps S104 to S108 is repeated until the feature amounts of the number of target objects required for learning have been combined. As described in step S103, the feature amounts of 150 target objects are combined for non-defective products, and the feature amounts of 50 target objects are combined for defective products. When the feature amounts of the number of target objects required for learning have been combined ("Yes" in step S108), the processing proceeds to step S109.
<Step S109>
In step S109, from among the feature amounts combined by the processing up to step S108, the feature amount selection unit 205 selects and determines the feature amounts for separating non-defective products from defective products, that is, the types of feature amounts to be used for inspection. Specifically, the feature amount selection unit 205 creates a ranking of the types of feature amounts for separating non-defective products from defective products, and selects the feature amounts by determining how many feature amounts from the top of the ranking will be used (that is, the number of feature amounts to be used).
First, an example of the ranking creation method will be described. A number "j" (j = 1, 2, ..., 200) is assigned to each learning target object. Numbers 1 to 150 are assigned to the non-defective products, numbers 151 to 200 are assigned to the defective products, and the i-th feature amount (i = 1, 2, ..., 4N) after the feature amounts have been combined is expressed as "x_i,j". For each type of feature amount, the feature amount selection unit 205 calculates the mean "x_ave_i" and standard deviation "σ_ave_i" of the 150 non-defective products, and creates a probability density function f(x_i,j) of the feature amount "x_i,j" by assuming that the probability density function is a normal distribution. The probability density function f(x_i,j) can be expressed by the following formula 5.

f(x_i,j) = (1 / (√(2π) · σ_ave_i)) · exp(−(x_i,j − x_ave_i)² / (2σ_ave_i²)) ... (5)
Subsequently, the feature amount selection unit 205 calculates the product of the probability density function values f(x_i,j) over all of the defective products used in learning, and takes the obtained value as an assessment value g(i) for creating the ranking. Here, the assessment value g(i) can be expressed by the following formula 6.

g(i) = f(x_i,151) × f(x_i,152) × ... × f(x_i,200) ... (6)
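A sketch of this ranking step: fit the normal pdf of formula 5 to the non-defective samples of each feature type and accumulate the product of formula 6 over the defective samples. The log-domain computation is an implementation choice for numerical stability, not from the patent:

```python
import numpy as np

def rank_feature_types(good, bad):
    """Rank feature types by assessment value g(i). good: (n_good, n_feat)
    non-defective samples; bad: (n_bad, n_feat) defective samples.
    Smaller g(i) means the defective samples lie far from the non-defective
    distribution, i.e. the feature separates the classes better."""
    mu = good.mean(axis=0)      # x_ave_i over the non-defective products
    sigma = good.std(axis=0)    # sigma_ave_i
    z = (bad - mu) / sigma
    log_f = -0.5 * z**2 - np.log(sigma * np.sqrt(2.0 * np.pi))
    log_g = log_f.sum(axis=0)   # log of formula (6): product over defectives
    return np.argsort(log_g)    # ascending: most useful feature type first

rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, size=(150, 3))
bad = rng.normal(0.0, 1.0, size=(50, 3))
bad[:, 1] += 10.0               # only feature 1 separates the classes
order = rank_feature_types(good, bad)
```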
The smaller the assessment value g(i), the more useful the feature amount is for separating non-defective products from defective products. Therefore, the feature amount selection unit 205 scores and ranks the assessment values g(i) in ascending order from the minimum, to create the ranking of the types of feature amounts. When creating the ranking, a combination of feature amounts may be assessed instead of assessing each feature amount by itself. In the case where a combination of feature amounts is assessed, the assessment is executed by creating a probability density function whose number of dimensions equals the number of feature amounts to be combined. For example, for the combination of the i-th and k-th feature amounts (a two-dimensional feature amount), formulas 5 and 6 are expressed in two dimensions, so that the probability density function f(x_i,j, x_k,j) and the assessment value g(i, k) are expressed by formulas 7 and 8, respectively.
One feature amount "k" (the k-th feature amount) is fixed, and the feature amounts are ranked and scored in ascending order of the assessment value g(i, k). For example, for this feature amount "k", the feature amounts in the top ten of the ranking are scored as follows: the i-th feature amount having the smallest assessment value g(i, k) is given 10 points, the i'-th feature amount having the second smallest assessment value g(i', k) is given 9 points, and so on. By executing this scoring for every feature amount k, the ranking of the types of feature amounts is created with combinations of feature amounts taken into account.
Next, the feature amount selection unit 205 determines how many types of feature amounts from the top of the ranking will be used (that is, the number of feature amounts to be used). First, for all of the learning target objects, the feature amount selection unit 205 calculates a score with the number of feature amounts to be used as a parameter. Specifically, the number of feature amounts to be used is denoted by "p", the types of feature amounts sorted in ranking order are denoted by "m", and the score h(p, j) of the j-th target object is expressed by formula 9.
Based on the scores h(p, j), the feature amount selection unit 205 arranges all of the learning target objects in score order for each candidate number of feature amounts to be used. It is known whether each learning target object is a non-defective product or a defective product; when the target objects are arranged in score order, the non-defective products and defective products are therefore also arranged in that order. As many such data sets are obtained as there are candidates for the number "p" of feature amounts to be used. The feature amount selection unit 205 specifies, as an assessment value, the separation degree of the data corresponding to each candidate for the number "p" of feature amounts to be used (a value representing how accurately non-defective products and defective products can be separated), and determines the number "p" of feature amounts to be used from the data that obtains the highest assessment value. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve can be used as the separation degree of the data. Alternatively, the pass rate of non-defective products at the point where no defective product in the learning data is overlooked (the ratio of the number of passing non-defective products to the total number of target objects) may be used as the separation degree of the data. By the above method, the feature amount selection unit 205 selects approximately 50 to 100 types of feature amounts to be used from among the 4N types of combined feature amounts (that is, 16000 types of feature amounts when N = 4000). In the present exemplary embodiment, the number of feature amounts to be used is determined in this way, but a fixed value may be applied as the number of feature amounts to be used. The selected types of feature amounts are stored in the selected feature amount storage unit 207.
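The AUC-based choice of p can be sketched as follows. The `score_for_p` callback standing in for formula 9 is a hypothetical interface, not from the patent:

```python
def roc_auc(scores, labels):
    """Separation degree as ROC AUC via the Mann-Whitney identity: the
    probability that a defective object (label 1) scores higher than a
    non-defective one (label 0); ties count half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def choose_feature_count(score_for_p, labels, candidates):
    """Pick the candidate number of feature types p whose scores h(p, j)
    separate the classes best."""
    return max(candidates, key=lambda p: roc_auc(score_for_p(p), labels))

# Toy example: with p = 2 the scores separate the classes perfectly.
scores_by_p = {1: [0.1, 0.9, 0.2, 0.8], 2: [0.1, 0.2, 0.8, 0.9]}
labels = [0, 0, 1, 1]
best_p = choose_feature_count(lambda p: scores_by_p[p], labels, [1, 2])
```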
<Step S110>
In step S110, the classifier generation unit 206 creates a classifier. Specifically, for the score calculated by formula 9, the classifier generation unit 206 determines the threshold used to determine whether an inspection target object is a non-defective product or a defective product. Here, depending on whether overlooking some defective products is permissible, the user determines, according to the conditions of the production line, the threshold on the score for separating non-defective products from defective products. Then, the classifier storage unit 208 stores the generated classifier. The processing executed in learning step S1 has been described above.
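At inference time the stored classifier reduces to a single threshold comparison on the formula-9 score. Whether a higher score indicates a defect depends on how the score is defined, which the text does not fix, so the direction is kept as a parameter in this sketch:

```python
def classify(score, threshold, defective_above=True):
    """Threshold classifier of step S110: compare an object's formula-9
    score against the user-set threshold. The score direction is an
    assumption, hence the defective_above flag."""
    if defective_above:
        return "defective" if score > threshold else "non-defective"
    return "defective" if score < threshold else "non-defective"
```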
<Step S201>
Next, inspection step S2 shown in Fig. 3B will be described. In step S201, the image acquisition unit 201 obtains, from the imaging apparatus 220, inspection images captured under multiple imaging conditions. Unlike during learning, during inspection it is not known whether the target object is a non-defective product or a defective product.
<Step S202>
In step S202, the image acquisition unit 201 determines whether images have been obtained under all of the lighting conditions set in advance in the defective/non-defective determining apparatus 200. If, as a result of the determination, images have not yet been obtained under all of the lighting conditions ("No" in step S202), the processing returns to step S201 and images are captured again. In the present exemplary embodiment, when images have been obtained under the seven lighting conditions, the processing proceeds to step S203.
<Step S203>
In step S203, the image synthesis unit 202 creates a composite image using the seven images of the target object. As with the learning target images, in the present exemplary embodiment, the image synthesis unit 202 synthesizes the images captured under lighting conditions 1 to 4 and outputs the composite image, while the images captured under lighting conditions 5 to 7 are output directly without synthesis. Therefore, a total of four inspection images are created.
<Step S204>
In step S204, the selected feature extraction unit 209 receives, from the selected feature amount storage unit 207, the types of feature amounts selected by the feature amount selection unit 205, and calculates the values of the feature amounts from the inspection images based on those types. The method of calculating the value of each feature amount is similar to the method described in step S105.
<Step S205>
In step S205, the selected feature extraction unit 209 determines whether the extraction of feature amounts in step S204 has been completed for the four inspection images created in step S203. If, as a result of the determination, feature amounts have not yet been extracted from all four inspection images ("No" in step S205), the processing returns to step S204 to extract feature amounts again. Then, if feature amounts have been extracted from all four inspection images ("Yes" in step S205), the processing proceeds to step S206.
In the present exemplary embodiment, in the processing in steps S202 to S205, images are captured under all seven lighting conditions and the images captured under lighting conditions 1 to 4 are synthesized to create the four inspection images, as in the processing during learning. However, the exemplary embodiment is not limited to this. For example, depending on the feature amounts selected by the feature amount selection unit 205, if there is any unnecessary lighting condition or inspection image, that lighting condition or inspection image may be omitted.
<Step S206>
In step S206, the determining unit 210 calculates the score of the inspection target object by substituting the values of the feature amounts calculated by the processing up to step S205 into formula 9. Then, the determining unit 210 compares the score of the inspection target object with the threshold stored in the classifier storage unit 208, and determines based on the comparison result whether the inspection target object is a non-defective product or a defective product. The determining unit 210 then outputs information representing the determination result to the display device 230 via the output unit 211.
<Step S207>
In step S207, the determining unit 210 determines whether the inspection of all of the inspection target objects has been completed. If, as a result of the determination, the inspection of all of the inspection target objects has not yet been completed ("No" in step S207), the processing returns to step S201 to capture images of another inspection target object.
The respective processing steps have been described in detail above.
<The description of the effect of this exemplary embodiment>
Next, the effect of the present exemplary embodiment will be described in detail. For illustration, the present exemplary embodiment is compared with a case where the learning/inspection processing is executed without the feature amount combination obtained in step S107.
Fig. 14A is a diagram illustrating an example of an operation flow that excludes the feature amount combining operation in step S107, and Fig. 14B is a diagram illustrating an example of an operation flow according to the present exemplary embodiment that includes the feature amount combining operation in step S107. As shown in Fig. 14A, when the feature amounts are not combined, images of defective products must be selected for each of the four learning target images 1 to 4 ("image selection 1 to 4" in Fig. 14A). For example, as shown in Fig. 7, learning target image 1 is the composite created from the images captured under lighting conditions 1 to 4; because scratch defects are likely to be visualized under lighting conditions 1 to 4, uneven defects tend to be less visible in learning target image 1. Even if a target object is labeled as a defective product, an image in which the defect is not visualized cannot be treated as an image of a defective product, so such an image must be excluded from the images of defective products.
Further, in many cases, it may be difficult to select the above-described defective product images. For example, for the same defect on a target object, there are cases where the defect is clearly visualized in learning target image 1, while in learning target image 2 the defect is visualized only to a degree similar to the degree of variation of the pixel values of non-defective product images. In this case, learning target image 1 can serve as a learning target image of a defective product. However, if learning target image 2 is also used as a learning target image of a defective product, a redundant feature amount is very likely to be selected when the feature amounts for separating non-defective products from defective products are chosen. As a result, the performance of the classifier may be degraded.
Further, in step S109, feature amounts are selected from each of the four learning target images 1 to 4, so that four feature amount selection results are created. Therefore, the inspection must be executed four times. In general, the results of the four inspections are assessed comprehensively, and a target object determined to be a non-defective product in all of the inspections is comprehensively evaluated as a non-defective product.
On the other hand, if the feature amounts are combined, the above problems can be solved. Because the feature amounts are selected after the feature amounts have been combined, a defect can be visualized as long as it is visualized in any one of learning target images 1 to 4. Therefore, unlike the case where the feature amounts are not combined, it is not necessary to select the images of defective products. Further, a feature amount that emphasizes scratch defects is likely to be selected from learning target image 1, and feature amounts that emphasize uneven defects are likely to be selected from learning target images 2 to 4. Therefore, even when there is an image in which a defect is visualized only to a degree similar to the degree of variation of the pixel values of non-defective product images, as long as there is another image in which the defect is clearly visualized, no feature amount needs to be selected from the former image, so that no redundant feature amount is selected. Consequently, highly accurate separation performance can be realized. Further, because combining the feature amounts yields only a single feature amount selection result, the inspection only has to be executed once.
As described above, in the present exemplary embodiment, based on images of a target object whose appearance is known to be defective or non-defective, captured under at least two or more different lighting conditions, multiple feature amounts are extracted from each of at least two images. Then, from among the feature amounts comprehensively extracted from the images, the feature amounts for determining whether the target object is defective or non-defective are selected, and a classifier for determining whether the target object is defective or non-defective is generated based on the selected feature amounts. Then, whether the appearance of the target object is defective or non-defective is determined based on the feature amounts extracted from the inspection images and the classifier. Therefore, when images of a target object are captured under multiple lighting conditions, learning target images do not have to be selected for each lighting condition, so that a single inspection can be executed for the multiple lighting conditions. Furthermore, whether an inspection target object is defective or non-defective can be determined efficiently, because no redundant feature amount is selected. Therefore, whether the appearance of an inspection target object is defective or non-defective can be determined with high accuracy in a short period of time.
Further, in the present exemplary embodiment, a case has been described as an example in which learning and inspection are executed by the same apparatus (the defective/non-defective determining apparatus 200). However, learning and inspection do not always have to be executed by the same apparatus. For example, a classifier generating apparatus for generating (learning) the classifier and an inspection apparatus for executing the inspection may be configured so that the learning function and the inspection function are realized in separate apparatuses. In this case, for example, the classifier generating apparatus includes the respective functions of the image acquisition unit 201 through the classifier storage unit 208, and the inspection apparatus includes the respective functions of the image acquisition unit 201, the image synthesis unit 202, and the selected feature extraction unit 209 through the output unit 211. The classifier generating apparatus and the inspection apparatus then communicate directly with each other, so that the inspection apparatus can obtain the information regarding the classifier and the feature amounts. Alternatively, instead of the above configuration, for example, the classifier generating apparatus may store the information regarding the classifier and the feature amounts in a portable storage medium, so that the inspection apparatus can obtain the information regarding the classifier and the feature amounts by reading it from the storage medium.
Next, a second exemplary embodiment will be described. In the first exemplary embodiment, a description has been given of an exemplary embodiment in which learning and inspection are executed by using image data shot under at least two different lighting conditions. In the present exemplary embodiment, a description will be given of an exemplary embodiment in which learning and inspection are executed by using image data shot by at least two different imaging units. Because the types of learning data used in the first exemplary embodiment and the present exemplary embodiment are different, the configurations and processes differ mainly in this respect. Therefore, in the present exemplary embodiment, portions similar to those described in the first exemplary embodiment are assigned the same reference numerals as those used in Fig. 1 to Figs. 14A and 14B, and detailed descriptions thereof will be omitted.
Fig. 15A is a diagram illustrating a top view of an imaging apparatus 1500, and Fig. 15B is a diagram illustrating a cross-sectional view of the imaging apparatus 1500 (indicated by dotted lines in Fig. 15B) and a target object 450 according to the present exemplary embodiment. Fig. 15B is a cross-sectional view taken along line I-I' of Fig. 15A.
As illustrated in Fig. 15B, although the imaging apparatus 1500 according to the present exemplary embodiment is similar to the imaging apparatus 220 described in the first exemplary embodiment, the imaging apparatus 1500 is different in that, in addition to the camera 440, it also includes another camera 460 (indicated by thick lines in Fig. 15B) different from the camera 440. The optical axis of the camera 440 is set in a direction perpendicular to the board surface of the target object 450. On the other hand, the optical axis of the camera 460 is inclined with respect to the board surface of the target object 450 and is not in a direction perpendicular to the board surface. Furthermore, the imaging apparatus 1500 according to the present exemplary embodiment does not include illuminations. In the first exemplary embodiment, the feature quantities acquired from the image data shot under at least two different lighting conditions are combined. On the other hand, in the present exemplary embodiment, the feature quantities acquired from the image data shot by at least two different imaging units (the camera 440 and the camera 460) are combined. Although the two cameras 440 and 460 are illustrated in Figs. 15A and 15B, the number of cameras may be three or more as long as a plurality of cameras is used.
Fig. 16 is a diagram illustrating a state of the cameras 440 and 460 illustrated in Figs. 15A and 15B and the target object 450, viewed from above in a three-dimensional space. Images of the same area of the target object 450 are shot by the two cameras 440 and 460 from mutually different shooting directions, and pieces of image data are acquired therefrom. The advantage of using a plurality of different cameras is that a defect that is hardly visible in the image taken by any one camera may become visible by acquiring pieces of image data in a plurality of imaging directions with respect to the target object 450. This is similar to the idea described for the plurality of lighting conditions: just as a defect is easily visible under a lighting condition such as the one illustrated in Fig. 6, there also exist imaging directions (optical axes) of the imaging unit with respect to the target object 450 in which a defect is easily visible.
The processing flow of the defective/non-defective determining apparatus 200 during learning and inspection is similar to the processing flow of the first exemplary embodiment. However, in the first exemplary embodiment, in step S102, images of the target object 450 illuminated under a plurality of lighting conditions are acquired. On the other hand, in the present exemplary embodiment, images of one target object 450 shot by the plurality of imaging units from different shooting directions are acquired. Specifically, an image of the target object 450 shot by the camera 440 and an image of the target object 450 shot by the camera 460 are acquired.
Furthermore, in step S105, feature quantities are synthetically extracted from the two images respectively acquired by the camera 440 and the camera 460, and these feature quantities are combined in step S107. Thereafter, feature quantities are selected in step S109. It should be noted that, in step S104, images are composited according to the shooting directions (optical axes) of the cameras 440 and 460. The processing flow of the defective/non-defective determining apparatus 200 during inspection is also similar to the above-described processing flow, so that a detailed description thereof will be omitted. As a result, similarly to the first exemplary embodiment, a learning object image does not have to be selected for each image acquired by each imaging unit, so that a single inspection can be executed for the images shot by the plurality of imaging units. Furthermore, because no redundant feature quantity is selected, whether an inspection object is defective or non-defective can be determined efficiently.
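A hypothetical sketch of the combination (step S107) and selection (step S109) stages described above. The between-class mean-difference score used here is an illustrative stand-in for the evaluation value of the embodiment; the point is that after the per-camera features are concatenated into one vector, a single ranking pass keeps discriminative entries and drops the redundant ones.

```python
# Sketch of S107 (concatenated per-camera features) followed by S109
# (ranking and selection of feature indices).

def class_mean_gaps(vectors, labels):
    """Score each feature index by |mean difference| between the two classes."""
    classes = sorted(set(labels))
    means = []
    for c in classes:
        rows = [v for v, l in zip(vectors, labels) if l == c]
        means.append([sum(col) / len(col) for col in zip(*rows)])
    return [abs(a - b) for a, b in zip(means[0], means[1])]

def select_top_k(vectors, labels, k):
    scores = class_mean_gaps(vectors, labels)
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])

# Each row = camera-440 features + camera-460 features, already concatenated.
X = [
    [1.0, 5.0, 0.1],   # no_defect
    [1.1, 5.1, 0.1],   # no_defect
    [1.0, 9.0, 0.1],   # defect: only the camera-460 feature (index 1) reacts
    [1.1, 9.2, 0.1],   # defect
]
y = ["no", "no", "yes", "yes"]
print(select_top_k(X, y, 1))  # -> [1]: uninformative indices 0 and 2 are dropped
```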
Furthermore, in the present exemplary embodiment, the various modifications described in the first exemplary embodiment can also be employed. For example, similarly to the first exemplary embodiment, images of the target object 450 may be shot by the at least two different imaging units under at least two or more lighting conditions. Specifically, the illuminations 410a to 410h, 420a to 420h, and 430a to 430h may be arranged as illustrated in Figs. 4A and 4B described in the first exemplary embodiment, and images may be shot by the plurality of imaging units under a plurality of lighting conditions by changing the illumination direction and the light amount of each illumination. Then, images can be shot by the at least two different imaging units under the respective lighting conditions. A learning object image does not have to be selected under each lighting condition. In addition, image selection for each imaging unit becomes unnecessary, and a single inspection can be executed for the plurality of imaging units and the plurality of lighting conditions.
Next, a third exemplary embodiment will be described. In the first exemplary embodiment, a description has been given of an exemplary embodiment in which learning and inspection are executed by using image data shot under at least two different lighting conditions. In the present exemplary embodiment, a description will be given of an exemplary embodiment in which learning and inspection are executed by using image data of at least two different regions in the same image. Because the types of learning data used in the first exemplary embodiment and the present exemplary embodiment are different, the configurations and processes differ mainly in this respect. Therefore, in the present exemplary embodiment, portions similar to those described in the first exemplary embodiment are assigned the same reference numerals as those used in Fig. 1 to Figs. 14A and 14B, and detailed descriptions thereof will be omitted.
Fig. 17A is a diagram illustrating a state of the camera 440 and a target object 1700, viewed from above in a three-dimensional space, and Fig. 17B is a diagram illustrating an example of a shot image of the target object 1700. Although the target object 450 described in the first exemplary embodiment is made of a single material, the target object 1700 illustrated in Figs. 17A and 17B is made of two materials. In Figs. 17A and 17B, the material of a region 1700a is referred to as material A, and the material of a region 1700b is referred to as material B.
In the first exemplary embodiment, the feature quantities acquired from the image data shot under at least two different lighting conditions are combined. On the other hand, in the present exemplary embodiment, the feature quantities acquired from the image data of different regions of the same image shot by the camera 440 are combined. In the example illustrated in Fig. 17B, two regions, namely the region 1700a corresponding to material A and the region 1700b corresponding to material B, are designated as inspection regions. Although the two inspection regions are illustrated in Figs. 17A and 17B, the number of inspection regions may be three or more as long as a plurality of regions is designated.
The processing flow of the defective/non-defective determining apparatus 200 during learning and inspection is similar to the processing flow of the first exemplary embodiment. However, in the present exemplary embodiment, in step S102, an image of the two regions 1700a and 1700b of the same target object 1700 is acquired. Furthermore, in step S105, feature quantities are synthetically extracted from the images of the two regions 1700a and 1700b, respectively, and these feature quantities are combined in step S107. It should be noted that, in step S104, images may be composited according to these regions. The processing flow of the defective/non-defective determining apparatus 200 during inspection is also similar to the above-described processing flow, so that a detailed description thereof will be omitted. Conventionally, learning and inspection each have to be executed twice, because learning results are independently acquired for the regions 1700a and 1700b. In contrast, the advantage of the present exemplary embodiment is that both learning and inspection only have to be executed once. Furthermore, in the present exemplary embodiment, the various modifications described in the first exemplary embodiment can also be employed.
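The per-region extraction and combination described above can be sketched as follows. This is an illustrative sketch under assumed details: the region coordinates and the mean/max statistics are placeholder choices; only the structure (crop each designated region of one image, extract features per region, concatenate into one vector) follows the embodiment.

```python
# Sketch of the third embodiment: one captured image contains two material
# regions (1700a, 1700b); features are extracted per region and combined so
# a single learning/inspection pass covers both regions.

def crop(image, rows, cols):
    """Extract a rectangular region from a 2-D list of pixels."""
    r0, r1 = rows
    c0, c1 = cols
    return [row[c0:c1] for row in image[r0:r1]]

def region_features(region):
    pixels = [p for row in region for p in row]
    return [sum(pixels) / len(pixels), max(pixels)]

def combined_region_features(image, regions):
    """Concatenate per-region features into one vector for the whole image."""
    vec = []
    for rows, cols in regions:
        vec.extend(region_features(crop(image, rows, cols)))
    return vec

# 2x4 image: left half = material A (dark), right half = material B (bright).
image = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
regions = [((0, 2), (0, 2)),   # region 1700a
           ((0, 2), (2, 4))]   # region 1700b
print(combined_region_features(image, regions))  # -> [10.0, 10, 200.0, 200]
```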
Next, a fourth exemplary embodiment will be described. In the first exemplary embodiment, a description has been given of an exemplary embodiment in which learning and inspection are executed by using image data shot under at least two different lighting conditions. In the present exemplary embodiment, a description will be given of an exemplary embodiment in which learning and inspection are executed by using image data of at least two different portions of the same target object. As described above, because the types of learning data used in the first exemplary embodiment and the present exemplary embodiment are different, the configurations and processes differ mainly in this respect. Therefore, in the present exemplary embodiment, portions similar to those described in the first exemplary embodiment are assigned the same reference numerals as those used in Fig. 1 to Figs. 14A and 14B, and detailed descriptions thereof will be omitted.
Fig. 18A is a diagram illustrating a state of the camera 440, a camera 461, and the target object 450, viewed from above in a three-dimensional space, and Fig. 18B is a diagram illustrating an example of a shot image of the target object 450. Although the imaging apparatus according to the present exemplary embodiment is similar to the imaging apparatus 220 described in the first exemplary embodiment, the imaging apparatus is different in that, in addition to the camera 440, it also includes the camera 461 different from the camera 440. The optical axes of the cameras 440 and 461 are set in a direction perpendicular to the board surface of the target object 450. The cameras 440 and 461 shoot images of different regions of the target object 450. For the sake of the following description, a defect is intentionally illustrated in the left part of the target object 450 in Figs. 18A and 18B. Although the two cameras 440 and 461 are illustrated in Fig. 18A, the number of cameras may be three or more as long as a plurality of cameras is used. Furthermore, the target object 450 illustrated in Figs. 18A and 18B is formed of a single material.
In the present exemplary embodiment, in step S105, feature quantities are synthetically extracted from the pieces of image data of the different portions of the same target object 450, respectively, and these feature quantities are combined in step S107. Specifically, the camera 440 arranged on the left in Fig. 18A shoots an image of a left region 450a of the target object 450, and the camera 461 arranged on the right shoots an image of a right region 450b of the target object 450. Thereafter, the feature quantities synthetically extracted from the left region 450a and the right region 450b of the target object 450 are combined together. It should be noted that, in step S104, images may be composited according to these regions. The processing flow of the defective/non-defective determining apparatus 200 during inspection is also similar to the above-described processing flow, so that a detailed description thereof will be omitted.
In addition to the advantage of reducing the numbers of times of learning and inspection described in the third exemplary embodiment, the present exemplary embodiment also has the advantage that non-defective learning products and defective learning products can easily be labeled. Hereinafter, this advantage will be described in detail.
As illustrated in Fig. 18B, for example, the image of the region 450a shot by the left camera 440 includes a defect, whereas the image of the region 450b shot by the right camera 461 does not include a defect. Furthermore, although the region 450a and the region 450b partly overlap each other in the example illustrated in Fig. 18B, the region 450a and the region 450b do not have to overlap each other.
Now, non-defective products and defective products are to be learned, as described in detail in the first exemplary embodiment. If the idea of combining feature quantities is not introduced, learning must be executed for each of the regions 450a and 450b. Obviously, the target object 450 illustrated in Fig. 18B is a defective product, because a defect exists in the target object 450. However, the target object 450 is treated as a defective object during the learning of the region 450a, and is treated as a non-defective object during the learning of the region 450b. Accordingly, there are cases where the non-defective label or the defective label must be changed for each region.
However, by combining the feature quantities of the regions 450a and 450b as described in the present exemplary embodiment, the non-defective label or the defective label does not have to be changed for each of the region 450a and the region 450b. Therefore, usability during learning can be substantially improved.
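The labeling contrast described above can be shown in a few lines. This is a minimal illustrative sketch (the label strings and helper names are not from the patent): without combined features each region carries its own label, which flip between regions 450a and 450b; with combined features the whole learning product carries one label.

```python
# Sketch of the labeling advantage: one label per learning product instead of
# one label per region.

def per_region_labels(region_has_defect):
    """Without combined features: each region must be labeled separately."""
    return ["defect" if d else "no_defect" for d in region_has_defect]

def combined_label(region_has_defect):
    """With combined features: one label for the whole learning product."""
    return "defect" if any(region_has_defect) else "no_defect"

# The object of Fig. 18B: a defect exists in region 450a only.
regions = [True, False]
print(per_region_labels(regions))  # -> ['defect', 'no_defect']  (labels differ per region)
print(combined_label(regions))     # -> defect                   (one label suffices)
```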
Next, a modification of the present exemplary embodiment will be described. Fig. 19 is a diagram illustrating a modification of the state of the camera 440 and the target object 450, viewed from above in a three-dimensional space. Although the target object 450 is not movable in the first exemplary embodiment, in the present exemplary embodiment the target object 450 is mounted on a driving stage 1900. In the modification according to the present exemplary embodiment, as illustrated on the left in Fig. 19, an image of the right region of the target object 450 is shot by the camera 440. Then, the target object 450 is moved by the driving stage 1900, so that an image of the left region of the target object 450 is shot by the camera 440, as illustrated on the right in Fig. 19. Thereafter, the feature quantities synthetically extracted from the right region and the left region of the target object 450 are combined together. In the example illustrated in Fig. 19, images of the different regions of the same target object 450 are shot by the camera 440 by moving the driving stage 1900. However, the apparatus does not always have to be configured in this manner, as long as at least any one of the camera 440 and the target object 450 is moved so that the camera 440 shoots images of the different portions of the target object 450. For example, the camera 440 may be moved while the target object 450 is fixed.
<Other exemplary embodiments>
The above-described exemplary embodiments are merely examples for implementing aspects of the present invention, and are not to be construed as limiting the technical scope of the aspects of the present invention. Therefore, the aspects of the present invention can be realized in various ways without departing from the technical spirit or principal features of the aspects of the present invention.
For example, for the sake of simplicity, the first to fourth exemplary embodiments have been described as independent embodiments. However, at least two of these exemplary embodiments may be combined. A specific example will be illustrated in Fig. 20. Similarly to the third exemplary embodiment, Fig. 20 is a diagram illustrating a state in which the target object 1700 having different materials is shot by the two cameras 440 and 460. The arrangement of the cameras 440 and 460 is the same as the arrangement illustrated in Fig. 16 described in the second exemplary embodiment. As described above, the configuration illustrated in Fig. 20 is a combination of the second exemplary embodiment and the third exemplary embodiment, so that the feature quantities of four regions are combined. Specifically, the two feature quantities extracted from the right region and the left region of the target object 1700 shot by the camera 440, and the two feature quantities extracted from the right region and the left region of the target object 1700 shot by the camera 460, are combined together. Furthermore, the number of pieces of image data from which feature quantities are synthetically extracted can be increased by changing the lighting conditions described in the first exemplary embodiment (that is, the illumination to be employed, the light amount of the illumination, or the exposure time). Additionally, in the present exemplary embodiment, all of the feature quantities of the four regions are combined. However, the feature quantities to be combined may be changed according to the discrimination accuracy or the inspection accuracy required by the user, so that, for example, the feature quantities of only three regions may be combined.
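The combination of embodiments described above reduces, at the feature level, to concatenating per-(camera, region) feature groups, with an optional subset when full accuracy is not required. A minimal sketch under assumed details (the group ordering and the `keep` parameter are illustrative, not from the patent):

```python
# Sketch of the Fig. 20 combination (second + third embodiments): feature
# groups from the left/right regions of the images of cameras 440 and 460
# are concatenated into one vector; 'keep' selects a subset of regions.

def combine(feature_groups, keep=None):
    """Concatenate per-(camera, region) feature lists in order."""
    if keep is None:
        keep = range(len(feature_groups))
    vec = []
    for i in keep:
        vec.extend(feature_groups[i])
    return vec

groups = [
    [1.0, 2.0],  # camera 440, left region
    [3.0, 4.0],  # camera 440, right region
    [5.0, 6.0],  # camera 460, left region
    [7.0, 8.0],  # camera 460, right region
]
print(combine(groups))                  # all four regions combined
print(combine(groups, keep=[0, 1, 2]))  # e.g. only three regions, per the text
```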
Furthermore, aspects of the present invention can also be realized by executing the following process. Software (a computer program) for realizing the functions of the above-described exemplary embodiments is supplied to a system or an apparatus via a network or various storage media. Then, a computer (or a central processing unit (CPU) or a micro processing unit (MPU)) of the system or the apparatus reads out and executes the computer program.
Other Embodiments
Embodiments of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a "non-transitory computer-readable storage medium") to perform the functions of one or more of the above-described embodiments and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiments, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiments and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiments. The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
Embodiments of the present invention can also be realized by the following method: software (a program) for executing the functions of the above-described embodiments is supplied to a system or an apparatus via a network or various storage media, and a computer, a central processing unit (CPU), or a micro processing unit (MPU) of the system or the apparatus reads out and executes the program.
While aspects of the present invention have been described with reference to exemplary embodiments, it is to be understood that the aspects of the present invention are not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (13)

1. A classifier generating apparatus, the classifier generating apparatus comprising:
a learning extraction unit configured to extract, based on images shot under at least two different imaging conditions for a target object having a known defective appearance or non-defective appearance, a feature quantity from each of the at least two images;
a selection unit configured to select, from among the extracted feature quantities, a feature quantity for determining whether a target object is defective or non-defective; and
a generation unit configured to generate, based on the selected feature quantity, a classifier for determining whether a target object is defective or non-defective.
2. The classifier generating apparatus according to claim 1, the classifier generating apparatus further comprising:
a synthesis unit configured to synthesize a plurality of images shot under the at least two different imaging conditions for the target object having a known defective appearance or non-defective appearance,
wherein the at least two images include at least any one of a composite image created by the synthesis unit based on the shot images and an image, from among the shot images, that is not selected as a synthesis object of the synthesis unit.
3. The classifier generating apparatus according to claim 2, wherein the synthesis unit executes the image synthesis operation by using the pixel values of each of the images shot under the at least two different imaging conditions for the target object having a known defective appearance or non-defective appearance, a statistic of the images, and a statistic between the plurality of images.
4. The classifier generating apparatus according to claim 1, wherein the learning extraction unit generates, based on the shot images of the target object having a known defective appearance or non-defective appearance, a plurality of images of different frequencies from each of the at least two images, and extracts a feature quantity from each of the generated images of different frequencies.
5. The classifier generating apparatus according to claim 4, wherein the learning extraction unit generates the plurality of images of different frequencies by using a wavelet transform or a Fourier transform.
6. The classifier generating apparatus according to claim 4, wherein the learning extraction unit extracts the feature quantity by executing at least any one of statistical calculation, a convolution operation, differentiation processing, and binarization processing on the plurality of images of different frequencies.
7. The classifier generating apparatus according to claim 1, wherein the selection unit calculates an evaluation value for each of the feature quantities, which synthetically include the feature quantities extracted by the learning extraction unit, or an evaluation value for a combination of the feature quantities, which synthetically include the feature quantities extracted by the learning extraction unit; ranks, based on the calculated evaluation values, each of the feature quantities, which synthetically include the feature quantities extracted by the learning extraction unit, or each of the combinations of the feature quantities, which synthetically include the feature quantities extracted by the learning extraction unit; and selects, according to the ranking, a feature quantity for determining whether a target object is defective or non-defective.
8. The classifier generating apparatus according to claim 7, wherein, for each of the target objects having a known defective appearance or non-defective appearance, the selection unit
calculates a score that includes, as a parameter, the number of feature quantities used for determining whether a target object is defective or non-defective,
arranges the target objects having a known defective appearance or non-defective appearance in rank order according to the score for that number of feature quantities,
evaluates the arrangement order of the arranged target objects based on whether each target object has a defective appearance or a non-defective appearance,
derives, based on the result of the evaluation, the number of feature quantities to be selected as the feature quantities for determining whether a target object is defective or non-defective, and
selects, starting from the top of the ranking, as many of the feature quantities, which synthetically include the feature quantities extracted by the learning extraction unit, or of the combinations of the feature quantities, which synthetically include the feature quantities extracted by the learning extraction unit, as the derived number.
9. The classifier generating apparatus according to claim 1, wherein the at least two different imaging conditions include at least any one of shooting under at least two different lighting conditions, shooting from at least two different shooting directions, and shooting of at least two different regions of a target object.
10. The classifier generating apparatus according to claim 9, wherein the lighting conditions include at least any one of a light amount of an illumination for a target object, an illumination direction of an illumination for a target object, and an exposure time of an image sensor for executing imaging.
11. A defective/non-defective determination method, the defective/non-defective determination method comprising:
extracting, based on images shot under at least two different imaging conditions for a target object having a known defective appearance or non-defective appearance, a feature quantity from each of the at least two images;
selecting, from among the extracted feature quantities, a feature quantity for determining whether a target object is defective or non-defective;
generating, based on the selected feature quantity, a classifier for determining whether a target object is defective or non-defective;
extracting, by inspection, a plurality of feature quantities from each of at least two images shot, for a target object having an unknown defective appearance or non-defective appearance, under imaging conditions identical to the imaging conditions; and
determining whether the appearance of the target object is defective or non-defective based on the feature quantities extracted by the inspection extraction and the generated classifier.
12. A defective/non-defective determining apparatus, the defective/non-defective determining apparatus comprising:
a learning extraction unit configured to extract, based on images shot under at least two different imaging conditions for a target object having a known defective appearance or non-defective appearance, a feature quantity from each of the at least two images;
a selection unit configured to select, from among the extracted feature quantities, a feature quantity for determining whether a target object is defective or non-defective;
a generation unit configured to generate, based on the selected feature quantity, a classifier for determining whether a target object is defective or non-defective;
an inspection extraction unit configured to extract, based on images shot under the at least two different imaging conditions for a target object having an unknown defective appearance or non-defective appearance, a feature quantity from each of the at least two images; and
a determination unit configured to determine whether the appearance of the target object is defective or non-defective by comparing the extracted feature quantities with the generated classifier.
13. A defective/non-defective determination method, the defective/non-defective determination method comprising:
extracting, based on images shot under at least two different imaging conditions for a target object having a known defective appearance or non-defective appearance, a feature quantity from each of the at least two images;
selecting, from among the extracted feature quantities, a feature quantity for determining whether a target object is defective or non-defective;
generating, based on the selected feature quantity, a classifier for determining whether a target object is defective or non-defective;
extracting, by inspection, a plurality of feature quantities from each of at least two images shot, for a target object having an unknown defective appearance or non-defective appearance, under imaging conditions identical to the imaging conditions; and
determining whether the appearance of the target object is defective or non-defective by using the generated classifier and the feature quantities extracted by the inspection extraction.
CN201610792290.7A 2015-09-04 2016-08-31 Classifier generating apparatus, defective/non-defective determining apparatus and method Pending CN106503724A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2015-174899 2015-09-04
JP2015174899 2015-09-04
JP2016-064128 2016-03-28
JP2016064128A JP2017049974A (en) 2015-09-04 2016-03-28 Discriminator generator, quality determine method, and program

Publications (1)

Publication Number Publication Date
CN106503724A true CN106503724A (en) 2017-03-15

Family

ID=58279856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610792290.7A Pending CN106503724A (en) Classifier generating apparatus, defective/non-defective determining apparatus and method

Country Status (2)

Country Link
JP (1) JP2017049974A (en)
CN (1) CN106503724A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598605A (en) * 2018-11-26 2019-04-09 格锐科技有限公司 Remote intelligent assessment and self-service loan system based on physical collateral
CN109685756A (en) * 2017-10-16 2019-04-26 乐达创意科技有限公司 Automatic image feature identification device, system and method
CN109817267A (en) * 2018-12-17 2019-05-28 武汉忆数存储技术有限公司 Deep-learning-based flash memory lifetime prediction method, system and computer-readable storage medium
CN110738237A (en) * 2019-09-16 2020-01-31 深圳新视智科技术有限公司 Defect classification method and apparatus, computer device and storage medium
CN111507945A (en) * 2020-03-31 2020-08-07 成都数之联科技有限公司 Method for training a deep learning defect detection model using defect-free images
CN111712769A (en) * 2018-03-06 2020-09-25 欧姆龙株式会社 Method, apparatus, system, and program for setting lighting condition, and storage medium

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7054436B2 (en) 2017-12-14 2022-04-14 オムロン株式会社 Detection system, information processing device, evaluation method and program
JP6955211B2 (en) 2017-12-14 2021-10-27 オムロン株式会社 Identification device, identification method and program
WO2019171121A1 (en) * 2018-03-05 2019-09-12 Omron Corporation Method, device, system and program for setting lighting condition and storage medium
JP7015001B2 (en) 2018-03-14 2022-02-02 オムロン株式会社 Defect inspection equipment, defect inspection methods, and their programs
JPWO2020036082A1 (en) * 2018-08-15 2021-08-26 味の素株式会社 Inspection equipment, inspection methods and inspection programs
JP6823025B2 (en) 2018-09-12 2021-01-27 ファナック株式会社 Inspection equipment and machine learning method
JP6795562B2 (en) 2018-09-12 2020-12-02 ファナック株式会社 Inspection equipment and machine learning method
JP6780682B2 (en) 2018-09-20 2020-11-04 日本電気株式会社 Information acquisition system, control device and information acquisition method
JP7119949B2 (en) * 2018-11-28 2022-08-17 セイコーエプソン株式会社 Judgment device and judgment method
JP7075057B2 (en) 2018-12-27 2022-05-25 オムロン株式会社 Image judgment device, image judgment method and image judgment program
JP7130190B2 (en) 2018-12-27 2022-09-05 オムロン株式会社 Image determination device, learning method and image determination program
JP7075056B2 (en) * 2018-12-27 2022-05-25 オムロン株式会社 Image judgment device, image judgment method and image judgment program
JP6869490B2 (en) 2018-12-28 2021-05-12 オムロン株式会社 Defect inspection equipment, defect inspection methods, and their programs
WO2021039437A1 (en) * 2019-08-27 2021-03-04 富士フイルム富山化学株式会社 Image processing device, portable terminal, image processing method, and program
JP7342616B2 (en) * 2019-10-29 2023-09-12 オムロン株式会社 Image processing system, setting method and program
JP7397404B2 (en) * 2020-02-07 2023-12-13 オムロン株式会社 Image processing device, image processing method, and image processing program
JPWO2022130814A1 (en) * 2020-12-16 2022-06-23
WO2022153743A1 (en) * 2021-01-15 2022-07-21 パナソニックIpマネジメント株式会社 Determination system, determination method, and program
CN113933294B (en) * 2021-11-08 2023-07-18 中国联合网络通信集团有限公司 Concentration detection method and device
WO2023089846A1 (en) * 2021-11-22 2023-05-25 株式会社Roxy Inspection device and inspection method, and program for use in same

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1822644A (en) * 2005-02-15 2006-08-23 雅马哈发动机株式会社 Image acquiring method, image acquiring apparatus and device with image acquiring apparatus
CN101034069A (en) * 2006-03-10 2007-09-12 欧姆龙株式会社 Defect inspection apparatus and defect inspection method
CN101063660A (en) * 2007-01-30 2007-10-31 蹇木伟 Method for detecting textile defect and device thereof
CN101644657A (en) * 2009-09-03 2010-02-10 浙江大学 Rotation lighting method and device for big calibre precision optical component surface defect detection
CN101832948A (en) * 2009-03-12 2010-09-15 Aju高技术公司 The appearance inspecting system of printed circuit board (PCB) and method thereof
CN101981683A (en) * 2008-03-27 2011-02-23 东京毅力科创株式会社 Method for classifying defects, computer storage medium, and device for classifying defects
US20110182496A1 (en) * 2008-08-25 2011-07-28 Kaoru Sakai Defect check method and device thereof
CN102165488A (en) * 2008-09-24 2011-08-24 佳能株式会社 Information processing apparatus for selecting characteristic feature used for classifying input data
CN102288613A (en) * 2011-05-11 2011-12-21 北京科技大学 Surface defect detecting method for fusing grey and depth information
CN102292805A (en) * 2009-01-26 2011-12-21 恪纳腾公司 Systems and methods for detecting defects on a wafer
CN102630299A (en) * 2009-10-30 2012-08-08 住友化学株式会社 Image processing device for defect inspection and image processing method for defect inspection
JP2013016909A (en) * 2011-06-30 2013-01-24 Lintec Corp Synchronous detection circuit, receiver, and detection method
CN103377373A (en) * 2012-04-25 2013-10-30 佳能株式会社 Image feature generation method and equipment, classifier, system and capture equipment
CN103620482A (en) * 2011-06-24 2014-03-05 夏普株式会社 Defect inspection device and defect inspection method
WO2014119772A1 (en) * 2013-01-30 2014-08-07 住友化学株式会社 Image generating device, defect inspecting device, and defect inspecting method
CN104067600A (en) * 2012-02-01 2014-09-24 埃科亚特姆公司 Method and apparatus for recycling electronic devices
CN104796600A (en) * 2014-01-17 2015-07-22 奥林巴斯株式会社 Image composition apparatus and image composition method


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685756A (en) * 2017-10-16 2019-04-26 乐达创意科技有限公司 Image feature automatic identifier, system and method
CN111712769A (en) * 2018-03-06 2020-09-25 欧姆龙株式会社 Method, apparatus, system, and program for setting lighting condition, and storage medium
US11631230B2 (en) 2018-03-06 2023-04-18 Omron Corporation Method, device, system and computer-program product for setting lighting condition and storage medium
CN109598605A (en) * 2018-11-26 2019-04-09 格锐科技有限公司 A kind of long-distance intelligent assessment and loan self-aid system based on entity guaranty
CN109817267A (en) * 2018-12-17 2019-05-28 武汉忆数存储技术有限公司 A kind of service life of flash memory prediction technique based on deep learning, system and computer-readable access medium
CN109817267B (en) * 2018-12-17 2021-02-26 武汉忆数存储技术有限公司 Deep learning-based flash memory life prediction method and system and computer-readable access medium
CN110738237A (en) * 2019-09-16 2020-01-31 深圳新视智科技术有限公司 Defect classification method and device, computer equipment and storage medium
CN111507945A (en) * 2020-03-31 2020-08-07 成都数之联科技有限公司 Method for training deep learning defect detection model by using defect-free map
CN111507945B (en) * 2020-03-31 2022-08-16 成都数之联科技股份有限公司 Method for training deep learning defect detection model by using defect-free map

Also Published As

Publication number Publication date
JP2017049974A (en) 2017-03-09

Similar Documents

Publication Publication Date Title
CN106503724A (en) Grader generating means, defective/zero defect determining device and method
CN108764358B (en) Terahertz image identification method, device and equipment and readable storage medium
CN110852316B (en) Image tampering detection and positioning method adopting convolution network with dense structure
JP4870363B2 (en) High compression image data file generation device, method and program having a plurality of foreground planes
Neal et al. Measuring shape
US20170069075A1 (en) Classifier generation apparatus, defective/non-defective determination method, and program
CN107004265A (en) Information processor, the method for processing information, discriminator generating means, the method and program for generating discriminator
JPWO2019026104A1 (en) Information processing apparatus, information processing program, and information processing method
CN106934794A (en) Information processor, information processing method and inspection system
CN105701493B (en) The method and system of image zooming-out and prospect estimation based on stratum&#39;s figure
WO2021106174A1 (en) Image processing method, learning device, and image processing device
US12073567B2 (en) Analysing objects in a set of frames
CN114399644A (en) Target detection method and device based on small sample
KR20210057518A (en) Apparatus and method for generating a defect image
CN113159064A (en) Method and device for detecting electronic element target based on simplified YOLOv3 circuit board
CN111667459A (en) Medical sign detection method, system, terminal and storage medium based on 3D variable convolution and time sequence feature fusion
CN112699885A (en) Semantic segmentation training data augmentation method and system based on antagonism generation network GAN
Ibarra et al. Determination of Leaf Degradation Percentage for Banana leaves with Panama Disease Using Image Segmentation of Color Spaces and OpenCV
CN113016023B (en) Information processing method and computer-readable non-transitory recording medium
JP6546385B2 (en) IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM
WO2016092783A1 (en) Information processing apparatus, method for processing information, discriminator generating apparatus, method for generating discriminator, and program
US20220383616A1 (en) Information processing apparatus and image processing method
KR102286711B1 (en) Camera module stain defect detecting system and method
WO2022172469A1 (en) Image inspection device, image inspection method, and trained model generation device
JP7533263B2 (en) Image inspection device, image inspection method, and trained model generation device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170315