TWI808019B - Method for filtering surface type of object based on artificial neural network and system - Google Patents
- Publication number: TWI808019B (application TW111137589A)
- Authority: TW (Taiwan)
Description
The present invention relates to techniques for screening the surface type of an object, and in particular to a method for screening the surface type of an object based on an artificial neural network.
Many safety devices, such as seat belts, are assembled from numerous small structural parts. If these parts lack sufficient strength, the protective function of the safety device is called into question.
During manufacturing, these structural parts may develop tiny surface defects such as deformation, bruises, sand holes, broken columns, chipping, die-drag marks, foreign matter, and burrs, caused by collisions, process errors, mold defects, and other factors. Such tiny defects are not easy to detect. One traditional inspection method is to examine the part to be inspected with the naked eye or by touch to judge whether it is defective. However, manual inspection is inefficient and highly prone to misjudgment, so the yield of the structural parts cannot be controlled.
Another inspection approach detects defects with an artificial neural network. However, a traditional artificial neural network simply classifies its detection results into good and defective products, and misjudgments still occur. In addition, traditional artificial neural network defect detection considers only a single parameter, so its accuracy is low.
In one embodiment, the present invention provides a method for screening the surface type of an object based on artificial neural networks. The method includes: using a first detection module to identify an object image so as to classify the object image into one of a good product group and a first defective product group, and outputting a marked image for each object image classified into the first defective product group, where the marked image is the object image with at least one defect mark; when the object image is classified into the first defective product group, extracting at least one quasi-defective image from the marked image according to the at least one defect mark; and using a second detection module to classify the object image into one of a fuzzy group and a second defective product group according to the at least one quasi-defective image, where the second detection module classifies the object image into the fuzzy group when every quasi-defective image is detected to be defect-free.
In one embodiment, the present invention further provides a system for detecting the surface type of an object based on artificial neural networks, including a first detection module and a second detection module. The first detection module identifies an object image so as to classify it into one of a good product group and a first defective product group, outputs a marked image for each object image classified into the first defective product group, and extracts at least one quasi-defective image from the marked image. The second detection module classifies the object image into one of a fuzzy group and a second defective product group according to the at least one quasi-defective image, where the object image is classified into the fuzzy group when no defect is detected in any of the quasi-defective images.
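The two-stage flow claimed above can be summarized as a minimal sketch. The callables `first_module` and `second_module`, and the `(x, y, w, h)` mark format, are hypothetical stand-ins for the trained networks and are not part of the disclosure.

```python
def crop(image, mark):
    """Extract the partial image framed by a defect mark (x, y, w, h)."""
    x, y, w, h = mark
    return [row[x:x + w] for row in image[y:y + h]]

def screen_object(image, first_module, second_module):
    """Two-stage screening: a strict first pass, then rescue of over-killed parts.

    first_module(image)  -> list of defect marks (an empty list means 'good')
    second_module(patch) -> True if the patch truly shows a defect
    Both callables are hypothetical stand-ins for the trained networks.
    """
    marks = first_module(image)                 # stage 1: strict classification
    if not marks:
        return "good"                           # good product group
    patches = [crop(image, m) for m in marks]   # quasi-defective images
    if any(second_module(p) for p in patches):
        return "defective"                      # second defective product group
    return "fuzzy"                              # fuzzy group (rescued)
```

An image is only ever "defective" if both stages agree, which mirrors the kill-strictly-then-rescue design described in the summary below.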
To sum up, in any embodiment of the screening method and detection system based on artificial neural networks, the first detection module first classifies the object images according to a defect standard that is stricter and easier to apply consistently, reducing the misclassification caused by annotators missing labels or applying inconsistent labeling standards. The second detection module then reclassifies each object image in the first defective product group according to its at least one quasi-defective image, rescuing over-killed parts. This greatly simplifies the labeling work and reduces model defects caused by labeling errors, making the detection accuracy more stable. In addition, because of the multi-stage classification, each stage's model (i.e., the first detection module and the second detection module) has a different task, going from strict to lenient and from a wide view to a fine one, so the division of labor is more specific and each model converges more easily. Furthermore, problems such as labeling difficulty, complex samples, and unevenly distributed defect categories are alleviated, so inspection equipment adopting this screening method can go online earlier. Finally, because the method kills strictly first and rescues afterward, operators of inspection equipment adopting it need not worry about defective products leaking out, which increases their trust in the equipment.
The detailed features and advantages of the present invention are described in the embodiments below in sufficient detail to enable any person skilled in the art to understand and practice the technical content of the present invention. Based on the disclosure of this specification, the claims, and the drawings, any person skilled in the art can readily understand the related objectives and advantages of the present invention.
To make the above objectives, features, and advantages of the embodiments of the present invention easier to understand, a detailed description is given below in conjunction with the accompanying drawings.
FIG. 1 is a block diagram of an embodiment of the detection system, and FIG. 2 is a flowchart of an embodiment of the method for screening the surface type of an object. Referring to FIG. 1 and FIG. 2, the detection system 100 can execute any embodiment of the screening method based on the artificial neural networks 110, 120. In some implementations, the detection system 100 can be implemented on a processor, and the processor can be applied in inspection equipment.
In one embodiment, the detection system 100 includes two artificial neural networks (hereinafter the first artificial neural network 110 and the second artificial neural network 120), with the second artificial neural network 120 connected in series to the output of the first artificial neural network 110. In other words, the first artificial neural network 110 is the first stage of the detection system 100, and the second artificial neural network 120 is the second stage. The first artificial neural network 110 has a first detection module 111, and the second artificial neural network 120 has a second detection module 121.
In some embodiments, during the learning phase, the first artificial neural network 110 (at this point an untrained network) can perform deep learning on multiple object images I11-I1N, identical or different, under different training conditions, so as to build the first detection module 111 (at this point a trained network) for identifying the surface type of an object and classifying accordingly, where N is a positive integer greater than 1.
In some embodiments, the object images I11-I1N can be images of the surface of the same kind of object at the same relative position. In other words, the object images I11-I1N are the surface images of the first through the Nth objects, and the first through the Nth objects are the same kind of article. Here, the first artificial neural network 110 can receive the object images I11-I1N with fixed imaging coordinate parameters. The object images I11-I1N can be obtained by capturing images of the surfaces of multiple objects; when the surface of an object has any surface type, the corresponding image position in that object's image I11-I1N also contains the imaging of that surface type. In some implementations, the surface type can be a surface structure such as a slot, crack, bump, sand hole, pore, impact mark, scratch, edge, or texture. Each surface structure is a three-dimensional fine structure with a size from sub-micron to micron; in other words, the longest side or longest diameter of the three-dimensional fine structure is between sub-micron and micron, where sub-micron means less than 1 micron, for example 0.1 micron to 1 micron. For example, the three-dimensional fine structure can be a microstructure of 300 nanometers to 6 microns. In some implementations, the object can be a seat-belt bearing, and its object image is the surface image of the seat-belt bearing (the object image I11 shown in FIG. 7).
In some embodiments, during the learning phase, the second artificial neural network 120 (at this point an untrained network) can perform deep learning on multiple quasi-defective images, identical or different, under different training conditions, so as to build the second detection module 121 (at this point a trained network) for identifying whether an object has a defect and classifying accordingly. Here, each quasi-defective image is a partial image of an object image, for example a partial image of the object image I11 (the quasi-defective images I510-I519 shown in FIG. 10), and each quasi-defective image may or may not contain a defect block, where a defect block is the imaging of a defect on the surface of the object. In some implementations, defects can include but are not limited to deformation, bruises, sand holes, broken columns, chipping, die-drag marks, foreign matter, burrs, or any other surface type manually defined as a defect.
In some implementations, the training conditions can be, for example, different numbers of neural network layers, different neuron configurations, different preprocessing of the input images, different neural network algorithms, or any combination thereof. The image preprocessing can be feature enhancement, image cropping, data format conversion, image superposition, or any combination thereof. In addition, the first detection module 111 of the first artificial neural network 110 and the second detection module 121 of the second artificial neural network 120 can each be implemented with a convolutional neural network (CNN) algorithm, but the present application is not limited thereto.
In some embodiments, the first artificial neural network 110 of the detection system 100 receives at least one object image I11-I1N (step S10). In some implementations, the first artificial neural network 110 receives the object images I11-I1N simultaneously or sequentially.
In some embodiments, the object image I11-I1N of each object can be stitched from multiple view images of that object (i.e., a set of view images V1-VA). Therefore, in an embodiment of step S10, an image processing module 112, for example, can be used to receive the sets of view images V1-VA (step S11); after stitching each set of view images V1-VA into the object images I11-I1N (step S12), the image processing module 112 outputs them to the first artificial neural network 110, where A is a positive integer greater than 1.
In some implementations, each set of view images V1-VA is input to the image processing module 112 in capture order, so that the image processing module 112 can stitch them into the object images I11-I1N in that order. The view images V1-VA can be roughly the same size to facilitate stitching. For example, each view image V1-VA can measure 800 (pixels) x 800 (pixels), with 7 x 8 images per set, composing object images I11-I1N of 5600 (pixels) x 6400 (pixels) each. In addition, the image processing module 112 can be implemented with, but is not limited to, a processor. In some implementations, the image processing module 112 can be integrated into the first artificial neural network 110; in other words, the first artificial neural network 110 can include the image processing module 112 and the first detection module 111, with the first detection module 111 connected in series to the output of the image processing module 112, as shown in FIG. 1.
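The stitching of step S12 can be sketched as follows. The row-major capture order and the grid dimensions are assumptions taken from the 7 x 8 example above; images are represented as plain lists of pixel rows for illustration.

```python
def stitch_views(views, cols=7, rows=8):
    """Stitch view images (given in capture order, row-major) into one
    object image. Each view is a list of pixel rows; the grid size and
    ordering are illustrative assumptions, not fixed by the disclosure."""
    stitched = []
    for r in range(rows):
        row_tiles = views[r * cols:(r + 1) * cols]
        for y in range(len(row_tiles[0])):
            merged_row = []
            for tile in row_tiles:          # concatenate one pixel row
                merged_row.extend(tile[y])  # across the 7 tiles of this band
            stitched.append(merged_row)
    return stitched

# miniature demo: a 7 x 8 grid of 2x2 tiles -> a 16-row x 14-column image
tiles = [[[i, i], [i, i]] for i in range(56)]
image = stitch_views(tiles)
assert len(image) == 16 and len(image[0]) == 14
```

With 800 x 800 tiles, the same function would produce the 5600 x 6400-pixel object images of the example, at the cost of holding all 56 tiles in memory at once.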
After receiving the object images I11-I1N (step S10), the first detection module 111 can identify each object image I11-I1N so as to classify the object images I11-I1N into the good product group G1 or the defective product group G2 (hereinafter the first defective product group G2), and output the marked images I31, I37 of the object images I11, I17 classified into the first defective product group G2 (step S20). For example, the first detection module 111 of the first artificial neural network 110 classifies the object images I11, I17 into the first defective product group G2, classifies the object images I12-I16, I18-I1N into the good product group G1, and outputs the marked images I31, I37 of the object images I11, I17. In some embodiments, the marked images I31, I37 are the object images I11, I17 carrying defect marks M1. For example, the marked image I31 is the object image I11 in which 10 defect images are each framed by a frame line (i.e., a defect mark M1), as shown in FIG. 9.
FIG. 3 is a flowchart of an embodiment of step S20. Referring to FIG. 1 to FIG. 3, in an embodiment of step S20, the first detection module 111 can perform a defect prediction on each object image I11-I1N to generate a corresponding defect heat map (step S21). For example, the first detection module 111 performs defect prediction on the object image I11 to form the defect heat map I6 of the object image I11, as shown in FIG. 8. Next, the first detection module 111 can perform defect screening on each defect heat map according to a defect standard (step S22). In the defect screening, if the defect standard is determined to be met, the first detection module 111 classifies the object image accordingly, for example classifying the object images I11, I17 into the first defective product group G2 (step S23). Conversely, if the defect standard is determined not to be met, the first detection module 111 classifies the object image accordingly, for example classifying the object images I12-I16, I18-I1N into the good product group G1 (step S24). After this classification (which can be called the first-stage classification) is completed, the first detection module 111 can generate the marked images I31, I37 from the object images I11, I17 classified into the first defective product group G2 and their corresponding defect heat maps (step S25).
In some embodiments, referring to FIG. 7 and FIG. 8, the defect heat map I6 is a probability distribution map of the defects in the object image I11. The defect heat map I6 contains a plurality of pixels, and each pixel has a defect score. The defect score indicates how likely that location in the object image I11 is a defect; its value can be between 0 and 1, and the higher the defect score, the more likely that location (the pixel's coordinate position) is a defect. Referring to FIG. 1, FIG. 7, and FIG. 8, the defect scores are produced by the machine from the results of the learning phase: for each object image I11-I1N, the first detection module 111 predicts which pixels may belong to a defect block, assigns each pixel a defect score according to how likely it is to belong to a defect block, and generates the corresponding defect heat map in which each pixel is displayed as a hot spot according to its defect score. In other words, the defect heat map I6 has the same image size as the object image I11; for example, the defect heat map I6 and the object image I11 both measure 5600 (pixels) x 6400 (pixels). Moreover, in the defect heat map I6, a brighter pixel indicates a higher defect score and a higher probability that a defect (block) appears at the same corresponding position in the object image I11, as shown in FIG. 8. In some implementations, the defect heat map of each object image I11-I1N can instead measure 700 (pixels) x 800 (pixels), i.e., an array of 560,000 numbers.
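The brightness convention just described (higher score, brighter hot spot) can be illustrated with a minimal rendering sketch; the 8-bit grayscale mapping is an assumption for illustration only.

```python
def heatmap_to_grayscale(scores):
    """Render a defect heat map (per-pixel scores in [0, 1]) as 8-bit
    brightness values, so higher defect scores appear as brighter pixels."""
    return [[round(s * 255) for s in row] for row in scores]

# a 2x2 toy heat map: the bottom-right pixel is the most defect-like
demo = [[0.0, 0.5],
        [0.9, 1.0]]
rendered = heatmap_to_grayscale(demo)
assert rendered[1][1] == 255 and rendered[0][0] == 0
```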
In some embodiments, the defect standard can include, but is not limited to, a confidence threshold, an area threshold, and a quantity threshold. FIG. 4 is a flowchart of an embodiment of step S22. Referring to FIG. 1 to FIG. 4, in an embodiment of step S22, in the defect screening of the defect heat map of each object image I11-I1N, the first detection module 111 can first use the confidence threshold to screen out at least one defect pixel from the plurality of pixels of the defect heat map (step S221). For example, the first detection module 111 can compare the defect score of each pixel with the confidence threshold and screen out the pixels whose defect score is greater than or equal to the confidence threshold, where adjacent defect pixels form a defect area. In some implementations, the first detection module 111 can set the value of each pixel whose defect score is less than the confidence threshold to 0 and the value of each pixel whose defect score is greater than or equal to the confidence threshold to 1, thereby screening out the defect pixels (i.e., the pixels whose value is set to 1).
After the screening, the first detection module 111 counts the number of defect areas larger than the area threshold (hereinafter, the defect count) (step S222). For example, the first detection module 111 can first calculate the area value of each defect area and then compare each area value with the area threshold to obtain the defect count. Next, the first detection module 111 can compare the defect count with the quantity threshold to determine whether the defect count exceeds the quantity threshold (step S223). When the defect count is greater than or equal to the quantity threshold, the first detection module 111 determines that the defect standard is met (step S224) and proceeds to step S23 to classify the corresponding object images, for example the object images I11, I17, into the first defective product group G2. Conversely, when the defect count is less than the quantity threshold, the first detection module 111 determines that the defect standard is not met (step S225) and proceeds to step S24 to classify the corresponding object images, for example the object images I12-I16, I18-I1N, into the good product group G1.
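Steps S221-S223 can be sketched as follows: binarize the heat map with the confidence threshold, group adjacent defect pixels into defect areas, count the areas larger than the area threshold, and compare that count with the quantity threshold. The 4-connectivity used to define "adjacent" is an assumption; the disclosure does not fix it.

```python
def meets_defect_standard(heatmap, conf_thr, area_thr, count_thr):
    """Sketch of defect screening: returns True when the defect standard
    is met. heatmap is a 2-D list of defect scores in [0, 1]."""
    h, w = len(heatmap), len(heatmap[0])
    # step S221: pixels at/above the confidence threshold become 1, else 0
    mask = [[1 if s >= conf_thr else 0 for s in row] for row in heatmap]
    seen = [[False] * w for _ in range(h)]
    defect_count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood-fill one defect area (4-connected) and measure it
                area, stack = 0, [(y, x)]
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if area > area_thr:          # step S222: count large areas
                    defect_count += 1
    return defect_count >= count_thr         # step S223

hm = [[0.9, 0.9, 0.0],
      [0.0, 0.0, 0.0],
      [0.0, 0.0, 0.95]]
assert meets_defect_standard(hm, conf_thr=0.8, area_thr=1, count_thr=1)
```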
In some implementations, the confidence threshold, area threshold, and quantity threshold of the defect standard can be the combination with the highest accuracy calculated after testing with multiple test samples (for example, 100). For example, the confidence threshold can take the values [0.5, 0.6, 0.7, 0.8, 0.9], the area threshold [0, 3, 5, 7, 9], and the quantity threshold [0, 1, 2, 3], giving 5*5*4 combinations in total, and the numerical combination with the highest accuracy can be determined statistically after testing, but the present application is not limited thereto.
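The threshold selection above amounts to an exhaustive grid search over the 5*5*4 combinations. A minimal sketch, assuming a `predict` callable that stands in for the screening of step S22 and labeled test samples:

```python
from itertools import product

def best_thresholds(samples, predict):
    """Exhaustive grid search over the 5*5*4 combinations of the example.

    samples: (heatmap, ground_truth_is_defective) pairs.
    predict(heatmap, conf, area, count) -> bool; a hypothetical stand-in
    for the defect screening of step S22."""
    grid = product([0.5, 0.6, 0.7, 0.8, 0.9],  # confidence thresholds
                   [0, 3, 5, 7, 9],            # area thresholds
                   [0, 1, 2, 3])               # quantity thresholds
    def accuracy(thresholds):
        conf, area, count = thresholds
        hits = sum(predict(hm, conf, area, count) == truth
                   for hm, truth in samples)
        return hits / len(samples)
    return max(grid, key=accuracy)
```

With 100 test samples, 100 screenings per combination evaluate all combinations; the combination returned is the one with the highest accuracy, ties going to the first in grid order.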
FIG. 5 is a flowchart of an embodiment of step S25. Referring to FIG. 1 to FIG. 5, FIG. 7, and FIG. 8, in an embodiment of step S25, the first detection module 111 can superimpose the object image I11 and the corresponding defect heat map I6 into a superimposed image (step S251). The superimposed image contains at least one candidate defect block, where each candidate defect block is the part of the object image I11 in the superimposed image that corresponds to a group of adjacent defect pixels (forming one defect) on the defect heat map I6. After obtaining the superimposed image, the first detection module 111 performs calculations on the intersecting candidate defect blocks in the superimposed image and, according to the results, for example frames each candidate defect block with a frame line, so as to obtain the marked image I31 with at least one defect mark M1 (step S252). In some implementations, the frame lines can be set according to the shape and extent of each candidate defect block. The present application is not limited thereto; in other embodiments, the frame line can have a fixed shape, such as a rectangle, circle, or ellipse, that at least encloses the candidate defect block.
In some embodiments, the number of candidate defect blocks can equal the number of defects on the defect heat map I6. The present application is not limited thereto; in other embodiments, the number of candidate defect blocks can be less than the number of defects on the defect heat map I6. For example, the first detection module 111 can first select a predetermined number of defects from all the defects on the defect heat map I6 in the superimposed image and frame that predetermined number of defects with frame lines to form the candidate defect blocks. Specifically, the first detection module 111 can find the 10 parts most likely to be defects among all the defects on the defect heat map I6 in the superimposed image and frame only those 10 parts to form the candidate defect blocks. To do so, the first detection module 111 can first calculate the total defect score of each defect, i.e., the sum of the defect scores of all the pixels of that defect, sort the total defect scores from largest to smallest, and take the 10 defects with the highest total defect scores as the 10 most likely defects.
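The top-10 selection just described is a rank-by-total-score operation; a minimal sketch, representing each defect simply as the list of its pixels' defect scores:

```python
def top_k_defects(defects, k=10):
    """Keep the k defects most likely to be real, ranked by total defect
    score (the sum of the defect scores of a defect's pixels). Each defect
    is given here as a flat list of per-pixel scores; this representation
    is an assumption for illustration."""
    ranked = sorted(defects, key=sum, reverse=True)
    return ranked[:k]

# four defects with total scores 1.7, 0.6, 2.7, and 1.4
defects = [[0.9, 0.8], [0.6], [0.95, 0.9, 0.85], [0.7, 0.7]]
assert top_k_defects(defects, k=2) == [[0.95, 0.9, 0.85], [0.9, 0.8]]
```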
Here, the object images I11, I17 classified into the first defective product group G2 are predicted again by the second artificial neural network 120. Before this re-prediction, the second artificial neural network 120 first uses the marked images I31, I37 of the object images I11, I17 to perform dimension reduction on the object images I11, I17, and then performs the re-prediction on the dimension-reduced images. This greatly reduces the amount of data subsequently input to the second detection module 121 for re-prediction, and also makes the learning of the second artificial neural network 120 during the learning phase more focused.
Referring back to FIG. 1, FIG. 2, FIG. 7, and FIG. 9, after step S20, for the object images I11, I17 classified into the first defective product group G2, the image processing module 122 extracts at least one quasi-defective image I51s, I57s of each object image I11, I17 from the marked images I31, I37 according to the at least one defect mark M1 in the marked images I31, I37 (step S30).
FIG. 6 is a flowchart of an embodiment of step S30. Referring to FIG. 1 to FIG. 6, FIG. 9, and FIG. 10, in an embodiment of step S30, for the object images I11, I17 of the first defective product group G2 (the object image I11 is used as the example below), the image processing module 122 can first extract, according to the at least one defect mark M1 in the marked image I31, the partial image corresponding to each defect mark M1 from the marked image I31 (step S31). Each partial image covers at least one of the candidate defect blocks; in other words, the size of a partial image varies with the size of the candidate defect block (i.e., the size of each defect in the object image I11). Therefore, after obtaining the partial image corresponding to each defect mark M1, the image processing module 122 can further resize each partial image into a quasi-defective image I510-I519 of a predetermined size (step S32). In other words, the number of defect marks M1 in the marked image I31 equals the number of extracted quasi-defective images I510-I519. In some implementations, the predetermined size (length x width) of each quasi-defective image I510-I519 can be 128 (pixels) x 128 (pixels); in other implementations, it can be 800 (pixels) x 800 (pixels). Note that the predetermined size of the quasi-defective images I510-I519 depends on the design.
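Steps S31-S32 can be sketched as a crop followed by a resize to the predetermined size. The `(x, y, w, h)` mark format and the nearest-neighbor resampling are assumptions for illustration; the disclosure does not specify the resizing method.

```python
def extract_quasi_defect(image, mark, size=128):
    """Crop the partial image framed by a defect mark (x, y, w, h), then
    resize it to size x size pixels with nearest-neighbor sampling.
    image is a 2-D list of pixel values; formats are illustrative."""
    x, y, w, h = mark
    patch = [row[x:x + w] for row in image[y:y + h]]   # step S31: crop
    # step S32: resize to the predetermined size (nearest neighbor)
    return [[patch[(py * h) // size][(px * w) // size] for px in range(size)]
            for py in range(size)]
```

Because every partial image, whatever the extent of its defect mark, comes out at the same predetermined size, the second detection module sees uniformly shaped inputs.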
In some implementations, the image processing module 122 may be implemented with, but is not limited to, a processor. In addition, the image processing module 122 may be integrated into the second artificial neural network 120; that is, the second artificial neural network 120 may include the image processing module 122 and the second detection module 121, with the second detection module 121 connected in series to the output of the image processing module 122, as shown in FIG. 1. In other implementations, the image processing module 122 may instead be connected in series between the first artificial neural network 110 and the second artificial neural network 120 (not shown).
Referring back to FIG. 1 and FIG. 2, for each object image I11, I17 in the first defective product group G2, after the at least one quasi-defect image I51s, I57s is obtained (i.e., the dimensionality of the object images I11, I17 has been reduced) (step S30), the detection system 100 uses the second detection module 121 of the second artificial neural network 120 to classify the corresponding object image I11, I17 into a fuzzy group G3 or a defective product group G4 (hereinafter the second defective product group G4) according to the quasi-defect images I51s, I57s (step S40). The second detection module 121 classifies the object image I17 into the fuzzy group G3 when all of the quasi-defect images I57s belonging to it are detected as defect-free (step S50). Conversely, if any quasi-defect image I51s belonging to the object image I11 is detected as defective, the second detection module 121 classifies that object image I11 into the second defective product group G4 (step S60).
In one embodiment of step S40, for each object image I11, I17 in the first defective product group G2, the second detection module 121 may individually determine whether the confidence level (which may be called the defect confidence level) of each quasi-defect image I51s, I57s is below a predetermined threshold, and proceed to step S50 or step S60 according to the determination result (which may be called the second-stage classification). When the confidence levels of all quasi-defect images I57s belonging to the object image I17 are below the predetermined threshold, the second detection module 121 proceeds to step S50 and classifies the object image I17 into the fuzzy group G3. When the confidence level of any quasi-defect image I51s belonging to the object image I11 (for example, any of the quasi-defect images I510-I519 shown in FIG. 10) is greater than or equal to the predetermined threshold, the second detection module 121 proceeds to step S60 and classifies the object image I11 into the second defective product group G4.
In some implementations, the confidence level of each quasi-defect image I51s, I57s is predicted by the second detection module 121 and may range from 0 to 1, and the predetermined threshold may be obtained from statistics gathered during the learning phase, for example but not limited to 0.5.
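The second-stage decision described above reduces to a simple rule over the per-image defect confidence levels. The sketch below is an illustrative restatement, not the patent's actual code; the function name, group labels, and the 0.5 default are assumptions (0.5 is only the example threshold given above).

```python
PREDETERMINED_THRESHOLD = 0.5  # e.g., obtained from statistics in the learning phase

def second_stage_classify(defect_confidences, threshold=PREDETERMINED_THRESHOLD):
    """Classify one object image from the first defective product group G2.

    `defect_confidences` holds the defect confidence level (0..1) predicted
    for each quasi-defect image belonging to that object image.
    """
    if any(conf >= threshold for conf in defect_confidences):
        return "G4"  # step S60: at least one quasi-defect image appears defective
    return "G3"      # step S50: every quasi-defect image is below the threshold

print(second_stage_classify([0.12, 0.31, 0.08]))  # G3 (fuzzy group)
print(second_stage_classify([0.12, 0.73]))        # G4 (second defective product group)
```

A single confident defect among many low-confidence crops is enough to send the whole object image to the second defective product group, which matches the "strictly reject first" design.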
In some implementations of step S40, the second detection module 121 may first output a score value for each object image I11, I17 according to the determination results of its quasi-defect images I51s, I57s, as the basis for classification, and then classify each object image I11, I17 into the fuzzy group G3 or the second defective product group G4 according to its score value. For example, suppose the object images I11 and I17 are classified into the first defective product group G2. When the determination result of the object image I17 is that the confidence levels of all of its quasi-defect images I57s are below the predetermined threshold, and the determination result of the object image I11 is that the confidence level of any of its quasi-defect images I51s is greater than or equal to the predetermined threshold, the second detection module 121 may output a score value of 1 for the object image I17 and a score value of 0 for the object image I11, and then classify the object images I11, I17 according to their respective score values. Here, the second detection module 121 classifies the object image I17 into the fuzzy group G3 because its score value is 1, and classifies the object image I11 into the second defective product group G4 because its score value is 0.
FIG. 11 is a block diagram of an embodiment of the second artificial neural network in FIG. 1. Referring to FIG. 1 and FIG. 11, in some embodiments the second detection module 121 may include a classification unit 1211 and a plurality of judgment units 121A-121X. The outputs of the classification unit 1211 may include a defect-free category C1 and a plurality of defect categories C21-C2Y. The number of judgment units 121A-121X equals the number of defect categories C21-C2Y, and each judgment unit 121A-121X corresponds to one of the defect categories C21-C2Y and is connected in series to the corresponding output of the classification unit 1211. In some implementations, the defect categories C21-C2Y may include, but are not limited to, deformation, bruises, sand holes, broken columns, chipping, die drag, foreign matter, and burrs.
FIG. 12 is a flowchart of an embodiment of the method for screening the surface type of an object. Referring to FIG. 1, FIG. 11, and FIG. 12, in one embodiment of step S40, the second detection module 121 may first use the classification unit 1211 to classify each quasi-defect image I51s, I57s of each object image I11, I17 (for example, the quasi-defect images I510-I519 of the object image I11 shown in FIG. 10) into the defect-free category C1 or one of the defect categories C21-C2Y (step S41) (which may be called the second-stage classification). The second detection module 121 then uses the classification unit 1211 to extract a feature vector for each defect category C21-C2Y, obtaining a plurality of feature vectors for the defect categories C21-C2Y (step S42).
In some implementations of step S41, after classifying the quasi-defect images I51s, I57s of each object image I11, I17, the classification unit 1211 may further generate, for each object image I11, I17, a score vector for each category (i.e., a score vector for the defect-free category C1 and a score vector for each defect category C21-C2Y), where the score vector represents the confidence level of that category. In addition, in some implementations of step S42, each feature vector is a one-dimensional feature vector and may include, but is not limited to, a defect attribute score (e.g., the score vector from step S41), a defect size (e.g., obtained from step S22), a repeat count (e.g., obtained from step S41), and a category.
After the feature vectors are obtained, the second detection module 121 may use the judgment units 121A-121X to respectively determine whether each feature vector is below the attribute threshold of the corresponding defect category C21-C2Y (step S43) (which may be called the third-stage classification). For example, the attribute threshold may be a confidence threshold, such as 0.5, and the judgment unit 121A may determine whether the defect attribute score in the feature vector of the defect category C21 is below the attribute threshold of the defect category C21.
When all of the judgment units 121A-121X determine that the feature vectors of all defect categories C21-C2Y of the object image I17 are below the attribute thresholds of the corresponding defect categories C21-C2Y, the second detection module 121 proceeds to step S50 and classifies the object image I17 into the fuzzy group G3. Conversely, when any judgment unit 121A-121X determines that the feature vector of a defect category C21-C2Y of the object image I11 is greater than or equal to the attribute threshold of that defect category (i.e., the feature vector of any defect category C21-C2Y of the object image I11 is greater than or equal to the corresponding attribute threshold), the second detection module 121 proceeds to step S60 and classifies the object image I11 into the second defective product group G4.
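The third-stage decision can be sketched as one judgment unit per defect category, each comparing only its own category's defect attribute score against that category's own attribute threshold. This is a minimal illustration under stated assumptions: the category names, the threshold values, and reducing each feature vector to its attribute score are all hypothetical choices for the example.

```python
# One attribute threshold per judgment unit 121A-121X; the values below are
# illustrative (the patent gives 0.5 only as an example threshold).
ATTRIBUTE_THRESHOLDS = {
    "deformation": 0.5,
    "sand_hole": 0.6,
    "burr": 0.4,
}

def third_stage_classify(attribute_scores, thresholds=ATTRIBUTE_THRESHOLDS):
    """`attribute_scores` maps each defect category C21-C2Y to the defect
    attribute score taken from that category's feature vector (step S42)."""
    for category, score in attribute_scores.items():
        # Each judgment unit checks a single defect category (step S43).
        if score >= thresholds[category]:
            return "G4"  # step S60: this category's threshold is reached
    return "G3"          # step S50: every category stays below its threshold

print(third_stage_classify({"deformation": 0.2, "sand_hole": 0.3, "burr": 0.1}))  # G3
print(third_stage_classify({"deformation": 0.2, "sand_hole": 0.7, "burr": 0.1}))  # G4
```

Because each judgment unit sees one category only, its threshold (and, per the text, even its algorithm) can be tuned independently of the others.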
In some implementations, because the inputs of the judgment units 121A-121X have already had their feature dimensionality compressed and distilled to a certain degree, each judgment unit 121A-121X may be implemented with, for example but not limited to, a traditional machine learning algorithm. Moreover, each judgment unit 121A-121X may adopt an algorithm specific to its corresponding defect category C21-C2Y. Furthermore, since each judgment unit 121A-121X only needs to judge a single defect category, the feature space that the algorithm of each judgment unit 121A-121X needs to learn can be reduced.
In some embodiments, the objects corresponding to the object images I12-I16, I18-I1N classified into the good product group G1 are good products, the object corresponding to the object image I11 classified into the second defective product group G4 is a defective product, and the object corresponding to the object image I17 classified into the fuzzy group G3 requires further manual inspection to determine whether it is a good product or a defective product. In some implementations, after the object image I17 classified into the fuzzy group G3 has been manually inspected, it may be fed back to the detection system 100 as training data of a good product or a defective product according to the inspection result, so as to improve the accuracy of the detection system 100. Specifically, when the object image I17 is identified as a good product, it is input into the detection system 100 as good-product training data during the learning phase, so that the first artificial neural network 110 performs deep learning on it. When the object image I17 is identified as a defective product, it is input into the detection system 100 as defective-product training data during the learning phase, so that both the first artificial neural network 110 and the second artificial neural network 120 perform deep learning on it.
In some embodiments, the defect criterion adopted by the first detection module 111 may be a relatively strict criterion on which consistent judgments are easy to reach, so as to reduce misclassifications caused by labels missed by annotators or by inconsistent labeling standards.
In some embodiments, during the learning phase, only the blocks of the defective parts of the object image I17 (like the quasi-defect images I510-I519) may be extracted as the defect images (i.e., training data) input to the second detection module 121, which greatly reduces the influence of the image background on the training of the second detection module 121. In addition, defect images of only one defect type may be input at a time, so that the proportion of each defect type in the training data can be better controlled, avoiding the problem of imbalanced category counts.
In summary, in the method and detection system for screening the surface type of an object based on artificial neural networks of any embodiment, the first detection module 111 first classifies the object images I11-I1N according to a relatively strict defect criterion on which consistent judgments are easy to reach, reducing misclassifications caused by labels missed by annotators or by inconsistent labeling standards. The second detection module 121 then performs classification according to the at least one quasi-defect image I51s, I57s of the object images I11, I17 classified into the first defective product group G2, so as to rescue over-killed objects. In this way, the labeling work for the objects can be greatly simplified and model defects caused by labeling can be reduced, making the detection accuracy more stable. In addition, because of the multi-stage classification, the model of each stage (i.e., the first detection module 111 and the second detection module 121) is responsible for a different task, from strict to lenient and from a broad view to a fine-grained one, so that the division of labor among the stage models is more focused and each model converges more easily. Furthermore, problems such as labeling difficulties, sample complexity, and imbalanced defect categories can be resolved, so that inspection equipment adopting this screening method can go online earlier. Also, because objects are first strictly rejected and then rescued, operators of inspection equipment adopting this screening method need not worry about defective products slipping through, which improves confidence in the inspection equipment.
Although the technical content of the present invention has been disclosed above with preferred embodiments, they are not intended to limit the present invention. Any modification or refinement made by those skilled in the art without departing from the spirit of the present invention shall fall within the scope of the present invention; the scope of protection of the present invention shall therefore be defined by the appended claims.
100: detection system
110: first artificial neural network
111: first detection module
112: image processing module
120: second artificial neural network
121: second detection module
1211: classification unit
121A-121X: judgment units
122: image processing module
C1: defect-free category
C21-C2Y: defect categories
G1: good product group
G2: first defective product group
G3: fuzzy group
G4: second defective product group
I11-I1N: object images
I31, I37: marked images
I51s, I57s: quasi-defect images
I510-I519: quasi-defect images
I6: defect heat map
M1: defect mark
V1-VA: view-angle images
S10-S60: steps
S11-S12: steps
S21-S25: steps
S221-S225: steps
S251-S252: steps
S31-S32: steps
FIG. 1 is a block diagram of an embodiment of the detection system.
FIG. 2 is a flowchart of an embodiment of the method for screening the surface type of an object.
FIG. 3 is a flowchart of an embodiment of step S20.
FIG. 4 is a flowchart of an embodiment of step S22.
FIG. 5 is a flowchart of an embodiment of step S25.
FIG. 6 is a flowchart of an embodiment of step S30.
FIG. 7 is a schematic diagram of an embodiment of an object image.
FIG. 8 is a schematic diagram of an embodiment of a defect heat map.
FIG. 9 is a schematic diagram of an embodiment of a marked image.
FIG. 10 is a schematic diagram of an embodiment of a quasi-defect image.
FIG. 11 is a block diagram of an embodiment of the second artificial neural network in FIG. 1.
FIG. 12 is a flowchart of an embodiment of the method for screening the surface type of an object.
100: detection system
110: first artificial neural network
111: first detection module
112: image processing module
120: second artificial neural network
121: second detection module
122: image processing module
G1: good product group
G2: first defective product group
G3: fuzzy group
G4: second defective product group
I11-I1N: object images
I31, I37: marked images
I51s, I57s: quasi-defect images
V1-VA: view-angle images
Claims (19)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| TW111137589A | 2022-10-03 | 2022-10-03 | Method for filtering surface type of object based on artificial neural network and system |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| TWI808019B | 2023-07-01 |
| TW202416232A | 2024-04-16 |
Family

ID: 88149185
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| WO2019246250A1 | 2018-06-20 | 2019-12-26 | Zoox, Inc. | Instance segmentation inferred from machine-learning model output |
| CN111044525A | 2019-12-30 | 2020-04-21 | 歌尔股份有限公司 | Product defect detection method, device and system |
| CN112598591A | 2020-12-18 | 2021-04-02 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
| TW202127371A | 2019-12-31 | 2021-07-16 | 大陸商鄭州富聯智能工坊有限公司 | Image-based defect detection method and computer readable medium thereof |
| TW202135092A | 2020-02-03 | 2021-09-16 | 愛爾蘭商卡司莫人工智能有限公司 | Systems and methods for contextual image analysis |
Also Published As

| Publication number | Publication date |
| --- | --- |
| TW202416232A | 2024-04-16 |