CN112683924A - Method for screening surface form of object based on artificial neural network

Info

Publication number: CN112683924A
Application number: CN201910987177.8A
Authority: CN (China)
Prior art keywords: image, neural network, light, object image, sub
Other languages: Chinese (zh)
Inventors: 蔡昆佑, 杨博宇
Assignee: Mitac Computer Kunshan Co Ltd; Getac Technology Corp
Application filed by Mitac Computer Kunshan Co Ltd and Getac Technology Corp
Priority to CN201910987177.8A
Publication of CN112683924A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for screening the surface form of an object based on an artificial neural network comprises the following steps: receiving at least one object image; performing surface type identification on each object image by using a first prediction model, so as to classify the object images into a first normal group and a first abnormal group; and performing surface type identification on each object image in the first normal group by using a second prediction model, so as to classify those object images into a second normal group and a second abnormal group. With this method, the surface form imaged in each object image is identified successively by a plurality of neural networks that are connected in series and trained under different conditions. The object images are therefore classified accurately and quickly, the objects corresponding to the object images are screened efficiently based on the classification results, and a low miss (escape) rate is obtained.

Description

Method for screening surface form of object based on artificial neural network
[ technical field ]
The invention relates to an artificial neural network training system, in particular to a method for screening the surface morphology of an object based on an artificial neural network.
[ background of the invention ]
Many safety protection measures depend on small structural objects, such as the components of a safety belt. If these small structural objects are not strong enough, the protective effect of the safety measure is called into question.
During manufacturing, these structural objects may develop minute surface defects such as slots, cracks, bumps, and unwanted textures, caused by impact, process error, mold defects, and other factors. Such tiny defects are not easily detected. One existing inspection approach is for a person to examine the structural object to be inspected with the naked eye, or to feel it by hand, to judge whether it has defects such as pits, scratches, color differences, and the like. However, manual inspection is inefficient and prone to misjudgment, so the yield of the structural objects cannot be controlled.
[ summary of the invention ]
In one embodiment, a method for screening the surface form of an object based on an artificial neural network includes: receiving at least one object image; performing surface type identification on each object image by using a first prediction model, so as to classify the object images into a first normal group and a first abnormal group; and performing surface type identification on each object image in the first normal group by using a second prediction model, so as to classify those object images into a second normal group and a second abnormal group.
In summary, according to the embodiments of the method for screening the surface form of an object based on an artificial neural network, the surface form imaged in each object image is identified successively by a plurality of neural networks that are connected in series and trained under different conditions, so that the object images are classified accurately and quickly. The objects corresponding to the object images can therefore be screened efficiently based on the classification results, and a low miss (escape) rate is obtained.
[ description of the drawings ]
Fig. 1 is a schematic diagram of an artificial neural network system according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an artificial neural network system according to another embodiment of the present invention.
Fig. 3 is a flowchart illustrating a method for screening a surface type of an object based on an artificial neural network according to an embodiment of the invention.
FIG. 4 is a flowchart of a training method of a sub-neural network system according to an embodiment of the present invention.
FIG. 5 is a flowchart of a method for detecting a sub-neural network system according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating an exemplary image region.
Fig. 7 is a flowchart of a training method of a sub-neural network system according to another embodiment of the present invention.
Fig. 8 is a flowchart of a detection method of a sub neural network system according to another embodiment of the present invention.
Fig. 9 is a flowchart of a training method of a sub neural network system according to still another embodiment of the present invention.
Fig. 10 is a flowchart of a detection method of a sub neural network system according to still another embodiment of the present invention.
FIG. 11 is a flowchart of a training method of a sub-neural network system according to still another embodiment of the present invention.
Fig. 12 is a flowchart of a detection method of a sub-neural network system according to still another embodiment of the present invention.
FIG. 13 is a diagram illustrating an example of an object image.
FIG. 14 is a diagram of an image scanning system for an object surface type according to an embodiment of the present invention.
FIG. 15 is a functional diagram of the first embodiment of the image scanning system for an object surface type.
FIG. 16 is a diagram illustrating a first embodiment of relative optical positions among the object, the light source module and the photosensitive elements shown in FIG. 14.
FIG. 17 is a diagram illustrating a second embodiment of the relative optical positions of the object, the light source module and the photosensitive elements shown in FIG. 14.
FIG. 18 is a schematic view of an embodiment of a surface profile.
FIG. 19 is a diagram illustrating a third embodiment of the relative optical positions of the object, the light source module and the photosensitive elements shown in FIG. 14.
FIG. 20 is a diagram illustrating a fourth embodiment of relative optical positions among the object, the light source module and the photosensitive elements shown in FIG. 14.
FIG. 21 is a diagram of an image scanning system for an object surface type according to another embodiment of the present invention.
FIG. 22 is a functional diagram of a second embodiment of an image scanning system for an object surface type.
FIG. 23 is a diagram illustrating a fifth embodiment of relative optical positions among the objects, light source modules and photosensitive elements shown in FIG. 14.
FIG. 24 is a functional diagram of a third embodiment of an image scanning system for an object surface type.
FIG. 25 is a diagram illustrating a sixth embodiment of the relative optical positions of the object, the light source module and the photosensitive elements shown in FIG. 14.
FIG. 26 is a functional diagram of a fourth embodiment of an image scanning system for an object surface type.
FIG. 27 is a diagram illustrating an exemplary inspection image.
FIG. 28 is a diagram illustrating another exemplary inspection image.
FIG. 29 is a diagram illustrating another exemplary detection image.
FIG. 30 is a functional diagram of a fifth embodiment of an image scanning system for an object surface type.
FIG. 31 is a diagram illustrating another example of an object image.
FIG. 32 is a diagram illustrating another example of an object image.
FIG. 33 is a diagram illustrating another example of an object image.
[ detailed description ]
The method for screening the surface form of an object based on an artificial neural network is suitable for an artificial neural network system. In this regard, the artificial neural network system may be implemented on a processor.
In some embodiments, the processor may perform deep learning (i.e., train an untrained artificial neural network) on a plurality of sub-neural network systems with the same or different object images under different training conditions, so as to build a prediction model (i.e., a trained artificial neural network) for each sub-neural network system to identify the surface form of an object, and may then concatenate the sub-neural network systems into an artificial neural network system. Here, the object images are images of the surfaces of objects of the same type at the same relative position. In other words, when the surface of an object has any surface feature, that feature is imaged at the corresponding image position of the object image of that object. Further, the artificial neural network system receives a plurality of object images captured with fixed image-capturing coordinate parameters. For example, when the surface of the object has a sand hole, the sand hole is imaged at the corresponding position of the object image; when the surface of the object has a bump, the bump is imaged at the corresponding position of the object image. In some embodiments, the surface form may be a surface structure such as a slot, crack, bump, sand hole, air hole, scratch, edge, or texture, and each such surface structure is a three-dimensional fine structure. Here, a three-dimensional fine structure ranges from sub-micron size to micron (μm) size; that is, the longest side or diameter of the three-dimensional fine structure is between sub-micrometers and micrometers. Sub-micron means less than 1 μm, for example 0.1 μm to 1 μm. For instance, the three-dimensional fine structure may measure from 300 nm to 6 μm.
Referring to fig. 1, the artificial neural network system 30 may include an input unit 31, a plurality of sub-neural network systems 33, and an output unit 35. The sub-neural network systems 33 are serially connected between the input unit 31 and the output unit 35, and each sub-neural network system 33 is serially connected to the next sub-neural network system 33 by part of its output. Each sub-neural network system 33 has a prediction model.
In some embodiments, the output of each sub-neural network system 33 can be divided into a normal group and an abnormal group, and the normal group of each sub-neural network system 33 is coupled to the input of the next sub-neural network system 33. For example, in the prediction stage, when one or more object images IM are fed into the artificial neural network system 30, the first-level sub-neural network system 33 executes its prediction model on each object image IM to classify the object image IM into a first-level normal group or a first-level abnormal group. When an object image IM is classified into the first-level normal group, that object image IM output by the first-level sub-neural network system 33 is fed into the second-level sub-neural network system 33, and the second-level sub-neural network system 33 in turn executes its prediction model on the object image IM to classify it into a second-level normal group or a second-level abnormal group. Conversely, when an object image IM is classified into the first-level abnormal group, it is not fed into the second-level sub-neural network system 33. This continues stage by stage until the last-stage sub-neural network system 33 executes its prediction model on the object images IM fed from the previous stage (i.e., the object images IM classified into the normal group of the previous-stage sub-neural network system 33).
In some embodiments, the output unit 35 receives all the abnormal groups output by the sub-neural network systems 33 and outputs an abnormal result accordingly, and the output unit 35 also receives the normal group output by the last sub-neural network system 33 and outputs a normal result accordingly.
For convenience of illustration, two sub-neural network systems 33 are used as an example, but the number is not a limitation of the present invention. Referring to fig. 2, the two sub-neural network systems 33 are respectively referred to as a first sub-neural network system 33a and a second sub-neural network system 33 b.
The input of the first sub-neural network system 33a is coupled to the input unit 31. One output of the first sub-neural network system 33a is coupled to the input of the second sub-neural network system 33b, and the other output of the first sub-neural network system 33a is coupled to the output unit 35.
Here, the first sub-neural network system 33a has a first prediction model. The second sub neural network system 33b has a second prediction model. In some embodiments, the first predictive model may be implemented with a CNN algorithm. The second predictive model may also be implemented with the CNN algorithm. However, the present invention is not limited thereto.
Here, referring to fig. 2 and 3, the input unit 31 receives one or more object images IM (step S01), and feeds the received object images IM to the first sub-neural network system 33 a. Next, the first prediction model of the first sub-neural network system 33a performs surface morphology recognition on each object image IM to classify the object image IM into one of a first normal group G12 and a first abnormal group G11 (step S02). In other words, after the first prediction model identifies the surface type imaged by the object image IM, the object image IM is classified into the first normal group G12 or the first abnormal group G11 according to the identification result.
Then, the object images IM classified into the first normal group G12 are fed into the second sub-neural network system 33b, and the surface morphology recognition is performed by the second prediction model of the second sub-neural network system 33b to classify the object images IM into one of a second normal group G22 and a second abnormal group G21 (step S03). In other words, the second prediction model identifies the surface type imaged by the object images IM belonging to the first normal group G12, and then classifies the object images IM into the second normal group G22 or the second abnormal group G21 according to the identification result.
Finally, the output unit 35 receives the first abnormal group G11 output by the first prediction model, the second abnormal group G21 output by the second prediction model, and the second normal group G22 output by the second prediction model, and outputs an abnormal result and a normal result. The abnormal result includes the object image IM classified into the first abnormal group G11 and the object image IM classified into the second abnormal group G21. The normal result includes the object images IM classified into the second normal group G22.
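The two-stage flow of steps S01 to S03 can be summarized as a minimal sketch in Python. The `first_model` and `second_model` objects and their `predict()` method returning "normal" or "abnormal" are assumptions made only for illustration; they stand in for the first and second prediction models and are not an API defined by the patent.

```python
def screen_images(object_images, first_model, second_model):
    """Classify object images with two cascaded prediction models.

    Returns (abnormal_result, normal_result): the abnormal result collects
    images rejected by either stage (G11 + G21), the normal result collects
    images accepted by both stages (G22).
    """
    first_abnormal, first_normal = [], []
    for image in object_images:
        # Stage 1: the first prediction model identifies the surface type.
        if first_model.predict(image) == "normal":
            first_normal.append(image)
        else:
            first_abnormal.append(image)

    second_abnormal, second_normal = [], []
    for image in first_normal:
        # Stage 2: only images in the first normal group reach the second model.
        if second_model.predict(image) == "normal":
            second_normal.append(image)
        else:
            second_abnormal.append(image)

    abnormal_result = first_abnormal + second_abnormal  # G11 + G21
    normal_result = second_normal                       # G22
    return abnormal_result, normal_result
```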
In some embodiments, the prediction model of each sub-neural network system 33 may be implemented by a Convolutional Neural Network (CNN) algorithm, but the disclosure is not limited thereto.
In some embodiments, each sub-neural network system 33 performs deep learning under different training conditions to build a respective prediction model. The training conditions may be, for example, different numbers of neural network layers, different neuron configurations, different pre-processing of input images, different neural network algorithms, or any combination thereof. The image preprocessing may be feature enhancement, image cropping, data format conversion, image overlay, and any combination thereof.
In some embodiments, the number of neural networks connected in series in the artificial neural network system 30 may be designed to be 2 neural networks, 3 neural networks, 4 neural networks or more in series according to actual requirements.
In some embodiments, each sub-neural network system 33 may have a different defect determination rate. In some embodiments, the processor can concatenate the plurality of sub-neural network systems 33 into the artificial neural network system 30 according to the defect determination rates of their prediction models. For example, the sub-neural network system 33 with a higher defect determination rate is arranged toward the front, and the sub-neural network system 33 with a lower defect determination rate is arranged toward the rear; in other words, the defect determination rates of the cascaded sub-neural network systems 33 decrease stage by stage. Therefore, the artificial neural network system 30 can quickly classify and predict a large number of objects to be detected while maintaining a low miss (escape) rate.
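As a small illustration of this ordering, the cascade can be assembled by sorting the sub-systems by their defect determination rate. The `defect_rate` attribute name is an assumption for illustration only.

```python
def order_cascade(sub_systems):
    """Arrange sub-neural network systems so the one with the highest defect
    determination rate runs first (rates then decrease stage by stage)."""
    return sorted(sub_systems, key=lambda s: s.defect_rate, reverse=True)
```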
In some embodiments, at least one of the sub-neural network systems 33 may perform image preprocessing for image cropping.
Referring to fig. 4, in the learning phase, the sub-neural network system 33 receives a plurality of object images IM (step S11). Here, the object images are images of surfaces of the same type of object at the same relative position. Next, the sub-neural network system 33 divides each object image IM into a plurality of image areas (step S12), and designates at least one region of interest among the plurality of image areas of each object image IM (step S13). In other words, after the object image IM is cut into a plurality of image areas, the sub-neural network system 33 can designate the image areas in the corresponding sequence of the plurality of image areas as the regions of interest according to the designated setting. Then, the sub-neural network system 33 performs a deep learning (training) with the designated region of interest to build a prediction model for identifying the surface morphology of the object (step S14). In some embodiments, the sub-neural network system 33 can divide, assign, and train each image one by one. In other embodiments, the sub-neural network system 33 can divide and assign each object image and then train all the assigned regions of interest together.
In the prediction stage, the sub-neural network system 33 performs classification prediction in substantially the same step as the learning stage. Referring to fig. 5, the sub-neural network system 33 receives one or more object images IM (step S21). Herein, the image capturing target and the image capturing position of each object image IM are the same as the image capturing target and the image capturing position of the object image IM used in the learning stage (the same relative position as an object). Next, the sub-neural network system 33 divides each object image IM into a plurality of image areas (step S22), and designates at least one region of interest among the plurality of image areas of each object image IM (step S23). In other words, after the object image IM is cut into a plurality of image areas, the sub-neural network system 33 can designate the image areas in the corresponding sequence of the plurality of image areas as the regions of interest according to the designated setting. Then, the sub-neural network system 33 performs a prediction model with the designated region of interest to identify the surface type of the object (step S24).
Based on this, the sub-neural network system 33 can flexibly focus detection on a specific region (the designated region of interest). In some embodiments, the sub-neural network system 33 may thereby also obtain a lower miss rate, for example a miss rate approaching zero.
In some embodiments, the number of image areas into which each object image IM is divided may be any integer greater than 2. Preferably, the image size of each image area is less than or equal to 768 × 768 pixels, for example 400 × 400 pixels, 416 × 416 pixels, or 608 × 608 pixels. Moreover, all image areas have the same image size, and each image area is preferably square. For example, when the image size of the object image IM is 3000 × 4000 pixels, the image size of each cropped image area may be 200 × 200 pixels.
In some embodiments of steps S12 and S22, the sub-neural network system 33 may first enlarge the object image IM according to a predetermined cropping size, so that the size of the object image IM is an integer multiple of the size of the image area. Then, the sub-neural network system 33 cuts the enlarged object image IM into a plurality of image areas according to a predetermined cutting size. Herein, the image sizes of the image areas are the same, i.e. the image sizes are the same as the preset cropping sizes.
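A minimal sketch of this cropping pre-processing, assuming NumPy images, is shown below. The patent speaks of enlarging the object image to an integer multiple of the preset cropping size; zero-padding is used here as one simple way to achieve that, which is an assumption rather than the patent's stated method.

```python
import numpy as np

def crop_into_regions(object_image: np.ndarray, crop_size: int) -> list:
    """Enlarge (here: zero-pad) the object image to an integer multiple of the
    preset cropping size, then cut it into equally sized square image areas,
    returned in row-major order (A01, A02, ...)."""
    height, width = object_image.shape[:2]
    padded_h = -(-height // crop_size) * crop_size  # round up to a multiple of crop_size
    padded_w = -(-width // crop_size) * crop_size
    padded = np.zeros((padded_h, padded_w) + object_image.shape[2:],
                      dtype=object_image.dtype)
    padded[:height, :width] = object_image
    regions = []
    for top in range(0, padded_h, crop_size):
        for left in range(0, padded_w, crop_size):
            regions.append(padded[top:top + crop_size, left:left + crop_size])
    return regions

# Example from the text: a 3000 x 4000 pixel object image with a 200 x 200
# cropping size yields 15 x 20 = 300 image areas.
```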
For example, referring to fig. 6, the sub-neural network system 33 divides each received object image IM into 70 image areas A01-A70 of the same cropping size. Then, according to a preset designated setting (assumed here to be 1-10), the sub-neural network system 33 designates the image areas A01-A10 as regions of interest, and performs deep learning or executes the prediction model with the image areas A01-A10 (i.e., the regions of interest).
In some embodiments, the region of interest may be an imaged image region with sand holes of different depths, an imaged image region without sand holes and with bumps or scratches, an imaged image region with different surface roughness, an imaged image region without surface defects, or an imaged image region with defects of different depth ratios. In this regard, the sub-neural network system 33 performs deep learning or performs predictive modeling based on the regions of interest of the various surface types as described above. During the learning phase, the sub-neural network system 33 can classify the regions of interest with different surface morphologies to generate different predetermined surface morphology classes in advance.
For example, during deep learning the sub-neural network system 33 may recognize that the region of interest A01 is imaged with sand holes and impacts, the region of interest A02 is imaged with no defects, and the region of interest A33 is imaged with sand holes only and with a surface roughness lower than that of the region of interest A35. In the prediction stage, taking five preset surface type categories as an example (sand holes or air holes, scratches or bumps, high roughness, low roughness, and no surface defects), the sub-neural network system 33 can classify the region of interest A01 into the preset categories of sand holes or air holes and of scratches or bumps, classify the region of interest A02 into the preset category of no surface defects, classify the region of interest A33 into the preset categories of sand holes or air holes and of low roughness, and classify the region of interest A35 into the preset category of high roughness.
In one embodiment of steps S13 and S23, for each object image IM, the sub-neural network system 33 designates the regions of interest by changing the weight of each image area. For example, referring to fig. 6, after the object image IM is cut into the image areas A01-A70, the weight of every image area A01-A70 is initially preset to 1. In one embodiment, assuming the designated settings are 1-5, 33-38, and 66-70, the sub-neural network system 33 increases the weights of the image areas A01-A05, A33-A38, and A66-A70 to 2 according to the preset designated settings, thereby designating the image areas A01-A05, A33-A38, and A66-A70 as regions of interest. In one example, when the weights of the regions of interest are increased, the weights of the other image areas A06-A32 and A39-A65 may be kept at 1. In another example, when the weights of the regions of interest are increased, the sub-neural network system 33 may simultaneously decrease the weights of the other image areas A06-A32 and A39-A65 to 0.
In another embodiment, assuming the designated settings are 1-5, 33-38, and 66-70, the artificial neural network system 30 reduces the weights of the image areas A06-A32 and A39-A65 (i.e., all areas other than A01-A05, A33-A38, and A66-A70) to 0.5 and keeps the weights of the image areas A01-A05, A33-A38, and A66-A70 at 1 according to the preset designated settings, so as to designate the image areas A01-A05, A33-A38, and A66-A70 as the regions of interest.
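A minimal sketch of this weight-based designation, using the weight values from the example above, follows. Representing the weights as a dictionary keyed by the 1-based image-area index is an assumption for illustration.

```python
def designate_roi_weights(num_regions, designated_indices,
                          roi_weight=2.0, other_weight=1.0):
    """Return a weight for each image area (1-based index); designated image
    areas receive roi_weight and all other areas receive other_weight."""
    designated = set(designated_indices)
    return {index: (roi_weight if index in designated else other_weight)
            for index in range(1, num_regions + 1)}

# Designated settings 1-5, 33-38 and 66-70, as in the example above.
designated = list(range(1, 6)) + list(range(33, 39)) + list(range(66, 71))
weights = designate_roi_weights(70, designated)            # ROIs weighted 2, others 1
zeroed = designate_roi_weights(70, designated, 2.0, 0.0)   # ROIs weighted 2, others 0
```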
In one embodiment, the sub-neural network system 33 may include a preprocessing unit and a deep learning unit. The input of the preprocessing unit is coupled to the previous stage of this sub-neural network system 33 (the previous sub-neural network system 33 or the input unit 31), and the output of the preprocessing unit is coupled to the input of the deep learning unit. The output of the deep learning unit is coupled to the next stage of the sub-neural network system 33 (the next sub-neural network system 33 or the output unit 35). Herein, the preprocessing unit is used for executing the steps S11 to S13 or S21 to S23, and the deep learning unit is used for executing the step S14 or S24. In other words, the architecture of the deep learning unit after performing deep learning is the prediction model. In another embodiment, the deep learning unit may include an input layer and a plurality of hidden layers. The input layer is coupled between the previous stage (previous sub-neural network system 33 or input unit 31) and the hidden layers. Each hidden layer is coupled between the input layer and the next stage (next sub-neural network system 33 or output unit 35). In this case, the steps S11 to S13 or the steps S21 to S23 can be executed by the input layer instead.
In some embodiments, at least one of the sub-neural network systems 33 may perform image preprocessing for converting data formats.
Referring to fig. 7, in the learning phase, the sub-neural network system 33 receives a plurality of object images IM (step S31). Next, the sub-neural network system 33 converts the object image IM into a Matrix (Matrix) according to the color mode of the object image IM (step S32), i.e., converts the data format of the object image into a format (e.g., an image Matrix) supported by the input channels of the artificial neural network. Then, the sub-neural network system 33 performs a deep learning with the matrix to build a prediction model for identifying the surface morphology of the object (step S33).
Herein, the received object images IM are all images of the surfaces of objects of the same type at the same relative position. The received object images IM span a plurality of color modes, and each object image IM has one of the color modes. In some embodiments, these color modes may correspond to a plurality of mutually different spectra. For example, during the learning phase, the processor can feed a large number of object images IM to the sub-neural network system 33, and the fed object images IM include surface images of the same relative position of each of a plurality of objects 2 of the same type, captured under different spectra.
Herein, the artificial neural network in the sub-neural network system 33 has a plurality of image matrix input channels for inputting corresponding matrices, and the image matrix input channels respectively represent a plurality of image capturing conditions (e.g. respectively represent a plurality of color modes). That is, the sub-neural network system 33 converts the object images IM of different color modes into information of length, width, pixel type, pixel depth, channel number, etc. in the matrix, wherein the channel number represents the image capturing condition of the corresponding object image. And the converted matrix is imported into a corresponding image matrix input channel according to the color mode of the object image, so that deep learning is facilitated. In some embodiments, the image matrix input channels respectively represent a plurality of different spectra.
In some embodiments, the plurality of spectra may range between 380nm and 3000nm. For example, the different spectra may be any visible light, such as white light, violet light, blue light, green light, yellow light, orange light, or red light. In one embodiment, the wavelength of white light may be 380nm to 780nm, that of violet light 380nm to 450nm, that of blue light 450nm to 495nm, that of green light 495nm to 570nm, that of yellow light 570nm to 590nm, that of orange light 590nm to 620nm, and that of red light 620nm to 780nm. In another embodiment, the spectrum may be far infrared light with a wavelength of 800nm to 3000nm.
In some embodiments, such color modes may also include grayscale modes. At this time, the object image IM is converted into a gray-scale image, and then converted into a matrix having a channel number representing the gray scale.
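A minimal sketch of this data-format conversion is shown below, assuming Pillow and NumPy are available. The color-mode names and the channel-index mapping are assumptions for illustration, standing in for the image matrix input channels described above.

```python
import numpy as np
from PIL import Image  # assumed available for decoding the object image

# Hypothetical mapping from color mode (illumination spectrum) to an image
# matrix input channel index of the artificial neural network.
CHANNEL_BY_COLOR_MODE = {"white": 0, "blue": 1, "green": 2, "red": 3, "grayscale": 4}

def object_image_to_matrix(path: str, color_mode: str):
    """Convert an object image into a normalized matrix plus the input-channel
    index that records its image-capturing condition (color mode)."""
    image = Image.open(path)
    if color_mode == "grayscale":
        image = image.convert("L")                        # convert to a gray-scale image
    matrix = np.asarray(image, dtype=np.float32) / 255.0  # normalize pixel values
    return matrix, CHANNEL_BY_COLOR_MODE[color_mode]
```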
In the prediction stage, the sub-neural network system 33 performs classification prediction in substantially the same step as the learning stage. Referring to fig. 8, the sub-neural network system 33 receives one or more object images IM (step S41). Herein, each object image IM is an image of the surface of the same object at the same relative position and has any one specific color pattern. Next, the sub-neural network system 33 converts the object image IM into a matrix according to the color mode of the object image IM (step S42). Then, the sub-neural network system 33 performs a prediction model with the matrix to identify the surface type of the object (step S43).
In some embodiments, the sub-neural network system 33 can normalize the object image IM to reduce the asymmetry between the learning data and improve the learning efficiency. Then, the object image IM normalized by the sub-neural network system 33 is converted into a matrix.
Accordingly, the sub-neural network system 33 performs deep learning with matrices whose channel numbers represent different color modes, so that the established prediction model can identify information such as the structure type and surface texture (i.e., the surface form) of the surface 21 of the object 2. In other words, by controlling the light-emitting spectrum or the light-receiving spectrum to provide object images of the same object with different imaging effects, the ability of the sub-neural network system 33 to distinguish various target surface types of the object can be improved. In some embodiments, the sub-neural network system 33 may integrate multi-spectral surface texture images to enhance the identification of the target surface type of the object, thereby obtaining the surface roughness and fine texture type of the object.
In one embodiment, the sub-neural network system 33 may include a preprocessing unit and a deep learning unit. The input of the preprocessing unit is coupled to the previous stage of this sub-neural network system 33 (the previous sub-neural network system 33 or the input unit 31), and the output of the preprocessing unit is coupled to the input of the deep learning unit. The output of the deep learning unit is coupled to the next stage of the sub-neural network system 33 (the next sub-neural network system 33 or the output unit 35). Herein, the preprocessing unit is used for executing the steps S31 to S32 or S41 to S42, and the deep learning unit is used for executing the step S33 or S43. In other words, the architecture of the deep learning unit after performing deep learning is the prediction model. In another embodiment, the deep learning unit may include an input layer and a plurality of hidden layers. The input layer is coupled between the previous stage (previous sub-neural network system 33 or input unit 31) and the hidden layers. Each hidden layer is coupled between the input layer and the next stage (next sub-neural network system 33 or output unit 35). In this case, the steps S31 to S32 or the steps S41 to S42 can be executed by the input layer instead.
In some embodiments, at least one of the sub-neural network systems 33 may perform image preprocessing for image overlay.
In one embodiment, referring to fig. 9, in the learning phase, the sub-neural network system 33 receives object images IM of objects (step S51). The object images IM are images of the surface of the same object at the same relative position, and the plurality of object images IM of the same object are obtained by capturing images of the object under light with different lighting directions. In one example, the images captured of the same object may have the same spectrum or different spectra. Next, the sub-neural network system 33 superimposes the object images IM of each object into a superimposed object image (hereinafter referred to as an initial image) (step S52). Then, the sub-neural network system 33 performs deep learning with the initial image of each object to establish a prediction model for identifying the surface form of the object (step S54). For example, the received object images IM include a plurality of object images IM of a first object and a plurality of object images IM of a second object. The sub-neural network system 33 superimposes the object images IM of the first object into the initial image of the first object and superimposes the object images IM of the second object into the initial image of the second object, and then performs deep learning with the initial images of the first object and the second object.
In the prediction stage, the sub-neural network system 33 performs classification prediction in substantially the same step as the learning stage. Referring to fig. 10, the sub-neural network system 33 receives a plurality of object images IM of an object (step S61). Herein, the plurality of object images IM of the object are all images of the surface of the same position of the object. And the plurality of object images IM of the object are images of the object captured based on the light rays with different lighting directions. Then, the sub-neural network system 33 superimposes the object images IM of the object into the initial image (step S62). Then, the sub-neural network system 33 performs a prediction model with the initial image to identify the surface type of the object (step S64).
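A minimal sketch of the superposition step (S52/S62) is given below, assuming NumPy images of identical size. The patent does not spell out the superposition operator, so averaging and channel-stacking are shown as two plausible choices rather than the method actually claimed.

```python
import numpy as np

def superimpose_average(object_images):
    """Superimpose images of the same object (different lighting directions)
    by averaging them into a single initial image."""
    stack = np.stack([np.asarray(img, dtype=np.float32) for img in object_images])
    return stack.mean(axis=0)

def superimpose_as_channels(object_images):
    """Alternative: keep each lighting direction as its own channel of the
    initial image (a multi-dimensional superposition)."""
    return np.stack([np.asarray(img, dtype=np.float32) for img in object_images], axis=-1)
```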
Accordingly, the sub-neural network system 33 can perform training by combining multi-angle image capture (i.e. different lighting directions) with multi-dimensional superposition preprocessing, so as to improve the identification degree of the three-dimensional structure features of the object without increasing calculation time. In other words, by controlling various incident angles of the image capturing light source to provide object images of the same object with different imaging effects, the spatial stereo distinction of the sub-neural network system 33 for various surface types of the object can be improved. Moreover, by integrating the object images in different lighting directions, the object images are subjected to multi-dimensional superposition, so that the recognition of the sub-neural network system 33 on the surface form of the object is improved, and the optimal analysis of the surface form of the object is obtained.
In another embodiment, referring to fig. 11 and 12, after steps S52 or S62, the sub-neural network system 33 may convert the initial image of each object into a matrix (steps S53 or S63), i.e., convert the data format of the initial image of each object into a format (e.g., an image matrix) supported by the input channels of the artificial neural network. Then, the sub-neural network system 33 further performs a deep learning or prediction model with the matrix of each object (step S54 'or S64'). That is, the sub-neural network system 33 converts the initial image of each object into information of length, width, pixel type, pixel depth, channel number, etc. in the matrix, wherein the channel number represents the color mode corresponding to the initial image. And the converted matrix is imported into a corresponding image matrix input channel according to the color mode of the initial image, so that the next processing is facilitated.
In an example of steps S52 and S62, the sub-neural network system 33 normalizes (normalizes) the received object image IM and then superimposes the normalized object images IM of the same object into the original image. Therefore, the asymmetry between the learning data can be reduced, and the learning efficiency is improved.
In an example of steps S51 or S61, the object images IM of the same object may have the same spectrum. In another example of the steps S51 or S61, the object image IM of the same object may have different multiple spectra. That is, the object images IM of the same object include an image of the object captured based on light of one spectrum in different lighting orientations and an image of the object captured based on light of another spectrum in different lighting orientations. And, the two spectra are different from each other.
In one embodiment, the sub-neural network system 33 may include a preprocessing unit and a deep learning unit. The input of the preprocessing unit is coupled to the previous stage of this sub-neural network system 33 (the previous sub-neural network system 33 or the input unit 31), and the output of the preprocessing unit is coupled to the input of the deep learning unit. The output of the deep learning unit is coupled to the next stage of the sub-neural network system 33 (the next sub-neural network system 33 or the output unit 35). Herein, the preprocessing unit is used for executing the steps S51 to S53 or S61 to S63, and the deep learning unit is used for executing the steps S54, S54 ', S64 or S64'. In other words, the architecture of the deep learning unit after performing deep learning is the prediction model. In another embodiment, the deep learning unit may include an input layer and a plurality of hidden layers. The input layer is coupled between the previous stage (previous sub-neural network system 33 or input unit 31) and the hidden layers. Each hidden layer is coupled between the input layer and the next stage (next sub-neural network system 33 or output unit 35). In this case, the steps S51 to S53 or the steps S61 to S63 can be executed by the input layer instead.
In some embodiments, each object image IM is formed by stitching a plurality of inspection images MB (as shown in fig. 13), and the image size of each region of interest is smaller than the image size of the inspection image (original image size).
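As a small illustration of this stitching, adjacent detection images MB of equal height can simply be concatenated along the scanning direction. The NumPy-based sketch below is an assumption; it ignores any overlap correction the real system might apply.

```python
import numpy as np

def stitch_object_image(detection_images):
    """Stitch the per-block detection images MB side by side (along the
    direction in which the surface blocks were scanned) into one object image IM."""
    return np.concatenate([np.asarray(img) for img in detection_images], axis=1)
```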
In some embodiments, each detected image MB may be generated by an image scanning system for the object surface type. Referring to fig. 14, the image scanning system for the object surface type is adapted to scan the object 2 to obtain at least one detected image MB of the object 2. Herein, the object 2 has a surface 21, and along an extending direction D1 of the surface 21, the surface 21 of the object 2 is divided into a plurality of surface blocks 21A-21C. In some embodiments, the surface 21 of the object 2 is divided into nine surface blocks, three of which (21A-21C) are exemplarily shown in the figure. However, the present application is not limited thereto, and the surface 21 of the object 2 can be divided into another number of surface blocks according to actual requirements, such as 3, 5, 11, 15, or 20 blocks, or any other number.
Referring to fig. 14 to 17, fig. 16 and 17 are schematic diagrams illustrating two embodiments of the relative optical positions of the object 2, the light source assembly 12 and the photosensitive element 13 in fig. 14, respectively.
The image scanning system for the surface type of the object includes a driving assembly 11, a light source assembly 12 and a photosensitive element 13. The light source assembly 12 and the photosensitive elements 13 face a detection position 14 on the driving assembly 11 at different angles.
The image scanning system can execute a detection procedure. In the detecting process, the driving assembly 11 carries the object 2 to be detected and sequentially moves one of the surface areas 21A-21C to the detecting position 14, and the light source assembly 12 emits a light L1 toward the detecting position 14 to sequentially illuminate the surface areas 21A-21C at the detecting position 14. Therefore, the surface blocks 21A-21C are sequentially disposed at the detecting position 14, and are irradiated by the light L1 from the side direction or the oblique direction when being at the detecting position 14.
In some embodiments, when each of the surface areas 21A-21C is located at the detecting position 14, the light sensing element 13 receives the diffused light generated by the light received by the surface area currently located at the detecting position 14, and captures the detection image of the surface area currently located at the detecting position 14 according to the received diffused light.
For example, in the detection procedure, the driving assembly 11 first displaces the surface block 21A to the detection position 14, and the photosensitive element 13 captures a detection image Ma of the surface block 21A while the surface block 21A is illuminated by the detection light L1 provided by the light source assembly 12. Then, the driving assembly 11 displaces the object 2 again to move the surface block 21B to the detection position 14, and the photosensitive element 13 captures a detection image Mb of the surface block 21B while the surface block 21B is illuminated by the detection light L1 provided by the light source assembly 12. Next, the driving assembly 11 displaces the object 2 to move the surface block 21C to the detection position 14, and the photosensitive element 13 captures a detection image Mc of the surface block 21C while the surface block 21C is illuminated by the detection light L1 provided by the light source assembly 12. This continues until the detection images MB of all the surface blocks have been captured.
In some embodiments, the angle between the incident direction of the light ray L1 and the normal direction 14A of each surface area 21A-21C at the detection position 14 (hereinafter referred to as the light incident angle θ) is greater than 0 degree and less than or equal to 90 degrees. That is, the light ray L1 (i.e., the incident optical axis thereof) irradiates the detection position 14 at a light incident angle θ larger than 0 degree and smaller than or equal to 90 degrees with respect to the normal line 14A.
In some embodiments, the light incident angle θ may be greater than or equal to a critical angle and less than or equal to 90 degrees, so as to obtain the best target feature extraction effect at the wavelength to be detected. In this regard, the critical angle may be related to the surface form expected to be detected. In some embodiments, the light incident angle θ is related to the depth ratio of the surface form expected to be detected. Here, the surface form expected to be detected may be the target surface form of the smallest size among the surface forms that the user wishes to detect. In some embodiments, the critical angle may be arctan(r/d), where d is the hole depth of the surface form expected to be detected and r is the hole radius of the surface form expected to be detected. For example, referring to FIG. 18, a defect having a hole depth d and a hole radius r is taken as an example of the surface form. Here, the hole radius r is the distance from either side wall of the defect to the normal 14A, and the ratio (r/d) between the hole radius r and the hole depth d is the depth ratio of the defect. In this case, the light incident angle θ is greater than or equal to arctan(r/d) and less than or equal to 90 degrees.
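A small numeric illustration of the critical angle arctan(r/d) follows; the example radius and depth values are made up for illustration only.

```python
import math

def critical_angle_deg(hole_radius: float, hole_depth: float) -> float:
    """Critical light incident angle arctan(r/d) in degrees, measured from the
    normal 14A at the detection position."""
    return math.degrees(math.atan2(hole_radius, hole_depth))

# Example (made-up values): a defect with hole radius r = 5 um and hole depth
# d = 10 um gives a critical angle of about 26.6 degrees, so the light incident
# angle theta would be chosen between roughly 26.6 and 90 degrees.
print(critical_angle_deg(5.0, 10.0))  # ~26.57
```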
In some embodiments, the photosensitive axis 13A of the photosensitive element 13 may be parallel to the normal line 14A or between the normal line 14A and a tangent line of the surface area of the object 2 on the detection position 14, as shown in fig. 16, 17 and 19. In one example, the photosensitive axis 13A of the photosensitive element 13 is parallel to the normal line 14A, as shown in fig. 16 and 17. In another example, an angle (hereinafter referred to as a light reflection angle α) is formed between the light sensing axis 13A of the light sensing element 13 and the normal line 14A, as shown in fig. 19. In some embodiments, the light reflection angle α is not equal to the light incidence angle θ, so as to reduce the generation of flare light, thereby obtaining a clearer detection image MB.
In some embodiments, the light source module 12 may provide the light ray L1 with a wavelength between 300nm and 3000 nm. For example, the light wavelength value of the light L1 can be in the light band of 300nm-600nm, 600nm-900nm, 900nm-1200nm, 1200nm-1500nm, 1500-1800nm, or 1800nm-2100 nm. In one example, the light L1 provided by the light source module 12 can be visible light, so that surface features having a micrometer (μm) level on the surface 21 are imaged in the detection image MB. In one embodiment, the light wavelength of the light L1 may be between 380nm and 780nm, which may depend on the material properties of the object to be detected and the requirement of surface spectral reflectivity. In some embodiments, the visible light may be any one of white light, violet light, blue light, green light, yellow light, orange light, and red light, for example. For example, the light L1 may be white light with a light wavelength value between 380nm and 780nm, or blue light with a light wavelength value between 450nm and 475nm, or green light with a light wavelength value between 495nm and 570nm, or red light with a light wavelength value between 620nm and 750 nm.
In another embodiment, the light L1 provided by the light source assembly 12 can be far infrared light (e.g., light having a wavelength in the range of 800nm-3000 nm). Thus, the detection light can image surface features having a sub-micron (e.g., 300nm) order on the surface of the object 2 in the detection image. In an exemplary embodiment, when the object 2 having the surface attachment is obliquely irradiated with far infrared light provided by the light source module 12, the far infrared light can penetrate the attachment to the surface of the object 2, so that the photosensitive element 13 can capture an image of the surface of the object 2 under the attachment. In other words, the far infrared light can penetrate the surface attachment of the object 2, so that the photosensitive element 13 can acquire an image of the surface 21 of the object 2. In some embodiments, the far infrared light has a light wavelength value greater than 2 μm. In some embodiments, the far infrared light has a wavelength of light having a value greater than the thickness of the attachment. In other words, the wavelength of the far infrared light can be selected according to the thickness of the attachment to be penetrated. In some embodiments, the wavelength of the far infrared light can be selected according to the surface morphology of the object to be measured, so as to perform image filtering of micron (μm) structure. For example, if the sample surface has 1 μm to 3 μm fine traces or sand holes, but such phenomena do not affect the product quality, and the quality manager is interested in structural defects of 10 μm or more, the wavelength of the far infrared light L1 is selected to be an intermediate wavelength (e.g., 4 μm) to obtain the best filtering effect of the image microstructure and low-noise image quality, and also not affect the detection of larger scale defects. Preferably, the wavelength of the far infrared light is greater than 3.5 μm. In some embodiments, the object 2 is preferably made of metal. In some embodiments, the adherent can be an oil stain, colored paint, or the like.
In some embodiments, referring to fig. 20, the image scanning system for the object surface type may further include a polarizer 17. The polarizer 17 is located on the photosensitive axis 13A of the photosensitive element 13 and is disposed between the photosensitive element 13 and the detection position 14. Here, the photosensitive element 13 captures the image of the surface of the object 2 through the polarizer 17. In this way, by using the polarizer 17 for polarization filtering, saturation glare of strong infrared light on the photosensitive element 13 can be effectively avoided, and the quality of the detected image can be improved to obtain a low-disturbance penetrating image.
In some embodiments, referring back to fig. 14 and 15, the driving assembly 11 includes a carrier 111 and a driving motor 112 connected to the carrier 111. In the detection procedure, the carrying element 111 carries the object 2, and the driving motor 112 drives the carrying element 111 to drive the object 2 to align a surface area to the detection position 14. In one embodiment, as shown in fig. 14, 16, 17, 19 and 20, the object 2 may be cylindrical, such as a spindle. Here, the surface 21 of the object 2 may be a side surface of the body of the object 2, i.e. the surface 21 is a cylindrical surface, and the surface 21 has a radian of 2 pi. Here, the extending direction D1 may be a clockwise direction or a counterclockwise direction with the major axis of the body of the object 2 as the rotation axis. In some embodiments, the object 2 has a narrower configuration at one end relative to the other. In one example, the supporting element 111 may be two rollers spaced apart by a predetermined distance, and the driving motor 112 is coupled to the rotating shafts of the two rollers. Here, the predetermined distance is smaller than the diameter of the article 2 (the minimum diameter of the body). Therefore, during the inspection process, the object 2 is movably disposed between the two rollers. Moreover, when the driving motor 112 rotates the two rollers, the object 2 is driven by the surface friction between the object 2 and the two rollers, and thus rotates along the extending direction D1 of the surface 21, so as to align a surface area to the detecting position 14. In another example, the supporting element 111 can be a shaft, and the driving motor 112 is coupled to one end of the shaft. At this time, the other end of the rotating shaft is provided with an embedded part (such as an inserting hole). At this time, the object 2 may be removably embedded in the embedding member in the inspection process. When the driving motor 112 rotates the shaft, the object 2 is driven by the shaft to rotate along the extending direction D1 of the surface 21, so that a surface area is aligned to the detecting position 14. In some embodiments, taking the surface 21 divided into 9 surface blocks 21A-21C as an example, the driving motor 112 drives the supporting element 111 to rotate 40 degrees at a time, so as to drive the object 2 to rotate 40 degrees along the extending direction D1 of the surface 21.
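A minimal sketch of this detection procedure for a cylindrical object whose surface is divided into nine surface blocks is shown below. The `rotate` and `capture` callables are placeholders for the driving motor and the photosensitive element, not an actual hardware API.

```python
def scan_cylindrical_surface(num_blocks, rotate, capture):
    """Rotate the object one surface block at a time and capture a detection
    image of the block currently at the detection position."""
    step_deg = 360.0 / num_blocks   # e.g. 9 surface blocks -> 40 degrees per step
    detection_images = []
    for _ in range(num_blocks):
        detection_images.append(capture())  # image of the block at the detection position
        rotate(step_deg)                    # move the next surface block into place
    return detection_images
```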
In one embodiment, as shown in FIG. 21, the object 2 is plate-shaped. The surface 21 of the object 2 may be a non-curved surface having a curvature equal to or approaching zero. Here, the extending direction D1 can be the extending direction of any side (such as the long side) of the surface 21 of the object 2. In an exemplary embodiment, the supporting element 111 can be a planar supporting board, and the driving motor 112 is coupled to a side of the planar supporting board. At this time, the article 2 may be removably disposed on the flat carrier plate in the inspection process. The driving motor 112 drives the planar carrying board to move along the extending direction D1 of the surface 21 to drive the object 2 to move, so as to align a surface area to the detecting position 14. Here, the driving motor 112 drives the planar-carrying board to move a predetermined distance each time, and drives the planar-carrying board to move repeatedly to sequentially move each of the surface blocks 21A-21C to the detection position 14. Here, the predetermined distance is substantially equal to the width of each surface segment 21A-21C along the extending direction D1.
In some embodiments, the drive motor 112 may be a stepper motor.
In some embodiments, referring to fig. 14, 21 and 22, the image scanning system may further include a light source adjusting assembly 16, and the light source adjusting assembly 16 is coupled to the light source assembly 12. Herein, the light source adjusting assembly 16 can be used to adjust the position of the light source assembly 12 to change the light incident angle θ.
In some embodiments, the magnitude of the light incident angle θ is negatively correlated with the brightness at which a surface defect appears in the detection image. When the light incident angle θ is smaller, a shallower surface form appears brighter in the detection image MB, i.e., a shallower surface form is less recognizable by the image scanning system or the inspector at a smaller light incident angle θ, whereas a deeper surface form still appears as a darker image and is more easily recognized. On the other hand, when the light incident angle θ is larger, both shallow and deep surface defects appear darker in the detection image, i.e., the image scanning system or the inspector can recognize all surface forms at a larger light incident angle θ.
In one example, if a deeper predetermined surface form is to be detected but a shallower predetermined surface form is not, the light source adjusting assembly 16 may adjust the position of the light source assembly 12 according to the light incident angle calculated from the above negative correlation, so as to set a smaller light incident angle θ. The light source adjusting assembly 16 then drives the light source assembly 12 to output the light L1, so that the shallower predetermined surface form appears as a brighter image in the detection image while the deeper predetermined surface form appears as a darker image. If both shallow and deep predetermined surface forms are to be detected, the light source adjusting assembly 16 can adjust the position of the light source assembly 12 according to the light incident angle θ calculated from the above negative correlation, so as to set a larger light incident angle θ (e.g., 90 degrees). The light source adjusting assembly 16 then drives the light source assembly 12 to output the detection light L1, so that both the shallow and the deep predetermined surface forms appear as darker images in the detection image.
In some embodiments, the light source adjusting assembly 16 can sequentially adjust the position of the light source assembly 12, so that the light sensing elements 13 capture the detection images MB of the object 2 at different light incident angles θ respectively.
In some embodiments, referring to fig. 23 and 24, the image scanning system may further include a light splitting assembly 18. The light splitting assembly 18 is located between the photosensitive element 13 and the detection position 14; in other words, the light splitting assembly 18 is located between the photosensitive element 13 and the object 2. The light splitting assembly 18 has a plurality of filter regions F1 corresponding respectively to a plurality of spectra. In this case, the light source assembly 12 provides a multi-spectrum light to illuminate the detection position 14, the multi-spectrum light having sub-light of a plurality of spectra. By switching the filter regions F1 of the light splitting assembly 18 (that is, by moving the filter regions F1 one at a time onto the photosensitive axis 13A of the photosensitive element 13), the photosensitive element 13 captures the detection images MB of the surface blocks (21A-21C) located at the detection position 14 through the respective filter regions F1, so as to obtain a plurality of detection images MB of different spectra. That is, when the multi-spectrum light is irradiated from the light source assembly 12 onto the object 2 at the detection position 14, the surface of the object 2 diffuses the multi-spectrum light, and the diffused light is filtered by whichever filter region F1 of the light splitting assembly 18 is in place into sub-light having the spectrum corresponding to that filter region F1 before entering the sensing region of the photosensitive element 13. At this time, the sub-light reaching the photosensitive element 13 has only a single spectrum (the middle value of the corresponding light band). While the same filter region F1 is aligned with the photosensitive axis 13A of the photosensitive element 13, the driving assembly 11 shifts one surface block at a time to the detection position 14, and after each shift the photosensitive element 13 captures a detection image MB of the surface block currently located at the detection position 14, so as to obtain the detection images MB of all the surface blocks 21A-21C under the same spectrum. Then, the light splitting assembly 18 is switched to another filter region F1 aligned with the photosensitive axis 13A of the photosensitive element 13, and the surface blocks are again shifted in sequence and their detection images MB captured. By analogy, a detection image MB having the spectrum corresponding to each filter region F1 can be obtained. In other words, the light source assembly 12 can provide a wider light band covering a wider range of light wavelengths, and the light splitting assembly 18 disposed on the light-receiving path allows only a specific light band to pass, so that the photosensitive element 13 receives the reflected light of the light L1 at the expected light wavelength.
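A minimal sketch of the acquisition order just described (all surface blocks under one filter region before switching to the next) follows. `select_filter`, `move_to_position`, and `capture` are placeholder callables standing in for the displacement assembly, driving assembly, and photosensitive element, not an actual hardware API.

```python
def acquire_multispectral_detection_images(filter_regions, surface_blocks,
                                           select_filter, move_to_position, capture):
    """For each filter region F1, scan every surface block at the detection
    position and capture its detection image, then switch to the next filter."""
    detection_images = {}
    for filter_region in filter_regions:
        select_filter(filter_region)    # align this filter region with the photosensitive axis
        for block in surface_blocks:
            move_to_position(block)     # shift the surface block to the detection position
            detection_images[(filter_region, block)] = capture()
    return detection_images
```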
In some embodiments, referring to fig. 23 and 24, the image scanning system may further include a displacement assembly 19. The displacement assembly 19 is coupled to the light splitting assembly 18. During operation of the image scanning system, the displacement assembly 19 sequentially moves one of the filter regions F1 of the light splitting assembly 18 onto the photosensitive axis 13A of the photosensitive element 13.
In another embodiment, the light splitting assembly may instead be disposed at the light incident end. In some embodiments, referring to fig. 25 and 26, the image scanning system may further include a light splitting assembly 18'. The light splitting assembly 18' is located between the light source assembly 12 and the detection position 14; in other words, the light splitting assembly 18' is located between the light source assembly 12 and the object 2. The light splitting assembly 18' has a plurality of filter regions F1 corresponding to the plurality of spectra, respectively. In this case, the light source assembly 12 provides a multi-spectrum light to illuminate the detection position 14 through the light splitting assembly 18'. Here, the multi-spectrum light includes sub-light of a plurality of spectra. Therefore, by switching among the filter regions F1 of the light splitting assembly 18' (i.e., shifting the filter regions F1 in turn onto the optical axis of the light source assembly 12), the multi-spectrum light output from the light source assembly 12 is filtered by the current filter region F1 of the light splitting assembly 18' into sub-light of a single spectrum, which then illuminates the object 2 at the detection position 14. At this time, the photosensitive element 13 can capture a detection image MB of that specific spectrum for the surface block (one of the surface blocks 21A to 21C) located at the detection position 14. While the same filter region F1 is aligned with the optical axis of the light source assembly 12, the driving assembly 11 shifts one surface block to the detection position 14 at a time, and the photosensitive element 13 captures a detection image MB of the surface block currently located at the detection position 14 after each shift, so as to obtain the detection images MB of all the surface blocks 21A-21C under the same spectrum. Then, the light splitting assembly 18' is switched to another filter region F1 aligned with the optical axis of the light source assembly 12, and the surface blocks are sequentially shifted again to capture their detection images MB. By analogy, a detection image MB having the spectrum corresponding to each filter region F1 can be obtained. In other words, the light source assembly 12 can output light covering a relatively wide band, and the light splitting assembly 18', which allows only a specific light band to pass, is disposed on the light incident path to provide detection light L1 of the predetermined light band to irradiate the detection position 14.
In some embodiments, referring to fig. 25 and 26, the image scanning system may further include a displacement assembly 19'. The displacement assembly 19' is coupled to the light splitting assembly 18'. During operation of the image scanning system, the displacement assembly 19' sequentially moves one of the filter regions F1 of the light splitting assembly 18' onto the optical axis of the light source assembly 12.
In some embodiments, the light band of the multi-spectrum light provided by the light source assembly 12 may be between 300nm and 2100nm, and the light bands respectively passed by the filter regions F1 of the light splitting assembly 18 (18') may be any non-overlapping sections between 300nm and 2100nm. Here, the light bands respectively passed by the plurality of filter regions F1 of the light splitting assembly 18 (18') may be continuous or discontinuous. For example, when the light band of the multi-spectrum light is between 300nm and 2100nm, the light bands passed by the filter regions F1 of the light splitting assembly 18 (18') may be 300nm-600nm, 600nm-900nm, 900nm-1200nm, 1200nm-1500nm, 1500nm-1800nm, and 1800nm-2100nm, respectively. In another example, when the light band of the multi-spectrum light is between 380nm and 750nm, the light bands passed by the filter regions F1 of the light splitting assembly 18 (18') may be 380nm-450nm, 495nm-570nm, and 620nm-750nm, respectively.
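The evenly spaced example above can be generated by a trivial helper; this is only an illustration of the band bookkeeping, not part of the disclosed system. Discontinuous band plans (such as 380-450nm, 495-570nm, 620-750nm) would simply be listed explicitly instead of generated.

```python
def partition_band(start_nm, end_nm, n_regions):
    """Split a light band into contiguous, non-overlapping sub-bands (in nm)."""
    step = (end_nm - start_nm) / n_regions
    return [(start_nm + i * step, start_nm + (i + 1) * step) for i in range(n_regions)]

# partition_band(300, 2100, 6) -> [(300.0, 600.0), (600.0, 900.0), ..., (1800.0, 2100.0)]
```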
In some embodiments, each of the aforementioned spectra may be expressed as a wavelength band of monochromatic light or as the central wavelength of that band.
In some embodiments, the light splitting assembly 18(18') may be a beam splitter.
In some embodiments, referring to fig. 27, the image scanning system can utilize a plurality of light emitting elements 121-123 with different spectra to provide detection light L1 of a plurality of spectra, and the light emitting elements 121-123 are activated sequentially, so that the photosensitive element 13 obtains a plurality of detection images with different spectra. In other words, the light source assembly 12 includes a plurality of light emitting elements 121-123, and the light emitting elements 121-123 correspond to a plurality of non-overlapping light bands, respectively. In some embodiments, these light bands may be continuous or discontinuous.
By way of example, the light source assembly 12 includes a red LED, a blue LED, and a green LED. When the red LED emits light, the photosensitive element 13 obtains a detection image MB of the red spectrum. When the blue LED emits light, the photosensitive element 13 obtains a detection image MB of the blue spectrum, as shown in fig. 28. When the green LED emits light, the photosensitive element 13 obtains a detection image MB of the green spectrum, as shown in fig. 29. In this way, different details are presented by the detection images MB under different light bands. For example, the grooves in the detection image MB are more distinct under the blue spectrum, while the bumps are more distinct under the green spectrum. A capture sequence of this kind is sketched below.
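A possible capture sequence for such sequentially activated light emitting elements, assuming a hypothetical LED driver object exposing `on()`/`off()` methods; neither the driver API nor the dictionary layout comes from the disclosure.

```python
def capture_per_spectrum(leds, capture_image):
    """Light each LED in turn and grab one detection image per spectrum.

    `leds` is assumed to be a mapping like {"red": led_r, "green": led_g, "blue": led_b}.
    """
    images = {}
    for name, led in leds.items():
        led.on()
        images[name] = capture_image()   # e.g. grooves stand out in the blue-spectrum image
        led.off()
    return images
```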
In one embodiment, as shown in fig. 19, the light source assembly 12 may include a single light emitting element. In another embodiment, as shown in fig. 16, 17 and 20, the light source assembly 12 may include two light emitting elements 121 and 122 disposed symmetrically, with respect to the normal line 14A, on two opposite sides of the object 2. The two light emitting elements 121 and 122 irradiate the detection position 14 with light of the same band, so that the surface 21 is illuminated by symmetric detection light L1 and generates symmetric diffused light. Here, the photosensitive element 13 captures the detection image MB of the surface block currently located at the detection position 14 from this symmetric diffused light, which improves the imaging quality of the detection image MB.
In some embodiments, each light emitting element (121, 122) may be implemented by one or more Light Emitting Diodes (LEDs); in some embodiments, each light emitting element (121, 122) may be implemented by a laser light source.
In one embodiment, the object surface inspection system may have a single set of light source modules 12, as shown in fig. 14 and 21.
In another embodiment, referring to fig. 30, the object surface inspection system may have multiple sets of light source assemblies 12a, 12b, 12c, 12d. The light source assemblies 12a, 12b, 12c, 12d are respectively located at different orientations of the detection position 14, i.e., at different orientations of the carrying element 111 that carries the object 2. In this way, the object surface inspection system can obtain object images with the best spatial information of the surface features. For example, assume that the object surface inspection system has four light source assemblies 12a, 12b, 12c, 12d. The light source assembly 12a may be disposed on the front side of the detection position 14 (or the carrying element 111), the light source assembly 12b may be disposed on the rear side of the detection position 14 (or the carrying element 111), the light source assembly 12c may be disposed on the left side of the detection position 14 (or the carrying element 111), and the light source assembly 12d may be disposed on the right side of the detection position 14 (or the carrying element 111).
Here, under illumination of each light source assembly (12a, 12b, 12c, 12d), the object surface inspection system performs an image capturing procedure to obtain the detection images MB of all the surface blocks 21A-21C of the object 2 under illumination from that specific orientation. For example, assume that the object surface inspection system has four light source assemblies 12a, 12b, 12c, 12d. First, the object surface inspection system emits the detection light L1 from the light source assembly 12a, and under this illumination the photosensitive element 13 captures the detection images MB of all the surface blocks 21A-21C of the object 2. The object surface inspection system then switches to emitting the light L1 from the light source assembly 12b, and the photosensitive element 13 again captures the detection images MB of all the surface blocks 21A-21C. The same is then done with the light source assembly 12c and finally with the light source assembly 12d, so that the detection images MB of all the surface blocks 21A-21C are obtained under each of the four illumination orientations. A possible coordination of this procedure is sketched below.
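The orientation-by-orientation procedure could be coordinated roughly as follows; `light_sources` and `scan_all_blocks` are hypothetical wrappers around the light source assemblies 12a-12d and the block-by-block scan, not APIs defined here.

```python
def capture_all_orientations(light_sources, scan_all_blocks):
    """Run the full block-by-block scan once per light source orientation.

    `light_sources` maps an orientation name ("front", "rear", "left", "right")
    to an object with on()/off(); `scan_all_blocks` performs one complete pass
    over surface blocks 21A-21C and returns their detection images.
    """
    results = {}
    for orientation, source in light_sources.items():
        source.on()
        results[orientation] = scan_all_blocks()
        source.off()
    return results
```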
In some embodiments, referring to fig. 14, 15, 21, 22, 24, 26 and 30, the image scanning system may further include a processor 15. The processor 15 is coupled to the light source assembly 12, the photosensitive element 13, the driving motor 112, the light source adjusting assembly 16 and/or the displacement assemblies 19, 19', and is configured to control the operations of these components.
In some embodiments, when the photosensitive element 13 captures the detected images MB of all the surface areas 21A-21C of the object 2, the processor 15 may further stitch the captured detected images MB into an object image IM according to the capturing order.
In one embodiment, the photosensitive element 13 may be a line-type photosensitive element. In this case, the detection images MB captured by the photosensitive element 13 can be stitched by the processor 15 without being cropped. In some embodiments, the line-type photosensitive element may be implemented by a linear image sensor, which can have a field of view (FOV) approaching 0 degrees.
In another embodiment, the photosensitive element 13 is a two-dimensional photosensitive element. In this case, when the photosensitive element 13 captures the detection image MB of a surface block 21A-21C, the processor 15 extracts a middle region MBc of the detection image MB based on the short side of the detection image MB, as shown in fig. 31. Then, the processor 15 stitches the middle regions MBc corresponding to all the surface blocks 21A-21C into the object image IM, as sketched below. In some embodiments, the middle region MBc may have a width of, for example, one pixel. In some embodiments, the two-dimensional photosensitive element may be implemented by an area image sensor, which has a field of view of about 5 to 30 degrees.
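A minimal NumPy sketch of the middle-region cropping and stitching, assuming the short side of the detection image is its width (axis 1) and that the blocks are stitched left to right; both conventions are assumptions rather than requirements of the disclosure.

```python
import numpy as np

def middle_strip(detection_image: np.ndarray, strip_px: int = 1) -> np.ndarray:
    """Extract the middle region MBc along the short side (assumed to be axis 1)."""
    width = detection_image.shape[1]
    start = (width - strip_px) // 2
    return detection_image[:, start:start + strip_px]

def stitch_object_image(detection_images) -> np.ndarray:
    """Concatenate the middle strips of successive surface blocks into object image IM."""
    return np.concatenate([middle_strip(mb) for mb in detection_images], axis=1)
```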
In one embodiment, referring to fig. 21, the image scanning system may be configured with a single photosensitive element 13, and this photosensitive element 13 captures images of the plurality of surface blocks 21A to 21C to obtain a plurality of detection images respectively corresponding to the surface blocks 21A to 21C. In another embodiment, referring to fig. 14, the image scanning system may be provided with a plurality of photosensitive elements 13 that face the detection position 14 and are arranged along the long axis of the object 2. The photosensitive elements 13 respectively capture detection images of the surface of the object 2 at different positions along the detection position 14.
In one example, it is assumed that the object 2 is cylindrical and the image scanning system is provided with a single photosensitive element 13. The photosensitive element 13 can capture images of a plurality of surface areas 21A-21C of the main body (i.e., the middle section) of the object 2 to obtain a plurality of detected images MB corresponding to the surface areas 21A-21C, and the processor 15 stitches the detected images MB of the surface areas 21A-21C into an object image IM, as shown in fig. 13.
In another example, it is assumed that the object 2 is cylindrical and the image scanning system is provided with a plurality of photosensitive elements 131-133, as shown in fig. 14. The photosensitive elements 131-133 respectively capture detection images MB1-MB3 of the portions of the surface of the object 2 located at different segment positions of the detection position 14, and the processor 15 stitches all the detection images MB1-MB3 into an object image IM, as shown in fig. 32. For example, assuming the number of photosensitive elements 131-133 is three, the processor 15 stitches the object image IM of the object 2 from the detection images MB1-MB3 captured by the three photosensitive elements 131-133, as shown in fig. 27. The object image IM includes the sub-object image 22 (the upper segment of the object image IM in fig. 32) stitched from the detection images MB1 of all the surface blocks 21A-21C captured by the first photosensitive element 131, the sub-object image 23 (the middle segment of the object image IM in fig. 32) stitched from the detection images MB2 of all the surface blocks 21A-21C captured by the second photosensitive element 132, and the sub-object image 24 (the lower segment of the object image IM in fig. 32) stitched from the detection images MB3 of all the surface blocks 21A-21C captured by the third photosensitive element 133. A minimal assembly sketch follows.
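A corresponding sketch for assembling the object image IM from several photosensitive elements, again assuming NumPy arrays of compatible sizes and the top-to-bottom ordering of the sub-object images 22-24; these conventions are illustrative assumptions.

```python
import numpy as np

def stitch_sub_object_image(block_images) -> np.ndarray:
    """Stitch one photosensitive element's detection images of all surface blocks side by side."""
    return np.concatenate(list(block_images), axis=1)

def assemble_object_image(per_sensor_blocks) -> np.ndarray:
    """Stack each sensor's sub-object image (e.g. 22, 23, 24) top to bottom into the object image IM.

    `per_sensor_blocks` is ordered from the first to the last photosensitive element.
    """
    sub_images = [stitch_sub_object_image(blocks) for blocks in per_sensor_blocks]
    return np.concatenate(sub_images, axis=0)
```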
In some embodiments, the processor 15 may host the artificial neural network system 30 to automatically classify surface types from the stitched object image IM, so as to automatically determine the surface type of the surface 21 of the object 2. In other words, in the learning stage, the object image IM generated by the processor 15 can subsequently be used to train the aforementioned sub-neural network systems 33 to establish prediction models for identifying the surface morphology of the object. In the prediction stage, the object image IM generated by the processor 15 can subsequently be classified by the artificial neural network system 30, which performs classification prediction on the object image IM sequentially through the prediction models of the respective sub-neural network systems 33.
In some embodiments, the object image IM generated by the processor 15 can instead be fed to another processor hosting the aforementioned artificial neural network system 30, so that the artificial neural network system 30 automatically classifies surface types from the stitched object image IM and thereby automatically determines the surface type of the surface 21 of the object 2. In other words, in the learning stage, each sub-neural network system 33 is automatically trained on the fed object images IM, and the trained sub-neural network systems 33 are connected in series to form the artificial neural network system 30. In the prediction stage, the artificial neural network system 30 automatically performs classification prediction on the fed object image IM.
For example, in one case, when the object 2 is a defective object, the surface of the object 2 has one or more surface types that the artificial neural network system has learned and attempts to extract, so that at least one sub-neural network system 33 can screen it out; conversely, when the object 2 is a qualified object, the surface of the object 2 has none of the recorded surface types that would trigger the screening action of any sub-neural network system 33. In the learning stage, part of the object images IM received by each sub-neural network system 33 are labeled with one or more surface types, and the remaining part are labeled with no surface type. Furthermore, the output of each sub-neural network system 33 is preset with a plurality of surface type categories according to these surface types. In another case, when the object 2 is a defective object, the surface of the object 2 has one or more surface types of a first kind that the artificial neural networks have learned and attempt to extract; conversely, when the object 2 is a qualified object, the surface of the object 2 has a surface type of another kind that the artificial neural networks have learned and attempt to extract, such as a standard surface type. In the learning stage, part of the object images IM received by the sub-neural network systems 33 carry category labels of one or more first surface types, and the remaining part carry category labels of one or more second surface types. Furthermore, the output of each sub-neural network system 33 is preset with a plurality of surface type categories according to these surface types.
Referring to fig. 33, when the surface of the object 2 has at least one surface type, the corresponding image positions in the object image IM of that object also show local images P01-P09 of the surface type.
In some embodiments, in the learning stage, each object image IM received by each sub-neural network system 33 has a known surface type (i.e., it is labeled with the target surface types present on it), and the surface type categories output by each sub-neural network system 33 are also set accordingly. In other words, each object image IM used for deep learning is labeled with the surface types present on it. In some embodiments, the label may be presented as a mark pattern on the object image IM (as shown in fig. 33), and/or recorded as object information in the image information of the object image IM.
In some embodiments, during the learning stage, each sub-neural network system 33 is trained using object images IM with known surface types to generate the judgment conditions of each neuron in the prediction model and/or to adjust the weights of the connections between neurons, so that the prediction result (i.e., the output surface defect category) for each object image IM matches the known, labeled surface type, thereby establishing a prediction model for identifying the surface morphology of the object. In the prediction stage, each sub-neural network system 33 can then perform classification prediction on object images IM of unknown surface type through the established prediction model. In some embodiments, each sub-neural network system 33 performs percentage prediction on the object images IM over the surface type categories, i.e., it determines the percentage likelihood that each object image IM falls into each surface type category. Then, each sub-neural network system 33 determines, for each surface type in sequence, whether the object 2 corresponding to the object image IM is qualified according to these percentages, and classifies the object image IM into a normal group or an abnormal group accordingly. A minimal sketch of such serial normal/abnormal screening follows.
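A rough sketch of this serial normal/abnormal screening under stated assumptions: `predictors` stands in for the serially connected prediction models, each assumed to return a mapping of defect surface-type category to percentage for one image, and the 0.5 threshold and the max-percentage rule are arbitrary illustrations, not values or logic taken from the disclosure.

```python
def serial_screen(object_images, predictors, threshold=0.5):
    """Route each object image through serially connected prediction models.

    An image whose highest defect percentage reaches `threshold` at stage i is
    placed in that stage's abnormal group; only images judged normal move on to
    stage i + 1. Returns the per-stage abnormal groups and the final normal group.
    """
    remaining = list(object_images)
    abnormal_groups = []
    for predict in predictors:
        normal, abnormal = [], []
        for img in remaining:
            scores = predict(img)                      # {defect category: percentage}
            (abnormal if max(scores.values()) >= threshold else normal).append(img)
        abnormal_groups.append(abnormal)
        remaining = normal                             # only the normal group feeds the next model
    return abnormal_groups, remaining
```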
In some embodiments, the method for screening the surface morphology of an object based on an artificial neural network according to the present invention can be implemented by a computer program product, so that the method according to any embodiment of the present invention is carried out when a computer (i.e., the aforementioned processor) loads and executes the program. In some embodiments, the computer program product may be a non-transitory computer readable recording medium, and the program is stored in the non-transitory computer readable recording medium and loaded by a computer (i.e., a processor). In some embodiments, the program itself may be the computer program product and be transmitted to the computer by wire or wirelessly.
In summary, according to embodiments of the method for screening the surface morphology of an object based on an artificial neural network, the surface morphology of the object image is successively identified by a plurality of serially connected neural networks trained under different training conditions, so as to classify the object image accurately and rapidly, thereby efficiently screening the object corresponding to the object image based on the classification result and achieving a low overdischarge (miss) rate. In some embodiments, the method can achieve an overdischarge rate approaching zero.

Claims (16)

1. A method for screening surface morphology of an object based on an artificial neural network, comprising:
receiving at least one object image;
performing surface type recognition of each object image by using a first prediction model to classify the object image into one of a first normal group and a first abnormal group;
performing surface type recognition of each output of the first normal group by using a second prediction model to classify the output into one of a second normal group and a second abnormal group.
2. The method of claim 1, further comprising: deep learning is performed under different training conditions to establish the first prediction model and the second prediction model, respectively.
3. The method of claim 1, further comprising:
converting the at least one object image into at least one matrix;
wherein the step of performing the surface type recognition of each object image with the first prediction model comprises executing the first prediction model with the at least one matrix.
4. The method of claim 1, further comprising:
normalizing the at least one object image;
converting the normalized at least one object image into at least one matrix;
wherein the step of performing the surface type recognition of each object image with the first prediction model comprises executing the first prediction model with the at least one matrix.
5. The method of claim 1, further comprising:
dividing each object image into a plurality of image areas;
designating at least one region of interest in the plurality of image regions of each object image;
wherein the step of identifying the surface type of each object image by the first prediction model comprises: executing the first prediction model with the at least one region of interest of each of the object images.
6. The method of claim 1, wherein the at least one object image comprises a plurality of object images captured of an object based on light from different illumination directions, and the method further comprises:
overlapping the object images of the object to form an initial image;
wherein the step of identifying the surface type of each object image by the first prediction model comprises: the first prediction model is executed with the initial image.
7. The method of claim 1, wherein the output of the first normal group comprises the at least one object image, and the method further comprises:
converting each object image in the first normal group into a matrix;
wherein the step of performing the surface type recognition of the output of the first normal group with the second predictive model comprises performing the second predictive model with the transformed matrix.
8. The method of claim 1, wherein the output of the first normal group comprises the at least one object image, and the method further comprises:
normalizing each object image in the first normal group;
converting each normalized object image into a matrix;
wherein the step of performing the surface type recognition of the output of the first normal group with the second predictive model comprises performing the second predictive model with the transformed matrix.
9. The method of claim 1, wherein the output of the first normal group comprises the at least one object image, and the method further comprises:
dividing each object image in the first normal group into a plurality of image areas;
designating at least one region of interest in the plurality of image regions of each object image;
wherein the step of performing the surface morphology recognition of the output of the first normal group with the second predictive model comprises: executing the second prediction model with the at least one region of interest of each of the object images.
10. The method of claim 1, wherein the output of the first normal group comprises a plurality of object images of an object captured based on light from different illumination orientations, and the method further comprises:
overlapping the object images of the object to form an initial image;
wherein the step of performing the surface morphology recognition of the output of the first normal group with the second predictive model comprises: executing the second prediction model with the initial image.
11. The method of claim 1, wherein the first prediction model is implemented by a convolutional neural network algorithm.
12. The method of claim 1, wherein the second prediction model is implemented by a convolutional neural network algorithm.
13. The method of claim 1, wherein each object image is formed by stitching a plurality of detection images.
14. The method of claim 1, wherein the first predictive model and the second predictive model have different defect determination criteria.
15. The method of claim 1, wherein the first predictive model and the second predictive model have different numbers of neural network layers.
16. The method of claim 1, wherein the first predictive model and the second predictive model have different neuron configurations.
CN201910987177.8A 2019-10-17 2019-10-17 Method for screening surface form of object based on artificial neural network Pending CN112683924A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910987177.8A CN112683924A (en) 2019-10-17 2019-10-17 Method for screening surface form of object based on artificial neural network


Publications (1)

Publication Number Publication Date
CN112683924A true CN112683924A (en) 2021-04-20

Family

ID=75444421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910987177.8A Pending CN112683924A (en) 2019-10-17 2019-10-17 Method for screening surface form of object based on artificial neural network

Country Status (1)

Country Link
CN (1) CN112683924A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023096908A1 (en) * 2021-11-23 2023-06-01 Trustees Of Tufts College Detection and identification of defects using artificial intelligence analysis of multi-dimensional information data
WO2023195107A1 (en) * 2022-04-06 2023-10-12 日本電気株式会社 Object evaluation device, object evaluation method, and recording medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160139977A1 (en) * 2013-07-01 2016-05-19 Agent Video Intelligence Ltd. System and method for abnormality detection
CN106557778A (en) * 2016-06-17 2017-04-05 北京市商汤科技开发有限公司 Generic object detection method and device, data processing equipment and terminal device
CN108364006A (en) * 2018-01-17 2018-08-03 超凡影像科技股份有限公司 Medical Images Classification device and its construction method based on multi-mode deep learning
CN110163858A (en) * 2019-05-27 2019-08-23 成都数之联科技有限公司 A kind of aluminium shape surface defects detection and classification method and system


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination