CN112683923A - Method for screening surface form of object based on artificial neural network


Info

Publication number
CN112683923A
CN112683923A (publication of application CN201910987145.8A)
Authority
CN
China
Prior art keywords
image, neural network, light, sub-network system
Legal status
Pending
Application number
CN201910987145.8A
Other languages
Chinese (zh)
Inventor
蔡昆佑
Current Assignee
Mitac Computer Kunshan Co Ltd
Getac Technology Corp
Original Assignee
Mitac Computer Kunshan Co Ltd
Getac Technology Corp
Application filed by Mitac Computer Kunshan Co Ltd, Getac Technology Corp filed Critical Mitac Computer Kunshan Co Ltd
Priority to CN201910987145.8A
Publication of CN112683923A

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for screening the surface morphology of objects based on an artificial neural network is suitable for screening a plurality of objects and comprises the following steps: performing surface morphology recognition on a plurality of object images with a plurality of prediction models to obtain a determined defect rate for each prediction model, wherein the object images correspond to the surface morphology of a portion of the objects; and connecting the prediction models in series into an artificial neural network system according to the determined defect rate of each prediction model, so as to screen the remaining objects. By connecting a plurality of neural networks trained under different conditions in series according to the determined defect rate of each neural network, the method provides an artificial neural network system that classifies a large number of objects to be tested accurately and quickly while maintaining a favorable miss (over-pass) rate.

Description

Method for screening surface form of object based on artificial neural network
[ Technical Field ]
The invention relates to an artificial neural network training system, in particular to a method for screening the surface morphology of an object based on an artificial neural network.
[ background of the invention ]
Many safety devices, such as safety belts, are assembled from numerous small structural objects. If these small structural objects are not strong enough, the protective effect of the safety device is called into question.
During manufacturing, these structural objects may develop minute surface defects such as slots, cracks, bumps, and textures for various reasons, including impact, process error, and mold defects. These tiny defects are not easily detected. One existing inspection method is for a person to observe the structural object with the naked eye, or to feel it by hand, to determine whether it has defects such as pits, scratches, color differences, and the like. However, manual inspection is inefficient and prone to erroneous determination, so the yield of the structural objects cannot be controlled.
[ summary of the invention ]
In one embodiment, a method for screening the surface morphology of objects based on an artificial neural network is suitable for screening a plurality of objects and comprises the following steps: performing surface morphology recognition on a plurality of object images with a plurality of prediction models to obtain a determined defect rate for each prediction model, wherein the object images correspond to the surface morphology of a portion of the objects; and connecting the prediction models in series into an artificial neural network system according to the determined defect rate of each prediction model, so as to screen the remaining objects.
In summary, according to the embodiments of the method for screening the surface morphology of an object based on an artificial neural network, a plurality of neural networks trained under different conditions are connected in series according to the determined defect rate of each neural network, thereby providing an artificial neural network system that classifies a large number of objects to be tested accurately and rapidly while maintaining a favorable miss (over-pass) rate.
[ description of the drawings ]
Fig. 1 is a flowchart illustrating a method for screening surface morphology of an object based on an artificial neural network according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an embodiment of step S02 in fig. 1.
FIG. 3 is a diagram of an artificial neural network system, according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of an artificial neural network system according to another embodiment of the present invention.
Fig. 5 is a schematic diagram of an embodiment of step S03 in fig. 1.
Fig. 6 is a flowchart of a training method of a sub neural network system according to a first embodiment of the present invention.
Fig. 7 is a flowchart of a prediction method of a sub neural network system according to a first embodiment of the present invention.
FIG. 8 is a schematic diagram of an exemplary image area.
Fig. 9 is a flowchart of a training method of a sub neural network system according to a second embodiment of the present invention.
Fig. 10 is a flowchart of a prediction method of a sub neural network system according to a second embodiment of the present invention.
Fig. 11 is a flowchart of a training method of a sub neural network system according to a third embodiment of the present invention.
Fig. 12 is a flowchart of a prediction method of a sub neural network system according to a third embodiment of the present invention.
Fig. 13 is a flowchart of a training method of a sub neural network system according to a fourth embodiment of the present invention.
Fig. 14 is a flowchart of a prediction method of a sub neural network system according to a fourth embodiment of the present invention.
FIG. 15 is a diagram illustrating an example of an object image.
FIG. 16 is a diagram of an image scanning system for an object surface type according to a first embodiment of the present invention.
FIG. 17 is a functional diagram of an image scanning system for an object surface type according to an embodiment of the present invention.
FIG. 18 is a diagram illustrating a first embodiment of the relative optical positions of the object, the light source module, and the photosensitive elements in FIG. 16.
FIG. 19 is a diagram illustrating a second embodiment of the relative optical positions of the object, the light source module and the photosensitive elements in FIG. 16.
Fig. 20 is a flowchart of an image scanning method for an object surface type according to a first embodiment of the invention.
FIG. 21 is a schematic view of an exemplary object.
FIG. 22 is a top view of the article of FIG. 21.
FIG. 23 is a flowchart of an image scanning method for an object surface type according to a second embodiment of the present invention.
Fig. 24 is a flowchart of an image scanning method for an object surface type according to a third embodiment of the invention.
FIG. 25 is a diagram illustrating an exemplary inspection image.
Fig. 26 is a partial flow diagram of a method for image scanning of a surface profile of an object according to some embodiments of the present invention.
FIG. 27 is a partial flow chart of a method for image scanning of a surface profile of an object according to other embodiments of the present invention.
FIG. 28 is a diagram illustrating a third embodiment of the relative optical positions of the object, the light source module and the photosensitive elements in FIG. 16.
FIG. 29 is a schematic view of an embodiment of a surface profile.
FIG. 30 is a diagram illustrating a fourth embodiment of the relative optical positions of the object, the light source module and the photosensitive elements in FIG. 16.
FIG. 31 is a diagram illustrating a fifth embodiment of the relative optical positions of the object, the light source module and the photosensitive elements in FIG. 16.
FIG. 32 is a functional diagram of an image scanning system for an object surface type according to another embodiment of the present invention.
FIG. 33 is a diagram illustrating a sixth embodiment of the relative optical positions of the object, the light source module, and the photosensitive elements in FIG. 16.
FIG. 34 is a functional diagram of an image scanning system for an object surface type according to another embodiment of the present invention.
FIG. 35 is a diagram illustrating a seventh embodiment of the relative optical positions of the object, the light source module and the photosensitive elements in FIG. 16.
FIG. 36 is a diagram illustrating another exemplary detection image.
FIG. 37 is a diagram illustrating another exemplary detection image.
FIG. 38 is a diagram of an image scanning system for a surface type of an object according to a second embodiment of the present invention.
FIG. 39 is a diagram of an image scanning system for an object surface type according to a third embodiment of the present invention.
FIG. 40 is a functional diagram of an image scanning system for an object surface type according to yet another embodiment of the present invention.
FIG. 41 is a partial schematic view of an embodiment of an image scanning system for an object surface type.
Fig. 42 to 45 are schematic views of images obtained by the four light source modules shown in fig. 41 after being illuminated.
FIG. 46 is a diagram illustrating an example of an initial image.
FIG. 47 is a diagram of an image scanning system for an object surface type according to a fourth embodiment of the present invention.
FIG. 48 is a diagram illustrating the first embodiment of the relative optical positions of the object, the light source module and the photosensitive elements in FIG. 47.
FIG. 49 is a diagram illustrating a second embodiment of the relative optical positions of the object, the light source module and the photosensitive elements in FIG. 47.
FIG. 50 is a diagram illustrating a third embodiment of the relative optical positions of the object, the light source module and the photosensitive elements in FIG. 47.
FIG. 51 is a diagram illustrating another example of an object image.
FIG. 52 is a diagram illustrating another example of an object image.
[ Detailed Description ]
The method for screening the surface form of the object based on the artificial neural network is suitable for an artificial neural network system. In this regard, the artificial neural network system may be implemented on a processor.
In some embodiments, referring to fig. 1, in the learning stage, the processor may perform deep learning on a plurality of sub-neural network systems (i.e., artificial neural networks that have not yet been trained) under different training conditions, so as to build for each a prediction model (i.e., a trained artificial neural network) for identifying the surface morphology of an object, thereby obtaining a plurality of trained sub-neural network systems (step S01). Here, the object images may be images of the surfaces of objects of the same type captured at the same relative position; that is, the artificial neural network system receives a plurality of object images captured with fixed image-capturing coordinate parameters. Moreover, the batch of object images may be obtained by capturing images of the surfaces of a plurality of objects.
In some embodiments of step S01, at the beginning of the learning phase, the sub-neural network systems may use the same neural network algorithm with different setting parameters (e.g., different preprocessing, different numbers of layers, different numbers of neurons, or a combination thereof), different neural network algorithms with the same setting parameters, or different neural network algorithms with different setting parameters. For example, the processor may perform deep learning with different setting parameters on the same or different object images to build the prediction models of the sub-neural network systems. In another example, the processor may perform multiple rounds of deep learning on the same or different batches of object images to build the prediction models of the sub-neural network systems.
In some embodiments, each sub-neural network system performs deep learning under different training conditions to build its own prediction model. The training conditions may be, for example, different numbers of neural network layers, different neuron configurations, different preprocessing of the input images, different neural network algorithms, or any combination thereof. The image preprocessing may be feature enhancement, image cropping, data format conversion, image overlay, or any combination thereof. In some embodiments, the prediction models of the respective sub-neural network systems may be implemented with the same or different neural network algorithms, such as a Convolutional Neural Network (CNN) algorithm, but the invention is not limited thereto.
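For illustration only (not part of the patent disclosure), the following sketch shows one way such differently configured prediction models could be built with a Keras-style CNN; the function build_sub_model, the 200 x 200 single-channel input size, and the layer and filter counts are assumptions rather than details taken from the patent.
```python
# Illustrative sketch only: build several sub-network prediction models under
# different training conditions (number of convolution blocks, filter counts).
import tensorflow as tf

def build_sub_model(num_conv_blocks: int, base_filters: int) -> tf.keras.Model:
    """One sub-neural-network prediction model; binary output = P(abnormal)."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(200, 200, 1)))    # assumed crop size, grayscale
    for i in range(num_conv_blocks):
        model.add(tf.keras.layers.Conv2D(base_filters * (2 ** i), 3,
                                         activation="relu", padding="same"))
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(64, activation="relu"))
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Three sub-neural network systems trained under different conditions.
sub_models = [build_sub_model(2, 16), build_sub_model(3, 16), build_sub_model(4, 32)]
```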
Then, in the system creation phase, the processor concatenates the trained sub-neural network systems into an artificial neural network system (step S02).
In an example of step S02, referring to fig. 2, after the prediction models of the sub-neural network systems are built, the processor feeds the same batch of object images into the sub-neural network systems, so that the prediction models individually classify the batch of object images to obtain the determined defect rate of each prediction model (step S02a). Then, the processor connects the prediction models in series into an artificial neural network system according to the determined defect rates of the prediction models (step S02b). In other words, the trained sub-neural network systems each perform surface morphology recognition on the same batch of object images to obtain the determined defect rate of each sub-neural network system. The processor then connects the sub-neural network systems in series according to their determined defect rates to obtain an artificial neural network system in which a plurality of sub-neural network systems are connected in series in sequence.
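A minimal sketch of step S02a follows, interpreting the determined defect rate as the fraction of the shared batch that a prediction model classifies as abnormal; this interpretation, the names sub_models and batch_images (from the earlier sketch), and the 0.5 threshold are assumptions.
```python
# Illustrative sketch of step S02a: run every trained prediction model on the same
# batch of object images; the fraction classified as abnormal is taken here as
# that model's determined defect rate.
import numpy as np

def determined_defect_rate(model, batch_images, threshold: float = 0.5) -> float:
    p_abnormal = model.predict(batch_images).reshape(-1)  # P(abnormal) per image
    return float(np.mean(p_abnormal >= threshold))

defect_rates = [determined_defect_rate(m, batch_images) for m in sub_models]
```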
After the artificial neural network system is formed (step S02 or S02b), in the application stage, the processor may use the formed artificial neural network system to screen another batch of objects (step S03). In other words, the processor feeds the object images of another batch of objects into the artificial neural network system, so that the artificial neural network system performs classification prediction on these object images. For example, the processor individually trains a plurality of sub-neural network systems in advance and receives object images of a plurality of objects. When the artificial neural network system is created (i.e., in the system creation phase), the processor feeds the object images of a portion of the objects into the trained sub-neural network systems, so that each sub-neural network system screens that portion of the objects: each sub-neural network system performs classification prediction on those object images and obtains its determined defect rate from the classification result. The processor then connects the sub-neural network systems in series into an artificial neural network system according to the determined defect rate of each sub-neural network system. Later, when the artificial neural network system is applied (i.e., in the application stage), the processor feeds the object images of the remaining objects into the artificial neural network system to screen them.
In this way, the artificial neural network system connects a plurality of neural networks trained under different conditions in series according to the determined defect rate of each neural network, thereby classifying a large number of objects to be detected accurately and quickly while maintaining a favorable miss (over-pass) rate.
Referring to fig. 3, the artificial neural network system 30 may include an input unit 31, a plurality of sub-neural network systems 33, and an output unit 35. The sub-neural network systems 33 are connected in series between the input unit 31 and the output unit 35, and each sub-neural network system 33 is connected to the next sub-neural network system 33 through part of its output. Each sub-neural network system 33 has a prediction model.
In some embodiments, the output of each sub-neural network system 33 can be divided into a normal group and an abnormal group, and the normal-group output of each sub-neural network system 33 is coupled to the input of the next sub-neural network system 33. Referring to fig. 1 and 3, in the application stage, the object images IM fed into the artificial neural network system 30 are sequentially filtered by the sub-neural network systems 33 (step S03). For example, when one or more object images IM are fed into the artificial neural network system 30, the first-level sub-neural network system 33 executes its prediction model on each object image IM to classify it into a first-level normal group or a first-level abnormal group. An object image IM classified into the first-level normal group is output by the first-level sub-neural network system 33 and fed into the second-level sub-neural network system 33, which then executes its prediction model on the object image IM to classify it into a second-level normal group or a second-level abnormal group. Conversely, an object image IM classified into the first-level abnormal group is not fed into the second-level sub-neural network system 33. This continues until the last-level sub-neural network system 33 executes its prediction model on the object images IM fed from the previous level (i.e., the object images IM classified into the normal group of the previous-level sub-neural network system 33).
In some embodiments, the output unit 35 receives all the abnormal groups output by the sub-neural network systems 33 and outputs an abnormal result accordingly, and the output unit 35 also receives the normal group output by the last sub-neural network system 33 and outputs a normal result accordingly.
For convenience of illustration, two sub-neural network systems 33 are taken as an example, but the number is not a limitation of the present invention. Referring to fig. 4, the two sub-neural network systems 33 are respectively referred to as a first sub-neural network system 33a and a second sub-neural network system 33b.
The input of the first sub-neural network system 33a is coupled to the input unit 31. One output of the first sub-neural network system 33a is coupled to the input of the second sub-neural network system 33b, and the other output of the first sub-neural network system 33a is coupled to the output unit 35.
Here, the first sub-neural network system 33a has a first prediction model. The second sub neural network system 33b has a second prediction model. In some embodiments, the first predictive model may be implemented with a CNN algorithm. The second predictive model may also be implemented with the CNN algorithm. However, the present invention is not limited thereto.
Here, referring to fig. 4 and 5, the input unit 31 receives one or more object images IM (step S03a) and feeds the received object images IM to the first sub-neural network system 33a. Next, the first prediction model of the first sub-neural network system 33a performs surface morphology recognition on each object image IM to classify the object image IM into one of a first normal group G12 and a first abnormal group G11 (step S03b). In other words, the first prediction model identifies the surface type imaged by the object image IM and classifies the object image IM into the first normal group G12 or the first abnormal group G11 according to the identification result.
Then, the object images IM classified into the first normal group G12 are fed into the second sub-neural network system 33b, and surface morphology recognition is performed by the second prediction model of the second sub-neural network system 33b to classify the object images IM into one of a second normal group G22 and a second abnormal group G21 (step S03c). In other words, the second prediction model identifies the surface type imaged by the object images IM belonging to the first normal group G12 and then classifies these object images IM into the second normal group G22 or the second abnormal group G21 according to the identification result.
Finally, the output unit 35 receives the first abnormal group G11 output by the first prediction model, the second abnormal group G21 output by the second prediction model, and the second normal group G22 output by the second prediction model, and outputs an abnormal result and a normal result. The abnormal result includes the object image IM classified into the first abnormal group G11 and the object image IM classified into the second abnormal group G21. The normal result includes the object image IM classified as the second normal group G22.
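The cascade of figs. 3 to 5 can be sketched as follows, purely as an illustration: each stage's abnormal group is routed to the output, while its normal group is handed to the next stage. The function name cascade_screen and the 0.5 decision threshold are assumptions.
```python
# Illustrative sketch of the cascade: each stage classifies the images it receives;
# its abnormal group goes straight to the output, its normal group feeds the next stage.
import numpy as np

def cascade_screen(ordered_models, images, threshold: float = 0.5):
    abnormal_groups, remaining = [], images
    for model in ordered_models:
        if len(remaining) == 0:
            break
        p_abnormal = model.predict(remaining).reshape(-1)
        is_abnormal = p_abnormal >= threshold
        abnormal_groups.append(remaining[is_abnormal])  # this stage's abnormal group
        remaining = remaining[~is_abnormal]             # normal group feeds the next stage
    abnormal_result = (np.concatenate(abnormal_groups)
                       if abnormal_groups else remaining[:0])
    return abnormal_result, remaining                   # (abnormal result, normal result)
```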
In some embodiments, the number of neural networks connected in series in the artificial neural network system 30 may be designed to be 2 neural networks, 3 neural networks, 4 neural networks or more in series according to actual requirements.
In some embodiments, the processor can connect the plurality of sub-neural network systems 33 in series into the artificial neural network system 30 in order of the determined defect rates of their prediction models, from high to low. For example, a sub-neural network system 33 with a higher determined defect rate is arranged toward the front, and one with a lower determined defect rate is arranged toward the rear; in other words, the determined defect rates of the cascaded sub-neural network systems 33 decrease stage by stage. Thus, in the application stage, the artificial neural network system 30 preferentially screens objects with the prediction model having the higher determined defect rate, so that it can quickly classify and predict a large number of objects to be detected while maintaining a favorable miss (over-pass) rate.
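A minimal sketch of step S02b under the interpretation above: the sub-neural network systems are ordered from the highest to the lowest determined defect rate before the cascade of the earlier sketch is run; defect_rates, sub_models, and new_object_images are assumed names carried over from the previous sketches.
```python
# Illustrative sketch of step S02b: order the sub-neural network systems from the
# highest to the lowest determined defect rate, then run the cascade sketched above.
ordered = sorted(zip(defect_rates, sub_models), key=lambda pair: pair[0], reverse=True)
ordered_models = [model for _, model in ordered]
abnormal_result, normal_result = cascade_screen(ordered_models, new_object_images)
```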
When the surface of the object has any surface feature, that feature is also imaged at the corresponding position in the object image of the object. For example, when the surface of the object has a sand hole, the sand hole is imaged at the corresponding position in the object image; when the surface of the object has a bump, the bump is likewise imaged at the corresponding position. In some embodiments, the surface morphology may be a surface structure such as a slot, crack, bump, sand hole, air hole, scratch, edge, or texture, each of which is a three-dimensional fine structure. Here, a three-dimensional fine structure is of sub-micron to micron (μm) size, i.e., the longest side or diameter of the structure is between sub-micrometers and micrometers. Sub-micron herein means less than 1 μm, for example 0.1 μm to 1 μm. The three-dimensional structure may be, for example, a microstructure of 300 nm to 6 μm.
In some embodiments, at least one of the sub-neural network systems 33 may perform image preprocessing for image cropping.
Referring to fig. 6, in the learning phase, the sub-neural network system 33 receives a plurality of object images IM (step S11). Here, the object images are images of the surfaces of objects of the same type at the same relative position. Next, the sub-neural network system 33 divides each object image IM into a plurality of image areas (step S12) and designates at least one region of interest among the image areas of each object image IM (step S13). In other words, after an object image IM is cut into a plurality of image areas, the sub-neural network system 33 designates the image areas at the specified positions in the sequence as regions of interest according to a designation setting. Then, the sub-neural network system 33 performs deep learning (training) with the designated regions of interest to build a prediction model for identifying the surface morphology of the object (step S14). In some embodiments, the sub-neural network system 33 can divide, designate, and train image by image; in other embodiments, it can divide and designate every object image first and then train on all the designated regions of interest together.
In the prediction phase (i.e., the system creation phase or the application phase), the sub-neural network system 33 performs classification prediction in substantially the same step as the learning phase. Referring to fig. 7, the sub-neural network system 33 receives one or more object images IM (step S21). Herein, the image capturing target and the image capturing position of each object image IM are the same as the image capturing target and the image capturing position of the object image IM used in the learning stage (the same relative position as an object). Next, the sub-neural network system 33 divides each object image IM into a plurality of image areas (step S22), and designates at least one region of interest among the plurality of image areas of each object image IM (step S23). In other words, after the object image IM is cut into a plurality of image areas, the sub-neural network system 33 can designate the image areas in the corresponding sequence of the plurality of image areas as the regions of interest according to the designated setting. Then, the sub-neural network system 33 performs a prediction model with the designated region of interest to identify the surface type of the object (step S24).
Based on this, the sub-neural network system 33 can flexibly incorporate the detection result of a specific region (the designated region of interest). In some embodiments, the sub-neural network system 33 may also achieve a lower miss (over-pass) rate, for example a miss rate approaching zero.
In some embodiments, the number of image areas into which each object image IM is divided is any integer greater than 2. Preferably, the image size of each image region can be less than or equal to 768 × 768 pixels, such as 400 × 400 pixels, 416 × 416 pixels, 608 × 608 pixels, and the like. Moreover, the image sizes of the image areas are all the same. In some embodiments, each image area is preferably square. For example, when the image size of the object image IM is 3000 × 4000 pixels, the image size of the cropped image area may be 200 × 200 pixels.
In some embodiments of step S12 (or step S22), the sub-neural network system 33 may first enlarge the object image IM according to a predetermined cropping size, so that the size of the object image IM is an integer multiple of the size of the image area. Then, the sub-neural network system 33 cuts the enlarged object image IM into a plurality of image areas according to a predetermined cutting size. Herein, the image sizes of the image areas are the same, i.e. the image sizes are the same as the preset cropping sizes.
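A minimal sketch of steps S12/S22 follows, assuming the "enlargement" is done by zero-padding the object image up to the next integer multiple of the preset crop size (the patent does not specify the enlargement method); the function name crop_into_regions and the 200-pixel crop size are illustrative.
```python
# Illustrative sketch of steps S12/S22: pad the object image so its sides are integer
# multiples of the preset crop size, then cut it into equal square image areas.
import numpy as np

def crop_into_regions(object_image: np.ndarray, crop: int = 200):
    h, w = object_image.shape[:2]
    new_h = -(-h // crop) * crop                      # round up to a multiple of crop
    new_w = -(-w // crop) * crop
    padded = np.zeros((new_h, new_w) + object_image.shape[2:], dtype=object_image.dtype)
    padded[:h, :w] = object_image                     # original pixels, zero elsewhere
    return [padded[r:r + crop, c:c + crop]
            for r in range(0, new_h, crop)
            for c in range(0, new_w, crop)]
```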
For example, referring to fig. 8, the sub-neural network system 33 divides each received object image IM into 70 image areas A01-A70 of the same cropping size. Then, according to a preset designation setting (assumed here to be 1-10), the sub-neural network system 33 designates the image areas A01-A10 as regions of interest and performs deep learning, or executes the prediction model, with the image areas A01-A10 (i.e., the regions of interest).
In some embodiments, the region of interest may be an imaged image region with sand holes of different depths, an imaged image region without sand holes and with bumps or scratches, an imaged image region with different surface roughness, an imaged image region without surface defects, or an imaged image region with defects of different depth ratios. In this regard, the sub-neural network system 33 performs deep learning or performs predictive modeling based on the regions of interest of the various surface types as described above. During the learning phase, the sub-neural network system 33 can classify the regions of interest with different surface morphologies to generate different predetermined surface morphology classes in advance.
For example, using the regions of interest, the sub-neural network system 33 may recognize that the region of interest A01 is imaged with sand holes and impacts, the region of interest A02 is imaged with no defect, and the region of interest A33 is imaged with sand holes only and with an imaged surface roughness lower than that of the region of interest A35. In the prediction stage, taking as an example five preset surface morphology categories (sand holes or air holes, scratches or bumps, high roughness, low roughness, and no surface defect), the sub-neural network system 33 can classify the region of interest A01 into the sand-hole-or-air-hole category and the scratch-or-bump category, classify the region of interest A02 into the no-surface-defect category, classify the region of interest A33 into the sand-hole-or-air-hole category and the low-roughness category, and classify the region of interest A35 into the high-roughness category.
In an embodiment of step S13 (or step S23), for each object image IM, the sub-neural network system 33 designates the regions of interest by changing the weight of each image area. For example, referring to fig. 8, after the object image IM is cut into the image areas A01-A70, the weight of every image area is initially preset to 1. In one embodiment, assuming the designation setting is 1-5, 33-38, and 66-70, the sub-neural network system 33 increases the weights of the image areas A01-A05, A33-A38, and A66-A70 to 2 according to the preset designation setting, thereby designating them as regions of interest. In one example, when the weights of the regions of interest are increased, the weights of the other image areas A06-A32 and A39-A65 may be kept at 1. In another example, the sub-neural network system 33 may simultaneously decrease the weights of the other image areas A06-A32 and A39-A65 to 0.
In another embodiment, assuming the designation setting is 1-5, 33-38, and 66-70, the artificial neural network system 30 reduces the weights of the image areas A06-A32 and A39-A65 (i.e., those other than A01-A05, A33-A38, and A66-A70) to 0.5 and keeps the weights of the image areas A01-A05, A33-A38, and A66-A70 at 1 according to the preset designation setting, thereby designating the image areas A01-A05, A33-A38, and A66-A70 as the regions of interest.
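A minimal sketch of this weight-based designation follows, using the 1-based region numbering (A01-A70) and the example designation 1-5, 33-38, 66-70 from the text; the function name and the default weight values are illustrative.
```python
# Illustrative sketch of steps S13/S23: designate regions of interest by adjusting
# per-region weights according to a preset designation setting.
def assign_region_weights(num_regions, designated, roi_weight=2.0, other_weight=1.0):
    return [roi_weight if (index + 1) in designated else other_weight
            for index in range(num_regions)]

designated = set(range(1, 6)) | set(range(33, 39)) | set(range(66, 71))
weights = assign_region_weights(70, designated)       # regions of interest get weight 2
```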
In one embodiment, the sub-neural network system 33 may include a preprocessing unit and a deep learning unit. The input of the preprocessing unit is coupled to the previous stage of this sub-neural network system 33 (the previous sub-neural network system 33 or the input unit 31), and the output of the preprocessing unit is coupled to the input of the deep learning unit. The output of the deep learning unit is coupled to the next stage of the sub-neural network system 33 (the next sub-neural network system 33 or the output unit 35). Herein, the preprocessing unit is used for executing the steps S11 to S13 or S21 to S23, and the deep learning unit is used for executing the step S14 or S24. In other words, the architecture of the deep learning unit after performing deep learning is the prediction model. In another embodiment, the deep learning unit may include an input layer and a plurality of hidden layers. The input layer is coupled between the previous stage (previous sub-neural network system 33 or input unit 31) and the hidden layers. Each hidden layer is coupled between the input layer and the next stage (next sub-neural network system 33 or output unit 35). In this case, the steps S11 to S13 or the steps S21 to S23 can be executed by the input layer instead.
In some embodiments, at least one of the sub-neural network systems 33 may perform image preprocessing for converting data formats.
Referring to fig. 9, in the learning phase, the sub-neural network system 33 receives a plurality of object images IM (step S31). Next, the sub-neural network system 33 converts the object image IM into a Matrix (Matrix) according to the color mode of the object image IM (step S32), i.e., converts the data format of the object image into a format (e.g., an image Matrix) supported by the input channels of the artificial neural network. Then, the sub-neural network system 33 performs a deep learning with the matrix to build a prediction model for identifying the surface morphology of the object (step S33).
Here, the received object images IM are all images of the surfaces of objects of the same type at the same relative position. The received object images IM cover a plurality of color modes, and each object image IM has one of these color modes. In some embodiments, the color modes may correspond to a plurality of mutually different spectra. For example, during the learning phase, the processor can feed a large number of object images IM to the sub-neural network system 33, including, for each of a plurality of objects 2 of the same type, surface images of the same relative position captured under different spectra.
Herein, the artificial neural network in the sub-neural network system 33 has a plurality of image matrix input channels for inputting corresponding matrices, and the image matrix input channels respectively represent a plurality of image capturing conditions (e.g. respectively represent a plurality of color modes). That is, the sub-neural network system 33 converts the object images IM of different color modes into information of length, width, pixel type, pixel depth, channel number, etc. in the matrix, wherein the channel number represents the image capturing condition of the corresponding object image. And the converted matrix is imported into a corresponding image matrix input channel according to the color mode of the object image, so that deep learning is facilitated. In some embodiments, the image matrix input channels respectively represent a plurality of different spectra.
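A minimal sketch of steps S32/S42 follows, under the assumption that each color mode (spectrum or grayscale) maps to one image-matrix input channel; the channel table, function name, and 8-bit normalization are illustrative choices, not details taken from the patent.
```python
# Illustrative sketch of steps S32/S42: convert an object image into a matrix and
# route it to the input channel that represents its color mode (spectrum).
import numpy as np

CHANNELS = {"white": 0, "blue": 1, "green": 2, "red": 3, "infrared": 4, "gray": 5}

def to_input_matrix(object_image: np.ndarray, color_mode: str) -> np.ndarray:
    mono = object_image.mean(axis=-1) if object_image.ndim == 3 else object_image
    mono = mono.astype(np.float32) / 255.0            # normalization of 8-bit pixels
    matrix = np.zeros(mono.shape + (len(CHANNELS),), dtype=np.float32)
    matrix[..., CHANNELS[color_mode]] = mono          # channel number encodes the color mode
    return matrix
```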
In some embodiments, the spectra may range between 380 nm and 3000 nm. For example, the different spectra may be any of visible light such as white light, violet light, blue light, green light, yellow light, orange light, and red light. In one embodiment, the wavelength of white light may be 380 nm to 780 nm, violet light 380 nm to 450 nm, blue light 450 nm to 495 nm, green light 495 nm to 570 nm, yellow light 570 nm to 590 nm, orange light 590 nm to 620 nm, and red light 620 nm to 780 nm. In another embodiment, the spectrum may be far infrared light with a wavelength of 800 nm to 3000 nm.
In some embodiments, such color modes may also include grayscale modes. At this time, the object image IM is converted into a gray-scale image, and then converted into a matrix having a channel number representing the gray scale.
In the prediction stage, the sub-neural network system 33 performs classification prediction in substantially the same manner as in the learning stage. Referring to fig. 10, the sub-neural network system 33 receives one or more object images IM (step S41). Here, each object image IM is an image of the surface of an object of the same type at the same relative position and has one of the color modes. Next, the sub-neural network system 33 converts the object image IM into a matrix according to its color mode (step S42). Then, the sub-neural network system 33 executes the prediction model with the matrix to identify the surface morphology of the object (step S43).
In some embodiments, the sub-neural network system 33 can normalize the object images IM to reduce the asymmetry among the learning data and improve learning efficiency. The normalized object image IM is then converted into the matrix.
Accordingly, the sub-neural network system 33 performs deep learning with matrices whose channel numbers represent different color modes, so that the resulting prediction model can identify information such as the structural type and surface texture (i.e., the surface morphology) of the surface 21 of the object 2. In other words, by controlling the emission spectrum or the received spectrum to provide object images of the same object with different imaging effects, the ability of the sub-neural network system 33 to distinguish the various target surface features of the object is improved. In some embodiments, the sub-neural network system 33 may integrate multi-spectral surface texture images to enhance identification of the target surface features, thereby obtaining the surface roughness and fine texture type of the object.
In one embodiment, the sub-neural network system 33 may include a preprocessing unit and a deep learning unit. The input of the preprocessing unit is coupled to the previous stage of this sub-neural network system 33 (the previous sub-neural network system 33 or the input unit 31), and the output of the preprocessing unit is coupled to the input of the deep learning unit. The output of the deep learning unit is coupled to the next stage of the sub-neural network system 33 (the next sub-neural network system 33 or the output unit 35). Herein, the preprocessing unit is used for executing the steps S31 to S32 or S41 to S42, and the deep learning unit is used for executing the step S33 or S43. In other words, the architecture of the deep learning unit after performing deep learning is the prediction model. In another embodiment, the deep learning unit may include an input layer and a plurality of hidden layers. The input layer is coupled between the previous stage (previous sub-neural network system 33 or input unit 31) and the hidden layers. Each hidden layer is coupled between the input layer and the next stage (next sub-neural network system 33 or output unit 35). In this case, the steps S31 to S32 or the steps S41 to S42 can be executed by the input layer instead.
In some embodiments, at least one of the sub-neural network systems 33 may perform image preprocessing for image overlay.
In one embodiment, referring to fig. 11, in the learning phase, the sub-neural network system 33 receives object images IM of a plurality of objects (step S51). The object images IM of each object are images of the surface of that object at the same relative position, captured under light from different lighting directions. In one example, the images captured of the same object may have the same spectrum or different spectra. Next, the sub-neural network system 33 superimposes the object images IM of each object into a superimposed object image (hereinafter referred to as an initial image) (step S52). Then, the sub-neural network system 33 performs deep learning with the initial image of each object to build a prediction model for identifying the surface morphology of the object (step S54). For example, the received object images IM include a plurality of object images IM of a first object and a plurality of object images IM of a second object; the sub-neural network system 33 superimposes the object images IM of the first object into an initial image of the first object and the object images IM of the second object into an initial image of the second object, and then performs deep learning with these two initial images.
In the prediction stage, the sub-neural network system 33 performs classification prediction in substantially the same step as the learning stage. Referring to fig. 12, the sub-neural network system 33 receives a plurality of object images IM of an object (step S61). Herein, the plurality of object images IM of the object are all images of the surface of the same position of the object. And the plurality of object images IM of the object are images of the object captured based on the light rays with different lighting directions. Then, the sub-neural network system 33 superimposes the object images IM of the object into the initial image (step S62). Then, the sub-neural network system 33 performs a prediction model with the initial image to identify the surface type of the object (step S64).
Accordingly, the sub-neural network system 33 can perform training by combining multi-angle image capture (i.e. different lighting directions) with multi-dimensional superposition preprocessing, so as to improve the identification degree of the three-dimensional structure features of the object without increasing calculation time. In other words, by controlling various incident angles of the image capturing light source to provide object images of the same object with different imaging effects, the spatial stereo distinction of the sub-neural network system 33 for various surface types of the object can be improved. Moreover, by integrating the object images in different lighting directions, the object images are subjected to multi-dimensional superposition, so that the recognition of the sub-neural network system 33 on the surface form of the object is improved, and the optimal analysis of the surface form of the object is obtained.
In an exemplary implementation of step S52 (or step S62), superimposing refers to superimposing the brightness values of corresponding pixels of the object images IM.
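A minimal sketch of this superposition follows, assuming it is a pixel-wise sum of brightness values over the object images of one object captured under different lighting directions; the function name is illustrative.
```python
# Illustrative sketch of steps S52/S62: superimpose the object images of one object
# into a single initial image by summing pixel brightness values.
import numpy as np

def superimpose(object_images) -> np.ndarray:
    stack = np.stack([img.astype(np.float32) for img in object_images], axis=0)
    return stack.sum(axis=0)      # the summed brightness values form the initial image
```
Averaging instead of summing would keep the result in the original brightness range; the patent does not state which variant is used.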
In another embodiment, referring to fig. 13 and 14, after steps S52 or S62, the sub-neural network system 33 may convert the initial image of each object into a matrix (steps S53 or S63), i.e., convert the data format of the initial image of each object into a format (e.g., an image matrix) supported by the input channels of the artificial neural network. Then, the sub-neural network system 33 further performs a deep learning or prediction model with the matrix of each object (step S54 'or S64'). That is, the sub-neural network system 33 converts the initial image of each object into the length, width, pixel type, pixel depth, channel number, etc. information in the matrix, wherein the channel number represents the color mode corresponding to the initial image. And the converted matrix is imported into a corresponding image matrix input channel according to the color mode of the initial image, so that the next processing is facilitated.
In an example of the step S52 (or the step S62), the sub-neural network system 33 normalizes (normalizes) the received object image IM and then superimposes the normalized object images IM of the same object into the original image. Therefore, the asymmetry between the learning data can be reduced, and the learning efficiency is improved.
In an example of the step S51 (or the step S61), the object images IM of the same object may have the same spectrum. In another example of the step S51 (or the step S61), the object image IM of the same object may have different multiple spectrums. That is, the object images IM of the same object include an image of the object captured based on light of one spectrum in different lighting orientations and an image of the object captured based on light of another spectrum in different lighting orientations. And, the two spectra are different from each other.
In one embodiment, the sub-neural network system 33 may include a preprocessing unit and a deep learning unit. The input of the preprocessing unit is coupled to the previous stage of this sub-neural network system 33 (the previous sub-neural network system 33 or the input unit 31), and the output of the preprocessing unit is coupled to the input of the deep learning unit. The output of the deep learning unit is coupled to the next stage of the sub-neural network system 33 (the next sub-neural network system 33 or the output unit 35). Herein, the preprocessing unit is used for executing the steps S51 to S53 or S61 to S63, and the deep learning unit is used for executing the steps S54, S54 ', S64 or S64'. In other words, the architecture of the deep learning unit after performing deep learning is the prediction model. In another embodiment, the deep learning unit may include an input layer and a plurality of hidden layers. The input layer is coupled between the previous stage (previous sub-neural network system 33 or input unit 31) and the hidden layers. Each hidden layer is coupled between the input layer and the next stage (next sub-neural network system 33 or output unit 35). In this case, the steps S51 to S53 or the steps S61 to S63 can be executed by the input layer instead.
In some embodiments, each object image IM is formed by stitching a plurality of detection images MB (as shown in fig. 15). In an exemplary embodiment, the image size of the region of interest is smaller than the image size of the inspection image (original image size).
In some embodiments, each detection image MB may be generated by an image scanning system for the surface type of the object to image-scan the object 2.
Referring to fig. 16, the image scanning system for the object surface type is adapted to scan the object 2 to obtain at least one detected image MB of the object 2. Herein, the object 2 has a surface 21, and along an extending direction D1 of the surface 21 of the object 2, the surface 21 of the object 2 is divided into a plurality of surface sections 21A-21C. In some embodiments, the surface 21 of the object 2 is divided into nine surface areas, three of which 21A-21C are exemplarily shown in the figure. However, the present application is not limited thereto, and the surface 21 of the object 2 can be divided into other number of surface blocks according to actual requirements, such as 3 blocks, 5 blocks, 11 blocks, 15 blocks, 20 blocks, and any number thereof.
Referring to fig. 16 to 19, fig. 18 and 19 are schematic diagrams illustrating two embodiments of the relative optical positions between the object 2, the light source assembly 12 and the photosensitive element 13 in fig. 16, respectively. The image scanning system for the surface type of the object includes a driving assembly 11, a light source assembly 12 and a photosensitive element 13. The light source assembly 12 and the photosensitive elements 13 face a detection position 14 on the driving assembly 11 at different angles.
The image scanning system can execute an image capturing procedure. Referring to fig. 16 to 20, in the image capturing procedure, the driving assembly 11 carries the object 2 to be detected and sequentially moves one of the surface blocks 21A-21C to the detection position 14 (step S110). The light source assembly 12 emits a light L1 toward the detecting position 14 (step S120) to illuminate the detecting position 14 in a forward or lateral direction. Thus, the surface blocks 21A-21C are sequentially disposed at the detecting position 14, and are irradiated by the light L1 from the side direction or the oblique direction when being at the detecting position 14.
In some embodiments, when each of the surface areas 21A-21C is located at the detecting position 14, the light sensing element 13 receives the diffused light generated by the light received by the surface area currently located at the detecting position 14, and captures a detection image of the surface area currently located at the detecting position 14 according to the received diffused light (step S130).
In some embodiments, the image scanning system may further include a processor 15. The processor 15 is coupled to the light source assembly 12, the light-sensing elements 13 and the driving motor 112, and is used for controlling operations of the components (such as the light source assembly 12, the light-sensing elements 13 and the driving motor 112).
In some embodiments, after the photosensitive device 13 captures the detection images MB of all the surface areas 21A-21C of the object 2, the processor 15 can perform a stitching procedure according to the detection images MB to obtain an object image IM of the object 2 (step S140).
For example, in the image capturing process, the driving assembly 11 first moves the surface block 21A to the detecting position 14, and the photosensitive element 13 captures a detection image Ma of the surface block 21A while the surface block 21A is irradiated with the detection light L1 provided by the light source assembly 12, as shown in fig. 15. Then, the driving assembly 11 displaces the object 2 to move the surface block 21B to the detection position 14, and the photosensitive element 13 captures a detection image Mb of the surface block 21B while the surface block 21B is irradiated with the detection light L1, as shown in fig. 15. The driving assembly 11 then displaces the object 2 to move the surface block 21C to the detection position 14, and the photosensitive element 13 captures a detection image Mc of the surface block 21C while the surface block 21C is irradiated with the detection light L1, as shown in fig. 15. This continues until the detection images MB of all the surface blocks have been captured. Then, the processor 15 stitches the detection images MB into an object image IM, as shown in fig. 15.
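A minimal sketch of the stitching procedure of step S140 follows, assuming the detection images of consecutive surface blocks have equal height and abut without overlap along the extending direction; a real stitching procedure may require overlap handling or feature-based alignment.
```python
# Illustrative sketch of step S140: concatenate the detection images Ma, Mb, Mc, ...
# of consecutive surface blocks into one object image.
import numpy as np

def stitch_detection_images(detection_images) -> np.ndarray:
    return np.concatenate(detection_images, axis=1)   # side by side, e.g. Ma | Mb | Mc
```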
In some embodiments, specifically, in the image capturing process, the object 2 is carried on the driving assembly 11, and one of the surface blocks 21A-21C of the object 2 is substantially located at the detecting position 14. Before capturing an image, the image scanning system performs a positioning operation (i.e. fine-tuning the position of the object 2) to align the surface area with the viewing angle of the photosensitive element 13.
In some embodiments, referring to fig. 21 and 22, the object 2 includes a body 201, a plurality of first alignment structures 202, and a plurality of second alignment structures 203. The first alignment structure 202 is located at one end of the body 201, and the second alignment structure 203 is located at the other end of the body 201. In some embodiments, the first alignment structure 202 can be a post, a bump, a slot, or the like. The second alignment structure 203 may be a post, a bump, a slot, etc. In some embodiments, the second alignment structures 203 are spaced along the extending direction of the surface 21 of the body 201 (hereinafter referred to as the first direction D1), and the spacing distance between any two adjacent second alignment structures 203 is greater than or equal to the viewing angle of the photosensitive element 13. In some embodiments, the second alignment structures 203 correspond to the surface sections 21A-21C of the object 2, respectively. Each second alignment structure 203 is aligned with the middle of the side of its corresponding surface area along the first direction D1.
The first alignment structure 202 is a post (hereinafter referred to as an alignment post) and the second alignment structure 203 is a slot (hereinafter referred to as an alignment slot). In some embodiments, the extending direction of each alignment pillar is substantially the same as the extending direction of the body 201, and one end of each alignment pillar is coupled to one end of the body 201. The alignment slot is located at the other end of the body 201, and surrounds the body 201 with the long axis of the body 201 as the rotation axis and is disposed on the surface of the other end of the body 201 at intervals.
In some embodiments, the first alignment structures 202 are spaced apart on the body 201. In the present exemplary embodiment, three first alignment structures 202 are taken as an example, but the number is not a limitation of the present invention. When looking down the side of the main body 201, the first alignment structures 202 may have different relative positions as the main body 201 rotates around the long axis thereof, for example, the first alignment structures 202 are disposed at intervals and do not overlap with each other (as shown in fig. 22), or any two first alignment structures 202 overlap with each other but the remaining one first alignment structure 202 does not overlap with each other.
In some embodiments, referring to fig. 16 to 19 and fig. 21 to 24, in the image capturing process, under illumination of the light source assembly 12, the processor 15 controls the photosensitive elements 13 to capture a test image of the object 2 (step S211). Here, the test image includes an image block representing the second alignment structure 203 currently facing the photosensitive element 13.
The processor 15 detects the position of the image block of the second alignment structure 203 in the test image (step S212) to determine whether the surface block currently located at the detection position 14 is aligned with the viewing angle of the photosensitive element 13.
When the position of the image block is not located in the middle of the test image, the processor 15 controls the driving element 11 to fine-tune the position of the object 2 in the first direction D1 (step S213), and returns to perform step S211. Herein, the steps S211 to S213 are repeatedly executed until the processor 15 detects that the presenting position of the image block is located in the middle of the test image.
When the display position of the image block is located in the middle of the test image, the processor 15 drives the photosensitive element 13 to capture the image; at this time, the photosensitive element 13 captures an image of the surface area of the object 2 under the illumination of the light source assembly 12 (step S214).
Next, the processor 15 controls the driving assembly 11 to displace the next surface block of the object 2 to the detecting position 14 along the first direction, so that the next second alignment structure 203 faces the photosensitive element 13 (step S215), and the procedure returns to step S211. Steps S211 to S215 are repeated until the detection images of all the surface blocks of the object 2 have been captured. In some embodiments, the amplitude by which the driving assembly 11 fine-tunes the position of the object 2 is smaller than the amplitude by which it displaces the object 2 to the next surface block.
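A minimal sketch of the alignment loop in steps S211-S215 follows; camera, driver, and locate_block are hypothetical stand-ins for the photosensitive element 13, the driving assembly 11, and the image analysis performed by the processor 15, and the pixel tolerance is an assumption.
```python
# Illustrative sketch of steps S211-S215: capture a test image, locate the image block
# of the second alignment structure, fine-tune until it is centered, then capture.
def capture_aligned_block(camera, driver, locate_block, tolerance_px: int = 5):
    while True:
        test_image = camera.capture()                  # step S211: capture a test image
        block_x = locate_block(test_image)             # step S212: x-position of the image block
        center_x = test_image.shape[1] // 2
        if abs(block_x - center_x) <= tolerance_px:    # image block is in the middle
            return camera.capture()                    # step S214: capture the detection image
        driver.fine_tune(block_x - center_x)           # step S213: fine-tune along direction D1
```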
Accordingly, the image scanning system can determine whether the object is aligned by analyzing the specific structure of the object in the test image, so as to obtain a detection image aligned with the viewing angle of the photosensitive element 13.
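The loop of steps S211 to S215 can be illustrated with the following minimal Python sketch. The callables passed in (capture_test_image, locate_alignment_block, fine_tune, capture_detection_image, move_to_next_surface_block) are hypothetical wrappers around the photosensitive element 13 and the driving assembly 11, and the test images are assumed to be NumPy-like arrays; the sketch illustrates the control flow only and is not part of the claimed method.

```python
def capture_all_surface_blocks(capture_test_image, locate_alignment_block,
                               fine_tune, capture_detection_image,
                               move_to_next_surface_block, num_blocks,
                               tolerance_px=2):
    detection_images = []
    for _ in range(num_blocks):
        # Steps S211-S213: repeat until the image block of the second alignment
        # structure 203 sits in the middle of the test image.
        while True:
            test_image = capture_test_image()               # step S211
            block_x = locate_alignment_block(test_image)    # step S212
            center_x = test_image.shape[1] // 2
            if abs(block_x - center_x) <= tolerance_px:
                break
            fine_tune(+1 if block_x < center_x else -1)     # step S213
        detection_images.append(capture_detection_image())  # step S214
        move_to_next_surface_block()                        # step S215
    return detection_images
```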
For example, assume that the object 2 has three surface areas, and the photosensitive element 13 faces the surface area 21A of the object 2 when the image capturing procedure is started. At this time, under the illumination of the light source assembly 12, the photosensitive element 13 captures a test image (hereinafter referred to as the first test image) of the object 2. The first test image includes an image block (hereinafter referred to as a first image block) representing the second alignment structure 203 corresponding to the surface area 21A. Then, the processor 15 performs an image analysis of the first test image to detect the position of the first image block in the first test image. When the position of the first image block is not located in the middle of the first test image, the driving assembly 11 fine-tunes the position of the object 2 in the first direction D1. After the fine adjustment, the photosensitive element 13 captures the first test image again for the processor 15 to determine whether the presenting position of the first image block is located in the middle of the first test image. On the contrary, when the position of the first image block is located in the middle of the first test image, the photosensitive element 13 captures the detection image of the surface area 21A of the object 2 under the illumination of the light source assembly 12.

After the capturing, the driving assembly 11 displaces the next surface area 21B of the object 2 to the detecting position 14 in the first direction D1, so that the second alignment structure 203 corresponding to the surface area 21B faces the photosensitive element 13. Then, under the illumination of the light source assembly 12, the photosensitive element 13 captures a test image (hereinafter referred to as a second test image) of the object 2, and the second test image includes an image block (hereinafter referred to as a second image block) of the second alignment structure 203 corresponding to the surface area 21B. Then, the processor 15 performs an image analysis of the second test image to detect the position of the second image block in the second test image. When the position of the second image block is not located in the middle of the second test image, the driving assembly 11 fine-tunes the position of the object 2 in the first direction D1. After the fine adjustment, the photosensitive element 13 captures the second test image again for the processor 15 to determine whether the presenting position of the second image block is located in the middle of the second test image. On the contrary, when the position of the second image block is located in the middle of the second test image, the photosensitive element 13 captures the detection image of the surface area 21B of the object 2 under the illumination of the light source assembly 12.

After the capturing, the driving assembly 11 further displaces the next surface area 21C of the object 2 to the detecting position 14 in the first direction D1, so that the second alignment structure 203 corresponding to the surface area 21C faces the photosensitive element 13. Then, under the illumination of the light source assembly 12, the photosensitive element 13 captures a test image (hereinafter referred to as a third test image) of the object 2, and the third test image includes an image block (hereinafter referred to as a third image block) of the second alignment structure 203 corresponding to the surface area 21C.
Then, the processor 15 performs an image analysis of the third test image to detect the position of the third image block in the third test image. When the position of the third image block is not located in the middle of the third test image, the driving assembly 11 fine-tunes the position of the object 2 in the first direction D1. After the fine adjustment, the photosensitive element 13 captures the third test image again for the processor 15 to determine whether the presenting position of the third image block is located in the middle of the third test image. On the contrary, when the position of the third image block is located in the middle of the third test image, the photosensitive element 13 captures the detection image of the surface area 21C of the object 2 under the illumination of the light source assembly 12.
In some embodiments, when the image scanning system needs to capture images of an object 2 with two different image capturing parameters, the image scanning system sequentially performs an image capturing procedure with each image capturing parameter. The different image capturing parameters may be that the light source assembly 12 provides the light L1 with different brightness, that the light source assembly 12 illuminates at different incident angles, or that the light source assembly 12 provides light L1 with different spectra.
In some embodiments, referring to fig. 23, after capturing the detection images of all the surface areas 21A to 21C of the object 2, the processor 15 stitches the detection images of all the surface areas 21A to 21C of the object 2 into an object image according to the capturing sequence (step S221), and compares the stitched object image with a predetermined pattern (step S222). When the object image does not match the predetermined pattern, the processor 15 adjusts the stitching sequence of the detection images (step S223) and compares the re-stitched object image with the predetermined pattern again (step S222). On the contrary, when the object image matches the predetermined pattern, the processor 15 obtains the object image of the object 2.
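The stitching and comparison of steps S221 to S223 can be sketched as follows, assuming the detection images are NumPy arrays of equal height and that matches_pattern is a hypothetical comparison against the predetermined pattern (for example, a template correlation above a threshold); exhaustive permutation is used here only for clarity.

```python
import itertools
import numpy as np

def stitch_object_image(detection_images, matches_pattern):
    n = len(detection_images)
    # itertools.permutations yields the identity first, i.e. the capturing order (step S221).
    for order in itertools.permutations(range(n)):
        object_image = np.concatenate([detection_images[i] for i in order], axis=1)
        if matches_pattern(object_image):                   # step S222
            return object_image
        # Otherwise adjust the stitching sequence and compare again (step S223).
    raise ValueError("no stitching sequence matches the predetermined pattern")
```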
In some embodiments, the image scanning system may also perform an alignment procedure. After the object 2 is placed on the driving assembly 11, the image scanning system performs the alignment procedure to align the object, so as to determine the position at which image capturing of the object 2 starts.
Referring to fig. 24, in the alignment procedure, the driving component 11 continuously rotates the object 2, and the processor 15 detects the first alignment structure 202 of the object 2 through the photosensitive element 13 while the object 2 rotates (step S201) to determine whether the first alignment structure 202 is of a predetermined type. In this way, during the rotation of the object 2, the second alignment structures 203 of the object 2 sequentially face the photosensitive elements 13.
In some embodiments, the predetermined pattern may be a relative position of the first alignment structure 202 and/or a luminance relationship of an image block of the first alignment structure 202.
In an exemplary embodiment, the photosensitive element 13 continuously captures a detection image of the object 2 while the object 2 rotates, and the detection image includes image blocks representing the first alignment structures 202. The processor 15 analyzes each detection image to determine the relative positions of the image blocks of the first alignment structures 202 in the detection image and/or the brightness relationship of the image blocks of the first alignment structures 202 in the detection image. For example, the processor 15 analyzes the detection image and finds that the image blocks of the first alignment structures 202 are spaced from each other and do not overlap, and that the brightness of the image block located in the middle of the image blocks of the first alignment structures 202 is brighter than the brightness of the image blocks located at the two sides; at this time, the processor 15 determines that the first alignment structures 202 are of the predetermined type. In other words, the predetermined type can be set by the image characteristics of the specific structure of the object 2.
When the first alignment structure 202 matches the predetermined type, the processor 15 stops the rotation of the object (step S202) and performs the image capturing procedure on the object; that is, the processor 15 controls the driving assembly 11 to stop rotating the object 2. Otherwise, the processor 15 continues to capture the detection image and analyze the imaging position and/or the imaging state of the image blocks of the first alignment structure 202.
Therefore, the image scanning system can analyze the presentation type and the presentation position of the specific structure of the object in the captured image to judge whether the object is aligned, so as to capture the detection image at the same position on each surface block of the aligned object.
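A minimal sketch of the alignment procedure of steps S201 and S202 is given below. The callables and the block objects (with left, right and brightness attributes) are assumptions used for illustration, and the check implements the example predetermined type described above: three non-overlapping image blocks whose middle block is brighter than the blocks on both sides.

```python
def align_object(capture_image, find_alignment_blocks, rotate_step, stop_rotation,
                 max_steps=360):
    for _ in range(max_steps):
        blocks = find_alignment_blocks(capture_image())     # step S201
        # Assumed predetermined type: three non-overlapping blocks whose middle
        # block is brighter than the blocks on both sides.
        if (len(blocks) == 3
                and blocks[0].right < blocks[1].left
                and blocks[1].right < blocks[2].left
                and blocks[1].brightness > blocks[0].brightness
                and blocks[1].brightness > blocks[2].brightness):
            stop_rotation()                                  # step S202
            return True
        rotate_step()                                        # keep rotating and re-check
    return False
```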
In some embodiments, when the image scanning system has an alignment procedure, after capturing the inspection images of all the surface areas 21A-21C of the object 2, the processor 15 can stitch the captured inspection images into the object image of the object 2 according to the capturing sequence (step S231).
For example, taking the spindle shown in fig. 21 and 22 as an example, after the image capturing process (i.e., repeatedly performing steps S211 to S215) is performed by the image scanning system, the photosensitive element 13 can capture the detected images MB of all the surface areas 21A to 21C. Here, the processor 15 can stitch the detected images MB of all the surface areas 21A to 21C into the object image IM of the object 2 in the capturing order, as shown in fig. 15. In this example, the photosensitive element 13 may be a linear photosensitive element. At this time, the detection image MB captured by the photosensitive element 13 can be spliced by the processor 15 without being cut. In some embodiments, the line type photosensitive element may be implemented by a line (linear) type image sensor. Wherein the line image sensor can have a field of view (FOV) of approximately 0 degree.
In another embodiment, the photosensitive element 13 is a two-dimensional photosensitive element. At this time, when the photosensitive element 13 captures the inspection image MB of the surface blocks 21A-21C, the processor 15 captures a middle region MBc of the inspection image MB based on the short side of the inspection image MB, as shown in fig. 25. Then, the processor 15 stitches the middle area MBc corresponding to all the surface areas 21A-21C into the object image IM. In some embodiments, the mid-section area MBc may have a width of, for example, one pixel (pixel). In some embodiments, the two-dimensional light sensing element may be implemented by a surface image sensor. Wherein the area image sensor has a field of view of about 5 to 30 degrees.
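The cropping and stitching of the middle regions MBc can be sketched as follows, assuming each detection image MB is a NumPy array whose width is the short side; the one-pixel-wide strip is the default but configurable.

```python
import numpy as np

def middle_region(detection_image_mb, region_width=1):
    # Take a strip of the given width centered along the short side (assumed to be
    # the image width), e.g. a one-pixel-wide column.
    height, width = detection_image_mb.shape[:2]
    start = width // 2 - region_width // 2
    return detection_image_mb[:, start:start + region_width]

def stitch_middle_regions(detection_images, region_width=1):
    # Concatenate the middle regions MBc of all surface blocks 21A-21C in capturing order.
    return np.concatenate(
        [middle_region(mb, region_width) for mb in detection_images], axis=1)
```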
In some embodiments, the image scanning system may further include a test program. In other words, before the alignment procedure and the image capturing procedure are performed, the image scanning system may first perform a testing procedure to determine that the components (such as the driving component 11, the light source component 12, the photosensitive elements 13, etc.) are operating normally.
In the testing procedure, referring to fig. 26, the photosensitive element 13 captures a test image under the illumination of the light source assembly 12 (step S301). The processor 15 receives the test image captured by the photosensitive element 13 and analyzes the test image (step S302) to determine whether the test image is normal (step S303), and accordingly determines whether the test is completed. If the test image is normal (the determination result is "yes"), it means that the photosensitive element 13 can capture a normal detection image in the subsequent image capturing procedure, and the image scanning system continues to execute the alignment procedure (step S201) or the image capturing procedure (step S211).
If the test image is abnormal (determination result is "no"), the image scanning system can perform an adjustment procedure (step S305).
In some embodiments, referring to fig. 16 and 17, the image scanning system may further include a light source adjusting assembly 16, and the light source adjusting assembly 16 is coupled to the light source assembly 12 and the processor 15. Herein, the light source adjusting assembly 16 can be used to adjust the position of the light source assembly 12 to change the light incident angle θ.
In an example, referring to fig. 16, 17 and 26, the photosensitive element 13 may capture an image of the surface area currently located at the detection position 14 as a test image (step S301). At this time, the processor 15 analyzes the test image (step S302) to determine whether the average brightness of the test image matches a predetermined brightness, so as to determine whether the test image is normal (step S303). If the average brightness of the test image does not match the predetermined brightness (the determination result is "no"), the test image is abnormal. For example, when the light incident angle θ of the light source assembly 12 is not appropriate, the average brightness of the test image will not match the predetermined brightness; at this time, the test image may not correctly represent the predetermined surface type of the object 2 to be detected.
In the adjustment procedure, the processor 15 controls the light source adjusting assembly 16 to readjust the position of the light source assembly 12 so as to reset the light incident angle θ (step S305). After the light source adjusting assembly 16 readjusts the position of the light source assembly 12 (step S305), the light source assembly 12 emits another test light having a different light incident angle θ. At this time, the processor 15 controls the photosensitive element 13 to capture an image of the surface area currently located at the detection position 14 according to the other test light (step S301) to generate another test image, and the processor 15 analyzes the other test image (step S302) to determine whether the average brightness of the other test image matches the predetermined brightness (step S303). If the average brightness of the other test image does not match the predetermined brightness (the determination result is "no"), the processor 15 controls the light source adjusting assembly 16 to readjust the position of the light source assembly 12 to readjust the light incident angle θ (step S305) until the average brightness of the test image captured by the photosensitive element 13 matches the predetermined brightness. When the average brightness of the test image matches the predetermined brightness (the determination result is "yes"), the image scanning system then performs step S201 or S211 to carry out the aforementioned alignment procedure or image capturing procedure.
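The brightness test and angle adjustment loop of steps S301 to S305 can be sketched as follows. The callables, the tolerance and the list of candidate light incident angles are assumptions for illustration.

```python
import numpy as np

def adjust_incident_angle(capture_test_image, set_light_incident_angle,
                          preset_brightness, tolerance=5.0,
                          candidate_angles=(90, 75, 60, 45, 30)):
    for theta in candidate_angles:
        set_light_incident_angle(theta)                  # adjustment procedure, step S305
        test_image = capture_test_image()                # step S301
        average_brightness = float(np.mean(test_image))  # step S302
        if abs(average_brightness - preset_brightness) <= tolerance:  # step S303
            return theta                                 # proceed with step S201 or S211
    raise RuntimeError("no candidate light incident angle reaches the preset brightness")
```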
In another embodiment, referring to fig. 16, 17 and 27, the processor 15 may also determine whether the setting parameters of the photosensitive element 13 are normal according to whether the test image is normal (step S303). If the test image is normal (the determination result is "yes"), which indicates that the setting parameters of the photosensitive element 13 are normal, the image scanning system then performs step S201 or S211 to carry out the aforementioned alignment procedure or image capturing procedure. If the test image is abnormal (the determination result is "no"), which indicates that the setting parameters of the photosensitive element 13 are abnormal, the processor 15 further determines whether the photosensitive element 13 has already performed the adjustment operation of the setting parameters (step S304). If the photosensitive element 13 has performed the adjustment operation of the setting parameters (the determination result is "yes"), the processor 15 generates a warning signal indicating the abnormality of the photosensitive element 13 (step S306). If the photosensitive element 13 has not performed the adjustment operation of the setting parameters (the determination result is "no"), the image scanning system proceeds to the aforementioned adjustment procedure (step S305). In the adjustment procedure, the processor 15 drives the photosensitive element 13 to perform the adjustment operation of the setting parameters (step S305). After the photosensitive element 13 performs the adjustment operation (step S305), the photosensitive element 13 captures another test image (step S301), and the processor 15 then determines whether the other test image captured after the adjustment operation is normal (step S303). If the processor 15 determines that the other test image is still abnormal (the determination result is "no"), the processor 15 then determines in step S304 that the photosensitive element 13 has already performed the adjustment operation (the determination result is "yes"), and the processor 15 generates a warning signal indicating the abnormality of the photosensitive element 13 (step S306).
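The parameter test, adjustment and warning flow of steps S301 to S306 can be sketched as follows, with the callables standing in for the photosensitive element 13, its parameter adjustment and the warning output.

```python
def check_setting_parameters(capture_test_image, test_image_is_normal,
                             run_adjustment, raise_warning):
    adjusted = False
    while True:
        if test_image_is_normal(capture_test_image()):        # steps S301-S303
            return True                                       # continue with S201 or S211
        if adjusted:                                          # step S304
            raise_warning("photosensitive element abnormal")  # step S306
            return False
        run_adjustment()             # step S305, e.g. reload the parameter setting file
        adjusted = True
```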
In some embodiments, the setting parameter of the photosensitive element 13 includes a photosensitive value, an exposure value, a focal length value, a contrast setting value, or any combination thereof. In some embodiments, the processor 15 may determine whether the average brightness or the contrast of the test image meets a predetermined brightness, so as to determine whether the setting parameters are normal. For example, if the average brightness of the test image does not meet the preset brightness, it indicates that the average brightness or the contrast of the test image does not meet the preset brightness due to any error in the setting parameters of the photosensitive element 13; if the average brightness or contrast of the test image meets the predetermined brightness, it indicates that each of the setting parameters of the photosensitive element 13 is correct.
In an embodiment, the image scanning system may further include an audio/video display unit, the warning signal may include an image, a sound, or both, and the audio/video display unit may display the warning signal. Moreover, the image scanning system may also have a network function, and the processor 15 may send the warning signal to the cloud for storage through the network function, or send the warning signal to other devices through the network function, so that a user at the cloud or other devices can know that the photosensitive element 13 is abnormal, and then perform a debugging operation on the photosensitive element 13.
In one embodiment, in the calibration process (step S305), the photosensitive element 13 automatically adjusts the setting parameters according to a parameter setting file. Herein, the parameter setting file stores setting parameters of the photosensitive element 13. In some embodiments, the inspector updates the parameter setting file through the user interface of the image scanning system, so that the photosensitive element 13 automatically adjusts the setting parameters according to the updated parameter setting file in the calibration procedure, so as to correct the wrong setting parameters.
In the above embodiments, when the image (i.e., the test image or the detection image) is captured by the photosensitive element 13, the light source assembly 12 emits a light L1 toward the detection position 14, and the light L1 obliquely or laterally irradiates the surface area currently located at the detection position 14.
Referring to fig. 18 and 19, the incident direction of the light L1 forms an angle (hereinafter referred to as the light incident angle θ) with the normal line 14A of the surface area at the detection position 14. That is, at the light incident end, the angle between the optical axis of the light L1 and the normal line 14A is the light incident angle θ. In some embodiments, the light incident angle θ is greater than 0 degrees and less than or equal to 90 degrees, i.e. the detection light L1 illuminates the detection position 14 at a light incident angle θ greater than 0 degrees and less than or equal to 90 degrees with respect to the normal line 14A, so that the surface area currently located at the detection position 14 is illuminated by the detection light L1 from a lateral or oblique direction.
In some embodiments, as shown in fig. 18 and 19, the photosensitive axis 13A of the photosensitive element 13 is parallel to the normal line 14A; alternatively, as shown in fig. 20, the photosensitive axis 13A of the photosensitive element 13 is between the normal line 14A and the extending direction D1, that is, an angle (hereinafter referred to as the light reflection angle α) is formed between the photosensitive axis 13A of the photosensitive element 13 and the normal line 14A. The photosensitive element 13 receives the diffused light generated when the surface blocks 21A-21C are illuminated by the light L1, and captures the detection images of the surface blocks 21A-21C sequentially located at the detection position 14 according to the diffused light (step S130 or step S214).
In some embodiments, if the surface 21 of the object 2 includes a groove-like or hole-like surface structure, then with a light incident angle θ greater than 0 degrees and less than or equal to 90 degrees, i.e., with the light L1 incident laterally or obliquely, the light L1 does not strike the bottom of the surface structure, and the surface structure appears as a shadow in the detection images of the surface areas 21A-21C, so that a detection image with sharp contrast between the surface 21 and the surface defect can be formed. Thus, the image scanning system or the inspector can determine whether the surface 21 of the object 2 has defects by checking whether the detection image has shadows.
In some embodiments, surface structures with different depths exhibit different brightness in the detection image under different light incident angles θ. In detail, as shown in fig. 19, when the light incident angle θ is equal to 90 degrees, the incident direction of the light L1 is perpendicular to the depth direction of the surface defect, that is, the optical axis of the light L1 overlaps the tangent of the surface at the center of the detection position. At this time, regardless of the depth of the surface structure, the surface structure on the surface 21 generates neither reflected light nor diffused light because its recess is not irradiated by the light L1, so surface structures of deeper or shallower depth all appear as shadows in the detection image, i.e. the detection image has poor contrast, or approaches no contrast. As shown in fig. 18, when the light incident angle θ is smaller than 90 degrees, the incident direction of the detection light L1 is not perpendicular to the depth direction of the surface structure; at this time, the light L1 irradiates a partial region of the surface structure below the surface 21, and this partial region generates reflected light and diffused light when irradiated by the light L1, so that the photosensitive element 13 receives the reflected light and diffused light from this partial region of the surface structure, and the surface structure presents an image with a brighter boundary (e.g., the boundary of a raised defect) or a darker boundary (e.g., the boundary of a recessed defect) in the detection image, i.e. the detection image has better contrast.
Also, for the same light incident angle θ smaller than 90 degrees, the photosensitive element 13 receives more reflected light and diffused light from a shallower surface structure than from a deeper surface structure. Therefore, a shallower surface structure appears as a brighter image in the detection image than a surface structure with a larger depth-to-width ratio. Further, in the case that the light incident angle θ is smaller than 90 degrees, a smaller light incident angle θ causes more reflected light and diffused light to be generated in the surface structure region, so the surface structure presents a brighter image in the detection image, and the brightness of the shallower surface structure in the detection image is also greater than the brightness of the deeper surface structure. For example, compared with the detection image corresponding to a light incident angle θ of 60 degrees, the surface structure exhibits higher brightness in the detection image corresponding to a light incident angle θ of 30 degrees; and in the detection image corresponding to the light incident angle θ of 30 degrees, the shallower surface structure exhibits higher brightness than the surface structure having a greater depth.
Therefore, the light incident angle θ and the brightness of the surface structure presented in the detection image have a negative correlation. The smaller the light incident angle θ, the brighter a shallow surface structure appears in the detection image, i.e. a shallow surface structure is less recognizable by the image scanning system or the inspector when the light incident angle θ is small, while a deeper surface structure can still be identified more easily from its darker image. On the contrary, if the light incident angle θ is larger, both shallow and deep surface structures appear darker in the detection image, i.e. the image scanning system or the inspector can identify all the surface structures when the light incident angle θ is large.
Therefore, the image scanning system or the inspector can set the corresponding light incident angle θ according to the predetermined hole depth of the predetermined surface structure to be detected, by means of the above-mentioned negative correlation. For example, if a deeper predetermined surface defect is to be detected and a shallower predetermined surface structure is not to be detected, the light source adjusting assembly 16 may adjust the position of the light source assembly 12 to set a smaller light incident angle θ according to the light incident angle calculated from the above-mentioned negative correlation, and the light source adjusting assembly 16 then drives the light source assembly 12 to output the detection light L1, so that the shallower predetermined surface defect appears as a brighter image in the detection image and the deeper predetermined surface structure appears as a darker image in the detection image. If the shallower and the deeper predetermined surface defects are to be detected together, the light source adjusting assembly 16 can adjust the position of the light source assembly 12 according to the light incident angle calculated from the above-mentioned negative correlation to set a larger (e.g. 90 degrees) light incident angle θ, and the light source adjusting assembly 16 then drives the light source assembly 12 to output the detection light L1, so that both the shallower and the deeper predetermined surface structures appear as shadows in the detection image.
For example, if the object 2 is a spindle of a safety belt assembly applied to an automobile, the surface structure may be a sand hole or an air hole caused by sand dust or air during the manufacturing of the object 2, or a bump mark or a scratch. The depth of a sand hole or an air hole is larger than that of a bump mark or a scratch. If sand holes or air holes of the object 2 are to be detected but bump marks or scratches are not, the light source adjusting assembly 16 can adjust the position of the light source assembly 12 according to the light incident angle calculated from the above-mentioned negative correlation to set a smaller light incident angle θ, so that the sand holes or air holes have lower brightness in the detection image while the bump marks or scratches have higher brightness, and the image scanning system or the inspector can quickly identify whether the object 2 has sand holes or air holes. If bump marks, scratches, sand holes and air holes of the object 2 are all to be detected, the light source adjusting assembly 16 can adjust the position of the light source assembly 12 according to the light incident angle calculated from the above-mentioned negative correlation to set a larger light incident angle θ, so that the bump marks, scratches, sand holes and air holes all appear as shadows in the detection image.
In some embodiments, the light incident angle θ may be greater than or equal to a critical angle and less than or equal to 90 degrees, so as to obtain the best target feature extraction effect at the wavelength to be detected. In this regard, the critical angle may be related to the surface type desired to be detected. In one embodiment, the light incident angle θ is related to a predetermined depth ratio of the predetermined surface defect to be detected. Referring to fig. 29, taking a predetermined surface defect with a predetermined hole depth d and a predetermined hole radius r as an example, the predetermined hole radius r is the distance between any side surface of the predetermined surface defect and the normal line 14A, the ratio (r/d) of the predetermined hole radius r to the predetermined hole depth d is the aforementioned depth ratio, and the critical angle is arctangent(r/d). Accordingly, in step S03, the light source adjusting assembly 16 may adjust the position of the light source assembly 12 according to the depth ratio (r/d) of the predetermined surface defect to be detected so as to set the critical angle of the light incident angle θ to arctangent(r/d); the light incident angle θ should satisfy the condition of being greater than or equal to arctangent(r/d) and less than or equal to 90 degrees, and the light source adjusting assembly 16 drives the light source assembly 12 to output the detection light L1 after adjusting the position of the light source assembly 12. In some embodiments, the predetermined hole radius r may be predetermined according to the size of the surface structure of the object 2 to be detected.
In one embodiment, the processor 15 can calculate the light incident angle θ according to the above-mentioned negative correlation and arctangent(r/d), and the processor 15 then drives the light source adjusting assembly 16 to adjust the position of the light source assembly 12 according to the calculated light incident angle θ.
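Under the relationship described above, the critical angle and a usable light incident angle can be computed as in the following sketch; the safety margin added to the critical angle is an assumption, not part of the described method.

```python
import math

def critical_incident_angle_deg(hole_radius_r, hole_depth_d):
    # Critical angle = arctan(r/d), following the depth-ratio relationship above.
    return math.degrees(math.atan2(hole_radius_r, hole_depth_d))

def chosen_incident_angle_deg(hole_radius_r, hole_depth_d, margin_deg=5.0):
    # The light incident angle should lie between arctan(r/d) and 90 degrees;
    # the margin added here is an assumption.
    return min(critical_incident_angle_deg(hole_radius_r, hole_depth_d) + margin_deg, 90.0)

# Example: a hole of radius 0.05 mm and depth 0.2 mm gives a critical angle of about 14 degrees.
```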
In some embodiments, the light source module 12 may provide the light ray L1 with a wavelength between 300nm and 3000 nm. For example, the light wavelength value of the light L1 can be in the light band of 300nm-600nm, 600nm-900nm, 900nm-1200nm, 1200nm-1500nm, 1500-1800nm, or 1800nm-2100 nm. In an exemplary embodiment, the light L1 provided by the light source module 12 can be visible light. Here, visible light can image surface defects on the order of μm on the surface 21 in the inspection image. In some embodiments, the light wavelength of the light L1 may be in the range of 380nm to 780nm, which may depend on the material properties of the object to be detected and the requirement of surface spectral reflectivity. In some embodiments, the light L1 can be any one of visible light such as white light, violet light, blue light, green light, yellow light, orange light, and red light. In one embodiment, the wavelength of the white light may be 380nm to 780nm, the wavelength of the violet light may be 380nm to 450nm, the wavelength of the blue light may be 450nm to 495nm, the wavelength of the green light may be 495nm to 570nm, the wavelength of the yellow light may be 570nm to 590nm, the wavelength of the orange light may be 590nm to 620nm, and the wavelength of the red light may be 620nm to 780 nm.
In some embodiments, the light L1 provided by the light source assembly 12 can be far infrared light (e.g., light having a wavelength in the range of 800nm to 3000 nm). Thus, the detection light can image surface features having a sub-micron (e.g., 300nm) order on the surface of the object 2 in the detection image. In an exemplary embodiment, when the object 2 having the surface attachment is obliquely irradiated with far infrared light provided by the light source module 12, the far infrared light can penetrate the attachment to the surface of the object 2, so that the photosensitive element 13 can capture an image of the surface of the object 2 under the attachment. In other words, the far infrared light can penetrate the surface attachment of the object 2, so that the photosensitive element 13 can acquire an image of the surface 21 of the object 2. In some embodiments, the far infrared light has a light wavelength value greater than 2 μm. In some embodiments, the far infrared light has a wavelength of light having a value greater than the thickness of the attachment. In other words, the wavelength of the far infrared light can be selected according to the thickness of the attachment to be penetrated. In some embodiments, the wavelength of the far infrared light can be selected according to the surface morphology of the object to be measured, so as to perform image filtering of micron (μm) structure. For example, if the sample surface has 1 μm to 3 μm fine traces or sand holes, but such phenomena do not affect the product quality, and the quality manager is interested in structural defects of 10 μm or more, the wavelength of the far infrared light L1 is selected to be an intermediate wavelength (e.g., 4 μm) to obtain the best filtering effect of the image microstructure and low-noise image quality, and also not affect the detection of larger scale defects. Preferably, the wavelength of the far infrared light is greater than 3.5 μm. In some embodiments, the object 2 is preferably made of metal. In some embodiments, the adherent can be an oil stain, colored paint, or the like.
In one embodiment, the processor 15 can drive the light source adjustment assembly 16 to adjust the light intensity of the far-infrared light L1 emitted by the light source assembly 12 to improve the glare phenomenon, so as to improve the quality of the detected image captured by the photosensitive element 13, thereby obtaining a low-disturbance through image. For example, the light source adjusting assembly 16 can reduce the light intensity, so that the light sensing element 13 obtains a detection image with less glare.
In another embodiment, the surface defects with different depths have different brightness in the inspection image according to different light incident angles θ, and the intensity of the glare generated by the far-infrared light L1 will vary accordingly. In other words, the processor 15 can drive the light source adjustment assembly 16 to adjust the light incident angle θ of the far-infrared light L1 emitted by the light source assembly 12, so as to effectively reduce glare, and further improve the quality of the detected image captured by the photosensitive element 13, so as to obtain a low-disturbance through image.
In another embodiment, the light source adjustment assembly 16 can determine the light wave polarization direction of the far-infrared light L1 emitted by the light source assembly 12, i.e., control the light source assembly 12 to output the polarized detected far-infrared light L1, so as to effectively reduce glare and further improve the quality of the detected image captured by the photosensitive element 13, thereby obtaining a low-disturbance through image.
In some embodiments, the light source adjustment assembly 16 may be a driving motor for adjusting the light incident angle θ of the light source assembly 12. Wherein the drive motor may be a stepper motor.
In some embodiments, the light source adjustment assembly 16 may include a driving circuit to adjust the intensity of the light L1 of the light source assembly 12 by changing the voltage applied to the light source assembly 12.
In some embodiments, referring to fig. 30, the image scanning system may further include a polarizer 17. The polarizing plate 17 is located on the photosensitive axis 13A of the photosensitive element 13 and is disposed between the photosensitive element 13 and the detection position 14. Here, the photosensitive element 13 captures an image of the surface of the object 2 through the polarizer 17. The polarizing plate 17 is used for polarization filtering to effectively avoid saturation glare caused by strong infrared light to the photosensitive element 13, thereby improving the quality of the detected image captured by the photosensitive element 13 and obtaining a low-disturbance through image.
In some embodiments, the positions of the light source module 12 and the photosensitive element 13 are designed such that the light incident angle θ is not equal to the light reflection angle α, so as to reduce glare and further improve the quality of the detected image captured by the photosensitive element 13, thereby obtaining a low-disturbance through image.
In some embodiments, the light source adjusting assembly 16 can sequentially adjust the position of the light source assembly 12, so that the light sensing elements 13 capture the detection images MB of the object 2 at different light incident angles θ respectively. Accordingly, the image scanning system can obtain a plurality of detection images MB of each surface area of the same object 2 under different light incidence angles θ. In other words, the photosensitive element 13 performs multiple image captures on the same surface area based on the light L1 with different light incident angles θ to obtain multiple detected images MB of the same surface area.
In some embodiments, referring to fig. 31 and 32, the image scanning system may further include a light splitting assembly 18. The light splitting assembly 18 is located between the photosensitive element 13 and the detection position 14, that is, between the photosensitive element 13 and the object 2. The light splitting assembly 18 has a plurality of filter regions F1 corresponding to a plurality of spectra, respectively. In this case, the light source assembly 12 provides a multi-spectrum light to illuminate the detection position 14, the multi-spectrum light having sub-light of a plurality of spectra. Therefore, by switching the filter regions F1 of the light splitting assembly 18 (that is, by shifting the filter regions F1 one by one onto the photosensitive axis 13A of the photosensitive element 13), the photosensitive element 13 captures, through each filter region F1, the detection image MB of the surface block (one of the surface blocks 21A-21C) located at the detection position 14, so as to obtain a plurality of detection images MB of different spectra. That is, when the multi-spectrum light is irradiated from the light source assembly 12 onto the object 2 at the detection position 14, the surface of the object 2 diffuses the multi-spectrum light, and the diffused light is filtered by one of the filter regions F1 of the light splitting assembly 18 into sub-light having the spectrum corresponding to that filter region F1 before entering the sensing region of the photosensitive element 13. At this time, only the sub-light having a single spectrum (a single light band or its middle value) reaches the photosensitive element 13. While the same filter region F1 is aligned with the photosensitive axis 13A of the photosensitive element 13, the driving assembly 11 shifts one surface block to the detection position 14 at a time, and after each shift the photosensitive element 13 captures a detection image MB of the surface block currently located at the detection position 14, so as to obtain the detection images MB of all the surface blocks 21A-21C under the same spectrum. Then, the light splitting assembly 18 is switched to another filter region F1 aligned with the photosensitive axis 13A of the photosensitive element 13, and the surface blocks are sequentially shifted again to capture their detection images MB. By analogy, the detection images MB having the spectra corresponding to all the filter regions F1 can be obtained. In other words, the light source assembly 12 can provide a wider light band covering a wider range of light wavelengths, and the light splitting assembly 18 is disposed on the light receiving path to allow only a specific light band to pass through, so as to provide the photosensitive element 13 with the reflected light of the light L1 at the predetermined light wavelength.
In some embodiments, referring to fig. 31 and 32, the image scanning system may further include a displacement component 19. The displacement assembly 19 is coupled to the light splitting assembly 18. During the operation of the image scanning system, the displacement assembly 19 sequentially moves one of the filter regions F1 of the light splitting assembly 18 onto the photosensitive axis 13A of the photosensitive element 13.
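The capture sequence with the light splitting assembly 18 at the sensing end can be sketched as the following nested loop, with the callables standing in for the displacement assembly 19, the driving assembly 11 and the photosensitive element 13.

```python
def capture_multispectral_images(move_filter_region, move_surface_block,
                                 capture_detection_image,
                                 num_filter_regions, num_surface_blocks):
    images = {}   # (filter_region_index, surface_block_index) -> detection image MB
    for f in range(num_filter_regions):
        move_filter_region(f)        # align one filter region F1 with the photosensitive axis 13A
        for b in range(num_surface_blocks):
            move_surface_block(b)    # shift one surface block to the detection position 14
            images[(f, b)] = capture_detection_image()
    return images
```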
In another embodiment, the light splitting assembly may instead be disposed at the light incident end. In some embodiments, referring to fig. 33 and 34, the image scanning system may further include a light splitting assembly 18'. The light splitting assembly 18' is located between the light source assembly 12 and the detection position 14, that is, between the light source assembly 12 and the object 2. The light splitting assembly 18' has a plurality of filter regions F1 corresponding to the plurality of spectra, respectively. In this case, the light source assembly 12 provides a multi-spectrum light to illuminate the detection position 14 through the light splitting assembly 18', the multi-spectrum light having sub-light of a plurality of spectra. Therefore, by switching the filter regions F1 of the light splitting assembly 18' (i.e. shifting the filter regions F1 one by one onto the optical axis of the light source assembly 12), the multi-spectrum light output from the light source assembly 12 is filtered by the current filter region F1 of the light splitting assembly 18' into sub-light of a single spectrum, which then illuminates the object 2 at the detection position 14. At this time, the photosensitive element 13 can capture the detection image MB of the corresponding spectrum of the surface block (one of the surface blocks 21A-21C) located at the detection position 14. While the same filter region F1 is aligned with the optical axis of the light source assembly 12, the driving assembly 11 shifts one surface block to the detection position 14 at a time, and after each shift the photosensitive element 13 captures a detection image MB of the surface block currently located at the detection position 14, so as to obtain the detection images MB of all the surface blocks 21A-21C under the same spectrum. Then, the light splitting assembly 18' is switched to another filter region F1 aligned with the optical axis of the light source assembly 12, and the surface blocks are sequentially shifted again to capture their detection images MB. By analogy, the detection images MB having the spectra corresponding to all the filter regions F1 can be obtained. In other words, the light source assembly 12 can provide a relatively wide light band, and the light splitting assembly 18' allowing only a specific light band to pass through is disposed on the light incident path, so as to provide the light L1 with the predetermined light wavelength to irradiate the detection position 14.
In some embodiments, referring to fig. 33 and 34, the image scanning system may further include a displacement component 19'. The displacement assembly 19 'is coupled to the light splitting assembly 18'. During operation of the image scanning system, the displacement assembly 19 'sequentially moves one of the filter regions F1 of the light splitting assembly 18' to the optical axis of the light source assembly 12.
In some embodiments, the light band of the multi-spectrum light provided by light source module 12 may be between 300nm and 2100nm, and the light bands respectively allowed to pass through by filter regions F1 of light splitting assembly 18(18') may be any non-overlapping sections between 300nm and 2100 nm. Herein, the wavelength bands of light respectively allowed to pass through the plurality of filter regions F1 of the light splitting assembly 18(18') may be continuous or discontinuous. For example, when the wavelength range of the multi-spectral light can be between 300nm and 2100nm, the wavelength ranges of light passing through the filter regions F1 of the light splitting assembly 18(18') can be 300nm-600nm, 600nm-900nm, 900nm-1200nm, 1200nm-1500nm, 1500-1800nm, and 1800-2100 nm, respectively. In another example, when the light band of the multi-spectral light can be between 380nm and 750nm, the light bands respectively allowed to pass through by the filter regions F1 of the light splitting assembly 18(18') can be 380nm-450nm, 495nm-570nm and 620nm-750nm, respectively. In some embodiments, the aforementioned spectra may be represented in the wavelength band of monochromatic light or in intermediate values thereof.
In some embodiments, the light splitting assembly 18(18') may be a beam splitter. In some embodiments, the displacement assembly 19(19') may be implemented by a drive motor. Wherein the driving motor can be a stepping motor.
In some embodiments, referring to fig. 35, the image scanning system can utilize a plurality of light emitting elements 121-123 with different spectra to provide light L1 of a plurality of spectra, and the light emitting elements 121-123 with different spectra are activated sequentially, so that the photosensitive element 13 can obtain a plurality of detection images of different spectra. In other words, the light source assembly 12 includes a plurality of light emitting elements 121-123, and the light emitting elements 121-123 correspond to a plurality of non-overlapping light bands, respectively. In some embodiments, these light bands may be continuous or discontinuous.
By way of example, light source assembly 12 includes a red LED, a blue LED, and a green LED. When the red LED emits light, the photosensitive element 13 can obtain a detection image MB of the red spectrum. When the blue LED emits light, the photosensitive element 13 can obtain a detection image MB of the blue spectrum, as shown in fig. 36. When the green LED emits light, the photosensitive element 13 can obtain a detected image MB of a green spectrum, as shown in fig. 37. In this way, the details presented by the detection image MB under different bands of light are different. For example, the grooves in the detected image MB are more distinct in the blue spectrum, and the bumps in the detected image MB are more distinct in the green spectrum.
Accordingly, the image scanning system can obtain a plurality of detection images MB of different spectra for each surface area of the same object 2. In other words, the photosensitive element 13 performs multiple image captures on the same surface area based on the light L1 with different light bands to obtain multiple detected images MB with different spectra for the same surface area.
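The sequential activation of the light emitting elements 121 to 123 can be sketched as follows; the callables are assumed wrappers around the light source assembly 12 and the photosensitive element 13.

```python
def capture_per_spectrum(light_emitting_elements, enable_only, capture_detection_image):
    images = {}
    for name in light_emitting_elements:      # e.g. ("red", "green", "blue")
        enable_only(name)                     # activate only this light emitting element
        images[name] = capture_detection_image()
    return images
```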
In some embodiments, as shown in FIGS. 28, 31 and 33, the light source module 12 may include a light emitting element.
In still other embodiments, as shown in fig. 18, 19 and 30, the light source assembly 12 may include two light emitting elements 121 and 122. The two light emitting elements 121 and 122 are symmetrically disposed on two opposite sides of the object 2 with respect to the normal line 14A and respectively illuminate the detection position 14. The surface 21 is illuminated by the symmetrical detection light L1 to generate symmetrical diffused light, and the photosensitive element 13 sequentially captures the detection images of the surface blocks 21A-21C located at the detection position 14 according to the symmetrical diffused light, so as to improve the imaging quality of the detection images. In some embodiments, the light emitting elements 121, 122 may be implemented by one or more light emitting diodes (LEDs); in some embodiments, each light emitting element 121, 122 may be implemented by a laser source.
In one embodiment, the image scanning system may have a single set of light source modules 12, as shown in FIG. 14.
In another embodiment, referring to fig. 38 to 40, the image scanning system may have a plurality of sets of light source modules 12a, 12b, 12c, 12 d. The light source assemblies 12a, 12b, 12c, 12d are respectively located at different orientations of the detecting position 14, i.e. at different orientations of the carrying elements 111 for carrying the object 2. Thus, the image scanning system can obtain the object image with the optimal surface feature space information. For example, light source assembly 12a may be disposed on the front side of detection location 14 (or carrier element 111), light source assembly 12b may be disposed on the rear side of detection location 14 (or carrier element 111), light source assembly 12c may be disposed on the left side of detection location 14 (or carrier element 111), and light source assembly 12d may be disposed on the right side of detection location 14 (or carrier element 111).
Here, under illumination of each light source assembly (12a, 12b, 12C, 12 d), the image scanning system performs an image capturing procedure to obtain the detected images MB of all the surface areas 21A-21C of the object 2 under illumination in a specific orientation. For example, the image scanning system first emits light L1 from the light source assembly 12 a. Under the light L1 emitted by the light source 12a, the photosensitive elements 13 capture the detected images MB of all the surface areas 21A-21C of the object 2. Then, the image scanning system is switched to emit light L1 from light source module 12 b. Under the light L1 emitted by the light source 12b, the photosensitive elements 13 capture the detection images MB of all the surface areas 21A-21C of the object 2. Next, the image scanning system is switched to emit light L1 from light source module 12 c. Under the light L1 emitted by the light source 12C, the photosensitive elements 13 capture the detection images MB of all the surface areas 21A-21C of the object 2. The image scanning system is then switched to emit light L1 from light source module 12 d. Under the light L1 emitted by the light source 12d, the photosensitive elements 13 capture the detection images MB of all the surface areas 21A-21C of the object 2.
Accordingly, the image scanning system can obtain a plurality of detection images MB of each surface area of the same object 2 under the illumination of different light source modules 12 a-12 d. In other words, the photosensitive element 13 performs multiple image captures on the same surface area based on the light L1 provided by the different light source modules 12a to 12d to obtain multiple detected images MB of the same surface area in different lighting directions. In some embodiments, the image scanning system may integrate the object images IM of the object 2 in different lighting orientations to improve the imaging distinctiveness of various surface types of the object, thereby improving the identification of the surface types of the object to obtain the best resolution of the surface types of the object. For example, the image scanning system can make the attachment on the surface 21 of the object 2 and the surface type form a significantly different imaging effect through the integration of the object images IM of the object 2 in different lighting orientations, so as to identify the surface type of the object.
For example, referring to FIG. 41, the image scanning system may have four sets of light source assemblies 12a, 12b, 12c, 12d respectively disposed at the upper side, the lower side, the left side and the right side of the detection position 14. Assume that the surface 21 of the object 2 has an attachment of the pattern Sa and the surface morphology of the slot Sb. In the image capturing procedure, the photosensitive element 13 can capture an image M01 (as shown in fig. 42) of the surface 21 of the object 2 based on the light L1 provided by the light source assembly 12a, an image M02 (as shown in fig. 43) of the surface 21 of the object 2 based on the light L1 provided by the light source assembly 12b, an image M03 (as shown in fig. 44) of the surface 21 of the object 2 based on the light L1 provided by the light source assembly 12c, and an image M04 (as shown in fig. 45) of the surface 21 of the object 2 based on the light L1 provided by the light source assembly 12d. Referring to fig. 42 to 45, in the images M01 to M04, the image of the pattern Sa is not shaded regardless of the lighting orientation, while the image of the slot Sb is shaded differently according to the lighting orientation. The processor 15 superimposes the images M01-M04 of the object 2 into a superimposed object image (i.e., the initial image IMc), as shown in FIG. 46. When the initial image IMc is fed into the artificial neural network system 30 or any sub-neural network system 33, the artificial neural network system 30 or the sub-neural network system 33 can determine whether the surface of the object 2 has attachments and/or a particular surface morphology based on the representation (e.g., presence or absence, position, etc.) of the shadows in the initial image IMc. Taking the initial image IMc shown in fig. 46 as an example, the sub-neural network system 33 can determine that the surface 21 of the object 2 has an attachment and a slot after executing the prediction model on the initial image IMc.
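The superposition of the images M01 to M04 into the initial image IMc can be sketched as follows. The patent does not specify the superposition operator, so a per-pixel average is assumed here, and the commented call to a classifier stands in for feeding the initial image IMc to the artificial neural network system 30 or a sub-neural network system 33.

```python
import numpy as np

def superimpose_images(images):
    # Assumed superposition: per-pixel average of the images M01-M04 of equal size.
    stack = np.stack([np.asarray(img, dtype=np.float32) for img in images], axis=0)
    return stack.mean(axis=0).astype(np.uint8)

# initial_image_imc = superimpose_images([m01, m02, m03, m04])
# result = sub_neural_network.predict(initial_image_imc)   # hypothetical classifier call
```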
In some embodiments, the optical axes (e.g., the light ray L1) of any two adjacent light source modules in the light source modules 12 a-12 d have the same predetermined included angle therebetween. For example, in the case of a top view image scanning system, the light source modules 12 a-12 d are disposed around the center of the detection position 14 at regular angular intervals.
In some embodiments, the light source modules 12 a-12 d provide the light L1 toward the detection position 14 at the same light incident angle θ.
In some embodiments, the photosensitive element 13 captures a plurality of detection images MB of the same spectrum for the same surface area based on the light L1 provided by different light source modules 12a to 12 d.
In some embodiments, the photosensitive element 13 can also capture a plurality of detection images MB with different spectra for the same surface area based on the light L1 provided by the different light source assemblies 12a to 12d. For example, assume that the image scanning system has four light source assemblies 12a to 12d respectively disposed at the upper side, the lower side, the left side and the right side of the detection position 14. The photosensitive element 13 captures a detection image MB of a first spectrum of the surface area 21A based on the light L1 provided by the light source assembly 12a, captures a detection image MB of a second spectrum of the surface area 21A based on the light L1 provided by the light source assembly 12b, captures a detection image MB of a third spectrum of the surface area 21A based on the light L1 provided by the light source assembly 12c, and captures a detection image MB of a fourth spectrum of the surface area 21A based on the light L1 provided by the light source assembly 12d. The first spectrum to the fourth spectrum belong to different light bands, respectively.
In some embodiments, the photosensitive element 13 captures a plurality of detection images MB with different spectra for the same surface area based on the light L1 provided by each of the different light source modules 12 a-12 d. For example, taking the light source module 12a and the surface area 21A as an example, under the illumination of the light source module 12a, the light sensing device 13 can capture the detection images MB of different spectra of the same surface area 21A through the light splitting assembly 18 (18').
In one embodiment, as shown in fig. 16, 36 and 37, the object 2 has a cylindrical shape, such as a spindle. I.e. the body 201 of the object 2 is cylindrical. Herein, the surface 21 of the object 2 may be a side surface of the body 201 of the object 2, i.e. the surface 21 is a cylindrical surface, and the surface 21 has a radian of 2 pi. Here, the first direction D1 may be a clockwise direction or a counterclockwise direction with the major axis of the body of the object 2 as the rotation axis. In some embodiments, the object 2 has a narrower configuration at one end relative to the other. In an example, referring to fig. 17, 24 and 26, the supporting element 111 may be two rollers spaced apart by a predetermined distance, and the driving motor 112 is coupled to the rotating shafts of the two rollers. Here, the predetermined distance is smaller than the diameter of the article 2 (the minimum diameter of the body). Thus, the article 2 is movably disposed between the two rollers. Moreover, when the driving motor 112 rotates the two rollers, the object 2 is driven by the surface friction between the object 2 and the two rollers, and thus rotates along the first direction D1 of the surface 21, so as to align a surface area to the detection position 14. In another example, the supporting element 111 can be a shaft, and the driving motor 112 is coupled to one end of the shaft. At this time, the other end of the rotating shaft is provided with an embedded part (such as an inserting hole). At this point, the article 2 may be removably embedded in the insert. When the driving motor 112 rotates the shaft, the object 2 is driven by the shaft to rotate along the first direction D1 of the surface 21, so that a surface area is aligned to the detecting position 14. In some embodiments, taking the surface 21 divided into 9 surface sections 21A-21C as an example, the driving motor 112 drives the supporting element 111 to rotate 40 degrees at a time, so as to drive the object 2 to rotate 40 degrees along the first direction D1 of the surface 21. In some embodiments, the angle of rotation of the driving motor 112 (to fine-tune the position of the object 2) in step S13 is smaller than the angle of rotation of the driving motor 112 (to displace the next surface segment to the detection position 14) in step S15.
In one embodiment, as shown in fig. 47-50, the object 2 is plate-shaped; that is, the body 201 of the object 2 has a plane. The surface 21 of the object 2 (i.e. the plane of the body 201) may be a non-curved surface whose curvature is equal to or approaches zero. Here, the first direction D1 may be the extending direction of any side (e.g., a long side) of the surface 21 of the object 2. In an exemplary embodiment, the supporting element 111 can be a planar carrying board, and the driving motor 112 is coupled to a side of the planar carrying board. During the image capturing process, the object 2 may be removably disposed on the planar carrying board. The driving motor 112 drives the planar carrying board to move along the first direction D1 of the surface 21 so as to drive the object 2 to move, thereby aligning a surface area to the detection position 14. Here, the driving motor 112 drives the planar carrying board to move a predetermined distance each time, and repeats this movement to sequentially move each of the surface areas 21A to 21C to the detection position 14. The predetermined distance is substantially equal to the width of each surface area 21A to 21C along the first direction D1.
In some embodiments, the drive motor 112 may be a stepper motor.
In one embodiment, referring to fig. 39 and 47, the image scanning system may be configured with a single photosensitive element 13, and the photosensitive element 13 performs image capturing on a plurality of surface areas 21A to 21C to obtain a plurality of detection images respectively corresponding to the surface areas 21A to 21C.
In one example, it is assumed that the object 2 is cylindrical and the image scanning system is provided with a single photosensitive element 13. The photosensitive element 13 captures images of a plurality of surface areas 21A to 21C of the main body (i.e., the middle section) of the object 2 to obtain a plurality of detection images MB corresponding to the surface areas 21A to 21C, and the processor 15 stitches the detection images MB of the surface areas 21A to 21C into an object image IM, as shown in fig. 15.
In another embodiment, referring to fig. 16 and 38, the image scanning system may be provided with a plurality of photosensitive elements 13, which face the detection position 14 and are arranged along the long axis of the object 2. The photosensitive elements 13 respectively capture the detection images of the surface areas of the object 2 located at the detection position 14.
In one example, assume that the object 2 is cylindrical and the image scanning system is provided with a plurality of photosensitive elements 131 to 133, as shown in FIG. 16. The photosensitive elements 131 to 133 respectively capture detection images MB1 to MB3 of the portions of the surface of the object 2 located at different segment positions of the detection position 14, and the processor 15 stitches all the detection images MB1 to MB3 into an object image IM, as shown in fig. 51. For example, assuming there are three photosensitive elements 131 to 133, the processor 15 stitches the object image IM of the object 2 from the detection images MB1 to MB3 captured by the three photosensitive elements, as shown in fig. 51. The object image IM includes a sub-object image 22 (the upper segment of the object image IM in fig. 51) stitched from the detection images MB1 of all the surface areas 21A to 21C captured by the first photosensitive element 131, a sub-object image 23 (the middle segment of the object image IM in fig. 51) stitched from the detection images MB2 of all the surface areas 21A to 21C captured by the second photosensitive element 132, and a sub-object image 24 (the lower segment of the object image IM in fig. 51) stitched from the detection images MB3 of all the surface areas 21A to 21C captured by the third photosensitive element 133.
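The stitching of the detection images MB1 to MB3 into the object image IM can be illustrated with plain array concatenation. The following sketch assumes that every detection image has already been cropped to the same pixel size; it is not the specific stitching algorithm of the disclosure, only a minimal example of how the sub-object images 22 to 24 could be assembled.

```python
import numpy as np

def stitch_object_image(detection_images_per_sensor):
    """detection_images_per_sensor: one list per photosensitive element 131-133,
    each containing 2-D numpy arrays (one per surface area), all the same shape.
    Returns the stitched object image IM as a single 2-D array."""
    sub_object_images = [
        np.concatenate(per_area_images, axis=1)    # join surface areas side by side
        for per_area_images in detection_images_per_sensor
    ]
    return np.concatenate(sub_object_images, axis=0)  # stack the three segments vertically

# Example with dummy data: 3 sensors x 9 surface areas of 64x64 pixels each.
dummy = [[np.zeros((64, 64), dtype=np.uint8) for _ in range(9)] for _ in range(3)]
object_image_im = stitch_object_image(dummy)
print(object_image_im.shape)  # (192, 576)
```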
In some embodiments, although the foregoing image scanning system is described with the example of capturing the detection images of all the surface areas of the object 2, the present invention is not limited thereto. The image scanning system may also directly capture a detection image of the entire surface of a tiny object 2 (i.e. the surface of the object 2 facing the photosensitive element 13, where the area of that surface is equal to or smaller than the field of view of the photosensitive element 13), or may be configured to capture detection images of only one or more of the surface areas of the object 2.
In some embodiments, the processor 15 may not perform the stitching process, but may directly use the detection image MB of any of the surface areas 21A to 21C of the object 2 captured by the photosensitive element 13 as the object image IM.
In some embodiments, the processor 15 may automatically determine whether the surface 21 of the object 2 includes surface defects, whether the surface 21 has different textures, and whether the surface 21 has attachments such as paint or oil stains according to the obtained object image IM, that is, the processor 15 may automatically determine different surface types of the object 2 according to the object image.
In some embodiments, the processor 15 may include the artificial neural network system 30 to automatically classify the surface type according to the obtained object image IM, so as to automatically determine the surface type of the surface 21 of the object 2. In an exemplary embodiment, before the artificial neural network system 30 is created, the object images IM generated by the processor 15 may be used to train (i.e., perform deep learning on) a plurality of sub-neural network systems 33 of the artificial neural network system 30, so that each sub-neural network system builds a prediction model for identifying the surface morphology of the object. In another example, before the artificial neural network system 30 is created, the object images IM generated by the processor 15 may be further fed into the trained sub-neural network systems 33 to obtain their respective determined defect rates. The processor 15 then connects the sub-neural network systems 33 in series according to their respective determined defect rates to obtain the artificial neural network system 30. In another example, after the artificial neural network system 30 is created, the object images IM generated by the processor 15 may be subjected to prediction classification by the artificial neural network system 30, i.e., classification of each object image IM is performed sequentially through the prediction models of the respective sub-neural network systems 33.
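As a minimal sketch of the series connection, assuming each trained sub-neural network system exposes a predict method that returns True when it flags the fed object image as defective (an assumed interface, not one defined by this disclosure), the sub-networks could be ordered by their determined defect rates and applied one after another as follows.

```python
# Illustrative series connection of trained sub-neural network systems 33,
# ordered by their determined defect rates (higher rate screens first).
# SubNetwork.predict() is an assumed interface, not a fixed API of this disclosure.

class SeriesArtificialNeuralNetwork:
    def __init__(self, subnetworks_with_rates):
        # subnetworks_with_rates: list of (sub_network, determined_defect_rate) pairs
        ranked = sorted(subnetworks_with_rates, key=lambda pair: pair[1], reverse=True)
        self.subnetworks = [net for net, _rate in ranked]

    def screen(self, object_image):
        """Pass the object image through the sub-networks in series.
        The first sub-network that flags a defect classifies the object as abnormal."""
        for sub_network in self.subnetworks:
            if sub_network.predict(object_image):   # True -> defect detected
                return "abnormal"
        return "normal"
```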
In some embodiments, the object image IM generated by the processor 15 can be fed to another processor having the aforementioned artificial neural network system 30, so that the artificial neural network system 30 can automatically classify the surface type according to the obtained object image IM, thereby automatically determining the surface type of the surface 21 of the object 2.
In one embodiment, the image scanning system and the artificial neural network system 30 can be implemented on the same host. For example, the image scanning system and the artificial neural network system 30 are implemented on the same host, and the host has a processor 15 for controlling the operation of the image scanning system and executing the artificial neural network system 30. In another example, the image scanning system and the artificial neural network system 30 are implemented on the same host, and the host has a processor 15 for controlling the operation of the image scanning system and another processor for executing the artificial neural network system 30. In another embodiment, the image scanning system and the artificial neural network system 30 can be implemented on different hosts. In other words, the image scanning system and the artificial neural network system 30 are implemented on two different hosts. The two hosts can be connected in a wired or wireless communication manner to transmit information such as object images IM.
In some embodiments, the creation and application of the artificial neural network system 30 can be implemented on different processors (or hosts). In other words, after one processor connects a plurality of sub-neural network systems 33 in series to form the artificial neural network system 30, the formed artificial neural network system 30 is loaded onto another processor for execution.
In some embodiments, the aforementioned sub-neural network system 33 may also operate independently. In one embodiment, the processor 15 may have any sub-neural network system 33 for automatically classifying the surface type according to the obtained object image IM. In the learning stage, the sub-neural network system 33 performs deep learning with the obtained object image IM to build a prediction model of the surface morphology of the object. In the prediction stage, the object image IM generated by the processor 15 may be subsequently subjected to prediction classification by the sub-neural network system 33 using a prediction model to identify the surface type of the object. In some embodiments, the object image IM generated by the processor 15 can be fed to another processor having the aforementioned sub-neural network system 33, so that the sub-neural network system 33 can automatically classify the surface type according to the obtained object image IM.
In one embodiment, the image scanning system and the sub-neural network system 33 can be implemented on the same host. For example, the image scanning system and the sub-neural network system 33 are implemented on the same host, and the host has a processor 15 for controlling the operation of the image scanning system and executing the sub-neural network system 33. In another example, the image scanning system and the sub-neural network system 33 are implemented on the same host, and the host has a processor 15 for controlling the operation of the image scanning system and another processor for executing the sub-neural network system 33. In another embodiment, the image scanning system and the sub-neural network system 33 can be implemented on different hosts. In other words, the image scanning system and the sub-neural network system 33 are implemented on two different hosts. The two hosts can be connected in a wired or wireless communication manner to transmit information such as object images IM.
In some embodiments, where the sub-neural network system 33 operates alone, the creation (i.e., the learning phase) and the application (i.e., the prediction phase) of the sub-neural network system 33 can be implemented on different processors (or hosts). In other words, one processor has an untrained sub-neural network system 33 and trains it (performs deep learning) with a plurality of object images IM to build its prediction model. The trained sub-neural network system 33 is then loaded onto another processor to perform classification prediction.
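Where the learning phase and the prediction phase run on different processors or hosts, one straightforward way to hand over the trained prediction model is to serialize its weights on the training host and reload them on the prediction host. The sketch below assumes, purely for illustration, that the prediction model is implemented as a PyTorch module; only the standard state_dict save and load mechanism is shown.

```python
import torch

# On the training host: persist the trained prediction model of a sub-neural
# network system 33 (here assumed to be a torch.nn.Module named `model`).
def export_trained_model(model, path="sub_network_33.pt"):
    torch.save(model.state_dict(), path)

# On the prediction host: rebuild the same architecture and load the weights.
def import_trained_model(model_factory, path="sub_network_33.pt"):
    model = model_factory()                       # must build the identical architecture
    model.load_state_dict(torch.load(path, map_location="cpu"))
    model.eval()                                  # switch to prediction (inference) mode
    return model
```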
For example, in one case, when the object 2 is a defective object, the surface of the object 2 has one or more surface types that the artificial neural network system has learned to extract, so that at least one sub-neural network system 33 can pick it out; conversely, when the object 2 is a qualified object, the surface of the object 2 has none of the learned surface types and therefore does not trigger the picking action of any sub-neural network system 33. In the learning stage, part of the object images IM received by the sub-neural network system 33 carry category labels of one or more surface types, and the other part carry category labels of no surface type. Furthermore, the output of the sub-neural network system 33 is preset with a plurality of surface type categories according to these surface types. In another case, when the object 2 is a defective object, the surface of the object 2 has one or more first surface types that the artificial neural network system has learned to extract; conversely, when the object 2 is a qualified object, the surface of the object 2 has another kind of surface type that the artificial neural network system has also learned to extract, which may be, for example, a standard surface type. In the learning stage, part of the object images IM received by the sub-neural network system 33 carry category labels of one or more first surface types, and the other part carry category labels of one or more second surface types. Furthermore, the output of the sub-neural network system 33 is preset with a plurality of surface type categories according to these surface types.
Referring to FIG. 52, when the surface of the object 2 has at least one surface type, the corresponding image positions of the object image IM of the object also show the partial images P01-P09 of the surface type.
In some embodiments, in the learning phase, the object images IM received by the sub-neural network system 33 are of known surface type (i.e. labeled with the target surface type present on them), and the surface type categories output by the sub-neural network system 33 are also set accordingly. In other words, each object image IM used for deep learning is marked with the surface type it exhibits. In some embodiments, the category label of the surface type may be presented as a mark pattern on the object image IM (as shown in fig. 52), and/or recorded as object information in the image information of the object image IM.
In some embodiments, during the learning phase, the sub-neural network system 33 is trained using object images IM of known surface morphology to generate the judgment items of each neuron in the prediction model and/or to adjust the weights of the connections between neurons, so that the prediction result (i.e., the output surface defect type) of each object image IM conforms to the known, labeled surface morphology, thereby establishing a prediction model for identifying the surface morphology of the object.
In the prediction stage, the sub-neural network system 33 performs classification prediction on object images IM of unknown surface type through the established prediction model. In some embodiments, the sub-neural network system 33 performs percentage prediction on the object images IM with respect to the surface type categories, i.e., determines, for each object image IM, the percentage with which it may fall into each surface type category. The sub-neural network system 33 then determines in sequence, according to these per-category percentages, whether the object 2 corresponding to each object image IM is qualified, and accordingly classifies the object image IM into a normal group or an abnormal group.
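The percentage prediction and the subsequent grouping can be sketched as follows. This is only an illustration: it assumes the prediction model is a PyTorch module whose output logits are turned into per-category percentages with a softmax, that six preset surface type categories are used, and that any category other than "no surface defect" marks the object as abnormal; none of these specifics are mandated by the disclosure.

```python
import torch

SURFACE_TYPE_CATEGORIES = [
    "sand_hole_or_air_hole", "scratch_or_impact_mark",
    "high_roughness", "low_roughness", "attachment", "no_surface_defect",
]

def classify_object_image(model, object_image_tensor, threshold=0.5):
    """Return (per-category percentages, 'normal' or 'abnormal').
    object_image_tensor: float tensor of shape (1, channels, height, width)."""
    with torch.no_grad():
        logits = model(object_image_tensor)              # shape (1, 6)
        percentages = torch.softmax(logits, dim=1)[0]    # probability per category
    defect_score = 1.0 - percentages[SURFACE_TYPE_CATEGORIES.index("no_surface_defect")]
    group = "abnormal" if defect_score.item() >= threshold else "normal"
    return percentages.tolist(), group
```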
In some embodiments, processor 15 includes one or more sub-neural network systems 33. In the learning stage, the object image input to each sub-neural network system 33 is of a known surface type, and after the object image of the known surface type is input, each sub-neural network system 33 performs deep learning to establish a prediction model (i.e., composed of a plurality of hidden layers connected in sequence, each hidden layer having one or more neurons, each neuron performing a judgment item) according to the known surface type and the surface type class of the known surface type (hereinafter referred to as a preset surface type class). In other words, in the learning stage, the object image with the known surface type is used to generate the judgment items of each neuron and/or adjust the weight of the connection between any two neurons, so that the prediction result (i.e. the output preset surface type) of each object image conforms to the known surface type.
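For concreteness, a prediction model of the kind described (sequentially connected hidden layers, each with one or more neurons) could look like the small convolutional network below, a convolutional architecture being one option contemplated by the claims. The single input channel, the 128x128 input size and the six output categories are assumptions made only for this sketch, not the specific architecture of the disclosure.

```python
import torch.nn as nn

NUM_CATEGORIES = 6   # assumed number of preset surface type categories

def build_prediction_model():
    """A small CNN standing in for the prediction model of one sub-neural network system 33."""
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        nn.Flatten(),
        nn.Linear(64 * 16 * 16, 128), nn.ReLU(),   # fully connected hidden layer
        nn.Linear(128, NUM_CATEGORIES),            # one output neuron per category
    )
```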
For example, the aforementioned surface types may be sand holes or air holes, bumps or scratches. The image areas representing different surface types may be: image areas showing sand holes of different depths; image areas without sand holes but with bumps or scratches; image areas of different surface roughness; image areas without surface defects; image areas whose surface types exhibit different depth ratios because the surface areas 21A to 21C are irradiated with detection light L1 of different wavelengths to generate different contrasts; or image areas showing attachments of different colors. In the learning stage, the sub-neural network system 33 performs deep learning according to the object images of the various surface types to build a prediction model for identifying these surface types. Furthermore, the sub-neural network system 33 classifies the object images of different surface types in advance to generate different preset surface type categories. Thus, in the prediction stage, after an object image IM is fed in, the artificial neural network system 30 (or each sub-neural network system 33) executes the prediction model on the input object image to identify the image areas representing the surface type of the object 2, and the prediction model classifies the object image according to the plurality of preset surface type categories.
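A corresponding training loop, again only as an illustrative sketch, adjusts the connection weights so that the predicted category matches the labeled surface type. It assumes PyTorch, a data loader yielding (image tensor, integer category label) pairs, and the build_prediction_model helper sketched above.

```python
import torch
import torch.nn.functional as F

def train_sub_network(model, data_loader, epochs=10, learning_rate=1e-3):
    """Deep-learning pass for one sub-neural network system 33.
    data_loader yields (images, labels): images of shape (N, 1, 128, 128),
    labels are integer indices of the known (labeled) surface type categories."""
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    model.train()
    for _ in range(epochs):
        for images, labels in data_loader:
            optimizer.zero_grad()
            logits = model(images)
            loss = F.cross_entropy(logits, labels)   # mismatch with the labeled surface type
            loss.backward()                          # adjusts the connection weights
            optimizer.step()
    return model
```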
For example, taking one sub-neural network system 33 as an example, the sub-neural network system 33 executes the above prediction model on the fed object images. From the object images IM, the sub-neural network system 33 may recognize that the surface area 21A of a first object 2 includes sand holes and scratches, that the surface area 21B of a second object 2 has no surface defect, that the surface area 21C of a third object 2 includes sand holes and paint, and that the surface roughness of the surface area 21A is greater than the surface roughness of the surface area 21C. Then, taking six preset surface type categories as an example (sand hole or air hole, scratch or impact mark, high roughness, low roughness, having an attachment, and having no surface defect), the sub-neural network system 33 can classify the object image IM of the first object 2 into the preset categories of sand hole or air hole and of scratch or impact mark, classify the object image IM of the second object 2 into the preset category of having no surface defect, and classify the object image IM of the third object 2 into the preset categories of sand hole or air hole and of having an attachment; in addition, since the surface roughness of the surface area 21A is greater than that of the surface area 21C, the object image IM of the first object 2 can be classified into the high roughness category and that of the third object 2 into the low roughness category. In another example, after the classification, the sub-neural network system 33 may further output the object images IM of the first object 2 and the third object 2 into an abnormal group and output the object image IM of the second object 2 into a normal group according to the classification result.
In some embodiments, the artificial neural network system 30 or any sub-neural network system 33 according to the present invention may be implemented as a computer program product, so that the method for screening the surface type of an object based on an artificial neural network according to any of the above embodiments can be performed when a computer (i.e., the aforementioned processor) loads and executes the program. In some embodiments, the computer program product may be a non-transitory computer readable recording medium, and the program is stored in the non-transitory computer readable recording medium and loaded into a computer (i.e., a processor). In some embodiments, the program itself may be a computer program product and is transmitted to the computer by wire or wirelessly.

Claims (14)

1. A method for screening surface patterns of objects based on an artificial neural network is suitable for screening a plurality of objects, and is characterized by comprising the following steps:
performing surface type recognition on a plurality of object images by using a plurality of prediction models to obtain the judgment defect rate of each prediction model, wherein the plurality of object images correspond to the surface type of one part of the plurality of objects;
and connecting the plurality of prediction models in series to form an artificial neural network system according to the judgment defect rate of each prediction model so as to screen the rest of the plurality of objects.
2. The method of claim 1, further comprising:
converting each object image into a matrix;
wherein one of the plurality of prediction models performs the surface type recognition with the matrix.
3. The method of claim 1, further comprising:
normalizing the plurality of object images;
converting the normalized object images into a matrix;
wherein one of the plurality of prediction models performs the surface type recognition with each of the matrices.
4. The method of claim 1, further comprising:
overlapping the plurality of object images corresponding to the same object to form an initial image;
wherein one of the plurality of prediction models performs the surface type recognition with each of the initial images.
5. The method of claim 1, further comprising:
overlapping the plurality of object images corresponding to the same object to form an initial image;
converting each initial image into a matrix;
wherein one of the plurality of prediction models performs the surface type recognition with each of the matrices.
6. The method of claim 1, wherein each of the prediction models is implemented by a Convolutional Neural Network (CNN) algorithm.
7. The method of claim 1, wherein each object image is formed by stitching a plurality of detection images.
8. The method of claim 1, wherein the plurality of prediction models have different numbers of neural network layers.
9. The method of claim 1, wherein the plurality of prediction models have different neuron configurations.
10. The method of claim 1, further comprising: feeding a plurality of object images corresponding to the rest of the plurality of objects into the artificial neural network system to perform the surface type recognition.
11. The method of claim 1, further comprising: screening the rest of the plurality of objects preferentially with the prediction model having a higher judgment defect rate.
12. The method of claim 1, wherein the plurality of object images correspond to surface types of objects with a known defect rate, and the judgment defect rate of one of the prediction models is higher than the known defect rate.
13. The method of claim 1, further comprising: performing deep learning under different training conditions to build the plurality of prediction models.
14. The method of claim 1, further comprising: performing a plurality of deep learning processes to respectively build the plurality of prediction models.
CN201910987145.8A 2019-10-17 2019-10-17 Method for screening surface form of object based on artificial neural network Pending CN112683923A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910987145.8A CN112683923A (en) 2019-10-17 2019-10-17 Method for screening surface form of object based on artificial neural network

Publications (1)

Publication Number Publication Date
CN112683923A true CN112683923A (en) 2021-04-20

Family

ID=75444660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910987145.8A Pending CN112683923A (en) 2019-10-17 2019-10-17 Method for screening surface form of object based on artificial neural network

Country Status (1)

Country Link
CN (1) CN112683923A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050002560A1 (en) * 2003-05-29 2005-01-06 Nidek Co., Ltd. Defect inspection apparatus
US20190073568A1 (en) * 2017-09-06 2019-03-07 Kla-Tencor Corporation Unified neural network for defect detection and classification
CN108520274A (en) * 2018-03-27 2018-09-11 天津大学 High reflecting surface defect inspection method based on image procossing and neural network classification
CN109146843A (en) * 2018-07-11 2019-01-04 北京飞搜科技有限公司 Object detection method and device based on deep neural network
CN109118119A (en) * 2018-09-06 2019-01-01 多点生活(成都)科技有限公司 Air control model generating method and device
CN109712113A (en) * 2018-11-28 2019-05-03 中原工学院 A kind of fabric defect detection method based on cascade low-rank decomposition
CN109816158A (en) * 2019-01-04 2019-05-28 平安科技(深圳)有限公司 Combined method, device, equipment and the readable storage medium storing program for executing of prediction model
CN110136154A (en) * 2019-05-16 2019-08-16 西安电子科技大学 Remote sensing images semantic segmentation method based on full convolutional network and Morphological scale-space
CN110163858A (en) * 2019-05-27 2019-08-23 成都数之联科技有限公司 A kind of aluminium shape surface defects detection and classification method and system
CN110222681A (en) * 2019-05-31 2019-09-10 华中科技大学 A kind of casting defect recognition methods based on convolutional neural networks

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI823256B (en) * 2022-02-16 2023-11-21 新加坡商光寶科技新加坡私人有限公司 Light color coordinate estimation system and deep learning method thereof
TWI818880B (en) * 2023-03-31 2023-10-11 宸祿科技股份有限公司 Image fusion system and method for parallelization optimization of image fusion algorithm

Similar Documents

Publication Publication Date Title
US20200364442A1 (en) System for detecting surface pattern of object and artificial neural network-based method for detecting surface pattern of object
EP3531114B1 (en) Visual inspection device and illumination condition setting method of visual inspection device
JP4719284B2 (en) Surface inspection device
US20170191946A1 (en) Apparatus for and method of inspecting surface topography of a moving object
US20070211242A1 (en) Defect inspection apparatus and defect inspection method
CN104508423A (en) Method and device for inspecting surfaces of an examined object
KR20100015628A (en) Lumber inspection method, device and program
WO2010028353A1 (en) Wafer edge inspection
CN211347985U (en) Machine vision detection device applied to surface detection industry
CN112683923A (en) Method for screening surface form of object based on artificial neural network
CN112683924A (en) Method for screening surface form of object based on artificial neural network
US20170045448A1 (en) Apparatus of Detecting Transmittance of Trench on Infrared-Transmittable Material and Method Thereof
CN115809984A (en) Workpiece inspection and defect detection system using color channels
EP3465169B1 (en) An image capturing system and a method for determining the position of an embossed structure on a sheet element
CN112683789A (en) Object surface pattern detection system and detection method based on artificial neural network
KR20230139166A (en) Inspection Method for Wood Product
CA2153647A1 (en) Method and apparatus for recognizing geometrical features of parallelepiped-shaped parts of polygonal section
CN112666180A (en) Automatic dispensing detection method and system
CN112683786A (en) Object alignment method
CN112683787A (en) Object surface detection system and detection method based on artificial neural network
CN112683921A (en) Image scanning method and image scanning system for metal surface
CN112686831A (en) Method for detecting surface form of object based on artificial neural network
CN112683790A (en) Image detection scanning method and system for possible defects on surface of object
WO2014005085A1 (en) Systems for capturing images of a document
CN112683788A (en) Image detection scanning method and system for possible defects on surface of object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination