CN112683789A - Object surface pattern detection system and detection method based on artificial neural network - Google Patents


Info

Publication number
CN112683789A
Authority
CN
China
Prior art keywords
light
image
images
detection
photosensitive element
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910987176.3A
Other languages
Chinese (zh)
Inventor
蔡昆佑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitac Computer Kunshan Co Ltd
Getac Technology Corp
Original Assignee
Mitac Computer Kunshan Co Ltd
Getac Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitac Computer Kunshan Co Ltd, Getac Technology Corp filed Critical Mitac Computer Kunshan Co Ltd
Priority to CN201910987176.3A priority Critical patent/CN112683789A/en
Publication of CN112683789A publication Critical patent/CN112683789A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

An object surface pattern detection system and an artificial neural network-based detection method thereof are provided. The detection method comprises the following steps: receiving a plurality of object images, wherein the object images are obtained by capturing images of an object under light from different lighting directions, and the light incident angle is less than or equal to 90 degrees; superposing the object images of each object to form an initial image; and performing deep learning with the plurality of initial images to build a prediction model that identifies the surface morphology of the object. The object surface form detection system and the artificial neural network-based detection method can increase the speed of identifying the surface morphology of an object and thereby improve product yield.

Description

Object surface pattern detection system and detection method based on artificial neural network
[ technical field ]
The invention relates to an object surface pattern detection system and an object surface pattern detection method based on an artificial neural network, and in particular to a system that can learn and automatically detect various grooves, cracks, bumps, and patterns on the surface of an object, together with its neural network training system.
[ background of the invention ]
Many safety protection devices, such as seat belts, are made up of numerous small structural elements. If these small structural elements are not strong enough or have other defects, the safety protection they provide may be compromised.
These small or micro structural elements may have minute holes, cracks, bumps, and textures on their surfaces for various reasons, such as inadvertent impacts or mold defects, and such flaws are not easily detected. One existing defect detection method is for an inspector to observe the product with the naked eye or feel it by hand; however, manual inspection of products for defects is inefficient and prone to misjudgment.
[ summary of the invention ]
In view of the above, the object surface morphology detection system and the object surface morphology detection method based on an artificial neural network of the present invention perform training by combining multi-angle image capture (i.e., different lighting directions) with multi-dimensional superposition preprocessing, so as to improve recognition of the three-dimensional structural characteristics of an object without increasing computation time.
In one embodiment, an object surface pattern detection method based on an artificial neural network comprises: receiving a plurality of object images of a plurality of objects, wherein the object images of each object are captured under light from a plurality of lighting directions, the lighting directions being different from each other; superposing the object images of each object to form an initial image; and performing deep learning with the initial images of the objects to build a prediction model that identifies the surface morphology of the objects.
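The "multi-dimensional superposition" step above can be sketched as stacking the per-orientation images along a channel axis, so each pixel of the initial image carries the shading cues from every lighting direction. This is a minimal pure-Python illustration; the function name and the four-orientation layout are assumptions for the example, not the patent's implementation.

```python
def superpose(object_images):
    """Stack same-sized grayscale images into one multi-channel initial image.

    object_images: list of H x W grids (lists of lists of ints), one per
    lighting orientation. Returns an H x W grid whose entries are tuples
    holding the pixel value from each orientation.
    """
    if not object_images:
        raise ValueError("need at least one object image")
    h, w = len(object_images[0]), len(object_images[0][0])
    for img in object_images:
        if len(img) != h or any(len(row) != w for row in img):
            raise ValueError("all object images must share the same size")
    return [
        [tuple(img[y][x] for img in object_images) for x in range(w)]
        for y in range(h)
    ]

# Example: four 2x2 images from four hypothetical lighting orientations
# (left, front, right, back) become one 2x2 initial image with four
# channels per pixel, ready to feed a deep-learning model as one sample.
left  = [[10, 20], [30, 40]]
front = [[11, 21], [31, 41]]
right = [[12, 22], [32, 42]]
back  = [[13, 23], [33, 43]]
initial = superpose([left, front, right, back])
```

In a real pipeline this channel-stacked tensor would be the training input, so the network sees all lighting directions of one surface block at once instead of as separate samples.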
In one embodiment, an object surface type detection system includes a driving assembly, a plurality of light source assemblies, and a photosensitive element. The driving assembly carries an object whose surface is divided into a plurality of surface blocks along a first direction, and the driving assembly sequentially displaces one of the surface blocks to a detection position. The light source assemblies are arranged facing the detection position in different lighting directions and each provides light to illuminate the detection position, wherein the light incident angle of the light provided by each light source assembly is less than or equal to 90 degrees relative to the normal of the surface block located at the detection position. The photosensitive element is arranged facing the detection position and sequentially captures a detection image of each surface block while the light illuminates the detection position from each lighting direction.
In summary, in the object surface pattern detection system and the object surface pattern detection method based on an artificial neural network according to the embodiments of the present invention, object images of the same object with different imaging effects can be obtained by controlling imaging light sources with different incident angles, thereby improving the spatial distinction of the various surface patterns of the object under image detection. The object images captured in different lighting directions can be integrated and superposed multi-dimensionally to improve identification of the object surface pattern, so that an optimal analysis of the object surface pattern is obtained. Furthermore, multi-spectral surface images can be integrated to enhance recognition of the surface pattern of the object.
[ description of the drawings ]
FIG. 1 is a schematic diagram of an object surface type detection system according to an embodiment of the invention.
FIG. 2 is a block diagram of an object surface type detection system according to an embodiment of the present invention.
FIG. 3 is a schematic diagram illustrating an embodiment of relative optical positions among the object, the light source module and the photosensitive elements in FIG. 1.
FIG. 4 is a schematic diagram of an object surface type detection system according to another embodiment of the invention.
FIG. 5 is a schematic diagram illustrating an embodiment of relative optical positions among the object, the light source assembly and the photosensitive elements shown in FIG. 4.
FIG. 6 is a schematic diagram illustrating another embodiment of the relative optical positions of the object, the light source module and the photosensitive elements shown in FIG. 4.
FIG. 7 is a diagram illustrating an embodiment of an object image.
FIG. 8 is a schematic diagram of another embodiment of an object image.
FIG. 9 is a flowchart illustrating a method for detecting a surface topography of an object based on an artificial neural network according to an embodiment of the present invention.
FIG. 10 is a schematic view of an embodiment of a surface pattern of the surface block.
FIG. 11 is a schematic view of the object image of the surface block of FIG. 10 in the lighting orientation of the light source assembly 502.
FIG. 12 is a schematic view of the object image of the surface block of FIG. 10 in the lighting orientation of the light source assembly 501.
FIG. 13 is a schematic view of the object image of the surface block of FIG. 10 in the lighting orientation of the light source assembly 503.
FIG. 14 is a schematic view of the object image of the surface block of FIG. 10 in the lighting orientation of the light source assembly 504.
FIG. 15 is a diagram illustrating an exemplary initial image.
FIG. 16 is a flowchart illustrating a method for detecting a surface type of an object based on an artificial neural network according to another embodiment of the present invention.
FIG. 17 is a flowchart illustrating a method for detecting a surface type of an object based on an artificial neural network according to another embodiment of the present invention.
[ detailed description ]
Refer to FIG. 1 and FIG. 2. In one embodiment, the object surface type detection system 1 is adapted to scan the object 10 to obtain object images of the object 10 at different lighting orientations. In some embodiments, the surface of the object 10 may have at least one surface type (e.g., surface structures such as grooves, cracks, bumps, undulations, edges, surface defects, surface roughness, and micro-patterns), and the image of the surface type is presented in the corresponding object image. The surface defect is a three-dimensional structure. Here, the three-dimensional structure ranges from sub-micron (<1 μm, e.g., 0.1 μm to 1 μm) to micron (μm) size; that is, the longest side or diameter of the three-dimensional microstructure is between the sub-micron and micron scales. The three-dimensional structure may be, for example, a microstructure of 300 nm to 6 μm.
The object surface type detection system 1 includes a processor 30, a driving assembly 20, a plurality of light source assemblies 501/502/503/504, and a photosensitive element 40. The processor 30 is coupled to the driving assembly 20, the light source assemblies 501/502/503/504, and the photosensitive element 40. The driving assembly 20 carries the object 10 and is provided with a detection position. The light source assemblies 501/502/503/504 and the photosensitive element 40 are disposed facing the detection position from different angles, and the light source assemblies 501/502/503/504 provide light to the image-capturing target (i.e., the detection position) from different lighting directions. In other words, the light source assemblies 501/502/503/504 are arranged in a plurality of different lighting orientations relative to the detection position so as to face it. Thus, the object surface type detection system 1 can obtain object images having optimal spatial information of surface features. In one embodiment, the plurality of lighting orientations includes at least the front side, rear side, left side, and right side of the detection position, as shown in FIG. 1. That is, the light source assembly 502 is provided on the front side of the detection position, the light source assembly 504 on the rear side, the light source assembly 501 on the left side, and the light source assembly 503 on the right side. Herein, the light source assemblies sequentially provide light to the detection position.
In one embodiment, the light L provided by the light source assembly may be visible light, such that surface features of sub-micron to micron scale on the surface of the object 10 are imaged in the detection image. In one embodiment, the wavelength of the light L may be between 380 nm and 780 nm, depending on the material characteristics of the object to be detected and the required surface spectral reflectivity. In some embodiments, the visible light may be, for example, any one of white light, violet light, blue light, green light, yellow light, orange light, and red light. For example, the light L may be white light with a wavelength of 380 nm to 780 nm, blue light with a wavelength of 450 nm to 475 nm, green light with a wavelength of 495 nm to 570 nm, or red light with a wavelength of 620 nm to 750 nm.
Refer to FIG. 3. In one embodiment, a light incident angle θ of the light L is less than or equal to 90 degrees with respect to the normal of the surface block 10a/10b/10c of the object 10 located at the detection position. Herein, the light incident angle θ is the angle between the incident direction of the detection light L and the normal C of the surface block at the detection position, and the light incident angle θ is greater than 0 degrees and less than or equal to 90 degrees; that is, the detection light L irradiates the surface block at the detection position at a light incident angle θ greater than 0 degrees and less than or equal to 90 degrees with respect to the normal C. In some embodiments, the light incident angle θ of the light L may be greater than or equal to a critical angle and less than or equal to 90 degrees. The critical angle may be related to the surface type expected to be detected; here, the expected surface type may be the target surface type of minimum size among the surface types the user wishes to detect. In some embodiments, the light incident angle θ is related to the depth ratio of the surface type expected to be detected. In some embodiments, the critical angle may be arctangent(r/d), where d is the hole depth of the surface type expected to be detected and r is the hole radius. That is, the light incident angle θ may be greater than or equal to arctangent(r/d).
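The arctangent(r/d) relation above can be checked numerically: light grazing the rim of a hole of radius r reaches the bottom edge only when the incident angle from the normal is at least arctan(r/d). The function name and the sample values below are illustrative.

```python
import math

def critical_angle_deg(r, d):
    """Minimum light incident angle (degrees from the normal) for a hole
    of radius r and depth d, per the arctangent(r/d) relation in the text."""
    if r <= 0 or d <= 0:
        raise ValueError("radius and depth must be positive")
    return math.degrees(math.atan(r / d))

# A hole as wide as it is deep (r == d) gives a 45-degree critical angle;
# deeper holes push the critical angle toward 0 degrees, shallower ones
# toward 90 degrees.
angle = critical_angle_deg(1.0, 1.0)
```

This matches the text's bound: the usable incident angle θ lies between the critical angle and 90 degrees, so shallow wide holes constrain the lighting geometry more than deep narrow ones.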
In one embodiment, the light source modules of the object surface type detection system 1 provide light at the same incident angle. In one embodiment, the photosensitive axis of the photosensitive element 40 is parallel to the normal direction C.
Reference is continued to FIG. 1. During the operation of the object surface type detection system 1, the surface of the object 10 can be divided into a plurality of surface blocks 10a/10b/10c along the first direction E, and the driving assembly 20 sequentially displaces one of the surface blocks 10a, 10b, or 10c to the detection position. Although three surface blocks 10a/10b/10c are exemplarily indicated in FIG. 1, the present disclosure is not limited thereto; the surface of the object 10 may be divided into any other number of surface blocks according to actual requirements, such as 3, 5, 11, 15, or 20 blocks.
In one embodiment, the photosensitive element 40 is disposed facing the detection position, and sequentially captures the detection images of the surface blocks 10a/10b/10c when the light L irradiates the detection position in the respective lighting directions. For example, in the inspection process, the driving assembly 20 first displaces the surface area 10a to the inspection position, and the photosensitive element 40 captures an inspection image of the surface area 10a when the surface area 10a is irradiated by the inspection light provided by the light source assembly 501. Then, the light source assembly 502 provides the detection light to irradiate the surface area 10a, and the photosensitive element 40 captures the detection image of the surface area 10 a. Then, the light source assembly 503 provides the detecting light to illuminate the surface area 10a, and the photosensitive element 40 captures the detecting image of the surface area 10 a. And so on until the detection images of the surface area 10a in all different lighting orientations are captured. Then, the driving assembly 20 displaces the object 10 to move the surface area 10b to the detection position, and the photosensitive element 40 captures a detection image of the surface area 10b when the surface area 10b is irradiated by the detection light provided by the light source assembly 501. The same process is repeated until the detection images of the surface area 10b at all different lighting orientations are captured. By analogy, detection images of all the surface areas in different lighting directions can be obtained.
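The block-major capture sequence just described (every lighting orientation for one surface block before the drive assembly advances to the next block) can be sketched as a nested loop. The hardware interactions are stand-in callables; all names are illustrative, not from the patent.

```python
def scan_block_major(blocks, orientations, move_to, illuminate, capture):
    """Return {(block, orientation): image} in block-major capture order."""
    images = {}
    for block in blocks:              # drive assembly displaces a block
        move_to(block)                # ...to the detection position
        for orient in orientations:   # each light source assembly in turn
            illuminate(orient)
            images[(block, orient)] = capture()
    return images

# Minimal dry run with recording stand-ins for the drive assembly,
# light sources, and photosensitive element.
log = []
imgs = scan_block_major(
    ["10a", "10b"], [501, 502],
    move_to=lambda b: log.append(("move", b)),
    illuminate=lambda o: log.append(("light", o)),
    capture=lambda: len(log),
)
```

The insertion order of `imgs` mirrors the paragraph: both orientations of block 10a are captured before block 10b moves into place.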
In one embodiment, the photosensitive element 40 is disposed facing the detection position; when the light L irradiates the detection position from one of the lighting orientations, each surface block 10a/10b/10c of the object 10 reaches the detection position in sequence, and the photosensitive element 40 captures the detection image of each surface block 10a/10b/10c in sequence. For example, while the light source assembly 501 provides the light L to the detection position, the surface blocks 10a/10b/10c sequentially reach the detection position, and the photosensitive element 40 sequentially captures the detection images of the surface blocks 10a, 10b, and 10c of the object 10 located at the detection position. Likewise, while the light source assembly 502 provides the light L to the detection position, the surface blocks 10a/10b/10c sequentially reach the detection position, and the photosensitive element 40 sequentially captures their detection images. And so on, to obtain the detection images of the surface blocks under the different lighting orientations.
In an embodiment, the optical axes of any two adjacent light source modules in the plurality of light source modules have the same predetermined included angle. As shown in fig. 1, in an exemplary embodiment, the predetermined angle between the light source module 501 and the light source module 502 is 90 degrees, the predetermined angle between the light source module 502 and the light source module 503 is 90 degrees, the predetermined angle between the light source module 503 and the light source module 504 is 90 degrees, and the predetermined angle between the light source module 504 and the light source module 501 is 90 degrees. In one embodiment, the total angle of the predetermined included angles between the light source assemblies is 360 degrees. Herein, the predetermined included angle may refer to an included angle in an incident direction of light L (also referred to as an optical axis) of two adjacent light source modules.
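The angular layout above reduces to simple arithmetic: with N light source assemblies spread evenly around the detection position, each predetermined included angle is 360/N degrees, which reproduces the four 90-degree gaps of FIG. 1. A tiny illustrative helper:

```python
def included_angle_deg(n_sources):
    """Included angle between adjacent optical axes when n_sources light
    source assemblies are evenly distributed around the detection position."""
    if n_sources < 2:
        raise ValueError("need at least two light source assemblies")
    return 360.0 / n_sources

# The four-assembly layout of FIG. 1: 90 degrees between neighbors,
# summing to a full 360 degrees around the detection position.
angle = included_angle_deg(4)
```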
Refer to FIG. 4 and FIG. 5. In one embodiment, the object surface type detection system 1 includes a photosensitive module 41 composed of a photosensitive element 40 and a light splitting assembly 46. The light splitting assembly 46 is located between the photosensitive element 40 and the detection position; it can also be said that the light splitting assembly 46 is located between the photosensitive element 40 and the object 10. The light splitting assembly 46 has a plurality of filter regions 462/464/466 respectively corresponding to a plurality of spectra, and a displacement assembly 460. Here, the light source assembly 50 (i.e., any one of the light source assemblies 501/502/503/504) provides multi-spectrum light to illuminate the detection position, the multi-spectrum light having sub-light of several spectra. The displacement assembly 460 switches among the filter regions 462/464/466 of the light splitting assembly 46 by driving a filter region onto the photosensitive axis D of the photosensitive element 40, so that the photosensitive element 40 captures, through each filter region 462/464/466, detection images of a plurality of different spectra of the surface block at the detection position under the sub-light of each spectrum. That is, when the multi-spectrum light from the light source assembly 50 irradiates the object 10 at the detection position, it is reflected by the surface of the object 10; the reflected light is filtered by one of the filter regions 462/464/466 of the light splitting assembly 46 into sub-light having the spectrum corresponding to that filter region, and then enters the sensing region of the photosensitive element 40. At this time, the sub-light reaching the photosensitive element 40 has only a single spectrum (the center value of the light band).
When the same filter 462/464/466 is aligned with the photosensitive axis D of the photosensitive element 40, the driving assembly 20 shifts one surface block 10a/10b/10c to the detecting position each time, and after each shift, the photosensitive element 40 captures a detection image of the surface block 10a/10b/10c currently located at the detecting position, thereby obtaining detection images of all surface blocks 10a/10b/10c under the same spectrum. Then, the light splitting assembly 46 is switched to another filter region 462/464/466 to align with the photosensitive axis D of the photosensitive element 40, and sequentially shifts the surface segment again and captures the detection image of the surface segment 10a/10b/10 c. By analogy, a detection image having a spectrum corresponding to each filter region 462/464/466 can be obtained.
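The spectrum-major sequence just described — one filter region held on the photosensitive axis while every surface block is imaged, then the displacement assembly switches to the next region — can be sketched as the opposite loop nesting. The filter numerals follow the text; all callables and names are illustrative stand-ins.

```python
def scan_spectrum_major(filter_regions, blocks, select_filter, move_to, capture):
    """Return {(filter_region, block): image}: all blocks per spectrum."""
    images = {}
    for region in filter_regions:   # displacement assembly switches a
        select_filter(region)       # ...filter region onto the axis D
        for block in blocks:        # then every surface block is imaged
            move_to(block)
            images[(region, block)] = capture()
    return images

# Dry run over two filter regions and two surface blocks.
order = []
imgs = scan_spectrum_major(
    [462, 464], ["10a", "10b"],
    select_filter=lambda r: order.append(("filter", r)),
    move_to=lambda b: order.append(("block", b)),
    capture=lambda: None,
)
```

Note the trade-off against the block-major order described earlier: this order minimizes filter switches at the cost of re-traversing every surface block once per spectrum.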
In some embodiments, the photosensitive element 40 may include a displacement element 460, and the displacement element 460 couples the light splitting element 46 and the processor 30. During operation of the object surface type detection system 1, the displacement assembly 460 sequentially moves one of the filter regions 462/464/466 of the light splitting assembly 46 onto the photosensitive axis D of the photosensitive element 40 under the control of the processor 30.
Refer to fig. 3. In one embodiment, the light source module 501 provides light beams with a plurality of spectra by using a plurality of light-emitting elements (not shown) with different spectra, and the light-emitting elements with different spectra are sequentially activated, so that the light-sensing element 40 can obtain detection images with a plurality of different spectra. The plurality of different spectrums can be any of visible lights such as white light, purple light, blue light, green light, yellow light, orange light, red light and the like. The light-emitting elements correspond to a plurality of non-overlapping light bands respectively, and the light bands can be continuous or discontinuous. In some embodiments, each light emitting element may be implemented by one or more Light Emitting Diodes (LEDs). In some embodiments, each light emitting element can be implemented by a laser light source. For example, the light source assembly 50 includes a red LED, a blue LED, and a green LED, and different LEDs are sequentially illuminated to obtain a red light spectrum detection image, a blue light spectrum detection image, and a green light spectrum detection image, respectively.
Refer to fig. 6. In one embodiment, the light source assembly 50 includes a light emitting element 52 and a light splitting assembly 56. The light-splitting component 56 is located between the light-emitting device 52 and the detection position, and it can be said that the light-splitting component 56 is located between the light-emitting device 52 and the object 10. The light splitting assembly 56 has a plurality of filter regions 562/564/566 corresponding to a plurality of spectra, respectively, and a displacement assembly 560. At this time, the light emitting element 52 provides a multi-spectrum light to irradiate toward the detection position. Here, the multi-spectrum light has several spectra of sub-light. Therefore, the shifting module 560 switches the filter regions 562/564/566 of the light splitting module 56 to drive one of the filter regions to shift to the front of the light emitting device 52, so that the light emitting device 52 irradiates the surface area of the object 10 at the detection position under the sub-light of each spectrum through each filter region 562/564/566, so that the photosensitive device 40 captures a plurality of detection images of different spectra. That is, when the multi-spectrum light emitted from the light emitting device 52 is filtered by any one of the filter regions 562/564/566 of the light splitting assembly 56 to be sub-light having the spectrum corresponding to the filter region, and then the sub-light is irradiated onto the object 10 at the detection position, the surface of the object 10 will reflect the sub-light to reach the photosensitive device 40. 
When the same filter 562/564/566 is aligned with the light emitting device 52, the driving assembly 20 shifts one surface block 10a/10b/10c to the detecting position each time, and the photosensitive device 40 captures a detection image of the surface block 10a/10b/10c currently located at the detecting position after each shift, so as to obtain detection images of all surface blocks 10a/10b/10c under the same spectrum. Then, the light splitting assembly 56 is switched to another filter region 562/564/566 to align with the light emitting device 52, and sequentially shifts the surface area again and captures the inspection image of the surface area 10a/10b/10 c. By analogy, a detection image having a spectrum corresponding to each filter region 562/564/566 can be obtained. In other words, the light source assembly 50 uses multi-spectrum light emitted by one light emitting element to illuminate the detection position, and then passes the multi-spectrum light through the light splitting assembly 56 to form sub-light of a single spectrum to illuminate the detection position, so that the light sensing element 40 can obtain a plurality of detection images of different spectrums.
In some embodiments, the wavelength bands of the spectrum of the multi-spectrum light provided by the light source module 50 may be between 380nm and 750nm, and the wavelength bands of the spectrum respectively allowed to pass through the plurality of filter regions 462/464/466/562/564/566 of the light splitting module 46/56 may be any non-overlapping sections between 380nm and 750 nm. Herein, the wavelength bands of the spectrum that the plurality of filter regions 462/464/466/562/564/566 of the light splitting assembly 46/56 individually allow to pass through can be continuous or discontinuous. For example, when the wavelength band of the multi-spectral light can be between 380nm to 750nm, the wavelength bands of the spectrum respectively allowed to pass through by the plurality of filter regions of the light splitting assembly 46/56 can be 380nm to 450nm, 450nm to 475nm, 476nm to 495nm, 495nm to 570nm, 570-590nm, 590nm to 620nm, and 620nm to 750nm, respectively. In another example, when the wavelength bands of the spectrum of the multi-spectrum light can be between 380nm and 750nm, the wavelength bands of the spectrum respectively allowed to pass through by the plurality of filter regions 462/464/466/562/564/566 of the light splitting assembly 46/56 can be 380nm-450nm, 495nm-570nm and 620nm-750nm, respectively.
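The non-overlap requirement on the filter pass bands above is easy to verify programmatically. This is an illustrative helper, not part of the patent; the band values are the ones quoted in the paragraph.

```python
def bands_disjoint(bands):
    """True if no two (low, high) wavelength bands in nm overlap."""
    ordered = sorted(bands)
    return all(
        a_hi <= b_lo
        for (_, a_hi), (b_lo, _) in zip(ordered, ordered[1:])
    )

# The discontinuous three-band example from the text: 380-450 nm,
# 495-570 nm, and 620-750 nm within the 380-750 nm multi-spectrum range.
rgb_bands = [(380, 450), (495, 570), (620, 750)]
ok = bands_disjoint(rgb_bands)
```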
In one embodiment, the object images of each object 10 further include images of the object 10 captured under another light in the plurality of lighting orientations, wherein the spectrum of the other light is different from the spectrum of the original light.
In a first exemplary embodiment, in the operation process of the object surface type detection system 1, under the condition that the light source assemblies in different lighting orientations sequentially emit the first light to irradiate the detection positions, the photosensitive elements 40 sequentially capture the detection images of the surface blocks 10a/10b/10c in different lighting orientations, and then under the condition that the light source assemblies in different lighting orientations sequentially emit the second light to irradiate the detection positions, the photosensitive elements 40 sequentially capture the detection images of the surface blocks 10a/10b/10c, and the first light and the second light have different spectrums.
In the second exemplary embodiment, when the light source assemblies 50 in different lighting orientations sequentially emit multi-spectrum light to illuminate the detection position, the photosensitive element 40 captures the detection images of the surface blocks 10a/10b/10c when the filter region 562 is displaced onto the photosensitive axis, and again when the filter region 564 is displaced onto the photosensitive axis, so as to obtain a plurality of detection images corresponding to the spectra of the filter region 562 and the filter region 564.
Through operation processes such as those of the first and second examples, the object surface type detection system 1 can obtain detection images of different spectra in the respective lighting orientations, thereby improving the spatial distinction of the various surface types of the object under image detection.
Reference is continued to fig. 1. In one embodiment, during the operation of the object surface type detecting system 1, the carrying element 22 carries the object 10, and the driving motor 24 rotates the carrying element 22 to drive the object 10 to move a plurality of surface blocks along the first direction to the detecting position. In one embodiment, if the object 10 is a plate, the surface of the object 10 may be a non-curved surface having a curvature equal to or close to zero. The object 10 is moved along the first direction E by the driving component 20, so as to sequentially move the surface blocks 10a/10b/10c of the object 10 to the detection position for the photosensitive module 40 to obtain the detection image. Here, the first direction E may be an extending direction of any side length (e.g. a long side) of the surface of the object 10. In an exemplary embodiment, the carrier 22 can be a planar carrier, and the driving motor 24 is coupled to one side of the planar carrier. At this time, the object 10 is removably disposed on the flat carrier plate during the inspection process. The driving motor 24 drives the planar carrying plate to move along the first direction E to drive the object 10 to move, so as to align a surface block to the detection position. Here, the driving motor 24 drives the plane-carrying board to move a predetermined distance each time, and drives the plane-carrying board to move repeatedly to sequentially move each surface block 10a/10b/10c to the detection position. Herein, the predetermined distance is substantially equal to the width of each surface segment 10a/10b/10c along the first direction E.
Reference is continued to fig. 4. In one embodiment, if the object 10 is cylindrical, the driving assembly 20 rotates the object 10 along the first direction a to sequentially move the surface blocks 10a/10b/10c of the object 10 to the detection positions for the photosensitive module 41 to obtain the detection images. In some embodiments, the first direction a may be a clockwise direction or a counterclockwise direction. In some embodiments, the surface of the object 10 is divided into nine surface zones, but not limited thereto.
In one embodiment, the driving assembly 20 includes a carrying element 22 and a driving motor 24. The driving motor 24 is connected to the carrying element 22. During operation of the object surface type detection system 1, the carrying element 22 carries the object 10, and the driving motor 24 rotates the carrying element 22 so as to rotate the object 10 and sequentially displace the plurality of surface blocks to the detection position.
In one example, the carrying element 22 may be two rollers spaced apart by a predetermined distance, and the driving motor 24 is coupled to the rotating shafts of the two rollers. Here, the predetermined distance is smaller than the diameter of the object 10 (the minimum diameter of its body). Thus, the object 10 can be movably disposed between the two rollers. When the driving motor 24 rotates the two rollers, the object 10 is driven to rotate by the surface friction between the object 10 and the two rollers.
In another example, the carrying element 22 may be a rotating shaft, and the driving motor 24 is coupled to one end of the rotating shaft. In this case, the other end of the rotating shaft is provided with an embedded part (such as an insertion hole), and the object 10 may be removably embedded in the embedded part. When the driving motor 24 rotates the shaft, the object 10 is driven by the shaft to rotate.
Refer to fig. 7. In one embodiment, the detection image can be directly used as the object image M in the subsequent steps. In another embodiment, each object image of each object 10 is formed by stitching a plurality of detection images of the object. In some embodiments, after the photosensitive element 40 captures the detection images 100 of all the surface blocks 10a-10c under the same lighting orientation, the processor 30 may stitch the captured detection images 100 into an object image M in the capturing order.
In one embodiment, the photosensitive element 40 can be a linear photosensitive element, which can be implemented by a line (linear) image sensor. In this case, the detection images captured by the photosensitive element 40 can be stitched by the processor 30 without cropping.
In another embodiment, the photosensitive element 40 is a two-dimensional photosensitive element, which can be implemented by a surface (area) image sensor. In this case, after the photosensitive element 40 captures the detection images 100 of the surface blocks 10a-10c under the same lighting orientation, the processor 30 crops a middle region of each detection image 100 based on the short side of the detection image. Then, the processor 30 stitches the middle regions corresponding to all the surface blocks 10a-10c into the object image M.
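As a rough, non-authoritative sketch of this cropping-and-stitching step (the patent does not specify the crop width, so the `keep_fraction` parameter below is a hypothetical choice), the middle region of each two-dimensional detection image can be taken along its short side and the regions concatenated in capture order:

```python
import numpy as np

def crop_middle(image, keep_fraction=0.5):
    """Keep only the middle band of a 2-D detection image along its short side.

    keep_fraction is an illustrative assumption; the text only says a middle
    region is captured based on the short side of the detection image.
    """
    h, w = image.shape[:2]
    if h <= w:  # the short side is the height, so crop rows
        band = int(h * keep_fraction)
        start = (h - band) // 2
        return image[start:start + band, :]
    band = int(w * keep_fraction)  # the short side is the width, crop columns
    start = (w - band) // 2
    return image[:, start:start + band]

def stitch_object_image(detection_images):
    """Stitch the cropped middle regions of surface blocks 10a-10c, in
    capture order, into one object image M."""
    return np.concatenate([crop_middle(d) for d in detection_images], axis=0)

# Three 100x300 detection images (height is the short side):
detections = [np.zeros((100, 300), dtype=np.uint8) for _ in range(3)]
object_image = stitch_object_image(detections)
# object_image.shape is (150, 300): three 50-row middle bands stacked
```

Cropping to the middle band is one way to avoid the perspective distortion near the edges of an area sensor's field of view, which is consistent with why only the middle region would be kept.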
In an embodiment, referring to fig. 1, a single photosensitive element 40 may be disposed in the object surface type detection system 1, and the photosensitive element 40 captures images of the plurality of surface blocks 10a-10c to obtain a plurality of detection images respectively corresponding to the surface blocks 10a-10c. In another embodiment, referring to fig. 4, a plurality of photosensitive elements 40 may be disposed in the object surface type detection system 1; each photosensitive element 40 faces the detection position, and the photosensitive elements 40 are arranged on the frame 45 along the long axis (the second direction B) of the object 10. The photosensitive elements 40 respectively capture detection images of the surface blocks of the object 10 at different sections of the detection position. In one embodiment, the second direction B is substantially perpendicular to the first direction A, and the photosensitive axis D of each photosensitive element 40 is parallel to the normal direction C.
Refer to fig. 8. In an example, assume that the object 10 is cylindrical and that three photosensitive elements 40 are provided in the object surface type detection system 1, as shown in fig. 4. The photosensitive elements 40 respectively capture detection images of the surface of the object 10 at different sections of the detection position, such as a detection image 101 of the head section of the object 10, a detection image 102 of the middle section, and a detection image 103 of the tail section. The processor 30 then stitches all the detection images of each section into the images 110/120/130, and finally stitches the images 110/120/130 to form the object image M.
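The two-stage stitching just described might be sketched as follows; the array axes chosen for the rotation direction and the long axis are assumptions for illustration:

```python
import numpy as np

# Hypothetical setup: three photosensitive elements (head/middle/tail
# sections) each capture one detection image per surface block 10a-10c.
sections, blocks = 3, 3
detections = [[np.full((20, 30), 10 * (s + 1), dtype=np.uint8)
               for _ in range(blocks)]
              for s in range(sections)]

# Stage 1: stitch each section's images across the surface blocks, giving
# the per-section images 110/120/130 (here: along the rotation direction).
per_section = [np.concatenate(detections[s], axis=0) for s in range(sections)]

# Stage 2: stitch the per-section images along the object's long axis
# (the second direction B) to form the object image M.
object_image_m = np.concatenate(per_section, axis=1)
# object_image_m.shape is (60, 90)
```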
Refer to fig. 9. The object surface detection method based on the artificial neural network is suitable for an artificial neural network system. Here, an artificial neural network system may be implemented on the processor 30. The artificial neural network system has a learning phase (i.e., training) and a prediction phase.
In the learning stage, the artificial neural network system receives a plurality of object images of a plurality of objects (step S01). Here, the plurality of object images of each object are images of the object captured based on light rays of a plurality of lighting orientations, the lighting orientations being different from each other. For example, the object images may be the object images M obtained by the object surface type detection system 1 as described with reference to figs. 7 and 8.
Then, the artificial neural network system superimposes the plurality of object images of each object into an initial image (step S02). Thereafter, the artificial neural network system performs a deep learning procedure on the plurality of initial images of the plurality of objects to establish a prediction model for identifying the surface type of an object (step S03). In some embodiments, the deep learning may be implemented by a convolutional neural network (CNN) algorithm, but the disclosure is not limited thereto.
Refer to fig. 10. In one example, the surface block of the object 10 has several surface types: a recessed slot 14, a planar pattern 12, and a defect-free general surface 16. When the light source assembly 502 disposed at the front side of the detection position emits light toward the object 10, the front side of the recess of the slot 14 is darker and produces a shadow while its rear side is brighter, the pattern 12 is brighter, and the general surface 16 is darker; the object image M01 captured by the photosensitive element 40 is shown in fig. 11. When the light source assembly 501 disposed at the left side of the detection position emits light toward the object 10, the left side of the recess of the slot 14 is darker and produces a shadow while its right side is brighter, the pattern 12 is brighter, and the general surface 16 is darker; the object image M02 captured by the photosensitive element 40 is shown in fig. 12. By analogy, when the light source assembly 503 disposed at the right side of the detection position emits light toward the object 10, the object image M03 captured by the photosensitive element 40 is as shown in fig. 13; and when the light source assembly 504 disposed at the rear side of the detection position emits light toward the object 10, the object image M04 captured by the photosensitive element 40 is as shown in fig. 14. Referring to figs. 11 to 14, in the object images M01 to M04, the image of the pattern 12 does not produce shadows under the different lighting orientations, whereas the image of the slot 14 produces shadows within the slot 14 that vary with the lighting orientation.
Refer to fig. 15. In an example, the artificial neural network system superimposes the object image M01, the object image M02, the object image M03, and the object image M04 of the object 10 into the initial image MF. In one embodiment, the superposition refers to superimposing the brightness values of corresponding pixels in the object images. For example, for the object images M01 to M04, the brightness values of the pattern 12 are high in all of the object images M01 to M04, so the superimposed brightness values are also high. The brightness values of the general surface 16 are low in all of the object images M01 to M04, so the superimposed brightness values are low. The brightness value of the front side of the slot 14 is low in the object image M01, the brightness values of the left and right sides of the slot 14 are respectively low in the object images M02 and M03, and the brightness value of the rear side of the slot 14 is low in the object image M04; after superposition, the brightness values around the periphery of the slot 14 are lower than those of the general surface 16, so the slot is highlighted in the initial image MF. It follows that when a defect to be detected is very small, imaging under a single light source may not reveal it clearly, owing to the shape and depth of the defect, so that the defect is difficult to detect and misjudgment may occur.
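As a minimal sketch of this superposition, assuming pixel-wise summation of brightness values followed by rescaling to an 8-bit range (the text says only that brightness values are superimposed; the exact arithmetic is an assumption):

```python
import numpy as np

def superimpose(object_images):
    """Superimpose per-pixel brightness values of equally sized object
    images (e.g. M01-M04) into a single initial image MF.

    Summation plus min-max rescaling to 8-bit is one plausible reading of
    "superimposing the brightness values"; it is not taken from the patent.
    """
    total = np.sum([img.astype(np.float64) for img in object_images], axis=0)
    spread = np.ptp(total)
    if spread == 0:
        return np.zeros_like(total, dtype=np.uint8)
    return (255.0 * (total - total.min()) / spread).astype(np.uint8)

# Toy 2x2 case: the top-left pixel is bright under every lighting
# orientation (like pattern 12), while the rest stay dark (like the
# general surface 16), so the contrast survives the superposition.
images = [np.array([[200, 40], [40, 40]], dtype=np.uint8) for _ in range(4)]
mf = superimpose(images)
# mf[0, 0] rescales to 255 and the dark pixels to 0
```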
That is, in the learning stage, the artificial neural network system stored in the processor 30 receives initial images obtained by superimposing object images of a plurality of objects captured under different lighting orientations, for example by the object surface type detection system 1. The images of different surface types may be images with different defects, images without defects, images with different surface roughness, or images in which defects appear with different light-and-shade contrasts produced by illuminating the surface block with light rays of different lighting orientations. The artificial neural network system performs deep learning on these images of different surface types to establish a prediction procedure for identifying the various surface types. In other words, by combining multi-angle light-source image capturing with the preprocessing of superimposing the images, the CNN algorithm can greatly improve the recognition of three-dimensional defect features without a large increase in computation time, and is more effective than conventional optical algorithms.
Refer to fig. 16. In one embodiment, the surface types include slots, cracks, and bumps; that is, the prediction model can identify whether a surface has slots, cracks, bumps, sand holes, voids, scratches, edges, or is defect-free. In one embodiment, the step of performing deep learning on the initial images (step S03) includes classifying each of the initial images according to a plurality of predetermined surface type categories (step S33), and performing a deep learning procedure on the classified initial images of the objects to establish a category prediction model for identifying the surface type of an object (step S34).
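To make the classification step concrete, here is a toy, NumPy-only forward pass in the spirit of a CNN classifier over preset surface-type categories. The category labels, layer sizes, and random (untrained) weights are all illustrative assumptions; a real system would use a trained convolutional network:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D correlation of a single-channel image with one kernel."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(initial_image, kernels, weights):
    """Toy CNN forward pass: convolution -> ReLU -> global average pooling
    -> dense layer -> softmax over the preset surface-type categories."""
    feats = np.array([np.maximum(conv2d(initial_image, k), 0.0).mean()
                      for k in kernels])
    return softmax(weights @ feats)

categories = ["slot", "crack", "bump", "defect-free"]  # illustrative labels
rng = np.random.default_rng(0)
image = rng.random((16, 16))                 # stand-in initial image
kernels = rng.standard_normal((4, 3, 3))     # untrained conv kernels
weights = rng.standard_normal((len(categories), 4))
probs = predict(image, kernels, weights)     # one percentage per category
```

The softmax output corresponds to the percentage prediction per surface type category described later in the prediction stage.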
Refer to fig. 17. In one embodiment, the artificial neural network-based method for detecting the surface type of an object further includes normalizing the object images of the objects (step S11) and superimposing the normalized object images of each object into an initial image (step S21). This reduces the asymmetry among the learning data and improves the learning efficiency.
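A minimal sketch of steps S11 and S21, assuming min-max normalization (the patent does not prescribe a particular normalization scheme):

```python
import numpy as np

def normalize(img):
    """Min-max normalize one object image to the range [0, 1]."""
    img = img.astype(np.float64)
    spread = np.ptp(img)
    if spread == 0:
        return np.zeros_like(img)
    return (img - img.min()) / spread

def initial_image(object_images):
    """Normalize each object image, then superimpose them (S11 then S21)."""
    return np.sum([normalize(m) for m in object_images], axis=0)

# Two object images of the same object with very different exposure scales;
# normalization removes the scale asymmetry before superposition.
images = [np.array([[0, 10], [20, 40]], dtype=np.float64),
          np.array([[0, 100], [200, 400]], dtype=np.float64)]
mf = initial_image(images)
# Both inputs normalize to [[0, 0.25], [0.5, 1.0]], so mf == [[0, 0.5], [1, 2]]
```

Without the normalization step, the second image's ten-times-larger brightness scale would dominate the superposition, which is the asymmetry the text says this step reduces.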
In one embodiment, the artificial neural network-based method for detecting the surface type of an object further includes converting the initial image of each object into a matrix (step S31) and performing deep learning with the matrices to establish the prediction model for identifying the surface type of an object (step S32). That is, the different initial images are converted into a data matrix carrying information such as length, width, pixel type, pixel depth, and number of channels for further processing, wherein the number of channels represents the image-capturing conditions of the corresponding object images. Here, the artificial neural network (e.g., implemented by a deep learning procedure) in the artificial neural network system has a plurality of image matrix input channels for inputting the corresponding matrices, and the image matrix input channels respectively represent the image-capturing conditions of a plurality of spectra. In other words, step S31 converts the data format of the initial image into a format (e.g., an image matrix) supported by the input channels of the artificial neural network.
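The conversion of step S31 might be sketched as packing each capture condition into its own channel of a data matrix; the H x W x C (channels-last) layout below is an assumption for illustration:

```python
import numpy as np

def to_matrix(images_per_condition):
    """Pack initial images into an H x W x C matrix whose channels each
    represent one image-capturing condition (e.g. one spectrum).

    The exact layout is a hypothetical choice; the text only says the
    images become a data matrix carrying length, width, pixel depth, and
    channel-count information.
    """
    return np.stack([img.astype(np.float32) for img in images_per_condition],
                    axis=-1)

# Two spectra (two capture conditions) of the same 4x6 initial image:
matrix = to_matrix([np.zeros((4, 6)), np.ones((4, 6))])
# matrix.shape is (4, 6, 2): height, width, number of channels
```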
In some embodiments, in the learning stage, the object images received by the artificial neural network system are of known surface types, and the types of surface defect output by the artificial neural network system are likewise preset. In other words, each object image used for deep learning is labeled with its existing surface types. For example, in one case, when an object is a defective object, its surface has one or more surface types that the artificial neural network has learned and can therefore select; conversely, when an object is a qualified object, its surface has none of the recorded surface types, so no selection by the artificial neural network is triggered. In this case, part of the object images received by the artificial neural network system are labeled with one or more surface types, the remaining part are labeled with no surface type, and the output of the artificial neural network system is classified according to the preset surface types. In another case, when an object is a defective object, its surface has one or more surface types of a first kind; conversely, when an object is a qualified object, its surface has one or more surface types of a second kind. In this case, part of the object images received by the artificial neural network system are labeled with one or more surface types of the first kind, the remaining part are labeled with one or more surface types of the second kind, and the output of the artificial neural network system is classified according to the preset surface types.
In some embodiments, in the learning stage, the artificial neural network system is trained with object images of known surface defects to generate the decision condition of each neuron in the prediction model and/or to adjust the weight of the connection between any two neurons, so that the prediction result (i.e., the output surface defect type) of each object image matches its known, labeled surface defects, thereby establishing the prediction model for identifying the surface type of an object. In the prediction stage, the artificial neural network system can perform classification prediction on object images of unknown surface type through the established prediction model. In some embodiments, the artificial neural network system performs percentage prediction on the object images with respect to the surface type categories, i.e., determines the percentage likelihood that each object image falls into each surface type category.
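As a heavily simplified stand-in for this weight adjustment (a single linear layer trained by gradient descent on softmax cross-entropy, not the patent's actual network), one can watch the predicted percentage for the labeled category rise during training:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(weights, features, label_onehot, lr=0.5):
    """One gradient step on softmax cross-entropy, nudging the connection
    weights so the predicted category percentages move toward the label."""
    probs = softmax(weights @ features)
    grad = np.outer(probs - label_onehot, features)  # dLoss/dWeights
    return weights - lr * grad

rng = np.random.default_rng(1)
features = rng.random(8)          # stand-in features from one initial image
label = np.eye(3)[0]              # labeled as category 0 (e.g. "slot")
weights = rng.standard_normal((3, 8))

before = softmax(weights @ features)[0]
for _ in range(50):
    weights = train_step(weights, features, label)
after = softmax(weights @ features)[0]
# `after` exceeds `before`: the prediction now matches the labeled type
```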
In some embodiments, an artificial neural network system includes an input layer and a plurality of hidden layers. The input layer is coupled with the hidden layer. The input layer is used to perform the operations of the steps S01-S02 (and the steps S11 and S21). The hidden layer is used to perform the step S03.
In other embodiments, an artificial neural network system includes a preprocessing unit and a neural network unit. The preprocessing unit is coupled with the neural network unit. The preprocessing unit performs the above steps S01-S02 (and steps S11 and S21). The neural network unit is configured to perform the step S03. The neural network unit comprises an input layer and a plurality of hidden layers, and the input layer is coupled with the hidden layers.
Refer to fig. 9. In one embodiment, in the prediction stage, the artificial neural network system performs a prediction procedure according to a plurality of object images M corresponding to different lighting orientations, so as to identify a region image representing the surface type of the object in the object images M (step S04). In other words, in the prediction procedure, after an object image M is input into the artificial neural network system, the artificial neural network system executes the prediction model on the stitched object image M so as to identify the region image representing the surface type of the object 10 in the object image M. In one embodiment, the object image M is classified by the prediction model, i.e., the artificial neural network system classifies the region image of the surface type of the object according to a plurality of predetermined surface type categories. At the output, step S04 performs percentage prediction on the object image M according to the predetermined surface type categories, i.e., determines the percentage likelihood that the object image M falls into each category.
In some embodiments, the processor 30 may have the artificial neural network system described above to automatically classify the surface type according to the stitched image of the object, thereby automatically determining the surface type of the surface of the object 10. In other words, during the learning phase, the object image generated by the processor 30 may be subsequently trained by the artificial neural network system to establish a prediction model for identifying the surface morphology of the object. In the prediction stage, the object image generated by the processor 30 may be subsequently predicted by the artificial neural network system, so as to perform the classification prediction of the object image through the prediction model.
In some embodiments, the object image generated by the processor 30 can be fed to another processor having the aforementioned artificial neural network system, so that the artificial neural network system automatically classifies the surface type according to the stitched object image, thereby automatically determining the surface type of the surface of the object 10. In other words, the artificial neural network system automatically trains or predicts the fed object image.
In an example of step S02 or S21, the object images of the same object may have the same spectrum. In another example of step S02 or S21, the object images of the same object may have different spectra. That is, the plurality of object images of the same object include images of the object captured based on light of one spectrum in the different lighting orientations and images of the object captured based on light of another spectrum in the different lighting orientations, the two spectra being different from each other.
In some embodiments, the artificial neural network-based method for detecting the surface type of an object according to the present invention can be implemented by a computer program product, so that the method according to any embodiment of the present invention is carried out when a computer (i.e., its processor) loads and executes the program. In some embodiments, the computer program product may be a non-transitory computer-readable recording medium, and the program is stored in the non-transitory computer-readable recording medium and loaded into the computer. In some embodiments, the program itself may be the computer program product and may be transmitted to the computer by wired or wireless means.
In summary, in the object surface type detection system and the artificial neural network-based object surface detection method according to the embodiments of the present invention, object images of the same object with different imaging effects can be provided by controlling imaging light sources with different incident angles during image capturing, so as to improve the spatial, stereoscopic distinguishability of the various surface types of the object under image-based detection. Object images captured in different lighting orientations can be integrated and superimposed in multiple dimensions to improve the recognizability of the surface type of the object, thereby obtaining a better analysis of the surface type. Multi-spectral surface images may also be integrated to further improve the recognizability of the surface type of the object. Moreover, the surface type of the object can be judged automatically by the artificial neural network system, so that an inspector does not need to observe the object with the naked eye or touch it by hand; this greatly improves the efficiency of surface type identification and reduces human misjudgment.
The technical disclosure of the present invention is described in the above preferred embodiments, but the present invention is not limited thereto. Those skilled in the art should understand that modifications and variations can be made without departing from the spirit of the present invention; therefore, the scope of the present invention should be determined by the appended claims.

Claims (25)

1. An object surface pattern detection method based on an artificial neural network is characterized by comprising the following steps:
receiving a plurality of object images of a plurality of objects, wherein the plurality of object images of each object comprise images of the object captured based on light rays of a plurality of lighting directions, and the lighting directions are different from each other;
superimposing the plurality of object images of each object into an initial image; and
performing a deep learning with the initial images of the objects to establish a prediction model identifying the surface morphology of the objects.
2. The method of claim 1, further comprising:
normalizing the object images;
wherein the step of superimposing the object images of each object as the initial image comprises: and overlapping the plurality of normalized object images of each object to form the initial image.
3. The method of claim 1, further comprising:
converting the plurality of initial images into a plurality of matrices;
wherein the step of performing the deep learning with the plurality of initial images of the plurality of objects comprises: performing the deep learning with the plurality of matrices.
4. The method of claim 1, wherein the plurality of light beams in the plurality of illumination orientations have different spectra.
5. The method of claim 1, wherein the plurality of light beams in the plurality of illumination orientations have the same spectrum.
6. The method of claim 5, wherein the plurality of object images of each object further comprise images of the object captured based on another light ray from the plurality of illumination orientations, and the another light ray has a spectrum different from that of the light ray.
7. The method of claim 1, wherein the surface types identified by the prediction model are a slot, a slit, a bump, a sand hole, a void, a scratch, and an edge.
8. The method of claim 1, wherein the deep learning is implemented by a convolutional neural network algorithm.
9. The method as claimed in claim 1, wherein each object image of each object is formed by stitching a plurality of detected images of the object.
10. The method of claim 1, wherein the step of performing the deep learning with the initial images comprises classifying each of the initial images according to a plurality of predetermined surface morphology categories.
11. An object surface topography detection system, comprising:
a driving assembly for carrying an object, wherein the surface of the object is divided into a plurality of surface blocks along a first direction, and the driving assembly is further configured to sequentially displace the surface blocks to a detection position;
a plurality of light source assemblies disposed in a plurality of different lighting orientations around the detection position, each facing the detection position and providing a light ray to illuminate the detection position, wherein a light incident angle of the light ray provided by each light source assembly is smaller than or equal to 90 degrees relative to a normal of the surface block located at the detection position;
and the photosensitive element is arranged facing the detection position, and captures a detection image of each surface block sequentially positioned on the detection position when the light irradiates the detection position in each lighting direction.
12. The system of claim 11, wherein the plurality of lighting orientations include at least a front side of the detection position, a rear side of the detection position, a left side of the detection position, and a right side of the detection position.
13. The system of claim 11, wherein the optical axes of any two adjacent light source modules in the plurality of light source modules have a same predetermined angle therebetween.
14. The system of claim 11, wherein the plurality of light source modules provide the light at the same incident angle.
15. The system of claim 11, wherein the light beam is a multi-spectrum light beam, each of the light source modules includes a light emitting module and a light splitting module, the light emitting module is configured to generate the multi-spectrum light beam, the light splitting module is disposed between the light emitting module and the inspection position and has a plurality of filter regions corresponding to the plurality of spectra, each of the filter regions is configured to split the multi-spectrum light beam into the light beams corresponding to the spectra, and the photosensitive element captures the inspection image of each of the surface blocks sequentially located at the inspection position when the light beams of each of the spectra illuminate the inspection position in each of the lighting orientations.
16. The system of claim 11, further comprising a light splitting element disposed between the photosensitive element and the inspection position and having a plurality of filter regions corresponding to the plurality of spectra, wherein the light is a multi-spectral light, each of the filter regions is used for splitting the multi-spectral light into the light corresponding to the spectrum, and the photosensitive element respectively passes through the plurality of filter regions to capture the inspection image of each of the surface blocks sequentially disposed on the inspection position when the light illuminates the inspection position at each of the lighting orientations.
17. The system of claim 11, wherein a photosensitive axis of the photosensitive element is parallel to the normal.
18. The system according to claim 11, wherein the surface of the object is a cylindrical surface, and the driving assembly rotates the object relative to the photosensitive element in a clockwise direction or a counterclockwise direction to displace the plurality of surface blocks to the detection position.
19. The system of claim 11, wherein the object is a plate, and the driving assembly moves the object horizontally along the first direction relative to the photosensitive element to displace the plurality of surface blocks to the inspection position.
20. The system for detecting the surface morphology of an object of claim 11, further comprising:
and the processor is coupled with the photosensitive element and used for stitching the plurality of detection images corresponding to the same lighting orientation into an object image.
21. The system of claim 20, wherein the processor further comprises an artificial neural network system for performing a prediction process to identify an area image of the object image that represents the surface type of the object according to the object images corresponding to the different lighting orientations.
22. The system according to claim 20, wherein the photosensitive element is a linear photosensitive element.
23. The system for detecting the surface type of an object of claim 11, further comprising:
and the processor is coupled with the photosensitive element, captures the middle region of each detection image based on the short side of each detection image, and stitches the middle regions corresponding to the same lighting orientation into an object image.
24. The system of claim 23, wherein the processor further comprises an artificial neural network system for performing a prediction procedure to identify a region image of the object image that represents the surface type of the object according to the object images corresponding to the different lighting orientations.
25. The system of claim 23, wherein the photosensitive element is a two-dimensional photosensitive element.
CN201910987176.3A 2019-10-17 2019-10-17 Object surface pattern detection system and detection method based on artificial neural network Pending CN112683789A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910987176.3A CN112683789A (en) 2019-10-17 2019-10-17 Object surface pattern detection system and detection method based on artificial neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910987176.3A CN112683789A (en) 2019-10-17 2019-10-17 Object surface pattern detection system and detection method based on artificial neural network

Publications (1)

Publication Number Publication Date
CN112683789A true CN112683789A (en) 2021-04-20

Family

ID=75444420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910987176.3A Pending CN112683789A (en) 2019-10-17 2019-10-17 Object surface pattern detection system and detection method based on artificial neural network

Country Status (1)

Country Link
CN (1) CN112683789A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040235205A1 (en) * 2000-09-20 2004-11-25 Kla-Tencor, Inc. Methods and systems for determining a critical dimension and overlay of a specimen
US20130242294A1 (en) * 2010-11-29 2013-09-19 Hitachi High-Technologies Corporation Defect inspection device and defect inspection method
US20150212008A1 (en) * 2012-08-07 2015-07-30 Toray Engineering Co., Ltd. Device for testing application state of fiber reinforced plastic tape
CN108445007A (en) * 2018-01-09 2018-08-24 深圳市华汉伟业科技有限公司 A kind of detection method and its detection device based on image co-registration
CN109064454A (en) * 2018-07-12 2018-12-21 上海蝶鱼智能科技有限公司 Product defects detection method and system
CN109923402A (en) * 2016-11-14 2019-06-21 日本碍子株式会社 The flaw detection apparatus and defect detecting method of ceramic body

Similar Documents

Publication Publication Date Title
US20200364442A1 (en) System for detecting surface pattern of object and artificial neural network-based method for detecting surface pattern of object
US20210073975A1 (en) Method for enhancing optical feature of workpiece, method for enhancing optical feature of workpiece through deep learning, and non transitory computer readable recording medium
US11379967B2 (en) Methods and systems for inspection of semiconductor structures with automatically generated defect features
JP4753181B2 (en) OVD inspection method and inspection apparatus
US20070211242A1 (en) Defect inspection apparatus and defect inspection method
US11080843B2 (en) Image inspecting apparatus, image inspecting method and image inspecting program
US20230296872A1 (en) Fluorescence microscopy inspection systems, apparatus and methods with darkfield channel
US20050117781A1 (en) Land mark, land mark detecting apparatus, land mark detection method and computer program of the same
CN104541145A (en) Method for segmenting the surface of a tyre and apparatus operating according to said method
CN112683924A (en) Method for screening surface form of object based on artificial neural network
CN112683923A (en) Method for screening surface form of object based on artificial neural network
US9594021B2 (en) Apparatus of detecting transmittance of trench on infrared-transmittable material and method thereof
JP2020112367A (en) Wafer inspection device
CN112683789A (en) Object surface pattern detection system and detection method based on artificial neural network
JP2023043178A (en) Workpiece inspection and defect detection system utilizing color channels
CN112683787A (en) Object surface detection system and detection method based on artificial neural network
Radovan et al. An approach for automated inspection of wood boards
KR102554478B1 (en) Real-time tool wear measurement system using infrared image-based deep learning
CN112686831A (en) Method for detecting surface form of object based on artificial neural network
CN112683925A (en) Image detection scanning method and system for possible defects on surface of object
CN112683786A (en) Object alignment method
WO2024009868A1 (en) Appearance inspection system, appearance inspection method, training device, and inference device
CN112683788A (en) Image detection scanning method and system for possible defects on surface of object
CN112683790A (en) Image detection scanning method and system for possible defects on surface of object
CN112683921A (en) Image scanning method and image scanning system for metal surface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination