CN112683786B - Object alignment method

Info

Publication number
CN112683786B
CN112683786B (application CN201910987140.5A)
Authority
CN
China
Prior art keywords
image
photosensitive element
detection
capturing
light
Prior art date
Legal status
Active
Application number
CN201910987140.5A
Other languages
Chinese (zh)
Other versions
CN112683786A
Inventor
蔡昆佑
杨博宇
Current Assignee
Mitac Computer Kunshan Co Ltd
Getac Technology Corp
Original Assignee
Mitac Computer Kunshan Co Ltd
Getac Technology Corp
Priority date
Filing date
Publication date
Application filed by Mitac Computer Kunshan Co Ltd, Getac Technology Corp
Priority to CN201910987140.5A
Publication of CN112683786A
Application granted
Publication of CN112683786B

Abstract

An object alignment method, comprising: detecting a plurality of first alignment structures of an object while the object rotates, wherein a plurality of second alignment structures of the object sequentially face a photosensitive element during the rotation; and, when the first alignment structures reach a predetermined pattern, stopping the rotation of the object and performing an image capturing procedure on the object. The image capturing procedure comprises: capturing a test image of the object, wherein the test image comprises an image block showing the second alignment structure facing the photosensitive element; detecting the position of the image block within the test image; capturing a detection image of the object when the image block is located in the middle of the test image; and, when the image block is not located in the middle of the test image, moving the object in a first direction and returning to the step of capturing a test image of the object. Because the detection images are captured at the same position on every surface block, an artificial neural network system can build a more accurate prediction model from them, further reducing the probability of misjudgment.

Description

Object alignment method
[ Field of technology ]
The present disclosure relates to an object surface inspection system, and more particularly to an object alignment method of an object surface inspection system.
[ Background Art ]
Defect detection is an important part of industrial production. Defective products cannot be sold, and if defective intermediate products are sold to other manufacturers for further processing, the final products may not work. One existing detection method is to have a person inspect the product with the naked eye or by touch to determine whether it has defects such as pits, scratches, color differences, or missing features. However, manual inspection is inefficient and prone to misjudgment, so product yield is difficult to control.
[ Invention ]
In one embodiment, an object alignment method is suitable for aligning an object. The object alignment method comprises detecting a plurality of first alignment structures of the object while the object rotates, wherein a plurality of second alignment structures of the object sequentially face a photosensitive element during the rotation; and, when the first alignment structures reach a predetermined pattern, stopping the rotation of the object and performing an image capturing procedure on the object. The image capturing procedure comprises the following steps: capturing a test image of the object with the photosensitive element, wherein the test image comprises an image block showing the second alignment structure facing the photosensitive element; detecting the display position of the image block in the test image; capturing a detection image of the object with the photosensitive element when the image block is located in the middle of the test image; and, when the image block is not located in the middle of the test image, moving the object in a first direction and returning to the step of capturing a test image of the object with the photosensitive element.
In one embodiment, an object alignment method is suitable for aligning an object. The object alignment method comprises sequentially displacing a plurality of surface blocks of the object to a detection position, wherein the object has a plurality of alignment structures; capturing, with a photosensitive element, a detection image of each surface block as it reaches the detection position, wherein the photosensitive element faces the detection position and the alignment structures lie within the viewing angle of the photosensitive element; stitching the detection images corresponding to the surface blocks into an object image; comparing the object image with a preset pattern; and, when the object image does not match the preset pattern, adjusting the stitching order of the detection images.
In summary, according to the embodiments of the object alignment method of the present disclosure, image analysis of the test image determines the pattern and position in which specific structures of the object appear, and thereby whether the object is aligned, so that the detection image is captured at the same position on each surface block of the aligned object. The artificial neural network system can therefore build a more accurate prediction model from detection images captured at the same positions, further reducing the probability of misjudgment.
[ Description of the drawings ]
FIG. 1 is a schematic diagram of an embodiment of an object surface inspection system according to the present disclosure.
FIG. 2 is a functional block diagram of one embodiment of the object surface detection system of FIG. 1.
FIG. 3 is a schematic diagram of an embodiment of the optical relative positions among an object, a light source assembly and a photosensitive element.
FIG. 4 is a schematic diagram of another embodiment of the optical relative positions among an object, a light source assembly and a photosensitive element.
FIG. 5 is a schematic diagram of an embodiment of an article.
FIG. 6 is a top view of the object of FIG. 5.
FIG. 7 is a flowchart of an embodiment of a method for object alignment according to the present disclosure.
FIG. 8 is a flowchart of another embodiment of an object alignment method according to the present disclosure.
FIG. 9 is a schematic diagram of an embodiment of an object image.
FIG. 10 is a schematic diagram of an embodiment of a detected image.
FIG. 11 is a flow chart of an embodiment of a test procedure.
FIG. 12 is a flow chart of another embodiment of a test procedure.
FIG. 13 is a schematic view of another embodiment of the optical relative positions among an object, a light source assembly and a photosensitive element.
FIG. 14 is a schematic diagram of an embodiment of a surface topography.
FIG. 15 is a schematic view of another embodiment of the optical relative positions among an object, a light source assembly and a photosensitive element.
FIG. 16 is a schematic diagram of another embodiment of an object surface detection system according to the present disclosure.
FIG. 17 is a schematic diagram of another embodiment of an object surface detection system according to the present disclosure.
FIG. 18 is a schematic diagram of another embodiment of an object image.
[ Detailed description ]
Referring to fig. 1, the object surface detection system is adapted to scan an object 2 to obtain at least one detection image of the object 2. In some embodiments, the surface of the object 2 may have at least one surface pattern, and the corresponding detection image shows an image block of that surface pattern. Here, the surface pattern is a three-dimensional microstructure of sub-micrometer to micrometer (μm) size, i.e. the longest side or longest diameter of the three-dimensional microstructure is between the sub-micrometer and micrometer scale. Sub-micrometer means less than 1 μm, for example 0.1 μm to 1 μm. For example, the three-dimensional microstructure may measure 300 nm to 6 μm. In some embodiments, the surface pattern may be a surface structure such as a groove, a crack, a bump, a sand hole, an air hole, an impact mark, a scratch, an edge, a texture, or the like.
Referring to fig. 1 to 4, the object surface detection system includes a driving assembly 11, a light source assembly 12, a photosensitive element 13 and a processor 15. The processor 15 is coupled to the driving assembly 11, the light source assembly 12 and the photosensitive element 13. The light source assembly 12 and the photosensitive element 13 face a detection position 14 on the driving assembly 11. The driving assembly 11 carries the object 2 to be inspected. The object 2 has a surface 21, and the surface 21 is divided into a plurality of surface blocks along an extending direction of the surface 21 (hereinafter referred to as a first direction D1). In some embodiments, the surface 21 of the object 2 is divided into nine surface blocks, three of which (21A-21C) are indicated in the figures by way of example. However, the present disclosure is not limited thereto, and the surface 21 of the object 2 may be divided into another number of surface blocks as needed, such as 3, 5, 11, 15 or 20 blocks.
In some embodiments, referring to fig. 5 and 6, the object 2 includes a body 201, a plurality of first alignment structures 202, and a plurality of second alignment structures 203. The first alignment structures 202 are located at one end of the body 201, and the second alignment structures 203 are located at the other end of the body 201. In some embodiments, each first alignment structure 202 may be a post, a bump, a slot, or the like, and each second alignment structure 203 may likewise be a post, a bump, a slot, or the like. In some embodiments, the second alignment structures 203 are disposed at intervals along the extending direction of the surface 21 of the body 201 (i.e. the first direction D1), and the spacing between any two adjacent second alignment structures 203 is greater than or equal to the viewing angle of the photosensitive element 13. In some embodiments, the second alignment structures 203 correspond to the surface blocks 21A-21C of the object 2, respectively, and each second alignment structure 203 is aligned, along the first direction D1, with the middle of the side edge of its corresponding surface block.
Hereinafter, the first alignment structures 202 are posts (hereinafter referred to as alignment posts), and the second alignment structures 203 are slots (hereinafter referred to as alignment slots). In some embodiments, the extending direction of each alignment post is substantially the same as the extending direction of the body 201, and one end of each alignment post is coupled to one end of the body 201. The alignment slots are located at the other end of the body 201 and are disposed on the surface of that end, arranged around the body 201 with the long axis of the body 201 as the rotation axis.
In some embodiments, the first alignment structures 202 are spaced apart on the body 201. In this example, three first alignment structures 202 are used, but this number is not a limitation of the present invention. Viewed from above the body 201, as the body 201 rotates about its long axis, the first alignment structures 202 assume different relative positions, for example: all first alignment structures 202 are spaced apart and do not overlap each other (as shown in fig. 6), or two of the first alignment structures 202 overlap while the remaining one does not, and so on.
Referring to fig. 1 to 8, the object surface detection system can execute an image capturing procedure. In the image capturing procedure, the object 2 is carried on the driving assembly 11, and one of the surface blocks 21A-21C of the object 2 is substantially located at the detection position 14. Before each image capture, the object surface detection system performs an alignment (i.e. fine-tunes the position of the object 2) so that the surface block is aligned with the viewing angle of the photosensitive element 13.
In the image capturing procedure, under the illumination of the light source assembly 12, the processor 15 controls the photosensitive element 13 to capture a test image of the object 2 (step S11). Here, the test image includes an image block showing the second alignment structure 203 currently facing the photosensitive element 13.
The processor 15 detects the display position of the image block of the second alignment structure 203 in the test image (step S12) to determine whether the surface block currently located at the detection position 14 is aligned with the viewing angle of the photosensitive element 13.
When the display position of the image block is not located in the middle of the test image, the processor 15 controls the driving assembly 11 to fine-tune the position of the object 2 in the first direction D1 (step S13) and returns to step S11. Steps S11 to S13 are repeated until the processor 15 detects that the display position of the image block is located in the middle of the test image.
When the display position of the image block is located in the middle of the test image, the processor 15 drives the photosensitive element 13 to capture an image; at this time, the photosensitive element 13 captures a detection image of the surface block of the object 2 under the illumination of the light source assembly 12 (step S14).
Next, the processor 15 controls the driving assembly 11 to move the next surface block of the object 2 to the detection position 14 in the first direction so that the next second alignment structure 203 faces the photosensitive element 13 (step S15), and then returns to step S11. By repeatedly performing steps S11 to S15, the detection images of all the surface blocks of the object 2 are captured. In some embodiments, the distance by which the driving assembly 11 fine-tunes the position of the object 2 (step S13) is smaller than the distance by which the driving assembly 11 moves the object 2 to bring the next surface block to the detection position 14 (step S15).
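The loop formed by steps S11 to S15 can be summarized as follows. This is a minimal sketch only, assuming hypothetical camera, drive, and locate_block interfaces that stand in for the photosensitive element 13, the driving assembly 11, and the image analysis performed by the processor 15; none of these names come from the patent.

```python
def capture_all_blocks(camera, drive, locate_block, num_blocks, tolerance_px=5):
    """Repeat steps S11-S15 for every surface block of the object.

    camera.capture() returns an image, drive.fine_tune(step) nudges the object
    along the first direction D1, drive.move_next_block() brings the next
    surface block to the detection position, and locate_block(image) returns
    the pixel offset of the second-alignment-structure image block from the
    image centre. All of these are hypothetical interfaces.
    """
    detection_images = []
    for _ in range(num_blocks):
        while True:
            test_image = camera.capture()              # step S11: capture a test image
            offset = locate_block(test_image)          # step S12: locate the image block
            if abs(offset) <= tolerance_px:            # block is in the middle of the test image
                break
            drive.fine_tune(-1 if offset > 0 else 1)   # step S13: nudge the object along D1
        detection_images.append(camera.capture())      # step S14: capture the detection image
        drive.move_next_block()                        # step S15: next surface block to position
    return detection_images
```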
For example, assume that the object 2 has three surface blocks and that the image capturing procedure starts with the photosensitive element 13 facing the surface block 21A of the object 2. Under the illumination of the light source assembly 12, the photosensitive element 13 captures a test image (hereinafter referred to as a first test image) of the object 2. The first test image includes an image block (hereinafter referred to as a first image block) of the second alignment structure 203 corresponding to the surface block 21A. The processor 15 then performs image analysis on the first test image to detect the display position of the first image block in the first test image. When the display position of the first image block is not located in the middle of the first test image, the driving assembly 11 fine-tunes the position of the object 2 in the first direction D1. After the fine-tuning, the photosensitive element 13 captures the first test image again for the processor 15 to determine whether the display position of the first image block is located in the middle of the first test image. Conversely, when the display position of the first image block is located in the middle of the first test image, the photosensitive element 13 captures the detection image of the surface block 21A of the object 2 under the illumination of the light source assembly 12. After the capture, the driving assembly 11 displaces the next surface block 21B of the object 2 to the detection position 14 in the first direction D1, so that the second alignment structure 203 corresponding to the surface block 21B faces the photosensitive element 13. Then, under the illumination of the light source assembly 12, the photosensitive element 13 captures a test image (hereinafter referred to as a second test image) of the object 2, and the second test image includes an image block (hereinafter referred to as a second image block) of the second alignment structure 203 corresponding to the surface block 21B. The processor 15 performs image analysis on the second test image to detect the display position of the second image block in the second test image. When the display position of the second image block is not located in the middle of the second test image, the driving assembly 11 fine-tunes the position of the object 2 in the first direction D1. After the fine-tuning, the photosensitive element 13 captures the second test image again for the processor 15 to determine whether the display position of the second image block is located in the middle of the second test image. Conversely, when the second image block is located in the middle of the second test image, the photosensitive element 13 captures the detection image of the surface block 21B of the object 2 under the illumination of the light source assembly 12. After the capture, the driving assembly 11 further displaces the next surface block 21C of the object 2 to the detection position 14 in the first direction D1, so that the second alignment structure 203 corresponding to the surface block 21C faces the photosensitive element 13. Then, under the illumination of the light source assembly 12, the photosensitive element 13 captures a test image (hereinafter referred to as a third test image) of the object 2, and the third test image includes an image block (hereinafter referred to as a third image block) of the second alignment structure 203 corresponding to the surface block 21C.
The processor 15 then performs image analysis on the third test image to detect the display position of the third image block in the third test image. When the display position of the third image block is not located in the middle of the third test image, the driving assembly 11 fine-tunes the position of the object 2 in the first direction D1. After the fine-tuning, the photosensitive element 13 captures the third test image again for the processor 15 to determine whether the display position of the third image block is located in the middle of the third test image. Conversely, when the third image block is located in the middle of the third test image, the photosensitive element 13 captures the detection image of the surface block 21C of the object 2 under the illumination of the light source assembly 12.
In some embodiments, when the object surface detection system needs to capture images of the object 2 with two different sets of image capturing parameters, it executes the image capturing procedure once with each set of image capturing parameters. The image capturing parameters may differ in the brightness of the light L1 provided by the light source assembly 12, in the light incident angle at which the light source assembly 12 illuminates, or in the spectrum of the light L1 provided by the light source assembly 12.
In some embodiments, referring to fig. 7, after the detection images of all the surface blocks 21A-21C of the object 2 have been captured, the processor 15 stitches the detection images of all the surface blocks 21A-21C into an object image according to the capturing order (step S21) and compares the stitched object image with a predetermined pattern (step S22). When the object image does not match the predetermined pattern, the processor 15 adjusts the stitching order of the detection images (step S23) and performs the comparison again (step S22). When the object image matches the predetermined pattern, the processor 15 obtains the object image of the object 2.
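The stitch-compare-reorder loop of steps S21 to S23 might look as follows; this sketch assumes the detection images are NumPy arrays of equal height and that a hypothetical matches_pattern callable encapsulates the comparison with the predetermined pattern, neither of which is specified by the patent.

```python
import itertools
import numpy as np

def build_object_image(detection_images, matches_pattern):
    """Stitch detection images in capture order (S21), compare with the
    predetermined pattern (S22), and try other stitching orders if the
    comparison fails (S23). matches_pattern is a hypothetical callable."""
    order = list(range(len(detection_images)))
    candidate_orders = itertools.chain([order], itertools.permutations(order))
    for candidate in candidate_orders:
        object_image = np.hstack([detection_images[i] for i in candidate])
        if matches_pattern(object_image):
            return object_image, list(candidate)
    raise ValueError("no stitching order matches the predetermined pattern")
```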
In some embodiments, the object surface detection system may also perform an alignment procedure. After the object 2 is placed on the driving assembly 11, the object surface detection system executes the alignment procedure to align the object and thereby determine the position at which image capturing of the object 2 starts.
Referring to fig. 8, in the alignment procedure, the driving assembly 11 continuously rotates the object 2, and the processor 15 detects the first alignment structures 202 of the object 2 through the photosensitive element 13 while the object 2 rotates (step S01) to determine whether the first alignment structures 202 reach a predetermined pattern. During the rotation of the object 2, the second alignment structures 203 of the object 2 sequentially face the photosensitive element 13.
In some embodiments, the predetermined pattern may be defined by the relative positions of the first alignment structures 202 and/or a brightness relationship among the image blocks of the first alignment structures 202.
In one example, the photosensitive element 13 continuously captures detection images of the object 2 while the object 2 rotates, and each detection image includes image blocks showing the first alignment structures 202. The processor 15 analyzes each detection image to determine the relative positions of the image blocks of the first alignment structures 202 and/or the brightness relationship among those image blocks. For example, if the processor 15 finds that the image blocks of the first alignment structures 202 are spaced apart and do not overlap, and that the image block located in the middle is brighter than the image blocks on both sides, the processor 15 determines that the first alignment structures 202 have reached the predetermined pattern. In other words, the predetermined pattern can be defined by the image characteristics of specific structures of the object 2.
When the first alignment structures 202 reach the predetermined pattern, the processor 15 stops the rotation of the object (step S02) and performs the image capturing procedure on the object; that is, the processor 15 controls the driving assembly 11 to stop rotating the object 2. Otherwise, detection images continue to be captured and the positions and/or brightness of the image blocks of the first alignment structures 202 continue to be analyzed.
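As an illustration of this check, the sketch below tests the example predetermined pattern given above (image blocks spaced apart without overlap, middle block brightest). It assumes the image blocks have already been segmented into bounding boxes with mean brightness values; the data structure is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AlignmentBlock:
    left: int              # bounding box of the image block (pixels)
    right: int
    mean_brightness: float

def reaches_predetermined_pattern(blocks):
    """Return True when the first-alignment-structure image blocks are spaced
    apart without overlapping and the middle block is brighter than the ones
    on both sides (one possible predetermined pattern from the description)."""
    blocks = sorted(blocks, key=lambda b: b.left)
    non_overlapping = all(a.right < b.left for a, b in zip(blocks, blocks[1:]))
    middle = blocks[len(blocks) // 2]
    middle_brightest = all(middle.mean_brightness > b.mean_brightness
                           for b in blocks if b is not middle)
    return non_overlapping and middle_brightest
```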
In some embodiments, when the object surface detection system includes the alignment procedure, the processor 15 can, after the detection images of all the surface blocks 21A-21C of the object 2 have been captured, stitch the captured detection images into the object image of the object 2 according to the capturing order (step S31).
For example, taking the spindle shown in fig. 5 and 6, the object surface detection system captures the detection images MB of all the surface blocks 21A-21C after performing the image capturing procedure (i.e. repeatedly performing steps S11-S15). The processor 15 can then stitch the detection images MB of all the surface blocks 21A-21C into the object image IM of the object 2 according to the capturing order, as shown in fig. 9. In this example, the photosensitive element 13 may be a linear photosensitive element. In that case, the detection images MB captured by the photosensitive element 13 can be stitched by the processor 15 without cropping. In some embodiments, the linear photosensitive element may be implemented by a line image sensor. The line image sensor may have a field of view (FOV) approaching 0 degrees.
In another embodiment, the photosensitive element 13 is a two-dimensional photosensitive element. In that case, when the photosensitive element 13 captures the detection image MB of a surface block 21A-21C, the processor 15 extracts the middle area MBc of the detection image MB along its short side, as shown in fig. 10. The processor 15 then stitches the middle areas MBc corresponding to all the surface blocks 21A-21C into the object image IM. In some embodiments, the middle area MBc may have a width of, for example, one pixel. In some embodiments, the two-dimensional photosensitive element may be implemented by an area image sensor. The area image sensor may have a field of view of about 5 degrees to about 30 degrees.
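A sketch of how the middle area MBc might be extracted and stitched for a two-dimensional photosensitive element, assuming the detection images are NumPy arrays whose columns run along the first direction D1 and that MBc is one pixel wide, as suggested above; both assumptions go beyond the patent text.

```python
import numpy as np

def middle_strip(frame, width_px=1):
    """Take the central width_px columns of a 2D detection image MB,
    measured along its short side, as the middle area MBc."""
    centre = frame.shape[1] // 2
    half = width_px // 2
    return frame[:, centre - half:centre - half + width_px]

def stitch_object_image(frames, width_px=1):
    """Concatenate the middle areas MBc of all surface blocks into the object image IM."""
    return np.hstack([middle_strip(f, width_px) for f in frames])
```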
In some embodiments, the object surface detection system may further perform a test procedure. In other words, before the alignment procedure and the image capturing procedure are performed, the object surface detection system may perform a test procedure to confirm that each component (such as the driving assembly 11, the light source assembly 12, and the photosensitive element 13) is operating normally.
In the test procedure, referring to fig. 11, the photosensitive element 13 captures a test image under the illumination of the light source assembly 12 (step S41). The processor 15 receives the test image captured by the photosensitive element 13, analyzes it (step S42) to determine whether it is normal (step S43), and thereby determines whether the test is completed. If the test image is normal (yes), the photosensitive element 13 is able to capture normal detection images, and the object surface detection system then executes the alignment procedure (continuing with step S01) or the image capturing procedure (continuing with step S11).
If the test image is abnormal (NO), the object surface detection system may execute a calibration procedure (step S45).
In some embodiments, referring to fig. 1 and 2, the object surface detection system may further include a light source adjusting assembly 16 coupled to the light source assembly 12. The light source adjusting assembly 16 can be used to adjust the position of the light source assembly 12 to change the light incident angle θ.
In an exemplary embodiment, referring to fig. 1, 2 and 11, the photosensitive element 13 may capture an image of the surface block currently located at the detection position 14 as the test image (step S41). The processor 15 then analyzes the test image (step S42) to determine whether its average brightness meets a preset brightness and thereby whether the test image is normal (step S43). If the average brightness of the test image does not meet the preset brightness (no), the test image is abnormal. For example, when the light incident angle θ of the light source assembly 12 is improper, the average brightness of the test image may fail to meet the preset brightness; in that case, the test image may not correctly show the preset surface pattern to be detected on the object 2.
In the calibration procedure, the processor 15 controls the light source adjusting assembly 16 to readjust the position of the light source assembly 12 and reset the light incident angle θ (step S45). After the light source adjusting assembly 16 readjusts the position of the light source assembly 12 (step S45), the light source assembly 12 emits another test light with a different light incident angle θ. The processor 15 then controls the photosensitive element 13 to capture an image of the surface block currently located at the detection position 14 under this test light (step S41) to generate another test image, and analyzes it (step S42) to determine whether its average brightness meets the preset brightness (step S43). If the average brightness of this test image still does not meet the preset brightness (no), the processor 15 controls the light source adjusting assembly 16 to readjust the position of the light source assembly 12 and the light incident angle θ again (step S45) until the average brightness of the test image captured by the photosensitive element 13 meets the preset brightness. When the average brightness of the test image meets the preset brightness (yes), the object surface detection system proceeds to step S01 or step S11 to perform the alignment procedure or the image capturing procedure.
In another embodiment, referring to fig. 1, 2 and 12, the processor 15 can also determine whether the setting parameters of the photosensitive element 13 are normal according to whether the test image is normal (step S43). If the test image is normal (yes), the object surface detection system proceeds to step S01 or step S11 to perform the alignment procedure or the image capturing procedure. If the test image is abnormal (no), which indicates that the setting parameters of the photosensitive element 13 are abnormal, the processor 15 further determines whether the photosensitive element 13 has already performed the calibration operation of the setting parameters (step S44). If the photosensitive element 13 has already performed the calibration operation (yes), the processor 15 generates a warning signal indicating that the photosensitive element 13 is abnormal (step S46). If the photosensitive element 13 has not performed the calibration operation (no), the object surface detection system proceeds to the calibration procedure (step S45), in which the processor 15 drives the photosensitive element 13 to perform the calibration operation of the setting parameters. After the photosensitive element 13 performs the calibration operation (step S45), it captures another test image (step S41), and the processor 15 determines whether this test image, captured after the calibration operation, is normal (step S43). If the processor 15 determines that this test image is still abnormal (no), the processor 15 then determines in step S44 that the photosensitive element 13 has already performed the calibration operation (yes) and generates a warning signal indicating that the photosensitive element 13 is abnormal (step S46).
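The branching of steps S41 to S46 can be summarized as below. The sketch assumes hypothetical callables for capturing, checking, calibrating and warning, and limits the retry to a single calibration pass, as the description does; none of the names come from the patent.

```python
def test_procedure(capture_test_image, image_is_normal, run_calibration, warn):
    """Steps S41-S46: capture a test image, check it, calibrate the
    photosensitive element at most once, and warn if it is still abnormal.
    All four callables are hypothetical stand-ins for the system components."""
    image = capture_test_image()                 # step S41
    if image_is_normal(image):                   # steps S42-S43
        return True                              # continue with step S01 or S11
    run_calibration()                            # steps S44-S45: first calibration pass
    image = capture_test_image()                 # step S41 again
    if image_is_normal(image):                   # step S43 again
        return True
    warn("photosensitive element abnormal")      # step S46: calibration already performed
    return False
```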
In some embodiments, the setting parameters of the photosensitive element 13 include a sensitivity value, an exposure value, a focus value, a contrast setting value, or any combination thereof. In some embodiments, the processor 15 determines whether the average brightness or contrast of the test image meets the preset brightness in order to determine whether these setting parameters are normal. For example, if the average brightness or contrast of the test image does not meet the preset brightness, at least one of the setting parameters of the photosensitive element 13 is wrong; if the average brightness or contrast of the test image meets the preset brightness, each of the setting parameters of the photosensitive element 13 is correct.
In an embodiment, the object surface detection system may further include an audio-visual display unit, and the warning signal may include an image, a sound, or both, which the audio-visual display unit presents. Furthermore, the object surface detection system may also have a network function, and the processor 15 may send the warning signal to the cloud for storage or to other devices through the network function, so that users at the cloud or on those devices learn that the photosensitive element 13 is abnormal and can troubleshoot it.
In one embodiment, in the calibration procedure (step S45), the photosensitive element 13 automatically adjusts its setting parameters according to a parameter setting file, which stores the setting parameters of the photosensitive element 13. In some embodiments, an inspector updates the parameter setting file through the user interface of the object surface detection system, so that in the calibration procedure the photosensitive element 13 automatically adjusts its setting parameters according to the updated parameter setting file and thereby corrects the wrong setting parameters.
In the foregoing embodiments, when the photosensitive element 13 captures an image (i.e. a test image or a detection image), the light source assembly 12 emits a light L1 toward the detection position 14, and the light L1 irradiates the surface block currently located at the detection position 14 from an oblique or lateral direction.
Referring to fig. 3 and 4, the incident direction of the light L1 forms an angle (hereinafter referred to as the light incident angle θ) with the normal 14A of the surface block of the detection position 14. That is, at the light incident end, the angle between the optical axis of the light ray L1 and the normal 14A is the light incident angle θ. In some embodiments, the light incident angle θ is greater than 0 degrees and less than or equal to 90 degrees, that is, the detection light L1 irradiates the detection position 14 with the light incident angle θ greater than 0 degrees and less than or equal to 90 degrees with respect to the normal 14A, so that the surface area currently located at the detection position 14 is irradiated with the detection light L1 from the lateral direction or the oblique direction.
In some embodiments, as shown in fig. 3 and 4, the photosensitive axis 13A of the photosensitive element 13 is parallel to the normal 14A; or, as shown in fig. 13, the photosensitive axis 13A of the photosensitive element 13 lies between the normal 14A and the first direction D1, that is, an angle α is formed between the photosensitive axis 13A and the normal 14A. The photosensitive element 13 receives the diffuse light generated when the surface blocks 21A-21C receive the light, and captures the detection images of the surface blocks 21A-21C sequentially located at the detection position 14 according to this diffuse light (step S14).
In some embodiments, because the light incident angle θ is greater than 0 degrees and less than or equal to 90 degrees, that is, because the light L1 is incident from a lateral or oblique direction, if the surface 21 of the object 2 includes a surface structure in the shape of a groove or a hole, the light L1 does not reach the bottom of the surface structure, and the surface structure appears as a shadow in the detection image of the surface block 21A-21C, producing a detection image with sharp contrast between the surface 21 and the surface defect. In this way, the object surface detection system or an inspector can determine whether the surface 21 of the object 2 has defects by checking whether the detection image contains shadows.
In some embodiments, surface structures with different depths exhibit different brightness in the detection image depending on the light incident angle θ. In detail, as shown in fig. 4, when the light incident angle θ is equal to 90 degrees, the incident direction of the light L1 is perpendicular to the depth direction of the surface defect, i.e. the optical axis of the light L1 coincides with the tangent to the surface at the center of the detection position; in that case, regardless of its depth, a surface structure on the surface 21 is not irradiated by the light L1 and generates no reflected or diffuse light, so both deeper and shallower surface structures appear as shadows in the detection image, i.e. the detection image has poor or almost no contrast between them. As shown in fig. 3, when the light incident angle θ is smaller than 90 degrees, the incident direction of the detection light L1 is not perpendicular to the depth direction of the surface structure; in that case, the light L1 irradiates a partial area of the surface structure below the surface 21, and that partial area generates reflected and diffuse light, so the photosensitive element 13 receives reflected and diffuse light from the partial area of the surface structure, and the surface structure appears in the detection image with a brighter boundary (such as the boundary of a protruding defect) or a darker boundary (such as the boundary of a recessed defect), i.e. the detection image has better contrast.
Also, at the same light incident angle θ smaller than 90 degrees, the photosensitive element 13 receives more reflected and diffuse light from a shallower surface structure than from a deeper one. Thus, shallower surface structures appear brighter in the detection image than surface structures with a greater depth-to-width ratio. Further, when the light incident angle θ is smaller than 90 degrees, a smaller light incident angle θ produces more reflected and diffuse light in the surface structure region, so the surface structure appears brighter in the detection image, and a shallower surface structure appears brighter than a deeper one. For example, compared with the detection image corresponding to a light incident angle θ of 60 degrees, the same surface structure appears brighter in the detection image corresponding to a light incident angle θ of 30 degrees; and in the detection image corresponding to a light incident angle θ of 30 degrees, a shallower surface structure appears brighter than a deeper one.
Therefore, the magnitude of the light incident angle θ and the brightness of a surface structure in the detection image are negatively correlated. If the light incident angle θ is smaller, shallower surface structures appear brighter in the detection image, that is, the object surface detection system or the inspector is less likely to notice shallower surface structures at a smaller light incident angle θ; in other words, the system or inspector can more easily recognize deeper surface structures from their darker images. Conversely, if the light incident angle θ is larger, both shallower and deeper surface structures appear dark in the detection image, that is, the object surface detection system or the inspector can recognize all the surface structures at a larger light incident angle θ.
Thus, based on this negative correlation, the object surface detection system or the inspector can set the light incident angle θ according to the preset hole depth of the preset surface structures to be detected. For example, if deeper preset surface defects are to be detected but shallower preset surface structures are not, the light source adjusting assembly 16 may adjust the position of the light source assembly 12 according to the above negative correlation and drive the light source assembly 12 to output the detection light L1, so that the shallower preset surface defects appear bright in the detection image while the deeper preset surface structures appear dark. If both shallower and deeper preset surface defects are to be detected, the light source adjusting assembly 16 may adjust the position of the light source assembly 12 according to a light incident angle calculated from the above negative correlation and drive the light source assembly 12 to output the detection light L1, so that both the shallower and the deeper preset surface structures appear as shadows in the detection image.
For example, assume that the object 2 is a spindle for an automobile seat belt assembly. Its surface structures may be sand holes or air holes caused by dust or air during manufacture of the object 2, or impact marks or scratches, where the depth of the sand holes or air holes is greater than that of the impact marks or scratches. If the sand holes or air holes of the object 2 are to be detected but the impact marks or scratches are not, the light source adjusting assembly 16 can adjust the position of the light source assembly 12 according to the light incident angle calculated from the above negative correlation to set a smaller light incident angle θ, so that sand holes or air holes appear darker in the detection image while impact marks or scratches appear brighter, allowing the object surface detection system or inspector to quickly identify whether the object 2 has sand holes or air holes. If the impact marks, scratches, sand holes and air holes of the object 2 are all to be detected, the light source adjusting assembly 16 can adjust the position of the light source assembly 12 according to the negative correlation to set a larger light incident angle θ, so that impact marks, scratches, sand holes and air holes all appear as shadows in the detection image.
In one embodiment, the light incident angle θ is related to a preset depth ratio of the preset surface defect to be detected. Referring to fig. 14, taking a preset surface defect with a preset hole depth d and a preset hole radius r as an example, where the preset hole radius r is the distance between any side surface of the preset surface defect and the normal 14A, the ratio (r/d) of the preset hole radius r to the preset hole depth d is the aforementioned depth ratio, and the light incident angle θ is arctan(r/d). Accordingly, in step S03 the light source adjusting assembly 16 can adjust the position of the light source assembly 12 to set the light incident angle θ according to the depth ratio (r/d) of the preset surface defect to be detected. Here, the light incident angle θ must be greater than or equal to arctan(r/d) and less than or equal to 90 degrees to obtain the best capture of the target feature at the wavelength to be detected. After adjusting the position of the light source assembly 12, the light source adjusting assembly 16 drives the light source assembly 12 to output the detection light L1. In some embodiments, the preset hole radius r may be determined in advance according to the size of the surface structures of the object 2 that are expected to be detected.
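Restated compactly as a formula (the numeric example is added for illustration only and is not from the patent):

```latex
% Depth ratio and light incident angle \theta as defined above (FIG. 14),
% with r the preset hole radius and d the preset hole depth:
\text{depth ratio} = \frac{r}{d}, \qquad
\arctan\!\left(\frac{r}{d}\right) \;\le\; \theta \;\le\; 90^{\circ}.
% Illustrative (assumed) numbers: r = 5\,\mu\text{m}, d = 10\,\mu\text{m}
% give \arctan(5/10) \approx 26.6^{\circ}, so \theta would be set between
% roughly 26.6 and 90 degrees.
```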
In one embodiment, the processor 15 calculates the light incident angle θ according to the aforementioned negative correlation and arctan(r/d), and then drives the light source adjusting assembly 16 to adjust the position of the light source assembly 12 according to the calculated light incident angle θ.
In some embodiments, the light L1 provided by the light source assembly 12 has a wavelength of 300 nm to 3000 nm. For example, the light L1 can have a wavelength of 300 nm-600 nm, 600 nm-900 nm, 900 nm-1200 nm, 1200 nm-1500 nm, 1500 nm-1800 nm, or 1800 nm-2100 nm. In one example, the light L1 provided by the light source assembly 12 may be visible light; in this case the light L1 can image micrometer-scale surface defects on the surface 21 in the detection image. In some embodiments, the light L1 may have a wavelength in the range of 380 nm to 780 nm. In some embodiments, the light L1 may be any visible light such as white, violet, blue, green, yellow, orange or red light. In one embodiment, white light has a wavelength of 380 nm to 780 nm, violet light 380 nm to 450 nm, blue light 450 nm to 495 nm, green light 495 nm to 570 nm, yellow light 570 nm to 590 nm, orange light 590 nm to 620 nm, and red light 620 nm to 780 nm.
In some embodiments, the light L1 provided by the light source assembly 12 may be far infrared light (e.g., having a wavelength in the range of 800 nm to 3000 nm). In this case, the light L1 can image sub-micrometer (e.g. 300 nm) surface patterns on the surface of the object 2 in the detection image. When the light source assembly 12 provides far infrared light to obliquely illuminate an object 2 having a surface attachment, the far infrared light can penetrate the attachment to the surface of the object 2, so that the photosensitive element 13 can capture the surface image of the object 2 beneath the attachment. In other words, far infrared light can penetrate the surface attachment of the object 2, allowing the photosensitive element 13 to acquire an image of the surface 21 of the object 2. In some embodiments, the far infrared light has a wavelength greater than 2 μm. In some embodiments, the wavelength of the far infrared light is greater than the thickness of the attachment; preferably, it is greater than 3.5 μm. In some embodiments, the object 2 is preferably made of metal. In some embodiments, the attachment may be a stain, paint, or the like. In one example, the wavelength of the far infrared light can be chosen according to the thickness of the attachment to be penetrated. In addition, the wavelength of the far infrared light can be chosen according to the surface pattern of the object 2 to be measured, so as to filter out micrometer (μm) scale structures from the image. For example, if the sample surface has slender micro-marks or sand holes of 1 μm to 3 μm that do not affect product quality, and quality control personnel only care about structural flaws above 10 μm, the wavelength of the far infrared light L1 can be selected at an intermediate value (for example, 4 μm) to obtain the best microstructure filtering effect and low-noise image quality without affecting the detection of larger-scale defects.
In some embodiments, the light source assembly 12 may have a broader wavelength band, and the image scanning system further obtains the light L1 (or the reflected light of the light L1) at the desired wavelength by disposing a light splitting component (not shown), which passes only a specific wavelength band, in the light incident path or the light receiving path.
In one embodiment, the processor 15 can drive the light source adjusting assembly 16 to adjust the intensity of the far infrared light L1 emitted by the light source assembly 12, so as to reduce glare and thereby improve the quality of the detection image captured by the photosensitive element 13, obtaining a low-disturbance penetrating image. For example, the light source adjusting assembly 16 can reduce the light intensity so that the photosensitive element 13 obtains a detection image with less glare.
In another embodiment, because surface defects with different depths exhibit different brightness in the detection image at different light incident angles θ, the glare produced by the far infrared light L1 also varies. In other words, the processor 15 can drive the light source adjusting assembly 16 to adjust the light incident angle θ of the far infrared light L1 emitted by the light source assembly 12, so as to effectively reduce glare and thereby improve the quality of the detection image captured by the photosensitive element 13, obtaining a low-disturbance penetrating image.
In yet another embodiment, the light source adjusting assembly 16 can set the polarization direction of the far infrared light L1 emitted by the light source assembly 12, i.e. control the light source assembly 12 to output polarized far infrared detection light L1, so as to effectively reduce glare and thereby improve the quality of the detection image captured by the photosensitive element 13, obtaining a low-disturbance penetrating image.
In some embodiments, referring to fig. 15, the object surface detection system may further comprise a polarizer 17. The polarizer 17 is located on the photosensitive axis 13A of the photosensitive element 13 and disposed between the photosensitive element 13 and the detection position 14. The photosensitive element 13 captures the image of the surface of the object 2 through the polarizer 17, and the polarization filtering of the polarizer 17 effectively prevents the saturating glare that strong infrared light would otherwise cause on the photosensitive element 13, improving the quality of the detection image captured by the photosensitive element 13 and yielding a low-disturbance penetrating image.
In one embodiment, as shown in fig. 1, the object 2 is cylindrical, such as a spindle; i.e. the body 201 of the object 2 is a cylinder. Here, the surface 21 of the object 2 may be the side surface of the body 201, i.e. the surface 21 is a cylindrical surface spanning an arc of 2π. The first direction D1 may then be the clockwise or counterclockwise direction about the long axis of the body of the object 2. In some embodiments, one end of the object 2 is narrower than the other. In one example, the carrying element 111 may be two rollers separated by a predetermined distance, with the driving motor 112 coupled to the rotation shafts of the two rollers. Here, the predetermined distance is smaller than the diameter of the object 2 (the minimum diameter of the body), so the object 2 is movably placed between the two rollers. When the driving motor 112 rotates the two rollers, the surface friction between the object 2 and the rollers drives the object 2 to rotate along the first direction D1 of the surface 21, aligning a surface block with the detection position 14. In another example, the carrying element 111 may be a shaft, with the driving motor 112 coupled to one end of the shaft and a fitting (such as a socket) provided at the other end; the object 2 is removably fitted into the fitting. When the driving motor 112 rotates the shaft, the shaft drives the object 2 to rotate along the first direction D1 of the surface 21, aligning a surface block with the detection position 14. In some embodiments, the surface 21 is divided into nine surface blocks 21A-21C, and the driving motor 112 drives the carrying element 111 to rotate 40 degrees each time, driving the object 2 to rotate 40 degrees along the first direction D1 of the surface 21. In some embodiments, the angle by which the driving motor 112 rotates in step S13 (to fine-tune the position of the object 2) is smaller than the angle by which it rotates in step S15 (to shift the next surface block to the detection position 14).
In one embodiment, as shown in fig. 16, the object 2 is plate-shaped, i.e. the body 201 of the object 2 has a flat surface. The surface 21 of the object 2 (i.e. the flat surface of the body 201) may be a non-curved surface whose curvature is zero or close to zero. Here, the first direction D1 may be the extending direction of any side (e.g. the long side) of the surface 21 of the object 2. In one example, the carrying element 111 may be a planar carrier plate, with the driving motor 112 coupled to one side of the plate. In the detection procedure, the object 2 is removably disposed on the planar carrier plate. The driving motor 112 drives the planar carrier plate along the first direction D1 of the surface 21 to displace the object 2, aligning a surface block with the detection position 14. The driving motor 112 displaces the planar carrier plate by a predetermined distance each time, and by repeating this displacement the surface blocks 21A-21C are sequentially moved to the detection position 14. Here, the predetermined distance is substantially equal to the width of each surface block 21A-21C along the first direction D1.
In some embodiments, the drive motor 112 may be a stepper motor.
In one embodiment, as shown in fig. 1 and 16, the light source assembly 12 may include a single light emitting element. In another embodiment, as shown in fig. 3 and 4, the light source assembly 12 may include two light emitting elements 121 and 122 symmetrically disposed on opposite sides of the object 2 with respect to the normal 14A. The two light emitting elements 121 and 122 each illuminate the detection position 14, so the surface 21 is illuminated by symmetrical detection light L1 and generates symmetrical diffuse light, and the photosensitive element 13 sequentially captures the detection images of the surface blocks 21A-21C located at the detection position 14 according to this symmetrical diffuse light, improving the imaging quality of the detection images. In some embodiments, each of the light emitting elements 121, 122 may be implemented by one or more light emitting diodes (LEDs); in some embodiments, each may be implemented by a laser source.
In one embodiment, the object surface detection system may have a single light source assembly 12, as shown in fig. 1 and 16.
In another embodiment, referring to fig. 17, the object surface detection system may have multiple light source assemblies 12a, 12b, 12c, 12d. The light source assemblies 12a, 12b, 12c, 12d are positioned at different orientations around the detection position 14, i.e. around the carrying element 111 that carries the object 2. For example, light source assembly 12a may be disposed on the front side of the detection position 14 (or the carrying element 111), light source assembly 12b on the rear side, light source assembly 12c on the left side, and light source assembly 12d on the right side.
Here, under the illumination of each light source assembly (12a, 12b, 12c, 12d) in turn, the object surface detection system performs the image capturing procedure to obtain the detection images MB of all the surface blocks 21A-21C of the object 2 under illumination from that specific orientation. For example, the object surface detection system first emits the light L1 from the light source assembly 12a, and under this light the photosensitive element 13 captures the detection images MB of all the surface blocks 21A-21C of the object 2. The system then switches to the light source assembly 12b, and under its light L1 the photosensitive element 13 again captures the detection images MB of all the surface blocks 21A-21C. The same is then repeated with the light source assembly 12c and with the light source assembly 12d.
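The per-orientation capture sequence reduces to a simple loop. The sketch below assumes hypothetical light-source objects with on()/off() methods and a capture_blocks routine such as the one sketched after step S15; neither interface comes from the patent.

```python
def capture_under_all_orientations(light_sources, capture_blocks):
    """For each light source assembly (e.g. front, rear, left, right as in
    FIG. 17), switch it on, capture the detection images MB of all surface
    blocks with the supplied capture routine, then switch it off."""
    images_by_orientation = {}
    for name, source in light_sources.items():
        source.on()
        images_by_orientation[name] = capture_blocks()
        source.off()
    return images_by_orientation
```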
In one embodiment, referring to fig. 16, the object surface detection system may be provided with a single photosensitive element 13, and the photosensitive element 13 captures images of the plurality of surface blocks 21A-21C to obtain a plurality of detection images respectively corresponding to the surface blocks 21A-21C. In another embodiment, referring to figs. 1 and 17, the image scanning system may be provided with a plurality of photosensitive elements 13, which face the detection position 14 and are arranged along the long axis of the object 2. The photosensitive elements 13 respectively capture detection images of the surface blocks of different sections of the object 2 located at the detection position 14.
In one example, it is assumed that the object 2 is cylindrical and the image scanning system is provided with a single photosensitive element 13. The photosensitive element 13 can capture images of a plurality of surface blocks 21A-21C on the body (i.e. the middle section) of the object 2 to obtain a plurality of detection images MB corresponding to the surface blocks 21A-21C, and then the processor 15 splices the detection images MB of the surface blocks 21A-21C into an object image IM, as shown in fig. 9.
In another example, it is assumed that the object 2 is cylindrical and the image scanning system is provided with a plurality of photosensitive elements 131-133, as shown in figs. 1 and 16. The photosensitive elements 131-133 capture the detection images MB1-MB3 of the surface of the object 2 at different sections of the detection position 14, and the processor 15 splices all the detection images MB1-MB3 into an object image IM, as shown in fig. 18. For example, assuming that the number of photosensitive elements 131-133 is three, the processor 15 splices the object image IM of the object 2 from the detection images MB1-MB3 captured by the three photosensitive elements 131-133, as shown in fig. 18. The object image IM includes a sub-object image 22 (the upper segment of the object image IM in fig. 18) formed by the detection images MB1 of all the surface blocks 21A-21C captured by the first photosensitive element 131, a sub-object image 23 (the middle segment) formed by the detection images MB2 of all the surface blocks 21A-21C captured by the second photosensitive element 132, and a sub-object image 24 (the lower segment) formed by the detection images MB3 of all the surface blocks 21A-21C captured by the third photosensitive element 133.
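The splicing of the detection images MB1-MB3 into the object image IM can be illustrated with a short sketch. It assumes each detection image is a NumPy array of equal size and that the three sensor sections are adjacent and non-overlapping; this is a simplification for illustration rather than the exact stitching performed by the processor 15.

import numpy as np

def stitch_object_image(mb1, mb2, mb3):
    """Join the per-block detection images of each sensor side by side, then
    stack the three sensor strips to form the object image IM."""
    strip_upper = np.concatenate(mb1, axis=1)     # sub-object image 22
    strip_middle = np.concatenate(mb2, axis=1)    # sub-object image 23
    strip_lower = np.concatenate(mb3, axis=1)     # sub-object image 24
    return np.concatenate([strip_upper, strip_middle, strip_lower], axis=0)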
In some embodiments, the processor 15 can automatically determine, from the obtained object image, whether the surface 21 of the object 2 includes surface defects, whether the surface 21 has different textures, and whether the surface 21 carries attachments such as paint or oil; that is, the processor 15 can automatically determine different surface types of the object 2 according to the object image. In detail, the processor 15 includes an artificial neural network system having a learning phase and a prediction phase. In the learning phase, object images of known surface types (i.e., images in which the surface types present have been labeled) are input to the artificial neural network system, and the artificial neural network system performs deep learning according to the known surface types and their surface type categories (hereinafter referred to as preset surface type categories) to build a prediction model. The artificial neural network system is composed of a plurality of sequentially connected hidden layers, each hidden layer has one or more neurons, and each neuron performs a judgment item. In other words, the learning phase uses the object images of known surface types to generate the judgment item of each neuron and/or adjust the weight of each connection between neurons, so that the prediction result for each object image (i.e., the output preset surface type category) matches the known, labeled surface type.
For example, the surface types may be sand holes or pores, impact marks or scratches, and so on, and the image blocks presenting different surface types may be image blocks in which sand holes of different depths are imaged, image blocks in which impact marks or scratches but no sand holes are imaged, image blocks in which different surface roughnesses are imaged, image blocks in which no surface defect is imaged, image blocks in which different contrasts are produced by irradiating the surface blocks 21A-21C with detection light L1 of different wavelengths, or image blocks in which attachments of different colors are imaged. In the learning phase, the artificial neural network system performs deep learning on object images of these various surface types to build a prediction model for identifying them, and can classify object images of different surface types in advance to generate the preset surface type categories. Then, in the prediction phase, after the obtained object image is input to the artificial neural network system, the artificial neural network system executes the aforementioned prediction model on the input object image to identify the image blocks presenting the surface types of the object 2, and classifies them according to the plurality of preset surface type categories. In some embodiments, at its output, the prediction model may score the object image against each preset surface type category, i.e., predict the percentage likelihood that the image falls into each category.
For example, taking the surface blocks 21A-21C as an example, the artificial neural network system executes the above prediction model on the object image formed by splicing the surface blocks 21A-21C, and can identify from this object image that the surface block 21A contains sand holes and impact marks, the surface block 21B has no surface defect, the surface block 21C contains sand holes and paint, and the surface roughness of the surface block 21A is greater than that of the surface block 21C. Then, taking six preset surface type categories as an example, namely sand holes or pores, scratches or impact marks, high roughness, low roughness, attachments, and no surface defect, the artificial neural network system classifies the detection image of the surface block 21A into the categories of sand holes or pores, scratches or impact marks, and high roughness, classifies the detection image of the surface block 21B into the categories of no surface defect and low roughness, and classifies the detection image of the surface block 21C into the categories of sand holes or pores, attachments, and low roughness. By identifying different surface types through the artificial neural network system in this way, the detection efficiency is greatly improved and the probability of human misjudgment is reduced.
In an embodiment, the deep learning performed by the artificial neural network system may be implemented by a convolutional neural network (CNN) algorithm, although the present disclosure is not limited thereto.
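For illustration only, a minimal multi-label CNN classifier of the kind referred to above could be sketched as follows; the layer sizes, the 128x128 grayscale input and the six category names are assumptions chosen for the example and do not describe the actual prediction model of the present disclosure.

import torch
import torch.nn as nn

# Assumed category names corresponding to the six preset surface type categories.
SURFACE_CATEGORIES = ["sand_hole_or_pore", "scratch_or_impact_mark",
                      "high_roughness", "low_roughness",
                      "attachment", "no_surface_defect"]

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, len(SURFACE_CATEGORIES)),
)

def predict_surface_types(image_batch):
    """Return per-category probabilities; a sigmoid per class allows one
    detection image to fall into several categories at once (e.g. sand holes
    and attachments), matching the multi-label description above."""
    with torch.no_grad():
        return torch.sigmoid(model(image_batch))

# Example: scores = predict_surface_types(torch.rand(1, 1, 128, 128))

A sigmoid output per category is used here rather than a softmax, because the description above allows a single detection image to belong to several preset surface type categories at once.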
In summary, according to the embodiments of the object alignment method of the present disclosure, image analysis of the test image determines whether a specific structure of the object is present and where it appears, and thereby determines whether the object is aligned, so that the detection image of each surface block is captured at the same position on the aligned object. Therefore, the artificial neural network system can establish a more accurate prediction model from detection images captured at the same positions, further reducing the probability of misjudgment.
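As a non-limiting illustration of this alignment logic, the following sketch captures a detection image only once the image block of the second alignment structure is centred in the test image; the helper interfaces (capture_test_image, capture_detection_image, locate_block, move_object) are hypothetical and assumed for the example.

def capture_aligned_detection_image(camera, stage, locate_block, tolerance_px=5):
    """Capture a detection image only once the second alignment structure is
    imaged in the middle of the test image; otherwise nudge the object along
    the first direction and capture a new test image."""
    while True:
        test_image = camera.capture_test_image()
        block_centre = locate_block(test_image)        # x position of image block
        image_centre = test_image.shape[1] / 2.0       # middle of the test image
        if abs(block_centre - image_centre) <= tolerance_px:
            return camera.capture_detection_image()    # block is centred: capture
        stage.move_object(step_px=1)                   # move object, then retry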
Although the present invention has been described with reference to the above embodiments, it should be understood that the invention is not limited thereto, but rather is capable of modification and variation without departing from the spirit and scope of the present invention as defined in the following claims.

Claims (12)

1. An object alignment method, suitable for aligning an object, comprising:
detecting a plurality of first alignment structures of the object under the rotation of the object, wherein the plurality of second alignment structures of the object sequentially face a photosensitive element during the rotation of the object;
when the first alignment structures reach a predetermined state, stopping rotation of the object and performing an image capturing process of the object, wherein the step of performing the image capturing process of the object includes:
capturing a test image of the object by the photosensitive element, wherein the test image comprises an image block presenting the second alignment structure facing the photosensitive element;
detecting a display position of the image block in the test image;
capturing a detection image of the object by the photosensitive element when the display position indicates that the image block is located in the middle of the test image;
moving the object in a first direction and returning to the step of capturing the test image of the object by the photosensitive element when the display position indicates that the image block is not located in the middle of the test image;
after the step of capturing the detection image of the object by the photosensitive element, displacing the object so that a next one of the second alignment structures faces the photosensitive element, and returning to the step of capturing the test image of the object by the photosensitive element, until the corresponding plurality of detection images are captured according to the plurality of second alignment structures; and
after capturing the corresponding plurality of detection images, splicing the plurality of detection images into an object image by a processor.
2. The method of claim 1, wherein the plurality of second alignment structures are spaced apart along the first direction, and a spacing distance between any two adjacent second alignment structures is greater than or equal to the range covered by a viewing angle of the photosensitive element.
3. The method of claim 1, wherein the object is cylindrical and the first direction is a clockwise direction.
4. The method of claim 1, wherein the object is cylindrical and the first direction is a counterclockwise direction.
5. The method of claim 1, wherein the object is planar.
6. The method of claim 1, wherein the step of performing an image capturing process of the object further comprises:
After capturing each detection image, capturing a middle section area of each detection image based on a short side of each detection image, and splicing the plurality of middle section areas into the object image.
7. An object alignment method, suitable for aligning an object, comprising:
sequentially displacing a plurality of surface blocks of the object to a detection position, wherein the object is provided with a plurality of alignment structures;
capturing a detection image of each surface block sequentially located at the detection position by a photosensitive element, wherein the photosensitive element faces the detection position and the plurality of alignment structures are located within a viewing angle of the photosensitive element;
splicing the plurality of detection images corresponding to the plurality of surface blocks into an object image;
comparing the object image with a preset pattern;
when the object image does not conform to the preset pattern, adjusting the splicing order of the plurality of detection images and then performing the comparison again; and
when the object image conforms to the preset pattern, obtaining the object image of the object.
8. The method of claim 7, wherein the alignment structures are located at one end of the object body.
9. The method of claim 8, wherein the body is cylindrical and the surface of the body is divided into the plurality of surface blocks in a clockwise direction.
10. The method of claim 8, wherein the body has a plane.
11. The method of claim 7, wherein sequentially displacing the plurality of surface blocks of the object to the inspection position comprises:
a bearing element is used for bearing the object at the detection position, and the bearing element is rotated to drive the object to rotate.
12. The method of claim 7, wherein sequentially displacing the plurality of surface blocks of the object to the inspection position comprises:
a bearing element is used for bearing the object at the detection position, and the bearing element is horizontally moved to drive the object to move.
CN201910987140.5A 2019-10-17 2019-10-17 Object alignment method Active CN112683786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910987140.5A CN112683786B (en) 2019-10-17 2019-10-17 Object alignment method


Publications (2)

Publication Number Publication Date
CN112683786A CN112683786A (en) 2021-04-20
CN112683786B true CN112683786B (en) 2024-06-14

Family

ID=75444665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910987140.5A Active CN112683786B (en) 2019-10-17 2019-10-17 Object alignment method

Country Status (1)

Country Link
CN (1) CN112683786B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201057529Y (en) * 2007-05-16 2008-05-07 贝达科技有限公司 Object detecting machine

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI274156B (en) * 2004-08-16 2007-02-21 Oncoprobe Biotech Inc Automatic detection method for organism disk
JP4573308B2 (en) * 2008-04-03 2010-11-04 芝浦メカトロニクス株式会社 Surface inspection apparatus and method
JP2012235362A (en) * 2011-05-02 2012-11-29 Shanghai Microtek Technology Co Ltd Image scanning device capable of automatic scanning
JP2014035183A (en) * 2012-08-07 2014-02-24 Toray Eng Co Ltd Device for inspecting attachment state of fiber reinforced plastic tape
TWI490463B (en) * 2014-04-11 2015-07-01 Pegatron Corp Detecting method and detecting system for distinguishing the difference of two workpieces
TWM493674U (en) * 2014-08-19 2015-01-11 Microtek Int Inc Scanning optical detecting device
TWM533209U (en) * 2016-08-05 2016-12-01 Min Aik Technology Co Ltd An optical detection system
JP6941978B2 (en) * 2017-06-14 2021-09-29 株式会社Screenホールディングス Alignment method, alignment device and inspection device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant