CN114299049A - Detection method and device, electronic equipment and storage medium


Info

Publication number: CN114299049A
Application number: CN202111667201.3A
Authority: CN (China)
Prior art keywords: detected, image, images, splicing, generate
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 陈鲁, 耿亚鹏, 陈驰, 赵燕, 张嵩
Current assignee: Shenzhen Zhongke Feice Technology Co Ltd
Original assignee: Shenzhen Zhongke Feice Technology Co Ltd
Priority/filing date: 2021-12-31 (application filed by Shenzhen Zhongke Feice Technology Co Ltd)
Publication date: 2022-04-08 (publication of CN114299049A)

Landscapes

  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The detection method comprises: adjusting the relative angle between a to-be-detected piece and a sensor according to the warping degree of any to-be-detected region in a divided region of the to-be-detected piece, so that the to-be-detected region is perpendicular to the optical axis of the sensor, where the to-be-detected piece comprises a plurality of divided regions and each divided region comprises a plurality of to-be-detected regions; acquiring, through the sensor, an acquired image of each to-be-detected region in the divided region; splicing the acquired images of the plurality of to-be-detected regions to generate a segmentation image; and splicing the plurality of segmentation images to generate a detection image of the to-be-detected piece. In the detection method and device, the electronic equipment, and the non-volatile computer-readable storage medium, the divided regions are laid out in advance and the relative angle between the to-be-detected piece and the sensor is adjusted once per divided region according to the warping degree of one of its to-be-detected regions; the acquired images are therefore of high quality while few adjustments are needed, which improves the splicing accuracy. Because the acquired images are spliced first and the segmentation images afterwards, the accumulated splicing error and the processing amount are both small.

Description

Detection method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a detection method, a detection apparatus, an electronic device, and a non-volatile computer-readable storage medium.
Background
At present, in the inspection of workpieces (such as wafers and display panels), the required acquisition precision is extremely high while the field of view of the sensor used for information acquisition is generally small, so a large number of images must be acquired for each workpiece. How to accurately synthesize this large number of acquired images into a complete and sharp workpiece image is therefore an urgent problem to be solved.
Disclosure of Invention
The application provides a detection method, a detection device, an electronic device and a non-volatile computer readable storage medium.
In a first aspect, the detection method in an embodiment of the present application includes: adjusting a relative angle between a to-be-detected object and a sensor according to a warping degree of any to-be-detected area in a divided area of the to-be-detected object, so that the to-be-detected area is perpendicular to an optical axis of the sensor, where the to-be-detected object includes a plurality of divided areas and each divided area includes a plurality of to-be-detected areas; acquiring, through the sensor, an acquired image of each to-be-detected area in the divided area; splicing the acquired images of the plurality of to-be-detected areas to generate a segmentation image; and splicing the plurality of segmentation images to generate a detection image of the to-be-detected object, so as to detect the to-be-detected object according to the detection image.
In a second aspect, the detection apparatus of the embodiment of the present application includes an adjustment module, an acquisition module, a first splicing module, and a second splicing module. The adjustment module is configured to adjust the relative angle between a to-be-detected object and a sensor according to the warping degree of any to-be-detected area in a divided area of the to-be-detected object, so that the to-be-detected area is perpendicular to the optical axis of the sensor, where the to-be-detected object includes a plurality of divided areas and each divided area includes a plurality of to-be-detected areas; the acquisition module is configured to acquire, through the sensor, an acquired image of each to-be-detected area in the divided area; the first splicing module is configured to splice the acquired images of the plurality of to-be-detected areas to generate a segmentation image; and the second splicing module is configured to splice the plurality of segmentation images to generate a detection image of the to-be-detected object, so as to detect the to-be-detected object according to the detection image.
In a third aspect, the electronic device in the embodiment of the present application includes a sensor, a motion platform, and a processor. The motion platform is configured to adjust the relative angle between a to-be-detected object and the sensor according to the warping degree of any to-be-detected area in a divided area of the to-be-detected object, so that the to-be-detected area is perpendicular to the optical axis of the sensor, where the to-be-detected object includes a plurality of divided areas and each divided area includes a plurality of to-be-detected areas; the sensor is configured to acquire an acquired image of each to-be-detected area in the divided area; and the processor is configured to splice the acquired images of the plurality of to-be-detected areas to generate a segmentation image, and to splice the plurality of segmentation images to generate a detection image of the to-be-detected object, so as to detect the to-be-detected object according to the detection image.
In a fourth aspect, an embodiment of the present application provides a non-transitory computer-readable storage medium containing a computer program which, when executed by one or more processors, causes the processors to perform the detection method. The detection method includes: adjusting the relative angle between a piece to be detected and a sensor according to the warping degree of any to-be-detected area in a divided area of the piece to be detected, so that the to-be-detected area is perpendicular to the optical axis of the sensor, where the piece to be detected includes a plurality of divided areas and each divided area includes a plurality of to-be-detected areas; acquiring, through the sensor, an acquired image of each to-be-detected area in the divided area; splicing the acquired images of the plurality of to-be-detected areas to generate a segmentation image; and splicing the plurality of segmentation images to generate a detection image of the piece to be detected, so as to detect the piece to be detected according to the detection image.
In the detection method, the detection device, the electronic equipment, and the non-volatile computer-readable storage medium of the present application, the to-be-detected piece is divided into a plurality of divided regions. When the to-be-detected regions in a divided region are imaged, the placement angle of the to-be-detected piece is first adjusted according to the warping degree of one to-be-detected region so that this region is perpendicular to the optical axis of the sensor, and the sensor then acquires the images of the to-be-detected regions in the current divided region. Because the to-be-detected regions in the same divided region are close together, their acquired images are all sharp once the optical axis is perpendicular to one of them, and the to-be-detected piece only needs to be adjusted once per divided region. The small number of adjustments prevents the splicing accuracy of the acquired images from being degraded, so the sharpness of each acquired image is ensured while the image-splicing accuracy is improved. In addition, the acquired images within the same divided region are spliced first to generate a plurality of segmentation images, which are then spliced to generate the detection image of the to-be-detected piece. If all the acquired images were instead spliced directly into the detection image, every pair of adjacent images would contribute splicing error and processing cost, so the accumulated error and the total processing amount would be large; splicing the acquired images of a single divided region first keeps the error and processing amount accumulated when splicing adjacent segmentation images small, which significantly reduces the splicing error and processing amount and improves both the accuracy and the efficiency of the finally spliced detection image.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow diagram of a detection method according to certain embodiments of the present application;
FIG. 2 is a block schematic diagram of a detection device according to certain embodiments of the present application;
FIG. 3 is a schematic plan view of an electronic device of some embodiments of the present application;
FIG. 4 is a schematic plan view of a test object according to some embodiments of the present application;
FIG. 5 is a schematic flow chart of a detection method according to certain embodiments of the present application;
FIGS. 6 and 7 are schematic illustrations of the detection method of certain embodiments of the present application;
FIGS. 8 and 9 are schematic flow charts of detection methods according to certain embodiments of the present application;
FIG. 10 is a schematic illustration of the principle of the detection method of certain embodiments of the present application;
FIG. 11 is a schematic flow chart of a detection method according to certain embodiments of the present application;
FIGS. 12 and 13 are schematic illustrations of the detection method of certain embodiments of the present application;
FIG. 14 is a schematic flow chart of a detection method according to certain embodiments of the present application; and
FIG. 15 is a schematic diagram of a connection between a processor and a computer-readable storage medium according to some embodiments of the present application.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout. In addition, the embodiments of the present application described below in conjunction with the accompanying drawings are exemplary and are only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the present application.
Referring to fig. 1 to 3, the detection method according to the embodiment of the present disclosure includes the following steps:
011: according to the warping degree of any region 211 to be measured in the divided regions 210 of the piece 200 to be measured, adjusting the relative angle between the piece 200 to be measured and the sensor so that the region 211 to be measured is perpendicular to the optical axis of the sensor, wherein the piece 200 to be measured comprises a plurality of divided regions 210, and the divided regions 210 comprise a plurality of regions 211 to be measured;
012: acquiring an acquired image of each region to be measured 211 in the divided region 210 by the sensor 40;
013: splicing the collected images of the multiple regions to be measured 211 to generate a segmented image; and
014: and splicing the plurality of segmented images to generate a detection image of the piece to be detected 200 so as to detect the piece to be detected 200 according to the detection image.
The detection device 10 of the embodiment of the present application includes an adjustment module 11, a collection module 12, a first splicing module 13, and a second splicing module 14. The adjusting module 11 is configured to adjust a relative angle between the to-be-measured object 200 and the sensor 40 according to a warping degree of any to-be-measured area 211 in the divided areas 210 of the to-be-measured object 200, so that the to-be-measured area 211 is perpendicular to an optical axis of the sensor 40; the acquisition module 12 is configured to acquire an acquired image of each region to be measured 211 in the segmented region 210 through the sensor 40; the first stitching module 13 is configured to stitch the acquired images of the multiple regions 211 to be detected to generate a segmentation image; the second stitching module 14 is configured to stitch the plurality of segmented images to generate a detection image of the to-be-detected object 200, so as to detect the to-be-detected object 200 according to the detection image. That is, step 011 can be implemented by the adjustment module 11, step 012 can be performed by the acquisition module 12, step 013 can be performed by the first stitching module 13, and step 014 can be performed by the second stitching module 14.
The electronic device 100 of the embodiment of the application includes a processor 20, a motion platform 30 and a sensor 40, where the motion platform 30 is configured to adjust a relative angle between the to-be-measured object 200 and the sensor 40 according to a warping degree of any one of the to-be-measured areas 211 in the divided areas 210 of the to-be-measured object 200, so that the to-be-measured area 211 is perpendicular to an optical axis of the sensor 40, the to-be-measured object 200 includes a plurality of divided areas 210, and the divided areas 210 include a plurality of to-be-measured areas 211; the sensor 40 is used for acquiring an acquired image of each region to be measured 211 in the segmented regions 210; the processor 20 is configured to stitch the acquired images of the multiple regions to be measured 211 to generate a segmented image; and splicing the plurality of segmented images to generate a detection image of the to-be-detected piece 200, so as to detect the to-be-detected piece 200 according to the detection image. That is, step 011 can be performed by the motion platform 30, step 012 can be performed by the sensor 40, and steps 013 and 014 can be performed by the processor 20.
In particular, the electronic device 100 may be a measuring machine. It is understood that the specific form of the electronic device 100 is not limited to a measuring machine, but may be any device capable of detecting the object 200.
Electronic device 100 includes processor 20, motion platform 30, and sensor 40. Both the processor 20 and the sensor 40 may be located on the motion platform 30. The motion platform 30 can be used to carry the object 200, and the motion platform 30 moves to drive the sensor 40 and/or the object 200 to move, so that the sensor 40 collects information of the object 200.
For example, the motion platform 30 includes an XY motion platform 31 and a Z motion platform 32, and the sensor 40 is disposed on the Z motion platform 32. The XY motion platform 31 moves the object under test 200 in the horizontal plane to change the relative position of the object 200 and the sensor 40 in that plane, while the Z motion platform 32 moves the sensor 40 along the direction perpendicular to the horizontal plane; together they set the three-dimensional position of the sensor 40 relative to the object 200 (i.e., the relative position in the horizontal plane and the relative position perpendicular to it). In addition, the Z motion platform 32 can rotate the sensor 40 to change the relative angle between the sensor 40 and the object 200; alternatively, the XY motion platform 31 can rotate the object 200 to the same effect.
It is understood that the motion platform 30 is not limited to the above structure; any structure capable of changing the three-dimensional position and the relative angle of the sensor 40 with respect to the device under test 200 may be used.
There may be one or more sensors 40, and multiple sensors 40 may be of different types; for example, the sensors 40 may include a spectral confocal sensor, a visible-light camera, an infrared camera, a depth camera, and the like.
The device under test 200 may be a panel (e.g., a display panel, a touch panel, etc.) or a wafer. In the embodiments of the present application, a wafer is taken as the example of the device under test 200.
Referring to fig. 4, the motion platform 30 generally has a predetermined carrying area in which the device under test 200 is placed before its information is collected. It is understood that, due to manufacturing errors of the device under test 200 or its own molded structure, the warpage of the device under test 200 may differ at different positions. Because the field of view of the sensor 40 is smaller than the device under test 200 and can only cover a portion of it at a time, the device under test 200 is divided for inspection into a plurality of larger divided regions 210, and each divided region 210 is further divided into a plurality of smaller regions to be measured 211; for example, regions a1 to a9, a10 to a18, a19 to a27, and a28 to a36 each form one divided region. The field of view of the sensor 40 covers one or more regions to be measured 211 at a time (one is taken as the example), and all the image information of the entire device under test 200 is acquired over multiple acquisitions. A sketch of such a partitioning appears below.
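As a concrete illustration of this partitioning, the following minimal sketch (hypothetical names; it assumes a rectangular layout of equally sized regions, which the patent does not prescribe) groups a grid of regions to be measured into divided regions:

    # Sketch: divide a workpiece into divided regions, each a group x group
    # grid of smaller regions to be measured (cf. a1 to a36 in Fig. 4).
    from typing import List, Tuple

    Region = Tuple[int, int, int, int]  # (x, y, width, height) in workpiece units

    def divide_workpiece(width: int, height: int, region_w: int, region_h: int,
                         group: int = 3) -> List[List[Region]]:
        """Return one list of regions to be measured per divided region."""
        cols, rows = width // region_w, height // region_h
        divided: List[List[Region]] = []
        for gy in range(0, rows, group):
            for gx in range(0, cols, group):
                block = [(x * region_w, y * region_h, region_w, region_h)
                         for y in range(gy, min(gy + group, rows))
                         for x in range(gx, min(gx + group, cols))]
                divided.append(block)
        return divided

For a 6 x 6 grid of regions with group = 3, this yields four divided regions of nine regions each, matching the a1 to a36 example.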
Therefore, before the sensor 40 acquires the image of each region to be measured 211, the relative angle between the piece to be measured 200 and the sensor 40 can be adjusted according to the warping degree of the region to be measured 211, so that the region directly faces the sensor 40, i.e., the region to be measured 211 is perpendicular to the optical axis of the sensor 40. The focusing effect of the sensor 40 on the region to be measured 211 is then optimal, so the image acquired of the region to be measured 211 is highly sharp.
Of course, in order to prevent the stitching accuracy of the subsequent acquired images from being affected by adjusting the relative angle between the to-be-measured object 200 and the sensor 40 too many times, the present application works in units of divided regions 210: each divided region 210 selects one region to be measured 211 as an alignment region, the angle is adjusted according to the warping degree of that region so that the optical axis of the sensor 40 is perpendicular to it, and the images of all the regions to be measured 211 in the current divided region 210 are then acquired at the adjusted angle. The warping degree of each region to be measured 211 may be preset, or may be determined by identifying the region in an image already collected by the sensor 40. The alignment region may be the region to be measured 211 located in the central region of the divided region 210 (e.g., regions a5, a14, a23, a32): when the central region directly faces the sensor 40, the relative angles between the surrounding regions to be measured 211 and the sensor 40 are small, which further improves the sharpness of the acquired images while keeping the number of angle adjustments small. A sketch of how the required tilt could be estimated follows.
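A minimal sketch of the tilt estimate (hypothetical; it assumes the warping degree of the alignment region is available as sampled surface heights, which the patent does not specify):

    # Sketch: fit a plane z = a*x + b*y + c to height samples of the
    # alignment region; the arctangents of the slopes are the rotations the
    # motion platform 30 would compensate so the region faces the sensor 40.
    import numpy as np

    def region_tilt(xs, ys, zs):
        """Return the two tilt angles (radians) of the fitted plane."""
        A = np.column_stack([xs, ys, np.ones_like(xs)])
        (a, b, _c), *_ = np.linalg.lstsq(A, zs, rcond=None)
        # The fitted plane has normal (-a, -b, 1); rotating by these angles
        # aligns that normal with the optical (z) axis.
        return float(np.arctan(a)), float(np.arctan(b))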
In other embodiments, the difference between the warping degrees of different regions to be measured 211 in the same divided region 210 may be too large (for example, the warping degrees of some two regions to be measured 211 differ by more than a preset warp threshold), so that after one region to be measured 211 is made to directly face the sensor 40, the angles between the other regions and the optical axis of the sensor 40 remain large, the quality of the images collected from those regions is poor, and the subsequent stitching of the collected images suffers. In this case, a plurality of alignment regions may be determined for the divided region 210, each corresponding to part of its regions to be measured 211. For example, the divided region 210 is split into four sub-divided regions, and each sub-divided region determines its own alignment region, through which the images of all its regions to be measured 211 are collected. This ensures that whenever the sensor 40 collects the image of a region to be measured 211, the angle between the optical axis of the sensor 40 and the region is small (for example, smaller than a preset angle), which guarantees the collection quality of every image and improves the stitching result; the decision rule is sketched below.
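The warp-spread rule above can be stated in a few lines (a sketch; the threshold and names are hypothetical):

    # Sketch: a divided region needs several alignment regions when the
    # warping degrees of its regions to be measured differ too much.
    def needs_multiple_alignment_regions(warp_degrees, warp_threshold):
        return max(warp_degrees) - min(warp_degrees) > warp_threshold

If the check returns True, the divided region 210 would be split (e.g., into four sub-divided regions), each with its own alignment region.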
After all the collected images in each divided region 210 are obtained, the collected images of each divided region 210 are stitched to generate a plurality of segmented images, and the segmented images are then stitched to generate the detection image of the to-be-detected object 200. Compared with stitching all the collected images directly into the detection image, where every pair of adjacent collected images contributes stitching error and stitching processing so that the final accumulated error and processing amount are large, stitching the collected images of a single divided region 210 first keeps the error and processing accumulated when the adjacent segmented images are stitched small. The stitching error and the stitching processing amount are therefore significantly reduced, and both the accuracy and the efficiency of the finally stitched detection image are improved.
According to the detection method, the detection device 10, and the electronic device 100 described above, the to-be-detected piece 200 is divided into a plurality of divided regions 210. When the regions to be measured 211 in a divided region 210 are imaged, the placement angle of the to-be-detected piece 200 is adjusted according to the warping degree of one region to be measured 211 so that it is perpendicular to the optical axis of the sensor 40, and the sensor 40 then acquires the images of the regions to be measured 211 in the current divided region 210. Because the regions to be measured 211 in the same divided region 210 are close together, their images are all sharp once the optical axis of the sensor 40 is perpendicular to one of them, and the to-be-detected piece 200 only needs to be adjusted once per divided region 210; the small number of adjustments prevents the stitching accuracy of the acquired images from being degraded, so image sharpness and stitching accuracy are ensured at the same time. In addition, the acquired images in the same divided region 210 are stitched first to generate a plurality of segmented images, which are then stitched to generate the detection image of the to-be-detected piece 200. As analyzed above, this keeps the accumulated stitching error and the processing amount small, significantly reducing both and improving the accuracy and efficiency of the finally stitched detection image. The overall two-level flow is sketched below.
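The two-level procedure can be summarized as follows (a structural sketch only; stitch_pair stands for whichever pairwise stitching routine is used and is a hypothetical name):

    # Sketch: stitch within each divided region first, then across the
    # resulting segmented images, so error accumulates over far fewer seams.
    def stitch_sequence(images, stitch_pair):
        result = images[0]
        for nxt in images[1:]:
            result = stitch_pair(result, nxt)  # merge neighbours in acquisition order
        return result

    def build_detection_image(captured_by_region, stitch_pair):
        """captured_by_region: one list of captured images per divided region."""
        segmented = [stitch_sequence(imgs, stitch_pair) for imgs in captured_by_region]
        return stitch_sequence(segmented, stitch_pair)  # the detection image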
Referring to fig. 2, 3 and 5, in some embodiments, step 013 includes:
0131: determining a first overlapping area of any two adjacent collected images; and
0132: and according to the first overlapping area, splicing any two adjacent acquired images to generate a segmentation image.
In some embodiments, the first stitching module 13 is further configured to determine a first overlapping area of any two adjacent acquired images; and splicing any two adjacent collected images according to the first overlapping area to generate a segmentation image. That is, step 0131 and step 0132 may be performed by the first stitching module 13.
In some embodiments, the processor 20 is further configured to determine a first coincidence region of any two adjacent acquired images; and splicing any two adjacent collected images according to the first overlapping area to generate a segmentation image. That is, step 0131 and step 0132 may be executed by processor 20.
Specifically, when the collected images in the same divided region 210 are stitched, two adjacent collected images are obtained first, and image recognition is performed to determine their coincident portion (hereinafter referred to as the first coincident region). The two images are then stitched according to the first coincident region (for example, by overlaying them so that their first coincident regions coincide) into a new collected image. Image recognition is then performed on the new collected image and the next adjacent collected image to determine their first coincident region, and so on, until all the collected images in the divided region 210 are combined into one segmented image.
For example, referring to fig. 6 and 7, the collected images are arranged in order of their collection positions. The collected images P1 and P2 at the upper left corner of the divided region 210 are first stitched according to their first overlapping region X1 and combined into a new collected image E1; E1 and the adjacent collected image P3 (i.e., adjacent to P2) are then stitched according to their first overlapping region X1, and so on, so that all the collected images in the divided region 210 are stitched in sequence, for example row by row, to generate a segmented image, realizing accurate stitching of the collected images. In other embodiments, the first overlapping region X1 of every pair of adjacent collected images in the divided region 210 may be identified first, and all adjacent collected images can then be stitched quickly according to the identified regions to obtain the segmented image, improving stitching efficiency. One way to perform the overlap search is sketched below.
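One way to locate the first overlapping region (a sketch, not the patent's prescribed method; it assumes 8-bit grayscale images of equal height that overlap along a vertical seam, and uses OpenCV template matching):

    # Sketch of step 0131: find how many columns two horizontally adjacent
    # collected images share, by matching a leading strip of the right image
    # inside the left image with normalized cross-correlation.
    import cv2

    def find_overlap_width(left, right, probe_w=64):
        template = right[:, :probe_w]               # strip that should also lie in `left`
        scores = cv2.matchTemplate(left, template, cv2.TM_CCOEFF_NORMED)
        _minv, _maxv, _minloc, max_loc = cv2.minMaxLoc(scores)
        return left.shape[1] - max_loc[0]           # width of the shared region X1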
Referring to fig. 2, 3 and 8, in some embodiments, step 0132 includes:
01321: cutting a first overlapping area of one of any two adjacent collected images to generate a cut collected image;
01322: and splicing the cut collected images and the collected images which have the same first overlapping area and are not cut to generate a segmentation image.
In some embodiments, the first stitching module 13 is configured to crop a first overlapping area of one of any two adjacent captured images to generate a cropped captured image; and to stitch the cropped captured image with the uncropped captured image having the same first overlapping area to generate a segmented image. That is, steps 01321 and 01322 may be performed by the first stitching module 13.
In some embodiments, the processor 20 is further configured to crop a first overlapping region of one of any two adjacent captured images to generate a cropped captured image; and to stitch the cropped captured image with the uncropped captured image having the same first overlapping area to generate a segmented image. That is, steps 01321 and 01322 may be performed by the processor 20.
Specifically, referring again to fig. 6 and 7, when two adjacent captured images (e.g., P1 and P2) are stitched according to the first overlapping region X1, the first overlapping region X1 of one of them (e.g., P1) may be cropped to generate a cropped captured image, and the cropped image is then stitched with the adjacent uncropped image P2 to generate a stitched image E1 that retains only one copy of the first overlapping region X1. All adjacent captured images can be stitched accurately in this way to generate the segmented image T0. Image cropping thus allows any two adjacent captured images to be stitched quickly and accurately; see the sketch below.
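A sketch of the crop-then-concatenate step (hypothetical names; it builds on the find_overlap_width sketch above):

    # Sketch of steps 01321-01322: drop one image's copy of the overlap and
    # concatenate, so the overlap appears exactly once in the result.
    import numpy as np

    def stitch_with_crop(left, right, overlap_w):
        cropped = left[:, :left.shape[1] - overlap_w]  # remove left's copy of X1
        return np.hstack([cropped, right])             # right keeps the only copy

    # e.g. E1 = stitch_with_crop(P1, P2, find_overlap_width(P1, P2))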
Referring to fig. 2, 3 and 9, in some embodiments, step 013 further includes:
0133: identifying an image area to be detected corresponding to the area to be detected 211 in the collected image; and
0134: and splicing image areas to be measured in the plurality of acquired images to generate a segmentation image.
In some embodiments, the first stitching module 13 is configured to identify the image area to be tested in the captured image, which corresponds to the area to be tested 211; and to stitch the image areas to be tested in the plurality of captured images to generate a segmented image. That is, steps 0133 and 0134 may be performed by the first stitching module 13.
In some embodiments, the processor 20 is further configured to identify the image area to be measured in the captured image, which corresponds to the area to be measured 211; and to stitch the image areas to be measured in the plurality of captured images to generate a segmented image. That is, steps 0133 and 0134 may be performed by the processor 20.
Specifically, it can be understood that for a device under test 200 such as a wafer or a panel, the surface consists of many circuits forming minimum repeating units, and only the circuit portions matter during testing. Each minimum repeating unit of the device under test 200 can be regarded as a region to be measured 211, and the field of view of the sensor 40 may cover not exactly the region to be measured 211 but the region plus part of its periphery, such as a peripheral part not covered by circuits or a part of an adjacent region to be measured 211.
Therefore, referring to fig. 10, during stitching the captured image P0 may first undergo image recognition to determine the image region R0 corresponding to the region to be measured 211. After the image region R0 to be measured of every captured image P0 is obtained, the adjacent image regions R0 are stitched directly to obtain the segmented image T0. It is understood that different regions to be measured 211 do not substantially overlap, so there may be no need to identify the overlap of adjacent image regions R0, and the adjacent regions R0 can simply be stitched directly; the stitching is simpler and its accuracy higher. In addition, because the image region R0 generally lies in the central area of the captured image P0, its sharpness is higher than that of the edge area of the captured image P0, so the sharpness of the stitched segmented image T0 is also improved. A sketch of this direct tiling follows.
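A sketch of the direct tiling (hypothetical; it assumes R0 is a fixed, centred window of each captured image and that the images come in row-major acquisition order):

    # Sketch of steps 0133-0134: crop each image region to be measured R0
    # and tile the crops directly, with no overlap search.
    import numpy as np

    def crop_center(image, roi_h, roi_w):
        h, w = image.shape[:2]
        y0, x0 = (h - roi_h) // 2, (w - roi_w) // 2
        return image[y0:y0 + roi_h, x0:x0 + roi_w]

    def tile_measured_regions(images, cols, roi_h, roi_w):
        rois = [crop_center(img, roi_h, roi_w) for img in images]
        rows = [np.hstack(rois[i:i + cols]) for i in range(0, len(rois), cols)]
        return np.vstack(rows)  # the segmented image T0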
Referring to fig. 2, 3 and 11, in some embodiments, step 014 includes:
0141: identifying a second overlapping region of any two adjacent segmented images; and
0142: and splicing any two adjacent segmentation images according to the second superposition area to generate a detection image.
In some embodiments, the second stitching module 14 is further configured to identify a second overlapping region of any two adjacent segmented images; and splicing any two adjacent segmentation images according to the second overlapping area to generate a detection image. That is, steps 0141 and 0142 may be performed by the second stitching module 14.
In some embodiments, the processor 20 is further configured to identify a second overlapping region of any two adjacent segmented images; and splicing any two adjacent segmentation images according to the second overlapping area to generate a detection image. That is, steps 0141 and 0142 may be performed by processor 20.
Specifically, when the detection image of the device under test 200 is stitched, two adjacent segmented images are obtained first, and image recognition is performed to determine their coincident portion (hereinafter referred to as the second coincident region). The two segmented images are stitched according to the second coincident region (for example, by overlaying them so that their second coincident regions coincide) into a new segmented image; image recognition is then performed on the new segmented image and the adjacent segmented image to determine their second coincident region, and so on, until all the segmented images of the device under test 200 are combined into one detection image.
For example, referring to fig. 12 and 13, the segmented images T1 and T2 at the upper left corner of the device under test 200 are first stitched according to their second overlapping region X2 and combined into a new segmented image E2; E2 and the adjacent segmented image T3 (i.e., adjacent to T1) are then stitched according to their second overlapping region X2, and so on, so that all the segmented images of the device under test 200 are stitched in sequence, for example row by row, to accurately generate the detection image M0. In other embodiments, the second overlapping region of every pair of adjacent segmented images of the device under test 200 may be identified first, and all adjacent segmented images can then be stitched quickly according to the identified regions to obtain the detection image M0, improving stitching efficiency.
When two adjacent segmented images (e.g., T1 and T2) are stitched according to the second overlapping region, the second overlapping region X2 of one of them (e.g., T1) may be cropped to generate a cropped segmented image, which is then stitched with the adjacent uncropped segmented image T2 to generate a stitched image that retains only one copy of the second overlapping region X2. All adjacent segmented images are thus stitched together accurately to generate the detection image M0; image cropping allows any two adjacent segmented images to be stitched quickly and accurately. The second level can reuse the first-level primitives, as sketched below.
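Under the same assumptions as the earlier sketches, the second level can reuse those primitives unchanged (T1 and T2 stand for adjacent segmented images, as in figs. 12 and 13; all names are hypothetical):

    # Sketch: the second overlapping region X2 is found and cropped exactly
    # like the first, only on segmented images instead of collected images.
    overlap_w = find_overlap_width(T1, T2)
    E2 = stitch_with_crop(T1, T2, overlap_w)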
Referring to fig. 2, fig. 3 and fig. 14, in some embodiments, the detection method further includes:
015: judging whether the collected image is effective or not;
016: if yes, performing the step of stitching the collected images of the multiple regions to be measured 211 to generate a segmented image.
In certain embodiments, the detection device 10 further comprises a determination module 15. The judging module 15 is used for judging whether the collected image is valid; when the acquired images are valid, the first stitching module 13 is further configured to stitch the acquired images of the multiple regions 211 to be measured to generate the segmentation images. That is, step 015 may be performed by the determining module 15, and step 016 may be performed by the first stitching module 13.
In some embodiments, processor 20 is also configured to determine whether the captured image is valid; when the captured images are valid, the captured images of the plurality of regions-to-be-measured 211 are stitched to generate a segmentation image. That is, step 015 and step 016 may be executed by the processor 20.
Specifically, before the collected images are spliced, their validity needs to be judged first in order to ensure the accuracy of the spliced segmented images. It can be understood that abnormalities may occur during collection. For example, abnormal collection parameters of the sensor 40, or different collection parameters for different regions to be measured 211 (for example, different shooting distances), can make a collected image too large or too small; such an image is clearly invalid and would seriously affect the accuracy of the segmented image obtained by subsequent stitching. Or the sensor 40 may fail to collect the images of one or more regions to be measured 211, so that the number of collected images does not equal a preset number threshold (the threshold may be determined from the number of regions to be measured 211 and the number of regions collected by the sensor 40 each time); the collected images then cannot be spliced into a complete segmented image and are clearly invalid. Or the definition of the spectral signal corresponding to a collected image may not be greater than a preset definition threshold, indicating that the collection quality is poor, so the collected image is again invalid.
Therefore, judging whether the collected images are valid may include judging whether the size of each collected image is within the preset size range, whether the number of collected images equals the preset number threshold, and/or whether the definition of the spectral signal corresponding to each collected image is greater than the preset definition threshold. In the embodiment of the present application, in order to make the validity check as strict as possible and thus maximize the accuracy of the segmented images, the collected images are determined to be valid only when the sizes are within the preset size range, the number equals the preset number threshold, and the definition of the spectral signals is greater than the preset definition threshold. When the collected images are invalid, the images of the to-be-detected piece 200 can be collected again until they are valid. The combined check is sketched below.
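The three conditions combine into a single validity check (a sketch with hypothetical names; the patent leaves the concrete thresholds as preset values):

    # Sketch of steps 015-016: the collected images are valid only if every
    # size is in the preset range, the count equals the preset threshold,
    # and every spectral-signal definition exceeds the preset threshold.
    def collected_images_valid(images, definitions, expected_count,
                               size_min, size_max, definition_min):
        if len(images) != expected_count:
            return False                      # images missing or extra
        for img, d in zip(images, definitions):
            h, w = img.shape[:2]
            if not (size_min[0] <= h <= size_max[0] and size_min[1] <= w <= size_max[1]):
                return False                  # abnormal collection parameters
            if d <= definition_min:
                return False                  # poor collection quality
        return True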
Referring to fig. 15, an embodiment of the present application provides one or more non-transitory computer-readable storage media 300 containing a computer program 302. When the computer program 302 is executed by one or more processors 20, the processors 20 are caused to perform any of the detection methods described above.
For example, referring to fig. 1-3, the computer program 302, when executed by the one or more processors 20, causes the processors 20 to perform the steps of:
011: according to the warping degree of any region 211 to be measured in the divided regions 210 of the piece 200 to be measured, adjusting the relative angle between the piece 200 to be measured and the sensor so that the region 211 to be measured is perpendicular to the optical axis of the sensor, wherein the piece 200 to be measured comprises a plurality of divided regions 210, and the divided regions 210 comprise a plurality of regions 211 to be measured;
012: acquiring an acquired image of each region to be measured 211 in the divided region 210 by the sensor 40;
013: splicing the collected images of the multiple regions to be measured 211 to generate a segmented image; and
014: the plurality of segmented images are stitched to generate a test image of the dut 200. .
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the various embodiments or examples and features of the various embodiments or examples described in this specification can be combined and combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A method of detection, comprising:
adjusting the relative angle between the to-be-detected part and the sensor according to the warping degree of any to-be-detected region in a dividing region of the to-be-detected part, so that the to-be-detected region is perpendicular to the optical axis of the sensor, wherein the to-be-detected part comprises a plurality of dividing regions, and each dividing region comprises a plurality of to-be-detected regions;
acquiring an acquired image of each to-be-detected region in the segmentation region through the sensor;
splicing the acquired images of the plurality of regions to be detected to generate a segmentation image; and
and splicing the plurality of segmented images to generate a detection image of the piece to be detected so as to detect the piece to be detected according to the detection image.
2. The inspection method of claim 1, wherein said stitching the captured images of the plurality of regions under test to generate a segmented image comprises:
determining a first overlapping area of any two adjacent acquired images; and
and according to the first overlapping area, splicing any two adjacent acquired images to generate the segmentation image.
3. The detection method according to claim 2, wherein the stitching any two adjacent acquired images according to the first coincidence region to generate the segmentation image comprises:
cropping the first overlapping area of one of any two adjacent acquired images to generate a cropped acquired image;
and splicing the cropped acquired image and the acquired image which has the same first overlapping area and is not cropped, to generate the segmentation image.
4. The inspection method of claim 1, wherein the stitching the plurality of segmented images to generate an inspection image of the object to be inspected to inspect the object to be inspected according to the inspection image comprises:
identifying a second overlapping region of any two adjacent segmented images; and
and splicing any two adjacent segmentation images according to the second overlapping area to generate the detection image.
5. The detection method according to claim 1, further comprising:
judging whether the collected image is effective or not;
and if so, splicing the acquired images of the plurality of areas to be detected to generate a segmentation image.
6. The detection method according to claim 5, wherein the determining whether the captured image is valid comprises:
judging whether the size of the collected images is within a preset size range, whether the number of the collected images is equal to a preset number threshold, and/or whether the definition of the spectrum signals corresponding to the collected images is larger than a preset definition threshold.
7. The detection method according to claim 1, wherein a field of view of the sensor covers the region to be detected, and the stitching the acquired images of the plurality of regions to be detected to generate a segmented image comprises:
identifying an image area to be detected corresponding to the area to be detected in the collected image; and
and splicing the image areas to be detected in the plurality of acquired images to generate a segmentation image.
8. A detection device, comprising:
the adjusting module is used for adjusting the relative angle between the to-be-detected part and the sensor according to the warping degree of any to-be-detected region in a dividing region of the to-be-detected part, so that the to-be-detected region is perpendicular to the optical axis of the sensor, wherein the to-be-detected part comprises a plurality of dividing regions, and each dividing region comprises a plurality of to-be-detected regions;
the acquisition module is used for acquiring an acquired image of each to-be-detected area in the segmentation areas through the sensor;
the first splicing module is used for splicing the acquired images of the plurality of areas to be detected to generate a segmentation image;
and the second splicing module is used for splicing the plurality of segmented images to generate a detection image of the piece to be detected so as to detect the piece to be detected according to the detection image.
9. An electronic device, characterized by comprising a sensor, a motion platform and a processor, wherein the motion platform is used for adjusting the relative angle between a piece to be detected and the sensor according to the warping degree of any to-be-detected area in a divided area of the piece to be detected, so that the to-be-detected area is perpendicular to the optical axis of the sensor, the piece to be detected comprises a plurality of divided areas, and each divided area comprises a plurality of to-be-detected areas; the sensor is used for acquiring an acquired image of each to-be-detected area in the divided areas; the processor is used for splicing the acquired images of the plurality of to-be-detected areas to generate a segmentation image; and splicing the plurality of segmentation images to generate a detection image of the piece to be detected, so as to detect the piece to be detected according to the detection image.
10. A non-transitory computer-readable storage medium comprising a computer program which, when executed by a processor, causes the processor to perform the detection method of any one of claims 1-7.
CN202111667201.3A 2021-12-31 2021-12-31 Detection method and device, electronic equipment and storage medium Pending CN114299049A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111667201.3A CN114299049A (en) 2021-12-31 2021-12-31 Detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111667201.3A CN114299049A (en) 2021-12-31 2021-12-31 Detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114299049A (en) 2022-04-08

Family

ID=80973024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111667201.3A Pending CN114299049A (en) 2021-12-31 2021-12-31 Detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114299049A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115656190A (en) * 2022-12-13 2023-01-31 广州粤芯半导体技术有限公司 Defect scanning detection method and device, scanning equipment and readable storage medium


Similar Documents

Publication Publication Date Title
KR101604005B1 (en) Inspection method
JP5225297B2 (en) Method for recognizing array region in die formed on wafer, and setting method for such method
US9171364B2 (en) Wafer inspection using free-form care areas
WO2008008817A2 (en) Edge inspection and metrology
JP2013160629A (en) Defect inspection method, defect inspection apparatus, program, and output unit
TW201337839A (en) Segmentation for wafer inspection
JP2008128651A (en) Pattern alignment method, and pattern inspecting device and system
US20070274593A1 (en) Specified position identifying method and specified position measuring apparatus
CN105588840A (en) Electronic element positioning method and device
TW201519344A (en) Auto-focus system and methods for die-to-die inspection
EP3264181B1 (en) Substrate pre-alignment method
US9646374B2 (en) Line width error obtaining method, line width error obtaining apparatus, and inspection system
CN109580658B (en) Inspection method and inspection apparatus
CN106030283B (en) For examining the apparatus and method of semiconductor packages
CN108507484B (en) Bundled round steel multi-vision visual identifying system and method for counting
CN114299049A (en) Detection method and device, electronic equipment and storage medium
JP2014155063A (en) Chart for resolution measurement, resolution measurement method, positional adjustment method for camera module, and camera module manufacturing method
JP3993044B2 (en) Appearance inspection method, appearance inspection device
CN114693626A (en) Method and device for detecting chip surface defects and computer readable storage medium
CN114326078A (en) Microscope system and method for calibration checking
WO2004102171A1 (en) External view inspection method, master pattern used for the same, and external view inspection device having the master pattern
KR101367193B1 (en) Inspection method for horizontality and pressure of collet
CN116634134B (en) Imaging system calibration method and device, storage medium and electronic equipment
CN113063352B (en) Detection method and device, detection equipment and storage medium
CN111906043B (en) Pose detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination