CN117173255A - Calibration method and system for multiple camera positions and wafer bonding detection equipment - Google Patents


Info

Publication number
CN117173255A
CN117173255A (application CN202311259716.9A)
Authority
CN
China
Prior art keywords
gradient
edge
wafer
camera
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311259716.9A
Other languages
Chinese (zh)
Inventor
胡其层
赵捷
孙宝
王晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tojingjianke Haining Semiconductor Equipment Co ltd
Original Assignee
Tojingjianke Haining Semiconductor Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tojingjianke Haining Semiconductor Equipment Co ltd filed Critical Tojingjianke Haining Semiconductor Equipment Co ltd
Priority to CN202311259716.9A priority Critical patent/CN117173255A/en
Publication of CN117173255A publication Critical patent/CN117173255A/en
Pending legal-status Critical Current


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a calibration method and system for multiple camera positions, and wafer bonding detection equipment. The calibration method for multiple camera positions comprises the following steps: acquiring a plurality of edge images of the same wafer with a plurality of cameras; determining multiple groups of edge information of the wafer from each edge image, wherein the edge information comprises first coordinates of at least one edge point in each edge image; determining the feature distance between the center of the theoretical circle of the wafer and each edge point according to the second coordinate of the theoretical circle center and the first coordinate of each edge point; solving, via a gradient descent model, an optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer, so as to determine the relative position of each camera with respect to the theoretical circle center; and determining the relative positional relationship among the cameras according to the relative positions of the cameras and the theoretical circle center.

Description

Calibration method and system for multiple camera positions and wafer bonding detection equipment
Technical Field
The present invention relates to the field of chip manufacturing technologies, and in particular, to a method for calibrating a multi-camera position, a system for calibrating a multi-camera position, a computer readable storage medium, and a wafer bonding detection device.
Background
In the wafer bonding process, the wafers to be bonded are often large-format wafers. In processes such as fusion bonding, if a single camera is used to capture the entire outline of a wafer, the image acquired by that camera often cannot meet the precision requirement. Therefore, when the fusion bonding process requires acquiring the outline of a large-format wafer, multiple cameras with non-overlapping fields of view are usually needed to jointly image the wafer edge so that the precision requirement for the acquired wafer image is met. Whether the acquired wafer edge image is accurate affects the accuracy of the subsequent bonding process, and the edge position represented by the acquired image is closely related to the relative positions among the multiple cameras. Therefore, calibrating the relative positional relationship of the multiple cameras is critical to the wafer alignment process of the subsequent bonding process.
In the prior art, the relative positions of multiple cameras are typically determined with calibration plates. However, the accuracy and size of existing calibration plates cannot meet the accuracy requirements of relative position calibration between the cameras.
To overcome the above drawbacks of the prior art, there is a need in the art for a multi-camera position calibration technique that can calibrate the relative positions of multiple cameras without a high-precision, large-size calibration plate, thereby improving the accuracy of the wafer bonding inspection equipment and of the wafer bonding process and achieving good wafer bonding quality.
Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In order to overcome the above-mentioned drawbacks of the prior art, the present invention provides a calibration method for multiple camera positions, a calibration system for multiple camera positions, a computer readable storage medium, and a wafer bonding inspection apparatus, which can achieve calibration of the relative positions of multiple cameras without a high-precision and large-size calibration board, thereby improving the accuracy of the wafer bonding inspection apparatus and the accuracy of the wafer bonding process, and achieving good wafer bonding quality.
Specifically, the calibration method for multiple camera positions provided according to the first aspect of the present invention includes the following steps: acquiring a plurality of edge images of the same wafer by using a plurality of cameras; respectively determining multiple groups of edge information of the wafer according to each edge image, wherein the edge information comprises first coordinates of at least one edge point in each edge image; respectively determining the characteristic distance between the center of the theoretical circle and each edge point according to the second coordinate of the center of the theoretical circle of the wafer and the first coordinate of each edge point; respectively solving an optimization result which minimizes the difference between the characteristic distance of each edge image and the radius of the wafer through a gradient descent model so as to respectively determine the relative positions of each camera and the center of the theoretical circle; and determining the relative position relation among the cameras according to the relative positions of the cameras and the center of the theoretical circle.
Preferably, in an embodiment of the present invention, the step of acquiring a plurality of edge images of the same wafer with a plurality of cameras includes: correcting the zoom factor and/or the placement angle of at least one camera; and photographing the same wafer through each corrected camera so as to acquire a plurality of edge images with the same zoom factor and the same offset angle.
Preferably, in an embodiment of the present invention, the step of determining multiple sets of edge information of the wafer according to each edge image includes: performing Gaussian smoothing filtering on the acquired edge image to eliminate noise in the edge image; calculating the gradient magnitude and gradient direction of the values of all pixel points in the edge image for eliminating noise; and screening the pixel points with the maximum gradient value along the gradient direction according to the gradient magnitude and the gradient direction of the values of the pixel points by adopting a non-maximum value inhibition method so as to determine the edge information of the wafer.
Preferably, in an embodiment of the present invention, the first coordinate is located in an image coordinate system and the second coordinate is located in a world coordinate system. The step of determining the characteristic distance between the center of the theoretical circle and each edge point according to the second coordinate of the center of the theoretical circle of the wafer and the first coordinate of each edge point comprises the following steps: according to the relative position relation between the world coordinate system and each image coordinate system, determining a third coordinate of at least one edge point in each edge image in the world coordinate system; and respectively determining the characteristic distance between the center of the theoretical circle and each edge point according to the second coordinate and each third coordinate.
Preferably, in an embodiment of the present invention, the step of separately solving, via a gradient descent model, an optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer, so as to separately determine the relative positional relationship between each camera and the center of the theoretical circle, includes: constructing a cost function of the difference between the feature distance and the radius of the wafer:
loss = (sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2) - r)^2,
wherein (x, y) are the coordinates of the camera in the world coordinate system, (E_x, E_y) is the first coordinate of the edge point in the image coordinate system, (c_x, c_y) is the second coordinate of the center of the theoretical circle in the world coordinate system, and r is the radius of the wafer; taking the first-order partial derivatives of the cost function to determine the gradient d_x of the difference along the X direction and the gradient d_y of the difference along the Y direction:
d_x = 2 * (sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2) - r) * (x + E_x - c_x) / sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2),
d_y = 2 * (sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2) - r) * (y + E_y - c_y) / sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2);
and importing the gradient d_x and the gradient d_y into the gradient descent model to solve the optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer.
Preferably, in an embodiment of the present invention, the gradient descent model involves a learning rate α. The step of importing the gradient d_x and the gradient d_y into the gradient descent model to solve the optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer includes: determining the learning rate α of the gradient descent model; iteratively optimizing the X coordinate of the camera in the world coordinate system according to the gradient d_x and the learning rate α: X' = X - α * d_x; and iteratively optimizing the Y coordinate of the camera in the world coordinate system according to the gradient d_y and the learning rate α: Y' = Y - α * d_y.
Preferably, in an embodiment of the present invention, the step of determining the learning rate α of the gradient descent model includes: initializing a variable s = 0 and defining a stability parameter ε = 10^-10; determining the learning rate α of the current round according to the variable s, the stability parameter ε and a preset learning rate α_0:
α = α_0 / (sqrt(s) + ε);
and summing the squares of the gradient d_x and the gradient d_y in the gradient descent model and accumulating the sum into the variable s to determine the learning rate α' of the next round.
Preferably, in an embodiment of the present invention, the gradient descent model involves a learning rate α and a momentum value β. The step of importing the gradient d_x and the gradient d_y into the gradient descent model to solve the optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer further includes: determining the momentum velocities v_dx and v_dy of the current round according to the gradient d_x, the gradient d_y, the momentum value β and the initial momentum velocities v_dx0 and v_dy0: v_dx = β * v_dx0 + (1 - β) * d_x, v_dy = β * v_dy0 + (1 - β) * d_y; iteratively optimizing the X coordinate of the camera in the world coordinate system according to the learning rate α and the momentum velocity v_dx: X' = X - α * v_dx; iteratively optimizing the Y coordinate of the camera in the world coordinate system according to the learning rate α and the momentum velocity v_dy: Y' = Y - α * v_dy; and determining the momentum velocities v_dx', v_dy' of the next round according to the gradient d_x, the gradient d_y, the momentum value β and the momentum velocities v_dx and v_dy of the current round: v_dx' = β * v_dx + (1 - β) * d_x, v_dy' = β * v_dy + (1 - β) * d_y.
In addition, the calibration system for the multi-camera position provided by the second aspect of the invention comprises a memory and a processor. The memory has stored thereon computer instructions. The processor is connected to the memory and configured to execute computer instructions stored on the memory to implement the multi-camera position calibration method provided in any one of the embodiments described above.
Further, the above-described computer-readable storage medium according to the third aspect of the present invention has stored thereon computer instructions. The computer instructions, when executed by the processor, implement the multi-camera position calibration method provided by any of the embodiments above.
In addition, the wafer bonding inspection apparatus according to the fourth aspect of the present invention includes a plurality of cameras having no overlapping fields of view, for jointly mapping the wafer edge, where the relative positions of the cameras are determined by the calibration method for the positions of the cameras according to any one of the embodiments.
Drawings
The above features and advantages of the present invention will be better understood after reading the detailed description of embodiments of the present disclosure in conjunction with the following drawings. In the drawings, the components are not necessarily to scale and components having similar related features or characteristics may have the same or similar reference numerals.
FIG. 1 illustrates a flow chart of a method of calibrating multiple camera positions provided in accordance with some embodiments of the invention;
FIGS. 2A-2B illustrate schematic diagrams for correcting the position of multiple cameras provided in accordance with some embodiments of the invention;
FIGS. 3A-3D illustrate edge images of the same wafer acquired by cameras provided in accordance with some embodiments of the present invention;
FIG. 4 illustrates a screening schematic of a non-maximum suppression method provided in accordance with some embodiments of the present invention;
FIG. 5 illustrates a schematic diagram of a method of calibrating multiple camera positions provided in accordance with some embodiments of the invention;
FIG. 6 illustrates a schematic diagram of a method of calibrating multiple camera positions provided in accordance with some embodiments of the invention; and
fig. 7 illustrates a schematic diagram of a gradient descent model provided in accordance with some embodiments of the present invention.
Reference numerals:
100: a calibration method of multiple camera positions;
s110 to S150: a step of;
21, 22, 23, 24: cameras;
31: edge;
41, 42, 43: pixel points;
O_0, O_1, O_2, O_3, O_4: positions; and
A: edge point.
Detailed Description
Further advantages and effects of the present invention will become apparent to those skilled in the art from the disclosure of the present specification, by describing the embodiments of the present invention with specific examples. While the description of the invention will be presented in connection with a preferred embodiment, it is not intended to limit the inventive features to that embodiment. Rather, the purpose of the invention described in connection with the embodiments is to cover other alternatives or modifications, which may be extended by the claims based on the invention. The following description contains many specific details for the purpose of providing a thorough understanding of the present invention. The invention may be practiced without these specific details. Furthermore, some specific details are omitted from the description in order to avoid obscuring the invention.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In addition, the terms "upper", "lower", "left", "right", "top", "bottom", "horizontal", "vertical" as used in the following description should be understood as referring to the orientation depicted in this paragraph and the associated drawings. This relative terminology is for convenience only and is not intended to be limiting of the invention as it is described in terms of the apparatus being manufactured or operated in a particular orientation.
It will be understood that, although the terms "first," "second," "third," etc. may be used herein to describe various elements, regions, layers and/or sections, these elements, regions, layers and/or sections should not be limited by these terms and these terms are merely used to distinguish between different elements, regions, layers and/or sections. Accordingly, a first component, region, layer, and/or section discussed below could be termed a second component, region, layer, and/or section without departing from some embodiments of the present invention.
As described above, whether the acquired wafer edge image is accurate bears on the accuracy of the subsequent bonding process, and the edge position represented by the acquired image is closely related to the relative positions between the multiple cameras. Thus, calibrating the relative positional relationship of the multiple cameras is critical to the wafer alignment process of the subsequent bonding process. In the prior art, the relative positions of multiple cameras are determined with a calibration plate; however, the precision and size of conventional calibration plates cannot meet the precision requirement of relative position calibration between the cameras.
In order to overcome the above-mentioned drawbacks of the prior art, the present invention provides a calibration method for multiple camera positions, a calibration system for multiple camera positions, a computer readable storage medium, and a wafer bonding inspection apparatus, which can achieve calibration of the relative positions of multiple cameras without a high-precision and large-size calibration board, thereby improving the accuracy of the wafer bonding inspection apparatus and the accuracy of the wafer bonding process, and achieving good wafer bonding quality.
In some non-limiting embodiments, the calibration method for multi-camera positions provided in the first aspect of the present invention may be implemented via the calibration system for multi-camera positions provided in the second aspect of the present invention. In particular, the calibration system for the multi-camera position may be configured with a memory and a processor. The memory includes, but is not limited to, the above-described computer-readable storage medium provided by the third aspect of the present invention, having stored thereon computer instructions. The processor is coupled to the memory and configured to execute computer instructions stored on the memory to implement the method for calibrating multiple camera positions provided in the first aspect of the present invention.
The working principle of the multi-camera position calibration system will be described below in connection with some embodiments of a calibration method for multi-camera positions. It will be appreciated by those skilled in the art that these examples of the calibration method for multiple camera positions are merely some non-limiting embodiments provided by the present invention, and are intended to clearly illustrate the main concepts of the present invention and to provide some embodiments for public implementation, not to limit the overall functionality or overall operation of the calibration system. Similarly, the calibration system is just a non-limiting embodiment provided by the present invention, and does not limit the execution main body and execution sequence of each step in the calibration methods.
Please refer to fig. 1 and fig. 2A-2B first. Fig. 1 illustrates a flow chart of a calibration method for multiple camera positions provided according to some embodiments of the invention. Fig. 2A-2B illustrate schematic diagrams for correcting the position of multiple cameras provided in accordance with some embodiments of the invention.
As shown in fig. 1, the calibration method 100 for multiple camera positions provided by the present invention includes:
step S110: multiple edge images of the same wafer are acquired with multiple cameras.
In some embodiments, when calibrating the relative positions of the multiple cameras, the invention may use the imaged wafer itself as the calibration feature of the calibration method for multiple camera positions. The wafer size is known here; for example, it may be a 12-inch wafer with a radius of 150 mm. If four cameras are used to image the wafer edge, each camera can obtain an edge image covering about 5-6 mm, with one pixel in the image corresponding to about 3 μm, so that the requirements of non-overlapping fields of view and high precision are met.
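As a rough consistency check of these figures (an illustrative sketch only; the assumed sensor width of about 2000 pixels is a hypothetical value not stated above):

    # Rough scale check: a ~6 mm per-camera field of view spread over an
    # assumed ~2000-pixel-wide sensor gives about 3 µm per pixel.
    field_of_view_mm = 6.0          # per-camera edge image width, from the text
    assumed_pixels_across = 2000    # hypothetical sensor resolution (assumption)
    print(field_of_view_mm * 1000.0 / assumed_pixels_across)  # -> 3.0 µm/pixel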
Further, as shown in fig. 2A, the original camera positions may be inconsistent in placement angle and zoom factor. For example, the placement angle of the camera 21 in the upper left corner of fig. 2A has a certain offset (offset angle θ) compared with the placement angles of the cameras 22, 23, 24. For another example, for the camera 22 in the upper right corner of fig. 2A, the zoom factor of the image actually acquired by the camera 22 (shown by the dotted lines) is larger than the zoom factor of the images acquired by the cameras 21, 23, 24 (the ideal zoom factor that the camera 22 should acquire is shown by the solid lines in the figure).
Therefore, the calibration system for the multi-camera position provided by the invention can correct the placement angle of the camera 21 in fig. 2A and correct the zoom magnification of the camera 22 in fig. 2A. Further, if the placement angle and the zoom factor of a certain camera are inconsistent with those of other cameras, the placement angle and the zoom factor of the camera can be corrected.
As shown in fig. 2B, after the correction of the zoom factor and/or the placement angle of the camera is completed, the calibration system may photograph the same wafer through the corrected cameras 21, 22, 23, 24 to acquire a plurality of edge images with the same zoom factor and the same offset angle, thereby reducing the deviation in the positioning result caused by differences in the zoom factor and placement angle of each camera.
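For illustration, a minimal Python sketch of an equivalent software correction is given below, assuming the offset angle θ and the relative zoom error of a camera have already been measured; in practice the correction may instead be applied to the camera itself. The function name and the use of OpenCV's similarity transform are assumptions, not part of the original disclosure:

    import cv2

    def correct_camera_image(img, offset_angle_deg, zoom_ratio):
        """Rotate by -offset_angle and rescale by 1/zoom_ratio so the image
        matches the orientation and zoom factor of the reference cameras."""
        h, w = img.shape[:2]
        center = (w / 2.0, h / 2.0)
        # Similarity transform: rotation about the image centre plus uniform scaling.
        m = cv2.getRotationMatrix2D(center, -offset_angle_deg, 1.0 / zoom_ratio)
        return cv2.warpAffine(img, m, (w, h))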
Referring to fig. 3A-3D, fig. 3A-3D illustrate edge images of the same wafer acquired by cameras according to some embodiments of the present invention.
As shown in fig. 3A to 3D, the present invention can perform image acquisition on the edge of the same wafer by using four corrected cameras, and extract a plurality of edge images of the same wafer as shown in fig. 3A to 3D. Here, each edge image carries at least one segment of the edge 31 of the wafer.
With continued reference to fig. 1, the calibration method 100 for multiple camera positions shown in fig. 1 further includes:
step S120: and respectively determining multiple groups of edge information of the wafer according to each edge image, wherein the edge information comprises first coordinates of at least one edge point in each edge image.
In a preferred embodiment, the calibration system may perform Gaussian smoothing filtering on the acquired edge image to eliminate noise in the edge image, and then calculate the gradient magnitude and gradient direction of the values of each pixel point in the noise-free edge image, so as to determine the multiple groups of edge information of the wafer respectively.
Specifically, the calibration system may employ the following edge detection operators to calculate the gradient:
wherein d_x represents the gradient in the X direction and d_y represents the gradient in the Y direction.
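For illustration, a minimal Python sketch of this gradient computation is given below. The exact edge detection operator is not reproduced in this text, so a Sobel operator is assumed here; the function name is illustrative:

    import cv2
    import numpy as np

    def image_gradient(edge_img):
        """Gaussian smoothing followed by a Sobel-like gradient (assumption:
        the operator matrices of the original are not shown in this text)."""
        smoothed = cv2.GaussianBlur(edge_img, (5, 5), 1.0)   # noise suppression
        dx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)  # gradient along X
        dy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)  # gradient along Y
        magnitude = np.hypot(dx, dy)
        direction = np.arctan2(dy, dx)                       # radians
        return magnitude, direction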
Please continue to refer to the plurality of edge images shown in fig. 3A-3D, wherein there are inner and outer edge lines in each edge image. For this purpose, the calibration system may use a non-maximum suppression method to obtain accurate edge information, so as to preserve the maximum value in the gradient direction and exclude the interference area.
Referring specifically to fig. 4, fig. 4 is a screening schematic diagram of a non-maximum suppression method according to some embodiments of the invention.
As shown in fig. 4, in the process of implementing the non-maximum suppression method, the calibration system may first screen the pixel point having the maximum gradient value along the gradient direction according to the gradient magnitude and the gradient direction of the value of each pixel point, so as to determine the edge information of the wafer.
Specifically, the arrow direction shown in fig. 4 is the gradient direction of the pixel point 41. By the non-maximum suppression method provided by this preferred embodiment, the present invention can compare the gradient value of the pixel 41 with the gradient values of the front and rear pixels 42, 43 in the gradient direction of the pixel 41. If the gradient value of pixel 41 is greater than the gradient values of pixel 42 and pixel 43, then the value of pixel 41 is preserved, otherwise the value of pixel 41 is set to 0. Thus, by continuously comparing, the calibration system can screen out the pixel point with the maximum gradient value along the gradient direction to locate the actual edge (as shown by the edge 31 in fig. 3), thereby determining the edge information and the edge data of the wafer.
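A minimal sketch of such a non-maximum suppression step is given below, using the gradient magnitude and direction computed above; quantizing the direction to the four neighbour axes is an implementation choice, not something specified in the text:

    import numpy as np

    def non_maximum_suppression(magnitude, direction):
        """Keep a pixel only if its gradient magnitude is not smaller than the
        two neighbours sampled along its gradient direction; otherwise set it
        to 0, as described above."""
        h, w = magnitude.shape
        out = np.zeros_like(magnitude)
        angle = (np.rad2deg(direction) + 180.0) % 180.0   # fold into [0, 180)
        offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                # Pick the neighbour axis closest to the gradient direction.
                a = min(offsets, key=lambda k: min(abs(angle[y, x] - k),
                                                   180.0 - abs(angle[y, x] - k)))
                dy_, dx_ = offsets[a]
                n1 = magnitude[y + dy_, x + dx_]
                n2 = magnitude[y - dy_, x - dx_]
                if magnitude[y, x] >= n1 and magnitude[y, x] >= n2:
                    out[y, x] = magnitude[y, x]   # local maximum: keep
        return out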
Further, after the edge information of the wafer is obtained, the calibration system may perform position matching on the edge information and the theoretical wafer at the preset position.
Referring specifically to fig. 5, fig. 5 is a schematic diagram illustrating a calibration method for multiple camera positions according to some embodiments of the present invention.
The white circle shown in fig. 5 is a theoretical circle, which is located in the world coordinate system and has a radius r. The wafer edge information obtained by positioning includes at least one edge point (white dot) in each edge image, and has a first coordinate located in a corresponding image coordinate system. In order to facilitate the mutual operation of the two, the calibration system may determine the third coordinate of at least one edge point in each edge image in the world coordinate system according to the relative position relationship between the world coordinate system and each image coordinate system, so as to convert each edge point in fig. 5 into the world coordinate system.
Referring specifically to fig. 6, fig. 6 is a schematic diagram illustrating a calibration method for multiple camera positions according to some embodiments of the present invention.
As shown in fig. 6, the images acquired by the cameras each have a corresponding image coordinate system, wherein the origin of the world coordinate system is located at position O_0 and the origins of the image coordinate systems are located at positions O_1, O_2, O_3 and O_4, respectively. In this way, during coordinate conversion, the calibration system can determine the third coordinate of at least one edge point in each edge image in the world coordinate system according to the relative positional relationship between the world coordinate system and each image coordinate system.
Specifically, taking the edge point A located in the first image coordinate system as an example, assume that the origin of the first image coordinate system is O_1, the edge point A is located at coordinates (x', y') in the first image coordinate system, and the origin O_1 of the first image coordinate system is located at coordinates (x_1, y_1) relative to the world coordinate system; then the third coordinate of the edge point A in the world coordinate system is (x_1 + x', y_1 + y').
And so on, the calibration system can determine the third coordinate of at least one edge point in each edge image in the world coordinate system one by one, and the description is omitted here.
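For illustration, a minimal sketch of this conversion, assuming the image axes are already aligned with the world axes after the correction described earlier (the function name is illustrative):

    def to_world(edge_points_img, cam_origin_world):
        """Convert edge points from a camera's image coordinate system to the
        world coordinate system by adding that camera's origin offset,
        following the example of edge point A: (x1 + x', y1 + y')."""
        x1, y1 = cam_origin_world
        return [(x1 + ex, y1 + ey) for (ex, ey) in edge_points_img]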
With continued reference to fig. 1, the calibration method 100 for multiple camera positions shown in fig. 1 further includes:
step S130: and respectively determining the characteristic distance d between the center of the theoretical circle and each edge point according to the second coordinate of the center of the theoretical circle of the wafer and the first coordinate of each edge point.
As described above, the wafer is a calibration feature of the calibration method of multiple camera positions, and the second coordinate of the theoretical circle center O thereof is located in the world coordinate system. Therefore, by converting the first coordinates of each edge point in each edge image into the third coordinates in the world coordinate system, the calibration system can efficiently determine the feature distance d between the theoretical circle center O and each edge point according to the positional relationship between the second coordinates of the theoretical circle center and the third coordinates of each edge point.
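A minimal sketch of this feature distance computation on the converted edge points (illustrative only):

    import math

    def feature_distances(edge_points_world, circle_center):
        """Euclidean distance from the theoretical circle centre O to each
        edge point expressed in world coordinates (the feature distance d)."""
        cx, cy = circle_center
        return [math.hypot(px - cx, py - cy) for (px, py) in edge_points_world]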
With continued reference to fig. 1, the calibration method 100 for multiple camera positions shown in fig. 1 further includes:
step S140: and respectively solving an optimization result for minimizing the difference between the characteristic distance of each edge image and the radius of the wafer through a gradient descent model so as to respectively determine the relative positions of each camera and the center of the theoretical circle.
In some reference examples of the invention, a technician can utilize a global search model to sequentially traverse pixel points from the origin of a world coordinate system according to the traversing sequence from left to right and from top to bottom until the whole image is traversed, and then sort the values of the obtained characteristic distances d to obtain the position of the minimum value of the characteristic distances d. However, in wafer bonding inspection, the accuracy requirements for image acquisition are high. Therefore, when the resolution of the image is too high, the traversal time of the global search model may be too long, which is not suitable for practical use.
In addition, in other reference examples of the present invention, the technician may utilize a jump search model to jump through a portion of pixels of the edge image according to a preset step size (e.g., 2 pixels), so as to effectively reduce the traversing range and increase the search speed. However, since most pixels are ignored, the searching precision of the jump searching model is generally low, so that the precision requirement of the wafer bonding process cannot be met, and the actual requirement is not met.
Therefore, the gradient descent model is adopted, so that the target position can be quickly positioned and the calculation time can be reduced while the precision requirement is met. Referring to fig. 7, fig. 7 illustrates a schematic diagram of a gradient descent model provided in accordance with some embodiments of the present invention.
As shown in fig. 7, in the process of searching for an optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer, the calibration system may input the edge information obtained in the above step into the gradient descent model, and descend along the gradient direction of the difference with the center of the theoretical circle as an initial point, and continuously iterate the optimization until an optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer is found.
Specifically, the calibration system may first construct a cost function of the difference between the feature distance d and the radius r of the wafer:
loss = (sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2) - r)^2,
wherein (x, y) are the coordinates of the camera in the world coordinate system, (E_x, E_y) is the first coordinate of the edge point in the image coordinate system, and (c_x, c_y) is the second coordinate of the theoretical circle center in the world coordinate system.
The calibration system may then take the first-order partial derivatives of the cost function to determine the gradient d_x of the difference along the X direction and the gradient d_y of the difference along the Y direction:
d_x = 2 * (sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2) - r) * (x + E_x - c_x) / sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2),
d_y = 2 * (sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2) - r) * (y + E_y - c_y) / sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2).
Still later, the calibration system may import the gradient d_x and the gradient d_y into the gradient descent model to solve for the optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer.
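For illustration, a minimal Python sketch of this cost function and its gradient is given below. The squared-difference form of the cost, summed over the edge points of one camera, is an assumption made here because the original formula images are not reproduced in this text:

    import numpy as np

    def loss_and_gradient(cam_xy, edge_points_img, circle_center, radius):
        """Cost of a camera position (x, y): squared difference between each
        edge point's feature distance and the wafer radius, summed over the
        edge points of one camera, with its analytic gradient (d_x, d_y)."""
        x, y = cam_xy
        cx, cy = circle_center
        ex = np.asarray([p[0] for p in edge_points_img])
        ey = np.asarray([p[1] for p in edge_points_img])
        ux = x + ex - cx                  # world-frame offsets from the centre
        uy = y + ey - cy
        d = np.sqrt(ux ** 2 + uy ** 2)    # feature distances
        res = d - radius                  # difference to be minimised
        loss = np.sum(res ** 2)
        d_x = np.sum(2.0 * res * ux / d)  # partial derivative of loss w.r.t. x
        d_y = np.sum(2.0 * res * uy / d)  # partial derivative of loss w.r.t. y
        return loss, d_x, d_y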
Further, in some embodiments, a preset or adaptively adjusted learning rate α may be involved in the gradient descent model. The calibration system may preferably iteratively optimize the X and Y coordinates of the camera in the world coordinate system according to the learning rate α and the gradients d_x, d_y:
X' = X - α * d_x,
Y' = Y - α * d_y.
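A minimal sketch of this plain gradient descent update, reusing the loss_and_gradient sketch above (the step size and iteration count are arbitrary illustrative choices):

    def fit_camera_position(x0, y0, edge_points_img, circle_center, radius,
                            alpha=1e-3, n_iter=1000):
        """Plain gradient descent on the camera position using the update
        X' = X - α*d_x, Y' = Y - α*d_y described above."""
        x, y = x0, y0
        for _ in range(n_iter):
            _, d_x, d_y = loss_and_gradient((x, y), edge_points_img,
                                            circle_center, radius)
            x, y = x - alpha * d_x, y - alpha * d_y
        return x, y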
Furthermore, considering that too large a learning rate α will cause the gradient descent model to skip past the optimal point, while too small a learning rate α will cause it to converge too slowly, the present invention also preferably introduces an AdaGrad adaptive learning rate algorithm to adaptively adjust the learning rate α at each iteration, so that parameters with a large partial derivative of the loss obtain a rapidly decreasing learning rate, while parameters with a small partial derivative obtain a slowly decreasing learning rate.
Specifically, based on the introduced AdaGrad adaptive learning rate algorithm, the calibration system may initialize a variable s = 0, define a stability parameter ε = 10^-10, and determine the learning rate α of the current round according to the variable s, the stability parameter ε and a preset learning rate α_0:
α = α_0 / (sqrt(s) + ε).
Then, the calibration system may iteratively optimize the X and Y coordinates of the camera in the world coordinate system according to the learning rate α of the current round as described above, sum the squares of the gradients d_x and d_y in the gradient descent model, and accumulate the sum into the variable s to determine the learning rate α' of the next round. By repeating this process, the invention can adjust the variable s and the resulting learning rate α according to the differences in parameters and gradients, thereby realizing adaptive adjustment of the learning rate α.
Furthermore, given that conventional gradient descent models are prone to falling into local optima, in some embodiments the present invention may also preferably introduce a momentum value β (typically 0.9) to facilitate training of the gradient descent model.
Specifically, in the process of iteratively optimizing the X and Y coordinates of the camera in the world coordinate system, the calibration system may first determine the momentum velocities v_dx and v_dy of the current round according to the gradient d_x, the gradient d_y, the momentum value β and the initial momentum velocities v_dx0 and v_dy0:
v_dx = β * v_dx0 + (1 - β) * d_x,
v_dy = β * v_dy0 + (1 - β) * d_y.
The calibration system may then iteratively optimize the X coordinate of the camera in the world coordinate system according to the learning rate α and the momentum velocity v_dx:
X' = X - α * v_dx,
and iteratively optimize the Y coordinate of the camera in the world coordinate system according to the learning rate α and the momentum velocity v_dy:
Y' = Y - α * v_dy.
Still later, the calibration system may determine the momentum velocities v_dx' and v_dy' of the next round according to the gradient d_x, the gradient d_y, the momentum value β and the momentum velocities v_dx and v_dy of the current round:
v_dx' = β * v_dx + (1 - β) * d_x,
v_dy' = β * v_dy + (1 - β) * d_y.
In this way, by weighting the next gradient optimization direction with the momentum on the basis of the historical gradient optimization direction, the method can effectively prevent the gradient descent model from falling into a local optimum, so that the optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer can be found more quickly and accurately.
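A minimal sketch of this momentum variant, reusing loss_and_gradient from above (initial momentum velocities set to zero as an illustrative choice):

    def fit_camera_position_momentum(x0, y0, edge_points_img, circle_center,
                                     radius, alpha=1e-3, beta=0.9, n_iter=1000):
        """Momentum variant: v_dx = β*v_dx_prev + (1-β)*d_x (likewise for Y),
        then X' = X - α*v_dx, Y' = Y - α*v_dy, as described above."""
        x, y = x0, y0
        v_dx, v_dy = 0.0, 0.0   # initial momentum velocities v_dx0, v_dy0
        for _ in range(n_iter):
            _, d_x, d_y = loss_and_gradient((x, y), edge_points_img,
                                            circle_center, radius)
            v_dx = beta * v_dx + (1.0 - beta) * d_x
            v_dy = beta * v_dy + (1.0 - beta) * d_y
            x, y = x - alpha * v_dx, y - alpha * v_dy
        return x, y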
With continued reference to fig. 1, the calibration method 100 for multiple camera positions shown in fig. 1 further includes:
step S150: and determining the relative position relation among the cameras according to the relative positions of the cameras and the center of the theoretical circle.
Specifically, according to the optimization results determined in the above steps, the invention can determine the relative position of each camera with respect to the center of the theoretical circle, and from these relative positions determine the relative positional relationship among the plurality of cameras. In this way, the relative positions among the plurality of cameras are calibrated without a high-precision, large-size calibration plate, thereby improving the accuracy of the wafer bonding inspection equipment.
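For illustration, a minimal sketch of deriving inter-camera relative positions from the per-camera optimization results; expressing every camera relative to the first one is an arbitrary illustrative convention:

    def relative_positions(camera_positions):
        """Given each camera's optimised position in the world frame (relative
        to the theoretical circle centre), express every camera relative to
        the first camera."""
        x_ref, y_ref = camera_positions[0]
        return [(x - x_ref, y - y_ref) for (x, y) in camera_positions]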
In addition, in some embodiments, a person skilled in the art may further construct the wafer bonding detection apparatus according to the fourth aspect of the present invention based on the calibration method and calibration system provided by the present invention and the calibrated relative positions between the plurality of cameras. Specifically, the wafer bonding inspection apparatus includes a plurality of cameras with non-overlapping fields of view for jointly imaging the wafer edge. Here, the relative positions of the plurality of cameras are determined by the calibration method for multi-camera positions provided in the first aspect of the present invention, so the accuracy is higher and the accuracy of the wafer bonding process can be improved.
In summary, the calibration method for multiple camera positions, the calibration system for multiple camera positions, the computer readable storage medium, and the wafer bonding detection device provided by the invention can determine the relative positions between each camera and the theoretical circle, and among the cameras themselves, by using a gradient descent model without a high-precision, large-size calibration plate, and can realize quick and effective positioning while ensuring positional accuracy and precision, thereby reducing computation time and improving the utilization of hardware resources.
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts from that shown and described herein or not shown and described herein, as would be understood and appreciated by those skilled in the art.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A calibration method for multi-camera positions, characterized by comprising the following steps:
acquiring a plurality of edge images of the same wafer by using a plurality of cameras;
respectively determining multiple groups of edge information of the wafer according to each edge image, wherein the edge information comprises first coordinates of at least one edge point in each edge image;
respectively determining the characteristic distance between the center of the theoretical circle and each edge point according to the second coordinate of the center of the theoretical circle of the wafer and the first coordinate of each edge point;
respectively solving an optimization result which minimizes the difference between the characteristic distance of each edge image and the radius of the wafer through a gradient descent model so as to respectively determine the relative positions of each camera and the center of the theoretical circle; and
and determining the relative position relation among the cameras according to the relative positions of the cameras and the center of the theoretical circle.
2. The method of calibrating according to claim 1, wherein the step of capturing a plurality of edge images of the same wafer using a plurality of cameras comprises:
correcting the zoom factor and/or the placement angle of at least one camera; and
photographing the same wafer through each corrected camera so as to acquire a plurality of edge images with the same zoom factor and the same offset angle.
3. The method of calibrating according to claim 1, wherein the step of determining the plurality of sets of edge information of the wafer from each of the edge images includes:
performing Gaussian smoothing filtering on the acquired edge image to eliminate noise in the edge image;
calculating the gradient magnitude and gradient direction of the values of all pixel points in the edge image for eliminating noise; and
and screening the pixel points with the maximum gradient value along the gradient direction according to the gradient magnitude and the gradient direction of the value of each pixel point by adopting a non-maximum value inhibition method so as to determine the edge information of the wafer.
4. The method of calibrating according to claim 1, wherein the first coordinates are located in an image coordinate system, the second coordinates are located in a world coordinate system, and the step of determining the feature distance between the center of the theoretical circle and each of the edge points based on the second coordinates of the center of the theoretical circle of the wafer and the first coordinates of each of the edge points, respectively, comprises:
according to the relative position relation between the world coordinate system and each image coordinate system, determining a third coordinate of at least one edge point in each edge image in the world coordinate system; and
and respectively determining the characteristic distance between the center of the theoretical circle and each edge point according to the second coordinate and each third coordinate.
5. The method according to claim 4, wherein the step of separately solving, via a gradient descent model, an optimization result that minimizes a difference between a feature distance of each of the edge images and a radius of the wafer to separately determine a relative positional relationship between each of the cameras and a center of the theoretical circle includes:
constructing a cost function of the difference between the feature distance and the radius of the wafer:
loss = (sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2) - r)^2,
wherein (x, y) are the coordinates of the camera in the world coordinate system, (E_x, E_y) is the first coordinate of the edge point in the image coordinate system, (c_x, c_y) is the second coordinate of the center of the theoretical circle in the world coordinate system, and r is the radius of the wafer;
taking the first-order partial derivatives of the cost function to determine the gradient d_x of the difference along the X direction and the gradient d_y of the difference along the Y direction:
d_x = 2 * (sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2) - r) * (x + E_x - c_x) / sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2),
d_y = 2 * (sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2) - r) * (y + E_y - c_y) / sqrt((x + E_x - c_x)^2 + (y + E_y - c_y)^2); and
importing the gradient d_x and the gradient d_y into the gradient descent model to solve the optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer.
6. The calibration method according to claim 5, wherein the gradient descent model involves a learning rate α, and the step of importing the gradient d_x and the gradient d_y into the gradient descent model to solve the optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer includes:
determining the learning rate α of the gradient descent model;
iteratively optimizing the X coordinate of the camera in the world coordinate system according to the gradient d_x and the learning rate α: X' = X - α * d_x; and
iteratively optimizing the Y coordinate of the camera in the world coordinate system according to the gradient d_y and the learning rate α: Y' = Y - α * d_y.
7. The calibration method of claim 6, wherein the step of determining the learning rate α of the gradient descent model comprises:
initializing a variable s = 0 and defining a stability parameter ε = 10^-10;
determining the learning rate α of the current round according to the variable s, the stability parameter ε and a preset learning rate α_0:
α = α_0 / (sqrt(s) + ε); and
summing the squares of the gradient d_x and the gradient d_y in the gradient descent model and accumulating the sum into the variable s to determine the learning rate α' of the next round.
8. The calibration method according to claim 5, wherein the gradient descent model involves a learning rate α and a momentum value β, and the step of importing the gradient d_x and the gradient d_y into the gradient descent model to solve the optimization result that minimizes the difference between the feature distance of each edge image and the radius of the wafer further includes:
determining the momentum velocities v_dx and v_dy of the current round according to the gradient d_x, the gradient d_y, the momentum value β and the initial momentum velocities v_dx0 and v_dy0: v_dx = β * v_dx0 + (1 - β) * d_x, v_dy = β * v_dy0 + (1 - β) * d_y;
iteratively optimizing the X coordinate of the camera in the world coordinate system according to the learning rate α and the momentum velocity v_dx: X' = X - α * v_dx;
iteratively optimizing the Y coordinate of the camera in the world coordinate system according to the learning rate α and the momentum velocity v_dy: Y' = Y - α * v_dy; and
determining the momentum velocities v_dx', v_dy' of the next round according to the gradient d_x, the gradient d_y, the momentum value β and the momentum velocities v_dx and v_dy of the current round: v_dx' = β * v_dx + (1 - β) * d_x, v_dy' = β * v_dy + (1 - β) * d_y.
9. A calibration system for a multi-camera position, comprising:
a memory having stored thereon computer instructions; and
a processor connected to the memory and configured to execute computer instructions stored on the memory to implement the method of calibrating multi-camera positions of any of claims 1-8.
10. A computer readable storage medium having stored thereon computer instructions, which when executed by a processor, implement a method of calibrating a multi-camera position according to any of claims 1-8.
11. A wafer bonding inspection apparatus comprising a plurality of cameras having non-overlapping fields of view for joint mapping of wafer edges, wherein the relative position of each camera is determined via a method of calibrating the positions of the cameras as defined in any one of claims 1 to 8.
CN202311259716.9A 2023-09-26 2023-09-26 Calibration method and system for multiple camera positions and wafer bonding detection equipment Pending CN117173255A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311259716.9A CN117173255A (en) 2023-09-26 2023-09-26 Calibration method and system for multiple camera positions and wafer bonding detection equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311259716.9A CN117173255A (en) 2023-09-26 2023-09-26 Calibration method and system for multiple camera positions and wafer bonding detection equipment

Publications (1)

Publication Number Publication Date
CN117173255A true CN117173255A (en) 2023-12-05

Family

ID=88935409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311259716.9A Pending CN117173255A (en) 2023-09-26 2023-09-26 Calibration method and system for multiple camera positions and wafer bonding detection equipment

Country Status (1)

Country Link
CN (1) CN117173255A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2976938A1 (en) * 2015-02-18 2016-08-25 Siemens Healthcare Diagnostics Inc. Image-based tube slot circle detection for a vision system
CN114882122A (en) * 2022-05-23 2022-08-09 佛山市南海区广工大数控装备协同创新研究院 Image local automatic calibration method and device and related equipment
CN116168072A (en) * 2023-01-12 2023-05-26 海克斯康软件技术(青岛)有限公司 Multi-camera large-size vision measurement method and system
CN116759358A (en) * 2023-08-17 2023-09-15 泓浒(苏州)半导体科技有限公司 Wafer edge alignment method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination