CN117422621A - Target object deflection correcting method, computer equipment and storage medium - Google Patents

Target object deflection correcting method, computer equipment and storage medium

Info

Publication number
CN117422621A
CN117422621A (application number CN202311315775.3A)
Authority
CN
China
Prior art keywords
target object
image
angle
target
deflection angle
Prior art date
Legal status
Pending
Application number
CN202311315775.3A
Other languages
Chinese (zh)
Inventor
Name not published at the inventor's request
Current Assignee
Wuxi Lido Technology Co ltd
Original Assignee
Wuxi Lido Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuxi Lido Technology Co ltd filed Critical Wuxi Lido Technology Co ltd
Priority to CN202311315775.3A priority Critical patent/CN117422621A/en
Publication of CN117422621A publication Critical patent/CN117422621A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/60 Rotation of a whole image or part thereof
    • G06T3/608 Skewing or deskewing, e.g. by two-pass or three-pass rotation
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Abstract

The embodiment of the application discloses a target object deflection correcting method, computer equipment and a storage medium. The method comprises the following steps: acquiring an initial image of a target object; determining a first deflection angle of the target object according to the initial image; performing a first correction on the target object according to the first deflection angle; acquiring at least two view images of the target object after the first correction; matching and comparing identification features in the at least two view images to determine a secondary deflection angle; and performing a secondary correction on the target object according to the secondary deflection angle. Compared with the prior art, the accuracy of deflection correction of the target object is improved through two rounds of angle identification and two rounds of correction.

Description

Target object deflection correcting method, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a target object deflection correcting method, a computer device, and a storage medium based on an image processing technology.
Background
The surface of a wafer contains a plurality of die and a pattern separating the die from each other in the longitudinal and transverse directions, i.e., the scribe lines, and a dicing machine is required to dice the wafer along a preset dicing direction to form individual chips (die). The ideal dicing result is that the kerf formed after dicing always stays within the scribe line boundary, and that the chipping and dicing bias caused by dicing are small enough.
FIG. 1 is a top view block diagram of a wafer dicing saw apparatus, in which 1 is an X-axis guide rail fixed to the dicing saw bed; 2 is the X-axis sliding table, which can move horizontally along the X-axis guide rail; 3 is a θ-axis turntable (the θ axis is a rotation axis in the XY plane), which is fixed on the X-axis sliding table and can rotate clockwise and counterclockwise around its rotation center; 4 is the wafer loaded on the θ-axis turntable; 5 is a Y-axis guide rail fixed on the dicing saw body; 6 is the Y-axis sliding table, which can translate back and forth along the Y-axis guide rail; 7 is a Z-axis sliding table for vertical translation; 8 is the lens and camera of the vision system, fixed on the Z-axis sliding table; 9 denotes the field of view of the vision system when imaging the wafer. As shown in fig. 1, the vision system can be positioned directly above a specific area of the wafer through X-axis, Y-axis and θ-axis movements, i.e., 9 is the required target field of view; focusing during shooting is achieved through movement of the Z axis.
The wafer may have certain errors when actually loaded; one of them is an angle error in the XY plane, i.e., after loading, the transverse scribe lines of the wafer are not parallel to the X axis of the dicing saw but have a deflection angle. As the deflection angle increases, the kerf formed after dicing more easily drifts away from the center of the scribe line until it exceeds the scribe line boundary, resulting in chip size deviation and chip rejection. Therefore, θ-axis alignment is required before dicing for each newly loaded wafer.
Fig. 2 is a schematic diagram of a conventional θ-axis alignment method, in which 4 is a wafer with vertical and horizontal scribe lines. First, a left field of view 10 of the wafer is imaged by moving the X-axis and Y-axis sliding tables, and the identification feature 11 at the center of the field of view is taken as a template image; then the X-axis sliding table moves a preset distance δ_x and a right field of view 12 on the right side of the wafer is imaged, and a feature 13 matching the identification feature 11 is searched for in the right field of view 12; then the offsets Δ_x and Δ_y of the feature 13 in the X and Y directions relative to the center of the right field of view 12 are calculated; then the deflection angle θ_e = tan⁻¹(Δ_y / (Δ_x + δ_x)) is calculated; finally, the θ-axis turntable 3 rotates by -θ_e (clockwise) to align the wafer.
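Purely for illustration (this sketch and its numbers are assumptions and are not part of the original disclosure), the conventional calculation above can be written as follows in Python:

```python
import math

def conventional_theta_alignment(delta_x_offset, delta_y_offset, preset_dx):
    """Sketch of the conventional theta-axis alignment calculation.

    delta_x_offset, delta_y_offset: offsets of the matched feature 13
        relative to the center of the right field of view 12 (same units
        as preset_dx, e.g. micrometres).
    preset_dx: preset X-axis travel between the two fields of view.
    Returns (theta_e_deg, correction_deg): the identified deflection angle
    and the rotation to apply to the theta-axis turntable (negative, i.e.
    clockwise, to cancel the deflection).
    """
    theta_e = math.atan2(delta_y_offset, preset_dx + delta_x_offset)
    theta_e_deg = math.degrees(theta_e)
    return theta_e_deg, -theta_e_deg

# Hypothetical numbers: feature found 12 um right of and 35 um above the
# field-of-view center after a 20 mm (20000 um) X-axis travel.
theta_e_deg, correction_deg = conventional_theta_alignment(12.0, 35.0, 20000.0)
print(f"deflection {theta_e_deg:.4f} deg, rotate turntable by {correction_deg:.4f} deg")
```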
The conventional θ -axis method has at least the following problems.
When the deflection angle of the loaded wafer is large, the vision system images the left field of view 10 on the left and takes the template containing the identification feature 11; as shown in fig. 3, after moving a distance δ_x, the vision system images the right field of view 12 on the right, but the feature 13 matching the identification feature 11 has already moved beyond the right field of view 12, which may cause a template matching error or failure. If the field of view is moved around the right field of view 12 to search, additional resources and time are consumed, and because there is no predetermined search direction, the accuracy of deflection angle identification and alignment is reduced.
Similarly, in fig. 4, the vision system images the left field of view 10 on the left and takes a template containing the identification feature 11; after moving a distance δ_x, the vision system images the right field of view 12 on the right. Two features 13 identical to the identification feature 11 are matched in the right field of view 12. Because of the large deflection angle, the identification feature 11 and the matched feature 13 are not on the same row of die of the wafer, i.e., the feature comparison result is off by a row; in this case, even if the field of view is moved around the right field of view 12 to search, it is difficult to ensure that no row error occurs.
The above problems also occur in similar scenes with linearly arranged repeating units (e.g., image-based deflection alignment of crop rows). Therefore, a new deflection angle correcting method is urgently needed to improve the accuracy of deflection angle recognition and correction.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a target object deflection correcting method, computer equipment and a storage medium based on an image processing technology.
In order to solve one or more of the above technical problems, the technical solution adopted in the present application is:
in a first aspect, there is provided a target object yaw alignment method, the method comprising:
acquiring an initial image of a target object;
determining a first deflection angle of the target object according to the initial image;
performing a first correction on the target object according to the first deflection angle;
acquiring at least two view images of the target object after the first alignment;
matching and comparing the identification features in the at least two view images to determine a secondary deflection angle;
and performing secondary correction on the target object according to the secondary deflection angle.
Optionally, the acquiring at least two view images of the target object after the first alignment includes:
acquiring a first visual field image of the target object after alignment at a first position;
moving the target object along the x-axis by a preset distance δ_x to a second position;
acquiring a second field of view image of the target object at the second location;
the matching and comparing the identification features in the at least two view images to determine a secondary deflection angle comprises:
acquiring identification features in the first view image;
searching the second view image for target features matching the identification features;
determining the offset of the target feature in the x axis and the y axis according to the current position of the target feature;
and determining the secondary deflection angle according to the offsets and the preset distance δ_x.
Optionally, the determining the offset of the target feature in the x-axis and the y-axis according to the current position of the target feature includes:
acquiring the offset Δ_x of the target feature in the x-axis and the offset Δ_y in the y-axis according to the current position of the target feature;
the determining the secondary deflection angle according to the offsets and the preset distance δ_x includes:
calculating the secondary deflection angle through the formula θ_e = tan⁻¹(Δ_y / (Δ_x + δ_x)).
Optionally, the searching the second view image for the target feature matching the identification feature includes:
identifying the identification features from the second view image to obtain at least one feature area;
calculating the matching degree of each characteristic region and the identification characteristic;
if only one characteristic region with the matching degree larger than the preset matching degree exists, taking the characteristic region with the matching degree larger than the preset matching degree as the target characteristic;
if at least two feature regions have a matching degree larger than the preset matching degree, acquiring the offset Δ_y in the y-axis of each such feature region, and taking the feature region with the smallest offset Δ_y as the target feature.
Optionally, the initial image is acquired at the first location.
Optionally, the determining the first deflection angle of the target object according to the initial image includes:
acquiring the initial image, and processing the initial image to obtain a gradient amplitude matrix;
carrying out Radon transformation on the gradient amplitude matrix to obtain a Radon transformation matrix, wherein the Radon transformation matrix takes a transformation angle as a first dimension and takes the distance from an integral path of the Radon transformation to a coordinate origin as a second dimension;
summarizing the Radon transformation matrix according to the second dimension to generate a preliminary vector corresponding to the transformation angle;
and identifying peak points according to areas under curves corresponding to the preliminary vectors so as to calculate the first deflection angle of the target object.
Optionally, the method further comprises:
performing enhancement treatment on elements in the Radon transformation matrix to obtain an enhancement matrix;
the summarizing the Radon transformation matrix according to the second dimension to generate a preliminary vector corresponding to the transformation angle comprises:
and summarizing the enhancement matrix according to the second dimension to generate a preliminary vector corresponding to the transformation angle.
Optionally, the performing enhancement processing on the elements in the Radon transform matrix to obtain an enhancement matrix includes:
performing exponentiation with an index of n on elements in the Radon transformation matrix to obtain the enhancement matrix; n >1.
Optionally, the summarizing the enhancement matrix according to the second dimension to generate a preliminary vector corresponding to the transformation angle includes:
and carrying out summation operation on the enhancement matrix according to the second dimension to generate a preliminary vector corresponding to the transformation angle.
Optionally, the method further comprises:
calculating the gradient of the preliminary vector, and filtering the gradient to form a target vector containing a main angle;
the step of identifying peak points according to the area under the curve corresponding to the preliminary vector to calculate the first deflection angle of the target object comprises the following steps:
and identifying peak points according to the area under the curve corresponding to the target vector so as to calculate the first deflection angle of the target object.
Optionally, the method further comprises:
performing interpolation operation on the target vector;
the identifying a peak point according to the area under the curve corresponding to the target vector to calculate the first deflection angle of the target object includes:
And identifying peak points according to the area under the curve corresponding to the target vector after interpolation operation so as to calculate the deflection angle of the image.
Optionally, the identifying the peak point according to the area under the curve corresponding to the interpolated target vector to calculate the deflection angle of the image includes:
calculating the area under the curve of the left side and the right side corresponding to each point of the target vector after interpolation operation;
identifying the point with the smallest area difference under the curves at the left side and the right side as the peak point;
and calculating the deflection angle of the image according to the transformation angle corresponding to the peak point.
Optionally, the processing the initial image to obtain a gradient magnitude matrix includes:
gray processing is carried out on the initial image to obtain a gray image;
converting the gray scale image into a double-precision gray scale image;
performing gradient operation on the double-precision gray level image to obtain the gradient amplitude matrix;
optionally, a central difference method is adopted to perform gradient operation on the double-precision gray level image to obtain the gradient amplitude matrix.
Optionally, performing Radon transformation on the gradient magnitude matrix to obtain a Radon transformation matrix includes:
performing Radon transformation on the gradient amplitude matrix within the transformation angle interval [θ_S, θ_E], with δ_θ as the angle step, to obtain the Radon transformation matrix, wherein θ_S is 0° and θ_E is 180° - δ_θ.
Optionally, the target object is provided with a plurality of identical sub-objects.
In a second aspect, there is also provided a dicing saw, the dicing saw comprising a dicing module, a processor, a memory electrically connected with the processor, a driving bearing module and a vision system, the memory being configured to store program instructions, the processor reading the program instructions to execute the target object deflection alignment method as above, wherein:
the driving bearing module is used for bearing a wafer to be diced, and the wafer to be diced is used as a target object;
the vision system photographs the wafer to be diced to obtain an initial image and a visual field image;
and the processor executes the program instruction and controls the driving bearing module to perform primary alignment and secondary alignment on the wafer to be diced.
In a third aspect, there is also provided a target object yaw alignment apparatus, including: an initial image acquisition unit configured to acquire an initial image of a target object.
And the first deflection angle determining unit is used for determining the first deflection angle of the target object according to the initial image.
And the first correcting unit is used for correcting the target object for the first time according to the first deflection angle.
The secondary deflection angle determining unit is used for obtaining at least two visual field images of the target object after the primary alignment and carrying out matching comparison of identification features so as to determine a secondary deflection angle.
And the secondary centering unit is used for secondarily centering the target object according to the secondary deflection angle.
In a fourth aspect, there is also provided a computer device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, the computer program, when executed by the processor, implementing the yaw alignment method of a target object as described above.
In a fifth aspect, there is also provided a computer-readable storage medium having stored therein a computer program which, when executed, implements the yaw alignment method of the target object described above.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
compared with the prior art, for deflection correction of the target object, an initial image of the target object is acquired in advance and a rough identification of the deflection angle of the target object is carried out before the feature comparison of multi-field-of-view images is used. The first alignment is performed based on this rough recognition, so that the remaining deflection angle of the target object is not too large. Then, the features of the multi-field-of-view images of the aligned object are compared to determine the deflection angle, which avoids the problems of matching failure and row errors caused by a large deflection angle, and improves the success rate and accuracy of deflection angle identification and alignment.
Furthermore, the invention uses an improved Radon transformation to roughly identify the deflection angle, and this identification is not affected by the deflection angle or by the size of the imaging field of view, ensuring the accuracy of the identification. The invention uses the Radon transformation to identify the deflection angle of the image and identifies the peak point by means of equal-area segmentation, which avoids the problem of inaccurate identification at certain deflection angles caused by simply taking the point with the maximum Radon transformation value as the peak, and provides a good basis for the subsequent feature comparison of multi-field-of-view images. In addition, after the Radon transformation, the matrix elements are summarized along the second dimension, so that all straight-line structures of the original image are comprehensively considered instead of only the strongest straight-line structure being selected, which improves the recognition accuracy for complex and noisy images. Together, these improvements increase the success rate and accuracy of target object deflection angle identification and alignment.
Of course, not all of the above-described advantages need be achieved simultaneously in practicing any of the aspects of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a top view block diagram of a wafer dicing saw apparatus;
FIG. 2 is a schematic diagram of a conventional wafer alignment method;
FIG. 3 is a prior art multi-view image contrast failure schematic;
FIG. 4 is a prior art multi-view image contrast error line schematic;
FIG. 5 is a flow chart of a yaw alignment method of the present application;
FIG. 6 is a flow chart of a first yaw angle identification in the yaw alignment method of the present application;
FIG. 7 is a block diagram of a dicing machine set of the present application;
fig. 8 is a block diagram of a computer device according to the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
As described in the background art, for a target object that has a plurality of repeating units with a linear texture arrangement between the units, multi-field-of-view image comparison can be performed to identify the image deflection angle. However, with the identification methods in the prior art, a large image deflection angle causes the field-of-view matching to fail or to match the wrong row, so the accuracy of identifying the image deflection angle is low.
The purpose of this application is to provide a new target object deflection correcting method, in which the approximate deflection angle of the target object is determined by a single-image angle identification method and a pre-alignment is performed, limiting the deflection angle of the target object to a smaller range. Then, multi-field-of-view image comparison is performed on the pre-aligned target object to determine a secondary deflection angle, which avoids the problem of low recognition accuracy of multi-field-of-view image comparison caused by a large image deflection angle.
The following will further illustrate the solution of the present application by means of specific examples:
example 1
An embodiment of the present application provides a target object yaw alignment method, as shown in fig. 5, including the following steps:
s51, acquiring an initial image of the target object.
S52, determining the first deflection angle of the target object according to the initial image.
Various deflection angle identification methods for single images have been disclosed in the prior art, such as the Hough transform, the Fourier-Mellin transform, and the Radon transform, all of which can accomplish angle identification on a single image. Angle identification based on a single image requires no field-of-view comparison, so the problems of field-of-view matching failure or mismatching caused by a large deflection angle are avoided.
However, single-image angle recognition is affected by the clarity of the dominant straight-line structure in the image and by the accuracy of the angle recognition algorithm, so what is recognized is a rough deflection angle.
S53, performing primary correction on the target object according to the primary deflection angle.
After the target object is corrected for the first time according to the first deflection angle, the deflection angle of the target object is largely corrected and limited to a smaller range. At this point, when combined with the multi-field-of-view image comparison, the possibility of recognition errors caused by a large deflection angle is greatly reduced.
S54, acquiring at least two view images of the target object after the first alignment.
S55, matching and comparing the identification features in at least two view images, and determining a secondary deflection angle.
For step S55, in a specific embodiment of the present application, the following sub-steps may be included:
firstly, acquiring a first visual field image of the target object after alignment at a first position;
moving the target object along the x-axis by a preset distance δ_x, and obtaining a second visual field image of the target object at a second position;
acquiring identification features in the first view image;
Searching the second view image for target features matching the identification features;
determining the offset Δ_x of the target feature in the x-axis and the offset Δ_y in the y-axis according to the current position of the target feature;
and determining the secondary deflection angle according to the offsets and the preset distance δ_x. Specifically, the secondary deflection angle θ_e can be calculated using the formula θ_e = tan⁻¹(Δ_y / (Δ_x + δ_x)).
It should be noted that, influenced by factors such as image differences on the target object itself and the accuracy of the image matching algorithm, it is unlikely that a target feature exactly identical to the identification feature will be matched on the target object. Therefore, in the embodiments of the present application, a feature whose matching degree with the identification feature meets a certain condition, e.g., is higher than a preset matching degree, can be determined as matching the identification feature. The preset matching degree can be set by considering both the image differences of the target object and the accuracy of the image matching algorithm.
The matching algorithm of the related image in the prior art can be applied to the embodiment of the present application, which is not particularly limited in this application.
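As one possible realization of such a prior-art matching algorithm, and only as an assumption for illustration since the application does not name a specific library, OpenCV's normalized cross-correlation template matching could be used to locate the target feature in the second field-of-view image and obtain its offsets:

```python
import cv2

def find_target_feature(template_gray, second_view_gray, min_score=0.8):
    """Sketch: locate the feature matching the identification template in the
    second field-of-view image and return its offset from the image center.

    min_score plays the role of the 'preset matching degree'; 0.8 is an
    assumed value, not specified by the application.
    """
    result = cv2.matchTemplate(second_view_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, max_loc = cv2.minMaxLoc(result)
    if max_score < min_score:
        return None  # matching failure
    th, tw = template_gray.shape[:2]
    # Center of the matched region, in pixel coordinates.
    feat_cx = max_loc[0] + tw / 2.0
    feat_cy = max_loc[1] + th / 2.0
    h, w = second_view_gray.shape[:2]
    delta_x = feat_cx - w / 2.0   # offset along x relative to the field-of-view center
    delta_y = feat_cy - h / 2.0   # offset along y (image rows grow downward)
    return delta_x, delta_y, max_score
```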
Regarding the acquisition of the initial image and the view image, the image acquisition method in the prior art can be specifically adopted. Such as a camera taking a photograph to obtain an initial image and a field of view image of the target object.
The above-mentioned acquisition of at least two view images may also be performed by other means in the prior art, such as, after the acquisition of the first view image, moving the vision system to the second position to capture the second view image without moving the target object.
S56, performing secondary correction on the target object according to the secondary deflection angle.
Based on the first alignment, the deflection angle of the target object is small. On this premise, the problems of matching failure, row errors and the like in the multi-field-of-view image comparison are reduced, so that the image deflection angle is identified with high accuracy.
It should be noted that, to reduce operational complexity and avoid new errors introduced by additional movement, the initial image may be acquired at the same first position as the first field image in the present application. Specifically, the vision system shoots and acquires the initial image at the first position, and after the target object is aligned for the first time, the vision system is kept at the first position and shoots and acquires the first field-of-view image of the target object.
Example two
In some scenarios, the repeating units on the target object are small, and thus a plurality of features with a degree of matching with the identification feature higher than a preset degree of matching may be identified in the second field-of-view image. At this time, how to accurately select the target feature is related to the calculation accuracy of the deflection angle.
It will be appreciated that the deflection angle of the image after the first alignment has been limited to a small range, so the y-axis offset of the target feature is small. The feature with the smallest y-axis offset can therefore be selected as the target feature in this application.
Specifically, in the second embodiment of the present application, the following steps are provided to determine the target feature:
identifying the identification features from the second view image to obtain at least one feature area;
calculating the matching degree of each characteristic region and the identification characteristic;
if only one characteristic region with the matching degree larger than the preset matching degree exists, taking the characteristic region with the matching degree larger than the preset matching degree as the target characteristic;
if at least two feature regions have a matching degree larger than the preset matching degree, acquiring the offset Δ_y in the y-axis of each such feature region, and taking the feature region with the smallest offset Δ_y as the target feature.
It should be noted that this method is not applicable to scenes where the image deflection angle is large, because a larger deflection angle leads to a larger y-axis offset Δ_y of the real target feature; in that case, selecting the smallest Δ_y may instead lead to calculation errors. In this application, however, the first alignment has already limited the deflection angle to a small range. In this case, taking the feature with the smallest offset Δ_y as the target feature improves the accuracy of target feature selection and thus the calculation accuracy of the deflection angle.
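A minimal sketch of this selection rule, assuming each candidate feature region is represented by its matching degree and its offsets from the field-of-view center (all names and the data layout are illustrative assumptions):

```python
def select_target_feature(candidates, preset_matching_degree):
    """candidates: list of (matching_degree, delta_x, delta_y) tuples, one per
    feature region found in the second field-of-view image.
    Returns the (delta_x, delta_y) of the selected target feature, or None.
    """
    qualified = [c for c in candidates if c[0] > preset_matching_degree]
    if not qualified:
        return None                      # no region matches well enough
    if len(qualified) == 1:
        return qualified[0][1], qualified[0][2]
    # Several candidates: after the first alignment the residual deflection is
    # small, so the true match is the one with the smallest |delta_y|.
    best = min(qualified, key=lambda c: abs(c[2]))
    return best[1], best[2]
```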
Example three
In the application, radon transformation can be adopted to identify the first deflection angle of the target object. For the convenience of subsequent understanding, the Radon transform principle of the image is described first.
The essence of the Radon transformation is to perform a spatial transformation of the original image and project the points of the original image onto a new plane, which corresponds to a one-dimensional projection "receiving screen", i.e., a photosensitive straight line L. The photosensitive straight line L passes through the origin of coordinates (which may be the center of the image) and rotates from a starting transformation angle θ_S (typically 0°) in steps of one angle step δ_θ up to an ending transformation angle θ_E (typically 180° - δ_θ). At each angle θ, every pixel i of the image is projected onto the photosensitive straight line, and the line integrals R1, R2, ..., Rn of the gradient amplitude matrix of the image along a series of integration paths perpendicular to the photosensitive straight line are calculated to obtain the Radon transformation matrix, where each integration path lies at a different distance from the origin of coordinates. The Radon transformation matrix takes the transformation angle θ as its first dimension, the distance from the integration path of the Radon transformation to the origin of coordinates as its second dimension, and the line integral of the gradient amplitude matrix of the initial image along each integration path as its element values.
It can be appreciated that when there is a dominant straight-line structure in the image and the line integration is performed along an integration path coincident with it, the resulting line integral value is the largest. The angle of the dominant straight-line structure in the image (the angle of the coincident integration path) can therefore be determined from the largest line integral value in the Radon transformation matrix. Specifically, the transformation angle θ corresponding to that maximum is determined, and subtracting 90° from this transformation angle gives the angle of the dominant straight-line structure in the image.
In order to improve the recognition accuracy, a new Radon transformation method is provided to recognize the image deflection angle. As shown in fig. 6, the new Radon transform method includes the steps of:
s61, acquiring an initial image, and processing the initial image to obtain a corresponding gradient amplitude matrix.
The initial image is a 2D image, which may be obtained by various methods in the prior art, such as photographing a target object (e.g., a wafer) with a camera.
The corresponding gradient amplitude matrix can be obtained by carrying out gradient calculation on the initial image. The gradient magnitude matrix may be used to identify edges of the initial image.
The gradient calculation can be performed using convolution approximation methods such as the Sobel or Canny operators known in the prior art. In a specific embodiment of the present application, a central difference scheme is used to calculate the gradient: the lateral gradient Gxi and the longitudinal gradient Gyi of each pixel i of the initial image are calculated, and the gradient amplitude Gi = sqrt(Gxi² + Gyi²) of each pixel i is then computed to obtain the gradient amplitude matrix G.
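A minimal sketch of step S61 under these assumptions (NumPy's np.gradient uses central differences in the interior of the array, matching the central difference scheme described above):

```python
import numpy as np

def gradient_magnitude(initial_image):
    """Sketch of step S61: grayscale -> double precision -> gradient magnitude."""
    img = np.asarray(initial_image, dtype=np.float64)     # double-precision gray image
    if img.ndim == 3:                                     # crude RGB-to-gray fallback
        img = img.mean(axis=2)
    gy, gx = np.gradient(img)          # central differences along rows (y) and columns (x)
    return np.sqrt(gx ** 2 + gy ** 2)  # Gi = sqrt(Gxi^2 + Gyi^2)
```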
S62, carrying out Radon transformation on the image gradient amplitude matrix to obtain a Radon transformation matrix. The Radon transformation matrix takes a transformation angle as a first dimension, and takes the distance from an integral path of the Radon transformation to a coordinate origin as a second dimension.
In the specific transformation, the angle step size can be selected according to the needs, such as 0.01 degree, 0.005 degree, 0.002 degree, 0.001 degree and the like. Preferably, the angular step is not less than 0.001 °. The smaller the angle step is, the higher the accuracy of identifying the deflection angle of the image is, but the larger the calculation amount is. However, when the angle step is small to a certain extent, for example, 0.001 degrees, the image deflection angle recognition accuracy is mainly limited by other links, so that the continuous reduction of the angle step not only brings about a great increase in the calculated amount, but also has a small meaning for improving the image deflection angle recognition accuracy.
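As a sketch of step S62, assuming purely for illustration that scikit-image's radon() is used (the application does not prescribe any particular implementation), the transform can be evaluated over the interval from θ_S = 0° to θ_E = 180° - δ_θ with the chosen angle step:

```python
import numpy as np
from skimage.transform import radon

def radon_matrix(grad_mag, angle_step=0.01):
    """Sketch of step S62: Radon transform of the gradient amplitude matrix.

    Returns (R, thetas). In skimage's convention each column of R is the
    projection at one angle, so the projection-position (distance) dimension
    runs along the rows; this plays the role of the 'second dimension'.
    """
    thetas = np.arange(0.0, 180.0, angle_step)    # theta_S = 0, theta_E = 180 - step
    R = radon(grad_mag, theta=thetas, circle=False)
    return R, thetas
```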
S63, performing enhancement treatment on elements in the Radon transformation matrix to obtain an enhancement matrix.
The differences between elements can be enlarged by performing enhancement processing on the elements in the Radon transformation matrix, so that the dominant straight-line structure is relatively strengthened and appropriately highlighted, and the adverse effect caused by non-dominant straight-line structures is reduced.
In a specific embodiment, the enhancement processing may be weighting processing for each element, or performing power operation for each element, and in particular, square processing may be performed. All treatments that may be enhanced to expand the differences between elements are intended to be within the scope of the present application.
S64, summarizing the enhancement matrix according to the second dimension to generate a preliminary vector corresponding to the transformation angle.
The obtained Radon transformation matrix can initially reflect the linear structure in the image, but if the dominant linear structure is directly identified on the basis of the Radon transformation matrix, the identification error can occur when the image has stronger linear noise. For example, when the planting direction of the crops is identified in the farmland, if a road in a direction different from the direction of the crops exists in the farmland, the planting direction of the crops is likely to be erroneously identified based on the strong straight line texture feature of the road. For this purpose, the enhancement matrix is summarized according to the second dimension to obtain a preliminary vector f (θ) using the transformation angle θ as a parameter. Based on the operation, the line integrals on the integral paths corresponding to the same angle are summarized, all straight lines at the same angle can be used as the recognition basis of the dominant straight line structure, and the possibility of recognition errors when stronger straight line noise exists in the image is reduced. Also taking the planting direction of the identified crops in the farmland as an example, the crops are arranged in parallel along the same direction, the number of the roads is single, and once the straight lines in the same direction are comprehensively considered, the single road cannot influence the final result.
Where the initial image has a dominant straight-line structure, f(θ) is typically a unimodal function of the transformation angle θ.
The summary in the embodiment of the present application specifically refers to that the element is summed according to the second dimension to obtain a preliminary vector f (θ) using the transformation angle θ as a parameter.
Based on the summarizing process, all linear structures of the same transformation angle of the initial image are comprehensively considered, instead of only selecting the strongest linear structure, so that complex and noisy images can be conveniently handled.
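A minimal sketch of steps S63 and S64 under the above assumptions, using squaring as the enhancement and summation over the distance dimension as the summarizing operation (the row/column layout of the Radon matrix follows the sketch after step S62):

```python
import numpy as np

def preliminary_vector(R, exponent=2):
    """Sketch of steps S63-S64: enhance the Radon matrix, then collapse the
    distance dimension so every straight line at the same angle contributes.
    """
    enhanced = np.power(R, exponent)   # power with exponent n > 1 widens element differences
    f_theta = enhanced.sum(axis=0)     # sum over the second (distance) dimension
    return f_theta                     # one value per transformation angle
```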
S65, calculating the gradient of the preliminary vector, and filtering the gradient to form a target vector containing the main angle.
The calculation of the gradient can be performed by a gradient calculation method in the prior art. In one embodiment of the present application, a central differential format is used to calculate the gradient of the preliminary vector.
Filtering the gradient may specifically include:
calculating the absolute value of the preliminary vector gradient;
the portions with absolute values lower than the preset gradient values are filtered to form a target vector f' (theta) containing the main angle.
The selection of the preset gradient value can be set empirically, or can be determined according to the distribution of the preliminary vector gradient values, for example, half of the absolute value of the highest gradient is determined as the preset gradient value, etc.
It is conceivable that by filtering out portions of the gradient that are below the preset gradient value, i.e. by removing the gentle portions at both ends of the f (θ) corresponding curve, an intermediate segment is preserved that contains the principal angle of the image, which is substantially symmetrical with respect to the principal angle of the image.
The gradient filtering step enables the target vector f' (theta) to be mainly concentrated near the main angle of the image, and the problem that calculation errors are caused by participation of non-image main angle parts in subsequent processes is solved.
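A minimal sketch of step S65, assuming half of the maximum absolute gradient as the preset gradient value (one of the options mentioned above):

```python
import numpy as np

def filter_by_gradient(f_theta, thetas):
    """Sketch of step S65: keep only the segment of f(theta) whose gradient
    magnitude is significant, i.e. the part concentrated around the main angle.
    """
    grad = np.gradient(f_theta)                 # central-difference gradient of f(theta)
    threshold = 0.5 * np.abs(grad).max()        # assumed preset gradient value
    keep = np.abs(grad) >= threshold
    idx = np.where(keep)[0]
    lo, hi = idx.min(), idx.max()               # contiguous middle segment around the peak
    return f_theta[lo:hi + 1], thetas[lo:hi + 1]
```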
S66, carrying out interpolation operation on the target vector.
S67, identifying peak points according to areas under curves corresponding to the target vectors after interpolation operation so as to calculate deflection angles of the images.
The target vector f'(θ) obtained in step S65 is concentrated near the principal angle of the image and is substantially symmetrical with respect to that principal angle. Therefore, by the equal-area segmentation method, for the curve of the target vector f'(θ), the areas under the curve on either side of each point can be calculated along with their difference; the point with the minimum area difference is determined as the peak point, and the deflection angle of the image is calculated from the target transformation angle corresponding to the peak point.
The area calculation in this application is specifically numerical integration, i.e. accumulation operation. Specifically, numerical integration is performed from both ends of the target vector f' (θ) toward each other, and a difference value of both end numerical integration is calculated. And selecting the point corresponding to the smallest difference value in all the difference values as a peak value point.
In this process, in order to obtain a more accurate calculation result, an interpolation operation is first applied to the target vector before the area calculation in order to refine the angle resolution, thereby improving the angle identification accuracy. Specifically, the interpolation refinement can use an angle finer than the angle step of the previous steps, for example interpolating at 0.0001°, 0.00001°, 0.000001°, and the like.
Once the peak point is determined in this way, the corresponding target transformation angle can be determined, and subtracting 90° from this target transformation angle θ yields the angle of the dominant straight-line structure, i.e., the deflection angle of the image.
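A minimal sketch of steps S66 and S67 under these assumptions, using linear interpolation onto a finer angle grid and the equal-area split to locate the peak point:

```python
import numpy as np

def deflection_from_target_vector(f_target, thetas_target, fine_step=0.0001):
    """Sketch of steps S66-S67: interpolate the target vector, find the point
    that splits the area under the curve into two equal halves, and convert
    the corresponding transformation angle into the image deflection angle.
    """
    fine_thetas = np.arange(thetas_target[0], thetas_target[-1], fine_step)
    fine_f = np.interp(fine_thetas, thetas_target, f_target)   # interpolation refinement

    left_area = np.cumsum(fine_f)               # running area from the left end
    right_area = left_area[-1] - left_area      # remaining area towards the right end
    peak_idx = int(np.argmin(np.abs(left_area - right_area)))

    peak_theta = fine_thetas[peak_idx]
    return peak_theta - 90.0                    # deflection angle of the image
```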
In the prior art, the point with the maximum R value is taken as the peak point; at special angles (±90°, 0° and ±45°) the peak point cannot be accurately and uniquely identified because the target vector has a flat top or a tilted flat top. By adopting the equal-area segmentation method instead, the peak point can be uniquely identified and the accuracy of angle identification is improved.
In summary, the third embodiment of the present application does not directly identify the maximum R value after the Radon transformation, but takes a series of improvement measures. The dominant straight-line structure is highlighted by matrix element enhancement; by summarizing the elements, all straight-line structures under the same transformation angle are comprehensively considered, avoiding the adverse effects caused by image noise; through gradient filtering, parts far away from the principal angle are excluded from the calculation, reducing calculation errors; through the interpolation operation, the angle step is refined and the recognition precision is improved; and by identifying the peak point through equal-area segmentation, the problem that the peak point cannot be accurately and uniquely identified at special angles is avoided. Combining this series of improvements greatly improves the accuracy of image angle recognition as a whole.
It should be noted that although the above embodiment lists steps S61-S67 of the Radon transform. However, radon transformation in the embodiment of the present application may be performed only by steps S61, S62, S64, and S67. The optimization may be performed by combining at least one of steps S63, S65, and S66 on the basis of steps S61, S62, S64, and S67.
It should also be noted that, in some approaches, gray processing is performed on the original image before the Radon transformation by, for example, extracting edges to obtain a binary edge image and then performing the Radon transformation on that binary edge image; this loses considerable image information.
Therefore, in the preferred embodiment of the present application, the original image may be subjected to gray processing to obtain an 8-bit unsigned integer gray image, and then converted into a double-precision real gray image. And obtaining a gradient amplitude matrix based on the double-precision real gray image, and then carrying out Radon transformation. Compared with a binary edge image, the double-precision real gray image keeps more image information, and is beneficial to improving the recognition precision of the image deflection angle.
The recognition accuracy of the image deflection angle has a great relationship with the angle step length of the Radon transformation. In general, the smaller the angle step size, i.e., the higher the resolution of the converted angle, the higher the accuracy of angle identification. However, the smaller the angle step size, the more the calculation amount increases.
In order to meet the requirements of high accuracy and low computation for angle identification, the present application further provides, on the basis of the third embodiment, a way of refining the angle step in stages. In the first-stage Radon transformation, the transformation angle is taken in the range [θ_S, θ_E] with an angle step δ_θ1, obtaining an image deflection angle θ_1max. In the second-stage Radon transformation, the transformation angle is taken in a small interval around θ_1max with an angle step δ_θ2, where δ_θ2 < δ_θ1, obtaining an image deflection angle θ_2max. And so on; three-stage, four-stage or even more stages of Radon transformation can be used until the required angle identification accuracy is finally achieved. The angle step of each stage of Radon transformation may, as required, be one tenth of the angle step of the previous stage, or another suitable fraction.
Refining the angle step in stages in this way ensures the recognition accuracy of the image deflection angle while greatly reducing the amount of calculation.
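A minimal sketch of this coarse-to-fine scheme; the ten-fold step refinement, the search window around the previous estimate, and the simplified per-stage peak pick (argmax instead of the equal-area split) are all assumptions made for brevity:

```python
import numpy as np
from skimage.transform import radon

def staged_deflection_angle(grad_mag, coarse_step=0.1, stages=3, window=1.0):
    """Sketch: multi-stage Radon-based angle search.  Each stage scans a
    narrow window around the previous estimate with a ten times finer step.
    'window' (in degrees) around the previous estimate is an assumed choice.
    """
    lo, hi, step = 0.0, 180.0, coarse_step
    theta_est = None
    for _ in range(stages):
        thetas = np.arange(lo, hi, step)
        R = radon(grad_mag, theta=thetas, circle=False)
        f_theta = np.power(R, 2).sum(axis=0)           # enhance + sum over distances
        theta_est = thetas[int(np.argmax(f_theta))]    # simplified peak pick per stage
        lo, hi = theta_est - window, theta_est + window
        step /= 10.0                                   # refine the angle step
        window /= 10.0
    return theta_est - 90.0                            # image deflection angle
```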
Angle identification verification on artificially generated images shows that, when the deflection angle is identified with the new Radon transformation method, the maximum identification error can be controlled within 0.0005°. Taking an 8-inch (200 mm diameter) wafer as an example, an error angle of 0.0005° corresponds to a tangential deviation of 0.873 µm over a 100 mm radius, or a tangential deviation of 1.75 µm over the 200 mm diameter; such an error significantly reduces the likelihood of matching failures and row errors.
Example four
The target object deflection correcting method can be applied to a target object which comprises a plurality of identical sub-objects and is characterized by linear texture in arrangement of the sub-objects. Typical applications include yaw alignment of wafers. Of course, the method can also be applied to the approximate fields such as license plates, deflection correction of texts and the like.
For deflection alignment of a wafer, the fourth embodiment of the present application discloses a dicing saw that can implement this alignment, the dicing saw comprising a dicing module 71, a processor 72, a memory 73 electrically connected with the processor 72, a driving bearing module 74 and a vision system 75. The memory 73 is used for storing program instructions that are read by the processor 72 to perform the target object deflection alignment method of the above embodiments. Wherein:
the driving bearing module is used for bearing a wafer to be diced, and the wafer to be diced is used as a target object;
the vision system is used for photographing the wafer to be diced to obtain an initial image and a visual field image;
and the processor executes program instructions to control the driving bearing module to perform primary alignment and secondary alignment on the wafer to be diced.
Specifically, the processor executes the program instructions and, after the first deflection angle is identified, controls the driving bearing module to perform the first alignment of the wafer to be diced; after the secondary deflection angle is identified, it controls the driving bearing module to perform the secondary alignment of the wafer to be diced. Based on this dicing saw, θ-axis alignment of the wafer can be achieved.
The hardware structure of the dicing saw described above can refer to the structure of fig. 1. The driving bearing module can comprise an X-axis guide rail, an X-axis sliding table, a Y-axis sliding table and a Z-axis sliding table; the dicing module comprises a dicing machine body. The vision system may include a lens and a camera.
Example five
Corresponding to the above embodiment, a fifth embodiment of the present application further provides a target object yaw alignment device, the device comprising:
an initial image acquisition unit configured to acquire an initial image of a target object.
And the first deflection angle determining unit is used for determining the first deflection angle of the target object according to the initial image.
And the first correcting unit is used for correcting the target object for the first time according to the first deflection angle.
The secondary deflection angle determining unit is used for obtaining at least two visual field images of the target object after the primary alignment and carrying out matching comparison of identification features so as to determine a secondary deflection angle.
And the secondary centering unit is used for secondarily centering the target object according to the secondary deflection angle.
The secondary deflection angle determining unit specifically includes:
and the first visual field image acquisition unit is used for acquiring a first visual field image of the target object after alignment at the first position.
A moving unit for moving the target object along the x-axis by a preset distance δ_x to a second position.
And a second visual field image acquisition unit configured to acquire a second visual field image of the target object at the second position.
The secondary deflection angle determination unit includes:
and the identification feature acquisition unit is used for acquiring the identification features in the first view image.
And the target feature searching unit is used for searching the target features matched with the identification features in the second view image.
An offset obtaining unit for determining, according to the current position of the target feature, the offset Δ_x of the target feature in the x-axis and the offset Δ_y in the y-axis.
A secondary deflection angle determination subunit for determining the secondary deflection angle according to the offsets and the preset distance δ_x. Specifically, the secondary deflection angle can be calculated through the formula θ_e = tan⁻¹(Δ_y / (Δ_x + δ_x)).
Optionally, the target feature searching unit specifically includes:
and the characteristic region acquisition unit is used for identifying the identification characteristic from the second visual field image to obtain at least one characteristic region.
And the matching degree calculating unit is used for calculating the matching degree of each characteristic region and the identification characteristic.
The target feature determining unit is used for taking the feature region with the matching degree larger than the preset matching degree as the target feature when only one such feature region exists; and, when at least two feature regions have a matching degree larger than the preset matching degree, obtaining the offset Δ_y in the y-axis of each such feature region and taking the feature region with the smallest offset Δ_y as the target feature.
Optionally, the first deflection angle determining unit specifically includes:
the gradient computing unit is used for acquiring an initial image, and processing the initial image to obtain a gradient amplitude matrix;
the Radon transformation unit is used for carrying out Radon transformation on the gradient amplitude matrix to obtain a Radon transformation matrix; the Radon transformation matrix takes a transformation angle as a first dimension, and takes the distance from an integral path of the Radon transformation to a coordinate origin as a second dimension;
the matrix enhancement unit is used for enhancing the elements in the Radon transformation matrix to obtain an enhancement matrix; specifically, the matrix enhancement unit is configured to perform exponentiation with an exponent of n on elements in the Radon transform matrix to obtain the enhancement matrix; n >1.
The element summarizing unit is used for summarizing the enhancement matrix according to the second dimension to generate a preliminary vector corresponding to the transformation angle; specifically, the element summarizing unit may be configured to perform a summation operation on the enhancement matrix according to the second dimension, and generate a preliminary vector corresponding to the transformation angle.
The gradient filtering unit is used for calculating the gradient of the preliminary vector and filtering the gradient to form a target vector containing a main angle; specifically, the gradient filtering unit is further used for calculating the gradient of the preliminary vector by using a central difference method and calculating the absolute value of the gradient on the preliminary vector; and filtering out the part with the absolute value lower than a preset gradient value to form the target vector.
The interpolation unit is used for carrying out interpolation operation on the target vector;
and the angle determining unit is used for identifying peak points according to the area under the curve corresponding to the target vector after interpolation operation so as to calculate the deflection angle of the image.
In a specific embodiment of the present application, the angle determining unit may include:
the area calculation unit is used for calculating the area under the curve of the left side and the right side corresponding to each point of the target vector after interpolation operation;
The peak point identification unit is used for identifying the point with the smallest area difference under the curves at the left side and the right side as the peak point;
and the angle determining subunit is used for determining and calculating the deflection angle of the image according to the transformation angle corresponding to the peak point.
Optionally, the gradient calculating unit specifically includes:
the gray image unit is used for carrying out gray processing on the initial image to obtain a gray image;
an image conversion unit for converting the original gray image into a double-precision gray image;
and the gradient calculation subunit is used for carrying out gradient operation on the double-precision gray level image to obtain the image gradient amplitude matrix. Specifically, the gradient calculation subunit is configured to perform gradient operation on the dual-precision gray scale image by using a central difference method, so as to obtain the gradient amplitude matrix.
It should be understood that "units" mentioned in the embodiments of the present application may be implemented in software and/or hardware, which are not limited in this embodiment of the present application. For example, a "unit" may be a software program, a hardware circuit or a combination of both that implements the functions described above. The hardware circuitry may include application specific integrated circuits (application specific integrated circuit, ASIC), electronic circuits, processors (e.g., shared, dedicated, or group processors, etc.) and memory that execute one or more software or firmware programs, incorporated logic circuits, and/or other suitable components that support the described functions.
Example six
Corresponding to the above method embodiment, a sixth embodiment of the present application further provides a computer device, including: one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read for execution by the one or more processors, perform the steps of the method embodiments described above.
Fig. 8 illustrates an exemplary architecture of an electronic device. For example, the computer device 1100 may be a mobile phone, a computer, a messaging device, a tablet device, a personal digital assistant, or the like.
Referring to fig. 8, the computer device 1100 includes at least one processor 1101, a memory 1102, a communication interface 1103, an image sensor 1104, and a spectrum sensor 1105. The processor 1101, the memory 1102, and the communication interface 1103 are communicatively connected; the communication connection may be wired (e.g., via a bus) or wireless. The communication interface 1103 is configured to receive data signals sent by the sensors 1104 and 1105; the memory 1102 stores computer instructions that are executed by the processor 1101 to perform the methods of the foregoing method embodiments.
It should be appreciated that in the present embodiment, the processor 1101 may be a central processing unit (CPU), and the processor 1101 may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate arrays (field programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or any conventional processor or the like.
The memory 1102 may include read-only memory and random access memory and provides instructions and data to the processor 1101. Memory 1102 may also include nonvolatile random access memory.
The memory 1102 may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
It should be understood that the computer device 1100 according to the embodiments of the present application may perform a method for implementing the embodiments of the present application, and the detailed description of the implementation of the method is referred to above, which is not repeated herein for brevity.
Example seven
Corresponding to the above method and apparatus embodiments, embodiment seven of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the above-mentioned method to be implemented.
Those of ordinary skill would further appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those of ordinary skill in the art may implement the described functionality using different approaches for each particular application, but such implementation is not to be considered as beyond the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment mainly describes its differences from the other embodiments. In particular, for a system or system embodiment, since it is substantially similar to a method embodiment, the description is relatively simple, and reference may be made to the description of the method embodiment for the relevant parts. The systems and system embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
The foregoing describes the preferred embodiments of the present invention in detail. The principles and implementations of the present invention are explained herein with specific examples, which are provided only to facilitate understanding of the method of the present invention and its core ideas; meanwhile, modifications made by those of ordinary skill in the art in light of the present teachings remain within the scope of the present invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (14)

1. A target object deflection alignment method, the method comprising:
acquiring an initial image of a target object;
determining a first deflection angle of the target object according to the initial image;
performing primary correction on the target object according to the first deflection angle;
acquiring at least two view images of the target object after the primary correction;
matching and comparing the identification features in the at least two view images to determine a secondary deflection angle;
and performing secondary correction on the target object according to the secondary deflection angle.
2. The target object deflection alignment method of claim 1, wherein the acquiring at least two view images of the target object after the primary correction includes:
acquiring a first view image of the target object at a first position after the primary correction;
moving the target object along the x-axis by a preset distance δx to a second position;
acquiring a second view image of the target object at the second position;
the matching and comparing the identification features in the at least two view images to determine a secondary deflection angle comprises:
acquiring identification features in the first view image;
searching the second view image for target features matching the identification features;
determining the offset of the target feature in the x axis and the y axis according to the current position of the target feature;
and determining the secondary deflection angle according to the offset and the preset distance δx.
3. The target object deflection alignment method of claim 2, wherein the determining the offset of the target feature in the x axis and the y axis according to the current position of the target feature comprises:
acquiring an offset Δx of the target feature along the x-axis and an offset Δy along the y-axis according to the current position of the target feature;
the determining the secondary deflection angle according to the offset and the preset distance δx includes:
calculating the secondary deflection angle by the formula θe = tan⁻¹(Δy / (Δx + δx)).
4. The target object deflection alignment method of claim 3, wherein the searching the second view image for target features matching the identification features comprises:
identifying the identification features from the second view image to obtain at least one feature region;
calculating the matching degree between each feature region and the identification features;
if only one feature region with a matching degree larger than a preset matching degree exists, taking the feature region with the matching degree larger than the preset matching degree as the target feature;
if at least two feature regions with a matching degree larger than the preset matching degree exist, acquiring an offset Δt of each feature region with the matching degree larger than the preset matching degree in the y axis, and taking the feature region with the smallest offset Δt as the target feature.
5. The target object deflection alignment method of claim 2, wherein the initial image is acquired at the first position.
6. The target object deflection alignment method of claim 1, wherein the determining a first deflection angle of the target object according to the initial image includes:
Acquiring the initial image, and processing the initial image to obtain a gradient amplitude matrix;
carrying out Radon transformation on the gradient amplitude matrix to obtain a Radon transformation matrix, wherein the Radon transformation matrix takes a transformation angle as a first dimension and takes the distance from an integral path of the Radon transformation to a coordinate origin as a second dimension;
summarizing the Radon transformation matrix according to the second dimension to generate a preliminary vector corresponding to the transformation angle;
and identifying peak points according to the area under the curve corresponding to the preliminary vector so as to calculate the first deflection angle of the target object.
7. The target object deflection alignment method of claim 6, wherein the method further comprises:
performing enhancement treatment on elements in the Radon transformation matrix to obtain an enhancement matrix;
the summarizing the Radon transformation matrix according to the second dimension to generate a preliminary vector corresponding to the transformation angle comprises:
and summarizing the enhancement matrix according to the second dimension to generate a preliminary vector corresponding to the transformation angle.
8. The target object deflection alignment method of claim 6 or 7, wherein the method further comprises:
Calculating the gradient of the preliminary vector, and filtering the gradient to form a target vector containing a main angle;
the step of identifying peak points according to the area under the curve corresponding to the preliminary vector to calculate the first deflection angle of the target object comprises the following steps:
and identifying peak points according to the area under the curve corresponding to the target vector so as to calculate the first deflection angle of the target object.
9. The target object deflection alignment method of claim 8, wherein the method further comprises:
performing interpolation operation on the target vector;
the identifying peak points according to the area under the curve corresponding to the target vector to calculate the first deflection angle of the target object includes:
and identifying peak points according to the area under the curve corresponding to the target vector after interpolation operation so as to calculate the deflection angle of the image.
10. The method of claim 9, wherein the identifying peak points according to the area under the curve corresponding to the target vector after interpolation operation to calculate the deflection angle of the image comprises:
calculating the area under the curve on the left side and the right side of each point of the target vector after interpolation operation;
identifying the point with the smallest difference between the areas under the curve on the left side and the right side as the peak point;
and calculating the deflection angle of the image according to the transformation angle corresponding to the peak point.
11. The method of claim 6, wherein the processing the initial image to obtain a gradient amplitude matrix comprises:
performing gray processing on the initial image to obtain a gray image;
converting the gray image into a double-precision gray image;
performing gradient operation on the double-precision gray image to obtain the gradient amplitude matrix;
the performing Radon transformation on the gradient amplitude matrix to obtain a Radon transformation matrix comprises the following steps:
performing Radon transformation on the gradient amplitude matrix within a transformation angle interval [θS, θE] with an angle step δθ to obtain the Radon transformation matrix, wherein θS is 0° and θE is 180° − δθ.
12. A dicing machine, characterized by comprising a dicing module, a processor, a memory electrically connected with the processor, a driving bearing module, and a vision system, wherein the memory is used for storing program instructions, and the processor reads the program instructions to execute the target object deflection alignment method according to any one of claims 1-11, wherein:
The driving bearing module is used for bearing a wafer to be diced, and the wafer to be diced is used as the target object;
the vision system photographs the wafer to be diced to obtain the initial image and the view images;
and the processor executes the program instructions and controls the driving bearing module to perform the primary correction and the secondary correction on the wafer to be diced.
13. A computer device, comprising: one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform the steps of the method of any one of claims 1 to 11.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed, implements the method of any one of claims 1-11.
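For illustration only (not part of the claims), the secondary deflection angle recited in claims 2-4 can be evaluated numerically as sketched below, under the reading that Δx and Δy are the matched-feature offsets, δx is the preset x-axis travel, and θe = tan⁻¹(Δy / (Δx + δx)); the function name, argument order, and the sample millimetre values are assumptions introduced purely for this sketch.

```python
import math

def secondary_deflection_angle(offset_x: float, offset_y: float,
                               preset_distance_x: float) -> float:
    """θe = tan⁻¹(Δy / (Δx + δx)), returned in degrees; offset_x/offset_y stand
    for the matched-feature offsets Δx, Δy and preset_distance_x for δx."""
    return math.degrees(math.atan(offset_y / (offset_x + preset_distance_x)))

# Illustrative numbers only: offsets of -0.02 mm and 0.10 mm after a 5 mm preset
# x-axis move give a residual deflection of roughly 1.15 degrees.
print(round(secondary_deflection_angle(-0.02, 0.10, 5.0), 3))
```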
CN202311315775.3A 2023-10-11 2023-10-11 Target object deflection correcting method, computer equipment and storage medium Pending CN117422621A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311315775.3A CN117422621A (en) 2023-10-11 2023-10-11 Target object deflection correcting method, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311315775.3A CN117422621A (en) 2023-10-11 2023-10-11 Target object deflection correcting method, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117422621A true CN117422621A (en) 2024-01-19

Family

ID=89522001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311315775.3A Pending CN117422621A (en) 2023-10-11 2023-10-11 Target object deflection correcting method, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117422621A (en)

Similar Documents

Publication Publication Date Title
CN108805934B (en) External parameter calibration method and device for vehicle-mounted camera
US8538168B2 (en) Image pattern matching systems and methods for wafer alignment
US7965325B2 (en) Distortion-corrected image generation unit and distortion-corrected image generation method
JP5075757B2 (en) Image processing apparatus, image processing program, image processing method, and electronic apparatus
CN112070845B (en) Calibration method and device of binocular camera and terminal equipment
CN109903346B (en) Camera attitude detecting method, device, equipment and storage medium
WO2018019143A1 (en) Image photographing alignment method and system
US7151258B2 (en) Electron beam system and electron beam measuring and observing methods
CN111028205A (en) Eye pupil positioning method and device based on binocular ranging
CN114549599A (en) Wafer rapid pre-alignment method and device, electronic equipment and storage medium
CN105783710B (en) A kind of method and device of location position
JPH11220006A (en) Method for alignment of object
JP2009301181A (en) Image processing apparatus, image processing program, image processing method and electronic device
CN117422621A (en) Target object deflection correcting method, computer equipment and storage medium
CN115103124B (en) Active alignment method for camera module
CN108520533B (en) Workpiece positioning-oriented multi-dimensional feature registration method
CN110969661A (en) Image processing device and method, position calibration system and method
CN114677448A (en) External reference correction method and device for vehicle-mounted camera, electronic equipment and storage medium
CN114670195B (en) Robot automatic calibration method and system
JPH11190611A (en) Three-dimensional measuring method and three-dimensional measuring processor using this method
CN117115242B (en) Identification method of mark point, computer storage medium and terminal equipment
CN117191805B (en) Automatic focusing method and system for AOI (automatic optical inspection) detection head
WO2024042995A1 (en) Light detection device and light detection system
CN117409074A (en) Image inclination angle recognition method, computer device and storage medium
JP2010041418A (en) Image processor, image processing program, image processing method, and electronic apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination