CN115547909A - Method for wafer definition positioning - Google Patents

Method for wafer definition positioning

Publication number: CN115547909A (granted as CN115547909B)
Application number: CN202211129191.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: Tian Dongwei (田东卫), Wen Renhua (温任华)
Applicant and current assignee: Meijie Photoelectric Technology Shanghai Co., Ltd.
Legal status: Granted; active

Classifications

    • H01L21/681 (CPC): Apparatus specially adapted for handling semiconductor or electric solid-state devices, or wafers, during manufacture or treatment, for positioning, orientation or alignment using optical controlling means (hierarchy: H Electricity > H01 Electric elements > H01L Semiconductor devices > H01L21/00 Processes or apparatus for manufacture or treatment > H01L21/67 Handling apparatus > H01L21/68 Positioning, orientation or alignment > H01L21/681 Using optical controlling means)


Abstract

The invention relates to a method for wafer sharpness (definition) positioning. A camera equipped with a microscope is moved a number of times in the direction perpendicular to the wafer; at each movement the current parking position of the camera is recorded and the image captured through the microscope at that position is saved. Position variables of each parking position relative to a reference position are calculated, along with sharpness variables of each image's sharpness relative to a reference sharpness. Taking the array of position variables as the discrete independent variable of a quadratic function and the array of sharpness variables as its dependent variable, the quadratic function is fitted. The position at which the camera should photograph the wafer is then determined by adding the vertex abscissa of the quadratic function to the reference position.

Description

Method for positioning wafer definition
Technical Field
The invention relates to the technical field of semiconductor wafers, and in particular to a method for sharpness (definition) positioning of a wafer across the various process steps of wafer fabrication.
Background
Wafer inspection is a common and necessary process step in semiconductor manufacturing, and generally comprises generation of an inspection target image followed by image data processing. Image generation obtains an image of the object to be inspected, such as a wafer; data processing extracts and evaluates that image, for common tasks such as wafer defect analysis and feature (critical dimension) measurement. Taking wafer defects as an example, the core work is to distinguish valid defects from suspected defects such as noise signals, which arise from non-essential small differences or randomness in the inspection.
Critical dimension measurement depends heavily on the sharpness of the captured image of the measured object; if only a rough image of the object is available, the critical dimension measurement is necessarily biased. The challenge is how to accomplish fine imaging of critical dimensions. In the prior art, imaging is often tuned by coarsely adjusting the illumination; in general the image then becomes blurred, an accurate image cannot be obtained, and measurement cannot be performed. Alternatively, the image may look sharp to the eye while in fact not reaching the best attainable sharpness.
Defect detection, no less than critical dimension measurement, places its most demanding requirement on image sharpness. The problem is how to guarantee that the image is fine enough; without that guarantee, subsequent attempts to improve the manufacturing process and optimize the semiconductor process proceed without any reliable basis. The present application proposes the following embodiments to address these drawbacks.
It should be noted that the above background description is given only for the convenience of a clear and complete explanation of the technical solutions of the present application and to facilitate understanding by those skilled in the art; the application is not limited to such specific application scenarios.
Disclosure of Invention
The application provides a wafer sharpness (definition) positioning method, wherein: a camera equipped with a microscope is moved N times in the direction perpendicular to the wafer; on the K-th movement, the current parking position P_K of the camera is recorded and the image I_K captured by the camera through the microscope is saved, K = 1, 2, 3, …, N;
the image of the wafer captured by the camera through the microscope at a reference position has a reference sharpness;
position variables X_1, X_2, …, X_N of the respective parking positions P_1, P_2, …, P_N relative to the reference position are calculated;
sharpness variables Y_1, Y_2, …, Y_N of the sharpness of the images I_1, I_2, …, I_N relative to the reference sharpness are calculated;
the array of position variables X_1, X_2, …, X_N is treated as the discrete independent variable of a quadratic function and the array of sharpness variables Y_1, Y_2, …, Y_N as its dependent variable, and the quadratic function is fitted;
thereby, the position at which the camera should photograph the wafer is determined by adding the vertex abscissa of the quadratic function to the reference position.
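Purely as an illustration of the procedure summarized above, the following C++ sketch fits the quadratic by ordinary least squares (the normal equations of a quadratic polynomial fit, solved here with Cramer's rule) and returns the shooting position as the vertex abscissa plus the reference position. The names fitQuadratic and bestFocusPosition are illustrative and not taken from the patent.

    #include <stdexcept>
    #include <vector>

    struct Quadratic { double a, b, c; };   // y = a*x^2 + b*x + c

    // Least-squares fit of a quadratic to the sample points (x[i], y[i]).
    Quadratic fitQuadratic(const std::vector<double>& x, const std::vector<double>& y) {
        double s0 = x.size(), s1 = 0, s2 = 0, s3 = 0, s4 = 0, t0 = 0, t1 = 0, t2 = 0;
        for (size_t i = 0; i < x.size(); ++i) {
            double xi = x[i], xi2 = xi * xi;
            s1 += xi; s2 += xi2; s3 += xi2 * xi; s4 += xi2 * xi2;
            t0 += y[i]; t1 += xi * y[i]; t2 += xi2 * y[i];
        }
        auto det3 = [](double m[3][3]) {     // determinant of a 3x3 matrix
            return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                 - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                 + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
        };
        double M[3][3]  = {{s4, s3, s2}, {s3, s2, s1}, {s2, s1, s0}};  // normal equations
        double Ma[3][3] = {{t2, s3, s2}, {t1, s2, s1}, {t0, s1, s0}};
        double Mb[3][3] = {{s4, t2, s2}, {s3, t1, s1}, {s2, t0, s0}};
        double Mc[3][3] = {{s4, s3, t2}, {s3, s2, t1}, {s2, s1, t0}};
        double D = det3(M);
        if (D == 0) throw std::runtime_error("degenerate quadratic fit");
        return { det3(Ma) / D, det3(Mb) / D, det3(Mc) / D };
    }

    // Position at which the camera should photograph the wafer:
    // vertex abscissa of the fitted quadratic plus the reference position.
    double bestFocusPosition(const std::vector<double>& X,   // position variables X_1..X_N
                             const std::vector<double>& Y,   // sharpness variables Y_1..Y_N
                             double referencePosition) {
        Quadratic q = fitQuadratic(X, Y);
        return -q.b / (2.0 * q.a) + referencePosition;
    }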
The method described above, wherein: an energy gradient function is used as the evaluation function F to evaluate the reference sharpness of the image at the reference position and the sharpness of each image I_1, I_2, …, I_N at the respective parking positions:

F = Σ_xp Σ_yp [ (f(xp+1, yp) − f(xp, yp))² + (f(xp, yp+1) − f(xp, yp))² ]

where f(xp, yp) is the gray value of the pixel (xp, yp), f(xp+1, yp) is the gray value of the pixel (xp+1, yp), and f(xp, yp+1) is the gray value of the pixel (xp, yp+1).
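A minimal C++ sketch of this energy gradient evaluation follows, assuming an 8-bit grayscale image stored row-major in a width × height buffer; the function name and image layout are illustrative assumptions, not the patent's code.

    #include <cstdint>

    // Energy gradient sharpness: for every pixel that has a right and a bottom
    // neighbor, accumulate the squared horizontal and vertical gray differences:
    // F = sum [ (f(x+1,y) - f(x,y))^2 + (f(x,y+1) - f(x,y))^2 ].
    double energyGradient(const uint8_t* img, int width, int height) {
        double F = 0.0;
        for (int y = 0; y < height - 1; ++y) {
            for (int x = 0; x < width - 1; ++x) {
                double fxy = img[y * width + x];
                double dx = img[y * width + (x + 1)] - fxy;   // f(x+1, y) - f(x, y)
                double dy = img[(y + 1) * width + x] - fxy;   // f(x, y+1) - f(x, y)
                F += dx * dx + dy * dy;
            }
        }
        return F;   // larger F corresponds to a sharper image
    }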
The method described above, wherein: a Laplace function is used as the evaluation function F to evaluate the reference sharpness of the image at the reference position and the sharpness of each image I_1, I_2, …, I_N at the respective parking positions:

F = Σ_xp Σ_yp |G(xp, yp)|

where f(xp, yp) is the gray value of the pixel (xp, yp), and the gradient matrix G(xp, yp) is obtained by convolving the pixel gray values with the Laplacian operator L, e.g. the standard 3×3 kernel

L = [ 0  1  0 ; 1  −4  1 ; 0  1  0 ]
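A corresponding C++ sketch of the Laplacian evaluation under the same buffer assumptions. The 4-neighbor kernel and the absolute-value accumulation shown are the common textbook choice and are assumed here, since the patent text does not pin down the exact operator.

    #include <cstdint>
    #include <cstdlib>

    // Laplacian sharpness: convolve the gray values with the Laplacian operator L
    // and accumulate the magnitude of the response, F = sum |G(x, y)|.
    double laplacianSharpness(const uint8_t* img, int width, int height) {
        static const int L[3][3] = { {0, 1, 0}, {1, -4, 1}, {0, 1, 0} };  // assumed kernel
        double F = 0.0;
        for (int y = 1; y < height - 1; ++y) {
            for (int x = 1; x < width - 1; ++x) {
                int g = 0;                                    // gradient matrix entry G(x, y)
                for (int j = -1; j <= 1; ++j)
                    for (int i = -1; i <= 1; ++i)
                        g += L[j + 1][i + 1] * img[(y + j) * width + (x + i)];
                F += std::abs(g);
            }
        }
        return F;
    }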
The method described above, wherein: on the K-th movement, the resulting position variable X_K and sharpness variable Y_K are regarded as a coordinate point (X_K, Y_K) on the curve of the quadratic function y = ax² + bx + c.
The method described above, wherein: if the coefficient of the quadratic term of y = ax² + bx + c is not less than zero, i.e., a ≥ 0, the position value of the reference position is changed again until the quadratic function fitted from the new reference position satisfies a < 0.
The method described above, wherein: if the vertex abscissa of y = ax² + bx + c is not greater than zero, i.e., −b/(2a) ≤ 0, the position value of the reference position is changed again until the quadratic function fitted from the new reference position satisfies −b/(2a) > 0.
The method described above, wherein: if the vertex abscissa −b/(2a) of y = ax² + bx + c exceeds the maximum distance the camera focus is allowed to move, the position value of the reference position is changed again until the quadratic function fitted from the new reference position satisfies that the vertex abscissa −b/(2a) is less than the maximum allowed movement.
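The three conditions above can be collected into a single validity check; a small C++ sketch with illustrative names follows (maxAllowedTravel stands for the maximum distance the focus is allowed to move):

    // Returns true only when the fitted quadratic y = a*x^2 + b*x + c satisfies
    // all three conditions; otherwise the reference position must be changed
    // and the fit repeated.
    bool fitIsValid(double a, double b, double maxAllowedTravel) {
        if (a >= 0) return false;                       // curve must open downward: a < 0
        double vertexX = -b / (2.0 * a);                // vertex abscissa
        if (vertexX <= 0) return false;                 // vertex abscissa must be positive
        if (vertexX >= maxAllowedTravel) return false;  // and within the allowed travel
        return true;
    }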
The method described above, wherein: regions of different height on an uneven wafer, or on a chip of the wafer, can each be sharpness-positioned independently; when sharpness positioning switches from one region to the next, the reference position is updated from the position value corresponding to the previous region to a new position value corresponding to the next region.
The method described above, wherein: the quadratic function is obtained at least by computation based on the principle of quadratic polynomial fitting, or by a neural network trained with the position variables and sharpness variables to approximate the quadratic function.
The method described above, wherein: when sharpness is evaluated, the image-gradient calculation of the evaluation function may be performed on a sharp, in-focus image or on a defocused, out-of-focus image.
Drawings
To make the above objects, features and advantages more comprehensible, embodiments are described in detail below with reference to the accompanying figures; the features and advantages of the present application will become apparent from the following detailed description taken together with those figures.
FIG. 1 shows a vertically movable camera with an electron microscope and a stage for holding a wafer.
FIG. 2 is a diagram of judging whether focusing succeeded from the image data and the fitted second-order curve.
FIG. 3 is an example of the camera moving up and down over multiple repeated autofocus attempts.
FIG. 4 is an example of the pixels involved in the gradient operations of the energy gradient or Laplace functions.
FIG. 5 is a diagram treating the position variable as the independent variable and the sharpness variable as the dependent variable.
FIG. 6 shows a case where both the position-variable and the sharpness-variable parameters are out of specification.
FIG. 7 is a graph of a quadratic function that can be modeled by a neural network in addition to a quadratic polynomial fit.
Detailed Description
The present invention will be described more fully hereinafter with reference to the accompanying examples, which are intended to illustrate and not to limit the invention. All other embodiments that those skilled in the art may obtain from them without inventive effort remain within the scope of the invention.
Referring to fig. 1, the background knowledge involved in the present application is described. Semiconductor fabrication generally concerns the silicon wafers used to fabricate integrated circuits. A measurement platform or motion platform 11 of the critical dimension metrology apparatus carries the wafer 10. The microscope and the camera CA are mated or assembled together to capture fine wafer detail images. The microscope has high- and low-power lenses, and the magnification can be switched manually or automatically within a series of lenses LN, e.g., from high power to medium or low power, or in the opposite direction, from low power to medium or high power. This switching relationship among the lenses includes on-axis switching.
Referring to fig. 1, regarding the platform (CHUCK): the motion platform 11 is a special tool for adsorbing and carrying wafers during the production of various semiconductor silicon chips, and is mainly used to carry wafers. Some documents also call this type of carrier a carrier or lifting mechanism, a wafer carrier or platform, a carrying platform, and the like. The motion platform is a carrying mechanism within the semiconductor equipment. The stages referred to herein include a platform (CHUCK) structure. The motion stage can move within the coordinate system along the abscissa X and the ordinate Y as desired; in some cases it can also rotate the wafer within the coordinate system or move it up and down along the Z axis.
Referring to fig. 1, the platform motion control module: it consists of an X axis, a Y axis, a theta axis and the CHUCK. Before the measuring equipment measures the critical dimension of the wafer, the platform motion control module must drive the CHUCK to move, thereby realizing motion control of the wafer. The theta axis can rotate; rotating the theta axis drives the CHUCK to rotate, which is equivalent to adjusting the angle theta by controlling the rotation of the motion platform.
Referring to fig. 1, the critical dimension measuring apparatus of the semiconductor industry includes at least the motion stage 11 and the camera CA equipped with a microscope. The apparatus can be a modification of an existing critical dimension measuring device or a completely new design. Since critical dimension measurement apparatus already exists in the semiconductor industry, the present application does not describe it separately; all or part of the technical features of prior-art critical dimension measurement equipment can be applied to the measuring apparatus of the present application, and references herein to a critical dimension measuring device include all or part of those prior-art features by default. A camera configured with a microscope includes an electron microscope.
Referring to fig. 1, the focusing Z-axis motion module of the camera CA: it consists of a Z axis that can move up and down. When the wafer is placed on a measuring platform such as the platform 11, the wafer must lie at the focal plane of the camera CA for the camera's field of view to be clear and the resolution high; in this process, the Z-axis motion module drives the camera and lens up and down to find the focal plane at which the camera's field of view is sharpest, i.e., the focal plane of the critical dimension structures on the wafer.
Referring to fig. 1, regarding focal plane position adjustment: the Z-axis stepping motor moves to drive the camera up and down and thereby adjust the focal plane position. How the motor drives the camera belongs to the prior art, and currently existing critical dimension measuring equipment basically adopts such a structure; it is therefore not described separately. Likewise, the motor and the microscope-equipped camera belong to the known art and are not described further.
Referring to fig. 1, the important term critical dimension (CD) is explained before describing the present application. In the manufacture of semiconductor integrated-circuit photomasks and in the photolithography process, in order to evaluate and control the pattern-processing precision of the process, the industry specially designs line patterns that reflect the characteristic line width of the integrated circuit; these are called critical dimensions. The industry term critical dimension may also be replaced by critical dimension structures or critical dimension marks.
Referring to fig. 1, the first technical problem to be solved by the present application: when the microstructure of a wafer is inspected, the images at different positions are not necessarily at the focal plane, which easily causes large errors in the measured values. In the traditional scheme, manually refocusing the microscope and continually trimming the working distance leads to low efficiency and poor accuracy. The autofocus technique disclosed in the present application completes focusing quickly, accurately and smoothly, and can reflect the focusing condition of the region of interest in real time.
Referring to fig. 1, the second technical problem to be solved by the present application: given how complex and slow prior-art microstructure inspection is (e.g., the need to refocus and to repeatedly trim the measuring distance), the focusing process in the microstructure measurement step needs to be simplified, the measurement throughput per unit time improved, the time the wafer spends in the inspection stage of the production line reduced, and the inspection accuracy of the microstructure improved.
Referring to fig. 1, regarding the autofocus implementation: it can be divided into an image acquisition module and a focus adjustment module, the latter being the Z-axis motion module. The acquired images are processed by the image algorithm to judge whether the current position is on the focal plane, and the Z axis (usually the up-and-down axis) is then driven to adjust accordingly.
Referring to fig. 1, regarding focal plane position adjustment: the Z-axis stepping motor moves to drive the camera up and down and thereby adjust the focal plane position. When the wafer is placed on the measuring platform, it must be placed at the focal plane of the camera so that the camera's field of view is clear and the resolution high; the Z-axis motion module drives the camera and lens up and down to find the focal plane at which the camera's field of view is sharpest.
Referring to fig. 1, regarding the Z-axis motion module: it relates first to the stepping motor, whose running speed and position can be accurately controlled without feedback; under conditions of low running speed and low power demand it can take over the function of a servo motor. In terms of step size, the stepping motor is free from various interference factors, such as the magnitude or waveform of the voltage or current, temperature changes, and so on.
Referring to fig. 1, regarding the stroke of the Z-axis motion module: the minimum stroke of the Z axis is realized, for example, by a stepper motor, and its minimum stroke is the linear displacement of one pulse, which can be calculated as follows.
First, the step angle of the stepper motor is determined, as is commonly indicated on the motor. For example, a step angle of 1.8 degrees means 360/1.8 = 200, i.e., 200 pulses are required for one revolution of the motor.
Secondly, it is determined whether the motor driver uses subdivision (microstepping), and the subdivision number is checked; the dial on the driver can be inspected to confirm it. For example, if the motor driver is set to 4 subdivisions, then, continuing from the 200 pulses calculated above, 200 × 4 = 800, i.e., 800 pulses are required for one revolution of the motor.
Furthermore, the length of travel, or lead, of one revolution of the motor shaft is determined: for a lead screw, pitch × number of thread starts equals the lead; for a rack-and-pinion drive, the travel per revolution follows from the pitch diameter (m × z).
The lead divided by the number of pulses (lead/pulses) equals the linear displacement of one pulse. The commanded travel of the stepper motor should generally be greater than or equal to this minimum stroke; otherwise the stepper motor will not respond. A worked example is sketched below.
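A worked numeric sketch of this calculation in C++. The 1.8-degree step angle and 4x subdivision come from the text above; the 5 mm screw lead is an assumed value chosen only for illustration.

    #include <cstdio>

    int main() {
        double stepAngle   = 1.8;   // degrees per full step (from the example above)
        int    subdivision = 4;     // driver microstepping (from the example above)
        double lead        = 5.0;   // mm of travel per shaft revolution (assumed)

        double pulsesPerRev = 360.0 / stepAngle * subdivision;  // 200 * 4 = 800
        double minStroke    = lead / pulsesPerRev;              // linear displacement of one pulse

        // A commanded move must be >= minStroke, or the motor will not respond.
        std::printf("pulses/rev = %.0f, minimum stroke = %.6f mm\n", pulsesPerRev, minStroke);
        return 0;
    }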
Referring to fig. 1, in an alternative example it is assumed that the minimum stroke of the Z-axis motion module is 0.000078 mm, i.e., the camera satisfies the condition that the minimum stroke is 0.000078 mm. This stroke differs when different stages are used.
Referring to fig. 1, a single movement step is defined in an alternative example (e.g., onceStep = 0.000078 mm).
Referring to fig. 1, the autofocus stroke AutoFocusTravel is defined in an alternative example. The autofocus stroke parameter is determined by the flatness of the product to be measured, such as a wafer; it is essentially the maximum stroke over which the Z axis moves up and down while focusing. For example, assume AutoFocusTravel takes the value 0.04 mm.
Referring to fig. 1, the maximum number of autofocus attempts AutoFocusTryCnt is defined in an alternative example.
Referring to fig. 1, the number of autofocus attempts is defined as cnt in an alternative example; cnt is counted continuously across the attempt cycles.
Referring to fig. 1, the current position on the Z axis is defined as Zc in an alternative example.
Referring to fig. 1, the maximum number of autofocus adjustments MAX_FRAME_COUNT is defined in an alternative example; the maximum number of autofocus adjustments is the maximum number of Z-axis adjustments.
Referring to fig. 1, the number of Z-axis adjustments is defined as m_focus_cnt in an alternative example.
Referring to fig. 1, a method MoveZDirect(onceStep) is defined in an alternative example. Moving down one step along the vertical axis (Z axis) is MoveZDirect(onceStep); conversely, moving up one step along the vertical axis is MoveZDirect(-onceStep). Positive and negative values inside the brackets of the method function represent downward and upward movement, respectively.
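An illustrative C++ stand-in for this sign convention follows; a real implementation would command the Z-axis stepper driver instead of printing.

    #include <cstdio>

    // Hypothetical stand-in for MoveZDirect(onceStep): a positive argument moves
    // the camera down one increment along Z, a negative argument moves it up.
    void MoveZDirect(double step) {
        std::printf("move Z by %+f mm (%s)\n", step, step >= 0 ? "down" : "up");
    }

    // Usage: MoveZDirect(onceStep) steps down; MoveZDirect(-onceStep) steps up.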
Referring to FIG. 1, the first array (m_focus_X[]) in the alternative example holds the statistics of the Z-axis position variations.
Referring to FIG. 1, the second array (m_focus_Y[]) in the alternative example holds the statistics of the image sharpness variations.
Referring to fig. 1, the Z-axis position m_focus_z of the focus start point is defined in an alternative example.
Referring to fig. 1, the image sharpness m_focus_def of the focus start point is defined in an alternative example.
Referring to fig. 1, the focusing referred to in the present application includes the following calculation process.
Referring to fig. 1, a temporary variable up_load is defined for the calculation, initialized as double up_load = travel. double is a computer-language type, the double-precision floating-point type. The application may run on a computer, a server, or a similar processing unit. Alternatives for the processing unit include: a field-programmable gate array, a complex programmable logic device or field-programmable analog gate array, a semi-custom ASIC, a processor or microprocessor, a digital signal processor, an integrated circuit, software firmware stored in memory, or the like. The keyword double in front of a computed value indicates that the value is of double-precision floating-point type; the int type below is the identifier for defining integer variables.
Referring to fig. 1, before metrology is performed on the critical dimensions, a number of autofocus attempts are executed. The repeated autofocus attempts can be exemplified in computer language as for (cnt = 0; cnt < AutoFocusTryCnt; cnt++). The count cnt increases from an initial zero until the maximum number of autofocus attempts AutoFocusTryCnt is reached; that is, cnt stops increasing once the condition cnt < AutoFocusTryCnt is no longer satisfied. The self-increment of cnt is written in computer language as cnt++.
Referring to fig. 1, in each execution of an autofocus attempt: the camera or the like (such as the stage carrying the microscope) repeatedly adjusts its position along the vertical axis, i.e., the Z-axis direction. The repeated Z-axis adjustment of the camera position can be exemplified in computer language as for (m_focus_cnt = 0; m_focus_cnt < MAX_FRAME_COUNT; ++m_focus_cnt). The number of times the camera is adjusted on the vertical axis is m_focus_cnt. The for statement here is a loop statement.
Referring to fig. 1, the Z-axis adjustment count m_focus_cnt increases from an initial zero until the maximum number of Z-axis adjustments MAX_FRAME_COUNT is reached; that is, if the condition m_focus_cnt < MAX_FRAME_COUNT is no longer satisfied during the repeated Z-axis adjustment, m_focus_cnt stops increasing. The increment of m_focus_cnt by one is written in computer language as ++m_focus_cnt.
Referring to fig. 1, in each execution of an autofocus attempt: before the camera position is repeatedly adjusted, the microscope lens is preferably brought toward the wafer over a predetermined fraction (e.g., three quarters) of the specified stroke (travel). In an alternative embodiment, bringing the microscope lens close to the sample, i.e., the wafer, before repeatedly adjusting the camera position means the lens first makes 3/4 of the stroke over the sample, exemplified in computer language by MoveZDirect(-up_load * 3 / 4).
Referring to fig. 1, image sharpness is expressed by F, for example double def = F. The gradient-based evaluation of sharpness was described in general terms above: a sharp, in-focus image is crisper and clearer than a blurred, out-of-focus image, and the gray values of edge pixels change strongly. def is the real-time image sharpness. The mathematical expressions for evaluating image sharpness were given above.
Referring to FIG. 1, the current Z-axis position is denoted Zc, and the real-time Z-axis coordinate is expressed as z_pos.
Referring to FIG. 1, the real-time Z-axis coordinate z_pos is obtained, for example double z_pos = Zc. Note that on the first of the repeated camera-position adjustments, i.e., when m_focus_cnt == 0 in computer-language notation, which is the focus start point, m_focus_z and m_focus_def are assigned. The first adjustment, or focus start point, can be exemplified in computer language as if (m_focus_cnt == 0) { m_focus_z = z_pos; m_focus_def = def; }. This records the Z-axis position m_focus_z of the focus start point and the image sharpness m_focus_def of the focus start point.
Referring to FIG. 1, the x-coordinate of the quadratic function or second-order curve is the Z-axis position variation. The position data of the camera movement can be captured in computer language as m_focus_X[m_focus_cnt] = z_pos - m_focus_z, where the array of x-coordinates of the quadratic function or second-order curve includes m_focus_X[m_focus_cnt].
Referring to fig. 1, the y-coordinate of the quadratic function or second-order curve is the image sharpness variation. Likewise, the captured image sharpness data can be exemplified in computer language as m_focus_Y[m_focus_cnt] = def - m_focus_def, where the array of y-coordinates of the quadratic function or second-order curve includes m_focus_Y[m_focus_cnt].
Referring to fig. 1, if the moved distance exceeds the specified stroke (travel), the adjustment ends; that is, if the distance the camera has moved on the Z axis exceeds the specified stroke, this round of Z-axis position adjustment is finished. Exceeding the stroke can be exemplified in computer language as if (Math.abs(z_pos - m_focus_z) > Math.abs(travel)) break. Math.abs expresses the absolute value of a number, such as the absolute value of z_pos - m_focus_z or of travel. break indicates jumping out of the current for loop, e.g., out of the for loop that increments m_focus_cnt. When m_focus_cnt meets this situation it jumps out of the loop and no longer increases until the next round of autofocus attempts is entered; in such a case m_focus_cnt may well not have reached MAX_FRAME_COUNT.
Referring to fig. 1, after the Z axis has been adjusted several times, the position data of the camera movement and the corresponding image sharpness data have been captured; the position data of the multiple position adjustments include m_focus_X[m_focus_cnt], and the image sharpness data of the multiple position adjustments include m_focus_Y[m_focus_cnt]. The Z-axis adjustment is represented by MoveZDirect.
Referring to fig. 5, the statistics of the Z-axis position variations (X_N) are captured in the first array m_focus_X[].
Referring to fig. 5, the statistics of the image sharpness variations (Y_N) are captured in the second array m_focus_Y[].
Referring to fig. 5, the focusing data m_focus_X[] and m_focus_Y[] are fitted to a second-order curve.
Referring to fig. 5, the second-order curve is given by the equation y = ax² + bx + c.
Referring to FIG. 5, the vertex coordinate of the second-order curve, i.e., the abscissa of the vertex, is calculated.
Referring to fig. 5, the position m_focus_best at which the image is sharpest is calculated: the vertex coordinate is added to the Z-axis position of the focus start point, giving m_focus_best = -b/(2*a) + m_focus_z. The sharpest position of the image is related to the vertex coordinate of the second-order curve and to the Z-axis position m_focus_z of the focus start point.
Referring to fig. 5, focusing is considered successful when the second-order curve y = ax² + bx + c satisfies: the second-order coefficient is less than zero, i.e., a < 0; the vertex coordinate is greater than zero, i.e., -b/(2a) > 0; and the vertex coordinate is less than the defined maximum focusing stroke, i.e., -b/(2a) < AutoFocusTravel. In computer language: if (a < 0 && (-b/(2*a)) > 0 && (-b/(2*a)) < AutoFocusTravel) break. Here break indicates that focusing succeeded and the stage need not be moved further to find the focus. Note that the predetermined conditions, including the three above, must all be satisfied simultaneously to indicate successful focusing; if any one of them is not satisfied, focusing is unsuccessful.
Referring to fig. 1, in each attempt (the attempt count is denoted cnt): the predetermined conditions include that the second-order coefficient of the second-order curve is less than zero (a < 0), that the vertex coordinate is greater than zero ((-b/(2a)) > 0), and that the vertex coordinate is less than the defined maximum focusing stroke ((-b/(2a)) < AutoFocusTravel); if any one of the predetermined conditions is not met, the camera is moved a distance on the vertical axis and the focus is then sought again. In other words, when the above conditions are not satisfied, the focus is not within the current stroke, and the stage must be moved to search for the focus again.
Referring to fig. 1, dir is defined in an alternative example as the sum of the differences between adjacent image sharpness values; in the initial state, for example, double dir = 0. The Z-axis adjustment count is m_focus_cnt, as described above. During the stage in which the camera position is repeatedly adjusted, any two adjacent sharpness variations obtained from successive position adjustments are differenced to obtain a difference result. Two adjacent sharpness variations are represented by m_focus_Y[m] and m_focus_Y[m+1], and their difference is the result m_focus_Y[m+1] - m_focus_Y[m]. Here m is defined as a numeric index smaller than the Z-axis adjustment count m_focus_cnt; m represents the position-adjustment index.
Referring to fig. 1, in each attempt (the attempt count is denoted cnt): during the stage in which the camera position is repeatedly adjusted on the Z axis, any two adjacent sharpness variations obtained from successive position adjustments are differenced, giving the difference result m_focus_Y[m+1] - m_focus_Y[m]. As the camera position is continually adjusted, a variable term is defined that changes as the adjustment count (m or m_focus_cnt) increases: the current value of the variable term equals its previous value plus the current difference result. The evolution of the variable term dir can be exemplified in computer language as for (int m = 0; m < m_focus_cnt; m++) { dir += m_focus_Y[m+1] - m_focus_Y[m]; }. The expression dir += m_focus_Y[m+1] - m_focus_Y[m] means: the current variable term dir equals its previous value plus the current difference result, i.e., m_focus_Y[m+1] - m_focus_Y[m]. In other words, dir is the sum of the differences between adjacent image sharpness values, which says the same thing.
Referring to fig. 1, it must then be determined whether the variable term is less than zero after each change. If it is, the camera or stage moves up a distance and focusing is attempted again; if not, the camera or stage moves down a distance and focusing is attempted again.
Referring to fig. 1, if the variable term becomes less than zero, the camera may be moved up and focusing attempted again, e.g., if (dir < 0) up_load = travel / (2 * (cnt + 1)). For example, the distance the camera moves up relative to the focus start position equals the specified stroke value (travel) divided by two and divided again by the current focusing count (the current focusing count is denoted cnt + 1; note that the first attempt is cnt = 0, and the current focusing count is defined as cnt + 1 for ease of understanding).
Referring to fig. 1, if the variable term becomes not less than zero (i.e., other than the case dir < 0), the camera moves down half a stroke relative to the focus start position and focusing is attempted again. The counterpart of if (dir < 0) is else up_load = travel / 2. For example, the distance the camera moves down relative to the focus start position equals half of the specified stroke value (travel).
Referring to fig. 1, a number of autofocus attempts have been performed up to this point. The repeated autofocus attempts can be exemplified in computer language by for (cnt = 0; cnt < AutoFocusTryCnt; cnt++). When no further focusing attempts are performed, or after the attempt cycle ends, the camera is moved by the relative focus distance dis, i.e., the vertex-coordinate position minus the current Z-axis position. In an alternative example this is denoted by the method MoveZDirect(dis), with double dis = m_focus_best - Zc. Autofocus is then considered finished.
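Pulling the fragments above together, the following condensed C++ sketch shows one plausible reading of the complete autofocus loop. It is a reconstruction, not the patent's code: currentZ() and grabSharpness() are hypothetical stand-ins for reading Zc and evaluating def, fitQuadratic is the least-squares fit sketched earlier, the constant values are assumed, and the repositioning between failed attempts, which the text leaves ambiguous, follows the up_load reading described above.

    #include <cmath>
    #include <vector>

    double currentZ();                 // hypothetical: read the current Z-axis position (Zc)
    double grabSharpness();            // hypothetical: capture an image and evaluate F on it (def)
    void   MoveZDirect(double step);   // positive = down, negative = up (as defined above)
    struct Quadratic { double a, b, c; };
    Quadratic fitQuadratic(const std::vector<double>& x, const std::vector<double>& y);

    const int    AutoFocusTryCnt = 5;         // max autofocus attempts (value assumed)
    const int    MAX_FRAME_COUNT = 30;        // max Z adjustments per attempt (value assumed)
    const double travel   = 0.04;             // autofocus stroke AutoFocusTravel, in mm
    const double onceStep = 0.000078;         // single movement step, in mm

    void autoFocus() {
        double up_load = travel;
        double m_focus_best = currentZ();
        for (int cnt = 0; cnt < AutoFocusTryCnt; cnt++) {
            MoveZDirect(-up_load * 3 / 4);    // approach: lens first makes 3/4 stroke over the sample
            std::vector<double> m_focus_X, m_focus_Y;
            double m_focus_z = 0, m_focus_def = 0;
            for (int m_focus_cnt = 0; m_focus_cnt < MAX_FRAME_COUNT; ++m_focus_cnt) {
                double def   = grabSharpness();          // real-time image sharpness
                double z_pos = currentZ();               // real-time Z coordinate
                if (m_focus_cnt == 0) {                  // focus start point
                    m_focus_z = z_pos; m_focus_def = def;
                }
                m_focus_X.push_back(z_pos - m_focus_z);  // Z-position variation (x-coordinate)
                m_focus_Y.push_back(def - m_focus_def);  // sharpness variation (y-coordinate)
                if (std::abs(z_pos - m_focus_z) > std::abs(travel))
                    break;                               // stroke exceeded: end this adjustment round
                MoveZDirect(onceStep);                   // step to the next Z position
            }
            Quadratic q = fitQuadratic(m_focus_X, m_focus_Y);  // fit the second-order curve
            double vertexX = -q.b / (2 * q.a);
            m_focus_best = vertexX + m_focus_z;          // sharpest position of the image
            if (q.a < 0 && vertexX > 0 && vertexX < travel)
                break;                                   // all predetermined conditions met: success
            double dir = 0;                              // sum of adjacent sharpness differences
            for (size_t m = 0; m + 1 < m_focus_Y.size(); m++)
                dir += m_focus_Y[m + 1] - m_focus_Y[m];
            up_load = (dir < 0) ? travel / (2 * (cnt + 1))   // sharpness fell: retry higher up
                                : travel / 2;                // otherwise: retry half a stroke down
        }
        MoveZDirect(m_focus_best - currentZ());          // final move by dis = m_focus_best - Zc
    }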
Referring to fig. 2, the focusing method for critical dimension metrology includes steps SP1 to SP7. Step SP1 acquires the image sharpness data and the Z-axis position data of the camera. The Z-axis position data include m_focus_X[m_focus_cnt], a data class in array form. The image sharpness data include m_focus_Y[m_focus_cnt], likewise in array form, which can be extracted from the image information Image1 captured by the camera.
Referring to fig. 2, step SP2 mainly fits a second-order curve from the captured camera-movement position data m_focus_X[m_focus_cnt] and the acquired image sharpness data m_focus_Y[m_focus_cnt], and calculates the vertex coordinate of the second-order curve, e.g., the abscissa value -b/(2*a) of the vertex of y = ax² + bx + c. Strictly, the vertex coordinates also include the ordinate value (4ac - b²)/(4a); however, this application is concerned with the abscissa of the vertex rather than its ordinate, and refers to the vertex coordinate generically and directly as the abscissa value -b/(2*a), so that in this application the vertex coordinate means the abscissa value of the vertex.
Referring to fig. 2, step SP3 calculates the sharpest position of the image, m_focus_best = -b/(2*a) + m_focus_z. The vertex coordinate plus the focus start Z-axis position m_focus_z yields the position at which the image is sharpest.
Referring to fig. 2, step SP4 determines whether focusing succeeded: if the second-order coefficient of the curve is less than zero, the vertex coordinate is greater than zero, and the vertex coordinate is less than the defined maximum focusing stroke, focusing is considered successful, and the focusing success flag of step SP5 indicates this.
Referring to fig. 2, step SP4 determines whether focusing succeeded: if the conditions that the second-order coefficient of the curve is less than zero, the vertex coordinate is greater than zero, and the vertex coordinate is less than the maximum focusing stroke are not all satisfied, the camera is moved a distance on the vertical axis and the focus is then sought again; this case is indicated by step SP6, meaning that the stage or camera must be moved to find the focus.
Referring to fig. 2, the negative determination result (else) is represented by step SP6: in step SP6, during the stage of repeatedly adjusting the camera position, any two adjacent sharpness variations obtained from successive position adjustments are differenced (e.g., the adjacent sharpness variations m_focus_Y[m+1] and m_focus_Y[m]). A variable term dir is defined that changes as the adjustment count increases, and is calculated as follows: the current variable term dir equals its previous value plus the current difference result (e.g., m_focus_Y[m+1] minus m_focus_Y[m]). Finally, it is judged each time whether the variable term is less than zero (i.e., whether if (dir < 0) holds).
Referring to fig. 2, if the judgment holds (if (dir < 0) is true), focusing is attempted after moving up a distance. The distance the camera moves up relative to the focus start position equals the specified stroke value (travel) divided by two and divided again by the current focusing count, i.e., up_load = travel / (2 * (cnt + 1)). up_load corresponds to MoveZDirect.
Referring to fig. 2, if not (if (dir < 0) does not hold), focusing is attempted after moving down a distance. The distance the camera moves down relative to the focus start position equals half of the specified stroke value (travel); the downward distance equals up_load = travel / 2. up_load corresponds to MoveZDirect.
Referring to fig. 2, step SP7 follows step SP5 or step SP6, but SP7 is not essential. If step SP7 is executed, the camera or stage moves by the relative focus distance dis, i.e., the previously obtained sharpest image position m_focus_best minus the current Z-axis position Zc. MoveZDirect(dis) represents moving the camera or stage by the relative focus distance dis, with double dis = m_focus_best - Zc. Since the sharpest image position is closely tied to the vertex coordinate, it can be said informally that the relative distance dis to the focus in this process is the vertex position minus the current Z-axis position. This concludes autofocus.
Referring to fig. 2, the acquisition of the image sharpness data in step SP1 is implemented by moving the Z axis: moving the camera position causes a change in m_focus_Y[m_focus_cnt] = def - m_focus_def, and the variation of m_focus_Y, as the basis of the ordinate of the quadratic curve, provides the material or source data for fitting the ordinate of the second-order curve.
Referring to fig. 2, the acquisition of the camera position data in step SP1 is implemented by moving the Z axis: moving the camera position causes a change in m_focus_X[m_focus_cnt] = z_pos - m_focus_z, and the variation of m_focus_X, as the basis of the abscissa of the quadratic curve, provides the material or source data for fitting the abscissa of the second-order curve.
Referring to fig. 2, step SP1 requires the microscope-equipped camera to repeatedly adjust its position along the vertical axis, to record the start position (m_focus_z) and the initial image sharpness (m_focus_def) of the focus start point, and to record the real-time position (z_pos) and real-time image sharpness (def) of the camera after each adjustment. The abscissa (x-coordinate) of the fitted second-order curve is then the Z-axis position variation m_focus_X[m_focus_cnt] = z_pos - m_focus_z, and its ordinate (y-coordinate) is the image sharpness variation m_focus_Y[m_focus_cnt] = def - m_focus_def.
Referring to fig. 2, the position data of step SP1 include multiple sets of position differences between the real-time position and the start position. For example, the position data include the position difference m_focus_X[0] = z_pos0 - m_focus_z, where z_pos0 is the actual real-time position at m_focus_cnt = 0; m_focus_X[1] = z_pos1 - m_focus_z, where z_pos1 is the actual real-time position at m_focus_cnt = 1; m_focus_X[2] = z_pos2 - m_focus_z, where z_pos2 is the actual real-time position at m_focus_cnt = 2; and so on. Sufficient abscissa information is provided as m_focus_cnt increases.
Referring to fig. 2, the image sharpness data of step SP1 include multiple sets of sharpness differences between the real-time image sharpness and the initial image sharpness. The sharpness difference m_focus_Y[0] = def0 - m_focus_def, where def0 is the real-time image sharpness at m_focus_cnt = 0; m_focus_Y[1] = def1 - m_focus_def, where def1 is the real-time image sharpness captured at m_focus_cnt = 1; m_focus_Y[2] = def2 - m_focus_def, where def2 is the real-time image sharpness captured at m_focus_cnt = 2; and so on. Sufficient ordinate information is provided as m_focus_cnt increases.
Referring to fig. 2, it is worth noting that after each position adjustment in step SP1, the position difference and the sharpness difference of the camera at the same position are regarded, respectively, as the abscissa and the ordinate of one and the same point on the quadratic function or second-order curve. For example, after the position adjustment at m_focus_cnt = 1, the position difference m_focus_X[1] and the sharpness difference m_focus_Y[1], with the camera at the same position, are regarded as the abscissa and ordinate of the same point on the second-order curve at the same moment. After the position adjustment at m_focus_cnt = 2, the position difference m_focus_X[2] and the sharpness difference m_focus_Y[2] of the camera at the same position are likewise regarded as the abscissa and ordinate of the same point on the curve. Note that def - m_focus_def is the sharpness difference, i.e., the sharpness variation.
Referring to fig. 2, step SP1 ends the current position adjustment if the absolute value of any position difference exceeds the specified stroke value (travel); that is, this round of Z-axis position adjustment ends and m_focus_cnt stops counting.
Referring to fig. 2, step SP1 specifies a maximum number of Z-axis adjustments MAX_FRAME_COUNT, and the actual adjustment count m_focus_cnt of the camera repeatedly adjusting its position along the vertical axis must remain below it. The maximum number of autofocus adjustments, i.e., the maximum number of Z-axis adjustments, is defined as MAX_FRAME_COUNT. This avoids the situation where position adjustment continues without ever exiting the loop, and prevents the measurement process from falling into endless adjustment.
Referring to fig. 3, this embodiment is a further optimization based on fig. 2. It performs multiple autofocus attempts (counted by cnt) before metrology of the critical dimensions on the wafer, and in an alternative example defines the maximum number of autofocus attempts as AutoFocusTryCnt. As shown, the actual number of repeated autofocus attempts cnt must be smaller than the maximum number AutoFocusTryCnt. Each autofocus attempt, or any single autofocus attempt, includes the flow of steps SP1 to SP5 of fig. 2, or the flow of steps SP1 to SP6. Step SP7 of fig. 2 may still be applied after each autofocus attempt, or after the end of any single attempt.
Referring to fig. 3, the focusing method for critical dimension measurement: multiple autofocus attempts are performed before metrology of the critical dimensions on the wafer (focusing continues to be attempted as long as cnt < AutoFocusTryCnt). In each autofocus attempt, or any single attempt (e.g., cnt = 0, 1, 2, 3, …), the camera is required to repeatedly adjust its position along the vertical axis (adjustment continues as long as m_focus_cnt < MAX_FRAME_COUNT) in order to retrieve the position data of the camera movement and the corresponding image sharpness data. The number of focusing attempts is recorded in cnt, and each executed attempt runs a self-increment on cnt. The number of position adjustments is recorded in m_focus_cnt, and each executed adjustment runs a self-increment on m_focus_cnt. For each value cnt may take, the camera performs m_focus_cnt position-adjustment actions in the Z-axis direction.
Referring to fig. 3, the focusing method for critical dimension measurement also fits a second-order curve from the position data and the image sharpness data. Step SP2 indicates that the second-order curve y = ax² + bx + c is fitted from the position data m_focus_X[m_focus_cnt] and the acquired image sharpness data m_focus_Y[m_focus_cnt]. Since the second-order curve is known at this point, the sharpest position of the image is apparent; the aforementioned step SP3 may therefore be omitted or retained in this embodiment, both of which are allowed.
Referring to fig. 3, the focusing method for critical dimension measurement: it is judged whether the second-order curve meets the predetermined conditions; if so, focusing is considered successful; if not, the camera is moved a distance on the vertical axis and the focus is then sought again, as in step SP4.
Referring to fig. 3, the predetermined conditions at least include: the second-order coefficient of the second-order curve is less than zero, i.e., a < 0; the vertex coordinate is greater than zero, i.e., -b/(2a) > 0; and -b/(2a) < AutoFocusTravel, i.e., the vertex coordinate is less than the maximum focusing stroke. Focusing is considered successful if the predetermined conditions are satisfied simultaneously, as in step SP5. If any one of the predetermined conditions is not satisfied, the camera is moved a distance on the vertical axis and the focus is then sought, as in step SP6.
Referring to fig. 3, the determination result is represented by step SP6: in step SP6, during the stage of repeatedly adjusting the camera position, any two adjacent sharpness variations obtained from successive position adjustments are differenced (e.g., the adjacent sharpness variations m_focus_Y[m+1] and m_focus_Y[m]). In step SP6, this differencing may be required in each autofocus attempt or in any single attempt. A variable term that changes as the position-adjustment count increases is defined, and the variable term dir is calculated as follows: the current variable term dir equals the previous value of the variable term plus the current difference result (e.g., m_focus_Y[m+1] minus m_focus_Y[m]). In an alternative example, the current difference result is taken as the next sharpness variation, e.g., m_focus_Y[m+1], minus the sharpness variation at the current position adjustment, e.g., m_focus_Y[m].
Referring to fig. 3, for example, assuming m = 3, the current variable term dir3 equals the value dir2 at the previous position adjustment plus the current difference result (m_focus_Y[4] minus m_focus_Y[3]). On this assumption the calculation can continue: in the same way, the current dir2 equals the value dir1 at the previous position adjustment plus its then-current difference result (m_focus_Y[3] minus m_focus_Y[2]). Estimating forward still further, and so on, the current dir1 equals the value dir0 at the previous position adjustment plus its then-current difference result (m_focus_Y[2] minus m_focus_Y[1]). Finally, it is judged each time whether the variable term is less than zero (i.e., whether if (dir < 0) holds). In summary, the variable term at the current adjustment count equals the value of the variable term at the previous position adjustment plus the difference result at the current position adjustment.
Referring to fig. 3, if the judgment holds (if (dir < 0) is true), focusing is attempted after moving up a distance. The distance the camera moves up relative to the focus start position equals, for example, the specified stroke value (travel) divided by two and divided again by the current focusing count, i.e., up_load = travel / (2 * (cnt + 1)). The focusing count defaults to starting at zero, but it is more natural to denote the current count by cnt + 1, since one does not usually speak of a "zeroth" attempt: the current first focusing (cnt + 1) occurs when essentially cnt = 0 and one attempt has actually been made; the current second focusing (cnt + 1) occurs when essentially cnt = 1 and two attempts have indeed been made. More strictly, the distance moved up from the focus start position equals the specified stroke value (travel) divided by two and again by the total count equal to the number of focusings actually performed (i.e., cnt + 1); the different phrasings yield the same upward distance, up_load = travel / (2 * (cnt + 1)). For instance, successive failed attempts give up_load = travel/2, travel/4, travel/6, and so on.
Referring to fig. 3, in each trial (e.g., cnt =0,1,2,3 \ 8230; \8230; etc.): and repeatedly adjusting the position of the camera in the vertical axis direction for multiple times, recording the initial position and the initial image definition of the focusing initial point, and recording the real-time position and the real-time image definition of the camera after each position adjustment.
Referring to fig. 3, in each trial (e.g., cnt =0,1,2,3 \ 8230; \8230; etc.): the camera repeatedly adjusts the position in the vertical axis direction for a plurality of times, and the position data m _ focus _ X [ m _ focus _ cnt =0,1,2,3 \8230 ] includes a plurality of sets of position differences between the real-time position and the start position.
Referring to fig. 3, in each trial (e.g., cnt =0,1,2,3 \ 8230; \8230; etc.): the camera repeatedly adjusts the position in the vertical axis direction for multiple times, and the image definition data m _ focus _ Y [ m _ focus _ cnt =0,1,2,3 \8230 ] comprises multiple groups of definition difference values of real-time image definition and initial image definition.
Referring to fig. 3, after the camera adjusts the position each time (e.g., m _ focus _ cnt =0,1,2,3 \8230; etc.), the position difference and the sharpness difference of the camera at the same position are respectively regarded as an abscissa value and an ordinate value corresponding to a point on the second-order curve at the same time.
Referring to fig. 3, the position difference m _ focus _ X [0] and the sharpness difference m _ focus _ Y [0] under the same position condition (e.g., m _ focus _ cnt = 0) are regarded as the abscissa value and the ordinate value of the same point on the second-order curve, respectively.
Referring to fig. 3, the position difference m _ focus _ X [3] and the sharpness difference m _ focus _ Y [3] under the same position condition (e.g., m _ focus _ cnt = 3) are regarded as the abscissa value and the ordinate value of the same point on the second-order curve, respectively.
Referring to fig. 3, after the camera adjusts the position each time (e.g., m _ focus _ cnt =0,1,2,3 \8230; etc.), if the absolute value of any position difference z _ pos-m _ focus _ z is greater than the specified stroke value travel, the current position adjustment is ended and the loop of repeatedly adjusting the position by the camera for a plurality of times jumps out. z _ pos-m _ focus _ z is a position difference value or position variable.
Referring to fig. 3, in each trial (e.g., cnt =0,1,2,3 \ 8230; \8230; etc.): a maximum number of times MAX _ FRAME _ COUNT of adjustment in the vertical axis is specified, and the actual number of times m _ focus _ cnt of adjustment for which the camera is required to repeatedly adjust the position in the Z-axis direction is smaller than the maximum number of times MAX _ FRAME _ COUNT.
Referring to fig. 3, the position m_focus_best at which the image is sharpest is the vertex abscissa of the second-order curve plus the start position of the focus start point: m_focus_best = -b/(2 × a) + m_focus_Z. That is, the vertex abscissa added to the Z-axis position of the focus start point yields the sharpest position m_focus_best.
Referring to fig. 3, in each trial (e.g., cnt = 0, 1, 2, 3, …): it is judged whether the second-order curve satisfies the predetermined condition. The predetermined condition has already been explained above and is not repeated here.
Referring to fig. 3, in each trial (e.g., cnt = 0, 1, 2, 3, …): when the predetermined condition is not satisfied, i.e., the focus does not lie within the current stroke, the stage must continue to move in order to search for the focus. Moving the stage to find the focus has been explained above and is not repeated here.
Referring to fig. 3, while the upper limit of attempts has not been reached (e.g., cnt < AutoFocusTryCnt), the multiple autofocus attempts do not end. Each attempt executes the flow of steps SP1 to SP5 or of steps SP1 to SP6: once when cnt = 0, again when cnt = 1, again when cnt = 2, and so on. The loop exits when cnt = AutoFocusTryCnt.
Referring to fig. 3, when the upper limit of attempts is reached (e.g., cnt = AutoFocusTryCnt), the multiple autofocus attempts end; the maximum value actually taken by cnt equals AutoFocusTryCnt minus one. Executing step SP7 then means moving the camera or the stage by the focus relative distance dis, i.e., the previously determined sharpest position m_focus_best minus the current Z-axis position Zc: double dis = m_focus_best - Zc. At this point autofocus is finished and the image of the critical dimension structure on the wafer has its highest resolution and highest definition. The foregoing focusing already substantially achieves the purpose of the present application set forth in the background section, and moving the camera by the focus relative distance is likewise a preferred embodiment of achieving autofocus.
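A minimal sketch of this finishing step, assuming the fitted coefficients a and b are available and reusing MoveZDiret from the text as the move primitive (the wrapper name FinishFocus is illustrative):

```cpp
// Hypothetical move primitive named in the text:
void MoveZDiret(double dz);

// Step SP7 as described: the vertex abscissa plus the focus start position
// gives m_focus_best, and the camera (or stage) moves by dis = m_focus_best - Zc.
double FinishFocus(double a, double b, double m_focus_Z, double Zc) {
    double m_focus_best = -b / (2.0 * a) + m_focus_Z;  // sharpest Z position
    double dis = m_focus_best - Zc;                    // focus relative distance
    MoveZDiret(dis);                                   // autofocus finished
    return dis;
}
```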
Referring to fig. 3, in each trial (e.g., cnt = 0, 1, 2, 3, …): before the repeated camera position adjustments, or before each adjustment, in an alternative example the microscope lens first approaches the wafer by a predetermined proportion (e.g., 3/4) of a specified travel value (travel). Step SP0 shows this process of bringing the lens to a predetermined distance from the wafer. Lens-to-sample (i.e., wafer) approach: first, a distance above the sample of roughly the specified travel value multiplied by the predetermined ratio is traveled. An example such as MoveZDiret(-up_load × 3 ÷ 4) shows the lens first making a 3/4 stroke over the sample; step SP0 expresses the lens-to-wafer approach via MoveZDiret.
Referring to fig. 3, while the upper limit of attempts has not been reached (e.g., cnt < AutoFocusTryCnt), the multiple autofocus attempts do not end. Each attempt executes the flow of steps SP0 to SP5 or of steps SP0 to SP6: once when cnt = 0, again when cnt = 1, again when cnt = 2, and so on. This is the example when step SP0 is employed.
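One plausible reading of the outer attempt loop, sketched with the names used in the text; TryFocusOnce is a hypothetical wrapper for steps SP1 to SP6, and the 3/4-stroke approach of step SP0 follows the MoveZDiret(-up_load × 3 ÷ 4) example above:

```cpp
// Hypothetical wrapper for one attempt (steps SP1..SP6): samples, fits the
// second-order curve, and reports whether the predetermined condition held.
bool TryFocusOnce(double travel);
void MoveZDiret(double dz);  // move primitive named in the text

// Outer loop over autofocus attempts, bounded by AutoFocusTryCnt.
bool AutoFocus(double travel, int AutoFocusTryCnt) {
    for (int cnt = 0; cnt < AutoFocusTryCnt; ++cnt) {
        double up_load = travel / (2.0 * (cnt + 1));  // retreat distance
        MoveZDiret(-up_load * 3.0 / 4.0);             // SP0: approach the wafer
        if (TryFocusOnce(travel))                     // SP1..SP6
            return true;                              // focus found in stroke
    }
    return false;                                     // attempts exhausted
}
```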
Referring to fig. 1, acquiring the camera position data is implemented by moving the Z axis: movement of the camera or the stage changes m_focus_X[m_focus_cnt] = z_pos - m_focus_Z, which serves as the fitting source data for the abscissa of the second-order curve, so the position data m_focus_X[m_focus_cnt = 0, 1, 2, 3, …] takes the form of an array.
Referring to fig. 1, acquiring the image sharpness data is likewise implemented by moving the Z axis: movement of the camera or the stage changes m_focus_Y[m_focus_cnt] = def - m_focus_def, which serves as the fitting source data for the ordinate of the second-order curve, so the sharpness data m_focus_Y[m_focus_cnt = 0, 1, 2, 3, …] likewise takes the form of an array.
See fig. 5 for an example of a second-order fit. Given a data sequence (x_i, y_i), i = 0, 1, 2, …, m, this set of data is fitted with a quadratic polynomial. The calculations and simplifications below outline the general process of second-order curve fitting.
$$p(x) = a_0 + a_1 x + a_2 x^2$$
Referring to fig. 5, second-order curve fitting: given two arrays x[n], y[n] of length n, both assumed discrete, an algorithmic fit can compute p(x), or equivalently the expression y = ax² + bx + c. Computing the relationship between the two arrays x[n], y[n] by means of a fitting function in this way is referred to as second-order curve fitting.
The mean square error of the fitting function with respect to the data sequence is formed, for example, on the basis of p(x):
$$Q(a_0, a_1, a_2) = \sum_{i=0}^{m} \left[ p(x_i) - y_i \right]^2 = \sum_{i=0}^{m} \left( a_0 + a_1 x_i + a_2 x_i^2 - y_i \right)^2$$
By the extremum principle for multivariate functions, the minimum of Q(a_0, a_1, a_2) must satisfy
$$\frac{\partial Q}{\partial a_k} = 2 \sum_{i=0}^{m} \left( a_0 + a_1 x_i + a_2 x_i^2 - y_i \right) x_i^k = 0, \quad k = 0, 1, 2,$$
which simplifies to the system of normal equations
$$\begin{cases} a_0 (m+1) + a_1 \sum_{i=0}^{m} x_i + a_2 \sum_{i=0}^{m} x_i^2 = \sum_{i=0}^{m} y_i \\[4pt] a_0 \sum_{i=0}^{m} x_i + a_1 \sum_{i=0}^{m} x_i^2 + a_2 \sum_{i=0}^{m} x_i^3 = \sum_{i=0}^{m} x_i y_i \\[4pt] a_0 \sum_{i=0}^{m} x_i^2 + a_1 \sum_{i=0}^{m} x_i^3 + a_2 \sum_{i=0}^{m} x_i^4 = \sum_{i=0}^{m} x_i^2 y_i \end{cases}$$
Referring to fig. 5, the array x[n] uses the position data m_focus_X[m_focus_cnt = 0, 1, 2, 3, …].
Referring to fig. 5, the array y[n] uses the sharpness data m_focus_Y[m_focus_cnt = 0, 1, 2, 3, …].
Referring to fig. 5, exploiting the discrete character of the data sequence (m_focus_X, m_focus_Y), the data sequence is fitted with a quadratic polynomial, and the relationship y = ax² + bx + c can be computed by algorithmic fitting. Step SP2 fits the second-order curve from the camera movement position data and the acquired image sharpness data and computes the vertex coordinates of the second-order curve, for example the vertex abscissa -b/(2 × a) of y = ax² + bx + c.
See fig. 5: y = ax² + bx + c and p(x) = a_0 + a_1 x + a_2 x² are mathematically the same quadratic relation. The former contains the quadratic coefficient a, the linear coefficient b and the constant term c, with x the abscissa and y the ordinate; the latter contains the quadratic coefficient a_2, the linear coefficient a_1 and the constant term a_0.
Alternatively, concerning
$$\frac{\partial Q}{\partial a_k} = 0, \quad k = 0, 1, 2,$$
the related simplification of (1) can be written in matrix form. With
$$A = \begin{bmatrix} 1 & x_0 & x_0^2 \\ 1 & x_1 & x_1^2 \\ \vdots & \vdots & \vdots \\ 1 & x_m & x_m^2 \end{bmatrix}, \quad \mathbf{a} = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}, \quad \mathbf{y} = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_m \end{bmatrix},$$
the given data sequence (x_i, y_i), fitted by the quadratic polynomial, leads — by the analysis relating the matrix form of (a_0, a_1, a_2) to the one composed of the y_i with respect to minimizing Q(a_0, a_1, a_2) — to the normal equations
$$A^{\mathsf T} A \, \mathbf{a} = A^{\mathsf T} \mathbf{y}.$$
It can be seen that the two simplifications differ slightly in form but yield the same final result. Solving by the above principle gives the coefficients a_0, a_1, a_2 of the second-order function and the related mathematical terms.
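For concreteness, a self-contained sketch of this solution: the power sums of x and the moments of y form the 3×3 normal-equation system above, solved here by Cramer's rule. The routine name and signature are illustrative, not from the patent.

```cpp
#include <cstddef>

// Least-squares quadratic fit y ~ a*x^2 + b*x + c over n samples
// (x plays the role of m_focus_X, y of m_focus_Y).
// Returns false if the normal-equation system is singular.
bool FitQuadratic(const double* x, const double* y, std::size_t n,
                  double& a, double& b, double& c) {
    double s0 = static_cast<double>(n);
    double s1 = 0, s2 = 0, s3 = 0, s4 = 0;  // power sums of x
    double t0 = 0, t1 = 0, t2 = 0;          // moments of y
    for (std::size_t i = 0; i < n; ++i) {
        double xi = x[i], x2 = xi * xi;
        s1 += xi; s2 += x2; s3 += x2 * xi; s4 += x2 * x2;
        t0 += y[i]; t1 += xi * y[i]; t2 += x2 * y[i];
    }
    // Normal equations: [s0 s1 s2; s1 s2 s3; s2 s3 s4] * [c b a]^T = [t0 t1 t2]^T
    double det = s0 * (s2 * s4 - s3 * s3) - s1 * (s1 * s4 - s2 * s3)
               + s2 * (s1 * s3 - s2 * s2);
    if (det == 0.0) return false;           // degenerate sample set
    c = (t0 * (s2 * s4 - s3 * s3) - s1 * (t1 * s4 - t2 * s3)
       + s2 * (t1 * s3 - t2 * s2)) / det;
    b = (s0 * (t1 * s4 - t2 * s3) - t0 * (s1 * s4 - s2 * s3)
       + s2 * (s1 * t2 - s2 * t1)) / det;
    a = (s0 * (s2 * t2 - s3 * t1) - s1 * (s1 * t2 - s2 * t1)
       + t0 * (s1 * s3 - s2 * s2)) / det;
    return true;
}
```

The vertex abscissa of the fitted curve is then -b/(2a), the quantity added to the focus start position above to obtain m_focus_best.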
Referring to FIG. 4, regarding the image pixel matrix: assuming the image width is W (width) and the height is H (height), then following the usual convention of computer vision and image processing the column indices of the image run from 0 to width - 1 and the row indices from 0 to height - 1. To make the representation of image pixels clear, the figure gives an example pixel matrix whose row indices run to 2 and whose column indices run to 9.
Referring to fig. 4, the width of the image is width = 10 and the height is height = 3.
Referring to fig. 4, the highest column index of the image is width - 1 = 9 and the highest row index is height - 1 = 2.
Referring to fig. 4, row 0 contains 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, spanning columns 0–9.
Referring to fig. 4, row 1 contains 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, spanning columns 0–9.
Referring to fig. 4, row 2 contains 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, spanning columns 0–9.
Referring to fig. 4, the illustrated matrix is thus the pixel matrix of rows 0–2 and columns 0–9. Accordingly, a pixel in the pixel matrix can be located by the pixel coordinate row yp and the pixel coordinate column xp of the image. Note that the actual numbers of rows and columns of a pixel matrix are arbitrary, or defined by the image capture device, and are not limited to the specific values given in the figure.
Referring to fig. 4, regarding the pixel address: for example, the address of pixel 15 (pixel coordinates yp = 1, xp = 5) is computed as 1 × 10 + 5; in general, the address is computed as yp × width + xp. The figure takes pixel 15 (pix: 15) as the example, and the rule is general: the address of pixel 27 (pix: 27) is computed as 2 × 10 + 7. Note that the pixel matrix example assumes the address of the first pixel (yp = 0, xp = 0) is zero; in practice the first pixel's address is not necessarily the zero address. For example, the pixel matrix may be a cropped view of the whole image rather than the full image, and in such cases the generality of computing each pixel's address in the pixel matrix must be taken into account.
Referring to fig. 4, obtaining addresses within an image: first define byte *ptr, where ptr is the address of the 0th pixel; this address points, for example, to a byte type (byte being a data type in the programming language). Knowing the layout rule of the pixel matrix, the address of any pixel can be computed as ptr + yp × width + xp. Denoting the pixel gray value at an address by dereferencing that address, the gray value is extracted in computer image processing as *((byte*)ptr + yp × width + xp). This example shows that, given the addresses of known pixels and an already captured image, the gray value of the pixel at any address can be computed from the address. The patterns or expressions for extracting pixel gray values differ slightly between computer languages.
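A minimal sketch of this addressing rule, assuming byte is a typedef and the image is stored row-major with width pixels per row:

```cpp
typedef unsigned char byte;

// Gray value of pixel (xp, yp): base address ptr plus the offset yp*width + xp.
byte PixelAt(const byte* ptr, int width, int xp, int yp) {
    return *(ptr + yp * width + xp);
}
```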
Referring to fig. 4, regarding grayscale images: the range between white and black can be divided, on a logarithmic scale, into a number of levels, commonly referred to in the industry as gray levels; gray scale is typically divided into 256 levels (0 to 255). An image represented in gray scale is called a grayscale image: each pixel carries only a gray value, and there is only one channel. By the foregoing method, the gray value at any address, i.e., *((byte*)ptr + yp × width + xp), can be obtained. The individual components of the three primary colors can likewise be extracted, since the addresses are known and the gray component of each of the three channels can be computed.
Referring to fig. 4, regarding color images: each pixel of the image is divided into three primary-color components R, G, B, and each component directly determines the intensity of its primary color; color produced in this way is called true color, and a color image usually has three channels rather than one. R(xp, yp), G(xp, yp) and B(xp, yp) denote, respectively, the red, green and blue gray levels at the given address, from which a color image, or a mixed-color gray level, can be computed.
Referring to fig. 4, a color image can be reduced to gray in different ways on different occasions; with R, G, B the three primary-color components of the color image, one formula is Gray(xp, yp) = 0.299 × R(xp, yp) + 0.587 × G(xp, yp) + 0.114 × B(xp, yp). The coefficients of the three primary-color components may be adapted, making the implementation versatile. At this point, the pixel gray value or grayscale image at each address can be extracted; the individual gray components of the three primary colors at each address can also be extracted; and the color image, or color-mixed gray, at each address can be extracted as well. In the present application, the gray values or area gray values of an image may be the gray value of any one primary color, or the mixed gray value of the three primary colors; for example, the gray values include R(xp, yp), G(xp, yp), B(xp, yp), Gray(xp, yp), and so on.
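A small sketch of that mixing formula; the weights are the ones quoted above and, as noted, may be adapted:

```cpp
// Mixed-color gray value from the three primary-color components of one pixel.
double GrayOf(double R, double G, double B) {
    return 0.299 * R + 0.587 * G + 0.114 * B;
}
```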
Referring to fig. 4, the expression of the energy gradient function F is the relation given below, which sums the gradient values of all pixels as the sharpness evaluation function value. The same applies to the metrology image of the critical dimension and its pixels.
$$F = \sum_{yp} \sum_{xp} \left\{ \left[ f(xp+1, yp) - f(xp, yp) \right]^2 + \left[ f(xp, yp+1) - f(xp, yp) \right]^2 \right\}$$
Here f(xp, yp) denotes the gray value of the pixel (xp, yp), and the larger the value of F, the sharper the image. For example, the camera captures an Image1/Image0, which contains the gray value of each such pixel (xp, yp). Step SP1 collects the image sharpness data, and the energy gradient function provides the basis for that collection; the image sharpness data can be extracted from the image information Image1/Image0 captured by the camera.
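A minimal sketch of the energy gradient evaluation on a row-major gray image, summing the squared gray differences to the right-hand and lower neighbours over all pixels that have such neighbours; the function name and signature are illustrative:

```cpp
// Energy gradient sharpness F: a larger return value means a sharper image.
double EnergyGradient(const unsigned char* f, int width, int height) {
    double F = 0.0;
    for (int yp = 0; yp < height - 1; ++yp) {
        for (int xp = 0; xp < width - 1; ++xp) {
            double dx = static_cast<double>(f[yp * width + xp + 1])
                      - f[yp * width + xp];           // xp-direction difference
            double dy = static_cast<double>(f[(yp + 1) * width + xp])
                      - f[yp * width + xp];           // yp-direction difference
            F += dx * dx + dy * dy;                   // accumulate gradient value
        }
    }
    return F;
}
```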
Referring to fig. 4, among sharpness evaluation methods based on image gradients there is, besides calculation with the energy gradient function, the Laplace function: a gradient matrix G(xp, yp) is obtained by convolving the Laplace operator with the gray values of the image's pixels, and the sum of squares of the gradient at each pixel is taken as the evaluation function.
$$F = \sum_{yp} \sum_{xp} \left[ G(xp, yp) \right]^2$$
Note that f(xp, yp) again denotes the gray value of the pixel (xp, yp), and the larger the value of F, the sharper the image.
Further, G(xp, yp) is expressed as the convolution of the image gray values with the Laplace operator L:
$$G(xp, yp) = f(xp, yp) \otimes L$$
An example (but not a limiting one) of the operator L in the function G(xp, yp) is a 3×3 discrete Laplacian kernel such as
$$L = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$
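A sketch of the Laplace-based evaluation under the example kernel above (the patent's exact operator may differ), convolving interior pixels and summing the squared responses:

```cpp
// Laplace sharpness F = sum of G(xp, yp)^2 over interior pixels.
double LaplaceSharpness(const unsigned char* f, int width, int height) {
    static const int L[3][3] = { {0, 1, 0}, {1, -4, 1}, {0, 1, 0} };
    double F = 0.0;
    for (int yp = 1; yp < height - 1; ++yp) {
        for (int xp = 1; xp < width - 1; ++xp) {
            double G = 0.0;                           // convolution response
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    G += L[dy + 1][dx + 1]
                       * static_cast<double>(f[(yp + dy) * width + (xp + dx)]);
            F += G * G;                               // sum of squared gradients
        }
    }
    return F;
}
```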
Referring to fig. 4, the energy gradient function: the sum of the squares of the gray-value differences between adjacent pixels in the xp direction and in the yp direction is taken as the gradient value of each pixel, and the gradient values of all pixels are accumulated as the sharpness evaluation function value.
Referring to fig. 4, step SP1 collects the image sharpness data; besides the aforementioned accumulation of all pixels' gradient values as the sharpness evaluation value, the Laplace function may be used. When computing the real-time image definition and the start image definition, either the energy gradient function or the Laplace function serves as the sharpness evaluation function.
Referring to fig. 1, the Image1/Image0 captured by the camera CA provides the pixel coordinates. The worktable referred to herein generally includes a microscope and a camera CA or the like fitted or assembled with the microscope.
Referring to fig. 1, automated microscopy (the motorized microscope) is well established; like a conventional manual microscope, it typically moves the observed sample with three degrees of freedom: the X and Y axes move horizontally and the Z axis moves vertically. Moving the lens along the Z axis directly determines the object distance of the microscope optics and the focusing and imaging result. Some or all technical features of a motorized microscope may be applied to the microscope and its camera in the figures.
Referring to fig. 1, the present application involves image gradient processing based on image pixels, so sharpness evaluation based on image gradients is introduced first: normally, a sharp, in-focus image has sharper edges than a blurred, out-of-focus image; its edge pixel gray values vary strongly and therefore have larger gradient values. In image processing the image is treated as a two-dimensional discrete matrix, and a gradient function can be used to acquire the image's gray information and thereby judge its definition.
Referring to fig. 1, as mentioned above, the motors driving the camera, the microscope and their stage are controlled by a computer, a server or an associated processing unit. Alternatives for the processing unit include: a field programmable gate array, a complex programmable logic device, a field programmable analog gate array, a semi-custom ASIC, a processor or microprocessor, a digital signal processor, an integrated circuit, or a software/firmware program stored in memory, among others. Steps SP1 to SP7 of fig. 2 may likewise be implemented by a computer, server or processing unit, as may steps SP0 to SP7 of fig. 3. The images include wafer images captured by the camera through the microscope.
Referring to fig. 1, a method for wafer definition positioning: the camera CA equipped with a microscope is moved N times in the vertical direction of the wafer 10; at the K-th movement the camera's current parking position P_K is recorded and the image I_K taken by the camera through the microscope is saved, K = 1, 2, 3, …, N. The positive integers K and N satisfy 1 ≤ K ≤ N.
Referring to fig. 1, let the image of the wafer 10 taken by the camera CA through the microscope at a reference position P_0 have reference definition F_0. The reference position P_0 is represented by the captured image I_0 (belonging to the Image0 category). For example, the reference position includes, but is not limited to, the start position of the focus start point. The advantage of replacing the start position with a reference position is that the required reference definition can be chosen flexibly and the position corresponding to it can be freely determined, which is very advantageous for selecting images with moderate edge-pixel gray values and appropriate gradient values.
Referring to fig. 1, let the image of the wafer 10 taken by the camera CA through the microscope at parking position P_1 have image definition F_1; parking position P_1 is represented by the captured image I_1 (belonging to the Image1 category).
Referring to fig. 1, let the image taken at parking position P_2 have image definition F_2; parking position P_2 is represented by the captured image I_2 (Image1 category).
Referring to fig. 1, let the image taken at parking position P_3 have image definition F_3; parking position P_3 is represented by the captured image I_3 (Image1 category).
Referring to fig. 1, let the image taken at parking position P_N have image definition F_N; parking position P_N is represented by the captured image I_N (Image1 category).
Referring to fig. 1, in general, let the image of the wafer 10 taken at parking position P_K have image definition F_K; parking position P_K is represented by the captured image I_K (Image1 category).
Referring to FIG. 1, the reference position P_0 (belonging to the Sp0 category) is a randomly or deliberately chosen position; the position serving as reference is not necessarily fixed but may be selected.
Referring to fig. 1, a parking position P_K (belonging to the Sp1 category) is a camera position obtained by active displacement: the camera moves N times along the Z axis, stops once per movement, and takes an image at each stopped position; relative to the reference position P_0 used for comparison, each parking position P_K reached after a movement is itself a static position.
Referring to FIG. 1, the position variable X_1 of parking position P_1 relative to the reference position P_0 is calculated, i.e., they are subtracted.
Referring to FIG. 1, the position variable X_2 of parking position P_2 relative to the reference position P_0 is calculated, i.e., they are subtracted.
Referring to FIG. 1, the position variable X_3 of parking position P_3 relative to the reference position P_0 is calculated, i.e., they are subtracted.
Referring to FIG. 1, the position variable X_N of parking position P_N relative to the reference position P_0 is calculated, i.e., they are subtracted.
Referring to FIG. 1, the position variables X_1, X_2, …, X_N include, but are not limited to, m_focus_X[m_focus_cnt].
Referring to FIG. 1, the sharpness variable Y_1 of the definition F_1 of image I_1 relative to the reference definition F_0 is calculated, i.e., they are subtracted.
Referring to FIG. 1, the sharpness variable Y_2 of the definition F_2 of image I_2 relative to the reference definition F_0 is calculated, i.e., they are subtracted.
Referring to FIG. 1, the sharpness variable Y_3 of the definition F_3 of image I_3 relative to the reference definition F_0 is calculated, i.e., they are subtracted.
Referring to FIG. 1, the sharpness variable Y_N of the definition F_N of image I_N relative to the reference definition F_0 is calculated, i.e., they are subtracted.
Referring to FIG. 1, the sharpness variables Y_1, Y_2, …, Y_N include, but are not limited to, m_focus_Y[m_focus_cnt].
Referring to FIG. 5, the array of position variables X_1, X_2, …, X_N (e.g., the array x[n]) is regarded as the discrete independent variable of a quadratic function, and the array of sharpness variables Y_1, Y_2, …, Y_N (e.g., the array y[n]) as the discrete dependent variable, and the quadratic function y = ax² + bx + c is fitted. The position required for the camera CA to photograph the wafer is then determined as the vertex abscissa (-b/(2a)) of the quadratic function plus the reference position P_0.
Referring to FIG. 5, at the K-th movement, the resulting position variable X_K and sharpness variable Y_K are regarded as a coordinate point (X_K, Y_K) on the curve of the quadratic function y = ax² + bx + c.
Referring to fig. 5, if the quadratic coefficient of the quadratic function y = ax² + bx + c is not less than zero, i.e., a ≥ 0, the reference position P_0 is changed and the fit repeated until the quadratic function fitted on the new reference position satisfies a < 0.
Referring to fig. 5, if the vertex abscissa of the quadratic function y = ax² + bx + c is not greater than zero, i.e., (-b/(2a)) ≤ 0, the position value of the reference position is changed until the quadratic function fitted on the new reference position satisfies (-b/(2a)) > 0.
Referring to fig. 5, if the vertex abscissa (-b/(2a)) of the quadratic function y = ax² + bx + c exceeds the maximum distance the camera focus is allowed to move, the position value of the reference position is changed until the quadratic function fitted on the new reference position has a vertex abscissa (-b/(2a)) smaller than that maximum allowed distance. The maximum distance the camera focus is allowed to move includes, but is not limited to, the focus stroke maximum (the autofocus track).
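The three acceptance checks can be sketched together as follows; maxTravel stands in for the maximum allowed focus movement, since the text names the bound but not a concrete symbol:

```cpp
// Returns true if the fitted curve y = a*x^2 + b*x + c is usable; otherwise
// the reference position must be changed and the fit repeated.
bool CurveAcceptable(double a, double b, double maxTravel) {
    if (a >= 0.0) return false;               // curve must open downward: a < 0
    double vertex = -b / (2.0 * a);           // vertex abscissa
    if (vertex <= 0.0) return false;          // vertex abscissa must be positive
    if (vertex >= maxTravel) return false;    // vertex must stay within stroke
    return true;
}
```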
Referring to fig. 5, the definition positioning method is used to position, separately and independently, areas of uneven height on a wafer or chip; when switching from a previous area to a next area for definition positioning, the reference position P_0 is updated from the position value corresponding to the previous area to the new position value corresponding to the next area. The surface roughness and topography of the wafer differ across process stages, and the features of the critical dimension structures vary; the method suits image-definition positioning of these varied critical-dimension topographies. Applied to defect detection, it likewise suits image-definition positioning of various defect-type structures. In particular, images at different positions are not always in focus when inspecting or measuring the wafer's critical dimensions, which otherwise causes larger errors in measured values or defect analysis.
Referring to fig. 5, when the F function is used to evaluate sharpness, the image gradient calculation of the evaluation function may be performed on a sharp, in-focus image or on a defocused, out-of-focus image. In an alternative example, because the reference position P_0 is set flexibly, performing the evaluation function's image gradient operation on sharp in-focus images (including I_0 and I_1, I_2, …, I_N) is the preferred option; but because P_0 and F_0 are flexible, attempting the image gradient calculation on out-of-focus images (including I_0 and I_1, I_2, …, I_N) is also possible. Under those conditions there is no difficulty in computing the sharpness variables Y_1, Y_2, …, Y_N of the definitions F_1, F_2, …, F_N of images I_1, I_2, …, I_N relative to the aforementioned reference definition F_0, the relationship being relative.
Referring to fig. 5, while the camera moves N times, differences are formed between adjacent sharpness differences (e.g., sharpness variables) obtained by successively adjusting the camera position (e.g., the difference constructed as Y_{K+1} minus Y_K). A variable term dir is defined that changes as the number of position adjustments grows: the term dir_K corresponding to the current adjustment count (e.g., K) equals the previous value dir_{K-1} (for adjustment count K - 1) plus the current sharpness-variable difference, the current difference being Y_{K+1} - Y_K. Whether dir_K is less than zero is then judged: if yes, the camera is moved upward to change the reference position before definition positioning proceeds (e.g., moving N times again to fit the quadratic function); if not, the camera is moved downward to change the reference position before definition positioning proceeds (e.g., moving N times again to fit the quadratic function). This avoids computing a sharpness variable from an out-of-focus image, such as out-of-focus image sharpness at a parking position against an in-focus reference sharpness. What is prevented is an image gradient evaluation that mixes out-of-focus image definition, whose edge-pixel gray values change little, with a reference definition whose edge-pixel gray values change strongly, thereby avoiding errors in the second-order curve; note that such errors are both hidden and hard to perceive. An image whose edge-pixel gray values change strongly is sharp and has larger gradient values.
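A minimal sketch of this bookkeeping over the N sharpness variables; accumulating the successive differences telescopes mathematically, but the loop keeps the incremental form described above:

```cpp
// dir accumulates the successive sharpness differences Y[K+1] - Y[K];
// dir < 0 afterwards means: move the camera up to change the reference
// position, otherwise move it down, then reposition by sharpness again.
double DirectionTerm(const double* Y, int N) {
    double dir = 0.0;
    for (int K = 0; K + 1 < N; ++K)
        dir += Y[K + 1] - Y[K];   // dir_K = dir_{K-1} + current difference
    return dir;
}
```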
Referring to FIG. 6, the position variables X_1, X_2, …, X_N of parking positions P_1, P_2, …, P_N relative to the reference position cannot always be brought close to the quadratic function y = ax² + bx + c; spurious abscissas, i.e., position variables that do not lie on the curve, should then be removed from the array x[n]. In an alternative embodiment, if the ratio (Y_K / X_K) of a sharpness variable Y_K to its corresponding position variable X_K is not within a preset threshold range, the position variable X_K is removed from the discrete independent variables used to fit the quadratic function and, at the same time, the sharpness variable Y_K is removed from the dependent variables used to fit it. The permissible threshold range is adjusted dynamically. FIG. 6 shows the distorted, non-normalized quadratic function obtained when an abnormal sharpness variable Y_K and an abnormal position variable X_K are not removed.
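A sketch of this culling, with lo and hi as the assumed, dynamically adjustable bounds of the threshold range:

```cpp
#include <vector>

// Remove sample pairs whose ratio Y[i]/X[i] falls outside [lo, hi] from both
// fitting arrays; the surviving pairs keep their relative order.
void CullOutliers(std::vector<double>& X, std::vector<double>& Y,
                  double lo, double hi) {
    std::size_t kept = 0;
    for (std::size_t i = 0; i < X.size(); ++i) {
        if (X[i] == 0.0) continue;            // undefined ratio: discard pair
        double r = Y[i] / X[i];
        if (r < lo || r > hi) continue;       // outside threshold range: discard
        X[kept] = X[i];
        Y[kept] = Y[i];
        ++kept;
    }
    X.resize(kept);
    Y.resize(kept);
}
```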
Referring to FIG. 7, considering that the arrays x[n] and y[n] are discrete, this example can train a neural network Net that simulates the quadratic function y = ax² + bx + c using the position variables and sharpness variables: it takes the position variables X_1, X_2, …, X_N as input and is supervised with labels formed from the sharpness variables Y_1, Y_2, …, Y_N, and the trained network Net can then yield the vertex abscissa of the quadratic function. The ways of deriving the quadratic function described above include operations based on the quadratic-polynomial fitting principle (e.g., p(x) = a_0 + a_1 x + a_2 x²); moreover, any solution that can fit a quadratic function suits this application, such as a quadratic function generator (or parabola generator), a library function burned into a processor as software (for Python, typically numpy, whose library functions can likewise produce a quadratic expression from input coordinate data), or an online quadratic function generator on a networked computer.
While the above specification, together with the claims that follow, sets forth preferred embodiments of the invention in conjunction with the specific embodiments disclosed, the invention is not intended to be limited to those embodiments. Various alterations and modifications will no doubt become apparent to those skilled in the art after reading the above description; therefore, the appended claims should be construed to cover all such variations and modifications as fall within the true spirit and scope of the invention, and any and all equivalent ranges and contents within the scope of the claims should be considered within its intent and scope.

Claims (10)

1. A method for wafer definition positioning, comprising:
moving a camera equipped with a microscope N times in the vertical direction of the wafer, recording, at the K-th movement, the camera's current parking position P_K and saving the image I_K taken by the camera through the microscope, K = 1, 2, 3, …, N;
the image of the wafer taken by the camera through the microscope at a reference position having a reference definition;
calculating the position variables X_1, X_2, …, X_N of the respective parking positions P_1, P_2, …, P_N relative to the reference position;
calculating the sharpness variables Y_1, Y_2, …, Y_N of the definitions of the images I_1, I_2, …, I_N relative to the reference definition;
regarding the array of position variables X_1, X_2, …, X_N as the discrete independent variable of a quadratic function and the array of sharpness variables Y_1, Y_2, …, Y_N as the dependent variable of the quadratic function, and fitting the quadratic function;
whereby the position required for the camera to photograph the wafer is determined as the vertex abscissa of the quadratic function plus the reference position.
2. The method of claim 1, wherein:
the energy gradient function is used as the evaluation function F to evaluate the reference definition of the image at the reference position and the definition of each image I_1, I_2, …, I_N at its parking position;
$$F = \sum_{yp} \sum_{xp} \left\{ \left[ f(xp+1, yp) - f(xp, yp) \right]^2 + \left[ f(xp, yp+1) - f(xp, yp) \right]^2 \right\}$$
where f(xp, yp) is the gray value of pixel (xp, yp), f(xp+1, yp) is the gray value of pixel (xp+1, yp), and f(xp, yp+1) is the gray value of pixel (xp, yp+1).
3. The method of claim 1, wherein:
the Laplace function is used as the evaluation function F to evaluate the reference definition of the image at the reference position and the definition of each image I_1, I_2, …, I_N at its parking position;
$$F = \sum_{yp} \sum_{xp} \left[ G(xp, yp) \right]^2$$
where
$$G(xp, yp) = f(xp, yp) \otimes L$$
f(xp, yp) is the gray value of pixel (xp, yp), and the gradient matrix G(xp, yp) is obtained by convolving the pixel gray values with the Laplacian operator L.
4. The method of claim 1, wherein:
the position variable X_K and the sharpness variable Y_K produced by the K-th movement are regarded as a coordinate point (X_K, Y_K) on the curve of the quadratic function y = ax² + bx + c.
5. The method of claim 1, wherein:
if the quadratic coefficient of the quadratic function y = ax² + bx + c is not less than zero, i.e., a ≥ 0, the position value of the reference position is changed until the quadratic function fitted on the new reference position satisfies a < 0.
6. The method of claim 1, wherein:
if the vertex abscissa of the quadratic function y = ax² + bx + c is not greater than zero, i.e., (-b/(2a)) ≤ 0, the position value of the reference position is changed until the quadratic function fitted on the new reference position satisfies (-b/(2a)) > 0.
7. The method of claim 1, wherein:
if the vertex abscissa (-b/(2a)) of the quadratic function y = ax² + bx + c exceeds the maximum distance the camera focus is allowed to move, the position value of the reference position is changed until the quadratic function fitted on the new reference position has a vertex abscissa (-b/(2a)) smaller than that maximum allowed distance.
8. The method of claim 1, wherein:
the method is used to position, separately and independently, areas of uneven height on a wafer; when switching from a previous area to a next area for definition positioning, the reference position is updated from the position value corresponding to the previous area to the new position value corresponding to the next area.
9. The method of claim 1, wherein:
the manner of obtaining the quadratic function comprises operation based on the quadratic-polynomial fitting principle, or a neural network trained with the position variables and sharpness variables to simulate the quadratic function.
10. A method according to claim 2 or 3, characterized in that:
when the definition evaluation is performed, the image gradient calculation of the evaluation function is performed using a sharp, in-focus image or a defocused, out-of-focus image.
CN202211129191.2A 2022-09-16 2022-09-16 Wafer definition positioning method Active CN115547909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211129191.2A CN115547909B (en) 2022-09-16 2022-09-16 Wafer definition positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211129191.2A CN115547909B (en) 2022-09-16 2022-09-16 Wafer definition positioning method

Publications (2)

Publication Number Publication Date
CN115547909A true CN115547909A (en) 2022-12-30
CN115547909B CN115547909B (en) 2023-10-20

Family

ID=84727788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211129191.2A Active CN115547909B (en) 2022-09-16 2022-09-16 Wafer definition positioning method

Country Status (1)

Country Link
CN (1) CN115547909B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101494737A (en) * 2009-03-09 2009-07-29 杭州海康威视数字技术股份有限公司 Integrated camera device and self-adapting automatic focus method
US20120008038A1 (en) * 2010-07-09 2012-01-12 Altek Corporation Assisting focusing method for face block
CN104820328A (en) * 2015-03-27 2015-08-05 浙江大学 Rapid automatic focusing method of calculating focusing position on the basis of defocusing model curve
CN110557547A (en) * 2018-05-30 2019-12-10 北京小米移动软件有限公司 Lens position adjusting method and device
CN110865453A (en) * 2019-09-26 2020-03-06 麦克奥迪(厦门)医疗诊断系统有限公司 Automatic focusing method of automatic microscope
CN111338051A (en) * 2020-04-08 2020-06-26 中导光电设备股份有限公司 Automatic focusing method and system based on TFT liquid crystal panel
CN111784684A (en) * 2020-07-13 2020-10-16 合肥市商巨智能装备有限公司 Laser-assisted transparent product internal defect depth setting detection method and device
CN112213619A (en) * 2020-09-16 2021-01-12 杭州长川科技股份有限公司 Probe station focusing method, probe station focusing device, computer equipment and storage medium
CN112505910A (en) * 2020-12-11 2021-03-16 平湖莱顿光学仪器制造有限公司 Method, system, apparatus and medium for taking image of specimen with microscope

Also Published As

Publication number Publication date
CN115547909B (en) 2023-10-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant