CN115547909B - Wafer definition positioning method

Info

Publication number: CN115547909B
Application number: CN202211129191.2A
Authority: CN (China)
Prior art keywords: focus, image, camera, referring, definition
Legal status: Active (granted)
Other versions: CN115547909A (Chinese)
Inventors: 田东卫, 温任华
Assignee (original and current): Meijie Photoelectric Technology Shanghai Co ltd
Events: application filed by Meijie Photoelectric Technology Shanghai Co ltd, with priority to CN202211129191.2A; publication of CN115547909A; application granted; publication of CN115547909B.


Classifications

    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L21/00 Processes or apparatus adapted for the manufacture or treatment of semiconductor or solid state devices or of parts thereof
    • H01L21/67 Apparatus specially adapted for handling semiconductor or electric solid state devices during manufacture or treatment thereof; apparatus specially adapted for handling wafers during manufacture or treatment of semiconductor or electric solid state devices or components; apparatus not specifically provided for elsewhere
    • H01L21/68 Apparatus specially adapted for handling semiconductor or electric solid state devices during manufacture or treatment thereof, for positioning, orientation or alignment
    • H01L21/681 Apparatus specially adapted for handling semiconductor or electric solid state devices during manufacture or treatment thereof, for positioning, orientation or alignment using optical controlling means

Abstract

The invention relates to a method for locating the position of sharpest focus (definition) for a wafer. A camera equipped with a microscope is moved several times along the direction perpendicular to the wafer; after each movement, the current parking position of the camera is recorded and the image taken through the microscope is saved. Position variables of each parking position relative to a reference position are calculated, as are sharpness variables of each image's sharpness relative to a reference sharpness. The array of position variables is taken as the discrete independent variable of a quadratic function, the array of sharpness variables as its dependent variable, and the quadratic function is fitted. The position required for the camera to photograph the wafer is then located as the abscissa of the vertex of the quadratic function plus the reference position.

Description

Wafer definition positioning method
Technical Field
The invention relates mainly to the technical field of semiconductor wafers, and in particular to a method for locating the position of sharpest focus for a wafer in each process preparation stage.
Background
Wafer inspection is a common and necessary step in semiconductor manufacturing processes, and typically comprises inspection-image generation and image data processing. Image generation obtains an inspection image of the inspected object, such as a wafer; data processing extracts and evaluates that image, common tasks being wafer defect analysis and feature-size (critical dimension) measurement. Taking wafer defect determination as an example, the core task is to distinguish valid defects from suspected defects, such as noise signals, which are caused by insubstantial minor differences or randomness in inspection.
Critical dimension measurement depends heavily on whether the image of the object is sharp; if only a rough overview image of the object is available, deviations in the critical dimension measurement are inevitable. The difficulty is how to accomplish fine imaging of the critical dimension. In the prior art this is often attempted by coarsely adjusting the illumination; usually the scanning electron microscope image becomes blurred, an accurate image cannot be obtained, and measurement cannot be performed. Alternatively, the scanning electron microscope pattern may look sharp when viewed but in fact fall short of optimal sharpness.
As with critical dimension measurement, the most demanding requirement in defect detection is image sharpness. The problem is that if the fineness of the image still leaves room for improvement, the subsequent attempts to improve the manufacturing process and optimize semiconductor process offsets that rely on it are undermined. The present application proposes the following embodiments to address these drawbacks.
It should be noted that the foregoing description of the technical background is provided only to facilitate a clear and complete explanation of the technical solution of the present application and to aid the understanding of those skilled in the art; the solution is not limited to these specific application scenarios.
Disclosure of Invention
The application provides a wafer definition positioning method, wherein: a camera equipped with a microscope is moved N times along the direction perpendicular to the wafer, and at the K-th movement the current parking position P_K of the camera is recorded and the image I_K photographed by the camera through the microscope is saved, K = 1, 2, 3, …, N;
the image of the wafer photographed by the camera through the microscope at a reference position has a reference definition (sharpness);
position variables X_1, X_2, …, X_N of the parking positions P_1, P_2, …, P_N relative to the reference position are calculated;
sharpness variables Y_1, Y_2, …, Y_N of the sharpness of the images I_1, I_2, …, I_N relative to the reference sharpness are calculated;
the array of position variables X_1, X_2, …, X_N is treated as the discrete independent variable of a quadratic function, and the array of sharpness variables Y_1, Y_2, …, Y_N as the dependent variable of the quadratic function, so as to fit the quadratic function;
the position required for the camera to photograph the wafer is thereby located as the abscissa of the vertex of the quadratic function plus the reference position.
The method, wherein: the energy gradient function is used as the evaluation function F to evaluate the reference sharpness of the image at the reference position and the sharpness of each image I_1, I_2, …, I_N at the corresponding parking position:

F = Σ_xp Σ_yp { [f(xp+1, yp) - f(xp, yp)]² + [f(xp, yp+1) - f(xp, yp)]² }

wherein f(xp, yp) is the gray value of the pixel (xp, yp), f(xp+1, yp) is the gray value of the pixel (xp+1, yp), and f(xp, yp+1) is the gray value of the pixel (xp, yp+1).
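As a concrete illustration, the energy gradient evaluation can be sketched as follows; this is a minimal sketch, assuming the image has already been converted to a rectangular 8-bit grayscale array gray[row][column] (the class and array names are illustrative, not taken from the patent):

    // Energy-gradient sharpness: sum of squared horizontal and vertical
    // gray-level differences; a larger F indicates a sharper image.
    public final class EnergyGradientSharpness {
        public static double evaluate(int[][] gray) {
            double f = 0.0;
            for (int yp = 0; yp < gray.length - 1; yp++) {
                for (int xp = 0; xp < gray[yp].length - 1; xp++) {
                    int dx = gray[yp][xp + 1] - gray[yp][xp]; // f(xp+1, yp) - f(xp, yp)
                    int dy = gray[yp + 1][xp] - gray[yp][xp]; // f(xp, yp+1) - f(xp, yp)
                    f += (double) dx * dx + (double) dy * dy;
                }
            }
            return f;
        }
    }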
The method, wherein: the Laplacian function is used as the evaluation function F to evaluate the reference sharpness of the image at the reference position and the sharpness of each image I_1, I_2, …, I_N at the corresponding parking position:

F = Σ_xp Σ_yp |G(xp, yp)|

wherein f(xp, yp) is the gray value of the pixel (xp, yp), and the gradient matrix G(xp, yp) is obtained by convolving the gray values of the pixels with the Laplace operator L.
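A minimal sketch of this variant follows, assuming the common 4-neighborhood Laplace kernel L = [[0,1,0],[1,-4,1],[0,1,0]]; the exact kernel is an assumption, since the text only names "the Laplace operator L":

    // Laplacian sharpness: accumulate |G(xp, yp)|, where G is the convolution
    // of the gray values with the (assumed 4-neighborhood) Laplace operator.
    public final class LaplacianSharpness {
        public static double evaluate(int[][] gray) {
            double f = 0.0;
            for (int yp = 1; yp < gray.length - 1; yp++) {
                for (int xp = 1; xp < gray[yp].length - 1; xp++) {
                    int g = gray[yp][xp - 1] + gray[yp][xp + 1]
                          + gray[yp - 1][xp] + gray[yp + 1][xp]
                          - 4 * gray[yp][xp];
                    f += Math.abs(g);
                }
            }
            return f;
        }
    }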
The method, wherein: at the K-th movement, the induced position variable X_K and sharpness variable Y_K are regarded as a coordinate point (X_K, Y_K) satisfying the quadratic function y = ax² + bx + c.
The method, wherein: if the coefficient of the quadratic term of y = ax² + bx + c is not less than zero, i.e. a ≥ 0, the position value of the reference position is changed again until the quadratic function fitted on the basis of the new reference position satisfies a < 0.
The method, wherein: if the vertex abscissa of y = ax² + bx + c is not greater than zero, i.e. -b/(2*a) ≤ 0, the position value of the reference position is changed again until the quadratic function fitted on the basis of the new reference position satisfies -b/(2*a) > 0.
The method, wherein: if the vertex abscissa -b/(2*a) of y = ax² + bx + c exceeds the maximum distance the focus of the camera is allowed to move, the position value of the reference position is changed again until the quadratic function fitted on the basis of the new reference position satisfies that the vertex abscissa -b/(2*a) is smaller than the maximum distance allowed to move.
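Taken together, the three checks above can be sketched as a single validity test; a minimal sketch with illustrative names (maxTravel stands for the maximum distance the focus is allowed to move):

    final class FitCheck {
        // Returns true when the fitted parabola y = ax^2 + bx + c yields a usable focus.
        static boolean fitIsUsable(double a, double b, double maxTravel) {
            if (a >= 0) return false;        // the parabola must open downward
            double vertexX = -b / (2 * a);   // candidate focus offset from the reference
            return vertexX > 0 && vertexX < maxTravel;
        }
    }

If the test fails, the method re-selects the reference position and fits again, as described above.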
The method, wherein: the method can perform sharpness positioning separately and independently for different regions of uneven height on a wafer, or on a chip of the wafer; when sharpness positioning switches from a previous region to a next region, the reference position is updated from the position value corresponding to the previous region to a new position value corresponding to the next region.
The method, wherein: the means of obtaining the quadratic function includes at least computation based on the principle of quadratic polynomial fitting, or a neural network trained with the position variables and sharpness variables so as to approximate the quadratic function.
The method, wherein: when sharpness evaluation is performed, the image gradient operation of the evaluation function is carried out either on the sharp image at the front focus or on a defocused image away from the front focus.
Drawings
So that the above recited objects, features and advantages of the present application can be understood in detail, a more particular description of the application, briefly summarized below, is given with reference to the appended drawings.
Fig. 1 shows a camera equipped with an electron microscope and a stage carrying a wafer, which can be moved up and down.
Fig. 2 shows image data and a fitted second-order curve used to determine whether focusing succeeded.
Fig. 3 is an exemplary embodiment in which the camera moves up and down for multiple repeated autofocus attempts.
Fig. 4 is an example of the pixels involved in the gradient operation of the energy gradient or Laplacian function.
Fig. 5 illustrates treating the position variable as the independent variable and the sharpness variable as the dependent variable.
Fig. 6 shows the case where the position-variable and sharpness-variable parameters are both out of specification.
Fig. 7 illustrates a quadratic function simulated by a neural network, as an alternative to quadratic polynomial fitting.
Detailed Description
The solution of the application will now be described clearly and completely in connection with the following embodiments. The described embodiments are only some, not all, of the possible embodiments of the application; all other embodiments obtained from them by a person skilled in the art without inventive effort fall within the scope of protection of the application.
Referring to fig. 1, the background knowledge required for the present application is first described. In the field of semiconductor fabrication, a wafer generally refers to a silicon wafer used to fabricate integrated circuits. The metrology stage or motion stage 11 of the critical dimension metrology apparatus is configured to carry a wafer 10. The microscope and the camera CA cooperate, or are assembled together, to capture images of fine wafer detail. The microscope has high-power and low-power lenses, and the lens magnification can be switched manually or automatically within a series of lenses LN, for example from a high-power lens to a medium-power or low-power lens, or in the opposite direction from a low-power lens to a medium-power or high-power lens. Such switching relationships of the lenses include on-axis switching.
Referring to fig. 1, regarding the platform (CHUCK): it is a special tool for holding by suction and carrying wafers in the various semiconductor silicon wafer production processes, and the motion platform 11 is mainly used to carry wafers. Some documents also refer to such carriers as a susceptor or lifting mechanism, a wafer carrier tray or platform, a load-carrying platform, and the like. The motion platform is a carrying mechanism within the semiconductor equipment, and the carrying stage referred to herein includes a platform (CHUCK) structure. The motion stage can move along the X and Y axes of its coordinate system as required, and in some cases can also rotate the wafer or move the wafer up and down along the Z axis.
Referring to fig. 1, the platform motion control module: it consists of an X axis, a Y axis, a θ axis and the CHUCK. Before the measuring equipment measures the critical dimension of the wafer, the platform motion control module must move the CHUCK, thereby realizing motion control of the wafer. The θ axis can be rotated; for example, rotating the θ axis rotates the CHUCK, which is equivalent to adjusting the value of the angle θ by controlling the rotation of the motion platform.
Referring to fig. 1, a critical dimension measuring apparatus of the semiconductor industry includes at least a motion stage 11 and a camera CA equipped with a microscope. The critical dimension measuring apparatus can be a modification of an existing apparatus or an entirely new design. In view of the critical dimension measuring apparatus already existing in the semiconductor industry, the present application does not describe them separately; it should be noted that all or part of the technical features of prior art critical dimension measuring apparatus may be applied to the measuring apparatus of the present application, and when the application refers to critical dimension measuring equipment it is taken to include all or part of those prior art features. The camera equipped with a microscope includes an electron microscope.
Referring to fig. 1, the focusing Z-axis motion module of the camera CA: it consists of a Z axis that can move up and down. When a wafer is placed on a measuring platform such as the platform 11, the wafer must lie in the focal plane of the camera CA for the camera's field of view to be sharp and of high resolution; the Z-axis motion module can then move up and down together with the camera and lens so as to find the focal plane in which the camera's view is sharpest, i.e. the focal plane in which the critical dimension structures on the wafer are located.
Referring to fig. 1, regarding adjustment of the distance to the focal plane: the camera is driven up and down by the movement of a Z-axis stepper motor so as to adjust the distance to the focal plane. How the motor moves with the camera is prior art, as currently existing critical dimension measuring equipment basically adopts such a structure, and the description is not repeated here; likewise the motor and its camera equipped with a microscope, which are known in the art, are not described again.
Referring to fig. 1, the term Critical Dimension (CD) is explained before the application is described. In photomask manufacture, photolithography and similar processes of semiconductor integrated circuits, the industry specially designs a line pattern that reflects the characteristic line width of the integrated circuit, called the critical dimension, for evaluating and controlling the pattern processing accuracy of the process. The industry term critical dimension may be replaced by the terms critical dimension structure or critical dimension mark.
Referring to fig. 1, the technical problems to be solved by the present application can be summarized as follows: because images are taken at different positions during inspection, the microstructure of the wafer is not necessarily in the focal plane, which easily causes large errors in the measured values. In the traditional scheme, manual repeated focusing and continual trimming of the working distance of the microscope lead to low efficiency and very poor accuracy. The autofocus technique disclosed by the application completes fast, accurate and smooth focusing and reflects the focusing condition of the region of interest in real time.
Referring to fig. 1, the approach of the present application to these problems is as follows: given the complex flow and slow measurement speed of prior art microstructure inspection methods (such as repeated focusing and repeated trimming of the measuring distance), the focusing flow involved in the microstructure measurement step needs to be simplified, the measurement efficiency per unit time improved, the time the wafer spends in the inspection stage of the whole production line reduced, and the accuracy of microstructure inspection improved.
Referring to fig. 1, regarding the autofocus implementation: it can be divided into an image acquisition module and a focusing module. The Z-axis motion module adjusts the acquired image; image-algorithm processing then judges whether the current position is on the focal plane, so as to drive the Z axis (usually the up-down motion axis) to move and adjust.
Referring to fig. 1, regarding adjustment of the distance to the focal plane: the camera is driven up and down by the movement of the Z-axis stepper motor so as to adjust the distance to the focal plane. When a wafer is placed on the measuring platform, the wafer must lie in the focal plane of the camera to ensure that the camera's field of view is sharp and of high resolution, and the Z-axis motion module can move up and down with the camera and lens so as to find the focal plane in which the camera's view is sharpest.
Referring to fig. 1, regarding the Z-axis motion module: it first involves a stepper motor, which, for example, can accurately control running speed and running position without feedback and can replace the function of a servo motor where running speed and power requirements are low. In terms of step size, the stepper motor is immune to various interference factors, such as the magnitude of the voltage or current, the voltage and current waveforms, temperature changes, and the like.
Referring to fig. 1, regarding the travel of the Z-axis motion module: the Z axis with the minimum stroke is, for example, implemented by a stepper motor, and its minimum stroke is the linear displacement of one pulse, which can be calculated as follows.
First, the step angle of the stepper motor, typically marked on the motor, is determined in advance. For example, a step angle of 1.8 degrees means 360/1.8 = 200, that is, 200 pulses are required for one revolution of the motor.
Second, it is determined whether the motor driver uses subdivision, and the subdivision setting is checked; the DIP switches on the driver can be inspected to confirm it. For example, if the driver is set to 4 subdivisions, then, continuing the calculation with the aforementioned 200 pulses, 200 × 4 = 800, which means 800 pulses are required for one revolution of the motor.
Furthermore, the length of travel per revolution of the motor shaft, i.e. the lead, is determined: for a screw rod, the lead equals the pitch of the thread; for a rack-and-pinion transmission, the lead is derived from the pitch circle (module m × number of teeth z).
Finally, the lead divided by the number of pulses (lead/pulses) equals the linear displacement of one pulse. The commanded movement distance of the stepper motor should generally be greater than or equal to this minimum stroke, otherwise the stepper motor will not respond.
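As a worked sketch of this arithmetic: the 1.8-degree step angle and 4× subdivision are the figures used above, while the 0.0624 mm lead is an assumed value, chosen so that the result matches the 0.000078 mm minimum stroke quoted below.

    public class MinStrokeExample {
        public static void main(String[] args) {
            double stepAngleDeg = 1.8;                 // step angle marked on the motor
            int pulsesPerRev = (int) Math.round(360.0 / stepAngleDeg); // 200 pulses/rev
            int subdivision = 4;                       // driver subdivision setting
            int pulses = pulsesPerRev * subdivision;   // 200 * 4 = 800 pulses/revolution
            double leadMm = 0.0624;                    // assumed lead of the screw (mm/rev)
            double minStrokeMm = leadMm / pulses;      // 0.0624 / 800 = 0.000078 mm
            System.out.println("minimum stroke = " + minStrokeMm + " mm");
        }
    }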
Referring to fig. 1, in an alternative example, assume the minimum stroke of the Z-axis motion module is 0.000078 mm, i.e. assume the camera satisfies the condition that the minimum stroke is 0.000078 mm. Such strokes vary between different stages.
Referring to fig. 1, a single stride of movement is defined in an alternative example (e.g. onceStep = 0.000078 mm).
Referring to fig. 1, an autofocus travel is defined in an alternative example. The parameter of the autofocus travel is determined by the flatness of the product to be measured, such as a wafer, and is essentially the maximum stroke taken when the Z axis moves up and down to find the focal plane. For example, autoFocusTravel may be assumed to be 0.04 mm.
Referring to fig. 1, a maximum number of autofocus attempts AutoFocusTryCnt is defined in an alternative example.
Referring to fig. 1, the number of autofocus attempts is defined as cnt in an alternative example; cnt is counted continuously across cycles.
Referring to fig. 1, in an alternative example, the current position on the Z axis is defined as Zc.
Referring to fig. 1, in an alternative example, a maximum number of autofocus adjustments max_frame_count is defined, where the maximum number of autofocus adjustments is the maximum number of Z-axis adjustments.
Referring to fig. 1, in an alternative example, the Z-axis adjustment count is defined as m_focus_cnt.
Referring to fig. 1, a method MoveZDirect(onceStep) is defined in an alternative example. For example, the process of moving one step down along the vertical axis (Z axis) is MoveZDirect(onceStep); conversely, the process of moving one step up along the vertical axis is MoveZDirect(-onceStep). Positive and negative values inside the brackets of this method function represent downward and upward movement, respectively.
Referring to FIG. 1, the first class of array (m_focus_X[]) in the alternative example is the statistics of the Z-axis position changes.
Referring to fig. 1, the second class of array (m_focus_Y[]) in the alternative example is the statistics of the image sharpness changes.
Referring to fig. 1, a Z-axis position m_focus_z of a focus start point is defined in an alternative example.
Referring to fig. 1, an image sharpness m_focus_def of a focus start point is defined in an alternative example.
Referring to fig. 1, focusing according to the present application includes the following calculation process.
Referring to fig. 1, a temporary variable up_load for the calculation is defined, initially double up_load = travel. double is a type in a computer language, namely the double-precision floating point type. The present application may run on a computer, server or similar processing unit; alternatives for the processing unit include a field programmable gate array, a complex programmable logic device, a field programmable analog gate array, a semi-custom ASIC, a processor or microprocessor, a digital signal processor or integrated circuit, a software or firmware program stored in memory, and the like. The notation double in front of a value indicates that the type of the value is double-precision floating point; below, int is the identifier used to define integer-type variables.
Referring to fig. 1, before metrology is performed on the critical dimensions, multiple autofocus attempts are performed; the repeated attempts can be expressed in a computer language as for (cnt = 0; cnt < AutoFocusTryCnt; cnt++). The count cnt increases from an initial value of zero until the maximum number of autofocus attempts AutoFocusTryCnt is reached; that is, cnt stops increasing once the condition cnt < AutoFocusTryCnt is no longer satisfied. The self-increment of cnt is written in the computer language as cnt++.
Referring to fig. 1, in each execution of an autofocus attempt: the camera or the like (e.g. together with the microscope) is repeatedly repositioned in the vertical, i.e. Z-axis, direction a number of times, and this repeated repositioning can be expressed in a computer language as for (m_focus_cnt = 0; m_focus_cnt < max_frame_count; ++m_focus_cnt). The number of camera adjustments on the vertical axis is m_focus_cnt. The for statement here is a loop statement.
Referring to fig. 1, the Z-axis adjustment count m_focus_cnt increases from an initial value of zero until the maximum number of Z-axis adjustments max_frame_count is reached; that is, if the condition m_focus_cnt < max_frame_count is no longer satisfied during the repeated adjustment in the Z-axis direction, m_focus_cnt stops increasing. The self-increment of m_focus_cnt is written in the computer language as ++m_focus_cnt.
Referring to fig. 1, in each execution of an autofocus attempt: before iteratively adjusting the camera position, the microscope lens is preferably brought toward the wafer by a predetermined ratio (e.g. three quarters) of a specified travel value (e.g. travel). In an alternative embodiment, before iteratively adjusting the camera position, the microscope lens approaches the sample, i.e. the wafer, by first taking the lens 3/4 of the stroke over the sample, expressed in a computer language as MoveZDirect(-up_load * 3/4).
Referring to fig. 1, image sharpness is expressed in terms of F, for example double def = F. Sharpness evaluation based on image gradients has been described above in general terms: a sharp image at the front focus has harder edges and is clearer than a blurred defocused image, and the gray values of its edge pixels change greatly. def is the real-time image sharpness. The judgment of image sharpness by mathematical expression is as set out in the evaluation functions above.
Referring to fig. 1, the current Z-axis position is denoted Zc. The real-time Z-axis coordinate is expressed in terms of z_pos.
Referring to fig. 1, the real-time Z-axis coordinate z_pos is acquired, for example double z_pos = Zc. Note that in the first adjustment of the iterative camera positioning, the case expressed in the computer language as m_focus_cnt == 0, i.e. the focus start point, the values of m_focus_z and m_focus_def are assigned. The first adjustment, or focus start point, can be expressed in the computer language as if (m_focus_cnt == 0) { m_focus_z = z_pos; m_focus_def = def; }. m_focus_z denotes the Z-axis position of the focus start point and m_focus_def the image sharpness at the focus start point.
Referring to fig. 1, the x-coordinate of the quadratic function or second-order curve is the change in Z-axis position. The position data of the camera movement can be extracted in the computer language as m_focus_X[m_focus_cnt] = z_pos - m_focus_z, where the array of X coordinates of the quadratic function or second-order curve comprises m_focus_X[m_focus_cnt].
Referring to fig. 1, the y-coordinate of the quadratic function or second-order curve is the change in image sharpness. Likewise, the captured image sharpness data can be expressed in the computer language as m_focus_Y[m_focus_cnt] = def - m_focus_def, where the array of Y coordinates of the quadratic function or second-order curve comprises m_focus_Y[m_focus_cnt].
Referring to fig. 1, when the moved distance exceeds the specified travel (e.g. the predetermined stroke), the adjustment ends; that is, if the distance or path moved by the camera on the Z axis exceeds the specified travel, the current Z-axis position adjustment ends. The out-of-travel check can be expressed in the computer language as if (Math.abs(z_pos - m_focus_z) > Math.abs(travel)) break. Math.abs expresses the absolute value of a number, such as of z_pos - m_focus_z or of travel. break indicates jumping out of the current for loop, e.g. the for loop in which m_focus_cnt is incremented; after jumping out of the loop, m_focus_cnt no longer increases until the next round of autofocus attempts is entered. When this situation is encountered, m_focus_cnt has likely not yet reached max_frame_count.
Referring to FIG. 1, after the Z axis has been adjusted multiple times, the camera movement position data and the corresponding image sharpness data have been captured, the position data of the multiple position adjustments comprising m_focus_X[m_focus_cnt] and the image sharpness data comprising m_focus_Y[m_focus_cnt]. The Z-axis adjustment is denoted by MoveZDirect.
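The data-collection loop described above can be gathered into one sketch. This is a minimal sketch in which MoveZDirect, Zc() and grabSharpness() are stubs standing in for the motion axis, the Z read-back and the evaluation function F; the class name, the stub bodies and the value of MAX_FRAME_COUNT are illustrative assumptions, not the patent's implementation.

    class AutoFocusSketch {
        static final int MAX_FRAME_COUNT = 32;       // assumed cap on Z-axis adjustments
        double onceStep = 0.000078;                  // single stride (mm), from the text
        double travel = 0.04;                        // specified travel value (mm)
        double[] m_focus_X = new double[MAX_FRAME_COUNT];
        double[] m_focus_Y = new double[MAX_FRAME_COUNT];
        double m_focus_z, m_focus_def;               // focus start point: position, sharpness

        void MoveZDirect(double step) { /* drive the Z stepper; hardware stub */ }
        double Zc() { return 0; }                    // current Z position; hardware stub
        double grabSharpness() { return 0; }         // capture an image, evaluate F; stub

        // Inner Z scan: collect (position change, sharpness change) samples.
        int scanZ() {
            int m_focus_cnt;
            for (m_focus_cnt = 0; m_focus_cnt < MAX_FRAME_COUNT; ++m_focus_cnt) {
                MoveZDirect(onceStep);               // one step along the Z axis
                double z_pos = Zc();                 // real-time Z coordinate
                double def = grabSharpness();        // real-time image sharpness
                if (m_focus_cnt == 0) {              // focus start point
                    m_focus_z = z_pos;
                    m_focus_def = def;
                }
                m_focus_X[m_focus_cnt] = z_pos - m_focus_z; // position variable
                m_focus_Y[m_focus_cnt] = def - m_focus_def; // sharpness variable
                if (Math.abs(z_pos - m_focus_z) > Math.abs(travel)) break;
            }
            return Math.min(m_focus_cnt + 1, MAX_FRAME_COUNT); // samples collected
        }
    }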
Referring to fig. 5, the Z-axis position changes (X_N) are collected as the statistics of the first class of array (m_focus_X[]).
Referring to fig. 5, the captured image sharpness changes (Y_N) are collected as the statistics of the second class of array (m_focus_Y[]).
Referring to fig. 5, the focused data m_focus_X[] and m_focus_Y[] are fitted to a second-order curve.
Referring to fig. 5, the second-order curve is represented by the formula y = ax² + bx + c.
Referring to fig. 5, the vertex coordinate -b/(2*a), i.e. the abscissa value of the vertex of the second-order curve, is calculated.
Referring to fig. 5, the position m_focus_best where the image is sharpest is calculated: the vertex abscissa plus the Z-axis position of the focus start point gives the sharpest image position, m_focus_best = -b/(2*a) + m_focus_z. The sharpest image position is thus related both to the vertex of the second-order curve and to the Z-axis position m_focus_z of the focus start point.
Referring to fig. 5, focusing is considered successful when the second-order curve y = ax² + bx + c satisfies all of the following: the quadratic coefficient is less than zero, i.e. a < 0; the vertex abscissa is greater than zero, i.e. (-b/(2*a)) > 0; and the vertex abscissa is smaller than the defined focus-travel maximum, i.e. (-b/(2*a)) < autoFocusTravel. In the computer language: if (a < 0 && (-b/(2*a)) > 0 && (-b/(2*a)) < autoFocusTravel) break. Here break indicates that focusing succeeded and there is no need to find the focus by moving the stage. Note that the predetermined conditions, including the three above, must be satisfied simultaneously to indicate successful focusing; if any one of them fails, focusing is unsuccessful.
Referring to fig. 1, within each attempt (the number of attempts is denoted cnt): the predetermined conditions include that the quadratic coefficient of the second-order curve is less than zero (a < 0), that the vertex abscissa is greater than zero ((-b/(2*a)) > 0), and that the vertex abscissa is smaller than the defined focus-travel maximum ((-b/(2*a)) < autoFocusTravel). If any one of the predetermined conditions fails, the camera is moved a distance on the vertical axis and the focus is searched for again. In other words, failure of the above conditions means the focus does not lie within the current stroke, and the stage must be moved to search for the focus again.
Referring to fig. 1, dir is defined in an alternative example as the sum of the differences between adjacent image sharpness values; in the initial state, for example, double dir = 0. The Z-axis adjustment count is m_focus_cnt. During the stage in which the camera position is repeatedly adjusted, the difference of any two adjacent sharpness values obtained from successive position adjustments is taken. Two adjacent sharpness values are represented by m_focus_Y[m] and m_focus_Y[m+1], and their difference is m_focus_Y[m+1] - m_focus_Y[m]. m is defined as a numeric variable smaller than the Z-axis adjustment count m_focus_cnt; in effect, m indexes the position adjustments.
Referring to fig. 1, within each attempt (the number of attempts is denoted cnt): during the stage in which the camera position is repeatedly adjusted on the Z axis, the differences m_focus_Y[m+1] - m_focus_Y[m] of adjacent sharpness values are taken. As the camera position keeps being adjusted, a variable term is defined that changes as the number of position adjustments (m, bounded by m_focus_cnt) increases: the current value of the variable term equals its previous value plus the current difference. The evolution of the variable term dir can be expressed in the computer language as for (int m = 0; m < m_focus_cnt - 1; m++) { dir += m_focus_Y[m+1] - m_focus_Y[m]; }. The expression dir += m_focus_Y[m+1] - m_focus_Y[m] means: the current variable term dir equals its previous value plus the current difference result, i.e. m_focus_Y[m+1] - m_focus_Y[m]. In other words, dir is the sum of the differences between the sharpness of adjacent images, which is the same meaning.
Referring to fig. 1, it must then be determined whether the resulting variable term is less than zero. If it is, the camera or stage moves up a certain distance and focusing is attempted again; if not, the camera or stage moves down a certain distance and focusing is attempted again.
Referring to fig. 1, if the variable term is less than zero, the camera can be moved up and focusing attempted again, e.g. if (dir < 0) up_load = travel / (2 * (cnt + 1)). For example, the distance the camera moves up relative to the focus start position equals one half of the specified travel value (travel) divided by the current focus count (the current focus count is denoted cnt + 1; note that the first attempt is cnt = 0, so the current focus count is defined as cnt + 1 for ease of understanding).
Referring to fig. 1, if the variable term is not less than zero (the case where dir < 0 does not hold), the camera is moved down half a stroke relative to the focus start position and focusing attempted again; relative to if (dir < 0), this is the else branch: else up_load = travel / 2. For example, the distance the camera moves down relative to the focus start position equals one half of the specified travel value (travel).
Referring to fig. 1, at this point multiple autofocus attempts have been performed; the iteration can be expressed in the computer language as for (cnt = 0; cnt < AutoFocusTryCnt; cnt++). When no further focus attempt is performed, or after the cycle of attempts has ended, the camera is moved to the focus by the relative distance dis, i.e. the sharpest position minus the current Z-axis position; in an alternative example this is represented by the method MoveZDirect(dis), with double dis = m_focus_best - Zc. The autofocus is then considered finished.
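The outer retry loop assembled from the fragments above can be sketched as follows, continuing the AutoFocusSketch class from the earlier sketch. QuadraticFit.fit is assumed to be a least-squares helper like the one sketched at the end of this description, returning {a0, a1, a2} for p(x) = a0 + a1·x + a2·x², so a = a2 and b = a1; how up_load feeds the next approach move follows the patent's fragments and is otherwise an assumption.

    // (continuing AutoFocusSketch; assumes each scan yields at least 3 samples)
    int AutoFocusTryCnt = 3;                   // assumed upper limit on focus attempts
    double autoFocusTravel = 0.04;             // defined focus-travel maximum (mm)

    double autoFocus() {
        double up_load = travel;
        double m_focus_best = Double.NaN;
        for (int cnt = 0; cnt < AutoFocusTryCnt; cnt++) {
            MoveZDirect(-up_load * 3.0 / 4.0); // lens 3/4 of a stroke over the sample
            int n = scanZ();                   // inner Z scan (previous sketch)
            double[] p = QuadraticFit.fit(m_focus_X, m_focus_Y, n);
            double a = p[2], b = p[1];
            if (a < 0 && -b / (2 * a) > 0 && -b / (2 * a) < autoFocusTravel) {
                m_focus_best = -b / (2 * a) + m_focus_z; // sharpest-image position
                break;                                   // focusing succeeded
            }
            double dir = 0;                    // sum of adjacent sharpness differences
            for (int m = 0; m < n - 1; m++) dir += m_focus_Y[m + 1] - m_focus_Y[m];
            up_load = (dir < 0) ? travel / (2.0 * (cnt + 1)) // focus above: retry higher
                                : travel / 2.0;              // focus below: retry lower
        }
        if (!Double.isNaN(m_focus_best)) {
            MoveZDirect(m_focus_best - Zc());  // step SP7: move by dis to the focus
        }
        return m_focus_best;
    }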
Referring to fig. 2, the focusing method for critical dimension measurement includes steps SP1 to SP7. Step SP1 collects the image sharpness data and the Z-axis position data of the camera. The Z-axis position data includes m_focus_X[m_focus_cnt], a class of data in array form. The image sharpness data includes m_focus_Y[m_focus_cnt], also in array form, which can be extracted from the image information Image1 photographed by the camera.
Referring to FIG. 2, step SP2 mainly fits a second-order curve from the camera movement position data m_focus_X[m_focus_cnt] and the acquired image sharpness data m_focus_Y[m_focus_cnt], and calculates the vertex coordinate of the second-order curve, e.g. the vertex abscissa -b/(2*a) of y = ax² + bx + c. Strictly speaking the vertex also has an ordinate value, (4ac - b²)/(4a); however, the application is concerned with the abscissa of the vertex rather than its ordinate, and in plain terms the vertex coordinate is referred to directly as the abscissa value -b/(2*a), so whenever the application mentions the vertex coordinate it includes the meaning of the vertex abscissa.
Referring to fig. 2, step SP3 calculates the sharpest image position m_focus_best = -b/(2*a) + m_focus_z: the vertex abscissa plus the Z-axis position m_focus_z of the focus start point yields the position where the image is sharpest.
Referring to fig. 2, step SP4 judges whether focusing is successful: focusing is considered successful if the quadratic coefficient of the second-order curve is less than zero, the vertex abscissa is greater than zero, and the vertex abscissa is smaller than the defined focus-travel maximum. An affirmative determination can be indicated by the focusing-success flag of step SP5.
Referring to fig. 2, step SP4 judges whether focusing is successful: if any one of the conditions (quadratic coefficient of the second-order curve less than zero, vertex abscissa greater than zero, vertex abscissa smaller than the maximum focusing stroke) is not satisfied, the camera is moved on the vertical axis a certain distance and the focus searched for again. A negative determination is indicated by step SP6, in which case the stage or camera must be moved to find the focus.
Referring to fig. 2, a negative determination is represented by step SP6: in step SP6, during the stage of repeatedly adjusting the camera position, the differences of adjacent sharpness values obtained from successive adjustments are taken (for example, adjacent sharpness values m_focus_Y[m+1] and m_focus_Y[m] are differenced). A variable term dir is defined that changes as the number of position adjustments increases, calculated as follows: the current dir equals its previous value plus the current difference result (e.g. m_focus_Y[m+1] minus m_focus_Y[m]). Finally it is determined whether the resulting variable term is less than zero (i.e. whether if (dir < 0) holds).
Referring to fig. 2, if the determination if (dir < 0) holds, focusing is attempted after moving up a certain distance. The distance the camera moves up relative to the focus start position equals one half of the specified travel value (travel) divided by the current focus count, i.e. up_load = travel / (2 * (cnt + 1)). up_load is applied via MoveZDirect.
Referring to fig. 2, if not (the determination if (dir < 0) is negative), focusing is attempted after moving down a certain distance. The distance the camera moves down relative to the focus start position equals one half of the specified travel value (travel), i.e. up_load = travel / 2. up_load is applied via MoveZDirect.
Referring to fig. 2, step SP7 follows step SP5 or step SP6, but SP7 is not mandatory. If step SP7 is executed, the camera or stage is moved to the focus by the relative distance dis, i.e. the previously determined sharpest position m_focus_best minus the current Z-axis position Zc. MoveZDirect(dis) denotes the process of moving the camera or stage by the relative distance dis, with double dis = m_focus_best - Zc. Since the sharpest position is closely related to the vertex coordinate, moving by dis in this process can loosely be described as the vertex-derived position minus the current Z-axis position. This concludes the autofocus.
Referring to fig. 2, the process of acquiring the image sharpness data in step SP1 is implemented by moving the Z axis: moving the camera position causes m_focus_Y[m_focus_cnt] = def - m_focus_def to change, and the changes of m_focus_Y provide the material, or source data, for the ordinate of the fitted second-order curve, i.e. the basis of the ordinate of the quadratic function curve.
Referring to fig. 2, the process of collecting the position data of the camera in step SP1 is implemented by moving the Z axis: moving the camera position causes m_focus_X[m_focus_cnt] = z_pos - m_focus_z to change, and the changes of m_focus_X provide the material, or source data, for the abscissa of the fitted second-order curve, i.e. the basis of the abscissa of the quadratic function curve.
Referring to fig. 2, step SP1 requires the camera equipped with the microscope to repeatedly adjust its position in the vertical axis direction, records the start position (m_focus_z) of the focus start point and the initial image sharpness (m_focus_def) at the start point, and records the real-time position (z_pos) and real-time image sharpness (def) after each adjustment of the camera. At this point, the abscissa of the fitted second-order curve, i.e. the X coordinate, is the Z-axis position change m_focus_X[m_focus_cnt] = z_pos - m_focus_z, and the ordinate, i.e. the Y coordinate, is the image sharpness change m_focus_Y[m_focus_cnt] = def - m_focus_def.
Referring to fig. 2, the step SP1 position data include multiple sets of position differences between the real-time position and the start position. For example, the position data include the position difference m_focus_X[0] = z_pos0 - m_focus_z, where z_pos0 is the actual real-time position when m_focus_cnt = 0; m_focus_X[1] = z_pos1 - m_focus_z, where z_pos1 is the actual real-time position when m_focus_cnt = 1; m_focus_X[2] = z_pos2 - m_focus_z, where z_pos2 is the actual real-time position when m_focus_cnt = 2; and so on. As m_focus_cnt increases, a sufficient amount of abscissa information is provided.
Referring to fig. 2, the step SP1 image sharpness data include multiple sets of sharpness differences between the real-time image sharpness and the initial image sharpness. The sharpness difference m_focus_Y[0] = def0 - m_focus_def, where def0 is the real-time image sharpness when m_focus_cnt = 0; m_focus_Y[1] = def1 - m_focus_def, where def1 is the real-time image sharpness captured when m_focus_cnt = 1; m_focus_Y[2] = def2 - m_focus_def, where def2 is the real-time image sharpness captured when m_focus_cnt = 2; and so on. As m_focus_cnt increases, a sufficient amount of ordinate information is provided.
Referring to fig. 2, after each adjustment of the camera position in step SP1, note that the position difference and the sharpness difference obtained with the camera at the same position are regarded, respectively, as the abscissa and ordinate values of one point on the second-order function or curve. For example, the position difference m_focus_X[1] and the sharpness difference m_focus_Y[1] obtained with the camera at the same position after the adjustment at m_focus_cnt = 1 are regarded as the abscissa and ordinate of the same point on the second-order curve; likewise, the position difference m_focus_X[2] and the sharpness difference m_focus_Y[2] at m_focus_cnt = 2 are regarded as the abscissa and ordinate of the same point on the second-order curve. Note that def - m_focus_def is the sharpness difference, or sharpness change.
Referring to fig. 2, step SP1 ends the current position adjustment if the absolute value of any position difference exceeds the specified travel value (travel); that is, the current Z-axis position adjustment ends and m_focus_cnt stops counting.
Referring to fig. 2, step SP1 specifies a maximum number of adjustments max_frame_count on the Z axis, and the actual number m_focus_cnt of repeated position adjustments of the camera in the vertical axis direction must be smaller than this maximum. The maximum number of autofocus adjustments, i.e. of Z-axis adjustments, is defined as max_frame_count. This prevents the loop from adjusting the position endlessly without being able to exit, and keeps the measurement process from falling into ceaseless, never-ending adjustment.
Referring to fig. 3, this embodiment is a further optimization on the basis of fig. 2, requiring multiple autofocus attempts (counted by cnt) to be performed before measurements are made on the critical dimensions of the wafer; in an alternative example, a maximum number of autofocus attempts AutoFocusTryCnt is defined. As shown, the actual number cnt of repeated autofocus attempts must be less than the maximum number AutoFocusTryCnt. It can be observed that each autofocus attempt, or any single one, includes the flow of steps SP1 to SP5 of fig. 2, or the flow of steps SP1 to SP6. Step SP7 of fig. 2 may still be used after each autofocus attempt, or after any single attempt ends.
Referring to fig. 3, the focusing method for critical dimension measurement: multiple autofocus attempts are performed before measurements are made on the critical dimensions of the wafer (focusing continues to be attempted as long as cnt < AutoFocusTryCnt). In each autofocus attempt, or any single one (e.g. cnt = 0, 1, 2, 3, …), the camera must be repeatedly adjusted in the vertical direction multiple times (continuing as long as m_focus_cnt < max_frame_count) to capture the camera movement position data and the corresponding image sharpness data. The number of focus attempts is recorded with cnt, and each focus attempt performed requires cnt to self-increment; the number of position adjustments is recorded with m_focus_cnt, and each adjustment requires m_focus_cnt to self-increment. For each value cnt takes, the camera performs m_focus_cnt adjustments in the Z-axis direction.
Referring to fig. 3, the focusing method for critical dimension measurement: a second-order curve must also be fitted from the position data and the image sharpness data. Step SP2 fits the second-order curve y = ax² + bx + c from the position data m_focus_X[m_focus_cnt] and the acquired image sharpness data m_focus_Y[m_focus_cnt]. Since the second-order curve is then known, the sharpest position of the image is apparent; the aforementioned step SP3 can be omitted or retained in this embodiment, both being permissible.
Referring to fig. 3, the focusing method for critical dimension measurement: it is judged whether the second-order curve satisfies the predetermined conditions; if so, focusing is considered successful, and if not, the camera is moved a certain distance on the vertical axis and the focus is searched for, as in step SP4.
Referring to fig. 3, the predetermined conditions include at least: the quadratic coefficient of the second-order curve is less than zero, i.e. a < 0; the vertex abscissa is greater than zero, i.e. (-b/(2*a)) > 0; and (-b/(2*a)) < autoFocusTravel, i.e. the vertex abscissa is smaller than the focus-travel maximum. If the predetermined conditions are satisfied simultaneously, focusing is considered successful, as by step SP5; if any one of them fails, the camera is moved a distance on the vertical axis to find the focus, as by step SP6.
Referring to fig. 3, a negative determination is represented by step SP6: in step SP6, during the stage of repeatedly adjusting the camera position, the differences of adjacent sharpness values obtained from successive adjustments are taken (for example, adjacent sharpness values m_focus_Y[m+1] and m_focus_Y[m] are differenced). Step SP6 may need to take such sharpness differences in each autofocus attempt, or in any single one. A variable term that changes as the number of position adjustments increases is defined, and the variable term dir is calculated as follows: the current dir equals the previous value of the variable term plus the current difference result (e.g. m_focus_Y[m+1] minus m_focus_Y[m]). In an alternative example, the current difference result is taken as the next sharpness difference, e.g. m_focus_Y[m+1], minus the sharpness difference at the current position adjustment, e.g. m_focus_Y[m].
Referring to fig. 3, for example, assume m = 3: the current variable term dir3 equals the value dir2 at the previous position adjustment plus the current difference result (m_focus_Y[4] minus m_focus_Y[3]). Continuing backwards under this assumption, the then-current dir2 equals dir1 at the previous adjustment plus its current difference result (m_focus_Y[3] minus m_focus_Y[2]); likewise, dir1 equals the value dir0 at the previous adjustment plus its current difference result (m_focus_Y[2] minus m_focus_Y[1]); and so on. Finally it is determined whether the resulting variable term is less than zero (i.e. whether if (dir < 0) holds). In general, the variable term at the current adjustment count equals its value at the previous adjustment plus the difference result at the current adjustment.
Referring to fig. 3, if the determination if (dir < 0) holds, focusing is attempted after moving up a certain distance. The distance the camera moves up relative to the focus start position equals, for example, one half of the specified travel value (travel) divided by the current focus count, i.e. up_load = travel / (2 * (cnt + 1)). Since the focus count starts from zero by default, but speaking of a zeroth attempt does not match habit, cnt + 1 better matches habitual counting: the current first focus attempt (cnt + 1 = 1) corresponds to cnt = 0, when one attempt has indeed been made, and the current second focus attempt (cnt + 1 = 2) corresponds to cnt = 1, when a second attempt is indeed being made. More strictly, the distance moved up from the initial focus position equals one half of the specified travel value (travel) divided by the total count, the total count being the number of focus attempts actually made plus one (i.e. cnt + 1); the different phrasings give the same up-shift distance, up_load = travel / (2 * (cnt + 1)).
Referring to fig. 3, in each attempt (e.g. cnt = 0, 1, 2, 3, …): the camera repeatedly adjusts its position in the vertical axis direction multiple times, the start position of the focus start point and the initial image sharpness are recorded, and the real-time position and real-time image sharpness are recorded after each adjustment of the camera.
Referring to fig. 3, in each attempt (e.g. cnt = 0, 1, 2, 3, …): the camera repeatedly adjusts its position in the vertical axis direction multiple times, and the position data m_focus_X[m_focus_cnt = 0, 1, 2, 3, …] comprise multiple sets of position differences between the real-time position and the start position.
Referring to fig. 3, in each attempt (e.g. cnt = 0, 1, 2, 3, …): the camera repeatedly adjusts its position in the vertical axis direction multiple times, and the image sharpness data m_focus_Y[m_focus_cnt = 0, 1, 2, 3, …] comprise multiple sets of sharpness differences between the real-time image sharpness and the initial image sharpness.
Referring to fig. 3, after each camera position adjustment (e.g. m_focus_cnt = 0, 1, 2, 3, …), the position difference and sharpness difference obtained with the camera at the same position are regarded, respectively, as the abscissa and ordinate values of one point on the second-order curve.
Referring to fig. 3, the position difference m_focus_X[0] and the sharpness difference m_focus_Y[0] under the same position condition (e.g. m_focus_cnt = 0) are regarded, respectively, as the abscissa and ordinate values of the same point on the second-order curve.
Referring to fig. 3, the position difference m_focus_X[3] and the sharpness difference m_focus_Y[3] under the same position condition (e.g. m_focus_cnt = 3) are regarded, respectively, as the abscissa and ordinate values of the same point on the second-order curve.
Referring to fig. 3, after each position adjustment (e.g. m_focus_cnt = 0, 1, 2, 3, …), if the absolute value of any position difference z_pos - m_focus_z becomes greater than the specified travel value travel, the current position adjustment ends and the loop in which the camera repeatedly adjusts its position is exited. z_pos - m_focus_z is the position difference, or position variable.
Referring to fig. 3, in each attempt (e.g. cnt = 0, 1, 2, 3, …): a maximum number of adjustments max_frame_count on the vertical axis is specified, and the actual number m_focus_cnt of repeated camera position adjustments in the Z-axis direction must be smaller than the maximum number max_frame_count.
Referring to fig. 3, the most clear position m_focus_best of the image is the sum of the vertex coordinates of the second order curve plus the start position of the focus start point. m_focus_best= -b/(2*a) +m_focus_z, the vertex coordinates plus the focus start Z-axis position yields the most clear image position m_focus_best.
Referring to fig. 3, in each trial (e.g., cnt=0, 1, 2, 3 … …, etc.) link: and judging whether the second-order curve meets a preset condition. The predetermined conditions have been explained above and will not be described again.
Referring to fig. 3, in each trial (e.g., cnt=0, 1, 2, 3 … …, etc.) link: when the above condition, i.e. the predetermined condition, is not met, i.e. it is clear that the focus is not in the current stroke, it is necessary to continue moving the table for finding the focus. The focus of the mobile station finding has been explained above and will not be described in detail.
Referring to fig. 3, when the timing point of the upper limit of the attempt (for example cnt < AutoFocusTryCnt) is not reached: multiple autofocus attempts should not end. The focus attempt requires the execution of the flow of steps SP1 to SP5 or the flow of steps SP1 to SP6 when cnt=0, the execution of the flow of steps SP1 to SP5 or the flow of steps SP1 to SP6 when cnt=1, the execution of the flow of steps SP1 to SP5 or the flow of steps SP1 to SP6 when cnt=2. And so on. By cnt=autofocustrycnt the jump ends.
Referring to fig. 3, when the timing point reaches the upper limit of the attempt (e.g., cnt=autofocustrycnt): then multiple autofocus attempts should end. For example, cnt maximum is equal to AutoFocusTryCnt minus one. If step SP7 is performed, this means that the camera or table is required to be moved to the focus relative distance dis, i.e. the position m_focus_best where the previous image is most clear minus the current Z-axis position Zc. double dis=m_focus_best-Zc. The autofocus ends so far and the resolution and definition of the image of the critical dimension structures on the wafer is highest at this point. The focusing success substantially as described above has achieved the object of the present application as set forth in the background section. While still moving the camera to the relative focus distance is also a better embodiment to achieve autofocus.
Referring to fig. 3, in each trial (e.g., cnt = 0, 1, 2, 3, etc.): before iteratively adjusting the camera position, or before each adjustment of the camera position, in an alternative example the microscope lens is first brought toward the wafer by a predetermined ratio value (ratio value, for example 3/4) of the specified travel value (travel). Step SP0 shows this process of bringing the lens to a predetermined distance from the wafer. The lens approaches the sample, i.e. the wafer, until the distance above the sample is first about the specified travel value multiplied by the predetermined ratio; for example, MoveZDirect(-travel * 3/4) moves the lens through 3/4 of a stroke over the sample. Step SP0 represents this lens-to-wafer approach as MoveZDirect.
Referring to fig. 3, when the upper limit of attempts has not been reached (for example cnt < AutoFocusTryCnt): the multiple autofocus attempts should not end. When step SP0 is employed, each focusing attempt executes the flow of steps SP0 to SP5 or of steps SP0 to SP6, first for cnt = 0, then for cnt = 1, then for cnt = 2, and so on.
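A hedged sketch of this attempt loop, with tryAutoFocusOnce() a hypothetical helper standing in for one pass of steps SP0/SP1 through SP5 or SP6:

    // One attempt: steps SP0..SP5 or SP0..SP6; returns true when the fitted
    // second-order curve satisfies the predetermined condition (assumed helper).
    bool tryAutoFocusOnce();

    bool autoFocus(int AutoFocusTryCnt) {
        for (int cnt = 0; cnt < AutoFocusTryCnt; ++cnt) {
            if (tryAutoFocusOnce())
                return true;  // success: proceed to step SP7 and move to best focus
            // otherwise the stage keeps moving to search for focus, then retry
        }
        return false;         // attempts exhausted once cnt reaches AutoFocusTryCnt
    }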
Referring to fig. 1, the camera position data are acquired by moving the Z-axis: because the camera or stage changes position, m_focus_x[m_focus_cnt] = z_pos - m_focus_z changes accordingly, so the position data m_focus_x[m_focus_cnt = 0, 1, 2, 3 …], serving as the fitting source data for the abscissa of the second-order curve, form an array.
Referring to fig. 1, the image sharpness data are acquired by moving the Z-axis: because the camera or stage changes position, m_focus_y[m_focus_cnt] = def - m_focus_def changes accordingly, so the sharpness data m_focus_y[m_focus_cnt = 0, 1, 2, 3 …], serving as the fitting source data for the ordinate of the second-order curve, likewise form an array.
Referring to fig. 5, a second-order fitting example. Given a data sequence (x_i, y_i), i = 0, 1, 2, 3 …, m, the set of data is fitted using a quadratic polynomial. The following calculations and simplifications outline the process of second-order curve fitting.

p(x) = a_0 + a_1*x + a_2*x^2

Referring to fig. 5, second-order curve fitting: given two arrays x[n], y[n] of length n, both taken to be discrete, an equivalent expression of p(x), namely y = a*x^2 + b*x + c, can be calculated by algorithmic fitting. The process of calculating the relationship between the two arrays x[n], y[n] by means of a fitting function can thus be referred to as second-order curve fitting.

Based on p(x), the mean square error between the fitting function and the data sequence is formed:

Q(a_0, a_1, a_2) = Σ_{i=0..m} [p(x_i) - y_i]^2 = Σ_{i=0..m} (a_0 + a_1*x_i + a_2*x_i^2 - y_i)^2

From the extremum principle for functions of several variables, the minimum of Q(a_0, a_1, a_2) must satisfy ∂Q/∂a_k = 0 for k = 0, 1, 2. Writing these conditions out and simplifying yields:

Σ (a_0 + a_1*x_i + a_2*x_i^2 - y_i) = 0
Σ x_i*(a_0 + a_1*x_i + a_2*x_i^2 - y_i) = 0
Σ x_i^2*(a_0 + a_1*x_i + a_2*x_i^2 - y_i) = 0

where each sum runs over i = 0 … m. These are the normal equations of the quadratic least-squares fit.
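A minimal C++ sketch of solving these normal equations (assuming at least three distinct x_i; the name fitQuadratic is illustrative, not from the patent):

    #include <array>
    #include <cmath>
    #include <utility>
    #include <vector>

    // Build the 3x3 normal equations derived above and solve them by
    // Gauss-Jordan elimination with partial pivoting. Returns {a0, a1, a2}
    // of p(x) = a0 + a1*x + a2*x^2; a2 plays the role of a, a1 of b, a0 of c.
    std::array<double, 3> fitQuadratic(const std::vector<double>& x,
                                       const std::vector<double>& y) {
        double S[5] = {0, 0, 0, 0, 0};  // S[k] = sum over i of x_i^k
        double T[3] = {0, 0, 0};        // T[k] = sum over i of x_i^k * y_i
        for (std::size_t i = 0; i < x.size(); ++i) {
            double p = 1.0;
            for (int k = 0; k <= 4; ++k) {
                S[k] += p;
                if (k <= 2) T[k] += p * y[i];
                p *= x[i];
            }
        }
        double A[3][4] = {              // augmented normal-equation matrix
            {S[0], S[1], S[2], T[0]},
            {S[1], S[2], S[3], T[1]},
            {S[2], S[3], S[4], T[2]},
        };
        for (int col = 0; col < 3; ++col) {
            int piv = col;              // partial pivoting for stability
            for (int r = col + 1; r < 3; ++r)
                if (std::fabs(A[r][col]) > std::fabs(A[piv][col])) piv = r;
            std::swap(A[piv], A[col]);  // needs >= 3 distinct x_i to be solvable
            for (int r = 0; r < 3; ++r) {
                if (r == col) continue;
                double f = A[r][col] / A[col][col];
                for (int c = col; c < 4; ++c) A[r][c] -= f * A[col][c];
            }
        }
        return {A[0][3] / A[0][0], A[1][3] / A[1][1], A[2][3] / A[2][2]};
    }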
Referring to fig. 5, the array x[n] uses the position data m_focus_x[m_focus_cnt = 0, 1, 2, 3 …].
Referring to fig. 5, the array y[n] uses the sharpness data m_focus_y[m_focus_cnt = 0, 1, 2, 3 …].
Referring to fig. 5, treating the data sequence (m_focus_x, m_focus_y) as discrete and fitting the set of data with a quadratic polynomial, the relation y = a*x^2 + b*x + c can be calculated by an algorithmic fit. Step SP2 fits the second-order curve from the camera movement position data and the acquired image sharpness data and calculates its vertex coordinates, for example the vertex abscissa -b/(2*a) of the second-order curve y = a*x^2 + b*x + c.
Referring to fig. 5, y = a*x^2 + b*x + c and p(x) = a_0 + a_1*x + a_2*x^2 both belong, mathematically, to the quadratic function relation. The former comprises the quadratic coefficient a, the linear coefficient b, and the constant term c, with x the abscissa and y the ordinate; the latter expression contains the quadratic coefficient a_2, the linear coefficient a_1, and the constant term a_0.
Alternatively, the conditions ∂Q/∂a_k = 0 admit a related simplification in matrix form. Given the data sequence (x_i, y_i), i = 0, 1, 2, 3 …, m, and fitting the set of data with a quadratic polynomial, the coefficient vector (a_0, a_1, a_2) is related to the data in matrix form; the minimum of Q(a_0, a_1, a_2) reduces to:

[ m+1      Σx_i      Σx_i^2 ]   [ a_0 ]   [ Σy_i       ]
[ Σx_i     Σx_i^2    Σx_i^3 ] * [ a_1 ] = [ Σx_i*y_i   ]
[ Σx_i^2   Σx_i^3    Σx_i^4 ]   [ a_2 ]   [ Σx_i^2*y_i ]

It can be seen that this simplified form looks slightly different, but the end result is the same. Solving by this principle yields the coefficients a_0, a_1, a_2 of the second-order function.
Referring to fig. 4, regarding the image pixel matrix: assuming the image width is W (width) and the height is H (height), then by the usual convention of computer vision and image processing the column indices of the image run from 0 to width-1 and the row indices from 0 to height-1. For a clearer understanding of how the pixels of an image are expressed, the figure gives an example of a pixel matrix with row indices 0-2 and column indices 0-9.
Referring to fig. 4, the image width is width = 10 and the image height is height = 3.
Referring to fig. 4, the maximum column index of the image is width-1 = 9 and the maximum row index is height-1 = 2.
Referring to fig. 4, row 0 contains pixels 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 in columns 0-9.
Referring to fig. 4, row 1 contains pixels 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 in columns 0-9.
Referring to fig. 4, row 2 contains pixels 20, 21, 22, 23, 24, 25, 26, 27, 28, 29 in columns 0-9.
Referring to fig. 4, the illustrated matrix is a pixel matrix with row indices 0-2 and column indices 0-9. Any pixel can be located in the pixel matrix by its image pixel coordinates, row yp and column xp. Note that the actual numbers of rows and columns of the pixel matrix are arbitrary, or defined by the image capturing apparatus, and are not limited to the specific values given as the example in the figure.
Referring to fig. 4, regarding the pixel address: for example, the address of pixel 15 (pixel coordinates yp = 1, xp = 5) is calculated as 1*10 + 5; in general, address = yp*width + xp. Pixel 15 (pix:15) is taken as the example in the figure, but the rule is general: the address of pixel 27 (pix:27) is calculated as 2*10 + 7. Note that in the pixel matrix example the address of the first pixel (yp = 0, xp = 0) is assumed to be zero, whereas in practice the first pixel does not necessarily sit at the zero address, for example when the pixel matrix is a cropped view of a whole image rather than the full image; in similar situations the generality of the calculation of each pixel address in the pixel matrix needs to be fully considered.
Referring to fig. 4, address acquisition of an image: byte * ptr is defined first, where ptr is the address of the 0th pixel and points to a byte; byte is a data type in the programming language. Knowing the arrangement rule of the pixel matrix, the address of any pixel can be calculated as ptr + yp*width + xp. In computer image processing, if *(address) denotes the pixel gray value stored at an address, then the gray value at address ptr + yp*width + xp is extracted as *((byte*)ptr + yp*width + xp). This example shows that, given the addressing rule and an already captured image, the gray value of the pixel at any address can be read from that address. The pattern or expression for extracting pixel gray values differs slightly between computer languages.
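A minimal sketch of this addressing rule (assuming a densely packed single-channel image with no row padding; grayAt is an illustrative name):

    #include <cstdint>

    using byte = std::uint8_t;  // the "byte" data type mentioned in the text

    // Given ptr, the address of pixel (yp = 0, xp = 0), and the image width,
    // read the gray value at row yp, column xp: address = ptr + yp*width + xp.
    byte grayAt(const byte* ptr, int width, int xp, int yp) {
        return *(ptr + yp * width + xp);
    }

For the example matrix above (width = 10), grayAt(ptr, 10, 5, 1) reads pixel 15 at address ptr + 15.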
Referring to fig. 4, regarding the gray scale image: the logarithmic relationship between white and black can be divided into several levels, commonly referred to in the industry as gray scale; the gray scale is divided into 256 levels (0 to 255). An image represented by gray levels is called a gray map. A gray image is an image in which each pixel has only a gray value, i.e. only one channel. According to the gray image addressing described above, the gray value of a gray image at any address ptr + yp*width + xp is known. The respective components of the three primary colors can likewise be extracted, because the gray component of each of the three channels can be calculated once the address is known.
Referring to fig. 4, regarding a color image: each pixel in the image is divided into R, G, B primary color components, and each primary color component directly determines the intensity of its primary color; color produced in this way is referred to as true color. A color image typically has three channels rather than just one. With R(xp, yp), G(xp, yp), and B(xp, yp) denoting the red, green, and blue gray levels at the corresponding address, a color image, or a mixed-color gray level, can be calculated.
Referring to fig. 4, a color image can be reduced to gray in different ways for different situations; with R, G, B the three primary color components of the color image: Gray(xp, yp) = 0.299*R(xp, yp) + 0.587*G(xp, yp) + 0.114*B(xp, yp). The coefficients of the three primary color component values can be adjusted adaptively, so the embodiments are diverse. The pixel gray value or gray image at each address can be extracted; so can the gray component values of the three primary colors at each address, and the color image or mixed-color gray of the primary colors at each address. The gray values, or regional gray values, of an image in the present application may comprise the gray value of any single primary color, or the mixed gray of the three primary color gray values: R(xp, yp) or G(xp, yp) or B(xp, yp) or Gray(xp, yp).
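A one-line sketch of this mixing formula (mixedGray is an illustrative name; the weights are those given above and may be adapted as noted):

    // Weighted mixing of the R, G, B components of one pixel into Gray(xp, yp).
    double mixedGray(double R, double G, double B) {
        return 0.299 * R + 0.587 * G + 0.114 * B;
    }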
Referring to fig. 4, the energy gradient function F accumulates all pixel gradient values as the sharpness evaluation function value; its expression is the following relational expression:

F = Σ_yp Σ_xp { [f(xp+1, yp) - f(xp, yp)]^2 + [f(xp, yp+1) - f(xp, yp)]^2 }

Similarly, the same holds for metrology images, and their pixels, for critical dimensions. Here f(xp, yp) denotes the gray value of the corresponding pixel point (xp, yp), and the larger the value of F, the clearer the image. The camera captures, for example, an Image1/Image0, which comprises the gray values of pixels such as (xp, yp). Step SP1 collects image sharpness data, and the energy gradient function provides the basis for how step SP1 collects image sharpness; the image sharpness data may be extracted from the image information Image1/Image0 photographed by the camera.
Referring to fig. 4, among sharpness evaluation methods based on image gradients, besides the energy gradient function a Laplace function may be used for the calculation: a gradient matrix is obtained by convolving the Laplace operator with the gray values of all pixel points of the image, and the sum of the squares of the gradients of all pixel points is taken as the evaluation function.
Note that f(xp, yp) again denotes the gray value of the corresponding pixel point (xp, yp), and the larger the value of F, the clearer the image. In addition, the Laplace-based evaluation function has the expression

F = Σ_yp Σ_xp |G(xp, yp)|^2

where the gradient matrix G(xp, yp) is the convolution of the pixel gray values with the Laplace operator L. One example of L in the Laplace-related function G(xp, yp) — though L is not limited to this example — is the common 4-neighbourhood kernel:

L = [ 0  1  0 ]
    [ 1 -4  1 ]
    [ 0  1  0 ]
Referring to fig. 4, the energy gradient function: the sum of the squares of the differences between the gray values of adjacent pixels in the xp direction and in the yp direction is used as the gradient value of each pixel point, and the gradient values of all pixels are accumulated as the sharpness evaluation function value.
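A minimal sketch of this energy gradient accumulation (energyGradient is an illustrative name; the image is assumed row-major with no padding):

    // For each pixel, square the gray differences to its right and lower
    // neighbours and accumulate them all; f points to a width*height gray image.
    double energyGradient(const unsigned char* f, int width, int height) {
        double F = 0.0;
        for (int yp = 0; yp + 1 < height; ++yp)
            for (int xp = 0; xp + 1 < width; ++xp) {
                double dx = double(f[yp * width + xp + 1]) - f[yp * width + xp];
                double dy = double(f[(yp + 1) * width + xp]) - f[yp * width + xp];
                F += dx * dx + dy * dy;  // larger F means a sharper image
            }
        return F;
    }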
Referring to fig. 4, step SP1 acquires the image sharpness data; besides the aforementioned accumulation of all pixel gradient values as the sharpness evaluation function value, a Laplace function may be used. When calculating the real-time image sharpness and the initial image sharpness, the energy gradient function or the Laplace function is used as the sharpness evaluation function.
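A minimal sketch of the Laplacian alternative, assuming the 4-neighbourhood kernel L = [0 1 0; 1 -4 1; 0 1 0] shown above (only one possible choice of L):

    // Convolve each interior pixel with the 4-neighbourhood Laplacian and
    // sum the squared responses, as described for the Laplace evaluation.
    double laplacianSharpness(const unsigned char* f, int width, int height) {
        double F = 0.0;
        for (int yp = 1; yp + 1 < height; ++yp)
            for (int xp = 1; xp + 1 < width; ++xp) {
                double G = f[(yp - 1) * width + xp] + f[(yp + 1) * width + xp]
                         + f[yp * width + xp - 1] + f[yp * width + xp + 1]
                         - 4.0 * f[yp * width + xp];  // G(xp, yp)
                F += G * G;  // sum of squared gradient responses
            }
        return F;
    }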
Referring to fig. 1, an Image1/Image0 photographed by the camera CA provides pixel coordinates. The stage referred to herein typically includes a microscope, a camera CA cooperating with or assembled with the microscope, and the like.
Referring to fig. 1, motorized microscope technology is mature, and motorized microscopes, like conventional manual microscopes, typically use three degrees of freedom to move the observed sample: horizontal movement along the X and Y axes and vertical up-and-down movement along the Z axis. Moving the lens along the Z axis directly determines the object distance of the microscope optical system and thus the focusing and imaging effect. Some or all of the technical features of the motorized microscope can be applied to the microscope and its camera in the figure.
Referring to fig. 1, the present application involves image gradient processing based on image pixels, so it is necessary to introduce sharpness evaluation based on image gradients: in general, a clear, correctly focused image has sharper edges than a blurred, out-of-focus image, and its edge pixel gray values change more strongly, so it has larger gradient values. During image processing the image is treated as a two-dimensional discrete matrix, and a gradient function can be used to acquire the gray information of the image and judge its sharpness.
Referring to fig. 1, as previously mentioned, the motors driving the camera and microscope and their work stations are controlled by a computer or server or associated processing unit. Other alternatives to the processing unit: a field programmable gate array, a complex programmable logic device or a field programmable analog gate array, or a semi-custom ASIC or processor or microprocessor, digital signal processor or integrated circuit, or software firmware program stored in memory, or the like. Steps SP1 to SP7 of fig. 2 can equally be implemented by a computer or a server or a processing unit, as can the implementation of steps SP0 to SP7 of fig. 3. The image includes an image of the wafer taken by the camera through a microscope.
Referring to fig. 1, the wafer definition positioning method: the camera CA equipped with a microscope is moved N times in the vertical direction of the wafer 10; at the K-th movement, the parking position P_K of the camera at that time is recorded and the image I_K photographed by the camera through the microscope is saved, K = 1, 2, 3, … N. The positive integers K and N satisfy 1 ≤ K ≤ N.
Referring to FIG. 1, at the reference position P_0 the image of the wafer 10 taken by the camera CA through the microscope has the reference sharpness F_0. The image shot at the reference position P_0 is denoted I_0 (belonging to the Image0 category). The reference position includes, for example, the start position of the focus start point, but is not limited to it. The advantage of a reference position over a fixed start position is that the required reference sharpness can be selected flexibly and the position corresponding to that reference sharpness can be set freely, which is very advantageous for selecting images with moderate edge pixel gray values and suitable gradient values.
Referring to fig. 1, at the parking position P_1 the image of the wafer 10 taken by the camera CA through the microscope has the image sharpness F_1. The image shot at the parking position P_1 is denoted I_1 (belonging to the Image1 category).
Referring to fig. 1, at the parking position P_2 the image of the wafer 10 taken by the camera CA through the microscope has the image sharpness F_2. The image shot at the parking position P_2 is denoted I_2 (belonging to the Image1 category).
Referring to fig. 1, at the parking position P_3 the image of the wafer 10 taken by the camera CA through the microscope has the image sharpness F_3. The image shot at the parking position P_3 is denoted I_3 (belonging to the Image1 category).
Referring to fig. 1, at the parking position P_N the image of the wafer 10 taken by the camera CA through the microscope has the image sharpness F_N. The image shot at the parking position P_N is denoted I_N (belonging to the Image1 category).
Referring to fig. 1, in general, at the parking position P_K the image of the wafer 10 taken by the camera CA through the microscope has the image sharpness F_K, and the image shot at the parking position P_K is denoted I_K (belonging to the Image1 category).
Referring to FIG. 1, the reference position P_0 (belonging to the position Sp0 category) is associated with the reference sharpness F_0. The category to which the reference position belongs is not necessarily a fixed position; it may be selected.
Referring to FIG. 1, the parking positions P_K (belonging to the position Sp1 category) are positions obtained by actively displacing the camera N times along the Z axis; after each movement (a dynamic displacement) the camera parks once and the image at the parked position is taken. Compared with the dynamic movement, each parking position P_K, like the reference position P_0, is a static position.
Referring to fig. 1, the position variable X_1 of the parking position P_1 relative to the reference position P_0 is calculated, i.e. the two are subtracted.
Referring to fig. 1, the position variable X_2 of the parking position P_2 relative to the reference position P_0 is calculated, i.e. the two are subtracted.
Referring to fig. 1, the position variable X_3 of the parking position P_3 relative to the reference position P_0 is calculated, i.e. the two are subtracted.
Referring to fig. 1, the position variable X_N of the parking position P_N relative to the reference position P_0 is calculated, i.e. the two are subtracted.
Referring to FIG. 1, the position variables X_1, X_2, … X_N include, but are not limited to, m_focus_X[m_focus_cnt].
Referring to FIG. 1, the sharpness variable Y_1 of the sharpness F_1 of image I_1 relative to the reference sharpness F_0 is calculated, i.e. the two are subtracted.
Referring to FIG. 1, the sharpness variable Y_2 of the sharpness F_2 of image I_2 relative to the reference sharpness F_0 is calculated, i.e. the two are subtracted.
Referring to FIG. 1, the sharpness variable Y_3 of the sharpness F_3 of image I_3 relative to the reference sharpness F_0 is calculated, i.e. the two are subtracted.
Referring to FIG. 1, the sharpness variable Y_N of the sharpness F_N of image I_N relative to the reference sharpness F_0 is calculated, i.e. the two are subtracted.
Referring to FIG. 1, the sharpness variables Y_1, Y_2, … Y_N include, but are not limited to, m_focus_Y[m_focus_cnt].
Referring to FIG. 5, the array with the position variables X_1, X_2, … X_N (e.g., array x[n]) is regarded as the discrete independent variable of a quadratic function, and the array with the sharpness variables Y_1, Y_2, … Y_N (e.g., array y[n]) is regarded as the discrete dependent variable of the quadratic function, so as to fit the quadratic function y = a*x^2 + b*x + c. The position required by the camera CA to shoot the wafer is located as the vertex abscissa (-b/(2*a)) of the quadratic function y = a*x^2 + b*x + c plus the reference position P_0.
Referring to FIG. 5, at the K-th movement, the resulting position variable X_K and sharpness variable Y_K are regarded as one coordinate point (X_K, Y_K) satisfying the quadratic function, i.e. y = a*x^2 + b*x + c.
Referring to fig. 5, if the quadratic coefficient of the quadratic function y = a*x^2 + b*x + c is not less than zero, i.e. a ≥ 0, the position value of the reference position P_0 is changed again until a quadratic function fitted based on the new reference position satisfies a < 0.
Referring to fig. 5, if the vertex abscissa of the quadratic function y = a*x^2 + b*x + c is not greater than zero, i.e. (-b/(2*a)) ≤ 0, the position value of the reference position is changed again until a quadratic function fitted based on the new reference position satisfies (-b/(2*a)) > 0.
Referring to fig. 5, if the vertex abscissa (-b/(2*a)) of the quadratic function y = a*x^2 + b*x + c exceeds the maximum distance the focal point of the camera is allowed to move, the position value of the reference position is changed until a quadratic function fitted based on the new reference position has a vertex abscissa (-b/(2*a)) smaller than the maximum allowed distance. The maximum distance the focal point of the camera is allowed to move includes, but is not limited to, the maximum autofocus travel.
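These three checks on the fitted curve can be summarized in a small sketch (curveIsValid and maxTravel are illustrative names):

    // Predetermined-condition checks on a fitted curve y = a*x^2 + b*x + c;
    // maxTravel stands in for the maximum autofocus travel.
    bool curveIsValid(double a, double b, double maxTravel) {
        if (a >= 0.0) return false;             // curve must open downward: a < 0
        double vertex = -b / (2.0 * a);         // vertex abscissa
        if (vertex <= 0.0) return false;        // vertex must be positive
        if (vertex >= maxTravel) return false;  // and lie within the allowed travel
        return true;  // otherwise the reference position is changed and refitted
    }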
Referring to fig. 5, the sharpness positioning method is used to position sharpness separately and independently for different areas of the wafer or chip that differ in height; when switching from a former area to a latter area for sharpness positioning, the reference position P_0 is likewise updated from the position value corresponding to the former area to a new position value corresponding to the latter area. The wafer has different surface roughness and morphology and various critical dimension structural features at different process stages, and the method suits image sharpness positioning for all of these critical dimension structure morphologies; applied to defect detection, it suits image sharpness positioning for various defect type structures. In particular, when detecting or measuring the critical dimensions of the wafer, the images at different positions are not necessarily all on the focal plane, which introduces larger errors into the measurement values or the defect analysis; this positioning method can always adaptively find the best, clearest position of the image of the wafer and of its critical dimensions.
Referring to fig. 5, when performing sharpness evaluation with the F function, the image gradient operation of the evaluation function is performed either with a sharp image in correct focus or with an out-of-focus image. In an alternative example, because the reference position P_0 is set flexibly, using sharp, correctly focused images (including I_0 and I_1, I_2, … I_N) for the image gradient operation of the evaluation function is the preferred choice; but since the reference position P_0 and F_0 are flexible, attempting the image gradient operation of the evaluation function with out-of-focus images (including I_0 and I_1, I_2, … I_N) is also permissible. Under that condition, the sharpness variables Y_1, Y_2, … Y_N obtained by computing the sharpness F_1, F_2, … F_N of the images I_1, I_2, … I_N relative to the aforementioned reference sharpness F_0 remain unproblematic, since the relation is relative.
Referring to FIG. 5, when the camera has moved N times, the difference of any two adjacent sharpness variables obtained by successively adjusting the camera position is taken (e.g., the operation Y_(K+1) minus Y_K). A variable item dir is defined that changes as the number of position adjustments increases: the variable item dir_K corresponding to the current adjustment count (e.g., K) equals its previous value dir_(K-1) (at adjustment count K-1) plus the current sharpness difference, the sharpness difference at the current adjustment count being Y_(K+1) minus Y_K. It is then judged whether the variable item dir_K is less than zero: if yes, the camera moves upward to change the reference position and sharpness positioning is performed again (e.g., moving N times again to fit a quadratic function); if not, the camera moves downward to change the reference position and sharpness positioning is performed again (e.g., moving N times again to fit a quadratic function). The sharpness variables may be calculated without mixing, for example, out-of-focus image sharpness at the parking positions with reference sharpness at correct focus. This prevents the sharpness variables from resting on an image gradient evaluation that compares out-of-focus images, whose edge pixel gray values change little, against a reference sharpness whose edge pixel gray values change greatly, and so avoids errors in the second-order curve. Note that this error is hidden and hard to perceive: an image whose edge pixel gray values change greatly is sharp and has larger gradient values than an image where the change is smaller.
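A sketch of this direction rule (chooseDirection and the +1/-1 convention are assumptions for illustration):

    #include <cstddef>
    #include <vector>

    // Accumulate adjacent sharpness differences: dir_K = dir_(K-1) + (Y[K+1] - Y[K]).
    // If the accumulated dir is negative the camera moves up before refitting;
    // otherwise it moves down.
    int chooseDirection(const std::vector<double>& Y) {
        double dir = 0.0;
        for (std::size_t K = 0; K + 1 < Y.size(); ++K)
            dir += Y[K + 1] - Y[K];    // current sharpness difference
        return (dir < 0.0) ? +1 : -1;  // +1: move up, -1: move down (assumed convention)
    }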
Referring to FIG. 6, the position variables X_1, X_2, … X_N of the parking positions P_1, P_2, … P_N relative to the reference position cannot always follow the curve of the quadratic function y = a*x^2 + b*x + c; false abscissas, i.e. position variables not on the curve, should be removed from the array x[n]. In an alternative embodiment, if the ratio of a sharpness variable Y_K to its corresponding position variable X_K, i.e. (Y_K / X_K), is not within a preset allowable threshold range, the position variable X_K is removed from the discrete independent variables used to fit the quadratic function, and at the same time the sharpness variable Y_K is removed from the dependent variables used to fit the quadratic function. The allowable threshold range is adjusted dynamically. FIG. 6 shows the over-warped, non-canonical quadratic function that results when an abnormal sharpness variable Y_K and an abnormal position variable X_K are not culled.
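A sketch of this culling step (cullOutliers, lo, and hi are illustrative names; the threshold range is assumed to be supplied by the caller):

    #include <cstddef>
    #include <vector>

    // Drop any sample whose ratio Y_K / X_K lies outside [lo, hi] before the
    // fit; X and Y are culled in lockstep so the point (X_K, Y_K) is removed.
    void cullOutliers(std::vector<double>& X, std::vector<double>& Y,
                      double lo, double hi) {
        std::size_t w = 0;
        for (std::size_t k = 0; k < X.size(); ++k) {
            if (X[k] == 0.0) continue;       // guard against division by zero
            double r = Y[k] / X[k];
            if (r < lo || r > hi) continue;  // outlier: remove both X_K and Y_K
            X[w] = X[k]; Y[w] = Y[k]; ++w;
        }
        X.resize(w); Y.resize(w);
    }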
Referring to FIG. 7, considering that the arrays x[n] and y[n] are both discrete, this example can use the position and sharpness variables to train a neural network Net that simulates the quadratic function y = a*x^2 + b*x + c: the network takes the position variables X_1, X_2, … X_N as input and is labeled with the output sharpness variables Y_1, Y_2, … Y_N, after which the neural network simulating the quadratic function can calculate the vertex abscissa of the quadratic function. The ways of deriving the quadratic function described above involve operations based on the principle of quadratic polynomial fitting (e.g., p(x) = a_0 + a_1*x + a_2*x^2); beyond that, any solution that fits a quadratic function suits the present application, such as a quadratic function generator (or parabola generator), software burned into a processor, a library function of Python such as numpy, which can typically generate a quadratic function expression from input coordinate data, or an online second-order function generator on a networked computer.
The foregoing description and drawings set forth exemplary embodiments of the specific structures of the embodiments, and the above disclosure presents the presently preferred embodiments, but they are not intended to be limiting. Various alterations and modifications will no doubt become apparent to those skilled in the art after having read the above description. Therefore, the appended claims should be construed to cover all such variations and modifications as fall within the true spirit and scope of the application, and any and all equivalent ranges and contents within the scope of the claims should be considered to fall within the intent and scope of the present application.

Claims (7)

1. A method for positioning wafer definition is characterized in that:
moving a camera equipped with a microscope N times in the vertical direction of the wafer, and at the K-th movement recording the current parking position P_K of the camera and saving the image I_K photographed by the camera through the microscope, K = 1, 2, 3, … N;
The camera has reference definition in an image of the wafer shot by a microscope at a reference position;
calculating position variables X_1, X_2, … X_N of each parking position P_1, P_2, … P_N relative to the reference position;
calculating definition variables Y_1, Y_2, … Y_N of the respective definition of the images I_1, I_2, … I_N relative to the reference definition;
regarding the array with the position variables X_1, X_2, … X_N as the discrete independent variable of a quadratic function, and the array with the definition variables Y_1, Y_2, … Y_N as the dependent variable of the quadratic function, so as to fit the quadratic function; the quadratic function is y = a*x^2 + b*x + c, where x represents the independent variable of position, y represents the dependent variable of definition, a is the quadratic coefficient, b is the linear coefficient, and c is the constant term;
positioning the position required by the camera to shoot the wafer as the vertex abscissa of the quadratic function plus the reference position;
wherein if the quadratic coefficient of the quadratic function y = a*x^2 + b*x + c is not less than zero, i.e. a ≥ 0, the position value of the reference position is changed again until a quadratic function fitted based on the new reference position satisfies a < 0;
if the vertex abscissa of the quadratic function y = a*x^2 + b*x + c is not greater than zero, i.e. (-b/(2*a)) ≤ 0, the position value of the reference position is changed until a quadratic function fitted based on the new reference position satisfies (-b/(2*a)) > 0;
if the vertex abscissa (-b/(2*a)) of the quadratic function y = a*x^2 + b*x + c exceeds the maximum distance the focal point of the camera is allowed to move, the position value of the reference position is changed until a quadratic function fitted based on the new reference position has a vertex abscissa (-b/(2*a)) smaller than the maximum allowed distance.
2. The method according to claim 1, characterized in that:
evaluating, using the energy gradient function as the evaluation function F, the reference sharpness of the image at the reference position and the sharpness of the corresponding images I_1, I_2, … I_N at each parking position:

F = Σ_yp Σ_xp { [f(xp+1, yp) - f(xp, yp)]^2 + [f(xp, yp+1) - f(xp, yp)]^2 }

wherein f(xp, yp) is the gray value of the pixel point (xp, yp), f(xp+1, yp) is the gray value of the pixel point (xp+1, yp), and f(xp, yp+1) represents the gray value of the pixel point (xp, yp+1).
3. The method according to claim 1, characterized in that:
evaluating, using the Laplacian function as the evaluation function F, the reference sharpness of the image at the reference position and the sharpness of the corresponding images I_1, I_2, … I_N at each parking position:

F = Σ_yp Σ_xp |G(xp, yp)|^2

wherein f(xp, yp) is the gray value of the pixel point (xp, yp), and the gradient matrix G(xp, yp) is obtained by convolving the gray values of the pixel points with the Laplace operator L.
4. The method according to claim 1, characterized in that:
the position variable X_K and the definition variable Y_K caused by the K-th movement are regarded as one coordinate point (X_K, Y_K) of the quadratic function y = a*x^2 + b*x + c.
5. The method according to claim 1, characterized in that:
the method is used for respectively and independently carrying out definition positioning on different areas with uneven height on the wafer, and when the definition positioning is carried out by switching from the former area to the latter area, the reference position is updated from the position value corresponding to the former area to the new position value corresponding to the latter area.
6. The method according to claim 1, characterized in that:
the method for obtaining the quadratic function comprises operation based on a quadratic polynomial fitting principle or a neural network trained by using position variables and definition variables and used for simulating the quadratic function.
7. A method according to claim 2 or 3, characterized in that:
in performing the sharpness evaluation, the image gradient operation of the evaluation function is performed using a sharp image in correct focus or using a defocused image not in correct focus.