CN115616018B - Positioning method and device for scanning electron microscope, electronic equipment and storage medium - Google Patents


Info

Publication number: CN115616018B
Application number: CN202211388684.8A
Authority: CN (China)
Prior art keywords: sample, coordinate, camera, detected, determining
Inventor: 王靖夫
Current assignee: Kyky Technology Co ltd
Original assignee: Kyky Technology Co ltd
Other languages: Chinese (zh)
Other versions: CN115616018A
Application filed by Kyky Technology Co ltd
Priority to CN202211388684.8A
Publication of CN115616018A (application), later granted as CN115616018B
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)

Classifications

    • G01N23/2251 (G01N — Investigating or analysing materials by determining their chemical or physical properties): investigating or analysing materials by measuring secondary emission from the material, using incident electron beams, e.g. scanning electron microscopy [SEM]
    • H01J37/20 (H01J — Electric discharge tubes or discharge lamps; H01J37 — Discharge tubes with provision for introducing objects or material to be exposed to the discharge): means for supporting or positioning the objects or the material; means for adjusting diaphragms or lenses associated with the support
    • H01J37/22: optical or photographic arrangements associated with the tube
    • H01J37/265 (H01J37/26 — Electron or ion microscopes; electron or ion diffraction tubes): controlling the tube; circuit arrangements adapted to a particular application not otherwise provided, e.g. bright-field/dark-field illumination
    • H01J37/28: electron or ion microscopes with scanning beams

Abstract

The application provides a positioning method and apparatus for a scanning electron microscope, an electronic device, and a storage medium, in the technical field of scanning electron microscopes. The method comprises the following steps: acquiring an initial image of a sample to be detected at a target shooting position; determining the position information of the sample cup and the sample to be detected in the initial image; determining a first coordinate correction amount from that position information and the calibration parameters of the camera; acquiring the pixel coordinates of a target observation point and its target position coordinates in a second coordinate system; determining the initial position coordinates of the target observation point in the second coordinate system from the pixel coordinates, the first coordinate correction amount, and the calibration parameters of the camera; and controlling the sample stage to move according to these position coordinates. By capturing an image of the sample to be detected and converting the pixel coordinates of the target observation point with the image-related information, the method determines the real position of the observation point, achieves accurate and rapid positioning of the sample observation point, and effectively improves the observation efficiency of the scanning electron microscope.

Description

Positioning method and device for scanning electron microscope, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of scanning electron microscopes, and in particular, to a positioning method and apparatus for a scanning electron microscope, an electronic device, and a storage medium.
Background
A scanning electron microscope (SEM) is a large scientific instrument used to observe and study the morphology, composition, and optoelectronic properties of micro- and nano-scale materials. When an SEM is used to observe a sample, the sample is placed on the sample stage of the microscope.
To observe different samples on the sample stage, or different positions on the same sample, the sample of interest must first be found and positioned. Currently, a user typically infers the sample position from the region features visible in the SEM field of view and makes small movements within that field.
Because the SEM field of view is very small, finding an observation point on the sample is difficult, and the search takes a long time. It is therefore important to study how to locate a sample observation point quickly and accurately.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
An embodiment of a first aspect of the present application provides a positioning method for a scanning electron microscope, including:
acquiring an initial image of a sample to be detected located at a target shooting position, and a distance between the target shooting position and a camera, wherein the sample to be detected is located on a sample cup and the sample cup is located on a sample stage;
identifying the initial image to determine position information of the sample cup and the sample to be detected in a first coordinate system;
determining a first coordinate correction amount in the first coordinate system according to the distance between the target shooting position and the camera, the position information of the sample cup and the sample to be detected, and calibration parameters of the camera;
acquiring pixel coordinates of a target observation point in the initial image and target position coordinates of the target observation point in a second coordinate system;
determining an initial position coordinate of the target observation point in the second coordinate system according to the pixel coordinates, the first coordinate correction amount, and the calibration parameters of the camera;
and controlling the sample stage to move according to the initial position coordinates and the target position coordinates, so that the target observation point moves to a target position.
The embodiment of the second aspect of the present application provides a positioning device for a scanning electron microscope, including:
a first acquisition module, configured to acquire an initial image of a sample to be detected at a target shooting position and a distance between the target shooting position and a camera, wherein the sample to be detected is located on a sample cup and the sample cup is located on a sample stage;
an identification module, configured to identify the initial image to determine position information of the sample cup and the sample to be detected in a first coordinate system;
a first determining module, configured to determine a first coordinate correction amount in the first coordinate system according to the distance between the target shooting position and the camera, the position information of the sample cup and the sample to be detected, and calibration parameters of the camera;
a second acquisition module, configured to acquire pixel coordinates of a target observation point in the initial image and target position coordinates of the target observation point in a second coordinate system;
a second determining module, configured to determine a corresponding initial position coordinate of the target observation point in the second coordinate system according to the pixel coordinates, the first coordinate correction amount, and the calibration parameters of the camera;
and a control module, configured to control the sample stage to move according to the initial position coordinates and the target position coordinates, so that the target observation point moves to a target position.
An embodiment of a third aspect of the present application provides an electronic device, including: the device comprises a memory, a processor and computer instructions stored on the memory and executable on the processor, wherein the processor executes the computer instructions to realize the method as set forth in the embodiment of the first aspect of the application.
An embodiment of a fourth aspect of the present application provides a scanning electron microscope including an electronic device as set forth in an embodiment of the third aspect of the present application.
An embodiment of a fifth aspect of the present application provides a non-transitory computer-readable storage medium storing computer instructions, which when executed by a processor implement the method as set forth in the embodiment of the first aspect of the present application.
An embodiment of a sixth aspect of the present application provides a computer program product which, when its instructions are executed by a processor, performs the method provided by the embodiment of the first aspect of the present application.
The positioning method, the positioning device, the electronic equipment and the storage medium for the scanning electron microscope have the following beneficial effects:
First, an initial image of the sample to be detected at the target shooting position and the distance between the target shooting position and the camera are acquired, and the initial image is identified to determine the position information of the sample cup and the sample to be detected in the first coordinate system. Next, a first coordinate correction amount in the first coordinate system is determined from the distance between the target shooting position and the camera, the position information of the sample cup and the sample, and the calibration parameters of the camera. The pixel coordinates of the target observation point in the initial image and its target position coordinates in the second coordinate system are then acquired, and the corresponding initial position coordinates of the target observation point in the second coordinate system are determined from the pixel coordinates, the first coordinate correction amount, and the calibration parameters of the camera. Finally, the sample stage is controlled to move according to the initial position coordinates and the target position coordinates, so that the target observation point moves to the target position.
An image of the sample to be detected is captured by the camera, and a coordinate correction amount is determined from the position information of the sample and the sample cup in the image, the distance between the target shooting position and the camera, and the calibration parameters of the camera. The pixel coordinates of the target observation point in the image are then converted using the correction amount and the calibration parameters to determine the real position of the target observation point. This achieves accurate and rapid positioning of the sample observation point, effectively improves the observation efficiency of the scanning electron microscope, and saves the user considerable time and effort in finding and positioning samples.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a positioning method for a scanning electron microscope according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a position shift of a sample to be measured when the sample is observed from a top view;
FIG. 3 is a schematic flowchart of a positioning method for a scanning electron microscope according to another embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a camera projected from a top view according to an embodiment of the present application;
FIG. 5 is a diagram illustrating a computed imaging offset according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a positioning apparatus for a scanning electron microscope according to an embodiment of the present disclosure;
FIG. 7 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
A positioning method, an apparatus, an electronic device, and a storage medium for a scanning electron microscope according to embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a positioning method for a scanning electron microscope according to an embodiment of the present disclosure.
In the embodiments of the present application, the positioning method for a scanning electron microscope is described as being configured in a positioning apparatus for a scanning electron microscope. The positioning apparatus may be applied to a hardware device having an operating system and a touch screen and/or display screen, so that the device can perform the positioning function for a scanning electron microscope.
As shown in fig. 1, the positioning method for a scanning electron microscope may include the following steps:
step 101, obtaining an initial image of a sample to be detected at a target shooting position and a distance between the target shooting position and a camera, wherein the sample to be detected is located on a sample cup, and the sample cup is located on a sample platform.
When a sample is observed using a scanning electron microscope, the sample to be measured is usually placed on a sample cup, and the sample cup is placed on the sample stage. The position is then adjusted so that the sample to be measured appears in the field of view of the scanning electron microscope.
The sample to be measured can be any type of article requiring observation of microscopic morphology. For example, the sample may be a biological sample, a nano material, etc., which is not limited in this application.
In the embodiment of the application, a camera is used to capture the initial image of the sample to be detected. Specifically, the camera may be fixed relative to the sample stage, and the position on the sample stage directly below the camera may be set as the target shooting position. When the sample to be detected reaches the target shooting position, the camera is triggered to acquire the initial image.
For example, a miniature photoelectric sensor may be used to detect the sample cup carrying the sample to be tested; when the sample is detected at the target shooting position, the camera is triggered synchronously to capture an image of the sample and the sample cup.
The distance between the target shooting position and the camera is fixed and can be set as needed; for example, the target shooting position may be 90 mm or 80 mm from the camera, which is not limited in this application.
And 102, identifying the initial image to determine the position information of the sample cup and the sample to be detected in the first coordinate system.
The first coordinate system is the image coordinate system, a two-dimensional rectangular coordinate system that describes the relative positions of the sample cup and the sample to be measured in the image. For example, the first coordinate system may be established with the projection of the camera's optical center in the initial image as the origin, and with axes parallel to two adjacent sides of the image as the x- and y-axes.
It should be noted that, since the sample to be measured rests on the sample cup, the two partially overlap in the image. The color difference between the sample and the cup is usually large, so their position information can be determined by identifying their edges.
For example, image convolution can be performed using perspective-transformation methods from computer vision: different edge operators are extracted from the convolution results, and the positions of the sample cup and the sample to be detected are preliminarily calibrated according to perspective similarity principles.
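The edge-based localization described above can be illustrated with a minimal, dependency-free sketch on a synthetic top-down image. The patent does not disclose its actual edge operators or the perspective calibration details, so a plain Sobel gradient and a bounding-box step stand in here purely for illustration:

```python
def sobel_magnitude(img):
    """Gradient magnitude of a grayscale image (list of rows) via 3x3 Sobel kernels."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * img[i - 1 + a][j - 1 + b] for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * img[i - 1 + a][j - 1 + b] for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

def bounding_box(mask):
    """(top, left, bottom, right) of True cells in a boolean grid."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    return rows[0], cols[0], rows[-1], cols[-1]

# Synthetic image: dark stage (0) with a bright circular sample cup (200),
# radius 20 px, centered at (32, 32).
img = [[200.0 if (x - 32) ** 2 + (y - 32) ** 2 <= 400 else 0.0 for x in range(64)]
       for y in range(64)]
edges = sobel_magnitude(img)
mask = [[v > 100.0 for v in row] for row in edges]
print(bounding_box(mask))
```

The bounding box recovered from the edge mask is centered on the cup's true center, which is the kind of preliminary position calibration the text refers to.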
And 103, determining a first coordinate correction quantity in a first coordinate system according to the distance between the target shooting position and the camera, the position information of the sample cup and the sample to be detected and the calibration parameters of the camera.
It should be noted that the sample to be measured often has a certain thickness, so when it is viewed from directly above, its projected position is offset from its actual position. As shown in fig. 2, when the sample to be measured is photographed from point O, point A of the sample projects to A1 on the initial image, and point B projects to B1.
To accurately recover the real position of each point from the initial image, this projection offset must be determined. In the embodiment of the application, the first coordinate correction amount can be determined from the distance H between the target shooting position and the camera, the position information of the sample cup and the sample to be detected, and the calibration parameters of the camera.
And 104, acquiring pixel coordinates of the target observation point in the initial image and target position coordinates of the target observation point in a second coordinate system.
The target observation point on the initial image can be any position of the sample to be detected. For example, when the sample to be detected is an insect specimen, the target observation point may be a spot on a wing of the insect specimen.
Specifically, the user may click with the mouse on any position of interest in the initial image, and the pixel coordinates of the target observation point are determined from the click position.
It should be noted that, to make selecting an observation point easier, the clicked point may be used as the center for magnifying the displayed image, producing views at different magnifications. This ensures that the acquired initial picture has sufficient resolution and improves positioning accuracy. In addition, a measurement grid can be displayed on the image in real time, which is convenient for the user when measuring.
The second coordinate system is a world coordinate system (also called a measurement coordinate system), which is a three-dimensional rectangular coordinate system, and the spatial positions of the camera, the scanning electron microscope and the sample to be measured can be described by using the second coordinate system as a reference.
In the embodiment of the application, the positions of the camera and the scanning electron microscope relative to the sample stage are fixed. The origin of the second coordinate system may be the optical center of the camera, with the x- and y-axes parallel to two adjacent edges of the sample stage and the z-axis perpendicular to the plane of the stage.
When a sample is observed with the scanning electron microscope, the sample must be moved from below the camera to below the microscope, and the target observation point aligned with the optical center of the microscope. Therefore, the target position coordinates of the target observation point in the second coordinate system can be the coordinates of the projection of the microscope's optical center onto the sample stage.
And 105, determining the corresponding initial position coordinate of the target observation point in a second coordinate system according to the pixel coordinate, the first coordinate correction quantity and the calibration parameter of the camera.
It can be understood that, when a camera photographs a real scene, the correspondence between the three-dimensional position of a point on the surface of an object and its image point can be established from the calibration parameters of the camera, allowing the three-dimensional scene to be reconstructed from the image.
The calibration parameters include the intrinsic parameters, extrinsic parameters, and distortion parameters of the camera. Distortion correction can be applied to the image according to the distortion parameters; the correspondence among pixel coordinates, image coordinates, camera coordinates, and world coordinates can be established from the intrinsic and extrinsic parameters; and real-world coordinates can then be determined from pixel coordinates, establishing an accurate positioning relationship.
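The pixel-to-world correspondence mentioned here can be illustrated with the standard pinhole model (after distortion correction). The focal lengths, principal point, and depth below are hypothetical values; the patent does not disclose its actual calibration parameters:

```python
def pixel_to_plane(u, v, fx, fy, cx, cy, depth):
    """Back-project an undistorted pixel (u, v) onto a plane at the given depth
    along the optical axis, using pinhole intrinsics: focal lengths (fx, fy) in
    pixels and principal point (cx, cy). Returns (X, Y) in the camera frame."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return X, Y

# A point imaged at the principal point lies on the optical axis:
print(pixel_to_plane(640.0, 480.0, 1000.0, 1000.0, 640.0, 480.0, 90.0))  # (0.0, 0.0)
# 100 px right of the principal point, 90 mm away, fx = 1000 px -> 9 mm off-axis:
print(pixel_to_plane(740.0, 480.0, 1000.0, 1000.0, 640.0, 480.0, 90.0))  # (9.0, 0.0)
```

The extrinsic parameters would then carry these camera-frame coordinates into the world (stage) coordinate system.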
Therefore, in the embodiment of the present application, determining the initial position coordinate of the target observation point in the second coordinate system according to the pixel coordinate, the first coordinate correction amount, and the calibration parameter of the camera may include the following steps:
first, determining the image coordinates of the target observation point in the first coordinate system according to the pixel coordinates and the pixel length;
Here the pixel length is the physical length per pixel in the x- and y-directions. It can be obtained from the camera parameters, or determined from the number of pixels spanning a known length; for example, the length per pixel can be calculated as L/A from the diameter L of the sample cup in the initial image and the number of pixels A contained in that diameter.
In the embodiment of the present application, the first coordinate system is the image coordinate system, with the optical center position G of the camera on the initial image as its origin. From the pixel coordinates and the pixel length, the horizontal and vertical distances between the target observation point and the origin can be calculated, giving the image coordinates of the target observation point in the first coordinate system.
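The L/A pixel-length estimate and the pixel-to-image-coordinate step can be sketched as follows. The cup diameter, pixel counts, and coordinates are hypothetical numbers chosen for illustration:

```python
def length_per_pixel(cup_diameter_mm, pixels_across):
    """Physical length represented by one pixel, estimated from a feature of
    known size (here the sample cup diameter L spanning A pixels, i.e. L/A)."""
    return cup_diameter_mm / pixels_across

def pixel_to_image_coords(px, py, gx, gy, scale):
    """Image coordinates relative to the optical-center pixel G = (gx, gy),
    scaled by the per-pixel length."""
    return (px - gx) * scale, (py - gy) * scale

# Hypothetical: a 32 mm cup spanning 800 pixels -> 0.04 mm per pixel.
s = length_per_pixel(32.0, 800)
print(s)  # 0.04
# Observation point clicked at pixel (1000, 600), optical center at (800, 500):
print(pixel_to_image_coords(1000, 600, 800, 500, s))
```

This yields the observation point's offsets from the origin G in physical units, which the following steps then correct and map to the world coordinate system.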
Secondly, the image coordinates are corrected according to the first coordinate correction amount to obtain the corrected coordinates of the target observation point.
Finally, the initial position coordinates of the target observation point in the second coordinate system are determined from the corrected coordinates and the calibration parameters of the camera.
The second coordinate system is the world coordinate system; it describes the actual position coordinates of the target observation point.
Since the target observation point only needs to be positioned in the horizontal plane, the second coordinate system can ignore displacement in the vertical direction and use only the two-dimensional rectangular coordinates in the horizontal plane.
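The three-step conversion just described (pixel coordinates to image coordinates, correction by the first coordinate correction amount, then a calibration-based mapping into the second coordinate system) can be sketched end to end. The scale, correction amount, and identity image-to-world mapping below are hypothetical stand-ins; the patent gives no closed-form expression for the calibration mapping:

```python
def locate_observation_point(pixel_xy, origin_xy, mm_per_pixel, correction_xy, image_to_world):
    """Pixel coords -> image coords -> corrected coords -> world (stage) coords.

    image_to_world is a callable standing in for the calibration-parameter
    mapping from the corrected image plane into the second coordinate system.
    """
    # Step 1: image coordinates relative to the optical-center pixel G.
    ix = (pixel_xy[0] - origin_xy[0]) * mm_per_pixel
    iy = (pixel_xy[1] - origin_xy[1]) * mm_per_pixel
    # Step 2: apply the first coordinate correction amount.
    cx = ix - correction_xy[0]
    cy = iy - correction_xy[1]
    # Step 3: map to the world coordinate system via the calibration parameters.
    return image_to_world(cx, cy)

# Hypothetical numbers, with an identity-scale calibration for illustration:
world = locate_observation_point((900, 700), (800, 500), 0.05, (0.5, 0.25),
                                 lambda x, y: (x, y))
print(world)
```

Only the horizontal (x, y) components are produced, matching the observation above that vertical displacement can be ignored.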
And 106, controlling the sample stage to move according to the initial position coordinates and the target position coordinates so as to enable the target observation point to move to the target position.
After the real initial position coordinates of the target observation point are determined, they can be input into the control system of the sample stage, so that the stage moves in the x- and y-directions and the target observation point moves to the target position, i.e., directly below the optical center of the scanning electron microscope. The user can then observe the target observation point directly through the scanning electron microscope.
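The stage motion itself reduces to a displacement between the two coordinate pairs. The coordinates below are hypothetical, continuing the illustration:

```python
def stage_move(initial_xy, target_xy):
    """Displacement (dx, dy) the stage controller must apply so that the
    observation point travels from its initial position to the target position."""
    return (target_xy[0] - initial_xy[0], target_xy[1] - initial_xy[1])

# Hypothetical coordinates (mm): observation point at (4.5, 9.75), projection of
# the SEM optical center on the stage at (60.0, 0.0).
print(stage_move((4.5, 9.75), (60.0, 0.0)))  # (55.5, -9.75)
```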
In the embodiment of the application, an image of the sample to be detected is captured by the camera, and a coordinate correction amount is determined from the position information of the sample and the sample cup in the image, the distance between the target shooting position and the camera, and the calibration parameters of the camera. The pixel coordinates of the target observation point in the image are then converted using the correction amount and the calibration parameters to determine the real position of the target observation point. This achieves accurate and rapid positioning of the sample observation point, effectively improves the observation efficiency of the scanning electron microscope, and saves the user considerable time and effort in finding and positioning samples.
Fig. 3 is a schematic flowchart of a positioning method for a scanning electron microscope according to another embodiment of the present disclosure. As shown in fig. 3, the positioning method for a scanning electron microscope may include the following steps:
step 201, obtaining an initial image of a sample to be detected located at a target shooting position and a distance between the target shooting position and a camera, wherein the sample to be detected is located on a sample cup, and the sample cup is located on a sample platform.
The specific implementation manner of step 201 may refer to the detailed description of the embodiment in step 101 in this application, and is not described herein again.
Step 202, identifying the initial image to determine the position information of the sample cup and the sample to be measured in the first coordinate system.
The specific implementation manner of step 202 may refer to the detailed description of the embodiment in step 102 in this application, and is not described herein again.
And step 203, determining the position coordinates of the first reference point on the sample cup in the first coordinate system according to the position information of the sample cup.
And 204, determining the position coordinate of the second reference point on the sample to be detected in the first coordinate system and the length of the sample to be detected according to the position information of the sample to be detected.
The first reference point may be any point on the sample cup, and the second reference point may be any point on the sample to be measured. Because the lower surface of the sample and the upper surface of the sample cup lie in the same horizontal plane, the first reference point represents a point in the plane of the sample's lower surface, and the second reference point represents a point in the plane of its upper surface.
Furthermore, according to the position information of the sample cup, the position coordinates of the selected first reference point in the first coordinate system can be determined. According to the position information of the sample to be detected in the first coordinate system, the position coordinates of the selected second reference point in the first coordinate system can be determined.
It should be noted that, after the position information of the sample to be measured is determined, the length of the sample to be measured can be determined according to the position coordinates of the two ends of the sample to be measured.
And step 205, determining the thickness of the sample to be measured according to the position coordinates of the first reference point, the position coordinates of the second reference point and the calibration parameters of the camera.
It can be understood that, according to the calibration parameters of the camera, the corresponding relationship between the first coordinate system and the second coordinate system can be established, and finally, the real world coordinates can be determined according to the image coordinates, and the accurate positioning relationship can be established.
In the embodiment of the present application, the second coordinate system is a world coordinate system, which is a three-dimensional rectangular coordinate system, and the spatial positions of the camera, the scanning electron microscope, and the sample to be measured can be described by using the second coordinate system as a reference.
For example, the origin of the second coordinate system may be the optical center of the camera, the x-axis and the y-axis are respectively parallel to two adjacent edges of the sample stage, and the z-axis is perpendicular to the plane of the sample stage.
Therefore, based on the image coordinates of the first reference point and the calibration parameters of the camera, the three-dimensional coordinates of the first reference point in the second coordinate system can be calculated; based on the image coordinates of the second reference point and the calibration parameters of the camera, the three-dimensional coordinates of the second reference point in the second coordinate system may be calculated.
Further, the coordinate difference between the first reference point and the second reference point in the z-axis direction, namely the height direction, is calculated; this difference is the thickness of the sample to be measured.
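As a hedged sketch of this step (the function and parameter names are assumptions, not from the patent), the thickness reduces to a z-coordinate difference once both reference points are expressed in the second (world) coordinate system:

```python
def sample_thickness(first_ref_xyz, second_ref_xyz):
    """Thickness = |z of top-surface point - z of bottom-surface point|.

    first_ref_xyz:  (x, y, z) of the first reference point (cup upper surface,
                    i.e. the plane of the sample's lower surface).
    second_ref_xyz: (x, y, z) of the second reference point (sample upper surface).
    """
    return abs(second_ref_xyz[2] - first_ref_xyz[2])
```

For example, a reference point at z = 1.0 on the cup and z = 3.5 on the sample top yields a thickness of 2.5 in the same length units.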
And step 206, determining the imaging offset according to the thickness of the sample to be detected, the length of the sample to be detected and the distance between the target shooting position and the camera.
As shown in fig. 4, because the sample to be measured has a certain thickness, a certain offset occurs between the actual position of the sample and its projected position when the camera images it from above. In the figure, the solid line is the actual position of the sample to be measured, the dotted line is its projected position, and A2 is the projection of point A of the sample onto the initial image along the z-axis direction. When the sample is photographed from point O, the projection of point A onto the initial image is A1. In the initial image, the points A1, A2 and G lie on the same straight line, and the angle between this line and the y direction is θ.
It should be noted that, in practical applications, when the sample cup is transferred to the sample stage, the actual position of the sample cup may deviate from the target shooting position, so that the optical center of the camera in the initial image does not coincide with the center of the sample cup. As shown in FIG. 5, G is the optical center of the camera, O1 is the center of the sample cup, and the projection of the point A of the sample to be measured on the initial image along the z-axis direction is A2.
Thus, the camera optical center G can be determined from the center point of the initial image. When the sample to be detected is shot from the point O, the projection of the point A of the sample to be detected on the initial image is A1, and the projection of the point B is B1. The height of the camera from the target position is H, the thickness of the sample to be detected is H, the point O, the point A and the point A1 are on the same straight line, and the included angle between the straight line and the z-axis direction is an angle alpha.
As shown in fig. 5, according to the similar triangle principle, it can be known that:
tan α = GA1 / H

A1A2 = h · tan α = (h / H) · GA1
where A1A2 is the imaging offset.
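The similar-triangles relation above can be sketched as follows; the function and parameter names are illustrative assumptions, with `H` the camera height above the target shooting position, `h` the sample thickness, and `g_a1` the in-image distance from the optical center G to the projected corner A1:

```python
def imaging_offset(h, H, g_a1):
    """A1A2 = (h / H) * GA1, from the similar triangles O-G-A1 and A-A2-A1."""
    if H <= h:
        raise ValueError("camera height must exceed sample thickness")
    return h * g_a1 / H
```

With H = 10, h = 1 and GA1 = 5 (arbitrary consistent units), the offset A1A2 is 0.5.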
And step 207, determining a first coordinate correction amount according to the imaging offset and the position information of the sample to be detected.
As shown in fig. 4, from the principle of similar triangles, it can be known that:
tan θ = ∆x / ∆y

∆x1 = A1A2 · sin θ

∆y1 = A1A2 · cos θ
where ∆x is the distance in the x direction between the corner point A1 of the sample to be measured and the camera optical center G, ∆y is the distance in the y direction between A1 and G, ∆x1 is the first coordinate correction in the x direction, and ∆y1 is the first coordinate correction in the y direction.
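Decomposing the imaging offset into its x and y components, as described above, might look like the following sketch (names assumed; θ is recovered from the x and y distances of A1 to the optical center G):

```python
import math

def first_correction(offset, dx, dy):
    """Split the imaging offset A1A2 into (dx1, dy1) using tan(theta) = dx / dy,
    where theta is the angle between the line G-A2-A1 and the y direction."""
    theta = math.atan2(dx, dy)  # angle measured from the y axis
    return offset * math.sin(theta), offset * math.cos(theta)
```

For a 3-4-5 geometry (∆x = 3, ∆y = 4) and offset 1.0, the corrections come out as 0.6 and 0.8.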
And step 208, acquiring pixel coordinates of the target observation point in the initial image and target position coordinates of the target observation point in a second coordinate system.
The specific implementation manner of step 208 may refer to the detailed description of the embodiment in step 104 in this application, and is not described herein again.
And step 209, determining the position information, in the first coordinate system, of the optical center of the camera in the initial image and of the center of the sample cup.
The optical center of the camera in the initial image is the central point of the initial image, and the center of the sample cup can be calculated from the detected edge of the sample cup, which is not described herein again.
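A minimal sketch of estimating the cup center from its detected edge: here the centroid of the edge pixels is used, which is adequate for a roughly circular edge. This is an illustrative choice, not the patent's specific detection algorithm, and the names are assumed:

```python
def cup_center(edge_points):
    """Estimate the sample-cup center as the centroid of its edge pixels.

    edge_points: iterable of (x, y) pixel coordinates on the detected cup edge.
    """
    pts = list(edge_points)
    n = len(pts)
    if n == 0:
        raise ValueError("no edge points detected")
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

A circle-fit (e.g. least squares or a Hough transform) would be more robust to a partially occluded edge; the centroid suffices to illustrate the step.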
And step 210, determining a second coordinate correction quantity in the first coordinate system according to the position information of the optical center of the camera and the center position information of the sample cup.
When the camera is calibrated, the positions of the camera, the sample cup and the scanning electron microscope are relatively fixed, and the optical center of the camera coincides with the center of the sample cup. During actual observation, however, the sample cup is placed on the sample stage and the stage is conveyed under the camera by a conveyor belt; the center of the sample cup is then not necessarily directly under the camera, that is, it deviates from the target shooting position. The second coordinate correction amount is used to correct this error.
As shown in fig. 5, after determining the position information of the optical center G of the camera and the center O1 of the sample cup, it can be determined that the second coordinate correction amount is:
∆x2 = |xG − xO1|

∆y2 = |yG − yO1|

where (xG, yG) and (xO1, yO1) are the coordinates of the camera optical center G and the sample cup center O1 in the first coordinate system.
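A sketch of the second coordinate correction (function and parameter names assumed): the per-axis offset between the optical center G and the detected cup center O1, both in image coordinates:

```python
def second_correction(g_xy, o1_xy):
    """Per-axis magnitude of the offset between the camera optical center G
    (the image center) and the sample-cup center O1."""
    return abs(g_xy[0] - o1_xy[0]), abs(g_xy[1] - o1_xy[1])
```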
and step 211, determining the corresponding initial position coordinate of the target observation point in the second coordinate system according to the pixel coordinate, the first coordinate correction quantity, the second coordinate correction quantity and the calibration parameter of the camera.
Firstly, determining the image coordinates of a target observation point in a first coordinate system according to the pixel coordinates and the pixel length;
wherein the pixel length includes the pixel lengths in the x direction and the y direction. The pixel length can be obtained from the parameters of the camera, or determined from the number of pixels spanning a known length. For example, the pixel length may be L/A, based on the diameter L of the sample cup in the initial image and the number of pixels A contained in that diameter.
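The pixel-length estimate and the pixel-to-image-coordinate conversion described above can be sketched as follows (names are illustrative assumptions):

```python
def pixel_length(cup_diameter_len, cup_diameter_px):
    """Physical length per pixel: known cup diameter L over its pixel span A."""
    return cup_diameter_len / cup_diameter_px

def pixel_to_image(px, py, cx, cy, plen_x, plen_y):
    """Image coordinates of a pixel relative to the optical center (cx, cy),
    scaled by the per-axis pixel lengths."""
    return (px - cx) * plen_x, (py - cy) * plen_y
```

For a 30 mm cup spanning 600 pixels, the pixel length is 0.05 mm/px; a pixel 80 columns right of and 40 rows above the optical center then maps to (4.0, −2.0) mm.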
In the embodiment of the present application, the first coordinate system is an image coordinate system. The first coordinate system takes the optical center position G of the camera on the initial image as an original point, and according to the pixel coordinates and the pixel length, the distances between the target observation point and the original point in the x direction and the y direction can be calculated, so that the image coordinates of the target observation point in the first coordinate system can be determined.
And secondly, correcting the image coordinates according to the first coordinate correction amount and the second coordinate correction amount to determine the corrected coordinates of the target observation point.
For example, the image coordinates of the target observation point m are
(xm, ym)
And then the corrected coordinates of the target observation point m are as follows:
(xm ± (∆x1 + ∆x2), ym ± (∆y1 + ∆y2))
The signs are determined according to the quadrant of the target observation point in the image coordinate system. For example, when the target observation point is located in the first quadrant, the signs before the correction amounts are positive.
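Applying both corrections with quadrant-dependent signs might be sketched as follows; the sign convention (apply the corrections in the direction away from the origin, so that both are added in the first quadrant) is an assumption consistent with the first-quadrant example above, and the names are illustrative:

```python
import math

def corrected_coords(xm, ym, dx1, dy1, dx2, dy2):
    """Apply the first (dx1, dy1) and second (dx2, dy2) coordinate corrections
    to the image coordinates (xm, ym), choosing signs by quadrant."""
    sx = math.copysign(1.0, xm) if xm != 0 else 1.0
    sy = math.copysign(1.0, ym) if ym != 0 else 1.0
    return xm + sx * (dx1 + dx2), ym + sy * (dy1 + dy2)
```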
And finally, determining the initial position coordinate of the target observation point in the second coordinate system according to the corrected coordinate and the calibration parameter of the camera.
In the embodiment of the present application, the second coordinate system is a world coordinate system. Since the target observation point only needs to be located in the horizontal plane, the second coordinate system may be a two-dimensional rectangular coordinate system. The position coordinates of the target observation point in reality can be described through the second coordinate system.
And 212, controlling the sample stage to move according to the initial position coordinates and the target position coordinates so that the target observation point moves to the target position.
The specific implementation manner of step 212 may refer to the detailed description of the embodiment in step 106 in this application, and is not described herein again.
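The stage-control step reduces to commanding a displacement equal to the coordinate difference between the target and initial positions; a minimal sketch under assumed names:

```python
def stage_move(initial_xy, target_xy):
    """Displacement (dx, dy) that moves the target observation point from its
    initial position coordinates to the target position coordinates."""
    return target_xy[0] - initial_xy[0], target_xy[1] - initial_xy[1]
```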
In the embodiment of the application, the position deviations caused by the sample thickness and by the shooting-position offset are considered together: the first and second coordinate correction amounts are determined from the sample thickness and the shooting position respectively, and the pixel coordinates of the target observation point in the image are then converted using these correction amounts together with the calibration parameters of the camera. This determines the real position information of the target observation point, further improves the positioning precision of the sample observation point, and meets the use requirements of users.
In order to implement the above embodiments, the present application further provides a positioning device for a scanning electron microscope.
Fig. 6 is a schematic structural diagram of a positioning device for a scanning electron microscope according to an embodiment of the present disclosure.
As shown in fig. 6, the positioning apparatus 100 for a scanning electron microscope may include: the device comprises a first acquisition module 110, a recognition module 120, a first determination module 130, a second acquisition module 140, a second determination module 150 and a control module 160.
The first obtaining module 110 is configured to obtain an initial image of a sample to be detected located at a target shooting position and a distance between the target shooting position and a camera, where the sample to be detected is located on a sample cup, and the sample cup is located on a sample stage;
the identification module 120 is configured to identify the initial image to determine position information of the sample cup and the sample to be detected in the first coordinate system;
a first determining module 130, configured to determine a first coordinate correction amount in a first coordinate system according to a distance between the target shooting position and the camera, position information of the sample cup and the sample to be measured, and calibration parameters of the camera;
a second obtaining module 140, configured to obtain a pixel coordinate of the target observation point in the initial image and a target position coordinate of the target observation point in a second coordinate system;
the second determining module 150 is configured to determine an initial position coordinate of the target observation point in the second coordinate system according to the pixel coordinate, the first coordinate correction amount, and the calibration parameter of the camera;
the control module 160 is configured to control the sample stage to move according to the initial position coordinates and the target position coordinates, so that the target observation point moves to the target position.
In one possible implementation manner, the second determining module is configured to:
determining the image coordinates of the target observation point in a first coordinate system according to the pixel coordinates and the pixel length;
correcting the image coordinates according to the first coordinate correction quantity to determine corrected coordinates of the target observation point;
and determining the initial position coordinate of the target observation point in the second coordinate system according to the corrected coordinate and the calibration parameter of the camera.
In one possible implementation, the identification module is configured to:
identifying the initial image to determine the edges of the sample cup and the sample to be detected;
and determining the position information of the sample cup and the sample to be detected in the first coordinate system based on the edges of the sample cup and the sample to be detected.
In one possible implementation manner, the first determining module is configured to:
determining the position coordinates of a first reference point on the sample cup in a first coordinate system according to the position information of the sample cup;
determining the position coordinate of a second reference point on the sample to be detected in the first coordinate system and the length of the sample to be detected according to the position information of the sample to be detected;
determining the thickness of the sample to be measured according to the position coordinates of the first reference point, the position coordinates of the second reference point and the calibration parameters of the camera;
determining imaging offset according to the thickness of the sample to be detected, the length of the sample to be detected and the distance between the target shooting position and the camera;
and determining a first coordinate correction amount according to the imaging offset and the position information of the sample to be detected.
In one possible implementation, the apparatus further includes:
the third determining module is used for determining the position information of the optical center of the camera and the center of the sample cup in the initial image in the first coordinate system;
the fourth determining module is used for determining a second coordinate correction quantity in the first coordinate system according to the position information of the optical center of the camera and the center position information of the sample cup;
a second determination module to:
and determining the corresponding initial position coordinate of the target observation point in a second coordinate system according to the pixel coordinate, the first coordinate correction quantity, the second coordinate correction quantity and the calibration parameter of the camera.
The functions and specific implementation principles of the modules in the embodiments of the present application may refer to the embodiments of the methods, which are not described herein again.
The positioning device for the scanning electron microscope acquires an image of the sample to be detected through the camera; determines the coordinate correction amounts from the position information of the sample to be detected and the sample cup in the image, the distance between the target shooting position and the camera, and the calibration parameters of the camera; and then converts the pixel coordinates of the target observation point in the image using the correction amounts together with the calibration parameters, thereby determining the real position information of the target observation point. This achieves accurate and rapid positioning of the sample observation point, effectively improves the observation efficiency of the scanning electron microscope, and saves users a great deal of time and effort in finding and positioning samples.
In order to implement the foregoing embodiments, the present application further provides an electronic device, including a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein when the processor executes the computer instructions, the method according to the foregoing embodiments of the present application is implemented.
In order to implement the foregoing embodiments, the present application further provides a scanning electron microscope including the electronic device as set forth in the foregoing embodiments of the present application.
To achieve the above embodiments, the present application also proposes a non-transitory computer-readable storage medium storing computer instructions which, when executed by a processor, implement the method as proposed by the foregoing embodiments of the present application.
In order to implement the foregoing embodiments, the present application also proposes a computer program product, wherein when the instructions in the computer program product are executed by a processor, the method as proposed by the foregoing embodiments of the present application is executed.
FIG. 7 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present application. The electronic device 12 shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the electronic device 12 is represented in the form of a general electronic device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. These architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, and commonly referred to as a "hard drive"). Although not shown in FIG. 7, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc Read-Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with the other modules of the electronic device 12 over the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, for example, implementing the methods mentioned in the foregoing embodiments, by running a program stored in the system memory 28.
According to the technical scheme of the present application, an initial image of the sample to be detected at the target shooting position and the distance between the target shooting position and the camera are obtained, and the initial image is identified to determine the position information of the sample cup and the sample to be detected in the first coordinate system. A first coordinate correction amount in the first coordinate system is then determined from the distance between the target shooting position and the camera, the position information of the sample cup and the sample to be detected, and the calibration parameters of the camera. Next, the pixel coordinates of the target observation point in the initial image and the target position coordinates of the target observation point in the second coordinate system are obtained, and the corresponding initial position coordinates of the target observation point in the second coordinate system are determined from the pixel coordinates, the first coordinate correction amount, and the calibration parameters of the camera. Finally, the sample stage is controlled to move according to the initial position coordinates and the target position coordinates, so that the target observation point moves to the target position.
The image of the sample to be detected is collected by the camera; the coordinate correction amount is determined from the position information of the sample to be detected and the sample cup in the image, the distance between the target shooting position and the camera, and the calibration parameters of the camera; and the pixel coordinates of the target observation point in the image are then converted using the correction amount together with the calibration parameters, so that the real position information of the target observation point is determined. Accurate and rapid positioning of the sample observation point is thereby achieved, the observation efficiency of the scanning electron microscope is effectively improved, and a great deal of time and effort spent finding and positioning samples is saved for the user.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried out in the method for implementing the above embodiment may be implemented by hardware that is related to instructions of a program, and the program may be stored in a computer readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (8)

1. A positioning method for a scanning electron microscope is characterized by comprising the following steps:
acquiring an initial image of a sample to be detected at a target shooting position and a distance between the target shooting position and a camera, wherein the sample to be detected is positioned on a sample cup, and the sample cup is positioned on a sample platform;
identifying the initial image to determine the position information of the sample cup and the sample to be detected in a first coordinate system;
determining a first coordinate correction amount in the first coordinate system according to the distance between the target shooting position and the camera, the position information of the sample cup and the sample to be detected, and the calibration parameters of the camera, wherein the determining the first coordinate correction amount in the first coordinate system according to the distance between the target shooting position and the camera, the position information of the sample cup and the sample to be detected, and the calibration parameters of the camera comprises: determining the position coordinates of a first reference point on the sample cup in the first coordinate system according to the position information of the sample cup; determining the position coordinate of a second reference point on the sample to be detected in the first coordinate system and the length of the sample to be detected according to the position information of the sample to be detected; determining the thickness of the sample to be measured according to the position coordinates of the first reference point, the position coordinates of the second reference point and the calibration parameters of the camera; determining imaging offset according to the thickness of the sample to be detected, the length of the sample to be detected and the distance between the target shooting position and the camera; determining the first coordinate correction amount according to the imaging offset and the position information of the sample to be detected;
acquiring pixel coordinates of a target observation point in the initial image and target position coordinates of the target observation point in a second coordinate system;
determining the corresponding initial position coordinate of the target observation point in the second coordinate system according to the pixel coordinate, the first coordinate correction quantity and the calibration parameter of the camera;
and controlling the sample stage to move according to the initial position coordinates and the target position coordinates, so that the target observation point moves to a target position.
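The core computation recited in claim 1 can be illustrated with a short sketch. The similar-triangles parallax model, the sign conventions, and all function and variable names below are assumptions made for illustration only; the claim itself does not recite a particular formula.

```python
def imaging_offset(thickness, sample_length, camera_distance):
    """Parallax shift of the sample's top surface in the camera image.

    Illustrative pinhole-camera assumption: a point on top of a sample of
    the given thickness, half a sample length from the optical axis,
    appears shifted outward by similar triangles. This is not the formula
    recited in the claim, only one plausible model of the imaging offset.
    """
    half_span = sample_length / 2.0
    return thickness * half_span / (camera_distance - thickness)


def stage_move(initial_xy, target_xy):
    """Stage displacement that brings the observation point to the target."""
    return (target_xy[0] - initial_xy[0], target_xy[1] - initial_xy[1])


# Example: a 2 mm thick, 10 mm long sample viewed from 102 mm away
offset = imaging_offset(2.0, 10.0, 102.0)   # parallax offset in mm
move = stage_move((1.0, 2.0), (4.0, 6.0))   # stage displacement in mm
```

Under this model a thicker sample or a shorter camera distance produces a larger offset, which matches the claim's dependence on thickness, length, and shooting distance.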
2. The method of claim 1, wherein determining the corresponding initial position coordinates of the target observation point in the second coordinate system according to the pixel coordinates, the first coordinate correction amount, and the calibration parameters of the camera comprises:
determining image coordinates of the target observation point in the first coordinate system according to the pixel coordinates and a pixel length;
correcting the image coordinates according to the first coordinate correction amount to determine corrected coordinates of the target observation point;
and determining the initial position coordinates of the target observation point in the second coordinate system according to the corrected coordinates and the calibration parameters of the camera.
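The three-step conversion of claim 2 can be sketched as follows. The reduction of the camera calibration to a single scale factor and origin, the subtractive sign of the correction, and all names are illustrative assumptions; a real calibration would typically involve a full intrinsic/extrinsic model.

```python
def pixel_to_stage(pixel_xy, pixel_length, correction_xy, scale, origin_xy):
    """Illustrative sketch of the claim-2 pipeline (parameters assumed).

    1. pixel coordinates -> image coordinates via the physical pixel length;
    2. apply the first coordinate correction amount (subtraction assumed);
    3. map into the second (stage) coordinate system; the camera calibration
       is assumed here to reduce to a scale factor and an origin offset.
    """
    ix = pixel_xy[0] * pixel_length
    iy = pixel_xy[1] * pixel_length
    cx = ix - correction_xy[0]
    cy = iy - correction_xy[1]
    return (origin_xy[0] + scale * cx, origin_xy[1] + scale * cy)


# 100 px at 0.02 mm/px, zero correction, identity calibration
initial_xy = pixel_to_stage((100, 50), 0.02, (0.0, 0.0), 1.0, (0.0, 0.0))
```

The resulting initial position coordinates are what claim 1 compares against the target position coordinates to drive the stage.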
3. The method of claim 1, wherein identifying the initial image to determine the position information of the sample cup and the sample to be detected in the first coordinate system comprises:
identifying the initial image to determine edges of the sample cup and the sample to be detected;
and determining the position information of the sample cup and the sample to be detected in the first coordinate system based on the edges of the sample cup and the sample to be detected.
4. The method of any one of claims 1-3, wherein determining the initial position coordinates of the target observation point in the second coordinate system according to the pixel coordinates, the first coordinate correction amount, and the calibration parameters of the camera further comprises:
determining position information of an optical center of the camera in the initial image and of a center of the sample cup in the first coordinate system;
determining a second coordinate correction amount in the first coordinate system according to the position information of the optical center of the camera and the position information of the center of the sample cup;
and determining the initial position coordinates of the target observation point in the second coordinate system according to the pixel coordinates, the first coordinate correction amount, the second coordinate correction amount, and the calibration parameters of the camera.
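The second coordinate correction amount of claim 4 compensates for the camera optical center not coinciding with the sample-cup center. A minimal sketch, assuming the correction is the displacement between the two centers and that the two correction amounts combine additively (neither assumption is recited in the claim):

```python
def second_correction(optical_center_xy, cup_center_xy):
    """Assumed claim-4 correction: displacement of the sample-cup center
    relative to the camera optical center in the first (image) coordinate
    system. The sign convention is an illustrative assumption."""
    return (cup_center_xy[0] - optical_center_xy[0],
            cup_center_xy[1] - optical_center_xy[1])


def apply_corrections(image_xy, first_corr, second_corr):
    """Combine both correction amounts before the calibration mapping;
    additive (subtractive) combination is assumed for illustration."""
    return (image_xy[0] - first_corr[0] - second_corr[0],
            image_xy[1] - first_corr[1] - second_corr[1])
```

With both corrections applied, the corrected coordinates would then be mapped into the second coordinate system exactly as in claim 2.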
5. A positioning device for a scanning electron microscope, comprising:
a first acquisition module configured to acquire an initial image of a sample to be detected at a target shooting position, and a distance between the target shooting position and a camera, wherein the sample to be detected is located on a sample cup and the sample cup is located on a sample stage;
an identification module configured to identify the initial image to determine position information of the sample cup and the sample to be detected in a first coordinate system;
a first determining module configured to determine a first coordinate correction amount in the first coordinate system according to the distance between the target shooting position and the camera, the position information of the sample cup and the sample to be detected, and calibration parameters of the camera, wherein the first determining module is further configured to: determine position coordinates of a first reference point on the sample cup in the first coordinate system according to the position information of the sample cup; determine position coordinates of a second reference point on the sample to be detected in the first coordinate system, and a length of the sample to be detected, according to the position information of the sample to be detected; determine a thickness of the sample to be detected according to the position coordinates of the first reference point, the position coordinates of the second reference point, and the calibration parameters of the camera; determine an imaging offset according to the thickness of the sample to be detected, the length of the sample to be detected, and the distance between the target shooting position and the camera; and determine the first coordinate correction amount according to the imaging offset and the position information of the sample to be detected;
a second acquisition module configured to acquire pixel coordinates of a target observation point in the initial image and target position coordinates of the target observation point in a second coordinate system;
a second determining module configured to determine corresponding initial position coordinates of the target observation point in the second coordinate system according to the pixel coordinates, the first coordinate correction amount, and the calibration parameters of the camera;
and a control module configured to control the sample stage to move according to the initial position coordinates and the target position coordinates, so that the target observation point moves to a target position.
6. An electronic device comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer instructions, implements the method of any one of claims 1-4.
7. A scanning electron microscope comprising the electronic device of claim 6.
8. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the method of any one of claims 1-4.
CN202211388684.8A 2022-11-08 2022-11-08 Positioning method and device for scanning electron microscope, electronic equipment and storage medium Active CN115616018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211388684.8A CN115616018B (en) 2022-11-08 2022-11-08 Positioning method and device for scanning electron microscope, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115616018A CN115616018A (en) 2023-01-17
CN115616018B true CN115616018B (en) 2023-03-21

Family

ID=84878364

Country Status (1)

Country Link
CN (1) CN115616018B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105225909B (en) * 2015-09-17 2017-03-29 北京大学 A kind of sample platform of scanning electronic microscope positioner and its localization method
FR3051591B1 (en) * 2016-05-17 2020-06-19 Horiba Jobin Yvon Sas MICRO-LOCATION DEVICE AND METHOD FOR IMAGING INSTRUMENT AND MEASURING APPARATUS
CN112630242B (en) * 2020-12-03 2023-01-10 成都先进金属材料产业技术研究院股份有限公司 Navigation method for scanning electron microscope sample
CN112945996A (en) * 2021-01-26 2021-06-11 西安科技大学 Rapid in-situ comparison method based on scanning electron microscope
CN113594076B (en) * 2021-07-22 2023-06-20 上海精测半导体技术有限公司 Alignment method of patterned wafer and semiconductor device
CN114778583A (en) * 2022-03-24 2022-07-22 重庆大学 Scanning electron microscope target point in-situ observation rapid positioning method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant