CN117537710A - Coordinate calibration method and device for microscanning, microscanning system and medium - Google Patents


Info

Publication number
CN117537710A
Authority
CN
China
Prior art keywords
calibration
shooting
image
coordinate
microscopic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311490957.4A
Other languages
Chinese (zh)
Inventor
赵进
高绪栋
李作勇
李培静
康丹丹
解万龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Novelbeam Technology Co ltd
Original Assignee
Qingdao Novelbeam Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Novelbeam Technology Co ltd filed Critical Qingdao Novelbeam Technology Co ltd
Priority to CN202311490957.4A
Publication of CN117537710A


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B21/00 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
    • G01B21/02 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness
    • G01B21/04 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness by measuring coordinates of points
    • G01B21/045 - Correction of measurements
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 - Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The method first uses a calibration image obtained by a navigation camera (also called a first camera) to calculate the pixel coordinates of the calibration points. A displacement system is then controlled to move so that the center of the field of view of the microscope camera coincides in turn with each calibration point on the calibration object, and the mechanical coordinates of the displacement device are recorded at each moment of coincidence. Finally, a coordinate calibration matrix is obtained from the pixel coordinates and mechanical coordinates of the plurality of calibration points, giving the conversion relation between the center of the microscope field of view and the moving position of the displacement device. This realizes a fully automatic coordinate calibration process that requires neither manual participation nor professional training of personnel, improves calibration efficiency, avoids the errors introduced by manual calibration, and improves calibration precision.

Description

Coordinate calibration method and device for microscanning, microscanning system and medium
Technical Field
The disclosure relates to the technical field of microscanning, in particular to a coordinate calibration method and device for microscanning, a microscanning system and a medium.
Background
Microscanning is mainly used for capturing images of micro-scale objects, and the field of view of a microscope camera is usually on the order of micrometers. A microscanning system needs coordinate calibration so that the content of a captured image matches what the user expects to see. At present, however, coordinate calibration of a microscanning system is mostly performed manually: an operator determines the position of a calibration point in a captured calibration image and then visually searches for the calibration point to align the microscope camera with it, thereby establishing the calibration between pixel positions and mechanical positions.
Because the microscopic field of view is small and the required calibration precision is high, manual calibration requires the operator to visually search for a calibration point within the microscopic field of view. The process is time-consuming, often has to be repeated, and thus has low efficiency. It also places high demands on operators, requires professional training, and easily introduces human error, so the resulting calibration precision is difficult to bring up to the required level.
Disclosure of Invention
In order to solve at least one of the above technical problems, the present disclosure provides a coordinate calibration method, device, system and medium for microscanning.
A first aspect of the present disclosure proposes a coordinate calibration method for microscanning, comprising: determining, in a first captured image, first pixel coordinates of calibration points of a plurality of calibration patterns in a calibration image contained in the first captured image; a microscopic shooting step of photographing a calibration object located at a current microscopic shooting position to obtain a second captured image, wherein the second captured image is captured by a microscope camera, and the microscopic shooting position is such that the second captured image includes at least part of the pattern content of a current target calibration pattern; determining, from the at least partial pattern content, second pixel coordinates of the calibration point of the current target calibration pattern in the coordinate system of the second captured image; driving a displacement device so that the calibration object approaches the center of the field of view of the microscope camera in the horizontal direction until the current calibration point coincides with the center of the field of view, and acquiring the mechanical coordinates of the displacement device at the moment of coincidence; a calibration point iteration step of driving the displacement device to move the calibration object to the microscopic shooting position of the next calibration pattern and substituting that position, as the current microscopic shooting position, into the microscopic shooting step to obtain the mechanical coordinates of the displacement device for the calibration point of the next calibration pattern, until the mechanical coordinates of the displacement device for the calibration points of the plurality of calibration patterns are obtained; and obtaining a coordinate conversion relation from the pixel coordinates of the calibration points of the plurality of calibration patterns and the corresponding mechanical coordinates of the displacement device.
According to one embodiment of the present disclosure, the plurality of calibration patterns includes at least 5 calibration patterns, and accordingly at least 5 first pixel coordinates of calibration points are determined.
According to one embodiment of the present disclosure, only one calibration point is included in each calibration pattern.
According to one embodiment of the present disclosure, each of the calibration patterns has the same shape, and is divided into a plurality of sectors having the same center and circumferentially distributed around the center.
According to one embodiment of the present disclosure, the plurality of calibration patterns includes a first calibration pattern and a plurality of second calibration patterns, a size of the first calibration pattern is larger than a size of the second calibration pattern, and a current target calibration pattern included in the first acquired second captured image is the first calibration pattern.
According to one embodiment of the present disclosure, determining first pixel coordinates of calibration points of a plurality of calibration patterns in a calibration image included in a first captured image in the first captured image includes: shooting a calibration object positioned at a first shooting position to obtain a first shooting image, wherein the first shooting image comprises a calibration image on the calibration object, and the calibration image comprises a plurality of calibration patterns; and determining first pixel coordinates of calibration points in the plurality of calibration patterns in the first captured image.
According to one embodiment of the present disclosure, photographing a calibration object located at a first photographing position to obtain a first photographed image includes: driving the displacement device to drive the calibration object to move to a first shooting position, wherein the first shooting position corresponds to the first camera; and controlling the first camera to shoot the calibration object to obtain a first shooting image.
According to one embodiment of the present disclosure, driving the displacement device to move the calibration object to the first shooting position includes: driving the displacement device to move the stage to an initial preset position, the calibration object being arranged on the stage such that, when the stage is at the initial preset position, the calibration object is at the first shooting position.
According to one embodiment of the present disclosure, the calibration object located at the first photographing position is opposite to the first camera.
According to one embodiment of the present disclosure, the calibration points are located at the center positions of the calibration patterns, and determining the first pixel coordinates of the calibration points of the plurality of calibration patterns in the first captured image includes: performing threshold segmentation on the first captured image to obtain a plurality of sub-regions; identifying, for each sub-region, a calibration pattern region according to shape information of the calibration pattern; and determining the center coordinate of the calibration pattern region as the first pixel coordinate, in the first captured image, of the calibration point of that calibration pattern.
According to one embodiment of the present disclosure, capturing a calibration object located at a current microscopic capturing position to obtain a second captured image includes: driving the displacement device to drive the calibration object to move to the current microscopic shooting position; and controlling the microscopic camera to shoot the calibration object to obtain a second shooting image.
According to one embodiment of the disclosure, controlling the microscope camera to photograph the calibration object to obtain the second captured image includes: a focusing shooting step of controlling the microscope camera to photograph the calibration object at a plurality of different focal lengths to obtain a plurality of candidate captured images; a sharpness evaluation step of evaluating the sharpness of each of the obtained candidate captured images to obtain corresponding sharpness values; and an image determining step of determining the candidate captured image having the best sharpness value as the second captured image.
According to one embodiment of the disclosure, controlling the microscope camera to photograph the calibration object at a plurality of different focal lengths includes: controlling the microscope camera to move a plurality of times in the depth-of-field direction by a preset step length, and photographing the calibration object after each movement is completed.
According to one embodiment of the disclosure, controlling the microscope camera to photograph the calibration object to obtain the second captured image includes: performing the focusing shooting step with a first step length as the preset step length to obtain a plurality of first candidate captured images; performing the sharpness evaluation step on the plurality of first candidate captured images to obtain a plurality of first sharpness values; determining a first moving position corresponding to the best first sharpness value; within a moving range containing the first moving position, performing the focusing shooting step with a second step length as the preset step length to obtain a plurality of second candidate captured images, the second step length being smaller than the first step length; performing the sharpness evaluation step on the plurality of second candidate captured images to obtain a plurality of second sharpness values; and determining the second captured image from the plurality of second sharpness values by the image determining step.
According to one embodiment of the present disclosure, the sharpness evaluation step includes: taking a candidate captured image as the image to be evaluated and converting it to grayscale to obtain a gray map; determining an image gradient and a variance of the gray map; and performing a normalization operation on the variance and the image gradient to obtain the sharpness value.
According to one embodiment of the present disclosure, determining, from the at least partial pattern content, the second pixel coordinates of the calibration point of the current target calibration pattern in the coordinate system of the second captured image includes: performing straight-line fitting on the at least partial pattern content to obtain a plurality of straight line segments, wherein the at least partial pattern content includes at least partial areas of a plurality of sectors and the straight line segments are boundaries of the sectors; and determining, from the plurality of straight line segments, the second pixel coordinates of the calibration point of the current target calibration pattern in the coordinate system of the second captured image.
According to one embodiment of the present disclosure, determining, from the plurality of straight line segments, the second pixel coordinates of the calibration point of the current target calibration pattern in the coordinate system of the second captured image includes: grouping the straight line segments into pairs and determining the intersection point of each pair to obtain a plurality of intersection points; and calculating the coordinate mean of the plurality of intersection points, the resulting coordinate mean serving as the second pixel coordinate of the calibration point of the current target calibration pattern, the coordinates of the intersection points being coordinates in the coordinate system of the second captured image.
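As a non-authoritative illustration of the pairwise intersection and averaging described above, the following Python sketch intersects fitted sector-boundary lines and averages the intersection points to estimate the calibration point; the segment representation and the function name estimate_center are assumptions, not part of the disclosure.

```python
from itertools import combinations
import numpy as np

def estimate_center(segments):
    """segments: list of ((x1, y1), (x2, y2)) straight line segments fitted to the
    sector boundaries in the second captured image. Returns the mean of all pairwise
    intersection points, taken as the second pixel coordinate of the calibration point."""
    # represent each segment as a homogeneous line through its two endpoints
    lines = [np.cross([x1, y1, 1.0], [x2, y2, 1.0]) for (x1, y1), (x2, y2) in segments]
    points = []
    for l1, l2 in combinations(lines, 2):
        p = np.cross(l1, l2)              # homogeneous intersection of the two lines
        if abs(p[2]) < 1e-9:              # (nearly) parallel boundaries: no finite intersection
            continue
        points.append(p[:2] / p[2])
    return np.mean(points, axis=0)        # coordinate mean of the intersection points
```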
According to one embodiment of the disclosure, driving the displacement device to drive the calibration object to approach the center of the field of view of the microscope camera in the horizontal direction until the current calibration point coincides with the center of the field of view comprises: a first moving step, driving a displacement device to drive a calibration object to move towards a direction close to the center of a view field of the microscope camera; and a first iteration step of substituting a new microscopic photographing position formed by movement into the microscopic photographing step as the current microscopic photographing position when the first movement step is completed each time, thereby obtaining a new second pixel coordinate until the new second pixel coordinate coincides with the center of the field of view.
According to one embodiment of the disclosure, in the first moving step, when approaching the center of the field of view of the microscope camera, the moving direction is obtained from the relative positional relationship, in the field-of-view coordinate system, between the second pixel coordinate and the center of the field of view, and the moving distance is obtained from the current pixel distance between the second pixel coordinate and the center of the field of view.
According to one embodiment of the disclosure, the distance of movement is obtained according to a physical distance estimation value corresponding to the current pixel distance.
A second aspect of the present disclosure proposes a coordinate calibration device for microscanning, including: a first pixel coordinate determining module, configured to determine, in a first captured image, first pixel coordinates of calibration points of a plurality of calibration patterns in a calibration image contained in the first captured image; a microscopic shooting module, configured to perform a microscopic shooting step that includes photographing a calibration object located at a current microscopic shooting position to obtain a second captured image, wherein the second captured image is captured by a microscope camera and the microscopic shooting position is such that the second captured image includes at least part of the pattern content of a current target calibration pattern; a second pixel coordinate determining module, configured to determine, from the at least partial pattern content, second pixel coordinates of the calibration point of the current target calibration pattern in the coordinate system of the second captured image; a mechanical coordinate acquisition module, configured to drive a displacement device so that the calibration object approaches the center of the field of view of the microscope camera in the horizontal direction until the current calibration point coincides with the center of the field of view, and to acquire the mechanical coordinates of the displacement device at the moment of coincidence; a calibration point iteration module, configured to perform a calibration point iteration step that includes driving the displacement device to move the calibration object to the microscopic shooting position of the next calibration pattern and substituting that position, as the current microscopic shooting position, into the microscopic shooting step to obtain the mechanical coordinates of the displacement device for the calibration point of the next calibration pattern, until the mechanical coordinates of the displacement device for the calibration points of the plurality of calibration patterns are obtained; and a coordinate relation determining module, configured to obtain a coordinate conversion relation from the pixel coordinates of the calibration points of the plurality of calibration patterns and the corresponding mechanical coordinates of the displacement device.
A third aspect of the present disclosure proposes a microscanning system, comprising: the coordinate calibration device for microscanning as defined in claim 21; a shooting system comprising a microscope camera, the microscope camera being configured to photograph an object to be photographed under the control of the coordinate calibration device to obtain a captured image; and a displacement device, configured to be moved under the control of the coordinate calibration device so as to drive the object to be photographed to move synchronously.
A fourth aspect of the present disclosure proposes a readable storage medium having stored therein execution instructions which, when executed by a processor, are configured to implement the coordinate calibration method for microscanning according to any one of the above embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram of a coordinate calibration method for microscanning according to one embodiment of the present disclosure.
Fig. 2 is a schematic view of a first captured image according to one embodiment of the present disclosure.
FIG. 3 is a schematic illustration of a calibration image according to one embodiment of the present disclosure.
Fig. 4 is a flow diagram of determining a first pixel coordinate according to one embodiment of the present disclosure.
Fig. 5 is a flow diagram of capturing a first captured image according to one embodiment of the present disclosure.
Fig. 6 is a flow chart diagram of determining a first pixel coordinate according to another embodiment of the present disclosure.
Fig. 7 is a flow diagram of capturing a second captured image according to one embodiment of the present disclosure.
Fig. 8 is a schematic view of a second captured image according to one embodiment of the present disclosure.
Fig. 9 is a flow chart of capturing a second captured image according to another embodiment of the present disclosure.
Fig. 10 is a flow chart of capturing a second captured image according to yet another embodiment of the present disclosure.
Fig. 11 is a flow chart diagram of determining second pixel coordinates according to one embodiment of the present disclosure.
Fig. 12 is a schematic diagram obtained by performing straight-line fitting on the image of Fig. 8.
Fig. 13 is a flow chart of determining second pixel coordinates according to another embodiment of the present disclosure.
Fig. 14 is a flow diagram of coinciding a current calibration point with a center of a field of view according to one embodiment of the present disclosure.
Fig. 15 is a schematic view of a calibration point coincident with the center of a field of view according to one embodiment of the present disclosure.
FIG. 16 is a schematic diagram of a coordinate calibration device for microscanning implemented with hardware of a processing system, according to one embodiment of the present disclosure.
Fig. 17 is a schematic diagram of a microscanning system according to one embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and the embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant content and not limiting of the present disclosure. It should be further noted that, for convenience of description, only a portion relevant to the present disclosure is shown in the drawings.
In addition, embodiments of the present disclosure and features of the embodiments may be combined with each other without conflict. The technical aspects of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Unless otherwise indicated, the exemplary implementations/embodiments shown are to be understood as providing exemplary features of various details of some ways in which the technical concepts of the present disclosure may be practiced. Thus, unless otherwise indicated, features of the various implementations/embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising," and variations thereof, are used in the present specification, the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof is described, but the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof is not precluded. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms are used as approximation terms and not as degree terms, and as such, are used to explain the inherent deviations of measured, calculated, and/or provided values that would be recognized by one of ordinary skill in the art.
The coordinate calibration method, apparatus, microscanning system, and readable storage medium of microscanning of the present disclosure are described below with reference to the accompanying drawings.
FIG. 1 is a flow diagram of a coordinate calibration method for microscanning according to one embodiment of the present disclosure. Referring to fig. 1, the coordinate calibration method M10 for microscanning of the present embodiment may include the following steps S100, S200, S300, S400, S500, and S600.
S100, determining, in the first captured image, first pixel coordinates of calibration points of a plurality of calibration patterns in a calibration image contained in the first captured image.
Microscopic shooting step S200: photographing the calibration object located at the current microscopic shooting position to obtain a second captured image, wherein the second captured image is captured by the microscope camera, and the microscopic shooting position is such that the second captured image includes at least part of the pattern content of the current target calibration pattern.
S300, determining, from the at least partial pattern content, second pixel coordinates of the calibration point of the current target calibration pattern in the coordinate system of the second captured image.
S400, driving the displacement device so that the calibration object approaches the center of the field of view of the microscope camera in the horizontal direction until the current calibration point coincides with the center of the field of view, and acquiring the mechanical coordinates of the displacement device at the moment of coincidence.
Calibration point iteration step S500: driving the displacement device to move the calibration object to the microscopic shooting position of the next calibration pattern, and substituting that position, as the current microscopic shooting position, into the microscopic shooting step to obtain the mechanical coordinates of the displacement device for the calibration point of the next calibration pattern, until the mechanical coordinates of the displacement device for the calibration points of the plurality of calibration patterns are obtained.
S600, obtaining a coordinate conversion relation from the pixel coordinates of the calibration points of the plurality of calibration patterns and the corresponding mechanical coordinates of the displacement device.
According to the coordinate calibration method for microscanning provided by the embodiment of the present disclosure, a first captured image is taken and the pixel coordinates of a plurality of calibration points on the calibration object are extracted from it; the mechanical coordinates corresponding to these calibration points are then obtained by taking a series of second captured images; and coordinate calibration is performed using the pixel coordinates of the calibration points together with the mechanical coordinates of the displacement device at the moments when the microscope camera is aligned with each calibration point, yielding the conversion relation between the center of the microscopic field of view and the moving position of the displacement device. This realizes a fully automatic coordinate calibration process that requires neither manual participation nor professional training of personnel, improves calibration efficiency, avoids the errors introduced by manual calibration, and improves calibration precision.
The first captured image is obtained by photographing a calibration object. A calibration image is arranged on the calibration object; for example, it may be printed or affixed onto the calibration object, and the calibration object may be a calibration plate. The calibration image comprises a plurality of calibration patterns, each of which may be provided with a calibration point located at, or near, the center of the pattern. The position of a calibration point within the whole first captured image is its first pixel coordinate. The positions of the calibration patterns are identified and analyzed in step S100, which serves the coordinate calibration of the microscanning system.
The first captured image may be obtained by photographing the calibration object with the first camera, also called the navigation camera. The calibration object is placed on the stage before shooting begins. During shooting and calibration the navigation camera remains stationary; the relative position between the calibration object and the navigation camera is changed by moving the displacement device. When coordinate calibration starts, the displacement device may be moved to a preset position A so that the calibration object is brought into the field of view of the navigation camera. Preset position A is chosen so that the first captured image taken by the navigation camera contains the calibration image on the calibration object, and therefore all of the calibration patterns in the calibration image.
After the first captured image is obtained, the calibration patterns in the calibration image can be located, for example by an image processing algorithm, to determine the calibration point coordinates of the N calibration patterns, i.e. N calibration point pixel coordinates. These coordinates form one half of the data used later to determine the coordinate conversion relation.
The other half of the data used to determine the coordinate conversion relation is the mechanical coordinates of the displacement device, i.e. the coordinates of the displacement device in the motion coordinate system when the field of view of the microscope camera is aligned with the calibration point of a calibration pattern (that is, when the center of the field of view coincides with the calibration point). These mechanical coordinates are obtained through the microscopic shooting step S200 to the calibration point iteration step S500. Determining the coordinate conversion relation requires multiple sets of coordinate data, each set comprising the pixel coordinate of a calibration point and the mechanical coordinate of the displacement device corresponding to that calibration point, so each calibration pattern is processed in turn through steps S200 to S500 to obtain the N corresponding mechanical coordinates of the displacement device. A coordinate conversion relation between the N calibration point pixel coordinates and the N mechanical coordinates is then established in step S600, giving a coordinate calibration matrix. With this matrix, the mechanical coordinate corresponding to any pixel coordinate can be calculated, so that during actual shooting the microscope camera can be aimed precisely at the desired coordinate point and an ideal image is obtained.
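As a non-authoritative illustration of how such a coordinate calibration matrix could be fitted from the N coordinate pairs, the sketch below estimates a 2x3 affine mapping from pixel coordinates to mechanical coordinates by least squares; the function names and the numeric example are assumptions, not the patent's implementation.

```python
import numpy as np

def fit_calibration_matrix(pixel_pts, mech_pts):
    """pixel_pts, mech_pts: (N, 2) arrays of matched calibration-point coordinates.
    Returns a 2x3 affine matrix M such that mech ~= M @ [px, py, 1]."""
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    mech_pts = np.asarray(mech_pts, dtype=float)
    A = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])   # homogeneous pixel coords, shape (N, 3)
    M, *_ = np.linalg.lstsq(A, mech_pts, rcond=None)           # least-squares solution, shape (3, 2)
    return M.T                                                 # coordinate calibration matrix, shape (2, 3)

def pixel_to_mechanical(M, px, py):
    """Map any pixel coordinate to the corresponding mechanical coordinate."""
    return M @ np.array([px, py, 1.0])

# Made-up example: 5 calibration points, pixel coordinates vs. stage coordinates (mm).
pix = [(120, 80), (900, 95), (510, 500), (130, 910), (905, 920)]
mech = [(1.2, 0.8), (9.0, 0.9), (5.1, 5.0), (1.3, 9.1), (9.1, 9.2)]
M = fit_calibration_matrix(pix, mech)
print(pixel_to_mechanical(M, 500, 500))
```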
In the microscopic shooting step S200, the second captured image is obtained by photographing the calibration object with the microscope camera. The field of view of the microscope camera is far smaller than that of the navigation camera; in particular, it may cover only about one percent of the area imaged by the navigation camera. The microscope camera remains stationary during shooting and calibration; the relative position between the calibration object and the microscope camera is changed by moving the displacement device. After the first captured image has been obtained, the displacement device can be moved to another preset position B so that the calibration object enters the field of view of the microscope camera. Preset position B is chosen so that the second captured image taken by the microscope camera contains at least part of the pattern content of one calibration pattern on the calibration object.
Because the mechanical coordinates of the displacement device corresponding to the calibration points of the plurality of calibration patterns need to be determined, the microscope camera photographs the calibration patterns in turn; the calibration pattern currently being photographed by the microscope camera is the current target calibration pattern.
The style of the calibration pattern points toward its calibration point when observed; for example, the pattern content may include line segments directed at the calibration point. Whether or not the calibration point of the current target calibration pattern (hereinafter the current calibration point) lies within the field of view of the microscope camera, i.e. whether or not it can actually be seen by the microscope camera, its pixel coordinates in the coordinate system of the second captured image (the field-of-view coordinate system of the microscope camera), namely the second pixel coordinates, can be determined by analyzing the pattern content visible in the field of view. It is understood that the second pixel coordinate is measured from the origin of the second captured image, which may be its upper-left corner.
It should be noted that the first pixel coordinate and the second pixel coordinate are both coordinates of a calibration point, but they lie in different coordinate systems: the first pixel coordinate is a position in the image captured by the navigation camera, while the second pixel coordinate is a position in the field-of-view coordinate system of the microscope camera, and their origins differ. Moreover, the area imaged by the navigation camera may be on the order of a hundred times that of the microscope camera, so each navigation-camera pixel corresponds to a much larger physical distance. For this reason, the second pixel coordinate is determined when obtaining the mechanical coordinate of the displacement device; computing with the second pixel coordinate reduces the error compared with computing with the first pixel coordinate.
The second pixel coordinate and the field-of-view center point share the same coordinate system, both referenced to the origin of the current field-of-view image, so once the second pixel coordinate is obtained the pixel distance between it and the field-of-view center can be calculated. The displacement device is then moved according to this pixel distance until the two coincide. At that moment the microscope camera is aligned with the calibration point of the current calibration pattern, the center of the field of view is the current calibration point, and the mechanical coordinates of the displacement device are those corresponding to the current calibration point. A first set of coordinate data (the pixel coordinate of the first calibration point and the corresponding mechanical coordinate of the displacement device) has thus been obtained.
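A minimal sketch of this centering loop is given below, assuming hypothetical helpers grab_micro_image, detect_calibration_point and move_stage_relative, a rough micrometers-per-pixel estimate, and a pixel tolerance; none of these names or values come from the disclosure, and the sign convention of the stage axes would need to be checked on the real system.

```python
def center_on_calibration_point(grab_micro_image, detect_calibration_point,
                                move_stage_relative, width, height,
                                um_per_pixel=0.5, tol_px=2, max_iter=20):
    """Iteratively move the stage until the calibration point coincides with the
    field-of-view center of the microscope camera (within tol_px pixels)."""
    cx, cy = width / 2.0, height / 2.0            # field-of-view center in pixels
    for _ in range(max_iter):
        image = grab_micro_image()                 # new second captured image
        px, py = detect_calibration_point(image)   # second pixel coordinate
        dx, dy = cx - px, cy - py                  # pixel offset to the center
        if abs(dx) <= tol_px and abs(dy) <= tol_px:
            return True                            # coincidence reached
        # convert the pixel offset to an estimated physical move of the stage
        move_stage_relative(dx * um_per_pixel, dy * um_per_pixel)
    return False                                   # did not converge within max_iter
```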
The determination of the coordinates for the calibration points of the other calibration patterns then begins with the calibration point iteration step S500. Specifically, the displacement device is moved from the mechanical coordinate corresponding to the first calibration point to another preset position C. The physical spacing between the calibration points of the calibration patterns is known, so preset position C can be set from the physical distance between the first calibration point and the next one. When the displacement device reaches preset position C, the stage moves with it, and at least part of the pattern content of the next calibration pattern (the new current calibration pattern) appears in the field of view of the microscope camera. The microscopic shooting step S200 is then performed, yielding the initial second captured image for the second calibration pattern.
The second pixel coordinate of the calibration point of the current calibration pattern can then be determined. The displacement device is moved until the center of the field of view of the microscope camera coincides with the current calibration point, which yields the mechanical coordinates of the displacement device for that point; at this moment the coordinate data of the calibration point of the second calibration pattern, i.e. the second set of coordinate data, have been obtained. The remaining calibration patterns are photographed in the same way, the second pixel coordinates of their calibration points are identified, the corresponding mechanical coordinates of the displacement device are determined, and so on. Once N sets of coordinate data have been obtained, the coordinate calibration matrix can be generated.
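Tying these steps together, the following non-authoritative sketch iterates over the calibration patterns and then fits the matrix, reusing center_on_calibration_point and fit_calibration_matrix from the earlier sketches; pattern_positions (the expected microscopic shooting position of each pattern, derived from the known physical spacing) and all helper functions are assumptions.

```python
def calibrate(first_pixel_coords, pattern_positions, move_stage_to, read_stage_xy,
              grab_micro_image, detect_calibration_point, move_stage_relative,
              width, height):
    """Collect one (pixel, mechanical) coordinate pair per calibration pattern and
    fit the coordinate calibration matrix from the N pairs."""
    mech_coords = []
    for position in pattern_positions:                      # one per calibration pattern
        move_stage_to(position)                             # its microscopic shooting position
        center_on_calibration_point(grab_micro_image, detect_calibration_point,
                                    move_stage_relative, width, height)
        mech_coords.append(read_stage_xy())                 # mechanical coordinate at coincidence
    return fit_calibration_matrix(first_pixel_coords, mech_coords)
```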
The coordinate calibration method M10 of this embodiment shortens the overall calibration time to about half an hour, saves considerable labor cost, and makes the precision of the calibration process depend only on the resolution of the camera, avoiding the introduction of human error and improving calibration precision. It thus offers short calibration time, high calibration precision, and freedom from the subjective errors of manual calibration.
Fig. 2 is a schematic view of a first captured image according to one embodiment of the present disclosure. Fig. 3 is a schematic illustration of a calibration image according to one embodiment of the present disclosure. Referring to Figs. 2 and 3, the calibration image may contain at least 5 calibration patterns, so the number of determined first pixel coordinates of calibration points is at least 5, i.e. N ≥ 5, and accordingly at least 5 sets of coordinate data are obtained. The more coordinate data sets are obtained, the more accurate the resulting coordinate calibration matrix and the more precise the positioning of the microscope camera during actual microscanning.
The side of the calibration object shown in Figs. 2 and 3 that faces the navigation camera is a rectangular surface entirely occupied by the calibration image. The calibration image contains 5 calibration patterns, each generally circular. It should be understood that coordinate data sets need not be acquired for every calibration pattern in the calibration image; for example, if the calibration image contains 10 calibration patterns, coordinate data may be acquired for only 5 of them or for all 10 during calibration.
Each calibration pattern may contain only one calibration point, and all calibration patterns may have the same shape. Each calibration pattern may be divided into a plurality of sectors that share the same center and are distributed circumferentially around it. Illustratively, a calibration pattern may also be formed by concentric circles of different radii, and its outer boundary may be elliptical, rectangular, or another shape. In this embodiment the calibration patterns shown in Figs. 2 and 3 are adopted: each calibration pattern is circular and carries one calibration point, so Figs. 2 and 3 contain 5 calibration points. The calibration point may be the center of the circular pattern.
The calibration pattern may be a Siemens star, composed of a plurality of sectors uniformly distributed around the pattern center. The calibration patterns may be of the same or different sizes, and within one calibration pattern the sectors have the same radius. With a Siemens star, the line segments forming the sector boundaries radiate outward from the circle center, so compared with a concentric-circle calibration pattern the Siemens star points much more strongly toward the center. When the circle center is used as the calibration point, this directionality makes it possible to judge accurately in which direction the center lies relative to the current field of view, which improves the accuracy of the computed second pixel coordinates and lets the displacement device move in the correct direction as the calibration point approaches the center of the field of view.
The plurality of calibration patterns in the calibration image may include a first calibration pattern and a plurality of second calibration patterns, the first calibration pattern being larger than the second calibration patterns, and the current target calibration pattern contained in the first acquired second captured image may be the first calibration pattern. Illustratively, among the 5 calibration patterns the larger one is the first calibration pattern and the four patterns surrounding it are second calibration patterns. The first and second calibration patterns may be Siemens stars of different sizes; for example, the diameter of the first calibration pattern may be set to 8 millimeters. Since the assembly error between the navigation camera and the microscope camera is small, for example less than 2 mm, the field of view of the microscope camera will fall within the first calibration pattern when the calibration object is moved to the microscopic shooting position.
When the first captured image has been taken and the displacement device is moved to preset position B for the first time, that position corresponds to the first calibration pattern; in other words, the first calibration pattern to appear in the field of view of the microscope camera is the (larger) first calibration pattern. Using the larger pattern first guarantees that some calibration pattern is certain to be present in the microscope camera's field of view, avoiding the situation where, because of errors or the like, no calibration pattern appears in the field of view after the move.
Fig. 4 is a flow diagram of determining a first pixel coordinate according to one embodiment of the present disclosure. Referring to fig. 4, step S100 may include the following steps S110 and S120.
S110, shooting a calibration object located at a first shooting position to obtain a first shooting image, wherein the first shooting image comprises a calibration image on the calibration object, and the calibration image comprises a plurality of calibration patterns.
S120, determining first pixel coordinates of calibration points in the plurality of calibration patterns in the first photographed image.
The shooting positions are positions of the calibration object, whereas the preset positions A, B, and C are positions of the displacement device; the two sets of positions correspond to one another. Preset position A corresponds to the first shooting position. If the displacement device is initially at preset position A, the navigation camera can be controlled to photograph the calibration object without any movement. If the displacement device starts at another position, it must first be moved to preset position A so that the calibration object enters the field of view of the navigation camera, after which the photograph is taken.
Taking Figs. 2 and 3 as an example, the calibration image is the square portion of the first captured image, and the 5 circles in it are the 5 calibration patterns. Since the layout of the calibration pattern is known and the position of its calibration point within it is also known, the first pixel coordinate of each calibration point in the first captured image can be obtained by computing its position relative to the origin, taking the upper-left corner of the first captured image as that origin. For example, suppose the first captured image has size l1 x h1 and the calibration image has size l2 x h2, with l2 < l1, l2 = h2, and h2 < h1. With the upper-left corner of the first captured image as the origin, the first pixel coordinates of the 5 calibration points are (x1, y1), (x2, y2), (x3, y3), (x4, y4), and (x5, y5), where x and y may be expressed in pixels.
Fig. 5 is a flow diagram of capturing a first captured image according to one embodiment of the present disclosure. Referring to fig. 5, step S110 may include the following steps S111 and S112.
S111, driving the displacement device to drive the calibration object to move to a first shooting position, wherein the first shooting position corresponds to the first camera.
S112, controlling the first camera to shoot the calibration object, and obtaining a first shooting image.
Illustratively, step S111 may include: driving the displacement device to move the stage to an initial preset position, the calibration object being arranged on the stage such that, when the stage is at the initial preset position, the calibration object is at the first shooting position. The calibration object located at the first shooting position may directly face the first camera.
The calibration object may be detachably fixed to the stage, or it may simply be placed on the stage with the displacement device moving slowly enough that the relative position between the calibration object and the stage does not change during movement. A movement instruction is sent to the displacement device so that it moves to the initial preset position (i.e. preset position A); the calibration object is then at the first shooting position with its calibration image facing the lens of the navigation camera, and the resulting first captured image contains the entire calibration image on the face of the calibration object that faces the navigation camera.
Fig. 6 is a flow chart diagram of determining a first pixel coordinate according to another embodiment of the present disclosure. The calibration point may be located at a central position of the calibration pattern. When the calibration point is the center point of the calibration pattern, referring to fig. 6, step S120 may include the following steps S121, S122 and S123.
S121, performing threshold segmentation on the first shot image to obtain a plurality of sub-regions.
S122, for each sub-area, identifying the calibration pattern area from the sub-areas according to the shape information of the calibration pattern.
S123, determining the center coordinates of the calibration pattern region and taking them as the first pixel coordinates, in the first captured image, of the calibration point of that calibration pattern.
The pixel coordinates are extracted from the first captured image with a feature extraction algorithm so that the coordinate data sets can be assembled later. Taking Fig. 2 as an example, threshold segmentation of the calibration image in Fig. 2 yields 5 sub-regions, each containing one calibration pattern. Because the calibration patterns are circular, their roundness is known, and the size relationship among the 5 circles is also known, 5 circular regions can be identified from the 5 sub-regions; these are the calibration pattern regions, and outliers within them are removed. The position of the circle center is then determined from each calibration pattern region, giving the pixel coordinate of the calibration point in the calibration image.
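A hedged OpenCV sketch of this segmentation and center extraction is given below; the Otsu threshold, the morphological close used to merge a star pattern's wedges into one blob, and the area and circularity limits are assumptions rather than values from the disclosure.

```python
import cv2
import numpy as np

def first_pixel_coordinates(first_image_gray, min_area=500.0, min_circularity=0.6):
    """Return the (x, y) pixel centers of the calibration patterns found in the
    navigation-camera image (origin at the top-left corner)."""
    _, binary = cv2.threshold(first_image_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # merge the wedges of a star-shaped pattern into a single roughly circular blob
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < min_area:
            continue                                   # reject small noise regions
        perimeter = cv2.arcLength(contour, True)
        circularity = 4.0 * np.pi * area / (perimeter ** 2 + 1e-9)
        if circularity < min_circularity:
            continue                                   # shape check against the known circular pattern
        m = cv2.moments(contour)
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
```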
Fig. 7 is a flow diagram of capturing a second captured image according to one embodiment of the present disclosure. Referring to fig. 7, the microscopic photographing step S200 may include the following steps S210 and S220.
S210, driving the displacement device to drive the calibration object to move to the current microscopic shooting position.
S220, controlling the microscopic camera to shoot the calibration object to obtain a second shot image.
The microscopic shooting position corresponds to the microscope camera. After the first captured image is obtained, a movement instruction may be sent to the displacement device so that it moves to preset position B. Preset position B corresponds to the first calibration pattern to be identified; at that moment the calibration object is at the microscopic shooting position, and the current microscopic shooting position likewise corresponds to the first calibration pattern to be identified. Different calibration patterns to be identified correspond to different preset positions, and the current microscopic shooting position changes accordingly.
Fig. 8 is a schematic view of a second captured image according to one embodiment of the present disclosure. Referring to Fig. 8, the second captured image obtained after moving to preset position B shows a partial area of the first calibration pattern; in this case the calibration point of the first calibration pattern does not appear in the second captured image, as it lies outside the field of view of the microscope camera. It will be appreciated that the calibration point may also appear in the second captured image from the start. Whether or not the calibration point appears in the initial second captured image of the first calibration pattern, the subsequent steps bring it into coincidence with the center of the field of view.
Fig. 9 is a flow chart of capturing a second captured image according to another embodiment of the present disclosure. Referring to fig. 9, step S220 may include the following steps S10, S20, and S30.
Focusing shooting step S10: controlling the microscope camera to photograph the calibration object at a plurality of different focal lengths to obtain a plurality of candidate captured images.
Sharpness evaluation step S20: evaluating the sharpness of each of the obtained candidate captured images to obtain corresponding sharpness values.
Image determination step S30: determining the candidate captured image with the best sharpness value as the second captured image.
The focal length is adjusted by changing the height distance between the microscope camera and the calibration object, and images are captured at the different focal lengths, producing images of varying sharpness. The sharpest image is then identified by computing the sharpness values, which ensures that the second captured image is as sharp as possible and improves the accuracy of the subsequent determination of the second pixel coordinates.
Since the microscope camera itself is not arranged to move, the change of focal length is effected by the displacement device: the displacement device is an XYZ three-axis motion device that can move in the height direction (i.e. the depth-of-field direction), changing the height distance between the calibration object and the microscope camera and thereby adjusting the focus.
The focusing shooting step S10 may include: controlling the microscope camera to move a plurality of times in the depth-of-field direction by a preset step length, and photographing the calibration object after each movement is completed. For focus shooting, m1 different height distances may be preselected, the difference between adjacent heights being the preset step length. The preset step lengths may all be equal, for example a relatively small, constant step used for every Z-axis movement. They may also differ; for example, larger steps may be used starting from the lowest position, smaller steps after a certain number of movements, and larger steps again up to the highest position. The displacement device is moved along the Z axis in the preset steps to each height in turn, and the microscope camera captures an image immediately after each movement, giving m1 candidate captured images. Specifically, the displacement device drives the stage along the Z axis by the preset step length, and the microscope camera takes one photograph after every movement to obtain a candidate captured image; the sharpness value of each candidate captured image is determined as soon as it is obtained. When all movements along the Z axis are finished, all candidate captured images are available and the second captured image can be selected.
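A minimal sketch of this fixed-step focus sweep is shown below; move_stage_z, grab_micro_image and sharpness are assumed helpers (the last corresponding to the sharpness evaluation step described further on), and the step values are illustrative only.

```python
def focus_sweep(move_stage_z, grab_micro_image, sharpness, z_start, step_um, n_steps):
    """Capture one candidate image at each of n_steps + 1 equally spaced Z positions
    and return the sharpest one together with its Z position."""
    best_image, best_z, best_score = None, None, float("-inf")
    for i in range(n_steps + 1):
        z = z_start + i * step_um
        move_stage_z(z)                      # change the object-to-camera distance
        image = grab_micro_image()           # one candidate captured image per position
        score = sharpness(image)
        if score > best_score:
            best_image, best_z, best_score = image, z, score
    return best_image, best_z
```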
It will be understood that, in the focusing shooting step S10, the focal-length adjustment (i.e. the Z-axis movement step) can also be determined from the sharpness value obtained in the sharpness evaluation step S20 at that moment; that is, the movement step need not be a fixed preset value but can be adjusted on the fly according to the candidate captured image. Specifically, the first shot of the current calibration pattern may be taken at the lowest position or at some preset position; after the first candidate captured image is obtained, its sharpness value is determined by the sharpness evaluation step S20, and the distance to the next shooting position (the movement step) is chosen from that value. A higher sharpness value indicates that the current position is closer to the position of the sharpest image, so the next movement step is set smaller; conversely, a lower value indicates that the sharpest position is farther away, so the next movement step is set larger. In other words, there is a nonlinear inverse relationship between the sharpness value and the Z-axis movement step. Once the next movement step is obtained, the displacement device is moved accordingly, and the process repeats until a stop condition is met. The stop condition may be that the sharpness value of the current candidate captured image is smaller than that of the previous one.
After a plurality of candidate captured images corresponding to a plurality of different focal lengths are obtained, sharpness evaluation is performed on each candidate captured image by a sharpness evaluation algorithm to calculate its sharpness value. The highest sharpness value is then determined, and the corresponding candidate captured image, i.e., the one with the best image sharpness, can be used as the second captured image for subsequent operations.
The sharpness evaluation step S20 may include: taking the candidate captured image as the image to be evaluated and graying it to obtain a gray-scale image; determining the image gradient and the variance of the gray-scale image; and performing a normalization operation on the variance and the image gradient to obtain the sharpness value. For each candidate captured image, the image is first grayed; a filtering operator is convolved with the image to extract the edge regions; the edge gradients are amplified by a mathematical operation and the overall image gradient is calculated; the variance of the gray-scale image is calculated; the image gradient and the gray-scale variance are then normalized and combined by weight, and the result is used as the sharpness evaluation value.
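A minimal sketch of such a gradient-plus-variance score is given below; the derivative filter, the x/(x+1) normalization, and the equal weighting are illustrative assumptions, since the disclosure only requires that the image gradient and the gray-scale variance be normalized and combined by weight.

```python
import numpy as np

def sharpness_value(img_rgb, w_grad=0.5, w_var=0.5):
    """Gray the image, amplify edge gradients, and combine gradient and variance.

    img_rgb: H x W x 3 array. The weights w_grad / w_var are illustrative.
    """
    # Grayscale by the usual luminance weights.
    gray = img_rgb[..., :3] @ np.array([0.299, 0.587, 0.114])
    # Edge extraction with a simple derivative filter (stand-in for the
    # convolution with a filtering operator described in the text).
    gy, gx = np.gradient(gray)
    grad = gx**2 + gy**2                 # squaring amplifies the edge response
    mean_grad = grad.mean()
    var = gray.var()
    # Normalise each term to [0, 1) before the weighted sum so the two
    # quantities are comparable.
    g = mean_grad / (mean_grad + 1.0)
    v = var / (var + 1.0)
    return w_grad * g + w_var * v
```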
Fig. 10 is a flow chart of capturing a second captured image according to yet another embodiment of the present disclosure. Referring to fig. 10, step S220 may include the following steps S221, S222, S223, S224, S225, and S226.
S221, executing the focusing shooting step S10 with the first step length as the preset step length to obtain a plurality of first candidate captured images.
S222, executing the sharpness evaluation step S20 on the plurality of first candidate captured images to obtain a plurality of first sharpness values.
S223, determining the first moving position corresponding to the best first sharpness value.
S224, within a moving range containing the first moving position, executing the focusing shooting step S10 with the second step length as the preset step length to obtain a plurality of second candidate captured images, the second step length being smaller than the first step length.
S225, executing the sharpness evaluation step S20 on the plurality of second candidate captured images to obtain a plurality of second sharpness values.
S226, determining the second captured image from the plurality of second sharpness values through the image determination step S30.
The capture of the second captured image is thus divided into coarse focusing and fine focusing. Coarse focusing corresponds to a larger depth-of-field moving step length and fine focusing to a smaller one: coarse focusing is used to roughly determine the approximate range of shooting positions in which the sharpest image can be captured, and fine focusing is then used within that range to obtain the sharpest image. This reduces the number of focusing movements and candidate images that must be captured and improves the efficiency of the whole focusing process.
Assume a focusing range of 50 micrometers, i.e., the displacement device has a 50-micrometer movement range in the Z-axis direction for focusing. A first step length of 5 micrometers and a second step length of 0.2 micrometers may be selected. Coarse focusing is started first: the stage moves 10 times with the large 5-micrometer step, stopping to shoot after every 5 micrometers of movement, so that 11 first candidate captured images are obtained in total (including the image at the starting position). Sharpness evaluation is performed on the 11 first candidate captured images to obtain 11 first sharpness values. If the highest value is the 6th first sharpness value, the first moving position is 25 micrometers. The moving range for fine focusing is then chosen around the 25-micrometer height, for example [20, 30] micrometers, and fine focusing is started: the stage moves 50 times from 20 micrometers to 30 micrometers with the small 0.2-micrometer step, stopping to shoot after every 0.2 micrometers, giving 51 second candidate captured images. Sharpness evaluation is performed on the 51 second candidate captured images to obtain 51 second sharpness values, and the second candidate captured image with the largest sharpness value is taken as the second captured image.
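The coarse-to-fine procedure of this example can be sketched as follows; capture_at(z) and sharpness_value(img) are assumed helpers (a stage move plus capture, and a sharpness metric such as the one above), and the step lengths mirror the 5-micrometer and 0.2-micrometer values of the example.

```python
import numpy as np

def coarse_to_fine_focus(capture_at, sharpness_value,
                         z_lo=0.0, z_hi=50.0, coarse=5.0, fine=0.2, half_window=5.0):
    """Two-pass autofocus: coarse sweep over [z_lo, z_hi], fine sweep around the best coarse Z."""
    # Coarse pass: 5 um steps over 50 um -> 11 candidate images.
    coarse_z = np.arange(z_lo, z_hi + coarse / 2, coarse)
    coarse_s = [sharpness_value(capture_at(z)) for z in coarse_z]
    z_best = float(coarse_z[int(np.argmax(coarse_s))])     # e.g. 25 um in the example

    # Fine pass: 0.2 um steps over [z_best - 5, z_best + 5] -> 51 candidate images.
    lo = max(z_lo, z_best - half_window)
    hi = min(z_hi, z_best + half_window)
    fine_z = np.arange(lo, hi + fine / 2, fine)
    fine_imgs = [capture_at(z) for z in fine_z]
    fine_s = [sharpness_value(img) for img in fine_imgs]
    k = int(np.argmax(fine_s))
    return fine_imgs[k], float(fine_z[k])                  # second captured image and its Z
```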
Fig. 11 is a flow chart diagram of determining second pixel coordinates according to one embodiment of the present disclosure. Referring to fig. 11, step S300 may include step S310 and step S320.
S310, performing straight-line fitting on the at least partial pattern content to obtain a plurality of straight line segments, wherein the at least partial pattern content includes at least partial regions of a plurality of sectors, and the straight line segments are boundaries of the sectors.
S320, determining the second pixel coordinates of the calibration point of the current target calibration pattern under the coordinate system of the second shooting image according to the plurality of straight line segments.
The sector boundary lines in the second captured image are extracted through straight-line fitting, the intersection points of the boundary lines are determined, and the common center of the sectors is obtained. Taking the second captured image shown in fig. 8 as an example, the sector boundary lines in fig. 8 are extracted by a straight-line fitting algorithm to obtain the straight line segments of the sector boundaries, and the intersection points of the straight line segments are then calculated. Since the center of the Siemens-star pattern is used as the calibration point, the intersection of the straight line segments, i.e., the common center of the sectors, gives the calibration point. The pixel coordinates of the calibration point share the coordinate system of the second captured image, which is equivalent to sharing the coordinate system of the current field of view of the microscope camera. If the origin of the current field of view of the microscope camera is the upper-left corner of the field of view, the upper-left corner of the second captured image is the origin, and the pixel coordinates of the calibration point are obtained from the distance between the calibration point and the origin.
The straight-line fitting algorithm extracts the coordinate points on each boundary line, these coordinate points being expressed in the coordinate system of the second captured image. The coordinate points can then be fitted to straight lines using a least-squares method, which may be a least-squares M-estimator method. The extracted straight line segments may correspond to only some of the boundaries contained in the at least partial pattern content, or to all of them.
Fig. 12 is a schematic diagram obtained by performing straight-line fitting on fig. 8. Referring to fig. 12, assuming that all straight line segments in fig. 8 are extracted, the straight-line fitting image shown in fig. 12 is obtained; the straight line segments in fig. 12 are the extracted segments.
Fig. 13 is a flow chart of determining second pixel coordinates according to another embodiment of the present disclosure. Referring to fig. 13, step S320 may include step S321 and step S322.
S321, taking every two straight line segments as a group of straight lines, and determining the intersection point of each group among the different groups of straight lines to obtain a plurality of intersection points.
S322, calculating the coordinate mean of the plurality of intersection points and taking the obtained coordinate mean as the second pixel coordinates of the calibration point of the current target calibration pattern, wherein the coordinates of the intersection points are coordinates in the coordinate system of the second captured image.
For some or all of the straight line segments in fig. 12, the intersection points of every two adjacent straight line segments may be determined to obtain a plurality of intersection points, or the intersection points of any two straight line segments may be determined to obtain a plurality of intersection points. After the plurality of intersection points are obtained, the mean of their X-axis coordinates and the mean of their Y-axis coordinates are calculated, where the origin of the X and Y axes may be the upper-left corner of the second captured image. The resulting mean coordinates are the coordinates of the calibration point, that is, the second pixel coordinates.
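Steps S310 through S322 can be sketched as follows; each sector boundary is assumed to be supplied as a set of edge-pixel coordinates (for example from an edge detector), and a plain total-least-squares fit stands in for the least-squares M-estimator mentioned above.

```python
import numpy as np
from itertools import combinations

def fit_line(points):
    """Total-least-squares line fit: returns (point_on_line, unit_direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)   # principal direction of the centred points
    return centroid, vt[0]

def intersect(l1, l2):
    """Intersection of two parametric lines p + t*d; returns None if near-parallel."""
    (p1, d1), (p2, d2) = l1, l2
    a = np.array([d1, -d2]).T
    if abs(np.linalg.det(a)) < 1e-9:
        return None
    t = np.linalg.solve(a, p2 - p1)
    return p1 + t[0] * d1

def star_center(boundary_point_sets):
    """Second pixel coordinates: mean of pairwise intersections of fitted boundary lines."""
    lines = [fit_line(pts) for pts in boundary_point_sets]
    xs = [intersect(a, b) for a, b in combinations(lines, 2)]
    xs = [x for x in xs if x is not None]
    return np.mean(xs, axis=0)   # (x, y) in the second captured image's coordinate system
```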
Fig. 14 is a flow diagram of coinciding a current calibration point with a center of a field of view according to one embodiment of the present disclosure. Referring to fig. 14, step S400 may include a first moving step S410 and a first iterating step S420.
The first moving step S410: the displacement device is driven to move the calibration object toward the center of the field of view of the microscope camera.
The first iteration step S420: each time the first moving step is completed, the new microscopic shooting position formed by the movement is substituted into the microscopic shooting step as the current microscopic shooting position to obtain new second pixel coordinates, until the new second pixel coordinates coincide with the center of the field of view.
When the microscope camera is actually used for microscopic scanning, the object to be scanned is aligned with the center of the field of view, so during calibration the coincidence of the field-of-view center with the calibration point indicates that the displacement device has moved into place. After the second pixel coordinates of the calibration point are obtained, since the shooting size of the field of view is known, the direction and distance of the calibration point relative to the current field-of-view center can be determined from the second pixel coordinates and the shooting size of the field of view.
Illustratively, in the first moving step S410, when approaching the center of the field of view of the microscope camera, the moving direction is obtained according to the relative positional relationship of the second pixel coordinates and the center of the field of view in the field of view coordinate system, and the moving distance is obtained according to the pixel distance between the second pixel coordinates and the center of the field of view.
Continuing with the example of fig. 12, the coordinate system of fig. 12 is the same as the current field-of-view coordinate system. In the field of view of fig. 12, the calibration point lies outside the field of view. Taking the upper-left corner of the field of view as the origin, the pixel coordinates of the field-of-view center can be obtained from the length and width of the field of view; the X-axis coordinate of the second pixel coordinates is greater than that of the current field-of-view center, while the Y-axis coordinate of the second pixel coordinates is less than that of the field-of-view center. The pixel distance (pixel deviation) and the relative positional relationship between the two are thereby known, and the displacement device is controlled to move according to this pixel distance and relative positional relationship.
The moving distance can be obtained from a physical-distance estimate corresponding to the current pixel distance. In the first moving step S410, the change in the mechanical coordinates of the displacement device during the movement is obtained by converting the pixel distance; because calibration has not yet been completed at this point, the conversion between mechanical coordinates and pixel coordinates uses an estimated physical distance. As a result, when the first moving step S410 of the current calibration image is completed for the first time, the current calibration point may or may not coincide exactly with the center of the field of view, but it is closer to the field-of-view center than before the first moving step S410 was performed. Assuming the pixel distance between the current second pixel coordinates and the field-of-view center coordinates is 10 pixels, and the physical-distance estimate corresponding to 10 pixels is 10 micrometers, the displacement device is controlled to move 10 micrometers.
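A sketch of the pixel-deviation-to-movement conversion used in the first moving step is given below; um_per_pixel is the rough physical-distance estimate available before calibration is complete, and the sign convention between image axes and stage axes is an assumption that depends on the actual mounting.

```python
def move_toward_center(point_px, fov_size_px, um_per_pixel=1.0):
    """Return the (dx, dy) stage move, in micrometres, that brings the calibration
    point toward the field-of-view centre.

    point_px:     (x, y) second pixel coordinates of the calibration point.
    fov_size_px:  (width, height) of the microscope field of view in pixels.
    um_per_pixel: rough physical-distance estimate; refined once calibration is done.
    """
    cx, cy = fov_size_px[0] / 2.0, fov_size_px[1] / 2.0
    dx_px = cx - point_px[0]          # positive: point lies left of the centre
    dy_px = cy - point_px[1]          # positive: point lies above the centre
    # The sign convention between image axes and stage axes is an assumption here.
    return dx_px * um_per_pixel, dy_px * um_per_pixel

# Example from the text: a 10 px deviation at ~1 um/px gives a ~10 um stage move.
```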
Each time the first moving step S410 of the current calibration image is completed, it is determined whether the second pixel coordinates of the calibration point coincide with the center of the current field of view, i.e., whether the second pixel coordinates are identical to the coordinate values of the field-of-view center. If they coincide, the mechanical coordinates corresponding to the current calibration point can be acquired. If they do not coincide, the new field of view and the new microscopic shooting position formed after the movement are taken as the current field-of-view center and the current microscopic shooting position, and the microscopic shooting of step S200 is performed again to obtain new pattern content. Compared with before the first moving step S410 was performed, the captured pattern content may change; for example, the calibration point may change from lying outside the field of view to appearing inside it.
It will be understood that, for the same calibration pattern, the second and subsequent executions of the microscopic shooting step during the process of bringing the current calibration point into coincidence with the field-of-view center do not require the focusing shooting step S10, the sharpness evaluation step S20, or the image determination step S30; the image is captured directly after the calibration object moves to the new microscopic shooting position.
After the new pattern content is obtained, new second pixel coordinates are determined by straight-line fitting under the new field of view according to the new pattern content; the new second pixel coordinates correspond to the field-of-view coordinate system of the new field of view. Whether the new second pixel coordinates coincide with the field-of-view center is then judged. The loop iterates in this way until the two coincide, at which point the mechanical coordinates of the displacement device are recorded; these mechanical coordinates, together with the second pixel coordinates of the current calibration point, form a set of coordinate data of the current calibration pattern. Fig. 15 is a schematic view of a calibration point coinciding with the center of a field of view according to one embodiment of the present disclosure. Referring to fig. 15, step S400 is performed at least once so that, under the current field of view, the coordinates of the field-of-view center and the coordinates of the calibration point are the same.
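The loop formed by the first moving step and the first iteration step can be summarized in the following sketch, which reuses hypothetical hardware interfaces (capture, locate, move_stage, get_stage_xy); the pixel tolerance used to decide that the point "coincides" with the center is an assumption.

```python
def center_calibration_point(capture, locate, move_stage, get_stage_xy,
                             fov_size_px, um_per_pixel=1.0, tol_px=0.5, max_iters=20):
    """Repeat capture -> locate calibration point -> move stage until the point
    coincides with the field-of-view centre, then return the mechanical coordinates.

    capture() returns the current microscope image, locate(img) returns the
    calibration point's (x, y) pixel coordinates (e.g. via the line-fitting sketch
    above), move_stage(dx_um, dy_um) commands a relative stage move, and
    get_stage_xy() reads back the displacement device's mechanical coordinates.
    All four callables are hypothetical hardware interfaces.
    """
    cx, cy = fov_size_px[0] / 2.0, fov_size_px[1] / 2.0
    for _ in range(max_iters):
        x, y = locate(capture())                    # new second pixel coordinates
        if abs(x - cx) <= tol_px and abs(y - cy) <= tol_px:
            return get_stage_xy()                   # record mechanical coordinates
        move_stage((cx - x) * um_per_pixel, (cy - y) * um_per_pixel)
    raise RuntimeError("calibration point did not reach the field-of-view centre")
```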
After a set of coordinate data of the current calibration pattern is obtained, the displacement device is controlled to move to the preset position C of the next calibration pattern (a new calibration pattern) so that the calibration object is located at a new microscopic shooting position. A new second captured image is obtained by shooting the new calibration pattern, the second pixel coordinates of the new calibration point are determined from the pattern content in the new second captured image, and the displacement device is driven so that the new second pixel coordinates coincide with the new field-of-view center, yielding a set of coordinate data for the new calibration pattern. The same procedure is applied to the remaining calibration patterns until at least 5 sets of coordinate data are obtained, after which the coordinate calibration matrix is calculated.
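Once at least 5 sets of coordinate data are available, one common way to obtain a coordinate conversion relation is a least-squares affine fit, sketched below; the 2x3 affine model is an assumption, since the disclosure does not prescribe the exact form of the coordinate calibration matrix.

```python
import numpy as np

def fit_affine(pixel_pts, mech_pts):
    """Least-squares 2x3 affine map A such that A @ [u, v, 1]^T ~ [X, Y]^T.

    pixel_pts: N x 2 array of calibration-point pixel coordinates (N >= 3,
               at least 5 in the disclosed procedure).
    mech_pts:  N x 2 array of the corresponding displacement-device coordinates.
    """
    p = np.asarray(pixel_pts, dtype=float)
    m = np.asarray(mech_pts, dtype=float)
    ph = np.hstack([p, np.ones((p.shape[0], 1))])   # N x 3 homogeneous pixel coords
    a, *_ = np.linalg.lstsq(ph, m, rcond=None)      # 3 x 2 least-squares solution
    return a.T                                      # 2 x 3 calibration matrix

def pixel_to_mechanical(a, uv):
    """Convert a pixel coordinate (u, v) to a displacement-device coordinate."""
    u, v = uv
    return a @ np.array([u, v, 1.0])
```

Applied to the recorded (pixel, mechanical) coordinate pairs, the returned matrix converts any pixel coordinate into a target position for the displacement device.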
FIG. 16 is a schematic diagram of a coordinate calibration device employing a hardware implementation of a processing system for microscanning according to one embodiment of the present disclosure. Referring to fig. 16, the disclosure further provides a coordinate calibration device 1000 for microscanning, where the coordinate calibration device 1000 of this embodiment may include a first pixel coordinate determining module 1002, a microscope shooting module 1004, a second pixel coordinate determining module 1006, a mechanical coordinate acquiring module 1008, a calibration point iteration module 1010, and a coordinate relation determining module 1012.
The first pixel coordinate determining module 1002 is configured to determine first pixel coordinates of calibration points of a plurality of calibration patterns in a calibration image included in a first captured image in the first captured image.
The first pixel coordinate determination module 1002 may specifically determine the first pixel coordinate by the following 5 steps: (1) The displacement device is driven to drive the objective table to move to an initial preset position, wherein a calibration object is arranged on the objective table, when the objective table is positioned at the initial preset position, the calibration object is positioned at a first shooting position, the first shooting position corresponds to the first camera, and the calibration object positioned at the first shooting position is opposite to the first camera; (2) Controlling a first camera to shoot a calibration object to obtain a first shot image, wherein the first shot image comprises a calibration image on the calibration object, the calibration image comprises a plurality of calibration patterns, the calibration points are positioned at the center positions of the calibration patterns, the plurality of calibration patterns are at least 5, the number of first pixel coordinates of the determined calibration points is at least 5, each calibration pattern only comprises one calibration point, the shape of each calibration pattern is the same, each calibration pattern is divided into a plurality of sectors, and the circle centers of the sectors are the same and are circumferentially distributed around the circle center; (3) Threshold segmentation is carried out on the first shooting image, so that a plurality of subareas are obtained; (4) For each sub-region, identifying a calibration pattern region from the sub-regions according to shape information of the calibration pattern; (5) The center coordinates of the calibration pattern area are determined and used as first pixel coordinates of the calibration points in the calibration pattern in the first photographed image.
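Steps (3) through (5) above can be sketched with standard OpenCV primitives as follows; the Otsu threshold, the circularity test, and the contour-moment centre are illustrative stand-ins for the threshold segmentation, shape-information check, and centre-coordinate computation described in the text (the OpenCV 4 return signature of findContours is assumed).

```python
import cv2
import numpy as np

def first_pixel_coordinates(first_image_gray, min_area=100.0, min_circularity=0.6):
    """Threshold-segment the navigation-camera image (8-bit grayscale) and return
    candidate calibration-point coordinates (pattern centres) in pixel units."""
    _, binary = cv2.threshold(first_image_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centres = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        perimeter = cv2.arcLength(c, True)
        circularity = 4.0 * np.pi * area / (perimeter ** 2 + 1e-9)
        if circularity < min_circularity:          # keep roughly circular pattern regions
            continue
        m = cv2.moments(c)
        if m["m00"] > 0:
            centres.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centres    # first pixel coordinates of candidate calibration points
```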
The microscopy shooting module 1004 is configured to perform a microscopy shooting step, where the microscopy shooting step includes: and shooting the calibration object positioned at the current microscopic shooting position to obtain a second shooting image, wherein the second shooting image is an image shot by a microscopic camera, and the microscopic shooting position enables the second shooting image to comprise at least part of pattern content in the current target calibration pattern.
The step of microscopic photographing may specifically include the following 7 steps: (1) Driving the displacement device to drive the calibration object to move to the current microscopic shooting position; (2) Taking the first step length as a preset step length to execute a focusing shooting step to obtain a plurality of first candidate shooting images, wherein the focusing shooting step comprises the following steps: controlling the microscopic camera to move for a plurality of times in the depth of field direction with a preset step length, and controlling the microscopic camera to shoot a calibration object after each movement is completed to obtain a plurality of candidate shooting images; (3) And performing a sharpness evaluation step on the plurality of first candidate shooting images to obtain a plurality of first sharpness values, wherein the sharpness evaluation step comprises the following steps: taking the candidate shooting image as an image to be evaluated, graying the image to be evaluated to obtain a gray level image, determining the image gradient and variance of the gray level image, and carrying out normalization operation according to the variance and the image gradient to obtain a definition value; (4) Determining a first moving position corresponding to the optimal first definition value; (5) In the moving range containing the first moving position, taking the second step length as a preset step length to execute a focusing shooting step to obtain a plurality of second candidate shooting images, wherein the second step length is smaller than the first step length; (6) Performing a definition evaluation step on the plurality of second candidate photographed images to obtain a plurality of second definition values; (7) Determining a second captured image from the plurality of second sharpness values through an image determining step, wherein the image determining step includes: and determining a candidate shooting image with the optimal definition value and taking the candidate shooting image as a second shooting image, wherein the microscopic shooting position enables the second shooting image to comprise at least part of pattern content in a current target calibration pattern, the plurality of calibration patterns comprise a first calibration pattern and a plurality of second calibration patterns, the size of the first calibration pattern is larger than that of the second calibration pattern, and the current target calibration pattern included in the first acquired second shooting image is the first calibration pattern.
The second pixel coordinate determining module 1006 is configured to determine, according to at least part of the pattern content, second pixel coordinates of a calibration point of the current target calibration pattern in a coordinate system of the second captured image.
The second pixel coordinate determination module 1006 may specifically determine the second pixel coordinates by the following 3 steps: (1) Performing linear fitting on the at least partial pattern content to obtain a plurality of linear segments, wherein the at least partial pattern content comprises at least partial areas of a plurality of sectors, and the linear segments are boundaries of the sectors; (2) Two straight line segments are taken as a group of straight lines, and the intersection point of each group of straight lines in different groups of straight lines is determined to obtain a plurality of intersection points; (3) And calculating a coordinate mean value of the plurality of intersection points, and taking the obtained coordinate mean value as a second pixel coordinate of the calibration point of the current target calibration pattern, wherein the coordinate of the intersection point is a coordinate under a coordinate system of a second shooting image.
The mechanical coordinate acquisition module 1008 is used for driving the displacement device to drive the calibration object to approach the center of the field of view of the microscope camera in the horizontal direction until the current calibration point coincides with the center of the field of view, and acquiring the mechanical coordinate of the displacement device when the current calibration point coincides with the center of the field of view.
The mechanical coordinate acquiring module 1008 may specifically acquire mechanical coordinates through the following 2 steps: (1) The displacement device is driven to drive the calibration object to move towards the direction close to the center of the view field of the microscope camera, when the calibration object approaches to the center of the view field of the microscope camera, the moving direction can be obtained according to the relative position relation between the second pixel coordinate and the center of the view field under the view field coordinate system, and the moving distance can be obtained according to the physical distance estimated value corresponding to the current pixel distance between the second pixel coordinate and the center of the view field; (2) And substituting the new microscopic shooting position formed by movement into the microscopic shooting step as the current microscopic shooting position when the first movement step is completed each time so as to obtain new second pixel coordinates, until the new second pixel coordinates coincide with the center of the field of view, and acquiring mechanical coordinates of the displacement device when the new second pixel coordinates coincide with the center of the field of view.
The setpoint iteration module 1010 is configured to perform a setpoint iteration step, where the setpoint iteration step includes: and driving the displacement device to enable the calibration object to move to the microscopic shooting position of the next calibration pattern, substituting the microscopic shooting position of the next calibration pattern into the microscopic shooting step as the current microscopic shooting position to obtain the mechanical coordinates of the displacement device of the calibration point corresponding to the next calibration pattern until the mechanical coordinates of the displacement device of the calibration points of the plurality of calibration patterns are obtained.
The coordinate relation determining module 1012 is used for obtaining coordinate conversion relations according to the calibration point pixel coordinates of the plurality of calibration patterns and the corresponding mechanical coordinates of the displacement device.
It should be noted that, details not disclosed in the coordinate calibration device 1000 for micro-scanning in the present embodiment may refer to details disclosed in the coordinate calibration method M10 for micro-scanning in the foregoing embodiment proposed in the present disclosure, and are not described herein again.
The apparatus 1000 may include corresponding modules that perform the steps of the flowcharts discussed above. Thus, each step or several steps in the flowcharts described above may be performed by respective modules, and the apparatus may include one or more of these modules. A module may be one or more hardware modules specifically configured to perform the respective steps, or be implemented by a processor configured to perform the respective steps, or be stored within a computer-readable medium for implementation by a processor, or be implemented by some combination.
The hardware architecture may be implemented using a bus architecture. The bus architecture may include any number of interconnecting buses and bridges depending on the specific application of the hardware and the overall design constraints. Bus 1100 connects together various circuits including one or more processors 1200, memory 1300, and/or hardware modules. Bus 1100 may also connect various other circuits 1400, such as peripherals, voltage regulators, power management circuits, external antennas, and the like.
Bus 1100 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. The buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, only one connection line is shown in the figure, but this does not mean there is only one bus or one type of bus.
Fig. 17 is a schematic diagram of a microscanning system according to one embodiment of the present disclosure. Referring to fig. 17, the disclosure further provides a micro-scanning system 2000, where the micro-scanning system 2000 of the present embodiment includes a coordinate calibration device 1000 for micro-scanning, a photographing system 2010, and a displacement device 2020.
The coordinate calibration device 1000 is the coordinate calibration device shown in fig. 16, and the coordinate calibration device 1000 may include a first pixel coordinate determining module, a microscopic photographing module, a second pixel coordinate determining module, a mechanical coordinate acquiring module, a calibration point iteration module, and a coordinate relation determining module.
The shooting system 2010 comprises a microscope camera, which is used to shoot the object to be photographed under the control of the coordinate calibration device to obtain a captured image. The shooting system may further include a navigation camera, a light source, an objective lens, and an object stage. The navigation camera is used to capture the first captured image. The object stage carries the object to be photographed, such as the calibration object, and moves synchronously with the displacement device.
The displacement device 2020 is configured to be moved under the control of the coordinate calibration device so as to drive the object to be photographed to move synchronously. The displacement device may comprise an electrically driven linear stage, a motor, and a driver to effect linear displacement.
Any process or method descriptions in flowcharts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. Further implementations are included within the scope of the preferred embodiments of the present disclosure, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure. The processor performs the various methods and processes described above. For example, method embodiments in the present disclosure may be implemented as a software program tangibly embodied on a machine-readable medium, such as a memory. In some embodiments, part or all of the software program may be loaded and/or installed via memory and/or a communication interface. When the software program is loaded into memory and executed by a processor, one or more of the steps of the methods described above may be performed. Alternatively, in other embodiments, the processor may be configured to perform one of the methods described above in any other suitable manner (e.g., by means of firmware).
Logic and/or steps represented in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present disclosure may be implemented in hardware, software, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, the steps may be implemented using any one of, or a combination of, the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps implementing the method of the above embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments. The storage medium may be a volatile/nonvolatile storage medium.
Furthermore, each functional unit in each embodiment of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The storage medium may be a read-only memory, a magnetic disk or optical disk, etc.
The present disclosure also provides a readable storage medium having stored therein execution instructions which, when executed by a processor, are configured to implement the coordinate calibration method of microscanning of any of the above embodiments.
For the purposes of this description, a "readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the readable storage medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a memory.
The present disclosure also provides a computer program product comprising computer programs/instructions which when executed by a processor implement the coordinate calibration method of microscanning of any of the above embodiments.
In the description of the present specification, a description referring to the terms "one embodiment/mode," "some embodiments/modes," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment/mode or example is included in at least one embodiment/mode or example of the present disclosure. In this specification, the schematic representations of the above terms are not necessarily the same embodiments/modes or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/modes or examples. Furthermore, the various embodiments/implementations or examples described in this specification and the features of the various embodiments/implementations or examples may be combined and combined by persons skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is at least two, such as two, three, etc., unless explicitly specified otherwise.
It will be appreciated by those skilled in the art that the above-described embodiments are merely for clarity of illustration of the disclosure, and are not intended to limit the scope of the disclosure. Other variations or modifications will be apparent to persons skilled in the art from the foregoing disclosure, and such variations or modifications are intended to be within the scope of the present disclosure.

Claims (23)

1. A coordinate calibration method for microscanning, comprising:
determining first pixel coordinates of calibration points of a plurality of calibration patterns in a calibration image contained in a first shooting image in the first shooting image;
a microscopic shooting step, namely shooting a calibration object positioned at a current microscopic shooting position to obtain a second shooting image, wherein the second shooting image is an image shot by a microscopic camera, and the microscopic shooting position enables the second shooting image to comprise at least part of pattern content in a current target calibration pattern;
determining second pixel coordinates of the calibration points of the current target calibration pattern under the coordinate system of the second shooting image according to the at least partial pattern content;
driving a displacement device to drive the calibration object to approach the center of the field of view of the microscope camera in the horizontal direction until the current calibration point coincides with the center of the field of view, and acquiring the mechanical coordinates of the displacement device when the current calibration point coincides with the center of the field of view;
A calibration point iteration step of driving the displacement device to enable the calibration object to move to a microscopic shooting position of a next calibration pattern, substituting the microscopic shooting position of the next calibration pattern as a current microscopic shooting position into the microscopic shooting step to obtain the mechanical coordinates of the displacement device of the calibration point corresponding to the next calibration pattern until the mechanical coordinates of the displacement device of the calibration points of the plurality of calibration patterns are obtained; and
and obtaining a coordinate conversion relation according to the pixel coordinates of the calibration points of the plurality of calibration patterns and the mechanical coordinates of the corresponding displacement device.
2. The method of claim 1, wherein the plurality of calibration patterns is at least 5 calibration patterns, and the number of first pixel coordinates of the calibration points is determined to be at least 5.
3. The method of claim 1, wherein each calibration pattern includes only one calibration point.
4. The method of claim 1, wherein each of the calibration patterns is identical in shape, and each of the calibration patterns is divided into a plurality of sectors having the same center and circumferentially distributed around the center.
5. The method of claim 1, wherein the plurality of calibration patterns includes a first calibration pattern and a plurality of second calibration patterns, the first calibration pattern having a size larger than a size of the second calibration pattern, the current target calibration pattern included in the first captured image being the first calibration pattern.
6. The method of any of claims 1-5, wherein determining first pixel coordinates in the first captured image of calibration points of a plurality of calibration patterns in a calibration image comprised by the first captured image comprises:
shooting a calibration object positioned at a first shooting position to obtain a first shooting image, wherein the first shooting image comprises a calibration image on the calibration object, and the calibration image comprises a plurality of calibration patterns; and
a first pixel coordinate of a calibration point in the plurality of calibration patterns in the first captured image is determined.
7. The method of claim 6, wherein capturing the calibration object at the first capture location to obtain the first captured image comprises:
driving the displacement device to drive the calibration object to move to a first shooting position, wherein the first shooting position corresponds to the first camera; and
And controlling the first camera to shoot the calibration object to obtain a first shooting image.
8. The method of claim 7, wherein driving the displacement device to move the calibration object to the first photographing position comprises:
the displacement device is driven to drive the objective table to move to an initial preset position, a calibration object is arranged on the objective table, and when the objective table is located at the initial preset position, the calibration object is located at a first shooting position.
9. The method of claim 7 or 8, wherein the calibration object located at the first photographing position is directly opposite the first camera.
10. The method of claim 6, wherein the calibration point is located at a center position of the calibration pattern, and determining a first pixel coordinate of the calibration point in the plurality of calibration patterns in the first captured image comprises:
threshold segmentation is carried out on the first shooting image, so that a plurality of subareas are obtained;
identifying a calibration pattern region from the sub-regions according to shape information of the calibration pattern for each sub-region; and
and determining the central coordinate of the calibration pattern area and taking the central coordinate as a first pixel coordinate of a calibration point in the calibration pattern in the first shooting image.
11. The method according to any one of claims 1-5, wherein capturing a calibration object at a current microscopic capture location to obtain a second captured image comprises:
driving the displacement device to drive the calibration object to move to the current microscopic shooting position; and
and controlling the microscopic camera to shoot the calibration object to obtain a second shooting image.
12. The method of claim 11, wherein controlling the microscopy camera to capture the calibration object to obtain a second captured image comprises:
a focusing shooting step, namely controlling a microscopic camera to shoot the calibration object under a plurality of different focal lengths respectively to obtain a plurality of candidate shooting images;
a definition evaluation step, namely respectively performing definition evaluation on the obtained multiple candidate shooting images to obtain corresponding definition values; and
and an image determining step of determining a candidate photographed image with the optimal sharpness value as a second photographed image.
13. The method of claim 12, wherein controlling the microscopy camera to capture the calibration object at a plurality of different focal lengths, respectively, comprises:
and controlling the microscopic camera to move for a plurality of times in the depth of field direction by a preset step length, and controlling the microscopic camera to shoot the calibration object after each movement is completed.
14. The method of claim 13, wherein controlling the microscopy camera to capture the calibration object to obtain a second captured image comprises:
taking the first step length as a preset step length to execute the focusing shooting step to obtain a plurality of first candidate shooting images;
executing the definition evaluation step on the plurality of first candidate shooting images to obtain a plurality of first definition values;
determining a first moving position corresponding to the optimal first definition value;
in the moving range containing the first moving position, taking a second step length as a preset step length to execute the focusing shooting step to obtain a plurality of second candidate shooting images, wherein the second step length is smaller than the first step length;
executing the definition evaluation step on the plurality of second candidate shooting images to obtain a plurality of second definition values; and
and determining a second photographed image from the plurality of second sharpness values through the image determining step.
15. The method according to any one of claims 12 to 14, wherein the sharpness evaluation step includes:
taking the candidate shooting image as an image to be evaluated, and graying the image to be evaluated to obtain a gray level image;
Determining an image gradient and variance of the gray map; and
and carrying out normalization operation according to the variance and the image gradient to obtain a definition value.
16. The method of claim 4, wherein determining second pixel coordinates of the calibration point of the current target calibration pattern in the coordinate system of the second captured image from the at least partial pattern content comprises:
performing linear fitting on the at least part of pattern content to obtain a plurality of linear segments, wherein the at least part of pattern content comprises at least part of areas of a plurality of sectors, and the linear segments are boundaries of the sectors; and
and determining second pixel coordinates of the calibration point of the current target calibration pattern under the coordinate system of the second shooting image according to the plurality of straight line segments.
17. The method of claim 16, wherein determining, from the plurality of straight line segments, a second pixel coordinate of the calibration point of the current target calibration pattern in the coordinate system of the second captured image comprises:
determining the intersection point of each group of straight lines in different groups of straight lines by taking two straight line sections as a group of straight lines to obtain a plurality of intersection points; and
And calculating the coordinate mean value of the plurality of intersection points, taking the obtained coordinate mean value as a second pixel coordinate of the calibration point of the current target calibration pattern, wherein the coordinate of the intersection point is the coordinate under the coordinate system of the second shooting image.
18. The method of any one of claims 1-5, wherein driving a displacement device to bring the calibration object closer to the center of the field of view of the microscope camera in a horizontal direction until a current calibration point coincides with the center of the field of view comprises:
a first moving step, driving a displacement device to drive a calibration object to move towards a direction close to the center of a view field of the microscope camera; and
and a first iteration step, wherein when the first moving step is completed each time, a new microscopic shooting position formed by moving is substituted into the microscopic shooting step as the current microscopic shooting position so as to obtain a new second pixel coordinate, and the new second pixel coordinate is overlapped with the center of the field of view.
19. The method according to claim 18, wherein in the first moving step, when approaching the center of the field of view of the microscope camera, the moving direction is obtained according to the relative positional relationship between the second pixel coordinates and the center of the field of view in the field of view coordinate system, and the moving distance is obtained according to the current pixel distance between the second pixel coordinates and the center of the field of view.
20. The method of claim 19, wherein the distance moved is derived from a physical distance estimate corresponding to the current pixel distance.
21. A coordinate calibration device for microscanning, comprising:
the first pixel coordinate determining module is used for determining first pixel coordinates of calibration points of a plurality of calibration patterns in a calibration image contained in a first shooting image in the first shooting image;
the microscopic shooting module is used for executing microscopic shooting steps, and the microscopic shooting steps comprise: shooting a calibration object positioned at a current microscopic shooting position to obtain a second shooting image, wherein the second shooting image is an image shot by a microscopic camera, and the microscopic shooting position enables the second shooting image to comprise at least part of pattern content in a current target calibration pattern;
a second pixel coordinate determining module, configured to determine, according to the at least part of the pattern content, a second pixel coordinate of a calibration point of the current target calibration pattern in a coordinate system of the second captured image;
the mechanical coordinate acquisition module is used for driving the displacement device to drive the calibration object to approach the center of the visual field of the microscope camera in the horizontal direction until the current calibration point coincides with the center of the visual field, and acquiring the mechanical coordinate of the displacement device when the current calibration point coincides with the center of the visual field;
The calibration point iteration module is used for executing calibration point iteration steps, and the calibration point iteration steps comprise: driving the displacement device to enable the calibration object to move to a microscopic shooting position of a next calibration pattern, substituting the microscopic shooting position of the next calibration pattern as a current microscopic shooting position into the microscopic shooting step so as to obtain mechanical coordinates of the displacement device of the calibration point corresponding to the next calibration pattern until mechanical coordinates of the displacement device of the calibration points of the plurality of calibration patterns are obtained; and
and the coordinate relation determining module is used for obtaining a coordinate conversion relation according to the pixel coordinates of the calibration points of the plurality of calibration patterns and the mechanical coordinates of the corresponding displacement device.
22. A microscanning system, comprising:
a microscan coordinate calibration device as defined in claim 21;
the shooting system comprises a microscopic camera, wherein the microscopic camera is used for shooting an object to be shot under the control of the coordinate calibration device to obtain a shooting image; and
and the displacement device is used for being controlled by the coordinate calibration device to move so as to drive the object to be shot to synchronously move.
23. A readable storage medium having stored therein execution instructions which when executed by a processor are adapted to carry out the coordinate calibration method according to any one of claims 1 to 20.
CN202311490957.4A 2023-11-09 2023-11-09 Coordinate calibration method and device for microscanning, microscanning system and medium Pending CN117537710A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311490957.4A CN117537710A (en) 2023-11-09 2023-11-09 Coordinate calibration method and device for microscanning, microscanning system and medium

Publications (1)

Publication Number Publication Date
CN117537710A true CN117537710A (en) 2024-02-09

Family

ID=89795049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311490957.4A Pending CN117537710A (en) 2023-11-09 2023-11-09 Coordinate calibration method and device for microscanning, microscanning system and medium

Country Status (1)

Country Link
CN (1) CN117537710A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination