CN114326078A - Microscope system and method for calibration checking

Microscope system and method for calibration checking

Info

Publication number
CN114326078A
Authority
CN
China
Prior art keywords
panoramic
image
panoramic images
calculated
superimposed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111129866.9A
Other languages
Chinese (zh)
Inventor
Manuel Amthor
Daniel Haase
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Carl Zeiss Microscopy GmbH
Original Assignee
Carl Zeiss Microscopy GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Carl Zeiss Microscopy GmbH filed Critical Carl Zeiss Microscopy GmbH
Publication of CN114326078A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 - Microscopes
    • G02B21/24 - Base structure
    • G02B21/26 - Stages; Adjusting means therefor
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 - Microscopes
    • G02B21/36 - Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365 - Control or image processing arrangements for digital or video microscopes
    • G02B21/367 - Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10056 - Microscopic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30072 - Microarray; Biochip, DNA array; Well plate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30168 - Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Microscopes, Condenser (AREA)
  • Studio Devices (AREA)

Abstract

A microscope system comprises a panoramic camera (9) for capturing panoramic images (11, 12) of an environment of a sample location and a computing device (20) arranged to evaluate the panoramic images (11, 12). The computing device (20) has calibration parameters (P) for interpreting image coordinates of the panoramic images (11, 12). The panoramic camera (9) captures at least two panoramic images (11, 12) at different sample positions or sample stage positions. A displacement map (31) for superimposing the panoramic images (11, 12; 21, 22) is then calculated, wherein the panoramic images (11, 12; 21, 22) are converted into the same viewing angle by means of the calibration parameters (P). Whether the calibration parameters (P) are valid is evaluated based on a goodness of consistency (Q) between the superimposed panoramic images (32). A method for calibration checking is also described.

Description

Microscope system and method for calibration checking
Technical Field
The present disclosure relates to a microscope system and a method for calibration checking.
Background
In modern microscope systems, automation plays an increasingly important role. The microscope system should image, approach and examine the sample under investigation in more detail in a partially or fully automated manner. For this purpose, a panoramic camera captures panoramic images of the sample and the sample environment. For example, a navigation map can be formed from the panoramic image, in which the user selects a position that is then approached automatically by the motorized sample stage. The panoramic image can also be used for automatic sample recognition, for example by identifying the wells of a microtiter plate and optionally inspecting them more closely automatically. The panoramic image may further be used for autofocusing, for example by estimating the sample height from the panoramic image or by determining a suitable position at which the autofocusing method is subsequently performed.
In order to use the panoramic image in this way, the relationship between image coordinates of the panoramic image and spatial information must be known, for example via the pose (viewing angle and position) of the panoramic camera relative to a reference point on the microscope. For this purpose, the panoramic camera is calibrated. For example, a panoramic image of a calibration object of known size may be captured so that the relationship to the image coordinates can be determined. The distortion of the panoramic camera can also be determined by such a calibration and corrected computationally.
A microscope system of this type therefore comprises a panoramic camera for recording a panoramic image of the surroundings of the sample site and a computing device provided for evaluating the panoramic image, wherein the computing device has calibration parameters for interpreting the image coordinates of the panoramic image.
In US9344650B2 and DE102013012987A1, the applicant describes a calibration by means of a reference object in order to process subsequently captured images with the calibration data. In DE102017109698A1, the applicant discloses a microscope in which a panoramic image of a calibrated panoramic camera is evaluated, for example in order to classify microscope components and load information about the identified components. Another calibration method for microscopes with a rotating support is described by the applicant in DE102013222295A1 to achieve automatic focusing and image center tracking. In DE102013006994A1, the applicant describes the evaluation of panoramic images, for example for determining and automatically approaching the sample position. In DE102019114117, the applicant describes a further image evaluation for automatically carrying out microscope workflows, for which a calibration and special samples are identified in a panoramic image. Furthermore, the applicant describes in DE102020118801 the evaluation of panoramic images of microscopes. In particular, the distance between the panoramic camera and the sample plane is estimated from the panoramic image, for which purpose reference markers in the field of view of the panoramic camera can be used. In DE102020101191, the applicant describes the evaluation of panoramic images of microscopes, wherein a homography is determined by which the panoramic image is converted into another representation. The perspectively correct homography can be determined, among other approaches, by imaging calibration samples of known dimensions.
In DE102018133188A1, the applicant describes a microscope in which, among other things, two panoramic images are captured at different sample stage positions. The position of a particular structure is determined in both panoramic images. The pixel distance between the positions in the two panoramic images gives information about the height or distance of the imaged structure.
Ensuring a correct calibration becomes more difficult if the user can reconfigure, replace or reposition components. For example, limit switches of the movable stage may be reconfigured. Even if the manual indicates that a new calibration must be performed after such actions, the user may ignore this indication. Changes may also be caused unknowingly by vibrations, loose connections or temperature changes. There is therefore a risk that navigation is only performed with reduced accuracy, resulting in collisions that damage device parts or the sample.
In order to identify whether a calibration is required, the applicant proposes in DE102018133196A1 a machine learning model that identifies deviations from the correct state in the panoramic image, for example by means of a neural network trained to detect anomalies. Thus, if maintenance is required, an error message can in principle be generated on the basis of the panoramic image. It is desirable to determine more accurately whether, or to what extent, the calibration is still applicable.
Disclosure of Invention
It may be seen as an object of the present invention to provide a microscope system and a method for calibration checking which make it possible to evaluate as accurately and reliably as possible whether, or to what extent, the calibration parameters used to interpret panoramic images are applicable.
This object is achieved by a microscope system having the features of claim 1 and by a method for calibration checking having the features of claim 2.
In a microscope system of the above-mentioned type, according to the invention, the computing device is provided for controlling the panoramic camera to capture at least two panoramic images at different sample positions or sample stage positions. The computing device is furthermore provided for computing a displacement map for superimposing the panoramic images, wherein the panoramic images are converted into the same viewing angle by means of the calibration parameters. Based on a goodness of consistency between the superimposed panoramic images, the computing device evaluates whether the calibration parameters are valid.
A method for calibration checking according to the invention comprises: capturing at least two panoramic images of the microscope system at different sample positions or sample stage positions; calculating a displacement map for superimposing the panoramic images, wherein the panoramic images are converted to the same viewing angle by means of calibration parameters; and evaluating whether the calibration parameters are valid based on a goodness of consistency between the superimposed panoramic images.
For faster understanding, an example: under ideal calibration parameters, the image contents of two panoramic images captured at different sample stage positions should correspond to one another very precisely. If, for example, top views are calculated from the panoramic images by means of the calibration parameters, the image contents from the viewed plane in the two top views should only be shifted relative to one another and not be perspectively distorted relative to one another. Under ideal calibration parameters, an image shift should therefore exist by which the image contents agree very well with each other, yielding a high goodness of consistency. If, however, the image contents in the two top views are warped or scaled differently relative to each other, the goodness of consistency is lower. In this case, it can be concluded that only inaccurate top views were calculated by means of the calibration parameters, and insufficient accuracy or validity of the calibration parameters may be determined.
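The following minimal sketch (Python/NumPy, with synthetic masks; an illustration of the principle, not the claimed implementation) registers two binary "top view" masks by brute-force shifting and reports the best goodness of consistency as intersection-over-union:

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Synthetic segmentation masks: a rectangle, then the same rectangle shifted.
mask1 = np.zeros((100, 100), bool); mask1[30:60, 20:50] = True
mask2 = np.roll(mask1, shift=(0, 15), axis=(0, 1))   # stage moved 15 px right

# Brute-force displacement map: the shift that maximizes consistency.
best = max(((dx, iou(mask1, np.roll(mask2, -dx, axis=1)))
            for dx in range(-30, 31)), key=lambda t: t[1])
print(f"estimated shift: {best[0]} px, goodness of consistency: {best[1]:.2f}")
```

With valid calibration parameters, a shift exists for which the masks coincide almost perfectly; with invalid parameters, the best achievable consistency stays noticeably lower.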
Alternative design
Advantageous variants of the microscope system according to the invention and of the method according to the invention are the subject matter of the dependent claims and are explained in the following description.
Perspective mapping by means of calibration parameters; panoramic image as top view
By means of the calibration parameters, the panoramic images are converted into the same viewing angle. A viewing angle is to be understood in particular as the viewing direction of the panoramic camera, from a certain distance, onto a plane, structure or object in the respective panoramic image. If the sample stage is moved while the panoramic camera is fixed in position, the shape and size of structures in the panoramic image, typically structures on the sample stage, can change. By converting the panoramic images into the same viewing angle by means of the calibration parameters, the shape and size of structures in a plane can be made identical in the converted panoramic images despite the different sample stage positions.
The conversion of the panoramic images to the same viewing angle may be performed as a homography, which maps or projects one plane in space onto another. The homography can be described, for example, by a 3 × 3 matrix whose entries depend on the height of the plane to be mapped (along the optical axis of the microscope), this height being described by the calibration parameters.
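As an illustration of such a homography, the following hedged sketch uses OpenCV's warpPerspective; the matrix H_top is a placeholder standing in for values that the calibration parameters P would supply:

```python
import cv2
import numpy as np

panorama = np.zeros((480, 640, 3), np.uint8)          # placeholder image
H_top = np.array([[1.0, 0.1, 0.0],                    # hypothetical homography
                  [0.0, 1.2, 0.0],                    # entries derived from P
                  [0.0, 0.0005, 1.0]])                # and the plane height
top_view = cv2.warpPerspective(panorama, H_top, (640, 480))
```

Applying the same matrix to both panoramic images yields the common viewing angle described above.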
In particular, a top view (plan view) may be calculated from each panoramic image. The top view corresponds to a perpendicular view onto a plane. The plane may, for example, correspond to the plane of the sample stage surface, the top or bottom surface of the sample carrier, or the holding frame. In a top view of a plane, a structure lying within that plane should always have the same size and shape, regardless of where in the plane it lies. If the sample stage is moved within this plane, the shape and size of structures in this plane should remain the same in the top view; only a displacement of the structures should occur in the panoramic images presented as top views. The precondition is that the position of the plane relative to the panoramic camera has been correctly taken into account, which is done by means of the calibration parameters. Depending on the accuracy of the calibration parameters, the calculated top view deviates slightly from the actual top view, so that the shape and size of structures change slightly when the sample stage moves. The evaluation of such deviations is described in more detail later. If the motion between the acquisitions of the panoramic images takes place in the plane mapped into the top view, both panoramic images can be converted to the top view with mathematically identical mappings. The motion may be a translation and/or a rotation. In principle, the motion can also take place obliquely or perpendicularly to the plane that is converted into the top view; in these cases, the changed distance of the plane from the camera is also taken into account in the conversion by means of the calibration parameters.
The top view is also suitable because optional calculations, such as the segmentation described in more detail later, can generally be performed more robustly or with less training data. However, the conversion into a top view is not mandatory. By means of the calibration parameters, any other pose can also be calculated that is the same for both images with respect to the structure to be evaluated. The same pose denotes the same position and orientation of the camera relative to the structure in the panoramic image after conversion.
Alternatively, other image transformations may be performed, such as transformation to an isometric perspective.
Segmentation
The panoramic images may optionally be segmented. This can be done in particular by a learned model (segmentation model) that is learned by a learning algorithm on the basis of training data. One output of the segmentation model is a segmentation mask, in particular a binary mask, in which a certain structure is marked by one pixel value, while another pixel value denotes the background or indicates that the corresponding pixel does not belong to the structure. Which structures the segmentation marks is specified to the model by the training data. For example, the segmentation model may be trained to segment samples, sample carriers, sample containers of a sample carrier, or holding frames/holding frame clips for holding sample carriers. In principle, the segmentation can also be implemented by classical algorithms without machine learning models. In general, the segmentation mask need not indicate the segmented object by different pixel values; for example, the edges of the segmented object may instead be drawn into the image, or a set of coordinates describing the local object shape may be used as the segmentation mask.
The segmentation may be calculated after the conversion to the same viewing angle. In particular, the panoramic images may first be converted into top views and then segmented. If only top views are segmented, less training data is generally required to learn the segmentation. For example, the holding frame clips on the sample stage always have the same shape in the top view, irrespective of their position. The wells of a microtiter plate should have a common size and circular shape in the top view, whereas in other views they would appear with different sizes and differently foreshortened oval shapes. If the segmentation is performed before the registration (calculation of the displacement map), it is advantageous that image content from other height planes has no disturbing influence on the calculation of the displacement map. For example, the segmentation may be performed with respect to the top surface of a (partially) transparent microtiter plate; due to the transparency, components from other height planes are also contained in the panoramic image and, after the perspective transformation (e.g. into the top view), continue to be contained as interference, but are removed by the segmentation.
In principle, however, the segmentation may also be performed before the panoramic images are converted into the same viewing angle. This can in some cases avoid degrading the segmentation, which would otherwise be possible if the conversion of the panoramic images into the same viewing angle were erroneous.
The segmentation may be a semantic segmentation, in which a segmented region is assigned a meaning. For example, the semantics may indicate "sample carrier", "holding frame", "sample stage" or "background". Optionally, the segmentation model may also be designed for instance segmentation. In this case, different objects of the same object type are distinguished from one another. Instance segmentation may be advantageous in particular for objects of the same type that touch or overlap each other, for example when comparing the positions or distances of these objects between two panoramic images.
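The patent envisages a trained segmentation model; purely to illustrate the mask format, the following sketch substitutes a classical Otsu threshold (an assumption, not the described CNN):

```python
import cv2
import numpy as np

top_view_gray = np.random.randint(0, 255, (480, 640), np.uint8)  # placeholder
_, mask = cv2.threshold(top_view_gray, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# mask is now a binary image: 255 = segmented structure, 0 = background.
```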
Displacement mapping
The displacement map between the panoramic images may in particular be a translation and/or rotation of the image content of one panoramic image relative to the image content of the other panoramic image. In principle, both panoramic images may also be translated and/or rotated, for example in order to orient edges mapped in both panoramic images in a predetermined direction.
The displacement map may be computed as the translation and/or rotation that maximizes the consistency between the image contents of the panoramic images. The panoramic images may in particular be segmented and converted into top views; in this case, a displacement map that maximizes the consistency can be calculated relatively easily from the segmented panoramic images. The maximization may be calculated iteratively or, for example, via the maximum of the convolution of the two images. In principle, however, the displacement map can also be determined on the basis of the panoramic images without computing a segmentation.
Allowed limit values for the translation and/or rotation can be specified in advance for the calculation of the displacement map. The allowed limit values restrict the possible translation or rotation ranges of the images relative to one another and can be determined in particular as a function of the calibration parameters. In the case of an iterative calculation of the displacement map, the starting value of the iteration can alternatively or additionally also be determined from the calibration parameters. In addition to the calibration parameters, the limit values and starting values may also depend on available data about the stage position or the stage movement performed between the acquisitions of the panoramic images. If the sample stage is moved between the acquisitions of the two panoramic images, the image displacement can be estimated, for example, by means of the calibration parameters together with the control commands by which the movable sample stage was moved between the acquisitions, or by means of position values provided by sample stage sensors. This estimated displacement can be used as a starting value for the iteration and/or for determining limit values for the displacement to be determined. If it is known that only a stage translation took place between the acquisitions of the panoramic images, the displacement map can furthermore be restricted to a translation without rotation.
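A possible sketch of this plausibility check, assuming OpenCV's phase correlation as the registration method and an expected shift magnitude predicted from stage data and P (all values illustrative):

```python
import cv2
import numpy as np

img1 = np.random.rand(480, 640).astype(np.float32)   # placeholder top views
img2 = np.roll(img1, 25, axis=1)                      # simulated stage shift

(dx, dy), _ = cv2.phaseCorrelate(img1, img2)
measured = float(np.hypot(dx, dy))                    # shift magnitude in px

expected = 25.0   # magnitude predicted from stage sensors/commands and P
limit = 5.0       # allowed deviation around the prediction
if abs(measured - expected) > limit:
    print("displacement outside the band predicted by the calibration")
```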
In principle, the displacement map can also be calculated by means of, or exclusively from, a known adjustment between the sample stage positions of the at least two panoramic images. In this variant, it is therefore not strictly necessary to evaluate the image content of the panoramic images in order to determine the displacement map. Conversely, in other embodiments the travel distance of the sample stage adjustment need not be known or estimated, since the panoramic images are pushed onto one another without the size of the respective image displacement having to be taken into account for the further evaluation.
If the at least two panoramic images are first converted into an overhead view by means of the calibration parameters, the displacement map can then be calculated as a linear displacement.
If a segmentation mask is first calculated from at least two panoramic images, one for each, a displacement map may be calculated based on the segmentation mask.
In other variants, the perspective transformation is performed by means of calibration parameters together with or simultaneously with the calculation of the displacement map. In this case, in particular, the unchanged panoramic images may be iteratively moved or rotated relative to one another, wherein for each of these movements/rotations a perspective transformation of the moved or rotated panoramic image is made which depends on the movement/rotation.
The whole panoramic image or only a part thereof
For easier readability, the different variants of the invention refer to a panoramic image, which may be understood as either the entire panoramic image or only a part of it. In particular, the displacement map may also be determined from only one or more image regions of the panoramic images. The goodness of consistency may likewise be calculated based on either the entire panoramic images or only one or more image regions.
The remaining regions of the panoramic images may either be cropped away or included in the computation; for example, in the displacement mapping the entire panoramic images may be moved relative to one another, while the determination of the displacement map is based only on certain relevant regions and the remaining image regions have no influence on it. In particular, at least one relevant region may first be determined from the panoramic image, and a displacement map is then calculated for the at least one relevant region. The at least one relevant region may be determined, for example, by a machine learning model trained for segmentation or detection. For example, a segmentation model may be trained to distinguish holding frame clips from the remaining image content (background). After such a segmentation, only the translation or rotation of the holding frame clips between the two panoramic images is taken into account, while changes in the remaining image content have no influence on the determination of the displacement map. In the case of detection, image regions of one or more predetermined objects can be determined; for example, edges or corners of a holding frame part or of the sample carrier can be searched for. Subsequently, the displacement map is determined taking into account only the image regions around these objects. For example, the displacement between the panoramic images may be determined such that the consistency between the panoramic images is maximal in the image portions around the corners of the holding frame, regardless of whether the consistency between the two panoramic images is low in other image portions.
The displacement map and the goodness of consistency can therefore be calculated either over the entire image content or only over image regions, the image regions being either fixedly predefined or determined based on the image content, for example by means of a detection model and/or a segmentation model.
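A minimal sketch of restricting the displacement estimation to a relevant region; here the region is a hand-made rectangle standing in for the output of a detection or segmentation model:

```python
import numpy as np

img1 = np.random.rand(480, 640)
img2 = np.roll(img1, 5, axis=1)                       # simulated shift

# `relevant` would come from a detection/segmentation model (e.g. a region
# around a holding frame corner); here it is a hypothetical rectangle.
relevant = np.zeros(img1.shape, bool)
relevant[200:280, 100:220] = True

def consistency(shift: int) -> float:
    """Agreement inside the relevant region only; other areas are ignored."""
    shifted = np.roll(img2, -shift, axis=1)
    return -float(np.mean((img1 - shifted)[relevant] ** 2))

best_shift = max(range(-10, 11), key=consistency)     # should recover 5
```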
Panoramic images at different sample or sample stage positions
The fields of view of the at least two panoramic images cover the environment of the sample location. The sample location denotes the position at which a sample is arranged during measurement operation; however, the sample need not be visible, or even present, in the panoramic images. For example, a panoramic image of an empty sample carrier, or of the sample stage alone (without a sample carrier), can also be captured.
The sample or sample stage is moved between the acquisitions of the two panoramic images. In particular, a motorized sample stage can be adjusted laterally, i.e. in the plane formed by the top surface of the sample stage. In principle, however, movements in other directions are also possible. The sample stage or the sample itself can also be moved manually, in particular laterally, between the acquisitions of the panoramic images. The design of the sample stage is not of fundamental importance for the operating principle of the described invention, so that it can be understood very generally as a movable component by means of which the object to be examined can be moved. The object itself can lie on the sample stage, be accommodated in a receptacle, or be held, for example, by a clamp or a gripper arm. The sample stage itself may be visible in the panoramic image, or alternatively only the object on the sample stage or only the object holder may be visible.
Any object to be examined may be used as the object or sample, such as a biological sample, a semiconductor or electronic part, a rock or a material sample.
Superimposed panoramic image
A superimposed panoramic image may be understood as two panoramic images whose image contents, after calculation of the displacement map, are pushed onto each other by the displacement map. This is also referred to as registration of the two panoramic images. The superimposed panoramic images may take the form of separately presented images or of a single image whose content represents a superposition of the respective panoramic images. If segmentation masks are computed from the panoramic images, for example, the segmentation masks may be superimposed to form a single image. Different pixel values in this single image then indicate whether the corresponding pixel belongs to the segmented object in the first panoramic image only, in the second panoramic image only, in both panoramic images, or in neither. It is also possible to select only the relevant image regions of the panoramic images and superimpose them as described.
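A small sketch of such a fused single image, encoding per pixel whether the segmented object appears in neither, one, or both registered masks (the masks are synthetic stand-ins):

```python
import numpy as np

mask1 = np.zeros((100, 100), bool); mask1[30:60, 20:50] = True
mask2 = np.zeros((100, 100), bool); mask2[30:60, 23:53] = True  # registered

# 0 = background in both, 1 = object only in mask1,
# 2 = object only in mask2, 3 = object in both (superimposed region)
overlay = mask1.astype(np.uint8) + 2 * mask2.astype(np.uint8)
```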
Goodness of consistency
The goodness of consistency is a measure of the degree of similarity of the at least two superimposed panoramic images. The similarity is determined after applying the displacement map, so if the calibration parameters are valid, the panoramic images should substantially coincide with each other. At least for flat objects or flat surfaces lying in the plane assumed by the calibration parameters, the consistency should be particularly high.
As a simple method for calculating the goodness of consistency, the image area in which the two segmentation masks, placed on top of each other (after shifting), differ from each other can be determined. This image area may be compared with a predetermined threshold in order to distinguish between a correct and an incorrect calibration.
The displacement mapping and the calculation of the goodness of consistency may also be performed in a common calculation process, for example by convolution of the two panoramic images.
The goodness of consistency between the superimposed panoramic images may alternatively be calculated from the ratio of superimposed regions to non-superimposed regions of the superimposed panoramic images. A respective offset direction may also be calculated for each non-superimposed region. For example, the panoramic images may be pushed onto each other and superimposed as segmented panoramic images (segmentation masks). In the superimposed regions, the two segmentation masks have the same pixel value; in the non-superimposed regions, they have different pixel values. If the calibration parameters are inaccurate, several non-superimposed regions typically occur at the edges of the segmented objects. The offset direction or displacement direction of a non-superimposed region indicates in which image direction the nearest superimposed region (of the segmented object) lies. The ratio of superimposed to non-superimposed area and the respective offset directions characterize calibration parameters that deviate from the correct calibration parameters. Use can be made here of the fact that imprecise calibration parameters cause field-dependent displacements and scalings of the segmentation mask or image content, which differ qualitatively from, for example, an imprecise segmentation.
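The following sketch illustrates this ratio-based goodness of consistency and a coarse offset direction on two synthetic masks (illustrative values only):

```python
import numpy as np

mask1 = np.zeros((100, 100), bool); mask1[30:60, 20:50] = True
mask2 = np.zeros((100, 100), bool); mask2[30:60, 23:53] = True

both = mask1 & mask2                  # superimposed region
only1 = mask1 & ~mask2                # non-superimposed, first mask only
only2 = mask2 & ~mask1                # non-superimposed, second mask only

q = both.sum() / max(1, both.sum() + only1.sum() + only2.sum())

# Coarse offset direction: from the centroid of the pixels present only in
# mask1 towards the centroid of the superimposed region.
if only1.any() and both.any():
    direction = np.argwhere(both).mean(axis=0) - np.argwhere(only1).mean(axis=0)
print(f"goodness of consistency Q = {q:.2f}")
```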
The goodness of consistency may be determined based on the entire superimposed panoramic images or based only on specific image regions (evaluation image regions) of the panoramic images. In particular, the evaluation image regions over which the goodness of consistency is calculated can be selected by a machine learning model. At least one of the panoramic images, or at least one image computed from one or both of the panoramic images, is input to the machine learning model, which outputs image regions serving as evaluation image regions. For example, a learned detection model may be used that has learned, on the basis of predefined training images with annotated image regions, to mark specific image regions in an input image. For example, the detection model may be trained to locate all corners of the holding frame or sample carrier, or only certain corners, such as those adjacent to the edge of the holding frame holding the sample carrier or facing the sample carrier.
The calculation of the goodness of consistency between the superimposed panoramic images may also be performed by a (further) trained machine learning model that takes the superimposed panoramic images or image regions as input and outputs the goodness of consistency.
Instead of or in addition to calculating the ratio between superimposed and non-superimposed regions, the goodness of consistency may also be determined based on image distances between structures corresponding to each other in the superimposed panoramic images. Specific objects or object parts, for example several corners of an object, can be located as such structures. The panoramic images may be translated and/or rotated relative to each other such that the image distances between corresponding corners in the two panoramic images are minimized. The image distances may each be measured as pixel distances, for example. With non-ideal calibration parameters, systematic errors arise from scaling effects; in particular, the image distances of the corners point in different directions, and the validity or invalidity of the calibration parameters can be inferred from the magnitudes and directions of the image distances.
The superimposed panoramic images from which the goodness of consistency is determined may also be represented by two-dimensional point sequences. In this case, instead of the entire image content of a panoramic image, only individual points of the panoramic image are used. The accuracy with which the point sequences of the different panoramic images can be brought into agreement is calculated as the goodness of consistency.
Time profile of the goodness of consistency
Optionally, more than two panoramic images are captured one after the other, and a time profile of the goodness of consistency is calculated from these panoramic images. In particular, a goodness of consistency may be calculated from each pair of successively captured panoramic images. Based on the time profile of the goodness of consistency, it can be evaluated whether the calibration parameters are valid.
The time profile makes it possible to distinguish a single poor goodness of consistency from a persistently poor goodness of consistency. A single poor value may be caused, for example, by an erroneous segmentation or displacement calculation and not necessarily by unsuitable calibration parameters. Conversely, if the goodness of consistency falls below a predefined limit value several times, the invalidity of the calibration parameters can be reliably inferred.
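A minimal sketch of such a repeated-undershoot rule; the limit value and the required number of misses are assumptions for illustration:

```python
# Successive goodness-of-consistency values Q (illustrative numbers).
q_history = [0.97, 0.96, 0.71, 0.95, 0.58, 0.55, 0.61]
limit, max_misses = 0.8, 3        # hypothetical thresholds
if sum(q < limit for q in q_history) >= max_misses:
    print("calibration parameters classified as invalid -> recalibration advised")
```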
Repeated acquisition of panoramic images with sample stage movements in between is often provided anyway in normal microscope operation, for example for height estimation by triangulation. These panoramic image acquisitions, which are performed anyway for the height estimation, can additionally be used by the invention in the described manner. In this way, the described check of the calibration parameters requires no additional panoramic image acquisitions and no stage movements beyond those already provided for the height estimation.
Validity of calibration parameters, follow-up actions
The determination of whether the calibration parameters are valid can be carried out as an explicit classification (e.g. yes/no) or as a quality indicator in the form of discrete or continuous numerical values within a value range.
The calculated goodness of consistency can be used directly as the quality indicator of the calibration parameters, or as a measure for it. Alternatively, the quality indicator of the calibration parameters may be derived from an average of several goodness-of-consistency values or from the time profile described above.
If the calibration parameters are classified as invalid, a warning or prompt may be output to the user that calibration is required. Alternatively or additionally, the calibration process may also be triggered automatically.
General characteristics
A microscope system is understood to be a device comprising at least one computing device and a microscope. A microscope is understood to mean in principle any magnifying measuring device, in particular an optical microscope, an X-ray microscope, an electron microscope, a macroscope, or a magnifying image recording device of some other design.
The computing device may be physically designed as part of the microscope, may be separately disposed in the microscope environment, or may be disposed at any location remote from the microscope. The computing device may also be designed in a decentralized manner and communicate with the microscope via a data connection. It may generally be formed by any combination of electronic devices and software, and in particular comprises a computer, a server, a cloud-based computing system, or one or more microprocessors or graphics processors. The computing device may also be configured to control the microscope camera, image capture, stage control, and/or other microscope components.
In addition to a sample camera for capturing more strongly magnified images of a sample area, a panoramic camera for capturing panoramic images may also be present. Alternatively, however, this may be the same camera, where different objectives or optical systems are used for capturing the panoramic image and the more strongly magnified sample image. The panoramic camera may be attached to a fixed device frame, such as a microscope stand, or to a movable component, such as the sample stage, a focus drive, or the objective revolver. The panoramic image may be a raw image as captured by a camera, or an image processed from one or more raw images. A captured raw or panoramic image may be processed further before being evaluated in the manner described herein. Method variants of the invention may be based on previously captured panoramic images, which may be obtained, for example, from a memory; alternatively, the capturing of the panoramic images may itself be part of the claimed method variant. The images described herein, such as panoramic images, may consist of pixels, may be vector graphics, or may be a mixture of both. In particular, a segmentation mask may be a vector graphic or may be converted into one. For easier understanding, the different embodiments are described for two panoramic images, which can be understood in the sense of exactly two or at least two panoramic images.
The computer program according to the invention comprises instructions which, when executed by a computer, cause one of the described method variants to be performed.
The learning models or machine learning models described herein each denote a model that has been learned by a learning algorithm on the basis of training data. A machine learning model may, for example, comprise one or more convolutional neural networks (CNNs) that receive as input at least one input image, in particular a panoramic image or an image computed from it. The training of a machine learning model can be carried out by a supervised learning process, in which training panoramic images with corresponding annotations/labels are provided. A learning algorithm is used to determine model parameters of the machine learning model from the annotated training panoramic images. For this purpose, a predetermined objective function is optimized, for example a loss function is minimized. The loss function describes the deviation between the predefined labels and the current outputs of the machine learning model, which are calculated from the training panoramic images with the current model parameter values. The values of the model parameters are changed to minimize the loss function, which can be calculated, for example, by (stochastic) gradient descent. In the case of a CNN, the model parameters may in particular comprise the entries of convolution matrices of the different layers of the CNN. Instead of a CNN, other deep neural network architectures are also possible. Instead of a supervised learning process, unsupervised training can also be carried out, in which no annotations are provided for the training images. Semi-supervised training or reinforcement learning is also possible.
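As an illustration of this supervised training loop, the following sketch uses PyTorch (the framework choice is an assumption; the patent does not prescribe one) to fit a toy CNN to stand-in annotated images by minimizing a loss with stochastic gradient descent:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))   # toy segmentation CNN
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.rand(4, 1, 64, 64)                   # stand-in training images
targets = (torch.rand(4, 1, 64, 64) > 0.5).float()  # stand-in annotations

for _ in range(10):                                 # a few gradient steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)          # deviation from labels
    loss.backward()
    optimizer.step()                                # update model parameters
```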
Features of the invention described as additional device features also yield, when used as intended, variants of the method according to the invention. Conversely, the microscope system can also be configured to carry out the described method variants. In particular, the computing device may be configured to carry out the described method variants and/or to output control instructions for carrying out the described method steps. Furthermore, the computing device may comprise the described computer program. Where a trained machine learning model is used in some variants, further variants of the invention result from carrying out the corresponding training steps.
Drawings
Further advantages and features of the invention are described below with reference to the accompanying schematic drawings:
FIG. 1 is a schematic view of one embodiment of a microscope system of the present invention;
FIG. 2 is a schematic diagram of the process of one embodiment of the method of the present invention;
FIG. 3 is a continuation of the diagram of FIG. 2;
FIG. 4 is a schematic diagram of a process for processing panoramic images according to a variant of the invention;
FIG. 5 is a flow chart of a process of one embodiment of the invention; and
FIG. 6 is a flow chart of a process of one embodiment of the invention.
Detailed Description
Various embodiments are described below with reference to the drawings. Identical and identically functioning parts are generally identified with the same reference numerals.
FIG. 1
Fig. 1 shows an embodiment of a microscope system 100 according to the invention. The microscope system comprises a computing device 20 and a microscope 1; in the example shown, the microscope 1 is an optical microscope, but in principle any other type of microscope is possible. The microscope 1 comprises a stand 2 by means of which further microscope components are held. These may include, among others: an objective changer or objective revolver 3, on which an objective 4 is mounted in the example shown; a sample stage 5 with a holding frame 6 for holding a sample carrier 7; and a microscope camera 8. If the objective 4 is rotated into the microscope beam path, the microscope camera 8 receives detection light from one or more samples held by the sample carrier 7 in order to capture a sample image. The sample carrier 7 may, for example, be a microtiter plate, a slide consisting of a flat carrier with a cover slip, a chamber slide, a petri dish, or a gel or gel holder.
The microscope 1 further comprises a panoramic camera 9 for capturing panoramic images of the sample environment. A panoramic image may thus in particular show the sample carrier 7 or a part of it. The field of view 9A of the panoramic camera 9 is larger than the field of view when a sample image is captured. In the example shown, the panoramic camera 9 views the sample carrier 7 via a mirror 9B. The mirror 9B is arranged on the objective revolver 3 and may be selected instead of the objective 4. In variants of this embodiment, the mirror or another deflecting element can also be arranged at a different position. Alternatively, the panoramic camera 9 may be arranged so that it views the sample carrier 7 directly, without the mirror 9B. Although in the example shown the panoramic camera 9 views the top surface of the sample carrier 7, the panoramic camera 9 may alternatively be directed at the bottom surface of the sample carrier 7. In principle, the microscope camera 8 can also act as the panoramic camera if a different objective, in particular a macro objective, is selected via the objective revolver 3 for capturing a panoramic image.
The computing device 20 processes the panoramic images using a computer program 80 according to the invention and optionally controls microscope components based on the processing results. For example, the computing device 20 can evaluate a panoramic image to determine where the wells of a microtiter plate are located, in order then to control the sample stage 5 so that a particular well is approached. The computing device 20 uses calibration parameters P in order to process the panoramic image correctly and to decide how to control microscope components based on position information from the panoramic image. These calibration parameters allow an interpretation of the panoramic image, in particular a quantitative one, e.g. how an orientation in the panoramic image relates to an orientation on the sample stage. The calibration parameters P may also include scaling information, in particular the shape or size with which an object appears in the panoramic image at a particular stage height. For evaluating the panoramic image, the calibration parameters P and optionally the current settings of microscope components, for example the current height of the motorized sample stage, can then be taken into account; alternatively, the height of the sample carrier can also be estimated from the panoramic image by means of the calibration parameters P without knowledge of the microscope component settings. The relationship between position information from the panoramic image and position information relative to a reference position of the microscope can in particular be described by the calibration parameters P.
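To make this concrete, the following sketch models P, purely as an assumption for illustration, as a 3 × 3 homography that maps panoramic-image pixels to stage coordinates for one plane height (all values hypothetical):

```python
import numpy as np

P = np.array([[0.05, 0.0, -12.0],      # px -> mm, hypothetical values
              [0.0, 0.05, -9.0],
              [0.0, 0.0, 1.0]])

u, v = 320, 240                        # pixel selected in the panoramic image
x, y, w = P @ np.array([u, v, 1.0])
stage_target = (x / w, y / w)          # stage position to approach, in mm
```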
The calibration parameters P may lose effectiveness if the user replaces or repositions the microscope components. The effects of vibrations, loose connections or variations in environmental parameters such as air humidity or temperature may also lead to misalignment of the microscope components, thereby making the calibration parameter P inaccurate or unsuitable. To automatically determine such changes, the computing device 20 analyzes panoramic images taken at different locations. This will be described in more detail with reference to the following figures.
FIGS. 2 and 3
Fig. 2 and 3 schematically show the flow of an embodiment of the method of the invention. The method may be performed by the computer program or computing device of fig. 1.
In fig. 2, at least two panoramic images 11 and 12 are first obtained by one and the same panoramic camera, wherein the imaged structures are moved between the image acquisitions. In the example shown, the panoramic images 11 and 12 show a sample carrier 7 with a circular outline, held between two holding frame clips 6A and 6B of a holding frame. As background, the surface of the sample stage 5 can also be seen. The sample stage is moved between the two image acquisitions, whereby the holding frame clips 6A, 6B and the sample carrier 7 are shifted to the right in the panoramic image 12. In principle, the imaged objects can also be moved manually between the two image acquisitions.
In the example shown, the image content of the captured raw images has already been converted into a top view, which can be done by a homography by means of the calibration parameters. The top view corresponds to a perpendicular view of the sample stage plane and thus also of the top surfaces of the holding frame clips 6A, 6B and of the sample carrier 7. The same homography can be applied to both images. As a result, a movement of the sample stage between the image acquisitions (in a direction perpendicular to the viewing direction of the top view) should only lead to a translation in the panoramic images 11 and 12 presented as top views. If no top view were calculated, a movement of the sample stage would not only shift the image content between the panoramic images but would additionally distort it perspectively.
The embodiment variant of fig. 2 exploits the fact that, with an exactly calculated top view, a displacement of the sample stage should result in a pure translation of the image content of the plane in the panoramic images 11, 12 presented as top views. If this is not the case, it can be concluded that the top views were not calculated accurately. Since the top views are calculated by means of the calibration parameters, this means that the calibration parameters used are inaccurate or invalid.
These steps are implemented according to fig. 2 as follows: in step S1, the two panoramic images 11 and 12, presented as top views, are first input into a segmentation model S. The segmentation model S is a learned model, which may comprise a deep neural network, for example a CNN. The present segmentation model S has been trained on annotated training data to distinguish holding frame clips from the remaining image content. For each input image, the segmentation model S outputs one segmentation mask in step S2: the input panoramic image 11 is computed into the segmented panoramic image 21, and the input panoramic image 12 into the segmented panoramic image 22.
In the divided panoramic images 21,22, a specific pixel value represents an image area 26 classified as a holding frame clip, and the remaining image areas are classified as a background 27 by different pixel values. As can be seen in the exemplary segmented panoramic images 21,22, slight segmentation errors may occur here.
The segmented panoramic images 21, 22 are now input in step S3 into a displacement calculation program 30, which is configured to calculate a displacement map 31 in order to superimpose the segmented panoramic images 21, 22 as consistently as possible. Since in this example top views are present and the sample stage movement occurs in, or parallel to, the plane mapped in the top view, the displacement map 31 can be prescribed as a pure translation. The displacement calculation program 30 iteratively calculates for which displacement between the two segmented panoramic images 21, 22 the consistency of the image content is greatest. Knowledge of the stage movement that has taken place is not absolutely necessary and can optionally be used, for example, to determine a starting value for the iteration or limit values for the displacement to be calculated. Alternatively, the displacement of the segmented panoramic images 21, 22 can also be calculated from the known stage movement by means of the calibration parameters; in this calculation, the image content of the segmented panoramic images 21, 22 does not contribute to the magnitude of the displacement.
With the calculated displacement map 31, the superimposed panoramic image 32 is calculated in step S4. This corresponds to a pixel-by-pixel combination of the two panoramic images 21, 22 after they have been moved relative to each other according to the displacement map 31. In the example shown, the panoramic image 22 is thus shifted to the left, after which a pixel-by-pixel combination can be made, for example by addition, subtraction, multiplication or division of the respective pixel values of the panoramic images 21 and 22.
The superimposed panoramic image 32 includes a superimposed region 36 in which both of the segmented panoramic images 21 and 22, moved relative to each other according to the displacement map 31, show a segmented object (here one of the holding frame clips 6A, 6B). The superimposed panoramic image 32 also includes a background 37 in those image regions where both panoramic images 21 and 22, segmented and moved relative to each other according to the displacement map 31, show the background 27. Furthermore, the superimposed panoramic image 32 includes non-superimposed regions 38 and 39. In the non-superimposed region 38, an object (holding frame clip) is found only in the panoramic image 21 and not in the panoramic image 22 moved according to the displacement map 31. Correspondingly, in the non-superimposed region 39, an object is found only in the panoramic image 22 and not in the panoramic image 21, the panoramic images 21, 22 being compared with each other after application of the displacement map 31.
The further method sequence is described further with reference to fig. 3. In step S5, the superimposed panoramic image 32 is fed to a detection model 40 trained to detect certain image areas, hereinafter referred to as evaluation image areas 41-44. The detection model 40 is a machine learning model that learns based on annotated training images. Instead of the superimposed panoramic image 32, in principle one or more of the panoramic images 11,12, 21,22 may also be used as input. In step S6, the detection model 40 outputs the determined boundaries of the evaluation image areas 41-44, which are shown in fig. 3 for better understanding in the superimposed panoramic image 32.
In addition, the evaluation image regions 41 to 44 are shown enlarged in the left half of fig. 3. The right half of fig. 3 shows exemplarily the respective evaluation image areas 41-44, which are calculated in the manner described so far for two further panoramic images.
The detection model 40 may in particular be trained to find edges or corners of the segmentation mask and to determine these as evaluation image regions 41-44. The image regions around such edges or corners are particularly meaningful for assessing how well the images agree. It can furthermore be detected in which direction the image content shown in the evaluation image regions 41-44, which originates from the two underlying panoramic images 11, 12 or 21, 22, is erroneously shifted or offset. For example, the non-superimposed region 38 in the evaluation image region 42 is relatively narrow in the left example of fig. 3 and relatively wide in the example on the right. The offset direction may be defined as the direction of the region 38 relative to the superimposed region 36.
The accuracy of the calibration parameters determines the sizes of the non-superimposed regions 38 and 39, how the non-superimposed regions 38, 39 relate to each other, and their respective offset directions. Based on these features, an inaccurate segmentation can also be distinguished from inaccurate calibration parameters. The evaluation of these features can in principle be carried out by classical algorithms without a learned model; in the example shown, however, a trained machine learning model 50 is used for this purpose. The machine learning model may comprise a deep neural network, for example a CNN, for which model parameters P1-P9 are illustratively shown as entries of a convolution matrix. The machine learning model 50 may be designed as a classification or regression model and receives the several evaluation image regions 41-44 as a common input in step S7. From this input, the machine learning model 50 calculates in step S8 a goodness of consistency Q, which is a measure of the consistency between the superimposed panoramic images. The goodness of consistency Q is here illustratively a classification into good or poor consistency; however, a finer gradation of classes, or continuous values, is also possible as output. In this example, the goodness of consistency Q is used directly as the assessment of whether the calibration parameters are valid. Validity is confirmed in the left example of fig. 3 and denied in the example on the right. With a finer gradation of the validity of the calibration parameters, the accuracy of the calibration parameters can thus also be indicated.
FIG. 4
Fig. 4 illustrates how a panoramic image may be converted into a top-view image and further into a segmentation mask in various embodiments of the invention.
In the example shown, a panoramic image 11' is taken in which the panoramic camera views the top surface of the sample carrier 7 obliquely. The sample carrier 7 is a microtiter plate having a plurality of circular wells as sample containers. The light collector 10 is also visible.
In step S0, the computer program 80 uses the calibration parameters P to convert the panoramic image 11' into a different viewing angle, namely into a top view, shown as the panoramic image 11''. For the plane of the top surface of the sample carrier 7, this corresponds to a view perpendicular to that top surface. In contrast, the depiction of other planes in the panoramic image 11'' (for example the bottom region of the partially transparent sample carrier 7) does not correspond to a top view. In order to convert the plane of the top surface of the sample carrier 7 correctly into a top view, the height level of this top surface must be accurately captured by the calibration parameters.
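Such a perspective conversion can be sketched with a planar homography, for example via OpenCV. Here four point correspondences stand in for the calibration parameters P, from which the homography would be derived in practice; the function name to_top_view is hypothetical.

```python
import cv2
import numpy as np

def to_top_view(panorama: np.ndarray, src_pts: np.ndarray,
                dst_pts: np.ndarray, out_size: tuple) -> np.ndarray:
    """Rectify an obliquely viewed plane (e.g. the sample carrier's top
    surface) into a top view. src_pts/dst_pts are four corresponding
    points (4x2) in the oblique image and the desired top view;
    out_size is (width, height) of the output image."""
    H = cv2.getPerspectiveTransform(src_pts.astype(np.float32),
                                    dst_pts.astype(np.float32))
    return cv2.warpPerspective(panorama, H, out_size)
```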
The panoramic image 11'' presented as a top view is then segmented, as also described with respect to fig. 2. For this purpose, the panoramic image 11'' is fed in step S1 to a segmentation model S', which in this case is trained to segment the sample containers of sample carriers 7. In step S2, the segmentation model S' outputs a segmentation mask; shown is an overlay 21' of the panoramic image 11'' and the associated segmentation mask.
The further method steps can be carried out as described for fig. 2 and 3.
FIG. 5
Fig. 5 shows a flow chart explaining different possible orders of the steps of embodiments of the method according to the invention.
In step S10, at least two panoramic images are captured, in particular by the same camera. Between the recordings, the object to be imaged, for example a sample or the sample stage, is moved or repositioned.
Subsequently, in step S11, the panoramic images are segmented, shifted, and converted to a common viewing angle by means of the calibration parameters. The order of these operations may be chosen variably.
For example, as shown in figs. 2 to 4, one or more panoramic images may first be converted homographically to a common viewing angle in step S12, in particular by mapping all panoramic images onto a top view. Then, in step S13, the panoramic images presented as top views are segmented. Then, in step S14, a displacement map is calculated in order to superimpose the segmented, top-view panoramic images.
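Step S14 can be sketched, for example, with phase correlation standing in for the displacement calculation program 30. This is one classical option among several (feature matching would also work); the function name linear_displacement is hypothetical.

```python
import cv2
import numpy as np

def linear_displacement(top_view_a: np.ndarray, top_view_b: np.ndarray):
    """Estimate the linear displacement between two single-channel
    top-view panoramic images of equal size via phase correlation."""
    (dx, dy), response = cv2.phaseCorrelate(top_view_a.astype(np.float32),
                                            top_view_b.astype(np.float32))
    return (dx, dy), response  # response gives a rough match confidence
```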
The order may be changed such that the segmentation of the panoramic images is performed first in step S13', after which a homographic mapping of the segmented panoramic images onto the top view is calculated in step S12' by means of the calibration parameters. Step S14 may then follow.
Another possible sequence provides that the segmentation is performed first according to step S13', and a displacement map is thereafter calculated in step S14' with simultaneous perspective adaptation by means of the calibration parameters. When the image content of a panoramic image is shifted by a certain distance, the perspective adaptation ensures that this shift corresponds to the depicted object's movement in real space, which can be calculated by means of the calibration parameters.
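A minimal sketch of such a perspective adaptation: if H denotes the image-to-top-view homography obtained from the calibration parameters P (assumed given), a stage movement by (tx, ty) in the rectified plane corresponds in the original image to the conjugated transform H^-1 @ T @ H. The helper function below is hypothetical.

```python
import numpy as np

def image_shift_for_stage_move(H: np.ndarray, tx: float, ty: float) -> np.ndarray:
    """Perspective-correct shift (cf. steps S13'/S14'): map a point to the
    rectified plane (H), translate it there by (tx, ty), and map it back
    (H^-1). Returns the 3x3 homography acting in original image coordinates."""
    T = np.array([[1.0, 0.0, tx],
                  [0.0, 1.0, ty],
                  [0.0, 0.0, 1.0]])
    return np.linalg.inv(H) @ T @ H
```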
Step S14 or S14' is followed by step S15, in which the goodness of consistency between the superimposed (pushed onto each other) panoramic images presented as segmentation masks is calculated in order to evaluate the validity of the calibration parameters.
Other variations of the illustrated embodiments are possible. For example, the described segmentation may be performed as semantic segmentation or as instance segmentation. The determination of the evaluation image regions described with respect to fig. 3 may be replaced by selecting, as evaluation image regions, one or more objects determined by the instance segmentation, or predetermined parts of such objects. Further, the entire superimposed panoramic image 32 from fig. 2 may be used as input to the machine learning model 50 in order to calculate the goodness of consistency Q. In other embodiments, the segmentation may be omitted. For example, the detection according to step S6 of fig. 3 may also be performed on a panoramic image, optionally converted into a top view, without a previously calculated segmentation. In particular, the positions of a plurality of objects may be determined by detection in the respective panoramic images, these positions forming a dot pattern for each panoramic image. The consistency between the dot patterns after the shift determines the goodness of consistency, as sketched below.
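The dot-pattern variant can be illustrated as follows: the detected object positions of the second panoramic image are shifted by the displacement map and scored by their mean nearest-neighbour distance to the positions of the first image. The exponential scoring to a value in (0, 1] and the function name dot_pattern_consistency are illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def dot_pattern_consistency(pts_a: np.ndarray, pts_b: np.ndarray,
                            shift: np.ndarray) -> float:
    """Goodness of consistency from detected object positions (Nx2 arrays)
    only, without segmentation."""
    moved = pts_b + shift                   # apply the displacement map
    dists, _ = cKDTree(pts_a).query(moved)  # nearest neighbour per point
    return float(np.exp(-dists.mean()))     # 1.0 = perfect consistency
```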
FIG. 6
Fig. 6 shows a flow chart of a method variant in which the checking of the calibration parameters is integrated into a process for triangulation-based height estimation. As will be explained, no additional image recordings and no additional equipment or markers are required in order to add the checking of the calibration parameters to the triangulation-based height estimation.
In step S20, the process for height estimation based on triangulation is started. This can be done, for example, at the microscope within the scope of automatic navigation.
In step S21, at least two panoramic images of the same object are acquired with two sample stage positions that are displaced laterally relative to one another.
In step S22, the height estimation is performed according to the principle of triangulation. This exploits the fact that the size of the image displacement caused by the displacement of the sample stage depends on the height level at which the depicted structure is located. The structure or object shown may be, for example, a sample carrier, a holding frame or a component of the sample stage. The distance by which the sample stage moves may be known in this case. The relationship between image displacement and height level is established by means of the pre-specified calibration parameters.
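A worked numerical sketch under a simplified pinhole model (an assumption for illustration; the real relation is encoded in the calibration parameters P): a lateral stage shift of stage_shift_mm produces an image displacement of shift_px = focal_px * stage_shift_mm / Z, from which the object distance Z and thus a height level relative to a reference plane follow. All parameter names are hypothetical.

```python
def height_from_displacement(shift_px: float, stage_shift_mm: float,
                             focal_px: float, z_ref_mm: float) -> float:
    """Triangulation sketch for step S22: invert the pinhole relation
    shift_px = focal_px * stage_shift_mm / Z to obtain the distance Z,
    then report the height relative to a reference distance z_ref_mm.
    shift_px must be non-zero (a structure on the optical axis at
    infinite distance would not move in the image)."""
    z_mm = focal_px * stage_shift_mm / shift_px
    return z_ref_mm - z_mm  # positive = structure lies above the reference plane
```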
In addition to step S22, an estimation of the validity or accuracy of the calibration parameters is now carried out (simultaneously, beforehand or afterwards) in step S23 in the manner already described, in particular according to steps S11 and S15 of fig. 5.
Subsequently, in step S24, the estimated value of the height level may be output together with information on the accuracy of the estimate, based on the goodness of consistency determined as described.
The segmented panoramic images converted into a top view can be used not only for the calibration check but also for the height estimation. Likewise, the determined displacement may optionally also be used for the triangulation-based height estimation.
Depending on the result from S24, for example, a calibration process may be started or component control and movements may be adapted. For example, a safety distance for avoiding collisions may be determined based on the accuracy or validity of the calibration parameters. In a variant of this embodiment, step S23 for checking the calibration parameters is performed first, and step S22 for estimating the height level is performed only if the validity of the calibration parameters is confirmed, as in the sketch below.
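The following sketch of this gated workflow treats the three callables and the threshold q_min as placeholders; it is not the patent's concrete control logic.

```python
def checked_height_estimation(estimate_height, check_calibration,
                              recalibrate, q_min: float = 0.5):
    """Run the calibration check (S23) first; only if the goodness of
    consistency Q passes the threshold is the height estimated (S22).
    Otherwise a calibration process is started and the check repeated."""
    q = check_calibration()   # goodness of consistency Q
    if q < q_min:
        recalibrate()         # e.g. start a calibration process
        q = check_calibration()
    return estimate_height(), q
```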
The described embodiments are purely illustrative and modifications thereof are possible within the scope of the appended claims.
List of reference numerals
1 microscope
2 support
3 objective lens rotary base
4 microscope objective
5 sample stage
6 holding frame
6A holding frame clip
6B holding frame clip
7 sample carrier
8 microscope camera
9 panoramic camera
9A field of view of the panoramic camera
9B reflector
10 light collector
11, 12 panoramic images
11' panoramic image before perspective conversion to top view
11'' panoramic image after perspective conversion to top view
20 computing device
21 segmented panoramic image
21' overlay comprising the panoramic image 11'' and the associated segmentation mask
26 image area segmented as holding frame clip
27 image area segmented as background
30 Displacement calculation program
31 displacement mapping
32 superimposed panoramic images
36 superimposed area in the superimposed panoramic image
37 background in the superimposed panoramic image
38 non-superimposed areas in the superimposed panoramic image
39 non-superimposed areas in the superimposed panoramic image
40 detection model/machine learning model
41-44 evaluating image areas
50 machine learning model
80 computer program of the invention
100 microscope system of the invention
P calibration parameters
P1-P9 model parameters
Q goodness of consistency
S segmentation model
S' segmentation model
S0-S8 method steps of an embodiment of the invention
S10-S15, S12'-S14' method steps of embodiments of the invention
S20-S24 method steps of an embodiment of the invention

Claims (16)

1. A microscope system, comprising:
a panoramic camera (9) for recording panoramic images (11, 12) of an environment of a sample site; and
a computing device (20) arranged for evaluating the panoramic images (11, 12), wherein the computing device (20) has calibration parameters (P) for interpreting image coordinates of the panoramic images (11, 12);
characterized in that
the computing device (20) is arranged to,
-controlling the panoramic camera (9) to take at least two panoramic images (11, 12) in different sample positions or sample stage positions;
-computing a displacement map (31) for superimposing the panoramic images (11, 12; 21,22), wherein the panoramic images (11, 12; 21,22) are converted into the same viewing angle by means of calibration parameters (P); and
-evaluating whether the calibration parameters (P) are valid based on a goodness of consistency (Q) between the superimposed panoramic images (32).
2. A method for calibration checking, comprising:
acquiring at least two panoramic images (11, 12) of a sample position environment at different sample positions or sample stage positions of a microscope system;
calculating a displacement map (31) for superimposing the panoramic images (11, 12; 21,22), wherein the panoramic images (11, 12; 21,22) are converted into the same viewing angle by means of calibration parameters (P); and
evaluating whether the calibration parameter (P) is valid based on a goodness of consistency (Q) between the superimposed panoramic images (32).
3. The method of claim 2,
characterized in that the at least two panoramic images (11, 12) are converted into a top view (S0) by means of the calibration parameters (P) and that the displacement map (31) is thereafter calculated as a linear displacement.
4. The method of claim 2,
characterized in that the at least two panoramic images (11, 12) are first segmented (S2) by a segmentation model (S) and the displacement map (31) is calculated based on the segmented panoramic images (21, 22).
5. The method of claim 2,
characterized in that the displacement map (31) is calculated as a displacement that maximizes the consistency between the panoramic images (11-12; 21-22).
6. The method of claim 2,
characterized in that permissible displacement limit values for calculating the displacement map (31) are determined on the basis of the calibration parameters (P), and/or
wherein a starting value for iteratively calculating the displacement map (31) is determined depending on the calibration parameters (P).
7. The method of claim 2,
characterized in that the displacement map (31) is calculated by means of a known adjustment between the sample stage positions at which the at least two panoramic images (11, 12) were captured.
8. The method of claim 2,
characterized in that, together with the displacement map (31), a perspective change corresponding to the displacement map (31) is also calculated by means of the calibration parameters (P).
9. The method of claim 2,
characterized in that at least one relevant region is determined from the panoramic image (11, 12) and the displacement map (31) is calculated for the at least one relevant region.
10. The method of claim 9,
wherein the at least one relevant region is determined by a machine learning model trained for segmentation or detection.
11. The method of claim 2,
characterized in that, in order to calculate the goodness of consistency (Q) between the superimposed panoramic images (32), a ratio of superimposed areas (36) to non-superimposed areas (38, 39) is calculated, wherein a respective direction of deviation is calculated for each non-superimposed area (38, 39).
12. The method of claim 2,
characterized in that evaluation image regions (41-44), on which a goodness of consistency (Q) is calculated, are selected by means of a machine learning model (40), wherein at least one of the panoramic images (11, 12), or at least one image (32) calculated therefrom, is input into the machine learning model (40), and the machine learning model (40) outputs image regions which are used as the evaluation image regions (41-44).
13. The method of claim 2,
characterized in that the calculation of the goodness of consistency (Q) between the superimposed panoramic images (32) is performed by a trained machine learning model (50) which obtains as input the superimposed panoramic images (32) or image areas thereof and calculates as output the goodness of consistency (Q).
14. The method of claim 2,
characterized in that the goodness of consistency (Q) is determined from image distances between structures corresponding to each other in the superimposed panoramic images (11, 12).
15. The method of claim 2,
characterized in that two or more panoramic images (11, 12) are recorded one after the other, and a time profile of the goodness of consistency (Q) is calculated by means of the panoramic images (11, 12),
wherein whether the calibration parameters (P) are valid is evaluated from the time profile of the goodness of consistency (Q).
16. A computer program having instructions which, when executed by a computer, cause the method according to any one of claims 2 to 15 to be performed.
CN202111129866.9A 2020-10-09 2021-09-26 Microscope system and method for calibration checking Pending CN114326078A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020126549.2A DE102020126549A1 (en) 2020-10-09 2020-10-09 MICROSCOPY SYSTEM AND CALIBRATION CHECK PROCEDURE
DE102020126549.2 2020-10-09

Publications (1)

Publication Number Publication Date
CN114326078A true CN114326078A (en) 2022-04-12

Family

ID=80817897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111129866.9A Pending CN114326078A (en) 2020-10-09 2021-09-26 Microscope system and method for calibration checking

Country Status (2)

Country Link
CN (1) CN114326078A (en)
DE (1) DE102020126549A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102022130872A1 (en) 2022-11-22 2024-05-23 Leica Microsystems Cms Gmbh Optical imaging system, methods, systems and computer programs

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013006994A1 (en) 2013-04-19 2014-10-23 Carl Zeiss Microscopy Gmbh Digital microscope and method for optimizing the workflow in a digital microscope
DE102013012987A1 (en) 2013-08-03 2015-02-05 Carl Zeiss Microscopy Gmbh Method for calibrating a digital optical device and optical device
DE102013222295A1 (en) 2013-11-04 2015-05-07 Carl Zeiss Microscopy Gmbh Digital microscope, method for calibration and method for automatic focus and image center tracking for such a digital microscope
DE102017109698A1 (en) 2017-05-05 2018-11-08 Carl Zeiss Microscopy Gmbh Determining context information for change components of an optical system
DE102018133188A1 (en) 2018-12-20 2020-06-25 Carl Zeiss Microscopy Gmbh DISTANCE DETERMINATION OF A SAMPLE LEVEL IN A MICROSCOPE SYSTEM
DE102018133196A1 (en) 2018-12-20 2020-06-25 Carl Zeiss Microscopy Gmbh IMAGE-BASED MAINTENANCE PROPERTY AND MISUSE DETECTION
DE102019114117B3 (en) 2019-05-27 2020-08-20 Carl Zeiss Microscopy Gmbh Automatic workflows based on recognition of calibration samples
DE102020101191A1 (en) 2020-01-20 2021-07-22 Carl Zeiss Microscopy Gmbh Microscope and method for determining a measurement location of a microscope
DE102020118801A1 (en) 2020-07-16 2022-01-20 Carl Zeiss Microscopy Gmbh MICROSCOPE AND PROCEDURE FOR DISTANCE DETERMINATION OF A SAMPLE REFERENCE PLANE

Also Published As

Publication number Publication date
DE102020126549A1 (en) 2022-04-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination