CN115552462A - Image analysis method in microscope - Google Patents

Image analysis method in microscope

Info

Publication number
CN115552462A
Authority
CN
China
Prior art keywords
image
type
recording type
simulated
image recording
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180034442.6A
Other languages
Chinese (zh)
Inventor
M. Amthor
D. Haase
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Carl Zeiss Microscopy GmbH
Original Assignee
Carl Zeiss Microscopy GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Carl Zeiss Microscopy GmbH filed Critical Carl Zeiss Microscopy GmbH
Publication of CN115552462A publication Critical patent/CN115552462A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/255Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10064Fluorescence image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection


Abstract

The invention relates to a method for image analysis in a microscope, in which a first image (I_A) of a sample is recorded with a first image recording type and the image data are stored. The image data are used to simulate an image (I_A→B) of the sample in a second image recording type. A second image is also recorded with the second image recording type and stored. The simulated image of the second image recording type is compared with the recorded second image in order to find differences, which are then captured, analyzed and evaluated.

Description

Image analysis method in microscope
Technical Field
The present invention relates to a method for image analysis in a microscope according to the preamble of the independent claim.
Background
Image analysis methods are used in particular in microscopes in order to find regions of interest and/or abnormal regions and differences in microscope images.
For example, on the basis of the differences, defective sample preparations may be identified, or atypical reactions (anomalies) of certain sample regions may be captured. Methods such as novelty detection or autoencoders are known for this purpose, for example.
It is also known from the prior art that, on the basis of image data of one image recording type, an image of the sample in another image recording type can be simulated. In this regard, Ounkomol et al. (2018; Nature Methods, Vol. 15, p. 917) describe the simulation of 3D fluorescence images starting from transmitted-light images, and the prediction of immunofluorescence images on the basis of electron microscope images. The quality of the simulation result is evaluated on the basis of the degree of correspondence between the simulated image and an actual image of the other image recording type.
Disclosure of Invention
The invention is based on the object of proposing a possibility which enables deviations in the structure of a sample and its image representation to be identified.
In a first alternative, the object is achieved by a method of image analysis in a microscope, wherein a first image of a sample is recorded with a first image recording type and the image values are stored. On the basis of these image values, an image of the sample, or of the same sample region, in a second image recording type is predicted. Furthermore, a second image of the sample or of the same sample region is recorded with the second image recording type and stored.
The method is characterized in that the predicted (hereinafter also referred to as simulated) image of the second image recording type and the second image recorded with the second image recording type are compared with each other in order to find differences between them. The differences found are captured, analyzed and evaluated.
The first image recorded with the first image recording type serves as the starting data set for generating the predicted or simulated image. In the present invention, the terms "prediction" and "simulation" are understood synonymously as a mapping or virtual projection by means of a mathematical function. The order in which the first and second images are captured is not important. The function, rule, mathematical algorithm and/or logical model on which the prediction or simulation is based is subsumed under the term simulation model.
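The first alternative can be sketched numerically as follows. Here `simulate_b` is a hypothetical stand-in for the trained simulation model (a toy intensity inversion, purely illustrative), and the image arrays are synthetic:

```python
import numpy as np

def simulate_b(image_a):
    """Hypothetical simulation model: maps an image of recording type A
    to a predicted image of recording type B. A real model would be a
    trained image-to-image network; a toy inversion stands in here."""
    return 1.0 - image_a

# First image of the sample, recorded with image recording type A
image_a = np.array([[0.1, 0.9],
                    [0.5, 0.2]])

# Predicted (simulated) image of recording type B
image_a_to_b = simulate_b(image_a)

# Actually recorded second image of recording type B; one pixel deviates
image_b = np.array([[0.9, 0.1],
                    [0.5, 0.2]])

# Pixel-wise comparison of simulated and recorded image: large values
# mark potential differences (anomalies, regions of interest)
difference = np.abs(image_a_to_b - image_b)
```

The deviating pixel at position (1, 1) stands out in `difference`, while pixels the model predicted correctly give values near zero.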
It has been recognized that differences between the simulated second image and the recorded second image can be identified better, or even for the first time, if the commonalities and differences existing between at least two image recording types are utilized and taken into account. A difference may be caused, for example, by noise of the kind that typically occurs in image and signal processing. Artifacts of sample handling, such as sample preparation, sample staining or sample extraction, likewise constitute differences, also referred to as anomalies. An anomaly is, for example, a wrongly stained cell or cell structure. Differences (anomalies) may also result from cells or cell components taking up, or failing to take up, a stain in an undesired or unexpected manner. Furthermore, the dyes used may exhibit unexpected behaviour, which may be caused, for example, by the bleaching behaviour of fluorescent dyes.
Differences may also be caused by structurally conspicuous sample regions, such as deformed cell regions. If the presence or absence of a known structure is regarded as a difference, this can likewise be identified and captured by the method according to the invention. In this respect, the method according to the invention advantageously differs from known approaches such as autoencoders or one-class classification.
The user may be particularly interested in such differences. Regions of the sample in which differences occur can therefore also be identified and marked as so-called ROIs (regions of interest). In the context of the present description, anomalies and ROIs are referred to as differences for simplicity.
The first and second image recording types may be, for example, different contrast methods and/or different channels of a contrast method. The different channels may be, for example, fluorescence channels through which different structures can be imaged. In this regard, an image of the DAPI fluorescence channel (DAPI is a fluorescent stain that labels AT-rich regions in DNA) may, for example, be generated as the simulated second image, starting from a first image showing the fluorescence channel of fluorescently labelled microtubules.
The image recording types on which the simulation is based, or which result from the simulation, may be contrast methods such as bright field, dark field, DIC (differential interference contrast) and fluorescence methods. In the latter, structures of the sample are provided with a fluorescent marker whose fluorescence is detected.
The image data of the first and second image recording types may be, for example, 2D data, 3D data, and/or time series data.
In a further possible embodiment of the method, a (first) simulated image may be generated and provided starting from the first image. In addition, a (second) simulated image may be generated and provided starting from the image of the second image recording type; in this case, a simulated image corresponding to the first image of the first image recording type is generated starting from the second image of the second image recording type. The first and second simulated images can thus be mapped onto images of the second and first image recording types, respectively. In this configuration of the method according to the invention, the analysis starting from a first image of the first image recording type and the analysis starting from a second image of the second image recording type are therefore performed in a manner similar to a cross-correlation.
A second alternative of the method according to the invention advantageously manages without capturing images of the second image recording type. A simulated image of the second image recording type is predicted starting from an image of the first image recording type. This is then used as the starting point for predicting a simulated image of the first image recording type; in short, the prediction is carried out in a loop. The image of the first image recording type used as the starting data set is compared with the simulated image of the first image recording type. The approach of this configuration is that the simulation model used cannot predict the occurrence of anomalies; differences therefore occur when the image of the first image recording type is compared with the simulated image of the first image recording type, and these can be captured and analyzed. The results of the comparison may also be used to evaluate the suitability of the simulation model.
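This looped prediction can be sketched with toy forward and backward models. All names are illustrative assumptions; the clipping in the forward model stands in for structures the simulation model cannot represent, which is what makes anomalies surface in the cycle comparison:

```python
import numpy as np

def simulate_a_to_b(img):
    """Hypothetical forward model A -> B (toy: doubles intensities but
    cannot represent values above 1.0, so it clips them)."""
    return np.clip(2.0 * img, 0.0, 1.0)

def simulate_b_to_a(img):
    """Hypothetical backward model B -> A (toy: halves intensities)."""
    return 0.5 * img

image_a = np.array([[0.2, 0.4],
                    [0.3, 0.8]])

# Loop: A -> simulated B -> simulated A
image_a_to_b = simulate_a_to_b(image_a)
image_cycle_a = simulate_b_to_a(image_a_to_b)

# Structures the models can represent survive the cycle unchanged;
# the pixel the forward model clipped shows up as a difference
cycle_difference = np.abs(image_a - image_cycle_a)
```

Only the pixel with value 0.8, which the forward model could not represent, produces a non-zero entry in `cycle_difference`; no second image I_B had to be recorded.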
The image data of the images of the at least two image recording types may also already have been saved as stored data and can be retrieved and processed on request. The same applies to simulated images based on images of the first image recording type and, optionally, to simulated images of the second image recording type. The image data may, for example, optionally be improved, reconstructed or converted to different contrasts by machine learning.
In a further configuration of the method, context information may be included for predicting or simulating the respective image, e.g. about the type of sample, the staining of the sample, the contrast method, etc.
For the step of analyzing and determining the difference, the image of the first image recording type can optionally be taken into account in addition to the image of the second image recording type and the simulated image.
In a further configuration of the image analysis method, more than two image recording types may also be used. In this way, the probability of finding a difference, in particular an anomaly and/or a region of interest, is advantageously increased, or the presence of a specific anomaly can be checked by targeted selection of the type of image recording used.
An advantage of the method according to the invention is that the sample may be changed between the image recordings of the different image recording types. For instance, the sample may be stained in the meantime, e.g. chemically. In one example, such staining may be performed using haematoxylin-eosin staining (HE staining).
If differences are found during the comparison, then in one configuration of the method they may be assigned to previously defined categories. Such categories are, for example: simulation errors; structures simulated but not captured in the second image; and/or the presence/absence of at least one predefined structure in the second image. At least one category may also comprise those differences which are classified, for example, as noise artifacts and are not considered further in the analysis.
In another configuration of the method, a difference image may be generated from the simulated image of the second image recording type and the second image recorded with the second image recording type, and the difference image may be displayed on a display, for example, a monitor. The difference image may be evaluated by the user for differences that occur.
However, it is advantageous to automatically analyze and/or classify the found differences by means of image analysis, in particular by means of machine learning.
The prediction or simulation of the simulated image may also be performed using machine learning methods. For example, an image-to-image mapping known in the art may be used for this purpose. Such a mapping is learned on the basis of N corresponding image pairs of the two image recording types ((I_A,1, I_B,1), (I_A,2, I_B,2), …, (I_A,N, I_B,N)), where I_A denotes an image of the first image recording type and I_B an image of the second image recording type. A simulated image generated on the basis of an image I_A is denoted below by I_A→B. A trained image-to-image model generally simulates well the correspondences between the two image recording types that are present in the training data, that is to say the prediction I_A→B is very similar to the real image I_B. Cases deviating severely from these training data can often no longer be covered well by the model and subsequently lead to differences between the simulated image I_A→B and the recorded image I_B, which ultimately form the basis of the method according to the invention presented here.
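A minimal numerical stand-in for such pair-based learning of a mapping: instead of a real image-to-image network, a single global linear intensity transform a·x + b is fitted by least squares from N synthetic training pairs. All names and the synthetic data are illustrative assumptions, not the patent's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# N corresponding training pairs (I_A,i, I_B,i); the "true" relation in
# this toy setup is the intensity transform I_B = 0.5 * I_A + 0.1 plus noise
n_pairs = 20
pairs = []
for _ in range(n_pairs):
    i_a = rng.random((8, 8))
    i_b = 0.5 * i_a + 0.1 + rng.normal(0.0, 0.01, (8, 8))
    pairs.append((i_a, i_b))

# Fit the mapping parameters by least squares over all pixels of all pairs
x = np.concatenate([p[0].ravel() for p in pairs])
y = np.concatenate([p[1].ravel() for p in pairs])
design = np.stack([x, np.ones_like(x)], axis=1)
(a, b), *_ = np.linalg.lstsq(design, y, rcond=None)

def simulate(image_a):
    """Apply the learned mapping to predict I_A->B from I_A."""
    return a * image_a + b
```

The fitted parameters recover the relation present in the training pairs; inputs that deviate strongly from this relation would produce the differences the method exploits.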
As already explained, differences, in particular anomalies, can be identified automatically by image analysis. In this regard, deviations between the simulated image I_A→B and the second image I_B may be quantified by simple measures, for example on the basis of differences, absolute values, at least one correlation, variances, the use of thresholds, and the like. Alternatively, machine learning may be used, for example to perform the image analysis using such metrics.
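The simple measures mentioned above could, for example, be computed as follows; the function and dictionary key names are illustrative, not part of the invention:

```python
import numpy as np

def difference_metrics(simulated, recorded, threshold=0.2):
    """Quantify the deviation between a simulated image I_A->B and the
    recorded image I_B with simple measures: mean absolute error,
    variance, correlation, and a thresholded anomaly fraction."""
    diff = recorded - simulated
    return {
        "mean_abs_error": float(np.mean(np.abs(diff))),
        "variance": float(np.var(diff)),
        # Pearson correlation between the two images
        "correlation": float(np.corrcoef(simulated.ravel(),
                                         recorded.ravel())[0, 1]),
        # Fraction of pixels whose deviation exceeds the threshold
        "anomalous_fraction": float(np.mean(np.abs(diff) > threshold)),
    }

simulated = np.array([[0.1, 0.2], [0.3, 0.4]])
recorded  = np.array([[0.1, 0.2], [0.3, 0.9]])   # one deviating pixel
metrics = difference_metrics(simulated, recorded)
```

On this toy pair, the single deviating pixel yields an anomalous fraction of 0.25 while the correlation stays high, illustrating why several measures together are more informative than one.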
In the case of machine learning, a distinction should be made between supervised and unsupervised learning methods. Supervised learning comprises learning on the basis of a training data set consisting of real images, (predicted) simulated images and corresponding annotations of existing differences, in order to identify these and further differences. In this case, classification models and/or segmentation models may be used. Furthermore, differences may be identified on the basis of the image data or by calculation.
For example, by using a classification model, a prediction (yes/no decision) regarding the occurrence of an anomaly/difference in an image can be made. A prediction may also be made as to the type of difference (e.g. no difference; type 1; type 2).
For example, if a segmentation model is used, the image can be segmented into regions without differences ("normal") and regions with differences ("abnormal"). Alternatively, the image may be segmented into regions that are assigned to predefined categories of difference/anomaly types (e.g. no difference; type 1; type 2).
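A minimal threshold-based sketch of such a normal/abnormal segmentation; a learned segmentation model would replace the simple rule, and the names are illustrative:

```python
import numpy as np

def segment_differences(simulated, recorded, threshold=0.3):
    """Segment the image into 'normal' (0) and 'abnormal' (1) regions by
    thresholding the pixel-wise deviation between simulated and
    recorded image."""
    return (np.abs(recorded - simulated) > threshold).astype(np.uint8)

simulated = np.array([[0.2, 0.2, 0.2],
                      [0.2, 0.2, 0.2]])
recorded  = np.array([[0.2, 0.2, 0.2],
                      [0.2, 0.9, 0.2]])   # one abnormal pixel
mask = segment_differences(simulated, recorded)
```

The resulting binary mask marks exactly the one deviating pixel as abnormal and could feed a downstream ROI selection.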
If reference image data are captured and used for the analysis, differences can be identified from them, even at a finer granularity (type 1; type 2; …).
In terms of calculation, differences may be found, for example, by regression calculation and optionally specified further. In this case, for example, the proportion of cells affected by an anomaly or the image coordinates of the differences may be used.
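Such quantities, i.e. the proportion of affected pixels and their image coordinates, might be derived from a difference comparison as follows (an illustrative sketch, not the claimed method):

```python
import numpy as np

def difference_statistics(simulated, recorded, threshold=0.3):
    """Derive regression-style quantities from the comparison: the
    proportion of deviating pixels and their (row, col) coordinates."""
    deviating = np.abs(recorded - simulated) > threshold
    coords = np.argwhere(deviating)      # one (row, col) per deviating pixel
    proportion = deviating.mean()
    return proportion, coords

simulated = np.zeros((4, 4))
recorded = np.zeros((4, 4))
recorded[0, 3] = 1.0                     # two injected deviations
recorded[2, 1] = 1.0
proportion, coords = difference_statistics(simulated, recorded)
```

Here 2 of 16 pixels deviate, so the proportion is 0.125, and the coordinate list localizes the differences for further processing.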
Unsupervised learning may involve, for example, the use of clustering algorithms or "classical" anomaly detection, for example by means of an autoencoder which uses the predicted and real images as inputs.
The simulated image I_A→B and the second image I_B can be used as input values for a machine-learning-based approach. In addition, the first image I_A may optionally be input, as well as context information determined manually or automatically, such as the type of sample, information about the contrast method (type, dye used, wavelength, etc.) and/or information about the recording conditions (objective lens, immersion medium, etc.).
In addition to or instead of capturing, analyzing and evaluating the differences found, the comparison of the simulated image (I_A→B) of the second image recording type with the image (I_B) of the second image recording type can be used, in the method according to the invention, to evaluate the quality of the simulation model. In particular, by checking whether a predetermined quality criterion is fulfilled, the suitability of the simulation model for predicting a particular image recording type and/or its suitability for imaging a particular sample type can be assessed.
In order to give the user of the method according to the invention the opportunity to interact with the method sequence, the currently reached training state of the simulation model used to predict the respective simulated image can be determined from the result of the check of compliance with the quality criterion. The determined training state is advantageously presented on a display. This is preferably combined with a query for the input or selection of at least one control command, by means of which, for example, a further iteration of the training sequence of the relevant simulation model can be triggered if the current training state is judged to be insufficient. In a further configuration, the determined current training state may be displayed to the user and, in the case of an insufficient training state, training data of the current sample may again be captured automatically and the simulation model retrained.
The information available to the user may additionally give an indication about expected artifacts in the respective simulated images, in particular if the training state of the simulation model is classified as unsuitable or suitable only to a limited extent.
This interaction allows the user to train the model himself. Furthermore, owing to the interaction presented, even users without specialist expertise in the field of machine learning can handle the respective system, and erroneous results due to insufficiently trained simulation models can be prevented.
In order to be able to carry out the image analysis method according to the invention, a microscope which allows image recording with at least two contrast types or with at least two channels of one contrast type is sufficient. Furthermore, an analysis unit, for example in the form of a computer, is required, which is configured for carrying out the image analysis method according to the invention. Optionally, a display facility (screen, monitor, display) may be provided, for example, on which the difference image is displayed. The possibility of entering data and commands (interface, keyboard, etc.) may additionally be provided.
The described invention relates to a method that enables the automatic identification of anomalies or regions of interest in a sample image. Compared with previously known anomaly detection methods (e.g. autoencoders, one-class classification), even errors in the staining of the sample, for example, can be identified by the method according to the invention. Finding and selectively classifying differences such as anomalies and/or regions of interest is a technical challenge that will become more important in the future owing to the ever increasing amount of data.
A major advantage of the method according to the invention can be seen in the fact that simulated image data are compared with actually measured target data (so-called "inference"). As a result of this comparison, the evaluation of the quality of the respective model is more accurate and more reliable; moreover, it is independent of an evaluation of the target distribution. In contrast to the method according to the invention, methods according to the prior art evaluate the input data or output data of the model directly (often referred to as "validation"). This includes evaluating whether the data match the input or output distribution, respectively.
In the method according to the invention, the finding of differences is accordingly carried out on the basis of previously unknown image data (inference), using a simulation model that has been fully trained beforehand in the workflow.
This also means that, in the course of inference by means of the method according to the invention, the image data containing the potential differences to be found must not previously have been used for training the simulation model. By contrast, in the case of validation as described in the prior art, the performance of the simulation model is evaluated on the basis of data from the training distribution.
Drawings
The invention is explained by the following examples with reference to two drawings, in which:
fig. 1 shows a microscope and a schematic sequence of a first configuration of an image analysis method according to the invention; and
fig. 2 shows a microscope and a schematic sequence of a second configuration of the image analysis method according to the invention.
Detailed Description
Using a microscope 1 that allows image recording with at least two contrast types, or with at least two channels of one contrast type, a first image I_A of the first image recording type is captured, for example in bright-field mode. The microscope 1 is connected to an analysis unit 2 in the form of a computer and to a monitor as display 3. The analysis unit 2 additionally has a data memory (not shown) and is configured such that, by means of a simulation model, it simulates an image I_A→B starting from the first image I_A, as if the simulated image I_A→B had been recorded with the second image recording type (for example with a second contrast type), e.g. as a fluorescence image. The data of the simulated image I_A→B are stored. Fig. 1 shows that a structure represented with sparse dots in the first image I_A is no longer present in the simulated image I_A→B. The decision criteria that result in such structures being omitted may be based on machine learning methods.
Furthermore, a second image I_B of the sample is captured with the second image recording type (e.g. as a fluorescence image) and stored. The simulated image I_A→B and the second image I_B are then compared with each other, for example by means of the correspondingly configured analysis unit 2, in order to find differences between them. On the basis of the comparison, for example, a difference image I_Diff is generated, in which any differences found are visualized.
In the difference image I_Diff between the simulated image I_A→B and the second image I_B, the difference shown by way of example is a structure marked with hatching. This structure was predicted for the simulated image I_A→B but cannot actually be confirmed in the second image I_B.
For example, a structure embodying this difference may not be detectable with the second image recording type owing to inadequate sample preparation. It may also be a structure type of the sample (shown with dense dots in image I_A) that cannot be distinguished, with the first image recording type, from other structure types (likewise shown with dense dots in image I_A). On the basis of available, retrievably stored prior information and/or characteristics of the image recording types, the difference can be classified manually or, preferably, automatically.
The found differences can be further processed according to their classification. For example, the difference may be ignored in further image analysis, may influence the further analysis in a weighted manner, or may be explicitly selected and treated as a region of interest (ROI).
A second alternative of the method according to the invention comprises: starting from the image I_A of the first image recording type, predicting a simulated image I_A→B of the second image recording type by means of a simulation model. In a further step, the simulated image I_A→B of the second image recording type is subsequently used as the basis for predicting a simulated image (I_A→B)→I_A of the first image recording type. The simulated image (I_A→B)→I_A of the first image recording type obtained in this way is compared with the image I_A of the first image recording type. By means of the comparison, both the differences present can be captured and analyzed, and the suitability of the simulation models used for predicting the simulated image I_A→B of the second image recording type and the simulated image (I_A→B)→I_A of the first image recording type can be derived. Compliance or non-compliance with predetermined quality criteria may serve as the basis for the decision in the comparison. This configuration of the method does not require the actual capture of a second image I_B; instead, its virtual counterpart is advantageously used.
Reference symbols
1 microscope
2 analysis unit
3 display
I_A first image (first image recording type)
I_B second image (second image recording type)
I_A→B simulated image
I_Diff difference image

Claims (10)

1. A method of image analysis in a microscope, wherein
a first image (I_A) of a sample is recorded with a first image recording type and the image values are stored;
a simulated image (I_A→B) of the sample in a second image recording type is predicted by a simulation model on the basis of the image values; and,
in a first alternative,
a second image (I_B) is recorded with the second image recording type and stored,
the simulated image (I_A→B) of the second image recording type and the second image (I_B) recorded with the second image recording type are compared with each other in order to find differences between them, and
the differences found are captured, analyzed and evaluated;
or, in a second alternative,
starting from the image (I_A) of the first image recording type, a simulated image (I_A→B) of the second image recording type is predicted,
starting from the simulated image (I_A→B) of the second image recording type, a simulated image (I_A→B)→I_A of the first image recording type is predicted,
the image (I_A) of the first image recording type and the simulated image (I_A→B)→I_A of the first image recording type are compared with each other in order to find differences between them, and
the differences found are captured, analyzed and evaluated.
2. The method according to claim 1, characterized in that the image data of the images (I_A; I_B) of the at least two image recording types and/or of the simulated image (I_A→B) of the second image recording type and/or of the simulated image (I_A→B)→I_A of the first image recording type are saved as stored data, and the image data are retrieved and processed on request.
3. The method of any of the preceding claims, wherein the found differences are assigned to previously defined categories.
4. The method according to claim 3, characterized in that a simulation error, a structure simulated in the first alternative but not present in the second image (I_B), a structure simulated in the second alternative but not present in the image (I_A) of the first image recording type, and/or the presence of at least one predefined structure in the second image (I_B) or in the first image (I_A) is defined as a category.
5. The method according to any of the preceding claims, characterized in that different contrast methods and/or different channels of a contrast method are used as image recording types.
6. The method according to any of the preceding claims, characterized in that, using a selected metric, the first simulated image (I_A→B) is compared with the image (I_B) of the second image recording type, or the second simulated image (I_B→A) is compared with the image (I_A) of the first image recording type, or the first image (I_A) is compared with the simulated image (I_A→B)→I_A of the first image recording type.
7. The method of any of the preceding claims, wherein the analysis of the found differences and/or the classification of the found differences is performed by machine learning.
8. The method according to any of the preceding claims, characterized in that the prediction of the first simulated image (I_A→B) of the second image recording type and/or of the second simulated image (I_B→A) of the first image recording type or of the simulated image (I_A→B)→I_A of the first image recording type is performed by machine learning.
9. The method of claim 7, wherein analyzing and/or classifying is performed by means of at least one of the group consisting of classification, segmentation, detection and regression as supervised machine learning methods and cluster analysis and traditional anomaly detection as unsupervised learning methods.
10. Method according to claim 1, characterized in that, in addition to or instead of capturing, analyzing and evaluating the found differences, a quality assessment of the simulation model is achieved by checking whether the comparison of the simulated image of the second image recording type (I_A→B) with the image of the second image recording type (I_B), or of the corresponding simulated image of the first image recording type (I_A→B→A) with the image of the first image recording type (I_A), fulfils a predetermined quality criterion.
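The comparison and quality-check steps recited in the claims above can be illustrated with a minimal sketch. The machine-learned simulation model itself (claim 8) is out of scope here; all function names, the choice of a per-pixel absolute difference as the difference map, and the PSNR-based quality criterion with its threshold are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def difference_map(sim_img, real_img):
    """Per-pixel absolute difference between a simulated image
    (e.g. I_A→B) and the recorded image of the same type (e.g. I_B)."""
    return np.abs(sim_img.astype(float) - real_img.astype(float))

def mse(sim_img, real_img):
    """Mean squared error as one possible selected metric (claim 6)."""
    return float(np.mean((sim_img.astype(float) - real_img.astype(float)) ** 2))

def psnr(sim_img, real_img, max_val=255.0):
    """Peak signal-to-noise ratio derived from the MSE."""
    m = mse(sim_img, real_img)
    if m == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / m)

def quality_ok(sim_img, real_img, psnr_threshold=30.0):
    """Quality assessment of the simulation model (claim 10): check
    whether a predetermined quality criterion (here: a PSNR threshold,
    an assumed example value) is fulfilled."""
    return psnr(sim_img, real_img) >= psnr_threshold
```

In use, `difference_map` would feed the capture and classification of found differences (claims 3 and 4), while `quality_ok` stands in for the predetermined quality criterion of claim 10.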
CN202180034442.6A 2020-05-14 2021-05-11 Image analysis method in microscope Pending CN115552462A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102020206088.6A DE102020206088A1 (en) 2020-05-14 2020-05-14 Image evaluation methods in microscopy
DE102020206088.6 2020-05-14
PCT/EP2021/062539 WO2021228894A1 (en) 2020-05-14 2021-05-11 Image analysis method in microscopy

Publications (1)

Publication Number Publication Date
CN115552462A true CN115552462A (en) 2022-12-30

Family

ID=76034612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180034442.6A Pending CN115552462A (en) 2020-05-14 2021-05-11 Image analysis method in microscope

Country Status (3)

Country Link
CN (1) CN115552462A (en)
DE (1) DE102020206088A1 (en)
WO (1) WO2021228894A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102021204805A1 (en) * 2021-05-11 2022-11-17 Carl Zeiss Microscopy Gmbh Evaluation method for simulation models in microscopy
DE102022121543A1 (en) 2022-08-25 2024-03-07 Carl Zeiss Microscopy Gmbh Microscopy system and method for checking the quality of a machine-learned image processing model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11783603B2 (en) * 2018-03-07 2023-10-10 Verily Life Sciences Llc Virtual staining for tissue slide images

Also Published As

Publication number Publication date
WO2021228894A1 (en) 2021-11-18
DE102020206088A1 (en) 2021-11-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination