WO2019225506A1 - Biological tissue image processing system and machine learning method - Google Patents

Biological tissue image processing system and machine learning method

Info

Publication number
WO2019225506A1
WO2019225506A1 (PCT/JP2019/019741)
Authority
WO
WIPO (PCT)
Prior art keywords
image
biological tissue
estimator
learning
processing system
Prior art date
Application number
PCT/JP2019/019741
Other languages
English (en)
Japanese (ja)
Inventor
功記 小西
三雄 須賀
秀夫 西岡
Original Assignee
株式会社ニコン
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ニコン filed Critical 株式会社ニコン
Publication of WO2019225506A1 publication Critical patent/WO2019225506A1/fr

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/22Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material
    • G01N23/225Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material using electron or ion
    • G01N23/2251Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material using electron or ion using incident electron beams, e.g. scanning electron microscopy [SEM]
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/48Biological material, e.g. blood, urine; Haemocytometers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • CCHEMISTRY; METALLURGY
    • C12BIOCHEMISTRY; BEER; SPIRITS; WINE; VINEGAR; MICROBIOLOGY; ENZYMOLOGY; MUTATION OR GENETIC ENGINEERING
    • C12MAPPARATUS FOR ENZYMOLOGY OR MICROBIOLOGY; APPARATUS FOR CULTURING MICROORGANISMS FOR PRODUCING BIOMASS, FOR GROWING CELLS OR FOR OBTAINING FERMENTATION OR METABOLIC PRODUCTS, i.e. BIOREACTORS OR FERMENTERS
    • C12M1/00Apparatus for enzymology or microbiology
    • C12M1/34Measuring or testing with condition measuring or sensing means, e.g. colony counters

Definitions

  • the present disclosure relates to a biological tissue image processing system and a machine learning method.
  • 3D microscopy is known as a technique for analyzing or imaging the 3D structure of biological tissue.
  • In three-dimensional microscopy, an electron microscope is generally used.
  • As three-dimensional microscopy using a scanning electron microscope, the Focused Ion Beam SEM (FIB-SEM) method, the Serial Block-Face SEM (SBF-SEM) method, and the serial-section SEM method (see Non-Patent Document 1) have been proposed.
  • The serial-section SEM method is also called the array tomography method.
  • a plurality of sample sections (ultra-thin pieces) continuous in the depth direction are cut out from a sample of a biological tissue, and they are arranged on a substrate.
  • Each sample section on the substrate is sequentially observed with a scanning electron microscope, whereby a plurality of images are acquired. Based on the acquired images, the three-dimensional structure of a specific organ (cell, mitochondria, nucleus, etc.) contained in the biological tissue is analyzed or imaged. According to the serial-section SEM method, it is also possible to observe again a sample section that has already been observed.
  • A machine learning type estimator can be used to detect or identify a target element (target component) such as a cell, a cell membrane, or an organelle within a cell. In that case, it is desirable to train the estimator so that its estimation accuracy is increased.
  • The idea naturally arises that the observation conditions at the time of acquiring the learning images should be matched with the observation conditions at the time of acquiring the images for analysis (the observation conditions at the time of analysis).
  • However, when the estimation target of the estimator has a significant thickness in the biological tissue, the above idea does not necessarily hold, and it is thought that it may be better to give diversity to the observation conditions during learning.
  • An object of the present disclosure is to improve the estimation accuracy of a target element in a biological tissue image processing system having a machine learning type estimator that estimates a target element in a biological tissue.
  • Alternatively, an object of the present disclosure is to cause the estimator to perform learning so that the estimation accuracy of the element of interest is increased in the machine learning process of an estimator that estimates the element of interest in a biological tissue.
  • A biological tissue image processing system includes an electron microscope that generates an image by observing an analysis target sample that is a biological tissue, and a machine learning type estimator that applies, to the image, a process of estimating an element of interest included in the analysis target sample.
  • The estimator is a learned estimator that has undergone learning using a learning original image set, and the learning original image set includes a first original image generated by observing a learning target sample, which is a biological tissue, with an electron microscope under a first observation condition, and a second original image generated by observing the learning target sample with the electron microscope under a second observation condition.
  • The first observation condition is the condition set for the electron microscope when observing the analysis target sample, and the second observation condition is a condition under which observation is performed over a depth range different from that of the first observation condition.
  • A machine learning method is a method for causing an estimator to learn in a biological tissue image processing system, the system including an estimator that applies, to an image generated by observing an analysis target sample that is a biological tissue with an electron microscope, a process of estimating a target element included in the analysis target sample.
  • The method includes a step of providing the estimator with a first original image for learning, generated by observing a learning target sample that is a biological tissue with an electron microscope under a first acceleration voltage, and a first correct image for learning corresponding thereto, and a step of providing the estimator with a second original image for learning, generated by observing the learning target sample with the electron microscope under a second acceleration voltage higher than the first acceleration voltage, and a second correct image for learning corresponding thereto.
  • the above method is realized as a hardware function or a software function.
  • a program for executing the function is installed in the information processing apparatus via a network or a portable storage medium.
  • the concept of the information processing apparatus includes a personal computer, an electron microscope system, and the like.
  • a biological tissue image processing system includes an electron microscope and a machine learning type estimator.
  • the electron microscope is an apparatus that generates an image by observing a sample to be analyzed which is a biological tissue, and preferably it is a scanning electron microscope.
  • The estimator applies, to the image generated by the electron microscope, a process of estimating the element of interest included in the sample to be analyzed; the estimator is a learned estimator that has undergone learning using the learning original image set. More precisely, the object to be estimated is an image of the element of interest.
  • The learning original image set includes the first original image, generated by observing the learning target sample with the electron microscope under the first observation condition, and the second original image, generated by observing the learning target sample with the electron microscope under the second observation condition.
  • the first observation condition is a condition set for the electron microscope when observing the analysis target sample, and the condition is also set for the electron microscope when observing the learning target sample.
  • the second observation condition is a condition under which observation can be performed over a depth range different from the first observation condition, and the condition is a condition set for the electron microscope when observing the learning target sample.
  • The estimation accuracy for the element of interest can be increased by using a learned estimator that has undergone such a special learning process. For example, an image component (for example, a deep image) that occurs relatively rarely under the first observation condition is more likely to be correctly estimated as a part of the target tissue.
  • the analysis target sample and the learning target sample are generally the same (same type) biological tissue, but they are usually physically different.
  • the analysis target sample and the learning target sample are generally observed with the same electron microscope, but they may be observed with different electron microscopes.
  • the element of interest is a specific tissue component, and in embodiments, the element of interest is a cell membrane.
  • the cell itself, the cell lumen (cytoplasm) surrounded by the cell membrane, the organelle in the cell (Organelle), and the like may be considered as elements of interest.
  • the machine learning type estimator is configured by CNN, for example, but other types of machine learning type estimators may be used.
  • a plurality of images are acquired by the continuous section SEM method (array tomography method), but a plurality of images may be acquired by the FIB-SEM method, the SBF-SEM method, and other methods.
  • the above configuration can be applied to two-dimensional microscopy as well as three-dimensional microscopy.
  • the second observation condition is a condition under which observation can be performed over a greater depth range than the first observation condition.
  • the acceleration voltage, irradiation current, and the like may be changed.
  • The first observation condition is a condition under which a first acceleration voltage is set as the acceleration voltage of the electron microscope, and the second observation condition is a condition under which a second acceleration voltage higher than the first acceleration voltage is set as the acceleration voltage of the electron microscope.
  • When the acceleration voltage is increased, information from a deeper position in the observation object is more easily obtained. In other words, an object existing deeper can be more easily imaged.
  • When the estimator is trained using both the first original image and the second original image, a deep image of the tissue of interest is more easily detected even if the input image to the estimator is an image generated under the first observation condition. If the sample to be analyzed is observed under the first observation condition in the analysis process, damage to the sample to be analyzed can be prevented or reduced.
  • the biological tissue image processing system includes a correct image creation unit that generates a second correct image based on the second original image. If the correct image creation unit for the second original image is provided, image processing suitable for the second original image can be easily applied to the image when creating the correct image.
  • the correct image creation unit includes an enhancement unit that emphasizes the deep image of the element of interest in the second original image. According to this configuration, the deep part image that is a part of the target element (target element image) can be easily learned.
  • the element of interest is a cell membrane
  • the deep image of the element of interest is a deep image of the cell membrane. For example, when analyzing a three-dimensional structure of a cell, it is desired to estimate or detect even a cell membrane present inside a biological tissue section. The above configuration meets such a demand.
  • The correct image creation unit includes a segmentation unit that performs segmentation for dividing the image processed by the enhancement unit into a plurality of small regions, and an annotation unit that performs labeling on the small regions corresponding to the cell membrane in the segmented image, thereby generating the second correct image.
  • If segmentation (division into small regions) is performed prior to labeling, the labeling work becomes easier or the labeling becomes more accurate.
  • the correct image creation unit includes a machine learning type second estimator. This configuration uses another estimator when creating a correct image for the first estimator. The element of interest in the second original image is estimated by the second estimator, and a second correct image is created based on the estimation result.
  • The machine learning method includes a step of providing the estimator with a first original image for learning, generated by observing a learning target sample that is a biological tissue with an electron microscope under a first acceleration voltage, and a first correct image for learning corresponding thereto, and a step of providing the estimator with a second original image for learning, generated by observing the learning target sample with the electron microscope under a second acceleration voltage higher than the first acceleration voltage, and a second correct image for learning corresponding thereto.
  • The first image pair, including the first original image and the first correct image, is input to the estimator as first teacher data, and the second image pair, including the second original image and the second correct image, is input to the estimator as second teacher data.
  • the first correct image is created manually from the first original image, or is created by correcting the estimation result of the estimator for the first original image.
  • the second correct image is created from the second original image by the correct image creating unit. Both the first correct image and the second correct image may be generated by the correct image creating unit.
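A rough illustration of how such teacher data might be organized in practice is sketched below in Python: low-voltage and high-voltage image pairs are pooled into a single training set. The function name and the use of NumPy arrays are illustrative assumptions, not part of the disclosed system.

```python
import random
import numpy as np

def build_teacher_data(low_originals, low_corrects,
                       high_originals, high_corrects, seed=0):
    """Pool first (low-voltage) and second (high-voltage) image pairs.

    Each argument is a list of 2-D arrays; every training example is an
    (original image, correct image) pair.
    """
    pairs = (list(zip(low_originals, low_corrects))
             + list(zip(high_originals, high_corrects)))
    random.Random(seed).shuffle(pairs)  # mix both observation conditions
    return pairs

# Hypothetical example with blank 512x512 arrays standing in for SEM images.
low_o  = [np.zeros((512, 512), np.float32) for _ in range(4)]
low_c  = [np.zeros((512, 512), np.uint8) for _ in range(4)]
high_o = [np.zeros((512, 512), np.float32) for _ in range(4)]
high_c = [np.zeros((512, 512), np.uint8) for _ in range(4)]
teacher_data = build_teacher_data(low_o, low_c, high_o, high_c)
```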
  • FIG. 1 shows a biological tissue image processing system according to an embodiment.
  • the illustrated biological tissue image processing system 10 is a system for analyzing and imaging a three-dimensional structure of a biological tissue. Using this biological tissue image processing system, for example, an image that three-dimensionally represents nerve cells in the brain of a human body or an animal is generated. Any tissue, organ, etc. in the organism can be the subject of analysis.
  • The biological tissue image processing system 10 includes a sample pretreatment device 12, a continuous section creation device 14, a scanning electron microscope (SEM) 16, and a biological tissue image processing device 18.
  • the sample pretreatment device 12 is a device that performs pretreatment on the tissue 20 extracted from a living body, or corresponds to various instruments for the pretreatment.
  • Examples of the pretreatment include fixing treatment, staining treatment, conductive treatment, resin embedding treatment, and shaping treatment. All or some of them are implemented as necessary.
  • In the staining treatment, osmium tetroxide, uranyl acetate, lead citrate, or the like may be used. A staining process may be performed on each sample section described below. Some or all of the steps included in the pretreatment may be performed manually.
  • the continuous section preparation device 14 is provided outside the SEM 16 or inside the SEM 16.
  • a plurality of sample sections 24 arranged in the depth direction (Z direction) are cut out from the cubic sample after the pretreatment by the continuous section creating apparatus 14.
  • an apparatus such as an ultramicrotome may be used.
  • the work may be performed manually.
  • a plurality of sample sections 24 constitute a sample section group 22.
  • the plurality of sample sections 24 cut out are arranged on the substrate 28 in a predetermined arrangement.
  • The substrate 28 is, for example, a glass substrate or a silicon substrate.
  • The sample section array 22A, composed of two sample section rows, is configured on the substrate 28, but this is merely an example.
  • a sample unit 26 is configured by the substrate 28 and the sample section array 22A.
  • The size of each sample section 24 is, for example, on the order of nm or μm.
  • a sample section 24 having a larger size may be produced.
  • the thickness (size in the Z direction) of each sample section 24 is, for example, in the range of several nm to several hundred nm, and in the embodiment, the thickness is in the range of, for example, 30 to 70 nm. Any numerical value given in this specification is an example.
  • the sample observed in the learning process of the estimator is a learning sample, that is, a learning target sample.
  • a plurality of tissue sections are prepared from the learning target sample.
  • the sample observed in the analysis process is the sample to be analyzed.
  • a plurality of tissue sections are prepared from the sample to be analyzed.
  • both the learning target sample and the analysis target sample are expressed as a tissue 20.
  • Each sample slice cut out from the learning target sample and each sample slice cut out from the analysis target sample are both expressed as a sample slice 24.
  • the learning target sample and the analysis target sample are usually separate, but they may be the same. For example, the analysis process and the learning process may be executed in parallel.
  • the SEM 16 includes an electron gun, a deflector (scanner), an objective lens, a sample chamber, a detector 34, a control unit 204, and the like.
  • a stage for holding the sample unit 26 and a moving mechanism for moving the stage are provided in the sample chamber.
  • the control unit 204 controls the operation of the moving mechanism, that is, the movement of the stage.
  • the electron beam 30 is irradiated to a specific sample section 24 selected from the sample section array 22A.
  • While the irradiation position is scanned (for example, by raster scanning), the reflected electrons 32 emitted from each irradiation position are detected by the detector 34. Thereby, an SEM image is formed. This is performed for each sample section 24.
  • the control unit 204 has a function of setting an acceleration voltage for forming an electron beam. In general, when the acceleration voltage is increased, information from a deeper position in the sample section 24 is easily obtained.
  • The X direction observation range and the Y direction observation range (the observation range being the two-dimensional scanning range of the electron beam) coincide with each other.
  • Instead of the reflected electrons, secondary electrons or the like may be detected.
  • In the learning process, the first acceleration voltage (low voltage) as the first observation condition and the second acceleration voltage (high voltage) as the second observation condition are set alternately in units of sample sections 24.
  • That is, each sample slice 24 cut out from the learning target sample is observed by scanning with the electron beam 30L formed under the first acceleration voltage (low voltage observation), whereby a low voltage image (first original image) 36L is generated.
  • the individual specimen sections 24 are observed by scanning with the electron beam 30H formed under the second acceleration voltage, thereby generating a high voltage image (second original image) 36H (high voltage observation).
  • the low voltage image stack 35L is configured by the plurality of low voltage images 36L
  • the high voltage image stack 35H is configured by the plurality of high voltage images 36H.
  • In the analysis process, the first acceleration voltage is set as the observation condition under the control of the control unit 204. That is, each sample section 24 cut out from the analysis target sample is observed by scanning with the electron beam 30L formed under the first acceleration voltage (low voltage observation), whereby a low voltage image (analysis original image) 38L is generated. A plurality of low voltage images 38L constitute a low voltage image stack 37L.
  • The first acceleration voltage is, for example, 1 kV or 2 kV, and the second acceleration voltage is, for example, 3 kV, 5 kV, or 7 kV. It is desirable to determine the first acceleration voltage and the second acceleration voltage in consideration of the sample, the section thickness, the target tissue, the observation purpose, and other circumstances.
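A minimal sketch of the alternating acquisition described above is given below, assuming a hypothetical acquire_sem_image(section, kv) callable supplied by the instrument control layer (no real SEM control API is implied): each learning sample section is observed once under the first (low) and once under the second (high) acceleration voltage.

```python
# Example voltages taken from the text (1-2 kV vs. 3/5/7 kV); adjust as needed.
FIRST_KV, SECOND_KV = 2.0, 5.0

def acquire_learning_stacks(sections, acquire_sem_image):
    """Observe every section under both acceleration voltages.

    acquire_sem_image(section, kv) is a hypothetical callable that returns
    one 2-D image for the given section and acceleration voltage.
    """
    low_stack, high_stack = [], []
    for section in sections:
        low_stack.append(acquire_sem_image(section, kv=FIRST_KV))    # low-voltage image (36L)
        high_stack.append(acquire_sem_image(section, kv=SECOND_KV))  # high-voltage image (36H)
    return low_stack, high_stack
```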
  • the image stacks 35L, 35H, and 37L include a plurality of images 36L, 36H, and 38L corresponding to a plurality of depths in the Z direction (in other words, arranged in the Z direction in the data storage space).
  • Each of the images 36L, 36H, and 38L is an original image or an input image when viewed from the biological tissue image processing apparatus 18 side.
  • the images 36L, 36H, and 38L are electronic data, and the images 36L, 36H, and 38L are transmitted from the SEM 16 to the biological tissue image processing apparatus 18 via a network or a portable storage medium.
  • the biological tissue image processing apparatus 18 is configured by a personal computer in the illustrated configuration example.
  • the biological tissue image processing apparatus 18 may be incorporated in the SEM 16, or the biological tissue image processing apparatus 18 may be incorporated in a system computer that controls the SEM 16 or the like.
  • the SEM 16 may be controlled by the biological tissue image processing apparatus 18.
  • the biological tissue image processing apparatus 18 has a main body 40, a display 46 and an input device 48.
  • the plurality of functions of the main body 40 will be described in detail later with reference to FIGS.
  • In FIG. 1, two typical functions (a first image processing function and a second image processing function) exhibited by the main body 40 are represented as blocks.
  • the main body 40 includes a first image processing unit 200 including a machine learning type film estimator, and a second image processing unit 202 as a correct image creation unit.
  • the display 46 is configured by an LCD, an organic EL display device, or the like.
  • the input device 48 includes a keyboard operated by a user, a pointing device, and the like.
  • FIG. 2 shows a first configuration example of the main body 40.
  • the main body 40 has the first image processing unit 200, the second image processing unit 202, and the volume data processing unit 56 as described above.
  • the entity of each configuration shown in FIG. 2 is software, that is, a program executed by a general-purpose processor such as a CPU or GPU, except for a portion corresponding to a user's work or action.
  • part or all of the configuration may be configured by a dedicated processor or other hardware. All or part of the functions of the biological tissue image processing apparatus may be executed by one or a plurality of information processing devices existing on the network.
  • the configuration that functions in the analysis process will be described in detail first, and then the configuration that functions in the learning process will be described in detail.
  • In describing the operation in the analysis process, the operation in the learning process will also be referred to.
  • the first image processing unit 200 includes a machine learning type film estimator 42, a binarizer (image generator) 50, a correction unit 52, a labeling processing unit 54, and the like.
  • the membrane estimator 42 functions as membrane estimation means, and applies a membrane estimation process to the input image 206, thereby outputting a membrane likelihood map 60.
  • the membrane estimator 42 is configured by a CNN (Convolutional Neural Network) which is a machine learning type membrane estimator.
  • The learning process is executed in advance, prior to the actual operation of the CNN (that is, the analysis process), so that the membrane is correctly estimated by the CNN.
  • the learning process includes a primary learning process (initial learning process) and a secondary learning process.
  • In the primary learning process, a plurality of image pairs are given to the membrane estimator 42 as teacher data; the plurality of image pairs include a plurality of first image pairs and a plurality of second image pairs.
  • Each first image pair is composed of a first original image (low voltage image) 36L and a corresponding correct image 223.
  • Each second image pair includes a second original image (high voltage image) 36H and a correct image 224 corresponding thereto.
  • the correct image 223 is created, for example, by manual work on the first original image 36L.
  • The correct image 223 may also be created from the first original image 36L by an unsupervised machine learning device, a simple classifier (for example, an SVM (Support Vector Machine)), or the like.
  • the correct image 224 is created by the second image processing unit 202 based on the second original image, as will be described in detail later.
  • the correct image 223 may be created from the first original image 36L by the second image processing unit 202.
  • In the secondary learning process as well, a plurality of image pairs are given as teacher data to the membrane estimator 42, as in the primary learning process.
  • the teacher data includes a plurality of image pairs used in the primary learning process and a plurality of image pairs added in the secondary learning process.
  • the plurality of added image pairs include a plurality of first image pairs and a plurality of second image pairs.
  • Each first image pair includes a first original image 36L and a correct image 64A corresponding to the first original image 36L.
  • Each second image pair includes a second original image 36H and a correct image 224 corresponding to the second original image 36H.
  • the correct image 64A is created by the configuration from the film estimator 42 to the processing tool unit 44. Specifically, when the first original image 36L is input to the membrane estimator 42, the membrane likelihood map 60 is output from the membrane estimator 42 as an estimation result image.
  • A correct image 64A is created through generation of a mask image 62 based on the film likelihood map 60 and user (expert) correction of the mask image 62 using the processing tool unit 44. These processes will be described in detail later. It is also conceivable to use the mask image 62 as the correct image 62A.
  • the CNN parameter group in the film estimator 42 is further improved by the secondary learning process. That is, the machine learning result is further accumulated in the film estimator 42.
  • The secondary learning process is ended when, for example, it is determined that the result of the estimation process for the input images 206 (the first original images 36L and the second original images 36H) is sufficiently similar to the corresponding correct images 223, 224, and 64A. Thereafter, re-learning of the film estimator 42 is executed by the same method as described above, as necessary.
  • the second image processing unit 202 functions as correct image creation means in both the primary learning process and the secondary learning process.
  • an original image (low voltage image) 38L as the input image 206 is input to the membrane estimator 42.
  • the database 57 stores a plurality of original images 36L, 36H, and 38L and a plurality of correct images 223, 224, and 64A.
  • a plurality of film likelihood maps 60 may be stored there.
  • the film estimator 42 and the database 57 may be integrated.
  • As the membrane estimator 42, a U-Net may be used, or a support vector machine, a random forest, or the like may be used.
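For illustration only, the PyTorch sketch below shows one possible shape of such a machine-learning-type membrane estimator: a small fully convolutional network that maps a grayscale SEM image to a per-pixel membrane-likelihood map in [0, 1]. The actual network of the embodiment (a CNN or U-Net) is not specified at this level of detail, so every layer choice here is an assumption.

```python
import torch
import torch.nn as nn

class TinyMembraneEstimator(nn.Module):
    """Toy fully convolutional membrane estimator (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # 1x1 conv -> one likelihood channel
            nn.Sigmoid(),                     # squash to [0, 1] membrane likelihood
        )

    def forward(self, x):                     # x: (batch, 1, H, W) grayscale image
        return self.net(x)

estimator = TinyMembraneEstimator()
likelihood_map = estimator(torch.rand(1, 1, 256, 256))  # dummy input image
```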
  • the binarizer 50 functions as an image generator. Specifically, the binarizer 50 is a module that generates a temporary membrane image 62 by binarization processing on the membrane likelihood map 60, as will be exemplified later with reference to FIG.
  • The film likelihood map 60 includes a plurality of film likelihoods arranged two-dimensionally. Each film likelihood is a numerical value indicating the probability of being a film. The film likelihood is, for example, between 0 and 1.
  • the film likelihood map 60 can also be understood as a film likelihood image.
  • a threshold is set in the binarizer 50, and the binarizer 50 converts a film likelihood that is equal to or greater than the threshold to 1 and converts a film likelihood that is less than the threshold to 0.
  • the image generated as a result is a temporary membrane image 62.
  • the film image before correction is referred to as a temporary film image 62 in order to distinguish the film image before correction from the film image after correction.
  • the image generation unit 61 is configured by both the film estimator 42 and the binarizer 50.
  • the entire image generation unit 61 may be configured by CNN or the like. Even in such a case, the stepwise generation of the membrane likelihood map and the temporary membrane image can be considered. It is also possible to use the temporary membrane image 62 as the correct image 62A. Prior to binarization processing, processing such as noise removal and edge enhancement may be applied to the film likelihood map 60.
  • the machining tool unit 44 functions as a work support unit or work support means.
  • the processing tool unit 44 is provided across the first image processing unit 200 and the second image processing unit 202 in the illustrated configuration example.
  • the processing tool unit 44 has a display processing function and an image processing function from the viewpoint of information processing. From the viewpoint of work, it has a correction function and a labeling function. In FIG. 2, these functions are expressed as a correction unit 52, a labeling processing unit 54, and a correction unit 214.
  • The correction unit 52 displays, as a work target image, a mask image that is the work target for the user, via the work window illustrated later in FIG. 5, and accepts the user's correction instructions on the work target image.
  • the correction contents are reflected in the mask image that is the work target image.
  • When the work target image includes a discontinuous portion of the film, a film pixel group is added to the discontinuous portion.
  • the correction unit 52 is a module that supports the user's work or operation and manages each mask image.
  • The input original images (input images) 206 and the generated membrane likelihood maps 60 are also sequentially input to the correction unit 52 (or the processing tool unit 44).
  • the original image 206 or the membrane likelihood map 60 corresponding to the work target image can be displayed together with or instead of the temporary membrane image as the work target image.
  • the labeling processing unit 54 is a module for performing labeling (painting and labeling) on individual regions (cell lumens) included in the corrected membrane image (or the uncorrected membrane image). Labeling includes manual labeling by the user and automatic labeling. At the stage where the correction work and the labeling work are completed, the three-dimensional labeling data 66 in which the cell lumen is distinguished from the others is configured. This is sent to the volume data processing unit 56. The correction unit 214 will be described later.
  • the volume data processing unit 56 includes an analysis unit 56A and a rendering unit 56B.
  • the volume data processing unit 56 receives an original image stack 37L composed of a plurality of original images.
  • the original image stack 37L constitutes volume data.
  • the three-dimensional labeling data 66 is also input to the volume data processing unit 56. It is also a kind of volume data.
  • the analysis unit 56A analyzes a target organ (for example, a nerve cell) based on the three-dimensional labeling data 66, for example. For example, the form, volume, length, etc. may be analyzed.
  • the original image stack 37L is analyzed with reference to the three-dimensional labeling data.
  • the rendering unit 56B forms a three-dimensional image (stereoscopic expression image) based on the three-dimensional labeling data 66. For example, an imaging portion may be extracted from the original image stack 37L based on the three-dimensional labeling data 66, and a rendering process may be applied to the extracted image portion.
  • the second image processing unit 202 is a module that generates the second correct image 224 based on the second original image (high voltage image) 36H as the input image 208.
  • the high voltage image as the second original image 36H includes more deep images of the cell membrane than the low voltage image as the first original image 36L. This is because when the acceleration voltage is increased, information from a deeper position inside the sample slice can be obtained (this will be described in detail later with reference to FIG. 8).
  • A deep image of a cell membrane typically appears as a haze (or haze-like feature) in the vicinity of the shallow image (surface layer image) of the cell membrane. Even in a low-voltage image, a deep image appears as a haze, but its amount is relatively small, and a low-voltage image tends to contain blurred features for which it is not clear whether they are deep images.
  • The input of the membrane estimator may have to be a low voltage image because of damage to the sample section, imaging time, and other reasons. Assuming that a low-voltage image is input, in order to facilitate detection of a deep image of the cell membrane as a part of the cell membrane, a high voltage image is given to the membrane estimator together with the low-voltage image as its teacher data in the learning stage of the membrane estimator.
  • the second image processing unit 202 includes a haze enhancement unit 210, a segmentation unit 212, and a correction unit (annotation unit) 214 in the illustrated configuration example.
  • The haze enhancement unit 210 applies processing for enhancing the deep image of the cell membrane included in the input image 208. Specifically, contrast enhancement is performed as the haze enhancement.
  • the segmentation unit 212 applies segmentation to the image 218 after contrast enhancement. Segmentation is a process of dividing image content into a plurality of small areas. This segmentation is performed to make it easier to do the work in the next annotation.
  • The correction unit 214 functions as correction means or annotation means. In the sense of a tool for user work, the correction unit 214 exhibits the same function as the correction unit 52 described above; therefore, in FIG. 2, the correction unit 214 is positioned as a part of the processing tool unit 44. Specifically, the correction unit 214 displays the segmented image 220, accepts the user's membrane designation and membrane correction for the image 220, and, as a result, creates the second correct image 224 as a membrane image showing only the cell membrane. A plurality of small regions are displayed in the image 220, and a membrane label is given by the user to the small regions corresponding to the cell membrane. This work corresponds to annotation and painting. In order to improve workability, a work window similar to the one described above may be displayed. In the learning process (primary learning process and secondary learning process) of the film estimator 42, the second correct image 224 created as described above is given to the film estimator 42.
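A rough sketch of the two automated steps that precede the annotation (the haze enhancement and the division into small regions) is shown below, using standard scikit-image operations as stand-ins. The patent does not name specific algorithms, so adaptive histogram equalization and SLIC superpixels are assumptions made here for illustration.

```python
import numpy as np
from skimage import exposure, segmentation

def prepare_for_annotation(high_voltage_image, n_regions=2000):
    """Enhance the haze-like deep membrane image, then split into small regions."""
    # "Haze" (deep-image) enhancement via adaptive contrast enhancement.
    enhanced = exposure.equalize_adapthist(high_voltage_image)
    # Division into many small regions; each region later receives (or not)
    # a membrane label from the user during annotation.
    # channel_axis=None marks a grayscale image in recent scikit-image versions.
    labels = segmentation.slic(enhanced, n_segments=n_regions,
                               channel_axis=None, start_label=1)
    return enhanced, labels

image = np.random.rand(512, 512).astype(np.float32)  # stand-in for image 36H
enhanced, small_regions = prepare_for_annotation(image)
```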
  • The combined use of the film estimator 42 and the processing tool unit 44 makes it possible to improve the quality of the image group to be analyzed or rendered, and to generate the image group easily and quickly.
  • The membrane estimator 42 learns using high voltage images in addition to low voltage images, so the film estimation accuracy of the film estimator 42 can be increased. In practice, a low voltage image and a high voltage image representing the same part of the same learning sample section are generated, and both of them are given to the film estimator 42 as a part of the teacher data.
  • the possibility that the deep image of the cell membrane is regarded as a part of the cell membrane can be increased, so that the quality of the second correct image can be improved.
  • a plurality of first original images and a plurality of second original images are sequentially input to the membrane estimator 42 in a predetermined order. Those images may be batch processed.
  • FIG. 3 schematically shows a configuration example of the membrane estimator 42.
  • the membrane estimator 42 has a number of layers, including an input layer 80, a convolution layer 82, a pooling layer 84, an output layer 86, and the like. They act according to the CNN parameter group 88.
  • the CNN parameter group 88 includes a number of weighting factors, a number of bias values, and the like.
  • the CNN parameter group 88 is initially configured by an initial value group 94. For example, the initial value group 94 is generated using random numbers or the like.
  • In the learning process, an evaluation unit 90 and an update unit 92 function.
  • The evaluation unit 90 calculates an evaluation value based on the plurality of image pairs (the original images 36L and 36H (see FIG. 2) and the corresponding correct images 223, 224, and 64A) constituting the teacher data. Specifically, the evaluation value is calculated by sequentially giving the result 60A of the estimation processing for the original images 36L and 36H, and the correct images 223, 224, and 64A corresponding to the original images 36L and 36H, to the error function.
  • the updating unit 92 updates the CNN parameter group 88 so that the evaluation value changes in a favorable direction. By repeating this, the CNN parameter group 88 is optimized as a whole. Actually, the end of the learning process is determined when the evaluation value reaches a certain value.
  • the configuration shown in FIG. 3 is merely an example, and it is possible to use estimators having various structures as the film estimator 42.
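Under the assumption of a PyTorch implementation, the learning procedure sketched with FIG. 3 can be pictured as a standard supervised training loop: the evaluation unit corresponds to an error (loss) function over (original image, correct image) pairs, and the update unit corresponds to a gradient step on the parameter group. The loop below reuses the TinyMembraneEstimator sketched earlier; all names and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

def train_estimator(estimator, teacher_pairs, epochs=10, lr=1e-3):
    """teacher_pairs: iterable of (original, correct) float tensors of shape
    (1, 1, H, W), with correct images taking values in {0, 1}."""
    optimizer = torch.optim.Adam(estimator.parameters(), lr=lr)  # update unit
    loss_fn = nn.BCELoss()                                       # error function
    for _ in range(epochs):
        for original, correct in teacher_pairs:
            optimizer.zero_grad()
            estimate = estimator(original)     # membrane likelihood map
            loss = loss_fn(estimate, correct)  # evaluation value
            loss.backward()
            optimizer.step()                   # update the CNN parameter group
    return estimator
```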
  • FIG. 4 shows the operation of the binarizer.
  • a film likelihood map 60 is output from the film estimator.
  • the film likelihood map 60 is composed of a plurality of film likelihoods 60a corresponding to a plurality of pixels, and each film likelihood 60a is a numerical value indicating the probability of being a film.
  • the binarizer binarizes the membrane likelihood map 60, thereby generating a temporary membrane image 62 as a binarized image.
  • each film likelihood 60a is compared with a threshold value.
  • a membrane likelihood 60a that is equal to or greater than the threshold is converted into a pixel 62a having a value of 1, and a membrane likelihood 60a that is less than the threshold is converted into a pixel 62b having a value of 0.
  • the pixel 62a is handled as a pixel (film pixel) constituting the film.
  • the threshold value can be variably set by the user. The setting may be automated. For example, the threshold value may be changed while observing the temporary membrane image 62.
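The binarization of FIG. 4 amounts to a simple threshold operation; a minimal NumPy sketch (illustrative, not the actual implementation) is:

```python
import numpy as np

def binarize_membrane_map(likelihood_map, threshold=0.5):
    """likelihood_map: 2-D array of membrane likelihoods in [0, 1].

    Likelihoods at or above the (user-adjustable) threshold become membrane
    pixels (1); the rest become non-membrane pixels (0).
    """
    return (likelihood_map >= threshold).astype(np.uint8)

temporary_membrane_image = binarize_membrane_map(np.random.rand(256, 256), 0.5)
```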
  • FIG. 5 illustrates a work window displayed by the machining tool unit.
  • the work window 100 includes a display image 102.
  • The display image 102 is a composite image composed of the mask image (work target image) corresponding to the depth selected by the user and the original image.
  • An original image as a gray scale image constitutes a background image, and a temporary membrane image (work target image) as a color image (for example, a blue image) is superimposed and displayed on the background image.
  • the illustrated display image 102 shows brain tissue, in which cross sections of a plurality of cells appear. At the same time, a cross section of an organelle (such as mitochondria) in the cell also appears.
  • When the corresponding tab is selected, the display image 102 described above is displayed.
  • When the tab 105 is selected, only the original image as a gray scale image is displayed.
  • When the tab 106 is selected, only the film likelihood map (film likelihood image) as a gray scale image is displayed.
  • a tab for displaying only a mask image as a work target image may be added.
  • a membrane likelihood map may be used as a background image, and a temporary membrane image may be superimposed on the background image.
  • the depth selection tool 108 is a display element (operation element) for selecting a specific depth (display depth) in the Z direction. It consists of a Z-axis symbol 108b representing the Z-axis and a marker 108a as a slider that slides along the Z-axis symbol 108b. It is possible to select the desired depth by moving the marker 108a. According to such a depth selection tool 108, it is possible to obtain an advantage that it is easy to intuitively recognize the depth to be selected and the depth change amount.
  • the left end point of the Z-axis symbol 108b corresponds to zero depth, and the right end point of the Z-axis symbol 108b corresponds to the maximum depth. Depth selection tools having other forms may be employed.
  • the depth input field 114 is a field for directly specifying the depth as a numerical value. The depth currently selected according to the position of the marker 108a may be displayed as a numerical value in the depth input field 114.
  • the transparency adjustment tool 110 is a tool for adjusting the transparency (display weight) of the color temporary membrane image (work target image) displayed in a synthesized state in a state where the display image 102 is displayed. For example, when the marker 110a is moved to the left side, the display weight of the color mask image is reduced, the transparency is increased, and the original image is dominantly displayed. On the contrary, when the marker 110a is moved to the right side, the display weight of the color mask image increases, the transparency decreases, and the color mask image is displayed more clearly.
  • The superimposition display tool 112 is operated when an image (a composite image, an original image, or a film likelihood map) adjacent on the shallow side in the depth direction, or an image adjacent on the deep side in the depth direction, is to be displayed superimposed on the currently displayed image (a composite image, an original image, or a film likelihood map).
  • the marker 112a is moved to the left side, the display weight for the image adjacent to the shallow side increases.
  • the marker 112a is moved to the right side, the display weight for the image adjacent to the deep side increases.
  • Three or more images may be superimposed and displayed. Of course, if too many images are superimposed, the image content becomes too complex, so it is desirable to overlay a small number of images. Such superposition display makes it easy to obtain depth information.
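The display compositing behind the transparency and superimposition tools can be pictured as weighted blending; the snippet below is an illustrative approximation (the overlay color, the weight handling, and the function name are assumptions), not the work window's actual rendering code.

```python
import numpy as np

def composite_view(original, membrane_mask, neighbor=None,
                   mask_weight=0.5, neighbor_weight=0.0):
    """original, neighbor: 2-D grayscale images in [0, 1]; membrane_mask: 2-D binary image."""
    rgb = np.stack([original] * 3, axis=-1)        # grayscale background image
    blue = np.zeros_like(rgb)
    blue[..., 2] = 1.0                             # color of the overlaid membrane image
    mask = membrane_mask[..., None].astype(float)
    # Transparency tool: lower mask_weight -> more transparent overlay.
    rgb = rgb * (1.0 - mask_weight * mask) + blue * (mask_weight * mask)
    if neighbor is not None:
        # Superimposition tool: blend in the adjacent (shallower or deeper) slice.
        rgb = (1.0 - neighbor_weight) * rgb + neighbor_weight * np.stack([neighbor] * 3, axis=-1)
    return np.clip(rgb, 0.0, 1.0)
```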
  • the button row 115 includes a plurality of virtual buttons 116, 118, 120, 121, 122, 126.
  • the button 116 is a display element operated when performing image zoom (enlargement or reduction).
  • the button 118 is a display element that is operated when the pen tool is used. When the button 118 is turned on, the shape of the cursor is changed to a pen shape, and a film pixel can be added using the cursor shape. It is also possible to change the pen size.
  • the button 120 is a display element that is operated when using an eraser. When the button 120 is turned on, the shape of the cursor is changed to an eraser shape, and the film pixel can be deleted using the cursor shape. It is also possible to change the size of the eraser.
  • the button 121 is a display element that is operated when painting. If any area is designated after the button 121 is turned on, the area is filled. It is also possible to select an arbitrary function from a plurality of functions prepared for painting (or labeling) by operating the button 121.
  • When the corresponding button is operated, the color palette 124 is displayed. For example, a color selected from the color palette is given to the painted area. Thereby, the area is colored with the selected color.
  • Each color is associated with an object number. If the same color, that is, the same object number is given to a plurality of regions across the layers, a three-dimensional lumen region in a specific cell is defined by these regions.
  • the button 126 is a button for black and white reversal. When it is operated, a portion displayed in black in the displayed image is displayed in white, and conversely, a portion displayed in white is displayed in black.
  • FIG. 6 shows a temporary membrane image 180 before correction and a temporary membrane image 182 after correction.
  • the temporary film image 180 before correction includes film break portions 184 and 188.
  • In the corrected temporary membrane image 182, as indicated by reference numerals 186 and 190, the film interruption portions have been repaired.
  • the temporary membrane image 180 before correction includes non-membrane portions 192 and 196.
  • In the corrected temporary membrane image 182, those non-membrane portions have disappeared. According to the corrected mask image 182, it is possible to perform labeling accurately, and consequently to improve the quality of the three-dimensional labeling data.
  • FIG. 7 illustrates a three-dimensional splicing process included in the labeling process.
  • a plurality of temporary membrane images D1 to D4 arranged in the Z direction are shown.
  • a temporary membrane image D1 corresponding to the depth Zi is a work target image. This is a reference image in the processing described below.
  • the representative point is specified for the region R1 included in the mask image D1 that is the reference image.
  • a center point, a center of gravity point, etc. are specified as representative points.
  • A perpendicular C passing through the representative point is defined. From the reference image, for example, N temporary membrane images on the deep side are referred to, each region through which the perpendicular C passes is specified in these images, and the same label is given to those regions.
  • the above processing may be applied to N temporary film images on the shallow side from the reference image. The result of automatic labeling is usually visually confirmed by the user.
  • the region R1 (outer edge) included in the mask image D1 which is the reference image is projected onto the mask image D2, and the projection region R1a is defined.
  • a region R2 that most overlaps the projection region R1a is specified on the mask image D2.
  • the region R2 is projected onto the mask image D3, and the projection region R2a is defined.
  • a region R3 that most overlaps the projection region R2a is specified on the mask image D3.
  • a projection region R3a is defined on the mask image D4, and the region R4 is specified based on the projection region R3a.
  • the same label is assigned to the region R1 and all of the regions R2, R3, and R4 identified from the region R1.
  • the projection source area is sequentially updated when the layer is changed, but it may be fixed.
  • the region R1 may be projected onto the mask images D2, D3, D4.
  • the connection destination is searched from the reference image to one side in the Z direction, but the connection destination may be searched from the reference image to both sides in the Z direction.
  • the search range may be selected by the user. In any case, the result of automatic labeling is usually visually confirmed by the user. In that case, the work window shown in FIG. 5 is used.
  • the three-dimensional connection process described above is an example, and a three-dimensional connection process other than the above may be employed.
  • one or a plurality of feature amounts for each region can be used.
  • As the feature amount of the area an area, a form, a perimeter, a barycentric point, a luminance histogram, a texture, or the like may be used.
  • a superposition area, a distance between representative points, or the like may be used as a feature amount between regions.
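A simplified sketch of the overlap-based three-dimensional connection described above is given below, assuming that the regions of each slice are already encoded as an integer-labelled 2-D array (0 for membrane/background, positive ids for cell-lumen regions). The function and variable names are illustrative, and the largest-overlap criterion could equally be replaced by one of the feature amounts mentioned above.

```python
import numpy as np

def connect_regions(region_maps, start_slice, start_region, object_label, labels_3d):
    """region_maps: list of 2-D int arrays (region ids per slice).
    labels_3d: list of 2-D int arrays that receive the shared object label."""
    current = (region_maps[start_slice] == start_region)   # region in the reference slice
    labels_3d[start_slice][current] = object_label
    for z in range(start_slice + 1, len(region_maps)):
        next_map = region_maps[z]
        candidates = next_map[current]          # region ids lying under the projection
        candidates = candidates[candidates > 0]
        if candidates.size == 0:
            break                               # no connected region in this slice
        best = int(np.bincount(candidates).argmax())  # region with the largest overlap
        current = (next_map == best)            # the projection source is updated
        labels_3d[z][current] = object_label
    return labels_3d
```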
  • FIG. 8 schematically and with exaggeration shows the change in the observation depth range and the change in the SEM image caused by a change in the acceleration voltage.
  • Reference numerals 230, 232, and 234 indicate three different observation conditions.
  • the observation condition 230 is a condition for setting the acceleration voltage to a low voltage.
  • the observation condition 232 is a condition in which the acceleration voltage is a medium voltage.
  • The observation condition 234 is a condition in which the acceleration voltage is a high voltage.
  • the upper part (A) of FIG. 8 shows the relationship between the cross section of the sample piece 236 and the electron beams 244, 246, 248. Each cross section includes a film portion 238 extending parallel to the Z direction and a film portion 240 extending in a direction oblique to the Z direction.
  • An SEM image under each observation condition is shown in the lower part (B) of FIG. 8.
  • Under the observation condition 230, the object observed by the electron beam 244 is exclusively the surface layer of the sample section 236. Therefore, on the SEM image, images 238a and 240a representing the surface layers of the two film portions 238 and 240 appear.
  • When the observation condition 232 is selected, the object observed by the electron beam 246 extends to the intermediate layer of the sample slice 236. Therefore, on the SEM image, an image (intermediate integrated image) 238b representing the intermediate layer of the film portion 238 appears, and a deep image 240c appears in the vicinity of the image 240b for the film portion 240.
  • Under the observation condition 234, the object observed by the electron beam 248 extends to the entire sample section 236 in the illustrated example. Therefore, on the SEM image, an image (entire integrated image) 238c representing the entire film portion 238 appears, and a wide deep image 240e appears in the vicinity of the image 240d for the film portion 240.
  • In order to avoid or reduce damage to the sample, it is required to observe the sample at a low acceleration voltage. On the other hand, when the membrane is estimated by the membrane estimator, it is required that the deep image of the cell membrane also be correctly estimated as a part of the cell membrane. For this, it is necessary to teach the membrane estimator firmly that the deep image of the cell membrane is a part of the cell membrane; that is, in the learning stage of the membrane estimator, it is necessary to give the membrane estimator images that include many deep images. From such a viewpoint, in the embodiment, a plurality of high voltage images are provided to the membrane estimator together with the plurality of low voltage images in the learning process of the membrane estimator. Their ratio may be 1:1, but the ratio can vary depending on the situation.
  • Contrast enhancement 252 as haze enhancement is applied to the second original image (high voltage image) 250.
  • segmentation 256 is performed on the image 254 after contrast enhancement.
  • many small regions are set in the image 258 after segmentation.
  • a membrane image 262 is created by painting a plurality of small regions to which a membrane label is attached.
  • the film image 262 may be corrected after painting.
  • the image may be corrected in an earlier stage.
  • the film image 262 created in this way is used as the second correct image.
  • In the membrane image 262 shown in FIG. 10, the white portions are film portions.
  • both the low voltage image and the high voltage image are given to the film estimator in the learning stage of the film estimator.
  • the following comparative example gives only a low voltage image to the film estimator in the learning stage of the film estimator.
  • the input image 270 is shown on the left side of FIG. 11, and the correct image 272 corresponding to the input image 270 is shown on the right side of FIG.
  • the black part is the film part.
  • a film likelihood map 274 according to the comparative example is shown on the left side of FIG. 12.
  • A membrane likelihood map 276 according to the embodiment is shown on the right side of FIG. 12. In these film likelihood maps 274 and 276, the magnitude of the film likelihood is expressed by the darkness of black.
  • Comparing the two, the film portions A1, B1, C1, and D1 that are expressed faintly on the film likelihood map 274 have changed to the film portions A2, B2, C2, and D2 that are expressed darkly on the film likelihood map 276.
  • the film portions A1 and B1 (and the film portions A2 and B2) are considered to be portions including a deep image.
  • The Rand error for the comparative example was 0.012, while the Rand error for the embodiment was 0.009.
  • The Rand error is defined as 1 - (Rand index), where the Rand index is a known index indicating the degree of coincidence or similarity between the evaluation target image and the correct image.
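For reference, a small sketch of how the quoted Rand error could be computed is shown below: the Rand index counts, over all pixel pairs, how often two labelings agree on "same region" versus "different region", and the Rand error is 1 minus that index. The NumPy/SciPy implementation is illustrative; the patent does not specify the exact computation.

```python
import numpy as np
from scipy.special import comb

def rand_error(labels_a, labels_b):
    """labels_a, labels_b: arrays of region labels of identical shape."""
    a = np.asarray(labels_a).ravel()
    b = np.asarray(labels_b).ravel()
    n = a.size
    # Pair counts from the contingency between the two labelings.
    _, counts_ab = np.unique(np.stack([a, b], axis=1), axis=0, return_counts=True)
    _, counts_a = np.unique(a, return_counts=True)
    _, counts_b = np.unique(b, return_counts=True)
    same_both = comb(counts_ab, 2).sum()   # pairs grouped together in both labelings
    same_a = comb(counts_a, 2).sum()
    same_b = comb(counts_b, 2).sum()
    total = comb(n, 2)
    rand_index = (total + 2 * same_both - same_a - same_b) / total
    return 1.0 - rand_index

print(rand_error([0, 0, 1, 1], [0, 0, 1, 1]))  # identical labelings -> 0.0
```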
  • FIG. 13 shows a second configuration example of the main body.
  • In FIG. 13, the same reference symbols are attached to the same components as those described above.
  • the main body 280 includes a first image processing unit 200 and a second image processing unit 281.
  • the second image processing unit 281 includes a machine learning type film estimator 282, a binarizer 284, and a correction unit 286.
  • the film estimator 282 is composed of CNN in the illustrated example.
  • the film estimator 282 applies a film estimation process to the second original image (high voltage image) 36H as the input image 208, and outputs a film likelihood map.
  • the film estimator 282 is a learned estimator that has undergone learning using a high-voltage image.
  • the binarizer 284 generates a temporary membrane image from the membrane likelihood map.
  • the correction unit 286 has the same function as that of the correction unit 52, accepts user correction on the mask image, and generates the second correct image 224 thereby.
  • the second configuration example uses a membrane estimator 282 that is different from the membrane estimator 42 in generating the second correct image.
  • two observation conditions are used, but three or more observation conditions may be used.
  • Although a CNN is used as the film estimator, other machine learning type film estimators may be used.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Pathology (AREA)
  • Immunology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Food Science & Technology (AREA)
  • Medicinal Chemistry (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Urology & Nephrology (AREA)
  • Molecular Biology (AREA)
  • Hematology (AREA)
  • Image Analysis (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Apparatus Associated With Microorganisms And Enzymes (AREA)

Abstract

In an analysis process, a membrane estimator (42) subjects an original image (38L) serving as a low-voltage image to a process of estimating a cell membrane, and outputs a membrane likelihood map (60). In a learning process, a first original image (36L) serving as a low-voltage image together with a corresponding first correct-answer image (223, 64A), and a second original image (36H) serving as a high-voltage image together with a corresponding second correct-answer image (224), are input to the membrane estimator (42).
PCT/JP2019/019741 2018-05-24 2019-05-17 Système de traitement d'image de tissu biologique, et procédé d'apprentissage automatique WO2019225506A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018099388A JP7181002B2 (ja) 2018-05-24 2018-05-24 生物組織画像処理システム及び機械学習方法
JP2018-099388 2018-05-24

Publications (1)

Publication Number Publication Date
WO2019225506A1 true WO2019225506A1 (fr) 2019-11-28

Family

ID=68616942

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/019741 WO2019225506A1 (fr) 2018-05-24 2019-05-17 Système de traitement d'image de tissu biologique, et procédé d'apprentissage automatique

Country Status (2)

Country Link
JP (1) JP7181002B2 (fr)
WO (1) WO2019225506A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05290786A (ja) * 1992-04-10 1993-11-05 Hitachi Ltd 走査試料像表示方法および装置ならびにそれに供される試料
JP2004212355A (ja) * 2003-01-09 2004-07-29 Hitachi Ltd バイオ電子顕微鏡及び試料の観察方法
JP2007263932A (ja) * 2006-03-30 2007-10-11 Kose Corp 化粧料塗布膜内部の画像の取得方法
US20100183217A1 (en) * 2007-04-24 2010-07-22 Seung H Sebastian Method and apparatus for image processing
JP2015149169A (ja) * 2014-02-06 2015-08-20 株式会社日立ハイテクノロジーズ 荷電粒子線装置、画像生成方法、観察システム
WO2017221592A1 (fr) * 2016-06-23 2017-12-28 コニカミノルタ株式会社 Dispositif, procédé et programme de traitement d'images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIGONG HAN ET AL.: "Learning Generative Models of Tissue Organization with Supervised GANs", 2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION, 12 March 2018 (2018-03-12), pages 682 - 690, XP033337732, DOI: 10.1109/WACV.2018.00080 *
OSUMI MASAKO: "Low-voltage scanning electron microscopy - application to biological specimens", ELECTRON MICROSCOPE, vol. 31, no. 1, pages 45-50 *
UCHIHASHI KENSHI ET AL.: "Neuronal cell membrane segmentation from electron microscopy images using an adversarial generative model", THE 31ST ANNUAL CONFERENCE OF THE JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE, 2017, pages 1-4 *

Also Published As

Publication number Publication date
JP7181002B2 (ja) 2022-11-30
JP2019203803A (ja) 2019-11-28

Similar Documents

Publication Publication Date Title
US11854201B2 (en) Biological tissue image processing system, and machine learning method
Klumpe et al. A modular platform for automated cryo-FIB workflows
CN107680107B (zh) 一种基于多图谱的扩散张量磁共振图像的自动分割方法
CN109584156A (zh) 显微序列图像拼接方法及装置
US11594051B2 (en) Microscope system and projection unit
CN111445478A (zh) 一种用于cta图像的颅内动脉瘤区域自动检测系统和检测方法
CN113012172A (zh) 一种基于AS-UNet的医学图像分割方法及系统
Bartesaghi et al. An energy-based three-dimensional segmentation approach for the quantitative interpretation of electron tomograms
Pollastri et al. Improving skin lesion segmentation with generative adversarial networks
Krawczyk et al. Bones detection in the pelvic area on the basis of YOLO neural network
Waggoner et al. 3D materials image segmentation by 2D propagation: A graph-cut approach considering homomorphism
CN115546605A (zh) 一种基于图像标注和分割模型的训练方法及装置
TW201709150A (zh) 組織學染色之空間多工
CN114764189A (zh) 用于评估图像处理结果的显微镜系统和方法
CN113744195B (zh) 一种基于深度学习的hRPE细胞微管自动检测方法
US8837795B2 (en) Microscopy of several samples using optical microscopy and particle beam microscopy
Fortun et al. Reconstruction from multiple particles for 3D isotropic resolution in fluorescence microscopy
Rangan et al. Deep reconstructing generative networks for visualizing dynamic biomolecules inside cells
CN116797463B (zh) 特征点对提取方法及图像拼接方法
WO2019225506A1 (fr) Système de traitement d'image de tissu biologique, et procédé d'apprentissage automatique
JP7181000B2 (ja) 生物組織画像処理装置及び方法
CN112633248A (zh) 深度学习全在焦显微图像获取方法
JP7181003B2 (ja) 生物組織画像処理装置及び方法
CN110189283A (zh) 基于语义分割图的遥感图像dsm融合方法
CN115424319A (zh) 一种基于深度学习的斜视识别系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19807182

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19807182

Country of ref document: EP

Kind code of ref document: A1