WO2019225507A1 - Biological tissue image processing device and method - Google Patents

Biological tissue image processing device and method

Info

Publication number
WO2019225507A1
WO2019225507A1 (application PCT/JP2019/019742)
Authority
WO
WIPO (PCT)
Prior art keywords
image
film
biological tissue
depth
haze
Prior art date
Application number
PCT/JP2019/019742
Other languages
English (en)
Japanese (ja)
Inventor
功記 小西
吉田 隆彦
三雄 須賀
秀夫 西岡
Original Assignee
株式会社ニコン
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ニコン
Publication of WO2019225507A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 23/00 - Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N 3/00 – G01N 17/00, G01N 21/00 or G01N 22/00
    • G01N 23/22 - Investigating or analysing materials by the use of wave or particle radiation, not covered by groups G01N 3/00 – G01N 17/00, G01N 21/00 or G01N 22/00, by measuring secondary emission from the material
    • G01N 23/225 - Investigating or analysing materials by measuring secondary emission from the material, using electron or ion
    • G01N 23/2251 - Investigating or analysing materials by measuring secondary emission from the material, using incident electron beams, e.g. scanning electron microscopy [SEM]
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 33/00 - Investigating or analysing materials by specific methods not covered by groups G01N 1/00 - G01N 31/00
    • G01N 33/48 - Biological material, e.g. blood, urine; Haemocytometers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • H - ELECTRICITY
    • H01 - ELECTRIC ELEMENTS
    • H01J - ELECTRIC DISCHARGE TUBES OR DISCHARGE LAMPS
    • H01J 37/00 - Discharge tubes with provision for introducing objects or material to be exposed to the discharge, e.g. for the purpose of examination or processing thereof
    • H01J 37/02 - Details
    • H01J 37/22 - Optical or photographic arrangements associated with the tube
    • C - CHEMISTRY; METALLURGY
    • C12 - BIOCHEMISTRY; BEER; SPIRITS; WINE; VINEGAR; MICROBIOLOGY; ENZYMOLOGY; MUTATION OR GENETIC ENGINEERING
    • C12M - APPARATUS FOR ENZYMOLOGY OR MICROBIOLOGY; APPARATUS FOR CULTURING MICROORGANISMS FOR PRODUCING BIOMASS, FOR GROWING CELLS OR FOR OBTAINING FERMENTATION OR METABOLIC PRODUCTS, i.e. BIOREACTORS OR FERMENTERS
    • C12M 1/00 - Apparatus for enzymology or microbiology
    • C12M 1/34 - Measuring or testing with condition measuring or sensing means, e.g. colony counters

Definitions

  • the present disclosure relates to a biological tissue image processing apparatus and method.
  • 3D microscopy is known as a technique for analyzing or imaging the 3D structure of biological tissue.
  • an electron microscope is generally used.
  • three-dimensional microscopy using a scanning electron microscope includes the Focused Ion Beam SEM (FIB-SEM) method, the Serial Block-Face SEM (SBF-SEM) method, and the serial section SEM (Serial Section SEM) method (see Non-Patent Document 1).
  • the serial section SEM method is also called the array tomography method.
  • a plurality of sample sections (ultra-thin pieces) continuous in the depth direction are cut out from a sample of a biological tissue, and they are arranged on a substrate.
  • Each sample section on the substrate is sequentially observed with a scanning electron microscope, whereby a plurality of images are acquired. Based on the acquired images, the three-dimensional structure of a specific organ (cell, mitochondria, nucleus, etc.) contained in the biological tissue is analyzed or imaged. According to the serial section SEM method, a sample section that has already been observed can be observed again.
  • the content of the electron microscope image generated by the observation of the biological tissue varies depending on the observation conditions such as the acceleration voltage. According to the experiments and researches of the present inventors, for example, it has been confirmed that a deep image of a cell membrane (for example, a nerve cell membrane) is likely to appear like a haze when observed with an increased acceleration voltage.
  • the deep image is considered to reflect the three-dimensional form or three-dimensional structure of the cell membrane.
  • the same applies to a membrane other than the cell membrane, for example a mitochondrial membrane.
  • the purpose of the present disclosure is to utilize a deep image included in an electron microscope image as a biological tissue image in a three-dimensional labeling process.
  • a biological tissue image processing apparatus according to the present disclosure includes a membrane image generation unit that generates a membrane image based on a first electron microscope image generated by observation of a biological tissue, a haze image generation unit that generates, based on a second electron microscope image generated by observation of the biological tissue, a haze image including a haze region as a deep image of the membrane, and a labeling processing unit that uses the haze region in a three-dimensional labeling process based on the membrane image.
  • the biological tissue image processing method includes a step of generating a membrane image based on a first electron microscope image generated by observation of a biological tissue, a step of generating, based on a second electron microscope image generated by observation of the biological tissue, a haze image including a haze region as a deep image of the membrane, and a step of using the haze region in the three-dimensional labeling process based on the membrane image.
  • the above method is realized as a hardware function or a software function.
  • a program for executing the function is installed in the information processing apparatus via a network or a portable storage medium.
  • the concept of the information processing apparatus includes a personal computer, an electron microscope system, and the like.
  • a biological tissue image processing system includes a membrane image generation unit, a haze image generation unit, and a labeling processing unit.
  • the membrane image generation unit generates a membrane image based on the first electron microscope image generated by observing the biological tissue.
  • the haze image generation unit generates a haze image including a haze region (Hazy Region) as a deep image of the film based on the second electron microscope image generated by observing the biological tissue.
  • the labeling processing unit uses a haze region in the three-dimensional labeling process based on the film image.
  • the haze area is special information reflecting the three-dimensional form or three-dimensional structure of the film. The above configuration uses such special information in the three-dimensional labeling process.
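  • as an informal illustration of the data organization described above, the following Python sketch models one sample section observed under the two conditions and the 3D labeling step; all function and field names are hypothetical assumptions, and the processing bodies are placeholders for the operations detailed later in this description.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DepthSlice:
    """One sample section observed twice: a low-voltage image used for the
    membrane image and a high-voltage image used for the haze image.
    Field names are illustrative only."""
    low_voltage: np.ndarray             # first electron microscope image
    high_voltage: np.ndarray            # second electron microscope image
    membrane: np.ndarray | None = None  # binary membrane image (1 = membrane pixel)
    haze: np.ndarray | None = None      # binary haze image (1 = haze pixel)

def label_stack(slices: list[DepthSlice]) -> np.ndarray:
    """Labeling processing unit (sketch): returns an integer label volume in
    which in-membrane regions connected across depths share the same label.
    The haze regions serve as reference information for the connection step,
    which is sketched separately further below."""
    assert all(s.membrane is not None and s.haze is not None for s in slices)
    labels = np.zeros((len(slices), *slices[0].membrane.shape), dtype=np.int32)
    # ... per-depth region extraction + reference-vector / reference-region
    #     based connection of regions between adjacent depths ...
    return labels
```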
  • the membrane is a cell membrane in the embodiment, but may be other membranes.
  • the first electron microscope image and the second electron microscope image are generated by observing the same part in the biological tissue. Those images usually have the same magnification.
  • the first electron microscope image and the second electron microscope image are basically separate images, but a modified example in which they are the same image is also conceivable, as long as the haze area can be extracted and used.
  • a plurality of images are acquired by the serial section SEM method (array tomography method), but a plurality of images may instead be acquired by the FIB-SEM method, the SBF-SEM method, or other methods.
  • when the first electron microscope image is acquired, a first observation condition is set for the electron microscope, and when the second electron microscope image is acquired, a second observation condition that allows observation over a larger depth range than the first observation condition is set for the electron microscope. According to this configuration, a deep image of the film is likely to appear in the second electron microscope image.
  • the first observation condition is a condition that sets a first acceleration voltage for the electron microscope, and the second observation condition is a condition that sets a second acceleration voltage higher than the first acceleration voltage for the electron microscope.
  • the membrane image generation unit includes a machine learning type membrane estimator.
  • the film estimator is composed of CNN, for example. Other types of machine learning type film estimators may be used. It is also possible to use a film estimator other than the machine learning type. If the input image of the membrane estimator is a low voltage image, the burden on the user when correcting the temporary membrane image can be reduced. If the input image of the haze image generation unit is a high voltage image, the detection accuracy of the haze area can be increased.
  • the haze image generation unit generates a haze image by extracting a haze image component from the second electron microscope image.
  • a haze image component generally has a characteristic property, and it is possible to extract a haze image component using this property.
  • the haze image generation unit may be configured as a filter processor, a machine learning type haze estimator, or the like.
  • a single estimator (for example, CNN) may function as the film image generation unit and the haze image generation unit.
  • the film image generation unit generates a plurality of film images corresponding to a plurality of depths based on a plurality of first electron microscope images corresponding to the plurality of depths.
  • the haze image generation unit generates a plurality of haze images corresponding to the plurality of depths based on a plurality of second electron microscope images corresponding to the plurality of depths, and the labeling processing unit executes the three-dimensional labeling process on the plurality of film images.
  • the labeling processing unit includes a reference information generation unit that generates, based on a haze region included in the haze image of a first depth, reference information for selecting a target region included in the film image of a second depth different from the first depth.
  • the reference information is used as judgment information in automatic labeling, or is referred to as support information in manual labeling.
  • the second depth may be a depth adjacent to the first depth, or may be another depth.
  • the reference information can be used as such a feature amount, that is, as a feature amount for determining the connection relation in the depth direction.
  • the target region is a region to be extracted. In the embodiment, the target region is the inside of the cell membrane, that is, the in-membrane region. A larger region, such as a cell region (a region including up to the cell membrane), may also be the target region.
  • the reference information generation unit includes a reference vector calculation unit that calculates, as the reference information, a reference vector of the first depth based on a haze region included in the haze image of the first depth and a first target region that is a region included in the film image of the first depth and that contains the haze region.
  • the labeling processing unit includes a selection unit that selects, based on the reference vector of the first depth, a second target region that is included in the film image of the second depth and that has a connection relation with the first target region.
  • the reference vector of the first depth is used as information indicating a three-dimensional structure or form, such as that of the cytoplasm surrounded by the membrane.
  • the concept of a connection relationship includes a relationship in which the possibility of connection is recognized. In determining the connection relationship, a plurality of feature amounts including the reference vector may be comprehensively considered.
  • the reference vector calculation unit calculates the reference vector of the first depth based on the representative coordinates of the haze region included in the haze image of the first depth and the representative coordinates of the first target region included in the film image of the first depth.
  • Each set of representative coordinates represents its region and is, for example, the center of gravity, the center, or other coordinates.
  • the reference information generation unit includes a reference region calculation unit that calculates a reference region of the first depth based on the haze region included in the haze image of the first depth and the first target region that is a region included in the film image of the first depth and that contains the haze region, and the labeling processing unit includes a selection unit that selects, based on the reference region of the first depth, a second target region that is included in the film image of the second depth and that has a connection relation with the first target region.
  • the reference region calculation unit calculates the reference region of the first depth by subtracting, from the first target region included in the film image of the first depth, the haze region included in the haze image of the first depth or a region generated based on it.
  • the reference region of the first depth is a portion of the first target region that, viewed in the depth direction, is highly likely to overlap the second target region.
  • the haze region is a portion of the first target region that, viewed in the depth direction, is unlikely to overlap the second target region.
  • the above configuration excludes the portion that is unlikely to overlap and uses the portion that is highly likely to overlap for determining the connection relation.
  • FIG. 1 shows a biological tissue image processing system according to an embodiment.
  • the illustrated biological tissue image processing system 10 is a system for analyzing and imaging a three-dimensional structure of a biological tissue. Using this biological tissue image processing system, for example, an image that three-dimensionally represents nerve cells in the brain of a human body or an animal is generated. Any tissue, organ, etc. in the organism can be the subject of analysis.
  • the biological tissue image processing system 10 includes a sample pretreatment device 12, a continuous section creation device 14, a scanning electron microscope (SEM) 16, and a biological tissue image processing device 18.
  • the sample pretreatment device 12 is a device that performs pretreatment on the tissue 20 extracted from a living body, or corresponds to various instruments for the pretreatment.
  • examples of the pretreatment include fixing treatment, staining treatment, conductive treatment, resin embedding treatment, and shaping treatment. All or some of them are performed as necessary.
  • in the staining treatment, osmium tetroxide, uranyl acetate, lead citrate, or the like may be used. A staining process may also be performed on each sample section described below. Some or all of the steps included in the pretreatment may be performed manually.
  • the continuous section preparation device 14 is provided outside the SEM 16 or inside the SEM 16.
  • a plurality of sample sections 24 arranged in the depth direction (Z direction) are cut out from the cubic sample after the pretreatment by the continuous section creating apparatus 14.
  • an apparatus such as an ultramicrotome may be used.
  • the work may be performed manually.
  • a plurality of sample sections 24 constitute a sample section group 22.
  • the plurality of sample sections 24 cut out are arranged on the substrate 28 in a predetermined arrangement.
  • the substrate 28 is, for example, a glass substrate or a silicon substrate.
  • the sample section array 22A composed of two sample section rows is configured on the substrate 28, but this is merely an example.
  • a sample unit 26 is configured by the substrate 28 and the sample section array 22A.
  • the size of each sample section 24 is, for example, on the order of nanometers or micrometers.
  • a sample section 24 having a larger size may be produced.
  • the thickness (size in the Z direction) of each sample section 24 is, for example, in the range of several nm to several hundred nm, and in the embodiment, the thickness is in the range of, for example, 30 to 70 nm. Any numerical value given in this specification is an example.
  • the SEM 16 includes an electron gun, a deflector (scanner), an objective lens, a sample chamber, a detector 34, a control unit 204, and the like.
  • a stage for holding the sample unit 26 and a moving mechanism for moving the stage are provided in the sample chamber.
  • the control unit 204 controls the operation of the moving mechanism, that is, the movement of the stage.
  • the electron beam 30 is irradiated onto a specific sample section 24 selected from the sample section array 22A while the irradiation position is scanned (for example, by raster scanning).
  • the backscattered (reflected) electrons 32 emitted from each irradiation position are detected by the detector 34. Thereby, an SEM image is formed. This is performed for each sample section 24.
  • the control unit 204 has a function of setting an acceleration voltage for forming an electron beam. In general, when the acceleration voltage is increased, information from a deeper position in the sample section 24 is easily obtained.
  • in the observation range (the two-dimensional scanning range of the electron beam), the X-direction observation range and the Y-direction observation range coincide with each other.
  • secondary electrons or the like may be detected.
  • for each sample section 24, the first acceleration voltage (low voltage) as the first observation condition and the second acceleration voltage (high voltage) as the second observation condition are set sequentially. That is, each sample section 24 cut out from the sample to be analyzed is observed by scanning with the electron beam 30L formed under the first acceleration voltage (low-voltage observation), whereby a low-voltage image (first original image) 38L is generated. Subsequently, each sample section 24 is observed by scanning with the electron beam 30H formed under the second acceleration voltage (high-voltage observation), thereby generating a high-voltage image (second original image) 38H.
  • low-voltage observation and high-voltage observation are thus executed sequentially for one sample section 24, and this is repeated for each sample section 24.
  • the low voltage image stack 36L is configured by the plurality of low voltage images 38L
  • the high voltage image stack 36H is configured by the plurality of high voltage images 38H.
  • the first acceleration voltage is, for example, 1 kV or 2 kV, and the second acceleration voltage is, for example, 3 kV, 5 kV, or 7 kV. It is desirable to determine the first acceleration voltage and the second acceleration voltage in consideration of the sample, the observation object, the observation purpose, and other circumstances.
  • in the learning process, the first observation condition, that is, the first acceleration voltage, is set for the electron microscope. Each sample slice 24 cut out from the learning-target sample is observed by scanning with the electron beam 30L formed under the first acceleration voltage (low-voltage observation), whereby a low-voltage image (original image) is generated.
  • a plurality of low voltage images constitute a low voltage image stack.
  • the learning process includes a primary learning (initial learning process) and a secondary learning process. In the learning process, a high voltage image may be used as a learning image in addition to the low voltage image.
  • the image stacks 36L and 36H acquired in the analysis process are configured by a plurality of images 38L and 38H corresponding to a plurality of depths in the Z direction (in other words, arranged in the Z direction in the data storage space).
  • Each of the images 38L and 38H is an original image or an input image when viewed from the biological tissue image processing apparatus 18 side.
  • the images 38L and 38H are electronic data, and the images 38L and 38H are transmitted from the SEM 16 to the biological tissue image processing apparatus 18 via a network or a portable storage medium.
  • the biological tissue image processing apparatus 18 is configured by a personal computer in the illustrated configuration example.
  • the biological tissue image processing apparatus 18 may be incorporated in the SEM 16, or the biological tissue image processing apparatus 18 may be incorporated in a system computer that controls the SEM 16 or the like.
  • the SEM 16 may be controlled by the biological tissue image processing apparatus 18.
  • the biological tissue image processing apparatus 18 has a main body 40, a display 46 and an input device 48.
  • the plurality of functions of the main body 40 will be described in detail later with reference to FIGS.
  • in FIG. 1, three typical functions exhibited by the main body 40 (a film estimation function, a haze detection function, and a labeling processing function) are represented as blocks.
  • the main body 40 includes a machine learning type film estimator 42 that constitutes a part of the film image generation unit, a haze detector 200 as a haze image generation unit, and a labeling processing unit 202 that executes a three-dimensional labeling process.
  • the display 46 is configured by an LCD, an organic EL display device, or the like.
  • the input device 48 includes a keyboard operated by a user, a pointing device, and the like.
  • FIG. 2 shows a configuration example of the main body 40.
  • the entity of each configuration shown in FIG. 2 is software, that is, a program executed by a general-purpose processor such as a CPU or GPU, except for a portion corresponding to a user's work or action.
  • part or all of the configuration may be configured by a dedicated processor or other hardware. All or part of the functions of the biological tissue image processing apparatus may be executed by one or a plurality of information processing devices existing on the network.
  • the main body 40 includes a machine learning type film estimator 42, a binarizer (image generator) 50, a correction unit 52, a haze detector 200, a labeling processing unit 202, a volume data processing unit 56, and the like.
  • the membrane estimator 42 functions as membrane estimation means, and applies a membrane estimation process to the input image 206, thereby outputting a membrane likelihood map 60.
  • the input image 206 is an original image 38L as a low voltage image.
  • the original image 38H as a high-voltage image is input to the haze detector 200 provided in parallel with the membrane estimator 42.
  • the membrane estimator 42 is configured by a CNN (Convolutional Neural Network), which is a machine learning type membrane estimator.
  • a specific configuration example will be described later with reference to FIG.
  • the learning process is executed in advance prior to the actual operation of the CNN, that is, the analysis process so that the film is correctly estimated by the CNN.
  • the learning process includes a primary learning process and a secondary learning process as described above. Only the primary learning process may be performed.
  • each image pair is constituted by the original image 38L and the correct image 68 corresponding thereto.
  • the correct answer image 68 is created, for example, manually from the original image 38L.
  • the correct image 68 may also be created from the original image 38L by an unsupervised machine learning device, a simple classifier (for example, an SVM (Support Vector Machine)), or the like, and the output thereof may be used as the correct image 68.
  • the correct answer image 68 may be created based on the output of the film estimator 42 that has undergone a certain primary learning process.
  • a plurality of image pairs are given as teacher data to the membrane estimator 42 as in the primary learning process.
  • the teacher data includes a plurality of image pairs used in the primary learning process and a plurality of image pairs added in the secondary learning process.
  • Each added image pair includes an original image 38L and a corresponding correct image 64A.
  • the correct image 64A is created by the biological tissue image processing apparatus itself by the configuration from the membrane estimator 42 to the correction unit 52. Specifically, when the original image 38L is input to the membrane estimator 42, the membrane likelihood map 60 is output from the membrane estimator 42 as an estimation result image.
  • the correct image 64A is created through generation of the mask image 62 based on the film likelihood map 60 and user (expert) correction of the mask image 62 using the correction unit 52. Each process will be described in detail later. It is also conceivable to use the mask image 62 as the correct image 64A.
  • the CNN parameter group in the film estimator 42 is further improved by the secondary learning process. That is, the machine learning result is further accumulated in the film estimator 42. For example, the secondary learning process ends when it is determined that the result of the estimation process for the original image 38L is sufficiently similar to the correct images 68 and 64A corresponding to the original image 38L. Thereafter, re-learning of the film estimator 42 is executed by the same method as described above as necessary.
  • the database 57 stores a plurality of original images 38L and 38H and a plurality of correct images 68 and 64A.
  • the film estimator 42 and the database 57 may be integrated.
  • as the membrane estimator 42, a U-Net may be used, or an SVM, a random forest, or the like may be used.
  • the binarizer 50 functions as an image generator. Specifically, the binarizer 50 is a module that generates a temporary membrane image 62 by binarization processing on the membrane likelihood map 60, as will be exemplified later with reference to FIG.
  • the film likelihood map 60 includes a plurality of film likelihoods arranged two-dimensionally. Each film likelihood is a numerical value indicating the certainty (probability) of being a film. The film likelihood is, for example, between 0 and 1.
  • the film likelihood map 60 can also be understood as a film likelihood image.
  • a threshold is set in the binarizer 50, and the binarizer 50 converts a film likelihood that is equal to or greater than the threshold to 1 and converts a film likelihood that is less than the threshold to 0.
  • the image generated as a result is a temporary membrane image 62.
  • the film image before correction is referred to as a temporary film image 62 in order to distinguish the film image before correction from the film image after correction.
  • the entire film estimator 42 and binarizer 50 may be configured by CNN or the like. Even in such a case, the stepwise generation of the membrane likelihood map and the temporary membrane image can be considered. It is also possible to use the temporary membrane image 62 as the correct image 62A. Prior to binarization processing, processing such as noise removal and edge enhancement may be applied to the film likelihood map 60. In the embodiment, the part from the membrane estimator 42 to the correction unit 52 functions as a membrane image generation unit or a membrane image generation unit.
  • the correction unit 52 displays, as a work target image, a mask image that is a work target for the user, via the work window illustrated later in FIG. 5, and accepts the user's correction instructions on the work target image.
  • the correction contents are reflected in the mask image that is the work target image.
  • when the work target image includes a discontinuous portion of the film, a film pixel group is added to the discontinuous portion.
  • the correction unit 52 is a module that supports the user's work or operation and manages each mask image. When the user is satisfied with the quality of the mask image, the processing by the correction unit 52 can be skipped.
  • an input image (original image) 206 and a generated membrane likelihood map 60 are input to the correction unit 52 in addition to the generated temporary membrane image 62.
  • the original image 206 or the membrane likelihood map 60 corresponding to the work target image can be displayed together with or instead of the temporary membrane image 62 as the work target image.
  • the labeling processing unit 202 functions as a labeling processing unit, and also functions as a reference information generation unit and a selection unit.
  • the labeling processing unit 202 is a module that performs labeling (painting and labeling) on individual regions (in-film regions) included in the corrected film image.
  • labeling includes manual labeling by the user and automatic labeling.
  • in the embodiment, in order to reduce the burden on the user at the time of the labeling process or to increase the accuracy of the labeling process, reference information is used as a feature quantity for determining the connection relation in the depth direction, or as one of a plurality of feature quantities used for that purpose.
  • the reference information is a reference vector and a reference region that are selectively (or simultaneously) used as will be described in detail later.
  • the labeling processing unit 202 includes a reference vector calculation unit 210 and a reference region calculation unit 212. These will be described in detail later.
  • the labeling processing unit 202 has a selection unit, which is not shown.
  • three-dimensional labeling data 66 is constructed in which the cytoplasm surrounded by the membrane is distinguished from the others. This is sent to the volume data processing unit 56.
  • the volume data processing unit 56 includes an analysis unit 56A and a rendering unit 56B.
  • the volume data processing unit 56 is input with an original image stack 36L composed of a plurality of original images.
  • the original image stack 36L constitutes volume data.
  • the three-dimensional labeling data 66 is also input to the volume data processing unit 56. It is also a kind of volume data.
  • the analysis unit 56A analyzes a target organ (for example, a nerve cell) based on the three-dimensional labeling data 66, for example. For example, the form, volume, length, etc. may be analyzed.
  • the original image stack 36L is analyzed with reference to the three-dimensional labeling data.
  • the rendering unit 56B forms a three-dimensional image (stereoscopic expression image) based on the three-dimensional labeling data 66. For example, an image portion may be extracted from the original image stack 36L based on the three-dimensional labeling data 66, and a rendering process may be applied to the extracted image portion.
  • the haze detector 200 shown in the lower part of FIG. 2 functions as a haze image generator.
  • An original image 38H, which is a high-voltage image, is input to the haze detector 200 as an input image 208.
  • in the high-voltage image, the three-dimensional structure or form inside the cell is more likely to appear than in the low-voltage image (this will be described in detail later with reference to FIG. 7).
  • more deep images of the cell membrane appear.
  • the deep image appears like a haze in the high voltage image.
  • the module that detects the haze, that is, the haze image component, is the haze detector 200.
  • the haze detector 200 has a filter for extracting a haze image component as will be described later.
  • a deep image, or haze also appears in the low voltage image, but the amount is relatively small. Therefore, in the embodiment, two images of a low voltage image and a high voltage image are acquired from the same sample section.
  • since the input image 206 of the membrane estimator 42 can also include a certain amount of haze, a modification in which the input image 206 is input to the haze detector 200 is also conceivable.
  • the haze detector 200 may be configured by a machine learning type estimator such as a CNN. It is also conceivable that the film estimator 42 and the haze detector 200 are constituted by a single machine learning type estimator. In that case, both the membrane likelihood map and the haze likelihood map are output in parallel from the estimator.
  • the temporary membrane image 62 often contains non-membrane images; for this reason, the burden of the correction work in the correction unit 52 increases.
  • by using the high-voltage image, the haze area can be detected with high accuracy. The haze area is used in the calculation of the reference vector and the calculation of the reference region. This will be described later in detail with reference to FIGS.
  • FIG. 3 shows a configuration example of the membrane estimator 42 as a schematic diagram.
  • the membrane estimator 42 has a number of layers, which include an input layer 80, a convolution layer 82, a pooling layer 84, an output layer 86, and the like. These layers act according to the CNN parameter group 88.
  • the CNN parameter group 88 includes a number of weighting factors, a number of bias values, and the like.
  • the CNN parameter group 88 is initially configured by an initial value group 94. For example, the initial value group 94 is generated using random numbers or the like.
  • in the learning process, the evaluation unit 90 and the update unit 92 function.
  • the evaluation unit 90 calculates an evaluation value based on a plurality of image pairs (original image 38L (see FIG. 2) and correct images 68 and 64A corresponding thereto) constituting the teacher data.
  • the evaluation value is calculated by sequentially giving the result 60A of the estimation process for the original image 38L and the correct images 68 and 64A corresponding to the original image 38L to the error function.
  • the updating unit 92 updates the CNN parameter group 88 so that the evaluation value changes in a favorable direction. By repeating this, the CNN parameter group 88 is optimized as a whole.
  • the end of the learning process is determined when the evaluation value reaches a certain value.
  • the configuration shown in FIG. 3 is merely an example, and it is possible to use estimators having various structures as the film estimator 42.
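  • as a simplified, illustrative sketch of such a machine-learning membrane estimator (not the actual configuration of the estimator 42; the architecture and hyperparameters are placeholder assumptions, and the pooling/upsampling layers are omitted for brevity), a tiny PyTorch model and one parameter-update step could look like this:

```python
import torch
import torch.nn as nn

class TinyMembraneEstimator(nn.Module):
    """Placeholder stand-in for the membrane estimator 42: a few convolution
    layers producing a per-pixel membrane likelihood in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),   # logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))      # membrane likelihood map

model = TinyMembraneEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()                         # plays the role of the error function

original = torch.rand(1, 1, 128, 128)                  # stand-in for an original image 38L
correct = (torch.rand(1, 1, 128, 128) > 0.9).float()   # stand-in for a correct image 68 / 64A

likelihood = model(original)         # estimation result (membrane likelihood map 60)
loss = loss_fn(likelihood, correct)  # evaluation value computed by the evaluation unit 90
optimizer.zero_grad()
loss.backward()
optimizer.step()                     # update of the CNN parameter group 88 by the update unit 92
```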
  • FIG. 4 shows the operation of the binarizer.
  • a film likelihood map 60 is output from the film estimator.
  • the film likelihood map 60 is composed of a plurality of film likelihoods 60a corresponding to a plurality of pixels, and each film likelihood 60a is a numerical value indicating the probability of being a film.
  • the binarizer binarizes the membrane likelihood map 60, thereby generating a temporary membrane image 62 as a binarized image.
  • each film likelihood 60a is compared with a threshold value.
  • a membrane likelihood 60a that is equal to or greater than the threshold is converted into a pixel 62a having a value of 1, and a membrane likelihood 60a that is less than the threshold is converted into a pixel 62b having a value of 0.
  • the pixel 62a is handled as a pixel (film pixel) constituting the film.
  • the threshold value can be variably set by the user. The setting may be automated. For example, the threshold value may be changed while observing the temporary membrane image 62.
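  • a minimal numpy sketch of this binarization, assuming the likelihood map is a 2-D float array in [0, 1] and the threshold is user-adjustable:

```python
import numpy as np

def binarize_likelihood(likelihood_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarizer 50 (sketch): likelihoods >= threshold become membrane pixels (1),
    the rest become non-membrane pixels (0)."""
    return (likelihood_map >= threshold).astype(np.uint8)

# Lowering the threshold turns more pixels into membrane pixels; the user can
# tune it while observing the resulting temporary membrane image.
likelihood = np.array([[0.1, 0.6],
                       [0.8, 0.4]])
print(binarize_likelihood(likelihood, threshold=0.5))   # [[0 1]
                                                        #  [1 0]]
```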
  • FIG. 5 illustrates a work window displayed by the machining tool unit (the correction unit 52 and the labeling processing unit 202).
  • the work window 100 includes a display image 102.
  • the display image 102 is a composite image (synthesized image) composed of a mask image (work target image) corresponding to the depth selected by the user and the original image.
  • An original image as a gray scale image constitutes a background image, and a temporary membrane image (work target image) as a color image (for example, a blue image) is superimposed and displayed on the background image.
  • the illustrated display image 102 shows brain tissue, in which cross sections of a plurality of cells appear. At the same time, a cross section of an organelle (such as mitochondria) in the cell also appears.
  • when the corresponding tab is selected, the display image 102 described above is displayed.
  • when the tab 105 is selected, only the original image as a gray scale image is displayed.
  • when the tab 106 is selected, only the film likelihood map (film likelihood image) as a gray scale image is displayed.
  • a tab for displaying only a mask image as a work target image may be added.
  • a membrane likelihood map may be used as a background image, and a temporary membrane image may be superimposed on the background image.
  • the depth selection tool 108 is a display element (operation element) for selecting a specific depth (display depth) in the Z direction. It consists of a Z-axis symbol 108b representing the Z-axis and a marker 108a as a slider that slides along the Z-axis symbol 108b. It is possible to select the desired depth by moving the marker 108a. According to such a depth selection tool 108, it is possible to obtain an advantage that it is easy to intuitively recognize the depth to be selected and the depth change amount.
  • the left end point of the Z-axis symbol 108b corresponds to zero depth, and the right end point of the Z-axis symbol 108b corresponds to the maximum depth. Depth selection tools having other forms may be employed.
  • the depth input field 114 is a field for directly specifying the depth as a numerical value. The depth currently selected according to the position of the marker 108a may be displayed as a numerical value in the depth input field 114.
  • the transparency adjustment tool 110 is a tool for adjusting the transparency (display weight) of the color temporary membrane image (work target image) displayed in a synthesized state in a state where the display image 102 is displayed. For example, when the marker 110a is moved to the left side, the display weight of the color mask image is reduced, the transparency is increased, and the original image is dominantly displayed. On the contrary, when the marker 110a is moved to the right side, the display weight of the color mask image increases, the transparency decreases, and the color mask image becomes more clearly expressed.
  • the superimposition display tool 112 is operated when an image (a composite image, an original image, or a film likelihood map) adjacent on the shallow side in the depth direction and/or an image adjacent on the deep side in the depth direction is displayed superimposed on the currently displayed image (a composite image, an original image, or a film likelihood map).
  • when the marker 112a is moved to the left side, the display weight for the image adjacent on the shallow side increases.
  • when the marker 112a is moved to the right side, the display weight for the image adjacent on the deep side increases.
  • Three or more images may be superimposed and displayed. Of course, if too many images are superimposed, the image content becomes too complex, so it is desirable to overlay a small number of images. Such superposition display makes it easy to obtain depth information.
  • the button row 115 includes a plurality of virtual buttons 116, 118, 120, 121, 122, 126.
  • the button 116 is a display element operated when performing image zoom (enlargement or reduction).
  • the button 118 is a display element that is operated when the pen tool is used. When the button 118 is turned on, the shape of the cursor is changed to a pen shape, and a film pixel can be added using the cursor shape. It is also possible to change the pen size.
  • the button 120 is a display element that is operated when using an eraser. When the button 120 is turned on, the shape of the cursor is changed to an eraser shape, and the film pixel can be deleted using the cursor shape. It is also possible to change the size of the eraser.
  • the button 121 is a display element that is operated when painting manually. If any area is designated after the button 121 is turned on, the area is filled. It is also possible to select an arbitrary function from a plurality of functions prepared for painting (or labeling) by operating the button 121.
  • the color palette 124 is displayed. A color selected from the color palette is given to the painted area, whereby the area is colored with the selected color.
  • Each color is associated with an object number. If the same color, that is, the same object number is given to a plurality of regions across the layers, a three-dimensional lumen region in a specific cell is defined by these regions.
  • the button 126 is a button for black and white reversal. When it is operated, a portion displayed in black in the displayed image is displayed in white, and conversely, a portion displayed in white is displayed in black.
  • the labeling process is executed manually or automatically.
  • one or more labeling candidates are automatically specified based on reference information described later (or based on a plurality of feature amounts including the reference information).
  • a labeling target is automatically specified based on reference information described later (or based on a plurality of feature amounts including the reference information). That is, in the embodiment, the reference information is calculated based on the film image and the haze image at a certain depth, and using the reference information, a labeling candidate or a labeling target is automatically specified in the film image at another depth.
  • FIG. 6 shows a temporary membrane image 180 before correction and a temporary membrane image 182 after correction.
  • the temporary film image 180 before correction includes film break portions 184 and 188.
  • in the corrected temporary membrane image 182, as shown by reference numerals 186 and 190, the film interruption portions are repaired.
  • the temporary membrane image 180 before correction also includes non-membrane portions 192 and 196.
  • in the corrected temporary membrane image 182, those non-membrane portions have disappeared. According to the corrected mask image 182, it is possible to perform labeling accurately and, consequently, to improve the quality of the three-dimensional labeling data.
  • FIG. 7 schematically shows exaggerated changes in the observation depth range and changes in the SEM image caused by changes in the acceleration voltage.
  • Reference numerals 230, 232, and 234 indicate three different observation conditions.
  • the observation condition 230 is a condition for setting the acceleration voltage to a low voltage.
  • the observation condition 232 is a condition in which the acceleration voltage is a medium voltage.
  • the observation condition 234 is a condition for increasing the acceleration voltage.
  • the upper part (A) of FIG. 7 shows the relationship between the cross section of the sample piece 236 and the electron beams 244, 246, 248.
  • Each cross section includes a membrane portion 238 that runs parallel to the Z direction and a membrane portion 240 that runs in an oblique direction with respect to the Z direction.
  • Reference numeral 242 indicates a portion (lumen, cytoplasm) surrounded by the membrane.
  • the lower part (B) of FIG. 7 shows an SEM image.
  • the object to be observed by the electron beam 244 is exclusively the surface layer of the sample section 236. For this reason, images 238a and 240a representing the surface layers of the two film portions 238 and 240 appear on the SEM image.
  • the observation condition 232 is selected, the object to be observed by the electron beam 246 extends to the intermediate layer of the sample slice 236. Therefore, on the SEM image, an image (intermediate integrated image) 238b representing the intermediate layer of the film portion 238 appears, and a deep image 240c appears in the vicinity of the image 240b for the film portion 240.
  • the object to be observed by the electron beam 248 extends to the entire sample section 236 in the illustrated example. Therefore, on the SEM image, an image (entire integrated image) 238c representing the entire film portion 238 appears, and a wide depth image 240e appears in the vicinity of the image 240d for the film portion 240.
  • the low voltage image and the high voltage image are acquired from the individual sample sections as described above. After these are processed separately, reference information for determining the connection relationship between the in-film regions in the depth direction is generated from the processing results.
  • an upper image illustrates an original image 300L as a low-voltage image, and a film image 302 generated based on the original image 300L is illustrated.
  • an original image 300H as a high voltage image is illustrated in the lower part, and a haze image 304 generated based on the original image 300H is illustrated.
  • the original image 300H includes a clearer image component than the original image 300L.
  • the haze image 304 includes a haze area as a high luminance portion.
  • a superimposed image 306 is shown on the right side of FIG. 8 as an image for understanding the spatial relationship between the membrane image 302 and the haze image 304.
  • the superimposed image 306 includes a film image 302 as a background image, and a haze image 304 superimposed thereon.
  • the superimposed image 306 is not actually generated, and the film image 302 and the haze image 304 are processed separately.
  • FIG. 9 schematically shows the action of the haze detector.
  • the original image 330 is a high voltage image, on which the filter 332 is raster scanned.
  • the center of the filter 332 is the target pixel 334.
  • as shown by the enlarged filter 332A, it is composed of 3 × 3 pixels, and it is determined whether or not the target pixel 334A is a haze pixel using the filter 332A.
  • when the average luminance of the nine pixels included in the filter 332 is equal to or greater than a1 and the variance of the luminance of the nine pixels is equal to or less than a2, 1 is written as the haze flag in the haze management table 342; specifically, 1 is written in the cell 346 corresponding to the target pixel.
  • otherwise, as indicated by reference numeral 340, 0 is written as the haze flag in the haze management table 342; specifically, 0 is written in the cell 346 corresponding to the target pixel.
  • the above processing is executed at each movement position while moving the filter 332. As a result, values are written in all the cells of the haze management table 342, and at that point the haze image is completed.
  • the haze area is a relatively high brightness area, and the change in brightness is relatively small in the haze area.
  • a haze region is detected using such features.
  • An area condition may be added to the above conditions to extract a haze pixel group having a certain area or more.
  • the filter 332 described above is only one method for extracting a haze image component, and a haze image component may be extracted by another method.
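  • the following numpy/scipy sketch illustrates this local mean/variance test; the thresholds a1 and a2 are application-specific assumptions, and border handling is simplified relative to the raster-scanned filter 332:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def detect_haze(image: np.ndarray, a1: float, a2: float, size: int = 3) -> np.ndarray:
    """Flag a pixel as haze when the local mean luminance is >= a1 and the local
    luminance variance is <= a2, over a size x size window (3 x 3 above)."""
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, size=size)
    local_sq_mean = uniform_filter(img * img, size=size)
    local_var = local_sq_mean - local_mean ** 2
    return ((local_mean >= a1) & (local_var <= a2)).astype(np.uint8)  # haze image / table

# An area condition could then be applied (e.g. with scipy.ndimage.label) to keep
# only haze pixel groups of at least a certain size.
```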
  • FIGS. 10 and 11 show a first example of the reference information calculation method. This method is executed in the reference vector calculation unit in the labeling processing unit shown in FIG. 2. The process shown in FIG. 10 is executed for each n-th image set (film image and haze image) and for each haze area. n is a depth number.
  • a haze region M is specified.
  • a superimposed image 250 is shown for understanding the embodiment.
  • the center of gravity O1 is calculated as a representative point of the haze region M. Instead of the center of gravity O1, a center point or the like may be calculated.
  • the in-film region including the center of gravity O1 is specified.
  • the outer edge of the in-membrane region is indicated by the symbol U.
  • the center of gravity O2 is calculated as a representative point of the in-film region. A center point or the like may be calculated instead of the center of gravity O2.
  • in S18, as shown in the center of the figure, a straight line L passing through the two centers of gravity O1 and O2 is defined.
  • the intersection A with the outer edge U is specified on the straight line L.
  • the intersection point A is specified as the intersection point on the side closer to the haze region M.
  • an ellipse Q0 that surrounds and circumscribes the haze region M is defined.
  • the center of the ellipse Q0 is the center of gravity O1 of the haze region M.
  • the generation of the ellipse Q0 makes it easier to handle the haze region M in calculation.
  • the ellipse Q0 is expanded until the ellipse Q0 touches the outer edge U.
  • An ellipse at the time of contact with the outer edge U is an ellipse Q1.
  • the ellipse Q0 and the ellipse Q1 are similar. That is, in the expansion process, the major axis length and the minor axis length of the ellipse Q0 are increased at the same ratio.
  • the intersection farther from the intersection A is specified; that is the intersection B.
  • a reference vector V is defined as a vector from the intersection A to the intersection B.
  • the same process as described above is executed for each haze area.
  • the above processing may be applied to the largest haze area.
  • the above processing is not applied to an in-film region that does not include a haze region.
  • the reference vector is added to one or a plurality of feature amounts for determining the connection relationship only in the in-film region having the haze.
  • the reference vector indicates the feature of the haze area, that is, the deep image of the film.
  • the base point of the reference vector indicates the end of the film, and the direction of the reference vector indicates the direction in which the film is traveling.
  • the magnitude of the reference vector indicates the magnitude of the film tilt angle. Therefore, it is possible to support the 3D labeling process using such a reference vector or to automate it.
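  • a simplified sketch of this calculation is shown below; it computes the centroids O1 and O2 and the boundary crossings of the straight line L, and approximates the intersection B by the far crossing of that line instead of by the ellipse-expansion step of the embodiment (masks are assumed to be 2-D boolean numpy arrays with the centroids lying inside the region):

```python
import numpy as np

def centroid(mask: np.ndarray) -> np.ndarray:
    """Centre of gravity (row, col) of a binary mask."""
    return np.argwhere(mask).mean(axis=0)

def boundary_exit(region: np.ndarray, start: np.ndarray, direction: np.ndarray,
                  step: float = 0.25) -> np.ndarray:
    """Walk from 'start' along 'direction' and return the last point still inside
    the region, i.e. where the line meets the outer edge U."""
    d = direction / np.linalg.norm(direction)
    p = start.astype(np.float64)
    while True:
        q = p + step * d
        r, c = int(round(q[0])), int(round(q[1]))
        if (r < 0 or c < 0 or r >= region.shape[0] or c >= region.shape[1]
                or not region[r, c]):
            return p
        p = q

def reference_vector(region: np.ndarray, haze: np.ndarray):
    """Simplified reference vector: base point A (boundary crossing on the haze
    side) plus a vector pointing to the crossing on the opposite side, used here
    as a surrogate for intersection B."""
    o1 = centroid(haze)      # representative point of the haze region M
    o2 = centroid(region)    # representative point of the in-membrane region
    a = boundary_exit(region, o2, o1 - o2)   # intersection A (haze side)
    b = boundary_exit(region, o2, o2 - o1)   # surrogate for intersection B
    return a, b - a          # base point and reference vector V
```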
  • the following shows a first example of the connection relationship specifying method. This method is executed in the labeling processing unit.
  • one reference vector is calculated for the nth depth based on the film image and the haze image of the nth depth.
  • the connection relation is specified for each reference vector.
  • the following processing may be applied in parallel to a plurality of film images existing on the deeper side and a plurality of film images existing on the shallower side of the n-th depth film image (reference film image).
  • the n-th depth reference vector calculated based on the n-th depth film image (and the haze image) is projected onto the (n+i)-th (or (n-i)-th) depth film image.
  • i is an integer of 1 or more.
  • FIG. 13 shows a film image 252 of the (n+i)-th (or (n-i)-th) depth.
  • the reference vector projected there is denoted by reference numeral V1.
  • n is a depth number and is an integer.
  • the film image 252 includes a plurality of in-film regions Ra, Rb, Rc, and Rd.
  • a dividing line P that is orthogonal to the projected reference vector V1 and passes through the base point of the reference vector V1 is defined.
  • the membrane image 252 is divided into two portions 252A and 252B by the dividing line P.
  • the in-film region where the center of gravity exists in the portion 252A on the side where the reference vector faces is the candidate region.
  • of the centroids Oa, Ob, Oc, and Od of the plurality of in-film regions Ra, Rb, Rc, and Rd, those belonging to the portion 252A are the centroids Oa and Ob.
  • In-film regions Ra and Rb including the centroids Oa and Ob are set as candidate regions.
  • the in-film region specified on the n-th depth film image is projected onto the (n+i)-th (or (n-i)-th) depth film image 252.
  • a label (100) has been given to the in-film region of the projection source.
  • among the candidate regions, the in-film region Rb that produces the maximum overlap area with the projected region is selected as the in-film region having a connection relationship; that is, the same label (100) is given to it.
  • the above processing is executed at each depth and for each reference vector.
  • in determining the connection relationship, the distance between centers of gravity, the distance between the tip of the reference vector and the center of gravity, the closeness of the area size, the closeness of the form, and the like may also be used, or a combination thereof may be used.
  • when the automatic labeling process is executed, the execution result is confirmed by the user.
  • a plurality of candidate areas specified by the above process are displayed. In that case, the most probable candidate area is displayed with a hue different from the others. The magnitude of the connection possibility may also be expressed by differences in hue.
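  • a minimal sketch of this selection step, assuming binary numpy masks and using scipy connected-component labeling for the in-film regions (background handling and the additional feature amounts mentioned above are omitted):

```python
import numpy as np
from scipy.ndimage import label as cc_label

def select_connected_region(source_region: np.ndarray, base_point: np.ndarray,
                            ref_vec: np.ndarray, next_membrane: np.ndarray):
    """Select, in the membrane image of an adjacent depth, the in-membrane region
    connected to 'source_region' (the projected region of the reference depth).
    'base_point'/'ref_vec' are the projected reference vector and its base point."""
    regions, n_regions = cc_label(next_membrane == 0)   # in-membrane regions Ra, Rb, ...
    best_label, best_overlap = None, 0
    for lbl in range(1, n_regions + 1):
        region = regions == lbl
        c = np.argwhere(region).mean(axis=0)            # centroid of the candidate
        if np.dot(c - base_point, ref_vec) <= 0:        # dividing line P test:
            continue                                    # centroid on the wrong side
        overlap = np.logical_and(region, source_region).sum()
        if overlap > best_overlap:
            best_label, best_overlap = lbl, overlap
    return (regions == best_label) if best_label is not None else None
```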
  • FIGS. 14 and 15 show a second example of the reference information calculation method.
  • the method is executed in the reference area calculation unit in the labeling processing unit shown in FIG.
  • the process shown in FIG. 14 is executed for each n-th image set (film image and haze image) and for each reference region.
  • the steps and elements shown in FIGS. 10 and 11 are denoted by the same step numbers or the same reference numerals, and description thereof is omitted.
  • a circumscribed circle Q2 that surrounds the haze region M and touches it is generated.
  • a circumscribed ellipse may be generated.
  • the center of the circumscribed circle Q2 coincides with the center of gravity of the haze region M.
  • in S24A, it is determined whether or not the circumscribed circle Q2 includes the intersection A. If it does, the process proceeds to S28A; if not, the process proceeds to S26A.
  • the circumscribed circle Q2 is expanded until the circumscribed circle Q2 includes the intersection A.
  • a circumscribed circle in a state where the intersection point A is located on the edge of the circumscribed circle Q2 is indicated by a symbol Q3.
  • the circumscribed circle Q3 is subtracted from the in-film region surrounded by the outer edge U, whereby the reference region E shown on the left side of FIG. 15 is generated.
  • the reference area may be generated by simply subtracting the haze area from the in-film area.
  • the reference region E is assumed to be a portion extending in the depth direction in the in-film region, and is used as a representative region or a main region.
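  • a short numpy sketch of this reference-region construction, assuming 2-D boolean masks and the intersection A given as (row, column) coordinates:

```python
import numpy as np

def reference_region(region: np.ndarray, haze: np.ndarray,
                     intersection_a: np.ndarray) -> np.ndarray:
    """Reference region E: the in-membrane region minus a circumscribed circle of
    the haze region, grown if necessary until it contains intersection A."""
    pts = np.argwhere(haze)
    center = pts.mean(axis=0)                                   # centroid of haze region M
    radius = np.linalg.norm(pts - center, axis=1).max()         # circumscribed circle Q2
    radius = max(radius, np.linalg.norm(np.asarray(intersection_a) - center))  # grown circle Q3
    rr, cc = np.indices(region.shape)
    inside = (rr - center[0]) ** 2 + (cc - center[1]) ** 2 <= radius ** 2
    return np.logical_and(region, ~inside)                      # E = in-membrane region minus Q3
```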
  • the following shows a second example of the connection relationship specifying method. This method is also executed in the labeling processing unit.
  • one reference region is calculated for the nth depth based on the film image and the haze image of the nth depth.
  • the connection relation is specified for each reference area.
  • the following processing may be applied to a plurality of film images existing on the deeper side and a plurality of film images existing on the shallower side from the nth depth film image as a reference.
  • in FIG. 17, the elements shown in FIG. 15 are denoted by the same reference numerals, and the description thereof is omitted.
  • the n-th depth reference area calculated based on the n-th depth film image (and the haze image) is projected onto the (n+i)-th (or (n-i)-th) depth film image.
  • the reference area projected there is indicated by E1.
  • the membrane image 256 includes a plurality of intramembrane regions Ra, Rb, Rc, Rd.
  • the overlapping area is calculated between the projected reference area E1 and each in-film area Ra, Rb, Rc, Rd.
  • the in-film region Rb having the maximum overlap area is selected as the in-film region having a connection relationship. That is, the same label (100) is given to it. In determining the connection relationship, one or more other feature amounts may also be considered.
  • the above processing is executed at each depth and for each reference area.
  • the connection relationship between the in-film regions in the depth direction is determined based on one or more predetermined feature values.
  • a plurality of candidate areas and the most probable candidate areas may be displayed.
  • the reference vector and the reference region are calculated based on the in-film region and the haze region, but other reference information may be calculated. For example, the deviation amount of the haze area relative to the in-film area, the area ratio between the in-film area and the haze area, and the like may be calculated.
  • FIG. 18 schematically shows a three-dimensional image 258 generated based on the labeling processing result.
  • the three-dimensional image 258 is generated by, for example, a surface rendering method or a volume rendering method (a surface-mesh sketch follows this list).
  • the individual cell images 262a, 262b, 262c are expressed with different hues, for example.
  • the connection relationship may be determined based on three or more images having different observation conditions. Instead of changing the acceleration voltage, other conditions such as the irradiation current may be changed. In determining the connection relationship, both the reference vector and the reference region may be used. In general, various feature quantities can be used for determining a connection relationship; examples include texture, Hu moments, shape similarity, and perimeter, in addition to those described above. The above processing may be applied to membranes other than cell membranes.
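The following is a minimal Python/NumPy sketch of the circumscribed-circle construction described above, assuming the in-film region and the haze region are available as boolean masks on the same image grid and the intersection A as a (row, col) coordinate; the function name, the mask representation, and the centroid-centred circle are illustrative assumptions, not taken from the source.

```python
import numpy as np

def reference_region(in_film_mask, haze_mask, intersection_a):
    """Sketch of the reference-region construction: remove from the in-film
    region a circle centred on the centre of gravity of the haze region M,
    expanded if necessary until it reaches the intersection point A.
    Assumes haze_mask contains at least one true pixel."""
    ys, xs = np.nonzero(haze_mask)
    cy, cx = ys.mean(), xs.mean()                      # centre of gravity of the haze region M
    radius = np.hypot(ys - cy, xs - cx).max()          # circumscribed circle Q2 centred on M
    dist_a = np.hypot(intersection_a[0] - cy, intersection_a[1] - cx)
    radius = max(radius, dist_a)                       # expand Q2 until it contains A (circle Q3)
    rr, cc = np.indices(in_film_mask.shape)
    circle = np.hypot(rr - cy, cc - cx) <= radius      # filled disc for Q3
    return in_film_mask & ~circle                      # reference region E
```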
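The overlap rule for connecting in-film regions across depths can be sketched as below, again with NumPy. Here the "projection" of the reference region onto the adjacent slice is simply the reuse of the same pixel coordinates, and neighbour_labels is assumed to be an integer label image of the (n+i)th (or (n-i)th) slice with 0 as background; these assumptions and the function name are not from the source.

```python
import numpy as np
from typing import Optional

def connected_region(reference_mask, neighbour_labels) -> Optional[int]:
    """Return the label of the in-film region on the adjacent slice that has
    the largest overlap area with the projected reference region E1, or None
    if E1 overlaps no labelled region."""
    under_e1 = neighbour_labels[reference_mask]        # labels covered by the projected E1
    under_e1 = under_e1[under_e1 > 0]                  # drop background pixels
    if under_e1.size == 0:
        return None
    labels, areas = np.unique(under_e1, return_counts=True)
    return int(labels[np.argmax(areas)])               # e.g. Rb, which then receives the same label
```

A caller would then assign the label of the current reference region (for example 100) to the returned region and repeat this at each depth and for each reference region.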
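Where several feature quantities (centre-of-gravity distance, closeness of area size, and so on) are combined, one possible toy scoring function is sketched below; the particular combination and weighting are arbitrary illustrations, since the source only states that such features may be used individually or in combination.

```python
import numpy as np

def connection_score(region_a, region_b):
    """Toy score for the likelihood that two in-film regions are connected,
    combining centre-of-gravity distance and closeness of area size.
    Both inputs are non-empty boolean masks of the same image size."""
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([ys.mean(), xs.mean()])
    dist = np.linalg.norm(centroid(region_a) - centroid(region_b))
    area_a, area_b = region_a.sum(), region_b.sum()
    closeness = min(area_a, area_b) / max(area_a, area_b)   # 1.0 when the areas are equal
    return closeness / (1.0 + dist)                          # larger score = more likely connected
```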
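Finally, a sketch of the surface-rendering step, assuming the labeling result is available as a 3-D integer label volume and that scikit-image is installed; building one triangle mesh per labelled cell is shown, while colouring each mesh with a distinct hue is left to the viewer (e.g. matplotlib or VTK). The function name is illustrative, not from the source.

```python
import numpy as np
from skimage import measure

def cell_surfaces(label_volume):
    """Extract one triangle mesh per labelled cell from a 3-D label volume
    using marching cubes; background voxels are assumed to be labelled 0."""
    meshes = {}
    for label in np.unique(label_volume):
        if label == 0:
            continue                                   # skip background
        binary = (label_volume == label).astype(np.float32)
        verts, faces, _normals, _values = measure.marching_cubes(binary, level=0.5)
        meshes[int(label)] = (verts, faces)            # one mesh per cell, e.g. 262a, 262b, 262c
    return meshes
```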

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Hematology (AREA)
  • Molecular Biology (AREA)
  • Urology & Nephrology (AREA)
  • Food Science & Technology (AREA)
  • Medicinal Chemistry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

According to the present invention, a membrane image (64) is generated on the basis of an original image (206) serving as a low-voltage image. A deep image of a cell membrane is extracted from an original image (208) serving as a high-voltage image, thereby generating a haze image (209). A labeling processing unit (202) generates reference information (a reference vector and a reference region) to be used in a three-dimensional labeling process, on the basis of an in-membrane region included in the membrane image (64) and a haze region included in the haze image (209).
PCT/JP2019/019742 2018-05-24 2019-05-17 Biological tissue image processing device and method WO2019225507A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018099389A JP7181003B2 (ja) 2018-05-24 2018-05-24 Biological tissue image processing device and method
JP2018-099389 2018-05-24

Publications (1)

Publication Number Publication Date
WO2019225507A1 true WO2019225507A1 (fr) 2019-11-28

Family

ID=68616939

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/019742 WO2019225507A1 (fr) 2019-05-17 2018-05-24 Biological tissue image processing device and method

Country Status (2)

Country Link
JP (1) JP7181003B2 (fr)
WO (1) WO2019225507A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05290786A (ja) * 1992-04-10 1993-11-05 Hitachi Ltd Scanning sample image display method and apparatus, and sample used therefor
JP2004212355A (ja) * 2003-01-09 2004-07-29 Hitachi Ltd Bio-electron microscope and sample observation method
JP2007263932A (ja) * 2006-03-30 2007-10-11 Kose Corp Method for acquiring an image of the inside of an applied cosmetic film
US20100183217A1 (en) * 2007-04-24 2010-07-22 Seung H Sebastian Method and apparatus for image processing
JP2015149169A (ja) * 2014-02-06 2015-08-20 Hitachi High-Technologies Corporation Charged particle beam device, image generation method, and observation system
WO2017221592A1 (fr) * 2016-06-23 2017-12-28 Konica Minolta, Inc. Image processing device, method, and program


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIGONG HAN ET AL.: "Learning Generative Models of Tissue Organization with Supervised GANs", 2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION, 12 March 2018 (2018-03-12), pages 682 - 690, XP033337732, DOI: 10.1109/WACV.2018.00080 *
UCHIHASHI KENSHI ET AL.: "Neuronal cell membrane segmentation from electron microscopy images using an adversarial generative model", THE 31ST ANNUAL CONFERENCE OF THE JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE, vol. 4K1, no. 4in2, 2017, pages 1 - 4 *

Also Published As

Publication number Publication date
JP7181003B2 (ja) 2022-11-30
JP2019204695A (ja) 2019-11-28

Similar Documents

Publication Publication Date Title
WO2019225505A1 (fr) Biological tissue image processing system and machine learning method
Klumpe et al. A modular platform for automated cryo-FIB workflows
US9177378B2 (en) Updating landmarks to improve coregistration as regions of interest are corrected
US20220130027A1 (en) Structure Estimation System and Structure Estimation Program
CN109584156A (zh) 显微序列图像拼接方法及装置
CN110736747B (zh) 一种细胞液基涂片镜下定位的方法及系统
CN103649992B (zh) 用于自动调节数字病理图像的焦平面的方法
US9036869B2 (en) Multi-surface optical 3D microscope
Bartesaghi et al. An energy-based three-dimensional segmentation approach for the quantitative interpretation of electron tomograms
CN111932673A (zh) 一种基于三维重建的物体空间数据增广方法及系统
Waggoner et al. 3D materials image segmentation by 2D propagation: A graph-cut approach considering homomorphism
CN114764189A (zh) 用于评估图像处理结果的显微镜系统和方法
JP6947289B2 (ja) 細胞観察装置
JP6345001B2 (ja) 画像処理方法および画像処理装置
CN113744195B (zh) 一种基于深度学习的hRPE细胞微管自动检测方法
US8837795B2 (en) Microscopy of several samples using optical microscopy and particle beam microscopy
WO2019225507A1 (fr) Biological tissue image processing device and method
JP7181000B2 (ja) Biological tissue image processing device and method
Xu et al. HALCON application for shape-based matching
WO2019225506A1 (fr) Biological tissue image processing system and machine learning method
Pan Processing and feature analysis of atomic force microscopy images
EP2383767A1 (fr) Method of imaging an object
US20230245360A1 (en) Microscopy System and Method for Editing Overview Images
KR102680501B1 (ko) Image evaluation device and method
Najgebauer et al. Interest point localization based on edge detection according to gestalt laws

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19807326

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19807326

Country of ref document: EP

Kind code of ref document: A1