WO2024063119A1 - Image translation device, diagnostic imaging system, image translation method, control program, and recording medium - Google Patents

Image translation device, diagnostic imaging system, image translation method, control program, and recording medium

Info

Publication number
WO2024063119A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
translated
tissue
group
generator
Prior art date
Application number
PCT/JP2023/034225
Other languages
English (en)
Japanese (ja)
Inventor
宏彦 新岡
淳哉 佐藤
哲郎 髙松
辰也 松本
龍太 中尾
浩幸 田中
正司 前原
重行 深井
Original Assignee
国立大学法人大阪大学
京都府公立大学法人
Priority date
Filing date
Publication date
Application filed by 国立大学法人大阪大学 and 京都府公立大学法人
Publication of WO2024063119A1

Classifications

    • C CHEMISTRY; METALLURGY
    • C12 BIOCHEMISTRY; BEER; SPIRITS; WINE; VINEGAR; MICROBIOLOGY; ENZYMOLOGY; MUTATION OR GENETIC ENGINEERING
    • C12M APPARATUS FOR ENZYMOLOGY OR MICROBIOLOGY; APPARATUS FOR CULTURING MICROORGANISMS FOR PRODUCING BIOMASS, FOR GROWING CELLS OR FOR OBTAINING FERMENTATION OR METABOLIC PRODUCTS, i.e. BIOREACTORS OR FERMENTERS
    • C12M1/00 Apparatus for enzymology or microbiology
    • C12M1/34 Measuring or testing with condition measuring or sensing means, e.g. colony counters
    • C CHEMISTRY; METALLURGY
    • C12 BIOCHEMISTRY; BEER; SPIRITS; WINE; VINEGAR; MICROBIOLOGY; ENZYMOLOGY; MUTATION OR GENETIC ENGINEERING
    • C12Q MEASURING OR TESTING PROCESSES INVOLVING ENZYMES, NUCLEIC ACIDS OR MICROORGANISMS; COMPOSITIONS OR TEST PAPERS THEREFOR; PROCESSES OF PREPARING SUCH COMPOSITIONS; CONDITION-RESPONSIVE CONTROL IN MICROBIOLOGICAL OR ENZYMOLOGICAL PROCESSES
    • C12Q1/00 Measuring or testing processes involving enzymes, nucleic acids or microorganisms; Compositions therefor; Processes of preparing such compositions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • the present invention relates to an image translation device that converts images of tissue, an image translation method, and an image diagnostic system equipped with an image translation device.
  • Patent Document 1 discloses a medical image processing device that uses a high-quality image engine (artificial intelligence) to generate a high-quality image from an acquired medical image.
  • Non-Patent Document 1 discloses a technique for generating a virtual HE-stained image from an image of unstained lung tissue using conditional GAN (CGAN).
  • Non-Patent Document 2 discloses a technique that uses deep learning to generate, from an HE-stained image, a virtual tissue image as if obtained by a staining method different from HE staining. The staining methods specifically described in Non-Patent Document 2 are Masson's trichrome staining, PAS (periodic acid-Schiff) staining, and Jones silver staining.
  • Bayramoglu N et al., "Towards Virtual H&E Staining of Hyperspectral Lung Histology Images Using Conditional Generative Adversarial Networks", IEEE Conference Proceedings, Vol. 2017, pp. 64-71, 2017.
  • de Haan K et al., "Deep learning-based transformation of H&E stained tissues into special stains", Nature Communications 12:4884, doi.org/10.1038/s41467-021-25221-2, 2021.
  • to images acquired using such new processing and imaging techniques, however, existing image analysis models cannot be applied. Therefore, a new image analysis model must be created to analyze the newly acquired images.
  • existing image diagnostic models applicable to pathological images are created by learning from images of tissues related to various diseases (including diseases with few cases) that have been collected (accumulated) over a long period of time.
  • such existing image diagnostic models can be applied to images of tissues that have undergone traditional, typical embedding and staining processes, but they cannot be applied to images obtained using new processing methods and new imaging techniques.
  • an image translation device according to one aspect of the present invention includes: an image translation unit including a first generator and a second generator that have learned a relationship between color-related features of a first image group, obtained by imaging tissue subjected to a first process that includes an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and color-related features of a second image group, obtained by imaging tissue subjected to a second process that includes neither the embedding process nor the slicing; and a first input unit that inputs either a first target image belonging to the first image group or a second target image belonging to the second image group to the image translation unit.
  • the first generator generates, from the first target image, a first translated image having the color-related features of the second image group, and the second generator generates, from the second target image, a second translated image having the color-related features of the first image group.
  • an image diagnostic system according to another aspect comprises the image translation device described above and an image diagnostic device. The image diagnostic device includes: a diagnosis unit comprising at least one of (i) a first neural network that has learned the correspondence between a first training image group, obtained by imaging tissue subjected to the first process, and the state of the tissue shown in each image of that group, and (ii) a second neural network that has learned the correspondence between a second training image group, obtained by imaging tissue subjected to the second process, and the state of the tissue shown in each image of that group; a second input unit that inputs the first translated image or the second translated image to the diagnosis unit; and a diagnostic information output unit that outputs diagnostic information, output from the diagnosis unit, regarding the state of the tissue shown in the first translated image or the second translated image.
  • an image translation method according to another aspect uses a first generator and a second generator that have learned a relationship between color-related features of a first image group, obtained by imaging tissue subjected to a first process that includes an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and color-related features of a second image group, obtained by imaging tissue subjected to a second process that includes neither the embedding process nor the slicing. In the method, the first generator generates, from a first target image belonging to the first image group, a first translated image having the color-related features of the second image group, and the second generator generates, from a second target image belonging to the second image group, a second translated image having the color-related features of the first image group.
  • the image translation device may be realized by a computer. In this case, a control program that realizes the image translation device on a computer by causing the computer to operate as each unit (software element) of the image translation device, and a computer-readable recording medium on which that program is recorded, also fall within the scope of the present invention.
  • FIG. 1 is a block diagram showing a configuration example of an image translation device according to Embodiment 1 of the present invention.
  • FIG. 2 is a functional block diagram showing an example of the configuration of the image translation device.
  • FIG. 3 is a flowchart showing an example of the processing performed by the image translation device.
  • FIG. 4 is a diagram illustrating an example of the flow of a first process for acquiring an image of the tissue of a living body.
  • FIG. 5 is a diagram illustrating an example of the flow of a second process for acquiring an image of the tissue of a living body.
  • FIG. 6 is a diagram illustrating another example of the flow of the second process for acquiring an image of the tissue of a living body.
  • FIG. 7 is a diagram showing an example of image translation by the image translation device.
  • FIG. 8 is a functional block diagram showing another example of the configuration of the image translation device.
  • FIG. 9 is an explanatory diagram explaining the functions of the first discriminator and the second discriminator.
  • FIG. 10 is a block diagram showing a configuration example of an image diagnostic system according to Embodiment 2 of the present invention.
  • FIG. 11 is a functional block diagram showing a configuration example of the image diagnostic system.
  • FIG. 12 is a functional block diagram showing another configuration example of the image diagnostic system.
  • the image translation device 1 generates, from an image actually captured of tissue subjected to a first process, a translated image (first translated image) that resembles an image obtained by subjecting the tissue to a second process different from the first process.
  • conversely, the image translation device 1 generates, from an image actually captured of tissue subjected to the second process, a translated image (second translated image) that resembles an image obtained by subjecting the tissue to the first process.
  • here, the first process includes an embedding process of embedding the tissue in a predetermined embedding agent and slicing, while the second process includes neither embedding nor slicing.
  • tissue may include a structure formed by an aggregation of any of cells, fungi, and bacteria. That is, a tissue may be an organ of a living body, a colony of cultured cells, an aggregation of fungi, or a bacterial colony or flora.
  • Fig. 1 is a block diagram showing an example of the configuration of the image translation device 1 according to the first embodiment of the present invention.
  • Fig. 2 is a functional block diagram showing an example of the configuration of the image translation device 1.
  • the image translation device 1 is, for example, a computer and, as shown in FIG. 1, includes a processor unit 2, a hard disk 3, a memory 4, and a display unit 5.
  • the processor unit 2 reads various programs from the hard disk 3 and executes them.
  • the processor unit 2 may be, for example, at least one of a CPU and a GPU.
  • the hard disk 3 stores various programs executed by the processor unit 2. The hard disk 3 may also store various image data used by the processor unit 2 when executing those programs.
  • the memory 4 stores various data and programs used for various processes being executed by the processor unit 2.
  • the memory 4 functions as a working memory that stores a program that realizes a neural network structure loaded from the hard disk 3.
  • “memory” may refer to the main memory or the memory of the GPU.
  • the display unit 5 may be any display for displaying the various images (e.g., target images) required by the processes executed by the processor unit 2 and the various images (e.g., translated images) generated by those processes. Note that the display unit 5 is not an essential component of the image translation device 1.
  • the image translation device 1 may be configured to transmit various data to an external display device (not shown) that is communicably connected to the image translation device 1 and display the data on the display device.
  • as shown in FIG. 2, the image translation device 1 includes a control unit 20 corresponding to the processor unit 2 and memory 4 shown in FIG. 1, a storage unit 30 corresponding to the hard disk 3 shown in FIG. 1, and the display unit 5.
  • the control unit 20 includes a first input unit 21, an image translation unit 22, and a translated image output unit 23.
  • the first input unit 21 inputs, to the image translation unit 22 described later, either a first target image belonging to a first image group, obtained by imaging tissue subjected to a first process, or a second target image belonging to a second image group, obtained by imaging tissue subjected to a second process different from the first process.
  • the image translation unit 22 includes a first generator 221 and a second generator 222 that have learned the relationship between the color features of the first image group and the color features of the second image group.
  • the first generator 221 and the second generator 222 are neural networks (generative models) that extract features of an input image and generate a new image having the extracted features.
  • a known deep learning algorithm, such as a generative adversarial network (GAN), may be applied to the learning of the first generator 221 and the second generator 222.
  • however, the learning of the first generator 221 and the second generator 222 is not limited to learning that applies a generative adversarial network.
  • for example, learning may be performed using either the first image group or the second image group as input data, with the training data being images generated by converting that image group with an image-generating AI (e.g., DALL·E 2).
  • the learning process of the first generator 221 and the second generator 222 may be executed on a computer different from the image translation device 1. In this case, by installing the trained first generator 221 and second generator 222 together with a predetermined program on any computer, that computer can be made to function as the image translation device 1.
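  • As a non-limiting illustration, the following is a minimal sketch (in PyTorch, with hypothetical names; the disclosure does not fix a network architecture) of such a pair of generators: each is an encoder-decoder that maps an input image to an output image of the same size, altering color-related features while preserving tissue structure.

```python
# Minimal sketch in PyTorch; the architecture is illustrative, not the patented one.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder that maps an image to an image of the same size,
    changing color-related features while preserving tissue structure."""
    def __init__(self, channels: int = 3, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 4, stride=2, padding=1),            # encode
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1),  # decode
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, channels, 4, stride=2, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching inputs normalized to that range
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# first_generator: first image group -> color features of the second group
# second_generator: second image group -> color features of the first group
first_generator = Generator()
second_generator = Generator()
```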
  • the first generator 221 generates, from the first target image, a first translated image that has color-related features of the second image group without significantly changing the structure of the tissue shown in the first target image.
  • the second generator 222 generates, from the second target image, a second translated image having color-related characteristics of the first image group without significantly changing the structure of the tissue depicted in the second target image.
  • the translated image output unit 23 acquires and outputs the first translated image or the second translated image generated by the image translation unit 22.
  • the translated image output unit 23 may output the first translated image or the second translated image to the display unit 5.
  • the storage unit 30 may store a target image 31 and a translated image 32.
  • the target image 31 may be a first target image belonging to the first image group or a second target image belonging to the second image group.
  • the translated image 32 is a translated image generated by the image translation unit 22.
  • FIG. 3 is a flowchart showing an example of processing performed by the image translation device 1.
  • the first input unit 21 inputs either a first target image belonging to the first image group or a second target image belonging to the second image group into the neural network (the first generator 221 or the second generator 222) (step S1: input step).
  • the translated image output unit 23 outputs the first translated image generated from the first target image, or the second translated image generated from the second target image, by the image translation unit 22 (step S2: translated-image output step).
  • in this way, the image translation device 1 can translate an image actually captured of tissue subjected to the first process (a first target image) into a translated image that looks as if the tissue had been imaged after the second process. Likewise, the image translation device 1 can translate an image actually captured of tissue subjected to the second process (a second target image) into a translated image that looks as if the tissue had been imaged after the first process.
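  • The two-step flow of FIG. 3 could be sketched as follows (a hypothetical helper reusing the generator objects from the sketch above): step S1 supplies a target image to one of the generators, and step S2 returns the resulting translated image.

```python
# Sketch of the FIG. 3 flow (hypothetical helper; generators from the sketch above).
import torch

def translate(target_image: torch.Tensor, direction: str) -> torch.Tensor:
    """Step S1 (input step) followed by step S2 (translated-image output step)."""
    with torch.no_grad():
        if direction == "first_to_second":    # first target image -> first translated image
            return first_generator(target_image)
        if direction == "second_to_first":    # second target image -> second translated image
            return second_generator(target_image)
    raise ValueError(f"unknown direction: {direction}")
```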
  • FIG. 4 is a diagram illustrating an example of the flow of the first process for acquiring an image of a biological tissue.
  • FIGS. 5 and 6 are diagrams illustrating an example of the flow of the second process for acquiring an image of a tissue.
  • the first process shown in FIG. 4 is a conventional method of preparing and imaging a pathological specimen.
  • tissue is first collected from a living body (step S11). Subsequently, the collected tissue is fixed with a fixative such as formalin, and then embedded using an embedding agent such as paraffin and resin (step S12). Next, the embedded tissue is sliced (step S13), and the sliced tissue is stained using a predetermined staining method (step S14).
  • slicing is a process performed using a microtome or the like, and by slicing, tissue is typically cut into sections with a thickness of about several μm to 10 μm.
  • An example of a predetermined staining method is HE staining.
  • HE staining is one of the methods used to stain collected tissue pieces, and uses a combination of hematoxylin staining and eosin staining. Hematoxylin stains the chromatin in the cell nucleus and the ribosomes in the cytoplasm blue-purple. On the other hand, eosin stains cytoplasmic components and extracellular matrix red. Next, the stained tissue is imaged using a bright field microscope or the like (step S15).
  • the second process shown in FIG. 5 does not include embedding and slicing.
  • tissue is first collected from a living body (step S21). Subsequently, the tissue or tissue piece is imaged (step S22).
  • for this imaging, microscopes capable of obtaining images of tissue that has not been subjected to staining treatment, applicable to image diagnosis and the like, can be used.
  • tissue fragmentation is a process of cutting tissue into pieces with a thickness of typically 1 mm to several mm, and differs from the slicing included in the first process.
  • low-temperature gas may be sprayed onto the surface of the tissue to temporarily fix the tissue so that the tissue is not deformed or crushed by the blade that has entered the tissue.
  • note that the tissue fragmentation is not essential. Further, when such a microscope is used for imaging in the manner of an endoscope, the tissue collection in step S21 and the tissue fragmentation are also not essential.
  • the second process shown in FIG. 6 likewise includes neither the embedding process nor slicing.
  • tissue is first collected from a living body (step S31). Subsequently, the collected tissue is stained using a predetermined staining method (step S32). The stained tissue is then imaged (step S33).
  • for this imaging, a deep ultraviolet excitation fluorescence microscope or the like may be used, which is capable of obtaining an image of tissue that has not been subjected to embedding processing and is applicable to image diagnosis and the like. Note that when a deep ultraviolet excitation fluorescence microscope is used, the staining in step S32 is not essential.
  • the image translation device 1 can thus translate an image of tissue processed using a newly developed processing method into a translated image similar to an image of a lesion-site specimen processed using a conventional processing method.
  • conversely, the image translation device 1 can translate, for example, an image of a lesion-site specimen processed using a conventional processing method into a translated image similar to an image of tissue processed using a newly developed processing method.
  • FIG. 7 is a diagram showing an example of image translation by the image translation device 1.
  • FIG. 7 shows a virtual deep ultraviolet excitation fluorescence microscope image (first translated image) translated from an HE-stained image (first target image) of a tissue section actually observed after HE staining, and a virtual HE-stained image (second translated image) translated from a deep ultraviolet excitation fluorescence microscope image (second target image) of tissue actually observed with the fluorescence microscope (no slicing, with staining).
  • in this way, the image translation device 1 can convert an HE-stained image of an actually observed tissue section into a virtual deep ultraviolet excitation fluorescence microscope image, and can also convert a deep ultraviolet excitation fluorescence microscope image into a virtual HE-stained image.
  • an image diagnosis model that has learned in advance from HE-stained images of cancerous cell tissue and normal cell tissue can detect cancerous and normal cell tissue with high accuracy (for example, an AUC (Area Under the Curve, an indicator of accuracy) of 0.9 or higher).
  • when tissue containing cancer and tissue not containing cancer were classified by an image diagnosis model trained on first target images, actually captured second target images were classified with an accuracy of 66.4%, whereas the corresponding second translated images were classified with an accuracy of 84.6%.
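  • The evaluation described above could be reproduced along the following lines (a hedged sketch using scikit-learn; `model`, the image arrays, and the labels are placeholders, and the 66.4%/84.6% figures come from the inventors' experiment, not from this code).

```python
# Hedged sketch: evaluate a scikit-learn-style classifier trained on the first
# image group, on raw second target images versus their second translated images.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate(model, images: np.ndarray, labels: np.ndarray) -> tuple[float, float]:
    """images: (n, H, W, C) array; labels: (n,) array of 0 (normal) / 1 (cancer)."""
    scores = model.predict_proba(images.reshape(len(images), -1))[:, 1]
    auc = roc_auc_score(labels, scores)           # AUC, the accuracy indicator cited above
    acc = accuracy_score(labels, scores >= 0.5)   # fraction of images classified correctly
    return auc, acc
```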
  • the image translation unit 22 may perform negative-positive inversion processing on the input image as preprocessing. Performing the negative-positive inversion improves the degree of completeness of the translated image generated by the image translation unit 22, as explained below.
  • in a deep ultraviolet excitation fluorescence image, the background region where no tissue appears has low brightness, whereas in an HE-stained image (a bright-field image) the background region where no tissue appears has high brightness (see FIG. 7).
  • when negative-positive inversion processing is applied to a deep ultraviolet excitation fluorescence image, the brightness of the background region after inversion becomes close to that of the background region of the virtual HE-stained image to be generated.
  • conversely, when negative-positive inversion processing is applied to an HE-stained image, the brightness of the background region after inversion becomes close to that of the background region of the virtual deep ultraviolet excitation fluorescence image to be generated.
  • Such preprocessing using domain adaptation contributes to improving the learning efficiency of the image translation model, and as a result, can improve the completeness of translated images.
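  • A minimal sketch of the negative-positive inversion (assuming 8-bit images; NumPy) is shown below. Applying it to a dark-background fluorescence image yields a bright background close to that of a bright-field HE-stained image, which is exactly the brightness alignment described above.

```python
# Minimal sketch of negative-positive inversion for 8-bit images (NumPy).
import numpy as np

def negative_positive_invert(image: np.ndarray) -> np.ndarray:
    """Swap dark and bright regions: a fluorescence-style dark background becomes a
    bright-field-style bright background, matching the target domain's background."""
    assert image.dtype == np.uint8
    return 255 - image
```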
  • in general, an image diagnosis model is sometimes created that outputs diagnostic information (estimation results) based on images of tissue subjected to a predetermined process. For example, an image diagnosis model created using, as training data, the first image group of tissue subjected to the first process can output highly reliable diagnostic information when an image having the color-related features of the first image group is input. However, if an image having the color-related features of the second image group, of tissue subjected to the second process, is input to such a model, correct diagnostic information may not be obtained. This is because the color-related features of the first image group differ from those of the second image group.
  • the image translation device 1 can generate, from a first target image having the color-related features of the first image group, a first translated image having the color-related features of the second image group, without significantly changing the structure shown in the target image, and likewise can generate a second translated image having the color-related features of the first image group from a second target image.
  • the generated first translated image can be applied to an existing image diagnosis model created using the second image group as training data, and similarly, the generated second translated image can be applied to an existing image diagnosis model created using the first image group as training data. That is, by employing the image translation device 1, an image to which an existing image diagnosis model is applicable can be generated from an image to which that model is not applicable. Therefore, there is no need to create a separate image analysis model for each process applied to the imaged tissue.
  • the embedding process and slicing are processes that require time and effort. Therefore, obtaining the second target image is easier than obtaining the first target image.
  • because the second process is a processing method with a short history, image diagnostic models created based on second target images are still few, or may be incomplete. In such a case, a translated image can be generated from the second target image using the image translation device 1, and the translated image can be applied to an image diagnostic model created based on first target images.
  • the image translation device 1 can facilitate the creation of such an image diagnosis model.
  • the image translation device 1 is capable of image translation from an image of a tissue processed using an existing method to an image of the tissue processed using a newly developed method. By using such translated images, it becomes possible to efficiently create a new image diagnosis model that outputs diagnostic information based on images of tissues processed using a newly developed method. Further, the image diagnosis model created in this manner can diagnose tissues processed by existing methods.
  • the image translation device 1 may translate an image from an image of tissue processed using a newly developed method to an image of tissue processed using an existing method.
  • the image diagnostic model created in this way can diagnose tissues processed using the newly developed method.
  • the image translation device 1 is also capable of image translation from an image of tissue in a state that appears infrequently, processed using an existing method, to an image of the tissue as if processed using a newly developed method.
  • the image diagnosis model created in this manner can diagnose tissues that have been processed using existing methods and that appear in a low frequency state.
  • the image translation device 1 may also translate an image of tissue processed using a newly developed method into an image of tissue, in a state that appears infrequently, as if processed using an existing method.
  • by using such translated images, it becomes possible to efficiently create a new image diagnosis model that outputs diagnostic information based on images of tissue, in states with a low frequency of occurrence, processed using existing methods.
  • the image diagnostic model created in this manner can diagnose tissues that are treated with a newly developed method and that have a low frequency of appearance.
  • although the translated image is not an image obtained by actually observing the tissue, the structure of the tissue is not significantly changed in it. Therefore, like an image obtained by actually observing the tissue, the translated image can be treated as a captured image of the tissue.
  • the translated images generated by the image translation device 1 can be used for learning an image diagnosis model. For example, if a translated image is generated from a first target image using the image translation device 1, the translated image can be used for learning to create an image diagnosis model based on the second target image.
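  • For example, translated images generated from first target images could be written out as a training set for a diagnosis model based on the second image group, roughly as follows (a sketch with a hypothetical file layout; `first_generator` is the illustrative object introduced earlier).

```python
# Sketch: write translated images to disk as training data (hypothetical layout).
from pathlib import Path
import torch
from torchvision.utils import save_image

def build_training_set(first_target_images: list[torch.Tensor], out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, img in enumerate(first_target_images):
        with torch.no_grad():
            translated = first_generator(img.unsqueeze(0))  # -> second-group color features
        # save_image expects values in [0, 1]; rescale the Tanh output from [-1, 1]
        save_image(translated * 0.5 + 0.5, out / f"translated_{i:05d}.png")
```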
  • the image translation unit 22 of the image translation device 1 only needs to include a first generator 221 and a second generator 222 that extract features of an input image and generate a new image having the extracted features.
  • the configuration is not limited to that shown in FIG. 2.
  • CycleGAN may be applied to implement the image translation unit 22.
  • FIG. 8 is a functional block diagram showing another example of the configuration of the image translation device 1a.
  • the image translation unit 22a may further include a first discriminator 223 and a second discriminator 224.
  • the first discriminator 223 identifies images included in the first image group versus translated images generated by the second generator, based on a first error between the color-related features of the first image group and the color-related features of the translated image generated by the second generator.
  • the second discriminator 224 identifies images included in the second image group versus translated images generated by the first generator, based on a second error between the color-related features of the second image group and the color-related features of the translated image generated by the first generator.
  • FIG. 9 is a diagram illustrating an example of processing performed by the image translation unit 22a, which includes the first discriminator 223 and the second discriminator 224.
  • the first input unit 21 also inputs the images of the first image group input to the first generator 221 to the first classifier 223.
  • the first generator 221 generates a translated image from the input image.
  • the second generator 222 further generates a translated image from the translated image generated by the first generator 221.
  • the first discriminator 223 calculates a first error between the color-related features of the translated image generated by the second generator 222 and the color-related features of the first-image-group image from which that translated image originated.
  • the first input unit 21 also inputs the images of the second image group input to the second generator 222 to the second discriminator 224.
  • the second generator 222 generates a translated image from the input image.
  • the first generator 221 further generates a translated image from the translated image generated by the second generator 222.
  • the second discriminator 224 calculates a second error between the color-related features of the translated image generated by the first generator 221 and the color-related features of the second-image-group image from which that translated image originated.
  • the image translation unit 22a generates a translated image by repeating the above processing.
  • for example, the image translation unit 22a outputs, as the first translated image or the second translated image, a translated image for which the first error and the second error are at or below a predetermined level.
  • alternatively, the image translation unit 22a may calculate a cycle consistency loss based on the first error and the second error, and output, as the first translated image or the second translated image, a translated image whose cycle consistency loss is at or below a predetermined value.
  • the image translation device 1a having such a configuration can generate and output highly accurate translated images.
  • since CycleGAN can learn the relationship between the color-related features of the first image group and those of the second image group from unpaired images, there is no need to prepare paired images of the same tissue subjected to both the first process and the second process.
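  • The cycle consistency loss described above can be sketched as follows (PyTorch; the generator objects are the hypothetical ones introduced earlier, and L1 distance is an assumption standing in for the unspecified error between color-related features).

```python
# Sketch of a CycleGAN-style cycle consistency loss (PyTorch; L1 distance is an
# assumption, not a detail fixed by the disclosure).
import torch
import torch.nn.functional as F

def cycle_consistency_loss(real_first: torch.Tensor, real_second: torch.Tensor) -> torch.Tensor:
    # first error: first-group image -> first generator -> second generator
    # should reconstruct the original image
    reconstructed_first = second_generator(first_generator(real_first))
    first_error = F.l1_loss(reconstructed_first, real_first)
    # second error: second-group image -> second generator -> first generator
    reconstructed_second = first_generator(second_generator(real_second))
    second_error = F.l1_loss(reconstructed_second, real_second)
    return first_error + second_error
```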
  • FIG. 10 is a block diagram showing a configuration example of an image diagnostic system 100 according to Embodiment 2 of the present invention.
  • the image diagnosis system 100 includes image translation devices 1 and 1a, and an image diagnosis device 7.
  • the image diagnostic device 7 is, for example, a computer communicably connected to the image translation devices 1 and 1a.
  • the image diagnostic apparatus 7 includes a processor section 71, a hard disk 73, a memory 72, and a display section 74, as shown in FIG.
  • the processor unit 71 reads various programs from the hard disk 73 and executes them.
  • the processor unit 71 may be, for example, at least one of a CPU and a GPU.
  • the hard disk 73 stores various programs executed by the processor unit 71.
  • the hard disk 73 may also store various image data used by the processor unit 71 to execute the various programs.
  • the memory 72 stores various data and various programs used for various processes being executed by the processor section 71.
  • the memory 72 functions as a working memory that stores a program loaded from the hard disk 73 that implements the neural network structure.
  • the display unit 74 may be any display for displaying images required to execute various processes executed by the processor unit 71 and diagnostic information output by the processor unit 71. Note that the display section 74 is not an essential component of the image diagnostic apparatus 7.
  • the image diagnostic device 7 may be configured to transmit diagnostic information to an external display device (not shown) or image translation device 1, 1a that is communicably connected to the image diagnostic device 7.
  • the image diagnostic device 7 includes an image diagnostic model (a first neural network 7121 described later) that estimates the state of tissue based on an image of tissue subjected to the first process shown in FIG. 4.
  • FIG. 11 is a functional block diagram showing a configuration example of the image diagnostic system 100.
  • as shown in FIG. 11, the image diagnostic device 7 includes a control unit 710 corresponding to the processor unit 71 and memory 72 shown in FIG. 10, a storage unit corresponding to the hard disk 73 shown in FIG. 10, and the display unit 74.
  • the storage unit is not illustrated in order to simplify the explanation.
  • the control unit 710 includes a second input unit 711, a diagnosis unit 712, and a diagnostic information output unit 713.
  • the second input unit 711 inputs, to the diagnosis unit 712, a second translated image, acquired from the image translation device 1 or 1a, that has the color-related features of the first image group.
  • the diagnosis unit 712 includes a first neural network 7121.
  • the first neural network 7121 is a neural network (inference model) that has learned the correspondence between a first training image group, obtained by imaging tissue subjected to the first process, and the state of the tissue shown in each image of that group. Note that a known supervised machine learning algorithm may be applied to the learning of the first neural network 7121, and the learning process may be executed on a computer different from the image diagnostic device 7. In the latter case, by installing the trained first neural network 7121 on any computer, that computer can be made to function as the image diagnostic device 7.
  • the diagnostic information output unit 713 acquires and outputs the diagnostic information output from the diagnostic unit 712. For example, the diagnostic information output unit 713 may output diagnostic information to the display unit 74.
  • in the image diagnostic system 100, the image translation devices 1 and 1a generate a second translated image from an image of tissue that has undergone a new process, or from an image captured with a new imaging technique. By inputting the second translated image to the first neural network 7121, which is an existing inference model, diagnostic information (estimation results) based on the existing inference model can be obtained.
  • in other words, the image diagnostic system 100 can, for example, generate a translated image to which existing disease-state determination criteria can be applied from an image to which those criteria cannot be applied, and then output diagnostic information based on the translated image.
  • the image translation devices 1 and 1a may have the configuration of the image diagnostic device 7.
  • the image translation devices 1 and 1a may include a diagnostic section 712 having a first neural network 7121 and a diagnostic information output section 713.
  • in this case, the image translation devices 1 and 1a generate a second translated image from an image of tissue that has undergone a new process, or from an image captured with a new imaging technique, and perform estimation on the second translated image using the first neural network 7121. By performing estimation in this manner, the image translation devices 1 and 1a can output diagnostic information (estimation results) based on the existing inference model.
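  • The combined translate-then-diagnose flow could look roughly like this (a sketch with hypothetical names; the diagnosis network stands for any model trained on the first image group).

```python
# Sketch of the translate-then-diagnose flow (hypothetical names).
import torch

def diagnose_new_image(second_target_image: torch.Tensor,
                       first_neural_network: torch.nn.Module) -> torch.Tensor:
    """Second target image -> second translated image -> diagnostic information."""
    with torch.no_grad():
        second_translated = second_generator(second_target_image)  # image translation
        return first_neural_network(second_translated)             # existing inference model
```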
  • the image diagnostic device 7a includes an image diagnostic model (a second neural network 7122 described later) that estimates the state of tissue based on an image of tissue subjected to the second process shown in FIG. 5 or FIG. 6.
  • FIG. 12 is a functional block diagram showing a configuration example of the image diagnosis system 100a.
  • the image diagnostic apparatus 7a shown in FIG. 12 includes a second neural network 7122 in the diagnostic section 712.
  • the second neural network 7122 is a neural network (inference model) that has learned the correspondence between a second training image group, obtained by imaging tissue subjected to the second process, and the state of the tissue shown in each image of that group. Note that a known supervised machine learning algorithm may be applied to the learning of the second neural network 7122.
  • an image of a tissue taken using a new method may have more features that can be read than an image taken of a tissue using an existing method. Therefore, the second neural network 7122, which is a newer inference model, may be able to output more accurate diagnostic information than the existing inference model.
  • in the image diagnostic system 100a, the image translation devices 1 and 1a generate a first translated image from an image of tissue captured in the past (for example, a pathological image). If the first translated image is input to the second neural network 7122, a new inference model that outputs diagnostic information based on images of tissue that have undergone the new process or images captured with the new imaging technique, diagnostic information (estimation results) based on the new inference model can be obtained.
  • the image diagnostic device may be configured so that the diagnosis unit 712 includes both the first neural network 7121 and the second neural network 7122. In this case, it suffices to switch the neural network used depending on whether the translated image acquired from the image translation device 1 or 1a is the first translated image or the second translated image.
  • the functions of the image translation devices 1 and 1a (hereinafter, "the device") can be realized by a program for causing a computer to function as the device, in particular as each control block of the device (each unit included in the control units 20 and 20a).
  • in this case, the device includes a computer having at least one control device (for example, a processor) and at least one storage device (for example, a memory) as hardware for executing the program.
  • the above program may be recorded on one or more non-transitory, computer-readable recording media.
  • This recording medium may or may not be included in the above device. In the latter case, the program may be supplied to the device via any transmission medium, wired or wireless.
  • each of the control blocks described above can also be realized by a logic circuit. For example, an integrated circuit in which logic circuits functioning as the control blocks described above are formed also falls within the scope of the present invention.
  • an image translation device according to Aspect 1 of the present disclosure includes: an image translation unit including a first generator and a second generator that have learned a relationship between color-related features of a first image group, obtained by imaging tissue subjected to a first process that includes an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and color-related features of a second image group, obtained by imaging tissue subjected to a second process that includes neither the embedding process nor the slicing; a first input unit that inputs either a first target image belonging to the first image group or a second target image belonging to the second image group to the image translation unit; and a translated image output unit that outputs a first translated image generated from the first target image or a second translated image generated from the second target image. The first generator generates, from the first target image, the first translated image having the color-related features of the second image group, and the second generator generates, from the second target image, the second translated image having the color-related features of the first image group.
  • in the image translation device according to Aspect 2 of the present disclosure, in Aspect 1, the image translation unit may include: a first discriminator that identifies images included in the first image group versus translated images generated by the second generator, based on a first error between the color-related features of the first image group and the color-related features of the translated image generated by the second generator; and a second discriminator that identifies images included in the second image group versus translated images generated by the first generator, based on a second error between the color-related features of the second image group and the color-related features of the translated image generated by the first generator. The image translation unit may output, as the first translated image or the second translated image, a translated image for which the first error and the second error are at or below a predetermined level.
  • in the image translation device according to Aspect 3 of the present disclosure, in Aspect 1 or 2, the tissue may have a structure formed by an aggregation of any of cells, fungi, and bacteria.
  • in the image translation device according to Aspect 4 of the present disclosure, in any one of Aspects 1 to 3, the second target image may be an image captured using a deep ultraviolet excitation fluorescence microscope.
  • an image diagnostic system according to Aspect 5 of the present disclosure comprises the image translation device according to any one of Aspects 1 to 4 and an image diagnostic device. The image diagnostic device includes: a diagnosis unit comprising at least one of a first neural network that has learned the correspondence between a first training image group, obtained by imaging tissue subjected to the first process, and the state of the tissue shown in each image of that group, and a second neural network that has learned the correspondence between a second training image group, obtained by imaging tissue subjected to the second process, and the state of the tissue shown in each image of that group; a second input unit that inputs the first translated image or the second translated image to the diagnosis unit; and a diagnostic information output unit that outputs diagnostic information, output from the diagnosis unit, regarding the state of the tissue shown in the first translated image or the second translated image.
  • an image translation method according to Aspect 6 of the present disclosure uses a first generator and a second generator that have learned a relationship between color-related features of a first image group, obtained by imaging tissue subjected to a first process that includes an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and color-related features of a second image group, obtained by imaging tissue subjected to a second process that includes neither the embedding process nor the slicing. In the method, the first generator generates, from a first target image belonging to the first image group, a first translated image having the color-related features of the second image group, and the second generator generates, from a second target image belonging to the second image group, a second translated image having the color-related features of the first image group.
  • a control program according to Aspect 7 of the present disclosure is a control program for causing a computer to function as the image translation device according to any one of Aspects 1 to 4, the control program causing the computer to function as the image translation unit, the first input unit, and the translated image output unit.
  • a recording medium according to Aspect 8 of the present disclosure is a computer-readable recording medium on which the control program according to Aspect 7 is recorded.

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Organic Chemistry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Zoology (AREA)
  • Wood Science & Technology (AREA)
  • Biotechnology (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biochemistry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Microbiology (AREA)
  • Analytical Chemistry (AREA)
  • General Engineering & Computer Science (AREA)
  • Proteomics, Peptides & Aminoacids (AREA)
  • Genetics & Genomics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Immunology (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Medicinal Chemistry (AREA)
  • Biomedical Technology (AREA)
  • Sustainable Development (AREA)
  • Image Analysis (AREA)

Abstract

The present invention realizes, for example, an image translation device that can effectively use an existing diagnostic imaging model. An image translation device (1) comprises an image translation unit (22) that includes a first generator (221) and a second generator (222), the image translation unit (22) having learned the relationship between a color-related feature of a first image group obtained by capturing images of tissue subjected to a first process that includes an embedding process and slicing, and a color-related feature of a second image group obtained by capturing images of tissue subjected to a second process that includes neither the embedding process nor the slicing. A first target image belonging to the first image group or a second target image belonging to the second image group is input to the image translation unit (22), which then outputs a first translated image generated from the first target image or a second translated image generated from the second target image.
PCT/JP2023/034225 2022-09-21 2023-09-21 Image translation device, diagnostic imaging system, image translation method, control program, and recording medium WO2024063119A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-150643 2022-09-21
JP2022150643 2022-09-21

Publications (1)

Publication Number Publication Date
WO2024063119A1 true WO2024063119A1 (fr) 2024-03-28

Family

ID=90454639

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/034225 WO2024063119A1 (fr) 2022-09-21 2023-09-21 Image translation device, diagnostic imaging system, image translation method, control program, and recording medium

Country Status (1)

Country Link
WO (1) WO2024063119A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018192264A (ja) * 2017-05-18 2018-12-06 キヤノンメディカルシステムズ株式会社 Medical image processing apparatus
JP2021513065A (ja) * 2018-02-12 2021-05-20 エフ・ホフマン−ラ・ロシュ・アクチェンゲゼルシャフト Transformation of digital pathology images
JP2021519924A (ja) * 2018-03-30 2021-08-12 ザ リージェンツ オブ ザ ユニバーシティ オブ カリフォルニア Method and system for digitally staining label-free fluorescence images using deep learning
JP2022068043A (ja) * 2020-10-21 2022-05-09 キヤノンメディカルシステムズ株式会社 Medical image processing apparatus and medical image processing system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23868240

Country of ref document: EP

Kind code of ref document: A1