WO2024063119A1 - Image translation device, diagnostic imaging system, image translation method, control program, and recording medium - Google Patents


Info

Publication number
WO2024063119A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
translated
tissue
group
generator
Application number
PCT/JP2023/034225
Other languages
French (fr)
Japanese (ja)
Inventor
宏彦 新岡
淳哉 佐藤
哲郎 高松
辰也 松本
龍太 中尾
浩幸 田中
正司 前原
重行 深井
Original Assignee
国立大学法人大阪大学
京都府公立大学法人
Application filed by 国立大学法人大阪大学 and 京都府公立大学法人
Publication of WO2024063119A1

Classifications

    • C: CHEMISTRY; METALLURGY
    • C12: BIOCHEMISTRY; BEER; SPIRITS; WINE; VINEGAR; MICROBIOLOGY; ENZYMOLOGY; MUTATION OR GENETIC ENGINEERING
    • C12M: APPARATUS FOR ENZYMOLOGY OR MICROBIOLOGY; APPARATUS FOR CULTURING MICROORGANISMS FOR PRODUCING BIOMASS, FOR GROWING CELLS OR FOR OBTAINING FERMENTATION OR METABOLIC PRODUCTS, i.e. BIOREACTORS OR FERMENTERS
    • C12M1/00: Apparatus for enzymology or microbiology
    • C12M1/34: Measuring or testing with condition measuring or sensing means, e.g. colony counters
    • C: CHEMISTRY; METALLURGY
    • C12: BIOCHEMISTRY; BEER; SPIRITS; WINE; VINEGAR; MICROBIOLOGY; ENZYMOLOGY; MUTATION OR GENETIC ENGINEERING
    • C12Q: MEASURING OR TESTING PROCESSES INVOLVING ENZYMES, NUCLEIC ACIDS OR MICROORGANISMS; COMPOSITIONS OR TEST PAPERS THEREFOR; PROCESSES OF PREPARING SUCH COMPOSITIONS; CONDITION-RESPONSIVE CONTROL IN MICROBIOLOGICAL OR ENZYMOLOGICAL PROCESSES
    • C12Q1/00: Measuring or testing processes involving enzymes, nucleic acids or microorganisms; Compositions therefor; Processes of preparing such compositions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • The present invention relates to an image translation device that converts images of tissue, an image translation method, and an image diagnostic system equipped with such an image translation device.
  • Patent Document 1 discloses a medical image processing device that uses a high-quality image engine (artificial intelligence) to generate a higher-quality image from an acquired medical image.
  • Non-Patent Document 1 discloses a technique for generating a virtual HE-stained image from an image of unstained lung tissue using a conditional GAN (CGAN).
  • Non-Patent Document 2 discloses a technique that uses deep learning to generate, from an HE-stained image, a virtual tissue image as if a staining method different from HE staining had been applied. The staining methods specifically described in Non-Patent Document 2 as staining methods different from HE staining are Masson's trichrome staining, PAS (periodic acid-Schiff) staining, and Jones silver staining.
  • Bayramoglu N et al., "Towards Virtual H&E Staining of Hyperspectral Lung Histology Images Using Conditional Generative Adversarial Networks", IEEE Conference Proceedings, Vol. 2017, pp. 64-71, 2017. de Haan K et al., "Deep learning-based transformation of H&E stained tissues into special stains", Nature Communications 12: 4884, doi.org/10.1038/s41467-021-25221-2, 2021.
  • When tissue images are acquired using new processing methods and new techniques, existing image analysis models cannot be applied, and a new image analysis model must be created to analyze the newly acquired images.
  • For example, existing diagnostic imaging models applicable to pathological images are created by learning from images, collected and accumulated over a long period of time, of tissues related to various diseases (including diseases with few cases).
  • Such existing diagnostic imaging models can be applied to images of tissues prepared with conventional, typical embedding and staining processes, but they cannot be applied to images obtained using new processing methods and new techniques.
  • An image translation device according to one aspect of the present invention includes: an image translation unit including a first generator and a second generator that have learned the relationship between color-related features of a first image group, obtained by imaging tissue subjected to a first process that includes an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and color-related features of a second image group, obtained by imaging tissue subjected to a second process that includes neither the embedding process nor the slicing; a first input unit that inputs either a first target image belonging to the first image group or a second target image belonging to the second image group to the image translation unit; and a translated image output unit that outputs a first translated image generated from the first target image or a second translated image generated from the second target image. The first generator generates, from the first target image, the first translated image having the color-related features of the second image group, and the second generator generates, from the second target image, the second translated image having the color-related features of the first image group.
  • An image diagnostic system according to one aspect of the present invention includes the image translation device according to the first aspect and an image diagnostic device. The image diagnostic device includes: a diagnosis unit including at least one of a first neural network that has learned the correspondence between a first training image group, obtained by imaging tissue subjected to the first process, and the state of the tissue shown in each image of the first training image group, and a second neural network that has learned the correspondence between a second training image group, obtained by imaging tissue subjected to the second process, and the state of the tissue shown in each image of the second training image group; a second input unit that inputs the first translated image or the second translated image to the diagnosis unit; and a diagnostic information output unit that outputs diagnostic information, output from the diagnosis unit, regarding the state of the tissue shown in the first translated image or the second translated image.
  • An image translation method according to one aspect of the present invention includes: an input step of inputting either a first target image belonging to a first image group or a second target image belonging to a second image group into a neural network including a first generator and a second generator that have learned the relationship between color-related features of the first image group, obtained by imaging tissue subjected to a first process including an embedding process of embedding the tissue in a predetermined embedding agent or slicing, and color-related features of the second image group, obtained by imaging tissue subjected to a second process that includes neither the embedding process nor the slicing; and a translated image output step of outputting a first translated image generated from the first target image or a second translated image generated from the second target image. The first generator generates, from the first target image, the first translated image having the color-related features of the second image group, and the second generator generates, from the second target image, the second translated image having the color-related features of the first image group.
  • The image translation device according to each aspect may be realized by a computer. In this case, a control program for the image translation device that causes the computer to function as each section (software element) of the image translation device, and a computer-readable recording medium on which the program is recorded, also fall within the scope of the present invention.
  • FIG. 1 is a block diagram showing a configuration example of an image translation device according to Embodiment 1 of the present invention.
  • FIG. 2 is a functional block diagram showing an example of the configuration of the image translation device.
  • FIG. 3 is a flowchart showing an example of processing performed by the image translation device.
  • FIG. 4 is a diagram illustrating an example of the flow of a first process for acquiring an image of biological tissue.
  • FIG. 5 is a diagram illustrating an example of the flow of a second process for acquiring an image of biological tissue.
  • FIG. 6 is a diagram illustrating another example of the flow of the second process for acquiring an image of biological tissue.
  • FIG. 7 is a diagram showing an example of image translation by the image translation device.
  • FIG. 8 is a functional block diagram showing another configuration example of the image translation device.
  • FIG. 9 is an explanatory diagram explaining the functions of the first discriminator and the second discriminator.
  • FIG. 10 is a block diagram showing a configuration example of an image diagnostic system according to Embodiment 2 of the present invention.
  • FIG. 11 is a functional block diagram showing a configuration example of the image diagnostic system.
  • FIG. 12 is a functional block diagram showing another configuration example of the image diagnostic system.
  • The image translation device 1 creates, from an image actually captured of tissue subjected to a first process, a translated image (a first translated image) that resembles an image of the same tissue captured after a second process different from the first process.
  • Likewise, the image translation device 1 creates, from an image actually captured of tissue subjected to the second process, a translated image (a second translated image) that resembles an image of the same tissue captured after the first process.
  • Here, the first process includes an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and the second process includes neither the embedding process nor the slicing.
  • In this specification, a tissue may include a structure formed by an aggregation of any of cells, fungi, and bacteria. That is, a tissue may be an organ of a living body, a colony of cultured cells, an aggregate of fungi, a bacterial colony, bacterial flora, or the like.
  • Fig. 1 is a block diagram showing an example of the configuration of the image translation device 1 according to the first embodiment of the present invention.
  • Fig. 2 is a functional block diagram showing an example of the configuration of the image translation device 1.
  • the image translation device 1 is, for example, a computer, and includes a processor section 2, a hard disk 3, a memory 4, and a display section 5, as shown in FIG.
  • the processor unit 2 reads various programs from the hard disk 3 and executes them.
  • the processor unit 2 may be, for example, at least one of a CPU and a GPU.
  • the hard disk 3 stores various programs executed by the processor section 2. Further, the hard disk 3 may store various image data used by the processor unit 2 to execute various programs.
  • the memory 4 stores various data and programs used for various processes being executed by the processor unit 2.
  • the memory 4 functions as a working memory that stores a program that realizes a neural network structure loaded from the hard disk 3.
  • “memory” may refer to the main memory or the memory of the GPU.
  • The display unit 5 may be any display for displaying images required for the various processes executed by the processor unit 2 (for example, target images) and images generated by those processes (for example, translated images). Note that the display unit 5 is not an essential component of the image translation device 1.
  • For example, the image translation device 1 may be configured to transmit various data to an external display device (not shown) communicably connected to the image translation device 1 and have that display device display the data.
  • As shown in FIG. 2, the image translation device 1 includes a control section 20 corresponding to the processor section 2 and the memory 4 shown in FIG. 1, a storage section 30 corresponding to the hard disk 3 shown in FIG. 1, and the display section 5.
  • the control section 20 includes a first input section 21, an image translation section 22, and a translated image output section 23.
  • The first input unit 21 inputs, to the image translation unit 22 described later, either a first target image belonging to a first image group obtained by imaging tissue subjected to the first process, or a second target image belonging to a second image group obtained by imaging tissue subjected to the second process, which differs from the first process.
  • the image translation unit 22 includes a first generator 221 and a second generator 222 that have learned the relationship between the color features of the first image group and the color features of the second image group.
  • the first generator 221 and the second generator 222 are neural networks (generative models) that extract features of an input image and generate a new image having the extracted features.
  • a known deep learning algorithm such as a generative adversarial network (GAN) may be applied to the learning of the first generator 221 and the second generator 222.
  • the learning of the first generator 221 and the second generator 222 is not limited to learning that applies a generative adversarial network.
  • For example, learning may be performed using either the first image group or the second image group as input data, and using as training data images generated by converting either the first image group or the second image group with an AI capable of generating images (e.g., Dalle2).
  • the learning process of the first generator 221 and the second generator 222 may be executed using a computer different from the image translation device 1. In this case, by installing the trained first generator 221 and second generator 222 and a predetermined arbitrary program in an arbitrary computer, the computer can function as the image translation device 1.
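  • As a concrete illustration (not from the patent itself), the following is a minimal Python/PyTorch sketch of a pair of generators of the kind the image translation unit 22 could hold; the architecture, layer sizes, and file names are assumptions, since the patent does not specify them.

```python
# Minimal sketch (assumption, not the patent's actual model): two small fully
# convolutional generators of the kind the image translation unit 22 could
# hold. Practical systems typically use deeper U-Net or ResNet generators.
import torch
import torch.nn as nn

def make_generator() -> nn.Sequential:
    """Image-to-image generator: RGB in, RGB out, same spatial size."""
    return nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(64, 64, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(64, 3, kernel_size=3, padding=1),
        nn.Tanh(),  # outputs in [-1, 1], matching normalized inputs
    )

# The first generator maps group-1 color features to group-2 color features;
# the second generator maps the opposite direction, as described above.
first_generator = make_generator()
second_generator = make_generator()

# Weights trained on a separate computer would be loaded here
# (hypothetical file names):
# first_generator.load_state_dict(torch.load("gen_1to2.pt"))
# second_generator.load_state_dict(torch.load("gen_2to1.pt"))
```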
  • the first generator 221 generates, from the first target image, a first translated image that has color-related features of the second image group without significantly changing the structure of the tissue shown in the first target image.
  • the second generator 222 generates, from the second target image, a second translated image having color-related characteristics of the first image group without significantly changing the structure of the tissue depicted in the second target image.
  • the translated image output unit 23 acquires and outputs the first translated image or the second translated image generated by the image translation unit 22.
  • the translated image output unit 23 may output the first translated image or the second translated image to the display unit 5.
  • the storage unit 30 may store a target image 31 and a translated image 32.
  • the target image 31 may store a first target image belonging to the first image group and a second target image belonging to the second image group.
  • the translated image 32 may store a translated image generated by the image translation unit 22.
  • FIG. 3 is a flowchart showing an example of processing performed by the image translation device 1.
  • The first input unit 21 inputs either the first target image belonging to the first image group or the second target image belonging to the second image group into the neural network (the first generator 221 or the second generator 222) (step S1: input step).
  • The translated image output unit 23 outputs the first translated image generated from the first target image or the second translated image generated from the second target image by the image translation unit 22 (step S2: translated image output step).
  • In this way, the image translation device 1 can translate an image actually captured of tissue subjected to the first process (a first target image) into a translated image that looks as if the tissue had been imaged after the second process. Likewise, it can translate an image actually captured of tissue subjected to the second process (a second target image) into a translated image that looks as if the tissue had been imaged after the first process.
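  • For illustration, the flow of FIG. 3 can be sketched as follows (an assumption continuing the PyTorch sketch above; the routing logic and tensor conventions are not specified by the patent).

```python
# Sketch of steps S1 and S2 in FIG. 3 (illustrative assumption). `image` is a
# float tensor of shape (1, 3, H, W) normalized to [-1, 1]; `group` indicates
# which image group the target image belongs to.
import torch
import torch.nn as nn

@torch.no_grad()
def translate(image: torch.Tensor, group: int,
              first_generator: nn.Module,
              second_generator: nn.Module) -> torch.Tensor:
    # Step S1 (input step): route the target image to the matching generator.
    if group == 1:
        translated = first_generator(image)   # first target -> first translated image
    elif group == 2:
        translated = second_generator(image)  # second target -> second translated image
    else:
        raise ValueError("group must be 1 or 2")
    # Step S2 (translated image output step): return the generated image.
    return translated
```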
  • FIG. 4 is a diagram illustrating an example of the flow of the first process for acquiring an image of a biological tissue.
  • FIGS. 5 and 6 are diagrams illustrating an example of the flow of the second process for acquiring an image of a tissue.
  • the first process shown in FIG. 4 is a conventional method of preparing and imaging a pathological specimen.
  • tissue is first collected from a living body (step S11). Subsequently, the collected tissue is fixed with a fixative such as formalin, and then embedded using an embedding agent such as paraffin and resin (step S12). Next, the embedded tissue is sliced (step S13), and the sliced tissue is stained using a predetermined staining method (step S14).
  • Slicing is a process performed using a microtome or the like; by slicing, tissue is typically cut into sections with a thickness of about several μm to 10 μm.
  • An example of a predetermined staining method is HE staining.
  • HE staining is one of the methods used to stain collected tissue pieces, and uses a combination of hematoxylin staining and eosin staining. Hematoxylin stains the chromatin in the cell nucleus and the ribosomes in the cytoplasm blue-purple. On the other hand, eosin stains cytoplasmic components and extracellular matrix red. Next, the stained tissue is imaged using a bright field microscope or the like (step S15).
  • the second process shown in FIG. 5 does not include embedding and slicing.
  • tissue is first collected from a living body (step S21). Subsequently, the tissue or tissue piece is imaged (step S22).
  • For the imaging, the following microscopes can be used, which are capable of obtaining images of tissue that has not undergone staining treatment, such images being applicable to image diagnosis and the like.
  • Tissue fragmentation is a process of cutting tissue into pieces with a thickness of typically 1 mm to several mm, and differs from the slicing described above.
  • During fragmentation, a low-temperature gas may be sprayed onto the surface of the tissue to temporarily fix it, so that the tissue is not deformed or crushed by the blade entering it.
  • Note that tissue fragmentation is not essential. Furthermore, when imaging with these microscopes is performed endoscopically, neither the tissue collection in step S21 nor the tissue fragmentation is essential.
  • The second process shown in FIG. 6 likewise includes neither the embedding process nor the slicing.
  • tissue is first collected from a living body (step S31). Subsequently, the collected tissue is stained using a predetermined staining method (step S32). The stained tissue is then imaged (step S33).
  • For the imaging, a deep ultraviolet excitation fluorescence microscope or the like may be used, which is capable of obtaining images of tissue that has not undergone the embedding process, images that are applicable to image diagnosis and the like. Note that when a deep ultraviolet excitation fluorescence microscope is used, the staining in step S32 is not essential.
  • In this way, the image translation device 1 can translate an image of tissue processed with a newly developed processing method into a translated image resembling an image of a lesion-site specimen processed with a conventional processing method.
  • Conversely, the image translation device 1 can translate, for example, an image of a lesion-site specimen processed with a conventional processing method into a translated image resembling an image of tissue processed with a newly developed processing method.
  • FIG. 7 is a diagram showing an example of image translation by the image translation device 1.
  • FIG. 7 shows a virtual deep ultraviolet excitation fluorescence microscope image (a first translated image) translated from an HE-stained image (a first target image) of a tissue section actually observed after HE staining, and a virtual HE-stained image (a second translated image) translated from an image of tissue actually observed with a deep ultraviolet excitation fluorescence microscope (a second target image; no slicing, with staining).
  • In other words, the image translation device 1 can convert an HE-stained image of a tissue section actually observed after HE staining into a virtual deep ultraviolet excitation fluorescence microscope image, and can likewise convert a deep ultraviolet excitation fluorescence microscope image into a virtual HE-stained image.
  • An image diagnosis model that has learned HE-stained images of cancerous cell tissue and normal cell tissue in advance can discriminate cancerous tissue from normal tissue with high accuracy (for example, with an AUC (Area Under the Curve, an indicator of accuracy) of 0.9 or higher).
  • When tissue containing cancer and tissue not containing cancer were classified using an image diagnosis model trained on first target images, actually captured second target images were classified with an accuracy of 66.4%.
  • When second translated images were classified in the same way, they were classified with an accuracy of 84.6%.
  • The image translation unit 22 may perform negative-positive inversion processing on the input image as preprocessing. The negative-positive inversion improves the degree of completeness of the translated image generated by the image translation unit 22, as explained below.
  • In a deep ultraviolet excitation fluorescence image, the background area where no tissue appears has low brightness, whereas in an HE-stained image (a bright-field image), the background area where no tissue appears has high brightness (see FIG. 7).
  • When negative-positive inversion is applied to a deep ultraviolet excitation fluorescence image, the brightness of the background region of the inverted image becomes close to the brightness of the background region of the virtual HE-stained image to be generated. Conversely, when the inversion is applied to an HE-stained image, the brightness of the background region of the inverted image becomes close to the brightness of the background region of the virtual deep ultraviolet excitation fluorescence image to be generated.
  • Such preprocessing, a form of domain adaptation, contributes to improving the learning efficiency of the image translation model and, as a result, can improve the completeness of translated images.
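  • The following is a minimal sketch of the negative-positive inversion described above (the exact arithmetic is an assumption; inversion of an 8-bit image is the common interpretation). Applied to a deep ultraviolet excitation fluorescence image it brightens the dark background; applied to an HE-stained image it darkens the bright background.

```python
# Sketch of the negative-positive inversion preprocessing (assumption: the
# usual 8-bit inversion). A dark fluorescence background becomes bright,
# approaching the background of an HE-stained bright-field image.
import numpy as np

def negative_positive_invert(img: np.ndarray) -> np.ndarray:
    """Invert an 8-bit image so dark regions become bright and vice versa."""
    assert img.dtype == np.uint8
    return 255 - img
```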
  • An image diagnosis model is sometimes created that outputs diagnostic information (estimation results) based on images of tissue subjected to a predetermined process. For example, an image diagnosis model created using, as training data, the first image group of tissue subjected to the first process can output highly reliable diagnostic information when an image having the color-related features of the first image group is input. However, even if an image having the color-related features of the second image group, of tissue subjected to the second process, is input to such a model, correct diagnostic information may not be obtained, because the color-related features of the first image group and of the second image group differ.
  • The image translation device 1 can generate, from a first target image having the color-related features of the first image group, a first translated image having the color-related features of the second image group, without significantly changing the structure shown in the target image.
  • The generated first translated image can therefore be applied to an existing image diagnosis model created using the second image group as training data, and similarly, the generated second translated image can be applied to an existing image diagnosis model created using the first image group as training data. That is, by employing the image translation device 1, it is possible to generate, from an image to which an existing image diagnosis model is not applicable, an image to which the model is applicable. Consequently, there is no need to create separate image analysis models for each predetermined process applied to the imaged tissue.
  • the embedding process and slicing are processes that require time and effort. Therefore, obtaining the second target image is easier than obtaining the first target image.
  • Furthermore, since the second process is a processing method with a short history, image diagnosis models created based on second target images are still few in number or may be incomplete. In such a case, a translated image can be generated from a second target image using the image translation device 1 and applied to an image diagnosis model created based on first target images.
  • the image translation device 1 can facilitate the creation of such an image diagnosis model.
  • the image translation device 1 is capable of image translation from an image of a tissue processed using an existing method to an image of the tissue processed using a newly developed method. By using such translated images, it becomes possible to efficiently create a new image diagnosis model that outputs diagnostic information based on images of tissues processed using a newly developed method. Further, the image diagnosis model created in this manner can diagnose tissues processed by existing methods.
  • the image translation device 1 may translate an image from an image of tissue processed using a newly developed method to an image of tissue processed using an existing method.
  • the image diagnostic model created in this way can diagnose tissues processed using the newly developed method.
  • The image translation device 1 is also capable of image translation from an image of tissue in a state that appears only infrequently, processed using an existing method, into an image as if the tissue had been processed using a newly developed method.
  • An image diagnosis model created in this manner can diagnose tissue in such infrequently appearing states processed using existing methods.
  • The image translation device 1 may also translate an image of tissue processed using a newly developed method into an image as if the tissue, in an infrequently appearing state, had been processed using an existing method.
  • By using such translated images, it becomes possible to efficiently create a new image diagnosis model that outputs diagnostic information based on images, processed using existing methods, of tissue in infrequently appearing states.
  • An image diagnosis model created in this manner can diagnose tissue processed with a newly developed method whose states appear infrequently.
  • Although a translated image is not an image obtained by actually observing the tissue, the structure of the tissue is not significantly changed in it. Therefore, like an image obtained by actually observing tissue, a translated image can be treated as a captured image of the tissue.
  • The translated images generated by the image translation device 1 can thus be used for training an image diagnosis model. For example, if a translated image is generated from a first target image using the image translation device 1, that translated image can be used as training data when creating an image diagnosis model based on second target images.
  • The image translation unit 22 of the image translation device 1 only needs to include a first generator 221 and a second generator 222 that extract features of an input image and generate a new image having the extracted features; the configuration is not limited to that shown in FIG. 2. For example, CycleGAN may be applied to implement the image translation unit 22.
  • FIG. 8 is a functional block diagram showing another example of the configuration of the image translation device 1a.
  • As shown in FIG. 8, the image translation unit 22a may further include a first discriminator 223 and a second discriminator 224.
  • The first discriminator 223 distinguishes the images included in the first image group from the translated images generated by the second generator, based on a first error between the color-related features of the first image group and the color-related features of the translated images generated by the second generator.
  • The second discriminator 224 distinguishes the images included in the second image group from the translated images generated by the first generator, based on a second error between the color-related features of the second image group and the color-related features of the translated images generated by the first generator.
  • FIG. 9 is a diagram illustrating an example of processing performed by the image translation unit 22a including the first discriminator 223 and the second discriminator 224.
  • The first input unit 21 also inputs the images of the first image group, input to the first generator 221, to the first discriminator 223.
  • The first generator 221 generates a translated image from each input image.
  • The second generator 222 then generates a further translated image from the translated image generated by the first generator 221.
  • The first discriminator 223 calculates the first error between the color-related features of the translated image generated by the second generator 222 and the color-related features of the image of the first image group from which it originated.
  • Similarly, the first input unit 21 inputs the images of the second image group, input to the second generator 222, to the second discriminator 224.
  • The second generator 222 generates a translated image from each input image.
  • The first generator 221 then generates a further translated image from the translated image generated by the second generator 222.
  • The second discriminator 224 calculates the second error between the color-related features of the translated image generated by the first generator 221 and the color-related features of the image of the second image group from which it originated.
  • The image translation unit 22a generates translated images by repeating the above processing, and outputs a translated image whose first error and second error are below a predetermined level as the first translated image or the second translated image.
  • Alternatively, the image translation unit 22a may calculate a cycle consistency loss based on the first error and the second error, and output a translated image whose cycle consistency loss is less than or equal to a predetermined value as the first translated image or the second translated image.
  • The image translation device 1a having such a configuration can generate and output highly accurate translated images.
  • Furthermore, since CycleGAN can learn the relationship between the color-related features of the first image group and the color-related features of the second image group from unpaired images, there is no need to prepare paired images of the same tissue belonging to both groups.
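  • For illustration, the first and second errors of FIG. 9 can be combined into a cycle consistency loss in the standard CycleGAN form; the L1 norm and equal weighting below are assumptions, since the patent does not fix the exact loss.

```python
# Sketch of the cycle-consistency computation of FIG. 9 in standard CycleGAN
# form (the L1 norm and equal weighting are assumptions). G1 and G2 are the
# first and second generators; a full CycleGAN training loop would add the
# discriminators' adversarial losses to this term.
import torch
import torch.nn.functional as F

def cycle_consistency_loss(real_1: torch.Tensor, real_2: torch.Tensor,
                           G1: torch.nn.Module, G2: torch.nn.Module) -> torch.Tensor:
    fake_2 = G1(real_1)   # image group 1 -> image group 2
    rec_1 = G2(fake_2)    # back to image group 1
    first_error = F.l1_loss(rec_1, real_1)   # the "first error" above

    fake_1 = G2(real_2)   # image group 2 -> image group 1
    rec_2 = G1(fake_1)    # back to image group 2
    second_error = F.l1_loss(rec_2, real_2)  # the "second error" above

    return first_error + second_error
```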
  • FIG. 10 is a block diagram showing a configuration example of an image diagnostic system 100 according to Embodiment 2 of the present invention.
  • the image diagnosis system 100 includes image translation devices 1 and 1a, and an image diagnosis device 7.
  • the image diagnostic device 7 is, for example, a computer communicably connected to the image translation devices 1 and 1a.
  • the image diagnostic apparatus 7 includes a processor section 71, a hard disk 73, a memory 72, and a display section 74, as shown in FIG.
  • the processor unit 71 reads various programs from the hard disk 73 and executes them.
  • the processor unit 71 may be, for example, at least one of a CPU and a GPU.
  • the hard disk 73 stores various programs executed by the processor unit 71.
  • the hard disk 73 may also store various image data used by the processor unit 71 to execute the various programs.
  • the memory 72 stores various data and various programs used for various processes being executed by the processor section 71.
  • the memory 72 functions as a working memory that stores a program loaded from the hard disk 73 that implements the neural network structure.
  • the display unit 74 may be any display for displaying images required to execute various processes executed by the processor unit 71 and diagnostic information output by the processor unit 71. Note that the display section 74 is not an essential component of the image diagnostic apparatus 7.
  • the image diagnostic device 7 may be configured to transmit diagnostic information to an external display device (not shown) or image translation device 1, 1a that is communicably connected to the image diagnostic device 7.
  • The image diagnostic apparatus 7 includes an image diagnostic model (a first neural network 7121, described later) that estimates the state of tissue based on an image of tissue that has been subjected to the first process shown in FIG. 4.
  • FIG. 11 is a functional block diagram showing a configuration example of the image diagnostic system 100.
  • As shown in FIG. 11, the image diagnostic apparatus 7 includes a control section 710 corresponding to the processor section 71 and the memory 72 shown in FIG. 10, a storage section corresponding to the hard disk 73 shown in FIG. 10, and the display section 74.
  • The storage section is not illustrated in order to simplify the explanation.
  • the control section 710 includes a second input section 711, a diagnostic section 712, and a diagnostic information output section 713.
  • The second input unit 711 inputs, to the diagnosis unit 712, a second translated image having the color-related features of the first image group, acquired from the image translation device 1 or 1a.
  • the diagnosis section 712 includes a first neural network 7121.
  • The first neural network 7121 is a neural network (inference model) that has learned the correspondence between a first training image group, obtained by imaging tissue subjected to the first process, and the state of the tissue shown in each image of that group. A known supervised machine learning algorithm may be applied to the training of the first neural network 7121. The training process may also be executed on a computer different from the image diagnostic apparatus 7; in this case, by installing the trained first neural network 7121 on any computer, that computer can function as the image diagnostic apparatus 7.
  • the diagnostic information output unit 713 acquires and outputs the diagnostic information output from the diagnostic unit 712. For example, the diagnostic information output unit 713 may output diagnostic information to the display unit 74.
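  • As an illustration of this pipeline (the model, labels, and tensor shapes below are hypothetical, not from the patent), a second translated image could be passed to an existing inference model as follows.

```python
# Hypothetical illustration of the diagnosis flow: a second translated image
# (having the color features of the first image group) is input to an
# existing inference model such as the first neural network 7121. The
# two-class labels and tensor shape are assumptions.
import torch
import torch.nn as nn

@torch.no_grad()
def diagnose(second_translated_image: torch.Tensor,
             diagnostic_model: nn.Module) -> str:
    logits = diagnostic_model(second_translated_image)  # shape (1, num_classes)
    class_index = int(logits.argmax(dim=1).item())
    labels = ["normal tissue", "cancerous tissue"]      # hypothetical labels
    return labels[class_index]
```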
  • In the image diagnostic system 100, the image translation devices 1 and 1a generate a second translated image from an image of tissue that has undergone a new process or an image captured with a new imaging technique.
  • When this second translated image is input to the first neural network 7121, which is an existing inference model, diagnostic information (estimation results) based on the existing inference model can be obtained.
  • The image diagnostic system 100 can thus, for example, generate, from an image to which existing disease determination criteria cannot be applied, a translated image to which those criteria can be applied, and output diagnostic information based on the translated image.
  • The image translation devices 1 and 1a may themselves have the configuration of the image diagnostic device 7. That is, the image translation devices 1 and 1a may include a diagnosis section 712 having the first neural network 7121 and a diagnostic information output section 713.
  • In this case, the image translation devices 1 and 1a generate a second translated image from an image of tissue that has undergone a new process or an image captured with a new imaging technique, and perform estimation based on the second translated image using the first neural network 7121. By performing estimation in this manner, the image translation devices 1 and 1a can output diagnostic information (estimation results) based on the existing inference model.
  • An image diagnostic apparatus 7a includes an image diagnostic model (a second neural network 7122, described later) that estimates the state of tissue based on an image of tissue that has been subjected to the second process shown in FIG. 5 or FIG. 6.
  • FIG. 12 is a functional block diagram showing a configuration example of the image diagnosis system 100a.
  • The image diagnostic apparatus 7a shown in FIG. 12 includes a second neural network 7122 in the diagnosis section 712.
  • The second neural network 7122 is a neural network (inference model) that has learned the correspondence between a second training image group, obtained by imaging tissue subjected to the second process, and the state of the tissue shown in each image of that group. A known supervised machine learning algorithm may be applied to the training of the second neural network 7122.
  • an image of a tissue taken using a new method may have more features that can be read than an image taken of a tissue using an existing method. Therefore, the second neural network 7122, which is a newer inference model, may be able to output more accurate diagnostic information than the existing inference model.
  • In the image diagnostic system 100a, the image translation devices 1 and 1a generate a first translated image from images of tissue captured in the past (for example, pathological images). If the first translated image is input to the second neural network 7122, a new inference model that outputs diagnostic information based on images of tissue that has undergone a new process or images captured with a new imaging technique, diagnostic information (estimation results) based on the new inference model can be obtained.
  • The image diagnostic apparatus may also be configured to include the functions of both the diagnostic apparatuses 7 and 7a. In this case, any configuration may be used as long as the neural network to be applied is switched depending on whether the translated image acquired from the image translation device 1 or 1a is a first translated image or a second translated image.
  • The functions of the image translation devices 1 and 1a (hereinafter, "the devices") can be realized by a program for causing a computer to function as the devices, in particular by a program for causing a computer to function as each control block of the devices (especially each section included in the control units 20 and 20a).
  • In this case, the devices include, as hardware for executing the program, a computer having at least one control device (for example, a processor) and at least one storage device (for example, a memory).
  • The program may be recorded on one or more non-transitory, computer-readable recording media. The recording media may or may not be included in the devices; in the latter case, the program may be supplied to the devices via any wired or wireless transmission medium.
  • Each of the control blocks described above can also be realized by a logic circuit. For example, an integrated circuit in which logic circuits functioning as the control blocks described above are formed also falls within the scope of the present invention.
  • An image translation device according to aspect 1 of the present disclosure includes: an image translation unit including a first generator and a second generator that have learned the relationship between color-related features of a first image group, obtained by imaging tissue subjected to a first process that includes an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and color-related features of a second image group, obtained by imaging tissue subjected to a second process that includes neither the embedding process nor the slicing; a first input unit that inputs either a first target image belonging to the first image group or a second target image belonging to the second image group to the image translation unit; and a translated image output unit that outputs a first translated image generated from the first target image or a second translated image generated from the second target image. The first generator generates, from the first target image, the first translated image having the color-related features of the second image group, and the second generator generates, from the second target image, the second translated image having the color-related features of the first image group.
  • In the image translation device according to aspect 2 of the present disclosure, in aspect 1, the image translation unit may include: a first discriminator that identifies the images included in the first image group and the translated images generated by the second generator, based on a first error between the color-related features of the first image group and the color-related features of the translated images generated by the second generator; and a second discriminator that identifies the images included in the second image group and the translated images generated by the first generator, based on a second error between the color-related features of the second image group and the color-related features of the translated images generated by the first generator, and the image translation unit may output a translated image whose first error and second error are below a predetermined level as the first translated image or the second translated image.
  • In the image translation device according to aspect 3 of the present disclosure, in aspect 1 or 2, the tissue may include a structure formed by an aggregation of any of cells, fungi, and bacteria.
  • In the image translation device according to aspect 4 of the present disclosure, the second target image may be an image captured using a deep ultraviolet excitation fluorescence microscope.
  • An image diagnostic system according to aspect 5 of the present disclosure includes the image translation device according to any one of aspects 1 to 4 and an image diagnostic device. The image diagnostic device includes: a diagnosis unit including at least one of a first neural network that has learned the correspondence between a first training image group, obtained by imaging tissue subjected to the first process, and the state of the tissue shown in each image of the first training image group, and a second neural network that has learned the correspondence between a second training image group, obtained by imaging tissue subjected to the second process, and the state of the tissue shown in each image of the second training image group; a second input unit that inputs the first translated image or the second translated image to the diagnosis unit; and a diagnostic information output unit that outputs diagnostic information, output from the diagnosis unit, regarding the state of the tissue shown in the first translated image or the second translated image.
  • An image translation method according to aspect 6 of the present disclosure includes: an input step of inputting either a first target image belonging to a first image group or a second target image belonging to a second image group into a neural network including a first generator and a second generator that have learned the relationship between color-related features of the first image group, obtained by imaging tissue subjected to a first process including an embedding process of embedding the tissue in a predetermined embedding agent or slicing, and color-related features of the second image group, obtained by imaging tissue subjected to a second process that includes neither the embedding process nor the slicing; and a translated image output step of outputting a first translated image generated from the first target image or a second translated image generated from the second target image. The first generator generates, from the first target image, the first translated image having the color-related features of the second image group, and the second generator generates, from the second target image, the second translated image having the color-related features of the first image group.
  • A control program according to aspect 7 of the present disclosure is a control program for causing a computer to function as the image translation device according to any one of aspects 1 to 4, the control program causing the computer to function as the image translation unit, the first input unit, and the translated image output unit.
  • A recording medium according to aspect 8 of the present disclosure is a computer-readable recording medium on which the control program according to aspect 7 is recorded.

Abstract

The present invention realizes an image translation device and the like that can effectively utilize an existing diagnostic imaging model. An image translation device (1) comprises an image translation unit (22) including a first generator (221) and a second generator (222) that have learned the relationship between color-related features of a first image group, obtained by imaging tissue subjected to a first process including an embedding process and slicing, and color-related features of a second image group, obtained by imaging tissue subjected to a second process that includes neither the embedding process nor the slicing. A first target image belonging to the first image group or a second target image belonging to the second image group is input to the image translation unit (22), which then outputs a first translated image generated from the first target image or a second translated image generated from the second target image.

Description

IMAGE TRANSLATION DEVICE, IMAGE DIAGNOSIS SYSTEM, IMAGE TRANSLATION METHOD, CONTROL PROGRAM, AND RECORDING MEDIUM
The present invention relates to an image translation device that converts images of tissue, an image translation method, an image diagnostic system equipped with such an image translation device, and the like.
Techniques are known that use artificial intelligence to generate different images from acquired medical images.
Patent Document 1 discloses a medical image processing device that uses a high-quality image engine (artificial intelligence) to generate a higher-quality image from an acquired medical image.
Non-Patent Document 1 discloses a technique for generating a virtual HE-stained image from an image of unstained lung tissue using a conditional GAN (CGAN). Non-Patent Document 2 discloses a technique that uses deep learning to generate, from an HE-stained image, a virtual tissue image as if a staining method different from HE staining had been applied. The staining methods specifically described in Non-Patent Document 2 as staining methods different from HE staining are Masson's trichrome staining, PAS (periodic acid-Schiff) staining, and Jones silver staining.
Japanese Patent Application Publication No. 2020-166813
There are many types of embedding and staining processes that can be applied to tissue in order to obtain images of the tissue; for example, the color tone (color-related features) of a captured tissue image differs depending on the staining process employed. In addition, techniques have been developed for observing and imaging tissue by irradiating it with light of wavelengths not previously used, or by applying staining methods not previously used.
When tissue images are acquired using new processing methods and new techniques, existing image analysis models cannot be applied, and a new image analysis model must be created to analyze the newly acquired images. For example, existing diagnostic imaging models applicable to pathological images are created by learning from images, collected and accumulated over a long period of time, of tissues related to various diseases (including diseases with few cases). Such existing diagnostic imaging models can be applied to images of tissues prepared with conventional, typical embedding and staining processes, but they cannot be applied to images obtained using new processing methods and new techniques.
The fact that existing diagnostic imaging models are applicable only to a limited range of images is a problem common to the various fields that employ diagnostic imaging, and is not limited to the fields of medicine and pathology. Existing diagnostic imaging techniques were created on the basis of a vast accumulation of past knowledge, and techniques for effectively utilizing this knowledge are widely needed.
An image translation device according to aspect 1 of the present invention includes: an image translation unit including a first generator and a second generator that have learned the relationship between color-related features of a first image group, obtained by imaging tissue subjected to a first process that includes an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and color-related features of a second image group, obtained by imaging tissue subjected to a second process that includes neither the embedding process nor the slicing; a first input unit that inputs either a first target image belonging to the first image group or a second target image belonging to the second image group to the image translation unit; and a translated image output unit that outputs a first translated image generated from the first target image or a second translated image generated from the second target image. The first generator generates, from the first target image, the first translated image having the color-related features of the second image group, and the second generator generates, from the second target image, the second translated image having the color-related features of the first image group.
An image diagnostic system according to one aspect of the present invention includes the image translation device according to aspect 1 and an image diagnostic device. The image diagnostic device includes: a diagnosis unit including at least one of a first neural network that has learned the correspondence between a first training image group, obtained by imaging tissue subjected to the first process, and the state of the tissue shown in each image of the first training image group, and a second neural network that has learned the correspondence between a second training image group, obtained by imaging tissue subjected to the second process, and the state of the tissue shown in each image of the second training image group; a second input unit that inputs the first translated image or the second translated image to the diagnosis unit; and a diagnostic information output unit that outputs diagnostic information, output from the diagnosis unit, regarding the state of the tissue shown in the first translated image or the second translated image.
An image translation method according to one aspect of the present invention includes: an input step of inputting either a first target image belonging to a first image group or a second target image belonging to a second image group into a neural network including a first generator and a second generator that have learned the relationship between color-related features of the first image group, obtained by imaging tissue subjected to a first process including an embedding process of embedding the tissue in a predetermined embedding agent or slicing, and color-related features of the second image group, obtained by imaging tissue subjected to a second process that includes neither the embedding process nor the slicing; and a translated image output step of outputting a first translated image generated from the first target image or a second translated image generated from the second target image. The first generator generates, from the first target image, the first translated image having the color-related features of the second image group, and the second generator generates, from the second target image, the second translated image having the color-related features of the first image group.
The image translation device according to each aspect of the present invention may be realized by a computer. In this case, a control program for the image translation device that causes the computer to function as each section (software element) of the image translation device, thereby realizing the image translation device on the computer, and a computer-readable recording medium on which the program is recorded, also fall within the scope of the present invention.
According to one aspect of the present invention, existing diagnostic imaging models can be effectively utilized.
FIG. 1 is a block diagram showing a configuration example of an image translation device according to Embodiment 1 of the present invention. FIG. 2 is a functional block diagram showing an example of the configuration of the image translation device. FIG. 3 is a flowchart showing an example of processing performed by the image translation device. FIG. 4 is a diagram illustrating an example of the flow of a first process for acquiring an image of biological tissue. FIG. 5 is a diagram illustrating an example of the flow of a second process for acquiring an image of biological tissue. FIG. 6 is a diagram illustrating another example of the flow of the second process for acquiring an image of biological tissue. FIG. 7 is a diagram showing an example of image translation by the image translation device. FIG. 8 is a functional block diagram showing another configuration example of the image translation device. FIG. 9 is an explanatory diagram explaining the functions of the first discriminator and the second discriminator. FIG. 10 is a block diagram showing a configuration example of an image diagnostic system according to Embodiment 2 of the present invention. FIG. 11 is a functional block diagram showing a configuration example of the image diagnostic system. FIG. 12 is a functional block diagram showing another configuration example of the image diagnostic system.
[Embodiment 1]
 Hereinafter, one embodiment of the present invention will be described in detail.
(Overview of Image Translation Device 1)
 An image translation device 1 according to an embodiment of the present invention creates, from an image actually captured of tissue subjected to a first process, a translated image (first translated image) that resembles an image captured of the tissue after being subjected to a second process different from the first process. The image translation device 1 also creates, from an image actually captured of tissue subjected to the second process, a translated image (second translated image) that resembles an image captured of the tissue after being subjected to the first process. Here, the first process is a process that includes an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and the second process is a process that includes neither the embedding process nor the slicing.
 As used herein, tissue may include a structure formed by an aggregation of any of cells, fungi, and bacteria. That is, a tissue may be an organ of a living body, a colony of cultured cells, an aggregate of fungi, a bacterial colony, a bacterial flora, or the like.
(Configuration of Image Translation Device 1)
 First, the configuration of the image translation device 1 will be described with reference to FIG. 1 and FIG. 2. FIG. 1 is a block diagram showing a configuration example of the image translation device 1 according to Embodiment 1 of the present invention. FIG. 2 is a functional block diagram showing an example of the configuration of the image translation device 1.
 The image translation device 1 is, for example, a computer, and includes a processor unit 2, a hard disk 3, a memory 4, and a display unit 5, as shown in FIG. 1.
 The processor unit 2 reads various programs from the hard disk 3 and executes them. The processor unit 2 may be, for example, at least one of a CPU and a GPU.
 The hard disk 3 stores various programs executed by the processor unit 2. The hard disk 3 may also store various image data used by the processor unit 2 to execute the various programs.
 The memory 4 stores various data and various programs used in the processes being executed by the processor unit 2. For example, the memory 4 functions as a working memory that stores a program, loaded from the hard disk 3, that realizes a neural network structure. Note that in this specification, "memory" may refer to the main memory or to GPU memory.
 The display unit 5 may be any display for displaying various images required for the processes executed by the processor unit 2 (for example, target images) and various images generated by those processes (for example, translated images). Note that the display unit 5 is not an essential component of the image translation device 1. For example, the image translation device 1 may be configured to transmit various data to an external display device (not shown) communicably connected to the image translation device 1 and cause that display device to display the data.
 As shown in FIG. 2, the image translation device 1 includes a control unit 20 corresponding to the processor unit 2 and the memory 4 shown in FIG. 1, a storage unit 30 corresponding to the hard disk 3 shown in FIG. 1, and the display unit 5.
 The control unit 20 includes a first input unit 21, an image translation unit 22, and a translated-image output unit 23.
 The first input unit 21 inputs, to the image translation unit 22 described later, either a first target image belonging to a first image group obtained by imaging tissue subjected to the first process, or a second target image belonging to a second image group obtained by imaging tissue subjected to the second process different from the first process.
 The image translation unit 22 includes a first generator 221 and a second generator 222 that have learned the relationship between the color-related features of the first image group and the color-related features of the second image group. The first generator 221 and the second generator 222 are neural networks (generative models) that extract features of an input image and generate a new image having the extracted features. A known deep learning algorithm such as a generative adversarial network (GAN) may be applied to the training of the first generator 221 and the second generator 222. However, the training of the first generator 221 and the second generator 222 is not limited to training based on a generative adversarial network. For example, training may be performed using either the first image group or the second image group as input data, with teacher data consisting of images generated by converting those images with an AI capable of generating images (e.g., Dalle2). Note that the training process for the first generator 221 and the second generator 222 may be executed on a computer different from the image translation device 1. In this case, by installing the trained first generator 221 and second generator 222 together with a predetermined program on any computer, that computer can be made to function as the image translation device 1.
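 For concreteness, the following is a minimal sketch of what such a generator could look like, assuming PyTorch. The patent does not specify a network architecture; the ResNet-style encoder-decoder shown here is merely the kind commonly used in CycleGAN-type image translation, and all class and parameter names are illustrative.

```python
# A minimal sketch of one possible generator network (not the patent's
# specified architecture), assuming PyTorch.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection helps preserve structure

class Generator(nn.Module):
    """Maps an image from one color domain (e.g., HE-stained appearance)
    to the other (e.g., deep-UV fluorescence appearance)."""
    def __init__(self, in_ch: int = 3, out_ch: int = 3, base: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, kernel_size=7, padding=3),
            nn.InstanceNorm2d(base), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, kernel_size=3, stride=2, padding=1),  # downsample
            nn.InstanceNorm2d(base * 2), nn.ReLU(inplace=True),
            *[ResidualBlock(base * 2) for _ in range(6)],
            nn.ConvTranspose2d(base * 2, base, kernel_size=3, stride=2,
                               padding=1, output_padding=1),                # upsample
            nn.InstanceNorm2d(base), nn.ReLU(inplace=True),
            nn.Conv2d(base, out_ch, kernel_size=7, padding=3),
            nn.Tanh(),  # outputs pixel values in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)
```

 An architecture of this kind downsamples the input, transforms it through residual blocks, and upsamples it back, which tends to change color appearance while leaving the tissue structure largely intact.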
 The first generator 221 generates, from the first target image, a first translated image having the color-related features of the second image group, without significantly changing the structure of the tissue shown in the first target image. The second generator 222 generates, from the second target image, a second translated image having the color-related features of the first image group, without significantly changing the structure of the tissue shown in the second target image.
 The translated-image output unit 23 acquires and outputs the first translated image or the second translated image generated by the image translation unit 22. For example, the translated-image output unit 23 may output the first translated image or the second translated image to the display unit 5.
 The storage unit 30 may store target images 31 and translated images 32. In this case, the target images 31 may include first target images belonging to the first image group and second target images belonging to the second image group, and the translated images 32 may include translated images generated by the image translation unit 22.
(Flow of Processing Performed by Image Translation Device 1)
 Next, the flow of processing performed by the image translation device 1 will be described with reference to FIG. 3. FIG. 3 is a flowchart showing an example of the processing performed by the image translation device 1.
 First, the first input unit 21 inputs either a first target image belonging to the first image group or a second target image belonging to the second image group into the neural network (the first generator 221 or the second generator 222) (step S1: input step).
 Next, the translated-image output unit 23 outputs the first translated image generated from the first target image, or the second translated image generated from the second target image, by the image translation unit 22 (step S2: translated-image output step).
 In this way, the image translation device 1 can translate an image actually captured of tissue subjected to the first process (the first target image) into a translated image that appears as if it were an image of the tissue subjected to the second process. Likewise, the image translation device 1 can translate an image actually captured of tissue subjected to the second process (the second target image) into a translated image that appears as if it were an image of the tissue subjected to the first process.
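 As a rough illustration of steps S1 and S2, the inference flow could be written as in the sketch below, reusing the hypothetical Generator class from the previous sketch; the function and variable names are not from the patent.

```python
# A minimal sketch of the inference flow of step S1 (input step) and
# step S2 (translated-image output step), assuming the Generator above.
import torch

def translate(target_image: torch.Tensor, generator: torch.nn.Module) -> torch.Tensor:
    """Step S1: feed a target image into the trained generator.
    Step S2: return the translated image it produces."""
    generator.eval()
    with torch.no_grad():  # inference only; no gradients needed
        translated = generator(target_image)
    return translated

# Usage: translate a 256x256 RGB first target image into a first
# translated image having the color features of the second image group.
g1 = Generator()                         # first generator 221 (trained weights assumed)
x1 = torch.rand(1, 3, 256, 256) * 2 - 1  # dummy first target image in [-1, 1]
y1 = translate(x1, g1)                   # first translated image
```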
(First Process and Second Process)
 Here, the first process and the second process will each be described with reference to FIG. 4 and FIG. 5, taking as an example the case of imaging a lesion site (tissue) occurring in a living body. FIG. 4 is a diagram illustrating an example of the flow of the first process for acquiring an image of biological tissue. FIG. 5 and FIG. 6 are diagrams illustrating examples of the flow of the second process for acquiring an image of tissue.
 The first process shown in FIG. 4 is a conventional method of preparing and imaging a pathological specimen. In the first process, tissue is first collected from a living body (step S11). The collected tissue is then fixed with a fixative such as formalin and embedded using an embedding agent such as paraffin or resin (step S12). Next, the embedded tissue is sliced (step S13), and the sliced tissue is stained by a predetermined staining method (step S14). Here, slicing is a process performed using a microtome or the like, by which the tissue is typically sliced to a thickness of about several μm to 10 μm. One example of the predetermined staining method is HE staining. HE staining is one of the methods used to stain collected tissue pieces and combines staining with hematoxylin and staining with eosin. Hematoxylin stains the chromatin in the cell nucleus and the ribosomes in the cytoplasm blue-purple, whereas eosin stains cytoplasmic components and the extracellular matrix red. The stained tissue is then imaged using a bright-field microscope or the like (step S15).
 The second process shown in FIG. 5 includes neither embedding nor slicing. In the second process shown in FIG. 5, tissue is first collected from a living body (step S21). The tissue or a tissue piece is then imaged (step S22). For imaging the tissue or tissue piece, the following microscopes may be used, which can acquire images of unstained tissue that are applicable to image diagnosis and the like:
- Fluorescence microscope
- Raman microscope
- Multiphoton microscope (two-photon fluorescence microscope, three-photon fluorescence microscope, second-harmonic-generation (SHG) microscope, third-harmonic-generation (THG) microscope, stimulated Raman scattering (SRS) microscope, coherent anti-Stokes Raman scattering (CARS) microscope, etc.)
 The collected tissue may also be cut into tissue pieces. Tissue fragmentation is a process of slicing tissue to a thickness of typically about 1 mm to several mm, and differs from the slicing described above. When fragmenting the tissue, a low-temperature gas may be blown onto the surface of the tissue to temporarily fix it, so that the tissue is not deformed or crushed by the blade entering it. When a multiphoton microscope or a confocal optical microscope is used, the tissue fragmentation in step S22 is not essential. Furthermore, when imaging is performed by using these microscopes in the manner of an endoscope, the tissue collection in step S21 and the tissue fragmentation in step S22 are not essential.
 The second process shown in FIG. 6 likewise includes neither embedding nor slicing. In the second process shown in FIG. 6, tissue is first collected from a living body (step S31). The collected tissue is then stained by a predetermined staining method (step S32), and the stained tissue is imaged (step S33). For imaging the tissue, a deep-ultraviolet excitation fluorescence microscope or the like may be used, which can acquire images of unembedded tissue that are applicable to image diagnosis and the like. When a deep-ultraviolet excitation fluorescence microscope is used, the staining in step S32 is not essential.
 The image translation device 1 can, for example, translate an image of tissue processed by a newly developed processing method into a translated image resembling an image of a lesion-site specimen processed by a conventional processing method. Conversely, the image translation device 1 can translate an image of a lesion-site specimen processed by a conventional processing method into a translated image resembling an image of tissue processed by a newly developed processing method.
(Example of Image Translation by the Image Translation Device)
 FIG. 7 is a diagram showing an example of image translation by the image translation device 1. FIG. 7 shows a virtual deep-ultraviolet excitation fluorescence microscope image (first translated image) translated from an HE-stained image (first target image) of a tissue section actually observed after HE staining, and a virtual HE-stained image (second translated image) translated from an image (second target image) of a tissue section actually observed with a deep-ultraviolet excitation fluorescence microscope (no slicing, with staining).
 As shown in FIG. 7, the image translation device 1 can perform both the conversion from an HE-stained image of a tissue section actually observed after HE staining into a virtual deep-ultraviolet excitation fluorescence microscope image, and the conversion from a deep-ultraviolet excitation fluorescence microscope image into a virtual HE-stained image. Furthermore, an image diagnosis model trained in advance on HE images of cancerous tissue and normal tissue can classify cancerous tissue and normal tissue with high accuracy (for example, an AUC (Area Under the Curve, an accuracy metric) of 0.9 or higher).
 As one example, when tissue containing cancer and tissue not containing cancer are classified by applying an image diagnosis model trained on first target images to actually captured second target images, the classification accuracy is 66.4%. In contrast, when the actually captured second target images are first converted into second translated images and the same classification is performed on the second translated images using the image diagnosis model trained on the first target images, the classification accuracy is 84.6%.
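 The comparison described above could be carried out, for instance, along the following lines. This is a sketch only, assuming scikit-learn and hypothetical names (diagnosis_model, second_targets, translate_to_first_domain, labels) for the trained model, the evaluation data, and the translation step; none of these names comes from the patent.

```python
# A minimal sketch of evaluating a diagnosis model with and without
# image translation, assuming scikit-learn-style APIs.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate(model, images: np.ndarray, labels: np.ndarray) -> tuple[float, float]:
    """Returns (accuracy, AUC) for cancer / non-cancer classification."""
    scores = model.predict_proba(images)[:, 1]   # probability of "cancer"
    preds = (scores >= 0.5).astype(int)
    return accuracy_score(labels, preds), roc_auc_score(labels, scores)

# Without translation: feed raw second target images to the model.
#   acc_raw, auc_raw = evaluate(diagnosis_model, second_targets, labels)
# With translation: first convert each image to the first image group's domain.
#   translated = np.stack([translate_to_first_domain(x) for x in second_targets])
#   acc_tr, auc_tr = evaluate(diagnosis_model, translated, labels)
```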
 Here, in order to generate the translated image, the image translation unit 22 may perform negative-positive inversion on the input image as preprocessing. Performing negative-positive inversion can improve the quality of the translated image generated by the image translation unit 22, as explained below.
 For example, in a deep-ultraviolet excitation fluorescence image, the luminance of the background region where no tissue appears is low, whereas in an HE-stained image (a bright-field image) the luminance of the background region where no tissue appears is high (see FIG. 7). If negative-positive inversion is applied to the deep-ultraviolet excitation fluorescence image, the luminance of the background region of the inverted image becomes close to that of the background region of the virtual HE-stained image to be generated. Conversely, if negative-positive inversion is applied to the HE-stained image, the luminance of the background region of the inverted image becomes close to that of the background region of the virtual deep-ultraviolet excitation fluorescence image to be generated. Such domain-adaptation preprocessing contributes to improving the learning efficiency of the image translation model and, as a result, can improve the quality of the translated images.
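 Negative-positive inversion of this kind amounts to a single arithmetic operation per pixel. The following is a minimal sketch, assuming 8-bit images held as NumPy arrays; the patent does not fix a particular implementation.

```python
# A minimal sketch of negative-positive inversion preprocessing for
# 8-bit images, assuming NumPy.
import numpy as np

def negative_positive_invert(image: np.ndarray) -> np.ndarray:
    """Invert an 8-bit image so dark backgrounds become bright and
    vice versa, bringing the two domains' backgrounds closer."""
    assert image.dtype == np.uint8
    return 255 - image

# Example: a dark fluorescence background (value ~5) becomes bright (~250),
# close to the bright background of an HE-stained bright-field image.
dark_background = np.full((4, 4), 5, dtype=np.uint8)
print(negative_positive_invert(dark_background)[0, 0])  # -> 250
```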
(Usefulness of Translated Images Generated by Image Translation Device 1)
 In recent years, image diagnosis technology has been used in various fields. In image diagnosis technology, an image diagnosis model is sometimes created that outputs diagnostic information (estimation results) based on images of tissue subjected to a defined process. For example, an image diagnosis model created using, as training data, a first image group obtained by imaging tissue subjected to the first process can output highly reliable diagnostic information when an image having the color-related features of the first image group is input. However, even if an image having the color-related features of a second image group, obtained by imaging tissue subjected to the second process, is input to such an image diagnosis model, correct diagnostic information may not be obtained, because the color-related features of the first image group differ from those of the second image group.
 The image translation device 1 can generate, from a first target image having the color-related features of the first image group, a first translated image having the color-related features of the second image group, without significantly changing the structure shown in the target image. The generated first translated image is applicable to an existing image diagnosis model created using the second image group as training data; likewise, the generated second translated image is applicable to an existing image diagnosis model created using the first image group as training data. That is, by employing the image translation device 1, an image to which an existing image diagnosis model is applicable can be generated from an image to which that model is not applicable. This eliminates the need to create separate image analysis models depending on which process was applied to the imaged tissue.
 For example, within the first process, the embedding process and slicing are time-consuming and labor-intensive. Acquiring a second target image is therefore simpler than acquiring a first target image. However, because the second process is a processing method with a short history, image diagnosis models created based on second target images may still be few or incomplete. In such a case, the image translation device 1 can be used to generate a translated image from the second target image, and the translated image can be applied to an image diagnosis model created based on first target images.
 To realize image diagnosis targeting images of tissue subjected to a new process, or images acquired with a new imaging technique, it is necessary to newly accumulate such images and create a new image diagnosis model. The image translation device 1 can facilitate the creation of such a model. The image translation device 1 can translate an image of tissue processed by an existing method into an image of that tissue as if processed by a newly developed method. Using such translated images makes it possible to efficiently create a new image diagnosis model that outputs diagnostic information based on images of tissue processed by the newly developed method; a model created in this way can also diagnose tissue processed by the existing method. Conversely, the image translation device 1 may translate an image of tissue processed by a newly developed method into an image of tissue as if processed by an existing method. Using such translated images makes it possible to efficiently create a new image diagnosis model that outputs diagnostic information based on images of tissue processed by the existing method; a model created in this way can diagnose tissue processed by the newly developed method.
 For conditions that occur infrequently (for example, diseases with few cases), it has been particularly difficult to quickly realize image diagnosis targeting images of tissue subjected to a new process or images acquired with a new imaging technique, because such conditions appear rarely and few images of newly processed tissue exist. Images of tissue in such rare conditions have, in many cases, been accumulated as images of tissue processed by existing methods. The image translation device 1 can therefore translate an image of rare-condition tissue processed by an existing method into an image of that tissue as if processed by a newly developed method. Using such translated images makes it possible, even for rare conditions, to efficiently create a new image diagnosis model that outputs diagnostic information based on images of tissue processed by the newly developed method; a model created in this way can also diagnose rare-condition tissue processed by the existing method. Conversely, the image translation device 1 may translate an image of tissue processed by a newly developed method into an image of rare-condition tissue as if processed by an existing method. Using such translated images makes it possible to efficiently create a new image diagnosis model that outputs diagnostic information based on images of rare-condition tissue processed by the existing method; a model created in this way can diagnose rare-condition tissue processed by the newly developed method.
 Although a translated image is not an image obtained by actually observing tissue, it does not significantly alter the structure of the tissue. A translated image can therefore be treated as a captured image of the tissue, in the same way as an image obtained by actual observation. For example, translated images generated by the image translation device 1 can be used for training an image diagnosis model. If a translated image is generated from a first target image using the image translation device 1, that translated image can be used in training to create an image diagnosis model based on second target images.
[Embodiment 2]
 Another embodiment of the present invention is described below. For convenience of explanation, members having the same functions as those described in the above embodiment are given the same reference signs, and their description is not repeated.
(Configuration of Image Translation Device 1a)
 The image translation unit 22 of the image translation device 1 need only include a first generator 221 and a second generator 222 that extract features of an input image and generate a new image having the extracted features, and is not limited to the configuration shown in FIG. 2. For example, CycleGAN may be applied to realize the image translation unit 22.
 The configuration of an image translation device 1a including an image translation unit 22a to which CycleGAN is applied will be described with reference to FIG. 8. FIG. 8 is a functional block diagram showing another configuration example, namely that of the image translation device 1a.
 As shown in FIG. 8, the image translation unit 22a may further include a first discriminator 223 and a second discriminator 224.
 The first discriminator 223 discriminates between images included in the first image group and translated images generated by the second generator, based on a first error between the color-related features of the first image group and the color-related features of a translated image generated by the second generator.
 The second discriminator 224 discriminates between images included in the second image group and translated images generated by the first generator, based on a second error between the color-related features of the second image group and the color-related features of a translated image generated by the first generator.
<Processing of the Image Translation Unit 22a>
 FIG. 9 is a diagram illustrating an example of processing performed by the image translation unit 22a including the first discriminator 223 and the second discriminator 224.
 The first input unit 21 inputs the images of the first image group, which were input to the first generator 221, also to the first discriminator 223. The first generator 221 generates a translated image from the input image.
 The second generator 222 generates a further translated image from the translated image generated by the first generator 221. The first discriminator 223 calculates a first error between the color-related features of the translated image generated by the second generator 222 and the color-related features of the image of the first image group from which that translated image was derived.
 The first input unit 21 inputs the images of the second image group, which were input to the second generator 222, also to the second discriminator 224. The second generator 222 generates a translated image from the input image.
 The first generator 221 generates a further translated image from the translated image generated by the second generator 222. The second discriminator 224 calculates a second error between the color-related features of the translated image generated by the first generator 221 and the color-related features of the image of the second image group from which that translated image was derived.
 The image translation unit 22a generates translated images by iterating the above processing.
 To describe the subsequent processing with reference to FIG. 8, the image translation unit 22a outputs a translated image for which the first error and the second error have fallen to or below a predetermined level as the first translated image or the second translated image. The image translation unit 22a may also calculate a cycle-consistency loss based on the first error and the second error, and output a translated image for which the cycle-consistency loss is at or below a predetermined value as the first translated image or the second translated image.
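 One concrete way to realize such a cycle-consistency loss is sketched below, assuming PyTorch. The patent states the first and second errors abstractly as differences in color-related features; the L1 reconstruction error of the standard CycleGAN formulation is shown here as one possible instance, and the names g1 and g2 are illustrative.

```python
# A minimal sketch of a cycle-consistency loss, assuming PyTorch and two
# generators g1 (first generator 221: domain A -> B) and g2 (second
# generator 222: domain B -> A).
import torch
import torch.nn.functional as F

def cycle_consistency_loss(real_a: torch.Tensor, real_b: torch.Tensor,
                           g1, g2) -> torch.Tensor:
    """First error: A -> B -> A should reproduce the original A image.
    Second error: B -> A -> B should reproduce the original B image."""
    reconstructed_a = g2(g1(real_a))   # translate and translate back
    reconstructed_b = g1(g2(real_b))
    first_error = F.l1_loss(reconstructed_a, real_a)
    second_error = F.l1_loss(reconstructed_b, real_b)
    return first_error + second_error
```

 During training, a loss of this kind is typically minimized together with the adversarial losses of the two discriminators; translated images whose errors fall at or below the threshold can then be output as described above.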
 The image translation device 1a having such a configuration can generate and output highly accurate translated images.
 For example, it is impossible to newly apply the second process to, and image, the same location of the same tissue that was previously subjected to the first process. It is thus difficult (or impossible) to prepare pairs of images obtained by applying different processes to the same location of the same tissue. However, because CycleGAN can learn the relationship between the color-related features of the first image group and those of the second image group from unpaired data, there is no need to prepare such image pairs.
[Embodiment 3]
 Another embodiment of the present invention is described below. For convenience of explanation, members having the same functions as those described in the above embodiments are given the same reference signs, and their description is not repeated.
(Schematic Configuration of Image Diagnostic System 100)
 FIG. 10 is a block diagram showing a configuration example of an image diagnostic system 100 according to Embodiment 3 of the present invention.
 The image diagnostic system 100 includes the image translation device 1 or 1a and an image diagnostic device 7.
 The image diagnostic device 7 is, for example, a computer communicably connected to the image translation device 1 or 1a. As shown in FIG. 10, the image diagnostic device 7 includes a processor unit 71, a memory 72, a hard disk 73, and a display unit 74.
 The processor unit 71 reads various programs from the hard disk 73 and executes them. The processor unit 71 may be, for example, at least one of a CPU and a GPU.
 The hard disk 73 stores various programs executed by the processor unit 71. The hard disk 73 may also store various image data used by the processor unit 71 to execute the various programs.
 The memory 72 stores various data and various programs used in the processes being executed by the processor unit 71. For example, the memory 72 functions as a working memory that stores a program, loaded from the hard disk 73, that realizes a neural network structure.
 The display unit 74 may be any display for displaying images required for the processes executed by the processor unit 71 and the diagnostic information output by the processor unit 71. Note that the display unit 74 is not an essential component of the image diagnostic device 7. For example, the image diagnostic device 7 may be configured to transmit diagnostic information to an external display device (not shown) communicably connected to the image diagnostic device 7, or to the image translation device 1 or 1a.
 The image diagnostic system 100 including the image diagnostic device 7 is described below. The image diagnostic device 7 includes an image diagnosis model (a first neural network 7121 described later) that estimates the state of tissue based on an image of tissue subjected to the first process shown in FIG. 4.
 FIG. 11 is a functional block diagram showing a configuration example of the image diagnostic system 100. As shown in FIG. 11, the image diagnostic device 7 includes a control unit 710 corresponding to the processor unit 71 and the memory 72 shown in FIG. 10, a storage unit corresponding to the hard disk 73 shown in FIG. 10, and the display unit 74. The storage unit is not illustrated, for simplicity of explanation.
 The control unit 710 includes a second input unit 711, a diagnosis unit 712, and a diagnostic information output unit 713.
 The second input unit 711 inputs, to the diagnosis unit 712, the second translated image having the color-related features of the first image group, acquired from the image translation device 1 or 1a.
 The diagnosis unit 712 includes a first neural network 7121. The first neural network 7121 is a neural network (inference model) that has learned the correspondence between a first training image group obtained by imaging tissue subjected to the first process and the state of the tissue shown in each image of that group. A known supervised machine learning algorithm may be applied to the training of the first neural network 7121. Note that the training process for the first neural network 7121 may be executed on a computer different from the image diagnostic device 7. In this case, by installing the trained first neural network 7121 on any computer, that computer can be made to function as the image diagnostic device 7.
 The diagnostic information output unit 713 acquires and outputs the diagnostic information output by the diagnosis unit 712. For example, the diagnostic information output unit 713 may output the diagnostic information to the display unit 74.
 In the image diagnostic system 100, the image translation device 1 or 1a generates a second translated image from an image of tissue subjected to a new process, or an image acquired with a new imaging technique. By inputting the second translated image into the first neural network 7121, which is an existing inference model, diagnostic information (an estimation result) based on the existing inference model can be obtained. In this way, the image diagnostic system 100 can, for example, generate, from an image to which existing disease-state criteria are not applicable, a translated image to which those criteria are applicable, and output diagnostic information based on that translated image. Note that in the image diagnostic system 100, the image translation device 1 or 1a may itself incorporate the configuration of the image diagnostic device 7. For example, the image translation device 1 or 1a may include the diagnosis unit 712 having the first neural network 7121, and the diagnostic information output unit 713. In that case, the image translation device 1 or 1a generates a second translated image from an image of tissue subjected to a new process, or an image acquired with a new imaging technique, and performs estimation with the first neural network 7121 based on the second translated image. By performing estimation in this manner, the image translation device 1 or 1a can output diagnostic information (estimation results) based on the existing inference model.
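 The translate-then-diagnose flow of the image diagnostic system 100 can be summarized as in the sketch below, assuming PyTorch; first_nn is a hypothetical stand-in for the first neural network 7121, and all other names are likewise illustrative.

```python
# A minimal sketch of the translate-then-diagnose pipeline of the
# image diagnostic system 100, assuming PyTorch.
import torch

def diagnose_new_modality_image(second_target: torch.Tensor,
                                g2: torch.nn.Module,
                                first_nn: torch.nn.Module) -> torch.Tensor:
    """Translate a second target image (new modality) into the first image
    group's color domain, then apply the existing diagnosis model."""
    g2.eval()
    first_nn.eval()
    with torch.no_grad():
        second_translated = g2(second_target)    # second translated image
        diagnosis = first_nn(second_translated)  # estimation by existing model
    return diagnosis  # e.g., class scores such as cancer / normal
```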
 The second translated images generated by the image translation device 1 or 1a can also be used as at least part of the first training image group for creating the first neural network 7121.
(Modification)
 An image diagnostic system 100a including an image diagnostic device 7a is described next. The image diagnostic device 7a includes an image diagnosis model (a second neural network 7122 described later) that estimates the state of tissue based on an image of tissue subjected to the second process shown in FIG. 5.
 FIG. 12 is a functional block diagram showing a configuration example of the image diagnostic system 100a. In the image diagnostic device 7a shown in FIG. 12, the diagnosis unit 712 includes a second neural network 7122. The second neural network 7122 is a neural network (inference model) that has learned the correspondence between a second training image group obtained by imaging tissue subjected to the second process and the state of the tissue shown in each image of that group. A known supervised machine learning algorithm may be applied to the training of the second neural network 7122.
 An image of tissue captured with a new method may contain more readable features than an image captured with an existing method. The second neural network 7122, a newer inference model, may therefore be able to output more accurate diagnostic information than an existing inference model. In the image diagnostic system 100a, the image translation device 1 or 1a generates a first translated image from an image of tissue captured in the past (for example, a pathological image). By inputting the first translated image into the second neural network 7122, which is a new inference model that outputs diagnostic information based on images of tissue subjected to a new process or images acquired with a new imaging technique, diagnostic information (estimation results) based on the new inference model can be obtained.
 The first translated images generated by the image translation device 1 or 1a can also be used as at least part of the second training image group for creating the second neural network 7122.
 Note that the image diagnostic device 7a may be configured so that the diagnosis unit 712 has the functions of both the first neural network 7121 and the second neural network 7122. In that case, it suffices to switch which neural network is used depending on whether the translated image acquired from the image translation device 1 or 1a is a first translated image or a second translated image.
[Software Implementation Example]
 The functions of the image translation devices 1 and 1a (hereinafter, "the device") can be realized by a program for causing a computer to function as the device, that is, a program for causing a computer to function as each control block of the device (in particular, each unit included in the control units 20 and 20a).
 In this case, the device includes, as hardware for executing the program, a computer having at least one control device (for example, a processor) and at least one storage device (for example, a memory). Each function described in the above embodiments is realized by executing the program with this control device and storage device.
 The program may be recorded on one or more non-transitory, computer-readable recording media. The recording media may or may not be included in the device. In the latter case, the program may be supplied to the device via any wired or wireless transmission medium.
 Some or all of the functions of the control blocks can also be realized by logic circuits. For example, an integrated circuit in which logic circuits functioning as the control blocks are formed is also included in the scope of the present invention. Besides this, the functions of the control blocks can also be realized by, for example, a quantum computer.
[Summary]
 An image translation device according to aspect 1 of the present invention includes: an image translation unit including a first generator and a second generator that have learned the relationship between color-related features of a first image group, obtained by imaging tissue subjected to a first process including an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and color-related features of a second image group, obtained by imaging tissue subjected to a second process including neither the embedding process nor the slicing; a first input unit that inputs either a first target image belonging to the first image group or a second target image belonging to the second image group to the image translation unit; and a translated-image output unit that outputs a first translated image generated from the first target image or a second translated image generated from the second target image. The first generator generates, from the first target image, the first translated image having the color-related features of the second image group, and the second generator generates, from the second target image, a second translated image having the color-related features of the first image group.
 In an image translation device according to aspect 2 of the present disclosure, in aspect 1 above, the image translation unit may include: a first discriminator that discriminates between images included in the first image group and translated images generated by the second generator, based on a first error between the color-related features of the first image group and the color-related features of a translated image generated by the second generator; and a second discriminator that discriminates between images included in the second image group and translated images generated by the first generator, based on a second error between the color-related features of the second image group and the color-related features of a translated image generated by the first generator; and the image translation unit may output a translated image for which the first error and the second error are at or below a predetermined level as the first translated image or the second translated image.
 In an image translation device according to aspect 3 of the present disclosure, in aspect 1 or 2 above, the tissue may have a structure formed by an aggregation of any of cells, fungi, and bacteria.
 In an image translation device according to aspect 4 of the present disclosure, in any of aspects 1 to 3 above, the second target image may be captured using a deep-ultraviolet excitation fluorescence microscope.
 An image diagnostic system according to aspect 5 of the present disclosure is an image diagnostic system including the image translation device according to any of aspects 1 to 4 above and an image diagnostic device, the image diagnostic device including: a diagnosis unit including at least one of a first neural network that has learned the correspondence between a first training image group obtained by imaging tissue subjected to the first process and the state of the tissue shown in each image of that group, and a second neural network that has learned the correspondence between a second training image group obtained by imaging tissue subjected to the second process and the state of the tissue shown in each image of that group; a second input unit that inputs the first translated image or the second translated image to the diagnosis unit; and a diagnostic information output unit that outputs diagnostic information, output by the diagnosis unit, regarding the state of the tissue shown in the first translated image or the second translated image.
 An image translation method according to aspect 6 of the present disclosure includes: an input step of inputting either a first target image belonging to a first image group or a second target image belonging to a second image group into a neural network including a first generator and a second generator that have learned the relationship between color-related features of the first image group, obtained by imaging tissue subjected to a first process including an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and color-related features of the second image group, obtained by imaging tissue subjected to a second process including neither the embedding process nor the slicing; and a translated-image output step of outputting a first translated image generated from the first target image or a second translated image generated from the second target image. The first generator generates, from the first target image, the first translated image having the color-related features of the second image group, and the second generator generates, from the second target image, a second translated image having the color-related features of the first image group.
 A control program according to aspect 7 of the present disclosure is a control program for causing a computer to function as the image translation device according to any of aspects 1 to 4 above, the control program causing a computer to function as the image translation unit, the first input unit, and the translated-image output unit.
 A recording medium according to aspect 8 of the present disclosure is a computer-readable recording medium on which the control program according to aspect 7 is recorded.
 The present invention is not limited to the embodiments described above; various modifications are possible within the scope of the claims, and embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included within the technical scope of the present invention.
1, 1a Image translation device
7, 7a Image diagnostic device
21 First input unit
22, 22a Image translation unit
23 Translated image output unit
100, 100a Diagnostic imaging system
221 First generator
222 Second generator
223 First discriminator
224 Second discriminator
7121 First neural network
7122 Second neural network
S1 Input step
S2 Translated image output step

Claims (8)

  1.  An image translation device comprising:
     an image translation unit comprising a first generator and a second generator, the image translation unit having learned a relationship between color-related features of a first image group obtained by imaging tissue subjected to a first process including an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and color-related features of a second image group obtained by imaging tissue subjected to a second process not including the embedding process and the slicing;
     a first input unit that inputs either a first target image belonging to the first image group or a second target image belonging to the second image group to the image translation unit; and
     a translated image output unit that outputs a first translated image generated from the first target image or a second translated image generated from the second target image,
     wherein the first generator generates, from the first target image, the first translated image having the color-related features of the second image group, and
     the second generator generates, from the second target image, a second translated image having the color-related features of the first image group.
  2.  The image translation device according to claim 1, wherein the image translation unit comprises:
     a first discriminator that discriminates between images included in the first image group and translated images generated by the second generator, based on a first error between the color-related features of the first image group and the color-related features of the translated images generated by the second generator; and
     a second discriminator that discriminates between images included in the second image group and translated images generated by the first generator, based on a second error between the color-related features of the second image group and the color-related features of the translated images generated by the first generator,
     and wherein the image translation unit outputs, as the first translated image or the second translated image, a translated image for which the first error and the second error are at or below a predetermined level.
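     The paired-discriminator arrangement recited in claim 2 resembles the adversarial setup of unpaired image-to-image translation networks such as CycleGAN. The sketch below is illustrative only and is not the claimed training procedure: the least-squares objective, the ERROR_LEVEL threshold, and every identifier are assumptions introduced for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Tiny PatchGAN-style discriminator scoring local color statistics."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, kernel_size=4, stride=2, padding=1),  # per-patch score
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Stand-ins for the trained first and second generators.
first_generator = nn.Sequential(nn.Conv2d(3, 3, kernel_size=1), nn.Tanh())
second_generator = nn.Sequential(nn.Conv2d(3, 3, kernel_size=1), nn.Tanh())

first_discriminator = PatchDiscriminator()   # first group vs. second generator's output
second_discriminator = PatchDiscriminator()  # second group vs. first generator's output

def adversarial_errors(first_batch: torch.Tensor, second_batch: torch.Tensor):
    """Least-squares proxies for the first and second errors of claim 2."""
    first_translated = first_generator(first_batch)     # second group's color features
    second_translated = second_generator(second_batch)  # first group's color features
    score_1 = first_discriminator(second_translated)
    score_2 = second_discriminator(first_translated)
    # An error near zero means the discriminator can no longer separate the
    # translated images from the corresponding real image group.
    first_error = F.mse_loss(score_1, torch.ones_like(score_1))
    second_error = F.mse_loss(score_2, torch.ones_like(score_2))
    return first_error, second_error, first_translated, second_translated

ERROR_LEVEL = 0.1  # hypothetical "predetermined level"
first_batch = torch.rand(2, 3, 128, 128) * 2 - 1
second_batch = torch.rand(2, 3, 128, 128) * 2 - 1
e1, e2, t1, t2 = adversarial_errors(first_batch, second_batch)
if e1.item() <= ERROR_LEVEL and e2.item() <= ERROR_LEVEL:
    pass  # t1 / t2 may be output as the first / second translated images
```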
  3.  The image translation device according to claim 1, wherein the tissue has a structure formed by an aggregation of any of cells, fungi, and bacteria.
  4.  The image translation device according to claim 1, wherein the second target image is captured using a deep-ultraviolet excitation fluorescence microscope, a second-harmonic generation (SHG) microscope, a stimulated Raman scattering (SRS) microscope, a coherent anti-Stokes Raman scattering (CARS) microscope, or a fluorescence microscope.
  5.  A diagnostic imaging system comprising the image translation device according to claim 1 and an image diagnostic device,
     wherein the image diagnostic device comprises:
     a diagnosis unit comprising at least one of a first neural network that has learned a correspondence between a first group of training images, obtained by imaging tissue subjected to the first process, and the state of the tissue shown in each image of that group, and a second neural network that has learned a correspondence between a second group of training images, obtained by imaging tissue subjected to the second process, and the state of the tissue shown in each image of that group;
     a second input unit that inputs the first translated image or the second translated image to the diagnosis unit; and
     a diagnostic information output unit that outputs diagnostic information, output from the diagnosis unit, regarding the state of the tissue shown in the first translated image or the second translated image.
  6.  An image translation method comprising:
     an input step of inputting either a first target image belonging to a first image group or a second target image belonging to a second image group to a neural network comprising a first generator and a second generator, the neural network having learned a relationship between color-related features of the first image group, obtained by imaging tissue subjected to a first process including an embedding process of embedding the tissue in a predetermined embedding agent and slicing, and color-related features of the second image group, obtained by imaging tissue subjected to a second process not including the embedding process and the slicing; and
     a translated image output step of outputting a first translated image generated from the first target image or a second translated image generated from the second target image,
     wherein the first generator generates, from the first target image, the first translated image having the color-related features of the second image group, and
     the second generator generates, from the second target image, a second translated image having the color-related features of the first image group.
  7.  A control program for causing a computer to function as the image translation device according to claim 1, the control program causing the computer to function as the image translation unit, the first input unit, and the translated image output unit.
  8.  A computer-readable recording medium on which the control program according to claim 7 is recorded.
PCT/JP2023/034225 2022-09-21 2023-09-21 Image translation device, diagnostic imaging system, image translation method, control program, and recording medium WO2024063119A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-150643 2022-09-21
JP2022150643 2022-09-21

Publications (1)

Publication Number Publication Date
WO2024063119A1 true WO2024063119A1 (en) 2024-03-28

Family

ID=90454639

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/034225 WO2024063119A1 (en) 2022-09-21 2023-09-21 Image translation device, diagnostic imaging system, image translation method, control program, and recording medium

Country Status (1)

Country Link
WO (1) WO2024063119A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018192264A * 2017-05-18 2018-12-06 Canon Medical Systems Corporation Medical image processing apparatus
JP2021513065A * 2018-02-12 2021-05-20 F. Hoffmann-La Roche AG Conversion of digital pathological images
JP2021519924A * 2018-03-30 2021-08-12 The Regents of the University of California Methods and systems for digitally staining unlabeled fluorescent images using deep learning
JP2022068043A * 2020-10-21 2022-05-09 Canon Medical Systems Corporation Medical image processing device and medical image processing system


Similar Documents

Publication Publication Date Title
JP7344568B2 (en) Method and system for digitally staining label-free fluorescent images using deep learning
JP2021508373A (en) Automatic screening of tissue samples for histopathological examination by analysis of normal models
Lopez et al. Assessing deep learning methods for the identification of kidney stones in endoscopic images
Shen et al. Deep learning autofluorescence-harmonic microscopy
Mikołajczyk et al. Towards explainable classifiers using the counterfactual approach-global explanations for discovering bias in data
US20220237783A1 (en) Slide-free histological imaging method and system
Cai et al. Stain style transfer using transitive adversarial networks
Szczotka et al. Zero-shot super-resolution with a physically-motivated downsampling kernel for endomicroscopy
Terradillos et al. Analysis on the characterization of multiphoton microscopy images for malignant neoplastic colon lesion detection under deep learning methods
WO2024063119A1 (en) Image translation device, diagnostic imaging system, image translation method, control program, and recording medium
WO2019171909A1 (en) Image processing method, image processing device, and program
Sharma et al. Modified gan augmentation algorithms for the mri-classification of myocardial scar tissue in ischemic cardiomyopathy
Haddadi et al. A novel medical image enhancement algorithm based on CLAHE and pelican optimization
JP7470339B2 (en) Dye image estimator learning device, image processing device, dye image estimator learning method, image processing method, dye image estimator learning program, and image processing program
CN115984107A (en) Self-supervision multi-mode structure light microscopic reconstruction method and system
Kanakatte et al. Surgical smoke dehazing and color reconstruction
EP3918577A1 (en) Systems, methods, and media for automatically transforming a digital image into a simulated pathology image
Jung et al. Integration of deep learning and graph theory for analyzing histopathology whole-slide images
WO2021198247A1 (en) Optimal co-design of hardware and software for virtual staining of unlabeled tissue
de Haan et al. Deep Learning-Based Virtual Staining of Unlabeled Tissue Samples
CN117351196B (en) Image segmentation method, device, computer equipment and storage medium
de Haan et al. Deep learning-based transformation of the H&E stain into special stains
CN115036011B (en) System for solid tumor prognosis evaluation based on digital pathological image
Yang et al. Virtual histological stain transformations through cascaded deep neural networks
Tanachotnarangkun et al. A Framework for Generating an ICGA from a Fundus Image using GAN