WO2022113367A1 - Image conversion method, program, and image processing device - Google Patents

Image conversion method, program, and image processing device

Info

Publication number
WO2022113367A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
trained model
trained
biological sample
learning
Prior art date
Application number
PCT/JP2020/044565
Other languages
French (fr)
Japanese (ja)
Inventor
Takahiko Yoshida (隆彦 吉田)
Chika Sakamoto (ちか 坂本)
Chieko Nakata (千枝子 中田)
Original Assignee
Nikon Corporation (株式会社ニコン)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corporation
Priority to PCT/JP2020/044565
Publication of WO2022113367A1

Links

Images

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements

Landscapes

  • Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Microscopes, Condensers (AREA)
  • Image Processing (AREA)

Abstract

A method for converting an image that is not a phase contrast image by using a trained model, the method comprising: a first step of acquiring a first image of a biological sample, the first image not being a phase contrast image; a second step of generating a second image by performing smoothing processing on at least a partial region of the first image; and a third step of converting the second image into a third image by using the trained model, wherein the trained model has been trained by using, as learning data, a learning source image group composed of images that are not phase contrast images and a learning target image group composed of phase contrast images.

Description

Image conversion method, program, and image processing device
The present invention relates to an image conversion method, a program, and an image processing apparatus.
Conventionally, a technique is known in which AI converts a bright-field image of cells into a phase-contrast-like image (see, for example, Patent Document 1).
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2020-60822
The technique of the present disclosure provides a novel image conversion method.
One embodiment of the present disclosure is a method for converting an image of a biological sample by using a trained model, the method comprising: a first step of acquiring a first image of the biological sample, the first image not being a phase contrast image; a second step of performing smoothing processing on at least a partial region of the first image to generate a second image; and a third step of converting the second image into a third image by using the trained model, wherein the trained model has been trained by using, as learning data, a learning source image group composed of images that are not phase contrast images and a learning target image group composed of phase contrast images.
Another embodiment of the present disclosure is a method for converting an image of a biological sample by using a trained model, the method comprising: a step of acquiring a first image of the biological sample captured by a first technique; a step of selecting, from a plurality of different trained models, a trained model suited to the first technique, based on information on the first technique; a step of converting the original image, by using the selected trained model, into an image resembling an image captured by a second technique different from the first technique; and a step of outputting the converted image, wherein the trained model has been trained by using, as learning data, a learning source image group composed of images captured by the first technique and a learning target image group composed of images captured by the second technique.
FIG. 1 is a schematic diagram of an image processing system according to one embodiment of the present invention.
FIG. 2 is a flowchart of an image processing method according to one embodiment of the present invention.
FIG. 3 shows, in one example of the present invention, images before and after conversion when a bright-field image captured by a camera is converted into an artificial phase contrast image by a trained model; the same image was converted (A) without noise reduction and (B) after noise reduction.
FIG. 4 shows, in one example of the present invention, images before and after conversion when a bright-field image captured by a transmission detector is converted into an artificial phase contrast image by a trained model; the same image was converted (A) without noise reduction and (B) after noise reduction.
FIG. 5 shows, in one example of the present invention, the original image (A), the image after processing with a variance filter (B), and the image after subsequent binarization (C), used to determine that the imaging condition is at or above a predetermined magnification.
FIG. 6 is an image used, in one example of the present invention, to determine whether the magnification of the objective lens is at or above a predetermined magnification.
Embodiments of the technique of the present disclosure are described in detail below with reference to the accompanying drawings. The embodiments and specific examples described below are presented for illustration or explanation and do not limit the present invention.
The image processing described in the examples is merely illustrative; when the technique of the present disclosure is implemented, unnecessary steps may be deleted, new steps may be added, and the processing order may be rearranged within a range that does not depart from the gist.
All documents (patent and non-patent) and technical standards mentioned herein are incorporated by reference to the same extent as if each were specifically and individually indicated to be incorporated by reference.
One embodiment of the technique of the present disclosure is a method for converting an image of a biological sample by using a trained model, comprising: a first step of acquiring a first image of the biological sample, the first image not being a phase contrast image; a second step of performing smoothing processing on at least a partial region of the first image to generate a second image; and a third step of converting the second image into a third image by using the trained model, wherein the trained model has been trained by using, as learning data, a learning source image group composed of images that are not phase contrast images and a learning target image group composed of phase contrast images.
Each step is described in detail below with reference to the system configuration of FIG. 1 and the flowchart of FIG. 2.
<Image acquisition>
The image conversion system of the present disclosure includes an image acquisition device 101 and an image conversion device 103 (FIG. 1).
First, the image conversion device 103 uses its image acquisition unit 111 to acquire, from the image acquisition device 101, an image of a biological sample captured by an imaging means (S11).
Here, the image acquisition device 101 has an observation means 106 and an imaging means 107 for the biological sample. The observation means 106 is a microscope suited to observing biological samples, for example a bright-field microscope, a fluorescence microscope, or a differential interference contrast microscope. The imaging means 107 is an image sensor such as a CCD. The images captured by these techniques are, respectively, a bright-field image, a fluorescence image, and a differential interference contrast image. Because the image acquired by the image acquisition device 101 precedes the image processing described later, it may be referred to as the original image.
If the amount of light is low during imaging, the subsequent image conversion may not work well. Therefore, when imaging at high magnification, for example 40x to 100x, the exposure time may be lengthened or the light source power may be increased.
<Contrast enhancement>
First, the image preprocessing unit 113 may perform processing that enhances the contrast of the image (S12). Any known method may be used; examples include tone mapping, histogram equalization, and local histogram equalization.
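For illustration, a minimal sketch of such contrast enhancement, assuming OpenCV and an 8-bit grayscale microscope image (the file name and CLAHE parameters are placeholder assumptions, not taken from the disclosure):

```python
import cv2

# Load an 8-bit grayscale microscope image (placeholder path).
img = cv2.imread("brightfield.png", cv2.IMREAD_GRAYSCALE)

# Global histogram equalization.
eq_global = cv2.equalizeHist(img)

# Local histogram equalization: CLAHE equalizes per tile under a clip limit.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
eq_local = clahe.apply(img)
```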
<Possibility of artifacts>
As shown in FIGS. 3 and 4, image conversion, particularly of high-magnification images, may produce structures or patterns that cannot be recognized in the original image (hereinafter, artifacts) in the background region where no cells are present.
The noise reduction unit 115 therefore determines whether artifacts may occur in the image to be converted (S13). For example, it can judge that artifacts may occur in the background when the background region occupies at least a predetermined proportion of the image, or when the imaging condition is high magnification or at least a predetermined magnification.
Whether the background region occupies at least a predetermined proportion of the image can be judged, for example, by detecting the positions where the observation object is present, computing the area occupied by the object, subtracting that area from the total area to obtain the background area, computing the ratio of the background area to the total area, and determining whether the ratio is at least a predetermined value. The predetermined proportion is not particularly limited, but it may be 40%, is preferably 50%, and is more preferably 60%.
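A minimal sketch of this judgment, using Otsu thresholding as an illustrative stand-in for the object detection (the 50% proportion follows the text; everything else is an assumption):

```python
import cv2
import numpy as np

def background_ratio(img: np.ndarray) -> float:
    """Fraction of an 8-bit grayscale image judged to be background."""
    # Otsu's method picks a threshold separating object from background
    # (depending on the contrast, the mask polarity may need inversion).
    _, obj_mask = cv2.threshold(img, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    object_area = np.count_nonzero(obj_mask)
    background_area = obj_mask.size - object_area  # total minus object
    return background_area / obj_mask.size

img = cv2.imread("brightfield.png", cv2.IMREAD_GRAYSCALE)
artifact_risk = background_ratio(img) >= 0.5  # the predetermined proportion
```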
Whether the imaging condition is at least a predetermined magnification, that is, whether the objective lens magnification is at least the predetermined magnification, can be determined, for example, from a binarized version of the image. First, the image is processed with a variance filter (FIG. 5). Next, the variance-filtered image is binarized to generate a binarized image (FIG. 5). Using this binarized image, it is determined whether the objective magnification is at least the predetermined magnification (FIG. 6): if any subregion of the binarized image contains no object, the image can be judged to be high magnification (FIG. 6, right). Alternatively, magnification information for the image may be obtained from data attached to the image, and this data may be included in its metadata. The predetermined value is not particularly limited, but when the magnification information is the objective lens magnification, it may be 20x, is preferably 30x, and is more preferably 40x.
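A sketch of this determination; the kernel size, variance threshold, and subregion grid are illustrative assumptions that the disclosure does not specify:

```python
import cv2
import numpy as np

def looks_high_magnification(img, ksize=15, var_thresh=50.0, grid=4):
    """Variance filter -> binarize -> look for object-free subregions."""
    f = img.astype(np.float32)
    mean = cv2.blur(f, (ksize, ksize))
    mean_sq = cv2.blur(f * f, (ksize, ksize))
    variance = mean_sq - mean * mean                   # local variance image

    binary = (variance > var_thresh).astype(np.uint8)  # 1 = textured/object

    # Judge "high magnification" if any subregion contains no object pixels.
    h, w = binary.shape
    for i in range(grid):
        for j in range(grid):
            tile = binary[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            if not tile.any():
                return True
    return False
```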
<Preprocessing>
The noise reduction unit 115 performs processing that reduces noise that could cause artifacts in regions where artifacts may occur. Specifically, for an image judged to be at risk of artifacts that contains an object region in which the object is imaged and a background region that does not contain the object, smoothing is applied to part or all of the background region (S14).
When at least part of the background region is processed, the object region may be identified in advance so that only at least part of the background region is processed. The object region under observation is thereby kept as in the original image, and noise is reduced only in the background.
Alternatively, the entire image, including the object region, may be processed without identifying the object region, which simplifies the processing. In that case, the smoothing affects not only the background region but also the object region, so it is preferable to adjust the smoothing parameters so that the object region has the desired resolution after image conversion. Alternatively, an image converted after noise reduction may be composited with an image converted without noise reduction. For example, two copies of the same image are prepared; the first copy is smoothed in its entirety and converted, the second copy is converted without smoothing, the background region is cut out of the converted first copy and the object region out of the converted second copy, and the two are combined to generate a converted image in which only the background region has been processed.
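A sketch of this compositing for a grayscale image, using reduction as the smoothing (one concrete choice among the options described next); `convert` is a placeholder for the trained model's inference function:

```python
import cv2
import numpy as np

def composite_background_only(img, object_mask, convert, scale=0.5):
    """Convert twice; keep the object from the unsmoothed conversion.

    img: 8-bit grayscale original; object_mask: nonzero where the object is;
    convert: placeholder for the trained model's inference function.
    """
    h, w = img.shape
    # Copy 1: smooth by reduction, convert, then restore the original size.
    small = cv2.resize(img, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_CUBIC)
    bg_version = cv2.resize(convert(small), (w, h),
                            interpolation=cv2.INTER_LINEAR)
    # Copy 2: convert without smoothing.
    obj_version = convert(img)
    # Object region from copy 2, background region from copy 1.
    return np.where(object_mask > 0, obj_version, bg_version)
```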
Examples of the smoothing processing include smoothing in the spatial direction and smoothing in the luminance direction.
One example of spatial smoothing is subsampling the image in the spatial direction, which reduces the image size. A known method such as bicubic interpolation can be used for the interpolation when reducing the image. As described above, the reduction may be applied only to the background region, so that the resolution of the object region under observation is preserved and only the processed background region loses resolution. The image may be reduced to, for example, 90%, 80%, 75%, 70%, 60%, 50%, 40%, 30%, 25%, or 20% of its original size. The greater the reduction, the stronger the artifact-removal effect, so a large reduction is preferable when producing a converted image in which only the background region is processed. On the other hand, because reduction lowers the resolution of the object region after conversion, a small reduction is preferable when the entire image, including the object region, is processed without identifying the object region. The reduction ratio may also differ between the background region and the object region.
One example of smoothing in the luminance direction is processing that corrects the luminance values of the processed portion to make them constant. Another example is filtering such as the well-known bilateral filter; other well-known noise reduction processing may also be used. When correcting luminance values, it is desirable to divide the image into a cell region and a background region and apply the processing only to the background region.
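A sketch of these two luminance-direction options, with an Otsu-based background mask standing in for the cell/background division described above; the filter parameters are illustrative assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("brightfield.png", cv2.IMREAD_GRAYSCALE)

# Background mask (stand-in for the cell/background segmentation; the
# polarity may need inversion depending on the contrast of the sample).
_, obj_mask = cv2.threshold(img, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
background = obj_mask == 0

# Option 1: correct the background luminance to a constant (its median).
flattened = img.copy()
flattened[background] = int(np.median(img[background]))

# Option 2: bilateral filter (edge-preserving), kept only in the background.
smooth = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
result = np.where(background, smooth, img)
```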
Because these smoothing operations reduce noise before image conversion, they constitute noise reduction processing, and the artifacts after image conversion decrease.
After preprocessing and before the conversion processing, the image may be divided into small regions (S15). If the image is divided without overlap, the boundaries will be conspicuous when the tiles are recombined after image conversion. It is therefore preferable to divide with overlapping boundaries; when the tiles are recombined after image conversion, weighting the overlapping portions yields an image that is uniform overall.
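A sketch of such overlapped division; the tile size and overlap are illustrative, and the image is assumed to be at least one tile in each dimension:

```python
import numpy as np

def split_with_overlap(img: np.ndarray, tile: int = 256, overlap: int = 32):
    """Return (y, x, patch) tuples whose boundaries overlap by `overlap`."""
    step = tile - overlap
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - overlap, step):
        for x in range(0, w - overlap, step):
            y0 = min(y, h - tile)  # clamp the final row/column of tiles
            x0 = min(x, w - tile)
            patches.append((y0, x0, img[y0:y0 + tile, x0:x0 + tile]))
    return patches
```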
<Image conversion>
The smoothed image, or each of the small regions into which it has been divided, is input to the trained model held by the image conversion unit and converted (S16).
In the technique of the present disclosure, it is preferable to use an image conversion network, with images belonging to a first image group as source images and images of a second image group different from the first image group as target images, to learn the mapping from source to target and generate a trained model. The image conversion network is not particularly limited; examples include convolutional neural networks (CNN), generative adversarial networks (GAN), conditional GAN (CGAN), deep convolutional GAN (DCGAN), Pix2Pix, and CycleGAN. Because the images in the first and second image groups need not share the same field of view during training, CycleGAN is preferred.
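Purely as a sketch of the inference step, assuming a CycleGAN generator trained elsewhere and exported as TorchScript; the file name and the [-1, 1] normalization follow common CycleGAN implementations and are assumptions, not taken from the disclosure:

```python
import numpy as np
import torch

# Placeholder path to a generator saved with torch.jit.save().
gen = torch.jit.load("generator.pt").eval()

def convert(tile: np.ndarray) -> np.ndarray:
    """Convert one grayscale tile (H, W), values 0..255, to the target style."""
    x = torch.from_numpy(tile.astype(np.float32))
    x = (x / 127.5 - 1.0).unsqueeze(0).unsqueeze(0)   # NCHW, range [-1, 1]
    with torch.no_grad():
        y = gen(x)
    y = (y.squeeze() + 1.0) * 127.5                   # back to [0, 255]
    return y.clamp(0, 255).byte().numpy()
```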
When divided small regions are used as the input, the image post-processing unit combines the converted small regions obtained (S17) to generate the third image.
Here, the third image means an image obtained by inputting, into a model trained with images belonging to the first image group as source images and images of the second image group as target images, an image that belongs to the first image group but not to the second image group, and converting it. The third image can be said to be an image rendered in the style of the second image group while the positions of the objects captured in the input image remain unchanged. When the target images are phase contrast images, the third image may be called an artificial phase contrast image or a pseudo phase contrast image.
When combining, it is preferable to weight the overlapping portions. Doing so makes the luminance of the entire image uniform and renders the division boundaries inconspicuous.
As an example of the weighting, when weighting divided region 1 and divided region 2, for a pixel in the overlapping portion whose distances to divided region 1 and divided region 2 are a (pixels) and b (pixels), respectively, the luminances of the original images can be summed with weights inversely proportional to the distances. That is, with weight w1 = b/(a+b) for divided region 1 and weight w2 = a/(a+b) for divided region 2, the luminance of the corresponding pixel of the combined image is I = w1 × I1 + w2 × I2 (where I1 and I2 are the luminances of the corresponding pixels of divided region 1 and divided region 2).
The method of computing the weights is not limited to this, and a different formula may be used; even then, for pixels in the overlapping portion it is preferable to increase the weight corresponding to the luminance of the nearer divided region, according to the distances to divided region 1 and divided region 2.
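A sketch of the inverse-distance weighting above, for a horizontal overlap between two tiles of equal shape (the w1 = b/(a+b), w2 = a/(a+b) rule from the text; an overlap of at least two pixels is assumed):

```python
import numpy as np

def blend_horizontal_overlap(strip1: np.ndarray, strip2: np.ndarray):
    """Blend two overlapping strips of shape (H, N) column by column.

    A pixel in column i lies at distance a = i from divided region 1 and
    b = (N - 1) - i from divided region 2, so I = w1*I1 + w2*I2 with
    w1 = b / (a + b) and w2 = a / (a + b).
    """
    n = strip1.shape[1]
    a = np.arange(n, dtype=np.float32)
    b = (n - 1) - a
    w1 = (b / (a + b))[np.newaxis, :]  # shape (1, N), broadcasts over rows
    return w1 * strip1 + (1.0 - w1) * strip2
```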
As one example of image conversion, a microscope image other than a phase contrast image can be input into a trained model trained with microscope images other than phase contrast images as source images and phase contrast microscope images as target images, to output an artificial phase contrast image resembling a phase contrast image. The microscope image other than a phase contrast image may be an image captured at a magnification of 20x or more, but a magnification of 10x or more is preferable, and 4x or more is more preferable.
Microscope images other than phase contrast images that can serve as source images include bright-field images, fluorescence images, and differential interference contrast images, and a trained model can be created separately for each. From these, it is preferable to select the trained model best suited to the image to be converted; to that end, it is preferable to select a trained model whose source images were captured by the same technique as the image to be converted.
Images captured at different objective lens magnifications may also be used as learning data for the source and target images, and a trained model may be trained for each magnification. In that case, it is preferable to select the trained model whose learning data have the same magnification as the image to be converted.
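A sketch of such a selection step; the registry keys and file names are illustrative assumptions:

```python
# Trained models indexed by (imaging technique, objective magnification).
MODEL_REGISTRY = {
    ("brightfield", 20): "bf_20x_generator.pt",
    ("brightfield", 60): "bf_60x_generator.pt",
    ("fluorescence", 60): "fluo_60x_generator.pt",
    ("dic", 60): "dic_60x_generator.pt",
}

def select_model(technique: str, magnification: int) -> str:
    """Pick the model whose source images match the input image's
    acquisition technique and objective magnification."""
    key = (technique, magnification)
    if key not in MODEL_REGISTRY:
        raise ValueError(f"no trained model for {technique} at {magnification}x")
    return MODEL_REGISTRY[key]
```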
<Post-processing>
The image post-processing unit determines whether the third image obtained by combination was reduced during preprocessing (S18). If the image was not reduced, the third image is output as it is (S20). If the image was reduced, enlargement processing is applied to the reduced portion to generate a fourth image (S19), and the fourth image is output (S20).
<Output of the artificial phase contrast image>
Finally, the image output unit 121 outputs the generated artificial phase contrast image and causes the image display means 123 to display it. At this time, the image display means 123 may display it together with the original image and/or an image of only the region where cells are present.
The image conversion processing and the output of the artificial phase contrast image may be performed while the sample is being observed in the imaging apparatus. In this case, preprocessing and image conversion are performed as soon as the original image is acquired, and the artificial phase contrast image is displayed in real time on the image display means 123. The user can thereby operate the microscope while viewing the image display means 123 on which the artificial phase contrast image is displayed.
<Program and storage medium>
A program that causes a computer to execute the image processing method described above, and a non-transitory computer-readable storage medium storing the program, are also embodiments of the technique of the present disclosure.
HEK293 cells were seeded in a glass-bottom dish 35 mm in diameter, and a bright-field image (captured with a camera or a transmission detector) and a phase contrast image were captured at 50% confluence (60x objective lens).
A trained model trained with CycleGAN was prepared, with multiple bright-field images or transmitted bright-field images as the sources and multiple phase contrast images as the targets.
Separately, a bright-field image or transmitted bright-field image different from the sources was reduced by 50%, input into the trained model, and converted into an artificial phase contrast image using the generator. The resulting image was enlarged by a factor of two using linear interpolation to obtain the converted image. As a comparative example, a bright-field image was converted into an artificial phase contrast image without the reduction and enlargement processing. The images before and after conversion are shown in FIGS. 3 and 4.
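A sketch of this reduce-convert-enlarge flow, reusing the `convert` function sketched earlier; the file name and the interpolation used for the reduction are assumptions, while the 2x linear enlargement follows the text:

```python
import cv2

img = cv2.imread("brightfield_60x.png", cv2.IMREAD_GRAYSCALE)

# 50% reduction before input into the trained model.
small = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

pseudo_small = convert(small)  # artificial phase contrast image (half size)

# Enlarge 2x with linear interpolation to obtain the converted image.
pseudo = cv2.resize(pseudo_small, (img.shape[1], img.shape[0]),
                    interpolation=cv2.INTER_LINEAR)
```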
As is clear from FIGS. 3 and 4, converting a bright-field image or transmitted bright-field image into a phase contrast image with an image conversion network produces white dot-like noise, an artifact, in the background. In such cases, the technique of the present disclosure made it possible to remove the noise.

Claims (17)

1. A method for converting an image that is not a phase contrast image by using a trained model, the method comprising:
     a first step of acquiring a first image of a biological sample, the first image not being a phase contrast image;
     a second step of performing smoothing processing on at least a partial region of the first image to generate a second image; and
     a third step of converting the second image into a third image by using the trained model,
     wherein the trained model has been trained by using, as learning data, a learning source image group composed of images that are not phase contrast images and a learning target image group composed of phase contrast images.
2. The method according to claim 1, wherein the smoothing processing is processing for correcting luminance values of the at least partial region of the first image.
3. The method according to claim 2, wherein the smoothing processing is processing for reducing the at least partial region of the first image.
4. The method according to any one of claims 1 to 3, further comprising a fourth step of outputting the generated third image.
5. The method according to claim 3, comprising a fifth step of enlarging the at least partial region of the third image to generate a fourth image.
6. The method according to claim 5, further comprising a step of outputting the fourth image.
7. The method according to any one of claims 1 to 6, wherein the third step includes:
     a substep of dividing the second image into small regions;
     a substep of inputting each of the small regions into the trained model and converting it; and
     a substep of combining the converted small regions to generate the third image.
8. The method according to claim 5, wherein the smoothing processing is performed on the entirety of the first image in the second step, and the entirety of the third image is enlarged in the fifth step, the method comprising:
     a sixth step of inputting the first image into the trained model and converting it into a fifth image; and
     a seventh step of generating the fourth image by combining a first region in the fourth image with a second region, different from the first region, in the fifth image.
9. The image conversion method according to any one of claims 1 to 8, wherein the first image includes an object region in which the biological sample is imaged and a background region that does not include the biological sample, and the partial region is part or all of the background region.
10. The image conversion method according to claim 9, comprising a step of detecting the object region and the background region from the first image.
11. The image conversion method according to any one of claims 1 to 10, wherein the biological sample is a cell.
12. The method according to any one of claims 1 to 11, wherein the first image and the source images are bright-field images, fluorescence images, or differential interference contrast images.
13. A program that causes a computer to execute the method according to any one of claims 1 to 12.
14. An image processing apparatus having a processing unit that executes the method according to any one of claims 1 to 13.
15. A method for converting an image of a biological sample by using a trained model, the method comprising:
     a step of acquiring a first image of the biological sample captured by a first technique;
     a step of selecting, from a plurality of different trained models, a trained model suited to the first technique, based on information on the first technique;
     a step of converting the original image, by using the selected trained model, into an image resembling an image captured by a second technique different from the first technique; and
     a step of outputting the converted image,
     wherein the trained model has been trained by using, as learning data, a learning source image group composed of images captured by the first technique and a learning target image group composed of images captured by the second technique.
16. The method according to claim 15, wherein the plurality of different trained models differ from one another in the first technique.
17. The method according to claim 15 or 16, wherein the plurality of different image conversion networks differ in the magnification at which the images included in the learning data were captured.
PCT/JP2020/044565 2020-11-30 2020-11-30 Image conversion method, program, and image processing device WO2022113367A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/044565 WO2022113367A1 (en) 2020-11-30 2020-11-30 Image conversion method, program, and image processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/044565 WO2022113367A1 (en) 2020-11-30 2020-11-30 Image conversion method, program, and image processing device

Publications (1)

Publication Number Publication Date
WO2022113367A1 2022-06-02

Family

ID=81754204

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/044565 WO2022113367A1 (en) 2020-11-30 2020-11-30 Image conversion method, program, and image processing device

Country Status (1)

Country Link
WO (1) WO2022113367A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010525486A * 2007-04-27 2010-07-22 Hewlett-Packard Development Company, L.P. Image segmentation and image enhancement
US20120306904A1 (en) * 2011-06-02 2012-12-06 Yoostar Entertainment Group, Inc. Image processing
JP2015219515A * 2014-05-21 2015-12-07 Olympus Corporation Image display method, control device, and microscope system
WO2018179946A1 * 2017-03-30 2018-10-04 FUJIFILM Corporation Observation device, observation control method, and observation control program
JP2018185759A * 2017-04-27 2018-11-22 Sysmex Corporation Image analysis method, device, program, and method of producing deep learning algorithm
US20190080498A1 (en) * 2017-09-08 2019-03-14 Apple Inc. Creating augmented reality self-portraits using machine learning
JP2019211468A * 2018-06-01 2019-12-12 Frontier Pharma Co., Ltd. Image processing method, chemical sensitivity testing method and image processing device
JP2020060822A * 2018-10-05 2020-04-16 Frontier Pharma Co., Ltd. Image processing method and image processing apparatus

Similar Documents

Publication Publication Date Title
JP5780865B2 (en) Image processing apparatus, imaging system, and image processing system
RU2716843C1 (en) Digital correction of optical system aberrations
US11127117B2 (en) Information processing method, information processing apparatus, and recording medium
Abd Halim et al. Nucleus segmentation technique for acute leukemia
JP2005228342A (en) Method and system for segmenting scanned document
WO2015156378A1 (en) Image processing apparatus, image processing method, and image processing system
JP7212554B2 (en) Information processing method, information processing device, and program
CN112734650A (en) Virtual multi-exposure fusion based uneven illumination image enhancement method
JP2017517818A (en) Method and system for processing the color of a digital image
CN111583201B (en) Transfer learning method for constructing super-resolution pathology microscope
CN106296608A (en) A kind of fish eye images processing method based on mapping table and system
US11892615B2 (en) Image processing method for microscopic image, computer readable medium, image processing apparatus, image processing system, and microscope system
CN112819699A (en) Video processing method and device and electronic equipment
WO2019181072A1 (en) Image processing method, computer program, and recording medium
WO2019160041A1 (en) Image processing device, microscope system, image processing method and image processing program
CN112801913A (en) Method for solving field depth limitation of microscope
JP2023532755A (en) Computer-implemented method, computer program product, and system for processing images
WO2022113367A1 (en) Image conversion method, program, and image processing device
JP6742863B2 (en) Microscope image processing apparatus, method and program
WO2017175452A1 (en) Image processing device, image pickup device, image processing method, and program
US10194880B2 (en) Body motion display device and body motion display method
WO2022113368A1 (en) Image conversion method, program, image conversion device, and image conversion system
JPWO2002045021A1 (en) Entropy filter and region extraction method using the filter
US20210109045A1 (en) Continuous scanning for localization microscopy
JP2023033982A (en) Image processing device, image processing system, sharpening method of image, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20963630

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20963630

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP