WO2021045019A1 - Ophthalmic image processing program and ophthalmic image processing device - Google Patents

Ophthalmic image processing program and ophthalmic image processing device

Info

Publication number
WO2021045019A1
WO2021045019A1 (PCT/JP2020/032949; JP2020032949W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
ophthalmic
image processing
mathematical model
conversion
Prior art date
Application number
PCT/JP2020/032949
Other languages
French (fr)
Japanese (ja)
Inventor
涼介 柴
徹哉 加納
佳紀 熊谷
Original Assignee
株式会社ニデック
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ニデック
Priority to JP2021543759A priority Critical patent/JPWO2021045019A1/ja
Publication of WO2021045019A1 publication Critical patent/WO2021045019A1/en

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis

Definitions

  • The present disclosure relates to an ophthalmic image processing program and an ophthalmic image processing apparatus used for processing an ophthalmic image of an eye to be examined.
  • IOL-related information (for example, expected postoperative anterior chamber depth) is acquired, and the IOL power is calculated based on the acquired IOL-related information.
  • In Non-Patent Document 1, by inputting an ophthalmic image as an input image into a mathematical model trained by a machine learning algorithm, a converted image in which the image quality of the input image has been converted is obtained.
  • the conversion from the input image to the converted image by the mathematical model may not be executed properly. For example, when the ophthalmic image used for learning the mathematical model and the ophthalmic image actually input to the mathematical model are significantly different, it is difficult to properly convert the input image to the converted image. If the converted image is presented to the user as it is even though the conversion from the input image to the converted image is not properly performed, the user may not be able to accurately perform various judgments based on the converted image.
  • a typical object of the present disclosure is to provide an ophthalmic image processing program and an ophthalmic image processing apparatus capable of presenting more appropriate information to a user.
  • The ophthalmic image processing program provided by the typical embodiment of the present disclosure is executed by an ophthalmic image processing apparatus that processes an ophthalmic image, which is an image of a tissue of an eye to be examined. When executed by a control unit of the ophthalmic image processing apparatus, the program causes the apparatus to execute: an image acquisition step of acquiring an ophthalmic image captured by an ophthalmic imaging device; a converted image acquisition step of acquiring a converted image in which the image quality of the input image has been converted, by inputting the ophthalmic image acquired in the image acquisition step into a mathematical model trained by a machine learning algorithm as the input image; and an evaluation information acquisition step of acquiring evaluation information for evaluating the validity of the conversion from the input image to the converted image by the mathematical model.
  • The ophthalmic image processing apparatus provided by the typical embodiment of the present disclosure processes an ophthalmic image, which is an image of a tissue of an eye to be examined. A control unit of the ophthalmic image processing apparatus executes: an image acquisition step of acquiring an ophthalmic image captured by an ophthalmic imaging device; a converted image acquisition step of acquiring a converted image in which the image quality of the input image has been converted, by inputting the acquired ophthalmic image into a mathematical model trained by a machine learning algorithm as the input image; and an evaluation information acquisition step of acquiring evaluation information for evaluating the validity of the conversion from the input image to the converted image by the mathematical model.
  • the control unit of the ophthalmic image processing apparatus exemplified in the present disclosure executes an image acquisition step, a converted image acquisition step, and an evaluation information acquisition step.
  • In the image acquisition step, the control unit acquires an ophthalmic image captured by the ophthalmic imaging device.
  • In the converted image acquisition step, the control unit acquires a converted image in which the image quality of the input image has been converted, by inputting the ophthalmic image acquired in the image acquisition step into the mathematical model trained by the machine learning algorithm as the input image.
  • In the evaluation information acquisition step, the control unit acquires evaluation information for evaluating the validity of the conversion from the input image to the converted image by the mathematical model.
  • evaluation information for evaluating the validity of conversion from an input image to a converted image by a mathematical model is acquired. Therefore, the ophthalmic image processing apparatus can present appropriate information to the user by using the evaluation information.
  • Various ophthalmic images can be used as the input image. For example, at least one of a tomographic image taken by an OCT device (a two-dimensional tomographic image or a three-dimensional tomographic image), an image taken by a fundus camera, an image taken by a scanning laser ophthalmoscope (SLO), and an image taken by a corneal endothelial cell imaging device may be used as the input image.
  • the ophthalmologic image may be an OCT angio image of the fundus of the eye to be inspected taken by the OCT apparatus.
  • the OCT angio image may be a two-dimensional front image in which the fundus is viewed from the front (that is, the line-of-sight direction of the eye to be inspected).
  • the OCT angio image may be, for example, a motion contrast image acquired by processing at least two OCT signals acquired at different times with respect to the same position.
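  A hedged illustration of how such a motion contrast value might be computed (the patent does not specify a formula; the decorrelation form and all function names below are assumptions): two OCT intensity signals acquired at different times at the same position are compared per pixel.

```python
import numpy as np

def motion_contrast(signal_a, signal_b, eps=1e-8):
    """Per-pixel decorrelation of two OCT intensity frames of the same position.

    Values near 0 suggest static tissue; values near 1 suggest motion (e.g. blood flow).
    This is an illustrative formula, not the one specified in the patent.
    """
    a = signal_a.astype(np.float64)
    b = signal_b.astype(np.float64)
    return 1.0 - (2.0 * a * b) / (a**2 + b**2 + eps)

# Static pixels give low motion contrast; a pixel whose value changed gives a higher value.
frame1 = np.array([[10.0, 10.0], [10.0, 50.0]])
frame2 = np.array([[10.0, 10.0], [10.0, 10.0]])
mc = motion_contrast(frame1, frame2)
```

  Repeating this for every position on the scan and viewing the result from the front would yield a two-dimensional front image of the kind described above.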
  • The ophthalmic image may be an Enface image (OCT front image) in which at least a part of a three-dimensional tomographic image taken by the OCT device is viewed from the direction along the optical axis of the measurement light of the OCT device (the front direction).
  • the image quality converted by the mathematical model can be selected as appropriate.
  • the control unit may acquire a converted image in which at least one of the noise amount, contrast, and resolution of the input image is converted by using a mathematical model.
  • In the evaluation information acquisition step, the control unit may acquire, as evaluation information, difference information of the pixel values between corresponding pixels of the input image input to the mathematical model and the converted image output from the mathematical model.
  • The difference information may be the difference between the pixel values of corresponding pixels, or it may be the ratio of one pixel value to the other.
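  A minimal sketch of both forms of difference information described above (function and variable names are illustrative, not taken from the patent):

```python
import numpy as np

def difference_info(input_img, converted_img, mode="difference", eps=1e-8):
    """Per-pixel difference information between corresponding pixels.

    mode="difference": signed difference of pixel values.
    mode="ratio": ratio of one pixel value to the other.
    """
    a = input_img.astype(np.float64)
    b = converted_img.astype(np.float64)
    if mode == "difference":
        return a - b
    if mode == "ratio":
        return a / (b + eps)
    raise ValueError(mode)

inp = np.array([[100.0, 120.0], [80.0, 90.0]])
conv = np.array([[100.0, 110.0], [80.0, 45.0]])
diff = difference_info(inp, conv)            # diff[1, 1] == 45.0
ratio = difference_info(inp, conv, "ratio")  # ratio[1, 1] is approximately 2.0
```

  Either array can then be imaged as the difference image discussed below.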
  • In the evaluation information acquisition step, the control unit may acquire the difference information after applying a Fourier transform (for example, a two-dimensional Fourier transform) to each of the input image and the converted image.
  • Periodic artifacts may occur in the converted image. When such artifacts occur, the input image and the converted image show different frequency distributions. Therefore, by using the Fourier transform, the presence or absence of periodic artifacts can be evaluated more appropriately.
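  The frequency-domain check described above can be sketched as follows, assuming a simple magnitude-spectrum comparison (the patent does not fix a particular spectral metric). A periodic artifact added to the converted image concentrates energy at its spatial frequency, where the two spectra then differ strongly:

```python
import numpy as np

def spectrum_difference(input_img, converted_img):
    """Magnitude-spectrum difference after a 2-D Fourier transform of each image."""
    fa = np.abs(np.fft.fft2(input_img))
    fb = np.abs(np.fft.fft2(converted_img))
    return np.abs(fa - fb)

rng = np.random.default_rng(0)
base = rng.normal(100.0, 5.0, size=(64, 64))

# Simulate a converted image contaminated by a vertical periodic artifact
# (a stripe pattern repeating 8 times across the image width).
x = np.arange(64)
artifact = 10.0 * np.sin(2.0 * np.pi * 8.0 * x / 64.0)
converted = base + artifact[np.newaxis, :]

d = spectrum_difference(base, converted)
# The artifact concentrates spectral energy at horizontal frequency bin 8,
# so d[0, 8] is large while bins away from the artifact stay near zero.
```
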
  • In the difference image obtained by imaging the difference information, the control unit may perform a first evaluation step of evaluating that the conversion from the input image to the converted image is not valid when the difference value in an arbitrary region including a plurality of pixels is equal to or greater than a threshold value.
  • Even when the conversion is performed properly, the difference image may be interspersed with pixels having large values.
  • By comparing the difference value in an arbitrary region including a plurality of pixels with the threshold value, the control unit can suppress the influence of such scattered pixels and appropriately evaluate the validity of the conversion.
  • The control unit may evaluate the validity of the conversion based on the difference image after performing a smoothing process on the pixel values of the difference image (the difference value corresponding to each pixel). In this case, the influence of scattered pixels is suppressed more appropriately even when the conversion has been properly executed.
  • The control unit may evaluate that the conversion is not valid when the average of the difference values in the region is equal to or greater than a threshold value. Further, the control unit may evaluate whether the conversion is valid based on the number of pixels in a unit region whose difference value is equal to or greater than the threshold value. Further, the control unit may acquire information indicating the degree of validity of the conversion based on the difference information, and this information may be displayed on the display unit.
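  Combining the smoothing and region-threshold ideas above, a first evaluation step might be sketched as follows (box-filter smoothing, a fixed region size, and the mean-over-region test are all assumptions; the patent leaves these choices open):

```python
import numpy as np

def smooth(img, k=3):
    """Box-filter smoothing of the difference image (mean over a k x k window)."""
    padded = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def conversion_is_valid(diff_img, region=4, threshold=20.0):
    """First evaluation step: the conversion is judged NOT valid if any region's
    mean (smoothed) difference reaches the threshold."""
    s = smooth(np.abs(diff_img))
    h, w = s.shape
    for i in range(0, h - region + 1, region):
        for j in range(0, w - region + 1, region):
            if s[i:i + region, j:j + region].mean() >= threshold:
                return False
    return True

# An isolated speckle outlier is flattened by smoothing and region averaging,
# while a coherent 8x8 block of large differences still fails the check.
ok = np.zeros((16, 16))
ok[3, 3] = 100.0                  # one scattered outlier: conversion still valid
bad = np.zeros((16, 16))
bad[4:12, 4:12] = 50.0            # a coherent error region: conversion invalid
```
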
  • In the evaluation information acquisition step, the control unit may acquire a difference image, which is an image of the difference information of the pixel values between corresponding pixels of the input image input to the mathematical model and the converted image output from the mathematical model, and may acquire the degree of similarity between the difference image and the input image as evaluation information.
  • When the image quality of the input image is appropriately converted, the difference between the input image and the converted image is small, so the similarity between the difference image and the input image is also small. Conversely, when the conversion is not properly executed and an irregular part or the like in the input image affects the conversion, the difference values in the difference image become large at the position of the irregular part, and the similarity between the input image and the difference image increases. Therefore, the validity of the conversion can be appropriately evaluated by acquiring the similarity between the difference image and the input image as evaluation information.
  • The method of acquiring the degree of similarity can be selected as appropriate. For example, a correlation diagram may be acquired, or a correlation coefficient may be acquired.
  • The control unit may further perform a second evaluation step of evaluating that the conversion from the input image to the converted image is not valid when the value indicating the similarity is equal to or greater than a threshold value.
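  Using a correlation coefficient as the similarity value, the second evaluation step could be sketched as follows (the threshold of 0.5 and all names are arbitrary assumptions):

```python
import numpy as np

def similarity(diff_img, input_img):
    """Correlation coefficient between the difference image and the input image."""
    return float(np.corrcoef(diff_img.ravel(), input_img.ravel())[0, 1])

def second_evaluation_step(diff_img, input_img, threshold=0.5):
    """Returns True when the conversion is judged valid (similarity below threshold)."""
    return similarity(diff_img, input_img) < threshold

rng = np.random.default_rng(1)
input_img = rng.normal(100.0, 10.0, size=(32, 32))

good_diff = rng.normal(0.0, 1.0, size=(32, 32))  # residual unrelated to the input
bad_diff = 0.2 * input_img                       # residual tracking the input structure
```

  A residual that mirrors the input's structure (as around an irregular part) correlates strongly with the input and is flagged, while a structureless residual passes.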
  • The control unit may further execute a difference image display step of displaying, on the display unit, a difference image in which the difference information is imaged.
  • In a difference image, regions where the conversion was properly performed differ from regions where it was not. Therefore, the user can confirm whether the conversion was properly performed by checking the difference image, and can also identify the regions where the conversion was not properly performed.
  • When the input image contains an irregular part (for example, a diseased part), it is difficult to properly convert the image quality of the irregular part. Therefore, the user can appropriately identify the irregular part in the input image by checking the difference image.
  • the specific method for displaying the difference image on the display unit can be selected as appropriate.
  • the display unit may display at least one of the input image and the converted image at the same time as the difference image (for example, side by side).
  • The control unit may superimpose the difference image on at least one of the input image and the converted image. In this case, the user can easily compare the difference image with at least one of the input image and the converted image. Alternatively, the control unit may display the difference image on the display unit by itself.
  • The control unit may acquire the evaluation information by inputting the input image and the converted image into a mathematical model trained by a machine learning algorithm (a mathematical model for acquiring evaluation information, different from the mathematical model for acquiring the converted image). In this case, the validity of the conversion is appropriately evaluated even without acquiring the difference between the input image and the converted image.
  • the mode of evaluation information output by the mathematical model for acquiring evaluation information can also be selected as appropriate.
  • the mathematical model for acquiring evaluation information may output evaluation information indicating whether or not the conversion from the input image to the converted image is appropriate. In this case, it is easy to evaluate whether the conversion is valid or not. Further, the mathematical model for acquiring the evaluation information may output information such as a numerical value indicating the degree of validity of the conversion as the evaluation information.
  • The control unit may acquire evaluation information by using various parameters related to the image quality of the image (at least the converted image).
  • Parameters related to image quality include, for example, the signal strength of the ophthalmic image, an index indicating the quality of the signal (for example, SSI (Signal Strength Index) or SQI (SLO Quality Index)), the ratio of the noise level to the signal level of the image (SNR, Signal to Noise Ratio), the background noise level, and the contrast of the image. At least one of these may be used as a parameter related to image quality.
  • In the present embodiment, the input image is converted so as to improve its image quality. If the conversion is properly executed, the image quality of the converted image should be better than that of the input image. Therefore, for example, a parameter indicating the image quality of the converted image may be acquired as evaluation information. Further, the difference between a parameter indicating the image quality of the converted image and a parameter indicating the image quality of the input image may be acquired as evaluation information.
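  One hedged sketch of this parameter-based evaluation (a crude SNR estimate; device-specific indices such as SSI and SQI are proprietary and not reproduced here) compares a quality parameter of the converted image against that of the input image:

```python
import numpy as np

def snr_db(img, background_rows=4):
    """Crude SNR estimate: mean signal level vs. the noise level of a background band.

    Here the top rows of the B-scan are assumed to contain only background;
    this is an illustrative assumption, not a device specification.
    """
    background = img[:background_rows]
    noise = background.std() + 1e-8
    signal = img.mean()
    return 20.0 * np.log10(signal / noise)

def conversion_improved_quality(input_img, converted_img):
    """Evaluation information: a valid conversion should raise the quality parameter."""
    return snr_db(converted_img) > snr_db(input_img)

rng = np.random.default_rng(2)
clean = np.full((64, 64), 120.0)
noisy_input = clean + rng.normal(0.0, 15.0, size=clean.shape)
denoised = clean + rng.normal(0.0, 3.0, size=clean.shape)  # stands in for the model output
```
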
  • the control unit may further execute a warning step that warns the user when the conversion is evaluated as invalid by the evaluation information acquired in the evaluation information acquisition step. In this case, the user can easily grasp that the conversion from the input image to the converted image may not have been performed properly.
  • the control unit may warn the user by displaying at least one of a warning message, a warning image, and the like on the display unit. Further, the control unit may issue a warning to the user by generating at least one of a warning message and a warning sound from the speaker. Further, the control unit may execute the warning process while displaying the actually converted converted image on the display unit, or may execute the warning process without displaying the converted image.
  • The control unit may further execute a step of stopping the process of displaying, on the display unit, the converted image acquired in the converted image acquisition step when the evaluation information acquired in the evaluation information acquisition step indicates that the conversion is not valid. In this case, a converted image that was not properly converted from the input image is prevented from being displayed on the display unit, which reduces the possibility that the user makes inaccurate judgments based on the converted image.
  • The control unit may display, on the display unit, at least one of a numerical value, a graph, and the like indicating the acquired evaluation information.
  • the user can easily grasp whether or not the conversion from the input image to the converted image is properly performed based on the displayed evaluation information.
  • the difference image between the input image and the converted image may be displayed on the display unit as evaluation information.
  • When the conversion is evaluated as not valid, the control unit may acquire a converted image by inputting the input image into a mathematical model different from the mathematical model that performed the conversion evaluated as invalid.
  • The characteristics of conversion by a mathematical model differ depending on the algorithm and the training data used to train the model. Therefore, when a conversion is evaluated as invalid, an appropriate converted image may be obtained by acquiring the converted image with a different mathematical model.
  • the control unit may display the difference image on the display unit regardless of the validity of the conversion from the input image to the converted image. As a result, the user can easily grasp the position of the irregular portion based on the difference image.
  • the ophthalmic image processing program can be expressed as follows. It is an ophthalmic image processing program executed by an ophthalmic image processing apparatus that processes an ophthalmic image which is an image of the tissue of the eye to be inspected, and the ophthalmic image processing program is executed by a control unit of the ophthalmic image processing apparatus.
  • the device that executes the image acquisition step, the converted image acquisition step, the evaluation information acquisition step, etc. can be appropriately selected.
  • the control unit of a personal computer (hereinafter referred to as “PC”) may execute all of the converted image acquisition step, the evaluation information acquisition step, and the like. That is, the control unit of the PC may acquire an ophthalmologic image from the ophthalmology image capturing apparatus and perform conversion image acquisition processing or the like based on the acquired ophthalmology image. Further, the control unit of the ophthalmic imaging apparatus may execute all of the conversion image acquisition step, the evaluation information acquisition step, and the like. Further, the control units of a plurality of devices (for example, an ophthalmologic image capturing device and a PC, etc.) may cooperate to execute the converted image acquisition step, the evaluation information acquisition step, and the like.
  • FIG. 1 is a block diagram showing the schematic configuration of the mathematical model construction device 1, the ophthalmic image processing device 21, and the ophthalmic imaging devices 11A and 11B.
  • FIG. 2 is a diagram showing an example of the input training data and the output training data in the case where the mathematical model outputs a high-quality two-dimensional tomographic image as a converted image.
  • FIG. 3 is a flowchart of the mathematical model construction process executed by the mathematical model construction device 1.
  • FIG. 4 is a flowchart of the ophthalmic image processing executed by the ophthalmic image processing device 21.
  • FIG. 5 is a diagram showing an example of an ophthalmic image used as an input image.
  • FIG. 6 is a diagram showing an example of a converted image obtained by converting the image quality of the input image shown in FIG. 5.
  • the mathematical model construction device 1 constructs a mathematical model by training the mathematical model by a machine learning algorithm.
  • the program that realizes the constructed mathematical model is stored in the storage device 24 of the ophthalmic image processing device 21.
  • the ophthalmic image processing device 21 inputs an ophthalmic image as an input image into a mathematical model to acquire a converted image in which the image quality of the input image is converted (in the present embodiment, the image quality is improved).
  • the ophthalmic image processing device 21 acquires evaluation information for evaluating the validity of conversion of the converted image from the input image.
  • the ophthalmic imaging devices 11A and 11B capture an ophthalmic image which is an image of the tissue of the eye to be inspected.
  • a personal computer (hereinafter referred to as "PC") is used for the mathematical model construction device 1 of the present embodiment.
  • The mathematical model construction device 1 constructs a mathematical model by training it with ophthalmic images acquired from the ophthalmic imaging device 11A (hereinafter referred to as "training ophthalmic images") and images obtained by converting the image quality of the training ophthalmic images.
  • the device that can function as the mathematical model construction device 1 is not limited to the PC.
  • the ophthalmologic imaging device 11A may function as the mathematical model building device 1.
  • the control units of the plurality of devices (for example, the CPU of the PC and the CPU 13A of the ophthalmologic imaging apparatus 11A) may collaborate to construct a mathematical model.
  • a CPU is used as an example of a controller that performs various processes.
  • a controller other than the CPU may be used for at least a part of various devices. For example, by adopting a GPU as a controller, the processing speed may be increased.
  • the mathematical model construction device 1 will be described.
  • The mathematical model construction device 1 is installed, for example, at a manufacturer that provides the ophthalmic image processing device 21 or the ophthalmic image processing program to users.
  • the mathematical model building apparatus 1 includes a control unit 2 that performs various control processes and a communication I / F5.
  • The control unit 2 includes a CPU 3, which is a controller that performs control, and a storage device 4 that can store programs, data, and the like.
  • the storage device 4 stores a mathematical model construction program for executing a mathematical model construction process (see FIG. 3) described later.
  • the communication I / F5 connects the mathematical model building device 1 to other devices (for example, an ophthalmic imaging device 11A and an ophthalmic image processing device 21).
  • the mathematical model construction device 1 is connected to the operation unit 7 and the display device 8.
  • The operation unit 7 is operated by the user to input various instructions to the mathematical model construction device 1.
  • the operation unit 7 for example, at least one of a keyboard, a mouse, a touch panel, and the like can be used.
  • a microphone or the like for inputting various instructions may be used together with the operation unit 7 or instead of the operation unit 7.
  • the display device 8 displays various images.
  • As the display device 8, various devices capable of displaying an image (for example, at least one of a monitor, a display, a projector, and the like) can be used.
  • the "image" in the present disclosure includes both a still image and a moving image.
  • the mathematical model construction device 1 can acquire data of an ophthalmic image (hereinafter, may be simply referred to as an “ophthalmic image”) from the ophthalmic imaging device 11A.
  • the mathematical model building apparatus 1 may acquire ophthalmic image data from the ophthalmic imaging apparatus 11A by, for example, at least one of wired communication, wireless communication, a detachable storage medium (for example, a USB memory), and the like.
  • the ophthalmic image processing device 21 will be described.
  • the ophthalmologic image processing device 21 is arranged, for example, in a facility (for example, a hospital or a health examination facility) for diagnosing or examining a subject.
  • the ophthalmic image processing device 21 includes a control unit 22 that performs various control processes and a communication I / F 25.
  • The control unit 22 includes a CPU 23, which is a controller that performs control, and a storage device 24 that can store programs, data, and the like.
  • the storage device 24 stores an ophthalmic image processing program for executing ophthalmic image processing (see FIGS. 4 and 9) described later.
  • the ophthalmic image processing program includes a program that realizes a mathematical model constructed by the mathematical model building apparatus 1.
  • the communication I / F 25 connects the ophthalmic image processing device 21 to other devices (for example, the ophthalmic imaging device 11B and the mathematical model building device 1).
  • the ophthalmic image processing device 21 is connected to the operation unit 27 and the display device 28.
  • various devices can be used in the same manner as the operation unit 7 and the display device 8 described above.
  • the ophthalmic image processing device 21 can acquire an ophthalmic image from the ophthalmic image capturing device 11B.
  • the ophthalmic image processing device 21 may acquire an ophthalmic image from the ophthalmic image capturing device 11B by, for example, at least one of wired communication, wireless communication, a detachable storage medium (for example, a USB memory), and the like. Further, the ophthalmic image processing device 21 may acquire a program or the like for realizing the mathematical model constructed by the mathematical model building device 1 via communication or the like.
  • Next, the ophthalmic imaging devices 11A and 11B will be described. As an example, the present embodiment describes a case where an ophthalmic imaging device 11A provides ophthalmic images to the mathematical model construction device 1 and an ophthalmic imaging device 11B provides ophthalmic images to the ophthalmic image processing device 21.
  • the number of ophthalmic imaging devices used is not limited to two.
  • the mathematical model construction device 1 and the ophthalmic image processing device 21 may acquire ophthalmic images from a plurality of ophthalmic imaging devices.
  • the mathematical model construction device 1 and the ophthalmology image processing device 21 may acquire an ophthalmology image from one common ophthalmology image capturing device.
  • In the present embodiment, an OCT device is exemplified as the ophthalmic imaging device 11 (11A, 11B). However, an ophthalmic imaging device other than an OCT device (for example, a scanning laser ophthalmoscope (SLO), a fundus camera, a Scheimpflug camera, or a corneal endothelial cell imaging device (CEM)) may also be used.
  • the ophthalmic imaging device 11 includes a control unit 12 (12A, 12B) that performs various control processes, and an ophthalmic imaging unit 16 (16A, 16B).
  • The control unit 12 includes a CPU 13 (13A, 13B), which is a controller that performs control, and a storage device 14 (14A, 14B) capable of storing programs, data, and the like.
  • Needless to say, when the ophthalmic imaging device 11 executes at least a part of the ophthalmic image processing described later (see FIGS. 4 and 9), at least a part of the ophthalmic image processing program for executing that processing is stored in the storage device 14.
  • the ophthalmic imaging unit 16 includes various configurations necessary for capturing an ophthalmic image of the eye to be inspected.
  • The ophthalmic imaging unit 16 of the present embodiment includes an OCT light source, a branching optical element that splits the OCT light emitted from the OCT light source into measurement light and reference light, a scanning unit for scanning the measurement light, an optical system for irradiating the eye to be examined with the measurement light, a light-receiving element that receives the combined light of the reference light and the light reflected by the tissue of the eye to be examined, and the like.
  • the ophthalmologic image capturing device 11 can capture a two-dimensional tomographic image and a three-dimensional tomographic image of the fundus of the eye to be inspected.
  • the CPU 13 scans the OCT light (measurement light) on the scan line to take a two-dimensional tomographic image (see FIG. 5) of the cross section intersecting the scan line.
  • the CPU 13 can capture a three-dimensional tomographic image of the tissue by scanning the OCT light two-dimensionally.
  • the CPU 13 acquires a plurality of two-dimensional tomographic images by scanning measurement light on each of a plurality of scan lines having different positions in a two-dimensional region when the tissue is viewed from the front.
  • the CPU 13 acquires a three-dimensional tomographic image by combining a plurality of captured two-dimensional tomographic images.
  • the CPU 13 can capture a plurality of ophthalmic images of the same site by scanning the measurement light a plurality of times on the same site on the tissue (in the present embodiment, on the same scan line).
  • The CPU 13 can acquire an averaged image, in which the influence of speckle noise is suppressed, by performing an averaging process on a plurality of ophthalmic images of the same site.
  • For example, the image quality of a two-dimensional tomographic image can be improved by averaging a plurality of two-dimensional tomographic images of the same site.
  • The averaging process may be performed, for example, by averaging the pixel values of the pixels at the same position in the plurality of ophthalmic images.
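  The averaging described above (pixel values averaged at the same position across registered frames) can be sketched as follows; uncorrelated speckle noise shrinks roughly as 1/sqrt(N) while the tissue signal common to all frames is preserved:

```python
import numpy as np

def average_frames(frames):
    """Average pixel values at the same position across multiple registered B-scans."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)

rng = np.random.default_rng(3)
tissue = np.full((32, 32), 100.0)  # the repeatable tissue signal
frames = [tissue + rng.normal(0.0, 20.0, size=tissue.shape) for _ in range(16)]
averaged = average_frames(frames)

single_noise = np.std(frames[0] - tissue)   # roughly 20
averaged_noise = np.std(averaged - tissue)  # roughly 20 / sqrt(16) = 5
```
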
  • the ophthalmologic image capturing device 11 executes a tracking process for tracking the scanning position of the OCT light with the movement of the eye to be inspected while taking a plurality of ophthalmic images of the same site.
  • the mathematical model construction process executed by the mathematical model construction apparatus 1 will be described with reference to FIGS. 2 and 3.
  • the mathematical model construction process is executed by the CPU 3 according to the mathematical model construction program stored in the storage device 4.
  • the mathematical model is trained by the training data set, so that the mathematical model that outputs the converted image obtained by converting the image quality of the input image is constructed.
  • the training data set includes input side data (input training data) and output side data (output training data).
  • the mathematical model can convert various ophthalmic images into converted images.
  • the type of training data set used to train the mathematical model is determined by the type of ophthalmic image whose image quality the mathematical model converts.
  • a case will be described in which a two-dimensional tomographic image is input to the mathematical model as an input image, and a two-dimensional tomographic image (high-quality image) in which the image quality of the input image has been improved is output by the mathematical model as a converted image.
  • FIG. 2 shows an example of input training data and output training data used when a high-quality two-dimensional tomographic image is output by the mathematical model as a converted image.
  • the CPU 3 acquires a set 40 of a plurality of two-dimensional tomographic images 400A to 400X in which the same part of the tissue is photographed.
  • the CPU 3 uses a part of the plurality of two-dimensional tomographic images 400A to 400X in the set 40 (a number of images smaller than the number of images used for the addition averaging of the output training data described later) as the input training data. Further, the CPU 3 acquires the addition-averaged image 41 of the plurality of two-dimensional tomographic images 400A to 400X in the set 40 as the output training data.
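The construction of one training pair from the set 40 can be sketched as follows (a hypothetical numpy sketch; the number of frames and the choice of a single input frame are assumptions for illustration):

```python
import numpy as np

# Hypothetical set 40: frames 400A..400X of the same fundus site
# (24 frames here; the count is illustrative).
rng = np.random.default_rng(1)
frames = rng.normal(loc=0.5, scale=0.1, size=(24, 8, 8))

# Input training data: a part of the set, i.e. fewer images than are
# used for the addition averaging on the output side (here, one frame).
input_training = frames[0]

# Output training data: the addition-averaged image of the whole set,
# whose noise is suppressed relative to any single frame.
output_training = frames.mean(axis=0)

training_pair = (input_training, output_training)
```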
  • by inputting a two-dimensional tomographic image as an input image into the trained mathematical model, a high-quality two-dimensional tomographic image in which the influence of speckle noise is suppressed is output as a converted image.
  • the ophthalmic image whose image quality is converted by the mathematical model is not limited to a two-dimensional tomographic image of the fundus.
  • the ophthalmologic image may be an image of a portion other than the fundus of the eye to be examined.
  • the ophthalmic image may be a three-dimensional tomographic image, an OCT angio image, an Enface image, or the like taken by an OCT apparatus.
  • the OCT angio image may be a two-dimensional front image in which the fundus is viewed from the front (that is, the line-of-sight direction of the eye to be inspected).
  • the OCT angio image may be, for example, a motion contrast image acquired by processing at least two OCT signals acquired at different times with respect to the same position.
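As a rough illustration of how a motion contrast value can be derived from two OCT signals acquired at different times for the same position, the normalized-difference measure below is one decorrelation-style possibility; it is an assumption for illustration, not the specific computation defined in this disclosure:

```python
import numpy as np

# Two hypothetical OCT amplitude profiles (A-scans) acquired at the
# same position at different times; values are illustrative.
scan_t0 = np.array([1.0, 2.0, 3.0, 4.0])
scan_t1 = np.array([1.0, 2.5, 2.5, 4.0])

# Illustrative motion contrast per depth position: a normalized
# absolute difference, near zero for static tissue and larger where
# the signal changed between acquisitions (e.g. due to blood flow).
eps = 1e-9  # avoid division by zero
motion_contrast = np.abs(scan_t0 - scan_t1) / (np.abs(scan_t0) + np.abs(scan_t1) + eps)
```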
  • the Enface image is a two-dimensional front image when at least a part of the three-dimensional tomographic image taken by the OCT device is viewed from the direction (front direction) along the optical axis of the measurement light of the OCT device.
  • the ophthalmologic image may be an image taken by a fundus camera, an image taken by a scanning laser ophthalmoscope (SLO), an image taken by a corneal endothelial cell photographing device, or the like.
  • the image quality may be improved by a process other than the addition averaging process.
  • the mathematical model construction process will be described with reference to FIG.
  • the CPU 3 acquires at least a part of the ophthalmic image taken by the ophthalmic image capturing device 11A as input training data (S1).
  • the data of the ophthalmic image is generated by the ophthalmic imaging apparatus 11A and then acquired by the mathematical model construction apparatus 1.
  • the CPU 3 may acquire the data of the ophthalmic image by acquiring the signal (for example, an OCT signal) on which the ophthalmic image is based from the ophthalmic imaging apparatus 11A and generating the ophthalmic image from the acquired signal.
  • the CPU 3 acquires the output training data corresponding to the input training data acquired in S1 (S3).
  • an example of the correspondence between the input training data and the output training data is as described above.
  • the CPU 3 executes the training of the mathematical model using the training data set by the machine learning algorithm (S3).
  • as machine learning algorithms, for example, neural networks, random forests, boosting, and support vector machines (SVM) are generally known.
  • Neural networks are a method of imitating the behavior of biological nerve cell networks.
  • Neural networks include, for example, feed-forward (forward propagation) neural networks, RBF networks (radial basis function networks), spiking neural networks, convolutional neural networks, recurrent neural networks (feedback neural networks, etc.), and stochastic neural networks (Boltzmann machines, Bayesian networks, etc.).
  • Random forest is a method of generating a large number of decision trees by learning based on randomly sampled training data.
  • when a random forest is used as a discriminator, the branches of a plurality of decision trees trained in advance are traced, and the average (or majority vote) of the results obtained from the respective decision trees is taken.
  • Boosting is a method of generating a strong classifier by combining multiple weak classifiers.
  • a strong classifier is constructed by sequentially learning simple and weak classifiers.
  • SVM is a method of constructing a two-class pattern classifier using a linear input element.
  • the SVM learns the parameters of the linear input elements from the training data based on, for example, the criterion of obtaining the margin-maximizing hyperplane that maximizes the distance to each data point (hyperplane separation theorem).
  • a mathematical model refers to, for example, a data structure for predicting the relationship between input data and output data.
  • Mathematical models are constructed by training with training datasets.
  • the training data set is a set of training data for input and training data for output.
  • training updates the correlation data (e.g., weights) for each input and output.
  • a multi-layer neural network is used as a machine learning algorithm.
  • a neural network includes an input layer for inputting data, an output layer for generating the data to be predicted, and one or more hidden layers between the input layer and the output layer.
  • a plurality of nodes (also called units) may be arranged in each layer.
  • in the present embodiment, a convolutional neural network (CNN), which is a kind of multi-layer neural network, is used.
  • other machine learning algorithms may be used.
  • for example, a generative adversarial network (GAN) may be used as the machine learning algorithm.
  • FIGS. 4 to 8 show an example of evaluating the validity of image quality conversion by the mathematical model based on difference information (a difference image) between the input image and the converted image.
  • the ophthalmic image processing illustrated in FIG. 4 is executed by the CPU 23 according to the ophthalmic image processing program stored in the storage device 24.
  • the CPU 23 acquires an ophthalmologic image of the tissue of the eye to be inspected taken by the ophthalmologic imaging apparatus (OCT apparatus in this embodiment) 11B (S11).
  • in S11 of the present embodiment, a two-dimensional tomographic image (see FIG. 5) of the fundus tissue of the eye to be inspected is acquired.
  • the CPU 23 acquires a converted image in which the image quality of the input image has been converted (in the present embodiment, improved) by inputting the ophthalmic image acquired in S11 as the input image into the mathematical model trained by the machine learning algorithm (S12).
  • FIG. 5 shows an example of an ophthalmic image used as an input image.
  • FIG. 6 shows a converted image in which the image quality of the input image shown in FIG. 5 has been converted (improved).
  • the image quality of the input image shown in FIG. 5 is lower than that of the converted image shown in FIG. 6.
  • the input image shown in FIG. 5 is generated without performing the addition averaging process or by performing the addition averaging process on a small number of ophthalmic images. Therefore, the input image shown in FIG. 5 can be captured in a short time.
  • a high-quality converted image is acquired by inputting an input image taken in a short time into a mathematical model. Therefore, a high-quality image can be acquired while suppressing a long shooting time.
  • the CPU 23 acquires, as evaluation information, difference information of the pixel values between corresponding pixels of the input image (see FIG. 5) input to the mathematical model in S12 and the converted image (see FIG. 6) output from the mathematical model in S12 (S13).
  • the evaluation information is information for evaluating the validity of conversion from an input image to a converted image by a mathematical model. When the image quality of the input image is appropriately converted and the converted image is output, the difference between the input image and the converted image becomes small. On the other hand, the image quality conversion may not be properly executed depending on the state of the input image.
  • for example, when the input image contains an irregular portion (for example, a lesion) that is unlikely to be contained in the training data set (ophthalmic images) used for training the mathematical model, it is difficult to properly convert the irregular portion. Therefore, if the irregular portion in the input image affects the conversion and the conversion of the input image is not properly executed, the difference between the input image and the converted image becomes large. Accordingly, by acquiring the difference information as the evaluation information, the validity of the conversion from the input image to the converted image is appropriately evaluated.
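The difference information of S13 can be sketched as follows (hypothetical numpy code with toy pixel values; the real images are the OCT tomograms of FIGS. 5 and 6):

```python
import numpy as np

# Toy input image and converted image (same shape; values illustrative).
input_image = np.array([[10.0, 20.0], [30.0, 40.0]])
converted_image = np.array([[12.0, 19.0], [30.0, 70.0]])

# Difference information: per-pixel difference between corresponding
# pixels. Small values suggest a valid conversion; a large value
# (here the bottom-right pixel) flags a possibly invalid region.
difference_image = np.abs(input_image - converted_image)

# A scalar alternative to the difference image: the average of the
# difference values over all pixels.
mean_difference = float(difference_image.mean())
```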
  • the CPU 23 may acquire the difference information after executing the Fourier transform (for example, two-dimensional Fourier transform) for each of the input image and the converted image.
  • Periodic artifacts may occur in the transformed image.
  • when periodic artifacts occur in the converted image, the input image and the converted image show different frequency distributions. Therefore, by using the Fourier transform, the presence or absence of periodic artifacts is evaluated more appropriately.
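The Fourier-based comparison can be sketched as follows (hypothetical numpy code; the stripe artifact is synthesized so that the spectral difference shows an isolated peak at its frequency):

```python
import numpy as np

# Hypothetical images: the converted image carries a periodic
# (stripe) artifact that the input image does not have.
x = np.arange(64)
input_image = np.ones((64, 64))
converted_image = input_image + 0.5 * np.sin(2 * np.pi * 8 * x / 64)

# Compare magnitude spectra after a two-dimensional Fourier transform.
# A periodic artifact shows up as an isolated peak in the difference
# of the spectra, away from the DC component at (0, 0).
spec_in = np.abs(np.fft.fft2(input_image))
spec_out = np.abs(np.fft.fft2(converted_image))
spec_diff = np.abs(spec_out - spec_in)
peak = np.unravel_index(np.argmax(spec_diff), spec_diff.shape)
```

The stripe of period 64/8 appears as a spectral peak at column index 8 (or its mirror frequency 56), while the DC terms cancel.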
  • a difference image showing the distribution of the difference values acquired for each pixel is acquired as the difference information.
  • in the difference image, a region where the conversion has been properly performed appears as a gray portion whose brightness is an intermediate value (for example, 128, which is an intermediate value when the brightness changes in the range of 1 to 256).
  • the difference image there is a difference between the region where the conversion is properly performed and the region where the conversion is not performed properly. Therefore, the validity of the conversion from the input image to the converted image is appropriately evaluated based on the difference image.
  • the difference information may be information other than the difference image.
  • the average value of the difference values between the plurality of pixels may be acquired as the difference information.
  • the difference information may be the difference in pixel values between the corresponding pixels, or may be the ratio of the other pixel value to one pixel value.
  • the CPU 23 evaluates the validity of the conversion from the input image to the converted image in S12 based on the difference image (see FIG. 7) acquired in S13 (S14). Specifically, in S14 of this embodiment, it is evaluated whether or not the conversion in S12 was appropriate.
  • the CPU 23 executes a smoothing process on the pixel value of the difference image (the value of the difference corresponding to each pixel), and then evaluates the validity of the conversion based on the difference image. Therefore, even when the conversion is properly executed, the validity of the conversion is evaluated more appropriately in a state where the influence of the scattered pixels 51 is suppressed.
  • the specific method for evaluating the validity of the conversion based on the value of the difference in the region can be appropriately selected.
  • for example, when the value of the difference in a region including a plurality of pixels is equal to or greater than a threshold value, the CPU 23 evaluates that the conversion of the image quality in the region is not appropriate.
  • the CPU 23 may evaluate whether or not the conversion is appropriate based on the number of pixels in the unit region whose difference value is equal to or greater than the threshold value.
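The smoothing-then-threshold evaluation can be sketched as follows (hypothetical numpy code; the 3x3 box filter and the threshold of 50 are assumptions for illustration, e.g. a Gaussian filter could be used instead):

```python
import numpy as np

# Toy difference image: one isolated large pixel (scattered noise that
# can appear even when conversion succeeds) and one dense cluster of
# large values (a region where conversion likely failed).
diff = np.zeros((8, 8))
diff[0, 7] = 100.0        # isolated pixel
diff[4:7, 4:7] = 100.0    # dense cluster

# Simple 3x3 box smoothing of the difference values.
smoothed = np.zeros_like(diff)
for i in range(8):
    for j in range(8):
        smoothed[i, j] = diff[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].mean()

# Evaluation: the conversion is judged invalid if the smoothed
# difference in any region reaches the threshold. Smoothing keeps the
# isolated pixel below threshold while the dense cluster stays above.
THRESHOLD = 50.0
conversion_valid = not bool(np.any(smoothed >= THRESHOLD))
```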
  • the CPU 23 may acquire information (for example, a numerical value or a graph) indicating the degree of validity of the conversion in S12 based on the difference information.
  • the CPU 23 may notify the user (for example, display on the display device 28) of information indicating the degree of validity of the conversion as evaluation information.
  • when it is evaluated that the conversion is not valid (S15: NO), the CPU 23 executes a warning process for the user (S17).
  • the CPU 23 warns the user by displaying a warning message such as "image conversion was not properly executed” or a warning image on the display device 28.
  • the warning method can be changed as appropriate.
  • the CPU 23 may issue a warning to the user by generating at least one of a warning message and a warning sound from the speaker.
  • the CPU 23 can also execute warning processing while displaying the converted image that has not been properly converted on the display device 28.
  • the CPU 23 may use a warning message such as "the displayed converted image may be inappropriate" in the process of S17.
  • when the CPU 23 evaluates that the conversion is not valid (S15: NO), the CPU 23 may display the ophthalmic image used as the input image on the display device 28 while stopping the process of displaying the converted image acquired in S12 on the display device 28. In this case, the user can observe the desired site based on the ophthalmic image before the image quality is converted.
  • the CPU 23 causes the display device 28 to display the difference image (see FIG. 7) acquired in S13 (S18).
  • the difference image there is a difference between the region where the conversion is properly performed and the region where the conversion is not performed properly. Therefore, the user can confirm whether or not the conversion is properly performed by confirming the difference image. Further, the user can grasp the area where the conversion is not properly performed by checking the difference image. Further, when the input image contains an irregular portion (for example, a diseased portion), it is difficult to properly convert the image quality of the irregular portion. Therefore, the user can appropriately grasp the irregular portion in the input image by checking the difference image.
  • FIG. 9 is a flowchart of the ophthalmic image processing in the modified example.
  • in the modified example, the similarity (for example, a correlation) between the input image and the difference image is acquired as the evaluation information, and the validity of the conversion is evaluated based on the similarity.
  • at least a part of the ophthalmologic image processing (see FIG. 4) exemplified in the above embodiment can be adopted similarly in the ophthalmologic image processing of the modified example shown in FIG. 9. Therefore, processes that can be the same as those in the above embodiment are assigned the same step numbers as in the above embodiment, and their description is omitted or simplified.
  • the CPU 23 acquires the difference image (see FIG. 7) between the input image and the converted image after executing the converted image acquisition process (S12) (S23). Next, the CPU 23 acquires the similarity between the input image and the difference image as evaluation information (S24). As described above, when the image quality of the input image is appropriately converted, the difference between the input image and the converted image becomes small, so that the similarity between the difference image and the input image becomes small.
  • on the other hand, when the conversion of the input image is not properly executed because an irregular portion or the like in the input image affects the conversion, the position (region) of the irregular portion or the like in the input image approximates the position (region) where the difference value is large in the difference image, so the similarity between the input image and the difference image becomes large. Therefore, the validity of the conversion is appropriately evaluated by acquiring the similarity between the difference image and the input image as evaluation information.
  • the CPU 23 evaluates the validity of the conversion from the input image to the converted image in S12 based on the similarity acquired in S24 (S25). Specifically, in S25 of the present embodiment, whether or not the conversion in S12 was appropriate is evaluated based on the degree of similarity. As mentioned above, when the conversion is performed properly, the similarity between the input image and the difference image becomes small. On the other hand, if the conversion is not performed properly, the similarity between the input image and the difference image becomes large. Therefore, the CPU 23 can appropriately evaluate whether or not the conversion from the input image to the converted image is appropriate by determining whether or not the value indicating the similarity is equal to or greater than the threshold value.
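The similarity-based evaluation of S24 and S25 can be sketched as follows (hypothetical numpy code using a Pearson correlation; the images and the threshold of 0.5 are illustrative assumptions):

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two images."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
input_image = rng.random((16, 16))

# Valid conversion: the difference image is near-random noise, so its
# similarity to the input image is small.
diff_valid = rng.normal(scale=0.01, size=(16, 16))

# Invalid conversion: the difference image echoes structure of the
# input image (e.g. around an irregular portion), so similarity is large.
diff_invalid = 0.9 * input_image + rng.normal(scale=0.01, size=(16, 16))

sim_valid = abs(correlation(input_image, diff_valid))
sim_invalid = abs(correlation(input_image, diff_invalid))

# Evaluation: flag the conversion as invalid when the similarity is
# equal to or greater than a threshold (0.5 is an assumed value).
THRESHOLD = 0.5
conversion_ok = sim_invalid < THRESHOLD
```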
  • the CPU 23 may display information (for example, a numerical value, a correlation diagram, or a graph) indicating the degree of validity of the conversion in S12 on the display device 28 as evaluation information.
  • the technology disclosed in the above embodiment and modified example is merely an example. Therefore, the techniques exemplified in the above embodiment and modified example can be modified.
  • in the above embodiment, the validity of the conversion from the input image to the converted image is evaluated based on the difference information (difference image). Further, in the above modified example, the validity of the conversion is evaluated based on the similarity between the input image and the difference image.
  • the method of acquiring the evaluation information for evaluating the validity of the conversion is not limited to the methods exemplified in the above embodiment and modified example.
  • the CPU 23 may acquire evaluation information using a mathematical model trained by a machine learning algorithm.
  • the mathematical model (mathematical model for acquiring evaluation information) may be trained in advance using, for example, input images and converted images as input training data, and evaluation information indicating the validity of the conversion between the input images and converted images of the input training data as output training data. The output training data may be generated by the user comparing the input image and the converted image.
  • the CPU 23 may acquire the evaluation information output by the mathematical model by inputting the input image and the converted image into the mathematical model for acquiring the evaluation information. By acquiring the evaluation information by the mathematical model for acquiring the evaluation information, the validity of the conversion is appropriately evaluated even if the difference between the input image and the converted image is not acquired.
  • the mathematical model for acquiring evaluation information may output evaluation information indicating whether or not the conversion from the input image to the converted image is appropriate, or may output evaluation information such as a numerical value indicating the degree of validity of the conversion.
  • the process to be executed when it is judged that the conversion from the input image to the converted image is not appropriate can be changed as appropriate.
  • for example, when the CPU 23 evaluates that the conversion is not valid based on the evaluation information, the CPU 23 may acquire a converted image by inputting the input image into a mathematical model different from the mathematical model that performed the conversion evaluated as invalid.
  • the characteristics of the conversion by a mathematical model differ depending on the algorithm and the training data used when training the mathematical model. Therefore, when the conversion is evaluated as invalid, an appropriate converted image may be obtained by acquiring the converted image with a different mathematical model.
  • the process of acquiring an ophthalmic image in S11 of FIGS. 4 and 9 is an example of the “image acquisition step”.
  • the process of acquiring the converted image in S12 of FIGS. 4 and 9 is an example of the “converted image acquisition step”.
  • the process of acquiring the evaluation information in S13 of FIG. 4 and S24 of FIG. 9 is an example of the “evaluation information acquisition step”.
  • the process of evaluating the validity of the conversion in S14 of FIG. 4 is an example of the “first evaluation step”.
  • the process of evaluating the validity of the conversion in S25 of FIG. 9 is an example of the “second evaluation step”.
  • the process of displaying the difference image in S18 of FIGS. 4 and 9 is an example of the “difference image display step”.
  • the warning process shown in S17 of FIGS. 4 and 9 is an example of the “warning step”.
  • the process of stopping the display process of the converted image at S15: NO in FIGS. 4 and 9 is an example of the “display stop step”.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A control unit of this ophthalmic image processing device executes an image acquisition step (S11), a transformed image acquisition step (S12), and an evaluation information acquisition step (S13). In the image acquisition step, the control unit acquires an ophthalmic image captured by an ophthalmic image imaging device. In the transformed image acquisition step, the control unit enters, as an input image, the ophthalmic image acquired in the image acquisition step to a mathematical model that has been trained by a machine learning algorithm, so as to acquire a transformed image that is obtained by transforming the image quality of the input image. In the evaluation information acquisition step, the control unit acquires evaluation information for evaluating the validity of transformation from the input image into the transformed image by using the mathematical model.

Description

Ophthalmic image processing program and ophthalmic image processing device
 The present disclosure relates to an ophthalmic image processing program and an ophthalmic image processing apparatus used for processing an ophthalmic image of an eye to be inspected.
 Techniques for acquiring various medical information using a mathematical model trained by a machine learning algorithm have been proposed. For example, in the ophthalmic apparatus described in Patent Document 1, IOL-related information (for example, expected postoperative anterior chamber depth) of the eye to be inspected is acquired by inputting eye shape parameters into a mathematical model trained by a machine learning algorithm. The IOL power is calculated based on the acquired IOL-related information.
 Further, in Non-Patent Document 1, by inputting an ophthalmic image as an input image into a mathematical model trained by a machine learning algorithm, a converted image obtained by converting the image quality of the input image is acquired.
JP-A-2018-51223
 The conversion from an input image to a converted image by a mathematical model may not always be executed properly. For example, when the ophthalmic images used for training the mathematical model and the ophthalmic image actually input to the mathematical model differ significantly, it is difficult to properly convert the input image into the converted image. If the converted image is presented to the user as it is even though the conversion from the input image to the converted image was not properly performed, the user may not be able to accurately make various judgments based on the converted image.
 A typical object of the present disclosure is to provide an ophthalmic image processing program and an ophthalmic image processing apparatus capable of presenting more appropriate information to a user.
 The ophthalmic image processing program provided by a typical embodiment of the present disclosure is an ophthalmic image processing program executed by an ophthalmic image processing apparatus that processes an ophthalmic image, which is an image of the tissue of an eye to be examined. When the ophthalmic image processing program is executed by the control unit of the ophthalmic image processing apparatus, the following steps are executed by the ophthalmic image processing apparatus: an image acquisition step of acquiring an ophthalmic image captured by an ophthalmic image capturing device; a converted image acquisition step of acquiring a converted image obtained by converting the image quality of an input image by inputting the ophthalmic image acquired in the image acquisition step as the input image into a mathematical model trained by a machine learning algorithm; and an evaluation information acquisition step of acquiring evaluation information for evaluating the validity of the conversion from the input image to the converted image by the mathematical model.
 The ophthalmic image processing apparatus provided by a typical embodiment of the present disclosure is an ophthalmic image processing apparatus that processes an ophthalmic image, which is an image of the tissue of an eye to be examined. The control unit of the ophthalmic image processing apparatus executes an image acquisition step of acquiring an ophthalmic image captured by an ophthalmic image capturing device, a converted image acquisition step of acquiring a converted image obtained by converting the image quality of an input image by inputting the ophthalmic image acquired in the image acquisition step as the input image into a mathematical model trained by a machine learning algorithm, and an evaluation information acquisition step of acquiring evaluation information for evaluating the validity of the conversion from the input image to the converted image by the mathematical model.
 According to the ophthalmic image processing program and the ophthalmic image processing apparatus according to the present disclosure, more appropriate information is presented to the user.
 本開示で例示する眼科画像処理装置の制御部は、画像取得ステップ、変換画像取得ステップ、および評価情報取得ステップを実行する。画像取得ステップでは、制御部は、眼科画像撮影装置によって撮影された眼科画像を取得する。変換画像取得ステップでは、制御部は、機械学習アルゴリズムによって訓練された数学モデルに、画像取得ステップにおいて取得された眼科画像を入力画像として入力することで、入力画像の画質を変換した変換画像を取得する。評価情報取得ステップでは、制御部は、入力画像から変換画像への数学モデルによる変換の妥当性を評価する評価情報を取得する。 The control unit of the ophthalmic image processing apparatus exemplified in the present disclosure executes an image acquisition step, a converted image acquisition step, and an evaluation information acquisition step. In the image acquisition step, the control unit acquires an ophthalmic image taken by the ophthalmologic image capturing device. In the converted image acquisition step, the control unit acquires the converted image obtained by converting the image quality of the input image by inputting the ophthalmic image acquired in the image acquisition step as an input image into the mathematical model trained by the machine learning algorithm. To do. In the evaluation information acquisition step, the control unit acquires evaluation information for evaluating the validity of the conversion from the input image to the converted image by the mathematical model.
 本開示で例示する眼科画像処理装置によると、入力画像から変換画像への数学モデルによる変換の妥当性を評価する評価情報が取得される。従って、眼科画像処理装置は、評価情報を利用することで、適切な情報をユーザに提示することができる。 According to the ophthalmic image processing apparatus exemplified in the present disclosure, evaluation information for evaluating the validity of conversion from an input image to a converted image by a mathematical model is acquired. Therefore, the ophthalmic image processing apparatus can present appropriate information to the user by using the evaluation information.
 入力画像には、種々の眼科画像を採用することができる。例えば、OCT装置によって撮影された断層画像(二次元断層画像または三次元断層画像)、眼底カメラによって撮影された画像、レーザ走査型検眼装置(SLO)によって撮影された画像、および、角膜内皮細胞撮影装置によって撮影された画像等の少なくともいずれかが、入力画像として使用されてもよい。また、眼科画像は、OCT装置によって撮影された被検眼の眼底のOCTアンジオ画像であってもよい。OCTアンジオ画像は、眼底を正面(つまり、被検眼の視線方向)から見た二次元の正面画像であってもよい。OCTアンジオ画像は、例えば、同一位置に関して異なる時間に取得された少なくとも2つのOCT信号が処理されることで取得されるモーションコントラスト画像であってもよい。また、眼科画像は、OCT装置によって撮影された三次元断層画像の少なくとも一部を、OCT装置の測定光の光軸に沿う方向(正面方向)から見た場合のEnface画像(OCT正面画像)であってもよい。 Various ophthalmic images can be used as the input image. For example, tomographic images taken by an OCT device (two-dimensional tomographic image or three-dimensional tomographic image), images taken by a fundus camera, images taken by a laser scanning eye examination device (SLO), and corneal endothelial cell imaging. At least one of the images taken by the device may be used as the input image. Further, the ophthalmologic image may be an OCT angio image of the fundus of the eye to be inspected taken by the OCT apparatus. The OCT angio image may be a two-dimensional front image in which the fundus is viewed from the front (that is, the line-of-sight direction of the eye to be inspected). The OCT angio image may be, for example, a motion contrast image acquired by processing at least two OCT signals acquired at different times with respect to the same position. Further, the ophthalmologic image is an Enface image (OCT front image) when at least a part of the three-dimensional tomographic image taken by the OCT device is viewed from the direction (front direction) along the optical axis of the measurement light of the OCT device. There may be.
 また、数学モデルによって変換される画質も適宜選択できる。例えば、制御部は、入力画像のノイズ量、コントラスト、および解像度等の少なくともいずれかが変換された変換画像を、数学モデルを利用して取得してもよい。 Also, the image quality converted by the mathematical model can be selected as appropriate. For example, the control unit may acquire a converted image in which at least one of the noise amount, contrast, and resolution of the input image is converted by using a mathematical model.
 In the evaluation information acquisition step, the control unit may acquire, as the evaluation information, difference information of pixel values between corresponding pixels of the input image input to the mathematical model and the converted image output from the mathematical model. When the image quality of the input image is converted appropriately, the difference between the input image and the converted image is small. Therefore, by acquiring the difference information as the evaluation information, the validity of the conversion from the input image to the converted image can be evaluated appropriately. The difference information may be the difference between the pixel values of corresponding pixels, or the ratio of one pixel value to the other.
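 The difference information described above can be sketched, for illustration only, as follows (the disclosure does not prescribe an implementation; the function names are hypothetical, and 8-bit grayscale images represented as NumPy arrays are assumed):

```python
import numpy as np

def difference_image(input_img: np.ndarray, converted_img: np.ndarray) -> np.ndarray:
    """Absolute per-pixel difference between corresponding pixels."""
    a = input_img.astype(np.float64)
    b = converted_img.astype(np.float64)
    return np.abs(a - b)

def ratio_image(input_img: np.ndarray, converted_img: np.ndarray,
                eps: float = 1e-6) -> np.ndarray:
    """The alternative form of difference information: the per-pixel
    ratio of one pixel value to the other."""
    a = input_img.astype(np.float64)
    b = converted_img.astype(np.float64)
    return b / (a + eps)  # eps avoids division by zero in dark pixels
```

 Either map can then be imaged as the difference image or summarized into a scalar evaluation value.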
 In the evaluation information acquisition step, the control unit may acquire the difference information after applying a Fourier transform (for example, a two-dimensional Fourier transform) to each of the input image and the converted image. Periodic artifacts may occur in the converted image. When periodic artifacts occur in the converted image, the input image and the converted image show different frequency distributions. Therefore, by using the Fourier transform, the presence or absence of periodic artifacts can be evaluated more appropriately.
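 One illustrative way to realize this frequency-domain comparison (not prescribed by the disclosure; the log-magnitude formulation is an assumption made here to keep the DC term from dominating) is to take the difference of the two log-magnitude spectra, in which a periodic artifact shows up as an isolated peak:

```python
import numpy as np

def spectral_difference(input_img: np.ndarray, converted_img: np.ndarray) -> np.ndarray:
    """Difference of the log-magnitude 2D spectra of the two images.
    Periodic artifacts introduced by the conversion appear as isolated
    peaks in the returned map."""
    fa = np.fft.fftshift(np.fft.fft2(input_img.astype(np.float64)))
    fb = np.fft.fftshift(np.fft.fft2(converted_img.astype(np.float64)))
    return np.abs(np.log1p(np.abs(fb)) - np.log1p(np.abs(fa)))
```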
 The control unit may execute a first evaluation step of evaluating the conversion from the input image to the converted image as invalid when, in a difference image obtained by imaging the difference information, the difference value within an arbitrary region containing a plurality of pixels is equal to or greater than a threshold value. Even when the conversion is performed properly, pixels with large values may be scattered throughout the difference image. On the other hand, when the conversion is not performed properly because of the presence of a diseased site or the like, a region in which pixels with large values are densely concentrated appears in the difference image. Therefore, by comparing the difference value within a region containing a plurality of pixels with a threshold value, the control unit can suppress the influence of the pixels that are scattered even when the conversion is performed properly, and can appropriately evaluate the validity of the conversion.
 The control unit may evaluate the validity of the conversion based on the difference image after applying a smoothing process to the pixel values of the difference image (the difference value corresponding to each pixel). In this case, the influence of the pixels that are scattered even when the conversion is performed properly is suppressed more appropriately.
 A specific method for evaluating the validity of the conversion based on the difference values within the region can also be selected as appropriate. For example, the control unit may evaluate the conversion as invalid when the average of the difference values within the region is equal to or greater than a threshold value. The control unit may also evaluate whether the conversion is valid based on the number of pixels within a unit region whose difference value is equal to or greater than a threshold value. Further, the control unit may acquire information indicating the degree of validity of the conversion based on the difference information. The information indicating the degree of validity may be displayed on the display unit.
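 The first evaluation step described above, using region averaging (a form of box smoothing) so that isolated large pixels do not trigger a rejection while dense clusters do, can be sketched as follows (illustrative only; window size and threshold are hypothetical parameters, not values from the disclosure):

```python
import numpy as np

def region_means(diff_img: np.ndarray, size: int) -> np.ndarray:
    """Mean difference value over every size x size window, computed
    with an integral image (equivalent to box smoothing)."""
    c = np.cumsum(np.cumsum(diff_img.astype(np.float64), axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    s = c[size:, size:] - c[:-size, size:] - c[size:, :-size] + c[:-size, :-size]
    return s / (size * size)

def conversion_is_valid(diff_img: np.ndarray, size: int = 8,
                        threshold: float = 10.0) -> bool:
    """First evaluation step: invalid when any region's mean difference
    reaches the threshold. Scattered single large pixels are averaged
    away; dense clusters of large differences are not."""
    return bool(region_means(diff_img, size).max() < threshold)
```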
 In the evaluation information acquisition step, the control unit may acquire a difference image obtained by imaging the difference information of pixel values between corresponding pixels of the input image input to the mathematical model and the converted image output from the mathematical model, and may acquire, as the evaluation information, the similarity (for example, a correlation) between the difference image and the input image. When the image quality of the input image is converted appropriately, the difference between the input image and the converted image is small, so the similarity between the difference image and the input image is small. On the other hand, when the conversion of the input image is not performed properly and an irregular site or the like in the input image affects the conversion, the position of the irregular site in the input image approximates the positions where the difference value is large in the difference image, so the similarity between the input image and the difference image is large. Therefore, by acquiring the similarity between the difference image and the input image as the evaluation information, the validity of the conversion can be evaluated appropriately. The method of acquiring the similarity can be selected as appropriate. For example, a correlation diagram may be acquired, or a correlation coefficient may be acquired.
 The control unit may further execute a second evaluation step of evaluating the conversion from the input image to the converted image as invalid when the value indicating the similarity is equal to or greater than a threshold value. As described above, when the conversion is performed properly, the similarity between the input image and the difference image is small. On the other hand, when the conversion is not performed properly, the similarity between the input image and the difference image is large. Therefore, by determining whether the value indicating the similarity is equal to or greater than the threshold value, the control unit can appropriately evaluate whether the conversion from the input image to the converted image is valid. Various values (for example, a correlation coefficient) can be used as the value indicating the similarity.
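 Using the correlation coefficient as the value indicating the similarity, the second evaluation step can be sketched as follows (illustrative only; the threshold value is a hypothetical parameter):

```python
import numpy as np

def second_evaluation(input_img: np.ndarray, diff_img: np.ndarray,
                      threshold: float = 0.5) -> bool:
    """Second evaluation step: the conversion is judged invalid when
    the correlation coefficient between the input image and the
    difference image is at or above the threshold."""
    r = np.corrcoef(input_img.ravel().astype(np.float64),
                    diff_img.ravel().astype(np.float64))[0, 1]
    return bool(r < threshold)  # True = conversion judged valid
```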
 The control unit may further execute a difference image display step of displaying a difference image obtained by imaging the difference information on the display unit. In the difference image, a difference appears between regions where the conversion was performed properly and regions where it was not. Therefore, by checking the difference image, the user can confirm whether the conversion was performed properly. Furthermore, by checking the difference image, the user can also identify the regions where the conversion was not performed properly.
 When the input image contains an irregular site (for example, a diseased site), it is difficult to convert the image quality of the irregular site properly. Therefore, by checking the difference image, the user can also appropriately grasp the irregular site in the input image.
 The specific method of displaying the difference image on the display unit can be selected as appropriate. For example, the display unit may display at least one of the input image and the converted image simultaneously with (for example, side by side with) the difference image. The control unit may also display the difference image superimposed on at least one of the input image and the converted image. In this case, the user can easily compare at least one of the input image and the converted image with the difference image. The control unit may also display the difference image alone on the display unit.
 In the evaluation information acquisition step, the control unit may acquire the evaluation information by inputting the input image and the converted image into a mathematical model trained by a machine learning algorithm (a mathematical model for evaluation information acquisition that is different from the mathematical model for converted image acquisition). In this case, the validity of the conversion can be evaluated appropriately even without acquiring the difference between the input image and the converted image.
 The form of the evaluation information output by the mathematical model for evaluation information acquisition can also be selected as appropriate. For example, the mathematical model for evaluation information acquisition may output evaluation information indicating whether the conversion from the input image to the converted image is valid. In this case, whether the conversion is valid can be evaluated easily. The mathematical model for evaluation information acquisition may also output information such as a numerical value indicating the degree of validity of the conversion as the evaluation information.
 The form of the evaluation information can also be changed. For example, the control unit may acquire the evaluation information by using various parameters related to the image quality of an image (at least the converted image). As a parameter related to image quality, at least one of, for example, the signal strength of the ophthalmic image, an index indicating signal quality (for example, SSI (Signal Strength Index) or SQI (SLO Quality Index)), the ratio of the signal level of the image to the noise level (SNR: Signal to Noise Ratio), the background noise level, and the contrast of the image may be used. When the input image is converted to acquire a converted image with improved image quality, the image quality of the converted image should be better than that of the input image if the conversion is performed properly. Therefore, for example, a parameter indicating the image quality of the converted image may be acquired as the evaluation information. Alternatively, the difference between a parameter indicating the image quality of the converted image and a parameter indicating the image quality of the input image may be acquired as the evaluation information.
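 As an illustrative sketch of such image-quality parameters (these are generic stand-ins chosen here for illustration; indices such as SSI and SQI are device-specific and are not reproduced, and the mean/standard-deviation form of SNR is an assumption):

```python
import numpy as np

def image_quality_params(img: np.ndarray) -> dict:
    """Crude image-quality parameters: a mean/std proxy for SNR (in dB)
    and Michelson contrast."""
    img = img.astype(np.float64)
    snr_db = 20.0 * np.log10(img.mean() / (img.std() + 1e-12))
    lo, hi = img.min(), img.max()
    contrast = (hi - lo) / (hi + lo + 1e-12)
    return {"snr_db": snr_db, "contrast": contrast}

def quality_gain(input_img: np.ndarray, converted_img: np.ndarray) -> float:
    """Difference of the SNR parameter between converted and input image,
    usable as evaluation information (positive means improvement)."""
    return (image_quality_params(converted_img)["snr_db"]
            - image_quality_params(input_img)["snr_db"])
```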
 The control unit may further execute a warning step of issuing a warning to the user when the conversion is evaluated as invalid based on the evaluation information acquired in the evaluation information acquisition step. In this case, the user can easily grasp that the conversion from the input image to the converted image may not have been performed properly.
 The specific method of the warning process can be selected as appropriate. For example, the control unit may warn the user by displaying at least one of a warning message, a warning image, and the like on the display unit. The control unit may also warn the user by outputting at least one of a warning message, a warning sound, and the like from a speaker. Further, the control unit may execute the warning process while displaying the actually converted image on the display unit, or may execute the warning process without displaying the converted image.
 The control unit may further execute a display stop step of stopping the process of displaying, on the display unit, the converted image acquired in the converted image acquisition step when the conversion is evaluated as invalid based on the evaluation information acquired in the evaluation information acquisition step. In this case, a converted image for which the conversion from the input image was not performed properly is prevented from being displayed on the display unit. Therefore, the possibility that the user cannot accurately make various judgments based on the converted image is reduced.
 The method of using the evaluation information can also be changed. For example, the control unit may display at least one of a numerical value, a graph, or the like indicating the acquired evaluation information on the display unit. In this case, the user can easily grasp, based on the displayed evaluation information, whether the conversion from the input image to the converted image was performed properly. As described above, the difference image between the input image and the converted image may also be displayed on the display unit as the evaluation information.
 When the conversion is evaluated as invalid based on the evaluation information, the control unit may acquire a converted image by inputting the input image into a mathematical model different from the mathematical model that performed the conversion evaluated as invalid. The characteristics of the conversion performed by a mathematical model differ depending on the algorithm, the training data, and the like used to train it. Therefore, when the conversion is evaluated as invalid, there is a possibility that a valid converted image can be acquired by obtaining the converted image with a different mathematical model.
 As described above, when the input image contains an irregular site (for example, a diseased site), it is difficult to convert the image quality of the irregular site properly. In this case, the position of the irregular site appears in the difference image between the input image and the converted image. Therefore, the control unit may display the difference image on the display unit regardless of the validity of the conversion from the input image to the converted image. As a result, the user can easily grasp the position of the irregular site based on the difference image.
 In this case, the ophthalmic image processing program can be expressed as follows. An ophthalmic image processing program executed by an ophthalmic image processing apparatus that processes an ophthalmic image, which is an image of the tissue of an examined eye, wherein, by being executed by a control unit of the ophthalmic image processing apparatus, the ophthalmic image processing program causes the ophthalmic image processing apparatus to execute: an image acquisition step of acquiring an ophthalmic image captured by an ophthalmic imaging apparatus; a converted image acquisition step of acquiring a converted image obtained by converting the image quality of an input image by inputting the ophthalmic image acquired in the image acquisition step into a mathematical model trained by a machine learning algorithm as the input image; a difference image acquisition step of acquiring a difference image obtained by imaging difference information of pixel values between corresponding pixels of the input image input to the mathematical model and the converted image output from the mathematical model; and a difference image display step of displaying the difference image on a display unit.
 The device that executes the image acquisition step, the converted image acquisition step, the evaluation information acquisition step, and the like can be selected as appropriate. For example, the control unit of a personal computer (hereinafter, "PC") may execute all of the converted image acquisition step, the evaluation information acquisition step, and so on. That is, the control unit of the PC may acquire an ophthalmic image from the ophthalmic imaging apparatus and perform the converted image acquisition process and the like based on the acquired ophthalmic image. The control unit of the ophthalmic imaging apparatus may also execute all of the converted image acquisition step, the evaluation information acquisition step, and so on. Further, the control units of a plurality of devices (for example, an ophthalmic imaging apparatus and a PC) may cooperate to execute the converted image acquisition step, the evaluation information acquisition step, and the like.
FIG. 1 is a block diagram showing the schematic configurations of the mathematical model building apparatus 1, the ophthalmic image processing apparatus 21, and the ophthalmic imaging apparatuses 11A and 11B.
FIG. 2 shows an example of input training data and output training data used when the mathematical model is made to output a high-quality two-dimensional tomographic image as the converted image.
FIG. 3 is a flowchart of the mathematical model building process executed by the mathematical model building apparatus 1.
FIG. 4 is a flowchart of the ophthalmic image processing executed by the ophthalmic image processing apparatus 21.
FIG. 5 shows an example of an ophthalmic image used as the input image.
FIG. 6 shows an example of a converted image obtained by converting the image quality of the input image shown in FIG. 5.
FIG. 7 shows an example of the difference image between the input image shown in FIG. 5 and the converted image shown in FIG. 6.
FIG. 8 is an explanatory diagram illustrating an example of a method of evaluating the validity of the conversion from the input image to the converted image based on the difference image.
FIG. 9 is a flowchart of the ophthalmic image processing in a modified example.
(Device configuration)
 Hereinafter, one typical embodiment of the present disclosure will be described with reference to the drawings. As shown in FIG. 1, in the present embodiment, a mathematical model building apparatus 1, an ophthalmic image processing apparatus 21, and ophthalmic imaging apparatuses 11A and 11B are used. The mathematical model building apparatus 1 builds a mathematical model by training it with a machine learning algorithm. A program that realizes the built mathematical model is stored in the storage device 24 of the ophthalmic image processing apparatus 21. By inputting an ophthalmic image into the mathematical model as an input image, the ophthalmic image processing apparatus 21 acquires a converted image in which the image quality of the input image has been converted (in the present embodiment, improved). The ophthalmic image processing apparatus 21 also acquires evaluation information for evaluating the validity of the conversion from the input image to the converted image. The ophthalmic imaging apparatuses 11A and 11B capture ophthalmic images, which are images of the tissue of the examined eye.
 As an example, a personal computer (hereinafter, "PC") is used as the mathematical model building apparatus 1 of the present embodiment. Although details will be described later, the mathematical model building apparatus 1 builds a mathematical model by training it with ophthalmic images acquired from the ophthalmic imaging apparatus 11A (hereinafter, "training ophthalmic images") and images obtained by converting the image quality of the training ophthalmic images. However, the device that can function as the mathematical model building apparatus 1 is not limited to a PC. For example, the ophthalmic imaging apparatus 11A may function as the mathematical model building apparatus 1. Further, the control units of a plurality of devices (for example, the CPU of a PC and the CPU 13A of the ophthalmic imaging apparatus 11A) may cooperate to build the mathematical model.
 In the present embodiment, a case where a CPU is used as an example of a controller that performs various processes is described. However, it goes without saying that a controller other than a CPU may be used in at least some of the devices. For example, the processing may be sped up by adopting a GPU as the controller.
 The mathematical model building apparatus 1 will be described. The mathematical model building apparatus 1 is placed, for example, at a manufacturer that provides the ophthalmic image processing apparatus 21 or the ophthalmic image processing program to users. The mathematical model building apparatus 1 includes a control unit 2 that performs various control processes and a communication I/F 5. The control unit 2 includes a CPU 3, which is a controller that governs control, and a storage device 4 capable of storing programs, data, and the like. The storage device 4 stores a mathematical model building program for executing the mathematical model building process (see FIG. 3) described later. The communication I/F 5 connects the mathematical model building apparatus 1 to other devices (for example, the ophthalmic imaging apparatus 11A and the ophthalmic image processing apparatus 21).
 The mathematical model building apparatus 1 is connected to an operation unit 7 and a display device 8. The operation unit 7 is operated by the user to input various instructions to the mathematical model building apparatus 1. For the operation unit 7, at least one of a keyboard, a mouse, a touch panel, or the like can be used. A microphone or the like for inputting various instructions may be used together with or instead of the operation unit 7. The display device 8 displays various images. For the display device 8, various devices capable of displaying images (for example, at least one of a monitor, a display, a projector, or the like) can be used. The term "image" in the present disclosure includes both still images and moving images.
 The mathematical model building apparatus 1 can acquire ophthalmic image data (hereinafter, sometimes simply "ophthalmic images") from the ophthalmic imaging apparatus 11A. The mathematical model building apparatus 1 may acquire the ophthalmic image data from the ophthalmic imaging apparatus 11A by, for example, at least one of wired communication, wireless communication, a removable storage medium (for example, a USB memory), or the like.
 The ophthalmic image processing apparatus 21 will be described. The ophthalmic image processing apparatus 21 is placed, for example, at a facility that diagnoses or examines subjects (for example, a hospital or a health examination facility). The ophthalmic image processing apparatus 21 includes a control unit 22 that performs various control processes and a communication I/F 25. The control unit 22 includes a CPU 23, which is a controller that governs control, and a storage device 24 capable of storing programs, data, and the like. The storage device 24 stores an ophthalmic image processing program for executing the ophthalmic image processing (see FIGS. 4 and 9) described later. The ophthalmic image processing program includes a program that realizes the mathematical model built by the mathematical model building apparatus 1. The communication I/F 25 connects the ophthalmic image processing apparatus 21 to other devices (for example, the ophthalmic imaging apparatus 11B and the mathematical model building apparatus 1).
 The ophthalmic image processing apparatus 21 is connected to an operation unit 27 and a display device 28. For the operation unit 27 and the display device 28, various devices can be used, as with the operation unit 7 and the display device 8 described above.
 The ophthalmic image processing apparatus 21 can acquire ophthalmic images from the ophthalmic imaging apparatus 11B. The ophthalmic image processing apparatus 21 may acquire the ophthalmic images from the ophthalmic imaging apparatus 11B by, for example, at least one of wired communication, wireless communication, a removable storage medium (for example, a USB memory), or the like. The ophthalmic image processing apparatus 21 may also acquire, via communication or the like, a program or the like that realizes the mathematical model built by the mathematical model building apparatus 1.
 The ophthalmic imaging apparatuses 11A and 11B will be described. As an example, the present embodiment describes a case in which an ophthalmic imaging apparatus 11A that provides ophthalmic images to the mathematical model building apparatus 1 and an ophthalmic imaging apparatus 11B that provides ophthalmic images to the ophthalmic image processing apparatus 21 are used. However, the number of ophthalmic imaging apparatuses used is not limited to two. For example, the mathematical model building apparatus 1 and the ophthalmic image processing apparatus 21 may acquire ophthalmic images from a plurality of ophthalmic imaging apparatuses. The mathematical model building apparatus 1 and the ophthalmic image processing apparatus 21 may also acquire ophthalmic images from a single common ophthalmic imaging apparatus.
 In the present embodiment, an OCT apparatus is exemplified as the ophthalmic imaging apparatus 11 (11A, 11B). However, an ophthalmic imaging apparatus other than an OCT apparatus (for example, a scanning laser ophthalmoscope (SLO), a fundus camera, a Scheimpflug camera, or a corneal endothelial cell imaging apparatus (CEM)) may be used.
 The ophthalmic imaging device 11 (11A, 11B) includes a control unit 12 (12A, 12B) that performs various control processes and an ophthalmic imaging unit 16 (16A, 16B). The control unit 12 includes a CPU 13 (13A, 13B), which is the controller in charge of control, and a storage device 14 (14A, 14B) capable of storing programs, data, and the like. When the ophthalmic imaging device 11 executes at least part of the ophthalmic image processing described later (see FIGS. 4 and 9), at least part of the ophthalmic image processing program for executing that processing is, needless to say, stored in the storage device 14.
 The ophthalmic imaging unit 16 includes the various components required to capture an ophthalmic image of the subject's eye. The ophthalmic imaging unit 16 of the present embodiment includes an OCT light source, a splitting optical element that splits the OCT light emitted from the OCT light source into measurement light and reference light, a scanning unit for scanning the measurement light, an optical system for irradiating the subject's eye with the measurement light, and a light-receiving element that receives the combined light of the reference light and the light reflected by the tissue of the subject's eye.
 The ophthalmic imaging device 11 can capture two-dimensional tomographic images and three-dimensional tomographic images of the fundus of the subject's eye. Specifically, the CPU 13 scans the OCT light (measurement light) along a scan line to capture a two-dimensional tomographic image (see FIG. 5) of the cross section intersecting the scan line. The CPU 13 can also capture a three-dimensional tomographic image of the tissue by scanning the OCT light two-dimensionally. For example, the CPU 13 acquires a plurality of two-dimensional tomographic images by scanning the measurement light along each of a plurality of scan lines at different positions within a two-dimensional region of the tissue as viewed from the front, and then combines the captured two-dimensional tomographic images to obtain a three-dimensional tomographic image.
 Furthermore, the CPU 13 can capture a plurality of ophthalmic images of the same site by scanning the measurement light a plurality of times over the same site on the tissue (in the present embodiment, along the same scan line). By performing averaging processing on a plurality of ophthalmic images of the same site, the CPU 13 can obtain an averaged image in which the influence of speckle noise is suppressed, thereby improving the image quality of the two-dimensional tomographic image. The averaging processing may be performed, for example, by averaging the pixel values of pixels at the same position across the plurality of ophthalmic images. The more images are averaged, the more readily the influence of speckle noise is suppressed, but the longer the imaging time becomes. Note that while capturing the plurality of ophthalmic images of the same site, the ophthalmic imaging device 11 executes tracking processing that makes the scanning position of the OCT light follow the movement of the subject's eye.
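The pixel-wise averaging described above can be sketched as follows. This is an illustrative sketch, not part of the disclosure; the array sizes, the Gaussian noise model standing in for speckle noise, and the frame count of 32 are all assumptions made for demonstration.

```python
import numpy as np

def average_frames(frames):
    """Average co-registered B-scans pixel by pixel to suppress speckle noise.

    frames: list of 2-D arrays (grayscale tomographic images of the same site).
    """
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)

# Simulated example: a constant "tissue" signal corrupted by zero-mean noise.
rng = np.random.default_rng(0)
truth = np.full((4, 4), 100.0)
frames = [truth + rng.normal(0.0, 10.0, truth.shape) for _ in range(32)]
averaged = average_frames(frames)
# The averaged image lies closer to the underlying signal than any single frame,
# at the cost of capturing 32 frames instead of one.
```

As the text notes, the residual noise shrinks as more frames are averaged, which is exactly the trade-off against imaging time.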
(Mathematical model building process)
 The mathematical model building process executed by the mathematical model building device 1 will be described with reference to FIGS. 2 and 3. The mathematical model building process is executed by the CPU 3 in accordance with the mathematical model building program stored in the storage device 4.
 In the mathematical model building process, a mathematical model is trained with a training data set, thereby building a mathematical model that outputs a converted image obtained by converting the image quality of an input image. The training data set includes input-side data (input training data) and output-side data (output training data). The mathematical model can convert various kinds of ophthalmic images into converted images, and the type of training data set used for training is determined by the type of ophthalmic image whose quality the mathematical model converts. The following describes a case in which a two-dimensional tomographic image is input to the mathematical model as an input image, and the mathematical model outputs, as a converted image, a two-dimensional tomographic image with improved image quality (a high-quality image).
 FIG. 2 shows an example of the input training data and the output training data used when the mathematical model is to output a high-quality two-dimensional tomographic image as the converted image. In the example shown in FIG. 2, the CPU 3 acquires a set 40 of a plurality of two-dimensional tomographic images 400A to 400X capturing the same site of the tissue. The CPU 3 uses some of the two-dimensional tomographic images 400A to 400X in the set 40 (fewer images than the number used for the averaging of the output training data described below) as the input training data, and acquires the averaged image 41 of the plurality of two-dimensional tomographic images 400A to 400X in the set 40 as the output training data. When the mathematical model is trained with the input training data and output training data illustrated in FIG. 2, inputting a two-dimensional tomographic image into the trained mathematical model as an input image causes a high-quality two-dimensional tomographic image, in which the influence of speckle noise is suppressed, to be output as the converted image.
 Note that the ophthalmic image whose quality the mathematical model converts is not limited to a two-dimensional tomographic image of the fundus. For example, the ophthalmic image may be an image of a site other than the fundus of the subject's eye. The ophthalmic image may also be a three-dimensional tomographic image, an OCT angiography image, an en-face image, or the like captured by an OCT device. The OCT angiography image may be a two-dimensional front image of the fundus viewed from the front (that is, along the line of sight of the subject's eye), for example a motion contrast image obtained by processing at least two OCT signals acquired at different times for the same position. The en-face image is a two-dimensional front image obtained when at least part of a three-dimensional tomographic image captured by the OCT device is viewed along the optical axis of the measurement light of the OCT device (the front direction). The ophthalmic image may also be an image captured by a fundus camera, a scanning laser ophthalmoscope (SLO), a corneal endothelial cell microscope, or the like.
 It is also possible to change the method of generating the high-quality ophthalmic images used as the output training data. For example, the image quality may be improved by processing other than averaging.
 The mathematical model building process will be described with reference to FIG. 3. The CPU 3 acquires at least part of an ophthalmic image captured by the ophthalmic imaging device 11A as input training data (S1). In the present embodiment, the ophthalmic image data is generated by the ophthalmic imaging device 11A and then acquired by the mathematical model building device 1. However, the CPU 3 may instead acquire the signal on which the ophthalmic image is based (for example, an OCT signal) from the ophthalmic imaging device 11A and generate the ophthalmic image from the acquired signal, thereby obtaining the ophthalmic image data.
 Next, the CPU 3 acquires the output training data corresponding to the input training data acquired in S1 (S3). An example of the correspondence between the input training data and the output training data is as described above.
 Next, the CPU 3 trains the mathematical model with the training data set using a machine learning algorithm (S3). Commonly known machine learning algorithms include, for example, neural networks, random forests, boosting, and support vector machines (SVMs).
 A neural network is a technique that imitates the behavior of biological neural networks. Neural networks include, for example, feedforward neural networks, RBF (radial basis function) networks, spiking neural networks, convolutional neural networks, recurrent neural networks (recurrent neural nets, feedback neural nets, and the like), and stochastic neural networks (Boltzmann machines, Bayesian networks, and the like).
 A random forest is a method of generating a large number of decision trees by learning from randomly sampled training data. When a random forest is used, the branches of a plurality of decision trees trained in advance as classifiers are traversed, and the average (or majority vote) of the results obtained from the decision trees is taken.
 Boosting is a technique for generating a strong classifier by combining a plurality of weak classifiers. A strong classifier is built by sequentially training simple, weak classifiers.
 An SVM is a technique for constructing a two-class pattern classifier using linear input elements. An SVM learns the parameters of the linear input elements from the training data using, for example, the criterion of finding the margin-maximizing hyperplane that maximizes the distance to each data point (the hyperplane separation theorem).
 A mathematical model refers, for example, to a data structure for predicting the relationship between input data and output data. The mathematical model is built by being trained with a training data set. As described above, a training data set is a set of input training data and output training data. Training updates, for example, the correlation data (for example, weights) between each input and output.
 In the present embodiment, a multilayer neural network is used as the machine learning algorithm. A neural network includes an input layer for inputting data, an output layer for generating the data to be predicted, and one or more hidden layers between the input layer and the output layer, with a plurality of nodes (also called units) arranged in each layer. Specifically, the present embodiment uses a convolutional neural network (CNN), which is a type of multilayer neural network. However, other machine learning algorithms may be used. For example, a generative adversarial network (GAN), which uses two competing neural networks, may be adopted as the machine learning algorithm.
 The processing of S1 to S3 is repeated until building of the mathematical model is completed (S5: NO). When building of the mathematical model is completed (S5: YES), the mathematical model building process ends. The program and data that implement the built mathematical model are incorporated into the ophthalmic image processing device 21.
(Ophthalmic image processing)
 An example of the ophthalmic image processing executed by the ophthalmic image processing device 21 will be described with reference to FIGS. 4 to 8. FIGS. 4 to 8 illustrate a case in which the validity of the image quality conversion performed by the mathematical model is evaluated based on difference information (a difference image) between the input image and the converted image. The ophthalmic image processing illustrated in FIG. 4 is executed by the CPU 23 in accordance with the ophthalmic image processing program stored in the storage device 24.
 As shown in FIG. 4, the CPU 23 acquires an ophthalmic image of the tissue of the subject's eye captured by the ophthalmic imaging device 11B (an OCT device in the present embodiment) (S11). In S11 of the present embodiment, a two-dimensional tomographic image (see FIG. 5) of the fundus tissue of the subject's eye is acquired.
 Next, the CPU 23 inputs the ophthalmic image acquired in S11 as an input image into the mathematical model trained by the machine learning algorithm, thereby acquiring a converted image in which the image quality of the input image has been converted (in the present embodiment, improved) (S12).
 FIG. 5 shows an example of an ophthalmic image used as an input image, and FIG. 6 shows a converted image in which the image quality of the input image shown in FIG. 5 has been converted (improved). The input image shown in FIG. 5 has lower image quality than the converted image shown in FIG. 6, but it is generated without averaging processing, or by averaging only a small number of ophthalmic images, and can therefore be captured in a short time. If an image as high in quality as the converted image shown in FIG. 6 were to be captured by the ophthalmic imaging device 11B, images of the same site would have to be captured a plurality of times and averaged, making it difficult to shorten the imaging time. In the present embodiment, a high-quality converted image is obtained by inputting an input image captured in a short time into the mathematical model. A high-quality image is thus obtained while suppressing an increase in imaging time.
 Next, the CPU 23 acquires, as evaluation information, difference information on the pixel values between corresponding pixels of the input image (see FIG. 5) input to the mathematical model in S12 and the converted image (see FIG. 6) output from the mathematical model in S12 (S13). The evaluation information is information for evaluating the validity of the conversion from the input image to the converted image performed by the mathematical model. When the image quality of the input image is converted appropriately and the converted image is output, the difference between the input image and the converted image is small. On the other hand, depending on the state of the input image, the image quality conversion may not be executed appropriately. For example, if the input image contains a site that appeared only rarely in the training data set (ophthalmic images) used to train the mathematical model (for example, an irregular site such as a lesion), that irregular site is difficult to convert appropriately. Consequently, when an irregular site or the like in the input image affects the conversion and the conversion is not executed appropriately, the difference between the input image and the converted image becomes large. Acquiring the difference information as evaluation information therefore allows the validity of the conversion from the input image to the converted image to be evaluated appropriately.
 Note that the CPU 23 may acquire the difference information after performing a Fourier transform (for example, a two-dimensional Fourier transform) on each of the input image and the converted image. Periodic artifacts may occur in the converted image, and when they do, the input image and the converted image exhibit different frequency distributions. Using the Fourier transform therefore allows the presence or absence of periodic artifacts to be evaluated more appropriately.
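The frequency-domain comparison can be sketched as follows. This is an illustrative sketch only; the disclosure does not specify how the two spectra are compared, so the magnitude-spectrum subtraction and the test pattern below are assumptions.

```python
import numpy as np

def spectral_difference(input_img, converted_img):
    """Compare 2-D magnitude spectra of the two images.

    A periodic artifact introduced by the conversion shows up as a large
    value at the corresponding non-zero frequency bin.
    """
    fa = np.abs(np.fft.fft2(input_img.astype(np.float64)))
    fb = np.abs(np.fft.fft2(converted_img.astype(np.float64)))
    return fb - fa
```

A conversion that adds a horizontal stripe pattern, for instance, produces a peak in this difference at the stripe frequency even though the per-pixel differences may be small.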
 In S13 of the present embodiment, a difference image (see FIG. 7) showing the distribution of the difference values acquired for each pixel is acquired as the difference information. In the difference image illustrated in FIG. 7, gray portions whose brightness is at the middle value (for example, 128, the middle value when the brightness varies over the range 1 to 256) correspond to pixels with small difference values. In the difference image, a difference arises between regions where the conversion was performed appropriately and regions where it was not. The validity of the conversion from the input image to the converted image can therefore be evaluated appropriately based on the difference image.
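A difference image of the kind described, with mid-gray marking pixels where the two images agree, can be sketched as follows. The 0 to 255 value range and the clipping are illustrative assumptions for 8-bit images.

```python
import numpy as np

def difference_image(input_img, converted_img, mid=128):
    """Map the signed pixel-wise difference to a gray image centered on `mid`.

    Pixels where the input and converted images agree come out mid-gray;
    large positive or negative differences appear bright or dark.
    """
    diff = converted_img.astype(np.int32) - input_img.astype(np.int32)
    return np.clip(diff + mid, 0, 255).astype(np.uint8)
```

With this mapping, a well-converted image yields a nearly uniform gray difference image, while a poorly converted region stands out as a bright or dark patch.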
 Note that the difference information may be information other than a difference image. For example, in S13, the average of the difference values over a plurality of pixels may be acquired as the difference information. The difference information may also be the difference between the pixel values of corresponding pixels, or the ratio of one pixel value to the other.
 Next, the CPU 23 evaluates the validity of the conversion from the input image to the converted image in S12 based on the difference image (see FIG. 7) acquired in S13 (S14). Specifically, in S14 of the present embodiment, it is evaluated whether or not the conversion in S12 was valid.
 An example of a method of evaluating the validity of the conversion based on the difference image will be described with reference to FIG. 8. As shown in FIG. 8, even when the conversion from the input image to the converted image has been executed appropriately, pixels 51 with large difference values are scattered throughout the image region 50 of the difference image. On the other hand, in a region where the conversion was not executed appropriately due to the presence of an irregular site (for example, a diseased site), pixels 51 with large difference values are densely concentrated. Accordingly, in the present embodiment, when the difference values within a given region containing a plurality of pixels are equal to or greater than a threshold, the CPU 23 evaluates that a region exists in which the conversion was not executed appropriately (that is, that the conversion is not valid). In the example shown in FIG. 8, the difference values within the region 55 are equal to or greater than the threshold, so the CPU 23 evaluates the region 55 as a region in which the conversion was not executed appropriately (a region in which an irregular site exists).
 The CPU 23 also performs smoothing processing on the pixel values of the difference image (the difference values corresponding to the pixels) before evaluating the validity of the conversion based on the difference image. The validity of the conversion is thus evaluated more appropriately, with the influence of the pixels 51 that remain scattered even after an appropriate conversion being suppressed.
 The specific method of evaluating the validity of the conversion based on the difference values within a region can be selected as appropriate. As an example, in the present embodiment, the CPU 23 evaluates that the image quality conversion in a region is not valid when the average of the difference values within that region is equal to or greater than a threshold. However, the CPU 23 may instead evaluate whether or not the conversion is valid based on the number of pixels within a unit region whose difference values are equal to or greater than a threshold.
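The region-based evaluation described above, in which scattered outlier pixels are tolerated but dense clusters of large differences are flagged, can be sketched as follows. The window size and threshold are illustrative assumptions, and averaging over each window here plays the role of the smoothing described in the text.

```python
import numpy as np

def find_invalid_regions(diff_img, window=8, threshold=40.0):
    """Flag windows whose mean absolute difference exceeds a threshold.

    diff_img: 2-D array of per-pixel difference magnitudes.
    Returns one boolean per non-overlapping window: True where the
    conversion is judged not valid (an irregular site may be present).
    """
    h, w = diff_img.shape
    h -= h % window
    w -= w % window
    blocks = diff_img[:h, :w].reshape(h // window, window, w // window, window)
    block_means = blocks.mean(axis=(1, 3))  # smoothing: mean over each window
    return block_means >= threshold
```

An isolated large-difference pixel barely moves its window's mean and is not flagged, whereas a window filled with large differences exceeds the threshold, matching the behavior described for region 55.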
 Instead of evaluating whether or not the conversion in S12 was valid, the CPU 23 may acquire information indicating the degree of validity of the conversion in S12 (for example, a numerical value or a graph) based on the difference information. The CPU 23 may notify the user of the information indicating the degree of validity of the conversion as evaluation information (for example, by displaying it on the display device 28).
 Returning to the description of FIG. 4, when the conversion from the input image to the converted image in S12 is valid (S15: YES), the CPU 23 causes the display device 28 to display the converted image acquired in S12 (S16). On the other hand, when the CPU 23 evaluates that the conversion from the input image to the converted image in S12 is not valid (S15: NO), it stops the process of displaying the converted image acquired in S12 on the display device 28 (that is, it does not execute the processing of S16).
 When the CPU 23 evaluates that the conversion is not valid (S15: NO), it also executes warning processing for the user (S17). As an example, the CPU 23 warns the user by causing the display device 28 to display a warning message such as "The image conversion was not executed properly" or a warning image. However, the warning method can be changed as appropriate. For example, the CPU 23 may warn the user by emitting at least one of a warning message, a warning sound, and the like from a speaker.
 Note that the CPU 23 can also execute the warning processing while causing the display device 28 to display the converted image for which the conversion was not performed appropriately. In this case, the CPU 23 may use a warning message such as "The displayed converted image may be inappropriate" in the processing of S17. Further, when the CPU 23 evaluates that the conversion is not valid (S15: NO), it may cause the display device 28 to display the ophthalmic image used as the input image while stopping the process of displaying the converted image acquired in S12. In this case, the user can observe the desired site based on the ophthalmic image before its image quality was converted.
 Next, the CPU 23 causes the display device 28 to display the difference image (see FIG. 7) acquired in S13 (S18). As described above, in the difference image, a difference arises between regions where the conversion was performed appropriately and regions where it was not. By checking the difference image, the user can therefore confirm whether or not the conversion was performed appropriately, and can also grasp the regions where the conversion was not performed appropriately. Further, when the input image contains an irregular site (for example, a diseased site), the image quality of the irregular site is difficult to convert appropriately, so by checking the difference image the user can also appropriately identify irregular sites in the input image.
(Modification)
 A modification of the above embodiment will be described with reference to FIG. 9. FIG. 9 is a flowchart of the ophthalmic image processing in the modification. In the modification shown in FIG. 9, the similarity (for example, a correlation) between the difference image and the input image is acquired as the evaluation information, and the validity of the conversion is evaluated based on the similarity. At least part of the ophthalmic image processing illustrated in the above embodiment (see FIG. 4) can also be adopted in the ophthalmic image processing of the modification shown in FIG. 9. Processing for which the same processing as in the above embodiment can be adopted is therefore given the same step numbers as in the above embodiment, and its description is omitted or simplified.
 In the ophthalmic image processing of the modification shown in FIG. 9, after executing the converted image acquisition processing (S12), the CPU 23 acquires the difference image (see FIG. 7) between the input image and the converted image (S23). Next, the CPU 23 acquires the similarity between the input image and the difference image as evaluation information (S24). As described above, when the image quality of the input image is converted appropriately, the difference between the input image and the converted image is small, so the similarity between the difference image and the input image is low. On the other hand, when the conversion of the input image is not executed appropriately and an irregular site or the like in the input image affects the conversion, the position (region) of the irregular site or the like in the input image approximately coincides with the position (region) where the difference values are large in the difference image, so the similarity between the input image and the difference image is high. Acquiring the similarity between the difference image and the input image as evaluation information therefore allows the validity of the conversion to be evaluated appropriately.
 Next, based on the similarity acquired in S24, the CPU 23 evaluates the validity of the conversion from the input image to the converted image in S12 (S25). Specifically, in S25 of the present embodiment, whether or not the conversion in S12 was valid is evaluated based on the similarity. As described above, when the conversion is executed properly, the similarity between the input image and the difference image is small; when the conversion is not executed properly, the similarity is large. Therefore, by determining whether or not the value indicating the similarity is equal to or greater than a threshold value, the CPU 23 can appropriately evaluate whether or not the conversion from the input image to the converted image was valid.
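Purely as an illustration of the threshold comparison described for S25 — the threshold value of 0.5 is an arbitrary assumption, not taken from the disclosure:

```python
def conversion_is_valid(similarity_value, threshold=0.5):
    # S25: the conversion is judged invalid when the value indicating the
    # similarity between the input image and the difference image is
    # equal to or greater than the threshold value
    return abs(similarity_value) < threshold

print(conversion_is_valid(0.05))  # denoising left no structure in the difference
print(conversion_is_valid(0.85))  # the difference image resembles the input
```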
 Various values (for example, a correlation coefficient) can be used as appropriate as the value indicating the similarity. Further, instead of evaluating whether or not the conversion in S12 was valid, the CPU 23 may cause the display device 28 to display, as the evaluation information, information indicating the degree of validity of the conversion in S12 (for example, a numerical value, a correlation diagram, or a graph).
 The techniques disclosed in the above embodiment and modified example are merely examples. Accordingly, the techniques exemplified in the above embodiment and modified example can be changed. First, in the above embodiment, the validity of the conversion from the input image to the converted image is evaluated based on the difference information (difference image). In the above modified example, the validity of the conversion is evaluated based on the similarity between the input image and the difference image. However, the method of acquiring the evaluation information for evaluating the validity of the conversion is not limited to the methods exemplified in the above embodiment and modified example.
 For example, the CPU 23 may acquire the evaluation information using a mathematical model trained by a machine learning algorithm. In this case, the mathematical model (a mathematical model for acquiring evaluation information) may be trained in advance using, for example, input images and converted images as input training data, and evaluation information indicating the validity of the conversion between the input image and the converted image of the input training data as output training data. The output training data may be generated by a user comparing the input image and the converted image. The CPU 23 may acquire the evaluation information output by the mathematical model by inputting the input image and the converted image into the mathematical model for acquiring evaluation information. Because the evaluation information is acquired by the mathematical model for acquiring evaluation information, the validity of the conversion is appropriately evaluated even when the difference between the input image and the converted image is not acquired.
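The disclosure does not specify the architecture of the mathematical model for acquiring evaluation information. As a hypothetical stand-in, the sketch below scores the (input image, converted image) pair with a fixed logistic model over simple difference statistics; the feature choice and all weights are invented for illustration, and in practice the mapping would be learned from the user-labeled training data described above.

```python
import numpy as np

def pair_features(input_img, converted_img):
    # Statistics of the (input image, converted image) pair that a trained
    # evaluation model might implicitly rely on (a hypothetical choice).
    diff = np.abs(input_img - converted_img)
    corr = np.corrcoef(input_img.ravel(), diff.ravel())[0, 1]
    return np.array([diff.mean(), diff.max(), corr])

def validity_score(input_img, converted_img,
                   w=np.array([-2.0, -1.0, -5.0]), b=3.0):
    # Stand-in for the trained model: a score in (0, 1), higher = more valid.
    z = pair_features(input_img, converted_img) @ w + b
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
structure = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = structure + rng.normal(0.0, 0.05, (64, 64))

print(round(validity_score(noisy, structure), 3))              # noise removed
print(round(validity_score(noisy, np.zeros_like(noisy)), 3))   # structure erased
```

Note that the score is obtained directly from the image pair, consistent with the point above that the difference need not be acquired as a separate evaluation step when a trained model is used.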
 The mathematical model for acquiring evaluation information may output evaluation information indicating whether or not the conversion from the input image to the converted image is valid, or may output evaluation information such as a numerical value indicating the degree of validity of the conversion.
 The process executed when the conversion from the input image to the converted image is judged not to be valid can also be changed as appropriate. For example, when the CPU 23 evaluates, based on the evaluation information, that a conversion is not valid, the CPU 23 may acquire a converted image by inputting the input image into a mathematical model different from the mathematical model that performed the conversion evaluated as invalid. The characteristics of the conversion performed by a mathematical model differ depending on the algorithm, training data, and the like used when training the model. Therefore, when a conversion is evaluated as invalid, a converted image may be acquired appropriately by obtaining it from a different mathematical model.
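The fallback just described can be sketched as a simple loop over differently trained models. Everything here is hypothetical scaffolding: `models` stands in for a list of interchangeable conversion callables, and `evaluate` for whichever evaluation method (difference information, similarity, or an evaluation model) is in use.

```python
import numpy as np

def convert_with_fallback(input_img, models, evaluate):
    # Try each mathematical model in turn; keep the first converted image
    # that the evaluation information judges valid.
    for model in models:
        converted = model(input_img)
        if evaluate(input_img, converted):
            return converted
    return None  # no available model produced a valid conversion

# Toy stand-ins: one "model" that destroys the image, one that preserves it,
# and an evaluation that accepts only small mean differences.
img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
models = [lambda x: np.zeros_like(x), lambda x: x.copy()]
evaluate = lambda inp, out: np.abs(inp - out).mean() < 0.1

result = convert_with_fallback(img, models, evaluate)
print(result is not None)
```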
 It is also possible to execute only a part of the plurality of techniques exemplified in the above embodiment and modified example. For example, in the ophthalmic image processing shown in FIG. 4, when the conversion is evaluated as invalid (S15: NO), both the process of stopping the display of the converted image and the warning process are executed. However, at least one of the process of stopping the display of the converted image and the warning process may be omitted.
 The process of acquiring an ophthalmic image in S11 of FIGS. 4 and 9 is an example of an "image acquisition step". The process of acquiring the converted image in S12 of FIGS. 4 and 9 is an example of a "converted image acquisition step". The process of acquiring the evaluation information in S13 of FIG. 4 and S24 of FIG. 9 is an example of an "evaluation information acquisition step". The process of evaluating the validity of the conversion in S14 of FIG. 4 is an example of a "first evaluation step". The process of evaluating the validity of the conversion in S25 of FIG. 9 is an example of a "second evaluation step". The process of displaying the difference image in S18 of FIGS. 4 and 9 is an example of a "difference image display step". The warning process shown in S17 of FIGS. 4 and 9 is an example of a "warning step". The process of stopping the display of the converted image upon S15: NO in FIGS. 4 and 9 is an example of a "display stop step".
11A, 11B Ophthalmic imaging device
21 Ophthalmic image processing device
23 CPU
24 Storage device
28 Display device

Claims (10)

  1.  An ophthalmic image processing program executed by an ophthalmic image processing device that processes an ophthalmic image, which is an image of a tissue of an eye to be examined, wherein,
     when the ophthalmic image processing program is executed by a control unit of the ophthalmic image processing device,
     an image acquisition step of acquiring an ophthalmic image captured by an ophthalmic imaging device,
     a converted image acquisition step of acquiring a converted image in which the image quality of an input image is converted, by inputting the ophthalmic image acquired in the image acquisition step as the input image into a mathematical model trained by a machine learning algorithm, and
     an evaluation information acquisition step of acquiring evaluation information for evaluating the validity of the conversion from the input image to the converted image by the mathematical model
     are executed by the ophthalmic image processing device.
  2.  The ophthalmic image processing program according to claim 1, wherein,
     in the evaluation information acquisition step, difference information of pixel values between corresponding pixels of the input image input into the mathematical model and the converted image output from the mathematical model is acquired as the evaluation information.
  3.  The ophthalmic image processing program according to claim 2, wherein
     a first evaluation step of evaluating that the conversion from the input image to the converted image is not valid when, in a difference image obtained by imaging the difference information, the difference value in an arbitrary region including a plurality of pixels is equal to or greater than a threshold value
     is executed by the ophthalmic image processing device.
  4.  The ophthalmic image processing program according to claim 1, wherein,
     in the evaluation information acquisition step, a difference image obtained by imaging difference information of pixel values between corresponding pixels of the input image input into the mathematical model and the converted image output from the mathematical model is acquired, and the similarity between the difference image and the input image is acquired as the evaluation information.
  5.  The ophthalmic image processing program according to claim 4, wherein
     a second evaluation step of evaluating that the conversion from the input image to the converted image is not valid when a value indicating the similarity is equal to or greater than a threshold value
     is executed by the ophthalmic image processing device.
  6.  The ophthalmic image processing program according to any one of claims 2 to 5, wherein
     a difference image display step of causing a display unit to display a difference image obtained by imaging the difference information
     is executed by the ophthalmic image processing device.
  7.  The ophthalmic image processing program according to claim 1, wherein,
     in the evaluation information acquisition step,
     the evaluation information is acquired by inputting the input image input into the mathematical model in the converted image acquisition step and the converted image output from the mathematical model into a mathematical model for acquiring evaluation information, trained by a machine learning algorithm.
  8.  The ophthalmic image processing program according to any one of claims 1 to 7, wherein
     a warning step of issuing a warning to a user when the conversion is evaluated as not valid based on the evaluation information acquired in the evaluation information acquisition step
     is executed by the ophthalmic image processing device.
  9.  The ophthalmic image processing program according to any one of claims 1 to 8, wherein
     a display stop step of stopping a process of displaying, on a display unit, the converted image acquired in the converted image acquisition step when the conversion is evaluated as not valid based on the evaluation information acquired in the evaluation information acquisition step
     is executed by the ophthalmic image processing device.
  10.  An ophthalmic image processing device that processes an ophthalmic image, which is an image of a tissue of an eye to be examined, wherein
     a control unit of the ophthalmic image processing device executes:
     an image acquisition step of acquiring an ophthalmic image captured by an ophthalmic imaging device;
     a converted image acquisition step of acquiring a converted image in which the image quality of an input image is converted, by inputting the ophthalmic image acquired in the image acquisition step as the input image into a mathematical model trained by a machine learning algorithm; and
     an evaluation information acquisition step of acquiring evaluation information for evaluating the validity of the conversion from the input image to the converted image by the mathematical model.


PCT/JP2020/032949 2019-09-04 2020-08-31 Ophthalmic image processing program and ophthalmic image processing device WO2021045019A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021543759A JPWO2021045019A1 (en) 2019-09-04 2020-08-31

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019161616 2019-09-04
JP2019-161616 2019-09-04

Publications (1)

Publication Number Publication Date
WO2021045019A1 true WO2021045019A1 (en) 2021-03-11

Family

ID=74852928

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/032949 WO2021045019A1 (en) 2019-09-04 2020-08-31 Ophthalmic image processing program and ophthalmic image processing device

Country Status (2)

Country Link
JP (1) JPWO2021045019A1 (en)
WO (1) WO2021045019A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001118063A (en) * 1999-10-19 2001-04-27 Canon Inc Device, system, and method for image processing, and storage medium
JP2006031075A (en) * 2004-07-12 2006-02-02 Ricoh Co Ltd Image processing evaluation system
JP2009181508A (en) * 2008-01-31 2009-08-13 Sharp Corp Image processing device, inspection system, image processing method, image processing program, computer-readable recording medium recording the program
JP4430743B2 (en) * 1996-07-30 2010-03-10 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Correct ring-shaped image artifacts
JP2010198068A (en) * 2009-02-23 2010-09-09 Seiko Epson Corp Simulation device for image processing circuit, simulation method for image processing circuit, design method for image processing circuit, and simulation program for image processing circuit
JP2011134200A (en) * 2009-12-25 2011-07-07 Konica Minolta Holdings Inc Image evaluation method, image processing method and image processing device
WO2018210978A1 (en) * 2017-05-19 2018-11-22 Retinai Medical Gmbh Reducing noise in an image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MA, YUHUI ET AL.: "Speckle noise reduction in optical coherence tomography images based on edge-sensitive cGAN", BIOMEDICAL OPTICS EXPRESS, vol. 9, no. 11, 2018, pages 5129-5146, XP055675651, DOI: 10.1364/BOE.9.005129 *

Also Published As

Publication number Publication date
JPWO2021045019A1 (en) 2021-03-11

Similar Documents

Publication Publication Date Title
US11633096B2 (en) Ophthalmologic image processing device and non-transitory computer-readable storage medium storing computer-readable instructions
JP7388525B2 (en) Ophthalmology image processing device and ophthalmology image processing program
WO2021106967A1 (en) Ocular fundus image processing device and ocular fundus image processing program
WO2020026535A1 (en) Ophthalmic image processing device, oct device, and ophthalmic image processing program
JP2024045441A (en) Ophthalmologic image processing device and ophthalmologic image processing program
JP6703319B1 (en) Ophthalmic image processing device and OCT device
JP6866954B2 (en) Ophthalmic image processing program and OCT device
JPWO2020116351A1 (en) Diagnostic support device and diagnostic support program
WO2021045019A1 (en) Ophthalmic image processing program and ophthalmic image processing device
JP7439990B2 (en) Medical image processing device, medical image processing program, and medical image processing method
JP6747617B2 (en) Ophthalmic image processing device and OCT device
JP2021037177A (en) Ophthalmologic image processing program and ophthalmologic image processing device
WO2020241794A1 (en) Ophthalmic image processing device, ophthalmic image processing program, and ophthalmic image processing system
JP7328489B2 (en) Ophthalmic image processing device and ophthalmic photographing device
JP2022138552A (en) Ophthalmologic image processing device, ophthalmologic image processing program, and ophthalmologic imaging device
JP2021074095A (en) Ophthalmologic image processing device and ophthalmologic image processing program
JP7521575B2 (en) Ophthalmic image processing device, OCT device, and ophthalmic image processing program
JP7435067B2 (en) Ophthalmology image processing device and ophthalmology image processing program
JP2024097535A (en) Ophthalmic image processing program and ophthalmic image processing device
US12096981B2 (en) Ophthalmologic image processing device and non-transitory computer-readable storage medium storing computer-readable instructions
JP7180187B2 (en) Ophthalmic image processing device, OCT device, and ophthalmic image processing program
JP7302184B2 (en) Ophthalmic image processing device and ophthalmic image processing program
WO2023281965A1 (en) Medical image processing device and medical image processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20861862

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021543759

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20861862

Country of ref document: EP

Kind code of ref document: A1