WO2021045019A1 - Ophthalmic image processing program and ophthalmic image processing device - Google Patents


Info

Publication number
WO2021045019A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
ophthalmic
image processing
mathematical model
conversion
Prior art date
Application number
PCT/JP2020/032949
Other languages
English (en)
Japanese (ja)
Inventor
涼介 柴
徹哉 加納
佳紀 熊谷
Original Assignee
NIDEK CO., LTD. (株式会社ニデック)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NIDEK CO., LTD. (株式会社ニデック)
Priority to JP2021543759A (JPWO2021045019A1)
Publication of WO2021045019A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • the present disclosure relates to an ophthalmic image processing program and an ophthalmic image processing apparatus used for processing an ophthalmic image of an eye to be inspected.
  • IOL-related information (for example, the expected postoperative anterior chamber depth) is acquired, and the IOL power is calculated based on the acquired IOL-related information.
  • In Non-Patent Document 1, a converted image obtained by converting the image quality of an input image is acquired by inputting an ophthalmic image as the input image into a mathematical model trained by a machine learning algorithm.
  • However, the conversion from the input image to the converted image by the mathematical model may not always be executed properly. For example, when the ophthalmic image used for training the mathematical model and the ophthalmic image actually input to it differ significantly, it is difficult to convert the input image properly. If the converted image is presented to the user as it is even though the conversion was not performed properly, the user may be unable to make accurate judgments based on it.
  • a typical object of the present disclosure is to provide an ophthalmic image processing program and an ophthalmic image processing apparatus capable of presenting more appropriate information to a user.
  • The ophthalmic image processing program provided by a typical embodiment of the present disclosure is executed by an ophthalmic image processing apparatus that processes an ophthalmic image, which is an image of the tissue of an eye to be examined. When executed by the control unit of the apparatus, the program causes the apparatus to execute: an image acquisition step of acquiring an ophthalmic image captured by an ophthalmic image capturing device; a converted image acquisition step of acquiring a converted image, obtained by converting the image quality of the input image, by inputting the acquired ophthalmic image as the input image into a mathematical model trained by a machine learning algorithm; and an evaluation information acquisition step of acquiring evaluation information for evaluating the validity of the conversion from the input image to the converted image by the mathematical model.
  • The ophthalmic image processing apparatus provided by a typical embodiment of the present disclosure processes an ophthalmic image, which is an image of the tissue of an eye to be examined. Its control unit executes: an image acquisition step of acquiring an ophthalmic image captured by an ophthalmic image capturing device; a converted image acquisition step of acquiring a converted image, obtained by converting the image quality of the input image, by inputting the acquired ophthalmic image as the input image into a mathematical model trained by a machine learning algorithm; and an evaluation information acquisition step of acquiring evaluation information for evaluating the validity of the conversion from the input image to the converted image by the mathematical model.
  • the control unit of the ophthalmic image processing apparatus exemplified in the present disclosure executes an image acquisition step, a converted image acquisition step, and an evaluation information acquisition step.
  • the control unit acquires an ophthalmic image taken by the ophthalmologic image capturing device.
  • the control unit acquires the converted image obtained by converting the image quality of the input image by inputting the ophthalmic image acquired in the image acquisition step as an input image into the mathematical model trained by the machine learning algorithm.
  • the control unit acquires evaluation information for evaluating the validity of the conversion from the input image to the converted image by the mathematical model.
  • evaluation information for evaluating the validity of conversion from an input image to a converted image by a mathematical model is acquired. Therefore, the ophthalmic image processing apparatus can present appropriate information to the user by using the evaluation information.
  • Various ophthalmic images can be used as the input image. For example, at least one of a tomographic image captured by an OCT device (a two-dimensional tomographic image or a three-dimensional tomographic image), an image captured by a fundus camera, an image captured by a scanning laser ophthalmoscope (SLO), and an image captured by a corneal endothelial cell imaging device may be used as the input image.
  • the ophthalmologic image may be an OCT angio image of the fundus of the eye to be inspected taken by the OCT apparatus.
  • the OCT angio image may be a two-dimensional front image in which the fundus is viewed from the front (that is, the line-of-sight direction of the eye to be inspected).
  • the OCT angio image may be, for example, a motion contrast image acquired by processing at least two OCT signals acquired at different times with respect to the same position.
  • The ophthalmic image may be an Enface image (OCT front image) obtained by viewing at least a part of a three-dimensional tomographic image captured by the OCT device from the direction along the optical axis of the measurement light of the OCT device (the front direction).
  • the image quality converted by the mathematical model can be selected as appropriate.
  • the control unit may acquire a converted image in which at least one of the noise amount, contrast, and resolution of the input image is converted by using a mathematical model.
  • In the evaluation information acquisition step, the control unit may acquire, as evaluation information, the difference information of pixel values between corresponding pixels of the input image input to the mathematical model and the converted image output from the mathematical model.
  • The difference information may be the difference between the corresponding pixel values, or the ratio of one pixel value to the other.
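  • The two variants of difference information described above can be sketched as follows. This is an illustrative example, not the disclosure's implementation; the function name `difference_info` and the toy arrays are hypothetical:

```python
import numpy as np

def difference_info(input_img, converted_img, mode="difference"):
    """Difference information between corresponding pixels.

    mode="difference": absolute difference of pixel values.
    mode="ratio": ratio of the converted pixel value to the input pixel value.
    """
    a = input_img.astype(np.float64)
    b = converted_img.astype(np.float64)
    if mode == "difference":
        return np.abs(a - b)
    # guard against division by zero for the ratio variant
    return b / np.maximum(a, 1e-9)

# toy 2x2 example
inp = np.array([[10.0, 20.0], [30.0, 40.0]])
conv = np.array([[12.0, 18.0], [30.0, 44.0]])
diff = difference_info(inp, conv)
```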
  • In the evaluation information acquisition step, the control unit may acquire the difference information after applying a Fourier transform (for example, a two-dimensional Fourier transform) to each of the input image and the converted image.
  • Periodic artifacts may occur in the converted image. When they do, the input image and the converted image show different frequency distributions, so using the Fourier transform allows the presence or absence of periodic artifacts to be evaluated more appropriately.
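  • Under these assumptions, the frequency-domain comparison could be sketched as follows (hypothetical function name; a synthetic periodic stripe pattern stands in for a periodic artifact):

```python
import numpy as np

def spectral_difference(input_img, converted_img):
    """Difference of the 2-D Fourier magnitude spectra of two images.

    A periodic artifact introduced by a faulty conversion appears as
    extra peaks in the spectrum of the converted image, so the maximum
    of this difference grows large.
    """
    f_in = np.abs(np.fft.fft2(input_img))
    f_out = np.abs(np.fft.fft2(converted_img))
    return np.abs(f_in - f_out)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
# simulate a period-4 stripe artifact added by a bad conversion
stripes = 0.5 * np.sin(2 * np.pi * np.arange(32) / 4)
clean_case = spectral_difference(img, img)             # identical images
artifact_case = spectral_difference(img, img + stripes)
```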
  • The control unit may perform a first evaluation step of evaluating that the conversion from the input image to the converted image is not appropriate when, in a difference image obtained by imaging the difference information, the difference value in an arbitrary region including a plurality of pixels is equal to or greater than a threshold value.
  • Even when the conversion is performed properly, the difference image may contain scattered pixels with large values. By comparing the difference value in an arbitrary region including a plurality of pixels with the threshold value, the control unit can suppress the influence of such scattered pixels and appropriately evaluate the validity of the conversion.
  • The control unit may evaluate the validity of the conversion based on the difference image after applying a smoothing process to the pixel values of the difference image (the difference values corresponding to each pixel). In this case, the influence of the scattered pixels that occur even when the conversion is properly executed is suppressed more appropriately.
  • The control unit may evaluate that the conversion is not valid when the average value of the differences in the region is equal to or greater than the threshold value. The control unit may also evaluate whether the conversion is appropriate based on the number of pixels in a unit region whose difference value is equal to or greater than the threshold value. Further, the control unit may acquire information indicating the degree of validity of the conversion based on the difference information, and this information may be displayed on the display unit.
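  • The first evaluation step described in the bullets above (smoothing the difference image, then comparing region-wise mean differences against a threshold) could be sketched as follows; the region size, threshold, and function name are illustrative choices, not values from the disclosure:

```python
import numpy as np

def conversion_valid(diff_image, region=4, threshold=0.5):
    """Return True if no region x region block of the smoothed difference
    image has a mean difference at or above the threshold."""
    h, w = diff_image.shape
    # 3x3 box smoothing suppresses isolated large-valued pixels that can
    # appear even when the conversion succeeded
    padded = np.pad(diff_image, 1, mode="edge")
    smoothed = sum(padded[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0
    # mean difference over non-overlapping region x region blocks
    hb, wb = h // region * region, w // region * region
    blocks = smoothed[:hb, :wb].reshape(hb // region, region,
                                        wb // region, region)
    return bool(blocks.mean(axis=(1, 3)).max() < threshold)

good = np.zeros((8, 8)); good[3, 3] = 2.0  # one scattered outlier pixel
bad = np.full((8, 8), 0.8)                 # uniformly large differences
```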
  • In the evaluation information acquisition step, the control unit may acquire a difference image, which is an image of the difference information of pixel values between corresponding pixels of the input image input to the mathematical model and the converted image output from the mathematical model, and may acquire the degree of similarity between the difference image and the input image as evaluation information.
  • When the image quality of the input image is appropriately converted, the difference between the input image and the converted image is small, so the similarity between the difference image and the input image is also small.
  • Conversely, when the conversion is not properly executed and an irregular part (for example, a diseased part) in the input image affects the conversion, the difference value becomes large at the position of the irregular part, so the similarity between the input image and the difference image increases. Therefore, the validity of the conversion is appropriately evaluated by acquiring the similarity between the difference image and the input image as evaluation information.
  • The method of acquiring the degree of similarity can be selected as appropriate; for example, a correlation diagram or a correlation coefficient may be acquired.
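  • As a sketch of the correlation-coefficient variant (hypothetical names; a scaled copy of the input stands in for a failed conversion whose residual retains input structure):

```python
import numpy as np

def difference_similarity(input_img, diff_image):
    """Pearson correlation coefficient between the input image and the
    difference image. A value near 1 means input structure leaked into
    the difference image, suggesting the conversion was not valid."""
    return float(np.corrcoef(input_img.ravel(), diff_image.ravel())[0, 1])

rng = np.random.default_rng(1)
inp = rng.random((16, 16))
leaked = 0.3 * inp            # failed conversion: residual mirrors the input
noise = rng.random((16, 16))  # proper conversion: residual is noise-like
```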
  • The control unit may further perform a second evaluation step of evaluating that the conversion from the input image to the converted image is not appropriate when the value indicating the similarity is equal to or greater than a threshold value.
  • the control unit may further execute a difference image display step of displaying a difference image in which the difference information is imaged on the display unit.
  • In a difference image, regions where the conversion was performed properly differ from regions where it was not. Therefore, by checking the difference image, the user can confirm whether the conversion was performed properly and can identify the regions where it was not.
  • When the input image contains an irregular part (for example, a diseased part), it is difficult to properly convert the image quality of that part. Therefore, the user can appropriately grasp the irregular part in the input image by checking the difference image.
  • the specific method for displaying the difference image on the display unit can be selected as appropriate.
  • the display unit may display at least one of the input image and the converted image at the same time as the difference image (for example, side by side).
  • Further, the control unit may superimpose the difference image on at least one of the input image and the converted image. In this case, the user can easily compare the difference image with at least one of the input image and the converted image. Alternatively, the control unit may display the difference image on the display unit on its own.
  • Further, the control unit may acquire the evaluation information by inputting the input image and the converted image into a mathematical model trained by a machine learning algorithm (a mathematical model for acquiring evaluation information, different from the mathematical model for acquiring the converted image). In this case, the validity of the conversion is appropriately evaluated even without computing the difference between the input image and the converted image.
  • the mode of evaluation information output by the mathematical model for acquiring evaluation information can also be selected as appropriate.
  • the mathematical model for acquiring evaluation information may output evaluation information indicating whether or not the conversion from the input image to the converted image is appropriate. In this case, it is easy to evaluate whether the conversion is valid or not. Further, the mathematical model for acquiring the evaluation information may output information such as a numerical value indicating the degree of validity of the conversion as the evaluation information.
  • The control unit may acquire evaluation information by using various parameters related to the image quality of at least the converted image.
  • Parameters related to image quality include, for example, the signal strength of the ophthalmic image, an index indicating signal quality (for example, SSI (Signal Strength Index) or SQI (SLO Quality Index)), the ratio of the signal level of the image to the noise level (SNR, Signal to Noise Ratio), the background noise level, and the image contrast. At least one of these may be used as a parameter related to image quality.
  • In the present embodiment, the input image is converted to improve its image quality. When the conversion is properly executed, the image quality of the converted image should be better than that of the input image. Therefore, for example, a parameter indicating the image quality of the converted image may be acquired as evaluation information. Further, the difference between a parameter indicating the image quality of the converted image and the corresponding parameter of the input image may be acquired as evaluation information.
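  • One way to realize this parameter-based evaluation is sketched below, using image contrast (one of the parameters listed above) as the quality measure; the function names and the Michelson-style contrast definition are illustrative assumptions, not the disclosure's method:

```python
import numpy as np

def contrast(img):
    """Michelson-style contrast: (max - min) / (max + min)."""
    mx, mn = float(img.max()), float(img.min())
    return (mx - mn) / (mx + mn + 1e-9)

def quality_improvement(input_img, converted_img):
    """Difference between the contrast of the converted image and that of
    the input image; a positive value indicates the conversion improved
    this quality parameter."""
    return contrast(converted_img) - contrast(input_img)

low = np.linspace(0.4, 0.6, 100).reshape(10, 10)   # low-contrast input
high = np.linspace(0.0, 1.0, 100).reshape(10, 10)  # higher-contrast output
```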
  • the control unit may further execute a warning step that warns the user when the conversion is evaluated as invalid by the evaluation information acquired in the evaluation information acquisition step. In this case, the user can easily grasp that the conversion from the input image to the converted image may not have been performed properly.
  • the control unit may warn the user by displaying at least one of a warning message, a warning image, and the like on the display unit. Further, the control unit may issue a warning to the user by generating at least one of a warning message and a warning sound from the speaker. Further, the control unit may execute the warning process while displaying the actually converted converted image on the display unit, or may execute the warning process without displaying the converted image.
  • The control unit may further execute a step of stopping the display, on the display unit, of the converted image acquired in the converted image acquisition step when the evaluation information acquired in the evaluation information acquisition step indicates that the conversion is not valid. In this case, a converted image that was not properly converted from the input image is prevented from being displayed, which reduces the possibility that the user makes inaccurate judgments based on it.
  • The control unit may display at least one of a numerical value, a graph, and the like indicating the acquired evaluation information on the display unit.
  • the user can easily grasp whether or not the conversion from the input image to the converted image is properly performed based on the displayed evaluation information.
  • the difference image between the input image and the converted image may be displayed on the display unit as evaluation information.
  • When the conversion is evaluated as not valid, the control unit may acquire the converted image by inputting the input image into a mathematical model different from the mathematical model that performed the conversion evaluated as not valid.
  • The characteristics of the conversion by a mathematical model differ depending on the algorithm and training data used to train it. Therefore, when the conversion is evaluated as not valid, an appropriate converted image may still be obtained by acquiring the converted image with a different mathematical model.
  • the control unit may display the difference image on the display unit regardless of the validity of the conversion from the input image to the converted image. As a result, the user can easily grasp the position of the irregular portion based on the difference image.
  • The ophthalmic image processing program can also be expressed as follows: an ophthalmic image processing program executed by an ophthalmic image processing apparatus that processes an ophthalmic image, which is an image of the tissue of an eye to be examined, the program being executed by the control unit of the ophthalmic image processing apparatus.
  • the device that executes the image acquisition step, the converted image acquisition step, the evaluation information acquisition step, etc. can be appropriately selected.
  • For example, the control unit of a personal computer (hereinafter referred to as "PC") may execute all of the converted image acquisition step, the evaluation information acquisition step, and the like. That is, the control unit of the PC may acquire an ophthalmic image from the ophthalmic image capturing apparatus and perform the converted image acquisition processing and the like based on the acquired image. Alternatively, the control unit of the ophthalmic image capturing apparatus may execute all of these steps. Further, the control units of a plurality of devices (for example, an ophthalmic image capturing device and a PC) may cooperate to execute the converted image acquisition step, the evaluation information acquisition step, and the like.
  • FIG. 1 is a block diagram showing the schematic configuration of the mathematical model construction device 1, the ophthalmic image processing device 21, and the ophthalmic image capturing devices 11A and 11B. FIG. 2 shows an example of input training data and output training data when the mathematical model outputs a high-quality two-dimensional tomographic image as the converted image. FIG. 3 is a flowchart of the mathematical model construction process executed by the mathematical model construction device 1. FIG. 4 is a flowchart of the ophthalmic image processing executed by the ophthalmic image processing device 21. FIG. 5 shows an example of an ophthalmic image used as an input image. FIG. 6 shows an example of a converted image obtained by converting the image quality of the input image shown in FIG. 5.
  • the mathematical model construction device 1 constructs a mathematical model by training the mathematical model by a machine learning algorithm.
  • the program that realizes the constructed mathematical model is stored in the storage device 24 of the ophthalmic image processing device 21.
  • the ophthalmic image processing device 21 inputs an ophthalmic image as an input image into a mathematical model to acquire a converted image in which the image quality of the input image is converted (in the present embodiment, the image quality is improved).
  • the ophthalmic image processing device 21 acquires evaluation information for evaluating the validity of conversion of the converted image from the input image.
  • the ophthalmic imaging devices 11A and 11B capture an ophthalmic image which is an image of the tissue of the eye to be inspected.
  • a personal computer (hereinafter referred to as "PC") is used for the mathematical model construction device 1 of the present embodiment.
  • The mathematical model construction device 1 builds a mathematical model by training it using ophthalmic images acquired from the ophthalmic image capturing device 11A (hereinafter referred to as "training ophthalmic images") and images obtained by converting the image quality of the training ophthalmic images.
  • the device that can function as the mathematical model construction device 1 is not limited to the PC.
  • the ophthalmologic imaging device 11A may function as the mathematical model building device 1.
  • the control units of the plurality of devices (for example, the CPU of the PC and the CPU 13A of the ophthalmologic imaging apparatus 11A) may collaborate to construct a mathematical model.
  • a CPU is used as an example of a controller that performs various processes.
  • However, a controller other than a CPU may be used in at least some of the devices. For example, adopting a GPU as the controller may increase the processing speed.
  • the mathematical model construction device 1 will be described.
  • The mathematical model construction device 1 is installed, for example, at a manufacturer that provides the ophthalmic image processing device 21 or the ophthalmic image processing program to users.
  • the mathematical model building apparatus 1 includes a control unit 2 that performs various control processes and a communication I / F5.
  • The control unit 2 includes a CPU 3, which is the controller that performs the control processes, and a storage device 4 that can store programs, data, and the like.
  • the storage device 4 stores a mathematical model construction program for executing a mathematical model construction process (see FIG. 3) described later.
  • the communication I / F5 connects the mathematical model building device 1 to other devices (for example, an ophthalmic imaging device 11A and an ophthalmic image processing device 21).
  • the mathematical model construction device 1 is connected to the operation unit 7 and the display device 8.
  • the operation unit 7 is operated by the user in order for the user to input various instructions to the mathematical model construction device 1.
  • the operation unit 7 for example, at least one of a keyboard, a mouse, a touch panel, and the like can be used.
  • a microphone or the like for inputting various instructions may be used together with the operation unit 7 or instead of the operation unit 7.
  • the display device 8 displays various images.
  • various devices capable of displaying an image for example, at least one of a monitor, a display, a projector, and the like
  • the "image" in the present disclosure includes both a still image and a moving image.
  • the mathematical model construction device 1 can acquire data of an ophthalmic image (hereinafter, may be simply referred to as an “ophthalmic image”) from the ophthalmic imaging device 11A.
  • the mathematical model building apparatus 1 may acquire ophthalmic image data from the ophthalmic imaging apparatus 11A by, for example, at least one of wired communication, wireless communication, a detachable storage medium (for example, a USB memory), and the like.
  • the ophthalmic image processing device 21 will be described.
  • the ophthalmologic image processing device 21 is arranged, for example, in a facility (for example, a hospital or a health examination facility) for diagnosing or examining a subject.
  • the ophthalmic image processing device 21 includes a control unit 22 that performs various control processes and a communication I / F 25.
  • The control unit 22 includes a CPU 23, which is the controller that performs the control processes, and a storage device 24 that can store programs, data, and the like.
  • the storage device 24 stores an ophthalmic image processing program for executing ophthalmic image processing (see FIGS. 4 and 9) described later.
  • the ophthalmic image processing program includes a program that realizes a mathematical model constructed by the mathematical model building apparatus 1.
  • the communication I / F 25 connects the ophthalmic image processing device 21 to other devices (for example, the ophthalmic imaging device 11B and the mathematical model building device 1).
  • the ophthalmic image processing device 21 is connected to the operation unit 27 and the display device 28.
  • various devices can be used in the same manner as the operation unit 7 and the display device 8 described above.
  • the ophthalmic image processing device 21 can acquire an ophthalmic image from the ophthalmic image capturing device 11B.
  • the ophthalmic image processing device 21 may acquire an ophthalmic image from the ophthalmic image capturing device 11B by, for example, at least one of wired communication, wireless communication, a detachable storage medium (for example, a USB memory), and the like. Further, the ophthalmic image processing device 21 may acquire a program or the like for realizing the mathematical model constructed by the mathematical model building device 1 via communication or the like.
  • Next, the ophthalmic image capturing devices 11A and 11B will be described. In the present embodiment, as an example, a case is described in which the ophthalmic image capturing device 11A provides ophthalmic images to the mathematical model construction device 1, and the ophthalmic image capturing device 11B provides ophthalmic images to the ophthalmic image processing device 21.
  • the number of ophthalmic imaging devices used is not limited to two.
  • the mathematical model construction device 1 and the ophthalmic image processing device 21 may acquire ophthalmic images from a plurality of ophthalmic imaging devices.
  • the mathematical model construction device 1 and the ophthalmology image processing device 21 may acquire an ophthalmology image from one common ophthalmology image capturing device.
  • In the present embodiment, an OCT device is exemplified as the ophthalmic image capturing device 11 (11A, 11B). However, an ophthalmic image capturing device other than an OCT device (for example, a scanning laser ophthalmoscope (SLO), a fundus camera, a Scheimpflug camera, or a corneal endothelial cell imaging device (CEM)) may also be used.
  • the ophthalmic imaging device 11 includes a control unit 12 (12A, 12B) that performs various control processes, and an ophthalmic imaging unit 16 (16A, 16B).
  • The control unit 12 includes a CPU 13 (13A, 13B), which is the controller that performs the control processes, and a storage device 14 (14A, 14B) that can store programs, data, and the like.
  • When the ophthalmic image capturing device 11 executes at least a part of the ophthalmic image processing described later (see FIGS. 4 and 9), it goes without saying that at least the corresponding part of the ophthalmic image processing program is stored in the storage device 14.
  • the ophthalmic imaging unit 16 includes various configurations necessary for capturing an ophthalmic image of the eye to be inspected.
  • The ophthalmic imaging unit 16 of the present embodiment includes an OCT light source, a branching optical element that splits the OCT light emitted from the light source into measurement light and reference light, a scanning unit for scanning the measurement light, an optical system for irradiating the eye to be examined with the measurement light, a light receiving element that receives the combined light of the reference light and the light reflected by the tissue of the eye to be examined, and the like.
  • the ophthalmologic image capturing device 11 can capture a two-dimensional tomographic image and a three-dimensional tomographic image of the fundus of the eye to be inspected.
  • the CPU 13 scans the OCT light (measurement light) on the scan line to take a two-dimensional tomographic image (see FIG. 5) of the cross section intersecting the scan line.
  • the CPU 13 can capture a three-dimensional tomographic image of the tissue by scanning the OCT light two-dimensionally.
  • the CPU 13 acquires a plurality of two-dimensional tomographic images by scanning measurement light on each of a plurality of scan lines having different positions in a two-dimensional region when the tissue is viewed from the front.
  • the CPU 13 acquires a three-dimensional tomographic image by combining a plurality of captured two-dimensional tomographic images.
  • the CPU 13 can capture a plurality of ophthalmic images of the same site by scanning the measurement light a plurality of times on the same site on the tissue (in the present embodiment, on the same scan line).
  • the CPU 13 can acquire an averaging image in which the influence of speckle noise is suppressed by performing an averaging process on a plurality of ophthalmic images of the same portion.
  • the image quality of the two-dimensional tomographic image can be improved by performing the addition averaging processing on a plurality of two-dimensional tomographic images of the same part.
  • the addition averaging process may be performed, for example, by averaging the pixel values of the pixels at the same position in a plurality of ophthalmic images.
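  • The addition averaging described above can be sketched as follows (hypothetical function name; synthetic frames with Gaussian noise stand in for repeated scans of the same site):

```python
import numpy as np

def averaged_image(frames):
    """Average the pixel values at the same position across repeated
    scans of the same site, suppressing speckle-like noise."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

rng = np.random.default_rng(2)
clean = np.full((8, 8), 0.5)                     # ideal noise-free tissue image
frames = [clean + rng.normal(0.0, 0.1, (8, 8))   # 16 noisy repeated scans
          for _ in range(16)]
avg = averaged_image(frames)
```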
  • The ophthalmic image capturing device 11 executes a tracking process that makes the scanning position of the OCT light follow the movement of the eye to be examined while capturing a plurality of ophthalmic images of the same site.
  • the mathematical model construction process executed by the mathematical model construction apparatus 1 will be described with reference to FIGS. 2 and 3.
  • the mathematical model construction process is executed by the CPU 3 according to the mathematical model construction program stored in the storage device 4.
  • the mathematical model is trained by the training data set, so that the mathematical model that outputs the converted image obtained by converting the image quality of the input image is constructed.
  • the training data set includes input side data (input training data) and output side data (output training data).
  • the mathematical model can convert various ophthalmic images into converted images.
  • the type of training data set used to train the mathematical model is determined by the type of ophthalmic image whose image quality the mathematical model converts.
  • a case will be described in which a two-dimensional tomographic image is input to the mathematical model as an input image, and a two-dimensional tomographic image (high-quality image) in which the image quality of the input image is improved is output from the mathematical model as a converted image.
  • FIG. 2 shows an example of input training data and output training data in the case where a high-quality two-dimensional tomographic image is output from the mathematical model as a converted image.
  • the CPU 3 acquires a set 40 of a plurality of two-dimensional tomographic images 400A to 400X in which the same part of the tissue is photographed.
  • the CPU 3 uses a part of the plurality of two-dimensional tomographic images 400A to 400X in the set 40 (a number of images smaller than the number of images used for the addition averaging of the output training data described later) as the input training data. Further, the CPU 3 acquires the addition-averaged image 41 of the plurality of two-dimensional tomographic images 400A to 400X in the set 40 as the output training data.
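The construction of one training pair in the scheme above can be sketched as follows (an assumption-laden illustration, not the patent's implementation: the helper name and the `n_input` parameter are invented for clarity).

```python
import numpy as np

def make_training_pair(frame_set, n_input=1):
    """Build one (input, target) training pair from a set of 2-D tomographic
    images of the same site: the input training data uses fewer frames than
    the addition-averaged output training data, mimicking a short capture."""
    frames = np.stack([f.astype(np.float64) for f in frame_set], axis=0)
    input_image = frames[:n_input].mean(axis=0)   # few (or one) frame(s)
    target_image = frames.mean(axis=0)            # addition average of all
    return input_image, target_image
```

Trained on many such pairs, the model learns to map a noisy short-capture image to something resembling the clean addition-averaged image.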
  • by inputting a two-dimensional tomographic image as an input image to the trained mathematical model, a high-quality two-dimensional tomographic image in which the influence of speckle noise is suppressed is output as a converted image.
  • the ophthalmic image whose image quality is converted by the mathematical model is not limited to a two-dimensional tomographic image of the fundus.
  • the ophthalmologic image may be an image of a portion other than the fundus of the eye to be examined.
  • the ophthalmic image may be a three-dimensional tomographic image, an OCT angio image, an Enface image, or the like taken by an OCT apparatus.
  • the OCT angio image may be a two-dimensional front image in which the fundus is viewed from the front (that is, the line-of-sight direction of the eye to be inspected).
  • the OCT angio image may be, for example, a motion contrast image acquired by processing at least two OCT signals acquired at different times with respect to the same position.
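A toy sketch of the motion-contrast idea mentioned above follows (purely illustrative and an assumption on my part: real OCTA pipelines use amplitude-decorrelation or complex-difference methods with bulk-motion correction, not this simplified formula).

```python
import numpy as np

def motion_contrast(signal_a, signal_b):
    """Toy motion contrast from two OCT signals acquired at different times
    at the same position: static tissue yields similar signals (low
    contrast), while flow decorrelates them (high contrast)."""
    a = np.abs(signal_a).astype(np.float64)
    b = np.abs(signal_b).astype(np.float64)
    return np.abs(a - b) / (a + b + 1e-12)  # normalized amplitude difference
```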
  • the Enface image is a two-dimensional front image when at least a part of the three-dimensional tomographic image taken by the OCT device is viewed from the direction (front direction) along the optical axis of the measurement light of the OCT device.
  • the ophthalmologic image may be an image taken by a fundus camera, an image taken by a scanning laser ophthalmoscope (SLO), an image taken by a corneal endothelial cell photography device, or the like.
  • the image quality may be improved by a process other than the addition averaging process.
  • the mathematical model construction process will be described with reference to FIG.
  • the CPU 3 acquires at least a part of the ophthalmic image taken by the ophthalmic image capturing device 11A as input training data (S1).
  • the data of the ophthalmic image is generated by the ophthalmic imaging apparatus 11A and then acquired by the mathematical model construction apparatus 1.
  • the CPU 3 may acquire the data of the ophthalmic image by acquiring a signal (for example, an OCT signal) that is the basis for generating the ophthalmic image from the ophthalmic imaging apparatus 11A and generating the ophthalmic image based on the acquired signal.
  • the CPU 3 acquires the output training data corresponding to the input training data acquired in S1 (S3).
  • an example of the correspondence between the input training data and the output training data is as described above.
  • the CPU 3 executes the training of the mathematical model using the training data set by the machine learning algorithm (S3).
  • as machine learning algorithms, for example, neural networks, random forests, boosting, support vector machines (SVM), and the like are generally known.
  • Neural networks are a method of imitating the behavior of biological nerve cell networks.
  • neural networks include, for example, feedforward (forward propagation) neural networks, RBF networks (radial basis function networks), spiking neural networks, convolutional neural networks, recurrent neural networks (feedback neural networks, etc.), and stochastic neural networks (Boltzmann machines, Bayesian networks, etc.).
  • Random forest is a method of generating a large number of decision trees by learning based on randomly sampled training data.
  • in a random forest, the branches of a plurality of decision trees learned in advance as discriminators are traced, and the average (or majority vote) of the results obtained from each decision tree is taken.
  • Boosting is a method of generating a strong classifier by combining multiple weak classifiers.
  • in boosting, a strong classifier is constructed by sequentially learning simple, weak classifiers.
  • SVM is a method of constructing a two-class pattern classifier using a linear input element.
  • the SVM learns the parameters of the linear input element from the training data based on, for example, the criterion of obtaining the margin-maximizing hyperplane that maximizes the distance to each data point (the hyperplane separation theorem).
  • a mathematical model refers to, for example, a data structure for predicting the relationship between input data and output data.
  • Mathematical models are constructed by training with training datasets.
  • the training data set is a set of training data for input and training data for output.
  • training updates the correlation data (e.g., weights) between each input and output.
  • a multi-layer neural network is used as a machine learning algorithm.
  • a neural network includes an input layer for inputting data, an output layer for generating the data to be predicted, and one or more hidden layers between the input layer and the output layer.
  • each layer includes a plurality of nodes (also called units). In the present embodiment, a convolutional neural network (CNN), which is a kind of multi-layer neural network, is used as the mathematical model.
  • other machine learning algorithms may be used.
  • for example, a generative adversarial network (GAN) may be used as the machine learning algorithm.
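The basic operation of the convolutional layers mentioned above can be sketched as a single 2-D convolution (illustrative only and not the disclosed model: a real image-to-image CNN stacks many such layers with learned kernels and nonlinearities).

```python
import numpy as np

def conv2d(image, kernel):
    """Single 'valid' 2-D convolution pass, the building block of a
    convolutional layer; slides the kernel over the image and sums the
    elementwise products at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 box kernel acts as a crude fixed analogue of a learned
# denoising filter: it averages each pixel with its neighbours.
smooth = conv2d(np.ones((5, 5)), np.full((3, 3), 1 / 9))
```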
  • FIGS. 4 to 8 show an example of evaluating the validity of image quality conversion by the mathematical model based on the difference information (difference image) between the input image and the converted image.
  • the ophthalmic image processing illustrated in FIG. 4 is executed by the CPU 23 according to the ophthalmic image processing program stored in the storage device 24.
  • the CPU 23 acquires an ophthalmologic image of the tissue of the eye to be inspected taken by the ophthalmologic imaging apparatus (OCT apparatus in this embodiment) 11B (S11).
  • in S11 of the present embodiment, a two-dimensional tomographic image (see FIG. 5) of the fundus tissue of the eye to be inspected is acquired.
  • the CPU 23 acquires a converted image in which the image quality of the input image is converted (in the present embodiment, improved) by inputting the ophthalmic image acquired in S11 as an input image into the mathematical model trained by the machine learning algorithm (S12).
  • FIG. 5 shows an example of an ophthalmic image used as an input image.
  • FIG. 6 shows a converted image in which the image quality of the input image shown in FIG. 5 is converted (improved).
  • the image quality of the input image shown in FIG. 5 is lower than that of the converted image shown in FIG. 6.
  • the input image shown in FIG. 5 is generated without performing the addition averaging process or by performing the addition averaging process on a small number of ophthalmic images. Therefore, the input image shown in FIG. 5 can be captured in a short time.
  • a high-quality converted image is acquired by inputting an input image taken in a short time into a mathematical model. Therefore, a high-quality image can be acquired while suppressing a long shooting time.
  • the CPU 23 acquires, as evaluation information, the difference information of the pixel values between corresponding pixels of the input image (see FIG. 5) input to the mathematical model in S12 and the converted image (see FIG. 6) output from the mathematical model in S12 (S13).
  • the evaluation information is information for evaluating the validity of conversion from an input image to a converted image by a mathematical model. When the image quality of the input image is appropriately converted and the converted image is output, the difference between the input image and the converted image becomes small. On the other hand, the image quality conversion may not be properly executed depending on the state of the input image.
  • for example, when the input image includes an irregular portion (for example, a lesion) of a kind not sufficiently represented in the training data set (ophthalmic images) used for training the mathematical model, it is difficult to properly convert the irregular portion. Therefore, if the irregular portion in the input image affects the conversion and the conversion of the input image is not properly executed, the difference between the input image and the converted image becomes large. Accordingly, by acquiring the difference information as the evaluation information, the validity of the conversion from the input image to the converted image is appropriately evaluated.
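A minimal sketch of the difference information above, rendered as a mid-grey difference image so that well-converted regions appear grey (the 128 offset follows the intermediate brightness described later; 8-bit images of identical size are assumed):

```python
import numpy as np

def difference_image(input_image, converted_image, mid=128):
    """Per-pixel signed difference between the input image and the converted
    image, shifted around a mid-grey value: regions where the conversion was
    performed properly come out near `mid`, large differences stand out."""
    diff = input_image.astype(np.int32) - converted_image.astype(np.int32)
    return np.clip(diff + mid, 0, 255).astype(np.uint8)
```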
  • the CPU 23 may acquire the difference information after executing the Fourier transform (for example, two-dimensional Fourier transform) for each of the input image and the converted image.
  • Periodic artifacts may occur in the transformed image.
  • when periodic artifacts occur, the input image and the converted image show different frequency distributions. Therefore, by using the Fourier transform, the presence or absence of periodic artifacts is evaluated more appropriately.
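The Fourier-based check above can be sketched as follows (an illustrative assumption: a periodic artifact introduced by the conversion appears as an isolated peak in the magnitude-spectrum difference, which a plain per-pixel difference can miss; any threshold on the peak would be device-specific).

```python
import numpy as np

def spectral_difference(input_image, converted_image):
    """Absolute difference of the 2-D Fourier magnitude spectra of the
    input image and the converted image."""
    fa = np.abs(np.fft.fft2(input_image.astype(np.float64)))
    fb = np.abs(np.fft.fft2(converted_image.astype(np.float64)))
    return np.abs(fa - fb)

# A horizontal stripe pattern of period 8 added by a (hypothetical)
# faulty conversion concentrates its energy at frequency index 64/8 = 8.
x = np.arange(64)
base = np.zeros((64, 64))
striped = base + 50 * np.sin(2 * np.pi * x / 8)[None, :]
sd = spectral_difference(base, striped)
```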
  • a difference image showing the distribution of the difference values acquired for each pixel is acquired as the difference information.
  • in the difference image, a region where the conversion is properly performed appears as a gray portion whose brightness is an intermediate value (for example, 128, which is an intermediate value when the brightness varies in the range of 1 to 256).
  • the difference image there is a difference between the region where the conversion is properly performed and the region where the conversion is not performed properly. Therefore, the validity of the conversion from the input image to the converted image is appropriately evaluated based on the difference image.
  • the difference information may be information other than the difference image.
  • the average value of the difference values between the plurality of pixels may be acquired as the difference information.
  • the difference information may be the difference in pixel values between the corresponding pixels, or may be the ratio of the other pixel value to one pixel value.
  • the CPU 23 evaluates the validity of the conversion from the input image to the converted image in S12 based on the difference image (see FIG. 7) acquired in S13 (S14). Specifically, in S14 of this embodiment, it is evaluated whether or not the conversion in S12 was appropriate.
  • in the present embodiment, the CPU 23 executes a smoothing process on the pixel values of the difference image (the difference values corresponding to the pixels), and then evaluates the validity of the conversion based on the smoothed difference image. Therefore, even when pixels 51 having large difference values are scattered despite the conversion being properly executed, the validity of the conversion is evaluated more appropriately in a state where the influence of the scattered pixels 51 is suppressed.
  • the specific method for evaluating the validity of the conversion based on the value of the difference in the region can be appropriately selected.
  • for example, when the difference value in a region of the difference image is equal to or greater than a threshold value, the CPU 23 evaluates that the conversion of the image quality in the region is not appropriate.
  • the CPU 23 may evaluate whether or not the conversion is appropriate based on the number of pixels in the unit region whose difference value is equal to or greater than the threshold value.
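The smoothing-then-thresholding evaluation above can be sketched like this (the threshold and window size are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

def conversion_is_valid(difference_values, threshold=30, window=3):
    """Evaluate conversion validity from the difference values: smooth with
    a local mean so that scattered isolated pixels are suppressed, then
    flag the conversion as invalid if any smoothed region reaches the
    threshold."""
    d = np.abs(difference_values).astype(np.float64)
    h, w = d.shape
    smoothed = np.zeros_like(d)
    r = window // 2
    for i in range(h):
        for j in range(w):
            smoothed[i, j] = d[max(0, i - r):i + r + 1,
                               max(0, j - r):j + r + 1].mean()
    return bool(smoothed.max() < threshold)
```

A single stray pixel with a large difference is averaged away by the window, while a contiguous region of large differences survives smoothing and trips the threshold.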
  • the CPU 23 may acquire information (for example, a numerical value or a graph) indicating the degree of validity of the conversion in S12 based on the difference information.
  • the CPU 23 may notify the user (for example, display on the display device 28) of information indicating the degree of validity of the conversion as evaluation information.
  • when it is evaluated that the conversion is not valid (S15: NO), the CPU 23 executes a warning process for the user (S17).
  • the CPU 23 warns the user by displaying a warning message such as "image conversion was not properly executed” or a warning image on the display device 28.
  • the warning method can be changed as appropriate.
  • the CPU 23 may issue a warning to the user by generating at least one of a warning message and a warning sound from the speaker.
  • the CPU 23 can also execute warning processing while displaying the converted image that has not been properly converted on the display device 28.
  • the CPU 23 may use a warning message such as "the displayed converted image may be inappropriate" in the process of S17.
  • when the CPU 23 evaluates that the conversion is not valid (S15: NO), the CPU 23 may display the ophthalmic image used as the input image on the display device 28 while stopping the process of displaying the converted image acquired in S12 on the display device 28. In this case, the user can observe the desired site based on the ophthalmic image before the image quality is converted.
  • the CPU 23 causes the display device 28 to display the difference image (see FIG. 7) acquired in S13 (S18).
  • the difference image there is a difference between the region where the conversion is properly performed and the region where the conversion is not performed properly. Therefore, the user can confirm whether or not the conversion is properly performed by confirming the difference image. Further, the user can grasp the area where the conversion is not properly performed by checking the difference image. Further, when the input image contains an irregular portion (for example, a diseased portion), it is difficult to properly convert the image quality of the irregular portion. Therefore, the user can appropriately grasp the irregular portion in the input image by checking the difference image.
  • FIG. 9 is a flowchart of ophthalmic image processing in the transformation example.
  • in the transformation example, the similarity (for example, correlation) between the input image and the difference image is acquired as evaluation information, and the validity of the conversion is evaluated based on the similarity.
  • at least a part of the ophthalmologic image processing (see FIG. 4) exemplified in the above embodiment can be similarly adopted in the ophthalmologic image processing of the transformation example shown in FIG. Therefore, for the processes that can adopt the same processes as those in the above embodiment, the same step numbers as those in the above embodiments are assigned, and the description thereof will be omitted or simplified.
  • the CPU 23 acquires the difference image (see FIG. 7) between the input image and the converted image after executing the converted image acquisition process (S12) (S23). Next, the CPU 23 acquires the similarity between the input image and the difference image as evaluation information (S24). As described above, when the image quality of the input image is appropriately converted, the difference between the input image and the converted image becomes small, so that the similarity between the difference image and the input image becomes small.
  • if the conversion of the input image is not properly executed and an irregular portion or the like in the input image affects the conversion, the position (region) of the irregular portion or the like in the input image approximates the position (region) where the difference value is large in the difference image, so the similarity between the input image and the difference image becomes large. Therefore, the validity of the conversion is appropriately evaluated by acquiring the similarity between the difference image and the input image as evaluation information.
  • the CPU 23 evaluates the validity of the conversion from the input image to the converted image in S12 based on the similarity acquired in S24 (S25). Specifically, in S25 of the present embodiment, whether or not the conversion in S12 was appropriate is evaluated based on the degree of similarity. As mentioned above, when the conversion is performed properly, the similarity between the input image and the difference image becomes small. On the other hand, if the conversion is not performed properly, the similarity between the input image and the difference image becomes large. Therefore, the CPU 23 can appropriately evaluate whether or not the conversion from the input image to the converted image is appropriate by determining whether or not the value indicating the similarity is equal to or greater than the threshold value.
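The similarity-based evaluation of the transformation example can be sketched with a Pearson correlation (the 0.5 threshold below is only an assumption; function names are placeholders):

```python
import numpy as np

def similarity(input_image, difference_img):
    """Pearson correlation between the input image and the difference image.
    A high correlation suggests that structure of the input (e.g. an
    irregular portion) leaked into the difference, i.e. the conversion may
    not have been performed properly."""
    a = input_image.astype(np.float64).ravel()
    b = difference_img.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 0.0  # a flat difference image carries no input structure
    return float((a * b).sum() / denom)

def conversion_is_valid_by_similarity(input_image, difference_img,
                                      threshold=0.5):
    """Valid when the similarity stays below the threshold."""
    return bool(abs(similarity(input_image, difference_img)) < threshold)
```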
  • the CPU 23 may display, as evaluation information, information (for example, a numerical value, a correlation diagram, or a graph) indicating the degree of validity of the conversion in S12 on the display device 28.
  • the technology disclosed in the above embodiments and transformation examples is only an example. Therefore, it is possible to modify the techniques exemplified in the above embodiments and transformation examples.
  • the validity of the conversion from the input image to the converted image is evaluated based on the difference information (difference image). Further, in the above transformation example, the validity of the conversion is evaluated based on the similarity between the input image and the difference image.
  • the method of acquiring the evaluation information for evaluating the validity of the conversion is not limited to the method exemplified in the above-described embodiment and the transformation example.
  • the CPU 23 may acquire evaluation information using a mathematical model trained by a machine learning algorithm.
  • the mathematical model (mathematical model for acquiring evaluation information) may be trained in advance using, for example, input images and converted images as input training data, and evaluation information indicating the validity of the conversion between those input images and converted images as output training data. The output training data may be generated by the user comparing the input images and the converted images.
  • the CPU 23 may acquire the evaluation information output by the mathematical model by inputting the input image and the converted image into the mathematical model for acquiring the evaluation information. By acquiring the evaluation information by the mathematical model for acquiring the evaluation information, the validity of the conversion is appropriately evaluated even if the difference between the input image and the converted image is not acquired.
  • the mathematical model for acquiring evaluation information may output evaluation information indicating whether or not the conversion from the input image to the converted image is appropriate, or may output evaluation information such as a numerical value indicating the degree of validity of the conversion.
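A stand-in for such a learned evaluation model might look like the following (everything here is an illustrative assumption, not the patent's model: hand-set weights in a logistic scorer replace a model that would actually be trained on user-labelled pairs).

```python
import numpy as np

def evaluation_features(input_image, converted_image):
    """Simple features a learned evaluation model might consume: mean and
    maximum absolute per-pixel difference between input and converted."""
    a = input_image.astype(np.float64)
    b = converted_image.astype(np.float64)
    return np.array([np.abs(a - b).mean(), np.abs(a - b).max()])

def validity_score(input_image, converted_image,
                   weights=np.array([-0.2, -0.05]), bias=4.0):
    """Sigmoid score in (0, 1); higher means the conversion looks valid.
    Weights and bias are hypothetical placeholders for trained parameters."""
    z = evaluation_features(input_image, converted_image) @ weights + bias
    return float(1.0 / (1.0 + np.exp(-z)))
```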
  • the process to be executed when it is judged that the conversion from the input image to the converted image is not appropriate can be changed as appropriate.
  • when the CPU 23 evaluates, based on the evaluation information, that the conversion is not valid, the CPU 23 may acquire a converted image by inputting the input image to a mathematical model different from the mathematical model that performed the conversion evaluated as invalid.
  • the characteristics of conversion by the mathematical model differ depending on the algorithm and training data used when training the mathematical model. Therefore, if the transformation is evaluated as invalid, the transformed image may be acquired appropriately by acquiring the transformed image by a different mathematical model.
  • the process of acquiring an ophthalmic image in S11 of FIGS. 4 and 9 is an example of the “image acquisition step”.
  • the process of acquiring the converted image in S12 of FIGS. 4 and 9 is an example of the “converted image acquisition step”.
  • the process of acquiring the evaluation information in S13 of FIG. 4 and S24 of FIG. 9 is an example of the “evaluation information acquisition step”.
  • the process of evaluating the validity of the conversion in S14 of FIG. 4 is an example of the “first evaluation step”.
  • the process of evaluating the validity of the conversion in S25 of FIG. 9 is an example of the “second evaluation step”.
  • the process of displaying the difference image in S18 of FIGS. 4 and 9 is an example of the “difference image display step”.
  • the warning process shown in S17 of FIGS. 4 and 9 is an example of the “warning step”.
  • the process of stopping the display process of the converted image at S15: NO in FIGS. 4 and 9 is an example of the “display stop step”.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A control unit of an ophthalmic image processing device executes an image acquisition step (S11), a converted image acquisition step (S12), and an evaluation information acquisition step (S13). In the image acquisition step, the control unit acquires an ophthalmic image captured by an ophthalmic imaging device. In the converted image acquisition step, the control unit inputs, as an input image, the ophthalmic image acquired in the image acquisition step into a mathematical model trained by a machine learning algorithm, so as to acquire a converted image obtained by converting the image quality of the input image. In the evaluation information acquisition step, the control unit acquires evaluation information for evaluating the validity of the conversion of the input image into the converted image by the mathematical model.
PCT/JP2020/032949 2019-09-04 2020-08-31 Programme de traitement d'images ophtalmiques et dispositif de traitement d'images ophtalmiques WO2021045019A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021543759A JPWO2021045019A1 (fr) 2019-09-04 2020-08-31

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-161616 2019-09-04
JP2019161616 2019-09-04

Publications (1)

Publication Number Publication Date
WO2021045019A1 true WO2021045019A1 (fr) 2021-03-11

Family

ID=74852928

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/032949 WO2021045019A1 (fr) 2019-09-04 2020-08-31 Programme de traitement d'images ophtalmiques et dispositif de traitement d'images ophtalmiques

Country Status (2)

Country Link
JP (1) JPWO2021045019A1 (fr)
WO (1) WO2021045019A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001118063A (ja) * 1999-10-19 2001-04-27 Canon Inc 画像処理装置、画像処理システム、画像処理方法、及び記憶媒体
JP2006031075A (ja) * 2004-07-12 2006-02-02 Ricoh Co Ltd 画像処理評価システム
JP2009181508A (ja) * 2008-01-31 2009-08-13 Sharp Corp 画像処理装置、検査システム、画像処理方法、画像処理プログラム、及び該プログラムを記録したコンピュータ読み取り可能な記録媒体
JP4430743B2 (ja) * 1996-07-30 2010-03-10 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ リング状画像アーチファクトの補正
JP2010198068A (ja) * 2009-02-23 2010-09-09 Seiko Epson Corp 画像処理回路のシミュレーション装置、画像処理回路のシミュレーション方法、画像処理回路の設計方法、及び画像処理回路のシミュレーションプログラム
JP2011134200A (ja) * 2009-12-25 2011-07-07 Konica Minolta Holdings Inc 画像評価方法、画像処理方法および画像処理装置
WO2018210978A1 (fr) * 2017-05-19 2018-11-22 Retinai Medical Gmbh Réduction du bruit dans une image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MA, YUHUI et al.: "Speckle noise reduction in optical coherence tomography images based on edge-sensitive cGAN", BIOMEDICAL OPTICS EXPRESS, vol. 9, no. 11, 2018, pages 5129 - 5146, XP055675651, DOI: 10.1364/BOE.9.005129 *

Also Published As

Publication number Publication date
JPWO2021045019A1 (fr) 2021-03-11

Similar Documents

Publication Publication Date Title
US11633096B2 (en) Ophthalmologic image processing device and non-transitory computer-readable storage medium storing computer-readable instructions
JP7388525B2 (ja) 眼科画像処理装置および眼科画像処理プログラム
WO2020026535A1 (fr) Dispositif de traitement d'images ophtalmiques, dispositif oct et programme de traitement d'images ophtalmiques
US20220284577A1 (en) Fundus image processing device and non-transitory computer-readable storage medium storing computer-readable instructions
JP2024045441A (ja) 眼科画像処理装置、および眼科画像処理プログラム
JP6703319B1 (ja) 眼科画像処理装置、およびoct装置
JP2022082077A (ja) 眼科画像処理装置、および、眼科画像処理プログラム
WO2020116351A1 (fr) Dispositif d'aide au diagnostic et programme d'aide au diagnostic
JP6866954B2 (ja) 眼科画像処理プログラム、およびoct装置
WO2021045019A1 (fr) Programme de traitement d'images ophtalmiques et dispositif de traitement d'images ophtalmiques
JP7439990B2 (ja) 医療画像処理装置、医療画像処理プログラム、および医療画像処理方法
JP6747617B2 (ja) 眼科画像処理装置、およびoct装置
JP2021037177A (ja) 眼科画像処理プログラムおよび眼科画像処理装置
WO2021020419A1 (fr) Dispositif de traitement d'image médicale et programme de traitement d'image médicale
WO2020241794A1 (fr) Dispositif de traitement d'image ophtalmique, programme de traitement d'image ophtalmique et système de traitement d'image ophtalmique
JP7328489B2 (ja) 眼科画像処理装置、および眼科撮影装置
JP2021074095A (ja) 眼科画像処理装置および眼科画像処理プログラム
JP7435067B2 (ja) 眼科画像処理装置および眼科画像処理プログラム
JP7210927B2 (ja) 眼科画像処理装置、oct装置、および眼科画像処理プログラム
JP7180187B2 (ja) 眼科画像処理装置、oct装置、および眼科画像処理プログラム
JP7302184B2 (ja) 眼科画像処理装置、および眼科画像処理プログラム
WO2023281965A1 (fr) Dispositif et programme de traitement d'image médicale
JP2022138552A (ja) 眼科画像処理装置、眼科画像処理プログラム、および眼科画像撮影装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20861862

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021543759

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20861862

Country of ref document: EP

Kind code of ref document: A1