WO2020049828A1 - Image processing apparatus, image processing method, and program - Google Patents

Image processing apparatus, image processing method, and program Download PDF

Info

Publication number
WO2020049828A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
display
quality improvement
control unit
Prior art date
Application number
PCT/JP2019/023650
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroki Uchida
Yoshihiko Iwase
Osamu Sagano
Ritsuya Tomita
Original Assignee
Canon Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2019068663A external-priority patent/JP7305401B2/en
Application filed by Canon Inc.
Priority to CN201980057669.5A priority Critical patent/CN112638234A/en
Publication of WO2020049828A1 publication Critical patent/WO2020049828A1/en
Priority to US17/182,402 priority patent/US20210183019A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 Arrangements specially adapted for eye photography

Definitions

  • the present invention relates to an image processing device, an image processing method, and a program.
  • as a method for non-destructively and non-invasively acquiring a tomographic image of a subject such as a living body, an apparatus using optical coherence tomography (an OCT apparatus) has been put into practical use.
  • the OCT apparatus is widely used as an ophthalmic apparatus for acquiring images for ophthalmic diagnosis.
  • a tomographic image of a subject can be obtained by causing light reflected from a measurement target and light reflected from a reference mirror to interfere with each other and analyzing the intensity of the interference light.
  • a time domain OCT (TD-OCT: Time Domain OCT) is known.
  • in TD-OCT, depth information of the subject is obtained by sequentially changing the position of a reference mirror.
  • SD-OCT (Spectral Domain OCT)
  • SS-OCT (Swept Source OCT)
  • in SD-OCT, interference light generated using low coherence light is spectrally dispersed, and depth information is acquired from the resulting spectrum.
  • in SS-OCT, interference light is acquired using light whose wavelength has been separated in advance by a wavelength-swept light source.
  • SD-OCT and SS-OCT are also collectively referred to as Fourier domain OCT (FD-OCT: Fourier Domain OCT).
  • a tomographic image based on depth information of the subject can thus be obtained. Further, by integrating the acquired three-dimensional tomographic images in the depth direction and projecting them onto a two-dimensional plane, a front image of the measurement target can be generated. Conventionally, in order to improve the image quality of these images, it has been common to acquire the images a plurality of times and perform superimposition (averaging) processing. However, such repeated imaging takes time.
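The superimposition (averaging) approach described above can be sketched with synthetic data; the frame count, image size, and noise level below are illustrative assumptions, not values from this document:

```python
import numpy as np

# The same location is imaged several times and the frames are averaged
# (superimposed) to suppress uncorrelated noise.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))          # idealized noise-free tomogram
frames = [clean + rng.normal(0.0, 1.0, clean.shape) for _ in range(16)]

averaged = np.mean(frames, axis=0)  # superimposition (frame averaging)

# Averaging N frames of independent noise reduces the noise std by ~sqrt(N).
single_noise = float(np.std(frames[0]))
averaged_noise = float(np.std(averaged))
```

This illustrates the trade-off the paragraph notes: the noise drops roughly as the square root of the frame count, but each extra frame costs acquisition time.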
  • Patent Literature 1 discloses a technique for converting a previously acquired image into a higher-resolution image using an artificial intelligence engine, in order to cope with rapid advances in medical technology and with simple imaging in an emergency. According to such a technique, for example, an image acquired with fewer acquisitions can be converted into an image with higher resolution.
  • however, an image having a high resolution is not necessarily an image suitable for image diagnosis.
  • even in a high-resolution image, an object to be observed may not be properly discerned when there is much noise or when the contrast is low.
  • one of the objects of the present invention is to provide an image processing device, an image processing method, and a program that can generate an image more suitable for image diagnosis than before.
  • an image processing apparatus according to one embodiment includes an image quality improving unit that uses a learned model to generate, from a first image of a subject's eye, a second image in which at least one of noise reduction and contrast enhancement has been performed as compared with the first image, and a display control unit that causes a display unit to switch between the first image and the second image, or to display them side by side or superimposed.
  • an image processing apparatus according to another embodiment includes an image quality improving unit that uses a learned model to generate, from a first image that is a front image generated based on information in a depth-direction range of the subject's eye, a second image in which at least one of noise reduction and contrast enhancement has been performed as compared with the first image, and a selecting unit that selects, based on the depth-direction range used for generating the first image, the learned model to be used by the image quality improving unit from among a plurality of learned models.
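The selecting unit described in this paragraph can be sketched as a simple lookup keyed by the depth-direction generation range; the range names and model labels below are hypothetical placeholders, not from the patent:

```python
# One learned model per depth-direction range used to generate the front image.
# Keys and values here are illustrative stand-ins.
LEARNED_MODELS = {
    "superficial": "model_superficial",  # e.g., trained on shallow-range OCTA images
    "deep": "model_deep",                # e.g., trained on deep-range OCTA images
    "choroid": "model_choroid",
}

def select_model(depth_range_name: str) -> str:
    """Return the learned model matched to the generation depth range."""
    try:
        return LEARNED_MODELS[depth_range_name]
    except KeyError:
        raise ValueError(f"no learned model for depth range: {depth_range_name}")

chosen = select_model("deep")
```

The design point is that a model trained on images from one depth range tends to perform better on images from that same range, so the selection is driven by the generation range rather than by the image content.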
  • 1 shows a schematic configuration of an OCT apparatus according to a first embodiment.
  • 2 illustrates a schematic configuration of a control unit according to the first embodiment.
  • 4 shows an example of teacher data according to the first embodiment.
  • 4 shows an example of teacher data according to the first embodiment.
  • 4 illustrates an example of a configuration of a learned model according to the first embodiment.
  • 5 is a flowchart of a series of image processing according to the first embodiment.
  • 13 shows an example of a report screen for switching and displaying images before and after the image quality improvement processing.
  • 13 shows an example of a report screen for switching and displaying images before and after the image quality improvement processing.
  • 13 shows an example of a report screen in which images before and after the image quality improvement processing are displayed side by side.
  • 7 shows an example of a report screen that simultaneously displays a plurality of images to which image quality improvement processing has been applied.
  • 7 shows an example of a report screen that simultaneously displays a plurality of images to which image quality improvement processing has been applied.
  • 9 illustrates a schematic configuration of a control unit according to the second embodiment.
  • 9 is a flowchart of a series of image processing according to the second embodiment.
  • An example of changing the image quality improvement processing will be described.
  • An example of changing the image quality improvement processing will be described.
  • 7 shows an example of a report screen that simultaneously displays a plurality of images to which image quality improvement processing has been applied.
  • 7 shows an example of a report screen that simultaneously displays a plurality of images to which image quality improvement processing has been applied.
  • 13 is a flowchart of a series of image processing according to the third embodiment.
  • 9 shows a schematic configuration of a control unit according to a fourth embodiment.
  • 13 is a flowchart of a series of image processing according to the fourth embodiment.
  • 15 shows an example of the configuration of a neural network used as a machine learning model according to Modification 9.
  • 15 shows an example of the configuration of a neural network used as a machine learning model according to Modification 9.
  • 15 shows an example of the configuration of a neural network used as a machine learning model according to Modification 9.
  • 15 shows an example of the configuration of a neural network used as a machine learning model according to Modification 9.
  • 17 illustrates an example of a user interface according to a fifth embodiment.
  • 5 shows an example of a plurality of OCTA En-Face images.
  • 5 shows an example of a plurality of OCTA En-Face images.
  • 17 illustrates an example of a user interface according to a fifth embodiment.
  • OCTA (OCT Angiography)
  • an OCTA image of an eye to be examined will be described as an example of an image on which image quality improvement processing is performed using a learned model related to a machine learning model (machine learning engine).
  • OCTA is an angiography using OCT without using a contrast agent.
  • an OCTA image is a front blood vessel image.
  • the motion contrast data is data obtained by repeatedly photographing substantially the same part of the subject and detecting a temporal change of the subject during the photographing.
  • here, the substantially identical portion refers to a position identical to a degree allowable for generating the motion contrast data, and includes a portion slightly shifted from the strictly identical portion.
  • the motion contrast data is obtained, for example, by calculating a temporal change in the phase, vector, or intensity of the complex OCT signal as a difference, a ratio, a correlation, or the like.
  • in the present embodiment, a machine learning model is used to generate an image more suitable for image diagnosis than before, and an image processing device is provided with which the authenticity of the tissue depicted in such an image can easily be confirmed.
  • the image on which the image quality improvement processing is performed is not limited to this, and may be a tomographic image, a luminance En-Face image, or the like.
  • here, the En-Face image is a front image generated by projecting or integrating, onto a two-dimensional plane, data in a predetermined depth range of the three-dimensional data of the subject, the range being determined based on two reference planes.
  • the En-Face image includes, for example, a luminance En-Face image based on a luminance tomographic image and an OCTA image based on motion contrast data.
  • FIG. 1 shows a schematic configuration of the OCT apparatus according to the present embodiment.
  • the OCT apparatus 1 includes an OCT imaging unit 100, a control unit (image processing device) 200, an input unit 260, and a display unit 270.
  • the OCT imaging unit 100 includes the imaging optical system of an SD-OCT apparatus; it causes return light from the eye E, which is irradiated with measurement light via the scanning unit, to interfere with reference light corresponding to the measurement light, and acquires, based on the interference light, a signal including information on the tomography of the eye E (tomographic information).
  • the OCT imaging unit 100 includes a light interference unit 110 and a scanning optical system 150.
  • the control unit 200 can control the OCT imaging unit 100, generate an image from signals obtained from the OCT imaging unit 100 and other devices (not shown), and process the generated / acquired image.
  • the display unit 270 is an arbitrary display such as an LCD, and can display a GUI for operating the OCT imaging unit 100 and the control unit 200, generated images, images subjected to arbitrary processing, and various information such as patient information.
  • the input unit 260 is used to operate the control unit 200 by operating a GUI or inputting information.
  • the input unit 260 includes, for example, a mouse, a touchpad, a trackball, a touch panel display, a pointing device such as a stylus pen, a keyboard, and the like.
  • the display unit 270 and the input unit 260 can be integrally configured.
  • the OCT imaging unit 100, the control unit 200, the input unit 260, and the display unit 270 are separate elements here, but some or all of them may be integrally configured.
  • the light interference unit 110 in the OCT imaging unit 100 includes a light source 111, a coupler 113, a collimating optical system 121, a dispersion compensating optical system 122, a reflection mirror 123, a lens 131, a diffraction grating 132, an imaging lens 133, and a line sensor 134.
  • the light source 111 is a low coherence light source that emits near-infrared light. Light emitted from the light source 111 propagates through the optical fiber 112a and enters the coupler 113, which is an optical branching unit.
  • the light incident on the coupler 113 is split into measurement light traveling toward the scanning optical system 150 and reference light traveling toward the reference optical system including the collimating optical system 121, the dispersion compensating optical system 122, and the reflection mirror 123.
  • the measurement light enters the optical fiber 112b and is guided to the scanning optical system 150.
  • the reference light enters the optical fiber 112c and is guided to the reference optical system.
  • the reference light that has entered the optical fiber 112c is emitted from the fiber end, enters the dispersion compensating optical system 122 via the collimating optical system 121, and is guided to the reflection mirror 123.
  • the reference light reflected by the reflection mirror 123 follows the optical path in the reverse direction and enters the optical fiber 112c again.
  • the dispersion compensating optical system 122 is for compensating the dispersion of the optical system in the scanning optical system 150 and the eye E to be inspected, and adjusting the dispersion of the measurement light and the reference light.
  • the reflection mirror 123 is configured to be drivable in the optical axis direction of the reference light by a driving unit (not shown) controlled by the control unit 200, so that the optical path length of the reference light can be matched to the optical path length of the measurement light.
  • the scanning optical system 150 is an optical system configured to be relatively movable with respect to the eye E to be inspected.
  • the scanning optical system 150 can be driven forward, backward, up, down, left, and right with respect to the eye axis of the eye E by driving means (not shown) controlled by the control unit 200, so that alignment with respect to the eye E can be carried out.
  • the scanning optical system 150 may be configured to include the light source 111, the coupler 113, the reference optical system, and the like.
  • the scanning optical system 150 includes a collimating optical system 151, a scanning unit 152, and a lens 153. Light emitted from the fiber end of the optical fiber 112b is substantially collimated by the collimating optical system 151 and enters the scanning unit 152.
  • the scanning unit 152 has two galvanometer mirrors capable of rotating a mirror surface, one of which deflects light in the horizontal direction and the other deflects light in the vertical direction.
  • the scanning unit 152 deflects the incident light under the control of the control unit 200.
  • the scanning unit 152 scans the measurement light on the fundus Er of the eye E in two directions, that is, the main scanning direction perpendicular to the paper surface (X direction) and the sub-scanning direction in the paper surface direction (Y direction).
  • the main scanning direction and the sub-scanning direction are not limited to the X and Y directions; any directions may be used as long as they are perpendicular to the depth direction (Z direction) of the eye E and intersect each other. Therefore, for example, the main scanning direction may be the Y direction and the sub-scanning direction may be the X direction.
  • the measurement light scanned by the scanning unit 152 forms an illumination spot on the fundus Er of the subject's eye E via the lens 153.
  • each illumination spot moves (scans) on the fundus Er of the eye E to be inspected.
  • the return light of the measurement light reflected and scattered from the fundus Er at the position of the illumination spot follows the optical path in reverse, enters the optical fiber 112b, and returns to the coupler 113.
  • the reference light reflected by the reflection mirror 123 and the return light of the measurement light from the fundus Er of the eye E are returned to the coupler 113 and interfere with each other to become interference light.
  • the interference light passes through the optical fiber 112d and is emitted to the lens 131.
  • the interference light is made substantially parallel by the lens 131 and enters the diffraction grating 132.
  • the diffraction grating 132 has a periodic structure and splits the input interference light.
  • the split interference light is imaged on the line sensor 134 by the imaging lens 133 whose focus state can be changed.
  • the line sensor 134 outputs a signal corresponding to the intensity of light emitted to each sensor unit to the control unit 200.
  • the control unit 200 can generate a tomographic image of the eye E based on the interference signal output from the line sensor 134.
  • the control unit 200 can configure one B-scan image by collecting a plurality of A-scan images based on the interference signal acquired by the A-scan.
  • this B-scan image is referred to as a two-dimensional tomographic image.
  • the galvanometer mirror of the scanning unit 152 can be minutely driven in the sub-scanning direction orthogonal to the main scanning direction to acquire tomographic information at another position (an adjacent scanning line) of the eye E.
  • the control unit 200 can obtain a three-dimensional tomographic image of the eye E in a predetermined range by collecting a plurality of B-scan images by repeating this operation.
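The A-scan / B-scan / volume hierarchy described above can be sketched as follows; the array sizes and random data are illustrative assumptions:

```python
import numpy as np

# An A-scan is one depth profile; a B-scan collects A-scans along the main
# scanning direction; repeating B-scans along the sub-scanning direction
# yields a three-dimensional tomographic volume.
DEPTH, N_ASCANS, N_BSCANS = 8, 16, 4
rng = np.random.default_rng(1)

def acquire_a_scan():
    return rng.random(DEPTH)                  # one depth profile (Z)

def acquire_b_scan():
    # collect A-scans along the main scanning direction (X)
    return np.stack([acquire_a_scan() for _ in range(N_ASCANS)], axis=0)

# repeat B-scans while minutely stepping in the sub-scanning direction (Y)
volume = np.stack([acquire_b_scan() for _ in range(N_BSCANS)], axis=0)
```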
  • the control unit 200 includes an acquisition unit 210, an image processing unit 220, a drive control unit 230, a storage unit 240, and a display control unit 250.
  • the acquisition unit 210 can acquire the data of the output signal of the line sensor 134 corresponding to the interference signal of the eye E from the OCT imaging unit 100.
  • the data of the output signal acquired by the acquisition unit 210 may be an analog signal or a digital signal.
  • the control unit 200 can convert the analog signal into a digital signal.
  • the acquisition unit 210 can acquire the tomographic data generated by the image processing unit 220 and various images such as a two-dimensional tomographic image, a three-dimensional tomographic image, a motion contrast image, and an En-Face image.
  • here, the tomographic data is data including information on a tomogram of the subject, such as a signal obtained by Fourier-transforming an OCT interference signal, a signal obtained by applying arbitrary processing to that signal, or a tomographic image based on these signals.
  • the acquisition unit 210 may acquire a group of imaging conditions (for example, imaging date and time, imaged site name, imaging area, imaging angle of view, imaging method, image resolution and gradation, image size, image filter, and information on the data format of the image).
  • the imaging condition group is not limited to those illustrated; it need not include all of them and may include only some of them.
  • the acquisition unit 210 acquires the imaging conditions of the OCT imaging unit 100 at the time of capturing an image.
  • the acquiring unit 210 can also acquire a group of photographing conditions stored in a data structure forming an image according to the data format of the image. Note that when the imaging conditions are not stored in the data structure of the image, the acquiring unit 210 can also acquire the imaging information group including the imaging conditions group from a storage device that separately stores the imaging conditions.
  • the acquisition unit 210 can also acquire information for identifying the eye to be examined, such as the subject identification number, from the input unit 260 or the like. Note that the acquisition unit 210 may acquire various data, various images, and various information from the storage unit 240 and other devices (not illustrated) connected to the control unit 200. The acquisition unit 210 can cause the storage unit 240 to store the acquired various data and images.
  • the image processing unit 220 generates a tomographic image, an En-Face image, or the like from the data acquired by the acquisition unit 210 or the data stored in the storage unit 240, or performs image processing on the generated or acquired image. Can be.
  • the image processing unit 220 includes a tomographic image generation unit 221, a motion contrast generation unit 222, an En-Face image generation unit 223, and an image quality improvement unit 224.
  • the tomographic image generation unit 221 performs wave number conversion, Fourier transform, absolute value conversion (acquisition of amplitude), and the like on the interference signal data acquired by the acquisition unit 210 to generate tomographic data, and can generate a tomographic image of the eye E based on the tomographic data.
  • the interference signal data acquired by the acquisition unit 210 may be the signal output from the line sensor 134, or may be interference signal data acquired from the storage unit 240 or a device (not shown) connected to the control unit 200. Note that any known method may be used to generate a tomographic image, and a detailed description thereof is omitted.
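As a hedged illustration of the Fourier-transform step named above, a spectral fringe from a single reflector yields a peak at the corresponding depth after the transform and absolute-value conversion; the sampling grid and reflector depth below are assumptions for the sketch:

```python
import numpy as np

# Synthetic spectral interferogram for a single reflector, already sampled
# linearly in wavenumber (so no resampling step is needed here).
n_samples = 256
k = np.linspace(1.0, 2.0, n_samples)          # wavenumber axis
depth_index = 40                              # assumed reflector depth (in bins)
fringe = np.cos(2 * np.pi * depth_index * (k - k[0]) / (k[-1] - k[0]))

# Fourier transform + absolute value gives the A-scan amplitude vs. depth.
a_scan = np.abs(np.fft.fft(fringe))[: n_samples // 2]
peak_depth = int(np.argmax(a_scan[1:]) + 1)   # skip the DC term
```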
  • the tomographic image generation unit 221 can generate a three-dimensional tomographic image based on the generated tomographic images of a plurality of parts.
  • the tomographic image generation unit 221 can generate a three-dimensional tomographic image by arranging, for example, tomographic images of a plurality of parts in one coordinate system.
  • the tomographic image generation unit 221 may generate a three-dimensional tomographic image based on tomographic images of a plurality of parts acquired from a device (not illustrated) connected to the storage unit 240 and the control unit 200.
  • the motion contrast generation unit 222 can generate a two-dimensional motion contrast image using a plurality of tomographic images obtained by photographing substantially the same location.
  • the motion contrast generation unit 222 can generate a three-dimensional motion contrast image by arranging the generated two-dimensional motion contrast images of the respective parts in one coordinate system.
  • the motion contrast generation unit 222 generates a motion contrast image based on decorrelation values between a plurality of tomographic images obtained by photographing substantially the same part of the eye E.
  • first, the motion contrast generation unit 222 acquires a plurality of aligned tomographic images of substantially the same location, captured at mutually consecutive times.
  • various known methods can be used for the alignment. For example, one of the plurality of tomographic images is selected as a reference image, and the degree of similarity to each of the other tomographic images is calculated while the position and angle of the reference image are changed; the positional shift amount of each tomographic image with respect to the reference image is thereby obtained. The plurality of tomographic images are then aligned by correcting each tomographic image based on the calculated result.
  • the alignment processing may be performed by a component separate from the motion contrast generation unit 222. The alignment method is not limited to this, and may be performed by any known method.
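The shift-estimation idea behind the alignment can be sketched with one-dimensional phase correlation; this is a simplification of the position-and-angle similarity search described above, and all data below are synthetic:

```python
import numpy as np

# Estimate the shift of an image against a reference via FFT-based
# cross-correlation; the argmax of the correlation gives the shift amount.
rng = np.random.default_rng(2)
reference = rng.random(128)
shift_true = 5
moved = np.roll(reference, shift_true)        # "tomographic image" shifted by 5 samples

corr = np.fft.ifft(np.fft.fft(moved) * np.conj(np.fft.fft(reference))).real
shift_est = int(np.argmax(corr))

aligned = np.roll(moved, -shift_est)          # correct the image by the estimated shift
```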
  • the motion contrast generation unit 222 then calculates, by the following Equation 1, a decorrelation value for each pair of tomographic images whose imaging times are consecutive among the plurality of aligned tomographic images.
  • here, A(x, z) indicates the amplitude at position (x, z) of tomographic image A, and B(x, z) indicates the amplitude at the same position (x, z) of tomographic image B.
  • the resulting decorrelation value M (x, z) takes a value from 0 to 1, and becomes closer to 1 as the difference between the two amplitude values increases.
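The body of Equation 1 did not survive extraction. The standard OCTA decorrelation formula, reproduced here as a hedged reconstruction, is consistent with the surrounding description (a value from 0 to 1 that approaches 1 as the two amplitudes differ):

```latex
M(x, z) \;=\; 1 \;-\; \frac{2\,A(x, z)\,B(x, z)}{A(x, z)^{2} + B(x, z)^{2}}
```

When A(x, z) = B(x, z) the fraction equals 1 and M is 0; as the amplitudes diverge the fraction tends to 0 and M tends to 1, matching the stated behavior.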
  • a case has been described in which a two-dimensional tomographic image on the XZ plane is used, but a two-dimensional tomographic image on the YZ plane or the like may be used.
  • the position (x, z) may be replaced with the position (y, z) or the like.
  • the decorrelation value may be obtained based on the luminance value of the tomographic image, or may be obtained based on the value of an interference signal corresponding to the tomographic image.
  • the motion contrast generation unit 222 determines the pixel value of the motion contrast image based on the decorrelation value M (x, z) at each position (pixel position), and generates a motion contrast image.
  • the motion contrast generation unit 222 calculates the decorrelation value for tomographic images whose imaging times are continuous with each other, but the method of calculating motion contrast data is not limited to this.
  • the two tomographic images for which the decorrelation value M is obtained need only have an imaging time of each corresponding tomographic image within a predetermined time interval, and the imaging times need not be continuous.
  • for example, two tomographic images whose imaging interval is longer than the normal specified time may be extracted from the plurality of acquired tomographic images, and the decorrelation value may be calculated between them.
  • instead of the decorrelation value, a variance value, a value obtained by dividing the maximum value by the minimum value (maximum value / minimum value), or the like may be obtained.
  • the method of generating the motion contrast image is not limited to the above-described method, and any other known method may be used.
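A minimal sketch of the decorrelation-based motion contrast computation, assuming the standard formula M = 1 − 2AB / (A² + B²) (consistent with the properties stated above); the shapes and amplitude values are illustrative:

```python
import numpy as np

def decorrelation(a, b, eps=1e-12):
    """Decorrelation in [0, 1]: 0 for identical amplitudes, toward 1 as they differ."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - (2.0 * a * b) / (a * a + b * b + eps)

def motion_contrast(tomograms):
    """Average decorrelation over temporally consecutive image pairs."""
    pairs = zip(tomograms[:-1], tomograms[1:])
    return np.mean([decorrelation(a, b) for a, b in pairs], axis=0)

static = np.full((4, 4), 10.0)                            # static tissue: amplitudes repeat
moving = [np.full((4, 4), v) for v in (10.0, 2.0, 10.0)]  # fluctuating amplitudes (flow)

mc_static = motion_contrast([static, static, static])     # ~0 everywhere
mc_moving = motion_contrast(moving)                       # clearly positive
```

Static tissue produces near-zero motion contrast while temporally fluctuating amplitudes (blood flow) produce high values, which is how the OCTA image highlights vessels without a contrast agent.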
  • the En-Face image generation unit 223 can generate an En-Face image (OCTA image) as a front image from the three-dimensional motion contrast image generated by the motion contrast generation unit 222. Specifically, the En-Face image generation unit 223 can generate an OCTA image, which is a front image, by projecting the three-dimensional motion contrast image onto a two-dimensional plane based on, for example, two arbitrary reference planes in the depth direction (Z direction) of the eye E. In addition, the En-Face image generation unit 223 can similarly generate a luminance En-Face image from the three-dimensional tomographic image generated by the tomographic image generation unit 221.
  • an OCTA image is one type of En-Face image.
  • the En-Face image generation unit 223 determines, for example, a representative value of the pixel values in the depth direction at each position in the X and Y directions within the region bounded by the two reference planes, and generates an En-Face image by using the representative value as the pixel value at each position.
  • the representative value includes a value such as an average value, a median value, or a maximum value of pixel values within a range in a depth direction of a region surrounded by two reference planes.
  • each reference plane may be a curved plane along a layer boundary in the tomogram of the eye E, or may be a flat plane.
  • the range in the depth direction between the reference planes for generating the En-Face image is referred to as the generation range of the En-Face image.
  • the method of generating an En-Face image according to the present embodiment is an example, and the En-Face image generation unit 223 may generate an En-Face image using any known method.
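The representative-value projection described above can be sketched as follows; the (Y, X, Z) volume layout and the reference-plane indices are assumptions for illustration:

```python
import numpy as np

def en_face(volume, z_top, z_bottom, reducer=np.mean):
    """Project a 3-D volume onto a 2-D front image over the depth range
    [z_top, z_bottom) bounded by the two reference planes; the reducer is
    the representative value (mean here; median or maximum also work)."""
    return reducer(volume[:, :, z_top:z_bottom], axis=2)

rng = np.random.default_rng(3)
volume = rng.random((8, 8, 32))               # (Y, X, Z) motion contrast or luminance data

front_mean = en_face(volume, 4, 12)                  # average projection
front_max = en_face(volume, 4, 12, reducer=np.max)   # maximum intensity projection
```

This sketch uses flat reference planes at fixed Z indices; planes following a segmented layer boundary would instead use per-(X, Y) index ranges.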
  • the image quality improving unit 224 generates a high-quality OCTA image based on the OCTA image generated by the En-Face image generation unit 223, using a learned model described later. The image quality improving unit 224 may also generate a high-quality tomographic image or a high-quality luminance En-Face image based on the tomographic image generated by the tomographic image generation unit 221 or the luminance En-Face image generated by the En-Face image generation unit 223. Note that the image quality improving unit 224 can generate high-quality images based not only on images captured using the OCT imaging unit 100 but also on various images the acquisition unit 210 acquires from the storage unit 240 or from another device (not shown) connected to the control unit 200. Further, the image quality improving unit 224 may perform image quality improvement processing not only on OCTA images and tomographic images but also on three-dimensional motion contrast images and three-dimensional tomographic images.
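To make the data flow of the image quality improving unit concrete, the sketch below passes an image through a stand-in "model"; a 3x3 mean filter is used purely as a placeholder for the learned model and is not the method of this document:

```python
import numpy as np

def placeholder_model(image):
    """Stand-in for the learned model: a 3x3 mean filter (placeholder only)."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

def improve_quality(image, model=placeholder_model):
    """Return a higher-quality image generated by the (stand-in) model."""
    return model(np.asarray(image, dtype=float))

rng = np.random.default_rng(4)
noisy = 5.0 + rng.normal(0.0, 1.0, (32, 32))
improved = improve_quality(noisy)
```

The real unit would load a trained network and run inference in place of `placeholder_model`; the surrounding plumbing (accept an image, return an image of the same shape) is the same.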
  • the drive control unit 230 can control driving of components such as the light source 111 of the OCT imaging unit 100, the scanning optical system 150, the scanning unit 152, and the imaging lens 133, which are connected to the control unit 200.
  • the storage unit 240 can store various data acquired by the acquisition unit 210 and various images and data such as tomographic images and OCTA images generated and processed by the image processing unit 220.
  • the storage unit 240 can also store information on the subject's eye, such as attributes of the subject (name, age, and the like) and measurement results acquired using other examination equipment (axial length, intraocular pressure, and the like), as well as imaging parameters, image analysis parameters, and parameters set by the operator.
  • the image and the information may be stored in an external storage device (not shown).
  • the storage unit 240 can also store a program or the like for performing the function of each component of the control unit 200 by being executed by the processor.
  • the display control unit 250 can cause the display unit 270 to display various information acquired by the acquisition unit 210, as well as various images such as tomographic images, OCTA images, and three-dimensional motion contrast images generated and processed by the image processing unit 220. Further, the display control unit 250 can cause the display unit 270 to display information and the like input by the user.
  • the control unit 200 may be configured using, for example, a general-purpose computer. Note that the control unit 200 may be configured using a computer dedicated to the OCT apparatus 1.
  • the control unit 200 includes a processor such as a CPU (Central Processing Unit) or an MPU (Micro Processing Unit) (not shown), and a storage medium such as an optical disk or a memory such as a ROM (Read Only Memory).
  • Each component other than the storage unit 240 of the control unit 200 may be configured by a software module executed by a processor such as a CPU or an MPU.
  • alternatively, each of the constituent elements may be configured by a circuit that performs a specific function, such as an ASIC, or by an independent device.
  • the storage unit 240 may be configured by an arbitrary storage medium such as an optical disk and a memory.
	• the control unit 200 may include one processor such as a CPU and one storage medium such as a ROM, or may include a plurality of processors and storage media. Each component of the control unit 200 may thus be configured to function when at least one processor executes a program stored in at least one storage medium.
  • the processor is not limited to a CPU or an MPU, but may be a GPU (Graphics Processing Unit) or the like.
  • the learned model according to the present embodiment generates and outputs an image in which image quality improvement processing has been performed based on the input image according to the tendency of learning.
	• the image quality improvement processing in this specification refers to converting an input image into an image with an image quality more suitable for image diagnosis.
  • a high-quality image refers to an image converted into an image with an image quality more suitable for image diagnosis.
	• the image quality suitable for image diagnosis depends on what is to be diagnosed by the various forms of image diagnosis. Therefore, it cannot be stated unconditionally, but image quality suitable for image diagnosis includes, for example, low noise, high contrast, colors and gradations in which the imaging target is easy to observe, large image size, and high resolution. It may also include image quality in which objects or gradations that do not actually exist but were drawn in the process of image generation have been removed from the image.
	• a trained model is a model in which a machine learning model according to an arbitrary machine learning algorithm such as deep learning has been trained in advance using appropriate teacher data (learning data). However, the trained model does not perform further learning by itself, although additional learning can be performed on it.
  • the teacher data is composed of one or more pairs of input data and output data (correct data).
  • a pair of input data and output data is composed of an OCTA image and an OCTA image obtained by performing a superposition process such as an averaging process on a plurality of OCTA images including the OCTA image.
  • the superimposed image that has been subjected to the superimposition process becomes a high-quality image suitable for image diagnosis because pixels that are commonly drawn in the original image group are emphasized.
  • the generated high-quality image is a high-contrast image in which the difference between the low-brightness region and the high-brightness region is clear as a result of emphasizing the pixels drawn in common.
  • random noise generated at each shooting can be reduced, or a region that is not well drawn in an original image at a certain time can be interpolated by another original image group.
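	• as an illustrative sketch only (the function and variable names below are ours, not from the embodiment), the superposition process that produces the output data of a teacher pair can be written as:

```python
import numpy as np

def superpose(octa_images):
    """Average a stack of registered OCTA images of substantially the same location.

    Structures drawn in common across the stack are reinforced, while
    uncorrelated random noise from each shot tends to average out, yielding
    the higher-contrast, lower-noise target image described above.
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in octa_images])
    return stack.mean(axis=0)
```

	• the common-pixel emphasis follows directly from the arithmetic mean: structure present in every shot keeps its value, while zero-mean noise shrinks roughly as one over the square root of the number of shots.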
  • pairs that do not contribute to high image quality can be removed from the teacher data.
	• if a high-quality image, which is the output data forming a teacher-data pair, is not suitable for image diagnosis, an image output by a trained model learned using that teacher data may likewise have image quality unsuitable for image diagnosis. Therefore, by removing from the teacher data any pair whose output data is unsuitable for image diagnosis, the possibility that the trained model generates an image with image quality unsuitable for image diagnosis can be reduced.
	• if the average luminance or luminance distribution of a pair differs greatly between the input and output images, a trained model learned using such teacher data may output an image whose luminance distribution differs significantly from that of the low-quality image and which is therefore unsuitable for image diagnosis. For this reason, pairs of input data and output data with significantly different average luminance or luminance distributions can be removed from the teacher data.
	• similarly, if the structure or position of the imaging target differs greatly between the input and output images, a trained model learned using such teacher data may output an image in which the imaging target is drawn with a structure or at a position significantly different from the low-quality image, which is unsuitable for image diagnosis. For this reason, pairs of input data and output data in which the structure or position of the drawn imaging target differs greatly can also be removed from the teacher data.
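	• the luminance-based screening of pairs described above can be sketched as follows; the function name and the 0.2 threshold are illustrative choices of ours, assuming images normalized to [0, 1]:

```python
import numpy as np

def screen_pairs(pairs, max_mean_diff=0.2):
    """Drop teacher-data pairs whose average luminance differs too much.

    `pairs` is a list of (input_image, output_image) arrays; a pair whose
    mean luminance difference exceeds the threshold is removed from the
    teacher data, since it could teach the model to produce an image with
    a luminance distribution far from the input.
    """
    kept = []
    for low, high in pairs:
        if abs(float(np.mean(low)) - float(np.mean(high))) <= max_mean_diff:
            kept.append((low, high))
    return kept
```

	• an analogous check on structure or position (e.g. a registration error metric between the two images) could filter the structurally mismatched pairs in the same way.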
	• according to such a trained model, when an OCTA image acquired in one shot (examination) is input, the image quality improving unit 224 can generate a high-quality OCTA image with the increased contrast and reduced noise that superposition processing would otherwise provide. Therefore, the image quality improving unit 224 can generate a high-quality image suitable for image diagnosis based on the low-quality image that is the input image.
	• the creation of the teacher data will be described with reference to FIGS. 3A and 3B.
  • one of the group of pairs forming the teacher data is an OCTA image 301 and a high-quality OCTA image 302.
  • a pair is formed by using the entire OCTA image 301 as input data and the entire high-quality OCTA image 302 as output data.
  • a pair of input data and output data is formed by the entirety of each image, but the pair is not limited to this.
  • a pair may be formed by using a rectangular area image 311 of the OCTA image 301 as input data and a rectangular area image 321 as a corresponding imaging area in the OCTA image 302 as output data.
  • the scan range (shooting angle of view) and the scan density (the number of A-scans and the number of B-scans) are normalized to make the image size uniform, so that the rectangular area size at the time of learning can be made uniform.
	• the rectangular area images shown in FIGS. 3A and 3B are examples of rectangular area sizes used when learning is performed on divided regions.
  • the number of rectangular areas can be set to one in the example shown in FIG. 3A and plural in the example shown in FIG. 3B.
	• similarly, a pair can be formed using the rectangular area image 312 of the OCTA image 301 as input data and the rectangular area image 322, which is the corresponding imaging area in the high-quality OCTA image 302, as output data.
	• in this way, many mutually different pairs of rectangular area images can be created from a single pair of an OCTA image and a high-quality OCTA image.
	• by creating a large number of rectangular area image pairs from the original OCTA image and the high-quality OCTA image while shifting the position of the area to different coordinates, the group of pairs forming the teacher data can be enriched.
	• for example, the original OCTA image and the high-quality OCTA image can each be divided, without gaps, into a group of continuous rectangular area images of a constant image size.
	• alternatively, the original OCTA image and the high-quality OCTA image may be divided into groups of rectangular area images at mutually corresponding random positions. As described above, by selecting images of smaller areas as the rectangular areas forming a pair of input data and output data, a large amount of pair data can be generated from the OCTA image 301 and the high-quality OCTA image 302 constituting the original pair. Therefore, the time required for training the machine learning model can be reduced.
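	• the patch-pair extraction described above can be sketched as follows (function name and the patch/stride values are illustrative, not from the embodiment); a stride smaller than the patch size produces overlapping regions and therefore many more pairs from one source pair:

```python
import numpy as np

def extract_patch_pairs(low, high, patch=64, stride=32):
    """Cut corresponding rectangular-area image pairs out of an
    (OCTA image, high-quality OCTA image) pair.

    `low` and `high` are 2-D arrays of the same shape; each returned
    tuple is an (input data, output data) pair covering the same
    imaging area in both images.
    """
    assert low.shape == high.shape
    h, w = low.shape
    pairs = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            pairs.append((low[y:y + patch, x:x + patch],
                          high[y:y + patch, x:x + patch]))
    return pairs
```

	• the gapless division mentioned above corresponds to `stride == patch`; random positions could be drawn instead of the regular grid.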
  • FIG. 4 shows an example of the configuration 401 of the learned model used by the image quality improving unit 224.
  • the learned model shown in FIG. 4 is composed of a plurality of layer groups responsible for processing the input value group and outputting the processed value group.
  • the types of layers included in the configuration 401 of the learned model include a convolution (Convolution) layer, a downsampling (Downsampling) layer, an upsampling (Upsampling) layer, and a synthesis (Merger) layer.
  • the convolution layer is a layer that performs a convolution process on an input value group according to parameters such as a set filter kernel size, the number of filters, a stride value, and a dilation value.
  • the number of dimensions of the kernel size of the filter may be changed according to the number of dimensions of the input image.
	• the downsampling layer is a layer that performs processing to make the number of output values smaller than the number of input values by thinning out or combining the input value group. Specifically, such processing includes, for example, max pooling.
  • the upsampling layer is a layer that performs processing for increasing the number of output value groups beyond the number of input value groups by duplicating the input value group or adding a value interpolated from the input value group. Specifically, as such processing, for example, there is a linear interpolation processing.
	• the synthesis (Merger) layer is a layer that performs processing to combine value groups, such as the output value group of a certain layer or the pixel value group constituting an image, from a plurality of sources by concatenating or adding them.
	• as an example of the parameters set in the convolution layer group included in the configuration 401 shown in FIG. 4, setting the filter kernel size to a width of 3 pixels and a height of 3 pixels and the number of filters to 64 enables image quality improvement processing with a certain accuracy.
	• note that if the parameter settings for the layer groups or node groups forming the neural network differ, the degree to which the tendency trained from the teacher data can be reproduced in the output data may also differ. In other words, appropriate parameters often differ depending on the mode of implementation, and can be changed to preferable values as needed.
	• in addition, by changing the configuration of the CNN, better characteristics may be obtained in some cases.
  • the better characteristics include, for example, a higher accuracy of the image quality improvement processing, a shorter time of the image quality improvement processing, a shorter time required for training of the machine learning model, and the like.
	• for example, a batch normalization (Batch Normalization) layer or an activation layer using a rectified linear unit (Rectifier Linear Unit) may be incorporated after the convolution layer.
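	• purely to illustrate the convolution layer's kernel-size and stride parameters mentioned above (this is not the embodiment's implementation, and the function name is ours), a valid (no-padding) 2-D convolution can be written as:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2-D convolution over a single-channel image.

    The kernel shape corresponds to the filter kernel size parameter
    (e.g. 3x3), and `stride` to the stride value; in an actual layer
    this would be repeated for each of the configured filters (e.g. 64).
    """
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = image[i * stride:i * stride + kh,
                           j * stride:j * stride + kw]
            out[i, j] = float(np.sum(window * kernel))
    return out
```

	• a ReLU activation, as mentioned above, would simply be `np.maximum(out, 0.0)` applied to the layer's output.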
	• when learning is performed on divided image regions, the learned model outputs, for each input rectangular region, a rectangular region image that is the corresponding high-quality OCTA image.
	• in this case, the image quality improving unit 224 first divides the OCTA image 301, which is the input image, into a group of rectangular area images based on the image size at the time of learning, and inputs the divided rectangular area images to the trained model. After that, the image quality improving unit 224 arranges each of the rectangular area images, which are the high-quality OCTA images output from the learned model, in the same positional relationship as the corresponding rectangular area images input to the learned model, and combines them. Accordingly, the image quality improving unit 224 can generate the high-quality OCTA image 302 corresponding to the input OCTA image 301.
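	• the divide/process/recombine step above can be sketched as follows, assuming for brevity non-overlapping tiles and image dimensions that are multiples of the tile size (names and the `model` callable are placeholders of ours):

```python
import numpy as np

def improve_by_tiles(image, model, tile=64):
    """Split the input image into rectangular areas of the training
    size, run each through `model` (standing in for the trained model's
    inference), and place each output at the same position to recombine
    them into the final image.
    """
    h, w = image.shape
    out = np.zeros_like(image, dtype=np.float64)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y + tile, x:x + tile] = model(image[y:y + tile, x:x + tile])
    return out
```

	• a production implementation would typically batch the tiles into one model call and may blend overlapping tiles to avoid seams, but the positional bookkeeping is as shown.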
  • FIG. 5 is a flowchart of a series of image processing according to the present embodiment.
	• in step S501, the acquiring unit 210 acquires a plurality of pieces of three-dimensional tomographic information obtained by photographing the eye E a plurality of times.
  • the obtaining unit 210 may obtain the tomographic information of the eye E using the OCT imaging unit 100, or may obtain the tomographic information from the storage unit 240 or another device connected to the control unit 200.
	• hereinafter, a case in which the tomographic information of the eye E is acquired using the OCT imaging unit 100 will be described.
	• first, the operator seats the patient, who is the subject, in front of the scanning optical system 150, and starts OCT imaging after performing alignment, inputting patient information into the control unit 200, and the like.
	• the drive control unit 230 of the control unit 200 drives the galvanomirror of the scanning unit 152 to scan substantially the same portion of the eye to be examined a plurality of times, and acquires a plurality of pieces of tomographic information (interference signals) at substantially the same portion of the eye to be examined.
	• thereafter, the drive control unit 230 minutely drives the galvanomirror of the scanning unit 152 in the sub-scanning direction orthogonal to the main scanning direction, and likewise acquires a plurality of pieces of tomographic information at another position (an adjacent scanning line) of the eye E.
	• by repeating this operation, the acquisition unit 210 acquires a plurality of pieces of three-dimensional tomographic information in a predetermined range of the eye E.
	• in step S502, the tomographic image generation unit 221 generates a plurality of three-dimensional tomographic images based on the acquired plurality of pieces of three-dimensional tomographic information. Note that when the acquisition unit 210 acquires a plurality of three-dimensional tomographic images from the storage unit 240 or another device connected to the control unit 200 in step S501, step S502 may be omitted.
	• in step S503, the motion contrast generation unit 222 generates three-dimensional motion contrast data (a three-dimensional motion contrast image) based on the plurality of three-dimensional tomographic images.
  • the motion contrast generation unit 222 may obtain a plurality of pieces of motion contrast data based on three or more tomographic images acquired for substantially the same location, and generate an average value of the pieces of motion contrast data as final motion contrast data. If the acquisition unit 210 acquires the three-dimensional motion contrast data from the storage unit 240 or another device connected to the control unit 200 in step S501, steps S502 and S503 may be omitted.
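	• the embodiment does not spell out the motion contrast computation; as an illustration only, one commonly used decorrelation formulation between repeated scans, and the averaging over three or more tomographic images described above, can be sketched as (all names are our own):

```python
import numpy as np

def decorrelation(a1, a2, eps=1e-12):
    """Decorrelation between two tomographic amplitude images of
    substantially the same location: near 0 for static tissue, larger
    where the signal changes between scans (e.g. blood flow)."""
    a1 = np.asarray(a1, dtype=np.float64)
    a2 = np.asarray(a2, dtype=np.float64)
    return 1.0 - (2.0 * a1 * a2) / (a1 ** 2 + a2 ** 2 + eps)

def motion_contrast(tomograms):
    """Average the pairwise decorrelation maps from three or more
    repeated tomographic images to obtain the final motion contrast
    data, as described above."""
    maps = [decorrelation(tomograms[i], tomograms[i + 1])
            for i in range(len(tomograms) - 1)]
    return np.mean(maps, axis=0)
```

	• averaging the per-pair maps suppresses decorrelation noise from any single pair while retaining consistent flow signals.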
	• in step S504, the En-Face image generation unit 223 generates an OCTA image from the three-dimensional motion contrast data according to an instruction of the operator or based on a predetermined En-Face image generation range. If the acquisition unit 210 acquires an OCTA image from the storage unit 240 or another device connected to the control unit 200 in step S501, steps S502 to S504 may be omitted.
	• in step S505, the image quality improving unit 224 performs image quality improvement processing on the OCTA image using the learned model.
  • the image quality improving unit 224 inputs the OCTA image to the trained model, and generates a high-quality OCTA image based on the output from the trained model.
  • the image quality improving unit 224 first divides the OCTA image, which is the input image, into a rectangular region image group based on the image size at the time of learning. Then, the divided rectangular area image group is input to the learned model.
	• the image quality improving unit 224 then arranges each of the rectangular area images, which are the high-quality OCTA images output from the learned model, in the same positional relationship as the corresponding rectangular area images input to the learned model, and combines them to generate the final high-quality OCTA image.
	• in step S506, the display control unit 250 causes the display unit 270 to display the high-quality OCTA image (second image) generated by the image quality improving unit 224 so that it can be switched with the original OCTA image (first image).
	• since the display control unit 250 causes the display unit 270 to display the generated high-quality OCTA image switchably with the original OCTA image, it is possible to easily determine whether a blood vessel depicted in the high-quality image was newly generated by the image quality improvement processing or is also present in the original image.
	• the series of image processing then ends.
	• FIGS. 6A and 6B show an example of a report screen that switches and displays images before and after the image quality improvement processing.
  • a report screen 600 shown in FIG. 6A shows a tomographic image 611 and an OCTA image 601 before the image quality improvement processing.
  • a report screen 600 shown in FIG. 6B shows a tomographic image 611 and an OCTA image 602 (high-quality OCTA image) after the image quality improvement processing.
	• on the report screen 600, a pop-up menu 620 for selecting whether or not to perform the image quality improvement processing is displayed.
	• when execution of the image quality improvement processing is selected from the pop-up menu 620, the image quality improvement unit 224 executes the image quality improvement processing on the OCTA image 601.
	• the display control unit 250 then switches the display from the OCTA image 601 before the image quality improvement processing, displayed on the report screen 600, to the OCTA image 602 after the image quality improvement processing.
	• similarly, the pop-up menu 620 can be opened again and the display can be switched back to the OCTA image 601 before the image quality improvement processing.
	• note that the image may be switched by any method other than the pop-up menu.
  • the image may be switched by a button (for example, the button 3420 in FIG. 18, FIG. 20A and FIG. 20B) arranged on the report screen, a pull-down menu, a radio button, a check box, a keyboard operation, or the like.
  • the image may be switched and displayed by a mouse wheel operation or a touch operation of the touch panel display.
	• the operator can arbitrarily switch between the OCTA image 601 before the image quality improvement processing and the OCTA image 602 after the image quality improvement processing by the above methods. Therefore, the operator can easily compare the OCTA images before and after the image quality improvement processing and can easily confirm changes caused by the processing. Accordingly, the operator can easily identify a blood vessel that does not actually exist but was depicted in the OCTA image by the image quality improvement processing, or a blood vessel that originally exists but has disappeared, and can easily determine the authenticity of the tissue depicted in the image.
  • FIG. 7 shows an example of a report screen when images before and after image quality improvement processing are displayed side by side.
  • an OCTA image 701 before the image quality improvement processing and an OCTA image 702 after the image quality improvement processing are displayed side by side.
	• in this case as well, the operator can easily compare the images before and after the image quality improvement processing and can easily confirm changes caused by the processing. Therefore, the operator can easily identify a blood vessel that does not actually exist but was depicted in the OCTA image by the image quality improvement processing, or a blood vessel that originally exists but has disappeared, and can easily determine the authenticity of the tissue depicted in the image.
	• further, the display control unit 250 can set the transparency of at least one of the images before and after the image quality improvement processing and cause the display unit 270 to display the two images superimposed on each other.
  • the image quality improving unit 224 may perform the image quality improving process using the learned model not only on the OCTA image but also on the tomographic image and the En-Face image of the luminance.
  • the tomographic image and the luminance En-Face image before superimposition are used as input data, and the tomographic image and the luminance En-Face image after superimposition are used as output data.
	• the trained model may be a single trained model trained using teacher data such as OCTA images or tomographic images, or may be a plurality of trained models, each trained for one type of image.
  • the image quality improvement unit 224 can use a learned model corresponding to the type of an image on which the image quality improvement processing is performed. Note that the image quality improvement unit 224 may perform image quality improvement processing using a learned model on a three-dimensional motion contrast image or a three-dimensional tomographic image, and learning data in this case can be prepared in the same manner as described above.
  • a tomographic image 711 before the image quality improvement processing and a tomographic image 712 after the image quality improvement processing are displayed side by side.
	• the display control unit 250 may also switch and display the tomographic images before and after the image quality improvement processing and the luminance En-Face images before and after the image quality improvement processing, in the same manner as the OCTA images before and after the image quality improvement processing described above.
	• further, the display control unit 250 may cause the display unit 270 to display the tomographic images or the luminance En-Face images before and after the image quality improvement processing superimposed on each other.
	• in any of these display modes, the operator can easily compare the images before and after the image quality improvement processing and can easily confirm changes caused by the processing. Therefore, the operator can easily identify a tissue that does not actually exist but was depicted in the image by the image quality improvement processing, or a tissue that originally exists but has disappeared, and can easily determine the authenticity of the tissue depicted in the image.
  • the control unit 200 includes the image quality improvement unit 224 and the display control unit 250.
  • the image quality improving unit 224 uses the learned model to generate, from the first image of the subject's eye, a second image in which at least one of noise reduction and contrast enhancement has been performed compared to the first image.
	• the display control unit 250 causes the display unit 270 to display the first image and the second image switchably, side by side, or superimposed on each other.
  • the display control unit 250 can switch between the first image and the second image and display the first image and the second image on the display unit 270 according to an instruction from the operator.
  • control unit 200 can generate a high-quality image in which noise is reduced or contrast is enhanced from the original image. For this reason, the control unit 200 can generate an image more suitable for image diagnosis than before, such as a clearer image, an image in which a site to be observed or a lesion is emphasized, and the like.
	• with this configuration, the operator can easily compare the images before and after the image quality improvement processing and can easily confirm changes caused by the processing. Therefore, the operator can easily identify a tissue that does not actually exist but was depicted in the image by the image quality improvement processing, or a tissue that originally exists but has disappeared, and can easily determine the authenticity of the tissue depicted in the image.
  • the superimposed image is used as the output data of the teacher data, but the teacher data is not limited to this.
  • a high-quality image obtained by performing a maximum posterior probability estimation process (MAP estimation process) on an original image group may be used as output data of teacher data.
	• in the MAP estimation process, a likelihood function is obtained from the probability density of each pixel value in a plurality of images, and the true signal value (pixel value) is estimated using the obtained likelihood function.
  • the high-quality image obtained by the MAP estimation process becomes a high-contrast image based on pixel values close to the true signal value.
	• by using such high-quality images as the output data of the teacher data, the learned model can generate, from the input image, a high-quality image suitable for image diagnosis in which noise is reduced or contrast is increased.
  • the method of generating the pair of the input data and the output data of the teacher data may be performed in the same manner as the case where the superimposed image is used as the teacher data.
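	• as an illustration only (not the embodiment's estimator), for the special case of a per-pixel Gaussian likelihood with a conjugate Gaussian prior, the posterior-probability-maximizing estimate of the true pixel value has a closed form:

```python
import numpy as np

def map_estimate(observations, prior_mean, prior_var, noise_var):
    """Per-pixel MAP estimate assuming each observed pixel value equals
    the true value plus Gaussian noise (variance `noise_var`), with a
    Gaussian prior (prior_mean, prior_var) on the true value.

    The posterior mode is a precision-weighted blend of the observation
    mean and the prior mean; with a very flat prior it approaches the
    plain sample mean of the observations.
    """
    stack = np.stack([np.asarray(o, dtype=np.float64) for o in observations])
    n = stack.shape[0]
    sample_mean = stack.mean(axis=0)
    post_precision = n / noise_var + 1.0 / prior_var
    return (n * sample_mean / noise_var + prior_mean / prior_var) / post_precision
```

	• real OCTA pixel noise is not Gaussian, so a practical MAP estimator would build the likelihood from the actual probability density described above; the Gaussian case is shown only because it makes the estimation step explicit.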
  • a high-quality image obtained by applying a smoothing filter process to the original image may be used as the output data of the teacher data.
  • the trained model can generate a high-quality image with reduced random noise from the input image.
  • an image obtained by applying a gradation conversion process to the original image may be used as the output data of the teacher data.
  • the learned model can generate a high-quality image in which contrast is enhanced from the input image. The method of generating the pair of the input data and the output data of the teacher data may be performed in the same manner as the case where the superimposed image is used as the teacher data.
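	• the two alternative target-generation methods above (a smoothing filter and a gradation conversion) can be sketched as follows; the mean filter and the linear contrast stretch are our illustrative choices among many possible filters and conversions:

```python
import numpy as np

def smooth_target(image, k=3):
    """k x k mean-filter smoothing (one possible smoothing filter)
    producing an output image with reduced random noise."""
    h, w = image.shape
    pad = k // 2
    padded = np.pad(image.astype(np.float64), pad, mode='edge')
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out

def stretch_target(image):
    """Linear gradation conversion (contrast stretch) to [0, 1],
    producing an output image with enhanced contrast."""
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
```

	• pairing each original image with such a processed version yields teacher data in the same way as the superimposed-image case.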
  • the input data of the teacher data may be an image acquired from an imaging device having the same image quality tendency as the OCT imaging unit 100.
	• the output data of the teacher data may be a high-quality image obtained by high-cost processing such as a successive approximation method, or may be a high-quality image obtained by photographing the subject corresponding to the input data with an imaging device having higher performance than the OCT imaging unit 100.
  • the output data may be a high-quality image acquired by performing a noise reduction process based on a rule based on the structure of the subject.
	• the noise reduction process may include, for example, a process of replacing a single high-luminance pixel that clearly appears within a low-luminance region with the average of the neighboring low-luminance pixel values.
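	• that rule-based replacement can be sketched as follows (the function name and the use of the 8-connected neighborhood are illustrative assumptions of ours):

```python
import numpy as np

def remove_isolated_bright_pixels(image, thresh):
    """Replace a single high-luminance pixel appearing inside an
    otherwise low-luminance 3x3 neighborhood with the mean of its
    8 neighbors, per the rule described above."""
    img = image.astype(np.float64)
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            block = img[y - 1:y + 2, x - 1:x + 2]
            neighbors = np.delete(block.ravel(), 4)  # drop the center pixel
            if img[y, x] > thresh and np.all(neighbors <= thresh):
                out[y, x] = neighbors.mean()
    return out
```

	• because the rule only fires when every neighbor is low-luminance, genuine bright structures spanning more than one pixel are left untouched.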
	• accordingly, the teacher data of the trained model may use, as output data, an image captured by an imaging device with higher performance than the imaging device used to capture the input image, or an image acquired by an imaging process that requires more man-hours than the imaging process of the input image.
	• in the present embodiment, the image quality improving unit 224 generates a high-quality image in which noise is reduced or contrast is enhanced using the learned model, but the image quality improvement processing by the image quality improving unit 224 is not limited to this.
  • the image quality improving unit 224 only needs to be able to generate an image having an image quality more suitable for image diagnosis as described above by the image quality improving process.
	• when the display control unit 250 causes the display unit 270 to display the images before and after the image quality improvement processing side by side, the display control unit 250 may, according to an instruction from the operator, enlarge and display either of the two images. More specifically, for example, when the operator selects the OCTA image 701 on the report screen 700 illustrated in FIG. 7, the display control unit 250 can enlarge and display the OCTA image 701 on the report screen 700. Likewise, when the operator selects the OCTA image 702 after the image quality improvement processing, the display control unit 250 can enlarge and display the OCTA image 702 on the report screen 700. In this case, the operator can observe the image of interest among the images before and after the image quality improvement processing in more detail.
	• further, when the generation range of the En-Face image is changed according to an instruction from the operator, the control unit 200 may change the images displayed side by side to an image based on the changed generation range and the corresponding high-quality image, and display them. More specifically, when the operator changes the generation range of the En-Face image via the input unit 260, the En-Face image generation unit 223 generates the En-Face image before the image quality improvement processing using the changed generation range.
  • the image quality improvement unit 224 generates a high-quality En-Face image from the En-Face image newly generated by the En-Face image generation unit 223 using the learned model.
  • the display control unit 250 changes the En-Face images before and after the image quality improvement processing displayed side by side on the display unit 270 to the newly generated En-Face images before and after the image quality improvement processing and displays the images.
	• Modification 1: As described above, in an image on which image quality improvement processing has been performed using a trained model, a tissue that does not actually exist may be depicted, or a tissue that originally exists may disappear. Therefore, an erroneous diagnosis may occur when the operator performs image diagnosis based on such an image. For this reason, when displaying the OCTA image, the tomographic image, or the like after the image quality improvement processing on the display unit 270, the display control unit 250 may display an indication that the image is one on which image quality improvement processing has been performed using the learned model. In this case, the occurrence of an erroneous diagnosis by the operator can be suppressed.
	• the display mode may be any mode as long as it allows the image to be understood as a high-quality image acquired using the learned model.
	• Modification 2: The first embodiment has described an example in which the image quality improvement processing is applied to an OCTA image, a tomographic image, or the like obtained by one imaging (examination). On the other hand, the image quality improvement processing using a learned model can also be applied to a plurality of OCTA images, tomographic images, and the like obtained by a plurality of imagings (examinations). Modification 2, in which the image quality improvement processing using a learned model is applied simultaneously to a plurality of displayed OCTA images, tomographic images, and the like, will be described with reference to FIGS. 8A and 8B.
  • FIGS. 8A and 8B show an example of a time-series report screen for displaying a plurality of OCTA images obtained by imaging the same subject's eye a plurality of times over time.
  • a plurality of OCTA images 801 before the image quality improvement processing is performed are displayed in chronological order.
  • the report screen 800 also includes a pop-up menu 820, and the operator can select whether or not to apply the image quality improvement processing by operating the pop-up menu 820 via the input unit 260.
	• when application of the image quality improvement processing is selected, the image quality improvement unit 224 applies the image quality improvement processing using the learned model to all the displayed OCTA images. Then, as illustrated in FIG. 8B, the display control unit 250 switches the display from the displayed plurality of OCTA images 801 to the plurality of OCTA images 802 after the image quality improvement processing.
	• similarly, the display control unit 250 can switch the display back from the plurality of OCTA images 802 after the image quality improvement processing to the plurality of OCTA images 801 before the image quality improvement processing.
  • an example has been described in which a plurality of OCTA images before and after image quality improvement processing using a learned model are simultaneously switched and displayed.
  • a plurality of tomographic images before and after the image quality improvement processing using the learned model, an En-Face image of luminance, and the like may be simultaneously switched and displayed.
	• the operation method is not limited to the method using the pop-up menu 820; any operation method may be employed, such as a button arranged on the report screen, a pull-down menu, a radio button, a check box, a keyboard, a mouse wheel, or a touch panel operation.
• Example 2 A trained model outputs output data that is likely to correspond to the input data, according to the tendency of its learning. In this regard, when a learned model performs learning using, as teacher data, a group of images having similar image quality tendencies, it can output higher-quality images more effectively for images having that similar tendency. Therefore, in the second embodiment, the image quality improvement processing is performed more effectively by using a plurality of learned models, each learned using teacher data composed of pair groups grouped by imaging conditions, such as the imaged region, and by the generation range of the En-Face image.
  • the OCT apparatus according to the present embodiment will be described with reference to FIGS.
• The configuration of the OCT apparatus according to the present embodiment is the same as that of the OCT apparatus 1 according to the first embodiment except for the control unit. Therefore, the same reference numerals are used for the same configurations, and their explanation is omitted.
  • the OCT apparatus according to the present embodiment will be described focusing on differences from the OCT apparatus 1 according to the first embodiment.
  • FIG. 9 shows a schematic configuration of the control unit 900 according to the present embodiment.
  • the configuration of the control unit 900 according to the present embodiment other than the image processing unit 920 and the selection unit 925 is the same as each configuration of the control unit 200 according to the first embodiment. Therefore, the same components as those shown in FIG. 2 are denoted by the same reference numerals, and description thereof is omitted.
  • the image processing unit 920 of the control unit 900 includes a selection unit 925 in addition to the tomographic image generation unit 221, the motion contrast generation unit 222, the En-Face image generation unit 223, and the image quality improvement unit 224.
• The selection unit 925 selects the learned model to be used by the image quality improvement unit 224 from among a plurality of learned models, based on the imaging conditions of the image to be subjected to the image quality improvement processing by the image quality improvement unit 224 and on the generation range of the En-Face image. The image quality improvement unit 224 performs the image quality improvement processing on the target OCTA image, tomographic image, or the like using the learned model selected by the selection unit 925, and generates a high-quality OCTA image or a high-quality tomographic image.
• As described above, a learned model outputs output data that is likely to correspond to the input data, according to the tendency of its learning. When a learned model performs learning using, as teacher data, a group of images having similar image quality tendencies, it can output higher-quality images more effectively for images having that similar tendency. Therefore, in the present embodiment, a plurality of learned models are prepared, each learned using teacher data composed of pair groups grouped by imaging conditions, including the imaged region, the imaging method, the imaging range, the imaging angle of view, the scan density, the image resolution, and the like, and by the generation range of the En-Face image.
• For example, a plurality of learned models are prepared, such as a learned model learned using, as teacher data, OCTA images whose imaged region is the macula, and a learned model learned using, as teacher data, OCTA images whose imaged region is the optic papilla. Note that the macula and the optic papilla are merely examples of the imaged region, and other imaged regions may be included.
• Further, a learned model may be prepared that is learned using, as teacher data, OCTA images of each specific imaging area within an imaged region such as the macula or the optic papilla.
• Similarly, learned models may be prepared, each learned with teacher data corresponding to the imaging angle of view and the scan density.
• Regarding the imaging method, there are methods such as SD-OCT and SS-OCT, and the image quality, the imaging range, and the depth penetration differ between them. For this reason, learned models may be prepared, each learned with teacher data corresponding to an imaging method.
• Further, it is rare to generate an OCTA image in which the blood vessels of all layers of the retina are extracted at once; it is more common to generate an OCTA image in which only the blood vessels existing in a predetermined depth range are extracted.
• For example, OCTA images in which blood vessels are extracted are generated for respective depth ranges, such as the shallow layer of the retina, the deep layer of the retina, the outer layer of the retina, and the shallow layer of the choroid. The form of the blood vessels depicted in an OCTA image varies greatly depending on the depth range.
  • a learned model may be prepared in which learning is performed for each teacher data according to the generation range of an En-Face image such as an OCTA image.
• Here, the example in which OCTA images are used as the teacher data has been described; however, when the image quality improvement processing is performed on tomographic images, luminance En-Face images, and the like, these images can be used as the teacher data. In this case, a plurality of learned models are prepared, each learned with teacher data corresponding to the imaging conditions of these images and to the generation range of the En-Face image.
  • FIG. 10 is a flowchart of a series of image processing according to the present embodiment. Note that a description of processing similar to a series of image processing according to the first embodiment will be appropriately omitted.
• In step S1001, the acquisition unit 210 acquires a plurality of pieces of three-dimensional tomographic information obtained by imaging the eye E a plurality of times.
  • the obtaining unit 210 may obtain the tomographic information of the eye E using the OCT imaging unit 100, or may obtain the tomographic information from the storage unit 240 or another device connected to the control unit 200.
• Next, the acquisition unit 210 acquires the imaging condition group related to the tomographic information. Specifically, the acquisition unit 210 can acquire imaging conditions, such as the imaged region and the imaging method, used when the tomographic information was captured. Note that, depending on the data format of the tomographic information, the acquisition unit 210 may acquire the imaging condition group stored in the data structure constituting the tomographic information data. Further, when the imaging conditions are not stored in the data structure of the tomographic information, the acquisition unit 210 can acquire the imaging condition group from a server or a database that stores a file describing the imaging conditions. In addition, the acquisition unit 210 may estimate the imaging condition group from an image based on the tomographic information by any known method.
• When images and data are acquired from the storage unit 240 or another device, the acquisition unit 210 likewise acquires the imaging condition group related to the acquired images and data. Note that the acquisition unit 210 does not necessarily need to acquire the entire imaging condition group of the tomographic image.
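The fallback chain for obtaining the imaging condition group (the data structure of the tomographic information first, then an external server or database) might be sketched as below; the record layout and the `fetch_from_server` callable are assumptions for illustration only:

```python
def get_imaging_conditions(tomo_record, fetch_from_server=None):
    """Return the imaging-condition group for a tomographic-information record.
    Order of preference, mirroring the text:
      1. conditions embedded in the record's own data structure;
      2. a lookup against an external server/database, if a fetcher is given;
      3. None, leaving estimation from the image itself to other code."""
    conditions = tomo_record.get("imaging_conditions")
    if conditions:
        return conditions
    if fetch_from_server is not None:
        return fetch_from_server(tomo_record.get("id"))
    return None
```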
  • Steps S1002 to S1004 are the same as steps S502 to S504 according to the first embodiment, and thus description thereof is omitted.
• When the En-Face image generation unit 223 generates an OCTA image in step S1004, the process proceeds to step S1005.
• In step S1005, the selection unit 925 selects the learned model to be used by the image quality improvement unit 224, based on the imaging condition group and the generation range of the generated OCTA image and on the information on the teacher data of the plurality of learned models. More specifically, for example, when the imaged region of the OCTA image is the optic papilla, the selection unit 925 selects a learned model learned using OCTA images of the optic papilla as teacher data. Further, for example, when the generation range of the OCTA image is the shallow layer of the retina, the selection unit 925 selects a learned model learned using, as teacher data, OCTA images whose generation range is the shallow layer of the retina.
• Note that, even if the imaging condition group and the generation range of the generated OCTA image do not completely match the information on the teacher data of the learned models, the selection unit 925 may select a learned model learned using, as teacher data, images having similar image quality tendencies.
• In this case, for example, the selection unit 925 may hold a table describing the correspondence between the imaging condition group and generation range of an OCTA image and the learned model to be used.
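Such a correspondence table could be sketched as a simple lookup keyed by the imaged region and the En-Face generation range; all model names and keys here are hypothetical, and the fallback entry stands in for choosing a model trained on images with similar quality tendencies:

```python
# Hypothetical correspondence table: (imaged region, generation range) -> model id.
MODEL_TABLE = {
    ("macula", "retina_shallow"):  "model_macula_shallow",
    ("macula", "retina_deep"):     "model_macula_deep",
    ("papilla", "retina_shallow"): "model_papilla_rpc",
}

def select_model(imaged_region, generation_range, table=MODEL_TABLE,
                 default="model_generic"):
    """Exact match on the imaging condition and En-Face generation range;
    fall back to a default model when no entry matches."""
    return table.get((imaged_region, generation_range), default)
```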
• In step S1006, the image quality improvement unit 224 performs the image quality improvement processing on the OCTA image generated in step S1004 using the learned model selected by the selection unit 925, and generates a high-quality OCTA image.
  • the method of generating a high-quality OCTA image is the same as that in step S505 according to the first embodiment, and a description thereof will not be repeated.
  • Step S1007 is the same as step S506 according to the first embodiment, and a description thereof will not be repeated.
• When the high-quality OCTA image is displayed on the display unit 270 in step S1007, the series of image processing according to the present embodiment ends.
• As described above, the control unit 900 according to the present embodiment includes the selection unit 925 that selects the learned model to be used by the image quality improvement unit 224 from among a plurality of learned models.
  • the selecting unit 925 selects a learned model used by the image quality improving unit 224 based on a range in the depth direction for generating an OCTA image to be subjected to the image quality improving process.
  • the selecting unit 925 can select a learned model based on a display region in an OCTA image to be subjected to image quality improvement processing and a range in a depth direction for generating the OCTA image.
• Alternatively, the selection unit 925 may select the learned model to be used by the image quality improvement unit 224 based on the imaged region including the display region in the OCTA image to be subjected to the image quality improvement processing and on the range in the depth direction for generating the OCTA image.
  • the selection unit 925 may select a learned model used by the image quality improvement unit 224 based on the imaging condition of the OCTA image to be subjected to the image quality improvement process.
• Since the control unit 900 performs the image quality improvement processing using a plurality of learned models, each learned using teacher data composed of pair groups grouped according to the imaging conditions and the generation range of the En-Face image, the image quality improvement processing can be performed more effectively.
• In the present embodiment, the selection unit 925 selects a learned model based on an imaging condition, such as the imaged region of the OCTA image, or on the generation range; however, the selection of the learned model may be based on other conditions.
• For example, the selection unit 925 may select a learned model based on the projection method (the maximum intensity projection method or the average intensity projection method) used when generating the OCTA image or the luminance En-Face image, or on the presence or absence of processing for removing artifacts caused by blood vessel shadows. In this case, learned models can be prepared, each learned with teacher data corresponding to the projection method and to the presence or absence of the artifact removal processing.
• In the present embodiment, the selection unit 925 automatically selects an appropriate learned model according to the imaging conditions, the generation range of the En-Face image, and the like.
  • the selection unit 925 may select a learned model according to an instruction of the operator.
  • the selection unit 925 may change the learned model and change the image quality improvement processing applied to the image in accordance with the instruction of the operator.
  • FIGS. 11A and 11B show an example of a report screen that switches and displays images before and after the image quality improvement processing.
• A report screen 1100 shown in FIG. 11A shows a tomographic image 1111 and an OCTA image 1101 to which the image quality improvement processing using an automatically selected learned model has been applied.
  • the report screen 1100 shown in FIG. 11B shows the tomographic image 1111 and the OCTA image 1102 to which the image quality improvement processing using the learned model according to the instruction of the operator has been applied.
  • the report screen 1100 shown in FIGS. 11A and 11B shows a process designation unit 1120 for changing the image quality improvement process applied to the OCTA image.
• The OCTA image 1101 displayed on the report screen 1100 shown in FIG. 11A depicts the deep capillaries (Deep Capillary) in the macula.
• On the other hand, suppose that the image quality improvement processing applied to the OCTA image, using the learned model automatically selected by the selection unit 925, is suited to the shallow-layer blood vessels (RPC) of the optic papilla. In this case, for the OCTA image 1101 displayed on the report screen 1100 shown in FIG. 11A, the applied image quality improvement processing is not optimal for the blood vessels extracted in the OCTA image.
  • the selection unit 925 changes the trained model used by the image quality improving unit 224 to a trained model that has been trained using the OCTA image relating to the deep blood vessels of the macula as teacher data in response to a selection instruction from the operator.
  • the image quality improvement unit 224 performs the image quality improvement process on the OCTA image again using the learned model changed by the selection unit 925.
  • the display control unit 250 causes the display unit 270 to display the high-quality OCTA image 1102 newly generated by the image quality improvement unit 224, as shown in FIG. 11B.
• In this manner, the operator can re-designate an appropriate image quality improvement processing for the same OCTA image. Further, this designation of the image quality improvement processing may be performed any number of times.
• Here, the control unit 900 is configured so that the image quality improvement processing applied to the OCTA image can be manually changed; the control unit 900 may also be configured so that the image quality improvement processing applied to the tomographic image, the luminance En-Face image, and the like can be manually changed.
• Note that the report screens shown in FIGS. 11A and 11B have a mode in which the images before and after the image quality improvement processing are switched and displayed; however, a report screen may instead display the images before and after the image quality improvement processing side by side or in an overlapping manner. Furthermore, the mode of the process designating unit 1120 is not limited to the modes shown in FIGS. 11A and 11B, and may be any mode that can designate the image quality improvement processing or the learned model. Further, the types of the image quality improvement processing shown in FIGS. 11A and 11B are examples, and other types of image quality improvement processing corresponding to the teacher data of the learned models may be included.
• Further, a plurality of images to which the image quality improvement processing has been applied may be displayed simultaneously. At this time, the apparatus may be configured so that which image quality improvement processing is applied can be designated. An example of the report screen in this case is shown in FIGS. 12A and 12B.
  • FIGS. 12A and 12B show an example of a report screen for switching and displaying a plurality of images before and after the image quality improvement processing.
• A plurality of OCTA images 1201 before the image quality improvement processing are shown on the report screen 1200 in FIG. 12A, and a plurality of OCTA images 1202 to which the image quality improvement processing according to the operator's instruction has been applied are shown on the report screen 1200 in FIG. 12B.
  • a report screen 1200 shown in FIGS. 12A and 12B shows a process designation unit 1220 for changing the image quality improvement process applied to the OCTA image.
  • the selection unit 925 selects a learned model corresponding to the image quality improvement processing instructed using the processing designation unit 1220 as a learned model used by the image quality improvement unit 224.
  • the image quality improvement unit 224 performs image quality improvement processing on the plurality of OCTA images 1201 using the learned model selected by the selection unit 925.
  • the display control unit 250 causes the generated plurality of high-quality OCTA images 1202 to be displayed on the report screen 1200 at a time as shown in FIG. 12B.
  • the learned model may be selected and changed in accordance with the operator's instruction with respect to the image quality improvement processing for the tomographic image, the luminance En-Face image, and the like.
  • a plurality of images before and after the image quality improvement processing may be displayed side by side on the report screen, or may be displayed in an overlapping manner. Also in this case, a plurality of images to which the image quality improvement processing has been applied according to the instruction from the operator can be displayed at a time.
• Example 3 In the first embodiment, the image quality improvement unit 224 automatically executes the image quality improvement processing after a tomographic image or an OCTA image is captured.
  • the image quality improvement processing performed by the image quality improvement unit 224 using the learned model may take a long time in some cases.
• Similarly, the generation of motion contrast data by the motion contrast generation unit 222 and the generation of an OCTA image by the En-Face image generation unit 223 require time. Therefore, if an image is displayed only after the image quality improvement processing is completed, it may take a long time from imaging to display.
• Moreover, imaging may fail due to blinking or an unintended movement of the subject's eye, so the convenience of the OCT apparatus can be improved by allowing the success or failure of imaging to be confirmed at an early stage. Therefore, in the third embodiment, the OCT apparatus is configured so that, prior to the generation and display of a high-quality OCTA image, a luminance En-Face image or an OCTA image based on the tomographic information obtained by imaging the subject's eye is displayed at an early stage, so that the captured image can be confirmed.
  • an OCT apparatus according to the present embodiment will be described with reference to FIG. Since the configuration of the OCT apparatus according to the present embodiment is the same as that of the OCT apparatus 1 according to the first embodiment, the same reference numerals are used and the description is omitted. Hereinafter, the OCT apparatus according to the present embodiment will be described focusing on differences from the OCT apparatus 1 according to the first embodiment.
  • FIG. 13 is a flowchart of a series of image processing according to the present embodiment.
• In step S1301, the acquisition unit 210 acquires a plurality of pieces of three-dimensional tomographic information by imaging the eye E with the OCT imaging unit 100.
  • Step S1302 is the same as step S502 according to the first embodiment, and a description thereof will not be repeated.
• After the processing in step S1302, the process proceeds to step S1303.
• In step S1303, the En-Face image generation unit 223 generates a front image of the fundus (a luminance En-Face image) by projecting the three-dimensional tomographic image generated in step S1302 onto a two-dimensional plane.
• In step S1304, the display control unit 250 causes the display unit 270 to display the generated luminance En-Face image.
  • Steps S1305 and S1306 are the same as steps S503 and S504 according to the first embodiment, and a description thereof will not be repeated.
• When the OCTA image is generated in step S1306, the process proceeds to step S1307. In step S1307, the display control unit 250 switches the displayed luminance En-Face image to the OCTA image (before the image quality improvement processing) generated in step S1306 and causes the display unit 270 to display it.
• In step S1308, as in step S505 according to the first embodiment, the image quality improvement unit 224 performs the image quality improvement processing on the OCTA image generated in step S1306 using the learned model, and generates a high-quality OCTA image. Then, the display control unit 250 switches the displayed OCTA image before the image quality improvement processing to the generated high-quality OCTA image and causes the display unit 270 to display it.
• As described above, in the control unit 200 according to the present embodiment, before displaying the OCTA image, the display control unit 250 causes the display unit 270 to display the luminance En-Face image (a third image), which is a front image generated based on tomographic data including data in the depth direction of the subject's eye. The display control unit 250 then switches the displayed luminance En-Face image to the OCTA image and causes the display unit 270 to display it, and further switches the displayed OCTA image to the high-quality OCTA image and causes the display unit 270 to display it. Because the luminance En-Face image and the OCTA image are displayed before the high-quality OCTA image is ready, the operator can determine the success or failure of the imaging at an early stage.
• In the present embodiment, the motion contrast data generation processing (step S1305) is started after the luminance En-Face image display processing (step S1304); however, the timing of the motion contrast data generation processing is not limited to this.
  • the motion contrast generation unit 222 may start the generation process of the motion contrast data in parallel with the generation process (Step S1303) and the display process (Step S1304) of the luminance En-Face image.
  • the image quality improving unit 224 may start the image quality improving process (Step S1308) in parallel with the OCTA image display process (Step S1307).
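The early-display flow, with the slower OCTA generation and learned-model processing running in parallel with display, might be sketched as follows; all the callables are placeholder stand-ins for the generation, improvement, and display units described above:

```python
import threading
import queue

def pipeline(tomo_data, make_enface, make_octa, improve, show):
    """Show the luminance En-Face image immediately so the operator can
    judge imaging success early, while OCTA generation and the (slow)
    learned-model improvement run in a background thread. Each result is
    displayed as soon as it becomes available."""
    show("enface", make_enface(tomo_data))

    results = queue.Queue()

    def worker():
        octa = make_octa(tomo_data)              # motion contrast + OCTA generation
        results.put(("octa", octa))
        results.put(("octa_hq", improve(octa)))  # learned-model processing

    t = threading.Thread(target=worker)
    t.start()
    for _ in range(2):            # display OCTA, then the improved OCTA
        kind, img = results.get()  # blocks until each stage finishes
        show(kind, img)
    t.join()
```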
• Example 4 In the first embodiment, the example in which the OCTA images before and after the image quality improvement processing are switched and displayed has been described. In the fourth embodiment, the images before and after the image quality improvement processing are compared, and the display is controlled based on the comparison result.
  • the OCT apparatus according to the present embodiment will be described with reference to FIGS.
• The configuration of the OCT apparatus according to the present embodiment is the same as that of the OCT apparatus 1 according to the first embodiment except for the control unit. Therefore, the same reference numerals are used for the same configurations, and their explanation is omitted.
  • the OCT apparatus according to the present embodiment will be described focusing on differences from the OCT apparatus 1 according to the first embodiment.
  • FIG. 14 shows a schematic configuration of the control unit 1400 according to the present embodiment.
  • the configuration of the control unit 1400 according to the present embodiment other than the image processing unit 1420 and the comparison unit 1426 is the same as each configuration of the control unit 200 according to the first embodiment. Therefore, the same components as those shown in FIG. 2 are denoted by the same reference numerals, and description thereof is omitted.
  • the image processing unit 1420 of the control unit 1400 is provided with a comparing unit 1426 in addition to the tomographic image generating unit 221, the motion contrast generating unit 222, the En-Face image generating unit 223, and the image quality improving unit 224.
• The comparison unit 1426 compares the images before and after the image quality improvement processing and generates a color map image colored according to the magnitude of the difference value between them. For example, a warm color tone (yellow, orange, red) is used for pixels whose value in the image after the image quality improvement processing is larger than in the image before, and a cool color tone (yellow-green, green, blue) is used for pixels whose value is smaller. With such a color scheme, a portion shown in a warm color on the color map image can easily be identified as tissue restored (or newly created) by the image quality improvement processing, and a portion shown in a cool color can easily be identified as noise removed (or tissue erased) by the image quality improvement processing.
• Note that the above color arrangement of the color map image is an example. The color scheme of the color map image may be set arbitrarily according to the desired configuration, for example, by using different color tones according to the magnitude of the pixel value in the image after the image quality improvement processing relative to the pixel value in the image before the processing.
  • the display control unit 250 can display the color map image generated by the comparison unit 1426 on the display unit 270 by superimposing the color map image on the image before the image quality improvement processing or the image after the image quality improvement processing.
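A minimal sketch of the warm/cool color map described above, assuming 8-bit grayscale images and a simple red-for-increase, blue-for-decrease mapping (the actual color tones are a design choice, as the text notes):

```python
import numpy as np

def make_colormap(before, after):
    """Color-code the pixel-value difference (after minus before):
    warm (red channel) where the improved image is brighter, suggesting
    restored or newly created tissue; cool (blue channel) where it is
    darker, suggesting removed noise or erased tissue."""
    diff = after.astype(np.int32) - before.astype(np.int32)
    h, w = diff.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    mag = np.clip(np.abs(diff), 0, 255).astype(np.uint8)
    rgb[..., 0] = np.where(diff > 0, mag, 0)  # warm: value increased
    rgb[..., 2] = np.where(diff < 0, mag, 0)  # cool: value decreased
    return rgb
```

The returned RGB image can then be alpha-blended over the before or after image so that the underlying image remains visible, as the display control unit does with the transparency setting.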
  • Steps S1501 to S1505 are the same as steps S501 to S505 according to the first embodiment, and a description thereof will not be repeated. If a high-quality OCTA image is generated by the image quality improving unit 224 in step S1505, the process proceeds to step S1506.
• In step S1506, the comparison unit 1426 compares the OCTA image generated in step S1504 with the high-quality OCTA image generated in step S1505 to calculate the difference between their pixel values, and generates a color map image based on that difference. Note that, instead of the difference between the pixel values of the images before and after the image quality improvement processing, the comparison unit 1426 may compare the images using another method, such as the ratio of the pixel values or a correlation value between the images, and may generate the color map image based on that result.
• In step S1507, the display control unit 250 causes the display unit 270 to display the color map image superimposed on the image before the image quality improvement processing or on the image after the image quality improvement processing.
• At this time, the display control unit 250 can set the transparency of the color map image and superimpose it on the target image so as not to hide the image on which it is superimposed.
  • the control unit 1400 includes the comparison unit 1426 that compares the first image with the second image on which the image quality improvement processing has been performed.
  • the comparing unit 1426 calculates a difference between the first image and the second image, and generates a color map image that is color-coded based on the difference.
  • the display control unit 250 controls the display on the display unit 270 based on the comparison result by the comparison unit 1426. More specifically, the display control unit 250 causes the display unit 270 to display a color map image superimposed on the first image or the second image.
• According to such a configuration, even if tissue that does not actually exist appears in the image as a result of the image quality improvement processing, or tissue that originally existed disappears from it, the operator can identify this more easily and can more easily determine the authenticity of the tissue.
• In particular, according to the color arrangement of the color map image, the operator can easily identify whether a portion was newly drawn or erased by the image quality improvement processing.
  • the display control unit 250 can enable or disable the superimposed display of the color map image in accordance with an instruction from the operator.
  • the on / off operation of the superimposed display of the color map image may be simultaneously applied to a plurality of images displayed on the display unit 270.
• Note that, when a plurality of images are displayed, the comparison unit 1426 generates a color map image for each pair of images before and after the image quality improvement processing, and the display control unit 250 can superimpose each color map image on the corresponding image before or after the image quality improvement processing.
  • the display control unit 250 may cause the display unit 270 to display an image before the image quality improvement processing or an image after the image quality improvement processing before displaying the color map image.
• In the present embodiment, an OCTA image has been described as an example; however, similar processing can be performed when the image quality improvement processing is performed on a tomographic image, a luminance En-Face image, or the like.
  • the comparison processing and the color map display processing according to the present embodiment can be applied to the OCT apparatuses according to the second and third embodiments.
• Further, the comparison unit 1426 may compare the images before and after the image quality improvement processing, and the display control unit 250 may display a warning on the display unit 270 according to the comparison result. More specifically, when the difference between the pixel values of the images before and after the image quality improvement processing calculated by the comparison unit 1426 is larger than a predetermined value, the display control unit 250 causes the display unit 270 to display a warning. According to such a configuration, if tissue that does not actually exist has been generated in the high-quality image by the learned model, or tissue that originally existed has been erased, the operator can be alerted. Note that the comparison between the difference and the predetermined value may be performed by the comparison unit 1426 or by the display control unit 250. Further, instead of the difference itself, a statistical value such as the average of the difference may be compared with the predetermined value.
• Further, when the difference is larger than the predetermined value, the display control unit 250 may refrain from displaying the image after the image quality improvement processing on the display unit 270. According to such a configuration, if tissue that does not actually exist is generated by the learned model or tissue that originally exists is erased, erroneous diagnosis based on the generated high-quality image can be suppressed.
  • the comparison between the difference and the predetermined value may be performed by the comparing unit 1426 or may be performed by the display control unit 250. Further, instead of the difference, a statistical value such as an average value of the difference may be compared with a predetermined value.
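The threshold check on a statistic of the difference, used to decide whether to warn the operator or suppress display of the improved image, could be sketched as follows; the mean absolute difference and the threshold value are illustrative stand-ins for the "predetermined value" in the text:

```python
import numpy as np

def check_quality_result(before, after, threshold=30.0):
    """Decide how to handle the improved image based on a statistic of the
    pixel-value difference between the images before and after processing."""
    diff = np.abs(after.astype(np.float32) - before.astype(np.float32))
    score = float(diff.mean())
    large_change = score > threshold
    return {
        "warn": large_change,                  # alert the operator
        "display_improved": not large_change,  # optionally suppress display
        "score": score,
    }
```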
  • an image processing apparatus (control unit 200) according to a fifth embodiment will be described with reference to FIGS. 20A and 20B.
• The display control unit 250 displays the processing result of the image quality improvement unit 224 on the display unit 270; however, the display screen is not limited to this.
  • the image quality improvement processing can be similarly applied to a display screen in which a plurality of images obtained at different dates and times are displayed side by side.
  • the image quality improvement processing can be similarly applied to a display screen in which the examiner confirms the success or failure of imaging immediately after imaging, such as an imaging confirmation screen.
• The display control unit 250 can cause the display unit 270 to display the plurality of high-quality images generated by the image quality improvement unit 224 as well as the low-quality images that have not been subjected to the image quality improvement. With this configuration, a low-quality image or a high-quality image can be output according to the instruction of the examiner.
  • Reference numeral 3400 denotes the entire screen, 3401 a patient tab, 3402 an imaging tab, 3403 a report tab, and 3404 a setting tab.
  • a hatched line in the report tab 3403 indicates an active state of the report screen.
  • Im3405 displays an SLO image, and Im3406 displays the OCTA En-Face image Im3407 superimposed on the SLO image Im3405.
  • the SLO image is a front image of the fundus oculi acquired by an SLO (Scanning Laser Ophthalmoscope) optical system (not shown).
  • Im3407 and Im3408 denote OCTA En-Face images, Im3409 denotes a luminance En-Face image, and Im3411 and Im3412 denote tomographic images.
  • Reference numerals 3413 and 3414 denote the upper and lower boundaries of the depth ranges of the OCTA En-Face images Im3407 and Im3408, respectively, displayed superimposed on the tomographic images.
  • Button 3420 is a button for designating execution of the image quality improvement processing. Of course, as described later, the button 3420 may be a button for instructing display of a high-quality image.
  • Execution of the image quality improvement processing is either triggered by designating the button 3420, or determined based on information stored in the database.
  • the button 3420 is designated in accordance with an instruction from the examiner to switch between the display of a high-quality image and the display of a low-quality image.
  • the target image of the image quality improvement processing will be described as an OCTA En-Face image.
  • When the examiner designates the report tab 3403 and the screen transitions to the report screen, the low-quality OCTA En-Face images Im3407 and Im3408 are displayed. Thereafter, when the examiner specifies the button 3420, the image quality improving unit 224 executes the image quality improvement processing on the images Im3407 and Im3408 displayed on the screen. After the processing is completed, the display control unit 250 displays the high-quality image generated by the image quality improving unit 224 on the report screen. Since Im3406 is obtained by superimposing Im3407 on the SLO image Im3405, Im3406 also displays the image subjected to the image quality improvement processing. The display of the button 3420 is then changed to the active state, giving a display that indicates that the image quality improvement processing has been executed.
  • The execution of the processing in the image quality improving unit 224 need not be limited to the timing at which the examiner designates the button 3420. Since the types of the OCTA En-Face images Im3407 and Im3408 to be displayed when the report screen is opened are known in advance, the image quality improvement processing may instead be executed when the screen transitions to the report screen, and the display control unit 250 may display the high-quality image on the report screen at the timing when the button 3420 is pressed. Furthermore, the number of image types subjected to the image quality improvement processing in response to an instruction from the examiner, or upon transition to the report screen, need not be two; the processing may also be performed on OCTA En-Face images such as the surface (Im2910), deep (Im2920), outer (Im2930), and choroidal vascular network (Im2940) images shown in FIGS. 19A and 19B.
  • the image on which the image quality improvement processing has been performed may be temporarily stored in a memory or may be stored in a database.
  • The image quality improvement processing may also be executed based on the information stored (recorded) in the database.
  • a high quality image obtained by performing the image quality improvement process is displayed by default when the report screen is displayed.
  • In this case, the button 3420 is displayed in the active state by default, so that the examiner can recognize that a high-quality image obtained by executing the image quality improvement processing is being displayed. If the examiner wants to see the low-quality image from before the processing, the examiner can display it by specifying the button 3420 to release the active state; to return to the high-quality image, the examiner specifies the button 3420 again.
  • Whether the image quality improvement processing is executed can be specified in the database at several levels, for example, commonly for all the data stored in the database, or for each piece of imaging data (each examination). For example, when a state in which the image quality improvement processing is executed has been stored for the entire database, the examiner may store, for an individual piece of imaging data (an individual examination), a state in which the processing is not executed. In that case, the next time that imaging data is displayed, it is displayed without the image quality improvement processing.
  • a user interface (not shown) (for example, a save button) may be used to save the execution state of the image quality improvement processing for each piece of imaging data (for each inspection).
  • Further, based on the display state (for example, the state of the button 3420), the state in which the image quality improvement processing is executed may be stored. Accordingly, when the presence or absence of execution of the image quality improvement processing is not specified per imaging data unit (examination unit), processing is performed based on the information specified for the entire database; when it is specified per imaging data unit (examination unit), processing can be executed individually based on that information.
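The precedence just described, where a per-examination setting overrides the database-wide default, can be sketched in a few lines of Python; the function and variable names are hypothetical:

```python
def quality_improvement_enabled(database_default, exam_settings, exam_id):
    """Resolve whether to run image quality improvement for one examination.

    A per-examination setting, when present, overrides the database-wide
    default; otherwise the database-wide default applies.
    """
    per_exam = exam_settings.get(exam_id)  # True, False, or None (unspecified)
    return database_default if per_exam is None else per_exam

settings = {"exam-002": False}  # the examiner disabled processing for this exam
print(quality_improvement_enabled(True, settings, "exam-001"))  # True (database default)
print(quality_improvement_enabled(True, settings, "exam-002"))  # False (per-exam override)
```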
  • Although Im3407 and Im3408 are displayed here as the OCTA En-Face images, the displayed OCTA En-Face images can be changed by the examiner's designation. Therefore, the change of images when execution of the image quality improvement processing is designated (the button 3420 in the active state) will be described.
  • the image is changed using a user interface (not shown) (for example, a combo box).
  • For example, when the display is changed to the choroidal vascular network image, the image quality improving unit 224 performs the image quality improvement processing on the choroidal vascular network image, and the display control unit 250 displays the high-quality image generated by the image quality improving unit 224 on the report screen. That is, in response to an instruction from the examiner, the display control unit 250 may change the display of a high-quality image in a first depth range to the display of a high-quality image in a second depth range that is at least partially different from the first depth range.
  • In this case, by changing the first depth range to the second depth range in response to an instruction from the examiner, the display control unit 250 may change the display of the high-quality image in the first depth range to the display of a high-quality image in the second depth range. As described above, if a high-quality image has already been generated for an image that is likely to be displayed upon transition to the report screen, the display control unit 250 may simply display that generated high-quality image.
  • The method of changing the type of image is not limited to the method described above; it is also possible to generate an OCTA En-Face image with a different depth range by changing a reference layer and an offset value.
  • In that case, the image quality improving unit 224 performs the image quality improvement processing on the arbitrary OCTA En-Face image, and the display control unit 250 displays the resulting high-quality image on the report screen.
  • The reference layer and the offset value can be changed using a user interface (not shown) (for example, a combo box or a text box). The generation range of the OCTA En-Face image can also be changed by dragging (moving the layer boundary of) either of the boundaries 3413 and 3414 superimposed on the tomographic images Im3411 and Im3412.
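Generating an En-Face image from a depth range defined by layer boundaries plus offsets might, in spirit, reduce to a projection like the following Python sketch. The function name, the mean projection, and the pixel-unit offsets are illustrative assumptions; the patent does not fix the projection method here:

```python
import numpy as np

def octa_enface(volume, upper, lower, offset_upper=0, offset_lower=0):
    """Average a motion-contrast volume between two layer boundaries.

    volume: (Z, Y, X) motion-contrast data; upper/lower: (Y, X) arrays of
    boundary depths in pixels; the offsets shift each boundary, mimicking
    the reference-layer and offset-value controls.
    """
    z, h, w = volume.shape
    enface = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            top = max(0, int(upper[y, x]) + offset_upper)
            bottom = min(z, int(lower[y, x]) + offset_lower)
            if bottom > top:  # average the voxels between the two boundaries
                enface[y, x] = volume[top:bottom, y, x].mean()
    return enface

# A 4-deep, 1x1 volume projected between depths 0 and 2 averages voxels 0 and 1.
vol = np.arange(4.0).reshape(4, 1, 1)
print(octa_enface(vol, np.zeros((1, 1)), np.full((1, 1), 2))[0, 0])  # 0.5
```

Dragging a boundary in the UI would correspond to changing the boundary arrays (or the offsets) and regenerating the projection.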
  • While the boundary is being changed by dragging, the image quality improving unit 224 may process every execution command, or may execute the command only after the layer boundary has finished being changed by dragging. Alternatively, although execution of the image quality improvement processing is instructed continuously, each new instruction may cancel the previous one so that only the latest instruction is executed.
  • Here, the image quality improvement processing may take a relatively long time, so at whichever of the timings described above the command is executed, it may take a relatively long time until the high-quality image is displayed. Therefore, from when a depth range for generating an OCTA En-Face image is set in response to an instruction from the examiner until the high-quality image is displayed, the OCTA En-Face image (low-quality image) corresponding to the set depth range may be displayed. That is, when the depth range is set, the OCTA En-Face image (low-quality image) corresponding to the set depth range is displayed, and when the image quality improvement processing ends, the display of that OCTA En-Face image (low-quality image) may be changed to the display of the high-quality image.
  • Further, information indicating that the image quality improvement processing is being executed may be displayed from when the depth range is set until the high-quality image is displayed. Note that the above is not limited to the case where execution of the image quality improvement processing has already been designated (the button 3420 in the active state); it is also applicable, for example, from when execution of the image quality improvement processing is instructed in response to an instruction from the examiner until the high-quality image is displayed.
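The interim-display behavior described above, showing the low-quality projection at once and swapping in the high-quality result when inference finishes, can be sketched as follows; all names are invented and the callbacks stand in for the actual display and inference paths:

```python
def display_enface(depth_range, generate, improve, show):
    """Show the freshly projected low-quality En-Face image immediately,
    with a note that improvement is running, then swap in the result."""
    low = generate(depth_range)                         # fast projection
    show(low, "image quality improvement in progress...")
    high = improve(low)                                 # slow learned-model inference
    show(high, None)                                    # replace with the high-quality image

shown = []
display_enface(
    (0, 2),
    generate=lambda r: f"low{r}",
    improve=lambda img: img.replace("low", "high"),
    show=lambda img, note: shown.append((img, note)),
)
print(shown)  # low image with a progress note first, then the high-quality image
```

In a real UI the `improve` step would run asynchronously so the screen stays responsive; the sketch keeps it sequential for clarity.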
  • In the present embodiment, an example has been shown in which different layers are displayed as the OCTA En-Face images Im3407 and Im3408, and the low-quality and high-quality images are switched and displayed, but the present invention is not limited to this. For example, a low-quality OCTA En-Face image may be displayed as Im3407 and a high-quality OCTA En-Face image as Im3408, side by side. When the images are switched, they are exchanged in the same place, which makes it easy to compare portions that have changed; when they are displayed side by side, both can be seen at the same time, which makes it easy to compare the images as a whole.
  • FIG. 20B is a screen example in which the OCTA En-Face image Im3407 of FIG. 20A is enlarged and displayed.
  • a button 3420 is displayed as in FIG. 20A.
  • The screen transition from FIG. 20A to FIG. 20B is performed, for example, by double-clicking the OCTA En-Face image Im3407, and the screen transitions from FIG. 20B back to FIG. 20A.
  • the screen transition is not limited to the method shown here, and a user interface (not shown) may be used.
  • When execution of the image quality improvement processing has been designated (the button 3420 active), that state is maintained across the screen transition. That is, when the screen transitions to FIG. 20B while the high-quality image is displayed on the screen of FIG. 20A, the high-quality image is also displayed on the screen of FIG. 20B, and the button 3420 is placed in the active state. The same applies to the transition from FIG. 20B to FIG. 20A. On the screen of FIG. 20B, the display can be switched to the low-quality image by designating the button 3420.
  • In the screen transition, the display state of the high-quality image is maintained and carried over as is. That is, an image corresponding to the state of the button 3420 on the display screen before the transition is displayed on the display screen after the transition. For example, if the button 3420 on the display screen before the transition is in the active state, a high-quality image is displayed on the display screen after the transition; if the active state of the button 3420 has been released, a low-quality image is displayed.
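The carry-over of the button state across screen transitions can be sketched as below; the class and method names are invented for illustration and do not correspond to identifiers in the patent:

```python
class ReportView:
    """Minimal sketch of carrying the button-3420 state across screen
    transitions: the new screen shows the image matching the prior state."""

    def __init__(self, low, high):
        self.low, self.high = low, high
        self.button_active = False  # button 3420 inactive by default

    def toggle_button(self):
        self.button_active = not self.button_active

    def displayed_image(self):
        return self.high if self.button_active else self.low

    def transition_to(self, other):
        other.button_active = self.button_active  # state maintained across transition
        return other.displayed_image()

overview = ReportView("low-A", "high-A")   # screen of FIG. 20A
enlarged = ReportView("low-B", "high-B")   # screen of FIG. 20B
overview.toggle_button()                   # examiner activates button 3420
print(overview.displayed_image())          # high-A
print(overview.transition_to(enlarged))    # high-B: high-quality display is kept
```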
  • When the button 3420 on the display screen for follow-up observation is activated, the plurality of images obtained at different dates and times (different examination dates) and displayed side by side on that screen may be switched to high-quality images. That is, activation of the button 3420 on the display screen for follow-up observation may be collectively reflected on the plurality of images obtained at different dates and times.
  • FIG. 18 shows an example of a display screen for follow-up observation.
  • The examiner can change the depth range of the En-Face images by selecting from predetermined depth range sets (3802 and 3803) displayed in list boxes.
  • For example, the retinal surface layer is selected in the list box 3802, and the deep retinal layer is selected in the list box 3803.
  • The analysis result of the En-Face image of the retinal surface layer is displayed in the upper display area, and the analysis result of the En-Face image of the deep retinal layer is displayed in the lower display area. That is, when a depth range is selected, the display of the analysis results of the plurality of En-Face images in the selected depth range is changed simultaneously for the plurality of images at different dates and times.
  • When the display of the analysis result is set to the non-selected state, the display may be changed to a side-by-side display of the plurality of En-Face images at different dates and times.
  • Then, when the button 3420 is designated in response to an instruction from the examiner, the display of the plurality of En-Face images is collectively changed to the display of a plurality of high-quality images.
  • When the display of the analysis result is in the selected state and the button 3420 is designated in response to an instruction from the examiner, the display of the analysis results of the plurality of En-Face images is collectively changed to the display of the analysis results of the plurality of high-quality images.
  • Here, the display of the analysis result may be a display in which the analysis result is superimposed on the image with arbitrary transparency. That is, the change to the display of the analysis result may be, for example, a change to a state in which the analysis result is superimposed on the displayed image with arbitrary transparency, or a change to the display of an image (for example, a two-dimensional map) obtained by blending the analysis result and the image with arbitrary transparency.
  • the type and offset position of the layer boundary used to specify the depth range can be changed collectively from a user interface such as 3805 and 3806.
  • Further, the tomographic images may be displayed together, and by moving the layer boundary data superimposed on the tomographic images in accordance with an instruction from the examiner, the depth ranges of the plurality of En-Face images at different dates and times may be changed collectively. At this time, the layer boundary data may be moved similarly on the other tomographic images.
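The collective depth-range change across examination dates might, in spirit, reduce to a loop like this sketch; the data layout and boundary labels are hypothetical:

```python
def set_depth_range_for_all(exams, upper, lower):
    """Apply one depth-range change collectively to every examination date,
    as when a layer boundary is moved on the follow-up observation screen."""
    for exam in exams:
        exam["depth_range"] = (upper, lower)  # same range for every date
    return exams

exams = [{"date": "2018-01-10"}, {"date": "2018-06-12"}, {"date": "2019-01-08"}]
set_depth_range_for_all(exams, "ILM", "GCL+20")  # boundary names are illustrative
print(all(e["depth_range"] == ("ILM", "GCL+20") for e in exams))  # True
```

Each examination's En-Face image would then be regenerated (and, if designated, re-improved) from its own volume using the shared range.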
  • The image projection method and the presence or absence of the projection artifact suppression processing may be changed by selecting them from a user interface such as a context menu.
  • the selection screen may be displayed by selecting the selection button 3807, and the image selected from the image list displayed on the selection screen may be displayed.
  • An arrow 3804 displayed at the top of FIG. 18 is a mark indicating the currently selected examination, and the reference examination (Baseline) is the examination selected at the time of follow-up imaging (the leftmost image in FIG. 18).
  • a mark indicating the reference inspection may be displayed on the display unit.
  • In addition, a measurement value distribution (a map or a sector map) for the reference image may be displayed on the reference image. Further, in this case, a difference measurement value map between the measurement value distribution calculated for the reference image and the measurement value distribution calculated for the image displayed in each area is displayed in the areas corresponding to the other examination dates.
  • As the measurement result over time, a trend graph (a graph of the measurement values for the images on each examination date obtained by measurement over time) may be displayed on the report screen. That is, time-series data (for example, a time-series graph) of a plurality of analysis results corresponding to the plurality of images at different dates and times may be displayed.
  • At this time, analysis results for dates and times other than those corresponding to the plurality of displayed images may also be shown, in a state distinguishable from the plurality of analysis results corresponding to the plurality of displayed images (for example, the color of each point on the time-series graph differs depending on whether the corresponding image is displayed or not). Further, a regression line (curve) of the trend graph and a corresponding mathematical expression may be displayed on the report screen.
  • An image related to processing such as display, image quality improvement, and image analysis according to the present embodiment may be an En-Face image of luminance. Further, not only the En-Face image but also a different image such as a tomographic image, an SLO image, a fundus photograph, or a fluorescent fundus photograph may be used.
  • The user interface for executing the image quality improvement processing may be one that instructs execution of the processing collectively for a plurality of images of different types, or one that instructs execution of the processing for an arbitrary image selected from the plurality of images of different types.
  • the display control unit 250 can display the image processed by the image quality improving unit 224 according to the present embodiment on the display unit 270.
  • As described above, the display screen may transition with the selected state maintained, and the state in which at least one condition is selected may be maintained even when another condition is changed to the selected state.
  • The display control unit 250 may change the display of the analysis result of the low-quality image to the display of the analysis result of the high-quality image in response to an instruction from the examiner (for example, when the button 3420 is designated).
  • Further, the display control unit 250 may change the display of the analysis result of the high-quality image to the display of the analysis result of the low-quality image in response to an instruction from the examiner (for example, when the designation of the button 3420 is released).
  • Further, when the display of the high-quality image is in the non-selected state, the display control unit 250 may change the display of the analysis result of the low-quality image to the display of the low-quality image in response to an instruction from the examiner (for example, when the display of the analysis result is released). Further, when the display of the high-quality image is in the non-selected state, the display control unit 250 may change the display of the low-quality image to the display of the analysis result of the low-quality image in response to an instruction from the examiner (for example, when the display of the analysis result is designated).
  • Further, when the display of the high-quality image is in the selected state, the display control unit 250 may change the display of the analysis result of the high-quality image to the display of the high-quality image in response to an instruction from the examiner (for example, when the display of the analysis result is released). Further, when the display of the high-quality image is in the selected state, the display control unit 250 may change the display of the high-quality image to the display of the analysis result of the high-quality image in response to an instruction from the examiner (for example, when the display of the analysis result is designated).
  • Consider the case where the display of the high-quality image is in the non-selected state and the display of a first type of analysis result is in the selected state. In this case, the display control unit 250 may change the display of the first type of analysis result of the low-quality image to the display of a second type of analysis result of the low-quality image in response to an instruction from the examiner (for example, when the display of the second type of analysis result is designated). Also consider the case where the display of the high-quality image is in the selected state and the display of the first type of analysis result is in the selected state. In this case, the display control unit 250 may change the display of the first type of analysis result of the high-quality image to the display of the second type of analysis result of the high-quality image in response to an instruction from the examiner (for example, when the display of the second type of analysis result is designated).
  • the display screen for follow-up observation may be configured so that these display changes are collectively reflected on a plurality of images obtained at different dates and times.
  • Here, the display of the analysis result may be a display in which the analysis result is superimposed on the image with arbitrary transparency. That is, the change to the display of the analysis result may be, for example, a change to a state in which the analysis result is superimposed on the displayed image with arbitrary transparency, or a change to the display of an image (for example, a two-dimensional map) obtained by blending the analysis result and the image with arbitrary transparency.
  • As described above, the display control unit 250 can display, on the display unit 270, the image selected in accordance with the instruction from the examiner, out of the high-quality image generated by the image quality improving unit 224 and the input image.
  • the display control unit 250 may switch the display on the display unit 270 from a captured image (input image) to a high-quality image according to an instruction from the examiner. That is, the display control unit 250 may change the display of the low-quality image to the display of the high-quality image according to an instruction from the examiner.
  • the display control unit 250 may change the display of the high-quality image to the display of the low-quality image according to an instruction from the examiner.
  • the image quality improvement unit 224 starts the image quality improvement processing (input of an image to the image quality improvement engine) by the image quality improvement engine (learned model for the image quality improvement) in response to an instruction from the examiner. Then, the display control unit 250 may cause the display unit 270 to display the high-quality image generated by the image quality improving unit 224.
  • The image quality improvement engine may also automatically generate a high-quality image based on the input image, and the display control unit 250 may display the high-quality image on the display unit 270 in accordance with an instruction from the examiner.
  • Here, the image quality improvement engine includes a learned model that performs the image quality improvement processing described above.
  • the display control unit 250 may change the display of the analysis result of the low-quality image to the display of the analysis result of the high-quality image according to an instruction from the examiner. Further, the display control unit 250 may change the display of the analysis result of the high-quality image to the display of the analysis result of the low-quality image according to an instruction from the examiner. Of course, the display control unit 250 may change the display of the analysis result of the low-quality image to the display of the low-quality image according to an instruction from the examiner. In addition, the display control unit 250 may change the display of the low-quality image to the display of the analysis result of the low-quality image according to an instruction from the examiner.
  • the display control unit 250 may change the display of the analysis result of the high-quality image to the display of the high-quality image according to an instruction from the examiner. Further, the display control unit 250 may change the display of the high-quality image to the display of the analysis result of the high-quality image according to an instruction from the examiner.
  • the display control unit 250 may change the display of the analysis result of the low-quality image to the display of another type of analysis result of the low-quality image in accordance with an instruction from the examiner.
  • the display control unit 250 may change the display of the analysis result of the high-quality image to the display of another type of analysis result of the high-quality image according to an instruction from the examiner.
  • the analysis result of the high-quality image may be displayed by superimposing and displaying the analysis result of the high-quality image on the high-quality image with any transparency.
  • the display of the analysis result of the low-quality image may be a display in which the analysis result of the low-quality image is superimposed and displayed on the low-quality image with arbitrary transparency.
  • the display of the analysis result may be changed, for example, to a state where the analysis result is superimposed on the displayed image with arbitrary transparency.
  • the change to the display of the analysis result may be, for example, a change to the display of an image (for example, a two-dimensional map) obtained by blending the analysis result and the image with arbitrary transparency.
  • the artifact is, for example, a false image region caused by light absorption by a blood vessel region or the like, a projection artifact, a band-like artifact in a front image generated in a main scanning direction of measurement light due to a state (movement, blink, etc.) of an eye to be inspected, or the like.
  • the artifact may be, for example, any image-failure region that randomly appears on a medical image of a predetermined part of the subject for each imaging.
  • the values (distributions) of parameters relating to an area including at least one of the various artifacts (missing areas) as described above may be displayed as an analysis result.
  • a parameter value (distribution) relating to a region including at least one of abnormal sites such as drusen, new blood vessels, vitiligo (hard vitiligo), and pseudo drusen may be displayed as an analysis result.
  • the analysis result may be displayed as an analysis map, a sector indicating a statistical value corresponding to each divided region, or the like.
  • the analysis result may be generated using a learned model (analysis result generation engine, a learned model for generating the analysis result) obtained by learning the analysis result of the medical image as learning data.
  • The learned model may be obtained by learning using learning data including a medical image and an analysis result of that medical image, learning data including a medical image and an analysis result of a medical image of a type different from that medical image, and the like.
  • Further, the learned model may be obtained by learning using learning data including input data in which a plurality of medical images of different types of a predetermined part, such as a luminance front image and a motion contrast front image, are set.
  • Here, the luminance front image corresponds to the luminance En-Face image, and the motion contrast front image corresponds to the OCTA En-Face image.
  • a configuration may be adopted in which an analysis result obtained using a high-quality image generated by a learned model for improving image quality is displayed.
  • the input data included in the learning data may be a high-quality image generated by a learned model for improving image quality, or may be a set of a low-quality image and a high-quality image.
  • The learning data may be, for example, data obtained by labeling input data with information including at least one of an analysis value (for example, an average value or a median value) obtained by analyzing an analysis area, a table including the analysis values, an analysis map, and the position of an analysis area such as a sector in the image, as correct answer data (for supervised learning).
  • the analysis result obtained by the learned model for generating the analysis result may be displayed.
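The labeling described above, pairing each medical image with its analysis result as (input, correct answer) data for supervised learning, can be illustrated with a small Python sketch; the identifiers and the shape of the analysis record are hypothetical:

```python
def build_training_pairs(images, analyses):
    """Pair each medical image with its analysis result as (input, label)
    training data for an analysis-result-generating model."""
    pairs = []
    for image_id, image in images.items():
        if image_id in analyses:  # keep only images that have a labeled result
            pairs.append((image, analyses[image_id]))
    return pairs

images = {"a": "enface-a", "b": "enface-b"}
analyses = {"a": {"vessel_density": 0.42}}  # illustrative analysis value
print(build_training_pairs(images, analyses))  # [('enface-a', {'vessel_density': 0.42})]
```

The same pairing scheme applies to the diagnosis-result model described below, with diagnosis labels in place of analysis values.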
  • various diagnosis results such as glaucoma and age-related macular degeneration may be displayed on the report screens in the various embodiments and the modifications described above.
  • a highly accurate diagnosis result can be displayed.
  • As the diagnosis result, the position of the specified abnormal part or the like may be displayed on the image, and the state of the abnormal part or the like may be displayed in characters or the like.
  • Further, a classification result (for example, the Curtin classification) of the abnormal part may be displayed as the diagnosis result.
  • the diagnosis result may be generated using a learned model (diagnosis result generation engine, a learned model for generating a diagnosis result) obtained by learning a diagnosis result of a medical image as learning data.
  • the learned model is obtained by learning using learning data including a medical image and a diagnosis result of the medical image, and learning data including a medical image and a diagnosis result of a medical image of a different type from the medical image. It may be obtained. Further, a configuration may be adopted in which a diagnosis result obtained using a high-quality image generated by a learned model for improving image quality is displayed.
  • the input data included in the learning data may be a high-quality image generated by a learned model for improving image quality, or may be a set of a low-quality image and a high-quality image.
  • The learning data may be, for example, data obtained by labeling input data with information including at least one of a diagnosis name, the type and state (degree) of a lesion (abnormal part), the position of the lesion in the image, the position of the lesion with respect to a region of interest, findings (interpretation findings and the like), grounds for negating the diagnosis name (negative medical support information), and the like, as correct answer data (for supervised learning). Note that a diagnosis result obtained by the learned model for generating the diagnosis result may be displayed in accordance with an instruction from the examiner.
  • the object recognition result (object detection result) and the segmentation result of the noted part, the artifact, the abnormal part, and the like as described above may be displayed.
  • a rectangular frame or the like may be superimposed and displayed around the object on the image.
  • a color or the like may be superimposed and displayed on an object in an image.
  • The object recognition result and the segmentation result may be generated using a learned model obtained by learning using learning data in which a medical image is labeled with information indicating object recognition or segmentation as correct answer data.
  • The generation of the analysis result and the generation of the diagnosis result described above may be performed using the object recognition result or the segmentation result described above.
  • a process of generating an analysis result and a process of generating a diagnosis result may be performed on a region of interest obtained by the processing of object recognition and segmentation.
  • the above-described learned model may be a learned model obtained by learning using learning data including input data in which a plurality of different types of medical images of a predetermined part of the subject are set.
  • as input data included in the learning data, for example, input data in which a motion contrast front image of the fundus and a luminance front image (or a luminance tomographic image) are set can be considered.
  • as input data included in the learning data, for example, input data in which a tomographic image of the fundus (B-scan image) and a color fundus image (or a fluorescent fundus image) are set can also be considered.
  • the plurality of different types of medical images may be any medical images obtained by different modalities, different optical systems, different principles, or the like.
  • the learned model described above may be a learned model obtained by learning using learning data including input data in which a plurality of medical images of different parts of the subject are set.
  • as input data included in the learning data, for example, input data in which a tomographic image of the fundus (B-scan image) and a tomographic image of the anterior segment (B-scan image) are set can be considered.
  • as input data included in the learning data, for example, input data in which a three-dimensional OCT image (three-dimensional tomographic image) of the macula of the fundus and a circle-scan (or raster-scan) tomographic image of the optic papilla of the fundus are set is also conceivable.
  • the input data included in the learning data may be different parts of the subject and a plurality of different types of medical images.
  • the input data included in the learning data may be, for example, input data in which a tomographic image of the anterior eye segment and a color fundus image are set.
  • the above-described learned model may be a learned model obtained by learning using learning data including input data in which a plurality of medical images of a predetermined part of the subject with different imaging angles of view are set.
  • the input data included in the learning data may be a combination of a plurality of medical images obtained by time-dividing a predetermined region into a plurality of regions, such as a panoramic image.
  • the input data included in the learning data may be input data in which a plurality of medical images at different dates and times of a predetermined part of the subject are set.
  • the display screen on which at least one of the analysis result, the diagnosis result, the object recognition result, and the segmentation result is displayed is not limited to the report screen.
  • for example, these results may be displayed on at least one display screen such as a shooting confirmation screen, a display screen for follow-up observation, or a preview screen for various adjustments before shooting (a display screen on which various live moving images are displayed).
  • by displaying, for example, a result obtained with the learned model on the shooting confirmation screen, the examiner can check a highly accurate result even immediately after shooting.
  • the switching of the display between the low-quality image and the high-quality image described above may be, for example, switching between displaying the analysis result of the low-quality image and displaying the analysis result of the high-quality image.
  • the various learned models described above can be obtained by machine learning using learning data.
  • the machine learning includes, for example, deep learning consisting of a multi-layer neural network. For at least a part of the multi-layer neural network, for example, a convolutional neural network (CNN) can be used.
  • at least a part of the multi-layer neural network may use a technology related to an auto encoder (self-encoder). A technology related to back propagation (the error back propagation method) may be used for the learning.
  • the machine learning is not limited to deep learning; any model that can extract (represent) the feature amount of learning data such as an image by learning may be used.
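As a concrete illustration of the convolution operation that underlies such a CNN, the following is a minimal sketch (not the model of the embodiments; it assumes only that NumPy is available):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: the basic building block of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 averaging kernel acts as a simple smoothing (noise-reducing) filter;
# in a trained CNN the kernel weights are learned from the learning data.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0
smoothed = conv2d(image, kernel)
```

In an actual learned model for improving image quality, many such kernels are stacked in layers and their weights are optimized by back propagation.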
  • the high-quality image engine (the learned model for improving image quality) may be a learned model obtained by additional learning with learning data including at least one high-quality image generated by the high-quality image engine. At this time, whether or not to use a high-quality image as learning data for the additional learning may be made selectable by an instruction from the examiner.
  • the preview screens in the various embodiments and modifications described above may be configured so that the learned model is used for at least one frame of the live moving image.
  • the learned model corresponding to each live moving image may be used.
  • since the processing time can be shortened, the examiner can obtain highly accurate information before the start of imaging. For this reason, for example, failures that would require re-imaging can be reduced, so that the accuracy and efficiency of diagnosis can be improved.
  • the plurality of live moving images may be, for example, a moving image of the anterior segment for alignment in the XYZ directions, and a front moving image of the fundus for focus adjustment or OCT focus adjustment of the fundus observation optical system.
  • the plurality of live moving images may be, for example, tomographic moving images of a fundus for coherence gate adjustment of OCT (adjustment of an optical path length difference between a measurement optical path length and a reference optical path length).
  • the moving image to which the learned model can be applied is not limited to a live moving image, and may be, for example, a moving image stored (saved) in a storage unit.
  • a moving image obtained by aligning at least one frame of the tomographic moving image of the fundus stored (saved) in the storage unit may be displayed on the display screen.
  • a reference frame may be selected based on the condition that the vitreous body exists on the frame as much as possible.
  • each frame is a tomographic image (B-scan image) in the XZ direction.
  • a moving image in which another frame is aligned in the XZ direction with respect to the selected reference frame may be displayed on the display screen.
  • a configuration may be adopted in which high-quality images (high-quality frames) sequentially generated by the learned model for improving image quality are successively displayed for at least one frame of the moving image.
  • the method of positioning in the X direction and the method of positioning in the Z direction (depth direction) may be the same, or entirely different methods may be applied.
  • the alignment in the same direction may be performed a plurality of times by different methods. For example, after the rough alignment is performed, the precise alignment may be performed.
  • as a method of positioning, for example, rough alignment in the Z direction may be performed using a retinal layer boundary obtained by performing segmentation processing on the tomographic image (B-scan image).
  • precise positioning in the X and Z directions may be performed using correlation information (similarity) between a reference frame and a plurality of regions obtained by dividing the tomographic image.
  • alignment in the X direction may be performed using one-dimensional projection images generated for each tomographic image (B-scan image).
  • the configuration may be such that, after coarse positioning is performed in pixel units, precise positioning is performed in subpixel units.
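The two-stage alignment described above, coarse alignment in pixel units followed by precise alignment in subpixel units, can be sketched for one-dimensional projection profiles as a cross-correlation peak search refined by parabolic interpolation. This is only one possible realization and is not prescribed by the text (NumPy assumed):

```python
import numpy as np

def estimate_shift(reference, signal):
    """Estimate the shift of `signal` relative to `reference` (1D profiles).

    Coarse step: integer-pixel shift from the cross-correlation peak.
    Fine step: subpixel offset from a parabola fitted to the peak and
    its two neighbours.
    """
    ref = reference - reference.mean()
    sig = signal - signal.mean()
    corr = np.correlate(sig, ref, mode="full")
    peak = int(np.argmax(corr))
    coarse = peak - (len(ref) - 1)          # integer-pixel shift
    if 0 < peak < len(corr) - 1:            # parabolic (subpixel) refinement
        y0, y1, y2 = corr[peak - 1], corr[peak], corr[peak + 1]
        denom = y0 - 2 * y1 + y2
        frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    return coarse + frac

ref = np.exp(-0.5 * ((np.arange(64) - 30.0) / 3.0) ** 2)
moved = np.exp(-0.5 * ((np.arange(64) - 34.0) / 3.0) ** 2)  # shifted by +4
shift = estimate_shift(ref, moved)
```

The same idea extends to two dimensions, or the coarse and fine steps may use entirely different methods, as noted above.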
  • a different learned model for improving image quality may be prepared for each imaging mode having a different scanning pattern or the like, and the learned model for improving image quality corresponding to the selected imaging mode may be selected. Alternatively, one learned model for improving image quality obtained by learning with learning data including various medical images obtained in different imaging modes may be used.
  • a learned model obtained by learning for each imaging part may be selectively used. Specifically, a plurality of learned models can be prepared, including a first learned model obtained using learning data including a first imaging part (lung, eye to be examined, and the like) and a second learned model obtained using learning data including a second imaging part different from the first imaging part. The control unit 200 may then include a selection means for selecting one of the plurality of learned models. At this time, the control unit 200 may include a control means for executing additional learning on the selected learned model. The control means can, in accordance with an instruction from the examiner, search for data in which an imaging part corresponding to the selected learned model and an image obtained by imaging that part form a pair, and execute learning using the retrieved data as learning data for the selected learned model.
  • the imaging part corresponding to the selected learned model may be obtained from the information in the header of the data or manually input by the examiner.
  • the data search may be performed via a network from a server or the like of an external facility such as a hospital or a laboratory.
  • the selection unit and the control unit may be configured by a software module executed by a processor such as a CPU or an MPU of the control unit 200. Further, the selection unit and the control unit may be configured by a circuit that performs a specific function such as an ASIC, an independent device, or the like.
  • the validity of the learning data for additional learning may be detected by confirming its consistency by digital signature or hashing. In this way, the learning data for additional learning can be protected. At this time, if the validity of the learning data for additional learning cannot be confirmed by the digital signature or hashing check, a warning to that effect is issued and additional learning using that learning data is not performed.
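A minimal sketch of such a hash-based consistency check for learning data (the Python standard library's hashlib is used purely for illustration; the actual digital-signature scheme is not specified in the text):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest used as the stored fingerprint of learning data."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_digest: str) -> bool:
    """Re-hash received learning data and compare with the stored digest.

    If the digests differ, the data may have been tampered with, so
    additional learning with it should be skipped and a warning issued.
    """
    return fingerprint(data) == expected_digest

original = b"tomographic image + label"
digest = fingerprint(original)
ok = verify(original, digest)                  # valid data
tampered_ok = verify(original + b"x", digest)  # altered data
```

A digital signature would additionally bind the digest to the sender's key, so that the digest itself cannot be forged in transit.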
  • the server may be in any form, such as a cloud server, a fog server, an edge server, etc., regardless of the installation location.
  • the instruction from the examiner may be an instruction by voice or the like in addition to a manual instruction (for example, an instruction using a user interface or the like).
  • the voice instruction may be recognized using a machine learning model including a speech recognition model (a speech recognition engine, a learned model for speech recognition).
  • the manual instruction may be an instruction by character input using a keyboard, a touch panel, or the like.
  • the character input may be recognized using a machine learning model including a character recognition model (a character recognition engine, a learned model for character recognition).
  • the instruction from the examiner may be an instruction by a gesture or the like.
  • the gesture may be recognized using a machine learning model including a gesture recognition model (a gesture recognition engine, a learned model for gesture recognition).
  • the instruction from the examiner may be a line of sight detection result of the examiner on the monitor.
  • the gaze detection result may be, for example, a pupil detection result using a moving image of the examiner obtained by photographing from the periphery of the monitor.
  • the pupil detection from the moving image may use the above-described object recognition engine.
  • the instruction from the examiner may be an instruction based on an electroencephalogram, a weak electric signal flowing through the body, or the like.
  • as the learning data, character data or voice data (waveform data) indicating an instruction to display the result of processing by the various learned models described above may be used as input data, with an execution instruction for actually displaying the result of that processing on the display unit as correct data. Further, as the learning data, for example, character data or voice data indicating an instruction to display a high-quality image obtained with the learned model for improving image quality may be used as input data, with an execution instruction for displaying the high-quality image and an execution instruction for changing the button 3420 to the active state as correct data.
  • any data may be used as long as the instruction content indicated by the character data or the voice data and the execution instruction content correspond to each other.
  • a process of reducing noise data superimposed on audio data may be performed using waveform data obtained by a plurality of microphones.
  • an instruction using characters or voice and an instruction using a mouse, a touch panel, or the like may be configured to be selectable according to an instruction from the examiner.
  • on / off of an instruction by a character, a voice, or the like may be configured to be selectable according to an instruction from the examiner.
  • machine learning includes deep learning as described above, and, for example, a recurrent neural network (RNN) may be used as at least a part of the multi-layer neural network.
  • here, an RNN, which is a neural network that handles time-series information, will be described with reference to FIGS. 16A and 16B. In addition, a long short-term memory (LSTM), which is a kind of RNN, will be described with reference to FIGS. 17A and 17B.
  • FIG. 16A shows the structure of RNN which is a machine learning model.
  • the RNN 3520 has a loop structure in the network: it receives data x_t 3510 at time t and outputs data h_t 3530. Since the RNN 3520 has a loop function in the network, the state at the current time can be carried over to the next state, so that time-series information can be handled.
  • FIG. 16B shows an example of input / output of the parameter vector at time t.
  • the data x_t 3510 includes N pieces of data (Params1 to ParamsN).
  • the data h_t 3530 output from the RNN 3520 includes N pieces of data (Params1 to ParamsN) corresponding to the input data.
  • FIG. 17A shows the structure of the LSTM.
  • the information that the network takes over at the next time t is the internal state c_{t-1} of the network, called a cell, and the output data h_{t-1}.
  • FIG. 17B shows details of the LSTM3540.
  • in FIG. 17B, FG indicates a forgetting gate network, IG indicates an input gate network, and OG indicates an output gate network, each of which is a sigmoid layer. Therefore, each outputs a vector in which every element takes a value from 0 to 1.
  • the forgetting gate network FG determines how much past information is retained, and the input gate network IG determines which value to update.
  • the CU is a cell update candidate network, and is an activation function tanh layer. This creates a vector of new candidate values to be added to the cell.
  • the output gate network OG selects the element of the cell candidate and selects how much information to transmit at the next time.
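The gate computations of FG, IG, OG, and CU described above can be summarized as one LSTM time step. The following is a generic textbook formulation for illustration (NumPy assumed), not the exact network of FIGS. 17A and 17B:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step.

    W maps the concatenated [h_{t-1}, x_t] to the four gate pre-activations;
    c_prev is the internal cell state c_{t-1}, h_prev the previous output h_{t-1}.
    """
    z = np.concatenate([h_prev, x_t]) @ W + b
    n = h_prev.size
    f = sigmoid(z[0 * n:1 * n])      # FG: how much past information to retain
    i = sigmoid(z[1 * n:2 * n])      # IG: which values to update
    o = sigmoid(z[2 * n:3 * n])      # OG: how much information to transmit
    g = np.tanh(z[3 * n:4 * n])      # CU: new candidate values for the cell
    c_t = f * c_prev + i * g         # updated cell state
    h_t = o * np.tanh(c_t)           # output data h_t
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W = rng.standard_normal((n_hidden + n_in, 4 * n_hidden)) * 0.1
b = np.zeros(4 * n_hidden)
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
```

Because each gate is a sigmoid layer, its output elements lie in (0, 1), exactly as stated for FG, IG, and OG above.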
  • the LSTM model described above is a basic model, and is not limited to the network shown here.
  • the coupling between the networks may be changed.
  • a quasi-recurrent neural network (QRNN) may be used instead of the LSTM.
  • the machine learning model is not limited to the neural network, and boosting, a support vector machine, or the like may be used.
  • a technology related to natural language processing (for example, Sequence to Sequence) may be applied.
  • a dialogue engine (a dialogue model, a learned model for dialogue) that responds to the examiner with an output using characters or voice may also be applied.
  • a high-quality image or the like may be stored in the storage unit in accordance with an instruction from the examiner.
  • when storing a high-quality image or the like, a file name including, in any part of the file name (for example, the first part or the last part), information (for example, characters) indicating that the image was generated by processing using a learned model for improving image quality (image quality improvement processing) may be displayed in an editable state in accordance with an instruction from the examiner.
  • when displaying a high-quality image on the display unit such as a report screen, a display indicating that the displayed image is a high-quality image generated by processing using a learned model for improving image quality may be displayed together with the high-quality image.
  • the user can easily identify from the display that the displayed high-quality image is not the image itself obtained by shooting, thereby reducing erroneous diagnosis or improving diagnosis efficiency.
  • the display indicating that the image is a high-quality image generated by processing using a learned model for improving image quality may be in any mode as long as the input image and the high-quality image generated by the processing can be distinguished from each other.
  • not only for processing using the learned model for improving image quality but also for processing using the various learned models described above, a display indicating that the result was generated by processing using that type of learned model may be displayed together with the result.
  • a display screen such as a report screen may be stored in the storage unit in accordance with an instruction from the examiner.
  • at this time, the report screen may be stored in the storage unit as a single image in which the high-quality images and the like are arranged together with a display indicating that these images are high-quality images generated by processing using a learned model for improving image quality.
  • as for the display indicating that the image is a high-quality image generated by processing using the learned model for improving image quality, a display indicating with what kind of learning data the learned model for improving image quality performed learning may be displayed on the display unit.
  • the display may include an explanation of the types of the input data and the correct data of the learning data, and an arbitrary display related to the input data and the correct data, such as the imaging part included in them. It should be noted that not only for processing using the learned model for improving image quality but also for processing using the various learned models described above, a display indicating with what kind of learning data that type of learned model performed learning may be displayed on the display unit.
  • information (for example, characters) indicating that the image was generated by processing using a learned model for improving image quality may be displayed or stored in a state superimposed on the high-quality image or the like.
  • the portion superimposed on the image may be any region (for example, an edge of the image) that does not overlap the region in which the target part to be observed is displayed. Alternatively, a non-overlapping region may be determined and the information superimposed on the determined region.
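One simple heuristic for determining such a non-overlapping region automatically (purely illustrative; the text does not prescribe a method, and NumPy is assumed) is to place the information in the corner patch with the least image content:

```python
import numpy as np

def quietest_corner(image, box=16):
    """Return the (row, col) of the corner patch with the lowest variance,
    i.e. the corner least likely to overlap the displayed target part."""
    h, w = image.shape
    corners = {
        (0, 0): image[:box, :box],
        (0, w - box): image[:box, -box:],
        (h - box, 0): image[-box:, :box],
        (h - box, w - box): image[-box:, -box:],
    }
    return min(corners, key=lambda pos: corners[pos].var())

img = np.zeros((64, 64))
img[8:56, 8:40] = np.random.default_rng(1).random((48, 32))  # "target part"
pos = quietest_corner(img)  # a corner outside the displayed target part
```

Any other saliency measure (edge density, segmentation masks, and so on) could replace the variance criterion.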
  • when the button 3420 is set to the active state (image quality improvement processing on) by default on the initial display screen of the report screen, a report image corresponding to a report screen including a high-quality image or the like may be transmitted to a server in accordance with an instruction from the examiner.
  • similarly, when the button 3420 is set to the active state by default, a report image corresponding to a report screen including a high-quality image or the like may be configured to be (automatically) transmitted to the server at the end of the examination (for example, when the photographing confirmation screen or the preview screen is changed to the report screen in accordance with an instruction from the examiner).
  • a report image generated based on at least one of various default settings (for example, the depth range for generating the En-Face image on the initial display screen of the report screen, the presence or absence of superimposition of the analysis map, whether or not the image is a high-quality image, and whether or not the screen is a display screen for follow-up observation) may be transmitted to the server.
  • an image obtained by a first type of learned model (for example, a high-quality image, an image showing an analysis result such as an analysis map, an image showing an object recognition result, or an image showing a segmentation result) may be input to a second type of learned model different from the first type. At this time, a result (for example, an analysis result, a diagnosis result, an object recognition result, or a segmentation result) may be generated by the processing of the second type of learned model.
  • further, an image to be input to a second type of learned model different from the first type may be generated from the image input to the first type of learned model, by using a result (for example, an analysis result, a diagnosis result, an object recognition result, or a segmentation result) obtained by the processing of the first type of learned model. In this case, the generated image is likely to be suitable as an image to be processed by the second type of learned model. Therefore, the accuracy of an image (for example, a high-quality image, an image showing an analysis result such as an analysis map, an image showing an object recognition result, or an image showing a segmentation result) obtained by inputting the generated image to the second type of learned model can be improved.
  • Similar image search using an external database stored in a server or the like may be performed using, as a search key, an analysis result, a diagnosis result, or the like obtained by processing the learned model as described above.
  • further, a similar image search using the image itself as a search key may be performed by a similar image search engine (a similar image search model, a learned model for similar image search).
  • (Modification 12) The generation processing of the motion contrast data in the above embodiments and modifications is not limited to a configuration performed based on the luminance values of the tomographic image.
  • the various processes described above may be applied to tomographic data including an interference signal acquired by the OCT imaging unit 100, a signal obtained by performing a Fourier transform on the interference signal, a signal obtained by subjecting that signal to arbitrary processing, and a tomographic image based on these signals. In these cases, the same effects as in the above configuration can be obtained.
  • the configuration of the OCT imaging unit 100 is not limited to the above configuration, and a part of the configuration included in the OCT imaging unit 100 may be configured separately from the OCT imaging unit 100.
  • the configuration of the Mach-Zehnder interferometer is used as the interference optical system of the OCT imaging unit 100, but the configuration of the interference optical system is not limited to this.
  • the interference optical system of the OCT apparatus 1 may have a configuration of a Michelson interferometer.
  • a spectral domain OCT (SD-OCT) device using an SLD as a light source has been described as an OCT device, but the configuration of the OCT device according to the present invention is not limited to this.
  • the present invention can be applied to any other type of OCT device such as a wavelength-swept OCT (SS-OCT) device using a wavelength-swept light source capable of sweeping the wavelength of emitted light.
  • the present invention can also be applied to a Line-OCT apparatus using line light.
  • in the above embodiments and modifications, the acquisition unit 210 acquires the interference signal acquired by the OCT imaging unit 100, the three-dimensional tomographic image generated by the image processing unit 220, and the like. However, the configuration in which the acquisition unit 210 acquires these signals and images is not limited to this.
  • the acquisition unit 210 may acquire these signals from a server or an imaging device connected to the control unit via a LAN, a WAN, the Internet, or the like.
  • the learned model can be provided in the control units 200, 900, and 1400, which are image processing devices.
  • the learned model can be constituted by, for example, a software module executed by a processor such as a CPU. Further, the learned model may be provided in another server or the like connected to the control units 200, 900, and 1400. In this case, the control units 200, 900, and 1400 can perform image quality improvement processing using the learned model by connecting to a server having the learned model via an arbitrary network such as the Internet.
  • the image processed by the image processing device or the image processing method according to the various embodiments and the modified examples described above includes a medical image acquired using an arbitrary modality (imaging device, imaging method).
  • the medical image to be processed can include a medical image acquired by an arbitrary imaging device or the like, and an image created by the image processing apparatus or the image processing method according to the above-described embodiment and the modification.
  • the medical image to be processed is an image of a predetermined part of the subject (subject), and the image of the predetermined part includes at least a part of the predetermined part of the subject.
  • the medical image may include other parts of the subject.
  • the medical image may be a still image or a moving image, and may be a black and white image or a color image.
  • the medical image may be an image representing the structure (form) of the predetermined part or an image representing the function thereof.
  • the images representing functions include, for example, images representing blood flow dynamics (blood flow, blood flow velocity, etc.) such as OCTA images, Doppler OCT images, fMRI images, and ultrasonic Doppler images.
  • the predetermined part of the subject may be determined according to the imaging target, and includes organs such as the human eye (eye to be examined), brain, lungs, intestines, heart, pancreas, kidneys, and liver, as well as any parts such as the head, chest, legs, and arms.
  • the medical image may be a tomographic image of the subject or a front image.
  • the front image is, for example, a fundus front image, a front image of the anterior eye part, a fundus image obtained by fluorescence imaging, or an En-Face image generated using data in at least a partial range in the depth direction of the imaging target from data obtained by OCT (three-dimensional OCT data).
  • the En-Face image may be an OCTA En-Face image (motion contrast front image) generated using data in at least a partial range in the depth direction of the imaging target from three-dimensional OCTA data (three-dimensional motion contrast data).
  • the three-dimensional OCT data and the three-dimensional motion contrast data are examples of three-dimensional medical image data.
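The generation of an En-Face image from such three-dimensional data amounts to projecting the data within a selected depth range onto a plane. A minimal sketch (NumPy assumed; the (z, y, x) axis order and the projection modes are illustrative):

```python
import numpy as np

def en_face(volume, z_start, z_end, mode="mean"):
    """Project a 3D volume (z, y, x) onto a (y, x) front image.

    Only depths z_start..z_end-1 (e.g. a range bounded by retinal layer
    boundaries) contribute, mirroring the selectable depth range used
    when generating an En-Face / OCTA En-Face image.
    """
    band = volume[z_start:z_end]
    return band.max(axis=0) if mode == "max" else band.mean(axis=0)

volume = np.zeros((8, 4, 4))
volume[2:5] = 1.0                      # signal confined to depths 2..4
img_full = en_face(volume, 0, 8)       # projection over the whole depth
img_band = en_face(volume, 2, 5)       # projection over the signal band only
```

Restricting the projection to the band containing the signal yields a higher-contrast front image than projecting the full depth.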
  • the imaging device is a device for imaging an image used for diagnosis.
  • the imaging apparatus includes, for example, a device that obtains an image of a predetermined part by irradiating a predetermined part of the subject with light, radiation such as X-rays, electromagnetic waves, or ultrasonic waves, and a device that obtains an image of a predetermined part by detecting radiation emitted from the subject. More specifically, the imaging apparatuses according to the various embodiments and modifications described above include at least an X-ray imaging apparatus, a CT apparatus, an MRI apparatus, a PET apparatus, a SPECT apparatus, an SLO apparatus, an OCT apparatus, an OCTA apparatus, a fundus camera, and an endoscope.
  • the OCT device may include a time domain OCT (TD-OCT) device and a Fourier domain OCT (FD-OCT) device. Further, the Fourier domain OCT device may include a spectral domain OCT (SD-OCT) device and a wavelength sweep type OCT (SS-OCT) device. Further, the SLO device or OCT device may include a wavefront compensation SLO (AO-SLO) device using a wavefront compensation optical system, a wavefront compensation OCT (AO-OCT) device, or the like. Further, the SLO device and the OCT device may include a polarization SLO (PS-SLO) device, a polarization OCT (PS-OCT) device, and the like for visualizing information on a polarization phase difference and depolarization.
  • the present invention can also be realized by processing in which a program that realizes one or more functions of the above-described embodiments is supplied to a system or an apparatus via a network or a storage medium, and one or more processors in a computer of the system or apparatus read and execute the program. It can also be realized by a circuit (for example, an ASIC) that realizes one or more functions.
  • 200 control unit (image processing device)
  • 224 image quality improvement unit
  • 250 display control unit


Abstract

An image processing apparatus comprising: an image quality improvement unit which generates, from a first image of an eye to be examined, a second image in which at least one of noise reduction and contrast enhancement is performed as compared with the first image, by using a learned model; and a display control unit which displays, on a display unit, the first image and the second image by switching, arranging side by side, or overlapping the first image and the second image.

Description

Image processing apparatus, image processing method, and program
The present invention relates to an image processing apparatus, an image processing method, and a program.
As a method for non-destructively and non-invasively acquiring a tomographic image of a subject such as a living body, an apparatus using optical coherence tomography (OCT apparatus) has been put to practical use. OCT apparatuses are widely used, in particular, as ophthalmic apparatuses for acquiring images for ophthalmic diagnosis.
In OCT, a tomographic image of the subject can be obtained by causing light reflected from the measurement target and light reflected from a reference mirror to interfere with each other and analyzing the intensity of the interference light. As such OCT, time domain OCT (TD-OCT) is known. In TD-OCT, depth information of the subject is obtained by sequentially changing the position of the reference mirror.
Spectral domain OCT (SD-OCT) and swept source OCT (SS-OCT) are also known. In SD-OCT, the interference light obtained using low-coherence light is spectrally dispersed, and depth information is acquired by being converted into frequency information. In SS-OCT, the interference light is acquired using light whose wavelength is swept in advance by a wavelength-swept light source. SD-OCT and SS-OCT are also collectively referred to as Fourier domain OCT (FD-OCT).
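In Fourier domain OCT, depth information is recovered from the spectral interference signal by a Fourier transform. The following toy example (NumPy assumed; a single reflector and idealized linear-in-wavenumber sampling) illustrates the principle of replacing depth information with frequency information:

```python
import numpy as np

n = 512
k = np.linspace(0, 1, n, endpoint=False)   # normalized wavenumber samples
depth_bin = 40                             # reflector position (FFT bin units)
# Spectral interferogram: a cosine fringe whose frequency encodes depth.
spectrum = 1.0 + np.cos(2 * np.pi * depth_bin * k)
a_scan = np.abs(np.fft.fft(spectrum - spectrum.mean()))
recovered = int(np.argmax(a_scan[: n // 2]))  # peak bin = reflector depth
```

A real A-scan superposes many such fringes, one per reflecting layer, and their Fourier transform yields the full depth profile in a single acquisition.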
By using OCT, a tomographic image based on the depth information of the subject can be acquired. Further, by integrating the acquired three-dimensional tomographic image in the depth direction and projecting it onto a two-dimensional plane, a front image of the measurement target can be generated. Conventionally, in order to improve the quality of these images, the image has been acquired a plurality of times and subjected to superimposition (averaging) processing. In such a case, however, the repeated imaging takes time.
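The superimposition of repeated acquisitions improves image quality because averaging N independent noisy frames reduces the noise standard deviation by a factor of about √N, at the cost of N-fold acquisition time. A minimal numerical check (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
truth = np.zeros((64, 64))                 # noise-free "tomographic image"
n_frames = 16
frames = truth + rng.standard_normal((n_frames, 64, 64))  # noisy acquisitions

single_noise = frames[0].std()             # noise level of one acquisition
averaged = frames.mean(axis=0)             # superimposition of 16 frames
averaged_noise = averaged.std()            # roughly single_noise / sqrt(16)
```

This √N trade-off between quality and acquisition time is precisely what the learned-model-based image quality improvement of the present invention seeks to avoid.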
 特許文献1には、医用技術の急激な進歩や緊急時の簡易な撮影に対応するため、以前に取得した画像を、人工知能エンジンによって、より解像度の高い画像に変換する技術が開示されている。このような技術によれば、例えば、より少ない撮影によって取得された画像をより解像度の高い画像に変換することができる。 Patent Literature 1 discloses a technique for converting a previously acquired image into a higher-resolution image by an artificial intelligence engine, in order to cope with rapid advances in medical technology and with simple imaging in an emergency. According to such a technique, for example, an image acquired with fewer acquisitions can be converted into an image with a higher resolution.
特開2018-5841号公報JP 2018-5841A
 しかしながら、解像度が高い画像であっても、画像診断に適した画像とは言えない場合もある。例えば、解像度が高い画像であっても、ノイズが多い場合やコントラストが低い場合等には観察すべき対象が適切に把握できないことがある。 However, even an image having a high resolution may not be an image suitable for image diagnosis. For example, even if the image has a high resolution, an object to be observed may not be able to be properly grasped when there is much noise or when the contrast is low.
 そこで、本発明の目的の一つは、従来よりも画像診断に適した画像を生成することができる画像処理装置、画像処理方法、及びプログラムを提供することである。 Therefore, one of the objects of the present invention is to provide an image processing device, an image processing method, and a program that can generate an image more suitable for image diagnosis than before.
 本発明の一実施態様に係る画像処理装置は、学習済モデルを用いて、被検眼の第1の画像から、該第1の画像と比べてノイズ低減及びコントラスト強調のうちの少なくとも一つがなされた第2の画像を生成する、画質向上部と、表示部に前記第1の画像と前記第2の画像とを切り替えて、並べて、又は重ねて表示させる表示制御部とを備える。 An image processing apparatus according to one embodiment of the present invention includes: an image quality improving unit that uses a learned model to generate, from a first image of an eye to be examined, a second image in which at least one of noise reduction and contrast enhancement has been performed as compared with the first image; and a display control unit that causes a display unit to display the first image and the second image switchably, side by side, or superimposed on each other.
 本発明の他の実施態様に係る画像処理装置は、学習済モデルを用いて、被検眼の深さ方向の範囲における情報に基づいて生成された正面画像である第1の画像から、該第1の画像と比べてノイズ低減及びコントラスト強調のうちの少なくとも一つがなされた第2の画像を生成する、画質向上部と、前記第1の画像を生成するための深さ方向の範囲に基づいて、複数の学習済モデルから、前記画質向上部によって用いられる学習済モデルを選択する選択部とを備える。 An image processing apparatus according to another embodiment of the present invention includes: an image quality improving unit that uses a learned model to generate, from a first image that is a front image generated based on information in a depth-direction range of an eye to be examined, a second image in which at least one of noise reduction and contrast enhancement has been performed as compared with the first image; and a selecting unit that selects, from a plurality of learned models, the learned model to be used by the image quality improving unit, based on the depth-direction range for generating the first image.
 本発明のさらなる特徴が、添付の図面を参照して以下の例示的な実施形態の説明から明らかになる。 Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the accompanying drawings.
実施例1に係るOCT装置の概略構成を示す。 A schematic configuration of the OCT apparatus according to the first embodiment.
実施例1に係る制御部の概略構成を示す。 A schematic configuration of the control unit according to the first embodiment.
実施例1に係る教師データの一例を示す。 An example of teacher data according to the first embodiment.
実施例1に係る教師データの一例を示す。 An example of teacher data according to the first embodiment.
実施例1に係る学習済モデルの構成の一例を示す。 An example of the configuration of a learned model according to the first embodiment.
実施例1に係る一連の画像処理のフローチャートである。 A flowchart of a series of image processing according to the first embodiment.
画質向上処理前後の画像を切り替えて表示するレポート画面の一例を示す。 An example of a report screen that switches between and displays images before and after the image quality improvement processing.
画質向上処理前後の画像を切り替えて表示するレポート画面の一例を示す。 An example of a report screen that switches between and displays images before and after the image quality improvement processing.
画質向上処理前後の画像を並べて表示するレポート画面の一例を示す。 An example of a report screen that displays images before and after the image quality improvement processing side by side.
画質向上処理が適用された複数の画像を同時に表示するレポート画面の一例を示す。 An example of a report screen that simultaneously displays a plurality of images to which the image quality improvement processing has been applied.
画質向上処理が適用された複数の画像を同時に表示するレポート画面の一例を示す。 An example of a report screen that simultaneously displays a plurality of images to which the image quality improvement processing has been applied.
実施例2に係る制御部の概略構成を示す。 A schematic configuration of the control unit according to the second embodiment.
実施例2に係る一連の画像処理のフローチャートである。 A flowchart of a series of image processing according to the second embodiment.
画質向上処理を変更する一例を示す。 An example of changing the image quality improvement processing.
画質向上処理を変更する一例を示す。 An example of changing the image quality improvement processing.
画質向上処理が適用された複数の画像を同時に表示するレポート画面の一例を示す。 An example of a report screen that simultaneously displays a plurality of images to which the image quality improvement processing has been applied.
画質向上処理が適用された複数の画像を同時に表示するレポート画面の一例を示す。 An example of a report screen that simultaneously displays a plurality of images to which the image quality improvement processing has been applied.
実施例3に係る一連の画像処理のフローチャートである。 A flowchart of a series of image processing according to the third embodiment.
実施例4に係る制御部の概略構成を示す。 A schematic configuration of the control unit according to the fourth embodiment.
実施例4に係る一連の画像処理のフローチャートである。 A flowchart of a series of image processing according to the fourth embodiment.
変形例9に係る機械学習モデルとして用いられるニューラルネットワークの構成の一例を示す。 An example of the configuration of a neural network used as a machine learning model according to Modification 9.
変形例9に係る機械学習モデルとして用いられるニューラルネットワークの構成の一例を示す。 An example of the configuration of a neural network used as a machine learning model according to Modification 9.
変形例9に係る機械学習モデルとして用いられるニューラルネットワークの構成の一例を示す。 An example of the configuration of a neural network used as a machine learning model according to Modification 9.
変形例9に係る機械学習モデルとして用いられるニューラルネットワークの構成の一例を示す。 An example of the configuration of a neural network used as a machine learning model according to Modification 9.
実施例5に係るユーザーインターフェースの一例を示す。 An example of a user interface according to the fifth embodiment.
複数のOCTAのEn-Face画像の一例を示す。 An example of En-Face images of a plurality of OCTAs.
複数のOCTAのEn-Face画像の一例を示す。 An example of En-Face images of a plurality of OCTAs.
実施例5に係るユーザーインターフェースの一例を示す。 An example of a user interface according to the fifth embodiment.
実施例5に係るユーザーインターフェースの一例を示す。 An example of a user interface according to the fifth embodiment.
 以下、本発明を実施するための例示的な実施例を、図面を参照して詳細に説明する。 Hereinafter, exemplary embodiments for carrying out the present invention will be described in detail with reference to the drawings.
 ただし、以下の実施例で説明する寸法、材料、形状、及び構成要素の相対的な位置等は任意であり、本発明が適用される装置の構成又は様々な条件に応じて変更できる。また、図面において、同一であるか又は機能的に類似している要素を示すために図面間で同じ参照符号を用いる。 However, dimensions, materials, shapes, relative positions of components, and the like described in the following embodiments are arbitrary, and can be changed according to the configuration of an apparatus to which the present invention is applied or various conditions. Also, in the drawings, the same reference numerals are used between the drawings to indicate the same or functionally similar elements.
 以下の実施例では、被検体として被検眼を例に挙げるが、人の他の臓器等を被検体としてもよい。また、機械学習モデル(機械学習エンジン)に関する学習済モデルを用いて画質向上処理を施す画像として、被検眼のOCTA(OCT Angiography)画像を例に挙げて説明する。なお、OCTAとは、OCTを用いた、造影剤を用いない血管造影法である。OCTAでは、被検体の深さ情報に基づいて取得される三次元のモーションコントラストデータを深度方向に統合し、二次元平面上に投影することでOCTA画像(正面血管画像)を生成する。 In the following embodiments, the eye to be examined is taken as an example of the subject, but other human organs and the like may be used as the subject. Also, an OCTA (OCT Angiography) image of the eye to be examined will be described as an example of an image on which image quality improvement processing is performed using a learned model related to a machine learning model (machine learning engine). OCTA is an angiography method using OCT that does not use a contrast agent. In OCTA, an OCTA image (front blood vessel image) is generated by integrating three-dimensional motion contrast data, acquired based on depth information of the subject, in the depth direction and projecting it onto a two-dimensional plane.
 ここで、モーションコントラストデータとは、被検体の略同一箇所を繰り返し撮影し、その撮影間における被写体の時間的な変化を検出したデータである。なお、略同一箇所とは、モーションコントラストデータを生成するのに許容できる程度に同一である位置をいい、厳密に同一である箇所から僅かにずれた箇所も含むものをいう。モーションコントラストデータは、例えば、複素OCT信号の位相やベクトル、強度の時間的な変化を差、比率、又は相関等から計算することによって得られる。 Here, motion contrast data is data obtained by repeatedly imaging substantially the same location of the subject and detecting temporal changes of the subject between those acquisitions. Note that "substantially the same location" refers to positions that are identical to an extent allowable for generating motion contrast data, and includes positions slightly deviated from the strictly identical location. The motion contrast data is obtained, for example, by calculating the temporal change of the phase, vector, or intensity of the complex OCT signal from a difference, a ratio, a correlation, or the like.
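As a rough illustration of detecting temporal change between repeated acquisitions, the following sketch computes simple pixelwise difference and ratio measures between two amplitude images. This is illustrative only; all names are assumptions, and the computation actually used in the embodiment below is the decorrelation value.

```python
import numpy as np

# Illustrative pixelwise motion-contrast measures between two B-scans taken
# at substantially the same location at different times.
rng = np.random.default_rng(1)
scan_a = rng.random((32, 32)) + 0.1        # amplitude image at time t1
scan_b = scan_a.copy()                     # amplitude image at time t2
scan_b[10:20, 10:20] += 0.5                # region that changed over time ("flow")

diff = np.abs(scan_a - scan_b)                                   # difference
ratio = np.maximum(scan_a, scan_b) / np.minimum(scan_a, scan_b)  # ratio

# Static pixels show ~0 difference and ~1 ratio; the changed region stands out.
```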
 ここで、機械学習モデルに関する学習済モデルを用いた画質向上処理に関する注意点を記載する。画像について機械学習モデルに関する学習済モデルを用いて画質向上処理を行うことで、少ない画像から高画質な画像が得られる一方で、現実には存在しない組織を画像上に描出してしまったり、本来存在している組織を消してしまったりすることがある。そのため、学習済モデルを用いた画質向上処理によって高画質化された画像では、画像上に描出された組織の真偽が判断しにくいという問題があった。 Here, points to note regarding image quality improvement processing using a learned model related to a machine learning model are described. By performing such image quality improvement processing on an image, a high-quality image can be obtained from a small number of images; on the other hand, a tissue that does not actually exist may be rendered in the image, or a tissue that actually exists may be erased. Therefore, for an image whose quality has been improved by image quality improvement processing using a learned model, there is a problem in that it is difficult to judge the authenticity of the tissue rendered in the image.
 そのため、以下の実施例では、機械学習モデルを用いて、従来よりも画像診断に適した画像を生成するとともに、このような画像について、画像上に描出された組織の真偽を容易に判断できる画像処理装置を提供する。 Therefore, the following embodiments provide an image processing apparatus that uses a machine learning model to generate an image more suitable for image diagnosis than before, and that allows the authenticity of the tissue rendered in such an image to be easily judged.
 なお、以下の実施例ではOCTA画像について説明するが、画質向上処理を施す画像はこれに限られず、断層画像や輝度のEn-Face画像等であってもよい。ここで、En-Face画像とは、被検体の三次元のデータにおいて、2つの基準面に基づいて定められる所定の深さ範囲内のデータを二次元平面に投影又は積算して生成した正面画像である。En-Face画像には、例えば、輝度の断層画像に基づく輝度のEn-Face画像やモーションコントラストデータに基づくOCTA画像が含まれる。 In the following embodiments, an OCTA image will be described; however, the image on which the image quality improvement processing is performed is not limited to this, and may be a tomographic image, a luminance En-Face image, or the like. Here, an En-Face image is a front image generated by projecting or integrating, onto a two-dimensional plane, the data within a predetermined depth range determined based on two reference planes in the three-dimensional data of the subject. The En-Face image includes, for example, a luminance En-Face image based on luminance tomographic images and an OCTA image based on motion contrast data.
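A minimal sketch of generating an En-Face image from a depth range, under the simplifying assumption that the two reference planes are flat (constant depth indices); in practice the boundaries would come from layer segmentation as per-pixel depth maps. All names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3D data indexed (Y, Z, X).
volume = rng.random((32, 100, 32))

# The two "reference planes" modeled as constant depth indices.
z_upper, z_lower = 20, 45                  # depth range between the planes

# En-Face image: project (here, average) only the data inside the range
# onto the two-dimensional XY plane.
en_face = volume[:, z_upper:z_lower, :].mean(axis=1)
```

An OCTA En-Face image is produced the same way, with the motion contrast volume in place of the intensity volume.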
(実施例1) (Example 1)
 以下、図1乃至図7を参照して、本発明の実施例1に係る光干渉断層撮像装置(OCT装置)及び画像処理方法について説明する。図1は、本実施例に係るOCT装置の概略構成を示す。 Hereinafter, an optical coherence tomography apparatus (OCT apparatus) and an image processing method according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 7. FIG. 1 shows a schematic configuration of the OCT apparatus according to the present embodiment.
 本実施例に係るOCT装置1には、OCT撮影部100、制御部(画像処理装置)200、入力部260、表示部270が設けられている。 The OCT apparatus 1 according to the present embodiment includes an OCT imaging unit 100, a control unit (image processing device) 200, an input unit 260, and a display unit 270.
 OCT撮影部100は、SD-OCT装置の撮影光学系を含み、走査部を介して測定光が照射された被検眼Eからの戻り光と、測定光に対応する参照光とを干渉させた干渉光に基づいて、被検眼Eの断層の情報(断層情報)を含む信号を取得する。OCT撮影部100には、光干渉部110、及び走査光学系150が設けられている。 The OCT imaging unit 100 includes the imaging optical system of an SD-OCT apparatus, and acquires a signal including tomographic information of the eye E to be examined based on interference light obtained by causing return light from the eye E, which has been irradiated with the measurement light via the scanning unit, to interfere with reference light corresponding to the measurement light. The OCT imaging unit 100 is provided with a light interference unit 110 and a scanning optical system 150.
 制御部200は、OCT撮影部100を制御したり、OCT撮影部100や不図示の他の装置から得られた信号から画像を生成したり、生成/取得した画像を処理したりすることができる。表示部270は、LCDディスプレイ等の任意のディスプレイであり、OCT撮影部100及び制御部200を操作するためのGUIや生成した画像、任意の処理を施した画像、及び患者情報等の各種の情報を表示することができる。 The control unit 200 can control the OCT imaging unit 100, generate images from signals obtained from the OCT imaging unit 100 or other devices (not shown), and process the generated or acquired images. The display unit 270 is an arbitrary display such as an LCD display, and can display a GUI for operating the OCT imaging unit 100 and the control unit 200, generated images, images subjected to arbitrary processing, and various kinds of information such as patient information.
 入力部260は、GUIを操作したり、情報を入力したりすることで、制御部200を操作するために用いられる。入力部260は、例えば、マウスやタッチパッド、トラックボール、タッチパネルディスプレイ、スタイラスペン等のポインティングデバイス及びキーボード等を含む。なお、タッチパネルディスプレイを用いる場合には、表示部270と入力部260を一体的に構成できる。なお、本実施例では、OCT撮影部100、制御部200、入力部260、及び表示部270は別々の要素とされているが、これらのうちの一部又は全部を一体的に構成してもよい。 The input unit 260 is used to operate the control unit 200 by operating the GUI and inputting information. The input unit 260 includes, for example, a pointing device such as a mouse, a touchpad, a trackball, a touch panel display, or a stylus pen, and a keyboard. When a touch panel display is used, the display unit 270 and the input unit 260 can be configured integrally. In the present embodiment, the OCT imaging unit 100, the control unit 200, the input unit 260, and the display unit 270 are separate elements, but some or all of them may be configured integrally.
 OCT撮影部100における光干渉部110には、光源111、カプラ113、コリメート光学系121、分散補償光学系122、反射ミラー123、レンズ131、回折格子132、結像レンズ133、及びラインセンサ134が設けられている。光源111は、近赤外光を発光する低コヒーレンス光源である。光源111から発光した光は、光ファイバ112aを伝搬し、光分岐部であるカプラ113に入射する。カプラ113に入射した光は、走査光学系150側に向かう測定光と、コリメート光学系121、分散補償光学系122、及び反射ミラー123を含む参照光学系側に向かう参照光に分割される。測定光は、光ファイバ112bに入射され、走査光学系150に導かれる。一方、参照光は、光ファイバ112cに入射され、参照光学系へ導かれる。 The light interference unit 110 of the OCT imaging unit 100 is provided with a light source 111, a coupler 113, a collimating optical system 121, a dispersion compensating optical system 122, a reflection mirror 123, a lens 131, a diffraction grating 132, an imaging lens 133, and a line sensor 134. The light source 111 is a low-coherence light source that emits near-infrared light. Light emitted from the light source 111 propagates through the optical fiber 112a and enters the coupler 113, which is an optical branching unit. The light incident on the coupler 113 is split into measurement light traveling toward the scanning optical system 150 and reference light traveling toward the reference optical system including the collimating optical system 121, the dispersion compensating optical system 122, and the reflection mirror 123. The measurement light enters the optical fiber 112b and is guided to the scanning optical system 150. On the other hand, the reference light enters the optical fiber 112c and is guided to the reference optical system.
 光ファイバ112cに入射した参照光は、ファイバ端から射出され、コリメート光学系121を介して、分散補償光学系122に入射し、反射ミラー123へと導かれる。反射ミラー123で反射した参照光は、光路を逆にたどり再び光ファイバ112cに入射する。分散補償光学系122は、走査光学系150及び被検体である被検眼Eにおける光学系の分散を補償し、測定光と参照光の分散を合わせるためのものである。反射ミラー123は、制御部200によって制御される不図示の駆動手段により、参照光の光軸方向に駆動可能なように構成されており、参照光の光路長を、測定光の光路長に対して相対的に変化させ、参照光と測定光の光路長を一致させることができる。 The reference light that has entered the optical fiber 112c is emitted from the fiber end, enters the dispersion compensating optical system 122 via the collimating optical system 121, and is guided to the reflection mirror 123. The reference light reflected by the reflection mirror 123 travels back along the same optical path and enters the optical fiber 112c again. The dispersion compensating optical system 122 compensates for the dispersion of the optical systems in the scanning optical system 150 and in the eye E, which is the subject, so that the dispersion of the measurement light matches that of the reference light. The reflection mirror 123 is configured to be drivable in the optical axis direction of the reference light by a driving unit (not shown) controlled by the control unit 200, so that the optical path length of the reference light can be changed relative to the optical path length of the measurement light, and the two optical path lengths can be made to coincide.
 一方、光ファイバ112bに入射した測定光はファイバ端より射出され、走査光学系150に入射する。走査光学系150は被検眼Eに対して相対的に移動可能なように構成された光学系である。走査光学系150は、制御部200によって制御される不図示の駆動手段により、被検眼Eの眼軸に対して前後上下左右方向に駆動可能なように構成され、被検眼Eに対してアライメントを行うことができる。なお、走査光学系150は、光源111、カプラ113及び参照光学系等を含むように構成されてもよい。 On the other hand, the measurement light that has entered the optical fiber 112b is emitted from the fiber end and enters the scanning optical system 150. The scanning optical system 150 is an optical system configured to be movable relative to the eye E. The scanning optical system 150 is configured to be drivable in the front-back, up-down, and left-right directions with respect to the eye axis of the eye E by a driving unit (not shown) controlled by the control unit 200, and can thereby be aligned with the eye E. Note that the scanning optical system 150 may be configured to include the light source 111, the coupler 113, the reference optical system, and the like.
 走査光学系150には、コリメート光学系151、走査部152、及びレンズ153が設けられている。光ファイバ112bのファイバ端より射出した光は、コリメート光学系151により略平行化され、走査部152へ入射する。 The scanning optical system 150 includes a collimating optical system 151, a scanning unit 152, and a lens 153. Light emitted from the fiber end of the optical fiber 112b is substantially collimated by the collimating optical system 151 and enters the scanning unit 152.
 走査部152は、ミラー面を回転可能なガルバノミラーを2つ有し、一方は水平方向に光を偏向し、他方は垂直方向に光を偏向する。走査部152は、入射した光を制御部200による制御に従って偏向する。これにより、走査部152は、紙面垂直方向(X方向)の主走査方向と紙面内方向(Y方向)の副走査方向の2方向に、被検眼Eの眼底Er上で測定光を走査することができる。なお、主走査方向と副走査方向は、X方向及びY方向に限られず、被検眼Eの深さ方向(Z方向)に対して垂直な方向であり、主走査方向と副走査方向が互いに交差する方向であればよい。そのため、例えば、主走査方向がY方向であってもよいし、副走査方向がX方向であってもよい。 The scanning unit 152 has two galvanometer mirrors whose mirror surfaces are rotatable; one deflects the light in the horizontal direction, and the other deflects the light in the vertical direction. The scanning unit 152 deflects the incident light under the control of the control unit 200. Thereby, the scanning unit 152 can scan the measurement light over the fundus Er of the eye E in two directions: the main scanning direction perpendicular to the drawing plane (X direction) and the sub-scanning direction within the drawing plane (Y direction). Note that the main scanning direction and the sub-scanning direction are not limited to the X and Y directions; they need only be directions perpendicular to the depth direction (Z direction) of the eye E and intersecting each other. Therefore, for example, the main scanning direction may be the Y direction, and the sub-scanning direction may be the X direction.
 走査部152により走査された測定光は、レンズ153を経由して被検眼Eの眼底Er上に、照明スポットを形成する。走査部152により面内偏向を受けると、各照明スポットは被検眼Eの眼底Er上を移動(走査)する。この照明スポットの位置における眼底Erから反射・散乱された測定光の戻り光が光路を逆にたどり光ファイバ112bに入射して、カプラ113に戻る。 The measurement light scanned by the scanning unit 152 forms an illumination spot on the fundus Er of the eye E via the lens 153. As the scanning unit 152 deflects the light in-plane, the illumination spot moves (scans) over the fundus Er of the eye E. The return light of the measurement light reflected and scattered from the fundus Er at the position of this illumination spot travels back along the optical path, enters the optical fiber 112b, and returns to the coupler 113.
 以上のように、反射ミラー123で反射された参照光、及び被検眼Eの眼底Erからの測定光の戻り光は、カプラ113に戻され、互いに干渉して干渉光となる。干渉光は光ファイバ112dを通過し、レンズ131に射出される。干渉光は、レンズ131により略平行化され、回折格子132に入射する。回折格子132は周期構造を有し、入力した干渉光を分光する。分光された干渉光は、合焦状態を変更可能な結像レンズ133によりラインセンサ134に結像される。ラインセンサ134は、各センサ部に照射される光の強度に応じた信号を制御部200に出力する。制御部200は、ラインセンサ134から出力される干渉信号に基づいて、被検眼Eの断層画像を生成することができる。 As described above, the reference light reflected by the reflection mirror 123 and the return light of the measurement light from the fundus Er of the eye E are returned to the coupler 113 and interfere with each other to become interference light. The interference light passes through the optical fiber 112d and is emitted to the lens 131. The interference light is made substantially parallel by the lens 131 and enters the diffraction grating 132. The diffraction grating 132 has a periodic structure and splits the input interference light. The split interference light is imaged on the line sensor 134 by the imaging lens 133 whose focus state can be changed. The line sensor 134 outputs a signal corresponding to the intensity of light emitted to each sensor unit to the control unit 200. The control unit 200 can generate a tomographic image of the eye E based on the interference signal output from the line sensor 134.
 上記一連の動作により、被検眼Eの一点における深さ方向の断層情報を取得することができる。このような動作をAスキャンという。 Through the above series of operations, tomographic information in the depth direction at one point of the eye E can be acquired. Such an operation is called an A-scan.
 また、走査部152のガルバノミラーを駆動させることで、被検眼Eの隣接する一点の干渉光を発生させ、被検眼Eの隣接する一点における深さ方向の断層情報を取得する。この一連の制御を繰り返すことにより、Aスキャンを任意の横断方向(主走査方向)において複数回行うことで被検眼Eの当該横断方向と深さ方向の二次元の断層情報を取得することができる。このような動作をBスキャンという。制御部200は、Aスキャンによって取得された干渉信号に基づくAスキャン画像を複数集めることで、一つのBスキャン画像を構成することができる。以下、このBスキャン画像のことを、二次元断層画像と呼ぶ。 Further, by driving the galvanometer mirror of the scanning unit 152, interference light at an adjacent point of the eye E is generated, and tomographic information in the depth direction at that adjacent point is acquired. By repeating this series of control and performing the A-scan a plurality of times in an arbitrary transverse direction (main scanning direction), two-dimensional tomographic information of the eye E in that transverse direction and the depth direction can be acquired. Such an operation is called a B-scan. The control unit 200 can construct one B-scan image by collecting a plurality of A-scan images based on the interference signals acquired by the A-scans. Hereinafter, this B-scan image is referred to as a two-dimensional tomographic image.
 さらに、走査部152のガルバノミラーを主走査方向に直交する副走査方向に微小に駆動させ、被検眼Eの別の箇所(隣接する走査ライン)における断層情報を取得することができる。制御部200は、この動作を繰り返すことにより、Bスキャン画像を複数集めることで、被検眼Eの所定範囲における三次元断層画像を取得することができる。 Furthermore, the galvanomirror of the scanning unit 152 can be minutely driven in the sub-scanning direction orthogonal to the main scanning direction to acquire tomographic information at another position (adjacent scanning line) of the eye E. The control unit 200 can obtain a three-dimensional tomographic image of the eye E in a predetermined range by collecting a plurality of B-scan images by repeating this operation.
 次に、図2を参照して制御部200について説明する。図2は制御部200の概略構成を示す。制御部200には、取得部210、画像処理部220、駆動制御部230、記憶部240、及び表示制御部250が設けられている。 Next, the control unit 200 will be described with reference to FIG. FIG. 2 shows a schematic configuration of the control unit 200. The control unit 200 includes an acquisition unit 210, an image processing unit 220, a drive control unit 230, a storage unit 240, and a display control unit 250.
 取得部210は、OCT撮影部100から、被検眼Eの干渉信号に対応するラインセンサ134の出力信号のデータを取得することができる。なお、取得部210が取得する出力信号のデータは、アナログ信号でもデジタル信号でもよい。取得部210がアナログ信号を取得する場合には、制御部200でアナログ信号をデジタル信号に変換することができる。 The acquisition unit 210 can acquire the data of the output signal of the line sensor 134 corresponding to the interference signal of the eye E from the OCT imaging unit 100. Note that the data of the output signal acquired by the acquisition unit 210 may be an analog signal or a digital signal. When the acquisition unit 210 acquires an analog signal, the control unit 200 can convert the analog signal into a digital signal.
 また、取得部210は、画像処理部220で生成された断層データや、二次元断層画像、三次元断層画像、モーションコントラスト画像、及びEn-Face画像等の各種画像を取得することができる。ここで、断層データとは、被検体の断層に関する情報を含むデータであり、OCTによる干渉信号にフーリエ変換を施した信号、該信号に任意の処理を施した信号、及びこれらに基づく断層画像等を含むものをいう。 The acquisition unit 210 can also acquire the tomographic data generated by the image processing unit 220 and various images such as two-dimensional tomographic images, three-dimensional tomographic images, motion contrast images, and En-Face images. Here, tomographic data is data including information on a tomographic section of the subject, and includes a signal obtained by applying a Fourier transform to an OCT interference signal, a signal obtained by applying arbitrary processing to that signal, a tomographic image based on these, and the like.
 さらに、取得部210は、画像処理すべき画像の撮影条件群(例えば、撮影日時、撮影部位名、撮影領域、撮影画角、撮影方式、画像の解像度や階調、画像の画像サイズ、画像フィルタ、及び画像のデータ形式に関する情報など)を取得する。なお、撮影条件群については、例示したものに限られない。また、撮影条件群は、例示したもの全てを含む必要はなく、これらのうちの一部を含んでもよい。 Further, the acquisition unit 210 acquires a group of imaging conditions of the image to be processed (for example, the imaging date and time, the name of the imaged site, the imaging region, the imaging angle of view, the imaging method, the resolution and gradation of the image, the image size, the image filter, and information on the data format of the image). Note that the imaging condition group is not limited to those exemplified. Further, the imaging condition group need not include all of the exemplified items, and may include some of them.
 具体的には、取得部210は、画像を撮影した際のOCT撮影部100の撮影条件を取得する。また、取得部210は、画像のデータ形式に応じて、画像を構成するデータ構造に保存された撮影条件群を取得することもできる。なお、画像のデータ構造に撮影条件が保存されていない場合には、取得部210は、別途撮影条件を保存している記憶装置等から撮影条件群を含む撮影情報群を取得することもできる。 Specifically, the acquisition unit 210 acquires the imaging conditions of the OCT imaging unit 100 at the time of capturing an image. The acquiring unit 210 can also acquire a group of photographing conditions stored in a data structure forming an image according to the data format of the image. Note that when the imaging conditions are not stored in the data structure of the image, the acquiring unit 210 can also acquire the imaging information group including the imaging conditions group from a storage device that separately stores the imaging conditions.
 また、取得部210は、被検者識別番号等の被検眼を同定するための情報を入力部260等から取得することもできる。なお、取得部210は、記憶部240や、制御部200に接続される不図示のその他の装置から各種データや各種画像、各種情報を取得してもよい。取得部210は、取得した各種データや画像を記憶部240に記憶させることができる。 The acquisition unit 210 can also acquire information for identifying the eye to be examined, such as the subject identification number, from the input unit 260 or the like. Note that the acquisition unit 210 may acquire various data, various images, and various information from the storage unit 240 and other devices (not illustrated) connected to the control unit 200. The acquisition unit 210 can cause the storage unit 240 to store the acquired various data and images.
 画像処理部220は、取得部210で取得されたデータや記憶部240に記憶されたデータから断層画像やEn-Face画像等を生成したり、生成又は取得した画像に画像処理を施したりすることができる。画像処理部220には、断層画像生成部221、モーションコントラスト生成部222、En-Face画像生成部223、及び画質向上部224が設けられている。 The image processing unit 220 can generate tomographic images, En-Face images, and the like from the data acquired by the acquisition unit 210 or the data stored in the storage unit 240, and can apply image processing to generated or acquired images. The image processing unit 220 is provided with a tomographic image generation unit 221, a motion contrast generation unit 222, an En-Face image generation unit 223, and an image quality improvement unit 224.
 断層画像生成部221は、取得部210が取得した干渉信号のデータに対して波数変換やフーリエ変換、絶対値変換(振幅の取得)等を施して断層データを生成し、断層データに基づいて被検眼Eの断層画像を生成することができる。ここで、取得部210で取得される干渉信号のデータは、ラインセンサ134から出力された信号のデータであってもよいし、記憶部240や制御部200に接続された不図示の装置から取得された干渉信号のデータであってもよい。なお、断層画像の生成方法としては公知の任意の方法を採用してよく、詳細な説明は省略する。 The tomographic image generation unit 221 generates tomographic data by applying wavenumber conversion, Fourier transform, absolute value conversion (acquisition of the amplitude), and the like to the interference signal data acquired by the acquisition unit 210, and can generate a tomographic image of the eye E to be examined based on the tomographic data. Here, the interference signal data acquired by the acquisition unit 210 may be the data of the signal output from the line sensor 134, or may be interference signal data acquired from the storage unit 240 or from a device (not shown) connected to the control unit 200. Note that any known method may be employed for generating the tomographic image, and a detailed description thereof is omitted.
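The reconstruction steps named here (wavenumber conversion, Fourier transform, absolute value conversion) can be sketched as follows. Windowing, dispersion correction, and log scaling are omitted, and the signal model and all names are illustrative assumptions rather than the apparatus's actual processing.

```python
import numpy as np

def interferogram_to_ascan(spectrum, wavelength):
    """Sketch of A-scan reconstruction: wavenumber conversion (resampling
    from uniform-wavelength to uniform-wavenumber sampling), Fourier
    transform, and absolute value (amplitude) conversion."""
    k = 2.0 * np.pi / wavelength                     # k = 2*pi/lambda
    k_uniform = np.linspace(k.min(), k.max(), spectrum.size)
    order = np.argsort(k)                            # np.interp needs ascending xp
    resampled = np.interp(k_uniform, k[order], spectrum[order])
    return np.abs(np.fft.rfft(resampled))            # amplitude depth profile

# Example: synthetic spectral fringe from a single reflector.
wl = np.linspace(800e-9, 880e-9, 1024)               # wavelengths in meters
fringe = np.cos((2.0 * np.pi / wl) * 1e-4)           # fringes ~ cos(2*k*z)
a_scan = interferogram_to_ascan(fringe, wl)          # peak near bin ~11
```

A spectrometer samples roughly uniformly in wavelength, so resampling to uniform wavenumber before the FFT is what keeps the depth peak sharp.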
 また、断層画像生成部221は、生成した複数部位の断層画像に基づいて三次元断層画像を生成することができる。断層画像生成部221は、例えば、複数部位の断層画像を1の座標系に並べて配置することで三次元断層画像を生成することができる。ここで、断層画像生成部221は、記憶部240や制御部200に接続された不図示の装置から取得された複数部位の断層画像に基づいて三次元断層画像を生成してもよい。 Further, the tomographic image generation unit 221 can generate a three-dimensional tomographic image based on the generated tomographic images of a plurality of sites. The tomographic image generation unit 221 can generate the three-dimensional tomographic image by, for example, arranging the tomographic images of the plurality of sites side by side in one coordinate system. Here, the tomographic image generation unit 221 may generate the three-dimensional tomographic image based on tomographic images of a plurality of sites acquired from the storage unit 240 or from a device (not shown) connected to the control unit 200.
 モーションコントラスト生成部222は、略同一箇所を撮影して得た複数の断層画像を用いて二次元モーションコントラスト画像を生成することができる。また、モーションコントラスト生成部222は、生成した各部位の二次元モーションコントラスト画像を1の座標系に並べて配置することで三次元モーションコントラスト画像を生成することができる。 The motion contrast generation unit 222 can generate a two-dimensional motion contrast image using a plurality of tomographic images obtained by photographing substantially the same location. In addition, the motion contrast generation unit 222 can generate a three-dimensional motion contrast image by arranging the generated two-dimensional motion contrast images of the respective parts in one coordinate system.
 本実施例では、モーションコントラスト生成部222は被検眼Eの略同一箇所を撮影して得た複数の断層画像間の脱相関値に基づいてモーションコントラスト画像を生成する。 In the present embodiment, the motion contrast generation unit 222 generates a motion contrast image based on decorrelation values between a plurality of tomographic images obtained by photographing substantially the same part of the eye E.
 具体的には、モーションコントラスト生成部222は、撮影時刻が互いに連続する略同一箇所を撮影して得た複数の断層画像について、位置合わせが行われた複数の断層画像を取得する。なお、位置合わせは、種々の公知の方法を使用することができる。例えば、複数の断層画像のうちの1つを基準画像として選択し、基準画像の位置及び角度を変更しながら、その他の断層画像との類似度が算出され、各断層画像の基準画像との位置ずれ量が算出される。算出結果に基づいて各断層画像を補正することで、複数の断層画像の位置合わせが行われる。なお、当該位置合わせの処理は、モーションコントラスト生成部222とは別個の構成要素によって行われてもよい。また、位置合わせの方法はこれに限られず、公知の任意の手法により行われてよい。 Specifically, the motion contrast generation unit 222 acquires a plurality of tomographic images that have been aligned with respect to a plurality of tomographic images obtained by photographing substantially the same locations where photographing times are continuous with each other. Note that various known methods can be used for alignment. For example, one of a plurality of tomographic images is selected as a reference image, and while changing the position and angle of the reference image, the degree of similarity with other tomographic images is calculated, and the position of each tomographic image with respect to the reference image is calculated. A shift amount is calculated. By correcting each tomographic image based on the calculation result, positioning of a plurality of tomographic images is performed. Note that the alignment processing may be performed by a component that is separate from the motion contrast generation unit 222. The method of positioning is not limited to this, and may be performed by any known method.
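A toy version of the alignment described above can be sketched as follows: it searches integer translations only and scores candidates by correlation with the reference image. The actual alignment may also vary the angle and use other similarity measures; all names here are assumptions.

```python
import numpy as np

def register_shift(reference, moving, max_shift=5):
    """Exhaustively search integer (dy, dx) shifts of `moving` and return
    the shift maximizing similarity (correlation) with `reference`; a
    simplified stand-in for tomographic-image alignment."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = float(np.sum(reference * shifted))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(3)
ref = rng.random((40, 40))
mov = np.roll(np.roll(ref, -3, axis=0), 2, axis=1)   # known displacement
dy, dx = register_shift(ref, mov)                    # recovers (3, -2)
```

Correcting each tomographic image with the recovered shift before computing the decorrelation prevents bulk motion from being mistaken for flow.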
 The motion contrast generation unit 222 calculates, for each pair of tomographic images with mutually consecutive imaging times among the plurality of aligned tomographic images, a decorrelation value according to the following Equation 1:

 M(x, z) = 1 - (2 * A(x, z) * B(x, z)) / (A(x, z)^2 + B(x, z)^2)   (Equation 1)
 Here, A(x, z) denotes the amplitude of tomographic image A at position (x, z), and B(x, z) denotes the amplitude of tomographic image B at the same position (x, z). The resulting decorrelation value M(x, z) takes a value from 0 to 1 and approaches 1 as the difference between the two amplitude values increases. Although the present embodiment uses two-dimensional tomographic images in the XZ plane, two-dimensional tomographic images in, for example, the YZ plane may be used instead; in that case, the position (x, z) may be replaced with the position (y, z) or the like. The decorrelation value may also be obtained from the luminance values of the tomographic images, or from the values of the interference signals corresponding to the tomographic images.
 The motion contrast generation unit 222 determines the pixel value of the motion contrast image at each position (pixel position) based on the decorrelation value M(x, z) there, and thereby generates the motion contrast image. Although in the present embodiment the motion contrast generation unit 222 calculates decorrelation values for tomographic images with consecutive imaging times, the method of calculating motion contrast data is not limited to this. The two tomographic images from which the decorrelation value M is obtained need only have imaging times within a predetermined interval of each other; the imaging times need not be consecutive. Therefore, for example, in order to extract objects with little temporal change, two tomographic images whose imaging interval is longer than the normal prescribed time may be extracted from the acquired tomographic images and used to calculate the decorrelation value. Instead of the decorrelation value, a variance, the maximum value divided by the minimum value (maximum/minimum), or the like may be obtained.
 The method of generating the motion contrast image is not limited to the method described above, and any other known method may be used.
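 Assuming Equation 1 takes the standard amplitude-decorrelation form (0 where the two amplitudes agree, approaching 1 as they diverge), the per-pixel computation and the combination over consecutive pairs described above can be sketched as:

```python
import numpy as np

def decorrelation(a, b, eps=1e-12):
    """Decorrelation value of Equation 1 for two aligned B-scan
    amplitude images; eps guards against division by zero."""
    return 1.0 - (2.0 * a * b) / (a**2 + b**2 + eps)

def motion_contrast(bscans):
    """Average the decorrelation over consecutive pairs of aligned
    B-scans acquired at substantially the same location."""
    pairs = zip(bscans[:-1], bscans[1:])
    return np.mean([decorrelation(a, b) for a, b in pairs], axis=0)
```

 Static tissue (equal amplitudes) yields values near 0, while flow regions, where the amplitude varies between acquisitions, yield values near 1.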
 The En-Face image generation unit 223 can generate an En-Face image (OCTA image), which is a front image, from the three-dimensional motion contrast image generated by the motion contrast generation unit 222. Specifically, the En-Face image generation unit 223 can generate an OCTA image, a front image obtained by projecting the three-dimensional motion contrast image onto a two-dimensional plane based on, for example, two arbitrary reference planes along the depth direction (Z direction) of the eye E. The En-Face image generation unit 223 can similarly generate an intensity En-Face image from the three-dimensional tomographic image generated by the tomographic image generation unit 221.
 More specifically, the En-Face image generation unit 223 determines, for example, a representative value of the pixel values in the depth direction at each XY position within the region bounded by the two reference planes, determines the pixel value at each position based on that representative value, and thereby generates the En-Face image. Here, the representative value includes values such as the average, median, or maximum of the pixel values within the depth range of the region bounded by the two reference planes.
 Note that each reference plane may be a surface along a layer boundary in the tomographic data of the eye E, or may be flat. Hereinafter, the depth range between the reference planes used to generate an En-Face image is referred to as the En-Face image generation range. The En-Face image generation method according to the present embodiment is merely an example, and the En-Face image generation unit 223 may generate En-Face images using any known method.
 The image quality improving unit 224 uses a learned model, described later, to generate a high-quality OCTA image based on the OCTA image generated by the En-Face image generation unit 223. The image quality improving unit 224 may also generate a high-quality tomographic image or a high-quality intensity En-Face image based on the tomographic image generated by the tomographic image generation unit 221 or the intensity En-Face image generated by the En-Face image generation unit 223. Note that the image quality improving unit 224 can generate high-quality images based not only on OCTA images and the like captured using the OCT imaging unit 100, but also on various images that the acquisition unit 210 acquires from the storage unit 240 or from other devices (not shown) connected to the control unit 200. Further, the image quality improving unit 224 may perform image quality improvement processing not only on OCTA images and tomographic images but also on three-dimensional motion contrast images and three-dimensional tomographic images.
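 The projection described above (a representative value taken along depth between two reference planes at each XY position) can be sketched as follows; `top` and `bottom` are illustrative per-position depth indices for the two reference planes, not names from the source.

```python
import numpy as np

def en_face(volume, top, bottom, mode="mean"):
    """Project a 3-D motion contrast volume (Z, Y, X) onto a 2-D
    front image. `top` and `bottom` give, per (Y, X) position, the
    depth indices of the two reference planes; the representative
    value of the pixels in that range becomes the output pixel."""
    reducer = {"mean": np.mean, "median": np.median, "max": np.max}[mode]
    z, y, x = volume.shape
    out = np.zeros((y, x))
    for j in range(y):
        for i in range(x):
            out[j, i] = reducer(volume[top[j, i]:bottom[j, i], j, i])
    return out
```

 Passing depth indices per position allows the reference planes to follow retinal layer boundaries rather than being flat.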
 The drive control unit 230 can control the driving of components of the OCT imaging unit 100 connected to the control unit 200, such as the light source 111, the scanning optical system 150, the scanning unit 152, and the imaging lens 133. The storage unit 240 can store the various data acquired by the acquisition unit 210 and the various images and data generated and processed by the image processing unit 220, such as tomographic images and OCTA images. The storage unit 240 can also store information about the subject's eye, such as the subject's attributes (name, age, and the like) and measurement results acquired with other examination equipment (axial length, intraocular pressure, and the like), as well as imaging parameters, image analysis parameters, and parameters set by the operator. These images and this information may instead be stored in an external storage device (not shown). In addition, the storage unit 240 can store programs that, when executed by a processor, implement the functions of the components of the control unit 200.
 The display control unit 250 can cause the display unit 270 to display the various information acquired by the acquisition unit 210 and the various images generated and processed by the image processing unit 220, such as tomographic images, OCTA images, and three-dimensional motion contrast images. The display control unit 250 can also cause the display unit 270 to display information input by the user, and the like.
 The control unit 200 may be configured using, for example, a general-purpose computer, or may be configured using a computer dedicated to the OCT apparatus 1. The control unit 200 includes a processor (not shown) such as a CPU (Central Processing Unit) or MPU (Micro Processing Unit), and a storage medium including memory such as an optical disk or ROM (Read Only Memory). Each component of the control unit 200 other than the storage unit 240 may be implemented as a software module executed by a processor such as a CPU or MPU, or may be implemented by a circuit that performs a specific function, such as an ASIC, or by an independent device or the like. The storage unit 240 may be implemented by any storage medium, such as an optical disk or memory.
 Note that the control unit 200 may include one or more processors, such as CPUs, and one or more storage media, such as ROMs. Accordingly, each component of the control unit 200 may be configured to function when at least one processor is connected to at least one storage medium and the at least one processor executes a program stored in the at least one storage medium. The processor is not limited to a CPU or MPU and may be a GPU (Graphics Processing Unit) or the like.
 Next, with reference to FIGS. 3A to 4, a learned model of a machine learning model trained according to a machine learning algorithm such as deep learning according to the present embodiment will be described. The learned model according to the present embodiment generates and outputs, from an input image and according to the tendency it has learned, an image as if image quality improvement processing had been applied.
 In this specification, image quality improvement processing refers to converting an input image into an image whose quality is better suited to image diagnosis, and a high-quality image refers to an image so converted. What constitutes image quality suited to image diagnosis depends on what one wants to diagnose in each kind of image diagnosis, so it cannot be stated categorically; for example, image quality suited to image diagnosis includes low noise, high contrast, colors and gradations that make the imaged object easy to observe, a large image size, and high resolution. It can also include image quality in which objects and gradations that do not actually exist but were rendered during image generation have been removed from the image.
 A learned model is a machine learning model following an arbitrary machine learning algorithm, such as deep learning, that has been trained in advance using appropriate teacher data (training data). A learned model is not, however, one that performs no further learning; it can also undergo additional training. The teacher data consists of one or more pairs of input data and output data (ground truth data). In the present embodiment, each pair of input data and output data consists of an OCTA image and an OCTA image obtained by applying superposition processing, such as arithmetic averaging, to a plurality of OCTA images that include that OCTA image.
 A superposed image obtained by superposition processing is a high-quality image well suited to image diagnosis, because pixels rendered in common across the source images are emphasized. In this case, the generated high-quality image is a high-contrast image in which, as a result of this emphasis, the difference between low-intensity and high-intensity regions is distinct. In a superposed image, for example, the random noise arising in each acquisition can also be reduced, and a region that was poorly rendered in a source image at one point in time can be interpolated from the other source images.
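 As a sketch of how one such teacher-data pair might be assembled, with a single acquisition as the input and the arithmetic average of repeated acquisitions of the same location as the high-quality target (a minimal illustration, not the source's actual pipeline):

```python
import numpy as np

def make_training_pair(repeated_octa):
    """Build one (input, target) teacher-data pair: one OCTA
    acquisition as the input, and the average of all repeated
    acquisitions of the same location as the high-quality target."""
    stack = np.stack(repeated_octa)
    target = stack.mean(axis=0)   # superposition by arithmetic averaging
    return stack[0], target
```

 Averaging N acquisitions reduces the standard deviation of uncorrelated random noise by a factor of roughly the square root of N, which is why the target is better suited to image diagnosis than the single-acquisition input.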
 Note that pairs that do not contribute to improved image quality can be removed from the pair group constituting the teacher data. For example, if a high-quality image serving as the output data of a teacher-data pair has image quality unsuited to image diagnosis, a learned model trained with that teacher data may also output images with quality unsuited to image diagnosis. Therefore, by removing from the teacher data those pairs whose output data has image quality unsuited to image diagnosis, the likelihood that the learned model will generate images with quality unsuited to image diagnosis can be reduced.
 Also, if the average intensity or the intensity distribution of the paired images differs greatly, a learned model trained with that teacher data may output images unsuited to image diagnosis whose intensity distribution differs greatly from that of the low-quality image. For this reason, pairs of input and output data whose average intensity or intensity distribution differs greatly can also be removed from the teacher data.
 Furthermore, if the structure or position of the imaged object rendered in the paired images differs greatly, a learned model trained with that teacher data may output images unsuited to image diagnosis in which the imaged object is rendered with a structure or at a position differing greatly from the low-quality image. For this reason, pairs of input and output data in which the rendered structure or position of the imaged object differs greatly can also be removed from the teacher data.
 By using a learned model trained in this way, the image quality improving unit 224 can, when given an OCTA image acquired in a single acquisition (examination), generate a high-quality OCTA image comparable to one subjected to superposition processing, with higher contrast, reduced noise, and so on. The image quality improving unit 224 can therefore generate a high-quality image suited to image diagnosis based on the low-quality input image.
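 The pair-screening steps above might be sketched as follows. The thresholds and the use of image correlation as a proxy for structural agreement are illustrative assumptions, not values from the source.

```python
import numpy as np

def filter_pairs(pairs, max_mean_diff=0.2, min_structure_corr=0.5):
    """Drop teacher-data pairs whose average intensity differs too
    much, or whose rendered structure (approximated here by image
    correlation) differs too much, between input and target.
    Both thresholds are hypothetical examples."""
    kept = []
    for low, high in pairs:
        if abs(low.mean() - high.mean()) > max_mean_diff:
            continue  # intensity distributions differ greatly
        corr = np.corrcoef(low.ravel(), high.ravel())[0, 1]
        if corr < min_structure_corr:
            continue  # rendered structure/position differs greatly
        kept.append((low, high))
    return kept
```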
 Next, the images used during training are described. The image groups making up the pairs of an OCTA image 301 and a high-quality OCTA image 302 constituting the teacher data are created as rectangular region images of a fixed image size with corresponding positional relationships. The creation of these images is described with reference to FIGS. 3A and 3B.
 First, consider the case where one of the pairs constituting the teacher data is the OCTA image 301 and the high-quality OCTA image 302. In this case, as shown in FIG. 3A, the pair consists of the entire OCTA image 301 as input data and the entire high-quality OCTA image 302 as output data. Although in the example shown in FIG. 3A the pair of input and output data is formed from each image in its entirety, the pair is not limited to this.
 For example, as shown in FIG. 3B, a pair may be formed using a rectangular region image 311 of the OCTA image 301 as input data and a rectangular region image 321, the corresponding imaged region in the OCTA image 302, as output data.
 Note that during training, the scan range (imaging angle of view) and the scan density (number of A-scans and number of B-scans) can be normalized to make the image sizes uniform, so that the rectangular region size used during training is kept constant. The rectangular region images shown in FIGS. 3A and 3B are examples of the rectangular region sizes used when training with each separately.
 The number of rectangular regions is one in the example shown in FIG. 3A, and more than one can be set in the example shown in FIG. 3B. For example, in the example shown in FIG. 3B, a pair can also be formed using the rectangular region image 312 of the OCTA image 301 as input data and the rectangular region image 322, the corresponding imaged region in the high-quality OCTA image 302, as output data. In this way, pairs of mutually different rectangular region images can be created from a single pair consisting of an OCTA image and a high-quality OCTA image. Moreover, by creating a large number of rectangular region image pairs from the source OCTA image and high-quality OCTA image while varying the region position across different coordinates, the group of pairs constituting the teacher data can be enriched.
 Although the rectangular regions in the example shown in FIG. 3B are discrete, the source OCTA image and the high-quality OCTA image can be divided into groups of contiguous rectangular region images of a fixed image size with no gaps. Alternatively, the source OCTA image and the high-quality OCTA image may be divided into mutually corresponding groups of rectangular region images at random positions. By selecting images of smaller regions as the rectangular regions forming the input/output data pairs in this way, a large amount of pair data can be generated from the OCTA image 301 and the high-quality OCTA image 302 constituting the original pair. The time required to train the machine learning model can therefore be reduced.
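 Dividing a source pair into gapless, fixed-size, same-position rectangular region pairs, as described above, can be sketched as (parameter names are illustrative):

```python
import numpy as np

def extract_patch_pairs(low, high, patch=8, stride=8):
    """Cut a low-quality image and its aligned high-quality
    counterpart into rectangular region pairs of a fixed size taken
    at the same positions; stride == patch gives a gapless tiling."""
    assert low.shape == high.shape
    pairs = []
    h, w = low.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            pairs.append((low[y:y + patch, x:x + patch],
                          high[y:y + patch, x:x + patch]))
    return pairs
```

 A stride smaller than the patch size, or randomized offsets, would produce overlapping or random-position region pairs as also mentioned in the text.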
 Next, as an example of the learned model according to the present embodiment, a convolutional neural network (CNN) that performs image quality improvement processing on an input tomographic image is described with reference to FIG. 4. FIG. 4 shows an example of the configuration 401 of the learned model used by the image quality improving unit 224.
 The learned model shown in FIG. 4 consists of a plurality of layer groups responsible for processing the group of input values and outputting the result. The types of layers included in the configuration 401 of the learned model are convolution layers, downsampling layers, upsampling layers, and merger layers.
 A convolution layer performs convolution processing on the group of input values according to parameters such as the set kernel size of the filters, the number of filters, the stride value, and the dilation value. Note that the number of dimensions of the filter kernel size may be changed according to the number of dimensions of the input image.
 A downsampling layer performs processing that makes the number of output values smaller than the number of input values by thinning out or combining the group of input values. Specifically, such processing includes, for example, max pooling.
 An upsampling layer performs processing that makes the number of output values larger than the number of input values by duplicating the group of input values or adding values interpolated from them. Specifically, such processing includes, for example, linear interpolation.
 A merger layer performs processing that takes groups of values, such as the output values of a given layer or the pixel values constituting an image, from a plurality of sources and combines them by concatenation or addition.
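 Minimal NumPy sketches of the four layer types described above (illustrative single-channel implementations, not the actual implementation of configuration 401):

```python
import numpy as np

def conv2d(img, kernel, stride=1, dilation=1):
    """Convolution layer: valid-mode 2-D convolution controlled by
    kernel size, stride, and dilation."""
    k = kernel.shape[0]
    span = (k - 1) * dilation + 1         # receptive-field extent
    h = (img.shape[0] - span) // stride + 1
    w = (img.shape[1] - span) // stride + 1
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            window = img[y * stride:y * stride + span:dilation,
                         x * stride:x * stride + span:dilation]
            out[y, x] = np.sum(window * kernel)
    return out

def max_pool(img, size=2):
    """Downsampling layer: max pooling over size x size blocks."""
    h, w = img.shape[0] // size, img.shape[1] // size
    return img[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def upsample(img, factor=2):
    """Upsampling layer by duplication; a real model might add
    linearly interpolated values instead."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def merge(a, b):
    """Merger layer: concatenate two value groups along the last axis
    (addition would be the other option named in the text)."""
    return np.concatenate([a, b], axis=-1)
```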
 As the parameters set for the convolution layer group included in the configuration 401 shown in FIG. 4, for example, setting the filter kernel size to a width of 3 pixels and a height of 3 pixels and the number of filters to 64 enables image quality improvement processing of a certain accuracy. Note, however, that with different parameter settings for the layer groups and node groups constituting the neural network, the degree to which the tendency trained from the teacher data can be reproduced in the output data may differ. In other words, in many cases the appropriate parameters differ depending on the mode of implementation, and they can be changed to preferable values as needed.
 In addition to changing the parameters as described above, the CNN may obtain better characteristics through changes to its configuration. Better characteristics means, for example, higher accuracy of the image quality improvement processing, a shorter image quality improvement processing time, or a shorter time required to train the machine learning model.
 Although not illustrated, as examples of changes to the CNN configuration, a batch normalization layer or an activation layer using a rectified linear unit (ReLU) may be incorporated after a convolution layer.
 When data is input to the learned model of such a machine learning model, data is output in accordance with the design of the machine learning model; for example, output data with a high likelihood of corresponding to the input data is output according to the tendency trained using the teacher data. In the learned model according to the present embodiment, when the OCTA image 301 is input, a high-quality OCTA image 302 is output according to the tendency trained using the teacher data.
 Note that when training was performed on divided image regions, the learned model outputs rectangular region images, that is, high-quality OCTA images corresponding to the respective rectangular regions. In this case, the image quality improving unit 224 first divides the input OCTA image 301 into a group of rectangular region images based on the image size used during training, and inputs the divided rectangular region images to the learned model. The image quality improving unit 224 then arranges and joins each of the high-quality rectangular region images output from the learned model in the same positional relationship as the corresponding rectangular region images input to the learned model. The image quality improving unit 224 can thereby generate a high-quality OCTA image 302 corresponding to the input OCTA image 301.
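 The divide-process-reassemble flow described above can be sketched as follows, with `model` standing in for the learned model (here, any callable mapping a patch to an equally sized high-quality patch). For brevity the sketch assumes the image sides are multiples of the training patch size.

```python
import numpy as np

def improve_by_tiles(image, model, patch=8):
    """Divide the input image into training-size rectangular regions,
    run each through `model`, and place each output region back in
    the same position as the corresponding input region."""
    out = np.zeros_like(image, dtype=float)
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            out[y:y + patch, x:x + patch] = model(image[y:y + patch, x:x + patch])
    return out
```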
 Next, a series of image processing operations according to the present embodiment is described with reference to FIGS. 5 to 7. FIG. 5 is a flowchart of the series of image processing operations according to the present embodiment.
 First, in step S501, the acquisition unit 210 acquires a plurality of pieces of three-dimensional tomographic information obtained by imaging the eye E a plurality of times. The acquisition unit 210 may acquire the tomographic information of the eye E using the OCT imaging unit 100, or may acquire it from the storage unit 240 or from another device connected to the control unit 200.
 Here, the case of acquiring the tomographic information of the eye E using the OCT imaging unit 100 is described. First, the operator seats the patient (the subject) in front of the scanning optical system 150, performs alignment, enters patient information and the like into the control unit 200, and then starts OCT imaging. The drive control unit 230 of the control unit 200 drives the galvano mirror of the scanning unit 152 to scan substantially the same location of the subject's eye a plurality of times, acquiring a plurality of pieces of tomographic information (interference signals) at substantially the same location. The drive control unit 230 then drives the galvano mirror of the scanning unit 152 by a minute amount in the sub-scanning direction, orthogonal to the main scanning direction, and acquires a plurality of pieces of tomographic information at another location (an adjacent scan line) of the eye E. By repeating this control, the acquisition unit 210 acquires a plurality of pieces of three-dimensional tomographic information over a predetermined range of the eye E.
 次に、ステップS502において、断層画像生成部221は、取得された複数の三次元の断層情報に基づいて、複数の三次元断層画像を生成する。なお、取得部210が、ステップS501において、記憶部240や制御部200に接続される他の装置から複数の三次元断層画像を取得する場合には、ステップS502は省略されてよい。 Next, in step S502, the tomographic image generation unit 221 generates a plurality of three-dimensional tomographic images based on the obtained plurality of three-dimensional tomographic information. Note that when the acquisition unit 210 acquires a plurality of three-dimensional tomographic images from another device connected to the storage unit 240 or the control unit 200 in step S501, step S502 may be omitted.
 ステップS503では、モーションコントラスト生成部222が、複数の三次元断層画像に基づいて、三次元モーションコントラストデータ(三次元モーションコントラスト画像)を生成する。なお、モーションコントラスト生成部222は、略同一箇所について取得した3枚以上の断層画像に基づいて複数のモーションコントラストデータを求め、それらの平均値を最終的なモーションコントラストデータとして生成してもよい。なお、取得部210が、ステップS501において、記憶部240や制御部200に接続される他の装置から三次元モーションコントラストデータを取得する場合には、ステップS502及びステップS503は省略されてよい。 In step S503, the motion contrast generation unit 222 generates three-dimensional motion contrast data (three-dimensional motion contrast image) based on the plurality of three-dimensional tomographic images. Note that the motion contrast generation unit 222 may obtain a plurality of pieces of motion contrast data based on three or more tomographic images acquired for substantially the same location, and generate an average value of the pieces of motion contrast data as final motion contrast data. If the acquisition unit 210 acquires the three-dimensional motion contrast data from the storage unit 240 or another device connected to the control unit 200 in step S501, steps S502 and S503 may be omitted.
 In step S504, the En-Face image generation unit 223 generates an OCTA image from the three-dimensional motion contrast data, in accordance with an instruction from the operator or based on a predetermined En-Face image generation range. Note that when the acquisition unit 210 acquires an OCTA image in step S501 from the storage unit 240 or from another apparatus connected to the control unit 200, steps S502 to S504 may be omitted.
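The En-Face generation of step S504 amounts to collapsing the three-dimensional motion contrast volume over a chosen depth range into a two-dimensional image. The sketch below assumes a `(z, y, x)` volume layout and a mean or maximum projection; the embodiment leaves the projection method open, and `en_face` is a hypothetical helper name.

```python
import numpy as np

def en_face(volume, z_top, z_bottom, mode="mean"):
    """Project a 3-D motion contrast volume (z, y, x) over the depth
    range [z_top, z_bottom) into a 2-D OCTA image.

    The choice between mean and maximum projection is an assumption;
    the generation range corresponds to the depth range selected by
    the operator or a predetermined setting.
    """
    sub = volume[z_top:z_bottom]
    return sub.mean(axis=0) if mode == "mean" else sub.max(axis=0)
```

Changing `z_top`/`z_bottom` corresponds to the operator changing the En-Face generation range described later.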
 In step S505, the image quality improvement unit 224 performs image quality improvement processing on the OCTA image using the learned model. The image quality improvement unit 224 inputs the OCTA image into the learned model and generates a high-quality OCTA image based on the output from the learned model. When the learned model has been trained on divided image regions, the image quality improvement unit 224 first divides the input OCTA image into a group of rectangular region images based on the image size used during training, and inputs the divided rectangular region images into the learned model. The image quality improvement unit 224 then arranges each of the high-quality rectangular region images output from the learned model in the same positional relationship as the corresponding rectangular region images that were input, and combines them to generate the final high-quality OCTA image.
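The divide-and-recombine procedure in step S505 can be sketched as follows. The tile size and the stand-in `model_fn` callable are assumptions for illustration; a real implementation would pad images whose dimensions are not multiples of the training size.

```python
import numpy as np

def enhance_by_tiles(image, model_fn, tile=64):
    """Patch-wise inference as in step S505: split the input into
    rectangular regions matching the training image size, run the
    learned model on each, and place each output at the same position
    as its input before recombining.

    `model_fn` is a hypothetical callable standing in for the learned
    model; it maps a (tile, tile) array to an enhanced (tile, tile)
    array. Image dimensions are assumed to be multiples of `tile`.
    """
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y + tile, x:x + tile] = model_fn(image[y:y + tile, x:x + tile])
    return out
```

Because each output tile is written back at its input's coordinates, the recombined image preserves the original positional relationships.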
 In step S506, the display control unit 250 causes the display unit 270 to display the high-quality OCTA image (second image) generated by the image quality improvement unit 224 switchably with the original OCTA image (first image). As described above, image quality improvement processing using a machine learning model may render blood vessels on the OCTA image that do not actually exist, or may erase blood vessels that do exist. By causing the display unit 270 to display the generated high-quality OCTA image switchably with the original OCTA image, the display control unit 250 makes it easy to judge whether a blood vessel was newly generated by the image quality improvement processing or was also present in the original image. When the display processing by the display control unit 250 ends, the series of image processing ends.
 Next, a method of operating the control unit 200 will be described with reference to FIGS. 6A to 7. FIGS. 6A and 6B show an example of a report screen that switches between the images before and after the image quality improvement processing. The report screen 600 shown in FIG. 6A shows a tomographic image 611 and an OCTA image 601 before the image quality improvement processing. The report screen 600 shown in FIG. 6B shows the tomographic image 611 and an OCTA image 602 (a high-quality OCTA image) after the image quality improvement processing.
 On the report screen 600 shown in FIG. 6A, when the operator presses the right mouse button on the OCTA image 601 using a mouse, which is an example of the input unit 260, a pop-up menu 620 for selecting whether to perform the image quality improvement processing is displayed. When the operator selects performing the image quality improvement processing on the pop-up menu 620, the image quality improvement unit 224 executes the image quality improvement processing on the OCTA image 601.
 Then, as shown in FIG. 6B, the display control unit 250 switches the display on the report screen 600 from the OCTA image 601 before the image quality improvement processing to the OCTA image 602 after the image quality improvement processing. By pressing the right mouse button again on the OCTA image 602 to open the pop-up menu 620, the display can also be switched back to the OCTA image 601 before the image quality improvement processing.
 Although an example has been described in which the images before and after the image quality improvement processing are switched via the pop-up menu 620 displayed in response to pressing the right mouse button, the images may be switched by any method other than a pop-up menu. For example, the images may be switched by a button arranged on the report screen (for example, the button 3420 in FIGS. 18, 20A, and 20B), a pull-down menu, radio buttons, check boxes, or keyboard operation. Further, the images may be switched by operating the mouse wheel or by a touch operation on a touch panel display.
 By the above method, the operator can freely switch between the OCTA image 601 before the image quality improvement processing and the OCTA image 602 after the image quality improvement processing. The operator can therefore easily compare the OCTA images before and after the image quality improvement processing and easily confirm changes in the OCTA image caused by the processing. Accordingly, even if the image quality improvement processing renders blood vessels on the OCTA image that do not actually exist, or erases blood vessels that do exist, the operator can easily identify them and can easily judge the authenticity of the tissue depicted in the image.
 In the display method described above, the images before and after the image quality improvement processing are displayed switchably; however, a similar effect can be obtained by displaying these images side by side or superimposed. FIG. 7 shows an example of a report screen in which the images before and after the image quality improvement processing are displayed side by side. On the report screen 700 shown in FIG. 7, an OCTA image 701 before the image quality improvement processing and an OCTA image 702 after the image quality improvement processing are displayed side by side.
 In this case as well, the operator can easily compare the images before and after the image quality improvement processing and easily confirm changes in the image caused by the processing. Accordingly, even if the image quality improvement processing renders blood vessels on the OCTA image that do not actually exist, or erases blood vessels that do exist, the operator can easily identify them and can easily judge the authenticity of the tissue depicted in the image. When the images before and after the image quality improvement processing are displayed superimposed, the display control unit 250 can set a transparency for at least one of the images before and after the processing and cause the display unit 270 to display the two images superimposed.
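The superimposed display with a transparency setting can be sketched as simple alpha compositing. The linear blending formula is an assumption for illustration; the embodiment only states that a transparency is set for at least one of the two images.

```python
import numpy as np

def blend(before, after, alpha=0.5):
    """Superimpose the pre- and post-processing images.

    `alpha` is the assumed transparency weight given to the
    post-processing image; alpha=0 shows only the original,
    alpha=1 shows only the enhanced image.
    """
    return (1.0 - alpha) * np.asarray(before, float) + alpha * np.asarray(after, float)
```

Varying `alpha` interactively lets the operator fade between the two images while keeping them registered.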
 As described above, the image quality improvement unit 224 may perform the image quality improvement processing using a learned model not only on OCTA images but also on tomographic images and intensity En-Face images. In that case, pairs in which a tomographic image or intensity En-Face image before superimposition (averaging) is the input data and the corresponding superimposed tomographic image or intensity En-Face image is the output data can be used as the training data of the learned model. The learned model may be a single learned model trained on training data including OCTA images, tomographic images, and the like, or a plurality of learned models each trained on one type of image. When a plurality of learned models are used, the image quality improvement unit 224 can use the learned model corresponding to the type of image to be processed. The image quality improvement unit 224 may also perform the image quality improvement processing using a learned model on three-dimensional motion contrast images and three-dimensional tomographic images; the training data in that case can be prepared in the same manner as described above.
 FIG. 7 also shows a tomographic image 711 before the image quality improvement processing and a tomographic image 712 after the image quality improvement processing, displayed side by side. Note that the display control unit 250 may cause the display unit 270 to switch between the tomographic images or intensity En-Face images before and after the image quality improvement processing, as with the OCTA images before and after the processing shown in FIGS. 6A and 6B. The display control unit 250 may also cause the display unit 270 to display the tomographic images or intensity En-Face images before and after the processing superimposed. In these cases as well, the operator can easily compare the images before and after the image quality improvement processing and easily confirm changes caused by the processing. Accordingly, even if the processing renders tissue in the image that does not actually exist, or erases tissue that does exist, the operator can easily identify it and can easily judge the authenticity of the tissue depicted in the image.
 As described above, the control unit 200 according to the present embodiment includes the image quality improvement unit 224 and the display control unit 250. The image quality improvement unit 224 uses a learned model to generate, from a first image of the subject's eye, a second image that has undergone at least one of noise reduction and contrast enhancement compared with the first image. The display control unit 250 causes the display unit 270 to display the first image and the second image switchably, side by side, or superimposed. The display control unit 250 can switch between the first image and the second image on the display unit 270 in response to an instruction from the operator.
 Thus, the control unit 200 can generate, from the original image, a high-quality image in which noise is reduced or contrast is enhanced. The control unit 200 can therefore generate images better suited to image diagnosis than before, such as clearer images or images in which the site or lesion to be observed is emphasized.
 In addition, the operator can easily compare the images before and after the image quality improvement processing and easily confirm changes in the image caused by the processing. Accordingly, even if the processing renders tissue in the image that does not actually exist, or erases tissue that does exist, the operator can easily identify it and can easily judge the authenticity of the tissue depicted in the image.
 In the learned model according to the present embodiment, superimposed images are used as the output data of the training data, but the training data is not limited to this. For example, a high-quality image obtained by applying maximum a posteriori estimation processing (MAP estimation processing) to the group of original images may be used as the output data of the training data. In the MAP estimation processing, a likelihood function is obtained from the probability density of each pixel value across the plurality of images, and the true signal value (pixel value) is estimated using the obtained likelihood function.
 The high-quality image obtained by the MAP estimation processing is a high-contrast image based on pixel values close to the true signal values. In addition, since the estimated signal values are obtained based on the probability density, randomly occurring noise is reduced in the high-quality image obtained by the MAP estimation processing. Therefore, by using high-quality images obtained by the MAP estimation processing as training data, the learned model can generate, from an input image, a high-quality image suitable for image diagnosis in which noise is reduced and contrast is enhanced. The input/output pairs of the training data can be generated in the same manner as when superimposed images are used as the training data.
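As a minimal worked example of per-pixel MAP estimation, the sketch below assumes a Gaussian likelihood and a Gaussian prior, under which the posterior mode has a closed form (a precision-weighted average of the prior mean and the sample mean). The specific noise model is an assumption; the embodiment does not fix one.

```python
import numpy as np

def map_estimate(stack, prior_mean, prior_var, noise_var):
    """Per-pixel MAP estimate of the true signal from repeated images.

    `stack` has shape (n_images, H, W). Assuming independent Gaussian
    pixel noise with variance `noise_var` and a Gaussian prior
    N(prior_mean, prior_var) on the true value, the posterior mode is:

        post_var * (prior_mean / prior_var + n * sample_mean / noise_var)

    where post_var = 1 / (1/prior_var + n/noise_var).
    """
    n = stack.shape[0]
    sample_mean = stack.mean(axis=0)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    return post_var * (prior_mean / prior_var + n * sample_mean / noise_var)
```

With a very diffuse prior the estimate approaches the sample mean, which matches the intuition that MAP estimation suppresses random noise while tracking the true signal.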
 A high-quality image obtained by applying smoothing filter processing to the original image may also be used as the output data of the training data. In this case, the learned model can generate, from an input image, a high-quality image with reduced random noise. Further, an image obtained by applying gradation conversion processing to the original image may be used as the output data of the training data. In this case, the learned model can generate a contrast-enhanced high-quality image from an input image. The input/output pairs of the training data can be generated in the same manner as when superimposed images are used as the training data.
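The two output-data preparations mentioned here can be sketched as follows. A box (mean) filter and a gamma curve are assumed examples only; the embodiment does not specify which smoothing filter or tone curve is used.

```python
import numpy as np

def box_smooth(img, k=3):
    """Box (mean) filter as one example of smoothing filter processing;
    edges are handled by replicating the border pixels."""
    pad = k // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def gradation_convert(img, gamma=0.5):
    """Gamma curve as one assumed example of gradation conversion;
    input is assumed normalized to [0, 1]. gamma < 1 brightens and
    stretches contrast in dark regions."""
    return np.power(np.clip(img, 0.0, 1.0), gamma)
```

Applying `box_smooth` to the original image yields the noise-reduced output data; applying `gradation_convert` yields the contrast-enhanced output data.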
 The input data of the training data may be images acquired from an imaging apparatus having the same image quality tendency as the OCT imaging unit 100. The output data of the training data may be high-quality images obtained by high-cost processing such as a successive approximation method, or high-quality images obtained by imaging the subject corresponding to the input data with an imaging apparatus of higher performance than the OCT imaging unit 100. Further, the output data may be high-quality images obtained by performing rule-based noise reduction processing based on the structure of the subject or the like. The noise reduction processing can include, for example, processing that replaces a single high-intensity pixel appearing in a low-intensity region, which is clearly noise, with the average of the neighboring low-intensity pixel values. Accordingly, the learned model may use, as training data, images captured by an imaging apparatus of higher performance than the one used to capture the input images, or images acquired through an imaging process involving more steps than the imaging process of the input images.
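The rule-based noise reduction example given above (replacing an isolated bright pixel in a dark region with the mean of its neighbors) can be sketched directly. The intensity threshold separating "high" from "low" is an assumed parameter not specified in the text.

```python
import numpy as np

def suppress_isolated_bright_pixels(img, thresh):
    """Replace a single high-intensity pixel surrounded entirely by
    low-intensity pixels with the average of those neighbors.

    `thresh` is an assumed parameter dividing high from low intensity.
    Border pixels are left untouched for simplicity.
    """
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = np.array([img[y + dy, x + dx]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                           if (dy, dx) != (0, 0)], dtype=float)
            # "clearly noise": bright pixel whose 8 neighbors are all dark
            if img[y, x] > thresh and (nb <= thresh).all():
                out[y, x] = nb.mean()
    return out
```

A bright pixel with even one bright neighbor is treated as possible structure (e.g. a vessel) and left unchanged.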
 Although it has been described that the image quality improvement unit 224 uses a learned model to generate a high-quality image in which noise is reduced or contrast is enhanced, the image quality improvement processing by the image quality improvement unit 224 is not limited to this. It suffices that the image quality improvement unit 224 can generate, by the image quality improvement processing, an image of a quality better suited to image diagnosis, as described above.
 When the display control unit 250 causes the display unit 270 to display the images before and after the image quality improvement processing side by side, it may enlarge either of the displayed images in response to an instruction from the operator. More specifically, for example, when the operator selects the OCTA image 701 on the report screen 700 shown in FIG. 7, the display control unit 250 can enlarge the OCTA image 701 on the report screen 700. Likewise, when the operator selects the OCTA image 702 after the image quality improvement processing, the display control unit 250 can enlarge the OCTA image 702 on the report screen 700. In this way, the operator can observe in more detail whichever of the images before and after the processing the operator wishes to observe.
 Further, when the generation range of an En-Face image such as an OCTA image is changed in response to an instruction from the operator, the control unit 200 may change the images displayed side by side to an image based on the changed generation range and its enhanced counterpart. More specifically, when the operator changes the En-Face image generation range via the input unit 260, the En-Face image generation unit 223 generates the pre-processing En-Face image based on the changed generation range. The image quality improvement unit 224 then uses the learned model to generate a high-quality En-Face image from the En-Face image newly generated by the En-Face image generation unit 223. After that, the display control unit 250 replaces the En-Face images before and after the image quality improvement processing displayed side by side on the display unit 270 with the newly generated En-Face images before and after the processing. In this way, the operator can observe the En-Face images before and after the processing based on the changed depth range while freely changing the depth range to be observed.
(Modification 1)
 As described above, an image that has undergone image quality improvement processing using a learned model may render tissue that does not actually exist, or may lose tissue that does exist. An erroneous diagnosis may therefore result when the operator performs image diagnosis based on such an image. Accordingly, when causing the display unit 270 to display an OCTA image, tomographic image, or the like after the image quality improvement processing, the display control unit 250 may also display an indication that the image has undergone image quality improvement processing using a learned model. In this case, the occurrence of erroneous diagnoses by the operator can be suppressed. The indication may take any form as long as it makes clear that the image is a high-quality image obtained using a learned model.
(Modification 2)
 The first embodiment has described an example in which the image quality improvement processing is applied to an OCTA image, tomographic image, or the like obtained by a single imaging (examination). Image quality improvement processing using a learned model can also be applied to a plurality of OCTA images, tomographic images, and the like obtained by a plurality of imagings (examinations). In Modification 2, a configuration in which images obtained by applying image quality improvement processing using a learned model to a plurality of OCTA images, tomographic images, and the like are displayed simultaneously will be described with reference to FIGS. 8A and 8B.
 FIGS. 8A and 8B show an example of a time-series report screen for displaying a plurality of OCTA images acquired by imaging the same subject's eye a plurality of times over time. On the report screen 800 shown in FIG. 8A, a plurality of OCTA images 801 before the image quality improvement processing are displayed in chronological order. The report screen 800 also includes a pop-up menu 820, and by operating the pop-up menu 820 via the input unit 260 the operator can select whether to apply the image quality improvement processing.
 When the operator selects applying the image quality improvement processing, the image quality improvement unit 224 applies the image quality improvement processing using the learned model to all the displayed OCTA images. Then, as shown in FIG. 8B, the display control unit 250 switches the display from the plurality of OCTA images 801 to the plurality of OCTA images 802 after the image quality improvement processing.
 When the operator selects not applying the image quality improvement processing on the pop-up menu 820, the display control unit 250 switches the display from the plurality of OCTA images 802 after the image quality improvement processing back to the plurality of OCTA images 801 before the processing.
 In this modification, an example has been described in which a plurality of OCTA images before and after image quality improvement processing using a learned model are switched and displayed simultaneously. However, a plurality of tomographic images, intensity En-Face images, and the like before and after the processing may also be switched and displayed simultaneously. The operation method is not limited to the pop-up menu 820; any operation method may be employed, such as buttons, pull-down menus, radio buttons, or check boxes arranged on the report screen, or keyboard, mouse wheel, or touch panel operation.
(Embodiment 2)
 A learned model outputs the output data most likely to correspond to the input data according to its training tendency. In this regard, when a learned model is trained on a group of images with a similar image quality tendency as training data, it can enhance images of that tendency more effectively. Therefore, in Embodiment 2, the image quality improvement processing is performed more effectively by using a plurality of learned models, each trained on training data composed of pairs grouped by imaging conditions such as the imaged site, or by En-Face image generation range.
 Hereinafter, the OCT apparatus according to the present embodiment will be described with reference to FIGS. 9 and 10. The configuration of the OCT apparatus according to the present embodiment is the same as that of the OCT apparatus 1 according to the first embodiment except for the control unit; therefore, the same reference numerals are used for configurations identical to those shown in FIG. 1, and their description is omitted. The OCT apparatus according to the present embodiment will be described below focusing on the differences from the OCT apparatus 1 according to the first embodiment.
 FIG. 9 shows a schematic configuration of a control unit 900 according to the present embodiment. The configuration of the control unit 900 according to the present embodiment, other than an image processing unit 920 and a selection unit 925, is the same as the corresponding configuration of the control unit 200 according to the first embodiment. Therefore, configurations identical to those shown in FIG. 2 are denoted by the same reference numerals, and their description is omitted.
 The image processing unit 920 of the control unit 900 includes a selection unit 925 in addition to the tomographic image generation unit 221, the motion contrast generation unit 222, the En-Face image generation unit 223, and the image quality improvement unit 224.
 The selection unit 925 selects, from among the plurality of learned models, the learned model to be used by the image quality improvement unit 224, based on the imaging conditions of the image to be processed or on the En-Face image generation range. The image quality improvement unit 224 performs the image quality improvement processing on the target OCTA image, tomographic image, or the like using the learned model selected by the selection unit 925, and generates a high-quality OCTA image or high-quality tomographic image.
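The role of the selection unit 925 can be sketched as a lookup into a registry of learned models keyed by imaging condition and generation range. The key structure and the fallback to a general-purpose model are assumptions for illustration; the embodiment only requires that the model matching the image's conditions be chosen.

```python
def select_model(models, imaging_conditions, generation_range):
    """Pick the learned model matching the image to be enhanced.

    `models` is a hypothetical registry mapping
    (imaged site, En-Face generation range) -> learned model, plus a
    "general" entry used as a fallback when no specific model exists.
    `imaging_conditions` is a dict such as {"site": "macula"}.
    """
    key = (imaging_conditions.get("site"), generation_range)
    if key in models:
        return models[key]
    # fall back to a general-purpose model for unseen conditions
    return models["general"]
```

For example, a macula OCTA image with a superficial-layer generation range would select the model trained on that grouping of training pairs.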
 Next, the plurality of learned models according to the present embodiment will be described. As described above, a learned model outputs the output data most likely to correspond to the input data according to its training tendency. In this regard, when a learned model is trained on a group of images with a similar image quality tendency as training data, it can enhance images of that tendency more effectively. Therefore, in the present embodiment, a plurality of learned models are prepared, each trained on training data composed of pairs grouped by imaging conditions, including the imaged site, imaging method, imaging region, imaging angle of view, scan density, and image resolution, or by En-Face image generation range.
 More specifically, for example, a plurality of learned models are prepared, such as a learned model trained with OCTA images of the macula as teacher data and a learned model trained with OCTA images of the optic disc as teacher data. Note that the macula and the optic disc are merely examples of imaging sites, and other imaging sites may be included. A learned model may also be prepared that uses, as teacher data, OCTA images of specific imaging regions within an imaging site such as the macula or the optic disc.
 Also, for example, when the retina is imaged at a wide angle of view with low scan density versus a narrow angle of view with high scan density, structures such as blood vessels are rendered very differently in the OCTA image. Therefore, learned models may be prepared that are each trained on teacher data corresponding to a particular imaging angle of view and scan density. Furthermore, examples of imaging modalities include SD-OCT and SS-OCT, which differ in image quality, imaging range, penetration depth, and so on. Accordingly, learned models may be prepared that are each trained on teacher data corresponding to a particular imaging modality.
 In addition, it is usually rare to generate an OCTA image that extracts the blood vessels of all retinal layers at once; it is more common to generate an OCTA image that extracts only the blood vessels present in a predetermined depth range. For example, OCTA images are generated for depth ranges such as the superficial retinal layer, the deep retinal layer, the outer retinal layer, and the superficial choroid, with the blood vessels extracted in each range. The appearance of the blood vessels depicted in an OCTA image differs greatly depending on the depth range. For example, the vessels depicted in the superficial retinal layer form a low-density, thin, and clearly delineated vascular network, whereas the vessels depicted in the superficial choroid are so dense that it is difficult to distinguish individual vessels. For this reason, learned models may be prepared that are each trained on teacher data corresponding to a particular generation range of an En-Face image such as an OCTA image.
 Although an example using OCTA images as teacher data has been described here, when the image quality improvement processing is performed on tomographic images, luminance En-Face images, and the like, as in the first embodiment, those images can be used as teacher data. In that case, a plurality of learned models are prepared, each trained on teacher data corresponding to the imaging conditions of those images or to the En-Face image generation range.
 Next, a series of image processing operations according to the present embodiment will be described with reference to FIG. 10. FIG. 10 is a flowchart of the series of image processing operations according to the present embodiment. Description of processing similar to the series of image processing operations according to the first embodiment will be omitted as appropriate.
 First, in step S1001, as in step S501 according to the first embodiment, the acquisition unit 210 acquires a plurality of pieces of three-dimensional tomographic information obtained by imaging the eye E multiple times. The acquisition unit 210 may acquire the tomographic information of the eye E using the OCT imaging unit 100, or may acquire it from the storage unit 240 or from another apparatus connected to the control unit 200.
 The acquisition unit 210 also acquires a group of imaging conditions related to the tomographic information. Specifically, the acquisition unit 210 can acquire imaging conditions such as the imaging site and the imaging modality used when the tomographic information was captured. Depending on the data format of the tomographic information, the acquisition unit 210 may acquire the imaging condition group stored in the data structure constituting the tomographic information data. When the imaging conditions are not stored in the data structure of the tomographic information, the acquisition unit 210 can acquire the imaging condition group from a server, database, or the like that stores a file describing the imaging conditions. The acquisition unit 210 may also estimate the imaging condition group from an image based on the tomographic information by any known method.
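 The fallback chain described above can be sketched as follows. This is a minimal illustration only; the dictionary layout, key names, and the stubbed estimation step are assumptions for the sketch, not the actual data format of the apparatus.

```python
def get_imaging_conditions(tomo, database=None):
    """Resolve the imaging condition group for a piece of tomographic data.

    Order of precedence (cf. acquisition unit 210):
      1) conditions embedded in the tomographic data structure itself,
      2) a lookup in an external server/database keyed by a scan identifier,
      3) estimation from the image content (stubbed here).
    """
    # 1) conditions stored in the data structure
    if "conditions" in tomo:
        return tomo["conditions"]
    # 2) server / database lookup
    if database is not None and tomo.get("scan_id") in database:
        return database[tomo["scan_id"]]
    # 3) last resort: estimate from the image (placeholder result)
    return {"site": "unknown", "estimated": True}
```

In practice step 3 would invoke an actual estimation method; the placeholder return value merely marks that the conditions were not read from metadata.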
 When the acquisition unit 210 acquires a plurality of three-dimensional tomographic images, three-dimensional motion contrast data, OCTA images, and the like, it acquires the imaging condition group related to the acquired images and data. Note that when only learned models trained on teacher data grouped by the generation range of OCTA images or luminance En-Face images are used for the image quality improvement processing, the acquisition unit 210 need not acquire the imaging condition group of the tomographic images.
 Steps S1002 to S1004 are the same as steps S502 to S504 according to the first embodiment, and their description is therefore omitted. When the En-Face image generation unit 223 generates an OCTA image in step S1004, the process proceeds to step S1005.
 In step S1005, the selection unit 925 selects the learned model to be used by the image quality improvement unit 224, based on the imaging condition group and the generation range of the generated OCTA image and on information about the teacher data of the plurality of learned models. More specifically, for example, when the imaging site of the OCTA image is the optic disc, the selection unit 925 selects a learned model trained with OCTA images of the optic disc as teacher data. Also, for example, when the generation range of the OCTA image is the superficial retinal layer, the selection unit 925 selects a learned model trained with OCTA images whose generation range is the superficial retinal layer as teacher data.
 Note that even when the imaging condition group and the generation range of the generated OCTA image do not completely match the teacher data information of any learned model, the selection unit 925 may select a learned model trained with images having a similar image quality tendency as teacher data. In this case, for example, the selection unit 925 may hold a table describing the correspondence between imaging condition groups and generation ranges of OCTA images and the learned models to be used.
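 A hypothetical sketch of such a correspondence table is shown below: learned models are keyed by (imaging site, En-Face generation range), and a condition with no dedicated model falls back to the model trained on images with a similar image quality tendency. All registry entries and model names here are illustrative assumptions, not models actually disclosed by the apparatus.

```python
# Models trained on teacher data grouped by (imaging site, generation range).
MODEL_REGISTRY = {
    ("macula", "superficial"): "model_macula_superficial",
    ("macula", "deep"): "model_macula_deep",
    ("optic_disc", "superficial"): "model_disc_rpc",
}

# Correspondence table: an uncovered condition maps to a covered condition
# whose images have a similar image quality tendency.
SIMILAR_CONDITIONS = {
    ("optic_disc", "deep"): ("macula", "deep"),
}

def select_model(site, generation_range):
    """Select a learned model the way the selection unit 925 might."""
    key = (site, generation_range)
    if key in MODEL_REGISTRY:
        return MODEL_REGISTRY[key]          # exact match on conditions
    if key in SIMILAR_CONDITIONS:
        return MODEL_REGISTRY[SIMILAR_CONDITIONS[key]]  # similar-tendency fallback
    raise KeyError("no learned model registered for %r" % (key,))
```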
 In step S1006, the image quality improvement unit 224 performs the image quality improvement processing on the OCTA image generated in step S1004 using the learned model selected by the selection unit 925, and generates a high-quality OCTA image. The method of generating the high-quality OCTA image is the same as in step S505 according to the first embodiment, and its description is therefore omitted.
 Step S1007 is the same as step S506 according to the first embodiment, and its description is therefore omitted. When the high-quality OCTA image is displayed on the display unit 270 in step S1007, the series of image processing operations according to the present embodiment ends.
 As described above, the control unit 900 according to the present embodiment includes the selection unit 925, which selects the learned model to be used by the image quality improvement unit 224 from among a plurality of learned models. The selection unit 925 selects the learned model to be used by the image quality improvement unit 224 based on the depth-direction range for generating the OCTA image to be subjected to the image quality improvement processing.
 For example, the selection unit 925 can select a learned model based on the displayed site in the OCTA image to be subjected to the image quality improvement processing and on the depth-direction range for generating that OCTA image. Also, for example, the selection unit 925 may select the learned model to be used by the image quality improvement unit 224 based on the imaging site including the displayed site in the OCTA image to be subjected to the image quality improvement processing and on the depth-direction range for generating that OCTA image. Furthermore, for example, the selection unit 925 may select the learned model to be used by the image quality improvement unit 224 based on the imaging conditions of the OCTA image to be subjected to the image quality improvement processing.
 Because the control unit 900 performs the image quality improvement processing using a plurality of learned models trained with teacher data composed of pair groups grouped by imaging conditions or by En-Face image generation range, it can perform the image quality improvement processing more effectively.
 Although the present embodiment has described an example in which the selection unit 925 selects the learned model based on imaging conditions such as the imaging site of the OCTA image or on the generation range, the learned model may be changed based on conditions other than these. The selection unit 925 may select the learned model according to, for example, the projection method used when generating the OCTA image or the luminance En-Face image (maximum intensity projection or average intensity projection), or according to whether processing to remove artifacts caused by vessel shadows is applied. In such cases, learned models can be prepared that are each trained on teacher data corresponding to the projection method and the presence or absence of the artifact removal processing.
(Modification 3)
 In the second embodiment, the selection unit 925 automatically selects an appropriate learned model according to the imaging conditions, the En-Face image generation range, and the like. However, the operator may wish to manually select the image quality improvement processing to be applied to an image. Therefore, the selection unit 925 may select the learned model in accordance with an instruction from the operator.
 The operator may also wish to change the image quality improvement processing that has been applied to an image. Therefore, the selection unit 925 may change the learned model, and thereby the image quality improvement processing applied to the image, in accordance with an instruction from the operator.
 An operation method for manually changing the image quality improvement processing applied to an image will now be described with reference to FIGS. 11A and 11B. FIGS. 11A and 11B show an example of a report screen that switches between displaying the images before and after the image quality improvement processing. The report screen 1100 shown in FIG. 11A displays a tomographic image 1111 and an OCTA image 1101 to which image quality improvement processing using an automatically selected learned model has been applied. The report screen 1100 shown in FIG. 11B displays the tomographic image 1111 and an OCTA image 1102 to which image quality improvement processing using a learned model selected according to the operator's instruction has been applied. The report screen 1100 shown in FIGS. 11A and 11B also includes a processing designation unit 1120 for changing the image quality improvement processing applied to the OCTA image.
 Here, the OCTA image 1101 displayed on the report screen 1100 shown in FIG. 11A depicts the deep capillaries of the macula. On the other hand, the image quality improvement processing applied to the OCTA image using the learned model automatically selected by the selection unit 925 is one suited to the superficial vessels (RPC) of the optic disc. Therefore, the image quality improvement processing applied to the OCTA image 1101 displayed on the report screen 1100 shown in FIG. 11A is not optimal for the blood vessels extracted in that OCTA image.
 The operator therefore selects "Deep Capillary" in the processing designation unit 1120 via the input unit 260. In response to the operator's selection instruction, the selection unit 925 changes the learned model used by the image quality improvement unit 224 to one trained with OCTA images of the deep macular vessels as teacher data.
 The image quality improvement unit 224 performs the image quality improvement processing on the OCTA image again using the learned model changed by the selection unit 925. As shown in FIG. 11B, the display control unit 250 causes the display unit 270 to display the high-quality OCTA image 1102 newly generated by the image quality improvement unit 224.
 By configuring the selection unit 925 to change the learned model in accordance with the operator's instruction in this way, the operator can re-designate the appropriate image quality improvement processing for the same OCTA image. This designation of the image quality improvement processing may be performed any number of times.
 An example has been shown here in which the control unit 900 is configured so that the image quality improvement processing applied to an OCTA image can be changed manually. The control unit 900 may also be configured so that the image quality improvement processing applied to tomographic images, luminance En-Face images, and the like can be changed manually.
 The report screens shown in FIGS. 11A and 11B switch between displaying the images before and after the image quality improvement processing, but the report screen may instead display the images before and after the processing side by side or superimposed. Furthermore, the form of the processing designation unit 1120 is not limited to that shown in FIGS. 11A and 11B, and may be any form that allows the image quality improvement processing or the learned model to be designated. The types of image quality improvement processing shown in FIGS. 11A and 11B are also merely examples, and other types corresponding to the teacher data of the learned models may be included.
 As in the second modification, a plurality of images to which the image quality improvement processing has been applied may also be displayed simultaneously. In that case, the screen may also be configured so that the operator can designate which image quality improvement processing is applied. An example of the report screen in this case is shown in FIGS. 12A and 12B.
 FIGS. 12A and 12B show an example of a report screen that switches between displaying a plurality of images before and after the image quality improvement processing. The report screen 1200 shown in FIG. 12A displays OCTA images 1201 before the image quality improvement processing. The report screen 1200 shown in FIG. 12B displays OCTA images 1202 to which image quality improvement processing according to the operator's instruction has been applied. The report screen 1200 shown in FIGS. 12A and 12B also includes a processing designation unit 1220 for changing the image quality improvement processing applied to the OCTA images.
 In this case, the selection unit 925 selects the learned model corresponding to the image quality improvement processing designated via the processing designation unit 1220 as the learned model to be used by the image quality improvement unit 224. The image quality improvement unit 224 performs the image quality improvement processing on the plurality of OCTA images 1201 using the learned model selected by the selection unit 925. As shown in FIG. 12B, the display control unit 250 causes the generated plurality of high-quality OCTA images 1202 to be displayed on the report screen 1200 at once.
 Although the image quality improvement processing for OCTA images has been described, the learned model may likewise be selected and changed in accordance with the operator's instruction for image quality improvement processing on tomographic images, luminance En-Face images, and the like. The plurality of images before and after the image quality improvement processing may also be displayed side by side or superimposed on the report screen. In this case as well, a plurality of images to which the image quality improvement processing according to the operator's instruction has been applied can be displayed at once.
(Embodiment 3)
 In the first and second embodiments, the image quality improvement unit 224 automatically executes the image quality improvement processing after a tomographic image or an OCTA image is captured. However, the image quality improvement processing using a learned model executed by the image quality improvement unit 224 may take a long time. The generation of motion contrast data by the motion contrast generation unit 222 and the generation of an OCTA image by the En-Face image generation unit 223 also take time. Therefore, if the display of an image waits until the image quality improvement processing is completed after imaging, a long time may elapse between imaging and display.
 On the other hand, when imaging a subject's eye using an OCT apparatus, the imaging may fail due to blinking, unintended movement of the eye, or the like. The convenience of the OCT apparatus can therefore be improved by allowing the success or failure of imaging to be confirmed at an early stage. Accordingly, in the third embodiment, the OCT apparatus is configured so that, prior to the generation and display of a high-quality OCTA image, a luminance En-Face image or an OCTA image based on the tomographic information obtained by imaging the subject's eye is displayed, allowing the captured image to be checked at an early stage.
 The OCT apparatus according to the present embodiment will be described below with reference to FIG. 13. Since the configuration of the OCT apparatus according to the present embodiment is the same as that of the OCT apparatus 1 according to the first embodiment, the same reference numerals are used and the description is omitted. The OCT apparatus according to the present embodiment will be described below, focusing on the differences from the OCT apparatus 1 according to the first embodiment.
 FIG. 13 is a flowchart of a series of image processing operations according to the present embodiment. First, in step S1301, the acquisition unit 210 images the eye E using the OCT imaging unit 100 and acquires a plurality of pieces of three-dimensional tomographic information.
 Step S1302 is the same as step S502 according to the first embodiment, and its description is therefore omitted. When a three-dimensional tomographic image is generated in step S1302, the process proceeds to step S1303.
 In step S1303, the En-Face image generation unit 223 generates a front image of the fundus (a luminance En-Face image) by projecting the three-dimensional tomographic image generated in step S1302 onto a two-dimensional plane. Then, in step S1304, the display control unit 250 causes the display unit 270 to display the generated luminance En-Face image.
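 The projection in step S1303 can be sketched as follows, assuming the tomographic volume is a NumPy array shaped (z, y, x) with z as the depth axis. The optional depth slab mirrors the En-Face generation ranges discussed earlier, and both projection methods mentioned in this document (average and maximum intensity projection) are shown; the function name and array layout are assumptions for this sketch.

```python
import numpy as np

def en_face_projection(volume, z_range=None, mode="mean"):
    """Project a 3D tomographic volume (z, y, x) onto the en-face (y, x) plane.

    z_range optionally restricts the projection to a depth slab, e.g. the
    superficial retinal layer, matching an En-Face generation range.
    """
    slab = volume if z_range is None else volume[z_range[0]:z_range[1]]
    if mode == "mean":   # average intensity projection
        return slab.mean(axis=0)
    if mode == "max":    # maximum intensity projection
        return slab.max(axis=0)
    raise ValueError("unknown projection mode: %s" % mode)
```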
 Steps S1305 and S1306 are the same as steps S503 and S504 according to the first embodiment, and their description is therefore omitted. When an OCTA image is generated in step S1306, the process proceeds to step S1307. In step S1307, the display control unit 250 switches from the luminance En-Face image to the OCTA image generated in step S1306, before the image quality improvement processing, and causes the display unit 270 to display it.
 In step S1308, as in step S505 according to the first embodiment, the image quality improvement unit 224 performs the image quality improvement processing on the OCTA image generated in step S1306 using the learned model, and generates a high-quality OCTA image. In step S1309, the display control unit 250 switches from the OCTA image before the image quality improvement processing to the generated high-quality OCTA image and causes the display unit 270 to display it.
 As described above, the display control unit 250 according to the present embodiment causes the display unit 270 to display a luminance En-Face image (a third image), which is a front image generated based on tomographic data in the depth direction of the subject's eye, before the acquisition unit 210 acquires the OCTA image. Immediately after the OCTA image is acquired, the display control unit 250 switches the displayed luminance En-Face image to the OCTA image on the display unit 270. Further, after the image quality improvement unit 224 generates the high-quality OCTA image, the display control unit 250 switches the displayed OCTA image to the high-quality OCTA image on the display unit 270.
 This allows the operator to check the front image of the subject's eye immediately after imaging and to judge the success or failure of the imaging right away. Also, since the OCTA image is displayed immediately after it is generated, the operator can judge at an early stage whether the plurality of pieces of three-dimensional tomographic information for generating the motion contrast data have been acquired appropriately.
 For tomographic images, luminance En-Face images, and the like as well, displaying the image before the image quality improvement processing allows the operator to judge the success or failure of the imaging at an early stage.
 In the present embodiment, the motion contrast data generation processing (step S1305) is started after the luminance En-Face image display processing (step S1304), but the timing of the motion contrast data generation processing is not limited to this. For example, the motion contrast generation unit 222 may start the motion contrast data generation processing in parallel with the luminance En-Face image generation processing (step S1303) or the display processing (step S1304). Similarly, the image quality improvement unit 224 may start the image quality improvement processing (step S1308) in parallel with the OCTA image display processing (step S1307).
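 The overlap of display and computation described above can be illustrated with a minimal threading sketch: the luminance En-Face image is displayed (step S1304) while motion contrast generation (step S1305) runs in the background, so its result is ready by the time it is needed for step S1306. The function names and the single-worker pool are assumptions of this sketch, not the apparatus's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def display_while_generating(volumes, generate_motion_contrast, display):
    """Run motion contrast generation in parallel with the En-Face display."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        # Step S1305 started in the background...
        future = pool.submit(generate_motion_contrast, volumes)
        # ...while step S1304 displays the luminance En-Face image immediately.
        display("luminance En-Face image")
        # Result is available when OCTA generation (step S1306) needs it.
        return future.result()
```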
(Embodiment 4)
 In the first embodiment, an example was described in which the OCTA images before and after the image quality improvement processing are switched and displayed. In contrast, in the fourth embodiment, the images before and after the image quality improvement processing are compared.
 The OCT apparatus according to the present embodiment will be described below with reference to FIGS. 14 and 15. Since the configuration of the OCT apparatus according to the present embodiment is the same as that of the OCT apparatus 1 according to the first embodiment except for the control unit, the same components as those shown in FIG. 1 are denoted by the same reference numerals and their description is omitted. The OCT apparatus according to the present embodiment will be described below, focusing on the differences from the OCT apparatus 1 according to the first embodiment.
 FIG. 14 shows a schematic configuration of a control unit 1400 according to the present embodiment. The configuration of the control unit 1400 according to the present embodiment, other than an image processing unit 1420 and a comparison unit 1426, is the same as that of the control unit 200 according to the first embodiment. Therefore, the same components as those shown in FIG. 2 are denoted by the same reference numerals and their description is omitted.
The image processing unit 1420 of the control unit 1400 is provided with a comparison unit 1426 in addition to the tomographic image generation unit 221, the motion contrast generation unit 222, the En-Face image generation unit 223, and the image quality improving unit 224.
The comparison unit 1426 compares the image before the image quality improvement processing by the image quality improving unit 224 (the original image) with the image after the image quality improvement processing. More specifically, the comparison unit 1426 compares the images before and after the image quality improvement processing and calculates the difference between the pixel values at corresponding pixel positions in the two images.
The comparison unit 1426 then generates a color map image colored according to the magnitude of the difference values. For example, warm tones (yellow to orange to red) are used for pixels whose values are larger in the image after the image quality improvement processing than in the image before the processing, and cool tones (yellow-green to green to blue) are used for pixels whose values are smaller. With such a color scheme, a region shown in warm colors on the color map image can easily be identified as tissue restored (or newly created) by the image quality improvement processing. Similarly, a region shown in cool colors on the color map image can easily be identified as noise removed (or tissue erased) by the image quality improvement processing.
Note that this color scheme of the color map image is merely an example. The color scheme of the color map image may be set arbitrarily according to the desired configuration; for example, different color tones may be assigned according to the magnitude of the pixel values in the image after the image quality improvement processing relative to those in the image before the processing.
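As a concrete illustration, the color map generation described above can be sketched as follows. This is a minimal, hypothetical helper (not the actual implementation of the comparison unit 1426): the signed pixel-wise difference is mapped to an RGBA image, with red standing in for the warm tones (restored or newly created structure), blue for the cool tones (removed noise or erased structure), and an alpha value proportional to the magnitude of the difference so that unchanged regions stay transparent.

```python
import numpy as np

def make_color_map(before, after, transparent_below=5):
    """Hypothetical helper: colors each pixel by the signed difference
    (after - before). Warm (red) where the enhanced image is brighter,
    cool (blue) where it is darker; alpha tracks |difference| so that
    regions with little or no change remain transparent."""
    diff = after.astype(np.int16) - before.astype(np.int16)
    rgba = np.zeros(diff.shape + (4,), dtype=np.uint8)
    rgba[..., 0] = np.where(diff > 0, 255, 0)   # warm tone: restored / newly created structure
    rgba[..., 2] = np.where(diff < 0, 255, 0)   # cool tone: removed noise / erased structure
    alpha = np.abs(diff).clip(0, 255)
    alpha[alpha < transparent_below] = 0        # small changes are fully transparent
    rgba[..., 3] = alpha.astype(np.uint8)
    return rgba
```

In an actual apparatus, a graded yellow-orange-red / green-blue palette and a tunable transparency would replace the two flat colors used here; the mapping from difference magnitude to tone is, as the text notes, freely configurable.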
The display control unit 250 can superimpose the color map image generated by the comparison unit 1426 on the image before or after the image quality improvement processing and cause the display unit 270 to display the result.
Next, a series of image processing operations according to the present embodiment will be described with reference to FIG. 15. Since steps S1501 to S1505 are the same as steps S501 to S505 according to Embodiment 1, their description is omitted. When a high-quality OCTA image has been generated by the image quality improving unit 224 in step S1505, the process proceeds to step S1506.
In step S1506, the comparison unit 1426 compares the OCTA image generated in step S1504 with the high-quality OCTA image generated in step S1505, calculates the difference between the pixel values, and generates a color map image based on the pixel-value differences. Note that, instead of the pixel-value differences between the images before and after the image quality improvement processing, the comparison unit 1426 may compare the images using another method, such as the ratio of the pixel values or a correlation value between the two images, and generate the color map image based on that comparison result.
In step S1507, the display control unit 250 superimposes the color map image on the image before or after the image quality improvement processing and causes the display unit 270 to display the result. At this time, the display control unit 250 can set a transparency for the color map so that the color map image is superimposed on the target image without hiding the image beneath it.
In addition, the display control unit 250 may set a high transparency in portions of the color map image where the difference between the images before and after the image quality improvement processing is small (where the pixel values of the color map image are low), or may set the transparency so that portions where the difference is equal to or smaller than a predetermined value become completely transparent. In this way, both the image displayed under the color map image and the color map image itself can be viewed well. As for the transparency of the color map image, the comparison unit 1426 may generate a color map image that includes the transparency settings.
As described above, the control unit 1400 according to the present embodiment includes the comparison unit 1426, which compares the first image with the second image obtained by the image quality improvement processing. The comparison unit 1426 calculates the difference between the first image and the second image and generates a color map image that is color-coded based on the difference. The display control unit 250 controls the display on the display unit 270 based on the comparison result from the comparison unit 1426. More specifically, the display control unit 250 causes the display unit 270 to display the color map image superimposed on the first image or the second image.
Thus, by observing the color map image superimposed on the image before or after the image quality improvement processing, changes in the image caused by the processing can be confirmed more easily. Therefore, even if the image quality improvement processing depicts tissue that does not actually exist or erases tissue that does exist, the operator can identify such tissue more easily and can more easily judge whether the depicted tissue is genuine. In addition, based on the color scheme of the color map image, the operator can easily distinguish whether a given region was newly depicted or erased by the image quality improvement processing.
Note that the display control unit 250 can enable or disable the superimposed display of the color map image in accordance with an instruction from the operator. This on/off operation for the superimposed display of the color map image may be applied simultaneously to a plurality of images displayed on the display unit 270. In this case, the comparison unit 1426 generates a color map image for each corresponding pair of images before and after the image quality improvement processing, and the display control unit 250 can superimpose each color map image on the corresponding image before or after the processing. The display control unit 250 may also cause the display unit 270 to display the image before or after the image quality improvement processing before displaying the color map image.
Although an OCTA image has been used as an example in the present embodiment, similar processing can be performed when the image quality improvement processing is applied to a tomographic image, a luminance En-Face image, or the like. The comparison processing and the color map display processing according to the present embodiment can also be applied to the OCT apparatuses according to Embodiments 2 and 3.
(Modification 4)
The comparison unit 1426 may also compare the images before and after the image quality improvement processing, and the display control unit 250 may cause the display unit 270 to display a warning according to the comparison result from the comparison unit 1426. More specifically, when the pixel-value difference between the images before and after the image quality improvement processing calculated by the comparison unit 1426 is larger than a predetermined value, the display control unit 250 causes the display unit 270 to display a warning. With such a configuration, the operator can be alerted when, in the generated high-quality image, the learned model has generated tissue that does not actually exist or has erased tissue that does exist. Note that the comparison between the difference and the predetermined value may be performed by either the comparison unit 1426 or the display control unit 250. Furthermore, instead of the differences themselves, a statistical value such as the average of the differences may be compared with the predetermined value.
Furthermore, when the difference between the images before and after the image quality improvement processing is larger than a predetermined value, the display control unit 250 may refrain from causing the display unit 270 to display the image after the image quality improvement processing. In this case, when the learned model has generated tissue that does not actually exist or has erased tissue that does exist in the generated high-quality image, misdiagnosis based on that high-quality image can be suppressed. Note that the comparison between the difference and the predetermined value may be performed by either the comparison unit 1426 or the display control unit 250, and a statistical value such as the average of the differences may be compared with the predetermined value instead of the differences themselves.
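The safeguard of this modification can be sketched as follows. This is a minimal illustration with hypothetical function names and threshold values (the text leaves the predetermined values and the exact statistic open): the mean absolute difference between the images before and after enhancement is compared against a warning threshold and, separately, against a larger threshold above which the enhanced image is withheld from display.

```python
import numpy as np

def check_enhancement(before, after, warn_threshold=20.0, suppress_threshold=60.0):
    """Hypothetical sketch: summarize the before/after difference with a
    statistic (here the mean absolute difference) and decide whether to
    warn the operator and whether to display the enhanced image at all."""
    stat = float(np.mean(np.abs(after.astype(np.float64) - before.astype(np.float64))))
    return {
        "warn": stat > warn_threshold,          # show a warning on the display unit
        "display": stat <= suppress_threshold,  # withhold the enhanced image when exceeded
    }
```

Either check could equally be driven by the raw per-pixel differences rather than an aggregate statistic, as the text describes; the two-threshold split here is one way of combining the warning and the display suppression in a single decision.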
(Embodiment 5)
Next, an image processing apparatus (control unit 200) according to Embodiment 5 will be described with reference to FIGS. 20A and 20B. In the present embodiment, an example is described in which the display control unit 250 displays the processing result of the image quality improving unit 224 on the display unit 270. Although the present embodiment is described with reference to FIGS. 20A and 20B, the display screen is not limited to these. The image quality improvement processing can similarly be applied to a display screen that displays a plurality of images obtained at different dates and times side by side, as in follow-up observation. It can likewise be applied to a display screen, such as an imaging confirmation screen, on which the examiner confirms the success or failure of imaging immediately after imaging. The display control unit 250 can cause the display unit 270 to display the plurality of high-quality images generated by the image quality improving unit 224 as well as low-quality images that have not undergone the image quality improvement. Thus, a low-quality image and a high-quality image can each be output in accordance with an instruction from the examiner.
Hereinafter, an example of the interface 3400 will be described with reference to FIGS. 20A and 20B. Reference numeral 3400 denotes the entire screen, 3401 a patient tab, 3402 an imaging tab, 3403 a report tab, and 3404 a settings tab. The hatching on the report tab 3403 indicates the active state of the report screen. In the present embodiment, an example in which a report screen is displayed will be described. Im3405 is an SLO image, and Im3406 shows the OCTA En-Face image Im3407 superimposed on the SLO image Im3405. Here, an SLO image is a frontal image of the fundus acquired by an SLO (Scanning Laser Ophthalmoscope) optical system (not shown). Im3407 and Im3408 are OCTA En-Face images, Im3409 is a luminance En-Face image, and Im3411 and Im3412 are tomographic images. Boundary lines 3413 and 3414, superimposed on the tomographic images, indicate the upper and lower limits of the depth ranges of the OCTA En-Face images Im3407 and Im3408, respectively. The button 3420 is a button for designating execution of the image quality improvement processing. Of course, as described later, the button 3420 may instead be a button for instructing the display of a high-quality image.
In the present embodiment, the image quality improvement processing is executed either when the button 3420 is designated, or after determining whether to execute it based on information saved (stored) in a database. First, an example will be described in which the display is switched between a high-quality image and a low-quality image by designating the button 3420 in accordance with an instruction from the examiner. The target images of the image quality improvement processing are described here as OCTA En-Face images.
When the examiner designates the report tab 3403 and the screen transitions to the report screen, the low-quality OCTA En-Face images Im3407 and Im3408 are displayed. Then, when the examiner designates the button 3420, the image quality improving unit 224 executes the image quality improvement processing on the images Im3407 and Im3408 displayed on the screen. After the image quality improvement processing is completed, the display control unit 250 displays the high-quality images generated by the image quality improving unit 224 on the report screen. Since Im3406 is Im3407 superimposed on the SLO image Im3405, Im3406 also displays the image that has undergone the image quality improvement processing. The display of the button 3420 is then changed to the active state, indicating that the image quality improvement processing has been executed.
Here, the execution of the processing by the image quality improving unit 224 need not be limited to the timing at which the examiner designates the button 3420. Since the types of the OCTA En-Face images Im3407 and Im3408 to be displayed when the report screen is opened are known in advance, the image quality improvement processing may be executed at the time of the transition to the report screen. Then, at the timing when the button 3420 is pressed, the display control unit 250 may display the high-quality images on the report screen. Furthermore, the number of image types subjected to the image quality improvement processing in response to an instruction from the examiner, or at the time of the transition to the report screen, need not be two. The processing may be performed on images that are likely to be displayed, for example, a plurality of OCTA En-Face images such as those of the surface layer (Im2910), the deep layer (Im2920), the outer layer (Im2930), and the choroidal vascular network (Im2940) shown in FIGS. 19A and 19B. In this case, the images that have undergone the image quality improvement processing may be temporarily stored in a memory or stored in a database.
Next, a case will be described in which the image quality improvement processing is executed based on information saved (recorded) in the database. When a state in which the image quality improvement processing is to be executed is saved in the database, the high-quality image obtained by executing the image quality improvement processing is displayed by default at the time of the transition to the report screen. The button 3420 is then displayed in the active state by default, so that the examiner can see that the displayed image is a high-quality image obtained by executing the image quality improvement processing. If the examiner wishes to display the low-quality image from before the image quality improvement processing, the low-quality image can be displayed by designating the button 3420 to release its active state. To return to the high-quality image, the examiner designates the button 3420 again.
Whether the image quality improvement processing is to be executed is specified in the database at different levels of a hierarchy, such as a setting common to all the data stored in the database and a setting for each set of imaging data (each examination). For example, when a state in which the image quality improvement processing is to be executed has been saved for the entire database, but a state in which the examiner does not execute the processing has been saved for an individual set of imaging data (an individual examination), that imaging data is displayed the next time without the image quality improvement processing being executed. A user interface (not shown), for example a save button, may be used to save the execution state of the image quality improvement processing for each set of imaging data (each examination). Alternatively, when transitioning to other imaging data (another examination) or to other patient data (for example, changing to a display screen other than the report screen in response to an instruction from the examiner), the state in which the image quality improvement processing is to be executed may be saved based on the display state (for example, the state of the button 3420). Accordingly, when whether to execute the image quality improvement processing has not been specified for a given set of imaging data (a given examination), the processing is performed based on the information specified for the entire database, and when it has been specified for that set of imaging data (that examination), the processing can be executed individually based on that information.
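The two-level setting resolution described above can be sketched as follows. The data structure is a hypothetical illustration (not the patent's storage format): a per-examination flag, when present, overrides the database-wide default, and an absent entry means "not specified".

```python
def enhancement_enabled(db_default, per_exam_flags, exam_id):
    """Hypothetical sketch: per_exam_flags maps an examination ID to
    True/False; an absent entry means "not specified", in which case
    the setting for the entire database (db_default) applies."""
    if exam_id in per_exam_flags:
        return per_exam_flags[exam_id]
    return db_default
```

The same lookup pattern extends naturally to additional levels (for example, a per-patient setting between the database-wide and per-examination levels), with more specific entries always taking precedence.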
Although an example is shown in which Im3407 and Im3408 are displayed as the OCTA En-Face images in the present embodiment, the OCTA En-Face images to be displayed can be changed as designated by the examiner. Therefore, changing the images while execution of the image quality improvement processing is designated (while the button 3420 is in the active state) will now be described.
The images are changed using a user interface (not shown), for example a combo box. For example, when the examiner changes the image type from the surface layer to the choroidal vascular network, the image quality improving unit 224 executes the image quality improvement processing on the choroidal vascular network image, and the display control unit 250 displays the high-quality image generated by the image quality improving unit 224 on the report screen. That is, in response to an instruction from the examiner, the display control unit 250 may change the display of a high-quality image of a first depth range to the display of a high-quality image of a second depth range that differs at least partially from the first depth range. At this time, the display control unit 250 may make this change in response to the first depth range being changed to the second depth range in accordance with an instruction from the examiner. Note that, as described above, if high-quality images have already been generated for the images likely to be displayed at the time of the transition to the report screen, the display control unit 250 may simply display the already generated high-quality images.
The method of changing the image type is not limited to the above; it is also possible to generate an OCTA En-Face image with a different depth range, set by changing the reference layer and the offset values. In that case, when the reference layer or an offset value is changed, the image quality improving unit 224 executes the image quality improvement processing on the corresponding OCTA En-Face image, and the display control unit 250 displays the high-quality image on the report screen. The reference layer and the offset values can be changed using a user interface (not shown), for example a combo box or a text box. The generation range of the OCTA En-Face images can also be changed by dragging either of the boundary lines 3413 and 3414 superimposed on the tomographic images Im3411 and Im3412 (moving the layer boundary).
When a boundary line is changed by dragging, execution commands for the image quality improvement processing are issued continuously. The image quality improving unit 224 may therefore process every execution command, or may execute the processing only after the layer boundary has finished being changed by the drag. Alternatively, although execution of the image quality improvement processing is commanded continuously, the previous command may be canceled when the next command arrives, so that only the latest command is executed.
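The "cancel the previous command, execute the latest" strategy mentioned above can be sketched as follows. The class and method names are hypothetical: requests issued in quick succession during a drag simply overwrite one another, and only the most recent one is actually processed when the runner gets a chance to execute.

```python
class LatestOnlyRunner:
    """Hypothetical sketch: a newer request replaces any pending one, so
    when run_pending() is eventually called, only the latest boundary
    position triggers the (potentially slow) enhancement."""

    def __init__(self, func):
        self._func = func      # the expensive operation, e.g. enhancement
        self._pending = None   # at most one request is ever queued

    def request(self, *args):
        self._pending = args   # cancels any not-yet-executed request

    def run_pending(self):
        if self._pending is None:
            return None        # nothing queued
        args, self._pending = self._pending, None
        return self._func(*args)
```

Debouncing (the "execute after the drag ends" option in the text) can be built on the same structure by calling run_pending() only once no new request has arrived for a short interval.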
Note that the image quality improvement processing may take a relatively long time. Therefore, regardless of which of the timings described above a command is executed at, it may take a relatively long time until the high-quality image is displayed. Accordingly, from when the depth range for generating an OCTA En-Face image is set in accordance with an instruction from the examiner until the high-quality image is displayed, the OCTA En-Face image (low-quality image) corresponding to the set depth range may be displayed. That is, when the depth range is set, the OCTA En-Face image (low-quality image) corresponding to the set depth range is displayed, and when the image quality improvement processing is completed, the display of that OCTA En-Face image (the low-quality image) may be changed to the display of the high-quality image. In addition, information indicating that the image quality improvement processing is being executed may be displayed from when the depth range is set until the high-quality image is displayed. Note that these configurations are applicable not only on the premise that execution of the image quality improvement processing has already been designated (the button 3420 is in the active state), but also, for example, during the interval until the high-quality image is displayed after execution of the image quality improvement processing has been instructed in accordance with an instruction from the examiner.
In the present embodiment, an example has been shown in which different layers are displayed as the OCTA En-Face images Im3407 and Im3408 and the low-quality and high-quality images are switched for display, but the present invention is not limited to this. For example, a low-quality OCTA En-Face image may be displayed as Im3407 and a high-quality OCTA En-Face image may be displayed as Im3408, side by side. When the images are switched for display, they are switched at the same location, making it easy to compare portions that have changed; when they are displayed side by side, the images can be displayed at the same time, making it easy to compare the images as a whole.
Next, execution of the image quality improvement processing at screen transitions will be described with reference to FIGS. 20A and 20B. FIG. 20B is an example of a screen in which the OCTA En-Face image Im3407 of FIG. 20A is displayed enlarged. In FIG. 20B as well, the button 3420 is displayed as in FIG. 20A. The screen transition from FIG. 20A to FIG. 20B is made, for example, by double-clicking the OCTA En-Face image Im3407, and the transition from FIG. 20B back to FIG. 20A is made with the close button 3430. Note that the screen transitions are not limited to the methods shown here, and a user interface (not shown) may be used.
When execution of the image quality improvement processing is designated at the time of a screen transition (the button 3420 is active), that state is maintained through the transition. That is, when the screen transitions to that of FIG. 20B while a high-quality image is displayed on the screen of FIG. 20A, the high-quality image is also displayed on the screen of FIG. 20B, and the button 3420 is put into the active state. The same applies to the transition from FIG. 20B to FIG. 20A. In FIG. 20B, the display can also be switched to the low-quality image by designating the button 3420.
The screen transitions are not limited to the screens shown here; for any transition to a screen that displays the same imaging data, such as a display screen for follow-up observation or a display screen for a panorama, the transition is made while maintaining the display state of the high-quality image. That is, on the display screen after the transition, an image corresponding to the state of the button 3420 on the display screen before the transition is displayed. For example, if the button 3420 was in the active state on the display screen before the transition, the high-quality image is displayed on the display screen after the transition. Conversely, if the active state of the button 3420 had been released on the display screen before the transition, the low-quality image is displayed on the display screen after the transition. Note that when the button 3420 on the display screen for follow-up observation is put into the active state, the plurality of images obtained at different dates and times (different examination dates) displayed side by side on the display screen for follow-up observation may be switched to high-quality images. That is, when the button 3420 on the display screen for follow-up observation is put into the active state, the change may be reflected collectively in the plurality of images obtained at different dates and times.
 なお、経過観察用の表示画面の例を、図18に示す。検者からの指示に応じてタブ3801が選択されると、図18のように、経過観察用の表示画面が表示される。このとき、En-Face画像の深度範囲を、リストボックスに表示された既定の深度範囲セット(3802及び3803)から検者が選択することで変更できる。例えば、リストボックス3802では網膜表層が選択され、また、リストボックス3803では網膜深層が選択されている。上側の表示領域には網膜表層のEn-Face画像の解析結果が表示され、また、下側の表示領域には網膜深層のEn-Face画像の解析結果が表示されている。すなわち、深度範囲が選択されると、異なる日時の複数の画像について、選択された深度範囲の複数のEn-Face画像の解析結果の並列表示に一括して変更される。 FIG. 18 shows an example of the display screen for follow-up observation. When the tab 3801 is selected in response to an instruction from the examiner, the display screen for follow-up observation is displayed as shown in FIG. 18. At this time, the examiner can change the depth range of the En-Face images by selecting from the predefined depth range sets (3802 and 3803) displayed in the list boxes. For example, the retinal surface layer is selected in the list box 3802, and the deep retinal layer is selected in the list box 3803. The analysis result of the En-Face image of the retinal surface layer is displayed in the upper display area, and the analysis result of the En-Face image of the deep retinal layer is displayed in the lower display area. That is, when a depth range is selected, the display is changed at once to a parallel display of the analysis results of the En-Face images of the selected depth range for the plurality of images at different dates and times.
 このとき、解析結果の表示を非選択状態にすると、異なる日時の複数のEn-Face画像の並列表示に一括して変更されてもよい。そして、検者からの指示に応じてボタン3420が指定されると、複数のEn-Face画像の表示が複数の高画質画像の表示に一括して変更される。 At this time, if the display of the analysis result is set to the non-selection state, the display may be changed to a parallel display of a plurality of En-Face images at different dates and times. When the button 3420 is designated in response to an instruction from the examiner, the display of the plurality of En-Face images is changed to the display of a plurality of high-quality images at once.
 また、解析結果の表示が選択状態である場合には、検者からの指示に応じてボタン3420が指定されると、複数のEn-Face画像の解析結果の表示が複数の高画質画像の解析結果の表示に一括して変更される。ここで、解析結果の表示は、解析結果を任意の透明度により画像に重畳表示させたものであってもよい。このとき、解析結果の表示への変更は、例えば、表示されている画像に対して任意の透明度により解析結果を重畳させた状態に変更したものであってもよい。また、解析結果の表示への変更は、例えば、解析結果と画像とを任意の透明度によりブレンド処理して得た画像(例えば、2次元マップ)の表示への変更であってもよい。 When the display of the analysis results is in the selected state and the button 3420 is designated in response to an instruction from the examiner, the display of the analysis results of the plurality of En-Face images is collectively changed to the display of the analysis results of the plurality of high-quality images. Here, the display of an analysis result may be one in which the analysis result is superimposed on the image with an arbitrary transparency. The change to the display of the analysis result may be, for example, a change to a state in which the analysis result is superimposed on the displayed image with an arbitrary transparency, or a change to the display of an image (for example, a two-dimensional map) obtained by blending the analysis result and the image with an arbitrary transparency.
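The blending of an analysis result and an image at an arbitrary transparency mentioned above is, in essence, per-pixel alpha compositing. A minimal pure-Python sketch on grayscale pixel values (all names are hypothetical illustrations, not the embodiment's implementation):

```python
def blend(image, overlay, alpha):
    """Blend an analysis map onto an image: out = (1 - alpha)*image + alpha*overlay."""
    return [[(1 - alpha) * p + alpha * o for p, o in zip(img_row, ovl_row)]
            for img_row, ovl_row in zip(image, overlay)]


image = [[100, 200]]    # displayed En-Face image (grayscale values)
overlay = [[200, 0]]    # analysis map (e.g. a vessel-density map)
print(blend(image, overlay, 0.25))  # [[125.0, 150.0]]
```

Setting `alpha` to 0 shows only the image and 1 shows only the analysis map, which corresponds to the "arbitrary transparency" described in the text.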
 また、深度範囲の指定に用いる層境界の種類とオフセット位置をそれぞれ、3805,3806のようなユーザーインターフェースから一括して変更することができる。なお、断層画像も一緒に表示させ、断層画像上に重畳された層境界データを検者からの指示に応じて移動させることにより、異なる日時の複数のEn-Face画像の深度範囲を一括して変更されてもよい。このとき、異なる日時の複数の断層画像を並べて表示し、1つの断層画像上で上記移動が行われると、他の断層画像上でも同様に層境界データが移動されてもよい。 In addition, the type of layer boundary and the offset position used to specify the depth range can be changed collectively from user interfaces such as 3805 and 3806. The tomographic images may also be displayed together, and by moving the layer boundary data superimposed on a tomographic image in response to an instruction from the examiner, the depth ranges of the plurality of En-Face images at different dates and times may be changed collectively. At this time, when a plurality of tomographic images at different dates and times are displayed side by side and the above movement is performed on one tomographic image, the layer boundary data may be moved similarly on the other tomographic images.
 また、画像投影法やプロジェクションアーチファクト抑制処理の有無を、例えば、コンテキストメニューのようなユーザーインターフェースから選択することにより変更してもよい。 The image projection method and the presence or absence of the projection artifact suppression processing may also be changed by selection from a user interface such as a context menu, for example.
 また、選択ボタン3807を選択して選択画面を表示させ、該選択画面上に表示された画像リストから選択された画像が表示されてもよい。なお、図18の上部に表示されている矢印3804は現在選択されている検査であることを示す印であり、基準検査(Baseline)はFollow-up撮影の際に選択した検査(図18の一番左側の画像)である。もちろん、基準検査を示すマークを表示部に表示させてもよい。 Alternatively, the selection button 3807 may be selected to display a selection screen, and an image selected from the image list displayed on that selection screen may be displayed. Note that the arrow 3804 displayed at the top of FIG. 18 is a mark indicating the currently selected examination, and the reference examination (Baseline) is the examination selected at the time of Follow-up imaging (the leftmost image in FIG. 18). Of course, a mark indicating the reference examination may be displayed on the display unit.
 また、「Show Difference」チェックボックス3808が指定された場合には、基準画像上に基準画像に対する計測値分布(マップもしくはセクタマップ)を表示する。さらに、この場合には、それ以外の検査日に対応する領域に基準画像に対して算出した計測値分布と当該領域に表示される画像に対して算出した計測分布との差分計測値マップを表示する。計測結果としては、レポート画面上にトレンドグラフ(経時変化計測によって得られた各検査日の画像に対する計測値のグラフ)を表示させてもよい。すなわち、異なる日時の複数の画像に対応する複数の解析結果の時系列データ(例えば、時系列グラフ)が表示されてもよい。このとき、表示されている複数の画像に対応する複数の日時以外の日時に関する解析結果についても、表示されている複数の画像に対応する複数の解析結果と判別可能な状態で(例えば、時系列グラフ上の各点の色が画像の表示の有無で異なる)時系列データとして表示させてもよい。また、該トレンドグラフの回帰直線(曲線)や対応する数式をレポート画面に表示させてもよい。 When the "Show Difference" check box 3808 is specified, a measured value distribution (a map or a sector map) for the reference image is displayed on the reference image. Furthermore, in this case, in each area corresponding to the other examination dates, a difference measured value map is displayed between the measured value distribution calculated for the reference image and the measured value distribution calculated for the image displayed in that area. As the measurement result, a trend graph (a graph of the measured values for the image of each examination date obtained by measurement over time) may be displayed on the report screen. That is, time-series data (for example, a time-series graph) of a plurality of analysis results corresponding to the plurality of images at different dates and times may be displayed. At this time, analysis results for dates and times other than those corresponding to the plurality of displayed images may also be displayed as time-series data in a state distinguishable from the plurality of analysis results corresponding to the displayed images (for example, the color of each point on the time-series graph differs depending on whether the image is displayed). A regression line (or curve) of the trend graph and the corresponding mathematical expression may also be displayed on the report screen.
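The difference measured value map described here can be sketched as a per-sector subtraction of the baseline measurement from each later examination's measurement. The values and names below are hypothetical illustrations:

```python
def difference_map(baseline, current):
    """Per-sector difference of a later exam's measurements from the baseline."""
    return [c - b for b, c in zip(baseline, current)]


baseline = [250, 245, 260, 255]  # e.g. sector thicknesses on the Baseline exam
exam = [248, 240, 262, 255]      # the same sectors on a later examination date
print(difference_map(baseline, exam))  # [-2, -5, 2, 0]
```

Plotting one such per-exam value against examination date would give the trend graph mentioned above.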
 本実施例においては、OCTAのEn-Face画像に関して説明を行ったが、これに限らない。本実施例に係る表示、高画質化、及び画像解析等の処理に関する画像は、輝度のEn-Face画像でもよい。さらには、En-Face画像だけではなく、断層画像やSLO画像、眼底写真、又は蛍光眼底写真など、異なる画像であっても構わない。その場合、高画質化処理を実行するためのユーザーインターフェースは、種類の異なる複数の画像に対して高画質化処理の実行を指示するもの、種類の異なる複数の画像から任意の画像を選択して高画質化処理の実行を指示するものがあってもよい。 Although the present embodiment has been described with respect to the OCTA En-Face image, the present invention is not limited to this. The images subjected to processing such as display, image quality improvement, and image analysis according to the present embodiment may be luminance En-Face images. Furthermore, not only En-Face images but also different images such as tomographic images, SLO images, fundus photographs, or fluorescent fundus photographs may be used. In that case, the user interface for executing the image quality improvement processing may be one that instructs execution of the image quality improvement processing for a plurality of images of different types, or one that instructs execution of the image quality improvement processing on an arbitrary image selected from the plurality of images of different types.
 このような構成により、本実施例に係る画質向上部224が処理した画像を表示制御部250が表示部270に表示することができる。このとき、上述したように、高画質画像の表示、解析結果の表示、表示される正面画像の深度範囲等に関する複数の条件のうち少なくとも1つが選択された状態である場合には、表示画面が遷移されても、選択された状態が維持されてもよい。 With such a configuration, the display control unit 250 can display, on the display unit 270, the image processed by the image quality improving unit 224 according to the present embodiment. At this time, as described above, when at least one of a plurality of conditions regarding the display of the high-quality image, the display of the analysis result, the depth range of the displayed front image, and the like is in a selected state, the selected state may be maintained even when the display screen transitions.
 また、上述したように、複数の条件のうち少なくとも1つが選択された状態である場合には、他の条件が選択された状態に変更されても、該少なくとも1つが選択された状態が維持されてもよい。例えば、表示制御部250は、解析結果の表示が選択状態である場合に、検者からの指示に応じて(例えば、ボタン3420が指定されると)、低画質画像の解析結果の表示を高画質画像の解析結果の表示に変更してもよい。また、表示制御部250は、解析結果の表示が選択状態である場合に、検者からの指示に応じて(例えば、ボタン3420の指定が解除されると)、高画質画像の解析結果の表示を低画質画像の解析結果の表示に変更してもよい。 Further, as described above, when at least one of the plurality of conditions is in a selected state, the selected state of that at least one condition may be maintained even when another condition is changed to a selected state. For example, when the display of the analysis result is in the selected state, the display control unit 250 may change the display of the analysis result of the low-quality image to the display of the analysis result of the high-quality image in response to an instruction from the examiner (for example, when the button 3420 is designated). Likewise, when the display of the analysis result is in the selected state, the display control unit 250 may change the display of the analysis result of the high-quality image to the display of the analysis result of the low-quality image in response to an instruction from the examiner (for example, when the designation of the button 3420 is released).
 また、表示制御部250は、高画質画像の表示が非選択状態である場合に、検者からの指示に応じて(例えば、解析結果の表示の指定が解除されると)、低画質画像の解析結果の表示を低画質画像の表示に変更してもよい。また、表示制御部250は、高画質画像の表示が非選択状態である場合に、検者からの指示に応じて(例えば、解析結果の表示が指定されると)、低画質画像の表示を低画質画像の解析結果の表示に変更してもよい。また、表示制御部250は、高画質画像の表示が選択状態である場合に、検者からの指示に応じて(例えば、解析結果の表示の指定が解除されると)、高画質画像の解析結果の表示を高画質画像の表示に変更してもよい。また、表示制御部250は、高画質画像の表示が選択状態である場合に、検者からの指示に応じて(例えば、解析結果の表示が指定されると)、高画質画像の表示を高画質画像の解析結果の表示に変更してもよい。 When the display of the high-quality image is in the non-selected state, the display control unit 250 may change the display of the analysis result of the low-quality image to the display of the low-quality image in response to an instruction from the examiner (for example, when the designation of the display of the analysis result is released). Similarly, when the display of the high-quality image is in the non-selected state, the display control unit 250 may change the display of the low-quality image to the display of the analysis result of the low-quality image in response to an instruction from the examiner (for example, when the display of the analysis result is designated). When the display of the high-quality image is in the selected state, the display control unit 250 may change the display of the analysis result of the high-quality image to the display of the high-quality image in response to an instruction from the examiner (for example, when the designation of the display of the analysis result is released). Likewise, when the display of the high-quality image is in the selected state, the display control unit 250 may change the display of the high-quality image to the display of the analysis result of the high-quality image in response to an instruction from the examiner (for example, when the display of the analysis result is designated).
 また、高画質画像の表示が非選択状態で且つ第1の種類の解析結果の表示が選択状態である場合を考える。この場合には、表示制御部250は、検者からの指示に応じて(例えば、第2の種類の解析結果の表示が指定されると)、低画質画像の第1の種類の解析結果の表示を低画質画像の第2の種類の解析結果の表示に変更してもよい。また、高画質画像の表示が選択状態で且つ第1の種類の解析結果の表示が選択状態である場合を考える。この場合には、表示制御部250は、検者からの指示に応じて(例えば、第2の種類の解析結果の表示が指定されると)、高画質画像の第1の種類の解析結果の表示を高画質画像の第2の種類の解析結果の表示に変更してもよい。 Consider also the case where the display of the high-quality image is in the non-selected state and the display of the first type of analysis result is in the selected state. In this case, the display control unit 250 may change the display of the first type of analysis result of the low-quality image to the display of the second type of analysis result of the low-quality image in response to an instruction from the examiner (for example, when the display of the second type of analysis result is designated). Consider further the case where the display of the high-quality image is in the selected state and the display of the first type of analysis result is in the selected state. In this case, the display control unit 250 may change the display of the first type of analysis result of the high-quality image to the display of the second type of analysis result of the high-quality image in response to an instruction from the examiner (for example, when the display of the second type of analysis result is designated).
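The combinations enumerated above amount to two independent selections, image quality and displayed content, where changing one leaves the other intact. A compact sketch with hypothetical names:

```python
def displayed_content(high_quality, show_analysis):
    """What the screen shows for the current pair of selections."""
    quality = "high-quality" if high_quality else "low-quality"
    content = "analysis result" if show_analysis else "image"
    return f"{quality} {content}"


# Releasing the analysis-result selection changes only the content axis;
# the image-quality selection is maintained:
print(displayed_content(True, True))   # high-quality analysis result
print(displayed_content(True, False))  # high-quality image
```

Switching to a second type of analysis result would be a third axis handled the same way: each selection is stored and changed independently of the others.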
 なお、経過観察用の表示画面においては、上述したように、これらの表示の変更が、異なる日時で得た複数の画像に対して一括で反映されるように構成してもよい。ここで、解析結果の表示は、解析結果を任意の透明度により画像に重畳表示させたものであってもよい。このとき、解析結果の表示への変更は、例えば、表示されている画像に対して任意の透明度により解析結果を重畳させた状態に変更したものであってもよい。また、解析結果の表示への変更は、例えば、解析結果と画像とを任意の透明度によりブレンド処理して得た画像(例えば、2次元マップ)の表示への変更であってもよい。 Note that, as described above, the display screen for follow-up observation may be configured so that these display changes are collectively reflected on a plurality of images obtained at different dates and times. Here, the display of the analysis result may be a display in which the analysis result is superimposed on the image with an arbitrary transparency. At this time, the display of the analysis result may be changed, for example, to a state where the analysis result is superimposed on the displayed image with arbitrary transparency. The change to the display of the analysis result may be, for example, a change to the display of an image (for example, a two-dimensional map) obtained by blending the analysis result and the image with arbitrary transparency.
(変形例5)
 上述した様々な実施例及び変形例において、表示制御部250は、画質向上部224によって生成された高画質画像と入力画像のうち、検者からの指示に応じて選択された画像を表示部270に表示させることができる。また、表示制御部250は、検者からの指示に応じて、表示部270上の表示を撮影画像(入力画像)から高画質画像に切り替えてもよい。すなわち、表示制御部250は、検者からの指示に応じて、低画質画像の表示を高画質画像の表示に変更してもよい。また、表示制御部250は、検者からの指示に応じて、高画質画像の表示を低画質画像の表示に変更してもよい。
(Modification 5)
In the various embodiments and modifications described above, the display control unit 250 can cause the display unit 270 to display the image selected in response to an instruction from the examiner from among the high-quality image generated by the image quality improvement unit 224 and the input image. In addition, the display control unit 250 may switch the display on the display unit 270 from the captured image (input image) to the high-quality image in response to an instruction from the examiner. That is, the display control unit 250 may change the display of the low-quality image to the display of the high-quality image in response to an instruction from the examiner. Conversely, the display control unit 250 may change the display of the high-quality image to the display of the low-quality image in response to an instruction from the examiner.
 さらに、画質向上部224が、高画質化エンジン(高画質化用の学習済モデル)による高画質化処理の開始(高画質化エンジンへの画像の入力)を検者からの指示に応じて実行し、表示制御部250が、画質向上部224によって生成された高画質画像を表示部270に表示させてもよい。これに対し、撮影装置(OCT撮影部100)によって入力画像が撮影されると、高画質化エンジンが自動的に入力画像に基づいて高画質画像を生成し、表示制御部250が、検者からの指示に応じて高画質画像を表示部270に表示させてもよい。ここで、高画質化エンジンとは、上述した画質向上処理(高画質化処理)を行う学習済モデルを含む。 Furthermore, the image quality improvement unit 224 may start the image quality improvement processing by the image quality improvement engine (the learned model for image quality improvement), that is, the input of an image to the engine, in response to an instruction from the examiner, and the display control unit 250 may cause the display unit 270 to display the high-quality image generated by the image quality improvement unit 224. Alternatively, when an input image is captured by the imaging apparatus (OCT imaging unit 100), the image quality improvement engine may automatically generate a high-quality image based on the input image, and the display control unit 250 may cause the display unit 270 to display the high-quality image in response to an instruction from the examiner. Here, the image quality improvement engine includes a learned model that performs the above-described image quality improvement processing.
 なお、これらの処理は解析結果の出力についても同様に行うことができる。すなわち、表示制御部250は、検者からの指示に応じて、低画質画像の解析結果の表示を高画質画像の解析結果の表示に変更してもよい。また、表示制御部250は、検者からの指示に応じて、高画質画像の解析結果の表示を低画質画像の解析結果の表示に変更してもよい。もちろん、表示制御部250は、検者からの指示に応じて、低画質画像の解析結果の表示を低画質画像の表示に変更してもよい。また、表示制御部250は、検者からの指示に応じて、低画質画像の表示を低画質画像の解析結果の表示に変更してもよい。また、表示制御部250は、検者からの指示に応じて、高画質画像の解析結果の表示を高画質画像の表示に変更してもよい。また、表示制御部250は、検者からの指示に応じて、高画質画像の表示を高画質画像の解析結果の表示に変更してもよい。 Note that these processes can be performed similarly for the output of analysis results. That is, the display control unit 250 may change the display of the analysis result of the low-quality image to the display of the analysis result of the high-quality image in response to an instruction from the examiner, and may likewise change the display of the analysis result of the high-quality image to the display of the analysis result of the low-quality image. Of course, in response to an instruction from the examiner, the display control unit 250 may change the display of the analysis result of the low-quality image to the display of the low-quality image, and may change the display of the low-quality image to the display of the analysis result of the low-quality image. Similarly, in response to an instruction from the examiner, the display control unit 250 may change the display of the analysis result of the high-quality image to the display of the high-quality image, and may change the display of the high-quality image to the display of the analysis result of the high-quality image.
 また、表示制御部250は、検者からの指示に応じて、低画質画像の解析結果の表示を低画質画像の他の種類の解析結果の表示に変更してもよい。また、表示制御部250は、検者からの指示に応じて、高画質画像の解析結果の表示を高画質画像の他の種類の解析結果の表示に変更してもよい。 The display control unit 250 may change the display of the analysis result of the low-quality image to the display of another type of analysis result of the low-quality image in accordance with an instruction from the examiner. In addition, the display control unit 250 may change the display of the analysis result of the high-quality image to the display of another type of analysis result of the high-quality image according to an instruction from the examiner.
 ここで、高画質画像の解析結果の表示は、高画質画像の解析結果を任意の透明度により高画質画像に重畳表示させたものであってもよい。また、低画質画像の解析結果の表示は、低画質画像の解析結果を任意の透明度により低画質画像に重畳表示させたものであってもよい。このとき、解析結果の表示への変更は、例えば、表示されている画像に対して任意の透明度により解析結果を重畳させた状態に変更したものであってもよい。また、解析結果の表示への変更は、例えば、解析結果と画像とを任意の透明度によりブレンド処理して得た画像(例えば、2次元マップ)の表示への変更であってもよい。 Here, the analysis result of the high-quality image may be displayed by superimposing and displaying the analysis result of the high-quality image on the high-quality image with any transparency. The display of the analysis result of the low-quality image may be a display in which the analysis result of the low-quality image is superimposed and displayed on the low-quality image with arbitrary transparency. At this time, the display of the analysis result may be changed, for example, to a state where the analysis result is superimposed on the displayed image with arbitrary transparency. The change to the display of the analysis result may be, for example, a change to the display of an image (for example, a two-dimensional map) obtained by blending the analysis result and the image with arbitrary transparency.
(変形例6)
 上述した様々な実施例及び変形例におけるレポート画面において、所望の層の層厚や各種の血管密度等の解析結果を表示させてもよい。また、視神経乳頭部、黄斑部、血管領域、神経線維束、硝子体領域、黄斑領域、脈絡膜領域、強膜領域、篩状板領域、網膜層境界、網膜層境界端部、視細胞、血球、血管壁、血管内壁境界、血管外側境界、神経節細胞、角膜領域、隅角領域、シュレム管等の少なくとも1つを含む注目部位に関するパラメータの値(分布)を解析結果として表示させてもよい。このとき、例えば、各種のアーチファクトの低減処理が適用された医用画像を解析することで、精度の良い解析結果を表示させることができる。なお、アーチファクトは、例えば、血管領域等による光吸収により生じる偽像領域や、プロジェクションアーチファクト、被検眼の状態(動きや瞬き等)によって測定光の主走査方向に生じる正面画像における帯状のアーチファクト等であってもよい。また、アーチファクトは、例えば、被検者の所定部位の医用画像上に撮影毎にランダムに生じるような写損領域であれば、何でもよい。また、上述したような様々なアーチファクト(写損領域)の少なくとも1つを含む領域に関するパラメータの値(分布)を解析結果として表示させてもよい。また、ドルーゼン、新生血管、白斑(硬性白斑)、シュードドルーゼン等の異常部位等の少なくとも1つを含む領域に関するパラメータの値(分布)を解析結果として表示させてもよい。
(Modification 6)
Analysis results such as the thickness of a desired layer and various blood vessel densities may be displayed on the report screens in the various embodiments and modifications described above. The value (distribution) of a parameter relating to a site of interest including at least one of the optic nerve head, the macula, a blood vessel region, a nerve fiber bundle, a vitreous region, a macular region, a choroid region, a sclera region, a lamina cribrosa region, a retinal layer boundary, a retinal layer boundary edge, photoreceptor cells, blood cells, a blood vessel wall, a blood vessel inner wall boundary, a blood vessel outer boundary, ganglion cells, a corneal region, an anterior chamber angle region, Schlemm's canal, and the like may also be displayed as an analysis result. At this time, for example, by analyzing a medical image to which various kinds of artifact reduction processing have been applied, a highly accurate analysis result can be displayed. The artifacts may be, for example, false-image regions caused by light absorption by a blood vessel region or the like, projection artifacts, or band-like artifacts appearing in a front image in the main scanning direction of the measurement light due to the state of the eye to be examined (movement, blinking, etc.). An artifact may in fact be any imaging-failure region that appears randomly on a medical image of a predetermined part of the subject at each imaging. The value (distribution) of a parameter relating to a region including at least one of the various artifacts (imaging-failure regions) described above may be displayed as an analysis result. Similarly, the value (distribution) of a parameter relating to a region including at least one abnormal site such as drusen, new blood vessels, exudates (hard exudates), or pseudodrusen may be displayed as an analysis result.
 また、解析結果は、解析マップや、各分割領域に対応する統計値を示すセクター等で表示されてもよい。なお、解析結果は、医用画像の解析結果を学習データとして学習して得た学習済モデル(解析結果生成エンジン、解析結果生成用の学習済モデル)を用いて生成されたものであってもよい。このとき、学習済モデルは、医用画像とその医用画像の解析結果とを含む学習データや、医用画像とその医用画像とは異なる種類の医用画像の解析結果とを含む学習データ等を用いた学習により得たものであってもよい。また、学習済モデルは、輝度正面画像及びモーションコントラスト正面画像のように、所定部位の異なる種類の複数の医用画像をセットとする入力データを含む学習データを用いた学習により得たものであってもよい。ここで、輝度正面画像は輝度のEn-Face画像に対応し、モーションコントラスト正面画像はOCTAのEn-Face画像に対応する。また、高画質化用の学習済モデルにより生成された高画質画像を用いて得た解析結果が表示されるように構成されてもよい。 The analysis results may be displayed as an analysis map, as sectors indicating statistical values corresponding to each divided region, or the like. The analysis results may be generated using a learned model (an analysis result generation engine, a learned model for generating analysis results) obtained by learning using analysis results of medical images as learning data. At this time, the learned model may be one obtained by learning using learning data including a medical image and the analysis result of that medical image, learning data including a medical image and the analysis result of a medical image of a type different from that medical image, or the like. The learned model may also be one obtained by learning using learning data including input data in which a plurality of medical images of different types of a predetermined part are set, such as a luminance front image and a motion contrast front image. Here, the luminance front image corresponds to the luminance En-Face image, and the motion contrast front image corresponds to the OCTA En-Face image. Further, the configuration may be such that an analysis result obtained using a high-quality image generated by the learned model for image quality improvement is displayed.
 また、学習データに含まれる入力データとしては、高画質化用の学習済モデルにより生成された高画質画像であってもよいし、低画質画像と高画質画像とのセットであってもよい。また、学習データは、例えば、解析領域を解析して得た解析値(例えば、平均値や中央値等)、解析値を含む表、解析マップ、画像におけるセクター等の解析領域の位置等の少なくとも1つを含む情報を(教師あり学習の)正解データとして、入力データにラベル付けしたデータであってもよい。なお、検者からの指示に応じて、解析結果生成用の学習済モデルにより得た解析結果が表示されるように構成されてもよい。 The input data included in the learning data may be a high-quality image generated by the learned model for image quality improvement, or may be a set of a low-quality image and a high-quality image. The learning data may also be data in which the input data is labeled, as correct answer data (for supervised learning), with information including at least one of an analysis value obtained by analyzing the analysis region (for example, a mean value or a median value), a table including the analysis values, an analysis map, the position of the analysis region such as a sector in the image, and the like. The configuration may be such that, in response to an instruction from the examiner, the analysis result obtained by the learned model for generating analysis results is displayed.
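Labeling input data with an analysis value as the correct answer, as described above, can be sketched as pairing each image with a computed value such as the mean over the analysis region. This is a pure-Python illustration with hypothetical names, not the embodiment's data format:

```python
def label_with_mean(image):
    """Pair an input image with its mean value as the correct-answer label."""
    pixels = [p for row in image for p in row]
    return {"input": image, "label": {"mean": sum(pixels) / len(pixels)}}


sample = label_with_mean([[10, 20], [30, 40]])
print(sample["label"])  # {'mean': 25.0}
```

A supervised model would then be trained to predict the `label` entry from the `input` entry.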
 また、上述した様々な実施例及び変形例におけるレポート画面において、緑内障や加齢黄斑変性等の種々の診断結果を表示させてもよい。このとき、例えば、上述したような各種のアーチファクトの低減処理が適用された医用画像を解析することで、精度の良い診断結果を表示させることができる。また、診断結果は、特定された異常部位等の位置を画像上に表示されてもよいし、また、異常部位の状態等を文字等によって表示されてもよい。また、異常部位等の分類結果(例えば、カーティン分類)を診断結果として表示させてもよい。 Various diagnosis results such as glaucoma and age-related macular degeneration may also be displayed on the report screens in the various embodiments and modifications described above. At this time, for example, by analyzing a medical image to which the various kinds of artifact reduction processing described above have been applied, a highly accurate diagnosis result can be displayed. As the diagnosis result, the position of the identified abnormal site or the like may be displayed on the image, and the state or the like of the abnormal site may be displayed as text. A classification result of the abnormal site or the like (for example, the Curtin classification) may also be displayed as a diagnosis result.
 なお、診断結果は、医用画像の診断結果を学習データとして学習して得た学習済モデル(診断結果生成エンジン、診断結果生成用の学習済モデル)を用いて生成されたものであってもよい。また、学習済モデルは、医用画像とその医用画像の診断結果とを含む学習データや、医用画像とその医用画像とは異なる種類の医用画像の診断結果とを含む学習データ等を用いた学習により得たものであってもよい。また、高画質化用の学習済モデルにより生成された高画質画像を用いて得た診断結果が表示されるように構成されてもよい。 The diagnosis result may be one generated using a learned model (a diagnosis result generation engine, a learned model for generating diagnosis results) obtained by learning using diagnosis results of medical images as learning data. The learned model may be one obtained by learning using learning data including a medical image and the diagnosis result of that medical image, learning data including a medical image and the diagnosis result of a medical image of a type different from that medical image, or the like. Further, the configuration may be such that a diagnosis result obtained using a high-quality image generated by the learned model for image quality improvement is displayed.
 また、学習データに含まれる入力データとしては、高画質化用の学習済モデルにより生成された高画質画像であってもよいし、低画質画像と高画質画像とのセットであってもよい。また、学習データは、例えば、診断名、病変(異常部位)の種類や状態(程度)、画像における病変の位置、注目領域に対する病変の位置、所見(読影所見等)、診断名の根拠(肯定的な医用支援情報等)、診断名を否定する根拠(否定的な医用支援情報)等の少なくとも1つを含む情報を(教師あり学習の)正解データとして、入力データにラベル付けしたデータであってもよい。なお、検者からの指示に応じて、診断結果生成用の学習済モデルにより得た診断結果が表示されるように構成されてもよい。 The input data included in the learning data may be a high-quality image generated by the learned model for image quality improvement, or may be a set of a low-quality image and a high-quality image. The learning data may also be data in which the input data is labeled, as correct answer data (for supervised learning), with information including at least one of, for example, a diagnosis name, the type and state (degree) of a lesion (abnormal site), the position of the lesion in the image, the position of the lesion relative to the region of interest, findings (interpretation findings, etc.), grounds supporting the diagnosis name (positive medical support information, etc.), and grounds negating the diagnosis name (negative medical support information). The configuration may be such that, in response to an instruction from the examiner, the diagnosis result obtained by the learned model for generating diagnosis results is displayed.
 また、上述した様々な実施例及び変形例におけるレポート画面において、上述したような注目部位、アーチファクト、異常部位等の物体認識結果(物体検出結果)やセグメンテーション結果を表示させてもよい。このとき、例えば、画像上の物体の周辺に矩形の枠等を重畳して表示させてもよい。また、例えば、画像における物体上に色等を重畳して表示させてもよい。なお、物体認識結果やセグメンテーション結果は、物体認識やセグメンテーションを示す情報を正解データとして医用画像にラベル付けした学習データを学習して得た学習済モデルを用いて生成されたものであってもよい。なお、上述した解析結果生成や診断結果生成は、上述した物体認識結果やセグメンテーション結果を利用することで得られたものであってもよい。例えば、物体認識やセグメンテーションの処理により得た注目部位に対して解析結果生成や診断結果生成の処理を行ってもよい。 Object recognition results (object detection results) and segmentation results for sites of interest, artifacts, abnormal sites, and the like as described above may also be displayed on the report screens in the various embodiments and modifications described above. At this time, for example, a rectangular frame or the like may be superimposed and displayed around an object in the image, or a color or the like may be superimposed and displayed on the object. The object recognition results and segmentation results may be generated using a learned model obtained by learning using learning data in which medical images are labeled with information indicating the object recognition or segmentation as correct answer data. The analysis result generation and diagnosis result generation described above may be obtained by using these object recognition results and segmentation results; for example, the analysis result generation or diagnosis result generation processing may be performed on a site of interest obtained by the object recognition or segmentation processing.
 また、上述した学習済モデルは、被検者の所定部位の異なる種類の複数の医用画像をセットとする入力データを含む学習データにより学習して得た学習済モデルであってもよい。このとき、学習データに含まれる入力データとして、例えば、眼底のモーションコントラスト正面画像及び輝度正面画像(あるいは輝度断層画像)をセットとする入力データが考えられる。また、学習データに含まれる入力データとして、例えば、眼底の断層画像(Bスキャン画像)及びカラー眼底画像(あるいは蛍光眼底画像)をセットとする入力データ等も考えられる。また、異なる種類の複数の医療画像は、異なるモダリティ、異なる光学系、又は異なる原理等により取得されたものであれば何でもよい。 The above-described learned model may be a learned model obtained by learning using learning data including input data in which a plurality of different types of medical images of a predetermined part of the subject are set. At this time, as input data included in the learning data, for example, input data in which a front image of a motion contrast of a fundus and a luminance front image (or a luminance tomographic image) are set can be considered. Further, as input data included in the learning data, for example, input data in which a tomographic image of a fundus (B-scan image) and a color fundus image (or a fluorescent fundus image) are set may be considered. Further, the plurality of different types of medical images may be any medical images obtained by different modalities, different optical systems, different principles, or the like.
 また、上述した学習済モデルは、被検者の異なる部位の複数の医用画像をセットとする入力データを含む学習データにより学習して得た学習済モデルであってもよい。このとき、学習データに含まれる入力データとして、例えば、眼底の断層画像(Bスキャン画像)と前眼部の断層画像(Bスキャン画像)とをセットとする入力データが考えられる。また、学習データに含まれる入力データとして、例えば、眼底の黄斑の三次元OCT画像(三次元断層画像)と眼底の視神経乳頭のサークルスキャン(又はラスタスキャン)断層画像とをセットとする入力データ等も考えられる。 The learned model described above may also be a learned model obtained by learning using learning data including input data in which a plurality of medical images of different parts of the subject are set. At this time, as input data included in the learning data, for example, input data in which a tomographic image of the fundus (a B-scan image) and a tomographic image of the anterior segment (a B-scan image) are set is conceivable. Input data in which a three-dimensional OCT image (three-dimensional tomographic image) of the macula of the fundus and a circle-scan (or raster-scan) tomographic image of the optic nerve head of the fundus are set is also conceivable as input data included in the learning data.
 なお、学習データに含まれる入力データは、被検者の異なる部位及び異なる種類の複数の医用画像であってもよい。このとき、学習データに含まれる入力データは、例えば、前眼部の断層画像とカラー眼底画像とをセットとする入力データ等が考えられる。また、上述した学習済モデルは、被検者の所定部位の異なる撮影画角の複数の医用画像をセットとする入力データを含む学習データにより学習して得た学習済モデルであってもよい。また、学習データに含まれる入力データは、パノラマ画像のように、所定部位を複数領域に時分割して得た複数の医用画像を貼り合わせたものであってもよい。このとき、パノラマ画像のような広画角画像を学習データとして用いることにより、狭画角画像よりも情報量が多い等の理由から画像の特徴量を精度良く取得できる可能性があるため、各処理の結果を向上することができる。また、学習データに含まれる入力データは、被検者の所定部位の異なる日時の複数の医用画像をセットとする入力データであってもよい。 The input data included in the learning data may be a plurality of medical images of different parts and of different types of the subject. In this case, the input data included in the learning data may be, for example, input data in which a tomographic image of the anterior segment and a color fundus image are set. The learned model described above may also be a learned model obtained by learning using learning data including input data in which a plurality of medical images of a predetermined part of the subject taken at different imaging angles of view are set. The input data included in the learning data may also be a composite of a plurality of medical images obtained by time-dividing a predetermined part into a plurality of regions, such as a panoramic image. In this case, by using a wide-angle image such as a panoramic image as learning data, the feature amounts of the image may be acquired with high accuracy because the amount of information is larger than in a narrow-angle image, so the result of each processing can be improved. The input data included in the learning data may also be input data in which a plurality of medical images of a predetermined part of the subject taken at different dates and times are set.
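Tiling time-divided images into one wide-angle image, as with the panoramic input data above, can be sketched as row-wise concatenation of horizontally adjacent tiles. Registration and overlap handling are omitted, and the names are hypothetical:

```python
def stitch_horizontally(tiles):
    """Join row-aligned image tiles left to right into one wide image."""
    height = len(tiles[0])
    return [sum((tile[r] for tile in tiles), []) for r in range(height)]


left = [[1, 2], [3, 4]]
right = [[5, 6], [7, 8]]
print(stitch_horizontally([left, right]))  # [[1, 2, 5, 6], [3, 4, 7, 8]]
```

The stitched result carries more context per sample than any single tile, which is the stated reason a wide-angle image can help feature extraction.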
The display screen on which at least one of the analysis result, diagnosis result, object recognition result, and segmentation result is displayed is not limited to the report screen. Such a result may be displayed on at least one display screen such as, for example, an imaging confirmation screen, a display screen for follow-up observation, or a preview screen for the various adjustments before imaging (a display screen on which various live moving images are displayed). For example, by displaying at least one result obtained using the learned model described above on the imaging confirmation screen, the examiner can check an accurate result even immediately after imaging. The change of display between the low-quality image and the high-quality image described above may also be, for example, a change of display between the analysis result of the low-quality image and the analysis result of the high-quality image.
Here, the various learned models described above can be obtained by machine learning using learning data. Machine learning includes, for example, deep learning consisting of a multi-layer neural network. For at least part of the multi-layer neural network, for example, a convolutional neural network (CNN) can be used. Techniques relating to autoencoders may also be used for at least part of the multi-layer neural network, and techniques relating to backpropagation may be used for the training. However, machine learning is not limited to deep learning; any model that can itself extract (represent) the features of learning data such as images through training may be used.
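As a minimal illustration of the convolution operation at the core of a CNN, the following plain-Python sketch applies a small hand-written edge kernel to a toy image; in an actual CNN such kernels are learned from the learning data rather than written by hand, and the values here are purely illustrative:

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation) of a single channel."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            s = 0.0
            for j in range(kh):
                for i in range(kw):
                    s += image[y + j][x + i] * kernel[j][i]
            row.append(s)
        out.append(row)
    return out

# A vertical-edge kernel responds where the toy image changes from 0 to 1.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
response = conv2d(img, edge)  # strongest response over the boundary column
```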
The image quality improvement engine (learned model for image quality improvement) may also be a learned model obtained by additional training on learning data that includes at least one high-quality image generated by the image quality improvement engine. In this case, whether or not to use the high-quality image as learning data for additional training may be made selectable by an instruction from the examiner.
(Modification 7)
In the preview screens of the various embodiments and modifications described above, the learned model described above may be used for at least every frame of the live moving image. In this case, when multiple live moving images of different sites or of different types are displayed on the preview screen, the learned model corresponding to each live moving image may be used. This can shorten the processing time even for live moving images, for example, so that the examiner can obtain accurate information before the start of imaging. As a result, failures requiring re-imaging can be reduced, for example, improving the accuracy and efficiency of diagnosis. The multiple live moving images may be, for example, a moving image of the anterior segment for alignment in the XYZ directions and a frontal moving image of the fundus for focus adjustment of the fundus observation optical system or OCT focus adjustment. The multiple live moving images may also be, for example, tomographic moving images of the fundus for OCT coherence gate adjustment (adjustment of the optical path length difference between the measurement optical path and the reference optical path).
The moving images to which the learned model described above can be applied are not limited to live moving images, and may be, for example, moving images stored (saved) in the storage unit. In this case, for example, a moving image obtained by aligning at least every frame of a tomographic moving image of the fundus stored (saved) in the storage unit may be displayed on the display screen. For example, when it is desired to observe the vitreous body suitably, a reference frame may first be selected based on a condition such as the vitreous body being present in the frame as much as possible. Here, each frame is a tomographic image (B-scan image) in the XZ directions. A moving image in which the other frames are aligned in the XZ directions with respect to the selected reference frame may then be displayed on the display screen. In this case, for example, high-quality images (high-quality frames) sequentially generated by the learned model for image quality improvement for at least every frame of the moving image may be displayed successively.
As for the inter-frame alignment methods described above, the same method may be applied for alignment in the X direction and alignment in the Z direction (depth direction), or entirely different methods may be applied. Alignment in the same direction may also be performed multiple times with different methods; for example, precise alignment may be performed after coarse alignment. Alignment methods include, for example: (coarse, Z-direction) alignment using retinal layer boundaries obtained by segmentation of the tomographic image (B-scan image); (precise, X- and Z-direction) alignment using correlation information (similarity) between a reference image and multiple regions obtained by dividing the tomographic image; (X-direction) alignment using a one-dimensional projection image generated for each tomographic image (B-scan image); and (X-direction) alignment using a two-dimensional frontal image. The system may also be configured so that precise alignment in sub-pixel units is performed after coarse alignment in pixel units.
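The X-direction alignment using one-dimensional projection images could be sketched as follows; this is a coarse, integer-pixel version with invented data, and a real implementation would refine the result to sub-pixel precision afterwards:

```python
def column_projection(frame):
    """One-dimensional projection of a B-scan: sum each column over depth (Z)."""
    return [sum(col) for col in zip(*frame)]

def coarse_x_shift(reference, frame, max_shift=2):
    """Integer X shift of `frame` relative to `reference`, chosen to
    maximize the mean product of the overlapping projection values."""
    ref = column_projection(reference)
    prj = column_projection(frame)
    best_shift, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(ref[i], prj[i + s]) for i in range(len(ref))
                 if 0 <= i + s < len(prj)]
        score = sum(a * b for a, b in pairs) / len(pairs)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# The second frame is the first one displaced by one column to the right.
reference = [[0, 0, 2, 2, 0, 0],
             [0, 0, 3, 3, 0, 0]]
moved     = [[0, 0, 0, 2, 2, 0],
             [0, 0, 0, 3, 3, 0]]
```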
Here, during the various adjustments, the imaging target such as the retina of the eye to be examined may not yet be imaged well. In that case, because the difference between the medical image input to the learned model and the medical images used as learning data is large, a high-quality image may not be obtained accurately. Therefore, when an evaluation value such as an image quality evaluation of the tomographic image (B-scan) exceeds a threshold, display of the high-quality moving image (successive display of high-quality frames) may be started automatically. Alternatively, when an evaluation value such as an image quality evaluation of the tomographic image (B-scan) exceeds a threshold, the image quality improvement button may be changed to a state that the examiner can select (active state).
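A sketch of such gating might look like the following; the quality metric and the threshold value here are stand-ins, not taken from the source:

```python
def quality_score(b_scan):
    """Stand-in image-quality metric: mean of the brightest row divided
    by the mean of the darkest row (a crude signal-to-noise proxy)."""
    row_means = sorted(sum(row) / len(row) for row in b_scan)
    return row_means[-1] / max(row_means[0], 1e-6)

class EnhancementGate:
    """Keeps the image-quality-improvement button inactive until the
    evaluation value of the live B-scan exceeds a threshold."""
    def __init__(self, threshold=5.0):
        self.threshold = threshold
        self.button_active = False

    def update(self, b_scan):
        if quality_score(b_scan) > self.threshold:
            self.button_active = True  # examiner may now select the button
        return self.button_active

gate = EnhancementGate()
during_adjustment = gate.update([[1.0, 1.0], [1.0, 1.0]])   # low contrast
after_adjustment = gate.update([[0.5, 0.5], [10.0, 10.0]])  # retina in view
```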
A different learned model for image quality improvement may also be prepared for each imaging mode with a different scan pattern or the like, so that the learned model for image quality improvement corresponding to the selected imaging mode is selected. Alternatively, a single learned model for image quality improvement obtained by training on learning data that includes various medical images obtained in different imaging modes may be used.
(Modification 8)
In the various embodiments and modifications described above, while a learned model is undergoing additional training, it may be difficult to produce output (inference/prediction) using that learned model itself. It is therefore preferable to prohibit the input of medical images to a learned model undergoing additional training. A learned model identical to the one undergoing additional training may also be prepared as a spare learned model, so that medical images can be input to the spare learned model during the additional training. After the additional training is completed, the additionally trained learned model is evaluated; if there is no problem, the spare learned model may be replaced with the additionally trained learned model, and if there is a problem, the spare learned model may continue to be used.
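One way to realize this spare-model arrangement is sketched below; the class, version field, and evaluation step are illustrative assumptions, not from the source:

```python
class ModelManager:
    """Serves inference from a spare copy while additional training runs,
    and swaps in the newly trained model only if its evaluation passes."""
    def __init__(self, model):
        self.serving = model   # the spare model that handles inputs
        self.candidate = None  # the model undergoing additional training

    def start_additional_training(self):
        self.candidate = {"version": self.serving["version"] + 1}

    def infer(self, image):
        # Inputs always go to the serving (spare) model, never the candidate.
        return (self.serving["version"], image)

    def finish_additional_training(self, passed_evaluation):
        if passed_evaluation:
            self.serving = self.candidate  # replace spare with trained model
        self.candidate = None              # otherwise keep using the spare

mgr = ModelManager({"version": 1})
mgr.start_additional_training()
v_during, _ = mgr.infer("b_scan")                      # spare answers
mgr.finish_additional_training(passed_evaluation=True)
v_after, _ = mgr.infer("b_scan")                       # trained model answers
```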
A learned model obtained by training for each imaging site may also be used selectively. Specifically, multiple learned models can be prepared, including a first learned model obtained using learning data that includes a first imaging site (lung, eye to be examined, etc.) and a second learned model obtained using learning data that includes a second imaging site different from the first imaging site. The control unit 200 may then include selection means for selecting one of these multiple learned models. At this time, the control unit 200 may include control means for executing additional training on the selected learned model. In accordance with an instruction from the examiner, the control means can search for data in which the imaging site corresponding to the selected learned model is paired with a captured image of that imaging site, and execute training that uses the retrieved data as learning data, as additional training on the selected learned model. The imaging site corresponding to the selected learned model may be acquired from header information of the data or manually input by the examiner. The data search may be performed, for example, via a network from a server or the like of an external facility such as a hospital or research institute. In this way, additional training can be performed efficiently for each imaging site, using captured images of the imaging site corresponding to the learned model.
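The selection means and control means described above could be sketched as follows; the registry, record format, and site names are hypothetical:

```python
class Model:
    """Placeholder for a learned model trained on one imaging site."""
    def __init__(self, site):
        self.site = site
        self.trained_on = []

    def additional_training(self, images):
        self.trained_on.extend(images)

# Registry of per-site learned models (selection means picks from here).
registry = {"lung": Model("lung"), "eye": Model("eye")}

def select_model(site):
    return registry[site]

def search_pairs(database, site):
    """Retrieve captured images whose site header matches the model's site."""
    return [rec["image"] for rec in database if rec["site"] == site]

database = [
    {"site": "eye", "image": "oct_001"},
    {"site": "lung", "image": "ct_001"},
    {"site": "eye", "image": "oct_002"},
]
model = select_model("eye")
model.additional_training(search_pairs(database, model.site))
```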
The selection means and the control means may be configured as software modules executed by a processor of the control unit 200, such as a CPU or MPU. They may also be configured by circuits that perform specific functions, such as ASICs, or by independent devices.
When learning data for additional training is acquired via a network from a server or the like of an external facility such as a hospital or research institute, it is desirable to reduce losses in reliability due to tampering, system trouble during additional training, and the like. The validity of the learning data for additional training may therefore be verified by checking consistency using digital signatures or hashing, which allows the learning data for additional training to be protected. If the validity of the learning data for additional training cannot be confirmed as a result of the consistency check using digital signatures or hashing, a warning to that effect is issued and additional training using that learning data is not performed. The server may take any form regardless of its installation location, for example a cloud server, fog server, or edge server.
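A minimal sketch of such a consistency check, using Python's standard hashlib/hmac modules; the shared key and payload are illustrative, and a real deployment would use proper key management or public-key digital signatures:

```python
import hashlib
import hmac

SHARED_KEY = b"hypothetical-key"  # illustrative only

def sign(payload: bytes) -> str:
    """HMAC-SHA256 tag attached by the data provider."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def accept_training_data(payload: bytes, tag: str) -> bool:
    """Verify integrity before additional training; reject on mismatch."""
    return hmac.compare_digest(sign(payload), tag)

data = b"fundus_image_batch_42"
tag = sign(data)
ok = accept_training_data(data, tag)                # intact data
tampered = accept_training_data(data + b"x", tag)   # altered in transit
```

A `False` result here corresponds to issuing the warning and skipping the additional training described above.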
(Modification 9)
In the various embodiments and modifications described above, the instruction from the examiner may be an instruction by voice or the like, in addition to a manual instruction (for example, an instruction using a user interface). In that case, for example, a machine learning model including a speech recognition model (speech recognition engine, learned model for speech recognition) obtained by machine learning may be used. The manual instruction may also be an instruction by character input using a keyboard, touch panel, or the like; in that case, for example, a machine learning model including a character recognition model (character recognition engine, learned model for character recognition) obtained by machine learning may be used. The instruction from the examiner may also be an instruction by gesture or the like; in that case, a machine learning model including a gesture recognition model (gesture recognition engine, learned model for gesture recognition) obtained by machine learning may be used.
The instruction from the examiner may also be a gaze detection result of the examiner on the monitor, or the like. The gaze detection result may be, for example, a pupil detection result using a moving image of the examiner captured from around the monitor, and the pupil detection from the moving image may use the object recognition engine described above. The instruction from the examiner may also be an instruction based on brain waves, weak electrical signals flowing through the body, or the like.
In such cases, the learning data may be, for example, learning data in which character data or voice data (waveform data) indicating an instruction to display the results of processing by the various learned models described above is used as input data, and an execution command for actually displaying those results on the display unit is used as ground truth data. The learning data may also be, for example, learning data in which character data or voice data indicating an instruction to display a high-quality image obtained with the learned model for image quality improvement is used as input data, and an execution command for displaying the high-quality image and an execution command for changing the button 3420 to the active state are used as ground truth data. Of course, any learning data may be used as long as the instruction content indicated by the character data, voice data, or the like corresponds to the execution command content. Voice data may also be converted to character data using an acoustic model, a language model, or the like. Processing to reduce noise data superimposed on the voice data may also be performed using waveform data obtained with multiple microphones. Instruction by characters, voice, or the like and instruction by mouse, touch panel, or the like may be made selectable in accordance with an instruction from the examiner. Turning instruction by characters, voice, or the like on and off may also be made selectable in accordance with an instruction from the examiner.
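The correspondence between recognized instruction content and execution commands could be as simple as a lookup table; the phrases and command names below are hypothetical, and in the scheme above a learned model would infer the command from character or waveform data rather than from an exact-match table:

```python
# Hypothetical table pairing recognized instruction text with the
# execution commands that realize it on the display unit.
COMMANDS = {
    "show high quality image": ["display_high_quality_image",
                                "activate_button_3420"],
    "show analysis map": ["display_analysis_map"],
}

def to_execution_commands(recognized_text):
    """Map the output of a speech/character recognition model to
    execution commands; unknown instructions map to no commands."""
    return COMMANDS.get(recognized_text.strip().lower(), [])

cmds = to_execution_commands("  Show High Quality Image ")
```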
Here, machine learning includes deep learning as described above, and for at least part of the multi-layer neural network, for example, a recurrent neural network (RNN) can be used. As an example of the machine learning model according to this modification, an RNN, a neural network that handles time-series information, will be described with reference to Figs. 16A and 16B. Long short-term memory (hereinafter, LSTM), a type of RNN, will also be described with reference to Figs. 17A and 17B.
Fig. 16A shows the structure of an RNN, which is a machine learning model. The RNN 3520 has a loop structure in the network; it receives data x_t 3510 at time t and outputs data h_t 3530. Because the RNN 3520 has a loop function in the network, it can carry the state at the current time over to the next state, and can therefore handle time-series information. Fig. 16B shows an example of the input and output of parameter vectors at time t. The data x_t 3510 contains N pieces of data (Params1 to ParamsN), and the data h_t 3530 output from the RNN 3520 contains N pieces of data (Params1 to ParamsN) corresponding to the input data.
However, because an RNN cannot handle long-term information during backpropagation, an LSTM may be used. An LSTM can learn long-term information by providing a forget gate, an input gate, and an output gate. Fig. 17A shows the structure of the LSTM. In the LSTM 3540, the information the network carries over to the next time t is the internal state c_{t-1} of the network, called the cell, and the output data h_{t-1}. The lowercase letters (c, h, x) in the figure represent vectors.
Next, Fig. 17B shows the details of the LSTM 3540. In Fig. 17B, FG denotes the forget gate network, IG the input gate network, and OG the output gate network; each is a sigmoid layer and therefore outputs a vector in which each element takes a value from 0 to 1. The forget gate network FG determines how much past information is retained, and the input gate network IG determines which values to update. CU is the cell update candidate network, a tanh activation layer, and creates the vector of new candidate values to be added to the cell. The output gate network OG selects the elements of the cell candidate and how much information to convey at the next time.
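Written out, one LSTM time step computes the gates FG, IG, OG and the cell candidate CU roughly as in the following pure-Python sketch; biases are omitted for brevity and each weight matrix acts on the concatenation of x_t and h_{t-1}:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def lstm_step(x, h_prev, c_prev, p):
    """One LSTM step; p maps gate names to weight matrices applied to
    the concatenation [x, h_prev] (biases omitted for brevity)."""
    z = x + h_prev                                  # concatenation
    f = [sigmoid(a) for a in matvec(p["Wf"], z)]    # forget gate (FG)
    i = [sigmoid(a) for a in matvec(p["Wi"], z)]    # input gate (IG)
    g = [math.tanh(a) for a in matvec(p["Wc"], z)]  # cell candidate (CU)
    o = [sigmoid(a) for a in matvec(p["Wo"], z)]    # output gate (OG)
    c = [ff * cc + ii * gg for ff, ii, gg, cc in zip(f, i, g, c_prev)]
    h = [oo * math.tanh(cc) for oo, cc in zip(o, c)]
    return h, c

# One-dimensional example with saturating weights for readability:
# forget gate ~0.5, input and output gates ~1, candidate ~1.
p = {"Wf": [[0.0, 0.0]], "Wi": [[100.0, 0.0]],
     "Wc": [[100.0, 0.0]], "Wo": [[100.0, 0.0]]}
h, c = lstm_step([1.0], [0.0], [1.0], p)
```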
The LSTM model described above is a basic form and is not limited to the network shown here; the connections between networks may be changed. A QRNN (Quasi-Recurrent Neural Network) may be used instead of an LSTM. Furthermore, the machine learning model is not limited to neural networks; boosting, support vector machines, and the like may also be used. When the instruction from the examiner is input by characters, voice, or the like, techniques relating to natural language processing (for example, Sequence to Sequence) may be applied. A dialogue engine (dialogue model, learned model for dialogue) that responds to the examiner with output by characters, voice, or the like may also be applied.
(Modification 10)
In the various embodiments and modifications described above, a high-quality image or the like may be saved in the storage unit in accordance with an instruction from the examiner. In this case, after the instruction from the examiner to save the high-quality image or the like, when the file name is registered, a recommended file name that includes, at some position in the file name (for example, the beginning or the end), information (for example, characters) indicating that the image was generated by processing using the learned model for image quality improvement (image quality improvement processing) may be displayed in a state editable in accordance with an instruction from the examiner.
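A sketch of building such a recommended file name; the marker text, placement keywords, and sample base name are illustrative assumptions:

```python
def recommended_filename(base, enhanced, marker="AI", position="last"):
    """Suggest a file name; when the image came from the image quality
    improvement model, embed a marker at the first or last position."""
    if not enhanced:
        return base
    if position == "first":
        return f"{marker}_{base}"
    return f"{base}_{marker}"

# The suggestion is shown pre-filled but remains editable by the examiner.
name = recommended_filename("OCT_20190607_R", enhanced=True)
```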
When a high-quality image is displayed on the display unit in various display screens such as the report screen, a display indicating that the displayed image is a high-quality image generated by processing using the learned model for image quality improvement may be shown together with the high-quality image. In this case, the user can easily identify from this display that the displayed high-quality image is not the image itself acquired by imaging, which can reduce misdiagnosis and improve diagnostic efficiency. The display indicating that the image is a high-quality image generated by processing using the learned model for image quality improvement may take any form as long as the input image and the high-quality image generated by that processing can be distinguished. Likewise, not only for processing using the learned model for image quality improvement but also for processing using the various learned models described above, a display indicating that the result was generated by processing using that type of learned model may be shown together with the result.
At this time, the display screen such as the report screen may be saved in the storage unit in accordance with an instruction from the examiner. For example, the report screen may be saved in the storage unit as a single image in which the high-quality images and the like are arranged together with the display indicating that these images are high-quality images generated by processing using the learned model for image quality improvement.
As for the display indicating that the image is a high-quality image generated by processing using the learned model for image quality improvement, a display indicating what kind of learning data the learned model for image quality improvement was trained with may also be shown on the display unit. This display may include a description of the types of the input data and ground truth data of the learning data, and any display relating to the ground truth data, such as the imaging sites included in the input data and ground truth data. Likewise, not only for processing using the learned model for image quality improvement but also for processing using the various learned models described above, a display indicating what kind of learning data that type of learned model was trained with may be shown on the display unit.
Information (for example, characters) indicating that the image was generated by processing using the learned model for image quality improvement may also be displayed or saved superimposed on the high-quality image or the like. In this case, the location superimposed on the image may be anywhere in a region (for example, an edge of the image) that does not overlap the region in which the site of interest to be imaged or the like is displayed. A non-overlapping region may also be determined and the information superimposed on the determined region.
When the default setting is such that the button 3420 is in the active state (image quality improvement processing on) in the initial display screen of the report screen, a report image corresponding to the report screen including the high-quality image and the like may be transmitted to the server in accordance with an instruction from the examiner. When the button 3420 is set to the active state by default, a report image corresponding to the report screen including the high-quality image and the like may also be (automatically) transmitted to the server at the end of the examination (for example, when the imaging confirmation screen or the preview screen is changed to the report screen in accordance with an instruction from the examiner). In this case, the report image generated based on the various default settings (for example, settings relating to at least one of the depth range for generating the En-Face image in the initial display screen of the report screen, whether the analysis map is superimposed, whether the image is a high-quality image, and whether the screen is the display screen for follow-up observation) may be transmitted to the server.
(Modification 11)
In the various embodiments and modifications described above, among the various learned models described above, an image obtained with a first type of learned model (for example, a high-quality image, an image showing an analysis result such as an analysis map, an image showing an object recognition result, or an image showing a segmentation result) may be input to a second type of learned model different from the first type. In this case, a result of processing by the second type of learned model (for example, an analysis result, diagnosis result, object recognition result, or segmentation result) may be generated.
Among the various learned models described above, an image to be input to a second type of learned model different from the first type may also be generated from the image input to the first type of learned model, using the result of processing by the first type of learned model (for example, an analysis result, diagnosis result, object recognition result, or segmentation result). In this case, the generated image is likely to be suitable as an image to be processed by the second type of learned model. The accuracy of the image obtained by inputting the generated image to the second type of learned model (for example, a high-quality image, an image showing an analysis result such as an analysis map, an image showing an object recognition result, or an image showing a segmentation result) can therefore be improved.
 A similar-image search using an external database stored on a server or the like may also be performed using, as a search key, an analysis result, a diagnosis result, or the like obtained by processing with a trained model as described above. When the images stored in the database are already managed with feature amounts, obtained in advance by machine learning or the like, attached to each image as supplementary information, a similar-image search engine (a similar-image search model, that is, a trained model for similar-image search) that uses the image itself as the search key may be used.
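A minimal sketch of the similar-image search described above, assuming each stored image already carries a feature vector as supplementary information. The feature dimensionality, the Euclidean distance metric, and the function names are illustrative assumptions, not part of the patent text.

```python
import numpy as np

def build_index(feature_vectors):
    """Stack per-image feature vectors (e.g. produced by a trained model and
    stored with each database image as supplementary information)."""
    return np.vstack(feature_vectors)

def similar_image_search(index, query_vec, top_k=3):
    """Return the indices of the top_k stored images whose feature vectors
    are closest to the query, by Euclidean distance in feature space."""
    dists = np.linalg.norm(index - query_vec, axis=1)
    return np.argsort(dists)[:top_k]

# Hypothetical 8-dimensional feature vectors for five stored images.
rng = np.random.default_rng(1)
stored = [rng.normal(size=8) for _ in range(5)]
index = build_index(stored)

# A query derived from image 2 with a small perturbation should retrieve it.
query = stored[2] + rng.normal(scale=0.01, size=8)
hits = similar_image_search(index, query, top_k=1)
```

In practice the linear scan would be replaced by an approximate nearest-neighbor index for large databases.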
(Modification 12)
 Note that the processing for generating motion contrast data in the above embodiments and modifications is not limited to a configuration based on the luminance values of tomographic images. The various kinds of processing described above may be applied to tomographic data including the interference signal acquired by the OCT imaging unit 100, a signal obtained by applying a Fourier transform to the interference signal, a signal obtained by applying arbitrary further processing to that signal, and tomographic images based on these. In these cases as well, effects similar to those of the above configuration can be obtained.
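The patent does not fix a particular motion contrast formula, but one common formulation is pairwise decorrelation of repeated B-scans of the same location. The following sketch is offered for illustration only; the decorrelation definition and array shapes are assumptions, not taken from the embodiments.

```python
import numpy as np

def motion_contrast(bscans, eps=1e-8):
    """Compute a simple decorrelation-based motion contrast image from
    N repeated B-scans of the same location (shape: N x depth x width).

    Decorrelation between consecutive pairs is averaged; static tissue
    yields values near 0, moving scatterers (e.g. blood flow) larger values.
    """
    bscans = np.asarray(bscans, dtype=float)
    pairs = []
    for a, b in zip(bscans[:-1], bscans[1:]):
        num = 2.0 * a * b
        den = a ** 2 + b ** 2 + eps
        pairs.append(1.0 - num / den)   # 0 when a == b, grows with change
    return np.mean(pairs, axis=0)

# Two repeated "B-scans": identical (static) except one pixel whose
# intensity changes between acquisitions, mimicking motion.
scan1 = np.ones((4, 4))
scan2 = scan1.copy()
scan2[2, 2] = 3.0
mc = motion_contrast([scan1, scan2])
```

The static pixels decorrelate to approximately zero while the changed pixel stands out, which is the behavior an OCTA front image is built from.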
 Although a fiber optical system using a coupler is used as the splitting means, a spatial optical system using a collimator and a beam splitter may be used instead. The configuration of the OCT imaging unit 100 is not limited to the one described above, and part of the configuration included in the OCT imaging unit 100 may be provided separately from the OCT imaging unit 100.
 In the above embodiments and modifications, a Mach-Zehnder interferometer is used as the interference optical system of the OCT imaging unit 100, but the interference optical system is not limited to this configuration. For example, the interference optical system of the OCT apparatus 1 may be configured as a Michelson interferometer.
 Furthermore, in the above embodiments and modifications, a spectral-domain OCT (SD-OCT) apparatus using an SLD as the light source has been described, but the configuration of the OCT apparatus according to the present invention is not limited to this. For example, the present invention can also be applied to any other type of OCT apparatus, such as a swept-source OCT (SS-OCT) apparatus using a wavelength-swept light source capable of sweeping the wavelength of the emitted light. The present invention can also be applied to a Line-OCT apparatus that uses line illumination.
 In the above embodiments and modifications, the acquisition unit 210 acquires the interference signal acquired by the OCT imaging unit 100, the three-dimensional tomographic image generated by the image processing unit 220, and so on. However, the configuration by which the acquisition unit 210 obtains these signals and images is not limited to this. For example, the acquisition unit 210 may acquire them from a server or an imaging apparatus connected to the control unit via a LAN, a WAN, the Internet, or the like.
 A trained model can be provided in the control units 200, 900, and 1400, which serve as image processing apparatuses. A trained model can be implemented, for example, as a software module executed by a processor such as a CPU. A trained model may also be provided in a separate server or the like connected to the control units 200, 900, and 1400. In that case, the control units 200, 900, and 1400 can perform image quality improvement processing using the trained model by connecting, via an arbitrary network such as the Internet, to the server holding the trained model.
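The two deployment options just described (an in-process software module versus a model hosted on a remote server) can be sketched as a single dispatch function. The `infer` method and the client class are hypothetical placeholders; the patent does not specify any particular network API.

```python
def enhance_image(image, local_model=None, remote_client=None):
    """Run image quality improvement with a trained model that lives either
    in the image processing apparatus itself (a software module executed by
    a CPU) or on a separate server reached over a network.

    `remote_client` stands for any RPC/HTTP client exposing an `infer`
    method; the name is illustrative, not a real API.
    """
    if local_model is not None:
        return local_model(image)          # in-process inference
    if remote_client is not None:
        return remote_client.infer(image)  # network round trip to the server
    raise ValueError("no trained model available")

# Stand-ins for illustration: a local "model" and a fake remote client.
class FakeRemoteClient:
    def infer(self, image):
        return [p * 2 for p in image]

enhanced_local = enhance_image([1, 2], local_model=lambda img: [p + 1 for p in img])
enhanced_remote = enhance_image([1, 2], remote_client=FakeRemoteClient())
```

The calling code is identical in both cases, which is what allows the control unit to remain unchanged when the model is moved to a server.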
(Modification 13)
 The images processed by the image processing apparatus or image processing method according to the various embodiments and modifications described above include medical images acquired with an arbitrary modality (imaging apparatus or imaging method). The medical images to be processed can include medical images acquired by an arbitrary imaging apparatus or the like, as well as images created by the image processing apparatus or image processing method according to the above embodiments and modifications.
 Furthermore, a medical image to be processed is an image of a predetermined site of a subject, and the image of the predetermined site includes at least part of that site. The medical image may also include other sites of the subject. The medical image may be a still image or a moving image, and may be a monochrome image or a color image. The medical image may represent the structure (morphology) of the predetermined site or may represent its function. Images representing function include, for example, images representing hemodynamics (blood flow volume, blood flow velocity, and the like), such as OCTA images, Doppler OCT images, fMRI images, and ultrasound Doppler images. The predetermined site of the subject may be determined according to the imaging target, and includes organs such as the human eye (the eye to be examined), brain, lungs, intestines, heart, pancreas, kidneys, and liver, as well as arbitrary sites such as the head, chest, legs, and arms.
 The medical image may be a tomographic image of the subject or a front image. Front images include, for example, a fundus front image, a front image of the anterior segment, a fluorescence fundus image, and an En-Face image generated from data acquired by OCT (three-dimensional OCT data) using data in at least part of the range in the depth direction of the imaging target. The En-Face image may be an OCTA En-Face image (motion contrast front image) generated from three-dimensional OCTA data (three-dimensional motion contrast data) using data in at least part of the range in the depth direction of the imaging target. Three-dimensional OCT data and three-dimensional motion contrast data are examples of three-dimensional medical image data.
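The En-Face generation just described, projecting a three-dimensional OCT or motion contrast volume over a chosen depth range, can be illustrated with a minimal sketch. The function name, axis convention (depth first), and projection modes are illustrative assumptions rather than details from the embodiments.

```python
import numpy as np

def en_face(volume, z_start, z_end, mode="mean"):
    """Generate an En-Face (front) image from a three-dimensional volume
    (OCT or motion contrast data, shape: depth x height x width) by
    projecting over the depth range [z_start, z_end).

    `mode` selects mean or maximum intensity projection.
    """
    slab = volume[z_start:z_end]
    if mode == "mean":
        return slab.mean(axis=0)
    if mode == "max":
        return slab.max(axis=0)
    raise ValueError(f"unknown projection mode: {mode}")

# Toy volume: 10 depth slices of a 5x5 frame, with a bright layer at z = 4.
vol = np.zeros((10, 5, 5))
vol[4] = 1.0
frontal = en_face(vol, 2, 7, mode="max")   # depth range includes the bright layer
outside = en_face(vol, 6, 9, mode="max")   # depth range excludes it
```

Changing the depth range changes which retinal layers contribute to the front image, which is why the embodiments tie the choice of trained model to that range.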
 An imaging apparatus is an apparatus for capturing images used for diagnosis. Imaging apparatuses include, for example, apparatuses that obtain an image of a predetermined site by irradiating the predetermined site of the subject with light, radiation such as X-rays, electromagnetic waves, or ultrasound, and apparatuses that obtain an image of a predetermined site by detecting radiation emitted from the subject. More specifically, the imaging apparatuses according to the various embodiments and modifications described above include at least X-ray imaging apparatuses, CT apparatuses, MRI apparatuses, PET apparatuses, SPECT apparatuses, SLO apparatuses, OCT apparatuses, OCTA apparatuses, fundus cameras, and endoscopes.
 The OCT apparatuses may include time-domain OCT (TD-OCT) apparatuses and Fourier-domain OCT (FD-OCT) apparatuses. The Fourier-domain OCT apparatuses may include spectral-domain OCT (SD-OCT) apparatuses and swept-source OCT (SS-OCT) apparatuses. The SLO and OCT apparatuses may include adaptive-optics SLO (AO-SLO) and adaptive-optics OCT (AO-OCT) apparatuses using a wavefront-compensating optical system. The SLO and OCT apparatuses may also include polarization-sensitive SLO (PS-SLO) and polarization-sensitive OCT (PS-OCT) apparatuses for visualizing information on polarization phase retardation and depolarization.
 According to the above-described embodiments and modifications of the present invention, an image better suited to image diagnosis than conventional images can be generated.
(Other Embodiments)
 The present invention can also be realized by supplying a program that implements one or more functions of the above embodiments to a system or apparatus via a network or a storage medium, and having one or more processors in a computer of that system or apparatus read and execute the program. It can also be realized by a circuit (for example, an ASIC) that implements one or more functions.
 The present invention is not limited to the above embodiments, and various changes and modifications can be made without departing from the spirit and scope of the present invention. Accordingly, the following claims are appended to make the scope of the present invention public.
 This application claims priority based on Japanese Patent Application No. 2018-166817 filed on September 6, 2018 and Japanese Patent Application No. 2019-068663 filed on March 29, 2019, the entire contents of which are incorporated herein by reference.
200: control unit (image processing apparatus); 224: image quality improvement unit; 250: display control unit

Claims (33)

  1.  An image processing apparatus comprising:
      an image quality improvement unit configured to generate, using a trained model, a second image from a first image of an eye to be examined, the second image having been subjected to at least one of noise reduction and contrast enhancement as compared with the first image; and
      a display control unit configured to cause a display unit to display the first image and the second image switchably, side by side, or superimposed on each other.
  2.  The image processing apparatus according to claim 1, further comprising a selection unit configured to select, from a plurality of trained models, the trained model to be used by the image quality improvement unit.
  3.  The image processing apparatus according to claim 2, wherein:
      the first image is a front image generated based on information in a range in a depth direction of the eye to be examined; and
      the selection unit selects the trained model to be used by the image quality improvement unit based on the range in the depth direction used to generate the first image.
  4.  The image processing apparatus according to claim 3, wherein the selection unit selects the trained model to be used by the image quality improvement unit based on a displayed site in the first image and the range in the depth direction used to generate the first image.
  5.  The image processing apparatus according to claim 4, wherein the selection unit selects the trained model to be used by the image quality improvement unit based on an imaged site including the displayed site and the range in the depth direction used to generate the first image.
  6.  The image processing apparatus according to claim 2, wherein the selection unit selects the trained model to be used by the image quality improvement unit based on imaging conditions of the first image.
  7.  The image processing apparatus according to claim 2, wherein the selection unit selects the trained model to be used by the image quality improvement unit in response to an instruction from an operator.
  8.  The image processing apparatus according to any one of claims 2 to 7, wherein the selection unit changes the trained model to be used by the image quality improvement unit in response to an instruction from an operator.
  9.  The image processing apparatus according to any one of claims 1 to 8, wherein the display control unit switches between the first image and the second image for display on the display unit in response to an instruction from an operator.
  10.  An image processing apparatus comprising:
      an image quality improvement unit configured to generate, using a trained model, a second image from a first image of an eye to be examined by applying image quality improvement processing to the first image; and
      a display control unit configured to switch between the first image and the second image for display on a display unit in response to an instruction from an operator.
  11.  The image processing apparatus according to any one of claims 1 to 10, wherein:
      the image quality improvement unit generates a plurality of second images from a plurality of first images; and
      the display control unit causes the display unit to switch between displaying the plurality of first images and the plurality of second images.
  12.  The image processing apparatus according to any one of claims 1 to 11, further comprising an acquisition unit configured to acquire the first image, wherein the display control unit:
      causes the display unit to display the first image immediately after the acquisition unit acquires the first image; and
      after the image quality improvement unit generates the second image, switches the displayed first image to the second image for display on the display unit.
  13.  The image processing apparatus according to claim 12, wherein:
      the first image is a front image generated based on motion contrast data in a range in a depth direction of the eye to be examined; and
      the display control unit displays, before the acquisition unit acquires the first image, a third image that is a front image generated based on tomographic data in the depth direction of the eye to be examined, and, immediately after the acquisition unit acquires the first image, switches the displayed third image to the first image for display on the display unit.
  14.  The image processing apparatus according to any one of claims 1 to 8, wherein:
      the first image is a front image generated based on information in a range in a depth direction of the eye to be examined; and
      when the range in the depth direction for the first image is changed in response to an instruction from an operator, the display control unit changes the first image and the second image displayed side by side on the display unit to a first image based on the changed range in the depth direction and a second image generated from that first image, and displays them.
  15.  The image processing apparatus according to any one of claims 1 to 8 and 14, wherein the display control unit enlarges, in response to an instruction from an operator, either the first image or the second image displayed side by side on the display unit.
  16.  The image processing apparatus according to any one of claims 1 to 8, wherein the display control unit sets transparency for at least one of the first image and the second image, and causes the display unit to display the first image and the second image superimposed on each other.
  17.  An image processing apparatus comprising:
      an image quality improvement unit configured to generate, using a trained model, a second image from a first image that is a front image generated based on information in a range in a depth direction of an eye to be examined, the second image having been subjected to at least one of noise reduction and contrast enhancement as compared with the first image; and
      a selection unit configured to select, from a plurality of trained models, the trained model to be used by the image quality improvement unit based on the range in the depth direction used to generate the first image.
  18.  The image processing apparatus according to any one of claims 3 to 12 and 14 to 17, wherein the first image is either a luminance front image of the eye to be examined or a front blood vessel image of the eye to be examined.
  19.  The image processing apparatus according to any one of claims 1 to 18, further comprising a comparing unit configured to compare the first image with the second image and to generate a color map image colored based on the comparison result,
      wherein the color map image is displayed superimposed on the first image or the second image.
  20.  An image processing apparatus comprising:
      an image quality improvement unit configured to generate, using a trained model, a second image from a first image of an eye to be examined, the second image having been subjected to at least one of noise reduction and contrast enhancement as compared with the first image;
      a comparing unit configured to compare the first image with the second image; and
      a display control unit configured to control display on a display unit based on a result of the comparison by the comparing unit.
  21.  The image processing apparatus according to claim 20, wherein:
      the comparing unit calculates a difference between the first image and the second image and generates a color map image color-coded based on the difference; and
      the display control unit causes the display unit to display the color map image.
  22.  The image processing apparatus according to claim 21, wherein the display control unit displays the color map image superimposed on the first image or the second image.
  23.  The image processing apparatus according to claim 21, wherein the display control unit displays the color map image superimposed on the first image or the second image in response to an instruction from an operator.
  24.  The image processing apparatus according to any one of claims 21 to 23, wherein the display control unit displays, superimposed, the color map images corresponding to a plurality of first images or a plurality of second images.
  25.  The image processing apparatus according to claim 20, wherein:
      the comparing unit calculates a difference between the first image and the second image; and
      the display control unit causes the display unit to display a warning when the difference is larger than a predetermined value.
  26.  The image processing apparatus according to claim 20, wherein:
      the comparing unit calculates a difference between the first image and the second image; and
      the display control unit does not cause the display unit to display the second image when the difference is larger than a predetermined value.
  27.  The image processing apparatus according to any one of claims 1 to 26, wherein the training data of the trained model includes an image obtained by one of averaging processing, maximum a posteriori estimation processing, smoothing filter processing, and gradation conversion processing.
  28.  The image processing apparatus according to any one of claims 1 to 26, wherein the training data of the trained model includes an image captured by an imaging apparatus having higher performance than the imaging apparatus used to capture the first image, or an image acquired in an imaging process involving more steps than the imaging process for the first image.
  29.  An image processing method comprising:
      generating, using a trained model, a second image from a first image of an eye to be examined, the second image having been subjected to at least one of noise reduction and contrast enhancement as compared with the first image; and
      causing a display unit to display the first image and the second image switchably, side by side, or superimposed on each other.
  30.  An image processing method comprising:
      generating, using a trained model, a second image from a first image of an eye to be examined by applying image quality improvement processing to the first image; and
      switching between the first image and the second image for display on a display unit in response to an instruction from an operator.
  31.  An image processing method comprising:
      generating, using a trained model, a second image from a first image that is a front image generated based on information in a range in a depth direction of an eye to be examined, the second image having been subjected to at least one of noise reduction and contrast enhancement as compared with the first image; and
      selecting, from a plurality of trained models, the trained model used to generate the second image based on the range in the depth direction used to generate the first image.
  32.  An image processing method comprising:
      generating, using a trained model, a second image from a first image of an eye to be examined, the second image having been subjected to at least one of noise reduction and contrast enhancement as compared with the first image;
      comparing the first image with the second image; and
      controlling display on a display unit based on a result of the comparison between the first image and the second image.
  33.  A program that, when executed by a processor, causes the processor to execute each step of the image processing method according to any one of claims 29 to 32.
PCT/JP2019/023650 2018-09-06 2019-06-14 Image processing apparatus, image processing method, and program WO2020049828A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980057669.5A CN112638234A (en) 2018-09-06 2019-06-14 Image processing apparatus, image processing method, and program
US17/182,402 US20210183019A1 (en) 2018-09-06 2021-02-23 Image processing apparatus, image processing method and computer-readable medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2018-166817 2018-09-06
JP2018166817 2018-09-06
JP2019-068663 2019-03-29
JP2019068663A JP7305401B2 (en) 2018-09-06 2019-03-29 Image processing device, method of operating image processing device, and program

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/182,402 Continuation US20210183019A1 (en) 2018-09-06 2021-02-23 Image processing apparatus, image processing method and computer-readable medium

Publications (1)

Publication Number Publication Date
WO2020049828A1 (en)

Family

ID=69723129

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/023650 WO2020049828A1 (en) 2018-09-06 2019-06-14 Image processing apparatus, image processing method, and program

Country Status (2)

Country Link
JP (1) JP7488934B2 (en)
WO (1) WO2020049828A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06180569A (en) * 1992-09-30 1994-06-28 Hudson Soft Co Ltd Image processor
JP2015198757A (en) * 2014-04-08 2015-11-12 株式会社トーメーコーポレーション Fault imaging apparatus
JP2017094097A (en) * 2015-11-27 2017-06-01 株式会社東芝 Medical image processing device, x-ray computer tomographic imaging device, and medical image processing method
WO2017143300A1 (en) * 2016-02-19 2017-08-24 Optovue, Inc. Methods and apparatus for reducing artifacts in oct angiography using machine learning techniques
JP2018033717A (en) * 2016-08-31 2018-03-08 株式会社トプコン Ophthalmological device
JP2018055516A (en) * 2016-09-30 2018-04-05 キヤノン株式会社 Image processing method, image processing apparatus, imaging apparatus, image processing program, and storage medium
JP2018068748A (en) * 2016-10-31 2018-05-10 キヤノン株式会社 Information processing apparatus, information processing method, and program
JP2018077786A (en) * 2016-11-11 2018-05-17 株式会社東芝 Image processing apparatus, image processing method, program, drive control system, and vehicle
US20180214087A1 (en) * 2017-01-30 2018-08-02 Cognizant Technology Solutions India Pvt. Ltd. System and method for detecting retinopathy

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP2007029460A (en) 2005-07-27 2007-02-08 Topcon Corp Ophthalmic image processor and ophthalmic image processing program
JP6180073B2 (en) 2010-08-31 2017-08-16 キヤノン株式会社 Image processing apparatus, control method therefor, and program
JP6229255B2 (en) 2012-10-24 2017-11-15 株式会社ニデック Ophthalmic analysis apparatus and ophthalmic analysis program
JP6598502B2 (en) 2015-05-01 2019-10-30 キヤノン株式会社 Image generating apparatus, image generating method, and program


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DEVALLA SRIPAD KRISHNA: "A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head", 27 September 2018 (2018-09-27), XP081195312, Retrieved from the Internet <URL:https://arxiv.org/pdf/1809.10589v1.pdf> [retrieved on 20190819] *
DEVALLA SRIPAD KRISHNA: "DRUNET: A Dilated-Residual U-Net Deep Learning Network to Digitally Stain Optic Nerve Head Tissues in Optical Coherence Tomography Images", 1 March 2018 (2018-03-01), Retrieved from the Internet <URL:https://arxiv.org/pdf/1803.00232v1.pdf> [retrieved on 20190819] *
SHEET DEBDOOT: "DEEP LEARNING OF TISSUE SPECIFIC SPECKLE REPRESENTATIONS IN OPTICAL COHERENCE TOMOGRAPHY AND DEEPER EXPLORATION FOR IN SITU HISTOLOGY", 2015 IEEE 12TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI), April 2015 (2015-04-01), pages 777 - 780, XP033179567 *

Also Published As

Publication number Publication date
JP2023076615A (en) 2023-06-01
JP7488934B2 (en) 2024-05-22

Similar Documents

Publication Publication Date Title
JP7229881B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, TRAINED MODEL, MEDICAL IMAGE PROCESSING METHOD AND PROGRAM
JP7250653B2 (en) Image processing device, image processing method and program
JP7341874B2 (en) Image processing device, image processing method, and program
JP7269413B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING SYSTEM, MEDICAL IMAGE PROCESSING METHOD AND PROGRAM
GB2589250A (en) Medical image processing device, medical image processing method and program
JP7305401B2 (en) Image processing device, method of operating image processing device, and program
US11922601B2 (en) Medical image processing apparatus, medical image processing method and computer-readable medium
WO2020183791A1 (en) Image processing device and image processing method
JP7374615B2 (en) Information processing device, information processing method and program
JP2021037239A (en) Area classification method
JP7362403B2 (en) Image processing device and image processing method
WO2020138128A1 (en) Image processing device, image processing method, and program
JP2021122559A (en) Image processing device, image processing method, and program
WO2020075719A1 (en) Image processing device, image processing method, and program
JP2021164535A (en) Image processing device, image processing method and program
JP7488934B2 (en) IMAGE PROCESSING APPARATUS, OPERATION METHOD OF IMAGE PROCESSING APPARATUS, AND PROGRAM
JP2021069667A (en) Image processing device, image processing method and program
JP2019208845A (en) Image processing device, image processing method, and program
JP7446730B2 (en) Image processing device, image processing method and program
JP7086708B2 (en) Image processing equipment, image processing methods and programs
JP2023010308A (en) Image processing device and image processing method
JP2022121202A (en) Image processing device and image processing method
JP2019198384A (en) Image processing system, image processing method and program
JP2020174862A (en) Ophthalmologic imaging apparatus and control method of the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19857030

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19857030

Country of ref document: EP

Kind code of ref document: A1