WO2019216019A1 - Image processing apparatus, image processing method, and program

Info

Publication number: WO2019216019A1
Authority: WIPO (PCT)
Prior art keywords: image, motion contrast, value, variation, processing apparatus
Application number: PCT/JP2019/010004
Other languages: English (en), French (fr), Japanese (ja)
Inventors: 彰人 宇治, 好彦 岩瀬
Original assignee: キヤノン株式会社 (Canon Inc.)
Application filed by: キヤノン株式会社 (Canon Inc.)

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions

Definitions

  • the present invention relates to an image processing apparatus, an image processing method, and a program.
  • An ophthalmic tomographic imaging apparatus, such as optical coherence tomography (OCT), makes it possible to observe the state inside the retinal layers three-dimensionally.
  • Such a tomographic imaging apparatus has attracted attention in recent years because it is useful for more accurately diagnosing diseases in the eye.
  • Known variants include SD-OCT (spectral domain OCT) and SS-OCT (swept source OCT); in SS-OCT, the spectral interference is measured with a single-channel photodetector by using a high-speed wavelength-swept light source.
  • In recent years, OCT angiography (OCTA) has been proposed, in which a blood vessel image (hereinafter referred to as an OCTA image) is generated by projecting three-dimensional motion contrast data acquired by OCT onto a two-dimensional plane.
  • the motion contrast data is data obtained by repeatedly photographing the same cross section of the measurement object by OCT and detecting a temporal change of the measurement object during the imaging.
  • Motion contrast data can be obtained, for example, by computing the temporal change in the phase, vector, or intensity of the complex OCT signal as a difference, a ratio, or a correlation.
  • Japanese Patent Application Laid-Open No. 2004-228688 discloses a technique for acquiring position information of a blood vessel by processing motion contrast data and acquiring analysis information about the blood vessel based on the position information.
  • Since the motion contrast data acquired by OCT captures locations that change over time, it is possible to acquire images of blood vessel portions where blood is flowing. Therefore, blood vessels can be analyzed using OCTA images, which can support diagnosis of the degree of disease.
  • In such analysis, however, only the physical size of blood vessels, such as dimensions, area, and volume, is evaluated; the blood flow within the vessels is not.
  • The present invention provides a technique that enables evaluation of circulation, such as blood flow, by using a plurality of motion contrast data captured at different times.
  • An image processing apparatus includes: acquisition means for acquiring a plurality of motion contrast images relating to the same position of the fundus at different times; calculating means for calculating a value indicating a variation in luminance for a plurality of corresponding pixels of the plurality of motion contrast images; generating means for generating an output image based on the value indicating the variation in luminance calculated by the calculating means; and output means for outputting the output image generated by the generating means.
  • FIG. 1 is a diagram illustrating a configuration example of an image processing system according to a first embodiment.
  • FIG. 2A is a diagram illustrating the structure of the eye part.
  • FIG. 2B is a diagram illustrating a tomographic image.
  • FIG. 2C is a diagram illustrating a fundus image.
  • FIG. 3A is a flowchart showing processing by the image processing apparatus.
  • FIG. 3B is a flowchart illustrating processing by the image processing apparatus.
  • FIG. 4A is a flowchart showing processing by the image processing apparatus.
  • FIG. 4B is a flowchart illustrating processing by the image processing apparatus.
  • FIG. 5 is a diagram for explaining generation of motion contrast data.
  • FIG. 6A is a diagram for explaining the removal of artifacts.
  • FIG. 6B is a diagram for explaining the removal of artifacts.
  • FIG. 7A is a diagram for explaining the first alignment.
  • FIG. 7B is a diagram for explaining the first alignment.
  • FIG. 8A is a diagram for explaining the second alignment.
  • FIG. 8B is a diagram for explaining the second alignment.
  • FIGS. 9A and 9B are diagrams for explaining variation images.
  • FIG. 10 is a diagram for explaining a screen for displaying an image.
  • FIG. 11 is a diagram for explaining a screen for displaying an image.
  • FIG. 12A is a diagram illustrating an example of data display.
  • FIG. 12B is a diagram illustrating an example of data display.
  • FIG. 13 is a diagram for explaining data analysis.
  • FIG. 14 is a diagram for explaining data analysis.
  • FIG. 15 is a diagram illustrating a configuration example of an image processing system according to the third embodiment.
  • FIG. 16A is a flowchart illustrating a process according to the third embodiment.
  • FIG. 16B is a flowchart illustrating processing according to the third embodiment.
  • FIG. 17 is a diagram for explaining a screen for displaying an image.
  • In the present embodiment, the image processing apparatus aligns a plurality of motion contrast data and calculates the temporal variation of the motion contrast data when generating two-dimensional motion contrast data with reduced artifacts, so that changes over time can be displayed.
  • Here, high image quality refers to an image whose S/N ratio is improved compared with a single acquisition, or an image in which the amount of information necessary for diagnosis is increased.
  • the details of the image processing system including the image processing apparatus according to each embodiment will be described.
  • FIG. 1 is a block diagram illustrating a configuration example of an image processing system 100 including an image processing apparatus 300 according to the first embodiment.
  • A tomographic imaging apparatus 200 (also referred to as OCT), a fundus image capturing apparatus 400, an external storage unit 500, and an input unit 700 are connected to the image processing apparatus 300 via interfaces.
  • the tomographic imaging apparatus 200 is an apparatus that captures tomographic images of the eye, and includes, for example, SD-OCT and SS-OCT.
  • a known device can be used for the tomographic imaging apparatus 200.
  • The galvanometer mirror 201 scans the measurement light over the fundus and defines the fundus imaging range of the OCT.
  • the drive control unit 202 controls the driving range and speed of the galvanometer mirror 201, thereby defining the imaging range and the number of scanning lines (scanning speed in the plane direction) on the fundus.
  • In FIG. 1, the galvanometer mirror 201 is shown as one unit, but it is actually configured by an X-scan mirror (X scanner) and a Y-scan mirror (Y scanner), so that the measurement light can be scanned over a desired range on the fundus.
  • The focus unit 203 uses a focus lens (not shown) to focus the measurement light on the retinal layers of the fundus through the anterior segment of the eye to be examined.
  • the internal fixation lamp 204 includes a display unit 241 and a lens 242.
  • The display unit 241 includes a plurality of light emitting diodes (LEDs) arranged in a matrix; the lighting position of the LEDs is changed according to the part to be imaged, under the control of the drive control unit 202.
  • the light from the display unit 241 is guided to the eye to be examined through the lens 242.
  • The wavelength of the light emitted from the display unit 241 is 520 nm, and a desired pattern is displayed under the control of the drive control unit 202.
  • the coherence gate stage 205 is controlled by the drive control unit 202 in order to cope with a difference in the axial length of the eye to be examined.
  • the coherence gate represents a position where the optical distances of the measurement light and the reference light in OCT are equal.
  • the fundus image capturing apparatus 400 is an apparatus that captures a fundus image of an eye part, and includes, for example, a fundus camera, SLO (Scanning Laser Ophthalmoscope), and the like.
  • FIG. 2A shows a schematic diagram of an eyeball.
  • C represents the cornea
  • CL represents the lens
  • V represents the vitreous body
  • M represents the macula (the center of the macula is the fovea)
  • D represents the optic nerve head.
  • In the present embodiment, an example in which the posterior pole of the retina, including the vitreous body, the macula, and the optic papilla, is imaged will be described; however, it goes without saying that the tomographic imaging apparatus 200 can also image the anterior segment, such as the cornea and the lens.
  • FIG. 2B shows an example of a tomographic image acquired when the tomographic imaging apparatus 200 images the retina.
  • AS represents an A-scan, the unit of image acquisition in an OCT tomographic image. By repeating the A-scan along the x-axis direction in the figure, one B-scan is formed; this B-scan is called a tomographic image.
  • Ve represents a blood vessel
  • V represents a vitreous body
  • M represents a macular region
  • D represents an optic papilla
  • La represents the posterior surface of the lamina cribrosa.
  • L1 is the boundary between the inner boundary membrane (ILM) and the nerve fiber layer (NFL)
  • L2 is the boundary between the nerve fiber layer and the ganglion cell layer (GCL)
  • L3 is the photoreceptor inner segment/outer segment junction (IS/OS).
  • L4 represents the retinal pigment epithelial layer (RPE)
  • L5 represents the Bruch's membrane (BM)
  • L6 represents the choroid.
  • In FIG. 2B, the horizontal axis (OCT main scanning direction) is the x-axis, and the vertical axis (depth direction) is the z-axis.
  • FIG. 2C shows an example of the fundus image acquired by the fundus image capturing apparatus 400.
  • M represents the macular region
  • D represents the optic nerve head
  • the thick curve represents the retinal blood vessel.
  • the horizontal axis (OCT main scanning direction) is the x-axis
  • the vertical axis (OCT sub-scanning direction) is the y-axis.
  • the device configurations of the tomographic imaging apparatus 200 and the fundus imaging apparatus 400 may be an integrated type or a separate type.
  • the image processing apparatus 300 includes an image acquisition unit 301, a storage unit 302, an image processing unit 303, an instruction unit 304, and a display control unit 305.
  • the tomographic image generation unit 311 acquires signal data of a tomographic image captured by the tomographic imaging apparatus 200, and generates a tomographic image by performing signal processing.
  • the motion contrast data generation unit 312 generates motion contrast data.
  • the image acquisition unit 301 acquires fundus image data captured by the fundus image capturing apparatus 400.
  • the tomographic image generated by the image acquisition unit 301 and the acquired fundus image are stored in the storage unit 302.
  • the preprocessing unit 331 performs a process of removing artifacts from the motion contrast data.
  • the image generation unit 332 generates a two-dimensional motion contrast front image (also referred to as an OCTA image) from the three-dimensional motion contrast data, and a two-dimensional front image (also referred to as an Enface image) from the three-dimensional tomographic image.
  • the detection unit 333 detects the boundary line of each layer from the retina.
  • the first alignment unit 334 performs alignment of the two-dimensional front image.
  • the selection unit 335 selects reference data from the result of the first alignment unit 334.
  • the second alignment unit 336 performs alignment in the lateral direction (x axis) of the retina using the OCTA image.
  • the calculation unit 340 calculates a statistical value (average value, standard deviation, coefficient of variation, maximum value, minimum value, etc.) between the plurality of OCTA images.
  • the fluctuation image generation unit 341 generates an image based on the statistical value obtained by the calculation unit 340.
  • Each image generated by the image processing unit 303 is stored in the storage unit 302.
  • the instruction unit 304 instructs the tomographic imaging apparatus 200 to drive.
  • the display control unit 305 uses the images stored in the storage unit 302 to perform various types of display on the display unit 600 including displays as will be described later with reference to FIGS. 10 and 11.
  • the external storage unit 500 stores information relating to the eye to be examined (patient name, age, sex, etc.), captured image data, imaging parameters, image analysis parameters, and parameters set by the operator in association with each other.
  • the display unit 600 is a liquid crystal display, for example.
  • the input unit 700 is, for example, a mouse, a keyboard, a touch operation screen, and the like, and an operator gives an instruction to the image processing apparatus 300, the tomographic image capturing apparatus 200, and the fundus image capturing apparatus 400 via the input unit 700.
  • FIG. 3A shows an overall operation process of the image processing apparatus 300 according to the first embodiment.
  • FIG. 3B shows a process for generating high-quality data (high-quality image) in the first embodiment.
  • In step S301, the image processing apparatus 300 acquires a subject identification number from the outside as information for identifying the eye to be examined (subject eye information).
  • the subject identification number may be input by the user from the input unit 700, for example.
  • the image processing apparatus 300 acquires information on the subject eye held by the external storage unit 500 based on the subject identification number and stores it in the storage unit 302.
  • the tomographic imaging apparatus 200 scans the eye to be imaged.
  • the scan of the eye to be examined is started in response to a scan start instruction from the operator.
  • the tomographic imaging apparatus 200 controls the drive control unit 202 and operates the galvanometer mirror 201 to scan a tomographic image.
  • The galvanometer mirror 201 includes a horizontal X scanner and a vertical Y scanner. By changing the orientation of each scanner, scanning can be performed in the horizontal direction (X) and the vertical direction (Y) of the apparatus coordinate system, and by changing both orientations simultaneously, scanning can be performed in a direction combining the two. Therefore, the galvanometer mirror 201 can scan the fundus plane in any direction.
  • The instruction unit 304 sets the fixation position of the internal fixation lamp 204, the scan range and scan pattern of the galvanometer mirror 201, the position of the coherence gate via the coherence gate stage 205, and the focus position of the focus unit 203.
  • The drive control unit 202 controls the light emitting diodes of the display unit 241 to set the fixation position of the internal fixation lamp 204 so that imaging is performed at the center of the macula or at the optic disc.
  • Examples of scan patterns that can be set on the galvanometer mirror 201 include raster scans that capture a three-dimensional volume, radial scans, and cross scans.
  • In the present embodiment, a raster scan that captures a three-dimensional volume is used as the scan pattern, and the three-dimensional volume is captured N times (N ≥ 2) in order to generate high-quality data. In each of the N acquisitions, the same imaging range is captured with the same scan pattern; for example, a range of 3 mm × 3 mm is repeatedly captured at 300 × 300 samples (main scanning × sub scanning). To calculate the motion contrast, the same line is repeatedly imaged m times (m ≥ 2); that is, when m = 2, 300 × 600 B-scans are actually captured, and 300 × 300 three-dimensional motion contrast data is generated from them.
  • The tomographic imaging apparatus 200 scans the eye while tracking it so that the same location can be imaged for addition averaging, thereby reducing the influence of involuntary fixation eye movements. Furthermore, when motion that would cause an artifact, such as blinking, is detected, the location where the artifact occurred is automatically rescanned.
  • the tomographic imaging apparatus 200 may control tomographic imaging so that the imaging interval is a constant interval.
  • For example, the acquisition interval for one set of motion contrast data may be automatically controlled according to the scan density (for example, 3 seconds or 5 seconds) so that the N repeated acquisitions are performed automatically.
  • the operator may be presented with an index for photographing at regular intervals.
  • the indicator may be any indicator that notifies the operator of the timing to start shooting. For example, a display that allows the operator to recognize the passage of time, such as a countdown at the start of shooting or how many seconds have passed since the previous shooting, can be used as an index.
  • the tomographic image generation unit 311 generates a tomographic image by performing reconstruction processing on each interference signal.
  • the tomographic image generation unit 311 performs fixed pattern noise removal from the interference signal.
  • the fixed pattern noise removal is performed, for example, by extracting fixed pattern noise by averaging a plurality of detected A scan signals, and subtracting this from the input interference signal.
  • the tomographic image generation unit 311 performs desired window function processing in order to optimize the depth resolution and dynamic range that are in a trade-off relationship when Fourier transform is performed in a finite section.
  • the tomographic image generation unit 311 generates a tomographic signal by performing FFT processing.
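Putting the reconstruction steps above together, a minimal Python sketch of the A-scan reconstruction is shown below; the Hann window, array shapes, and function names are illustrative assumptions rather than details taken from this publication.

```python
import numpy as np

def reconstruct_tomogram(interference: np.ndarray) -> np.ndarray:
    """Reconstruct a B-scan from raw spectral interference data.

    interference: (n_ascans, n_samples) array of spectral fringes.
    Returns a (n_ascans, n_samples // 2) linear-intensity tomogram.
    """
    # Fixed pattern noise: average many detected A-scans and subtract the
    # result from each input A-scan (the background is common to all).
    fixed_pattern = interference.mean(axis=0, keepdims=True)
    signal = interference - fixed_pattern

    # Window to balance depth resolution against dynamic range
    # (a Hann window is one common, illustrative choice).
    signal = signal * np.hanning(signal.shape[1])

    # FFT along the spectral axis yields the depth profile (A-scan).
    ascans = np.fft.fft(signal, axis=1)

    # Keep the positive-frequency half and take the magnitude.
    half = signal.shape[1] // 2
    return np.abs(ascans[:, :half])
```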
  • In step S304, the motion contrast data generation unit 312 generates motion contrast data.
  • Generation of motion contrast data will be described with reference to FIG.
  • In FIG. 5, MC indicates the three-dimensional motion contrast data, and LMC indicates the two-dimensional motion contrast data constituting it.
  • the motion contrast data generation unit 312 first corrects a positional shift between a plurality of tomographic images taken in the same range of the eye to be examined.
  • the method for correcting the misalignment may be any method.
  • For example, the motion contrast data generation unit 312 aligns the tomographic image data corresponding to the same location, obtained by imaging the same range m times, using features of the fundus shape. Specifically, one of the m tomographic image data sets is selected as a template, the similarity to the other tomographic image data is computed while changing the position and angle of the template, and the amount of positional deviation from the template is obtained. The motion contrast data generation unit 312 then corrects each tomographic image data set based on the obtained deviation amounts.
  • Next, the motion contrast data generation unit 312 obtains a decorrelation value M(x, z) by [Equation 1] between two tomographic image data sets whose acquisition times are consecutive.
  • Here, A(x, z) represents the luminance at position (x, z) of tomographic image data A, and B(x, z) represents the luminance at the same position (x, z) of tomographic image data B.
  • the decorrelation value M (x, z) is a value between 0 and 1, and the value of M (x, z) increases as the difference between the two luminances increases.
  • When the same position is imaged three or more times, the motion contrast data generation unit 312 can obtain a plurality of decorrelation values M(x, z) at the same position (x, z).
  • The motion contrast data generation unit 312 can then generate the final motion contrast data by performing statistical processing, such as taking the maximum or the average of the plurality of obtained decorrelation values M(x, z).
  • When the number of repetitions m is 2, only one decorrelation value M(x, z) is obtained. In this case, no statistical processing such as maximum or average calculation is performed, and the decorrelation value M(x, z) between the temporally adjacent tomographic images A and B becomes the motion contrast value at position (x, z).
  • the motion contrast calculation formula shown in [Equation 1] tends to be susceptible to noise.
  • the motion contrast data generation unit 312 may regard tomographic data that falls below a predetermined threshold value as noise and replace it with zero as preprocessing.
  • Thereby, the image generation unit 332 can generate motion contrast images in which the influence of noise is reduced.
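The text cites [Equation 1] without reproducing it; a commonly used decorrelation of two tomogram luminances that matches the stated behavior (a value between 0 and 1 that grows with the luminance difference) is M = 1 - 2AB / (A² + B²). The sketch below assumes that form, together with the noise-floor preprocessing described above.

```python
import numpy as np

def decorrelation(a: np.ndarray, b: np.ndarray, noise_floor: float = 0.0) -> np.ndarray:
    """Decorrelation between two repeated, aligned B-scans A and B.

    Uses M = 1 - 2*A*B / (A^2 + B^2), which lies in [0, 1] and grows with
    the luminance difference, matching the behavior the text describes for
    [Equation 1] (the exact published form may differ).
    """
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    # Preprocessing from the text: data below a threshold is treated as
    # noise and replaced with zero.
    a = np.where(a < noise_floor, 0.0, a)
    b = np.where(b < noise_floor, 0.0, b)
    denom = a * a + b * b
    m = np.zeros_like(a)
    valid = denom > 0
    m[valid] = 1.0 - 2.0 * a[valid] * b[valid] / denom[valid]
    return m

def motion_contrast(tomograms: np.ndarray, noise_floor: float = 0.0) -> np.ndarray:
    """Motion contrast for one line imaged m times (m >= 2).

    tomograms: (m, z, x) aligned repeated B-scans. For m == 2 the single
    decorrelation is used directly; for m > 2 the values are combined by
    statistical processing (here, the average).
    """
    pairs = [decorrelation(tomograms[i], tomograms[i + 1], noise_floor)
             for i in range(len(tomograms) - 1)]
    return pairs[0] if len(pairs) == 1 else np.mean(pairs, axis=0)
```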
  • In step S305, the image processing unit 303 generates high-quality data using the acquired three-dimensional motion contrast data. This high-quality data generation processing will be described with reference to the flowcharts of FIGS. 3B and 4A.
  • In step S351, the detection unit 333 detects the boundary lines of the retinal layers in the plurality of tomographic images captured by the tomographic imaging apparatus 200.
  • the detection unit 333 detects, for example, each boundary of L1 to L6 or a GCL / IPL, IPL / INL, INL / OPL, and OPL / ONL boundary (not shown) in the tomographic image of FIG. 2B.
  • An example of the boundary detection process is as follows. First, the detection unit 333 applies a median filter and a Sobel filter to the tomographic image to be processed, creating a median image and a Sobel image.
  • the detection unit 333 creates a profile for each A scan from the created median image and Sobel image.
  • a brightness value profile is created from the median image, and a gradient profile is created from the Sobel image.
  • the detection unit 333 detects a peak in the profile created from the Sobel image.
  • the detection unit 333 detects the boundary of each region of the retinal layer by referring to the profile of the median image before and after the detected peak and between the peaks. Note that the method of detecting the boundary line in the retinal layer is not limited to the above.
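A minimal sketch of the per-A-scan profile approach described above is given below; the filter sizes and the peak threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def detect_layer_peaks(bscan: np.ndarray, min_gradient: float):
    """Per-A-scan layer boundary candidates from median/Sobel profiles.

    bscan: (z, x) tomogram. Returns one list per A-scan of the depth
    indices where the Sobel (gradient) profile peaks, i.e. candidate
    layer boundaries. Filter size and threshold are illustrative.
    """
    median_img = median_filter(bscan, size=3)        # noise-suppressed image
    sobel_img = np.abs(sobel(median_img, axis=0))    # depth-gradient image
    boundaries = []
    for x in range(bscan.shape[1]):
        profile = sobel_img[:, x]                    # gradient profile of one A-scan
        # Local maxima above a threshold are boundary candidates; the
        # median-image profile around and between these peaks would then
        # be examined to decide which retinal boundary each peak is.
        peaks = [z for z in range(1, len(profile) - 1)
                 if profile[z] >= min_gradient
                 and profile[z] > profile[z - 1]
                 and profile[z] >= profile[z + 1]]
        boundaries.append(peaks)
    return boundaries
```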
  • In step S352, the image generation unit 332 projects the motion contrast data in the range between the specified upper end and lower end of the generation range onto a two-dimensional plane to generate an OCTA image.
  • Specifically, the image generation unit 332 applies processing such as average value projection (AIP) or maximum value projection (MIP) to the motion contrast data between the upper end and the lower end of the generation range, thereby generating an OCTA image, which is an Enface image (front image) of the three-dimensional motion contrast data. That is, in step S352, a plurality of (N) motion contrast images (OCTA images) relating to the same position of the fundus at different times are acquired.
  • the OCTA image generation method is not limited to the projection using the average value or the maximum value.
  • An OCTA image may be generated with values such as a minimum value, median value, variance, standard deviation, and sum.
  • In the present embodiment, the upper end of the generation range is the ILM/NFL boundary, the lower end is a boundary 50 µm below the GCL/IPL boundary, and the OCTA image is generated by the average value projection method.
  • the motion contrast data generation unit 312 may generate motion contrast data using tomographic data in a range between the upper end of the generation range and the lower end of the generation range.
  • In this case, since the motion contrast data itself is limited to the range between the upper end and the lower end of the generation range, the image generation unit 332 does not need to consider that range; that is, by performing average value projection or the like on the entire motion contrast data generated by the motion contrast data generation unit 312, an OCTA image based on the tomographic data in the specified depth range can be generated.
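The projection between the specified boundaries can be sketched as follows; the array layout and function names are assumptions for illustration.

```python
import numpy as np

def project_octa(mc_volume: np.ndarray, top: np.ndarray, bottom: np.ndarray,
                 mode: str = "mean") -> np.ndarray:
    """Project 3-D motion contrast data to a 2-D OCTA (en-face) image.

    mc_volume: (y, z, x) motion contrast data.
    top, bottom: (y, x) depth indices of the generation range, e.g. the
    ILM/NFL boundary and a line 50 um below GCL/IPL converted to pixels.
    mode: "mean" for average value projection (AIP) or "max" for
    maximum value projection (MIP).
    """
    ny, nz, nx = mc_volume.shape
    octa = np.zeros((ny, nx), dtype=np.float64)
    for y in range(ny):
        for x in range(nx):
            z0, z1 = int(top[y, x]), int(bottom[y, x])
            column = mc_volume[y, z0:z1, x]   # depth column inside the range
            if column.size == 0:
                continue
            octa[y, x] = column.mean() if mode == "mean" else column.max()
    return octa
```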
  • In step S353, the first alignment unit 334 aligns the N OCTA images obtained from the N three-dimensional volumes in the horizontal direction (x-axis) and the vertical direction (y-axis) of the images, and also performs rotational alignment in the xy plane. This alignment process will be described with reference to the flowchart of FIG. 4A.
  • In step S3531 of FIG. 4A, a reduction process for reducing artifacts is performed on the plurality of motion contrast images (OCTA images).
  • The preprocessing unit 331 detects artifacts such as black bands and white lines in the OCTA images generated by the image generation unit 332, and removes the detected artifacts.
  • the black area of the OCTA image represents a place where the decorrelation value is high, that is, a place where blood flow is detected (corresponding to a blood vessel), and the white area represents a place where the decorrelation value is low.
  • BB in FIG. 6A shows an example of a black band. The black band is a phenomenon in which the decorrelation value becomes low over an entire line, for example because the retina moved away from a high-sensitivity position due to movement during imaging so that the luminance of the retinal tomogram, and hence the decorrelation value, decreased, or because the entire image became dark due to blinking or the like.
  • The white line occurs when the decorrelation value becomes high over an entire line: the m tomographic images are aligned before the decorrelation is calculated, and if this alignment fails, or the displacement cannot be corrected by the alignment, the decorrelation value increases across the line. Since these artifacts arise in the decorrelation calculation, they occur in units of one line in the main scanning direction; therefore, the preprocessing unit 331 detects artifacts in units of one line.
  • In black band detection, a line is detected as a black band when the average decorrelation value in the line is equal to or less than a threshold TH_AVG_B.
  • In white line detection, a line is detected as a white line when the average decorrelation value in the line is equal to or greater than a threshold TH_AVG_W and the standard deviation (or variance) is equal to or less than TH_SD_W. Since decorrelation values may also be high in large blood vessels and the like, determining a white line from the average value alone could falsely detect a region containing such high-decorrelation vessels as a white line. For this reason, in the present embodiment, whether a line is a white line is determined by combining the average with an index that evaluates the variation of the values, such as the standard deviation or variance.
  • In a line containing large blood vessels, the average value is high and the standard deviation is also high, whereas in a white-line artifact the average value is high but the variation of the values is small, so the standard deviation is low. Therefore, false detection can be reduced by performing white line detection based on both the average and the variation of the decorrelation values.
  • the pre-processing unit 331 stores the artifact area obtained above in the Mask image corresponding to the OCTA image.
  • In the Mask image, white areas (valid regions) are set to 1 and black areas (artifact regions) are set to 0.
  • The threshold values used for the black band detection and the white line detection described above are set for each image, for example by a dynamic thresholding method such as the P-tile method or discriminant analysis.
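A sketch of the line-by-line artifact detection described above is shown below; the thresholds are assumed to have been set per image (for example, by the P-tile method), and the parameter names follow the text's TH_AVG_B, TH_AVG_W, and TH_SD_W.

```python
import numpy as np

def artifact_mask(octa: np.ndarray, th_avg_b: float, th_avg_w: float,
                  th_sd_w: float) -> np.ndarray:
    """Per-line black band / white line detection.

    Returns a Mask image in which valid pixels are 1 and artifact lines
    are 0. A line is a black band when its mean decorrelation is at most
    TH_AVG_B, and a white line when its mean is at least TH_AVG_W while
    its standard deviation is at most TH_SD_W (the variation check avoids
    mistaking large vessels, where both mean and variation are high,
    for white lines).
    """
    mask = np.ones_like(octa, dtype=np.uint8)
    for y in range(octa.shape[0]):          # one line per main-scan row
        line = octa[y]
        mean, sd = line.mean(), line.std()
        if mean <= th_avg_b:                              # black band
            mask[y, :] = 0
        elif mean >= th_avg_w and sd <= th_sd_w:          # white line
            mask[y, :] = 0
    return mask
```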
  • In step S3532, the first alignment unit 334 initializes a two-dimensional matrix for storing the alignment parameters obtained when the OCTA images are aligned.
  • In each matrix element, the information necessary for the image quality improvement, such as the deformation parameters and the image similarity at the time of alignment, is stored collectively.
  • In step S3533, the first alignment unit 334 selects the OCTA image to be used as the alignment reference.
  • In the present embodiment, each of the N OCTA images is set in turn as the reference image for alignment (hereinafter, the target image), and alignment is performed between the OCTA image set as the target image and the remaining OCTA images (the alignment itself is performed in step S3534). For example, when the OCTA image of Data0 is selected as the target image in step S3533, it is aligned with each of the OCTA images of Data1 to Data(N-1); next, when the OCTA image of Data1 is selected as the target image, it is aligned with each of the OCTA images of Data2 to Data(N-1).
  • FIG. 7A An example of this is shown in FIG. 7A.
  • Data 0 to Data 2 are shown for the sake of simplicity.
  • alignment is performed among N OCTA images.
  • As the target image advances, the starting Data number of the OCTA images to be aligned with it also increases by one. For example, when the OCTA image of Data2 is used as the reference, the alignments of Data0 with Data1, Data0 with Data2, and Data1 with Data2 have already been completed by the processing up to that point, so alignment is performed from Data3 onward. In this way, although all the OCTA images are aligned with one another, only half of the combinations need to be computed.
  • In step S3534, the first alignment unit 334 aligns the OCTA image selected as the target image in step S3533 and each other OCTA image in the horizontal (x-axis) and vertical (y-axis) directions, and also performs rotational alignment in the xy plane.
  • In the present embodiment, alignment is performed after enlarging the OCTA images, because sub-pixel alignment is expected to be more accurate than pixel-level alignment. For example, when the acquisition size of an OCTA image is 300 × 300, it is enlarged to 600 × 600.
  • For the enlargement, an interpolation method such as bicubic or Lanczos(n) interpolation is used.
  • For the alignment, an evaluation function representing the similarity between two OCTA images is defined in advance, the evaluation value is calculated while shifting and rotating the OCTA image position, and the position giving the best evaluation value is taken as the alignment result.
  • Examples of the evaluation function include evaluation based on pixel values (for example, evaluation using a correlation coefficient).
  • An expression in the case of using a correlation coefficient as an evaluation function representing the similarity S is shown in [Equation 2].
  • the area of the Data0 OCTA image is f (x, y), and the area of the Data1 OCTA image is g (x, y).
  • f_ave and g_ave represent the averages of the regions f(x, y) and g(x, y), respectively. Here, a region is the image area used for alignment; normally, an ROI whose size is equal to or smaller than that of the OCTA image is set.
  • The evaluation function is not limited to [Equation 2]; SSD (Sum of Squared Differences) or SAD (Sum of Absolute Differences) may be used as long as the similarity or difference between images can be evaluated.
  • alignment may be performed by a method such as POC (Phase Only Correlation). Through this processing, global alignment within the XY plane is performed.
  • Note that the enlargement is not essential; for example, when the input OCTA image is a high-density scan such as 900 × 900, enlargement is not necessarily required.
  • the alignment may be performed by generating pyramid structure data.
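A brute-force sketch of this global alignment follows; [Equation 2] is implemented as the correlation coefficient of the two regions, and the search ranges are illustrative assumptions (a practical implementation would enlarge the images for sub-pixel accuracy and restrict the evaluation to an ROI, as described above).

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, rotate as nd_rotate

def correlation(f: np.ndarray, g: np.ndarray) -> float:
    """Similarity S of [Equation 2]: the correlation coefficient of two
    regions (1.0 for identical images)."""
    fz, gz = f - f.mean(), g - g.mean()
    denom = np.sqrt((fz * fz).sum() * (gz * gz).sum())
    return float((fz * gz).sum() / denom) if denom > 0 else 0.0

def align_global(target: np.ndarray, moving: np.ndarray,
                 max_shift: int = 10, angles=(-2, -1, 0, 1, 2)):
    """Exhaustive x/y/rotation search maximizing S; returns (X, Y, theta).

    The evaluation value is computed while shifting and rotating the
    moving image, and the parameters giving the best value are returned.
    """
    best = (-np.inf, (0, 0, 0.0))
    for theta in angles:
        rotated = nd_rotate(moving, theta, reshape=False, order=1)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                moved = nd_shift(rotated, (dy, dx), order=1)
                s = correlation(target, moved)
                if s > best[0]:
                    best = (s, (dx, dy, float(theta)))
    return best[1]
```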
  • In step S3535, the first alignment unit 334 calculates an image evaluation value for the OCTA image selected as the target image.
  • The image evaluation value is calculated using the common image area, excluding the invalid regions generated by the alignment, of the OCTA images two-dimensionally aligned in step S3534.
  • The image evaluation value Q can be obtained by [Equation 3], where
    σ_f² = ∬ (f(x, y) - f_ave)² dx dy,
    σ_g² = ∬ (g(x, y) - g_ave)² dx dy,
    σ_fg = ∬ (f(x, y) - f_ave)(g(x, y) - g_ave) dx dy.
  • In [Equation 3], the first term evaluates the correlation between the images, the second term evaluates brightness, and the third term evaluates contrast, where f_ave and g_ave represent the averages of the regions f(x, y) and g(x, y), respectively. Each term has a minimum value of 0 and a maximum value of 1, and when the two images are identical, the evaluation value is 1. Therefore, the evaluation value is high when an image close to the average of the N OCTA images is used as the reference, and low when an OCTA image that differs from the others is used as the reference.
  • "Different from the other OCTA images" refers to cases where the imaging position differs, the image is distorted, the entire image is too dark or too bright, or artifacts such as white lines and black bands are included. Q in [Equation 3] is calculated between the target image and each of the other N-1 OCTA images, and the sum of these values of Q is used as the evaluation value. Note that [Equation 3] need not necessarily be used as the image evaluation value; each term may be evaluated independently, or the combination of terms may be changed.
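The body of [Equation 3] is not reproduced in this text; from the three terms described (correlation, brightness, and contrast, each between 0 and 1, with Q = 1 for identical images), it matches the form of the universal image quality index, which the sketch below assumes.

```python
import numpy as np

def image_quality_q(f: np.ndarray, g: np.ndarray) -> float:
    """Q with correlation, brightness, and contrast terms, each in [0, 1];
    Q = 1 when the two regions are identical. Assumed UQI-like form."""
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    f_ave, g_ave = f.mean(), g.mean()
    sf = np.sqrt(((f - f_ave) ** 2).sum())       # sigma_f
    sg = np.sqrt(((g - g_ave) ** 2).sum())       # sigma_g
    sfg = ((f - f_ave) * (g - g_ave)).sum()      # sigma_fg
    if sf == 0 or sg == 0 or (f_ave == 0 and g_ave == 0):
        return 0.0
    corr = max(sfg / (sf * sg), 0.0)             # correlation term, clipped
                                                 # to keep each term in [0, 1]
    brightness = 2 * f_ave * g_ave / (f_ave ** 2 + g_ave ** 2)
    contrast = 2 * sf * sg / (sf ** 2 + sg ** 2)
    return float(corr * brightness * contrast)
```

As the text describes, Q would be computed between the target image and each of the other N-1 aligned OCTA images over their common valid area, and the sum used as the target image's evaluation value.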
  • In step S3536, the first alignment unit 334 stores the values in the two-dimensional matrix, initialized in step S3532, that holds the parameters necessary for the image quality improvement, such as the alignment parameters and the image similarity.
  • For example, when the target image is Data0 and it is aligned with the Data1 image, the horizontal alignment parameter X, the vertical alignment parameter Y, the rotation parameter θ in the XY plane, the image evaluation value, and the image similarity are stored in element (0, 1) of the two-dimensional matrix. In addition, the Mask images shown in FIGS. 6A and 6B are stored in association with the OCTA images. Further, although not described in the present embodiment, when magnification correction is performed, the magnification may also be stored.
  • In step S3537, the first alignment unit 334 determines whether every OCTA image has been used as the target image and aligned with the remaining OCTA images. If not all OCTA images have been processed, the process returns to step S3533; if all OCTA images have been used as the reference, the process proceeds to step S3538.
  • In step S3538, the first alignment unit 334 fills in the remaining elements of the two-dimensional matrix. As described in steps S3533 to S3537, values are computed for only half of the combinations of the N OCTA images, so these values are copied to the uncomputed elements. For example, the parameters of element (0, 1) are copied to element (1, 0); that is, element (i, j) is copied to element (j, i). At this time, since the alignment parameters X and Y and the rotation parameter θ are inverted, they are copied after multiplying by -1. Since the image similarity and the like do not invert, the same values are copied as they are. The OCTA image alignment is performed by these processes.
  • In step S354, the selection unit 335 selects a reference image from the N OCTA images based on the results of the alignment performed in step S353.
  • Information necessary for generating a high-quality image is stored in each element of the two-dimensional matrix generated in step S353, and the selection unit 335 uses the information to select a reference image.
  • the selection unit 335 uses, for example, an image evaluation value, an alignment parameter evaluation value, and an artifact region evaluation value in selecting a reference image.
  • As the image evaluation value, the value obtained in step S3535 (for example, the value Q obtained by [Equation 3]) is used.
  • the alignment parameter evaluation value is acquired by, for example, [Equation 4] using X and Y of the alignment result obtained in step S3534. In [Equation 4], the larger the movement amount, the larger the value.
  • the artifact region evaluation value is calculated by, for example, [Equation 5] using the Mask image obtained in step S3531.
  • In [Equation 5], T(x, y) represents a pixel in a region of the Mask image that is not an artifact.
  • A larger image evaluation value Q and a larger artifact area evaluation value NA are better, while a smaller alignment parameter evaluation value SV is better.
  • The image evaluation value and the alignment parameter evaluation value are each obtained in relation to the other images when a given image is used as the reference, and are therefore totals of N-1 values. Since these evaluation values have different scales, the selection unit 335 sorts the images by each value and selects the reference image based on the sum of the sorted ranks. For example, the image evaluation value and the artifact area evaluation value are sorted so that the rank decreases as the value increases, and the alignment parameter evaluation value is sorted so that the rank decreases as the value decreases. The selection unit 335 then selects the OCTA image with the smallest total rank as the reference image.
  • the evaluation values may be calculated by assigning weights to the sorted indexes of the evaluation values.
  • Alternatively, instead of using the sorted ranks, each evaluation value may be normalized so that its maximum becomes 1. When the image evaluation value is normalized to 1, since in this embodiment it is a total of N-1 values, the average value may be used.
  • the alignment parameter evaluation value can be normalized to 1 if defined as in [Equation 6]. In this case, the evaluation value closer to 1 is a better evaluation value.
  • SV_n is the total of the N-1 values obtained by [Equation 4], and the subscript n corresponds to the Data number; for Data0, it is SV_0.
  • SV_max is the maximum alignment parameter evaluation value among Data0 to Data(N-1).
  • The weight is a parameter for adjusting what value the normalized evaluation value NSV_n takes when SV_n is equal to SV_max.
  • The maximum value SV_max may be determined from the actual data as described above, or may be defined in advance as a threshold. Since the artifact area evaluation value is already normalized to the range 0 to 1, it can be used as it is. As described above, when all the evaluation values are normalized to 1, the image having the largest evaluation value is selected as the reference image.
  • As a result, the image selected as the reference is one that is close to the average of the N images, requires only small movements when the other images are aligned to it, and contains few artifacts.
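The rank-based reference selection can be sketched as follows; the function names are illustrative.

```python
import numpy as np

def select_reference(q_totals, sv_totals, na_values) -> int:
    """Pick the reference image index from per-image evaluation values.

    q_totals: summed image evaluation values Q (higher is better).
    sv_totals: summed alignment parameter evaluation values SV
               (lower is better, i.e. smaller movements).
    na_values: artifact area evaluation values NA (higher is better).
    Because the three values have different scales, each list is sorted
    separately and the image with the smallest sum of ranks is chosen.
    """
    def ranks(values, descending):
        order = np.argsort(values)
        if descending:
            order = order[::-1]
        r = np.empty(len(values), dtype=int)
        r[order] = np.arange(len(values))    # rank 0 = best
        return r

    total_rank = (ranks(q_totals, descending=True)
                  + ranks(sv_totals, descending=False)
                  + ranks(na_values, descending=True))
    return int(np.argmin(total_rank))
```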
  • An example of the reference image selected according to this example is shown in FIG. 7B.
  • Data1 is selected as the reference image.
  • Data0 and Data2 are moved based on the alignment parameters obtained by the first alignment unit 334, respectively.
  • In step S355, the second alignment unit 336 performs alignment in the lateral direction (x-axis) of the retina using the OCTA image selected as the reference in step S354.
  • FIG. 8A shows an example in which the horizontal alignment of Data2 is performed when the reference image is Data1 and the alignment target is Data2.
  • In the Mask image, 0 is set both for the artifact included in Data2 (the horizontal black line in the figure) and for the invalid area (the vertical black line in the figure) generated by moving Data2 as a result of the alignment with Data1.
  • The second alignment unit 336 aligns each line of the alignment target image with the reference image in the horizontal direction and calculates the similarity line by line; [Equation 2] can be used for the similarity. The second alignment unit 336 then moves each line to the position where the similarity is maximized, and sets a weight in the Mask according to the line's similarity to the reference image.
  • FIG. 8B shows an example in which the upper end and the vicinity of the center of the image are judged not to be similar to the reference image, and the horizontal black lines 801 and 802 are set in the Mask image as lines not used for superposition. It also shows an example in which, as a result of the line-by-line alignment, lines near the center of the image are shifted to the right and lines at the lower end are shifted to the left. Since shifting a line creates invalid areas, the second alignment unit 336 sets 0 in the resulting invalid areas 803 and 804 of the Mask. By this processing, local alignment in the XY plane is performed.
  • Note that the rotation parameter θ obtained in the first alignment may be applied to each image either before or after the second alignment.
  • Although alignment in the x direction by the second alignment unit 336 has been described here, the second alignment unit 336 may also perform alignment in the y direction. In that case, the similarity S of [Equation 2] is calculated between a given line of the Data1 image and each line of the Data2 image, and one image may be shifted in the y direction so that the line with the largest similarity S corresponds.
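A sketch of this line-by-line second alignment is given below; the similarity threshold and search range are illustrative assumptions.

```python
import numpy as np

def second_alignment(reference: np.ndarray, moving: np.ndarray,
                     max_shift: int = 5, sim_threshold: float = 0.5):
    """Line-by-line lateral (x) alignment against the reference image.

    Each line of the moving image is shifted to the x offset maximizing
    the correlation with the corresponding reference line. Lines whose
    best similarity stays below the threshold are set to 0 in the mask
    (not used for superposition), as are the invalid pixels created by
    the shift itself.
    """
    aligned = np.zeros_like(moving, dtype=np.float64)
    mask = np.ones_like(moving, dtype=np.float64)
    for y in range(moving.shape[0]):
        best_s, best_dx = -np.inf, 0
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving[y], dx)
            s = np.corrcoef(reference[y], shifted)[0, 1]
            s = 0.0 if np.isnan(s) else s
            if s > best_s:
                best_s, best_dx = s, dx
        line = np.roll(moving[y], best_dx).astype(np.float64)
        # zero the wrapped-around pixels: they are invalid areas
        if best_dx > 0:
            line[:best_dx] = 0
            mask[y, :best_dx] = 0
        elif best_dx < 0:
            line[best_dx:] = 0
            mask[y, best_dx:] = 0
        if best_s < sim_threshold:       # dissimilar line: exclude entirely
            mask[y, :] = 0
        aligned[y] = line
    return aligned, mask
```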
  • In step S356, the calculation unit 340 calculates statistical values (average value, standard deviation, coefficient of variation, maximum value, minimum value, and the like) from the plurality (N) of OCTA images.
  • the calculation unit 340 first obtains an average value.
  • For the plurality of OCTA images and the corresponding Mask images, the calculation unit 340 holds, at each pixel, a sum SUM_A of the products of the corresponding pixel values of each OCTA image and its Mask image, and a sum SUM_B of the corresponding pixel values of the Mask images.
  • SUM_B holds a different value at each pixel position: near the center of the image, where all images contribute, the value of SUM_B is N, whereas at pixels that fall in an artifact or an invalid area in some images, the value of SUM_B is smaller than N.
  • the average value of each pixel is obtained by dividing SUM_A by SUM_B.
  • the addition average image generated in this way is, for example, an image showing the blood vessel morphology of the fundus.
  • Further, a variation image is generated as an output image based on a value (for example, the standard deviation or variance) indicating the variation in luminance (motion contrast value) among the corresponding pixels of the plurality of motion contrast images.
  • In the present embodiment, the value indicating variation is used after being normalized by a representative value of the luminance at the corresponding pixels (for example, the above average value or the median value).
  • the mask image is also used to exclude pixels that are invalid areas from the calculation.
  • Because invalid areas are excluded from the calculation using the Mask images, the standard deviation and the coefficient of variation cannot be calculated at pixels where the number of contributing pixels is at or below a threshold (for example, 1). Such pixels are set, for example, to 0 so that they can be recognized as invalid areas when the final image is displayed.
  • the standard deviation can be obtained from the average value and the value of each pixel.
  • The formula for the standard deviation obtained from the plurality of OCTA images is shown in [Equation 7], where SD is the standard deviation, n is the number of pixels used for the calculation at each pixel position, x_ave is the average value at the pixel, and x_i is each pixel value.
  • CV represents the coefficient of variation, that is, the standard deviation divided by the average value. With the standard deviation alone, pixels with larger average values tend to have larger values, but with the coefficient of variation, the degree of variation can be compared even when the average value differs from pixel to pixel.
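The masked statistics described above (SUM_A, SUM_B, the addition average, the standard deviation of [Equation 7], and the coefficient of variation) can be sketched as follows; min_count plays the role of the threshold on the number of contributing pixels.

```python
import numpy as np

def masked_statistics(octa_stack: np.ndarray, mask_stack: np.ndarray,
                      min_count: int = 2):
    """Pixelwise mean, standard deviation, and coefficient of variation
    over N aligned OCTA images, excluding masked (invalid) pixels.

    octa_stack, mask_stack: (N, H, W); mask is 1 for valid, 0 for invalid.
    Pixels with fewer than min_count contributions are left at 0 so they
    can be recognized as invalid in the final display.
    """
    sum_a = (octa_stack * mask_stack).sum(axis=0)   # SUM_A in the text
    sum_b = mask_stack.sum(axis=0)                  # SUM_B in the text
    valid = sum_b >= min_count
    mean = np.zeros(sum_a.shape)
    mean[valid] = sum_a[valid] / sum_b[valid]       # addition average image
    # masked standard deviation around the per-pixel mean
    sq_dev = mask_stack * (octa_stack - mean) ** 2
    sd = np.zeros(sum_a.shape)
    sd[valid] = np.sqrt(sq_dev.sum(axis=0)[valid] / sum_b[valid])
    # coefficient of variation: SD normalized by the mean, so the degree
    # of variation can be compared between bright and dark pixels
    cv = np.zeros(sum_a.shape)
    nz = valid & (mean > 0)
    cv[nz] = sd[nz] / mean[nz]
    return mean, sd, cv
```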
  • FIGS. 9A and 9B show examples of OCTA images of blood vessels in a portion with poor blood flow.
  • FIG. 9A shows an example of images that are a plurality of OCTA images 1 to 3 taken at different times and that have already been aligned.
  • blood vessels 901 to 903 of OCTA images 1 to 3 are shown as examples of blood vessels having a poor flow.
  • Where blood flow is poor, the motion contrast value becomes small, so the vessel hardly appears in some acquisitions, as with blood vessels 902 and 903, and appears in others.
  • FIG. 9B shows an example in which the coefficient of variation calculated from these OCTA images 1 to 3 is visualized as an image.
  • Because the portion corresponding to blood vessels 901 to 903 changes between the different times, the value becomes high in portion 910, and the change appears there.
  • Note that, before calculating the statistical values as described above, the calculation unit 340 may first convert the histogram of each OCTA image to a common reference. For example, a reference standard deviation and average value are defined, and each OCTA image is converted so that its histogram matches them, yielding images with a uniform average value and standard deviation.
  • In step S357, the image generation unit 332 generates the average (addition average image) calculated by the calculation unit 340 as a high-quality OCTA image.
  • the above is the processing in step S305 in FIG. 3A.
  • In step S306, the variation image generation unit 341 generates a variation image based on the statistical values calculated by the calculation unit 340 in step S356.
  • Specifically, the variation image generation unit 341 applies sharpening or enhancement processing to the addition average image (high-quality OCTA image) of the plurality of motion contrast images and then binarizes it, thereby generating an image showing the blood vessel morphology. The variation image generation unit 341 then generates a variation image by associating the information (CV) indicating the variation of the motion contrast values across the plurality of motion contrast images with the blood vessel regions included in the image showing the vessel morphology.
  • This variation image generation process will be described with reference to the flowchart of FIG. 4B.
  • In step S361, the variation image generation unit 341 performs sharpening processing or enhancement processing (hereinafter collectively referred to as enhancement processing) on the high-quality image (addition average image) generated in step S305.
  • For the enhancement processing, for example, an unsharp mask or a Hessian filter can be used; this enhances the locations corresponding to blood vessels in the OCTA image.
  • A single enhancement process may be used, or the enhanced images obtained from a plurality of enhancement processes may be combined by AND processing. For example, an enhanced image may be obtained using either the unsharp mask or the Hessian filter alone, or by AND-combining the enhanced images obtained from both.
  • In step S362, the variation image generation unit 341 binarizes the enhanced OCTA image to generate a blood vessel mask image.
  • a fixed threshold value may be used, or a dynamic threshold value may be used.
  • A dynamic threshold can be obtained by the discriminant analysis method, the P-tile method, the median, the mean, or the like.
  • one threshold value may be set for the entire image, or a local threshold value may be set for each ROI by setting an ROI having a size smaller than the image size.
  • Minute noise may be removed by applying despeckle processing, morphological operations, or the like to the mask image obtained by the binarization.
  • Although alignment is performed in the image quality improvement processing of the OCTA images, a slight misalignment may remain; therefore, the mask may be created after thinning the vessel portions of the blood vessel mask by a few pixels using a shrinkage (erosion) operation of morphology processing or the like.
  • In step S363, the variation image generation unit 341 takes the logical product (AND) of the image of the variation coefficient CV obtained by the calculation unit 340 and the mask image obtained in step S362, thereby generating an image of the variation coefficient restricted to the portions corresponding to blood vessels. This variation coefficient image is also referred to as a variation image or variation data.
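Steps S361 to S363 can be sketched as the following pipeline; the unsharp-mask radius, the P-tile percentile, and the erosion width are illustrative assumptions, and a Hessian vesselness filter could replace or complement the unsharp mask as described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, binary_erosion

def variation_image(avg_octa: np.ndarray, cv: np.ndarray,
                    percentile: float = 75.0, erode_px: int = 2) -> np.ndarray:
    """Steps S361-S363: enhance the averaged OCTA image, binarize it into
    a vessel mask, shrink the mask slightly, and AND it with the CV data.
    """
    # S361: unsharp masking as the enhancement processing.
    blurred = gaussian_filter(avg_octa, sigma=2.0)
    enhanced = avg_octa + (avg_octa - blurred)
    # S362: P-tile-style binarization into a blood vessel mask.
    threshold = np.percentile(enhanced, percentile)
    vessel_mask = enhanced >= threshold
    # Shrink the mask a few pixels so residual misalignment between the
    # averaged image and the CV data does not leak into the result.
    vessel_mask = binary_erosion(vessel_mask, iterations=erode_px)
    # S363: the logical product restricts the CV image to vessel locations.
    return cv * vessel_mask
```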
  • the processing returns to the flowchart of FIG. 3A.
  • In step S307, the display control unit 305 displays, on the display unit 600, the high-quality two-dimensional motion contrast data (addition average image) generated in step S305 and the variation image generated in step S306 as output images.
  • the output form of the output image is not limited to display on the display unit 600.
  • an output unit that performs storage in the external storage unit 500 may be provided.
  • In FIG. 10, reference numeral 1100 denotes the entire screen, 1101 a patient tab, 1102 an imaging tab, 1103 a report tab, and 1104 a setting tab.
  • the diagonal lines in the report tab 1103 represent the active state of the report screen.
  • Reference numeral 1105 denotes a patient information display unit, 1106 an examination sort tab, and 1107 an examination list; the black frame 1108 indicates the selected examination in the list, and the selected examination data is displayed on the screen.
  • In the examination list, thumbnails of SLO and tomographic images are displayed, but the present invention is not limited to this.
  • an OCTA thumbnail may be displayed.
  • The examination data acquired by imaging and the examination data generated by the image quality improvement processing are displayed together in the examination list 1107. The thumbnail image may also be generated from the data that has undergone the image quality improvement processing.
  • Reference numerals 1130 and 1131 denote view mode tabs: when the tab 1130 is selected, a two-dimensional OCTA image generated from the three-dimensional motion contrast data is displayed, and when the tab 1131 is selected, a three-dimensional tomographic image and three-dimensional motion contrast data are displayed. In FIG. 10, the tab 1130 is selected.
  • Reference numeral 1129 denotes a button for instructing execution of high-quality motion contrast data generation; when the button 1129 is pressed, the high-quality data generation processing shown in step S305 is executed.
  • When the button 1129 is pressed, the candidate data to be used for the image quality improvement may be displayed, or the high-quality data generation may be executed without displaying the candidates. After generation, the user can select the high-quality data for report display.
  • In FIG. 10, the image quality improvement processing has already been completed, and data that has undergone this processing is selected and displayed; therefore, the following description concerns data subjected to the high image quality processing (addition averaging).
  • Reference numeral 1109 denotes an SLO image, 1110 a first OCTA image, 1111 a first tomographic image, 1112 a front image (Enface image) generated from the three-dimensional tomographic image, 1113 a second OCTA image, and 1114 a second tomographic image.
  • Reference numeral 1120 denotes a tab for switching the image type of the Enface image 1112.
  • The Enface image 1112 is created from the same depth range as the first OCTA image 1110, but it can be switched, using the tab 1120, to an Enface image created from the same depth range as the second OCTA image 1113.
  • Reference numeral 1115 denotes an image superimposed on the SLO image 1109, and 1116 a tab for switching the type of the image 1115. Reference numeral 1117 denotes a tab for switching the type of OCTA image displayed as the first OCTA image 1110, and 1121 a tab for switching the type of OCTA image displayed as the second OCTA image 1113.
  • As types of OCTA images, there are images created for the superficial layer, the deep layer, the choroid, or an arbitrary depth range.
  • the upper end of the creation range of the first OCTA image 1110 is indicated by a display 1118 indicating the type of the boundary line and its offset value, and an upper boundary line 1125 displayed superimposed on the tomographic image.
  • the lower end of the creation range of the first OCTA image 1110 is indicated by a display 1119 indicating the type of the lower boundary line and its offset value, and a boundary line 1126 displayed superimposed on the first tomographic image.
  • For the second OCTA image 1113, the display 1123 and the boundary line 1127 indicate the upper end of the creation range, and the display 1124 and the boundary line 1128 indicate the lower end.
  • the arrow 1145 indicates the position of the first tomographic image in the XY plane
  • the arrow 1146 indicates the position of the second tomographic image in the XY plane.
  • the positions of the first tomographic image 1111 and the second tomographic image 1114 in the XY plane can be switched by operating the arrows 1145 and 1146 by dragging the mouse.
  • In the present embodiment, the image quality improvement processing is performed using a plurality of OCTA images, and at the same time the variation image is generated from the same plurality of OCTA images. Therefore, when data that has undergone the high image quality processing is selected and displayed on the screen, the variation image can also be displayed.
  • FIG. 11 shows an example in which a variation image 1150 is displayed instead of the first OCTA image 1110.
  • In FIG. 11, the variation image 1150 is drawn as a substantially black image, but in practice the variation image described in this specification is displayed.
  • the variation image 1150 used for display may be data obtained by imaging the variation coefficient CV calculated by [Equation 8], or may be data obtained by imaging the variation coefficient of a portion corresponding to a blood vessel by the processing of FIG. 4B.
  • The variation image 1150 is generated by adding, to the blood vessel regions included in the addition average image (the vessel regions indicated by the mask image), a color corresponding to the information indicating the variation of the motion contrast values. That is, the variation image 1150 is a color image, and a bar 1151 indicating its color scale is therefore displayed.
  • the color scale is expressed in a cold color system (for example, blue) when the value of the variation coefficient is small, and in a warm color system (for example, red) when the value of the variation coefficient is large.
  • When the variation image 1150 is displayed, it may replace the OCTA image 1110 or be superimposed on it; in the case of superimposed display, it is desirable to display the variation image 1150 with a transparency α set.
  • On/Off of the display of the variation image 1150 may be selected from a right-click menu (not shown), or the display may be switched by turning a check box (not shown) On/Off. When data for which no variation image has been generated is selected for display, it is desirable that the corresponding menu item or check box is hidden, or shown semi-transparent so that it cannot be selected.
  • FIG. 11 shows an example in which the variation image is displayed by switching from, or superimposed on, the OCTA image.
  • However, the present invention is not limited to this.
  • Other display examples of the variation image 1150 are shown in FIGS. 12A and 12B.
  • FIG. 12A is an example showing only the portion of FIG. 10 where the first OCTA image 1110 is displayed.
  • FIG. 12A shows an example in which variation portions equal to or greater than a predetermined threshold are displayed in color on the first OCTA image 1210.
  • Here, only the blood vessels 1201 (only the blood vessel region indicated by the blood vessel mask image generated in step S362) are displayed in color.
  • In other words, information indicating the variation in motion contrast values is not associated with regions other than the blood vessel regions included in the averaged image (OCTA image). That is, while the variation image 1150 of FIG. 11 displays the variation coefficient over the entire image, FIG. 12A displays the variation coefficient in color for only a part of the image.
  • FIG. 12B is an example showing only a portion where the first OCTA image 1110 and the first tomographic image 1111 in FIG. 10 are displayed.
  • FIG. 12B shows an example in which the variation coefficient 1240 is superimposed on the tomographic image 1111.
  • On the tomographic image 1111, the variation coefficient is superimposed in colors corresponding to those of the variation image 1150.
  • Here, only portions where the value of the variation coefficient is equal to or greater than a predetermined threshold are superimposed.
  • Since the variation coefficient is calculated from a two-dimensional image in the XY plane, its Z position is not known when it is displayed on a tomographic image.
  • Therefore, the same color is displayed over the range between the upper and lower boundary lines 1125 and 1126, which define the OCTA image creation range.
  • Alternatively, the superimposed display may be limited to locations that satisfy both conditions, namely that the motion contrast value is equal to or greater than a predetermined threshold and that the variation coefficient is equal to or greater than a predetermined threshold.
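  • A display condition combining the motion contrast threshold, the variation coefficient threshold, and the blood vessel mask might look as follows (a hedged sketch; the threshold values and the blue-to-red ramp are illustrative):

```python
import numpy as np

def selective_cv_overlay(octa, cv, vessel_mask, mc_thresh=0.2, cv_thresh=0.3,
                         alpha=0.5):
    """Color the variation coefficient only at pixels that (a) belong to the
    blood vessel mask, (b) have a motion contrast value >= mc_thresh, and
    (c) have a variation coefficient >= cv_thresh; elsewhere, show the
    grayscale OCTA image unchanged."""
    t = np.clip(cv, 0.0, 1.0)
    colored = np.stack([t, np.zeros_like(t), 1.0 - t], axis=-1)  # blue -> red
    out = np.repeat(octa[..., None], 3, axis=-1).astype(np.float32)
    show = vessel_mask.astype(bool) & (octa >= mc_thresh) & (cv >= cv_thresh)
    out[show] = (1.0 - alpha) * out[show] + alpha * colored[show]
    return out
```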
  • In the examples above, the variation image 1150 shows an image of the same depth range as the OCTA image.
  • However, the variation image to be generated is not limited to this range; the depth range of the OCTA images used for alignment may be changed, and a variation image may be generated in accordance with that range.
  • For example, alignment can be performed with OCTA images created in a depth range in which blood vessel features are easily expressed, and only the alignment parameters can be applied to OCTA images generated in a different depth range, thereby generating variation images for different depth ranges.
  • In FIG. 11, the variation image 1150 is displayed in place of the first OCTA image, but the alignment parameters obtained when the first OCTA image was generated can also be applied to the second OCTA image 1113. Therefore, both image quality enhancement and variation image generation can be performed for the OCTA image 1113 in a different depth range.
  • Next, an instruction acquisition unit (not shown) acquires an instruction from the outside as to whether or not the tomographic imaging and the analysis of the tomographic images by the image processing system 100 are to be terminated.
  • This instruction is input by the operator using the input unit 700, for example.
  • When an end instruction is acquired, the image processing system 100 ends the processing.
  • When the processing is to be continued, the process returns to step S302. The processing of the image processing system 100 is performed as described above.
  • 13a in FIG. 13 is an example of a grid 1301 that divides the region into four parts, upper, lower, left, and right, with the macula at the center.
  • The size of the inner circle is 1 mm, and the size of the outer circle is 3 mm.
  • 13b is an example of a grid 1302 that divides the region into upper and lower halves, with the macula at the center.
  • The circle sizes are the same as in 13a.
  • The grid size may be 1 mm, 3 mm, or 6 mm.
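  • Grid region masks such as 13a can be generated directly from pixel coordinates, as in the following sketch (the macula center, the pixel pitch, and the reading of the 1 mm / 3 mm sizes as circle diameters are assumptions):

```python
import numpy as np

def quadrant_grid(h, w, center_yx, px_per_mm, inner_mm=1.0, outer_mm=3.0):
    """Label map for a grid like 13a: label 1 = inner circle; labels 2-5 =
    upper/right/lower/left quadrants of the ring between the two circles."""
    yy, xx = np.mgrid[0:h, 0:w]
    dy, dx = yy - center_yx[0], xx - center_yx[1]
    r_mm = np.hypot(dy, dx) / px_per_mm
    ang = (np.degrees(np.arctan2(dx, -dy)) + 360.0) % 360.0  # 0 deg = up, clockwise
    labels = np.zeros((h, w), dtype=np.int32)
    labels[r_mm <= inner_mm / 2] = 1
    ring = (r_mm > inner_mm / 2) & (r_mm <= outer_mm / 2)
    quadrant = (((ang + 45.0) % 360.0) // 90.0).astype(np.int32)  # 0..3
    labels[ring] = 2 + quadrant[ring]
    return labels

# Per-region statistic, e.g. the mean variation coefficient in each region:
# means = [cv_image[labels == k].mean() for k in range(1, 6)]
```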
  • 13c in FIG. 13 is an example of a grid 1303 that divides an image into nine.
  • The number of grid divisions is not limited to nine, and may be, for example, 16.
  • Alternatively, one grid cell may be fixed at 1 mm, and the number of divisions may be changed according to the angle of view of the captured image. In this case, for example, when the imaging angle of view is 3 × 3 mm, the image is divided into nine.
  • 13d in FIG. 13 is an example in which the FAZ (Foveal Avascular Zone) is detected and a grid 1304 is applied over a range of the FAZ plus several mm, following its shape.
  • The grid around the FAZ may be evaluated as a single numerical value for the whole region, or the inside of the region may be divided, as in 13a and 13b, into four parts (upper, lower, left, and right) or into upper and lower parts.
  • 13e is an example in which the grid is radially divided into 12 like a clock.
  • In FIG. 13, the OCTA image 1110 is shown as the image on which the grid is superimposed, but the present invention is not limited to this.
  • For example, a grid may be superimposed on the variation image 1150, or on the OCTA image 1210 with the colored blood vessels 1201 shown in FIG. 12A.
  • On/Off of the grid display may be selected from a right-click menu (not shown), or the display may be switched by turning a check box (not shown) On/Off.
  • 14a and 14b in FIG. 14 are examples of graphs for the radially divided grid shown by 13e in FIG. 13.
  • The grid sectors are numbered 1 to 12 clockwise from the top; the horizontal axis is the position of the divided grid, and the vertical axis is the statistical value within each grid.
  • Position 0 represents the same value as position 12.
  • 14a and 14b are examples of data for different eyes, showing that the statistical values appear differently for different eyes.
  • 14c in FIG. 14 is an example in which the data of 14a and 14b are shifted in the horizontal direction, using the grid with the minimum statistical value as the reference.
  • When the data are organized with reference to a statistical value rather than to a location, as in 14c, it becomes easy to compare the characteristics of the surrounding blood vessels with reference to the location where the blood vessels show some characteristic. Therefore, when comparing with a database or the like created from different eyes or from normal data, the comparison may be performed using such a characteristic location as the reference, as in 14c.
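  • The horizontal shift of 14c amounts to a circular rotation of the twelve grid statistics so that each eye's minimum comes to the reference position, as in this small sketch (the input values are arbitrary placeholders):

```python
import numpy as np

def align_to_minimum(grid_stats):
    """Circularly shift the 12 radial-grid statistics so that the grid
    containing the minimum value comes first, making different eyes
    comparable relative to their characteristic (minimum) location."""
    return np.roll(grid_stats, -int(np.argmin(grid_stats)))

eye_a = np.random.rand(12)   # arbitrary example values for two eyes
eye_b = np.random.rand(12)
aligned_a, aligned_b = align_to_minimum(eye_a), align_to_minimum(eye_b)
# Both curves now start at their respective minima, as in 14c.
```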
  • Furthermore, statistical values may be calculated and displayed in units of individual blood vessels.
  • By performing thinning processing on the blood vessel mask, it can be divided into units of individual blood vessels.
  • For example, a skeleton pixel whose connection count is 1 is an end point of a line.
  • Using such connectivity information, the thinned blood vessels can be identified in units of individual vessels.
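  • One way to realize this division is to skeletonize the vessel mask and classify skeleton pixels by their 8-neighbor count, as sketched below (assuming SciPy and scikit-image are available; cutting the skeleton at junction pixels is one possible segmentation rule, not necessarily the one used by the apparatus):

```python
import numpy as np
from scipy.ndimage import convolve, label
from skimage.morphology import skeletonize

def vessel_segments(vessel_mask):
    """Thin the binary vessel mask to a 1-pixel skeleton, find line end
    points (exactly one 8-connected neighbor) and junctions (three or
    more), and label the pieces between junctions as individual vessels."""
    skel = skeletonize(vessel_mask.astype(bool))
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    nbrs = convolve(skel.astype(np.uint8), kernel, mode='constant')
    end_points = skel & (nbrs == 1)       # connection count of 1 -> line end
    junctions = skel & (nbrs >= 3)
    segments, n = label(skel & ~junctions, structure=np.ones((3, 3)))
    return segments, n, end_points
```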
  • The statistical value of a selected blood vessel may be displayed in a pop-up on the OCTA image or at an arbitrary location on the screen.
  • Alternatively, numerical values may be displayed not in units of blood vessels but in units of pixels. In that case, since the variation coefficient is calculated in units of pixels, when the operator moves the mouse cursor to an arbitrary pixel, the numerical value of the selected pixel may be displayed in a pop-up at an arbitrary location on the screen or on the OCTA image.
  • As described above, according to the second embodiment, it is possible to evaluate the variation between images by calculating the coefficient of variation between OCTA images.
  • By presenting an index that evaluates the variation among a plurality of images as a numerical value for blood vessels whose flow is poor due to a circulatory disorder or the like, the state of circulation can be grasped.
  • Although the second embodiment describes the calculation of statistical values of the coefficient of variation as the analysis target, the analysis is not restricted to this. For example, the area density or the skeleton (thinned) density of blood vessels may be analyzed using the grid regions described above.
  • Next, the third embodiment is described. FIG. 15 is a diagram illustrating the configuration of an image processing system 1000 including the image processing apparatus 1400 according to the present embodiment.
  • Unlike the embodiments above, the image processing unit 1403 has a depth alignment unit 1437.
  • The depth alignment unit 1437 performs alignment in the depth direction (Z axis) of the retina for the three-dimensional tomographic images and the three-dimensional motion contrast data.
  • In FIGS. 16A and 16B, the same processes as those in FIGS. 3A and 3B are denoted by the same reference numerals.
  • FIG. 16A is a flowchart showing an overall operation process of the image processing apparatus 1400 according to the third embodiment.
  • When the processes in steps S301 to S304 are completed, in step S1505 the image processing unit 303 generates high-quality data using the acquired three-dimensional motion contrast data.
  • FIG. 16B is a flowchart illustrating high-quality data generation processing according to the third embodiment.
  • Steps S351 to S355 are as described in the first embodiment (FIG. 3B).
  • Next, the depth alignment unit 1437 performs alignment in the depth direction.
  • The depth alignment unit 1437 performs the depth alignment both within each set of three-dimensional data (between adjacent tomographic images) and between different sets of three-dimensional data (between tomographic images of different data). For example, alignment within the data is performed first, aligning the retina depth position and the retina inclination between adjacent tomographic images. Next, using the data already aligned internally, the retina depth position and inclination are aligned between the different data sets. In this way, XYZ alignment between different three-dimensional data sets is performed by the first alignment unit 334, the second alignment unit 336, and the depth alignment unit 1437.
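  • The within-volume part of the depth alignment can be sketched as a one-dimensional profile matching between adjacent B-scans (illustrative only; it ignores the retina inclination that the depth alignment unit 1437 also handles):

```python
import numpy as np

def estimate_depth_shift(bscan_ref, bscan_mov, max_shift=30):
    """Estimate the Z shift aligning bscan_mov to bscan_ref by correlating
    their mean intensity profiles along the depth axis (axis 0)."""
    def profile(b):
        p = b.mean(axis=1)                 # average over X -> profile along Z
        return (p - p.mean()) / (p.std() + 1e-6)
    p_ref, p_mov = profile(bscan_ref), profile(bscan_mov)
    shifts = list(range(-max_shift, max_shift + 1))
    # np.roll wraps around; acceptable here because the retina sits well
    # inside the scan and the border rows carry little signal
    scores = [float(np.dot(p_ref, np.roll(p_mov, s))) for s in shifts]
    best = shifts[int(np.argmax(scores))]
    return best                            # apply with np.roll(bscan_mov, best, axis=0)
```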
  • In step S1557, the calculation unit 1440 calculates statistical values (average value, standard deviation, coefficient of variation, maximum value, minimum value, and so on) from the plurality of three-dimensional motion contrast data.
  • The calculation of the statistical values is the same as in step S356, except that for three-dimensional data the unit is a voxel rather than a pixel. The statistical values between different data sets are therefore calculated over the plurality of aligned 3D motion contrast data.
  • The statistical values may also be calculated not in units of single voxels but in units having a thickness in the depth direction. For example, instead of one voxel, the average or median of several voxels around the voxel of interest in the Z-axis direction may be computed, and the statistical value between the data sets may be calculated using that value.
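  • The voxel-wise statistics with a depth-direction thickness might be computed as follows (a sketch assuming SciPy is available; the window size is illustrative):

```python
import numpy as np
from scipy.ndimage import median_filter

def volume_variation_coefficient(volumes, depth_window=3, eps=1e-6):
    """Voxel-wise coefficient of variation across N aligned 3D motion
    contrast volumes of shape (N, Z, Y, X). With depth_window > 1, each
    volume is first replaced by a median over a few voxels in Z, i.e. a
    unit with thickness in the depth direction instead of a single voxel."""
    if depth_window > 1:
        volumes = np.stack(
            [median_filter(v, size=(depth_window, 1, 1)) for v in volumes])
    mean = volumes.mean(axis=0)
    std = volumes.std(axis=0)
    return std / np.maximum(mean, eps)
```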
  • In step S1558, the image generation unit 332 generates the average value calculated by the calculation unit 1440 as high-quality three-dimensional motion contrast data.
  • The image generation unit 332 generates high-quality data for the 3D motion contrast data and also generates high-quality data for the 3D tomographic images. After the above processing, the processing returns to the flowchart of FIG. 16A.
  • In step S1506, the variation image generation unit 341 generates a variation image based on the statistical values calculated by the calculation unit 1440 in step S1557. More specifically, the image generation unit 332 first generates two-dimensional front images from the averaged three-dimensional motion contrast data and from the three-dimensional statistical values obtained from it (for example, three-dimensional variation coefficient data). The variation image generation unit 341 then generates the variation image using the two-dimensional images generated by the image generation unit 332.
  • The variation image generation method itself is the same as in the first embodiment (step S306 in FIG. 3A), and its description is therefore omitted.
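  • Generating a two-dimensional front image from three-dimensional data between two boundary surfaces can be sketched as follows (integer boundary surfaces and a mean projection are assumptions; a maximum projection is an equally plausible choice):

```python
import numpy as np

def enface_projection(volume, upper, lower):
    """Project a (Z, Y, X) volume to a (Y, X) front image by averaging the
    voxels between the upper and lower boundary surfaces at each (y, x);
    the surfaces hold per-pixel Z indices, e.g. retinal layer boundaries
    plus offsets."""
    z, h, w = volume.shape
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            z0, z1 = int(upper[y, x]), int(lower[y, x])
            if z1 >= z0:
                out[y, x] = volume[z0:z1 + 1, y, x].mean()
    return out
```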
  • In step S1507, the high-quality three-dimensional motion contrast data created by averaging, the high-quality three-dimensional tomographic image, the two-dimensional high-quality motion contrast data, and the variation image are displayed.
  • FIG. 17 shows an example of a screen displayed on the display unit 600.
  • The three-dimensional tomographic image and the two-dimensional motion contrast data are displayed using the high-quality data.
  • High-quality three-dimensional motion contrast data 1640 is superimposed on the tomographic image.
  • Three-dimensional variation coefficient data may be displayed instead of the three-dimensional motion contrast data 1640.
  • When the three-dimensional variation coefficient data is displayed in a superimposed manner, it is desirable to superimpose only the portions where both the three-dimensional motion contrast data 1640 and the three-dimensional variation coefficient data are equal to or greater than predetermined thresholds.
  • As a result, the three-dimensional variation coefficient data can be displayed for the portions that three-dimensionally correspond to blood vessels.
  • When the three-dimensional variation coefficient data is superimposed, it is displayed in color, using the same color scale as the bar 1151 of the two-dimensional variation image.
  • As described above, according to the third embodiment, the variation between data sets can be evaluated by calculating the coefficient of variation between three-dimensional motion contrast data.
  • By presenting an index that evaluates the variation among a plurality of data sets as an image or a numerical value for blood vessels whose flow is poor due to a circulatory disorder or the like, the state of circulation can be grasped.
  • Modification 1: In each of the above-described embodiments, an example has been shown in which the variation information across a plurality of data is displayed as a single image, but the present invention is not limited to this. The information may be displayed as a moving image instead of a single image. For example, the OCTA images 1 to 3 taken at different times and aligned as shown in FIG. 9A are displayed in turn at the same location (for example, the location where the first OCTA image is displayed) at a predetermined time interval (for example, 1 frame/second). Thereby, the operator can confirm the changes in the OCTA images.
  • Modification 2: In the third embodiment, an example in which three-dimensional variation coefficient data is obtained from three-dimensional motion contrast data has been described.
  • However, the present invention is not limited to this.
  • For example, high-quality three-dimensional motion contrast data and high-quality tomographic images may be generated, and the variation image may then be obtained from the two-dimensional OCTA images as shown in the first and second embodiments.
  • Modification 3: In each of the above embodiments, an example in which the circulation state is evaluated using the coefficient of variation has been described, but the present invention is not limited to this.
  • For example, the standard deviation or the difference between data may be used, and by performing the calculation in Log scale, the influence of variation in portions having large values can be suppressed. Any index can be used as long as it can evaluate the variation among a plurality of data.
  • In each of the above embodiments, a blood vessel mask is created by binarization processing from the high-quality OCTA image.
  • Alternatively, thinning processing may be performed after the binarization processing, and the thinned image may be used as the mask image.
  • In a thinned image, a blood vessel is represented by a single pixel, so the image is not affected by artifacts caused by alignment errors that may appear along the edges of blood vessels. For this reason, it is also possible to calculate the blood vessel variation using the thinned image as the mask image.
  • Furthermore, the values obtained through the thinning process may be extended to the corresponding blood vessel region in the binarized image before thinning, as sketched below.
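  • Extending a value computed on the one-pixel skeleton back over the vessel-equivalent region can be done with a nearest-skeleton-pixel lookup, as in this sketch (assuming SciPy's Euclidean distance transform):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def spread_skeleton_values(skel_values, skel, vessel_mask):
    """Give every pixel of the binarized vessel region the value of its
    nearest skeleton pixel. skel_values holds per-pixel values on the
    skeleton (zero elsewhere); skel and vessel_mask are boolean images."""
    # For each pixel, indices of the nearest skeleton pixel
    _, inds = distance_transform_edt(~skel, return_indices=True)
    spread = skel_values[inds[0], inds[1]]
    return np.where(vessel_mask, spread, 0.0)
```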
  • In each of the above embodiments, a variation image may also be generated using, as the OCTA image, an image from which projection artifacts have been removed.
  • A projection artifact is a phenomenon in which the shadow of a blood vessel in an upper layer is reflected in a lower layer; because the shadow fluctuates with the flow of blood, motion contrast arises at locations other than actual blood vessels.
  • Such artifacts may exist in motion contrast data. For this reason, even when the range for generating the OCTA image is arbitrarily designated by operating the UI, the variation image may be generated and displayed using an image from which the artifacts have been removed.
  • In each of the above embodiments, the process from imaging to display is shown as a single flow, but the present invention is not limited to this.
  • For example, the variation image generation process may be performed using data that has already been captured. In that case, steps S302 to S304 relating to imaging are skipped; instead, a plurality of already acquired 3D motion contrast data and 3D tomographic images are obtained, and the variation image generation process is performed in step S306 or step S1506.
  • In this way, the variation image generation process can be executed whenever necessary on data that has already been captured a plurality of times, without performing the processing at the time of imaging. Therefore, at the time of imaging, the operator can concentrate on the imaging itself.
  • The present invention can also be realized by supplying a program that implements one or more functions of the above-described embodiments to a system or apparatus via a network or a storage medium, and having one or more processors in a computer of the system or apparatus read and execute the program. It can also be realized by a circuit (for example, an ASIC) that implements one or more functions.
  • The CPU of the image processing apparatus controls the entire computer using computer programs and data stored in the RAM or ROM.
  • The CPU also controls the execution of software corresponding to each unit of the image processing apparatus, thereby realizing the functions of the respective units.
  • The user interface, such as buttons, and the display layout are not limited to those shown above.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)
PCT/JP2019/010004 2018-05-11 2019-03-12 画像処理装置、画像処理方法及びプログラム WO2019216019A1 (ja)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-092472 2018-05-11
JP2018092472A JP7281872B2 (ja) 2018-05-11 2018-05-11 画像処理装置、画像処理方法及びプログラム

Publications (1)

Publication Number Publication Date
WO2019216019A1 true WO2019216019A1 (ja) 2019-11-14

Family

ID=68467926

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/010004 WO2019216019A1 (ja) 2018-05-11 2019-03-12 画像処理装置、画像処理方法及びプログラム

Country Status (2)

Country Link
JP (1) JP7281872B2 (ja)
WO (1) WO2019216019A1 (ja)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025070087A1 (ja) * 2023-09-29 2025-04-03 株式会社ニデック Oct装置およびoct信号処理プログラム

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017046976A (ja) * 2015-09-02 2017-03-09 株式会社ニデック 眼科撮影装置及び眼科撮影プログラム
JP2017077414A (ja) * 2015-10-21 2017-04-27 株式会社ニデック 眼科解析装置、眼科解析プログラム
WO2017119437A1 (ja) * 2016-01-07 2017-07-13 株式会社ニデック Oct信号処理装置、およびoct信号処理プログラム

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3855024B2 (ja) 2001-12-13 2006-12-06 国立大学法人九州工業大学 血流速度測定装置
JP6402902B2 (ja) 2014-06-30 2018-10-10 株式会社ニデック 光コヒーレンストモグラフィ装置及び光コヒーレンストモグラフィ演算プログラム
JP6584126B2 (ja) 2015-05-01 2019-10-02 キヤノン株式会社 画像生成装置、画像生成方法およびプログラム

Also Published As

Publication number Publication date
JP2019195586A (ja) 2019-11-14
JP7281872B2 (ja) 2023-05-26

Similar Documents

Publication Publication Date Title
US20210224997A1 (en) Image processing apparatus, image processing method and computer-readable medium
Abràmoff et al. Retinal imaging and image analysis
US10973406B2 (en) Image processing apparatus, image processing method, and non-transitory computer readable medium
JP2020093076A (ja) 医用画像処理装置、学習済モデル、医用画像処理方法及びプログラム
CN113543695B (zh) 图像处理装置和图像处理方法
JP6526145B2 (ja) 画像処理システム、処理方法及びプログラム
JP7195745B2 (ja) 画像処理装置、画像処理方法及びプログラム
JP7009265B2 (ja) 画像処理装置、画像処理方法及びプログラム
WO2020050308A1 (ja) 画像処理装置、画像処理方法及びプログラム
JP2019150485A (ja) 画像処理システム、画像処理方法及びプログラム
JP2015160105A (ja) 画像処理装置、画像処理方法及びプログラム
JP2019047839A (ja) 画像処理装置、位置合わせ方法及びプログラム
JP7106304B2 (ja) 画像処理装置、画像処理方法及びプログラム
US10846892B2 (en) Image processing apparatus, image processing method, and storage medium
JP2021122559A (ja) 画像処理装置、画像処理方法及びプログラム
JP7027076B2 (ja) 画像処理装置、位置合わせ方法及びプログラム
JP7005382B2 (ja) 情報処理装置、情報処理方法およびプログラム
JP2019063446A (ja) 画像処理装置、画像処理方法及びプログラム
WO2019216019A1 (ja) 画像処理装置、画像処理方法及びプログラム
JP7297952B2 (ja) 情報処理装置、情報処理方法およびプログラム
JP7604160B2 (ja) 画像処理装置、画像処理方法及びプログラム
WO2020090439A1 (ja) 画像処理装置、画像処理方法およびプログラム
JP7204345B2 (ja) 画像処理装置、画像処理方法及びプログラム
JP7158860B2 (ja) 画像処理装置、画像処理方法及びプログラム
JP7646321B2 (ja) 画像処理装置、画像処理方法及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19800902

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19800902

Country of ref document: EP

Kind code of ref document: A1