WO2016098850A1 - Image processing device, laser radiation system, image processing method, and image processing program - Google Patents

Image processing device, laser radiation system, image processing method, and image processing program Download PDF

Info

Publication number
WO2016098850A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
correlation
images
shift
image processing
Prior art date
Application number
PCT/JP2015/085334
Other languages
French (fr)
Japanese (ja)
Inventor
安野 嘉晃
和博 鈴木
Original Assignee
国立大学法人筑波大学
Priority date
Filing date
Publication date
Application filed by 国立大学法人筑波大学
Publication of WO2016098850A1 publication Critical patent/WO2016098850A1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17: Systems in which incident light is modified in accordance with the properties of the material investigated

Definitions

  • the present invention relates to an image processing apparatus, a laser irradiation system, an image processing method, and an image processing program.
  • This application claims priority based on Japanese Patent Application No. 2014-255208, filed in Japan on December 17, 2014, the contents of which are incorporated herein by reference.
  • In clinical practice, laser coagulation treatment is known, in which a living body is irradiated with laser light to locally scar the tissue. Laser coagulation treatment is widely used, for example, in treatment of the fundus.
  • However, in laser coagulation treatment, the irradiation conditions of the laser light are sometimes determined based on the experience of the operator, such as a doctor or other medical worker. As a result, the effect of the laser coagulation treatment may vary between operators, and even for the same operator the reproducibility of the treatment may not be high.
  • However, with conventional techniques for monitoring tissue scarring during laser coagulation treatment, the monitoring accuracy, that is, the detection accuracy, may not be sufficient.
  • The present invention has been made in view of such circumstances, and one object thereof is to provide an image processing apparatus, a laser irradiation system, an image processing method, and an image processing program capable of improving detection accuracy.
  • One aspect of the present invention is an image processing apparatus comprising: an acquisition unit that acquires, in time series, a plurality of images representing the state of an object; a shift image generation unit that generates a plurality of shift images by translating, in a plane direction, a first image among the plurality of images acquired by the acquisition unit; a correlation calculation unit that calculates an index indicating a correlation between a shift image generated by the shift image generation unit and a second image among the plurality of images acquired by the acquisition unit; and a movement state calculation unit that calculates, for each pixel, a movement state of the object based on the index calculated by the correlation calculation unit.
  • Another aspect of the present invention is a laser irradiation system comprising: an irradiation unit that irradiates an object with laser light; an acquisition unit that acquires, in time series, a plurality of images representing the state of the object irradiated with the laser light by the irradiation unit; a shift image generation unit that generates a plurality of shift images by translating, in a plane direction, a first image among the plurality of acquired images; a correlation calculation unit that calculates an index indicating a correlation between a shift image generated by the shift image generation unit and a second image among the plurality of acquired images; a movement state calculation unit that calculates, for each pixel, a movement state of the object based on the index calculated by the correlation calculation unit; and an irradiation control unit that controls the irradiation unit so that the intensity of the laser light is changed based on the movement state of the object calculated by the movement state calculation unit.
  • Another aspect of the present invention is an image processing method in which a plurality of images representing the state of an object are acquired in time series; a plurality of shift images are generated by translating, in a plane direction, a first image among the plurality of acquired images; an index indicating a correlation between a generated shift image and a second image among the plurality of acquired images is calculated; and a movement state of the object is calculated for each pixel based on the calculated index.
  • Another aspect of the present invention is an image processing program that causes a computer performing image processing to: acquire, in time series, a plurality of images representing the state of an object; generate a plurality of shift images by translating, in a plane direction, a first image among the plurality of acquired images; calculate an index indicating a correlation between a generated shift image and a second image among the plurality of acquired images; and calculate, for each pixel, a movement state of the object based on the calculated index.
  • According to the present invention, it is possible to provide an image processing apparatus, a laser irradiation system, an image processing method, and an image processing program that can improve detection accuracy.
  • FIG. 8 is a flowchart illustrating an example of the processing flow of the image processing apparatus 100 according to the first embodiment, and FIG. 9 is a configuration diagram illustrating an example of the laser irradiation system 2 including the image processing apparatus 100 according to the second embodiment.
  • FIG. 1 is a configuration diagram illustrating an example of an optical coherence tomograph 1 including an image processing apparatus 100 according to the first embodiment.
  • The optical coherence tomograph 1 in the present embodiment corresponds to a device known as an optical coherence tomography (OCT) device.
  • Hereinafter, the optical coherence tomograph 1 is referred to as "OCT 1".
  • The OCT 1 is a device that irradiates an object to be measured (hereinafter referred to as the "target object OB") with light and measures the displacement of the target object OB by detecting interference between the light reflected from the target object OB and part of the irradiated light.
  • the target object OB in the present embodiment is, for example, a living body (organic matter) such as a fundus, a blood vessel, a tooth, a subcutaneous tissue (for example, a tumor), or an inorganic material such as an electronic component (for example, a semiconductor) or a mechanical component.
  • the inorganic material may partially include an organic material.
  • the OCT 1 includes a light source 10, a beam splitter 20, collimators 30a, 30b, 50a, and 50b, a reference mirror 40, galvanometer mirrors 60a and 60b, a spectroscope 70, and an image processing apparatus 100.
  • Among these, for example, the beam splitter 20, the collimators 30a, 30b, 50a, and 50b, the reference mirror 40, and the galvanometer mirrors 60a and 60b correspond to an optical system known as an interferometer.
  • the interferometer in the present embodiment is, for example, a Michelson interferometer configured by an optical fiber F.
  • the light source 10, the spectroscope 70, and the collimators 30a and 50a are respectively connected to the beam splitter 20 by an optical fiber F having a transmission band in the wavelength band of light emitted from the light source 10.
  • The OCT 1 in this embodiment is described as, for example, a Fourier-domain OCT (FD-OCT) such as a spectral-domain OCT (SD-OCT) or a swept-source OCT (SS-OCT), but is not limited to this.
  • The OCT 1 may be, for example, a time-domain OCT (TD-OCT). In that case, the reference mirror 40 of the interferometer is not fixed, and the interferometer is kept drivable so that the optical path length from the light source to the reference mirror 40 can be changed.
  • The light source 10 emits, for example, probe light in the near-infrared wavelength range (for example, about 800 to 1000 nm).
  • the light source 10 is preferably a wavelength swept light source such as an SLD (super luminescent diode) or an ultrashort pulse laser.
  • the light emitted from the light source 10 is guided in the optical fiber F, and is split by the beam splitter 20 into light guided to the collimator 30a side and light guided to the collimator 50a side.
  • Hereinafter, the light guided to the collimator 30a side is referred to as "reference light", and the light guided to the collimator 50a side is referred to as "measurement light".
  • the beam splitter 20 is, for example, a cube beam splitter.
  • The reference light is, for example, collimated into parallel light by the collimator 30a, and the parallel light is then focused by the collimator 30b.
  • The light (reference light) focused by the collimator 30b is reflected by the reference mirror 40.
  • The reference light reflected by the reference mirror 40 is, for example, collimated into parallel light by the collimator 30b, then focused by the collimator 30a and guided to the beam splitter 20.
  • Meanwhile, the measurement light is collimated into parallel light by the collimator 50a. The parallel light (measurement light) is then reflected toward the collimator 50b by the galvanometer mirrors 60a and 60b, focused by the collimator 50b, and irradiated onto the target object OB.
  • the measurement light irradiated on the target object OB is reflected by the reflection surface of the target object OB and is incident on the collimator 50b.
  • The reflection surface is, for example, a boundary surface inside the target object OB where the refractive index changes, or the boundary surface between the target object OB and its surrounding environment (for example, air).
  • the light reflected by the reflecting surface of the target object OB and incident on the collimator 50b is referred to as “reflected light”.
  • the reflected light is guided to the beam splitter 20 via the galvanometer mirrors 60a and 60b and the collimator 50a.
  • The beam splitter 20 guides the reference light reflected by the reference mirror 40 and the reflected light to the spectroscope 70 through, for example, a coaxial optical fiber F.
  • the reference light and the reflected light guided to the spectroscope 70 are split by the diffraction grating inside the spectroscope 70 and interfere with each other.
  • the interfered reference light and reflected light are referred to as “interference light”.
  • the spectroscope 70 detects interference light by a light receiving element such as a photodiode, for example, and generates a signal (hereinafter referred to as “detection signal”) based on the detected interference light.
  • FIG. 2 is a diagram illustrating an example of the configuration of the image processing apparatus 100 according to the first embodiment.
  • the image processing apparatus 100 includes a control unit 110 and a storage unit 180.
  • the control unit 110 is a software function unit that functions when, for example, a processor such as a CPU (Central Processing Unit) executes a program stored in the storage unit 180. Some or all of these functional units may be hardware functional units such as LSI (Large Scale Integration) and ASIC (Application Specific Integrated Circuit).
  • the storage unit 180 includes a nonvolatile storage medium (non-temporary storage medium) such as a ROM (Read Only Memory), a flash memory, and an HDD (Hard Disk Drive).
  • the storage unit 180 may include a volatile storage medium such as a RAM (Random Access Memory) or a register, for example.
  • the storage unit 180 may store a program for operating the software function unit.
  • the control unit 110 includes, for example, an optical system control unit 120, a detection signal acquisition unit 130, an OCT image generation unit 140, a shift image generation unit 150, a correlation calculation unit 160, and a movement state calculation unit 170.
  • the optical system control unit 120 drives the galvanometer mirrors 60a and 60b and controls the interferometer so as to scan the measurement point of the target object OB one-dimensionally.
  • the detection signal acquisition unit 130 acquires a detection signal from the spectroscope 70.
  • The OCT image generation unit 140 performs appropriate signal processing on the detection signal acquired by the detection signal acquisition unit 130 to calculate the derivative of the one-dimensional refractive index distribution in the depth direction (the light propagation direction), that is, the intensity distribution of the reflected light (a so-called B-mode image).
  • Hereinafter, an image obtained by converting the intensity of the reflected light into luminance values is referred to as an "OCT image".
  • the OCT image generation unit 140 generates an OCT image for each one-dimensional scan. That is, the OCT image generation unit 140 generates a plurality of OCT images in time series.
  • the OCT image generation unit 140 is an example of an “acquisition unit”.
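  • (The following is an illustrative sketch, not part of the patent text.) The patent does not spell out the signal processing used to turn the detection signal into an OCT image; in a spectral-domain OCT, a depth profile is commonly obtained by apodizing each spectrum and applying an inverse Fourier transform. The Python sketch below assumes that standard pipeline, hypothetical array shapes, and that wavenumber resampling and background subtraction have already been done.

```python
import numpy as np

def reconstruct_bscan(detection_signals):
    """Illustrative spectral-domain OCT reconstruction (not from the patent).

    detection_signals: array of shape (n_alines, n_spectral_samples), assumed
    to be resampled linearly in wavenumber with the DC background removed.
    Returns the complex A-lines and a log-scaled intensity image (B-mode).
    """
    n_alines, n_samples = detection_signals.shape
    window = np.hanning(n_samples)
    # Apodize each spectrum and inverse-FFT along the spectral axis to get
    # the complex depth profile (A-line) for every lateral position.
    alines = np.fft.ifft(detection_signals * window, axis=1)
    # Keep the positive-depth half and convert to decibels for display.
    half = alines[:, : n_samples // 2]
    intensity_db = 20.0 * np.log10(np.abs(half) + 1e-12)
    return half, intensity_db
```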
  • the shift image generation unit 150 sets a local region A for any two OCT images among a plurality of OCT images generated by the OCT image generation unit 140.
  • the local area A is an area that is a target in the process of calculating a correlation described later.
  • The local region A can be set to an arbitrary region specified by the user, and is set to all or part of the OCT image.
  • the shift image generation unit 150 selects an older OCT image in time series from the OCT images in which the local region A is set.
  • The shift image generation unit 150 translates (shifts) the local region A on the selected, chronologically older OCT image by a predetermined number of pixels in the plane direction.
  • Hereinafter, of the two OCT images, the chronologically older one is referred to as the "reference image" and the newer one as the "target image".
  • the reference image is an example of a “first image”
  • the target image is an example of a “second image”.
  • the reference image may be an example of a “second image”
  • the target image may be an example of a “first image”.
  • the shift image generation unit 150 translates the local area A by a predetermined number of pixels in the left direction on the reference image, and newly sets the translated local area A as the local area A1.
  • the shift image generation unit 150 translates the local area A by a predetermined number of pixels in the right direction on the reference image, for example, and newly sets the translated local area A as the local area A2.
  • the shift image generation unit 150 translates the local area A by a predetermined number of pixels in the upward direction on the reference image, for example, and newly sets the translated local area A as the local area A3.
  • the shift image generation unit 150 translates the local area A by a predetermined number of pixels in the downward direction on the reference image, for example, and newly sets the translated local area A as the local area A4.
  • the OCT image in which the local areas A1 to A4 are newly set is referred to as a “shift image”.
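  • As a concrete illustration of the shifting step described above (a minimal sketch with hypothetical variable names and a default shift of one pixel; the patent only specifies "a predetermined number of pixels"), the local region A and its four translated copies can be cut out of the reference image as follows.

```python
def shifted_local_regions(reference, top, left, height, width, shift=1):
    """Extract the local region A and its four translated copies A1-A4
    (left, right, up, down) from a reference image (a 2-D array).

    Parameter names and the default one-pixel shift are illustrative; the
    region must lie at least `shift` pixels away from the image border.
    """
    def crop(row, col):
        return reference[row:row + height, col:col + width]

    a  = crop(top, left)          # local region A
    a1 = crop(top, left - shift)  # A1: shifted left
    a2 = crop(top, left + shift)  # A2: shifted right
    a3 = crop(top - shift, left)  # A3: shifted up
    a4 = crop(top + shift, left)  # A4: shifted down
    return a, a1, a2, a3, a4
```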
  • The correlation calculation unit 160 calculates an index (for example, a correlation coefficient) indicating the correlation between the local region A of the target image and the local region A of the reference image. The correlation calculation unit 160 also calculates indices (for example, correlation coefficients) indicating the correlations between the local region A of the target image and the local regions A1 to A4 of the shift image. That is, the correlation calculation unit 160 calculates five indices indicating five correlations based on the local region A of the reference image, the local region A of the target image, and the local regions A1 to A4 of the shift image.
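  • The text later identifies this index with the amplitude of the complex correlation coefficient (ACCC). A minimal sketch of that statistic for two complex-valued patches of equal size is shown below; it uses one common definition, omits the noise correction of Equation (27), and is an illustration rather than the patent's exact Equation (6).

```python
import numpy as np

def accc(patch_ref, patch_tgt):
    """Amplitude of the complex correlation coefficient between two
    complex-valued patches of identical shape (no noise correction).

    This is one common definition; the normalization details of the
    patent's equations may differ.
    """
    gr = np.asarray(patch_ref).ravel()
    gt = np.asarray(patch_tgt).ravel()
    numerator = np.abs(np.sum(np.conj(gr) * gt))
    denominator = np.sqrt(np.sum(np.abs(gr) ** 2) * np.sum(np.abs(gt) ** 2))
    return numerator / denominator
```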
  • the movement state calculation unit 170 calculates the distance (displacement) to the reflection surface of the target object OB based on the index indicating the correlation calculated by the correlation calculation unit 160. That is, the movement state calculation unit 170 can calculate the movement amount of the target object OB when the target object OB moves.
  • The image processing apparatus 100 may display, for example, the OCT image in which the local region A is set, the correlation indices, the distance (displacement) to the reflection surface of the target object OB, and the like on a display device such as a touch panel or a liquid crystal display.
  • A point spread function (PSF) h(r) is derived as the function representing the response to a point light source, where r denotes a position vector.
  • The point spread function h(r) is expressed, for example, by the following Equation (1).
  • In Equation (1), x and z denote the lateral direction and the depth direction (light propagation direction) in the OCT image plane, respectively.
  • y in Equation (1) denotes the lateral direction out of the OCT image plane (the x-z plane).
  • wl in Equation (1) denotes the spot radius of the probe light, defined by the point at which the light intensity falls to 1/e² of its peak value.
  • wz in Equation (1) denotes half of the width at which the point spread intensity falls to 1/e² of its peak value.
  • z0 in Equation (1) denotes the initial position in the depth direction of the interferometer of the OCT 1, which is taken to be "0".
  • kc in Equation (1) denotes the center wave number of the light source 10.
  • r = (x, y, z) in Equation (2) denotes the position vector in Cartesian coordinates, and δx(r), δy(r), and δz(r) denote the displacements in the x, y, and z directions, respectively.
  • The OCT image s(r) is expressed by Equation (3).
  • The OCT image s(r) is derived by superposing the function representing the structure (sample) of the target object OB and the point spread function h(r).
  • The OCT image s(r + δr) obtained when the structure of the target object OB has changed is expressed by Equation (4).
  • The OCT image s(r + δr) is derived by superposing the structure function and the point spread function h(r + δr), which takes the displacement δr of the structure of the target object OB into account.
  • The local displacement δr(r) is written simply as δr in Equation (4) and the following equations.
  • Equation (5) is derived from Equations (3) and (4).
  • Equation (5) represents the superposition integral of the OCT images before and after the deformation of the target object OB.
  • The correlation coefficient ρ between the point spread function h(r) before deformation and the point spread function h(r + δr) after deformation is derived as the amplitude of the complex correlation coefficient (ACCC) of the OCT images before and after the deformation of the target object OB. Equation (6), which derives the correlation coefficient ρ between the point spread function h(r) and the point spread function h(r + δr), is shown below.
  • In the present embodiment, the local region A on the reference image is translated (shifted) by a predetermined number of pixels in the vertical and horizontal directions, five simultaneous equations are derived, and these five simultaneous equations are solved.
  • Equation (7) represents, for example, the correlation coefficient ρ0(x, z) between the local region A of the target image and the local region A of the reference image.
  • Equation (8) represents the correlation coefficient ρ1(x, z) between the local region A of the target image and the local region A1 of the shift image (shifted to the left by Δx pixels).
  • Equation (9) represents the correlation coefficient ρ2(x, z) between the local region A of the target image and the local region A2 of the shift image (shifted to the right by Δx pixels).
  • Equation (10) represents the correlation coefficient ρ3(x, z) between the local region A of the target image and the local region A3 of the shift image (shifted upward by Δz pixels).
  • Equation (11) represents the correlation coefficient ρ4(x, z) between the local region A of the target image and the local region A4 of the shift image (shifted downward by Δz pixels).
  • δx and δz are written simply for δx(x, z) and δz(x, z), respectively.
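  • Equations (7) to (26) themselves are not reproduced in this text, so the exact way the five correlation values are turned into a displacement cannot be copied here. As a stand-in, the sketch below assumes a Gaussian-shaped correlation peak, in which case log ρ is parabolic in each axis and the sub-pixel offsets δx and δz follow from a three-point parabolic fit; this is a common estimator offered only as an assumption, not the patent's solution of the five simultaneous equations.

```python
import numpy as np

def subpixel_displacement(rho0, rho1, rho2, rho3, rho4, dx=1.0, dz=1.0):
    """Estimate (delta_x, delta_z) from the five correlation amplitudes:
    rho0 (no shift), rho1/rho2 (left/right by dx), rho3/rho4 (up/down by dz).

    Assumes a Gaussian correlation peak, so log(rho) is a parabola whose
    vertex gives the displacement. Sign conventions depend on how the
    shifts are defined; this is an illustrative estimator only.
    """
    l0, l1, l2, l3, l4 = (np.log(max(r, 1e-12)) for r in (rho0, rho1, rho2, rho3, rho4))
    # Three-point parabolic vertex: offset = d * (l_minus - l_plus) / (2 * (l_minus - 2*l0 + l_plus)).
    delta_x = 0.5 * dx * (l1 - l2) / (l1 - 2.0 * l0 + l2)
    delta_z = 0.5 * dz * (l3 - l4) / (l3 - 2.0 * l0 + l4)
    return delta_x, delta_z
```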
  • The influence of noise may be taken into account when substituting Equations (22) to (26) described above into Equation (6) to derive the correlation coefficient ρ.
  • Equation (27) for the correlation coefficient ρs, which takes the influence of noise into account, is shown below.
  • Equation (27) is obtained by multiplying the correlation-coefficient term by a term for noise reduction.
  • g*R(xi, zi) represents the complex conjugate of the luminance at coordinates (xi, zi) on the reference image.
  • gT(xi, zi) represents the luminance at coordinates (xi, zi) on the target image.
  • W represents the area of the local region A on the OCT image.
  • SNR_R^-1 and SNR_T^-1 represent the inverse of the signal-to-noise ratio (SNR) on the reference image and the inverse of the SNR on the target image, respectively. SNR_R^-1 and SNR_T^-1 are defined by the following Equation (28).
  • SNR_n(x, z) shown in Equation (28) is a generalized notation for SNR_R^-1 and SNR_T^-1.
  • gn(xi, zi) is a generalized notation for g*R(xi, zi) and gT(xi, zi).
  • N(xi, zi) represents the noise (noise floor) of the DC component that appears in the intensity distribution of the reflected light.
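  • Since Equations (27) and (28) are not reproduced here, the exact noise-reduction term is unknown; the sketch below shows one commonly used first-order correction factor built from the inverse SNRs, purely as an assumption. The per-patch SNR would be estimated from the measured intensity and the noise floor N, as Equation (28) suggests.

```python
def noise_corrected_accc(raw_accc, snr_ref, snr_tgt):
    """Apply a noise-bias correction factor to a raw ACCC value.

    The factor sqrt((1 + 1/SNR_R) * (1 + 1/SNR_T)) is an assumed form;
    the patent's Equation (27) is not reproduced in the text, so this is
    an illustration rather than the patented formula.
    snr_ref, snr_tgt: linear (not dB) signal-to-noise ratios of the local
    region on the reference and target images, e.g. mean intensity divided
    by the noise floor N.
    """
    return raw_accc * ((1.0 + 1.0 / snr_ref) * (1.0 + 1.0 / snr_tgt)) ** 0.5
```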
  • FIG. 3 is a diagram illustrating an example of the local region A set on the reference image.
  • the shift image generation unit 150 sets, for example, a local region A of 14 ⁇ 14 pixels on the reference image.
  • the shift image generation unit 150 translates (shifts) the local area A of 14 ⁇ 14 pixels by predetermined pixels in the vertical and horizontal directions, and resets the local areas A1 to A4.
  • the shift image generation unit 150 may repeat the setting process for the local region A described above a plurality of times.
  • When the shift image generation unit 150 repeats the above setting process for the local region A a plurality of times, it is preferable to set regions so that the local regions A are averaged over the OCT image.
  • In this case, the correlation calculation unit 160 may calculate the index indicating the correlation a plurality of times, based on the local region A on each reference image and the corresponding local region A of the target image and local regions A1 to A4 of the shift image.
  • The movement state calculation unit 170 then calculates the distance (displacement) to the reflection surface of the target object OB based on the average value of the indices indicating the correlation calculated a plurality of times by the correlation calculation unit 160. Thereby, the image processing apparatus 100 can improve the detection accuracy while suppressing the influence of noise.
  • FIG. 4 is a diagram illustrating an example of a relationship between a theoretical value that is a calculation result of the displacement of the target object OB and an actual measurement value of the movement amount when the target object OB is actually moved by the three-axis stage.
  • the triaxial stage is, for example, an actuator using a piezoelectric effect or the like that can be translated in the vertical and horizontal directions (x, y, z directions) in space.
  • the target object OB is placed and fixed on a three-axis stage. That is, the movement amount of the target object OB matches the parallel movement amount of the three-axis stage.
  • the vertical axis shown in the figure is the displacement of the target object OB (for example, the unit is [ ⁇ m]), and the horizontal axis is the amount of movement of the target object OB in the x or y direction (for example, the unit is [ ⁇ m]).
  • the plots Px and Pz in the figure represent the displacement of the target object OB in the x direction and the displacement of the target object OB in the z direction, respectively. In these plots Px and Pz, the displacement of the target object OB and the amount of movement of the target object OB show a substantially one-to-one relationship.
  • That is, the calculated displacement of the target object OB in the x and z directions in the present embodiment substantially matches the actually measured value (true value), which demonstrates the usefulness of the above-described method for calculating the displacement of the target object OB.
  • FIG. 5 is a diagram illustrating an example of standard deviations of displacements in the x direction and the z direction of the target object OB with respect to each movement amount of the target object OB.
  • The vertical axis in the figure is the standard deviation of the displacement of the target object OB in the x and z directions, and the horizontal axis is the amount of movement of the target object OB in the x or z direction (for example, in units of [μm]).
  • the plots Px and Pz in the figure are the same as those in FIG.
  • The standard deviation in the x direction is larger than the standard deviation in the z direction, averaging about 0.3.
  • the standard deviation in the z direction is about 0.1 on average. From these results, it is suggested that the OCT 1 in the present embodiment can perform displacement measurement with higher accuracy in the z direction (depth direction) than in the x direction.
  • FIG. 6 is a diagram illustrating another example of the relationship between the theoretical value, which is the calculation result of the displacement of the target object OB, and the actually measured movement amount when the target object OB is actually moved by the three-axis stage.
  • the vertical axis shown in the figure is the displacement of the target object OB (for example, the unit is [ ⁇ m]), and the horizontal axis is the amount of movement of the target object OB in the y direction (for example, the unit is [ ⁇ m]).
  • the plot Py in the figure represents the displacement of the target object OB in the y direction.
  • the displacement of the target object OB and the movement amount of the target object OB show a substantially one-to-one relationship. That is, the calculation result of the displacement of the target object OB in the y direction in the present embodiment substantially matches the actual measurement value (true value), and the usefulness of the above-described method of calculating the displacement of the target object OB is indicated.
  • FIG. 7 is a diagram illustrating an example of the standard deviation of the displacement in the y direction of the target object OB with respect to each movement amount of the target object OB.
  • the vertical axis shown in the figure is the standard deviation of the displacement in the y direction of the target object OB, and the horizontal axis is the amount of movement of the target object OB in the y direction (for example, the unit is [ ⁇ m]).
  • the standard deviation in the y direction is about 0.1 on average. From this result, it is suggested that OCT1 in the present embodiment can perform accurate displacement measurement in the y direction (lateral direction out of the plane) as well as in the z direction (depth direction).
  • FIG. 8 is a flowchart illustrating an example of a processing flow of the image processing apparatus 100 according to the first embodiment.
  • the image processing apparatus 100 repeatedly performs the processing of this flowchart at a predetermined cycle.
  • the optical system control unit 120 drives the galvanometer mirrors 60a and 60b to control the interferometer so as to perform one-dimensional scanning of the measurement point of the target object OB (step S100).
  • Next, the detection signal acquisition unit 130 acquires the detection signal from the spectroscope 70 (step S102).
  • the OCT image generation unit 140 appropriately performs signal processing on the detection signal acquired by the detection signal acquisition unit 130, and generates an OCT image based on the detection signal (step S104).
  • The shift image generation unit 150 translates (shifts), by a predetermined number of pixels in the plane direction, the local region A on the reference image, which is the chronologically older of the OCT images in which the local region A is set (step S110). Next, the shift image generation unit 150 generates shift images in which the translated local region A is set as a new local region (step S112).
  • Next, the correlation calculation unit 160 calculates an index (for example, a correlation coefficient) indicating the correlation between the local region A of the target image, which is the chronologically newer OCT image, and the local region A of the reference image, as well as indices (for example, correlation coefficients) indicating the correlations between the local region A of the target image and the local regions A1 to A4 of the shift images (step S114).
  • the movement state calculation unit 170 calculates the distance (displacement) to the reflection surface of the target object OB based on the index indicating the correlation calculated by the correlation calculation unit 160 (step S116). As a result, the image processing apparatus 100 ends the processing of this flowchart.
  • As described above, according to the image processing apparatus 100 of the first embodiment, a plurality of OCT images of the target object OB are generated in time series, a plurality of shift images are generated by translating, in the plane direction, the local region A set on the reference image, indices indicating the correlation between the target image and the local regions of the reference image and of the shift images are calculated, and the displacement of the target object OB is calculated based on the calculated indices, whereby the detection accuracy can be improved.
  • the laser irradiation system 2 including the image processing apparatus 100 according to the second embodiment will be described.
  • The laser irradiation system 2 of the second embodiment differs from the first embodiment in that a laser irradiation optical system 200 is further provided in addition to the OCT 1 of the first embodiment. The following description therefore focuses on this difference, and description of the common parts is omitted.
  • FIG. 9 is a configuration diagram illustrating an example of the laser irradiation system 2 including the image processing apparatus 100 according to the second embodiment.
  • The laser irradiation system 2 in the present embodiment is a system for treating a target object OB (for example, a retinal detachment or edema occurring in the fundus) in a living eyeball Eye by scarring it.
  • the laser irradiation system 2 further includes a laser irradiation optical system 200 in addition to the OCT 1 in the first embodiment.
  • the optical system control unit 120 of the image processing apparatus 100 further has a function of controlling the laser irradiation optical system 200.
  • the laser irradiation optical system 200 is an example of an “irradiation unit”.
  • the laser irradiation optical system 200 is, for example, a micro pulse laser device that can coagulate the target object OB with minimal invasiveness. Further, the laser irradiation optical system 200 may include an optical scanner for scanning the target object OB with the laser beam to be irradiated. The laser irradiation optical system 200 may include an optical path coupling member such as a dichroic mirror or a half mirror. The optical path coupling member arranges the optical axis of the laser irradiation optical system 200 and the optical axis of the OCT 1 coaxially.
  • the optical system control unit 120 controls, for example, the beam width and intensity of the laser light emitted from the laser irradiation optical system 200 based on the displacement of the target object OB calculated by the movement state calculation unit 170.
  • the optical system control unit 120 is an example of an “irradiation control unit”.
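  • The patent does not give a specific control law; as a purely hypothetical illustration of changing the laser intensity based on the movement state, a threshold rule might look like the following (all names, units, and thresholds are made up for the example).

```python
def adjust_laser_power(current_power_mw, displacement_um,
                       threshold_um=5.0, step_mw=10.0, min_power_mw=0.0):
    """Hypothetical control rule: lower the laser power when the measured
    tissue displacement exceeds a threshold, otherwise leave it unchanged.
    The patent only states that the intensity is changed based on the
    movement state; this rule and its numbers are illustrative assumptions.
    """
    if abs(displacement_um) > threshold_um:
        return max(min_power_mw, current_power_mw - step_mw)
    return current_power_mw
```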
  • Example 2 The present applicant observed through experiments that the displacement of the target object OB changes due to thermal expansion and contraction caused by laser irradiation when the target object OB is irradiated with laser light.
  • these experimental results will be described with reference to the drawings. Note that the image processing apparatus 100 displays an image indicating an experimental result described later on a display device or the like.
  • FIG. 10 is a diagram illustrating an example of a result of measuring the displacement of the target object OB while irradiating the target object OB with laser light.
  • the map shown at the right end (fourth column) in the figure is a diagram in which displacement vectors are overlaid on the OCT image.
  • the size of the vector arrow indicates the magnitude of the displacement
  • the direction of the vector arrow indicates the direction of the displacement.
  • the direction of displacement may be expressed as a two-dimensional color wheel.
  • the direction of displacement and the up / down / left / right directions of the color wheel are previously associated with each other.
  • the direction of displacement can be expressed by the color of the plot shown in the figure.
  • The user can thereby recognize the displacement direction of this region as the displacement direction (for example, the negative x direction) corresponding to the color of the plot.
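  • A common way to realize such a color-wheel display (shown here only as an illustration; the patent does not specify the exact hue assignment) is to map the displacement direction to hue and the magnitude to brightness in HSV space.

```python
import numpy as np
import matplotlib.colors as mcolors

def displacement_to_rgb(delta_x, delta_z):
    """Map a 2-D displacement field to RGB colors: direction -> hue,
    magnitude -> brightness. The correspondence between directions and
    hues is an arbitrary choice for this illustration.
    """
    delta_x = np.asarray(delta_x, dtype=float)
    delta_z = np.asarray(delta_z, dtype=float)
    angle = np.arctan2(delta_z, delta_x)             # direction in radians
    magnitude = np.hypot(delta_x, delta_z)
    hue = (angle + np.pi) / (2.0 * np.pi)            # map [-pi, pi] to [0, 1]
    value = magnitude / (magnitude.max() + 1e-12)    # normalized magnitude
    hsv = np.stack([hue, np.ones_like(hue), value], axis=-1)
    return mcolors.hsv_to_rgb(hsv)
```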
  • It was confirmed that the retina on the inner side of the fundus expands in the radial direction when laser irradiation is performed for 12 to 24 ms, and that after the end of the laser irradiation (after 24 ms) the retinal tissue displaced by the irradiation returns to its original position during the period up to 180 ms.
  • According to the laser irradiation system 2 including the image processing apparatus 100 in the second embodiment, it is possible to quantitatively evaluate the displacement of the target object OB while irradiating the target object OB with laser light.
  • The image processing apparatus 100-A according to the third embodiment differs from the above-described embodiments in that some or all of the image processing is shared with other apparatuses or devices. The following description therefore focuses on this difference, and description of the common parts is omitted.
  • FIG. 11 is a diagram illustrating an example of a connection relationship between the OCT 1-A including the image processing apparatus 100-A according to the third embodiment and other apparatuses.
  • The image processing apparatus 100-A is connected to other apparatuses via a network NW such as a LAN or a WAN, for example.
  • Examples of the other apparatuses include an image processing apparatus 100-B in an OCT 1-B, an image processing apparatus 300 in an ultrasonic diagnostic imaging apparatus 3, an image processing apparatus 400 in a CT (Computed Tomography) apparatus 4, an image processing apparatus 500 in an MRI (Magnetic Resonance Imaging) apparatus 5, a stand-alone image processing apparatus 600, and the like.
  • The image processing apparatus 100-A calculates the processing load imposed on the image processing apparatus 100-A by the displacement calculation, based on the size and number of the generated OCT images.
  • For example, the image processing apparatus 100-A transmits the generated OCT images to another apparatus and has that apparatus perform some or all of the processing related to the above-described calculation of the displacement of the target object OB. That is, the image processing apparatus 100-A distributes the processing related to the calculation of the displacement of the target object OB.
  • Thereby, the image processing apparatus 100-A according to the third embodiment can quickly calculate the displacement of the target object OB.
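  • As a toy illustration of this distributed processing (the patent sends work to other image processing apparatuses over the network NW; a local process pool is used here only to stand in for that idea), displacement computations for pairs of OCT frames could be farmed out as follows.

```python
from concurrent.futures import ProcessPoolExecutor

def distribute_displacement_jobs(frame_pairs, compute_displacement, max_workers=4):
    """Share the displacement calculation for (reference, target) OCT frame
    pairs across worker processes. `compute_displacement` is a user-supplied
    function; in the patent the work would instead be sent to other devices
    such as the apparatuses 100-B, 300, 400, 500, or 600 over the network NW.
    """
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(compute_displacement, frame_pairs))
```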
  • the OCT image generation unit 140 may not only generate an OCT image based on the detection signal, but also acquire an OCT image that has already been generated from another functional unit or another device.
  • Reference signs: 1 … optical coherence tomograph (OCT), 10 … light source, 20 … beam splitter, 30a, 30b, 50a, 50b … collimator, 40 … reference mirror, 60a, 60b … galvanometer mirror, 70 … spectroscope, 100 … image processing apparatus, 110 … control unit, 120 … optical system control unit, 130 … detection signal acquisition unit, 140 … OCT image generation unit, 150 … shift image generation unit, 160 … correlation calculation unit, 170 … movement state calculation unit, 180 … storage unit

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

In the present invention, an image processing device is provided with: an acquisition unit that acquires, in chronological order, a plurality of images representing the state of an object; a shift image generation unit that subjects a first image from among the plurality of images acquired by the acquisition unit to parallel shifting in a planar direction, and thereby generates a plurality of shift images; a correlation calculation unit that calculates an index that indicates correlation between the shift images generated by the shift image generation unit, and a second image from among the plurality of images acquired by the acquisition unit; and a movement state calculation unit that calculates, for each pixel, the state of movement of the object, on the basis of the index calculated by the correlation calculation unit.

Description

Image processing apparatus, laser irradiation system, image processing method, and image processing program
 The present invention relates to an image processing apparatus, a laser irradiation system, an image processing method, and an image processing program.
 This application claims priority based on Japanese Patent Application No. 2014-255208, filed in Japan on December 17, 2014, the contents of which are incorporated herein by reference.
 In clinical practice, laser coagulation treatment is known, in which a living body is irradiated with laser light to locally scar the tissue. Laser coagulation treatment is widely used, for example, in treatment of the fundus. However, in laser coagulation treatment, the irradiation conditions of the laser light are sometimes determined based on the experience of the operator, such as a doctor or other medical worker. As a result, the effect of laser coagulation treatment may vary between operators, and even for the same operator the reproducibility of the treatment may not be high.
 In this connection, research is being conducted on monitoring tissue scarring during laser coagulation treatment to reduce the uncertainty of the treatment.
JP 2014-150889 A
 However, with the conventional techniques, the monitoring accuracy, that is, the detection accuracy, is sometimes insufficient.
 The present invention has been made in view of such circumstances, and one object thereof is to provide an image processing apparatus, a laser irradiation system, an image processing method, and an image processing program capable of improving detection accuracy.
 One aspect of the present invention is an image processing apparatus comprising: an acquisition unit that acquires, in time series, a plurality of images representing the state of an object; a shift image generation unit that generates a plurality of shift images by translating, in a plane direction, a first image among the plurality of images acquired by the acquisition unit; a correlation calculation unit that calculates an index indicating a correlation between a shift image generated by the shift image generation unit and a second image among the plurality of images acquired by the acquisition unit; and a movement state calculation unit that calculates, for each pixel, a movement state of the object based on the index calculated by the correlation calculation unit.
 Another aspect of the present invention is a laser irradiation system comprising: an irradiation unit that irradiates an object with laser light; an acquisition unit that acquires, in time series, a plurality of images representing the state of the object irradiated with the laser light by the irradiation unit; a shift image generation unit that generates a plurality of shift images by translating, in a plane direction, a first image among the plurality of acquired images; a correlation calculation unit that calculates an index indicating a correlation between a shift image generated by the shift image generation unit and a second image among the plurality of acquired images; a movement state calculation unit that calculates, for each pixel, a movement state of the object based on the index calculated by the correlation calculation unit; and an irradiation control unit that controls the irradiation unit so that the intensity of the laser light is changed based on the movement state of the object calculated by the movement state calculation unit.
 Another aspect of the present invention is an image processing method in which a plurality of images representing the state of an object are acquired in time series; a plurality of shift images are generated by translating, in a plane direction, a first image among the plurality of acquired images; an index indicating a correlation between a generated shift image and a second image among the plurality of acquired images is calculated; and a movement state of the object is calculated for each pixel based on the calculated index.
 Another aspect of the present invention is an image processing program that causes a computer performing image processing to: acquire, in time series, a plurality of images representing the state of an object; generate a plurality of shift images by translating, in a plane direction, a first image among the plurality of acquired images; calculate an index indicating a correlation between a generated shift image and a second image among the plurality of acquired images; and calculate, for each pixel, a movement state of the object based on the calculated index.
 According to the present invention, it is possible to provide an image processing apparatus, a laser irradiation system, an image processing method, and an image processing program capable of improving detection accuracy.
 FIG. 1 is a configuration diagram illustrating an example of an optical coherence tomograph 1 including an image processing apparatus 100 according to a first embodiment.
 FIG. 2 is a diagram illustrating an example of the configuration of the image processing apparatus 100 according to the first embodiment.
 FIG. 3 is a diagram illustrating an example of a local region A set on a reference image.
 FIG. 4 is a diagram illustrating an example of the relationship between theoretical values, which are calculation results of the displacement of a target object OB, and actually measured movement amounts when the target object OB is actually moved by a three-axis stage.
 FIG. 5 is a diagram illustrating an example of the standard deviations of the displacement of the target object OB in the x and z directions for each movement amount of the target object OB.
 FIG. 6 is a diagram illustrating another example of the relationship between theoretical values, which are calculation results of the displacement of the target object OB, and actually measured movement amounts when the target object OB is actually moved by the three-axis stage.
 FIG. 7 is a diagram illustrating an example of the standard deviation of the displacement of the target object OB in the y direction for each movement amount of the target object OB.
 FIG. 8 is a flowchart illustrating an example of the processing flow of the image processing apparatus 100 according to the first embodiment.
 FIG. 9 is a configuration diagram illustrating an example of a laser irradiation system 2 including the image processing apparatus 100 according to a second embodiment.
 FIG. 10 is a diagram illustrating an example of the result of measuring the displacement of the target object OB while irradiating the target object OB with laser light.
 FIG. 11 is a diagram illustrating an example of the connection relationship between an OCT 1-A including an image processing apparatus 100-A according to a third embodiment and other apparatuses.
 Hereinafter, embodiments of an image processing apparatus, a laser irradiation system, an image processing method, and an image processing program according to the present invention will be described with reference to the drawings.
 (First embodiment)
 FIG. 1 is a configuration diagram illustrating an example of an optical coherence tomograph 1 including an image processing apparatus 100 according to the first embodiment. The optical coherence tomograph 1 in the present embodiment corresponds to a device known as an optical coherence tomography (OCT) device. Hereinafter, the optical coherence tomograph 1 is referred to as "OCT 1". The OCT 1 is a device that irradiates an object to be measured (hereinafter referred to as the "target object OB") with light and measures the displacement of the target object OB by detecting interference between the light reflected from the target object OB and part of the irradiated light. The target object OB in the present embodiment is, for example, a living body (organic matter) such as a fundus, a blood vessel, a tooth, or subcutaneous tissue (for example, a tumor), or an inorganic object such as an electronic component (for example, a semiconductor) or a mechanical component. The inorganic object may partially include organic matter.
 The OCT 1 includes a light source 10, a beam splitter 20, collimators 30a, 30b, 50a, and 50b, a reference mirror 40, galvanometer mirrors 60a and 60b, a spectroscope 70, and the image processing apparatus 100. Among these, for example, the beam splitter 20, the collimators 30a, 30b, 50a, and 50b, the reference mirror 40, and the galvanometer mirrors 60a and 60b correspond to an optical system known as an interferometer. The interferometer in the present embodiment is, for example, a Michelson interferometer configured with optical fibers F. The light source 10, the spectroscope 70, and the collimators 30a and 50a are each connected to the beam splitter 20 by an optical fiber F having a transmission band covering the wavelength band of the light emitted from the light source 10.
 The OCT 1 in the present embodiment is described as, for example, a Fourier-domain OCT (FD-OCT) such as a spectral-domain OCT (SD-OCT) or a swept-source OCT (SS-OCT), but is not limited to this. The OCT 1 may be, for example, a time-domain OCT (TD-OCT). When the OCT 1 is a time-domain OCT, the reference mirror 40 of the interferometer is not fixed, and the interferometer is kept drivable so that the optical path length from the light source to the reference mirror 40 can be changed.
 The light source 10 emits, for example, probe light in the near-infrared wavelength range (for example, about 800 to 1000 nm). The light source 10 is preferably a wavelength-swept light source such as an SLD (superluminescent diode) or an ultrashort-pulse laser.
 The light emitted from the light source 10 is guided through the optical fiber F and split by the beam splitter 20 into light guided toward the collimator 30a and light guided toward the collimator 50a. Hereinafter, the light guided toward the collimator 30a is referred to as "reference light", and the light guided toward the collimator 50a is referred to as "measurement light". The beam splitter 20 is, for example, a cube beam splitter.
 The reference light is, for example, collimated into parallel light by the collimator 30a, and the parallel light is then focused by the collimator 30b. The light (reference light) focused by the collimator 30b is reflected by the reference mirror 40. The reference light reflected by the reference mirror 40 is, for example, collimated into parallel light by the collimator 30b, then focused by the collimator 30a and guided to the beam splitter 20.
 Meanwhile, the measurement light is collimated into parallel light by the collimator 50a. The parallel light (measurement light) is then reflected toward the collimator 50b by the galvanometer mirrors 60a and 60b, focused by the collimator 50b, and irradiated onto the target object OB. The measurement light irradiated onto the target object OB is reflected by a reflection surface of the target object OB and enters the collimator 50b. The reflection surface is, for example, a boundary surface inside the target object OB where the refractive index changes, or the boundary surface between the target object OB and its surrounding environment (for example, air). Hereinafter, the light reflected by the reflection surface of the target object OB and entering the collimator 50b is referred to as "reflected light". The reflected light is guided to the beam splitter 20 via the galvanometer mirrors 60a and 60b and the collimator 50a.
 The beam splitter 20 guides the reference light reflected by the reference mirror 40 and the reflected light to the spectroscope 70 through, for example, a coaxial optical fiber F.
 The reference light and the reflected light guided to the spectroscope 70 are dispersed by a diffraction grating inside the spectroscope 70 and interfere with each other. Hereinafter, the interfering reference light and reflected light are referred to as "interference light".
 The spectroscope 70 detects the interference light with a light receiving element such as a photodiode, for example, and generates a signal based on the detected interference light (hereinafter referred to as the "detection signal").
 FIG. 2 is a diagram illustrating an example of the configuration of the image processing apparatus 100 according to the first embodiment.
 The image processing apparatus 100 includes a control unit 110 and a storage unit 180. The control unit 110 is a software functional unit that functions when, for example, a processor such as a CPU (Central Processing Unit) executes a program stored in the storage unit 180. Some or all of these functional units may be hardware functional units such as an LSI (Large Scale Integration) or an ASIC (Application Specific Integrated Circuit). The storage unit 180 includes a nonvolatile storage medium (non-transitory storage medium) such as a ROM (Read Only Memory), a flash memory, or an HDD (Hard Disk Drive). The storage unit 180 may also include a volatile storage medium such as a RAM (Random Access Memory) or a register. The storage unit 180 may store a program for operating the software functional units.
 制御部110は、例えば、光学系制御部120と、検出信号取得部130と、OCT画像生成部140と、シフト画像生成部150と、相関算出部160と、移動状態算出部170とを備える。 The control unit 110 includes, for example, an optical system control unit 120, a detection signal acquisition unit 130, an OCT image generation unit 140, a shift image generation unit 150, a correlation calculation unit 160, and a movement state calculation unit 170.
 光学系制御部120は、ガルバノミラー60a、60bを駆動させ、対象物体OBの計測点を一次元走査するように干渉計を制御する。 The optical system control unit 120 drives the galvanometer mirrors 60a and 60b and controls the interferometer so as to scan the measurement point of the target object OB one-dimensionally.
 検出信号取得部130は、分光器70から検出信号を取得する。 The detection signal acquisition unit 130 acquires a detection signal from the spectroscope 70.
 OCT画像生成部140は、検出信号取得部130によって取得された検出信号に対して適宜信号処理を行い、深さ方向(光の伝播方向)の一次元の屈折率分布の微分、すなわち、反射光の強度分布(所謂Bモード像)を算出する。以下、上述した反射光の強度を輝度値に換算した画像を、「OCT画像」と称する。OCT画像生成部140は、例えば、一次元走査ごとにOCT画像を生成する。すなわち、OCT画像生成部140は、時系列に複数のOCT画像を生成する。なお、OCT画像生成部140は、「取得部」の一例である。 The OCT image generation unit 140 performs appropriate signal processing on the detection signal acquired by the detection signal acquisition unit 130, and calculates the derivative of the one-dimensional refractive index distribution in the depth direction (light propagation direction), that is, the intensity distribution of the reflected light (a so-called B-mode image). Hereinafter, an image obtained by converting the intensity of the reflected light into luminance values is referred to as an "OCT image". The OCT image generation unit 140 generates, for example, one OCT image per one-dimensional scan. That is, the OCT image generation unit 140 generates a plurality of OCT images in time series. The OCT image generation unit 140 is an example of an "acquisition unit".
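By way of illustration only, the generation of an OCT image from the detection signal can be sketched in Python as follows. This assumes a spectral-domain arrangement in which the detection signal is a spectral interferogram already sampled linearly in wavenumber; the function name, array layout, and windowing choice are assumptions introduced here for explanation rather than the specific signal processing of the embodiment.

import numpy as np

def generate_oct_image(spectra, background=None):
    # spectra: 2D array (num_a_lines, num_spectral_pixels), one spectral
    # interferogram per A-line, assumed linear in wavenumber k.
    if background is None:
        background = spectra.mean(axis=0)           # estimate the DC term
    fringes = spectra - background                   # remove the DC component
    window = np.hanning(spectra.shape[1])            # suppress side lobes
    a_lines = np.fft.fft(fringes * window, axis=1)   # complex depth profiles
    half = a_lines[:, : a_lines.shape[1] // 2]       # keep positive depths only
    intensity = np.abs(half) ** 2                    # reflected-light intensity
    return 10.0 * np.log10(intensity + 1e-12).T      # depth (z) x lateral (x)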
 シフト画像生成部150は、OCT画像生成部140によって生成された複数のOCT画像のうち、任意の2つのOCT画像に対して、局所領域Aを設定する。局所領域Aは、後述する相関を算出する処理の際に対象となる領域である。局所領域Aは、ユーザの任意の領域として設定可能であり、OCT画像の全部または一部として設定される。 The shift image generation unit 150 sets a local region A for any two OCT images among a plurality of OCT images generated by the OCT image generation unit 140. The local area A is an area that is a target in the process of calculating a correlation described later. The local area A can be set as an arbitrary area of the user, and is set as all or part of the OCT image.
 シフト画像生成部150は、局所領域Aを設定したOCT画像のうち、より時系列の古い方のOCT画像を選択する。シフト画像生成部150は、選択した時系列の古い方のOCT画像上の局所領域Aを面方向に所定の画素分平行移動(シフト)させる。以下、任意の2つのOCT画像のうち、時系列の古い方を「参照画像」と称し、時系列の新しい方を「ターゲット画像」と称する。なお、参照画像は、「第1の画像」の一例であり、ターゲット画像は、「第2の画像」の一例である。また、参照画像は、「第2の画像」の一例であってもよいし、ターゲット画像は、「第1の画像」の一例であってもよい。 The shift image generation unit 150 selects an older OCT image in time series from the OCT images in which the local region A is set. The shift image generation unit 150 translates (shifts) the local region A on the selected time-series older OCT image by a predetermined number of pixels in the plane direction. Hereinafter, of any two OCT images, the older one in time series is referred to as a “reference image”, and the newer one in time series is referred to as a “target image”. The reference image is an example of a “first image”, and the target image is an example of a “second image”. The reference image may be an example of a “second image”, and the target image may be an example of a “first image”.
 シフト画像生成部150は、例えば、参照画像上の左方向に、局所領域Aを所定の画素分平行移動させ、平行移動させた局所領域Aを新たに局所領域A1として設定する。また、シフト画像生成部150は、例えば、参照画像上の右方向に、局所領域Aを所定の画素分平行移動させ、平行移動させた局所領域Aを新たに局所領域A2として設定する。また、シフト画像生成部150は、例えば、参照画像上の上方向に、局所領域Aを所定の画素分平行移動させ、平行移動させた局所領域Aを新たに局所領域A3として設定する。また、シフト画像生成部150は、例えば、参照画像上の下方向に、局所領域Aを所定の画素分平行移動させ、平行移動させた局所領域Aを新たに局所領域A4として設定する。以下、新たに局所領域A1~4が設定されたOCT画像を、「シフト画像」と称する。 For example, the shift image generation unit 150 translates the local area A by a predetermined number of pixels in the left direction on the reference image, and newly sets the translated local area A as the local area A1. In addition, the shift image generation unit 150 translates the local area A by a predetermined number of pixels in the right direction on the reference image, for example, and newly sets the translated local area A as the local area A2. In addition, the shift image generation unit 150 translates the local area A by a predetermined number of pixels in the upward direction on the reference image, for example, and newly sets the translated local area A as the local area A3. In addition, the shift image generation unit 150 translates the local area A by a predetermined number of pixels in the downward direction on the reference image, for example, and newly sets the translated local area A as the local area A4. Hereinafter, the OCT image in which the local areas A1 to A4 are newly set is referred to as a “shift image”.
 相関算出部160は、ターゲット画像の局所領域Aと、参照画像の局所領域Aとの相関を示す指標(例えば相関係数)を算出する。また、相関算出部160は、ターゲット画像の局所領域Aと、シフト画像の局所領域A1~4とのそれぞれの相関を示す指標(例えば相関係数)を算出する。すなわち、相関算出部160は、参照画像の局所領域Aと、ターゲット画像の局所領域Aと、シフト画像の局所領域A1~4とに基づいて、5つの相関を示す指標(例えば相関係数)を算出する。 The correlation calculation unit 160 calculates an index (for example, a correlation coefficient) indicating the correlation between the local region A of the target image and the local region A of the reference image. The correlation calculation unit 160 also calculates indices (for example, correlation coefficients) indicating the respective correlations between the local region A of the target image and the local regions A1 to A4 of the shift images. That is, the correlation calculation unit 160 calculates five correlation indices (for example, correlation coefficients) based on the local region A of the reference image, the local region A of the target image, and the local regions A1 to A4 of the shift images.
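A minimal sketch of the shift and correlation steps, assuming complex-valued OCT data. The window size, shift amounts, and use of the amplitude of a normalized complex correlation are illustrative assumptions rather than the specific implementation of the embodiment.

import numpy as np

def complex_corr(a, b):
    # Amplitude of the normalized complex correlation of two equal-size patches.
    num = np.sum(np.conj(a) * b)
    den = np.sqrt(np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
    return np.abs(num / den)

def five_correlations(reference, target, x0, z0, size=14, dx=1, dz=1):
    # reference, target: complex OCT images indexed as (z, x).
    # (x0, z0): corner of local region A; returns [mu0, mu1..mu4] for
    # region A and its left/right/up/down shifted copies A1-A4.
    def patch(img, x, z):
        return img[z:z + size, x:x + size]

    region_t = patch(target, x0, z0)
    shifts = [(0, 0), (-dx, 0), (dx, 0), (0, -dz), (0, dz)]  # A, A1, A2, A3, A4
    return [complex_corr(patch(reference, x0 + sx, z0 + sz), region_t)
            for sx, sz in shifts]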
 移動状態算出部170は、相関算出部160によって算出された相関を示す指標に基づいて、対象物体OBの反射面までの距離(変位)を算出する。すなわち、移動状態算出部170は、対象物体OBが移動した場合には、対象物体OBの移動量を算出することができる。 The movement state calculation unit 170 calculates the distance (displacement) to the reflection surface of the target object OB based on the index indicating the correlation calculated by the correlation calculation unit 160. That is, the movement state calculation unit 170 can calculate the movement amount of the target object OB when the target object OB moves.
 画像処理装置100は、例えば、局所領域Aを設定したOCT画像や、相関を示す指標、対象物体OBの反射面までの距離(変位)等を、タッチパネルや液晶ディスプレイ等の表示装置(不図示)に表示してもよい。 The image processing apparatus 100 may display, for example, the OCT image in which the local region A is set, the index indicating the correlation, the distance (displacement) to the reflection surface of the target object OB, and the like on a display device (not shown) such as a touch panel or a liquid crystal display.
 (理論;相関および変位の算出方法)
 以下、対象物体OBの移動量の算出、および画像間の相関を示す指標の算出について数式を用いて説明する。まず、光源から照射される光を点光源とした場合、この点光源に対する応答を表す関数として、点像分布関数(Point spread function;PSF)h(→r)を導出する。点像分布関数h(→r)は、例えば、以下の数式(1)によって表される。以下、「→」は、ベクトルを表すものとする。
(Theory; Calculation method of correlation and displacement)
Hereinafter, calculation of the movement amount of the target object OB and calculation of an index indicating the correlation between images will be described using mathematical expressions. First, when the light emitted from the light source is a point light source, a point spread function (PSF) h (→ r) is derived as a function representing a response to the point light source. The point spread function h (→ r) is expressed by, for example, the following formula (1). Hereinafter, “→” represents a vector.
 式(1)中のx、zは、それぞれOCT画像面内の横方向、深さ方向(光の伝播方向)を示す。また、式(1)中のyは、OCT画像面(x-z平面)に対する面外の横方向を示す。また、式(1)中のwlは、光強度がピーク値の1/e²倍となる点で定義されたプローブ光のスポット半径を示す。また、式(1)中のwzは、点像分布強度がピーク値の1/e²倍になる幅の半分を示す。また、式(1)中のz0は、OCT1の干渉計の深さ方向の初期位置が“0”であることを示す。また、式(1)中のkcは、光源10の中心波数を示す。 In Equation (1), x and z indicate the lateral direction and the depth direction (light propagation direction) in the OCT image plane, respectively. Further, y in Equation (1) indicates the out-of-plane lateral direction with respect to the OCT image plane (x-z plane). Further, wl in Equation (1) indicates the spot radius of the probe light, defined at the point where the light intensity falls to 1/e² of the peak value. Further, wz in Equation (1) indicates half of the width at which the point spread intensity falls to 1/e² of the peak value. Further, z0 in Equation (1) indicates that the initial position in the depth direction of the interferometer of the OCT 1 is "0". Further, kc in Equation (1) indicates the center wavenumber of the light source 10.
[Math. 1: Equation (1)]
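Equation (1) itself is not reproduced above. A Gaussian form consistent with the parameter definitions just given (spot radius wl, axial half-width wz, initial depth position z0, center wavenumber kc) would be, up to normalization and exact numerical factors, of the following kind; this expression is an assumption given only for illustration and is not the equation of the disclosure:

h(\vec{r}) \propto \exp\!\left(-\frac{x^{2}+y^{2}}{w_{l}^{2}}\right)\,\exp\!\left(-\frac{(z-z_{0})^{2}}{w_{z}^{2}}\right)\,\exp\!\left(i\,2k_{c}\,(z-z_{0})\right)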
 次に、対象物体OBの任意の位置→rの局所変位Δ→r(→r)を数式(2)に基づいて導出する。式(2)中の“→r=(x、y、z)”は、デカルト座標における位置ベクトルを示し、Δx(→r)、Δy(→r)、Δz(→r)は、それぞれx、y、z方向の変位を示す。 Next, the local displacement Δ→r(→r) at an arbitrary position →r of the target object OB is derived based on Equation (2). In Equation (2), "→r = (x, y, z)" denotes a position vector in Cartesian coordinates, and Δx(→r), Δy(→r), and Δz(→r) denote the displacements in the x, y, and z directions, respectively.
[Math. 2: Equation (2)]
 また、OCT画像s(→r)は、数式(3)によって表される。OCT画像s(→r)は、対象物体OBの標本(見本)となる構造を示すη(→r)と、点像分布関数h(→r)との重畳積分によって導出される。 In addition, the OCT image s (→ r) is expressed by Equation (3). The OCT image s (→ r) is derived by superimposing the η (→ r) indicating the structure (sample) of the target object OB and the point spread function h (→ r).
[Math. 3: Equation (3)]
 ここで、対象物体OBの微小な局所領域が動かないものと仮定すると、対象物体OBの構造が変化した際のOCT画像s(→r+Δ→r)は、数式(4)によって表される。OCT画像s(→r+Δ→r)は、η(→r)と、対象物体OBの構造の変位Δ→rを加味した点像分布関数h(→r+Δ→r)との重畳積分によって導出される。なお、数式(4)およびこれ以降の数式において、局所変位Δ→r(→r)を、Δ→rのように簡略化して記載する。 Here, assuming that the minute local region of the target object OB does not move, the OCT image s (→ r + Δ → r) when the structure of the target object OB changes is expressed by Expression (4). The OCT image s (→ r + Δ → r) is derived by superposition of η (→ r) and a point spread function h (→ r + Δ → r) that takes into account the displacement Δ → r of the structure of the target object OB. . It should be noted that the local displacement Δ → r (→ r) is simplified and expressed as Δ → r in the formula (4) and the following formulas.
[Math. 4: Equation (4)]
 数式(3)、(4)により、数式(5)が導出される。数式(5)は、対象物体OBの変形前後のOCT画像の重畳積分を表す。 Formula (5) is derived from Formulas (3) and (4). Formula (5) represents the superposition integral of the OCT image before and after the deformation of the target object OB.
[Math. 5: Equation (5)]
 また、変形前の点像分布関数h(→r)と、変形後の点像分布関数h(→r+Δ→r)との相関係数ρを、対象物体OBの変形前後のOCT画像における複素相関係数(amplitude of the complex correlation coefficient;ACCC)として導出する。以下に、点像分布関数h(→r)と、点像分布関数h(→r+Δ→r)との相関係数ρを導出する数式(6)を示す。 The correlation coefficient ρ between the point spread function h(→r) before deformation and the point spread function h(→r+Δ→r) after deformation is derived as the amplitude of the complex correlation coefficient (ACCC) of the OCT images before and after the deformation of the target object OB. Equation (6) for deriving the correlation coefficient ρ between the point spread function h(→r) and the point spread function h(→r+Δ→r) is shown below.
[Math. 6: Equation (6)]
 数式(6)に基づいて相関係数ρを導出する際、対象物体OBの変形前後の点像分布関数h(→r)、h(→r+Δ→r)のみでは、数式(6)中の5つの未知の変数(数式項)Δx、Δy、Δz、wl、wzを同定することが困難な場合がある。この結果、最終的に導出したい局所変位Δ→r(→r)を導出できない場合があった。 When the correlation coefficient ρ is derived based on Equation (6), it may be difficult to identify the five unknown variables (expression terms) Δx, Δy, Δz, wl, and wz in Equation (6) from the point spread functions h(→r) and h(→r+Δ→r) before and after the deformation of the target object OB alone. As a result, the local displacement Δ→r(→r) that is ultimately to be derived may not be obtainable.
 そのため、本実施形態では、参照画像上の局所領域Aを、上下左右方向に所定の画素分平行移動(シフト)させる処理を行い、5つの連立方程式を導出し、これら5つの連立方程式を解くことにより、未知の変数Δx、Δy、Δz、wl、wzを導出する。 Therefore, in the present embodiment, the local region A on the reference image is translated (shifted) by a predetermined number of pixels in the up, down, left, and right directions, five simultaneous equations are derived, and the unknown variables Δx, Δy, Δz, wl, and wz are obtained by solving these five simultaneous equations.
 以下に示す5つの連立方程式は、ターゲット画像の局所領域Aおよび参照画像の局所領域Aの相関係数と、ターゲット画像の局所領域Aおよびシフト画像の局所領域A1~4のそれぞれの相関係数を表す。 The following five simultaneous equations represent the correlation coefficient between the local region A of the target image and the local region A of the reference image, and the correlation coefficients between the local region A of the target image and the local regions A1 to A4 of the shift images.
[Math. 7: Equations (7) to (11)]
 数式(7)は、例えば、ターゲット画像の局所領域Aと、参照画像の局所領域Aとの相関係数μ0(x、z)を表す。また、数式(8)は、ターゲット画像の局所領域Aとシフト画像の局所領域A1(画素δx分左にシフト)との相関係数μ1(x、z)を表す。また、数式(9)は、ターゲット画像の局所領域Aとシフト画像の局所領域A2(画素δx分右にシフト)との相関係数μ2(x、z)を表す。また、数式(10)は、ターゲット画像の局所領域Aとシフト画像の局所領域A3(画素δz分上にシフト)との相関係数μ3(x、z)を表す。また、数式(11)は、ターゲット画像の局所領域Aとシフト画像の局所領域A4(画素δz分下にシフト)との相関係数μ4(x、z)を表す。なお、数式(8)~(11)において、Δx、Δzは、それぞれΔx(x、z)、Δz(x、z)を簡略化して記載している。 Equation (7) represents, for example, a correlation coefficient μ0 (x, z) between the local area A of the target image and the local area A of the reference image. Equation (8) represents a correlation coefficient μ1 (x, z) between the local area A of the target image and the local area A1 of the shift image (shifted to the left by the pixel δx). Equation (9) represents a correlation coefficient μ2 (x, z) between the local area A of the target image and the local area A2 of the shift image (shifted to the right by the pixel δx). Equation (10) represents a correlation coefficient μ3 (x, z) between the local area A of the target image and the local area A3 of the shift image (shifted upward by the pixel δz). Equation (11) represents a correlation coefficient μ4 (x, z) between the local area A of the target image and the local area A4 of the shift image (shifted down by the pixel δz). In Equations (8) to (11), Δx and Δz are described by simplifying Δx (x, z) and Δz (x, z), respectively.
 次に、数式(6)と、5つの連立方程式(7)~(11)とに基づいて、数式(12)~(16)を導出する。 Next, formulas (12) to (16) are derived based on formula (6) and five simultaneous equations (7) to (11).
[Math. 8: Equations (12) to (16)]
 次に、導出した5つの数式(12)~(16)に対して、それぞれ自然対数ln表記の数式(17)~(21)を導出する。 Next, formulas (17) to (21) expressed in natural logarithm ln are derived from the derived five formulas (12) to (16), respectively.
[Math. 9: Equations (17) to (21)]
 これによって、例えば、面外の横方向を示すyを“0”とした場合、以下の数式(22)~(26)が得られる。すなわち、横方向の変位Δx(x、z)と、深さ方向の変位Δz(x、z)と、面外の横方向の変位Δy(x、z)の絶対値と、OCT1の分解能であるwl(x、z)およびwz(x、z)とを、一度に導出することができる。 Thus, for example, when y indicating the out-of-plane lateral direction is set to "0", the following Equations (22) to (26) are obtained. That is, the lateral displacement Δx(x, z), the depth-direction displacement Δz(x, z), the absolute value of the out-of-plane lateral displacement Δy(x, z), and the resolutions of the OCT 1, wl(x, z) and wz(x, z), can be derived at once.
[Math. 10: Equations (22) to (26)]
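The closed-form expressions (22) to (26) are likewise not reproduced above. As an illustration only, the five unknowns can also be recovered numerically from the five measured correlation amplitudes by fitting a Gaussian decorrelation model of the kind sketched for Equation (1); the model, its parameters, and the use of least squares below are assumptions and not the closed-form solution of the embodiment.

import numpy as np
from scipy.optimize import least_squares

def solve_displacement(mu, dx=1.0, dz=1.0):
    # mu: [mu0, mu1(left), mu2(right), mu3(up), mu4(down)] for region A and A1-A4.
    # dx, dz: shift amounts, in the same units as the displacements to be found.
    # Returns (delta_x, |delta_y|, delta_z, wl, wz) under an assumed Gaussian model.
    shifts = [(0.0, 0.0), (-dx, 0.0), (dx, 0.0), (0.0, -dz), (0.0, dz)]

    def model(p):
        ddx, dy_abs, ddz, wl, wz = p
        return [np.exp(-((ddx + sx) ** 2 + dy_abs ** 2) / (2 * wl ** 2)
                       - (ddz + sz) ** 2 / (2 * wz ** 2))
                for sx, sz in shifts]

    def residuals(p):
        return np.asarray(model(p)) - np.asarray(mu)

    p0 = [0.0, 0.1, 0.0, 5.0, 5.0]                  # rough illustrative initial guess
    fit = least_squares(residuals, p0,
                        bounds=([-np.inf, 0.0, -np.inf, 1e-6, 1e-6], np.inf))
    return fit.x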
 また、上述した数式(22)~(26)を数式(6)に代入して相関係数ρを導出する際、ノイズの影響を考慮してもよい。以下に、ノイズの影響を加味した相関係数ρsの導出式(27)を示す。 Also, the influence of noise may be taken into account when substituting Equations (22) to (26) described above into Equation (6) to derive the correlation coefficient ρ. The derivation formula (27) for the correlation coefficient ρs in consideration of the influence of noise is shown below.
[Math. 11: Equation (27)]
 数式(27)は、相関係数の数式項にノイズ低減の数式項を乗算した数式である。数式(27)中の相関係数の数式項において、*gR(xi、zi)は、参照画像上の座標(xi、zi)の輝度の複素共役を表す。また、gT(xi、zi)は、ターゲット画像上の座標(xi、zi)の輝度を表す。また、Wは、OCT画像上の局所領域Aの面積を表す。 Formula (27) is a formula obtained by multiplying the formula term of the correlation coefficient by the formula term for noise reduction. In the mathematical expression of the correlation coefficient in the mathematical expression (27), * gR (xi, zi) represents the complex conjugate of the luminance of the coordinates (xi, zi) on the reference image. Further, gT (xi, zi) represents the luminance of the coordinates (xi, zi) on the target image. W represents the area of the local region A on the OCT image.
 また、数式(27)中のノイズ低減の数式項において、SNRR-1とSNRT-1とは、それぞれ参照画像上の信号雑音比(Signal-Noise Ratio;SNR)とターゲット画像上の信号雑音比SNRとを表す。これらSNRR-1とSNRT-1とは、以下の数式(28)によって定義される。 In the noise-reduction term of Equation (27), SNRR-1 and SNRT-1 represent the signal-to-noise ratio (Signal-Noise Ratio; SNR) on the reference image and the signal-to-noise ratio on the target image, respectively. These SNRR-1 and SNRT-1 are defined by the following Equation (28).
[Math. 12: Equation (28)]
 数式(28)中に示すSNRn(x、z)は、SNRR-1およびSNRT-1を一般化させた表記である。同様にgn(xi、zi)は、*gR(xi、zi)およびgT(xi、zi)を一般化させた表記である。また、N(xi、zi)は、反射光の強度分布に現れる直流成分のノイズ(ノイズフロア)を表す。 SNRn (x, z) shown in Equation (28) is a generalized representation of SNRR-1 and SNRT-1. Similarly, gn (xi, zi) is a notation obtained by generalizing * gR (xi, zi) and gT (xi, zi). N (xi, zi) represents the noise (noise floor) of the DC component that appears in the intensity distribution of the reflected light.
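A sketch of how such a noise-aware correlation index might be computed from the quantities defined for Equations (27) and (28); the correction factor used here is a commonly used SNR-based compensation and is an assumption, since Equation (27) itself is not reproduced above.

import numpy as np

def snr_inverse(g, noise_floor):
    # SNR^-1 over a local region, in the spirit of Equation (28):
    # ratio of the noise-floor power to the mean signal power.
    return np.mean(noise_floor) / np.mean(np.abs(g) ** 2)

def noise_corrected_correlation(g_ref, g_tgt, noise_ref, noise_tgt):
    # Raw complex correlation over region A, rescaled by an illustrative
    # SNR-dependent factor to counter the noise-induced underestimation.
    w = g_ref.size                                        # area W of region A
    num = np.abs(np.sum(np.conj(g_ref) * g_tgt)) / w      # (1/W) * |sum gR* gT|
    den = np.sqrt(np.mean(np.abs(g_ref) ** 2) * np.mean(np.abs(g_tgt) ** 2))
    rho_raw = num / den
    correction = np.sqrt((1.0 + snr_inverse(g_ref, noise_ref)) *
                         (1.0 + snr_inverse(g_tgt, noise_tgt)))
    return min(rho_raw * correction, 1.0)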
(実験1)
 本出願人は、上述した対象物体OBの変位の算出方法に基づいて種々の実験を行った。
以下、これら実験結果について、図を参照して説明する。
(Experiment 1)
The present applicant conducted various experiments based on the above-described method for calculating the displacement of the target object OB.
Hereinafter, these experimental results will be described with reference to the drawings.
 図3は、参照画像上に設定される局所領域Aの一例を示す図である。
 図示の例において、シフト画像生成部150は、例えば、参照画像上に、14×14ピクセルの局所領域Aを設定する。次に、シフト画像生成部150は、14×14ピクセルの局所領域Aを、上下左右方向にそれぞれ所定の画素分平行移動(シフト)させて、局所領域A1~A4を再設定する。なお、シフト画像生成部150は、上述した局所領域Aの設定処理を、複数回繰り返してもよい。シフト画像生成部150は、A複数回局所領域Aの設定処理を繰り返す場合、これら局所領域Aの平均化した領域をOCT画像上に設定すると好適である。
FIG. 3 is a diagram illustrating an example of the local region A set on the reference image.
In the illustrated example, the shift image generation unit 150 sets, for example, a local region A of 14 × 14 pixels on the reference image. Next, the shift image generation unit 150 translates (shifts) the local area A of 14 × 14 pixels by predetermined pixels in the vertical and horizontal directions, and resets the local areas A1 to A4. Note that the shift image generation unit 150 may repeat the setting process for the local region A described above a plurality of times. When the shift image generation unit 150 repeats the setting process of the local area A a plurality of times, it is preferable to set an averaged area of the local areas A on the OCT image.
 相関算出部160は、シフト画像生成部150が局所領域Aの設定処理を複数回(例えばn回)行った場合、各参照画像上の局所領域Aと、各参照画像上の局所領域Aにそれぞれ対応したターゲット画像の局所領域Aおよびシフト画像の局所領域A1~4とに基づいて、相関を示す指標を複数回算出してもよい。移動状態算出部170は、相関算出部160によって複数回算出された相関を示す指標の平均値に基づいて、対象物体OBの反射面までの距離(変位)を算出する。これによって、画像処理装置100は、ノイズの影響を抑制しつつ、検出精度を向上させることができる。 When the shift image generation unit 150 performs the setting process for the local region A a plurality of times (for example, n times), the correlation calculation unit 160 may calculate the indices indicating the correlation a plurality of times, based on the local region A on each reference image and on the corresponding local region A of the target image and local regions A1 to A4 of the shift images. The movement state calculation unit 170 then calculates the distance (displacement) to the reflection surface of the target object OB based on the average of the correlation indices calculated a plurality of times by the correlation calculation unit 160. Thereby, the image processing apparatus 100 can improve the detection accuracy while suppressing the influence of noise.
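For example, the repeated estimation and averaging could be sketched as follows; correlate and solve stand for routines such as those sketched earlier and are passed in here as placeholders so that the sketch is self-contained.

import numpy as np

def averaged_displacement(ref_images, tgt_images, x0, z0, correlate, solve, n):
    # Average the five correlation indices over n repeated acquisitions
    # before solving for the displacement, to suppress the influence of noise.
    # correlate(ref, tgt, x0, z0) -> five indices; solve(indices) -> displacement.
    mus = [correlate(ref_images[i], tgt_images[i], x0, z0) for i in range(n)]
    mu_mean = np.mean(mus, axis=0)        # average each of the five indices
    return solve(mu_mean)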
 図4は、対象物体OBの変位の算出結果である理論値と、三軸ステージによって実際に対象物体OBを移動させた際の移動量の実測値との関係の一例を示す図である。三軸ステージは、例えば、空間における上下左右方向(x、y、z方向)に平行移動が可能な、圧電効果等を利用したアクチュエータである。対象物体OBは、例えば、三軸ステージ上に載置および固定される。すなわち、対象物体OBの移動量は、三軸ステージの平行移動量と一致する。 FIG. 4 is a diagram illustrating an example of a relationship between a theoretical value that is a calculation result of the displacement of the target object OB and an actual measurement value of the movement amount when the target object OB is actually moved by the three-axis stage. The triaxial stage is, for example, an actuator using a piezoelectric effect or the like that can be translated in the vertical and horizontal directions (x, y, z directions) in space. For example, the target object OB is placed and fixed on a three-axis stage. That is, the movement amount of the target object OB matches the parallel movement amount of the three-axis stage.
 図中に示す縦軸は、対象物体OBの変位(例えば単位は[μm])であり、横軸は、対象物体OBのxまたはy方向の移動量(例えば単位は[μm])である。また、図中のプロットPx、Pzは、それぞれx方向の対象物体OBの変位、z方向の対象物体OBの変位を表す。これらプロットPx、Pzにおいて、対象物体OBの変位と対象物体OBの移動量とがほぼ一対一の関係を示す。すなわち、本実施形態における対象物体OBのx、z方向の変位の算出結果が、実測値(真値)とほぼ一致することを示し、上述した対象物体OBの変位の算出方法の有用性が示される。 The vertical axis in the figure is the displacement of the target object OB (for example, in [μm]), and the horizontal axis is the amount of movement of the target object OB in the x or y direction (for example, in [μm]). The plots Px and Pz in the figure represent the displacement of the target object OB in the x direction and in the z direction, respectively. In these plots Px and Pz, the displacement of the target object OB and the amount of movement of the target object OB show a substantially one-to-one relationship. That is, the calculated displacements of the target object OB in the x and z directions in the present embodiment substantially agree with the actually measured values (true values), which demonstrates the usefulness of the above-described method for calculating the displacement of the target object OB.
 図5は、対象物体OBの各移動量に対して、対象物体OBにおけるx方向およびz方向の変位の標準偏差の一例を示した図である。図中に示す縦軸は、対象物体OBにおけるx方向およびz方向の変位の標準偏差であり、横軸は、対象物体OBのx方向またはz方向の移動量(例えば単位は[μm])である。図中のプロットPx、Pzは、図4と同様であるため説明を省略する。x方向の標準偏差は、z方向の標準偏差に対して大きいものの、平均0.3程度である。また、z方向の標準偏差は、平均0.1程度である。これらの結果から、本実施形態におけるOCT1は、x方向よりもz方向(深さ方向)に対して、精度の良い変位測定を行うことができることが示唆される。 FIG. 5 is a diagram illustrating an example of the standard deviations of the displacements of the target object OB in the x and z directions for each amount of movement of the target object OB. The vertical axis in the figure is the standard deviation of the displacement of the target object OB in the x and z directions, and the horizontal axis is the amount of movement of the target object OB in the x or z direction (for example, in [μm]). The plots Px and Pz in the figure are the same as in FIG. 4, and their description is omitted. Although the standard deviation in the x direction is larger than that in the z direction, it is about 0.3 on average, and the standard deviation in the z direction is about 0.1 on average. These results suggest that the OCT 1 in the present embodiment can perform displacement measurement with better accuracy in the z direction (depth direction) than in the x direction.
 図6は、対象物体OBの変位の算出結果である理論値と、三軸ステージによって実際に対象物体OBを移動させた際の移動量の実測値との関係の他の例を示す図である。 FIG. 6 is a diagram illustrating another example of the relationship between the theoretical value, which is the calculation result of the displacement of the target object OB, and the actually measured value of the movement amount when the target object OB is actually moved by the three-axis stage. .
 図中に示す縦軸は、対象物体OBの変位(例えば単位は[μm])であり、横軸は、対象物体OBのy方向の移動量(例えば単位は[μm])である。また、図中のプロットPyは、y方向の対象物体OBの変位を表す。プロットPyにおいて、対象物体OBの変位と対象物体OBの移動量とは、ほぼ一対一の関係を示す。すなわち、本実施形態における対象物体OBのy方向の変位の算出結果が、実測値(真値)とほぼ一致することを示し、上述した対象物体OBの変位の算出方法の有用性が示される。 The vertical axis shown in the figure is the displacement of the target object OB (for example, the unit is [μm]), and the horizontal axis is the amount of movement of the target object OB in the y direction (for example, the unit is [μm]). Also, the plot Py in the figure represents the displacement of the target object OB in the y direction. In the plot Py, the displacement of the target object OB and the movement amount of the target object OB show a substantially one-to-one relationship. That is, the calculation result of the displacement of the target object OB in the y direction in the present embodiment substantially matches the actual measurement value (true value), and the usefulness of the above-described method of calculating the displacement of the target object OB is indicated.
 図7は、対象物体OBの各移動量に対して、対象物体OBにおけるy方向の変位の標準偏差の一例を示した図である。図中に示す縦軸は、対象物体OBにおけるy方向の変位の標準偏差であり、横軸は、対象物体OBのy方向の移動量(例えば単位は[μm])である。y方向の標準偏差は、平均0.1程度である。この結果から、本実施形態におけるOCT1は、z方向(深さ方向)と同様にy方向(面外の横方向)に対して、精度の良い変位測定を行うことができることが示唆される。 FIG. 7 is a diagram illustrating an example of the standard deviation of the displacement in the y direction of the target object OB with respect to each movement amount of the target object OB. The vertical axis shown in the figure is the standard deviation of the displacement in the y direction of the target object OB, and the horizontal axis is the amount of movement of the target object OB in the y direction (for example, the unit is [μm]). The standard deviation in the y direction is about 0.1 on average. From this result, it is suggested that OCT1 in the present embodiment can perform accurate displacement measurement in the y direction (lateral direction out of the plane) as well as in the z direction (depth direction).
 以下、本実施形態における画像処理装置100の一連の処理の流れを説明する。図8は、第1実施形態における画像処理装置100の処理の流れの一例を示すフローチャートである。画像処理装置100は、例えば、所定の周期で本フローチャートの処理を繰り返し行う。 Hereinafter, a series of processing flow of the image processing apparatus 100 according to the present embodiment will be described. FIG. 8 is a flowchart illustrating an example of a processing flow of the image processing apparatus 100 according to the first embodiment. For example, the image processing apparatus 100 repeatedly performs the processing of this flowchart at a predetermined cycle.
 まず、光学系制御部120は、ガルバノミラー60a、60bを駆動させ、対象物体OBの計測点を一次元走査するように干渉計を制御する(ステップS100)。次に、検出信号取得部130は、分光器70から検出信号を取得する(ステップS102)。次に、OCT画像生成部140は、検出信号取得部130によって取得された検出信号に対して適宜信号処理を行い、検出信号に基づいたOCT画像を生成する(ステップS104)。 First, the optical system control unit 120 drives the galvanometer mirrors 60a and 60b to control the interferometer so as to perform one-dimensional scanning of the measurement point of the target object OB (step S100). Next, the detection signal acquisition unit 130 acquires a detection signal from the spectrometer 70 (step S102). Next, the OCT image generation unit 140 appropriately performs signal processing on the detection signal acquired by the detection signal acquisition unit 130, and generates an OCT image based on the detection signal (step S104).
 次に、画像処理装置100は、上述したステップS100~ステップS104までの一連の処理のループ回数が所定回数(例えばn=1、2、…)を超えたか否か判定する(ステップS106)。画像処理装置100は、ループ回数が所定回数を超えていない場合(ステップS106;No)、ステップS100の処理に戻る。シフト画像生成部150は、ループ回数が所定回数を超えた場合(ステップS106;Yes)、OCT画像生成部140によって生成された複数のOCT画像のうち、任意の2つのOCT画像に対して、局所領域Aを設定する(ステップS108)。 Next, the image processing apparatus 100 determines whether the number of loops of the series of processes from step S100 to step S104 described above has exceeded a predetermined number (for example, n = 1, 2, ...) (step S106). When the number of loops has not exceeded the predetermined number (step S106; No), the image processing apparatus 100 returns to the process of step S100. When the number of loops has exceeded the predetermined number (step S106; Yes), the shift image generation unit 150 sets a local region A for any two OCT images among the plurality of OCT images generated by the OCT image generation unit 140 (step S108).
 次に、シフト画像生成部150は、局所領域Aを設定したOCT画像のうち、より時系列の古い方のOCT画像である参照画像上の局所領域Aを、面方向に所定の画素分平行移動(シフト)させる(ステップS110)。次に、シフト画像生成部150は、所定の画素分平行移動させた局所領域Aを、新たな局所領域としたシフト画像を生成する(ステップS112)。 Next, the shift image generation unit 150 translates (shifts), by a predetermined number of pixels in the plane direction, the local region A on the reference image, which is the older of the OCT images in which the local region A is set (step S110). Next, the shift image generation unit 150 generates a shift image in which the local region A translated by the predetermined number of pixels is set as a new local region (step S112).
 次に、相関算出部160は、より時系列の新しい方のOCT画像であるターゲット画像の局所領域A、および参照画像の局所領域Aの相関を示す指標(例えば相関係数)と、ターゲット画像の局所領域A、およびシフト画像の局所領域A1~4のそれぞれの相関を示す指標(例えば相関係数)とを算出する(ステップS114)。次に、移動状態算出部170は、相関算出部160によって算出された相関を示す指標に基づいて、対象物体OBの反射面までの距離(変位)を算出する(ステップS116)。これによって、画像処理装置100は、本フローチャートの処理を終了する。 Next, the correlation calculation unit 160 calculates an index (for example, a correlation coefficient) indicating the correlation between the local region A of the target image, which is the newer OCT image in the time series, and the local region A of the reference image, and indices (for example, correlation coefficients) indicating the correlations between the local region A of the target image and the local regions A1 to A4 of the shift images (step S114). Next, the movement state calculation unit 170 calculates the distance (displacement) to the reflection surface of the target object OB based on the correlation indices calculated by the correlation calculation unit 160 (step S116). The image processing apparatus 100 then ends the processing of this flowchart.
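As a rough illustration, steps S100 to S116 can be tied together as follows; the interferometer and spectrometer objects and the make_image, correlate, and solve callables are hypothetical placeholders standing in for the components and routines described above.

def process_once(interferometer, spectrometer, n_scans, x0, z0,
                 make_image, correlate, solve):
    # Illustrative end-to-end pass corresponding to steps S100-S116.
    images = []
    for _ in range(n_scans):                      # S100-S106: repeat scans
        interferometer.scan_once()                # S100: drive the galvanometer mirrors
        signal = spectrometer.read()              # S102: acquire the detection signal
        images.append(make_image(signal))         # S104: build an OCT image
    reference, target = images[0], images[-1]     # S108: older = reference, newer = target
    mu = correlate(reference, target, x0, z0)     # S110-S114: shift and correlate
    return solve(mu)                              # S116: displacement of the object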
 以上、第1実施形態の画像処理装置100によれば、対象物体OBのOCT画像を時系列に複数生成し、参照画像に設定された局所領域Aを面方向に平行移動させて複数のシフト画像を生成し、ターゲット画像と、参照画像およびシフト画像の局所領域との相関を示す指標を算出し、算出した指標に基づいて、対象物体OBの変位を算出することにより、検出精度を向上させることができる。 As described above, according to the image processing apparatus 100 of the first embodiment, a plurality of OCT images of the target object OB are generated in time series, a plurality of shift images are generated by translating the local region A set on the reference image in the plane direction, indices indicating the correlations between the target image and the local regions of the reference image and the shift images are calculated, and the displacement of the target object OB is calculated based on the calculated indices, whereby the detection accuracy can be improved.
 (第2実施形態)
 以下、第2実施形態における画像処理装置100を含むレーザ照射システム2について説明する。第2実施形態のレーザ照射システム2では、第1実施形態におけるOCT1に加え、さらにレーザ照射光学系200が備えられている点で、第1の実施形態と相違する。従って、係る相違点を中心に説明し、共通する部分についての説明は省略する。
(Second Embodiment)
Hereinafter, a laser irradiation system 2 including the image processing apparatus 100 according to the second embodiment will be described. The laser irradiation system 2 of the second embodiment differs from the first embodiment in that a laser irradiation optical system 200 is provided in addition to the OCT 1 of the first embodiment. The description therefore focuses on this difference, and the description of the common parts is omitted.
 図9は、第2実施形態における画像処理装置100を含むレーザ照射システム2の一例を示す構成図である。本実施形態におけるレーザ照射システム2は、例えば、生体の眼球Eye内の対象物体OB(例えば眼底に生じる網膜剥離の浮腫)を瘢痕治療するシステムである。レーザ照射システム2は、第1実施形態におけるOCT1に加え、さらにレーザ照射光学系200を備える。これに対して、画像処理装置100の光学系制御部120は、さらにレーザ照射光学系200を制御する機能を有する。なお、レーザ照射光学系200は、「照射部」の一例である。 FIG. 9 is a configuration diagram illustrating an example of the laser irradiation system 2 including the image processing apparatus 100 according to the second embodiment. The laser irradiation system 2 in the present embodiment is, for example, a system that treats, by scarring (coagulation), a target object OB in a living eyeball Eye (for example, edema accompanying retinal detachment occurring in the fundus). The laser irradiation system 2 includes a laser irradiation optical system 200 in addition to the OCT 1 of the first embodiment. Correspondingly, the optical system control unit 120 of the image processing apparatus 100 further has a function of controlling the laser irradiation optical system 200. The laser irradiation optical system 200 is an example of an "irradiation unit".
 レーザ照射光学系200は、例えば、低侵襲にて対象物体OBを凝固可能なマイクロパルスレーザ装置等である。また、レーザ照射光学系200は、照射するレーザ光を対象物体OBに対して走査させるための光スキャナを備えていてもよい。また、レーザ照射光学系200は、例えば、ダイクロイックミラーやハーフミラー等の光路結合部材を備えていてもよい。光路結合部材は、レーザ照射光学系200の光軸とOCT1の光軸とを同軸に配置する。 The laser irradiation optical system 200 is, for example, a micro pulse laser device that can coagulate the target object OB with minimal invasiveness. Further, the laser irradiation optical system 200 may include an optical scanner for scanning the target object OB with the laser beam to be irradiated. The laser irradiation optical system 200 may include an optical path coupling member such as a dichroic mirror or a half mirror. The optical path coupling member arranges the optical axis of the laser irradiation optical system 200 and the optical axis of the OCT 1 coaxially.
 光学系制御部120は、例えば、移動状態算出部170によって算出された対象物体OBの変位に基づいて、レーザ照射光学系200から照射されるレーザ光のビーム幅や強度等を制御する。なお、光学系制御部120は、「照射制御部」の一例である。 The optical system control unit 120 controls, for example, the beam width and intensity of the laser light emitted from the laser irradiation optical system 200 based on the displacement of the target object OB calculated by the movement state calculation unit 170. The optical system control unit 120 is an example of an “irradiation control unit”.
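One conceivable control rule, given only as an illustration; the laser interface (set_power, power, nominal_power) and the threshold-based rule are assumptions introduced here and are not the control law of the embodiment.

def update_laser(laser, displacement_magnitude, limit):
    # Illustrative feedback rule: back off the output while the measured
    # tissue displacement exceeds a limit, restore it otherwise.
    if displacement_magnitude > limit:
        laser.set_power(laser.power * 0.5)      # strong response: reduce output
    else:
        laser.set_power(laser.nominal_power)    # otherwise keep the nominal output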
(実験2)
 本出願人は、上述した対象物体OBにレーザ光を照射した際、レーザ照射による熱膨張や熱収縮等によって対象物体OBの変位が変化する様子を実験で観察した。以下、これら実験結果について、図を参照して説明する。なお、画像処理装置100は、後述する実験結果を示す画像を表示装置等に表示するものとする。
(Experiment 2)
The present applicant observed through experiments that the displacement of the target object OB changes due to thermal expansion and contraction caused by laser irradiation when the target object OB is irradiated with laser light. Hereinafter, these experimental results will be described with reference to the drawings. Note that the image processing apparatus 100 displays an image indicating an experimental result described later on a display device or the like.
 図10は、対象物体OBにレーザ光を照射しながら、対象物体OBの変位を測定した結果の一例を示す図である。 FIG. 10 is a diagram illustrating an example of a result of measuring the displacement of the target object OB while irradiating the target object OB with laser light.
 図中の右端(4列目)に示すマップは、OCT画像に、変位ベクトルを重ねて表記した図である。ベクトルの矢印の大きさは、変位の大きさを示し、ベクトルの矢印の向きは、変位の方向を示す。なお、変位の方向は、2次元のカラーホイールとして表現されてもよい。例えば、予め変位の方向とカラーホイールの上下左右の方向とを対応させておく。これによって、図中に示すプロットのカラーによって、変位の方向を表現することができる。ユーザは、例えば、OCT画像上に赤色のプロットが示された領域が存在する場合、この領域の変位方向を、プロットの色と対応した変位の方向(例えばマイナスのx方向)として認識することができる。 The map shown at the right end (fourth column) of the figure is a diagram in which displacement vectors are superimposed on the OCT image. The length of a vector arrow indicates the magnitude of the displacement, and the direction of the arrow indicates the direction of the displacement. The direction of displacement may also be expressed with a two-dimensional color wheel. For example, the directions of displacement are associated in advance with the up, down, left, and right directions of the color wheel. The direction of displacement can then be expressed by the color of each plot shown in the figure. For example, when a region shown with a red plot exists on the OCT image, the user can recognize the displacement direction of this region as the displacement direction corresponding to the color of the plot (for example, the negative x direction).
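One way to render such a vector map and color-wheel representation, assuming per-pixel displacement components are already available; the matplotlib calls are standard, while the HSV mapping of direction to hue is an illustrative choice rather than the specific rendering of the embodiment.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

def show_displacement_map(oct_image, disp_x, disp_z, step=8):
    # Overlay displacement vectors on the OCT image, then encode the
    # displacement direction as hue on a two-dimensional color wheel.
    z, x = np.mgrid[0:oct_image.shape[0]:step, 0:oct_image.shape[1]:step]
    plt.imshow(oct_image, cmap="gray")
    plt.quiver(x, z, disp_x[::step, ::step], -disp_z[::step, ::step],
               color="red", angles="xy")            # arrow length ~ magnitude

    angle = np.arctan2(disp_z, disp_x)               # direction of displacement
    mag = np.hypot(disp_x, disp_z)
    hsv = np.stack([(angle + np.pi) / (2 * np.pi),   # hue encodes direction
                    np.ones_like(mag),
                    mag / (mag.max() + 1e-12)], axis=-1)
    plt.figure()
    plt.imshow(hsv_to_rgb(hsv))                      # color-wheel style map
    plt.show()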
 図示の例において、12~24ms間にレーザ照射を行ったことにより、眼底内側の網膜が径方向に拡張していることがわかる。レーザ照射終了後(24ms以降)、レーザ照射によって変位した網膜組織は、180ms経過するまでの期間に、元の変位(位置)に戻ることが確認された。 In the example shown in the figure, it can be seen that the retina on the inner side of the fundus is expanded in the radial direction by performing laser irradiation for 12 to 24 ms. After the end of laser irradiation (after 24 ms), it was confirmed that the retinal tissue displaced by laser irradiation returns to the original displacement (position) in a period until 180 ms elapses.
 以上、第2実施形態における画像処理装置100を含むレーザ照射システム2によれば、対象物体OBにレーザ光を照射しながら、対象物体OBの変位を定量的に評価することができる。 As described above, according to the laser irradiation system 2 including the image processing apparatus 100 in the second embodiment, it is possible to quantitatively evaluate the displacement of the target object OB while irradiating the target object OB with laser light.
 (第3実施形態)
 以下、第3実施形態における画像処理装置100-Aを含むOCT1-Aについて説明する。第3実施形態の画像処理装置100-Aでは、画像処理において、他の装置または機器に一部または全部の処理を分担させる点で、上述した実施形態と相違する。従って、係る相違点を中心に説明し、共通する部分についての説明は省略する。
(Third embodiment)
Hereinafter, an OCT 1-A including an image processing apparatus 100-A according to the third embodiment will be described. The image processing apparatus 100-A of the third embodiment differs from the above-described embodiments in that part or all of the image processing is shared with another apparatus or device. The description therefore focuses on this difference, and the description of the common parts is omitted.
 図11は、第3実施形態の画像処理装置100-Aを含むOCT1-Aと、他の装置との接続関係の一例を示す図である。
 画像処理装置100-Aは、例えば、LANやWAN等のネットワークNWを介して、他の装置と接続される。他の装置とは、例えば、OCT1-B内の画像処理装置100-Bや、超音波画像診断装置3内の画像処理装置300、CT(Computed Tomography)装置4内の画像処理装置400、MRI(Magnetic Resonance Imaging)装置5内の画像処理装置500、単独の画像処理装置600等である。
FIG. 11 is a diagram illustrating an example of a connection relationship between the OCT 1-A including the image processing apparatus 100-A according to the third embodiment and other apparatuses.
The image processing apparatus 100-A is connected to other apparatuses via a network NW such as a LAN or a WAN, for example. Examples of the other apparatuses include an image processing apparatus 100-B in an OCT 1-B, an image processing apparatus 300 in an ultrasonic diagnostic imaging apparatus 3, an image processing apparatus 400 in a CT (Computed Tomography) apparatus 4, an image processing apparatus 500 in an MRI (Magnetic Resonance Imaging) apparatus 5, a stand-alone image processing apparatus 600, and the like.
 画像処理装置100-Aは、例えば、生成したOCT画像の容量や数等に基づいて、変位算出時に自装置に係る負荷を計算する。画像処理装置100-Aは、算出した負荷が自装置の計算処理能力を上回る場合には、生成したOCT画像を他の装置に送信し、上述した対象物体OBの変位の算出に係る処理の一部または全部を他の装置に行わせてもよい。
すなわち、画像処理装置100-Aは、対象物体OBの変位の算出に係る処理を分散処理する。これによって、第3実施形態における画像処理装置100-Aは、速やかに対象物体OBの変位を算出することができる。
For example, the image processing apparatus 100-A calculates the load imposed on itself at the time of displacement calculation, based on the volume, number, and the like of the generated OCT images. When the calculated load exceeds its own processing capability, the image processing apparatus 100-A may transmit the generated OCT images to another apparatus and have the other apparatus perform part or all of the processing related to the calculation of the displacement of the target object OB described above.
That is, the image processing apparatus 100-A performs a distributed process on the process related to the calculation of the displacement of the target object OB. As a result, the image processing apparatus 100-A according to the third embodiment can quickly calculate the displacement of the target object OB.
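A minimal sketch of such a load-based hand-off; the load estimate, capacity threshold, splitting rule, and remote interface are all assumptions introduced here for illustration.

def compute_displacements(oct_images, local_capacity, estimate_load,
                          process_locally, send_to_remote):
    # Process locally when the estimated load fits the local capacity,
    # otherwise hand part of the work to another apparatus over the network.
    load = estimate_load(oct_images)            # e.g. based on image size and count
    if load <= local_capacity:
        return process_locally(oct_images)
    split = len(oct_images) // 2                # share roughly half of the work
    local_part = process_locally(oct_images[:split])
    remote_part = send_to_remote(oct_images[split:])
    return local_part + remote_part             # assuming list-like results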
 以下に、その他の実施例(変形例)について記載する。
 OCT画像生成部140は、検出信号に基づいてOCT画像を生成するだけでなく、他の機能部、或いは他の装置から既に生成されたOCT画像を取得してもよい。
Other embodiments (modifications) will be described below.
The OCT image generation unit 140 may not only generate an OCT image based on the detection signal, but also acquire an OCT image that has already been generated from another functional unit or another device.
 以上、本発明を実施するための形態について実施形態を用いて説明したが、本発明はこうした実施形態に何等限定されるものではなく、本発明の要旨を逸脱しない範囲内において種々の変形及び置換を加えることができる。 While modes for carrying out the present invention have been described above using embodiments, the present invention is in no way limited to these embodiments, and various modifications and substitutions can be made without departing from the gist of the present invention.
1‥光干渉断層計、OCT、10…光源、20…ビームスプリッタ、30a、30b、50a、50b…コリメータ、40…参照鏡、60a、60b…ガルバノミラー、70…分光器、100…画像処理装置、110…制御部、120…光学系制御部、130…検出信号取得部、140…OCT画像生成部、150…シフト画像生成部、160…相関算出部、170…移動状態算出部、180…記憶部、OB…対象物体 DESCRIPTION OF SYMBOLS: 1... optical coherence tomograph (OCT); 10... light source; 20... beam splitter; 30a, 30b, 50a, 50b... collimators; 40... reference mirror; 60a, 60b... galvanometer mirrors; 70... spectroscope; 100... image processing apparatus; 110... control unit; 120... optical system control unit; 130... detection signal acquisition unit; 140... OCT image generation unit; 150... shift image generation unit; 160... correlation calculation unit; 170... movement state calculation unit; 180... storage unit; OB... target object

Claims (8)

  1.  物体の状態を表す画像を時系列に複数取得する取得部と、
     前記取得部により取得された複数の画像のうち第1の画像を面方向に平行移動させて複数のシフト画像を生成するシフト画像生成部と、
     前記シフト画像生成部により生成されたシフト画像と、前記取得部により取得された複数の画像のうち第2の画像との相関を示す指標を算出する相関算出部と、
     前記相関算出部により算出された指標に基づいて、前記物体の移動状態を画素ごとに算出する移動状態算出部と、
     を備える画像処理装置。
    An acquisition unit for acquiring a plurality of images representing the state of the object in time series;
    A shift image generation unit that generates a plurality of shift images by translating a first image of the plurality of images acquired by the acquisition unit in a plane direction;
    A correlation calculation unit that calculates an index indicating a correlation between the shift image generated by the shift image generation unit and a second image among the plurality of images acquired by the acquisition unit;
    Based on the index calculated by the correlation calculation unit, a movement state calculation unit that calculates the movement state of the object for each pixel;
    An image processing apparatus comprising:
  2.  前記移動状態算出部は、5つの異なる変数を有し、点像分布関数間の相関を示す数式を、前記相関算出部により算出された少なくとも5種類の相関を示す指標を用いて解くことにより、前記物体の移動状態を画素ごとに算出する、
     請求項1記載の画像処理装置。
    The movement state calculation unit calculates the movement state of the object for each pixel by solving a mathematical expression that has five different variables and indicates a correlation between point spread functions, using indices indicating at least five types of correlations calculated by the correlation calculation unit;
    The image processing apparatus according to claim 1.
  3.  前記シフト画像生成部は、前記第1の画像を面方向の上下左右に平行移動させて少なくとも4つのシフト画像を生成する、
     請求項1または2記載の画像処理装置。
    The shift image generation unit generates at least four shift images by translating the first image vertically and horizontally in a plane direction;
    The image processing apparatus according to claim 1 or 2.
  4.  前記相関算出部は、前記第1の画像および前記第2の画像の相関を示す指標を算出し、
     前記移動状態算出部は、前記シフト画像および前記第2の画像の相関と、前記第1の画像および前記第2の画像の相関とを示す、少なくとも5種類の指標に基づいて、前記物体の移動状態を画素ごとに算出する、
     請求項3記載の画像処理装置。
    The correlation calculation unit calculates an index indicating the correlation between the first image and the second image;
    The movement state calculation unit calculates the movement state of the object for each pixel based on at least five types of indices indicating the correlations between the shift images and the second image and the correlation between the first image and the second image;
    The image processing apparatus according to claim 3.
  5.  前記相関算出部は、前記第1の画像と前記第2の画像との相関を示す数式項にノイズを抑制するための数式項が乗算された数式を解くことにより、前記相関を示す指標を算出する、
     請求項1から4のうちいずれか1項記載の画像処理装置。
    The correlation calculation unit calculates the index indicating the correlation by solving a mathematical expression in which a term indicating the correlation between the first image and the second image is multiplied by a term for suppressing noise;
    The image processing apparatus according to any one of claims 1 to 4.
  6.  レーザ光を照射する照射部と、
     前記照射部によってレーザ光が照射された物体の状態を表す画像を時系列に複数取得する取得部と、
     前記取得部により取得された複数の画像のうち第1の画像を面方向に平行移動させて複数のシフト画像を生成するシフト画像生成部と、
     前記シフト画像生成部により生成されたシフト画像と、前記取得部により取得された複数の画像のうち第2の画像との相関を示す指標を算出する相関算出部と、
     前記相関算出部により算出された指標に基づいて、前記物体の移動状態を画素ごとに算出する移動状態算出部と、
     前記移動状態算出部により算出された前記物体の移動状態に基づいて、前記レーザ光の出力が変更されるように前記照射部を制御する照射制御部と、
     を備えるレーザ照射システム。
    An irradiation unit for irradiating a laser beam;
    An acquisition unit that acquires a plurality of images in time series representing the state of an object irradiated with laser light by the irradiation unit;
    A shift image generation unit that generates a plurality of shift images by translating a first image of the plurality of images acquired by the acquisition unit in a plane direction;
    A correlation calculation unit that calculates an index indicating a correlation between the shift image generated by the shift image generation unit and a second image among the plurality of images acquired by the acquisition unit;
    Based on the index calculated by the correlation calculation unit, a movement state calculation unit that calculates the movement state of the object for each pixel;
    An irradiation control unit that controls the irradiation unit so that the output of the laser beam is changed based on the movement state of the object calculated by the movement state calculation unit;
    A laser irradiation system comprising:
  7.  物体の状態を表す画像を時系列に複数取得し、
     取得した複数の前記画像のうち第1の画像を面方向に平行移動させて複数のシフト画像を生成し、
     生成した前記シフト画像と、取得した複数の前記画像のうち第2の画像との相関を示す指標を算出し、
     算出した前記指標に基づいて、前記物体の移動状態を画素ごとに算出する、
     画像処理方法。
    Acquiring a plurality of images representing the state of an object in time series;
    generating a plurality of shift images by translating a first image among the plurality of acquired images in a plane direction;
    calculating an index indicating a correlation between the generated shift images and a second image among the plurality of acquired images; and
    calculating a movement state of the object for each pixel based on the calculated index,
    An image processing method.
  8.  画像処理を行うコンピュータに、
     物体の状態を表す画像を時系列に複数取得する処理と、
     取得した複数の前記画像のうち第1の画像を面方向に平行移動させて複数のシフト画像を生成する処理と、
     生成した前記シフト画像と、取得させた複数の前記画像のうち第2の画像との相関を示す指標を算出する処理と、
     算出した前記指標に基づいて、前記物体の移動状態を画素ごとに算出する処理と、
     を実行させる画像処理プログラム。
    Causing a computer that performs image processing to execute:
    a process of acquiring a plurality of images representing the state of an object in time series;
    a process of generating a plurality of shift images by translating a first image among the plurality of acquired images in a plane direction;
    a process of calculating an index indicating a correlation between the generated shift images and a second image among the plurality of acquired images; and
    a process of calculating a movement state of the object for each pixel based on the calculated index,
    An image processing program.
PCT/JP2015/085334 2014-12-17 2015-12-17 Image processing device, laser radiation system, image processing method, and image processing program WO2016098850A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-255208 2014-12-17
JP2014255208A JP2016114557A (en) 2014-12-17 2014-12-17 Image processing apparatus, laser irradiation system, image processing method, and image processing program

Publications (1)

Publication Number Publication Date
WO2016098850A1 true WO2016098850A1 (en) 2016-06-23

Family

ID=56126734

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/085334 WO2016098850A1 (en) 2014-12-17 2015-12-17 Image processing device, laser radiation system, image processing method, and image processing program

Country Status (2)

Country Link
JP (1) JP2016114557A (en)
WO (1) WO2016098850A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022114850A (en) * 2021-01-27 2022-08-08 シンクランド株式会社 Light interference tomography system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6705963B2 (en) * 2016-10-27 2020-06-03 国立大学法人 筑波大学 Medical information processing apparatus and medical information processing program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012105631A (en) * 2010-10-19 2012-06-07 Sony Corp Image processing apparatus, method, and program
JP2013192468A (en) * 2012-03-16 2013-09-30 Olympus Corp Image analysis method, image analyzer, and image photographing apparatus and program for biological sample

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012105631A (en) * 2010-10-19 2012-06-07 Sony Corp Image processing apparatus, method, and program
JP2013192468A (en) * 2012-03-16 2013-09-30 Olympus Corp Image analysis method, image analyzer, and image photographing apparatus and program for biological sample

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KAZUHIRO KUROKAWA ET AL.: "In-plane bidirectional displacement measurement method using localized correlation coefficients of OCT signals", ANNUAL MEETING OF THE OPTICAL SOCIETY OF JAPAN, vol. 2014, 24 October 2014 (2014-10-24), pages ROMBUNNO.7AAl *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022114850A (en) * 2021-01-27 2022-08-08 シンクランド株式会社 Light interference tomography system
JP7134509B2 (en) 2021-01-27 2022-09-12 シンクランド株式会社 Optical coherence tomography system

Also Published As

Publication number Publication date
JP2016114557A (en) 2016-06-23

Similar Documents

Publication Publication Date Title
JP5166889B2 (en) Quantitative measurement device for fundus blood flow
CN106097296B (en) Image generation device and image generation method
JP5685013B2 (en) Optical tomographic imaging apparatus, control method therefor, and program
JP5149535B2 (en) Polarization-sensitive optical coherence tomography apparatus, signal processing method for the apparatus, and display method for the apparatus
US10098536B2 (en) Imaging apparatus, method of operating an imaging apparatus, information processing apparatus, and storing medium
JP6798095B2 (en) Optical coherence tomography equipment and control programs used for it
US10007989B2 (en) OCT data processing method, storage medium storing program for executing the OCT data processing method, and processing device
US8921767B2 (en) Automatic calibration of fourier-domain optical coherence tomography systems
EP2801814A1 (en) Swept source optical coherence tomograph and method for stabilizing phase thereof
TW201803521A (en) Skin diagnosing apparatus, method of outputting skin state, program and recording media
US9687147B2 (en) Optical coherence tomography device and control program
US9615736B2 (en) Optical interference tomographic apparatus, and method for controlling optical interference tomographic apparatus
JPWO2010143601A1 (en) 2-beam optical coherence tomography system
JP2014228473A (en) Jones matrix oct system and program for performing image processing of measurement data obtained by the oct
JP2013019773A (en) Program to correct data measured by ps-oct and ps-oct system comprising the program
US20180000341A1 (en) Tomographic imaging apparatus, tomographic imaging method, image processing apparatus, image processing method, and program
JP2010197180A (en) Optical image measuring device
US20140293289A1 (en) Method for Generating Two-Dimensional Images From Three-Dimensional Optical Coherence Tomography Interferogram Data
US20170228521A1 (en) Report driven workflow for ophthalmic image data acquisition
JP2022176282A (en) Ophthalmologic apparatus and control method thereof
WO2016098850A1 (en) Image processing device, laser radiation system, image processing method, and image processing program
Ksenofontov et al. Numerical method for axial motion artifact correction in retinal spectral-domain optical coherence tomography
JP6606640B2 (en) Ophthalmic apparatus and control method thereof
JP6402921B2 (en) Optical coherence tomography apparatus and speed measurement program
JP6480104B2 (en) Optical coherence tomography apparatus and displacement measurement method using optical coherence tomography

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15870052

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15870052

Country of ref document: EP

Kind code of ref document: A1