WO2021095826A1 - Image processing device, image processing method, and program


Info

Publication number
WO2021095826A1
Authority
WO
WIPO (PCT)
Prior art keywords
mask
electric field
sample
image
image processing
Prior art date
Application number
PCT/JP2020/042352
Other languages
French (fr)
Japanese (ja)
Inventor
Yoshiaki Yasuno (安野 嘉晃)
Daisuke Oida (大輔 笈田)
Original Assignee
University of Tsukuba (国立大学法人筑波大学)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Tsukuba (国立大学法人筑波大学)
Priority to JP2021556159A (JPWO2021095826A1)
Publication of WO2021095826A1 publication Critical patent/WO2021095826A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated

Definitions

  • the present invention relates to an image processing apparatus, an image processing method and a program.
  • The present application claims priority based on Japanese Patent Application No. 2019-206436 filed in Japan on November 14, 2019, the contents of which are incorporated herein by reference.
  • Optical coherence tomography (OCT) is a technique for observing samples, mainly living organisms.
  • With OCT, it is possible to acquire an image showing not only the surface of a sample but also its internal structure with high spatial resolution.
  • OCT has been put into practical use for retinal diagnosis in ophthalmology.
  • OCT is also used for visualizing and quantifying tissue structures below the optical resolution and the characteristics of microfiber structures (for example, statistical properties of their directionality and size), using cultured tissues and ex vivo samples as observation targets.
  • In Non-Patent Document 1, a method of indirectly visualizing the characteristics of the fine structure of a tissue by performing multi-directional OCT measurement has been attempted.
  • However, since Non-Patent Document 1 realizes multi-directional observation using hardware, the configuration of the device becomes complicated and is difficult to realize economically.
  • In addition, the measurement time tends to be long because the measurement is performed a plurality of times, so the method is not always practical for living samples.
  • Moreover, since the number of measurements is limited, there are restrictions on the measurement target; for example, the method is unsuitable for quantitative observation that requires a large number of measurements.
  • The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image processing apparatus, an image processing method, and a program capable of eliminating or relaxing restrictions on observation, such as the observation direction, measurement time, number of measurements, and measurement target, and of adjusting the spatial conditions related to the observation of a sample more easily.
  • The present invention has been made to solve the above problems. One aspect of the present invention is an image processing apparatus including: an electric field estimation unit that converts an optical coherence tomographic (OCT) signal representing the state of a sample into the spatial frequency domain and generates electric field data indicating an electric field on the pupil plane corresponding to the reciprocal space of the sample; a mask unit that generates masked electric field data by applying, to the electric field data, a mask indicating the pass characteristic distribution of the electric field on the pupil plane; and a conversion unit that converts the masked electric field data into the spatial domain and generates a masked OCT signal.
  • Another aspect of the present invention is the image processing apparatus of (1), wherein the pass characteristic distribution may indicate a spatial frequency band of azimuth angles through which the electric field passes in the pupil plane.
  • Another aspect of the present invention is the image processing apparatus of (1) or (2), wherein the pass characteristic distribution may indicate a spatial frequency band in the radial direction through which the electric field passes in the pupil plane.
  • Another aspect of the present invention is the image processing apparatus of (1) or (2), wherein the mask may indicate, for each sample point corresponding to a spatial frequency, whether the electric field passes or not.
  • Another aspect of the present invention is the image processing apparatus of (4), wherein the mask may have mask values that change monotonically between sample points within a predetermined range from the boundary between a pass region that passes the electric field and a blocking region that does not pass the electric field.
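The monotonically changing mask values near the pass/block boundary described above can be sketched as a raised-cosine taper around a circular pass region. This is an illustrative construction, not taken from the patent; the function name and parameters are hypothetical.

```python
import numpy as np

def soft_edge_mask(n, radius, edge_width):
    """Circular pass region (mask value 1) whose mask value ramps
    monotonically down to 0 over edge_width frequency bins near the
    pass/block boundary, via a raised cosine."""
    f = np.fft.fftfreq(n) * n            # signed frequency index per bin
    fy, fx = np.meshgrid(f, f, indexing="ij")
    r = np.hypot(fx, fy)                 # radial distance from the DC bin
    mask = np.zeros((n, n))
    mask[r <= radius] = 1.0              # pass region
    edge = (r > radius) & (r < radius + edge_width)
    # Monotonic taper: 1 at r = radius, 0 at r = radius + edge_width
    mask[edge] = 0.5 * (1.0 + np.cos(np.pi * (r[edge] - radius) / edge_width))
    return mask
```

Such a smooth edge suppresses the ringing that a hard binary boundary would introduce after the inverse transform.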
  • Another aspect of the present invention is the image processing apparatus according to any one of (1) to (5), which may include a synthesis unit that generates a mask image showing the structure of the sample based on the masked OCT signal.
  • Another aspect of the present invention is the image processing apparatus of (6), including a compositing unit that synthesizes, with different display modes, mask images generated based on each of M masked OCT signals (M is an integer of 2 or more) to generate a composite image, wherein the mask unit applies M masks having different pass characteristic distributions to the electric field data to generate M pieces of masked electric field data, and the conversion unit may convert the M pieces of masked electric field data into the spatial domain to generate the M masked OCT signals.
  • Another aspect of the present invention is the image processing apparatus of (7), wherein, as the display mode, the hues of the colors representing the M mask images may be different.
  • Another aspect of the present invention is the image processing apparatus of (7), wherein, as the display mode, the color obtained by combining the colors representing the M mask images may be achromatic.
  • Another aspect of the present invention is an image processing method in an image processing apparatus, the method including: an electric field estimation process of converting an OCT signal representing the state of a sample into the spatial frequency domain to generate electric field data indicating an electric field on the pupil plane corresponding to the reciprocal space of the sample; a mask process of generating masked electric field data by applying, to the electric field data, a mask indicating the pass characteristic distribution of the electric field on the pupil plane; and a conversion process of converting the masked electric field data into the spatial domain to generate a masked OCT signal.
  • Another aspect of the present invention is a program for causing a computer of an image processing apparatus to execute: an electric field estimation procedure of converting an OCT signal representing the state of a sample into the spatial frequency domain to generate electric field data indicating an electric field on the pupil plane corresponding to the reciprocal space of the sample; a mask procedure of generating masked electric field data by applying, to the electric field data, a mask indicating the pass characteristic distribution of the electric field on the pupil plane; and a conversion procedure of converting the masked electric field data into the spatial domain to generate a masked OCT signal.
  • According to the present invention, the spatial conditions related to the observation of a sample can be adjusted more easily. For example, adjustment of the observation direction and multi-directional observation can be realized virtually, without providing or adjusting optical components for acquiring reflected light from each direction.
  • FIG. 1 is a configuration diagram showing an example of an optical coherence tomography apparatus 1 according to the present embodiment.
  • The optical coherence tomography apparatus 1 constitutes an observation system for observing the state of a sample using OCT.
  • The optical coherence tomography apparatus 1 irradiates the sample Sm with light, acquires the interference light generated by causing the light reflected from the sample Sm to interfere with the reference light reflected by the reference mirror 40 (described later), and generates, from the acquired interference light, an image showing the surface of the sample Sm and the state inside the sample Sm.
  • the object to be observed as the sample Sm may be, for example, a human or animal living body or a non-living body.
  • the living body may be a fundus, a blood vessel, a tooth, a subcutaneous tissue, or the like.
  • The non-living object may be an artificial structure such as an electronic component or a mechanical part, a natural object such as a stone or a mineral, or a substance having no specific shape.
  • The optical coherence tomography apparatus 1 includes a light source 10, a beam splitter 20, collimators 30a, 30b, 50a, and 50b, a reference mirror 40, galvanometer mirrors 60a and 60b, a spectroscope 70, and an image processing device 100.
  • The beam splitter 20, the collimators 30a, 30b, 50a, and 50b, the reference mirror 40, the galvanometer mirrors 60a and 60b, and the spectroscope 70 constitute an optical system called an interferometer.
  • the interferometer illustrated in FIG. 1 is a Michelson interferometer including an optical fiber F.
  • The light source 10, the spectroscope 70, the collimator 30a, and the collimator 50a are each connected to the beam splitter 20 by an optical fiber F.
  • the optical fiber F has a transmission band including a wavelength band of light emitted from the light source 10.
  • The OCT scheme may be, for example, Fourier-domain OCT (FD-OCT), which includes spectral-domain OCT (SD-OCT) and swept-source OCT (SS-OCT).
  • The light source 10 is, for example, a broadband light source such as an ultrashort pulse laser or an SLD (Superluminescent Diode).
  • The light source 10 emits probe light having, for example, a near-infrared wavelength (for example, 800 to 1000 nm) and low coherence.
  • The light emitted from the light source 10 is guided through the optical fiber F and is incident on the beam splitter 20.
  • the beam splitter 20 separates the incident light into light that is guided toward the collimator 30a (hereinafter, reference light) and light that is guided toward the collimator 50a (hereinafter, measurement light).
  • the beam splitter 20 is, for example, a cube beam splitter.
  • the collimator 30a changes the reference light guided from the beam splitter 20 into parallel light, and emits the parallel light toward the collimator 30b.
  • the collimator 30b collects the parallel light incident from the collimator 30a and emits the collected reference light toward the reference mirror 40.
  • The collimator 30b receives the reference light reflected by the reference mirror 40, converts it into parallel light, and emits the converted parallel light toward the collimator 30a.
  • the collimator 30a collects the parallel light incident from the collimator 30b and guides it toward the beam splitter 20.
  • the collimator 50a converts the measurement light guided from the beam splitter 20 into parallel light, and emits the converted parallel light toward the galvanometer mirror 60a.
  • The galvanometer mirrors 60a and 60b each reflect the parallel light incident from the collimator 50a and emit it toward the collimator 50b.
  • the collimator 50b collects the parallel light incident from the collimator 50a via the galvanometer mirrors 60a and 60b, and irradiates the sample Sm with the collected measurement light.
  • The measurement light applied to the sample Sm is reflected by a reflecting surface of the sample Sm and is incident on the collimator 50b.
  • The reflecting surface is not limited to the boundary surface between the sample Sm and its surrounding environment (for example, the atmosphere), and may be a boundary surface separating materials or structures having different refractive indexes inside the sample Sm.
  • the light reflected on the reflecting surface of the sample Sm and incident on the collimator 50b is referred to as reflected light.
  • The collimator 50b emits the incident reflected light toward the galvanometer mirror 60b. The reflected light is reflected by the surfaces of the galvanometer mirrors 60b and 60a in turn and emitted toward the collimator 50a.
  • The collimator 50a collects the parallel light incident via the galvanometer mirrors 60b and 60a, and guides the collected reflected light toward the beam splitter 20.
  • the beam splitter 20 guides the reference light reflected by the reference mirror 40 and the reflected light reflected by the sample Sm to the spectroscope 70 via the optical fiber F.
  • the spectroscope 70 includes a diffraction grating and a light receiving element inside the spectroscope 70.
  • the diffraction grating disperses the reference light and the reflected light guided from the beam splitter 20.
  • The dispersed reference light and reflected light interfere with each other to form interference light.
  • The light receiving element is arranged on the imaging surface irradiated with the interference light.
  • the light receiving element detects the emitted interference light and generates a signal based on the detected interference light (hereinafter referred to as a detection signal).
  • the light receiving element outputs the generated detection signal to the image processing device 100.
  • the image processing apparatus 100 acquires an OCT signal representing the state of the sample Sm from the detection signal input from the spectroscope 70.
  • The image processing apparatus 100 sequentially changes the observation point in a predetermined order, accumulates the detection signals acquired at individual observation points or for each group of observation points forming a predetermined unit, and generates an OCT signal for a predetermined observation area.
  • The image processing device 100 converts the generated OCT signal into the spatial frequency domain, and estimates the electric field of the reflected light on the pupil plane corresponding to the reciprocal space of the sample Sm as the pupil-plane electric field.
  • The image processing device 100 applies a mask indicating the pass characteristic distribution of the electric field on the pupil plane to the electric field data indicating the estimated electric field, and acquires masked electric field data.
  • The image processing device 100 converts the acquired masked electric field data into the spatial domain to generate a masked OCT signal.
  • The image processing device 100 generates a masked OCT image based on the generated masked OCT signal.
  • The image processing apparatus 100 may generate M pieces of masked electric field data by applying M masks having different pass characteristic distributions (M is an integer of 2 or more) to the electric field data, and convert the generated M pieces of masked electric field data into the spatial domain to generate M mask images.
  • the image processing apparatus 100 may generate a composite image by synthesizing the generated M mask images with different display modes (for example, colors).
  • FIG. 2 is a block diagram showing a functional configuration example of the image processing device 100 according to the present embodiment.
  • the image processing device 100 includes a control unit 110 and a storage unit 190.
  • Some or all of the functions of the control unit 110 are realized by a computer including a processor such as a CPU (Central Processing Unit).
  • The processor reads a program stored in advance in the storage unit 190 and performs the processes instructed by the commands described in the read program, thereby realizing its functions. In the present application, performing a process instructed by a command described in a program may be referred to as executing or running the program.
  • A part or all of the control unit 110 is not limited to general-purpose hardware such as a processor, and may include dedicated hardware such as an LSI (Large Scale Integration) chip or an ASIC (Application Specific Integrated Circuit).
  • the control unit 110 includes an optical system control unit 120, a detection signal acquisition unit 130, an electric field estimation unit 140, a mask unit 150, a conversion unit 160, an image composition unit 170, and an output processing unit 180.
  • the optical system control unit 120 drives a drive mechanism that changes the positions of the galvanometer mirrors 60a and 60b, and scans the sample point which is the observation point of the sample Sm.
  • the sample points of the sample Sm are scanned in each of the depth direction of the sample Sm and the direction intersecting the depth direction (for example, the direction parallel to the front surface of the sample Sm).
  • the depth direction of the sample may be referred to as the z direction in the three-dimensional Cartesian coordinate system, and the first and second directions orthogonal to the z direction may be referred to as the x direction and the y direction, respectively.
  • Signal acquisition in the depth direction of the sample is called an A-scan, and signal acquisition in a direction intersecting that direction (for example, the x direction or the y direction) is called a B-scan.
  • the detection signal acquisition unit 130 sequentially acquires detection signals from the spectroscope 70.
  • Based on the acquired detection signal, the detection signal acquisition unit 130 acquires signal values indicating the intensity distribution of the reflected light in the depth direction of the sample Sm for each sample point arranged at predetermined intervals in the three-dimensional space where the sample Sm exists.
  • the depth direction of the sample Sm corresponds to the incident direction (z direction) of the measurement light.
  • The detection signal acquisition unit 130 repeats the process of acquiring the intensity distribution of the reflected light while changing the observation point along a plane intersecting the z direction (for example, the xy plane).
  • the intensity distribution of the acquired reflected light is based on the distribution of the refractive index of the sample Sm in the depth direction.
  • Thereby, the detection signal acquisition unit 130 can acquire data representing the state of the sample Sm in the observable three-dimensional region (hereinafter, the observable region) as a three-dimensional OCT signal (also referred to as an OCT volume).
  • the detection signal acquisition unit 130 stores the three-dimensional OCT signal in the storage unit 190.
  • The planes intersecting the z direction need not be orthogonal to the z direction, as long as they are not parallel to it. Further, in a two-dimensional plane intersecting the z direction, the individual sample points do not necessarily have to be arranged on the lattice points of an orthogonal lattice.
  • The individual sample points may be arranged, for example, on the lattice points of an oblique lattice, or may be arranged aperiodically.
  • In the present application, signal acquisition involving a change of the target sample point, or batch signal acquisition for a plurality of sample points, is referred to as a "scan"; however, this does not necessarily require mechanical scanning driven by a member constituting the optical system (for example, the collimator 50b or the sample Sm) or by the detector. In that case, the detection signal acquisition unit 130 is responsible for the function related to signal acquisition, and the corresponding function of the optical system control unit 120 is omitted.
  • In SD-OCT, reflected light from different depths of the sample Sm has different frequency components, and the detection signal acquisition unit 130 detects the resulting interference light as a detection signal.
  • The detection signal acquisition unit 130 can Fourier-transform the detection signal to calculate a conversion coefficient for each frequency, and determine the conversion coefficient at the frequency associated with a depth as the signal value at that depth. Further, when scanning by driving in the x direction or the y direction is not required, the optical system control unit 120 may be omitted.
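The depth-decoding step described above can be illustrated with a simulated single-reflector detection signal. This is a sketch, not the patent's implementation: uniform sampling in wavenumber is assumed, and all names and values are illustrative.

```python
import numpy as np

# Illustrative SD-OCT depth decoding: a reflector at a given depth produces
# a spectral fringe whose frequency along the wavenumber axis is proportional
# to that depth, so a Fourier transform of the detection signal recovers the
# depth profile (A-scan).
n_k = 1024                          # number of spectral samples (uniform in k assumed)
k = np.arange(n_k)
depth_bin = 100                     # simulated reflector depth, in FFT bins
detection_signal = np.cos(2 * np.pi * depth_bin * k / n_k)

# The conversion coefficient at each frequency is the signal value at the
# associated depth; the peak marks the reflector.
a_scan = np.abs(np.fft.fft(detection_signal))[: n_k // 2]
peak_depth = int(np.argmax(a_scan))  # → 100
```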
  • The detection signal acquisition unit 130 extracts, from the acquired three-dimensional OCT signal, the portion representing the intensity distribution of the reflected light in a two-dimensional plane to be observed (hereinafter, the observation target plane) as a two-dimensional OCT signal (hereinafter simply referred to as an OCT signal).
  • the observation target plane is, for example, the front surface of the sample Sm.
  • the front surface is a plane (that is, an xy plane) orthogonal to the depth direction of the sample Sm, and is also called an en face plane.
  • the detection signal acquisition unit 130 stores the extracted OCT signal in the storage unit 190.
  • The detection signal acquisition unit 130 may select, for example, the two-dimensional plane (for example, the xy plane) at the depth (that is, the z coordinate) indicated by the control signal input from the output processing unit 180 as the observation target plane. In the present embodiment, it is desirable that the distance between adjacent sample points in the observable region be equal to or less than the spatial resolution of the optical system.
  • the electric field estimation unit 140 reads the OCT signal stored in the storage unit 190, performs a two-dimensional Fourier transform on the read OCT signal in the spatial domain, and generates data in the spatial frequency domain.
  • The data in the spatial frequency domain obtained by the Fourier transform corresponds to data (hereinafter, electric field data) indicating the electric field on the pupil plane (hereinafter, the pupil-plane electric field). That is, the electric field of the reflected light projected onto the pupil plane is estimated by performing a two-dimensional Fourier transform on the OCT signal for the observation target plane.
  • The conversion coefficient for each frequency sample point (frequency bin) in the spatial frequency domain is a complex number, represented by its absolute value and argument. Therefore, in the electric field data, the amplitude and the phase are represented by the absolute value and the argument of each conversion coefficient, respectively.
  • the electric field estimation unit 140 stores the generated data in the spatial frequency region as electric field data in the storage unit 190.
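The electric field estimation step can be sketched as follows; this is a minimal illustration of the two-dimensional Fourier transform described above, with hypothetical function and variable names.

```python
import numpy as np

def estimate_pupil_field(oct_en_face):
    """Two-dimensional Fourier transform of an en-face OCT signal.
    The result is treated as the electric field on the pupil plane;
    each complex coefficient's absolute value and argument give the
    field's amplitude and phase at that frequency sample point."""
    field = np.fft.fft2(oct_en_face)
    return field, np.abs(field), np.angle(field)
```

For a uniform (constant) en-face signal, all energy lands in the DC bin of the pupil-plane field, as expected of a Fourier transform.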
  • The pupil plane Pp corresponds to the back focal plane corresponding to the reciprocal space of the sample Sm. That is, the pupil plane Pp is a plane arranged on the opposite side of the objective lens Ol from the sample Sm, onto which the reflected light diffused from the sample Sm and collimated by the objective lens Ol is virtually projected.
  • The entire band of spatial frequencies in the spatial frequency domain corresponds to the observation region of the electric field on the pupil plane.
  • The intersection of the optical axis of the objective lens Ol with the pupil plane corresponds to the origin of the spatial frequencies, and the spatial frequencies in the spatial frequency domain correspond to sample points in the observation region.
  • the sample Sm is installed at a position where it intersects with the optical axis of the objective lens Ol.
  • The mask unit 150 reads the electric field data stored in the storage unit 190, applies a mask to the read electric field data, and generates masked electric field data indicating the masked electric field.
  • The mask is numerical data indicating the pass characteristic distribution of the electric field of the reflected light in the spatial frequency domain.
  • A numerical value indicating the pass characteristic of the electric field is given as a mask value for each frequency sample point corresponding to each spatial frequency.
  • The mask value for each frequency sample point takes one of two values, 1 or 0; 1 and 0 indicate that the electric field passes and does not pass, respectively.
  • The distribution of frequency sample points whose mask value is set to 1 indicates the pass band of the spatial frequency domain through which the electric field passes in the reciprocal space of the sample Sm.
  • The pass band represents a virtual aperture on the pupil plane through which the reflected light passes.
  • The distribution of frequency sample points whose mask value is set to 0 indicates the blocking band of the spatial frequency domain through which the electric field does not pass in the reciprocal space of the sample Sm.
  • The blocking band represents a virtual shield on the pupil plane through which the reflected light does not pass.
  • The mask unit 150 multiplies the electric field value at each frequency sample point indicated by the electric field data by the mask value of that frequency sample point.
  • The mask unit 150 generates, as the masked electric field data, data having the product obtained by this multiplication as the masked electric field value for each frequency sample point.
  • As described later, the masked electric field data is used to generate an OCT image reflecting the spatial characteristics (for example, directionality) corresponding to the pass characteristic distribution.
  • The mask unit 150 stores the generated masked electric field data in the storage unit 190. Specific examples of masks will be described later.
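The per-bin multiplication performed by the mask unit can be sketched as below. The half-plane mask is merely an illustrative pass characteristic distribution, not one of the patent's specific masks, and the names are hypothetical.

```python
import numpy as np

def half_plane_mask(shape):
    """Illustrative binary mask: passes frequency bins with non-positive
    x spatial frequency (mask value 1) and blocks the rest (mask value 0)."""
    ny, nx = shape
    fx = np.fft.fftfreq(nx)          # signed spatial frequencies per column
    mask = np.zeros(shape)
    mask[:, fx <= 0] = 1.0
    return mask

def apply_mask(field, mask):
    # Masked field value = field value x mask value, per frequency sample point
    return field * mask
```

Multiplication in the spatial frequency domain acts as a virtual aperture: blocked bins contribute nothing to the image reconstructed later by the inverse transform.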
  • The conversion unit 160 reads the masked electric field data stored in the storage unit 190, performs a two-dimensional inverse Fourier transform on the read masked electric field data in the spatial frequency domain, and generates a masked OCT signal in the spatial domain.
  • The conversion unit 160 stores the generated masked OCT signal in the storage unit 190.
  • When M masks are set, the mask unit 150 may apply each of the M masks to the read electric field data and generate M pieces of masked electric field data. In that case, the conversion unit 160 converts each of the M pieces of masked electric field data in the spatial frequency domain into a masked OCT signal in the spatial domain.
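The conversion unit's inverse transform, generalized to M masks, might be sketched as follows (illustrative names; one inverse 2-D FFT per masked field):

```python
import numpy as np

def masked_oct_signals(field, masks):
    """Inverse two-dimensional Fourier transform of each of the M masked
    pupil-plane fields, giving M masked OCT signals in the spatial domain."""
    return [np.fft.ifft2(field * m) for m in masks]
```

With an all-pass mask (all mask values 1), the original OCT signal is recovered exactly, which is a useful sanity check for the forward/inverse transform pair.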
  • The image synthesizing unit 170 reads the masked OCT signal stored in the storage unit 190, and converts the signal value for each sample point in the observation target plane indicated by the read signal into a luminance value for each pixel using a predetermined conversion function.
  • the converted luminance value takes a value within a value range that can be expressed by the bit depth of each pixel.
  • the image synthesizing unit 170 generates mask image data having a luminance value converted for each sample point.
  • the generated mask image data shows a mask image that reflects the spatial characteristics corresponding to the passage characteristic distribution.
  • The image synthesizing unit 170 outputs the generated mask image data as output image data to the display unit (not shown) in accordance with the control signal input from the output processing unit 180.
  • When masked OCT signals corresponding to M masks are acquired, the image synthesizing unit 170 generates a mask image having a different display mode (for example, color) for each of the M masked OCT signals.
  • The image synthesizing unit 170 synthesizes the generated M mask images to generate composite image data indicating one composite image. For example, when generating a color composite image using the RGB color system, the image synthesizing unit 170 determines, for each pixel, the average of the pixel values among the M mask images as the pixel value of the composite image. In the RGB color system, the hue (chromaticity) of each pixel is determined by the ratio of the pixel values of its red, green, and blue primary-color components.
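The per-pixel averaging described above can be sketched by tinting each grayscale mask image with its own RGB color and averaging. This is an illustrative sketch; the color choices and names are assumptions, not taken from the patent.

```python
import numpy as np

def composite_rgb(mask_images, colors):
    """Tint each grayscale mask image (values in [0, 1]) with its own RGB
    color, then average the tinted images per pixel to form the composite."""
    tinted = [img[..., None] * np.asarray(c, dtype=float)
              for img, c in zip(mask_images, colors)]
    return np.mean(tinted, axis=0)
```

If two identical images are tinted with complementary colors such as red (1, 0, 0) and cyan (0, 1, 1), the composite has equal R, G, and B components, i.e. it is achromatic where the images agree, in line with the achromatic combination described in the summary.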
  • the image composition unit 170 outputs the generated composite image data to the display unit as output image data according to the control signal input from the output processing unit 180.
  • the image synthesizing unit 170 may store the output image data in the storage unit 190 according to the control signal input from the output processing unit 180.
  • the output processing unit 180 controls the generation or output of output image data indicating an OCT image based on an operation signal input from an operation input unit (not shown).
  • the operation input unit may include, for example, a button, a knob, a dial, a mouse, a joystick, and other members that accept user operations and generate operation signals according to the received operations.
  • the operation input unit may be an input interface that receives an operation signal wirelessly or by wire from another device (for example, a portable device such as a remote controller).
  • The operation signal indicates, as parameters, for example, whether to display or store the OCT image, the observation target plane as the observation target area, and the spatial frequency characteristics of the mask.
  • The output processing unit 180 may display, on the display unit, a setting screen presenting the parameters that can be set by operation and their current settings, thereby providing a user interface related to the display of the OCT image. For example, when an operation signal indicating that display of the OCT image is required is input, the output processing unit 180 outputs a control signal indicating that display is required to the image synthesizing unit 170.
  • The image synthesizing unit 170 outputs the output image data to the display unit when a control signal indicating that display is required is input from the output processing unit 180, and does not output the output image data to the display unit when a control signal indicating that display is not required is input.
  • When an operation signal indicating the observation target plane is input, the output processing unit 180 outputs a control signal indicating the observation target plane to the detection signal acquisition unit 130.
  • the detection signal acquisition unit 130 outputs a portion of the three-dimensional OCT signal related to the observation target plane indicated by the control signal input from the output processing unit 180 as an OCT signal.
  • the observation target plane is defined, for example, as parameters such as the depth from the surface of the sample Sm, the observation direction, and the area of the observation area.
• when an operation signal indicating the spatial frequency characteristic of the mask is input, the output processing unit 180 outputs a control signal indicating the spatial frequency characteristic to the mask unit 150.
  • the output processing unit 180 may set the spatial frequency characteristics of each of the plurality of masks based on the input operation signal, and output a control signal indicating the set spatial frequency characteristics to the mask unit 150.
  • the mask unit 150 sets a mask having spatial frequency characteristics indicated by a control signal input from the output processing unit 180.
  • the storage unit 190 stores various data used for processing executed by the control unit 110 and various data acquired by the control unit 110.
  • the storage unit 190 includes, for example, a non-volatile (non-temporary) storage medium such as a ROM (Read Only Memory), a flash memory, and an HDD (Hard Disk Drive).
  • the storage unit 190 includes, for example, a volatile storage medium such as a RAM (Random Access Memory) and a register.
  • FIG. 3 is an explanatory diagram showing an example of image processing according to the present embodiment.
  • the image processing exemplified in FIG. 3 is an application example to directional imaging of an OCT image.
• in this example, a mask having directivity in the spatial frequency domain (hereinafter, a directional mask) is used.
• the directional mask has mask values that depend on the spatial frequency component corresponding to the azimuth angle from the origin in the two-dimensional spatial frequency domain (hereinafter, the azimuth spatial frequency).
• in step S01, the electric field estimation unit 140 performs a two-dimensional Fourier transform on the spatial-region OCT signal indicating the front OCT image Oi01 acquired by the detection signal acquisition unit 130, and generates pupil surface electric field data in the spatial frequency region indicating the pupil surface electric field Pe01.
  • the front OCT image Oi01 is an OCT image with the observation target plane as the front.
• in step S02, the mask unit 150 applies the directional mask Dm01, as an example of a virtual mask, to the pupil surface electric field data generated by the electric field estimation unit 140 to generate mask electric field data indicating the mask electric field Ie01.
• in the directional mask Dm01, the band in which the spatial frequency corresponding to the distance from the origin (hereinafter, the radial spatial frequency) is within a predetermined band and the azimuth spatial frequency lies within the half circle from −π/2 to π/2 is used as the pass band, and the other bands are used as the cutoff band.
  • Step S03 The conversion unit 160 performs a two-dimensional inverse Fourier transform on the mask electric field data generated by the mask unit 150 to generate a mask OCT signal in the spatial region.
  • the image synthesizing unit 170 converts the signal value for each sample point indicated by the mask OCT signal into a pixel value for each pixel, and generates mask image data having the pixel value obtained by the conversion.
  • the image composition unit 170 outputs the generated mask image data as output image data to a display unit (not shown).
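As a minimal sketch of this flow in a NumPy environment (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def masked_oct_image(oct_signal, mask):
    """Apply a virtual pupil-plane mask to a 2-D OCT signal.

    S01: a 2-D Fourier transform of the spatial-region OCT signal
         gives the pupil surface electric field.
    S02: element-wise multiplication by the mask gives the mask
         electric field.
    S03: a 2-D inverse Fourier transform returns the spatial-region
         mask OCT signal; its magnitude serves as the pixel values.
    """
    pupil_field = np.fft.fft2(oct_signal)   # S01
    mask_field = pupil_field * mask         # S02
    mask_signal = np.fft.ifft2(mask_field)  # S03
    return np.abs(mask_signal)
```

An all-pass mask (all ones) reproduces the input image and an all-zero mask cancels it, which gives a quick sanity check for the transform round trip.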
• by applying the directional mask Dm01 to the pupillary surface electric field Pe01, the pupillary surface electric field in the spatial frequency band within the bandwidth π centered on the azimuth spatial frequency 0 (rad) (the main direction) is passed, and the pupillary electric field in the other bands is cut off. From the obtained mask electric field Ie01, an OCT image having directivity with the azimuth spatial frequency 0 (rad) as the main direction is obtained.
  • the user can arbitrarily adjust the directivity of the OCT image by using the acquired OCT signal without driving or adjusting the optical system.
• the output processing unit 180 can set one or both of the azimuth spatial frequency band and the radial spatial frequency band as the pass band through which the electric field is passed, based on the operation signal input from the operation input unit (not shown).
• the mask unit 150 may generate a directional mask by setting the mask value at frequency sample points within the pass band set by the output processing unit 180 to 1 and the mask value at the other frequency sample points to 0.
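A directional mask of this kind can be sketched as follows (NumPy-based illustration; the binary pass/cutoff rule follows the description above, while the helper name is an assumption):

```python
import numpy as np

def directional_mask(shape, main_dir, bandwidth):
    """Binary directional mask: value 1 at frequency sample points
    whose azimuth spatial frequency lies within `bandwidth` (rad)
    centered on `main_dir`, and 0 elsewhere."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]   # vertical spatial frequencies
    fx = np.fft.fftfreq(w)[None, :]   # horizontal spatial frequencies
    azimuth = np.arctan2(fy, fx)      # azimuth angle from the origin
    # angular distance to the main direction, wrapped to [-pi, pi]
    diff = np.angle(np.exp(1j * (azimuth - main_dir)))
    return (np.abs(diff) <= bandwidth / 2).astype(float)
```

Note that the DC sample has azimuth `arctan2(0, 0) = 0` here, so it passes whenever the main direction is near 0; whether to pass DC is a design choice.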
• FIG. 4 is an explanatory diagram showing another example of image processing according to the present embodiment.
  • An application example to virtual multi-directional imaging will be described with reference to FIG.
• in step S02 (FIG. 3), the mask unit 150 generates mask electric field data using M directional masks having different directivities in the spatial frequency domain with respect to a common pupillary electric field.
• in this example, three directional masks Dm01 to Dm03 are used.
• the pass bands of the directional masks Dm01 to Dm03 are set at intervals of 2π/3 and do not overlap each other in the spatial frequency domain.
• in step S03, the conversion unit 160 performs a two-dimensional inverse Fourier transform on each of the M generated mask electric field data to generate M mask OCT signals in the spatial region.
  • the image synthesizing unit 170 converts each of the generated M mask OCT signals in the spatial region into a mask OCT image.
• the image synthesizing unit 170 generates output image data indicating a composite image in which the M converted mask OCT images are displayed in different colors, and outputs the generated output image data to the display unit. As a result, a composite image in which OCT images having different directivities are multiplexed in different colors is displayed.
• the output processing unit 180 may set, for each of the M masks (M is an integer of 2 or more), the pass band through which the electric field is passed, based on the operation signal input from the operation input unit (not shown).
  • the mask unit 150 sets the mask value of the frequency sample points in the pass band set by the output processing unit 180 to 1 for each mask, and sets the mask value of the other frequency sample points to 0.
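A sketch of this multi-mask flow, assuming a NumPy environment (the function name, the 2π/3 spacing from the three-mask example, and the RGB channel assignment are illustrative):

```python
import numpy as np

def multidirectional_composite(oct_signal, m=3):
    """Apply M directional masks whose main directions are spaced
    2*pi/M apart (each with bandwidth 2*pi/M, so the pass bands tile
    the azimuth circle without overlap), and stack the M mask OCT
    images as color channels (RGB when M = 3)."""
    h, w = oct_signal.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    azimuth = np.arctan2(fy, fx)
    pupil_field = np.fft.fft2(oct_signal)
    bandwidth = 2 * np.pi / m
    channels = []
    for k in range(m):
        main_dir = k * bandwidth
        # angular distance to this mask's main direction
        diff = np.angle(np.exp(1j * (azimuth - main_dir)))
        mask = (np.abs(diff) < bandwidth / 2).astype(float)
        channels.append(np.abs(np.fft.ifft2(pupil_field * mask)))
    return np.stack(channels, axis=-1)  # (h, w, M) composite
```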
  • FIG. 5 is an explanatory diagram showing an example of the synthesis process according to the present embodiment.
• in step S11, the image synthesizing unit 170 colors M directional images with different colors to generate colored image data indicating the colored images.
  • the image synthesizing unit 170 colors the two directional images Di11 and Di12 with different colors (for example, red and green) to generate colored images Ci11 and Ci12.
• in the figure, the respective colors are represented by upward-sloping and downward-sloping hatching.
  • the directional images Di11 and Di12 are OCT images generated by using directional masks having directivity 1 and 2, respectively.
• directivity 1 indicates a pass band whose main direction is the azimuth angle 7π/6 and whose bandwidth is π.
• directivity 2 indicates a pass band whose main direction is the azimuth angle −5π/6 and whose bandwidth is π.
  • Step S12 The image synthesizing unit 170 synthesizes the colored images indicated by the M colored image data to generate the composite image data indicating the composite image.
• in the composite image Si11, the colors of the colored images Ci11 and Ci12 are mixed for each part, and each part is represented by the resulting mixed color.
• the diagonally shaded portion indicates a mixed color obtained by mixing the colors of the colored images Ci11 and Ci12.
• the upward-sloping hatched portion indicates a part colored in the colored image Ci11 but not in the colored image Ci12, and the downward-sloping hatched portion indicates a part colored in the colored image Ci12 but not in the colored image Ci11.
• here, hue generally means the tint represented by the dominant wavelength component of image light, and is one of the three attributes of color. It is desirable that the hues of the colors used for coloring differ as much as possible in the color space between the M mask OCT images.
• when M is 2, the color of one mask OCT image may be a complementary color of the color of the other mask OCT image; for example, when the color of one mask OCT image is red, the color of the other may be green.
• when M is 3, for example, the colors of the three mask OCT images may be the three primary colors red, green, and blue.
• the colors of the M mask OCT images may also be set so that simply mixing them yields an achromatic color.
  • An achromatic color is a desaturated color, that is, white, black, or gray of various densities.
• this holds for the examples above: when M is 2 and the two colors are complementary, or when M is 3 and the colors of the three mask OCT images are the three primary colors red, green, and blue, mixing the respective colors results in an achromatic color.
• the image synthesizing unit 170 may generate average image data whose pixel values are the averages of the pixel values indicated by each of the M mask OCT image data. The image synthesizing unit 170 then generates average colored image data showing a new colored image (hereinafter, the average colored image) colored with a color different from any of the M colored images, and synthesizes the colored images shown by the M colored image data and the average colored image shown by the average colored image data to generate composite image data showing a composite image. For example, when M is 2, the image synthesizing unit 170 sets the colors of the two colored images to red and green, and sets the color of the average colored image to blue. As a result, parts where the state corresponding to a particular directivity is prominent stand out in the composite image by their color, in contrast with their average value.
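For M = 2, this variant (two directional images colored red and green plus a blue-coded average) can be sketched as follows, assuming pixel values normalized to [0, 1] (the function and argument names are illustrative):

```python
import numpy as np

def composite_with_average(dir_img_red, dir_img_green):
    """Encode two directional mask images in the R and G channels
    and their pixel-wise average in the B channel."""
    avg = 0.5 * (dir_img_red + dir_img_green)
    return np.stack([dir_img_red, dir_img_green, avg], axis=-1)  # RGB
```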
  • the compositing mode related to the generation of the compositing image is not limited to this.
  • the composite mode refers to a combination of spatial frequency characteristics of the mask used for generation for each of the mask OCT images (for example, directional images) used for generating the composite image.
• FIGS. 6A to 6F are diagrams showing an example of a composite image for each composite mode according to the present embodiment. In generating these composite images, the front OCT image Oi01 is used as the common OCT image throughout FIGS. 6A to 6F.
  • the low-frequency image Li11 shown in FIG. 6A is a composite image generated by using the composite mode 11.
• the composite mode 11 is composed of a first spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle 7π/6 and whose bandwidth is π, and a second spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle −5π/6 and whose bandwidth is π.
• the first spatial frequency characteristic and the second spatial frequency characteristic each have the low frequency range from frequency 1 to frequency 2 as the pass band and the other radial spatial frequencies as the cutoff band.
• frequency 1 is 0 or a predetermined radial spatial frequency sufficiently close to 0.
• frequency 2 is a predetermined radial spatial frequency higher than frequency 1 and lower than the upper limit of the radial spatial frequency.
• the upper limit of the radial spatial frequency is determined by the sampling theorem based on the sample point spacing of the OCT signal. Since the low-frequency image Li11 is generated using the low-frequency components of the pupillary electric field, it contains a relatively coarse pattern with unclear outlines.
  • the high-frequency image Hi11 shown in FIG. 6B is a composite image generated by using the composite mode 12.
• the composite mode 12 is composed of a first spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle 7π/6 and whose bandwidth is π, and a second spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle −5π/6 and whose bandwidth is π.
• the first spatial frequency characteristic and the second spatial frequency characteristic each have the high frequency range from frequency 2 to frequency 3 as the pass band and the other radial spatial frequencies as the cutoff band.
• frequency 3 is a predetermined radial spatial frequency that is higher than frequency 2 and equal to or less than the upper limit of the radial spatial frequency. Since the high-frequency image Hi11 is generated using the high-frequency components of the pupillary electric field, it contains a relatively fine pattern.
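The radial band limitation used by composite modes 11 and 12 can be sketched as a ring-shaped mask (NumPy illustration; frequencies are in cycles per sample as returned by `np.fft.fftfreq`, and the function name is an assumption):

```python
import numpy as np

def radial_band_mask(shape, f_lo, f_hi):
    """Mask passing radial spatial frequencies in [f_lo, f_hi).
    With f_lo near 0 this plays the role of the low band
    (frequency 1 to frequency 2); with a higher range it plays the
    role of the high band (frequency 2 to frequency 3)."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.hypot(fy, fx)   # radial spatial frequency
    return ((radius >= f_lo) & (radius < f_hi)).astype(float)
```

Multiplying this ring mask element-wise with a directional mask yields a mask band-limited in both azimuth and radius.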
  • the composite image Si11 shown in FIG. 6C is a composite image generated by using the composite mode 13.
• the composite mode 13 is composed of a first spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle 7π/6 and whose bandwidth is π, and a second spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle −5π/6 and whose bandwidth is π.
• the first spatial frequency characteristic and the second spatial frequency characteristic impose no band limitation in the radial spatial frequency region. That is, the composite mode 13 is the same as the composite mode consisting of the directivities 1 and 2 illustrated in FIG. 5, and has both the pass band of the composite mode 11 and the pass band of the composite mode 12.
  • the low-frequency image Li12 shown in FIG. 6D is a composite image generated by using the composite mode 14.
• the composite mode 14 is composed of a first spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle 5π/6 and whose bandwidth is π, and a second spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle −7π/6 and whose bandwidth is π.
• the first spatial frequency characteristic and the second spatial frequency characteristic each have the low frequency range from frequency 1 to frequency 2 as the pass band and the other radial spatial frequencies as the cutoff band.
• since the low-frequency image Li12 is generated using masks whose pass-band main directions differ from those used for the low-frequency image Li11, it represents a pattern different from that of the low-frequency image Li11.
• like the low-frequency image Li11, the low-frequency image Li12 shows a relatively coarse pattern with unclear outlines.
  • the high-frequency image Hi12 shown in FIG. 6E is a composite image generated by using the composite mode 15.
• the composite mode 15 is composed of a first spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle 5π/6 and whose bandwidth is π, and a second spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle −7π/6 and whose bandwidth is π.
• the first spatial frequency characteristic and the second spatial frequency characteristic each have the high frequency range from frequency 2 to frequency 3 as the pass band and the other radial spatial frequencies as the cutoff band.
• the high-frequency image Hi12 represents a pattern more similar to the low-frequency image Li12, whose generating masks share the same azimuth spatial frequency pass band, than to the high-frequency image Hi11, whose generating masks share the same radial spatial frequency pass band.
• the high-frequency image Hi12 nevertheless shows a finer pattern than the low-frequency image Li12.
  • the composite image Si12 shown in FIG. 6F is a composite image generated by using the composite mode 16.
• the composite mode 16 is composed of a first spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle 7π/6 and whose bandwidth is π, and a second spatial frequency characteristic indicating a pass band whose main direction is the azimuth angle −5π/6 and whose bandwidth is π.
• the first spatial frequency characteristic and the second spatial frequency characteristic impose no band limitation in the radial spatial frequency region. That is, the composite mode 16 has both the pass band of the composite mode 14 and the pass band of the composite mode 15.
• in the examples described above, the case where some or all of the pass bands of the plurality of masks do not overlap has been taken as an example, but the present invention is not limited to this.
  • a part of the pass band may overlap between at least two masks among the plurality of masks.
• the union of the pass bands of the plurality of masks does not necessarily have to cover the entire spatial frequency band.
• FIGS. 3 to 6F exemplify the case where the observation target plane is a front plane perpendicular to the depth direction of the sample Sm, but any plane that crosses a part or all of the observation target area may be used.
• the output processing unit 180 may allow the direction of the observation target plane to be set by a user operation. More specifically, an operation signal indicating the direction of the observation target plane is input from the operation input unit (not shown), and a control signal indicating the direction of the observation target plane indicated by the operation signal is output to the detection signal acquisition unit 130.
  • the detection signal acquisition unit 130 determines the observation target plane in the direction indicated by the control signal input from the output processing unit 180.
  • the cross-section OCT image Cs21 exemplified in FIG. 7A is an OCT image showing the state of the sample Sm in the cross-section parallel to the depth direction.
  • the cross-sectional OCT image Cs21 represents a pattern different from the front OCT image Oi01.
  • the composite image Si21 exemplified in FIG. 7B is a composite image generated by using the composite mode 13 with respect to the OCT signal related to the cross-sectional OCT image Cs21.
  • the diagonally shaded portion indicates a portion commonly extracted by using a mask having a first directivity and a second directivity constituting the synthesis mode 13.
• the upward-sloping hatched portion indicates a part extracted by the mask having the first directivity but not by the mask having the second directivity, and the downward-sloping hatched portion indicates a part extracted by the mask having the second directivity but not by the mask having the first directivity.
  • the uncolored portion indicates a portion that is not extracted by either the mask having the first directivity or the second directivity.
  • the composite image Si22 exemplified in FIG. 7C is a composite image generated by using the composite mode 16 with respect to the OCT signal related to the cross-sectional OCT image Cs21.
  • the composite image Si22 represents a pattern different from that of the composite image Si21.
• the directional mask may have a narrower azimuth spatial frequency bandwidth.
  • the composite mode of the directional mask Dm31 illustrated in FIG. 8 has two pass bands.
• the bandwidths of the two pass bands are each π/9.
• the main directions of the respective pass bands are the azimuth angles π/2 and −π/2, pointing in opposite directions.
• the mask unit 150 rotates the main directions of the pass bands in azimuth at minute intervals (for example, 1° to 5°) and, for each rotated main direction, applies the directional mask to the pupillary electric field data to generate mask electric field data.
• the conversion unit 160 performs a two-dimensional inverse Fourier transform on each generated mask electric field data to generate a mask OCT signal in the spatial region. The control unit 110 may include an analysis unit (not shown) that analyzes, for each rotated main direction, the mask OCT signal or the change characteristic of the intensity of part or all of the mask image data based on the mask OCT signal.
  • FIGS. 9 and 10 exemplify the main direction dependence of the signal intensity obtained using the bovine Achilles tendon and the chicken breast muscle as samples, respectively.
  • the vertical axis and the horizontal axis indicate the signal strength and the main direction, respectively.
• as the signal strength, the total pixel intensity of the mask image data was used.
• the total pixel intensity corresponds to the sum of the signal values over all pixels.
• FIGS. 9 and 10 each show that the signal strength depends on the main direction and has a pronounced maximum.
  • the main directions in which the maximum values are taken are 152 ° and 116 °, respectively.
  • the maximum value of the signal strength is about four times the average value. This suggests that the tissue structure used as a sample has a high periodicity with respect to the main direction in which the signal intensity reaches the maximum value.
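The rotation analysis can be sketched as follows (NumPy; the two-sided narrow mask follows the FIG. 8 configuration, while the function name and default step size are assumptions):

```python
import numpy as np

def orientation_profile(oct_signal, step_deg=5, bandwidth=np.pi / 9):
    """Rotate the main direction of a two-sided directional mask
    (pass bands at d and d + pi, each of bandwidth pi/9) in steps of
    `step_deg` degrees and record the total intensity of each mask
    OCT image. A pronounced peak indicates a dominant orientation."""
    h, w = oct_signal.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    azimuth = np.arctan2(fy, fx)
    pupil_field = np.fft.fft2(oct_signal)
    directions = np.deg2rad(np.arange(0, 180, step_deg))
    strengths = []
    for d in directions:
        # angular distance modulo pi treats d and d + pi identically
        diff = np.angle(np.exp(2j * (azimuth - d))) / 2
        mask = (np.abs(diff) <= bandwidth / 2).astype(float)
        strengths.append(np.abs(np.fft.ifft2(pupil_field * mask)).sum())
    return directions, np.array(strengths)
```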
• OCT volumes were synthesized for the cases where the z coordinate of the sample model Sp is farther (Δz > 0) and closer (Δz < 0) than the reference point (the focal point of the optical system, corresponding to the focal point of the collimator 50b in FIG. 1).
  • a directional mask was applied to the pupillary electric field data based on the OCT image signals of the front image and the cross-sectional image obtained from each OCT volume to generate mask electric field data for each pass band.
• as the directional mask, the composite mode 32 (FIG. 12A), consisting of a pass band whose main direction is π with a bandwidth of π and a pass band whose main direction is 0 with a bandwidth of π, was used.
• a two-dimensional inverse Fourier transform was performed on the mask electric field data generated for each pass band, and the mask OCT signals in the spatial region were used to generate OCT image data each showing one composite image.
• FIGS. 12A and 12B show the composite image for the front image and the composite image for the cross-sectional image, respectively, generated on the assumption that Δz > 0.
  • Both FIGS. 12A and 12B show patterns according to the period of height in the x direction.
• the brightness is constant in the y direction, while in the x direction there are portions, on the negative side and on the positive side of the center of each period, whose brightness is higher than their surroundings.
• the negative and positive sides of the center of each period correspond to the pass bands whose main directions are π and 0, respectively.
• that is, portions of the surface of the sample model Sp appear brighter than their surroundings according to the gradient of the height in the z direction.
• FIGS. 12C and 12D show the composite image for the front image and the composite image for the cross-sectional image, respectively, generated on the assumption that Δz < 0.
  • the patterns of the composite images shown in FIGS. 12C and 12D are symmetrical with the patterns of the composite images shown in FIGS. 12A and 12B, respectively.
• from these results, it is confirmed that the interface height of each of the bovine Achilles tendon and the chicken pectoralis major used as samples in the examples of FIGS. 9 and 10 has a structure that is highly periodic in one direction (x direction) intersecting the height direction (z direction) and is less dependent on the other direction (y direction) (see FIG. 13). The image of surface portions inclined in the negative direction of the period and the image of portions inclined in the positive direction are associated with the main directions of the individual pass bands, and are displayed in the composite image as different patterns.
• in the above description, the case where the number M of directional masks is one or two has mainly been described, but the present invention is not limited to this.
  • the number M of the masks may be 3 or more, for example, 60, 120, 360, and the like.
• as the number of masks increases, the pass band bandwidth of each mask can be made finer.
• the bandwidth is not limited to the azimuth spatial frequency; it may be fixed or variable with respect to the radial spatial frequency, or with respect to the combination of the azimuth spatial frequency and the radial spatial frequency, and may be further subdivided.
  • the number of masks may be large enough that the individual passbands can be considered spatially continuous.
  • the passbands of each of the M masks do not necessarily have to be periodically, regularly or exhaustively distributed in the spatial frequency domain, and may be randomly or intermittently distributed.
• in the above description, the mask value for each spatial frequency corresponding to each frequency sample point of the mask has mainly been either 0 or 1, but the present invention is not limited to this.
• the mask value may be any real number. This makes it possible to set the degree of passage of the pupillary electric field for each spatial frequency in more detail. For example, by making the mask value in the pass band larger than the mask value in the cutoff band without limiting them to 1 and 0, the contribution of each band to the mask OCT signal can be emphasized or relaxed.
• by allowing intermediate mask values near the boundary between the pass band and the cutoff band, the change in mask value with spatial frequency around the boundary can be made more gradual. This alleviates abnormal spatial changes in the brightness of the mask OCT image due to aliasing noise that may occur in the mask OCT signal.
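One way to realize such a gradual boundary is a raised-cosine taper on the band edges (an illustrative choice of window, not specified in the text):

```python
import numpy as np

def tapered_band(freqs, f_lo, f_hi, taper):
    """Real-valued mask profile: 1 inside [f_lo, f_hi], 0 well
    outside, with raised-cosine ramps of width `taper` at both
    edges so the mask value changes gradually with frequency."""
    rise = np.clip((freqs - (f_lo - taper)) / taper, 0.0, 1.0)
    fall = np.clip((f_hi + taper - freqs) / taper, 0.0, 1.0)
    ramp_up = 0.5 * (1.0 - np.cos(np.pi * rise))
    ramp_down = 0.5 * (1.0 - np.cos(np.pi * fall))
    return ramp_up * ramp_down
```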
  • the mask value for each spatial frequency is not limited to a real number but may be a complex number.
• the mask value may be, for example, a value obtained by dividing the above mask value by a coefficient (a complex number) indicating the phase and amplitude, for each spatial frequency, of the aberration generated in the optical system and the sample to be measured.
• the generation of the mask electric field data, and thus of the mask OCT signal, using the mask described above can be realized by complex-number computation in the spatial frequency domain. Conversely, it cannot be achieved by existing hardware such as optical components, or by image processing that handles only real-valued luminance or color signal values.
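As a minimal numeric illustration of such a complex-valued mask (the arrays and names are assumptions): a pure phase aberration is exactly cancelled by dividing the mask value by the aberration coefficient.

```python
import numpy as np

# per-frequency phase error of a hypothetical aberration
phi = np.linspace(0.0, np.pi, 8)
aberration = np.exp(1j * phi)            # complex coefficient (unit amplitude)

binary_mask = np.ones(8)                 # all-pass real mask
complex_mask = binary_mask / aberration  # mask value divided by the coefficient

aberrated_field = np.exp(1j * phi)       # pupil field distorted by the phase error
corrected_field = aberrated_field * complex_mask  # mask application (step S02)
# corrected_field is 1 at every frequency: the phase error is removed
```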
  • the control unit 110 may further include an analysis unit (not shown) that analyzes the spatial frequency characteristics of the OCT signal.
• for example, the analysis unit may repeat a process of performing steps S01 to S03 using a mask whose pass band is an azimuth spatial frequency band of a predetermined bandwidth, calculating the power of the generated mask OCT signal, and then changing the main direction of the pass band to be processed.
• as a result, the analysis unit can acquire the azimuth spatial frequency dependence of the power and specify the azimuth spatial frequency at which the power takes its maximum, a local maximum, its minimum, or a local minimum.
  • the analysis of the signal strength illustrated in FIGS. 9 and 10 using the directional mask illustrated in FIG. 8 corresponds to the analysis example of the azimuth spatial frequency.
• the analysis unit may calculate the power of the mask OCT signal for each pass band determined by the radial spatial frequency band instead of, or together with, the azimuth spatial frequency band. As a result, the analysis unit can acquire the spatial frequency dependence of the power and specify the spatial frequency at which the power takes its maximum, a local maximum, its minimum, or a local minimum.
  • the analysis unit may execute the process of calculating the power for each part of the observation target plane indicated by the OCT signal, instead of the entire plane.
• as a result, the analysis unit can acquire the spatial frequency dependence of the power for each site and specify, for each site, the spatial frequency at which the power takes its maximum, a local maximum, its minimum, or a local minimum.
• the analysis unit can also determine, for each site, the range of power values taken for each spatial frequency, and identify sites whose spatial frequency dependence is more pronounced than a predetermined range.
• the output processing unit 180 may output, to the display unit or other equipment, information indicating at least one of the power calculated by the analysis unit, the specified spatial frequency, and the specified site, according to the operation signal input from the operation input unit.
• as described above, the image processing apparatus 100 includes the electric field estimation unit 140, which converts an optical interference tomographic signal representing the state of the sample into the spatial frequency domain and generates electric field data indicating the electric field in the pupil surface corresponding to the reciprocal space of the sample.
• the image processing apparatus 100 also includes the conversion unit 160; a mask indicating the distribution of the passage characteristics of the electric field on the pupil surface is applied to the electric field data to generate mask electric field data, and the conversion unit 160 converts the mask electric field data into the spatial region to generate a masked optical interference tomographic signal.
  • the image synthesizing unit 170 can acquire a masked optical tomographic image by converting the signal value of the acquired masked optical tomographic signal into a luminance value. Therefore, even if the optical system is not actually equipped with various optical components, the distribution of the passing characteristics of the electric field according to the spatial conditions related to the observation of the sample can be virtually adjusted by the action of the mask.
• the passage characteristic distribution of the mask may indicate the azimuth spatial frequency band through which the electric field on the pupil surface is passed.
• with this configuration, the azimuth spatial frequency band serving as the pass band of the electric field is indicated as the passage characteristic distribution of the mask, so that observation of the sample with directivity in the observation direction according to the pass band is realized.
• by acquiring a mask image showing the structure of the sample based on the mask, it is possible to eliminate or relax restrictions on the observation direction and to visualize the anisotropy of the fine structure of the structures constituting the sample.
• the passage characteristic distribution of the mask may indicate the radial spatial frequency band through which the electric field on the pupil surface is passed.
• with this configuration, the radial spatial frequency band serving as the pass band of the electric field is indicated as the passage characteristic distribution of the mask, so that observation of the sample with a spatial resolution according to the pass band is realized.
  • the mask may indicate the presence / absence of passage of an electric field for each sample point corresponding to the spatial frequency.
  • the spatial frequency band through which the pupillary electric field passes is defined as digital data, so that the calculation related to the action of the mask can be simplified.
  • The image processing device 100 may include an image synthesis unit 170 that generates a composite image by combining, in different display modes, mask images based on each of M masked optical coherence tomographic signals.
  • The mask unit 150 applies M masks having different pass characteristic distributions to the electric field data to acquire M sets of masked electric field data.
  • The conversion unit 160 converts each of the M sets of masked electric field data into the spatial domain to generate M masked optical coherence tomographic signals.
  • The hues of the colors representing the M mask images may differ from one another.
  • The user can intuitively identify each mask image based on the colors appearing in the composite image, and can easily grasp the parts where the state of the sample corresponding to the pass characteristic distribution (for example, directivity) appears.
  • The color obtained by combining the colors representing the M mask images may be achromatic.
  • Although the case where the image processing device 100 is a part of the optical coherence tomograph 1 is taken as an example, the present invention is not limited to this.
  • The image processing device 100 may be a single device that is independent of the optical coherence tomograph 1 and does not have an optical system.
  • The optical system control unit 120 may be omitted from the control unit 110 of the image processing device 100.
  • The detection signal acquisition unit 130 is not limited to acquiring signals from the optical system, and may acquire a detection signal or an OCT signal from another device such as a data storage device or a PC, by wire or wirelessly, for example, via a network.
  • The image processing device 100 may include the above-mentioned operation input unit and display unit, or one or both of them may be omitted.
  • In the control unit 110 of the image processing device 100, one or both of the image synthesis unit 170 and the output processing unit 180 may be omitted.
  • The conversion unit 160 may output the generated masked OCT signal to another device such as a data storage device, a PC, or another image processing device.
  • The device serving as the output destination may have the same function as the image synthesis unit 170, that is, a function of generating output image data based on the masked OCT signal input from the image processing device 100 and displaying an image based on the generated output image data.
  • The color system adopted by the image synthesis unit 170 is not necessarily limited to the RGB color system, and may be another color system, for example, the YCbCr color system.
  • The display mode of each of the M mask images is not limited to hue and gradation; other methods may be used, such as patterns (halftone dots, grids, diagonal lines), brightness changes such as blinking, and temporal changes of hue.
  • A program for realizing this control function may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read into a computer system and executed.
  • The "computer system" referred to here is a computer system built into the image processing device 100, and includes an OS and hardware such as peripheral devices.
  • The "computer-readable recording medium" refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into a computer system.
  • The "computer-readable recording medium" may also include a medium that dynamically holds the program for a short period of time, such as a communication line when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and a medium that holds the program for a certain period of time, such as a volatile memory inside a computer system serving as a server or a client.
  • The above-mentioned program may be a program for realizing a part of the above-mentioned functions, or may be a program that realizes the above-mentioned functions in combination with a program already recorded in the computer system.
  • A part or all of the image processing device 100 in the above-described embodiment may be realized as an integrated circuit such as an LSI (Large Scale Integration).
  • Each functional block of the image processing device 100 may be individually implemented as a processor, or a part or all of them may be integrated into a processor.
  • The method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. Further, if an integrated circuit technology that replaces LSI emerges with advances in semiconductor technology, an integrated circuit based on that technology may be used.
  • 1 ... Optical coherence tomograph, 10 ... Light source, 20 ... Beam splitter, 30a, 30b, 50a, 50b ... Collimator, 40 ... Reference mirror, 60a, 60b ... Galvanometer mirror, 70 ... Spectroscope, 100 ... Image processing device, 110 ... Control unit, 120 ... Optical system control unit, 130 ... Detection signal acquisition unit, 140 ... Electric field estimation unit, 150 ... Mask unit, 160 ... Conversion unit, 170 ... Image synthesis unit, 180 ... Output processing unit, 190 ... Storage unit
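The azimuthal and radial pass characteristic distributions summarized in the list above can be illustrated with a short sketch. This is a minimal NumPy illustration, not code from the disclosure: the normalized pupil grid, the band parameters, and the box filter used to make the pass/block boundary change monotonically across a few sample points are all assumptions.

```python
import numpy as np

def smooth(mask, k=3):
    """Separable box filter: makes mask values change monotonically
    across a few sample points at the pass/block boundary."""
    kernel = np.ones(k) / k
    mask = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, mask)
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, mask)

def pupil_masks(n=129, phi0=0.0, dphi=np.pi / 2, rmax=0.8):
    """Build an azimuthal-band mask and a radial-band mask on a
    normalized pupil-plane (spatial frequency) grid."""
    fy, fx = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
    phi = np.arctan2(fy, fx)        # azimuth of each spatial-frequency sample
    rho = np.hypot(fx, fy)          # normalized spatial-frequency radius

    # Azimuthal pass band: pass samples whose azimuth is within dphi/2 of phi0.
    dang = np.angle(np.exp(1j * (phi - phi0)))  # wrapped angular distance
    azimuthal = smooth((np.abs(dang) <= dphi / 2).astype(float))

    # Radial pass band: pass samples with rho <= rmax (limits resolution).
    radial = smooth((rho <= rmax).astype(float))
    return azimuthal, radial

az, rad = pupil_masks()
```

Multiplying the pupil-plane electric field by `az` keeps only spatial-frequency components around the azimuth `phi0`, giving the observation a directivity, while `rad` limits the radial band and hence the effective spatial resolution.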

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

According to the present invention, an electric field estimation unit converts an optical coherence tomographic signal representing the state of a sample into the spatial frequency domain and generates electric field data indicating the electric field on a pupil plane corresponding to the reciprocal space of the sample; a mask unit applies a mask indicating the pass characteristic distribution of the electric field on the pupil plane to the electric field data to generate masked electric field data; and a conversion unit converts the masked electric field data into the spatial domain to generate a masked optical coherence tomographic signal. The invention may be embodied as any of an image processing device, an image processing method, and a program.

Description

Image processing device, image processing method, and program

The present invention relates to an image processing device, an image processing method, and a program.
The present application claims priority based on Japanese Patent Application No. 2019-206436 filed in Japan on November 14, 2019, the contents of which are incorporated herein by reference.
Optical coherence tomography (OCT) is a technique for acquiring tomographic images of a sample (mainly a living body) using the coherence of light. With OCT, it is possible to acquire images representing not only the surface of a sample but also its internal structure with high spatial resolution. OCT has already been put into practical use for retinal diagnosis in ophthalmology. It is also used to visualize and quantify characteristics of tissue structures and fine fiber structures below the optical resolution (for example, their directionality and the statistical properties of their size), using observed tissues such as cultured tissues and ex vivo samples.
When a sample is imaged from a plurality of directions using OCT, images having different signal intensity patterns are obtained for each imaging direction. The change in the signal intensity pattern is considered to be due to the fine structure of the tissue constituting the sample. Therefore, as shown in Non-Patent Document 1, a method of indirectly visualizing the characteristics of the fine structure of a tissue by performing multi-directional OCT measurement has been attempted.
However, since the method described in Non-Patent Document 1 realizes multi-directional observation by means of hardware, the configuration of the apparatus becomes complicated and economical realization is difficult. In addition, the measurement time tends to be long because the measurement is performed a plurality of times, so applying the method to a living sample is not always realistic. Further, since the number of measurements is limited, constraints arise on the measurement target; for example, the method is unsuitable for quantitative observation that requires a large number of measurements.
The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image processing device, an image processing method, and a program that can eliminate or relax constraints on observation, such as the observation direction, measurement time, number of measurements, and measurement target, and can more easily adjust the spatial conditions of sample observation.
(1) The present invention has been made to solve the above problems, and one aspect of the present invention is an image processing device including: an electric field estimation unit that converts an optical coherence tomographic signal representing the state of a sample into the spatial frequency domain to generate electric field data indicating the electric field on a pupil plane corresponding to the reciprocal space of the sample; a mask unit that applies a mask indicating a pass characteristic distribution of the electric field on the pupil plane to the electric field data to generate masked electric field data; and a conversion unit that converts the masked electric field data into the spatial domain to generate a masked optical coherence tomographic signal.
(2) Another aspect of the present invention is the image processing device of (1), wherein the pass characteristic distribution may indicate an azimuthal spatial frequency band in which the electric field is passed on the pupil plane.
(3) Another aspect of the present invention is the image processing device of (1) or (2), wherein the pass characteristic distribution may indicate a radial spatial frequency band in which the electric field is passed on the pupil plane.
(4) Another aspect of the present invention is the image processing device of (1) or (2), wherein the mask may indicate, for each sample point corresponding to a spatial frequency, whether or not the electric field is passed.
(5) Another aspect of the present invention is the image processing device of (4), wherein the mask may have mask values that change monotonically between sample points within a predetermined range from the boundary between a pass region that passes the electric field and a blocking region that does not pass the electric field.
(6) Another aspect of the present invention is the image processing device of any one of (1) to (5), which may include a synthesis unit that generates a mask image showing the structure of the sample based on the masked optical coherence tomographic signal.
(7) Another aspect of the present invention is the image processing device of any one of (1) to (6), which includes a synthesis unit that generates a composite image by combining, in different display modes, mask images based on each of M (M is an integer of 2 or more) masked optical coherence tomographic signals, wherein the mask unit may apply M masks having different pass characteristic distributions to the electric field data to acquire M sets of masked electric field data, and the conversion unit may convert each of the M sets of masked electric field data into the spatial domain to generate the M masked optical coherence tomographic signals.
(8) Another aspect of the present invention is the image processing device of (7), wherein, as the display mode, the hues of the colors representing the M mask images may differ from one another.
(9) Another aspect of the present invention is the image processing device of (7), wherein, as the display mode, the color obtained by combining the colors representing the M mask images among the M images may be achromatic.
(10) Another aspect of the present invention is a method in an image processing device, including: an electric field estimation step of converting an optical coherence tomographic signal representing the state of a sample into the spatial frequency domain to generate electric field data indicating the electric field on a pupil plane corresponding to the reciprocal space of the sample; a mask step of applying a mask indicating a pass characteristic distribution of the electric field on the pupil plane to the electric field data to generate masked electric field data; and a conversion step of converting the masked electric field data into the spatial domain to generate a masked optical coherence tomographic signal.
(11) Another aspect of the present invention is a program for causing a computer of an image processing device to execute: an electric field estimation procedure of converting an optical coherence tomographic signal representing the state of a sample into the spatial frequency domain to generate electric field data indicating the electric field on a pupil plane corresponding to the reciprocal space of the sample; a mask procedure of applying a mask indicating a pass characteristic distribution of the electric field on the pupil plane to the electric field data to generate masked electric field data; and a conversion procedure of converting the masked electric field data into the spatial domain to generate a masked optical coherence tomographic signal.
According to the present invention, the spatial conditions of sample observation can be adjusted more easily. For example, adjustment of the observation direction and multi-directional observation can be virtually realized without providing or adjusting optical components for acquiring reflected light from each direction.
A configuration diagram showing an example of the optical coherence tomograph according to the present embodiment.
A block diagram showing a functional configuration example of the signal processing device according to the present embodiment.
An explanatory diagram showing an example of image processing according to the present embodiment.
An explanatory diagram showing another example of image processing according to the present embodiment.
An explanatory diagram showing an example of synthesis processing according to the present embodiment.
A diagram showing a first example of a composite image according to the present embodiment.
A diagram showing a second example of a composite image according to the present embodiment.
A diagram showing a third example of a composite image according to the present embodiment.
A diagram showing a fourth example of a composite image according to the present embodiment.
A diagram showing a fifth example of a composite image according to the present embodiment.
A diagram showing a sixth example of a composite image according to the present embodiment.
A diagram showing an example of a cross-sectional OCT image according to the present embodiment.
A diagram showing a first example of a composite image for a cross-sectional OCT image according to the present embodiment.
A diagram showing a second example of a composite image for a cross-sectional OCT image according to the present embodiment.
A diagram showing another example of the directional mask according to the present embodiment.
A diagram showing a first example of the main-direction dependence of signal intensity.
A diagram showing a second example of the main-direction dependence of signal intensity.
A diagram showing an example of a sample model.
A diagram showing a seventh example of a composite image according to the present embodiment.
A diagram showing an eighth example of a composite image according to the present embodiment.
A diagram showing a ninth example of a composite image according to the present embodiment.
A diagram showing a tenth example of a composite image according to the present embodiment.
A diagram showing an example of tissue structure estimation.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
FIG. 1 is a configuration diagram showing an example of an optical coherence tomograph 1 according to the present embodiment.
The optical coherence tomograph 1 constitutes an observation system for observing the state of a sample using OCT.
The optical coherence tomograph 1 irradiates a sample Sm with light, acquires interference light generated by interference between the light reflected from the sample Sm and reference light reflected by a reference mirror 40 (described later), and generates, from the acquired interference light, an image showing the surface of the sample Sm and its internal state.
The object to be observed as the sample Sm may be, for example, a human or animal living body, or a non-living object. The living body may be a fundus, a blood vessel, a tooth, subcutaneous tissue, or the like. The non-living object may be an artificial structure such as an electronic component or a mechanical component, a natural structure such as stone or a mineral, or a substance having no specific shape.
The optical coherence tomograph 1 includes a light source 10, a beam splitter 20, collimators 30a, 30b, 50a, 50b, a reference mirror 40, galvanometer mirrors 60a, 60b, a spectroscope 70, and an image processing device 100. Among these components, the beam splitter 20, the collimators 30a, 30b, 50a, 50b, the reference mirror 40, the galvanometer mirrors 60a, 60b, and the spectroscope 70 constitute an optical system called an interferometer. The interferometer illustrated in FIG. 1 is a Michelson interferometer including optical fibers F. More specifically, the light source 10, the spectroscope 70, the collimator 30a, and the collimator 50a are each connected to the beam splitter 20 via an optical fiber F. The optical fiber F has a transmission band including the wavelength band of the light emitted from the light source 10.
The optical coherence tomograph 1 employs Fourier-domain OCT (FD-OCT). Any of the methods classified as FD-OCT, such as spectral-domain OCT (SD-OCT) and swept-source OCT (SS-OCT), may be adopted in the optical coherence tomograph 1.
The light source 10 is a light source such as an ultrashort pulse laser or an SLD (Superluminescent Diode). The light source 10 emits probe light having, for example, a near-infrared wavelength (for example, 800 to 1000 nm) and low coherence. The light emitted from the light source 10 is guided inside the optical fiber F and enters the beam splitter 20.
The beam splitter 20 separates the incident light into light guided toward the collimator 30a (hereinafter, reference light) and light guided toward the collimator 50a (hereinafter, measurement light). The beam splitter 20 is, for example, a cube beam splitter.
The collimator 30a converts the reference light guided from the beam splitter 20 into parallel light and emits the parallel light toward the collimator 30b.
The collimator 30b condenses the parallel light incident from the collimator 30a and emits the condensed reference light toward the reference mirror 40. The collimator 30b also receives the reference light reflected by the reference mirror 40, converts it into parallel light, and emits the converted parallel light toward the collimator 30a.
The collimator 30a condenses the parallel light incident from the collimator 30b and guides it toward the beam splitter 20.
On the other hand, the collimator 50a converts the measurement light guided from the beam splitter 20 into parallel light and emits the converted parallel light toward the galvanometer mirror 60a. The parallel light incident from the collimator 50a is reflected at the surfaces of the galvanometer mirrors 60a and 60b and emitted toward the collimator 50b. The collimator 50b condenses the parallel light incident from the collimator 50a via the galvanometer mirrors 60a and 60b, and irradiates the sample Sm with the condensed measurement light. The measurement light applied to the sample Sm is reflected at a reflecting surface of the sample Sm and enters the collimator 50b. The reflecting surface is not limited to the boundary surface between the sample Sm and its surrounding environment (for example, the atmosphere); it can also be a boundary surface inside the sample Sm separating materials or tissues having different refractive indices. Hereinafter, the light reflected at a reflecting surface of the sample Sm and incident on the collimator 50b is referred to as reflected light.
The collimator 50b emits the incident reflected light toward the galvanometer mirror 60b. The reflected light is reflected at the surfaces of the galvanometer mirrors 60b and 60a and emitted toward the collimator 50a. The collimator 50a condenses the parallel light incident via the galvanometer mirrors 60b and 60a and guides the condensed reflected light toward the beam splitter 20.
The beam splitter 20 guides the reference light reflected by the reference mirror 40 and the reflected light reflected by the sample Sm to the spectroscope 70 via the optical fiber F.
The spectroscope 70 includes a diffraction grating and a light-receiving element. The diffraction grating disperses the reference light and the reflected light guided from the beam splitter 20. The dispersed reference light and reflected light interfere with each other to form interference light. The light-receiving element is arranged on the imaging surface irradiated with the interference light. The light-receiving element detects the irradiated interference light and generates a signal based on the detected interference light (hereinafter, detection signal). The light-receiving element outputs the generated detection signal to the image processing device 100.
The image processing device 100 acquires, from the detection signal input from the spectroscope 70, an OCT signal representing the state of the sample Sm. The image processing device 100 sequentially changes the observation point in a predetermined order, and sequentially accumulates the detection signals acquired at each observation point, or collectively for each group of observation points forming a predetermined unit, to generate an OCT signal for a predetermined observation region.
The image processing device 100 converts the generated OCT signal into the spatial frequency domain and estimates, as the pupil-plane electric field, the electric field of the reflected light on the pupil plane corresponding to the reciprocal space of the sample Sm. The image processing device 100 applies a mask indicating the pass characteristic distribution of the electric field on the pupil plane to the electric field data indicating the estimated electric field, and acquires masked electric field data. The image processing device 100 converts the acquired masked electric field data into the spatial domain to generate a masked OCT signal, and generates a masked OCT image based on the generated masked OCT signal.
The image processing device 100 may apply M masks having different pass characteristic distributions (M is an integer of 2 or more) to the electric field data to generate M sets of masked electric field data, and may convert each of the generated M sets of masked electric field data into the spatial domain to generate M mask images. The image processing device 100 may combine the generated M mask images in different display modes (for example, colors) to generate a composite image.
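The compositing step could, for example, assign each of the M mask images its own RGB color and sum the colored images; when identical images are given red, green, and blue the sum is achromatic. A minimal sketch (the function name and color choices are illustrative assumptions, not taken from the disclosure):

```python
import numpy as np

def composite(mask_images, colors):
    """Combine M intensity images into one RGB image,
    rendering each image in its own color (display mode)."""
    rgb = np.zeros(mask_images[0].shape + (3,))
    for img, color in zip(mask_images, colors):
        rgb += img[..., None] * np.asarray(color, dtype=float)
    return np.clip(rgb, 0.0, 1.0)

# Three identical images in red, green, and blue sum to an achromatic gray.
imgs = [np.full((4, 4), 0.5)] * 3
out = composite(imgs, [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
```

In a composite built this way, a region that responds to only one mask keeps that mask's hue, while regions that respond equally to all masks appear gray, which is how directivity can be read off at a glance.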
Next, a functional configuration example of the image processing device 100 according to the present embodiment will be described.
FIG. 2 is a block diagram showing a functional configuration example of the image processing device 100 according to the present embodiment.
The image processing device 100 includes a control unit 110 and a storage unit 190. Some or all of the functions of the control unit 110 are realized, for example, as a computer including a processor such as a CPU (Central Processing Unit). The processor reads a program stored in advance in the storage unit 190 and performs the processing instructed by the commands described in the read program, thereby fulfilling its functions. In the present application, performing the processing instructed by the commands described in a program may be referred to as executing the program, or execution of the program. A part or all of the control unit 110 is not limited to general-purpose hardware such as a processor, and may include dedicated hardware such as an LSI (Large Scale Integration) or an ASIC (Application Specific Integrated Circuit).
The control unit 110 includes an optical system control unit 120, a detection signal acquisition unit 130, an electric field estimation unit 140, a mask unit 150, a conversion unit 160, an image synthesis unit 170, and an output processing unit 180.
The optical system control unit 120 drives a drive mechanism that changes the positions of the galvanometer mirrors 60a and 60b, and thereby scans the sample points, which are the observation points of the sample Sm. The sample points of the sample Sm are scanned both in the depth direction of the sample Sm and in a direction intersecting the depth direction (for example, a direction parallel to the front surface of the sample Sm). In the following description, the depth direction of the sample is referred to as the z direction of a three-dimensional Cartesian coordinate system, and the first and second directions orthogonal to the z direction are referred to as the x direction and the y direction, respectively. Signal acquisition in the depth direction of the sample is called an A-scan, and signal acquisition in a direction intersecting that direction (for example, the x direction or the y direction) is called a B-scan.
The detection signal acquisition unit 130 sequentially acquires detection signals from the spectroscope 70. Based on the acquired detection signals, the detection signal acquisition unit 130 acquires, for each sample point arranged at predetermined intervals in the three-dimensional space where the sample Sm exists, signal values indicating the intensity distribution of the reflected light in the depth direction of the sample Sm. The depth direction of the sample Sm corresponds to the incident direction (z direction) of the measurement light. The detection signal acquisition unit 130 repeats the process of acquiring the intensity distribution of the reflected light for each observation point changed by signal acquisition along a plane intersecting the z direction (for example, the x-y plane). The acquired intensity distribution of the reflected light is based on the depth-direction distribution of the refractive index of the sample Sm. In this way, the detection signal acquisition unit 130 can acquire data representing the state of the sample Sm in the observable three-dimensional region (hereinafter, observable region) as a three-dimensional OCT signal (also called an OCT volume). The detection signal acquisition unit 130 stores the three-dimensional OCT signal in the storage unit 190. The plane intersecting the z direction does not necessarily have to be orthogonal to the z direction; it only needs to be non-parallel to the z direction. Further, in a two-dimensional plane intersecting the z direction, the individual sample points do not necessarily have to be arranged on the lattice points of an orthogonal lattice. The individual sample points may be arranged, for example, on the lattice points of an oblique lattice, or may be arranged aperiodically.
In the above description, signal acquisition involving a change of the target sample point, or collective signal acquisition over a plurality of sample points, is referred to as a "scan"; however, this does not necessarily involve scanning by driving members of the optical system (for example, the collimator 50b or the support of the sample Sm) or the detector. In that case, the detection signal acquisition unit 130 takes on the functions related to signal acquisition, and those functions are omitted from the optical system control unit 120. For example, when an A-scan is performed at one point on the x-y plane using FD-OCT as the OCT method, the detection signal acquisition unit 130 detects, as a detection signal, interference light based on reflected light whose frequency components differ depending on the depth within the sample Sm. The detection signal acquisition unit 130 can Fourier-transform the detection signal to calculate a conversion coefficient for each frequency, and determine the conversion coefficient of the depth associated with each frequency as the signal value at that depth. Further, when scanning by driving in the x direction or the y direction is not required, the optical system control unit 120 may be omitted.
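The FD-OCT principle described above can be illustrated with a minimal sketch; the grid size, the single-reflector test signal, and the assumption that the spectrum is sampled linearly in wavenumber are illustrative choices, not part of the embodiment.

```python
import numpy as np

def a_scan(spectrum):
    """Minimal sketch of an FD-OCT A-scan: Fourier-transforming the spectral
    interferogram maps each frequency bin to a depth. Assumes the spectrum is
    sampled linearly in wavenumber (an illustrative assumption)."""
    depth_profile = np.fft.fft(spectrum)
    # Keep the positive-depth half; the transform of a real-valued spectrum
    # is conjugate-symmetric.
    return depth_profile[: len(spectrum) // 2]

# Synthetic test signal: one reflector produces a cosine fringe whose
# frequency is proportional to its depth (depth bin 100 here).
n = 1024
k = np.arange(n)
spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * 100 * k / n)
profile = np.abs(a_scan(spectrum))
# The strongest non-DC peak lies at depth bin 100.
```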
The detection signal acquisition unit 130 extracts, from the acquired three-dimensional OCT signal, the portion representing the intensity distribution of the reflected light in the two-dimensional plane to be observed (hereinafter, the observation target plane) as a two-dimensional OCT signal (hereinafter sometimes simply called an OCT signal). The observation target plane is, for example, the front surface of the sample Sm. The front surface is the plane orthogonal to the depth direction of the sample Sm (that is, the x-y plane), and is also called the en face plane. The detection signal acquisition unit 130 stores the extracted OCT signal in the storage unit 190.
The detection signal acquisition unit 130 may select, as the observation target plane, for example, the two-dimensional plane (for example, the x-y plane) at the depth (that is, the z coordinate) indicated by a control signal input from the output processing unit 180.
In this embodiment, it is desirable that the distance between adjacent sample points in the observable region is equal to or less than the spatial resolution of the optical system.
The electric field estimation unit 140 reads the OCT signal stored in the storage unit 190, performs a two-dimensional Fourier transform on the read spatial-domain OCT signal, and generates spatial-frequency-domain data. The spatial-frequency-domain data obtained by the Fourier transform corresponds to data (hereinafter, electric field data) indicating the electric field in the pupil plane (hereinafter, pupil-plane electric field). That is, by performing a two-dimensional Fourier transform on the OCT signal of the observation target plane, the electric field of the reflected light projected onto the pupil plane is estimated. In general, the transform coefficient for each frequency sample point (frequency bin) in the spatial frequency domain is a complex number, represented by its absolute value and argument. Accordingly, in the electric field data, the amplitude and the phase are represented by the absolute value and the argument of each transform coefficient. The electric field estimation unit 140 stores the generated spatial-frequency-domain data in the storage unit 190 as electric field data.
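The electric field estimation step can be sketched as follows; the 64 x 64 random complex signal is an illustrative stand-in for a real en face OCT signal, and the centering of the frequency origin is a presentation choice, not prescribed by the embodiment.

```python
import numpy as np

# Minimal sketch: a 2-D Fourier transform of the complex en-face OCT signal
# gives the virtual pupil-plane field; amplitude and phase come from each
# complex transform coefficient.
rng = np.random.default_rng(0)
oct_signal = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

pupil_field = np.fft.fftshift(np.fft.fft2(oct_signal))  # frequency origin at center

amplitude = np.abs(pupil_field)  # absolute value of each transform coefficient
phase = np.angle(pupil_field)    # argument of each transform coefficient
```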
As illustrated in FIG. 3, the pupil plane Pp corresponds to the back-focal plane, which corresponds to the reciprocal space of the sample Sm. That is, the pupil plane Pp is a plane arranged on the side of the objective lens Ol opposite the sample Sm, onto which the reflected light diffused from the sample Sm and collimated by the objective lens Ol is virtually projected. The entire band of spatial frequencies obtained by the transform corresponds to the observation region of the electric field on the pupil plane. The intersection of the optical axis of the objective lens Ol with the pupil plane corresponds to the origin of the spatial frequencies. Each spatial frequency in the spatial frequency domain corresponds to a sample point in the observation region. The sample Sm is placed at a position intersecting the optical axis of the objective lens Ol.
Returning to FIG. 2, the mask unit 150 reads the electric field data stored in the storage unit 190 and applies a mask to the read electric field data to generate mask electric field data indicating a masked electric field. The mask is numerical data indicating the pass-characteristic distribution of the electric field of the reflected light in the spatial frequency domain. The mask specifies, as a mask value, a numerical value indicating the pass characteristic of the electric field at each frequency sample point corresponding to an individual spatial frequency. For example, the mask value for each frequency sample point takes one of the two values 1 and 0, where 1 and 0 indicate whether the electric field passes or not, respectively. Accordingly, the distribution of frequency sample points whose mask value is 1 indicates the pass band of the spatial frequency domain through which the electric field passes in the reciprocal space of the sample Sm. From another viewpoint, the pass band represents a virtual aperture on the pupil plane through which the reflected light passes. Conversely, the distribution of frequency sample points whose mask value is 0 indicates the cutoff band of the spatial frequency domain through which the electric field does not pass in the reciprocal space of the sample Sm. From another viewpoint, the cutoff band represents a virtual shield on the pupil plane that blocks the reflected light.
When applying the mask, the mask unit 150 multiplies the electric field value at each frequency sample point indicated by the electric field data by the mask value at that frequency sample point. The mask unit 150 generates, as mask electric field data, data having the product obtained by this multiplication as the masked electric field value at each frequency sample point. The mask electric field data is used to generate an OCT image that reflects spatial characteristics (for example, directivity) corresponding to the pass-characteristic distribution, as described later. The mask unit 150 stores the generated mask electric field data in the storage unit 190. Specific examples of masks will be described later.
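The masking step is a per-bin multiplication, which can be sketched as follows; the 8 x 8 size and the half-plane pass band are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the mask application step: the mask value at each
# frequency sample point multiplies the electric field value at that point.
pupil_field = np.ones((8, 8), dtype=complex)  # stand-in for the pupil-plane field

mask = np.zeros((8, 8))
mask[:, 4:] = 1.0  # pass band (mask value 1); the rest is the cutoff band (0)

mask_field = mask * pupil_field  # element-wise product per frequency sample point
```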
The conversion unit 160 reads the mask electric field data stored in the storage unit 190, performs a two-dimensional inverse Fourier transform on the read spatial-frequency-domain mask electric field data, and generates a spatial-domain mask OCT signal. The conversion unit 160 stores the generated spatial-domain mask OCT signal in the storage unit 190.
When M masks having mutually different electric-field pass-characteristic distributions in the spatial frequency domain are set, the mask unit 150 may apply each of the M masks to the read electric field data to generate M sets of mask electric field data. In that case, the conversion unit 160 converts each of the M sets of mask electric field data in the spatial frequency domain into a mask OCT signal in the spatial domain.
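Applying M masks and inverting each can be sketched as follows; the random field and the two half-plane masks (M = 2) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: one inverse 2-D FFT per masked field yields M
# spatial-domain mask OCT signals.
rng = np.random.default_rng(1)
pupil_field = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))

left = np.zeros((32, 32))
left[:, :16] = 1.0   # mask 1: left half-plane pass band
right = np.zeros((32, 32))
right[:, 16:] = 1.0  # mask 2: right half-plane pass band
masks = [left, right]

mask_oct_signals = [np.fft.ifft2(m * pupil_field) for m in masks]
```

Because the two pass bands partition the frequency plane and the FFT is linear, the two mask OCT signals sum to the unmasked inverse transform.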
The image synthesizing unit 170 reads the mask OCT signal stored in the storage unit 190 and converts the signal value at each sample point in the observation target plane indicated by the read mask OCT signal into a luminance value for each pixel using a predetermined conversion function. The converted luminance value takes a value within the range expressible by the bit depth of each pixel. The image synthesizing unit 170 generates mask image data having the luminance value converted for each sample point. The generated mask image data represents a mask image reflecting the spatial characteristics corresponding to the pass-characteristic distribution.
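The conversion to pixel luminance can be sketched as follows; the log-magnitude mapping is only an assumed example of the "predetermined conversion function", which the embodiment does not fix, and the bit depth and input values are illustrative.

```python
import numpy as np

def to_luminance(signal, bit_depth=8):
    """Map OCT signal magnitudes to integer luminance values within the
    range expressible by the given bit depth. The log-magnitude mapping is
    an assumed example, not prescribed by the embodiment."""
    db = 20 * np.log10(np.abs(signal) + 1e-12)  # avoid log of zero
    lo, hi = db.min(), db.max()
    norm = (db - lo) / (hi - lo) if hi > lo else np.zeros_like(db)
    return np.round(norm * (2 ** bit_depth - 1)).astype(np.uint16)

vals = to_luminance(np.array([[1.0, 10.0], [100.0, 1000.0]]))
# magnitudes 1, 10, 100, 1000 map to luminance 0, 85, 170, 255
```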
When the number of masks applied to the pupil-plane electric field is one, the image synthesizing unit 170 outputs the generated mask image data as output image data to a display unit (not shown) according to a control signal input from the output processing unit 180.
When mask OCT signals corresponding to each of the M masks are acquired, the image synthesizing unit 170 generates mask images having mutually different display modes (for example, colors) for the M mask OCT signals.
The image synthesizing unit 170 synthesizes the generated M mask images to generate composite image data representing a single composite image. For example, when generating a color composite image using the RGB color system, the image synthesizing unit 170 determines the per-pixel average of the pixel values across the M mask images as the pixel value of the composite image. In the RGB color system, the hue (chromaticity) of each pixel block whose shade is set is determined by the ratio of the pixel values of the pixels representing the primary colors red, green, and blue. The image synthesizing unit 170 outputs the generated composite image data to the display unit as output image data according to a control signal input from the output processing unit 180.
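The RGB composition can be sketched as follows; the image sizes, gray values, and the choice of red and green (M = 2) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: each gray-scale mask image is assigned one color, and the
# composite is the per-pixel average of the colored images.
h, w = 4, 4
mask_img_a = np.full((h, w), 200.0)  # first mask image (gray-scale)
mask_img_b = np.full((h, w), 100.0)  # second mask image (gray-scale)

colored_a = np.zeros((h, w, 3))
colored_a[..., 0] = mask_img_a       # assign red to the first image
colored_b = np.zeros((h, w, 3))
colored_b[..., 1] = mask_img_b       # assign green to the second image

composite = (colored_a + colored_b) / 2  # per-pixel average over the M images
```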
The image synthesizing unit 170 may store the output image data in the storage unit 190 according to the control signal input from the output processing unit 180.
The output processing unit 180 controls the generation or output of output image data indicating an OCT image based on an operation signal input from an operation input unit (not shown).
The operation input unit may include, for example, a button, a knob, a dial, a mouse, a joystick, and other members that accept user operations and generate operation signals according to the received operations. The operation input unit may be an input interface that receives an operation signal wirelessly or by wire from another device (for example, a portable device such as a remote controller).
The operation signal specifies, as parameters, for example, whether an OCT image should be displayed or stored, the observation target plane serving as the observation target region, and the spatial frequency characteristics of the mask. The output processing unit 180 may display, on the display unit, a settings screen guiding the parameters that can be set by operation and their settings, thereby configuring a user interface for displaying OCT images.
For example, when an operation signal indicating whether to display the OCT image is input, the output processing unit 180 outputs a control signal indicating whether display is required to the image synthesizing unit 170. The image synthesizing unit 170 outputs the output image data to the display unit when a control signal indicating that display is required is input from the output processing unit 180, and does not output the output image data to the display unit when a control signal indicating that display is not required is input.
When an operation signal indicating the observation target plane is input, the output processing unit 180 outputs a control signal indicating that observation target plane to the detection signal acquisition unit 130. The detection signal acquisition unit 130 outputs, as the OCT signal, the portion of the three-dimensional OCT signal corresponding to the observation target plane indicated by the control signal input from the output processing unit 180. The observation target plane is defined by parameters such as the depth from the surface of the sample Sm, the observation direction, and the area of the observation region.
When an operation signal indicating the spatial frequency characteristic of the mask is input, the output processing unit 180 outputs a control signal indicating the spatial frequency characteristic to the mask unit 150. The output processing unit 180 may set the spatial frequency characteristics of each of the plurality of masks based on the input operation signal, and output a control signal indicating the set spatial frequency characteristics to the mask unit 150. The mask unit 150 sets a mask having spatial frequency characteristics indicated by a control signal input from the output processing unit 180.
In addition to the above program, the storage unit 190 stores various data used for processing executed by the control unit 110 and various data acquired by the control unit 110.
The storage unit 190 includes, for example, a non-volatile (non-temporary) storage medium such as a ROM (Read Only Memory), a flash memory, and an HDD (Hard Disk Drive). The storage unit 190 includes, for example, a volatile storage medium such as a RAM (Random Access Memory) and a register.
(Image processing)
Next, an example of image processing according to the present embodiment will be described. FIG. 3 is an explanatory diagram showing an example of image processing according to the present embodiment. The image processing exemplified in FIG. 3 is an application example to directional imaging of an OCT image. In directional imaging, a mask having directivity in the spatial frequency domain (hereinafter, directional mask) is used. The directional mask has a mask value with a dependency on a spatial frequency (hereinafter, azimuth spatial frequency) corresponding to an azimuth from the origin in a two-dimensional spatial frequency domain.
(Step S01) The electric field estimation unit 140 performs a two-dimensional Fourier transform on the OCT signal representing the spatial-domain front OCT image Oi01 acquired by the detection signal acquisition unit 130, and generates spatial-frequency-domain pupil-plane electric field data representing the pupil-plane electric field Pe01. The front OCT image Oi01 is an OCT image whose observation target plane is the front surface.
(Step S02) The mask unit 150 applies the directional mask Dm01, as an example of a virtual mask, to the pupil-plane electric field data generated by the electric field estimation unit 140 to generate mask electric field data representing the mask electric field Ie01.
In the directional mask Dm01, the pass band consists of the spatial frequencies whose radial spatial frequency (the spatial frequency corresponding to the distance from the origin) lies within a predetermined band and whose azimuth spatial frequency lies within the half circle from -π/2 through 0 to π/2; all other bands form the cutoff band.
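A directional mask of this form can be sketched as follows; the grid size, the radial band [0.05, 0.45], and the function and parameter names are illustrative assumptions.

```python
import numpy as np

def directional_mask(n, r_min, r_max, az_center, az_width):
    """Minimal sketch of a directional mask on an n x n centered frequency
    grid: pass band = radial frequency within [r_min, r_max] AND azimuth
    within az_width / 2 of az_center."""
    f = np.fft.fftshift(np.fft.fftfreq(n))
    fx, fy = np.meshgrid(f, f)
    radius = np.hypot(fx, fy)
    azimuth = np.arctan2(fy, fx)
    # Wrap the angular distance to az_center into [-pi, pi).
    d = (azimuth - az_center + np.pi) % (2 * np.pi) - np.pi
    passband = (radius >= r_min) & (radius <= r_max) & (np.abs(d) <= az_width / 2)
    return passband.astype(float)

# Analogue of Dm01: azimuth from -pi/2 to pi/2, i.e. center 0 and width pi.
dm01 = directional_mask(64, 0.05, 0.45, az_center=0.0, az_width=np.pi)
```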
(Step S03) The conversion unit 160 performs a two-dimensional inverse Fourier transform on the mask electric field data generated by the mask unit 150 to generate a mask OCT signal in the spatial region.
The image synthesizing unit 170 converts the signal value for each sample point indicated by the mask OCT signal into a pixel value for each pixel, and generates mask image data having the pixel value obtained by the conversion.
The image composition unit 170 outputs the generated mask image data as output image data to a display unit (not shown).
As described above, by applying the directional mask Dm01 to the pupil-plane electric field Pe01, the pupil-plane electric field in the spatial frequency band of width π centered on the azimuth spatial frequency 0 (rad) (the main direction) is passed, and the pupil-plane electric field in the other bands is blocked. As a result, the obtained mask electric field Ie01 yields an OCT image having directivity with the azimuth spatial frequency 0 (rad) as its main direction.
Further, by making the spatial frequency characteristics of the mask variable in response to operations, the user can arbitrarily adjust the directivity of the OCT image using the already-acquired OCT signal, without driving or adjusting the optical system. For example, the output processing unit 180 may allow one or both of the azimuth spatial frequency band and the radial spatial frequency band to be set as the pass band through which the electric field passes, based on an operation signal input from the operation input unit (not shown). The mask unit 150 then sets the mask value at the frequency sample points within the pass band set by the output processing unit 180 to 1, sets the mask value at the other frequency sample points to 0, and generates a directional mask representing the set mask values.
FIG. 4 is an explanatory diagram showing another example of image processing according to the present embodiment. An application example of virtual multi-directional imaging will be described with reference to FIG. 4.
In multi-directional imaging, in step S02 (FIG. 3), the mask unit 150 generates mask electric field data by applying, to a common pupil-plane electric field, M directional masks whose directivities in the spatial frequency domain differ from one another. In the example shown in FIG. 4, three directional masks Dm01 to Dm03 are used. The pass bands of the directional masks Dm01 to Dm03 are set with a period of 2π/3 and do not overlap one another in the spatial frequency domain.
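Three sector masks with disjoint pass bands at a period of 2π/3 can be sketched as follows; the grid size and the placement of the sector boundaries are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: partition the azimuth range (-pi, pi] into three sectors
# of width 2*pi/3 and build one binary mask per sector.
n = 32
f = np.fft.fftshift(np.fft.fftfreq(n))
fx, fy = np.meshgrid(f, f)
azimuth = np.arctan2(fy, fx)  # in (-pi, pi]

sector_index = np.floor((azimuth + np.pi) / (2 * np.pi / 3)).astype(int)
sector_index = np.minimum(sector_index, 2)  # azimuth == pi joins the last sector
sector_masks = [(sector_index == k).astype(float) for k in range(3)]
```

Every frequency sample point falls into exactly one sector, so the three pass bands cover the frequency plane without overlapping.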
In step S03 (FIG. 3), the conversion unit 160 performs a two-dimensional inverse Fourier transform on each of the generated M mask electric field data to generate a mask OCT signal in the spatial region. The image synthesizing unit 170 converts each of the generated M mask OCT signals in the spatial region into a mask OCT image.
The image synthesizing unit 170 generates output image data indicating composite images displayed in different colors for the converted M mask OCT images, and outputs the generated output image data to the display unit. Therefore, a composite image in which OCT images having different directivities are multiplexed with different colors is displayed.
Here, the output processing unit 180 may allow, for each of the M masks (M being an integer of 2 or more), one or both of the azimuth spatial frequency band and the radial spatial frequency band to be set as the pass band through which the electric field passes, based on an operation signal input from the operation input unit (not shown). In that case, for each mask, the mask unit 150 sets the mask value of the frequency sample points within the pass band set by the output processing unit 180 to 1, and sets the mask value of the other frequency sample points to 0.
Next, an example of compositing processing for mask OCT images (hereinafter, directivity images) to which M different directivities are given will be described.
FIG. 5 is an explanatory diagram showing an example of the synthesis process according to the present embodiment.
(Step S11) The image synthesizing unit 170 colors the M directional images with mutually different colors to generate colored image data representing the colored images. In the example shown in FIG. 5, the image synthesizing unit 170 colors the two directional images Di11 and Di12 with different colors (for example, red and green) to generate colored images Ci11 and Ci12. In FIG. 5, the two colors are represented by upward-sloping and downward-sloping diagonal hatching, respectively. The directional images Di11 and Di12 are OCT images generated using directional masks having directivity 1 and directivity 2, respectively. Directivity 1 indicates a pass band whose main direction is the azimuth 7π/6 and whose bandwidth is π. Directivity 2 indicates a pass band whose main direction is the azimuth -5π/6 and whose bandwidth is π.
(Step S12) The image synthesizing unit 170 synthesizes the colored images represented by the M sets of colored image data to generate composite image data representing a composite image. In the example shown in FIG. 5, the colors applied to the colored images Ci11 and Ci12 are mixed region by region. The composite image Si11 represents the mixed colors obtained by this region-by-region mixing. The diagonally cross-hatched regions indicate the mixed color obtained by mixing the colors applied to the colored images Ci11 and Ci12. The regions with upward-sloping and downward-sloping hatching indicate portions that are colored in the colored images Ci11 and Ci12, respectively, but not colored in the other image.
Accordingly, a user viewing the displayed composite image can intuitively grasp, from the hues, the distribution of the sample's state corresponding to each directivity within the observation region of the sample. By using hue as the display mode, finer differences in the directivity distribution can be expressed than when distinguishing by other display modes, for example patterns such as shading. Hue generally means the tint represented by a particular prominent wavelength component of the image light, and is one of the three attributes of color. It is desirable that the hues of the applied colors differ as much as possible in the color space among the M mask OCT images. When M is 2, the color of one mask OCT image may be the complementary color of the other. For example, when one mask OCT image is colored red, the other may be colored green. When M is 3, for example, the colors of the three mask OCT images may be the three primary colors red, green, and blue.
Further, the colors of the M mask OCT images may be set so that simply mixing the M colors yields an achromatic color. An achromatic color is a color without saturation, that is, white, black, or gray of any density. When M is 2, if the color of one mask OCT image is the complementary color of the other, mixing the two colors yields an achromatic color. When M is 3, if the colors of the three mask OCT images are the three primary colors red, green, and blue, mixing the three colors yields an achromatic color.
As a result, the difference in brightness between different directivities is expressed depending on the color of the composite image. Therefore, the user who comes into contact with the composite image can easily recognize that the portion where a specific color is prominently represented has a tissue structure corresponding to the directivity associated with the color.
The image synthesizing unit 170 may also generate average image data whose pixel values are the averages of the pixel values of the M mask OCT image data. The image synthesizing unit 170 then generates average colored image data representing a new colored image (hereinafter, the average colored image) obtained by coloring the average image data with a color different from any of the M colored images.
The image synthesizing unit 170 synthesizes the colored images represented by the M colored image data and the average colored image represented by the average colored image data to generate composite image data representing a composite image. For example, when M is 2, the image synthesizing unit 170 may color the two colored images red and green and the average colored image blue.
In this way, the colors that stand out in the composite image show the regions where the state corresponding to each directivity is prominent, with the average value serving as the basis of comparison.
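The red/green/blue compositing just described can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; the function and array names are assumptions, and the two mask OCT images are assumed to be available as 2-D intensity arrays of equal shape:

```python
import numpy as np

def composite_two_masks(mask_img1, mask_img2):
    """Compose two mask OCT intensity images into an RGB composite:
    first directivity -> red, second -> green, their average -> blue."""
    def norm(a):
        # Normalize to [0, 1] so hue, not absolute scale, carries the contrast
        rng = a.max() - a.min()
        return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)
    r = norm(mask_img1)
    g = norm(mask_img2)
    b = norm(0.5 * (mask_img1 + mask_img2))  # average colored image channel
    return np.stack([r, g, b], axis=-1)      # H x W x 3, values in [0, 1]
```

Pixels dominated by one directivity then appear reddish or greenish, while regions where both directivities contribute equally are pulled toward gray by the blue average channel.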
The compositing mode used to generate the composite image is not limited to this. A compositing mode refers to the combination of the spatial frequency characteristics of the masks used to generate each of the mask OCT images (for example, directional images) used in generating the composite image.
FIGS. 6A-6F show examples of composite images for each compositing mode according to the present embodiment. As an example, the front OCT image Oi01 is used as the OCT image common to FIGS. 6A-6F in generating the composite images.
The low-frequency image Li11 shown in FIG. 6A is a composite image generated using compositing mode 11. Compositing mode 11 consists of a first spatial frequency characteristic whose directivity has a pass band with main direction at azimuth 7/6π and bandwidth π, and a second spatial frequency characteristic whose directivity has a pass band with main direction at azimuth -5/6π and bandwidth π. However, the first and second spatial frequency characteristics each take as their pass band the low radial spatial frequency range from frequency 1 to frequency 2, and treat all other radial spatial frequencies as the stop band. Frequency 1 is 0 or a predetermined radial spatial frequency sufficiently close to 0. Frequency 2 is a predetermined radial spatial frequency higher than frequency 1 and lower than the upper limit of the radial spatial frequency. The upper limit of the radial spatial frequency is determined from the sample point spacing of the OCT signal by the sampling theorem. Since the low-frequency image Li11 is generated using the low-frequency components of the pupil-plane electric field, it contains coarser patterns with indistinct outlines.
The high-frequency image Hi11 shown in FIG. 6B is a composite image generated using compositing mode 12. Compositing mode 12 consists of a first spatial frequency characteristic whose directivity has a pass band with main direction at azimuth 7/6π and bandwidth π, and a second spatial frequency characteristic whose directivity has a pass band with main direction at azimuth -5/6π and bandwidth π. However, the first and second spatial frequency characteristics each take as their pass band the high radial spatial frequency range from frequency 2 to frequency 3, and treat all other radial spatial frequencies as the stop band. Frequency 3 is a predetermined radial spatial frequency higher than frequency 2 and not exceeding the upper limit of the radial spatial frequency. Since the high-frequency image Hi11 is generated using the high-frequency components of the pupil-plane electric field, it contains comparatively fine patterns.
The composite image Si11 shown in FIG. 6C is a composite image generated using compositing mode 13. Compositing mode 13 consists of a first spatial frequency characteristic whose directivity has a pass band with main direction at azimuth 7/6π and bandwidth π, and a second spatial frequency characteristic whose directivity has a pass band with main direction at azimuth -5/6π and bandwidth π. However, the first and second spatial frequency characteristics impose no band limitation in the radial spatial frequency domain. That is, compositing mode 13 is the same as the compositing mode consisting of directivities 1 and 2 illustrated in FIG. 5, and combines the pass band of compositing mode 11 with that of compositing mode 12.
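A pass band of this kind, combining an azimuthal sector with a radial band limit, can be sketched as a binary pupil-plane mask on the FFT frequency grid. This is a hypothetical reconstruction (the function name and the normalized-frequency convention are assumptions, not taken from the patent):

```python
import numpy as np

def directional_mask(shape, main_dir, az_bw, f_lo=0.0, f_hi=np.inf):
    """Binary mask whose pass band is the azimuthal sector of width az_bw
    centred on main_dir, intersected with the radial band [f_lo, f_hi].
    Frequencies are the normalized FFT frequencies for an array of `shape`."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    rho = np.hypot(fx, fy)                          # radial spatial frequency
    phi = np.arctan2(fy, fx)                        # azimuth in (-pi, pi]
    dphi = np.angle(np.exp(1j * (phi - main_dir)))  # wrapped angular distance
    in_sector = np.abs(dphi) <= az_bw / 2.0
    in_band = (rho >= f_lo) & (rho <= f_hi)
    return (in_sector & in_band).astype(float)
```

Under this sketch, masks for compositing modes 11 and 12 would differ only in the (`f_lo`, `f_hi`) radial band passed in, while mode 13 leaves the radial band unrestricted.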
The low-frequency image Li12 shown in FIG. 6D is a composite image generated using compositing mode 14. Compositing mode 14 consists of a first spatial frequency characteristic whose directivity has a pass band with main direction at azimuth 5/6π and bandwidth π, and a second spatial frequency characteristic whose directivity has a pass band with main direction at azimuth -7/6π and bandwidth π. However, the first and second spatial frequency characteristics each take as their pass band the low radial spatial frequency range from frequency 1 to frequency 2, and treat all other radial spatial frequencies as the stop band.
Because the low-frequency image Li12 is generated with masks whose pass-band main directions differ from those used for the low-frequency image Li11, it shows a pattern different from Li11. The low-frequency image Li12 shows comparatively coarse patterns with indistinct outlines.
The high-frequency image Hi12 shown in FIG. 6E is a composite image generated using compositing mode 15. Compositing mode 15 consists of a first spatial frequency characteristic whose directivity has a pass band with main direction at azimuth 5/6π and bandwidth π, and a second spatial frequency characteristic whose directivity has a pass band with main direction at azimuth -7/6π and bandwidth π. However, the first and second spatial frequency characteristics each take as their pass band the high radial spatial frequency range from frequency 2 to frequency 3, and treat all other radial spatial frequencies as the stop band. The high-frequency image Hi12 shows a pattern more similar to the low-frequency image Li12, whose masks share the same azimuthal pass band, than to the high-frequency image Hi11, whose masks share the same radial pass band. The high-frequency image Hi12 shows finer patterns than the low-frequency image Li12.
The composite image Si12 shown in FIG. 6F is a composite image generated using compositing mode 16. Compositing mode 16 consists of a first spatial frequency characteristic whose directivity has a pass band with main direction at azimuth 5/6π and bandwidth π, and a second spatial frequency characteristic whose directivity has a pass band with main direction at azimuth -7/6π and bandwidth π. However, the first and second spatial frequency characteristics impose no band limitation in the radial spatial frequency domain. That is, compositing mode 16 combines the pass band of compositing mode 14 with that of compositing mode 15.
In the directivity patterns shown in FIGS. 5 and 6A-6F, the pass bands of the plurality of masks were illustrated as not overlapping in part or in whole, but this is not a limitation. The pass bands of at least two of the plurality of masks may partially overlap. Moreover, the pass bands of the plurality of masks taken together need not cover the entire spatial frequency band.
FIGS. 3-6F illustrate the case where the observation target plane is a front surface perpendicular to the depth direction of the sample Sm, but any plane crossing part or all of the observation target region may be used.
The output processing unit 180 may therefore allow the direction of the observation target plane to be set by a user operation. More specifically, an operation signal indicating the direction of the observation target plane is input from an operation input unit (not shown), and a control signal indicating the direction of the observation target plane indicated by the operation signal is output to the detection signal acquisition unit 130. The detection signal acquisition unit 130 sets the observation target plane in the direction indicated by the control signal input from the output processing unit 180.
FIGS. 7A-7C show display examples when the observation target plane is a cross section parallel to the depth direction of the sample Sm.
The cross-sectional OCT image Cs21 illustrated in FIG. 7A is an OCT image showing the state of the sample Sm in a cross section parallel to the depth direction. The cross-sectional OCT image Cs21 shows a pattern different from the front OCT image Oi01.
The composite image Si21 illustrated in FIG. 7B is a composite image generated by applying compositing mode 13 to the OCT signal of the cross-sectional OCT image Cs21. The cross-hatched regions are those extracted in common by the masks having the first and second directivities that constitute compositing mode 13. The regions hatched upward to the right and downward to the right are those extracted by the mask with the first directivity but not by the mask with the second directivity, and by the mask with the second directivity but not by the mask with the first directivity, respectively. Uncolored regions are those extracted by neither the mask with the first directivity nor the mask with the second directivity.
The composite image Si22 illustrated in FIG. 7C is a composite image generated by applying compositing mode 16 to the OCT signal of the cross-sectional OCT image Cs21. The composite image Si22 shows a pattern different from the composite image Si21.
Next, another example of the directional mask will be described. A directional mask may have a narrower azimuthal spatial frequency bandwidth. The compositing mode of the directional mask Dm31 illustrated in FIG. 8 has two pass bands, each with a bandwidth of π/9. The main directions of the two pass bands are azimuths π/2 and -π/2, respectively, pointing in opposite directions. Using a directional mask with a narrow bandwidth makes it possible to take the main direction as a target direction and extract the azimuthal spatial frequency components around that main direction. Such a directional mask can be applied to the analysis of the structure of a sample.
For example, the mask unit 150 rotates the main direction of the pass band in azimuth at small angular intervals (for example, 1° to 5°) and, for each rotated main direction, applies the directional mask to the pupil-plane electric field data to generate mask electric field data. The conversion unit 160 performs a two-dimensional inverse Fourier transform on each generated set of mask electric field data to generate a spatial-domain mask OCT signal. The control unit 110 may include an analysis unit (not shown) that analyzes how the intensity of the mask OCT signal for each rotated main direction, or of part or all of the mask image data based on the mask OCT signal, varies.
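This rotation analysis can be sketched as follows. The sketch is hypothetical (function names, the π/9 bandwidth, and the angular step are assumptions chosen for illustration); `pupil_field` is assumed to be the 2-D complex pupil-plane field and `mask_fn` a callable that builds a sector mask for a given main direction:

```python
import numpy as np

def directionality_profile(pupil_field, mask_fn, az_bw=np.pi / 9, step_deg=1.0):
    """Apply a narrow directional mask at each rotated main direction and
    record the total pixel intensity of the resulting mask OCT image."""
    angles = np.arange(0.0, 180.0, step_deg)
    strengths = np.empty_like(angles)
    for i, a in enumerate(angles):
        m = mask_fn(pupil_field.shape, np.deg2rad(a), az_bw)
        mask_signal = np.fft.ifft2(pupil_field * m)       # spatial-domain mask OCT signal
        strengths[i] = np.sum(np.abs(mask_signal) ** 2)   # total pixel intensity
    peak_dir = angles[np.argmax(strengths)]               # direction of maximum strength
    return angles, strengths, peak_dir
```

The `peak_dir` value corresponds to the kind of maximum reported below for FIGS. 9 and 10 (152° and 116°, respectively).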
FIGS. 9 and 10 illustrate the main-direction dependence of the signal strength obtained using a bovine Achilles tendon and a chicken breast muscle as samples, respectively. The vertical and horizontal axes indicate signal strength and main direction, respectively. The total pixel intensity of the mask image data, that is, the sum of the signal values over all pixels, was used as the signal strength. FIGS. 9 and 10 show that the signal strength depends on the main direction and has a significant maximum. In the examples shown in FIGS. 9 and 10, the main directions giving the maxima were 152° and 116°, respectively. The maximum signal strength is about four times the average value. This suggests that the tissue structures of the samples have high periodicity along the main direction at which the signal strength is maximized.
This is also supported by the following simulation. The simulation assumed light incident from the z direction on the sample model Sp shown in FIG. 11. The surface of the sample model Sp is a triangular wave whose height in the z direction is independent of y and has a constant period in the x direction. OCT volumes were then synthesized for the cases where the z coordinate of the sample model Sp is farther from (Δz > 0) and closer to (Δz < 0) the reference point (the focal point of the optical system, corresponding to the focal point of the collimator 50b in FIG. 1). A directional mask was applied to the pupil-plane electric field data based on the OCT image signals of the front and cross-sectional images obtained from each OCT volume, generating mask electric field data for each pass band. As the directional mask, compositing mode 32 (FIG. 12A) was used, which has a pass region with main direction π and bandwidth π and a pass region with main direction 0 and bandwidth π. A two-dimensional inverse Fourier transform was then applied to the mask electric field data generated for each pass band, and the spatial-domain mask OCT signals were used to generate OCT image data representing one composite image for each case.
FIGS. 12A and 12B show the composite image for the front image and the composite image for the cross-sectional image, respectively, generated assuming Δz > 0. Both show patterns corresponding to the period of the height in the x direction. In FIG. 12A, however, the brightness is constant in the y direction, while in the x direction there are regions brighter than their surroundings on the negative and positive sides of the center of each period. The negative and positive sides of the center of each period correspond to the pass bands with main directions π and 0, respectively. In FIG. 12B, regions brighter than their surroundings appear at positions proportional to the z-direction height of the surface of the sample model Sp. FIGS. 12C and 12D show the composite image for the front image and the composite image for the cross-sectional image, respectively, generated assuming Δz < 0. The patterns of the composite images in FIGS. 12C and 12D are mirror images of those in FIGS. 12A and 12B, respectively.
Accordingly, it is confirmed that in the bovine Achilles tendon and the chicken breast muscle used as samples in the examples of FIGS. 9 and 10, the height of the tissue interface has high periodicity along one direction (the x direction) crossing the height direction (the z direction) and little dependence on the other direction (the y direction) (see FIG. 13). The images of interface facets inclined in the negative direction of each period and those inclined in the positive direction are associated with the main directions of the individual pass bands, and are displayed in the composite image as distinct patterns.
The above examples have mainly described cases where the number M of directional masks is one or two, but this is not a limitation. The number M of masks may be three or more, for example 60, 120, or 360. As the number of masks increases, the pass-band bandwidth of each mask can become correspondingly finer. The bandwidth may be fixed or variable not only in azimuthal spatial frequency but also in radial spatial frequency, or in combinations of azimuthal and radial spatial frequencies, and may be further subdivided. The number of masks may be large enough that the individual pass bands can be regarded as spatially continuous. The pass bands of the M masks need not be distributed periodically, regularly, or exhaustively in the spatial frequency domain; they may be distributed irregularly or intermittently.
In the above examples, the mask value for each spatial frequency corresponding to each frequency sample point constituting the mask was mainly either 0 or 1, but this is not a limitation. The mask value may be any real number, allowing the degree to which the pupil-plane electric field passes at each spatial frequency to be set in more detail. For example, as long as the mask value in the pass band is larger than that in the stop band, the mask's influence on the mask OCT signal can be emphasized or attenuated rather than being limited to 1 and 0, respectively. Furthermore, by setting the mask values to vary monotonically between sample points within a predetermined range of the boundary between the pass band and the stop band, the mask values change more gradually around that boundary as a function of spatial frequency. This mitigates abnormal spatial variations in brightness in the mask OCT image caused by aliasing that can arise in the mask OCT signal.
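One way to realize such a gradual pass/stop transition is a raised-cosine taper on the sector edges. The following is a sketch under assumed names, not the patent's own implementation; the taper width is an arbitrary illustrative choice:

```python
import numpy as np

def tapered_sector_mask(shape, main_dir, az_bw, taper=0.2):
    """Real-valued mask: 1 inside the azimuthal sector, 0 outside, with a
    raised-cosine roll-off of angular width `taper` (rad) at each edge,
    softening the transition instead of jumping abruptly between 0 and 1."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    phi = np.arctan2(fy, fx)
    dphi = np.abs(np.angle(np.exp(1j * (phi - main_dir))))  # wrapped distance
    half = az_bw / 2.0
    ramp = np.clip((half + taper - dphi) / taper, 0.0, 1.0)  # linear 1 -> 0
    return 0.5 - 0.5 * np.cos(np.pi * ramp)                  # smooth the ramp
```

The smooth edge reduces the ringing (aliasing-like artefacts) that a hard 0/1 boundary would otherwise introduce into the mask OCT image.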
The mask value for each spatial frequency is not limited to a real number and may be a complex number. The mask value may be, for example, the value obtained by further dividing the above mask value by a coefficient (a complex number) indicating the per-spatial-frequency phase and amplitude representing the aberrations arising in the optical system and the sample under measurement. Using such mask values compensates for the aberrations arising in the optical system and the sample under measurement.
The generation of mask electric field data, and hence of the mask OCT signal, using the masks described above can be realized by complex-valued computation in the spatial frequency domain. In other words, it cannot be achieved with existing hardware such as optical components, or with image processing that handles only real-valued luminance or color signal values.
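Because the mask acts on complex frequency-domain data, a known aberration can be divided out as part of the same operation. The following sketch assumes the aberration is given as a unit-magnitude complex phase factor per spatial frequency; the quadratic, defocus-like phase used here is purely an illustration, not the patent's aberration model:

```python
import numpy as np

def defocus_phase(shape, coeff):
    """Illustrative quadratic phase error per spatial frequency."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    return np.exp(1j * coeff * (fx ** 2 + fy ** 2))

def aberration_corrected_mask(mask, aberration):
    """Divide the (real-valued) mask by the complex aberration coefficient,
    so the masked field is band-limited and aberration-compensated at once."""
    return mask / aberration
```

Applying the resulting complex mask to the pupil-plane field cancels the modeled phase error wherever the mask passes.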
The control unit 110 may further include an analysis unit (not shown) that analyzes the spatial frequency characteristics of the OCT signal. The analysis unit may, for example, using a mask whose pass band is an azimuthal spatial frequency band of predetermined bandwidth, carry out the processing of steps S01-S03, compute the power of the generated mask OCT signal, and repeat the process while changing the main direction of the pass band under analysis. The analysis unit can thereby obtain the azimuthal spatial frequency dependence of the power and identify the azimuthal spatial frequency at which the power is maximal, locally maximal, minimal, or locally minimal. The analysis of signal strength illustrated in FIGS. 9 and 10 using the directional mask illustrated in FIG. 8 corresponds to this kind of azimuthal spatial frequency analysis.
The analysis unit may compute the power of the mask OCT signal for each pass band determined by a radial spatial frequency band instead of, or together with, the azimuthal spatial frequency band. The analysis unit can thereby obtain the spatial frequency dependence of the power and identify the spatial frequency at which the power is maximal, locally maximal, minimal, or locally minimal.
The analysis unit may carry out the above power computation not over the entire observation target plane indicated by the OCT signal but for each of its constituent regions. The analysis unit can then obtain the spatial frequency dependence of the power for each region and identify, per region, the spatial frequency at which the power is maximal, locally maximal, minimal, or locally minimal. The analysis unit can also define a range of power per spatial frequency admissible for each region and identify regions whose spatial frequency dependence departs from the predetermined range.
The output processing unit 180 may output, in response to an operation signal input from the operation input unit, information indicating at least one of the powers computed by the analysis unit, the identified spatial frequencies, or the identified regions to the display unit or to other equipment.
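The per-region power computation might be sketched as follows. The tiling scheme, tile size, and all names here are assumptions for illustration (the patent does not specify how regions are delimited); the inputs are the spatial-domain mask OCT signals, one per pass band:

```python
import numpy as np

def tile_power(mask_signal, tile):
    """Sum |signal|^2 over non-overlapping tile x tile blocks."""
    ny, nx = mask_signal.shape
    p = np.abs(mask_signal) ** 2
    p = p[: ny - ny % tile, : nx - nx % tile]  # crop to a whole number of tiles
    return p.reshape(p.shape[0] // tile, tile,
                     p.shape[1] // tile, tile).sum(axis=(1, 3))

def dominant_pass_band(mask_signals, band_labels, tile=16):
    """Per tile, report the pass band whose mask OCT signal has maximum power."""
    powers = np.stack([tile_power(s, tile) for s in mask_signals])
    return np.asarray(band_labels)[np.argmax(powers, axis=0)]
```

A map produced this way would show, region by region, which pass band (for example, which main direction) dominates the local structure.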
As described above, the image processing apparatus 100 according to the above embodiment includes the electric field estimation unit 140, which transforms an optical coherence tomographic signal representing the state of a sample into the spatial frequency domain to generate electric field data representing the electric field in the pupil plane corresponding to the reciprocal space of the sample. The image processing apparatus 100 also includes the mask unit 150, which applies to the electric field data a mask representing the pass-characteristic distribution of the electric field in the pupil plane to generate mask electric field data, and the conversion unit 160, which transforms the mask electric field data into the spatial domain to generate a mask optical coherence tomographic signal.
With this configuration, by applying the mask to the electric field in the pupil plane, a mask optical coherence tomographic signal reflecting the pass-characteristic distribution represented by the mask can be obtained. The image synthesizing unit 170 can also obtain a mask optical coherence tomographic image by converting the signal values of the obtained mask optical coherence tomographic signal into luminance values. Therefore, without actually equipping the optical system with various optical components, the pass-characteristic distribution of the electric field corresponding to the spatial conditions of sample observation can be adjusted virtually by the action of the mask.
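This three-stage configuration (forward transform, mask application, inverse transform) reduces to a few lines. The minimal sketch below assumes the OCT signal for one observation plane is already available as a 2-D complex array; the function name is illustrative:

```python
import numpy as np

def mask_oct_image(oct_signal, mask):
    """oct_signal: 2-D complex spatial-domain OCT signal.
    mask: pupil-plane pass-characteristic array of the same shape."""
    pupil_field = np.fft.fft2(oct_signal)     # electric field estimation (unit 140)
    masked_field = pupil_field * mask         # mask application (unit 150)
    mask_signal = np.fft.ifft2(masked_field)  # back to the spatial domain (unit 160)
    return np.abs(mask_signal)                # luminance values for the mask image
```

With an all-pass mask (all ones), this returns the magnitude of the original signal unchanged, which serves as a convenient sanity check on the transform pair.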
The pass-characteristic distribution of the mask may indicate the azimuthal spatial frequency band over which the electric field passes in the pupil plane.
With this configuration, the azimuthal spatial frequency band serving as the pass band of the electric field is specified as the pass-characteristic distribution of the mask, realizing observation of the sample with a directivity of observation direction corresponding to the pass band. Furthermore, by obtaining a mask image showing the structure of the sample based on such a mask, restrictions on the observation direction can be removed or relaxed, and the anisotropy of the fine structure of the tissues constituting the sample can be visualized.
The transmission characteristic distribution of the mask may also indicate the radial spatial frequency band over which the electric field is passed in the pupil plane.
With this configuration, the radial spatial frequency band serving as the pass band of the electric field is specified by the transmission characteristic distribution of the mask, which realizes observation of the sample with a spatial resolution corresponding to that pass band.
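A radial (annular) pass-band mask can be sketched in the same way. This is again a hypothetical helper, with radii given in pixels of the shifted pupil plane; restricting the outer radius lowers the effective numerical aperture, and hence the lateral resolution, of the reconstructed image:

```python
import numpy as np

def radial_mask(shape, r_min, r_max):
    """Binary pupil mask passing an annular spatial-frequency band.

    r_min, r_max: inner and outer radii, in pixels, measured from the
    zero-frequency centre of the shifted pupil plane. Setting r_min = 0
    gives a plain low-pass mask.
    """
    ny, nx = shape
    ky, kx = np.meshgrid(np.arange(ny) - ny // 2,
                         np.arange(nx) - nx // 2, indexing="ij")
    r = np.hypot(ky, kx)  # radial spatial-frequency coordinate
    return ((r >= r_min) & (r <= r_max)).astype(float)
```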
The mask may indicate, for each sample point corresponding to a spatial frequency, whether or not the electric field is passed.
With this configuration, the spatial frequency band through which the pupil-plane electric field is passed is defined as digital data, so the computation involved in applying the mask can be performed simply.
The image processing apparatus 100 according to the present embodiment may also include the image synthesizing unit 170, which generates a composite image by combining the mask images based on the M (M being an integer of 2 or more) masked OCT signals, each with a distinct display mode. In that case, the mask unit 150 applies M masks with mutually different transmission characteristic distributions to the electric field data to obtain M sets of masked electric field data, and the conversion unit 160 transforms each of the M sets of masked electric field data into the spatial domain to generate the M masked OCT signals.
With this configuration, a user viewing the composite image can intuitively distinguish the individual mask images from the display modes appearing in it. By comparing the individual mask images, the user can intuitively grasp how the state of the sample depends on the transmission characteristic distribution (for example, the directivity).
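The compositing step can be sketched as a weighted sum of the M grayscale mask images, each tinted with its own color. A minimal illustration (function name and color choice are hypothetical; picking colors that sum to white means regions equally strong under all masks render as achromatic gray):

```python
import numpy as np

def composite_mask_images(mask_images, colors):
    """Blend M grayscale mask images into one RGB composite.

    mask_images: list of M 2-D arrays of luminance values in [0, 1].
    colors: list of M RGB triples; choosing colors whose sum is
            (1, 1, 1) makes isotropic regions appear achromatic.
    """
    h, w = mask_images[0].shape
    out = np.zeros((h, w, 3))
    for img, rgb in zip(mask_images, colors):
        # Each mask image contributes its luminance in its own tint.
        out += img[..., None] * np.asarray(rgb)[None, None, :]
    return np.clip(out, 0.0, 1.0)
```

With M = 3 and the RGB primaries as tints, a region that responds only to the first mask shows up red, while a region responding equally to all three shows up gray or white.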
As the display mode, the colors representing the M mask images may differ in hue.
With this configuration, the user can intuitively distinguish the individual mask images from the colors appearing in the composite image, and can easily identify the regions where the state of the sample corresponding to each transmission characteristic distribution (for example, directivity) appears.
Furthermore, the color obtained by combining, across the M mask images, the colors representing them may be achromatic.
With this configuration, the user can identify from the colors appearing in the composite image the regions where the state of the sample corresponding to one transmission characteristic distribution (for example, one directivity) appears more prominently than under the others, while the gray shading lets the user recognize the state of the sample independently of the transmission characteristic distribution.
Although the embodiments of the present invention have been described in detail above with reference to the drawings, the specific configuration is not limited to the above, and various design changes can be made without departing from the gist of the invention.
For example, in the above description, the image processing apparatus 100 was described as a part of the optical coherence tomography system 1, but the invention is not limited to this. The image processing apparatus 100 may be a single device that is independent of the optical coherence tomography system 1 and has no optical system of its own. In that case, the optical system control unit 120 may be omitted from the control unit 110 of the image processing apparatus 100. The detection signal acquisition unit 130 need not acquire its input from the optical system; it may acquire the detection signal or the OCT signal, by wire or wirelessly, for example via a network, from other equipment such as a data storage device or a PC.
The image processing apparatus 100 may include the operation input unit and display unit described above, or one or both of them may be omitted.
One or both of the image synthesizing unit 170 and the output processing unit 180 may be omitted from the control unit 110 of the image processing apparatus 100. When the image synthesizing unit 170 is omitted, the conversion unit 160 may output the generated masked OCT signal to other equipment such as a data storage device, a PC, or another image processing apparatus. The destination equipment may have the same function as the image synthesizing unit 170, that is, the function of generating output image data based on the masked OCT signal input from the image processing apparatus 100 and displaying an image based on the generated output image data.
The color system adopted by the image synthesizing unit 170 is not necessarily limited to the RGB color system; another color system, for example the YCbCr color system, may be used.
The display mode of each of the M mask images is not limited to hue or gradation; other schemes may be used, such as patterns (halftone dots, grids, diagonal hatching) or temporal changes in luminance or hue such as blinking.
When a part of the image processing apparatus 100 in the above embodiment, for example all or part of the control unit 110, is implemented by a computer, it may be realized by recording a program for implementing the control functions on a computer-readable recording medium, and having a computer system read and execute the program recorded on that medium. The "computer system" here is a computer system built into the image processing apparatus 100 and includes an OS and hardware such as peripheral devices. A "computer-readable recording medium" means a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into a computer system. A "computer-readable recording medium" may further include a medium that dynamically holds the program for a short time, such as a communication line used when the program is transmitted over a network such as the Internet or over a telephone line, and a medium that holds the program for a fixed time, such as the volatile memory inside a computer system serving as the server or client in that case. The program may implement only some of the functions described above, or may implement them in combination with a program already recorded in the computer system.
A part or all of the image processing apparatus 100 in the above embodiment may be realized as an integrated circuit such as an LSI (Large Scale Integration). The functional blocks of the image processing apparatus 100 may each be made into an individual processor, or some or all of them may be integrated into a single processor. The integration technique is not limited to LSI and may be realized by a dedicated circuit or a general-purpose processor. If integrated circuit technology that replaces LSI emerges through advances in semiconductor technology, an integrated circuit based on that technology may be used.
1 ... optical coherence tomography system, 10 ... light source, 20 ... beam splitter, 30a, 30b, 50a, 50b ... collimators, 40 ... reference mirror, 60a, 60b ... galvanometer mirrors, 70 ... spectrometer, 100 ... image processing apparatus, 110 ... control unit, 120 ... optical system control unit, 130 ... detection signal acquisition unit, 140 ... electric field estimation unit, 150 ... mask unit, 160 ... conversion unit, 170 ... image synthesizing unit, 180 ... output processing unit, 190 ... storage unit

Claims (11)

  1.  An image processing apparatus comprising:
     an electric field estimation unit that transforms an optical coherence tomographic signal representing a state of a sample into a spatial frequency domain to generate electric field data indicating an electric field in a pupil plane corresponding to a reciprocal space of the sample;
     a mask unit that applies, to the electric field data, a mask indicating a distribution of transmission characteristics of the electric field in the pupil plane to generate masked electric field data; and
     a conversion unit that transforms the masked electric field data into a spatial domain to generate a masked optical coherence tomographic signal.
  2.  The image processing apparatus according to claim 1, wherein the transmission characteristic distribution indicates an azimuthal spatial frequency band over which the electric field is passed in the pupil plane.
  3.  The image processing apparatus according to claim 1 or 2, wherein the transmission characteristic distribution indicates a radial spatial frequency band over which the electric field is passed in the pupil plane.
  4.  The image processing apparatus according to claim 2 or 3, wherein the mask indicates, for each sample point corresponding to a spatial frequency, whether or not the electric field is passed.
  5.  The image processing apparatus according to claim 4, wherein the mask has mask values that change monotonically between sample points within a predetermined range from a boundary between a pass region through which the electric field is passed and a blocking region through which the electric field is not passed.
  6.  The image processing apparatus according to any one of claims 1 to 5, further comprising a synthesizing unit that generates a mask image showing a structure of the sample based on the masked optical coherence tomographic signal.
  7.  The image processing apparatus according to any one of claims 1 to 6, comprising a synthesizing unit that generates a composite image by combining, each with a different display mode, mask images based on M (M being an integer of 2 or more) masked optical coherence tomographic signals, wherein
     the mask unit applies, to the electric field data, M masks having mutually different transmission characteristic distributions to obtain M sets of masked electric field data, and
     the conversion unit transforms each of the M sets of masked electric field data into the spatial domain to generate the M masked optical coherence tomographic signals.
  8.  The image processing apparatus according to claim 7, wherein, as the display mode, colors representing the M mask images differ in hue.
  9.  The image processing apparatus according to claim 8, wherein a color obtained by combining, across the M mask images, the colors representing the M mask images as the display mode is achromatic.
  10.  A method in an image processing apparatus, the method comprising:
     an electric field estimation step of transforming an optical coherence tomographic signal representing a state of a sample into a spatial frequency domain to generate electric field data indicating an electric field in a pupil plane corresponding to a reciprocal space of the sample;
     a mask step of applying, to the electric field data, a mask indicating a distribution of transmission characteristics of the electric field in the pupil plane to generate masked electric field data; and
     a conversion step of transforming the masked electric field data into a spatial domain to generate a masked optical coherence tomographic signal.
  11.  A program for causing a computer of an image processing apparatus to execute:
     an electric field estimation procedure of transforming an optical coherence tomographic signal representing a state of a sample into a spatial frequency domain to generate electric field data indicating an electric field in a pupil plane corresponding to a reciprocal space of the sample;
     a mask procedure of applying, to the electric field data, a mask indicating a distribution of transmission characteristics of the electric field in the pupil plane to generate masked electric field data; and
     a conversion procedure of transforming the masked electric field data into a spatial domain to generate a masked optical coherence tomographic signal.



