WO2022224917A1 - Three-dimensional image capture device - Google Patents

Three-dimensional image capture device

Info

Publication number
WO2022224917A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
light
imaging
light receiving
imaging device
Prior art date
Application number
PCT/JP2022/017973
Other languages
English (en)
Japanese (ja)
Inventor
のりこ 安間
達夫 長崎
広朗 長崎
聡美 森久保
Original Assignee
のりこ 安間
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2021070725A external-priority patent/JP6918395B1/ja
Priority claimed from JP2021211632A external-priority patent/JP7058901B1/ja
Application filed by のりこ 安間 filed Critical のりこ 安間
Publication of WO2022224917A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B9/00 Measuring instruments characterised by the use of optical techniques
    • G01B9/02 Interferometers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum
    • G01J3/30 Measuring the intensity of spectral lines directly on the spectrum itself
    • G01J3/36 Investigating two or more bands of a spectrum by separate detectors
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum
    • G01J3/45 Interferometric spectrometry
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/27 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands using photo-electric detection; circuits for computing concentration

Definitions

  • The present invention relates to a three-dimensional imaging device.
  • More specifically, the amplitude and phase of reflected light are detected by optical interferometry, three-dimensional resolution is performed by electrical processing of the detection results, and, for each three-dimensional pixel, focusing is performed, resolution degraded by disturbance of the optical wavefront is recovered, and spectral analysis is carried out.
  • The present invention relates to a three-dimensional imaging device capable of all of the above.
  • Known techniques for non-contact three-dimensional shape measurement include the focus movement method, the confocal method, the optical interference method, and the fringe projection method.
  • As a spectral image detection technique, a hyperspectral camera using a line-spectroscopy method is known.
  • JP-A-2006-153654; JP-A-2011-110290
  • the present invention has been made in view of such circumstances.
  • A first aspect of the three-dimensional imaging device of the present invention comprises: a light source that provides illumination light, whose optical frequency or whose amplitude-modulation frequency is swept, to illuminate a subject; an optical interferometer that combines the reflected light from the subject with reference light to generate interference fringes; a two-dimensional detection mechanism that detects the interference fringes as interference fringe signals at two-dimensional detection positions, by means of two-dimensionally arrayed light receiving elements, one-dimensional scanning of one-dimensionally arrayed light receiving elements, or two-dimensional scanning of a single light receiving element; an optical path difference calculating means that calculates, for each two-dimensional detection position and for all reflection points to be resolved, the optical path difference between the optical path length from the light source, via the three-dimensionally distributed reflection points of the subject, to the two-dimensional detection position of the two-dimensional detection mechanism, and the optical path length of the reference light from the light source to that two-dimensional detection position; a detection unit that obtains a three-dimensional data string by detecting the frequency of the interference fringe signal at each two-dimensional detection position, thereby resolving the light receiving direction; and a two-dimensional filtering unit that resolves the planes intersecting the light receiving direction. The three-dimensionally distributed reflection points of the subject are thereby three-dimensionally resolved.
  • In another aspect, the detection unit obtains the three-dimensional data string by Fourier transforming the interference fringe signal, and the three-dimensional data string is a complex signal of amplitude and phase.
  • In another aspect, the two-dimensional filter processing unit selects the data strings corresponding to the imaging aperture from the three-dimensional data string; from the selected data strings it extracts, using the optical path difference information, the data matching the optical path length from each two-dimensional detection position to the reflection point, multiplies the data by filter coefficients calculated from the optical path difference, and adds the results, thereby performing the imaging that resolves the reflection point. By convolving the filter coefficients in the same way for all reflection points to be resolved, the plane intersecting the light receiving direction is resolved.
  • Another aspect of the three-dimensional imaging device comprises: a storage unit that stores the three-dimensional data string; an address generation unit that uses the optical path difference to generate addresses for reading out, from the storage unit, the data matching the optical path length from each detection position of the two-dimensional detection mechanism to the reflection point to be resolved; and a filter coefficient generation unit that reads the data using the addresses and generates the filter coefficients for data interpolation in the light receiving direction, initial phase matching, and weighting of the imaging aperture.
  • The two-dimensional filter processing unit convolves these filter coefficients with the complex signal data.
  • In another aspect, the imaging aperture used for the two-dimensional filtering is divided into a plurality of blocks. For each block, the same processing as the two-dimensional filtering is performed to resolve the reflection points in the vicinity of the reflection point to be resolved, and the complex signal data of the nearby reflection points obtained in each block are used to detect and correct the disturbance of the optical wavefront.
  • In another aspect, distortion and fluctuation of the frequency sweep of the light source are detected, and a correction means corrects, with a phase matching filter, the resulting dispersion of the frequency components of the interference fringes.
  • In another aspect, an identification means is provided that obtains spectral components from the reflectance spectra of subjects whose clusters are known, and uses the spectral components to identify a subject from the reflectance spectrum of a subject whose cluster is unknown.
  • the identification means uses AI that executes deep learning.
  • In another aspect, a low-coherence light source and a spectroscope are provided instead of the swept light source, and three-dimensional resolution is performed by the detection unit and the two-dimensional filtering unit.
  • Another aspect is the three-dimensional imaging device according to any one of the first to eighth aspects, comprising a memory that stores, as RAW data, the interference fringe signal detected by the two-dimensional detection mechanism together with the information necessary for the three-dimensional resolution and the spectral analysis.
  • The added information includes the degree of coherence of the light source, the band characteristics (including distortion) and directivity of the frequency sweep, the coordinates of the detection positions of the two-dimensional detection mechanism and the directivity of the light receiving elements, the three-dimensional coordinates of the emission positions of the illumination light and the reference light relative to the detection positions of the two-dimensional detection mechanism, and information on the subject.
  • An eleventh aspect of the three-dimensional imaging device of the present invention comprises: a splitting unit that splits light emitted from a light source to generate illumination light and reference light; a synthesizing unit that causes the reflected light from the subject to interfere with the reference light to generate interference light; an imaging optical system that forms an image of the reflected light; a slit provided on the imaging plane of the imaging optical system; and a spectroscopic unit that disperses the interference light in a cross direction crossing the longitudinal direction of the slit opening.
  • A twelfth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the eleventh aspect, further comprising a scanning mechanism that moves at least one of the subject and the imaging device so as to scan the imaging range in the cross direction.
  • A thirteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the eleventh aspect, wherein the imaging optical system includes a cylindrical optical-system element whose focal position is inclined with respect to the optical axis.
  • A fourteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to any one of the eleventh to thirteenth aspects, comprising a light source that generates broadband light or broadband wavelength-swept light.
  • A fifteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to any one of the eleventh to fourteenth aspects, comprising a signal processing unit that extracts a predetermined wavelength band component from the interference light, performs a Fourier transform, and generates an image of the predetermined wavelength band component.
  • A sixteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the fifteenth aspect, wherein the signal processing unit extracts the wavelength band components corresponding to the three primary colors from the interference light, performs Fourier transforms to generate three primary-color image signals, and generates RGB image signals based on the three primary-color image signals.
  • A seventeenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to any one of the eleventh to sixteenth aspects, wherein a plurality of spectral components are obtained in descending order of Fisher ratio from the reflectance spectra of subjects whose clusters are known, and the spectral components are used to discriminate the cluster from the reflectance spectrum of a subject whose cluster is unknown.
  • An eighteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the thirteenth aspect, comprising a plurality of the imaging devices having pinholes instead of the slits, wherein the images captured by adjacent imaging devices are divided into blocks, the amount of deviation is detected by taking the correlation between the blocks, and the images are stitched together.
  • A nineteenth aspect of the three-dimensional imaging device of the present invention is the three-dimensional imaging device according to the eighteenth aspect, wherein the plurality of imaging devices are arranged in a line, and imaging devices adjacent to each other in the line are driven at different timings.
  • According to the present invention, it is possible to provide a three-dimensional imaging device that, with a simple structure, simultaneously achieves three-dimensional resolution and spectral image detection of a subject.
  • FIG. 1 is a configuration diagram showing the configuration of a three-dimensional imaging device according to an embodiment.
  • FIG. 2 is a diagram explaining the principle by which interference fringes are generated in the three-dimensional imaging device.
  • FIG. 3 is another diagram explaining the principle by which interference fringes are generated in the three-dimensional imaging device.
  • FIG. 4 is a diagram showing the configuration from a reflection point to the two-dimensional filter processing.
  • FIG. 5 is a diagram explaining the processing operation of the two-dimensional filter processing.
  • FIGS. 6(a) to 6(g) are diagrams explaining the arrangement interval and directivity of the light receiving elements.
  • FIG. 7 is a diagram explaining a configuration for recovering resolution degraded by disturbance of the optical wavefront through two-dimensional filter processing.
  • FIG. 8 is a diagram showing a configuration for generating an RGB image from RAW data of interference fringes.
  • FIG. 9 is a diagram showing the wavelength bands of the various spectral images generated by Fourier transform.
  • FIG. 10 is a diagram explaining nonlinear partitioning, by AI, of a substance to be identified.
  • FIG. 11 is a diagram explaining the case where the linearity of the frequency sweep of a laser light source is distorted.
  • FIG. 12 is a diagram explaining a configuration for detecting distortion in the linearity of the frequency sweep.
  • FIG. 13 is a diagram showing the configuration of an application example of the present embodiment.
  • FIG. 14 is a diagram showing the focusing range required when diagnosing a coronary artery.
  • FIG. 15 is a diagram showing a configuration in which the present embodiment is applied to an intravascular OCT device.
  • FIG. 16 is a diagram explaining a method of detecting images of coronary arteries using an intravascular OCT device.
  • FIG. 17 is a diagram explaining a configuration in which the present embodiment is applied to X-ray imaging and gamma-ray imaging.
  • FIG. 21(a) is another diagram showing the configuration from the reflection point to the two-dimensional filter processing; FIG. 21(b) is a diagram showing three-dimensional resolution processing in which a plane perpendicular to the optical axis is resolved by an imaging lens and the light receiving direction is resolved by Fourier transform processing.
  • FIG. 22 is a configuration diagram showing the configuration of an imaging device according to Example 1 of the present invention.
  • FIG. 23 is an explanatory diagram explaining imaging processing by the imaging device of FIG. 22.
  • FIG. 24 is a block diagram explaining detection of RGB and spectral images; FIG. 25 is a diagram explaining the range of the Fourier transform.
  • FIG. 26 is a diagram explaining the FS (Foley-Sammon) transformation.
  • FIG. 27 is a configuration diagram showing the configuration of an imaging device according to Example 2 of the present invention; FIG. 28 is a diagram explaining an observation image.
  • FIG. 29 is a configuration diagram showing the configuration of an imaging device according to Example 3 of the present invention.
  • A three-dimensional imaging device according to the present embodiment detects the amplitude and phase of reflected light by optical interferometry and performs three-dimensional resolution by electrical processing using them. The device then performs, for each three-dimensional pixel, focusing, recovery of resolution degraded by disturbance of the optical wavefront, and spectral analysis.
  • The three-dimensional imaging device two-dimensionally detects the interference fringes of the reflected light generated by an optical interferometer.
  • At each two-dimensional detection position, the light receiving direction is resolved by the Fourier transform processing described later, which yields the amplitude and phase of the reflected light (hereafter referred to as a complex signal).
  • The plane intersecting the light receiving direction is then resolved by the two-dimensional filter processing described later.
  • Through these two processes, the three-dimensional imaging device three-dimensionally resolves the subject.
  • The two-dimensional filter processing described above performs focusing (dynamic focusing) for each pixel and, as described later, restores resolution degraded by disturbance of the optical wavefront. In addition, the spectrum of the reflected light is analyzed using the frequency sweep of the illumination light employed for the resolution processing in the light receiving direction, and the composition of the subject is identified for each pixel.
  • The present embodiment is not limited to the visible light band. It can also be applied to configurations in which no imaging optical system exists, and to wavelength bands of electromagnetic waves for which an imaging optical system, even if available, is expensive: infrared light, terahertz waves, millimeter waves, X-rays, gamma rays, and the like.
  • FIG. 1 is a configuration diagram showing the configuration of a three-dimensional imaging device according to an embodiment.
  • The light source 1 emits light whose frequency is swept within the imaging time. The swept light, which is the illumination light emitted from the light source 1, is split by the beam splitter 2 of the optical interferometer 13.
  • One beam of swept light, reflected by the splitting surface, illuminates the subject 3.
  • The other beam, transmitted through the splitting surface, is reflected by the mirror 4.
  • The reference light reflected by the mirror 4 is combined by the beam splitter 2 with the reflected light 7 from the subject 3 to generate interference fringes.
  • The generated interference fringes are received by a two-dimensional array of light receiving elements 8 (hereinafter referred to as the "imaging element").
  • a method of detecting the interference fringe signal with the imaging element 8 will be described later.
  • The interference fringe signal received by the imaging element 8 is stored in the memory 5 as RAW data. The interference fringe signals necessary for resolution are then read out from the memory 5, and the light receiving direction is resolved by the Fourier transform processing (detection unit) 11.
  • In this embodiment and in the other embodiments and examples described later, the detection unit performs data analysis by Fourier transform in the light receiving direction for each two-dimensional detection position. However, the data analysis method is not limited to the Fourier transform; various time-frequency analysis methods such as the short-time Fourier transform and the wavelet transform can be used.
  • Furthermore, the plane perpendicular to the optical axis 9 is resolved by the two-dimensional filter processing 12 described later, whereby the reflection point 6 is detected three-dimensionally.
  • The Fourier transform processing 11 and the two-dimensional filter processing 12 are described later.
  • Interference fringes are formed whose frequency is proportional to the optical path difference between the reflected light from the reflection point 6 and the reference light.
  • This optical path difference is the difference between the optical path length of the illumination light emitted from the light source 1, reflected at the reflection point 6 via the beam splitter 2, and received by each light receiving element of the imaging element 8, and the optical path length of the reference light from the light source 1 through the beam splitter 2, the mirror 4, and the beam splitter 2 again to each light receiving element of the imaging element 8.
  • the reflection point 6 can be three-dimensionally resolved by a Fourier transform process 11 and a two-dimensional filter process 12, which will be described later.
  • The light source 1 has the spatial coherence (point light source property) required for resolution on the plane perpendicular to the optical axis 9, and has the temporal coherence, in terms of the linearity and frequency band of the frequency sweep, required for resolution in the light receiving direction.
  • a frequency-swept laser light source using a deflection element such as MEMS (Micro Electro-Mechanical System) or KTN (potassium tantalate niobate) and a spectroscope can be used.
  • Alternatively, the light source 1 may be an incoherent light source (partially coherent light source) having the spatial coherence (point light source property) required for resolving the plane perpendicular to the optical axis 9, with the amplitude of the emitted light modulated by the frequency sweep.
  • In terms of coherence length, the former light source is used to three-dimensionally resolve a small subject at a relatively short distance with high resolution.
  • The latter light source is used when three-dimensionally resolving a large subject at a long distance.
  • In FIG. 1, an imaging element 8 is shown as the mechanism for two-dimensionally detecting the interference fringes.
  • However, the invention is not limited to this; the interference fringes may also be detected two-dimensionally by a combination of a one-dimensional array of light receiving elements and one-dimensional scanning, or by a combination of a single light receiving element and two-dimensional scanning.
  • an optical system may be arranged in each optical path of the illumination light, the reflected light, and the reference light.
  • the optical interferometer 13 may be placed anywhere on the light receiving path, and the beam splitter 2 may be separately placed for combining the reference wave and for separating the illumination light.
  • the optical interferometer 13 in FIG. 1 shows a basic configuration for explaining the principle.
  • the optical interferometer is not limited to this, and there are various methods, and the method can be selected according to the application.
  • an optical interferometer such as the Mirau method may be used to reduce the size of the structure.
  • an optical circulator using a Faraday rotator may be used in order to increase the light utilization efficiency.
  • The intermediate optical systems, and the beam splitter 2 and mirror 4 that constitute the optical interferometer 13, must not impair the coherence of the illumination light, the reflected light, and the reference light, and their shapes must allow the optical path lengths of the reflected light and the reference light to each light receiving element to be calculated.
  • For the mirror 4, a component is used whose surface accuracy is sufficiently high (1/16 of the wavelength or less) and whose shape allows easy calculation of the optical path length: a point reflector or flat plate, or a surface with a focal point such as a concave, convex, or ellipsoidal surface.
  • the Fourier transform processing 11 of FIG. 1 for resolving the direction of light reception will be described below.
  • When two light waves with slightly different frequencies and phases are superimposed, interference fringes are generated at the difference frequency and difference phase. This is called optical heterodyne detection.
  • Optical heterodyne detection can convert a very high-frequency optical carrier into a low-frequency interference fringe carrier. The interference fringes, which retain the amplitude and phase information of the light, can then be converted into electrical signals by the light receiving element. Optical heterodyne detection can also be applied to amplitude and phase detection of amplitude-modulated incoherent light.
  • FIG. 2 is a diagram explaining the principle by which interference fringes are generated in the three-dimensional imaging device. The Fourier transform processing that resolves the light receiving direction is based on this principle of optical heterodyne detection. As shown in FIG. 2, a slight time difference (optical path difference) 14 arises from the difference in optical path length between the frequency-swept reference light 18 and the reflected light 19. This produces slight differences 15 between the frequencies and phases of the reference light 18 and the reflected light 19, and an interference fringe consisting of the difference frequency and difference phase is generated.
  • FIG. 3 is another diagram for explaining the principle of generating interference fringes in the three-dimensional imaging device. Also, as shown in FIG. 3, when the frequency sweep bandwidth 21 is widened as indicated by the dotted line 22, the interference fringe frequency 23 increases as indicated by the dotted line 24 even if the optical path difference 25 is the same.
  • the frequency of the interference fringes is detected as a spectrum (complex signal) on the frequency axis.
  • The position of the spectrum on the frequency axis is proportional to the optical path difference between the reflected light and the reference light on their paths from the light source (point light source) 1 to the light receiving elements of the imaging element 8 in FIG. 1. The distance from each light receiving element of the imaging element 8 to the reflection point 6 can therefore be detected.
  • the resolution of the spectrum (the width of the single spectrum) is determined by the waveform obtained by Fourier transforming the envelope of the frequency sweep.
  • When the frequency sweep bandwidth 21 in FIG. 3 is widened as indicated by the dotted line 22, the number of spectra after the Fourier transform for a given optical path difference increases, so the resolution in the light receiving direction can be increased.
  • the above-described processing can also be applied as it is when amplitude modulation of incoherent light is frequency-swept.
  • If the frequency sweep is linear, the reference light Es and the reflected light Er can be expressed by the following equations (1) and (2), respectively:
  Es = As·cos{2π[f0 + (Δf/2T)t]t + φ0}   (1)
  Er = Ar·cos{2π[f0 + (Δf/2T)(t − td)](t − td) + φ0}   (2)
  where Δf is the frequency sweep bandwidth, T is the sweep time, f0 is the sweep start frequency, φ0 is the initial phase at the start of the sweep, t is time, td is the time difference (optical path difference) between the reference light and the reflected light, As is the amplitude of the reference light, and Ar is the amplitude of the reflected light.
  • From the first term of equation (4), obtained by expanding the product of (1) and (2) and removing the sum-frequency component, 2(Δf/2T)td is the frequency of the interference fringe signal; the frequency of the interference fringes changes linearly as the time difference (optical path difference) td changes. From the second term of equation (4), 2π[(Δf/2T)td² + f0·td] is the initial phase of the interference fringe signal; the initial phase changes parabolically with respect to td.
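As an illustration of equations (1), (2), and (4), the following Python sketch simulates a linear sweep and numerically confirms that the fringe frequency equals 2(Δf/2T)td. All parameter values, the baseband start frequency, and the direct construction of the low-passed detector output are assumptions made for the simulation, not values from the patent.

```python
import numpy as np

# Illustrative check of equations (1), (2) and (4); parameters are assumptions.
df = 50e9    # frequency sweep bandwidth Delta-f [Hz]
T = 1e-3     # sweep time [s]
f0 = 0.0     # sweep start frequency (baseband here; a real optical sweep is ~200 THz)
td = 2e-7    # time difference (optical path difference / c) [s]
fs = 50e6    # detector sample rate [Hz]

t = np.arange(0.0, T, 1.0 / fs)
phi = lambda u: 2 * np.pi * (f0 + (df / (2 * T)) * u) * u  # phase of eq. (1)

# A photodetector low-passes the product of Es and Er, keeping only the
# difference-phase term, so the fringe signal can be built directly:
fringe = np.cos(phi(t) - phi(t - td))

spectrum = np.abs(np.fft.rfft(fringe * np.hanning(fringe.size)))
freqs = np.fft.rfftfreq(fringe.size, 1.0 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

print(f"measured fringe frequency: {peak:.3e} Hz")
print(f"theory 2*(df/2T)*td      : {(df / T) * td:.3e} Hz")
```

With these placeholder values both lines print about 1.0e7 Hz, i.e. the fringe frequency grows linearly with td, as stated above.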
  • Regarding the envelope of the frequency sweep: when the envelope is a square wave, side lobes of the sinc function, which is the Fourier transform of a square wave, are generated. If the envelope is Gaussian (a Gaussian function), the side lobes can be suppressed, but the resolution is slightly lowered, so the sweep bandwidth is widened accordingly.
  • The three-dimensional point spread function (three-dimensional PSF (Point Spread Function)) obtained when the interference fringe signal from the reflection point 6 is detected by the single light receiving element m is the point spread function in the light receiving direction 7, extended spherically over the directivity range of the light receiving element m.
  • If the interference fringe signal obtained by each light receiving element, or the complex signal obtained by Fourier transforming it, is archived as RAW data together with the degree of coherence of the light source, the band characteristics and directivity of the frequency sweep, the directivity, number, and array spacing of the light receiving elements, the three-dimensional coordinates of the emission positions of the illumination light and the reference light relative to the light receiving surface of the imaging element, and information on the subject, various kinds of processing can be performed later using the phase information.
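A container for such an archive might look like the following sketch; the field names are assumptions, and only the inventory of metadata follows the text.

```python
# Sketch of one RAW archive record: fringe data plus the metadata listed
# above (field names are illustrative; the patent specifies the content,
# not a storage format).
raw_record = {
    "fringe_signals": None,              # per-element interference fringe data
    "source_coherence_degree": None,
    "sweep_band_characteristics": None,  # including distortion
    "sweep_directivity": None,
    "element_directivity": None,
    "element_count": None,
    "element_spacing": None,
    "illumination_emit_xyz": None,       # relative to the light receiving surface
    "reference_emit_xyz": None,
    "subject_info": None,
}
```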
  • the light receiving direction can be resolved by performing a Fourier transform.
  • The principle of resolving the light receiving direction by Fourier transform processing is the same as that of pulse compression in radar. In other words, the phases of the frequency components of the reflected light are aligned and added (passed through a phase matching filter); it is as if micron-order light pulses were transmitted and received, as in radar, and the light receiving direction is thereby resolved.
  • the Fourier transform of the interference fringe signal yields the point spread function in the light receiving direction, and the full width at half maximum of the point spread function is the resolution in the light receiving direction.
  • The sampling interval in the light receiving direction is set smaller than the resolution. The number of pixels in the light receiving direction is therefore the range resolved in the light receiving direction divided by the sampling interval.
  • the interference fringe signal can be detected from the relationship of the Fourier transform pair while satisfying the sampling theorem.
  • To shorten the detection time, the frequency sweep time is shortened and an imaging element 8 with a correspondingly high frame rate is used.
  • The imaging element 8 is basically one capable of global shutter operation.
  • When the sweep time of the light source 1 is set to 16.7 seconds, the detection time is long, so this configuration is applied to three-dimensional resolution and shape measurement of stationary subjects.
  • The detection time when a commercially available high-speed imaging element is used is 1 second.
  • If the imaging time is 50 ms, applications to moving subjects expand.
  • The imaging time can be shortened further by imaging with a plurality of imaging elements at different timings via a multi-plate prism.
  • Since the imaging time of the imaging element is shortened, it might seem that sufficient sensitivity cannot be obtained.
  • However, the sensitivity improves with the number of pixels in the light receiving direction; in other words, the SN ratio of a single spectrum improves by the square root of the number of pixels.
  • The sensitivity also improves with the number of light receiving elements of the virtual lens 35 of FIG. 4. As a result, the sensitivity becomes almost the same as that of the shutter operation of an imaging element using an optical system, and no problem arises.
  • FIG. 4 is a diagram showing a configuration from reflection points to two-dimensional filtering.
  • The interference fringes generated by combining the reflected light and the reference light in the combining unit 32 are received by the light receiving elements 33-1 to 33-n of the imaging element.
  • the detected interference fringe signal is stored in memory 5 (FIG. 1).
  • the interference fringe signal corresponding to the aperture of the virtual lens 35 is read out from the memory, and Fourier transform processing 34 (11 in FIG. 1) is performed.
  • the light receiving direction of each of the light receiving elements 33-1 to 33-n is resolved, and three-dimensional data strings 36-1 to 36-n of complex signals in the light receiving direction are obtained.
  • the three-dimensional data trains 36-1 to 36-n of the complex signals in the light-receiving direction are processed by the two-dimensional filtering process 37.
  • the complex signals of the pixels matching the optical path length from the reflection point 31 to each of the light receiving elements 33-1 to 33-n are extracted.
  • The reflection point 31 can then be resolved by aligning the phases with the complex signal at the center position of the imaging aperture and adding. This processing is performed for all reflection points (pixels) in the object space, and the subject is three-dimensionally resolved.
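The extraction, phase alignment, and addition just described amount to delay-and-sum focusing over the virtual aperture 35. The following Python sketch shows that step for a single reflection point; the function name, the purely one-way path geometry, the linear interpolation, and the single-wavelength phase term are simplifying assumptions for illustration, not the patent's exact filter coefficients.

```python
import numpy as np

def refocus_point(depth_profiles, element_xy, point_xyz, center_wavelength, dz):
    """Delay-and-sum sketch of resolving one reflection point.

    depth_profiles    : complex array (n_elements, n_depth), the per-element
                        data strings 36-1..36-n (after the Fourier transform)
    element_xy        : (n_elements, 2) light-receiving-element positions
    point_xyz         : coordinates of the reflection point to resolve
    center_wavelength : wavelength used for the simplified phase matching
    dz                : sampling interval in the light receiving direction
    """
    x, y, z = point_xyz
    total = 0.0 + 0.0j
    for profile, (ex, ey) in zip(depth_profiles, element_xy):
        # one-way geometric path from the reflection point to this element
        path = np.sqrt((x - ex) ** 2 + (y - ey) ** 2 + z ** 2)
        addr = path / dz                     # fractional depth address
        i0 = int(addr)
        if i0 + 1 >= profile.size:
            continue                         # outside the recorded range
        frac = addr - i0
        # linear interpolation in the light receiving direction
        sample = (1.0 - frac) * profile[i0] + frac * profile[i0 + 1]
        # simplified initial-phase alignment to the aperture centre
        total += sample * np.exp(-2j * np.pi * path / center_wavelength)
    return total
```

Repeating this for every voxel, regenerating the interpolation and phase coefficients per point, corresponds to the convolution of filter coefficients over all reflection points described above.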
  • FIG. 21(a) is another diagram showing the configuration from the reflection point to the two-dimensional filtering process.
  • Reflected light beams P1 to Pn from one reflection point can be expressed by equation (5) above, where R is the reference beam, Lp is the coefficient of the low-pass filter formed by the light receiving element, and F is the Fourier transform in the light receiving direction.
  • Equation (6) represents three-dimensional resolution processing in which the plane perpendicular to the optical axis is resolved by an imaging lens and the light receiving direction is resolved by Fourier transform processing.
  • As described above, the frequency of the interference fringes changes linearly with td.
  • The initial phase of the fringe signal, on the other hand, is 2π[(Δf/2T)td² + f0·td] and changes parabolically with respect to td.
  • This initial phase matching is performed when complex signals of pixels matching the optical path length from the reflection point 31 to each of the light receiving elements 33-1 to 33-n are extracted and added.
  • Initial phase matching is performed together with data interpolation processing by low-pass filters 42-1 to 42-n shown in FIG. 5, which will be described later.
  • In the filter coefficient generator 50 shown in FIG. 5, described later, the data interpolation coefficients and the complex coefficients for phase matching are multiplied together to generate the coefficients 47-1 to 47-n of the low-pass filters 42-1 to 42-n.
  • FIG. 5 is a diagram for explaining the processing operation of the two-dimensional filtering process 37.
  • The data strings 36-1 to 36-n of the complex signals in the light receiving direction in FIG. 4 are stored in the line memories 41-1 to 41-n in FIG. 5.
  • From the line memories 41-1 to 41-n, the complex signals 48-1 to 48-n stored at the addresses corresponding to the optical path lengths from the reflection point 31 in FIG. 4 to the respective light receiving elements 33-1 to 33-n are read out.
  • The low-pass filters 42-1 to 42-n perform data interpolation in the light receiving direction on the complex signals 48-1 to 48-n, together with the phase matching described above. The adder 49 then performs the addition.
  • the accuracy of data interpolation should be 1/16 or less of the resolution in the light receiving direction.
  • Data interpolation is preferably spline interpolation. However, linear interpolation using neighboring data is also sufficient.
  • Data near the complex signal that matches the optical path length from the reflection point 31 to each of the light receiving elements 33-1 to 33-n are read out from the line memories 41-1 to 41-n. The read data are input to low-pass filters 42-1 to 42-n for data interpolation.
  • The coefficients 47-1 to 47-n of the filters for data interpolation and phase matching are generated by the filter coefficient generator 50 according to the addresses 44-1 to 44-p. To suppress side lobes, the addition may be performed after multiplying by correcting weight coefficients; these weights are applied by the filter coefficient generator 50, which multiplies them into the filter coefficients 47-1 to 47-n of the low-pass filters 42-1 to 42-n.
  • These addresses 44-1 to 44-p are generated by calculation, read out of a lookup table calculated in advance, or generated by a combination of the two, considering the balance between calculation time and memory size.
  • The optical path lengths of the reflected light and the reference light to each of the light receiving elements 33-1 to 33-n depend on the positions of the light receiving elements 33-1 to 33-n in FIG. 4, on any optical system arranged in the optical path, and on the shape and position of the mirror. The optical path lengths of the reflected light and the reference light are therefore calculated accurately in the address generator 45 and reflected in the addresses 44-1 to 44-p.
  • As an example, the optical path lengths of the reflected light and the reference light in the configuration of FIG. 1 are calculated as follows. Let the center of the light receiving surface of the imaging element 8 be the origin (0, 0, 0) of the three-dimensional coordinates, the direction perpendicular to the paper surface the X axis, the vertical direction the Y axis, and the direction of the optical axis 9 the Z axis.
  • The optical path length of the reflected light is obtained by folding the position of the light source 1 about the reflecting surface of the beam splitter 2: it is the path from the folded source position (0, 0, s) on the optical axis 9 to the position (x, y, z) of the reflection point 6, plus the path from (x, y, z) to the position (dx, dy, 0) of each light receiving element of the imaging element 8.
  • The optical path length of the reflected light is therefore given by equation (7):
  [x² + y² + (z − s)²]^(1/2) + [(x − dx)² + (y − dy)² + z²]^(1/2)   (7)
  • The optical path length of the reference light is obtained by folding the position of the light source 1 about the reflecting surface of the mirror 4 and then about the reflecting surface of the beam splitter 2: it is the path from the folded source position (0, 0, r) on the optical axis 9 to the position (dx, dy, 0) of each light receiving element of the imaging element 8.
  • The optical path length of the reference light is therefore given by equation (8):
  [dx² + dy² + r²]^(1/2)   (8)
  • the optical path lengths of the reflected light and the reference light can be easily calculated, so the optical path difference between the reflected light and the reference light can be calculated.
  • a value obtained by dividing the optical path difference between the reflected light and the reference light by the sampling interval in the light receiving direction corresponds to the pixel address when the light receiving direction is resolved by the Fourier transform processing 11 . In this way, addresses 44-1 to 44-n can be generated.
  • In addition, data interpolation by the low-pass filters 42-1 to 42-n can be used to convert the pixels into three-dimensional pixels in a cubic, uniformly spaced arrangement; the addresses 44-1 to 44-p are generated accordingly.
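For concreteness, the sketch below computes the fractional read-out address implied by equations (7) and (8) for one element and one reflection point; the numerical values of s, r, and dz are placeholders, not the patent's parameters. A lookup table would simply store pre-computed results of this calculation.

```python
import numpy as np

# Sketch of address generation from equations (7) and (8).
# s, r and dz below are placeholder values, not the patent's parameters.
s = 0.20    # folded light-source position on the optical axis [m], eq. (7)
r = 0.19    # folded source position for the reference path [m], eq. (8)
dz = 5e-6   # sampling interval in the light receiving direction [m]

def fringe_address(point, element):
    """Fractional pixel address of the depth bin matching reflection point
    (x, y, z) as seen from the light receiving element at (dx, dy, 0)."""
    x, y, z = point
    dx, dy = element
    reflected = (np.sqrt(x**2 + y**2 + (z - s)**2)
                 + np.sqrt((x - dx)**2 + (y - dy)**2 + z**2))  # eq. (7)
    reference = np.sqrt(dx**2 + dy**2 + r**2)                  # eq. (8)
    return (reflected - reference) / dz   # address for interpolation/read-out

# e.g. the address for an on-axis point 50 mm away, element 1 mm off-centre
print(fringe_address((0.0, 0.0, 0.05), (0.001, 0.0)))
```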
  • In principle, three-dimensional resolution can be achieved by three-dimensionally Fourier transforming the reflected light from the object space.
  • The Fourier transform can greatly reduce the total number of multiplications, thanks to the butterfly operation, when the characteristics of the filter applied after the transform are constant (a space-invariant filter).
  • In this embodiment, however, the one-dimensional Fourier transform processing in the light receiving direction is combined with the two-dimensional filter processing 12, in which the filter coefficients are optimized for each three-dimensional pixel and the convolution is performed.
  • The two-dimensional filtering 12 is equivalent to two-dimensionally Fourier transforming the reflected light.
  • FIGS. 6(a) to 6(g) are diagrams explaining the arrangement interval and directivity of the light receiving elements.
  • FIGS. 6(a) to 6(g) show a Fourier transform pair in which reflected light is received by a one-dimensional array of light receiving elements and Fourier transformed in the array direction; the arrangement interval and directivity are explained on this basis.
  • the y-axis indicates the position in the arrangement direction
  • the Y-axis indicates the position of the focal plane obtained by Fourier transforming the y-axis.
  • FIG. 6(a) shows the light receiving sensitivity distribution 51 when reflected light from a reflection point on the optical axis is received through an aperture 52.
  • The light receiving sensitivity distribution 51 is the product of the set aperture 52 and the directivity of the single light receiving element (its light receiving sensitivity distribution on the focal plane) 53. Setting the aperture 52 beyond the range of the directivity 53 is therefore meaningless.
  • the maximum detectable resolution is determined by the directivity 53 of the light receiving element.
  • the directivity 53 is formed by the aperture of the single light receiving element and the microlens.
  • Since the directivity 53 of the single light receiving element always points in the optical axis direction, the waveform of the light receiving sensitivity distribution 51 changes when the reflection point to be detected moves away from the optical axis.
  • the waveform in FIG. 6(a) shows the case where the reflection point is on the optical axis.
  • FIG. 6(b) shows a light-receiving element arrangement with an interval P.
  • FIG. 6(c) shows the sensitivity distribution on the light receiving surface of the single light receiving element.
  • FIG. 6(d) shows a point spread function (resolution is full width at half maximum) 54 on the focal plane obtained by Fourier transforming the light sensitivity distribution 51 .
  • FIG. 6(e) shows diffraction poles caused by the arrangement of the light receiving elements. The pole spacing is 1/P.
  • FIG. 6(f) shows the directivity (light sensitivity distribution) 53 on the focal plane of a single light receiving element formed of microlenses (formed by Fourier transform of the microlenses).
  • The actual value on the Y axis is the reciprocal of the focal length multiplied by a coefficient proportional to the center wavelength, but this is omitted from the figure because it is not directly related to the description in this section.
  • the resolution 57 is proportional to 1/ ⁇ , which is the reciprocal of the aperture ⁇ of the light sensitivity distribution 51 from the Fourier transform pair relationship.
  • the resolution (numerical aperture) that can be synthesized is determined by the directivity 53 of the light receiving element.
  • the directivity of the microlens is set according to the desired resolution.
  • Multiplying by the directivity 53 of the single light receiving element eliminates diffraction from the second and higher main poles 55, which would otherwise cause ghost images.
  • To this end, the diffraction pole spacing 1/P must be set larger than the position 56 where the directivity 53 becomes null (0); in other words, the array interval P of the light receiving elements must be set smaller than the resolution.
  • The array interval of the light receiving elements must be 1 μm or less.
  • The manufacturing limit of the pixel interval of imaging elements is currently slightly below 1 μm.
  • the directivity of the microlenses can be controlled in the manufacturing process.
  • When the reflection point to be detected is off the optical axis, the positions of the ± second main poles 55 come closest to the optical axis, while the directivity 53 of the light receiving element always points in the optical axis direction. Therefore, the array interval P must be set small, or the angle of view narrowed, so that the ± second main poles 55 do not enter the directivity 53 of the light receiving element.
  • The Fourier transform processing 11 (FIG. 1) is equivalent to quadrature detection of the interference fringe signal for each frequency component.
  • As a result, the carrier (carrier wave component) of the interference fringes disappears, and a complex signal of the point spread function in the light receiving direction is obtained.
  • With the disappearance of the interference fringe carrier, the frequency band narrows to the bandwidth of the envelope of the point spread function.
  • Consequently, the arrangement interval of the light receiving elements required for the two-dimensional filtering 12 (FIG. 1), which is performed after conversion into complex signals, can be on the order of microns, which is less than half the resolution.
  • In an imaging lens, by contrast, a surface accuracy as high as 1/16 of the wavelength of light or better (on the order of several tens of nanometers) is required.
  • The imaging lens is an excellent two-dimensional Fourier transformer that forms an image instantaneously and needs no processing time, unlike two-dimensional filtering.
  • However, switching the focal position, aperture, magnification, and so on, or correcting disturbance of the optical wavefront, requires a complicated optical system and mechanism, and switching takes time.
  • The two-dimensional filter processing switches these electrically, and makes it possible to optimize for each pixel, to restore degraded resolution, to extend the depth of field at high resolution, and so on.
  • FIG. 7 is a diagram for explaining a configuration for recovering the resolution deteriorated by the disturbance of the optical wavefront by two-dimensional filtering.
  • First, the aperture is divided into a plurality of blocks 61-1 to 61-m to 61-n, where 61-m is the central block.
  • The interference fringe signals corresponding to each block are read out from the memory 5 (FIG. 1) and subjected to Fourier transforms 62-1 to 62-n.
  • Next, two-dimensional filtering 63-1 to 63-n is performed for each block, and the complex signals of a total of five pixels, the pixel of the reflection point 66 and several pixels before and after it along the principal ray 67 of each block, are detected.
  • Next, cross-correlation processing 64-1 to 64-n is performed between the five-pixel complex signal detected in each block and the five-pixel complex signal of the central block 61-m to detect the deviation of the optical path length.
  • The cross-correlation processing 64-1 to 64-n convolves the complex conjugate of the five-pixel signal of the central block 61-m with the five-pixel complex signals of the other blocks.
  • The convolution is performed after interpolating the five-pixel data so that the detection accuracy of the peak position indicating the optical path length deviation is 1/16 or less of the resolution in the light receiving direction.
  • If the disturbance of the optical wavefront is large, it is handled by increasing the number of pixels (beyond five) used for the cross-correlation processing.
  • In that case, the number of blocks is increased in order to increase the number of samples.
  • Alternatively, the number of blocks may be doubled by applying Gaussian weighting to the outputs of the light receiving elements of the blocks and overlapping the apertures.
  • The deviation of the optical path length between the central block and each block detected by the cross-correlation processing 64-1 to 64-n represents the disturbance of the optical wavefront.
  • The data interpolation unit 66 interpolates the deviation of the optical path length for each block so that it corresponds to each of the light receiving elements 33-1 to 33-n in FIG. 4; the result is sent to the address generator 45 for optical path length matching shown in FIG. 5 and reflected in the addresses 46 (added to the addresses 46).
  • In this way, two-dimensional filter processing can be performed with the disturbance of the optical wavefront corrected.
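A minimal sketch of the per-block deviation estimate is given below: it interpolates two short complex depth profiles and locates the cross-correlation peak to a fraction of a sample. The profile length, the upsampling factor, and the linear interpolation (the text prefers spline interpolation) are assumptions for illustration.

```python
import numpy as np

def path_length_deviation(block_profile, center_profile, dz, upsample=16):
    """Estimate the optical-path-length deviation of one aperture block
    relative to the central block from short complex depth profiles.
    dz is the sampling interval in the light receiving direction."""
    n = block_profile.size
    coarse = np.arange(n)
    fine = np.linspace(0, n - 1, (n - 1) * upsample + 1)
    # interpolate real and imaginary parts (linear here; spline in the text)
    interp = lambda p: (np.interp(fine, coarse, p.real)
                        + 1j * np.interp(fine, coarse, p.imag))
    b, c = interp(block_profile), interp(center_profile)
    # np.correlate conjugates its second argument: complex cross-correlation
    corr = np.correlate(b, c, mode="full")
    lag = np.argmax(np.abs(corr)) - (b.size - 1)
    return lag * dz / upsample   # deviation as a physical length
```

The deviations found this way play the role of the wavefront-sensor output in adaptive optics and are added to the read-out addresses before the two-dimensional filtering.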
  • When the numerical aperture of the imaging aperture is large, the correlation between the central block and the blocks at the ends of the aperture weakens.
  • In that case, correlation processing is first performed between the central first block and the adjacent second block, which is highly correlated with it.
  • Next, correlation processing is performed between the second block and the third block. The deviations of the optical path length may be detected by repeating this while shifting outward sequentially and accumulating the deviations.
  • Detection errors also accumulate; however, when the resolution in the light receiving direction is on the order of microns and the SN ratio is 40 dB or more, data interpolation of the complex signal in the light receiving direction followed by cross-correlation processing gives a sufficiently high deviation detection accuracy, on the order of a few nanometers, so the accumulated error can be neglected. It is desirable to select the correlation processing method in consideration of the numerical aperture and the SN ratio.
  • The disturbance of the optical wavefront caused by aberrations of the optical system along the way is gentle, and its spatial frequency components are low.
  • When the disturbance of the optical wavefront is stronger, its spatial frequency components increase.
  • In that case, the number of blocks is increased in accordance with the sampling theorem.
  • However, the number of blocks and the numerical aperture (NA) of each block are in a trade-off relationship, so increasing the number of blocks reduces the accuracy of the cross-correlation.
  • Therefore, the disturbance of the optical wavefront for each type of subject is grasped statistically, and the SN ratio is used as a constraint.
  • Combinations of the number of blocks, the numerical aperture, and the cross-correlation pixel range are solved in advance for each subject as a combinatorial optimization problem; correction is then performed after switching to the optimum balance for each subject.
  • Alternatively, the optimum combination for each subject may be found by annealing and iteration, using the extent of the OTF of the image after two-dimensional filtering as an index, and correction performed after switching to the optimum balance for each subject.
  • the principle of correcting the disturbance of the optical wavefront of this embodiment is basically the same as that of adaptive optics used in astronomy.
  • In astronomy, a guide star (point image) is set by irradiating the sodium atomic layer at an altitude of 90 km with a laser beam, exciting the sodium and making it glow.
  • In the present embodiment, a point image could likewise be set on the surface of the subject using infrared light or the like; however, the method of this embodiment detects the disturbance of the light wavefront by cross-correlation processing using the subject's own signal, so it is not necessary to set a guide-star-like point image in the object space.
  • The blocks 61-1 to 61-n in FIG. 7, which correspond to the wavefront sensor and wavefront controller of adaptive optics, can be set appropriately in number and size according to the application, and the balance between them can be optimized by processing such as an optimization problem.
  • Three-dimensional complex signal data of 5 × 5 × 5 pixels centered on the detection point are detected by the two-dimensional filter processing of each of the blocks 61-1 to 61-m to 61-n.
  • Six-axis cross-correlation processing (x, y, z and the rotations about each axis) using the three-dimensional complex signals is performed between blocks. Based on the result, the light receiving positions of the light receiving elements are corrected in addition to the disturbance of the light wavefront, after which the two-dimensional filtering is performed.
  • First, the interference fringe signals 71 corresponding to the aperture are read out sequentially or in parallel from the memory 5 in FIG. 1.
  • The read signals are Fourier transformed by the FFT 72 over the visible light band 81a shown in FIG. 9, generating a W (white) complex signal in which the light receiving direction is resolved.
  • FIG. 9 is a diagram showing wavelength bands of various spectrum images generated by Fourier transform.
  • The band W may also be generated so as to include the near-infrared region 82 shown in FIG. 9, where the living body is highly transparent.
  • The W, R, and B complex signals are each subjected to two-dimensional filter processing, and W, R, and B are three-dimensionally resolved.
  • At that time, chromatic aberration (differences in optical path length) may be corrected, and the pixels may be converted into a cubic pixel array.
  • Each FFT and each two-dimensional filter processing shown in FIG. 8 have the same functions as those described for the Fourier transform processing 11 and two-dimensional filter 12 in FIG.
  • matrix conversion is performed by the matrix converter 75 in FIG. 8 to generate three-dimensionally resolved RGB signals.
  • Images are displayed according to the purpose: surface images, cross-sectional images, transmission images, three-dimensional reconstructions by CG, and so on.
  • The R signal 83 and the B signal 84 in FIG. 9 have narrower wavelength bandwidths than the W signal and different center wavelengths. The resolution in the light receiving direction of the R signal 83 and the B signal 84 is therefore about 1/3 that of the W signal. However, since the resolution of the human eye for R and B is also about 1/3, this poses no problem.
  • Broadband swept light can be regarded as a linear sum of a plurality of swept lights such as R, G, B, and infrared.
  • All processing is linear, including illumination, reflection, interference with the reference light, detection of the interference fringes, and the Fourier transform. Therefore, by the principle of superposition, extracting the swept-frequency portions corresponding to the R and B bands from the interference fringe signal and Fourier transforming them gives the same result as performing the optical interference image processing with independent R and B swept light sources.
  • Furthermore, an XYZ complex signal can be obtained by multiplying the interference fringe signal of the swept visible light band by the XYZ color matching functions and performing the Fourier transform.
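By the superposition argument above, extracting a band amounts to windowing the fringe samples along the sweep before the Fourier transform. The sketch below uses a Gaussian window as a stand-in for a color matching function; the wavelength grid, window parameters, and the placeholder fringe data are assumptions, and real CIE color matching tables would replace the Gaussian.

```python
import numpy as np

# Band extraction by windowing the sweep (a sketch; the Gaussian is a
# stand-in for a real colour matching function, the fringe is a placeholder).
n = 4096
wavelengths = np.linspace(400e-9, 700e-9, n)  # one sample per sweep step
rng = np.random.default_rng(0)
fringe = rng.standard_normal(n)               # placeholder fringe signal

def band_profile(fringe, wavelengths, center, width):
    """Complex depth profile of one wavelength-band component."""
    window = np.exp(-0.5 * ((wavelengths - center) / width) ** 2)
    return np.fft.rfft(fringe * window)

# e.g. an R-like and a B-like band from the same recorded fringe signal
r_profile = band_profile(fringe, wavelengths, 610e-9, 30e-9)
b_profile = band_profile(fringe, wavelengths, 460e-9, 30e-9)
```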
  • The reflectance spectrum in the visible light band is changed mainly by absorption at wavelengths that excite outer-shell electrons of atoms, absorption at wavelengths that excite molecular vibrations, spins, and intermolecular vibrations, and by diffractive scattering due to the arrangement of refractive indices.
  • Good results can be obtained by using statistical analysis, such as multivariate analysis or deep learning AI, as a method of identifying such clusters.
  • the procedure for identifying two clusters by such a method is described below.
  • The additional information 70 includes normalization processing information for reducing the variance of the clusters to be identified and making the clusters easier to distinguish, addresses for cutting out the substance to be identified from the image, and other information necessary for generating the image.
  • The information for the normalization processing includes the brightness of the illumination light, the wavelength band characteristics of the illumination light, and so on.
  • An expert who can identify the substance observes an RGB image or a spectral analysis image, described later, and designates the cutout address with a mouse or the like.
  • The image may also be generated by an external computer and then designated.
  • the information necessary to generate an image corresponds to the frequency sweep band, linearity, arrangement interval and directivity of the light receiving elements, and the like.
  • the interference fringe signal is read out from the recording device, and the acquired data is normalized by the computer. After that, Fourier transform processing and two-dimensional filtering processing are performed to generate a three-dimensional image.
  • the image portion of the substance to be identified is cut out from the 3D image according to the cutout address.
  • the complex signal in the light receiving direction of the extracted pixel is mainly the complex signal of the reflected light from the object surface.
  • Because propagation attenuation is large, only the reflection from the object surface is detected.
  • the pixel data in the target light receiving direction is three-dimensionally cut out.
  • When the subject is a living body, the attenuation during propagation through the living body varies greatly with wavelength, and the attenuation of the tissue in the propagation path is superimposed on the detected spectrum.
  • Therefore, spectrum analysis is mainly performed on the image of the object surface, except for objects with high transparency.
  • a computer performs FS (Foley-Sammon) transformation on a large amount of multispectral data of the two substances to be identified in a multidimensional coordinate space with each spectral component as an orthogonal axis.
  • the FS transform is an orthogonal transform that calculates the feature axes that increase the Fisher ratio of two clusters in descending order. Similar to data compression, it is possible to narrow down to at most 5 to 6 feature axes based on cumulative contribution rates and experience.
  • The number of AI input terminals equals the 5 to 6 characteristic axes narrowed down by the FS transform. Therefore, the scale of the AI, including the number of layers, becomes much smaller (a sketch of the feature-axis computation is given below).
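  • (Reference sketch:) A minimal Python illustration of deriving a handful of Fisher-ratio feature axes from two labeled spectral clusters is shown below. It does not reproduce the closed-form Foley-Sammon recursion; instead, each new axis is the two-class Fisher direction computed in the orthogonal complement of the axes already found, which likewise yields orthonormal axes with decreasing discriminating power. Cluster data, dimensions, and axis count are all hypothetical.

      import numpy as np

      def fisher_axis(a, b, reg=1e-6):
          # two-class Fisher direction: Sw^(-1) (mean_a - mean_b), normalized
          sw = np.cov(a.T) + np.cov(b.T) + reg * np.eye(a.shape[1])
          w = np.linalg.solve(sw, a.mean(axis=0) - b.mean(axis=0))
          return w / np.linalg.norm(w)

      def feature_axes(a, b, n_axes=6):
          # orthonormal axes found by repeated deflation onto the complement
          dim = a.shape[1]
          axes, basis = [], np.eye(dim)
          for i in range(n_axes):
              w = basis @ fisher_axis(a @ basis, b @ basis)
              axes.append(w)
              proj = basis - np.outer(w, w) @ basis   # remove the span of w
              q, _ = np.linalg.qr(proj)
              basis = q[:, :dim - i - 1]
          return np.array(axes)

      rng = np.random.default_rng(0)
      cluster_a = rng.normal(size=(200, 32)) + np.linspace(0, 1, 32)  # substance A
      cluster_b = rng.normal(size=(200, 32))                          # substance B
      eu = feature_axes(cluster_a, cluster_b)      # rows play the role of EU1..EU6
      print(eu.shape, np.round(eu @ eu.T, 3).trace())  # (6, 32) 6.0 (orthonormal)

    Projecting each pixel's spectrum onto these few axes is what shrinks the AI input layer to 5 or 6 terminals.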
  • A feature of identification by AI is that identification by the nonlinear partition Z, shown in FIGS. 10A and 10B, becomes possible.
  • The interference fringe signal 71 is multiplied, by the multipliers 79-1 to 79-n, by the matrix conversion coefficients 77-1 to 77-n sent from the computer to the control unit 78 and stored. As a result, the interference fringe signal 71 is projectively transformed onto the characteristic axes EU1 to EU6.
  • FIGS. 10A and 10B are diagrams for explaining non-linear segmentation of a substance to be specified by AI.
  • the interference fringe signals projectively transformed onto the characteristic axes EU1 to EU6 are subjected to Fourier transform processing and two-dimensional filtering to generate spectral analysis images, which are images of the characteristic axes EU1 to EU6.
  • The top spectral analysis images EU1, EU2, and EU3 may be assigned to YIQ (the component representation used in NTSC internal processing) in descending order of visual sensitivity and displayed after matrix conversion to RGB. The observer's visual brain then performs the non-linear discrimination.
  • the spectrum analysis image may be input to the AI 80 for each pixel, and the two substances may be identified for each pixel.
  • The AI neuron coefficients 76 are loaded in advance into the AI 80 from the computer via the control unit 78.
  • The results identified by the AI 80 may be displayed fused with the RGB image by pseudo-coloring the pixel portions of the identified substances.
  • The identification by FS transform described above identifies two clusters at a time; when identifying multiple substances, the characteristic axes are switched each time. Even if the switching is performed multiple times in a tree-like combination, the tree-like identification operation can be performed at high speed because the characteristic axes are narrowed down to 5 to 6 and the circuit scale of the AI 80 is small.
  • multispectral waveforms of multiple substances to be specified may be directly learned (supervised) by AI to specify multiple substances.
  • In that case, the number of AI input terminals required equals the number of multispectral components, so the scale of the AI increases.
  • FIG. 11 is a diagram for explaining a case where the linearity of the frequency sweep of the laser light source is distorted.
  • Frequency modulation (frequency dispersion) 104 occurs in the interference fringe signal due to the optical path difference 103 between the reflected light 101 and the reference light 102, as shown in FIG. 11. This widens the spectral width after the Fourier transform and reduces the resolution. Moreover, when the optical path difference 103 changes, the frequency modulation 104 also changes.
  • the frequency modulation 104 caused by such sweep distortion can be corrected by performing phase-matched filtering on the spectral dispersion after the Fourier transform.
  • Phase matching is performed by using the FIR filters (Finite Impulse Response filters) 86-1 to 86-n shown in FIG. 5.
  • the length of the FIR filter is set to allow for the range of frequency dispersion after Fourier transformation.
  • the coefficients 87-1 to 87-n (FIG. 5) of the phase matching filter are switched and multiplied for each pixel in the light receiving direction.
  • A complex conjugate signal generated by Fourier transforming the frequency modulation 104 shown in FIG. 11 is used for the coefficients 87-1 to 87-n (FIG. 5) of the phase-matched filter. Changes in the distortion of the light-source frequency-sweep linearity are detected at appropriate time intervals, and the FIR coefficient generator 88 (FIG. 5) updates the coefficients accordingly.
  • The phase-matched filter coefficients 87-1 to 87-n are added to the additional information 70 via the control section 78 (a numerical sketch of this correction follows).
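  • (Reference sketch:) The correction principle, though not the FIR hardware itself, can be checked in a few lines of Python. A fringe from a single depth under a nonlinear sweep shows a broadened FFT peak; multiplying the fringe by the complex conjugate of the depth-dependent distortion chirp before the FFT restores a one-bin peak. This time-domain multiplication is equivalent to convolving the spectrum with the conjugate kernel, which is what the per-pixel FIR filters 86-1 to 86-n implement. All numbers are hypothetical.

      import numpy as np

      N = 4096
      t = np.linspace(0.0, 1.0, N, endpoint=False)    # normalized sweep time
      nu = t + 0.01 * np.sin(2 * np.pi * 2 * t)       # distorted frequency sweep

      tau = 250.0                                     # one depth pixel, in FFT bins
      fringe = np.exp(1j * 2 * np.pi * tau * nu)      # complex (analytic) fringe

      # conjugate of the distortion-only chirp for this depth pixel
      kernel = np.conj(np.exp(1j * 2 * np.pi * tau * (nu - t)))

      def half_max_width(spec):
          return int((spec > 0.5 * spec.max()).sum())  # crude width in bins

      print(half_max_width(np.abs(np.fft.fft(fringe))),           # broadened
            half_max_width(np.abs(np.fft.fft(fringe * kernel))))  # -> 1 bin

    Because the chirp scales with tau, the kernel (and hence the FIR coefficients) must be switched for each pixel in the light receiving direction, as stated above.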
  • FIG. 12 is a diagram illustrating a configuration for detecting distortion in linearity of frequency sweep.
  • the light is split into wavelength components by a spectroscope 113, and an image is formed on a one-dimensionally arrayed light receiving element (line sensor) 114 arranged in the spectroscopic direction by an imaging optical system 115 to receive the light.
  • The light emitted from the light source 1 is imaged as a spot that moves along the one-dimensional light receiving element 114 according to the frequency sweep.
  • the reading of the one-dimensional light receiving element 114 is repeated multiple times to detect the movement of the spot light and the distortion of the sweep frequency.
  • a peak value detection circuit 115 interpolates the pixel data of the light receiving element to detect the peak value, thereby improving the accuracy of the position of the spot light.
  • the phase of the frequency modulation 104 (Fig. 11) is calculated in the FIR coefficient generator 88 (Fig. 5) using the time integration formula used when calculating the phase of FM modulation.
  • the position of the spot light detected by the one-dimensional light receiving element 114 is temporarily stored in the memory 116, converted into a frequency-modulated waveform, and sent to the FIR coefficient generator 88 in FIG. 5 to correct linearity distortion.
  • FIR filter coefficients 87-1 to 87-n are generated by computation and sent to FIR filters 86-1 to 86-n for correction.
  • The reading repetition frequency of the one-dimensional light receiving element 114 does not need to be large, because the distortion of the light-source frequency sweep is gradual and data interpolation can reproduce the sweep characteristic. The detection accuracy of the sweep distortion, however, must correspond to the resolution in the light receiving direction (see the integration sketch below).
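  • (Reference sketch:) A minimal Python version of the computation in the FIR coefficient generator 88 is shown below, with a synthetic frequency-modulation waveform standing in for the spot-position measurements: the coarse readings are interpolated onto the fringe sampling grid and integrated over time to give the FM phase, whose conjugate exponential supplies the coefficients for one depth pixel.

      import numpy as np

      n_fringe, n_read = 4096, 64          # fringe samples vs. coarse readings
      t_read = np.linspace(0.0, 1.0, n_read)
      # synthetic FM waveform derived from the spot motion on line sensor 114
      fm_read = 0.04 * np.pi * np.cos(2 * np.pi * 2 * t_read)

      # the sweep distortion is gradual, so interpolation reproduces it
      t = np.linspace(0.0, 1.0, n_fringe, endpoint=False)
      fm = np.interp(t, t_read, fm_read)

      # FM phase by time integration: phi(t) = 2*pi * integral of the deviation
      phi = 2.0 * np.pi * np.cumsum(fm) * (t[1] - t[0])

      tau = 250.0                          # depth pixel, in FFT-bin units
      coeffs = np.exp(-1j * tau * phi)     # conjugate kernel for this pixel
      print(coeffs.shape, round(abs(coeffs[0]), 3))   # (4096,) 1.0

    With these hypothetical numbers, the integrated deviation equals the 0.01*sin(2*pi*2t) sweep distortion used in the previous sketch, so the two computations produce the same kernel.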
  • Alternatively, the complex signal of the reflected light from a reference reflection point can be detected using an optical interferometer, and the complex conjugate signal obtained by Fourier transforming it can be used as the coefficients of the phase matching filter (87-1 to 87-n in FIG. 5).
  • A single light-receiving element may be used instead of the image sensor 8 (FIG. 1); a two-dimensional scanning mechanism detects the reflected light from the subject in two dimensions, and Fourier transform processing and two-dimensional filter processing perform three-dimensional resolution.
  • Some single photodetectors have ultra-high sensitivity, and some can detect special wavelength bands other than visible light, so this configuration can be applied to three-dimensional imaging devices and inspection devices that use such wavelength bands.
  • a one-dimensional array of light-receiving elements is used in place of the imaging element 8 (FIG. 1), and scanning is performed by a one-dimensional scanning mechanism in a direction intersecting the array, so that the reflected light from the object is captured two-dimensionally.
  • three-dimensional resolution is performed by Fourier transform processing and two-dimensional filter processing.
  • Line sensors include those with a large number of pixels, those with high sensitivity, and those that can detect special wavelength bands, so this configuration can be applied to visual sensors for FA robots and the like.
  • This embodiment can resolve three dimensions without using an imaging optical system. However, as described above, by combining this embodiment with the imaging optical system, the number of processes of the two-dimensional filter can be reduced.
  • FIG. 13 is a diagram showing the configuration of an application example of this embodiment.
  • the direction of the chief ray 123 is resolved by Fourier transform processing.
  • the resolution of the plane perpendicular to the optical axis 121 is performed by the imaging optical system 122 .
  • Using the three-dimensional complex signal obtained through the imaging optical system 122, it is possible to extend the depth of field of the imaging optical system 122 and to recover, by two-dimensional filtering, resolution degraded by disturbance of the optical wavefront. By correcting the disturbance of the optical wavefront, the aberration of the optical system can also be corrected.
  • In FIG. 13, 4a is a reflecting mirror.
  • FIG. 14(a) shows the imaging light flux 126 when the imaging position of the reflection point 124 (FIG. 13) is in front of the imaging device 125 (FIG. 13) (on the subject side).
  • FIG. 14(b) shows the imaging light flux 127 behind (on the image side) the imaging element.
  • Dotted light beams 128-1 and 128-2 indicate light beams re-imaged by virtual lenses 129-1 and 129-2 by two-dimensional filtering, respectively.
  • By performing two-dimensional filtering on the pixels in the direction of the chief ray 123, it is possible to extend the depth of field and restore the resolution degraded by the disturbance of the light wavefront described above.
  • When this embodiment is applied to a fundus imaging device, unnecessary reflections on the surfaces of the objective optical system and the eyeball optical system, and unnecessary reflections due to turbidity of the vitreous body, can be removed based on the difference in optical path length.
  • the aperture of the eyeball optical system which has conventionally been divided into rings for illumination and imaging in order to avoid unnecessary reflection from the eyeball optical system, can now be used entirely. Therefore, high-definition, high-contrast fundus imaging can be performed at high speed, and a three-dimensional tomographic image of the retina can be detected.
  • the imaging mechanism is scanned one-dimensionally to detect a tomographic image, and resolution and depth of field are expanded only in the optical axis direction 121 by Fourier transform processing and two-dimensional filter processing.
  • the number of image pickup elements required to expand the depth of field is 100 pixels or less. For this reason, the number of processes of the two-dimensional filter can be further reduced, and a tomogram with a high horizontal resolution and a deep depth of field can be detected in real time.
  • the frequency swept light source is replaced with a low coherence light source (for example, SLD: Super Luminescent Diode), and the interference fringe signal is separated by a spectroscope.
  • In this way, an interference fringe signal equivalent to that obtained with a frequency-swept light source can be obtained, and by performing Fourier transform processing and two-dimensional filter processing on it, three-dimensional resolution can be performed (a numerical sketch follows).
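  • (Reference sketch:) The equivalence can be verified numerically: with a low coherence source and a spectroscope, each pixel column records the interference intensity as a function of optical frequency, which is exactly the signal a frequency sweep would deliver in time. The Python sketch below, with hypothetical depths and amplitudes, builds such a spectral interferogram for two reflection points and recovers both depths with a single FFT.

      import numpy as np

      N = 2048
      k = np.linspace(0.0, 1.0, N, endpoint=False)   # normalized optical frequency
      depths, amps = [80, 210], [1.0, 0.4]           # path differences, in bins

      # spectral interferogram: DC terms plus one cosine per reflection point
      spectrum = 1.0 + sum(a * np.cos(2 * np.pi * d * k)
                           for d, a in zip(depths, amps))

      profile = np.abs(np.fft.rfft(spectrum))
      profile[0] = 0.0                               # discard the DC term
      print(sorted(np.argsort(profile)[-2:]))        # -> [80, 210]

    The two-dimensional filtering for depth-of-field extension then operates on this complex depth profile exactly as in the swept-source case.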
  • FIG. 15 is a diagram showing a configuration using a low coherence light source in this embodiment.
  • broadband light emitted from a low coherence light source (SLD) 131 is reflected by a beam splitter 132 and illuminates an object 133 .
  • Reflected light from the reflection point 130 is imaged by an objective optical system 134, and unwanted light is removed through a slit 135 provided on the imaging plane.
  • After that, the light is converted into parallel light by the collimating optical system 136 and enters the spectroscope 137.
  • the center of the aperture of the slit 135 is positioned on the optical axis of the objective optical system 134 .
  • the reflected light is split by a spectroscope 137 and then imaged on an imaging device 139 by an imaging optical system 138 .
  • One of the lights separated by the beam splitter 132 is imaged as a line segment 142 on the reflecting mirror 141 by the cylindrical optical system 140 .
  • Light reflected from line segment 142 is converted into parallel light through cylindrical optical system 140 , objective optical system 134 , slit 135 , and collimator optical system 136 .
  • After the parallel light is split by the spectroscope 137, an image is formed on the imaging device 139 by the imaging optical system 138. The reflected light and the reference light are combined on the light receiving surface of the imaging element 139, and the resulting interference fringes are converted into electrical signals by the imaging element.
  • The cylindrical optical system 140 and the reflecting mirror 141 are arranged so that the optical axis of the cylindrical optical system 140, folded by the reflecting surface of the beam splitter 132, is aligned with the optical axis of the objective optical system 134, and so that the line segment 142 and the opening of the slit 135 are optically conjugate.
  • the spectroscope 137, the image sensor 139, and the imaging optical system 138 are arranged so that the direction and range of the light split by the spectroscope 137 match the direction and range of the vertical arrangement of the pixels of the image sensor 139.
  • the subject image formed on the aperture of the slit 135 by the objective optical system 134 is formed on the horizontal pixel array of the imaging device 139 . Then, the interference fringe signals generated by the horizontal array pixels are detected from the vertical pixel array.
  • Interference fringe signals are sequentially read out from the imaging device 139 and stored in a memory (not shown). Then, the scanning mechanism 143 scans the object 133 or the optical axis 144 in the vertical direction 130 in FIG. 15 to acquire the interference fringe signal two-dimensionally and store it in the memory.
  • the interference fringe signal is read out from the memory, the light receiving direction is resolved by Fourier transform processing, and a three-dimensional resolution is obtained. Then, the depth of field is expanded and the disturbance of the light wavefront is corrected by two-dimensional filtering.
  • FIGS. 16(a), (b), and (c) show the optical paths 147-1, 147-m, and 147-n of the reflected light from a reflection point 146 located away from the focus position 145 of the objective optical system 134 (FIG. 15), detected while the optical axis 144 is scanned in the vertical direction of the paper surface.
  • Even when the reflection point 146 is distant from the focus position 145, the reflected light 147-1 to 147-n can be detected.
  • the reflection point 146 can be resolved by Fourier transform processing and two-dimensional filter processing in the light receiving direction. Even when the reflection point 146 is behind the focus position 145, it can be similarly resolved.
  • When the reflection point 146 deviates from the focus position 145 of the objective optical system 134 (FIG. 15), the light beam is vignetted by the slit 149 and the sensitivity appears to be lowered; however, since the amplitudes of the reflected lights 147-1 to 147-n are added, the same sensitivity as when the reflection point 146 is at the focus position 145 is obtained.
  • An intravascular OCT (optical coherence tomography) apparatus percutaneously inserts a guide wire into a coronary artery of the heart from a blood vessel such as the root of the leg, arm, or wrist under X-ray fluoroscopy.
  • In PCI (percutaneous coronary intervention), an OCT catheter about 1 mm in diameter is inserted along the guide wire and rotated to detect a tomographic image of the coronary artery (2 to 4 mm in diameter, about 15 cm long).
  • the challenge with intravascular OCT devices is in qualitative diagnosis, which identifies substances that cause stenosis in blood vessels, such as plaque (mass of fat and cholesterol), thrombus, and calcification, and assesses their risk grades.
  • qualitative diagnosis is made from the morphological information (shape, texture, brightness density) of the tomogram, but a high level of experience is required. Treatment methods vary depending on the material causing the stenosis and its grade. In particular, qualitative diagnosis is important because oily plaque, when detached, clogs small blood vessels and causes angina pectoris and myocardial infarction.
  • plaques, thrombi, and calcifications can be distinguished by color using visible light images from fiber angioscopes.
  • plaque is yellowish
  • thrombus is reddish, with the tone determined by the degree of mixing with fibrin
  • calcified and normal mucosa are both whitish, but their color tones, including transparency, differ slightly.
  • an intravascular OCT apparatus can simultaneously perform qualitative diagnosis by analyzing the spectrum of the vessel wall in addition to morphological diagnosis using tomographic images. And it is desirable that these diagnoses can be made over the 15 cm length of the coronary artery.
  • FIG. 17 is a diagram showing the focusing range required for diagnosing coronary arteries. Another problem with the intravascular OCT apparatus is that, as shown in FIG. 17, the focal range 150 required for diagnosing a coronary artery with a maximum diameter of 4 mm is as wide as 1 mm to 4 mm, so a high horizontal resolution cannot be set. If the horizontal resolution is low, horizontal reflections are superimposed, and the depth resolution also deteriorates.
  • The OCT catheter is rotated at high speed and pulled back to detect an image of the 15 cm coronary artery wall. Since the pullback is performed while flushing an optically transparent contrast agent under X-ray fluoroscopy, even if the OCT catheter is rotated at high speed within the contrast-agent flushing time limit of 2 to 3 seconds (the recommended time for biological safety), the image of the blood vessel wall can only be obtained at millimeter-order resolution. In addition, as described above, in the near-infrared band the spectrum indicating the characteristics of the causative substance is not as clear as in the visible light band.
  • By applying the present embodiment to an intravascular OCT apparatus, the above problems can be solved: a high-resolution tomographic image with a deep depth of field and an image of the blood vessel wall can be detected, and the accuracy of qualitative diagnosis can be improved.
  • An application example will be described below.
  • FIG. 18 is a diagram showing a configuration in which this embodiment is applied to an intravascular OCT apparatus.
  • the imaging catheter 151 of FIG. 18 is rotated and pulled back within a sheath inserted from the aorta of the lower extremity into the coronary artery via a guidewire.
  • The guidewire and sheath, which are existing therapeutic equipment, are not shown.
  • the connector 152 has a role of fixing (chucking) the imaging catheter 151 to the rotor section 153 in addition to attaching and detaching the imaging catheter 151 .
  • The connector 152 rotates the imaging catheter 151 together with the rotor portion 153, and is fixed so that the one-dimensional arrangement of the fiber array 154 incorporated in the imaging catheter 151 and the pixel array of the line sensor 155 correspond one-to-one via the telecentric optical system 168.
  • the imaging catheter 151 and the rotor section 153 incorporate mechanisms described below.
  • a frequency-swept light source (not shown) installed in the device main body emits light whose frequency is swept from visible to near-infrared. The emitted light passes through an optical rotary joint 156 and is guided to a fiber coupler 158 by a fiber 157, where it is separated into illumination light and reference light.
  • The illumination light is guided by the fiber 159, converted into parallel light by the collimator optical system 160, passed through the cylindrical optical system 161, reflected by the beam splitter 162, and focused on the end 163 of the fiber array 154, which consists of about 100 one-dimensionally arranged fibers.
  • the NA (numerical aperture) of the cylindrical optical system 161 is set to match the NA of the fiber array 154 .
  • The fibers of the fiber array 154 may be arranged in a one-dimensional staggered arrangement to increase their number to about 200.
  • the illumination light guided by the fiber array 154 is emitted from the end 164 of the fiber array 154 and illuminates the inside of the blood vessel via the objective optical system 165 and the mirror 166 .
  • the objective optical system 165 is an image-side telecentric system, and its focal point is set at the center of the range indicated by 150 in FIG. Reflected light from the inside of the blood vessel, the blood vessel wall, and the inner layer of the blood vessel wall is imaged on the end 164 of the fiber array 154 by the objective optical system 165 and guided to the rotor section 153 .
  • the reflected light emitted from the end 163 of the fiber array 154 is combined with the reference light by the beam splitter 162 to generate interference fringes.
  • the length of fiber 167 that guides the reference light from fiber coupler 158 corresponds to the round trip length of fiber array 154 .
  • the interference fringes are imaged on the line sensor 155 by the telecentric optical system 168.
  • The telecentric optical system 168 magnifies the image of the end 163 of the fiber array 154 and is a double-telecentric optical system, so that the NA of the fiber array 154 and the directivity of the one-dimensional light receiving element 155 correspond one-to-one.
  • the interference fringe signal received by each element of the one-dimensional light receiving element 155 is sampled. Since the number of pixels of the one-dimensional light receiving element 155 is as small as 100, high-speed driving can be sufficiently achieved.
  • The sensitivity of the one-dimensional light receiving element 155 seems to have no margin at first glance, but since the amplitude (SN ratio) after Fourier transform processing of the interference fringes is raised by the phase matching of the Fourier transform (relative to the SN ratio of a single spectral bandwidth), no problem arises.
  • the imaging catheter 151 is rotated and pulled back integrally with the rotor section 153 by the drive system 169 that performs rotation and pullback, and the interference fringe signal is sequentially detected over 15 cm of the coronary artery.
  • the interference fringe signal is sent to the main body of the apparatus via the rotary transformer 170 and stored in a memory (not shown) in the main body of the apparatus.
  • the optical rotary joint 156 may be multi-channeled, optically modulated including other control signals, and interfaced with the apparatus main body.
  • a slip ring is used for the power supply.
  • the interference fringe signal is read out from the memory of the main body of the device, divided into interference fringe signals in the visible light band and the near-infrared band, and Fourier transform is performed. Then, using the complex signal obtained by the Fourier transform, the extension of the depth of field and the correction of the disturbance of the optical wavefront described with reference to FIGS. 14A and 14B are performed by two-dimensional filtering.
  • the angle of view of the objective optical system 165 is set so that the imaging range 171 (corresponding to the width 181 of the image in FIG. 19) closest to the imaging catheter 151 in FIG. 18 is 1.5 mm. Then, while the imaging catheter 151 and the rotor section 153 are rotated at a speed of 75 rotations/second, a 15 cm coronary artery is pulled back for 2 seconds to obtain a three-dimensional image.
  • FIG. 19 shows an image of a blood vessel wall that has been cut open by pulling back.
  • 150 images of the blood vessel wall with a width of 1 mm excluding the overlapping portion 182 are detected over the blood vessel length of 15 cm.
  • the width of overlapping portion 182 varies with the distance to the vessel wall.
  • The position and magnification of the pixels of the overlapping portion 182 are corrected by CG technology, and the amplitude intensities of the images are smoothed and added in the pullback direction, so that the images can be pasted together.
  • the resolution in the light receiving direction obtained by Fourier transform processing is higher than the horizontal resolution determined by the arrangement interval of the fibers. Therefore, if the angle of the mirror 166 (FIG. 18) is adjusted so that the vascular wall is obliquely illuminated and imaged, the resolution of the image of the vascular wall can be increased.
  • The fiber array 154 uses broadband optical fibers that guide both the visible light band and the near-infrared band. Alternatively, fibers for the two bands may be arranged vertically in parallel, with two systems of processing circuits prepared. The frequency-swept light source may also be prepared separately for the visible light band and for the near-infrared band.
  • An ultrasonic transducer may be provided at the tip of the imaging catheter 151, combining a mechanism for detecting a tomographic image with ultrasonic waves with the above-described blood-vessel-wall image detection and spectrum analysis mechanisms.
  • Ultrasonic tomography has lower resolution than near-infrared imaging, but the detection depth of the tomographic image is deeper. Each modality also has its own strengths in morphological diagnosis.
  • FIG. 20 is a diagram for explaining an example in which the present embodiment is applied to X-ray imaging and ⁇ -ray imaging.
  • X-rays emitted from the X-ray source 191 in FIG. 20 have a frequency sweep and coherence that match the resolution to be detected.
  • the amplitude of the X-ray is amplitude-modulated with a frequency sweep corresponding to the resolution to be detected.
  • X-rays emitted from an X-ray source 191 pass through a beam splitter 192 for X-rays and irradiate an object 193 .
  • The beam splitter 192 is shaped as an ellipsoid of revolution; one focal point is located at the exit of the X-ray source 191 and the other on the reflecting surface of the reflector 194.
  • Part of the X-rays emitted from the X-ray source 191 is reflected by the X-ray beam splitter 192, further reflected by the reflector 194, and irradiated onto the two-dimensional light receiving element 195 as reference X-rays.
  • The beam splitter 192 for X-rays is an X-ray-dedicated mirror whose surface is polished by the EEM (Elastic Emission Machining) method to a very high surface accuracy of about 1 to 2 nm. Such X-ray-dedicated mirrors have become commercially available in recent years. By adjusting the installation angle of the X-ray mirror, the reflectance and transmittance are adjusted so that it serves as the beam splitter 192.
  • the X-rays reflected (backscattered) from the subject are combined with the reference X-rays reflected from the reflector 194 by the beam splitter 192 to generate interference fringes.
  • the interference fringes are converted into electric signals by a two-dimensional light receiving element 195 such as a CMOS or CCD imaging element or a flat panel detector (FPD).
  • In conventional CT, the time to acquire 3D data has been shortened to about 1 second, but the resolution remains on the order of millimeters because only absorption information is used, without phase information. In the present embodiment, the time to acquire three-dimensional data can be reduced to several milliseconds, equivalent to a shutter operation, so resolution on the order of microns can be obtained from the pitch of the pixel array of the two-dimensional light receiving element 195.
  • the angle of view, magnification, and resolution can be freely set according to the purpose.
  • the scale of the apparatus can also be simplified compared to CT.
  • The frequency sweep required to obtain micron-order resolution needs only a very narrow fractional bandwidth, so it can be set to avoid wavelengths that cause nonlinear scattering such as X-ray fluorescence.
  • Since the aperture of the imaging element 195 can be made small, the number of three-dimensional filtering operations can be reduced. Announcements of X-ray sources capable of frequency sweeping have also become more common in recent years.
  • If the array interval of the imaging elements is set to the production limit of 1 μm, three-dimensional resolution on the order of several μm becomes possible, and the imaging magnification and the corresponding resolution can be set freely. Accordingly, transmission images, cross-sectional images, and three-dimensionally constructed images can be displayed.
  • the frequency sweep is matched to the spectral absorption band where the characteristics of the substance appear, there is a possibility of specifying the substance by analyzing the spectrum.
  • FIG. 22 shows an imaging device 1001 according to the first embodiment, and the imaging device 1001 is capable of three-dimensional space resolution and spectral analysis for each pixel.
  • FIG. 23 stereoscopically shows the imaging device 1001 of FIG. 22 to facilitate understanding of the imaging device 1001 .
  • the light emitted from the point light source 1039 passes through the cylindrical optical system 1017 and the first slit 1015 and is introduced into the first beam splitter 1011 by the first collimating optical system 1013 and the mirror 1031 .
  • The illumination light separated by the first beam splitter 1011 is input to the first collimating optical system 1007 via the second beam splitter 1008, which is a dividing section, and then passes through the second slit 1006 and the objective optical system 1005 to illuminate the subject 1003.
  • the illumination light separated by the second beam splitter 1008 in the middle of the optical path is applied to the reflector 1010 via the second collimating optical system 1009 to generate reference light.
  • The second and first slits 1006 and 1015 have linear (substantially rectangular) openings in the direction (X direction) perpendicular to the plane of FIG. 22. Illumination light passing through the opening 1006a illuminates the subject 1003 linearly.
  • The position of the second slit 1006 on the optical path is at the imaging position of the objective optical system 1005, and the position of the second slit 1006, the position of the reflector 1010, and the position of the light receiving surface of the two-dimensional light receiving sensor 1019 are conjugate.
  • Since the illumination light is emitted through the second slit 1006, depending on the application of the imaging device 1001, the objective optical system 1005 or the components on the right side (upstream side of the optical path) of the second slit 1006 can be exchanged.
  • Reflected light from the subject 1003 passes through the objective optical system 1005, the second slit 1006, and the first collimating optical system 1007, and is made to interfere at the second beam splitter 1008 with the reference light, which is the reflected light from the reflector 1010.
  • The direction perpendicular to the paper in FIG. 22 corresponds to the length in the X direction of the opening 1006a of the second slit 1006.
  • the optical path length from the second beam splitter 1008 to the reflector 1010 corresponds to the optical path length from the second beam splitter 1008 to the second slit 1006 .
  • The components of the illumination optical system indicated by the dashed line II in FIG. 22 and the components of the interference optical system that interferes with its light may be installed separately.
  • If the components of the interference optical system are placed on the subject 1003 side (downstream side of the optical path) of the second slit 1006, a known hyperspectral camera can be used for the components on the right side (upstream side of the optical path) including the second slit 1006.
  • the reflected light (interference light) that interferes with the reference light is input to the spectroscope 1014.
  • the spectroscope 1014 in FIG. 22 uses a transmission type diffraction grating which is advantageous for miniaturization, but a reflection type spectroscope may be used.
  • the reflected light split into wavelength components by the spectroscope 1014 is imaged on the two-dimensional light receiving sensor 1019 by the imaging optical system 1016 .
  • As the two-dimensional light receiving sensor 1019, a known CCD image sensor or a global-shutter CMOS image sensor can be used.
  • When the generated interference fringes are Fourier transformed, the time difference corresponding to the difference in optical distance is converted into a difference in spatial frequency components, which makes depth detection possible. Further, each spectrally separated wavelength is imaged by the imaging optical system 1016 onto an element row (see PX in FIG. 23) of the two-dimensional light receiving sensor 1019, and summing the wavelength band components of the signal received by the element row with their phases matched yields a result similar to pulse compression in radar.
  • The light source 1039 is a broadband light source, in order to satisfy the bandwidth required for resolution and spectral analysis. If a sweep-type broadband light source becomes available in the future, a one-dimensional light receiving sensor placed in the direction perpendicular to the paper surface (X direction) may be used instead of the spectroscope 1014 and the two-dimensional light receiving (area) sensor 1019.
  • Such a sweep-type broadband light source requires high linearity and frequency stability over a broadband wavelength sweep, and a function to perform the Fourier transform while reading out is required.
  • The scanning mechanism 1004 scans in the vertical direction (S direction) in FIG. 22, so that the subject 1003 can be detected three-dimensionally.
  • the process of generating an RGB image is explained below.
  • The output of the two-dimensional light receiving sensor 1019 shown in FIG. 22 is input to the FFT 61 shown in FIG. 24, and a W (white) signal 81 is generated.
  • The W signal 81 is generated by performing the Fourier transform over a range including the near-infrared region 85 with good transparency shown in FIG. 25.
  • In parallel with the generation of the W signal 81, the FFT 62 first performs the Fourier transform of the R band shown in FIG. 25 to generate the R signal 82.
  • the R signal 82 undergoes pixel interpolation by the interpolation memory unit 63 shown in FIG. 24, and is synchronized with the W signal (luminance signal) 81 on the time axis (pixel position).
  • the B-band Fourier transform shown in FIG. 25 is performed by the FFT 62 shown in FIG. 24 to generate the B signal 83 .
  • the pixel position of the B signal 83 is similarly interpolated by the interpolation memory unit 64 shown in FIG.
  • Alternatively, the output of the two-dimensional light receiving sensor 1019 can be multiplied by coefficients corresponding to the XYZ color matching functions and Fourier transformed to obtain an XYZ signal.
  • The resolution of the R signal 82 and the B signal 83 is about 1/3 that of the W signal 81. There is no problem because the resolution of the human eye for R and B is also about 1/3.
  • The R signal 82 and the B signal 83 can be generated by performing the Fourier transform over the ranges corresponding to the divided R and B bands. This signal generation is based on the principle of superposition in linear systems: a broadband light source can be considered a linear sum of multiple light sources with divided wavelength bands, including R, G, B, and infrared, and every step of the signal generation process is linear. Therefore, by extracting the time-series signals corresponding to the R and B bands from the output of the two-dimensional light receiving sensor 1019 and performing the Fourier transform, the same images are obtained as if the signal generation processing had been performed with individual R and B light sources.
  • Multispectral analysis performed using the multispectral data obtained by the imaging device 1001 will be described below.
  • The imaging device 1001 can be used as a known hyperspectral camera by sliding the reflector 1010 in the direction of arrow H so that the absorption band 1012 prevents the reference light from being generated.
  • the imaging device 1001 acquires as much multispectral data necessary for specifying the target substance as possible.
  • The name and composition of the target substance and the information necessary for preprocessing (normalization processing to reduce cluster dispersion) are added to the acquired multispectral data by the data format creation unit 70 and stored as RAW data in an off-line computer; this added information, called a tag, includes variations in the brightness of the illumination light, variations in the wavelength band of the illumination light, and image cutout information.
  • When the AI performs supervised learning, the feature axes are narrowed down from the multispectral waveforms by multivariate analysis such as principal component analysis or the FS (Foley-Sammon) transformation.
  • Alternatively, multispectral data corresponding to each substance can be directly learned (supervised) by the AI to identify multiple substances. Identification by the AI is possible with a nonlinear partition Z, as shown in FIGS. 10A and 10B.
  • Since the above identification method identifies two clusters at a time, it is necessary to switch the characteristic axes each time when identifying multiple substances. Even if the switching is performed multiple times in a tree-like combination, the substances can be specified at high speed because the circuit scale is greatly reduced.
  • the imaging apparatus 1001 of the first embodiment is suitable for detecting stationary subjects and subjects with little movement.
  • If the number of spectra is suppressed to about 256 and an image sensor with 2 million pixels at 10,000 frames per second is used, detection at a frame rate of 60 frames per second is possible.
  • a specific application is a three-dimensional measuring device for computer graphics (CG). It is possible to display an image observed from a free viewpoint and direction, a transmission image, and a cross-sectional image (tomographic image) using CG from captured image data.
  • The spectral information also enables accurate color reproduction and display matched to the illumination color.
  • Since the imaging apparatus 1001 of the present embodiment is capable of component analysis for each pixel, it can be applied to a microscope apparatus with high resolution in the Z-axis direction, a surface inspection apparatus capable of surface-shape measurement and colorimetry, and a fundus camera capable of tomographic detection and composition analysis.
  • Example 2: An imaging device capable of imaging the surface of an object and performing spectrum analysis in one shot (FIG. 27; for processing based on the imaging principles, see Embodiment 1).
  • an imaging apparatus 1101 capable of imaging an object 1003 and analyzing its spectrum by one-shot imaging suitable for dynamic measurement will be shown.
  • In Example 2, the objective optical system 1005 of Example 1 (see FIG. 22) is replaced with a special cylindrical optical system 1025.
  • Resolution in the horizontal direction (arrow Z direction) of the paper surface of FIG. 27 is obtained by optical interference resolution processing.
  • three-dimensional shape measurement is not possible, but imaging of a moving subject 1003 and spectrum analysis are possible in one shot.
  • The focal positions of the second cylindrical optical system 1025 form the condensed light beam 1022. The curvature and aperture of the second cylindrical optical system 1025 in the direction perpendicular to the plane of the paper (X direction) are set to change gradually along the longitudinal direction of the condensed light beam 1022, so that the beam is gradually elongated while maintaining a constant degree of condensation.
  • The second cylindrical optical system 1025 is arranged so that the condensed rays of illumination light that have passed through each point in the X direction of the aperture 1026a of the second slit 1026 are irradiated (projected) in parallel onto the surface of the subject 1003.
  • The second cylindrical optical system 1025 is equipped with an optical element whose projection magnification in the direction (X direction) perpendicular to the plane of FIG. 27 is kept constant, and whose diffusion and intensity distribution (lens power) in the vertical direction (arrow Y direction) are appropriately set along the longitudinal direction of the condensed light beam 1022. Therefore, it is desirable to include a free-form-surface imaging element (having a shape asymmetrical with respect to the optical axis OA) as a component of the second cylindrical optical system 1025.
  • the longitudinal resolution and sensitivity of the condensed light beam 1022 will be explained.
  • a condensed ray 1022 of illumination light that has passed through the point where the aperture center C of the second slit 1026 and the optical axis OA intersect is obliquely projected onto the surface of the subject 1003 through the second cylindrical optical system 1025.
  • The reflected light is made to interfere with the reference light, which is the reflected light from the reflector 1010.
  • OCI processing is based on OCT (Optical Coherence Tomography) processing.
  • OCI processing uses Michelson interferometry and Fourier transform processing to obtain a one-dimensional image by illuminating the object obliquely with micron-order short light pulses and receiving the pulses successively reflected from the object surface (optical interference resolution processing).
  • The configuration located on the left side (downstream side of the optical path) of the second slit 1026 is, in the vertical direction (Y direction in FIG. 27), the same as that of a pinhole camera, so it would appear to have low sensitivity.
  • However, the phases of the wavelength components of the light are matched and added, so that, as in pulse compression in radar, the intensity of the signal after the Fourier transform is multiplied by the number of pixels in the longitudinal direction of the condensed ray 1022.
  • The SN ratio therefore improves by the square root of the number of pixels, so there is no concern about sensitivity (see the numerical sketch below).
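  • (Reference sketch:) The square-root law can be confirmed with a short Monte-Carlo run in Python (hypothetical numbers): summing n unit-amplitude wavelength components in phase grows the signal by n, while independent receiver noise grows only by sqrt(n), so the amplitude SN ratio gains sqrt(n), just as in radar pulse compression.

      import numpy as np

      rng = np.random.default_rng(1)
      n, trials, sigma = 1024, 2000, 1.0    # components, runs, noise per component

      # n aligned unit phasors per trial, plus complex Gaussian receiver noise
      noise = (rng.normal(0, sigma, (trials, n))
               + 1j * rng.normal(0, sigma, (trials, n)))
      summed = (np.ones(n) + noise).sum(axis=1)

      snr_single = 1.0 / (sigma * np.sqrt(2))        # one component alone
      snr_summed = n / np.std(summed - n)            # after phase-matched sum
      print(round((snr_summed / snr_single) / np.sqrt(n), 2))   # -> ~1.0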
  • The reflected light is shifted in phase in proportion to the difference in round-trip distance from the second slit 1026 to each reflection point on the condensed light beam 1022 (i.e., as the reflection points become more distant), and is received as a superimposed signal.
  • When the superimposed reflected light is caused to interfere with the reference light from the reflector 1010, interference fringes are generated at frequencies proportional to the phase shifts, resulting in a superimposed signal.
  • the frequency of the interference fringes is higher at the reflection point where the optical path length from the second slit 1026 to the reflection position of the condensed beam 1022 is longer.
  • Resolution in the direction (arrow X) perpendicular to the plane of FIG. 27 is achieved by imaging using the cylindrical optical system 1025.
  • The configuration shown on the right side of the second slit 1026 (on the upstream side of the optical path) is the same as the configuration of Example 1 shown in FIG. 22.
  • the imaging of the surface of the object 1003 and the spectrum analysis can be performed in one shot, so it is suitable for detecting a moving object.
  • Because the subject 1003 is illuminated obliquely, the image obtained by this imaging method, formed at the reflection points 6 where the condensed light 1022 on the subject 1003 intersects the wavefront 5 of the illumination light, is the same as an image observed from the direction 7 along the tangent of the wavefront 5.
  • If the object 1003 is translucent, as shown in the lower-left enlarged view 7A of FIG. 27, the same image as observed in transmission is obtained.
  • The illumination light should include the highly transmissive near-infrared region (0.68 μm to 1.5 μm).
  • T in FIG. 7A schematically indicates a tissue such as a blood vessel inside the subject 1003 .
  • Specific applications of the second embodiment include handy inspection devices such as surface inspection devices, colorimeters, microscopes, and intraoperative microscope devices.
  • Example 3: An imaging apparatus capable of large-screen imaging and spectrum analysis at high speed.
  • the imaging device 1201 of the third embodiment replaces the second and first slits 1026 and 1027 of the imaging device 1101 of the second embodiment with a plurality of known pinholes, and replaces the two-dimensional light receiving sensor 1053 with a known line sensor.
  • the element rows of the line sensor are arranged in the vertical direction (see arrow Y direction in FIG. 27).
  • A configuration can be adopted in which the imaging device 1201 performs detection while the subject 1003 is moved in the direction perpendicular to the plane of the figure.
  • FIG. 29 shows an embodiment of an inspection apparatus capable of inspecting a large screen at high speed by adopting a configuration in which the imaging apparatuses 1201 described above are combined in multiple stages.
  • The images of the overlapping portion 1032 acquired by adjacent imaging devices 1201 are divided into blocks, the correlation between blocks is calculated to detect the three-dimensional shift amount, and, based on the shift amount, pixels are moved and interpolated so that the images can be pasted together and a large screen can be detected (a phase-correlation sketch is given below).
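  • (Reference sketch:) One standard way to realize the block-correlation step, shown here for a two-dimensional translation with hypothetical block sizes, is phase correlation: the inverse FFT of the normalized cross-power spectrum of two overlapping blocks peaks at their relative shift, which then drives the pixel moving and interpolation before pasting.

      import numpy as np

      def phase_correlation_shift(block_a, block_b):
          # integer (dy, dx) translation taking block_a onto block_b
          cross = np.conj(np.fft.fft2(block_a)) * np.fft.fft2(block_b)
          cross /= np.abs(cross) + 1e-12          # keep phase information only
          corr = np.abs(np.fft.ifft2(cross))
          dy, dx = np.unravel_index(corr.argmax(), corr.shape)
          h, w = corr.shape                       # unwrap to signed shifts
          return (dy - h if dy > h // 2 else dy,
                  dx - w if dx > w // 2 else dx)

      rng = np.random.default_rng(2)
      scene = rng.normal(size=(256, 256))         # stand-in for the overlap region
      a = scene[64:192, 64:192]
      b = scene[61:189, 69:197]                   # same content shifted (+3, -5)
      print(phase_correlation_shift(a, b))        # -> (3, -5)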
  • frequency component analysis can be performed for each pixel.
  • The imaging devices 1 to n are driven separately as odd-numbered and even-numbered imaging devices to prevent mutual interference of the illumination light in the overlapping portion 1032, and an image of one line (in the direction of arrow W in FIG. 29) is detected from the two imagings.
  • Specific applications include inspection devices that inspect large screens such as sheets and iron plates at high speed, and inspection devices that collectively analyze, by spectral analysis, a large number of pits to be inspected, as in blood analysis and genetic testing.
  • the present invention is not limited to this embodiment.
  • the present invention is suitable for a three-dimensional imaging apparatus capable of simultaneously achieving three-dimensional resolution and spectral image detection of a subject with a simple structure.
  • This application claims the benefit of Japanese Patent Application No. 2021-70725 filed on April 19, 2021 and Japanese Patent Application No. 2021-211632 filed on December 24, 2021, the contents of which are incorporated herein by reference in their entirety.

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Instruments For Measurement Of Length By Optical Means (AREA)

Abstract

The problem addressed by the present invention is to provide a three-dimensional image capture device with which detection of a spectral image and three-dimensional resolution of a subject can be achieved simultaneously by a simple structure. The solution according to the invention is a three-dimensional image capture device comprising: a light source that sweeps the frequency of light, or the frequency of amplitude-modulated light, to provide illumination light for illuminating a subject; an optical interferometer that multiplexes reference light and light reflected from the subject to generate an interference fringe; a two-dimensional detection mechanism that detects the interference fringe as an electrical signal at a two-dimensional position by using any one of a two-dimensional array of light receiving elements, a combination of a one-dimensional array of light receiving elements and one-dimensional scanning, and a combination of a single light receiving element and two-dimensional scanning; and an optical path difference calculation means that calculates, for each three-dimensional pixel, the optical path difference between the reflected light and the reference light at a two-dimensional detection position of the two-dimensional detection mechanism, the subject being resolved in three dimensions by processing using the interference fringe and the optical path difference information.
PCT/JP2022/017973 2021-04-19 2022-04-16 Dispositif de capture d'image tridimensionnelle WO2022224917A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2021-070725 2021-04-19
JP2021070725A JP6918395B1 (ja) 2021-04-19 2021-04-19 撮像装置
JP2021-211632 2021-12-24
JP2021211632A JP7058901B1 (ja) 2021-12-24 2021-12-24 3次元撮像装置

Publications (1)

Publication Number Publication Date
WO2022224917A1 true WO2022224917A1 (fr) 2022-10-27

Family

ID=83723317

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/017973 WO2022224917A1 (fr) 2021-04-19 2022-04-16 Dispositif de capture d'image tridimensionnelle

Country Status (1)

Country Link
WO (1) WO2022224917A1 (fr)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0634525A (ja) * 1992-07-21 1994-02-08 Olympus Optical Co Ltd 高速分光測光装置
JPH0886745A (ja) * 1994-09-14 1996-04-02 Naohiro Tanno 空間干渉型光波反射測定装置及びそれを用いた 光波エコートモグラフィー装置
JPH08313344A (ja) * 1995-05-23 1996-11-29 Shimadzu Corp 分光測定装置
JP2002139422A (ja) * 2001-09-10 2002-05-17 Fuji Photo Film Co Ltd 光散乱媒体の吸光計測装置
JP2005180931A (ja) * 2003-12-16 2005-07-07 Nippon Roper:Kk 分光処理装置
JP2011179950A (ja) * 2010-03-01 2011-09-15 Nikon Corp 測定システム
WO2012005315A1 (fr) * 2010-07-07 2012-01-12 兵庫県 Microscope holographique, procédé d'enregistrement d'une image d'hologramme d'élément microscopique, procédé de création d'un hologramme permettant la reproduction d'une image haute résolution et procédé de reproduction d'une image
US20150168125A1 (en) * 2012-07-30 2015-06-18 Adom, Advanced Optical Technologies Ltd. System for Performing Dual Path, Two-Dimensional Optical Coherence Tomography (OCT)
JP2016180733A (ja) * 2015-03-25 2016-10-13 日本分光株式会社 顕微分光装置
JP2018017670A (ja) * 2016-07-29 2018-02-01 株式会社リコー 分光特性取得装置、画像評価装置、及び画像形成装置
CN108732133A (zh) * 2018-04-12 2018-11-02 杭州电子科技大学 一种基于光学成像技术的植物病害在体无损检测系统
JP2020182604A (ja) * 2019-04-30 2020-11-12 のりこ 安間 高精細撮像とスペクトル解析が可能な内視鏡装置
JP6918395B1 (ja) * 2021-04-19 2021-08-11 のりこ 安間 撮像装置

Similar Documents

Publication Publication Date Title
JP4389032B2 (ja) 光コヒーレンストモグラフィーの画像処理装置
EP2905645B1 (fr) Microscope holographique et procédé de génération d'image holographique
JP6909207B2 (ja) 高分解能3dスペクトル領域光学撮像装置及び方法
JP5623028B2 (ja) 光干渉断層画像を撮る撮像方法及びその装置
US8731272B2 (en) Computational adaptive optics for interferometric synthetic aperture microscopy and other interferometric imaging
CN104684457B (zh) 使用oct光源和扫描光学器件的二维共焦成像
US8384908B2 (en) Image forming method and optical coherence tomograph apparatus using optical coherence tomography
AU2011384697B2 (en) Spectroscopic instrument and process for spectral analysis
US11644791B2 (en) Holographic imaging device and data processing method therefor
JP6765786B2 (ja) 撮像装置、撮像装置の作動方法、情報処理装置、及び情報処理装置の作動方法
JP2017522066A (ja) 改善された周波数領域干渉法による撮像システムおよび方法
JP2008542758A (ja) スペクトルコード化ヘテロダイン干渉法を画像化に使用可能なシステム、方法、及び装置
EP3627093B1 (fr) Appareil d'imagerie de tomographie par cohérence optique en parallèle dans le domaine de fourier et procédé d'imagerie par tomographie à cohérence optique en parallèle dans le domaine de fourier
JP2007298461A (ja) 偏光感受光画像計測装置
WO2012078417A1 (fr) Tomographie à cohérence optique à projection sur une image
JP2017047110A (ja) 撮像装置
JP2009008393A (ja) 光画像計測装置
JP6918395B1 (ja) 撮像装置
KR20170139126A (ko) 촬상 장치
JP5557397B2 (ja) 半透明物質の画像化の方法および装置
JP6292860B2 (ja) 光干渉断層計
JP7058901B1 (ja) 3次元撮像装置
JP2015114284A (ja) 光干渉断層計
JP2020182604A (ja) 高精細撮像とスペクトル解析が可能な内視鏡装置
JP2010164351A (ja) 光断層画像化装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22791690

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22791690

Country of ref document: EP

Kind code of ref document: A1