WO2016151666A1 - Structured illumination microscope, observation method, and image processing program

Structured illumination microscope, observation method, and image processing program

Info

Publication number
WO2016151666A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
noise reduction
processing unit
reduction processing
interference fringes
Prior art date
Application number
PCT/JP2015/058420
Other languages
English (en)
Japanese (ja)
Inventor
三村 正文
吉田 隆彦
勇輝 照井
Original Assignee
Nikon Corporation (株式会社ニコン)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corporation
Priority to PCT/JP2015/058420
Publication of WO2016151666A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/62 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N21/63 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N21/64 Fluorescence; Phosphorescence
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 Microscopes
    • G02B21/06 Means for illuminating specimens

Definitions

  • the present invention relates to a structured illumination microscope, an observation method, and an image processing program.
  • As a super-resolution microscope that enables observation beyond the resolution limit of the optical system of a microscope apparatus, a structured illumination microscope (SIM: Structured Illumination Microscopy) is known.
  • A SIM generates a super-resolution image of a sample by illuminating the sample with structured illumination to obtain a modulated image and demodulating the modulated image.
  • In a SIM, the sample is illuminated with interference fringes formed by branching the light beam emitted from the light source into a plurality of light beams with a diffraction grating or the like and causing these light beams to interfere with each other in the vicinity of the sample, and the modulated image is acquired under this illumination.
  • The present invention has been made in view of the above circumstances, and an object thereof is to provide a structured illumination microscope, an observation method, and an image processing program capable of reducing noise while ensuring the contrast of interference fringes.
  • One aspect of the present invention provides a structured illumination microscope comprising: an illumination optical system that irradiates a sample with excitation light, in the form of interference fringes, for exciting a fluorescent substance contained in the sample; a control unit that controls the direction and phase of the interference fringes; an imaging optical system that forms an image of the sample irradiated with the interference fringes; an imaging element that captures the image formed by the imaging optical system and generates a captured image; and an image processing unit that performs demodulation processing using the captured image, wherein the image processing unit includes a filter processing unit that performs filter processing in the frequency domain on the captured image or on an image generated from the captured image, a noise reduction processing unit that performs noise reduction processing using the filter-processed image, and a demodulation unit that performs demodulation processing using the image on which the noise reduction processing has been performed.
  • Another aspect of the present invention provides an observation method including: irradiating a sample with excitation light, in the form of interference fringes, for exciting a fluorescent substance contained in the sample; controlling the direction and phase of the interference fringes; forming an image of the sample irradiated with the interference fringes; generating a captured image by capturing the image; and performing demodulation processing using the captured image.
  • Another aspect of the present invention provides an image processing program that causes a computer to use a captured image, generated by capturing an image of a sample irradiated with excitation light in the form of interference fringes whose direction and phase are controlled, to perform filter processing in the frequency domain on the captured image or on an image generated from the captured image, noise reduction processing on the filtered image, and demodulation processing using the image on which the noise reduction processing has been performed.
  • According to the present invention, it is possible to provide a structured illumination microscope, an observation method, and an image processing program capable of reducing noise while ensuring the contrast of interference fringes.
  • FIG. 1 is a diagram illustrating a structured illumination microscope 1 according to the embodiment.
  • the structured illumination microscope 1 includes a structured illumination device 2, an imaging device 3, a control device 4, a display device 5, and a storage device 6.
  • the structured illumination microscope 1 is, for example, a fluorescence microscope, and is used for observing a sample X including cells that have been fluorescently stained in advance.
  • the sample X is held on a stage (not shown), for example.
  • The structured illumination microscope 1 generally operates as follows.
  • the structured illumination device 2 forms interference fringes and illuminates the sample X with the interference fringes.
  • the imaging device 3 captures a fluorescent image of the sample X illuminated by the interference fringes. This fluorescent image is a moire image modulated by interference fringes.
  • the control device 4 controls each part of the structured illumination microscope 1.
  • the control device 4 controls the structured illumination device 2 to switch the interference fringes to a plurality of states.
  • the control device 4 controls the imaging device 3 to capture an image of the sample X in each of the plurality of states of the interference fringes, and acquires a plurality of images.
  • the control device 4 can form a super-resolution image exceeding the resolution limit of the optical system of the imaging device 3 by performing demodulation processing using a plurality of images.
  • the control device 4 displays the formed super-resolution image on the display device 5.
  • the control device 4 causes the storage device 6 to store data of the formed super-resolution image (demodulated image), for example.
  • the structured illumination microscope 1 has a 2D-SIM mode for forming a two-dimensional super-resolution image of the surface to be observed (hereinafter referred to as the sample surface) of the sample X, and a 3D-SIM mode for forming a three-dimensional super-resolution image that also contains information in the direction perpendicular to the sample surface.
  • the 2D-SIM mode will be mainly described, and then the 3D-SIM mode will be described.
  • the structured illumination device 2 includes a light source unit 10 and an illumination optical system 11.
  • the light source unit 10 includes, for example, a laser diode, and emits coherent light such as laser light.
  • the light emitted from the light source unit 10 is referred to as illumination light.
  • the wavelength of the illumination light is set to a wavelength band that includes the excitation wavelength of the fluorescent substance contained in the sample X.
  • the light source unit 10 may not be included in the structured illumination device 2.
  • the light source unit 10 is unitized and may be provided in the structured illumination device 2 so as to be replaceable (attachable and removable).
  • the light source unit 10 may be attached to the structured illumination device 2 during observation with the structured illumination microscope 1.
  • the illumination optical system 11 irradiates the sample with interference fringes with excitation light (illumination light) for exciting the fluorescent substance contained in the sample X.
  • the illumination optical system 11 includes a condenser lens 12, a light guide member 13, a collimator 14, and a branching unit 15 in order from the light source unit 10 toward the sample X.
  • the light guide member 13 includes, for example, an optical fiber.
  • the condensing lens 12 condenses the illumination light from the light source unit 10 on the end surface of the light guide member on the light incident side.
  • the light guide member 13 guides the illumination light from the condenser lens 12 to the collimator 14.
  • the collimator 14 converts the illumination light from the light guide member 13 into parallel light.
  • the illumination optical system 11 branches the illumination light into a plurality of diffracted lights by the branching unit 15 and illuminates the sample X with interference fringes formed by interference of the plurality of diffracted lights.
  • FIG. 1 shows the 0th-order diffracted light (solid line), the +1st-order diffracted light (broken line), and the −1st-order diffracted light (two-dot chain line) among the plurality of light beams.
  • hereinafter, the +1st-order diffracted light and the −1st-order diffracted light are collectively referred to simply as 1st-order diffracted light.
  • the direction in which the 1st-order diffracted light is deflected with respect to the 0th-order diffracted light (the Z direction in FIG. 1) is referred to as the branch direction.
  • the illumination optical system 11 illuminates the sample with interference fringes formed by interference between the +1st-order diffracted light and the −1st-order diffracted light among the plurality of diffracted lights in the 2D-SIM mode.
  • the illumination optical system 11 forms interference fringes of the +1st-order diffracted light and the −1st-order diffracted light, and does not use the 0th-order diffracted light or the 2nd- or higher-order diffracted light for forming the interference fringes.
  • the illumination optical system 11 includes a rotationally symmetric lens member such as a spherical lens or an aspheric lens.
  • the symmetry axis of the lens member is referred to as the optical axis 11a of the illumination optical system 11.
  • the illumination optical system 11 may include a free-form surface lens.
  • the branching unit 15 branches the illumination light into a plurality of diffracted lights.
  • the branching unit 15 includes, for example, a diffraction grating 16 and a driving unit 17.
  • the diffraction grating 16 has, for example, a one-dimensional periodic structure in a plane intersecting the optical axis 11a of the illumination optical system 11. The direction in which the unit structures are arranged in this periodic structure corresponds to the aforementioned branching direction.
  • This periodic structure may be a structure in which the density (transmittance) changes periodically or a structure in which the step (phase difference) changes periodically.
  • the branching unit 15 may be configured to branch the illumination light into a plurality of light beams by a spatial light modulation element using ferroelectric liquid crystal instead of the diffraction grating 16, for example.
  • the driving unit 17 moves the diffraction grating 16 in a direction intersecting the optical axis 11a of the illumination optical system 11. As a result, the phase of the interference fringes formed by the illumination light changes.
  • the control device 4 controls the phase of the interference fringes by controlling the drive unit 17.
  • the drive unit 17 rotates the diffraction grating 16 around an axis parallel to the optical axis 11a of the illumination optical system 11. Thereby, the direction of the interference fringes formed by the illumination light changes.
  • the control device 4 controls the direction of the interference fringes by controlling the drive unit 17.
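  • As a quick numerical illustration of the phase control described above (a sketch, not from the patent text): for fringes formed by the ±1st-order diffracted beams, translating the grating by Δx in its periodic direction shifts each order's phase by ±2πΔx/p, so the fringe phase shifts by 4πΔx/p. The helper below computes the grating displacements for equally spaced fringe phases; the function name and pitch value are illustrative.

```python
import numpy as np

def grating_shifts_for_phases(grating_pitch, n_phases=3):
    """Grating displacements realizing equally spaced fringe phases.

    For fringes formed by interference of the +1st- and -1st-order
    diffracted beams, translating the grating by dx shifts the fringe
    phase by 4*pi*dx/p, since each order picks up a phase of
    +/- 2*pi*dx/p. Sketch only; assumes an ideal grating of pitch p.
    """
    phases = 2 * np.pi * np.arange(n_phases) / n_phases  # 0, 2pi/3, 4pi/3
    return phases * grating_pitch / (4 * np.pi)          # displacement per phase

# e.g. for a 2.0 um pitch: displacements of 0, 1/3 and 2/3 um
print(grating_shifts_for_phases(2.0))
```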
  • the structured illumination microscope 1 changes the direction of the interference fringes in three ways, changes the phase of the interference fringes in three ways for each direction, and images the sample X in each of the nine states corresponding to the different combinations of direction and phase of the interference fringes.
  • the illumination optical system 11 includes a lens 20, a half-wave plate 21, a mask 22, a lens 23, a field stop 24, a lens 25, a filter 26, a dichroic mirror 27, and an objective lens 28, in order from the branching unit 15 toward the sample X.
  • Diffracted light generated at the branching portion 15 enters the lens 20.
  • the lens 20 is disposed so that its focal point substantially coincides with the branching portion 15.
  • the lens 20 condenses diffracted light of the same order among the plurality of light beams branched by the branching unit 15 at the same position on the so-called pupil conjugate plane P1, which is a position conjugate with the rear focal plane (pupil plane) of the objective lens 28.
  • the lens 20 condenses the 0th-order diffracted light generated at the branching portion 15 on the optical axis 11a of the illumination optical system 11 at the pupil conjugate plane P1.
  • the lens 20 condenses the +1st-order diffracted light generated in the branching portion 15 at a position on the pupil conjugate plane P1 away from the optical axis 11a.
  • the lens 20 condenses the −1st-order diffracted light generated at the branching portion 15 at a position on the pupil conjugate plane P1 symmetrical to the +1st-order diffracted light with respect to the optical axis 11a.
  • the half-wave plate 21 is disposed, for example, in the optical path between the lens 20 and the lens 23, and adjusts the polarization state of the illumination light so that the illumination light is S-polarized when incident on the sample X.
  • for example, when the incident plane of the 1st-order diffracted light with respect to the sample X is the XZ plane, the half-wave plate 21 adjusts the polarization state of the illumination light so that the illumination light emitted from the illumination optical system 11 is linearly polarized in the Y direction. When the diffraction direction is changed in the branching unit 15, the incident plane of the 1st-order diffracted light with respect to the sample X rotates around the Z direction; therefore, the half-wave plate 21 adjusts the polarization state of the illumination light according to the diffraction direction.
  • the half-wave plate 21 may be disposed at any position on the optical path between the branching portion 15 and the sample X.
  • the mask 22 transmits diffracted light used for forming interference fringes and blocks diffracted light not used for forming interference fringes.
  • the mask 22 passes the first-order diffracted light and blocks the 0th-order diffracted light and the second-order or higher-order diffracted light.
  • the mask 22 is disposed at a position where the optical path of the first-order diffracted light is separated from the optical path of the zero-order diffracted light, for example, at the pupil conjugate plane P1.
  • the mask 22 is, for example, an aperture stop, and defines the angle of light rays incident on the sample X.
  • in the mask 22, the portion where the 0th-order diffracted light is incident serves as a light shielding portion, and the portion where the 1st-order diffracted light is incident serves as an opening (transmission portion).
  • the illumination light that has passed through the mask 22 enters the lens 23.
  • the lens 23 forms an intermediate image surface 23a that is optically conjugate with the branching portion 15.
  • the field stop 24 is disposed, for example, on the intermediate image plane 23a.
  • the field stop 24 defines a range (illumination field, illumination area) in which illumination light is irradiated from the illumination optical system 11 to the sample X in a plane perpendicular to the optical axis 11a of the illumination optical system 11.
  • the illumination light that has passed through the field stop 24 enters the lens 25.
  • the lens 25 is, for example, a second objective lens.
  • the lens 25 condenses the +1st-order diffracted light from each point of the branching portion 15 at a position on the rear focal plane (pupil plane P0) of the objective lens 28.
  • the lens 25 condenses the −1st-order diffracted light from each point of the branching portion 15 at another position on the pupil plane P0; that is, at a position symmetrical to the +1st-order diffracted light with respect to the optical axis of the illumination optical system 11.
  • the illumination light that has passed through the lens 25 enters the filter 26.
  • the filter 26 is, for example, an excitation filter, and selectively transmits light in a wavelength band including the excitation wavelength of the fluorescent substance contained in the sample X.
  • the filter 26 blocks at least a part of the illumination light other than the excitation wavelength, stray light, external light, and the like.
  • the light that has passed through the filter 26 enters the dichroic mirror 27.
  • the dichroic mirror 27 has the characteristic of reflecting light in a wavelength band including the excitation wavelength of the fluorescent substance contained in the sample X while transmitting light in a predetermined wavelength band (for example, fluorescence) out of the light from the sample X.
  • the light from the filter 26 is reflected by the dichroic mirror 27 and enters the objective lens 28.
  • the objective lens 28 forms a surface optically conjugate with the intermediate image surface 23a, that is, a surface optically conjugate with the branching portion 15, on the sample X. That is, the objective lens 28 forms structured illumination on the sample X.
  • the +1st-order diffracted light forms a spot on the pupil plane P0 of the objective lens 28 away from the optical axis 11a.
  • the −1st-order diffracted light forms a spot on the pupil plane P0 at a position symmetrical to the +1st-order diffracted light with respect to the optical axis 11a.
  • the spots formed by the first-order diffracted light are arranged, for example, on the outer periphery of the pupil plane P0.
  • the first-order diffracted light is incident on the focal plane at an angle corresponding to the numerical aperture (NA) of the objective lens 28.
  • interference fringes formed on the sample X have a periodic distribution of light intensity in a direction (in the XY plane) perpendicular to the optical axis 11a of the illumination optical system 11, for example.
  • This interference fringe is a pattern in which line-shaped bright portions and dark portions are periodically arranged in a direction corresponding to the periodic direction of the branching portion 15.
  • hereinafter, the direction parallel to the bright parts and the dark parts is referred to as the line direction, and the direction in which the bright parts and the dark parts are arranged is referred to as the periodic direction.
  • the direction of the interference fringes is defined by, for example, at least one of the line direction and the periodic direction.
  • the portion of the sample X arranged in the bright part of the interference fringe emits fluorescence when the fluorescent material is excited.
  • the fluorescence emitted from the sample X has a density distribution corresponding to the distribution of the structure (e.g., fluorescent substance) of the sample X.
  • the fluorescent image of the sample X is a moire image of the interference fringes formed by the illumination optical system 11 and the fluorescent density distribution of the sample X.
  • the imaging device 3 acquires this moire image.
  • the imaging device 3 includes an imaging optical system 31 and an imaging element 32.
  • the imaging optical system 31 includes an objective lens 28, a dichroic mirror 27, a filter 33, and a lens 34.
  • the imaging optical system 31 shares the objective lens 28 and the dichroic mirror 27 with the illumination optical system 11.
  • Light from the sample X (hereinafter referred to as observation light) enters the objective lens 28 to be collimated, and enters the filter 33 through the dichroic mirror 27.
  • the filter 33 is, for example, a fluorescent filter.
  • the filter 33 selectively transmits light in a predetermined wavelength band (e.g., fluorescence) of the observation light from the sample X.
  • the filter 33 blocks, for example, illumination light, external light, stray light, etc. reflected by the sample X.
  • the light that has passed through the filter 33 enters the lens 34.
  • the lens 34 forms a plane (image plane) optically conjugate with the focal plane (object plane) of the objective lens 28. An image (moire image) by fluorescence from the sample X is formed on this image plane.
  • the image sensor 32 includes a two-dimensional image sensor such as a CCD image sensor or a CMOS image sensor.
  • the imaging element 32 has, for example, a plurality of two-dimensionally arranged pixels, with a photoelectric conversion element such as a photodiode disposed in each pixel.
  • the imaging element 32 reads out, with a readout circuit, the electric charge generated in each photoelectric conversion element by irradiation with the observation light.
  • the image sensor 32 converts the read charges into digital data (for example, 8-bit gradation value), and outputs digital image data in which pixel positions and gradation values are associated with each other.
  • the structured illumination microscope 1 includes an image processing unit 40 that performs image processing using a captured image acquired by the imaging device 3.
  • the image processing unit 40 is provided in the control device 4, but may be provided separately from the control device 4.
  • the image processing unit 40 performs noise reduction processing on the captured image captured by the imaging element 32, and generates a demodulated image using the image after the noise reduction processing.
  • FIG. 2 is a block diagram showing a functional configuration of the control device 4.
  • the control device 4 includes a control unit 41, an image processing unit 40, and a storage unit 42.
  • the control device 4 includes, for example, a computer system including a CPU and a RAM, and the storage unit 42 is a work memory such as a RAM.
  • the control unit 41 acquires captured image data from the image sensor 32 and causes the storage device 6 to store captured image data.
  • the image processing unit 40 acquires captured image data from the storage device 6 and performs image processing.
  • the image processing unit 40 includes a filter processing unit 43, a noise reduction processing unit 44, and a demodulation unit 45.
  • FIG. 3 is a diagram showing input / output data in each unit of the image processing unit 40.
  • the filter processing unit 43 acquires captured image data.
  • the filter processing unit 43 performs filter processing in the frequency domain. For example, the filter processing unit 43 generates an image obtained by extracting a region having a predetermined frequency including the frequency of interference fringes from the captured image.
  • the filter processing unit 43 outputs the data of the filtered image.
  • the noise reduction processing unit 44 acquires captured image data and filtered image data.
  • the noise reduction processing unit 44 performs noise reduction processing on the captured image that is the target of noise reduction processing using the filtered image, and generates an image with reduced noise.
  • the noise reduction processing unit 44 outputs image data with reduced noise.
  • the demodulator 45 acquires data of the image with reduced noise generated by the noise reduction processor 44, and generates (constructs) a demodulated image using the image with reduced noise.
  • the demodulator 45 outputs demodulated image data.
  • the control unit 41 stores the data of the demodulated image in the storage device 6.
  • the control unit 41 supplies demodulated image data to the display device 5 (see FIG. 1), and causes the display device 5 to display the demodulated image.
  • the image processing unit 40 may include an intermediate image generation unit 46 as shown in FIGS. 3B and 3C.
  • the intermediate image generation unit 46 performs preprocessing for generating a demodulated image to generate an intermediate image.
  • the intermediate image generation unit 46 acquires the noise-reduced image data generated by the noise reduction processing unit 44, and generates an intermediate image using the noise-reduced image.
  • the intermediate image generation unit 46 outputs intermediate image data.
  • the demodulator 45 acquires the intermediate image data generated by the intermediate image generator 46 and generates (constructs) a demodulated image using the intermediate image.
  • the demodulator 45 outputs demodulated image data.
  • the noise reduction processing unit 44 may perform noise reduction processing on the intermediate image.
  • the intermediate image generation unit 46 acquires captured image data, and generates an intermediate image using the captured image.
  • the intermediate image generation unit 46 outputs intermediate image data.
  • the filter processing unit 43 acquires intermediate image data.
  • the filter processing unit 43 generates an image (filtered image) obtained by extracting a region having a predetermined frequency including the frequency of interference fringes from the intermediate image.
  • the filter processing unit 43 outputs the data of the filtered image.
  • the noise reduction processing unit 44 acquires intermediate image data and filtered image data.
  • the noise reduction processing unit 44 performs noise reduction processing on the intermediate image using the filtered image to generate an image with reduced noise.
  • the noise reduction processing unit 44 outputs image data with reduced noise.
  • the demodulator 45 acquires image data with reduced noise, and generates a demodulated image using the image with reduced noise.
  • the demodulator 45 outputs demodulated image data.
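  • The three processing orders described above can be summarized as in the following sketch. The function bodies are placeholder stand-ins (identity operations) for the filter processing unit 43, the noise reduction processing unit 44, the intermediate image generation unit 46, and the demodulation unit 45; none of them is the patent's actual implementation.

```python
import numpy as np

# Placeholder stand-ins; each real unit would operate on image arrays.
filter_process = lambda img: img          # filter processing unit 43
reduce_noise = lambda img, filt: img      # noise reduction processing unit 44
make_intermediate = lambda img: img       # intermediate image generation unit 46
demodulate = lambda img: img              # demodulation unit 45

def pipeline_a(captured):
    # FIG. 3(A): filter -> noise reduction -> demodulation
    filtered = filter_process(captured)
    return demodulate(reduce_noise(captured, filtered))

def pipeline_b(captured):
    # FIG. 3(B): noise reduction first, then intermediate image
    filtered = filter_process(captured)
    denoised = reduce_noise(captured, filtered)
    return demodulate(make_intermediate(denoised))

def pipeline_c(captured):
    # FIG. 3(C): intermediate image first, then noise reduction
    intermediate = make_intermediate(captured)
    filtered = filter_process(intermediate)
    return demodulate(reduce_noise(intermediate, filtered))
```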
  • the filter processing unit 43 changes the contrast of the interference fringes in the captured image or in the image generated from the captured image. For example, the filter processing unit 43 generates, as the filtered image, an image in which the interference fringes are enhanced (hereinafter referred to as an enhanced image).
  • FIG. 4 is a conceptual diagram showing a captured image 50 and a first emphasized image 60 that are subject to noise reduction processing.
  • the captured image 50 includes, for example, a sample portion 51 corresponding to the distribution (fluorescence density distribution) of the structure (eg, fluorescent substance) of the sample X, and a stripe portion 52 corresponding to an interference fringe.
  • the stripe part 52 has a bright part 52a and a dark part 52b, and is a pattern in which the bright part 52a and the dark part 52b are alternately arranged periodically.
  • the bright part 52a and the dark part 52b are each linear (band-like).
  • in the captured image 50, the direction parallel to the bright part 52a and the dark part 52b is referred to as the line direction, and the direction in which the bright part 52a and the dark part 52b are arranged is referred to as the periodic direction; the periodic direction is orthogonal to the line direction.
  • the first emphasized image 60 is an image in which the contrast between the bright part 52a and the dark part 52b is higher than the captured image 50 that is the target of the noise reduction process.
  • FIG. 5 is an explanatory diagram illustrating an example of a process for generating the first emphasized image 60.
  • the filter processing unit 43 acquires data of the captured image 50, and generates power spectrum PS data by performing two-dimensional Fourier transform on the captured image 50.
  • the power spectrum PS has a peak 55 corresponding to the period of interference fringes.
  • the peak 55 appears symmetrically with respect to the origin. Further, in the region on the lower frequency side than the peak 55, there is a distribution including information on the sample portion 51, and the peak 56 appears.
  • the filter processing unit 43 uses the bandpass filter 57 to extract a region of a predetermined frequency including the frequency of the interference fringes (the frequency of the peak 55).
  • in FIG. 5, the horizontal axis indicates the direction corresponding to the horizontal scanning direction of the image, the vertical axis indicates the direction corresponding to the vertical scanning direction of the image, and the symbol Dm indicates the direction corresponding to the periodic direction of the interference fringes.
  • the band pass filter 57 is set to have a relatively higher gain in a predetermined frequency region including the frequency of the peak 55 (hereinafter referred to as a pass region 58) than in other regions.
  • the passband 58 includes a passband 58a and a passband 58b that are set symmetrically with respect to the origin so as to correspond to the peak 55.
  • the pass band 58a is a region of bandwidth W centered on the frequency (u, v) of one peak 55, and the pass band 58b is a region of bandwidth W centered on the frequency (−u, −v) of the other peak 55.
  • in FIG. 5, the symbol PSa denotes the profile (cross section) of the power spectrum PS in the direction Dm, the symbol 57a denotes the gain distribution of the bandpass filter 57 in the direction Dm, and the symbol PSb denotes the profile in the direction Dm of the power spectrum after the filter processing.
  • the gain distribution 57a is, for example, binary, and is set to a high level (eg, 1) in the pass band 58a and the pass band 58b, and is set to a low level (eg, 0) in the other areas.
  • the filter processing unit 43 obtains a peak position (peak frequency) by detecting a power peak in a power spectrum PS obtained by, for example, Fourier transforming the captured image 50 to be subjected to noise reduction processing. For example, the filter processing unit 43 acquires a set value of the interference fringe frequency, and detects a peak by searching for a power spectrum around the set value frequency.
  • the setting value of the interference fringe frequency is stored in the storage device 6 (see FIGS. 1 and 2) as interference fringe setting information, for example.
  • the filter processing unit 43 can acquire interference fringe setting information from the storage device 6.
  • the filter processing unit 43 may detect a peak without using the set value of the interference fringe frequency.
  • the filter processing unit 43 may acquire the peak position by calculating the peak position from the set value of the interference fringe frequency without detecting the peak, for example.
  • the filter processing unit 43 generates the bandpass filter 57 by setting a region of the bandwidth W around the acquired peak position as the passband 58.
  • when the bandpass filter 57 is applied to the power spectrum PS, the component of the pass band 58 is extracted: the power before the filter processing is held in the pass band 58a and the pass band 58b, and the power is set to 0 in the other regions.
  • the first enhanced image is generated, for example, by performing an inverse Fourier transform on the power spectrum after the filter processing.
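  • A minimal NumPy sketch of this enhancement step is shown below: a two-dimensional Fourier transform, a binary band-pass mask around ±(u, v), and an inverse transform. The disk-shaped pass bands and the function and parameter names are assumptions for illustration, not the patent's exact filter.

```python
import numpy as np

def first_enhanced_image(img, fringe_freq, bandwidth):
    """Band-pass enhancement of the interference fringes (a sketch).

    img         : 2-D captured image (H x W array)
    fringe_freq : (u, v), fringe frequency in FFT samples relative to the
                  origin (assumed known from the fringe setting information
                  or from a peak search around that set value)
    bandwidth   : radius of the pass bands 58a/58b, in frequency samples
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    vv, uu = np.mgrid[0:h, 0:w]
    cu, cv = w // 2, h // 2                      # origin after fftshift
    u, v = fringe_freq
    # Binary gain: 1 inside the two pass bands centred on +/-(u, v), 0 elsewhere
    mask = (((uu - (cu + u)) ** 2 + (vv - (cv + v)) ** 2 <= bandwidth ** 2) |
            ((uu - (cu - u)) ** 2 + (vv - (cv - v)) ** 2 <= bandwidth ** 2))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```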
  • FIG. 6 is a diagram illustrating another example of the band-pass filter 57.
  • the band pass filter 57 in FIG. 6A has an offset component, and the low level is set to a value larger than zero.
  • the bandpass filter 57 in FIG. 6B changes in gain within the passband 58.
  • the gain is represented by a Gaussian distribution centered on the frequency of the peak 55.
  • the band pass filter 57 in FIG. 6C is obtained by adding an offset component to the band pass filter 57 in FIG. 6B.
  • the bandpass filter 57 in FIG. 6(D) includes a pass band 58c in a frequency region lower than the frequency of the interference fringes.
  • the passband 58c is, for example, a region centered on the origin (frequency is 0), and the gain changes stepwise as compared with the surrounding area.
  • the gain in the pass band 58c is set to the same value as that of the pass band 58a and the pass band 58b, for example.
  • the passband 58c may be set to have a larger gain or may be set to be smaller than the passband 58a and the passband 58b.
  • alternatively, the gain in the passband 58c may be represented by a Gaussian distribution centered on the origin.
  • in this case, the gain peak in the passband 58c is set, for example, to the same value as the gain peaks in the passband 58a and the passband 58b.
  • the passband 58c may have a larger gain than the passband 58a and the passband 58b.
  • the passband 58c may be set to have a smaller gain than the passband 58a and the passband 58b.
  • the band pass filter 57 may have an offset component as shown in FIGS. 6A and 6C.
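  • The variants of FIG. 6 can be sketched as gain maps, assuming Gaussian pass bands with an optional constant offset and an optional low-frequency pass band 58c centered on the origin. The exact gain values in the patent figures are not specified, so the parameters below are illustrative.

```python
import numpy as np

def bandpass_gain(shape, fringe_freq, sigma, offset=0.0, dc_sigma=None):
    """Gain map for the FIG. 6 variants (a sketch with illustrative gains).

    Gaussian pass bands centred on +/-(u, v) (FIG. 6(B)); add `offset`
    for the offset variants (FIG. 6(A)/(C)); set `dc_sigma` to add a
    Gaussian pass band 58c centred on the origin (FIG. 6(D)).
    """
    h, w = shape
    vv, uu = np.mgrid[0:h, 0:w]
    cu, cv = w // 2, h // 2
    u, v = fringe_freq

    def gauss(du, dv, s):
        return np.exp(-((uu - du) ** 2 + (vv - dv) ** 2) / (2 * s ** 2))

    gain = gauss(cu + u, cv + v, sigma) + gauss(cu - u, cv - v, sigma)
    if dc_sigma is not None:
        gain = gain + gauss(cu, cv, dc_sigma)   # low-frequency pass band 58c
    return gain + offset
```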
  • FIG. 7 is an explanatory diagram showing filter processing and noise reduction processing.
  • the filter processing unit 43 performs the filter process on the captured image 50 that is the target of the noise reduction process to generate the first emphasized image 60.
  • the noise reduction processing unit 44 performs noise reduction processing by replacing the luminance value of the target region P that is the target of noise reduction with a value calculated using the luminance values of the plurality of reference regions Q in the captured image 50.
  • the noise reduction processing unit 44 calculates the luminance value of the target region P by weighting the luminance values of the plurality of reference regions Q.
  • the noise reduction processing unit 44 uses the first enhanced image 60 for calculating the weighting coefficient.
  • the noise reduction processing unit 44 uses the similarity between the target patch region Np including the target region P in the first emphasized image 60 and the reference patch region Nq including the reference region Q in the first emphasized image 60 as a weighting coefficient.
  • each of the plurality of reference regions Q is represented by reference numerals Q1 to Q4.
  • the noise reduction processing unit 44 sets the target patch area Np for the target area P on the first emphasized image 60. Further, the noise reduction processing unit 44 sets the reference patch areas Nq1 to Nq4 for the reference areas Q1 to Q4 on the first emphasized image 60.
  • the noise reduction processing unit 44 calculates similarities w1 to w4 between the target patch area Np on the first emphasized image 60 and the reference patch areas Nq1 to Nq4 on the first emphasized image 60.
  • the noise reduction processing unit 44 performs noise reduction processing by weighting the luminance values of the reference regions Q1 to Q4 in the captured image 50 that is the target of noise reduction processing using the similarities w1 to w4.
  • FIG. 8 is a diagram illustrating an example of noise reduction processing using the first emphasized image 60.
  • the noise reduction processing unit 44 sequentially selects a target region P that is a target for noise reduction from the first emphasized image 60.
  • the target region P is, for example, a one-pixel region on the first emphasized image 60, but may be a region including a plurality of pixels. Further, the target region P may be a part of one pixel on the first emphasized image 60, and may be, for example, one point (attention point, target point) on the first emphasized image 60.
  • the noise reduction processing unit 44 calculates the brightness value of the target area P using the brightness values of the plurality of reference areas Q.
  • the reference area Q is, for example, an area having the same area as the target area P, and is assumed to be one pixel of the first emphasized image 60 here.
  • the position of the pixel belonging to the target region P is represented by P(x, y), and the position of the pixel belonging to the reference region Q is represented by Q(ξ, η).
  • x and ξ are pixel coordinates referenced to the position of the end pixel in the horizontal scanning direction.
  • y and η are pixel coordinates referenced to the position of the end pixel in the vertical scanning direction.
  • for example, x and ξ are integers from 0 to 1919, and y and η are integers from 0 to 1079.
  • the noise reduction processing unit 44 performs noise reduction processing based on light intensity information (eg, luminance value, pixel value, pixel intensity, gradation value) of a plurality of reference regions Q.
  • the noise reduction processing unit 44 uses the luminance values of the plurality of reference regions Q for calculating the luminance value of the target region P.
  • the luminance value of the target region P in the image after the noise reduction processing is appropriately referred to as a corrected luminance value.
  • the luminance value of the target region P is represented by, for example, the pixel value at P(x, y), and the luminance value of the reference region Q by the pixel value at Q(ξ, η).
  • the noise reduction processing unit 44 sets a predetermined area including the target area P as the target patch area Np, and sets a predetermined area including the reference area Q as the reference patch area Nq.
  • the noise reduction processing unit 44 calculates the similarity between the target patch area Np and the reference patch area Nq.
  • the noise reduction processing unit 44 calculates the corrected luminance value of the target region P using values obtained by weighting the luminance values of the reference regions Q in the captured image 50 that is the target of the noise reduction processing with the similarities of the corresponding reference patch regions Nq on the first emphasized image 60.
  • the target patch area Np is, for example, a rectangular area centered on the target area P, and one side thereof is parallel to the horizontal scanning direction.
  • the shape of the target patch region Np may be a polygon, or a shape whose outline includes a curve, such as a circle or an ellipse. Further, the center of the target patch region Np may be shifted from the target region P.
  • the size of the target patch area Np is set so as to include at least one period of interference fringes.
  • a portion having a predetermined phase arbitrarily selected from one period of the interference fringes is referred to as a feature portion 53.
  • the feature portion 53 may be, for example, the center line of the bright portion of the interference fringe (the portion where the luminance value is maximized) or the center line of the dark portion of the interference fringe (the portion where the luminance value is minimized). Alternatively, it may be an intermediate portion between the center line of the bright portion and the dark portion of the interference fringe.
  • the length of one period of the interference fringes in the periodic direction is represented by Λ.
  • Λ is, for example, the distance (pitch) between the center line of a bright part 52a and the center line of the adjacent bright part 52a.
  • for example, when the line direction of the interference fringes makes an angle θ with the horizontal scanning direction, the width px of the target patch region Np is set so as to satisfy px ≥ Λ / sin θ.
  • the noise reduction processing unit 44 sets the target patch area Np by calculating the coordinates of the target patch area Np using, for example, the setting information of the position of the target area P and the target patch area Np.
  • the setting information of the target patch area Np includes, for example, size information, shape information, and positional relationship information with respect to the target area P, and is stored in the storage device 6.
  • the noise reduction processing unit 44 acquires setting information from the storage device 6 and sets the target patch region Np.
  • the size of the target patch area Np may be a fixed value or a variable value.
  • a recommended value or the like is set as a default value, and may be changeable by a user's designation.
  • the size of the target patch region Np may be set according to the sample X. The user may determine the size of the target patch region Np by trial and error according to the result of the noise reduction processing.
  • the reference region Q is arbitrarily selected from, for example, the first enhanced image 60 corresponding to the captured image 50 that is the target of noise reduction processing.
  • the noise reduction processing unit 44 sets a region including the target region as the peripheral range Ap, and selects the reference region Q from the peripheral range Ap.
  • the peripheral range Ap is, for example, a circular region centered on the target region P, but may be a polygonal region or a region whose center is shifted from the target region P (an eccentric region).
  • the radius r of the peripheral range Ap is set such that the target patch area Np is within the peripheral range Ap, and is set to several times, several tens of times, or several hundred times the representative dimension of the target patch area Np, for example.
  • the representative dimension is, for example, the length of one side or of a diagonal when the shape of the target patch region Np is a rectangle (e.g., a square), and the diameter when the shape of the target patch region Np is an ellipse (e.g., a perfect circle).
  • the noise reduction processing unit 44 selects pixels in order from a plurality of pixels included in the peripheral range Ap, for example, as the reference region Q.
  • the noise reduction processing unit 44 may sequentially select all the pixels included in the peripheral range Ap as the reference region Q, or may not select some of the pixels included in the peripheral range Ap. Further, the noise reduction processing unit 44 may randomly select a pixel from among a plurality of pixels included in the peripheral range Ap as the reference region Q.
  • the noise reduction processing unit 44 sets a reference patch area Nq for the reference area Q.
  • the reference patch area Nq is, for example, a rectangular area centered on the reference area Q, and one side thereof is parallel to the horizontal scanning direction.
  • the noise reduction processing unit 44 sets the reference patch region Nq by calculating the coordinates of the reference patch region Nq using the position of the reference region Q and the setting information.
  • the setting information of the reference patch area Nq includes, for example, size information, shape information, and positional relationship information with respect to the reference area Q, and is stored in the storage unit 42.
  • the size and shape of the reference patch region Nq and the positional relationship with respect to the reference region Q are the same as the size and shape of the target patch region Np and the positional relationship with respect to the target region P, respectively.
  • the noise reduction processing unit 44 acquires the setting information of the reference patch area Nq from the storage unit 42 and sets the reference patch area Nq.
  • the noise reduction processing unit 44 calculates the similarity w according to the equation (1) in FIG.
  • the similarity w is a function (w (Np; Nq)) having the target patch area Np and the reference patch area Nq as arguments.
  • σh² and σa² in equation (1) are parameters set to, for example, default values or user-specified values.
  • f1(x, y) represents the luminance value at the point (x, y) of the first emphasized image 60.
  • i and j are variables that vary such that the point (x+i, y+j) belongs to the target patch region Np.
  • for example, when the target patch region Np is a region of 5 pixels in the horizontal scanning direction and 5 pixels in the vertical scanning direction centered on (x, y), i and j each take the values −2, −1, 0, +1, and +2.
  • f1(x+i, y+j) − f1(ξ+i, η+j) indicates the difference in luminance value between overlapping pixels when the centers of the target patch region Np and the reference patch region Nq are aligned.
  • for example, f1(x−2, y−2) represents the luminance value of the upper-left end pixel of the target patch region Np, and f1(ξ−2, η−2) represents the luminance value of the upper-left end pixel of the reference patch region Nq.
  • the noise reduction processing unit 44 calculates the similarity w by summing, over the target patch region Np, the squares of the differences in luminance value between pixels occupying the same position in the target patch region Np and the reference patch region Nq.
  • the algorithm used for calculating the similarity w can be selected arbitrarily; for example, an algorithm that uses the absolute value of the luminance difference between corresponding pixels in the target patch region Np and the reference patch region Nq may also be used.
  • the noise reduction processing unit 44 changes the position (ξ, η) of the reference region Q so that the point (ξ, η) belongs to the peripheral range Ap, and calculates the similarity w between the reference patch region Nq set for each reference region Q and the target patch region Np.
  • the noise reduction processing unit 44 weights the luminance values of the reference regions Q in the captured image 50 that is the target of the noise reduction processing, for example, according to equation (2) in FIG. 8.
  • in equation (2), c⁻¹(x, y) is a normalization coefficient.
  • f2(x, y) is the luminance value of the point (x, y) in the captured image 50 that is the target of the noise reduction processing, and g(x, y) is the corrected luminance value of the target region P, that is, the luminance value of the point (x, y) in the image after the noise reduction processing.
  • the noise reduction processing unit 44 calculates the luminance value g(x, y) using the similarities w as weighting coefficients.
  • the noise reduction processing unit 44 generates the image data after the noise reduction processing by replacing f2(x, y) with g(x, y).
  • since the image processing unit 40 calculates the similarity w using the first enhanced image 60 in which the interference fringes are enhanced, it can generate an image with reduced noise while ensuring the contrast of the interference fringes.
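  • A simplified, runnable sketch of this noise reduction is given below. The weights are computed from patch distances on the first enhanced image f1 and applied to the luminance values of the captured image f2, following the roles of equations (1) and (2); the exact weight formula (here an NL-means-style exponential using σh² and σa²) and the loop bounds are assumptions, and the double loop is written for clarity rather than speed.

```python
import numpy as np

def denoise_with_enhanced_image(f2, f1, half_patch=2, search_radius=7,
                                sigma_h=0.1, sigma_a=0.0):
    """Weighted-average noise reduction guided by the enhanced image (sketch).

    f2 : captured image to be denoised (used in equation (2))
    f1 : first enhanced image used for the similarity w (equation (1))
    """
    h, w = f2.shape
    g = np.zeros((h, w))
    p, r = half_patch, search_radius
    f1p = np.pad(f1, p, mode="reflect")          # so patches never leave the image
    for y in range(h):
        for x in range(w):
            Np = f1p[y:y + 2 * p + 1, x:x + 2 * p + 1]       # target patch on f1
            num = den = 0.0
            for eta in range(max(0, y - r), min(h, y + r + 1)):
                for xi in range(max(0, x - r), min(w, x + r + 1)):
                    Nq = f1p[eta:eta + 2 * p + 1, xi:xi + 2 * p + 1]
                    d2 = float(np.sum((Np - Nq) ** 2))       # patch distance on f1
                    wgt = np.exp(-max(d2 - 2 * sigma_a ** 2, 0.0) / sigma_h ** 2)
                    num += wgt * f2[eta, xi]                 # weight applied to f2
                    den += wgt
            g[y, x] = num / den                              # normalization c^-1(x, y)
    return g
```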
  • the light intensity information may be the same value as the pixel value, or may be a value calculated using the pixel value.
  • the image processing unit 40 does not have to calculate the corrected luminance value for a partial region of the image (for example, the captured image 50) that is the target of the noise reduction process. For example, the image processing unit 40 may calculate a corrected luminance value for a part of the area specified by the user in the captured image 50 (cut out a part of the captured image 50). The image processing unit 40 may generate an image after noise reduction processing except for a region where the corrected luminance value is not calculated.
  • the light intensity information of the target region P and the reference region Q may be a value calculated by various interpolation methods using luminance values (pixel values), for example when at least one of x, y, ξ, and η is not an integer.
  • the various interpolation methods include nearest neighbor interpolation (nearest neighbor), bilinear interpolation (bilinear), and bicubic interpolation (bicubic).
  • the size of the reference patch area Nq may be different from the size of the target patch area Np.
  • the shape of the reference patch region Nq may be different from the shape of the target patch region Np.
  • when at least one of the size and the shape differs between the target patch region Np and the reference patch region Nq, the similarity may be calculated, for example, for the overlapping portion of the two regions in a state where the reference patch region Nq is shifted so that the reference region Q coincides with the target region P.
  • At least one of the size and shape of the reference patch area Nq may be a fixed value or a variable value that can be changed by a user's designation.
  • the structured illumination microscope 1 changes the direction of the interference fringes, for example, in three ways from the first direction to the third direction.
  • the first direction is a direction arbitrarily selected within a plane orthogonal to the optical axis 11a of the illumination optical system 11, and the second direction and the third direction are directions that make angles of 120° and 240° with the first direction, respectively.
  • the structured illumination microscope 1 changes the phase of the interference fringes, for example, to three phases, the first to third phases, in each state where the direction of the interference fringes is set to the first to third directions.
  • the first phase is a phase that is arbitrarily set in the plane orthogonal to the optical axis 11a of the illumination optical system 11, and the second phase and the third phase are phases shifted from the first phase by 2π/3 and 4π/3, respectively.
  • the structured illumination microscope 1 changes the state of interference fringes (combination of direction and phase), for example, in nine ways, and acquires a captured image of the sample X illuminated by the interference fringes in each state.
  • FIG. 9 is a flowchart showing the observation method according to the present embodiment.
  • in step S1, the structured illumination microscope 1 sets the direction of the interference fringes.
  • the control unit 41 controls the driving unit 17 to place the diffraction grating 16 at the first rotation position, and sets the direction of the interference fringes to the first direction.
  • in step S2, the structured illumination microscope 1 sets the phase of the interference fringes.
  • the control unit 41 controls the drive unit 17 to place the diffraction grating 16 at a predetermined position, and sets the phase of the interference fringes to the first phase.
  • in step S3, the structured illumination microscope 1 acquires an image of the sample X illuminated by the interference fringes.
  • the control unit 41 controls the image sensor 32 to image the sample X, and acquires captured image data from the image sensor 32.
  • in step S4, the structured illumination microscope 1 (e.g., the control unit 41) determines whether or not imaging has been completed for all phases (first to third phases) scheduled as phases of the interference fringes. If the control unit 41 determines that there is a phase for which imaging has not ended (step S4; No), it returns to step S2 and moves the diffraction grating 16 by the drive unit 17 to set the phase of the interference fringes to the next phase. By repeating the processing from step S2 to step S4, the control unit 41 acquires three captured images having different interference fringe phases in a state where the interference fringe direction is set to the first direction.
  • when the control unit 41 determines that imaging has been completed for all phases (step S4; Yes), in step S5 it determines whether or not imaging has been completed for all directions (first to third directions) scheduled as interference fringe directions.
  • if the control unit 41 determines that there is a direction for which imaging has not ended (step S5; No), it returns to step S1 and rotates the diffraction grating 16 by the drive unit 17, thereby setting the direction of the interference fringes to the next direction.
  • the controller 41 obtains nine captured images with different interference fringe states by repeating the processing from step S1 to step S5.
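  • The acquisition loop of steps S1 to S5 can be summarized as follows; set_direction, set_phase, and capture are hypothetical callables standing in for the control of the drive unit 17 and the imaging element 32.

```python
def acquire_nine_images(set_direction, set_phase, capture):
    """Steps S1-S5 of FIG. 9 (a sketch): 3 directions x 3 phases = 9 images."""
    images = []
    for direction in range(3):          # step S1: first to third direction
        set_direction(direction)
        for phase in range(3):          # step S2: first to third phase
            set_phase(phase)
            images.append(capture())    # step S3: image under this fringe state
    return images                       # steps S4/S5 are the two loop tests
```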
  • in step S6, the structured illumination microscope 1 performs noise reduction processing on the captured image obtained in step S3 or on an image generated from the captured image.
  • the control unit 41 causes the image processing unit 40 to perform noise reduction processing.
  • the image processing unit 40 performs noise reduction processing on the captured image 50 by the method described with reference to FIG. 7.
  • FIG. 10 is a flowchart showing noise reduction processing according to the present embodiment.
  • the image processing unit 40 acquires, for example, the data of one of the nine captured images as the data of the target image of the noise reduction processing.
  • the filter processing unit 43 generates a first enhanced image in which the interference fringes are enhanced (see FIGS. 4 and 5). The flow of the processing for generating the first emphasized image will be described later with reference to FIG. 11.
  • in step S12, the noise reduction processing unit 44 selects the target region P from the first emphasized image 60 (see FIG. 8). For example, the noise reduction processing unit 44 selects the point (x, y) on the first emphasized image 60 as the target region P. In step S13, the noise reduction processing unit 44 sets the target patch region Np for the target region P.
  • in step S14, the noise reduction processing unit 44 selects the reference region Q from the first emphasized image 60. For example, the noise reduction processing unit 44 sets a predetermined region including the target region P as the peripheral range Ap, and selects the reference region Q from the peripheral range Ap. In step S15, the noise reduction processing unit 44 sets a predetermined region including the reference region Q as the reference patch region Nq.
  • in step S16, the noise reduction processing unit 44 calculates the similarity w between the target patch region Np and the reference patch region Nq (see equation (1) in FIG. 8).
  • in step S17, the noise reduction processing unit 44 determines whether or not the calculation of the similarity w has been completed for all scheduled reference regions Q. If the noise reduction processing unit 44 determines that there is a reference region Q for which the calculation of the similarity w has not been completed (step S17; No), it returns to step S14, selects the next reference region Q from the peripheral range Ap, and repeats the processes of steps S15 to S17.
  • in step S18, the noise reduction processing unit 44 calculates the corrected luminance value g(x, y) by weighting the luminance values of the reference regions Q in the captured image 50 that is the target of the noise reduction processing with the similarities w and integrating them over the peripheral range Ap (see equation (2) in FIG. 8).
  • the noise reduction processing unit 44 replaces the luminance value f2 (x, y) of the target area P with the luminance value g (x, y).
  • in step S19, the noise reduction processing unit 44 determines whether or not the corrected luminance value g(x, y) has been calculated for all scheduled target regions P. If the noise reduction processing unit 44 determines that there is an unfinished target region P (step S19; No), it returns to step S12 and selects the next target region P from the first emphasized image 60. For example, the noise reduction processing unit 44 increments x to set the pixels of one row in order as the target region P, and after finishing the processing for the pixels of one row, increments y to set the pixels of the next row in order as the target region P.
  • the noise reduction processing unit 44 generates the image data after the noise reduction processing by replacing the luminance values before correction (e.g., f2(x, y)) with the corrected luminance values (e.g., g(x, y)) in the whole or a part of the captured image 50 that is the target of the noise reduction processing.
The image processing unit 40 sequentially sets, for example, each of the nine captured images used for generating the demodulated image as the captured image 50 targeted for the noise reduction processing, and performs the noise reduction processing on each.
FIG. 11 is a flowchart showing the process for generating the first enhanced image 60.
In step S20, the filter processing unit 43 determines whether to detect the interference fringes in the image targeted for the noise reduction processing. For example, the control unit 41 causes the display device 5 to display a screen asking the user whether to detect the interference fringes; when the user gives an instruction, instruction information indicating its content is supplied to the filter processing unit 43, and the filter processing unit 43 performs the determination of step S20 based on this instruction information.
Alternatively, the filter processing unit 43 may perform the determination of step S20 based on setting information indicating whether to detect the interference fringes. This setting information may, for example, be stored in the storage device 6 and be changeable by the user.
When it is determined that the interference fringes are to be detected (step S20; Yes), the filter processing unit 43 generates power spectrum data in step S21 by applying a two-dimensional Fourier transform to the captured image 50 targeted for the noise reduction processing. In step S22, the filter processing unit 43 detects the peak position in the power spectrum (frequency domain) using the power spectrum data. In step S23, the filter processing unit 43 determines whether the detection of the interference fringes has succeeded. For example, the filter processing unit 43 determines that the detection has succeeded when the absolute value of the difference between the detected peak position and the peak position calculated from the set value of the interference fringe frequency is equal to or smaller than a threshold value, and that the detection has failed when this absolute value exceeds the threshold value.
When it determines in step S23 that the detection of the interference fringes has failed (step S23; No), or when it is determined in step S20 that the interference fringes are not to be detected (step S20; No), the filter processing unit 43 acquires the set values of the interference fringes in step S24.
In step S25, the filter processing unit 43 calculates the peak position in the power spectrum (frequency domain) from the set values of the interference fringes, that is, without using the power spectrum data.
After the process of step S25 is completed, or when it is determined in step S23 that the detection of the interference fringes has succeeded (step S23; Yes), the filter processing unit 43 generates a band-pass filter corresponding to the peak position in step S26. In step S27, the filter processing unit 43 generates the first enhanced image 60 by, for example, applying the band-pass filter to the Fourier transform of the captured image 50 targeted for the noise reduction processing and then applying an inverse Fourier transform.
The filter processing unit 43 may output the determination result of step S23. For example, it may supply the result to the control unit 41, and the control unit 41 may notify the user that the detection of the interference fringes has failed. The filter processing unit 43 also need not perform the processes of steps S24 and S25 when the interference fringes are detected; in that case, if it determines in step S23 that the detection has failed (step S23; No), the noise reduction processing may be interrupted or terminated. Further, the filter processing unit 43 need not perform the determination of whether to detect the interference fringes (step S20), in which case the processes of steps S20 to S23 can be omitted. A sketch of the detection and band-pass steps is given below.
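The detection and band-pass steps S21 to S27 can be sketched as follows. The Gaussian shape and width of the band-pass, the DC-suppression radius, and the fallback interface (expected_peak, tol) are assumptions for illustration; the patent only specifies a band-pass filter corresponding to the peak position.

```python
import numpy as np

def enhance_fringes(img, expected_peak=None, tol=5.0, bandwidth=3.0):
    F = np.fft.fftshift(np.fft.fft2(img))            # step S21: 2-D Fourier transform
    power = np.abs(F) ** 2                           # power spectrum data
    H, W = img.shape
    cy, cx = H // 2, W // 2
    Y, X = np.indices((H, W))
    search = power.copy()
    search[np.hypot(Y - cy, X - cx) < 3] = 0         # suppress the DC peak
    py, px = np.unravel_index(np.argmax(search), (H, W))  # step S22: peak position
    if expected_peak is not None:                    # step S23: compare with the set value
        ey, ex = expected_peak
        if np.hypot(py - ey, px - ex) > tol:         # detection failed (step S23; No)
            py, px = ey, ex                          # use the peak from the set values (S24-S25)
    # step S26: Gaussian band-pass at the peak and its mirror about the center
    bp = (np.exp(-((Y - py) ** 2 + (X - px) ** 2) / (2 * bandwidth ** 2))
          + np.exp(-((Y - (2 * cy - py)) ** 2 + (X - (2 * cx - px)) ** 2) / (2 * bandwidth ** 2)))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * bp)))  # step S27: inverse transform
```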
In the 3D-SIM mode, the illumination optical system 11 illuminates the sample X with a composite interference fringe obtained by superimposing the interference fringes between the +1st-order and −1st-order diffracted light, the interference fringes between the 0th-order and −1st-order diffracted light, and the interference fringes between the 0th-order and +1st-order diffracted light.
In this mode, the mask 22 passes the 0th-order diffracted light and the ±1st-order diffracted light, and blocks the diffracted light of the second and higher orders. Note that the portion of the mask 22 on which the 0th-order diffracted light is incident may be a shutter unit that can switch between passing and blocking the 0th-order diffracted light.
The structured illumination microscope 1 changes the direction of the interference fringes, for example, in three ways, from the first direction to the third direction. Further, with the direction of the interference fringes set to each of the first to third directions, the structured illumination microscope 1 changes the phase of the interference fringes, for example, to five phases, the first to fifth phases.
The first phase is a phase set arbitrarily in a plane orthogonal to the optical axis 11a of the illumination optical system 11, and the second to fifth phases are shifted from the first phase by 2π/5, 4π/5, 6π/5, and 8π/5, respectively.
In this way, the structured illumination microscope 1 changes the state of the interference fringes (the combination of direction and phase) in, for example, 15 ways, and acquires a captured image of the sample X illuminated by the interference fringes in each state. That is, 15 captured images are used to generate one demodulated image in the 3D-SIM mode, as enumerated in the sketch below.
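As a worked illustration of the bookkeeping, the 15 fringe states can be enumerated as follows; the concrete direction angles are assumptions, since the patent fixes only their number.

```python
import itertools
import math

directions = [0.0, math.pi / 3, 2 * math.pi / 3]      # first to third directions (assumed values)
phases = [k * 2 * math.pi / 5 for k in range(5)]      # first to fifth phases: 0, 2pi/5, ..., 8pi/5
states = list(itertools.product(directions, phases))  # (direction, phase) combinations
assert len(states) == 15   # one captured image per state -> 15 images per demodulated image
```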
The image processing unit 40 can perform the noise reduction processing in the 3D-SIM mode in the same manner as in the 2D-SIM mode.
The demodulation processing may be, for example, processing using the method disclosed in US Pat. No. 8,115,806, or the method described in "Super-Resolution Video Microscopy of Live Cells by Structured Illumination", Peter Kner, Bryant B. Chhun, Eric R. Griffis, Lukman Winoto, and Mats G. L. Gustafsson, Nature Methods, Vol. 6, No. 5, pp. 339-342. The method used for the demodulation processing is not limited to these.
In the example described above, the reference patch region Nq is selected from the first enhanced image 60 generated from the captured image 50 targeted for the noise reduction processing. However, the reference patch region Nq can also be selected from an enhanced image generated from a captured image different from the captured image 50 (hereinafter referred to as a second enhanced image).
FIG. 12 is an explanatory diagram showing another example of the filter processing and the noise reduction processing.
The filter processing unit 43 performs the filter processing on the captured image 50 targeted for the noise reduction processing to generate the first enhanced image 60 (see FIG. 12A). The filter processing unit 43 also performs the filter processing on a captured image 61 different from the captured image 50 to generate the second enhanced image 62. The captured image 61 differs from the captured image 50 in the phase of the interference fringes.
The noise reduction processing unit 44 sets the target patch region Np for the target region P on the first enhanced image 60, and sets reference patch regions Nq1 to Nq4 for reference regions Q1 to Q4 on the second enhanced image 62. The noise reduction processing unit 44 calculates the similarities w1 to w4 between the target patch region Np on the first enhanced image 60 and the reference patch regions Nq1 to Nq4 on the second enhanced image 62, respectively, and performs the noise reduction processing by weighting the luminance values of the reference regions Q1 to Q4 in the captured image 61 with the similarities w1 to w4.
FIG. 13 is an explanatory diagram showing yet another example of the filter processing and the noise reduction processing.
In the example of FIG. 13, the filter processing unit 43 performs the filter processing on the captured image 50 targeted for the noise reduction processing to generate the first enhanced image 60, and performs the filter processing on a captured image 63 different from the captured image 50 (see FIG. 12A) to generate the second enhanced image 64. The captured image 63 differs from the captured image 50 in the direction of the interference fringes: for example, the direction of the interference fringes in the captured image 63 is rotated by 120° (clockwise taken as positive) with respect to the direction of the interference fringes in the captured image 50.
The noise reduction processing unit 44 sets the target patch region Np for the target region P on the first enhanced image 60, and sets reference patch regions Nq1 to Nq4 for reference regions Q1 to Q4 on the second enhanced image 64. The noise reduction processing unit 44 sets the reference patch regions Nq1 to Nq4 so that the positional relationship between each of them and the interference fringes on the second enhanced image 64 is the same as the positional relationship between the target patch region Np and the interference fringes on the first enhanced image 60.
For example, the noise reduction processing unit 44 sets the reference patch region Nq1 in a positional relationship rotated by 120° with respect to the target patch region Np, so that the direction of the interference fringes in the reference patch region Nq1 is parallel to that in the target patch region Np.
The noise reduction processing unit 44 calculates the similarities w1 to w4 between the target patch region Np on the first enhanced image 60 and the reference patch regions Nq1 to Nq4 on the second enhanced image 64, respectively. For example, as illustrated in FIG. 13D, the noise reduction processing unit 44 rotates the reference patch region Nq1 so that its outer periphery coincides with that of the target patch region Np, and calculates the similarity w1 between the rotated reference patch region Nq1 and the target patch region Np. The similarities w2 to w4 are calculated in the same manner as the similarity w1. As shown in FIG. 13E, the noise reduction processing unit 44 performs the noise reduction processing by weighting the luminance values of the reference regions Q1 to Q4 in the captured image 63 with the similarities w1 to w4.
FIG. 14 is a diagram illustrating another example of the process of calculating the similarities w1 to w4.
In the example of FIG. 14, the noise reduction processing unit 44 rotates at least a part of the second enhanced image 64 so that the direction of its interference fringes becomes parallel to that of the first enhanced image 60. The noise reduction processing unit 44 sets the reference patch regions Nq1 to Nq4 on the rotated second enhanced image 64 so that they are in a positional relationship parallel to the target patch region Np, and calculates the similarities w1 to w4 between the target patch region Np and each of the reference patch regions Nq1 to Nq4. A hedged sketch of this rotated-patch comparison is given below.
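The following sketch illustrates the rotated-patch comparison of FIGS. 13 and 14; the Gaussian weight, the parameter h, and the sign convention of the rotation angle are assumptions, and the interpolation at the patch corners is left to the library.

```python
import numpy as np
from scipy.ndimage import rotate

def rotated_similarity(target_patch, ref_patch, angle_deg=-120.0, h=10.0):
    """Rotate the reference patch so that its fringe direction (and outer
    periphery) lines up with the target patch, then compute the similarity."""
    ref_rot = rotate(ref_patch, angle_deg, reshape=False, order=1, mode="reflect")
    return float(np.exp(-np.sum((target_patch - ref_rot) ** 2) / h ** 2))
```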
In the following description, the same components as those in the above-described embodiment are denoted by the same reference signs, and their description is omitted or simplified.
In the present embodiment, the image processing unit 40 selects the reference region based on information on the interference fringes. The information on the interference fringes includes, for example, the direction of the interference fringes.
FIG. 15 is a diagram for explaining the noise reduction processing according to the present embodiment.
As shown in FIG. 15, the noise reduction processing unit 44 selects the target region P for the noise reduction from the first enhanced image 60, and sets, as the candidate region Rp (search region), a region that passes through the target region P and is parallel to the line direction of the interference fringes.
Information indicating the direction of the interference fringes (e.g., the line direction or the periodic direction) is stored, for example, in the storage device 6 shown in FIG. 2 as interference fringe setting information. The noise reduction processing unit 44 acquires this setting information from the storage device 6 and selects the reference region Q from the candidate region Rp set based on the setting information.
Alternatively, the image processing unit 40 may acquire the information on the direction of the interference fringes by analyzing, for example, at least one of the captured image 50 targeted for the noise reduction processing and the first enhanced image 60, and the noise reduction processing unit 44 may determine the candidate region Rp based on the direction information obtained by the analysis.
The noise reduction processing unit 44 sequentially selects the reference region Q from the candidate region Rp.
For example, the noise reduction processing unit 44 substitutes x + 1 for ξ in the above formula and calculates η for this ξ. Since sin θ and tan θ in the formula are generally irrational numbers, the calculated η can be an irrational number. The noise reduction processing unit 44 may therefore convert the calculated η into an integer by, for example, truncation, rounding, or rounding up, or may select a reference region Q whose ξ and η are both integers by nearest-neighbor interpolation or the like.
The noise reduction processing unit 44 may use all the pixels belonging to the candidate region Rp as reference regions Q, or may exclude some pixels of the candidate region Rp from the reference regions Q. For example, the noise reduction processing unit 44 may use as reference regions Q only those pixels of the candidate region Rp whose distance to the target region P is equal to or less than a threshold, or may use pixels selected at random from the pixels belonging to the candidate region Rp. A sketch of this coordinate selection is given below.
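A sketch of selecting reference coordinates on the candidate region, under the assumption that the candidate region Rp is the line through P(x, y) with slope tan θ (the patent's exact formula is not reproduced here):

```python
import math

def candidate_coords(x, y, theta, half_width, shape):
    """Reference coordinates Q(xi, eta) on the line through the target region
    P(x, y) parallel to the fringe line direction (angle theta to the
    horizontal scanning direction), with eta rounded to an integer pixel."""
    H, W = shape
    coords = []
    for xi in range(max(0, x - half_width), min(W, x + half_width + 1)):
        eta = round(y + (xi - x) * math.tan(theta))  # generally irrational before rounding
        if 0 <= eta < H and (xi, eta) != (x, y):
            coords.append((xi, eta))
    return coords
```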
The noise reduction processing unit 44 performs the noise reduction processing based on the light intensity information of the plurality of reference regions Q. For example, the noise reduction processing unit 44 weights the luminance values of the reference regions Q (calculates their weighted average) and uses the result as the corrected luminance value of the target region P. For example, as described with reference to FIG. 8, the noise reduction processing unit 44 determines the weighting coefficient based on the similarity between the target patch region Np and the reference patch region Nq.
The weighting coefficient may instead be determined based on a parameter other than the similarity, for example, the distance between the target region P and the reference region Q (e.g., proportional to the reciprocal of the distance). The noise reduction processing unit 44 may also calculate the corrected luminance value of the target region P by a method other than the weighted average; for example, it may calculate the arithmetic average of the luminance values of the plurality of reference regions Q and use the result as the corrected luminance value of the target region P.
As described above, the image processing unit 40 performs the noise reduction processing based on the information on the interference fringes. Therefore, for example, the positional relationship between each of the plurality of reference regions Q and the interference fringes is the same as the positional relationship between the target region P and the interference fringes, and the noise can be reduced while the contrast of the interference fringes is maintained.
In the present embodiment as well, the image processing unit 40 selects the reference region based on information on the interference fringes. Here, the information on the interference fringes includes, for example, information on the direction of the interference fringes and information on their period. The information regarding the period of the interference fringes includes, for example, at least one of the period of the interference fringes and their spatial frequency.
FIG. 16 is a diagram for explaining the noise reduction processing according to the present embodiment.
As shown in FIG. 16, the noise reduction processing unit 44 selects the target region P for the noise reduction from the first enhanced image 60, and sets, as the candidate region Rp (search region), a region including a first candidate region Rpa, which passes through the target region P and is parallel to the line direction of the interference fringes, and second candidate regions Rpb, which are parallel to the first candidate region Rpa. The second candidate regions Rpb are band-shaped regions arranged periodically at intervals of the fringe period Λ with respect to the first candidate region Rpa.
The noise reduction processing unit 44 sequentially selects the reference region Q from the candidate region Rp.
For example, the coordinates of the reference region Q are expressed as Q(ξ, η + nΛ), where n is a number assigned to each band-shaped region included in the candidate region Rp: n corresponding to the first candidate region Rpa is set to 0, and n corresponding to the second candidate regions Rpb is set to +1, +2, +3, and so on (see the sketch below).
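A sketch of generating these periodic reference coordinates; the orientation convention (period Λ measured along the vertical axis) and the restriction to non-negative n follow the description above and are otherwise assumptions.

```python
import math

def periodic_candidate_coords(x, y, theta, period, n_max, half_width, shape):
    """Reference coordinates Q(xi, eta + n*period): n = 0 gives the first
    candidate region Rpa through P(x, y); n >= 1 gives the second candidate
    regions Rpb offset by integer multiples of the fringe period."""
    H, W = shape
    coords = []
    for n in range(0, n_max + 1):          # n = 0 -> Rpa, n = 1, 2, 3, ... -> Rpb
        for xi in range(max(0, x - half_width), min(W, x + half_width + 1)):
            eta = round(y + (xi - x) * math.tan(theta) + n * period)
            if 0 <= eta < H and (xi, eta) != (x, y):
                coords.append((xi, eta))
    return coords
```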
The noise reduction processing unit 44 calculates the light intensity information of the target region P based on the light intensity information of the plurality of reference regions Q. Since noise such as white noise (random noise) is not correlated between the reference regions Q, averaging over them reduces the noise.
Information regarding the period of the interference fringes (e.g., Λ or the spatial frequency) is stored, for example, in the storage device 6 shown in FIG. 2 as interference fringe setting information, and the noise reduction processing unit 44 acquires this setting information from the storage device 6 and sets the candidate region Rp.
Alternatively, the image processing unit 40 may acquire the information on the period of the interference fringes by analyzing at least one of the captured image 50 targeted for the noise reduction processing and the first enhanced image 60, and the noise reduction processing unit 44 may select the reference region Q from the candidate region Rp set based on the period information obtained by the analysis.
The noise reduction processing unit 44 weights the luminance values of the reference regions Q (calculates their weighted average), for example, and uses the result as the corrected luminance value of the target region P. For example, as described with reference to FIG. 8, the noise reduction processing unit 44 determines the weighting coefficient based on the similarity between the target patch region Np and the reference patch region Nq.
The noise reduction processing unit 44 may use all the pixels belonging to the candidate region Rp as reference regions Q, or may exclude some of them. For example, the noise reduction processing unit 44 may refrain from selecting the reference region Q from the first candidate region Rpa, or from one or more of the plurality of band-shaped regions included in the second candidate regions Rpb.
In the above-described embodiments, the noise reduction processing unit 44 selects the reference region Q from the image including the target region P. In the present embodiment, the reference region Q is selected from a reference image whose interference fringe phase differs from that of the image targeted for the noise reduction processing or of an image generated from it.
The noise reduction processing unit 44 performs the noise reduction processing based on, for example, the information on the direction of the interference fringes, the information on their period, and the information on their phase.
FIG. 17 is a conceptual diagram showing a first enhanced image 60 generated from an image targeted for the noise reduction processing and a second enhanced image 65 generated from a reference image.
The second enhanced image 65 differs from the first enhanced image 60 in the phase of the interference fringes.
In FIG. 17, the positional relationship of the interference fringes is expressed using a feature portion 53 (e.g., the center line of a bright portion). In the second enhanced image 65, one of the feature portions 53 of the interference fringes in the first enhanced image 60 (indicated by reference sign 53x) and the candidate region Rpx that would be used if the reference region were selected from the first enhanced image 60 are shown for comparison. The second enhanced image 65 has the same interference fringe direction (e.g., line direction) as the first enhanced image 60, but the position of the interference fringes is shifted by Δd in the horizontal scanning direction with respect to the first enhanced image 60 (equivalent to a phase shift of 2πΔd·sin θ/Λ).
The noise reduction processing unit 44 therefore sets the candidate region Rp in the second enhanced image 65 at a position shifted by Δd in the horizontal scanning direction with respect to the candidate region Rpx that would be set in the first enhanced image 60. When the reference region Q is selected from this candidate region Rp, the distance in the horizontal scanning direction between the reference region Q and the feature portion 53 is dx. The candidate region Rp is, for example, a linear (band-shaped) region whose angle with the horizontal scanning direction is θ.
The noise reduction processing unit 44 sets the reference patch region Nq for the selected reference region Q, and calculates the similarity w between the target patch region Np on the first enhanced image 60 and the reference patch region Nq on the second enhanced image 65. The noise reduction processing unit 44 then performs the noise reduction processing by weighting the luminance value of the reference region Q on the second enhanced image 65 with the similarity w. Since the positional relationship between the reference region Q and the interference fringes is equivalent to the positional relationship between the target region P and the interference fringes, the noise can be reduced while the contrast of the interference fringes is maintained. A sketch of converting the phase difference into the shift Δd is given below.
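Under the relation reconstructed above (a horizontal shift of Δd corresponds to a phase of 2πΔd·sin θ/Λ), the shift of the candidate region can be computed from the phase difference; treat the formula as an assumption, since the original expression is garbled in the source text.

```python
import math

def candidate_shift(delta_phase, period, theta):
    """Horizontal shift (in pixels) of the candidate region Rp for a fringe
    phase difference delta_phase [rad], fringe period 'period' [px], and
    fringe line direction at angle theta [rad] to the horizontal axis."""
    return delta_phase * period / (2 * math.pi * math.sin(theta))

# e.g. a 2*pi/5 phase step, a 10-pixel period, and theta = 60 degrees:
dd = candidate_shift(2 * math.pi / 5, 10.0, math.radians(60))
```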
FIG. 18 is a flowchart showing the noise reduction processing according to the present embodiment.
In step S40, the image processing unit 40 acquires the data of the image targeted for the noise reduction processing, and in step S41, it generates a first enhanced image in which the interference fringes of that image are enhanced. In step S42, the image processing unit 40 acquires the direction and phase of the interference fringes in the image targeted for the noise reduction processing or in the first enhanced image.
The image processing unit 40 acquires the data of a reference image in step S43, and generates a second enhanced image in which the interference fringes of the reference image are enhanced in step S44. In step S45, the image processing unit 40 acquires the direction and phase of the interference fringes in the reference image or in the second enhanced image. Using the information acquired in step S42 and the information acquired in step S45, the image processing unit 40 calculates the amount of change in the phase of the interference fringes in step S46.
Note that the image processing unit 40 may acquire the phase-difference information in at least one of steps S42 and S45, in which case the process of step S46 need not be performed. The image processing unit 40 may also acquire information regarding the period of the interference fringes in at least one of steps S42 and S45.
The image processing unit 40 selects the target region P from the first enhanced image in step S47, and sets the target patch region Np for the target region P in step S48 (see FIG. 17A).
In step S49, the image processing unit 40 sets the candidate region Rp in the second enhanced image based on the amount of change in the interference fringe phase calculated in step S46 (see FIG. 17B).
In step S50, the image processing unit 40 selects the reference region Q from the candidate region Rp in the second enhanced image, and in step S51, it sets the reference patch region Nq for the reference region Q.
In step S52, the image processing unit 40 calculates the similarity w between the target patch region Np in the first enhanced image and the reference patch region Nq in the second enhanced image.
In step S53, the image processing unit 40 determines whether the calculation of the similarity w has been completed for all the reference regions Q scheduled within the candidate region Rp set in step S49. If the image processing unit 40 determines that a reference region Q for which the similarity w has not been calculated remains (step S53; No), it returns to step S50 and selects the next reference region Q. By repeating the processes of steps S50 to S53, the image processing unit 40 calculates the similarity w for all the scheduled reference regions Q in the candidate region Rp (e.g., all the pixels in the candidate region Rp).
If the image processing unit 40 determines in step S53 that the calculation has been completed for all the scheduled reference regions Q (step S53; Yes), it weights the luminance value of the reference region Q in the reference image with the similarity w in step S54.
In step S55, the image processing unit 40 determines whether the processing has been completed for all the scheduled target regions P. If it determines that an unprocessed target region P remains (step S55; No), it returns to step S47 and selects the next target region P on the first enhanced image 60. By repeating the processes of steps S47 to S55, the image processing unit 40 performs the processing for all the scheduled target regions P (e.g., all the pixels).
When the image processing unit 40 determines in step S55 that the processing of all the scheduled target regions P has been completed (step S55; Yes), it determines in step S56 whether the processing of all the scheduled reference images has been completed. If it determines that an unprocessed reference image remains (step S56; No), it returns to step S43 and acquires the next reference image.
The image processing unit 40 performs the noise reduction processing using all the scheduled reference images by repeating the processes of steps S43 to S56.
In step S54, the noise reduction processing unit 44 adjusts the weighting coefficient according to the number of reference images, and integrates the weighted luminance values over the plurality of reference images to calculate the luminance value of the target region P. For example, when three reference images are used, the noise reduction processing unit 44 sets the weighting coefficient to 1/3 of the similarity and calculates a provisional luminance value in the process of step S54 for the first reference image. In the processes of step S54 for the second and third reference images, the noise reduction processing unit 44 updates the provisional luminance value by adding to it the weighted luminance values of the reference regions Q in each of these reference images.
Alternatively, the noise reduction processing unit 44 may perform the process of step S54 after the process of step S56.
If the image processing unit 40 determines in step S56 that all the scheduled reference images have been processed (step S56; Yes), the series of processes ends. The image processing unit 40 sequentially performs the processes of steps S40 to S56 on the plurality of scheduled images, thereby performing the noise reduction processing on every scheduled image.
In the above-described embodiment, the noise reduction processing unit 44 selects the reference region Q from a reference image whose interference fringe phase differs from that of the image including the target region P. In the present embodiment, the reference region Q is selected from a reference image whose interference fringe direction differs. The noise reduction processing unit 44 performs the noise reduction processing based on, for example, the information on the direction of the interference fringes.
FIG. 19A is a conceptual diagram showing a second enhanced image 70 whose interference fringe direction differs from that of the first enhanced image 60 (see FIG. 17), and FIG. 19B is a conceptual diagram showing the process of calculating the similarity between the target patch region Np and the reference patch region Nq.
In FIG. 19, the positional relationship of the interference fringes is expressed using the feature portion 53 (e.g., the center line of a bright portion). In the second enhanced image 70, one of the feature portions of the interference fringes in the first enhanced image 60 (indicated by reference sign 53x) is shown for comparison. In the second enhanced image 70, the direction of the interference fringes (e.g., the line direction of the feature portion 53) is rotated by an angle φ (e.g., 120° clockwise) from the direction of the interference fringes in the first enhanced image 60 (e.g., the line direction of the feature portion 53x). The line direction of the feature portion 53 therefore forms an angle of (θ + φ) with the horizontal scanning direction.
The noise reduction processing unit 44 sets, as the reference region Q, a region whose distance from the feature portion 53 is dx in the direction D1 rotated by the angle φ from the horizontal scanning direction. For example, the noise reduction processing unit 44 sets, as the candidate region Rp, a region in the peripheral range Ap that is parallel to the feature portion 53 and lies at the distance dx from the feature portion 53 in the direction D1, and selects the reference regions Q from the candidate region Rp in order. The candidate region Rp is, for example, a linear (band-shaped) region whose angle with the horizontal scanning direction is (θ + φ).
As shown in FIG. 19B, the noise reduction processing unit 44 rotates the reference patch region Nq counterclockwise by the angle φ and calculates the similarity w between the rotated reference patch region Nq and the target patch region Np. The noise reduction processing unit 44 then performs the noise reduction processing by weighting the luminance value of the reference region Q of the image targeted for the noise reduction processing with the similarity w. Here as well, the positional relationship between the reference region Q and the interference fringes is equivalent to the positional relationship between the target region P and the interference fringes, so the noise can be reduced while the contrast of the interference fringes is maintained.
FIG. 20 is a conceptual diagram illustrating another example of the process of setting the reference region Q and the reference patch region Nq in the second enhanced image 70.
In the example of FIG. 20, the noise reduction processing unit 44 rotates at least a part (e.g., the peripheral range Ap) of the second enhanced image 70 counterclockwise by the angle φ so that the direction of the interference fringes in the rotated peripheral range Ap matches the direction of the interference fringes in the first enhanced image 60. The noise reduction processing unit 44 sets, as the candidate region Rp, a band-shaped region that passes through the position corresponding to the target region P and forms the angle θ with the horizontal scanning direction in the rotated peripheral range Ap. The noise reduction processing unit 44 then sequentially selects the reference region Q from the candidate region Rp and sets the reference patch region Nq for the reference region Q. This processing, too, makes the positional relationship between the reference region Q and the interference fringes equal to the positional relationship between the target region P and the interference fringes.
The reference image may also be an image in which both the phase and the direction of the interference fringes differ from those of the noise reduction target image. In this case, the reference region Q may be set so that, with the directions of the interference fringes aligned, the distance between the reference region Q and the interference fringes is equal to the distance between the target region P and the interference fringes.
FIG. 21 is a flowchart showing the noise reduction processing according to the present embodiment.
The noise reduction processing shown in FIG. 21 differs from that shown in FIG. 18 in the process of step S60 following step S41 and in the processes of steps S61 to S63 following step S44.
In step S60, the noise reduction processing unit 44 acquires the direction of the interference fringes in at least one of the image targeted for the noise reduction processing and the first enhanced image. In step S61, the noise reduction processing unit 44 acquires the direction of the interference fringes in at least one of the reference image and the second enhanced image. In step S62, the noise reduction processing unit 44 calculates the amount of change (e.g., the angle) in the direction of the interference fringes. In step S63, the noise reduction processing unit 44 sets the candidate region Rp in the second enhanced image 70 based on the amount of change calculated in step S62 (see FIG. 19). Thereafter, the noise reduction processing unit 44 performs the noise reduction processing in the same manner as in the flow of FIG. 18.
Note that the noise reduction processing unit 44 may acquire the information on the interference fringe angle between the noise reduction target image and the reference image in at least one of steps S60 and S61, in which case the process of step S62 need not be performed. The image processing unit 40 may also acquire information about the period of the interference fringes in at least one of steps S60 and S61.
The image processing unit 40 sets one of the plurality of images from which the demodulated image is generated as the image targeted for the noise reduction processing, and performs the noise reduction processing using at least one of the other images as a reference image. As in the first embodiment, the image targeted for the noise reduction processing can itself also be used for the noise reduction processing. For example, when nine images are acquired, the image in which the reference region Q is set may be only the one image targeted for the noise reduction processing, may be two or more images including the noise reduction target image and at least one of the eight reference images, or may be at least one of the eight reference images not including the noise reduction target image.
The structured illumination microscope 1 may also acquire a plurality of captured images in which the direction and phase of the interference fringes are the same, and the image processing unit 40 may perform the noise reduction processing using these captured images.
The image processing unit 40 selects, for example, the images targeted for the noise reduction processing in order from the plurality of images (e.g., nine captured images) from which the demodulated image is generated, and performs the noise reduction processing on two or more of the plurality of images. For example, when nine images are acquired in the 2D-SIM mode, the image processing unit 40 performs the noise reduction processing on each of the nine images.
FIG. 22A is a diagram illustrating a modification of the candidate region Rp, and FIG. 22B is a conceptual diagram illustrating the process of calculating the similarity w.
In the above-described embodiments, the number of candidate regions Rp set within one period of the interference fringes is one (see FIGS. 15 and 16); in the present modification, it is two.
The candidate region Rp includes a candidate region Rpc similar to that in the above-described embodiments and a candidate region Rpd. The candidate region Rpd is parallel to the candidate region Rpc but differs from it in position in the horizontal scanning direction: the candidate region Rpc is arranged, for example, at a position separated from the feature portion 53 by the distance dx in the positive horizontal scanning direction, and the candidate region Rpd at a position separated from the feature portion 53 by the distance dx in the negative horizontal scanning direction.
The noise reduction processing unit 44 sets the candidate region Rp based on, for example, the information on the direction of the interference fringes, the information on their period, and the information on their phase.
As shown in FIG. 22B, the noise reduction processing unit 44 rotates by 180° the reference patch region Nq (hereinafter, reference patch region Nqb) set for a reference region Q (hereinafter, reference region Qb) selected from the candidate region Rpd, relative to the reference patch region Nq (hereinafter, reference patch region Nqa) set for a reference region Q (hereinafter, reference region Qa) selected from the candidate region Rpc. The rotated reference patch region Nqb has the same positional relationship with the interference fringes (e.g., the feature portion 53) as the reference patch region Nqa.
The noise reduction processing unit 44 calculates the similarity between the target patch region Np and the rotated reference patch region Nqb, and weights the luminance value of the reference region Qb with this similarity.
The image processing unit 40 uses, for example, an image captured by the image sensor 32 shown in FIG. 3 as the image targeted for the noise reduction processing, but an image generated from the captured image may also be targeted for the noise reduction processing.
In the flow described above, the image processing unit 40 calculates the similarities for all the reference regions Q in the candidate region Rp and then weights the luminance values with them. Alternatively, the image processing unit 40 may accumulate the luminance values weighted by the similarity w each time the similarity w is calculated in step S16, and normalize the accumulated luminance value after the similarity w has been calculated for all the reference regions Q, as sketched below.
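A minimal sketch of this accumulate-then-normalize variant; the data layout (a list of reference coordinates paired with their similarities) is an illustrative assumption.

```python
def normalized_weighted_luminance(captured, reference_coords, similarities):
    """Accumulate the weighted luminance each time a similarity w is obtained
    (step S16), and normalize once all reference regions Q are processed."""
    num = den = 0.0
    for (qx, qy), w in zip(reference_coords, similarities):
        num += w * captured[qy, qx]   # running sum of w * luminance
        den += w                      # running sum of the weights
    return num / den                  # corrected luminance of the target region P
```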
The control device 4 reads, for example, a control program stored in the storage device 6 and executes various processes according to the control program. This control program causes a computer to execute control of processing including, for example: irradiating a sample with interference fringes of excitation light for exciting a fluorescent substance contained in the sample; controlling the direction and phase of the interference fringes; forming an image of the sample irradiated with the interference fringes; capturing the image to generate a captured image; performing filter processing in the frequency domain of the captured image or of an image generated from the captured image; performing noise reduction processing on the filtered image; and performing demodulation processing using the image on which the noise reduction processing has been performed.
The image processing unit 40 may read, for example, an image processing program stored in the storage device 6 and execute various image processing according to the image processing program.
This image processing program causes a computer to perform, using a captured image generated by irradiating a sample with interference fringes of excitation light for exciting a fluorescent substance contained in the sample while controlling the direction and phase of the interference fringes, forming an image of the sample irradiated with the interference fringes, and capturing that image: filter processing in the frequency domain of the captured image or of an image generated from the captured image, and noise reduction processing on the image subjected to the filter processing.
The image processing program may also cause the computer to generate a demodulated image using the image after the noise reduction processing, and may be a part of the control program.
DESCRIPTION OF REFERENCE SIGNS: 1 ... structured illumination microscope, 11 ... illumination optical system, 15 ... branching unit, 31 ... imaging optical system, 32 ... image sensor, 40 ... image processing unit, 41 ... control unit, 50 ... captured image, Np ... target patch region, Nq ... reference patch region, P ... target region, Q ... reference region, Rp ... search region, X ... sample, w ... similarity


Abstract

The invention aims to provide a structured illumination microscope capable of preserving the contrast of the interference fringes while reducing noise. To this end, a structured illumination microscope (1) according to the invention comprises: an illumination optical system (11) that irradiates a sample (X) with interference fringes of excitation light for exciting a fluorescent substance contained in the sample; a control unit (41) that controls the direction and phase of the interference fringes; an image-forming optical system (31) that forms images of the sample while the sample is irradiated with the interference fringes; an imaging element (32) that captures the images formed by the image-forming optical system and generates captured images; and an image processing unit (40) that performs demodulation processing using the captured images. The image processing unit comprises: a filter processing unit that performs filter processing in the frequency domain of the captured images or of images generated from the captured images; a noise reduction processing unit that performs noise reduction processing on the filtered images; and a demodulation unit that performs the demodulation processing using the images that have undergone the noise reduction processing.
PCT/JP2015/058420 2015-03-20 2015-03-20 Microscope à éclairage structuré, procédé d'observation et programme de traitement d'image WO2016151666A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/058420 WO2016151666A1 (fr) 2015-03-20 2015-03-20 Microscope à éclairage structuré, procédé d'observation et programme de traitement d'image


Publications (1)

Publication Number Publication Date
WO2016151666A1 (fr) 2016-09-29

Family

ID=56977885

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/058420 WO2016151666A1 (fr) 2015-03-20 2015-03-20 Microscope à éclairage structuré, procédé d'observation et programme de traitement d'image

Country Status (1)

Country Link
WO (1) WO2016151666A1 (fr)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013076867A * 2011-09-30 2013-04-25 Olympus Corp Super-resolution observation device
JP2014240870A * 2013-06-11 2014-12-25 Olympus Corp Confocal image generation device


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106706577A * 2016-11-16 2017-05-24 深圳大学 Optical imaging system and method
CN106706577B * 2016-11-16 2019-08-16 深圳大学 Optical imaging system and method
WO2018150471A1 * 2017-02-14 2018-08-23 株式会社ニコン Structured illumination microscope, observation method, and program
JPWO2018150471A1 * 2017-02-14 2019-12-19 株式会社ニコン Structured illumination microscope, observation method, and program
DE102018108657A1 2018-04-12 2019-10-17 Jenoptik Optical Systems Gmbh Microscope with structured illumination
DE102018108657B4 2018-04-12 2024-03-28 Jenoptik Optical Systems Gmbh Device for recording at least one microscopic image and method for recording a microscopic image
CN117368174A * 2023-12-07 2024-01-09 深圳赛陆医疗科技有限公司 Imaging system and imaging method


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15886223

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 15886223

Country of ref document: EP

Kind code of ref document: A1