CN111512205A - Method and apparatus for optical confocal imaging using a programmable array microscope

Info

Publication number
CN111512205A
CN111512205A CN201780097852.9A
Authority
CN
China
Prior art keywords
conjugate
camera
modulator
image
programmable array
Prior art date
Legal status
Pending
Application number
CN201780097852.9A
Other languages
Chinese (zh)
Inventor
T. M. Jovin
A. H. B. de Vries
D. J. Arndt-Jovin
Current Assignee
Max Planck Gesellschaft zur Foerderung der Wissenschaften eV
Original Assignee
Max Planck Gesellschaft zur Foerderung der Wissenschaften eV
Priority date
Filing date
Publication date
Application filed by Max Planck Gesellschaft zur Foerderung der Wissenschaften eV
Publication of CN111512205A

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 Microscopes
    • G02B21/0004 Microscopes specially adapted for specific applications
    • G02B21/002 Scanning microscopes
    • G02B21/0024 Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
    • G02B21/0032 Optical details of illumination, e.g. light-sources, pinholes, beam splitters, slits, fibers
    • G02B21/0036 Scanning details, e.g. scanning stages
    • G02B21/0048 Scanning details, e.g. scanning stages scanning mirrors, e.g. rotating or galvanomirrors, MEMS mirrors
    • G02B21/0052 Optical details of the image generation
    • G02B21/0076 Optical details of the image generation arrangements using fluorescence or luminescence
    • G02B21/008 Details of detection or image processing, including general computer control
    • G02B21/0084 Details of detection or image processing, including general computer control time-scale detection, e.g. strobed, ultra-fast, heterodyne detection
    • G02B26/00 Optical devices or arrangements for the control of light using movable or deformable optical elements
    • G02B26/08 Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
    • G02B26/0816 Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements
    • G02B26/0833 Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements the reflecting element being a micromechanical device, e.g. a MEMS mirror, DMD
    • G02B26/0841 Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements the reflecting element being a micromechanical device, e.g. a MEMS mirror, DMD the reflecting element being moved or deformed by electrostatic means

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Microscopes, Condenser (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Mechanical Light Control Or Optical Switches (AREA)
  • Mechanical Optical Scanning Systems (AREA)

Abstract

Optical confocal imaging is performed using a Programmable Array Microscope (PAM) (100) having a light source arrangement (10), a spatial light modulator arrangement (20) with a plurality of reflective modulator elements, a PAM objective and a camera arrangement (30), wherein the spatial light modulator arrangement (20) is configured such that a first set of modulator elements (21) can be selected for directing excitation light to conjugate positions of an object to be investigated and for directing detection light originating from these positions to the camera arrangement (30), and a second set of modulator elements (22) can be selected for directing detection light from non-conjugate positions of the object to the camera arrangement (30). The method comprises the steps of: directing excitation light from the light source arrangement (10) to the object to be investigated via the first set of modulator elements, wherein the spatial light modulator device (20) is controlled such that a predetermined sequence of patterns of illumination points is focused to conjugate positions of the object, each illumination point being created by a defined current PAM illumination aperture; collecting a conjugate image Ic based on collecting detection light from conjugate positions of the object for each pattern of PAM illumination apertures; collecting, using a non-conjugate camera channel of the camera arrangement (30), a non-conjugate image Inc based on collecting detection light from non-conjugate positions of the object via the second set of modulator elements (22) for each pattern of PAM illumination apertures; and creating an Optical Slice Image (OSI) of the object based on the conjugate image Ic and the non-conjugate image Inc, wherein the step of collecting image data of the conjugate image Ic comprises: collecting, with the non-conjugate camera channel of the camera arrangement (30), for each pattern of PAM illumination apertures, partial detection light from conjugate positions of the object via modulator elements of the second set of modulator elements (22) surrounding the current PAM illumination aperture. Furthermore, a PAM calibration method and a PAM configured for use in the above method are described.

Description

Method and apparatus for optical confocal imaging using a programmable array microscope
Technical Field
The present invention relates to an optical confocal imaging method performed using a Programmable Array Microscope (PAM). Furthermore, the invention relates to a PAM configured for confocal optical imaging using a spatio-temporal light modulation imaging system. Applications of the invention lie in particular in the field of confocal microscopy.
Background
EP 911667 A1, EP 916981 A1 and EP 2369401 B1 disclose PAMs operated on the basis of the simultaneous acquisition and combination of conjugate (c, "in focus", Ic) and non-conjugate (nc, "out of focus", Inc) 2D images to achieve fast, wide-field optical sectioning in fluorescence microscopes. A plurality of apertures ("pinholes") is defined by the distribution of enabled ("on") micromirror elements of a large (currently 1080p, 1920 × 1080) Digital Micromirror Device (DMD) array placed in the primary image field of a microscope. A PAM module comprising a light source device and a camera device is attached to the microscope via a single output/input port and receives the corresponding emitted light, directing it to the camera device. Although DMDs are widely used for excitation purposes, their use in both the excitation and detection paths ("two-pass principle") is unique to the PAM concept and its implementations. The "on" and "off" mirrors direct the fluorescence signals to dual cameras for registration of the c and nc images, respectively.
In addition, advanced fluorescence measurement techniques are known, in particular structured illumination fluorescence microscopy (SIM) (see J. Demmerle et al. in "Nature Protocols", vol. 12, p. 988).
Disclosure of Invention
Objects of the invention
It is an object of the present invention to provide an improved confocal optical imaging method and/or apparatus which avoids the disadvantages of the conventional techniques. In particular, it is an object of the present invention to provide confocal optical imaging with new PAM applications that improve spatial resolution, reduce system complexity and/or enable advanced fluorescence measurement techniques.
Summary of the invention
The above object is achieved with an optical confocal imaging method and/or a spatio-temporal light modulation imaging system (programmable array microscope, PAM) comprising the features of one of the independent claims. Preferred embodiments and applications of the invention are defined in the dependent claims.
According to a first general aspect of the present invention, the above object is solved by an optical confocal imaging method which is performed using a PAM having a light source arrangement, a spatial light modulator arrangement with a plurality of reflective modulator elements, a PAM objective lens and a camera arrangement. The spatial light modulator device, in particular a Digital Micromirror Device (DMD) with an array of individually tiltable mirrors, is configured such that a first set of modulator elements can be selected for directing excitation light to conjugate positions of an object (sample) to be investigated and for directing detection light originating from these positions to the camera device, and a second set of modulator elements can be selected for directing detection light from non-conjugate positions of the object to the camera device.
The optical confocal imaging method comprises the following steps. Excitation light is directed from the light source device to the object to be investigated, in particular via the first set of modulator elements and via reflective and/or refractive imaging optics (excitation or illumination step). The spatial light modulator device is controlled such that a predetermined sequence of patterns of illumination points is focused to conjugate positions of the object, wherein each illumination point is created by a single modulator element or a group of adjacent modulator elements defining the current PAM illumination aperture. Image data of a conjugate image Ic and of a non-conjugate image Inc are collected with the camera device. The image data of the conjugate image Ic are collected by employing, for each pattern of illumination points and PAM illumination apertures, detection light from conjugate positions of the object (a conjugate position is a position in the object in a focal plane conjugate to the spatial light modulator surface and to the imaging plane of the camera device). The image data of the non-conjugate image Inc are collected by employing, for each pattern of illumination points and PAM illumination apertures, detection light received from non-conjugate positions of the object (positions other than the conjugate positions) via the second set of modulator elements. Based on the conjugate image Ic and the non-conjugate image Inc, an Optical Slice Image (OSI) of the object is created, preferably using a control device included in the PAM. For example, the control device comprises at least one computer circuit, each comprising at least one control unit for controlling the light source device and the spatial light modulator device and at least one calculation unit for processing camera signals received from the camera device.
According to the invention, the step of collecting the image data of the conjugate image Ic comprises: collecting, with the non-conjugate camera channel of the camera device, for each pattern of PAM illumination apertures, partial detection light from conjugate positions of the object via the modulator elements of the second set of modulator elements surrounding the current PAM illumination aperture. Depending on the aperture size and the 3D distribution of the absorbing/emitting species in the object (sample) to be investigated, the conjugate Ic image may also include portions of detection light originating from non-conjugate positions of the sample. Conversely, the non-conjugate Inc image may also contain portions of detection light originating from conjugate positions of the sample. According to the invention, the step of forming the OSI is based in particular on calculating the conjugate and non-conjugate detection light portions in the Ic and Inc images and combining the signals. To this end, the invention exploits the property that the excitation light not only impinges on the conjugate ("in-focus") volume elements of the object, but also traverses the object with an intensity distribution given by the 3D psf ("3D point spread function", e.g. approximately ellipsoidal around the focal plane and diverging, e.g. conically, at larger axial distances from the focal plane) of the imaging optics, resulting in a non-conjugate ("out-of-focus") distribution of the excited species. The inventors have found that, owing to the point spread functions of the PAM imaging optics in the illumination and detection channels and when operating with a small PAM illumination aperture, a significant portion of the detection light from conjugate positions of the object is directed into the non-conjugate camera channel, where it is superimposed with the detection light from non-conjugate positions of the object, and that the two contributions can be separated from each other. This both greatly reduces system complexity, because the PAM can operate with only a single camera providing the non-conjugate camera channel, and, because light is collected efficiently via the non-conjugate camera channel, allows the size of the illumination aperture (illumination spot diameter) to be reduced while improving resolution. The small illumination aperture and the efficient collection of detection light can significantly improve lateral spatial resolution and optical sectioning efficiency while maintaining a high signal-to-noise ratio.
According to a second general aspect of the present invention, the above object is solved by an optical confocal imaging method performed using a PAM having a light source arrangement, a spatial light modulator arrangement with a plurality of reflective modulator elements, a PAM objective lens and a camera arrangement, as in the PAM according to the first aspect of the invention. In particular, the spatial light modulator device is operated and the excitation light is directed to the object to be investigated as described with reference to the first aspect of the invention. A conjugate image Ic is formed by collecting, with a conjugate camera channel of the camera arrangement, detection light from conjugate positions of the object via the first set of modulator elements for each pattern of illumination points and PAM illumination apertures, and a non-conjugate image Inc is formed by collecting, with a non-conjugate camera channel of the camera arrangement, detection light from non-conjugate positions of the object via the second set of modulator elements for each pattern of illumination points and PAM illumination apertures. Based on the conjugate image Ic and the non-conjugate image Inc, an optical slice image of the object is obtained.
According to the invention, the conjugate image (Ic) and the non-conjugate image (Inc) are registered with each other by using calibration data obtained with a calibration procedure which comprises mapping the positions of the modulator elements to camera pixel positions of the camera arrangement, in particular of the camera(s) providing the conjugate and non-conjugate camera channels. The calibration procedure includes collecting calibration images and processing the recorded calibration images for creating calibration data that assign each camera pixel of the camera device to one of the modulator elements.
Advantageously, applying the calibration procedure allows mapping the total intensity in the "speckled" recorded spots to a single known position in a spatial light modulator device (DMD array), thereby improving the spatial imaging resolution. In addition, the c and nc camera images are mapped to the same source DMD array, thus ensuring absolute registration of the c and nc distributions in the DMD space. These advantages have been obtained by incorporating a calibration procedure in the operation of conventional PAM. As outlined further below, particular advantages are provided if the calibration procedure is applied in an embodiment of the optical confocal imaging method according to the first general aspect of the invention.
According to a third general aspect of the present invention, the above object is solved by a PAM having a light source arrangement, a spatial light modulator arrangement with a plurality of reflective modulator elements, a PAM objective, relay optics, a camera arrangement and a control device. Preferably, the PAM is configured to be able to perform the optical confocal imaging method according to the above-described first general aspect of the invention. The spatial light modulator device is configured such that a first set of modulator elements can be selected for directing excitation light to conjugate positions of the object to be investigated and for directing detection light originating from these positions to the camera device, and a second set of modulator elements can be selected for directing detection light from non-conjugate positions of the object to the camera device. The light source device is arranged for directing excitation light to the object to be investigated via the first set of modulator elements, wherein the control device is adapted for controlling the spatial light modulator device such that a predetermined sequence of patterns of illumination points is focused to conjugate positions of the object, wherein each illumination point is created by at least one single modulator element defining the current PAM illumination aperture. The camera arrangement is arranged for collecting image data of a conjugate image Ic by collecting detection light from conjugate positions of the object for each pattern of illumination points and PAM illumination apertures. Furthermore, the camera arrangement comprises a non-conjugate camera channel configured for collecting image data of a non-conjugate image Inc by collecting detection light from non-conjugate positions of the object via the second set of modulator elements for each pattern of illumination points and PAM illumination apertures. The control device is adapted to create an optical slice image of the object based on the conjugate image Ic and the non-conjugate image Inc. For example, the control device comprises at least one computer circuit, each comprising at least one control unit for controlling the light source device and the spatial light modulator device, and at least one calculation unit for processing camera signals received from the camera device.
According to the invention, the non-conjugate camera channel of the camera arrangement is arranged for collecting, for each pattern of illumination points and PAM illumination apertures, partial detection light from conjugate positions of the object via the modulator elements of the second set of modulator elements surrounding the current PAM illumination aperture. Preferably, the control device is adapted to extract the conjugate image Ic as a contribution included in the non-conjugate image Inc.
According to a fourth general aspect of the present invention, the above object is solved by a PAM having a light source arrangement, a spatial light modulator arrangement with a plurality of reflective modulator elements, a PAM objective, relay optics, a camera arrangement and a control device. Preferably, the PAM is configured to be able to perform the optical confocal imaging method according to the above-described second general aspect of the invention. The spatial light modulator device is configured such that a first set of modulator elements can be selected for directing excitation light to conjugate positions of the object to be investigated and for directing detection light originating from these positions to the camera device, and a second set of modulator elements can be selected for directing detection light from non-conjugate positions of the object to the camera device. The light source device is arranged for directing excitation light to the object to be investigated via the first set of modulator elements. The control device is adapted to control the spatial light modulator device such that a predetermined sequence of patterns of illumination points is focused to conjugate positions of the object, wherein each illumination point is created by at least one single modulator element defining the current PAM illumination aperture. The camera arrangement has a conjugate camera channel (c camera) configured to form a conjugate image Ic by collecting detection light from conjugate positions of the object via the first set of modulator elements for each pattern of illumination points and PAM illumination apertures. Furthermore, the camera arrangement has a non-conjugate camera channel (nc camera) configured to form a non-conjugate image Inc by collecting detection light from non-conjugate positions of the object via the second set of modulator elements for each pattern of illumination points and PAM illumination apertures. The control device is adapted to create an optical slice image of the object based on the conjugate image Ic and the non-conjugate image Inc.
According to the invention, the control device is adapted to register the conjugate image (Ic) and the non-conjugate image (Inc) with each other by using calibration data, the calibration data being obtained with a calibration procedure comprising mapping the positions of the modulator elements to camera pixel positions.
According to a preferred embodiment of the invention, the spatial light modulator device is controlled such that the current PAM illumination aperture has a diameter approximately equal to or smaller than M · λ / (2 NA), where λ is the central wavelength of the excitation light, NA is the numerical aperture of the objective lens, and M is the combined magnification of the objective lens and the relay lens between the modulator aperture and the object to be investigated. Advantageously, the PAM illumination aperture thus has a diameter equal to or smaller than the diameter of the Airy disk (i.e. the best-focused, diffraction-limited spot that a perfect lens with a circular aperture can produce), thereby increasing the lateral spatial resolution compared with conventional PAMs and confocal microscopes.
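As a numerical illustration of this aperture criterion (not part of the patent), the following Python sketch evaluates M · λ / (2 NA) in the DMD plane and converts it into a number of micromirrors; the wavelength, numerical aperture, magnification and mirror pitch used here are assumed example values, not values specified in the patent.

import math

def airy_limited_aperture_mirrors(wavelength_nm, na, magnification, mirror_pitch_um):
    """Return the aperture diameter (in the DMD plane) corresponding to the
    Airy-disk criterion M * lambda / (2 * NA), and the number of mirrors it spans."""
    # Aperture diameter in the DMD (modulator) plane, in micrometres
    d_um = magnification * (wavelength_nm * 1e-3) / (2.0 * na)
    # Number of mirrors spanned by that diameter (rounded down)
    return d_um, math.floor(d_um / mirror_pitch_um)

# Illustrative values only (not specified in the patent):
d_um, n_mirrors = airy_limited_aperture_mirrors(
    wavelength_nm=500, na=1.4, magnification=60 * 2, mirror_pitch_um=7.6)
print(f"Airy-limited aperture in DMD plane: {d_um:.1f} um, about {n_mirrors} mirrors across")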
The number of modulator elements forming one spot or PAM illumination aperture can be chosen depending on the size of the modulator elements (mirrors) of the DMD array used and the requirements on resolution. If a plurality of modulator elements form a PAM illumination aperture, they preferably have a compact arrangement, for example as a square arrangement. Preferably, each of the PAM illumination apertures is created by a single modulator element. Thus, the advantage of maximum spatial resolution is obtained.
According to another advantageous embodiment of the invention, the camera arrangement comprises a conjugate camera channel (conjugate camera) in addition to the non-conjugate camera channel. In this case, forming the conjugate image Ic further comprises the steps of: forming a partial conjugate image Ic with the conjugate camera channel by collecting detection light from conjugate and non-conjugate positions of the object via the first set of modulator elements for each pattern of illumination points and PAM illumination apertures; extracting the partial conjugate image Ic from the images collected with the conjugate camera channel; and superimposing said partial conjugate image Ic with the contribution extracted from the non-conjugate image Inc to form the optical slice image. Advantageously, with this embodiment the optical slice image comprises all available light from the conjugate positions, thereby improving the SNR of the image signal.
Preferably, for each PAM illumination aperture, the respective modulator elements of the PAM illumination aperture (included in or surrounding the PAM illumination aperture) define a conjugate or non-conjugate camera pixel mask around the centroid of the camera signal, corresponding to that PAM illumination aperture, in the respective conjugate or non-conjugate camera channel of the camera arrangement. Each respective conjugate or non-conjugate camera pixel mask is subjected to a dilation (enlargement), and an estimate of the respective background conjugate or non-conjugate signal is obtained from the dilated conjugate or non-conjugate camera pixel mask and used to correct the conjugate (Ic) and non-conjugate (Inc) images. Advantageously, forming and dilating the mask provides additional background information that improves image quality.
According to a particularly preferred embodiment of the optical confocal imaging method according to the first general aspect of the invention, a calibration procedure is applied comprising the following steps: illuminating the modulator elements with a calibration light source device; creating a sequence of calibration patterns using the modulator elements; recording calibration images of the calibration patterns using the camera device; and processing the recorded calibration images for creating calibration data that assign each camera pixel of the camera arrangement to one of the modulator elements. For example, the calibration light source device comprises a white light source or a coloured light source that uniformly illuminates the spatial light modulator device from the front side (instead of the fluorescent object). With the calibration procedure, one of the main technical challenges of PAM operation, namely the accurate registration of the two images c and nc, is solved.
Preferably, the calibration patterns comprise, for example, a sequence of regular, preferably hexagonal, matrices of light spots, each spot being produced by at least one single modulator element and the spots having non-overlapping camera responses. In other words, according to one preferred embodiment using said calibration in all aspects of the invention, the separation of the modulator elements is chosen such that the corresponding distribution of the evoked signals recorded by the camera device is clearly isolated from the distributions of neighbouring spots. Advantageously, the recorded points in the camera image are then sufficiently separated not to overlap, so that they can be segmented unambiguously. Hexagonal matrices of light spots are particularly preferred, as they have the advantage that the individual modulator elements are equidistant from each other and sufficiently far apart in all directions in the plane of the camera detector to optimize the collection of the single responses from the individual modulator elements with the camera.
According to another preferred embodiment using said calibration in all aspects of the invention, the number of calibration patterns is chosen such that all modulator elements are used for recording calibration images and creating calibration data. Advantageously, this allows the calibration to fully cover the spatial light modulator means.
According to another preferred embodiment using said calibration in all aspects of the invention, the sequence of calibration patterns is randomized such that the separation between modulator elements of successive patterns is maximized. Advantageously, this allows for minimizing temporal perturbations (e.g., temporal losses) of neighboring sites.
As a further advantage of the invention, the camera pixels of the camera device (c and/or nc channels) that respond to light received from a given modulator element preferably provide a distinct, unique and stable distribution of camera signal intensities associated with their coordinates in the matrix of camera pixels, and these distributions are mapped to the respective modulator elements using the calibration procedure. The distributions are described by a system of linear equations defining the response to an arbitrary distribution of intensities originating from the modulator elements.
According to a first variant (centroid method), all collected calibration pattern images are accumulated (the image signals of the entire sequence of illumination patterns are superimposed) and the camera signals are mapped back to their corresponding originating modulator elements, wherein the centroid of the camera signals defines a local sub-image in which the intensities are combined by a predetermined algorithm, e.g. an arithmetic or Gaussian average over the 3 × 3 neighbourhood centred at the centroid position, in order to generate the signal intensity assignable to the respective originating modulator element.
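A minimal sketch of this centroid variant (Python; not from the patent) is given below. It assumes non-overlapping spots and uses scipy's labelling and centre-of-mass routines; the subsequent assignment of each centroid to its originating DMD element from the known calibration pattern is not shown, and all names are illustrative.

import numpy as np
from scipy import ndimage

def centroid_map(calib_images, threshold):
    """Centroid variant (sketch): accumulate all calibration images, segment the
    isolated spots and return intensity-weighted centroids plus a 3x3-mean
    intensity per spot. Assumes the spots do not overlap."""
    accumulated = np.sum(np.asarray(calib_images, dtype=float), axis=0)
    mask = accumulated > threshold                      # binary mask of all spots
    labels, n_spots = ndimage.label(mask)               # segment individual spots
    centroids = ndimage.center_of_mass(accumulated, labels, range(1, n_spots + 1))
    intensities = []
    for cy, cx in centroids:                            # 3x3 mean around each centroid
        iy, ix = int(round(cy)), int(round(cx))
        intensities.append(accumulated[iy - 1:iy + 2, ix - 1:ix + 2].mean())
    return np.array(centroids), np.array(intensities)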
According to a second variant (Airy aperture method), all collected images are again accumulated and the camera signals are mapped back to their corresponding originating modulator elements. The image signals of the entire sequence of illumination patterns are superimposed. The illumination pattern includes illumination apertures having a size comparable to the Airy diameter (related to the central wavelength of the excitation light). In this case, the signal at each position in the image produced by overlapping the camera responses of the entire sequence of patterns is represented as a linear equation whose coefficients are known from the calibration procedure, and the corresponding emission signals impinging on the respective modulator elements are obtained by solving the system of linear equations describing the entire image. Thus, the camera signals representing the responses of the individual modulator elements are mapped back to their corresponding coordinates in the modulator matrix: the signal at each position in the image resulting from the overlapping responses to the entire sequence of patterns is a linear equation with known coefficients, to which the emission signals striking the respective modulator elements contribute, and these signals are evaluated by solving the system of linear equations describing the entire image. Advantageously, fluorescence imaging with improved accuracy is obtained by employing the system of linear equations.
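The following sketch (Python; not from the patent) indicates how such a system of linear equations could be solved, assuming a sparse coefficient matrix A (camera pixels × DMD elements) measured during calibration; the sparse least-squares solver and the non-negativity clipping are implementation assumptions, not requirements stated in the patent.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

def solve_dmd_intensities(A, camera_image):
    """Airy-aperture variant (sketch): each camera pixel value is modelled as a
    linear combination of the unknown intensities at the DMD elements,
    camera = A @ dmd, with the coefficients of A known from calibration.
    Solving the sparse system in a least-squares sense recovers the intensity
    assigned to each DMD element."""
    b = camera_image.ravel().astype(float)
    result = lsqr(csr_matrix(A), b)               # sparse least-squares solution
    dmd_intensities = result[0]
    return np.clip(dmd_intensities, 0.0, None)    # enforce non-negative intensities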
In the context of a particular application of the present invention, simultaneous or time-shifted excitation is provided with the same pattern by applying one or more further light sources from the opposite side with respect to the first excitation light source and the spatial light modulator device.
Thus, with one preferred embodiment of the invention, the light source device comprises a first light source arranged for directing excitation light to the conjugate positions of the object and a second light source arranged for directing excitation light to non-conjugate positions of the object, and the second light source is controlled for generating excitation light such that the excitation generated by the first light source is confined to the conjugate positions of the object. In particular, the second light source may be controlled for depleting excited states around the conjugate positions of the object.
Furthermore, the detected light from the object may be delayed emission, e.g. delayed fluorescence and phosphorescence, so that the aperture patterns of the modulator elements used for excitation and detection may be different and experimentally synchronized.
If, according to another preferred embodiment of the invention, the first set of modulator elements consists of a 2D or linear array of a small number of elements (in the limit a single element), and the camera signals of the individual modulator elements constitute distinct, unique and stable distributions of the relevant signal intensities with coordinates in the matrix of camera pixels and in the matrix of modulator elements defined by the calibration procedure, the advantage of applying advanced fluorescence techniques can be obtained.
The invention has the following further advantages and features. The PAM of the present invention allows fast acquisition, large fields, excellent resolution and slicing capability, and simple (i.e., "inexpensive") hardware. The excitation and emission point spread functions can be optimized without loss of signal.
According to further aspects of the invention, there are described a computer-readable medium comprising computer-executable instructions for controlling a programmable array microscope to perform one of the methods of the invention, a computer program residing on a computer-readable medium and having program code for performing one of the methods of the invention, and an apparatus, such as a control device, comprising a computer-readable storage medium containing program instructions for performing one of the methods of the invention.
Drawings
Further advantages and details of preferred embodiments of the invention are described below with reference to the drawings, in which:
FIG. 1: schematic illustration of illumination and detection light paths in PAM according to a preferred embodiment of the present invention;
FIG. 2: a flow chart illustrating a calibration process according to a preferred embodiment of the present invention;
FIG. 3: an illustration of one example of a single aperture map used in a calibration process according to a preferred embodiment of the present invention;
FIG. 4: experimental results representing a comparison of registration methods during calibration according to a preferred embodiment of the present invention;
FIG. 5: creating a diagram of a dilated mask for processing conjugated and non-conjugated single aperture images in accordance with a preferred embodiment of the present invention;
FIG. 6: further experimental results obtained using the optical confocal imaging method according to the preferred embodiment of the present invention.
Detailed Description
The following description of the preferred embodiments of the present invention relates to the inventive strategy of achieving single-image acquisition while enhancing resolution in exchange for speed, based on three PAM modes of operation, all of which maintain optical sectioning. They incorporate acquisition and data-processing methods that allow operation in three steps of increasing lateral imaging resolution. The first PAM mode of operation (or: RES1 mode) is based on the calibration according to the invention, such that the lateral resolution is equal to or greater than 200 nm. The second PAM mode of operation (or: RES2 mode) allows a reduction of the illumination aperture and a lateral resolution in the 100 nm to 200 nm range, based on the inventive extraction of the conjugate image from the non-conjugate camera channel. The third PAM mode of operation (or: RES3 mode) is based on advanced fluorescence techniques, resulting in a lateral resolution below 100 nm. It is noted that the calibration of the RES1 mode is a preferred but optional feature of the RES2 and RES3 modes, which may alternatively be based on other pre-stored reference data, including the distribution of camera pixels "receiving" conjugate and non-conjugate signals from a single modulator element.
These three ranges of enhanced resolution correspond to the resolution enhancements achieved by conventional confocal microscopy, by the "SIM" family of techniques and by selective depletion methods such as RESOLFT, respectively, or to other methods such as FLIM, FRET, time-resolved delayed fluorescence or phosphorescence, hyperspectral imaging, minimal exposure (MLE) and/or tracking. It is advantageous that switching between these modes does not require physical modification of the instrument. It is noted that the three modes of operation described above can be implemented alone, e.g. the RES1 mode alone or the RES2 mode or the RES3 mode alone, or in combination, e.g. the RES3 mode including the features of RES2.
The present description relates to a PAM comprising a camera arrangement with two cameras. Note that a single-camera embodiment may be used in alternative configurations, particularly if the calibration is omitted because pre-stored calibration data are available and if the optical slice image is extracted from the non-conjugate camera only.
The following description of the modes of operation relates to the implementation of the calibration procedure, of the conjugate image extraction and of advanced fluorescence techniques using a PAM. Fig. 1 schematically shows the components of a PAM 100 having a light source arrangement 10 comprising one or two light sources 11, 12, e.g. semiconductor lasers, a spatial light modulator arrangement, e.g. a DMD array 20 with a plurality of tiltable reflective modulator elements 21, 22, a camera arrangement 30 with one non-conjugate camera 31 or two cameras 31, 32 (non-conjugate and conjugate), and a control device 40 connected to the components 10, 20 and 30. Further details of the PAM, such as the microscope body, the objective, the relay optics and the support of the object 1 (sample) to be investigated, are not shown in the schematic drawing. The details of known PAMs, such as the optical setup, the control of the spatial light modulator device, the collection of camera signals, and the creation of optical slice images from conjugate and non-conjugate images, are implemented as known from conventional PAMs. The disclosure of EP 2369401 A1, in particular with regard to the structure and operation of a PAM as shown in Figs. 1, 2, 4 and 5 thereof and their description and the design of the imaging optics, is incorporated by reference into this specification.
In more detail, the DMD array 20 comprises an array of modulator elements 21, 22 (mirror elements) arranged in the modulator plane of the PAM 100, wherein each modulator element can be switched independently between two states (tilt angles, see the enlarged part of Fig. 1). For example, binary 1080p (high definition) patterns are generated at a frequency of, for example, about 16 kHz. Imaging optics (not shown in Fig. 1) are arranged to focus the illumination light A from the DMD array 20 (via the "on" tilt state) onto the object 1 in the PAM 100 and to relay the emitted light generated in the object in response to the illumination light back to the DMD. The latter divides the detection light into two paths corresponding to the tilt angle of each micromirror. One detector camera 32 is arranged to collect the so-called "conjugate" light (originating from the "on" mirrors) and a second camera 31 is arranged to detect the "non-conjugate" light (originating from the mirrors in the "off" state). The two images are combined in real time (after registration and distortion correction) by a simple subtraction process to generate an optically sectioned image, similar to the "confocal" image produced by a point-scanning system. However, the excitation duty cycle of the PAM is an order of magnitude higher, thus providing the very high frame rates required for real-time systems.
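The following minimal sketch (in Python, not part of the patent) illustrates this subtraction step under the assumption that the conjugate and non-conjugate images have already been registered to common coordinates; the function name and the optional scaling factor are illustrative only.

import numpy as np

def optical_slice(conjugate_img, non_conjugate_img, scale=1.0):
    """Sketch of the c/nc combination: after both images have been registered,
    the optically sectioned image is obtained by subtracting the (optionally
    scaled) non-conjugate image from the conjugate image. 'scale' is an
    illustrative balancing factor, not a value specified in the patent."""
    osi = conjugate_img.astype(float) - scale * non_conjugate_img.astype(float)
    return np.clip(osi, 0.0, None)    # negative values have no physical meaning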
The light beams from the light sources 11, 12 via the DMD array 20 to the object 1 and via the DMD array 20 back to the cameras 31, 32 are only indicated by lines in fig. 1. In practice, a wide illumination is provided covering the entire surface of the DMD array 20, wherein the DMD array 20 is controlled such that a pattern of illumination spots is directed to the object 1 and focused in its focal plane 2. Thus, in practice, each illumination point creates a line beam path as shown in fig. 1.
The DMD array 20 (see the enlarged schematic in fig. 1) may be controlled such that a first set of modulator elements, e.g. 21, may be selected for directing excitation light a to conjugate positions in the focal plane 2 of the object 1 and for directing detection light B from these positions to the camera device 30, in particular to the non-conjugate camera 31 and optionally also to the conjugate camera 32. Furthermore, the DMD array 20 may be controlled such that a second set of modulator elements, e.g. 22, may be selected for directing detected light C from a non-conjugate position of the object to the camera device 30, in particular to the non-conjugate camera 31. Furthermore, as described below with reference to the RES2 mode, a second set of modulator elements, e.g., 22, directs detected light B from a conjugate position to the non-conjugate camera 31. Each set of modulator elements comprises a pattern of illumination apertures 23, each illumination aperture being formed by one individual modulator element 21 or a set of modulator elements 21.
The cameras 31, 32, e.g. CMOS cameras, comprise a matrix array of sensitive camera pixels 33 which collect the detection light received via the modulator elements 21, 22. With the calibration procedure of the RES1 mode, the camera pixels 33 are mapped to the modulator elements 21, 22 of the DMD array 20.
Preferably, functional software runs in the control device 40 (Fig. 1), which allows all connected components, in particular the units 10, 20 and 30, to be controlled and configured and fully automatic image acquisition to be performed. It also provides further image processing (image distortion correction, registration and subtraction) for generating the optically sectioned PAM image. The control device 40 allows the integration of the PAM modes, e.g. super-resolution.
The control device 40 performs the following tasks. Firstly, it communicates with all connected hardware (DMD array 20 controller, one or two cameras 32, 31, filter wheel, LED and/or laser excitation light sources 11, 12, microscope, xy micro-electromechanical stage and z piezo stage), including control and settings. Secondly, it instructs the hardware to perform specific operations particular to the PAM 100, including displaying a (large number of) binary patterns on the DMD array 20 and synchronously acquiring the resulting patterned fluorescence due to these patterns (conjugate and non-conjugate images) on one or two cameras 32, 31. Synchronization of display and acquisition is achieved by hardware triggering using a proprietary scripting language controlled by the FPGA integrated on the DMD controller board; specific scripts have been developed for the different acquisition modalities, and the application software assembles the required scripts based on the acquisition protocol and parameters. Thirdly, the control device 40 processes the acquired conjugate and non-conjugate images, performing background and shading corrections, non-linear distortion correction and image registration, and finally subtraction or scaling (small aperture combination) to generate the required optically sectioned images.
RES1 mode - calibration procedure
The calibration procedure is based on the consideration that a single illumination aperture (virtual "pinhole") in the image plane of the PAM 100 defines the excitation point spread function (psf) in the focal plane 2 of the PAM 100 in the object 1, while it also imposes geometric constraints on the excited emission passing back through it to the camera behind it (the origin of the term "confocal"). Signals emitted from off-axis points in the focal plane 2 pass through the aperture 23 with an efficiency depending on the pinhole diameter and the psf corresponding to the PAM optics and the emission wavelength. Out-of-focus signals originating from positions displaced from the focal plane and/or the optical axis are attenuated to a greater extent, providing the z-axis discrimination (optical sectioning). The pinhole also defines the lateral and axial resolution, which improves with decreasing aperture size, albeit at the expense of signal loss due to the rejected in-focus contribution. In most conventional systems, the aperture size is set close to the Airy diameter defined by the psf, providing an acceptable compromise between resolution and recorded signal intensity. The diffraction-limited lateral resolution in the RES1 mode corresponds to an aperture diameter of approximately M · λ / (2 NA) in the plane of the DMD array 20, where λ is the central wavelength, NA the numerical aperture of the objective lens and M the combined magnification of the objective lens and the relay optics between the modulator elements 21, 22 and the object 1.
A recent insight of the inventors was to work out what happens in PAM operation with small aperture sizes, i.e. with apertures of a few DMD elements (1 × 1, 2 × 2, 3 × 3) corresponding to a size smaller than the Airy disk.
The calibration process (see Fig. 2) comprises a Single Aperture Mapping (SAM) of the DMD modulator elements 21, 22 to the camera pixels 33 and vice versa. In the PAM 100, a "true" image of the fluorescence originating from the object 1 is given by the distribution of the fluorescence striking the DMD array 20 and its correspondence to the "on" (21) and "off" (22) mirror elements. To some extent, the cameras 31, 32 are merely recording devices, and are ideally used to reconstruct the desired DMD profile. Thus, the calibration process provides a method to systematically and unambiguously map the camera information back to the DMD array source in a manner that ensures consistency of the constituent pairs of c and nc contributions at the level of a single modulator element. The same procedure applies to the conjugate and non-conjugate channels.
In the new SAM registration method, a series of calibration patterns consisting of individual modulator elements 21 ("on" mirrors, focusing light to the focal plane) is generated, organized in a regular lattice at a certain pitch (step S1). A hexagonal arrangement is preferred, wherein each position is equidistant from its 6 neighbouring positions (Fig. 2). Alternatively, other lattice geometries are possible. The DMD array 20 is illuminated from the front (e.g. from a microscope bright-field light source (not shown in Fig. 1) operating in Köhler transmission mode) so that the "on" pixels of the pattern form an image in the c camera 32 (in this case, the nc signals are irrelevant). To obtain the corresponding information for the nc channel, the complementary pattern is used and the image is recorded with the nc camera 31. The process is repeated for the sequence of approximately 80 to 200 bit planes required for full coverage (step S2). Thus, a pitch of typically 10 individual DMD elements 21, 22 requires 100 bit-plane images, each having approximately 15000 "on" elements, globally shifted by DMD increments of a single element in x, y over the sequence.
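As an illustration of steps S1 and S2, the following sketch (Python; not from the patent) generates such a sequence of globally shifted binary bit planes; for simplicity it uses a square lattice rather than the preferred hexagonal one, and the array size, pitch and random seed are illustrative assumptions.

import numpy as np

def calibration_bitplanes(rows=1080, cols=1920, pitch=10, seed=0):
    """Sketch of steps S1-S2: generate the pitch**2 binary bit planes that
    together switch every DMD element "on" exactly once, each plane being a
    regular lattice globally shifted by single-element increments in x and y.
    The plane order is randomized to keep successive "on" sites far apart."""
    shifts = [(dy, dx) for dy in range(pitch) for dx in range(pitch)]
    rng = np.random.default_rng(seed)
    rng.shuffle(shifts)                       # randomized bit-plane order
    planes = []
    for dy, dx in shifts:
        plane = np.zeros((rows, cols), dtype=bool)
        plane[dy::pitch, dx::pitch] = True    # regular lattice of "on" mirrors
        planes.append(plane)
    return planes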
The order of the bit planes thus defined is typically randomized in order to minimize temporal perturbations (e.g., temporal losses) of neighbouring sites. The recorded points in the camera image are sufficiently separated (without overlap) so that they can be segmented unambiguously. A binary mask and the intensity distribution over the camera pixels (approximately 20) containing the overall signal of a given point are determined by segmentation (step S3). The total intensity is also determined (step S3), and the intensity-weighted centroid location of each point is calculated (step S4). Subsequently, an inverse map of each centroid position to the DMD element from which the signal originated is established, and calibration data representing this mapping information are calculated (step S5). This can be done using standard software tools, such as the Mathematica software. The calibration data include assignment tags for the camera pixels and/or modulator elements, and mapping vector data that cross-reference camera pixels and modulator elements.
Fig. 3 shows an example of single aperture mapping. The top image (Fig. 3A) is recorded by the c camera 32 for a complete array of single apertures (95 rows and 157 columns, 14915 points per bit plane). The binary mask (Fig. 3B) depicts the segmented points from the top image; about 20 camera pixels show a finite value above background. As long as the PAM optics are not readjusted, the grey-value distribution of one such point is distinct, reproducible and stable, as shown in Fig. 3C. The calculated centroid positions (Fig. 3D) correspond to the array depicted in the binary mask.
This process has many advantages: (1) the total intensity in the "spot-like" recorded spot can be mapped to a single known location in the DMD array 20; (2) the camera need only have a sufficiently large resolution and format to allow accurate (and stable) segmentation of the calibration (and subsequent sampling) points. High QE, low noise and field uniformity are other desirable characteristics. Clean and fairly uniform focusing is important, but relative rotation and translation are not; the two cameras may even be different, since both are mapped back to the same DMD modulator elements; (3) the total calibration intensity allows very accurate shading corrections to be calculated for subsequent use; (4) the c and nc camera images are mapped to the same source array 20, ensuring absolute registration of the c and nc distributions in DMD space; (5) when the RES1 mode is used to superimpose all the bit-plane signals in a single exposure and readout, the registration process remains valid, since the overlapping intensity distribution patterns add up to form a linear equation for each camera pixel. In these equations, the variables are the DMD intensities of interest and the coefficients are known from the calibration. The matrix of equations is stored (for recall during operation) and the system is solved separately for each pattern of recorded intensities, i.e. for any c and nc images generated individually or, e.g., in a z-scan sequence.
An alternative, less accurate but useful simplified approach to SAM involves inverse mapping of the intensity at the centroid position and/or of the average of a small sub-matrix of pixel values around each centroid (e.g., the 3 × 3 neighbourhood). This alternative SAM registration process is very fast and produces results exceeding the resolution and sectioning capabilities obtained experimentally so far with the traditional linear or non-linear geometric dewarping methods available, e.g., in LabVIEW Vision.
Fig. 4 presents a comparison of registration methods for images of α-tubulin staining in 3T3 Balb/c mouse fibroblast cells, immunostained with Alexa488 GAMIG. Figs. 4A to 4C show registration by geometric dewarping; Figs. 4D to 4F show registration by SAM. PAM sequence: 5_50 scan (5 × 5 apertures randomly distributed at 50% duty cycle). The optical sections are greatly improved due to the improved registration; both processes use the same acquired data.
RES2 mode - conjugate image extraction procedure
In the RES2 mode, the PAM is configured for a process known in the literature as "structured illumination (SIM)" or "pixel repositioning" for increasing the lateral and/or axial resolution by up to a factor of 2 by enhancing higher spatial frequencies. Advantageously, this results in a lateral resolution extending into the 100 to 200 nm range, similar to the well-known "Airy" detector of the confocal microscope LSM800 (manufacturer Zeiss). The underlying concept, as described above, is to utilize a large number of off-axis sub-Airy spot apertures (detectors) in a manner that enhances higher spatial frequencies but avoids the unacceptable signal loss caused by very small pinholes in a point-scanning system.
In the PAM, the physical aperture (pinhole) of a conventional confocal microscope is replaced by at least one modulator element of the spatial light modulator device (DMD array). Thus, an "aperture" may consist of a single element or of a combination of elements, for example in a square or pseudo-circular configuration or in the form of a line of adjustable thickness. In a conventional confocal microscope, a "small" pinhole provides improved resolution due to an increase in spatial bandwidth, represented by the 3D point spread function (psf), the image of a point source, or more directly by its Fourier transform, the 3D optical transfer function (otf), in which the "missing cone" of the wide-field microscope is filled. However, since the pinhole is "shared" by excitation and emission, the smaller its size, the lower the captured emission signal strength and, correspondingly, the lower the signal-to-noise ratio (the pinhole physically rejects the emission light arriving outside the pinhole).
In contrast, in the PAM, the emission returning from the object in the microscope is registered by the conjugate camera (via the single "on" modulator element defining an aperture) and also by the array of "off" modulator elements around that single modulator element, which direct light to the non-conjugate camera. That is, all detection light B from the conjugate position is collected, and the illumination aperture size determines the fraction going to one or the other camera 31, 32 (see Fig. 1). For very small apertures, such as a single element, most of the in-focus (if) and out-of-focus (of) signals go to the non-conjugate camera 31.
The single aperture calibration method of RES1 mode is used to define the distribution of camera pixels that "receive" conjugated and non-conjugated signals from a single modulator element aperture. In calibration, a set of complementary illumination patterns is used to determine the distribution (binary mask) in the two channels (cameras) for each individual micromirror position. The c and nc "images" defined above (fig. 3) are processed in parallel as described below (see also fig. 5 and scheme 1 below for more details).
The binary mask created from the calibration (Fig. 5) is dilated by one pixel to define a ring of pixels (Fig. 5) surrounding the response region determined in the calibration. In the c channel (camera 32), the intensity in the "ring" mask 3 corresponds only to the standard background (electronic bias + offset) of the camera image, since by definition the in-focus (if) signal and any associated out-of-focus (of) signals corresponding to a given aperture are confined to the "core" pixels defined by the original binary mask. The average background per pixel (b) is calculated from the "ring" pixels of the mask 3 and used to calculate the total background contribution (b · kernel pixel count). The subtraction produces the RES2-mode c image.
In the case of the nc channel (camera 31), as indicated above, the signal comprises a large portion of the if signal, as well as an of contribution corresponding to a given position in the sample and its conjugate. In this case, the intensity in the ring pixels of the mask 3 (after dilation) contains not only the camera background but also of components that extend beyond the limits of the calibration mask, providing a means to correct the kernel response by subtraction.
The net nc signal (and the total image formed by all apertures processed for each illumination bit plane) contains the desired if information with the highest achievable resolution (a factor of 2 compared to wide field) and the degree of sectioning provided by the small aperture, and defines the RES2 mode (100 to 200 nm lateral resolution).
Since most of the desired signal is contained in the nc channel (in our current instrument the relationship between the intensities of c and nc is about 1/9), PAM100 (fig. 1) can be operated in this mode using only a single nc camera 31. However, the c and nc images collected using nc camera 31 and c camera 32 may also be superimposed to produce a total in-focus (if) emission, albeit at the cost of additional noise. A simplified algebraic description of these relationships is given in fig. 5 and scheme 1, and an example of RES2 mode imaging is given in fig. 6.
It is also worth noting that the intensity in the final image (in DMD array space) is much higher than in conventional camera images, since the process integrates the entire response (which is dispersed in the recorded image) into a single value corresponding to the originating DMD element, placed at its coordinates in the final image. As an additional benefit, these methods can be performed using an excitation light source comprising LEDs instead of a laser source, generally providing better field uniformity and avoiding artifacts due to residual (despite the use of diffractive elements) spatial and temporal coherence in the case of laser illumination.
In practical testing of the RES2 mode, it has been found that a few milliseconds per bit plane exposure time is sufficient to generate useful images. By minimizing the limitations of the camera characteristics (e.g. readout speed and noise, delay in rolling shutter mode, use of ROI), high quality recordings can be obtained from living cells substantially at > 1 fps.
The processing of conjugate and non-conjugate single-aperture images in RES2 is described in more detail below with reference to fig. 5. A single DMD modulator element is selected as the excitation source, producing the emitted (schematic) spots in the c and nc camera images (as shown in figs. 5A and 5B). The geometry of these two spots is not relevant; for simplicity, it may be assumed that the camera gains are matched. The white pixels (numbering n_ij,c and n_ij,nc) correspond to the respective masks produced by the segmentation (step S3). The centre point in the c image of fig. 5A is the calculated position of the intensity-weighted centroid (step S4). The white pixels of the c image contain if, of and background contributions. The background value b_ij,c is estimated locally and with high accuracy by enlarging mask 3, calculating the average value v_ij,c of the difference mask (outer ring pixels) and multiplying by n_ij,c (b_ij,c = v_ij,c · n_ij,c); by definition, no emission signal is present in the ring. If the global background (dark state) signal is pre-subtracted, this value is small or negligible.
The of contribution can be estimated from the nc spot in fig. 5B, which represents a unique capability of the PAM of the present invention. The nc spot shows a central (shown as black) pixel corresponding to the position of the single selected modulator element on the opposite side, and therefore carrying only a background value (experimental observation). In this case, the enlarged mask 3 contains an of contribution and a background contribution that are considered uniform in density within the mask (v_ij,nc = of_ij,nc + b_ij,nc). However, if there is some overlap of the of contributions of adjacent spots due to the spot spacing used, v_ij,nc is scaled by a factor β ≤ 1 (empirically ~0.8) calculated from the normalized distribution in the mask (and constrained by the non-negativity of the of_ij values). The corresponding of correction of the c signal is given by γ·v_ij,nc·np_ij,c; experience shows that γ < β, meaning that the very small aperture provides very good sectioning capability, as also indicated by the associated v values.
The following scheme gives the definition of the PAM signals under the different resolution schemes. s_ij,c and s_ij,nc are the recorded c and nc signals corresponding to the DMD modulator element (aperture) with index ij in the 2D DMD array 20. Each signal contains in-focus (if_ij,c, if_ij,nc), out-of-focus (of_ij,c, of_ij,nc) and background (b_ij,c, b_ij,nc) contributions. The fractional distribution of the in-focus signal between the c and nc images is given by α; α is considered constant for any given DMD mode and optical configuration, varies greatly with aperture size, and is used to define the resolution ranges of the RES1, RES2 and RES3 modes. For the RES1 mode, the aperture is considered large enough that the entire in-focus signal if_ij is confined to c, so α = 1, and the net if_ij is the desired quantity. In RES2, the excitation (and thus "receiving") aperture is significantly smaller than the diffraction-limited Airy disk, i.e. α < 1, so that if_ij now resides predominantly in nc, where it can exceed 90%. In RES3, the excitation psf is additionally "diluted" by the loss of excited states due to stimulated emission or photoconversion.
General relations
s_ij,c = if_ij,c + of_ij,c + b_ij,c
s_ij,nc = if_ij,nc + of_ij,nc + b_ij,nc
if_ij,c = α·if_ij ;  if_ij,nc = (1 − α)·if_ij
RES1 mode
α = 1, if_ij,nc = 0
if_ij = if_ij,c = s_ij,c − of_ij,c − b_ij,c
(The RES1-mode expressions appear only as equation images in the original; the relations above follow from the general relations with α = 1, i.e. the entire in-focus signal confined to the conjugate channel.)
RES2, RES3 mode
α < 1 ;  of_ij,c = of_ij,nc
if_ij = if_ij,nc + if_ij,c = (s_ij,nc − β·v_ij,nc·np_ij,nc) + (s_ij,c − (v_ij,c + γ·v_ij,nc)·np_ij,c)
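As a worked sketch of the RES2/RES3 relation above, evaluated for a single aperture ij (the variable names and the default β and γ values are illustrative; the patent only states β ≤ 1 with an empirical value of about 0.8 and γ < β):

```python
def res2_net_if(s_c, s_nc, v_c, v_nc, np_c, np_nc, beta=0.8, gamma=0.5):
    """Net in-focus signal for one aperture ij in RES2 mode.

    s_c, s_nc   : summed core signals of the conjugate / non-conjugate spot
    v_c, v_nc   : mean per-pixel ring values in the conjugate / non-conjugate channel
    np_c, np_nc : number of core pixels in each channel's mask
    beta, gamma : empirical correction factors (beta <= 1, gamma < beta)
    """
    if_nc = s_nc - beta * v_nc * np_nc          # nc core minus its of + background estimate
    if_c = s_c - (v_c + gamma * v_nc) * np_c    # c core minus background and scaled of term
    # Clamp negative values to zero (illustrative non-negativity constraint).
    return max(if_nc, 0.0) + max(if_c, 0.0)
```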
Fig. 6 shows an example of RES2-mode imaging. Fig. 6A shows the nc image of the same cells as in fig. 4. The resolution of fine details is much higher; fibres down to the width of a single DMD element (~(100 nm)²) are visible. The sectioning is also excellent, revealing structure in areas that are blurred in the RES1 image of fig. 4. Fig. 6B shows the nc image of actin-filament-stained cells.
RES3 MODE: SUPER-RESOLUTION FLUORESCENCE MICROSCOPY
In contrast, the "psf dilution" method based on excited state losses (e.g., STED) and in particular the molecular light conversion (e.g., RESO L FT) protocol is well suited to the SAM method applied in a manner suitable to obtain the RES3 mode of lateral resolution. the PAM module allows for bi-directional illumination (see fig. 1 and its description in, for example, EP 2369401 a 1.) thus, by using the same mode to expose the sample from one side to the activating (and readout) light and from the opposite side to the loss (or light conversion) light, the creation of a light source that automatically and accurately achieves the loss or light conversion illumination (corresponding to the "circular ring" in STED) can be used simultaneously, or can be used in accordance with a specific protocol and timely replacement compared to the conventional protocol probe and loss in point scanning systems, the conventional optical processing of the RESO 7 and light conversion illumination (equivalent to the "circular ring" in STED system), the entire optical light conversion system should be used to measure the protein properties using a simple pulse excitation light source L, additionally the PAM module is provided as an example that the fluorescence light conversion is done using a simple optical pulse 355635.
Implementation of the RES1 to RES3 modes using the PAM 100
In the following, the implementation of the above PAM modes, preferably by a software program, is described in more detail.
Regarding the RES1 mode, step S1 of the calibration procedure, which generates a calibration matrix of single "points" (selected modulator elements), comprises the parameter definition. The following are provided: the origin parameters (x, y offset from the global origin, e.g. the upper left corner) of the active elements defined in the DMD array matrix; the spacing ("space") between apertures of adjacent elements in the 2D modulator DMD array matrix; the number of rows nr and the number of columns nc in the excitation matrix; and the number of bit planes nbp in the entire sequence. Temporal overlap is minimized by randomizing the bit-plane sequence so that consecutive bit planes do not overlap within x or y shifts < n (typically 2), where na is the number of single-element apertures in each bit plane. For example: space = 10; nbp = 100; nr = 95; nc = 17; na = 14917; total number of calibration points nbp · na = 1491700.
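A possible realization of such a randomized calibration sequence is sketched below; the mapping of the excitation matrix onto the DMD (plane size nr·space × nc·space) and the simplified shift test for consecutive planes are assumptions, not the patent's exact criterion:

```python
import numpy as np

def calibration_bitplanes(nr, nc, space, nbp, seed=0):
    """Generate nbp bit planes, each a sparse pattern of single-element apertures
    with the given spacing, ordered so that consecutive planes are not shifted
    versions of each other by less than ~2 elements (greedy reordering)."""
    rng = np.random.default_rng(seed)
    phases = [(dy, dx) for dy in range(space) for dx in range(space)]
    rng.shuffle(phases)

    ordered = [phases.pop(0)]
    while phases and len(ordered) < nbp:
        for i, p in enumerate(phases):
            last = ordered[-1]
            if abs(p[0] - last[0]) >= 2 or abs(p[1] - last[1]) >= 2:
                ordered.append(phases.pop(i))
                break
        else:
            ordered.append(phases.pop(0))  # fall back if no candidate satisfies the test

    planes = []
    for dy, dx in ordered:
        plane = np.zeros((nr * space, nc * space), dtype=bool)
        plane[dy::space, dx::space] = True  # one aperture per (space x space) cell
        planes.append(plane)
    return planes
```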
Step S2 of the calibration procedure, which comprises obtaining the calibration response matrices (conjugate, non-conjugate), includes a PAM operation using a pattern sequence (e.g. consisting of a matrix of single-element apertures). Front illumination of the modulator is provided, for example from a coupled microscope operating in transmission mode, with Köhler adjustment to establish field uniformity. Images corresponding to each bit plane in the sequence are acquired, and real-time focus adjustment is performed to minimize the spot size in the detector image. Acquisition of the images corresponding to the selected dot patterns proceeds sequentially. Preferably, corresponding background and shading images are collected for correction purposes. The operation is performed with a given pattern sequence for the conjugate channel (recorded from the same side as the illumination) and with the complementary patterns for the non-conjugate channel (recorded from the side opposite to the illumination). Subsequently, an averaging step may be performed over repetitions of the calibration data acquired in the calibration phase.
Steps S3 to S5 comprise processing the calibration images of each bit plane to obtain an ordered set of vector response parameters (in rows and columns of the modulator matrix). First, the bit planes are reordered according to the known randomized sequence. Second, a segmentation (steps S3, S4) is performed to identify and label the response sub-images ("spots"); its parameters are thresholds and dilation and erosion parameters, and the spots can be ordered according to the degree of distortion (curvature and displacement of rows and columns). An output is then generated comprising a 2D mask and vectors in rows and columns. The output preferably also comprises: a 2D mask of the pixel positions belonging to a given spot; an alpha (α) parameter (to be used in the RES2 and RES3 modes) representing the relative intensity distribution over the response pixels and the response matrix of the linear equations of the combined plane image; the calculated coordinates of the centroid of a given spot; the total intensity of a given spot; and the total area of a given spot (in pixels). Third, the calibration output is stored, ordered by the rows and columns of the modulator (excitation) matrix, for use in the processing of the excitation data of the corresponding bit planes (step S5).
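The segmentation and centroid step can be sketched as follows; the threshold, the morphological opening and the minimum spot size are placeholder parameters standing in for the thresholds and dilation/erosion parameters mentioned above:

```python
import numpy as np
from scipy import ndimage

def segment_calibration_spots(calib_img, threshold, min_pixels=2):
    """Threshold one calibration bit-plane image, label the response spots and
    return for each spot its binary mask, intensity-weighted centroid,
    total intensity and area in pixels."""
    binary = calib_img > threshold
    binary = ndimage.binary_opening(binary)   # erosion followed by dilation to remove noise
    labels, n_spots = ndimage.label(binary)

    spots = []
    for lbl in range(1, n_spots + 1):
        mask = labels == lbl
        if mask.sum() < min_pixels:
            continue
        # Intensity-weighted centroid of the spot (zero outside the mask).
        cy, cx = ndimage.center_of_mass(calib_img * mask)
        spots.append({
            "mask": mask,
            "centroid": (cy, cx),
            "intensity": float(calib_img[mask].sum()),
            "area_px": int(mask.sum()),
        })
    return spots
```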
In practice, the calibration method works well even though real-time acquisition and display require more than 10^6 linear equations to be solved in 10 to 100 ms. High-level software for sparse matrices (such as those arising here) that exploits multi-core and GPU architectures is readily available (e.g. SuiteSparse).
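For the alpha-parameter route, the per-image linear system can be assembled and solved with standard sparse tools, as in this sketch; the matrix layout and the choice of LSQR as solver are illustrative, with SuiteSparse-backed factorizations being an equally valid option:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

def solve_aperture_intensities(rows, cols, alphas, camera_pixels, n_apertures):
    """Solve A x = b, where A[p, a] is the calibrated fractional response (alpha)
    of camera pixel p to aperture a, b is the recorded pixel vector and x the
    per-aperture emission intensities to be placed back into DMD coordinates."""
    A = csr_matrix((alphas, (rows, cols)),
                   shape=(camera_pixels.size, n_apertures))
    result = lsqr(A, camera_pixels.ravel(), atol=1e-8, btol=1e-8)
    # Enforce non-negative intensities (illustrative post-processing).
    return np.clip(result[0], 0.0, None)
```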
The software implementation of the RES2 and RES3 modes includes the following steps. First, the acquisition of the response matrices (conjugate, non-conjugate) is performed, including parameter selection and, for the RES3 mode, additionally the selection of the pattern sequence (superpixel definition) for photoconversion and readout. In addition, X, Y and Z positioning and spectral (excitation, emission, photoconversion) component selection (spectral channel definition) are performed.
Second, the integrated response matrix (the sum of the bit-plane responses of the single exposures) is reverse-mapped to the modulator element matrix. This registration uses either the centroid-based calibration data (as in the RES1 mode) together with a local sub-image processing algorithm or, alternatively, the alpha-parameter-based calibration, in which the solution of a complete or local alpha equation matrix by a sparse algorithm generates the distribution of the individual responses in DMD space (performed separately for the conjugate (c) and non-conjugate (nc) image data).
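A minimal sketch of the centroid/mask-based back-mapping, assuming the calibration supplies one boolean camera-pixel mask per DMD element (the dictionary layout is an illustrative assumption):

```python
import numpy as np

def back_map_to_dmd(camera_img, calib_masks, dmd_shape):
    """Integrate each aperture's camera response and place the value at the
    originating DMD element coordinate, producing an image in DMD space.

    calib_masks : dict mapping (row, col) of a DMD element to its boolean
                  camera-pixel mask obtained from the calibration
    """
    dmd_img = np.zeros(dmd_shape, dtype=float)
    for (r, c), mask in calib_masks.items():
        # The whole dispersed response collapses into a single DMD-space value.
        dmd_img[r, c] = camera_img[mask].sum()
    return dmd_img
```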
Third, the acquired images, e.g. with a sparse pattern of small excitation points, are evaluated, including the calculation of optically sectioned images from the processed c and nc data. For the c image, the centroid calibration data and a local sub-image processing algorithm are used to establish the distribution of the response signal in the camera domain and its projection to the DMD domain defined by the excitation pattern. For the nc image, the procedure is the same as for c, but the out-of-focus contribution is additionally evaluated systematically from the signal immediately surrounding the calibration response region and subtracted, appropriately scaled, from the signal of the calibration response region. Finally, image combination is performed, whereby the optically sectioned RES2 image is obtained either from the processed nc image alone (exploiting the dominant contribution of the very small excitation point) or from a scaled sum of the processed c and nc images.
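The final combination reduces to the nc image alone or a scaled sum of the two back-mapped images; a minimal sketch, where the weighting factor is an assumption since the text only specifies a scaled sum:

```python
def combine_optical_slice(osi_nc, osi_c=None, c_weight=1.0):
    """RES2 optical slice: the nc image alone, or the scaled sum of the
    processed nc and c images when both cameras are used."""
    if osi_c is None:
        return osi_nc
    return osi_nc + c_weight * osi_c
```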
The features of the invention disclosed in the above description, the drawings and the claims are equally important for realizing the invention in different embodiments of the invention, individually or in combination or sub-combination.

Claims (33)

1. An optical confocal imaging method performed using a Programmable Array Microscope (PAM) (100), the programmable array microscope (100) having a light source arrangement (10), a spatial light modulator arrangement (20) having a plurality of reflective modulator elements, a programmable array microscope objective and a camera arrangement (30), wherein the spatial light modulator arrangement (20) is configured such that a first set of modulator elements (21) can be selected for directing excitation light to conjugate positions of an object to be investigated and for directing detection light originating from these positions to the camera arrangement (30), and a second set of modulator elements (22) can be selected for directing detection light from non-conjugate positions of the object to the camera arrangement (30), the method comprising the steps of:
directing excitation light from a light source device (10) via a first set of modulator elements to an object to be investigated, wherein a spatial light modulator device (20) is controlled such that a predetermined sequence of patterns of illumination points is focused to conjugate positions of the object, wherein each illumination point is created by at least one single modulator element defining a current programmable array microscope illumination aperture,
collecting image data of a conjugate image Ic based on collecting detection light from conjugate locations of the object for each pattern of the programmable array microscope illumination aperture,
collecting image data of a non-conjugate image Inc with a non-conjugate camera channel of the camera device (30) based on collecting detection light from non-conjugate locations of the object via the second set of modulator elements (22) for each pattern of the programmable array microscope illumination aperture,
creating an Optical Slice Image (OSI) of the object based on the conjugate image Ic and the non-conjugate image Inc,
characterized in that the step of collecting image data of the conjugate image Ic comprises:
with a non-conjugate camera channel of the camera device (30), for each pattern of the programmable array microscope illumination aperture, partial detection light is collected from conjugate positions of the object via modulator elements of the second set of modulator elements (22) surrounding the current programmable array microscope illumination aperture.
2. The imaging method according to claim 1,
the spatial light modulator device (20) is controlled such that the current programmable array microscope illumination aperture has a diameter approximately equal to or smaller than M × λ/2NA, where λ is the central wavelength of the excitation light, NA is the numerical aperture of the objective lens, and M is the combined magnification of the objective lens and the relay lens between the modulator aperture and the object under investigation.
3. The imaging method according to any one of the preceding claims,
each of the current programmable array microscope illumination apertures has a size of less than 100 μm.
4. The imaging method according to any one of the preceding claims,
each of the programmable array microscope illumination apertures is created by a single modulator element (21).
5. The imaging method according to any one of the preceding claims,
for each programmable array microscope illumination aperture, the respective modulator element defines a non-conjugate camera pixel mask (3) surrounding a centroid of camera signals of a non-conjugate camera channel of a camera arrangement (30) corresponding to the programmable array microscope illumination aperture,
subjecting each non-conjugate camera pixel mask (3) to a magnification,
obtaining an estimate of the background non-conjugate signal from the enlarged non-conjugate camera pixel mask (3) for use as a correction of the image data of the non-conjugate image (Inc) and the conjugate image (Ic),
forming an Optical Slice Image component (OSInc) corresponding to the non-conjugate camera channel of the camera device (30).
6. The imaging method according to any one of the preceding claims, wherein forming the conjugate image Ic further comprises the steps of:
forming a partially conjugate image Ic by collecting detection light from conjugate and non-conjugate positions of the object via the first set of modulator elements (21) for each pattern of the programmable array microscope illumination aperture using a conjugate camera channel of the camera device (30),
extracting the partially conjugate image Ic from images collected with the conjugate camera channel of the camera arrangement (30),
correcting the partially conjugate image Ic by subtracting the estimate of the non-conjugate contribution obtained from Inc,
forming an Optical Slice Image component (OSIc) corresponding to the Ic channel,
forming an overall Optical Slice Image (OSI) by combining the non-conjugate and conjugate contributions (OSI = OSInc + OSIc).
7. The imaging method according to claim 6,
for each programmable array microscope illumination aperture, the respective modulator element defines a conjugate camera pixel mask (3) surrounding a centroid of a camera signal of a conjugate camera channel of a camera arrangement (30) corresponding to the programmable array microscope illumination aperture,
subjecting the conjugate camera pixel mask (3) to magnification,
obtaining an estimate of the background non-conjugate signal from the enlarged conjugate camera pixel mask (3) for use as a correction of the conjugate image (Ic) and the non-conjugate image (Inc) to form the optical slice image.
8. The imaging method according to any one of the preceding claims, further comprising a calibration procedure with the steps of:
the modulator element is illuminated using a collimated light source arrangement (10),
a sequence of calibration patterns is created using the modulator elements,
recording a calibration image of the calibration pattern using a camera device (30),
the recorded calibration image is processed for creating calibration data assigning each camera pixel of the camera arrangement (30) to one of the modulator elements.
9. The imaging method of claim 8, comprising at least one of the following features:
the calibration pattern comprises a sequence of regular, preferably hexagonal, matrices of light spots, each generated by at least one single modulator element, the light spots having non-overlapping camera responses,
the number of calibration patterns is chosen such that all modulator elements are used to record calibration images and create calibration data,
the sequence of calibration patterns is randomized such that the spacing between modulator elements of successive patterns is maximized.
10. The imaging method according to any one of claims 8 to 9,
camera pixels of the camera device (30) that are responsive to light received from a single modulator element provide different, unique, stable distributions of relative camera signal intensities and their coordinates in the camera pixel matrix that are mapped to the respective modulator element using a calibration process.
11. The imaging method according to any one of claims 8 to 10,
all collected images are accumulated and the camera signals are mapped back to their corresponding originating modulator elements, wherein,
the centroid of the camera signal defines the partial sub-images in which the intensities are combined by a predetermined algorithm to produce signal intensities that can be assigned to the respective originating modulator image elements.
12. The imaging method according to any one of claims 8 to 10,
all collected images are accumulated and the camera signals are mapped back to their corresponding originating modulator elements, wherein,
each signal at each position in the image produced by overlapping the camera responses for the entire sequence of patterns is represented as a linear equation with coefficients known from the calibration process,
the respective transmit signals that strike the respective modulator elements are obtained by solving a system of linear equations that describe the entire image.
13. The imaging method according to any one of claims 8 to 12,
the first set of modulator elements (21) is an array of a small number (as low as a single element) of elements with non-overlapping responses, and the camera signals of the individual modulator elements constitute distinct, unique, stable distributions of the relevant signal intensities, with coordinates in the camera pixel matrix and the modulator element matrix defined by the calibration process.
14. The imaging method of claim 13, further comprising:
the same pattern is used to apply one or more light sources from opposite sides for simultaneous or time-shifted excitation.
15. The imaging method according to any one of claims 8 to 12,
the first set of modulator elements consists of a 2D linear array of a small number (as low as a single element) of elements, and the camera signals of the individual modulator elements constitute distinct, unique, stable distributions of the relevant signal intensities, with coordinates in the camera pixel matrix and the modulator element matrix defined by the calibration procedure.
16. The imaging method according to any one of the preceding claims,
the light source arrangement (10) comprises a first light source arranged for directing excitation light to a conjugate position of the object and a second light source arranged for directing excitation light to a non-conjugate position of the object,
the second light source is controlled for generating excitation light such that the excitation generated by the first light source is confined to conjugate positions of the object.
17. The imaging method according to claim 16,
the second light source is controlled for generating a lost excited state around a conjugate position of the object.
18. The imaging method according to any one of the preceding claims,
the detection light from the object is delayed emission, e.g. delayed fluorescence and phosphorescence, so that the aperture patterns for excitation and detection may be different and experimentally synchronized.
19. An optical confocal imaging method performed using a Programmable Array Microscope (PAM) having a light source arrangement (10), a spatial light modulator arrangement (20) having a plurality of reflective modulator elements, a programmable array microscope objective and a camera arrangement (30), wherein the spatial light modulator arrangement (20) is configured such that a first set of modulator elements (21) can be selected for directing excitation light to conjugate positions of an object to be investigated and for directing detection light originating from these positions to the camera arrangement (30), and a second set of modulator elements (22) can be selected for directing detection light from non-conjugate positions of the object to the camera arrangement (30), the method comprising the steps of:
directing excitation light from a light source device (10) via a first set of modulator elements (21) to an object to be investigated, wherein a spatial light modulator device (20) is controlled such that a predetermined sequence of patterns of illumination points is focused to conjugate positions of the object, wherein each illumination point is created by at least one single modulator element defining a current programmable array microscope illumination aperture,
forming a conjugate image Ic by collecting detection light from conjugate positions of the object via the first set of modulator elements (21) for each pattern of the programmable array microscope illumination aperture using a conjugate camera channel of the camera device (30),
forming a non-conjugate image Inc by collecting detection light from non-conjugate locations of the object via the second set of modulator elements (22) for each pattern of the programmable array microscope illumination aperture using a non-conjugate camera channel of the camera device (30),
creating an Optical Slice Image (OSI) of the object based on the conjugate image Ic and the non-conjugate image Inc, characterized in that,
the conjugate image (Ic) and the non-conjugate image (Inc) are registered by using calibration data, the calibration data being obtained by a calibration procedure comprising mapping the positions of the modulator elements to camera pixel positions.
20. Programmable Array Microscope (PAM) with a light source arrangement (10), a spatial light modulator arrangement (20) with a plurality of reflective modulator elements, a programmable array microscope objective, a camera arrangement (30) and a control arrangement (40), wherein the spatial light modulator arrangement (20) is configured such that a first set of modulator elements (21) can be selected for directing excitation light to conjugate positions of an object to be investigated and for directing detection light originating from these positions to the camera arrangement (30) and a second set of modulator elements (22) can be selected for directing detection light from non-conjugate positions of the object to the camera arrangement (30), wherein,
the light source device (10) is arranged for directing excitation light to the object to be investigated via a first set of modulator elements (21), wherein the control device (40) is adapted for controlling the spatial light modulator device (20) such that a predetermined sequence of patterns of illumination points is focused to conjugate positions of the object, wherein each illumination point is created by at least one single modulator element defining the current programmable array microscope illumination aperture,
the camera arrangement (30) is arranged for collecting image data of a conjugate image Ic based on the detection light from the conjugate positions of the object for each pattern of the programmable array microscope illumination aperture,
the camera arrangement (30) comprises a non-conjugate camera channel configured for collecting image data of a non-conjugate image Inc based on detection light from non-conjugate positions of the object via the second set of modulator elements (22) for each pattern of the programmable array microscope illumination aperture,
the control means (40) are adapted to create an Optical Slice Image (OSI) of the object based on the conjugate image Ic and the non-conjugate image Inc,
characterized in that,
the non-conjugate camera channel of the camera arrangement (30) is arranged for collecting, for each pattern of the programmable array microscope illumination aperture, partial detection light from conjugate positions of the object via modulator elements of the second set of modulator elements surrounding the current programmable array microscope illumination aperture.
21. The programmable array microscope of claim 20,
the control means (40) are adapted to control the spatial light modulator means (20) such that the current programmable array microscope illumination aperture has a diameter approximately equal to or smaller than M × λ/2NA, where λ is the central wavelength of the excitation light, NA is the numerical aperture of the objective lens, and M is the combined magnification of the objective lens and the relay lens between the modulator aperture and the object to be investigated.
22. The programmable array microscope of any one of claims 20 to 21,
each of the current programmable array microscope illumination apertures has a size of less than 100 μm.
23. The programmable array microscope of any one of claims 20 to 22,
each of the programmable array microscope illumination apertures is created by a single modulator element.
24. The programmable array microscope of any one of claims 20 to 23,
for each programmable array microscope illumination aperture, the respective modulator element of the programmable array microscope illumination aperture defines a non-conjugate camera pixel mask (3) surrounding a centroid of a camera signal of a non-conjugate camera channel of a camera arrangement (30) corresponding to the programmable array microscope illumination aperture,
the control means (40) are adapted to subject each non-conjugated camera pixel mask (3) to a magnification,
the control means (40) are adapted to obtain an estimate of the background non-conjugate signal from the enlarged non-conjugate camera pixel mask (3) for use as a correction of the conjugate image (Ic) and the non-conjugate image (Inc).
25. The programmable array microscope of any of the preceding claims,
the camera arrangement (30) comprises a conjugate camera channel configured for forming a partially conjugate image Ic by collecting detection light from conjugate and non-conjugate positions of the object via the first set of modulator elements (21) for each pattern of the programmable array microscope illumination aperture,
the control means (40) are adapted to extract the partially conjugate image Ic from images collected with the conjugate camera channel of the camera means (30),
the control means (40) are adapted to form the conjugate image Ic by correcting the partially conjugate image Ic with the non-conjugate contribution extracted from the non-conjugate image Inc.
26. The programmable array microscope of claim 25,
for each programmable array microscope illumination aperture, the respective modulator element of the programmable array microscope illumination aperture defines a conjugate camera pixel mask (3) surrounding a centroid of a camera signal of a conjugate camera channel of a camera arrangement (30) corresponding to the programmable array microscope illumination aperture,
the control means (40) are adapted to subject the conjugate camera pixel mask (3) to a magnification,
the control means (40) are adapted to obtain an estimate of the background non-conjugate signal from the enlarged conjugate camera pixel mask (3) for use as a correction of the conjugate image (Ic) and the non-conjugate image (Inc).
27. Programmable array microscope according to one of the preceding claims, wherein the control device (40) is adapted to perform a calibration procedure according to the following steps: illuminating the modulator element with a calibration light source arrangement (10); creating a sequence of calibration patterns using the modulator elements; recording a calibration image of the calibration pattern using a camera device (30); the recorded calibration image is processed for creating calibration data assigning each camera pixel of the camera arrangement (30) to one of the modulator elements.
28. The programmable array microscope of any of the preceding claims,
the light source arrangement (10) comprises a first light source arranged for directing excitation light to a conjugate position of the object and a second light source arranged for directing excitation light to a non-conjugate position of the object,
the control means (40) are adapted to control the second light source and to generate excitation light such that excitation generated by the first light source is confined to conjugate positions of the object.
29. The programmable array microscope of claim 28,
the control means (40) is adapted to control the second light source and to generate a lossy excited state around a conjugate position of the object.
30. Programmable Array Microscope (PAM) with a light source arrangement (10), a spatial light modulator arrangement (20) with a plurality of reflective modulator elements, a programmable array microscope objective, a camera arrangement (30) and a control arrangement (40), wherein the spatial light modulator arrangement (20) is configured such that a first set of modulator elements (21) can be selected for directing excitation light to conjugate positions of an object to be investigated and for directing detection light originating from these positions to the camera arrangement (30) and a second set of modulator elements (22) can be selected for directing detection light from non-conjugate positions of the object to the camera arrangement (30), wherein,
the light source device (10) is arranged for directing excitation light from the light source device (10) to an object to be investigated via a first set of modulator elements (21), wherein the control device (40) is adapted for controlling the spatial light modulator device (20) such that a predetermined sequence of patterns of illumination points is focused to conjugate positions of the object, wherein each illumination point is created by at least one single modulator element defining a current programmable array microscope illumination aperture,
the camera arrangement (30) has a conjugate camera channel configured for forming a conjugate image Ic by collecting detection light from conjugate positions of the object via the first set of modulator elements (21) for each pattern of the programmable array microscope illumination aperture,
the camera device (30) has a non-conjugate camera channel configured for forming a non-conjugate image Inc by collecting detection light from non-conjugate locations of the object via the second set of modulator elements (22) for each pattern of the programmable array microscope illumination aperture,
the control means (40) are adapted to create an optical slice image of the object based on the conjugate image Ic and the non-conjugate image Inc,
characterized in that,
the control means (40) are adapted to register the conjugate image (Ic) and the non-conjugate image (Inc) by using calibration data, the calibration data being obtained by a calibration procedure comprising mapping the positions of the modulator elements to camera pixel positions.
31. A computer readable medium comprising computer executable instructions to control a programmable array microscope to perform the method of any one of claims 1 to 19.
32. A computer program residing on a computer readable medium, the computer program having program code for performing the method of any of claims 1-19.
33. An apparatus comprising a computer readable storage medium containing program instructions for performing the method of any of claims 1-19.
CN201780097852.9A 2017-12-20 2017-12-20 Method and apparatus for optical confocal imaging using a programmable array microscope Pending CN111512205A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2017/083728 WO2019120502A1 (en) 2017-12-20 2017-12-20 Method and apparatus for optical confocal imaging, using a programmable array microscope

Publications (1)

Publication Number Publication Date
CN111512205A true CN111512205A (en) 2020-08-07

Family

ID=60812078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780097852.9A Pending CN111512205A (en) 2017-12-20 2017-12-20 Method and apparatus for optical confocal imaging using a programmable array microscope

Country Status (5)

Country Link
US (1) US20210003834A1 (en)
EP (1) EP3729161A1 (en)
JP (1) JP7053845B2 (en)
CN (1) CN111512205A (en)
WO (1) WO2019120502A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3751326A1 (en) * 2019-06-14 2020-12-16 Leica Microsystems CMS GmbH Method and microscope for imaging a sample
DE102019008989B3 (en) * 2019-12-21 2021-06-24 Abberior Instruments Gmbh Disturbance correction procedure and laser scanning microscope with disturbance correction

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE236412T1 (en) 1997-10-22 2003-04-15 Max Planck Gesellschaft PROGRAMMABLE SPACIAL LIGHT MODULATED MICROSCOPE AND MICROSCOPY METHOD
EP0916981B1 (en) * 1997-11-17 2004-07-28 Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. Confocal spectroscopy system and method
JP4894161B2 (en) 2005-05-10 2012-03-14 株式会社ニコン Confocal microscope
JP5393406B2 (en) 2009-11-06 2014-01-22 オリンパス株式会社 Pattern projector, scanning confocal microscope, and pattern irradiation method
US9395293B1 (en) * 2015-01-12 2016-07-19 Verily Life Sciences Llc High-throughput hyperspectral imaging with superior resolution and optical sectioning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1244927B1 (en) * 1999-12-17 2005-09-07 Digital Optical Imaging Corporation Methods and apparatus for imaging using a light guide bundle and a spatial light modulator
EP2369401A1 (en) * 2010-03-23 2011-09-28 Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. Optical modulator device and spatio-temporally light modulated imaging system
CN103069806A (en) * 2010-08-12 2013-04-24 伊斯曼柯达公司 Light source modulation for projection
WO2013126762A1 (en) * 2012-02-23 2013-08-29 The United States Of America, As Represented By The Secretary, Department Of Health And Human Services Office Of Technology Transfer, National Institutes Of Health Multi-focal structured illumination microscopy systems and methods
US20150146278A1 (en) * 2013-11-26 2015-05-28 Cairn Research Limited Optical arrangement for digital micromirror device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANTHONY H. B. DE VRIES ET AL.: "Generation 3 programmable array microscope (PAM) for high speed,large format optical sectioning in fluorescence", 《PROCEEDINGS OF SPIE》 *
PAVEL KRIZEK ET AL.: "Flexible structured illumination microscope with a programmable illumination array", 《OPT.EXPRESS》 *
SHIH-HUI CHAO ET AL: "Oxygen concentration measurement with a phosphorescence lifetime based micro-sensor array using a digital light modulation microscope", 《PROCEEDINGS OF SPIE》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113749772A (en) * 2021-04-22 2021-12-07 上海格联医疗科技有限公司 Enhanced near-infrared 4K fluorescence navigation system

Also Published As

Publication number Publication date
WO2019120502A1 (en) 2019-06-27
JP7053845B2 (en) 2022-04-12
US20210003834A1 (en) 2021-01-07
JP2021509181A (en) 2021-03-18
EP3729161A1 (en) 2020-10-28

Similar Documents

Publication Publication Date Title
US11604342B2 (en) Microscopy devices, methods and systems
JP6059190B2 (en) Microscopy and microscope with improved resolution
JP6706263B2 (en) Imaging method and system for obtaining super-resolution images of an object
JP2018517171A (en) Signal evaluation of fluorescence scanning microscopy using a confocal laser scanning microscope.
JP7252190B2 (en) System for generating enhanced depth-of-field synthetic 2D images of biological specimens
JP6708667B2 (en) Assembly and method for beam shaping and light sheet microscopy
JP7064796B2 (en) Image reconstruction method, device and microimaging device
WO2009057114A2 (en) Optical sensor measurement and crosstalk evaluation
CN109425978B (en) High resolution 2D microscopy with improved cross-sectional thickness
CN111512205A (en) Method and apparatus for optical confocal imaging using a programmable array microscope
JP2012190021A (en) Laser scanning microscope and operation method thereof
JPWO2012096153A1 (en) Microscope system
EP3520074B1 (en) Method for the analysis of spatial and temporal information of samples by means of optical microscopy
CN115248197A (en) Three-dimensional imaging device and imaging method
CN111413791B (en) High resolution scanning microscopy
JP2006317261A (en) Image processing method and device of scanning cytometer
CN107144551B (en) Confocal super-resolution imaging system and method based on sCMOS
JP4905356B2 (en) Line scanning confocal microscope
CN107850766A (en) System and method for the image procossing in light microscope
JP4426763B2 (en) Confocal microscope
WO2023060091A1 (en) Enhanced resolution imaging
US11256078B2 (en) Continuous scanning for localization microscopy
CN117369106B (en) Multi-point confocal image scanning microscope and imaging method
US20240053594A1 (en) Method and Device for Microscopy
CN116097147A (en) Method for obtaining an optical slice image of a sample and device suitable for use in such a method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned
AD01 Patent right deemed abandoned

Effective date of abandoning: 20231013