WO2020002391A1 - A label-free multicolor optical surface tomography imaging method for nontransparent 3D samples

A label-free multicolor optical surface tomography imaging method for nontransparent 3D samples

Info

Publication number
WO2020002391A1
Authority
WO
WIPO (PCT)
Application number
PCT/EP2019/066931
Other languages
French (fr)
Inventor
Sebastian Munck
Original Assignee
Vib Vzw
Katholieke Universiteit Leuven, K.U.Leuven R&D
Priority date
Filing date
Publication date
Application filed by Vib Vzw, Katholieke Universiteit Leuven, K.U.Leuven R&D filed Critical Vib Vzw
Priority to EP19732378.5A priority Critical patent/EP3813641A1/en
Publication of WO2020002391A1 publication Critical patent/WO2020002391A1/en


Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 - Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0073 - Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by tomography, i.e. reconstruction of 3D images from 2D projections
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 - Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 - Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/251 - Colorimeters; Construction thereof
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 - Microscopes
    • G02B21/36 - Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365 - Control or image processing arrangements for digital or video microscopes
    • G02B21/367 - Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00 - Evaluating a particular growth phase or type of persons or animals
    • A61B2503/42 - Evaluating a particular growth phase or type of persons or animals for laboratory research
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 - Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 - Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 - Computed tomography [CT]
    • A61B6/032 - Transmission computed tomography [CT]
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 - Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/47 - Scattering, i.e. diffuse reflection
    • G01N2021/4735 - Solid samples, e.g. paper, glass
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 - Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/47 - Scattering, i.e. diffuse reflection
    • G01N21/4738 - Diffuse reflection, e.g. also for testing fluids, fibrous materials
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/62 - Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N21/63 - Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N21/64 - Fluorescence; Phosphorescence
    • G01N21/645 - Specially adapted constructive features of fluorimeters
    • G01N21/6456 - Spatial resolved fluorescence measurements; Imaging
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2201/00 - Features of devices classified in G01N21/00
    • G01N2201/06 - Illumination; Optics
    • G01N2201/063 - Illuminating optical parts
    • G01N2201/0634 - Diffuse illumination

Definitions

  • This application relates to the field of microscopy. Means and methods are disclosed for label-free 3D imaging of non-transparent objects.
  • the 3D modality of these two 3D microscopy methods differs.
  • SPIM directly generates a stack of images of confocal sections and OPT acquires multiple images collected at different angles by rotating the sample with respect to the image acquisition and back calculating the 3D information.
  • This OPT imaging mode bears similarity with micro-computed tomography (CT) and also means that standard tools for 3D reconstruction of CT data also work for OPT data.
  • Both devices implementing SPIM and OPT require transparent samples to image fluorescence within the tissue.
  • OPT has been used with transmitted light and absorbing dyes [6, 7] where the sample is back-illuminated, and the light is partially absorbed as the light passes through the transparent material.
  • OPT, unlike SPIM, is also compatible with samples that slightly scatter light [8], but is not suited for opaque samples. Therefore, apart from inherently transparent biological samples, like the zebrafish (Danio rerio), transparency in opaque samples needs to be induced using chemical 'clearing' methods. These chemical clearing methods are time intensive and are not compatible with all opaque samples. The optical chemical clearing is also counterproductive for studying the surface morphology and color appearance.
  • micro-CT is used and is based on the absorption of X-rays as the X-rays pass through the material [10].
  • the X-rays interact differentially with matter in the samples compared to photons in the visual spectrum and the X-rays pass readily through biological material, which is why optically non-transparent samples can be imaged in the micro-CT.
  • However, acquiring micro-CT devices still poses a financial burden for many laboratories, and micro-CT is problematic for following morphological changes over time. It is also inconvenient for forward genetic screens searching for mutants with abnormal external morphology and related tasks.
  • micro-CT cannot detect the 'optical' properties of a sample, like color, or differences in reflectivity since the micro-CT contrast solely captures density differences.
  • a method for imaging a sample comprises: illuminating, in an imaging chamber, the sample with diffused light from a light source, capturing reflected light in a detector, determining the reflected light from the sample, and constructing an image of the sample from the determined reflected light.
  • the determining of the reflected light from the sample is carried out in one aspect by subtracting background light from the captured reflected light. The red, green and blue components of the reflected light can be captured separately.
  • the constructing of the image comprises using a back-projection algorithm, and subsequently a 3D image of the sample can be visualised.
  • the method may also include capturing fluorescent radiation from the sample.
  • the method comprises rotation and/or translation of the sample.
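  • The following is a minimal preprocessing sketch of the background subtraction and separate red, green and blue capture described above. It is an illustration only, written in Python with NumPy; the empty-chamber reference frame, the array shapes and all function names are assumptions, not part of this disclosure.

```python
# Illustrative preprocessing sketch (not the patented implementation).
# Assumptions: a monochrome camera captures one frame per color filter, and a
# reference frame of the empty, diffusely lit chamber is available.
import numpy as np

def subtract_background(raw_frame, background_frame):
    """Remove the constant illumination background from a reflected-light frame."""
    signal = raw_frame.astype(np.float32) - background_frame.astype(np.float32)
    return np.clip(signal, 0.0, None)

def merge_rgb(red, green, blue):
    """Stack three separately filtered captures into a single RGB image."""
    return np.stack([red, green, blue], axis=-1)

# Synthetic stand-ins for camera frames (512 x 512 pixels).
background = np.full((512, 512), 40.0, dtype=np.float32)
raw_r, raw_g, raw_b = (background + 10.0 * np.random.rand(512, 512) for _ in range(3))
rgb = merge_rgb(*(subtract_background(c, background) for c in (raw_r, raw_g, raw_b)))
print(rgb.shape)  # (512, 512, 3)
```
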
  • This document also describes a device for imaging a sample.
  • the device has an imaging chamber holding the sample, a diffused light source, a detector arranged to capture reflected light from the sample and a processor adapted to construct an image of the sample (130) from the captured reflected light.
  • the imaging chamber is lined in one aspect with reflecting materials.
  • the diffused light source may comprise an LED light source and may further comprise a diffuser between the imaging chamber and the diffused light source.
  • the device comprises a plurality of color filters adapted to be inserted between the diffused light source and the imaging chamber.
  • the detector may be able to capture light having one or more spectral bandwidths and that there may be more than one detector. This enables different spectral information to be used to construct the image.
  • the detector may be, for example, one of a light field camera, a black and white or RGB CCD camera, an sCMOS-based camera, an analogue camera or a scanning system.
  • a holographic imaging system could be used to create an image.
  • the image could be retrieved from the Fourier plane. It will be appreciated that the imaging may be corrected for changes in the perspective.
  • the method is not limited to a single focus position and it is possible that multiple focus positions are combined.
  • white light comprising many spectral components is used, and differences in the properties of the reflected light, detected for example with dispersive elements, are used to deduce surface properties such as surface roughness.
  • Figure 1 shows imaging modalities.
  • Fig. 2B shows a simulation of the tomographic imaging process.
  • the phantom is rotated by 180°.
  • Fig. 2B shows rotations of 0° and 60°.
  • Fig. 2C illustrates reflected light imaging using the Shepp-Logan Phantom.
  • the Phantom is illuminated from the top and rotated.
  • the 0° and 60° rotation are shown.
  • Fig. 2D shows how the reflected light images form a projection, with the sum of projections being visualized in the corresponding sinogram.
  • Fig. 3 shows colour images.
  • Fig. 4 shows a resistor with color code.
  • Fig. 4A is a photograph of the resistor showing the color code on it.
  • Fig. 4B is a 3D reconstruction of the blue channel.
  • Fig. 4C is a 3D reconstruction of the green channel.
  • Fig. 4D is a 3D reconstruction of the red channel.
  • Fig 4E highlights the relative intensity profile of the color channels as indicated in Figs. 4B, 4C and 4D.
  • Fig. 4F is 3D RGB color reconstruction.
  • Figs. 4G to 4L show the reconstruction of a seed cone sample ( Metasequoia glyptostroboides) with complex surface structure with scale bars of 2mm.
  • Fig. 4G shows a photograph of the seed cone.
  • Fig. 4H shows an individual image from the device (blue channel).
  • Fig. 4I shows a reconstructed sagittal section through a central plane of the seed cone (blue channel).
  • Fig. 4J shows an intensity profile along the line indicated in Fig. 4I to compare the inside and outside of the complex shape.
  • Figs. 4K-L show a 3D semitransparent volume rendering of the seed cone in three colors (colors inverted compared to Fig. 4J for realistic color display). The color balance for the three colors was adapted manually. Images show different angles, including a view from below the cone.
  • Fig. 5 shows the imaging of the glossy surfaces of a rosemary beetle and a coin (scale bars: 3 mm).
  • Fig. 5A shows a photograph of the Rosemary beetle.
  • Fig. 5B shows an Individual image from the imaging method of this document (blue channel).
  • Fig. 5C shows a surface rendering of the beetle in 3D using three colors.
  • Fig. 5D shows a rendering of the back side of a eurocent coin.
  • FIGs 6A and 6B are photographs of a Lego figurine with a beard dubbed 'Dalton' and of a Lego figurine with glasses and happy face dubbed the 'Workman'.
  • Fig. 6C is a reconstructed sagittal section from the Dalton obtained with micro-CT.
  • Fig. 6D illustrates Dalton maximum intensity 3D reconstruction of the micro-CT data, made with
  • Fig. 6E illustrates Dalton volumetric 3D reconstruction of the micro-CT data, made with Arivis.
  • Fig. 6F demonstrates a reconstructed sagittal section from the Dalton obtained with the method of this document using the blue channel.
  • Fig. 6G shows Dalton maximum intensity 3D reconstruction of the data obtained using the method of this document, made with Arivis
  • Fig. 6H illustrates Workman maximum intensity 3D reconstruction of the data of this method, made with Arivis.
  • Fig. 6I shows that using the method of this document, the surface of the figurine can be revealed similar to the depicted micro-CT surface (see Fig. 6E) using, for example, one color channel (here blue).
  • Fig. 6J shows an overlay of the surfaces from the method of this document and micro-CT imaging. It can be seen that the surface generated by the method of this document matches the CT surface well.
  • Fig. 7A shows a reconstructed transmitted light image of a 400 mesh TEM grid with 26 µm bars.
  • Fig. 7B shows zoomed reconstructed transmitted light image.
  • Fig. 7C shows zoomed reconstructed reflected light image.
  • Fig. 7D shows an overlay of Fig. 7B and 7C.
  • Fig. 7E illustrates a section through the reconstructed grid in transmitted light.
  • Fig. 7F shows a section through the reconstructed grid in reflected light.
  • Fig. 7G is a reconstructed EM finder grid with letters in reflected light with 17 µm bars.
  • Fig. 7H shows 5 µm Dynabeads, reconstructed reflected light image.
  • Fig. 7I shows the same Dynabeads as in Fig. 7H imaged with a Nikon C2 confocal microscope, 20x objective with 0.75 NA.
  • Fig. 8A shows volume rendering of the mollusk shell side view.
  • FIG. 8B shows volume rendering of the shell bottom view, the shell is virtually cut open.
  • Fig. 8C shows volume rendering of the shell, side view, the shell is virtually cut open.
  • Fig. 8D shows six channel intensity distribution from the squared regions indicated in Fig. 8A, 8B, and 8C.
  • Fig. 9 shows a live larva attached to an insect pin by adhesion.
  • Fig. 9A shows a difference between the same larva in a contracted (top) and a relaxed (bottom) state.
  • Fig. 9B shows a transversal cut through the larva at the region indicated by the red lines in A.
  • Fig. 9C shows changes in the larva shape along the dorsoventral axis in the same region as indicated in Fig. 9A with the contracted state being left of the relaxed state.
  • In Fig. 10, a mutant fruit fly expressing GFP in the eyes is imaged.
  • Fig. 10A shows the reflective image in the blue channel.
  • Fig. 10B shows a section through reconstruction of the fly in reflective mode (front view, blue channel).
  • Fig. 10C shows a 3D rendering of the fly imaged in reflective mode.
  • Fig. 10D shows a combination of the 3D rendering of the reflective and the fluorescence mode.
  • Fig. 10E shows a single three color (raw) image before reconstruction and rendering of a red-eyed wild-type Drosophila head.
  • Fig. 10F shows a reconstruction according to this method of a sequence of rotational images as in Fig. 10E.
  • Fig. 10G shows a single three color image (before reconstruction and rendering) of a mutant Drosophila head with narrowed eyes of reduced size.
  • Fig. 10H shows a reconstruction of a sequence of rotational images as in Fig. 10G.
  • Figure 11 shows examples of embryos.
  • Fig. 12A shows a top view from dorsal to ventral of a stage 12 embryo.
  • Fig. 12 B shows a lateral side view of Fig. 12A.
  • Fig. 12C shows the same embryo as in Figs. 12A and 12B after ~1.5 h. Stage 14.5 is shown.
  • Fig. 12D shows a lateral side view of the embryo shown in Fig. 12C.
  • Fig. 12E shows the same embryo as in Fig. 12A, 12B, 12C or 12D after ~2.8 h (relative to Fig. 12A and 12B). Stage 19 is shown.
  • Fig. 12F is a lateral side view of Fig. 12E.
  • Fig. 12G shows the GFP fluorescence signal of the same embryo after fixation, imaged with a spinning disc microscope.
  • Fig. 12H shows a zoomed view comparing Fig. 12E and Fig. 12G.
  • Fig. 12I shows a side view using the imaging method of the application (ALMOST) displayed in purple, with more brightness in purple indicating less reflection. The inset shows a raw reflection image.
  • Fig. 12J shows a side view using autofluorescence displayed in cyan. Brighter signals indicate stronger autofluorescence.
  • Fig. 12K shows a side view using transmitted light displayed in green. Brighter signals indicate more transmission.
  • Fig. 12L is a merged view showing the tadpole from the top.
  • Figs. 12M-P are virtual sections through the animal as indicated in Fig. 12L.
  • Fig. 12Q is a merged side view of Figs. 12I, 12J and 12K, with the section as indicated in Fig. 12L.
  • Fig. 13 shows a semi-transparent technical object imaged by the imaging method of the description.
  • Fig. 13A is a raw image of a LED.
  • Fig. 13B shows a 3D projection using the imaging method of the application revealing the outer shape.
  • Fig. 13C is a cut view revealing parts from the inside of the LED.
  • Fig. 13D-F are zoomed images corresponding to the red rectangle indicated in A-C.
  • Fig. 14A is a photograph of a resistor.
  • Fig. 14B is an RGB reconstruction of the resistor using the imaging method, visualized as a projection.
  • an automatic white balance was performed using the Leica LAS software, which was driving the camera. Consequently, no individual adaptations for the different color channels have been performed. The overall contrast has been adjusted. This shows that an automated procedure can be used for the color balance in the imaging method.
  • Fig. 14C RGB reconstruction of the resistor using the imaging method as in Fig. 14B with a surface rendering.
  • the colors of the artificial surface rendering and the light added to the rendered scene give a less vivid impression than the photograph or the projections, but the result is a real 3D volumetric object.
  • Fig. 15 demonstrates that mirroring surfaces such as a ball bearing ball can be imaged by the imaging method disclosed herein.
  • Fig. 15A is a screenshot showing a ball bearing ball mounted in our device, with its glossy surface reflecting the surroundings.
  • Fig. 15B shows the raw data of the ball bearing ball. A surface defect can be seen and is marked by the white arrowhead. The chamber, the aperture for the camera, the mount and other features of the environment are also visible due to the reflective surface (see black arrows ).
  • Fig. 15C shows the reconstruction of the ball bearing ball.
  • the surface can be visualized without the reflections of the environment and without any further editing, while highlighting the surface features (see white arrowhead). Please note that the reconstructed image is mirrored.
  • the size of the ball is about 1.25 cm in diameter.
  • Fig. 1A shows a diagram of an imaging light path for a sample with back illumination as known in the art.
  • Fig. 1B illustrates the theory of image formation in a prior art tomographic system, like the OPT system.
  • the sample to be imaged resides at the center of a coordinate system.
  • Parallel rays of light, spaced by a fixed distance, pass through the sample to form a projected image (R_θ).
  • Fig. 1C shows a diagram of the oblique illumination light path used to create reflected light images of samples which are opaque as in the method of this document. It will be appreciated that this method differs from the manner in which the prior art standard OPT works with transparent samples and uses fluorescence or back illumination. It will be appreciated that it is possible to add color filters in the reflected light path to collect spectral information.
  • Fig. 1D illustrates a theory of image formation when the reflected light interacts with a fully opaque sample that specifically contains surface topography information.
  • Fig. 1E shows an oblique illumination/imaging chamber for reflected light imaging. A reflective chamber is used, for example lined with white paper, to promote diffuse illumination.
  • Fig. 1F shows a depiction of diffuse reflection compared to specular reflection.
  • Fig. 1G illustrates a flowchart of the imaging process of this document from imaging over reconstruction to 3D rendering of the samples.
  • a new variation of OPT is described that provides a 3D surface reconstruction of opaque samples including information on color and reflective properties of the samples. The method is based on the diffuse scattering of light that occurs when photons in the visible spectrum interact with the surface of nontransparent 3D objects. The method also enables the 3D color visualization of the sample with a reflective surface.
  • the method is tested by reconstructing 3D color images from a diverse set of samples including: an electrical resistor, seed cones of the dawn redwood Metasequoia glyptostroboides, the rosemary beetle Chrysolina americana, Lego figurines (which are compared with micro-CT) and a shell of the mollusk Pollia dorbignyi with six color channels.
  • the fruit fly Drosophila melanogaster both larvae and adults
  • the image of the surface of fixed and live Xenopus embryos highlights the applicability of the method to detect shape changes, for example during developmental furrow formations in neurulation.
  • the imaging of reflected light means that the reflectance (R) of the sample determines the image, with R typically approximated by the ratio of reflected to incident light, R ≈ I_reflected / I_incident.
  • R typically changes with the wavelength of the light for colored samples, showing different degrees of reflectance for different wavelengths [11].
  • the inventors theorized that it would be possible to decode the intensity of detected light (R) into an image of the sample shape and surface properties using the 3D capability of the OPT device.
  • In this OPT device, multiple images of the sample are collected as the sample rotates relative to a detector. Given that the sample receives constant homogeneous indirect diffuse illumination at all of the imaged angles θ, the 'background' number of photons that reach the detector remains constant. In contrast, the variation in the brightness information of each image corresponds to the specific reflective properties of the sample at each angle.
  • the image formation process can be described by the Radon transform P_θ(r) of the object, with δ being the Dirac delta function, r the perpendicular distance from the line to the origin, and θ the angle formed by the distance vector (Fig. 1B) in a coordinate system with its origin at the middle of the sample 130; the standard form of the transform is reproduced below.
  • the rays of light passing through the sample form a projection of the sample on the image plane, and this projection plane has the angle θ relative to the coordinate system.
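  • For reference, the standard textbook forms of the Radon transform and its filtered back projection inverse are reproduced below (generic formulas consistent with the definitions above, not copied from this disclosure); h denotes the ramp filter.

```latex
P_\theta(r) = \iint f(x,y)\,\delta\!\left(x\cos\theta + y\sin\theta - r\right)\,dx\,dy
\qquad
f(x,y) \approx \int_{0}^{\pi} \left(P_\theta * h\right)\!\left(x\cos\theta + y\sin\theta\right)\,d\theta
```
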
  • an imaging chamber 100 was developed and this imaging chamber 100 is shown in Fig. IE.
  • the imaging chamber 100 promotes diffuse illumination of an opaque object (sample 130 on a stage 135) to be imaged using commonly available materials.
  • the imaging chamber 100 contained a reflective surface 110 made of white paper and aluminum foil, a non-coherent unfocused light source 120 of LED goosenecks (like those used for dissection stereo-microscopes, but this is not limiting of the invention) directed at the sample 130, and a diffuser 140 made of milk glass placed between the light source 120 and the sample 130 (Figures 1C and 1E).
  • a detector 150 records the reflected light from the sample 130.
  • the light recorded at the detector 150 includes rays of the light that have reflected from a surface of the sample 130 against a constant illumination 'background' due to reflection from the reflective surface 110. No (or minimal) light interacts with the interior of the opaque sample 130 (see Figure ID). Furthermore, the sample 130 will differentially absorb and reflect light depending on properties of the surface of the sample 130, like color, and thus the reflected light image will contain spectral information about the sample 130.
  • the goal of the imaging chamber 100 is to obtain images of the sample 130 as if the sample 130 were a self-radiant object.
  • the method aims to process diffuse reflection as compared to specular reflection (as shown in Figure 1F).
  • In diffuse reflection, the radiant or luminous intensity of a diffuse radiator is directly proportional to the cosine of the angle between the illumination direction 160 and the surface normal of the sample, as known from Lambert's cosine law [14]. That means that the surface reflection of light from the sample 130 will scatter in different directions, with the brightest reflection being perpendicular to the surface of the sample 130; a small numerical sketch follows.
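  • The sketch below is a generic textbook illustration of Lambert's cosine law in Python, not the software of this disclosure; the function and variable names are assumptions.

```python
# Lambertian (ideal diffuse) reflection: intensity scales with the cosine of the
# angle between the surface normal and the illumination direction.
import numpy as np

def lambertian_intensity(normal, light_dir, albedo=1.0):
    """Return albedo * max(0, cos(angle between surface normal and light direction))."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n = n / np.linalg.norm(n)
    l = l / np.linalg.norm(l)
    return albedo * max(0.0, float(np.dot(n, l)))

print(lambertian_intensity([0, 0, 1], [0, 0, 1]))  # 1.0  (light perpendicular to surface)
print(lambertian_intensity([0, 0, 1], [0, 1, 1]))  # ~0.71 (45 degree incidence)
```
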
  • With an indirect diffuse source such as that in Fig. 1E, the rays of the light that are reflected at the surface of the sample can be captured by the objective lens of the detector 150 and form an image on the detector 150 (see Figure 1D).
  • This is different to the prior art OPT method in which the reflections from the sample 130 are typically avoided by using an immersion medium to match the refractive index of the sample 130.
  • a refractive index mismatch actually supports the imaging.
  • in the imaging method of this document, samples are imaged in air to maximize the reflectivity of the surface of the sample 130, except for 'aquatic' samples like Xenopus embryos.
  • the next step is rendering and visualizing the imaged 3D shapes using 3D rendering software.
  • in transmission imaging, darker parts of the image are considered as sites where the rays are absorbed, while brighter parts are regions where the rays pass unimpeded through the sample 130.
  • the brighter regions in reflected imaging are those regions where the sample 130 has higher reflective properties, and thus this method of imaging requires inversion of the grayscale compared to the transmission OPT image.
  • the illumination system provides a difference between the background intensity of light and the reflected light that has interacted with the sample 130.
  • the background intensity of the light needs to be distinguishable from the sample 130 and consequently rendered transparent to reveal the 3D shape of the sample 130.
  • the approach of using a white background as the reflective surface 110 achieves this for any non-white (or less bright) samples 130.
  • Rendering the 3D shape of the sample 130 also means that the result of the method is a computer-generated object.
  • This computer-generated object can then be differently visualized as a projection, volume, or as surface, where color information can be added in the form of a look-up-table and illumination and shading can be animated.
  • the result is a computer-generated image of the computer-generated object and thus may appear artificial as compared to a photograph.
  • the 3D information is fully digitized and can be used for modeling printing etc.
  • FIG. 1G shows a first step 5 in which the sample 130 is placed substantially centrally on the rotational stage 135 of the imaging chamber 100.
  • the sample 130 is illuminated in step 10 with indirect light to avoid speckles.
  • the reflective properties of the light are selected, for example by using a spectral filter as explained later, and the light reflected from the sample 130 is imaged in step 20.
  • The sample 130 is rotated as shown in step 25, and the imaging (step 20) of the sample 130 is repeated.
  • the plurality of projection images from the rotations of steps 20 and 25 are used as input for the back projection algorithm.
  • the relevant intensity values of the images, i.e. the reflection from the sample, are included in the calculation; there is no need to truncate the histogram for the calculations.
  • the 3D information can be constructed from the rotational images using the back projection algorithm.
  • In step 40, the 3D information constructed in step 35 is used to create an image stack, and this image stack can be visualized in step 45.
  • the visualization can be modified in step 50 to make the background transparent and the sample 130 visible.
  • In step 55, surface rendering can be applied to the reconstructed surface of the sample 130. A minimal sketch of this reconstruction workflow is given below.
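  • The sketch assumes that the background-corrected rotational projection images are already available as a NumPy array and applies the generic filtered back projection of scikit-image slice by slice; it is an illustration, not the patented software, and all names, shapes and the placeholder data are assumptions.

```python
# Slice-by-slice filtered back projection over rotational reflected-light images
# (illustrative sketch; one such volume would be computed per color filter).
import numpy as np
from skimage.transform import iradon

def reconstruct_volume(projections, angles_deg):
    """projections: (n_angles, n_rows, n_cols) reflected-light images, one per angle.
    Returns a reconstructed volume of shape (n_rows, n_cols, n_cols)."""
    n_angles, n_rows, n_cols = projections.shape
    volume = np.zeros((n_rows, n_cols, n_cols), dtype=np.float32)
    for z in range(n_rows):
        # Each image row, taken across all angles, forms one sinogram ...
        sinogram = projections[:, z, :].T          # (n_cols, n_angles)
        # ... which filtered back projection turns into one axial slice.
        volume[z] = iradon(sinogram, theta=angles_deg)
    return volume

# Placeholder usage with random data standing in for acquired images.
angles = np.arange(0.0, 360.0, 0.9)                # 0.9 degree steps, as used in the text
fake_projections = np.random.rand(len(angles), 64, 64).astype(np.float32)
volume = reconstruct_volume(fake_projections, angles)
print(volume.shape)                                # (64, 64, 64)
```
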
  • In Fig. 2A, the Shepp-Logan phantom is simulated.
  • Parallel rays traversing the phantom, like those generated by micro-CT, create a projection that depends on the angle at which the parallel rays traverse the sample and on the sample density (absorbance).
  • the depicted series of projections is shown as the sinogram (middle picture).
  • the varying intensities in the sinogram are a result of the absorbance of the light, with bright parts indicating more absorbance.
  • a reconstruction of the phantom is shown (left picture).
  • the accuracy and detail of the reconstruction depends on the number of the parallel rays, as well as the number of images collected over a range of different angles.
  • Fig. 2B shows a simulation of the tomographic imaging process.
  • the phantom is rotated by 180°.
  • Fig. 2B shows rotations of 0° and 60°.
  • Fig. 2C illustrates reflected light imaging using the Shepp-Logan Phantom.
  • the Phantom is illuminated from the top and rotated.
  • the 0° and 60° rotation are shown.
  • the illuminated parts of the Phantom, which will create a reflected light image are shown.
  • the outer and brightest ellipse in the Shepp-Logan phantom is considered as the opaque surface of the sample, and thus no information other than the first bright reflection contributes to the image.
  • the light intensity is assumed to be the same everywhere in the individual images, while in reality an intensity gradient will be created depending on the angle of illumination and depending on Lambert's cosine law, as explained above.
  • Fig. 2D shows how the reflected light images form a projection, with the sum of projections being visualized in the corresponding sinogram.
  • the sinogram was constructed from the y-projections of the rotated images of the Phantom. From this sinogram, the outer shape of the Phantom can be reconstructed using the Radon based filtered back projection. The size of the reconstruction was adjusted due to the fewer 'rays' as compared to the reconstruction shown in Fig. 2A.
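  • The sketch below re-creates this thought experiment in Python with scikit-image (an illustration only; the threshold, the angle range and the first-surface heuristic are assumptions, not the exact procedure used for Fig. 2).

```python
# Reflected-light projections of the Shepp-Logan phantom: the rotated phantom is
# "illuminated from the top", only the first non-background pixel of each column
# (the lit surface) contributes, and the y-projections are stacked into a sinogram
# that filtered back projection turns into the outer shape.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import rotate, iradon

phantom = shepp_logan_phantom()
angles = np.arange(0.0, 180.0, 1.0)
n_cols = phantom.shape[1]
sinogram = np.zeros((n_cols, len(angles)), dtype=np.float32)

for i, angle in enumerate(angles):
    rotated = rotate(phantom, angle, preserve_range=True)
    surface = np.zeros_like(rotated)
    for col in range(n_cols):
        hits = np.nonzero(rotated[:, col] > 0.05)[0]   # rays entering from the top
        if hits.size:
            surface[hits[0], col] = rotated[hits[0], col]
    sinogram[:, i] = surface.sum(axis=0)               # y-projection of the surface image

outline = iradon(sinogram, theta=angles)               # recovers the outer shape only
print(outline.shape)
```
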
  • the practical applicability of the approach to determine the true 3D shape of a sample with relatively simple shape was tested.
  • the sample 130 also includes color information. It is known that pigmented specimens (i.e. coloured specimens) will differentially absorb and reflect different wavelengths of reflected light.
  • the apparatus is used with a black and white camera as the detector 150, but it is expected that it is possible to generate color images of the sample using a set of three filters (as shown in Fig. 3) to create red, green and blue color channels.
  • the idea to use three color filters is akin to color photography as explored by James Maxwell in 1861 [15].
  • Amersham Biosciences (Amersham plc, Little Chalfont, Buckinghamshire, United Kingdom; now part of GE Healthcare, Chicago, Illinois, United States).
  • a resistor (Figure 4A) was used to reveal the characteristic color code on its surface in 3D in RGB color. Individual 512x512 pixel images were collected over 360° with 0.9 degree rotational steps. The reconstruction was based on the standard micro-CT NRecon software implementing the filtered backprojection algorithm. The imaging, reconstruction and rendering was carried out for the three RGB color channels ( Figure 4B, 4C and 4D). Figure 4E shows the intensity distribution along the lines indicated in Figs. 4B, 4C and 4D. A maximum intensity projection combining all colors in Arivis 3D rendering software is shown in Figure 4F. In this projection the sample 130 appears partly see through. The color rings are revealed properly.
  • the brown, green, red and gold color rings are imprinted on the resistor and are part of the four-color code used to describe its properties. The method allows these color rings to be revealed. The color balance for the three colors was adapted manually.
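  • Combining the three reconstructed channel volumes into one RGB volume with a per-channel balance could look like the sketch below; a simple gray-world scaling is used here as a stand-in for the manual balance mentioned above, and the function name and normalization are assumptions, not the procedure of this disclosure.

```python
# Merge three single-channel reconstructions into an RGB volume and apply a
# simple gray-world channel balance (illustrative stand-in for manual balancing).
import numpy as np

def balance_and_merge(vol_r, vol_g, vol_b):
    channels = [np.asarray(v, dtype=np.float32) for v in (vol_r, vol_g, vol_b)]
    means = [float(c.mean()) for c in channels]
    target = float(np.mean(means))
    balanced = [c * (target / m) if m > 0 else c for c, m in zip(channels, means)]
    rgb = np.stack(balanced, axis=-1)
    return np.clip(rgb / rgb.max(), 0.0, 1.0)    # normalize for display or rendering

rgb_volume = balance_and_merge(*(np.random.rand(64, 64, 64) for _ in range(3)))
print(rgb_volume.shape)                          # (64, 64, 64, 3)
```
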
  • The reconstruction of the complex shape of a biological sample is also shown in Figs. 4G to 4L.
  • a seed cone was chosen to test whether reflected illumination can reveal a shape with cavities/non-convex morphologies. More specifically, the seed-bearing cones of Metasequoia glyptostroboides, also called the dawn redwood tree (Figure 4G; photograph), were used. After imaging (Figure 4H), the 3D structure of the seed cone was created, which allows a virtual slice through the seed cone showing its surface structure as if it were cut open (Figure 4I). The intensity change of the signal for one of the cavities is shown in Figure 4J. Red, green and blue filters were used to create the RGB-type 3D image of the surface from three acquired volumes. Arivis 3D software visualized the surface of the seed cone in 3D (Figure 4K-L). Figures 4G and 4L demonstrate that the method of this document allows visualizing the cavities and complex structure of the sample 130.
  • the aim of the imaging chamber 100 is to create diffuse illumination that should enable imaging of glossy samples.
  • a Chrysolina americana, commonly known as the rosemary beetle, was used.
  • These insects have colorful elytra with metallic green and purple stripes running in the rostral to caudal direction.
  • Fig. 5 shows that the method can image these smooth shiny surfaces and visualize the color pattern and typical indentations on the forewing of the beetle.
  • a regular eurocent coin was imaged. The oak leaf imprint of the German mint becomes visible. This test demonstrates the applicability of the method for imaging the 3D morphology of insects and metallic samples.
  • In Fig. 6, samples of the same shape but with different color patterning are imaged, namely Lego figurines.
  • Figs 6A and 6B are photographs of a Lego figurine with a beard dubbed 'Dalton' and of a Lego figurine with glasses and happy face dubbed the 'Workman'.
  • Fig. 6C is a reconstructed sagittal section from the Dalton obtained with micro-CT.
  • Fig. 6D illustrates Dalton maximum intensity 3D reconstruction of the micro-CT data, made with Arivis.
  • Fig. 6E illustrates Dalton volumetric 3D reconstruction of the micro-CT data, made with Arivis.
  • Fig. 6F demonstrates a reconstructed sagittal section from the Dalton obtained with the method of this document using the blue channel.
  • Fig. 6G shows Dalton maximum intensity 3D reconstruction of the data obtained using the method of this document, made with Arivis
  • Fig. 6H illustrates Workman maximum intensity 3D reconstruction of the data, made with Arivis.
  • Fig. 6I shows that using the method of this document, the surface of the figurine can be revealed similar to the depicted micro-CT surface (see Fig. 6E) using, for example, one color channel (here blue).
  • Fig. 6J shows an overlay of the surfaces from the method of this document and micro-CT imaging. It can be seen that the surface generated by the method of this document matches the CT surface well.
  • both of the Lego figurines are revealed in the imaging method of this document and can be discriminated, whereas in the micro-CT imaging method, the figurines look similar.
  • Fig. 7 shows the comparison of a 400 copper mesh EM support grid imaged in both modalities and reconstructed. The two modalities pick up information through the holes of the mesh differently.
  • Fig. 7A shows a reconstructed transmitted light image of a 400 mesh TEM grid with 26 µm bars.
  • Fig. 7B shows zoomed reconstructed transmitted light image.
  • Fig. 7C shows zoomed reconstructed reflected light image.
  • Fig. 7D shows an overlay of Fig. 7B and 7C.
  • Fig. 7E illustrates a section through the reconstructed grid in transmitted light.
  • Fig. 7F shows a section through the reconstructed grid in reflected light.
  • Fig. 7E shows that the image formation is cleaner for the transmitted light and some artifacts arise from specular reflection indicated by the thin diagonal dark lines.
  • Fig. 7G is a reconstructed EM finder grid with letters in reflected light with 17 µm bars.
  • Fig. 7H shows 5 µm Dynabeads, reconstructed reflected light image.
  • Fig. 7I shows the same Dynabeads as in Fig. 7H imaged with a Nikon C2 confocal microscope, 20x objective with 0.75 NA.
  • the imaging method of this document can only detect the aggregates of the beads and is limited by the sampling of the camera (~4.2 µm per pixel in x,y, and thus too coarse for picking up the small differences between neighboring beads of 5 µm).
  • FIG. 8 shows the shell of Pollia dorbignyi in 3D color and a plot of the spectral composition of different parts of the shell and plasticine used to hold the shell in place. The plot is based on six volumes acquired with six spectral filters. The applicability of the method of this document for imaging shells of mollusk and generating spectra can be seen.
  • Figure 8 shows a six channel spectral imaging of a sea snail shell (Pollia dorbignyi).
  • Fig. 8A shows volume rendering of the mollusk shell side view.
  • Fig. 8B shows volume rendering of the shell bottom view, the shell is virtually cut open.
  • Fig. 8C shows volume rendering of the shell, side view, the shell is virtually cut open.
  • Fig. 8D shows six channel intensity distribution from the squared regions indicated in Fig. 8A, 8B, and 8C. Differences in the spectral composition from the different regions can be revealed.
  • the spectral specificity of the used filters is indicated by the bars; the line graph shows the spectral profile of the reflections from the different regions in the shell and the plasticine support.
  • Figure 9 shows the 3D morphic potential of the Drosophila 3rd instar larva.
  • Fig. 9A shows a live larva attached to an insect pin by adhesion. The larval body is in a contracted curled up state when lifted up from the ground. Grayscale data is used to visualize the change in the outer shape of a larva.
  • Fig. 9A shows a difference between the same larva in a contracted (top) and a relaxed state (bottom). Exposure to 0.2 M NaN3 for 30 min induced the relaxed state. The arrows indicate the difference in length between the two states. Here the induced relaxation shows that the larva is more stretched out.
  • Fig. 9B shows a transversal cut through the larva at the region indicated by the lines in Fig. 9A. The larva is oriented according to Fig. 9A, with the curled state on top. The black lines, and arrows indicate the difference in the shape along the dextro-sinister (horizontal) axis of the larva between the two states.
  • Fig. 9C shows changes in the larva shape along the dorsoventral axis in the same region as indicated in Fig. 9A with the contracted state being left of the relaxed state. Interestingly, this difference is more pronounced than in the transversal axis (Fig. 9B).
  • the difference between the two states along the dorsoventral axis is 846.91 µm vs. 602.86 µm, which amounts to a difference of 40.5%. Changes might be associated with a specific pose.
  • Figure 10 shows the combining of this method and fluorescence OPT on adult Drosophila.
  • Fig. 10A shows the reflective image in the blue channel of a fruit fly expressing GFP in the eyes.
  • Fig. 10B shows a section through reconstruction of the fly in reflective mode (front view, blue channel).
  • Fig. 10C shows a 3D rendering of the fly imaged in reflective mode.
  • Fig. 10D shows a combination of the 3D rendering of the reflective and the fluorescence mode.
  • Fig. 10 E shows a single three color (raw) image before reconstruction and rendering of a red-eyed wild-type Drosophila head.
  • Fig. 10F shows a reconstruction according to this method of a sequence of rotational images as in Fig. 10E.
  • Fig. 10G shows a single three color image (before reconstruction and rendering) of a mutant Drosophila head with narrowed eyes of reduced size.
  • Fig. 10H shows a reconstruction of a sequence of rotational images as in Fig. 10G.
  • Xenopus is a commonly used model system and widely used for embryology studies. Xenopus eggs and embryos are opaque, likely because of their yolk content, which is different from some other model organisms like zebrafish and Drosophila embryos, which are transparent. Recently an adaptive light-sheet microscope was introduced to overcome spatially varying optical properties in tissue and to image embryo development in greater detail. This technique improves live cell imaging in Drosophila and zebrafish embryos [17]. However, such a system can only correct for varying cell density.
  • Figure 11 shows examples of embryos at the one-cell stage (stage 1), four-cell stage (stage 3), blastula stage (stage 7), large yolk plug stage (stage 11), neural plate stage (stage 14), mid neural fold stage (stage 15), an early tailbud stage (stage 25) and a tailbud stage (stage 28). This shows that with the method it is possible to image the surface of Xenopus embryos label-free and without clearing.
  • Graylevel imaging is used for visualizing the surface of different developmental stages of Xenopus tropicalis embryos. 3D rendering of:
  • Xenopus is a good model system for spinal cord formation [24, 25], as the vertebrate-specific program of neurulation can be observed easily outside the uterus.
  • Although zebrafish would potentially be an alternative, inherently transparent model system that can be imaged with the available techniques (light sheet microscopy), the process of neurulation differs, and the zebrafish undergoes so-called secondary neurulation [26], which is different from the more human-relevant primary neurulation. Therefore, being able to image and analyze neurulation in live Xenopus embryos is an advantage.
  • Figure 12G shows the widefield and fluorescent signal obtained from the spinning disc microscope.
  • Figure 12H highlights a zoomed 3D view, where the embryo was cut digitally, and the furrow is shown towards the caudal direction.
  • the signals from the spinning disc and the imaging method of this document are shown next to each other.
  • Figure 12 overall signifies that opaque model systems can be imaged using the imaging method of this document.
  • Figure 12 illustrates live imaging of a Xenopus tropicalis embryo; different stages of the same embryo are shown during neurulation.
  • Fig. 12A shows a top view from dorsal to ventral of a stage 12 embryo and
  • Fig. 12 B shows a lateral side view of Fig. 12A.
  • Fig. 12C shows the same embryo as in Figs. 12A and 12B after ~1.5 h. Stage 14.5 is shown.
  • Fig. 12D shows a lateral side view of the embryo shown in Fig. 12C.
  • Fig. 12E shows the same embryo as in Fig. 12A, 12B, 12C or 12D after ~2.8 h (relative to Fig. 12A and 12B).
  • Stage 19 is shown.
  • Fig. 12F is a lateral side view of Fig. 12E.
  • the method shows that it is possible to image opaque samples in 3D and that the shape can be revealed in color by combining the concepts of OPT with oblique illumination, color filters and using the filtered back projection algorithm together with 3D rendering software.
  • This approach overcomes the need for transparent or cleared samples and allows the analysis of the 3D morphology of opaque samples like insect cuticles or shells on the mesoscale.
  • Since live samples can be imaged, the method opens the possibility for longitudinal imaging of unaltered (non-fixed and non-cleared) samples.
  • the method enables a supplementary approach to well-established OPT and light sheet modalities and allows imaging of the sample color, which is lost in X-ray-based techniques like micro-CT.
  • the imaging method was applied to the surface of samples like seed cones, adult insects, resistors and Lego figurines using straightforward modifications of existing OPT hardware.
  • the seed cone shows that it is possible to reveal complex non-convex surface structures (Figures 4G to 4L); it also demonstrates that the imaging method fills a niche for 3D imaging of relatively large samples.
  • the resistor ( Figure 4A-4F) shows that the color information can be depicted realistically even though the result of imaging method is an "artificial" computer rendered object that may give a less 'vivid' impression than a photograph.
  • the Xenopus imaging shows that the method can be used to image live samples and opens the possibility for longitudinal non-destructive surface imaging of developmental processes. It also highlights the potential of the method for Xenopus embryogenesis (Figure 11) and for investigating critical steps of neurulation (Figure 12).
  • the commonness of neural tube defects during pregnancies [22] stresses the importance of this topic.
  • the fact that Xenopus has recently become of interest for high content screening [27] supports the relevance of this proof of concept further, especially as the method could as well be integrated into robotic workflows.
  • a coordinate system like it has been developed for spherical embryos earlier [28] or other frameworks for modeling embryogenesis [29] could be applied.
  • the possibility to image live samples stresses the non-invasive character of the method, and since reflected visible light is collected, low phototoxicity can be expected.
  • the achievable resolution is given by the optical system. Different methods for characterizing the resolution are in use [33], including the Abbe diffraction limit [34], which is given by the wavelength used divided by two times the NA of the objective lens.
  • the 3D reconstruction can approximate isotropic resolution if an increasing number of angles is used for the reconstruction.
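  • As a back-of-the-envelope illustration of these resolution considerations, the short calculation below uses numbers quoted elsewhere in this text (a 0.13 NA objective and ~4.2 µm per pixel camera sampling); the 500 nm wavelength is an assumed mid-visible value, and the snippet is illustrative only.

```python
# Rough resolution estimates: Abbe diffraction limit vs. camera sampling limit.
wavelength_nm = 500.0               # assumed mid-visible illumination wavelength
numerical_aperture = 0.13           # objective NA quoted in the experimental details
pixel_size_um = 4.2                 # camera sampling quoted for the bead sample

abbe_limit_um = wavelength_nm / (2.0 * numerical_aperture) / 1000.0   # ~1.9 µm
sampling_limit_um = 2.0 * pixel_size_um                               # ~8.4 µm (Nyquist)

print(f"Abbe limit ~{abbe_limit_um:.1f} um, sampling limit ~{sampling_limit_um:.1f} um")
# With ~4.2 µm pixels the sampling, not diffraction, limits the smallest resolvable
# detail, consistent with 5 µm beads not being separated individually.
```
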
  • artifacts stemming from specular reflection may influence the images ( Figure 5).
  • key for the reconstruction is to avoid highlights or specular reflections and image diffuse reflection.
  • Figure 1 and Figure 3 show that with the imaging chamber specular reflection from glossy biological surfaces can be reduced efficiently.
  • the 3D reconstruction used is based on the filtered back projection algorithm typically used in CT ( Figure 2).
  • the fact that it can be used for reflective surfaces poses a new application for that reconstruction as it is originally based on the idea of line integrals typically associated with the attenuation of rays traversing through an object.
  • Adaptations for reflective imaging like filtering for small specular artifacts, as in Figure 7, could be beneficial for future developments.
  • the use of a telecentric lens might further improve the reconstruction because of the reduced perspective skewing in OPT and the imaging method.
  • the method differs from known technologies like Optical Coherence Tomography (OCT) [40], as it is not utilizing an interferometric approach. It is as well different from earlier described optical reflection tomography [41] as it is not based on measuring the refractive index and the thickness of the sample.
  • diffuse optical reflection microscopy utilizes a single continuous wave laser for illumination of the sample [42]
  • High-resolution reflection tomographic diffractive microscopy has been proposed earlier.
  • a holographic approach and high NA lenses for imaging were used [43] instead of an OPT to target the mesoscale.
  • the method enables the study and documentation of the 3D morphology of samples such as insect cuticle, plant seeds, alive and developing Xenopus embryos, as well as mollusk shells.
  • the ability to record the surface of a mesoscale object in 3D opens perspectives for digital repositories of zoological and botanical collections and enables a link to 3D printing of these objects.
  • the possibility for spectral analysis can provide more insight into the pigments in the samples and may also allow applications for diagnostics of small parts in material science, like for example the amount of oxidation and point of failure analysis in industrial processes.
  • Other applications may include virtual reality and numerical simulations of 3D objects, but also art, and historic objects, including the analysis of coloring on ancient statuary and pottery and the teaching of these.
  • the method complements other approaches, such as micro-CT [51, 52], X-ray microscopes, or light sheet microscopy, for 3D representation of the sample's surface morphology, thereby adding complete preservation of the actual characteristic color scheme without the need to use contrast agents, sample preprocessing, or digital post-processing to reintroduce the colors.
  • the method will not reveal the inside of opaque samples, but it is cheaper than a micro-CT, can be implemented straightforwardly, and is well-suited for field applications.
  • Our approach is compatible with recently described resources for cheap custom-built OPTs [7]. Also for 3D rendering, open solutions like Drishti can be used [53].
  • Experimental Details
  • a diffuser was used, and the imaging chamber was lined with white paper.
  • the sample was illuminated from the side with a gooseneck LED (Leica KL 200 LED) white light source.
  • Aluminum foil at the other side of the sample reflected light on the non-illuminated side.
  • a K580 from a Leitz filter slider was used as a red filter.
  • As a green filter, a Leitz Gelbgrün 32 mm/35 mm color glass was used.
  • As a blue filter, a Leitz CB 16.5 blue filter with diameter 32 mm/35 mm was used (for spectral information see supplementary Figure 2).
  • the bandpass filters used in Figure 6 were a 377/50 nm filter provided by Zeiss; a 420/40 nm filter provided by Olympus; a 460/50 nm filter provided by Nikon; a 525/50 nm filter provided by Zeiss; a 600/50 nm filter provided by Olympus; and a 690/70 nm BrightLine filter provided by Semrock.
  • For Figure 14, we used a DFC450c camera from Leica Microsystems (Wetzlar, Germany) steered by Leica LAS software (version 4.8), attached to a Nikon (Tokyo, Japan) TE200 stand outfitted with a Nikon Plan Fluor 4x lens with 0.13 NA and 16.5 mm working distance. The sample was rotated using a Xeryon (Leuven, Belgium) XRT-U 30 rotational piezo stage.
  • Micro-CT datasets were acquired on a SkyScan 1278 (Bruker micro-CT, Kontich, Belgium) in step-and-shoot mode with the following parameters: 65 kVp X-ray source voltage and 770 µA source current combined with an X-ray filter of 1 mm aluminum, 40 ms exposure time per projection, four averages per view, acquiring projections with 0.7° increments over a total angle of 180°, resulting in reconstructed
  • The dimmer the imaging, the more the global brightness might need to be adapted.
  • a white background was used. That means that regions as bright as the background or brighter will be revealed as see through.
  • the illumination needs to be adapted to low levels so as not to lose the bright regions in volume rendering. This can pose a limitation depending on the dynamic range of the camera and the possibility to illuminate the background as well.
  • the resistor is a 15 kΩ resistor with a tolerance of 5% and has the four-band resistor code brown, green, red, and gold; purchased from RS Components (RS Components GmbH, Hessenring 13b, 64546 Mörfelden-Walldorf, Germany).
  • the Drosophila samples were fixed at -80 °C to maintain the morphology and fluorescence.
  • the fly strain used in Figure 10A-D expresses GFP in the eyes in a white-eyed background (Genotype: y[1] M{vas-int.Dm}ZH-2A w[*]; Bloomington stock center #24481).
  • Fly strains used in Figure 10E-H were Canton-S (CS); Kyoto stock center #105666, and w; GlaBC/CyO (Bloomington Drosophila stock center #6662).
  • Grids were square mesh EM support grids, 400 copper mesh with 26 µm bars (FCF 400 - Cu - SB
  • Beads were magnetic Dynabeads 500 with iron core, ~5 µm in size (Thermo Fisher).
  • a Pollia dorbignyi [57] shell was used for the spectral imaging and was obtained at 42°21’49.7"N; 3°09'47.2"E.
  • Zalevsky Z: Extended depth of focus imaging: a review. SPIE; 2010.
  • Clark DP, Badea CT: Micro-CT of rodents: state-of-the-art and future perspectives. Physica Medica 2014, 30(6):619-634.
  • Kak AC, Slaney M: Principles of Computerized Tomographic Imaging. Philadelphia: Society for Industrial and Applied Mathematics; 2001.
  • Lambert JH, DiLaura DL: Photometry, or, On the Measure and Gradations of Light, Colors, and Shade: translation from the Latin of Photometria, sive, De mensura et gradibus luminis, colorum et umbrae. New York: Illuminating Engineering Society of North America; 2001.
  • Maxwell JC, Niven WD: The Scientific Papers of James Clerk Maxwell. Mineola, N.Y.: Dover Publications; 2003.
  • Truong TV, Supatto W, Koos DS, Choi JM, Fraser SE: Deep and fast live imaging with two-photon scanned light-sheet microscopy. Nature Methods 2011, 8(9):757-760.
  • Royer LA, Lemon WC, Chhetri RK, Wan Y, Coleman M, Myers EW, Keller PJ: Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms. Nature Biotechnology 2016, 34(12):1267-1278.
  • Borodinsky LN: Xenopus laevis as a Model Organism for the Study of Spinal Cord Formation, Development, Function and Regeneration. Frontiers in Neural Circuits 2017, 11:90.
  • Maia LA, Velloso I, Abreu JG: Advances in the use of Xenopus for successful drug screening.
  • Greenberg B: Apparatus and method for detection of cervical dilation during labor. Google Patents; 2008.
  • Teutsch C: Model-based Analysis and Evaluation of Point Sets from Optical 3D Laser Scanners.
  • Verdú JR, Alba-Tercedor J, Jiménez-Manrique M: Evidence of Different Thermoregulatory Mechanisms between Two Sympatric Scarabaeus Species Using Infrared Thermography and Micro-computer Tomography. PLoS ONE 2012, 7(3):e33914.
  • Metscher BD: MicroCT for comparative morphology: simple staining methods allow high-contrast 3D imaging of diverse non-mineralized animal tissues. BMC Physiology 2009, 9:11.
  • Limaye A: Drishti, A Volume Exploration and Presentation Tool. Proc SPIE 2012, 8506.
  • Payraudeau BC: Catalogue descriptif et méthodique des annélides et des mollusques de l'île de
  • Wood WB: The Nematode Caenorhabditis elegans. Cold Spring Harbor, N.Y.: Cold Spring Harbor Laboratory; 1988.


Abstract

This application relates to the field of microscopy; more precisely, means and methods are disclosed for label-free 3D imaging of non-transparent objects.

Description

A LABEL-FREE MULTICOLOR OPTICAL SURFACE TOMOGRAPHY IMAGING METHOD FOR
NONTRANSPARENT 3D SAMPLES
[0001] Field of the Invention
[0002] This application relates to the field of microscopy. Means and methods are disclosed for label-free 3D imaging of non-transparent objects.
[0003] Background of the Invention
[0004] Current mesoscale 3D imaging techniques are limited to transparent or cleared samples, or rely on X-rays. This is a severe limitation for many research areas, as the 3D color surface morphology of opaque samples like intact adult Drosophila, Xenopus embryos, and other non-transparent samples cannot be assessed.
[0005] Recent developments in 3D microscopy have revolutionized the imaging of whole model organisms such as zebrafish and Drosophila larvae as well as cleared mouse embryos and organs [1-3]. Among the techniques that allowed this revolution are light sheet or Selective Plane Illumination Microscopy (SPIM) [4, 5], and Optical Projection Tomography (OPT) [6].
[0006] The 3D modality of these two 3D microscopy methods differs. SPIM directly generates a stack of images of confocal sections, whereas OPT acquires multiple images collected at different angles by rotating the sample with respect to the image acquisition and back-calculating the 3D information. This OPT imaging mode bears similarity to micro-computed tomography (micro-CT) and also means that standard tools for 3D reconstruction of CT data also work for OPT data.
[0007] Both devices implementing SPIM and OPT require transparent samples to image fluorescence within the tissue. In addition to the fluorescence, OPT has been used with transmitted light and absorbing dyes [6, 7] where the sample is back-illuminated, and the light is partially absorbed as the light passes through the transparent material. OPT, unlike SPIM, is also compatible with samples that slightly scatter light [8], but is not suited for opaque samples. Therefore, apart from inherently transparent biological samples, like the zebrafish (Danio rerio), transparency in the opaque samples needs to be induced using chemical 'clearing' methods. These chemical clearing methods are time intensive and are not compatible with all opaque samples. The optical chemical clearing is also counterproductive for studying the surface morphology and color appearance.
[0008] Therefore, for imaging the morphological details of opaque samples, like adult Drosophila and the surface features of embryos, brightfield dissecting microscopes are often used that provide only a 2D representation of the 3D sample. Furthermore, similar microscopes are used to create extended focus operations [9], where a 3D stack is processed to provide a 2D projection of the in-focus parts of the stack, thereby losing the 3D information of the samples.
[0009] For 3D imaging of non-transparent objects, micro-CT is used and is based on the absorption of X-rays as the X-rays pass through the material [10]. The X-rays interact differentially with matter in the samples compared to photons in the visual spectrum and the X-rays pass readily through biological material, which is why optically non-transparent samples can be imaged in the micro-CT. However, acquiring micro-CT devices still poses a financial burden for many laboratories and is problematic for following morphological changes over time. It is also inconvenient for forward genetic screens searching for mutants with abnormal external morphology and related tasks. Finally, the micro-CT cannot detect the 'optical' properties of a sample, like color, or differences in reflectivity since the micro-CT contrast solely captures density differences. Thus, there is currently an unmet need for straightforward 3D surface color imaging of non-transparent biological samples, and such an approach would greatly benefit some fields and open novel possibilities for 3D quantitative biology.
Summary of the Invention
[0010] A method for imaging a sample is described in this document. The method comprises: illuminating, in an imaging chamber, the sample with diffused light from a light source, capturing reflected light in a detector, determining the reflected light from the sample, and constructing an image of the sample from the determined reflected light from the sample.
[0011] The determining of the reflected light from the sample is carried out in one aspect by subtracting background light from the captured reflected light. The red, green and blue components of the reflected light can be captured separately.
[0012] In one aspect of the invention, the constructing of the image comprises using a back-projection algorithm, and subsequently a 3D image of the sample can be visualised.
[0013] The method may also include capturing fluorescent radiation from the sample.
[0014] In a further aspect of the invention, the method comprises rotation and/or translation of the sample.
[0015] This document also describes a device for imaging a sample. The device has an imaging chamber holding the sample, a diffused light source, a detector arranged to capture reflected light from the sample and a processor adapted to construct an image of the sample (130) from the captured reflected light.
[0016] The imaging chamber is lined in one aspect with reflecting materials.
[0017] The diffused light source may comprise an LED light source and may further comprise a diffuser between the imaging chamber and the diffused light source.
[0018] In a further aspect, the device comprises a plurality of color filters adapted to be inserted between the diffused light source and the imaging chamber.
[0019] It will be appreciated that the detector may be able to capture light having one or more spectral bandwidths and that there may be more than one detector. This enables different spectral information to be used to construct the image.
[0020] The detector may be, for example, one of a light field camera, a black and white or RGB CCD camera, an sCMOS-based camera, an analogue camera or a scanning system.
[0021] It would be possible to use other forms of light paths and directions to create the image. For example, a holographic imaging system could be used to create an image. In other aspects, the image could be retrieved from the Fourier plane. It will be appreciated that the imaging may be corrected for changes in the perspective.
[0022] The method is not limited to a single focus position and it is possible that multiple focus positions are combined.
[0023] In one aspect, for example, white light made up of many spectral components is used, and differences in the properties of the reflected light, assessed for example with dispersive elements, are used to deduce surface properties, for example surface roughness.
[0024] It will be appreciated that the method and device may need to be calibrated, for example for stages of oxidation of a given material.
Description of the Figures
[0025] Figure 1 shows imaging modalities.
[0026] In Fig. 2A the Shepp-Logan Phantom is simulated.
[0027] Fig. 2B shows a simulation of the tomographic imaging process. The phantom is rotated by 180°. Fig. 2B shows rotations of 0° and 60°.
[0028] Fig. 2C illustrates reflected light imaging using the Shepp-Logan Phantom. The Phantom is illuminated from the top and rotated. In Fig. 2C, the 0° and 60° rotations are shown.
[0029] Fig. 2D shows how the reflected light images form a projection, with the sum of projections being visualized in the corresponding sinogram.
[0030] Fig. 3 shows color images.
[0031] Fig. 4 shows a resistor with color code.
[0032] Fig. 4A is a photograph of the resistor showing the color code on it.
[0033] Fig. 4B is a 3D reconstruction of the blue channel.
[0034] Fig. 4C is a 3D reconstruction of the green channel.
[0035] Fig. 4D is a 3D reconstruction of the red channel.
[0036] Fig 4E highlights the relative intensity profile of the color channels as indicated in Figs. 4B, 4C and 4D.
[0037] Fig. 4F is a 3D RGB color reconstruction.
[0038] Figs. 4G to 4L show the reconstruction of a seed cone sample (Metasequoia glyptostroboides) with a complex surface structure, with scale bars of 2 mm.
[0039] Fig. 4G shows a photograph of the seed cone.
[0040] Fig. 4H shows an individual image from the device (blue channel).
[0041] Fig. 4I shows a reconstructed sagittal section through a central plane of the seed cone (blue channel).
[0042] Fig. 4J shows an intensity profile along the line indicated in Fig. 4I to compare the inside and outside of the complex shape.
[0043] Figs. 4K-L show 3D semitransparent volume rendering of the seed cone in three colors (colors inverted compared to 4J, for realistic color display). The color balance for the three colors was adapted manually. Images show different angles, including a view from below the cone.
[0044] Fig. 5 shows the imaging of the glossy surfaces of a rosemary beetle and a coin (scale bars: 3 mm).
[0045] Fig. 5A shows a photograph of the Rosemary beetle.
[0046] Fig. 5B shows an individual image from the imaging method of this document (blue channel).
[0047] Fig. 5C shows a surface rendering of the beetle in 3D using three colors.
[0048] Fig. 5D shows a rendering of the back side of a eurocent coin.
[0049] Figs 6A and 6B are photographs of a Lego figurine with a beard dubbed 'Dalton' and of a Lego figurine with glasses and happy face dubbed the 'Workman'.
[0050] Fig. 6C is a reconstructed sagittal section from the Dalton obtained with micro-CT.
[0051] Fig. 6D illustrates Dalton maximum intensity 3D reconstruction of the micro-CT data, made with
Arivis.
[0052] Fig. 6E illustrates Dalton volumetric 3D reconstruction of the micro-CT data, made with Arivis.
[0053] Fig. 6F demonstrates a reconstructed sagittal section from the Dalton obtained with the method of this document using the blue channel.
[0054] Fig. 6G shows Dalton maximum intensity 3D reconstruction of the data obtained using the method of this document, made with Arivis, and Fig. 6H illustrates Workman maximum intensity 3D reconstruction of the data of this method, made with Arivis.
[0055] Fig. 6I shows that using the method of this document, the surface of the figurine can be revealed similar to the depicted micro-CT surface (see Fig. 6E) using, for example, one color channel (here blue).
[0056] Fig. 6J shows an overlay of the surfaces from the method of this document and from micro-CT imaging. It can be seen that the surface generated by the method of this document matches the CT surface well.
[0057] Fig. 7A shows reconstructed transmitted light image of a 400 mesh TEM grid with 26 µm bars.
[0058] Fig. 7B shows zoomed reconstructed transmitted light image.
[0059] Fig. 7C shows zoomed reconstructed reflected light image.
[0060] Fig. 7D shows an overlay of Fig. 7B and 7C.
[0061] Fig. 7E illustrates a section through the reconstructed grid in transmitted light.
[0062] Fig. 7F shows a section through the reconstructed grid in reflected light.
[0063] Fig. 7G is a reconstructed EM finder grid with letters in reflected light with 17 µm bars.
[0064] Fig. 7H shows 5 µm Dynabeads in a reconstructed reflected light image.
[0065] Fig. 7I shows the same Dynabeads as in Fig. 7H imaged with a Nikon C2 confocal microscope (20x objective, 0.75 NA).
[0066] Fig. 8A shows volume rendering of the mollusk shell side view.
[0067] Fig. 8B shows volume rendering of the shell bottom view, the shell is virtually cut open.
[0068] Fig. 8C shows volume rendering of the shell, side view, the shell is virtually cut open.
[0069] Fig. 8D shows six channel intensity distribution from the squared regions indicated in Fig. 8A, 8B, and 8C.
[0070] Fig. 9 shows a live larva attached to an insect pin by adhesion.
[0071] Fig. 9A shows a difference between the same larva in a contracted (top) and a relaxed state (bottom).
[0072] Fig. 9B shows a transversal cut through the larva at the region indicated by the red lines in Fig. 9A.
[0073] Fig. 9C shows changes in the larva shape along the dorsoventral axis in the same region as indicated in Fig. 9A with the contracted state being left of the relaxed state.
[0074] Fig. 10 shows a mutant fruit fly expressing GFP in the eyes.
[0075] Fig. 10A shows the reflective image in the blue channel.
[0076] Fig. 10B shows a section through reconstruction of the fly in reflective mode (front view, blue channel).
[0077] Fig. 10C shows a 3D rendering of the fly imaged in reflective mode.
[0078] Fig. 10D shows a combination of the 3D rendering of the reflective and the fluorescence mode.
[0079] Fig. 10E shows a single three-color (raw) image, before reconstruction and rendering, of a red-eyed wild-type Drosophila head.
[0080] Fig. 10F shows a reconstruction according to this method of a sequence of rotational images as in Fig. 10E.
[0081] Fig. 10G shows a single three-color image (before reconstruction and rendering) of a glazed eye (Gla) mutant Drosophila head with narrowed eyes of reduced size.
[0082] Fig. 10H shows a reconstruction of a sequence of rotational images as in Fig. 10G.
[0083] Figure 11 shows examples of embryos.
[0084] Fig. 12A shows a top view from dorsal to ventral of a stage 12 embryo.
[0085] Fig. 12 B shows a lateral side view of Fig. 12A.
[0086] Fig. 12C shows the same embryo as in Figs. 12A and 12B after ~1.5h. Stage 14.5 is shown.
[0087] Fig. 12D shows a lateral side view of the embryo shown in Fig. 12C.
[0088] Fig. 12E shows the same embryo as in Fig. 12A, 12B, 12C or 12D after ~2.8h (relative to Fig. 12A and 12B). Stage 19 is shown.
[0089] Fig. 12F is a lateral side view of Fig. 12E.
[0090] Fig. 12G shows the GFP fluorescence signal of the same embryo after fixation, imaged with a spinning disc microscope.
[0091] Fig. 12H shows a zoomed view comparing Fig. 12E and Fig. 12G.
[0092] Fig. 12I shows a side view using the imaging method of the application (ALMOST) displayed in purple, with brighter purple indicating less reflection. The insert shows a raw reflection image.
[0093] Fig. 12J shows a side view using autofluorescence displayed in cyan. Brighter signals indicate stronger autofluorescence.
[0094] Fig. 12K shows a side view using transmitted light displayed in green. Brighter signals indicate more transmission.
[0095] Fig. 12L is a merged view showing the tadpole from the top.
[0096] Figs. 12M-P are virtual sections through the animal as indicated in Fig. 12L.
[0097] Fig. 12Q is a merged side view of Figs. 12I, 12J and 12K, with the section as indicated in Fig. 12L.
[0098] Fig. 13 shows a semi-transparent technical object imaged by the imaging method of the description.
[0099] Fig. 13A is a raw image of an LED.
[0100] Fig. 13B shows a 3D projection using the imaging method of the application revealing the outer shape.
[0101] Fig. 13C is a cut view revealing parts from the inside of the LED.
[0102] Figs. 13D-F are zoomed images corresponding to the red rectangle indicated in Figs. 13A-C.
[0103] Figs. 13B, 13C, 13E and 13F are displayed using a color look-up table ranging from blue through yellow and white to orange. Scale bar = 500 µm.
[0104] Fig. 14 shows automatic color balance. Scale bars = 500 µm. Imaging conditions are summarized in Table 1.
[0105] Fig. 14A Photograph of a resistor.
[0106] Fig. 14B RGB reconstruction of the resistor using the imaging method; visualized in a projection. For the acquisition, an automatic white balance was performed using the Leica LAS software, which was driving the camera. Consequently, no individual adaptations for the different color channels have been performed. The overall contrast has been adjusted. This shows that an automated procedure can be used for the color balance in the imaging method.
[0107] Fig. 14C RGB reconstruction of the resistor using the imaging method as in Fig. 14B with a surface rendering. The colors of the artificial surface rendering and the light added to the rendered scene give a less vivid impression than the photograph or the projections, but the result is a real 3D volumetric object.
[0108] Fig. 15 demonstrates that mirroring surfaces such as a ball bearing ball can be imaged by the imaging method disclosed herein.
[0109] Fig. 15A is a screenshot showing a ball bearing ball mounted in our device and its glossy surface reflecting the surroundings.
[0110] Fig. 15B shows the raw data of the ball bearing ball. A surface defect can be seen and is marked by the white arrowhead. The chamber, the aperture for the camera, the mount and other features of the environment are also visible due to the reflective surface (see black arrows).
[0111] Fig. 15C shows the reconstruction of the ball bearing ball. The surface can be visualized without the reflections of the environment and without any further editing, while highlighting the surface features (see white arrowhead). Please note that the reconstructed image is mirrored. The ball is about 1.25 cm in diameter.
Detailed Description of the Invention
[0112] The invention will now be described with respect to Fig. 1. Fig. 1A shows a diagram of an imaging light path for a sample with back illumination as known in the art. Fig. 1B illustrates the theory of image formation in a prior art tomographic system, like the OPT system. The sample to be imaged resides at the center of a coordinate system. Parallel rays of light, spaced by a distance τ, pass through the sample to form a projected image (Pθ). The theory of this imaging process is based on the Radon transform.
[0113] Fig. 1C shows a diagram of the oblique illumination light path used to create reflected light images of samples which are opaque as in the method of this document. It will be appreciated that this method differs from the manner in which the prior art standard OPT works with transparent samples and uses fluorescence or back illumination. It will be appreciated that it is possible to add color filters in the reflected light path to collect spectral information.
[0114] Fig. 1D illustrates the theory of image formation when the reflected light interacts with a fully opaque sample, so that the image specifically contains surface topography information.
[0115] Fig. 1E shows an oblique illumination/imaging chamber for reflected light imaging. A reflective chamber is used, for example lined with white paper, to promote diffuse illumination. Fig. 1F shows a depiction of diffuse reflection compared to specular reflection.
[0116] Fig. 1G illustrates a flowchart of the imaging process of this document, from imaging over reconstruction to 3D rendering of the samples.
[0117] A new variation of OPT is described that provides a 3D surface reconstruction of opaque samples, including information on color and reflective properties of the samples. The method is based on the diffuse scattering of light that occurs when photons in the visible spectrum interact with the surface of nontransparent 3D objects. The method also enables the 3D color visualization of the sample with a reflective surface. The method is tested by reconstructing 3D color images from a diverse set of samples including: an electrical resistor, seed cones of the dawn redwood Metasequoia glyptostroboides, the rosemary beetle Chrysolina americana, Lego figurines (which are compared with micro-CT) and a shell of the mollusk Pollia dorbignyi with six color channels. The fruit fly Drosophila melanogaster is imaged, both larvae and adults, including co-detection of GFP expression and genetic eye mutations. Imaging of the surface of fixed and live Xenopus embryos highlights the applicability of the method to detect shape changes, for example during developmental furrow formation in neurulation.
[0118] The imaging of reflected light means that the reflectance (R) of the sample determines the image, with R being typically approximated by

$$R = \frac{I_{\mathrm{reflected}}}{I_{\mathrm{incoming}}} \qquad (1)$$

[0119] with $I$ being the intensity, for non-mirroring material with some roughness. Depending on the material, R typically changes with the wavelength of the light for colored samples, showing different degrees of reflectance for different wavelengths [11].
[0120] The inventors theorized that it would be possible to decode the intensity of detected light (R) into an image of the sample shape and surface properties using the 3D capability of the OPT device. In this OPT device, multiple images of the sample are collected as the sample rotates relative to a detector. Given that the sample receives constant homogeneous indirect diffuse illumination at all of the imaged angles θ, the 'background' number of photons that reach the detector remains constant. In contrast, the variation in the brightness information of each image corresponds to the specific reflective properties of the sample at each angle.
[0121] The inventors used existing tools to solve the problem of reconstructing 2D OPT reflected light images as set out in this document into a 3D surface representation of the sample [12] (see Figure 1B). It was concluded this was possible given the fundamental similarities between the approach set out in this document and the previously used transmitted light OPT, as well as CT. In all three cases, the OPT device or the CT device outputs a series of images collected at different angles. The mathematical foundation of the standard CT/transmitted light approaches is the so-called line integral, which represents the total attenuation of a straight ray traveling through the sample f(x,y).
[0122] According to Beer's law, the total energy $I_0$ emerging from the object along this ray is given by

$$I_0 = I_i \, e^{-\int \mu(x,y)\, ds} \qquad (2)$$

with $I_i$ being the total incident energy and $\mu(x,y)$ the local attenuation coefficient at $(x,y)$.
[0123] Thus, considering the total attenuation along any line through the sample, the image formation process can be described by the Radon transform $P_\theta(r)$ of the object

$$P_\theta(r) = \iint f(x,y)\, \delta(x\cos\theta + y\sin\theta - r)\, dx\, dy \qquad (3)$$

with $\delta$ being the Dirac delta function, r the perpendicular distance from the line to the origin, and θ the angle formed by the distance vector (Fig. 1B) and a coordinate system with its origin at the middle of the sample 130. The rays of light passing through the sample form a projection of the sample on the image plane, and this projection plane has the angle θ relative to the coordinate system.
[0124] The formation of the images is described by equations (2) and (3), and the inverse Radon transform needs to be performed in order to reconstruct the underlying 3D shape. Different practical solutions are known to exist for the reconstruction of the 3D images. For example, a back-projection algorithm is typically used to reconstruct 3D objects, but this is not limiting of the invention. A simple discrete description of the filtered back-projection algorithm for the inverse Radon transform of parallel projection data can be considered as

$$f(x,y) \approx \frac{\pi}{N} \sum_{i=1}^{N} Q_{\theta_i}(x\cos\theta_i + y\sin\theta_i) \qquad (4)$$

with N being the number of projection angles and $Q_{\theta_i}$ being the Radon projection kernel, filtered for data sparsity. More sophisticated reconstruction algorithms exist and are used [13].
[0125] The major difference between the prior art transmitted OPT and CT imaging and the method of this document is that the underlying quantity that generates the signal is changed: reflected light (R), as shown in Figures 1C and 1D, replaces the absorption of transmitted light, as known from Figures 1A and 1B. However, even with a changed origin of the detected intensity (reflected rather than transmitted light), the back-projection algorithm can back-calculate the true 3D shape of the imaged object from a series of 2D images collected at different projection angles, just as it does for the other modes (transmitted light or fluoresced light). In other words, the reconstruction algorithm operates independently of whether the light originates from the surface of the object to be imaged or from behind the object to be imaged.
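As an illustration of this point, the following Python sketch applies the same filtered back-projection used for transmitted light data to a stack of reflected light projection images, slice by slice. It is a minimal sketch only, assuming background-subtracted images and using scikit-image's `iradon` as a stand-in for the reconstruction software used by the inventors.

```python
import numpy as np
from skimage.transform import iradon

def reconstruct_reflected_volume(projections, angles_deg):
    """Minimal sketch: filtered back-projection of reflected light images.

    projections: array of shape (n_angles, n_rows, n_cols), one background-
    subtracted reflected light image per rotation angle.
    angles_deg:  rotation angle of each projection in degrees.

    Each horizontal detector row is treated as an independent sinogram, so the
    same inverse Radon transform used for transmitted light OPT/CT applies.
    """
    n_angles, n_rows, n_cols = projections.shape
    volume = np.empty((n_rows, n_cols, n_cols), dtype=np.float32)
    for row in range(n_rows):
        sinogram = projections[:, row, :].T          # (n_cols, n_angles)
        volume[row] = iradon(sinogram, theta=angles_deg,
                             filter_name="ramp", circle=True,
                             output_size=n_cols)
    return volume
```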
[0126] For the reflective imaging shown in Figs. 1C and 1D, an imaging chamber 100 was developed; this imaging chamber 100 is shown in Fig. 1E. The imaging chamber 100 promotes diffuse illumination of an opaque object (sample 130 on a stage 135) to be imaged using commonly available materials. The imaging chamber 100 contained a reflective surface 110 made of white paper and aluminum foil, a non-coherent unfocused light source 120 of LED goosenecks (like those used for dissection stereo-microscopes, but this is not limiting of the invention) directed at the sample 130, and a diffuser 140 made of milk glass placed between the light source 120 and the sample 130 (Figures 1C and 1E). A detector 150 (in this case a camera) records the reflected light from the sample 130. In this setup, the light recorded at the detector 150 includes rays of the light that have reflected from a surface of the sample 130 against a constant illumination 'background' due to reflection from the reflective surface 110. No (or minimal) light interacts with the interior of the opaque sample 130 (see Figure 1D). Furthermore, the sample 130 will differentially absorb and reflect light depending on properties of the surface of the sample 130, like color, and thus the reflected light image will contain spectral information about the sample 130.
[0127] The goal of the imaging chamber 100 is to obtain images of the sample 130 as if the sample 130 were a self-radiant object. Thus, the method aims to process diffuse reflection as compared to specular reflection (as shown in Figure 1F). In diffuse reflection, the radiant or luminous intensity of a diffuse radiator is directly proportional to the cosine of the angle between the illumination direction 160 and the surface of the sample, as known from Lambert's cosine law [14]. That means that the surface reflection of light from the sample 130 will scatter in different directions, with the brightest reflection being perpendicular to the surface of the sample 130. When the light illuminating the sample 130 comes from an indirect diffuse source, such as that in Fig. 1E, the rays of the light that are reflected at the surface of the sample can be captured by the objective lens of the detector 150 and form an image on the detector 150 (see Figure 1D). This is different from the prior art OPT method, in which reflections from the sample 130 are typically avoided by using an immersion medium to match the refractive index of the sample 130. In contrast, when imaging in a reflective mode, a refractive index mismatch actually supports the imaging. Thus, in the imaging method of this document, the sample is imaged in air to maximize the reflectivity of the surface of the sample 130, except for 'aquatic' samples like Xenopus embryos.
[0128] While diffuse reflected illumination may still lead to shadowing, as is known in scanning electron microscopy (SEM), it was reasoned that when the sample 130 is imaged over a range of angles, as in OPT 3D imaging, the surface of the sample 130 will be evenly illuminated and the topology can be imaged correctly. Furthermore, it was hypothesized that multi-directional and even illumination of the sample 130 would promote accurate detection of the samples 130 with complex and/or angular (non-convex) morphology while also gathering information on fine surface topography of the sample, like dimpling.
[0129] After imaging and reconstructing the 3D information with the back-projection algorithm, the next step is rendering and visualizing the imaged 3D shapes using 3D rendering software. In the context of transmitted light OPT and using standard processing of the back-projection algorithm, darker parts of the image are considered as sites where the rays are absorbed, while brighter parts are regions where the rays pass unimpeded through the sample 130. In reflected imaging, by contrast, the brighter regions are those regions where the sample 130 has higher reflective properties, and thus this method of imaging requires inversion of the grayscale compared to the transmission OPT image.
[0130] To render shapes in 3D acquired with the reflective light imaging method described in this document, the illumination system provides a difference between the background intensity of light and the reflected light that has interacted with the sample 130. In addition, the background intensity of the light needs to be distinguishable from the sample 130 and consequently rendered transparent to reveal the 3D shape of the sample 130. The approach of using a white background as the reflective surface 110 achieves this for any non-white (or less bright) samples 130.
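A minimal sketch of this background handling is given below; the histogram-mode estimate of the background level and the tolerance value are illustrative assumptions, not steps prescribed by the method.

```python
import numpy as np

def background_alpha_mask(volume, tolerance=0.05):
    """Sketch: make the near-constant background transparent for rendering.

    Assumes, as described above, that the background intensity is roughly
    constant and distinguishable from the sample. The most frequent intensity
    is taken as the background level; voxels within `tolerance` (as a fraction
    of the full intensity range) of that level receive zero opacity.
    """
    hist, edges = np.histogram(volume, bins=256)
    i = np.argmax(hist)
    background_level = 0.5 * (edges[i] + edges[i + 1])
    intensity_range = float(volume.max() - volume.min()) + 1e-9
    alpha = np.abs(volume - background_level) / intensity_range > tolerance
    return alpha.astype(np.float32)
```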
[0131] Rendering the 3D shape of the sample 130 also means that the result of the method is a computer-generated object. This computer-generated object can then be differently visualized as a projection, volume, or surface, where color information can be added in the form of a look-up table, and illumination and shading can be animated. It also means that the result is a computer-generated image of the computer-generated object and thus may appear artificial as compared to a photograph. At the same time, it means that the 3D information is fully digitized and can be used for modeling, printing, etc.
[0132] The steps from imaging over reconstruction to rendering are illustrated in the flow diagram in Figure 1G. This diagram shows a first step 5 in which the sample 130 is placed substantially centrally on the rotational stage 135 of the imaging chamber 100. The sample 130 is illuminated in step 10 with indirect light to avoid speckles. In step 15, the reflective properties of the light are selected, for example by using a spectral filter as explained later, and the light reflected from the sample 130 is imaged in step 20. It will be appreciated that the reflected light from the sample 130 and from the background will need to be distinguishable from one another, otherwise later reconstruction of the image will not be possible.
[0133] The sample 130 is rotated as shown in step 25 and the imaging (step 20) of the sample 130 is repeated. In step 30, the plurality of projection images from steps 20 and 25 are used as input for the back-projection algorithm. The relevant intensity values of the images, i.e. the reflection from the sample, are included in the calculation; there is no need to truncate the histogram for the calculations. In step 35, the 3D information can be constructed from the rotational images using the back-projection algorithm.
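The acquisition steps above can be summarized in a short sketch. Here `stage.rotate_to()`, `camera.set_filter()` and `camera.snap()` are hypothetical placeholders for whatever hardware API drives the rotation stage, filter changer and detector; the 400 steps of 0.9° mirror the acquisition settings mentioned later for the resistor.

```python
import numpy as np

def acquire_rotation_series(camera, stage, n_steps=400,
                            filters=("red", "green", "blue")):
    """Hypothetical sketch of the acquisition loop (steps 5-30).

    `camera` and `stage` stand in for the real hardware interfaces; the
    methods used here (rotate_to, set_filter, snap) are placeholders.
    Returns one image stack per color filter plus the list of angles,
    ready to be passed to the back-projection reconstruction.
    """
    step = 360.0 / n_steps                      # e.g. 0.9 degrees for 400 steps
    angles = [i * step for i in range(n_steps)]
    stacks = {f: [] for f in filters}
    for angle in angles:
        stage.rotate_to(angle)                  # step 25: rotate the sample
        for f in filters:
            camera.set_filter(f)                # step 15: select spectral band
            stacks[f].append(camera.snap())     # step 20: image reflected light
    return {f: np.stack(imgs) for f, imgs in stacks.items()}, angles
```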
[0134] In step 40, the 3D information constructed in step 35 is used to create an image stack and this image stack can be visualized in step 45. The visualization can be modified in step 50 to make the background transparent and the sample 130 visible. In step 55, surface rendering can be applied to the reconstructed surface of the sample 130.
Results
[0135] The first test was carried out in silico to see whether the back projection-based approach outlined in this document can reconstruct a 2D object from a series of images using Matlab. The reconstruction of the simulated reflection from the outer surface of the phantom is shown in Figure 2 and illustrates the successful application of the method applied to simulated reflected light images.
[0136] In Fig. 2A the Shepp-Logan Phantom is simulated. Parallel rays traversing the phantom, like those generated by the micro-CT, will create a projection depending on the angle at which the parallel rays traverse the sample and depending on the sample density (absorbance). The depicted series of projections is shown as the sinogram (middle picture). The varying intensities in the sinogram are a result of the absorbance of the light, with bright parts indicating more absorbance. From the sinogram, a reconstruction of the phantom (left picture) can be approximated using a filtered back-projection algorithm. As expected, the accuracy and detail of the reconstruction depend on the number of the parallel rays, as well as the number of images collected over a range of different angles.
[0137] Fig. 2B shows a simulation of the tomographic imaging process. The phantom is rotated by 180°. Fig. 2B shows rotations of 0° and 60°.
[0138] Fig. 2C illustrates reflected light imaging using the Shepp-Logan Phantom. The Phantom is illuminated from the top and rotated. In Fig. 2C, the 0° and 60° rotations are shown. The illuminated parts of the Phantom, which will create a reflected light image, are shown. The outer and brightest ellipse in the Shepp-Logan phantom is considered as the opaque surface of the sample, and thus no information other than the first bright reflection contributes to the image. For simplicity, the light intensity is assumed to be the same everywhere in the individual images, while in reality an intensity gradient will be created depending on the angle of illumination and on Lambert's cosine law, as explained above.
[0139] Fig. 2D shows how the reflected light images form a projection, with the sum of projections being visualized in the corresponding sinogram. The sinogram was constructed from the y-projections of the rotated images of the Phantom. From this sinogram, the outer shape of the Phantom can be reconstructed using the Radon based filtered back projection. The size of the reconstruction was adjusted due to the fewer 'rays' as compared to the reconstruction shown in Fig. 2A.
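A rough Python sketch of this simulation is shown below. It assumes scikit-image's Shepp-Logan phantom and keeps only the first bright 'surface' pixel encountered from the illumination side in each column; depending on rotation and sign conventions the reconstruction may appear mirrored, as noted elsewhere in this document.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import rotate, iradon

# Sketch of Fig. 2C-D: only the first bright surface seen from the top
# (the illumination side) contributes to each simulated reflected light view.
phantom = shepp_logan_phantom()                      # 400 x 400 test image
angles = np.arange(0.0, 180.0, 1.0)

sinogram = np.zeros((phantom.shape[1], len(angles)))
for i, angle in enumerate(angles):
    view = rotate(phantom, angle)                    # rotate the phantom
    surface = np.zeros_like(view)
    for col in range(view.shape[1]):
        hits = np.nonzero(view[:, col] > 0.5)[0]     # bright outer ellipse only
        if hits.size:
            surface[hits[0], col] = 1.0              # keep first reflection only
    sinogram[:, i] = surface.sum(axis=0)             # y-projection of this view

# Radon-based filtered back-projection recovers the outer shape of the phantom.
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
```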
[0140] The practical applicability of the approach to determine the true 3D shape of a sample with a relatively simple shape was tested. The sample 130 also includes color information. It is known that pigmented specimens (i.e. colored specimens) will differentially absorb and reflect different wavelengths of reflected light. The apparatus is used with a black and white camera as the detector 150, but it is expected that it is possible to generate color images of the sample using a set of three filters (as shown in Fig. 3) to create red, green and blue color channels. The idea to use three color filters is akin to color photography as explored by James Maxwell in 1861 [15]. An Amersham Bioscience (Amersham Place, Little Chalfont, Buckinghamshire, United Kingdom; now part of GE Healthcare, Chicago, Illinois, United States) Ultrospec 2100 pro with Swift II software version 2.06 was used to acquire spectra of the three color filters used for three-color volume imaging. Spectra between 300 and 700 nm in 1 nm steps were acquired. The scan speed was 1800 nm/min; no reference was used. The transmission of the filters shows that, even with suboptimal filters, the color information can be retrieved.
[0141] A resistor (Figure 4A) was used to reveal the characteristic color code on its surface in 3D in RGB color. Individual 512x512 pixel images were collected over 360° with 0.9 degree rotational steps. The reconstruction was based on the standard micro-CT NRecon software implementing the filtered back-projection algorithm. The imaging, reconstruction and rendering were carried out for the three RGB color channels (Figures 4B, 4C and 4D). Figure 4E shows the intensity distribution along the lines indicated in Figs. 4B, 4C and 4D. A maximum intensity projection combining all colors in the Arivis 3D rendering software is shown in Figure 4F. In this projection the sample 130 appears partly see-through. The color rings are revealed properly.
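A minimal sketch of how three independently reconstructed channel volumes could be combined into one RGB volume is given below; the per-channel normalization and the `balance` gains are illustrative assumptions standing in for the manual color balance described in the text.

```python
import numpy as np

def merge_rgb_volumes(red, green, blue, balance=(1.0, 1.0, 1.0)):
    """Sketch: stack three reconstructed channel volumes into an RGB volume.

    `balance` holds manual per-channel gains, mimicking the manual color
    balance mentioned in the text; normalizing each channel to [0, 1] is an
    illustrative display choice, not a prescribed step of the method.
    """
    channels = []
    for vol, gain in zip((red, green, blue), balance):
        v = vol.astype(np.float32) * gain
        v /= (v.max() + 1e-9)                 # normalize each channel for display
        channels.append(v)
    return np.stack(channels, axis=-1)        # shape (z, y, x, 3)
```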
[0142] It will be appreciated that small artifacts, like the flare on the thread above the actual resistor, are a consequence of extended reflections on the reflective metal part connecting the resistor.
[0143] The brown, green, red and gold color rings (from top to bottom) are imprinted on the resistor and are part of the four-color code used to describe its properties. The method allows these color rings to be revealed. The color balance for the three colors was adapted manually.
To test whether an automated procedure for adjusting the colors can be used, and to show that the approach also works with a color camera, we imaged a resistor with a color camera and applied the automatic white balance (sometimes referred to as color balance) procedure of the software. The resistor was then reconstructed and visualized without individual adjustments for the different colors. Figure 14 shows that the resistor can be depicted and the colors match the original well.
[0144] The reconstruction of the complex shape of a biological sample is also shown in Figs. 4G to 4L. A seed cone was chosen to test whether reflected illumination can reveal a shape with cavities/non-convex morphologies. More specifically, the seed-bearing cones of Metasequoia glyptostroboides, also called the dawn redwood tree (Figure 4G; photograph), were used. After imaging (Figure 4H), the 3D structure of the seed cone was created, which allows a virtual slice through the seed cone showing its surface structure as if it were cut open (Figure 4I). The intensity changes of the signal for one of the cavities are shown in Figure 4J. Red, green and blue filters were used to create the RGB-type 3D image of the surface from three acquired volumes. Arivis 3D software visualized the surface of the seed cone in 3D (Figures 4K-L). Figures 4G and 4L demonstrate that the method of this document allows visualizing the cavities and complex structure of the sample 130.
[0145] The aim of the imaging chamber 100 is to create diffuse illumination that should enable imaging of glossy samples. To test and confirm this, a Chrysolina americana, commonly known as the Rosemary beetle, was imaged. These insects have colorful elytra with metallic green and purple stripes along the rostral to caudal direction. Fig. 5 shows that the method can image these smooth shiny surfaces and visualize the color pattern and typical indentations on the forewing of the beetle. To also test inorganic metallic surfaces, a regular eurocent coin was imaged. The oak leaf imprint of the German mint becomes visible. This test demonstrates the applicability of the method for imaging the 3D morphology of insects and metallic samples.
[0146] To further demonstrate that the imaging method disclosed herein enables imaging of highly reflective surfaces, a ball bearing ball was imaged. Imaging of ball bearings is very challenging as the ball in essence acts like an extremely convex mirror. The reflection makes it difficult to inspect the surface with optical methods, as the sample (the ball) will show the illumination or the probe (the camera) or both on its surface. Therefore, separating the surface from the optical properties is key. With our method, we can create a complete 360° reconstruction of the surface of the ball independent of the shininess of the ball. The reconstruction used in the imaging application disclosed herein requires no additional image processing to achieve the environmentally independent feature reconstruction (Figure 15). This observation is advantageous for analysing the safety of devices containing ball bearings. Indeed, ball bearings with defects can cause disastrous failures of industrial and recreational devices, posing safety and financial risks to the operation of such devices. Optical inspection enabled by the method disclosed herein offers a quick and cost-efficient way to control manufacturing or to test the operational state of ball bearings.
[0147] To compare the standard for imaging optically nontransparent samples, micro-CT, with the method of this document, and to stress the possibilities that the color imaging of this document holds, experiments were carried out to image samples that are similar with respect to their 3D shape and surface, but with different coloring. The reproducible shape of Lego figurines was used, and two figurines with different facial expressions and color schemes were imaged (Figures 6A and 6B). Using micro-CT, the shape and surface of the figurines can be depicted, but the two characters cannot be discriminated (Figures 6C and 6E). Using the method of this document, the two characters can be clearly discriminated; also, the surface of the figurines can be extracted using volume rendering similar to the CT data (Figures 6F-6I). The surface extracted from the imaging method of this document and the surface determined by the CT method match well, showing qualitatively that the surfaces are comparable to CT surfaces of similar resolution (Figure 6J).
[0148] Fig. 6 shows that samples of the same shape but with different color patterning are imaged, namely Lego figurines. Figs. 6A and 6B are photographs of a Lego figurine with a beard dubbed 'Dalton' and of a Lego figurine with glasses and a happy face dubbed the 'Workman'. Fig. 6C is a reconstructed sagittal section from the Dalton obtained with micro-CT. Fig. 6D illustrates Dalton maximum intensity 3D reconstruction of the micro-CT data, made with Arivis. Fig. 6E illustrates Dalton volumetric 3D reconstruction of the micro-CT data, made with Arivis.
[0149] Fig. 6F demonstrates a reconstructed sagittal section from the Dalton obtained with the method of this document using the blue channel. Fig. 6G shows Dalton maximum intensity 3D reconstruction of the data obtained using the method of this document, made with Arivis, and Fig. 6H illustrates Workman maximum intensity 3D reconstruction of the data, made with Arivis.
[0150] Fig. 6I shows that, using the method of this document, the surface of the figurine can be revealed similar to the depicted micro-CT surface (see Fig. 6E) using, for example, one color channel (here blue). Fig. 6J shows an overlay of the surfaces from the method of this document and from micro-CT imaging. It can be seen that the surface generated by the method of this document matches the CT surface well.
[0151] It will also be noted that both of the Lego figurines are revealed with the imaging method of this document and can be discriminated, whereas in the micro-CT imaging method, the figurines look similar.
[0152] To compare the performance of the method with conventional OPT imaging and to explore the resolution further, grids used for electron microscopy and beads were imaged as test samples (Figure 7). Overall, the reconstructions of both modalities match well, while a section through the grid for both modalities reveals potential artifacts for the method of this document in the form of specular reflections. The bead imaging revealed that the sampling of the camera limits the current setup.
[0153] Fig. 7 shows the comparison of the imaging of a 400 copper mesh EM support grid in both modalities after reconstruction. The two modalities pick up information through the holes of the mesh differently. Fig. 7A shows a reconstructed transmitted light image of a 400 mesh TEM grid with 26 µm bars. Fig. 7B shows a zoomed reconstructed transmitted light image. Fig. 7C shows a zoomed reconstructed reflected light image. Fig. 7D shows an overlay of Figs. 7B and 7C.
[0154] To show the differences in the image formation between reflected and transmitted mode, a section through the grid is shown for both modalities. Fig. 7E illustrates a section through the reconstructed grid in transmitted light. Fig. 7F shows a section through the reconstructed grid in reflected light. Fig. 7E shows that the image formation is cleaner for the transmitted light; some artifacts arise from specular reflection, indicated by the thin diagonal dark lines. Fig. 7G is a reconstructed EM finder grid with letters in reflected light with 17 µm bars.
[0155] Subsequently, beads with an iron core of about 5 µm, dispersed on a transparent coverslip, were imaged. Fig. 7H shows 5 µm Dynabeads in a reconstructed reflected light image. Fig. 7I shows the same Dynabeads as in Fig. 7H imaged with a Nikon C2 confocal microscope (20x objective, 0.75 NA). The imaging method of this document can only detect the aggregates of the beads and is limited by the sampling of the camera (~4.2 µm per pixel in x,y and thus too coarse for picking up the small differences between neighboring beads of 5 µm).
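As a rough sampling check, assuming the Nyquist criterion as the limiting factor (an assumption for illustration only): resolving two neighboring 5 µm beads as separate objects would require a pixel size no larger than about half the bead spacing,

$$p_{\max} \approx \frac{5\ \mu\mathrm{m}}{2} = 2.5\ \mu\mathrm{m} < 4.2\ \mu\mathrm{m},$$

so at ~4.2 µm per pixel the individual beads fall below the sampling limit and only aggregates are detected.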
[0156] Scale bar in Figs. 7A, 7B, 7E and 7G = 500 µm, scale bar in Fig. 7I = 50 µm.
[0157] Subsequently, it was explored whether it is possible to use the approach to go beyond the three-color RGB images and obtain a more extended spectral readout from the sample surface. To this aim, a sea snail shell was imaged using six spectral filters. Figure 8 shows the shell of Pollia dorbignyi in 3D color and a plot of the spectral composition of different parts of the shell and of the plasticine used to hold the shell in place. The plot is based on six volumes acquired with six spectral filters. The applicability of the method of this document for imaging shells of mollusks and generating spectra can be seen.
[0158] Figure 8 shows six channel spectral imaging of a sea snail shell (Pollia dorbignyi). Fig. 8A shows a volume rendering of the mollusk shell, side view. Fig. 8B shows a volume rendering of the shell, bottom view; the shell is virtually cut open. Fig. 8C shows a volume rendering of the shell, side view; the shell is virtually cut open. Fig. 8D shows the six channel intensity distribution from the squared regions indicated in Figs. 8A, 8B and 8C. Differences in the spectral composition of the different regions can be revealed. The spectral specificity of the used filters is indicated by the bars; the line graph shows the spectral profile of the reflections from the different regions of the shell and the plasticine support. In the figures above, minor differences in the intensity between the different color channels were adapted manually. Here, for the six channels, the intensity of the background was kept constant to normalize for differences between channels. The shell appears hollow as reflected light is imaged, and no information is collected from the inside (see Figure 2).
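The spectral readout of Fig. 8D could be computed along the lines of the sketch below; normalizing each channel to a background region keeps the background constant across channels as described, though the exact normalization used by the inventors is not specified here, so this is an assumption for illustration.

```python
import numpy as np

def roi_spectrum(channel_volumes, roi, background_roi):
    """Sketch: per-channel mean intensity of a region, normalized to background.

    channel_volumes: dict mapping filter name -> reconstructed volume.
    roi, background_roi: index tuples (e.g. numpy slices) selecting the squared
    region of interest and a background region in the same volumes.
    """
    spectrum = {}
    for name, vol in channel_volumes.items():
        spectrum[name] = float(vol[roi].mean() / (vol[background_roi].mean() + 1e-9))
    return spectrum
```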
[0159] Scale bar = 2 mm. Imaging conditions are summarized in Table 1.
[0160] Recent studies using light-sheet microscopy created impressive sequences of early Drosophila development using fluorescent nuclear markers showing the inner cellular organization of the larvae. These movies end when the muscles of the larvae start twitching [16]. To illustrate the complementarity of the label-free surface approach of the method of this document, the morphic potential (meaning the amount of form change) of a Drosophila 3rd instar larva was shown. The movements of Drosophila larvae, for example during foraging, are characterized by a remarkable form change of the larval body. It was reasoned that a tomographic view provides more insights than the obvious macroscopic contractions. The Figures show a larva in a contracted, curled-up state and in a relaxed, elongated state. This reveals that the contractibility and deformability of the larval body differ along the different axes (see Figure 9). Hence these results show that the morphic potential of this individual is manifested unequally between the different axes, pointing to a differential contribution of the different muscle groups involved in the movement.
[0161] Figure 9 shows the 3D morphic potential of the Drosophila 3rd instar larva. Fig. 9A shows a live larva attached to an insect pin by adhesion. The larval body is in a contracted, curled-up state when lifted up from the ground. Grayscale data is used to visualize the change in the outer shape of the larva. Fig. 9A shows the difference between the same larva in a contracted (top) and a relaxed state (bottom). Exposure to 0.2 M NaN3 for 30 min induced the relaxed state. The arrows indicate the difference in length between the two states. Here the induced relaxation shows that the larva is more stretched out (3469.37 µm) and longer than in the contracted and curled-up state (2690.14 µm). The difference in length between the contracted and the relaxed state corresponds to about 22.5 % when measuring from rostral to caudal and about 10 % when following the curvature of the contracted larva along the anterior-posterior axis (3133.15 µm vs. 3482.67 µm). Fig. 9B shows a transversal cut through the larva at the region indicated by the lines in Fig. 9A. The larva is oriented according to Fig. 9A, with the curled state on top. The black lines and arrows indicate the difference in the shape along the dextro-sinister (horizontal) axis of the larva between the two states. The difference is 640.61 µm vs. 555.88 µm, corresponding to a difference of about 15.2 %. Fig. 9C shows changes in the larva shape along the dorsoventral axis in the same region as indicated in Fig. 9A, with the contracted state being left of the relaxed state. Interestingly, this difference is more pronounced than in the transversal axis (Fig. 9B). The difference between the two states along the dorsoventral axis is 846.91 µm vs. 602.86 µm, which amounts to a difference of 40.5 %. Changes might be associated with a specific pose.
[0162] Scale bar = 500 µm. Imaging conditions are summarized in Table 1.
[0163] Next, it was tested whether the method can be used for imaging adult Drosophila melanogaster fruit flies. The imaging method was combined with fluorescence using a mutant fly expressing GFP in the eyes. In Figures 10A-D, it is shown that the imaging method of this document can also be combined with fluorescent OPT. It is possible to reconstruct the overall surface morphology of the cuticle of the fly in 3D and add an additional channel for fluorescence detection.
[0164] To further test the applicability for imaging fly mutants, and to test the resolution of our setup on biological samples, images were acquired of the heads of wild-type red-eyed Drosophilas (Figures 10E and 10F) and of so-called glazed eye (Gla) mutants with narrowed eyes of reduced size (Figures 10G and 10H).
[0165] Using the method of this document, the eye morphology of the fly can be imaged in 3D, and the difference between the two genotypes can be detected. Figure 10 underlines the applicability of our method for imaging the morphology of insects.
[0166] Figure 10 shows the combination of this method and fluorescence OPT on adult Drosophila. Fig. 10A shows the reflective image in the blue channel of a fruit fly expressing GFP in the eyes. Fig. 10B shows a section through the reconstruction of the fly in reflective mode (front view, blue channel). Fig. 10C shows a 3D rendering of the fly imaged in reflective mode. Fig. 10D shows a combination of the 3D rendering of the reflective and the fluorescence mode. Fig. 10E shows a single three-color (raw) image before reconstruction and rendering of a red-eyed wild-type Drosophila head. Fig. 10F shows a reconstruction according to this method of a sequence of rotational images as in Fig. 10E. Fig. 10G shows a single three-color image (before reconstruction and rendering) of a glazed eye (Gla) mutant Drosophila head with narrowed eyes of reduced size. Fig. 10H shows a reconstruction of a sequence of rotational images as in Fig. 10G. Scale bars = 500 µm. Imaging conditions are summarized in Table 1.
[0167] Next, a test was carried out to see if it is possible to image embryogenesis in Xenopus embryos. Xenopus is a commonly used model system and is widely used for embryology studies. Xenopus eggs and embryos are opaque, likely because of their yolk content, which is different from some other model organisms like zebrafish and Drosophila embryos, which are transparent. Recently an adaptive light-sheet microscope was introduced to overcome spatially varying optical properties in tissue and to image embryo development in greater detail. This technique allows improving live cell imaging in Drosophila and zebrafish embryos [17]. However, such a system can only correct for varying cell density. In opaque and absorbing systems, such as Xenopus embryos, where the signal is lost, it is of limited use. To overcome the opacity issue, microscopic Magnetic Resonance Imaging (mMRI) was recently used to investigate, noninvasively and independently of the optical properties, mitotic divisions inside the opaque early Xenopus embryo [18], and has been successfully applied to unravel disheveled signaling in Xenopus gastrulation [19].
[0168] Other solutions for imaging opaque and highly scattering samples have been proposed before, like Surface Imaging Microscopy [20], which has been used to investigate embryo development in fixed samples. To test whether it is possible to image Xenopus embryos with the method of this disclosure, chemically fixed embryos at different stages were investigated.
[0169] Figure 11 shows examples of embryos at the one cell stage (stage 1), four-cell stage (stage 3), blastula stage (stage 7), large yolk plug stage (stage 11), neural plate stage (stage 14), mid neural fold stage (stage 15), an early tailbud stage (stage 25) and a tailbud stage (stage 28). This shows that with the method it is possible to image the surface of Xenopus embryos label-free and without clearing.
[0170] Graylevel imaging is used for visualizing the surface of different developmental stages of Xenopus tropicalis embryos. 3D rendering of:
  • One cell stage (stage 1 - Fig. 11A)
  • Four cell stage (stage 3 - Fig. 11B)
  • Blastula stage (stage 7 - Fig. 11C)
  • Large yolk plug stage (stage 11 - Fig. 11D)
  • Neural plate stage (stage 14 - Fig. 11E)
  • Mid neural fold stage (stage 15 - Fig. 11F)
  • An early tailbud stage (stage 25 - Fig. 11G)
  • A tailbud stage (stage 28 - Fig. 11H).
[0171] Scale bars = 500 µm. Imaging conditions are summarized in Table 1.
[0172] To test whether it is possible to leave out all chemical treatment altering the sample (clearing and fixation) and thus image live Xenopus embryos, the process of neurulation during Xenopus tropicalis development was investigated. The inventors were interested in neural tube formation, and especially in the late steps from fold apposition to fusion and remodeling [21]. This process is of relevance as the failure to close the neural tube can lead to neural tube defects. Neural tube defects (NTDs) are one of the most common birth defects, affecting approximately 0.5-2/1000 pregnancies [22]. These NTDs include spina bifida, anencephaly, and others [23]. Xenopus is a good model system for spinal cord formation [24, 25], as the vertebrate-specific program of neurulation can be observed easily outside the uterus. While zebrafish would potentially be an alternative and inherently transparent model system that can be imaged with the available techniques (light sheet microscopy), the process of neurulation differs, and the zebrafish undergoes so-called secondary neurulation [26], which is different from the more human-relevant primary neurulation. Therefore, being able to image and analyze neurulation in live Xenopus embryos is an advantage.
[0173] For studying the neural tube closure, surface imaging is crucial, as exemplified by earlier SEM studies [21]. Using the imaging method of this document, it was possible to image the furrow and the progress of the closure of the neural tube between stage 12 and 19 during a period of ~2.8 hours in 4 steps. Figures 12A-F show that the developmental processes can be studied in 3D using the method and that the surface dynamics of live embryos can be imaged. As this embryo expresses GFP in neuronal precursor cells, the neural tube formation can also be studied in this respect. To demonstrate how the imaging method of this document complements other imaging strategies, and to see how it compares to regular 3D imaging strategies, the same embryo was imaged after chemical fixation on a spinning disc microscope. Figure 12G shows the widefield and fluorescent signal obtained from the spinning disc microscope. Figure 12H highlights a zoomed 3D view, where the embryo was cut digitally, and the furrow is shown towards the caudal direction. The signals from the spinning disc and the imaging method of this document are shown next to each other. Figure 12 overall signifies that opaque model systems can be imaged using the imaging method of this document.
[0174] As the Xenopus gets older, it gets more transparent. Therefore we wanted to check next if we can image semitransparent samples and compare our technique with other related OPT techniques. As proof of concept, we imaged a technical semitransparent sample, an LED, to show that internal structures can be revealed with the imaging method of this document (Figure 13). Next, we imaged a tadpole of stage ~50 with the imaging method of this document, autofluorescence, and transmitted light OPT. Figure 12 shows that semitransparent samples can be imaged with the imaging method of this document; it also shows that the information it reveals is different from the transmitted light and the autofluorescence signal, underlining the complementary character of our approach.
[0175] Figure 12 illustrates live imaging of a Xenopus tropicalis embryo; different stages of the same Xenopus tropicalis embryo are shown during neurulation. Fig. 12A shows a top view from dorsal to ventral of a stage 12 embryo and Fig. 12B shows a lateral side view of Fig. 12A.
[0176] Fig. 12C shows the same embryo as in Figs. 12A and 12B after ~1.5h. Stage 14.5 is shown.
[0177] Fig. 12D shows a lateral side view of the embryo shown in Fig. 12C, and Fig. 12E shows the same embryo as in Fig. 12A, 12B, 12C or 12D after ~2.8h (relative to Figs. 12A and 12B). Stage 19 is shown. Fig. 12F is a lateral side view of Fig. 12E.
[0178] Scale bar is 500 µm in all images. Imaging conditions are summarized in Table 1.
[0179] The method shows that it is possible to image opaque samples in 3D and that the shape can be revealed in color by combining the concepts of OPT with oblique illumination and color filters, and by using the filtered back projection algorithm together with 3D rendering software. This approach overcomes the need for transparent or cleared samples and allows the analysis of the 3D morphology of opaque samples like insect cuticles or shells on the mesoscale. As live samples can be imaged, the method opens the possibility for longitudinal imaging of unaltered (non-fixed and non-cleared) samples.
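For illustration only, the following sketch outlines how these steps could be combined in a reconstruction script. It is written in MATLAB, as used for the simulations described below; the helper functions acquireProjections, backgroundImage and reconstructChannel, as well as the number of views, are hypothetical placeholders and not the implementation of this document.

% Minimal sketch (hypothetical helpers, assumed parameters): one projection
% stack per color filter is acquired, background-subtracted, reconstructed
% with filtered back projection, and merged for 3D rendering.
filters = {'red', 'green', 'blue'};
angles  = 0:0.9:359.1;                              % 400 views over 360 degrees (assumption)
volumes = cell(1, numel(filters));
for c = 1:numel(filters)
    proj       = acquireProjections(filters{c}, angles);  % hypothetical acquisition call
    proj       = proj - backgroundImage(filters{c});      % hypothetical background subtraction
    volumes{c} = reconstructChannel(proj, angles);        % slice-wise FBP (see sketch further below)
end
multicolorVolume = cat(4, volumes{:});                    % x-y-z-channel stack for rendering software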
[0180] The method enables a supplementary approach to well-established OPT and light sheet modalities and allows imaging of the sample color, which is lost in X-ray-based techniques like micro-CT.
[0181] The imaging method was applied to the surface of samples like seed cones, adult insects, resistors and Lego figurines using straightforward modifications of existing OPT hardware. The seed cone shows that it is possible to reveal complex non-convex surface structures (Figure 4G to 4L); it also demonstrates that the imaging method fills a niche for 3D imaging of relatively large samples. The resistor (Figure 4A-4F) shows that the color information can be depicted realistically, even though the result of the imaging method is an "artificial" computer-rendered object that may give a less 'vivid' impression than a photograph.
[0182] The comparison with micro-CT shows that the surfaces can be quantitative (Figure 6). The mollusk shells highlight that spectral information and quantitative intensity information can be retrieved in 3D using our method (Figure 8). Next, as demonstrated by Drosophila eye imaging, the imaging method opens the possibility to quantify morphological differences in 3D (Figures 10 and 12) and could be used for quantifying genetic interaction screens in model organisms. Genetic interaction studies in Drosophila often use eye deformations as readout. However, as compared to the 2D extended focus operations [9], which are the standard for Drosophila eye imaging, or SEM, which is tedious, the imaging method allows for imaging and quantifying the morphology of the eye straightforwardly in 3D.
[0183] The prospect of looking at the form change of samples like larva (Figure 9) offers a new perspective for biomechanical studies in small model systems and could be explored more dynamically in the future.
[0184] The Xenopus imaging shows that the method can be used to image live samples and opens the possibility for longitudinal non-destructive surface imaging of developmental processes. It also highlights the potential of the method for Xenopus embryogenesis (Figure 11) and for investigating critical steps of neurulation (Figure 12). The prevalence of neural tube defects during pregnancies [22] stresses the importance of this topic. The fact that Xenopus has recently become of interest for high content screening [27] supports the relevance of this proof of concept further, especially as the method could also be integrated into robotic workflows. For quantitative measurements of complex features on the embryo, a coordinate system such as the one developed earlier for spherical embryos [28] or other frameworks for modeling embryogenesis [29] could be applied. The possibility to image live samples underscores the non-invasive character of the method, and as reflected visible light is collected, low phototoxicity can be expected.
[0185] The combination of the method with fluorescence (Figure 10) and the images from the semitransparent tadpole sample (Figure 12) show that the approach is easily combinable with other imaging modalities and can add to the standard repertoire of imaging techniques. The fact that a modified OPT device suffices opens new applications for this straightforward imaging technique. Due to the difference in the imaging geometry, these applications are not directly available in light sheet imaging, where shadowing will play a more prominent role. However, a similar addition of a diffuse light source and rotation of the sample, sometimes already implemented in light sheet approaches, opens the possibility for a combined device incorporating the light sheet method and the imaging method of this document [30-32].
[0186] The achievable resolution is given by the optical system. Different methods for characterizing the resolution are being used [33], including the Abbe diffraction limit [34], which would be given by the wavelength used over two times the NA of the objective lens. In OPT-type imaging, the 3D reconstruction can approximate isotropic resolution if an increasing number of angles is used for the reconstruction. However, artifacts stemming from specular reflection may influence the images (Figure 5). Thus, the key for the reconstruction is to avoid highlights or specular reflections and to image diffuse reflection. Figure 1 and Figure 3 show that with the imaging chamber, specular reflection from glossy biological surfaces can be reduced efficiently.
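By way of example, and assuming a mid-visible wavelength of 550 nm together with the 0.13 NA objective mentioned in the experimental details, the Abbe limit can be estimated as follows; the values are illustrative assumptions, not measured specifications.

% Minimal sketch (assumed values): Abbe diffraction limit, i.e. the
% wavelength over two times the NA of the objective lens [34].
lambda = 550e-9;              % wavelength in meters (assumption)
NA     = 0.13;                % numerical aperture of the objective
dAbbe  = lambda / (2 * NA);   % lateral resolution limit
fprintf('Abbe limit: %.1f micrometer\n', dAbbe * 1e6);   % approx. 2.1 micrometer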
[0187] For OPT and related types of imaging, such as that taught in this document, a practical limitation is the tradeoff between depth of focus and resolution, to prevent the images being affected by out-of-focus blur. Practically this means that the availability of objective lenses and the sample geometry/size dictate the achievable resolution. The use of point spread function aware algorithms for reconstruction [35] can improve the achievable resolution and reduce out-of-focus information, allowing higher NA lenses to be used. For an overview of the voxel sizes used, please refer to Table 1. It is of note that the different wavelengths in the multicolor images will have a different resolution, with longer wavelengths being more strongly diffracted.
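A short sketch of this tradeoff is given below; it uses the textbook wave-optical approximation that the depth of field scales with the wavelength over NA squared (adopted here as an assumption for illustration, cf. [11]), so lowering the NA gains depth much faster than it loses lateral resolution.

% Minimal sketch (assumed values, textbook approximation): depth of field
% versus lateral resolution for a few candidate NAs.
lambda = 550e-9;   n = 1.0;            % wavelength and refractive index (air)
NA     = [0.05 0.13 0.30];             % candidate objective NAs (assumption)
dof    = lambda * n ./ NA.^2;          % wave-optical depth of field term
res    = lambda ./ (2 * NA);           % Abbe lateral resolution
disp(table(NA.', res.' * 1e6, dof.' * 1e6, ...
     'VariableNames', {'NA', 'resolution_um', 'DOF_um'}));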
[0188] Other schemes to retrieve the spectral information are possible, including the use of a monochromator, color cameras (Figure 14), different spectral light sources like colored LEDs, and using a second camera combined with a dispersive element to access pixel-wise spectra. Problems with balancing different color channels, similar to other color and spectral imaging techniques, can arise, and normalization of the different stacks may be required (Figure 8). Due to mechanical shifts between the repeated acquisitions for multi-color retrieval, alignment of the different color stacks may also be required and needs to be taken into account, for instance by manual alignment using landmarks either on the reconstructed images or in the 3D visualization software, akin to the OPT to CT alignment in Figure 6. For a more sophisticated way of combining colors than performed here, a phasor-based system as described earlier may be applied [36].
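One possible way to perform such a normalization is sketched below; it assumes that a region of the reconstruction (for instance the white background) can serve as a neutral reference, and it is not the normalization procedure actually used for the results in this document.

% Minimal sketch (assumption): balance reconstructed color stacks against a
% reference region that is expected to be neutral/white before merging.
function balanced = normalizeChannels(channels, refMask)
    % channels: cell array of x-y-z stacks, e.g. {red, green, blue}
    % refMask:  logical mask of the neutral reference region
    balanced = channels;
    for k = 1:numel(channels)
        ref         = mean(channels{k}(refMask));   % mean intensity in the reference
        balanced{k} = channels{k} ./ ref;            % scale channel to the reference
    end
end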
[0189] The use of an RGB camera for imaging is a straightforward way to speed up the imaging process for multicolor imaging and to reduce the number of separate channels to be recorded in the imaging method. The acquisition of a 360° view of the sample on our device with a voxel size of 37.91 in x, y and z was typically in the range of 3 min/channel. The use of a faster rotational stage that directly triggers the camera can speed this up (see Figure 14). The processing on a standard computer workstation (Dell Precision 3500) from 2010 with the filtered back projection algorithm was in a similar range.
[0190] The 3D reconstruction used is based on the filtered back projection algorithm typically used in CT (Figure 2). The fact that it can be used for reflective surfaces presents a new application for that reconstruction, as it is originally based on the idea of line integrals typically associated with the attenuation of rays traversing an object. Adaptations for reflective imaging, like filtering for small specular artifacts as in Figure 7, could be beneficial for future developments. The use of a telecentric lens might additionally improve the reconstruction because of the reduced perspective skewing in OPT and the imaging method.
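For illustration, a slice-wise filtered back projection of the reflected-light projections can be sketched as follows, using the 'iradon' command also employed in the simulations described later; the projection stack 'proj' and its orientation are assumptions, and this is not the NRecon pipeline used for the results in this document.

% Minimal sketch (assumptions as stated in the text above): slice-wise FBP
% of background-subtracted reflection projections.
% proj: rows x columns x angles stack of projection images (hypothetical).
function vol = reconstructChannel(proj, angles)
    nRows = size(proj, 1);
    nCols = size(proj, 2);
    vol = zeros(nCols, nCols, nRows, 'single');
    for row = 1:nRows
        sinogram = squeeze(proj(row, :, :));           % detector position x angle
        vol(:, :, row) = iradon(sinogram, angles, ...  % FBP with ramp (Ram-Lak) filter
                                'linear', 'Ram-Lak', 1, nCols);
    end
end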
[0191] Due to the rotation of the sample, an image resulting from the imaging method can be more complete than confocal imaging [37], where the backside is shaded due to the imaging geometry and the absorbing nature of opaque samples. However, shadowing or shielding by nested surfaces can impact the reconstruction in the imaging method. Nevertheless, a complete 360° view of non-convex samples can be recovered by the imaging method (Figure 4). For opaque samples, no information from the inside is retrieved by reflective imaging. However, from Figure 2 and Figures 12-13, it can be deduced that, in principle, semitransparent and nested surfaces can be reconstructed, provided that the reflectivity of inner layers is higher than, or at a different wavelength from, that of outer layers and the dynamic range of the system allows them to be discriminated. This opens the possibility to image internal surfaces with a different refractive index, e.g., swim bladders in fish, or compound samples where a surface of interest is wrapped in a transparent conduit.
[0192] The method of this document is different from extended focus operations, where only a 2D projection is created from the acquired 3D stack. Nevertheless, the method can benefit from similar ring illumination schemes and solutions to reduce highlights, known from materials science applications [38] and macro photography [39].
[0193] The method differs from known technologies like Optical Coherence Tomography (OCT) [40], as it does not utilize an interferometric approach. It is also different from earlier described optical reflection tomography [41], as it is not based on measuring the refractive index and the thickness of the sample. In contrast to the method outlined in this document, diffuse optical reflection microscopy utilizes a single continuous wave laser for illumination of the sample [42]. High-resolution reflection tomographic diffractive microscopy has been proposed earlier; however, a holographic approach and high NA lenses for imaging were used [43] instead of an OPT to target the mesoscale. It is also different from the recently published reflective light sheet microscopy, which aims to use reflective surfaces to increase fluorescent signals [44], or optically sectioned imaging by oblique plane microscopy [45]. The imaging of reflective samples in this application is different from the fluorescence OPT imaging described by Sharpe and colleagues [6], as here different properties of the material are probed. Nevertheless, as the reflective imaging is nondestructive, it can be combined with fluorescent OPT imaging (Figure 10). However, for nontransparent samples, only fluorescence from the surface can be retrieved.
[0194] For scanning 3D objects on the macro scale, several methods are known but differ from the method taught in this document, such as photogrammetry [46], which uses information like the focal length of the lens for its calculations. Likewise, the method is different from triangulation-based laser scanners, structure-from-motion reconstruction, modulated or structured light scanners, and point cloud scanners [47, 48], as the method uses a continuous angle acquisition and algorithms from CT processing. The imaging method is contact-free and is also different from methods combining silhouettes, as intensity information is collected. It does not work by time of flight [49], nor does it require a conoscopic crystal as in conoscopic holography [50]. Finally, no user-assisted image-based modeling is required with the use of computer-aided design (CAD) or related programs. As such, the method is not limited to the micro/mesoscale, and the principle is also applicable to larger samples, as this method is non-invasive and could be expanded to '3D-spectral-virtual photography', including applications for recording biometry data and 3D representation of goods.
[0195] The method enables the study and documentation of the 3D morphology of samples such as insect cuticles, plant seeds, live and developing Xenopus embryos, as well as mollusk shells. The ability to record the surface of a mesoscale object in 3D opens perspectives for digital repositories of zoological and botanical collections and enables a link to 3D printing of these objects. In addition, the possibility for spectral analysis can provide more insight into the pigments in the samples and may also allow applications for diagnostics of small parts in materials science, for example the amount of oxidation and point-of-failure analysis in industrial processes. Other applications may include virtual reality and numerical simulations of 3D objects, but also art and historic objects, including the analysis of coloring on ancient statuary and pottery and the teaching thereof.
[0196] The method complements other approaches, such as micro-CT [51, 52], X-ray microscopes, or light sheet microscopy, for 3D representation of the sample's surface morphology, thereby adding complete preservation of the actual characteristic color scheme without the need to use contrast agents, sample preprocessing, or digital post-processing to reintroduce the colors. The method will not reveal the inside of opaque samples, but it is cheaper than a micro-CT, can be implemented straightforwardly, and is well-suited for field applications. Our approach is compatible with recently described resources for cheap custom-built OPTs [7]. Also, for 3D rendering, open solutions like Drishti can be used [53].

Experimental Details
[0197] Device
[0198] A SkyScan 3001M OPT scanner, manufactured at Bruker micro-CT, Kontich, Belgium for BIOPTONICS (Bioptonics, MRC Technology, Crewe Road South, Edinburgh, EH4 2SP, UK), was used with the following technical modifications:
[0199] Diffuse oblique illumination
[0200] To create an even diffuse illumination, a diffuser was used, and the imaging chamber was lined with white paper. The sample was illuminated from the side with a gooseneck LED (Leica KL 200 LED) white light source. Aluminum foil on the other side of the sample reflected light onto the non-illuminated side.
[0201] Filters
[0202] A K580 from a Leitz filter slider was used as a red filter. As a green filter, a Leitz Gelbgrün 32mm/35mm color glass was used. As a blue filter, a Leitz CB 16.5 blue filter with diameter 32mm/35mm was used (for spectral information see supplementary Figure 2). The bandpass filters used in Figure 6 were a 377/50 nm filter provided by Zeiss; a 420/40 nm filter provided by Olympus; a 460/50 nm filter provided by Nikon; a 525/50 nm filter provided by Zeiss; a 600/50 nm filter provided by Olympus; and a 690/70 nm BrightLine filter provided by Semrock.
[0203] For Figure 14 we used a DFC450c camera from Leica Microsystems (Wetzlar, Germany) steered by Leica LAS software (version 4.8), attached to a Nikon (Tokyo, Japan) TE200 stand outfitted with a Nikon Plan Fluor 4x lens with 0.13 NA and 16.5 mm working distance. The sample was rotated using a Xeryon (Leuven, Belgium) XRT-U 30 rotational piezo stage.
[0204] Micro-computed tomography
[0205] Micro-CT datasets were acquired on a SkyScan 1278 (Bruker micro-CT, Kontich, Belgium) in step-and-shoot mode with the following parameters: 65 kVp X-ray source voltage and 770 mA source current combined with an X-ray filter of 1 mm aluminum, 40 ms exposure time per projection, four averages per view, acquiring projections with 0.7° increments over a total angle of 180°, resulting in reconstructed 3D datasets with 50 µm isotropic voxel size.
[0206] Image reconstruction
[0207] For the reconstruction in the imaging method of this document, as well as in the prior art OPT and micro-CT methods, NRecon 1.6.3.3 micro-CT software from Bruker was used.
[0208] 3D visualization
[0209] For the 3D visualization of the reconstructed stacks, Vision4D from Arivis 2.12.5 (Unterschleissheim, Germany) was used. It is of note that different 3D rendering software implements lighting and color in slightly different ways. Using Arivis, the best representation of the original colors is given using an inverted cubic representation of the gray levels for the maximum projection. As mentioned previously, intensity mixing depends on the model implemented by the software. However, a nonlinear intensity representation could make sense because of the nonlinear relationship of absorption. For the volumetric rendering, it was necessary to use the complementary colors (Cyan, Magenta and Yellow instead of Red, Green and Blue) for accurate color mixing. Obviously, the artificial light source and its positioning can have an influence. Also, the dimmer the imaging, the more the global brightness might need to be adapted. In the imaging, a white background was used. This means that regions as bright as the background or brighter will appear see-through. Thus, depending on the variation of brightness of the sample, the illumination needs to be adapted to low levels so as not to lose the bright regions in the volume rendering. This can pose a limitation depending on the dynamic range of the camera and the possibility to illuminate the background as well.
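The complementary-color step can be illustrated by the small sketch below; it simply inverts normalized intensities and is an assumption for illustration, not the color mixing model implemented in Arivis.

% Minimal sketch (assumption): map reconstructed R, G, B stacks to their
% complements (cyan, magenta, yellow) for rendering against a white background.
function cmy = toComplementary(rgb)
    % rgb: x-y-z-3 array with intensities scaled to [0, 1]
    cmy = 1 - rgb;    % per-voxel, per-channel complementary color
end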
[0210] The volumes from the CT imaging method and the imaging method of this document were aligned using the ec-CLEM plugin from Icy [54]. Five landmark points were added in order to compute the transformation in 3D.
[0211] Simulation
[0212] Simulations were performed using MATLAB 2016b (MathWorks, MA, USA), the MATLAB toolbox DIPimage 2.8 (TU Delft) and Fiji/ImageJ [55]. For the Radon and inverse Radon transforms, the 'radon' and 'iradon' commands from the Image Processing Toolbox were used. The simulation uses the Shepp-Logan head phantom [56].
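The kind of simulation described above can be reproduced with a few lines of MATLAB; the phantom size and angular sampling below are illustrative assumptions.

% Minimal sketch (assumed parameters): forward Radon transform of the
% Shepp-Logan phantom followed by filtered back projection with 'iradon'.
P      = phantom('Modified Shepp-Logan', 256);        % test image, cf. [56]
angles = 0:0.45:179.55;                               % 400 projection angles (assumption)
sino   = radon(P, angles);                            % simulated projections (sinogram)
rec    = iradon(sino, angles, 'linear', 'Ram-Lak');   % filtered back projection
figure; imshowpair(P, rec, 'montage');                % compare phantom and reconstruction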
[0213] Samples
[0214] The Metasequoia seed cone was collected at the Leuven botanical gardens.
[0215] The resistor is a 15 kilo Ohm resistor with a tolerance of 5% and has the four-band resistor code: brown, green, red, and gold; purchased from R&S (RS Components GmbH Hessenring 13b, 64546 Morfelden-Walldorf).
[0216] Figurines are from LEGO™ (Billund, Denmark).
[0217] The Chrysolina americana sample was collected approximately at 50°85'82.49"N; 47°04'25.3"E.
[0218] The Drosophila samples were fixed at -80 °C to maintain the morphology and fluorescence. The fly strain used in Figure 10A-D expresses GFP in the eyes in a white-eyed background (Genotype: y[1] M{vas-int.Dm}ZH-2A w[*]; Bloomington stock centre # 24481). Fly strains used in Figure 10E-H were Canton-S (CS); Kyoto stock center # 105666, and w; GlaBC/CyO (Bloomington Drosophila stock center # 6662).
[0219] Grids were square mesh EM support grids, 400 copper mesh with 26 µm bars (FCF 400 - Cu - SB, Electron Microscopy Sciences) and a finder grid with 17 µm bars (Agar Scientific).
[0220] Beads were magnetic Dynabeads 500 with an iron core and ~5 µm size (Thermo Fisher).
[0221] A Pollia dorbignyi [57] shell was used for the spectral imaging and was obtained at 42°21’49.7"N; 3°09'47.2"E.
[0222] Samples were mounted using plasticine.
[0223] For Figure 9, a 3rd instar larva was collected from the food of an ongoing fly culture and washed in tap water. The wet larva was stuck to the insect pin by adhesion. For imaging, the larva was exposed to an atmosphere of CO2 to stop it from moving during the acquisition. A 0.2 M NaN3 solution for 30 min was used to relax the muscles [58].

[0224] Xenopus tropicalis embryos were placed in FEP tubes with 1.6 mm diameter for imaging. Fixed embryos were imaged in PBS, while living ones were kept in 1/9th diluted Modified frog Ringer (MR: 0.1 M NaCl, 1.8 mM KCl, 2.0 mM CaCl2, 1.0 mM MgCl2, 5.0 mM HEPES-NaOH (pH 7.6), or 300 mg/l NaHCO3) solution. The tube with the frog embryo was submersed in a buffer-containing glass cuvette during acquisition. For the live imaging, a crest3-gfp transgenic reporter line was crossed to the F1 generation, and the offspring was imaged.

[0225] Table 1. Imaging conditions used.
[0226] Funding
[0227] The work was supported by the ISPAMM project (An Image Storage Platform for Analysis Management and Mining; AKUL/13/39) and by VIB.
[0228] Data collected from plants
[0229] For the plant samples, only shed deadwood was used. Local, national and international guidelines and legislation for the plants imaged in the study were adhered to.

[0230] Reference Numerals
• 100 Imaging Chamber
• 110 Reflective Surface
• 120 Light Source
• 125 Light
• 130 Sample
• 140 Diffuser
• 150 Detector
• 160 Illumination direction
References
1. Spalteholz W: Über das durchsichtigmachen von menschlichen und tierischen praparaten und seine theoretischen bedingungen, 2. aufl. edn. n.p.; 1914.
2. Dodt HU, Leischner U, Schierloh A, Jahrling N, Mauch CP, Deininger K, Deussing JM, Eder M, Zieglgansberger W, Becker K: Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain. Nature methods 2007, 4(4):331-336.
3. Richardson DS, Lichtman JW: Clarifying Tissue Clearing. Cell 2015, 162(2):246-257.
4. Siedentopf H, Zsigmondy R: Über Sichtbarmachung und Grossenbestimmung ultramikroskopischer Teilchen, mit besonderer Anwendung auf Goldrubinglaser. Annalen der Physik 1903, 4(10):1-39.
5. Huisken J, Swoger J, Del Bene F, Wittbrodt J, Stelzer EH: Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 2004, 305(5686):1007-1009.
6. Sharpe J, Ahlgren U, Perry P, Hill B, Ross A, Hecksher-Sorensen J, Baldock R, Davidson D: Optical projection tomography as a tool for 3D microscopy and gene expression studies. Science 2002, 296(5567):541-545.
7. Wong MD, Dazai J, Walls JR, Gale NW, Henkelman RM: Design and implementation of a custom built optical projection tomography system. PloS one 2013, 8(9):e73491.
8. Arranz A, Dong D, Zhu S, Savakis C, Tian J, Ripoll J: In-vivo optical tomography of small scattering specimens: time-lapse 3D imaging of the head eversion process in Drosophila melanogaster. Scientific reports 2014, 4:7325.
9. Zalevsky Z: Extended depth of focus imaging: a review. In: 2010. SPIE: 11.
10. Clark DP, Badea CT: Micro-CT of rodents: state-of-the-art and future perspectives. Physica medica : PM : an international journal devoted to the applications of physics to medicine and biology : official journal of the Italian Association of Biomedical Physics 2014, 30(6):619-634.
11. Hecht E: Optics, 4th edn. Reading, Mass.: Addison-Wesley; 2002.
12. Kak AC, Slaney M: Principles of computerized tomographic imaging. Philadelphia: Society for Industrial and Applied Mathematics; 2001.
13. Pan XC, Sidky EY, Vannier M: Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? Inverse Probl 2009, 25(12).
14. Lambert JH, DiLaura DL: Photometry, or, On the measure and gradations of light, colors, and shade : translation from the Latin of Photometria, sive, De mensura et gradibus luminis, colorum et umbrae. New York: Illuminating Engineering Society of North America; 2001.
15. Maxwell JC, Niven WD: The scientific papers of James Clerk Maxwell. Mineola, N.Y.: Dover Publications; 2003.
16. Truong TV, Supatto W, Koos DS, Choi JM, Fraser SE: Deep and fast live imaging with two-photon scanned light-sheet microscopy. Nature methods 2011, 8(9):757-760.
17. Royer LA, Lemon WC, Chhetri RK, Wan Y, Coleman M, Myers EW, Keller PJ: Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms. Nature biotechnology 2016, 34(12):1267-1278.
18. Papan C, Boulat B, Velan SS, Fraser SE, Jacobs RE: Time-lapse tracing of mitotic cell divisions in the early Xenopus embryo using microscopic MRI. Developmental dynamics : an official publication of the American Association of Anatomists 2006, 235(11):3059-3062.
19. Ewald AJ, Peyrot SM, Tyszka JM, Fraser SE, Wallingford JB: Regional requirements for Dishevelled signaling during Xenopus gastrulation: separable effects on blastopore closure, mesendoderm internalization and archenteron formation. Development 2004, 131(24):6195-6209.
20. Ewald AJ, McBride H, Reddington M, Fraser SE, Kerschmann R: Surface imaging microscopy, an automated method for visualizing whole embryo samples in three dimensions at high resolution. Developmental dynamics : an official publication of the American Association of Anatomists 2002, 225(3):369-375.
21. Rolo A, Savery D, Escuin S, de Castro SC, Armer HE, Munro PM, Mole MA, Greene ND, Copp AJ:
Regulation of cell protrusions by small GTPases during fusion of the neural folds. eLife 2016, 5:el3273.
22. Copp AJ, Stanier P, Greene ND: Neural tube defects: recent advances, unsolved questions, and controversies. The Lancet Neurology 2013, 12(8):799-810.
23. National Center on Birth Defects and Developmental Disabilities, CDC: Neural tube defects (NTDs) rates, 1995-1999. Teratology 2002, 66 Suppl 1:S212-217.
24. Davidson LA, Keller RE: Neural tube closure in Xenopus laevis involves medial migration, directed protrusive activity, cell intercalation and convergent extension. Development 1999, 126(20):4547-4556.
25. Borodinsky LN: Xenopus laevis as a Model Organism for the Study of Spinal Cord Formation, Development, Function and Regeneration. Frontiers in neural circuits 2017, 11:90.
26. Schmidt R, Strahle U, Scholpp S: Neurogenesis in zebrafish - from embryo to adult. Neural development 2013, 8:3.
27. Maia LA, Velloso I, Abreu JG: Advances in the use of Xenopus for successful drug screening.
Expert opinion on drug discovery 2017, 12(11):1153-1159.
28. Tyszka JM, Ewald AJ, Wallingford JB, Fraser SE: New tools for visualization and analysis of morphogenesis in spherical embryos. Developmental dynamics : an official publication of the American Association of Anatomists 2005, 234(4):974-983.
29. Khairy K, Lemon W, Amat F, Keller PJ: A Preferred Curvature-Based Continuum Mechanics Framework for Modeling Embryogenesis. Biophysical journal 2018, 114(2):267-277.
30. Tomer R, Khairy K, Keller PJ: Light sheet microscopy in cell biology. Methods in molecular biology 2013, 931:123-137.
31. Bassi A, Schmid B, Huisken J: Optical tomography complements light sheet microscopy for in toto imaging of zebrafish development. Development 2015, 142(5):1016-1020.
32. Power RM, Huisken J: A guide to light-sheet fluorescence microscopy for multiscale imaging.
Nature methods 2017, 14(4):360-373.
33. Paparelli L, Corthout N, Pavie B, Annaert W, Munck S: Analyzing Protein Clusters on the Plasma Membrane: Application of Spatial Statistical Analysis Methods on Super-Resolution Microscopy Images. Advances in anatomy, embryology, and cell biology 2016, 219:95-122.
34. Abbe E: Contributions to the theory of the microscope and the microscopic perception (translated from German). Arch Mikr Anat 1873, 9:413-468.
35. Chan KG, Liebling M: A Point-Spread-Function-Aware Filtered Backprojection Algorithm for Focal-Plane-Scanning Optical Projection Tomography. I S Biomed Imaging 2016:253-256.
36. Cutrale F, Trivedi V, Trinh LA, Chiu CL, Choi JM, Artiga MS, Fraser SE: Hyperspectral phasor analysis enables multiplexed 5D in vivo imaging. Nature methods 2017, 14(2):149-152.
37. Pawley JB: Handbook of biological confocal microscopy, 3rd edn. New York, NY: Springer; 2006.
38. STOPPE L, HUSEMANN C, Singer W: Verfahren zum erzeugen eines ergebnisbilds und optische vorrichtung. In. : Google Patents; 2016.
39. Greenberg B: Apparatus and method for detection of cervical dilation during labor. In. : Google Patents; 2008.
40. Boyd S, Brancato R, Straatsma BR: Optical coherence tomography : atlas and text. Clayton, Panama: Jaypee Highlights Medical; 2009.
41. Yoden K, Ohmi M, Ohnishi Y, Kunizawa N, Haruna M: An approach to optical reflection tomography along the geometrical thickness. Opt Rev 2000, 7(5):402-405.
42. Cheng X, Boas DA: Diffuse optical reflection tomography with continuous-wave illumination.
Optics express 1998, 3(3):118-123.
43. Sarmis M, Simon B, Debailleul M, Colicchio B, Georges V, Delaunay JJ, Haeberle O: High resolution reflection tomographic diffractive microscopy. J Mod Optic 2010, 57(9):740-745.
44. Wu Y, Kumar A, Smith C, Ardiel E, Chandris P, Christensen R, Rey-Suarez I, Guo M, Vishwasrao HD, Chen J et al: Reflective imaging improves spatiotemporal resolution and collection efficiency in light sheet microscopy. Nature communications 2017, 8(1):1452.
45. Dunsby C: Optically sectioned imaging by oblique plane microscopy. Optics express 2008, 16(25):20306-20316.
46. Kraus K: Band 1 Photogrammetrie. Geometrische Informationen aus Photographien und Laserscanneraufnahmen. , vol. 1. Berlin, Boston: De Gruyter; 2004.
47. Abdel-Bary EBRAHIM M: 3D Laser Scanners' Techniques Overview. International Journal of Science and Research (IJSR) 2015, 4(10).
48. Teutsch C: Model-based Analysis and Evaluation of Point Sets from Optical 3D Laser Scanners.
Magdeburg: Shaker Verlag; 2007.
49. Goyer GG, Watson R: The Laser and its Application to Meteorology. Bulletin of the American Meteorological Society 1963, 44(9):564-575.
50. Hartshorne NH, Stuart A: Crystals and the Polarizing Microscope. London: Arnold; 1970.
51. Verdú JR, Alba-Tercedor J, Jimenez-Manrique M: Evidence of Different Thermoregulatory Mechanisms between Two Sympatric Scarabaeus Species Using Infrared Thermography and Microcomputer Tomography. PloS one 2012, 7(3):e33914.
52. Metscher BD: MicroCT for comparative morphology: simple staining methods allow high- contrast 3D imaging of diverse non-mineralized animal tissues. BMC physiology 2009, 9:11.
53. Limaye A: Drishti, A Volume Exploration and Presentation Tool. Proc Spie 2012, 8506.
54. de Chaumont F, Dallongeville S, Chenouard N, Herve N, Pop S, Provoost T, Meas-Yedid V,
Pankajakshan P, Lecomte T, Le Montagner Y et al: Icy: an open bioimage informatics platform for extended reproducible research. Nature methods 2012, 9(7):690-696.
55. Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, Preibisch S, Rueden C, Saalfeld S, Schmid B et al: Fiji: an open-source platform for biological-image analysis. Nature methods 2012, 9(7):676-682.
56. Jain AK: Fundamentals of digital image processing. Englewood Cliffs, NJ: Prentice Hall; 1989.
57. Payraudeau BC: Catalogue descriptif et methodique des annelides et des mollusques de I'ile de
Corse; avec huit planches representant quatre-vingt-huit especes, dont soixante-huit nouvelles. Paris: Bechet; 1826.
58. Wood WB: The Nematode Caenorhabditis elegans. Cold Spring Harbor, N.Y.: Cold Spring Harbor Laboratory; 1988.

Claims

1. A method for imaging a sample (130) comprising:
illuminating (10), in an imaging chamber (100), the sample (130) with diffused light (125) from a light source (120);
imaging (20) reflected light in a detector (150);
determining the reflected light from the sample (130); and
constructing (30, 35, 40) an image of the sample from the determined reflected light from the sample (130).
2. The method of claim 1, wherein the determining of the reflected light from the sample (130) is carried out by subtracting background light from the captured reflected light.
3. The method of claim 1 or 2, further comprising capturing (15) red, green and blue components of the reflected light separately.
4. The method of any of the above claims, wherein the constructing of the image comprises using (30) a back-projection algorithm.
5. The method of any of the above claims, further comprising visualising (45) a 3D image of the sample (130).
6. The method of any of the above claims, further comprising capturing fluorescent radiation from the sample (130).
7. The method of any of the above claims, further comprising one of rotation (25) or translation of the sample (130).
8. A device for imaging a sample (130) comprising:
an imaging chamber (100) holding the sample (130);
a diffused light source (120);
a detector (150) arranged to capture reflected light from the sample (130); and
a processor adapted to construct an image of the sample (130) from the captured reflected light.
9. The device of claim 8, wherein the interior of the imaging chamber (100) is lined with reflecting materials.
10. The device of claim 8 or 9, wherein the diffused light source (120) comprises an LED light source.
11. The device of one of claims 8 to 10, further comprising a diffuser (140) between the imaging chamber (100) and the diffused light source (120).
12. The device of one of claims 8 to 11, further comprising a plurality of color filters adapted to be inserted between the diffused light source (120) and the imaging chamber (100).
13. The device of one of claims 8 to 12, wherein the detector (150) is adapted to capture light of one or more spectral bandwidths.
PCT/EP2019/066931 2018-06-27 2019-06-26 A label-free multicolor optical surface tomography imaging method for nontransparent 3d samples WO2020002391A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19732378.5A EP3813641A1 (en) 2018-06-27 2019-06-26 A label-free multicolor optical surface tomography imaging method for nontransparent 3d samples

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP18180052 2018-06-27
EP18180052.5 2018-06-27
EP18207442 2018-11-21
EP18207442.7 2018-11-21

Publications (1)

Publication Number Publication Date
WO2020002391A1 true WO2020002391A1 (en) 2020-01-02

Family

ID=66999845

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/066931 WO2020002391A1 (en) 2018-06-27 2019-06-26 A label-free multicolor optical surface tomography imaging method for nontransparent 3d samples

Country Status (2)

Country Link
EP (1) EP3813641A1 (en)
WO (1) WO2020002391A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115644787A (en) * 2022-11-01 2023-01-31 杭州微新医疗科技有限公司 Position adjusting mechanism of microscope of expander
CN115886701A (en) * 2022-11-01 2023-04-04 杭州微新医疗科技有限公司 Vagina inspection device

Non-Patent Citations (58)

* Cited by examiner, † Cited by third party
Title
"National Center on Birth D, Developmental Disabilities CDC: Neural tube defects (NTDs) rates", TERATOLOGY, vol. 66, no. 1, 1995, pages 212 - 217
ABBE E: "Conributions to the theory of the microscope and the microscopic perception (translated from German", ARCH MIKRANAT, vol. 1873, no. 9, pages 413 - 468
ABDEL-BARY EBRAHIM M: "3D Laser Scanners' Techniques Overview", INTERNATIONAL JOURNAL OF SCIENCE AND RESEARCH (USR), vol. 4, no. 10, 2015
ARRANZ ADONG DZHU SSAVAKIS CTIAN JRIPOLL J: "In-vivo optical tomography of small scattering specimens: time-lapse 3D imaging of the head eversion process in Drosophila melanogaster", SCIENTIFIC REPORTS, vol. 4, 2014, pages 7325
BASSI ASCHMID BHUISKEN J: "Optical tomography complements light sheet microscopy for in toto imaging of zebrafish development", DEVELOPMENT, vol. 142, no. 5, 2015, pages 1016 - 1020
BORODINSKY LN: "Xenopus laevis as a Model Organism for the Study of Spinal Cord Formation, Development, Function and Regeneration", FRONTIERS IN NEURAL CIRCUITS, vol. 11, 2017, pages 90
BOYD SBRANCATO RSTRAATSMA BR: "Optical coherence tomography : atlas and text", JAYPEE HIGHLIGHTS MEDICAL, 2009
CHAN KGLIEBLING M: "A Point-Spread-Function-Aware Filtered Backprojection Algorithm for Focal-Plane-Scanning Optical Projection Tomography", IS BIOMED IMAGING, 2016, pages 253 - 256
CHENG XBOAS DA: "Diffuse optical reflection tomography with continuous-wave illumination", OPTICS EXPRESS, vol. 3, no. 3, 1998, pages 118 - 123
CLARK DPBADEA CT: "Micro-CT of rodents: state-of-the-art and future perspectives", PHYSICA MEDICA : PM: AN INTERNATIONAL JOURNAL DEVOTED TO THE APPLICATIONS OF PHYSICS TO MEDICINE AND BIOLOGY : OFFICIAL JOURNAL OF THE ITALIAN ASSOCIATION OF BIOMEDICAL PHYSICS, vol. 30, no. 6, 2014, pages 619 - 634
COPP AJSTANIER PGREENE ND: "Neural tube defects: recent advances, unsolved questions, and controversies", THE LANCET NEUROLOGY, vol. 12, no. 8, 2013, pages 799 - 810
CUTRALE FTRIVEDI VTRINH LACHIU CLCHOI JMARTIGA MSFRASER SE: "Hyperspectral phasor analysis enables multiplexed 5D in vivo imaging", NATURE METHODS, vol. 14, no. 2, 2017, pages 149 - 152
DAVIDSON LAKELLER RE: "Neural tube closure in Xenopus laevis involves medial migration, directed protrusive activity, cell intercalation and convergent extension", DEVELOPMENT, vol. 126, no. 20, 1999, pages 4547 - 4556
DE CHAUMONT FDALLONGEVILLE SCHENOUARD NHERVE NPOP SPROVOOST TMEAS-YEDID VPANKAJAKSHAN PLECOMTE TLE MONTAGNER Y ET AL.: "Icy: an open bioimage informatics platform for extended reproducible research", NATURE METHODS, vol. 9, no. 7, 2012, pages 690 - 696
DODT HULEISCHNER USCHIERLOH AJAHRLING NMAUCH CPDEININGER KDEUSSING JMEDER MZIEGLGANSBERGER WBECKER K: "Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain", NATURE METHODS, vol. 4, no. 4, 2007, pages 331 - 336, XP055590656, DOI: doi:10.1038/nmeth1036
DUNSBY C: "Optically sectioned imaging by oblique plane microscopy", OPTICS EXPRESS, vol. 16, no. 25, 2008, pages 20306 - 20316
EWALD AJMCBRIDE HREDDINGTON MFRASER SEKERSCHMANN R: "Surface imaging microscopy, an automated method for visualizing whole embryo samples in three dimensions at high resolution", DEVELOPMENTAL DYNAMICS : AN OFFICIAL PUBLICATION OF THE AMERICAN ASSOCIATION OF ANATOMISTS, vol. 225, no. 3, 2002, pages 369 - 375, XP002421548, DOI: doi:10.1002/dvdy.10169
EWALD AJPEYROT SMTYSZKA JMFRASER SEWALLINGFORD JB: "Regional requirements for Dishevelled signaling during Xenopus gastrulation: separable effects on blastopore closure, mesendoderm internalization and archenteron formation", DEVELOPMENT, vol. 131, no. 24, 2004, pages 6195 - 6209
GOYER GGWATSON R: "The Laser and its Application to Meteorology", BULLETIN OF THE AMERICAN METEOROLOGICAL SOCIETY, vol. 44, no. 9, 1963, pages 564 - 575
GREENBERG B: "Apparatus and method for detection of cervical dilation during labor", GOOGLE PATENTS, 2008
HARTSTHORNE NHSTUART A: "Crystals and the Polarizing Microscope", 1970
HUISKEN JSWOGER JDEL BENE FWITTBRODT JSTELZER EH: "Optical sectioning deep inside live embryos by selective plane illumination microscopy", SCIENCE, vol. 305, no. 5686, 2004, pages 1007 - 1009
JAIN AK: "Fundamentals of digital image processing", 1989, PRENTICE HALL
KERVRANN CHARLES ET AL: "A Guided Tour of Selected Image Processing and Analysis Methods for Fluorescence and Electron Microscopy", IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, IEEE, US, vol. 10, no. 1, 4 December 2015 (2015-12-04), pages 6 - 30, XP011596775, ISSN: 1932-4553, [retrieved on 20160121], DOI: 10.1109/JSTSP.2015.2505402 *
KHAIRY KLEMON WAMAT FKELLER PJ: "A Preferred Curvature-Based Continuum Mechanics Framework for Modeling Embryogenesis", BIOPHYSICAL JOURNAL, vol. 114, no. 2, 2018, pages 267 - 277
KRAUS K: "Band 1 Photogrammetrie. Geometrische Informationen aus Photographien und Laserscanneraufnahmen", vol. 1, 2004, DE GRUYTER
LAMBERT JHDILAURA DL: "Principles of computerized tomographic imaging", 2001, ILLUMINATING ENGINEERING SOCIETY OF NORTH AMERICA, article "On the measure and gradations of light, colors, and shade : translation from the Latin of Photometria, sive, De mensura et gradibus luminis, colorum et umbrae"
LIMAYE A: "Drishti, A Volume Exploration and Presentation Tool", PROC SPIE, 2012, pages 8506
MAIA LAVELLOSO IABREU JG: "Advances in the use of Xenopus for successful drug screening", EXPERT OPINION ON DRUG DISCOVERY, vol. 12, no. 11, 2017, pages 1153 - 1159
MATTHIAS RIECKHER ET AL: "Microscopic Optical Projection Tomography In Vivo", PLOS ONE, vol. 6, no. 4, 29 April 2011 (2011-04-29), pages e18963, XP055529707, DOI: 10.1371/journal.pone.0018963 *
MAXWELL JCNIVEN WD: "The scientific papers of James Clerk Maxwell", 2003, DOVER PUBLICATIONS
METSCHER BD: "MicroCT for comparative morphology: simple staining methods allow highcontrast 3D imaging of diverse non-mineralized animal tissues", BMC PHYSIOLOGY, vol. 9, 2009, pages 11, XP021058145, DOI: doi:10.1186/1472-6793-9-11
PAN XCSIDKY EYVANNIER M: "Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?", INVERSE PROBL, vol. 25, no. 12, 2009, XP020167541
PAPAN CBOULAT BVELAN SSFRASER SEJACOBS RE: "Time-lapse tracing of mitotic cell divisions in the early Xenopus embryo using microscopic MRI", DEVELOPMENTAL DYNAMICS : AN OFFICIAL PUBLICATION OF THE AMERICAN ASSOCIATION OF ANATOMISTS, vol. 235, no. 11, 2006, pages 3059 - 3062
PAPARELLI LCORTHOUT NPAVIE BANNAERT WMUNCK S: "Analyzing Protein Clusters on the Plasma Membrane: Application of Spatial Statistical Analysis Methods on Super-Resolution Microscopy Images", ADVANCES IN ANATOMY, EMBRYOLOGY, AND CELL BIOLOGY, vol. 219, 2016, pages 95 - 122
PAWLEY JB: "Handbook of biological confocal microscopy", 2006, SPRINGER
POWER RMHUISKEN J: "A guide to light-sheet fluorescence microscopy for multiscale imaging", NATURE METHODS, vol. 14, no. 4, 2017, pages 360 - 373, XP055506998, DOI: doi:10.1038/nmeth.4224
RICHARDSON DSLICHTMAN JWCLARIFYING TISSUE CLEARING, CELL, vol. 162, no. 2, 2015, pages 246 - 257
ROLO ASAVERY DESCUIN SDE CASTRO SCARMER HEMUNRO PMMOLE MAGREENE NDCOPP AJ: "Regulation of cell protrusions by small GTPases during fusion of the neural folds", ELIFE, vol. 5, 2016, pages e13273
ROYER LALEMON WCCHHETRI RKWAN YCOLEMAN MMYERS EWKELLER PJ: "Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms", NATURE BIOTECHNOLOGY, vol. 34, no. 12, 2016, pages 1267 - 1278
SARMIS MSIMON BDEBAILLEUL MCOLICCHIO BGEORGES VDELAUNAY JJHAEBERLE O: "High resolution reflection tomographic diffractive microscopy", J MOD OPTIC, vol. 57, no. 9, 2010, pages 740 - 745
SCHINDELIN JARGANDA-CARRERAS IFRISE EKAYNIG VLONGAIR MPIETZSCH TPREIBISCH SRUEDEN CSAALFELD SSCHMID B ET AL.: "Fiji: an open-source platform for biological-image analysis", NATURE METHODS, vol. 9, no. 7, 2012, pages 676 - 682, XP055343835, DOI: doi:10.1038/nmeth.2019
SCHMIDT RSTRAHLE USCHOLPP S: "Neurogenesis in zebrafish - from embryo to adult", NEURAL DEVELOPMENT, vol. 8, 2013, pages 3, XP021142592, DOI: doi:10.1186/1749-8104-8-3
SHARPE JAHLGREN UPERRY PHILL BROSS AHECKSHER-SORENSEN JBALDOCK RDAVIDSON D: "Optical projection tomography as a tool for 3D microscopy and gene expression studies", SCIENCE, vol. 296, no. 5567, 2002, pages 541 - 545, XP001152115, DOI: doi:10.1126/science.1068206
SIEDENTOPF HZSIGMONDY R: "Uber Sichtbarmachung und Grossenbestimmung ultramikroskopischer Teilchen, mit besonderer Anwendung auf Goldrubinglaser", ANNALEN DER PHYSIK, vol. 4, no. 10, 1903, pages 1 - 39
SPALTEHOLZ W: "Uber das durchsichtigmachen von menschlichen und tierischen praparaten und seine theoretischen bedingungen", 1914
STEPHEN DOW: "Digital Photography How-To: Building a Light Tent - CreativePro.com", 14 March 2003 (2003-03-14), XP055530183, Retrieved from the Internet <URL:https://creativepro.com/digital-photography-how-to-building-a-light-tent/> [retrieved on 20181204] *
STOPPE LHUSEMANN CSINGER W: "Verfahren zum erzeugen eines ergebnisbilds und optische vorrichtung", GOOGLE PATENTS, 2016
TEUTSCH C: "Model-based Analysis and Evaluation of Point Sets from Optical 3D Laser Scanners", 2007, SHAKER VERLAG
TOMER RKHAIRY KKELLER PJ: "Light sheet microscopy in cell biology", METHODS IN MOLECULAR BIOLOGY, vol. 931, 2013, pages 123 - 137
TRUONG TVSUPATTO WKOOS DSCHOI JMFRASER SE: "Deep and fast live imaging with two-photon scanned light-sheet microscopy", NATURE METHODS, vol. 8, no. 9, 2011, pages 757 - 760
TYSZKA JMEWALD AJWALLINGFORD JBFRASER SE: "New tools for visualization and analysis of morphogenesis in spherical embryos", DEVELOPMENTAL DYNAMICS: AN OFFICIAL PUBLICATION OF THE AMERICAN ASSOCIATION OF ANATOMISTS, vol. 234, no. 4, 2005, pages 974 - 983
VERDU JRALBA-TERCEDOR JJIMENEZ-MANRIQUE M: "Evidence of Different Thermoregulatory Mechanisms between Two Sympatric Scarabaeus Species Using Infrared Thermography and MicroComputer Tomography", PLOS ONE, vol. 7, no. 3, 2012, pages e33914
WONG MDDAZAI JWALLS JRGALE NWHENKELMAN RM: "Design and implementation of a custom built optical projection tomography system", PLOS ONE, vol. 8, no. 9, 2013, pages e73491
WOOD WB: "The Nematode Caenorhabditis elegans", 1988, COLD SPRING HARBOR LABORATORY
WU YKUMAR ASMITH CARDIEL ECHANDRIS PCHRISTENSEN RREY-SUAREZ IGUO MVISHWASRAO HDCHEN J ET AL.: "Reflective imaging improves spatiotemporal resolution and collection efficiency in light sheet microscopy", NATURE COMMUNICATIONS, vol. 8, no. 1, 2017, pages 1452
YODEN KOHMI MOHNISHI YKUNIZAWA NHARUNA M: "An approach to optical reflection tomography along the geometrical thickness", OPT REV, vol. 7, no. 5, 2000, pages 402 - 405, XP019353822, DOI: doi:10.1007/s10043-000-0402-5
ZALEVSKY Z: "Extended depth of focus imaging: a review", SPIE, 2010, pages 11

Also Published As

Publication number Publication date
EP3813641A1 (en) 2021-05-05

Similar Documents

Publication Publication Date Title
Stelzer et al. Light sheet fluorescence microscopy
Klaus et al. Three‐dimensional visualization of insect morphology using confocal laser scanning microscopy
Kerstens et al. A label-free multicolor optical surface tomography (ALMOST) imaging method for nontransparent 3D samples
Ford et al. Phase-gradient microscopy in thick tissue with oblique back-illumination
Rieckher et al. Microscopic optical projection tomography in vivo
Handschuh et al. Showing their true colors: a practical approach to volume rendering from serial sections
Wangpraseurt et al. In vivo imaging of coral tissue and skeleton with optical coherence tomography
Blagodatski et al. Under-and over-water halves of Gyrinidae beetle eyes harbor different corneal nanocoatings providing adaptation to the water and air environments
CN109923401A (en) Hyperspectral imager
Birk et al. Improved reconstructions and generalized filtered back projection for optical projection tomography
Pégard et al. Flow-scanning optical tomography
Kirkbride et al. The application of laser scanning confocal microscopy to the examination of hairs and textile fibers: An initial investigation
McConnell et al. Application of the Mesolens for subcellular resolution imaging of intact larval and whole adult Drosophila
EP3813641A1 (en) A label-free multicolor optical surface tomography imaging method for nontransparent 3d samples
Harvey et al. Directional reflectance and milli-scale feather morphology of the African Emerald Cuckoo, Chrysococcyx cupreus
WO2021150973A1 (en) Intelligent automated imaging system
FR2982384A1 (en) DEVICE FOR VISUALIZING A VIRTUAL BLADE
Land et al. The quality of vision in the ctenid spider Cupiennius salei
CN107014755A (en) A kind of system differentiated for algae with algae proliferation non-destructive monitoring situation
Bryson-Richardson et al. Optical projection tomography for spatio-temporal analysis in the zebrafish
Munck et al. Challenges and advances in optical 3D mesoscale imaging
Zhu et al. Smartphone-based microscopes
Berry et al. Form vision in the insect dorsal ocelli: an anatomical and optical analysis of the locust ocelli
Nilsson The transparent compound eye of Hyperia (Crustacea): Examination with a new method for analysis of refractive index gradients
Rigosi et al. A new, fluorescence-based method for visualizing the pseudopupil and assessing optical acuity in the dark compound eyes of honeybees and other insects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19732378

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019732378

Country of ref document: EP

Effective date: 20210127