WO2016020671A1 - Methods and apparatus for determining image data

Methods and apparatus for determining image data

Info

Publication number
WO2016020671A1
Authority
WO
WIPO (PCT)
Prior art keywords
radiation
diffraction pattern
estimate
determining
plane
Prior art date
Application number
PCT/GB2015/052256
Other languages
English (en)
Inventor
John Marius Rodenburg
Peng Li
Darren John BATEY
Original Assignee
The University Of Sheffield
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The University Of Sheffield filed Critical The University Of Sheffield
Publication of WO2016020671A1


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/20Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by using diffraction of the radiation by the materials, e.g. for investigating crystal structure; by using scattering of the radiation by the materials, e.g. for investigating non-crystalline materials; by using reflection of the radiation by the materials

Definitions

  • the present invention relates to methods and apparatus for determining one or more object characteristics.
  • the present invention relates to methods and apparatus for determining image data from which an image of an object may be generated.
  • Some embodiments of the present invention relate to methods and apparatus for determining image data from which images of three dimensional objects may be generated.
  • WO 2005/106531 discloses a method for providing image data which may be used to construct an image of an object based on a measured intensity of several diffraction patterns.
  • the several diffraction patterns are formed with an object at each of a plurality of positions with respect to incident radiation.
  • This method is known as a ptychographical iterative engine (PIE).
  • PIE ptychographical iterative engine
  • an iterative phase-retrieval method is used to determine an estimate of the absorption and phase-change caused by the object to a wave field as it passes through or is reflected by the object.
  • This method uses redundancy in the plurality of diffraction patterns to determine the estimate.
  • WO 2010/064051 which is incorporated herein by reference for all purposes, discloses an enhanced PIE (ePIE) method wherein it is not necessary to know or estimate a probe function. Instead a process is disclosed in which the probe function is iteratively calculated step by step with a running estimate of the probe function being utilised to determine running estimates of an object function associated with a target object.
  • ePIE enhanced PIE
  • Figure 1 shows an apparatus according to an embodiment of the present invention
  • Figure 2 shows a method according to an embodiment of the invention
  • Figure 3 shows two recorded low resolution images from different tilt angles according to an embodiment of the invention
  • Figure 4 shows image data produced by an embodiment of the invention.
  • FIG. 1 illustrates an apparatus according to an embodiment of the invention generally denoted with reference 100.
  • the apparatus 100 comprises a radiation source 110 for providing incident radiation, a focussing element 130, a filtering element 140 and a detector 150 for detecting an intensity of radiation.
  • the radiation source 110 is a source of radiation 111, 112, 113 for directing to a target object 120.
  • the radiation source 110 may comprise one or more radiation sources such as, where the radiation is light, one or more illumination devices.
  • the radiation source 110 comprises a plurality of illumination devices each configured to provide respective illumination directed to the target object 120.
  • each illumination device may be an LED although it will be realised that other sources of illumination may be used.
  • the radiation source may comprise one or more devices for directing the radiation toward the target object at a desired incidence angle.
  • a radiation source may be used to provide radiation at more than one incidence angle by appropriate reconfiguration of the devices for directing the radiation toward the target object 120.
  • a radiation source, or apparatus for directing radiation such as a fibre optic cable, may be moveably mounted with respect to the target object 120.
  • the radiation source 110 is arranged to direct plane wave radiation toward the target object 120.
  • the radiation source 110 is arranged to selectively direct the plane wave radiation toward the target object from each of a plurality of incidence angles 111, 112, 113.
  • An incidence angle is an angle between the plane wave and the target object 120.
  • the radiation directed toward the target object 120 may be referred to as tilted radiation indicative of radiation which is not generally directed perpendicular to the target object 120, although one of the incidence angles may be perpendicular to the target object 120.
  • the radiation source 110 may be operated responsive to a control signal provided from a control unit (not shown in Figure 1) to provide radiation directed toward the target object from a desired or selected incidence angle such that radiation detected by a detector 150 may be recorded corresponding to that incidence angle.
  • radiation is to be broadly construed.
  • the term radiation includes various wave fronts. Radiation includes energy from a radiation source. This includes electromagnetic radiation, including X-rays, and emitted particles such as electrons. Other types of radiation include acoustic radiation, such as sound waves. Such radiation may be represented by a wavefront function. This wave function includes a real part and an imaginary part as will be understood by those skilled in the art.
  • the radiation 111, 112, 113 provided at each incidence angle may be considered to have the same structure which may be that of a perfect plane wave. However in other embodiments the radiation provided at each incidence angle may be considered to have a respective form which allows for deviations of the radiation provided from the perfect plane wave.
  • a probe function indicative of one or more characteristics of the provided radiation is updated during the method.
  • a probe function Pk, indicative of the radiation is used where k is indicative of the incidence angle, as noted above.
  • the probe function indicative of the incident radiation is updated during the method the probe function may be defined as Pk,n, where n is indicative of an iteration number.
  • a probe function may be a complex numbered function representing one or more characteristics of the radiation, which may be an illumination wavefront.
  • the focussing element 130 is arranged post (downstream of) the target object 120 to receive radiation exiting from the target object 120.
  • the focussing element 130 may comprise one or a plurality of devices for focussing the radiation.
  • the focussing element 130 comprises a lens as shown in Figure 1 although it will be realised that other elements may be used, particularly depending upon the radiation type. It will be realised that embodiments may be envisaged including a plurality of lenses, such as two lenses, although it will be realised that more than two lenses may be used in some embodiments.
  • the focussing element 130 is arranged to focus received radiation at a back focal plane (BFP).
  • BFP back focal plane
  • the filtering element 140 is arranged to filter the radiation.
  • the filter may be arranged within the focussing element 130.
  • the filter 140 may be arranged within the lens 130, or between first and second lenses forming the focussing element 130.
  • the filtering element 140 may be arranged post (downstream of) the focussing element 130, such as at or near the BFP.
  • the filtering element 140 is arranged to filter radiation falling thereon to form a filtered diffraction pattern at the BFP.
  • the filtering element 140 may comprise an aperture such that only radiation passes through the filtering element 140 within the aperture.
  • the filtering element 140 may be located at the BFP of the focussing element 130.
  • the filtering element 140 is associated with a filtering function.
  • the filtering function 140 defines a transmission of radiation through the filtering element 140.
  • the filtering function 140 may be updated in at least some iterations of a method according to an embodiment of the invention.
  • the detector 150 is a suitable recording device for recording an intensity of radiation falling thereon, such as a CCD or the like. The detector 150 allows the detection of a low resolution image in a detector plane, i.e. a plane of the detector 150.
  • the detector 150 may be located at an image plane, or may be offset from the image plane, for example to allow for a large dynamic range of the intensity. It will be realised that where the detector is offset from the image plane an appropriate propagation operator is required to determine an estimate of the intensity at the detector 150.
  • the detector 150 may comprise an array of detector elements, such as in a CCD. Intensity data output by the detector 150 may be received by the control unit comprising a processing device for carrying out a method according to an embodiment of the invention, as will be explained with reference to Figure 2.
  • Figure 2 illustrates a method 200 according to an embodiment of the invention.
  • the method 200 is a method of providing image data associated with the target object 120.
  • the image data may be used to determine one or more characteristics of the target object 120.
  • the image data may be used to determine one or more images associated with the target object 120.
  • the method 200 may be performed in association with the apparatus 100 described with reference to Figure 1.
  • the method 200 is a method of iteratively determining the image data of the target object 120.
  • the method 200 involves iteratively updating one or more estimates of the image data associated with the target object 120 in a real space or domain, as will be explained.
  • initial estimates of one or more object functions indicative of characteristics of the target object 120 are provided.
  • the object function estimate(s) may be stored as an array of appropriate size.
  • the matrices may store random or pseudo-random values, or may contain predetermined values, such as all being 1s.
  • image data at two respective "slices" or planes through the target object 120 is determined where each slice is associated with a respective object function.
  • image data for only a single slice through the target object 120 is determined, or where image data at more than two slices through the target object 120 is determined.
  • three-dimensional image data it is meant image data relating to a plurality of slices which intersect the target object 120.
  • Each slice through the target object is associated with a respective object function O s where s is indicative of a number of the slice.
  • s is indicative of a number of the slice.
  • the method illustrated in Figure 2 iteratively determines two object functions O1 and O2, each associated with a respective one of the slices 121, 122.
  • the slices 121, 122 are separated by a distance Δz. Therefore, at a start of the method, there are provided two initial object functions O0,1 and O0,2 where, as above, n in On,s is indicative of the iteration number, which for the initial functions is 0, and s is indicative of the slice concerned.
  • each probe function has a planar phase gradient representative of the incidence angle of the radiation 111, 112, 113.
  • the probe functions are used throughout the iterations of the method 200 without change or updating. In other embodiments, some or all of the probe functions may be updated during iterations of the method. For ease of explanation, an embodiment where the probe functions remain constant will be explained.
  • each probe function may be denoted as Pk where k is indicative of the incidence angle.
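By way of illustration, a tilted plane-wave probe Pk of the kind described above (unit amplitude, with a planar phase gradient set by the incidence angle) can be sketched in Python with NumPy. The function name, the (tilt_x, tilt_y) parametrisation and the pixel geometry are assumptions of this sketch, not taken from the application:

```python
import numpy as np

def tilted_plane_wave(shape, pixel_size, wavelength, tilt_x, tilt_y):
    """Return a unit-amplitude plane wave Pk with a planar phase gradient.

    tilt_x and tilt_y are the incidence angles (radians) of the
    illumination; names and parametrisation are illustrative.
    """
    ny, nx = shape
    y = (np.arange(ny) - ny // 2) * pixel_size
    x = (np.arange(nx) - nx // 2) * pixel_size
    yy, xx = np.meshgrid(y, x, indexing="ij")
    # A tilt by angle theta adds a linear phase ramp k*sin(theta)*position.
    k = 2.0 * np.pi / wavelength
    phase = k * (np.sin(tilt_x) * xx + np.sin(tilt_y) * yy)
    return np.exp(1j * phase)
```

With zero tilt the probe reduces to a uniform plane wave; a non-zero tilt changes only the phase, leaving the modulus at unity.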
  • step 210 an exit wave from the target object 120 is determined.
  • the exit wave ψ emanating from the target object 120 is determined by multiplying an object function indicative of the target object 120, where there is only a single object function, by a probe function indicative of radiation incident on the target object 120.
  • the determination of the exit wave comprises propagating intermediate exit waves between slices or planes 121, 122 associated with the object functions.
  • an exit wave from each slice 121, 122 is determined.
  • a first exit wave ψk,1 is determined from the first slice 121.
  • the exit wave may be determined by multiplying the probe function Pk incident on the target object 120 by the current object function On,1 for the first slice 121.
  • the exit wave ψ1,1 = On,1 P1 for the first angle of incident radiation 111 is determined.
  • the exit wave of radiation from the first slice 121 is then propagated to a next adjacent slice, which in Figure 1 is slice 122.
  • the propagation of the exit wave may be made by use of a suitable propagation operator.
  • an angular spectrum propagator is used to propagate the exit wave over a distance between the slices 121, 122 which in the example of Figure 1 is the distance Δz.
  • Pk,s+1 = PΔz[ψk,s]
  • PΔz is a suitable propagator over the distance Δz, where the propagation operator may be the angular spectrum operator.
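A minimal sketch of the angular spectrum propagator PΔz named above, assuming scalar diffraction and a uniformly sampled field; the transfer-function form and the suppression of evanescent components are standard choices rather than details specified by the application:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pixel_size):
    """Propagate a complex field over distance dz with the angular
    spectrum method (a common implementation; the application names
    the operator but not an implementation)."""
    ny, nx = field.shape
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fyy, fxx = np.meshgrid(fy, fx, indexing="ij")
    # Transfer function H = exp(i*2*pi*dz*sqrt(1/lambda^2 - fx^2 - fy^2));
    # evanescent components (negative square-root argument) are suppressed.
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    h = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * h)
```

Propagation over zero distance returns the field unchanged, and since the transfer function has unit modulus for propagating components, the operator conserves energy for well-sampled fields.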
  • the propagated wave at the subsequent slice is then used as a probe function indicative of one or more characteristics of the radiation incident upon the subsequent slice 122.
  • An exit wave from the last or most downstream slice of the target object 120 is used as the exit wave from the object. For example the exit wave from the last slice 122 of the object 120 shown in Figure 1.
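The slice-by-slice determination of the exit wave in step 210 (multiply the incident wave by each slice's object function, propagate between slices, and keep the intermediate incident waves for the later object update) can be sketched as follows. Here `propagate` stands for any suitable inter-slice propagator, such as an angular spectrum operator with the distance Δz folded in; the structure is an assumption consistent with the description, not the application's own code:

```python
import numpy as np

def multislice_exit_wave(probe, slices, propagate):
    """Forward model of step 210 for one incidence angle k.

    probe:     complex incident wave Pk at the first slice
    slices:    list of complex object functions, upstream to downstream
    propagate: callable applying the inter-slice propagator
    Returns the exit wave from the last slice and the per-slice
    incident waves (needed by the update pass).
    """
    wave = probe
    incident = []                   # incident wave at each slice
    for s, obj in enumerate(slices):
        incident.append(wave)
        wave = wave * obj           # exit wave psi_{k,s} from slice s
        if s < len(slices) - 1:
            wave = propagate(wave)  # becomes the probe at the next slice
    return wave, incident
```

For a two-slice object this reproduces the sequence described above: ψk,1 from slice 121 is propagated over Δz and used as the probe at slice 122, whose exit wave is the exit wave from the object.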
  • step 220 a diffraction pattern at the BFP 140 is determined.
  • Step 220 comprises determining the diffraction pattern based upon the exit wave ψk,2 from the target object 120 and a suitable propagation operator.
  • the propagation operator may be a Fourier transform, although it will be realised that other propagation operators may be used.
  • Step 230 comprises determining an exit wave from the filter 140.
  • Step 230 comprises applying the filtering function associated with the filtering element 140 to the incident wave at the BFP calculated in step 220.
  • the filtering function may be defined as A which may be a finite aperture function. In some embodiments the filtering function is updated in at least some iterations of the method 200. Therefore the filtering function may be defined as An where n is indicative of the iteration number.
  • an intensity of radiation at a plane of the detector 150 is determined. The intensity of radiation may form an image at the plane of the detector, particularly where the detector is located at the image plane. The intensity is determined by propagating the exit wave from the filtering element 140 using a suitable propagation operator. The propagation operator may be a Fourier transform.
  • Step 250 comprises updating the determined intensity from step 240 based on data measured by the detector 150.
  • step 250 may comprise updating an intensity of the image determined in step 240 based upon intensity measured by the detector 150.
  • This step may be referred to as applying a modulus constraint on the determined intensity from step 240.
  • the modulus constraint may be applied by:
  • ψ'k = √(Ik,m) ψk / √(Ik,c), where Ik,m is the intensity measured by the detector 150 and Ik,c is the calculated intensity at the detector plane.
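Because the symbols of the modulus constraint are partly illegible in the source text, the sketch below uses the standard form of the constraint: the modulus of the calculated detector wave is replaced by the square root of the measured intensity while the calculated phase is kept. The names and the small regularisation term are assumptions:

```python
import numpy as np

def apply_modulus_constraint(psi_calc, intensity_measured, eps=1e-12):
    """Step 250: impose the measured modulus on the calculated wave.

    psi_calc:           complex wave calculated at the detector plane
    intensity_measured: intensity recorded by the detector
    eps avoids division by zero where the calculated modulus vanishes.
    """
    modulus = np.abs(psi_calc)
    return np.sqrt(intensity_measured) * psi_calc / (modulus + eps)
```

The phase of the calculated wave is unchanged; only its modulus is corrected toward the measurement.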
  • Step 260 comprises determining an updated exit wave at the BFP.
  • an updated exit wave from the filter 140 is determined.
  • Step 260 comprises back propagating the updated intensity data from the plane of the detector 150 to the plane of the filter 140 or the BFP.
  • the updated image data may be back-propagated to the BFP by: ψ'e = F⁻¹{√(Ik,m) ψk / √(Ik,c)}
  • ψ'e is the updated exit wave from the filter 140.
  • Step 270 comprises determining an updated diffraction pattern at the BFP.
  • step 270 may optionally comprise updating the filtering function associated with the filter 140.
  • the updated diffraction pattern may be determined by:
  • An+1 is the updated filtering function which may be used in an n+1 iteration of the method 200.
  • the filtering function may only be updated after a predetermined number of iterations of the method 200.
  • the updated filtering function is determined in parallel form based upon diffraction patterns from a plurality of incidence angles of radiation. In the above equation the summation Σk sums across all K incidence angles.
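The application's own update equation is not legible in the source, so the following is a hedged reconstruction of a parallel (summed over all K incidence angles) filter-function update in the style of the ePIE probe update, normalised by the summed spectrum power so that the strong DC component of any single pattern does not dominate:

```python
import numpy as np

def update_filter_parallel(A, diffraction_patterns, corrected_exits, alpha=1.0):
    """Parallel update of the filtering function A (steps 260/270).

    diffraction_patterns: list of waves psi_k incident on the filter
    corrected_exits:      list of corrected exit waves from the filter
    The sum over k and the normalisation are assumptions of this sketch.
    """
    num = np.zeros_like(A)
    den = np.zeros(A.shape, dtype=float)
    for psi, psi_corr in zip(diffraction_patterns, corrected_exits):
        num += np.conj(psi) * (psi_corr - A * psi)
        den += np.abs(psi) ** 2
    return A + alpha * num / (den.max() + 1e-12)
```

Summing the correction over all incidence angles before normalising is what the description calls the parallel form; a serial version would normalise by each pattern's own maximum, which the description notes slows convergence.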
  • step 280 an updated exit wave from the object 120 is determined.
  • step 280 comprises updating, during each iteration of the method 200, a calculated wave in the real domain or space based upon the updated diffraction pattern or input wave at the filter 140.
  • the updated exit wave from the target object 120 is determined by back- propagating the updated diffraction pattern from the plane of the filter 140 or BFP to the plane of the object 120.
  • the updated exit wave is determined by using an inverse propagation operator to that used in step 220.
  • Step 280 may be based upon an inverse Fourier transform.
  • the updated exit wave may be calculated by applying the inverse propagation operator, where ψ'k,2, the updated exit wave in the example shown in Figure 1, is at the second plane 122, although it will be realized that this is merely exemplary.
  • step 290 one or more object functions associated with the object 120 are updated.
  • step 290 comprises determining updated image data for the object 120 at one or more planes in the real domain.
  • step 290 may comprise determining each updated object function in turn, thereby sequentially determining each updated object function toward the source of radiation 110.
  • intermediate probe functions are also determined and propagated between the planes 121, 122 associated with each object function.
  • an updated object function associated with the second plane 122 is determined, where On+1,2 is the object function associated with the second plane 122 for an n+1 iteration of the method 200.
  • an updated probe function at the second plane 122 is determined by:
  • P'2 is the updated probe function indicative of incident radiation at the second plane 122 within the object 120.
  • step 290 comprises calculating:
  • in the calculation of step 290, a parameter α is a value used to alter a strength of feedback.
  • it will be realised that step 290 may be altered from the above described embodiment depending upon the number of planes associated with object functions.
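The sequential updates of step 290 can be sketched as follows, applying the usual ePIE-style update rule at each slice and moving back toward the source. The exact update equations in the application are not legible in the source text, so the normalisation and the feedback constants here are assumptions:

```python
import numpy as np

def update_slices(objs, probes, psi_corr, back_propagate, alpha=1.0, beta=1.0):
    """Step 290 sketch: update object (and intermediate probe) functions
    slice by slice, starting from the updated exit wave at the last slice.

    objs:           list of object functions, upstream to downstream
    probes:         incident wave at each slice from the forward pass
    psi_corr:       updated exit wave at the most downstream slice
    back_propagate: inverse inter-slice propagator (e.g. angular spectrum)
    """
    for s in range(len(objs) - 1, -1, -1):
        obj, probe = objs[s], probes[s]
        psi_old = probe * obj
        diff = psi_corr - psi_old
        # ePIE-style object update, normalised by the probe's peak power.
        objs[s] = obj + alpha * np.conj(probe) * diff / (np.abs(probe).max() ** 2 + 1e-12)
        # Updated intermediate probe at this slice.
        new_probe = probe + beta * np.conj(obj) * diff / (np.abs(obj).max() ** 2 + 1e-12)
        if s > 0:
            # The updated probe, back-propagated, becomes the corrected
            # exit wave of the previous (upstream) slice.
            psi_corr = back_propagate(new_probe)
    return objs
```

With a single slice this reduces to the familiar ePIE object update; with two slices it performs the slice-by-slice real-space reconstruction the description refers to.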
  • the updated one or more object functions are provided for use in a next iteration of step 210.
  • step 295 it is determined whether there remain further incidence angles of radiation to consider in iterations of the method 200.
  • step 298 it is determined whether to terminate the method 200.
  • the method 200 may be terminated when a predetermined number of iterations have been completed or when a predetermined condition occurs.
  • the predetermined condition may be when an error metric associated with the method 200 meets a predetermined value.
  • the error metric is based upon a difference between the measured and calculated intensities at the detector 150.
  • the error metric may be calculated by the normalized deviation between the measured and calculated intensities using:
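The formula for the error metric is not reproduced in the source text; one common convention for a normalised deviation between measured and calculated intensities is sketched below, and should be read as an assumption rather than the application's own definition:

```python
import numpy as np

def error_metric(measured, calculated):
    """Normalised deviation between measured and calculated detector
    intensities: sum of squared differences over the total measured
    power (one common convention)."""
    measured = np.asarray(measured, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    return np.sum((measured - calculated) ** 2) / np.sum(measured ** 2)
```

The method can then terminate when this value falls below a predetermined threshold, or after a fixed number of iterations.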
  • in step 270, when updating the filtering function to determine An+1 associated with the filtering element 140, a ptychography reconstruction still applies since the filtering element 140 is consistent in the BFP.
  • the reason for using a parallel form of the equation to update the filtering function is that the DC component of the spectrum is usually a very high value; used as the normalising maximum in a serial version of the update equation, it would dramatically slow the convergence.
  • the diffraction pattern is updated at the BFP. However, this update rather takes the effect of the filter function out of the corrected diffraction pattern than tries to reconstruct the spectrum. More importantly, the updated diffraction pattern is back-propagated to the object plane to perform the object reconstruction slice by slice in real space.
  • An exit end of the fiber, which produces a divergent point source, is mounted onto a stepper motor driven x-y translation stage.
  • the object 120 is placed approximately 70mm downstream of the source 110.
  • a doublet lens 130 with a focal length of 30mm is positioned at a distance of 45mm from the object 120.
  • a diaphragm with the aperture diameter set to 2mm is used as the filter 140.
  • a CCD detector 150 is placed in the image plane (about 215mm from the diaphragm 140) to record the low resolution image intensities.
  • a dataset is collected from 225 source positions arranged on a 15 x 15 raster grid.
  • a nominal step size of the stage is 1mm, with ⁇ 20% random offset to avoid a "raster grid pathology" as is known to those familiar in the art.
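The scan geometry described above (a 15 x 15 raster with a nominal 1mm step and up to ±20% random offset to avoid the raster grid pathology) can be generated as follows; the seeding is only so the sketch is reproducible:

```python
import numpy as np

def raster_positions(n=15, step=1.0, jitter=0.2, seed=0):
    """Generate an n x n raster of source positions with a random
    offset of up to +/-(jitter * step) on each coordinate."""
    rng = np.random.default_rng(seed)
    grid = np.stack(
        np.meshgrid(np.arange(n), np.arange(n), indexing="ij"), axis=-1
    ) * step
    offsets = rng.uniform(-jitter * step, jitter * step, size=grid.shape)
    return grid + offsets
```

Each of the 225 positions then corresponds, in the tilted-illumination interpretation, to an incidence angle change of about 0.5 degrees at the object.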
  • the change in position of the radiation corresponds to a change of incident angle of the radiation 111, 112, 113 at the target object 120 of about 0.5 degrees, as is explained in more detail below.
  • the object 120 is composed of two microscope slides arranged coverslip-to-slide (see the inset of Fig. 2). A measured separation between the two specimen layers is 1.025mm.
  • 'Dark-field' is used here in the sense as used in conventional microscopy, namely that the unscattered beam associated with the incident radiation 111, 112, 113 does not pass through the aperture in the lens for this particular angle of illumination, but is blocked by the aperture, thus giving an image which shows only scattered intensity from the object 120 on an otherwise black, or dark, background.
  • Both the conventional ePIE algorithm and a method according to an embodiment of the invention are applied to recorded intensity data for 100 iterations.
  • the initial guess for the object function is free space - namely a value of 1 for all positions or matrix cells, and the initial guess for the filter function associated with the filtering element 140 is a Gaussian filtering function.
  • a tilt angle for all scan positions is calculated to generate a set of tilted plane waves.
  • for an initial number of iterations, the filtering function associated with the filter 140 is not updated. After that, both a part of the diffraction pattern corresponding to an area within the filter 140, and the filter function are updated in each iteration of the method 200. The final results are shown in Fig. 4.
  • Figure 4 shows reconstructions of the dataset from the illumination tilting strategy using both the ePIE algorithm and a method 200 according to an embodiment of the invention.
  • Figure 4(a)(b)(c)(d) are the results of the conventional ePIE algorithm and respectively represent the object modulus, the object phase, the filter modulus and the filter phase.
  • Figure 4 (e)(f)(g)(h)(i)(j) are the results of a method according to an embodiment of the invention and, respectively, represent the first slice 121 modulus, the first slice 121 phase, the second slice 122 modulus, the second slice 122 phase, the filtering function modulus and the filtering function phase.
  • the object 120 and the filter 140 are in different spaces and that the scale bars are also different.
  • the scale of Fig. 4(a) is also the same for Fig. 4(b, e, f, g, h), and the scale of Fig. 4(c) is the same as for Fig. 4(d, i, j).
  • the ePIE algorithm manages to reconstruct image data indicative of a blurred representation of the target object 120, which means a projection approximation has not completely broken down.
  • the reconstruction is an artifice from which it is not possible to extract accurate images of either layer or slice through the object 120.
  • the method 200 according to an embodiment of the invention successfully separates the two slices 121, 122 and produces much better reconstructions.
  • Cross-talk between the two slices 121, 122 is hardly visible in either reconstructed slice. Since both layers 121, 122 are out of focus, during the scanning process the images will shift in the plane of the detector 150. Given the Field Of View (FOV) reconstructed, some features move out of, and new features move into, the FOV at the edges.
  • FOV Field Of View
  • Phase curvature from the reconstructed filter element 140 implies that the exit wave plane (the plane immediately downstream of the second layer 122) is not in focus. In other words, the exit wave plane is not coincident with the conjugate plane of the detector 150.
  • the method 200 automatically accounts for this and produces a phase curvature on the filter function associated with the filtering element 140 to refocus the exit wave plane.
  • multi-slice embodiments may extract 3D information for the object 120.
  • the problem for the conventional reconstruction method is that the object has to be thin enough to validate the projection approximation. Otherwise, the spectrum in the back focal plane is not consistent when tilting the illumination.
  • a filter scanning strategy has been proposed to circumvent this object thickness limitation.
  • the reconstruction is just the exit wave of the object. Although each layer can be refocused by propagating this exit wave, the other out-of-focus layers always corrupt the in-focus layer.
  • Optical experiments have been conducted to test the proposed method. The results indicate that the proposed method extracts the 3D information of the object very well, while the conventional method fails to produce correct reconstructions and the filter scanning method cannot separate the object.
  • embodiments of the present invention can be realised in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, device or integrated circuits or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape. It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs that, when executed, implement embodiments of the present invention.
  • embodiments provide a program comprising code for implementing a system or method as claimed in any preceding claim and a machine readable storage storing such a program. Still further, embodiments of the present invention may be conveyed electronically via any medium such as a communication signal carried over a wired or wireless connection and embodiments suitably encompass the same.


Abstract

Embodiments of the present invention relate to a method of determining object characteristics, the method comprising providing radiation directed toward an object at each of a plurality of incidence angles; detecting, by a detector, an intensity of radiation for each of the incidence angles; and iteratively determining object data indicative of one or more characteristics of the object, iterations of the method comprising determining, for each incidence angle of radiation, an estimate of a filtered diffraction pattern at a focal plane of a radiation focusing element based upon a current estimate of the object data, determining an estimate of intensity data at a plane of the detector based upon the filtered diffraction pattern, and updating the estimate of the object data in a real domain based upon the intensity of radiation at the detector.
PCT/GB2015/052256 2014-08-08 2015-08-04 Methods and apparatus for determining image data WO2016020671A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1414063.6A GB201414063D0 (en) 2014-08-08 2014-08-08 Methods and apparatus for determining image data
GB1414063.6 2014-08-08

Publications (1)

Publication Number Publication Date
WO2016020671A1 2016-02-11

Family

ID=51629495

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2015/052256 WO2016020671A1 (fr) 2014-08-08 2015-08-04 Methods and apparatus for determining image data

Country Status (2)

Country Link
GB (1) GB201414063D0 (fr)
WO (1) WO2016020671A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1740975A1 (fr) * 2004-04-29 2007-01-10 University of Sheffield Imagerie haute resolution
WO2008142360A1 (fr) * 2007-05-22 2008-11-27 Phase Focus Limited Imagerie tridimensionnelle
WO2010064051A1 (fr) * 2008-12-04 2010-06-10 Phase Focus Limited Obtention de données d’image
WO2014033459A1 (fr) * 2012-08-31 2014-03-06 Phase Focus Limited Améliorations de récupération de phase de ptychographie


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MAIDEN A M ET AL: "An improved ptychographical phase retrieval algorithm for diffractive imaging", ULTRAMICROSCOPY, ELSEVIER, AMSTERDAM, NL, vol. 109, no. 10, 1 September 2009 (2009-09-01), pages 1256 - 1262, XP026470501, ISSN: 0304-3991, [retrieved on 20090606], DOI: 10.1016/J.ULTRAMIC.2009.05.012 *

Also Published As

Publication number Publication date
GB201414063D0 (en) 2014-09-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15749851; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15749851; Country of ref document: EP; Kind code of ref document: A1)