WO2014108708A1 - An image restoration method - Google Patents

An image restoration method

Info

Publication number
WO2014108708A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
image
region
primary
calculating
Prior art date
Application number
PCT/GB2014/050091
Other languages
French (fr)
Inventor
Weiping LU
Zhen QIU
Original Assignee
Heriot-Watt University
Priority date
Filing date
Publication date
Application filed by Heriot-Watt University filed Critical Heriot-Watt University
Priority to GB1512290.6A priority Critical patent/GB2528179B/en
Publication of WO2014108708A1 publication Critical patent/WO2014108708A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10056 Microscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10064 Fluorescence image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30024 Cell structures in vitro; Tissue sections in vitro

Definitions

  • the present invention relates to a method and apparatus for increasing the spatial resolution of an image, for example a method and apparatus for restoring a microscope image with a spatial resolution beyond the diffraction limit.
  • SR super-resolution
  • SMLM single molecule localization microscopy
  • the third approach is computational, in which image processing techniques are employed to reconstruct a SR image from a set of low-resolution (LR) observations (Elad, M. & Feuer, A. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Trans. Image Process. 6, 1646-1658 (1997)).
  • LR low-resolution
  • HR high-resolution
  • SR has also been applied in medical imaging such as X-ray mammography, functional magnetic resonance imaging and positron emission tomography.
  • Medical imaging usually uses highly controlled illumination doses to avoid damage to the subject, leading to low signal-to-noise ratio (SNR) images.
  • SNR signal-to-noise ratio
  • Inclusion of a prior model for noise removal therefore becomes critically important for the performance of SR restoration.
  • the trade-off between noise removal and feature preservation (and restoration) can limit the image resolution that can be restored.
  • the prior model is usually constructed based on the edge-preservation concept in medical and other applications (Farsiu, S., Robinson, D., Elad, M. & Milanfar, P. Advances and challenges in super-resolution. Int. J. Imaging Syst. Technol. 14, 47-57 (2004)); features are restored as long as all the edges are preserved in the inverse process.
  • Medical images usually contain data describing tissues with simpler structures and larger size compared to biological images, typically 2-3 times smaller than the resolution limit of the imaging system. Fluorescence images of intracellular structures, in contrast, often contain abundant, heterogeneous blob- and ridge-like features and complex sub-cellular structures, potentially 10 times smaller than the diffraction limit. In general, edges embedded in such small and complex features are prone to noise contamination.
  • an image restoration method comprising obtaining a plurality of primary images wherein each primary image contains a different representation of a subject, calculating a map representing at least one image feature identified by calculating at least one second order difference or higher order difference, fitting the primary images to a model that represents each primary image as an alteration of a common target image of the subject, wherein fitting the primary images is subject to a constraint that the target image includes the at least one feature and extracting the target image from the fit, wherein the target image has a spatial resolution greater than a spatial resolution of the primary images.
  • the at least one image feature may comprise at least one feature of the target image, or at least one feature of one or more of the primary images.
  • the at least one second order difference or higher order difference may comprise at least one second order difference or higher order difference between points or regions of an image, which can for example be the target image or one or more of the primary images.
  • the second order difference or higher order difference may comprise a non-local second order difference or non-local higher order difference.
  • the constraint that the target image includes the at least one feature may comprise a constraint that the target image includes a representation of the at least one feature substantially in accordance with the map.
  • a map can be provided that enables preservation of desired features, for example blobs, ridges or other areas, which can be difficult to identify under low-SNR environments using conventional methods.
  • the preservation of such features can be useful in the context of fluorescence microscopy and/or in the context of imaging of cellular or sub-cellular features.
  • the obtaining of the plurality of primary images may comprise acquiring the images using a measurement apparatus or may comprise reading previously acquired images from a data store.
  • the calculating of the map may comprise calculating at least one first order difference.
  • the at least one feature may comprise at least one edge, area or volume.
  • the at least one first order difference may be used to identify at least one edge.
  • the at least one second order difference or higher order difference may be used to identify at least one area or volume, for example at least one blob.
  • Each area may comprise a region of lateral extent having a length greater than a minimum length and a width greater than a minimum width, for example a length greater than one pixel of the target image and a width greater than one pixel of the target image.
  • At least one of the edges, areas or volumes may be smaller than an impulse response of an optical system used to measure the primary images.
  • the at least one difference may comprise at least one non-local difference.
  • Each primary image may comprise picture elements, for example pixels or voxels, and calculating the map may involve for at least some picture elements of the primary image, defining a region around the picture element wherein the region has an area greater than an area of the picture element and calculating differences between regions to identify image features.
  • Calculating differences between regions may involve calculating a first order difference and/or a second order difference between a first and a second region.
  • the differences may comprise intensity differences.
  • Each difference may be a difference in intensity or any quantity derived from intensity or a difference in brightness or a difference in contrast or a difference in colour.
  • the first order non-local difference may be calculated by defining first and second regions.
  • the first and second regions may be adjacent or contiguous.
  • the first order non-local difference may be obtained by calculating a difference between values associated with the first region and values associated with the second region, or between a function of values associated with the first region and a function of values associated with the second region.
  • the second order non-local difference may be calculated by defining three regions, for example a first, a second and a third region, wherein the second region is located between the first and the third region.
  • the first, second and third regions may be adjacent or contiguous.
  • the second order non-local difference may be obtained by calculating a difference between a first order difference calculated between the first and the second region and a first order difference calculated between the second and the third region.
  • the second order non-local difference may be obtained by calculating a difference between a function of a first order difference calculated between the first and the second region and a function of a first order difference calculated between the second and the third region.
  • the first, second or higher order non-local differences may be first, second or higher order differences or functions of such differences.
  • a first order difference could be calculated as a first order derivative squared.
  • Calculating a first order difference may comprise defining a first region and a second region, representing the first region by a vector or a matrix comprising values of a plurality of picture elements within the first region and a second vector or matrix comprising values of a plurality of picture elements within the second region and calculating a norm of a difference between the vector or matrix of the first region and the vector or matrix of the second region.
  • the first and second regions may be adjacent or contiguous.
  • Calculating a second order difference may comprise defining a first, a second, and a third region, the second region being located between the first and the third region, representing the first, second, and third region by a vector or a matrix comprising values of a plurality of picture elements within the first, second and third region respectively, and calculating a norm of a difference between a first order difference calculated between the first and the second region and a first order difference calculated between the second and the third region.
  • the regions may be adjacent or contiguous to each other.
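As a concrete illustration of the region-based differences described above, the sketch below (Python/NumPy; the patch sizes, values and function names are illustrative, not taken from the patent) computes a first order NLD as the norm of the difference between two vectorised patches, and a second order NLD from three adjacent patches:

```python
import numpy as np

def first_order_nld(patch_a, patch_b):
    """First order non-local difference: norm of the difference between
    the vectorised pixel values of two adjacent regions."""
    return np.linalg.norm(patch_a.ravel() - patch_b.ravel())

def second_order_nld(patch_a, patch_b, patch_c):
    """Second order non-local difference: norm of the difference between
    the first order differences of three adjacent regions, where
    patch_b lies between patch_a and patch_c."""
    d_ab = patch_a.ravel() - patch_b.ravel()
    d_bc = patch_b.ravel() - patch_c.ravel()
    return np.linalg.norm(d_ab - d_bc)

# A bright 3x3 "blob" patch between two dark neighbours: the second
# order response is twice the first order one.
dark = np.zeros((3, 3))
blob = np.ones((3, 3))
print(first_order_nld(dark, blob))         # 3.0
print(second_order_nld(dark, blob, dark))  # 6.0
```

This makes the blob-sensitivity of the second order difference visible: across a symmetric bright spot the two first order differences have opposite signs, so their difference is large.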
  • Calculating a higher order difference may comprise defining a first region and a second region, representing the first region by a vector or matrix comprising values of a plurality of picture elements within the first region and the second region by a second vector or matrix comprising values of a plurality of picture elements within the second region, forming a new matrix comprising norms of different orders of a difference between the vector or matrix of the first region and the vector or matrix of the second region, and, for example, calculating the multiplication between the Moore-Penrose pseudo-inverse of the new matrix and the vector or matrix of the first region (see for example Chatterjee, P. & Milanfar, P. Practical Bounds on Image Denoising: From Estimation to Information. IEEE Trans. Image Process. 20, 1221-1233 (2011)).
  • the picture element values may be intensity values and/or brightness and/or colour and/or any quantity derived from them, for example a difference in frequency of the light.
  • Calculating the map may comprise calculating at least one first order difference and at least one second order difference.
  • Calculating a map may comprise weighting the first and second order differences by a first weighting factor and a second weighting factor respectively.
  • the first weighting factor may be proportional to a ratio of the first order difference over a sum of the first and second order differences and the second weighting factor may be proportional to a ratio of the second order difference over a sum of the first and second order differences.
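The proportional weighting described above can be sketched as follows (a minimal illustration; the small guard term `eps` against division by zero in flat regions is an added assumption, not part of the patent text):

```python
import numpy as np

def nld_weights(d1, d2, eps=1e-12):
    """Weights proportional to each order's share of the combined NLD
    response, as described above: w1 = d1/(d1+d2), w2 = d2/(d1+d2).
    eps is a hypothetical guard for perfectly flat regions."""
    total = d1 + d2 + eps
    return d1 / total, d2 / total

# Near an edge the first order NLD dominates; near a blob the second does.
w1, w2 = nld_weights(3.0, 1.0)
print(w1, w2)  # roughly 0.75 and 0.25
```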
  • the plurality of primary images may be acquired using a microscope. That feature is particularly significant and so in a further, independent aspect of the invention there is provided an image restoration method comprising obtaining a plurality of primary images acquired using a microscope wherein each primary image contains a different representation of a subject, fitting the primary images to a model that represents each primary image as an alteration of a common target image of the subject, extracting the target image from the fit, wherein the target image has a spatial resolution greater than a spatial resolution of the primary images.
  • the method may comprise calculating a map comprising at least one image feature, wherein fitting the primary images is subject to a constraint that the target image includes the at least one feature.
  • the model may comprise an energy function and fitting may comprise minimizing the energy function.
  • the fitting may be performed iteratively and at each iteration stage the target image may be updated and the map may be re-calculated as a function of the updated target image, until the energy function is minimized.
  • Obtaining a plurality of primary images may comprise causing relative translation of the subject and imaging optics to a plurality of relative positions and obtaining at least one image for each position.
  • Obtaining a plurality of primary images may comprise other types of spatial displacement of the subject or/and the imaging system and obtaining at least one image for each displacement.
  • Obtaining the plurality of images may be performed in a single acquisition.
  • Obtaining a plurality of images may comprise using diffractive optics to acquire the plurality of images.
  • the plurality of images may comprise fluorescent images obtained using a fluorescent imaging system.
  • Obtaining a plurality of primary images may comprise measuring a first set of primary images and a second set of primary images, extracting a first and second target image corresponding to the first and second set of primary images using the method of any of the preceding claims and combining the first and second target images to form a combined target image.
  • the primary images of the first set may be images of a first colour
  • the primary images of the second set may be images of a second, different colour.
  • the primary images of the first set may comprise images of a first type of structure and the primary images of the second set comprise images of a second type of structure.
  • the primary images may be images of cellular structures, optionally images of at least one of transport particle structures and microtubule structures.
  • the spatial resolution of the target image may increase with an increase in the number of primary images. For example, the spatial resolution may increase up to 7 times.
  • the target image may have a spatial resolution beyond a limit of diffraction.
  • an apparatus comprising image acquisition means operable to acquire a plurality of primary images, a memory in communication with the image acquisition means for storing the primary images, and a processor in communication with the memory, the processor being arranged to process the primary images according to at least one of the methods as claimed and/or described herein.
  • the image acquisition means may comprise a microscope operable in combination with a translation stage to acquire primary images.
  • the image acquisition means may comprise a microscope operable in combination with diffractive optics to acquire primary images.
  • Figure 1 is a diagram of an experimental set up for performing translation microscopy according to an embodiment.
  • Figure 2 is a flow diagram of a generic method for increasing the spatial resolution of an image.
  • Figure 3 is a flow diagram of an exemplary method for increasing the spatial resolution of an image.
  • Figure 4(a) is a synthetic 1-D signal at low resolution and high resolution.
  • Figure 4(b) is the 1st and 2nd order NLD responses of the LR signal of figure 4(a), and the combination of the 1st and 2nd NLDs.
  • Figure 4(c) is a restored signal obtained after 131 iterations by IRLS.
  • Figure 4(d) is a restored signal obtained after 388 iterations by IRLS.
  • Figure 4(e) is a restored signal obtained after 517 iterations by IRLS.
  • Figure 4(f) is a HR restored signal obtained by TRAM and by a prior art method using an edge-preserving prior model.
  • Figure 5(a) is an ISO 12233 resolution chart.
  • Figure 5(b) is a restored image of figure 5(a) with added noise, using TRAM.
  • Figure 5 (e) is a close-up of the region marked by a red box in figure 5 (a).
  • Figure 5 (g) is a restored image of figure 5 (f) obtained by TRAM.
  • Figure 5 (h) is a restored image of figure 5 (f) obtained by ALG.
  • Figure 5 (i) is a restored image of figure 5 (f) obtained by RSR.
  • Figure 5 (j) is a restored image of figure 5 (f) obtained by ZMT.
  • Figure 6 (a) is an HR image showing five different synthetic structures.
  • Figure 6 (b) is an artificially blurred image of figure 6 (a).
  • Figure 6 (c) is a restored image of figure 6(b) using TRAM.
  • Figure 6 (d) is the FWHM ratio of the LR image to the restored images obtained for the five types of structures.
  • Figure 6(e) is the FWHM ratio of the LR image to the restored image as a function of the number of LR images and obtained for 3 input noise levels.
  • Figure 6 (f) is the FWHM ratio of the LR to the restored image versus the Std of the input noise.
  • Figure 7 (a) is a low resolution image showing a plurality of quantum dots.
  • Figure 7 (b) is a zoomed image of a first region of figure 7 (a).
  • Figure 7 (c) is a restored super-resolution image of the first region of figure 7 (a) acquired using 32 low resolution images.
  • Figure 7 (d) is a restored super-resolution image of figure 7 (b) acquired using 64 low resolution images.
  • Figure 7 (e) is a resolution curve showing the FWHM of a QD image recovered using an increasing number of low resolution images.
  • Figure 7 (f) is a zoomed image of a second region of figure 7 (a).
  • Figure 7 (g) is a restored super-resolution image of figure 7 (f) acquired using 64 low resolution images.
  • Figure 7 (h) is a zoomed image of a third region of figure 7 (a).
  • Figure 7 (i) is a restored super-resolution image of figure 7 (h) acquired using 64 low resolution images.
  • Figure 7 (j) is an intensity fluctuation measured over time in figure 7 (b).
  • Figure 7(k) is an intensity fluctuation of the unresolved image of figure 7 (f) measured over time and an intensity fluctuation of each of the two resolved QDs in figure 7(g).
  • Figure 7 (l) is an intensity fluctuation of the unresolved image of figure 7 (h) measured over time (black curve) and an intensity fluctuation of each of the three resolved QDs in Figure 7(i).
  • Figure 8 (a) is a low resolution image of a pulmonary endothelial cell.
  • Figure 8 (b) is a super resolution restored image corresponding to figure 8 (a) and obtained using 60 low resolution images.
  • Figure 8 (c) is a zoom on a first region of figure 8(a).
  • Figure 8 (d) is a super resolution restored image corresponding to figure 8 (c).
  • Figure 8 (e) is a zoom on a second region of figure 8(a).
  • Figure 8 (f) is a super resolution restored image corresponding to figure 8 (e).
  • Figure 9 (a) is a restored image by the TRAM method of the microtubules.
  • Figure 9 (b) is the feature image of the microtubules of figure 9(a).
  • Figure 10(a) is a LR image of a human face.
  • Figure 10(b) is a restored image of the human face of Figure 10(a) using SR translation imaging.
  • Figure 10(c) is a restored image of the human face of Figure 10(a) using the ALG method.
  • Figure 10(d) is a restored image of the human face of Figure 10(a) using the RSR method.
  • Figure 10(e) is a restored image of the human face of Figure 10(a) using the ZMT method.
  • Figure 1 shows a diagram of a setup 10 for performing fluorescence super-resolution microscopy.
  • the setup comprises a laser system 12 in optical communication with an inverted microscope, a camera 24, a memory 26 and a processor 28.
  • the inverted microscope has an objective 14 positioned under a translation stage 16, collimating optics and imaging optics (not shown) and a set of excitation 20 and emission 22 filters.
  • a sample 18 is positioned onto the translation stage 16.
  • the translation stage 16 is positioned in a first position with respect to the objective, where the sample is in the field of view of the objective.
  • a laser beam having a wavelength suitable for fluorescence excitation of the sample is directed onto an aperture of the objective via the excitation filter 20 and collimating optics (not shown). Fluorescent light from the sample is collected by the objective and imaged onto the camera 24 via the emission filter 22 and imaging optics (not shown).
  • the camera 24 records a first image of the sample in a first position.
  • the translation stage 16 is then translated to a second position where the sample remains in the field of view of the objective, and a second image is recorded by the camera 24.
  • a plurality of images are recorded by the camera for different translation stage positions and stored in a memory 26 as a set of primary images.
  • the processor 28 retrieves the set of primary images from the memory 26 and runs an image restoration algorithm. Each image comprises picture elements for example digital pixels.
  • QDs quantum dots
  • ASI motorized stage
  • Image data was collected using an Orca-Flash 4.0 sCMOS camera (Hamamatsu) which, in combination with a 1.6× magnifier in the image path, provided an effective pixel size of 27 × 27 nm. Ten frames were acquired at each position before translation of the stage to the next position.
  • the primary images could also be acquired in a single measurement. This could be achieved using diffractive optics to simultaneously record the diffraction images of a same subject in different diffraction orders. In this case each diffraction image represents different spatial shift of the same subject. In such an embodiment no translation of the sample would be required.
  • Figure 2 shows a flow diagram 30 of the stages of a method performed by the processor.
  • When applied to microscopy, the method is referred to as translation microscopy (TRAM) and can be used to achieve super-resolution imaging.
  • TRAM translation microscopy
  • After receiving the plurality of primary images, each containing a different representation of a subject 32, the processor fits the primary images to a model that represents each primary image as a distortion, or other alteration, of a common target image of the subject 34.
  • the target image is then extracted from the fit 36.
  • the extracted target image has a spatial resolution greater than a spatial resolution of the primary images.
  • Figure 3 shows a flow diagram of an embodiment of the method highlighted in Figure 2.
  • the fitting stage 34 of Figure 2 is decomposed into stages 42, 44, 46, 48 and 50.
  • the mode of operation of the method involves: obtaining 40 a plurality M of low resolution (LR) correlated images J, also referred to as primary images; calculating 42 a correspondence matrix C_k and a convolving matrix P_k to model a predicted image; identifying 44 edge and blob features in the original image by calculating a series of first order non-local differences (NLDs) and second order NLDs between different regions of the original image; calculating 46 a structural map of the original image; defining 48 an energy function that is a function of a high resolution (HR) image I to be restored and of the structural map; minimizing 50 the energy function; and extracting 52 the HR restored image.
  • LR low resolution
  • Stage 40 may comprise measuring images or reading images that have been measured previously.
  • the low resolution (LR) images to be used to recover a high resolution (HR) image via an inverse process must be correlated but not identical.
  • the LR primary images are recorded using the setup of Figure 1 by translating the sample or specimen in the XY plane as described above.
  • the obtained primary images J are considered as the outcome of an original high resolution (HR) image, or target image I, after an image-degrading process involving blurring and noise contamination.
  • Stage 42 estimates a predicted image by modelling the image-degrading process. This process can be formulated by a linear image capturing model as:
  • J_k = P_k C_k I + N_k, for k = 1, ..., M
  • where M denotes the number of images, and the column vectors J_k and I consist of row-wise concatenations of the LR and HR images respectively.
  • P_k is a blurring matrix (also referred to as a convolving matrix) determined by the PSF of the imaging system, and N_k represents additive white Gaussian noise (AWGN).
  • AWGN additive white Gaussian noise
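The linear image-capturing model described here, in which each observation is the product of a blurring matrix and a correspondence matrix applied to the common target image plus noise, can be simulated roughly as below (Python/NumPy/SciPy; the Gaussian PSF, the downsampling factor and the noise level are illustrative assumptions, not parameters from the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

rng = np.random.default_rng(0)

def degrade(hr, dx, dy, psf_sigma=2.0, factor=4, noise_std=0.01):
    """One LR observation J_k = P_k C_k I + N_k:
    C_k -> sub-pixel translation of the target image on the HR grid,
    P_k -> blurring by the (assumed Gaussian) PSF plus sensor sampling,
    N_k -> additive white Gaussian noise."""
    shifted = nd_shift(hr, (dy, dx), order=1, mode="nearest")  # C_k I
    blurred = gaussian_filter(shifted, psf_sigma)              # P_k C_k I
    lr = blurred[::factor, ::factor]                           # sensor sampling
    return lr + rng.normal(0.0, noise_std, lr.shape)           # + N_k

# A small bright structure stands in for the unknown HR target image I.
hr = np.zeros((64, 64))
hr[30:34, 30:34] = 1.0
primaries = [degrade(hr, dx, dy)
             for dx, dy in [(0.0, 0.0), (1.5, 0.5), (2.5, 3.0)]]
print(primaries[0].shape)  # (16, 16)
```

Each translation produces a correlated but non-identical LR image, which is exactly the input condition the restoration method requires.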
  • SR restoration aims at recovering the HR images beyond the diffraction limit from the LR observations.
  • the blurring matrix for an optical microscope cannot have full rank and is not invertible. Therefore, I is usually estimated by minimizing a pre-defined energy function.
  • E(I) = Σ_k ||J_k − P_k C_k I||² + λ ρ(I)
  • the first term in the energy function, E(I), measures the difference between the LR observations and the predicted data in an L2-norm form.
  • C_k is a matrix measuring the pixel-level correspondence between the HR images I and I_k.
  • ρ(·) is an increasing function.
  • the correspondence matrix C_k is unknown to the observer but is assumed to be unchanged during the degrading process. As such, the matrix can be determined by the correspondence between LR images.
  • the correspondence matrix can be determined from the motion vectors of two LR images, given by the relative positions between the camera and specimen.
  • the PSF matrix in a laboratory environment is readily calculated based on the specifications of the microscope, or can be accurately estimated using experimental images of single point sources such as bead or quantum dot samples.
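As an illustration of the second route (estimating the PSF from images of point sources), the sketch below fits an isotropic 2-D Gaussian to a synthetic bead image; the Gaussian PSF model and all parameter values are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    """Isotropic 2-D Gaussian, flattened for curve_fit."""
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

# Synthetic bead image standing in for an experimental point-source image.
yy, xx = np.mgrid[0:21, 0:21]
true_sigma = 2.3
bead = np.exp(-((xx - 10.0) ** 2 + (yy - 10.0) ** 2) / (2 * true_sigma ** 2))

popt, _ = curve_fit(gauss2d, (xx, yy), bead.ravel(),
                    p0=(1.0, 9.0, 9.0, 1.5, 0.0))
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(popt[3])  # FWHM from fitted sigma
print(round(abs(popt[3]), 2))  # ≈ 2.3
```

The fitted width then parameterises the blurring matrix P_k used in the forward model.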
  • a predicted image is calculated as the product of the blurring matrix P_k times the correspondence matrix C_k times the target image I, which is common to the predicted images.
  • the desired solution is the restored SR image.
  • stages 44 and 46 are performed sequentially to calculate a map also referred to as prior model of the target image.
  • In the presence of noise, the prior model, ρ(I), is included in the energy function.
  • the purpose of the prior model is to regulate the minimization process in order to remove noise while preserving fine structures in the LR observations.
  • the proportionality parameter λ is adjusted during the iterative process to balance noise removal and feature preservation.
  • an edge is a fundamental feature that underlies more complicated features or structures in an image, so the latter can be preserved as long as edges are preserved.
  • a new prior model is presented, capable of characterizing complex biological structures while avoiding over-smoothing in low signal-to-noise ratio images.
  • the model is based on the fact that diverse biological structures such as vesicles, filaments, microtubules and their complex networks are made primarily of two basic features, blob and ridge, which are circular and line-like regions either brighter or darker than their surroundings. These circular-like regions also referred to as areas, are better correlated with a second-order difference rather than a first-order difference which measures edges.
  • the prior model is expressed as:
  • NLD non-local difference
  • N is the pixel number of image I.
  • By non-local it is meant that the differences are computed between regions (patches), instead of picture elements (pixels).
  • Calculating the map involves, for each picture element of the primary image, defining a region around the picture element, wherein the region has an area greater than an area of the picture element, and calculating differences between regions to identify image features.
  • Calculating differences between regions involves, in this case, calculating a first order difference (e.g. gradient) and a second order difference (e.g. difference of the gradient) between a first and a second region.
  • a first order difference e.g. gradient
  • a second order difference e.g. difference of the gradient
  • the first order non-local difference is calculated by defining first and second adjacent regions. Each of the first and second adjacent regions is represented as a vector comprising the intensity values of the pixels present within the first and second adjacent regions respectively.
  • the first order difference is then obtained by calculating a norm of a difference between the vector of the first adjacent region and the vector of the second adjacent region.
  • the second order non-local difference is calculated by defining three adjacent regions, for example a first, a second and a third region wherein the second region is located between the first and the third region. Each of the three regions is represented by a vector comprising the intensity values of the pixels present within each of the three adjacent regions respectively.
  • the second order non-local difference is then obtained by calculating a norm of a difference between a first order non-local difference calculated between the first and the second region, and a first order nonlocal difference calculated between the second and the third region.
  • the first and second order NLDs are calculated as:
  • NLDs are more robust feature detectors compared to pixel-level gradient and Laplace operators in the presence of noise.
  • the coefficients w_1(x) and w_2(x) are the weights that balance the contributions of the two NLDs.
  • the first order NLD responds most strongly in the vicinity of edges, so it dominates the prior model in this region.
  • the second order NLD dominates in the vicinity of blobs for the same reason.
  • the prior model Eq. (4) can be also constructed by including higher-order differences for which the coefficients can be also calculated in a similar way to Eq. (5) and Eq. (6).
  • An alternative way to calculate the differences of different orders is to use a Taylor series expansion of the same patch as a vector.
  • the highest order of differences for each patch can be then adaptively determined using principal components analysis by considering the noise levels (see for example Chatterjee, P. & Milanfar, P. Clustering-based denoising with locally learned dictionaries. Image Processing, IEEE Transactions on 18, 1438-1451 (2009)).
  • The weight coefficients for the differences can also be calculated in a similar way to Eq. (6).
  • Stage 48 of Figure 3 defines the energy function.
  • the energy function has already been described above and can be rewritten by substituting Eq. (4) into Eq. (2) as:
  • the NLDs involve patches, each of which contains multiple pixels.
  • two matrices, D_1 and D_2, are defined in order to represent the first and second order NLDs.
  • a null matrix 0 is included to avoid the boundary effect.
  • further auxiliary matrices and a column vector are also defined.
  • Eq. (17) is a nonlinear equation of I, because A NL1, A NL2 and A k also involve this variable, and so it will have multiple solutions that can correspond to local and global minima of the energy function. As such, traditional optimization methods such as the gradient-descent and variational calculus methods are inappropriate for solving Eq. (17).
  • Stages 50 and 52 are performed sequentially to minimize the energy function and extract the target image.
  • the minimization problem in Eq. (2) is solved by a modified iteratively reweighted least squares (MIRLS) method. During the minimization process, both the target image and map/prior model evolve.
  • This step enforces that the multiple solutions I l,k obtained by step (c) should be similar to each other.
  • step (e): Go to step (c) if Eq. (18) cannot be satisfied using the current estimation; otherwise update the parameter according to the residual noise in the current estimation I l.
  • step (f): The iteration stops when I l converges and it is considered to be the restored image; otherwise go to step (b) to compute the weight matrices again with the updated A.
  • the rate of the evolution is adjusted at each iteration stage based on the difference of the HR solutions between the present and previous stages; fast in the beginning, it becomes slower as the energy function gets closer to the global minimum.
  • the parameter ⁇ is also updated at each iteration stage according to the residual noise contained in the current HR image estimation. When the mean square difference of the HR image estimations between two adjacent iterations is below a pre-set threshold, the iteration stops and the solution is considered to be the restored HR target image.
  • I lk argmin X F k [l k + (P k T P k + eps) " (P k J k - P k T P k I, ,k )) I M - B k
  • the HR signal was recovered using 64 synthetic LR signals.
  • Figure 4b shows the responses of the first order NLD 60, the second order NLD 62 and their combination 64 to the noisy LR signal.
  • the value of the first order NLD is relatively larger in the vicinity of the edge but smaller in the neighbourhood of the blobs and stripe.
  • the second order NLD responds better to blobs and stripes; a combination of the two gives rise to a high and well-balanced response to all the features and a low response to the background, as shown in Figure 4b (blue).
  • Figure 4 (c-e) shows recovery of the HR signal at different stages of the iterative process. It can be observed that background regions are smoothed heavily in the initial stage while features are being restored (Fig. 4c). As the signal evolves during the inverse process, the smoothing effect "propagates" towards the feature regions, which leads to higher contrast between features and background and therefore increased responses of the first and second order NLDs to the features. The system performs in such a positive feedback manner, leading to more effective noise reduction and resolution improvement in the second stage, as shown in Figure 4d-e. The iteration process completes when the difference between the signals of two adjacent iterations is below a predefined threshold.
  • Figure 4 (g) shows the final result.
  • a good restoration of features and reduction of noise are obtained compared to the noise-free signal in Figure 4 (a).
  • a second signal was restored using the same set of LR frames but by setting our method with W
  • Figure 5 shows a 2-D 8-bit ISO 12233 resolution chart containing blobs and ridges with varying sizes and orientations and that is commonly used for a standard evaluation of SR restoration.
  • FIG. 5 (b) shows a restored image using TRAM with a set of 64 LR frames. All the features in the chart, including stripes, curves and numbers are shown to be very well recovered.
  • Figures 5 (e-j) show respectively the HR, LR and four restored images of a magnified boxed region in Figure 5(a) obtained using TRAM (g), ALG (Babacan, S. D., Molina, R. & Katsaggelos, A. K. Variational Bayesian Super Resolution. IEEE Trans. Image Process. 20, 984-999 (201 1 )) (h), RSR (Farsiu, S., Robinson, M. D., Elad, M.
  • the PSNRs of the restored results were plotted for all four methods on the 64 LR frames for different degradation cases with various noise and PSF levels. As seen in Figure 5(d), TRAM performs noticeably better than the other methods, by at least 5 dB in terms of PSNR.
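The peak signal-to-noise ratio used in the comparison above can be computed with the standard definition for 8-bit images (the peak value of 255 is the usual assumption for 8-bit data):

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    diff = reference.astype(float) - restored.astype(float)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform image perturbed by a constant error of 4 grey levels.
ref = np.full((8, 8), 128.0)
print(round(psnr(ref, ref + 4.0), 1))  # 36.1
```

A 5 dB advantage, as reported for TRAM, corresponds to roughly a 3.2-fold lower mean squared error.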
  • Figure 6 shows five synthetic cells with different structures containing blobs and ridges that mimic the key features of transport particle and microtubules in intracellular structures.
  • the image is a HR 8-bit image (2312 pixel × 384 pixel).
  • the blobs have a diameter of 21 pixels and a center distance of 21 pixels between the two adjacent ones.
  • the ridges have the FWHM of 10 pixels and a center-line distance of 32 pixels.
  • the 1-D vertical profiles for the four types of particle arrangements and a cross-sectional profile for the three microtubules are plotted 80 in this figure.
  • the corresponding intensity profile 82 shows that the cell structures are diffraction unresolved.
  • Figure 6 (c) shows the restored image obtained by the TRAM method. The resolution improvement is measured to be around 6.3 times for each structure in terms of the FWHM ratio (Fig. 6(d)), demonstrating the robustness of the method for different structures.
  • the resolution in the restored image is ~14 pixels (28.4 nm) and is smaller than the distances between the adjacent particles and parallel microtubules; as such, they are all resolved, as shown by the intensity profiles in Fig. 6 (c).
  • the decrease of the FWHM ratio on increasing noise level can be divided into three stages. In the first stage, where the noise contamination is low (Std from 2 to 10), the FWHM ratio decreases rapidly.
  • the FWHM ratios for all levels of noise contamination show a monotonic increase on increasing the number of LR observations and begin to saturate at 50 LR images. There is however a shift among the three curves because of the different severities of noise contamination: there is less resolution improvement at higher levels of noise contamination for a fixed number of LR images and, at higher noise levels, more LR observations are required to achieve the same resolution improvement compared to lower noise cases.
  • Figure 7(a) shows a 16-bit LR image of a plurality of quantum dots (QD).
  • the image was acquired with an excitation at 405 nm wavelength using a widefield microscope equipped with a 150× 1.45 NA objective. This set-up resulted in a diffraction limit of 228 nm (thus a PSF of 194 nm at FWHM), which in turn determines the convolving matrix, P k.
  • a set of LR images was acquired whilst translating the sample along the y-axis in steps of 100 nm, from which C w was determined.
  • Figure 7 (b) shows a zoomed image of region 1, where the intensity profile has an Airy-disk shape with a FWHM of 194 nm (Gaussian fitting), in agreement with the theoretical value.
  • Figure 7 (c) and (d) show the restored SR images resulting from 32 and 64 LR observations, giving measured FWHMs of 39.7 and 30.6 nm respectively.
  • Figure 7 (e) shows that the FWHM measured from a restored image decreases exponentially when increasing the number of LR images used to restore the image.
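The FWHM values quoted here can be reproduced on a 1-D intensity profile with a simple half-maximum crossing estimate (a sketch; the text uses Gaussian fitting, so this interpolation-based estimator is an assumed simplification):

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    # Full width at half maximum of a 1-D intensity profile, using
    # linear interpolation at the two half-maximum crossings.
    p = np.asarray(profile, dtype=float)
    p = p - p.min()
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]
    # Interpolate the exact crossing position on each side.
    lx = left - (p[left] - half) / (p[left] - p[left - 1])
    rx = right + (p[right] - half) / (p[right] - p[right + 1])
    return (rx - lx) * spacing

# For a Gaussian profile, FWHM = 2*sqrt(2*ln 2)*sigma ~ 2.3548*sigma.
x = np.arange(-50, 51, dtype=float)
sigma = 10.0
profile = np.exp(-x ** 2 / (2 * sigma ** 2))
width = fwhm(profile)  # close to 2.3548 * sigma = 23.548
```

With the physical pixel spacing passed as `spacing`, the same estimator yields widths in nanometres, as in the 194 nm and 30.6 nm figures above.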
  • the spatial resolution improves ⁇ 3-fold for 16 LR images and up to 7-fold for 64 LR images.
  • these results are consistent with those obtained in the synthetic cell data experiment discussed above in terms of resolution improvement and its dependence on the number of LR image frames.
  • Figure 7 (f) shows a zoomed image of a second region of figure 7 (a) and Figure 7 (g) shows the corresponding restored super-resolution image of figure 7 (f) acquired using 64 low resolution images.
  • Figure 7 (g) reveals the presence of 2 QDs.
  • Figure 7 (h) shows a zoomed image of a third region of figure 7 (a) and Figure 7 (i) shows the corresponding restored super-resolution image of figure 7 (h) acquired using 64 low resolution images.
  • Figure 7 (i) reveals the presence of 3 QDs.
  • Figures 7 (g) and (i) show that the method described above makes it possible to identify diffraction-unresolved multiple QDs in Figure 7 (a).
  • QD intensity fluctuations were investigated, taking advantage of the quantum blinking effect of single QDs.
  • Figure 7 (j) shows the intensity fluctuation measured over time in figure 7 (b) where the LR image contains a single QD. In this case the intensity fluctuation varies quantally between bright and dark states.
  • Figure 7(k) shows the intensity fluctuation measured over time in figure 7 (f) where the LR image contains 2 QDs.
  • the intensity fluctuation signal 100 is the sum of those of the two dots (curves 102, 104), consequently the "off" state appears less frequently as shown by the black curve.
  • This characteristic becomes more prominent when there are more QD signals in a bright spot, as shown in Figure 7 (l), corresponding to the case of three QDs (curves 102, 104, 106).
  • the intensity fluctuation tends to be averaged out by random blinks of all the individual dots in the region.
  • Figure 8 (a) shows a multi colour low resolution image of a bovine pulmonary artery endothelial cell.
  • the TRAM method was performed by measuring a first set of primary images of a first colour, a second set of primary images of a second colour and a third set of primary images of a third colour.
  • the corresponding first, second and third target images were estimated and then combined to form a multicolour target image.
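The per-colour restoration and recombination described above can be sketched as follows (the `restore` callable stands in for the single-channel TRAM method and is an assumption; a simple frame average is used here as a placeholder restorer):

```python
import numpy as np

def restore_multicolour(channel_stacks, restore):
    # Restore each colour channel from its own set of primary images,
    # then stack the per-channel target images into one multicolour
    # target image of shape (H, W, n_colours).
    targets = [restore(stack) for stack in channel_stacks]
    return np.stack(targets, axis=-1)

# Placeholder single-channel restorer: average the primary frames.
mean_restore = lambda stack: np.mean(stack, axis=0)

red = np.ones((5, 8, 8))      # 5 primary frames per colour channel
green = np.zeros((5, 8, 8))
blue = np.ones((5, 8, 8))
rgb = restore_multicolour([red, green, blue], mean_restore)
# rgb has shape (8, 8, 3)
```

In the experiment described, each stack would hold the 60 low resolution frames of one stained structure, and `restore` would be the full SR method.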
  • the three colours represent three different stained structures; Red: Actin 110, Green: Microtubules 112 and Blue: DNA (DAPI) 114, respectively.
  • FIG. 8 (b) shows a super resolution restored image corresponding to figure 8 (a) and obtained using 60 low resolution images. The image demonstrates a significant improvement in resolution and signal-to-noise ratio in all three colours.
  • Figure 8 (c) and (d) show zoomed images of the microtubule network of figure 8 (a) at low and high resolution respectively.
  • the microtubule network is unresolved and overlaps with DAPI.
  • Individual microtubule filaments and DAPI profiles are clearly resolved on the recovered super resolution image of Figure 8 (d).
  • the measured FWHM of a single microtubule is 31 nm, which represents a resolution improvement of 6.4-fold.
  • Figure 8 (e) and (f) show a zoomed image of an area of figure 8 (a) where the three stained structures are densely packed. At LR the three colours are mixed (Figure 8 (e)). In the recovered SR image (Figure 8 (f)) the relative position of each structure is clearly improved, in particular the boundary between actin and microtubule filaments.
  • Figure 9 (a) shows the restored high-resolution image of the microtubules by the TRAM method.
  • Figure 9 (b) shows the map corresponding to the microtubules in figure 8.
  • both the target image and the map evolve (for this case, from thick unfocused line to thin/focused lines).
  • Figure 9 (a) and (b) show the images obtained at the last iteration, where the target image is considered to be the "true solution".
  • Figure 10 (a-e) shows the restoration of a human portrait, demonstrating that the method can be applied to improve the resolution of images taken by commercial cameras.
  • Figure 10 (a) shows a LR human portrait provided by UCSC. In this case multiple images of the portrait were acquired by spatially displacing the camera for each image taken.
  • Figure 10 (b-e) show the restored images obtained by SR Translational imaging (b), ALG (c) , RSR (d) and ZMT (e).
  • SR Translational imaging provides a better recovery, including the eyes, eyebrows, nose and hair.
  • our method is also very effective in suppressing noise without introducing artifacts.
  • RSR and ZMT do not effectively restore the HR resolution since the gradient-based prior function over-smooths the features during the inverse process.
  • ALG recovers the resolution better than RSR and ZMT but results in severe zigzag artifacts around the edges.
  • the principle of the method is not limited to the restoration of images relative to specific systems and can be adapted to identify the most suitable types of features in order to improve the spatial resolution of a particular system.
  • a combination of first and second order differences can be used, for example as described, in order to identify edges and blobs/areas. It is noted that other higher orders of difference may be used, alone or in combination, in order to calculate a map/prior model in alternative embodiments. For example, a map could be obtained by calculating a third order difference alone. It would also be possible to obtain a map by calculating a combination of orders, such as a first and third order, or a second and third order, or a first, second and third order.
  • the differences can be calculated using any suitable method, for example by using a numerical method, determining differences, applying an algorithm to a set of data, for example a set of intensity or other data, or analytically solving an expression.
  • a specific way of calculating the first and second order differences in one embodiment has been described above with reference to Equation 5.
  • Any suitable method for determining first order, second order or higher order non local differences can be used in alternative embodiments.
  • the first order non-local difference is calculated by defining first and second regions. The first and second regions may be adjacent or contiguous. The first order non-local difference is then obtained by calculating a difference between values associated with the first region and values associated with the second region, or between a function of values associated with the first region and a function of values associated with the second region.
  • each of the first and second regions can be represented as a vector or a matrix comprising values of a plurality of picture elements (for example pixels or voxels) within the first and second regions respectively.
  • the first order non-local difference is then obtained by calculating a norm of a difference between the vector (or matrix) of the first region and the vector (or matrix) of the second region.
  • the second order non-local difference is calculated by defining three adjacent regions, for example a first, a second and a third region, wherein the second region is located between the first and the third region.
  • the first, second and third regions may be adjacent or contiguous.
  • the second order non-local difference is then obtained by calculating a difference between a first order difference calculated between the first and the second region and a first order difference calculated between the second and the third region.
  • the second order non-local difference may be obtained by calculating a difference between a function of a first order difference calculated between the first and the second region and a function of a first order difference calculated between the second and the third region.
  • each of the three regions may be represented by a vector or a matrix comprising a plurality of picture elements within each of the three regions respectively.
  • Each picture element may be, for example, a pixel or a voxel.
  • the second order difference is then obtained by calculating a norm of a difference between a first order difference calculated between the first and the second region and a first order difference calculated between the second and the third region.
  • a third order non-local difference is calculated as the difference between a first second-order NLD and a second second-order NLD, using five adjacent or contiguous regions.
  • an Nth order NLD is calculated as the difference between a first (N-1)-order NLD and a second (N-1)-order NLD, using 2N-1 adjacent or contiguous regions.
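One consistent reading of this recursion, in which the two (N-1)-order NLDs share their central region (giving 2 regions for the first order, 3 for the second and 5 for the third, as stated above), can be sketched as:

```python
import numpy as np

def nld_vector(regions):
    # Difference vector of an N-th order NLD, computed recursively:
    # the two (N-1)-order NLDs are taken over sub-sequences of
    # regions that share exactly one central region.
    vecs = [np.asarray(r, dtype=float).ravel() for r in regions]
    if len(vecs) == 2:                 # first order: r1 - r2
        return vecs[0] - vecs[1]
    mid = len(vecs) // 2               # index of the shared region
    return nld_vector(vecs[:mid + 1]) - nld_vector(vecs[mid:])

def nld(regions):
    # N-th order non-local difference as the norm of the vector.
    return np.linalg.norm(nld_vector(regions))

# Second order over three single-pixel regions 0, 1, 0:
second = nld([np.zeros(1), np.ones(1), np.zeros(1)])  # |0 - 2 + 0| = 2
```

For three regions this reduces to (r1 - r2) - (r2 - r3), matching the second order definition given earlier; the shared-central-region split is an interpretive assumption for orders above three.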
  • the regions defined for the calculation of the first, second or higher order non-local difference may be of any particular shape, for example each region may form a substantially rectangular, triangular or circular region.
  • the values of the picture elements contained in the vector or matrix representing these regions may be an intensity value, or other quantity such as brightness, colour or frequency or any quantity derived from these quantities.
  • the first, second or higher order non-local differences are first, second or higher order derivatives or functions of such derivatives. For example a first order difference could be calculated as a first order derivative squared.
  • the inverse process of the method is not limited to minimizing an energy function as described above.
  • Other types of energy functions could be used.
  • the robust function Eq.(3) can be replaced by an exponential function, which would not significantly change the results.
  • the method is not limited to a specific fluorescence modality.
  • the method could be used with fluorescence anisotropy or fluorescence lifetime type measurements.
  • the method is also not limited to microscopy imaging techniques or to imaging applications performed in the optical region of the spectrum.
  • the method can be used to improve the spatial resolution of X-ray CT scans such as CT scans for oil search applications. In this case multiple images could be taken at different angles.
  • the method could also find applications for in vivo imaging applications.
  • the method could be of particular interest in these cases where the subject (a patient or an animal) is moving during measurement.
  • the motion of the subject provides a natural translational motion that can be used as a means of obtaining a plurality of primary images.

Abstract

An image restoration method comprises obtaining a plurality of primary images wherein each primary image contains a different representation of a subject, calculating a map representing at least one feature of the target image identified by calculating at least one second order difference or higher order difference, fitting the primary images to a model that represents each primary image as an alteration of a common target image of the subject, wherein fitting the primary images is subject to a constraint that the target image includes the at least one feature and extracting the target image from the fit, wherein the target image has a spatial resolution greater than a spatial resolution of the primary images.

Description

An Image Restoration Method
Introduction
The present invention relates to a method and apparatus for increasing the spatial resolution of an image, for example a method and apparatus for restoring a microscope image with a spatial resolution beyond the diffraction limit.
Background
In microscopy, image resolution is limited by the standard diffraction limit, about 200-250 nm for visible light. A resolution that exceeds this limit can be referred to as super-resolution (SR). Three main approaches have been developed to achieve SR in optical microscopy. First, hardware-based technologies aim to reduce the point spread function (PSF), such as stimulated emission depletion (STED) (Hell, S. W. Microscopy and its focal switch. Nat. Methods 6, 24-32 (2009)) and structured illumination microscopy (SIM) (Heintzmann, R. & Gustafsson, M. G. L. Subdiffraction resolution in continuous samples. Nat. Photonics 3, 362-364 (2009)), by employing optical patterning of the excitation and a nonlinear response of the sample.
Second, biological and software technologies, grouped together as single molecule localization microscopy (SMLM), try to image single PSFs separated in time, calculating the positions of the single molecules that give rise to the signals with a precision substantially better than the diffraction limit (Won, R. Eyes on super-resolution. Nat. Photonics 3, 368-369 (2009)). A SR image is then reconstructed by mapping together all the individual measurements acquired at different time points.
The third approach is computational, in which image processing techniques are employed to reconstruct a SR image from a set of low-resolution (LR) observations (Elad, M. & Feuer, A. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Trans. Image Process. 6, 1646-1658 (1997)). In this approach, a LR observation is considered as the outcome of a degrading process of a high-resolution (HR) image due to blurring and noise effects. The SR restoration method is therefore a post-acquisition inverse process.
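The degrading process underlying this third approach can be illustrated by a 1-D forward model (the box PSF, decimation factor and noise level below are illustrative assumptions, not the parameters of the method described):

```python
import numpy as np

def degrade(hr, psf, factor, noise_std, rng):
    # Forward model: each LR observation is a blurred, down-sampled
    # and noise-contaminated copy of the HR signal.
    blurred = np.convolve(hr, psf, mode="same")   # optical blurring
    lr = blurred[::factor]                        # sensor down-sampling
    return lr + noise_std * rng.standard_normal(lr.size)

rng = np.random.default_rng(1)
hr = np.zeros(64)
hr[30:34] = 1.0                    # a small HR feature
psf = np.ones(5) / 5.0             # box PSF as a simple stand-in
# Eight shifted LR frames of length 16: the input to the SR inverse
# problem, with the shifts modelling sub-pixel translations.
frames = [degrade(np.roll(hr, s), psf, 4, 0.02, rng) for s in range(8)]
```

The SR restoration method is then the post-acquisition inverse of this mapping: estimating `hr` from the set `frames`.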
Recently, there have been substantial efforts to produce SR medical imaging, such as X-ray mammography, functional magnetic resonance imaging and positron emission tomography. Medical imaging usually uses highly controlled illumination doses to avoid damage to the subject, leading to low signal-to-noise ratio (SNR) images. Inclusion of a prior model for noise removal therefore becomes critically important for the performance of SR restoration. However, there is a trade-off between noise removal and feature preservation (and restoration); over-smoothing can limit the image resolution that can be restored. To date, the prior model is usually constructed based on the edge-preservation concept in medical and other applications (Farsiu, S., Robinson, D., Elad, M. & Milanfar, P. Advances and challenges in super-resolution. Int. J. Imaging Syst. Technol. 14, 47-57 (2004)); features are restored as long as all the edges are preserved in the inverse process.
Medical images usually contain data describing tissues with simpler structures and larger size compared to biological images, typically 2-3 times smaller than the resolution limit of the imaging system. Fluorescence images of intracellular structures often contain abundant, heterogeneous blob and ridge-like features and complex sub-cellular structures, potentially 10 times smaller than the diffraction limit. In general, edges embedded in such small and complex features can be prone to noise contamination.
Summary of the invention
In a first aspect of the invention there is provided an image restoration method comprising obtaining a plurality of primary images wherein each primary image contains a different representation of a subject, calculating a map representing at least one image feature identified by calculating at least one second order difference or higher order difference, fitting the primary images to a model that represents each primary image as an alteration of a common target image of the subject, wherein fitting the primary images is subject to a constraint that the target image includes the at least one feature and extracting the target image from the fit, wherein the target image has a spatial resolution greater than a spatial resolution of the primary images. The at least one image feature may comprise at least one feature of the target image, or at least one feature of one or more of the primary images. The at least one second order difference or higher order difference may comprise at least one second order difference or higher order difference between points or regions of an image, which can for example be the target image or one or more of the primary images. The second order difference or higher order difference may comprise a non-local second order difference or non-local higher order difference. The constraint that the target image includes the at least one feature may comprise a constraint that the target image includes a representation of the at least one feature substantially in accordance with the map.
By calculating at least one second or higher order difference a map can be provided that enables preservation of desired features, for example blobs, ridges or other areas, which can be difficult to identify under low-SNR environments using conventional methods. The preservation of such features can be useful in the context of fluorescence microscopy and/or in the context of imaging of cellular or sub-cellular features.
The obtaining of the plurality of primary images may comprise acquiring the images using a measurement apparatus or may comprise reading previously acquired images from a data store.
The calculating of the map may comprise calculating at least one first order difference.
The at least one feature may comprise at least one edge, area or volume. The at least one first order difference may be used to identify at least one edge. The at least one second order difference or higher order difference may be used to identify at least one area or volume, for example at least one blob.
Each area may comprise a region of lateral extent having a length greater than a minimum length and a width greater than a minimum width, for example a length greater than one pixel of the target image and a width greater than one pixel of the target image.
At least one of the edges, areas or volumes may be smaller than an impulse response of an optical system used to measure the primary images.
The at least one difference may comprise at least one non-local difference.
Each primary image may comprise picture elements, for example pixels or voxels, and calculating the map may involve for at least some picture elements of the primary image, defining a region around the picture element wherein the region has an area greater than an area of the picture element and calculating differences between regions to identify image features.
Calculating differences between regions may involve calculating a first order difference and/or a second order difference between a first and a second region.
The differences may comprise intensity differences. Each difference may be a difference in intensity or any quantity derived from intensity or a difference in brightness or a difference in contrast or a difference in colour.
  • The first order non-local difference may be calculated by defining first and second regions. The first and second regions may be adjacent or contiguous. The first order non-local difference may be obtained by calculating a difference between values associated with the first region and values associated with the second region or between a function of values associated with the first region and a function of values associated with the second region.
The second order non-local difference may be calculated by defining three regions, for example a first, a second and a third region, wherein the second region is located between the first and the third region. The first, second and third regions may be adjacent or contiguous. The second order non-local difference may be obtained by calculating a difference between a first order difference calculated between the first and the second region and a first order difference calculated between the second and the third region. Alternatively the second order non-local difference may be obtained by calculating a difference between a function of a first order difference calculated between the first and the second region and a function of a first order difference calculated between the second and the third region.
The first, second or higher order non-local differences may be first, second or higher order differences or functions of such differences. For example a first order difference could be calculated as a first order derivative squared.
Calculating a first order difference may comprise defining a first region and a second region, representing the first region by a vector or a matrix comprising values of a plurality of picture elements within the first region and a second vector or matrix comprising values of a plurality of picture elements within the second region and calculating a norm of a difference between the vector or matrix of the first region and the vector or matrix of the second region. The first and second regions may be adjacent or contiguous.
Calculating a second order difference may comprise defining a first, a second, and a third region, the second region being located between the first and the third region, representing the first, second, and third region by a vector or a matrix comprising values of a plurality of picture elements within the first, second and third region respectively, and calculating a norm of a difference between a first order difference calculated between the first and the second region and a first order difference calculated between the second and the third region. The regions may be adjacent or contiguous to each other.
  • Calculating a high order difference may comprise defining a first region, a second region and a new matrix, representing the first region by a vector or a matrix comprising values of a plurality of picture elements within the first region, a second vector or matrix comprising values of a plurality of picture elements within the second region and a new matrix comprising norms of different orders of a difference between the vector or matrix of the first region and the vector or matrix of the second region, and for example calculating the multiplication between the Moore-Penrose pseudo-inverse (see for example, Chatterjee, P. & Milanfar, P. Practical Bounds on Image Denoising: From Estimation to Information. IEEE Transactions on Image Processing 20, 1221-1233 (2011)) of the new matrix and the vector or matrix of the first region.
  • The picture element values may be intensity values and/or brightness and/or colour and/or any quantity derived from them, for example a difference in the frequency of the light.
Calculating the map may comprise calculating at least one first order difference and at least one second order difference.
Calculating a map may comprise weighting the first and second order differences by a first weighting factor and a second weighting factor respectively.
The first weighting factor may be proportional to a ratio of the first order difference over a sum of the first and second order differences and the second weighting factor may be proportional to a ratio of the second order difference over a sum of the first and second order differences.
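These weighting factors can be sketched directly from that description (the small `eps` guard against division by zero in featureless regions is an added assumption):

```python
import numpy as np

def nld_weights(nld1, nld2, eps=1e-12):
    # Each weight is proportional to that NLD's share of the sum of
    # the two NLDs, so the weights are non-negative and sum to one.
    total = nld1 + nld2 + eps
    return nld1 / total, nld2 / total

# Example values from a patch containing both an edge and a blob.
w1, w2 = nld_weights(3.0, 6.0)
# w1 ~ 1/3, w2 ~ 2/3: the second order NLD dominates near the blob,
# while near an edge (large first order NLD) w1 would dominate.
```

This normalised form is what lets the first order term dominate near edges and the second order term dominate near blobs, as described earlier.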
The plurality of primary images may be acquired using a microscope. That feature is particularly significant and so in a further, independent aspect of the invention there is provided an image restoration method comprising obtaining a plurality of primary images acquired using a microscope wherein each primary image contains a different representation of a subject, fitting the primary images to a model that represents each primary image as an alteration of a common target image of the subject, extracting the target image from the fit, wherein the target image has a spatial resolution greater than a spatial resolution of the primary images. The method may comprise calculating a map comprising at least one image feature, wherein fitting the primary images is subject to a constraint that the target image includes the at least one feature.
The model may comprise an energy function and fitting may comprise minimizing the energy function.
The fitting may be performed iteratively and at each iteration stage the target image may be updated and the map may be re-calculated as a function of the updated target image, until the energy function is minimized.
Obtaining a plurality of primary images may comprise causing relative translation of the subject and imaging optics to a plurality of relative positions and obtaining at least one image for each position.
Obtaining a plurality of primary images may comprise other types of spatial displacement of the subject or/and the imaging system and obtaining at least one image for each displacement.
Obtaining the plurality of images may be performed in a single acquisition.
Obtaining a plurality of images may comprise using diffractive optics to acquire the plurality of images.
The plurality of images may comprise fluorescent images obtained using a fluorescent imaging system.
Obtaining a plurality of primary images may comprise measuring a first set of primary images and a second set of primary images, extracting a first and second target image corresponding to the first and second set of primary images using the method of any of the preceding claims and combining the first and second target images to form a combined target image.
The primary images of the first set may be images of a first colour, and the primary images of the second set may be images of a second, different colour.
The primary images of the first set may comprise images of a first type of structure and the primary images of the second set comprise images of a second type of structure.
The primary images may be images of cellular structures, optionally images of at least one of transport particle structures and microtubule structures. The spatial resolution of the target image may increase with an increase in the number of primary images. For example, the spatial resolution may increase up to 7 times.
The target image may have a spatial resolution beyond a limit of diffraction. In a further, independent aspect of the invention there is provided an apparatus comprising image acquisition means operable to acquire a plurality of primary images, a memory in communication with the image acquisition means for storing the primary images, and a processor in communication with the memory, the processor being arranged to process the primary images according to at least one of the methods as claimed and/or described herein.
The image acquisition means may comprise a microscope operable in combination with a translation stage to acquire primary images.
The image acquisition means may comprise a microscope operable in combination with diffractive optics to acquire primary images.
Features in one aspect may be applied as features in any other aspect. For example, method features may be applied as apparatus features and vice versa.
Brief Description of the Drawings
Various aspects of the invention will now be described by way of example only and with reference to the accompanying drawings, of which:
Figure 1 is a diagram of an experimental set up for performing translation microscopy according to an embodiment.
Figure 2 is a flow diagram of a generic method for increasing the spatial resolution of an image.
Figure 3 is a flow diagram of an exemplary method for increasing the spatial resolution of an image.
Figure 4 (a) is a synthetic 1-D signal at low resolution and high resolution.
Figure 4 (b) is the 1st and 2nd order NLD responses of the LR signal of figure 4 (a) and the combination of the 1st and 2nd NLDs.
Figure 4 (c) is a restored signal obtained after 131 iterations by IRLS.
Figure 4 (d) is a restored signal obtained after 388 iterations by IRLS.
Figure 4 (e) is a restored signal obtained after 517 iterations by IRLS.
Figure 4 (f) is an HR restored signal obtained by TRAM and by a prior art method using an edge-preserving prior model.
Figure 5(a) is an ISO 12233 resolution chart.
Figure 5 (b) is a restored image of figure 5 (a) with added noise, using TRAM.
Figure 5 (c) is the mean PSNR versus the frame number of LR images for noise Std σ_n = 20 and PSF Stds σ_psf = 5, 10, 15 pixels, respectively.
Figure 5 (d) is a comparison among the mean PSNR of TRAM, ALG, RSR and ZMT versus the noise Std when the PSF Stds σ_psf = 5, 10, 15, respectively.
Figure 5 (e) is a close-up region marked by a red box in figure 5 (a).
Figure 5 (f) is one frame of LR images generated from figure 5 (e) by a Gaussian-shaped PSF with Std σ_psf = 10 and AWGN with Std σ_n = 20.
Figure 5 (g) is a restored image of figure 5 (f) obtained by TRAM.
Figure 5 (h) is a restored image of figure 5 (f) obtained by ALG.
Figure 5 (i) is a restored image of figure 5 (f) obtained by RSR.
Figure 5 (j) is a restored image of figure 5 (f) obtained by ZMT.
Figure 6 (a) is an HR image showing five different synthetic structures.
Figure 6 (b) is an artificially blurred image of figure 6 (a).
Figure 6 (c) is a restored image of figure 6(b) using TRAM.
Figure 6 (d) is the FWHM ratio of the LR image to the restored images obtained for the five types of structures.
Figure 6(e) is the FWHM ratio of the LR image to the restored image as a function of the number of LR images and obtained for 3 input noise levels.
Figure 6 (f) is the FWHM ratio of the LR to the restored image versus the Std of the input noise.
Figure 7 (a) is a low resolution image showing a plurality of quantum dots.
Figure 7 (b) is a zoomed image of a first region of figure 7 (a).
Figure 7 (c) is a restored super-resolution image of the first region of figure 7 (a) acquired using 32 low resolution images.
Figure 7 (d) is a restored super-resolution image of figure 7 (b) acquired using 64 low resolution images.
Figure 7 (e) is a resolution curve showing the FWHM of a QD image recovered using an increasing number of low resolution images.
Figure 7 (f) is a zoomed image of a second region of figure 7 (a).
Figure 7 (g) is a restored super-resolution image of figure 7 (f) acquired using 64 low resolution images.
Figure 7 (h) is a zoomed image of a third region of figure 7 (a).
Figure 7 (i) is a restored super-resolution image of figure 7 (h) acquired using 64 low resolution images.
Figure 7 (j) is an intensity fluctuation measured over time in figure 7 (b).
Figure 7 (k) is an intensity fluctuation of the unresolved image of figure 7 (f) measured over time and an intensity fluctuation of each of the two resolved QDs in figure 7 (g).
Figure 7 (l) is an intensity fluctuation of the unresolved image of figure 7 (h) measured over time (black curve) and an intensity fluctuation of each of the three resolved QDs in figure 7 (i).
Figure 8 (a) is a low resolution image of a pulmonary endothelial cell.
Figure 8 (b) is a super resolution restored image corresponding to figure 8 (a) and obtained using 60 low resolution images.
Figure 8 (c) is a zoom on a first region of figure 8(a).
Figure 8 (d) is a super resolution restored image corresponding to figure 8 (c).
Figure 8 (e) is a zoom on a second region of figure 8 (a).
Figure 8 (f) is a super resolution restored image corresponding to figure 8 (e).
Figure 9 (a) is a restored image by the TRAM method of the microtubules.
Figure 9 (b) is the feature image of the microtubules of figure 9 (a).
Figure 10(a) is a LR image of a human face.
Figure 10(b) is a restored image of the human face of Figure 10(a) using SR translation imaging.
Figure 10(c) is a restored image of the human face of Figure 10(a) using the ALG method.
Figure 10(d) is a restored image of the human face of Figure 10(a) using the RSR method.
Figure 10(e) is a restored image of the human face of Figure 10(a) using the ZMT method.
Detailed description
Figure 1 shows a diagram of a setup 10 for performing fluorescence super-resolution microscopy. The setup comprises a laser system 12 in optical communication with an inverted microscope, a camera 24, a memory 26 and a processor 28. The inverted microscope has an objective 14 positioned under a translation stage 16, collimating optics and imaging optics (not shown) and a set of emission 20 and excitation 22 filters.
In use, a sample 18 is positioned onto the translation stage 16. The translation stage 16 is positioned in a first position with respect to the objective, where the sample is in the field of view of the objective. A laser beam having a wavelength suitable for fluorescence excitation of the sample is directed onto an aperture of the objective via the excitation filter 20 and collimating optics (not shown). Fluorescent light from the sample is collected by the objective and imaged onto the camera 24 via the emission filter 22 and imaging optics (not shown). The camera 24 records a first image of the sample in a first position. The translation stage 16 is then translated to a second position where the sample remains in the field of view of the objective, and a second image is recorded by the camera 24. A plurality of images are recorded by the camera for different translation stage positions and stored in a memory 26 as a set of primary images. The processor 28 retrieves the set of primary images from the memory 26 and runs an image restoration algorithm. Each image comprises picture elements for example digital pixels.
For the measurements of quantum dots (QDs), described below, the images were acquired on an inverted IX81 microscope (Olympus) using a 150× 1.45 NA objective. Illumination was provided by a fully motorized four laser TIRF combiner coupled to a 405 nm, 100 W laser under widefield illumination. The sample was laterally translated using a motorized stage (ASI). Image data was collected using an Orca-Flash 4.0s CMOS camera (Hamamatsu) which, in combination with a 1.6× magnifier in the image path, provided an effective pixel size of 27 nm × 27 nm. Ten frames were acquired at each position before translation of the stage to the next position. Fixed cell data was acquired on an SP5 SMD laser scanning confocal microscope (Leica) using a 60× 1.4 NA objective. Images of 4096 × 4096 pixels were acquired with a pixel size of 6 nm × 6 nm. A single frame in each channel was acquired before translation of the stage to the next position.
It is noted that the primary images could also be acquired in a single measurement. This could be achieved using diffractive optics to simultaneously record the diffraction images of a same subject in different diffraction orders. In this case each diffraction image represents different spatial shift of the same subject. In such an embodiment no translation of the sample would be required.
Figure 2 shows a flow diagram 30 of the stages of a method performed by the processor. When applied to microscopy the method is referred to as translation microscopy (TRAM) and can be used to achieve super-resolution imaging. After receiving the plurality of primary images containing a different representation of a subject 32, the processor fits the primary images to a model that represents each primary image as a distortion, or other alteration, of a common target image of the subject 34. The target image is then extracted from the fit 36. The extracted target image has a spatial resolution greater than a spatial resolution of the primary images.
Figure 3 shows a flow diagram of an embodiment of the method highlighted in Figure 2. In this case, the fitting stage 34 of Figure 2 is decomposed into stages 42, 44, 46, 48 and 50. The mode of operation of the method involves: obtaining 40 a plurality M of low resolution (LR) correlated images J, also referred to as primary images; calculating 42 a correspondence matrix C_kl and a convolving matrix P_k to model a predicted image; identifying 44 edge and blob features in the original image by calculating a series of first order non-local differences (NLDs) and second order NLDs between different regions of the original image; calculating 46 a structural map of the original image; defining 48 an energy function that is a function of a high resolution (HR) image I to be restored and of the structural map; minimizing 50 the energy function; and extracting 52 the HR restored image.
Stage 40 may comprise measuring images or reading images that have been measured previously. According to information theory, the low resolution (LR) images to be used to recover a high resolution (HR) image via an inverse process must be correlated but not identical. For biological microscopy applications, the LR primary images are recorded using the setup of Figure 1 by translating the sample or specimen in the XY plane as described above. The obtained primary images J_l are considered as the outcome of an original high resolution (HR) image, or target image, I, after an image-degrading process involving blurring and noise contamination.
Stage 42 estimates a predicted image by modelling the image-degrading process. This process can be formulated by a linear image capturing model as:
J_l = P_l I_l + N_l ,  l = 1, ..., k, ..., M,    (1)

where M denotes the number of images, the column vectors J_l and I_l consist of row-wise concatenations of the LR and HR images, P_l is a blurring matrix (also referred to as a convolving matrix) determined by the PSF of the imaging system and N_l represents additive white Gaussian noise (AWGN). The blurring matrix and noise can be different for different l in Eq. (1).
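The capturing model of Eq. (1) can be sketched numerically. The helper below is a hypothetical illustration, not the patent's implementation: a separable Gaussian kernel stands in for the blurring matrix P_l, an integer translation plays the role of the frame-to-frame correspondence, and the names (`degrade`, `lr_stack`) are invented for the example.

```python
import numpy as np

def degrade(hr_image, shift, psf_std, noise_std, rng):
    """Simulate one LR observation J_l = P_l I_l + N_l: translate the
    HR image, blur with a Gaussian PSF, then add white Gaussian noise."""
    # Correspondence between frames: integer translation along the row axis.
    shifted = np.roll(hr_image, shift, axis=0)
    # Gaussian PSF applied separably via 1-D convolutions.
    radius = int(3 * psf_std)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * psf_std**2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, shifted)
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, blurred)
    # Additive white Gaussian noise N_l.
    return blurred + rng.normal(0.0, noise_std, hr_image.shape)

rng = np.random.default_rng(0)
hr = np.zeros((64, 64))
hr[30:34, 30:34] = 255.0                             # a bright blob
lr_stack = [degrade(hr, s, psf_std=3.0, noise_std=5.0, rng=rng)
            for s in range(8)]                       # M = 8 correlated LR frames
```

Each frame in `lr_stack` is a shifted, blurred and noisy view of the same target image, which is the correlation the inverse process relies on.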
SR restoration aims at recovering the HR images beyond the diffraction limit from the LR observations. The blurring matrix for optical microscopy cannot have full rank and is not invertible. Therefore, I_l is usually estimated by minimizing a pre-defined energy function,
Î_l = argmin E(I_l),

E(I_l) = (1/M) Σ_{k=1..M} φ( ‖ J_k − P_k C_kl I_l ‖² ) + λ_l R(I_l),    (2)

where the first term in the energy function, E(I_l), measures the difference between the LR observations and the predicted data in an L2-norm form, C_kl is a matrix measuring the pixel-level correspondence between the HR images I_l and I_k, and φ(·) is an increasing function [Eq. (3), reproduced as an image in the original] chosen so that the energy function is more likely to reach a global minimum. In practice, the correspondence matrix C_kl is unknown to the observer but is assumed to be unchanged during the degrading process. As such, the matrix can be determined by the correspondence between LR images.
The correspondence matrix can be determined from the motion vectors of two LR images given by the relative positions between the camera and specimen. The PSF matrix in a laboratory environment is readily calculated based on the specifications of the microscope, and the PSF and correspondence matrices can also be accurately estimated using experimental images of single point sources such as bead or quantum dot samples. For every primary image J_k a predicted image is calculated as the product of the blurring matrix P_k times the correspondence matrix C_kl times the target image I_l, where I_l is common to the predicted images. The desired solution (i.e., the restored SR image) is obtained when the energy function is minimised. Returning to figure 3, stages 44 and 46 are performed sequentially to calculate a map, also referred to as the prior model, of the target image. In the presence of noise, the prior model, R(I_l), is included in the energy function. The purpose of the prior model is to regulate the minimization process in order to remove noise while preserving fine structures in the LR observations. The proportional parameter, λ_l, is adjusted during the iterative process to balance noise removal and feature preservation.
In general, an edge is a fundamental feature that underlies more complicated features or structures in an image, so the latter can be preserved as long as edges are preserved. A new prior model is presented that is capable of characterizing complex biological structures while avoiding over-smoothing in low signal-to-noise-ratio images. The model is based on the fact that diverse biological structures such as vesicles, filaments, microtubules and their complex networks are made primarily of two basic features, blob and ridge, which are circular and line-like regions either brighter or darker than their surroundings. These circular-like regions, also referred to as areas, are better correlated with a second-order difference than with a first-order difference, which measures edges. The prior model is expressed as:
R(I_l) = (1/N) Σ_{x=1..N} [ w₁(x) ‖∇_NL I_l(x)‖ + w₂(x) ‖∇²_NL I_l(x)‖ ],    (4)

where ∇_NL I_l(x) and ∇²_NL I_l(x) are the first and second order non-local differences (NLDs) at the pixel position x and N is the pixel number of image I_l. By non-local, it is meant that the differences are computed between regions (patches), instead of picture elements (pixels). Calculating the map involves, for each picture element of the primary image, defining a region around the picture element, wherein the region has an area greater than an area of the picture element, and calculating differences between regions to identify image features.
Calculating differences between regions involves, in this case, calculating a first order difference (e.g. gradient) and a second order difference (e.g. difference of the gradient) between a first and a second region.
The first order non-local difference is calculated by defining first and second adjacent regions. Each of the first and second adjacent regions is represented as a vector comprising the intensity values of the pixels present within the first and second adjacent regions respectively. The first order difference is then obtained by calculating a norm of a difference between the vector of the first adjacent region and the vector of the second adjacent region.
The second order non-local difference is calculated by defining three adjacent regions, for example a first, a second and a third region wherein the second region is located between the first and the third region. Each of the three regions is represented by a vector comprising the intensity values of the pixels present within each of the three adjacent regions respectively. The second order non-local difference is then obtained by calculating a norm of a difference between a first order non-local difference calculated between the first and the second region, and a first order non-local difference calculated between the second and the third region.
The first and second order NLDs are calculated as:
‖∇_NL I_l(x)‖² = Σ_{i=−W..W} ( I_l(x+i) − I_l(x−2W+i) )²

‖∇²_NL I_l(x)‖² = (1/2) Σ_{i=−W..W} ( 2 I_l(x+i) − I_l(x−2W+i) − I_l(x+2W+i) )²    (5)

where W is the half width of the patch. NLDs are more robust feature detectors compared to pixel-level gradient and Laplace operators in the presence of noise.
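The two NLDs of Eq. (5) can be computed directly for a 1-D signal. The sketch below uses hypothetical helper names (`nld1`, `nld2`) and assumes the patch offsets stay inside the signal bounds; it is an illustration of the formulas, not the patent's implementation.

```python
import numpy as np

def nld1(signal, x, W):
    """First order non-local difference at x (Eq. (5)): compares the
    patch centred at x with the patch centred 2W to its left."""
    idx = np.arange(-W, W + 1)
    diff = signal[x + idx] - signal[x - 2 * W + idx]
    return np.sum(diff ** 2)

def nld2(signal, x, W):
    """Second order non-local difference at x (Eq. (5)): compares the
    centre patch with its two neighbouring patches."""
    idx = np.arange(-W, W + 1)
    diff = (2 * signal[x + idx]
            - signal[x - 2 * W + idx]
            - signal[x + 2 * W + idx])
    return 0.5 * np.sum(diff ** 2)

sig = np.zeros(100)
sig[45:56] = 1.0            # a blob of width 11 samples
W = 5
edge_resp = nld1(sig, 40, W)   # response near the rising edge
blob_resp = nld2(sig, 50, W)   # response centred on the blob
```

As the text describes, the first order NLD reacts near the edge while the second order NLD gives its strongest response over the blob.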
The coefficients w₁(x) and w₂(x) are the weights that balance the contributions of the two NLDs, in the forms of

w₁(x) = ‖∇_NL I_l(x)‖² / ( ‖∇_NL I_l(x)‖² + ‖∇²_NL I_l(x)‖² ),
w₂(x) = ‖∇²_NL I_l(x)‖² / ( ‖∇_NL I_l(x)‖² + ‖∇²_NL I_l(x)‖² ).    (6)

Since ‖∇_NL I_l(x)‖² > ‖∇²_NL I_l(x)‖² in the vicinity of edges, w₁ > w₂ and the first order NLD dominates the prior model in this region. The second order NLD dominates in the vicinity of blobs for the same reason. As such, the combination w₁(x)‖∇_NL I_l(x)‖ + w₂(x)‖∇²_NL I_l(x)‖ provides well-balanced responses for both edge and blob/ridge features. The prior model Eq. (4) can also be constructed by including higher-order differences, for which the coefficients can be calculated in a similar way to Eq. (5) and Eq. (6). An alternative way to calculate the differences of different orders is to use a Taylor series expansion of the same patch as a vector. The highest order of differences for each patch can then be adaptively determined using principal components analysis by considering the noise levels (see for example Chatterjee, P. & Milanfar, P. Clustering-based denoising with locally learned dictionaries. Image Processing, IEEE Transactions on 18, 1438-1451 (2009)). The weight coefficients for the differences can also be calculated in a similar way to Eq. (6).
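The weighting of Eq. (6) reduces to two ratios that sum to one. A minimal sketch, with a hypothetical `eps` guard added for flat regions where both NLDs vanish:

```python
import numpy as np

def nld_weights(d1_sq, d2_sq, eps=1e-12):
    """Balance the two NLD responses (Eq. (6)): each weight is the ratio
    of its own squared NLD over the sum of both, so w1 + w2 = 1.
    eps is a hypothetical guard against division by zero on flat regions."""
    total = d1_sq + d2_sq + eps
    return d1_sq / total, d2_sq / total

# Near an edge the first order NLD dominates...
w1, w2 = nld_weights(9.0, 1.0)
# ...while over a blob the second order NLD dominates.
v1, v2 = nld_weights(1.0, 9.0)
```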
Stage 48 of Figure 3 defines the energy function. The energy function has already been described above and can be rewritten by substituting Eq. (4) into Eq. (2) as:

E(I_l) = (1/M) Σ_{k=1..M} φ( ‖ J_k − P_k C_kl I_l ‖² ) + (λ_l/N) Σ_{x=1..N} [ w₁(x) ‖∇_NL I_l(x)‖ + w₂(x) ‖∇²_NL I_l(x)‖ ].    (7)
The optimization problem, Î_l = argmin E(I_l), is usually solved by finding the solution at which the gradient ∂E(I_l)/∂I_l = 0. To do so, Eq. (7) is re-written in a matrix-vector form. Since the NLDs involve patches, each of which contains multiple pixels, two matrices, D₁ and D₂, are defined in order to represent the first and second order NLDs. [Eq. (8), which builds D₁ and D₂ from the stencils d₁ and d₂ below padded with null matrices 0 to avoid the boundary effect, is reproduced as an image in the original.] The stencils d₁ and d₂ are defined as
d₁ =
−1  0  ⋯  1  0  ⋯  0  0  ⋯  0
 0 −1  0  ⋯  1  0  ⋯  0  ⋱  0
          ⋱           ⋱
 0  ⋯ −1  0  ⋯  1  0            (9)

and

d₂ =
−1  0  ⋯  2  0  ⋯ −1  0  ⋯  0
 0 −1  0  ⋯  2  0  ⋯ −1  ⋱  0
          ⋱           ⋱
 0  ⋯ −1  0  ⋯  2  0  ⋯ −1      (10)

where, in each row, the nonzero entries are separated by the patch offset so that each row implements one term of the corresponding NLD in Eq. (5).
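One way to realise the stencils of Eqs. (9) and (10) is as banded matrices acting on a vectorised signal. The sketch below is a simplified 1-D illustration under stated assumptions: `gap` stands in for the patch offset, and the zero boundary rows play the role of the null-matrix padding of Eq. (8).

```python
import numpy as np

def make_stencils(n, gap):
    """Banded matrices carrying the -1 ... +1 stencil of Eq. (9) and the
    -1 ... +2 ... -1 stencil of Eq. (10); boundary rows are left zero,
    standing in for the null-matrix padding of Eq. (8)."""
    d1 = np.zeros((n, n))
    d2 = np.zeros((n, n))
    for r in range(gap, n - gap):
        d1[r, r - gap] = -1.0
        d1[r, r] = 1.0
        d2[r, r - gap] = -1.0
        d2[r, r] = 2.0
        d2[r, r + gap] = -1.0
    return d1, d2

d1, d2 = make_stencils(8, 2)
sig = np.arange(8.0)        # a linear ramp
first = d1 @ sig            # constant first differences of size `gap`
second = d2 @ sig           # a linear ramp has zero second difference
```

The zero response of `d2` on a ramp and the constant response of `d1` mirror the edge/blob selectivity discussed above.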
A column vector, δ_x, is further defined, whose x-th element is the only nonzero element, with unit value, so that the x-th element of any vector v = [v(1), ..., v(x), ..., v(N)]ᵀ can be written as

v(x) = δ_xᵀ v.    (11)
Combining Eq. (9) - (11), the first and second order NLDs are rewritten as

‖∇_NL I_l(x)‖² = ( δ_xᵀ D₁ I_l )²,  ‖∇²_NL I_l(x)‖² = ( δ_xᵀ D₂ I_l )².    (12)
The function E(I_l) is then expressed, by using Eq. (12), in the following matrix-vector form,

E(I_l) = (1/M) Σ_{k=1..M} ( J_k − P_k C_kl I_l )ᵀ A_k ( J_k − P_k C_kl I_l ) + λ_l I_lᵀ ( D₁ᵀ A_NL1 D₁ + D₂ᵀ A_NL2 D₂ ) I_l,    (13)
which no longer contains any scalars related to I_l. Eq. (13) now allows us to directly compute the gradient,

∂E(I_l)/∂I_l = −(2/M) Σ_{k=1..M} C_klᵀ P_kᵀ A_k ( J_k − P_k C_kl I_l ) + 2 λ_l ( D₁ᵀ A_NL1 D₁ + D₂ᵀ A_NL2 D₂ ) I_l,    (14)

where the N × N diagonal matrices, A_NL1 and A_NL2, are given in Eq. (15). [Eq. (15), which defines the diagonal entries of A_NL1 and A_NL2, is reproduced as an image in the original.]
and the N × N diagonal matrix A_k is

A_k = diag(Φ_k),
Φ_k = [ φ′( ((P_k C_kl I_l − J_k)(1))² ), ⋯ , φ′( ((P_k C_kl I_l − J_k)(N))² ) ]ᵀ.    (16)

The minimization, i.e., ∂E(I_l)/∂I_l = 0, leads to the following equation,

(1/M) Σ_{k=1..M} C_klᵀ P_kᵀ A_k ( J_k − P_k C_kl I_l ) = λ_l ( D₁ᵀ A_NL1 D₁ + D₂ᵀ A_NL2 D₂ ) I_l.    (17)
Eq. (17) is a nonlinear equation of I_l, because A_NL1, A_NL2 and A_k also involve this variable, so it will have multiple solutions that can correspond to local and global minima of the energy function. As such, traditional optimization methods such as the gradient-descent and variational calculus methods are inappropriate for solving Eq. (17).
Stages 50 and 52 are performed sequentially to minimize the energy function and extract the target image. The minimization problem in Eq. (2) is solved by a modified iteratively reweighted least squares (MIRLS) method. During the minimization process, both the target image and map/prior model evolve.
We first rewrite Eq. (17) as,

(1/M) Σ_{k=1..M} ( B_k − Q_k ) = λ_l F_l I_l,    (18)

where the matrices B_k, F_l, Q_k are given respectively as

B_k = C_klᵀ P_kᵀ A_k J_k,
Q_k = C_klᵀ P_kᵀ A_k P_k C_kl I_l,    (19)
F_l = D₁ᵀ A_NL1,l D₁ + D₂ᵀ A_NL2,l D₂.

We then modify the nonlinear equation Eq. (18) as,

B₁ − Q₁ = λ_l F_l I_l / M
⋮
B_k − Q_k = λ_l F_l I_l / M    (20)
⋮
B_M − Q_M = λ_l F_l I_l / M

which contains more constraints than Eq. (18), since the unknown image I_l should satisfy not only one equation but M equations simultaneously. A solution I_l of Eq. (20) therefore satisfies Eq. (18). The main stages of IRLS for Eq. (20) are:
(a) Initialization: Let I_l = J_l and λ_l = σ_n, where the LR observations J_k, the blurring matrices P_k, the correspondence matrices C_kl and the noise Std σ_n are known.

(b) Compute the weight matrices B_k, F_l, Q_k by Eq. (19) based on the current estimate I_l.

(c) For each frame k:

(c1) Solve the equation I′_l,k = argmin ‖ λ_l F_l I_l / M − (B_k − Q_k) ‖. The solution is an intermediate solution of the final estimation I_l,k in step (c).

(c2) Given I′_l,k, calculate the final estimation I_l,k in step (c) by solving the same minimization with Q_k evaluated at the intermediate solution I′_l,k, as rewritten in Eq. (24) below.

(d) The solution I_l is obtained by a weighted average of {I_l,k}. [Eq. (21), the weighted average, is reproduced as an image in the original.] The weight vector is given as

w_i = [ φ′( I_l,1(i) − I_l(i) ), ..., φ′( I_l,M(i) − I_l(i) ) ]ᵀ / C_k ,  i = 1, 2, ..., N,    (22)

where C_k is a normalization factor. This step enforces that the multiple solutions I_l,k from step (c) should be similar to each other.

(e) Go to step (c) if Eq. (18) cannot be satisfied using the current estimation; otherwise update the parameter λ_l according to the residual noise in the current estimation I_l.

(f) The iteration stops when I_l converges and is then considered to be the restored image; otherwise go to step (b) to compute the weight matrices again with the updated λ_l.
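Steps (a)-(f) can be illustrated with a much-simplified reweighted least-squares loop. The sketch below is not the patent's MIRLS: it uses a single frame, an identity blurring matrix and a plain first-difference prior in place of the NLD prior, purely to show the initialise-reweight-solve-relax structure; all names are invented for the example.

```python
import numpy as np

def irls_restore(J, P, lam0, n_iter=50, tol=1e-8):
    """Toy reweighted least-squares loop mirroring steps (a)-(f):
    (a) initialise with the observation, (b) recompute weights from the
    current estimate, (c) solve the linearised system, (e) relax lambda,
    (f) stop on convergence."""
    n = len(J)
    D = np.eye(n) - np.eye(n, k=1)               # simple gradient operator
    I = J.copy()                                  # (a) initialisation
    lam = lam0
    for _ in range(n_iter):
        g = D @ I
        A = np.diag(1.0 / np.sqrt(g**2 + 1e-6))  # (b) reweighting
        F = D.T @ A @ D
        I_new = np.linalg.solve(P.T @ P + lam * F, P.T @ J)  # (c)
        if np.mean((I_new - I) ** 2) < tol:       # (f) convergence check
            I = I_new
            break
        I = I_new
        lam *= 0.9                                # (e) relax lambda
    return I

rng = np.random.default_rng(1)
n = 40
P = np.eye(n)                                     # identity "blur" for the toy
clean = np.zeros(n); clean[15:25] = 1.0
J = clean + rng.normal(0, 0.1, n)
restored = irls_restore(J, P, lam0=0.5)
```

The reweighting makes the smoothing strong on flat regions (small differences, large weights) and weak across edges, which is the feature-preserving behaviour the prior model is designed to produce.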
The rate of the evolution is adjusted at each iteration stage based on the difference of the HR solutions between the present and previous stages; fast in the beginning, it becomes slower as the energy function gets closer to the global minimum. The parameter λ is also updated at each iteration stage according to the residual noise contained in the current HR image estimation. When the mean square difference of the HR image estimations between two adjacent iterations is below a pre-set threshold, the iteration stops and the solution is considered to be the restored HR target image.
It can be noted that Eq. (18) above has been linearized from Eq. (17) for given weight matrices A_NL1,l, A_NL2,l and A_k. The intermediate solution I′_l,k = argmin ‖ λ_l F_l I_l / M − (B_k − Q_k) ‖ in step (c1) can be solved by many approaches, e.g. conjugate gradient (CG), Wiener filtering, or shrinkage methods. It is solved here by an iteratively modified Wiener filter,

I′_l,k = Ĩ_l,k + ( P_kᵀ P_k + eps )⁻¹ ( P_kᵀ J_k − P_kᵀ P_k Ĩ_l,k ),    (23)

where eps is a small constant that ensures the stability of the matrix inverse and Ĩ_l,k is the solution from the previous iteration. Given the intermediate solution, I′_l,k, the minimization in step (c2) can then be rewritten as

I_l,k = argmin ‖ λ_l F_l [ I_l,k + ( P_kᵀ P_k + eps )⁻¹ ( P_kᵀ J_k − P_kᵀ P_k I_l,k ) ] / M − B_k ‖,    (24)

which could also be solved by the Wiener filter, but the stability then depends heavily on the constant eps: a small eps can introduce artifacts, while a large eps can give an inaccurate estimate of the solution I_l,k. We revise the equation by adding regularization terms as

I_l,k = argmin ‖ λ_l F_l [ I_l,k + ( P_kᵀ P_k + eps )⁻¹ ( P_kᵀ J_k − P_kᵀ P_k I_l,k ) ] / M − B_k ‖
+ ‖ Root(A_NL1,l)⁻¹ D₁ I_l,k ‖₁ + ‖ Root(A_NL2,l)⁻¹ D₂ I_l,k ‖₁,    (25)

where ‖·‖₁ is the l1 norm and the operator Root(A_NL1,l) generates a new matrix whose elements are the square roots of the corresponding elements of the matrix A_NL1,l. We can then solve Eq. (25) using the well-known method of the least-absolute-shrinkage-and-selection-operator (lasso) (Tibshirani, R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 267-288 (1996)). [Eq. (26), the lasso solution in soft-thresholding form, is reproduced as an image in the original.] In Eq. (26), sgn(·) is a sign function and (·)₊ is a shrinkage function that returns its argument when positive and zero otherwise.
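The lasso step relies on the standard soft-thresholding operator built from sgn(·) and (·)₊. A generic sketch follows; the exact thresholds used in Eq. (26) are not legible in the source, so this shows only the standard operator.

```python
import numpy as np

def shrink(x, threshold):
    """Soft-thresholding: sgn(x) * (|x| - threshold)_+, the closed-form
    solution of the 1-D lasso problem argmin_u (u - x)^2 / 2 + threshold*|u|."""
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)

vals = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
shrunk = shrink(vals, 1.0)   # small coefficients are zeroed, large ones shrunk
```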
Figure 4a shows a 1-D 8-bit synthetic LR signal and the corresponding recovered HR signal. The HR signal contains one step edge, three single blobs of widths s = 5, 11 and 21 pixels, and stripes made of these blobs in multiple packs; the LR signal is a PSF-blurred (Std σ_psf = 10 pixels) and noise-contaminated (AWGN Std σ_n = 20) version of the former under the general model Eq. (1). The HR signal was recovered using 64 LR synthetic signals.
Figure 4b shows the responses of the first order NLD 60 and second order NLD 62, and their combination 64, to the noisy LR signal. The value of the 1st-order NLD is relatively larger in the vicinity of the edge but smaller in the neighbourhood of the blobs and stripes. On the contrary, the second order NLD responds better to blobs and stripes. A combination of the two, w₁(x)‖∇_NL I_l(x)‖ + w₂(x)‖∇²_NL I_l(x)‖, gives rise to a high and well-balanced response to all the features and a low response to the background, as shown in Figure 4b (blue).
Figure 4 (c-e) shows recovery of the HR signal at different stages of the iterative process. It can be observed that background regions are smoothed heavily in the initial stage while features are being restored (Fig. 4c). As the signal evolves during the inverse process, the smoothing effect "propagates" towards the feature regions, which leads to higher contrast between features and background and therefore increased responses of the first and second order NLDs to the features. The system performs in such a positive feedback manner, leading to more effective noise reduction and resolution improvement in the second stage, as shown in Figures 4d-e. The iteration process completes when the difference between the signals of two adjacent iterations is below a predefined threshold.
Figure 4 (f) shows the final result. A good restoration of features and reduction of noise are obtained compared to the noise-free signal in Figure 4 (a). For comparison, a second signal was restored using the same set of LR frames but with our method set to w₁ = 1 and w₂ = 0 in Eq. (7), corresponding to the edge-preserving prior model. As seen, the edge is preserved but the blobs and stripes are smoothed out by this method.
Figure 5 (a) shows a 2-D 8-bit ISO 12233 resolution chart containing blobs and ridges with varying sizes and orientations, which is commonly used for a standard evaluation of SR restoration. The image was corrupted by a Gaussian-shaped PSF with Std σ_psf = 5 pixels and an AWGN with Std σ_n = 20. The chart was then restored using the different methods.
Figure 5 (b) shows a restored image using TRAM with a set of 64 LR frames. All the features in the chart, including stripes, curves and numbers, are shown to be very well recovered. To quantify the performance, the Peak Signal to Noise Ratio (PSNR) of our result was plotted as a function of the number of LR frames under three different blurring and noise situations. The PSNR curves are shown in Figure 5 (c) for a noise Std σ_n = 20 and for PSF Stds σ_psf = 5, 10, 15 pixels, respectively, as lines 70, 72, 74. It can be observed that all three curves show a monotonic increase of the PSNR on increasing the number of LR observations and begin to saturate at 50 LR images, which is dependent on the noise level in the LR observations. Their relative positions are however different. The curves with the more severe PSF blurring are located closer to the x-axis. As such, by a vertical comparison fixing the LR frame number, the values of the PSNR are lower for higher levels of PSF blurring, indicating that the higher blurring reduces the maximum resolution that can be restored. By a further horizontal comparison fixing the PSNR, the number of LR frames required is higher for higher levels of PSF blurring, showing that more LR observations are required in the more severely blurred cases to achieve the same resolution improvement. Figures 5 (e-j) show respectively the HR, LR and four restored images of a magnified boxed region in Figure 5 (a), obtained using TRAM (g), ALG (Babacan, S. D., Molina, R. & Katsaggelos, A. K. Variational Bayesian Super Resolution. IEEE Trans. Image Process. 20, 984-999 (2011)) (h), RSR (Farsiu, S., Robinson, M. D., Elad, M. & Milanfar, P. Fast and robust multiframe super resolution. IEEE Trans. Image Process. 13, 1327-1344 (2004)) (i) and ZMT (Zomet, A., Rav-Acha, A. & Peleg, S. in Proc. IEEE CVPR 2001, 645-650) (j) using 64 LR frames.
It can be observed that the ALG, RSR and ZMT methods either produce severe artifacts (ALG) or fail to restore the image resolution, smoothing out numbers and ridges in their results (RSR, ZMT). In contrast, the image obtained using TRAM shows the best visual quality, providing the most resolution enhancement without introducing artifacts, compared to the original HR image in Figure 5 (e). To quantify the superiority of the TRAM method over the others, the PSNRs of the restored results were plotted for all four methods on the 64 LR frames for different degradation cases with various noise and PSF levels. As seen in Figure 5 (d), TRAM performs noticeably better than the other methods, by at least 5 dB in terms of PSNR.
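The PSNR figure of merit used in Figures 5 (c)-(d) follows the standard definition for 8-bit images. A minimal sketch with a hypothetical helper name:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak Signal to Noise Ratio in dB: PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

ref = np.zeros((16, 16))
noisy = ref + 10.0                 # constant error of 10 grey levels
value = psnr(ref, noisy)           # ≈ 28.13 dB
```

A 5 dB gap, as reported for TRAM, corresponds to roughly a threefold reduction in mean squared error.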
Figure 6 (a) shows five synthetic cells with different structures containing blobs and ridges that mimic the key features of transport particles and microtubules in intracellular structures. The image is an HR 8-bit image (2312 pixels × 384 pixels). The blobs have a diameter of 21 pixels and a centre distance of 21 pixels between two adjacent ones. The ridges have a FWHM of 10 pixels and a centre-line distance of 32 pixels. The 1-D vertical profiles for the four types of particle arrangements and a cross-sectional profile for the three microtubules are plotted 80 in this figure. A set of 64 LR frames is obtained under the TRAM procedure with an AWGN of Std σ_n = 20 and a Gaussian-shaped PSF of Std σ_psf = 31 pixels; the latter gives rise to a diffraction limit of 91 pixels. If such a diffraction limit equals the standard visible-light limit (~200 nm), the pixel size would be 2.2 nm. As such, the resolution improvement measured in this experiment can be determined with high precision.
Figure 6 (b) shows an artificially blurred image of the cells in figure 6 (a), corrupted with noise contamination of Std σ_n = 20 and PSF blurring of Std σ_psf = 31 pixels. The corresponding intensity profile 82 shows that the cell structures are diffraction unresolved. Figure 6 (c) shows the restored image obtained by the TRAM method. The resolution improvement is measured to be around 6.3 times for each structure in terms of the FWHM ratio (Fig. 6 (d)), demonstrating the robustness of the method for different structures. The resolution in the restored image is ~14 pixels (28.4 nm) and is smaller than the distances between the adjacent particles and parallel microtubules; as such, they are all resolved, as shown by the intensity profiles in Fig. 6 (c).
Figure 6(e) shows the FWHM ratio between the LR and restored images as a function of the input noise level, and illustrates the resolution improvement of the method at different noise levels for a fixed PSF and a fixed number of LR frames. The number of LR frames and the PSF Std are fixed at 64 and σpsf = 31 pixels, respectively. As seen, the decrease of the FWHM ratio with increasing noise level can be divided into three stages. In the first stage, where the noise contamination is low (Std from 2 to 10), the FWHM ratio decreases rapidly. This is consistent with a previous study showing that even a little noise contamination can greatly reduce the resolution that can be restored. As the noise increases to an intermediate level in the second stage (noise Std from 10 to 20), the ratio remains at around 6-7 and drops at a very modest rate. In the final stage, where the noise contamination is high (noise Std from 20 to 40), the ratio decreases rapidly again; the resolution can no longer be improved once the noise contamination exceeds a Std of 40.
Figure 6 (f) shows the FWHM ratio between the LR and restored structures versus the number of LR images for different input noise levels of Std σn = 10, 20 and 30 (lines 90, 92 and 94), respectively. The Std of the PSF is set to σpsf = 31 pixels. As seen, the FWHM ratios for all levels of noise contamination increase monotonically with the number of LR observations and begin to saturate at 50 LR images. There is, however, a shift among the three curves because of the different severities of noise contamination: there is less resolution improvement at higher noise levels for a fixed number of LR images and, at higher noise levels, more LR observations are required to achieve the same resolution improvement compared to lower-noise cases. As such, the dependence of the FWHM ratio on different noise levels behaves similarly to that of the PSNR on different blurring levels for the chart image shown in Fig. 6(c).

Figure 7(a) shows a 16-bit LR image of a plurality of quantum dots (QDs). The image was acquired with excitation at 405 nm wavelength using a widefield microscope equipped with a 150× 1.45 NA objective. This set-up resulted in a diffraction limit of 228 nm (thus a PSF of 194 nm at FWHM), which in turn determines the convolving matrix, P. A set of LR images was acquired whilst translating the sample along the y-axis in steps of 100 nm, from which Cw was determined. The image has a measured noise level of σn = 11.2 (Std). Figure 7 (b) shows a zoomed image of region 1, where the intensity profile is Airy-disk shaped with a FWHM of 194 nm (Gaussian fitting), in agreement with the theoretical value.
Figure 7 (c) and (d) show the restored SR images resulting from 32 and 64 LR observations, giving measured FWHMs of 39.7 and 30.6 nm, respectively.
Figure 7 (e) shows that the FWHM measured from a restored image decreases exponentially as the number of LR images used to restore the image increases. The spatial resolution improves ~3-fold for 16 LR images and up to 7-fold for 64 LR images. Importantly, these results are consistent with those obtained in the synthetic cell data experiment discussed above, in terms of both the resolution improvement and its dependence on the number of LR image frames.
Figure 7 (f) shows a zoomed image of a second region of figure 7 (a) and Figure 7 (g) shows the corresponding restored super-resolution image of figure 7 (f) acquired using 64 low resolution images. Figure 7 (g) reveals the presence of 2 QDs.
Figure 7 (h) shows a zoomed image of a third region of figure 7 (a) and Figure 7 (i) shows the corresponding restored super-resolution image of figure 7 (h) acquired using 64 low resolution images. Figure 7 (i) reveals the presence of 3 QDs.
Figure 7 (g) and (i) show that the method described above allows identification of diffraction-unresolved multiple QDs in Figure 7 (a). To verify the results, QD intensity fluctuations were investigated, taking advantage of the quantum blinking effect of single QDs. Figure 7 (j) shows the intensity fluctuation measured over time in figure 7 (b), where the LR image contains a single QD. In this case the intensity fluctuation varies quantally between bright and dark states.
Figure 7(k) shows the intensity fluctuation measured over time in figure 7 (f), where the LR image contains 2 QDs. In this case, the intensity fluctuation signal 100 is the sum of those of the two dots (curves 102, 104); consequently the "off" state appears less frequently, as shown by the black curve. This characteristic becomes more prominent when there are more QD signals in a bright spot, as shown in Figure 7 (l), corresponding to the case of three QDs (curves 102, 104, 106). The intensity fluctuation tends to be averaged out by the random blinks of all the individual dots in the region. These results demonstrate that the above method enables separating single particles in diffraction-unresolved data.
Figure 8 (a) shows a multicolour low resolution image of a bovine pulmonary artery endothelial cell. In this case the TRAM method was performed by measuring a first set of primary images of a first colour, a second set of primary images of a second colour and a third set of primary images of a third colour. The corresponding first, second and third target images were estimated and then combined to form a multicolour target image. The three colours represent three different stained structures: Red: Actin 110, Green: Microtubules 112 and Blue: DNA (DAPI) 114, respectively. To achieve this colour scheme the sample was stained with Texas Red-X phalloidin, anti-bovine α-tubulin and a BODIPY FL labeled secondary antibody, and DAPI. A set of 60 LR observations of all three channels was acquired, with a translation of 100 nm between each frame, using a scanning confocal microscope. In this example the width of the microtubules is many times smaller than the FWHM of the PSF of the microscope used to measure the primary images. Figure 8 (b) shows a super resolution restored image corresponding to figure 8 (a), obtained using the 60 low resolution images. The image demonstrates a significant improvement in resolution and signal-to-noise ratio in all three colours.
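For illustration only, the per-channel procedure described above (restore each colour channel separately, then combine the restored target images) might be sketched as follows. The function names and the assumption that a single-channel restoration routine is available as a callable are illustrative, not part of the disclosed embodiments:

```python
import numpy as np

def restore_multicolour(channel_stacks, restore):
    """Restore each colour channel's stack of LR primary images with a
    single-channel routine, then combine the restored target images into
    one multicolour (H, W, C) image.

    `restore` is a placeholder for any single-channel restoration routine
    (e.g. the TRAM procedure) mapping a stack of LR frames to one target
    image; `channel_stacks` is one (N, H, W) array per colour channel.
    """
    targets = [restore(stack) for stack in channel_stacks]
    # stack the per-channel target images along a trailing colour axis
    return np.stack(targets, axis=-1)
```

A toy single-channel routine (here simply frame averaging, standing in for the real restoration) can be passed in to exercise the combination step.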
Figure 8 (c) and (d) show zoomed images of the microtubule network of figure 8 (a) at low and high resolution, respectively. In the raw data the microtubule network is unresolved and overlaps with DAPI. Individual microtubule filaments and DAPI profiles are clearly resolved in the recovered super resolution image of Figure 8 (d). The measured FWHM of a single microtubule is 31 nm, which represents a resolution improvement of 6.4-fold.
Figure 8 (e) and (f) show zoomed images of an area of figure 8 (a) where the three stained structures are densely packed. At low resolution (figure 8 (e)) the three colours are mixed. In the recovered SR image of figure 8 (f) the relative position of each structure is clearly resolved, in particular the boundary between the actin and microtubule filaments.
Figure 9 (a) shows the restored high-resolution image of the microtubules obtained by the TRAM method. Figure 9 (b) shows the map corresponding to the microtubules in figure 8. During the minimization of the energy function, both the target image and the map evolve (in this case, from thick, unfocused lines to thin, focused lines). Figure 9 (a) and (b) show the images obtained at the last iteration, where the target image is considered to be the "true solution".
Figure 10 (a-e) shows the restoration of a human portrait, demonstrating that the method can be applied to improve the resolution of images taken by commercial cameras. Figure 10 (a) shows a LR human portrait provided by UCSC. In this case multiple images of the portrait were acquired by spatially displacing the camera for each image taken. Figure 10 (b-e) show the restored images obtained by SR Translational imaging (b), ALG (c), RSR (d) and ZMT (e). By comparing Figure 10(b) with Figure 10(c-e), it is apparent that SR Translational imaging provides a better recovery, including of the eyes, eyebrows, nose and hair. Thanks to the new prior model, the method is also very effective in suppressing noise without introducing artifacts. By comparison, RSR and ZMT do not effectively restore the HR resolution, since their gradient-based prior functions over-smooth the features during the inverse process. ALG recovers the resolution better than RSR and ZMT but results in severe zigzag artifacts around the edges.
A skilled person will appreciate that variations of the disclosed arrangements are possible without departing from the invention. Accordingly, the above description of the specific embodiment is made by way of example only and not for the purposes of limitation. It will be clear to the skilled person that minor modifications may be made without significant changes to the operation described.
For example, the principle of the method is not limited to the restoration of images relative to specific systems and can be adapted to identify the most suitable types of features in order to improve the spatial resolution of a particular system.
A combination of first and second order differences can be used, for example as described above, in order to identify edges and blobs/areas. It is noted that other, higher orders of difference may be used, alone or in combination, in order to calculate a map/prior model in alternative embodiments. For example, a map could be obtained by calculating a third order difference alone. It would also be possible to obtain a map by calculating a combination of orders, such as a first and third order, a second and third order, or a first, second and third order difference.
The differences can be calculated using any suitable method, for example by using a numerical method, determining differences, applying an algorithm to a set of data (for example a set of intensity or other data), or analytically solving an expression. A specific way of calculating the first and second order differences in one embodiment has been described above with reference to Equation 5. Any suitable method for determining first order, second order or higher order non-local differences can be used in alternative embodiments. In certain embodiments, the first order non-local difference is calculated by defining first and second regions. The first and second regions may be adjacent or contiguous. The first order non-local difference is then obtained by calculating a difference between values associated with the first region and values associated with the second region, or between a function of the values associated with the first region and a function of the values associated with the second region.
For example, in certain embodiments each of the first and second regions can be represented as a vector or a matrix comprising the values of a plurality of picture elements (for example pixels or voxels) within the first and second regions respectively. The first order non-local difference is then obtained by calculating a norm of the difference between the vector (or matrix) of the first region and the vector (or matrix) of the second region.
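By way of illustration only, the vector-and-norm formulation above might be sketched as follows; the function name and the encoding of regions as tuples of slices are assumptions for the sketch, not part of the disclosed embodiments:

```python
import numpy as np

def first_order_nld(image, region1, region2):
    """First order non-local difference between two equally sized regions:
    each region is flattened into a vector of picture-element values and
    the difference is the Euclidean norm of the vector difference.

    region1 / region2 are index expressions (e.g. tuples of slices)
    selecting the two regions of the image array.
    """
    v1 = image[region1].ravel().astype(float)
    v2 = image[region2].ravel().astype(float)
    return float(np.linalg.norm(v1 - v2))
```

Any other norm (for example the L1 norm) could be substituted, consistent with the statement that any suitable method may be used.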
In certain embodiments, the second order non-local difference is calculated by defining three adjacent regions, for example a first, a second and a third region, wherein the second region is located between the first and the third regions. The first, second and third regions may be adjacent or contiguous. The second order non-local difference is then obtained by calculating a difference between a first order difference calculated between the first and the second regions and a first order difference calculated between the second and the third regions. Alternatively, the second order non-local difference may be obtained by calculating a difference between a function of a first order difference calculated between the first and the second regions and a function of a first order difference calculated between the second and the third regions. For example, in certain embodiments each of the three regions may be represented by a vector or a matrix comprising the values of a plurality of picture elements (for example pixels) within each of the three regions respectively. The second order difference is then obtained by calculating a norm of the difference between a first order difference calculated between the first and the second regions and a first order difference calculated between the second and the third regions.
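A minimal sketch of this three-region construction, under the same illustrative region encoding as above (the function name is an assumption, not part of the disclosed embodiments):

```python
import numpy as np

def second_order_nld(image, r1, r2, r3):
    """Second order non-local difference over three adjacent regions:
    the norm of the difference between the first order (vector) difference
    of regions 1 and 2 and that of regions 2 and 3, i.e. ||(v1-v2)-(v2-v3)||.
    """
    v1 = image[r1].ravel().astype(float)
    v2 = image[r2].ravel().astype(float)
    v3 = image[r3].ravel().astype(float)
    return float(np.linalg.norm((v1 - v2) - (v2 - v3)))
```

As expected of a second order difference, it vanishes on a linear intensity ramp and responds only to curvature (blobs and ridge cross-sections rather than plain edges).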
In certain embodiments a third order non-local difference (NLD) is calculated as the difference between a first second-order NLD and a second second-order NLD, using five adjacent or contiguous regions.
By extension, in certain embodiments an Nth order NLD is calculated as the difference between a first (N-1)th order NLD and a second (N-1)th order NLD, using 2N-1 adjacent or contiguous regions. The regions defined for the calculation of the first, second or higher order non-local differences may be of any particular shape; for example each region may form a substantially rectangular, triangular or circular region. The values of the picture elements contained in the vector or matrix representing these regions may be intensity values, or other quantities such as brightness, colour or frequency, or any quantity derived from these quantities. In certain embodiments the first, second or higher order non-local differences are first, second or higher order derivatives or functions of such derivatives. For example, a first order difference could be calculated as a first order derivative squared.
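For illustration, the recursive construction might be sketched as below. Here the two lower-order NLDs are taken to share exactly the middle region, which reproduces the three-region second order and five-region third order cases described above; the overlap pattern for orders above three is one possible reading of the text, not a prescribed one:

```python
import numpy as np

def nth_order_nld(image, regions):
    """Higher order non-local difference built recursively: an NLD over a
    list of regions is the difference between the NLDs of the two
    overlapping halves of the list, which share the middle region. Two
    regions give the plain first order difference; three and five regions
    give the second and third order cases. Returns the norm of the
    resulting difference vector."""
    vecs = [image[r].ravel().astype(float) for r in regions]

    def diff(vs):
        if len(vs) == 2:
            return vs[0] - vs[1]
        mid = len(vs) // 2
        # difference of the two lower-order NLDs sharing the middle region
        return diff(vs[:mid + 1]) - diff(vs[mid:])

    return float(np.linalg.norm(diff(vecs)))
```

With three single-pixel regions this reduces to |v1 - 2·v2 + v3|, the familiar discrete second difference.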
The inverse process of the method is not limited to minimizing an energy function as described above. Other types of energy functions could be used. For example, the robust function Eq.(3) can be replaced by an exponential function, which would not significantly change the results.
When used in combination with fluorescence microscopy, the method is not limited to a specific fluorescence modality. For example the method could be used with fluorescence anisotropy or fluorescence lifetime type measurements. There is also no specific limitation on the nature or number of dyes used to stain a particular sample.
The method is also not limited to microscopy imaging techniques or to imaging applications performed in the optical region of the spectrum. For example the method can be used to improve the spatial resolution of X-ray CT scans such as CT scans for oil search applications. In this case multiple images could be taken at different angles.
The method could also find application in in vivo imaging. It could be of particular interest in cases where the subject (a patient or an animal) is moving during measurement. In this case the motion of the subject provides a natural translational motion that can be used as a means of obtaining a plurality of primary images.
The method could also be used for diagnostic or surgical applications when combined with endoscopic imaging, to achieve super resolution endoscopic imaging. It will be understood that the present invention has been described above purely by way of example, and modifications of detail can be made within the scope of the invention.
Each feature disclosed in the description, and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination.


CLAIMS:
1. An image restoration method comprising
obtaining a plurality of primary images wherein each primary image contains a different representation of a subject,
calculating a map representing at least one feature of the target image identified by calculating at least one second order difference or higher order difference,
fitting the primary images to a model that represents each primary image as an alteration of a common target image of the subject, wherein fitting the primary images is subject to a constraint that the target image includes the at least one feature and
extracting the target image from the fit, wherein the target image has a spatial resolution greater than a spatial resolution of the primary images.
2. A method as claimed in claim 1, wherein the calculating of the map comprises calculating at least one first order difference.
3. A method as claimed in claim 1 or 2, wherein the at least one feature comprises at least one edge, area or volume.
4. A method as claimed in claim 3, wherein each area comprises a region of lateral extent having a length greater than a minimum length and a width greater than a minimum width.
5. A method as claimed in claim 3 or 4, wherein at least one of the edges, areas or volumes is smaller than an impulse response of an optical system used to measure the primary images.
6. A method as claimed in any preceding claim, wherein the at least one difference comprises at least one non-local difference.
7. An image restoration method as claimed in any preceding claim, wherein the primary image comprises picture elements, for example pixels, and wherein calculating the map involves, for at least some picture elements of the primary image, defining a region around the picture element, wherein the region has an area greater than an area of the picture element, and calculating differences between regions to identify image features.
8. A method as claimed in claim 7 wherein calculating differences between regions involves calculating a first order difference and/or a second order difference between a first and a second region.
9. A method as claimed in claim 7 or 8 wherein the differences are differences in intensity and/or brightness and/or colour and/or any quantity derived from them, for example, a difference in frequency of the light.
10. A method as claimed in claim 2 or any of claims 3 to 9 as dependent on claim 2, wherein calculating a first order difference involves defining a first region and a second region, representing the first region by a first vector or matrix comprising values of a plurality of picture elements within the first region and the second region by a second vector or matrix comprising values of a plurality of picture elements within the second region, and calculating a norm of a difference between the vector or matrix of the first region and the vector or matrix of the second region.
11. A method as claimed in any preceding claim, wherein calculating a second order difference involves defining a first, a second, and a third region, the second region being located between the first and the third region, representing the first, second, and third region by a vector or a matrix comprising values of a plurality of picture elements within the first, second and third region respectively, and calculating a norm of a difference between a first order difference calculated between the first and the second region and a first order difference calculated between the second and the third region.
12. A method as claimed in claim 10 or 11 wherein the picture element values are intensity values and/or brightness and/or colour values and/or any quantity derived from them, for example, a difference in frequency of the light.
13. A method as claimed in any preceding claim, wherein calculating the map comprises calculating at least one first order difference and at least one second order difference.
14. A method as claimed in claim 13 wherein calculating a map involves weighting the first and second order differences by a first weighting factor and a second weighting factor respectively.
15. A method as claimed in claim 14 wherein the first weighting factor is proportional to a ratio of the first order difference over a sum of the first and second order differences and the second weighting factor is proportional to a ratio of the second order difference over a sum of the first and second order differences.
16. A method according to any preceding claim, wherein the plurality of primary images are acquired using a microscope.
17. An image restoration method comprising
obtaining a plurality of primary images acquired using a microscope wherein each primary image contains a different representation of a subject,

fitting the primary images to a model that represents each primary image as an alteration of a common target image of the subject,
extracting the target image from the fit, wherein the target image has a spatial resolution greater than a spatial resolution of the primary images.
18. An image restoration method as claimed in claim 17, comprising calculating a map comprising at least one image feature, wherein fitting the primary images is subject to a constraint that the target image includes the at least one feature.
19. A method as claimed in any of the preceding claims wherein the model comprises an energy function and wherein fitting involves minimizing the energy function.
20. A method as claimed in claim 19 wherein the fitting is performed iteratively and wherein at each iteration stage the target image is updated and the map is re-calculated as a function of the updated target image, until the energy function is minimized.
21. A method as claimed in any of the preceding claims, wherein obtaining a plurality of primary images comprises causing relative translation of the subject and imaging optics to a plurality of relative positions and obtaining at least one image for each position.
22. A method as claimed in any of the preceding claims, wherein obtaining the plurality of images is performed in a single acquisition.
23. A method as claimed in any of the preceding claims, wherein obtaining a plurality of images comprises using diffractive optics to acquire the plurality of images.
24. An image restoration method as claimed in any of the preceding claims, wherein the plurality of images comprises fluorescent images obtained using a fluorescent imaging system.
25. An image restoration method as claimed in any of the preceding claims wherein obtaining a plurality of primary images comprises measuring a first set of primary images and a second set of primary images, extracting a first and second target image corresponding to the first and second set of primary images using the method of any of the preceding claims and combining the first and second target images to form a combined target image.
26. A method according to claim 25, wherein the primary images of the first set are images of a first colour, and the primary images of the second set are images of a second, different colour.
27. A method as claimed in claim 25 or 26 wherein the primary images of the first set are images of a first type of structure and the primary images of the second set are images of a second type of structure.
28. A method as claimed in any of the preceding claims wherein the primary images are images of cellular structures, optionally images of at least one of transport particle structures and microtubule structures.
29. A method as claimed in any of the preceding claims wherein the spatial resolution of the target image increases with an increase in the number of primary images.
30. A method as claimed in any of the preceding claims wherein the target image has a spatial resolution beyond a limit of diffraction.
31 . An apparatus comprising
image acquisition means operable to acquire a plurality of primary images,
a memory in communication with the image acquisition means for storing the primary images, and
a processor in communication with the memory, the processor being arranged to process the primary images according to a method as claimed in any of claims 1 to 30.
32. An apparatus as claimed in claim 31, wherein the image acquisition means includes a microscope operable in combination with a translation stage to acquire primary images.
33. An apparatus as claimed in claim 31 or 32, wherein the image acquisition means includes a microscope operable in combination with diffractive optics to acquire primary images.
34. A computer program product comprising computer readable instructions that are executable to perform a method according to any of claims 1 to 30.
35. A method substantially as described herein with reference to the accompanying drawings.

36. An apparatus substantially as described herein with reference to the accompanying drawings.
PCT/GB2014/050091 2013-01-14 2014-01-14 An image restoration method WO2014108708A1 (en)
Publication: WO2014108708A1, published 2014-07-17.
Also published as: GB201512290D0 (2015-08-19); GB2528179A (2016-01-13); GB2528179B (2016-12-21); GB201300637D0 (2013-02-27).