GB2595694A - Method and system for joint demosaicking and spectral signature estimation - Google Patents

Method and system for joint demosaicking and spectral signature estimation

Info

Publication number
GB2595694A
GB2595694A, GB2008371.3A, GB202008371A
Authority
GB
United Kingdom
Prior art keywords
hyperspectral
image
spectral
snapshot
mosaic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB2008371.3A
Other versions
GB2595694B (en)
GB202008371D0 (en)
Inventor
Vercauteren Tom
Ebner Michael
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kings College London
Original Assignee
Kings College London
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kings College London
Priority to GB2008371.3A (GB2595694B)
Publication of GB202008371D0
Priority to EP21730274.4A (EP4162242A1)
Priority to CN202180059179.6A (CN116134298A)
Priority to PCT/GB2021/051280 (WO2021245374A1)
Priority to JP2022575173A (JP2023529189A)
Priority to US18/008,062 (US20230239583A1)
Publication of GB2595694A
Application granted
Publication of GB2595694B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum
    • G01J3/2823 Imaging spectrometer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum
    • G01J3/2823 Imaging spectrometer
    • G01J2003/2826 Multispectral imaging, e.g. filter imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10036 Multispectral image; Hyperspectral image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The crux of the invention is using image processing to obtain a high spatial resolution image by up-sampling a low spatial-spectral resolution sample of a hypercube (a hyperspectral image), in order to determine parameters of a desired target in the hyperspectral image. The particular application is medical imaging. Determining parameters 66 (i.e. physical properties of the scene such as blood perfusion or oxygenation saturation level information) of a desired target image from hyperspectral imagery comprises: capturing hyperspectral snapshot mosaic images 62 (see Fig. 7) of a scene using a hyperspectral image sensor (Figs. 1 & 2), where the snapshot mosaic images 62 are of relatively low spatial (x and y axes) resolution and relatively low spectral (λ axis) resolution; undertaking demosaicking and parameter estimation from the snapshot mosaic images to determine relatively high spatial resolution parameters 66 of a desired target image; and outputting the determined relatively high-resolution parameters 66 as representative of the desired target image. One embodiment (f_v and g_p) uses the hyperspectral snapshot mosaic to build a virtual hypercube 64, which has higher spatial resolution, and then uses the virtual hypercube to estimate the desired parameters 66. Another embodiment (f_p) uses joint demosaicking and parameter estimation to obtain the parameters 66.

Description

METHOD AND SYSTEM FOR JOINT DEMOSAICKING AND SPECTRAL SIGNATURE
ESTIMATION
Technical Field
Embodiments of the invention relate generally to image and video processing and in particular to a system and method for processing hyperspectral images acquired in real time, and in some embodiments to images acquired in a medical context.
Background to the Invention and Prior Art
Many difficult intraoperative decisions with potentially life-changing consequences for the patient are still based on the surgeon's subjective visual assessment. This is partly because, even with the most advanced current surgical techniques, it may still not be possible to reliably identify critical structures during surgery. The need for more refined, less qualitative, intraoperative wide-field visualisation and characterisation of tissue during surgery has been evidenced across a variety of surgical specialties.
As a first example, in neuro-oncology, surgery is often the primary treatment, with the aim to remove as much abnormal tissue as safely possible (Gross Total Resection, GTR). The pursuit of GTR has to be balanced with the risk of postoperative morbidity associated with damaging sensitive areas that undertake vital functions such as critical nerves and blood vessels. During surgery, navigation solutions, such as those disclosed in U.S. Pat. No. 9,788,906 B2, can map preoperative information (e.g. MRI or CT) to the anatomy of the patient on the surgical table. However, navigation based on preoperative imaging does not account for intraoperative changes. Interventional imaging and sensing, such as surgical microscopy, fluorescence imaging, point-based Raman spectroscopy, ultrasound and intra-operative MRI, may be used by the surgeon either independently or as an adjunct to navigation information to visualise the operated tissues. However, tissue differentiation based on existing intraoperative imaging remains challenging because of stringent operative constraints in the clinical environment (e.g. intraoperative MRI or CT), or imprecise tumour delineation (e.g. ultrasound or fluorescence imaging).
In neuro-oncology surgery, fluorescence-guided surgery with 5-aminolevulinic acid (5-ALA) induced protoporphyrin IX (PpIX) has been increasingly used. Other fields including bladder cancer have also benefited from PpIX fluorescence-guided surgery. However, the visualisation of malignant tissue boundaries is fuzzy due to accumulation of the tumour marker in healthy tissue as well; is non-quantitative due in part to the time-varying fluorescence effect and the confounding effect of tissue autofluorescence; is associated with side effects; and can only be used for specific tumour types, as reviewed in Suero Molina et al., Neurosurgical Review, 2019. The wealth of prior art aiming at improving neurosurgical tissue differentiation is a clear indication that better intraoperative imaging is seen as an opportunity to improve patient outcomes in these difficult surgeries.
As a second example, Necrotising Enterocolitis (NEC) is a devastating neonatal disease often requiring surgical treatment with potentially important side effects. NEC is characterised by ischaemic necrosis of intestinal mucosa, resulting in perforation, generalised peritonitis and, in severe cases, death of the newborn. Three in every thousand live births suffer from NEC, with 85% of cases occurring in infants of very low birth weight (<1500 g), of whom 30% die despite state-of-the-art care, as reviewed in Hull et al., Journal of the American College of Surgeons, 2014. Surgical management of NEC includes primary peritoneal drainage, exploratory confirmation surgery and/or laparotomy with bowel resection. A major challenge for surgeons performing NEC laparotomies is deciding how much bowel to resect, with the long-term risk of leaving the baby with short bowel syndrome weighed against leaving poorly perfused bowel in situ, compromising the infant's chances of recovery. Currently, there is no standard-of-care image guidance technology for NEC laparotomy.
Operative planning of the resection thus relies on the surgeon's judgment, dexterity and perceptual skills. If doubtful, crude incision of the tissue to assess bleeding may be used. It is thought that NEC mortality rates could be reduced by earlier diagnosis, better monitoring and improved surgical management.
As discussed in Shapey et al., Journal of Biophotonics, 2019, multispectral and hyperspectral imaging, hereafter jointly referred to as hyperspectral imaging (HSI), are emerging optical imaging techniques with the potential to transform the way surgery is performed. However, it remains unclear whether current systems are capable of delivering real-time, high-resolution tissue characterisation for surgical guidance. HSI is a safe, non-contact, non-ionising and non-invasive optical imaging modality with characteristics making it attractive for surgical use. By splitting light into multiple spectral bands far beyond what the naked eye can see, HSI carries refined information about tissue properties beyond conventional colour information that may be used for more objective tissue characterisation. In HSI, within a given time frame, the collected data spans a three-dimensional space composed of two spatial dimensions and one spectral dimension. Each such three-dimensional frame is commonly referred to as a hyperspectral image or hypercube. As illustrated in U.S. Pat. No. 5,337,885, the concept of using HSI for medical applications has been known and explored for several decades. Classically, HSI has relied on scanning through space and/or spectrum to acquire complete hypercubes. Due to the time required for scanning purposes, these methods have been unable to provide a live display of hyperspectral images. Compact sensors capable of acquiring HSI data in real-time, referred to as snapshot HSI, have recently been developed. Such snapshot sensors acquire hyperspectral images at video rate, typically capable of achieving about 30 hyperspectral frames per second or even more, by sacrificing both spectral and spatial resolution. Instead of acquiring a dense hypercube, i.e. with fully sampled spectral information (z-direction) at each spatial pixel of a scene (x-y plane), snapshot hyperspectral cameras acquire subsampled hyperspectral images in one shot, typically using a tiled or mosaic pattern as detailed in Pichette et al., Proc. of SPIE, 2017.
Here, we define a hyperspectral imaging system to be real-time if it is capable of acquiring images at a video rate suitable for providing a live display of hyperspectral imaging information in the order of tens of frames per second.
As illustrated in Shapey et al., Journal of Biophotonics, 2019 and further detailed below in view of the prior art, while existing HSI systems can capture important information during surgery, they currently do not provide a means of providing wide-field and real-time information of high enough resolution to support surgical guidance.
Hyperspectral imaging for use in medical applications has been described with a number of different acquisition principles. The main ones have relied on sequential filtering of the light at the detector side. As an early example, U.S. Pat. No. 5,539,517 A proposed an interferometer-based method where a predetermined set of linear combinations of the spectral intensities is captured sequentially through scanning.
Around the same time, U.S. Pat. No. 6,937,885 B1 proposed to acquire HSI data sequentially with the help of a tuneable filter, such as a Liquid Crystal Tuneable Filter (LCTF), in combination with prior knowledge about expected tissue responses to acquire data according to a given diagnostic protocol. U.S. Pat. No. 8,320,996 B2 refined a programmable spectral separator, such as an LCTF, to acquire spectral bands one after the other, extract information related to a specific diagnostic protocol, and proposed to project a summarising pseudo-colour image onto the imaged region of interest. In E.P. Pat. Application No. 2 851 662 A2, a slit-shaped aperture coupled with a dispersive element and mechanical scanning is used to acquire spectral imaging information in a sequential fashion. As these methods rely on sequential acquisition, they are not directly suitable for real-time wide-field imaging. Also, none of these works have presented a means of improving the resolution of the captured HSI.
In addition to filtering the light at the detector end, HSI for medical application has also been explored through the use of filtered excitation light. As a first example, U.S. Pat. Application No. 2013/0245455 A1 proposed a HSI setup with a plurality of LED sources switched on in a particular order to acquire a plurality of spectral bands sequentially. In a similar approach, W.O. Pat. Application No. 2015/135058 A1 presented an HSI system which requires optical communication between a remote light source and a spectral filtering device to scan through a set of illumination filters.
As with their detection filtering counterparts, these systems are not suitable for real-time imaging and no solution for HSI resolution improvement is provided.
Still in the medical domain, HSI data sources have been integrated in more complex setups, some of which look into providing pathology-related discriminative information. U.S. Pat. Application No. 2016/0278678 A1 relies on projecting spatially modulated light for depth-resolved fluorescence imaging combined with hyperspectral imaging. U.S. Pat. No. 10,292,771 B2 disclosed a surgical imaging system potentially including an HSI device and exploiting a specific surgical port with a treatment to decrease the reflectance of the port. In U.S. Pat. No. 9,788,906 B2, hyperspectral imaging is used as a potential source of information to detect the phases of a medical procedure and correspondingly configure an imaging device.
HSI-derived tissue classification is disclosed in E.P. Pat. Application No. 3545491 A1, where clustering is used to assign the same classification to all the pixels belonging to the same cluster. Tissue classification based on HSI data is also presented in W.O. Pat. Application No. 2018/059659 A1. Although of potential interest for use during surgery, none of these imaging systems propose a means of acquiring real-time HSI or a means of improving the resolution of HSI images, nor do they propose a means of generating high-resolution tissue characterisation or classification maps.
Outside of the medical field, sensors able to acquire HSI data in real-time have recently been proposed. In E.P. Pat. Application No. 3348974 A1, a hyperspectral mosaic sensor is presented in which spectral filters are interleaved at the pixel level to generate spatially and spectrally sparse but real-time HSI data. A number of aberrations are expected in such sensors, and Pichette et al., Proc. of SPIE, 2017 presented a calibration approach able to compensate for some of the observed spectral distortions, but no method was presented there to increase the spatial resolution. In Dijkstra et al., Machine Vision and Applications, 2019, a learning-based approach for snapshot HSI acquired with a mosaic sensor is presented. Although the effect of spectral cross-talk and the sparse nature of the sensor are discussed in this work, the combined effect of various distortions is not modelled or captured directly. Simplifying assumptions are used to decouple crosstalk-correction and upscaling. Alternative means of capturing snapshot HSI data have been proposed, such as coded aperture snapshot spectral imaging (CASSI) presented in Wagadarikar et al., Applied Optics, 2008. These imaging systems typically include a number of optical elements such as dispersive optics, coded apertures, and several lenses, often resulting in an impractical form factor for use in surgery. Similar to mosaic sensors, CASSI systems result in difficult trade-offs between temporal, spectral, and spatial resolution but also lead to complex and computationally costly reconstruction techniques. U.S. Pat. Application No. 2019/0096049 A1 proposed to combine learning-based techniques with optimisation techniques to reconstruct CASSI-based HSI data. Even though the computational complexity was reduced, and although the system is able to capture raw data in real-time, no means of performing real-time reconstruction is disclosed.
Even though sensors such as mosaic sensors and CASSI may find a use in surgery, it remains to be shown how these can be integrated in a real-time system able to display high-resolution HSI-derived images and concurrently provide maps such as tissue characterisation or discriminative images of tissue classification for surgical support.
Prior art shows that the problem of intraoperative tissue characterisation arises in many surgical fields and that it has been addressed with different methods. Hyperspectral imaging has notably shown great potential in this area. However, to the best of our knowledge, there is no disclosure of a method that can provide wide-field and high-resolution tissue-related information derived in real-time from hyperspectral imaging during surgery. Therefore, a need exists for a system and method that would allow for real-time resolution improvement and associated tissue characterisation of hyperspectral imaging.
Summary of Invention
Embodiments of the invention provide a method and system that allows parameters of a desired target image to be determined from hyperspectral imagery of a scene.
The parameters may be representative of various aspects of the scene being imaged, particularly representative of physical properties of the scene. For example, in some medical imaging contexts, the property being imaged may be blood perfusion or oxygenation saturation level information per pixel. In one embodiment the parameters are obtained by collecting lower spectral and spatial resolution hyperspectral imagery, and then building a virtual hypercube of the information having a higher spatial resolution using a spatiospectral-aware demosaicking process, the virtual hypercube then being used for estimation of the desired parameters at the higher spatial resolution. Alternatively, in another embodiment, instead of building the virtual hypercube and then performing the estimation, a joint demosaicking and parameter estimation operation is performed to obtain the parameters in high spatial resolution directly from lower spectral and spatial resolution hyperspectral imagery. Various white level and spectral calibration operations may also be performed to improve the results obtained.
In view of the above, from a first aspect there is provided a method of determining parameters of a desired target image from hyperspectral imagery, comprising: capturing hyperspectral snapshot mosaic images of a scene using a hyperspectral image sensor, the snapshot mosaic images being of relatively low spatial and low spectral resolution; undertaking demosaicking of the snapshot mosaic images to generate a virtual hypercube of the snapshot mosaic image data, the virtual hypercube comprising image data of relatively high spatial resolution compared to the snapshot mosaic images; from the image data in the virtual hypercube, determining relatively high spatial resolution parameters of a desired target image; and outputting the determined relatively high-resolution parameters as representative of the desired target image.
In one example the demosaicking is spatiospectrally aware. For example, the demosaicking may comprise image resampling, such as linear or cubic resampling, of the snapshot mosaic images followed by the application of a spectral calibration matrix. Moreover, in another example the demosaicking may comprise machine learning.
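As an illustration of this kind of pipeline, the following NumPy sketch gathers each band of a 5 x 5 snapshot mosaic, upsamples it back to the sensor grid, and applies a spectral calibration matrix. It is a minimal stand-in for the described method, not the patent's implementation: nearest-neighbour repetition is used in place of the linear or cubic resampling mentioned above, and the function and variable names are our own.

```python
import numpy as np

def demosaick(mosaic, pattern=5, C=None):
    """Sketch of spatiospectral-aware demosaicking: upsample every band of
    a snapshot mosaic to the full sensor grid, then mix the raw bands with
    a spectral calibration matrix C (rows of C define corrected bands)."""
    # Gather each filter position's sparse samples into a low-res band plane.
    planes = np.stack([mosaic[i::pattern, j::pattern]
                       for i in range(pattern) for j in range(pattern)], axis=-1)
    # Nearest-neighbour upsampling back to full resolution (a stand-in for
    # the linear/cubic resampling mentioned in the text).
    cube = np.repeat(np.repeat(planes, pattern, axis=0), pattern, axis=1)
    if C is not None:
        cube = cube @ C.T  # apply the spectral calibration matrix per pixel
    return cube

mosaic = np.arange(100.0).reshape(10, 10)          # toy 10x10 snapshot mosaic
cube = demosaick(mosaic, pattern=5, C=np.eye(25))  # shape (10, 10, 25)
```

Replacing `np.repeat` with a proper interpolator (for example `scipy.ndimage.zoom` with order 1 or 3) would give the linear or cubic variants.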
In addition or alternatively, the demosaicking may be temporally consistent between two or more consecutive frames based on motion compensation in between frames.
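A very simple form of inter-frame motion estimation that could support such temporal consistency is global phase correlation; the sketch below is our own illustrative code under the assumption of pure integer translation between frames, not the patent's motion-compensation algorithm.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate a global integer translation between consecutive frames by
    phase correlation: the normalised cross-power spectrum of the two
    frames has an inverse FFT that peaks at the displacement."""
    F1, F2 = np.fft.fft2(prev), np.fft.fft2(curr)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12       # keep only phase information
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks in the upper half of the range to negative shifts.
    if dy > prev.shape[0] // 2:
        dy -= prev.shape[0]
    if dx > prev.shape[1] // 2:
        dx -= prev.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
frame0 = rng.random((32, 32))
frame1 = np.roll(frame0, (3, -2), axis=(0, 1))  # simulate camera motion
shift = estimate_shift(frame0, frame1)
```

The recovered shift can then be used to warp the previous frame's reconstruction onto the current one before combining them.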
A further example further comprises, prior to capturing the hyperspectral snapshot mosaic images, undertaking a white balancing operation on the hyperspectral image sensor. In one example the white balancing operation may comprise separately acquiring reference images and deploying a linear model in which, in addition to the acquired mosaic image w of an object with integration time τ, a white reference mosaic image w_w of a reflectance tile is acquired with integration time τ_w, and dark reference mosaic images w_d and w_dw are acquired with a closed shutter at integration times τ and τ_w respectively. The white balancing operation then yields a reflectance mosaic image given by the element-wise computation r = (τ_w / τ) · (w − w_d) / (w_w − w_dw). In a further example, prior to capturing the hyperspectral snapshot mosaic images, a spatiospectral calibration operation is undertaken on the hyperspectral image sensor. During the calibration operation a real spectral filter response operator B_F : V → W and a spatial cross-talk operator T : W → W, where V denotes the space of hypercubes and W the space of mosaic images, are estimated in a controlled setup to account for parasitical effects during image acquisition.
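This linear white-balancing model can be sketched as follows (a minimal NumPy illustration; `eps` is our own addition to guard against division by zero and is not part of the described model):

```python
import numpy as np

def white_balance(w, w_d, w_w, w_dw, tau, tau_w, eps=1e-8):
    """Linear white balancing of a raw snapshot mosaic `w` acquired at
    integration time tau, using a white reference `w_w` (time tau_w) and
    dark references `w_d`, `w_dw` taken with a closed shutter at tau and
    tau_w respectively. Returns a reflectance mosaic."""
    # Integration-time ratio times the dark-corrected object signal,
    # normalised element-wise by the dark-corrected white reference.
    return (tau_w / tau) * (w - w_d) / (w_w - w_dw + eps)

# Synthetic check: an object returning half of the white tile's signal.
dark = np.zeros((10, 10))
white = np.full((10, 10), 2.0)
obj = np.full((10, 10), 1.0)
r = white_balance(obj, dark, white, dark, tau=1.0, tau_w=1.0)  # ~0.5 everywhere
```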
In addition, a further example may further comprise measuring a characteristic of the hyperspectral image sensor to obtain a measured system filter response operator A_M : U → W by acquiring snapshot mosaic image data using collimated light and sweeping through all n_λ wavelengths in conjunction with an imaging target with a known, typically spatially-constant, spectral signature.
In one example the determining of the relatively high spatial resolution parameters further comprises analysing pixel-level hyperspectral information for its composition of unique end-members characterised by specific spectral signatures.
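Such end-member analysis is classically posed as linear spectral unmixing. The sketch below is illustrative only, with made-up signatures, and recovers abundances for a single pixel by unconstrained least squares; practical pipelines often add non-negativity and sum-to-one constraints.

```python
import numpy as np

def unmix(spectrum, E):
    """Estimate end-member abundances for one pixel by least squares,
    modelling the pixel spectrum as E @ a where the columns of E hold the
    known end-member spectral signatures."""
    a, *_ = np.linalg.lstsq(E, spectrum, rcond=None)
    return a

# Two made-up end-member signatures over six spectral bands.
E = np.array([[1.0, 0.2], [0.8, 0.4], [0.6, 0.6],
              [0.4, 0.8], [0.2, 1.0], [0.1, 0.9]])
true_a = np.array([0.7, 0.3])   # ground-truth 70/30 mixture
a_est = unmix(E @ true_a, E)    # recovers [0.7, 0.3] on noiseless data
```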
In one example the determining of the relatively high spatial resolution parameters further comprises estimation of tissue properties per spatial location (typically pixels) from reflectance information of hyperspectral imaging, such as pixel-level tissue absorption information.
Another example of the present disclosure provides a method of determining parameters of a desired target image from hyperspectral imagery, comprising: capturing hyperspectral snapshot mosaic images of a scene using a hyperspectral image sensor, the snapshot mosaic images being of relatively low spatial and low spectral resolution; undertaking a joint demosaicking and parameter estimation from the snapshot mosaic images to determine relatively high spatial resolution parameters of a desired target image; and outputting the determined relatively high-resolution parameters as representative of the desired target image. Within this additional example all of the white balancing and calibration operations noted above may also be employed.
A further example provides a system for determining parameters of a desired target image from hyperspectral imagery, comprising: a hyperspectral image sensor arranged in use to capture hyperspectral images of a scene; a processor; and a computer readable storage medium storing computer readable instructions that when executed by the processor cause the processor to control the system to perform the method of any of the above examples.
In addition, another example provides a computer readable storage medium storing a computer program that when executed causes a hyperspectral imaging system to perform the method of any of the above examples.
Further features and aspects of the invention will be apparent from the appended claims.
Brief Description of the Drawings
Further features and advantages of the present invention will become apparent from the following description of an embodiment thereof, presented by way of example only, and by reference to the drawings, wherein like reference numerals refer to like parts, and wherein:
Figure 1 is a display illustrating a typical arrangement of filters in a mosaic sensor.
Figure 2 is a display that schematically represents a mosaic sensor array that comprises the active sensor area of a snapshot mosaic imaging system.
Figure 3 is a graph showing example responses of a near-infrared 5 x 5 mosaic sensor.
Figure 4 is a display illustrating the molar extinction coefficient for oxy- (HbO2) and deoxy-haemoglobin (Hb).
Figure 5 is a diagram describing an example of a sterile imaging system that may be used for real-time hyperspectral imaging.
Figure 6 is a commutative diagram representing the steps the computational methods perform during spatiospectral calibration, virtual hypercube reconstruction and parameter estimation.
Figure 7 is a display that schematically compares the sparse hyperspectral information acquired in a two-dimensional snapshot mosaic image with the information captured in a three-dimensional hypercube.
Figure 8 is a diagram describing the steps of spatiospectral calibration, spatiospectral-aware demosaicking, and parameter estimation via virtual hypercubes or acquired snapshot imaging data.
Figure 9 is a display illustrating the shortcomings of demosaicking using methods with no spatiospectral awareness.
Figure 10 is a display illustrating different tissue property parameter maps extracted from virtual hypercubes.
Figure 11 is a diagram of a computer system according to an embodiment of the invention.
Overview of Embodiments
According to one aspect, embodiments described herein relate to a computer-implemented method and a computer system for obtaining hyperspectral images from lower resolution mosaic image data acquired in real time, in order to determine image property information representative of some physical property of the sample being imaged. The method may comprise:
* the acquisition of hyperspectral imaging data with a medical device suitable for use in a sterile environment;
* the application of data-driven computational models to deliver hyperspectral information at a strictly higher resolution than any spectral band of the original data.
An imaging system for data acquisition for use with the method and system may consist of one or more hyperspectral imaging (HSI) cameras and a light stimulus provided by one or more light sources. Its application can be combined with a scope, such as an exoscope or endoscope, to be provided as part of the optical path of the imaging system.
The imaging system may be hand-held, fixed to a surgical table for example by means of a mechanical arm or combined with a robotic actuation mechanism.
Optical filters may be placed in the optical path anywhere in between the origin of the travelling light and its receiving end such as a camera sensor.
Hyperspectral images may be acquired with an imaging device that captures sparse hyperspectral information, such as by assigning each spatial location the spectral information of strictly fewer spectral bands than the total number of spectral bands the imaging system is capable of measuring. An example of such a prior art imaging system is shown in U.S. Pat. No. 9,857,222, which describes using a mosaic of filters for passing different bands of the optical spectrum, and a sensor array arranged to detect pixels of the image at the different bands passed by the filters, wherein for each of the pixels, the sensor array has a cluster of sensor elements for detecting the different bands, and the mosaic has a corresponding cluster of filters of different bands, integrated on the sensor element so that the image can be detected simultaneously at the different bands.
An example imaging system may include a sensor array of individual 5 x 5 mosaic sensors as shown in Figure 1. The active sensor area is obtained by creating an array of such individual mosaic sensors (see Figure 2). Each 5 x 5 mosaic sensor integrates 25 optical filters that are sensitive to different spectral bands. This leads to a sparse sampling of hyperspectral information across the active sensor area where each spectral band information is acquired only once per 5 x 5 region and spatially shifted with respect to other bands. An image acquired by such a sensor array arrangement shall be called a 'mosaic' or 'snapshot mosaic' image.
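The sparse sampling of such a 5 x 5 mosaic sensor can be simulated as follows. This is an idealised sketch that ignores cross-talk and filter imperfections; the function and variable names are our own.

```python
import numpy as np

def to_mosaic(cube, pattern=5):
    """Simulate snapshot mosaic sampling: at every pixel keep only the one
    band assigned to its position within the repeating 5x5 filter tile, so
    each band is sampled once per tile and spatially shifted with respect
    to the other bands."""
    H, W, _ = cube.shape
    rows, cols = np.indices((H, W))
    band = (rows % pattern) * pattern + (cols % pattern)  # filter index per pixel
    return cube[rows, cols, band]

cube = np.arange(10 * 10 * 25, dtype=float).reshape(10, 10, 25)
m = to_mosaic(cube)  # a 10x10 snapshot mosaic image
```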
Due to imperfections and physical design constraints of the hyperspectral camera sensor in practice, parasitic effects may lead to multimodal sensitivity response curves of the optical filters. Examples of such effects affecting the imaging include interference of higher-order spectral harmonics, out-of-band leakage and cross-talk between neighbouring pixels on the camera sensor. Example response curves for a near-infrared (NIR) 5 x 5 mosaic sensor are shown in Figure 3. As will be made clear, additional filters may be used to suppress or emphasise spectral parts of these responses. A naive hypercube reconstruction from a snapshot mosaic image, obtained by stacking images of band-associated pixels onto each other, leads to a spatially and spectrally distorted hypercube representation that is low-resolution in both spatial and spectral dimensions. Snapshot hyperspectral imaging is therefore characterised by a high temporal resolution of hyperspectral images that are affected by multimodal spectral band contamination and are of low spatial and spectral resolution.
Herein, we disclose methods suitable to obtain hyperspectral imaging information in high temporal and spatial resolution from snapshot imaging that provides wide-field and high-resolution tissue-related information during surgery in real-time.
White balancing and spatiospectral calibration of acquired hyperspectral snapshot mosaic images may be done as a preprocessing step with data acquired either in factory or with data acquired by the user. In some examples this may be achieved by using a single image of a static object, such as a reflectance board, or a sequence of images of either a static or moving object, acquired in or outside of the operating theatre. In some other examples, this may be done by processing specular reflections as observed in the acquired image data. Further examples for image calibration may relate to image processing due to a deliberate change of the imaging device settings, such as changing the effects of filter adjustments.
The reconstruction of higher-resolution hyperspectral image data from the original, low-resolution snapshot mosaic using image processing methods shall be referred to as 'demosaicking' or 'upsampling'. Demosaicking may be performed by spatiospectrally upsampling the acquired snapshot mosaic data to obtain a hypercube with a fixed, potentially arbitrary, number of spectral bands for all acquired image pixel locations. In some embodiments, such demosaicking may also be performed to achieve spatiospectral upsampling onto a high-resolution grid other than the original image pixel locations. Besides conventional demosaicking approaches that achieve spatial upsampling via, e.g., resampling, we present spatiospectral-aware upsampling/demosaicking methods that account for both spatial cross-talk and spectral parasitical effects that are specifically important in snapshot imaging systems. Such a reconstruction shall be referred to as a 'virtual hypercube'.
Simple examples of demosaicking may include image resampling performed independently for each spectral band. In other examples, this may include methods based on inverse problem formulations. Other examples may also include the use of data-driven, supervised, semi-supervised or unsupervised/self-supervised machine learning approaches. These examples may also include computational methods for reconstructions that are designed for irregular grids. Increased quality or robustness during demosaicking may be obtained by processing a video stream of image data. Similar approaches may be used to increase temporal resolution for data visualisation.
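A minimal sketch of such band-wise resampling demosaicking follows, using separable linear interpolation independently per band (the function name and the row-major tile layout are illustrative assumptions):

```python
import numpy as np

def demosaic_resample(mosaic, m=5):
    """Band-wise demosaicking by linear resampling (illustrative only).

    Each band's sparse samples are placed on the full sensor grid and the
    gaps filled by 1-D linear interpolation along each axis, independently
    per band. This ignores spatial cross-talk and spectral leakage, which
    spatiospectral-aware methods account for.
    """
    ny, nx = mosaic.shape
    cube = np.empty((ny, nx, m * m))
    full_y, full_x = np.arange(ny), np.arange(nx)
    for i in range(m):
        for j in range(m):
            sub = mosaic[i::m, j::m]                # sparse samples of band (i, j)
            ys, xs = np.arange(i, ny, m), np.arange(j, nx, m)
            # separable interpolation: along rows first, then along columns
            tmp = np.stack([np.interp(full_x, xs, row) for row in sub])
            col = np.stack([np.interp(full_y, ys, c) for c in tmp.T], axis=1)
            cube[:, :, i * m + j] = col
    return cube
```

For a spatially constant scene this reproduces the input exactly; for real scenes it introduces the blur and edge-shift artefacts discussed below.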
Computational models may be used for parameter estimation from a virtual hypercube representation. Examples may include the estimation of tissue properties per spatial location (typically pixels) from reflectance information of hyperspectral imaging, such as pixel-level tissue absorption information. More generally, the obtained pixel-level hyperspectral information may be analysed for its composition of unique end-members characterised by specific spectral signatures. Spectral unmixing algorithms are presented that estimate the relative abundance of end-members mixed in pixel spectra to derive tissue properties relevant for surgical guidance. Relevant examples of end-members include oxy- and deoxy-haemoglobin (Figure 4). Examples of relevant end-member-derived tissue properties include blood perfusion and oxygenation saturation level information per pixel.
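A toy illustration of linear spectral unmixing via least squares follows; the end-member matrix and pixel spectrum are synthetic, and practical implementations typically add non-negativity and sum-to-one constraints on the abundances:

```python
import numpy as np

def unmix(spectrum, endmembers):
    """Linear spectral unmixing: least-squares end-member abundances.

    spectrum   : (n_bands,) reflectance/absorption at one pixel
    endmembers : (n_bands, n_em) matrix whose columns are end-member
                 spectra, e.g. HbO2 and Hb extinction coefficients
    Returns the abundance vector minimising ||E a - s||; non-negativity
    and sum-to-one constraints, often added in practice, are omitted here.
    """
    a, *_ = np.linalg.lstsq(endmembers, spectrum, rcond=None)
    return a

# Toy example: a pixel that is 70 % end-member 1 and 30 % end-member 2.
E = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
s = E @ np.array([0.7, 0.3])
print(unmix(s, E))  # ≈ [0.7, 0.3]
```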
Other examples of unmixing may include the estimation of fluorescence and auto-fluorescence which may also be used for quantitative fluorescence.
In another aspect, a virtual hypercube representation may serve to estimate a pseudo red-green-blue (RGB) image or any other image of reduced dimensionality to visualise hyperspectral imaging data.
According to another aspect, virtual hypercubes may also be used to classify pixels according to tissue type, including the types of benignity and malignancy. Virtual hypercubes may also be used for semantic segmentation beyond tissue types. This may include the classification of any pixel associated with non-human tissue such as surgical tools. Obtained segmentations may be used for increased robustness of tissue parameter estimation or to correct potential image artefacts such as specular reflections.
In all examples described herein, virtual hypercube estimation and parameter extraction may be performed as two independent steps or may be performed jointly. Computational models may use algorithms which allow for joint demosaicking and parameter estimation. Such approaches may be based on inverse-problem formulations or on supervised or unsupervised machine learning approaches.
All computer-assisted parameter estimations may be associated with uncertainty estimates.
Description of the Embodiments
The present disclosure relates to an image processing system and method that may allow video processing of high-resolution hypercube data to be performed online from a video stream of sparse, low-resolution mosaic data acquired in real time by a medical device suitable for use in a sterile environment.
Description of the System
Figure 5 presents an overview of an example sterile imaging system that may be used for real-time hyperspectral imaging. A real-time hyperspectral imaging camera, such as a snapshot hyperspectral camera, is mounted on a sterile optical scope, such as a sterile exoscope, via an appropriate adapter. This adapter may also allow for zooming and focusing of the optical system and may comprise additional features such as a mechanical shutter, a beam splitter or a filter adapter. In some embodiments, the use of several camera sensors combined with light splitting mechanisms may be advantageous to cover the range of wavelengths of interest. For ease of presentation, such configuration may continue to be referred to as a hyperspectral imaging camera. The sterile optical scope is connected to a light source, such as a broadband Xenon or LED light source, that can provide light for spectral wavelengths appropriate for the hyperspectral imaging camera or for exciting fluorophores of interest via a light guide which may be sterile or draped. It should be clear that light sources may in some embodiments also be mounted together with the camera, thereby potentially foregoing the need for a light guide.
Optical filters may be placed in the optical path anywhere in between the origin of the travelling light and its receiving end such as the camera sensor. Such optical filters may be inserted using a variety of means such as a filter wheel within the light source that may hold multiple optical filters or may be embedded in the adapter or endoscope. In some embodiments, optical filters may be used to eliminate undesired out-of-band responses, such as parts of the visible light for an NIR sensor (Figure 3). The hyperspectral imaging camera is connected to a computational workstation via a data link such as a cable or wireless communication. Advantageously, electrical power may be provided to the camera sensor and other powered elements mounted with the camera (e.g. tuneable lens or filter) through the same cable as the data link, as would be the case with a Power over Ethernet (PoE) connection. The workstation processes acquired hyperspectral imaging information and derived information may be displayed to a user via a display monitor. In some embodiments, the workstation may be embedded in the camera sensor unit or in the display monitor. Visualised information may include the acquired hyperspectral imaging data, or information derived thereof via computational methods described below, such as an RGB image or tissue property information. Overlay of different types of information may be used to provide more context to the user. In one example, tissue property information in an area where estimation is done with high confidence can be overlaid on a pseudo-RGB rendering of the captured scene. Sterility of the imaging system may be ensured by a combination of draping or sterilising of the system components and procedural steps to ensure connections between sterile and non-sterile components do not compromise the sterility of the sterile operators and field.
One advantageous embodiment can be to use a sterile drape for the camera and the data cable which is sealed on a sterile optical scope connected to a sterile light guide. The sterile imaging system may be hand-held by the user or fixed to a surgical table that allows controlled mobilisation or immobilisation of the imaging system depending on the user's requirement during surgery. Controlled mobilisation and immobilisation of the sterile imaging system may be achieved using a sterile or draped mechanical arm or robotic actuation mechanism. In other embodiments, the hyperspectral imaging system may be embedded in a surgical microscope.
Computational steps are performed by a computation workstation 52 to extract tissue or object property information on a per-pixel level from acquired low-resolution snapshot hyperspectral imaging data for display during surgery. The computation workstation 52 is shown in more detail in Figure 11, from where it can be seen that the computation workstation 52, which may be a suitably programmed general purpose computer, comprises a processor 1128, provided with memory 1130, and an input-output interface 1132 from which control inputs can be obtained from peripheral devices such as keyboards, footswitches, pointing devices (such as a computer mouse or trackpad), and the like. In addition, a further input port 1134 has connected to it a hyperspectral imaging camera for capturing hyperspectral imaging data, and an image data output port 1136 is connected to a display, for displaying images generated by the present embodiment, using the hyperspectral imaging data as input.
Also provided is a computer readable storage medium 1112, such as a hard disk, solid state drive, or the like, on which is stored appropriate control software and data to allow embodiments of the invention to operate. In particular, the storage medium 1112 has stored thereon operating system software 1114, which provides overall control of the computing system 52, and also has stored thereon spatiospectral demosaicking program 1116, and parameter estimation program 1118. In addition, parameter mapping program 1120 is also provided. As will be described later, the spatiospectral demosaicking program 1116, and parameter estimation program 1118 operate together to provide a first embodiment, whereas the parameter mapping program operates to combine the functionality of the spatiospectral demosaicking program 1116, and parameter estimation program 1118 into a single process to provide a second embodiment. Input to both embodiments is in the form of multiple snapshot mosaic images 1126, and the output is various functional or semantic data images 1122, as will be described. In addition, an intermediate data structure in the form of virtual hypercube 1124 may also be stored on the computer readable medium, which is generated during the operation of the first embodiment, as will be described.
Hyperspectral information of an object, such as tissue, is affected by spatiospectral parasitic effects in addition to spatiospectral downsampling during snapshot imaging which leads to the acquisition of a hyperspectral image that is characterised by low spatial and low spectral resolution. The disclosed methods may perform computational steps that address each of these effects either independently or jointly to obtain per-pixel estimates of tissue or object property information in high spatial resolution.
A filter response mapping may be used to describe hyperspectral information of an object in lower spectral resolution acquired by individual band sensors of the snapshot imaging sensor. Further band selection and crosstalk modelling approaches may be used to describe the acquired low-spectral and low-spatial resolution snapshot mosaic image.
In some embodiments (i.e. the first embodiment mentioned above), hyperspectral information of a virtual hypercube characterised by low spectral but high spatial resolution may then be reconstructed by deploying spatiospectral correction approaches. Computational parameter estimation approaches may be used to infer per-pixel tissue or object properties. Example computational approaches are disclosed below.
In other embodiments (i.e. the second embodiment mentioned above), tissue or object property information can be obtained from the acquired low-resolution snapshot data directly. Example computational approaches are disclosed below.
It should be clear to a person skilled in the art that a direct parameter estimation approach can be considered as the inference of tissue-property information from a virtual hypercube where the reconstructed virtual hypercube itself corresponds to the estimated tissue property map. Further details will be provided with respect to Figure 6.
Figure 6 presents an overview of the steps that the computational methods of both the first and second embodiments may perform. In summary, in the first embodiment snapshot mosaic data w is captured at 62, with low spatial and low spectral resolution. This then undergoes a demosaicking process to generate a virtual hypercube 64, containing virtual high spatial, but low spectral, resolution data. From the virtual hypercube a parameter estimation process can then be performed, as detailed further below, to obtain the desired high spatial resolution data 66 in the desired parameter space.
In contrast, for the second embodiment the snapshot mosaic data w is again captured at 62, with low spatial and low spectral resolution. This is then subjected to a joint demosaicking and parameter estimation process, which strictly speaking foregoes the complete generation of a virtual hypercube (although conceptually might be thought of as still generating the parts of it that are required, even if in fact the computation is performed more directly) as detailed further below, to obtain desired high spatial resolution data in the desired parameter space directly.
In more detail, for real-time hyperspectral imaging, sparse hyperspectral information may be obtained by using a snapshot camera system that simultaneously acquires individual bands at different spatial locations using a sensor array of mosaic filters in 'one shot'. Such a system may acquire a snapshot mosaic image on a regular grid w ∈ W := R^{n_x} × R^{n_y} with active sensor area W of width n_x and height n_y. It shall be noted that alternative sensor types with irregular or systematic pixel arrangements similar to mosaic imaging, such as tiled capturing using microlens arrays (Figure 2), can be addressed by straightforward adaptation of the following approaches.
A snapshot mosaic w ∈ W describes a 'flattened', two-dimensional (2D) and low-resolution approximation of a, typically unknown, three-dimensional (3D) high-resolution hypercube u ∈ U := W × R^{n_λ} with n_λ ∈ N discrete bands of the optical spectrum providing an approximation of the continuous optical spectrum in a wavelength range of interest.
For an example of an m1 × m2 mosaic filter, a total of m1m2 individual spectral bands may be acquired over an m1 × m2 spatial region. For an m1 × m2 mosaic, a filter response operator B_F: R^{n_λ} → R^{m1m2} may be assumed that describes the mapping from the n_λ to all the m1m2 discrete bands of the optical spectrum, whereby m1m2 « n_λ typically. This operator shall advantageously model the spectral response of the respective filters including the relative radiance of the light source, higher-order harmonics and spectral leakage of the mosaic filters, but no spatial cross-talk across the sensor elements, which will be accounted for separately. Assuming the independent application of such filter band responses to all individual spatial locations in the active sensor area W, a hypercube u ∈ U = W × R^{n_λ} can be transformed into a hypercube v ∈ V_F := W × R^{m1m2} with lower spectral resolution via B_F. By assuming identical properties of all mosaic filters, the extension of the filter response operator B_F to the active sensor area can be formally described using the Kronecker product in combination with the identity operator I, i.e. I ⊗ B_F: U → V_F with (I ⊗ B_F)(u) = v. Defining a band selection operator S_{m1,m2}: R^{m1} × R^{m2} × R^{m1m2} → R^{m1} × R^{m2}, the mapping from a mosaic hypercube to a snapshot mosaic can be described, i.e. the 'flattening' of a three-dimensional m1 × m2 × m1m2 hypercube onto a two-dimensional m1 × m2 mosaic that contains the spectral information of each of the individual m1m2 acquired bands at different m1m2 spatial positions (Figure 7). This operation can be naturally extended to a selection operator S := S_W: W × R^{m1m2} → W over the entire active sensor area W. Spatial cross-talk between individual neighbouring band sensors is modelled by an appropriate cross-talk operator T := T_W: W → W over the active sensor area W of the sensor. As an example, a convolution with kernel size k × k may be assumed to model the cross-talk around a k × k neighbourhood per pixel.
By introducing a spatial cross-talk operator T that accounts for mixed sensor responses of neighbouring filters, all remaining, primarily spectral, parasitical effects are accounted for by the spectral filter response operator B_F. Overall, the forward model of the snapshot mosaic image acquisition that independently accounts for spatial and spectral leakages of the imaging system can be described by the joint operator

T ∘ S ∘ (I ⊗ B_F): U → W,   (1)

which maps the (typically unknown) 3D high-resolution hypercube u ∈ U into the 2D low-resolution snapshot mosaic w ∈ W. Specific examples of how to obtain both the real spectral filter response operator B_F and the spatial cross-talk operator T of the mosaic sensor to characterise the hyperspectral snapshot imaging setup are shown below. To account for differences in the optical spectrum provided by different light sources, white balancing may be performed for acquired radiances w ∈ W as a preprocessing step, or its impact may be embedded in B_F.
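The forward model (1) may be sketched as follows for the linear case, assuming a single shared k × k cross-talk kernel and a row-major tile layout (both simplifications of the general model):

```python
import numpy as np

def snapshot_forward(u, B_F, kernel, m=5):
    """Sketch of the forward model w = T(S((I ⊗ B_F) u)) of Eq. (1).

    u      : (ny, nx, n_lambda) high-resolution hypercube
    B_F    : (m*m, n_lambda) filter response matrix (spectral leakage etc.)
    kernel : (k, k) convolution kernel modelling spatial cross-talk T
             (here one shared kernel for all bands, a simplification)
    """
    ny, nx, _ = u.shape
    v = u @ B_F.T                        # (I ⊗ B_F): per-pixel band responses
    # S: keep, at each pixel, only the band its mosaic filter measures
    w = np.empty((ny, nx))
    for i in range(m):
        for j in range(m):
            w[i::m, j::m] = v[i::m, j::m, i * m + j]
    # T: spatial cross-talk as a 2-D convolution (zero padding at borders)
    k = kernel.shape[0]
    p = k // 2
    wp = np.pad(w, p)
    out = np.zeros_like(w)
    for a in range(k):
        for b in range(k):
            out += kernel[a, b] * wp[a:a + ny, b:b + nx]
    return out
```

With an identity cross-talk kernel and identity filter responses the model reduces to pure band selection, which is a useful sanity check.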
Similar to the filter response operator B_F: R^{n_λ} → R^{m1m2} for a given mosaic sensor, a virtual filter response operator B_V: R^{n_λ} → R^{n_Λ} for n_Λ virtual bands can be defined.
Virtual filter responses can be chosen depending on the desired task. Specific choices of the virtual filter responses may include representations of idealised transfer functions of optical filters, such as the primary response of Fabry-Perot optical resonators, which are characteristic of some snapshot imaging systems such as presented by Pichette et al., Proc. of SPIE, 2017. Virtual bands may also be chosen as regularly spaced spectral bands for increased interpretability and interoperability. Other examples include the spectra of end-members that are of particular interest for tissue property extraction, such as Hb or HbO2 (Figure 4). In practice, the number of virtual filter bands n_Λ may be smaller than or equal to the number of filter bands m1m2 of the mosaic sensor. By introducing an operator C: R^{m1m2} → R^{n_Λ} for spectral correction, a spectral mapping from real to virtual filter responses of the system can be established by ensuring C ∘ B_F ≈ B_V. Specific examples of how to obtain the spectral calibration operator C are shown below (Figure 8A). In cases where the spectral responses of the acquisition filters include a limited or null amount of spectral parasitical effects such as higher-order harmonics and spectral leakage, it may be advantageous to choose B_V = B_F, thus leading to C describing the identity operator.
With the definition of a virtual hypercube space V_V := W × R^{n_Λ}, spatiospectral-aware 'demosaicking' or 'upsampling' f_V: W → V_V then refers to the reconstruction of a virtual hypercube v ∈ V_V from an acquired snapshot image w ∈ W, i.e. f_V(w) = v, which accounts for both spatial cross-talk and spectral parasitical effects. Specific examples of demosaicking approaches f_V are shown below (Figure 8B).
Based on a reconstructed virtual hypercube v ∈ V_V, parameter estimation approaches g_P: V_V → P_n can be used to estimate image property information on a pixel level over the entire active sensor area, i.e. to estimate a property p ∈ P_n := W × R^n whereby n ∈ N depends on the type of property (Figure 8C). For example, n = 1 may be used for semantic tissue classification for the differentiation between benign and malignant tissue types whereas n = 3 may be used for the estimation of pseudo-RGB images.
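As a toy example of a parameter map with n = 3, a pseudo-RGB rendering can be obtained by weighting the virtual bands with assumed per-band RGB contribution weights (the weight matrix is an illustrative assumption, not a calibrated colour transform):

```python
import numpy as np

def pseudo_rgb(v, rgb_weights):
    """Parameter map with n = 3: pseudo-RGB from a virtual hypercube.

    v           : (ny, nx, n_Lambda) virtual hypercube
    rgb_weights : (n_Lambda, 3) assumed per-band RGB contribution weights
    Returns values normalised to [0, 1] for display.
    """
    rgb = v @ rgb_weights
    return np.clip(rgb / max(rgb.max(), 1e-8), 0.0, 1.0)
```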
Instead of individual demosaicking and parameter extraction steps, i.e. p = g_P(v) = g_P(f_V(w)), joint models f_P: W → P_n are presented to allow an end-to-end approach directly from the acquired mosaic images w ∈ W (Figure 8D).
As should be clear to a person skilled in the art, all computational methods may also be used for multiple camera systems that provide multiplexed video stream data.
Additionally, all presented computational approaches may lead to reconstructions on a high-resolution grid other than the active sensor area W. All computational methods may also be based on any other positive loss function (e.g. smooth L1 or bisquare) other than the norm presented in the examples. Specific assumptions on the noise level for error estimates may also be made, such as the assumption that noise is independent but not identically distributed across wavelengths.
White Balancing

An acquired mosaic image captures the radiance from an object. Reflectance calculation, or white balancing, may be performed to compute the reflectance signal from the acquired radiance in the snapshot mosaic image w ∈ W as a preprocessing step. This may be achieved by using separately acquired reference images, including white and dark reference mosaic images w_w;τ_w and w_d;τ acquired at integration times τ_w and τ, respectively. In some examples, white balancing may be achieved by deploying a linear model. I.e., in addition to the acquired mosaic image w_τ of an object with integration time τ, a white reference mosaic image w_w;τ_w of a reflectance tile with integration time τ_w, and dark reference mosaic images w_d;τ and w_d;τ_w acquired with integration times τ and τ_w with a closed shutter, white balancing yields the reflectance mosaic image

r = (τ_w / τ) · (w_τ − w_d;τ) / (w_w;τ_w − w_d;τ_w).   (2)

For some examples, integration times τ and τ_w in (2) may be identical. In others, τ_w may be reduced to avoid potential sensor saturation effects. As should be clear to a person skilled in the art, the white reference may also refer to any means of acquiring a preferably spectrally neutral reference. In some embodiments, the use of a grey card may for example be combined with an intensity correction factor akin to the effect of τ_w so as to avoid any potential saturation effects when acquiring the white reference. A sterile imaging target with known reflectance characteristics, such as medical equipment available in the operating theatre, may also be used to estimate a white reference according to (2). An example of such a sterile imaging target may be a surgical gauze. Specular reflections of one or more acquired images, obtained from various angles and positions relative to the surgical scene, may also be used as a surrogate of a white reference signal.
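The linear white-balancing model (2) may be sketched as follows (the epsilon guard against division by zero is an implementation detail, not part of the model):

```python
import numpy as np

def white_balance(w, w_dark, w_white, w_dark_white, tau, tau_w, eps=1e-8):
    """Reflectance via the linear white-balancing model of Eq. (2).

    w            : object mosaic acquired at integration time tau
    w_dark       : dark reference (closed shutter) at tau
    w_white      : white reference (reflectance tile) at tau_w
    w_dark_white : dark reference at tau_w
    The integration-time ratio tau_w / tau rescales the signal when the
    white reference was acquired with a shorter exposure to avoid saturation.
    """
    num = w - w_dark
    den = np.maximum(w_white - w_dark_white, eps)  # guard against division by zero
    return (tau_w / tau) * num / den
```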
In examples where imaging setup characteristics are known a priori, white balancing may be precomputed, therefore removing the need to acquire white and dark references in an intraoperative setup. Both white and dark references may be estimated in-factory for a variety of camera settings to be used for on-the-fly white balancing during intraoperative use of the imaging system.
White balancing according to (2) may also be performed for fluorescence imaging applications. This may include white balancing of the system in conjunction with optical components specifically designed for fluorescence-based imaging, such as an exoscope with an adequate light source and optical filters for indocyanine green (ICG) or 5-aminolevulinic acid (5-ALA) induced protoporphyrin IX (PpIX).
All presented white balancing approaches may include the temporal processing of a video stream. Such approaches may be used to account for measurement uncertainty or to capture spatially varying white balancing with non-uniform reflectance targets, such as a surgical gauze. Examples may include the temporal averaging of the white or dark reference images used for (2).
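Temporal averaging of reference images over a video stream can be sketched with a running mean (an illustrative helper, not a prescribed implementation):

```python
import numpy as np

class RunningReference:
    """Temporal average of reference mosaics over a video stream.

    Maintains an incremental mean of incoming frames, e.g. white or dark
    reference mosaics used as inputs to the white-balancing model (2),
    to reduce measurement uncertainty.
    """

    def __init__(self):
        self.n = 0
        self.mean = None

    def update(self, frame):
        """Incorporate one frame and return the current temporal mean."""
        self.n += 1
        if self.mean is None:
            self.mean = frame.astype(float).copy()
        else:
            self.mean += (frame - self.mean) / self.n  # incremental mean update
        return self.mean
```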
Spatiospectral Calibration

Both the real spectral filter response operator B_F ∈ R^{m1m2×n_λ} and the spatial cross-talk operator T: W → W in (1) can be estimated in a controlled setup to account for parasitical effects during image acquisition.
By measuring the characteristics of the sensor in-factory, a measured system filter response operator A_F^meas: U → W can be obtained. This may be achieved by acquiring snapshot mosaic image data using collimated light and sweeping through all n_λ wavelengths in conjunction with an imaging target with known, typically spatially-constant, spectral signature. In conjunction with (1), spatiospectral calibration of the imaging system may then be performed.
In the example of linear operators B_F: R^{n_λ} → R^{m1m2} and T: W → W, let θ_T ∈ R^{m1m2k²} denote the unknown parameters of a cross-talk operator T describing k × k convolution kernels for the m1m2 filters to model the mixing of pixel neighbourhood responses for each band. Spatiospectral calibration of the imaging system can be performed by estimating, in (1), both B_F, represented in this linear operation mode as a matrix in R^{m1m2×n_λ}, and T, represented with θ_T, by solving an optimisation problem such as

min_{θ_T ∈ R^{m1m2k²}} min_{B_F ∈ R^{m1m2×n_λ}} || T(·; θ_T) ∘ S ∘ (I ⊗ B_F) − A_F^meas ||.   (3)

Additional regularisation and constraints, such as positivity constraints, may be applied for θ_T and/or B_F in (3). In some embodiments, a model using the same kernel for each band may for example be advantageous. In some examples, further regularisation may include the use of a blind source separation approach which may result in an optimisation problem of the form

min_{θ_T ≥ 0} min_{B_F ≥ 0} Σ_{i ≠ i'} Sim(B_F(i,·), B_F(i',·))   (4)
such that || T(·; θ_T) ∘ S ∘ (I ⊗ B_F) − A_F^meas || ≤ ε

for a given error threshold ε > 0 and a similarity measure Sim. Similarity measures may include normalised mutual information, Kullback-Leibler divergence, or other existing scores. Alternatively, one may be interested in minimising the deviation, measured by a function Dev, from an expected model, as can be done for example using a normality testing score:

min_{θ_T ≥ 0} min_{B_F ≥ 0} Σ_{i=1}^{m1m2} Dev(B_F(i,·))   (5)
such that || T(·; θ_T) ∘ S ∘ (I ⊗ B_F) − A_F^meas || ≤ ε.

It should be clear that reformulation of such constrained optimisation models can be done by relaxing the hard constraints through the inclusion of additional regularisation terms in (3).
From (3) it follows that for other examples spatiospectral calibration for an intraoperative system may be performed by acquiring snapshot mosaic images w ∈ W of an object with known hypercube u_ref ∈ U = W × R^{n_λ} over a spatial region Ω ⊂ W, i.e. u_ref restricted to Ω × R^{n_λ} is known. This leads to a spatiospectral calibration problem of the form

min_{θ_T ∈ R^{m1m2k²}} min_{B_F ∈ R^{m1m2×n_λ}} || (T(·; θ_T) ∘ S ∘ (I ⊗ B_F))|_Ω (u_ref) − w|_Ω ||   (6)

where additional regularisation and variable constraints may be applied, such as mentioned above.
Besides commercial calibration targets, a sterile imaging target with known reflectance characteristics, such as medical equipment available in the operating theatre, may be used to define u_ref for (6).
The initialisation of B_F in (3)-(6) may be performed based on the measured system filter response operator A_F^meas. One advantageous embodiment is to perform such initialisation with data acquired in-factory and then to perform refinement using data acquired at the point of care.
Calibration of the optical system during a surgical setting using (6) may be achieved using different optical filters with optical transmission properties t ∈ R^{n_λ}. For example, for a target with known hypercube representation u_ref ∈ W × R^{n_λ} over a region Ω ⊂ W, (6) can be solved for t ⊙ u_ref and the resulting snapshot mosaic image w_t, whereby ⊙ denotes pointwise multiplication in the spectral dimension.
Advantageously, switching across t can be implemented through the activation of a filter wheel embedded in the light source of the imaging system.
Calibration of the optical system during a surgical setting using (6) may also be achieved via the temporal processing of a video stream to account for measurement uncertainty or to enable the use of spatially-varying non-uniform reflectance targets, such as a surgical gauze, for white balancing. An example may include the temporal averaging of acquired snapshot images w of a target with known average reflectance characteristics.
Given virtual filter responses B_V: R^{n_λ} → R^{n_Λ}, a mapping between the band-filtered and virtual hypercube spaces V_F = W × R^{m1m2} and V_V = W × R^{n_Λ} can be established by finding a spectral correction operator C: R^{m1m2} → R^{n_Λ} such that C ∘ B_F ≈ B_V. In the case of linear operators represented as matrices, this leads to

C := argmin_{Ĉ ∈ R^{n_Λ×m1m2}} || Ĉ B_F − B_V ||,   (7)

whereby additional regularisation may be performed.
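For the linear matrix case, (7) admits a ridge-regularised closed-form solution, sketched below (the regularisation weight is an illustrative assumption):

```python
import numpy as np

def spectral_correction(B_F, B_V, reg=1e-6):
    """Least-squares spectral correction operator C of Eq. (7).

    Solves min_C ||C B_F - B_V|| with a small Tikhonov term, i.e. the ridge
    solution C = B_V B_F^T (B_F B_F^T + reg I)^{-1}.

    B_F : (m1*m2, n_lambda) real filter responses
    B_V : (n_Lambda, n_lambda) virtual filter responses
    Returns C : (n_Lambda, m1*m2).
    """
    G = B_F @ B_F.T + reg * np.eye(B_F.shape[0])  # regularised Gram matrix
    # Solve C G = B_V B_F^T for C (G is symmetric positive definite)
    return np.linalg.solve(G, (B_V @ B_F.T).T).T
```

When B_V = B_F, the recovered C is (up to the regularisation weight) the identity operator, matching the observation above.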
It is worth noting that the calibration computation in (7) reduces to the method described in Pichette et al., Proc. of SPIE, 2017 if and only if the spatial cross-talk operator T is assumed to be the identity operator, n_Λ = m1m2, and all m1m2 bands are acquired at the same spatial location, which is not the case for snapshot mosaic imaging when imaging spatially-varying scenes (Figure 7). Moreover, no demosaicking steps that account for both spatial cross-talk and spectral parasitical effects were presented in Pichette et al., Proc. of SPIE, 2017.
For spatiospectral-aware demosaicking, it may be useful to estimate the pseudo-inverse of the spectral correction operator instead, i.e.

C† := argmin_{Ĉ† ∈ R^{m1m2×n_Λ}} || Ĉ† B_V − B_F ||,   (8)

whereby regularisation may be performed. In other examples, C and C† may be obtained as a result of using invertible neural networks as a model for C in (7).
As should be clear to the person skilled in the art, all calibration methods described herein may be performed for multiple camera set-ups including different acquisition settings such as different gains.
Spatiospectral-Aware Demosaicking

Spatiospectral-aware demosaicking methods f_V: W → V_V aim at reconstructing a virtual hypercube v ∈ V_V = W × R^{n_Λ} from an acquired mosaic image w ∈ W, i.e. v = f_V(w), by accounting for parasitical effects present in snapshot imaging.
A straightforward and computationally fast approach for demosaicking may be to use image resampling, such as linear or cubic resampling, on the calibrated mosaic images followed by the application of the spectral calibration matrix C in (7). In the absence of a model that takes into account spatial and spectral parasitical effects, this leads to a hypercube reconstruction that suffers from blur in both spatial and spectral dimensions as well as other artefacts such as edge shifts, and therefore leads to increased uncertainty for subsequently performed tissue characterisation (Figure 9).
With the forward model

A_V := T ∘ S ∘ (I ⊗ C†): V_V → W,   (9)

a regularised inverse problem (IP)-based demosaicking approach f_V^IP may be described as

f_V^IP(w) = argmin_{v ∈ V_V} ( || A_V v − w || + Reg_IP(v) )   (10)

with an appropriate regularisation operator Reg_IP, such as Tikhonov regularisation, and constraints on the variables, such as positivity constraints.
Depending on the choice of the virtual filter responses B_V, (8) and, thus, (10) may be of a very ill-posed nature. Alternative examples for IP-based demosaicking may include the minimisation of

v* = argmin_{v ∈ V_F} ( || (T ∘ S) v − w || + Reg_IP(v) )   (11)

instead, which leads to

f_V^IP(w) = (I ⊗ C) v*.   (12)

Additional regularisation and variable constraints may be applied in (11), including a regularisation of the form Reg_IP((I ⊗ C) v) instead of Reg_IP(v).
For increased computational efficiency, all operators may be implemented as matrix-free operators.
If linear modelling with Tikhonov regularisation is used in combination with an ℓ2-norm, dedicated linear least-squares methods, such as LSMR, may be deployed to solve (10) or (11). In the case of total variation-based regularisation, the alternating direction method of multipliers (ADMM) may be used. Other numerical approaches, such as primal-dual or forward-backward splitting algorithms, may be used instead depending on the type and combination of operator models, data loss and regularisation terms.
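A minimal matrix-free sketch of solving such a problem with LSMR is given below; for brevity the forward operator is reduced to pure mosaic sampling (the cross-talk operator T and the filter responses are omitted, and the filter layout is hypothetical), while the LSMR `damp` argument plays the role of the Tikhonov weight.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsmr

H, W, B = 8, 8, 4                  # toy image size and band count
# Hypothetical filter layout: which band is sampled at each sensor pixel.
band = np.add.outer(np.arange(H), np.arange(W)) % B
ii, jj = np.arange(H)[:, None], np.arange(W)[None, :]

def matvec(v):                     # forward model: hypercube -> mosaic
    return v.reshape(H, W, B)[ii, jj, band].ravel()

def rmatvec(w):                    # adjoint: scatter mosaic back into hypercube
    v = np.zeros((H, W, B))
    v[ii, jj, band] = w.reshape(H, W)
    return v.ravel()

# Matrix-free operator, as suggested for computational efficiency.
A = LinearOperator((H * W, H * W * B), matvec=matvec, rmatvec=rmatvec)
w_obs = np.random.default_rng(0).random(H * W)
v_hat = lsmr(A, w_obs, damp=0.1)[0]    # damp acts as the Tikhonov weight
```

The same LinearOperator pattern extends to composed operators T ∘ S ∘ (1 ⊗ C†) without ever forming a dense matrix.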
To obtain fast computational times for spatiospectral-aware demosaicking for real-time intraoperative guidance during surgery, machine learning approaches may be used where fast computational times at inference can be achieved at the cost of slower computational times at the training stage. The implementation of a machine learning approach may be based on a fully convolutional neural network (CNN). In some examples, the CNN may be implemented using a U-Net-like architecture.
Supervised (S) machine learning approaches for spatiospectral-aware demosaicking f_v^S(·; θ): W → V_v with model parameters θ may be deployed in case a database of paired samples, i.e. {(w_i, v_i)}_{i∈J}, is available. As one example, optimal parameters θ* could be established by minimising the expectation of a loss function ℓ: V_v × V_v → R+, i.e. the risk or generalization error. A straightforward approach may be based on empirical risk minimisation for a training subset J_T ⊂ J using the loss ℓ(v_1, v_2) = ||v_1 − v_2||, i.e. θ* := argmin_θ Σ_{i∈J_T} ||f_v^S(w_i; θ) − v_i|| + Reg_S(θ) (13) with an appropriate regularisation operator Reg_S. It should be clear that any other loss may be used in (13) in addition to approaches that increase generalizability, including stopping the optimization when a loss criterion is reached on a separate validation data set, using data augmentation strategies, drop-out and the like. In case no paired database is readily available, such a training database may also be constructed using a classical IP-based approach as {(w_i, f_v^IP(w_i))}_{i∈J}. In other embodiments, a paired database may also be constructed by simulating snapshot data from existing hypercube data via the forward model (9), i.e. {(A_v v_i, v_i)}_{i∈J}.
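As a toy illustration of the empirical risk minimisation in (13): a full implementation would train a CNN, but a linear model f(w; θ) = θw with synthetic paired data keeps the sketch self-contained; all dimensions, the data and the regularisation weight are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_train = 16, 8, 200
theta_true = rng.normal(size=(n_out, n_in))        # unknown 'ground truth' map
W_train = rng.normal(size=(n_train, n_in))         # mosaic patches (flattened)
V_train = W_train @ theta_true.T                   # paired hypercube targets

theta = np.zeros((n_out, n_in))                    # model parameters
lam, lr = 1e-3, 0.3
for _ in range(800):                               # gradient descent on (13)
    resid = W_train @ theta.T - V_train            # f(w_i; theta) - v_i
    grad = resid.T @ W_train / n_train + lam * theta   # data loss + Reg_S(theta)
    theta -= lr * grad
```

The same loop structure carries over to a CNN, with the gradient supplied by automatic differentiation.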
In other examples, an unsupervised (U) machine learning approach f_v^U(·; θ): W → V_v with model parameters θ may be deployed for a database {w_i}_{i∈J}. An example may include a self-supervised approach by finding optimal parameters θ* such that it holds for a training subset J_T ⊂ J: θ* := argmin_θ Σ_{i∈J_T} ||A_v f_v^U(w_i; θ) − w_i|| + Reg_U(θ) (14) with an appropriate regularisation operator Reg_U. In some embodiments, regularisation may also be based on cycle consistency losses.
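The self-supervised objective (14) may be illustrated with the toy sketch below: only unlabelled mosaic measurements are used, and a linear reconstruction model is trained so that pushing its output back through a known forward model reproduces the measurements. The forward model A and all sizes are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n_v, n_w = 6, 4                            # toy hypercube / mosaic dimensions
A = rng.normal(size=(n_w, n_v))            # known forward model (stand-in for A_v)
W_data = rng.normal(size=(500, n_w))       # unlabelled snapshot measurements only

theta = np.zeros((n_v, n_w))               # linear model f(w; theta) = theta @ w
lr = 0.02
for _ in range(2000):                      # gradient descent on (14)
    V = W_data @ theta.T                   # current reconstructions f(w; theta)
    resid = V @ A.T - W_data               # self-supervised residual A f(w) - w
    grad = A.T @ resid.T @ W_data / len(W_data)
    theta -= lr * grad
```

No paired hypercube targets appear anywhere in the loop, which is the key difference from the supervised case (13).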
In other embodiments, semi-supervised machine learning approaches may be used in case no exhaustive, or no sufficiently representative/realistic, database of paired examples is available. A typical implementation of such approach could rely on a formulation combining supervised and unsupervised losses from (13) and (14).
Examples may also include the use of adversarial training approaches, such as the use of generative adversarial networks (GANs), to increase the dataset by synthesizing high-fidelity data pairs from available data.
Implementations of these examples may include the use of deep neural networks such as CNNs. An example may be based on a single-image super-resolution reconstruction network architecture based on a residual network structure, whereby the initial upscaling layer takes into account the regular, but spatially shifted, hypercube sampling reflected in the mosaic image acquisition. Other approaches may use input layers suitable for irregularly sampled input data, such as layers based on Nadaraya-Watson kernel regression.
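One hypothetical form of such an input layer is sketched below: Nadaraya-Watson kernel regression with a Gaussian kernel interpolates the irregularly placed samples of one spectral band onto a dense pixel grid; the bandwidth h and all shapes are illustrative.

```python
import numpy as np

def nadaraya_watson(coords, values, grid, h=1.5):
    """Nadaraya-Watson kernel regression of irregularly sampled band values
    onto a dense query grid.

    coords : (N, 2) sample locations of one spectral band within the mosaic
    values : (N,)   measured intensities at those locations
    grid   : (M, 2) query pixel coordinates
    h      : Gaussian kernel bandwidth in pixels
    """
    d2 = ((grid[:, None, :] - coords[None, :, :]) ** 2).sum(-1)   # (M, N)
    K = np.exp(-0.5 * d2 / h ** 2)                                # kernel weights
    return (K @ values) / K.sum(axis=1)                           # weighted mean
```

In a network, such a layer would regularise the irregular mosaic samples into a dense tensor before the convolutional stages.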
Temporally-Consistent Demosaicking

Instead of reconstructing a virtual hypercube from a single mosaic image at a time, a temporally-consistent approach may be deployed for increased robustness.
Spatiospectral-aware demosaicking for temporally-consistent virtual hypercube reconstruction between two or more consecutive frames may be used and based on motion compensation in between frames.
Inverse problems-based approaches for temporally-consistent spatiospectral-aware demosaicking may be based on optical flow (OF) which, for two consecutive frames w_t, w_{t+1} ∈ W, may be defined as

f_v^OF(w_t, w_{t+1}) := argmin_{v ∈ V_v, p,q ∈ R^(n_x × n_y)} ( ||A_v v − w_t||² + Σ_{i=1}^{n_x} Σ_{j=1}^{n_y} |(A_v v)(x_i, y_j) − w_{t+1}(x_i + p_{ij}, y_j + q_{ij})|² + Reg_IP(v) + Reg_OF(p, q) ) (15)

with appropriate regularisation operators Reg_IP and Reg_OF. In other examples, extension of (15) to multiple frames may be performed.
Machine learning-based supervised or unsupervised approaches for temporally-consistent spatiospectral-aware demosaicking may be based on video super-resolution approaches. These may be based on super-resolution networks with separated or integrated motion compensation, such as optical flow estimation. Other examples may build on a recurrent neural network (RNN), such as long short-term memory (LSTM) networks, to process the video stream of temporal snapshot image data.
Similar approaches may be used to increase temporal resolution for the visualisation of data derived from snapshot mosaic imaging. In some examples, this may be done by estimating w_{t+1/2} at half time steps t + 1/2, i.e. doubling the frame rate for HSI data visualisation, by using the displacements p/2, q/2 as obtained during optical flow estimation, such as (15).
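Assuming displacement fields p, q have already been estimated, e.g. via (15), the half-step frame may be synthesised by warping along half the flow; the sketch below uses bilinear backward warping and treats the fields as dense per-pixel displacements.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def half_step_frame(w_t, p, q):
    """Synthesise an intermediate frame at time t + 1/2 by warping w_t along
    half of the estimated optical-flow displacements (p, q)."""
    H, W = w_t.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Backward warping: sample w_t at positions displaced by half the flow.
    coords = np.stack([yy - q / 2.0, xx - p / 2.0])
    return map_coordinates(w_t, coords, order=1, mode="nearest")
```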
Parameter Estimation from Virtual Hypercubes

Based on a reconstructed virtual hypercube v ∈ V_v, parameter estimation approaches g_P: V_v → P^n can be used to estimate image property information on a pixel level over the entire active sensor area W, i.e. to estimate a property p ∈ P^n := W × R^n of dimension n ∈ N. Approaches for spectral unmixing may be used to estimate the relative abundance of specific end-members mixed in the pixel spectra. For example, given a set of reflectances x_ij ∈ R^(n_λ), or derived values thereof, for each pixel location (i,j), i ∈ {1, ..., n_x}, j ∈ {1, ..., n_y}, of the active sensor area W, the spectral mixture of n_e ∈ N end-members (e_k)_{k=1}^{n_e} may be described by a linear spectral mixture model x_ij = Σ_{k=1}^{n_e} a_ijk e_k + ε_ij (16) with ε_ij denoting the random error and a_ijk the relative abundance ratio of end-member k at pixel location (i,j). By defining the end-member matrix E := [e_1, ..., e_{n_e}] ∈ R^(n_λ × n_e) and local abundances a_ij := (a_ijk)_k ∈ R^(n_e), the model (16) can be written as x_ij = E a_ij + ε_ij. With x := (x_ij)_{ij} ∈ V_v and a := (a_ij)_{ij} ∈ P^(n_e), an inverse problems-based approach for spectral unmixing may read g_P^IP(x) := argmin_{a ∈ P^(n_e)} Σ_{i=1}^{n_x} Σ_{j=1}^{n_y} ||E a_ij − x_ij|| + Reg_IP(a) (17) with appropriate regularisation Reg_IP and variable constraints, such as positivity.

In one example, regularisation in (17) may be omitted, which leads to a straightforward computation of the relative abundances a ∈ P^(n_e) using normal equations.

Other choices of discrepancy measures between Ea and x in (17) may be used, such as the cosine distance. Specific assumptions on the noise level in (16) may also be made, such as the assumption that noise is independent but not identically distributed across wavelengths.
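The regularisation-free case, solved by the normal equations as noted above, may be sketched as follows; the end-member matrix used in the usage example is purely illustrative.

```python
import numpy as np

def unmix(E, x):
    """Linear spectral unmixing without regularisation via the normal
    equations a = (E^T E)^{-1} E^T x, applied to every pixel spectrum.

    E : (n_lambda, n_e) end-member matrix
    x : (..., n_lambda) pixel spectra
    Returns (..., n_e) abundance estimates.
    """
    pinv = np.linalg.solve(E.T @ E, E.T)   # (n_e, n_lambda) pseudo-inverse
    return x @ pinv.T
```

Positivity constraints or other regularisers would replace this closed form with an iterative solver.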
Other approaches may be based on supervised or unsupervised approaches g_P^S: V_v → P^(n_e) and g_P^U: V_v → P^(n_e) similar to the demosaicking approaches (13) and (14), respectively.
Examples for spectral unmixing may include the unmixing of the end-members oxy- (HbO2) and deoxy-haemoglobin (Hb) per pixel, with spectral characteristics of their molar extinction coefficients shown in Figure 4. A simple model to estimate the associated relative abundance a = g_P(x) may be based on absorbance estimates x = −ln(v) for a reflectance hypercube v ∈ V_v. Derived abundances a_HbO2, a_Hb, a_scat ∈ W for oxy- and deoxy-haemoglobin, in addition to an end-member that accounts for scattering losses, may be used to estimate total haemoglobin (or blood perfusion) a_HbO2 + a_Hb and oxygenation saturation levels a_HbO2/(a_HbO2 + a_Hb) (Figure 10a and Figure 10b).
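A self-contained sketch of this haemoglobin example is given below; the extinction curves e_hbo2 and e_hb and the scattering end-member are placeholder shapes, not real molar extinction coefficients, and the constant reflectance hypercube is purely synthetic.

```python
import numpy as np

wavelengths = np.linspace(600.0, 1000.0, 10)   # illustrative band centres (nm)
e_hbo2 = np.linspace(1.0, 3.0, 10)             # placeholder extinction curves,
e_hb = np.linspace(3.0, 1.0, 10)               # not real coefficients
e_scat = (wavelengths / 600.0) ** 2            # stand-in for scattering losses
E = np.stack([e_hbo2, e_hb, e_scat], axis=1)   # (n_lambda, 3) end-member matrix

v = np.full((4, 4, 10), 0.5)                   # toy reflectance hypercube
x = -np.log(v)                                 # absorbance estimates x = -ln(v)
a = np.linalg.lstsq(E, x.reshape(-1, 10).T, rcond=None)[0].T.reshape(4, 4, 3)

total_hb = a[..., 0] + a[..., 1]               # total haemoglobin proxy
so2 = a[..., 0] / np.maximum(total_hb, 1e-9)   # oxygenation saturation estimate
```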
Other examples for spectral unmixing may include tissue differentiation based on known spectral reflectance signatures of tissue types.
A virtual hypercube that holds the NIR reflectance information may also be used for spectral unmixing during fluorescence imaging based on known absorption and emission spectra of fluorescent compounds, such as PpIX or ICG. This may also be used for quantitative fluorescence imaging to estimate the concentration of the fluorescent compound.
Pseudo-RGB images may be obtained from a virtual hypercube, such as by using CIE RGB colour matching functions (Figure 10c). If the virtual hypercube does not present spectral bands that cover the visible spectrum for RGB reconstruction, or if it covers it only partially, colorization methods may be deployed. In the case of NIR imaging, this may include supervised or unsupervised methods for colorization to estimate per-pixel RGB information. One example may include the use of cyclic adversarial networks for unpaired samples of surgical RGB images and virtual hypercube reconstructions. Other approaches may be based on the use of higher-order responses of the optical system in the visible range. In one embodiment using NIR imaging sensors, higher-order responses, typically considered undesired spectral responses outside of the sensor's active range that need to be eliminated, could be specifically exploited to acquire spectral measurements outside of the NIR region (Figure 3). With known filter response curves of the sensor across the optical spectrum, switching between optical filters can be used to sequentially acquire signal either in the NIR or visible range covering RGB colour information for image reconstruction. In some examples, such a switch may advantageously be implemented through the use of a filter wheel embedded in the light source.
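The projection onto colour channels may be sketched as below; the colour-matching samples cmf are placeholders for properly resampled CIE functions at the hypercube's band wavelengths.

```python
import numpy as np

def pseudo_rgb(v, cmf):
    """Project a virtual hypercube onto three display channels using
    colour-matching functions sampled at the hypercube's band wavelengths.

    v   : (H, W, n_lambda) reflectance hypercube
    cmf : (n_lambda, 3) colour-matching function samples (e.g. CIE RGB)
    """
    rgb = v @ cmf                          # integrate each pixel spectrum
    return rgb / max(rgb.max(), 1e-9)      # normalise for display
```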
Other examples of parameter estimation may include the segmentation of tissues or surgical tools using data-driven supervised, semi-supervised or unsupervised/self-supervised machine learning approaches.
In other examples, a virtual hypercube and its per-pixel reflectance information may also be used to estimate optical tissue properties. An example may include the absorption coefficient which may be estimated using an approach similar to inverse adding-doubling (IAD) or inverse Monte Carlo. Based on obtained absorption estimates using, e.g., IAD, supervised or semi-supervised machine learning approaches could be devised to estimate absorption maps from virtual hypercubes.
Joint segmentation and parameter estimation methods may be used. An example may include the automatic segmentation of tissue which can be used to provide a tissue-specific scattering prior to obtain more accurate absorption coefficient estimates.
Another example may include the automatic segmentation of tissue or surgical tools for more robust tissue parameter estimations. This may include accounting for image artefacts, such as specular reflections, or rejection of non-tissue-related signal contributions.
Other image analysis methods to derive information relevant for surgical decision making from a virtual hypercube may be used.
Parameter Estimation from Snapshot Imaging

As described previously, in a second embodiment parameter extraction may also be performed directly from acquired mosaic images w ∈ W via computational approaches f_P: W → P^n.
Such models may be based on the prior knowledge of 'ideal' parameter mappings h_P: U → P^n. Similar to the concept of introducing a spectral correction operator 1 ⊗ C: V_F → V_v between band-filtered and virtual hypercube spaces, a mapping m(·; θ): V_F → P^n with model parameters θ can be established such that m ∘ (1 ⊗ B_F) ≈ h_P. Such a mapping may be determined via θ* := argmin_θ (||m(·; θ) ∘ (1 ⊗ B_F) − h_P|| + Reg(θ)) (18) with appropriate regularisation Reg and variable constraints, such as positivity. With v† as in (11), i.e. v† := argmin_{v ∈ V_F} (||(T ∘ S)v − w|| + Reg(v)) (19) with an appropriate regularisation term, such as of the form Reg(m(v; θ*)), this leads to f_P(w) = m(v†; θ*). (20) In other examples, the pseudo-inverse m†(·; θ): P^n → V_F with model parameters θ may be determined via θ* := argmin_θ (||m†(·; θ) ∘ h_P − (1 ⊗ B_F)|| + Reg(θ)) (21) with appropriate regularisation Reg and variable constraints, such as positivity. The forward model can then be defined as A_P := T ∘ S ∘ m†: P^n → W. (22) Similar to the above-mentioned examples (10)-(14), inverse problems-based approaches, supervised, semi-supervised and unsupervised approaches may be used to estimate the parameter mapping f_P: W → P^n. It shall be noted that instead of estimating m or m†, invertible models may be used, such as invertible neural networks, which not only learn a forward mapping but also establish the corresponding inverse simultaneously.
As one example, h_P can be derived from the spectral unmixing model used to estimate abundances for oxy- and deoxy-haemoglobin. By extending the linear spectral mixture model (16) from the space R^(n_λ) to U, normal equations lead to the explicit formulation h_P(u) = −(1 ⊗ (E^T E)^{-1} E^T)(ln(u)) ∈ P^3 (23) whereby E ∈ R^(n_λ × 3).
In other examples pseudo-RGB images may be obtained from snapshot images using data-driven machine learning approaches. One example may include the use of cyclic adversarial networks for unpaired samples of surgical RGB and snapshot mosaic images.
Uncertainty Estimates

Whereas the presented forward models are typically well-defined, the problem of estimating the inverse is generally ambiguous, such as the pixel-wise reconstruction f_P: W → P^n of tissue property parameters from low-resolution snapshot image acquisitions. It shall be obvious that for all presented computational approaches, additional uncertainty quantification capabilities may be introduced to estimate the uncertainty of obtained outcomes. This may include approaches such as dropout sampling, probabilistic inference, ensembles of estimators, or test-time augmentations.
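Two of the simplest of these strategies, ensembles of estimators and threshold-based masking of uncertain estimates, may be sketched as follows; the estimators passed in are arbitrary callables and the threshold tau is application-dependent.

```python
import numpy as np

def ensemble_uncertainty(estimators, w):
    """Pixel-wise mean and standard deviation over an ensemble of estimators,
    used as a simple uncertainty estimate for a parameter map."""
    preds = np.stack([f(w) for f in estimators])   # (n_models, ...) predictions
    return preds.mean(axis=0), preds.std(axis=0)

def show_certain(param, sigma, tau):
    """Mask out estimates whose uncertainty exceeds the threshold tau."""
    return np.where(sigma <= tau, param, np.nan)
```

The same interface covers dropout sampling or test-time augmentation, where each "estimator" is one stochastic forward pass.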
In other embodiments, invertible mappings may be used in the presented models such as obtained by invertible neural networks, which not only learn a forward mapping but also establish the corresponding inverse process. Such approaches may also be used to recover the full posterior distribution able to capture uncertainty in obtained solution estimates.
Uncertainty estimates may be displayed to the user or may be used as part of a computational pipeline or visualisation strategy. According to one example, uncertainty estimates relating to parameter estimates may be used to choose and display only the estimates meeting a given certainty criterion, such as a threshold.
Various further modifications to the above described examples, whether by way of addition, deletion or substitution, will be apparent to the skilled person to provide additional examples, any and all of which are intended to be encompassed by the appended claims.

Claims (15)

  1. A method of determining parameters of a desired target image from hyperspectral imagery, comprising: capturing hyperspectral snapshot mosaic images of a scene using a hyperspectral image sensor, the snapshot mosaic images being of relatively low spatial and low spectral resolution; undertaking demosaicking of the snapshot mosaic images to generate a virtual hypercube of the snapshot mosaic image data, the virtual hypercube comprising image data of relatively high spatial resolution compared to the snapshot mosaic images; from the image data in the virtual hypercube, determining relatively high spatial resolution parameters of a desired target image; and outputting the determined relatively high-resolution parameters as representative of the desired target image.
  2. A method according to claim 1, wherein the demosaicking is spatio-spectrally aware.
  3. A method according to claim 2, wherein the demosaicking comprises image resampling, such as linear or cubic resampling, of the snapshot mosaic images followed by the application of a spectral calibration matrix.
  4. A method according to claim 2, wherein the demosaicking comprises machine learning.
  5. A method according to any of the preceding claims, wherein the demosaicking is temporally consistent between two or more consecutive frames based on motion compensation in between frames.
  6. A method according to any of the preceding claims, and further comprising, prior to capturing the hyperspectral snapshot mosaic images, undertaking a white balancing operation on the hyperspectral image sensor.
  7. A method according to claim 6, wherein the white balancing operation comprises separately acquiring reference images, including dark and white reference mosaic images at integration times τ_d and τ_w respectively, and deploying a linear model wherein, in addition to the acquired mosaic image w_τ of an object with integration time τ, a white reference mosaic image w_w of a reflectance tile with integration time τ_w, and dark reference mosaic images w_{d,τ} and w_{d,τ_w} acquired with integration times τ and τ_w with a closed shutter, are used, and the white balancing operation yields a reflectance mosaic image given by r := (τ_w/τ) (w_τ − w_{d,τ}) / (w_w − w_{d,τ_w}) ∈ W.
  8. A method according to any of the preceding claims, and further comprising, prior to capturing the hyperspectral snapshot mosaic images, undertaking a spatiospectral calibration operation on the hyperspectral image sensor.
  9. A method according to claim 8, wherein a real spectral filter response operator B: R^(n_λ) → R^(m1 m2) and a spatial cross-talk operator T: W → W are estimated in a controlled set-up to account for parasitical effects during image acquisition.
  10. A method according to claim 9, and further comprising measuring a characteristic of the hyperspectral image sensor to obtain a measured system filter response operator A: U → W by acquiring snapshot mosaic image data using collimated light and sweeping through all n_λ wavelengths in conjunction with an imaging target with known, typically spatially-constant, spectral signature.
  11. A method according to any of the preceding claims, wherein the determining of the relatively high spatial resolution parameters further comprises analysing pixel-level hyperspectral information for its composition of unique end-members characterised by specific spectral signatures.
  12. A method according to any of the preceding claims, wherein the determining of the relatively high spatial resolution parameters further comprises estimation of tissue properties per spatial location (typically pixels) from reflectance information of hyperspectral imaging, such as pixel-level tissue absorption information.
  13. A method of determining parameters of a desired target image from hyperspectral imagery, comprising: capturing hyperspectral snapshot mosaic images of a scene using a hyperspectral image sensor, the snapshot mosaic images being of relatively low spatial and low spectral resolution; undertaking a joint demosaicking and parameter estimation from the snapshot mosaic images to determine relatively high spatial resolution parameters of a desired target image; and outputting the determined relatively high-resolution parameters as representative of the desired target image.
  14. A system for determining parameters of a desired target image from hyperspectral imagery, comprising: a hyperspectral image sensor arranged in use to capture hyperspectral images of a scene; a processor; and a computer readable storage medium storing computer readable instructions that when executed by the processor cause the processor to control the system to perform the method of any of claims 1 to 13.
  15. A computer readable storage medium storing a computer program that when executed causes a hyperspectral imaging system to perform the method of any of claims 1 to 13.
GB2008371.3A 2020-06-03 2020-06-03 Method and system for joint demosaicking and spectral signature estimation Active GB2595694B (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
GB2008371.3A GB2595694B (en) 2020-06-03 2020-06-03 Method and system for joint demosaicking and spectral signature estimation
JP2022575173A JP2023529189A (en) 2020-06-03 2021-05-26 Method and system for joint demosaicing and spectral feature estimation
CN202180059179.6A CN116134298A (en) 2020-06-03 2021-05-26 Method and system for joint demosaicing and spectral feature map estimation
PCT/GB2021/051280 WO2021245374A1 (en) 2020-06-03 2021-05-26 Method and system for joint demosaicking and spectral signature estimation
EP21730274.4A EP4162242A1 (en) 2020-06-03 2021-05-26 Method and system for joint demosaicking and spectral signature estimation
US18/008,062 US20230239583A1 (en) 2020-06-03 2021-05-26 Method and system for joint demosaicking and spectral signature estimation


Publications (3)

Publication Number Publication Date
GB202008371D0 GB202008371D0 (en) 2020-07-15
GB2595694A true GB2595694A (en) 2021-12-08
GB2595694B GB2595694B (en) 2024-07-31

Family

ID=71526227

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2008371.3A Active GB2595694B (en) 2020-06-03 2020-06-03 Method and system for joint demosaicking and spectral signature estimation

Country Status (1)

Country Link
GB (1) GB2595694B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4318379A1 (en) * 2022-08-02 2024-02-07 Meta Platforms Technologies, LLC Sparse color reconstruction using deep neural networks

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052772B (en) * 2021-03-23 2024-08-20 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN113486906B (en) * 2021-07-07 2024-01-09 西北工业大学 Mosaic space spectrum gradient direction histogram extraction method of snapshot spectrum image
CN116579959B (en) * 2023-04-13 2024-04-02 北京邮电大学 Fusion imaging method and device for hyperspectral image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190096049A1 (en) * 2017-09-27 2019-03-28 Korea Advanced Institute Of Science And Technology Method and Apparatus for Reconstructing Hyperspectral Image Using Artificial Intelligence


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Hyperspectral demosaicking and crosstalk correction using deep learning", Dijkstra et al, Machine Vision and Applications 30(1), 1 - 21 (2019). https://link.springer.com/article/10.1007/s00138-018-0965-4 *


Also Published As

Publication number Publication date
GB2595694B (en) 2024-07-31
GB202008371D0 (en) 2020-07-15

Similar Documents

Publication Publication Date Title
Clancy et al. Surgical spectral imaging
US11857317B2 (en) Method and apparatus for quantitative and depth resolved hyperspectral fluorescence and reflectance imaging for surgical guidance
GB2595694A (en) Method and system for joint demosaicking and spectral signature estimation
US20190388160A1 (en) Methods and systems for intraoperatively confirming location of tissue structures
US11141044B2 (en) Method and apparatus for estimating the value of a physical parameter in a biological tissue
US5016173A (en) Apparatus and method for monitoring visually accessible surfaces of the body
WO2015023990A1 (en) Method and apparatus for quantitative and depth resolved hyperspectral fluorescence and reflectance imaging for surgical guidance
US20230239583A1 (en) Method and system for joint demosaicking and spectral signature estimation
WO2015135058A1 (en) Methods and systems for intraoperatively confirming location of tissue structures
US11619548B2 (en) Hybrid spectral imaging devices, systems and methods
US9854963B2 (en) Apparatus and method for identifying one or more amyloid beta plaques in a plurality of discrete OCT retinal layers
Jones et al. Bayesian estimation of intrinsic tissue oxygenation and perfusion from RGB images
EP3824799A1 (en) Device, apparatus and method for imaging an object
Chand et al. Identifying oral cancer using multispectral snapshot camera
US20190090726A1 (en) Optical device using liquid crystal tunable wavelength filter
Arnold et al. Hyper-spectral video endoscopy system for intra-surgery tissue classification
Clancy et al. A triple endoscope system for alignment of multispectral images of moving tissue
Zenteno et al. Spatial and Spectral Calibration of a Multispectral-Augmented Endoscopic Prototype
Zamora et al. Hyperspectral imaging analysis for ophthalmic applications
Wisotzky et al. Multispectral Stereo-Image Fusion for 3D Hyperspectral Scene Reconstruction
Aloupogianni et al. Effect of formalin fixing on chromophore saliency maps derived from multi-spectral macropathology skin images
Ehler et al. High-resolution autofluorescence imaging for mapping molecular processes within the human retina

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20240111 AND 20240117