WO2016061502A1 - Super-pixel detection for wearable diffuse optical tomography - Google Patents

Super-pixel detection for wearable diffuse optical tomography

Info

Publication number
WO2016061502A1
WO2016061502A1 · PCT/US2015/056014 (US2015056014W)
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
super
subject
pixel
dot
Prior art date
Application number
PCT/US2015/056014
Other languages
French (fr)
Inventor
Joseph P. Culver
Karla BERGONZI
Adam EGGEBRECHT
Silvina FERRADAL
Original Assignee
Washington University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Washington University filed Critical Washington University
Priority to US15/519,350 priority Critical patent/US10786156B2/en
Publication of WO2016061502A1 publication Critical patent/WO2016061502A1/en
Priority to US16/947,829 priority patent/US20200375465A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0073 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by tomography, i.e. reconstruction of 3D images from 2D projections
    • A61B 5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B 5/004 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
    • A61B 5/0042 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part for the brain
    • A61B 5/40 Detecting, measuring or recording for evaluating the nervous system
    • A61B 5/4058 Detecting, measuring or recording for evaluating the nervous system for evaluating the central nervous system
    • A61B 5/4064 Evaluating the brain
    • A61B 5/48 Other medical applications
    • A61B 5/4869 Determining body composition
    • A61B 5/4875 Hydration status, fluid retention of the body
    • A61B 5/4878 Evaluating oedema
    • A61B 5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B 5/6801 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B 5/6802 Sensor mounted on worn items
    • A61B 5/6803 Head-worn items, e.g. helmets, masks, headphones or goggles
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B 5/7207 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal of noise induced by motion artifacts
    • A61B 5/7235 Details of waveform analysis
    • A61B 5/7253 Details of waveform analysis characterised by using transforms
    • A61B 5/726 Details of waveform analysis characterised by using transforms using Wavelet transforms
    • A61B 5/74 Details of notification to user or communication with user or patient; user input means
    • A61B 5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B 2576/00 Medical imaging apparatus involving image processing or analysis
    • A61B 2576/02 Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
    • A61B 2576/026 Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part for the brain
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • the example embodiments herein generally relate to measuring brain activity using diffuse optical tomography and, more specifically, to the measuring of brain activity using super-pixel detection with wearable diffuse optical tomography.
  • Functional neuroimaging has enabled mapping of brain function and revolutionized cognitive neuroscience.
  • functional neuroimaging is used as a diagnostic and prognostic tool in the clinical setting. Its application in the study of disease may benefit from new, more flexible tools.
  • functional magnetic resonance imaging (fMRI) has been widely used to study brain function.
  • the logistics of traditional fMRI devices are ill-suited to subjects in critical care.
  • fMRI generally requires patients to be centralized in scanning rooms, and provides a single "snap shot" of neurological status isolated to the time of imaging, providing a limited assessment during a rapidly evolving clinical scenario. This snap shot is generally captured on a limited basis, for example, once per stay at a hospital, once a week, once a month, and the like.
  • Ischemic stroke presents with the sudden onset of neurological deficits; the ischemia triggers a complex cascade of events including anoxic depolarization, excitotoxicity, spreading depression, and, in some cases, reperfusion.
  • After acute treatment (e.g., thrombolysis/thrombectomy), potential concerns include post-thrombolysis hemorrhagic transformation and life-threatening cerebral edema. Therefore, throughout the hyperacute to sub-acute phases, early detection of neurological deterioration is essential and close neurological monitoring is critical.
  • Diffuse optical imaging (DOI) is based on near-infrared spectroscopy (NIRS).
  • 2D imaging methods are classified as diffuse optical topography.
  • Functional Near-Infrared Spectroscopy (fNIR or fNIRS) is a functional application of NIRS.
  • fNIR is a non-invasive imaging method involving the quantification of chromophore concentration resolved from the measurement of near infrared (NIR) light attenuation, temporal, or phasic changes.
  • Light in the NIR spectrum takes advantage of the optical window in which skin, tissue, and bone are mostly transparent to NIR light in the range of approximately 700-900 nm, while hemoglobin (Hb) and deoxygenated hemoglobin (deoxy-Hb) are stronger absorbers of light. Differences in the absorption spectra of deoxy-Hb and oxy-Hb allow the measurement of relative changes in hemoglobin concentration through the use of light attenuation at multiple wavelengths.
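  • As an illustration of this multi-wavelength calculation, the following minimal sketch solves the standard modified Beer-Lambert law for changes in oxy- and deoxy-hemoglobin from attenuation changes at two NIR wavelengths; the extinction coefficients, path-length factors, and separation used here are placeholder values, not parameters from this disclosure.

```python
import numpy as np

# Illustration only: solve the modified Beer-Lambert law for changes in oxy- and
# deoxy-hemoglobin from attenuation changes at two NIR wavelengths. Extinction
# coefficients, path-length factors, and the separation are placeholder values.
extinction = np.array([[0.30, 2.10],    # placeholder epsilons at ~690 nm [HbO2, HbR]
                       [1.10, 0.78]])   # placeholder epsilons at ~850 nm [HbO2, HbR]
dpf = np.array([6.0, 5.5])              # placeholder differential path-length factors
separation_cm = 3.0                     # assumed source-detector separation

def hemoglobin_changes(delta_od):
    """delta_od: change in optical density at each of the two wavelengths."""
    path = dpf * separation_cm                  # effective path length per wavelength
    system = extinction * path[:, None]         # delta_od = system @ [dHbO2, dHbR]
    delta_hbo2, delta_hbr = np.linalg.solve(system, delta_od)
    return delta_hbo2, delta_hbr

print(hemoglobin_changes(np.array([0.01, 0.02])))
```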
  • fNIR and fNIRS may be used to assess cerebral hemodynamics in a manner similar to fMRI using various optical techniques.
  • fNIRS could be used for bedside monitoring of a neurological status of a patient.
  • fNIRS as a standard tool for functional mapping has been limited by poor spatial resolution, limited depth penetration, a lack of volumetric localization, and contamination of brain signals by hemodynamics in the scalp and skull.
  • High-density diffuse optical tomography provides an advanced NIRS technique that offers substantial improvement in spatial resolution and brain specificity.
  • these advancements in HD-DOT lead to additional challenges in wearability and portability. For example, increasing the number of detection fibers in a wearable apparatus to increase spatial resolution also increases the weight of the wearable device.
  • the functional imaging apparatus would be much lighter in weight in comparison to previous wearable apparatuses, and be of a size that is convenient for portability, movement, and continuous uninterrupted wearing of the apparatus.
  • the new bedside monitoring technique would benefit patients in clinical settings such as intensive care units, operating rooms, and the like.
  • the electronic console includes a fiber array, a detector coupled to the fiber array, a computing device coupled to the detector, and a display.
  • the fiber array includes a plurality of fibers configured to transport resultant light detected by a head apparatus worn by a subject.
  • the detector is coupled to the fiber array to detect resultant light from the plurality of fibers.
  • the detector includes a plurality of super-pixels each defined by a plurality of pixels of an array of pixels. Each super-pixel is associated with a fiber of the plurality of fibers. Each super-pixel is configured to generate a plurality of detection signals in response to detected resultant light from its associated fiber.
  • the computing device receives the plurality of detection signals from each of the plurality of super-pixels.
  • the computing device is configured to generate a high density-diffuse optical tomography (HD-DOT) image signal of the brain activity of the subject based on the plurality of detection signals from each of the plurality of super-pixels.
  • the display is configured to display the HD-DOT image signal of the brain activity of the subject.
  • the head apparatus is configured to direct light at the head of the subject and receive resultant light from the head of the subject in response to the light directed at the head of the subject.
  • the electronic console includes a fiber array, a detector coupled to the fiber array, and a computing device coupled to the detector.
  • the fiber array includes a plurality of fibers configured to transport light to the head apparatus worn by a subject and transport resultant light received by the head apparatus.
  • the detector includes a plurality of super-pixels each defined by a plurality of pixels of an array of pixels. Each super-pixel is associated with a fiber of the plurality of fibers.
  • Each super-pixel is configured to generate a plurality of detection signals in response to detected resultant light from its associated fiber.
  • the computing device receives the plurality of detection signals from each of the plurality of super-pixels.
  • the computing device is configured to generate a high density-diffuse optical tomography (HD-DOT) image signal of the brain activity of the subject based on the plurality of detection signals from each of the plurality of super-pixels.
  • Another aspect of the disclosure provides a computer-implemented method for performing super-pixel detection using a detector that includes a plurality of super-pixels each defined by a plurality of pixels of an array of pixels.
  • the method is implemented by a computing device in communication with a memory.
  • the method includes receiving, by the computing device, a plurality of detection signals from the array of pixels. For each super-pixel, a subset of the plurality of detection signals is associated with the super-pixel that generated the detection signals in the subset.
  • a high density-diffuse optical tomography (HD-DOT) image signal of the brain activity of the subject is generated based at least in part on the subsets of the plurality of detection signals associated with the plurality of super-pixels, and the generated HD-DOT image signal is output.
  • FIG. 1 is a schematic diagram illustrating an example of a high density-diffuse optical tomography (HD-DOT) system
  • FIG. 2 is a diagram illustrating an example of a weight-vs-coverage analysis of a HD-DOT system
  • FIGS. 3A-3F are diagrams illustrating examples of a large field-of-view HD-DOT system for imaging distributed brain function
  • FIGS. 4A-4C are diagrams illustrating examples of a super-pixel concept and design to decrease noise and fiber size
  • FIGS. 5A-5D are diagrams illustrating examples of super-pixel detection
  • FIGS. 6A-6B are diagrams illustrating an example of a first light weight prototype cap
  • FIGS. 7A-7E are diagrams illustrating an example of a second prototype of a low profile, lightweight wearable HD-DOT cap
  • FIG. 8 is a flow diagram illustrating an example for improving HD-DOT using anatomical reconstructions
  • FIGS. 9A-9E are diagrams illustrating examples of head surface driven subject-specific head modeling
  • FIG. 10 is a diagram illustrating an example of atlas derived DOT visual activations on a single subject
  • FIG. 11 is a diagram illustrating an example of real-time three-dimensional (3D) object scanning
  • FIGS. 12A-12D are diagrams illustrating an example of a validation of functional diffuse optical tomography (fcDOT) in view of functional magnetic resonance imaging (fMRI) mapping of brain function using language paradigms;
  • FIGS. 13A-13F are diagrams illustrating examples of resting state functional connectivity diffuse optical tomography (fcDOT) maps of distributed resting state networks;
  • FIGS. 14A-14I are diagrams illustrating examples of the feasibility of a clinical HD-DOT system;
  • FIG. 15 is a diagram illustrating an example of longitudinal fcDOT maps.
  • FIG. 16 is a diagram illustrating an example of a super-pixel detection method for measuring brain activity.
  • the example embodiments herein relate to systems, apparatuses, and methods for providing wearable, whole-head functional connectivity diffuse optical tomography (fcDOT) tools for longitudinal brain monitoring that may be used in an acute care setting, such as at a bedside of a person.
  • a wearable apparatus such as a cap, helmet, and/or the like, may be used to cover the head of the person.
  • the cap may include fibers for detecting light reflected from the brain/head of the person.
  • the cap may be in contact with an electronic console that may analyze the detected brain information.
  • super-pixel detection enables lightweight wearable apparatuses such as caps and portable diffuse optical tomography (DOT) instrumentation.
  • the size of the detection fibers is an obstacle to fabricating more ergonomic (wearable) and portable DOT.
  • an average wearable HD-DOT apparatus includes approximately 280 fiber strands (about one meter in length), and has a weight of around 30 pounds. Even a sparse HD-DOT wearable apparatus having between 50-100 fiber strands has a weight that is approximately 7-10 pounds.
  • the super-pixel detection technology may use detectors such as electron-multiply charge-coupled devices (EMCCD), scientific complementary metal-oxide-semiconductor (sCMOS) detectors, and the like.
  • Previously developed EMCCD-based DOT systems are slow (e.g., less than about 0.01 Hz) and use geometries that may require only limited dynamic range, such as small volumes (e.g., mouse) or transmission mode measurements.
  • a super-pixel approach uses a combination of temporal and spatial referencing along with cross-talk reduction to obtain high dynamic range (DNR) and low cross talk.
  • One improvement over previous technology such as avalanche photodiodes (APDs) is a significant reduction in noise equivalent power (NEP), which enables the use of smaller fibers (e.g., greater than about a 30x reduction) and a smaller console (e.g., greater than about 5x).
  • EMCCDs and sCMOS detectors are attractive for use in DOT because they include many pixels, integrated cooling, electron multiply gain, A/D conversion, flexible software control, and the like.
  • a challenge in using EMCCDs or sCMOS detectors is to establish DOT detector specifications, including low detection noise equivalent power (NEP < 20 fW/√Hz), detectivity (3 (fW/√Hz)/mm²), high dynamic range (DNR > 10⁶), low inter-measurement cross talk (CT < 10⁻⁶), and high frame rates (FR > 3 Hz).
  • the super-pixel design overcomes previous limitations of EMCCD based DOT systems and lowers the noise equivalent power (NEP) while maintaining high dynamic range (DNR > 10⁶), low cross-talk (CT < 10⁻⁶), and reasonable frame rates (FR > 3 Hz).
  • the super-pixel concept leverages massive pixel summing while avoiding corruption by noise sources.
  • the super-pixel detection method may generate a medium sized detector (scale 0.1 to 1 mm diameter) by summing pixel values on a CMOS or CCD camera.
  • the noise equivalent power scales as ~√area, and thus the detectivity (NEP/area) scales as ~1/√area.
  • simple binning and temporal summing may not be sufficient.
  • EMCCDs have a dark-field signal drift that becomes apparent when summing many frames. To counter this, within-frame dark-field measurements and temporal modulation/demodulation are used. By lowering the detectivity (NEP/area), the dynamic range is commensurately increased at the same time (e.g., ~5×10⁶).
  • the whole head may include a top half of the head, a scalp of the head, a surface of the head from the forehead to the back neckline, and the like.
  • the field-standard optical fibers are decreased by a factor of ~10x in diameter. This decrease in diameter causes a ~100x decrease in weight, but also a decrease in the amount of light collected.
  • Detectivity is a measurement of this sensitivity per area of incident light via the fiber.
  • the Detectivity is the Noise Floor (NF) of the sensor divided by the area (A) over which the light is incident.
  • An advantage of using sCMOS and EMCCD sensors is that they are more sensitive than the APDs.
  • the individual pixels on an sCMOS sensor have a Noise Equivalent Power (NEP) that is 10⁴ lower than the APDs.
  • a potential drawback is that the individual pixels have a Dynamic Range that is 10² lower than the APDs, and a smaller collection area.
  • the super-pixel algorithm shown below manipulates a CCD sensor's individual pixels so that its Dynamic Range is increased while lowering the Detectivity (NEP/area).
  • Another aspect to account for in brain imaging is the encoding of location within the data. Location is encoded in separate frames, so one frame needs to be collected for each encoding step (K). This has the effect of reducing the light levels in each frame by a factor of K, or increasing the noise floor in each frame by a factor of K.
  • the super-pixel algorithm therefore allows sCMOS and EMCCD sensors to perform tomographic neural imaging by using manipulations that increase the dynamic range as compared to a single pixel and maintain a comparable detectivity by accounting for the frame rate, size of the super-pixel, and number of encoding steps.
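  • A minimal sketch of the kind of pixel summing and within-frame reference subtraction described above; the region masks, array shapes, and the specific dark estimate here are illustrative assumptions, not the exact algorithm of this disclosure.

```python
import numpy as np

def super_pixel_value(frame, core_mask, reference_mask):
    """Illustrative super-pixel readout: sum the core pixels and remove a
    within-frame dark estimate taken from the unilluminated reference region."""
    # Per-pixel dark estimate from the reference region of the same frame, so
    # slow dark-field drift is removed without relying on separate dark frames.
    dark_per_pixel = frame[reference_mask].mean()
    core = frame[core_mask]
    # Massive pixel summing: signal grows with the number of summed pixels,
    # while uncorrelated noise grows only as its square root, lowering NEP/area.
    return core.sum() - dark_per_pixel * core.size
```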
  • CT is complex at multiple levels including optical focusing, and electronic sources within CCD elements, EMCCD gain, sCMOS readout structures, and A/D conversion.
  • a super-pixel cross-talk reduction (CTR) method may be used to leverage the unique super-pixel reference areas.
  • a bleed pattern for each super-pixel (into other super-pixels) may be measured in a calibration step. During operation, scaled bleed patterns are subtracted for each super-pixel from all other super-pixels.
  • the bleed pattern correction is effectively a matrix operation that transforms a vector of raw super-pixel values to a vector of corrected super-pixel values.
  • When the CTR method is implemented, the CT is less than about 1×10⁻⁶.
  • the super-pixel concept generally involves two within-frame steps: dark-field subtraction, and an active cross-talk reduction scheme that uses calibrated bleed patterns to remove cross-talk signals during operation based on the images obtained when multiple fibers are illuminated.
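  • A minimal sketch of a calibrated bleed-pattern correction of the kind described above, expressed as the matrix operation that maps raw super-pixel values to corrected values; the calibration procedure and normalization here are illustrative assumptions.

```python
import numpy as np

def calibrate_bleed_matrix(single_fiber_frames, measure_super_pixels):
    """Illustrative calibration: illuminate one fiber at a time and record how
    much signal bleeds into every other super-pixel.

    single_fiber_frames: sequence of frames, frame j taken with only fiber j lit.
    measure_super_pixels: function mapping a frame to the vector of raw
    super-pixel values (one per fiber)."""
    columns = []
    for frame in single_fiber_frames:
        values = measure_super_pixels(frame)
        columns.append(values / values.max())   # normalize to the lit super-pixel
    return np.stack(columns, axis=1)            # B[i, j]: bleed of fiber j into super-pixel i

def correct_cross_talk(raw_values, bleed_matrix):
    """Transform the vector of raw super-pixel values into corrected values.
    Subtracting scaled bleed patterns is, to first order, equivalent to the
    matrix inverse applied here."""
    return np.linalg.solve(bleed_matrix, raw_values)
```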
  • Detection fibers (400/430/730 μm core/cladding/coating; FT400EMT, Thorlabs) are held in an aluminum block, such as a 6x6 array.
  • each HD-DOT frame (4.1 Hz) will have a total of 108 images (36 position encode steps x [two wavelengths, 690 and 850 nm, plus a dark frame]).
  • a camera-link frame grabber with an onboard field programmable gate array (National Instruments NI PCIe-1473R) will compute the super-pixels in real time.
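  • The camera throughput implied by these figures can be checked with simple arithmetic (a back-of-the-envelope sketch based only on the numbers above).

```python
# Back-of-the-envelope camera throughput implied by the numbers above.
encode_steps = 36              # source-position encode steps per HD-DOT frame
images_per_step = 3            # two wavelengths (690 and 850 nm) plus a dark frame
hd_dot_frame_rate_hz = 4.1     # full HD-DOT frames per second

images_per_hd_dot_frame = encode_steps * images_per_step      # 108 camera images
camera_frame_rate_hz = hd_dot_frame_rate_hz * images_per_hd_dot_frame
print(images_per_hd_dot_frame, camera_frame_rate_hz)          # 108, ~443 camera frames/s
```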
  • FIG. 1 is a diagram illustrating an example of an HD-DOT system 100 including an imaging cap 101 (sometimes referred to herein as a wearable head apparatus), a fiber array 102, and an electronic console 110 coupled with imaging cap 101 through fiber array 102.
  • imaging cap 101 includes a plurality of interconnected patches, each patch including a plurality of sources and a plurality of detectors.
  • each source corresponds with a detector to define a plurality of source-detector pairs.
  • the imaging cap 101 is placed over the patient's head, and for each source- detector pair, light is transmitted to the patient by the source.
  • the transmitted light is scattered by interactions with the patient, and at least some of the scattered light (sometimes referred to herein as "resultant light”) is received by detectors.
  • imaging cap 101 is configurable for a particular patient, e.g., by modeling the cap based on the patient's anatomy, and is lightweight to facilitate portability of the HD-DOT system and enable longitudinal imaging of the patient in the acute setting (e.g., a clinic, intensive care unit, or other environment).
  • Fiber array 102 may include a plurality of source fibers and a plurality of detector fibers.
  • Source fibers are optical imaging fibers that may transport light from electronic console 1 10 to sources on imaging cap 101.
  • detector fibers are optical imaging fibers that transport light from detectors on imaging cap 101 to electronic console 1 10.
  • fiber array 102 may be constructed using fewer and/or smaller optical imaging fibers to facilitate portability of the HD-DOT system 100.
  • electronic console 110 includes a fiber array holder 103, a detector 105, a lens 106 positioned between fiber array holder 103 and detector 105, and a light source 104 coupled with fiber array 102 by fiber array holder 103.
  • Fiber array holder 103 is coupled with optical fibers (i.e., source fibers and detector fibers) of fiber array 102, and is configured to hold the fibers in a desired arrangement to allow optical communication between fiber array holder 103, detector 105, and light source 104.
  • fiber array holder 103 holds the fibers in a square arrangement that corresponds to the shape of detector 105.
  • fiber array holder 103 may be configured to hold the optical fibers in any suitable arrangement to enable the HD-DOT system 100 to function as described herein.
  • Detector 105 is an image sensing device and is positioned within electronic console 110 to receive light from detector fibers of fiber array 102. Detector 105 converts incident light into electron charges to generate an electric signal that may be processed to construct, for example, HD-DOT images of the patient or the patient's brain.
  • detector 105 may include an electron multiply charge-coupled device (EMCCD) having a plurality of pixels defined on a surface of the detector.
  • detector fibers transport light (i.e., scattered light received by the detectors) between imaging cap 101 and electronic console 110. The light is received at the electronic console 110 at fiber array holder 103.
  • detector 105 may include other image sensing devices such as, e.g., a charge-coupled device (CCD), a complementary metal-oxide- semiconductor (CMOS), or any suitable image sensing device to enable the system to function as described herein.
  • the row-by-row nature of the readout by the CMOS does introduce row-specific noise.
  • the frame-to-frame background is subtracted first. If there were no other noise sources in the CMOS, the noise would follow predictions of the super-pixel math above.
  • as the cross-talk reduction is used with the EMCCDs, a similar approach is required for sCMOS.
  • it can be advantageous to subtract row-specific noise, wherein a number of unilluminated pixels in each row are used to generate a row-specific dark value. The row-specific dark values are then subtracted from the rest of the pixels in that row.
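  • A minimal sketch of such a row-specific dark subtraction; which columns are kept dark is an assumption for illustration.

```python
import numpy as np

def subtract_row_dark(frame, dark_column_mask):
    """Illustrative sCMOS row-noise removal: use unilluminated pixels in each
    row to form a row-specific dark value, then subtract it from that row.

    frame: 2D array of raw counts (after frame-to-frame background subtraction).
    dark_column_mask: boolean mask over columns that never receive light."""
    row_dark = frame[:, dark_column_mask].mean(axis=1, keepdims=True)
    return frame - row_dark
```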
  • Light source 104 is positioned within electronic console 110 to provide light to source fibers of fiber array 102.
  • light source 104 includes a plurality of laser diodes (LDs), and each LD provides light to a source fiber and, further, to a source in imaging cap 101.
  • light source 104 includes a plurality of light emitting diodes (LEDs).
  • light source 104 may include any other suitable device for generating light to enable the system to function as described herein.
  • electronic console 110 also includes a computing device 115 for processing the electric signal generated by detector 105 to compute HD-DOT images.
  • Computing device 115 may include at least one memory device 150 and a processor 120 coupled to memory device 150.
  • memory device 150 stores executable instructions that, when executed by processor 120, enable computing device 115 to perform one or more operations described herein.
  • processor 120 may be programmed by encoding an operation as one or more executable instructions and providing the executable instructions in memory device 150.
  • Processor 120 may include one or more processing units (not shown) such as in a multi-core configuration. Further, processor 120 may be implemented using one or more heterogeneous processor systems in which a main processor is included with secondary processors on a single chip. As another example, processor 120 may be a symmetric multi-processor system containing multiple processors of the same type. Further, processor 120 may be implemented using any suitable programmable circuit including one or more systems and microcontrollers, microprocessors, programmable logic controllers (PLCs), reduced instruction set circuits (RISCs), application specific integrated circuits (ASICs), programmable logic circuits, field programmable gate arrays (FPGAs), and any other circuit capable of executing the functions described herein. Further, processor 120 may include an internal clock to monitor the timing of certain events, such as an imaging period and/or an imaging frequency. In the example embodiment, processor 120 receives imaging data from imaging cap 101, and processes the imaging data for HD-DOT.
  • Memory device 150 may include one or more devices that enable information such as executable instructions and/or other data to be stored and retrieved.
  • Memory device 150 may include one or more computer readable media, such as, dynamic random access memory (DRAM), static random access memory (SRAM), a solid state disk, and/or a hard disk.
  • Memory device 150 may be configured to store application source code, application object code, source code portions of interest, object code portions of interest, configuration data, execution events and/or any other type of data.
  • Computing device 115 also includes a media display 140 and an input interface 130.
  • Media display 140 is coupled with processor 120, and presents information, such as user- configurable settings or HD-DOT images, to a user, such as a technician, doctor, or other user.
  • Media display 140 may include any suitable media display that enables computing device 115 to function as described herein, such as, e.g., a cathode ray tube (CRT), a liquid crystal display (LCD), an organic light emitting diode (OLED) display, an LED matrix display, an "electronic ink" display, and/or the like.
  • media display 140 may include more than one media display. According to various example embodiments, the media display may be used to display HD-DOT image data on a screen thereof.
  • Input interface 130 is coupled with processor 120 and is configured to receive input from the user (e.g., the technician).
  • Input interface 130 may include a plurality of push buttons (not shown) that allow a user to cycle through user-configurable settings and/or user-selectable options corresponding to the settings.
  • input interface 130 may include any suitable input device that enables computing device 115 to function as described herein, such as a keyboard, a pointing device, a mouse, a stylus, a touch sensitive panel (e.g., a touch pad or a touch screen), a gyroscope, an accelerometer, a position detector, an audio interface, and/or the like.
  • a single component, such as a touch screen may function as both media display 140 and input interface 130.
  • Computing device 115 further includes a communications interface 160.
  • Communications interface 160 is coupled with processor 120, and enables processor 120 (or computing device 115) to communicate with one or more components of the HD-DOT system, other computing devices, and/or components external to the HD-DOT system.
  • computing device 115 may further include a head modeling module that generates a photometric head model of the subject, and the detector may generate an image signal of the brain activity of the subject based on the photometric head model of the subject.
  • the computing device 115 may further include a de-noising module that removes noise from the light transported by the plurality of fibers from the head apparatus, and the detector may generate the image signal of the brain activity of the subject based on the removed noise.
  • Embodiments of the head modeling and the noise removal are further described in various examples herein.
  • HD-DOT system 100 includes imaging cap 101 that provides whole-head coverage and which is lightweight.
  • FIG. 2 illustrates an example of a weight-vs-coverage analysis of an HD-DOT system including wearable head apparatuses having different coverages and weights.
  • wearable head apparatus 201 includes sparsely populated fibers that, when combined, weigh approximately 8 pounds. However, due to the space between each fiber, the apparatus 201 does not provide complete coverage of the entire head of a person.
  • Coverage may be increased to provide whole-head imaging by increasing the number of source-detector pairs in the imaging cap, and increasing the relative number of imaging fibers coupled with the imaging cap. In such systems, increasing the number of imaging fibers also increases the weight of the imaging cap. As shown in FIG. 2 at point B, increasing coverage by increasing the number of imaging fibers in wearable head apparatus 202 forces the use of "hairdryer" ergonomics for approximately one-half head coverage (also shown in FIG. 3). In this example, more head coverage is provided; however, such coverage requires about 280 fibers (for 1 m length fibers) and weighs up to about 30 lbs.
  • a length of the fibers may be in a range between zero and two meters, one half meter and one and a half meters, one meter to three meters, and the like.
  • wearable whole-head HD-DOT 203 may be provided by reducing the weight of a wearable head apparatus (i.e. an imaging cap in this example) while maintaining, or enhancing, head coverage.
  • a wearable whole-head HD-DOT apparatus 203 can meet (or exceed) performance requirements imposed by the high attenuation (blood volume) of the brain with optode separations of about 1-5 cm.
  • Performance requirements may include low noise levels (NEP < 20 fW/√Hz for a 3 mm detector), high dynamic range (DNR > 10⁶), and low inter-measurement cross talk (CT < 10⁻⁶).
  • a super-pixel approach to instrumentation enables weight reduction of HD-DOT imaging caps.
  • a wearable whole-head HD-DOT apparatus 203 is enabled, at least in part, by using a super-pixel detection method and electron-multiply charge-coupled devices (EMCCD).
  • the super-pixel detection method uses a combination of temporal and spatial referencing along with cross-talk reduction to obtain high dynamic range (DNR) and low cross talk.
  • the super-pixel detection method provides a reduction in noise equivalent power (NEP) over at least some known methods and enables the use of smaller imaging fibers and a smaller console.
  • the method enables the use of imaging fibers up to about 30 times smaller than such fibers for known methods, and enables the use of an electronic console up to about 5 times smaller than such consoles for known methods.
  • use of smaller imaging fibers provides an imaging cap having a weight of about 1 lb (e.g., similar to the weight of a bicycle helmet) which, when provided along with an electronic console having a low-profile design, provides a wearable whole-head HD-DOT system suitable for longitudinal fcDOT in the acute care setting.
  • whole-head HD-DOT apparatus 203 extends the field-of-view to cover multiple contiguous functional domains.
  • the imaging cap may have a weight between 0 pounds and two pounds, a range between a half pound and one and a half pounds, and the like.
  • the imaging cap may have a weight that exceeds two pounds or that is less than two pounds.
  • FIGS. 3A-3F are diagrams illustrating examples of a large field-of-view (FOV) HD- DOT system for imaging distributed brain function.
  • the examples of FIGS. 3A-3F correspond to wearable head apparatus 202 in FIG. 2.
  • FIG. 3A illustrates a set-up for HD-DOT.
  • FIGS. 3B, 3C, and 3D illustrate a plurality of monitored data sets for managing data quality control.
  • FIG. 3C illustrates an average light level for each source and detector on a flattened view of the cap.
  • FIG. 3D illustrates measurements above the noise threshold (variance < 7.5%, shown as lines). Also, FIG. 3E illustrates the FOV across 8 subjects, where the color bar codes the number of subjects with usable sensitivity at a given location of cortex. FIG. 3F illustrates a subject wearing a fiber head apparatus.
  • DOT instrumentation for improved cortical coverage was developed by constructing a high-density array including a high-density regular grid of sources and detectors.
  • the high-density array places strong demands on hardware, specifically, high sensitivity, low noise floor, and large dynamic range of each detector (shown in FIG. 3b).
  • Development of in situ data quality control was critical for managing the large source-detector pair data sets (#SD-pairs > 2000) (shown in FIGS. 3C and 3D).
  • An imaging head apparatus was developed that couples the optodes (i.e., source and detector fibers) to the head. However, in this example, the head apparatus is very heavy due to the size of the fibers included in the source-detector array.
  • super-pixel concepts and designs may be applied to the optical fibers and image sensors of the HD-DOT systems described herein, enabling a reduction in an overall size of the wearable head apparatus, and, thus enabling a reduction in a size of a computing device that the head apparatus is connected to.
  • FIGS. 4A-4C are diagrams illustrating examples of a super-pixel concept applied to an HD-DOT system to decrease noise and fiber size.
  • the fibers are relayed to an electron multiply charge coupled device (EMCCD) 105 using a high numerical aperture lens to maintain high transmission (e.g., > 90%).
  • In FIG. 4B, a 6 x 6 array of super-pixels 400 is defined.
  • Each super-pixel 450 of the array of super-pixels 400 includes a core region 410 used to detect the super-pixel light intensity, a buffer region 420 where light levels decay by up to 10⁴ and are discarded, and a reference region 430 to calculate stray noise signals. Using reference subtraction, very low noise and cross talk may be obtained (shown in FIG. 5).
  • FIG. 4C illustrates a front view of a fiber array holder (top) and a back view with optical fibers (bottom).
  • the super-pixel detection method leverages pixel summing while reducing corruption by noise sources.
  • the image sensor included an EMCCD with 512 x 512 pixels of size 16 x 16 μm and had an EM gain of 10x.
  • a super-pixel such as super-pixel 450 shown in FIG. 4B includes a plurality of pixels combined to form one large super-pixel.
  • a detector 400 includes an array of pixels.
  • the detector includes an array of 510 x 510 pixels.
  • the example embodiments generate the super-pixels.
  • a super-pixel is 85 x 85 pixels.
  • the array includes 36 super-pixels (6 x 6), where each super-pixel includes 7,225 pixels. It should be appreciated that the example embodiments are not limited to specific sizes of detector arrays 400 and super-pixels, and may be any desired size.
  • each super-pixel includes a buffer 420 that surrounds the pixel core 410.
  • buffer 420 may be further surrounded by reference region 430.
  • the buffer 420 and the reference region 430 may be generated by turning off or otherwise preventing light from being detected by pixels in the buffer region 420 and the reference region 430.
  • the pixel core 410, the buffer region 420, and the reference region 430 may be included within the super-pixel (i.e. within the 85 x 85 pixels).
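  • A minimal sketch of how an 85 x 85 super-pixel could be partitioned into core, buffer, and reference regions; the specific core and buffer widths below are assumptions, since the disclosure does not state them here.

```python
import numpy as np

SP = 85        # super-pixel side (6 x 85 = 510 pixels used on the sensor)
CORE = 31      # assumed side of the central core region
BUFFER = 16    # assumed width of the guard ring where light is discarded

def super_pixel_masks(sp=SP, core=CORE, buffer=BUFFER):
    """Return boolean masks for the core, buffer, and reference regions of one
    super-pixel, laid out as concentric square rings around the center."""
    yy, xx = np.mgrid[0:sp, 0:sp]
    center = (sp - 1) / 2.0
    # Chebyshev distance from the center gives square, concentric regions.
    d = np.maximum(np.abs(yy - center), np.abs(xx - center))
    core_mask = d <= core / 2.0
    buffer_mask = (d > core / 2.0) & (d <= core / 2.0 + buffer)
    reference_mask = d > core / 2.0 + buffer    # outer ring used for the noise estimate
    return core_mask, buffer_mask, reference_mask

core_mask, buffer_mask, reference_mask = super_pixel_masks()
```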
  • FIGS. 5A-5D are photographs illustrating the performance of the super-pixel detection method shown in FIG. 4 (all data is shown as log(abs(data))).
  • raw EMCCD images have a DNR, determined by the full well capacity and the readout noise, of about 10⁴.
  • In FIG. 5B, simple binning into 6 x 6 binned regions improves the DNR to 1×10⁵, but cross talk still occurs at 1×10⁻³.
  • FIG. 5D illustrates a full range test showing improved CT and DNR of super-pixels versus simple binning.
  • imaging caps used in various examples were developed using the super-pixel detection method for use in the acute setting.
  • FIGS. 6A-6B illustrate examples of an HD-DOT imaging cap 601.
  • HD-DOT imaging cap 601 includes 24 source fibers and 28 detector fibers.
  • FIG. 6B illustrates an example of the elastic fiber attachment of HD-DOT imaging cap 601 for facilitating lateral stability and surface normal compliance.
  • HD-DOT imaging cap 601 may be built using super-pixel lightweight fibers to provide improvements in cap ergonomics.
  • fibers may be epoxied within right-angle aluminum tubes and are anchored to the cap with elastic straps that provide a "spring" effect to hold the fibers firmly yet comfortably against the scalp (shown in FIG. 6A).
  • the fibers and/or tubes may protrude through the cap by about 3-5 mm allowing for combing through hair of a person.
  • approximately 288 fibers have a total cross-sectional area of about 1.4 cm², similar to four USB cables.
  • FIGS. 7A-7E illustrate a second prototype HD-DOT imaging cap 701 that is a low profile, lightweight wearable HD-DOT imaging cap.
  • fibers of second prototype imaging cap 701 are guided by an anatomical computer model that optimizes the placement of the fibers, and accommodates the position dependent curvature of the head surface (which may be generated from an MRI population atlas).
  • a head surface is expanded to a cap 8 mm larger than the head, and converted to STL files which are printed in ABS plastic using a three-dimensional (3D) printer.
  • the patches are integrated into a neoprene cap.
  • Elastic fiber management from HD-DOT imaging cap 601 (shown in FIG. 6) is incorporated to optimize fiber/scalp coupling.
  • second imaging cap 701 incorporates anatomical morphology of the head (derived from MRI data) into the cap structure itself.
  • the full-head grid of optode positions may be relaxed onto a computer model of a subject's head anatomy (shown in FIG. 7A) while maintaining an interlaced source and detector grid topology.
  • the computer model is divided into 9 patches of optodes (shown in FIGS. 7B and 7C) that provide local stability of the cap and assist in fiber management. The patches were realized with a three-dimensional (3D) printer.
  • the super-pixel lightweight fibers de-couple the goals of cap deformation (elastic fiber management and neoprene patch hinges) from fiber torque (less flexible ABS plastic patches).
  • an HD-DOT system includes a wearable, whole-head HD-DOT for clinical based brain imaging using the super-pixel detection method described herein.
  • wearable HD-DOT includes an imaging cap weight of about 1 lb. Imaging fiber weight is largely determined by the area of the fiber for light collection.
  • One of the challenges in reducing the size of a fiber is maintaining HD-DOT specifications.
  • Super-pixel detection methods enable generating an about 0.4 mm diameter detector by summing pixels on a CCD camera.
  • EMCCDs are attractive for DOT with many pixels, integrated cooling, electron multiply gain, A/D conversion and flexible software control.
  • Detection fibers (400/430/730 μm core/cladding/coating; FT400EMT, Thorlabs) are held in an aluminum block (6x6 array) and imaged onto an EMCCD.
  • simple binning and temporal summing is not sufficient.
  • EMCCDs have a dark-field signal drift that becomes apparent when summing many frames.
  • CT is complex at multiple levels, including optical focusing, electronic sources within CCD elements, EMCCD gain, and A/D conversion.
  • each HD-DOT frame (4.1 Hz) includes a total of 108 images (36 position encode steps x [two wavelengths, 690 and 850 nm, plus a dark frame]).
  • a camera-link frame grabber with an onboard field programmable array (National Instruments NI PCIe-1473R) computes the super-pixels in real-time.
  • the example system includes illumination sources including laser diodes (LD), providing about a 30x increase in peak light level (60 mW vs. 2 mW CW power) over light emitting diodes (LEDs).
  • individual LDs (670 nm RL67100G, 850 nm R85100G, Roithner-Lasertechnik) for each source position may be coupled to 200 μm fibers.
  • diffusing elements provide about a 2.5 mm spot.
  • the single source fluence is about 0.2 mW/mm², well below the ANSI limit (4 mW/mm²).
  • the example system also includes an electronic console including a camera, lens, and fiber coupling block occupying about 6x8x8 inches (height x width x depth).
  • 144 super-pixel detectors are included with about 5x compression compared to known APD-DOT (50U for 144 detectors). Illumination will use 9U.
  • the full system is about 36x48x24 inches, including a computer (control, collection, processing).
  • the example system further includes an imaging cap (shown in FIGS. 6A and 7A-7E) including sub-arrays of 6x6 detectors interlaced with 6x6 sources, with 36 step time encoding of the sources.
  • the resulting four sub-arrays run concurrently since the active sources are separated by a distance greater than 2x the longest usable source-detector pair (5th nearest-neighbor) distance.
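  • A minimal sketch of a 36-step time-encoding schedule in which the four sub-arrays fire concurrently; the indexing scheme is an assumption for illustration.

```python
# Illustrative 36-step time-encoding schedule: at each encode step, the source at
# the same grid position in each of the four 6 x 6 sub-arrays is lit at the same
# time, which is permitted because concurrently active sources are farther apart
# than twice the longest usable source-detector (5th nearest-neighbor) distance.
n_encode_steps = 36
n_subarrays = 4

schedule = [
    [(subarray, position) for subarray in range(n_subarrays)]
    for position in range(n_encode_steps)
]
# schedule[k] lists the (sub-array, grid position) pairs active at encode step k.
```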
  • the imaging cap may be guided by an anatomical computer model that optimizes the placement of fibers and accommodates the position dependent curvature of the head surface (shown in FIGS. 6 and 7).
  • the HD-DOT imaging cap may be configured in small (53 ± 2 cm), medium (57 ± 2 cm), and large (61 ± 2 cm) caps. The caps may be optimized using small pilot studies.
  • the example system may also include real-time displays for cap fit optimization.
  • HD-DOT performance depends critically on fiber/scalp coupling.
  • real-time displays may be developed and used in both "measurement space" (shown in FIGS. 3C and 3D) and image space using a graphics processing unit cluster (e.g., an NVidia Tesla C2075 GPU cluster).
  • developed real-time displays may be used to estimate real-time imaging within about 1 minute of cap placement.
  • the parameter space of the system may be analyzed to meet design goals.
  • a strength of the system is its extensive flexibility, with regard to the pixel binning, detector size, and temporal summing, which may optimize the field-of-view, dynamic range and speed.
  • the system may be analyzed to determine the most relevant real-time displays for cap fit.
  • the system may be analyzed to determine the relative importance of "sensor space” vs "image space” data with respect to cap fit. The analysis (and other appropriate feedback) may be used to develop real-time displays for the system.
  • photometric head modeling and motion denoising for high density-diffuse optical tomography may be used in at least some of the examples.
  • the subject's head surface and the position of the imaging cap are captured for registering the DOT data set to a model of the subject's anatomy.
  • Some known cases have demonstrated the advantage of using co-registered anatomical head modelling to improve HD-DOT localization, specifically demonstrating the use of individual subject anatomical MRI (shown in FIGS. 2, 3, and 8).
  • research quality MRIs may not be available for many subjects, and head models may be generated by transforming reference (or atlas) anatomy to the subject.
  • Computing an individual light-path model requires capturing the exterior shape of the head and the relative location of the HD-DOT imaging cap.
  • a photometric approach is contemplated for efficiently capturing this data, where non-linear models are used to obtain an about 5 mm correspondence with fMRI. Preliminary tests of the contemplated approach show that non-linear registration may be used to obtain localization errors of less than about 3 mm for reference-anatomy versus subject-MRI head models (shown in FIGS. 9A-9E).
  • the next challenge is de-noising the captured data for the subject's head surface and the position of the cap from motion artifacts.
  • Some known methods including independent component analysis (ICA) and wavelets, have been evaluated for fMRI and fNIRS, but have yet to be established for fcDOT.
  • HD-DOT overlapping measurements impose an inherent structure on potential fiber movement induced error signals.
  • a method is contemplated that uses HD-DOT overlapping measurements to quantify optode coupling and provide mathematical correction of the raw signal to account for movement artifact.
  • a study will be conducted to evaluate the contemplated method against known approaches such as ICA and wavelet approaches.
  • FIG. 8 shows photographs illustrating example anatomical (e.g., subject MRI) reconstructions for improving HD-DOT.
  • a processing pipeline for anatomically based forward light modeling and spatial normalization was developed.
  • the localization error of DOT relative to fMRI was about 6.1 mm.
  • Co-registration to anatomy also enabled projection to the pial cortical surface using Computerized Anatomical Reconstruction Toolkit (CARET), an fMRI processing tool (shown in FIG. 12).
  • FIGS. 9A-9E are photographs illustrating head surface driven subject-specific head modeling for creating anatomically accurate head models without subject MRI.
  • reference anatomy is transformed to the subject head surface using linear and non-linear optimizations, and the output is subject specific DOT.
  • anatomically accurate head models are created using reference anatomy (population MRI data) (shown in FIG. 9A) that is warped to each individual head surface (shown in FIG. 9B) using two consecutive fitting routines.
  • the fitting routines include a linear registration for performing global adjustments (shown in FIG. 9C), and a non-linear registration for improving local fitting (shown in FIG. 9D).
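  • A minimal sketch of the linear (global) registration step, fitting a least-squares 3D affine transform from atlas fiducials to the matching subject fiducials; the subsequent non-linear local refinement is not sketched, and the function names are illustrative.

```python
import numpy as np

def fit_affine(atlas_fiducials, subject_fiducials):
    """Least-squares 3D affine fit mapping atlas fiducials (n x 3) onto the
    matching fiducials measured on the subject's head surface (n x 3)."""
    n = atlas_fiducials.shape[0]
    homogeneous = np.hstack([atlas_fiducials, np.ones((n, 1))])   # (n, 4)
    # Solve homogeneous @ transform ~= subject_fiducials for the 4 x 3 transform.
    transform, *_ = np.linalg.lstsq(homogeneous, subject_fiducials, rcond=None)
    return transform

def apply_affine(transform, points):
    """Apply the fitted affine transform to any set of 3D points, e.g., the
    atlas head-surface vertices or optode positions."""
    n = points.shape[0]
    return np.hstack([points, np.ones((n, 1))]) @ transform
```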
  • input is received including the external surface of a subject's head, the location of the optode array relative to the subject's head, and a set of anatomical fiducials used in the registration routines.
  • the warped atlas and the co-registered optode array are used to compute an individualized forward light model that is aligned with the subject's anatomical head structure (shown in FIG. 9E).
  • the premise of atlas-based head modeling is validated using head surfaces extracted from subject MRI scans and by comparing to DOT reconstructions using subject-specific anatomical images and to subject-matched fMRI datasets (shown in FIG. 10).
  • FIG. 10 is a diagram illustrating an example of atlas derived DOT visual activations on a single subject that spatially overlap with both individual MRI-based DOT reconstructions and fMRI activations (threshold at 50% maximum).
  • An example modeling method described herein provides photometric head modeling for HD-DOT. Further, an example de-noising method is provided for motion de-noising for HD- DOT.
  • it may be preferable to transform a reference head structure (e.g., atlas) to the subject's head surface shape, as a subject MRI is not always available.
  • a set of anatomical fiducials measured on subject head surfaces is used to transform the reference head structure to the subject's head surface shape.
  • Anatomical landmarks based on the 10/20 international system, including the nasion, inion, pre-auricular points, and Cz, serve as fiducials (shown in FIG. 11 with red dots), along with selected optodes of the imaging array (shown in FIG. 11 with blue dots).
  • fiducials are measured with an RF 3D digitizer (FastTrack, Polhemus USA).
  • a photometric scanner (HandyScan 3D, Creaform) can retrieve both the 3D coordinates of the optodes and the surface of the subject's head.
  • reflective targets are placed onto the optode locations and anatomical fiducials.
  • In FIG. 11, the feasibility of head surface capture is shown using a Kinect (Microsoft) camera.
  • In FIGS. 9A-9E and 10, the feasibility of using a two-step (linear then non-linear) transform method is established using surfaces extracted from MRI.
  • the example modeling method includes FEM head modeling for light propagation and inversion.
  • the example de-noising method includes a coupling coefficient (CC) motion noise removal method that leverages spatial structure in DOT data.
  • motion noise is specific to individual fibers (e.g., a head turn will press or pull optodes to/from the head). Motion changes the transmission to/from individual fibers and is a multiplicative noise factor.
  • a wearable HD-DOT system may have about 3000 SD-pair measurements, yet only involve 288 fibers.
  • coupling coefficient errors are evaluated for baseline DOT reconstructions and the technique is extended to time variant data and coupling coefficients (the method transfers directly). An estimate of the coupling coefficients is calculated as the mean of the first nearest neighbor measurements for each source and detector.
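  • A minimal sketch of estimating coupling coefficients as the mean of first-nearest-neighbor measurements for each source and each detector; the array layout and names here are assumptions for illustration.

```python
import numpy as np

def coupling_coefficients(log_ratio, source_idx, detector_idx, nn1_mask,
                          n_sources, n_detectors):
    """Illustrative coupling-coefficient estimate.

    log_ratio: per-measurement log intensity change (one value per SD pair).
    source_idx / detector_idx: source and detector index of each measurement.
    nn1_mask: boolean mask selecting first-nearest-neighbor measurements."""
    src_cc = np.zeros(n_sources)
    det_cc = np.zeros(n_detectors)
    for s in range(n_sources):
        sel = nn1_mask & (source_idx == s)
        if sel.any():
            src_cc[s] = log_ratio[sel].mean()   # mean over that source's NN1 pairs
    for d in range(n_detectors):
        sel = nn1_mask & (detector_idx == d)
        if sel.any():
            det_cc[d] = log_ratio[sel].mean()   # mean over that detector's NN1 pairs
    return src_cc, det_cc
```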
  • the example de-noising method may include noise removal methods from fMRI and fNIRS.
  • the example de-noising method may include one of four methods having shown promise for motion artifact removal: (i) independent component analysis (ICA); (ii) wavelet analysis; (iii) "scrubbing" data (cropping corrupted segments); and (iv) polynomial spline interpolation. Work with blind source separation of HD-DOT data suggests that ICA will also aid in noise identification and reduction.
  • fcDOT methods are developed for evaluating how similar (or dissimilar) a single subject is in comparison to a population average. Specifically, fcDOT analyses are developed by compressing the full fc-matrix (voxel-by-voxel) down to images that assign an fc-metric to each voxel in the brain (shown in FIG. 14). For example, previous cases have found bilateral homotopic connectivity maps useful in mouse studies of Alzheimer's disease. Other potentially useful fc-indices include similarity and asymmetry measures. To test the fcDOT methods, normative data sets are acquired by studying healthy subjects within an anticipated age range (60-80 years).
  • fcDOT is established in acute stroke.
  • Bedside fcDOT (or fcDOT in the acute care setting) will enable longitudinal monitoring of functional connectivity.
  • the wearability of fcDOT technology enables longitudinal bedside functional mapping of brain integrity during the post-stroke acute time window (12-72 hours) in the intensive care unit (ICU).
  • fcDOT is compared to serial behavioral exams (e.g., the NIHSS).
  • the benefit of fcDOT as a brain monitoring imaging method is demonstrated in extended 12 hour scanning. Such time windows may be difficult or impossible with fcMRI.
  • FIGS. 12A-12D are diagrams illustrating an example of a validation of fcDOT versus fMRI mapping of brain function using a plurality of language paradigms.
  • FIG. 12A illustrates validating fcDOT versus fMRI in an example of using hearing words versus other words.
  • FIG. 12B illustrates validating fcDOT versus fMRI using reading words versus other words.
  • FIG. 12C illustrates validating fcDOT versus fMRI using imagined speaking versus reading.
  • FIG. 12D illustrates validating fcDOT versus fMRI using verb generation versus imagined speaking.
  • the contrast-to-noise-ratio (CNR, expressed as the max t-value associated with each color bar) for HD-DOT across subjects was within a factor of 2 of the fMRI CNR, suggesting that HD-DOT (within its FOV) has similar reproducibility to that of fMRI.
  • a goal in extending the FOV was to image distributed resting state networks (RSNs) (shown in FIG. 13).
  • FIGS. 13A-13F are diagrams illustrating seed-based correlation maps obtained in normal volunteers for three sensory-motor and three cognitive networks, where the anatomical location of each seed is shown as a black dot.
  • resting state functional connectivity of distributed networks provides a sensitive marker of neurological dysfunction.
  • distributed RSNs may include the dorsal attention (DAN), fronto-parietal control (FPC) and default mode (DMN) networks.
  • DAN dorsal attention
  • FPC fronto-parietal control
  • DMN default mode
  • These fcDOT RSNs exhibit topographies similar to those obtained non-concurrently with fcMRI (FPC, DAN, DMN).
  • FIGS. 14A-14I are diagrams illustrating examples of the feasibility of a clinical HD-DOT system with limited FOV.
  • FIG. 14A illustrates fcDOT in the ICU on a patient recovering from an acute stroke.
  • FIGS. 14B and 14C illustrate CT and fcDOT for a healthy subject.
  • FIGS. 14D and 14E illustrate CT and fcDOT for a moderate stroke subject.
  • FIGS. 14F and 14G illustrate CT and fcDOT for a severe stroke subject.
  • the infarcts are represented by binary masks.
  • the alterations in fc patterns measured via seed- voxel maps are correlated with the severity of stroke injury.
  • an asymmetry index, a within-subject measure, quantifies how different the maps are on opposite sides of the head and shows a strong correlation to the NIHSS across 6 subjects.
  • a similarity index, a between-subject measure, quantifies how similar any two fc maps are and also shows a strong correlation to the NIHSS.
  • fcDOT is evaluated by validating fcDOT against fcMRI and neurocognitive testing in both a normal population and chronic stroke. More particularly, fcDOT is evaluated using fc-metrics that comprehensively evaluate the connection patterns including an asymmetry index and a similarity index. These metrics will also be used to compare fcDOT and fcMRI.
  • a limited FOV HD-DOT system was developed to test the feasibility of imaging populations in the neonatal ICU and in the adult ICU at the bedside of patients recovering from stroke (shown in FIGS. 14A-14I).
  • FIG. 15 illustrates longitudinal fcDOT maps taken during a period of 7 hours.
  • a group of 4 healthy subjects were scanned twice, each for 8 hours continuously.
  • the first goal was to ascertain the wearability of the imaging cap.
  • the subjects' wear times were as follows: Subject 1, 7 h 47 min / 7 h 21 min; Subject 2, 9 h 18 min / 9 h 11 min; Subject 3, 8 h 10 min / 8 h 40 min; Subject 4, 8 h 13 min / 8 h 39 min.
  • the four subjects were able to wear the cap for 8 hours (±30 min) without reporting any increased discomfort from the cap. This confirmed anecdotal evidence from shorter 2-hour scans that any discomfort is evident during the first 30 minutes of scanning.
  • the cap fit supports long-term wear.
  • preliminary image analysis shows promising stability of fcDOT across hours.
  • inclusion criteria for healthy subjects include: 1) age 50-80 years; 2) no history of neurological disorders; and 3) balanced for gender.
  • Exclusion criteria include: HD-DOT headset discomfort or any MRI contraindications.
  • Exclusion criteria include: 1) non-stroke diagnosis; 2) intracerebral hemorrhage; 3) DOT cap discomfort; and 4) MRI contraindications.
  • Healthy subjects are imaged on two days with two sessions each day, including a total of one fcMRI session and three fcDOT sessions (in random order). Stroke subjects are also brought in for two days: one day fcMRI and fcDOT, the other day fcDOT and behavioral testing (in random order). Both days are within two weeks.
  • subjects are scanned (fcDOT) for 1.5 hours using (i) 30 min supine resting state, (ii) 30 min supine mixture of auditory stimuli (words) and visual stimuli (flickering checkerboards), and (iii) 30 min sitting with 30° head-of-bed elevation.
  • fcMRI are collected by similar means to those shown in FIGS. 12A-12D and 13A-13F.
  • neurobehavioral assessments are conducted by a psychometrician blinded to the imaging results to comprehensively assess cognitive and motor deficits.
  • Multiple cognitive domains are evaluated (e.g., language, memory, attention, and motor function) using the following tests: for spatial attention, a computerized Posner Task, recording reaction times (RTs) and accuracy; for motor, active range of motion at the wrist, grip strength, performance on the Action Research Arm Test (ARAT), speed on the Nine Hole Peg Test (NHPT), in pegs/second, gait speed, and Functional Independence Measurement (FIM) walk item; for attention, a Posner task, Mesulam symbol cancelation test, and Behavioral inattention test (BIT) star cancellation test; for memory, the Hopkins verbal learning test (HVLT) and brief visuospatial memory test (BVMT); for language, word comprehension, Boston Naming Test, oral reading of sentences, stem completion, and animal naming.
  • HVLT Hopkins verbal learning test
  • BVMT brief visuospatial memory test
  • the fc-metrics computed for both fcDOT and fcMRI include seed-voxel maps, homotopic-fc, asymmetry-fc, and similarity-fc.
  • Seed-voxel maps are computed using a subset of seeds from the fcMRI literature (within DOT FOV).
  • Homotopic-fc is computed by constructing an interhemispheric homotopic index using every voxel in a hemisphere as a seed.
  • the homotopic connectivity metric strongly correlates with ischemic deficit.
  • the asymmetry index equals the normalized difference in the number of voxels above threshold between the hemispheres (shown in FIGS. 14A-14I).
  • Similarity-fc is computed using a similarity index calculated for each voxel (seed), measuring the spatial correlation between any two given fc maps (e.g., group-vs-group and subject-vs-group) (shown in FIGS. 14A-14I; a sketch of the asymmetry and similarity indices appears after this list).
  • fcDOT metrics are established through comparison to fcMRI and test-retest.
  • the fcDOT data are validated through comparisons between fcDOT and fcMRI at both the single subject and group level for the fc-metrics.
  • ICC intra-class correlation coefficient
  • to compare head-of-bed elevation, the difference in fcDOT between supine and sitting with 30° head-of-bed elevation is evaluated using means similar to the validation of fcDOT against fcMRI described previously. While there is no precedent from fcMRI, the expected differences are relatively small, though likely detectable. In some embodiments, the head-of-bed comparison provides the control data for comparing the performance of fcDOT in chronic stroke to behavior.
  • fcDOT chronic stroke
  • fcDOT patterns in stroke patients differ from those of healthy age matched controls (shown in FIG. 10).
  • the analysis for the performance of fcDOT in normal subjects is repeated for the stroke subject data.
  • the behavioral metrics are compared against fc-metrics using logistic regression analysis to test whether behavior abnormalities in stroke are associated with fcDOT measures of dysfunction.
  • Initial analysis may use global integrated neurological behavior measures (e.g., NIHSS) and global integrated measures of fcDOT (e.g., brain average of (dis)similarity metric).
  • a secondary analysis may evaluate more specific functional relationships between the behavioral domains and sub-network fcDOT metrics (e.g., the average within-subnetwork strengths of the somatomotor, attention, visual, and default mode sub-networks, shown in FIGS. 12 and 13). From known studies, it is anticipated that the somatomotor and default networks correlate with somatomotor behavioral function. A linear mixed-model analysis is used on both behavioral and fc-indices. Control for the multiple comparisons follows known statistical analysis of HD-DOT and MRI, and uses a cluster analysis in conjunction with a random field noise model that incorporates measures of the local temporal and spatial correlations.
  • fcDOT longitudinal fcDOT in the ICU.
  • Acute stroke subjects test fcDOT in an acute disease in a subject population with a wide dynamic range of functional deficits and significant changes over time (hours/days). Behavioral dysfunction ranges from a complete recovery to death.
  • NIHSS NIH stroke scale
  • Exclusion criteria include: 1) non-stroke diagnosis; 2) intracerebral hemorrhage on recruitment; 3) HD-DOT headset discomfort.
  • inclusion criteria include those for the first method, and also include patients who are under orders of 24-hour bed rest (e.g., all patients receiving thrombolytics or with severe strokes); patients with significant aphasia (inability to communicate) are excluded.
  • the first method of the example study tests if fcDOT can detect changes in neurological status over time.
  • Patient improvement may follow reperfusion; deterioration may occur due to a number of causes including hemorrhage or cerebral edema.
  • fcDOT measures may degrade in parallel. This analysis leverages the multiple time epochs acquired within each subject.
  • fcDOT sensitivity may be established as an imaging biomarker for longitudinal monitoring of neurological status, e.g., to validate fcDOT in relation to NIHSS.
  • a study may compare fcDOT to CT.
  • CT is used to define the spatial location and extent of infarct.
  • fcDOT may have application in the ischemic stroke population.
  • fcDOT metrics may herald impending herniation. Cytotoxic cerebral edema usually occurs within days after stroke onset, and is manifested by neurological deterioration and decline in level of arousal. Fc metrics may be able to detect early signs of edema, e.g., by correlating the disruption of contra-lesional local-fc with degree of edema as measured by midline shift (mm) from CT. Further, fcDOT may predict future functional outcome, since recent data suggest that bilateral homotopic fc is predictive of longer term outcome.
  • FIG. 16 is a diagram illustrating an example of a super-pixel detection method for measuring brain activity.
  • method 1600 includes receiving a plurality of signals from a plurality of fibers used to image a head of a user, in 1610.
  • each of the fibers may include a source fiber for emitting light towards the head of the user and a detector fiber for detecting light that is incident from the head of the user.
  • the plurality of fibers may be included in a fiber array.
  • a first end of the fiber array may be attached to an imaging cap that is worn on a head of the user.
  • the other end of the fiber array may be attached to an electronic console to measure signals detected from the imaging cap while worn on the head of the user.
  • the method 1600 further includes performing super-pixel detection on the image signals received from the plurality of fibers, in 1620.
  • a detector may be divided into super-pixels.
  • Each super-pixel may include a plurality of pixels, for example, 25 x 25 pixels, 40 x 40 pixels, 60 x 60 pixels, 85 x 85 pixels, and the like.
  • Each super-pixel may include a core that is configured to sense light from the fibers included in the fiber array. Pixel values of pixels included in the core may be summed.
  • the core may be of any desired shape, for example, circular, square, elliptical, and the like.
  • a buffer region may surround the core of each super-pixel. In the buffer region, light may decay thus preventing cross-talk between the super-pixels.
  • Each super-pixel may further include a reference region that surrounds the buffer regions. The reference region may be used to detect stray light.
  • the method 1600 further includes generating HD-DOT image data based on the super-pixel detected image signals, in 1630.
  • a detector may convert incident light into electron charges to generate an electric signal that may be processed and may be used to construct, for example, HD-DOT images of the patient or the patient's brain.
  • the detector may include an electron multiply charge-coupled device (EMCCD) having a plurality of pixels defined on a surface of the detector.
  • EMCCD electron multiply charge-coupled device
  • detector fibers transport light (i.e., scattered light received by the detectors) between an imaging cap and an electronic console. The received light may be focused onto the detector by a lens, and the light incident on the detector may be converted into an electric signal including HD-DOT image data.
  • the method 1600 further includes outputting the generated HD-DOT image data, in 1640.
  • the HD-DOT image data may be displayed on a screen that is electrically connected to the electronic console.
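
The following is a minimal, non-limiting sketch (not part of the original disclosure) of how the nearest-neighbor coupling-coefficient estimate noted above could be computed. The data layout (a list of source index, detector index, source-detector separation, and measured value per SD pair) and the nearest-neighbor separation cutoff are assumptions made only for illustration.

    import numpy as np

    def estimate_coupling_coefficients(meas, n_src, n_det, nn1_max_mm=20.0):
        """Rough per-source and per-detector coupling-coefficient estimates.

        meas: list of (src, det, separation_mm, value) tuples, where value is the
        measured quantity for that source-detector (SD) pair. Each optode's
        estimate is the mean over its first-nearest-neighbor measurements
        (separations at or below nn1_max_mm), as described in the bullet above.
        """
        src_cc = np.zeros(n_src)
        det_cc = np.zeros(n_det)
        for cc, key in ((src_cc, 0), (det_cc, 1)):
            for i in range(len(cc)):
                vals = [v for (s, d, r, v) in meas
                        if (s, d)[key] == i and r <= nn1_max_mm]
                cc[i] = np.mean(vals) if vals else 0.0
        return src_cc, det_cc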
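
Similarly, the asymmetry and similarity indices referenced in the fc-metric bullets above can be sketched as follows. This is an illustrative interpretation only; the hemisphere masks, the correlation threshold, and the use of a Pearson correlation as the spatial similarity measure are assumptions rather than specifics of the disclosure.

    import numpy as np

    def asymmetry_index(fc_map, left_mask, right_mask, threshold=0.3):
        """Normalized difference in supra-threshold voxel counts between hemispheres."""
        n_left = np.count_nonzero(fc_map[left_mask] > threshold)
        n_right = np.count_nonzero(fc_map[right_mask] > threshold)
        total = n_left + n_right
        return 0.0 if total == 0 else (n_left - n_right) / total

    def similarity_index(fc_map_a, fc_map_b, mask):
        """Spatial (Pearson) correlation between two fc maps over a shared mask."""
        a, b = fc_map_a[mask], fc_map_b[mask]
        return float(np.corrcoef(a, b)[0, 1])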

Abstract

A system includes a wearable head apparatus and an electronic console. The head apparatus is configured to receive resultant light from the head of a subject. The electronic console includes a fiber array, a detector, and a computing device. The fiber array includes a plurality of fibers configured to transport resultant light received by the head apparatus. The detector includes a plurality of super-pixels each defined by a plurality of pixels of an array of pixels. Each super-pixel is associated with a fiber. Each super-pixel is configured to generate a plurality of detection signals in response to detected resultant light from its associated fiber. The computing device receives the detection signals from each of the plurality of super-pixels. The computing device generates a high density-diffuse optical tomography (HD-DOT) image signal of the brain activity of the subject based on the detection signals from the super-pixels.

Description

SUPER-PIXEL DETECTION FOR WEARABLE DIFFUSE OPTICAL TOMOGRAPHY
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of priority to U.S. Provisional Patent Application No. 62/065,337, filed October 17, 2014, the entire disclosure of which is incorporated herein by reference.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH & DEVELOPMENT
[0002] This invention was made with government support under grant R01EB009233, awarded by the U.S. National Institutes of Health. The U.S. government may have certain rights in this invention.
BACKGROUND OF THE DISCLOSURE
[0003] The example embodiments herein generally relate to measuring brain activity using diffuse optical tomography and, more specifically, to the measuring of brain activity using super- pixel detection with wearable diffuse optical tomography.
[0004] Functional neuroimaging has enabled mapping of brain function and revolutionized cognitive neuroscience. Typically, functional neuroimaging is used as a diagnostic and prognostic tool in the clinical setting. Its application in the study of disease may benefit from new, more flexible tools. Recently, functional magnetic resonance imaging (fMRI) has been widely used to study brain function. However, the logistics of traditional fMRI devices are ill-suited to subjects in critical care. In particular, fMRI generally requires patients to be centralized in scanning rooms, and provides a single "snap shot" of neurological status isolated to the time of imaging, providing a limited assessment during a rapidly evolving clinical scenario. This snap shot is generally captured on a limited basis, for example, once per stay at a hospital, once a week, once a month, and the like.
[0005] In one example, in ischemic stroke (which presents with the sudden onset of neurological deficits), the ischemia triggers a complex cascade of events including anoxic depolarization, excitotoxicity, spreading depression, and, in some cases, reperfusion. During the hyperacute phase (first hours after onset), brain injury evolves rapidly, and therapeutic interventions (e.g., thrombolysis/thrombectomy) aim to preserve viable brain tissue. Beyond the hyperacute phase, potential concerns include post-thrombolysis hemorrhagic transformation, and life-threatening cerebral edema. Therefore, throughout the hyperacute to sub-acute phases, early detection of neurological deterioration is essential and close neurological monitoring is critical.
[0006] Diffuse optical imaging (DOI) is a method of imaging using near-infrared spectroscopy (NIRS) or fluorescence-based methods. When used to create three-dimensional (3D) volumetric models of the imaged material, DOI is referred to as diffuse optical tomography, whereas two-dimensional (2D) imaging methods are classified as diffuse optical topography. Functional Near-Infrared Spectroscopy (fNIR or fNIRS) is the use of NIRS (near-infrared spectroscopy) for the purpose of functional neuroimaging. Using fNIR, brain activity is measured through hemodynamic responses associated with neuron behavior.
[0007] fNIR is a non-invasive imaging method involving the quantification of chromophore concentration resolved from the measurement of near infrared (NIR) light attenuation, temporal, or phasic changes. NIR spectrum light takes advantage of the optical window in which skin, tissue, and bone are mostly transparent to NIR light in the spectrum of approximately 700-900 nm, while hemoglobin (Hb) and deoxygenated-hemoglobin (deoxy-Hb) are stronger absorbers of light. Differences in the absorption spectra of deoxy-Hb and oxy-Hb allow the measurement of relative changes in hemoglobin concentration through the use of light attenuation at multiple wavelengths.
[0008] fNIR and fNIRS may be used to assess cerebral hemodynamics in a manner similar to fMRI using various optical techniques. In principle, fNIRS could be used for bedside monitoring of a neurological status of a patient. However, despite unique strengths, fNIRS as a standard tool for functional mapping has been limited by poor spatial resolution, limited depth penetration, a lack of volumetric localization, and contamination of brain signals by hemodynamics in the scalp and skull.
[0009] High-density diffuse optical tomography (HD-DOT) provides an advanced NIRS technique that offers substantial improvement in spatial resolution and brain specificity. However, these advancements in HD-DOT lead to additional challenges in wearability and portability. For example, increasing the number of detection fibers in a wearable apparatus to increase spatial resolution also increases the weight of the wearable device.
[0010] Accordingly, it would be beneficial to provide a new functional imaging apparatus capable of monitoring the neurological status of a patient at a bedside in a clinical setting, for example, during an acute stroke, and the like, in order to provide meaningful functional readouts useful to a clinician. Preferably, the functional imaging apparatus would be much lighter in weight in comparison to previous wearable apparatuses, and be of a size that is convenient for portability, movement, and continuous uninterrupted wearing of the apparatus. The new bedside monitoring technique would benefit patients in clinical settings such as intensive care units, operating rooms, and the like.
BRIEF DESCRIPTION OF THE DISCLOSURE
[0011] One aspect of the disclosure provides an electronic console for super-pixel detection and analysis. The electronic console includes a fiber array, a detector coupled to the fiber array, a computing device coupled to the detector, and a display. The fiber array includes a plurality of fibers configured to transport resultant light detected by a head apparatus worn by a subject. The detector is coupled to the fiber array to detect resultant light from the plurality of fibers. The detector includes a plurality of super-pixels each defined by a plurality of pixels of an array of pixels. Each super-pixel is associated with a fiber of the plurality of fibers. Each super-pixel is configured to generate a plurality of detection signals in response to detected resultant light from its associated fiber. The computing device receives the plurality of detection signals from each of the plurality of super-pixels. The computing device is configured to generate a high density-diffuse optical tomography (HD-DOT) image signal of the brain activity of the subject based on the plurality of detection signals from each of the plurality of super-pixels. The display is configured to display the HD-DOT image signal of the brain activity of the subject.
[0012] Another aspect of the disclosure provides a system with a wearable head apparatus configured to be worn on a head of a subject and an electronic console. The head apparatus is configured to direct light at the head of the subject and receive resultant light from the head of the subject in response to the light directed at the head of the subject. The electronic console includes a fiber array, a detector coupled to the fiber array, and a computing device coupled to the detector. The fiber array includes a plurality of fibers configured to transport light to the head apparatus worn by a subject and transport resultant light received by the head apparatus. The detector includes a plurality of super-pixels each defined by a plurality of pixels of an array of pixels. Each super-pixel is associated with a fiber of the plurality of fibers. Each super-pixel is configured to generate a plurality of detection signals in response to detected resultant light from its associated fiber. The computing device receives the plurality of detection signals from each of the plurality of super-pixels. The computing device is configured to generate a high density-diffuse optical tomography (HD-DOT) image signal of the brain activity of the subject based on the plurality of detection signals from each of the plurality of super-pixels.
[0013] Another aspect of the disclosure provides a computer- implemented method for performing super-pixel detection using a detector that includes a plurality of super-pixels each defined by a plurality of pixels of an array of pixels. The method is implemented by a computing device in communication with a memory. The method includes receiving, by the computing device, a plurality of detection signals from the array of pixels. For each super-pixel, a subset of the plurality of detection signals is associated with the super-pixel that generated the detection signals in the subset. A high density-diffuse optical tomography (HD-DOT) image signal of the brain activity of the subject is generated based at least in part on the subsets of the plurality of detection signals associated with the plurality of super-pixels, and the generated HD-DOT image signal is output.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] FIG. 1 is a schematic diagram illustrating an example of a high density-diffuse optical tomography (HD-DOT) system;
[0015] FIG. 2 is a diagram illustrating an example of a weight-vs-coverage analysis of an HD-DOT system;
[0016] FIGS. 3A-3F are diagrams illustrating examples of a large field-of-view HD-DOT system for imaging distributed brain function;
[0017] FIGS. 4A-4C are diagrams illustrating examples of a super-pixel concept and design to decrease noise and fiber size;
[0018] FIGS. 5A-5D are diagrams illustrating examples of super-pixel detection;
[0019] FIGS. 6A-6B are diagrams illustrating an example of a first lightweight prototype cap;
[0020] FIGS. 7A-7E are diagrams illustrating an example of a second prototype of a low profile, lightweight wearable HD-DOT cap;
[0021] FIG. 8 is a flow diagram illustrating an example for improving HD-DOT using anatomical reconstructions;
[0022] FIGS. 9A-9E are diagrams illustrating examples of head surface driven subject-specific head modeling;
[0023] FIG. 10 is a diagram illustrating an example of atlas derived DOT visual activations on a single subject;
[0024] FIG. 11 is a diagram illustrating an example of real-time three-dimensional (3D) object scanning;
[0025] FIGS. 12A-12D are diagrams illustrating an example of a validation of functional diffuse optical tomography (fcDOT) in view of functional magnetic resonance imaging (fMRI) mapping of brain function using language paradigms;
[0026] FIGS. 13A-13F are diagrams illustrating examples of resting state functional connectivity diffuse optical tomography (fcDOT) maps of distributed resting state networks;
[0027] FIGS. 14A-14I are diagrams illustrating examples of a feasibility of a clinical HD-DOT system with limited field of view (FOV);
[0028] FIG. 15 is a diagram illustrating an example of longitudinal fcDOT maps; and
[0029] FIG. 16 is a diagram illustrating an example of a super-pixel detection method for measuring brain activity.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0030] The example embodiments herein relate to systems, apparatuses, and methods for providing wearable, whole-head functional connectivity diffuse optical tomography (fcDOT) tools for longitudinal brain monitoring that may be used in an acute care setting, such as at a bedside of a person. For example, a wearable apparatus such as a cap, helmet, and/or the like, may be used to cover the head of the person. The cap may include fibers for detecting light reflected from the brain/head of the person. The cap may be in contact with an electronic console that may analyze the detected brain information. Also provided herein are systems, apparatuses, and methods for providing photometric head modeling and motion denoising for high density-diffuse optical tomography (HD-DOT).
[0031] According to various example embodiments, super-pixel detection enables lightweight wearable apparatuses such as caps and portable diffuse optical tomography (DOT) instrumentation. For DOT, the size of the detection fibers is an obstacle to fabricating more ergonomic (wearable) and portable DOT. For example, an average wearable HD-DOT apparatus includes approximately 280 fiber strands (about one meter in length), and has a weight of around 30 pounds. Even a sparse HD-DOT wearable apparatus having between 50-100 fiber strands has a weight that is approximately 7-10 pounds. As described in more detail below, by introducing super-pixel detection to DOT, smaller fibers (less than about 1/30 of the known standard) may be used while still maintaining HD-DOT performance specifications, and development of a novel full-head low-profile wearable DOT cap is possible.
[0032] For example, the super-pixel detection technology may use detectors such as electron-multiply charge-coupled devices (EMCCD), scientific complementary metal-oxide-semiconductor (sCMOS) detectors, and the like. Previously developed EMCCD-based DOT systems are slow (e.g., less than about 0.01 Hz) and use geometries that may require only limited dynamic range, such as small volumes (e.g., mouse) or transmission mode measurements.
[0033] However, functional neuroimaging DOT systems have so far not used EMCCD or sCMOS technology. In the example embodiments, a super-pixel approach uses a combination of temporal and spatial referencing along with cross-talk reduction to obtain high dynamic range (DNR) and low cross talk. One improvement over previous technology such as avalanche photodiodes (APDs) is a significant reduction in sensitivity (NEP) that enables the use of smaller fibers (e.g., greater than about 30x reduction) and a smaller console (e.g., greater than about 5x).
[0034] Generally, EMCCDs and sCMOS detectors are attractive for use in DOT because they include many pixels, integrated cooling, electron multiply gain, A/D conversion, flexible software control, and the like. A challenge in using EMCCDs or sCMOS detectors is to establish DOT detector specifications, including low detection noise equivalent power (NEP < 20 fW/√Hz), detectivity (3 (fW/√Hz)/mm²), high dynamic range (DNR > 10⁶), low inter-measurement cross talk (CT < 10⁻⁶), and high frame rates (FR > 3 Hz). However, significant challenges exist because raw single-pixel EMCCD signals fail to meet DOT specifications by greater than about 100x, with DNR ~ 10⁴ and CT ~ 10⁻³.
[0035] According to the example embodiments, the super-pixel design overcomes previous limitations of EMCCD based DOT systems and lowers the noise equivalent power (NEP) while maintaining high dynamic range (DNR > 10⁶), low cross-talk (CT < 10⁻⁶), and reasonable frame rates (FR > 3 Hz). For example, the super-pixel concept leverages massive pixel summing while avoiding corruption by noise sources.
[0036] For example, the super-pixel detection method may generate a medium sized detector (scale 0.1 to 1 mm diameter) by summing pixel values on a CMOS or CCD camera. In principle, the noise equivalent power scales as ~area^(1/2), and thus (NEP/area) scales as ~area^(-1/2). As a non-limiting example, a super-pixel (area = 0.13 mm²) may provide NEP = 0.15 fW/√Hz and (NEP/area) = 1.18 (fW/√Hz)/mm². However, simple binning and temporal summing may not be sufficient. EMCCDs have a dark-field signal drift that becomes apparent when summing many frames. To counter this, within-frame dark-field measurements and temporal modulation/demodulation are used. By lowering the detectivity (NEP/area), the dynamic range is commensurately increased at the same time (e.g., ~5×10⁶).
[0037] Before cross-talk is addressed, the basic math involved is addressed with super-pixel summing and how the summing modifies the noise floor, the detectivity, and dynamic range. A goal of super-pixel DOT (SP-DOT) is to create a wearable, whole-head imaging system. As a non-limiting example, the whole head may include a top half of the head, a scalp of the head, a surface of the head from the forehead to the back neckline, and the like. To achieve this, the field-standard optical fibers are decreased by a factor of ~10x in diameter. This decrease in diameter causes a ~100x decrease in weight, but also a decrease in the amount of light collected. Currently used APD detectors are not sensitive enough for use with 10x smaller fibers. Detectivity (D) is a measurement of this sensitivity per area of incident light via the fiber. Mathematically, the Detectivity is the Noise Floor (NF) of the sensor divided by the area (A) over which the light is incident.
[0038] An advantage of using sCMOS and EMCCD sensors is that they are more sensitive than the APDs. For example, the individual pixels on an sCMOS sensor have a Noise Equivalent Power (NEP) that is 10⁴ lower than the APDs. A potential drawback is that the individual pixels have a Dynamic Range that is 10² lower than the APDs, and a smaller collection area. The super-pixel algorithm shown below manipulates a CCD sensor's individual pixels so that its Dynamic Range is increased while lowering the Detectivity (NEP/area).
[0039] For example, a single pixel before a super-pixel algorithm is applied has a noise floor (NFpix), an area (Apix), a resulting Detectivity (Dpix = NFpix / Apix), and a dynamic range (DNRpix). The DNRpix is calculated as the full well depth of the pixel (FWpix) divided by the noise floor: DNRpix = FWpix / NFpix.
[0040] In order to increase the dynamic range and decrease the detectivity, multiple pixels are combined together by summing N pixels within a single frame together. Summing N pixels together increases the full well depth linearly with N (FWsp = FWpix * N), but the noise floor increases as the square root of N because the noise is added in quadrature (NFsp = NFpix * √N). The dynamic range therefore increases as the square root of N: DNRsp = FWsp / NFsp = (FWpix * N) / (NFpix * √N) = DNRpix * √N. While summing N pixels increases the noise floor, it also increases the resulting area. Therefore, the detectivity will decrease by the square root of N: Dsp = NFsp / Asp = (NFpix * √N) / (Apix * N) = Dpix / √N.
[0041] By creating a super-pixel within a single frame, the dynamic range (DNRsp) has increased and the detectivity (Dsp) has decreased. However, to use this super-pixel algorithm for brain imaging, the sampling rate of the data needs to be considered. A typical data rate for brain imaging is 10 Hz. To compare the noise across sensors, the noise floor is calculated over a 1 second bandwidth. If the CMOS collects at a frame rate of f Hz, f images are collected per 1 second, and therefore after summing over the 1 second bandwidth the noise floor and the detectivity increase as the square root of f: NFsp,f = NFsp * √f = NFpix * √N * √f.
[0042] In this example, Dsp,f = NFsp,f / Asp = (NFpix * √N * √f) / (Apix * N) = Dpix * (√f / √N). Also, the dynamic range will be improved because f frames are summed: DNRsp,f = √f * DNRsp = DNRpix * √f * √N.
[0043] Another aspect to account for in brain imaging is the encoding for location within the data. Location is encoded in separate frames, so 1 frame needs to be collected for each encoding step (K). This will have the effect of reducing the light levels in each frame by a factor of K or increasing the noise floor in each frame by a factor of K. The effective read noise will therefore increase linearly with K: NFsp,f,k = NFsp,f * K = NFpix * √N * √f * K. The resultant effective detectivity will also increase linearly with K: Dsp,f,k = (Dpix / √N) * √f * K. The dynamic range will not change as the number of frames summed does not change: DNRsp,f,k = DNRsp,f = DNRpix * √f * √N.
[0044] The super-pixel algorithm therefore allows for sCMOS and EMCCD sensors to perform tomographic neural imaging by using manipulations that increase the dynamic range as compared to a single pixel and maintains a comparable detectivity by accounting for the frame rate, size of the super-pixel, and number of encoding steps.
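For readability, the scaling relations stated in paragraphs [0039]-[0043] can be collected in one place. This is a restatement of the text above under its ideal shot-noise assumption, with no additional assumptions:

$$\begin{aligned}
NF_{sp} &= NF_{pix}\sqrt{N}, & DNR_{sp} &= DNR_{pix}\sqrt{N}, & D_{sp} &= D_{pix}/\sqrt{N},\\
NF_{sp,f} &= NF_{pix}\sqrt{N}\sqrt{f}, & DNR_{sp,f} &= DNR_{pix}\sqrt{N}\sqrt{f}, & D_{sp,f} &= D_{pix}\sqrt{f}/\sqrt{N},\\
NF_{sp,f,K} &= NF_{pix}\sqrt{N}\sqrt{f}\,K, & DNR_{sp,f,K} &= DNR_{pix}\sqrt{N}\sqrt{f}, & D_{sp,f,K} &= D_{pix}\,K\sqrt{f}/\sqrt{N}.
\end{aligned}$$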
[0045] In one example embodiment, applying the super-pixel algorithm with values necessary for wearable, whole-head imaging with 200 micron diameter fibers (156x smaller and lighter than the traditional fibers (diameter = 2.5 mm)), the dynamic range of the super-pixel was reduced from 100x to only 3x lower than the APDs, and the detectivity is still 10x better than the APDs (wavelength of 690 nm, N = 754 pixels, frame rate of f = 10 Hz, and K = 72 encoding steps).
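As a rough illustration (not part of the disclosure), the relative scaling factors implied by the parameters in the preceding paragraph can be evaluated directly from the relations above. Absolute comparisons to APDs additionally require the sensor's single-pixel NEP and full well depth, which are not restated here, so only the factors relative to a single pixel are computed.

    import math

    # Example parameters from paragraph [0045].
    N = 754   # pixels summed into one super-pixel
    f = 10.0  # camera frames contributing to a 1-second bandwidth (Hz)
    K = 72    # encoding steps (one frame per encoding state)

    # Scaling factors relative to a single pixel, per paragraphs [0040]-[0043].
    dnr_gain = math.sqrt(N * f)                           # DNRsp,f,k / DNRpix
    detectivity_factor = K * math.sqrt(f) / math.sqrt(N)  # Dsp,f,k / Dpix
    noise_floor_factor = K * math.sqrt(N * f)             # NFsp,f,k / NFpix

    print(f"Dynamic range gain over a single pixel: {dnr_gain:.1f}x")
    print(f"Effective detectivity relative to a single pixel: {detectivity_factor:.1f}x")
    print(f"Effective noise floor relative to a single pixel: {noise_floor_factor:.0f}x")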
[0046] The math in the examples above describes the bookkeeping for summing across pixels and frames assuming a perfect shot-noise model. The cross-talk reduction algorithm below addresses the problem of cross-talk, a topic ignored in the math above.
[0047] Regarding cross-talk, CT is complex at multiple levels including optical focusing and electronic sources within CCD elements, EMCCD gain, sCMOS readout structures, and A/D conversion. A super-pixel cross-talk reduction (CTR) method may be used to leverage the unique super-pixel reference areas. A bleed pattern for each super-pixel (into other super-pixels) may be measured in a calibration step. During operation, scaled bleed patterns are subtracted for each super-pixel from all other super-pixels. The bleed pattern correction is effectively a matrix operation that transforms a vector of raw super-pixel values to a vector of corrected super-pixel values. When the CTR method is implemented, the CT is less than about 1×10⁻⁶.
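As an illustration of the matrix operation described above, one possible (non-limiting) realization is sketched below. The calibration data format and the use of a full linear solve are assumptions made for the sketch; the single-pass subtraction of scaled bleed patterns described in the text corresponds to the first-order approximation noted in the comment.

    import numpy as np

    def calibrate_bleed_matrix(calib_frames):
        """Build a bleed matrix B from calibration frames.

        calib_frames[j] is the vector of raw super-pixel values recorded while
        only fiber j is illuminated. Column j of B is that response normalized
        by the illuminated super-pixel's own value, so diagonal entries are 1
        and off-diagonal entries are the bleed fractions into other super-pixels.
        """
        cols = [np.asarray(frame, dtype=float) / frame[j]
                for j, frame in enumerate(calib_frames)]
        return np.column_stack(cols)

    def correct_cross_talk(raw, B):
        """Map raw super-pixel values to cross-talk-corrected values.

        A single-pass version would instead subtract the scaled bleed patterns:
        raw - (B - np.eye(len(raw))) @ raw.
        """
        return np.linalg.solve(B, np.asarray(raw, dtype=float))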
[0048] The super-pixel concept generally involves two steps: within-frame dark-field subtraction, and an active cross-talk reduction scheme that uses calibrated bleed patterns to remove cross-talk signals during operation based on the images obtained when multiple fibers are illuminated.
[0049] As described in more detail below, a study was conducted for testing super-pixel feasibility. Super-pixel feasibility was tested using 0.4 mm fiber detectors. An EMCCD (Andor Tech, iXon Ultra 897) with 512 × 512 pixels of size 16 × 16 μm had an EM gain at 10x. In comparing super-pixel detectors to APDs (Hamamatsu, 3 mm dia., gain = 30), NEP was evaluated at 1 Hz. Dark backgrounds were subtracted in all cases. The super-pixel detectors provide NEP = 0.2 fW/√Hz, about 100x lower than the APDs, a DNR = 5×10⁶, and CT < 10⁻⁶. Further, the design achieves DOT frame-rates > 3 Hz.
[0050] Detection fibers (400/430/730 μm core/cladding/coating, FT400EMT, Thorlabs) may be held in an aluminum block (such as a 6 × 6 array) and imaged onto an EMCCD. The following numbers are for a super-pixel example with a 60-pixel diameter (total ~2,826 pixels, magnification = 2x).
[0051] Regarding frame rate, with on-camera binning (8 × 1) the camera FR = 448 Hz. Each HD-DOT frame (4.1 Hz) will have a total of 108 images (36 position encoding steps × [two wavelengths - 690 and 850 nm - plus a dark frame]). A Camera Link frame grabber with an onboard field-programmable gate array (National Instruments NI PCIe-1473R) will compute the super-pixels in real time.
[0052] FIG. 1 is a diagram illustrating an example of an HD-DOT system 100 including an imaging cap 101 (sometimes referred to herein as a wearable head apparatus), a fiber array 102, and an electronic console 110 coupled with imaging cap 101 through fiber array 102.
[0053] In the example embodiment, imaging cap 101 includes a plurality of interconnected patches, each patch including a plurality of sources and a plurality of detectors. In the example embodiment, each source corresponds with a detector to define a plurality of source-detector pairs. During operation, the imaging cap 101 is placed over the patient's head, and for each source- detector pair, light is transmitted to the patient by the source. Here, the transmitted light is scattered by interactions with the patient, and at least some of the scattered light (sometimes referred to herein as "resultant light") is received by detectors. In one embodiment, imaging cap 101 is configurable for a particular patient, e.g., by modeling the cap based on the patient's anatomy, and is lightweight to facilitate portability of the HD-DOT system and enable longitudinal imaging of the patient in the acute setting (e.g., a clinic, intensive care unit, or other environment).
[0054] Fiber array 102 may include a plurality of source fibers and a plurality of detector fibers. Source fibers are optical imaging fibers that may transport light from electronic console 110 to sources on imaging cap 101. Similarly, detector fibers are optical imaging fibers that transport light from detectors on imaging cap 101 to electronic console 110. In some embodiments, fiber array 102 may be constructed using fewer and/or smaller optical imaging fibers to facilitate portability of the HD-DOT system 100.
[0055] In the example embodiment, electronic console 110 includes a fiber array holder 103, a detector 105, a lens 106 positioned between fiber array holder 103 and detector 105, and a light source 104 coupled with fiber array 102 by fiber array holder 103. Fiber array holder 103 is coupled with optical fibers (i.e., source fibers and detector fibers) of fiber array 102, and is configured to hold the fibers in a desired arrangement to allow optical communication between fiber array holder 103, detector 105, and light source 104. In the example embodiment, fiber array holder 103 holds the fibers in a square arrangement that corresponds to the shape of detector 105. In other embodiments, fiber array holder 103 may be configured to hold the optical fibers in any suitable arrangement to enable the HD-DOT system 100 to function as described herein.
[0056] Detector 105 is an image sensing device and is positioned within electronic console 110 to receive light from detector fibers of fiber array 102. Detector 105 converts incident light into electron charges to generate an electric signal that may be processed to construct, for example, HD-DOT images of the patient or the patient's brain. In the example embodiment, detector 105 may include an electron multiply charge-coupled device (EMCCD) having a plurality of pixels defined on a surface of the detector. During operation, detector fibers transport light (i.e., scattered light received by the detectors) between imaging cap 101 and electronic console 110. The light is received at the electronic console 110 at fiber array holder 103. The received light is focused onto detector 105 by lens 106, and the light incident on detector 105 is converted into an electric signal including HD-DOT image data. In some embodiments, detector 105 may include other image sensing devices such as, e.g., a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS), or any suitable image sensing device to enable the system to function as described herein.
[0057] In the example embodiment of a scientific CMOS (sCMOS) detector, the row-by-row nature of the CMOS readout does introduce row-specific noise. To remove temporal drift noise, the frame-to-frame background is subtracted first. If there were no other noise sources in the CMOS, the noise would follow the predictions of the super-pixel math above. For the same reason that cross-talk reduction is used with the EMCCDs, a similar approach is required for sCMOS. In particular, it can be advantageous to subtract row-specific noise, wherein a number of pixels in each row that are not illuminated are used to generate a row-specific dark value. The row-specific dark values are then subtracted from the rest of the pixels in that row.
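A minimal sketch of the row-specific dark subtraction described in this paragraph is given below. It assumes the non-illuminated pixels of each row can be identified by a fixed set of column indices; that layout, and the use of a per-row mean as the dark value, are illustrative assumptions only.

    import numpy as np

    def subtract_row_dark(frame, dark_cols):
        """Subtract a per-row dark value estimated from non-illuminated pixels.

        frame: 2-D array of raw pixel values (one readout frame).
        dark_cols: column indices that are never illuminated; their per-row mean
        serves as the row-specific dark value subtracted from the rest of the row.
        """
        frame = np.asarray(frame, dtype=float)
        row_dark = frame[:, dark_cols].mean(axis=1, keepdims=True)
        return frame - row_dark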
[0058] Light source 104 is positioned within electronic console 110 to provide light to source fibers of fiber array 102. In the example embodiment, light source 104 includes a plurality of laser diodes (LDs), and each LD provides light to a source fiber and, further, to a source in imaging cap 101. In other embodiments, light source 104 includes a plurality of light emitting diodes (LEDs). In yet other embodiments, light source 104 may include any other suitable device for generating light to enable the system to function as described herein.
[0059] In the example embodiment, electronic console 110 also includes a computing device 115 for processing the electric signal generated by detector 105 to compute HD-DOT images. Computing device 115 may include at least one memory device 150 and a processor 120 coupled to memory device 150. In the example embodiment, memory device 150 stores executable instructions that, when executed by processor 120, enable computing device 115 to perform one or more operations described herein. In some embodiments, processor 120 may be programmed by encoding an operation as one or more executable instructions and providing the executable instructions in memory device 150.
[0060] Processor 120 may include one or more processing units (not shown) such as in a multi-core configuration. Further, processor 120 may be implemented using one or more heterogeneous processor systems in which a main processor is included with secondary processors on a single chip. As another example, processor 120 may be a symmetric multi-processor system containing multiple processors of the same type. Further, processor 120 may be implemented using any suitable programmable circuit including one or more systems and microcontrollers, microprocessors, programmable logic controllers (PLCs), reduced instruction set circuits (RISCs), application specific integrated circuits (ASICs), programmable logic circuits, field programmable gate arrays (FPGAs), and any other circuit capable of executing the functions described herein. Further, processor 120 may include an internal clock to monitor the timing of certain events, such as an imaging period and/or an imaging frequency. In the example embodiment, processor 120 receives imaging data from imaging cap 101, and processes the imaging data for HD-DOT.
[0061] Memory device 150 may include one or more devices that enable information such as executable instructions and/or other data to be stored and retrieved. Memory device 150 may include one or more computer readable media, such as, dynamic random access memory (DRAM), static random access memory (SRAM), a solid state disk, and/or a hard disk. Memory device 150 may be configured to store application source code, application object code, source code portions of interest, object code portions of interest, configuration data, execution events and/or any other type of data.
[0062] Computing device 115 also includes a media display 140 and an input interface 130. Media display 140 is coupled with processor 120, and presents information, such as user- configurable settings or HD-DOT images, to a user, such as a technician, doctor, or other user. Media display 140 may include any suitable media display that enables computing device 115 to function as described herein, such as, e.g., a cathode ray tube (CRT), a liquid crystal display (LCD), an organic light emitting diode (OLED) display, an LED matrix display, and an "electronic ink" display, and/or the like. Further, media display 140 may include more than one media display. According to various example embodiments, the media display may be used to display HD-DOT image data on a screen thereof.
[0063] Input interface 130 is coupled with processor 120 and is configured to receive input from the user (e.g., the technician). Input interface 130 may include a plurality of push buttons (not shown) that allow a user to cycle through user-configurable settings and/or user-selectable options corresponding to the settings. Alternatively, input interface 130 may include any suitable input device that enables computing device 115 to function as described herein, such as a keyboard, a pointing device, a mouse, a stylus, a touch sensitive panel (e.g., a touch pad or a touch screen), a gyroscope, an accelerometer, a position detector, an audio interface, and/or the like. Additionally, a single component, such as a touch screen, may function as both media display 140 and input interface 130.
[0064] Computing device 115 further includes a communications interface 160. Communications interface 160 is coupled with processor 120, and enables processor 120 (or computing device 115) to communicate with one or more components of the HD-DOT system, other computing devices, and/or components external to the HD-DOT system. Although not shown in FIG. 1, computing device 115 may further include a head modeling module that generates a photometric head model of the subject, and the detector may generate an image signal of the brain activity of the subject based on the photometric head model of the subject. Also, the computing device 115 may further include a de-noising module that removes noise from the light transported by the plurality of fibers from the head apparatus, and the detector may generate the image signal of the brain activity of the subject based on the removed noise. Embodiments of the head modeling and the noise removal are further described in various examples herein.
[0065] In the example embodiments, HD-DOT system 100 includes imaging cap 101 that provides whole-head coverage and which is lightweight. FIG. 2 illustrates an example of a weight-vs-coverage analysis of an HD-DOT system including wearable head apparatuses having different coverages and weights. As shown in FIG. 2, at point A, sparsely populated HD-DOT systems are lightweight (or wearable), but do not provide whole-head coverage. In this example, wearable head apparatus 201 includes sparsely populated fibers that, when combined, weigh approximately 8 pounds. However, due to the space between each fiber, the apparatus 201 does not provide complete coverage of the entire head of a person.
[0066] Coverage may be increased to provide whole-head imaging by increasing the number of source-detector pairs in the imaging cap, and increasing the relative number of imaging fibers coupled with the imaging cap. In such systems, increasing the number of imaging fibers also increases the weight of the imaging cap. As shown in FIG. 2 at point B, increasing coverage by increasing the number of imaging fibers in wearable head apparatus 202 forces the use of "hair-dryer" ergonomics for approximately one-half head coverage (also shown in FIG. 3). In this example, more head coverage is provided; however, such coverage requires about 280 fibers (for 1 m length fibers) and weighs up to about 30 lbs. Although the wearable head apparatus in this example can be supported by strain-relief guides, this fixes the apparatus in space and requires special seating and positioning that result in the "hair-dryer" ergonomics. Moreover, the electronic console in this system typically requires three 7-foot, 19-inch racks, and is essentially not portable. As a non-limiting example, a length of the fibers may be in a range between zero and two meters, one half meter and one and a half meters, one meter to three meters, and the like.
[0067] As shown in FIG. 2 at point C, wearable whole-head HD-DOT 203 according to one or more example embodiments may be provided by reducing the weight of a wearable head apparatus (i.e., an imaging cap in this example) while maintaining, or enhancing, head coverage. For example, during operation, a wearable whole-head HD-DOT apparatus 203 can meet (or exceed) performance requirements imposed by the high attenuation (blood volume) of the brain with optode separations of about 1-5 cm. Performance requirements may include low noise levels (NEP < 20 fW/√Hz for a 3 mm detector), high dynamic range (DNR > 10⁶), and low inter-measurement cross talk (CT < 10⁻⁶).
[0068] In this example, a super-pixel approach to instrumentation enables weight reduction of HD-DOT imaging caps. For example, a wearable whole-head HD-DOT apparatus 203 is enabled, at least in part, by using a super-pixel detection method and electron-multiply charge-coupled devices (EMCCD). The super-pixel detection method uses a combination of temporal and spatial referencing along with cross-talk reduction to obtain high dynamic range (DNR) and low cross talk. Furthermore, the super-pixel detection method provides a reduction in sensitivity (NEP) over at least some known methods and enables the use of smaller imaging fibers and a smaller console. For example, the method enables the use of imaging fibers up to about 30 times smaller than such fibers for known methods, and enables the use of an electronic console up to about 5 times smaller than such consoles for known methods. In one embodiment, use of smaller imaging fibers provides an imaging cap having a weight of about 1 lb (e.g., similar to the weight of a bicycle helmet) which, when provided along with an electronic console having a low-profile design, provides a wearable whole-head HD-DOT system suitable for longitudinal fcDOT in the acute care setting. In addition, whole-head HD-DOT apparatus 203 extends the field-of-view to cover multiple contiguous functional domains. For example, the imaging cap may have a weight between 0 pounds and two pounds, a range between a half pound and one and a half pounds, and the like. As another example, the imaging cap may have a weight that exceeds two pounds or that is less than two pounds.
[0069] FIGS. 3A-3F are diagrams illustrating examples of a large field-of-view (FOV) HD-DOT system for imaging distributed brain function. The examples of FIGS. 3A-3F correspond to wearable head apparatus 202 in FIG. 2. In the example embodiments, FIG. 3A illustrates a set-up for HD-DOT. FIGS. 3B, 3C, and 3D illustrate a plurality of monitored data sets for managing data quality control. For example, FIG. 3B illustrates the average light intensity for every source-detector (SD) pair separated by < 55 mm, spanning a DNR = 10⁶. FIG. 3C illustrates an average light level for each source and detector on a flattened view of the cap. FIG. 3D illustrates measurements above the noise threshold (variance < 7.5%, shown as lines). Also, FIG. 3E illustrates the FOV across 8 subjects, where the color bar codes the number of subjects with usable sensitivity at a given location of cortex. FIG. 3F illustrates a subject wearing a fiber head apparatus.
[0070] In the prototype HD-DOT system of FIGS. 3A-3F, DOT instrumentation for improved cortical coverage was developed by constructing a high-density array including a high-density regular grid of sources and detectors. The high-density array places strong demands on hardware, specifically, high sensitivity, low noise floor, and large dynamic range of each detector (shown in FIG. 3b). Development of in situ data quality control was critical for managing the large source- detector pair data sets (#SD-pairs > 2000) (shown in FIGS. 3C and 3D). An imaging head apparatus was developed that couples the optodes (i.e., source and detector fibers) to the head. However, in this example, the head apparatus is very heavy due to the size of the fibers included in the source-detector array.
[0071] According to various example embodiments, super-pixel concepts and designs may be applied to the optical fibers and image sensors of the HD-DOT systems described herein, enabling a reduction in an overall size of the wearable head apparatus, and, thus enabling a reduction in a size of a computing device that the head apparatus is connected to.
[0072] FIGS. 4A-4C are diagrams illustrating examples of a super-pixel concept applied to an HD-DOT system to decrease noise and fiber size. Referring to FIG. 4A, the fibers are relayed to an electron multiply charge coupled device (EMCCD) 105 using a high numerical aperture lens to maintain high transmission (e.g., > 90%). As shown in FIG. 4B, a 6 × 6 array of super-pixels 400 is defined. Each super-pixel 450 of the array of super-pixels 400 includes a core region 410 used to detect the super-pixel light intensity, a buffer region 420 where light levels decay by up to 10⁴ and are discarded, and a reference region 430 to calculate stray noise signals. Using reference subtraction, very low noise and cross talk may be obtained (shown in FIG. 5). FIG. 4C illustrates a front view of a fiber array holder (top) and a back view with optical fibers (bottom).
[0073] In the example embodiments, a super-pixel detection method may overcome previous limitations of CCD-based DOT systems. Also, the super-pixel detection method may lower the noise equivalent power (NEP) relative to avalanche photodiode (APD) detection (NEP = 20 fW/√Hz), while maintaining high dynamic range (DNR > 10⁶), low cross-talk (CT < 10⁻⁶), and reasonable frame rates (FR > 3 Hz). The super-pixel detection method leverages pixel summing while reducing corruption by noise sources. When implementing the super-pixel detection method, a cross-talk reduction (CTR) method between super-pixels may be performed. A study was conducted to test the feasibility of the super-pixel detection method using 0.4 mm fiber detectors (shown in FIGS. 4 and 5). In the study, the image sensor included an EMCCD with 512 × 512 pixels of size 16 × 16 μm and an EM gain at 10x.
[0074] In the example embodiments herein, a super-pixel such as super-pixel 450 shown in FIG. 4B includes a plurality of pixels combined to form one large super-pixel. In the example of FIGS. 4A-4C, a detector 400 includes an array of pixels. In this example, the detector includes an array of 510 × 510 pixels. Rather than analyze data from each individual pixel, the example embodiments generate the super-pixels. In this example, a super-pixel is 85 × 85 pixels. Accordingly, rather than an array of 260,100 individual pixels (510 × 510), the array includes 36 super-pixels (6 × 6), where each super-pixel includes 7,225 pixels. It should be appreciated that the example embodiments are not limited to specific sizes of detector arrays 400 and super-pixels, which may be any desired size.
[0075] Within each super-pixel is a pixel core 410. The pixel core may include a square shape, a circular shape, an oval shape, an elliptical shape, and the like. In various examples, to prevent noise and cross-talk between super-pixels, each super-pixel 450 includes a buffer 420 that surrounds the pixel core 410. Also, buffer 420 may be further surrounded by reference region 430. For example, the buffer 420 and the reference region 430 may be generated by turning off or otherwise preventing light from being detected by pixels in the buffer region 420 and the reference region 430. In this example, the pixel core 410, the buffer region 420, and the reference region 430 may be included within the super-pixel (i.e., within the 85 × 85 pixels).
[0076] FIGS. 5A-5D are photographs illustrating the performance of the super-pixel detection method shown in FIG. 4 (all data are shown as log(abs(data))). As shown in FIG. 5A, raw EMCCD images have a DNR, determined by the full well capacity and the readout noise, of about 10⁴. As shown in FIG. 5B, simple binning into 6 × 6 binned regions improves the DNR to 1×10⁵, but cross talk still occurs at 1×10⁻³. As shown in FIG. 5C, super-pixel analysis reduces CT to < 10⁻⁶ and generates a DNR of about 5×10⁶, and NEP = 2 fW/√Hz. FIG. 5D illustrates a full range test showing improved CT and DNR of super-pixels versus simple binning.
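A minimal sketch of extracting one super-pixel value from a frame, using the core/buffer/reference layout described above, follows. The circular region shapes, the specific radius parameters, and the use of a mean reference value as the stray-light estimate are illustrative assumptions rather than the specific implementation of the disclosure.

    import numpy as np

    def superpixel_value(frame, center, core_r, buffer_r, ref_r):
        """Sum core pixels and subtract a stray-light estimate from the reference ring.

        frame: 2-D image; center: (row, col) of the super-pixel; core_r, buffer_r,
        ref_r: radii (in pixels) of the core, buffer, and reference regions. Buffer
        pixels are discarded; reference pixels estimate stray light per pixel.
        """
        frame = np.asarray(frame, dtype=float)
        rows, cols = np.ogrid[:frame.shape[0], :frame.shape[1]]
        dist = np.hypot(rows - center[0], cols - center[1])
        core = dist <= core_r
        ref = (dist > buffer_r) & (dist <= ref_r)
        stray_per_pixel = frame[ref].mean()
        return frame[core].sum() - stray_per_pixel * core.sum()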
[0077] In the example of FIGS. 5A-5D, in studying the feasibility of the super-pixel detection method, the super-pixel detection method is compared to a design including avalanche photodiodes (APDs) (Hamamatsu, 3 mm dia., gain = 30) by evaluating NEP at 1 Hz. In this example, dark backgrounds were subtracted in all cases (shown in FIGS. 5A, 5B, and 5C). Based on the comparison, the super-pixel detection method provides NEP = 0.2 fW/√Hz, 100x lower than the design including APDs, a DNR = 5×10^6, and CT < 10^-6. Further, the super-pixel detection method achieves DOT frame rates greater than about 3 Hz. Yet further, the super-pixel detection method also enables improvements in wearability.
[0078] According to various example embodiments, imaging caps were developed using the super-pixel detection method for use in the acute setting.
[0079] FIGS. 6A-6B illustrate examples of an HD-DOT imaging cap 601. As shown in FIG. 6A, HD-DOT imaging cap 601 includes 24 source fibers and 28 detector fibers. FIG. 6B illustrates an example of the elastic fiber attachment of HD-DOT imaging cap 601 for facilitating lateral stability and surface normal compliance. HD-DOT imaging cap 601 may be built using super-pixel lightweight fibers to provide improvements in cap ergonomics. In HD-DOT imaging cap 601, fibers may be epoxied within right-angle aluminum tubes and are anchored to the cap with elastic straps that provide a "spring" effect to hold the fibers firmly yet comfortably against the scalp (shown in FIG. 6A). The fibers and/or tubes may protrude through the cap by about 3-5 mm, allowing the fibers to be combed through a person's hair. In this example, for a whole-head wearable cap, approximately 288 fibers have a total cross-sectional area of about 1.4 cm^2, similar to four USB cables.
[0080] FIGS. 7A-7E illustrate a second prototype HD-DOT imaging cap 701 that is a low-profile, lightweight, wearable HD-DOT imaging cap. As shown in FIGS. 7A and 7B, fibers of second prototype imaging cap 701 are guided by an anatomical computer model that optimizes the placement of the fibers and accommodates the position-dependent curvature of the head surface (which may be generated from an MRI population atlas). As shown in FIGS. 7C and 7D, a head surface is expanded to a cap 8 mm larger than the head, and converted to STL files which are printed in ABS plastic using a three-dimensional (3D) printer. As shown in FIG. 7E, the patches are integrated into a neoprene cap. Elastic fiber management from HD-DOT imaging cap 601 (shown in FIG. 6) is incorporated to optimize fiber/scalp coupling.
[0081] In designing the second HD-DOT imaging cap 701, second imaging cap 701 incorporates anatomical morphology of the head (derived from MRI data) into the cap structure itself. Using an energy minimization algorithm, the full-head grid of optode positions may be relaxed onto a computer model of a subject's head anatomy (shown in FIG. 7A) while maintaining an interlaced source and detector grid topology. The computer model is divided into 9 patches of optodes (shown in FIGS. 7B and 7C) that provide local stability of the cap and assist in fiber management. The patches were realized with a three-dimensional printer (shown in FIGS. 7D and 7E) and were attached to a neoprene cap to provide comfort and flexibility between the patches through hinge mechanisms (shown in FIG. 7E). In building the second HD-DOT imaging cap 701, the super-pixel lightweight fibers de-couple the goals of cap deformation (elastic fiber management and neoprene patch hinges) from fiber torque (less flexible ABS plastic patches).
[0082] According to various example embodiments, an HD-DOT system includes a wearable, whole-head HD-DOT for clinical based brain imaging using the super-pixel detection method described herein. In various examples, wearable HD-DOT includes an imaging cap weight of about 1 lb. Imaging fiber weight is largely determined by the area of the fiber for light collection. One of the challenges in reducing the size of a fiber is maintaining HD-DOT specifications. For example, HD-DOT specifications include low detection noise equivalent power (NEP < 20 fW/√Hz, dia = 3 mm, NEP/mm^2 ≈ 2.8 (fW/√Hz)/mm^2), high dynamic range (DNR > 10^6), low inter-measurement cross talk (CT < 10^-6), and high frame rates (FR > 3 Hz). Super-pixel detection methods enable generating a detector of about 0.4 mm diameter by summing pixels on a CCD camera. Generally, EMCCDs are attractive for DOT with many pixels, integrated cooling, electron multiply gain, A/D conversion and flexible software control. However, additional challenges exist because raw single-pixel EMCCD signals fail to meet HD-DOT specifications by greater than 100x, with DNR ≈ 10^4 and CT ≈ 10^-3.
[0083] Super-pixel detection methods help solve these challenges as shown in Table 1, below, and FIGS. 4 and 5. Detection fibers (400 / 430 / 730 μm core / cladding / coating, FT400EMT, Thorlabs) are held in an aluminum block (6x6 array) and imaged onto an EMCCD. The following numbers are for a super-pixel with a 60-pixel diameter (total about 2826 pixels, magnification = 2x). Since NEP scales as about area^(1/2), NEP per area scales as about area^(-1/2). Potentially, a super-pixel (area = 0.13 mm^2) provides NEP = 0.15 fW/√Hz and NEP per area = 1.18 (fW/√Hz)/mm^2. However, as shown in FIG. 5, simple binning and temporal summing is not sufficient. EMCCDs have a dark-field signal drift that becomes apparent when summing many frames.
[0084] Within-frame dark-field measurements and temporal modulation and/or demodulation may be used to counter signal drift. For a super-pixel (dia = 0.4 mm), the effective noise/area is reduced by about 50 and the dynamic range reaches about 5×10^6. CT is complex at multiple levels including optical focusing, electronic sources within CCD elements, EMCCD gain, and A/D conversion. A super-pixel cross-talk reduction (CTR) method was developed, leveraging the unique super-pixel reference areas (shown in FIG. 4). The bleed pattern for each super-pixel (into other super-pixels) is measured in a calibration step. During operation, scaled bleed patterns are subtracted for each super-pixel from all other super-pixels. After CTR, the CT is less than 1×10^-6 (shown in FIG. 5). With on-camera binning (8x1) the camera FR = 448 Hz. Each HD-DOT frame (4.1 Hz) includes a total of 108 images (36 position encode steps x [two wavelengths, 690 and 850 nm, plus a dark frame]). A camera-link frame grabber with an onboard field-programmable gate array (National Instruments NI PCIe-1473R) computes the super-pixels in real time.
[Table 1, referenced above, summarizes the super-pixel HD-DOT detection specifications; it appears only as an image (imgf000020_0001) in the original filing and is not reproduced here.]
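The calibration-then-subtraction cross-talk reduction step described above can be sketched as follows. This is an illustrative outline under stated assumptions, not the patent's implementation: the bleed matrix is assumed to be measured by lighting one detection fiber at a time, and the normalization and first-order subtraction shown here are assumptions.

```python
import numpy as np

# Hypothetical sketch of the cross-talk reduction (CTR) step.  B[i, j] is the
# calibrated fraction of super-pixel i's signal that bleeds into super-pixel j,
# measured with only fiber i illuminated.

def calibrate_bleed_matrix(calibration_frames):
    """calibration_frames[i]: the 36 super-pixel readings with only fiber i lit."""
    n = len(calibration_frames)
    B = np.zeros((n, n))
    for i, readings in enumerate(calibration_frames):
        B[i] = np.asarray(readings, dtype=float) / readings[i]  # normalize to lit super-pixel
        B[i, i] = 0.0                                           # self term is not cross talk
    return B

def reduce_cross_talk(raw, B):
    """First-order correction: subtract each super-pixel's scaled bleed from all others."""
    raw = np.asarray(raw, dtype=float)
    corrected = raw.copy()
    for i, value in enumerate(raw):
        corrected -= value * B[i]
    return corrected
```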
[0085] The example system includes illumination sources including laser diodes (LDs), providing about a 30x increase in peak light level (60 mW vs. 2 mW CW power) over light emitting diodes (LEDs). For example, individual LDs (670 nm RL67100G, 850 nm R85100G, Roithner-Lasertechnik) for each source position may be coupled to 200 μm fibers. On the scalp, diffusing elements provide about a 2.5 mm spot. At a 1/108 duty cycle, the single source fluence is about 0.2 mW/mm^2, well below the ANSI limit (4 mW/mm^2).

[0086] The example system also includes an electronic console including a camera, lens, and fiber coupling block occupying about 6x8x8 inches (height x width x depth). In a 10U height (19 in rack), 144 super-pixel detectors are included with about 5x compression compared to known APD-DOT (50U for 144 detectors). Illumination will use 9U. The full system is about 36x48x24 inches, including a computer (control, collection, processing).
[0087] The example system further includes an imaging cap (shown in FIGS. 6A and 7A-7E) including sub-arrays of 6x6 detectors interlaced with 6x6 sources, with 36-step time encoding of the sources. The resulting four sub-arrays run concurrently since the active sources are separated by a distance greater than 2x the longest usable source-detector pair (5th nearest-neighbor) distance. In developing the imaging cap, the design may be guided by an anatomical computer model that optimizes the placement of fibers and accommodates the position-dependent curvature of the head surface (shown in FIGS. 6 and 7). To accommodate the wide range of head sizes, the HD-DOT imaging cap may be configured in small (53±2 cm), medium (57±2 cm) and large (61±2 cm) caps. The caps may be optimized using small pilot studies.
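As a rough illustration of the time encoding just described, the sketch below builds a 36-step schedule in which the k-th source of each of the four sub-arrays is on at step k. The flat source-indexing convention is an assumption made purely for illustration.

```python
# Minimal sketch of the 36-step source time encoding.  Each sub-array holds a
# 6 x 6 grid of sources; at encode step k the k-th source of every sub-array is
# on simultaneously, which is permissible because concurrently active sources
# sit farther apart than twice the longest usable source-detector distance.

N_STEPS = 36          # 6 x 6 sources per sub-array
N_SUBARRAYS = 4

def active_sources(step):
    """Return the global indices of sources on during a given encode step."""
    return [subarray * N_STEPS + step for subarray in range(N_SUBARRAYS)]

schedule = [active_sources(k) for k in range(N_STEPS)]
# e.g., step 0 activates sources 0, 36, 72, and 108, one per sub-array.
```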
[0088] The example system may also include real-time displays for cap fit optimization. According to various aspects, HD-DOT performance depends critically on fiber/scalp coupling. To guide operator cap fit, real-time displays may be developed and used in both "measurement space" (shown in FIGS. 3C and 3D) and image space using a graphics processing unit cluster (e.g., an NVidia Tesla C2075 GPU cluster). For example, developed real-time displays may be used to provide real-time imaging within about 1 minute of cap placement.
[0089] To test the example HD-DOT system, bench top and in vivo performance tests may be conducted. Tests with the full implementation of a super-pixel DOT system may be used to confirm the system specifications (shown in Table 1). In vivo tests provide realistic cranial tissue structures and subject movement. Initial prototype testing may include longitudinal wearability testing for up to about 12-hour scan times (20 minute breaks every 4 hours, shown in FIG. 15). Functional imaging in healthy adults (N = 20) for visual, auditory, and language tasks and rest is enabled by methods similar to those shown in FIGS. 12A-12D and 13A-13F.
[0090] In further testing of the example HD-DOT system, the parameter space of the system may be analyzed to meet design goals. According to various aspects, a strength of the system is its extensive flexibility, with regard to the pixel binning, detector size, and temporal summing, which may optimize the field-of-view, dynamic range and speed. Particularly, the system may be analyzed to determine the most relevant real-time displays for cap fit. Further, the system may be analyzed to determine the relative importance of "sensor space" vs "image space" data with respect to cap fit. The analysis (and other appropriate feedback) may be used to develop real-time displays for the system.
[0091] In some example embodiments, photometric head modeling and motion denoising for high density-diffuse optical tomography (HD-DOT) may be used in at least some of the examples.
[0092] In providing wearable, whole-head HD-DOT for the acute setting using systems such as, e.g., the example HD-DOT system, the subject's head surface and the position of the imaging cap are captured for registering the DOT data set to a model of the subject's anatomy. Some known cases have demonstrated the advantage of using co-registered anatomical head modeling to improve HD-DOT localization; specifically, these cases demonstrated the use of individual subject anatomical MRI (shown in FIGS. 2, 3, and 8). However, for the acute care setting, research quality MRIs may not be available for many subjects, and head models may be generated by transforming reference (or atlas) anatomy to the subject. Computing an individual light-path model requires capturing the exterior shape of the head and the relative location of the HD-DOT imaging cap. A photometric approach is contemplated for efficiently capturing this data, where non-linear models are used to obtain about a 5 mm correspondence with fMRI. Preliminary tests of the contemplated approach show that non-linear registration may be used to obtain localization errors of less than about 3 mm for reference-anatomy versus subject-MRI head models (shown in FIGS. 9A-9E).
[0093] The next challenge is de-noising the captured data for the subject's head surface and the position of the cap from motion artifacts. Some known methods, including independent component analysis (ICA) and wavelets, have been evaluated for fMRI and fNIRS, but have yet to be established for fcDOT. HD-DOT overlapping measurements impose an inherent structure on potential fiber movement induced error signals. A method is contemplated that uses HD-DOT overlapping measurements to quantify optode coupling and provide mathematical correction of the raw signal to account for movement artifact. A study will be conducted to evaluate the contemplated method against known approaches such as ICA and wavelet approaches.
[0094] In creating DOT head modeling and spatial normalization of functional brain maps, improvements in instrumentation (shown in FIG. 2) prompted the need for advancements in (i) realistic forward light modeling for accurate HD-DOT image reconstruction and (ii) spatial normalization for voxel-wise comparisons across subjects.
[0095] FIG. 8 includes photographs illustrating example anatomical (e.g., subject MRI) reconstructions for improving HD-DOT. As shown in FIG. 8, a processing pipeline for anatomically based forward light modeling and spatial normalization was developed. A study was conducted to validate both methods in five healthy adults by direct comparison of HD-DOT vs. fMRI responses to visual stimuli (shown in FIG. 12). At the group level, the localization error of DOT relative to fMRI was about 6.1 mm. Co-registration to anatomy also enabled projection to the pial cortical surface using the Computerized Anatomical Reconstruction Toolkit (CARET), an fMRI processing tool (shown in FIG. 12).
[0096] In the acute care setting, subject MRI may be unavailable for creating anatomically accurate head models. FIGS. 9A-9E are photographs illustrating head surface driven subject-specific head modeling for creating anatomically accurate head models without subject MRI. As shown in FIGS. 9A-9E, reference anatomy is transformed to the subject head surface using linear and non-linear optimizations, and the output is subject-specific DOT.
[0097] In the example embodiment, as shown in FIGS. 9A-9E, anatomically accurate head models are created using reference anatomy (population MRI data) (shown in FIG. 9A) that is warped to each individual head surface (shown in FIG. 9B) using two consecutive fitting routines. The fitting routines include a linear registration for performing global adjustments (shown in FIG. 9C), and a non-linear registration for improving local fitting (shown in FIG. 9D). In the example embodiment, input is received including the external surface of a subject's head, the location of the optode array relative to the subject's head, and a set of anatomical fiducials used in the registration routines. The warped atlas and the co-registered optode array are used to compute an individualized forward light model that is aligned with the subject's anatomical head structure (shown in FIG. 9E). The premise of atlas-based head modeling is validated using head surfaces extracted from subject MRI scans and by comparing to DOT reconstructions using subject-specific anatomical images and to subject-matched fMRI datasets (shown in FIG. 10).
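The global (linear) stage of the two-step fitting described above can be illustrated with a least-squares affine fit of atlas fiducials to the subject's measured fiducials. This is a hedged sketch of one conventional way to perform such a fit, not the patent's registration routine; the subsequent non-linear local refinement (e.g., a spline-based warp) is omitted, and all names below are assumptions.

```python
import numpy as np

# Sketch of the linear (global) registration stage only: a least-squares affine
# transform mapping atlas fiducials onto the subject's measured fiducials
# (nasion, inion, pre-auricular points, Cz, ...).

def fit_affine(atlas_pts, subject_pts):
    """Return a 3x4 affine [A | t] minimizing ||A*atlas + t - subject||^2."""
    X = np.hstack([atlas_pts, np.ones((len(atlas_pts), 1))])   # N x 4
    params, *_ = np.linalg.lstsq(X, subject_pts, rcond=None)   # 4 x 3
    return params.T                                            # 3 x 4

def apply_affine(T, pts):
    """Apply the fitted transform to an (N, 3) array of points."""
    return pts @ T[:, :3].T + T[:, 3]

# Usage (hypothetical variable names): warp the whole atlas head surface.
# T = fit_affine(atlas_fiducials, subject_fiducials)
# warped_surface = apply_affine(T, atlas_surface_vertices)
```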
[0098] FIG. 10 is a diagram illustrating an example of atlas derived DOT visual activations on a single subject that spatially overlap with both individual MRI-based DOT reconstructions and fMRI activations (threshold at 50% maximum). In the example embodiment, the premise of atlas-based head modeling is validated using head surfaces extracted from subject MRI scans and by comparing to DOT reconstructions using subject-specific anatomical images and to subject-matched fMRI datasets.
[0099] An example modeling method described herein provides photometric head modeling for HD-DOT. Further, an example de-noising method is provided for motion de-noising for HD-DOT.
[0100] For the acute care setting, it may be preferable to transform a reference head structure (e.g., atlas) to the subject's head surface shape, as a subject MRI is not always available. In the example modeling method (shown in FIGS. 9A-9E and 10), a set of anatomical fiducials measured on subject head surfaces are used to transform the reference head structure to the subject's head surface shape. Anatomical landmarks based on the 10/20 international system (including nasion, inion, pre-auricular points and Cz) are used for fiducials (shown in FIG. 11 with red dots). Moreover, selected optodes of the imaging array (shown in FIG. 11 with blue dots) are measured. In some known systems, fiducials are measured with an RF 3D digitizer (FastTrack, Polhemus USA). In the example modeling method, a photometric scanner (HandyScan 3D, Creaform) is used, providing improved speed over those known systems. The photometric scanner can retrieve both the 3D coordinates of optodes and the surface of the subject's head. To facilitate locating positions, reflective targets are placed onto the optode locations and anatomical fiducials. As shown in FIG. 11, feasibility of head surface capture is shown using a Kinect (Microsoft) camera. As shown in FIGS. 9A-9E and 10, the feasibility of using a two-step, linear then non-linear, transform method is established using surfaces extracted from MRI. In some cases, the example modeling method includes FEM head modeling for light propagation and inversion.
[0101] The accuracy of the example modeling method will be validated in control subjects with MRI. Performance will be evaluated in the physical space of the fibers (prior to image inversion) and also in image space with task responses at the subject and group level. Photometric capture will be evaluated against an RF 3D pen and physical rulers. It is contemplated that locational accuracy will be better than 1 mm. Further, it is contemplated that evaluations of functional response errors will follow some known methods. Yet further, it is contemplated that expected localization errors for atlas-derived versus subject-MRI based head models will be less than about 2 mm (shown in FIG. 10), and between HD-DOT and fMRI will be less than about 6 mm. A study of healthy subjects (N = 15) may optimize and validate (vs. MRI) the example modeling method.
[0102] Referring to the contemplated example de-noising method, frequently, data from clinical populations are contaminated with noise from movement artifacts. Effective noise suppression is needed so that large amounts of potentially useful data are not discarded. The example de-noising method includes a coupling coefficient (CC) motion noise removal method that leverages spatial structure in DOT data.
[0103] In principle, motion noise is specific to individual fibers (e.g., a head turn will press or pull optodes to/from the head). Motion changes the transmission to/from individual fibers and is a multiplicative noise factor. A wearable HD-DOT system may have about 3000 SD-pair measurements, yet only involve 288 fibers. In the example de-noising method, coupling coefficient errors are evaluated for baseline DOT reconstructions and the technique is extended to time-variant data and coupling coefficients (the method transfers directly). An estimate of the coupling coefficients is calculated as the mean of the first nearest neighbor measurements for each source and detector. Time-variant coupling coefficients are modeled as: Icor(t) = [Cs0/Cs(t)] * [Cd0/Cd(t)] * I(t), where I(t) is a single SD-pair intensity, Icor(t) is the corrected intensity, Cs(t) is the source coupling coefficient, Cd(t) is the detector coupling coefficient, and Cs0 and Cd0 are the temporal means of Cs(t) and Cd(t), respectively.
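The following is a minimal sketch of the coupling-coefficient correction above, assuming intensity data stored as a NumPy array of shape (n_pairs, n_time) and precomputed lookup lists mapping each source or detector to the rows of its first-nearest-neighbor measurements; both layouts are assumptions made for illustration, not the patent's data structures.

```python
import numpy as np

# Sketch of the coupling-coefficient (CC) motion correction described above.

def coupling_coefficient(intensity, nn1_rows):
    """Estimate an optode coupling coefficient as the mean of its 1st-NN pair intensities."""
    return intensity[nn1_rows].mean(axis=0)          # shape: (n_time,)

def correct_pair(I, Cs, Cd):
    """Apply Icor(t) = [Cs0/Cs(t)] * [Cd0/Cd(t)] * I(t)."""
    return (Cs.mean() / Cs) * (Cd.mean() / Cd) * I

# Usage for one source-detector pair p with source s and detector d
# (nn1_pairs_for_source / nn1_pairs_for_detector are hypothetical lookup lists):
# Cs = coupling_coefficient(intensity, nn1_pairs_for_source[s])
# Cd = coupling_coefficient(intensity, nn1_pairs_for_detector[d])
# I_cor = correct_pair(intensity[p], Cs, Cd)
```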
[0104] In some cases, the example de-noising method may include noise removal methods from fMRI and fNIRS. In particular, the example de-noising method may include one of four methods having shown promise for motion artifact removal: (i) independent component analysis (ICA); (ii) wavelet analysis; (iii) "scrubbing" data (cropping corrupted segments); and (iv) polynomial spline interpolation. Work with blind source separation of HD-DOT data suggests that ICA will also aid in noise identification and reduction.
[0105] The example de-noising method (and other noise reduction methods) will be evaluated in normal subjects (N = 15). Task and resting state data will be collected with specific head motions (front-to-back, side-to-side, and twisting) programmed into an event design. Wireless accelerometers (G-link-LXRS, MicroStrain) will be used to measure head motion. The method will be assessed using four metrics: (a) the percent suppression of known movement artifact features; (b) the CNR of activation data; (c) test-retest reliability of resting state fc; and (d) the strength of short- vs long-distance connections. Based on known work on superficial-signal regression, expected improvements in data quality are the most dramatic when pre-correction CNR = 3±2. Further, expected test-retest values are similar to, or better than, some known fcDOT (standard dev. of r < 0.2 for homotopic connections).
[0106] Development of the methods for photometric head modeling and de-noising may benefit from feedback and iteration with the development of wearable, whole-head HD-DOT and the development of functional connectivity metrics, described later in detail. For example, instrument development may suggest approaches and/or demands for cap registration and photometry. Similarly, the success or challenges in the noise reduction methods may suggest refinements and/or alternatives to cap design. In one study, while the HandyScan 3D has accuracy specifications sufficient to meet a 1 mm goal, it is also possible that a Kinect camera (with KinectFusion software) has sufficient resolution (shown in FIG. 11). The Kinect has the advantage of lower cost and may be more easily disseminated.
[0107] Studies were conducted to assess functional connectivity in healthy controls and chronic stroke, and to assess longitudinal functional connectivity in the acute care setting. While the low-frequency fluctuations in cerebral hemodynamics were detected by MRS and reported in 2000, the spatial evaluation of the temporal cross-correlation was not explored until more recently. In a known case in 2008, an example of functional connectivity mapping using optical techniques was developed showing the feasibility of fcDOT in adult humans. In the known case, a large field-of-view (FOV) system was developed to make the first maps of distributed brain networks with fcDOT and the results were validated by comparison against fcMRI.
[0108] In assessing functional connectivity in healthy controls and chronic stroke, fcDOT methods are developed for evaluating how similar (or dissimilar) a single subject is in comparison to a population average. Specifically, fcDOT analyses are developed by compressing the full fc-matrix (voxel-by-voxel) down to images that assign an fc-metric to each voxel in the brain (shown in FIG. 14). For example, previous cases have found bilateral homotopic connectivity maps useful in mouse studies of Alzheimer's disease. Other potentially useful fc-indices include similarity and asymmetry measures. To test the fcDOT methods, normative data sets are acquired by studying healthy subjects within an anticipated age range (60-80 years). To establish the sensitivity of fcDOT to brain injury, it will be important to validate in a population with a wide dynamic range in deficits. Chronic stroke patients have a large spectrum of temporally stable neurological deficits and thus provide the ideal patient population to evaluate fcDOT as a surrogate of neurological behavior exams.
[0109] In assessing longitudinal functional connectivity in the acute care setting, fcDOT is established in acute stroke. Bedside fcDOT (or fcDOT in the acute care setting) will enable longitudinal monitoring of functional connectivity. The wearability of fcDOT technology enables longitudinal bedside functional mapping of brain integrity during the post-stroke acute time window (12-72 hours) in the intensive care unit (ICU). In validating bedside fcDOT in the ICU, fcDOT is compared to serial behavioral exams (e.g., the NIHSS). In some embodiments, the benefit of fcDOT as a brain monitoring imaging method is demonstrated in extended 12-hour scanning. Such time windows may be difficult or impossible with fcMRI.
[0110] In establishing fcDOT methods for mapping brain function in humans, some previous HD-DOT experiments had been limited to visual or motor task paradigms. To test HD-DOT imaging of distributed, multiple-order brain functions, a study was conducted following a known PET study and used a hierarchy of tasks to break down language into sensory (visual and auditory), articulatory (speaking), and semantic (higher order cognitive) processes (shown in FIGS. 12A-12D).

[0111] FIGS. 12A-12D are diagrams illustrating an example of a validation of fcDOT versus fMRI mapping of brain function using a plurality of language paradigms. FIG. 12A illustrates validating fcDOT versus fMRI in an example of using hearing words versus other words. FIG. 12B illustrates validating fcDOT versus fMRI using reading words versus other words. FIG. 12C illustrates validating fcDOT versus fMRI using imagined speaking versus reading. FIG. 12D illustrates validating fcDOT versus fMRI using covert verb generation versus imagined speaking.
[0112] In the examples of FIGS. 12A-12D, the contrast-to-noise-ratio (CNR, expressed as the max t-value associated with each color bar) for HD-DOT across subjects was within a factor of 2 of the fMRI CNR, suggesting that HD-DOT (within its FOV) has similar reproducibility to that of fMRI. A goal in extending the FOV was to image distributed resting state networks (RSNs) (shown in FIG. 13).
[0113] FIGS. 13A-13F are diagrams illustrating seed-based correlation maps obtained in normal volunteers for three sensory-motor and three cognitive networks, where the anatomical location of each seed is shown as a black dot. As shown in FIGS. 13A-13F, resting state functional connectivity of distributed networks provides a sensitive marker of neurological dysfunction. In particular, distributed RSNs may include the dorsal attention (DAN), fronto-parietal control (FPC) and default mode (DMN) networks. These fcDOT RSNs exhibit topographies similar to those obtained non-concurrently with fcMRI (FPC, DAN, DMN).
[0114] In assessing functional connectivity, a study was conducted to test the feasibility of a clinical HD-DOT system with limited FOV. FIGS. 14A-14I are diagrams illustrating examples of the feasibility of a clinical HD-DOT system with limited FOV. FIG. 14A illustrates fcDOT in the ICU on a patient recovering from an acute stroke. FIGS. 14B and 14C illustrate CT and fcDOT for a healthy subject. FIGS. 14D and 14E illustrate CT and fcDOT for a moderate stroke subject. FIGS. 14F and 14G illustrate CT and fcDOT for a severe stroke subject. As shown in FIGS. 14B, 14D, and 14F, the infarcts are represented by binary masks. The alterations in fc patterns measured via seed-voxel maps are correlated with the severity of stroke injury. As shown in FIG. 14H, an asymmetry index, a within-subject measure, quantifies how different the maps are on opposite sides of the head and shows strong correlation to the NIHSS across 6 subjects. As shown in FIG. 14I, a similarity index, a between-subject measure, quantifies how similar any two fc maps are and also shows a strong correlation to the NIHSS.
[0115] In the example embodiment, fcDOT is evaluated by validating fcDOT against fcMRI and neurocognitive testing in both a normal population and chronic stroke. More particularly, fcDOT is evaluated using fc-metrics that comprehensively evaluate the connection patterns including an asymmetry index and a similarity index. These metrics will also be used to compare fcDOT and fcMRI. A limited FOV HD-DOT system was developed to test the feasibility of imaging populations in the neonatal ICU and in the adult ICU at the bedside of patients recovering from stroke (shown in FIGS. 14A-14I).
[0116] As shown in FIGS. 14C, 14E, and 14G, feasibility data in adult stroke patients with seeds placed in the temporal lobe display fcDOT maps whose disruption is significantly correlated with the volume of the infarct (R^2 = 0.87, p = 0.02; N = 5). The fc maps can be assessed for asymmetry between hemispheres within a subject and for similarity to healthy templates. Asymmetry is calculated as the percentage difference in the number of voxels within a seed-based fc map between the left and right hemisphere. A comparison across 6 stroke subjects shows a correlation between the NIH Stroke Scale and asymmetry. As shown in FIG. 14H, subjects with worse (i.e., higher) NIHSS scores demonstrate greater asymmetry (R^2 = 0.80, p = 0.015, uncorrected). In the example embodiment, fcDOT similarity was evaluated by calculating the spatial correlation between seed-voxel maps for a normal subject and each stroke subject. As shown in FIG. 14I, across the same 6 stroke subjects, the similarity index decreased as the NIHSS increased (R^2 = 0.95, p = 0.0008, uncorrected).
[01 17] In assessing functional connectivity, a study was conducted for continuous fcDOT in eight hour longitudinal scans in healthy subjects.
[0118] FIG. 15 illustrates longitudinal fcDOT maps taken during a period of 7 hours. In determining the feasibility of imaging longitudinally, a group of 4 healthy subjects were scanned twice, each for 8 hours continuously. The first goal was to ascertain the wearability of the imaging cap. The subjects' wear times were as follows: Subject 1, 7 h 47 min / 7 h 21 min; Subject 2, 9 h 18 min / 9 h 11 min; Subject 3, 8 h 10 min / 8 h 40 min; Subject 4, 8 h 13 min / 8 h 39 min. The four subjects were able to wear the cap for 8 hours (±30 min) without reporting any increased discomfort from the cap. This confirmed anecdotal evidence from shorter 2-hour scans that any discomfort is evident during the first 30 minutes of scanning. When the initial cap fit is sufficiently optimized, the cap is wearable long-term. As shown in FIG. 15, preliminary image analysis shows promising stability of fcDOT across hours.
[0119] In further assessing functional connectivity, a study was conducted to develop fcDOT metrics for evaluating brain injury. To establish fcDOT sensitivity to brain injury, a clinical population is sought with a wide dynamic range of functional deficits, stable injury and the potential for comparisons to fcMRI and behavior assays. Chronic stroke subjects fit these requirements.
[0120] In the study, inclusion criteria for healthy subjects (n = 32) include: 1) age 50-80 years; 2) no history of neurological disorders; and 3) balanced for gender. Exclusion criteria include: HD-DOT headset discomfort or any MRI contraindications.
[0121] In the study, inclusion criteria for stroke subjects (n = 48) include: 1) age 50-80 years and able to obtain informed consent from patient or patient's representative; 2) ischemic stroke (with or without thrombolytic therapy); 3) first time stroke; 4) patients are selected to stratify across a range of severities, NIHSS = 5 to 25; and 5) time after stroke greater than 12 months. Exclusion criteria include: 1) non-stroke diagnosis; 2) intracerebral hemorrhage; 3) DOT cap discomfort; and 4) MRI contraindications.
[0122] Healthy subjects are imaged on two days with two sessions each day, including a total of one fcMRI session and three fcDOT sessions (in random order). Stroke subjects are also brought in for two days, one day fcMRI and fcDOT, the other day fcDOT and behavior testing (in random order). Both days are within two weeks.
[0123] During each session, subjects are scanned (fcDOT) for 1.5 hours using (i) 30 min supine resting state, (ii) 30 min supine mixture of auditory stimuli (words) and visual stimuli (flickering checkerboards), and (iii) 30 min sitting 30° head-of-bed elevation.
[0124] For each subject, one 60-minute supine fMRI scan is obtained with (i) 30 minutes of resting state and (ii) 30 minutes of auditory and visual stimuli for validation of the fcDOT maps. fcMRI data are collected by similar means to those shown in FIGS. 12A-12D and 13A-13F.
[0125] In evaluating behavior of stroke subjects, neurobehavioral assessments are conducted by a psychometrician blinded to the imaging results to comprehensively assess cognitive and motor deficits. Multiple cognitive domains are evaluated (e.g., language, memory, attention, and motor function) using the following tests: for spatial attention, a computerized Posner Task, recording reaction times (RTs) and accuracy; for motor, active range of motion at the wrist, grip strength, performance on the Action Research Arm Test (ARAT), speed on the Nine Hole Peg Test (NHPT), in pegs/second, gait speed, and Functional Independence Measurement (FIM) walk item; for attention, a Posner task, Mesulam symbol cancellation test, and Behavioral inattention test (BIT) star cancellation test; for memory, the Hopkins verbal learning test (HVLT) and brief visuospatial memory test (BVMT); for language, word comprehension, Boston Naming Test, oral reading of sentences, stem completion, and animal naming.

[0126] In the study, the fc-metrics computed for both fcDOT and fcMRI include seed-voxel maps, homotopic-fc, asymmetry-fc, and similarity-fc. Seed-voxel maps are computed using a subset of seeds from the fcMRI literature (within DOT FOV). Homotopic-fc is computed by constructing an interhemispheric homotopic index using every voxel in a hemisphere as a seed. In some embodiments, the homotopic connectivity metric strongly correlates with ischemic deficit. Asymmetry-fc is computed, for a given fc-map, by applying a threshold to binarize the fc map (e.g., r = 0.5). The asymmetry index equals the normalized difference in the number of voxels above threshold between the hemispheres (shown in FIGS. 14A-14I). Similarity-fc is computed using a similarity index calculated for each voxel (seed), and measuring the spatial correlation between any two given fc maps (e.g., group-vs-group and subject-vs-group) (shown in FIGS. 14A-14I).
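The asymmetry and similarity indices described above lend themselves to a compact sketch. The code below is an illustrative reading of those definitions, assuming seed-based fc maps stored as NumPy arrays and hemisphere masks supplied by the head model; the r = 0.5 threshold follows the example in the text, and everything else is an assumption rather than the patent's implementation.

```python
import numpy as np

# Illustrative fc-metric sketches: asymmetry (within-subject) and similarity
# (between-subject/group) indices for seed-based functional connectivity maps.

def asymmetry_index(fc_map, left_mask, right_mask, threshold=0.5):
    """Normalized difference in supra-threshold voxel counts between hemispheres."""
    n_left = np.count_nonzero(fc_map[left_mask] > threshold)
    n_right = np.count_nonzero(fc_map[right_mask] > threshold)
    return (n_left - n_right) / float(n_left + n_right)

def similarity_index(fc_map_a, fc_map_b):
    """Spatial (Pearson) correlation between two fc maps over shared voxels."""
    a, b = fc_map_a.ravel(), fc_map_b.ravel()
    return np.corrcoef(a, b)[0, 1]

# Usage (hypothetical variables): compare a stroke subject's seed map to a
# healthy group-average map, then relate the index to NIHSS across subjects.
# s = similarity_index(subject_seed_map, group_seed_map)
```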
[0127] The performance of fcDOT in normal subjects is compared to fcMRI by validating head models, validating fcDOT against fcMRI, and comparing head-of-bed elevation. In validating head models, the reference head model is validated in the older controls. With the auditory and visual functional localizers, expected localization errors are 2 mm to 5 mm. Further, the fc-metrics are evaluated for the different head models at the subject and group level.
[0128] In validating fcDOT against fcMRI, fcDOT metrics are established through comparison to fcMRI and test-retest. The fcDOT data are validated through comparisons between fcDOT and fcMRI at both the single subject and group level for the fc-metrics. In testing the reliability of each fcDOT metric, intra-class correlation coefficients (ICC) are computed for inter-session and intra-session comparisons.
[0129] In comparing head-of-bed elevation, the difference of fcDOT between supine and sitting 30° head-of-bed elevation is evaluated using means similar to validation of fcDOT against fcMRI, described previously. While there is no precedent from fcMRI, expected differences are relatively small, though likely detectable. In some embodiments, comparing head-of-bed elevation provides the control data for comparing the performance of fcDOT in chronic stroke to behavior.
[0130] In other cases, the performance of fcDOT in chronic stroke is compared to behavior. Hypothetically, fcDOT patterns in stroke patients differ from those of healthy age matched controls (shown in FIG. 10). The analysis for the performance of fcDOT in normal subjects is repeated for the stroke subject data. In addition, the behavioral metrics are compared against fc-metrics using logistic regression analysis to test whether behavior abnormalities in stroke are associated with fcDOT measures of dysfunction. Initial analysis may use global integrated neurological behavior measures (e.g., NIHSS) and global integrated measures of fcDOT (e.g., brain average of (dis)similarity metric). A secondary analysis may evaluate more specific functional relationships between the behavioral domains and sub-network fcDOT metrics (e.g., the average within subnetwork strengths of somatomotor, attention visual and default mode sub-networks, shown in FIGS. 12 and 13). From known studies, it is anticipated that the somatomotor and default networks correlate with somatomotor behavior function. A linear mixed-model analysis is used on both behavioral and fc-indices. Control for the multiple comparisons follows known statistical analysis of HD-DOT and MRI, and uses a cluster analysis in conjunction with a random field noise model that incorporates measures of the local temporal and spatial correlations.
[0131] In other studies, other metrics for evaluating brain injury are developed. The field of fc-network analysis is rapidly advancing, and alternatives to the proposed fc-index may arise. For example, a method was developed for parcellating functional architecture. Other metrics may include measures derived from graph theory, e.g., node degree, community assignments, participation coefficient and betweenness centrality, cortical hubs and small-world connectedness, and dual regression. For example, regarding the proposed head model, a reference head may be used that incorporates anatomical aging and the shrinkage of the brain with age, built, for example, from a set of previous fcMRI stroke subjects.
[0132] In yet further assessing functional connectivity, an example study was conducted to test the feasibility of longitudinal fcDOT in the ICU. Acute stroke subjects test fcDOT in an acute disease in a subject population with a wide dynamic range of functional deficits and significant changes over time (hours/days). Behavioral dysfunction ranges from a complete recovery to death. Temporally, following ischemic stroke, neurological status can be highly unstable. While fcDOT may eventually provide a more quantitative and continuous assay than current neurological exams, in the example embodiment, the NIH stroke scale (NIHSS) is used to evaluate fcDOT.
[0133] In some cases, in the study, a first method includes N = 32 subjects, but with moderate 4-hour scans, during the first three days following stroke. In other cases, a second method may include N = 10 subjects, but pilots the feasibility of extended longitudinal fcDOT imaging for up to 12 hours.
[0134] In the study, for the first method, inclusion criteria include: 1) age 50-80 years and able to obtain informed consent from patient or patient's representative; 2) ischemic stroke (with or without thrombolytic therapy); 3) first time stroke; 4) patients will be selected to stratify across a range of severities, NIHSS = 5 to 25; 5) first HD-DOT session within 12 hours of stroke onset. Exclusion criteria include: 1) non-stroke diagnosis; 2) intracerebral hemorrhage on recruitment; 3) HD-DOT headset discomfort. [0135] In the study, for the second method, inclusion criteria include those for the first method, and also include patients who are under orders of 24 hour bed rest (e.g., all patients receiving thrombolytics or severe strokes), and excluding patients with significant aphasia (inability to communicate).
[0136] All stroke patients are evaluated in the Emergency Department (ED) by neurological examination, head CT, and standard laboratory tests. Following possible intravenous tissue plasminogen activator (IV tPA) infusion or mechanical (Solitaire stentriever) thrombolysis, patients are admitted to the Neurological-Neurosurgical ICU for post-treatment monitoring. The NIHSS and Glasgow Coma Scale (GCS, a 6-point clinical scale of arousal) is obtained every 2-4 hours as part of standard patient care.
[0137] In the study, for the first method, subjects (n = 32) are imaged within 24 hours of stroke onset with two additional scans, once a day, obtained on subsequent hospital days (1-3). Using the DOT procedures previously described herein, scans last for 4 hours so that each imaging session spans either two or three NIHSS assessments. The reliability of fcDOT measures as indicators of stroke induced neurocognitive deficits is also evaluated.
[0138] In comparing fcDOT and NIHSS, it is hypothesized that metrics of fc disruption correlate with NIHSS. The fc metrics are paired with the concurrent NIHSS across all patients and time points (32 patients x 3 imaging sessions x 2 NIHSS time points = 192 comparisons) to quantify the degree of correlation.
[0139] In detecting change in status over time, whereas the previous analysis groups all the data together (ignores timing), the first method of the example study tests if fcDOT can detect changes in neurological status over time. Patient improvement may follow reperfusion; deterioration may occur due to a number of causes including hemorrhage or cerebral edema. In patients with deteriorating neurologic status, fcDOT measures may degrade in parallel. This analysis leverages the multiple time epochs acquired within each subject.
[0140] In the study, for the second method, the full benefit of fcDOT as a brain monitoring imaging method is demonstrated in extended 12+ hour scanning. Two small-scale studies are performed: the first (n = 5) scans for 8 hours, and the second (n = 5) scans for 12 hours. The study is restricted to subjects that can communicate so that the imaging cap may be removed if needed. Data analysis includes linear regression to NIHSS over time.
[0141] In some cases, fcDOT sensitivity may be established as an imaging biomarker for longitudinal monitoring of neurological status, e.g., to validate fcDOT in relation to NIHSS. For example, a study may compare fcDOT to CT. CT is used to define the spatial location and extent of infarct.
[0142] In other cases, fcDOT may have application in the ischemic stroke population. For example, fcDOT metrics may herald impending herniation. Cytotoxic cerebral edema usually occurs within days after stroke onset, and is manifested by neurological deterioration and decline in level of arousal. Fc metrics may be able to detect early signs of edema, e.g., by correlating the disruption of contra-lesional local-fc with the degree of edema as measured by midline shift (mm) from CT. Further, fcDOT may predict future functional outcome, since recent data suggests that bilateral homotopic fc is predictive of longer term outcome. Yet further, when sensitivity is high, further clinical studies to assess fcDOT utility in clinical decision-making (interventional rescue therapy in patients with "failed" IV tPA, or early craniectomy in patients with impending cerebral edema and midline shift) may be pursued.
[0143] FIG. 16 is a diagram illustrating an example of a super-pixel detection method for measuring brain activity.
[0144] Referring to FIG. 16, method 1600 includes receiving a plurality of signals from a plurality of fibers detecting an image of a head of a user, in 1610. For example, the plurality of fibers may include source fibers for emitting light towards the head of the user and detector fibers for detecting light that is incident from the head of the user. Furthermore, the plurality of fibers may be included in a fiber array. A first end of the fiber array may be attached to an imaging cap that is worn on a head of the user. The other end of the fiber array may be attached to an electronic console to measure signals detected from the imaging cap while worn on the head of the user.
[0145] The method 1600 further includes performing super-pixel detection on the image signals received from the plurality of fibers, in 1620. For example, rather than all pixels returning individual imaging values, a detector may be divided into super-pixels. Each super-pixel may include a plurality of pixels, for example, 25 x 25 pixels, 40 x 40 pixels, 60 x 60 pixels, 85 x 85 pixels, and the like. Each super-pixel may include a core that is configured to sense light from the fibers included in the fiber array. Pixel values of pixels included in the core may be summed. Also, the core may be of any desired shape, for example, circular, square, elliptical, and the like. A buffer region may surround the core of each super-pixel. In the buffer region, light may decay, thus preventing cross-talk between the super-pixels. Each super-pixel may further include a reference region that surrounds the buffer region. The reference region may be used to detect stray light.
[0146] The method 1600 further includes generating HD-DOT image data based on the super-pixel detected image signals, in 1630. A detector may convert incident light into electron charges to generate an electric signal that may be processed and may be used to construct, for example, HD-DOT images of the patient or the patient's brain. In the example embodiment, the detector may include an electron multiply charge-coupled device (EMCCD) having a plurality of pixels defined on a surface of the detector. During operation, detector fibers transport light (i.e., scattered light received by the detectors) between an imaging cap and an electronic console. The received light may be focused onto the detector by a lens, and the light incident on the detector may be converted into an electric signal including HD-DOT image data.
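As one hedged illustration of step 1630, the sketch below turns super-pixel intensities into differential measurements and inverts them through a regularized sensitivity matrix. This log-ratio/Tikhonov approach is a common DOT reconstruction strategy rather than the specific method of the disclosure, and the array shapes, the sensitivity matrix A, and the regularization weight are assumptions.

```python
import numpy as np

# Common DOT reconstruction sketch (not necessarily the patent's method):
# differential measurements as log-ratios against a temporal baseline, inverted
# through a forward-model sensitivity matrix A with Tikhonov regularization.

def differential_measurements(intensity, baseline):
    """y(t) = -ln[I(t) / I_baseline] for every source-detector pair."""
    return -np.log(intensity / baseline[:, None])

def reconstruct(A, y, reg=0.01):
    """Minimum-norm inversion: x = A^T (A A^T + alpha*I)^-1 y, alpha scaled to A A^T."""
    AAt = A @ A.T
    alpha = reg * np.trace(AAt) / len(AAt)
    return A.T @ np.linalg.solve(AAt + alpha * np.eye(len(AAt)), y)

# intensity: (n_pairs, n_time) super-pixel measurements; baseline: (n_pairs,)
# A: (n_pairs, n_voxels) sensitivity matrix  ->  x: (n_voxels, n_time) image series
```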
[0147] The method 1600 further includes outputting the generated HD-DOT image data, in 1640. For example, the HD-DOT image data may be displayed on a screen that is electrically connected to the electronic console.
[0148] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.

Claims

WHAT IS CLAIMED IS:
1. An electronic console for super-pixel detection and analysis, the electronic console comprising:
a fiber array including a plurality of fibers configured to transport resultant light detected by a head apparatus worn by a subject;
a detector coupled to the fiber array to detect resultant light from the plurality of fibers, the detector including a plurality of super-pixels each defined by a plurality of pixels of an array of pixels, each super-pixel associated with a fiber of the plurality of fibers, each super-pixel configured to generate a plurality of detection signals in response to detected resultant light from its associated fiber;
a computing device coupled to the detector to receive the plurality of detection signals from each of the plurality of super-pixels, the computing device configured to generate a high density-diffuse optical tomography (HD-DOT) image signal of the brain activity of the subject based on the plurality of detection signals from each of the plurality of super-pixels; and a display configured to display the HD-DOT image signal of the brain activity of the subject.
2. The electronic console of claim 1, wherein the computing device further comprises a head modeling module configured to generate a photometric head model of the subject, wherein the computing device is configured to generate the HD-DOT image signal of the brain activity of the subject based at least in part on the generated photometric head model of the subject.
3. The electronic console of claim 1, wherein the computing device further comprises a de-noising module configured to remove noise from the plurality of detection signals, wherein the computing device is configured to generate the HD-DOT image signal of the brain activity of the subject based at least in part on the detection signals with the noise removed.
4. The electronic console of claim 1, wherein the detector comprises an electron multiply charge-coupled device (EMCCD) or a complementary metal-oxide-semiconductor (CMOS) sensor.
5. The electronic console of claim 1, wherein each super-pixel is defined to include a core area including a plurality of core pixels, a buffer area including a plurality of buffer pixels surrounding the core, and a reference area including a plurality of reference pixels surrounding the buffer.
6. The electronic console of claim 5, wherein the plurality of detection signals include core signals generated by the plurality of core pixels, buffer signals generated by the plurality of buffer pixels, and reference signals generated by the plurality of reference pixels.
7. The electronic console of claim 6, wherein the computing device is configured to generate the HD-DOT image signal based on the core signals, configured to discard the buffer signals, and configured to calculate stray noise based at least in part on the reference signals.
8. The electronic console of claim 1, further comprising a fiber array holder configured to hold the plurality of fibers in a shape that corresponds to a shape of the detector to direct the resultant light of each fiber at a different super-pixel, and a lens configured to focus the resultant light from the plurality of fibers onto the detector.
9. The electronic console of claim 1, wherein the plurality of fibers comprise a plurality of source fibers configured to transport light to the head apparatus and a plurality of detector fibers configured to transport resultant light detected by the head apparatus.
10. A system comprising:
a wearable head apparatus configured to be worn on a head of a subject, the head apparatus configured to direct light at the head of the subject and receive resultant light from the head of the subject in response to the light directed at the head of the subject; and
an electronic console comprising:
a fiber array including a plurality of fibers configured to transport light to the head apparatus worn by a subject and transport resultant light received by the head apparatus;
a detector coupled to the fiber array to detect the resultant light from the plurality of fibers, the detector including a plurality of super-pixels each defined by a plurality of pixels of an array of pixels, each super-pixel associated with a fiber of the plurality of fibers, each super-pixel configured to generate a plurality of detection signals in response to detected resultant light from its associated fiber; and
a computing device coupled to the detector to receive the plurality of detection signals from each of the plurality of super-pixels, the computing device configured to generate a high density-diffuse optical tomography (HD-DOT) image signal of the brain activity of the subject based on the plurality of detection signals from each of the plurality of super-pixels.
11. The system of claim 10, wherein the wearable head apparatus weighs between one half pound and one and one half pounds.
12. The system of claim 10, wherein the detector comprises an electron multiply charge-coupled device (EMCCD) or a complementary metal-oxide-semiconductor (CMOS) sensor that is configured to detect the resultant light from the plurality of fibers.
13. The system of claim 10, wherein each super-pixel is defined to include a core area including a plurality of core pixels, a buffer area including a plurality of buffer pixels surrounding the core, and a reference area including a plurality of reference pixels surrounding the buffer, wherein the plurality of detection signals include core signals generated by the plurality of core pixels, buffer signals generated by the plurality of buffer pixels, and reference signals generated by the plurality of reference pixels.
14. The system of claim 13, wherein the computing device is configured to generate the HD-DOT image signal based on the core signals, configured to discard the buffer signals, and configured to calculate stray noise based at least in part on the reference signals.
15. The system of claim 10, wherein the plurality of fibers comprise a plurality of source fibers configured to transport light to the wearable head apparatus worn by the subject and a plurality of detector fibers configured to transport resultant light detected by the wearable head apparatus.
16. A computer-implemented method for performing super-pixel detection using a detector that includes a plurality of super-pixels each defined by a plurality of pixels of an array of pixels, said method implemented by a computing device in communication with a memory, the method comprising:
receiving, by the computing device, a plurality of detection signals from the array of pixels;
associating, for each super-pixel, a subset of the plurality of detection signals with the super-pixel that generated the detection signals in the subset;
generating a high density-diffuse optical tomography (HD-DOT) image signal of the brain activity of the subject based at least in part on the subsets of the plurality of detection signals associated with the plurality of super-pixels; and
outputting the generated HD-DOT image signal.
17. The computer-implemented method of claim 16, further comprising generating a photometric head model of the subject, wherein generating the HD-DOT image signal of the brain activity of the subject is based at least in part on the photometric head model of the subject.
18. The computer-implemented method of claim 16, further comprising removing noise from the received plurality of detection signals.
19. The computer-implemented method of claim 16, wherein each super-pixel includes a core area including a plurality of core pixels, a buffer area including a plurality of buffer pixels surrounding the core, and a reference area including a plurality of reference pixels surrounding the buffer, wherein the plurality of detection signals include detection signals generated by the plurality of core pixels, detection signals generated by the plurality of buffer pixels, and detection signals generated by the plurality of reference pixels.
20. The computer-implemented method of claim 19, further comprising discarding the buffer signals, calculating stray noise based at least in part on the reference signals, and wherein the high density-diffuse optical tomography (HD-DOT) image signal of the brain activity of the subject is generated based on the core signals.
PCT/US2015/056014 2014-10-17 2015-10-16 Super-pixel detection for wearable diffuse optical tomography WO2016061502A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/519,350 US10786156B2 (en) 2014-10-17 2015-10-16 Super-pixel detection for wearable diffuse optical tomography
US16/947,829 US20200375465A1 (en) 2014-10-17 2020-08-19 Super-pixel detection for wearable diffuse optical tomography

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462065337P 2014-10-17 2014-10-17
US62/065,337 2014-10-17

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US15/519,350 A-371-Of-International US10786156B2 (en) 2014-10-17 2015-10-16 Super-pixel detection for wearable diffuse optical tomography
US16/947,829 Continuation US20200375465A1 (en) 2014-10-17 2020-08-19 Super-pixel detection for wearable diffuse optical tomography

Publications (1)

Publication Number Publication Date
WO2016061502A1 true WO2016061502A1 (en) 2016-04-21

Family

ID=55747427

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/056014 WO2016061502A1 (en) 2014-10-17 2015-10-16 Super-pixel detection for wearable diffuse optical tomography

Country Status (2)

Country Link
US (2) US10786156B2 (en)
WO (1) WO2016061502A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110638477A (en) * 2018-06-26 2020-01-03 佳能医疗系统株式会社 Medical image diagnosis apparatus and alignment method
JP2021037348A (en) * 2020-11-26 2021-03-11 公立大学法人 富山県立大学 Cap for acquiring brain information, and production method of cap for acquiring brain information
US20220065964A1 (en) * 2020-08-26 2022-03-03 Canon Medical Systems Corporation Magnetic resonance imaging apparatus, magnetic resonance imaging method, and computer program product
US11864865B2 (en) 2018-02-26 2024-01-09 Washington University Small form factor detector module for high density diffuse optical tomography
CN117357132A (en) * 2023-12-06 2024-01-09 之江实验室 Task execution method and device based on multi-layer brain network node participation coefficient

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10786156B2 (en) * 2014-10-17 2020-09-29 Washington University Super-pixel detection for wearable diffuse optical tomography
US10839712B2 (en) * 2016-09-09 2020-11-17 International Business Machines Corporation Monitoring learning performance using neurofeedback
US9730649B1 (en) 2016-09-13 2017-08-15 Open Water Internet Inc. Optical imaging of diffuse medium
US10778912B2 (en) 2018-03-31 2020-09-15 Open Water Internet Inc. System and device for optical transformation
US10778911B2 (en) 2018-03-31 2020-09-15 Open Water Internet Inc. Optical transformation device for imaging
US10506181B2 (en) 2018-03-31 2019-12-10 Open Water Internet Inc. Device for optical imaging
US10966612B2 (en) 2018-06-14 2021-04-06 Open Water Internet Inc. Expanding beam optical element
AU2019297809A1 (en) 2018-07-06 2021-01-28 Axem Neurotechnology Inc. Apparatus and method for monitoring brain activity
US10962929B2 (en) 2018-09-14 2021-03-30 Open Water Internet Inc. Interference optics for optical imaging device
US10874370B2 (en) 2019-01-28 2020-12-29 Open Water Internet Inc. Pulse measurement in optical imaging
US11600093B1 (en) 2019-01-28 2023-03-07 Meta Platforms, Inc. Increased dynamic range sensor with fast readout
US10955406B2 (en) 2019-02-05 2021-03-23 Open Water Internet Inc. Diffuse optical imaging with multiple beams
US12035996B2 (en) 2019-02-12 2024-07-16 Brown University High spatiotemporal resolution brain imaging
KR102378203B1 (en) * 2019-04-12 2022-03-25 한국과학기술원 Method, system and non-transitory computer-readable recording medium for estimating bio information about head by using machine learning
EP3977409A1 (en) * 2019-05-28 2022-04-06 Brainlab AG Deformity-weighted registration of medical images
US11320370B2 (en) 2019-06-26 2022-05-03 Open Water Internet Inc. Apparatus for directing optical and acoustic signals
US11581696B2 (en) 2019-08-14 2023-02-14 Open Water Internet Inc. Multi-channel laser
US11622686B2 (en) 2019-11-22 2023-04-11 Open Water Internet, Inc. Optical imaging with unshifted reference beam
US11819318B2 (en) 2020-04-27 2023-11-21 Open Water Internet Inc. Optical imaging from light coherence
US11559208B2 (en) 2020-05-19 2023-01-24 Open Water Internet Inc. Imaging with scattering layer
US11259706B2 (en) 2020-05-19 2022-03-01 Open Water Internet Inc. Dual wavelength imaging and out of sample optical imaging
US12076110B2 (en) 2021-10-20 2024-09-03 Brown University Large-scale wireless biosensor networks for biomedical diagnostics
CN117726674B (en) * 2024-02-07 2024-05-14 慧创科仪(北京)科技有限公司 Positioning method of near-infrared brain function imaging device based on personalized brain model

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1729261A1 (en) 2005-06-01 2006-12-06 Deutsches Krebsforschungszentrum Stiftung Des Öffentlichen Rechts Method for tomographic reconstruction
US8676302B2 (en) 2006-01-03 2014-03-18 University Of Iowa Research Foundation Systems and methods for multi-spectral bioluminescence tomography
US9480425B2 (en) * 2008-04-17 2016-11-01 Washington University Task-less optical mapping of dynamic brain function using resting state functional connectivity
US9092111B2 (en) 2010-07-26 2015-07-28 International Business Machines Corporation Capturing information on a rendered user interface including user activatable content
US9545223B2 (en) 2011-03-02 2017-01-17 Board Of Regents, The University Of Texas System Functional near infrared spectroscopy imaging system and method
US20120232402A1 (en) * 2011-03-02 2012-09-13 Macfarlane Duncan Functional Near Infrared Spectroscopy Imaging System and Method
US9993159B2 (en) 2012-12-31 2018-06-12 Omni Medsci, Inc. Near-infrared super-continuum lasers for early detection of breast and other cancers
US10786156B2 (en) * 2014-10-17 2020-09-29 Washington University Super-pixel detection for wearable diffuse optical tomography

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100208872A1 (en) * 1995-05-11 2010-08-19 Andrew Karellas System for quantitative radiographic imaging
US20080154126A1 (en) * 2006-12-22 2008-06-26 Washington University High Performance Imaging System for Diffuse Optical Tomography and Associated Method of Use

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11864865B2 (en) 2018-02-26 2024-01-09 Washington University Small form factor detector module for high density diffuse optical tomography
CN110638477A (en) * 2018-06-26 2020-01-03 佳能医疗系统株式会社 Medical image diagnosis apparatus and alignment method
CN110638477B (en) * 2018-06-26 2023-08-11 佳能医疗系统株式会社 Medical image diagnosis device and alignment method
US20220065964A1 (en) * 2020-08-26 2022-03-03 Canon Medical Systems Corporation Magnetic resonance imaging apparatus, magnetic resonance imaging method, and computer program product
US11639979B2 (en) * 2020-08-26 2023-05-02 Canon Medical Systems Corporation Magnetic resonance imaging apparatus, magnetic resonance imaging method, and computer program product
JP2021037348A (en) * 2020-11-26 2021-03-11 公立大学法人 富山県立大学 Cap for acquiring brain information, and production method of cap for acquiring brain information
JP7156719B2 (en) 2020-11-26 2022-10-19 公立大学法人 富山県立大学 Brain information acquisition cap and method for producing brain information acquisition cap
CN117357132A (en) * 2023-12-06 2024-01-09 之江实验室 Task execution method and device based on multi-layer brain network node participation coefficient
CN117357132B (en) * 2023-12-06 2024-03-01 之江实验室 Task execution method and device based on multi-layer brain network node participation coefficient

Also Published As

Publication number Publication date
US20200375465A1 (en) 2020-12-03
US20170231501A1 (en) 2017-08-17
US10786156B2 (en) 2020-09-29

Similar Documents

Publication Publication Date Title
US20200375465A1 (en) Super-pixel detection for wearable diffuse optical tomography
Pinti et al. A review on the use of wearable functional near‐infrared spectroscopy in naturalistic environments
Yücel et al. Functional near infrared spectroscopy: enabling routine functional brain imaging
Menant et al. A consensus guide to using functional near-infrared spectroscopy in posture and gait research
Yücel et al. Best practices for fNIRS publications
US11589749B2 (en) Optically monitoring brain activities using 3D-aware head-probe
Chitnis et al. Functional imaging of the human brain using a modular, fibre-less, high-density diffuse optical tomography system
US7983740B2 (en) High performance imaging system for diffuse optical tomography and associated method of use
Eggebrecht et al. Mapping distributed brain function and networks with diffuse optical tomography
Hoshi Towards the next generation of near-infrared spectroscopy
US11864865B2 (en) Small form factor detector module for high density diffuse optical tomography
US9480425B2 (en) Task-less optical mapping of dynamic brain function using resting state functional connectivity
Machado et al. Detection of hemodynamic responses to epileptic activity using simultaneous Electro-EncephaloGraphy (EEG)/Near Infra Red Spectroscopy (NIRS) acquisitions
US11129565B2 (en) Method for representations of network-dependent features of the hemoglobin signal in living tissues for detection of breast cancer and other applications
Pinti et al. An analysis framework for the integration of broadband NIRS and EEG to assess neurovascular and neurometabolic coupling
Zhao et al. A wide field-of-view, modular, high-density diffuse optical tomography system for minimally constrained three-dimensional functional neuroimaging
Peng et al. Multichannel continuous electroencephalography-functional near-infrared spectroscopy recording of focal seizures and interictal epileptiform discharges in human epilepsy: a review
Klein et al. Performance comparison of systemic activity correction in functional near-infrared spectroscopy for methods with and without short distance channels
Wagner et al. Comparison of whole-head functional near-infrared spectroscopy with functional magnetic resonance imaging and potential application in pediatric neurology
Bergonzi et al. Lightweight sCMOS-based high-density diffuse optical tomography
Srinivasan et al. Illuminating neurodegeneration: a future perspective on near-infrared spectroscopy in dementia research
Khan et al. Dynamic activation patterns of the motor brain revealed by diffuse optical tomography
Burke et al. Bedside diffuse optical tomography of disrupted brain connectivity during acute stroke
Chen et al. Enhancing Blood Flow Assessment in Diffuse Correlation Spectroscopy: A Transfer Learning Approach with Noise Robustness Analysis
CN106805970B (en) Multi-channel brain function imaging device and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15850019; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15850019; Country of ref document: EP; Kind code of ref document: A1)