WO2019095069A1 - Systems and methods for performing gabor optical coherence tomographic angiography - Google Patents

Systems and methods for performing gabor optical coherence tomographic angiography Download PDF

Info

Publication number
WO2019095069A1
WO2019095069A1 PCT/CA2018/051459
Authority
WO
WIPO (PCT)
Prior art keywords
spectral
frame
spectral interferogram
gabor
convolved
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CA2018/051459
Other languages
English (en)
French (fr)
Inventor
Victor X.D. Yang
Chaoliang CHEN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US16/764,426 priority Critical patent/US11523736B2/en
Priority to JP2020545402A priority patent/JP2021503092A/ja
Priority to CN202410637582.8A priority patent/CN118592892A/zh
Priority to CN201880085663.4A priority patent/CN112136182B/zh
Priority to CN202410637846.XA priority patent/CN118986271A/zh
Priority to CA3082416A priority patent/CA3082416A1/en
Publication of WO2019095069A1 publication Critical patent/WO2019095069A1/en
Anticipated expiration legal-status Critical
Priority to US18/077,469 priority patent/US12396643B2/en
Ceased legal-status Critical Current

Links

Classifications

    • A61B 5/0066 Optical coherence imaging
    • A61B 5/7203 Signal processing for physiological signals, for noise prevention, reduction or removal
    • A61B 3/1233 Ophthalmoscopes using coherent radiation for measuring blood flow, e.g. at the retina
    • A61B 3/0025 Eye-testing apparatus characterised by electronic signal processing, e.g. eye models
    • A61B 3/102 Objective eye-examination instruments for optical coherence tomography [OCT]
    • A61B 5/0075 Diagnostic measurement using light, by spectroscopy
    • A61B 5/02007 Evaluating blood vessel condition, e.g. elasticity, compliance
    • A61B 5/0261 Measuring blood flow using optical means, e.g. infrared light
    • A61B 5/14555 Optical sensors for measuring blood gases, specially adapted for the eye fundus
    • A61B 5/489 Locating particular structures in or on the body: blood vessels
    • A61B 5/7207 Removal of noise induced by motion artifacts
    • A61B 5/7214 Noise removal using signal cancellation
    • A61B 5/725 Waveform analysis using specific filters, e.g. Kalman or adaptive filters
    • A61B 5/7257 Waveform analysis using Fourier transforms
    • G06T 11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/0012 Biomedical image inspection
    • G16H 30/20 Handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40 Processing medical images, e.g. editing
    • G16H 40/63 Operation of medical equipment or devices, for local operation
    • G16H 50/20 Computer-aided diagnosis, e.g. based on medical expert systems
    • A61B 2562/0233 Special features of optical sensors or probes classified in A61B 5/00
    • A61B 2576/00 Medical imaging apparatus involving image processing or analysis
    • A61B 5/0042 Image acquisition adapted for the brain
    • A61B 5/0044 Image acquisition adapted for the heart
    • A61B 5/4222 Evaluating particular parts, e.g. particular organs
    • A61B 5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • G06T 2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G06T 2207/20056 Discrete and fast Fourier transform [DFT, FFT]
    • G06T 2207/20224 Image subtraction
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G06T 2207/30088 Skin; Dermal
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular
    • G06T 2210/41 Medical (image generation)
    • G06T 2211/404 Angiography (computed tomography)

Definitions

  • the present disclosure relates to optical coherence tomography angiography.
  • OCT: optical coherence tomography
  • TD-OCT: time-domain OCT
  • SS-OCT: swept-source OCT
  • ODT: optical Doppler tomography
  • CDOCT: color Doppler OCT
  • Morphological OCT microvasculature imaging methods are collectively termed OCT angiography (OCTA).
  • OCTA algorithms currently available can be divided into two categories according to processing mode.
  • The first is inter-line mode, such as Doppler variance phase resolved (DVPR), intensity-based modified Doppler variance (IBDV), and optical micro-angiography (OMAG).
  • In inter-line mode, blood flow information is extracted from one frame of interference fringes at each position.
  • The second processing mode is inter-frame, which extracts blood flow information from multiple frames of structural images at each position; examples include phase variance OCT (PVOCT), speckle variance OCT (SVOCT), correlation mapping OCT (cmOCT), split-spectrum amplitude-decorrelation angiography (SSADA), differential standard deviation of log-scale intensity (DSDLI), and ultrahigh-sensitivity optical micro-angiography (UHS-OMAG).
  • PVOCT, SVOCT, cmOCT, SSADA, and DSDLI obtain blood vessel contrast by calculating statistical information from either phase or intensity images in the spatial domain.
  • PVOCT calculates the variance of the phase difference between two frames.
  • SVOCT and DSDLI calculate the variances of the intensity and of the differential intensity between two frames, respectively.
  • cmOCT and SSADA calculate decorrelation coefficients, but in SSADA the full spectrum is divided into four sub-bands to improve microvascular image quality.
  • The OMAG algorithm is performed in the slow scanning direction, and the blood flow signal is calculated from both amplitude and phase signals, improving sensitivity.
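The inter-frame statistics described above can be illustrated with a minimal sketch. The per-pixel variance formulas are standard; the frame shapes and counts here are illustrative assumptions, not details from the patent:

```python
import numpy as np

def speckle_variance(frames):
    """Illustrative SVOCT-style contrast: per-pixel variance of intensity
    across N repeated structural frames. Moving blood decorrelates the
    speckle pattern between frames, so flow pixels show high variance
    while static tissue shows low variance."""
    stack = np.stack(frames).astype(float)  # shape: (N, depth, width)
    return stack.var(axis=0)

def phase_variance(frames_complex):
    """Illustrative PVOCT-style contrast: variance of the phase difference
    between consecutive complex-valued frames."""
    phases = np.angle(np.stack(frames_complex))
    dphi = np.diff(phases, axis=0)          # inter-frame phase differences
    return dphi.var(axis=0)
```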
  • Differential interferograms obtained using a spectral domain or swept source optical coherence tomography system are convolved with a Gabor filter, where the Gabor filter is computed according to an estimated depth of the tissue surface.
  • The Gabor-convolved differential interferogram is processed to produce an en face image, without requiring a fast Fourier transform or k-space resampling.
  • Alternatively, two interferograms are separately convolved with a Gabor filter, and the amplitudes of the Gabor-convolved interferograms are subtracted to generate a differential Gabor-convolved interferogram amplitude frame, which is then further processed to generate an en face image without performing a fast Fourier transform and k-space resampling.
  • the example OCTA methods disclosed herein are shown to achieve faster data processing speeds compared to conventional OCTA algorithms.
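A minimal numerical sketch of the described pipeline, assuming a 1-D complex Gabor kernel applied along the spectral axis and a simple mean-amplitude reduction to one en face value per A-line; the kernel length, carrier frequency `k0`, and the reduction rule are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def gabor_kernel(length, k0, sigma):
    """1-D complex Gabor kernel: Gaussian envelope times a complex carrier.
    k0 is the carrier frequency (cycles/sample), which would be set from
    the estimated surface depth; sigma controls the depth range retained."""
    n = np.arange(length) - length // 2
    return np.exp(-(n / sigma) ** 2 / 2) * np.exp(2j * np.pi * k0 * n)

def gocta_en_face(frame1, frame2, k0, sigma=32, klen=65):
    """Hypothetical GOCTA sketch: subtract two repeated spectral
    interferogram frames (A-lines x spectral samples), convolve each
    differential A-line with the Gabor filter, and reduce the amplitude
    to one en face value per A-line -- no FFT, no k-space resampling."""
    diff = frame1.astype(float) - frame2.astype(float)  # differential interferogram
    g = gabor_kernel(klen, k0, sigma)
    out = np.empty(diff.shape[0])
    for i, aline in enumerate(diff):
        conv = np.convolve(aline, g, mode='same')       # Gabor convolution in k-space
        out[i] = np.abs(conv).mean()                    # amplitude -> en face pixel
    return out
```

In the actual method the Gabor filter parameters would be set from the estimated surface depth (see FIG. 2), so that the filter selects the depth range containing the vasculature.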
  • a method of generating an en face angiography image via optical coherence tomography comprising:
  • a spectral domain or swept source optical coherence tomography system to scan a spatial region comprising a tissue surface and to detect at least a first spectral interferogram frame and a second spectral interferogram frame;
  • a system for generating an en face angiography image via optical coherence tomography comprising:
  • control and processing circuitry operatively coupled to the optical coherence tomography system, the control and processing circuitry comprising a processor and a memory, wherein the processor is configured to execute instructions stored in the memory for performing the steps of:
  • controlling the optical coherence tomography system to scan a spatial region comprising a tissue surface and to detect at least a first spectral interferogram frame and a second spectral interferogram frame;
  • processing the differential Gabor-convolved spectral interferogram amplitude frame to generate the en face angiography image, wherein the differential Gabor-convolved spectral interferogram amplitude frame is processed in the absence of performing a fast Fourier transform and k-space resampling.
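The amplitude-subtraction variant claimed above can be sketched similarly; again the kernel parameters and the mean reduction are assumptions for illustration, not the claimed implementation:

```python
import numpy as np

def gabor_kernel(length, k0, sigma):
    # Complex Gabor kernel: Gaussian window times a carrier at frequency k0.
    n = np.arange(length) - length // 2
    return np.exp(-(n / sigma) ** 2 / 2) * np.exp(2j * np.pi * k0 * n)

def agocta_en_face(frame1, frame2, k0=0.1, sigma=32, klen=65):
    """Hypothetical sketch of the amplitude variant: convolve the two
    spectral interferogram frames separately with the Gabor filter,
    subtract the convolution amplitudes to form a differential amplitude
    frame, and reduce each A-line to one en face value -- again without
    an FFT or k-space resampling."""
    g = gabor_kernel(klen, k0, sigma)
    out = np.empty(frame1.shape[0])
    for i in range(frame1.shape[0]):
        a1 = np.abs(np.convolve(frame1[i], g, mode='same'))
        a2 = np.abs(np.convolve(frame2[i], g, mode='same'))
        out[i] = np.abs(a1 - a2).mean()  # differential amplitude -> en face pixel
    return out
```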
  • a method of performing texture noise suppression of a first spectral variance optical coherence tomography en face image, the first spectral variance optical coherence tomography en face image having been generated based on a first spectral interferogram frame and a second spectral interferogram frame, the method comprising:
  • obtaining a texture-noise-suppressed spectral variance optical coherence tomography image by summing a normalization of a logarithm of the first spectral variance optical coherence tomography en face image and a normalization of the second spectral variance optical coherence tomography en face image.
  • a system for performing texture noise suppression of spectral variance optical coherence tomography en face images, comprising:
  • control and processing circuitry operatively coupled to the optical coherence tomography system, the control and processing circuitry comprising a processor and a memory, wherein the processor is configured to execute instructions stored in the memory for performing the steps of:
  • controlling the optical coherence tomography system to scan a spatial region comprising a tissue surface and to detect at least a first spectral interferogram frame and a second spectral interferogram frame;
  • obtaining a texture-noise-suppressed spectral variance optical coherence tomography image by summing a normalization of a logarithm of the first spectral variance optical coherence tomography en face image and a normalization of the second spectral variance optical coherence tomography en face image.
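Read literally, the summing step above can be sketched as follows; the normalization to [0, 1], the epsilon guard, and the choice to log-compress only the first image are assumptions made from this excerpt, not confirmed details of the method:

```python
import numpy as np

def normalize(img):
    # Scale an image to the [0, 1] range.
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=float)

def suppress_texture_noise(svoct1, svoct2):
    """Sketch of the texture-noise-suppression step as described: sum a
    normalization of the logarithm of the first spectral variance en face
    image and a normalization of the second. (Whether the second image is
    also log-compressed is not clear from this excerpt.)"""
    eps = 1e-12  # guard against log(0); an assumption
    return normalize(np.log(svoct1 + eps)) + normalize(svoct2)
```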
  • FIG. 1 A shows an example system for performing optical coherence tomography angiography using spectral domain OCT (SDOCT).
  • FIG. 1B shows an example system for performing optical coherence tomography angiography using swept-source OCT (SSOCT).
  • FIG. 2 shows a flow chart illustrating an example method of performing Gabor optical coherence tomographic angiography (GOCTA).
  • the right side of the flow chart shows an example method of calculating the surface depth of the imaged tissue surface, in which three A-scans are initially calculated to determine the approximate retinal surface location, thereby providing Gabor filter parameters for the B-scan processing that is performed on the left side of the flow chart.
  • FIG. 3 is a cross-sectional structural diagram of the human eye.
  • The curvature of the retinal surface within the region covered by the dashed box can be approximated using the anterior-posterior (AP) diameter.
  • FIGS. 4A-4C show an illustration of the steps of surface calculation based on a plurality of A-scans that sample the surface.
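A minimal sketch of locating the surface from a single A-scan, assuming the surface is taken as the strongest reflector in the FFT depth profile; the function name and the peak-picking rule are illustrative, not the patent's Eq. 3:

```python
import numpy as np

def surface_depth(spectral_aline):
    """Sketch of the surface-location step: FFT one spectral A-line and
    take the depth index of the strongest reflection as the approximate
    tissue surface. In the described method only a few such A-scans are
    needed to parameterize the Gabor filter for the whole volume."""
    profile = np.abs(np.fft.fft(spectral_aline))
    half = len(profile) // 2                    # keep the positive-depth half only
    return int(np.argmax(profile[1:half])) + 1  # skip the DC term
```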
  • FIGS. 5A-5C illustrate example methods of sub-spectral-band, sub-sampling, and skipped convolution processing, respectively.
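The three speed-up strategies of FIGS. 5A-5C can be combined into one illustrative helper; the parameter names `band` and `step` and the output sub-sampling are assumptions about how such a reduction might look, not the patent's implementation:

```python
import numpy as np

def reduced_convolution(frame, kernel, band=None, step=1):
    """Hypothetical sketch of the speed-up strategies: restrict the Gabor
    convolution to a spectral sub-band (sub-spectral-band processing),
    and/or keep only every `step`-th output sample (sub-sampling /
    skipped convolution), trading some image quality for speed."""
    if band is not None:
        lo, hi = band
        frame = frame[:, lo:hi]  # use only part of the spectrum
    out = [np.convolve(aline, kernel, mode='same')[::step] for aline in frame]
    return np.array(out)
```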
  • FIGS. 6A and 6B illustrate an example cause of texture noise for (A) finger or palm print and (B) skin lesion. "RBC" is a red blood cell.
  • FIGS. 7A and 7B are flow charts illustrating example methods of data processing steps for (A) AGOCTA with optional texture noise removal, and
  • FIGS. 8A-8O provide a comparison of the microvascular images at the optic nerve head region. (a) The structural surface calculated by using Eq. 3; the three corners marked by black circles were calculated by FFT. (b) The image outputted from a commercial system. (c) The mask for dynamic blood flow signals (red) and background (blue) on a local region marked by the dashed rectangles in (d) - (g).
  • (d) - (g) are the microvascular images obtained by GOCTA, SVOCT, UHS-OMAG and SSADA, respectively. (h), (j), (l) and (n) are the zoomed-in local regions marked by the dashed white rectangles in (d) - (g), respectively. (i), (k), (m) and (o) are the histograms of the intensity values covered by mask (c), where red and blue represent dynamic flow signal and background, respectively. (b) and (d) - (g) share the same scale bar.
  • FIGS. 9A-9O provide a comparison of the microvascular images at the fovea region. (a) The structural surface calculated by using Eq. 3; the three corners marked by black circles were calculated by FFT. (b) The images
  • GOCTA, SVOCT, UHS-OMAG and SSADA, respectively. (h), (j), (l) and (n) are the zoomed-in local regions marked by the dashed white rectangles in (d)
  • FIG. 10A is a table comparing the data processing time for each two B-scans from the same position.
  • FIG. 10B is a table comparing the data processing time for the entire 3D (608×2048×304) dataset on CPU and GPU.
  • FIGS. 11A-11H show results from retinal imaging of a healthy volunteer, for which a local region of 6×6 mm² was scanned on both the optic nerve head region and the fovea region.
  • (a) and (b) are the surface data obtained by using an example implementation of the GOCTA method described herein
  • (c) and (d) are the regular en face microvascular images for the optic nerve head region and fovea region, respectively.
  • (e) and (f) are the optimized microvascular images
  • (g) and (h) are the differential images obtained by subtracting (c) from (e) and (d) from (f), respectively. (c) - (h) share the same scale bar.
  • FIGS. 12A-12T show surface data (red curves) obtained by using an example implementation of the present GOCTA method
  • (a) - (j) are the cross-sectional structural images with surface data for the optic nerve head region at positions of 0 mm, 0.7 mm, 1.3 mm, 2.0 mm, 2.6 mm, 3.3 mm, 4.0 mm, 4.6 mm, 5.3 mm and 6.0 mm.
  • (k) - (t) are the cross-sectional structural images with surface data for the fovea region at positions of 0 mm, 0.7 mm, 1.3 mm, 2.0 mm, 2.6 mm, 3.3 mm, 4.0 mm, 4.6 mm, 5.3 mm and 6.0 mm.
  • FIGS. 13A-13Q show microvascular images of sub spectral band and sub sampling band on optical nerve head region
  • (a) - (p) are en face images with different spectral bands and different sampling bands.
  • (a1) - (p1) are the zoomed local images in the region marked by a dashed rectangle in (a) - (p).
  • (a2) - (p2) are the histograms of the pixel intensities in (a1) - (p1) covered by mask (q), where red and blue represent dynamic and static signals, respectively. (a) - (p) share the same scale bar.
  • FIGS. 14A-14Q show microvascular images of sub spectral band and sub sampling band on fovea region
  • (a) - (p) are en face images with different spectral bands and different sampling bands.
  • (a1) - (p1) are the zoomed local images in the region marked by a dashed rectangle in (a) - (p).
  • (a2) - (p2) are the histograms of the pixel intensities in (a1) - (p1) covered by mask (q), where red and blue represent dynamic and static signals, respectively. (a) - (p) share the same scale bar.
  • FIGS. 15A and 15B show the “lost” microvascular information by 1/4 spectral band and 1/2 sampling band compared to full band. The “lost” microvascular information shown in (a) and (b) is obtained by subtracting the sub-band images from the full band images.
  • FIGS. 16A-16J show microvascular images obtained by GOCTA
  • SVOCT and OMAG on 1/4 spectral band and 1/2 sampling band. (a), (b) and (d) are images of the optical nerve head region obtained by GOCTA, SVOCT and OMAG, respectively. (c) and (e) are the differential images obtained by subtracting (b) from (a) and (d) from (a). (f), (g) and (i) are images of the fovea region obtained by GOCTA, SVOCT and OMAG. (h) and (j) are the differential images obtained by subtracting (g) from (f) and (i) from (f).
  • FIG. 17 plots the data processing time of each step of GOCTA for two B-scans from the same position on both CPU and GPU. Sum is the
  • Transfer is the process of transferring data from host memory to GPU memory. Preparation refers to the steps required to prepare the convolution. For GPU processing, the time for
  • FIG. 18 is a table showing the 3D data processing time of sub spectral bands and sub sampling band.
  • FIGS. 19A-19I plot images of phantom experiments obtained by cmOCT, SVOCT and AGOCTA, together with the histograms of the marked regions (dashed rectangle), where dynamic and static signals are marked as red and blue, respectively. (b) - (d) share the same scale bar.
  • FIGS. 20A-20K show microvascular images of a local region on a healthy volunteer's palm
  • (a) Photograph of the volunteer's hand, where the marked region (6×6 mm²) was scanned. (b) The estimated surface curvature. (c) The mask for blood flow signals (red) and background (blue). (d) - (f) The en face microvascular images calculated by cmOCT, SVOCT and AGOCTA, respectively.
  • (g) - (i) are the histograms of the intensity values within mask (c), where dynamic and static signals are marked as red and blue, respectively. (c) - (f) share the same scale bar.
  • FIGS. 21A-21L show the calculated surface data. (a) - (l) are the cross sectional images at 0.5 mm, 1.0 mm, 1.5 mm, 2.0 mm, 2.5 mm, 3.0 mm, 3.5 mm, 4.0 mm, 4.5 mm, 5.0 mm, 5.5 mm and 6.0 mm, where the red curves are the estimated surface.
  • FIG. 22 plots a comparison of data processing time for two B-scans from the same position on CPU.
  • FIG. 23 plots a comparison of data processing time for two B-scans from the same position on GPU.
  • Transfer in this context refers to the transfer of data from host memory to GPU memory.
  • FIG. 24 plots a comparison of data processing time for the entire 3D data; the upper time axis is for cmOCT and the lower one is for SVOCT and AGOCTA.
  • FIGS. 25A-25V show results of texture noise removing on a healthy volunteer's palm data
  • (b) - (d) are the en face structural images at three depths obtained by AGOCTA (mean value of the averaged two absolute Gabor filtered fringes)
  • (e) - (g) and (n) - (p) are obtained by regular AGOCTA and SVOCT within three depth ranges
  • (h) - (j) and (q) - (s) are obtained by AGOCTA and SVOCT with texture noise removed
  • (k) - (m) and (t) - (v) are the differential images of optimized images and regular images for AGOCTA and SVOCT. All images share the same scale bar.
  • FIGS. 26A-26J show microvascular images on a HHT patient's skin lesion
  • (e) - (f) The en face microvascular images (at a depth range of 650 to 950 µm below the skin surface) obtained by regular SVOCT and AGOCTA, respectively
  • (h) - (i) are obtained by SVOCT and AGOCTA with texture noise removed.
  • - (I) are histograms of the intensity values covered by mask (c), where dynamic and static signals were marked as red and blue, respectively.
  • FIG. 27 plots 3D data processing time of sub spectral bands and sub sampling band.
  • FIGS. 28A-28M show microvascular images of AGOCTA on sub spectral and sub sampling bands. (a) - (l) are en face images with different spectral bands and different sampling bands.
  • (a1) - (l1) are the zoomed local images in the region marked by a dashed rectangle in (a) - (l).
  • (a2) - (l2) are the histograms of the pixel intensities in (a1) - (l1) covered by mask (m), where red and blue represent dynamic and static signals, respectively. (a) - (l) share the same scale bar.
  • FIGS. 29A-29J show microvascular images of the scalp of a healthy volunteer. (a) A photograph of the scalp, where the marked local region (6×6 mm²) was scanned. (b) - (d) are the structural images within three different depth ranges. (e) - are the microvascular images obtained by AGOCTA within three different depth ranges and the fringes of 1/2 spectral band and
  • FIGS. 30A-30I show images obtained using different example
  • the image quality is maintained when using the “skipped convolution” method and/or the spectral-sub band method.
  • the terms “comprises” and “comprising” are to be construed as being inclusive and open ended, and not exclusive. Specifically, when used in the specification and claims, the terms “comprises” and “comprising” and variations thereof mean the specified features, steps or components are included. These terms are not to be interpreted to exclude the presence of other features, steps or components.
  • the term “exemplary” means “serving as an example, instance, or illustration,” and should not be construed as preferred or advantageous over other configurations disclosed herein.
  • the terms “about” and “approximately” are meant to cover variations that may exist in the upper and lower limits of the ranges of values, such as variations in properties, parameters, and dimensions. Unless otherwise specified, the terms “about” and “approximately” mean plus or minus 25 percent or less.
  • any specified range or group is as a shorthand way of referring to each and every member of a range or group individually, as well as each and every possible sub-range or sub-group encompassed therein, and similarly with respect to any sub-ranges or sub-groups therein. Unless otherwise specified, the present disclosure relates to and explicitly incorporates each and every specific member and combination of sub-ranges or sub-groups.
  • the term “on the order of”, when used in conjunction with a quantity or parameter, refers to a range spanning approximately one tenth to ten times the stated quantity or parameter.
  • the blood flow information is obtained from the spatial domain.
  • the SDOCT systems described above all require numerous complex processing steps, including k-space resampling, dispersion compensation, fast Fourier transform (FFT), and maximum (or mean) intensity projection (MIP). Some of these processing steps require long processing times, which poses challenges for real-time imaging, even when using GPUs for data processing.
  • OCTA images are typically used as en face image sets for clinical decision making, such as identifying an area of microvascular abnormality, after which depth resolved information, such as cross-sectional structural OCT images of the retina at the particular region, is reviewed. Therefore, rapid en face OCTA image display, at the time of scanning, may be advantageous to screen retinal pathology as well as to focus detailed examination on a smaller region of interest. In such scenarios, rapid en face OCTA may allow immediate feedback and re-scanning. Such capability may also be useful for less cooperative patients where motion artefacts degrade OCTA images. The present inventors therefore sought to improve upon current OCTA detection and processing methods in order to develop a rapid OCTA method that would enhance the clinical utility of real-time OCTA imaging and video display.
  • optical coherence tomographic angiography (OCTA) algorithms are provided in which blood flow information is directly extracted from interference fringes without performing the time-consuming steps mentioned above, thereby facilitating real-time OCTA video display.
  • the various example implementations of the methods disclosed herein have been shown to significantly decrease data processing time while maintaining image quality that is suitable for real-time clinical applications.
  • the system includes, but is not limited to, a broadband source 170, a line array detector 194, a beamsplitter 180, a sample arm 182, a reference arm 186, and a spectrally dispersive optic 192.
  • the system may include one or more scanning devices (e.g. motor controlled galvo mirrors), shown at 190, for scanning the beam of the sample arm relative to an object (e.g. tissue).
  • the beamsplitter 180 splits light from the broadband source 170 between the reference arm 186 and the sample arm 182, and the light reflected from the two arms is interfered.
  • the reflected light is interfered using the beamsplitter 180.
  • a different beamsplitter may be employed.
  • the interfered light is dispersed using the dispersive optic 192, which may be a dispersion grating.
  • the dispersion optic 192 spatially disperses the different spectral components of the interfered light, and the spatially dispersed spectrum is detected using the photodetector array 194 (e.g. a line camera).
  • the detected spectrum is the Fourier transform of the axial scan line (A-line), thereby encoding the reflectivity of the tissue as a function of depth.
  • the broadband source 170, detector array 194, and scanning system are operatively coupled to control and processing hardware 100.
  • the control and processing hardware 100 may include a processor 110, a memory 115, a system bus 105, one or more input/output devices 120, and a plurality of optional additional devices such as communications interface 135, display 125, external storage 130, and data acquisition interface 140.
  • the display 125 may be employed to provide a user interface for displaying en face OCTA video and/or images, and/or for providing input to control the operation of the system.
  • the display may be directly integrated into a control and processing device 165 (for example, as an embedded display), or may be provided as an external device (for example, an external monitor).
  • executable instructions represented as image processing module 160 are processed by control and processing hardware 100 to generate en face OCTA images and/or video as per the example methods described below.
  • the control and processing hardware 100 may include and execute instructions for performing, for example, one or more of the methods illustrated in FIG. 2 and/or FIGS. 7A and 7B, or other methods described herein, or variants thereof.
  • Such executable instructions may be stored, for example, in the memory 115 and/or other internal storage.
  • Additional control modules may be provided, for example, for controlling the scanning operations of one or more scanning mirrors (e.g. galvo controllers).
  • the methods described herein can be partially implemented via hardware logic in processor 110 and partially using the instructions stored in memory 115. Some embodiments may be implemented using processor 110 without additional instructions stored in memory 115. Some embodiments are implemented using the instructions stored in memory 115 for execution by one or more microprocessors. Thus, the disclosure is not limited to a specific configuration of hardware and/or software.
  • control and processing hardware 100 may be provided as an external component that is interfaced to a processing device.
  • bus 105 is depicted as a single connection between all of the components, it will be appreciated that the bus 105 may represent one or more circuits, devices or communication channels which link two or more of the components.
  • the bus 105 may include a motherboard.
  • the control and processing hardware 100 may include more or fewer components than those shown.
  • Some aspects of the present disclosure can be embodied, at least in part, in software, which, when executed on a computing system, transforms an otherwise generic computing system into a special-purpose computing system that is capable of performing the methods disclosed herein, or variations thereof. That is, the techniques can be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache, magnetic and optical disks, or a remote storage device. Further, the instructions can be downloaded into a computing device over a data network in the form of a compiled and linked version.
  • the logic to perform the processes as discussed above could be implemented in additional computer and/or machine-readable media, such as discrete hardware components, large-scale integrated circuits (LSIs), application-specific integrated circuits (ASICs), or firmware such as electrically erasable programmable read-only memories (EEPROMs) and field-programmable gate arrays (FPGAs).
  • a computer readable storage medium can be used to store software and data which when executed by a data processing system causes the system to perform various methods.
  • the executable software and data may be stored in various places including for example ROM, volatile RAM, nonvolatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices.
  • the phrases “computer readable material” and “computer readable storage medium” refer to all computer-readable media, except for a transitory propagating signal per se.
  • FIG. 1 B illustrates an alternative example implementation of a system for performing OCTA based on swept source OCT (SSOCT).
  • the example system includes a swept-source optical coherence tomography system, that includes, but is not limited to, a tunable laser 172, a detector 196, an amplifier 198, a beamsplitter 180, a sample arm 182, and a reference arm 186.
  • the system may include one or more scanning devices (e.g. motor controlled galvo mirrors), shown at 190, for scanning the beam of the sample arm relative to an object (e.g. tissue).
  • the tunable laser employed for such an implementation may be an akinetic laser in order to improve image quality.
  • the tunable laser 172 is employed to tune or “sweep” the optical wavelength of light emanating from the laser, and the resulting interference pattern,
  • Spectral analysis via a Fourier transform
  • One or more additional control modules may be provided for synchronizing operation of a tunable laser 172 with the scanning operations.
  • the interference fringes between the light backscattered from sample and reflected by the reference mirror are detected by a spectrometer camera.
  • a three-dimensional (3D) dataset of spectral interferogram frames (spectral fringes) is acquired, as shown at 200 (e.g. by scanning using x- and y-galvo-mirrors).
  • the direct component (DC) of the interference can be measured by blocking the sample arm, and the auto-correlation of the sample beam is negligible.
  • After subtracting the DC component, the captured signal can be simplified by
  • x and y represent the scanning directions (e.g. of the two galvos)
  • λ is wavelength
  • S(λ) is the power spectral density of the light source
  • Rs and Rr are the backscattering coefficient of the sample and the reflectivity of the reference mirror, respectively
  • Is and Ir are the input powers in the sample and reference arms, respectively
  • n is the refractive index
  • z represents depth
  • Φ0(x, y) and Φd(λ) are the initial phase and the dispersion mismatch between the sample arm and the reference arm, respectively.
  • the amplitude and the frequency of the fringes vary with time. However, for two consecutive B-scans acquired from the same position, acquired as shown at steps 205 and 210, the amplitude and frequency of the components corresponding to static tissue remain substantially unchanged.
  • the differential spectral interferogram frame, shown at 215 in FIG. 2, can be expressed by
  • G(x, λ, y) = I(x, λ, y1) − I(x, λ, y2),
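The subtraction above can be sketched numerically. This is a hedged NumPy illustration with synthetic fringes standing in for real B-scans; the array shapes and the perturbation model are assumptions, not the patent's parameters:

```python
import numpy as np

def differential_frame(frame_y1, frame_y2):
    """Pixelwise differential spectral interferogram G = I(y1) - I(y2).

    Fringe components from static tissue are identical in the two
    repeated B-scans and cancel; moving scatterers leave a residual."""
    return frame_y1 - frame_y2

# Synthetic example: 4 A-scans x 2048 spectral pixels.
rng = np.random.default_rng(0)
I_y1 = rng.normal(size=(4, 2048))
I_y2 = I_y1.copy()
I_y2[2] += 0.1 * np.sin(np.arange(2048))  # "flow" perturbs one A-scan

G = differential_frame(I_y1, I_y2)
```

Static A-scans cancel exactly (rows 0, 1 and 3 of `G` are zero), while the perturbed A-scan retains a residual fringe that the subsequent Gabor filtering acts upon.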
  • the human eye, as an optical system, has a curved image plane on the retina near the fovea, with the optical nerve head in the vicinity.
  • the anterio-posterior (AP) diameter of an emmetropic human adult eye is approximately 22 to 24 mm, which is relatively invariant between sex and age groups. While the transverse diameter varies more according to the width of the orbit, the curvature of the area near the fovea and optical nerve head can be approximated by the AP diameter.
  • an AP diameter within the range of 21-23 mm or 21.5 to 22.5 mm may be employed as an approximation for human eyes.
  • the orientation of the retina is determined for the subsequent generation of Gabor filters having depth selectivity corresponding to the tissue surface depth.
  • the retinal orientation may be evaluated based on a spherical model which can be expressed as:
  • the surface points may be determined by performing an FFT on at least three A-scans (e.g. at corners of the image frame) to determine the depth of the surface, as shown at 220 in FIG. 2.
  • the retinal surface zs(x, y) can then be determined by fitting the surface points using Eq. 3, thereby approximating the region marked by the dashed box in FIG. 3, as per step 225 of FIG. 2.
  • the accuracy of the estimated surface data can be improved by using a distributed set of A-scans (e.g. evenly distributed across the tissue surface of interest), and 2D cubic interpolating the surface positions of the A-scans.
  • This method provides a more accurate surface depth estimation than the preceding example method that employed three corner surface positions to solve a sphere function.
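The distributed-A-scan variant can be sketched with SciPy's 2D cubic spline interpolation. This is a hedged illustration: the 9×9 grid, 6 mm field, 23 mm radius and 304×304 output are assumed values, and a synthetic spherical cap stands in for surface depths that a real implementation would measure by FFT of the selected A-scans:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def interpolate_surface(xs, ys, z_sparse, nx, ny):
    """2D cubic interpolation of sparse surface depths z_sparse,
    sampled at grid coordinates (xs, ys), up to an (ny, nx) frame."""
    spline = RectBivariateSpline(ys, xs, z_sparse, kx=3, ky=3)
    x_full = np.linspace(xs[0], xs[-1], nx)
    y_full = np.linspace(ys[0], ys[-1], ny)
    return spline(y_full, x_full)  # shape (ny, nx)

# Synthetic spherical-cap surface sampled on a 9x9 grid of A-scans.
xs = ys = np.linspace(0, 6.0, 9)                        # mm
X, Y = np.meshgrid(xs, ys)
R = 23.0                                                # mm, ~AP diameter
z_sparse = R - np.sqrt(R**2 - (X - 3)**2 - (Y - 3)**2)  # surface depths

z_full = interpolate_surface(xs, ys, z_sparse, nx=304, ny=304)
print(z_full.shape)  # (304, 304)
```

Because the retinal surface is smooth, a sparse grid plus cubic interpolation recovers the full-frame surface with negligible error at a tiny fraction of the cost of computing an FFT for every A-scan.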
  • the sample information at different depths is modulated by different frequency components.
  • a Gabor filter is a linear filter
  • the frequency component within a specific frequency range can be obtained directly by convolution, which is equivalent to multiplying by a Gaussian function in the spatial domain.
  • the Gaussian function can be used to extract the sample information within the depth range of zs − Δz/2 to zs + Δz/2, where zs and Δz are the depth and the depth range, respectively.
  • the filter can be obtained by performing a FFT on the above mentioned Gaussian function and expressed by
  • φa is the initial phase.
  • the Gabor filter based on wavelength, G(x, λ, y), is then calculated by performing a reverse resampling on G(x, k, y).
  • This step of calculating the Gabor filter, based on the approximate retinal surface at the pixels of the differential interferogram image frame, is shown at step 230 of FIG. 2.
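A schematic stand-in for such a depth-selective kernel (not the patent's exact Eq. 4; the kernel length, Gaussian width and truncation threshold here are illustrative assumptions) is a Gaussian-windowed cosine whose modulation frequency selects a target depth:

```python
import numpy as np

def gabor_kernel(depth_px, width_px, n=64, phase=0.0):
    """Spectral-domain Gabor kernel: Gaussian envelope x cosine carrier.

    The carrier frequency selects the depth 'depth_px' (in pixels) and
    the Gaussian width sets the selected depth range."""
    k = np.arange(n) - n // 2
    envelope = np.exp(-0.5 * (k / width_px) ** 2)
    carrier = np.cos(2.0 * np.pi * depth_px * k / n + phase)
    g = envelope * carrier
    # Keep only the non-zero segment, mirroring the short-kernel idea
    # described below (e.g. a 16-pixel non-zero segment).
    keep = np.abs(g) > 1e-3 * np.abs(g).max()
    return g[keep.argmax(): len(keep) - keep[::-1].argmax()]

kernel = gabor_kernel(depth_px=10, width_px=4)
```

Truncating the near-zero Gaussian tails shortens the kernel well below the full spectral length, which is what reduces the cost of the subsequent convolution.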
  • a new differential frame is obtained, henceforth referred to as a Gabor-convolved differential spectral interferogram frame.
  • This step is shown at step 235 in FIG. 2, and is computed as:
  • the GOCTA signal can then be obtained by calculating the standard deviation (STD) of the Gabor-convolved differential spectral interferogram frame, as shown at step 240, which is expressed by:
  • where M is the number of pixels of the CCD
  • Īmean(x, y) is the mean value of each A-scan of the filtered fringe.
  • a given pixel of the en face angiography image is generated by calculating, for a respective pixel of the Gabor-convolved differential spectral interferogram frame, a measure based on a spectral standard deviation.
  • a given pixel of the en face angiography image is generated by calculating, for a respective pixel of the Gabor-convolved differential spectral interferogram frame, a measure quantifying a spectral statistical dispersion.
  • statistical measures include median absolute deviation and average absolute deviation.
  • the measures of variance may be higher order power/roots of variance, or combination thereof.
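As a hedged sketch of this per-pixel dispersion measure (array shapes assumed; a real implementation would operate on the Gabor-convolved differential frame produced by the previous step):

```python
import numpy as np

def gocta_signal(filtered_diff):
    """Spectral standard deviation per A-scan (one en face pixel each).

    filtered_diff: Gabor-convolved differential frame, shape (n_x, M),
    with M spectral pixels per A-scan."""
    mean = filtered_diff.mean(axis=1, keepdims=True)
    return np.sqrt(((filtered_diff - mean) ** 2).mean(axis=1))

# A static A-scan (all zeros after subtraction) versus one with a
# residual fringe from moving scatterers.
frame = np.vstack([np.zeros(2048),
                   np.sin(np.linspace(0.0, 60.0, 2048))])
signal = gocta_signal(frame)
```

The alternative dispersion measures mentioned above (median or average absolute deviation, or higher-order powers/roots) drop in as one-line replacements for the standard deviation.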
  • the Gabor filter parameters may be chosen such that a large number of zeros are encountered, thus simplifying the computational complexity and reducing the time needed for the convolution in digital filtering. For example, it was found that microvascular images within a depth range of 350 µm (10% of the total OCT ranging depth in one example implementation in the Examples below) around the spherically fitted retinal surface may be calculated for analysis and comparison. In this example implementation, the non-zero segment length of the Gabor filter (Eq. 4) was found to be only 16 pixels (see the Example section below for details of the example system employed), resulting in a substantial decrease of computational complexity.
  • the example GOCTA methods disclosed herein avoid the step of calculating depth resolved structural images (in the z-direction), in contrast to the conventional methods described above. Accordingly, the present systems and methods may be potentially useful for calculating preview OCTA images as the first line en face display for the clinician, and may improve the efficiency of disease screening and diagnosis in a busy clinical environment.
  • the GOCTA method may result in significantly reduced image processing times relative to the
  • the Gabor optical coherence tomographic angiography (GOCTA) methods disclosed herein may be employed to provide images and/or video of the microvasculature of a human retina using a standard ophthalmic SDOCT system.
  • the present GOCTA methods are well suited for SDOCT systems used in wide field scanning, ultra-high spectral resolution or parallel high A-line speed applications, where large amounts of data are generated.
  • the present GOCTA methods can also be implemented on graphics processing units (GPUs) to increase data processing speed further.
  • GPUs graphics processing units
  • the preceding description of the GOCTA method was provided with reference to implementation using an SDOCT system, it will be understood that the preceding example embodiments may alternatively be implemented using a SSOCT system. As noted above, such a system is illustrated in FIG. 1 B. It will be understood that the tunable laser employed for such an implementation may be an akinetic laser in order to improve image quality. It is noted that a limitation of the example GOCTA method illustrated in FIG. 2 is that the structural image alignment in the z-direction cannot be performed for motion artefact removal. However, it is nonetheless noted that x- and y-direction based en face image registration and alignment may still be applied. In clinical use, the GOCTA method can be employed to provide en face images and/or video (e.g. preview images or video), and subsequent processing of the 3D interferogram dataset may optionally be employed to extract depth profile information.
  • the curvature of the lens system can affect the accuracy of the evaluated retinal orientation; for a slight curvature, the images obtained by the GOCTA method will not be affected, due to the depth range of the Gabor filter being a small fraction (e.g. approximately 10%) of the total OCT ranging depth.
  • the relative shifting distance at each pixel can be obtained by scanning a mirror and the evaluated retinal orientation can be compensated in software.
  • FIGS. 4A-4C illustrate an example implementation for achieving improved surface data and accuracy.
  • an FFT was performed on a set of (approximately) uniformly distributed A-scans (e.g. 9×9 or less, or 30 A-scans or less, or 100 A-scans or less) in order to calculate the surface information of tissue, and the result is shown in FIG.
  • the surface depth characterization of the tissue surface may be performed using another modality, such as, but not limited to, a surface profile detection system (e.g. using structured light).
  • the moving scatterers can change the frequency or amplitude of the spectral fringes obtained by OCT
  • the standard deviation of the Gabor-convolved differential fringes of the two B-scans from the same position was selected as the GOCTA signal to contrast microvasculature
  • the backscattered intensity can be modulated by the retinal texture pattern, resulting in a decrease of sensitivity for extracting vascular information. As a result, some vascular information in the local regions with a weak backscattered intensity may be lost.
  • the STD of the differential fringes was divided by the total energy of the two A-scans, and the resulting improved GOCTA signal can be expressed by:
  • x and y are the pixel index for fast scanning and slow scanning directions, respectively.
  • I(x, λ, y1) and I(x, λ, y2) are the two B-scans obtained by SDOCT from the same position, λ is wavelength, and ΔĪ is the Gabor-filtered differential fringe of the two frames from the same position.
  • the standard deviation calculation in the equation above may alternatively be computed as one or many different measures of spectral statistical dispersion, optionally including a higher order power or root or combination thereof.
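A minimal sketch of this energy-normalized variant, assuming the differential fringe has already been Gabor filtered (function and variable names are illustrative, not the patent's):

```python
import numpy as np

def gocta_signal_normalized(filtered_diff, fringe1, fringe2):
    """STD of the filtered differential fringe divided by the total
    energy of the two A-scans, compensating regions of weak
    backscattered intensity (e.g. texture-pattern modulation).

    All inputs have shape (n_x, M): n_x A-scans, M spectral pixels."""
    std = filtered_diff.std(axis=1)
    energy = (fringe1 ** 2).sum(axis=1) + (fringe2 ** 2).sum(axis=1)
    return std / energy

rng = np.random.default_rng(1)
f1 = rng.normal(size=(3, 1024))
f2 = f1 + 0.05 * rng.normal(size=(3, 1024))
sig = gocta_signal_normalized(f1 - f2, f1, f2)
```

Dividing by the A-scan energy boosts the flow signal where the backscattered intensity is locally weak, which is the stated remedy for vessel discontinuities under texture patterns.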
  • the spectral density function of the laser in SDOCT is a Gaussian function, whereby the center portion of the spectrum carries the majority of the sample information due to the stronger intensity. Accordingly, in some example embodiments, the spectral fringes obtained by the OCT system could be shortened in bandwidth, in order to decrease computational complexity without significantly degrading image quality, thereby achieving a higher data processing speed. Furthermore, while the standard deviation of the differential fringes over the total energy of the two fringes was used for contrasting microvasculature in GOCTA, each pixel carried the information of moving scatterers, and as a result, the spectral fringes could also be spectrally sub sampled to further improve data processing speed.
  • FIGS. 5A-B schematically illustrate example methods of performing sub spectral band sampling, and sub sampling within a spectral band (“sub sampling band”).
  • a spectral subset of the differential interferogram is processed.
  • the spectral subset of the differential interferogram may be a quarter or a half of the full band.
  • FIG. 5B illustrates the sub-sampling of the differential interferogram, illustrating non-limiting cases in which one of every two spectral pixels is sampled, and one of every three spectral pixels is sampled. It is noted that in the case of sub spectral band sampling shown in FIG. 5A, the Gabor filters did not need to be shortened. However, in the example embodiment shown in FIG. 5B, both the interferogram and the Gabor filters are sub sampled, since the spectral resolution is changed by this method.
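Both reductions can be sketched as simple slicing operations. This is a hedged NumPy illustration; the 2048-pixel spectrum is an assumption:

```python
import numpy as np

def sub_spectral_band(fringe, fraction):
    """Keep the central 'fraction' of the spectrum, where the Gaussian
    source density concentrates most of the sample information."""
    m = fringe.shape[-1]
    lo = int(m * (1.0 - fraction) / 2.0)
    return fringe[..., lo: lo + int(m * fraction)]

def sub_sample_band(fringe, step):
    """Keep one of every 'step' spectral pixels; the Gabor filter must
    be sub sampled identically, since the spectral resolution changes."""
    return fringe[..., ::step]

fringe = np.arange(2048.0)
quarter_band = sub_spectral_band(fringe, 0.25)  # central 512 pixels
half_sampled = sub_sample_band(fringe, 2)       # 1024 pixels
```

Either reduction shrinks the data fed to the convolution and STD steps by the same factor, which is where the processing-time savings reported below come from.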
  • each pixel is used Ng times during the calculation of the convolution (where Ng is the size of the Gabor filter kernel).
  • the convolution method may be adapted to reduce the number of times a pixel is employed during the convolution to further decrease computing amount for data processing.
  • the conventional method of performing the convolution involves the shifting of the Gabor filter by one pixel between successive steps of the convolution.
  • the Gabor filter may be shifted by a plurality of pixels that is less than the kernel size of the Gabor filter between successive steps (e.g. between at least one pair of successive steps) when performing the convolution, such that n ≤ Ng − 1 pixels of the spectral interferogram are skipped between steps of the convolution.
  • FIG. 5C illustrates one example and non-limiting implementation of such a “skipped convolution” method, in which the Gabor filter is shifted by Ng pixels for each step during the convolution process (skipping Ng − 1 intermediate convolution steps), such that each pixel is employed only once during the convolution.
  • the present example implementation can significantly increase the image processing speed without compromising image quality.
  • the skipped convolution method may be combined with the preceding sub- spectral band methods.
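A hedged sketch of the skipped convolution, shifting the kernel by its full length Ng each step so that each spectral pixel enters exactly one output. Ng = 16 matches the 16-pixel non-zero segment mentioned earlier, but the boxcar kernel values here are purely illustrative:

```python
import numpy as np

def skipped_convolution(fringe, kernel):
    """Convolution evaluated only at every Ng-th shift: each pixel is
    used once, giving Ng times fewer multiply-adds than the dense
    (shift-by-one) convolution."""
    ng = len(kernel)
    usable = (len(fringe) // ng) * ng
    blocks = fringe[:usable].reshape(-1, ng)
    return blocks @ kernel[::-1]  # kernel flipped, as in convolution

fringe = np.arange(64.0)
kernel = np.ones(16) / 16.0       # Ng = 16 (boxcar stands in for Gabor)
out = skipped_convolution(fringe, kernel)
```

Because the GOCTA signal is a statistical dispersion over the filtered fringe rather than a depth-resolved profile, the coarser output sampling of this scheme does not degrade the en face image, which is the design rationale stated above.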
  • the preceding example GOCTA methods are adapted according to a method involving the convolution of Gabor filters with two interferograms, and the subsequent subtraction of the amplitudes of the Gabor- convolved interferograms.
  • This modified OCTA method is henceforth termed amplitude based Gabor OCTA (AGOCTA).
  • AGOCTA amplitude based Gabor OCTA
  • This method may be beneficially applied to SSOCT systems, where the processing method may reduce and/or reject the timing-induced phase errors caused by swept source lasers, while achieving reconstructed en face microvascular images with a faster data processing speed compared to the two popular skin imaging algorithms (cmOCT and SVOCT) that are commonly used for SSOCT systems.
  • FIG. 7A provides a flow chart illustrating example implementations of the example AGOCTA method.
  • a 3D dataset of spectral interferogram frames (spectral fringes) is acquired (e.g. using an SSOCT system), as shown at step 300, and each position is scanned at least twice, thereby providing interferogram frames as shown at 305 and 310.
  • the obtained spectral interferogram frames could be expressed by
  • k is wavenumber
  • S(k) is the power spectral density of the light source
  • R s and R r are the scattering coefficient of sample and the reflectivity reference mirror, respectively.
  • Is and Ir are the input powers in the sample and reference arms, respectively.
  • Φ0 is the initial phase.
  • the frequency components within a specific depth range in the spatial domain may be obtained by convolving with Gabor filters, for which surface data is needed.
  • an FFT may be performed on a subset (e.g. 5×5) of A-scans that are approximately evenly distributed across the scanned region
  • the overall surface may be estimated by 2D cubic interpolating the matrix of surface positions. This calculation is shown at step 315 of FIG. 7 A.
  • For tissue which has a more complex surface curvature than the retinal surface considered above, a higher density of A-scans may be useful to obtain a more accurate calculation of the surface profile. It will therefore be apparent that there is a trade-off between the computational complexity required for surface profile (depth) characterization and overall processing time.
  • the Gabor filters can be obtained, as shown at 320, and may be expressed by:
  • Blood flow signals may then be calculated by convolving the two spectral interferogram frames with the Gabor filters:
  • Ī(k) = I(k) ⊗ G(k), (3) where ⊗ is the operator of convolution.
  • a Hilbert transform and amplitude operation are then performed on the Gabor-convolved spectral interferogram frames, as shown at 335, 340, 345 and 350 in order to calculate the amplitude plots of the two frames.
  • the differential Gabor-convolved spectral interferogram amplitude frame is obtained and expressed by:
  • M is the pixel index in each A-scan
  • Ī is the mean value of the fringes.
  • a given pixel of the en face angiography image is generated by calculating, for a respective pixel of the differential Gabor-convolved spectral interferogram amplitude frame, a measure based on a spectral standard deviation.
  • a given pixel of the en face angiography image is generated by calculating, for a respective pixel of the differential Gabor-convolved spectral interferogram amplitude frame, a measure quantifying a spectral statistical dispersion.
  • Non-limiting examples of statistical measures include median absolute deviation and average absolute deviation.
  • the measures of variance may be higher-order powers/roots of the variance, or combinations thereof.
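The dispersion measures named above can be computed per A-scan of the differential amplitude frame as sketched here; the function name and the reduction of one A-scan to one en face pixel are illustrative assumptions.

```python
import numpy as np

def enface_pixel(diff_amplitude_ascan, measure="std"):
    """Collapse one A-scan of the differential Gabor-convolved amplitude
    frame into a single en face pixel using a dispersion measure:
    'std' - spectral standard deviation,
    'mad' - median absolute deviation,
    'aad' - average absolute deviation."""
    x = np.asarray(diff_amplitude_ascan, dtype=float)
    if measure == "std":
        return x.std()
    if measure == "mad":
        return np.median(np.abs(x - np.median(x)))
    if measure == "aad":
        return np.mean(np.abs(x - x.mean()))
    raise ValueError(f"unknown measure: {measure}")
```

A static region yields near-zero dispersion, while flow-induced fluctuations raise all three measures.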
  • backscattering intensity may be modulated by the texture pattern of the tissue surface.
  • texture pattern modulation may occur for tissues such as finger and palm print, lesion, etc., as shown in FIG. 6A.
  • the obtained microvascular images are also modulated by the texture patterns, appearing as discontinuities of vessels.
  • the backscattered intensities from red blood cells (RBC, 510) at position 1 and position 2 are I1 and I2, respectively. I1 will be stronger than I2, since the depth of the RBC at position 1 is smaller than at position 2.
  • RBCs move in a single file, with variable distances in between, and therefore, after the time required for a complete B-scan, there can be an "all or none" phenomenon, since certain locations will have backscatter signal from a RBC while other locations will have none.
  • the STDs obtained at the two positions for the same vessel are I1/2 and I2/2, respectively.
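The factor of one half follows directly from the "all or none" picture: if one frame records backscatter intensity I from an RBC and the other frame records none, the (population) standard deviation of the pair {I, 0} is I/2. A quick numerical check:

```python
import numpy as np

# "All or none": one frame catches an RBC (backscatter I), the other
# catches plasma (~0). The population STD of the pair {I, 0} is I/2.
for intensity in (1.0, 2.5, 7.0):
    std = np.std([intensity, 0.0])  # population standard deviation
    assert np.isclose(std, intensity / 2)
```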
  • microvascular images obtained using the modulated intensity signals are also modulated by skin texture pattern, appearing as a discontinuity of vessels.
  • the lesions 515 may impose a strong texture effect on the angiographic image processing, again causing discontinuity of vessels.
  • the aforementioned AGOCTA method may be adapted to reduce texture modulation effects as follows. Referring again to FIG. 7A, by dividing the obtained AGOCTA image by the mean value of the averaged fringes of the two absolute Gabor filtered fringes, as shown at steps 365, 370, 375 and 380, a new AGOCTA image is obtained where the texture pattern is reversed. The en face images with texture noise removal or reduction may then be obtained by summing the normalized new AGOCTA image and normalized log scale of original AGOCTA image, as shown at steps 385, 390, 392, 394, 396 and 398, which is expressed by
  • Norm (392, 394) and Abs (365, 370) are the normalization and absolute-value operators, respectively.
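The texture-suppression steps described above (divide the AGOCTA image by the mean of the averaged absolute Gabor-filtered fringes, then sum the normalized new image with the normalized log of the original) can be sketched as follows; all variable names and the epsilon guard are illustrative.

```python
import numpy as np

def suppress_texture(agocta, fringe_a, fringe_b, eps=1e-12):
    """Texture-noise suppression sketch.

    agocta: 2D en face AGOCTA image.
    fringe_a, fringe_b: 3D absolute Gabor-filtered fringes (y, x, k).
    """
    mean_fringe = 0.5 * (np.abs(fringe_a) + np.abs(fringe_b))
    texture = mean_fringe.mean(axis=-1)      # per-pixel mean fringe value
    new_img = agocta / (texture + eps)       # texture pattern reversed

    def norm(img):
        rng = img.max() - img.min()
        return (img - img.min()) / (rng + eps)

    # sum of normalized new image and normalized log of the original
    return norm(new_img) + norm(np.log(agocta + eps))
```

Because the texture modulation appears with opposite sign in the divided image, summing the two normalized images cancels much of it while preserving vessel contrast.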
  • the preceding texture noise suppression method may also be employed for SVOCT imaging, as shown in FIG. 7B.
  • FIG. 7B shows a modification of the SVOCT method.
  • the same MIP window was applied to both the STD images and the averaged structural images, as shown at 400 and 405, in order to calculate the original en face SVOCT images and en face structural images.
  • a new SVOCTA image is obtained with texture pattern reversed as well.
  • the final SVOCTA images with texture noise suppression are obtained by summing the normalized new SVOCT image and the normalized log scale of the original SVOCT image, as shown at steps 415, 420, 425, 430 and 435.
  • the example AGOCTA method described above has been shown to provide faster data processing than the two other SSOCT blood flow imaging algorithms, SVOCT and cmOCT, which are performed in the spatial domain. This advantage is understood to be mainly due to calculating the blood flow signal directly in the spectral domain, which decreases the computationally intensive processing time.
  • One limitation of the AGOCTA method is the lack of depth resolved information. However, since most clinicians are more familiar with en face microvascular images, such limitations may not be detrimental.
  • the AGOCTA method may be used for calculating preview images and/or video in order to improve diagnostic efficiency for clinicians. This workflow may be beneficial, for example, in the case of uncooperative patients.
  • the data processing time for the present example AGOCTA method was found to almost double relative to GOCTA: in GOCTA, the Gabor filter convolution is performed on the differential spectral fringes of the two A-scans from the same position, whereas in AGOCTA the convolution is performed separately on the two A-scans, and the differential amplitude plots of the two filtered fringes are then used for the STD calculation.
  • the present example embodiments may be employed to perform OCTA imaging on any tissue surface having vascularization associated therewith, such as, but not limited to, brain tissue, cardiac tissue, muscle, respiratory and gastrointestinal tissue, and abdominal organs such as the bladder or ureter.
  • Clinical applications for the systems and methods disclosed herein include, but are not limited to, microvascular imaging, including but not limited to cortical neurovascular coupling assessment such as functional neuroimaging, monitoring of therapeutic effects on retinal pathology, assessment of microvasculature of organ transplant in terms of perfusion status, monitoring of angiogenesis in neoplastic and non-neoplastic disorders.
  • An example implementation of the aforementioned GOCTA method was performed on a dataset based on detection of a healthy human eye using a commercial SDOCT system (AngioVue, OptoVue Inc.) to verify its performance.
  • This system operated at a center wavelength of 840 nm with an axial resolution and lateral resolution of ~5 μm and ~15 μm, respectively.
  • the A-scan rate is 70,000 A-scans per second.
  • the scanning range was 3x3 mm and each position was scanned twice.
  • Retinal OCT scanning was performed on ten healthy volunteers.
  • Example data for two local regions are shown in FIGS. 8A-8O and 9A-9O, respectively.
  • the scanning ranges were 3x3 mm with 608x304 A-scans.
  • the SVOCT, UHS-OMAG and SSADA algorithms were performed on the same dataset to calculate microvascular images for comparison, and the en face images were obtained by using mean projection within the depth range, matching the result obtained by the Gabor filters. All of the en face microvascular images were calculated within a depth of 0-350 μm below the retinal surface.
  • Signal to noise ratio (SNR) and contrast to noise ratio (CNR) of the en face micro-vascular images were also calculated for quantitative comparison; SNR and CNR were calculated by
  • the marked regions were double-thresholded to obtain the masks for dynamic signals (red) and background (blue), as shown in FIG. 8C and FIG. 9C, which include vessels of different sizes.
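The SNR/CNR computation from the double-thresholded masks can be sketched as below. The exact formulas are not reproduced in this excerpt, so the definitions used here (SNR in dB from the signal mean over the background STD; CNR as the mean difference over the background STD) are common OCTA conventions and should be treated as assumptions.

```python
import numpy as np

def snr_cnr(image, signal_mask, background_mask):
    """Compute SNR (dB) and CNR from boolean masks over an en face
    image: signal_mask marks dynamic (vessel) pixels, background_mask
    marks static background pixels."""
    sig = image[signal_mask]
    bg = image[background_mask]
    snr = 20.0 * np.log10(sig.mean() / bg.std())
    cnr = (sig.mean() - bg.mean()) / bg.std()
    return snr, cnr
```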
  • the results demonstrate that the GOCTA method can provide comparable image quality compared to the other three algorithms in the vicinity of both the optical nerve head and fovea regions, as shown by the comparable SNRs and CNRs.
  • the output images from the commercial SSADA algorithm are based on duplicated optical scanning in two directions (x and y), with postprocessing applied to these two data sets to suppress motion artifacts.
  • the SNRs and CNRs were measured and calculated on the data sets from all ten healthy volunteers; the averages and standard deviations of the SNRs for GOCTA, SVOCT, UHS-OMAG and SSADA are 25 ± 2, 23 ± 2, 23 ± 2 and 22 ± 1, respectively, and the corresponding CNRs are 14 ± 2, 9 ±
  • the SNRs and CNRs obtained by GOCTA are slightly higher than the other three algorithms.
  • the reason for this improvement may be that the proposed algorithm uses a large range of frequency components (the sample information within the depth range of z0 − Δz/2 to z0 + Δz/2 in the spatial domain) to calculate the blood flow information, which is more robust compared to the other three algorithms, where only the sample information at the same depth is used, followed by a maximum (or mean) projection to generate the en face microvascular images.
  • a key advantage of the present GOCTA method is the speed of processing.
  • the datasets were processed on the same computer using the published SVOCT, UHS-OMAG, and SSADA algorithms, in MATLAB®. It is noted that, in order to obtain the datasets used to postprocess the commercial SSADA image, scanning in both the x and y directions was performed and the SSADA algorithm had to be repeated, which doubled the numerical processing time.
  • the data processing was accomplished on a laptop (CPU: i7-4720HQ, memory: 16 GB, GPU: NVIDIA GeForce GTX 970M, operating system: Windows 8.1).
  • the data processing time for each 2 B-scans from the same position was calculated and the results are shown in FIG. 10A.
  • Two local regions (6x6 mm²) were scanned on a healthy volunteer's optical nerve head region and fovea region for data processing and comparison. The results are shown in FIGS. 11A-11H.
  • the differential images of FIGS. 11G-11H were obtained and showed the extra microvascular information extracted by the refined GOCTA method (which includes texture removal). The results demonstrated that the improved GOCTA method could achieve even higher sensitivity.
  • the calculated surface data was plotted (red curves) on the cross-sectional structural images at the positions of 0 mm, 0.7 mm, 1.3 mm, 2.0 mm, 2.6 mm, 3.3 mm, 4.0 mm, 4.6 mm, 5.3 mm and 6.0 mm, as shown in FIGS. 12A-12T. The surface data fit well with the structural images, except for some local areas with high curvature (marked by yellow arrows). Due to the large depth range used in the GOCTA method to calculate en face microvascular images, the depth shifts at the positions marked by yellow arrows could be covered.
  • FIGS. 15A and 15B show the differential images for the optical nerve head region and fovea region, respectively. It was found that, for the GOCTA method, a 1/4 spectral band and 1/2 sampling band of the spectral fringes could be used to calculate en face microvascular images without significant degradation of image quality.
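The sub-band speed-up described above (e.g. a 1/4 spectral band and 1/2 sampling band) amounts to cropping and decimating each spectral fringe before further processing. The sketch below assumes a central crop of the band, since this excerpt does not say which portion of the spectrum was kept.

```python
import numpy as np

def subband(fringe, spectral_frac=0.25, sampling_step=2):
    """Reduce a spectral fringe to a central sub-band (spectral_frac of
    the full band) sampled every `sampling_step` points. The fraction
    and step match the values reported above; the central-crop
    windowing is an assumption."""
    n = fringe.shape[-1]
    keep = int(n * spectral_frac)
    start = (n - keep) // 2
    return fringe[..., start:start + keep:sampling_step]
```

Processing 1/8 as many spectral samples per A-scan is what drives the reported multi-fold reduction in processing time.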
  • the differential microvascular images of FIGS. 16C, 16E, 16H and 16J were obtained by subtracting the SVOCT and OMAG images from the GOCTA images; from these differential images, it could be found that the enhanced GOCTA method (using additional A-scan lines and texture removal) provided a higher sensitivity for extracting vascular information.
  • the data processing time was also analyzed for the GOCTA method using both CPU and GPU processing. Data processing was performed for different spectral bands and different sampling bands on a laptop (CPU: i7-4720HQ, memory: 16 GB, GPU: NVIDIA GeForce GTX 970M, operating system: Windows 8.1) using GOCTA in MATLAB®.
  • the data processing time for each step of GOCTA on two B-scans from the same position is shown in FIG. 17. It is noted that the time for the surface calculation was obtained by dividing the entire time by the number of scanning steps in the slow scanning direction. When measuring data processing time on the GPU, the surface calculation was still performed on the CPU. Since GPU processing has the capability of parallel processing, the time for each step was very small.
  • the processing time for each step was measured by performing that step on 100 pairs of two B-scans and then calculating the mean value. It was found that, by using a 1/4 spectral band and 1/2 sampling band, the data processing speed was improved by almost 8 and 4 times on CPU and GPU, respectively. The processing time was also measured for the entire 3D data set on the CPU and GPU processors, respectively, and the results are shown in FIG. 18; here, the data processing speed was improved by almost 9 and 4 times on CPU and GPU, respectively.
  • FIGS. 19B-19D show the normalized en face images obtained by cmOCT, SVOCT and AGOCTA, respectively.
  • MIP: mean intensity projection
  • i and j are the pixel indices
  • M and P are the correlation window sizes, and both were set to 3 in this work.
  • I2N−1 and I2N are the two frames of intensity-based structural images from the same position
  • Ī is the mean value in the correlation window. All of the resultant cross-sectional correlation images were multiplied by the corresponding structural images to suppress the background noise.
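The cmOCT comparison above (an M = P = 3 correlation window between the two frames from the same position) can be sketched as a windowed correlation-coefficient map. Mapping flow as 1 − cc and the handling of border pixels and zero-variance windows are choices made here for illustration, not taken from the source.

```python
import numpy as np

def cmoct_map(frame1, frame2, win=3):
    """Windowed correlation-coefficient map between two B-scans from
    the same position (cmOCT-style sketch). High output values mark
    decorrelation, i.e. candidate flow regions."""
    h, w = frame1.shape
    r = win // 2
    cc = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            a = frame1[i - r:i + r + 1, j - r:j + r + 1].astype(float)
            b = frame2[i - r:i + r + 1, j - r:j + r + 1].astype(float)
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            cc[i, j] = (a * b).sum() / denom if denom > 0 else 0.0
    return 1.0 - cc  # decorrelation map
```

In practice the resulting map would then be multiplied by the structural image, as the text describes, to suppress background noise; the windowing also explains the loss of lateral resolution noted for cmOCT later in this document.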
  • SNR: signal to noise ratio
  • CNR: contrast to noise ratio
  • A local region on a healthy volunteer's palm was also scanned, and the en face microvascular images were calculated by performing the cmOCT, SVOCT and AGOCTA algorithms on the same dataset. In this region, regular AGOCTA and SVOCT were performed to calculate the en face microvascular images, since texture noise was not found.
  • The results are shown in FIGS. 20A-20I.
  • FIG. 20A shows a photograph of the volunteer's palm with the marked region (6x6 mm) that was scanned.
  • FIG. 20B shows the calculated surface data.
  • By thresholding the en face microvascular images, the mask for blood flow signals and background was obtained, as shown in FIG. 20C, where red and blue represent blood flow and background signals, respectively.
  • FIGS. 20D-20F show the microvascular images obtained by cmOCT, SVOCT and AGOCTA, respectively.
  • for cmOCT, the correlation window size was 3x3 pixels, and the obtained cross-sectional correlation coefficient images were multiplied by the intensity-based structural images to suppress the background noise.
  • the correlation window decreased the lateral resolution, resulting in the discontinuity of small blood vessels, as shown in the regions marked by dashed ellipses in FIG. 20D.
  • the intensity values covered by the mask of FIG. 20C were used to calculate SNR and CNR for quantitative comparison, with the histograms shown in FIGS. 20G-20I. These three algorithms provided similar SNRs and CNRs.
  • an FFT was performed on 12 uniformly distributed cross-sectional images to calculate the structural images at positions of 0.5 mm, 1.0 mm, 1.5 mm, 2.0 mm, 2.5 mm, 3.0 mm, 3.5 mm, 4.0 mm,
  • the results are shown in FIGS. 21A-21L, where the red curves show the estimated surface data. It was found that, apart from some local high-frequency components (marked by the white arrows), the obtained curves matched the skin's surface well. The accuracy of the surface data could be improved by increasing the size of the matrix of original surface positions uniformly distributed on the xy plane.
  • the main advantage of the AGOCTA method is the data processing speed.
  • Data processing was performed on a laptop (CPU: i7-4720HQ, memory: 16 GB, GPU: NVIDIA GeForce GTX 970M, operating system: Windows 8.1) using the published cmOCT, SVOCT and the proposed AGOCTA algorithms in MATLAB®.
  • the data processing time was measured for 2 B-scans from the same position on CPU and GPU, respectively. The results were shown in FIG. 22 and FIG. 23, respectively.
  • for FIG. 22, the en face microvascular images were calculated within a depth range of 10% of the total OCT ranging depth, resulting in a convolving kernel with a length of 16 pixels.
  • the surface data for each frame was calculated by the following steps in both cmOCT and SVOCT: 1) median filter and threshold the structural images; 2) index the position of the first non-zero value in each A-line; 3) perform a 4th-order polynomial fit to smooth the curve.
  • the surface calculation time for the AGOCTA method was obtained by dividing the whole surface calculation time by the number of scanning steps in the slow scanning direction. Using the AGOCTA method, the data processing speed on CPU was improved by nearly 187 and 2 times compared to cmOCT and SVOCT, respectively.
  • a GPU-based parallel computing library was employed in MATLAB for data processing, and the processing time was measured.
  • image filtering was performed on GPU for cmOCT and SVOCT, while for the AGOCTA method, the entire step was performed on CPU.
  • on GPU, the data processing speed was improved by 38 and 1.5 times compared to cmOCT and SVOCT, respectively.
  • the data processing time was also measured for the entire 3D dataset on both CPU and GPU; the results are shown in FIG. 24.
  • the data processing was simulated under real-time imaging mode by calculating the acquired interference fringes frame by frame (1024x1365), as each B-mode image dataset became available, on both CPU and GPU. It was found that, on both CPU and GPU, AGOCTA provided the fastest data processing speed.
  • One set of pre-treatment imaging results is presented in this example to demonstrate that the AGOCTA method can be performed in the clinical setting for imaging microvasculature, as shown in FIGS. 26A-26J.
  • FIGS. 26(e) and (h) were obtained by SVOCTA, while FIGS. 26(f) and (i) were obtained by the AGOCTA method; texture noise was removed in (e) and (f).
  • the present method was able to remove texture noise on skin lesion and provide a better quality of microvascular images.
  • FIGS. 26(j)-(l) show the histograms of the intensity values covered by the mask of FIG. 26(c) for SNR and CNR comparisons.
  • the AGOCTA method was also performed on sub-bands of the spectral fringes to accelerate the data processing, and a local region (6x6 mm²) on a healthy volunteer's palm was scanned and processed to demonstrate the performance.
  • Data processing times and microvascular images of the sub-bands are shown in FIG. 27 and FIGS. 28A-28M, respectively.
  • a local region (6x6 mm²) of the scalp was scanned on a healthy volunteer. Before scanning, the local region of the scalp was shaved to remove hair.
  • the obtained microvascular images are shown in FIGS. 29E-29J. Comparing FIGS. 29(e)-(g) with FIGS. 29(h)-(j), it was found that the texture artifacts (marked by dashed yellow circles) caused by hair follicles were removed by the proposed method.
  • FIGS. 30A-30I show images obtained using different example acceleration approaches. The image quality is maintained when using the "skipped convolution" method and/or the spectral sub-band method.



