CN108471949B - Reflective mode multispectral time-resolved optical imaging method and device for tissue classification - Google Patents


Info

Publication number
CN108471949B
CN108471949B (granted from application CN201680076887.XA)
Authority
CN
China
Prior art keywords
tissue
burn
light
image
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201680076887.XA
Other languages
Chinese (zh)
Other versions
CN108471949A (en)
Inventor
John Michael DiMaio
Wensheng Fan
Jeffrey E. Thatcher
Weizhi Li
Weirong Mo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spectral MD Inc
Original Assignee
Spectral MD Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/US2015/057882 external-priority patent/WO2016069788A1/en
Application filed by Spectral MD Inc filed Critical Spectral MD Inc
Publication of CN108471949A publication Critical patent/CN108471949A/en
Application granted granted Critical
Publication of CN108471949B publication Critical patent/CN108471949B/en


Classifications

    • A61B5/7275 Determining trends in physiological measurement data; predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • A61B5/0064 Body surface scanning
    • A61B5/0071 Measuring fluorescence emission
    • A61B5/0261 Measuring blood flow using optical means, e.g. infrared light
    • A61B5/0295 Measuring blood flow using plethysmography
    • A61B5/1455 Measuring characteristics of blood in vivo using optical sensors, e.g. spectral photometrical oximeters
    • A61B5/444 Evaluating skin marks, e.g. mole, nevi, tumour, scar
    • A61B5/445 Evaluating skin irritation or skin trauma, e.g. rash, eczema, wound, bed sore
    • A61B5/7203 Signal processing for noise prevention, reduction or removal
    • A61B5/7257 Waveform analysis using Fourier transforms
    • A61B5/7282 Event detection, e.g. detecting unique waveforms indicative of a medical condition
    • G16H30/20 Handling medical images, e.g. DICOM, HL7 or PACS
    • G16H40/20 Management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
    • G16H50/20 Computer-aided diagnosis, e.g. based on medical expert systems
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Hematology (AREA)
  • Cardiology (AREA)
  • Dermatology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Mathematical Physics (AREA)
  • Optics & Photonics (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)

Abstract

Some aspects of the present invention relate to apparatus and techniques for non-invasive optical imaging capable of acquiring multiple images corresponding to different times and different frequencies. In addition, the embodiments described herein are useful in a variety of tissue classification applications, including assessing the presence and severity of tissue conditions such as burns or other wounds.

Description

Reflective mode multispectral time-resolved optical imaging method and device for tissue classification
Statement regarding federally sponsored research or development
This invention was made in part with United States government support under contract no. HHSO100201300022C awarded by the Biomedical Advanced Research and Development Authority (BARDA), within the Office of the Assistant Secretary for Preparedness and Response of the U.S. Department of Health and Human Services. The government has certain rights in the invention.
Technical Field
The systems and methods disclosed herein relate to non-invasive clinical imaging, and more particularly, to non-invasive imaging of subcutaneous blood flow, diffuse reflectance spectroscopy, and computer-aided diagnosis.
Background
Optical imaging is an emerging technology with the potential to improve disease prevention, diagnosis, and treatment in the clinic, physician's office, or operating room. Optical imaging techniques exploit differences in photon absorption and scattering at different wavelengths to enable non-invasive discrimination between soft tissues, and between native soft tissues and tissues labeled with endogenous or exogenous contrast agents. These differences in photon absorption and scattering provide specific tissue contrast and enable the study of functional and molecular-level activity underlying health and disease.
Disclosure of Invention
Tissue classification for burn assessment
Aspects of the invention described herein relate to devices and methods for classifying tissue using optical imaging. There is a long-felt need for non-invasive imaging techniques that can classify damaged tissue, particularly techniques that support rapid treatment and assessment of wound severity before, during, and/or after the initiation of treatment, so that the progress of recovery can be monitored. One example of such a need is the desire for better imaging techniques for treating and assessing burn severity, for example in routine burn care and/or burn care following a mass-casualty event.
To illustrate why burn treatment in a mass-casualty event requires better imaging techniques, consider the following. There are currently only about 250 burn specialists and 1,800 burn beds in the United States, and these burn facilities operate at 95% capacity. Any sudden increase in the number of burn patients requires immediate identification and triage of patients, all of which demands the attention of burn specialists. In addition, physicians who are not burn specialists must address patients' needs during a catastrophic event: when a nuclear emergency, forest fire, or large-scale fire and smoke accident occurs, the number of patients requiring burn treatment increases suddenly and rapidly. Because assessing burns is difficult even for burn professionals, and such assessment is subjective in the current state of the art, there is a clear need for a device that enables both burn specialists and non-specialist physicians to quickly identify or triage patients who require immediate emergency treatment and/or care by a burn specialist. In a mass-casualty scenario, as many as 10,000 patients may need thermal burn treatment. Given the limited number of specialist physicians and burn centers in the United States, there is a public health need for burn assessment that can be rapidly deployed on a broad scale by non-specialist medical personnel in such events. Beyond burns, there are many other needs in the art for methods and devices that can rapidly classify tissue and differentiate damaged from undamaged tissue.
The standard of care for burns begins with estimating burn depth using visual and tactile cues. An effective treatment plan can be chosen only after the burn has been classified by depth. In general, superficial burns and full-thickness burns can be classified by appearance, but classifying a partial-thickness burn as "superficial" or "deep" is often delayed, because the full extent of the skin damage does not become apparent until the partial-thickness burn has evolved.
There are several reasons why it is important to classify the depth of a partial-thickness burn quickly and accurately. First, the treatment plans for superficial partial-thickness and deep partial-thickness burns differ significantly: a superficial partial-thickness burn requires only topical ointment and heals naturally in 7-21 days, whereas a deep partial-thickness burn requires surgical excision and grafting from a donor-skin site. Second, it is important to assess as early as possible whether a burn requires surgical intervention, in order to minimize scarring and bacterial colonization of the wound. Delays associated with classifying partial-thickness burns have been shown to increase the risk of infection, metabolic derangement, and organ failure. Furthermore, delaying excision has recently been shown not to improve burn outcomes. Finally, burns spanning multiple regions, often involving different burn depths, are common; excision and grafting of such complex burns require expert planning and careful differential excision to ensure optimal treatment of the entire burn area.
With only 250 burn specialists and 1,800 designated burn beds (operating at 95% capacity) in U.S. hospitals, burn treatment resources are scarce. The first line of care for burn patients is therefore often a non-specialist who lacks experience in burn treatment, which can result in delayed, sub-optimal treatment and an increased chance of complications. Currently, the accuracy of clinical burn depth diagnosis is estimated at 70-80% for specialists, while non-specialists achieve an accuracy of only about 60%.
The most promising candidate solutions for improving burn depth estimation include fluorescent dyes, high-frequency ultrasound, nuclear magnetic resonance imaging (MRI), photography, thermography, and laser Doppler imaging (LDI). LDI is the only technique with FDA approval in the United States for an indication that includes the diagnosis of burn wounds. It is non-invasive, has been shown to be effective in wound assessment, and is currently available to burn professionals. Although the technique is available, it is rarely used, and mainly in large burn centers: its most significant drawbacks (the requirement for a completely exposed wound, patient immobility, and a 48-hour delay after the burn) result in low clinical applicability, and acquisition is also slow. Thermographic imaging, like LDI, is non-invasive and non-contact, but requires the patient to undergo 15 minutes of temperature equilibration in a controlled environment, and is currently not suitable for classifying burn depth. Color photography is of limited use in burn assessment because it provides little information beyond visual observation and still requires a burn physician to interpret the image. Intravascular dyes such as indocyanine green (ICG) provide information about blood flow in tissue; this approach has been studied in burns and can identify areas of high or low blood perfusion. However, it is invasive, requires a dye injection for each image acquisition attempt (and thus multiple injections for some procedures, depending on the number of images acquired), and is expensive and time-consuming.
Another promising approach, multispectral imaging (MSI), measures the reflectance of the burn surface at selected wavelengths of visible and near-infrared light. Different tissue types are composed of specific combinations of tissue components that interact differently with light, and these light-tissue interactions form a characteristic reflectance signature that MSI can capture and use to classify burn severity. MSI also enables evaluation of tissue through topical wound dressings and bandages, and tolerates minor patient movement. These features make MSI an attractive solution.
MSI has previously been tested in a clinical setting, with the earliest results obtained by Anselmo et al. in 1977 and Afromowitz et al. in 1988. These tests successfully classified different burn depths, but the time required to complete one acquisition was on the order of days to weeks. With the advances in imaging technology and computer processing power of recent decades, we are now better able to implement the MSI technique as a routine part of patient examination or surgery.
The accuracy of the MSI technique depends on identifying the most valuable wavelengths for the clinical device to employ. Here, we used a porcine burn model to test the ability of MSI at different wavelengths to evaluate the initial burn site and partial-thickness burns during surgical debridement (also known as burn excision). As discussed in the theory section of the present invention, the selected wavelengths take into account the absorption peaks of the main components of skin tissue (blood, melanin, water, fat, and extracellular matrix (ECM)), and previous clinical studies have suggested their ability to classify burns. The clinical validity of the wavelengths was verified by histopathological evaluation of the same samples.
Optical imaging techniques provide a non-contact, rapid assessment method at or near the tissue surface. Tissue perfusion can be studied optically because the light absorption of hemoglobin in the blood within tissue is significant: these blood-borne chromophores found in subcutaneous and superficial tissues have optical characteristics (primarily absorption) that contrast with the surrounding tissue. The photoplethysmography (PPG) signal is a time-varying signal associated with the perfusion of blood through tissue, arising from specific characteristics of blood flow in the tissue. Blood carrying hemoglobin-laden cells flows through the vessels in the tissue, and the blood volume varies cyclically with the cardiac cycle. The resulting dynamic changes in blood volume can be used to assess tissue health, including the associated blood-flow perfusion, cardiac function, and peripheral vascular health. PPG imaging is a spectral optical imaging technique capable of monitoring blood perfusion through the superficial surface of tissue.
Non-contact, reflectance-mode PPG imaging is achieved by analyzing backscattered light from perfused tissue. When light is incident on tissue, a portion of it is scattered within the tissue, interacts with chromophores in the blood, and is finally scattered back through the tissue surface. Observed over time, this light-tissue interaction adds a weak AC modulation of about 1-2% of the total light reflected from the tissue. This small AC signal in the backscattered light can be analyzed for information about the location, relative blood volume, and blood concentration of the associated arterial circulation. Images generated from this information provide a way to assess pathological conditions related to changes in tissue blood flow and pulse rate, including tissue perfusion, cardiovascular health, wounds such as ulcers, peripheral arterial disease, and respiratory health.
An optical imaging technique that can measure tissue blood perfusion reliably, inexpensively, and conveniently is of great value to the medical field. PPG imaging is such a technique and can be applied to burn and chronic wound care. We are particularly interested in burns because the technology promises burn patient assessment without the need for disposable or sterile body-contact devices.
Non-contact PPG imaging typically uses near-infrared (NIR) wavelengths as the illumination source to take advantage of their deeper photon penetration into tissue. A common configuration places the light source close to the target tissue to be imaged. Because the light travels a banana-shaped path through the tissue, the PPG signal can be acquired in the dark areas of the image; this typically requires a sensor with large dynamic range and high low-light sensitivity (usually a scientific-grade CMOS or CCD camera) to detect the PPG signal emanating from the non-illuminated areas. In the present invention, we studied how the illumination pattern affects the received PPG signal intensity, hypothesizing that stronger and more uniform illumination across the imager's entire field of view (FOV) would enhance the PPG signal intensity.
For example, in the experiments described in the present invention, we developed a prototype optical PPG system that uses a spatially uniform, DC-modulated illumination source. We present the theory of uniform illumination, evaluate PPG imaging performance, and compare it against other types of light sources. We evaluated the imaging system on a bench-top tissue phantom calibrated to tissue-like optical properties and performed animal-model experiments. We confirmed that PPG imaging with uniform illumination improves the performance of imaging superficial blood vessels in an animal burn model.
In some embodiments described herein, we propose non-contact, reflectance-mode photoplethysmography (PPG) imaging methods and systems that can identify the presence of skin burns during burn debridement procedures. These methods and systems can assist clinicians and surgeons during skin wound care procedures such as burn excision and wound treatment decisions. In some experiments, we systematically varied illumination uniformity and intensity and present our conclusions. For our PPG imaging device, LED arrays, tungsten lamp sources, and finally high-power LED emitters were investigated as illumination methods. These three illumination sources were tested in a controlled tissue phantom and an animal burn model. We found that a low-heat, uniform illumination pattern using a high-power LED emitter significantly improved the acquired PPG signal in our animal burn model. These improvements allow the PPG signals of different pixels to be compared in the time and frequency domains, simplify the illumination subsystem, and eliminate the need for a large-dynamic-range camera. By comparison of burn model outputs, such as animal burn data and blood volume in controlled tissue phantoms, our optical improvements can make more images clinically usable, thereby facilitating burn assessment.
The embodiments described herein may be used to identify and/or classify the severity of decubitus ulcers, hyperemia, limb deterioration, Raynaud's phenomenon, chronic wounds, abrasions, scratches, bleeding, fractures, punctures, penetrating wounds, cancer, or any type of tissue change in which the appearance and quality of the tissue differs from its normal state. The devices described herein may also be used to monitor healthy tissue, to promote and improve wound healing (for example, by enabling faster and more accurate determination of debridement margins), and to assess the healing progress of a wound or disease, particularly after treatment. In some embodiments described herein, an apparatus is provided that can identify healthy tissue adjacent to injured tissue, determine excision margins, monitor recovery after implantation of a prosthesis such as a left ventricular assist device, assess the viability of a tissue graft or regenerative cell implant, or monitor surgical recovery, particularly after reconstructive surgery. Moreover, the embodiments described herein may be used to assess changes in a wound or the generation of healthy tissue after injury, particularly after introduction of therapeutic agents such as steroids, hepatocyte growth factor, fibroblast growth factor, or antibiotics, or after introduction of regenerative cells, such as isolated or aggregated cell populations including stem cells, endothelial cells, and/or endothelial precursor cells.
Embodiments of the invention employ two optical imaging techniques, PPG imaging and MSI, which may be implemented with the same system hardware. The two modalities complement each other in assessing tissue properties. For example, PPG imaging measures the strength of arterial blood flow just below the skin surface to distinguish viable from non-viable tissue, while MSI analyzes the wavelengths of light absorbed and reflected by the tissue and classifies the tissue by comparing the resulting reflectance spectra against a database of known reflectance spectra.
PPG imaging can acquire vital signs using techniques similar to those of pulse oximetry, including heart rate, respiratory rate, and SpO2. The PPG signal is generated by measuring the interaction of light with dynamic changes in vascularized tissue. Vascularized tissue expands and contracts in volume by approximately 1-2% at the frequency of the cardiac cycle with each systolic pressure wave. This inflow of blood not only increases the volume of the tissue but also brings in additional, strongly light-absorbing hemoglobin. Thus, light absorption within the tissue oscillates with each heartbeat. Recording how light is absorbed through the tissue produces a plethysmogram, which is analyzed to identify changes in tissue blood flow. This information is then converted into the vital signs recorded by a pulse oximeter.
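The conversion from plethysmogram to vital signs described above can be sketched as a Fourier-domain peak search. This is a minimal illustration with synthetic cardiac and respiratory tones, not the invention's algorithm; the rates and band limits below are assumptions.

```python
import numpy as np

fs = 30.0                     # camera frame rate, Hz (assumed)
t = np.arange(0, 20, 1 / fs)  # 20 s recording
# Synthetic plethysmogram: cardiac tone at 1.1 Hz (66 bpm) plus a weaker
# respiratory tone at 0.25 Hz (15 breaths/min). Both rates are invented.
ppg = (1.0
       + 0.02 * np.sin(2 * np.pi * 1.1 * t)
       + 0.01 * np.sin(2 * np.pi * 0.25 * t))

spectrum = np.abs(np.fft.rfft(ppg - ppg.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def peak_in_band(lo_hz, hi_hz):
    """Frequency of the largest spectral peak within a physiological band."""
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return freqs[band][np.argmax(spectrum[band])]

heart_rate_bpm = 60 * peak_in_band(0.7, 3.5)  # cardiac band
resp_rate_bpm = 60 * peak_in_band(0.1, 0.5)   # respiratory band
```

Restricting the search to physiological bands is what separates the two rates even though both tones live in the same recorded waveform.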
In some cases, to generate an image based on the plethysmogram, we exploit the path of light through tissue. A small portion of the light incident on the tissue surface is scattered into the tissue, and a small portion of this scattered light exits the tissue through the same surface on which it was incident. This backscattered light is acquired over the entire tissue area with a sensitive digital camera, so that each pixel of the imager carries a characteristic PPG waveform determined by the variations in scattered light intensity. To generate a 2D map of relative tissue blood flow, the amplitude of each pixel's waveform is measured; to improve accuracy, we average the amplitude over many heartbeat samples.
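A minimal sketch of the 2D blood-flow mapping described above: given a stack of frames, measure each pixel's PPG amplitude at the cardiac frequency. The frame rate, modulation depth, and synthetic "perfused" half of the frame are illustrative assumptions.

```python
import numpy as np

fs = 30.0                      # camera frame rate, Hz (assumed)
n_frames, h, w = 300, 8, 8     # 10 s of tiny 8x8 frames for illustration
t = np.arange(n_frames) / fs
frames = np.full((n_frames, h, w), 500.0)  # baseline reflected intensity

# Simulate perfusion in the left half only: 1.5% modulation at 1.2 Hz.
pulse = 1 + 0.015 * np.sin(2 * np.pi * 1.2 * t)
frames[:, :, : w // 2] *= pulse[:, None, None]

# Per-pixel spectrum along the time axis, then read off the AC amplitude
# at the cardiac bin (rfft magnitude of a sinusoid of amplitude A is A*N/2).
spectrum = np.abs(np.fft.rfft(frames - frames.mean(axis=0), axis=0))
freqs = np.fft.rfftfreq(n_frames, 1 / fs)
cardiac_bin = np.argmin(np.abs(freqs - 1.2))
amplitude_map = 2 * spectrum[cardiac_bin] / n_frames  # 2D blood-flow map
```

The perfused half of `amplitude_map` comes out near the simulated AC amplitude (500 × 0.015 = 7.5) while the unperfused half stays near zero, which is the contrast the 2D map is meant to show.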
MSI measures the surface reflectance of selected wavelengths of visible and near-infrared light. MSI is well suited to burns because different tissue types, such as viable and necrotic tissue, are composed of specific combinations of tissue components that interact differently with light. These differing light-tissue interactions form characteristic reflectance signatures that MSI captures. Spectral signatures can be collected from a patient's burn and compared with a database of known spectral signatures to characterize the burn. Although the number of specific wavelengths with which MSI describes tissue may be small compared to newer hyperspectral imagers, MSI offers advantages in spatial resolution, spectral range, image acquisition speed, and cost. Spectral identification of burn severity was proposed as a supplement to clinical observation during initial patient assessment as early as the 1970s. In 1977, the feasibility of identifying burn severity by studying the characteristic optical reflectance of burns of different depths was demonstrated using a NASA-developed camera equipped with interchangeable filters. Other groups have since had some success characterizing burn tissue with this technique. They showed that MSI can improve the judgment of burn depth compared to clinical judgment alone, but reported that MSI was limited in clinical application by technical difficulties, such as specular reflectance from moisture on the skin surface. Most importantly, when the MSI technique was first developed, limitations in data processing meant that acquisition required several days, a constraint that today's engineers no longer face given modern computing.
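The database-comparison step described above can be illustrated with a nearest-neighbor match between a measured reflectance signature and a small library of reference signatures. The wavelength bands, reference spectra, and class labels below are invented for illustration; they are not the patent's data.

```python
import numpy as np

# Hypothetical MSI bands (nm) and a toy library of reflectance signatures,
# one value per band per tissue class. All numbers are illustrative.
wavelengths_nm = [420, 542, 581, 726, 800, 860, 972]
library = {
    "healthy":             np.array([0.30, 0.25, 0.24, 0.55, 0.60, 0.62, 0.50]),
    "superficial_partial": np.array([0.35, 0.30, 0.29, 0.50, 0.54, 0.55, 0.46]),
    "deep_partial":        np.array([0.42, 0.40, 0.39, 0.46, 0.48, 0.49, 0.44]),
    "full_thickness":      np.array([0.50, 0.49, 0.48, 0.45, 0.44, 0.44, 0.42]),
}

def classify(measured):
    """Assign the label of the reference signature at minimum Euclidean distance."""
    return min(library, key=lambda k: np.linalg.norm(measured - library[k]))

measured = np.array([0.41, 0.39, 0.38, 0.47, 0.49, 0.50, 0.45])
label = classify(measured)  # nearest to the "deep_partial" reference
```

A deployed classifier would use validated spectra and typically a trained model rather than raw nearest-neighbor matching, but the comparison-to-database idea is the same.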
Tissue classification for amputation
There are approximately 185,000 lower-limb amputations annually in the United States, and more than two million American adults are amputees. The most important risk factor for amputation, with or without diabetes mellitus (DM), is peripheral artery disease (PAD), and diabetic patients account for more than half of all amputees. Compared with the general population, diabetic patients have a 10-fold increased risk of lower-limb amputation, and more than 60,000 amputations each year are attributable to diabetic lower-limb ulcers. Approximately 30 out of every 100,000 people per year require amputation secondary to diabetes and, owing to the aging population of the United States, this incidence is expected to increase by 50% in the next decade.
Amputation imposes significant costs, financial and otherwise, on the U.S. medical system each year. In a study of the Department of Veterans Affairs (VA) system alone, the cost burden associated with diabetic limb loss exceeded two hundred million dollars ($60,647 per patient) in a single year (2010). In the United States, hospital-related costs for all lower-limb amputations in 2009 amounted to $8.3 billion, including rehabilitation and prosthesis costs, and the lifetime cost of a major amputation is approximately $500,000 per patient. Beyond the heavy economic burden of amputation, patients experience significant morbidity and reduced quality of life. Most importantly, the functional status of these patients is compromised: only 25% of diabetic patients with major lower-limb amputations are able to walk outside the home with a prosthetic limb. As the amputation level becomes more proximal, the potential for a successful return to ambulation decreases because of greater tissue loss and the greater energy expenditure of walking.
Although the surgeon preferentially preserves as much limb tissue as possible during amputation, he or she must weigh this against the likelihood of primary wound healing at a given level of amputation (LOA), which decreases with more distal amputations. The appropriate LOA is initially selected by the surgeon's clinical judgment (based on patient history and physical examination, including skin color, temperature, peripheral pulses, and wound bleeding during debridement, together with clinical factors such as diabetes, smoking, and nutritional status), possibly combined with various non-invasive tests for quantifying tissue blood flow and/or oxygenation, such as the ankle-brachial index (ABI), transcutaneous oxygen measurement (TCOM), or skin perfusion pressure (SPP). However, although current guidelines recommend this practice, only half of patients undergoing lower-limb amputation are evaluated with the most commonly used test (ABI). Moreover, one study showed that up to 50% of patients with palpable dorsalis pedis pulses and 30% of patients with a normal ABI required re-amputation after an initial forefoot amputation. In the same study, almost 50% of patients who underwent concurrent revascularization also required re-amputation, despite substantial efforts at distal limb revascularization. Although TCOM initially showed promise for reliably predicting primary wound healing after amputation, its use remains controversial because no sufficiently large, authoritative study has clarified the role of TCOM in clinical practice. Furthermore, TCOM measurements are affected by physiological conditions such as body temperature, and TCOM electrodes can only analyze small areas of skin. Thus, despite decades of availability, TCOM has not been incorporated into routine clinical practice.
Given the challenging trade-off between maximizing tissue retention and minimizing the risk of primary wound non-healing, and the primary reliance on clinical judgment to determine the appropriate LOA, the published re-amputation rates are far from optimal. The re-amputation rate varies with the initial level of amputation, from approximately 10% for above-knee amputations (AKA) to 35% for foot amputations that require revision to a more proximal level. Limited data are available on the direct costs of re-amputation, but it is evident that a major portion of the billions of dollars spent annually on treatment associated with diabetic amputation goes to amputation revision, readmission, and the wound care delivered between the primary surgery and the revision. With regard to morbidity and mortality, delayed and failed primary healing exposes the patient to increased risks, including infection. Moreover, delayed and failed primary healing after the initial amputation severely impacts the patient's quality of life. Patients requiring amputation revision have delayed physical rehabilitation and a delayed return to ambulation. These patients also have increased contact with the medical system and often undergo additional wound treatment prior to revision, outcomes that initial selection of an appropriate LOA would avoid. Finally, although re-amputation rates have been clearly published, no study has examined the extent to which surgeons' awareness of the risk of re-amputation leads to overly aggressive selection of a more proximal LOA. Indeed, it is plausible that some patients receive amputations far more proximal than necessary because their surgeons cannot confidently predict a high likelihood of healing at a more distal level.
Thus, a test that informs the LOA decision could reduce the re-amputation rate and protect patients from unnecessarily extensive tissue loss.
However, there is currently no gold-standard test for judging a diabetic patient's ability to heal a primary wound after amputation. Attempts to establish such a gold standard have focused on local assessment of tissue microcirculation alone. In this context, several tools known to accurately assess skin perfusion and oxygenation have been tested, including TCOM, SPP, and laser Doppler. However, in selecting an LOA, microcirculation assessment alone does not yield an estimate of tissue healing capacity accurate enough to replace clinical judgment. Characterizing local skin perfusion and oxygenation clearly does not provide sufficient information to quantify the healing potential of the tissue, because none of these techniques incorporates into its predictions the systemic effects of the same disease processes that also affect wound healing potential. Indeed, almost twenty years ago, in a review of the factors affecting wound healing after major diabetic amputation, one author concluded with respect to selecting the appropriate amputation level: "There is no gold-standard test to predict the potential for healing after major amputation, because not only is tissue blood flow relevant to wound healing, but all the other factors mentioned in this review (smoking, nutritional status, diabetes, infection) are important. Therefore, clinical judgment combined with various tests will remain the most common approach." Despite the author's prediction, Spectral MD has developed imaging devices that can combine information gleaned from optical measurements characterizing tissue blood flow physiology with important patient health metrics. Among other things, the foregoing problems are addressed in some embodiments by the machine learning algorithm of the present invention, which combines optical microcirculation assessment with the patient's overall health metrics to generate predictive information.
Using this method, our device is able to provide a quantitative assessment of wound healing potential, whereas current standards of clinical practice offer only qualitative assessments.
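The fusion described above, combining optical microcirculation features with patient health metrics, can be sketched as a simple feature-concatenation pipeline. This is a hedged illustration, not the disclosed algorithm: the feature names are assumptions, and the nearest-centroid model is a minimal stand-in for a trained classifier (the disclosure elsewhere mentions models such as quadratic discriminant analysis).

```python
import numpy as np

def fuse_features(ppg_amp, msi_spectrum, health_metrics):
    """Concatenate per-site optical features (PPG amplitude, MSI
    reflectance spectrum) with patient-level covariates such as age or
    diabetes status. All feature choices here are illustrative."""
    return np.concatenate(([ppg_amp], msi_spectrum, health_metrics))

class NearestCentroid:
    """Minimal stand-in classifier: assigns the class whose mean
    training vector is closest in Euclidean distance."""
    def fit(self, X, y):
        y = np.asarray(y)
        self.classes_ = sorted(set(y))
        self.centroids_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        return self

    def predict(self, X):
        return [min(self.classes_,
                    key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]
```

In practice the optical features and health metrics would be scaled to comparable ranges before training, and a richer model would be fit to labeled outcomes (e.g., healed versus non-healed).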
Accordingly, one aspect of the present invention relates to an imaging system comprising: one or more light sources configured to illuminate a first tissue region; one or more image acquisition devices configured to receive light reflected from a second tissue region; one or more controllers configured to control the one or more light sources and the one or more image acquisition devices to acquire a plurality of images corresponding to different times and different frequency bands; and a processor configured to classify areas of the second tissue region into one or more clinical states based at least in part on the plurality of images.
Another aspect of the invention relates to a method comprising the steps of: illuminating a first tissue region with one or more light sources; receiving light reflected from a second tissue region with one or more image acquisition devices; acquiring a plurality of images of the second tissue region corresponding to different times and different frequency bands; and classifying an area of the second tissue region based at least in part on the plurality of images.
Another aspect of the invention relates to an imaging system, comprising: one or more light sources configured to illuminate a first tissue region; one or more image capture devices configured to receive light reflected from the second tissue region; means for acquiring a plurality of images of the second tissue region corresponding to different times and different frequency bands; means for classifying an area of the second tissue region based on the plurality of images.
Another aspect of the invention relates to a method of healing a wound or improving wound healing, comprising the steps of: (a) acquiring a plurality of images corresponding to different times and different frequency bands of a first tissue region and a second tissue region, wherein the second tissue region comprises at least a portion of a wound and the first tissue region comprises healthy tissue, such as with any of the systems described herein; (b) classifying an area of the second tissue region based on the plurality of images acquired in (a); (c) providing a therapeutic agent and/or treatment to at least a portion of the wound to induce wound healing.
Another aspect of the invention relates to a method of monitoring wound healing or wound recovery, comprising the steps of: (a) acquiring a plurality of images corresponding to different times and different frequency bands of a first tissue region and a second tissue region, wherein the second tissue region comprises at least a portion of a wound and the first tissue region comprises healthy tissue, such as with any of the systems described herein; (b) classifying an area of the second tissue region based on the plurality of images acquired in (a); (c) providing a therapeutic agent to at least a portion of the wound to cause the wound to heal; (d) repeating at least step (a) and step (b) after performing step (c).
Another aspect of the invention relates to a method of classifying a wound, comprising the steps of: (a) acquiring a plurality of images corresponding to different times and different frequency bands of a first tissue region and a second tissue region, wherein the second tissue region comprises at least a portion of a wound and the first tissue region comprises healthy tissue, such as with any of the systems described herein; (b) classifying an area of the second tissue region based on the plurality of images acquired in (a).
Another aspect of the invention relates to a method of debriding a wound comprising the steps of: (a) acquiring a plurality of images corresponding to different times and different frequency bands of a first tissue region and a second tissue region, wherein the second tissue region comprises at least a portion of a wound and the first tissue region comprises healthy tissue, such as with any of the systems described herein; (b) determining the debridement margins, such as the boundary where healthy tissue and necrotic tissue of the wound meet, based on the plurality of images acquired in step (a); (c) debriding the wound within the debridement margins.
Another aspect of the invention relates to a method of identifying a chronic wound, comprising the steps of: (a) acquiring a plurality of images corresponding to different times and different frequency bands of a first tissue region and a second tissue region, wherein the second tissue region comprises at least a portion of a wound and the first tissue region comprises healthy tissue, such as with any of the systems described herein; (b) classifying an area of the second tissue region as a region representing a chronic wound based on the plurality of images acquired in (a).
Another aspect of the present invention relates to a method of assessing the severity of a burn comprising the steps of: positioning the target proximate to the light source and the image acquisition device; illuminating a first tissue region of the target with the light source; acquiring a plurality of images of a second tissue region with the image acquisition device; classifying a burn status of areas of the second tissue region based at least in part on the plurality of images acquired by the image acquisition device; and calculating an estimate of the percentage of the target's total body surface area that is burned, based at least in part on the classification.
Another aspect of the invention relates to a device for assessing the severity of a burn of a subject, comprising: one or more light sources configured to illuminate a first tissue region; one or more image acquisition devices configured to receive light reflected from a second tissue region; one or more controllers configured to control the one or more light sources and the one or more image acquisition devices to acquire a plurality of images of the second tissue region; and a processor configured to classify a burn state of an area of the second tissue region based on the plurality of images, and to calculate an estimate of the percentage of the subject's total body surface area that is burned, based on the classification of the burn state.
Another aspect of the invention relates to a method for storing and updating data, the method comprising performing the following steps under control of a Program Execution Service (PES), wherein the PES comprises a plurality of data centers, each data center comprising one or more physical computing systems capable of executing one or more virtual desktop instances, each virtual desktop instance associated with a computing environment comprising an operating system capable of executing one or more application programs, and each virtual desktop instance accessible via a network by a computing device of a user of the PES: forming a bidirectional connection between the PES and a first computing device of the user; receiving a request from the first computing device to synchronize a dynamic database, including tissue status data, on the PES; accessing file metadata that indicates whether the dynamic database is to be synchronized with more than one computing device; determining whether the dynamic database is to be synchronized with the first computing device based, at least in part, on the file metadata; and, in response to determining that the dynamic database is to be synchronized with the first computing device, synchronizing the dynamic database with the first computing device using the bidirectional connection, wherein the synchronized dynamic database is stored locally at the first computing device and is accessible without a bidirectional connection between the PES and the first computing device.
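The synchronization decision in this aspect, accessing file metadata and determining whether the dynamic database should be synchronized with a particular device, might be sketched as follows. The metadata schema (the keys `sync_enabled` and `allowed_devices`) is purely hypothetical; the aspect above does not specify one.

```python
def should_sync(file_metadata, device_id):
    """Decide from file metadata whether the dynamic database of tissue
    status data should be synchronized with the requesting device.
    The metadata keys used here are assumed names, for illustration."""
    if not file_metadata.get("sync_enabled", False):
        return False  # database is not marked for synchronization
    allowed = file_metadata.get("allowed_devices")
    # None is taken to mean "synchronize with any of the user's devices".
    return allowed is None or device_id in allowed
```

Under this sketch, the PES would call `should_sync` on receiving the request and, if it returns true, push the database over the existing bidirectional connection for local storage on the device.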
Drawings
The disclosure will hereinafter be described in conjunction with the appended drawings and appendices, which are intended to illustrate and not to limit the disclosure, wherein like designations denote like elements. This patent or application document contains at least one drawing executed in color. Copies of this patent or application publication with color drawing(s) will be provided by the office upon request and payment of the necessary fee.
Fig. 1A shows an exemplary configuration of an imager that images a subject.
FIG. 1B is an illustration of an exemplary movement of an exemplary imaging detector.
FIG. 2 is an illustration of an exemplary user interface for acquiring an image.
FIG. 3 is an example illustration of multiple surface images of a target.
Fig. 4 is an exemplary diagram illustrating a stitching technique for therapy triage used in some alternative examples described herein.
FIG. 5 is an example illustration of the Rule of Nines and the Lund-Browder charts used for calculating total body surface area in some alternative examples described herein.
Fig. 6 is an example chart showing mortality grouped by age and burn size, from the American Burn Association.
Fig. 7 is an example illustration of a high resolution multispectral camera and available data used in some alternative examples described herein.
FIG. 8 is an example flow chart illustrating steps for tissue classification in some alternative examples described herein.
Fig. 9A, 9B, 9C and 9D are example images of tissue samples taken from adult piglets, where the performance of MSI, PPG and an alternative example of the invention are compared.
Figure 10 is an example illustration of a burn and debridement procedure.
Fig. 11 illustrates example images acquired by some alternative examples described herein, illustrating successful and unsuccessful tissue transplantation.
Fig. 12 shows example images of pressure sores, wherein an alternative example described herein is used to image an early-stage pressure sore and a normal camera is used to image a pressure sore visible on the skin surface.
Fig. 13 is an example diagram illustrating how some of the alternative examples described herein interact with a data cloud.
Fig. 14A-14C show a desktop system operating in a reflective mode.
Fig. 15 shows a tissue model in a Petri dish (Petri dish) and a model device simulating human pulsatile blood flow.
Figure 16 shows a circular in vivo thermal burn and debridement model on animal skin.
Figure 17 shows time-resolved PPG signal extraction.
Fig. 18A to 18C show comparison of spatial illumination intensity among an LED point light source (fig. 18A), a tungsten lamp (fig. 18B), and a modified LED emitter (fig. 18C) using a flat reflection plate as an imaging object.
Fig. 19 shows a comparison of intensity curves between three illumination patterns.
Fig. 20A-20C show the imaging results of the tissue model and underlying pulsatile model vessels using LED point light sources (fig. 20A), tungsten lamps (fig. 20B), LED emitters (fig. 20C).
FIG. 21 shows the relationship between the power spectral density of the PPG signal in the pulsatile region of the tissue-like model and the percentage of the imager's saturation-point light intensity (irradiance 0.004 W/m2) delivered by the LED emitter module.
Fig. 22 shows pixel classification of PPG signal intensity over an area of healthy pig skin based on illumination pattern variations on the skin.
Fig. 23A-23F show various images of the illumination pattern and images of a pig skin burn taken under different illumination patterns.
Figure 24 shows the location of a pig back burn.
Fig. 25 shows the sizes of the tissues in group I (left) and group II (right).
Figure 26 shows a diagram of exemplary debridement steps.
Fig. 27A to 27E show absorption spectra of different tissue components.
Figure 28 shows the histology of burn tissue of different burn severity in animal studies.
Figure 29 shows a histological section taken from successive tangential resections of each debridement in an animal study.
Figure 30 shows a graph of MSI data just after burn indicating that the reflectance spectra of each burn type are initially distinct. The graph shows four reflectance spectra obtained from all burn sites and healthy controls.
Fig. 31 plots the spectra for each burn type immediately after the burn and one hour after the burn.
Fig. 32 shows the reflection spectrum of all wavelengths at each of the cut-off layers. The figure plots the absorption spectra of healthy controls, once debrided healthy controls, mean burned tissue spectra at each excision, and mean wound spectra at each excision.
Figure 33 shows a wound debridement procedure. During debridement, a viable wound bed (a) suitable for grafting is exposed by removing necrotic tissue (b). The PPG imaging device detects differences in relative blood flow between these two tissues to distinguish them from each other. At the same time, the MSI technique differentiates tissues using reflectance spectra determined by the molecular and structural differences between the wound bed (a) and the necrotic burn tissue (b).
Figure 34 shows the composition of the reflectance-mode 2D PPG imaging system (left). Monochromatic light incident on the tissue surface is scattered within the tissue as it interacts with molecular structures. A small portion of the light returns to the camera. Measured over time, the changes in intensity of this backscattered light form the PPG waveform. Each pixel in the raw data cube contains a unique PPG waveform that can be analyzed to generate a single blood-flow image of the tissue (right).
Fig. 35 shows the composition of the multispectral imager, including a broad-spectrum illumination source, a digital camera, and a rotating filter wheel equipped with different filters that isolate predetermined wavelengths of the light reflected from the target surface (left). The system rapidly collects an image at each position of the filter wheel to generate a spectral data cube (right). Each pixel in the data cube represents the tissue's reflectance spectrum at low spectral resolution.
Figure 36 shows the steps involved in the deep partial thickness pig burn debridement model. Five time points, color photographs, data collection at each time point are shown.
Figure 37 shows the average thickness of each dermatome excision during excision of a pig burn (left), along with the average burn depth shown separately for severe and mild burns. Error bars reflect standard deviation. H&E staining of a partial-thickness burn shows the histologist's markings on the tissue (right).
Figure 38 shows histology of tangentially excised tissue samples from a deep partial-thickness burn. The numbers indicate the order of excision from the epidermis into the dermis, and the arrows indicate the most superficial portion of each harvested sample. The most severely burned tissue lies superficial to the yellow line. The mildly burned tissue lies between the black and yellow lines. Tissue deep to the black line was considered non-burned.
Figure 39 shows PPG imaging results of successive tangential excisions of a deep partial-thickness burn. After the first 1.0 mm layer of skin was removed, burned tissue in the wound bed remained evident, as shown by the smaller relative PPG signal. At a depth of about 2 mm to 3 mm (after the second excision), the PPG signal had recovered in the area of the burn wound.
Figure 40 shows multispectral images of successive tangential excisions of a deep partial-thickness burn. The more layers of skin removed, the less severely burned tissue remains. At the second debridement the burn was almost completely removed, and at the third debridement it was completely removed. Some error evidently occurred during the first debridement, with healthy wound bed misclassified as healthy skin. Such errors can be reduced by improvements in the algorithm and hardware, or by selecting more effective wavelengths.
Figure 41 shows the effectiveness of the MSI technique on various burns. Before operating, the surgeon must identify the tissue requiring excision (top). During surgery, physicians encounter burns of uneven depth. These images let the physician know which sites require further removal of burned tissue and which sites have reached viable wound bed (bottom).
FIG. 42 shows a test set classified by a previously trained quadratic discriminant analysis algorithm and compared with the actual classification labels to form a confusion matrix. The matrix shows the number of correct classifications along its central diagonal; incorrect classifications appear in the off-diagonal elements.
Fig. 43A-43C show an exemplary hardware system setup (fig. 43A), animal burn (fig. 43B), first resection of burned tissue (fig. 43C).
Fig. 44 shows an example burned skin.
FIG. 45 shows example steps for modifying a data set used to train a tissue classification algorithm.
Fig. 46A-46F illustrate an exemplary training set.
Fig. 47A to 47B show exemplary box plots: six classes across different frequency bands before outlier removal (Fig. 47A) and after outlier removal (Fig. 47B).
Fig. 48A to 48B show six exemplary classes in the 2D feature space with outliers present (Fig. 48A) and with outliers removed (Fig. 48B).
Fig. 49 shows an example: (A1) healthy condition, (A2) result before outlier removal, (A3) result after outlier removal, (B1) burn condition, (B2) result before outlier removal, (B3) result after outlier removal.
Fig. 50 shows a high-level diagrammatic overview of the generation of predictive information in accordance with the present invention, namely the combination of two optical imaging techniques, photoplethysmographic (PPG) imaging and multispectral imaging (MSI), with patient health metrics.
Fig. 51 shows an exemplary diagram of an apparatus designed to fuse the optical imaging techniques of photoplethysmography (PPG imaging) and multispectral imaging (MSI).
Figure 52 shows an example combining a DeepView Gen 1 PPG imager, an MSI camera, and input of the target's patient health metrics.
Fig. 53 shows the difference between the signal of burned tissue and the signal of a healthy wound exposed by debridement.
FIG. 54 illustrates six exemplary physiological classifications implemented in the disclosed MSI evaluation.
Fig. 55 graphically illustrates example results for PPG data alone, MSI data alone, and the combination of PPG and MSI data.
Fig. 56 shows exemplary PPG signals occurring in the hand, leg, and foot regions.
FIG. 57 illustrates an example process for training a machine learning diagnostic algorithm.
FIG. 58 shows an example clinical study flow chart.
FIG. 59 shows a graphical illustration of tissue involved in a conventional resection step.
FIG. 60 shows example steps for generating a classifier model for resection levels.
Fig. 61 shows an example clinical study flow chart.
FIG. 62 shows example statistical sample size analysis results.
Fig. 63A shows color marks for the example results of fig. 63B to 63F.
Fig. 63B-63F illustrate exemplary reference images, correctly labeled data images, classification results, and error images for various classification techniques.
Fig. 64A and 64B show a comparison of feature configurations of different classification techniques.
Fig. 65 shows an example block diagram of PPG output pre-processing.
FIG. 66 shows examples of locations for training scenarios, classification scenarios, cross-validation.
Fig. 67A to 67L show examples of correctly labeled data images, real images, and classification results for five different classification techniques.
Fig. 68 shows a confusion matrix for an exemplary cross-validation experiment.
FIG. 69 illustrates the accuracy of each classification result for an example feature set for classification.
Fig. 70A, 70B, and 71 illustrate example fiber optic systems that may be used to obtain the image data described herein.
Fig. 72 shows five example study time points and multiple probe positions for acquiring diffuse reflectance spectral data in the visible and Near Infrared (NIR) ranges.
Fig. 73 shows the average diffuse reflectance spectra of burned tissue, healthy skin, and wound tissue.
Fig. 74 shows the P-values versus wavelength for the comparison of burned tissue against healthy skin.
Figure 75 shows the P-values versus wavelength for the comparison of burned tissue against wound tissue.
Fig. 76A and 76B show the P values of the first data set in ascending order, with the corrected P value significance level.
Fig. 77A and 77B show the P values of the first data set in ascending order, labeled with the corrected P value significance level.
Detailed Description
Introduction
Alternative examples of the invention relate to systems and techniques for identifying, assessing, and/or classifying a target's tissue. Some alternative examples relate to devices and methods for classifying tissue, wherein such devices include optical imaging elements. Some optional examples described herein include reflectance-mode multispectral time-resolved optical imaging software and hardware that, when executed, can implement several of the tissue classification methods described herein.
There is a long-felt need for non-invasive medical imaging techniques capable of classifying tissue. Such classification encompasses wounds and tissue conditions such as decubitus ulcers, chronic wounds, burns, healthy tissue, tissue grafts, tissue flaps, vascular pathophysiology, congestion, and the like. In particular, there is a need for a technique that can assess the severity of a wound or tissue condition and also estimate the percentage of total body surface area damaged (%TBSA). The damaged %TBSA is defined as the surface area of the damaged tissue region divided by the entire body surface area, and may be expressed as a percentage (e.g., 40%), as a fractional value less than 1 (e.g., 0.4), or as a ratio (e.g., 2/5). Each of these representations is the "%TBSA" as used herein; they are numerically equivalent or approximately equivalent (e.g., scaled values, separate numerator and denominator outputs, and the like).
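The three numerically equivalent representations of damaged %TBSA described above can be illustrated with a short helper (an illustrative sketch, not part of the disclosure):

```python
from fractions import Fraction

def tbsa_damaged(damaged_area, total_area):
    """Express the damaged %TBSA as a percentage, a fractional value
    less than 1, and a ratio; all three are numerically equivalent.
    Areas are assumed to be in the same units (e.g., cm^2)."""
    value = damaged_area / total_area
    return {
        "percent": value * 100,                       # e.g. 40.0 (%)
        "fraction": value,                            # e.g. 0.4
        "ratio": Fraction(damaged_area, total_area),  # e.g. Fraction(2, 5)
    }
```

For example, a damaged region of 2 area units on a body of 5 area units yields 40%, 0.4, and 2/5 respectively.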
The optional examples described herein enable, in an automated or semi-automated manner, the assessment and classification of injured targets who require immediate emergency treatment as opposed to those with non-emergency needs, and also provide treatment recommendations. While superficial and superficial partial-thickness burns (e.g., first- and second-degree burns) typically heal without surgical intervention, deep partial-thickness and full-thickness burns (e.g., third- and fourth-degree burns) require surgical excision to prevent functional loss and excessive cosmetic deterioration. Indeed, early excision is associated with decreased mortality and shorter hospital stays. Some of the alternative examples described herein are particularly useful for burn management because they enable a physician to quickly assess the severity of a burn, so that even a non-burn specialist can quickly and accurately make treatment decisions, such as concluding that urgent surgery is required. Some optional examples can also assist the physician in performing a series of excisions carefully and more accurately (e.g., identifying the appropriate margins for an incision and/or debridement procedure). Other alternative examples described herein are particularly useful for enabling a physician to make treatment decisions, such as managing fluid volumes for resuscitation.
Moreover, in the burn field, it is often difficult to assess the full extent of a tissue wound until several days after injury. The delayed presentation of these wounds further complicates treatment, because it is often difficult even for an experienced burn physician to accurately determine the margin where necrotic or dying tissue meets healthy tissue that can recover without surgical intervention or debridement. Burn assessment typically relies on the practitioner's subjective examination of the skin, taking into account changes in skin sensitivity, texture, and overall health status. However, determining whether surgery is required demands accurate burn assessment, particularly assessment of burn depth. Beyond the decision of whether to operate, early detection and proper treatment of burns can avert infection and sepsis, so inaccurate burn classification and assessment can compromise the target's recovery. Moreover, minimal surgical intervention is desired to facilitate the healing process and minimize trauma to the target.
However, even when surgery is required, one of the greatest challenges facing physicians is distinguishing viable, healthy tissue from non-viable, necrotic or impending necrotic tissue. Even for experienced physicians, the typical end point for excision depth is the appearance of punctate bleeding. However, there are significant drawbacks to using this metric, including unnecessary removal of viable tissue during surgery. Moreover, control of bleeding during burn excision is difficult and requires extensive clinical judgment, accuracy, and experience.
In burn surgery, incomplete resection or excessive resection of tissue has life-threatening consequences. Incomplete burn excision can lead to placement of grafts over non-viable tissue, ultimately resulting in poor graft take. Incomplete burn excision can also lead to increased risk of infection and/or longer recovery times. In addition, excessive resection can result in excessive blood loss or bleeding from the resected surface, which can also affect graft take.
There is a great need for methods and devices for rapid quantitative assessment of burn severity over larger tissue surfaces. The methods and devices described herein may be used to provide rapid and accurate burn assessment, so that burn specialists can attend to patients with severe burns while non-burn specialists meet the needs of patients with minor burns. There is also a great need for methods and devices for rapid quantitative assessment of other wounds and tissue conditions, which can be used to rapidly and accurately assess decubitus ulcers, chronic wounds, subacute and dehisced wounds, traumatic wounds, lacerations, abrasions, contusions, diabetic ulcers, pressure ulcers, surgical wounds, trauma and venous ulcers, etc., and to rapidly and accurately identify the exact location for performing a tissue graft or tissue flap, vascular pathophysiology, congestion, and the margins where healthy tissue borders necrotic or impending necrotic tissue.
Throughout the specification, reference is made to one or more "wounds". It should be understood that the word "wound" is to be interpreted broadly, including open and closed lesions of a target such as a human or animal, particularly a mammal, such as skin tears, cuts, punctures, contusions, superficial lesions, or lesions, changes or defects resulting from disease. "Wound" also includes any damaged area of tissue in which fluid may or may not be produced as a result of injury or disease. Examples of such wounds include, but are not limited to: acute wounds, chronic wounds, surgical and other incisions, subacute and dehisced wounds, flaps and grafts, lacerations, abrasions, contusions, burns, diabetic ulcers, pressure ulcers, stomas, surgical wounds, trauma and venous ulcers, and the like.
Various alternative examples will be described below in conjunction with the figures for illustrative purposes. It is to be understood that other embodiments of the disclosure are possible and that various advantages can be obtained from the disclosed embodiments. All possible combinations and sub-combinations are within the scope of the invention. Some of the alternative examples described herein include similar components, which may be interchanged among the different examples of the invention.
Headings are included herein for reference and to aid in locating the various sections. These headings are not intended to limit the scope of what is described herein, which applies throughout the entire description.
Overview of alternative examples for burn assessment
Fig. 1A and 1B show an example of an alternative embodiment of the invention. The device shown in the figures is particularly suitable for whole-body assessment of burn targets. The device is particularly useful for burn triage, i.e., making clinical decisions about the need for immediate treatment. In this example, the detector 100 comprises an image acquisition device 102 and one or more light sources, in this example four light sources 101, 104, 118, and 120. The light sources 101, 104, 118, and 120 illuminate a tissue region, in this case tissue 103, which preferably comprises the entire body surface of the target facing the detector 100. In some alternatives, the one or more light sources may be light-emitting diodes (LEDs), tungsten halogen lamps, tungsten lamps, or any other lighting technology. The one or more light sources may be selected by the user to emit white light or light falling in one or more spectral bands as desired.
Many LEDs emit light in a narrow bandwidth (e.g., full width at half maximum below 50 nm), and a particular LED can be selected to emit light in a particular bandwidth. Typically, the one or more spectral bands are selected based on the light measurements most relevant to the kind of data sought and/or the clinical application. The one or more light sources may also be connected to one or more drivers that power and control the light sources. These drivers may be part of the light source itself, or separate from it. Multiple narrowband light sources, or broadband light sources with selectable filters (e.g., filter wheels), can be used to sequentially or simultaneously illuminate tissue 103 with light in multiple spectral bands. The center wavelengths of the selected spectral bands typically fall within the visible and near-infrared range, such as between about 400 nm and about 1100 nm (e.g., about 400 nm, 500 nm, 600 nm, 700 nm, 800 nm, 900 nm, 1000 nm, or 1100 nm, or within a range defined by any two of the aforementioned wavelengths).
In some alternatives, the light sources illuminate the tissue region with substantially uniform intensity. For example, a light disperser that is part of light sources 101, 104, 118, and 120 can provide a substantially uniform distribution of light intensity to tissue 103. The light disperser has the additional benefit of reducing undesirable specular reflections. In some cases, the signal-to-noise ratio of the signal obtained by the image capture device 102 can be significantly improved by using a spatially uniform, broad-spectrum illumination pattern with high-power LEDs. In some cases, a patterned light system may also be used, such as checkerboard-pattern illumination. In some alternative examples, the field of view of the image capture device is directed to a tissue region that is not directly illuminated by the light sources but is proximate to the illuminated region. For example, where the light intensity is substantially uniform, an image capture device such as image capture device 102 can read light from outside the illuminated area. Similarly, where a checkerboard pattern of illumination is used, the image capture device 102 can read the light from the non-illuminated portions of the checkerboard.
Moreover, while some of the alternative examples described herein function with substantially uniform-intensity light, other alternatives may employ non-uniform light, wherein the one or more light sources are arranged to minimize light intensity differences across the surface. In some cases, these differences can also be compensated during the data acquisition process, by back-end software, or by hardware logic. For example, non-uniform background illumination may be compensated using a top-hat transformation or other image processing techniques.
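The top-hat compensation mentioned above can be sketched as follows. This is a minimal illustration, not the device's actual pipeline; the synthetic frame, the structuring-element size, and the use of SciPy's `grey_opening` are assumptions made for demonstration.

```python
import numpy as np
from scipy.ndimage import grey_opening

def tophat_correct(image, size=15):
    """White top-hat transform: subtract a morphological opening (the
    slowly varying illumination background) from the raw image,
    leaving small bright features on a flat background."""
    background = grey_opening(image, size=(size, size))
    return image - background

# Synthetic frame: a smooth illumination ramp plus one small bright
# feature that the correction should isolate.
y, x = np.mgrid[0:64, 0:64]
frame = 0.5 * x.astype(float)      # non-uniform background (ramp)
frame[30:33, 30:33] += 40.0        # small feature of interest
corrected = tophat_correct(frame)  # ramp removed, feature preserved
```

After correction, the ramp is suppressed to near zero away from the feature, while the feature's contrast is retained.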
In some alternative examples, the light may be polarized as desired. In some cases, the light is polarized using reflection, selective absorption, refraction, scattering, and/or any method of polarizing light known in the art. For example, the polarization may employ prisms (e.g., Nicol prisms), mirrored and/or reflective surfaces, filters (e.g., polarizing filters), lenses, and/or crystals. The light may be cross-polarized or co-polarized. In some alternative examples, the light of the one or more light sources is polarized before illuminating the target. For example, a polarizing filter may be provided as part of the light sources 101, 104, 118, and 120. In some alternative examples, the light reflected from the tissue is polarized after reflection. For example, a polarizing filter may be provided as part of the image capture device 102. In other alternative examples, the light is polarized both before illuminating the target and after reflection. For example, a polarizing filter may be provided as part of the light sources 101, 104, 118, and 120 and as part of the image acquisition device 102.
The type of polarization technique employed depends on factors such as the angle of illumination, the angle of reception, the type of illumination source used, the type of data desired (e.g., measurements of scattered light, absorbed light, reflected light, emitted light, and/or fluorescent light), and the depth of the imaged tissue. For example, when illuminating tissue, some light is reflected directly off the upper layers of the skin as surface glare and surface reflection. This reflected light generally retains its polarization, unlike light that diffuses into the skin tissue, where it is scattered (e.g., backscattered) and changes direction and polarization. Cross-polarization techniques can be used to minimize the amount of glare and surface-reflected light read by the acquisition device while maximizing the amount of backscattered light read. For example, a polarizing filter may be provided as part of the light sources 101, 104, 118, and 120 and as part of the image acquisition device 102. In this arrangement, the light is polarized before illuminating the target 103. After the light is reflected from the target 103, the reflected light is polarized in a direction orthogonal to the first polarization, so as to measure the backscattered light while minimizing the amount of incident light reflected off the surface of the target 103 that is read.
In some cases, it is also desirable to image tissue at a particular depth. For example, imaging tissue at a particular depth may be applied in assessing a wound at that depth, finding and/or identifying the presence or absence of a cancerous tumor, determining the stage of tumor or cancer progression, or any other therapeutic application noted in the present invention. Tissue may be selectively imaged at depth based on optical properties and/or mean free path length using polarization techniques well known in the art.
In some alternative examples, other techniques for controlling imaging depth may also be employed. For example, the optical scattering properties of tissue change with temperature, and the depth of light transmission in the skin increases with decreasing temperature. In this way, the imaging depth can be controlled by controlling the temperature of the imaged tissue region. Likewise, the imaging depth may be controlled by pulsing (or flashing) the light source at different frequencies. Pulsed light penetrates deeper into the skin than non-pulsed light: the larger the pulse width, the deeper the light is transmitted. As another example, the imaging depth can also be varied by adjusting the light intensity, with greater intensity transmitting deeper.
As further shown in fig. 1A, image capture device 102 is configured to receive reflected light from tissue 103. The image capture device 102 can detect light from an illuminated area, a sub-area of an illuminated area, or a non-illuminated area. And as described below, the field of view of the image acquisition device 102 includes the entire body surface of the target facing the detector 100. When the illumination covers the entire target facing the detector, and the entire target facing the detector is in the field of view of the image acquisition device, classification speed can be improved and classification made easier. The image capture device 102 may be a two-dimensional charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) image capture device having suitable optical properties for imaging all or a portion of the illuminated tissue 103.
In some alternative examples, module 112 is a controller, classifier, and processor coupled to the detector 100. Module 112 controls the detector, including setting parameters such as the physical position of the detector, light intensity, resolution, and color filters, or any parameters of the camera and/or light sources described in the present invention. Module 112 also receives and processes data acquired by the detector, as described later in the invention.
In some alternative examples, module 112 is also coupled to module 114, where module 114 is a display and user interface ("UI"). The display and UI present information and/or data to the user, which in some optional examples includes the presence of a tissue condition, the severity of the tissue condition, and/or additional information about the target, including any of the information mentioned in this specification. Module 114 receives user input, which in some optional examples includes information about the patient such as age, weight, height, sex, ethnicity, skin tone, and/or blood pressure. Module 114 also receives the following user inputs: calibration information, a user-selected scan site, a user-selected tissue condition, and/or additional information for diagnosis, including any of the information mentioned in the present disclosure. In some alternative examples, some or all of the foregoing user inputs are automatically sent to module 112 without the user entering the information via module 114.
As shown in fig. 1B, in some alternative examples, the detector 100 can be moved in any direction or combination of directions, such as up, down, left, right, or diagonally. In some alternative examples, the detector also moves in a direction perpendicular to the target, bringing the detector closer to or further from the target. The detector is connected to, for example, a track or an articulated arm, and its position is controlled manually, automatically by the controller 112, or by a combination of both. In some alternatives, the light sources or the image capture device are fixed; in other alternatives, the two may be independently movable. In some alternative examples, a motor is coupled to the image capture device to automate its movement, causing the camera to image portions of the target. The camera may also be connected to a rail, track, and/or actuatable arm. The one or more light sources may illuminate the entire tissue region 103 as the image acquisition device moves, or the one or more light sources may be controlled during the scanning process to illuminate only the tissue portion to be imaged by the camera.
In an alternative example shown in fig. 1A, the target stands in an upright position against a background 110 while an image, or a portion of an image, of the target (e.g., the entire body of the target or a desired tissue site) is acquired. In some alternative examples, the background 110 is a support structure on or against which the target lies in a horizontal or inclined orientation when the image is acquired. Measurement plates 106 and 108 are provided to weigh the target when the image is acquired. In addition to or instead of the measurement plates, a biometric reader may be provided for measuring heart rate, temperature, body composition, body mass index, body shape, blood pressure, and other physiological data.
FIG. 2 illustrates an example UI 200 displayed on the display/UI 114 for capturing images by the device. In this alternative example, the user interface displays the field of view of the image acquisition device when the light source is illuminating the tissue 103. In some alternative examples, the user can position the field of view of the image acquisition device 102 to include the entire target 202. The user may also adjust image capture device 102 using zoom element 208 such that the target nearly fills its field of view. In some alternative examples, the user may also obtain other information about the target 202 using the user interface. For example, the user may select location 204 and location 206 to measure the target height. In some cases, the user may utilize the user interface to instruct the image capture device to capture an image of the target, such as by pressing capture image button 210.
When images of the target are acquired for tissue classification using these images, the light source (with associated filters, if provided) and the image acquisition device are controlled to acquire multiple images of the target, i.e., separate images associated with different spectral bands of reflected light and/or separated in time. Images acquired at different spectral bands can be processed according to the MSI technique to classify tissue regions, and temporally separated images can be processed according to the PPG technique to classify tissue. In some alternative examples, such two types of image sets are obtained, and fusing the results enables more accurate classification, as will be further described below.
For burn patients, images are acquired with the target in multiple orientations, such as front-facing, back-facing, left-facing, and right-facing. The patient stands against the background 110 in the different orientations, or, if the background 110 is a support structure in a horizontal orientation, the patient lies on the background 110 in the different orientations. The data from the acquired images is then used to classify different parts of the target's skin as to whether or not a burn has occurred, and also to distinguish the degree of burn at those burn sites.
After acquiring images at the different orientations, the controller/classifier/processor 112 can process the image data for each target orientation. When the background 110 is a unique color different from skin tissue, the controller/classifier/processor can separate the target from the background, classifying each pixel in each acquired image as background or target. As another alternative example, the UI may be used to mark an outline of the target (e.g., with a stylus, mouse, or cursor) in the original image (e.g., as shown in fig. 2) to distinguish between the background and the target. When the pixels of the image associated with the target are identified, they are analyzed using MSI and/or PPG techniques to classify the skin areas of the target according to the degree of burn.
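The background-separation step described above, in which each pixel is labeled as background or target based on a known background color, can be sketched as a simple chroma-key threshold. This is an illustrative assumption rather than the device's actual classifier; the background color, tolerance value, and toy image are hypothetical.

```python
import numpy as np

def segment_target(rgb_image, background_rgb, tol=30.0):
    """Return a boolean mask that is True where a pixel differs from
    the known background color by more than `tol` (Euclidean distance
    in RGB), i.e., True for target pixels, False for background."""
    diff = rgb_image.astype(float) - np.asarray(background_rgb, dtype=float)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist > tol

# Toy 2x2 frame: a blue background with one skin-toned target pixel.
img = np.array([[[0, 0, 255], [0, 0, 255]],
                [[0, 0, 255], [224, 172, 150]]], dtype=np.uint8)
mask = segment_target(img, background_rgb=(0, 0, 255))  # target mask
```

Only the pixels flagged True in the mask would then be passed to the MSI/PPG classification.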
Exemplary output to the display/UI 114 after this processing is illustrated in FIG. 3. In this alternative example, the output image 212 shows a front view of the target 250 in which different portions of the patient's anterior skin have been classified using multiple images taken from the front of the patient. The output image 212 shows the different classifications in different colors, for example. For example, the controller/classifier/processor determines that site 222 is a third degree burn, site 224 is a first degree burn, and site 226 is a second degree burn. The processor can also identify healthy tissue, such as site 228. Image 214 is a pictorial example of a rear view of target 250, showing that site 230 is classified as a third degree burn, site 232 as a first degree burn, and site 234 as a second degree burn. Other tissue sites are identified as healthy tissue. Image 216 and image 218 show a left view and a right view, respectively, of target 250, where site 236 is classified as a third degree burn, site 238 as a second degree burn, and site 242 as a first degree burn. Other tissue sites are identified as healthy tissue.
As shown in block 220 of fig. 3, some alternative examples can calculate the burn %TBSA from the classification data shown in images 212, 214, 216, and 218, and then output the result to the user on the UI. As also shown in block 220 of fig. 3, when classifying according to burn severity, the device can output the %TBSA for one or more burn classifications. Although optical imaging methods have previously been used to assess burned tissue, no device has been able to generate a burn %TBSA determination.
Generating burn %TBSA using image data of all or part of a patient involves previously unsolved challenges. In an alternative example, a simple calculation using the four images 212, 214, 216, and 218 of FIG. 3 has proven able to generate sufficiently accurate determinations for triage purposes. In this alternative example, a first value is generated that is the sum of all pixels in all images that are classified as burn, a second value is generated that is the sum of all pixels in all images that are classified as target, and the first value is then divided by the second value, yielding the burn %TBSA. For example, to calculate the %TBSA for a third degree burn, the system counts the pixels of sites 222, 230, and 236, and then divides that total count by the total number of pixels of all of the body surfaces of target 250, which is derived by counting and adding the total number of pixels of target 250 in each of images 212, 214, 216, and 218.
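The pixel-counting calculation described above can be sketched as follows. The label codes and toy label images are illustrative assumptions; the specification does not prescribe a particular label encoding.

```python
import numpy as np

def burn_tbsa_percent(label_images, burn_labels=(2, 3, 4)):
    """Burn %TBSA = (pixels classified as burn) / (pixels classified
    as target, i.e., anything but background), summed over all views."""
    burn_px = sum(int(np.isin(img, burn_labels).sum()) for img in label_images)
    target_px = sum(int((img != 0).sum()) for img in label_images)
    return 100.0 * burn_px / target_px

# Toy per-pixel label maps for two views. Label codes (hypothetical):
# 0 = background, 1 = healthy, 2/3/4 = first/second/third degree burn.
front = np.array([[1, 1, 4],
                  [1, 2, 1]])
back = np.array([[0, 1, 1],
                 [1, 1, 3]])
pct = burn_tbsa_percent([front, back])  # 3 burn px / 11 target px
```

Passing `burn_labels=(4,)` instead would yield the %TBSA for third degree burns alone, as in the example with sites 222, 230, and 236.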
In some alternative examples, adjusted area addition may be used to improve the burn %TBSA determination. For example, rather than simply summing the regions, a processor such as module 112 analyzes the images to identify regions that appear in multiple images, so that regions captured in multiple images are not counted multiple times. For example, both region 222 of image 212 and region 236 of image 216 capture a portion of the chest of target 250. If the areas of site 236 and site 222 were simply added, a portion of the chest would be counted twice. In some alternative examples, the processor may analyze site 222 and site 236 (or image 212 and image 216 as a whole) and count the chest only once. In some alternative examples, the overlapping and/or identical regions can be identified using image processing techniques (such as edge detection and segmentation), reference markers, and/or predictive analysis by a computer that can estimate overlap using a normalized body shape.
In some alternative examples, a three-dimensional body model may be constructed for the target. The body model may be based on a standardized body model and/or on a constructed body model generated from parameters such as height, weight, body composition (e.g., body fat percentage), body mass index, or any metric of a specific size or shape of the whole or part of the body. These parameters may be entered by a user via a UI such as module 114, measured via detector 100 or biometric readers 106 and/or 108, or otherwise received or calculated by a processor/classifier such as module 112. Once the three-dimensional body model is generated, the classified tissue regions are projected onto regions of the three-dimensional body model. In the case of overlap, the processor resolves the difference so that the overlapping location is not counted multiple times. The burn %TBSA can then be obtained by dividing the sum of the areas classified into one or more categories (e.g., first degree, second degree, third degree, or healthy tissue) by the entire body surface area.
In some alternative examples, a processor such as module 112 can reconstruct a three-dimensional model from a plurality of two-dimensional images (e.g., images 212, 214, 216, and 218). In some alternative examples, such reconstruction may be accomplished by projection methods such as Euclidean reconstruction, linear stratification, or any other well-known method of converting a plurality of two-dimensional images into a three-dimensional reconstruction. In some alternative examples, the conversion from a two-dimensional image to a three-dimensional reconstruction may take into account known parameters, such as the angle of the camera, the distance between the camera and the target, target-based measurements, and/or any reference measurements or references, and the like. Once the three-dimensional model is generated using the two-dimensional images, the burn% TBSA can be estimated using the sum of the areas classified into one or more categories (e.g., first degree burn, second degree burn, third degree burn, or healthy tissue).
Once the burn %TBSA is calculated by the processor, the results can be output to the user. For example, output 220 shows an example where the calculated burn %TBSA is 40% and the calculated third degree burn %TBSA is 12%. This information may be displayed to the user via a display such as module 114. A display such as module 114 may also display other information, such as mortality estimates or other information relevant to treating the target. In the example of mortality estimation, data such as that in the table of fig. 6 is stored in the processor 112 for estimating mortality based on the %TBSA and/or the target's age, as known or estimated by the user and entered into the system.
Fig. 4 shows another example of how some of the devices described herein calculate burn %TBSA. The figure shows a stitching technique in which several images are combined to calculate the burn %TBSA. In some cases, the stitch set is generated using an automated procedure in which a detector, such as detector 100, is automatically positioned for each image. The automated operation may employ an actuatable arm, motor, rail, or any other method or device for moving the detector. In some cases, the end result is that the detector acquires images in a particular pattern (such as the grid pattern shown in FIG. 4). Alternatively, the stitches may be generated by a user positioning a detector, such as detector 100, so that the user can capture any number of images at any number of locations on the target.
In any case, with some alternative stitching techniques, stitch 201 is an image of the head surface. Stitch 207 may be a separate image of the hand, or a portion of the torso may be acquired. Any number of other images of the target or a portion thereof, including images of different surfaces, may be acquired to form other stitches. Some of these images may overlap; the rest are distinct. By duplicating images and/or generating multiple images of the same feature, region, or tissue location (e.g., from different perspectives), and employing overlay or masking techniques, higher resolution and/or better three-dimensional rendering of the desired tissue can be achieved.
These different images may then be stitched together to calculate or estimate the entire body surface area. In some cases, before the images are stitched together, the background is removed using image processing techniques, leaving only the target body. Body contours can also be obtained using edge detection techniques to facilitate stitching the images together. Where overlapping images of a portion of the body are acquired, a cross-correlation of the tissue is computed to determine how the portions are correctly joined, stitched, and patched together.
In some cases, the entire surface area of a tissue classification may be pieced together from different images. For example, stitches 211 and 212 may be among the images used to estimate the surface area with a tissue condition such as a burn. Further, some of the combined images may be distinct or overlapping, or taken from different perspectives of the same tissue site or location. The stitching process thus captures a plurality of images of tissue classified as having a tissue condition and combines the images to estimate the surface area of the classified region.
In some alternative examples, interpolation techniques are needed to estimate the area of non-imaged regions. For example, in some cases, a portion of the target may have been unintentionally omitted from the images, or omitted because it was apparently not burned or did not suffer from the condition being assessed. In other cases, some areas of the target are difficult to image, either due to their location (e.g., under the target's arm) or due to physical limitations of the target (e.g., the target is severely injured and unable to move). The interpolation can use body symmetry to estimate the non-imaged regions, or may project lines between one region and another. For example, if the calf image 205 is missing, a straight line is projected from the leg boundary shown in the upper leg image 213 to the boundary of the ankle and foot in the lower leg image 203 and/or 215. This projection gives an approximate leg shape, from which the un-imaged leg surface can be estimated.
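The boundary-projection interpolation described above can be sketched under the simplifying assumption that the missing segment's silhouette is approximated by straight lines between the known boundaries, i.e., a trapezoid. The numeric values are hypothetical.

```python
def interpolate_missing_area(width_top, width_bottom, length):
    """Approximate an unimaged body segment (e.g., a calf missing
    between thigh and ankle images) by projecting straight boundary
    lines between the known widths, giving a trapezoid:
    area = length * (width_top + width_bottom) / 2.
    Units are whatever the calibrated images use (e.g., cm)."""
    return length * (width_top + width_bottom) / 2.0

# Hypothetical values: thigh boundary 12 cm wide, ankle boundary 8 cm
# wide, 35 cm apart -> ~350 cm^2 of projected (single-view) area.
area = interpolate_missing_area(12.0, 8.0, 35.0)
```

A symmetry-based alternative would instead copy the measured area of the imaged contralateral limb.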
Other methods of estimating the surface area of portions of the target may also be used. For example, FIG. 5 shows the Rule of Nines and the Lund-Browder chart method. Diagram 500 shows the Rule of Nines, in which the head and neck, and each arm, are each estimated to be 9% of the entire body surface area. For example, the entire surface area of arm 501 is estimated to be 9% of the entire body surface area of the illustrated human body according to the Rule of Nines. According to the Rule of Nines, each leg, and the anterior and posterior of the torso, are each estimated to be 18% of the total body surface area. For example, the overall surface area of leg 502 is estimated to be 18% of the overall body surface area of the illustrated human body according to the Rule of Nines.
Diagram 503 is an example showing the estimation of the surface area of different body parts using other methods. The Lund-Browder chart 504 shows one way to estimate surface area as a function of the age of the patient. The chart shows different estimates of the relative percentages of body surface area of 1/2 of the head, 1/2 of the thigh, and 1/2 of the calf for children aged 0, 1, 5, 10, and 15.
Both the Rule of Nines and the Lund-Browder charts are exemplary estimation methods that can be used to calculate the Total Body Surface Area (TBSA). These estimation methods can also be used to supplement the methods described above for non-imaged body parts. For example, an unimaged leg surface area can be calculated as 18% of the TBSA.
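Supplementing imaged regions with Rule-of-Nines estimates, as described above, can be sketched as follows. The region names, and the assumption that each listed unimaged region is entirely burned, are illustrative; the percentages follow the adult Rule of Nines.

```python
# Adult Rule-of-Nines percentages of total body surface area.
RULE_OF_NINES = {
    "head_and_neck": 9.0,
    "arm": 9.0,            # each arm
    "leg": 18.0,           # each leg
    "torso_front": 18.0,
    "torso_back": 18.0,
    "perineum": 1.0,
}

def total_tbsa_percent(imaged_percent, unimaged_burned_regions):
    """Add Rule-of-Nines estimates for unimaged regions (assumed here
    to be entirely burned) to the %TBSA computed from imaging."""
    return imaged_percent + sum(RULE_OF_NINES[r] for r in unimaged_burned_regions)

# Hypothetical case: imaging yields 22% burn %TBSA, plus one fully
# burned but unimaged leg estimated at 18%.
total = total_tbsa_percent(22.0, ["leg"])  # 40.0
```

For partially burned unimaged regions, a fractional weight could be applied to each Rule-of-Nines entry instead.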
For some patients, the Rule of Nines, the Lund-Browder charts, and other estimation methods may be inaccurate. For example, for an overweight patient or a patient with disproportionate body regions, the relative surface areas of the body parts differ. Thus, the imaging techniques described herein can provide a more accurate burn %TBSA calculation than conventional approaches based solely on these charts. Also, any data input to module 114 or automatically sent to module 112 may be used in any %TBSA calculation described herein. For example, the age of the target can be used to estimate the relative percentage of body surface area using the Lund-Browder chart method. In alternative examples, other data, including sex, weight, height, body type, body shape, skin tone, race, and the orientation of the imaged body, and/or any relevant data mentioned herein, may be entered or collected for use in calculating the %TBSA.
Deriving the %TBSA of a tissue classification is important for proper treatment decisions. For example, for burn patients, mortality increases as burn %TBSA increases. Figure 6 is a summary from the American Burn Association showing mortality as a function of burn %TBSA, age, and burn extent. Clearly, mortality increases on average with increasing patient burn %TBSA. Therefore, it is important to identify patients with a high burn %TBSA as quickly as possible for emergency treatment. Also, the slope of the mortality curve increases as burn %TBSA increases. Thus, in order to identify targets at greater risk of death, it is important to provide greater accuracy at higher burn %TBSA relative to conventional methods. This discrimination is significant in resource-limited emergencies such as mass casualty situations. Thus, the ability of the present invention to calculate burn %TBSA meets this long-felt need.
One desired therapeutic decision is, for example, determining the amount of fluid to administer for resuscitation. Fluid loss is often one of the biggest problems for patients with extensive burns, so proper management of the volume of fluid given to burn patients is very important for recovery. In many cases, too little fluid can lead to burn edema, burn shock, dehydration, death, and/or other complications. Too much fluid may also increase the risk of morbidity and/or mortality, such as infection, edema, acute respiratory distress syndrome, abdominal compartment syndrome, fluid overload, and the like. The volume of fluid required for resuscitation correlates with burn %TBSA. For example, mild burns can usually be managed with oral hydration. However, when the burn %TBSA approaches 15-20% (e.g., 15, 16, 17, 18, 19, or 20 percent, or a range defined between any two of the above percentages), the patient loses larger volumes of fluid, making fluid management even more important for avoiding burn edema and burn shock. When the burn %TBSA is greater than about 20% (e.g., 20, 30, 40, 50, 60, 70, 80, 90, or 100 percent, or a range defined between any two of the above percentages), the current recommendation is to begin formal intravenous fluid resuscitation.
In some alternative examples, a processor, such as processor 112, uses the calculated burn %TBSA to determine the amount of fluid that should be administered to the patient. For example, the resuscitation volume for the first 24 hours of treatment can be estimated from the %TBSA using the Parkland formula. The Parkland formula is V = 4 × m × (a × 100), where V is the resuscitation volume in milliliters, m is the patient's weight in kilograms, and a is the %TBSA expressed as a decimal fraction (e.g., 0.5 represents burns over 50% of the body surface). Half of the calculated volume is administered over the first 8 hours and the remaining half over the next 16 hours. The fluid management table is output to a system user on a UI, such as part of output 220 shown in FIG. 3. For example, for a 100 kg patient with a 50% burn area, the system outputs a 24-hour fluid resuscitation chart according to the above formula: 1250 ml/hr for the first 8 hours and 625 ml/hr for the next 16 hours.
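The Parkland calculation and the worked example above can be sketched as follows (the function name and interface are illustrative, not part of the described system):

```python
def parkland_resuscitation(weight_kg, tbsa_fraction):
    """24-hour fluid resuscitation estimate using the Parkland formula.

    V (ml) = 4 x weight (kg) x %TBSA, with %TBSA entered as a decimal
    fraction and converted to a whole-number percentage (a x 100).
    Half the volume is given over the first 8 hours, the remainder over
    the next 16 hours.
    """
    total_ml = 4.0 * weight_kg * (tbsa_fraction * 100.0)
    rate_first_8h = (total_ml / 2.0) / 8.0    # ml/hr, first 8 hours
    rate_next_16h = (total_ml / 2.0) / 16.0   # ml/hr, next 16 hours
    return total_ml, rate_first_8h, rate_next_16h

# The worked example from the text: 100 kg patient with a 50% TBSA burn.
total, r8, r16 = parkland_resuscitation(100, 0.5)
print(total, r8, r16)  # 20000.0 1250.0 625.0
```

This reproduces the 1250 ml/hr and 625 ml/hr rates given in the example above.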
The Parkland formula can also be adjusted to suit the needs of a particular patient. For example, since elderly patients are more prone to edema or other complications from fluid overload, the volume administered to elderly patients may need to be reduced. Conversely, for young patients such as infants, there is less risk of complications from excess fluid. Other factors can also be used to adjust the administered volume, such as the condition (e.g., severity of burns), pulse, blood pressure, respiratory rate, and other physiological parameters. In some alternative examples, a processor, such as processor 112, uses these factors to adjust the amount of fluid administered. These factors may be input via a user interface (e.g., display/UI 114), measured by the probe 100 or biometric readers 106 and/or 108, and/or otherwise input, calculated, evaluated, or obtained via the processor 112. These factors may also include other data based on the patient's medical history or data based on other patients. The processor may also retrieve information from a dynamic library to obtain other data for the calculation, such as additional patient data, calibration information, and data based on other patients, as described later in the present application.
In some alternative examples, another factor to consider is the patient's total blood volume and/or changes in total blood volume. A lower blood volume indicates that the patient requires more fluid for resuscitation. Blood volume can be measured and/or estimated in a variety of ways. For example, when the patient has more blood, the patient's tissue absorbs more light. This effect is more easily measured in the red or near-infrared range (e.g., wavelengths near 840nm-880nm, including 840nm, 850nm, 860nm, 870nm, or 880nm wavelengths, or any range defined between any two of these wavelengths) because these wavelengths are more readily transmitted through tissue. The alternative examples described in the present invention can measure the amount of reflected red or near-infrared light over time to estimate the total blood volume and/or changes in total blood volume. For example, in some alternative examples, the amount of red or near-infrared light reflected over time can be used to measure the phase of the systolic and diastolic portions of the heartbeat waveform. These measurements can be used to derive systolic and diastolic pressures, which in some cases are used to estimate the pulse pressures of the left and right ventricles. Additionally or alternatively, the systolic and diastolic blood pressure may be measured using an external cuff. Pulse pressure can be used to estimate the stroke volume of the left and right ventricles (the amount of blood pumped out of a ventricle per beat). By multiplying the stroke volume by the heart rate (which can also be measured by an alternative example of the invention), the cardiac output of the ventricles is calculated. The cardiac output can in turn be used to estimate blood volume and/or changes in blood volume.
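The chain from pulse pressure to cardiac output described above can be sketched numerically. The linear factor k relating pulse pressure to stroke volume is a placeholder assumption for illustration only, not a clinically validated relationship:

```python
def cardiac_output_lpm(systolic_mmhg, diastolic_mmhg, heart_rate_bpm,
                       k_ml_per_mmhg=1.0):
    """Sketch of the estimation chain described above.

    Pulse pressure -> stroke volume (via an assumed linear factor k) ->
    cardiac output = stroke volume x heart rate.  The factor k is a
    placeholder calibration constant, not a clinically validated value.
    """
    pulse_pressure = systolic_mmhg - diastolic_mmhg       # mmHg
    stroke_volume_ml = k_ml_per_mmhg * pulse_pressure     # ml per beat
    return stroke_volume_ml * heart_rate_bpm / 1000.0     # L/min

co = cardiac_output_lpm(120, 80, 70)
print(co)  # 2.8 L/min with the placeholder k
```

In a real system, k would be calibrated per patient, and the systolic/diastolic values would come from the reflectance measurements or an external cuff as described above.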
Alternative means known in the art may also be used to measure or estimate blood volume and/or changes in blood volume, for example PPG, catheters, plethysmographs, other imaging techniques, and/or other means of measuring venous distensibility. For example, a patient may wear a pulse oximeter to measure oxygen saturation and pulse rate. The pulse oximeter may also function as a PPG device, measuring blood volume or changes in blood volume in a vascular bed (such as a finger, ear, or forehead).
In some optional examples, data on the total blood volume and/or changes in total blood volume are input by a user, such as via the UI 114, or passed to the processor through a data path (e.g., a wired or wireless connection). The amount of fluid administered to the patient can then be calculated using the total blood volume and/or change in total blood volume alone or in combination with the %TBSA or any of the factors mentioned herein. These additional factors and changes are automatically incorporated into the displayed fluid resuscitation table.
In addition, other alternative examples use other means to calculate the amount of fluid given to a burn patient based on %TBSA. These include the Brooke formula, the modified Parkland formula, and any other relevant approach known in the art. Alternative examples may also forgo standardized formulas. For example, the amount of fluid to administer can be calculated by machine learning or by mapping against historical data including the medical records of the patient or other patients.
Turning now to the specific apparatus and methods for illuminating tissue, acquiring images, and analyzing image data: a number of attempts have been made to develop apparatus and methods for assessing burns and other wounds. These methods include thermography, nuclear magnetic resonance, spectroscopy, laser Doppler flowmetry, and ultrasound. In addition, photoplethysmography (PPG) has been used to detect blood volume changes in the microvascular bed of tissue. In some cases, because PPG is only a volumetric measurement, it cannot fully classify tissue when used alone. Likewise, multispectral imaging (MSI) has been used to discern differences between skin tissues, but this technique also does not fully classify tissue on its own. Existing MSI techniques often encounter several challenges: variation due to skin type, differences in skin at different body sites, and the need for wound pretreatment. Because MSI measures only the appearance or composition of the skin, and not dynamic variables important to skin classification such as nutrient availability and tissue oxygen content, it cannot by itself provide an overall assessment of skin condition.
Some alternative examples described herein combine MSI and PPG to improve the speed, reliability, and accuracy of skin classification. The alternative examples described herein provide a more accurate assessment of skin structure and functional capacity relative to normal skin by using, for example, image data to measure the status of blood, water, collagen, melanin, and other features. In addition, the alternative examples described herein also detect changes in skin reflectance over time, which yields vital physiological information that enables a clinician to quickly assess tissue viability and characteristics such as blood perfusion and oxygenation at a tissue site.
Fig. 7 illustrates a system that, in some alternative examples, may serve, but is not required to serve, as the probe 100, controller/classifier/processor 112, and display/UI 114. By combining the MSI and PPG techniques, the system of Fig. 7, described below, can also be used to analyze and classify smaller tissue regions with greater accuracy than previous techniques, rather than only performing the whole-body analysis of the systems and methods described above.
In the system of Fig. 7, the detector 408 includes one or more light sources and one or more high-resolution multispectral cameras that record multiple images of the target tissue region 409 while providing the temporal, spectral, and spatial resolution required for high-precision tissue classification. The detector 408 may include a plurality of cameras, imagers, prisms, beam splitters, photodetectors, filters, and multi-spectral-band light sources. These cameras measure the scattering, absorption, reflection, emission, and/or fluorescence of light of different wavelengths from the tissue region over time. The system also includes a display/UI 414 and a controller/classifier/processor 412, which controls the operation of the detector 408, receives user input, controls display output, and performs analysis and classification of image pixels.
Data set 410 is an example of the output of detector 408, including data on reflected light at different wavelengths and at different times for each location in the imaged space. Data subset 404 shows an example of data for different wavelengths of light at each imaged spatial location. The data subset 404 includes a plurality of images of the tissue region, each image measuring reflected light from the tissue region in a different selected frequency band. The multiple images of the data subset 404 may be acquired simultaneously or substantially simultaneously, where substantially simultaneously refers to within one second of each other. Data subset 402 shows an example of data for reflected light at different times for each imaged spatial location of the tissue region. The data subset 402 includes a plurality of images acquired at different times, for example over a period of one or two seconds. The multiple images of the data subset 402 may be acquired in a single selected frequency band. In some cases, the multiple images of the data subset 404 may be acquired over a period of more than one second, and the multiple images of the data subset 402 may be acquired in multiple frequency bands. In either case, the combined data set comprising subset 404 and subset 402 comprises acquired images corresponding to different times and different frequency bands.
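The two data subsets described above can be thought of as three-dimensional arrays: a spectral cube (bands × height × width) and a temporal cube (frames × height × width). A minimal sketch, with illustrative dimensions not taken from the patent:

```python
import numpy as np

H, W = 256, 256      # spatial resolution (illustrative values)
N_BANDS = 8          # one image per spectral band (data subset 404)
N_FRAMES = 810       # e.g. 27 s at 30 frames per second (data subset 402)

# Spectral cube: one 2-D reflectance image per selected frequency band.
spectral_cube = np.zeros((N_BANDS, H, W), dtype=np.float32)

# Temporal cube: one 2-D image per time sample, in a single band.
temporal_cube = np.zeros((N_FRAMES, H, W), dtype=np.float32)

# Each pixel (y, x) then carries a spectral signature and a time signal:
y, x = 128, 128
spectral_signature = spectral_cube[:, y, x]   # shape (8,)
time_signal = temporal_cube[:, y, x]          # shape (810,)
print(spectral_signature.shape, time_signal.shape)
```

The per-pixel spectral signature feeds the MSI analysis, while the per-pixel time signal feeds the PPG analysis described later.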
To collect the images of the data subset 404, in some alternative examples, one or more cameras are coupled to a filter wheel carrying a plurality of filters with different pass bands. The filter wheel is rotated while the one or more cameras acquire images of the target tissue region, and by synchronizing image acquisition with the filter position of the rotating wheel, the one or more cameras can record the target in different spectral bands. In this way, the camera receives reflected light in a different frequency band for each pixel of the tissue region. Moreover, in some cases, the filters enable the devices described herein to analyze light in parts of the spectrum not visible to the naked eye. In some cases, the amount of light reflected and/or absorbed in these different spectral bands gives an indication of the chemical and physical composition of the target tissue or of a particular region of the tissue. In some cases, the data obtained using the filters forms a three-dimensional data array having one spectral dimension and two spatial dimensions. Each pixel in the two spatial dimensions is distinguished by its spectral characteristic, defined by the intensity of the reflected light in each of the collected spectral bands. Because different tissue components differ in their scattering, absorption, reflection, emission, and/or fluorescence at different light frequencies, the intensity of light at different wavelengths gives information about the composition of the target. By measuring light at these different wavelengths, detector 408 obtains this composition information at each spatial location corresponding to each pixel.
In some optional examples, to acquire the images of the data subset 404 across multiple spectral bands, the one or more cameras comprise a hyperspectral line-scan imager. The hyperspectral line scanner has continuous spectral bands rather than the separate band provided by each filter in a filter wheel. The optical filter of the hyperspectral line scanner may be integrated with a CMOS image sensor. In some cases, the filters are monolithically integrated optical interference filters arranged in stepped lines. In some cases, the optical filter forms a wedge and/or staircase shape. In some cases, there are tens to hundreds of spectral bands corresponding to wavelengths between 400nm and 1100nm, such as 400nm, 500nm, 600nm, 700nm, 800nm, 900nm, 1000nm, or 1100nm, or within a range defined by any wavelengths between any two of the above wavelengths. The imager scans the tissue with its lines of filters and senses reflected light from the tissue passing through the filters.
In other alternative examples, other filtering systems can implement filtering in different spectral bands. For example, in some alternative examples, Fabry-Perot filters are used. Other filter arrangements include arranging the filters in a tiled structure, or arranging the filters directly on an image sensor (CMOS, CCD, etc.) in a pattern such as a Bayer array or a multi-sensor array.
In some cases, the pass bands of the filters are selected based on the type of information sought. For example, at various stages of debridement, the burn site is imaged at wavelengths between 400nm-1100nm (such as 400nm, 500nm, 600nm, 700nm, 800nm, 900nm, 1000nm, or 1100nm, or within a range defined by any wavelengths between any two of the above wavelengths) to assess the blood, water, collagen, and melanin status of the burn site and surrounding tissue. In some experiments, partial-thickness burns of varying severity were studied using a porcine burn model; absorption spectra at wavelengths of about 515nm, 750nm, and 972nm were found desirable for guiding debridement treatment, and absorption spectra at wavelengths of about 542nm, 669nm, and 960nm were found desirable for distinguishing deep intermediate partial-thickness burns from deep partial-thickness burns.
In another trial, images of partial-thickness burns on adult miniature pigs were collected. Using samples of healthy skin, burns, and excised burns, tissues were classified as healthy skin, hyperemic, transplantable, bleeding, minor burn, or severe burn. In the trial, healthy skin included areas of skin not damaged by burns. Hyperemia corresponds to a region of high perfusion, usually a first-degree burn that heals without treatment. The transplantable classification corresponds to light pink skin, usually with punctate bleeding; such tissue is generally suitable for receiving a skin graft. The bleeding classification corresponds to a large area of pooled blood, which should be removed and the site imaged again because the blood covers the tissue to be classified. The minor burn classification corresponds to a zone of stasis with reduced perfusion but potentially salvageable tissue. Severe burns are classified as regions of protein coagulation exhibiting irreversible tissue loss that should be excised.
Alternative examples of the invention were used to measure reflected light from tissue samples at different wavelengths in the range of 400nm to 1100nm (such as 400nm, 500nm, 600nm, 700nm, 800nm, 900nm, 1000nm, or 1100nm, or any wavelength defined between any two of the above wavelengths), and to determine which set of wavelengths provides the greatest variation between the reflected light of different tissue types. This variation is used to effectively distinguish the tissue classes, namely at least healthy skin, hyperemia, transplantable, bleeding, minor burn, and severe burn. The wavelength set with maximum relevance and minimum redundancy is sometimes identified as the optimal set. Here, maximum relevance means that the wavelengths are effective in distinguishing one particular tissue classification from the others, and minimum redundancy is obtained by including only one of several wavelengths that measure the same information. After classifying the tissue samples using the wavelength sets, a physician compared the classifications to accurately assess the tissue samples.
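A greedy selection in the spirit of the maximum-relevance/minimum-redundancy criterion described above might look like the following sketch. The patent does not specify the exact algorithm; this version uses linear correlation as a stand-in for the relevance and redundancy measures, and the synthetic data is purely illustrative:

```python
import numpy as np

def select_wavelengths(X, y, k):
    """Greedy max-relevance / min-redundancy wavelength selection (sketch).

    X: (samples, wavelengths) reflectance matrix; y: integer class labels.
    Relevance is a wavelength's absolute correlation with the label;
    redundancy is its mean absolute correlation with wavelengths already
    selected.  Real mRMR methods use mutual information, but the idea is
    the same.
    """
    n = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Illustrative synthetic data: 'wavelength' 0 tracks the class label exactly,
# the remaining four are random and uninformative.
rng = np.random.default_rng(0)
y = np.array([0] * 10 + [1] * 10)
X = rng.random((20, 5))
X[:, 0] = y.astype(float)
picked = select_wavelengths(X, y, 3)
print(picked)  # wavelength 0 is always chosen first
```

Each subsequent pick trades relevance against correlation with the wavelengths already chosen, so two wavelengths carrying the same information are unlikely to both appear in the set.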
The classification accuracy was tested using data from different experiments. In a first set of experiments, wavelengths of 475nm, 515nm, 532nm, 560nm, 578nm, 860nm, 601nm and 940nm were measured. In a second set of experiments, wavelengths of 420nm, 542nm, 581nm, 726nm, 800nm, 860nm, 972nm and 1064nm were measured. In a third set of experiments, wavelengths of 420nm, 542nm, 581nm, 601nm, 726nm, 800nm, 972nm and 860nm were measured. In a fourth set of experiments, wavelengths of 620nm, 669nm, 680nm, 780nm, 820nm, 839nm, 900nm and 860nm were measured.
Using the wavelengths that provided optimal variability for tissue classification in the first and second experiment groups, the tissue classification had an accuracy of 83%. These wavelengths are (in order of relative weight) 726nm, 420nm, 515nm, 560nm, 800nm, 1064nm, 972nm, and 581nm. Similarly, using the wavelengths that provided optimal variability for tissue classification in the third and fourth experiment groups, the tissue classification had an accuracy of 74%. These wavelengths are (in order of relative weight) 581nm, 420nm, 620nm, 860nm, 601nm, 680nm, 669nm, and 972nm. The accuracy of both sets exceeded the current standard of clinical diagnostic care, which is 67-71% accurate in determining burn depth. Also, it is worth noting that the 860nm wavelength is particularly effective for both the MSI and PPG algorithms, and therefore for the combined device. These experiment groups show that wavelengths in the range of 400nm to 1100nm (such as 400nm, 500nm, 600nm, 700nm, 800nm, 900nm, 1000nm, or 1100nm, or any wavelength defined between any two of the above wavelengths) can effectively classify tissue. As previously mentioned, other sets of wavelengths are also effective; for example, the effective wavelength sets in the experiments minimize redundancy, so other wavelengths may be used to effectively classify certain features of tissue. Also, based on the above experiments, other wavelengths can be found that effectively classify burns and/or any other tissue condition described in the present invention.
In summary, the experiments described above support the following conclusion: wavelengths in the range of 400nm to 900nm (including 400nm, 500nm, 600nm, 700nm, 800nm, or 900nm, or any wavelength defined between any two of the above wavelengths) are particularly effective for imaging burns. More specifically, within this range, a set of wavelengths may be constructed for imaging a burn in which at least one wavelength is less than 500nm, at least two wavelengths are between 500nm and 650nm, and at least three wavelengths are between 700nm and 900nm. Such a wavelength set is effective for imaging burns and classifying the imaged burned tissue.
Likewise, based on the above experiments, the following sequence table lists the individual test wavelengths in order of their significance in classification:
TABLE 1
To collect the images of the data subset 402, the one or more cameras are also configured to acquire a selected number of images with a sufficiently short time interval between images to measure the temporal variation in reflected light intensity caused by motion of the tissue region corresponding to a physiological event or condition of the patient. In some cases, a three-dimensional data array is formed from the data of the temporally separated images, where the array has one temporal dimension and two spatial dimensions. Each pixel in the three-dimensional array can be distinguished by the temporal variation of its reflected light intensity. The time-domain signal carries different energies at different frequency components relating to blood pressure, heart rate, vascular resistance, neural stimulation, cardiovascular health, respiration, body temperature, and/or blood volume. In some alternative examples, an optical filter may also be used to filter out external noise. For example, an 860nm bandpass filter may be used to block external light in the dominant wavelength spectrum of indoor illumination, so that the acquired images correspond to reflected light originating from the light source in the detector 408. This can reduce and/or prevent aliasing of fluctuations in room light, such as 60Hz fluctuations in room lighting due to the AC power line frequency.
Fig. 8 shows the processing performed by the apparatus of Fig. 7, and further details regarding preferred image acquisition and signal processing steps will now be described with reference to Fig. 8. FIG. 8 illustrates an example flow chart 600 of steps that some alternative examples employ to classify tissue. Blocks 602 and 603 illustrate some alternative examples acquiring a multispectral image and a plurality of time-separated images (e.g., video) using, for example, detector 408. For time-separated images, such as data subset 402, a relatively long acquisition time is desirable in order to obtain a signal with less overall noise and a greater signal-to-noise ratio. In some cases, an acquisition time of 27 seconds is employed, which is longer than the 7-second acquisition time of conventional PPG image processing. Thus, in some alternative examples, it is desirable for the acquisition time to be at least or greater than 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, or 60 seconds, or any number between any two of the aforementioned numbers. Within these acquisition times, the number of frames per second acquired by the imager may be set. In some cases, 30 frames per second (30fps) or 60 frames per second (60fps) is effective for imaging tissue. At 30fps over 27 seconds, the imager acquires about 810 images; at 60fps over 27 seconds, approximately 1620 images. In some alternative examples, the number of acquired images may be varied depending on the resolution of the data required (e.g., to capture a person's heartbeat). For example, for a CMOS camera, 20fps to 120fps may be employed.
This includes a sampling rate of 20fps, 30fps, 40fps, 50fps, 60fps, 70fps, 80fps, 90fps, 100fps, 110fps or 120fps, or within a range defined by any sampling rate between any two of the aforementioned sampling rates.
Also, in some alternative examples, the arrangement of the light sources is important because of illumination hot spots: locations where high-intensity light saturates the signal and masks the pulse waveform. In some alternative examples, this problem is addressed by employing diffusers and other front-end hardware techniques. However, where the illumination hot spots cannot be eliminated by front-end techniques, in some alternative examples they are removed using signal processing. Indeed, to form a reliable image of tissue pathology, it is desirable to preserve and display the signal while removing noise. The processing involves removing noise associated with the illumination hot spots and other uncorrelated signals.
At block 604, the time-separated image sequence (e.g., data subset 402) is sent to the controller/classifier/processor 412 for processing, after which the controller/classifier/processor 412 calculates the blood perfusion in the tissue region using a PPG algorithm. The processing includes amplification, linearization, signal averaging, correlation, and/or one or more filters (e.g., band-pass, high-pass, or low-pass) to remove noise, isolate the portions of the signal of interest, and improve the signal-to-noise ratio. The choice of filter is important because too much filtering removes necessary data, while too little filtering makes the signal difficult to analyze. Cross-correlation and auto-correlation may also be used to remove noise. In some alternative examples, sampled reference signals may also be used to remove noise, as described later. The signal is then converted into the frequency domain, for example using a Fast Fourier Transform (FFT) in some alternative examples. After the FFT, the signal can be analyzed by frequency. The temporal variation in reflected light intensity at each pixel over the time-separated image sequence carries signal energy at different frequencies. These frequencies correspond to physiological events, and the signal energy at a given frequency indicates the occurrence and intensity of the corresponding physiological event at the tissue location imaged by the pixel. For example, the signal intensity of a pixel in a frequency band around 1.0Hz, which is close to the frequency of a resting human heartbeat, can be used to assess the amount of blood flowing to and near the tissue at the pixel location in the image.
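The frequency-domain step can be sketched as follows with NumPy. The signal here is synthetic (a 1 Hz sinusoid standing in for the cardiac pulse, plus noise), not real PPG data, and the 0.8-1.2 Hz band is an illustrative choice around the resting heart rate:

```python
import numpy as np

fps = 30
t = np.arange(810) / fps                    # 27 s of frames at 30 fps
# Synthetic per-pixel time signal: a ~1 Hz cardiac component plus noise.
rng = np.random.default_rng(0)
signal = 0.5 * np.sin(2 * np.pi * 1.0 * t) + 0.1 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)

# Signal energy in a band around the resting heart rate (~0.8-1.2 Hz).
band = (freqs >= 0.8) & (freqs <= 1.2)
perfusion_score = np.sum(spectrum[band] ** 2)
dominant = freqs[np.argmax(spectrum)]
print(round(dominant, 2))  # 1.0 -- the cardiac component dominates
```

Repeating this per pixel yields a perfusion map: the band-limited signal energy at each pixel location is the quantity used later in the classification step.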
In some alternative examples, the relevant signal may be identified by looking for local maxima. For example, the heart rate can be derived by finding the largest peak of signal energy in the expected frequency band and assuming that this peak corresponds to the blood pressure changes of the heartbeat. However, this method fails when noise produces peaks higher than the true heart-rate signal. For such cases, other alternative examples employ signal processing based on computer learning and training, using examples or databases of reference values for noise signals, white noise signals, and other example signals. The computer analyzes examples of relevant signals and of noise to learn to distinguish signal from noise. For example, when identifying a signal related to blood flow, signals having the same frequency components as the heartbeat are relevant. Computer learning uses an example heart rate signal or a database of reference heart rate signals as a reference to identify the heart rate amid noise. The computer learning process can also use these reference points and databases to analyze white noise, false heart-rate signals, and noise signals with peaks higher than the heart-rate signal. Computer learning can identify signals based on characteristic information such as frequency, amplitude, signal-to-noise ratio, zero crossings, typical shapes, or other characteristics of the signal.
In some cases, other comparisons are also used to identify the signal. For example, in some optional examples, a profile of hand-selected, clinically staged signals is generated. The measured signal is then compared to the hand-selected clinically staged signals to classify it as a useful signal or as noise. Another technical advance is the elimination of edge effects. In some alternative examples, there is blurred noise near the edges of the image, and in some cases the region of interest is not as sharp as desired. The region of interest appears more distinctly once the edge effects are eliminated. In some alternative examples, edge effects are eliminated using image processing, including averaging, dilation, and erosion, as well as edge detection and enhancement.
Another technical advance is the automatic removal of motion artifacts. Motion artifacts include motion related to patient breathing, patient movement, or any general vibration of the camera or near the patient that can distort the image. To remove these motion artifacts, the signal is processed with a "window": regions in the time domain that are larger and noisier than nearby portions are identified and labeled as "motion". These portions are then clipped from the time-domain signal, yielding a modified signal free of motion artifacts. Other filters and selection methods may be used to remove noise and other unwanted signal components. After this processing, the signal energy calculated at the desired frequency (e.g., typically about 1Hz) can be used to classify each tissue region (e.g., each two-dimensional pixel location) into a class that characterizes the blood perfusion at that pixel location.
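The windowed motion-clipping step above can be sketched as follows. The variance-based outlier rule and the z-score threshold are illustrative assumptions; the patent does not specify how "larger and noisier" regions are detected:

```python
import numpy as np

def clip_motion_artifacts(signal, window=30, z_thresh=2.0):
    """Drop time-domain windows whose energy far exceeds the rest.

    The signal is scanned in fixed-size windows; a window whose variance
    is a z-score outlier relative to all windows is labeled 'motion' and
    clipped out, and the remaining windows are concatenated.
    """
    n = signal.size - signal.size % window
    windows = signal[:n].reshape(-1, window)
    var = windows.var(axis=1)
    z = (var - var.mean()) / (var.std() + 1e-12)
    return windows[z < z_thresh].reshape(-1)

# Synthetic signal: low-level noise with one large 'motion' burst.
rng = np.random.default_rng(0)
signal = 0.05 * rng.standard_normal(300)
signal[90:120] += 5.0 * rng.standard_normal(30)
cleaned = clip_motion_artifacts(signal, window=30)
print(cleaned.size)  # 270 -- the burst window was removed
```

After clipping, the shortened signal can be passed to the frequency-domain analysis without the burst distorting the spectrum.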
At about the same time that blocks 602 and 604 are performed, some alternative examples also perform blocks 603 and 605. Block 603 acquires the images that form a multispectral data cube (e.g., data subset 404 of Fig. 7). The data cube includes a 2D image for each MSI spectral band. These alternative examples then analyze the data using the MSI algorithm in block 605, and in block 614 the system assigns a tissue-composition category to each tissue region (e.g., to each two-dimensional pixel location).
Next, block 616 generates a tissue classification by combining the blood perfusion data from blocks 602 and 604 with the MSI data from blocks 603 and 605.
For example, eight bandpass filters may be employed to generate eight reflectance values for each imaged pixel, one for each selected spectral band. And, with a filter centered at an infrared or near-infrared wavelength (e.g., near 840nm-880nm, including 840nm, 850nm, 860nm, 870nm, or 880nm wavelengths, or a range defined between any two of these wavelengths), 810 images may be acquired at 30 frames per second over 27 seconds. As described above, the 810 images may be analyzed in the frequency domain to generate PPG data indicative of blood perfusion at each imaged spatial location, producing a perfusion value for each imaged pixel. Each pixel of the imaged tissue region then has one measurement value from each of the eight band-pass filters, plus a value corresponding to the local blood flow, i.e., 9 measurements per pixel. With these 9 measurements, the pixels can be partitioned (e.g., classified) into different categories. As will be appreciated by those skilled in the art, any number of measurements (e.g., 2, 10, or 20, or any number defined between any two of these numbers, or greater) may be taken for each pixel, with the pixels then being partitioned according to these measurements.
Different partitioning/classification methods may be used. Typically, the classifier is trained using a "training" dataset in which the measured parameters and the appropriate classification are known. The trained classifier is then tested against a "test" dataset in which the measured parameters and appropriate classifications are also known but not used to train the classifier. Classifier quality can be assessed by how successfully the classifier classifies the test data set. In some alternative examples, a predetermined number of categories may be employed, with the pixels being classified into these predetermined categories. For example, for classifying burns in a treatment triage setting, classifications for healthy skin, hyperemia, minor burns, and severe burns may be used.
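The train/test workflow above can be sketched with the 9-feature pixel vectors from the preceding paragraphs. The classifier here is a simple nearest-centroid model and the data is synthetic; the class counts, feature values, and model choice are illustrative assumptions, not the patent's actual classifier:

```python
import numpy as np

def train_centroids(X, y):
    """Fit a nearest-centroid classifier on per-pixel feature vectors.

    X has shape (pixels, 9): eight band reflectances plus one perfusion value.
    """
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def classify(X, classes, centroids):
    # Assign each pixel to the class whose centroid is nearest.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Synthetic 'training' and 'test' sets for two classes (purely illustrative).
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(0, 0.1, (50, 9)), rng.normal(1, 0.1, (50, 9))])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([rng.normal(0, 0.1, (20, 9)), rng.normal(1, 0.1, (20, 9))])
y_test = np.array([0] * 20 + [1] * 20)

classes, centroids = train_centroids(X_train, y_train)
accuracy = np.mean(classify(X_test, classes, centroids) == y_test)
print(accuracy)  # well-separated synthetic classes give accuracy near 1.0
```

The held-out accuracy plays the role of the classifier-quality assessment described above: the test set's labels are known but never used during training.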
In other alternative examples, the number of classes is unknown, and a processor, such as processor 112, generates a classification based on how pixels group together and relate to one another. For example, by comparing measurements against nearby measurements, the processor may identify tissue regions with lower blood volume and lower pixel intensity at certain wavelengths as being associated with severe burns.
In some alternative examples, pixels are assigned based on a preset range of values for each class. For example, certain ranges of light reflectance values are associated with healthy skin; when the data fall within these ranges, the tissue is identified as healthy skin. These preset ranges may be stored in a memory of the system 412, input by a user, or determined automatically by system learning or adaptive algorithms. In some alternative examples, the classification is determined by information transmitted to the system by an external resource, such as a data uplink, a cloud, or any data resource as explained later in this disclosure. In other alternative examples, where the range of values for each class is unknown, the processor adapts the classification by comparing the measured values of the pixels to one another.
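A minimal sketch of the preset-range scheme follows. The range values are illustrative placeholders, not calibrated reflectance figures from the patent:

```python
# Hypothetical per-class reflectance ranges (illustrative values only)
RANGES = {
    "healthy skin": (0.55, 0.80),
    "hyperemia":    (0.35, 0.55),
    "severe burn":  (0.00, 0.35),
}

def classify_by_range(reflectance):
    """Map a measured reflectance value to the first class whose
    preset [low, high) range contains it."""
    for label, (low, high) in RANGES.items():
        if low <= reflectance < high:
            return label
    return "unclassified"

print(classify_by_range(0.62))  # healthy skin
print(classify_by_range(0.10))  # severe burn
```

In a deployed system the `RANGES` table would come from memory, user input, or an adaptive algorithm, as the text describes.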
In some alternative examples, a suitable algorithm groups pixels having common characteristics and identifies the groups. For example, the pixels may be classified using graph-theoretic image segmentation methods such as minimum cut. Other partitioning methods may also be used, such as thresholding, clustering (e.g., k-means, hierarchical clustering, fuzzy clustering), watershed algorithms, edge detection, region growing, statistical grouping, shape recognition, morphological image processing, computer training/computer vision, histogram methods, and any other partitioning method known in the data-grouping arts.
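Of the listed methods, k-means clustering is easy to sketch. Below is a plain Lloyd's-algorithm implementation grouping synthetic 9-measurement pixel vectors into two clusters; the deterministic initialization (first and last sample) is my simplification for reproducibility:

```python
import numpy as np

def kmeans(X, init_centers, n_iter=20):
    """Plain Lloyd's algorithm: partition feature vectors into clusters."""
    centers = np.array(init_centers, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

# Two synthetic pixel populations in the 9-measurement feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.03, (40, 9)),
               rng.normal(0.7, 0.03, (40, 9))])
labels, _ = kmeans(X, init_centers=X[[0, -1]])
print(set(labels[:40].tolist()), set(labels[40:].tolist()))  # {0} {1}
```

Unlike the supervised approach, no labeled training data is needed; the clusters would then be named by an operator or by comparison with reference data.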
In some alternative examples, the partitioning is further supplemented using historical data. The historical data may include data previously obtained from the patient and/or data based on other patients. In some alternative examples, other data such as skin tone, race, age, weight, gender, and other physiological parameters are considered in the partitioning process. In some cases, data may be uploaded, obtained from the cloud, or otherwise input into the system, including through the UI 114. In some optional examples, a dynamic database of patient data is analyzed. Previously identified images are compared to the acquired images using statistical methods, including the t-test, f-test, z-test, or any other statistical method of comparison. Such a comparison takes into account, for example, measured pixel intensity, the value of a pixel relative to other pixels in the image, and the pixel distribution.
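One such statistical comparison, the t-test, can be sketched as follows. Here Welch's t statistic compares a newly acquired region's pixel intensities against a historical reference; the intensity values are synthetic:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic comparing two samples of pixel intensities,
    e.g. a newly acquired image region versus a historical reference."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(2)
reference = rng.normal(0.50, 0.05, 500)  # historical "healthy skin" intensities
current = rng.normal(0.30, 0.05, 500)    # darker region in the new image
print(abs(welch_t(current, reference)) > 3)  # True: distributions clearly differ
```

A large |t| flags a region whose intensity distribution departs from the historical reference and merits closer classification.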
In some optional examples, the dynamic database is populated with example images of tissue conditions, such as burns, for use in classifying tissue. In other alternative examples, the images are annotated to identify which tissue condition they show and how that condition appears in the image. It is desirable for the dynamic database to contain a full range of images at different angles, to account for variations in the angle, quality, and state of the imaged skin condition.
Returning to FIG. 8, various data outputs are presented to the user. These include PPG perfusion images 620 based on PPG data, MSI classification images based on MSI data, white-light illumination images based on standard RGB data, and MSI/PPG fusion images 622 showing a classification based on combined MSI and PPG data. For example, in the treatment triage apparatus described above with reference to FIGS. 1-6, the display outputs 212, 214, 216, and 218 of FIG. 3 may be a combined MSI/PPG fusion classification image 622. In such an image, as described above, the various tissue regions (e.g., pixels) of the target are classified into burn categories, such as healthy, hyperemic, severely burned, and mildly burned. Additionally or alternatively, data outputs such as the %TBSA for each category are presented to the user, as shown in FIG. 3.
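The per-category summary can be sketched by tallying the classified pixel map. Note this gives the fraction of the imaged region per category; a true %TBSA figure would additionally scale by how much of the total body surface the imaged region represents, which the sketch (with its made-up class map) does not attempt:

```python
import numpy as np

def category_percentages(class_map):
    """Percentage of classified pixels falling in each category."""
    labels, counts = np.unique(class_map, return_counts=True)
    pct = 100.0 * counts / counts.sum()
    return dict(zip(labels.tolist(), pct.tolist()))

# 0 = healthy, 1 = hyperemic, 2 = mild burn, 3 = severe burn
class_map = np.array([[0, 0, 1, 3],
                      [0, 2, 3, 3],
                      [0, 0, 1, 2]])
pct = category_percentages(class_map)
print(round(pct[3], 1))  # 25.0 - a quarter of the imaged pixels are severe burn
```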
Compared with the prior art, the use of composition and viability data in classifying tissue represents a significant advance. FIGS. 9A, 9B, 9C, and 9D illustrate some of these advantages for burn classification. In one trial, images were taken of adult mini-pigs with partial-thickness burns. FIG. 9A shows an example of five tissue samples used in the experiment, acquired with a standard photographic camera. These images show the tissue surface before injury (e.g., before the burn), the surface after the burn, and three tangential excisions of the burn (first cut, second cut, and third cut), respectively. These five tissue samples were also used to compare PPG alone and MSI alone against the new system according to some alternative examples of the invention, in which tissue is classified based on both PPG and MSI algorithms. Because the imaged tissue in the trial was independently analyzed by physicians, comparing each imager's results against how the tissue should have been classified allows the effectiveness of the different imaging techniques to be assessed.
FIG. 9B shows these five images with PPG image data alone at each pixel. These images show that there are limitations to correctly classifying tissue based solely on PPG data: only the most severely burned tissue, with minimal blood flow, can readily be identified from this example data. Tissue regions 808, 810, 812, and 814 are examples of such regions with minimal blood flow, which have been scaled to appear darker than other regions. Other regions fall somewhere between the minimum and maximum blood flow readings and are difficult to classify.
FIG. 9C shows these five images with MSI image data alone at each pixel. These images show that there are also limitations to correctly classifying tissue based solely on MSI data. In the pre-injury image, most of the tissue is classified as "hyperemic" when it should in fact be classified as "healthy skin". In the image of the burn, region 816 is correctly identified as a severe burn, but some areas, such as area 818, are mistakenly classified as "severe burn" rather than "healthy skin". In the first cut, a region such as region 820 is incorrectly identified as a "minor burn" when it should be classified as a "graftable wound". Similarly, third-cut tissue region 822 is also erroneously identified as a "minor burn" when it should be classified as a "graftable wound".
FIG. 9D shows the same five images with data from the new system of some alternative examples of the invention, which utilizes at least the MSI and PPG algorithms. The new system correctly identifies pre-burn tissue as "healthy skin". In the burn image, the new system correctly identifies region 824 as a ring of "hyperemic" tissue surrounding the "severely burned" tissue of region 826; this ring is not correctly identified by PPG or MSI alone. In the burn image, the new system also substantially reduces the error of identifying healthy tissue as "severely burned" tissue. Similarly, in the first- and third-cut images, the new system correctly identified the graftable wound where the MSI and PPG images failed. That is, the MSI imager wrongly classified regions 820 and 822, which should be classified as "graftable wound", as "minor burn", whereas the new system correctly identifies these same regions as "graftable wound", as shown by regions 828 and 830.
From the results of this experiment, it can be seen that this new system for classifying tissue based on composition and viability is better able to classify burn injuries at different stages of debridement than the prior art, including the use of PPG alone or MSI alone. The new system thus represents a significant and unexpected advance over other prior-art systems and methods.
One clinical application of some of the alternative examples described herein is burn classification. FIG. 10 shows a high-level flow chart for burn treatment. Tissue 700 shows burned tissue. Skin layer 702 shows a burn on the skin surface; burns often cause the skin to discolor or lose its epidermis. Beneath skin layer 702 is tissue layer 704, which is denatured skin without a blood supply, sometimes referred to as a coagulum, a zone of coagulative necrosis, or eschar; this is dead tissue. Depending on the degree of the burn, other tissue with compromised blood flow may lie near or around tissue layer 704. This is sometimes referred to as the zone of stasis, a region around the zone of coagulation where cell damage is less severe. Tissue farther from the zone of coagulation and outside the zone of stasis is the zone of hyperemia, and tissue in the zone of hyperemia mostly recovers. Burns are classified into first through fourth degrees, with first degree being the mildest and nearest the surface and fourth degree the most severe, extending into muscle and bone.
Subtle differences between burns of different severity can be difficult to distinguish with the naked eye. Indeed, in the early stages of a burn, much of the injury lies hidden below the skin surface, making it nearly impossible to determine the extent of the burn, or even its full appearance, without surgery. Yet despite these difficulties, time is critical in burn treatment: early treatment can make burn recovery markedly more favorable.
Some alternative examples are effective for identifying and assessing the severity of burn injuries. The device described herein can physically locate and identify burns, including burn severity (e.g., the extent of the burn and whether it is superficial, superficial partial-thickness, deep partial-thickness, or full-thickness), and can derive the burn %TBSA overall or for each individual severity. As described above, the original state and quality of skin tissue change after a burn; thus, depending on the degree of the burn, the different tissue layers absorb and reflect light differently from other types of tissue. In some cases, the high-resolution multispectral camera of some of the alternative examples described herein can capture these differences, which are used to assess the composition of the skin to identify burns and burn severity. Relying on this information alone, however, may sometimes provide an incomplete estimate of burn severity: as previously mentioned, burn severity relates not only to how the skin is currently damaged but also to the presence or absence of tissue blood flow. Thus, the high-resolution multispectral camera employed by some alternative examples also desirably measures blood flow in a tissue region, and the combined information on skin composition and blood flow can be used to judge burn appearance and burn severity more accurately and finely.
Scalpel 706 illustrates one way to treat a burn: debridement, in which dead, damaged, infected, necrotic, or soon-to-be-necrotic tissue is excised to facilitate and aid healing of the remaining healthy tissue. As mentioned above, both excessive and insufficient removal of tissue can have life-threatening consequences. Under-excised burns present problems with graft placement onto devitalized tissue and poor graft take, and can also increase the risk of infection and/or prolong recovery. Over-excision, on the other hand, can result in excessive blood loss and bleeding from the excised surface, which can impair graft take. The devices described herein provide a quantitative means of identifying the boundary between healthy tissue and tissue that needs to be excised. This is an advance over the prior art, which relies on subjective expert opinion.
In this example, the burn 708 is excised from the wound 710. After removal of the dead tissue, the clean wound 712 is ready for grafting, with healthy tissue transplanted to the excision area to promote tissue recovery. An advantage of the devices and methods described herein is that even a non-burn specialist can quickly assess the severity of a burn by non-invasive means prior to surgery and grafting.
FIG. 11 illustrates the use of some of the devices described herein for assessing graft viability. As used in some alternative examples, a transplant may be of tissue or of regenerative cells, including stem cells, endothelial precursor cells, and/or mixtures of these cells in isolated, enriched, or aggregated form, or of a prosthetic, donor tissue, or medical device. As used in some alternative examples, transplantation also includes the aforementioned tissues and/or cells combined with a scaffold, prosthetic, or medical device. In the case of a graft onto tissue without a blood supply, a successful graft develops a new blood supply from the surrounding tissue to support it. Some applications include the introduction of regenerative cells, including stem cells, endothelial precursor cells, and/or mixtures of these cells in isolated, enriched, or aggregated form, which can provide a blood supply, i.e., the ability of the cells, through vascular regeneration or arteriogenesis, to produce or cause the production of a new blood supply. Some applications include the use of transplanted and/or regenerative cells, including stem cells, endothelial precursor cells, and/or mixtures of these cells in isolated, enriched, or aggregated form, or in conjunction with scaffolds, donor tissue, prosthetics, or medical devices, supplemented with one or more growth factors, such as FGF, HGF, or VEGF. The devices described herein can classify grafts based on whether graft take is successful or whether the graft will be rejected as necrotic tissue. Image 900 shows an image generated by an alternative example described herein, in which a tissue region is imaged at different times and in different frequency bands. The color of the image indicates the presence of healthy, perfused tissue. In contrast, image 902 shows unhealthy tissue with no blood supply, indicating graft failure.
Another clinical application of the device described herein is the classification of pressure ulcers, also known as decubitus ulcers or bedsores. These wounds develop because pressure acting on the tissue impedes blood flow to it; the resulting obstruction can lead to tissue necrosis and tissue damage. In some cases this causes a noticeable change in tissue color at a later stage. Pressure ulcers are divided into stages one through four, corresponding to the amount of tissue damage that has occurred.
Part of the difficulty in identifying pressure ulcers is that tissue changes caused by early obstruction are not readily observable at the tissue surface. The devices described herein are effective for identifying pressure ulcers at an early stage of development, which facilitates early and prophylactic treatment. FIG. 12 illustrates the use of the device described herein to identify the presence or onset of pressure ulcers and to classify their different stages. Image 800 shows an example of skin tissue with overlaid tissue classification data; the color indicates the presence of a pressure ulcer beneath the surface. Devices made as described herein classify tissue by reading light reflectance at different times and in different frequency bands, which enables detection of differences in tissue composition and in tissue blood flow. Image 806 is a picture of the tissue surface 13 days later, by which time the patient had a stage II pressure ulcer.
As opposed to pressure ulcers, in which blood flow to the tissue is impeded, tissue can also have problems with excessive blood flow. In hyperemia, which manifests as erythema, blood flow to the tissue is increased. This can lead to swelling, discoloration, and necrosis, and is also accompanied by engorgement of capillaries and veins, excess hemosiderin in the tissue, and fibrosis. Alternative examples of the invention are effective for identifying and assessing early hyperemia of tissue. By detecting changes in the original appearance and quality of the tissue along with blood flow to it, these alternative examples can readily identify and assess the severity of hyperemic tissue.
The alternative example apparatus described herein have many other applications in the medical field wherever tissue classification and assessment are required. As with burns, pressure ulcers, and hyperemia, these alternative examples are also capable of classifying and assessing other types of wounds, including abrasions, lacerations, hemorrhage, fractures, punctures, penetrating wounds, chronic wounds, or any type of wound in which the original appearance and quality of the tissue change along with changes in tissue blood volume. The alternative examples described herein provide the physician with physiological information relating to tissue viability in the form of a simple image. Information such as blood perfusion and oxygenation at the wound site is an important indicator of wound healing. By imaging these hemodynamic features hidden under the skin, the physician can be better informed about the progress of wound healing and make better, more timely patient-care decisions. Some of the devices described herein can also provide information about the composition of the skin that is indicative of its condition.
Moreover, the use of some of the devices described herein is not limited to damaged-tissue conditions. Indeed, some alternative examples can also detect healthy tissue and distinguish it from necrotic tissue or tissue that is about to become necrotic.
Healthy tissue in its normal state can be classified and assessed in comparison with a wound or skin condition. For example, in connection with a burn, there may be areas of healthy tissue associated with or adjacent to the burn. Identifying the margin of healthy tissue relative to necrotic or soon-to-be-necrotic tissue aids burn diagnosis and treatment. Healthy tissue is identified by imaging the skin at different times and in different frequency bands to assess skin composition, blood perfusion, and oxygenation at the tissue site.
The alternative examples described herein also classify tissue based on its likelihood of success as a recipient site for transplanted tissue or implanted regenerative cells. This classification takes into account the quality and original appearance of the recipient tissue and its ability to receive a new blood supply. Alternative examples also classify the recipient tissue based on the likelihood that it will form a new blood supply for the transplanted or regenerative cells, and on the general health of the skin. Some of the devices described herein are capable of analyzing multiple images corresponding to different times and different frequency bands when classifying transplanted or recipient tissue.
Beyond classifying the health of tissue, the alternative examples described herein also measure other aspects of tissue, such as the thickness of skin regions and of granulation tissue. In another example, tissue health in the vicinity of a sutured wound, and the healing of that wound, are monitored and assessed by the apparatus described herein.
Another application of some of the devices described herein is monitoring tissue healing. The apparatus described herein can obtain images at multiple points in time to monitor how a wound changes or how healthy tissue forms. In some cases, therapeutic agents such as steroids, hepatocyte growth factor (HGF), fibroblast growth factor (FGF), antibiotics, isolated or aggregated cell populations including stem cells and/or endothelial cells, or tissue grafts may be used to treat wounds or other ailments, and such treatment may also be monitored using the devices described herein. Some alternative examples can monitor the effect of a therapeutic agent by assessing the healing of the tissue before, during, or after a particular treatment. Some alternative examples monitor the effect of a therapeutic agent by acquiring multiple images at different times and in different frequency bands; from these images, the light reflected from the skin is used to assess the original appearance and quality of the tissue and its blood flow. The devices described herein can thus provide valuable information about how the tissue is healing, as well as the efficacy of the therapeutic agent and how quickly it promotes the healing process.
Some alternative examples are used to monitor the introduction of left ventricular assist devices (LVADs) and recovery after implantation. As LVAD flow increases, diastolic pressure increases, systolic pressure remains constant, and pulse pressure decreases. Pulse pressure is the difference between systolic and diastolic pressure and is affected by the contractility of the left ventricle, the intravascular volume, the preload and afterload, and the pump speed. Assessment of arterial blood pressure values and waveforms therefore gives valuable information about the physiological interaction between the LVAD and the cardiovascular system. For example, left ventricular dysfunction is associated with arterial waveforms that lack pulsatility. The alternative examples described herein may be used to monitor a patient's return of pulsatile flow following LVAD implantation and provide a powerful tool to monitor and assist patient recovery.
Some optional examples are also used to provide intra-operative management of tissue-transfer and reconstructive surgery. For example, treatment of a breast cancer patient may involve total mastectomy followed by breast reconstruction, for which published complication rates are as high as 50%. The devices described herein facilitate evaluation of both the tissue to be transferred and the transferred tissue itself, using the methods described above to assess tissue health and quality, blood perfusion, and oxygenation.
Some alternative examples are also used to aid in the analysis of chronic wound therapy. Chronic-wound patients typically receive expensive advanced treatment modalities without any measurement of their utility. The alternative examples described herein enable imaging of chronic wounds using the aforementioned imaging techniques and give quantitative data on their status, including the size and depth of the wound and the presence of wounded and healthy tissue.
Some of the optional examples described herein may also be used to identify limb deterioration. In these applications, the image characterizes peripheral perfusion of the limb. This can be used to monitor the health of normal limbs and to detect insufficient peripheral blood flow in limbs requiring specialist treatment (e.g., limb ischemia or peripheral vascular disease), such as the introduction of growth factors (FGF, HGF, or VEGF) and/or regenerative cells, including but not limited to stem cells, endothelial precursor cells, endothelial progenitor cells, or aggregated or isolated cell populations comprising these cell types. In some cases this enables early treatment that spares the limb from amputation; in other, more severe cases, it provides the medical professional with the data needed to make a sound decision on whether amputation is required.
Another application of the device described herein relates to the treatment of Raynaud's phenomenon, in which a patient experiences short-term episodes of vasospasm (i.e., narrowing of the blood vessels). Vasospasm typically occurs in the digital arteries supplying blood to the fingers, but also occurs in the feet, nose, ears, and lips. Some alternative example devices accurately and precisely identify that a patient is experiencing Raynaud's phenomenon, which aids diagnosis at any stage.
Some alternative example apparatus are also used to identify, classify, or estimate the presence of cancer cells, cancer cell proliferation, metastasis, tumor burden, or cancer stage, as well as the reduction of any of these after treatment. These alternative examples measure light reflected from the tissue to determine the composition of the skin, which may reveal abnormal components associated with cancer cells. Alternative examples can also measure blood flow to cancer cells by evaluating images taken at different times; the blood flow volume may reveal abnormal flow to tissue associated with the presence of cancer cells, cancer cell proliferation, metastasis, tumor burden, or cancer stage. Alternative examples of the invention may also be used to monitor healing after removal of cancer cells, including the growth of healthy tissue and any return of cancer cells.
Some aspects of the foregoing alternative examples have been successfully tested in laboratory settings and clinics. For example, in experiments using optical tissue phantoms in which the dynamics of pulsatile tissue blood flow were mechanically mimicked, the device described herein showed better light penetration than laser Doppler imaging and correctly detected pulsatile flow beneath the tissue phantom material. The test covered pulsatile flow in the range of 40-200 bpm (0.67-3.33 Hz), spanning the full range of human heart rates from rest to vigorous exercise or activity.
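The frequency-domain detection over that 40-200 bpm band can be sketched as a peak search in the FFT of a per-pixel intensity time series. The simulated 80 bpm signal, noise level, and frame count (the 810-frame, 30 fps acquisition described earlier) are illustrative assumptions:

```python
import numpy as np

def dominant_pulse_bpm(signal, fps=30.0, lo_bpm=40.0, hi_bpm=200.0):
    """Return the strongest spectral peak, in bpm, within the
    40-200 bpm (0.67-3.33 Hz) band of an intensity time series."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal - np.mean(signal)))
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# 27 s at 30 fps (810 frames) of a simulated 80 bpm pulsatile signal plus noise
t = np.arange(810) / 30.0
signal = 0.05 * np.sin(2 * np.pi * (80 / 60.0) * t)
signal = signal + 0.01 * np.random.default_rng(3).normal(size=t.size)
print(round(dominant_pulse_bpm(signal)))  # 80
```

Restricting the search to the physiological band rejects DC drift and high-frequency noise while covering resting through exercise heart rates.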
Likewise, based on experiments with a porcine burn model in which burns of different degrees were imaged, it was found that the images formed by the alternative examples described herein, together with a reference database and computer training, enable precise identification of areas corresponding to healthy skin, hyperemia, burns deeper than 1.0 mm, burns shallower than 1.0 mm, bleeding, and healthy wound bed. These are the tissue types that surgeons encounter when performing debridement.
Moreover, clinical studies of some of the foregoing conditions were conducted using the devices described herein. Participants in one study were imaged after cardiothoracic surgery. The selection criteria were: over 18 years of age; hospitalized or about to be hospitalized after cardiothoracic surgery; and having a wound, or a potential risk of wound development, with no exclusion based on wound size. Risk factors include poor blood circulation, mechanical stress on the tissue, temperature, moisture, infection, medication, nutrition, disease, age, and body size. Wounds meeting the selection criteria included skin-flap wounds, burn wounds, hospital-acquired wounds or pressure ulcers, and conditions of vascular insufficiency surrounding diabetic foot ulcers. Subjects were imaged for up to three months, with sessions typically lasting 30 minutes; some patients were imaged up to three times per week to monitor the rate of tissue change. The table below summarizes data observed during some of the studies.
TABLE 2 (observed study data; presented as images in the original document)
Another aspect of some optional examples described herein is that the apparatus may be combined with a dynamic database comprising one or more tissue-condition reference values. In some cases, the dynamic database includes baseline images with information about healthy skin tissue. The dynamic database may also include various images of wounds or skin conditions as points of comparison, yielding an assessment of the wound or skin condition and/or of its healing. The dynamic database may also include samples of relevant signals, such as normal heart rate, abnormal heart rate, noise signals, signals corresponding to healthy tissue, and signals corresponding to unhealthy tissue.
In some alternative examples, the images in the dynamic database are images or data acquired by other instances of the apparatus described herein. In other alternative examples, the dynamic database includes images and/or data not acquired by the inventive apparatus. These images may be used to assess the target or otherwise process data about the target.
FIG. 13 illustrates an example of a dynamic database. In the figure, an example imaging device 1000 is connected to an example cloud 1002. The example imaging device 1000 may be a device described herein, or any other computer or user device that is also connected to the dynamic database. In some cases, cloud 1002 includes a program execution server (PES) comprising a plurality of data centers, each data center including one or more physical computing systems configured to execute one or more virtual desktop instances, each virtual desktop instance associated with a computing environment that includes an operating system configured to execute one or more applications, and each virtual desktop instance accessible by a user's computing device over a network. The cloud may also include other arrangements for synchronized computing and storage.
Data path 1004 shows a bi-directional connection between imaging device 1000 and cloud 1002. The cloud 1002 itself has a processing element 1006, where the cloud 1002 receives signals, processes data, executes classification algorithms, and generates metadata indicating whether the dynamic database is synchronized with one or more computing devices.
In some alternative examples, data analysis and classification are performed in the cloud. The analysis includes retrieving sample-signal data for comparison with the acquired signal; such samples are used to classify tissue regions in the resulting signal. In other alternative examples, the processing elements are disposed on the imaging device 1000 for local processing at the data-acquisition location.
In addition to collecting and analyzing data in the dynamic database, the processing element may also perform routine error tracking and calculation. Errors may be computed locally and aggregated in the cloud, and/or computed in the cloud. In some cases, an error threshold may be established for a particular classification model. The threshold takes into account type I and type II errors (i.e., false positives and false negatives) as well as criteria for clinical reliability.
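The type I/II error bookkeeping described above can be sketched as follows; the example labels and the 0.3 threshold are illustrative, not clinical values from the patent:

```python
import numpy as np

def error_rates(y_true, y_pred, positive=1):
    """Type I (false positive) and type II (false negative) rates
    for a binary tissue classification."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    neg, pos = y_true != positive, y_true == positive
    fpr = float(np.mean(y_pred[neg] == positive)) if neg.any() else 0.0
    fnr = float(np.mean(y_pred[pos] != positive)) if pos.any() else 0.0
    return fpr, fnr

y_true = [0, 0, 0, 0, 1, 1, 1, 1]  # ground-truth labels (1 = severe burn)
y_pred = [0, 0, 1, 0, 1, 1, 0, 1]  # classifier output
fpr, fnr = error_rates(y_true, y_pred)
print(fpr, fnr)  # 0.25 0.25

# A model might be accepted only if both rates are under a clinical threshold
threshold = 0.3
print(fpr <= threshold and fnr <= threshold)  # True
```

Rates like these, computed locally and aggregated in the cloud, are what an error threshold for a deployed classification model would be checked against.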
The processing element 1006 also analyzes the data. Cloud 1002 also has a data element 1008, which contains the information of the dynamic database itself and receives updates. The data element 1008 and the processing element 1006 are coupled to one another.
Other sources or libraries may also be connected to the cloud. In this example, entity 1012 is also connected to cloud 1002. Entity 1012 can provide updates and algorithms to any device or system connected to cloud 1002, such as system 1000, to improve system functionality. Through learning and experience, the methods used at the various stages can be updated to reduce overall error. Entity 1012 can quickly assess changes to multiple classification algorithms simultaneously, providing system-wide improvement. New datasets and models for new clinical applications may also be distributed. Further, entity 1012 can update system 1000, or any device or system connected to cloud 1002, to obtain and analyze data for new medical applications, such as analyzing frostbite. This approach expands functionality and allows the system to adapt to different situations as scientific understanding improves.
In addition, various aspects of the alternative examples described herein have shown efficacy in experiments on tissue phantoms and animal models. These tests show that the alternative examples of the invention are effective at least for the assessment of burns. The following non-limiting examples are provided for illustration and give further details of the tests performed.
1. Tests
1.1. Example 1: experiments with tissue and animal models illuminated with point and planar light sources
1.1.1. Materials and methods
The PPG system in this study comprises three functional modules: illumination, a sensor (CMOS camera), and the imaging subject. The illumination and sensor modules are arranged on the same side of the target (i.e., in reflection mode; FIGS. 14A-14C). A light beam incident on the target is scattered into the object, after which the backscattered light signal is acquired by the camera. When the imaging subject embedded in the turbid medium changes over time (e.g., the blood vessel volume changes due to pulsatile blood flow), the backscattered light is correspondingly modulated in intensity.
1.1.1.a. Illumination modules
Three different illumination modules were compared: 1) a point light source using a single-wavelength LED diode; 2) a planar light source using a broad-spectrum tungsten lamp; 3) a planar light source using high-power single-wavelength LED emitters. All three illumination modules are powered by a high-stability DC power supply with intensity variation over time of less than 1%.
FIGS. 14A-14C show a desktop system operating in reflection mode. Diagram 1400 of FIG. 14A shows a single-wavelength LED point source, diagram 1401 of FIG. 14B shows a tungsten lamp, and diagram 1402 of FIG. 14C shows a high-power LED emitter. The object under illumination is an optically turbid medium, with a more opaque object buried at a depth d below the surface.
1.1.1.a.i. Single-wavelength LED
A single-wavelength 850nm LED diode (e.g., KCL-5230H, Kodenshi AUK) is mounted alongside a CMOS camera (e.g., Nocturn XL, Photonis USA) at a distance D1 of 18cm from the target surface (see, e.g., diagram 1400). The 12-degree total radiation angle of the LED produces a circular illumination spot approximately 3.8cm in diameter, centered within the FOV of the sensor but slightly off the center of the object.
1.1.1.a.ii. Tungsten filament lamp
A tungsten lamp (e.g., ViP Pro-light, Lowel Inc.) is mounted close to a camera (e.g., BM-141GE, JAI Inc.) at a distance D2 of 60cm from the target surface (see, e.g., diagram 1401). Two frosted glass diffusers (e.g., model: iP-50, Lowel Inc.) are mounted in front of the lamp to reduce the projection directivity of the lamp and to illuminate the object more uniformly. The illumination area is wider than the FOV of the camera, and the spatial uniformity of illumination is better than that of a point light source LED.
1.1.1.a.iii. High-power LED emitter
Four high-power monolithic LED emitters (e.g., SFH 4740, OSRAM) are arranged in a 2x2 array, mounted in the same plane as the sensor (e.g., Nocturn XL, Photonis USA) in a coaxial configuration. The distance D3 from the LED emitter array and camera to the target surface is 30cm (see, e.g., diagram 1402). The spatial intensity variation is less than 15%. The FOV of the camera is controlled by the optical lens and is slightly narrower than the illuminated area.
1.1.1.b. System configuration
For systems employing the LED point source or LED emitters, a monochrome CMOS camera (e.g., Nocturn XL, Photonis USA) is used as the detector; it has low dark noise and a large dynamic range, and its 10-bit ADC resolution provides a signal-to-noise ratio of 60dB. For the tungsten lamp illumination system, the camera (e.g., BM-141GE, JAI Inc.) provides comparable dynamic range (58dB) and the same 10-bit ADC resolution as the Nocturn XL camera. The images obtained by the two cameras are cropped to 1280x1040 (aspect ratio 5:4). Because the tungsten lamp generates heat and requires a longer imaging distance than the other two configurations, the tungsten lamp illumination system uses a separate lens (e.g., Distagon T* 2.8/25 ZF-IR, Zeiss Inc.) to control the FOV.
For all three system configurations, the camera is mounted vertically, facing down toward the target surface. A common 20x16cm FOV is used for inter-system comparisons. The exposure time of the camera in each system configuration was calibrated against a reflectance reference standard (e.g., 95% reflectance standard plate, Spectralon SG3151, LabSphere Inc.). The exposure time is set so that the full dynamic range of each camera is used.
1.1.1.c. Model
FIG. 15 shows a tissue model in a Petri dish and a model setup simulating human pulsatile blood flow. In tissue model 1500, an elastic tube is placed under the homogeneous model in the Petri dish to simulate blood flow under the skin. The model device 1501 is designed to simulate human pulsatile blood flow in a laboratory environment. A peristaltic pump drives the blood-mimicking fluid through the elastic model vessel, which pulsates at a depth of 8.0mm within the gelatin-lipid tissue-model matrix. Due to the elasticity of the tube, the model vessel expands by approximately 2% in volume on each cycle of the peristaltic pump, similar to a human artery.
The tissue-like model is designed to simulate blood flow beneath the skin surface. Tissue model matrices were prepared according to Thatcher et al. (FIG. 15). Briefly, gelatin (e.g., JT Baker, type B) at 10% (w/v) in Tris-buffered saline (e.g., pH 7.4, Alfa Aesar) was mixed with a sterile fat emulsion (e.g., 20% w/v, Baxter); the final fat-emulsion concentration was controlled at 20%. In addition, 0.2% of an optical standard (e.g., a mixture of polystyrene beads and India ink) was added to the gelatin matrix to simulate the absorption characteristics of tissue. The mixture is poured into a Petri dish (e.g., Novatech, 150mm diameter) to form a homogeneous surrounding medium. A silicone rubber tube (e.g., Dow-Corning) with an inner diameter of 1.58mm, simulating a blood vessel, is placed under the surface at d = 8mm. During each pump cycle, the inner diameter expands by approximately 2%, which simulates the change in arterial diameter during the cardiac cycle.
To simulate a pulsatile cardiac cycle, an absorbing blood-like fluid in the tube is driven by two peristaltic roller pumps (e.g., Watson Marlow, model #sciQ32) at 40 cycles per minute each, together simulating a normal human heart rate of 80bpm (FIG. 15). This pulsatile blood flow through the model vessel produces a PPG signal, which is the measurement target of the PPG imaging device.
1.1.1.d. Animal model
Figure 16 shows an in vivo circular thermal burn model on animal skin. Hanford pigs were chosen as the animal model because their skin is anatomically closest to human skin. The pig epidermis is 30-40 μm thick, close to the 50-120 μm thickness of human epidermis. In addition, the vascular architecture and extracellular matrix composition are similar to those of human skin. Animals were cared for according to the US Public Health Service policy on humane care and use of laboratory animals. The experiments were performed in a fully self-contained large-animal operating room. The burn model and research protocol were approved by the Institutional Animal Care and Use Committee (IACUC).
The thermal burn model was prepared using a brass rod with controlled temperature and pressure. The rod was heated in a furnace to 100 °C and then applied to the back skin of the pig at a pressure of 0.2 kg/cm² for 60 seconds. This method forms deep partial-thickness burns. The wound site comprised a circular partial-thickness burn 3.6cm in diameter (FIG. 16). Burn images were acquired with each imaging system for comparison of illumination uniformity and PPG signal strength.
1.1.1.e. Pixel-by-pixel comparison
Figure 17 shows time-domain PPG signal extraction. Diagram 1700 shows the intensity of a sequentially extracted image pixel (x, y) over 800 frames. Block 1701 shows the processing method for quantifying PPG signals.
A sequence of 800 images at a frame rate of 30 frames/second is acquired and stored as uncompressed TIFF files. The PPG signal strength is calculated pixel by pixel. The key steps of PPG signal and image processing are as follows (FIG. 17): (1) remove the linear component, which eliminates DC drift; (2) downsample in the time domain to reduce the data volume; (3) filter the signal; (4) apply a Fast Fourier Transform (FFT) to convert the time-domain signal into a frequency-domain signal; (5) obtain the spectral power, in particular at the frequency corresponding to the heart rate; (6) calculate the ratio of the summed intensity in the heart-rate band to the summed intensity in a slightly higher band (treated as noise) as the signal-to-noise ratio (SNR); (7) output the PPG image using a color map showing the PPG SNR for each pixel. Colors are scaled linearly from the lowest to the highest signal present in the image.
Signal processing was performed using MATLAB (version 2014a, MathWorks, Inc.).
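The seven-step, pixel-wise pipeline above was implemented in MATLAB. The following Python/NumPy sketch approximates steps (1) and (4)-(6) for a single pixel trace; the synthetic signal, the heart-rate value, and the band edges are illustrative assumptions, and the downsampling and filtering steps are omitted for brevity:

```python
import numpy as np

fs = 30.0                 # frame rate (frames/s), as in the study
n = 800                   # frames in the sequence
t = np.arange(n) / fs
hr_hz = 80 / 60.0         # assumed 80 bpm heart rate -> ~1.33 Hz

# Synthetic pixel-intensity trace: DC drift + pulsatile component + noise
rng = np.random.default_rng(0)
pixel = 500 + 0.05 * t + 2.0 * np.sin(2 * np.pi * hr_hz * t) + rng.normal(0, 0.5, n)

# Step (1): remove the linear (DC drift) component
detrended = pixel - np.polyval(np.polyfit(t, pixel, 1), t)

# Steps (4)-(5): FFT to the frequency domain, then spectral power
freqs = np.fft.rfftfreq(n, d=1 / fs)
power = np.abs(np.fft.rfft(detrended)) ** 2

# Step (6): SNR = power in the heart-rate band / power in a slightly higher band
hr_band = (freqs > hr_hz - 0.2) & (freqs < hr_hz + 0.2)
noise_band = (freqs > hr_hz + 0.5) & (freqs < hr_hz + 0.9)
snr = power[hr_band].sum() / power[noise_band].sum()
print(f"PPG SNR: {snr:.1f}")
```

In the full system this computation is repeated for every pixel, and step (7) maps the resulting SNR values to a linear color scale.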
1.1.2. Results
1.1.2.a. Illumination pattern assessment
To characterize the light pattern of the three illumination modes, we placed a diffuse reflectance plate (LabSphere Inc.) under the camera and light source. The plate surface is perpendicular to the camera axis.
FIGS. 18A-18C show a comparison of spatial illumination intensity between the LED point source (non-uniform illumination; image 1800 of FIG. 18A), the tungsten lamp (uniform illumination; image 1801 of FIG. 18B), and the LED emitters (improved uniform illumination; image 1802 of FIG. 18C), using a flat reflectance plate as the imaging subject.
The illumination pattern differs between the three illumination modes (FIG. 18). In the LED point-source reflection pattern (image 1800), there is a hot spot within the FOV: a high-intensity area surrounded by regions that darken with distance from the source. The single LED also casts additional shadows on the target due to its package structure, further reducing illumination uniformity. With the tungsten lamp (image 1801), the illumination pattern is more uniform than spot illumination, eliminating the shadow effect in the FOV. The LED emitters (image 1802) showed minimal variation in illumination intensity: spatial variation is controlled to less than 15%, and temporal stability is well controlled to less than 1%.
FIG. 19 shows a comparison of intensity curves between the three illumination patterns: the diagonal intensity curves across the reflectance plate for the point light source (see image 1800), the tungsten lamp (see image 1801), and the LED emitters (see image 1802).
The diagonal intensity curves of the three illumination patterns highlight the intensity variation (FIG. 19). Clearly, the point source's FOV requires the camera's full dynamic range (e.g., 10 bits, intensity values 0-1023) to cover the saturated spot region (flat top), the darker surrounding actual working area (shoulders), and the rarely used roll-off region. Both the tungsten lamp and the LED emitters improve spatial uniformity and reduce the need for a large-dynamic-range camera.
1.1.2.b. Tissue-model results
FIGS. 20A to 20C show imaging results of the tissue model and the underlying pulsatile model vessel under the LED point light source (image 2000 of FIG. 20A), the tungsten lamp (image 2001 of FIG. 20B), and the LED emitters (image 2002 of FIG. 20C), respectively. The imaging results are overlaid on photographs of the model.
In a carefully controlled bench-top experiment, the tissue-like model imaging target was placed under each of the three illumination modules to study the effect of illumination intensity variation on the PPG signal pattern (FIG. 20). Under the LED point source (image 2000), part of the model vessel lies in the dark region of the field of view. The overall position of the model vessel can still be determined: near the center of the spot it is precisely located and well aligned with the test apparatus. However, toward the edge of the field of view (the left end of the vessel), the apparent width of the model vessel widens as the illumination intensity decreases. The edge of the Petri dish produces a dark rim, which reduces the effective FOV and makes the image more difficult to present to the user.
For tungsten lamp illumination (image 2001), the incident beam is slightly off the camera axis, which in effect creates a small angle of incidence. The directionality of the tungsten lamp then produces a slightly glossy (i.e., specular) area on the model surface. This effect saturates pixels, preventing detection of the PPG signal there, and the apparent position of the model vessel deviates from its true position. In addition, the large infrared component emitted by the tungsten lamp deposits substantial heat in the target, which rapidly denatures the gelatin tissue model: the surface temperature rose from room temperature to 30-40 °C within 30 minutes (not shown).
For high-power LED emitter illumination (image 2002), the color output in the image is continuous and the apparent vessel width remains constant. The position of the PPG signal coincides with the actual position of the model vessel. The image contrast is sufficient, indicating image quality superior to the tungsten lamp and point source. Moreover, there is no cumulative thermal effect in the model: the temperature change over 30 minutes was less than 0.1 °C, which is negligible.
After evaluation of the three illumination methods, the model was tested under varying illumination intensity, so that the effect of intensity could be judged from the measured strength of the PPG signal. The purpose of this study was to show that, in addition to illumination uniformity within the field of view, the illumination intensity should be maximized. Bench tests were performed using the tissue-like model device with the high-power-LED-emitter-illuminated DeepView system under varying incident light intensities, controlled by changing the voltage input to the LED emitters. By varying the voltage input, the illumination intensity can be raised to a saturation point, which represents the maximum absolute irradiance that the imager can accurately resolve. This saturation point occurs at an emitter voltage of 12.95V, corresponding to an absolute irradiance of about 0.004 W/m². Using this saturation point as a reference, intensity thresholds were established in 20% increments of the maximum.
FIG. 21 shows the power spectral density of the PPG signal in the pulsatile region of the tissue-like model as a function of the percentage of maximum light intensity, referenced to the saturation point of the LED emitter module at the imager (irradiance 0.004 W/m²). Each data point is the average of 5 pixels sampled from each of 3 tissue-like model replicates. The curve is a logarithmic regression (R² = 0.9995); error bars reflect the standard deviation about the mean.
At each illumination level from 0% to 100%, images were recorded and processed using the proprietary DeepView algorithm (FIG. 17). Using the processed image, several pixels were manually selected from the high-pulsatility region along the model tube. These selected pixels were extracted and processed individually to determine the PPG signal strength at each point. The metric used to estimate PPG signal strength is the power spectral density (PSD), a measure of the distribution of signal energy in the frequency domain. The PSD at the pulsatile frequency was the quantity compared across samples and levels. This process was repeated for several pixels in each of the three models, forming a sample at each level, and the PSD values were averaged to give the value for that level (FIG. 21). The results show a clear logarithmic trend, with the received PPG signal strength increasing continuously with intensity.
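The logarithmic fit reported for FIG. 21 can be reproduced in outline. The PSD values below are synthetic stand-ins that follow an exact logarithmic law (the study's actual numerical values are not given in the text), so the fit here is illustrative only:

```python
import numpy as np

# Illumination levels as % of the saturation-point intensity (20% steps)
level = np.array([20, 40, 60, 80, 100], dtype=float)

# Hypothetical mean PSD values at the pulsatile frequency (arbitrary units),
# constructed to follow the logarithmic trend reported for the tissue model
psd = 3.0 * np.log(level) + 1.0

# Fit PSD = a*ln(level) + b and compute the coefficient of determination R^2
a, b = np.polyfit(np.log(level), psd, 1)
fit = a * np.log(level) + b
ss_res = np.sum((psd - fit) ** 2)
ss_tot = np.sum((psd - psd.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(a, 2), round(r2, 4))  # 3.0 1.0
```

With real, noisy measurements the slope and R² would of course differ; the study reports R² = 0.9995 for its data.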
These results show that maximizing the illumination intensity is an extremely important parameter, and they also confirm the necessity of uniform illumination within the field of view, since darker areas along the edges result in a loss of usable PPG signal strength.
1.1.2.c. Animal-model results
FIG. 22 shows pixel classification in a region of healthy pig skin based on PPG signal intensity. Image 2201 and histogram 2202 are the result of LED point illumination (shown in image 2200). Image 2204 and histogram 2205 reflect the result of LED emitter illumination (image 2203).
We measured the intensity of light incident on the pig skin under both the LED spot source and the LED emitter (planar) source. For LED spot illumination, the pixels in the region where the light strikes the tissue are fully saturated (see image 2200), while the remaining pixels in the image are illuminated to less than 50% of the camera's sensitivity range. In contrast, under full-field LED emitter illumination, most of the pig skin in the imaged area reflects light in the 70-90% range of the camera (see image 2203); there are few saturated pixels and no completely dark pixels. The uniformity of the LED emitters enables pixel-by-pixel comparison of PPG signals, since the variation in illumination intensity is well controlled.
To confirm that the LED emitter illumination system is preferable on the basis of blood-flow uniformity, we studied PPG signals collected under both LED point-source and LED emitter illumination. To this end, we assessed areas of porcine skin of uniform tissue type, where blood flow was assumed to be uniform. We chose healthy skin as the tissue type because it is available over large areas of the pig's back, and the blood flow in this tissue is similar at any point there. We expected that uniform illumination would provide an output image with a more uniform PPG signal from healthy skin. The PPG signal from the region of interest (boxes in images 2201 and 2204) is plotted as a histogram. The results show that with uniform illumination the distribution of the PPG signal is more Gaussian, and no pixels acquired from this area lack a PPG signal. With the LED spot illumination (FIG. 22), many pixels were acquired without a PPG signal, and the tissue areas showing a PPG signal were scattered and non-uniform. Such data are difficult for physicians to interpret.
Since blood flow in wounds is an important factor in healing and in assessing tissue viability, an animal burn model was established to assess the suitability of these illumination patterns for burn wound assessment. In partial-thickness burns, the arterial structures that perfuse the tissue are damaged, and little or no blood flow is expected in these damaged areas. Therefore, little or no PPG signal is acquired from burned skin.
Fig. 23A to 23F show various illumination patterns and images corresponding to burn wounds of pig skin. Specifically, fig. 23A to 23C show an illumination pattern using an LED point light source (fig. 23A), an illumination pattern using a tungsten lamp (fig. 23B), and an illumination pattern using an LED emitter (fig. 23C). The corresponding image results (e.g., fig. 23D-23F) show the performance of detecting burn wounds and healthy tissue, respectively.
For LED spot illumination (FIG. 23A), the burn wound was 4.0cm from the center of the illumination spot, similar to the model test. Half of the burn circle lies at the dark border and the other half in the dark region of the image. The imaging results (FIG. 23D) show that the edge of the circular burn area is still readable, but the center shows no contrast with the surrounding healthy skin tissue. The area of the image where the light source directly illuminates the tissue is fully saturated, and no PPG signal is detected there. Similarly, the edge region opposite the illumination spot is too dark to be assessed by the imager. For the non-burned tissue in the dark and saturated spot areas, the imaging results showed no blood-flow signal, despite the tissue being physiologically healthy.
For the tungsten lamp illumination model (FIG. 23B), the FOV was illuminated substantially uniformly. In the imaging results (FIG. 23E), the edges, shape, and area of the burn wound can be resolved, and the SNR contrast is sufficient, indicating that the illumination elicits an adequate PPG signal from the surrounding healthy tissue. However, due to the directionality of the incident beam, the right half of the FOV is illuminated at a weaker intensity than the left half. Therefore, in the imaging result (FIG. 23E), the right half of the image shows a larger SNR contrast than the left half, which invites the erroneous interpretation that the right half of the FOV is better perfused than the left half.
For the LED emitter model (FIG. 23C), the illumination within the FOV was more uniform than the tungsten lamp source, corresponding to a better PPG image. The imaging results (FIG. 23F) show that the edges, shape, and area of the burn wound correspond to the actual tissue, and the healthy tissue surrounding the burn wound shows correspondingly distinct image properties relative to the burn site. Although the border at the bottom of the image appears as a straight edge in the imaging results, the healthy (blood-perfused) tissue below it still shows the same contrast as the homogeneous background.
1.1.3. Conclusion
The illumination function plays an important role in optical PPG systems. In this study, we investigated illumination variations, including intensity and uniformity, for PPG imaging using an LED point source, a tungsten lamp, and an LED emitter array. Preliminary evaluation on tissue-like models showed that the PPG signal varies with the illumination intensity; therefore, uniform illumination is essential for obtaining an accurate PPG signal in an imaging device. In our animal model, we confirmed the tissue-model findings, showing that changes in illumination intensity also affect the PPG signal received from physiologically similar areas of healthy skin tissue. At the burn site, tissue damage weakens blood flow, and a reduced PPG signal is obtained. By improving the accuracy of burn-area detection, uniform illumination has clear advantages over the other two modes. Although both the tungsten lamp and the LED array provide uniform illumination patterns, the LED light source has further advantages in clinical use: it causes no appreciable temperature change at the target surface, is more reliable, and consumes less power. A fast, non-invasive, safe device such as an optical PPG imager that enables estimation of blood-flow perfusion in patients is valuable to clinicians in wound care. With illumination modules such as the proposed LED emitter arrays, which achieve high-intensity, uniform light within the field of view, PPG imaging technology can be precisely tailored to these clinical applications.
1.2. Example 2: wound debridement tests characterized by multispectral imaging of a porcine burn model
Using a porcine burn model, we studied partial-thickness burns of varying severity. We made eight 4x4cm burns on the back of a mini-pig. Four burns were studied without intervention, and four burns underwent serial tangential excision of eschar. We imaged the burn sites using wavelengths from 400nm to 1000nm.
Histology confirmed that partial-thickness burns of different depths were produced. Spectral image analysis showed that MSI detected significant differences among the spectral curves of healthy tissue, superficial partial-thickness burns, and deep partial-thickness burns. For distinguishing superficial from deep partial-thickness burns, the absorption spectra at 515nm, 542nm, 629nm, and 669nm were the most accurate, while for guiding debridement the absorption spectrum at 972nm was the most accurate.
Using the MSI device in a clinical setting can improve the ability of a non-expert to discriminate between partial-thickness burns of varying severity and to assess whether a patient requires surgery.
1.2.1. Materials and methods
The animal study procedure used here was adapted from the methods of Branski, Gurfikel, Singer, et al. The burn model and study protocol were approved by the institutional animal care and use committee.
1.2.1.a. Burn model and study protocol
An adult male (age: 7.2 months) Hanford mini-pig weighing 47.5kg was used. The animal was cared for according to the US Public Health Service policy on humane care and use of laboratory animals. The experiments were performed in a fully self-contained large-animal operating room. The mini-pig was fasted overnight prior to anesthesia. Anesthesia was induced with a combination of Telazol (~2.2mg/kg, IM) and Xylazine (~0.44mg/kg, IM). The animal was intubated and kept under anesthesia with isoflurane (0.1% to 5%, in 100% oxygen). Vital signs monitored and recorded in the protocol included heart rate, blood pressure, respiratory rate, and PPG waveform. At the end of the experiment, the animal was euthanized using sodium pentobarbital (390mg/mL) at a minimum dose of 1.0mL per 4.5kg body weight.
Eight 4x4cm burns were made on the back of the mini-pig using an aluminum bar set to a temperature of 100 °C. Different burn depths were produced by varying the application time of the heated bar: healthy skin (0 seconds); superficial partial thickness (SPT, 30 seconds); deep partial thickness 1 (DPT1, 45 seconds); deep partial thickness 2 (DPT2, 90 seconds). Two burns of each type were formed, divided into two adjacent groups of four. Figure 24 shows the locations of the burns on the pig's back. The numbers indicate the group and the letters indicate the treatment ('1' is group I, '2' is group II; 'a' is control, 'b' is SPT, 'c' is DPT1, 'd' is DPT2). FIG. 25 shows the dimensions of the excised tissue in group I (left) and group II (right).
Group I burns were imaged before, immediately after, and one hour after the burn. These burns were then excised as 5x4x1cm tissue blocks (FIG. 25), each including a small margin of healthy adjacent tissue to ensure that the entire burn was captured. Group I is referred to as the "burn classification test."
Group II burns underwent serial tangential excision of eschar in 1mm-deep layers using an electric dermatome (e.g., Zimmer, Warsaw, IN) to analyze the burns layer by layer. For example, FIG. 26 shows a diagram of exemplary debridement steps. Tissue was serially excised in 5x5x0.1cm slices (FIG. 25) until punctate bleeding was observed at the wound site (FIG. 26). Group II burns were imaged before, immediately after, and after each excision. Group II is referred to as the "burn debridement test."
Each tissue sample (tissue block and excised layer) was preserved in 10% neutral buffered formalin and sent for histopathological examination. Each sample was sectioned and stained with hematoxylin and eosin. For the group II burns with tangential excision, two pathologists at separate facilities each judged the exact excision layer at which viable tissue was reached. The overall severity of each burn was judged by the percentage of dermal damage: lesions of less than 20% were classified as superficial partial-thickness burns, and those of more than 20% but less than 100% as deep partial-thickness burns.
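The depth-classification rule stated above (less than 20% dermal damage is superficial partial thickness; between 20% and 100% is deep partial thickness) can be written as a small helper. The handling of the exact 20% and 100% boundaries and the "healthy"/"full thickness" labels are assumptions added for completeness, not rules stated in the text:

```python
def classify_burn(damage_pct):
    """Classify burn severity from percent dermal damage.

    Per the study's rule: <20% -> superficial partial thickness;
    >20% but <100% -> deep partial thickness. Boundary handling and
    the endpoints below are illustrative assumptions.
    """
    if damage_pct <= 0:
        return "healthy"
    if damage_pct < 20:
        return "superficial partial thickness"
    if damage_pct < 100:
        return "deep partial thickness"
    return "full thickness"

print(classify_burn(10))   # superficial partial thickness
print(classify_burn(45))   # deep partial thickness
```

In the study this judgment was made histologically by pathologists; the helper merely encodes the stated percentage cutoffs.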
1.2.1.b. Apparatus and data analysis
The Spectral MD wound assessment prototype performed all MSI acquisitions. The camera has a silicon charge-coupled device (CCD) with 1392(h) x 1040(v) pixels. A wheel with eight interchangeable filters rotates within the device to achieve high-speed MSI. The active area is 10.2mm x 8.3mm. A 250W tungsten lamp source (LowePro) was used to illuminate the field of view. The center wavelengths (nm) of the eight filters are: 450, 515, 542, 620, 669, 750, 860, and 972 (full width at half maximum 10nm). All post-acquisition processing was performed using MATLAB (version 2013b).
1.2.1.c. Statistical analysis
The histological findings were used to guide the selection of specific regions of each burn image and to classify the signals from these regions. Signals from different burn depths were compared by two-way analysis of variance (two-way ANOVA) and multiple comparisons (Tukey-Kramer). The tissue debridement analysis used three-way ANOVA and multiple comparisons (Tukey-Kramer). P-values were corrected using the Bonferroni method, whereby p-values less than 0.05 divided by the number of comparisons were considered significant.
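The Bonferroni rule used here, under which a comparison is significant only if its p-value is below 0.05 divided by the number of comparisons, is a one-line computation; the two examples below use the comparison counts quoted later in the burn classification test (six comparisons) and the one-hour follow-up (fifteen comparisons):

```python
def bonferroni_threshold(alpha=0.05, n_comparisons=1):
    # Per-comparison significance threshold under Bonferroni correction
    return alpha / n_comparisons

# Burn-classification test: 6 pairwise comparisons
print(round(bonferroni_threshold(0.05, 6), 4))   # 0.0083
# One-hour follow-up test: 15 comparisons
print(round(bonferroni_threshold(0.05, 15), 4))  # 0.0033
```

These values correspond to the approximate thresholds of 0.008 and 0.003 cited in the results.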
1.2.2. Theory of the invention
Tissue can be simplified by viewing it as a unique combination of blood, melanin, water, fat, and extracellular matrix (ECM). When white light reflected off models composed of one or all of these components is measured, the absorption spectrum of each model shows particular peaks or preferred wavelengths. FIGS. 27A to 27E show absorption spectra of the different tissue components. By observing the changes that occur at these preferred wavelengths, we can better discriminate between burn types. We hypothesized that the blood spectrum is important for distinguishing superficial from deep partial-thickness burns. This is based on the assumption that deep partial-thickness burns cause more vascular damage and stasis of blood flow than superficial partial-thickness burns. Therefore, wavelengths from 450nm to 669nm, the absorption-peak range of blood, were included in the prototype. Since deep partial-thickness burns theoretically damage more ECM than superficial ones, ECM-sensitive wavelengths were also included.
It has been shown that the wavelengths 450nm, 550nm, 650nm, and 800nm can improve the classification of burn depth compared with traditional clinical judgment alone. After examining the optical properties of skin tissue components, we wanted to test eight wavelengths, centered on the previously established ones, that as described above should be more conducive to burn assessment. The complete list of tested wavelengths is: 420nm, 515nm, 542nm, 629nm, 669nm, 750nm, 860nm, and 972nm.
1.2.3. Results
1.2.3.a. Histology
Histopathologists blindly analyzed the pathophysiological changes of group I and group II layer by layer, classifying the burned tissues by depth. Overall, three superficial partial-thickness burns, three deep partial-thickness burns, and two healthy controls were produced. Figure 28 shows the histology of the tangentially excised (layer-by-layer) burns. The black line indicates the full extent of the burn, and the yellow line indicates the most severely burned area.
Debridement histology was analyzed to see how effectively serial excision with the dermatome alone removed the burned tissue. Histology showed that at each site all of the burned tissue had been removed within at most four excisions. The final cut of each treatment removed healthy tissue down to the depth of the burn margin. Occasionally, the last excision contained only healthy tissue, meaning that debridement should have been stopped one step earlier. Figure 29 shows histological sections from the successive tangential excisions of each debridement in the animal study. The superficial dermis is the uppermost section, with subsequent layers progressing deeper into the dermis. Arrows indicate the surface of each tissue section. The black line indicates the full extent of the burn, and the yellow line indicates the most severely burned area.
1.2.3.b. Burn classification test
The Spectral MD wound assessment prototype was able to correctly classify each tissue type of group I as healthy, SPT, DPT1, or DPT2. Figure 30 shows a graph of MSI data acquired immediately after the burn, indicating that the reflectance spectra of the burn types are distinct from the outset. The graph shows the four reflectance spectra obtained from all burn sites and healthy controls. Multiple-comparison statistical analysis showed that all wavelengths except 420nm were effective for distinguishing SPT, DPT1, and DPT2 burns. Multiple comparisons also showed that MSI is able to distinguish DPT1 from DPT2 using the 420nm, 542nm, 669nm, and 860nm wavelengths. The following table shows multiple comparisons between burn categories, where p-value 1 corresponds to SPT versus DPT1 and p-value 2 corresponds to SPT versus DPT2 (the significance threshold for this test is 0.05/6 ≈ 0.008):
TABLE 3 (reproduced as an image in the original; data not shown)
Thus, MSI is able to distinguish between different burn depths immediately after a burn by its individual characteristic spectral characteristics at several important wavelengths of light.
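The multiple-comparison procedure above uses a Bonferroni-corrected significance threshold: with four tissue classes there are six pairwise comparisons, so each test is judged against 0.05/6 = 0.008. A minimal sketch of that procedure is shown below; the reflectance values are synthetic placeholders standing in for per-site MSI measurements at one wavelength, and the group means and spreads are invented for illustration.

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Hypothetical reflectance samples (one value per imaged site) at a
# single wavelength; real values would come from the MSI image pixels.
rng = np.random.default_rng(0)
groups = {
    "healthy": rng.normal(0.60, 0.02, 30),
    "SPT":     rng.normal(0.55, 0.02, 30),
    "DPT1":    rng.normal(0.50, 0.02, 30),
    "DPT2":    rng.normal(0.45, 0.02, 30),
}

pairs = list(combinations(groups, 2))   # 6 pairwise comparisons
alpha = 0.05 / len(pairs)               # Bonferroni threshold, 0.05/6

for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p={p:.3g}  significant={p < alpha}")
```

The same scheme scales to the later tests in this example, where 15 or 45 comparisons give thresholds of 0.05/15 and 0.05/45.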
Imaging data collected one hour after the burn were then plotted in the same manner, testing the repeatability of severity identification. Fig. 31 plots the spectra for each burn type immediately after the burn and one hour after the burn. In this analysis, the DPT1 and DPT2 data were used together to determine whether MSI could distinguish SPT from DPT burns; to focus on clinical significance, no multiple comparison of DPT1 versus DPT2 was made with the one-hour data. The assessment prototype measured distinct reflectance spectra for each burn type. In this test, all wavelengths were effective for distinguishing SPT from DPT burns.
The results of this multiple-comparison study one hour after burn are as follows (the significance threshold for this test is p < 0.05/15 = 0.003):
TABLE 4 (reproduced as an image in the original; data not shown)
1.2.3.c. burn debridement test
The second trial tested whether the wound assessment prototype could identify the best layer at which to stop debridement, based on its ability to differentiate healthy tissue from DPT burns. Here, we considered excision of DPT1 and DPT2 burns together, since the goal was to test whether MSI could distinguish viable from necrotic tissue, not burn depth. This debridement analysis does not include SPT data because tangential excision of SPT burns is generally not performed. In this simulated debridement treatment, the 972nm wavelength provided the most useful debridement analysis. Multiple comparisons showed that the initial burn site was indistinguishable from the wound following the first excision. The burn site after the second excision was not statistically different from the healthy controls, nor was the burn site after the third excision. The following table summarizes these conclusions, showing the multiple-comparison debridement analysis:
TABLE 5 (reproduced as an image in the original; data not shown)
In the table, HWB denotes the healthy wound bed, i.e., healthy wound tissue at the depth of excision. The significance threshold for this test is p < 0.05/45 ≈ 0.001.
A marker (shown as an image in the original) indicates that the expected p-value was determined by histopathological examination and classification of the tissue samples. All measured p-values matched the expected values.
These conclusions agree with the histological grading, in which the second excision of the debridement was considered to have removed the last margin of burned tissue. At wavelengths of 515nm, 669nm, and 750nm, healthy wounds and over-excised wounds (wounds from which the burned tissue had already been removed) were not statistically different. Fig. 32 shows the reflectance spectra at all wavelengths for each excision layer. The figure plots the absorption spectra of the healthy controls, the once-debrided healthy controls, the mean burned-tissue spectrum at each excision, and the mean wound spectrum at each excision.
1.2.4. Discussion
The Spectral MD wound assessment prototype was able to discriminate between partial thickness burns of varying severity and to determine when burn wound debridement had reached the appropriate depth of excision. Multiple-comparison statistics identified the wavelengths that performed best in discriminating SPT versus DPT wounds, DPT1 versus DPT2 wounds, and necrotic burn tissue versus the viable wound bed.
Although distinguishing DPT1 from DPT2 does not change the overall surgical treatment plan, the ability to grade burn severity adds functionality to the prototype. Future studies could develop an algorithm that measures burn depth using MSI. This information could be used to form an overall burn profile covering all depths of a large-area burn, helping clinicians formulate a debridement plan for the entire burn area. Further studies may refine the algorithm to correlate the absorption spectrum with the exact depth of burn.
As hypothesized in the "theory" section, the 515nm, 542nm, 629nm, and 669nm wavelengths were useful for distinguishing SPT from DPT wounds both immediately after the burn and one hour after the burn. The 420nm to 669nm wavelength range corresponds to the absorption spectrum of blood. Since each burn depth has a different degree of hemostasis, light at these wavelengths interacts differently with the tissue of each burn category, allowing MSI to distinguish them. A similar set of wavelengths (420nm, 542nm, 669nm, 860nm) can distinguish DPT1 from DPT2 burns, further supporting this idea.
Wavelengths associated with the absorption peaks of the ECM (750nm, 860nm, 972nm), water (971nm), and fat (930nm) were also useful for distinguishing burn types. Since SPT burns damage the skin less than DPT burns, we hypothesized that SPT burns leave a more intact ECM and a more uniform moisture distribution than DPT burns, allowing MSI to distinguish these tissue types. In addition, because neither DPT1 nor DPT2 burns are full thickness burns, the skin fat content at the depths of DPT1 and DPT2 burns is unlikely to differ. Our results are consistent with these expectations.
This study also demonstrates the feasibility of using MSI to judge the appropriate depth of burn debridement. We could identify differences in the reflectance spectra of partially debrided burns and viable wounds using the 515nm, 669nm, 750nm, and 972nm wavelengths. The 515nm and 669nm wavelengths correspond to blood absorption peaks. The 750nm and 972nm wavelengths correspond to the ECM absorption spectrum, with 972nm also being an absorption peak of water. These results indicate that blood, ECM, and water content vary the most between healthy and burned tissue. This is reasonable, since burns damage the ECM and blood vessels. In each individual experiment, the 972nm wavelength proved useful for clinical tissue classification. This can be explained by burns disturbing the distribution of water within the tissue: the disruption produces a significant, MSI-detectable difference between healthy and burned tissue, enabling guidance of debridement.
1.2.5. Conclusion
The Spectral MD wound assessment prototype provided data to classify burns and guide debridement in a pig burn model. This suggests the potential to develop a clinical device that can aid in triage of burn treatment and in debridement procedures. Such a technique could be used routinely in early burn care, being readily available and familiar to the treatment team in emergency procedures.
Future trials will combine the wavelengths found effective in this trial with other wavelengths to tune the device to classify burns automatically. Currently, the Spectral MD wound assessment prototype is simply a data acquisition device; the investigators subsequently analyze and interpret the data to classify tissue. Our goal is to design an algorithm that analyzes MSI data, classifies it automatically, and produces an output that is easy to view and understand. To this end, the data acquired in this trial will be added to a spectral reflectance database for training the classification algorithm. Further swine burn trials will be needed, but with the swine burn database as a strong foundation, we plan eventually to test prototypes in a clinical setting.
1.3. Example 3: testing of deep partial thickness burn models in pigs using PPG and MSI
Burn debridement is a technically difficult procedure because the surgeon must identify the extent and depth of the tissue to be excised, and few tools exist to indicate where and how deep to cut. We studied two rapid, non-invasive optical imaging techniques that can distinguish burned tissue from the viable wound surface during simulated burn debridement procedures.
PPG imaging and MSI were used to image the initial, intermediate, and final stages of debridement of deep partial thickness burns. PPG imaging maps blood flow in the skin microcirculation, while MSI acquires tissue reflectance spectra at visible and infrared wavelengths to classify tissue against a reference database. For example, fig. 33 shows a wound debridement procedure, with a PPG imaging device detecting the difference in blood flow between the necrotic wound and the viable wound suitable for grafting. An example configuration of the PPG imaging system is shown in fig. 34. FIG. 35 shows the configuration of the MSI system.
In this test, a deep partial thickness burn model was generated in pigs, and serial tangential debridement was performed with an electric dermatome set to a depth of 1.0mm. The excised eschar was stained with hematoxylin and eosin (H&E) to assess the extent of burn at each level of debridement.
We found that the PPG imaging device showed significantly less blood flow in burned tissue, and that the MSI method could delineate burned tissue remaining in the wound from the viable wound surface. These findings were independently confirmed by histological analysis.
We found that these devices can identify the appropriate depth of excision, and that the images can inform the surgeon's preparation of the wound for grafting. These image outputs are expected to aid clinical judgment in the operating room.
In order to apply PPG imaging and MSI techniques to burn care, scientists and engineers must demonstrate an improvement over the existing standard of care. This experiment involved developing and training a supervised machine learning algorithm on an animal image database that included images acquired at known time points during the wound debridement process. We demonstrate that the accuracy of the classification algorithm is higher than the current standard of clinical care. The algorithm will ultimately be applied to interpret image data acquired by PPG imaging and MSI into information useful to the health care provider performing excision and grafting procedures.
1.3.1. Methods
1.3.1.a. photoplethysmography imager
The PPG imaging system includes a 10-bit monochrome CMOS camera (Nocturn XL, Photonis USA) with low dark noise and a large dynamic range. The 10-bit ADC resolution provides a signal-to-noise ratio of 60dB. The resolution of the imager was set to 1280x1040 (aspect ratio 5:4). The camera is mounted vertically, facing down toward the target surface. A field of view (FOV) of 20x16cm was typically maintained for inter-system comparison. The exposure time of the camera was calibrated using a 95% reflectance standard (Spectralon SG3151, LabSphere Inc., North Sutton, NH). To illuminate the tissue, four monochromatic high-power LED emitters (SFH 4740, OSRAM) were arranged in a 2x2 array mounted in the same plane as the sensor. The distance from the LED array and camera to the target surface was 15cm. These LED emitters were chosen because they illuminate the tissue uniformly (i.e., with less than 15% spatial intensity variation) within the FOV of the camera. The FOV of the camera is controlled by the optical lens and is slightly narrower than the illuminated area.
Movement of the animal during respiration introduces noise into the PPG signal, which made initial analysis of the PPG imaging difficult. We reduced the effect of respiratory motion using a signal processing method called envelope extraction. For each pixel in the image, the signal is smoothed by a low-pass filter to extract the envelope of the noisy signal. The noisy signal is then divided by its envelope to remove the motion spikes from the signal. The remaining clean signal is then processed into a PPG image.
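The envelope-extraction step can be sketched per pixel as follows. This is a minimal illustration, not the prototype's implementation: the envelope here is estimated from the magnitude of the analytic (Hilbert) signal and then low-pass filtered, and the test signal is a synthetic 1 Hz "cardiac" tone modulated by 0.25 Hz "breathing"; the cutoff frequency and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def remove_respiratory_motion(ppg, fs, cutoff_hz=0.5):
    """Suppress slow respiratory amplitude modulation of a PPG trace.

    ppg: 1-D time series for one pixel; fs: frame rate in Hz.
    The envelope is the low-pass-filtered magnitude of the analytic
    signal; dividing the raw trace by it flattens the modulation.
    """
    envelope = np.abs(hilbert(ppg))
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    smooth_env = filtfilt(b, a, envelope)
    smooth_env = np.maximum(smooth_env, 1e-9)  # avoid divide-by-zero
    return ppg / smooth_env

# Synthetic example: 1 Hz pulse whose amplitude swings with breathing
fs = 30.0
t = np.arange(0, 20, 1 / fs)
breathing = 1.0 + 0.5 * np.sin(2 * np.pi * 0.25 * t)
signal = breathing * np.sin(2 * np.pi * 1.0 * t)
clean = remove_respiratory_motion(signal, fs)
```

After division, the cardiac oscillation remains at roughly unit amplitude while the slow breathing modulation is largely removed.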
1.3.1.b. multispectral imager
Multispectral images were acquired with a staring-type filter-wheel camera (SpectroCam, Pixelteq, Largo, FL) fitted with eight optical band-pass filters with wavelengths between 400nm and 1100nm. To select the most relevant filters for our system, we tested 22 different filters identified in previous studies using feature selection techniques and performed wavelength-selection data analysis. Filters with the following peak transmissions were used in this study: 581nm, 420nm, 620nm, 860nm, 601nm, 680nm, 669nm, and 972nm (bandwidth 10nm; Ocean Thin Films, Largo, FL). The system was calibrated using a 95% reflectance standard (Spectralon SG3151, LabSphere Inc., North Sutton, NH) to compensate for the different spectral responses of the image sensor. The light source was a 250W tungsten halogen lamp (LowePro) fitted with a frosted glass diffuser to create more uniform illumination over the field of view of the imager. The system used a Distagon T* 2.8/25 ZF-IR lens (Zeiss Inc., USA).
1.3.1.c. pig model
The methods used in this animal study were adapted from Branski et al. (2008) and Gurfinkel et al. (2010). Adult Hanford pigs weighing approximately 40kg were acclimatized prior to surgery. Under appropriate anesthesia and analgesia, deep partial thickness burns were formed near the dorsal midline by pressing a copper bar heated to 100°C against the skin (pressure 0.24 kg/m²) for 60 seconds. The diameter of the copper bar was 3.6cm, producing wounds of the same size. Six wounds were formed on each pig, spaced so that healthy tissue adjacent to each circular wound could be used as uninjured reference tissue.
To calibrate the imaging devices to the appropriate excision depth, a standard model of acute tangential excision was developed. Tangential excision removes the burn in partial layers in a uniform, continuous, repeatable manner. Excision was performed by passing an electric dermatome (set to 1.0mm depth, 6.0cm width) over the burn site several times until the entire burn was excised down to a viable wound depth.
We made three passes with the dermatome to remove the deep partial thickness burn and expose the underlying viable wound. In this trial, debridement was considered successful if tissue had been removed to the point where punctate bleeding occurred. Healthy tissue adjacent to each circular wound was used as uninjured reference tissue. The data acquisition time points were: 0) before the burn; 1) just after the burn; and 2) at each of the three debridement layers (fig. 36). At each time point we acquired PPG images, MSI images, and physiological data including heart rate, respiratory rate, and blood pressure. After each tangential cut, we saved the excised tissue for histology.
1.3.1.d. identification of different wound surfaces
Histopathologists, blinded to the details of the study, used histology and color imaging to determine the depth at which viable wound tissue was exposed. Histological assessment followed Gurfinkel et al. (2010). Briefly, each tangentially excised tissue sample was placed in 10% neutral buffered formalin (NBF) and sent to a board-certified histopathologist (Alizée Pathology, Thurmont, MD) for processing and examination. A representative section was taken once from each sample and stained with hematoxylin and eosin (H&E). To determine which tangential excision reached a viable wound, the histopathologist identified the margin of the most severely burned area in each tissue section and used morphometric analysis to derive the burn depth. At each time point in the study, we also obtained digital photographs of the burns. A color reference bar was placed next to the wound for color calibration.
1.3.1.e Classification Algorithm for multispectral imaging
To automatically classify pixels in the raw MSI data cube, a classification algorithm and a burn tissue spectral reference database were generated. Generation of the database involved three steps: first, we wrote a program that a technician could use to select particular pixels from images in the animal study data; second, a board-certified histopathologist processed and read the tangentially excised tissue specimens from the animal trials to identify the location and severity of the burn in each section; third, an experienced surgeon reviewed the color photographs to identify the locations of punctate bleeding and the contrast between viable and non-viable tissue in the debrided wound. After these steps were completed, two technicians manually selected pixels from approximately 120 MSI images.
Based on the reference data generated in the previous step, we built a machine learning algorithm to classify pixels into six physiological classes, using Quadratic Discriminant Analysis (QDA) as the classifier. The accuracy of the algorithm was judged as follows: once the pixels of the MSI images had been assigned to the appropriate classes based on histology, we trained the classification algorithm on 2000 pixels per class for each of the six classes across all 24 burn sites. Thereafter, excluding the training pixels, we randomly selected a new set of 2000 pixels per class as test data to assess the effectiveness of the classification algorithm. Classification accuracy was calculated according to Sokolova & Lapalme.
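The train/test protocol above (QDA, 2000 pixels per class for training, a disjoint 2000 pixels per class for testing, one feature per spectral band) can be sketched as follows. The spectra here are synthetic stand-ins with invented class means; real data would be histology-labeled pixels from the MSI images.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

CLASSES = ["healthy skin", "hyperemia", "wound (graftable)",
           "bleeding", "minor burn", "severe burn"]
N_TRAIN = 2000   # pixels per class, as in the study
N_TEST = 2000
N_BANDS = 8      # one feature per band-pass filter

rng = np.random.default_rng(42)

def sample_pixels(n):
    """Hypothetical 8-band reflectance spectra; real data would be
    pixels labeled via histology and surgeon review."""
    X, y = [], []
    for k, _ in enumerate(CLASSES):
        centers = 0.2 + 0.1 * k + 0.02 * np.arange(N_BANDS)
        X.append(rng.normal(centers, 0.05, size=(n, N_BANDS)))
        y += [k] * n
    return np.vstack(X), np.array(y)

X_train, y_train = sample_pixels(N_TRAIN)
X_test, y_test = sample_pixels(N_TEST)   # separate draw, no reuse

qda = QuadraticDiscriminantAnalysis()
qda.fit(X_train, y_train)
accuracy = qda.score(X_test, y_test)
```

QDA fits a per-class Gaussian with its own covariance matrix, which suits tissue classes whose spectral variability differs (e.g., bleeding versus intact skin).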
These are the six physiological classes used in the experiments, with descriptions:
healthy skin-healthy skin is a common tissue that appears in almost all of our images.
Hyperemia - hyperemia is one of the three burn zones proposed by Jackson in 1947. The vasculature is dilated, and the tissue is considered fully recoverable.
Wound (implantable) -implantable wound tissue is an ideal surface for skin grafting. White or pink, punctate bleeding.
Bleeding-the surface of the tissue accumulates a large amount of blood, and the physician waits to clear the blood before re-imaging the area.
Minor burn - tissue with minimal burn damage that can recover spontaneously within two weeks.
Severe burns (non-transplantable) -coagulated areas where gangrene and irreversible burns occur, do not recover spontaneously or receive skin grafts.
1.3.1.f statistics and image processing
All image processing and statistics were performed using MATLAB (version 2013 b).
1.3.2. Results
1.3.2.a. burn formation and depth
Twenty-four deep partial thickness burns were formed on four pigs. We found that, with the pressure-controlled burn bar, 16 of the 24 wounds (67%) had homogeneous burn areas. For each burn, three tangential excisions were performed, and the degree of burn in each section was judged by the histopathologist. For each of these 72 sections, morphometric analysis was performed to quantify the consistency of burn depth and of each tangential excision performed with the dermatome. The average thickness of tissue excised by the dermatome was 1.36 ± 0.16mm (12% standard deviation; FIG. 37).
The burned tissue was divided into a region where the effect of the burn was mild and a region where it was severe. We found that the average total depth of burned tissue was 3.73 ± 0.58mm (16% standard deviation) and the depth of the severely burned sections was approximately 1.49 ± 0.59mm (±39% standard deviation). Fig. 37 summarizes these findings. The latter measure varied the most, which may reflect the more subjective histological changes the histopathologist used to delineate the severely burned region from the mildly burned region.
Histological assessment of the excised burned tissue confirmed that we had reached a viable wound surface after three passes of the dermatome. Debridement was considered successful if tissue had been removed to the point where punctate bleeding occurred. For 24 of the 24 wounds (100%), we removed tissue until uniform punctate bleeding appeared on the wound surface. This was confirmed histologically, indicating that three or fewer passes of the dermatome removed all severely burned tissue (fig. 38). Despite the evidence of punctate bleeding, histology showed that in 8 of 24 wounds (33%) we did not completely remove all tissue showing some burn damage. However, a board-certified surgeon, blinded to the imaging data, examined color pictures of the wounds following debridement and considered these minimally burn-affected beds acceptable for grafting, since the tissue would not progress to a severe burn.
1.3.2.b. photoplethysmography imaging
We examined the difference in PPG signals from the three tissue types present in the images: healthy skin, burns, and wound tissue. The PPG signal SNR was clearly different for burned tissue compared to the other two tissue types (healthy skin: 6.5 ± 3.4dB; viable wound surface: 6.2 ± 4.1dB; burned tissue: 4.5 ± 2.5dB; p < 0.05). These results were reproducible. The appropriate excision endpoint could be identified from the PPG image in 20 of the 24 burn sites.
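A per-pixel SNR of the kind compared above can be estimated from the frequency content of the PPG trace. The sketch below is an assumption about the metric (the text does not give the exact SNR definition): it takes the ratio of spectral power in an assumed cardiac band to the power outside it, in dB, using a synthetic 1.2 Hz pulse plus noise as the test signal.

```python
import numpy as np
from scipy.signal import welch

def ppg_snr_db(x, fs, band=(0.8, 3.0)):
    """SNR in dB: power in the cardiac band relative to power outside
    it. The band limits are illustrative assumptions."""
    f, pxx = welch(x, fs=fs, nperseg=256)
    in_band = (f >= band[0]) & (f <= band[1])
    sig_power = pxx[in_band].sum()
    noise_power = pxx[~in_band].sum()
    return 10 * np.log10(sig_power / noise_power)

# Synthetic pixel trace: 1.2 Hz pulse plus white noise, 30 fps, 20 s
fs = 30.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(7)
pulse = np.sin(2 * np.pi * 1.2 * t) + 0.2 * rng.standard_normal(t.size)
snr = ppg_snr_db(pulse, fs)
```

Mapping this value per pixel yields an image in which perfused tissue (strong pulsatile component) scores higher than burned tissue.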
We present a series of images of a burn to highlight PPG signal changes over the entire burn depth (fig. 39). Initially, the PPG signal is relatively uniform across intact skin. The signal drops sharply at the center of the image where the burn is present. When the first 1.0mm layer of skin is removed, burned tissue remains evident in the wound, and the PPG signal stays diminished where this tissue appears. At a depth of about 2mm to 3mm (after the second excision), the PPG signal has recovered in the burn wound.
1.3.2.c multispectral imaging
Based on the labeled database of pixels selected under the supervision of the surgeon and histology, 2000 pixels randomly selected from the 24 burns were combined into a test data set. The test set was classified by the previously trained QDA algorithm and compared with the actual class labels to form a confusion matrix (fig. 42). The matrix shows the number of correct classifications along its diagonal; incorrect classifications fall on the off-diagonal elements. We found the overall classification accuracy to be 86%. The algorithm classified wound bleeding with 92% accuracy, the highest of the six classes. The other five classes had similar accuracies, with the "severe burn" class lowest at 81%. The confusion matrix indicates that a common error was misclassifying severely burned tissue as healthy skin, and healthy skin as severely burned. Likewise, hyperemic tissue was often misclassified as bleeding, and vice versa.
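The confusion-matrix bookkeeping behind these accuracy figures can be sketched as below. This is a generic illustration with invented toy labels, not the study's data: the diagonal counts correct classifications, overall accuracy is the trace divided by the total, and per-class recall divides each diagonal entry by its row sum (one of the per-class measures formalized by Sokolova & Lapalme).

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows: true class; columns: predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    return np.trace(cm) / cm.sum()

def per_class_recall(cm):
    return np.diag(cm) / cm.sum(axis=1)

# Toy 3-class example with hypothetical labels
y_true = [0, 0, 1, 1, 2, 2, 2, 0]
y_pred = [0, 1, 1, 1, 2, 2, 0, 0]
cm = confusion_matrix(y_true, y_pred, 3)
print(overall_accuracy(cm))   # 6 correct of 8 -> 0.75
print(per_class_recall(cm))
```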
The classified MSI image output showed the location of the burn and its margins well (fig. 40). When we used the dermatome to excise the burn area in depth, MSI clearly identified the viable wound surface. The pixel misclassifications noted in the confusion matrix analysis can also be seen in these images. The spatial distribution indicates that errors are generally not random but occur more frequently in certain areas of the image; for example, only the uppermost wound in the first debridement image of FIG. 40 was misclassified as healthy skin.
For burns of varying depth within the same wound, a common clinical situation, the MSI image output can identify the more severely burned areas (fig. 41). This holds both immediately after injury and during the excision procedure. These images illustrate how effective the tool could be in surgical planning, especially for inexperienced burn surgeons.
1.3.3. Discussion
Our PPG imaging data show that the PPG signal is significantly lower for burned tissue than for healthy tissue. From a clinical point of view, this means that suspected burns can be identified with the aid of PPG imaging. As the surgeon excises tissue, a corresponding increase in the PPG imaging signal is expected as removal of necrotic tissue exposes the viable wound. When the signal reaches an intensity characteristic of viable tissue, the PPG imaging device indicates to the surgeon that there is sufficient blood flow in the wound and that the tissue can support a graft.
The results from MSI are also useful. With the eight wavelengths used in this study, we achieved 86% accuracy in classifying the different tissue types. The current standard of care for burn tissue classification is the clinical judgment of an experienced burn physician. Although no studies have published the accuracy of surgeons' classification during excision and grafting procedures, clinical studies of initial burn depth assessment by experienced surgeons report an accuracy of 60% to 80%. While accuracy during initial assessment does not necessarily indicate how accurate an experienced surgeon is intraoperatively, we consider the challenge of correctly determining the optimal excision depth in burn surgery to be as difficult as the initial assessment. We therefore believe that the MSI accuracy shown in this study is comparable to that of the most experienced experts, and that MSI would tend to improve the clinical judgment of inexperienced surgeons.
Feature values calculated from the PPG data can be combined with the reflectance spectral data using the same machine learning techniques established for the MSI data analysis. Since the PPG and MSI raw data cubes can be acquired by the same optical hardware, the remaining problem is the statistical analysis to determine which salient features from each system to include in the classifier equations. Although MSI alone is effective in identifying the margins of burns, we believe that dynamic blood flow information from the PPG signal can be combined with the reflectance data to contribute important tissue viability information.
Proper classification of burns during excision and grafting is important to optimize care for burn patients. PPG imaging and MSI are two techniques that can help both burn physicians and non-specialist physicians guide debridement. PPG imaging detects blood flow, identifying healthy tissue by its characteristically higher blood content. MSI aggregates the reflected light at key wavelengths to form and classify a reflectance spectrum characteristic of each tissue class. Using the pig burn model, we have applied these techniques and shown their feasibility and utility for clinical use. PPG imaging and MSI, used alone or in combination, can improve the diagnostic accuracy of the health care provider during the skin grafting process, helping to optimize the debridement step.
1.3.3.a. practical applicability
Physicians train for many years to perform surgical debridement properly. In a mass casualty situation, inexperienced surgeons may be called on to perform multiple procedures under many obstacles. Incomplete or excessive excision of tissue can cause serious complications. Incomplete excision of the burn results in placing the graft on devitalized tissue, which reduces graft take and increases the risk of infection. Conversely, over-excision risks excessive blood loss, which also impairs graft take. In addition to the technical aspects of the procedure, the physician must manage fluids and blood appropriately before and after surgery. Moreover, timing is critical: a patient excised only 48 hours after injury can lose twice the blood volume of a similar patient operated on 24 hours earlier. Finally, burns with different depths across the burn area make burn care more complex; excision and grafting of such burns is challenging, since the goal is maximal removal of devitalized tissue with minimal excision of still-viable skin.
To close the gap between burn specialists and non-specialists, assistive tools are required. An ideal solution would: identify the areas that must be excised; determine the appropriate excision depth; and monitor vital signs to guide treatment management for the patient. Further requirements for clinical suitability are improved diagnostic accuracy, adaptation to real patient conditions, and provision of useful data to the medical team in real time. Moreover, in situations where burn specialists cannot manage the number of patients, such as mass casualties, an optimal solution could quickly assist non-experts.
As mentioned above, several imaging modalities have been proposed as potential solutions to this problem. To date, most techniques have proven impractical for clinical use for a variety of reasons. Some are less accurate than the surgeon's independent clinical judgment. Others require the patient to remain still for long periods, have data acquisition times on the order of days, or require invasive procedures for accurate diagnosis. Clinical tools with these limitations are not readily adopted by health care providers.
The MSI and PPG imaging work described in the present invention, including the tests set out here, indicates that these techniques can meet the above needs and improve burn care. By working to convert these technologies into clinically usable tools, quality-of-life outcomes can be improved for burn patients in the United States.
This solution also has international impact. Poor populations in developing countries cook and light their homes over open fires. Such living conditions expose women and children to a high risk of severe burns. In South Asia, for example, more women and children die from severe burns than from HIV/AIDS or malaria. Limited access to medical care means that relatively small burns can lead to permanent disability, which burn care aids could help prevent by reducing the technical expertise needed to administer treatment.
1.4. Example 4: example of improving accuracy of burn diagnosis imaging apparatus by detecting and removing abnormal values
The methods, systems, algorithms, techniques and/or content described in example 4, as well as substantially similar versions and/or variations, may be used for computing in any method or apparatus described herein.
In this test, we used multispectral imaging (MSI) to improve a burn diagnostic device intended to assist burn surgeons in planning and performing burn debridement procedures. Constructing the model requires training data that accurately reflect burned tissue. Obtaining accurate training data is difficult, in part because labeling raw MSI data with the appropriate tissue categories is error prone. We hypothesized that these difficulties could be overcome by removing outliers from the training data set, improving classification accuracy. We developed a swine burn model to build an initial MSI training database and studied the ability of our algorithm to classify clinically important tissues in the burn. Once a ground-truth database was generated from the pig images, we developed a multi-stage approach based on Z-score detection and univariate analysis to identify outliers in the training database. Using ten-fold cross-validation, we compared the accuracy of the algorithm with and without the outliers. We show that the outlier removal method reduces the variation of the training data in wavelength space. Once outliers were removed from the training data set, test accuracy improved from 63% to 76%, and the image output improved. This simple method of conditioning the training data improves the accuracy of our algorithm to the level of the existing standard of care in burn assessment. Given the shortage of burn specialists and burn care facilities nationwide, this technique could improve the standard of care for burn patients treated in facilities with fewer specialists.
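One stage of the outlier removal described above, per-class Z-score screening across wavelength bands, can be sketched as follows. The threshold of 3 standard deviations and the synthetic data are assumptions for illustration; the study's full multi-stage procedure also involves univariate analysis not shown here.

```python
import numpy as np

def remove_outliers_zscore(X, y, z_thresh=3.0):
    """Drop training pixels whose spectrum deviates strongly from the
    per-class, per-wavelength mean (|z| > z_thresh in any band).

    X: (n_pixels, n_bands) reflectance values; y: class labels.
    """
    keep = np.ones(len(y), dtype=bool)
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        mu = X[idx].mean(axis=0)
        sigma = X[idx].std(axis=0) + 1e-12   # guard against zero spread
        z = np.abs((X[idx] - mu) / sigma)
        keep[idx] &= (z < z_thresh).all(axis=1)
    return X[keep], y[keep]

# Example: 100 plausible pixels plus one grossly mislabeled outlier
X = np.vstack([np.random.default_rng(1).normal(0.5, 0.01, (100, 8)),
               np.full((1, 8), 5.0)])
y = np.zeros(101, dtype=int)
X_clean, y_clean = remove_outliers_zscore(X, y)
```

The gross outlier inflates the per-band standard deviation but still lies far beyond the threshold, so it is removed while the plausible pixels are kept.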
1.4.1 multispectral imaging applications
With the development of camera technology, multispectral imaging (MSI) and the hyperspectral imaging (HSI) techniques derived from it have been widely applied in different settings, including NASA astronomy programs, agriculture, national defense, geology, and medical imaging.
We introduce the application of the MSI technique to burn analysis. For burn treatment, it is important to determine the depth of the initial wound. Shallower burns, called superficial partial thickness burns, do not require surgical treatment and usually recover under supportive therapy. More severe burns, classified by depth as deep partial thickness burns or full thickness burns, require surgical removal of all necrotic tissue to expose a viable wound bed as the basis for the grafting procedure. Currently, the gold standard for burn classification is the clinical judgment of professional burn physicians. However, the accuracy of such expert estimates is only 60%-80%, and the accuracy of non-expert estimates is no higher than 50%. There is a need for technical solutions that improve the accuracy of burn classification and thereby improve clinical decision making in burn treatment, which is especially needed at medical centers without burn specialists. MSI can classify burned tissue into different clinical categories with potentially higher accuracy, which can allow burn physicians to select appropriate treatment regimens more reliably and quickly. In the process of removing necrotic tissue from a severe burn, the goal of the physician is to minimize the removal of healthy tissue. MSI also facilitates surgical removal by distinguishing burned tissue from the healthy wound bed as the surgery progresses, preventing unnecessary excision of healthy tissue.
Human skin is a multilayered tissue containing multiple chromophores, of which four are most important: blood, water, melanin, and fat. These chromophores in the different skin layers have defined spectral responses to optical illumination at specific wavelengths, especially in the visible and near-infrared bands. By using MSI to acquire and analyze the responses of different tissues to multiple specific incident wavelengths, the presence of blood within other tissues, for example, can be identified by its characteristic spectral response. The response of the tissue to incident light is quantified by its absorbance. Classification of different tissue classes can then be achieved by collecting MSI absorbance data over the entire wavelength range, based on the relative amounts of the tissue constituents in each tissue class.
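The absorbance quantification described above can be sketched in a few lines. The following is a minimal illustration, assuming diffuse-reflectance measurements that are dark-corrected and normalized against a reference panel; the wavelength list and all intensity values below are hypothetical stand-ins, not the device's actual data:

```python
import numpy as np

# Hypothetical band-center wavelengths (nm), one per filter channel.
WAVELENGTHS = [420, 542, 581, 601, 726, 800, 860, 972]

def absorbance(raw, dark, reference):
    """Apparent absorbance A = -log10(R), where reflectance R is the
    dark-corrected pixel intensity normalized by a reference panel."""
    r = (np.asarray(raw, float) - dark) / (np.asarray(reference, float) - dark)
    r = np.clip(r, 1e-6, None)  # avoid log of non-positive values
    return -np.log10(r)

# Hypothetical per-wavelength intensities for one pixel:
raw = np.array([310, 520, 480, 500, 900, 1100, 1000, 700])
ref = np.array([3600, 3800, 3700, 3750, 3900, 3950, 3900, 3500])
A = absorbance(raw, dark=100, reference=ref)
```

Each tissue class then has a characteristic absorbance vector across the bands, which is what the classifier operates on.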
Although MSI can obtain specific spectral data from a variety of tissue types, classification models must be developed to interpret new spectral images and correctly identify tissues. In the process called machine learning, a difficulty arises in developing the model because the model must be built from the same type of data that it will subsequently classify. Thus, during the initial model-building process, a "training" data set must first be collected and accurately labeled as "correct label data". Establishing the correct label data is a critical step in any machine learning application and is therefore one of the most carefully inspected stages in the development of these applications. High accuracy of the correct label data is necessary to build an accurate classification model. The manner in which the correct label data is established varies with the subject being assessed by the classification model under construction. In each case, however, it must be established by a clinical expert using the existing gold standard to gather the necessary information. For burns, the gold standard for tissue classification is histopathological assessment. We present below the technical details of establishing the correct label data.
A classification model is then developed using the training set and subsequently tested on additionally collected data to judge its accuracy against the correct label data. Different algorithms have been developed to build classification models from correctly labeled training data sets. For example, the Support Vector Machine (SVM) algorithm has previously been used in kernel-based machine learning for hyperspectral imaging data analysis.
Finally, because manual demarcation of the training data establishes the correct label data, the resulting model is potentially biased by labeling errors. For example, if healthy skin is incorrectly labeled as blood in the training data, the resulting model will subsequently have difficulty distinguishing healthy skin from blood. Since the training data is the sample space used to build the classification model, reducing any such bias is important for improving model accuracy.
Any bias left in the training set ultimately reduces the accuracy of the developed model in testing. To reduce bias and improve model accuracy, it is helpful to identify and remove "outliers" from the training dataset. Outliers are defined as observations that are statistically distinct from the other observations. Outlier detection (also called anomaly detection or novelty detection) is a key area of statistical pattern recognition research, applied in fields such as credit card fraud detection, detection of sensitive events, medical diagnostics, and network security. Several established outlier detection methods exist. One commonly implemented outlier detection technique is the model-based algorithm.
In model-based algorithms, statistical tests estimate the parameters of the sample distribution. For example, a Gaussian distribution is described by two parameters, the mean and the standard deviation, which are determined by maximum likelihood estimation or maximum a posteriori estimation. In a univariate Gaussian distribution, outliers are points with extreme (high or low) cumulative probability under the fitted model, quantified by the Z-score (standard score). Traditionally, in univariate analysis, samples with a cumulative probability greater than 0.95 or less than 0.05 are considered outliers.
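The univariate Z-score test just described can be sketched as follows. This is a minimal illustration with made-up data, using the error function so that only the standard library and NumPy are needed; it flags samples whose cumulative normal probability falls outside (0.05, 0.95):

```python
import numpy as np
from math import erf, sqrt

def z_outliers(x):
    """Flag samples whose Gaussian CDF probability is > 0.95 or < 0.05."""
    x = np.asarray(x, float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    # Standard normal CDF via the error function.
    p = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    return (p > 0.95) | (p < 0.05)

data = np.array([5.0, 5.1, 4.9, 5.2, 4.8, 5.0, 5.1, 12.0])
mask = z_outliers(data)  # only the 12.0 sample is flagged
```

Note that the mean and standard deviation here are computed from the full sample, outlier included, which illustrates the susceptibility of the model parameters to outliers discussed below.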
In many cases, the model-based algorithm correctly identifies outliers. It is important to note, however, that the parameters defining these models are themselves susceptible to any outliers present when the parameters are initially calculated; that is, the entire sample set, outliers included, is used to generate the parameters before outliers can be identified and removed. Thus, identifying and removing outliers before these algorithms are used to generate classification models can improve the accuracy of those models. In this study, we apply the idea of outlier removal to a machine learning algorithm in the medical field. MSI imaging data were first acquired from the established swine burn model. We then evaluated the multispectral images, providing a statistical solution that quantitatively improves the classification accuracy of models designed to classify the different tissues in burn images.
1.4.2 detection and removal of outliers
The detection and removal of outliers is an important part of statistics and pattern recognition and has been widely applied in different fields, such as credit card fraud detection, detection of sensitive events, medical diagnostics, and network security. Outlier detection is also referred to by other names, such as anomaly detection or novelty detection. Most outlier detection methods are model-based or proximity-based. For model-based algorithms, statistical tests estimate the parameters of the sample distribution, which, for example, can be treated as Gaussian on the basis of the Central Limit Theorem (CLT). For a Gaussian distribution, two parameters are considered: the mean and the standard deviation. We can determine these parameters by maximum likelihood estimation or maximum a posteriori estimation. In the model-based approach, outliers are points with a small probability of occurrence, which can be estimated by calculating the Z-score (standard score). As a rule of thumb, samples whose cumulative probability is greater than 0.95 or less than 0.05 may be considered outliers. This is the univariate analysis. For a multivariate normal distribution:
f(χ) = (2π)^(−d/2) |Σ|^(−1/2) exp( −(χ − μ)ᵀ Σ⁻¹ (χ − μ) / 2 )
μ is the mean of all points and Σ is the covariance matrix about that mean. We can calculate the Mahalanobis distance from a point χ to μ. The squared Mahalanobis distance follows a χ² distribution with d degrees of freedom (d is the data dimension). Finally, any point χ whose squared Mahalanobis distance is greater than χ²(0.975) is considered an outlier. This statistical-test approach works in most cases; however, the parameters are susceptible to the very outliers being sought during their estimation. Moreover, if the dimension is high, Mahalanobis distances become similar across points because of the large number of degrees of freedom. Depth-based approaches, which study outliers at the boundaries of the data space, and deviation-based approaches, which minimize the variance when outliers are removed, have also been proposed.
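The Mahalanobis χ² test described above can be sketched as follows. This is a minimal illustration on synthetic two-dimensional data; to keep the sketch NumPy-only, the χ²(0.975) critical value for d = 2 is hardcoded from standard tables rather than computed:

```python
import numpy as np

def mahalanobis_outliers(X, chi2_crit):
    """Flag rows of X whose squared Mahalanobis distance to the sample
    mean exceeds the chi-square critical value for d degrees of freedom."""
    X = np.asarray(X, float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov)
    diff = X - mu
    # Squared Mahalanobis distance of every row.
    d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return d2 > chi2_crit

# Chi-square 0.975 quantile for d = 2 dimensions (from tables):
CHI2_975_D2 = 7.378

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
X = np.vstack([X, [[8.0, 8.0]]])  # append one obvious outlier
mask = mahalanobis_outliers(X, CHI2_975_D2)
```

Note that the mean and covariance here are estimated with the outlier included, which is exactly the susceptibility the text points out.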
In proximity-based outlier detection, the concept of distance is central: a proximity algorithm first decides which points to include or exclude based on their distances to other points. If there are N samples and M variables, giving a data matrix of size N × M, we can calculate distances within the sample space using a defined metric, for example the Euclidean distance:
d(χi, χj) = √( Σ (k = 1 to M) (χik − χjk)² )
Clustering methods commonly employ this concept of distance. In a clustering algorithm, we can define a radius ω around the identified center (centroid) of any set of points. Points within this radius are considered good points, and the centroid is updated as the new data points are included. In the k-nearest-neighbor approach, a point's outlier score is the sum of its distances to its k nearest neighbors. However, if the dataset dimensionality is high, these methods break down because of the "curse of dimensionality".
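The k-nearest-neighbor outlier score mentioned above can be sketched in a few lines. This is a minimal NumPy illustration on a toy point set (not the document's spectral data); the score is the sum of Euclidean distances to the k nearest neighbors, so isolated points score high:

```python
import numpy as np

def knn_outlier_scores(X, k=3):
    """Outlier score = sum of Euclidean distances to the k nearest neighbors."""
    X = np.asarray(X, float)
    # Pairwise Euclidean distance matrix via broadcasting.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distance
    nearest = np.sort(d, axis=1)[:, :k]  # k smallest distances per row
    return nearest.sum(axis=1)

# Four tightly clustered points and one distant point:
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]])
scores = knn_outlier_scores(X, k=3)  # the last point scores highest
```

The full N × N distance matrix is fine for small N; larger datasets would use a spatial index instead.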
There are other methods based on other definitions of central tendency. For example, the local outlier factor (LOF) algorithm is based on density, which can be estimated from clusters of points. If the density of some cluster or group of points is lower than that of its neighbors, the points in that cluster are potential outliers. These algorithms likewise degrade on high-dimensional data. Angle-based outlier detection (ABOD) and grid-based subspace outlier detection have been proposed to handle high-dimensional datasets.
2. Methodology of
2.1 hardware, imaging, animal models
Multispectral image data were acquired using a custom-built desktop imaging device. Fig. 1 shows a diagram of the image acquisition system. Both the light source and the image acquisition module were arranged in reflective mode at a distance of 60 cm from the target surface. A tungsten lamp (ViP Pro-light, Lowel Inc.), run in DC mode, provided broad-spectrum illumination of the target surface. A piece of ground glass (iP-50, Lowel Inc.) was mounted in front of the tungsten lamp to spread the light and improve the spatial uniformity of the illumination. A portion of the incident light penetrates the target surface, and some of the backscattered light signal is collected by the image acquisition module. The image acquisition module included a high-performance IR-enhanced optical lens (model: Distagon T F-2.8/25mm, Zeiss), an eight-channel filter wheel, and a 12-bit monochrome camera (BM-141GE, JAI Inc.). The optical bandpass filters were chosen to pass a single band of wavelengths to the camera. The following eight bandpass filters were mounted in the filter wheel; their center wavelengths (CWL) and full widths at half maximum (FWHM), in nm, are: 420-20, 542-10, 581-20, 601-13, 726-41, 800-10, 860-20 and 972-10. A reflectance panel (Zenith Lite Panel, SphereOptics GmbH) was used to normalize the intensity at each wavelength; the maximum pixel value is 4096 (12 bits). The eight wavelengths were selected based on the known absorbance spectra of skin tissue, because at these wavelengths accurate tissue differentiation for efficient classification is obtained. As the filter wheel rotates, the camera sequentially acquires single-wavelength images through each of the eight filters. The images are stored on the computer in an uncompressed format. All calculations and statistics were performed in MATLAB (version 2014b).
Figs. 43A to 43C show an example hardware system apparatus (Figs. 43A, 43B). Multispectral image data were acquired using a custom-built desktop imaging device. Fig. 43A shows a diagram of the image acquisition system. While a tungsten lamp is used in the example of Figs. 43A-43C, in other embodiments the light source may be any broad-spectrum illumination source, or any illumination source matched to the wavelengths required for data analysis.
We used the above system to collect animal data following a scientific burn model study protocol designed under the approval of the Institutional Animal Care and Use Committee (IACUC). In order to approximate human skin (epidermal thickness: 50 to 120 μm), male Henford pigs (epidermal thickness: 30 to 40 μm) were selected as an animal model.
A round burn (diameter 3.6 cm) was formed on the back of the pig (Figs. 43B, 43C). At this stage, three skin tissue types are visible: healthy skin, burn, and hyperemia (reddening of the skin due to increased blood perfusion after injury). Debridement was performed with successive 1 mm deep tangential cuts, the area of each debridement at each burn being 6 cm × 6 cm (Fig. 43B). During debridement, six different skin tissue types can be seen: healthy skin, partial or full burn (depending on the severity of the burn), blood, wound bed, and hyperemia. Each excised layer was preserved in 10% neutral buffered formalin and sent for histopathological examination. Each specimen was sectioned and stained with hematoxylin and eosin (H&E). The purpose of the histological examination was to obtain the aforementioned "gold standard" validation of the tissue types and their locations in the multispectral images. The depth of the burn and the precise excision layer at which viable tissue appeared were determined by two pathologists.
Three pigs were used, each with six burn sites. For each burn site, we acquired images at no fewer than five different time points using all eight wavelengths: a baseline image taken before the burn, a burn image taken immediately after the thermal injury, an image after the first 1 mm tangential excision with the dermatome, and images after the next two tangential excisions.
2.2 training data acquisition and Classification Algorithm
A supervised learning method was executed to generate the classification model. To build a training database comprising the six skin tissue classifications, we extracted the pixel intensities and locations of each of the six tissue types in each acquired image, using the histological data as the reference. Each section of excised skin was cut to show burn depth, as judged by a board-certified pathologist according to accepted protocols (Fig. 44). We developed a mapping tool to mark healthy areas, partial burn areas, full burn areas, blood areas, wound bed areas, and hyperemic areas. The pathologist judged these areas from the H&E stained burn eschar using the following criteria. A full burn is the most severely damaged area: there is irreversible tissue damage due to coagulation of collagen and other tissue components, and in histological terms this region is characterized by a loss of cellular detail. A partial burn shows reduced blood perfusion and evidence of vascular occlusion; collagen generally retains its structural integrity, but there are some signs of cellular necrosis with dense nuclei, and the tissue region is considered potentially salvageable. A healthy wound bed is distinguished by essentially normal histology appearing deep to the burned tissue. These regions were correlated with the previously acquired spectral imaging data, thereby establishing the correct label data (ground truth) against which our classification algorithm is judged.
Good results were not initially obtained using the Support Vector Machine (SVM) and k-nearest-neighbor (KNN) classification algorithms (see Fig. 49(A2), Fig. 49(B2)). For the healthy case the output should be entirely healthy skin tissue; however, we observed other tissues, such as wound bed, appearing in the output. For the burn case, we know from physiology that hyperemia should surround the burn, and healthy skin should not be classified as full burn in the output; Fig. 49(B2) uses 10-fold cross-validation. The model accuracy of 63% is far below the accuracy we require. These two results motivated us to detect and remove the outliers in our small database.
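The 10-fold cross-validation procedure used to obtain these accuracy figures can be sketched as follows. This is a minimal NumPy illustration with a 1-nearest-neighbor classifier standing in for the SVM/KNN models, on synthetic well-separated "classes" rather than the actual six-class spectral data:

```python
import numpy as np

def one_nn_predict(train_X, train_y, test_X):
    """Classify each test sample by the label of its nearest training sample."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=-1)
    return train_y[d.argmin(axis=1)]

def cross_val_accuracy(X, y, folds=10, seed=0):
    """Mean accuracy over `folds` random, non-overlapping test splits."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    accs = []
    for part in np.array_split(idx, folds):
        mask = np.ones(len(X), bool)
        mask[part] = False  # held-out fold
        pred = one_nn_predict(X[mask], y[mask], X[~mask])
        accs.append((pred == y[~mask]).mean())
    return float(np.mean(accs))

# Two hypothetical, well-separated classes in an 8-band feature space:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 8)), rng.normal(2.0, 0.3, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
acc = cross_val_accuracy(X, y)
```

Overlapping classes (as with the real blood and burn spectra) drive this accuracy down, which is the motivation for the outlier removal that follows.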
2.3 outlier detection
To reduce the impact of outliers on the model, an outlier detection algorithm using two new concepts was developed on the accepted basis of maximum likelihood estimation described above. First, a subset of samples located around the middle of the sample space is collected as a subspace from which the mean and standard deviation parameters of the model are computed by maximum likelihood estimation. We call this subspace the "first window"; its size is set by the new coefficients α1 and α2 (0-0.5, unitless), defined as the distances from the middle of the sample space to the left and right edges of the window (accordingly, the width of the first window is α1 + α2). Since the width of the entire sample space is normalized to 1, setting α1 = α2 = 0.5 selects the entire sample space as the "first window". By appropriately adjusting these coefficients, abnormal values can be excluded before the distribution parameters (the mean (μ) and standard deviation (σ) of the Gaussian distribution) are calculated for the classification model. Second, a new feature-importance (wj) weighting (Wi) is applied to the probability (derived from a Z-score or other distribution function) to generate a threshold for detecting outliers within the first window. The technical details of these steps are as follows.
We start with a large sample space consisting of spectral data acquired from the swine burn model. The basis of the algorithm is the well-established maximum likelihood estimation technique. For independent, identically distributed samples, the joint density function is:
f(χ1, χ2, χ3, …, χn ∣ θ) = f(χ1 ∣ θ) × f(χ2 ∣ θ) × f(χ3 ∣ θ) × … × f(χn ∣ θ)
where χ1, χ2, χ3, …, χn are the samples and θ represents the parameters of the model. The likelihood function is:
L(θ ; χ1, χ2, …, χn) = f(χ1, χ2, …, χn ∣ θ) = ∏ (i = 1 to n) f(χi ∣ θ)
In practice, the logarithm of the likelihood function, called the log-likelihood, is used:
ln L(θ ; χ1, χ2, …, χn) = Σ (i = 1 to n) ln f(χi ∣ θ)
To estimate θ0, calculate the value of θ that maximizes this expression:
θ̂ = arg maxθ ln L(θ ; χ1, χ2, …, χn)
We calculate the parameter θ0 according to the maximum likelihood method. If the sample distribution is Gaussian, the mathematical equations describing the maximum likelihood parameters are as follows:
μ̂ = (1/n) Σ (i = 1 to n) χi

σ̂² = (1/n) Σ (i = 1 to n) (χi − μ̂)²
where the χi are the sample values near the middle of the sample space. Our first new anomaly detection and removal step requires that the number of samples n used in these estimates be controlled by the coefficients αi as follows:

n = (α1 × N) + (α2 × N)
At this point, we apply the second new anomaly detection and removal step: we assign weights in place of raw probabilities when detecting outliers. First, a probability (pi) and a feature importance (wi) are determined. The probability pi can be calculated using the distribution parameters of the sample distribution function. For example, for a Gaussian distribution, pi is generated from the standard Z-score; the parameters are calculated as follows:
μ = (1/N) Σ (i = 1 to N) χi

σ = √( (1/N) Σ (i = 1 to N) (χi − μ)² )
where μ is the sample mean and σ is the standard deviation of the sample. The Z-score used to determine pi is:
Zi = (χi − μ) / σ,  pi = Φ(Zi), where Φ is the standard normal cumulative distribution function
For our outlier detection algorithm, the probability pi is then adjusted in the following way:
pi = 2 × pi, if 0.05 ≤ pi ≤ 0.5

pi = 2 × (1 − pi), if 0.5 < pi < 0.95

pi = 0, if pi > 0.95 or pi < 0.05
The feature importances wi may vary depending on the desired application, and they may also be adjusted to improve the accuracy of any given model. In our example, the feature importance is determined by the relative effectiveness of each of the eight wavelengths implemented in the MSI machine at distinguishing the different tissue classes from each other. In machine learning terms, wavelengths carrying more discriminative information are given higher weight.
Having calculated the probabilities pi and feature importances wj in the steps above, the sample weight Wi is then calculated as follows:
Wi = Σ (j = 1 to 8) wj × pij
Finally, a threshold weight Wthreshold is assigned to generate a "second window" of data. For a given sample, if Wi is greater than Wthreshold, the sample is assigned to the training set (the second window). Otherwise, the sample point is considered an abnormal value and removed from the training set.
Empirical testing was repeated to determine the values of the algorithm coefficients (α1, α2, wi, Wthreshold).
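The two-window procedure described above can be sketched end-to-end for a single wavelength. This is a minimal illustration under several stated assumptions: the "first window" is read as the central α1 + α2 fraction of the value range (whose total width is normalized to 1), the probability folding follows the piecewise rule above, and the threshold of 0.1 is a hypothetical single-band stand-in for the document's multi-band Wthreshold; the data are synthetic:

```python
import numpy as np
from math import erf, sqrt

def gaussian_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def first_window_params(x, alpha1=0.2, alpha2=0.2):
    """Estimate (mu, sigma) from the 'first window': samples within
    alpha1 / alpha2 of the middle of the normalized sample space, so
    extreme values cannot distort the estimates."""
    x = np.asarray(x, float)
    width = x.max() - x.min()
    mid = x.min() + 0.5 * width
    win = x[(x >= mid - alpha1 * width) & (x <= mid + alpha2 * width)]
    return win.mean(), win.std()

def adjusted_probability(v, mu, sigma):
    """Fold the Gaussian CDF probability so central samples score near 1
    and both tails (p < 0.05 or p > 0.95) score 0."""
    p = gaussian_cdf((v - mu) / sigma)
    if p < 0.05 or p > 0.95:
        return 0.0
    return 2.0 * p if p <= 0.5 else 2.0 * (1.0 - p)

def second_window(x, w=1.0, threshold=0.1):
    """Keep samples whose weighted score W = w * p exceeds the threshold;
    the rest are treated as outliers and removed from the training set."""
    mu, sigma = first_window_params(x)
    scores = np.array([w * adjusted_probability(v, mu, sigma) for v in x])
    return np.asarray(x, float)[scores > threshold]

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(10.0, 1.0, 500), [30.0, -15.0]])
kept = second_window(data)  # both extreme values are removed
```

Because the first-window parameters are computed before any tail sample can influence them, the two injected extremes receive a score of 0 and are excluded from the training set.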
3. Conclusion
3.1 outlier detection
Before performing the data classification and outlier removal algorithm, the unfiltered spectral image data were analyzed with the SVM and k-nearest-neighbor (KNN) classification algorithms to train several burn classification models. When these models were given test data after training, the overall average classification accuracy was 63% relative to the correct label data. After establishing this baseline accuracy of the burn model, the data classification and outlier removal algorithm was applied to the spectral image dataset before the dataset was used to train the classification algorithms.
Through empirical testing, effective values of the algorithm coefficients were found to be: α1 = α2 = 0.2, w1 = w2 = … = w8 = 1, Wthreshold = 7. With these parameters specified, the mean and standard deviation parameters of the "first window" were calculated for each of the eight implemented wavelengths.
The results of the data classification algorithm after outlier detection and removal are shown in Figs. 48A and 48B. Figs. 48A to 48B show an example of the six classifications with outliers present (Fig. 48A) and with outliers removed (Fig. 48B) in a 2D feature space. For illustration, the sample space (red) is shown in two dimensions, with only two of the eight implemented wavelengths plotted. After outlier detection and removal, the second-window subspace (blue) used to train the burn classification model becomes more uniform and tightly clustered, so that in theory the resulting model has higher accuracy.
To visualize the results of the data classification and outlier detection algorithm across all eight MSI wavelengths, sample box plots were drawn showing the data at all wavelengths for each tissue classification before and after outlier detection and removal. In the initial sample space (Figs. 46A to 46F and Figs. 47A to 47B), all tissue classifications, especially blood, include a large number of abnormal values. After outlier detection and removal, the number of outliers remaining in the subspace is significantly reduced, as shown in Fig. 47B.
Figs. 48A-48B show a representative two-dimensional sample space with the spectral data for all six classifications plotted together. The box plots of Figs. 47A-47B describe the sample space before (A) and after (B) outlier detection and removal at all eight wavelengths for each tissue class. Boxes represent the interquartile range; red plus signs mark data outliers. The number of outliers remaining in the sample space after outlier detection is significantly reduced in all tissue classifications, especially blood. Where the different tissue classifications are plotted together, large overlaps between the groups can be seen, blood in particular; after the outlier detection and removal algorithm is applied, better separation between tissue classifications is shown. After removing outliers, a new burn classification model was generated using the same classification algorithms (SVM, KNN, etc.). The overall average model accuracy increased from 63% to 76%.
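The red plus signs in the box plots follow the standard box-plot convention of flagging points beyond the quartile box. A minimal sketch of that per-band statistic, assuming the usual Tukey fence of 1.5 times the interquartile range and hypothetical absorbance values:

```python
import numpy as np

def tukey_outliers(x, k=1.5):
    """Flag points beyond k * IQR outside the quartile box (the 'red plus
    signs' of a standard box plot)."""
    x = np.asarray(x, float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

# Hypothetical single-band absorbance samples for one tissue class:
band = np.array([0.42, 0.45, 0.44, 0.43, 0.46, 0.44, 0.95])
mask = tukey_outliers(band)  # only the 0.95 sample is flagged
```

Applying this per wavelength and per tissue class reproduces the before/after counts the box plots visualize.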
Figs. 46A to 46F and 47A to 47B show, as box plots, the detection process and the statistics of the outliers in each band. Furthermore, we constructed a 2D feature space using the two most important wavelengths to show the effect of our proposed algorithm. The blue (blood) points spread throughout the sample space because of the optical properties of blood in the visible and near-infrared bands. With the algorithm applied, the clustering of the blood classification is clearly visible. The same observation applies to the red (healthy) classification.
3.2 conclusion of animal models
Fig. 49 shows the improvement in model classification accuracy. Before outliers were removed, the classification model could not accurately detect healthy skin or the areas of hyperemia that physiologically surround a burn. The model also predicted several different tissue classes where there was in fact only healthy skin. In the hyperemic area around the burn, the model predicted blood, and healthy skin outside the hyperemic zone was mistakenly classified as full burn. After outliers were removed, however, the model could accurately classify the healthy skin in the control and burn images and the areas of hyperemia around the burn.
4. Conclusion
Several points of this test are worth emphasizing. First, assigning values to the algorithm coefficients approximates a recursive process. The values were chosen because they effectively improved accuracy in the specific MSI application presented here. For other application scenarios, however, it is likely that these values will need to be adjusted to achieve the desired results.
Interestingly, after empirical testing to determine the best value for each wavelength, the feature importance (wj) was set to a value of 1 for all wavelengths. That all the feature importances wj were finally assigned the value 1 reflects the fact that each of the eight wavelengths employed by our MSI device was selected to provide unique spectral information independently of the others. This result is not unexpected: the wavelengths were selected based on the optical properties of skin tissue and burned tissue, as previously described.
The most challenging tissue to classify accurately is blood. The wide sample space collected for blood is apparent in Figs. 47A to 47B and Figs. 48A to 48B. The bimodal distribution of the spectral data characterizing blood results from the absorption spectrum of blood, which is itself bimodal across the visible and near-infrared bands. Each of the other tissue classifications has a single absorption peak, which in some cases results in a more uniform distribution of the spectral data in those classes.
Finally, the outlier detection and removal algorithm significantly improved the accuracy of the MSI application in classifying skin tissue. The algorithm successfully reduced the variance of the sample space for each tissue type. By limiting the variance in this way, the overlap of spectral characteristics is correspondingly reduced; with reduced overlap, training of the classification model improves and classification accuracy rises. By achieving a final accuracy of 76%, we improved the model to at least match the existing clinical standard of burn tissue classification by the clinical judgment of burn experts. Where no burn specialist is available, the model can help the physician decide on treatment for burn patients.
Overview of exemplary embodiments relating to ablation
The lack, described above, of sufficiently accurate measurements or tests for judging healing potential, together with the many factors known to affect the body's ability to heal wounds, suggests that a multivariate approach to diagnosis is needed to improve assessment. Spectral MD is specifically positioned to solve this problem because our device is designed to process information in the form of multiple independent variables to classify histopathological processes. The Spectral MD imaging device uses a machine learning algorithm to combine two optical imaging techniques, photoplethysmography (PPG imaging) and multispectral imaging (MSI), with the patient's health metrics, such as diabetes control or smoking, to generate predictive information (Fig. 50).
Fig. 50 shows a high-level diagrammatic overview of the two optical imaging techniques, photoplethysmography (PPG imaging) and multispectral imaging (MSI), combined with patient health metrics to generate predictive information in accordance with the present invention. We call this device DeepView (Gen 2). DeepView (Gen 2) is expected to provide the high sensitivity and specificity required to select the appropriate LOA for diabetic patients. The two optical imaging methods are designed to infer important tissue features, including arterial perfusion and tissue oxygenation. These two measurements are key to the selection of the LOA, since wound healing in diabetic patients is limited by severe lack of arterial perfusion and low tissue oxygenation (Norgren, Hiatt, Dormandy, Nehler, Harris, & Fowkes, TASC II Working Group, Inter-Society Consensus for the Management of Peripheral Arterial Disease (TASC II), 2007) (Mohler III, Screening for Peripheral Arterial Disease, 2012). Using our method, we can simultaneously estimate tissue-level perfusion over a large area of the leg to identify under-perfused regions of the limb. This contrasts with the guesswork of relying on clinical judgment alone, in which the observer must estimate the appropriate LOA from the patient history and physical examination in conjunction with vascular studies that rarely include a comprehensive assessment of the patient's microcirculation. At the same time, DeepView (Gen 2) also assesses the patient's health metrics that have systemic effects on wound healing potential. By combining the local assessment of tissue microcirculation with a global assessment of the systemic factors affecting wound healing, DeepView (Gen 2) considers the multiple factors affecting wound healing rather than a single variable.
Our DeepView (Gen 2) system appropriately addresses this multivariate problem using the form of statistical training for predictive analysis called machine learning. We believe that by combining data from the local microcirculation assessment with systemic factors affecting wound healing (such as diabetes, smoking status, age, and nutritional status) that are not readily observed in the microcirculation using prior-art methods, this approach will for the first time provide important information about the overall likelihood of a patient's wound healing. By considering all of these factors together, the accuracy of the DeepView (Gen 2) system is improved, because both local and systemic factors affect the likelihood of eventual healing.
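The fusion described above can be pictured as a single feature vector per imaged region: local optical measurements concatenated with encoded systemic health metrics, which then feed one classifier. All names and values below are hypothetical illustrations, not the device's actual variables or encoding:

```python
import numpy as np

def fused_features(ppg_amplitude, msi_absorbances, health):
    """Concatenate local optical measurements with systemic health
    indicators into one feature vector for a machine-learning model."""
    systemic = [
        float(health["diabetic"]),  # 0/1 flag
        float(health["smoker"]),    # 0/1 flag
        health["age"] / 100.0,      # crude scaling to a comparable range
    ]
    return np.concatenate([[ppg_amplitude], msi_absorbances, systemic])

x = fused_features(
    ppg_amplitude=0.8,
    msi_absorbances=np.array([0.42, 0.55, 0.61, 0.58, 0.33, 0.29, 0.31, 0.47]),
    health={"diabetic": True, "smoker": False, "age": 64},
)
```

A trained classifier operating on such fused vectors sees local and systemic factors jointly, which is the design point the text makes.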
Our device has at least 95% sensitivity and 95% specificity for predicting the likelihood of wound healing at the surveyed level after amputation (see the Phase I feasibility test and Phase II success criteria). If used with this sensitivity and specificity for the routine assessment of patients prior to amputation, we expect DeepView (Gen 2) to reduce the re-amputation rate by 67%, which would mean fewer than 10,000 re-amputations per year, while improving the quality of life of amputees and reducing the health costs associated with their care. Currently, the cost to Medicare of an ABI examination prior to amputation is about $150 per patient, with large portions of that cost resulting from the technician's time to perform the examination and the physician's time to interpret the results (Criqui et al., 2008). The proposed device has no impact on the existing costs of the LOA assessment, since its cost is expected to be the same as that of the existing vascular assessment. Unlike some existing LOA tests, our imaging system requires no disposables. The routine cleaning and service costs of the system are similar to those of systems currently on the market. The cost situation is explained further in the commercialization plan.
To our knowledge, the DeepView (Gen 2) imaging technique of Spectral MD is the first system designed to fuse the optical imaging techniques of photoplethysmography (PPG imaging) and multispectral imaging (MSI). Fig. 51 shows an exemplary view of a device (DeepView) that fuses the optical imaging techniques of photoplethysmography (PPG imaging) and multispectral imaging (MSI). Furthermore, to our knowledge, this is the first imaging technique that can incorporate important patient health indicators into the evaluation algorithm. Prior to the development of this system, Spectral MD was the first company to provide 2D images of plethysmographic waveforms (Gen 1 technology), cleared by the FDA for sale in the US. Our Gen 2 technology can now combine blood flow assessment (i.e., arterial pulse amplitude) with tissue characterization (i.e., spectroscopic analysis). When these measurements are taken together from the tissue, they provide a more accurate assessment of the tissue than either single measurement (see preliminary studies below).
Studies judging the likelihood of healing at a given LOA indicate a significant difference in tissue oxygen levels between successful and unsuccessful resection sites. These studies investigated tissue oxygenation using transcutaneous oxygen measurement (TCOM). However, despite the fact that TCOM technology has been available for decades, the use of TCOM has not outperformed clinical assessment, and no clear cutoff value of tissue oxygenation predictive of successful resection at a given LOA has been established by clinical trials. According to expert evaluation, the reasons why TCOM is not suitable for clinical practice are as follows. First, TCOM acquires data from a very small region of interest. TCOM also requires heating of the patient's skin, which sometimes results in skin burns, especially in patients with diabetes. Finally, TCOM results are affected by the surrounding environment and local tissue edema, limiting the internal consistency of the device over time.
DeepView (Gen 2) is designed to overcome these limitations of TCOM and other available devices for predicting the likelihood of healing at a selected LOA. The device acquires data over a large area of the tissue surface, which enables characterization and comparison of tissue oxygenation and perfusion changes over the entire surface rather than in isolated regions. Since DeepView (Gen 2) is non-invasive, non-contact, and does not emit harmful radiation, the device poses no significant risk of patient harm. The device is not affected by small changes in ambient temperature. Most importantly, however, DeepView (Gen 2) analyzes clinically important patient health indicators such as diabetes history, presence of infection, smoking status, and nutritional status to provide the end user with an understandable assessment of wound healing potential, whereas the prior art only assesses local tissue oxygenation.
Method
Aspects of the proposed imaging apparatus include non-invasive optical imaging for various tissue classification scenarios, including optimal selection of an LOA. The DeepView (Gen 2) imaging system of Spectral MD is a perfusion-focused imaging system that provides diagnostic images based on measurements of tissue perfusion and patient health indicators. Caregivers can easily be trained to perform the imaging examination. Imaging of the limb takes approximately 10 minutes, and the results are stored electronically for evaluation by the physician. This is acceptable from a patient perspective because the examination has no harmful side effects, does not contact the skin, and does not cause discomfort.
The main innovation investigated by the present invention is as follows: augmenting the assessment of the microcirculation with patient health indicators to improve the accuracy of diagnosing wound healing potential in a resection plan. We describe the value of each DeepView component in the subsequent sections, followed by a brief discussion of how these multiple variables are combined into a single prediction of wound healing potential.
As described above, the DeepView (Gen 2) device performs two optical methods of blood flow assessment simultaneously. The first of these, PPG imaging, uses the same technique employed in pulse oximetry to acquire vital signs such as heart rate, respiratory rate, and SpO2 (Severinghaus & Honda, 1987), although it is more advanced in that DeepView (Gen 2) acquires one million unique PPG signals over a large tissue area. The PPG signal is generated by measuring the interaction of light with dynamic changes in vascularized tissue. Vascularized tissue expands and contracts by approximately 1-2% of its volume at the frequency of the cardiac cycle with each systolic pressure wave (Webster, 1997). This inflow of blood increases the volume of the tissue and brings with it additional hemoglobin, which strongly absorbs light. Thus, the overall absorption of light within the tissue oscillates with each heartbeat. This information can be converted into the vital signs recorded by a pulse oximeter.
To generate images based on plethysmography, we utilized the path of light through tissue (Thatcher, Plant, King, Block, Fan, & DiMaio, 2014). A small portion of the light incident on the tissue surface is scattered into the tissue. A small portion of this scattered light leaves the tissue from the same surface that it was originally incident on (Hu, Peris, Echiadis, Zheng, & Shi, 2009). This backscattered light is acquired over the entire tissue area using a sensitive digital camera, such that each pixel in the imager includes a characteristic PPG waveform determined by the scattered light intensity variations. To generate a 2D visualization map of the relative tissue blood flow, the amplitude of each unique waveform is measured. To improve accuracy, we measure the average amplitude for many heartbeat samples.
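The per-pixel amplitude mapping described above can be sketched as follows. This is a minimal illustration, not Spectral MD's actual implementation: it assumes the backscattered light has been captured as a (time, height, width) video stack, and it estimates each pixel's pulsatile amplitude from the Fourier magnitude within an assumed cardiac frequency band, which effectively averages over many heartbeats.

```python
import numpy as np

def ppg_amplitude_map(frames, fps, cardiac_band_hz=(0.8, 3.0)):
    """Estimate a per-pixel PPG amplitude map from a video stack.

    frames: array of shape (T, H, W) of backscattered light intensity.
    Returns an (H, W) map of relative signal amplitude in the cardiac
    frequency band, derived from the Fourier magnitude spectrum.
    """
    t = frames.shape[0]
    # Remove the slowly varying (DC) component at each pixel.
    detrended = frames - frames.mean(axis=0)
    # Magnitude spectrum along time for every pixel.
    spectrum = np.abs(np.fft.rfft(detrended, axis=0)) / t
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    # Keep only energy in the cardiac band (~48-180 beats per minute).
    band = (freqs >= cardiac_band_hz[0]) & (freqs <= cardiac_band_hz[1])
    return spectrum[band].sum(axis=0)  # (H, W) relative amplitude map

# Synthetic demo: a "perfused" patch pulsing at 1.2 Hz on a static background.
fps, t = 30, 300
time = np.arange(t) / fps
frames = np.full((t, 32, 32), 100.0)
frames[:, 8:16, 8:16] += 2.0 * np.sin(2 * np.pi * 1.2 * time)[:, None, None]
amp = ppg_amplitude_map(frames, fps)
print(amp[12, 12] > 10 * amp[0, 0])  # pulsatile region stands out
```

In practice the cardiac band would be tracked from the measured heart rate rather than fixed, but the principle (amplitude of the periodic component per pixel) is the same as described in the text.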
The second optical measurement taken by DeepView (Gen 2) is MSI. This technique measures the reflectance of selected visible and near-infrared (NIR) wavelengths (400 nm-1100 nm) at the tissue surface. Spectral signatures of objects were originally used in remote sensing (e.g., satellite or aerial imaging) for geological exploration or detection of military targets, but the technology is becoming increasingly accepted in the medical field (Li, He, Wang, Liu, Xu, & Guo, 2013). The method is effective for quantifying important skin features associated with a number of pathologies, including PAD. With respect to selection of the LOA, MSI allows quantification of the volume of hemoglobin and the presence of oxygenated hemoglobin (Jolivot, Benezeth, & Marzani, 2013) (Zonios, Bykowski, & Kollias, 2001). Other applications of this technique are illustrated in the preliminary work below.
The wavelengths of light used for MSI in DeepView (Gen2) were selected based on well-established light-tissue interaction characteristics. Melanin in the stratum corneum and epidermis absorbs primarily UV and visible wavelengths. Near-infrared wavelengths (700 nm to 5000 nm) are the least absorbed by melanin and are considered the best choice for probing deep into the skin. Blood vessels coursing through the skin contain a large amount of hemoglobin, whose concentration determines the extent to which the skin absorbs wavelengths greater than 320 nm. The light absorption of hemoglobin also varies depending on whether the molecule is in the oxygenated or deoxygenated state. Since tissue melanin concentration, hemoglobin concentration, and hemoglobin oxygenation change during the course of disease, MSI is able to detect the resulting changes in the reflectance spectrum. Thus, abnormal skin tissue can be identified by a change in its reflectance spectrum compared to healthy tissue. Although MSI describes tissue using a smaller number of unique wavelengths than newer hyperspectral imagers, MSI has advantages when spatial resolution, spectral range, image acquisition speed, and cost are considered together (Lu & Fei, 2014).
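As an illustration of how measurements at two wavelengths can be inverted for hemoglobin oxygenation, the sketch below applies a simple two-wavelength Beer-Lambert model. The wavelengths (660 nm and 940 nm) and extinction coefficients are rough, textbook-style values chosen purely for illustration; they are not the filters or calibration used by DeepView.

```python
import numpy as np

# Approximate molar extinction coefficients (cm^-1 / M) for deoxy- and
# oxy-hemoglobin at two wavelengths; illustrative, not calibration-grade.
EXTINCTION = {
    660: {"Hb": 3200.0, "HbO2": 320.0},
    940: {"Hb": 700.0, "HbO2": 1200.0},
}

def oxygenation_fraction(absorbance_660, absorbance_940):
    """Estimate a relative HbO2 fraction (an SO2-like index) per pixel by
    inverting a two-wavelength Beer-Lambert model; the optical path length
    is folded into the unknown relative concentrations."""
    e = EXTINCTION
    m = np.array([[e[660]["Hb"], e[660]["HbO2"]],
                  [e[940]["Hb"], e[940]["HbO2"]]])
    a = np.stack([np.ravel(absorbance_660), np.ravel(absorbance_940)])
    hb, hbo2 = np.linalg.solve(m, a)          # relative concentrations
    so2 = hbo2 / np.clip(hb + hbo2, 1e-12, None)
    return so2.reshape(np.shape(absorbance_660))

# Demo: synthesize absorbance for a pixel that is 75% oxygenated.
hb, hbo2 = 0.25, 0.75
a660 = 3200.0 * hb + 320.0 * hbo2
a940 = 700.0 * hb + 1200.0 * hbo2
print(round(float(oxygenation_fraction(a660, a940)), 2))  # → 0.75
```

With more than two wavelengths the same idea becomes a least-squares fit, which also permits additional chromophores such as melanin to be separated out.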
The third component of the data used by DeepView (Gen2) is the set of relevant patient health indicators collected during routine patient assessment. Various factors that affect wound healing have been identified and characterized. Many or all of these factors (including patient age, diagnosis of diabetes, smoking history, infection, obesity, medical conditions, and nutritional status) often affect patients undergoing diabetic lower limb amputation. Although clinicians currently consider a gestalt of these variables in evaluating potential LOAs, DeepView (Gen2) is able to evaluate these metrics quantitatively to predict the likelihood of initial wound healing at a given LOA. The DeepView device uses a machine learning algorithm to integrate the patient health indicators with the optical imaging data. The physician simply enters the relevant patient health indicators into the device at the time of imaging. This data is processed as additional variables by our machine learning algorithm, no differently from the optical data acquired by PPG imaging and MSI. The machine learning algorithm is trained to generate a quantitative output after evaluating all data collected by DeepView (Gen 2). The quantitative output is converted into an image identifying regions of the scanned tissue surface that are likely or unlikely to heal after resection.
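The fusion of optical features with entered health indicators can be illustrated with a toy classifier in which both kinds of measurements are simply columns of one feature matrix. All variable names, coefficients, and data below are synthetic assumptions for illustration; the document does not disclose the actual algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400

# Hypothetical per-site optical features (PPG amplitude, MSI oxygenation)
# and patient health indicators (age, HbA1c, smoking) -- all synthetic.
ppg_amp = rng.normal(4.0, 1.5, n)
msi_so2 = rng.normal(0.6, 0.15, n)
age = rng.normal(65, 10, n)
hba1c = rng.normal(7.5, 1.5, n)
smoker = rng.integers(0, 2, n).astype(float)

# Synthetic ground truth: healing depends on both local perfusion and
# systemic factors, mirroring the document's premise.
score = 1.2 * ppg_amp + 4.0 * msi_so2 - 0.04 * age - 0.3 * hba1c - 0.8 * smoker
heals = (score + rng.normal(0, 0.5, n) > np.median(score)).astype(int)

# Optical data and health indicators enter as one feature matrix.
X = np.column_stack([ppg_amp, msi_so2, age, hba1c, smoker])
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, heals)
print(f"training accuracy: {model.score(X, heals):.2f}")
```

The point of the sketch is architectural: once health indicators are encoded numerically, the learning algorithm treats them identically to the imaging-derived variables, as the text describes.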
The DeepView (Gen 2) device is a combination of the DeepView Gen 1 PPG imager, an MSI camera, and targeted patient health indicator input (fig. 52). Fig. 52 shows an example of the combination of the DeepView Gen 1 PPG imager, MSI camera, and targeted patient health indicator input.
Preliminary study
By adjusting system settings and algorithms, DeepView (Gen 2) can be adapted to assess tissue characteristics under different pathological conditions. For the phase I LOA study of the present invention, we developed a specific algorithm, using specific lenses and filters, suited to measuring the pulsation amplitude and tissue oxygenation used to predict wound healing after initial resection (see the experimental design and methods section). The proposed technology has successfully passed bench-top, preclinical, and pilot clinical testing for other applications. The results of these tests are presented to support the use of the device in selecting an LOA.
Preclinical burn model
To guide physicians during burn debridement procedures, we use lenses, filters, and algorithms (e.g., spectral analysis) that detect a combination of blood flow (i.e., arterial pulse amplitude) and tissue structural integrity, including blood volume, inflammation, gangrene, etc. Below we show that our PPG and MSI algorithms for assessing epidermal microvascular status, individually or together, accurately identify post-burn necrotic tissue. Using the DeepView Gen 1 PPG imaging system, we identified a clear difference in blood flow in necrotic burned tissue relative to surrounding healthy tissue. With the MSI camera, we demonstrated, in an IACUC-approved swine burn model test, that the presence of burned tissue requiring surgical removal was accurately identified against the gold standard of histopathology (96% sensitivity, 82% specificity).
Briefly, 24 deep partial-thickness burns were created on four piglets using a pressure-controlled burn bar. Beginning ten minutes after the burn, we obtained PPG and MSI signals immediately after each of a series of 1.0 mm debridements until healthy tissue was reached. After each debridement, the excised tissue samples were processed and sent to a histopathologist for blinded evaluation. In the gold-standard histopathological evaluation procedure, a certified histopathologist identified healthy wound tissue and non-viable burn tissue in each resection specimen. In addition, a certified surgeon, blinded, reviewed color images of the burns to delineate healthy wound tissue and non-viable burn tissue. We analyzed the results of our PPG and MSI assessments independently, blinded to the histopathologist's results and the surgeon's analysis.
By identifying the difference in PPG signal strength between different tissue types, our PPG imager was able to identify the appropriate endpoint of debridement as judged by histological evaluation. Each series of data acquisitions began with measurement of the PPG signal in the region of interest before the burn; as expected, the PPG signal on non-burned skin was uniform, representative of healthy tissue. After the burn occurred, however, the PPG signal dropped sharply in the center of the image, while the surrounding tissue still showed a signal consistent with healthy, non-burned tissue.
Fig. 53 shows the difference between the signal of burned tissue and the signal of a healthy wound exposed by debridement. Over the series of debridements, the successively processed images show a clear distinction between the signals of burned tissue needing further removal and of the healthy wound eventually revealed by debridement. The average signal intensity of the burned tissue was 2.8 ± 1.8 dB, while the signal intensities of healthy skin and healthy wound tissue were significantly higher at 4.4 ± 2.2 dB and 4.2 ± 2.6 dB, respectively (p < 0.05). As expected, the PPG conclusions were consistent with those of the histopathologist and the surgeon.
In the same experiment, the MSI assessment accurately classified the important physiological tissue classes occurring during burn debridement with an overall accuracy of 82%; specifically, we achieved 96% sensitivity and 82% specificity for necrotic burn tissue as judged by histopathology. Six physiological classifications were distinguished in the MSI assessment: healthy skin, hyperemia, wound, bleeding, minor burn, and severe necrotic burn. Fig. 54 shows these six exemplary physiological classifications. Similar to the progression of the PPG signal at the burn site, the MSI results initially detected the homogeneity of healthy skin before the burn was formed and subsequently identified the various viable and non-viable tissue types accurately during successive debridements until a healthy wound was reached.
The final step evaluated the efficacy of the combined PPG and MSI data. As previously described, we used one imaging system to acquire the PPG and MSI signals simultaneously for the same burn. Using the combined data, we tested the efficacy of fusing the two measurements with a machine learning algorithm. From this data set, we found that the accuracy of using MSI alone was 82%. Combining the PPG data with the MSI data in the classifier improved the overall accuracy to 88%. Fig. 55 graphically shows the results for PPG data alone, MSI data alone, and the combination of PPG and MSI data.
Pilot clinical feasibility testing
Fig. 56 shows example PPG signals in the hand, leg, and foot regions. The DeepView (Gen 1) PPG imaging device has also undergone pilot clinical studies to determine the ability of PPG imaging to provide useful blood flow data in various clinical applications. The data include PPG images of skin blood flow collected from cardiovascular ICU patients, and of tissue viability in decubitus ulcers, skin grafts, and lower limb ischemia. As an example of our pilot clinical assessments, we present a case study of a female with an aortic dissection that led to bilateral occlusion of her popliteal arteries and abnormal blood flow to her extremities. Based on the clinical assessment of vascular surgeons, blood flow was considered reduced in the portions of her legs distal to the knee. We measured the pulsatile blood flow in the hands, legs (near the knees), and feet. The resulting images show that PPG signals (pulsatile areas) are present in the hands and legs, while pulsatile blood flow is absent in the feet, consistent with the patient's known clinical condition. As demonstrated in these preliminary studies, the ability of the DeepView technique to detect blood flow is an important feature of device performance for guiding the selection of an LOA.
Summary and discussion
In the burn model and the patient example studies, we have demonstrated the feasibility of using our PPG- and MSI-capable device to identify tissue lacking blood flow and oxygenation. Although the direct implementation of our technique differs between classifying burns and identifying the LOA in PAD, the underlying principles are the same. Whether the clinical user is assessing a burn or estimating the likelihood of initial wound healing at different LOAs, the same physical tissue properties are measured in both cases. For burn assessment and LOA assessment (or other possible assessments), only the algorithms and filter settings differ. As shown above, using PPG and MSI together in our technique gives a more accurate assessment of pathology caused by damage to the epidermal microvasculature and reduced blood perfusion. Our technique should be able to predict the healing potential at a given LOA based on the same principles used in the burn studies, and adding important patient health indicators that affect wound healing outcomes could further improve the accuracy of DeepView (Gen 2). Our phase I study will test this hypothesis.
Experimental design and methods - phase I pilot clinical study
In phase I, our specific objective is to test the feasibility of using our device to diagnose healing of the resection site in a pilot clinical study. As part of this evaluation, we will collect data from a large number of resection patients in order to train a machine learning diagnostic algorithm for diagnosing the likelihood of healing at different resection levels. Because the device is rapid and non-invasive, and imaging studies can be performed during routine care, such as in the clinic or preoperatively, humans are a suitable model for this stage of testing.
FIG. 57 illustrates an example process for training a machine learning diagnostic algorithm. Training the diagnostic machine learning algorithm requires data from the population that will ultimately be the end users (fig. 57). Importantly, the algorithm is only as accurate as the method used to establish the ground truth of the training data, in this case separating the non-healing resection group from the healing resection group. To this end, we have generated a standardized resection healing assessment system to track and classify outcomes. Since this outcome measure was created for research purposes, we can begin developing the algorithm and focus on the analysis. Machine learning algorithm development iterates in the following manner: initial judgment of accuracy, research to improve accuracy, and evaluation of the new accuracy. This preliminary feasibility study will provide evidence that combining microcirculation imaging with patient health indicators will succeed in a larger pivotal study.
This is a pilot clinical study design, enrolling 60 patients, to assess the accuracy of the DeepView Gen2 system in predicting initial healing of resection in patients with PAD, compared to the current standard of care.
The DeepView Gen2 imager optically acquires spectra and PPG signals from large areas of skin (tissue sections up to 15×20 cm). The device is well suited to studying the microcirculation of the skin of the lower limbs over large areas. A unique aspect of the device is that important patient health features can be integrated into the diagnostic algorithm to improve accuracy. The pilot study will identify useful patient health indicators to be confirmed in the pivotal study. As a major task in this study, we will confirm that including patient health indicators in the device's machine-learned diagnostic algorithm can improve accuracy over measuring the microcirculation alone. In this study, we assess the microcirculation at each conventional LOA in combination with the patient health characteristics affecting wound healing, and the extent to which this correlates with the patient's initial wound healing potential after resection.
The lower limb to be resected will be examined for each patient included in the study. Clinically relevant patient health information will be collected by the facility care provider. Measurements with our experimental imaging setup will be performed by hospital staff previously trained in the imaging test by Spectral MD.
The area of skin that will cover the residual limb will be classified as having positive or negative healing capacity by the DeepView Gen2 LOA algorithm. The technician performs the DeepView Gen2 analysis blinded to the clinical judgment of the site to be resected.
To obtain our true positive (+) and negative (-) outcomes, i.e., non-healing and healing targets, we use a standardized post-resection initial wound healing assessment (table 5). The assessment includes three categories: successful resection, successful resection with delayed healing, and non-healing. Successful resection is defined as healing within 30 days with complete granulation tissue formation and no additional resection required. Successful resection with delayed healing is defined as incomplete granulation tissue formation at 30 days with final healing within six months, without re-resection to a more proximal level. Finally, non-healing is characterized by the development of gangrene and/or necrosis, and/or the need for re-resection to a more proximal level. In addition, wounds deemed to require revascularization are considered non-healing.
TABLE 5 standardized wound healing evaluation
These healing assessments will be performed 30 days after surgery. For target patients with delayed healing, we will perform a second healing assessment six months after surgery. Target patients who have not healed at six months and have not undergone re-resection to a more proximal level will be classified in the non-healing group.
Fig. 58 shows a flow chart of an exemplary clinical study. DeepView imaging assessment (fig. 58): microcirculation data for each target will be collected by imaging the skin with the Spectral MD Gen2 device. Each scan takes approximately 30 seconds for each leg to be resected. We will image the leg, ankle, and foot areas according to the traditional surgical levels of resection for PAD patients, including: above-knee (AKA), below-knee (BKA), above-ankle (AAA), metatarsal, or toe. The area of skin selected for analysis is the flap that will cover the residual limb (fig. 59).
FIG. 59 shows a graphical illustration of tissue involved in a conventional resection step. The dashed line indicates the location of the skin incision and the red oval indicates the location of the skin that must be active for successful initial healing of the excision.
Important patient health information to be used in the diagnostic model will be collected by clinical staff at each clinical site. We do not collect any information beyond the standard of care. These indicators include, but are not limited to: indicators of diabetes control (e.g., HbA1c, glucose, insulin), smoking history, obesity (e.g., BMI or lean girth), nutrition (e.g., albumin, prealbumin, transferrin), infection (e.g., WBC, granulocyte status, body temperature, antibiotic usage), age, functional impairment, and important medications (e.g., glucocorticoids or chemotherapy). This information is added to the diagnostic algorithm by entering it into the software of the DeepView imaging device.
A machine learning algorithm that separates targets into non-healing (+ outcome) and healing (- outcome) groups will be developed based on the clinical features collected from individual patients. We initially include all of the features in the algorithm. Algorithm accuracy is then judged by ten rounds of cross-validation as follows: algorithm coefficients are first generated using a random 60% of the targets, after which the remaining 40% of the targets are classified by the trained classifier. The accuracy of the algorithm in classifying the held-out 40% of targets is calculated using standard sensitivity and specificity methods. This step is repeated 10 times, yielding a stable estimate of accuracy.
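The validation procedure described above (train on a random 60% of targets, test on the held-out 40%, repeat 10 times, report sensitivity and specificity) can be sketched on synthetic data as follows; the classifier and data are placeholders, not the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 5))
# Synthetic labels: + (1) = non-healing, - (0) = healing.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.7, n) > 0).astype(int)

sens, spec = [], []
# 10 repeats of a random 60/40 train/test split, stratified by outcome.
splitter = StratifiedShuffleSplit(n_splits=10, train_size=0.6, random_state=0)
for train_idx, test_idx in splitter.split(X, y):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    tn, fp, fn, tp = confusion_matrix(y[test_idx], clf.predict(X[test_idx])).ravel()
    sens.append(tp / (tp + fn))   # sensitivity for the non-healing (+) class
    spec.append(tn / (tn + fp))   # specificity

print(f"sensitivity {np.mean(sens):.2f} +/- {np.std(sens):.2f}")
print(f"specificity {np.mean(spec):.2f} +/- {np.std(spec):.2f}")
```

Averaging over repeated random splits, as here, gives the stable accuracy estimate the text refers to, at the cost of each target appearing in several test sets.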
FIG. 60 shows example steps for generating a classifier model for resection level. After establishing the initial accuracy, we begin developing the algorithm using a standard set of methods to improve accuracy (fig. 60). An important issue in this process is the trade-off between bias and variance, a problem encountered by most models, including ours at this stage: the algorithm fits the data of the current study group well but does not generalize to the broader population. To address this problem, we perform feature selection (Toward Integrating Feature Selection Algorithms for Classification and Clustering, Huan Liu and Lei Yu) to create a combination of microcirculation measurements and patient health data with high accuracy and minimal redundancy between variables (i.e., covariant information is eliminated from the model). At this stage, we also investigate a range of classifier models for classifying the data, including linear and quadratic discriminant analysis, decision trees, clustering, and neural networks.
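Forward feature selection of the kind cited above can be sketched as a greedy loop that adds, at each step, the feature giving the largest cross-validated accuracy gain and stops when no candidate helps; this naturally discards redundant (covariant) variables. The data, stopping threshold, and classifier below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 300
informative = rng.normal(size=(n, 2))
redundant = informative[:, :1] + rng.normal(0, 0.05, (n, 1))  # near-duplicate
noise = rng.normal(size=(n, 3))
X = np.hstack([informative, redundant, noise])
y = (informative.sum(axis=1) + rng.normal(0, 0.5, n) > 0).astype(int)

selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
while remaining:
    # Score each candidate feature when added to the current set.
    scores = {f: cross_val_score(LogisticRegression(),
                                 X[:, selected + [f]], y, cv=5).mean()
              for f in remaining}
    f, score = max(scores.items(), key=lambda kv: kv[1])
    if score <= best_score + 1e-3:   # stop when no meaningful gain
        break
    selected.append(f)
    remaining.remove(f)
    best_score = score

print("selected features:", sorted(selected), "cv accuracy: %.2f" % best_score)
```

Because a near-duplicate of an already-chosen feature adds almost no cross-validated accuracy, the loop tends to leave it out, which is the redundancy-elimination behavior the text describes.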
The criterion for success is that we must demonstrate that the device can predict initial healing of resection with an accuracy comparable to the existing standard of care (70-90%) and demonstrate a reasonable prospect of improving that accuracy in a large-scale clinical study.
Possible problems and solutions: a revascularization procedure is sometimes performed by the resecting surgeon, and this additional procedure may affect the diagnostic result. We will record these cases and account for them in the statistical analysis to determine whether these procedures interact with the diagnostic outcome.
Another potential problem is combining the delayed healing group with the healing group in our dichotomized device output. We may in fact find that there is a clear distinction between the delayed healing group and the healing group, in which case delayed healing can be used as a separate classification in the diagnostic output. Alternatively, the data for the delayed healing group may lie closer to the non-healing group and not be easily separable; in that case, we will include data from more proximal images in the algorithm. Even then, the clinical use of the device remains valuable as a tool to identify complications of resection, rather than simply to judge success or failure.
Differences in skin pigmentation may alter the measurements taken from a target. To overcome these differences, our method will include identification of healthy regions of the patient's tissue against which the DeepView measurements can be normalized.
Another potential problem is that normal resting blood flow to the skin may be seen in patients with PAD, possibly as a result of collateral vessel compensation. However, such patients are still expected to show an impaired response to exercise and to short-term ischemia. One easily implemented change to this study is to measure the DeepView signal after 3 minutes of ischemia induced by inflating a blood pressure cuff on the limb being measured. PAD is well known to prolong the time to 50% of the peak reactive hyperemia response, which can be measured via the same optical properties of tissue evaluated by DeepView.
Experimental design and methods - phase II
Phase II is a diagnostic clinical performance study that estimates the sensitivity and specificity of our device in predicting the likelihood of initial wound healing after initial resection in patients with PAD. We chose this group because, when the LOA is selected using the current clinical standard of care, it includes a re-resection rate of approximately 20%, providing a sufficient number of suboptimal wound healing outcomes. The primary endpoint is a measurement of the diagnostic clinical performance of the DeepView Gen2 device: how closely the quantitative DeepView Gen2 diagnosis predicts the actual wound healing outcome. We will classify wound healing after resection in PAD using the standardized wound healing assessment as the gold-standard method, as in the earlier study.
DeepView Gen2 images will be taken of the area of skin forming the flap at the distal end of the residual limb for each conventional resection level. The tissue of this region is chosen because it is critical to initial healing of the surgical site. Although traditional studies diagnosing the healing potential of resection measure only microvascular flow, the purpose of this study is to assess the accuracy of our DeepView Gen2 algorithm, which includes both microcirculation measurements and patient health indicators.
The sensitivity and specificity of DeepView Gen2 imaging in estimating the likelihood of successful resection will be determined against our standardized wound assessment.
This is a clinical study design, enrolling 354 patients, to assess the accuracy of the DeepView Gen2 system in predicting initial healing of resection in patients with PAD, compared to the current standard of care.
The DeepView Gen2 imager optically acquires spectra and PPG signals from large areas of skin (tissue sections up to 15×20 cm). The device is well suited to studying the microcirculation of the skin of the lower limbs over large areas. A unique aspect of the device is that important patient health features can be integrated into the diagnostic algorithm to improve accuracy. The pilot study will have identified the most useful patient health indicators to confirm in this pivotal study. As a major task in this study, we include in the device's machine learning diagnostic algorithm the patient health indicators identified as important in the pilot study. In this study, we assess the microcirculation at each conventional LOA in combination with the patient health characteristics affecting wound healing, and the extent to which this correlates with the patient's initial wound healing potential after resection.
Data collection: the lower limb to be resected will be examined for each patient included in the study. Clinically relevant patient health indicators will be collected by the facility care provider. Measurements with our experimental imaging setup will be performed by hospital staff previously trained in the imaging test by Spectral MD.
The area of skin that will cover the residual limb will be classified as having positive or negative healing capacity by the DeepView Gen2 LOA algorithm. The technician performs the DeepView Gen2 analysis blinded to the clinical judgment of the site to be resected.
To obtain our true positive (+) and negative (-) outcomes, i.e., non-healing and healing targets, we use a standardized post-resection initial wound healing assessment (table 5). The assessment includes three categories: successful resection, successful resection with delayed healing, and non-healing. Successful resection is defined as healing within 30 days with complete granulation tissue formation and no additional resection required. Successful resection with delayed healing is defined as incomplete granulation tissue formation at 30 days with final healing within six months, without re-resection to a more proximal level. Finally, non-healing is characterized by the development of gangrene and/or necrosis, and/or the need for re-resection to a more proximal level. In addition, wounds deemed to require revascularization are considered non-healing.
These healing assessments will be performed 30 days after surgery. For target patients with delayed healing, we will perform a second healing assessment six months after surgery. Target patients who have not healed at six months and have not undergone re-resection to a more proximal level will be classified in the non-healing group.
Fig. 61 shows a flow chart of an exemplary clinical study. Diagnosis of resection site healing was performed during imaging using the Spectral MD Gen2 imaging device. Each scan is performed for approximately 30 seconds for each leg to be excised. We will image the ankle and foot areas according to the traditional surgical method of PAD patient resection, including: supraknee (AKA), infraknee (BKA), supraankle (AAA), metatarsal, or toe. The area of skin was selected as the flat portion to cover the remaining portion of the resection for analysis (fig. 59).
Collecting patient health indicators: Important patient health information to be used in the diagnostic model is collected by clinical staff at each clinical site. We do not collect any information beyond the standard of care. These indicators, to be identified in the experimental study, are expected to include: measurements of diabetes control (e.g., HbA1c, glucose, insulin), smoking history, obesity (e.g., BMI or lean girth), nutrition (e.g., albumin, prealbumin, transferrin), infection (e.g., WBC, granulocyte status, body temperature, antibiotic usage), age, and important medications (e.g., glucocorticoids or chemotherapy). This information is added to the diagnostic algorithm by entering it into the software of the DeepView imaging device.
Data analysis and statistics: DeepView Gen2 imaging measurements at the five resection sites of the affected limb will be evaluated to determine wound healing potential. On a per-limb basis, we will judge the total healing score and compare these measurements to the actual resection success of the limb to obtain the overall accuracy of the assessment. This provides an initial outcome measure of the receiver operating characteristic (ROC), i.e., our sensitivity and specificity.
For our initial outcome measure of graded wound healing, we compared the DeepView Gen2 diagnosis at the physician-determined resection location with the resection success determined by the standardized wound healing assessment. This analysis will yield a receiver operating characteristic (ROC) curve for the DeepView diagnostic algorithm.
Efficacy analysis: The clinical trial is designed to establish the sensitivity and specificity of the device and to verify that these values are superior to clinical judgment in selecting the LOA. We have set the objectives of the study as follows: the sensitivity and specificity of the DeepView Gen2 system in diagnosing the LOA should reach 95%, exceeding the 70-90% accuracy of current clinical judgment. To establish the sample size, we first need the positive predictive value (PPV) and negative predictive value (NPV), which requires that the prevalence of the condition be known. We determined that the prevalence of re-excision to a more proximal level in the population examined by DeepView Gen2 (patients >18 years old requiring initial excision of the affected limb due to diabetes) is close to 20% (reference). On this basis, the expected positive predictive value (PPVDeepView) was 97% and the expected negative predictive value (NPVDeepView) was 93%.
The analysis to verify the sample sizes assumed below was performed using the method proposed by Steinberg et al.; see "Sample size for positive and negative predictive value in diagnostic research using case-control designs," Biostatistics, vol. 10, no. 1, pp. 94-105, 2009. The significance level (α) is 0.05, and the desired power (1 − β) is 0.80.
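As a hedged illustration of how predictive values relate to sensitivity, specificity, and prevalence, the sketch below applies the standard Bayes computation. The study's quoted PPV/NPV values were derived with the case-control method of Steinberg et al., so they need not match this naive estimate; the function name and printout are illustrative only.

```python
# Hypothetical illustration: predictive values from sensitivity, specificity,
# and prevalence via Bayes' rule. The study's quoted PPV/NPV were derived with
# the case-control method of Steinberg et al. (2009), so these naive estimates
# need not match the values stated in the text.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) for a test applied at the given disease prevalence."""
    tp = sensitivity * prevalence              # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Device targets from the text: 95% sensitivity/specificity, ~20% prevalence
ppv, npv = predictive_values(0.95, 0.95, 0.20)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")
```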
For PPV:
H0: PPVDeepView = PPVClinical judgment
H1: PPVDeepView > PPVClinical judgment
For NPV:
H0: NPVDeepView = NPVClinical judgment
H1: NPVDeepView > NPVClinical judgment
These results indicate that, according to our healing assessment (fig. 62), to reject these null hypotheses (H0) we must enroll a total of 236 lower limbs, of which 1/5 are non-healing limbs (+ outcome). However, because we do not know each target's outcome before enrollment, we cannot pre-select the ratio to be 1/5. Thus, the ratio may differ. If the ratio is low, with 1/10 of limbs non-healing (+), the study requires approximately 450 limbs in total; if the ratio is high, with 3/5 of limbs non-healing (+), we need only 124 limbs in total.
FIG. 62 shows example statistical sample size analysis results. The total sample size is based on the ratio of resection non-healing (+) to healing (-) in the study group. The significance level (α) was 0.05, and the desired power (1 − β) was 0.80.
To account for possible variation in the ratio of positive to negative targets, we will enroll approximately 50% more than the original estimate of 236. We therefore established a total sample size of 354 targets. We believe this number is attainable because this is a minimal-risk study and busy clinics perform approximately 100 resections per year. We will monitor the study data as they are collected, tracking the total number of limbs studied and the ratio of unsuccessful (+ outcome) to successful (- outcome) amputations, and stop the study when the appropriate ratio and total sample size are achieved.
The predicted conclusion is: to determine how well the DeepView output performance associated with initial wound healing was, we compared the DeepView results to a standardized healing assessment, which classified the target as either a healing or non-healing group. Based on this comparison, we believe that there is a correlation that supports high sensitivity and specificity in predicting initial healing after resection.
Criteria for success: The ROC must contain decision thresholds that make the sensitivity and specificity greater than the requirement established by the existing standard of care (70-90% accuracy).
Possible problems and solutions: It is difficult to obtain a sample size large enough to weigh the importance of all non-imaging data (patient health indicators) used in the diagnostic algorithm. For example, diabetes is an important clinical feature, and we may find that all patients in our study group have diabetes, or that the proportion of patients without diabetes is too small to carry sufficient weight for studying its effect. In that case, the influence of this comorbidity cannot be accounted for in our diagnostic algorithm. We anticipate that this patient study group will have many similarities in overall health, and some of these variables may be measured at different levels rather than simply dichotomized. For example, diabetic targets show a range of control as measured by HbA1c and blood glucose tests. For the infeasible cases, we will consider continuing to collect these data in post-market analysis so that we can observe more resection samples.
Summary of Performance examples
As shown in the figures described below, experimental data show exemplary advantages of fusing PPG and MSI features into one algorithm.
In the following discussion, the feature sets include photoplethysmography (PPG), multispectral imaging (MSI), and real image (RI) features. Exemplary methods include deriving correctly labeled data, training a classification algorithm with the three feature sets alone or in combination, classifying images, and reporting errors to compare classifiers built from different feature-set compositions. These features have been developed and can be used for classification, and are divided into three categories: PPG, MSI, and RI. For the following example, a Quadratic Discriminant Analysis (QDA) classifier is trained with a variety of feature sets. Feature sets are combined until all 33 features are included in the model. Classification errors are used to compare the developed classifiers (i.e., classifiers with distinct feature sets).
Fig. 63B-63F show exemplary reference images, correctly labeled data images, classification results, error images for various classifiers, as will be described in detail below. Fig. 63A shows color markings for the example results of fig. 63B-63F, where blue indicates healthy tissue, green indicates excised tissue, orange indicates a light burn, and red indicates a burn.
The DeepView classifier has the following 14 features:
1. DeepView output
2. Maximum degree of dispersion
3. Standard deviation from mean
4. Number of intersections
5. Small neighborhood SNR
6. Improved SNR
7. Standardized lighting
8. Standardized DeepView images
9. Standard deviation
10. Deflection
11. Kurtosis
12. X gradient
13. Y gradient
14. Standard deviation of gradient
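Several of the listed features (standard deviation, deflection/skewness, kurtosis, and X/Y gradients) are per-pixel statistics computed over a local image neighborhood. The sketch below shows one plausible way to build such feature maps with NumPy/SciPy; the window size and the use of `generic_filter` are assumptions, not the device's actual implementation.

```python
# Illustrative sketch (not the device's implementation): computing several of
# the listed statistical features -- standard deviation, skewness ("deflection"),
# kurtosis, and X/Y gradients -- over a local window of a grayscale image.
# The window size and the test image are hypothetical.
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.ndimage import generic_filter

def local_feature_maps(image: np.ndarray, window: int = 5):
    """Return per-pixel feature maps over a window x window neighborhood."""
    size = (window, window)
    std_map = generic_filter(image, np.std, size=size)
    skew_map = generic_filter(image, skew, size=size)       # "deflection"
    kurt_map = generic_filter(image, kurtosis, size=size)
    gy, gx = np.gradient(image)                              # Y and X gradients
    grad_std = generic_filter(gx, np.std, size=size)         # std of gradient
    return {"std": std_map, "skew": skew_map, "kurtosis": kurt_map,
            "x_gradient": gx, "y_gradient": gy, "x_grad_std": grad_std}

rng = np.random.default_rng(0)
feats = local_feature_maps(rng.random((32, 32)))
print({k: v.shape for k, v in feats.items()})
```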
FIG. 63B shows an exemplary reference image, correctly labeled data image, classification result, error image of the DeepView classifier. The total error rate of the DeepView classifier was 45% as indicated by the percentage of yellow in the error image (or white/lighter color in the grayscale rendition of FIG. 63B).
The Real Image classifier has the following 11 features:
1. Real image
2. Standardized real image
3. Deflection
4. Kurtosis
5. X gradient
6. Y gradient
7. Standard deviation of X gradient
8. Small neighborhood range
9. Normalized small neighborhood range
10. Large neighborhood range
11. Normalized large neighborhood range
Fig. 63C shows an exemplary reference Image, correct label data Image, classification result, error Image of the Real Image classifier. As shown by the percentage of yellow in the error Image (or white/lighter color in the grayscale rendition of fig. 63C), the total error rate of the Real Image classifier is 16%.
FIG. 63D shows an exemplary reference Image, correct label data Image, classification result, error Image combined by the DeepView classifier and the Real Image classifier. The DeepView/Real Image combination classifier uses 25 features, including the 14 DeepView features and the 11 Real Image features described above. The total error rate for the DeepView/Real Image combination classifier was 19% as indicated by the percentage of yellow in the error Image (or white/lighter color in the grayscale rendition of FIG. 63D).
The MSI classifier has the following 8 features:
1. MSI λ1
2. MSI λ2
3. MSI λ3
4. MSI λ4
5. MSI λ5
6. MSI λ6
7. MSI λ7
8. MSI λ8
FIG. 63E shows an exemplary reference image, correctly labeled data image, classification result, and error image for the MSI classifier. As indicated by the percentage of yellow in the error image (or white/lighter color in the grayscale rendition of fig. 63E), the total error rate of the MSI classifier was 3.4%.
FIG. 63F shows the combined reference Image, correct label data Image, classification result, error Image of the DeepView classifier, Real Image classifier, MSI classifier. The DeepView/Real Image/MSI combined classifier uses 33 features, including the 14 DeepView features, 11 Real Image features, and 8 MSI features described above. The combined DeepView/Real Image/MSI classifier had a total error rate of 2.7% as indicated by the percentage of yellow in the error Image (or white/lighter color in the grayscale rendition of FIG. 63F).
Fig. 64A and 64B show a comparison of feature configurations for the different classification techniques. Fig. 64A shows an error (e) comparison for the DVO (DeepView) classifier, RI classifier, DVO + RI classifier, MSI classifier, and DVO + RI + MSI classifier, where e = (total misclassifications)/(total classifications). As shown, the error of the DVO + RI + MSI classifier is 71.7% lower than that of the DVO + RI classifier.
FIG. 64B shows the error (e) comparison of the DVO (DeepView) classifier, RI classifier, DVO + RI classifier, MSI classifier, DVO + RI + MSI classifier at the study time point.
As shown in the data of fig. 63B-64B, the error decreases more significantly as more features are added. The feature groups may be ordered by importance, in one example, as: (1) MSI, (2) RI, (3) PPG. Some embodiments of the classification algorithm are reusable, meaning that the algorithm can be trained with a first target and then used to classify a wound of a second target.
Exemplary intraoperative burn surgical imaging and Signal processing overview
Burn debridement is a challenging technique requiring specific skills to identify the area requiring excision and the appropriate depth of excision. Machine learning tools have been developed to assist physicians by providing quantitative assessment of burned tissue. In the porcine model provided by the invention, three noninvasive optical imaging techniques can distinguish four tissue types during serial burn debridement: healthy skin, active wound bed, deep burn, and superficial burn. Combining the three techniques significantly improves the accuracy of tissue classification.
I. Introduction
The present invention proposes signal processing techniques developed for an intraoperative burn surgeon's aid (DeepView burn imaging system, Spectral MD, Dallas, TX). The active wound bed needs to be exposed for skin grafting and must be distinguished from healthy skin, deep burns, and superficial burns. The input metrics are based on three main feature sets: photoplethysmography (PPG) features, which identify pulsatile blood flow in the skin microcirculation; real image (RI) features, taken from black-and-white photographs of the wounds; and multispectral imaging (MSI) features, which collect tissue reflectance spectra at significant visible and infrared wavelengths. Tissue classification is performed using Quadratic Discriminant Analysis (QDA), a commonly used machine learning technique. The system was tested on sample wounds in swine using combinations of input features based on the three available imaging techniques. The test results show that increasing the number of features improves classifier performance.
II. Experimental Description
The experimental setup for the pig burn model has been described previously. Briefly, an imager (Nocturn XL, Photonis USA) was fitted with a filter wheel holding eight unique optical bandpass filters (400-. An additional LED array (SFH4740, OSRAM) illuminates the field of view. PPG imaging, RI, and MSI were performed simultaneously on four adult haddock pigs that were anesthetized and prepared for dorsal burns. A spring-loaded copper rod (3.6 cm diameter) was heated to 100 °C and pressed against the skin for 45 seconds per wound, producing six burns per animal. An electric dermatome (Zimmer, model no. 8821-06) was set to remove a 1 mm-deep (6 cm-wide) layer with each successive pass at each burn until the active wound bed was exposed. Punctate bleeding, an indication of sufficient exposure of the active wound, was visible after three layers had been removed for all burns.
With all three techniques, imaging was performed before the burn (healthy skin), just after the burn (acute burn), after each layer was excised. The initial resection is taken 20 minutes after the burn, with a maximum of 80 minutes between the initial burn and the final resection.
III. Technical Methods
A. PPG output pre-processing
Fig. 65 shows an example block diagram of PPG output pre-processing. Each PPG image is generated from 800 frames of burn images acquired over 27 seconds. The PPG signal is defined in time per pixel. The purpose of this pre-processing is to obtain physiological information related to the target's heart rate, as well as initial features for the classification step. As shown in fig. 65, the preprocessing algorithm operates on the time-domain signal (one per pixel), as explained below.
An initial spatial average is calculated, after which deconvolution is performed to remove the low-frequency, large-amplitude component corresponding to the artificial ventilation of the anesthetized pigs. Thereafter, a linearity correction of the signal is performed, followed by band-pass filtering in the frequency range where the heart rate is expected. A fast Fourier transform is applied to each pixel's time-domain signal to compute the corresponding frequency-domain signal.
For each pixel, four indices are obtained from these frequency-domain signals: (1) signal-to-noise ratio (SNR), (2) maximum degree of dispersion, (3) number of standard deviations from the mean, and (4) number of signal zero crossings. These four indices are used to estimate the probability that each pixel of the image contains a blood vessel. The vessel probability indicates how useful a pixel is for providing information about the heart rate. For pixels with vessel probability > 0.9, the heart-rate value corresponding to the maximum of the frequency signal is stored. The most repeated value is selected as the true heart rate of the pig at the current step. From this value, an improved SNR criterion can be calculated. Finally, depending on the degree of difference from the true heart rate, a model is defined such that pixels whose calculated heart rate matches the true heart rate are set to 1, with the remaining pixels set to values between 0 and 1. The PPG output indicator is the result of the improved SNR and this model.
For each pixel studied in the image, these six indices together give physiological information about the blood flow approximately 1 cm below the surface of the target's body.
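The preprocessing chain described above (spatial averaging, removal of the ventilation component, linearity correction, band-pass filtering around the expected heart rate, and FFT) can be sketched per pixel as follows. The frame rate is inferred from the 800 frames / 27 s acquisition; the band-pass corner frequencies are hypothetical, and the ventilation-component deconvolution is approximated here by the band-pass filter itself.

```python
# Minimal sketch of the per-pixel PPG preprocessing described above, assuming a
# frame rate of ~29.6 Hz (800 frames / 27 s). The band-pass corner frequencies
# (0.7-3.5 Hz, i.e. ~42-210 bpm) are hypothetical, not the device's values, and
# the slow ventilation component is simply rejected by the band-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt, detrend

FS = 800 / 27.0  # frames per second

def preprocess_pixel(signal: np.ndarray):
    """Detrend and band-pass one pixel's time series; return its spectrum."""
    x = detrend(signal)                          # linearity correction
    b, a = butter(3, [0.7, 3.5], btype="band", fs=FS)
    x = filtfilt(b, a, x)                        # keep the heart-rate band
    spectrum = np.abs(np.fft.rfft(x))            # frequency-domain signal
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    return freqs, spectrum

# Synthetic pixel: 1.5 Hz (90 bpm) pulse plus a slow ventilation-like drift
t = np.arange(800) / FS
sig = np.sin(2 * np.pi * 1.5 * t) + 2 * np.sin(2 * np.pi * 0.25 * t)
freqs, spec = preprocess_pixel(sig)
print(f"dominant frequency: {freqs[np.argmax(spec)]:.2f} Hz")
```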
B. Feature description
In the analysis, three main feature sets are used to determine the attributes of each pixel of the image. There are a total of 33 features, distributed as follows: 14 features processed based on PPG output; 11 features based on RI; based on the 8 features of the MSI, one for each wavelength of light. The following table illustrates the characteristics.
PPG output                       Real Image                           Multispectral images
PPG output image                 Real image                           MSI λ1
Maximum degree of dispersion     Standardized real image              MSI λ2
Standard deviation from mean     Deflection                           MSI λ3
Number of intersections          Kurtosis                             MSI λ4
SNR                              X gradient                           MSI λ5
Improved SNR                     Y gradient                           MSI λ6
Standardized lighting            Standard deviation of X gradient     MSI λ7
Standardized PPG images          Small neighborhood range             MSI λ8
Standard deviation               Normalized small neighborhood range
Deflection                       Large neighborhood range
Kurtosis                         Normalized large neighborhood range
X gradient
Y gradient
Standard deviation of gradient
C. Correctly labeled data images
In one embodiment, comparison data may be provided to train the classification algorithm. In one example, a database of correctly labeled data (GT) images is generated for all cases in the study. In total, 60 cases are available for one pig: six lesion sites (three on each side of the pig's back) at each of the following stages: before burn, after burn, first excision, second excision, and third excision, with two images collected at each stage. To generate training data, each burn site is analyzed to determine the condition of each region of tissue. These data are used to classify the image data into categories corresponding to the respective tissue types. The GT matrix defines the different kinds of tissue in each image. Some indeterminate pixel sets are discarded. Since this pre-specification of the tissue represents the ideal output of the classifier, it is used in training the classification algorithm.
D. Classifier
The classifier used in this test is a Quadratic Discriminant Analysis (QDA) algorithm, a commonly used supervised classification algorithm in machine learning. QDA is trained under the assumption that the data of each class follow a Gaussian distribution in an n-dimensional space, where n is the number of features used. The algorithm seeks the class k that maximizes the conditional probability that a given pixel x belongs to that class,

$P(G = k \mid X = x)$

Mathematically, this decision is given by the following expression:

$G(x) = \arg\max_k \delta_k(x)$
where $\delta_k(x)$ is the quadratic discriminant function, defined as:

$\delta_k(x) = \ln\left(f_k(x)\,\pi_k\right) = -\tfrac{1}{2}\ln|\Sigma_k| - \tfrac{1}{2}(x-\mu_k)^T \Sigma_k^{-1}(x-\mu_k) + \ln\pi_k$

The index k indicates the tissue class; for each class, $f_k(x)$ is the probability density function of an n-dimensional Gaussian distribution, $\mu_k$ and $\Sigma_k$ are its mean vector and covariance matrix, and $\pi_k$ is the prior probability of class k. $\mu_k$ and $\Sigma_k$ are calculated in a training phase using a set of N known pixels (x, k), where the n-dimensional vector x holds the value of each feature and k is the class to which the pixel belongs. For each tissue class k, $\delta_k$ is obtained, representing the likelihood that an unknown pixel x belongs to class k. The decision boundary between classes k and l is the set of pixels satisfying $\{x : \delta_k(x) = \delta_l(x)\}$; because $\delta_k(x)$ is quadratic in x, the boundary is a quadratic function of x.
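The quadratic discriminant above can be implemented directly. The sketch below fits per-class Gaussian parameters from labeled samples and evaluates δ_k(x) term by term; the synthetic two-class data are illustrative only, not the device's pixel features.

```python
# Direct NumPy sketch of the quadratic discriminant function delta_k(x)
# defined above; class statistics are fit from labeled samples, as in the
# training phase described in the text. The data here are synthetic.
import numpy as np

def fit_qda(X: np.ndarray, y: np.ndarray):
    """Estimate (mu_k, Sigma_k, pi_k) for each class from labeled samples."""
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        params[k] = (Xk.mean(axis=0), np.cov(Xk, rowvar=False), len(Xk) / len(X))
    return params

def delta(x: np.ndarray, mu, Sigma, pi):
    """delta_k(x) = -1/2 ln|Sigma| - 1/2 (x-mu)^T Sigma^-1 (x-mu) + ln pi."""
    d = x - mu
    return (-0.5 * np.linalg.slogdet(Sigma)[1]
            - 0.5 * d @ np.linalg.solve(Sigma, d) + np.log(pi))

def classify(x, params):
    """Assign x to the class with the largest discriminant value."""
    return max(params, key=lambda k: delta(x, *params[k]))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
params = fit_qda(X, y)
print(classify(np.array([4.0, 4.0]), params))  # prints 1
```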
IV. Results
FIG. 66 shows examples of locations 1A, 1B, 2A, 2B, 3A, 3B used for the training scenarios, classification scenarios, and cross-validation. Of all available data, 2/3 was used to train the classifier; this training data corresponds to the images acquired at positions 1A, 1B, 2A, and 2B at all stages. As shown in fig. 66, cross-validation is performed by classifying the pixels of the image acquired second at location 3B. The study was repeated five times, each repetition using a different set of features: (i) only the 14 PPG-output features, (ii) only the 11 RI features, (iii) the 25 combined PPG-output and RI features, (iv) only the 8 MSI features, and (v) all 33 combined PPG, RI, and MSI features.
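The five-repetition protocol of training on feature subsets and validating on held-out data can be scripted as below, here with scikit-learn's QDA and synthetic stand-in data; the column ranges assigned to the PPG/RI/MSI subsets mirror the 14/11/8 split described above but are otherwise hypothetical.

```python
# Sketch of the repeated-training protocol: retrain a QDA classifier on each
# feature subset and score it on held-out data. The data and the informative
# columns are synthetic stand-ins for the PPG/RI/MSI feature columns.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(2)
n_train, n_test, n_features = 600, 200, 33
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.integers(0, 4, n_train)          # 4 tissue classes
X_train[:, 25:33] += y_train[:, None]          # make "MSI" columns informative
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.integers(0, 4, n_test)
X_test[:, 25:33] += y_test[:, None]

subsets = {"PPG": slice(0, 14), "RI": slice(14, 25),
           "MSI": slice(25, 33), "PPG+RI+MSI": slice(0, 33)}
for name, cols in subsets.items():
    qda = QuadraticDiscriminantAnalysis().fit(X_train[:, cols], y_train)
    err = 1.0 - qda.score(X_test[:, cols], y_test)
    print(f"{name:12s} error = {err:.3f}")
```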
Fig. 67A to 67L show example correct label data images, real images, classification results of the first ablation step in five different classification methods. Fig. 67A shows correct labeling data (GT), fig. 67B shows RGB real images, fig. 67C shows classification using PPG features, fig. 67D shows classification error rate using PPG features (17.6329%), fig. 67E shows classification using RI features, fig. 67F shows classification error rate using RI features (8.639%), fig. 67G shows classification using PPG + RI features, fig. 67H shows classification error rate using PPG + RI features (9.5821%), fig. 67I shows classification using MSI features, fig. 67J shows classification error rate using MSI features (5.0958%), fig. 67K shows classification using PPG + RI + MSI features, and fig. 67L shows classification error rate using PPG + RI + MSI features (3.694%). The color rules of fig. 67A to 67L are as follows:
healthy skin: blue color
Active wound surface: green colour
Superficial burn: orange colour
Deep burns: brown colour
Based on the results of the classifier, a confusion matrix is built for each iteration. FIG. 68 shows a matrix built based on using all 33 feature classifications. The parameters included in the confusion matrix are defined as follows:
R: the reproduction ratio, defined as the probability that a pixel classified as discriminant classification belongs to the actual classification, P (discriminant classification/actual classification). Also known as sensitivity.
R: the recognition rate, defined as the probability that a pixel belonging to the actual classification is classified as discriminant classification, P (actual classification/discriminant classification). Also known as precision.
C: the combination ratio, defined as the probability that a pixel belongs to a discriminant classification and belongs to an actual classification, P (actual classification \ discriminant classification).
E: an evaluation index, defined as the difference between the reproduction rate and the recognition rate.
The recall and recognition rates take values between 0 and 1, indicating low to high performance, respectively. The optimal result is a confusion matrix whose diagonal values are 1 and whose off-diagonal values are 0. The sum of all combination-ratio values equals 1. Finally, the evaluation index should be close to 0 in all elements of the confusion matrix.
Based on these confusion matrices, the accuracy $A_i$ of each class can be defined as the geometric mean of the recall and recognition rates, as follows:

$A_i = \sqrt{R_i \cdot P_i}$
where the subscript i denotes the class. This accuracy metric rewards a high combination of sensitivity and precision for each tissue and penalizes cases where the two parameters differ greatly. The overall accuracy A of the classifier is the arithmetic mean of the N class accuracies:

$A = \frac{1}{N}\sum_{i=1}^{N} A_i$
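A minimal sketch of these accuracy metrics, assuming a confusion matrix whose rows are actual classes and columns are discriminant classes; the matrix values below are hypothetical, not the study's measured results.

```python
# Sketch of the accuracy metrics defined above: per-class accuracy A_i is the
# geometric mean of recall R_i and precision P_i taken from a confusion
# matrix, and the overall accuracy A is their arithmetic mean. The matrix
# values here are hypothetical.
import numpy as np

def class_accuracies(cm: np.ndarray):
    """cm[i, j] = pixels of actual class i assigned to class j."""
    recall = np.diag(cm) / cm.sum(axis=1)     # R_i, a.k.a. sensitivity
    precision = np.diag(cm) / cm.sum(axis=0)  # P_i, a.k.a. precision
    a_i = np.sqrt(recall * precision)         # geometric mean per class
    return a_i, a_i.mean()                    # overall accuracy A

cm = np.array([[90, 5, 3, 2],
               [4, 85, 6, 5],
               [2, 7, 88, 3],
               [1, 4, 2, 93]])
per_class, overall = class_accuracies(cm)
print(np.round(per_class, 3), round(overall, 3))
```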
For each of the five repeated classifications, the accuracy of each class and the overall accuracy of the trial for the selected training feature set are given in the accompanying table (rendered as an image in the original document).
FIG. 69 plots the accuracy comparison for each tissue class and the overall accuracy when the displayed feature sets are used for classification. Adding the MSI features greatly improves the accuracy of the classification model.
V. Conclusion
It has been shown that this imaging system can provide information that distinguishes the active wound bed within burned tissue. This task was successfully completed using the QDA model. The present invention has shown how cross-validation performance improves as the number of features increases, and in particular how adding the MSI features significantly improves the accuracy of tissue classification.
Example embodiment: overview of wavelength ranges for MSI
In some embodiments, the multispectral images described herein are acquired via an optical cable having both light emitters and light detectors at the same end of the cable. Previously used camera systems selected about eight separate wavelengths; in contrast, the light emitter here is capable of emitting about 1000 different wavelengths of light between 400 nm and 1100 nm, providing a smooth range of target illumination across wavelengths. In some embodiments, the target is sequentially illuminated with each wavelength in a defined range, for example, wavelengths between 400 nm and 500 nm (e.g., 400 nm, 425 nm, 450 nm, 475 nm, 500 nm), wavelengths between 720 nm and 1000 nm (e.g., 720 nm, 750 nm, 775 nm, 800 nm, 825 nm, 850 nm, 875 nm, 900 nm, 925 nm, 950 nm, 975 nm, 1000 nm), or any range defined by any two of the above wavelengths, to produce one or more images at each wavelength.
Fig. 70A, 70B, 71 illustrate example fiber optic systems that may be used to obtain the image data described herein. As shown in fig. 70A, the fiber optic probe 7000 includes a plurality of light emitting fibers 7005, which light emitting fibers 7005 surround a light collecting fiber 7010. Each light-emitting fiber 7005 can illuminate one of multiple overlapping regions 7015, and light emitted by the light-emitting fibers 7005 is reflected from the tissue of the target before being collected by the light-collecting fibers 7010 from the uniformly illuminated region 7020. In some embodiments, the light-emitting fibers can be controlled to emit one of 1000 different wavelengths between 400nm and 1100nm in sequence, and the signals received by the light-collecting fibers 7010 can be used to generate an image of the illuminated tissue at the emission wavelength.
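A hypothetical capture loop for this sequential-illumination scheme might look as follows; the `set_wavelength` and `capture_frame` functions are placeholders for a device driver that the source does not specify, and the frame shape is arbitrary.

```python
# Hypothetical capture loop for the sequential-illumination scheme described
# above: ~1000 emission wavelengths spanning 400-1100 nm, one image per
# wavelength. `set_wavelength` and `capture_frame` are stand-ins for a real
# device driver, which the source does not specify.
import numpy as np

def set_wavelength(nm: float):          # placeholder hardware call
    pass

def capture_frame(shape=(8, 8)):        # placeholder detector read-out
    return np.zeros(shape)

def sweep(start_nm=400.0, stop_nm=1100.0, steps=1000):
    wavelengths = np.linspace(start_nm, stop_nm, steps)
    cube = []
    for nm in wavelengths:
        set_wavelength(nm)              # illuminate the target at one band
        cube.append(capture_frame())    # record the reflected-light image
    return wavelengths, np.stack(cube)  # hyperspectral cube: (steps, H, W)

wl, cube = sweep()
print(wl[0], wl[-1], cube.shape)
```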
In some embodiments, probe 7000 is a fiber optic spectrophotometer equipped with a coaxial light source for reflectance and backscatter measurements. The probe may be arranged to block ambient light with a cover (not shown) and then image the tissue using only the emitted wavelengths, in a manner that provides a higher degree of accuracy in the classification than if the tissue were illuminated with both ambient and selected emitted wavelengths.
As shown in fig. 71, probe 7100 includes a first fiber optic cable 7105, the first fiber optic cable 7105 having a light emitting and detecting end 7110. The light emitting and detecting end 7110 comprises a plurality of light emitting fibers 7115, and the light emitting fibers 7115 surround a light collecting fiber 7125. The light emitting fibers 7115 pass through the first fiber optic cable 7105 and split into a second fiber optic cable 7140; a cross-section 7145 of the second fiber optic cable 7140 is shown including light emitting fibers 7115 surrounding core 7120. The multi-fiber second fiber optic cable 7140 may be connected to a light source, providing light of a desired wavelength to the light emitting and detecting end 7110 of the first fiber optic cable 7105 via the second fiber optic cable 7140. Light detecting fiber 7125 passes through the first optical cable 7105 and splits into a third optical cable 7130, with section 7135 of the third optical cable 7130 shown including only the light detecting fiber 7125. The single-fiber third optical cable 7130 provides the signal from the light detecting fiber 7125 to an image sensor (e.g., a CMOS or CCD image sensor) for acquiring image data, or to a spectrometer. The size of the fiber core is in the range of 200 μm to 600 μm, such as 200 μm, 250 μm, 300 μm, 350 μm, 400 μm, 450 μm, 500 μm, 550 μm, 600 μm, or within a range defined by any two of the above values.
Fig. 72 shows five example study time points and multiple probe positions used to acquire Diffuse Reflectance Spectroscopy (DRS) data in the visible and Near Infrared (NIR) ranges. The first study time point 7200A corresponds to a pre-burn situation, the second study time point 7200B corresponds to a post-burn situation where the target has two burns 7210A, 7210B, the third time point 7200C corresponds to a situation where the target is after the first ablation, the fourth time point 7200D corresponds to a situation where the target is after the second ablation, and the fifth time point 7200E corresponds to a situation where the target is after the third ablation. Each excision removed a layer of skin of approximately 1.0mm, totaling 3.0mm in depth. Fig. 72 also shows three healthy skin probe positions 7205A, 7205B, 7205C, one burn probe position 7215, three wound (within the excised skin area) probe positions 7220A, 7220B, 7220C.
The experimental configuration shown in fig. 72 used a total of 12 burn areas for data acquisition. A probe similar to that shown in figs. 70A, 70B, and 71 was positioned about 1.0 cm from the skin surface; the measured skin area is circular with a radius of about 0.5 cm, and ambient light is blocked at the measurement location. A total of 76 DRS measurements were performed: at probe position 7215, one measurement of the partial-thickness burn at each depth; at probe locations 7205A, 7205B, 7205C, three measurements of the healthy skin adjacent to the burn; and at probe positions 7220A, 7220B, 7220C, three measurements of the healthy wound tissue after skin excision.
In the techniques employed for treatment of patients described herein, probe positions similar to those shown in the case of 7200B can be used for initial tissue classification, e.g., to classify burned and healthy tissue, and in some embodiments, to identify the degree of burn in burned tissue. In other embodiments, more probe locations may be used, for example, to classify the severity of burns in different areas. Based on the classification, the physician or an automated system decides on the treatment plan, including several resections of the area surrounding the burned tissue to facilitate recovery.
In the techniques used to treat patients described herein, probe positions similar to those shown in the 7200C-7200E cases can be used for tissue classification during treatment, e.g., to identify burned tissue and excised/debrided tissue, and in some embodiments, to identify the degree of burn in the burned tissue. In other embodiments, more probe locations may be used, for example, to classify the severity of burns in different areas. As shown in cases 7200C-7200E, the burned tissue becomes less visible as the tissue is resected to deeper levels. This corresponds to reduced severity of the burned tissue, which is detectable by collecting data from the probe at the burned portion of tissue and at the excised portion of tissue. Based on the classification of burn severity at each resection, a physician or automated system decides a treatment plan, including, if necessary, several additional resections of the area surrounding the burned tissue to facilitate recovery.
The collected data show that, during burn excision surgery, diffuse reflectance in the visible and NIR light ranges differs significantly between burned and healthy tissue, and that the DRS carries sufficient information to distinguish the three clinically important tissue types. DRS data were collected at each of the test probe locations, at defined time points, for the three tissue types present in the burn excision animal model: deep partial-thickness burn, healthy intact skin, and exposed viable underlying wound tissue. The clear separation between the DRS of these tissue types is apparent from the resulting data, indicating that the visible and NIR spectral regions are effective for identifying these tissues during surgery.
Fig. 73 shows a graph 7300 of the average diffuse reflectance spectra of burned tissue 7305, healthy skin 7310, and wound tissue 7315 as a function of wavelength, between 400 nm and 1100 nm. Higher reflectance values correspond to brighter reflection from the tissue. As shown by the peaks of the respective spectral curves 7305, 7310, 7315, the DRS of burn, healthy skin, and wound tissue all show their highest reflectance at about 625 nm, with secondary peaks at 525 nm and 575 nm for all three tissue types. Burned and healthy tissue reflect the most light, while wound tissue reflects the least. This is expected because, in the test configuration, the skin of the target pig had no dark pigmentation and the burn produced a lighter color on the skin surface. These peaks may shift for targets with different skin pigmentation. The low reflectance of the wound tissue is due to blood present at the surface exposed by surgical excision, supplied by capillaries ruptured during the procedure.
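The averaging and peak-location steps described above can be sketched as follows. This is a minimal illustration on synthetic spectra, not the measured data of graph 7300; the function names and the Gaussian test spectrum are assumptions introduced for this example.

```python
import numpy as np

def mean_spectrum(measurements):
    """Average several DRS measurements (rows) into one mean spectrum."""
    return np.asarray(measurements).mean(axis=0)

def peak_wavelengths(wavelengths, spectrum, n_peaks=3):
    """Return wavelengths of local reflectance maxima, strongest first.

    A sample is a local maximum if it strictly exceeds both neighbors.
    """
    s = np.asarray(spectrum)
    interior = np.arange(1, len(s) - 1)
    is_peak = (s[interior] > s[interior - 1]) & (s[interior] > s[interior + 1])
    peaks = interior[is_peak]
    order = np.argsort(s[peaks])[::-1]  # sort peaks by reflectance, descending
    return [int(wavelengths[i]) for i in peaks[order][:n_peaks]]

# Synthetic spectrum with maxima near 525, 575 and 625 nm, mimicking the
# qualitative shape of the curves in graph 7300 (illustrative values only).
wl = np.arange(400, 1101)
spec = (0.50 * np.exp(-((wl - 625) / 30.0) ** 2)
        + 0.30 * np.exp(-((wl - 575) / 15.0) ** 2)
        + 0.25 * np.exp(-((wl - 525) / 15.0) ** 2))
print(peak_wavelengths(wl, spec))  # strongest peak at 625 nm
```

In practice the spectra are noisy, so a real implementation would smooth the curves or require a minimum peak prominence before ranking.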
As shown in figs. 74 and 75, the data from graph 7300 can be processed using statistical methods to identify wavelengths at which the light reflectance of the tissue types differs significantly. The data used to generate graph 7300, based on the experimental configuration shown in fig. 72, may be divided into two separate data sets before performing a significance test. Data set one, comprising the data used to generate spectral curve 7305 for burned tissue and spectral curve 7310 for healthy skin, reflects the tissues a surgeon sees when deciding whether or not to excise a burn; it corresponds to the burn and healthy skin data available to a physician for a patient similar to case 7200B. Data set two, comprising the data used to generate spectral curve 7305 for burned tissue and spectral curve 7315 for wound tissue, reflects the data a surgeon needs in order to distinguish remaining burned tissue from exposed wound tissue during surgery; it corresponds to the burn and wound tissue data available to a physician for a patient similar to cases 7200C-7200E.
Fig. 74 shows a plot 7400 of P-value versus wavelength for burn versus healthy skin. The plot reflects the difference, at each tested wavelength, between the data used to generate spectral curve 7305 for burned tissue and spectral curve 7310 for healthy skin. The P-value lies between 0 and 1 and decreases as the difference between the two data sets grows: a high P-value indicates a small difference, while a P-value of 0.05 or less indicates a significant difference. To obtain the P-values of plot 7400, multiple comparisons were made by calculating the t statistic at each wavelength of the collected data used to generate spectral curves 7305 and 7310.
Line 7405 represents a significance level of 0.05 at which the error associated with multiple comparisons is not controlled. Line 7410 represents α of 0.05 divided by the number of comparisons, a conservative Bonferroni correction for the family-wise error rate. P-values of plot 7400 above line 7405 are considered insignificant, and P-values below line 7410 are considered significant. When a P-value falls in the intermediate range 7415 between lines 7405 and 7410, additional processing is performed to identify which values are significant. For example, these values are sorted from lowest to highest and then compared against a significance threshold that grows with rank, so that larger P-values are tested against progressively less strict thresholds, as described in detail with reference to figs. 76A and 76B.
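The per-wavelength multiple-comparison procedure described above can be sketched as follows, using SciPy's two-sample t-test. The data here are synthetic (a shift is injected only into the long-wavelength half), and the function name and group sizes are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

def per_wavelength_pvalues(group_a, group_b):
    """Two-sample t-test at every wavelength.

    group_a, group_b: arrays of shape (n_measurements, n_wavelengths),
    e.g. DRS spectra of burned tissue vs. healthy skin.
    Returns one P-value per wavelength.
    """
    _, p = ttest_ind(group_a, group_b, axis=0)
    return p

# Synthetic example: 2048 wavelength samples (matching the test
# configuration's P-value count), with the groups differing only in the
# second half of the spectrum.
rng = np.random.default_rng(0)
n, m = 10, 2048
burn = rng.normal(0.5, 0.05, size=(n, m))
healthy = rng.normal(0.5, 0.05, size=(n, m))
healthy[:, m // 2:] += 0.2  # clear separation at long wavelengths

p = per_wavelength_pvalues(burn, healthy)
alpha = 0.05                # uncorrected threshold (cf. line 7405)
bonferroni = alpha / m      # family-wise threshold (cf. line 7410)
print((p < alpha).mean(), (p < bonferroni).mean())
```

P-values between the two thresholds would correspond to the intermediate range 7415 that requires the rank-based follow-up test.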
For data set one, represented by plot 7400, the wavelengths at which the maximum difference between burned tissue and healthy skin occurs lie between 450 nm and 500 nm, between 475 nm and 525 nm, and between 700 nm and 925 nm, as indicated by the P-values below line 7405. Thus, for classifying burned tissue against healthy skin, the multispectral imaging system described herein can use a low-end range and a high-end range rather than one continuous range of wavelengths, for example between 450 nm and 525 nm and between 700 nm and 925 nm, or any range defined between any two of the above wavelengths.
Fig. 75 shows a plot 7500 of P-value versus wavelength for burn versus wound tissue. The plot reflects the difference, at each tested wavelength, between the data used to generate spectral curve 7305 for burned tissue and spectral curve 7315 for wound tissue. The P-value lies between 0 and 1 and decreases as the difference between the two data sets grows: a high P-value indicates no significance because there is little difference, while a P-value below 0.05 indicates significance. To obtain the P-values of plot 7500, multiple comparisons were made by calculating the t statistic at each wavelength of the collected data used to generate spectral curves 7305 and 7315.
Line 7505 represents a significance level of 0.05 at which the error associated with multiple comparisons is not controlled. Line 7510 represents α of 0.05 divided by the number of comparisons, a conservative Bonferroni correction for the family-wise error rate. P-values of plot 7500 above line 7505 are considered insignificant, and P-values below line 7510 are considered significant. When a P-value falls in the intermediate range 7515 between lines 7505 and 7510, additional processing is performed to identify which values are significant. For example, these values are sorted from lowest to highest and then compared against a significance threshold that grows with rank, so that larger P-values are tested against progressively less strict thresholds, as described in detail with reference to figs. 77A and 77B.
For data set two, represented by plot 7500, the wavelengths at which the maximum difference between burned tissue and wound tissue occurs lie between 400 nm and 450 nm, between 525 nm and 580 nm, and between 610 nm and 1050 nm, as indicated by the P-values below line 7505. Thus, for classifying burned tissue against wound tissue, the multispectral imaging system described herein can use discontinuous low-end and high-end ranges rather than one continuous range of wavelengths, for example between 400 nm and 450 nm, between 525 nm and 580 nm, and between 610 nm and 1050 nm, or any range defined between any two of the above wavelengths.
The results obtained by analyzing plots 7400 and 7500 and treating only the P-values below lines 7410 and 7510 as significant are considered conservative, because this approach does not account for the high probability of type II errors that arises when performing multiple t-tests. Thus, the P-values in intermediate ranges 7415 and 7515 may be processed further to identify which of them are significant. For example, to correct for the type II error probability in intermediate ranges 7415 and 7515, the false discovery rate may be controlled using the Benjamini-Hochberg method, which applies a linear stepwise increase in the significance level (α). To implement the Benjamini-Hochberg method, the P-values in each intermediate range 7415, 7515 are first sorted from smallest to largest. Then, for each t-test, the following corrected significance level is applied:
α_i = α_0 · i / m

where α_i is the significance level for t-test i, α_0 is the chosen significance level (here 0.05), i is the index of the particular P-value in the sorted list, and m is the total number of P-values, which in this test configuration is 2048. The results of this calculation for the first data set and the second data set are shown in figs. 76A-76B and figs. 77A-77B, respectively.
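The step-up procedure above can be sketched as follows. This is a generic Benjamini-Hochberg implementation on a toy P-value list, not the patent's 2048-value data; the function name is an assumption for this example.

```python
import numpy as np

def benjamini_hochberg(pvalues, alpha0=0.05):
    """Benjamini-Hochberg step-up false-discovery-rate control.

    Sorts P-values ascending and compares the i-th smallest against
    alpha_i = alpha0 * i / m (i = 1..m).  All P-values up to and
    including the largest one that falls below its own threshold are
    declared significant.  Returns a boolean mask in the input order.
    """
    p = np.asarray(pvalues, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha0 * np.arange(1, m + 1) / m  # linearly increasing alpha
    below = p[order] <= thresholds
    significant = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()  # largest rank passing its threshold
        significant[order[: k + 1]] = True
    return significant

# Toy example with m = 5: thresholds are 0.01, 0.02, 0.03, 0.04, 0.05,
# so the three smallest P-values are declared significant.
mask = benjamini_hochberg([0.001, 0.008, 0.029, 0.041, 0.60])
print(mask)  # [ True  True  True False False]
```

Note the step-up character: 0.041 fails its own threshold (0.04), but 0.029 passes its threshold (0.03), so everything ranked at or below it is kept.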
Figs. 76A and 76B show plots 7600, 7605 of the P-values of the first data set arranged in ascending order 7610, with line 7615 indicating the corrected P-value significance level. Plot 7600 shows all P-values in the intermediate range 7415, and plot 7605 is the segment of plot 7600 over indices 0-1000, where the significant P-values occur.
Figs. 77A and 77B show plots 7700, 7705 of the P-values of the second data set arranged in ascending order 7710, with line 7715 indicating the corrected P-value significance level. Plot 7700 shows all P-values in the intermediate range 7515, and plot 7705 is the segment of plot 7700 over indices 0-1000, where the significant P-values occur.
Based on the significance values determined in this experimental configuration, a multispectral imaging system for classifying burned tissue may use a low-end range and a high-end range rather than one continuous range of wavelengths, for example between 400 nm and 500 nm and between 720 nm and 1000 nm, or any range defined between any two of the above wavelengths. For example, as shown in figs. 70A, 70B, and 71, the probe may be configured to emit light at a plurality of wavelengths within ranges defined by 400 nm to 500 nm, 720 nm to 1000 nm, or any two of the above wavelengths. In some embodiments, these wavelength ranges are suitable for tissue classification for skin pigmentation similar to that of the test configuration, and different sets of ranges deviating from the disclosed ranges may be used for tissue classification on lighter or darker skin. Different sets of ranges may be identified based on the separation of the spectra received from healthy tissue and from the tissue of interest (e.g., burned tissue or wound tissue).
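Turning a per-wavelength significance mask into the discontinuous low-end/high-end bands described above can be sketched as follows. The band boundaries here come from a hand-built toy mask mimicking the 400-500 nm and 720-1000 nm ranges; the function name and the `min_width` noise filter are assumptions for this example.

```python
import numpy as np

def significant_bands(wavelengths, significant, min_width=2):
    """Group contiguous significant wavelengths into (low, high) bands.

    wavelengths: sorted 1-D array of sampled wavelengths (nm).
    significant: boolean mask from the per-wavelength significance test.
    Runs narrower than min_width samples are discarded as noise.
    """
    bands, start = [], None
    for i, sig in enumerate(significant):
        if sig and start is None:
            start = i                       # open a new band
        elif not sig and start is not None:
            if i - start >= min_width:
                bands.append((int(wavelengths[start]), int(wavelengths[i - 1])))
            start = None                    # close the band
    if start is not None and len(significant) - start >= min_width:
        bands.append((int(wavelengths[start]), int(wavelengths[-1])))
    return bands

# Toy mask: two significant regions separated by a non-significant gap,
# mimicking the discontinuous 400-500 nm and 720-1000 nm ranges.
wl = np.arange(400, 1001)
mask = ((wl >= 400) & (wl <= 500)) | ((wl >= 720) & (wl <= 1000))
print(significant_bands(wl, mask))  # [(400, 500), (720, 1000)]
```

The resulting band list could then be used to select emitter wavelengths or filter passbands for the probe configuration.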
Implementation System and terminology
Embodiments disclosed herein provide systems, methods, and apparatuses for identifying, assessing, and/or classifying target tissues. Those skilled in the art will appreciate that these embodiments may be implemented in hardware, software, firmware, or any combination thereof.
It should be understood that all of the aspects, features, materials, characteristics, or groups described above in connection with a particular aspect, embodiment, or example also apply to any other aspect, embodiment, or example described herein, unless expressly stated otherwise. All of the features disclosed in this specification (including any accompanying claims, abstract, and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. The scope of protection is not limited to the details of any of the foregoing embodiments. The scope of protection extends to any novel feature or any novel combination of features disclosed in this specification (including any accompanying claims, abstract, and drawings), and to any novel step or any novel combination of steps of any method or process disclosed herein.
Although some alternative examples have been disclosed, these are presented by way of example only and do not limit the scope of the invention. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Moreover, various omissions, substitutions, and changes to the methods and systems described herein may be made. Those of skill in the art will understand that in some alternative examples, the actual steps taken in the processes shown and/or disclosed are different than those shown in the figures. Depending on the alternative example, some of the steps described above may be omitted, and other steps may be added. Furthermore, the features and attributes of the specific alternative examples described above may be combined in different ways to form additional alternative examples, all of which are within the scope of the present invention.
It will be understood that the use of designations herein such as "first," "second," etc., to refer to any elements generally, without limitation to the number or order of such elements. These designations are used herein only to facilitate distinguishing between more than two elements or multiple examples of elements. Thus, reference to a first element and a second element does not mean that only two elements may be employed or that the first element must precede the second element in some fashion. Also, a set of elements can include more than one element unless specifically stated otherwise.
Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Those of skill would further appreciate that any of the various examples, modules, processors, means, circuits, algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware (e.g., a digital implementation, an analog implementation, or a combination of the two, which may be designed using source code or some other technique), various forms of program or design code incorporating instructions (referred to herein, for convenience, as "software" or a "software module"), or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various example logics, elements, modules, circuits described in connection with the aspects disclosed herein and in connection with the figures may be implemented or realized in an Integrated Circuit (IC), an access terminal, or an access point. The IC includes a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware elements, electronic elements, optical elements, mechanical elements, or a combination thereof designed to perform the functions described herein, executing code or instructions internal to the IC, external to the IC, or internal and external to the IC. The logic blocks, modules, circuits include antennas and/or transceivers to communicate with various elements within the network or within the device. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, more than one microprocessor in conjunction with a DSP core, or any other such combination. The functionality of the modules may also be implemented in some other way than here. The functionality described herein (e.g., with respect to one or more of the figures) corresponds in some respects to the functionality of the similarly written "means for …" in the appended claims.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be embodied in a software module, executable by a processor, residing on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that can transfer a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection may be properly termed a computer-readable medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may be stored as a set of codes and instructions, or any combination thereof, on a machine-readable medium and a computer-readable medium, and may also be incorporated into a computer program product.
It should be understood that the specific order or hierarchy of steps in any disclosed process is an example of sample approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged while remaining within the scope of the present invention. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
Various modifications to the embodiments described herein will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the claims, principles and novel features disclosed herein.
Some of the features mentioned in the context of separate embodiments in this specification may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are illustrated in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be preferred. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, but rather should be understood to generally integrate the described program components and systems together in a single software product or packaged into multiple software products. In addition, other implementations are within the scope of the following claims. In some cases, the steps recited in the claims may be performed in a different order and still achieve desirable results.
Although the invention includes certain alternative examples, and applications, it will be understood by those skilled in the art that the invention extends beyond the specifically disclosed alternative examples to other alternative examples and/or applications, obvious modifications, and equivalents thereof, including alternative examples that do not provide all of the features and advantages described herein. Accordingly, the scope of the present invention is not to be limited by the particulars of the preferred embodiments herein, but rather by the claims presented herein or in the future. For example, the following alternative examples are included within the scope of the present invention, in addition to the claims set forth herein.
In the preceding description, specific details are given to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the examples may be practiced without these specific details. For example, electronic components/devices may be shown in block diagram form in order not to obscure the examples in unnecessary detail. In other instances, such components, other structures, and techniques may be shown in detail to further explain the examples.
The embodiments disclosed above are provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A tissue classification system, comprising:
at least one light emitter configured to emit each of a plurality of wavelengths of light to illuminate a first tissue region and a second tissue region, wherein the first tissue region includes a tissue region in which skin has been debrided and the second tissue region includes a wound, wherein the at least one light emitter is configured to emit spatially uniform light, the plurality of wavelengths of light are in a first wavelength range and a second wavelength range, the first wavelength range is lower than the second wavelength range, and the first wavelength range is discontinuous with the second wavelength range;
A light detection element configured to collect light emitted from the at least one light emitter and reflected from at least a portion of the first tissue region and at least a portion of the second tissue region;
one or more processors in communication with the at least one light emitter and the light detection element and configured to:
controlling the at least one light emitter to emit light at each of the plurality of wavelengths of light;
receiving a plurality of signals from the light-detecting elements, a first subset of the plurality of signals representing light emitted at the plurality of wavelengths reflected from the portion of the first tissue region, a second subset of the plurality of signals representing light emitted at the plurality of wavelengths reflected from the portion of the second tissue region;
applying multispectral imaging processing to the plurality of signals;
identifying a tissue difference between the portion of the first tissue region and the portion of the second tissue region based, at least in part, on the multispectral imaging process;
classifying the portion of the first tissue region and the portion of the second tissue region based at least in part on the tissue difference, wherein the one or more processors classify the first tissue region as the tissue region in which skin has been debrided and the second tissue region as the wound; and is
Outputting, to a display, a classification image of the first and second tissue regions, the classification image including first representative pixels classified as the tissue region in which skin has been debrided and second different representative pixels classified as the wound.
2. The tissue classification system of claim 1, further comprising a fiber optic probe, the light detection element comprising a first fiber of the fiber optic probe, the at least one light emitter comprising a plurality of other fibers of the fiber optic probe.
3. The tissue classification system of claim 2, wherein the first optical fiber is in data communication with an image sensor or a spectrometer, and the plurality of other optical fibers receive the plurality of wavelengths of light from a light source.
4. The tissue classification system of claim 1, further comprising a light shield located at a position that blocks ambient light from the first and second tissue regions being illuminated.
5. The tissue classification system according to claim 1, wherein the first wavelength range is between 400nm and 500nm and the second wavelength range is between 720nm and 1000 nm.
6. The tissue classification system of claim 1, wherein the first subset of the plurality of signals corresponds to a first temporal sequence of different points and the second subset of the plurality of signals corresponds to a second temporal sequence of different points.
7. The tissue classification system of claim 6, wherein the one or more processors are configured to calculate blood perfusion in the portion of the first tissue region and the portion of the second tissue region by employing photoplethysmography on the plurality of signals.
8. The tissue classification system of claim 7, wherein the one or more processors are configured to classify the portion of the first tissue region and the portion of the second tissue region further based at least in part on the blood perfusion.
9. The tissue classification system of claim 1, wherein the one or more processors classify the second tissue region as a burn.
10. The tissue classification system according to claim 9, wherein the first wavelength range is between 400nm and 450nm or between 525nm and 580nm and the second wavelength range is between 610nm and 1050 nm.
CN201680076887.XA 2015-10-28 2016-04-28 Reflective mode multispectral time-resolved optical imaging method and device for tissue classification Active CN108471949B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
USPCT/US2015/057882 2015-10-28
PCT/US2015/057882 WO2016069788A1 (en) 2014-10-29 2015-10-28 Reflective mode multi-spectral time-resolved optical imaging methods and apparatuses for tissue classification
US201662297565P 2016-02-19 2016-02-19
US62/297,565 2016-02-19
PCT/US2016/029864 WO2017074505A1 (en) 2015-10-28 2016-04-28 Reflective mode multi-spectral time-resolved optical imaging methods and apparatuses for tissue classification

Publications (2)

Publication Number Publication Date
CN108471949A CN108471949A (en) 2018-08-31
CN108471949B true CN108471949B (en) 2021-11-12

Family

ID=58630952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680076887.XA Active CN108471949B (en) 2015-10-28 2016-04-28 Reflective mode multispectral time-resolved optical imaging method and device for tissue classification

Country Status (5)

Country Link
EP (1) EP3367887A4 (en)
JP (1) JP6785307B2 (en)
KR (1) KR102634161B1 (en)
CN (1) CN108471949B (en)
WO (1) WO2017074505A1 (en)

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016069788A1 (en) 2014-10-29 2016-05-06 Spectral Md, Inc. Reflective mode multi-spectral time-resolved optical imaging methods and apparatuses for tissue classification
US20220240783A1 (en) 2017-03-02 2022-08-04 Spectral Md, Inc. Machine learning systems and techniques for multispectral amputation site analysis
WO2019003245A1 (en) * 2017-06-28 2019-01-03 Skincurate Research Private Limited Multispectral optical imaging device and method for contactless functional imaging of skin
CN107369172B (en) * 2017-07-14 2021-07-09 上海肇观电子科技有限公司 Intelligent device and method for outputting depth image
CA3076478C (en) * 2017-09-21 2021-11-16 Vital Biosciences, Inc. Imaging biological tissue or other subjects
US10500084B2 (en) 2017-12-22 2019-12-10 Coloplast A/S Accessory devices of an ostomy system, and related methods for communicating leakage state
US11559423B2 (en) 2017-12-22 2023-01-24 Coloplast A/S Medical appliance system, monitor device, and method of monitoring a medical appliance
EP3727236A1 (en) 2017-12-22 2020-10-28 Coloplast A/S Sensor assembly part and a base plate for an ostomy appliance and a method for manufacturing a sensor assembly part and a base plate
US10799385B2 (en) 2017-12-22 2020-10-13 Coloplast A/S Ostomy appliance with layered base plate
US11471318B2 (en) 2017-12-22 2022-10-18 Coloplast A/S Data collection schemes for a medical appliance and related methods
LT3727232T (en) 2017-12-22 2022-04-25 Coloplast A/S Ostomy appliance with selective sensor points and related methods
LT3727234T (en) 2017-12-22 2022-04-25 Coloplast A/S Ostomy appliance with angular leakage detection
EP3727242B1 (en) 2017-12-22 2022-03-09 Coloplast A/S Monitor device of an ostomy system having a connector for coupling to both a base plate and an accessory device
JP7320704B2 (en) 2018-05-31 2023-08-04 パナソニックIpマネジメント株式会社 LEARNING DEVICE, INSPECTION DEVICE, LEARNING METHOD AND INSPECTION METHOD
EP3818544A1 (en) * 2018-07-05 2021-05-12 Boston Scientific Scimed, Inc. Devices, systems, and methods for determining inflammation and/or fibrosis
CN109146945B (en) * 2018-08-02 2021-01-26 京东方科技集团股份有限公司 Display panel and display device
KR20200020341A (en) * 2018-08-17 2020-02-26 주식회사 올리브헬스케어 Apparatus and method for bio-signal analysis using machine learning
KR102100229B1 (en) * 2018-12-06 2020-04-13 제주대학교 산학협력단 Device for photographing sequential images of ultra-fast moving object
CA3122351A1 (en) * 2018-12-11 2020-06-18 Eko.Ai Pte. Ltd. Automatic clinical workflow that recognizes and analyzes 2d and doppler modality echocardiogram images for automatic cardiac measurements and the diagnosis, prediction and prognosis of heart disease
EP3666176A1 (en) * 2018-12-14 2020-06-17 Koninklijke Philips N.V. Apparatus for detecting tissue inflammation
EP3893733A4 (en) 2018-12-14 2022-10-12 Spectral MD, Inc. Machine learning systems and methods for assessment, healing prediction, and treatment of wounds
US10783632B2 (en) 2018-12-14 2020-09-22 Spectral Md, Inc. Machine learning systems and method for assessment, healing prediction, and treatment of wounds
US10740884B2 (en) 2018-12-14 2020-08-11 Spectral Md, Inc. System and method for high precision multi-aperture spectral imaging
KR20210099126A (en) 2018-12-14 2021-08-11 스펙트랄 엠디, 인크. Systems and methods for high-precision multi-aperture spectral imaging
US11612512B2 (en) 2019-01-31 2023-03-28 Coloplast A/S Moisture detecting base plate for an ostomy appliance and a system for determining moisture propagation in a base plate and/or a sensor assembly part
WO2020203015A1 (en) * 2019-04-02 2020-10-08 公立大学法人横浜市立大学 Illness aggravation estimation system
JP7142175B2 (en) * 2019-05-24 2022-09-26 スミス メディカル エーエスディー インコーポレーテッド Bandages and systems for phlebitis detection
JP7334073B2 (en) 2019-06-19 2023-08-28 キヤノンメディカルシステムズ株式会社 MEDICAL DATA PROCESSING APPARATUS AND MEDICAL DATA PROCESSING METHOD
DE102019209790A1 (en) * 2019-07-03 2021-01-07 Siemens Healthcare Gmbh Method for providing an evaluation data set from a first medical three-dimensional computer tomography data set
CN110365947A (en) * 2019-08-07 2019-10-22 杭州泽铭睿股权投资有限公司 Baby monitor camera capable of detecting a baby's heartbeat and breathing
CN112617746B (en) * 2019-10-09 2024-04-09 FaceHeart Inc. Non-contact physiological signal detection device
TWI772689B (en) * 2019-10-09 2022-08-01 FaceHeart Inc. Non-contact physiological signal measuring device
JP2021083783A (en) * 2019-11-28 Equos Research Co., Ltd. Pulse rate detection device, exercise device, and pulse rate detection program
KR102655200B1 (en) * 2020-08-25 2024-04-08 Korea Advanced Institute of Science and Technology (KAIST) Method and apparatus for multiplexed imaging through iterative separation of fluorophores
KR102628182B1 (en) * 2020-08-28 2024-01-24 Korea Advanced Institute of Science and Technology (KAIST) Method and apparatus for multicolor unmixing by mutual information minimization
CN112487882B (en) * 2020-11-13 2022-09-27 西南交通大学 Method for generating non-sparse index-guided enhanced envelope spectrum based on spectrum coherence
CN112401843B (en) * 2020-11-17 2022-07-19 浙江省人民医院 Device for distinguishing tumor active tissue and necrotic tissue
CN113327655B (en) * 2021-04-21 2022-08-05 福建亿能达信息技术股份有限公司 Outlier detection method, device, equipment and medium for multidimensional data
EP4151276A1 (en) * 2021-09-20 2023-03-22 Koninklijke Philips N.V. Monitoring and detection of cutaneous reactions caused by radiotherapy
KR102659164B1 (en) 2021-10-22 2024-04-19 Samsung Electronics Co., Ltd. Hyperspectral image sensor and method of operation thereof
DE102021134276A1 (en) * 2021-12-22 2023-06-22 Karl Storz SE & Co. KG Medical device and method for examining an organic tissue
CN116659589B (en) * 2023-07-25 2023-10-27 澳润(山东)药业有限公司 Donkey-hide gelatin cake preservation environment monitoring method based on data analysis

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5701902A (en) * 1994-09-14 1997-12-30 Cedars-Sinai Medical Center Spectroscopic burn injury evaluation apparatus and method
US6081612A (en) * 1997-02-28 2000-06-27 Electro Optical Sciences Inc. Systems and methods for the multispectral imaging and characterization of skin tissue
DE19850350C1 (en) * 1998-11-02 2000-09-28 Jena Optronik Gmbh Method and device for generating data for the diagnosis of the degree of damage to a patient's skin tissue
TW512058B (en) * 2001-12-24 2002-12-01 Yung-Jian Yang Spectral analysis system for burning and scalding injuries and device used in the system
EP1821097A1 (en) * 2002-12-02 2007-08-22 River Diagnostics B.V. Use of high wavenumber Raman spectroscopy for measuring tissue
RU2372117C2 (en) * 2003-09-18 2009-11-10 Аркюо Медикал, Инк. Method of opto-thermo-mechanical impact onto biological tissue and device to this end
US20120321759A1 (en) * 2007-01-05 2012-12-20 Myskin, Inc. Characterization of food materials by optomagnetic fingerprinting
WO2009008745A2 (en) * 2007-07-06 2009-01-15 Industrial Research Limited Laser speckle imaging systems and methods
US20090072142A1 (en) * 2007-09-14 2009-03-19 Forensicare Incorporated Scanning system and techniques for medical and/or forensic assessment using the same
CA2721941C (en) * 2008-04-21 2018-06-26 Drexel University Methods for measuring changes in optical properties of wound tissue and correlating near infrared absorption (fnir) and diffuse reflectance spectroscopy scattering (drs) with tissue neovascularization and collagen concentration to determine whether wound is healing
PT2291640T (en) * 2008-05-20 2019-02-26 Univ Health Network Device and method for fluorescence-based imaging and monitoring
US8694266B2 (en) * 2008-06-05 2014-04-08 The Regents Of The University Of Michigan Multimodal spectroscopic systems and methods for classifying biological tissue
US9572494B2 (en) * 2008-08-12 2017-02-21 New Jersey Institute of Technology Method and apparatus for multi-spectral imaging and analysis of skin lesions and biological tissues
US20120078088A1 (en) 2010-09-28 2012-03-29 Point of Contact, LLC. Medical image projection and tracking system
US20120288230A1 (en) * 2011-05-13 2012-11-15 Kestrel Labs, Inc. Non-Reflective Optical Connections in Laser-Based Photoplethysmography
US20140213910A1 (en) * 2013-01-25 2014-07-31 The Regents Of The University Of California Method and apparatus for performing qualitative and quantitative analysis of burn extent and severity using spatially structured illumination
CN105143448A (en) * 2013-02-01 2015-12-09 Daniel Farkas Method and system for characterizing tissue in three dimensions using multimode optical measurements
CN103815875B (en) * 2013-10-28 2015-06-03 重庆西南医院 Near-infrared spectrum imaging system for diagnosis of depth and area of burn skin necrosis
WO2015116823A1 (en) * 2014-01-29 2015-08-06 The Johns Hopkins University System and method for wound imaging and debridement
WO2016069788A1 (en) * 2014-10-29 2016-05-06 Spectral Md, Inc. Reflective mode multi-spectral time-resolved optical imaging methods and apparatuses for tissue classification

Also Published As

Publication number Publication date
WO2017074505A1 (en) 2017-05-04
JP2019502418A (en) 2019-01-31
EP3367887A4 (en) 2019-05-22
JP6785307B2 (en) 2020-11-18
EP3367887A1 (en) 2018-09-05
KR102634161B1 (en) 2024-02-05
KR20180078272A (en) 2018-07-09
CN108471949A (en) 2018-08-31

Similar Documents

Publication Publication Date Title
CN108471949B (en) Reflective mode multispectral time-resolved optical imaging method and device for tissue classification
US9962090B2 (en) Reflective mode multi-spectral time-resolved optical imaging methods and apparatuses for tissue classification
EP3212057B1 (en) Reflective mode multi-spectral time-resolved optical imaging methods and apparatuses for tissue classification
US20220142484A1 (en) Reflective mode multi-spectral time-resolved optical imaging methods and apparatuses for tissue classification
US11337643B2 (en) Machine learning systems and techniques for multispectral amputation site analysis
JP2018502677A5 (en)
US11883128B2 (en) Multispectral mobile tissue assessment
US8548570B2 (en) Hyperspectral imaging of angiogenesis
CA2592691C (en) Hyperspectral/multispectral imaging in determination, assessment and monitoring of systemic physiology and shock
Thatcher et al. Multispectral and photoplethysmography optical imaging techniques identify important tissue characteristics in an animal model of tangential burn excision
EP2635185A2 (en) Determination of tissue oxygenation in vivo
CN109154558A (en) Method and apparatus for assessing tissue vascular health
Heredia-Juesas et al. Non-invasive optical imaging techniques for burn-injured tissue detection for debridement surgery
AU2013202796B2 (en) Hyperspectral/multispectral imaging in determination, assessment and monitoring of systemic physiology and shock
Saeed et al. Simplifying vein detection for intravenous procedures: A comparative assessment through near‐infrared imaging system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant