GB2559126A - An upper airway classification system
- Publication number: GB2559126A
- Application number: GB1701226.1A
- Authority
- GB
- United Kingdom
- Prior art keywords
- processor
- event
- classification
- artefact
- data source
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/267—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the respiratory tract, e.g. laryngoscopes, bronchoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/08—Detecting, measuring or recording devices for evaluating the respiratory organs
- A61B5/0826—Detecting or evaluating apnoea events
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7271—Specific aspects of physiological measurement analysis
- A61B5/7282—Event detection, e.g. detecting unique waveforms indicative of a medical condition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
Abstract
An upper airway classification system for classifying upper airway events, comprising: at least one data source 101 and a processor 103 to extract individual frames of data from each said data source, the processor 103 also being used to classify the extracted individual frames of data to identify an artefact in said data frame based on an analysis of the behaviour of the artefact. A data source may comprise at least one of a video endoscopy system (201, fig 2a), an intra-oral camera system and a storage device. The processor may comprise a desktop, a laptop (203) or a field-programmable gate array.
Description
(71) Applicant(s):
Esuabom Nwachukwu Dijemeni Flat 24, Central Building, Bow Quarters,
Fairfield Road, LONDON, Greater London, E3 2US, United Kingdom (72) Inventor(s):
Esuabom Nwachukwu Dijemeni (56) Documents Cited:
WO 2015/163710 A1 WO 2015/101948 A2 WO 2005/103962 A1 US 20130296660 A1 US 20120022365 A1 US 20110144517 A1 US 20110009752 A1 (58) Field of Search:
INT CL A61B, G06T Other: EPODOC, WPI (74) Agent and/or Address for Service:
Esuabom Nwachukwu Dijemeni Flat 24, Central Building, Bow Quarters,
Fairfield Road, LONDON, Greater London, E3 2US, United Kingdom (54) Title of the Invention: An upper airway classification system Abstract Title: An upper airway classification system (57) An upper airway classification system for classifying upper airway events. An upper airway classification system comprising: at least one data source 101 and a processor 103 to extract individual frames of data from each of the said data source, the processor 103 also being used to classify the extracted individual frames of data to identify an artefact identified in said data frame based on an analysis of the behaviour of the artefact. A data source may comprise at least one of a video endoscopy system (201, fig 2a), intra-oral camera system and storage device. The processor may comprise a desktop, laptop (203), field-programmable gate array.
DRAWINGS (sheets 1/8 - 8/8)
Figure 1: a data source (101) in communication with a processor (103).
Figures 2a and 2b: a video endoscopy system (201) in communication with a laptop (203).
Figures 3a and 3b: a video endoscopy system (305) in communication with a desktop (309) via a USB video grabber (307).
Figures 4 to 6: block diagrams of embodiments wherein the processor is a microprocessor, a field-programmable gate array and a microcontroller (items 601, 603, 605).
Figure 7: flow chart of one embodiment (steps 701, 703, 705, 707).
Figure 8: example classification outputs, including "velum no obstruction", "velum partial anteroposterior obstruction", "velum complete anteroposterior obstruction", "velum complete concentric obstruction", "oropharynx no obstruction", "oropharynx complete obstruction", "tonsil partial obstruction" and "partial anteroposterior obstruction".
Application No. GB1701226.1
RTM Date: 22 August 2017
The following terms are registered trade marks and should be read as such wherever they occur in this document: Raspberry Pi (page 6), HDMI (page 6), Cyclone (page 7).
Intellectual Property Office is an operating name of the Patent Office. www.gov.uk/ipo
AN UPPER AIRWAY CLASSIFICATION SYSTEM
BACKGROUND
The present invention relates to upper airway assessment and in particular provides a device and method for classifying upper airway events.
Locating the level of upper airway obstruction is a key clinical feature for assessing obstructive sleep apnoea. Obstructive sleep apnoea (OSA) is the most prevalent form of sleep-disordered breathing. The clinical features of obstructive sleep apnoea were first described in 1976. OSA is characterised by repetitive partial or complete obstruction of the upper airway during sleep, resulting in the reduction or cessation of airflow. This may lead to repetitive hypoxia, increased retention of carbon dioxide and arousals to restore upper airway patency; sleep is therefore fragmented. One definition of an apnoea is a total obstruction of the airway, where there is a complete blockage for 10 seconds or more. A hypopnoea is a partial obstruction of the airway, where there is a partial blockage with an airflow reduction of greater than 50% for 10 seconds or more.
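The apnoea and hypopnoea thresholds above can be sketched as a simple decision rule. The following Python sketch is purely illustrative: the function name, the representation of airflow reduction as a fraction of baseline and the example values are assumptions, not part of the specification.

```python
def classify_respiratory_event(airflow_reduction, duration_s):
    """Label a respiratory event using the thresholds described above.

    airflow_reduction: fraction of baseline airflow lost
                       (0.0 = unobstructed, 1.0 = complete blockage).
    duration_s: duration of the reduction in seconds.
    """
    if duration_s < 10:
        return "normal"               # too short to count as an event
    if airflow_reduction >= 1.0:
        return "apnoea"               # complete blockage for >= 10 s
    if airflow_reduction > 0.5:
        return "hypopnoea"            # > 50% reduction for >= 10 s
    return "normal"

print(classify_respiratory_event(1.0, 12))   # apnoea
print(classify_respiratory_event(0.6, 15))   # hypopnoea
print(classify_respiratory_event(0.6, 5))    # normal
```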
The initial step in diagnosing OSA is taking a comprehensive sleep history of a subject suspected of having OSA. The information gained from a sleep history relates to intoxications, medications, weight management, family history, medical history, surgical history, sleep behaviour, and nocturnal and daytime symptoms. The Epworth Sleepiness Scale (ESS) can be used to assess the severity of daytime sleepiness; however, there is only a weak correlation between ESS and OSA severity. The gold standard technique for diagnosing obstructive sleep apnoea is polysomnography (PSG). A full-night PSG is considered the most accurate technique for measuring the presence and severity of OSA. During a PSG study, brain waves, muscle tone, chest and abdomen movement, airflow, heart rate, blood oxygen saturation, sound level and sleep behaviour are monitored. PSG provides a robust physiological description of obstructive sleep apnoea. However, it fails to capture and assess anatomical information on the upper airway during an obstructive event. When surgical intervention on the upper airway is a necessary intervention for OSA, the information provided by PSG is inadequate.
In 1991, Croft and Pringle proposed drug-induced sleep endoscopy (DISE) as a technique for direct visualisation of the upper airway during pharmacological sleep. The technique has been well adopted and stands as the gold standard for observing the upper airway and the site of obstruction during sleep. The procedure starts with careful patient selection. Patients should have basic cardiorespiratory monitoring: pulse oximetry, blood pressure and electrocardiogram. Although not compulsory, a computerised target-controlled infusion system for propofol and a bispectral index system for monitoring sedation depth are also recommended. Patients are required to abstain from food and fluid before DISE; this prevents regurgitation and aspiration. The procedure is performed in an operating room with an ENT surgeon and an anaesthetist: the ENT surgeon uses an endoscopic imaging system to visualise the upper airway while the anaesthetist administers anaesthesia. The patient should lie in a supine position on an operating table or in a bed and should attempt to mimic sleeping habits at home. There are differences in opinion about the optimal positioning: some prefer the natural sleeping position while others prefer the supine position, which is associated with the most severe upper airway obstruction. Multiple pharmacological sedation agents are used in performing drug-induced sleep endoscopy. Sedation choices include midazolam only, diazepam, propofol, a combination of propofol and midazolam, and dexmedetomidine. A combination of propofol and midazolam can be used to increase the speed of induction and maintain the level of sedation at an appropriate depth. After a patient has reached a satisfactory level of sedation (usually a bispectral index score of 50 - 60), a fibreoptic endoscope, lubricated and coated with anti-condense, is introduced into the nasal cavity.
The observable pathway of the fibreoptic endoscope is: nasal passage, nasopharynx, palate, tonsils, lateral pharyngeal wall, tongue base, epiglottis and larynx. The anatomical levels causing snoring vibration and airway obstruction are assessed. Manoeuvres such as chin lift and jaw thrust can be used to reassess an upper airway obstruction. The key advantages of DISE include: direct visualisation of a dynamic upper airway obstruction during sleep; a cost-effective method of simulating OSA in patients; useful information for determining the management plan for patients with OSA; no radiation from imaging; and a sense of volumetric information on anatomical changes. The disadvantages of DISE include: it is merely a 'snapshot' of OSA; changes in snoring patterns; pharmacological sleep differs from natural sleep; difficulty of interpretation, requiring experience in DISE; and increased muscle relaxation.
Currently, there is no standard classification system for assessing upper airway obstruction. Some examples of upper airway classification systems include: the Pringle and Croft grading system, VOTE classification, NOHL classification, P-T-L-Tb-E classification, simple grading system, DISE index, modified VOTE classification and sleep endoscopy grading system.
One key limitation of assessing upper airway obstruction using these different classification systems is their high subjectivity: it is difficult to compare results obtained using different systems and hard to form a universally acceptable treatment plan based on a particular grading system. In addition, the choice of classification system, level of experience, level of clinical expertise, DISE procedure, available monitoring equipment and patient selection process all contribute to misclassification of upper airway obstruction. Furthermore, human factors and human judgment play a key role in assessing upper airway obstruction. This increases the amount of human assessment error, reduces classification accuracy, reduces diagnostic precision and increases unpredictability in upper airway assessment.
PROBLEM TO BE SOLVED BY THE INVENTION
There is therefore a need for an upper airway classification system that can accurately identify an upper airway event from the behaviour of a dynamic upper airway.
It is an object of the present invention to provide a system capable of classifying an upper airway event, especially an obstructive upper airway event.
In addition, it is an object of the present invention to provide a system that can identify the anatomical structure, anatomical level, severity of obstruction and configuration of obstruction.
While many prior art documents describe devices and methods for classifying upper airway events, there is no current device in which a data source containing data on at least one upper airway event is coupled with a processor in communication with said data source to receive data from the data source, wherein said processor classifies at least one artefact. Hence, there is a need in the art to take advantage of new technological innovation.
SUMMARY OF THE INVENTION
Some embodiments of the current invention provide an upper airway anatomical event identification system, comprising: a data source 101 which contains data on at least one upper airway event; and a processor 103 in communication with the said data source 101 to receive data from the data source 101, wherein said processor 103 identifies at least one artefact in the said data based on an analysis of the behaviour of the artefact.
BRIEF DESCRIPTION OF THE DRAWINGS
Further objectives and advantages will become apparent from consideration of the description, drawings and examples. The present technique will be described further, by way of example only, with reference to embodiments thereof as illustrated in the accompanying drawings, in which:
Figure 1 is a block diagram providing an overview of the technique used in accordance with the described embodiment.
Figure 2a is a block diagram providing an overview of the preferred embodiment.
Figure 2b is a block diagram providing an overview of an alternative embodiment of the preferred embodiment.
Figure 3a is a block diagram providing an overview of an embodiment of the invention wherein a processor is a desktop.
Figure 3b is a block diagram providing an overview of an alternative embodiment of the invention wherein a processor is a desktop.
Figure 4 is a block diagram providing an overview of an embodiment of the invention wherein a processor is a microprocessor.
Figure 5 is a block diagram providing an overview of an embodiment of the invention wherein a processor is a field-programmable gate array.
Figure 6 is a block diagram providing an overview of an embodiment of the invention wherein a processor is a microcontroller.
Figure 7 shows a flow chart of one embodiment of the invention.
Figure 8 shows examples of results produced by a processor in the preferred embodiment.
DETAILED DESCRIPTION
Some embodiments of the current invention are discussed in detail below. In describing embodiments, specific terminology is employed for the sake of clarity. However, the invention is not intended to be limited to the specific terminology so selected. A person skilled in the relevant art will recognise that other equivalent components can be employed and other methods developed without departing from the broad concepts of the current invention. All references cited herein are incorporated by reference as if each had been individually incorporated.
Figure 1 is a block diagram providing an overview of the technique used in accordance with the described embodiment. A data source 101 contains data on at least one upper airway anatomical event, and a processor 103 is in communication with the said data source to receive data from the data source, wherein said processor 103 classifies at least one artefact in the said data based on an analysis of the behaviour of the artefact. Examples of a data source 101 include: a video endoscopy system, an intra-oral camera system and a storage device. In the preferred embodiment of the invention, the data source 101 is a video endoscopy system. Examples of processors 103 include: desktop, laptop, microcontroller, digital signal processor, FPGA, tablet and mobile phone. Different processors 103 provide different system advantages, as further described. In the preferred embodiment, the processor is a laptop.
In one embodiment of the invention, the data source is a video endoscopy system 201 and the processor is a laptop 203. The laptop 203 is in communication with the video endoscopy system 201 via a data cable and preinstalled video capture software on the said laptop 203. This embodiment of the invention provides a device with high computational efficiency for real-time and offline upper airway classification, high system mobility and a cost-effective system solution. In an alternative embodiment of the invention, a laptop 209 is in communication with a video endoscopy system 205 via a USB video grabber 207.
In an embodiment of the invention, the data source 101 is a video endoscopy system 301 and the processor 103 is a desktop 303. The desktop 303 is in communication with the video endoscopy system 301 via a data cable and preinstalled video capture software on the said desktop 303. This embodiment of the invention provides a device with high computational efficiency for real-time and offline upper airway classification, high speed efficiency for classifying multiple upper airway events simultaneously and high classification throughput. In an alternative embodiment of the invention wherein the processor 103 is a desktop 309, the desktop 309 is in communication with a video endoscopy system 305 via a USB video grabber 307.
In an embodiment of the invention, the data source 101 is a video endoscopy system 401 and the processor 103 is a microprocessor 405. An example of a microprocessor 405 is a Raspberry Pi. An example of a video grabber 403 is a Raspberry Pi HDMI input board. The microprocessor 405 is in communication with the video endoscopy system 401 via the video grabber 403. In the embodiment of the invention wherein the processor 103 is a microprocessor 405 which is a Raspberry Pi and the video grabber 403 is a Raspberry Pi HDMI input board, the advantage of the embodiment is a low-cost, portable and durable device for classifying upper airway events.
In an embodiment of the invention, the data source 101 is a video endoscopy system 501 and the processor 103 is a field-programmable gate array 505. An example of a field-programmable gate array 505 is the Cyclone III EP3C25F324 FPGA. An example of a video grabber 503 is a DVI-HSMC board. The field-programmable gate array 505 is in communication with the video endoscopy system 501 via the video grabber 503. This embodiment of the invention provides a low-cost, portable and computationally efficient device for classifying upper airway events.
In one embodiment of the invention, the data source 101 is a video endoscopy system 601 and the processor 103 is a microcontroller 605. An example of a microcontroller 605 is an Arduino. An example of a video grabber 603 is a serial VGA monitor driver board. The microcontroller 605 is in communication with the video endoscopy system 601 via the video grabber 603. This embodiment of the invention provides a low-cost, portable and easy-to-implement device for classifying upper airway events.
In one embodiment of the invention, the processor consists of a combination of multiple processors.
Figure 7 shows a flow chart of the preferred embodiment of the invention. A processor 103 in communication 701 with a data source 101 is used to extract 703 individual frames of data from the said data source 101, the processor 103 then classifying 707 each extracted individual frame of data based on an analysis 705 of the behaviour of the artefact. Figure 8 shows examples of results produced by a processor in the preferred embodiment.
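The extract-then-classify flow of Figure 7 can be sketched as follows. This is a minimal illustration only: the frames are simulated NumPy arrays rather than a live endoscopy feed, and the intensity-threshold classifier is a hypothetical stand-in for the classification techniques described below.

```python
import numpy as np

def extract_frames(source, step=1):
    """Yield (index, frame) pairs, taking every `step`-th frame (cf. step 703)."""
    for i, frame in enumerate(source):
        if i % step == 0:
            yield i, frame

def classify_frame(frame):
    """Hypothetical stand-in classifier: label a frame by mean intensity."""
    return "obstruction" if frame.mean() < 64 else "no obstruction"

# Simulated 8-bit greyscale frames in place of a video endoscopy data source.
frames = [np.full((4, 4), v, dtype=np.uint8) for v in (200, 30, 150)]
labels = [classify_frame(f) for _, f in extract_frames(frames)]
print(labels)  # ['no obstruction', 'obstruction', 'no obstruction']
```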
In one embodiment of the invention, a processor 103 uses pattern recognition techniques to classify an artefact identified in data based on an analysis of the behaviour of the artefact during a detected upper airway event. The pattern recognition techniques involve using at least one of: probability distributions, linear models for regression, linear models for classification, neural networks, sparse kernel machines, graphical models, mixture models, expectation-maximisation models, approximate inference, sampling methods, continuous latent variables, models for sequential data and combined models.
In another embodiment of the invention, a processor 103 uses image processing techniques to classify an artefact identified in data based on an analysis of the behaviour of the artefact during a detected upper airway event. The image processing techniques involve using at least one of: grey-level image processing, binary processing, image Fourier analysis, multiscale image decompositions, wavelets, colour image processing, image filtering and edge detection.
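As a minimal illustration of the edge detection technique listed above, the following Python sketch computes a Sobel edge magnitude with NumPy. The image and implementation are illustrative assumptions, not taken from the specification.

```python
import numpy as np

def sobel_edges(img):
    """Edge magnitude via 3x3 Sobel filtering (no padding; valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()   # horizontal gradient
            gy[i, j] = (patch * ky).sum()   # vertical gradient
    return np.hypot(gx, gy)

img = np.zeros((6, 6))
img[:, 3:] = 255.0                 # vertical step edge between columns 2 and 3
edges = sobel_edges(img)
print(edges.max())                 # 1020.0 — strongest response on the edge
```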
In another embodiment of the invention, a processor 103 uses machine learning techniques to classify an artefact identified in data based on an analysis of the behaviour of the artefact during a detected upper airway event. The machine learning techniques involve using at least one of: the Find-S algorithm, the candidate-elimination algorithm, decision trees, artificial neural networks, Bayesian learning, instance-based learning, genetic algorithms, reinforcement learning, inductive learning, analytical learning and combined inductive and analytical learning.
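Instance-based learning, one of the techniques listed above, can be sketched as a k-nearest-neighbour rule. The feature vectors and labels below are invented toy data for illustration, not real endoscopy features.

```python
import numpy as np

def knn_classify(x, examples, labels, k=3):
    """Return the majority label among the k training examples nearest to x."""
    dists = [np.linalg.norm(x - e) for e in examples]
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy two-dimensional "behaviour" features with hand-assigned labels.
examples = [np.array([0.0, 0.0]), np.array([0.1, 0.2]),
            np.array([1.0, 1.0]), np.array([0.9, 1.1])]
labels = ["no obstruction", "no obstruction", "obstruction", "obstruction"]
print(knn_classify(np.array([0.95, 1.0]), examples, labels))  # obstruction
```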
In another embodiment of the invention, a processor 103 uses computer vision techniques to classify an artefact identified in data based on an analysis of the behaviour of the artefact during a detected upper airway event. The computer vision techniques involve using at least one of: feature detection, segmentation, structure from motion, 3D reconstruction, recognition, dense motion estimation, image stitching, stereo correspondence, image-based rendering, shape recognition, perceptual grouping, relaxation labelling and image sequence processing.
In another embodiment of the invention, a processor 103 uses at least one of image processing, pattern recognition, machine learning and computer vision techniques to classify an artefact identified in data based on analysis of the behaviour of the artefact during a detected upper airway event.
In another embodiment of the invention, a processor 103 classifies an artefact based on a method which comprises: loading an image sequence; downloading a pre-trained convolutional neural network; loading the pre-trained convolutional neural network; pre-processing images for the convolutional neural network; preparing training and test image sets; extracting training features using the convolutional neural network; training a multiclass support vector machine classifier using the CNN features; evaluating the classifier; and applying the newly trained classifier to categorise new images.
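A hedged Python sketch of this transfer-learning workflow is given below. To stay self-contained it substitutes a trivial flatten for the pre-trained CNN's feature extractor and a nearest-centroid rule for the multiclass support vector machine; both substitutions, and all of the data, are illustrative assumptions.

```python
import numpy as np

def cnn_features(img):
    """Stand-in for pre-trained CNN activations (here: flattened pixels)."""
    return img.astype(float).ravel()

def train_centroids(images, labels):
    """Fit a nearest-centroid classifier on extracted features
    (standing in for the multiclass SVM of the method above)."""
    grouped = {}
    for img, lbl in zip(images, labels):
        grouped.setdefault(lbl, []).append(cnn_features(img))
    return {lbl: np.mean(feats, axis=0) for lbl, feats in grouped.items()}

def classify(img, centroids):
    """Assign the label of the nearest class centroid in feature space."""
    f = cnn_features(img)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))

# Toy training set: dark frames versus bright frames.
train = [np.zeros((2, 2)), np.ones((2, 2)) * 255]
model = train_centroids(train, ["no obstruction", "complete obstruction"])
print(classify(np.ones((2, 2)) * 250, model))  # complete obstruction
```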
In another embodiment of the invention, a processor 103 classifies an artefact based on a method which comprises: setting up image category sets; creating a bag of features; training an image classifier with a bag of visual words; and classifying an image, image set or video sequence based on said image classifier.
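The bag-of-visual-words representation can be illustrated minimally as below. A real system would learn the vocabulary by clustering local descriptors (e.g. k-means over SURF features); here a tiny hand-picked intensity vocabulary is assumed purely for illustration.

```python
import numpy as np

VOCABULARY = np.array([0.0, 128.0, 255.0])   # three hand-picked visual "words"

def bag_of_words(img, patch=2):
    """Quantise each patch to its nearest word; return a normalised histogram."""
    words = []
    h, w = img.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            mean = img[i:i + patch, j:j + patch].mean()
            words.append(np.abs(VOCABULARY - mean).argmin())
    hist = np.bincount(words, minlength=len(VOCABULARY))
    return hist / hist.sum()                 # word frequencies

img = np.zeros((4, 4))
img[:, 2:] = 255.0                           # half dark, half bright
print(bag_of_words(img).tolist())            # [0.5, 0.0, 0.5]
```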
In one embodiment of the invention, a processor 103 classifies an artefact based on at least one of anatomical structure, anatomical level, severity of obstruction and configuration of obstruction. This implies the said processor classifies an artefact using one or more combinations of anatomical structures, anatomical levels, severity of obstruction and configuration of obstruction. Anatomical structures comprise: nose, velum, uvula, tonsils, lateral walls, tongue base, epiglottis and larynx. Anatomical levels comprise: nasopharynx, oropharynx and hypopharynx. Severity of obstruction comprises either a three-degree severity scale or a semi-quantitative system. The three degrees of severity are: no obstruction, partial obstruction and complete obstruction. The semi-quantitative system comprises: 0% - 25% obstruction, 25% - 50% obstruction, 50% - 75% obstruction and 75% - 100% obstruction. Configurations of obstruction comprise: anteroposterior obstruction, lateral obstruction and concentric obstruction.
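One way to represent a single classification along these four axes is a small record type. The Python dataclass below is a hypothetical encoding sketch; the field names and example values are assumptions, not defined by the specification.

```python
from dataclasses import dataclass

SEVERITIES = ("no obstruction", "partial obstruction", "complete obstruction")
CONFIGURATIONS = ("anteroposterior", "lateral", "concentric")

@dataclass
class AirwayClassification:
    structure: str       # e.g. "velum", "tonsils", "tongue base"
    level: str           # "nasopharynx", "oropharynx" or "hypopharynx"
    severity: str        # one of SEVERITIES
    configuration: str   # one of CONFIGURATIONS

    def __post_init__(self):
        # Reject values outside the vocabularies listed in the text above.
        if self.severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {self.severity}")
        if self.configuration not in CONFIGURATIONS:
            raise ValueError(f"unknown configuration: {self.configuration}")

c = AirwayClassification("tonsils", "oropharynx",
                         "partial obstruction", "lateral")
print(c.structure, c.severity)  # tonsils partial obstruction
```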
In another embodiment of the invention, the processor 103 classifies an artefact based on the Pringle and Croft grading system. The Pringle and Croft grading system comprises: grade 1 - simple palatal snoring; grade 2 - single-level palatal obstruction; grade 3 - multi-segmental involvement: intermittent oro-hypopharyngeal collapse; grade 4 - sustained multi-segmental obstruction; and grade 5 - tongue base obstruction.
In another embodiment of the invention, the processor 103 classifies an artefact based on the VOTE classification. Anatomical structures in the VOTE classification comprise: velum, oropharynx walls, tongue base and epiglottis. Severity of obstruction in the VOTE classification system comprises: no obstruction, partial obstruction and complete obstruction. Configuration of obstruction in the VOTE classification comprises: anteroposterior obstruction, lateral obstruction and concentric obstruction.
In another embodiment of the invention, the processor 103 classifies an artefact based on the nose, oropharynx, hypopharynx and larynx (NOHL) classification. Anatomical zones in the NOHL classification comprise: nasal cavities (nose), retropalatal space (oropharynx), base-of-tongue space (hypopharynx) and larynx. Severity of obstruction in the NOHL classification comprises: 0% - 25% obstruction, 25% - 50% obstruction, 50% - 75% obstruction and 75% - 100% obstruction. Configuration of obstruction in the NOHL classification comprises: anteroposterior obstruction, lateral obstruction and concentric obstruction.
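The NOHL semi-quantitative severity bands above can be sketched as a simple lookup. The boundary convention (upper bound inclusive) is an assumption made for illustration only.

```python
def nohl_severity_band(pct):
    """Map a percentage obstruction to its NOHL semi-quantitative band."""
    if not 0 <= pct <= 100:
        raise ValueError("percentage obstruction must be between 0 and 100")
    for upper, label in [(25, "0% - 25%"), (50, "25% - 50%"),
                         (75, "50% - 75%"), (100, "75% - 100%")]:
        if pct <= upper:              # upper bound treated as inclusive
            return label

print(nohl_severity_band(60))   # 50% - 75%
print(nohl_severity_band(10))   # 0% - 25%
```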
In another embodiment of the invention, a processor 103 classifies an artefact based on the P-T-L-Tb-E classification. Anatomical structures in the P-T-L-Tb-E classification comprise: palate (P), tonsil (T), lateral pharyngeal wall (L), tongue base (Tb) and epiglottis (E). Configuration of obstruction in the P-T-L-Tb-E classification comprises: anteroposterior obstruction, lateral obstruction and concentric obstruction.
In another embodiment of the invention, a processor 103 classifies an artefact based on the simple grading system. The simple grading system comprises: grade 1 - palatal snoring; grade 2 - missed snoring and non-palatal (tongue base) snoring.
In another embodiment of the system, a processor 103 classifies an artefact based on the modified VOTE classification. Anatomical levels in the modified VOTE classification comprise: retropalatal and retrolingual. Anatomical structures in the modified VOTE classification comprise: palate, lateral walls, tonsils, tongue base and epiglottis. Severity of obstruction in the modified VOTE classification system comprises: no obstruction (0), partial obstruction (1, 50% - 75%) and complete obstruction (2, >75%). Configuration of obstruction in the modified VOTE classification comprises: anteroposterior obstruction, lateral obstruction and concentric obstruction.
In another embodiment of the system, a processor 103 classifies an artefact based on the DISE index. Anatomical structures in the DISE index comprise: palate, lateral walls, tonsils, tongue base and epiglottis. Severity of obstruction in the DISE index comprises: no obstruction, partial obstruction and complete obstruction.
In another embodiment of the system, a processor 103 classifies an artefact based on the sleep endoscopy grading system. Anatomical structures in the sleep endoscopy grading system comprise: nose, palatine plane, uvula, tonsils, tongue base, larynx and hypopharynx. Severity of obstruction in the sleep endoscopy grading system comprises: no obstruction, partial obstruction and complete obstruction.
In describing embodiments of the invention, specific terminology is employed for the sake of clarity. However, the invention is not intended to be limited to the specific terminology so selected. The above-described embodiments of the invention may be modified or varied, without departing from the invention, as appreciated by those skilled in the art and in light of the above teachings. It is therefore to be understood that, within the scope of the claims and their equivalents, the invention may be practiced otherwise than as specifically described.
Claims (22)
1. An upper airway classification device comprising:
at least one data source; and a processor in communication with said data source, wherein said processor extracts individual data frames from at least one data source, the processor also being used to analyse the extracted data frames to classify an artefact identified in the localised data source based on an analysis of the behaviour of the artefact during a detected event.
2. A device as claimed in claim 1, wherein a data source comprises at least one anatomical data source on an upper airway event.
3. A device as claimed in claim 1, wherein a data source comprises at least one of a video endoscopy system, intra-oral camera system and storage device.
4. A device as claimed in claim 1, wherein a processor comprises at least one of a desktop, laptop, microcontroller, microprocessor, field-programmable gate array, complex programmable logic device, application-specific integrated circuit, a mobile phone and a tablet.
5. A device as claimed in claim 1, wherein a processor classifies an artefact identified based on at least one of an image processing technique, machine learning technique, pattern recognition technique and computer vision technique.
6. A device as claimed in claim 1, wherein a processor classifies an artefact based on at least one anatomical structure, anatomical levels, severity of obstruction and configuration of obstruction.
7. A device as claimed in claim 1, wherein a processor classifies an artefact based on an anatomical classification system.
8. A device as claimed in claim 1, wherein a processor classifies an artefact based on a drug induced sleep endoscopy grading and classification system.
9. A device as claimed in claim 1, wherein a processor classifies an artefact based on at least one of the Pringle and Croft grading system, VOTE classification, NOHL classification, P-T-L-Tb-E classification, simple grading system, DISE index, modified VOTE classification, sleep endoscopy grading system, Mallampati score, modified Mallampati score, Cormack-Lehane classification system, and simplified airway risk index.
10. A device as claimed in claim 1, wherein an event is either an upper airway obstructive event or a non-obstructive event.
11. A device as claimed in claim 1, wherein an event is an apnoea event, a hypopnoea event or a normal upper airway event.
12. A method for classifying an upper airway event, wherein a processor is used to extract individual data frames from at least one data source, the processor also being used to analyse the extracted data frames to classify an artefact identified in the localised data source based on analysis of the behaviour of the artefact during a detected event.
13. A method as claimed in claim 12, wherein a data source comprises at least one anatomical data source on an upper airway event.
14. A method as claimed in claim 12, wherein a data source comprises at least one of a video endoscopy system, an intra-oral camera system and a storage device.
15. A method as claimed in claim 12, wherein a processor comprises at least one of a desktop, laptop, microcontroller, microprocessor, field-programmable gate array, complex programmable logic device, application-specific integrated circuit, a mobile phone and a tablet.
16. A method as claimed in claim 12, wherein a processor classifies an identified artefact based on at least one of an image processing technique, machine learning technique, pattern recognition technique and computer vision technique.
17. A method as claimed in claim 12, wherein a processor classifies an artefact based on at least one of anatomical structure, anatomical level, severity of obstruction and configuration of obstruction.
18. A method as claimed in claim 12, wherein a processor classifies an artefact based on an anatomical classification system.
19. A method as claimed in claim 12, wherein a processor classifies an artefact based on a drug-induced sleep endoscopy grading and classification system.
20. A method as claimed in claim 12, wherein a processor classifies an artefact based on at least one of the Pringle and Croft grading system, VOTE classification, NOHL classification, P-T-L-Tb-E classification, simple grading system, DISE index, modified VOTE classification, sleep endoscopy grading system, Mallampati score, modified Mallampati score, Cormack-Lehane classification system, and simplified airway risk index.
21. A method as claimed in claim 12, wherein an event is classified as either an upper airway obstructive event or a non-obstructive event.
22. A method as claimed in claim 12, wherein an event is either an apnoea event, a hypopnoea event or a normal upper airway event.
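Claims 11 and 22 distinguish apnoea, hypopnoea and normal upper airway events. The patent does not specify the decision rule, but as a hedged sketch, such events are conventionally separated by the fractional reduction in airflow and the event duration (the thresholds below follow the widely used AASM scoring convention and are assumptions here, not the patent's method; a full scorer would also check oxygen desaturation or arousal for hypopnoeas):

```python
def classify_respiratory_event(flow_reduction: float, duration_s: float) -> str:
    """Classify an event from the fractional drop in airflow (0.0-1.0)
    and its duration in seconds.

    Thresholds follow common AASM-style scoring criteria:
    >=90% reduction for >=10 s -> apnoea;
    >=30% reduction for >=10 s -> hypopnoea;
    anything else -> normal.
    """
    if duration_s < 10.0:
        return "normal"       # too brief to score as an event
    if flow_reduction >= 0.90:
        return "apnoea"
    if flow_reduction >= 0.30:
        return "hypopnoea"
    return "normal"

print(classify_respiratory_event(0.95, 14.0))  # apnoea
print(classify_respiratory_event(0.50, 12.0))  # hypopnoea
print(classify_respiratory_event(0.50, 6.0))   # normal
```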
Application No: GB1701226.1 Examiner: Mr Mike Walker
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1701226.1A GB2559126A (en) | 2017-01-25 | 2017-01-25 | An upper airway classification system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1701226.1A GB2559126A (en) | 2017-01-25 | 2017-01-25 | An upper airway classification system |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201701226D0 GB201701226D0 (en) | 2017-03-08 |
GB2559126A true GB2559126A (en) | 2018-08-01 |
Family
ID=58463051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1701226.1A Withdrawn GB2559126A (en) | 2017-01-25 | 2017-01-25 | An upper airway classification system |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2559126A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005103962A1 (en) * | 2004-03-30 | 2005-11-03 | Eastman Kodak Company | Classifying images according to anatomical structure |
US20110009752A1 (en) * | 2009-07-10 | 2011-01-13 | The Regents Of The University Of California | Endoscopic long range fourier domain optical coherence tomography (lr-fd-oct) |
US20110144517A1 (en) * | 2009-01-26 | 2011-06-16 | Miguel Angel Cervantes | Video Based Automated Detection of Respiratory Events |
US20120022365A1 (en) * | 2010-07-21 | 2012-01-26 | Mansfield Enterprises | Diagnosing Airway Obstructions |
US20130296660A1 (en) * | 2012-05-02 | 2013-11-07 | Georgia Health Sciences University | Methods and systems for measuring dynamic changes in the physiological parameters of a subject |
WO2015101948A2 (en) * | 2014-01-06 | 2015-07-09 | Body Vision Medical Ltd. | Surgical devices and methods of use thereof |
WO2015163710A1 (en) * | 2014-04-24 | 2015-10-29 | 경상대학교산학협력단 | Device for imaging and diagnosing upper airway obstruction condition using conductivity tomography |
- 2017-01-25: application GB1701226.1A filed (GB patent GB2559126A/en), status not active, withdrawn
Also Published As
Publication number | Publication date |
---|---|
GB201701226D0 (en) | 2017-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Cook et al. | Opening mechanisms of the human upper esophageal sphincter | |
Mao et al. | Neck sensor-supported hyoid bone movement tracking during swallowing | |
Sejdic et al. | Computational deglutition: Using signal-and image-processing methods to understand swallowing and associated disorders [life sciences] | |
Jayatilake et al. | Smartphone-based real-time assessment of swallowing ability from the swallowing sound | |
Donohue et al. | Tracking hyoid bone displacement during swallowing without videofluoroscopy using machine learning of vibratory signals | |
Khalifa et al. | Upper esophageal sphincter opening segmentation with convolutional recurrent neural networks in high resolution cervical auscultation | |
Islam et al. | Deep learning of facial depth maps for obstructive sleep apnea prediction | |
CN113409944A (en) | Obstructive sleep apnea detection method and device based on deep learning | |
Hanif et al. | Estimation of apnea-hypopnea index using deep learning on 3-D craniofacial scans | |
Hashimoto et al. | Non-invasive quantification of human swallowing using a simple motion tracking system | |
Donohue et al. | How closely do machine ratings of duration of UES opening during videofluoroscopy approximate clinician ratings using temporal kinematic analyses and the MBSImP? | |
JP7197491B2 (en) | Methods and devices using swallowing accelerometer signals for dysphagia detection | |
Yu et al. | Silent aspiration detection in high resolution cervical auscultations | |
JP2022521172A (en) | Methods and Devices for Screening for Dysphagia | |
Sabry et al. | Automatic estimation of laryngeal vestibule closure duration using high-resolution cervical auscultation signals | |
Zhang et al. | Prediction of obstructive sleep apnea using deep learning in 3D craniofacial reconstruction | |
Schwartz et al. | A preliminary investigation of similarities of high resolution cervical auscultation signals between thin liquid barium and water swallows | |
Donohue et al. | Characterizing effortful swallows from healthy community dwelling adults across the lifespan using high-resolution cervical auscultation signals and MBSImP scores: A preliminary study | |
Zhang et al. | A generalized equation approach for hyoid bone displacement and penetration–aspiration scale analysis | |
GB2559126A (en) | An upper airway classification system | |
Ilegbusi et al. | A computational model of upper airway respiratory function with muscular coupling | |
He et al. | Deep learning technique to detect craniofacial anatomical abnormalities concentrated on middle and anterior of face in patients with sleep apnea | |
Daniele et al. | Endoscopic criteria in assessing severity of swallowing disorders | |
TWI848852B (en) | Application of medical imaging in rapid screening of sleep apnea | |
De Rosa et al. | The Future of Artificial Intelligence Using Images and Clinical Assessment for Difficult Airway Management |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |