WO2017141057A1 - Communication apparatus, method and computer program - Google Patents

Communication apparatus, method and computer program

Info

Publication number
WO2017141057A1
WO2017141057A1 (PCT/GB2017/050432)
Authority
WO
WIPO (PCT)
Prior art keywords
breathing
breath
subject
modulations
predefined
Prior art date
Application number
PCT/GB2017/050432
Other languages
English (en)
Inventor
Atul GAUR
David Kerr
Kaddour BOUAZZA-MAROUF
Alastair LUCAS
Original Assignee
University Hospitals Of Leicester Nhs Trust
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University Hospitals Of Leicester Nhs Trust filed Critical University Hospitals Of Leicester Nhs Trust
Publication of WO2017141057A1

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/24Speech recognition using non-acoustical features
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/08Detecting, measuring or recording devices for evaluating the respiratory organs
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/08Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816Measuring devices for examining respiratory frequency
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/08Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/087Measuring breath flow
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7246Details of waveform analysis using correlation, e.g. template matching or determination of similarity
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7253Details of waveform analysis characterised by using transforms
    • A61B5/726Details of waveform analysis characterised by using transforms using Wavelet transforms
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61FFILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F4/00Methods or devices enabling patients or disabled persons to operate an apparatus or a device not forming part of the body 
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/74Details of notification to user or communication with user or patient ; user input means
    • A61B5/7405Details of notification to user or communication with user or patient ; user input means using sound
    • A61B5/741Details of notification to user or communication with user or patient ; user input means using sound using synthesised speech
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • G06F2218/10Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • G06F2218/14Classification; Matching by matching peak patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • G06V2201/031Recognition of patterns in medical or anatomical images of internal organs
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems

Definitions

  • the present invention relates to a communication apparatus, method and computer program, and in particular to a communication apparatus which assists patients who are unable to otherwise communicate or make purposeful gestures because they have suffered a loss of voluntary muscle function, which affects their speech-producing mechanisms and gestures, for example patients suffering from 'Locked-in Syndrome'.
  • the invention extends to methods of communicating using the apparatus, and to uses of the apparatus for diagnosing a patient's condition.
  • Locked-in syndrome is a condition in which a patient is aware and awake, but cannot move or communicate verbally or by making purposeful gestures due to complete paralysis of nearly all voluntary muscles in the body, except for the eyes.
  • Total locked-in syndrome is a condition where the eyes may also be paralysed.
  • Locked-in syndrome is also known as cerebromedullospinal disconnection, de-efferented state, pseudocoma, and ventral pontine syndrome.
  • a communication apparatus for a subject unable to speak or make a purposeful gesture, the apparatus comprising: at least one sensor arranged, in use, to detect modulations in the subject's breathing; a signal processor arranged to perform a continuous wavelet transform to obtain a wavelet representation of a breath signal received from the at least one sensor, detect at least one peak in the wavelet representation of the breath signal, determine whether the modulations in the subject's breathing match a predefined breath signature by comparing a location of the at least one detected peak to a known location of at least one peak in the predefined breath signature; and communication means arranged to perform an action associated with the predefined breath signature.
  • the apparatus of the invention allows a person to communicate by using breath modulations. This will allow a subject or patient, who is otherwise unable to make a purposeful gesture (such as smile, raise eye brows, sniff, whistle or write etc.), to communicate with medical staff and family members, and thereby greatly improve their quality of life.
  • operation of the apparatus is simple to learn, and the apparatus will work in
  • the apparatus will allow the subject to construct, over a "learning period," their own symbolic language of breathing modulation patterns that represent ideas, words, phrases, etc. according to their own physical ability and personal preference. The subject thus will be able to build their own vocabulary aided by the apparatus, instead of being forced to conform to a pre-coded or rule-based communication format.
  • detecting the breath modulations in the form of changing breathing pattern using the apparatus of the invention is mainly dependent on voluntary control. Since no additional effort is required for a subject to modulate their pre-existing spontaneous breathing patterns, the apparatus of the invention can enable a subject to communicate regardless of their level of control over muscular activity or power.
  • the apparatus may therefore be arranged to be used by a patient who cannot speak due to a loss of voluntary muscle function, which affects their speech-producing mechanisms.
  • the apparatus may be arranged to be used by a patient suffering from 'Locked-in Syndrome'.
  • the modulations in the subject's breathing may be detected via the upper airway, including the nose, nasal cavity, mouth, oral cavity, nasopharynx and larynx, and/or anywhere in the breathing circuit, for example a tracheostomy tube.
  • the apparatus may comprise a face mask to which the at least one sensor is connected, where the face mask is, in use, placed over the subject's nose, mouth and/or trachea.
  • the apparatus may comprise a nasal cannula, which, in use, is inserted into a nasal orifice; and where the sensor detects modulations in the subject's breathing through a breathing circuit, the apparatus may comprise a tube connector, which, in use, is inserted into or connected to the breathing circuit.
  • the communication apparatus of the first aspect may comprise at least one sensor arranged, in use, to detect modulations in a subject's breathing, characterised in that the detected modulations are not detected via the nose, in particular the nasal cavity or the nasopharynx.
  • the apparatus may be arranged, in use, to detect modulations in the subject's breathing via the mouth, oral cavity and larynx, and/or anywhere in the breathing circuit, for example a tracheostomy tube.
  • the signal processor is further arranged to only detect peaks in the wavelet representation with an amplitude higher than a threshold value.
  • the signal processor may be arranged to set the threshold value as a predefined fraction of the maximum peak amplitude in the wavelet representation of the breath signal, for example 50% of the maximum peak amplitude.
  • the signal processor is arranged to use a weighted centroid method to detect the at least one peak.
  • the signal processor is arranged to use a k nearest neighbour (KNN) method to determine whether the modulations in the subject's breathing match the predefined breath signature.
  • the communication apparatus further comprises a signal pre-processor arranged to smooth the breath signal obtained from the at least one sensor and/or subtract a DC offset from said breath signal.
  • the action performed by the communication means comprises controlling a cursor to move in a predefined direction on a display unit.
  • the cursor may be controlled to spell out arbitrary words or phrases, or may be controlled to select other user interface elements such as images corresponding to certain activities or requests.
  • the action performed by the communication means comprises generating speech associated with the predefined breath signature.
  • the speech generated by the communication means can take different forms.
  • the communication apparatus further comprises one or more speakers arranged to reproduce the speech generated by the communication means as audio output.
  • the communication means can be configured to output a different representation of the generated speech, for example by generating textual representation of the speech.
  • the generated text can then be displayed on a screen or sent to an external device, for example, via a network connection or other suitable communications link.
  • the communication apparatus may further comprise a face mask to which the at least one sensor is connected, which face mask is, in use, placed over the subject's nose, mouth and/or trachea.
  • the modulations in the subject's breathing may comprise modulations in the pressure and/or timing of the breathing.
  • the modulations in the pressure and/or timing of a subject's breathing may be in the form of amplitude, frequency and/or phase.
  • the apparatus, in use, is first calibrated in order to establish a benchmark or control.
  • the apparatus may be used as a diagnostic tool for helping doctors to distinguish between suspected brain dead patients (e.g. coma or vegetative state), and those that are conscious in ambiguous circumstances, such as locked-in patients (with intact spontaneous respiratory/breathing activity which may be adequate not requiring any ventilatory support, or inadequate requiring ventilatory support).
  • a computer-implemented method of communicating via breathing modulations, comprising: performing a continuous wavelet transform to obtain a wavelet representation of a breath signal received from at least one sensor arranged, in use, to detect modulations in the subject's breathing; detecting at least one peak in the wavelet representation of the breath signal; determining whether the modulations in the subject's breathing match a predefined breath signature by comparing a location of the at least one detected peak to a known location of at least one peak in the predefined breath signature; and performing an action associated with the predefined breath signature if the modulations in the subject's breathing match said predefined breath signature.
  • a non-transitory computer-readable storage medium arranged to store computer program instructions which, when executed, perform a method according to the third aspect and/or any other methods disclosed herein.
  • Figure 1 schematically illustrates an apparatus for a subject unable to speak or make a purposeful gesture, according to an embodiment of the present invention;
  • Figure 2 illustrates locations on a patient where pressure and flow signatures can be measured, according to an embodiment of the present invention
  • Figure 3 is a graph showing an example of a breath signal outputted by the sensor of the apparatus shown in Fig. 1, according to an embodiment of the present invention;
  • Figure 4 is a flowchart showing a method of generating speech from breathing modulations, according to an embodiment of the present invention.
  • Figure 5 illustrates an example of a wavelet representation of the breath signal shown in Fig. 3, according to an embodiment of the present invention.
  • a communication apparatus is schematically illustrated according to an embodiment of the present invention.
  • the apparatus can be configured for use by a subject who is unable to speak or make a purposeful gesture.
  • the communication apparatus comprises at least one sensor 102 arranged, in use, to detect modulations in the subject's breathing.
  • An example of a breath signal outputted by the sensor of the apparatus shown in Fig. 1 is illustrated in Fig. 3.
  • the apparatus comprises a single sensor 102, but in other embodiments a plurality of sensors may be used.
  • the apparatus further comprises a signal processor 106 arranged to perform a continuous wavelet transform to obtain a wavelet representation of a breath signal received from the at least one sensor, detect at least one peak in the wavelet representation of the breath signal, determine whether the modulations in the subject's breathing match a predefined breath signature by comparing a location of the at least one detected peak to a known location of at least one peak in the predefined breath signature.
  • the signal processor 106 may be implemented in hardware or software.
  • the signal processor 106 comprises a processing unit which may include one or more general-purpose processors.
  • the processing unit 106 is configured to execute computer program instructions stored in memory 108, which can be any suitable non- transitory computer-readable storage medium. When executed, the computer program instructions cause the processing unit 106 to perform any of the methods disclosed herein. An example of a method performed by the processing unit 106 is described later with reference to Fig. 4.
  • the apparatus may optionally comprise a signal pre-processing unit 104 arranged to smooth the breath signal obtained from the at least one sensor and/or subtract a DC offset from said breath signal, and to send the pre-processed breath signal to the signal processor 106.
  • the signal pre-processing unit 104 is arranged to smooth the breath signal using a 5-point moving average filter, and subtract the signal mean in order to remove the DC offset.
  • the filtering may be carried out in real time in order to remove any high-frequency noise and to improve the signal quality.
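As a concrete sketch, the 5-point moving average and mean (DC-offset) subtraction described above might be implemented as follows; the function name and array handling here are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def preprocess(breath_signal):
    """Smooth the raw breath signal with a 5-point moving average and
    subtract the signal mean to remove the DC offset (illustrative
    sketch of the pre-processing unit 104)."""
    kernel = np.ones(5) / 5.0                       # 5-point moving average
    smoothed = np.convolve(breath_signal, kernel, mode="same")
    return smoothed - smoothed.mean()               # remove DC offset
```

The `mode="same"` convolution keeps the output the same length as the input so that later scale/space coordinates line up with the original samples.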
  • the apparatus further comprises a communication unit in the form of speech generating means 110 arranged to generate speech associated with the predefined breath signature if the modulations in the subject's breathing match said predefined breath signature.
  • the communication unit can comprise any suitable mechanism for enabling the subject to communicate, that is, any mechanism capable of performing an action which can convey information.
  • the generated speech may comprise single words (e.g. yes/no), sentences or expressions etc.
  • the speech generating means 110 may be physically separate from the signal processor 106, or, when a software implementation is used, the speech generating means 110 may comprise further software instructions executed on the same general-purpose processing unit as is used for the signal processor 106.
  • the speech generator is arranged to generate speech by retrieving predefined speech segments from a speech database 112.
  • the speech database 112 is included in local storage within the apparatus, however, in another embodiment the speech generator 110 may be arranged to communicate with a remote database.
  • the speech database may be stored on an Internet server and accessed via any suitable network connection.
  • the speech database 112 is arranged to store various predefined speech segments, each associated with a different predefined breath signature.
  • the speech segments are stored in the form of pre-recorded sound files, and the apparatus further comprises a speaker 114 arranged to reproduce the speech generated by the speech generating means.
  • the speech generator 110 may generate speech in a different manner.
  • speech segments may be stored as text files, which can be converted into audible speech by the speech generator 110 using a suitable text-to-speech algorithm.
  • the speech generator 110 may be arranged to output a textual representation of the generated speech to a display unit and/or another device.
  • the communication unit 110 may be arranged to perform another type of action in response to a particular breathing pattern.
  • the communication unit 110 may be arranged to control an on-screen cursor on a display unit to move in a certain direction when a particular breathing pattern is detected. In this way the patient may modulate their breathing pattern to move the cursor, for example to point to on-screen user interface (UI) elements and/or to spell words or to select phrases from an on-screen graphical interface.
  • This embodiment enables a subject to control the communication unit 110 to generate arbitrary speech rather than selecting pre-programmed words or phrases, by using the cursor to spell out arbitrary words or phrases as necessary.
  • the apparatus may not necessarily generate speech but can include a UI comprising a plurality of images representing different activities or requests, for example sleep/food/family etc., and the subject can communicate by controlling the cursor to move to the desired activity or request.
  • the sensor 102 of the communication apparatus is configured to be connected via a tube 7 to an existing valve on a standard face mask 5 placed over the patient's nose 16 and mouth 18, or to an outlet valve 22 which is provided on a tracheostomy tube 23, which is connected to the patient's trachea 20.
  • the sensor 102 can thereby detect the changes, deviations and/or modifications in the patient's breathing pattern, for example modulations in the pressure and/or timing of the breathing.
  • the apparatus can work with or without the use of a ventilator (not shown).
  • the patient's breathing patterns can be detected by a nasal cannula 15, which is inserted at the nasal orifice 16, and/or via breathing through the mouth with the face mask 5.
  • when a ventilator is used, the patient's breathing can be detected directly through the trachea 20 via the tracheostomy tube 23 and associated valve 22 or breathing circuit.
  • the modulations in the pressure and/or timing of a subject's breathing may be in the form of amplitude, frequency and/or phase.
  • the magnitude of the modulations will depend on the breathing circuit being monitored, as well as on the patient's medical condition and capability. For example, when using a continuous positive airway pressure (CPAP) mask to aid breathing, typical normal breath periods may be about 2.7 seconds, whilst faster modulation rates may have a period of about 1.2 seconds.
  • the modulations that are detected may comprise changes in the pressure of the air that is breathed in and/or out by the subject.
  • Air pressure may be detected using any commonly available pressure sensor.
  • the change in air pressure for normal breathing whilst using a face mask under CPAP ventilation may be between 4 and 18 cm H2O, with a respiration rate of around 20 breaths per minute.
  • the apparatus is first calibrated during normal breathing in order to establish a benchmark or control.
  • the calibration may be performed at a certain time of day and/or while the subject is performing a certain activity. For example, calibration may be performed while the subject is awake, or asleep, or having meals.
  • the calibrated benchmark may provide an indication of the subject's normal breathing pattern during routine activity. From this, it is possible to determine when the subject has modulated their breathing, for example by pausing or holding their breath.
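One simple way to use the calibrated benchmark to detect a modulation such as a breath hold is to compare recent signal variation against the baseline. The function name, window length and threshold factor below are illustrative assumptions for a sketch, not taken from the patent:

```python
import numpy as np

def is_breath_hold(signal, baseline_std, window=200, factor=0.2):
    """Flag a breath hold when the variation of the most recent samples
    falls well below the variation of the calibrated normal-breathing
    benchmark (a hypothetical heuristic for illustration)."""
    return float(np.std(signal[-window:])) < factor * baseline_std
```

At a 100 Hz sampling rate a 200-sample window covers about 2 seconds, comparable to the normal breath periods discussed in this document.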
  • the apparatus is capable, in use, of locating and extracting maximum and minimum pressure data from the signal, which preferably corresponds to air pressure of the subject's breathing.
  • the apparatus may be capable, in use, of determining breath period or breath frequency from the signal, as a function of time and in the form of a signal spectrum or multi-resolution scale/position data.
  • a multi-resolution technique may comprise wavelet transforms or Hidden Markov Models.
  • detecting pressure changes over time produces a waveform, and wavelet analysis may then be used to locate frequency modulations within the breathing cycle.
  • the apparatus is preferably arranged to detect frequency modulation of the normal breathing signal in terms of pressure versus time.
  • a flowchart is illustrated showing a method of generating speech from breathing modulations, according to an embodiment of the present invention.
  • the method can be performed by hardware such as a field programmable gate array (FPGA) or application-specific integrated circuit (ASIC), or may be implemented by software instructions executed on a processor, as described above with reference to Fig. 1.
  • in step S401, the signal processor 106 performs a continuous wavelet transform to obtain a wavelet representation of the breath signal received from the sensor 102.
  • a Daubechies 4 (db4) wavelet with scales 1 to 128 is used; however, in other embodiments different forms of wavelet may be used in the continuous wavelet transform, for example a Morlet wavelet.
  • a wavelet representation of the breath signal is obtained.
  • the wavelet representation comprises a 2D scale/space map of wavelet amplitude peaks.
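The transform itself can be sketched in a few lines of numpy. This version uses a Morlet-style wavelet (named above as an alternative to db4) over scales 1 to 128; the wavelet definition, support width and function names are illustrative assumptions:

```python
import numpy as np

def morlet(t, w=5.0):
    """Real-valued Morlet-style mother wavelet (illustrative)."""
    return np.cos(w * t) * np.exp(-t ** 2 / 2.0)

def cwt(signal, scales):
    """Continuous wavelet transform: convolve the signal with scaled,
    normalised copies of the mother wavelet, producing a 2D scale/space
    map of wavelet amplitudes."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        t = np.arange(-3 * s, 3 * s + 1)            # wavelet support
        psi = morlet(t / s) / np.sqrt(s)            # scale and normalise
        out[i] = np.convolve(signal, psi, mode="same")
    return out
```

For a 1000-sample recording (10 seconds at 100 Hz, as in the learning phase described later) the wavelet support remains shorter than the signal at every scale up to 128, so `mode="same"` preserves the signal length.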
  • step S402 the signal processor 106 detects any peaks in the wavelet representation of the breath signal.
  • the signal processor is arranged to apply thresholding to the wavelet representation so that only peaks with an amplitude higher than a threshold value are detected. Thresholding allows the peaks to be isolated in the scale/space domain.
  • Figure 5 illustrates the 2D wavelet map of the breath signal shown in Fig. 3 after applying thresholding at 50% of the maximum peak amplitude. In other embodiments a different threshold may be set, for example a fixed amplitude or a different fraction of the maximum peak amplitude.
  • the signal processor 106 is arranged to determine coordinates of each peak in the wavelet space by using a weighted centroid method. In other embodiments, any other suitable method of identifying a location of the peak within the scale/space domain may be used.
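The thresholding and weighted-centroid localisation described above might look as follows; the use of scipy.ndimage for connected-region labelling, and the function name, are assumptions made for illustration:

```python
import numpy as np
from scipy import ndimage

def peak_centroids(wavelet_map, frac=0.5):
    """Keep only wavelet amplitudes above frac * maximum amplitude, then
    return the amplitude-weighted centroid (scale, position) of each
    surviving connected region in the 2D scale/space map."""
    mask = wavelet_map >= frac * wavelet_map.max()
    labels, n = ndimage.label(mask)                 # connected regions
    return ndimage.center_of_mass(wavelet_map, labels, list(range(1, n + 1)))
```

With the default `frac=0.5` this matches the 50%-of-maximum threshold used in the embodiment of Fig. 5.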
  • the scale/space coordinates obtained from the wavelet map shown in Fig. 5 using a weighted centroid approach are shown in the following table:
  • the signal processor 106 determines whether the modulations in the subject's breathing match a predefined breath signature by comparing a location of the at least one detected peak to a known location of at least one peak in the predefined breath signature. For example, the detected peak locations may be compared against known peak locations for a plurality of different predefined breath signatures stored in a speech database, as described above.
  • the signal processor 106 may use any suitable pattern recognition algorithm to check whether the detected breath modulations match one of the predefined breath signatures.
  • the signal processor 106 is arranged to use a k nearest neighbour (KNN) method to determine whether the modulations in the subject's breathing match the predefined breath signature.
  • the signal processor may be arranged to set a higher value of k when it is necessary to distinguish between a larger number of breathing patterns, that is, when the number of predefined breath signatures increases.
  • embodiments of the invention are not limited to a KNN pattern recognition method.
  • a different type of pattern recognition algorithm may be used, for example a decision tree, neural network, support vector machine, fuzzy logic, and so on.
  • a combination of multiple different pattern recognition methods may be used.
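A minimal KNN matcher over peak-coordinate features could be written as below. It assumes each breathing pattern is summarised as a fixed-length feature vector, which is an illustrative simplification; the names are not from the patent:

```python
import numpy as np

def knn_match(features, stored_features, stored_labels, k=3):
    """Classify a breath pattern by majority vote among the k stored
    signatures whose peak-coordinate features are nearest in Euclidean
    distance (illustrative KNN sketch)."""
    dists = np.linalg.norm(stored_features - features, axis=1)
    nearest = stored_labels[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]                # majority vote
```

As noted above, `k` might be increased as the number of stored signatures grows, so that the majority vote remains robust.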
  • in step S404, the speech generator generates the speech associated with the matched breath signature. If no match is found at step S403, then no speech is generated.
  • the process of generating speech from a measured breath signal may be referred to as the 'translation phase'.
  • the communication apparatus is initially configured during a 'learning phase', in which a database of known breath signatures is assembled using a training method as follows. Firstly, the user records a modulated breath signal over a fixed sampling period, for example 10 seconds. In the present embodiment, this signal is recorded at a sampling rate of 100 Hertz (Hz) to give a total of 1000 samples of pressure. Then, the user repeats this process a number of times using the same purposely-made breath signal, for example at least a further four times to obtain five recordings. Next, the five recordings are processed using steps S401 and S402 of the method shown in Fig. 4 to build up a database of predefined breathing patterns.
  • a pattern recognition algorithm similar to the one used in step S403 may be applied to check that the reference patterns being generated are sufficiently different to one another to be capable of being distinguished.
  • a word or phrase is selected to be associated with this group of signals.
  • the word or phrase may be preprogrammed, or may be user-defined. The process can be repeated to set up as many words/phrases as necessary.
  • the learning phase may further include a step of verifying the database for data quality and veracity using a "leave one out" cross-verification process, whereby if any one class is in danger of being non-unique, then the user is prompted to choose a different breath pattern for that class or to record a new set of samples for that pattern.
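The "leave one out" cross-verification described above can be sketched with a 1-nearest-neighbour check; all names are illustrative, and the feature representation is assumed to be one fixed-length vector per recording:

```python
import numpy as np

def leave_one_out_ok(features, labels):
    """Return True if every stored recording is still matched to its own
    class when held out and classified by its nearest neighbour; a False
    result suggests a class is non-unique and the user should record a
    different breath pattern or a new set of samples for it."""
    for i in range(len(features)):
        rest = np.delete(features, i, axis=0)       # hold recording i out
        rest_labels = np.delete(labels, i)
        dists = np.linalg.norm(rest - features[i], axis=1)
        if rest_labels[np.argmin(dists)] != labels[i]:
            return False
    return True
```

A failed check would trigger the prompt described above, asking the user to choose a different breath pattern for the ambiguous class.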
  • the modulations detected by the sensor may include modulations in the timing of the subject's breathing, for example as they hold their breath for a defined period of time.
  • the subject increases and then decreases the frequency of their breathing so as to match a predefined breath signature.
  • the breath signature illustrated in Fig. 3 is merely exemplary, and should not be seen as being limiting.
  • Embodiments of the invention can recognise any type of low-noise breath pattern, and are not limited to detecting patterns which comprise regularised, ordered bursts of frequency such as the example shown in Fig. 3.
  • the communication apparatus of the invention can act as a diagnostic tool for helping doctors to distinguish between suspected brain dead patients (coma or vegetative state) who are either breathing spontaneously or have intact spontaneous breathing efforts which are supported by a ventilator, and those that are conscious in ambiguous circumstances, such as locked-in patients.
  • the communication apparatus also allows speech communication in the ICU between patients and the outside world, with particular application to those who are unable to communicate due to an impaired speech production mechanism and loss of the ability to make purposeful gestures, with or without breathing support on ventilators.
  • the communication apparatus can be effectively used to allow patients to communicate by the simple modulation of their breathing patterns, for example by hyperventilating or holding their breath. Minute voluntary changes in the breathing circuit such as pressure, flow or phase/time (i.e. holding the breath or pausing) can initiate and maintain a dialogue between patient and outside world.
  • the communication apparatus will help with communication and diagnosis for a wide range of patients, from those with Locked-in syndrome to any patients who are unable to communicate verbally for any reason.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Physiology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Pulmonology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Vascular Medicine (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A communication apparatus is provided for a subject who is unable to speak or to make purposeful gestures, the apparatus comprising at least one sensor arranged to detect, in use, modulations in the subject's breathing; signal processing arranged to perform a continuous wavelet transform to obtain a wavelet representation of a breathing signal received from said sensor, to detect at least one peak in the wavelet representation of the breathing signal, and to determine whether the modulations in the subject's breathing match a predefined breath signature by comparing the location of said detected peak(s) with the known location of at least one peak in the predefined breath signature; and communication means arranged to perform an action associated with the predefined breath signature if the modulations in the subject's breathing match that signature. A computer-implemented method for communicating by means of breathing modulations is also provided.
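The processing pipeline named in the abstract (continuous wavelet transform, peak detection, comparison of detected peak locations against the known peak locations of a predefined breath signature) could be sketched roughly as below. This is an illustrative reconstruction under stated assumptions, not the patented implementation: the choice of a Ricker wavelet, the scale set, the relative threshold and all function names are assumptions introduced for the example.

```python
import numpy as np

def ricker(points, a):
    """Ricker ("Mexican hat") wavelet of characteristic width `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1.0 - (t / a) ** 2) * np.exp(-t ** 2 / (2.0 * a ** 2))

def cwt_peak_locations(signal, widths, rel_threshold=0.5):
    """Discretised continuous wavelet transform followed by peak detection.

    Correlates the breathing signal with Ricker wavelets at several widths,
    collapses the scales by taking the maximum response at each sample, and
    returns the indices of local maxima above rel_threshold * (global max).
    """
    signal = np.asarray(signal, dtype=float)
    responses = np.array([
        # odd-length kernel so the symmetric wavelet is centred on a sample
        np.convolve(signal, ricker(min(10 * w + 1, len(signal)), w), mode="same")
        for w in widths
    ])
    ridge = responses.max(axis=0)
    thresh = rel_threshold * ridge.max()
    return [i for i in range(1, len(signal) - 1)
            if ridge[i] > thresh
            and ridge[i] > ridge[i - 1] and ridge[i] >= ridge[i + 1]]

def matches_signature(peaks, signature_peaks, tol):
    """True if each detected peak lies within `tol` samples of the
    corresponding known peak of the predefined breath signature."""
    if len(peaks) != len(signature_peaks):
        return False
    return all(abs(p - s) <= tol for p, s in zip(peaks, signature_peaks))
```

In practice the input would be a band-filtered signal from the breathing sensor, and each stored signature would carry its own peak template and matching tolerance.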
PCT/GB2017/050432 2016-02-19 2017-02-20 Communication apparatus, method and computer program WO2017141057A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1602897.9A GB2547457A (en) 2016-02-19 2016-02-19 Communication apparatus, method and computer program
GB1602897.9 2016-02-19

Publications (1)

Publication Number Publication Date
WO2017141057A1 true WO2017141057A1 (fr) 2017-08-24

Family

ID=55752884

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2017/050432 WO2017141057A1 (fr) 2016-02-19 2017-02-20 Communication apparatus, method and computer program

Country Status (2)

Country Link
GB (1) GB2547457A (fr)
WO (1) WO2017141057A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL273993A (en) * 2020-04-16 2021-10-31 Yeda Res & Dev Methods and instrument for assessing wakefulness disorders in people

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003000125A1 * 2001-06-22 2003-01-03 Cardiodigital Limited Wavelet analysis of pulse oximetry signals
WO2006066337A1 * 2004-12-23 2006-06-29 Resmed Limited Method for detecting and discriminating breathing patterns from respiratory signals

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6290654B1 (en) * 1998-10-08 2001-09-18 Sleep Solutions, Inc. Obstructive sleep apnea detection apparatus and method using pattern recognition
US20070106501A1 (en) * 2005-11-07 2007-05-10 General Electric Company System and method for subvocal interactions in radiology dictation and UI commands
US7845350B1 (en) * 2006-08-03 2010-12-07 Cleveland Medical Devices Inc. Automatic continuous positive airway pressure treatment system with fast respiratory response
WO2011138794A1 * 2010-04-29 2011-11-10 Narasingh Pattnaik A breath actuated method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A. PLOTKIN ET AL: "Sniffing enables communication and environmental control for the severely disabled", PROCEEDINGS NATIONAL ACADEMY OF SCIENCES PNAS, vol. 107, no. 32, 26 July 2010 (2010-07-26), US, pages 14413 - 14418, XP055366427, ISSN: 0027-8424, DOI: 10.1073/pnas.1006746107 *
VANESSA CHARLAND-VERVILLE ET AL: "Detection of response to command using voluntary control of breathing in disorders of consciousness", FRONTIERS IN HUMAN NEUROSCIENCE, vol. 8, 23 December 2014 (2014-12-23), pages 1 - 5, XP055366237, DOI: 10.3389/fnhum.2014.01020 *

Also Published As

Publication number Publication date
GB2547457A (en) 2017-08-23
GB201602897D0 (en) 2016-04-06

Similar Documents

Publication Publication Date Title
Gholami et al. Replicating human expertise of mechanical ventilation waveform analysis in detecting patient-ventilator cycling asynchrony using machine learning
CN105342563B Detection of sleep status
JP6199330B2 Identification of Cheyne-Stokes breathing patterns using oximetry signals
CN109893732B Method for detecting patient-ventilator asynchrony in mechanical ventilation based on a recurrent neural network
Shi et al. Theory and Application of Audio‐Based Assessment of Cough
Fang et al. A novel sleep respiratory rate detection method for obstructive sleep apnea based on characteristic moment waveform
CN109273085 Method for establishing a library of pathological breath sounds, detection system for respiratory diseases, and method for processing breath sounds
US20080086035A1 (en) Detecting time periods associated with medical device sessions
EP3868293A1 (fr) Système et procédé de surveillance de schémas respiratoires pathologiques
US10573335B2 (en) Methods, systems and apparatuses for inner voice recovery from neural activation relating to sub-vocalization
US20220313155A1 (en) Flow-based sleep stage determination
WO2014045257A1 (fr) Système et procédé de détermination de la respiration d'une personne
KR20210066271A Prescription system using medical deep learning in the field of anaesthesia
US11752288B2 (en) Speech-based breathing prediction
CN114828743A (zh) 自动和客观的症状严重程度评分
CN113941061B Method, system, terminal and storage medium for identifying patient-ventilator asynchrony
US20240324951A1 (en) Speech-Controlled Health Monitoring Systems
US20230045078A1 (en) Systems and methods for audio processing and analysis of multi-dimensional statistical signature using machine learing algorithms
WO2017141057A1 (fr) Communication apparatus, method and computer program
WO2024152392A1 Method for identifying a patient-ventilator asynchrony waveform in a hybrid mechanical ventilation mode, and associated device
Dubnov Signal analysis and classification of audio samples from individuals diagnosed with COVID-19
CN112494013 Health management apparatus, method, electronic device and storage medium
Castro et al. Real-time identification of respiratory movements through a microphone
KR102610271 Method for providing content to elicit thought data corresponding to a user's behaviour data, and computing device using the same
KR102548041 Method, apparatus and program for providing interaction with patients with disorders of consciousness

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17706876

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17706876

Country of ref document: EP

Kind code of ref document: A1