WO2020208291A1 - Multimodal measuring and tracking of middle ear otitis by artificial intelligence

Info

Publication number: WO2020208291A1
Application number: PCT/FI2020/000007
Authority: WO (WIPO/PCT)
Prior art keywords: measurement, tympanic membrane, data, measuring, patient
Other languages: French (fr)
Inventors: Esko Alasaarela, Manne Hannula, Pentti Kuronen, Tuomas Holma
Original assignee: Otometri Oy
Application filed by Otometri Oy


Classifications

    • G06T 7/0012 Image analysis; inspection of images: biomedical image inspection
    • A61B 1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/000096 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope using artificial intelligence
    • A61B 1/00016 Operational features of endoscopes characterised by signal transmission using wireless means
    • A61B 1/227 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, for ears, i.e. otoscopes
    • A61B 5/0022 Remote monitoring of patients using telemetry: monitoring a patient using a global network, e.g. telephone networks, internet
    • A61B 5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/01 Measuring temperature of body parts; diagnostic temperature sensing, e.g. for malignant or inflamed tissue
    • A61B 5/12 Audiometering
    • A61B 5/6817 Arrangements of sensors specially adapted to be attached to a specific body part: ear canal
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 Classification of physiological signals or data involving training the classification device
    • A61B 8/08 Diagnosis using ultrasonic, sonic or infrasonic waves: detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/12 Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
    • A61B 8/4416 Constructional features related to combined acquisition of different diagnostic modalities, e.g. combination of ultrasound and X-ray acquisitions
    • A61B 8/463 Displaying multiple images or images and diagnostic data on one display
    • A61B 8/465 Displaying means adapted to display user selection data, e.g. icons or menus
    • A61B 8/5223 Processing of medical diagnostic data for extracting a diagnostic or physiological parameter
    • A61B 8/58 Testing, adjusting or calibrating the diagnostic device
    • G16H 40/67 ICT specially adapted for the management or operation of medical equipment or devices: remote operation
    • G16H 50/20 ICT specially adapted for medical diagnosis: computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2207/10016 Image acquisition modality: video; image sequence
    • G06T 2207/10024 Image acquisition modality: color image

Definitions

  • The invention relates to a method for indicating and measuring the clinical condition of otitis media and for monitoring the healing process.
  • The invention also relates to a measurement and monitoring arrangement for otitis media, to a server utilized in the arrangement, and to a device and a computer program used in the measurement and monitoring.
  • Acute otitis media is a common disease in young children.
  • The acute form can also become chronic, forming a prolonged inflammatory process in the middle ear and ear canal (inflammation of the middle ear and ear canal) or an adhesive-like secretion, in which case we speak of adhesive otitis media.
  • Otitis media most commonly affects children aged 1 to 3 years; more than 90% of cases occur in children under 5 years of age.
  • The main predisposing factors are respiratory infections.
  • Detection of otitis media at home is based on symptoms, which are usually cough, fever and ear pain associated with a respiratory infection. Fever arises in only a quarter of patients, while ear pain occurs in about 75%. In some cases, hearing also deteriorates during the acute phase. Older children can describe their pain symptoms, but in young children the pain must be inferred from crying, restlessness and touching the ear by hand. Detecting otitis media is not easy at home, and not always even at the doctor's office. It is especially difficult at home to decide whether a doctor's appointment is needed.
  • The so-called gold standard in the examination of otitis media is to visually inspect the condition of the tympanic membrane through a funnel with sufficient lighting and to verify the movement sensitivity of the tympanic membrane, in good lighting, with a pressure-producing bellows.
  • A ruptured tympanic membrane and secretion leaking out of it is a clear sign of otitis media.
  • Redness, turbidity or secretion seen through the tympanic membrane also provides additional information to the experienced physician for assessing the condition of the middle ear. Redness of the tympanic membrane may, however, be due to causes other than otitis media.
  • Otitis media imposes high costs on health care and places an unreasonable burden on doctor's appointments, especially when waves of respiratory infections occur in day care centres and schools in the area. It is estimated that most, up to 70%, of current antibiotic regimens could be omitted if one dared to rely on the body's natural healing process. This would require a means by which otitis media could be more accurately diagnosed and its disease process monitored at home. In Finland alone, this would mean annual savings of tens of millions of euros, and globally the corresponding potential is enormous. The savings potential increases further when parental absences from work due to ear infections in children and the transport of children to the doctor's office are taken into account.
  • Different methods are used to measure and monitor otitis media, either alone or in combination. These include a visual observation otoscope with an attached air bellows (air pressure tympanometer), a tuning fork, devices based on acoustic reflectometry, and devices based on ultrasonic echoes. In addition, ear thermometers based on infrared radiation are known. Examples of optical video camera-based otoscopes include U.S. Patent Nos. 5,363,839 and 5,919,130, and U.S. Application Nos. 20110152621, 20150351620 and 20150065803A1. Examples of tympanometer inventions using atmospheric pressure include U.S. 4,688,582 and U.S.
  • Examples of inventions based on acoustic reflectometry include U.S. Patents 4,601,295, U.S. 5,699,809 and U.S. 5,594,174.
  • Examples of inventions based on ultrasonic echoes include U.S. Patent 7,632,232 and U.S. Application No. 20170014053A1.
  • An example of an ear thermometer patent is U.S. 6,435,711.
  • Examples of inventions combining the above methods are U.S. 8,858,430, U.S. 20030171655A1 and U.S. 6,126,614.
  • The object of the invention is to present a method and arrangement combining innovative multimodal measurement and collection of otitis media data from patients with a learning system using artificial intelligence, in which the status of otitis media can be assessed using not only the patient's own measurement and analysis history but also measurement and analysis histories recorded from other similar patient cases.
  • The method of measuring and monitoring otitis media is characterized in that measurements are made with multimodal acoustic (including ultrasound), optical, electronic and/or chemical sensors, that the measurement results are processed by a learning artificial intelligence operating in a data network so that they are compared with the values of reference data collected from similar patients, and that the reference data are accumulated by storing new measurement results together with a diagnosis determined by a physician or other expert.
  • The objects of the invention are achieved by an arrangement for measuring and monitoring otitis media, which comprises
  • The server according to the invention, which is utilized in the measurement and monitoring of otitis media, is characterized in that the server comprises:
  • The arrangement and server according to the invention are characterized in that the information about patients is stored in two separate databases: 1) a patient database, from which identifiable patient information can easily be deleted or transferred to a destination requested by the patient or his trustee, and 2) an anonymous database containing the information necessary for the teaching of the artificial intelligence, from which the identification information has been deleted in accordance with the regulations (a minimal sketch of this split is given below).
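A minimal sketch of the two-database split described above, assuming illustrative field names; the patent does not name the record fields, so everything here is a hypothetical stand-in:

```python
# Hypothetical sketch: identifiable fields go only to the patient database,
# while a copy stripped of identification information goes to the anonymous
# (AI teaching) database. Field names are illustrative assumptions.

IDENTIFYING_FIELDS = {"patient_id", "name", "date_of_birth", "guardian_contact"}

def split_record(measurement_record: dict) -> tuple[dict, dict]:
    """Return (patient_db_entry, anonymous_db_entry) for one measurement."""
    patient_entry = dict(measurement_record)  # full record, deletable on request
    anonymous_entry = {k: v for k, v in measurement_record.items()
                       if k not in IDENTIFYING_FIELDS}
    return patient_entry, anonymous_entry
```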
  • An advantage of the invention is that otitis media can be assessed and monitored at home with a mobile device that is connected to a compact probe and that is reasonably easy for anyone to learn to use.
  • The learning artificial intelligence system according to the invention makes it possible in many cases to make a reliable diagnosis and to classify the situation, at least in terms of whether a visit to a doctor is required.
  • The arrangement also makes it possible to use telemedicine services effortlessly in cases requiring further assessment.
  • The invention has the advantage that its determination of the severity of the disease state is automatically refined as the number of performed measurements increases, especially those for which a physician or other expert gives a definition and treatment instructions.
  • A further advantage of the invention is that the number of visits to the doctor and the need for antibiotic treatments are reduced, which are significant public economic and therapeutic benefits.
  • The invention is described in detail below. The description refers to the accompanying drawings, in which
  • FIG. 1a shows an apparatus arrangement for measuring and monitoring otitis media according to an embodiment of the invention,
  • FIG. 1b shows, step by step and by way of example, the main functions to be performed in the measurement and monitoring method according to the invention,
  • FIG. 2a shows the structure of a probe of an otitis media meter according to an embodiment of the invention,
  • FIG. 2b shows, step by step and by way of example, the functions to be performed at the measuring head according to the invention during the measuring event,
  • FIG. 2c shows, by way of example, the principle of operation of a probe suitable for acousto-optical stroboscopic measurement,
  • FIG. 3 shows the different stages of operation of a user interface of a device for measuring and monitoring otitis media according to the invention,
  • FIG. 4 shows the user interface of a doctor remotely utilizing the measurement arrangement for otitis media according to the invention and the functions to be performed in that user interface, and
  • FIG. 5 shows, step by step and by way of example, the functions to be performed in the databases and in the teaching of the artificial intelligence in the measurement and monitoring arrangement according to the invention.
  • Figure 1a shows in outline a measurement arrangement according to the invention comprising a learning artificial intelligence unit 11 and a database unit 15 located in the cloud system 10 of the Internet 1, a probe 20 inserted into the ear of a patient 2, an application 32 operated on a mobile platform 30 of the user 3 by means of a user interface 31, and an application 42 operated on a mobile platform 40 of the remotely working physician 4 by means of a user interface 41.
  • The artificial intelligence unit 11 located in the cloud system 10 preferably comprises an artificial intelligence processor 12 for processing measurement data and an algorithm generator 13 for generating learning computational algorithms.
  • The database unit 15 is divided into two parts: the personalized data of the patients are handled in the patient database 16 and the non-personalized data of all patients (or as many as possible) in the anonymous database 17.
  • The use of the measuring arrangement according to the invention is outlined by way of example in Figure 1b.
  • The probe is connected to the mobile device 30 either by wire to a headphone or universal connector, or wirelessly, for example via a Bluetooth or NFC connection.
  • The application 32 performing the inventive function is opened and activated.
  • The application performs the calibration of the measurement arrangement, unless it is found in the memory that calibration has already been performed for this probe and mobile device combination and no longer than a specified period of time has elapsed since the calibration.
  • In step 104, the user places the probe tip at the mouth of the patient's left (or right) ear canal, and in step 105 clicks the left ear trigger on the user interface 31 of the mobile device 30.
  • In step 106, the user waits until the probe 20 has performed the measurement event under the control of the mobile device 30 and the application 32 has received the measurement data. If necessary, through step 107, the user next returns to step 104 and performs measurement steps 104, 105 and 106 for the right (or left) ear as well.
  • In step 108, the user receives a report, processed automatically in the cloud system 10, on the user interface 31 of the mobile device 30, from which he sees the condition of the left and right ear separately, both as a numeric value (e.g. on a scale of 0 to 100, where small values indicate a sick and large values a healthy ear) and as a colour code (e.g. with the colours green, yellow and red, where a green earlobe image means a healthy ear, red a sick ear, and yellow a borderline case); an illustrative mapping is sketched below.
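An illustrative sketch of the score-to-colour mapping described above; the thresholds are assumptions chosen for illustration, not values taken from the patent:

```python
# Map a 0-100 ear condition score (small = sick, large = healthy) to the
# traffic-light colour code shown in the report. Thresholds are assumptions.

def colour_code(score: float, red_below: float = 40.0,
                green_above: float = 70.0) -> str:
    """Return green / yellow / red for a 0-100 ear condition score."""
    if score >= green_above:
        return "green"   # healthy ear
    if score < red_below:
        return "red"     # sick ear, reason to contact a physician
    return "yellow"      # borderline case

print(colour_code(85))   # -> green
```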
  • In step 109, the user selects how to continue based on the received report. If the report shows green, the user terminates the measurement event in step 110 and the data are stored in both the patient database 16 and the anonymous database 17.
  • In step 111, the user 3 contacts the attending physician 4 if the report has given a reason to do so (red).
  • In step 120, the remote physician 4 retrieves from the patient database 16 the data and measurement reports of the patient 2 via his application 42 on his own mobile device 40 and draws conclusions, and in step 121 takes the necessary actions, such as inviting the patient in, prescribing and instructing the user, or calming the user down with a message: "not serious".
  • In step 122, the physician sends to the anonymous database 17 information about the nature, condition and causes of the disease, as well as possible antibiotic regimens and other treatments, which in step 123 the algorithm generator 13 of the artificial intelligence unit 11 utilizes to refine the algorithms used by the artificial intelligence processor 12. Thereafter, in step 124, the event also ends for the physician.
  • The artificial intelligence unit 11 sends a message to the user's mobile device 30 requesting re-measurement of the patient's ears after a set time (for example 4 to 6 weeks). The message also asks for information about the patient's recovery progress, possible antibiotic regimens and other treatment measures.
  • The artificial intelligence unit 11 connects this information to the patient database 16 for use as teaching data to refine the patient's personalized algorithms, but also, as anonymized teaching data, to the anonymous database 17 for use in refining the general algorithms in the algorithm generator 13.
  • Figure 2a schematically shows a probe 20 used in an arrangement according to the invention.
  • The probe is connected to the mobile device 30 either by cable 21 or wirelessly via a Bluetooth link, NFC link or other wireless connection.
  • The electronics package 22 contains all the electronics needed by the probe 20 and also acts as a power source for the different sensor solutions.
  • The energy required for operating power comes either from the mobile device via the cable 21 or from a battery placed in the measuring head 20, which is not shown separately in the picture and which is charged by known techniques.
  • The speaker 23 is needed to generate the excitation sound of the acoustic reflectometer, and the microphone 24 to receive the echoes reflected from the ear.
  • The chamber 25 is dimensioned for the reflection required by the reflectometry measurement, where the chamber 25 cooperates with the adapter tube 26, the ear canal 27 and the middle ear 28 as a resonator, in which sound excitations of different frequencies from the speaker 23 are reflected back.
  • The relative positions and intensities of the reflections depend not only on the individual structure of the ear but also on the freedom of movement of the tympanic membrane 29.
  • The probe 20 of Figure 2a also comprises other sensors suitable for examining the ear.
  • The compact video camera 201 produces an image of the tympanic membrane illuminated by a compact light source 202, preferably arranged around the lens of the camera.
  • The illumination may be brought closer to the tympanic membrane by placing the light sources 202 at the mouth of the adapter tube 26 or by introducing the light into the ear canal 27 with an optical fibre.
  • The camera image sensor 203 may have one or more infrared-sensitive elements for measuring the temperature of the tympanic membrane, or there may be one or more separate infrared sensors 204 for measuring the temperature in the vicinity of the probe tip.
  • Other sensors can also be added inside the probe, for example a pressure sensor, if an air pressure pulse supply accessory used in a tympanometer is connected to the probe.
  • FIG. 2b shows, in stages, the functions performed in the measuring head according to the invention during the measuring operation.
  • The acoustic reflectometry measurement is started in step 221.
  • First, the probe 20 is connected to the mobile device 30 and calibration 103 is performed if necessary, in which the following measurement (steps 222 to 225) is performed with the adapter tube 26 of the probe 20 in air (i.e. not inserted into the ear); after this the measurement function is started (105, Fig. 1b).
  • The mobile device 30, in cooperation with the artificial intelligence unit 11, generates a stepped excitation signal with successive short periods of signals starting at low frequencies (e.g., 500 Hz) and continuing to high frequencies (e.g., 5000 Hz).
  • The electronics 22 of the probe 20 amplify the signal and supply it to the speaker 23, whereupon the acoustic excitation signal goes to the ear.
  • The sound waves are reflected from the tympanic membrane 29 and the back surfaces of the middle ear 28 and are received by the microphone 24 (step 223).
  • The data measured in this way are suitable for analysis in the form of a frequency spectrum (see the sketch below).
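A minimal sketch of this stage, assuming a stepped tone excitation from 500 Hz to 5000 Hz and spectrum analysis of the received microphone signal; the sample rate, step length and step count are illustrative assumptions:

```python
import numpy as np

FS = 44100                          # sample rate (Hz), typical for mobile audio
STEP_LEN = 0.05                     # 50 ms per tone period (assumption)
FREQS = np.linspace(500, 5000, 10)  # excitation frequencies (Hz)

def stepped_excitation() -> np.ndarray:
    """Successive short sine periods stepping from low to high frequency."""
    t = np.arange(int(FS * STEP_LEN)) / FS
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in FREQS])

def spectrum(received: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Magnitude spectrum of the received echo signal for AI analysis."""
    mag = np.abs(np.fft.rfft(received)) / len(received)
    freqs = np.fft.rfftfreq(len(received), d=1 / FS)
    return freqs, mag
```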
  • The acoustic excitation signal may also be one or more bursts oscillating at different frequencies, the sound waves reflected from different surfaces being received by the microphone 24.
  • The data measured in this way are suitable for analysis as envelopes, from which the positions and amplitudes of the echoes reflected from the tympanic membrane and the surfaces of the middle ear are sought in the artificial intelligence unit (see the sketch below).
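A minimal sketch of the burst/envelope variant, using a Hilbert-transform envelope and a simple local-maximum search as one plausible stand-in for the echo finding performed in the artificial intelligence unit; the threshold and peak-picking details are assumptions:

```python
import numpy as np
from scipy.signal import hilbert

def echo_envelope(received: np.ndarray) -> np.ndarray:
    """Amplitude envelope of a received burst echo signal."""
    return np.abs(hilbert(received))

def reflection_peaks(env: np.ndarray, fs: float,
                     threshold: float) -> list[tuple[float, float]]:
    """Return (time_s, amplitude) of local maxima above the threshold,
    candidate reflections from the tympanic membrane and middle-ear surfaces."""
    return [(i / fs, float(env[i])) for i in range(1, len(env) - 1)
            if env[i] > threshold and env[i] >= env[i - 1] and env[i] >= env[i + 1]]
```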
  • The measured signals are then digitized and compressed (step 224) in the mobile device 30 and sent (step 225) to the cloud system 10 for processing, after which the measurement is completed for acoustic reflectometry (step 226).
  • The measurement continues with optical technology (step 230), whereupon a video image of the ear canal appears on the screen of the mobile device.
  • The user interface 31 requests the user to move the probe 20 so that the tympanic membrane 29 is fully visible on the image surface of the mobile device 30 (step 232).
  • The mobile device has a program that recognizes the image of the tympanic membrane and takes a series of images or video clips at moments when the tympanic membrane is best visible in the images (one possible selection criterion is sketched below).
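The patent does not specify how the program decides when the tympanic membrane is best visible; one plausible hypothetical stand-in is to score candidate frames with a gradient-variance sharpness measure and keep the sharpest ones:

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of image gradients as a crude focus/visibility score
    (an assumed criterion, not taken from the patent)."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.var(gx) + np.var(gy))

def best_frames(frames: list[np.ndarray], n: int = 3) -> list[np.ndarray]:
    """Keep the n frames in which the membrane is presumed best visible."""
    return sorted(frames, key=sharpness, reverse=True)[:n]
```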
  • The user accepts the images or video clips by clicking the button image on the mobile device interface (step 233).
  • The mobile device then sends the images for processing by the artificial intelligence (step 234), after which the optical portion of the measurement step is completed (step 235).
  • An air pressure pulse for use in tympanometry can also be added to the optical examination step. It is passed along a separate tube (not shown in Figure 2a) inside the probe, and the movement caused by the pulse in the tympanic membrane is recorded on video.
  • The temperature measurement starts next (step 240).
  • The temperature is read from an infrared sensor 204 which is aimed into the ear canal so that the measurement result comes from the surface of the tympanic membrane 29.
  • The measurement can also be combined with the optical phase so that the temperature measurement takes place with the camera focused on imaging the entire tympanic membrane, whereby it can be ensured that the measured value comes from the surface of the tympanic membrane.
  • The measurement result is sent to the artificial intelligence unit 11 in step 241, after which the measurement is determined to have been performed (step 242).
  • One or more temperature sensors may also be located in the camera image sensor 203, preferably in its central region, in which case a separate infrared sensor is not required and the temperature measurement is combined with the optical tympanic examination.
  • In step 260, the measurement event is declared complete and the user can wait for the results of the analysis from the cloud system 10 (step 106 in Figure 1b).
  • Figure 2c shows an embodiment of the invention based on acousto-optical stroboscopic measurement.
  • Optical imaging is used not only for normal video imaging but also in connection with the acoustic measurement, so that the light of the light source 202 illuminating the tympanic membrane 29 is pulsed in synchronization with the acoustic vibration.
  • According to the stroboscopic principle, the light is pulsed at a frequency 1 to 10 hertz lower or higher than the acoustic oscillation (e.g. 3 hertz), whereby the video image of the tympanic membrane 29 shows the vibration of the tympanic membrane slowed down to the difference frequency (e.g. 3 Hz, as above); the relationship is shown below.
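The slow-motion effect is the beat between the acoustic frequency and the strobe rate. As a worked example under assumed values (a 1000 Hz acoustic excitation strobed at 997 Hz; the patent only fixes the 1 to 10 Hz difference):

$$f_{\text{apparent}} = \lvert f_{\text{acoustic}} - f_{\text{strobe}} \rvert, \qquad \lvert 1000\ \text{Hz} - 997\ \text{Hz} \rvert = 3\ \text{Hz}.$$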
  • One or more striped or spotted light sources 209 can be placed near the tympanic membrane 29 at the edge portion of the ear canal 27 (in the case of multiple light sources, at different edge portions of the ear canal) so that the tilted, striped or spotted light is seen on the vibrating tympanic membrane as streaks or stains moving in the direction of the tympanic surface.
  • The video camera 201 records this view.
  • Two pulsed and striped or spotted light sources of different colours can advantageously be placed on opposite sides of the ear canal, providing a slow-motion video image of the tympanic surface in which the colours, at different phases of the vibration, fall on top of one another and interlace side by side as the tympanic membrane moves towards and away from the light sources.
  • At the above-mentioned 3 hertz difference frequency, the image shows streaks or spots whose pattern pulses at 3 hertz in blue, yellow and green, providing an enhanced view of the movability of the tympanic membrane in different parts of its surface.
  • In this way, the movability of the tympanic membrane can be measured as a function of location, which is useful information in determining the presence and amount of possible mucus in the middle ear.
  • The movability of the tympanic membrane can also be viewed visually in real time, either on the screen of the mobile device or with a suitable lens combination directly from the surface of the tympanic membrane.
  • In this embodiment, the acoustic excitation periods are preferably longer (e.g., 1 to 5 seconds) to provide sufficient optical slow-motion video of the tympanic membrane movements at each acoustic frequency. Recording video at several different acoustic frequencies provides additional information for measuring the condition of the tympanic membrane, the pressure difference of the middle ear, and possible mucus.
  • Figure 3 shows the different operating stages of the user interface 31 operating in the mobile device 30 of the user 3 of the otitis media measurement and monitoring arrangement according to the invention.
  • The application 32 according to the invention is shown as one application among others. Clicking on it opens the welcome screen 320, where the user can be shown e.g. news 321 about products and services, as well as new application versions.
  • The screen also has an arrow-back button 323 to access the previous box and an arrow-forward button 324 to access the next box if it has already been visited during an ongoing measurement event. The measurement is accessed by clicking the START button 322, which opens the patient selection screen 330.
  • The mobile device 30 performs the calibration in cooperation with the artificial intelligence system (steps 511 to 521), followed immediately by the measurement event.
  • In the patient selection screen, the button 331 of the current patient is clicked. If the patient is new, the NEW button 332 is clicked to open the patient's personal and background information input screen, after which the application returns to the list of selectable patients 331. If there are many patients, the list changes to up-down scrolling. Clicking on the patient's name opens the patient's measurement screen 340. Double-clicking (or long-pressing) the patient's name opens a screen from which all the patient's personal data, diagnoses and measurement data can be downloaded to the mobile device and/or deleted from the patient database 16. The anonymous part of the measurement data used to teach the artificial intelligence unit 11 remains in the anonymous database 17. This function complies with the data security legislation of each country.
  • The patient measurement screen 340 displays the patient's name, which is from now on displayed on all screens.
  • The probe 20 is placed in the patient's right (or left) ear and the right ear icon 341 (or left icon 342, respectively) is clicked on the screen.
  • The ears can be measured in either order, or only one ear can be measured. If either ear is to be left unmeasured, the SKIP button 343 or 344 of that ear icon can be clicked, whereupon the measurement automatically continues to the measurement of the remaining ear.
  • The measurement event comprises the measurements of all the sensors in the probe as shown in Figure 2b and may therefore take a moment, during which the probe is kept stationary in the ear.
  • During the optical measurement, an image of the ear canal provided by the video camera 201 opens on the screen 340 and an instruction window requests the user to move the measuring head 20 so that the image shows the entire tympanic membrane 29.
  • When the data have been transferred to the artificial intelligence unit 11 for processing and the report has been retrieved, a result screen 350 corresponding to the measured ear of that patient opens on the mobile device 30.
  • The screen shows the icon 351 of that ear, whose colour indicates the condition of the ear: green means healthy, red inflamed and yellow a borderline case.
  • In the centre of the icon is a photograph or a repeating video clip of the tympanic membrane 352.
  • The screen also shows the tympanic membrane temperature 353 and possible measurements from other sensors 354. If the ear has been measured previously, a curve 355 of historical data from the time the disease has been monitored is displayed. Historical data are displayed as daily values on a scale of 0 to 100, for example, with large values at the top coloured red, small values at the bottom coloured green, and medium values at the centre coloured yellow. At the top of the history screen is a measurement date tag 356 that can be moved to compare the results of previous measurements (e.g., temperature 353, tympanic image 352).
  • The arrow-back button 323 or the NEXT button 357 moves on to measuring the second ear, resulting in a result box 360 equivalent to the result box 350. From there, the NEXT button 361 leads to the report box 370.
  • Report box 370 shows the last measurement results of both ears and the current daily history for both ears.
  • the results for the right ear are displayed on the left side of the screen and the results for the left are displayed on the right side so that the view corresponds to the natural situation where the patient and the viewer are face to face.
  • the colours (red-yellow-green) of the ear icons 371 and 372 give a quick snapshot of the condition of the ears.
  • In the center of each of the icons 371 and 372 is a photograph taken by the video camera 201 or a short continuously repeated video clip of the said tympanic membrane.
  • Temperature values 373 and 374 show the minimum and maximum values for the current measurement period.
  • History curves 375 and 376 show the daily condition estimates for the measurement period, as in result boxes 350 and 360.
  • The measurement data and tympanic images of each day are displayed at the top of the report.
  • The report box 370 has a verbal report window 378, which shows e.g. comments written by a physician and texts generated by the artificial intelligence.
  • Physician comments may include personalized additional information for interpreting the results. Comments from the artificial intelligence may, for example, contain information about the measurement uncertainty (if, for instance, the tympanic membrane image is of poor quality) or about unusually divergent measurement results.
  • The NEXT button 379 moves to the communication box 380.
  • In the communication box 380, it is possible to choose to send a message either to one's own doctor by clicking on the image 381, or to another physician selected from the drop-down list under the OTHER button 382.
  • The message automatically contains the information of the report 370, but the user can also write his/her own comments or greetings in the message window 383.
  • The message is sent with the SEND button 385, which opens an acknowledgment screen 390, from which the application can be closed with the CLOSE button 391 or returned to the beginning with the RETURN button 392.
  • Sending the message can be omitted by clicking the NO button 384, which opens the corresponding acknowledgment screen with the text MESSAGE NOT SENT.
  • Figure 4 shows the different stages of operation of the user interface 41 operating in the mobile device 40 of the doctor 4 of the otitis media measuring and monitoring arrangement according to the invention.
  • The application 42 according to the invention is shown as one application among others. Clicking on it opens the welcome screen 420, where the user can be shown e.g. news 421 about products and services as well as new application versions.
  • The START button 422 is clicked, which opens the patient queue screen 430.
  • In the patient queue box 430, one of the queued patient buttons in list 431 is clicked. If there are many patients in the patient queue box 430, the list changes to be scrolled up and down. Each patient has an urgency mark 434 that illuminates and/or flashes depending on how urgent the actions required by that patient's condition are. The information about the urgency comes from the artificial intelligence unit 11 of the arrangement according to the invention via the patient database 16. If the patient is new, the NEW button 432 is clicked to open the patient's personal and background information input screen, and upon return the name of the new patient appears in the list of selectable patients 431. There may be more than one new patient in the queue at the same time.
  • A separate data query and input event can also be arranged between the patient and the doctor, which involves a procedure for granting access rights to the patient information system and a billing agreement.
  • The acceptance of a new patient takes place only as an exchange of information between the user 3 and the patient database 16 and between the doctor 4 and the patient database 16.
  • Report box 440 displays the most recent measurement results for both ears and the daily history to date for both ears.
  • the results for the right ear are displayed on the left side of the screen and the results for the left are displayed on the right side so that the view corresponds to the natural situation where the patient and the viewer are face to face.
  • the colours (red-yellow-green) of the ear icons 441 and 442 give a quick snapshot of the condition of the ears.
  • In the centre of each of the icons 441 and 442 is a photograph taken by the video camera 201, a repetitive series of images, or a short continuously repeated video clip of the said tympanic membrane (e.g., a tympanometer video of tympanic membrane motion caused by an air pressure pulse).
  • Temperature values 443 and 444 show the minimum and maximum values for the current measurement period.
  • History curves 445 and 446 show the daily condition estimates for the measurement period.
  • The report box 440 has a verbal report window 448, which shows e.g. comments written by the physician and texts generated by the artificial intelligence.
  • The report box 440 seen by the physician is substantially similar to the report box 370 seen by the user. From the report box 440, the physician can use the FULL REPORT button 449 to move to a more detailed full report box 450 containing physician-relevant information.
  • In the full report box, measurement and analysis curves associated with the right ear 451 or the left ear 452 may be displayed, which may include original measurement results, light and video images of the tympanic membrane, and curves of their evolution over time.
  • A diagnosis or other conclusions about the nature and progression of the disease appear in the diagnosis window 454 at each stage.
  • Moving the measurement date tag 453 displays the actions taken at each stage in the action window 455, e.g., initiated drug regimens, tympanic membrane punctures, tube placements, etc.
  • The physician can write a report on the situation in the message box 456 and send it to the user, in whose report box 370 it appears in the verbal report window 378.
  • The physician's report is at least partially structured so that it can be read and understood unambiguously by the artificial intelligence.
  • Part of the reporting may be to answer questions from the artificial intelligence about the patient's condition and/or disease progression, so that the information can be used to refine the artificial intelligence algorithms for future similar cases.
  • An acknowledgment screen 460 for the successful transmission of the report 461 then opens for the physician.
  • The physician is asked whether the data can also be sent for the teaching use of the artificial intelligence 462.
  • The physician can skip sending the teaching material.
  • With the SEND button 464, the data identifying the patient are deleted and the rest of the data is sent to the anonymous database 17 as teaching material for the artificial intelligence unit 11.
  • Both buttons 463 and 464 lead to the thank-you box 470, which may include advertising 471 and from which the application may be closed with the CLOSE button 472, or the treatment of the next patient may be continued with the RETURN button 473.
  • A message box 480 opens for the doctor, showing which patient is in question 481 and, with the emergency sign 482, how urgent an answer the situation requires.
  • With the ANSWER button 483, he can move immediately to the report box 440 of said patient and make the necessary interpretations, decisions and actions.
  • The REMIND LATER button 484 adds the patient's name to the end of the list 431 in the patient queue box 430.
  • FIG. 5 shows the different stages of operation of the artificial intelligence unit 11 and the database unit 15 of the measurement and monitoring arrangement according to the invention, and their connection to the mobile user interface 31 of the user 3 and the mobile user interface 41 of the doctor 4.
  • Mobile UI events are organized in the Mobile UI column 501, artificial intelligence processor events in the artificial intelligence processor column 502, patient database events in the Patient DB column 503, computation algorithm generator events in the Algorithm Generator column 504, and anonymous database events in the Anonymous DB column.
  • The measurement is started from the mobile device 30 with the START button 322 (step 102 in Figure 1b).
  • The mobile device initiates a calibration of the current mobile device and probe combination (step 511) and checks from the patient database whether the calibration has already been performed and is valid (step 512). If the calibration is OK (step 514), the calibration phase is skipped and the process moves directly to the ear measurement step 522. If the calibration is not valid (check in step 513), the patient database 16 sends the data needed to generate the calibration signal to the mobile device 30, which converts the data into a calibration excitation signal (step 515) and feeds it to the probe 20, which is held in the air during calibration.
  • The response signal measured by the probe microphone is received (step 516) and sent to the artificial intelligence processor for analysis.
  • The artificial intelligence processor 12 determines the correction parameters necessary for this mobile device and probe combination (30, 20), related e.g. to frequency and delay responses (step 517), and stores them in the patient database 16 in the data of the user in question.
  • The correction information also goes to the algorithm generator 13, which makes the corresponding corrections to the artificial intelligence calculation algorithms (step 519) and stores the corrected algorithms (step 520).
  • The calibration ends in step 521 and the process moves to the measurement step 522.
  • The correction parameters can be utilized, for example, by selecting the excitation signal used in the actual measurement (step 524) so as to produce a substantially similar acoustic excitation in the ear canal (step 525) regardless of the acoustic and electronic frequency and delay responses of the probe 20 and the mobile device 30.
  • The data generated in the calibration measurement can also be utilized in the artificial intelligence unit 11 to fine-tune the algorithms used in processing the measurement results (step 528) (step 519), so that the data provided by the calibration measurement serves as one set of reference data in the comparison functions; a sketch of such a correction is given below.
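A minimal sketch of such a correction, assuming the in-air calibration spectrum of each probe and mobile device combination is compared with a stored nominal response and later measurement spectra are equalised with the resulting per-bin gains; the nominal response and the regularisation constant are assumptions:

```python
import numpy as np

def correction_curve(measured_cal: np.ndarray, nominal_cal: np.ndarray,
                     eps: float = 1e-9) -> np.ndarray:
    """Per-frequency-bin gain correcting this device combination's response."""
    return nominal_cal / (measured_cal + eps)   # eps avoids division by zero

def equalise(measurement_spectrum: np.ndarray,
             correction: np.ndarray) -> np.ndarray:
    """Apply the stored correction parameters to a new measurement spectrum."""
    return measurement_spectrum * correction
```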
  • The patient is selected in box 330, and when box 340 has opened, the ear to start with is selected, the probe 20 is inserted into that ear, and the acoustic reflectometry measurement is initiated (step 523, step 221 in Figure 2b) by clicking on the ear icon 341 or 342.
  • The mobile device 30 retrieves excitation signal data from the patient database (step 524), converts the data into a signal, sends it to the probe in the ear (step 525), receives an echo signal from the ear (step 526), processes the signal into digital format, and packages and sends it to the artificial intelligence for examination (step 527).
  • The artificial intelligence processor 12 processes the data using the valid algorithms (step 528) and stores the result in the patient database 16 (step 529).
  • The mobile device 30 then starts the optical measurement (step 530).
  • The viewfinder image of the video camera 201 opens on the screen of the mobile device, along with instructions telling the user to orient the probe (step 531) so that the tympanic membrane appears in the viewfinder image.
  • The video camera automatically takes a video clip or multiple video clips and still images of the tympanic membrane (step 532), and processes, crops, packages and sends (step 533) them to the artificial intelligence for analysis.
  • When a probe using the stroboscopic principle shown in Fig. 2c is used, the orientation of the optical image (step 531) is repeated so that the tympanic membrane is optimally displayed in the viewfinder image. Thereafter, steps 522 to 529 are repeated so that, in addition to the acoustic reflectometry measurement, the video camera 201 and the light sources 202 and 209 simultaneously perform the functions described in connection with Fig. 2c, synchronized to the excitation signal in steps 525 and 526.
  • In step 527, in addition to the data produced by the echo signal, the slow-motion video data is packaged and sent to the artificial intelligence unit 11, which starts the algorithms assigned to the data analysis (step 528) and stores the results (step 529). The process then proceeds to step 540.
  • The mobile device 30 initiates the temperature measurement (step 540) and performs it by means of an infrared sensor 204 oriented in the direction of the tympanic membrane, while the optical image is preferably still visible from the previous step.
  • Preferably, the temperature measurement (step 541) is timed to occur immediately after or during the optical imaging, in which case the measurement is best positioned.
  • The result is sent to the artificial intelligence for analysis (step 542) and stored in the patient database (step 543).
  • The other measurements in use are then performed (steps 545 and 546), the results are sent to the artificial intelligence for analysis (step 547) and stored in the patient database (step 548).
  • The probe is moved to the other ear (step 549) and the above sequence of events is repeated starting from step 522. After this, the ear measurements have been performed (550), the probe can be removed from the ear, and the report is awaited.
  • The artificial intelligence may be set to process the report separately for each ear, as presented in the following steps, so that the report for the ear in question can be seen immediately after the measurement, before the next ear is measured.
  • The generation of the report starts from step 560 and is completed in step 569.
  • The artificial intelligence processor 12 retrieves the measurement and analysis results of the patient in question, including history (steps 561 and 562), from the patient database 16; the reference data selected for this patient based, for example, on medical history (step 564) from the anonymous database 17; and the previously refined and stored algorithms for this case (step 566) from the algorithm generator 13.
  • The artificial intelligence processor 12 applies the algorithms (step 567), with the reference data and the patient's measurement history data, to the current measurement and imaging results, processes the report (step 568) and sends it to the mobile device 30 of the user 3.
  • The events in box 580 occur on the mobile device 40 of the physician 4.
  • The physician receives notification of the arrival of the report and proceeds either immediately or later to examine it. He analyses the disease situation, makes a diagnosis and plans actions (step 571), which he records in a report and sends back to the user (step 572). In addition, if desired, he also sends a report to teach the artificial intelligence (step 573) for similar situations.
  • The training data (step 574) are stored in the anonymous database 17, from which the algorithm generator 13 extracts them and refines the algorithms accordingly (step 575).
  • The refined algorithms are stored (step 576) and are available the next time.
  • The artificial intelligence analyses the quality of the measurement data and the integrity of the measurement event (both ears measured and the relevant measurements, at least acoustic and temperature, performed) and, based on the result, places valid measurement events on the post-check list for teaching use.
  • The artificial intelligence unit 11 sends a message to the user's mobile device 30 asking the user to measure the patient's ears again.
  • The message also asks for information about the patient's recovery progress, possible antibiotic regimens, and other treatment measures.
  • The artificial intelligence unit 11 links this information to the patient database 16 for use as training data to refine the patient's personally customized algorithms, but also, as anonymized teaching data, to the anonymous database 17 for use in refining the general algorithms in the algorithm generator 13.
  • The artificial intelligence unit 11 can be programmed to use, for example, pattern-recognition-based classification or clustering of measurement results, and category-based calculation of disease status and changes, in the processing of measurement results.
  • Classification can be done, for example, by the kNN method (k Nearest Neighbours), in which the measurement result is compared with the reference data by searching for the k nearest neighbours, which mainly represent similar cases. Neighbours are selected by calculating the distance of the point describing the measurement result, in the M-dimensional vector space, from the other points describing measurement results, where M is the number of features obtained from the measurement results.
  • Features extracted from the measurement results can be used as dimensions, such as the responses of the acoustic reflection at different frequencies; the values of the acoustic response or the ultrasonic envelope at different times (e.g. reflection from the tympanic membrane, reflection from the posterior wall of the middle ear); the colour of the tympanic membrane coded as a numerical value; the amplitude of the tympanic motion in the middle of the tympanic membrane and separately in its lower part; the temperature value of the tympanic membrane; the measurement value given by a chemical sensor, etc.
  • The diagnosis given by a doctor or other expert body, and the information provided by the user in connection with the follow-up measurement about the treatments used and the course of healing, can also be coded into features that the algorithm can use as dimensions in said vector space.
  • Individual weighting factors can be defined for the features, which can be used to fine-tune the algorithm, as in the sketch below.
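A minimal weighted-kNN sketch of the classification described above; the choice of k, the weighted Euclidean metric and the majority vote are illustrative assumptions:

```python
import numpy as np

def knn_classify(x: np.ndarray, reference: np.ndarray, labels: np.ndarray,
                 weights: np.ndarray, k: int = 5) -> int:
    """x: (M,) features of the new measurement; reference: (N, M) reference
    cases; labels: (N,) diagnoses; weights: (M,) per-feature weighting factors."""
    d = np.sqrt(((reference - x) ** 2 * weights).sum(axis=1))  # weighted Euclidean
    nearest = labels[np.argsort(d)[:k]]                        # k most similar cases
    values, counts = np.unique(nearest, return_counts=True)
    return int(values[np.argmax(counts)])                      # majority vote
```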
  • Other pattern recognition and clustering methods used in artificial intelligence technology can also be applied in the measurement and monitoring arrangement for otitis media according to the invention.
  • The artificial intelligence unit 11 can also be programmed to process the measurement results with an SOM neural network (Self-Organizing Map).
  • The SOM neural network is updated by the following algorithm (1):

$$m_i(t+1) = \begin{cases} (1-\alpha)\,m_i(t) + \alpha\,x(t), & i \in N_c \\ m_i(t), & i \notin N_c \end{cases} \tag{1}$$
  • The parameter α is a "forgetfulness" factor whose magnitude determines how much of the old neuron value is retained in the update. It also controls the stability of the network.
  • N_c is the topological neighbourhood, i.e., the set of neurons in the network closest to the neuron giving the minimal response (the best-matching neuron).
  • The map update rule means that the neurons m_i closest to the data vector x are moved towards the data vector x (see the sketch below).
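A minimal sketch of update rule (1) on a rectangular map, assuming a square topological neighbourhood of a given radius around the best-matching neuron; the grid shape and radius are illustrative assumptions:

```python
import numpy as np

def som_update(neurons: np.ndarray, x: np.ndarray,
               alpha: float, radius: int) -> None:
    """neurons: (rows, cols, M) codebook vectors; x: (M,) input; in-place update."""
    d = np.linalg.norm(neurons - x, axis=2)          # distance of every neuron to x
    bmu = np.unravel_index(np.argmin(d), d.shape)    # best-matching neuron
    rows, cols = np.indices(d.shape)
    in_nc = (np.abs(rows - bmu[0]) <= radius) & (np.abs(cols - bmu[1]) <= radius)
    # rule (1): neurons in the neighbourhood N_c move towards x,
    # with alpha controlling how much of the old value is "forgotten"
    neurons[in_nc] = (1 - alpha) * neurons[in_nc] + alpha * x
```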
  • neurons in the SOM neural network learn and/or tune through the input variables they receive.
• the task of the teaching algorithms is to implement updating activities so that the intelligence of the neural network increases.
  • the SOM neural network learns or is taught to know the acoustic reflections of the ears to be measured, for example their envelopes and/or frequency spectra, and to compare them with acoustic reflections previously recorded from the same ear and from other ears belonging to the same category.
• optical images/videos of the tympanic membrane may be included in the neural network, as may post-verification of the information provided by the user in connection with the measurement about the treatments used and the course of healing, in the same way as in the pattern recognition methods.
• Neural network learning takes place automatically by updating its algorithm partially or completely self-driven. If necessary, the teaching routine of the neural network can be given significant freedom to choose methods with different basic theories to implement learning. However, the results of this learning need to be observed and managed. To this end, an internal validation step may be included in the teaching routine, the task of which is to ensure, on the basis of the available data, that the proposed new neural network operates at a sufficiently good performance level.
• the validation methods used here may be, for example, of the leave-one-out or leave-N-out type, in which part of the teaching material is omitted and network learning is verified by appropriate statistical methods based on the data excluded from the teaching material (see the sketch after this list). Validation methods can also be regression-based.
• the relationship between the desired response of a neural network and the response it actually produces is examined by regression analysis using conventional parameters such as the correlation coefficient.
• An application-specific definition of, for example, the sensitivity or specificity of an otitis media diagnosis within the range of available data can also serve as a performance parameter.
• neural network learning can also be controlled in such a way that the neural network teaching process continuously produces alternatives for a new, better than before, neural network and presents the corresponding performance parameters. If the situation is too complex for the machine to deduce, an expert monitoring the neural network learning can also be asked at this stage to decide on the best of the candidate options. Following this decision, made by either a machine or a human, the selected neural network is deployed.
  • the operating principle of the neural network in use at any given time can be documented. Basically, this is done by storing in memory the neural networks used by the application at each point in time.
• an interpretation report in a human-understandable form can be stored, which describes the performance and operation of the neural network in question in the light of the data currently in use. This report may also include a description of the principle by which that new neural network was chosen to succeed the previous one.
  • This neural network documentation accumulated over time can also serve as feedback in automating neural network update criteria.
• updating the neural network means that the algorithms of the artificial intelligence unit 11 will be refined as reference material accumulates, so that the advanced algorithms will be tested with previously measured data before deployment and approved only if they give improved relevance to disease quality, status and cause assessment.
• Neural network principles other than SOM can also be used, for example feed-forward neural networks (single-layer perceptron, multi-layer perceptron, deep neural networks) and their numerous variations and subtypes.
  • genetic algorithms and fuzzy logic can also be utilized to implement the principle.
• the implementation of neural networks and/or teaching routines can be done either with program codes (SW) or with electronics (HW). If efficient HW-based methods (including parallel processing if necessary) are used in the implementation, this speeds up data processing and increases the range of means to identify features from the data in massive calculations where the power of an SW solution would not be sufficient.
  • Various neural network solutions can be placed in the algorithm generator 13 and the artificial intelligence unit 11 can be programmed to select the neural network principle with the best performance for use depending on the situation.
• the selection may also include different pattern recognition methods, in which case the artificial intelligence unit 11 may apply neuro-computing or pattern recognition or a combination thereof, depending on the situation.
• Neural network computing can be programmed to operate separately for each measurement mode, such as acoustic reflectometry, optical tympanic membrane imaging, acousto-optical stroboscopic imaging, ultrasound measurements, tympanometry videos, temperature, chemical measurements, etc. Finally, neural network computation can be used to combine the results obtained from these modes into one value for the disease status and its change, of which the disease status is displayed on the screen of the mobile device as a green, yellow or red ear image.
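As a rough illustration of the leave-one-out validation mentioned in the list above, the following Python sketch estimates a performance parameter for a candidate model. The names train_fn and predict_fn are hypothetical placeholders for whatever teaching routine and classifier the system uses; the patent does not prescribe this implementation.

```python
import numpy as np

def leave_one_out_accuracy(X, y, train_fn, predict_fn):
    """Estimate a performance parameter by leave-one-out validation.

    Each case is held out in turn, the model is taught on the rest,
    and the held-out case is predicted. train_fn and predict_fn are
    hypothetical placeholders for the actual teaching routine and
    classifier; leave-N-out works the same way with larger held-out
    groups.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i          # omit case i from teaching
        model = train_fn(X[mask], y[mask])
        correct += int(predict_fn(model, X[i]) == y[i])
    return correct / len(X)                    # fraction predicted correctly
```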

Abstract

The invention provides a method and arrangement for diagnosing and monitoring otitis media, in which data from a patient's ear are measured by the principle of acoustic reflectometry, optical video imaging and other known measurement methods. The data is measured by a probe (20) connected to the mobile device (30) and sent to the cloud server for processing by an artificial intelligence system (10) comprising an artificial intelligence processor (12), an algorithm generator (13), a patient database (16) and an anonymous reference database (17). Learning is based on adding new measurement results to a reference database with data diagnosed by a doctor and collected from users as a follow-up measurement.

Description

MULTIMODAL MEASURING AND TRACKING OF MIDDLE EAR OTITIS BY ARTIFICIAL INTELLIGENCE
The invention relates to a method for indicating and measuring the clinical condition of otitis media and for monitoring the healing process. The invention also relates to a measurement and monitoring arrangement for otitis media, to a server utilized in the arrangement, and to a device and a computer program used in the measurement and monitoring.
Acute otitis media (Otitis media acuta) is a common disease in young children. The acute form can also become chronic and form a prolonged inflammatory process in the middle ear and ear canal (inflammation of the middle ear and ear canal) or an adhesive-like secretion, in which case we speak of an adhesive otitis media. Otitis media usually affects children aged 1 to 3 years, and more than 90% of cases occur in children under 5 years of age. The main predisposing factors are respiratory infections. Swelling of the mucous membranes and increased adhesion constrict the Eustachian tube (the pressure equalization channel from the middle ear to the pharynx), creating favourable conditions for the bacteria and viruses in the nasopharynx and exposing the middle ear to inflammation. The tube can become completely blocked, creating pressure in the middle ear cavity, which can reduce the free movement of the tympanic membrane and feel as if the ear is "locking". The same sensation can also occur if wax is secreted into the ear canal to such an extent that it impedes the free movement of the tympanic membrane. Fluid that develops in the middle ear can also accumulate and hamper the movement of the tympanic membrane, likewise causing a sensation of locking.
The diagnosis of otitis media at home is based on symptoms, which are usually cough, fever, and ear pain associated with a respiratory infection. Fever arises in only a quarter of patients, while ear pain occurs in about 75%. In some cases, hearing also deteriorates during the acute situation. Older children can describe their pain symptoms, but in young children the pain must be inferred from crying, restlessness and touching the ear by hand. Detection of otitis media is not easy at home, nor even at the doctor's office. It is especially difficult at home to decide whether a doctor's appointment is needed.
At the doctor's office the so-called gold standard in the examination of otitis media is to visually inspect the condition of the tympanic membrane through a funnel with sufficient lighting and to verify the movement sensitivity of the tympanic membrane in good lighting with a pressure-producing bellows. A ruptured tympanic membrane and secretion leaking out of it is a clear sign of otitis media. Redness, turbidity, or secretion seen through the tympanic membrane will also provide additional information to the experienced physician for assessing the condition of the middle ear. Redness of the tympanic membrane may, however, be due to causes other than otitis media. Diagnosis requires experience and physician training. Many ear pain patients have also remained borderline cases, where the readiness to prescribe unnecessary antibiotic cures has traditionally been too high when the patient has had no symptoms other than the onset of pain and the physician has had no means to adequately monitor disease progression. Otitis media has traditionally required a follow-up examination a few weeks after the onset of the disease. Omitting this examination has been considered to leave open the possibility of chronic inflammation and, for example, the formation of an adhesive ear.
Otitis media imposes high costs on health care and sets an unreasonable burden on doctor's appointments, especially when waves of respiratory infections occur in day care centres and schools in the area. It is estimated that most, up to 70%, of current antibiotic regimens could be omitted if one dared to rely on the body's natural healing process. This would require a means by which otitis media could be more accurately diagnosed and its disease process monitored at home. In Finland alone, this would mean annual savings of tens of millions of euros, and globally the corresponding potential is enormous. The savings potential increases further when the parental absences from work due to ear infections in children and the transport of children to the doctor's office are taken into account.
Thus, new methods and tools are needed to measure and monitor otitis media, especially those suitable for use at home.
It must be emphasized that different methods of self-diagnosis and treatment will be a megatrend in the future, perhaps faster than expected.
Different methods are used to measure and monitor otitis media, either alone or in combination. These include a visual observation otoscope with an attached air bellows (air pressure tympanometer), a tuning fork, devices based on acoustic reflectometry, and devices based on ultrasonic echoes. In addition, ear thermometers based on infrared radiation are known. Examples of optical video camera-based otoscopes include U.S. Patent Nos. 5,363,839 and 5,919,130, and U.S. Application Nos. 20110152621, 20150351620 and 20150065803A1. Examples of tympanometer inventions using atmospheric pressure include U.S. 4,688,582 and U.S. 5,792,072. Frequency sweep tympanometry is described in Michelle R. Petrak, PhD, CCC-A, Tympanometry Beyond 226 Hz - What's Different in Babies? (November 18, 2002, audiologyonline.com).
Examples of inventions based on acoustic reflectometry include U.S. Patents 4,601,295, U.S. 5,699,809 and U.S. 5,594,174. Examples of inventions based on ultrasonic echoes include U.S. Patent 7,632,232 and U.S. Application No. 20170014053A1. An example of an ear thermometer patent is U.S. 6,435,711. Examples of inventions combining the above methods are U.S. 8,858,430, U.S. 20030171655A1 and U.S. 6,126,614.
Automatic analysis of otitis media based on otoscope images is disclosed, for example, in U.S. 9,445,713, in which an image of the tympanic membrane is taken with an accessory mounted on a mobile phone and image analysis is performed by image recognition technology on the mobile phone.
Artificial intelligence and cloud services are new technologies that are coming to medical diagnostics and treatment monitoring. They are also applied in the present invention in an innovative manner, as the arrangements to date have not solved the above described problem in measuring and monitoring otitis media at home.
The object of the invention is to present a method and arrangement combining innovative multimodal measurement and collection of data on otitis media from patients with a learning system using artificial intelligence, in which the status of otitis media can be assessed using not only the patient's own measurement-analysis history but also the measurement-analysis history recorded from other similar patient cases.
The method of measuring and monitoring otitis media according to the invention is characterized in that the method measures with multimodal acoustic (including ultrasound), optical, electronic and/or chemical sensors, in that the measurement results are processed by a learning artificial intelligence operating in a data network so that they are compared with the values of reference data collected from similar patients, and in that the reference data are accumulated by storing new measurement results with a diagnosis determined by a physician or other expert. The objects of the invention are achieved by an arrangement for measuring and monitoring otitis media, which comprises
- means for transmitting acoustic signals and/or ultrasound to the ear and receiving echoes, and/or
- means for optical imaging and colour analysis of the tympanic membrane, measurement of the tympanic membrane temperature, chemical determination of the tympanic membrane secretions and/or measurement of the tympanic membrane mobility by means of an atmospheric pulse or acousto-optical stroboscopic imaging,
- means for digitizing, compressing and transmitting the measurement results to a computer network,
- means for storing and processing measurement results in a computer network using artificial intelligence, which compares the measurement results with reference data collected from a large number of patients, searches for similarities and differences in the measurement results and their changes, and infers the disease status according to them,
- means for improving the performance of the above-mentioned learning artificial intelligence by storing in the reference data new measurement results provided with a diagnosis defined by a doctor or other expert body, and
- means for displaying results describing the condition or progression of the disease on a mobile device.
The server according to the invention, which is utilized in the measurement and monitoring of otitis media, is characterized in that the server comprises:
- program codes for transmitting and receiving signals based on acoustic reflectometry to and from the user's mobile device,
- program codes for receiving measurement results from other devices used in ear inspection,
- program codes for storing and processing the measurement results using an artificial intelligence unit and algorithms that compare the measurement results with the reference data collected from a large patient population, look for similarities and differences in the measurement results and their changes, and are able to infer the disease status according to them,
- program codes for the improvement of the above-mentioned learning artificial intelligence unit and algorithms by storing in the reference data new measurement results provided with a diagnosis defined by a doctor or other expert body, and
- program codes for transmitting results describing the status or progression of the disease for display on a mobile device.
In addition, the arrangement and server according to the invention are characterized in that the information about patients is stored in two separate databases: 1) a patient database, from which identifiable patient information can be easily deleted or transferred to an object requested by the patient or his trustee, and 2) an anonymous database containing the information necessary for the teaching of the artificial intelligence, from which the identification information has been deleted in accordance with the regulations.
Some preferred embodiments of the invention are set out in the dependent patent claims. It should be noted that the ability of the artificial intelligence to determine disease status or progression can be improved even without a diagnosis by a physician or other expert body: patient status can be monitored by collecting data from subsequent user measurements, assessing how correctly the disease was estimated, and fine-tuning the algorithms afterwards so that the estimate would have been even better. In this way, the artificial intelligence unit can learn without a defined diagnosis, albeit with a delay.
An advantage of the invention is that otitis media can be assessed and monitored at home with a mobile device that is connected to a compact probe and that is reasonably easy for anyone to learn to use. Moreover, the learning artificial intelligence system according to the invention makes it possible in many cases to make a reliable diagnosis and to classify the situation at least in terms of whether a visit to a doctor is required. The arrangement also makes it possible to use a telemedicine service effortlessly in cases requiring further assessment.
In addition, the invention has the advantage that its determination of the severity of the disease state is automatically refined as the number of measurements performed increases, especially those for which a physician or other expert makes his own assessment and gives treatment instructions.
A further advantage of the invention is that the number of visits to the doctor and the need for antibiotic treatments are reduced, which brings significant public economic and therapeutic benefits. The invention is described in detail below. The description refers to the accompanying drawings, in which
- Figure 1a shows an apparatus arrangement for measuring and monitoring otitis media according to an embodiment of the invention,
- Figure 1b shows, step by step and by way of example, the main functions to be performed in the measurement and monitoring method according to the invention,
- Figure 2a shows the structure of a probe of an otitis media meter according to an embodiment of the invention,
- Figure 2b shows, step by step and by way of example, the functions to be performed at the measuring head according to the invention during the measuring event,
- Figure 2c shows by way of example the principle of operation of a probe suitable for acousto-optical stroboscopic measurement,
- Figure 3 shows the different stages of operation of a user interface of a device for measuring and monitoring otitis media according to the invention,
- Figure 4 shows the user interface of a doctor remotely utilizing the measurement scheme for otitis media according to the invention and the functions to be performed in that user interface, and
- Figure 5 shows, step by step and by way of example, the functions to be performed in the databases and in the teaching of the artificial intelligence in the measurement and monitoring arrangement according to the invention.
The embodiments in the following description are exemplary, and one skilled in the art may also practice the basic idea of the invention in a manner other than that described in the description. Although one embodiment or embodiments may be referred to in several places in the description, this does not mean that the reference is to only one of the described embodiments, or that the described feature is useful in only one of the described embodiments. The individual features of two or more embodiments can be combined to provide new embodiments of the invention.
Figure 1a roughly shows a measurement arrangement according to the invention comprising a learning artificial intelligence unit 11 and a database unit 15 located in the cloud system 10 of the Internet 1, a probe 20 inserted into the ear of a patient 2, an application 32 operated on a mobile platform 30 of the user 3 by means of a user interface 31, and an application 42 operated on a mobile platform 40 of the remotely working physician 4 by means of a user interface 41. The artificial intelligence unit 11 located in the cloud system 10 preferably comprises an artificial intelligence processor 12 for processing measurement data and an algorithm generator 13 for generating learning computational algorithms. The database unit 15 has been divided into two parts: the personalized data of the patients are handled in the patient database 16 and the non-personalized data of all patients (or as many as possible) in the anonymous database 17.
The use of the measuring arrangement according to the invention is outlined by way of example in Figure 1b. In step 101, the probe is connected to the mobile device 30 either by wire to a headphone or universal connector or wirelessly, for example via a Bluetooth or NFC connection. In step 102, the application 32 performing the inventive function is opened and activated in the user interface 31 of the mobile device 30. In step 103, the application performs the calibration of the measurement arrangement, unless it is found in the memory that calibration has already been performed for this probe-mobile device combination and no longer than a specified period of time has elapsed since the calibration. In step 104, the user places the probe tip at the mouth of the patient's left (or right) ear canal and in step 105 clicks the left ear trigger on the user interface 31 of the mobile device 30. In step 106, the user waits until the probe 20 has performed the measurement event under the control of the mobile device 30 and the application 32 has received the measurement data. If necessary, through step 107, the user next returns to step 104 and performs the measurement steps 104, 105 and 106 also on the right (or left) ear. In step 108, the user receives a report, processed automatically in the cloud system 10, on the user interface 31 of the mobile device 30, from which he sees the condition of the left and right ear separately, both as a numeric value (e.g., on a scale of 0 to 100, where small values indicate a sick and large values a healthy ear) and as a colour code (e.g. with the colours green - yellow - red, where the image of a green earlobe means a healthy and red a sick ear, and yellow is a borderline case). Next, in step 109, the user selects a continuation based on the received report. If the report shows green, the user terminates the measurement event in step 110 and the data is stored in both the patient database 16 and the anonymous database 17. In step 111, the user 3 contacts the attending physician 4 if the report has given a reason to do so (red). In step 120, the remote physician 4 retrieves from the patient database 16 the data and measurement reports of the patient 2 via his application 42 on his own mobile device 40 and draws conclusions, and in step 121 takes the necessary actions, such as inviting the patient in, prescribing and instructing the user, or calming the user with the message "not serious". In step 122, the physician sends to the anonymous database 17 information about the nature, condition, and causes of the disease, as well as possible antibiotic regimens and other treatments, which in step 123 the algorithm generator 13 of the artificial intelligence unit 11 utilizes to refine the algorithms used by the artificial intelligence processor 12. Thereafter, in step 124, the event also ends for the physician.
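The 0-100 scale and the green-yellow-red colour code of step 108 can be illustrated with a small Python sketch. The threshold values used below are illustrative assumptions only; the patent text does not fix the exact cut-offs between green, yellow and red.

```python
def ear_status_colour(score: float, red_below: float = 40.0,
                      green_above: float = 70.0) -> str:
    """Map a 0-100 ear condition value to the green-yellow-red code.

    Small values indicate a sick ear and large values a healthy ear,
    as in the report of step 108. The two thresholds are illustrative
    assumptions, not values fixed by the patent text.
    """
    if not 0.0 <= score <= 100.0:
        raise ValueError("score must be on the 0-100 scale")
    if score < red_below:
        return "red"       # sick ear: contact the attending physician
    if score < green_above:
        return "yellow"    # borderline case: keep monitoring
    return "green"         # healthy ear: measurement event can end
```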
In an advanced embodiment of the invention, the artificial intelligence unit 11 sends a message to the user's mobile device 30 requesting to re-measure the patient's ears after a set time (for example 4 to 6 weeks). The message also asks for information about the patient's recovery progress, possible antibiotic regimens and other treatment measures. The artificial intelligence unit 11 connects this information to the patient database 16 for use as teaching data to refine the patient's personalized algorithms, but also as anonymized teaching data to the anonymous database 17 for use in refining general algorithms in the algorithm generator 13.
Figure 2a schematically shows a probe 20 used in an arrangement according to the invention. The probe is connected to the mobile device 30 either by cable 21 or wirelessly via a Bluetooth link, NFC link or other wireless connection. The electronics package 22 contains all the electronics needed by the probe 20 and also acts as a power source for the different sensor solutions. The energy required for the operating power comes either from the mobile device via the cable 21 or from a battery placed in the measuring head 20, which is not shown separately in the picture and which is charged by known techniques.
The speaker 23 is needed to generate the excitation sound of the acoustic reflectometer, the microphone 24 to receive the echoes reflected from the ear. The chamber 25 is dimensioned for the reflection required by the reflectometry measurement, where the chamber 25 cooperates with the adapter tube 26, the ear canal 27 and the middle ear 28 as a resonator, in which sound excitations of different frequencies from the speaker 23 are reflected back. The relative positions and intensities of the reflections depend not only on the individual structure of the ear but also on the freedom of movement of the tympanic membrane 29. If the tympanic membrane is inflamed and/or if there is mucus on its inner surface, these reflections are different from those of a healthy, freely moving tympanic membrane. The reflection also depends on the frequency of the sound waves, so it is advisable to make the measurement at several different frequencies. The probe 20 of Figure 2a also comprises other sensors suitable for examining the ear. The compact video camera 201 produces an image of the tympanic membrane illuminated by a compact light source 202, preferably arranged around the lens of the camera. Alternatively, the illumination may be brought closer to the tympanic membrane by placing the light sources 202 at the mouth of the adapter tube 26 or by introducing the light with an optical fibre into the ear canal 27. The camera image sensor 203 may have one or more infrared-sensitive sensors for measuring the temperature of the tympanic membrane, or there may be one or more separate infrared sensors 204 for measuring the temperature in the vicinity of the probe tip. Other sensors can also be added inside the probe, for example a pressure sensor, if an air pressure pulse supply accessory used in a tympanometer is connected to the probe.
Figure 2b shows, in stages, the functions performed in the measuring head according to the invention during the measuring operation. The acoustic reflectometry measurement is started in step 221. Prior to this, the probe 20 has been connected to the mobile device 30 and calibration 103 has been performed if necessary, in which the measurement sequence (steps 222-225) is performed with the adapter tube 26 of the probe 20 in air (i.e. not inserted into the ear), after which the measurement function is started 105 (Fig. 1b). In step 222, the mobile device 30, in cooperation with the artificial intelligence unit 11, generates a stepped excitation signal with successive short periods of signals starting at low frequencies (e.g., 500 Hz) and continuing to high frequencies (e.g., 5000 Hz). The electronics 22 of the probe 20 amplify the signal and supply it to the speaker 23, whereupon the acoustic excitation signal goes to the ear. The sound waves are reflected from the tympanic membrane 29 and the back surfaces of the middle ear 28 and are received by the microphone 24 (step 223). The data measured in this way is suitable for analysis in the form of a frequency spectrum. Alternatively, the acoustic excitation signal may be one or more bursts oscillating at different frequencies, the sound waves reflected from the different surfaces being received by the microphone 24. The data measured in this way are suitable for analysis as envelopes, from which the positions and amplitudes of the echoes reflected from the tympanic membrane and the surfaces of the middle ear are sought in the artificial intelligence unit. The measured signals are then digitized and compressed (step 224) in the mobile device 30 and sent (step 225) to the cloud system 10 for processing, after which the measurement is completed for acoustic reflectometry (step 226). The measurement continues with optical technology (step 230), whereupon a video image of the ear canal appears on the screen of the mobile device. The user interface 31 requests the user to move the probe 20 so that the tympanic membrane 29 is fully visible on the image surface of the mobile device 30 (step 232). In a preferred embodiment of the invention, the mobile device has a program that recognizes the image of the tympanic membrane and takes a series of images or video clips at moments when the tympanic membrane is well visible in the images. Alternatively, the user triggers the images or video clips using the button image on the mobile device interface (step 233). The mobile device then sends the images for processing by the artificial intelligence (step 234), after which the optical portion of the measurement step is completed (step 235).
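The stepped excitation signal of step 222 can be sketched, for example, as follows in Python. The sample rate, burst duration and number of frequency steps are illustrative assumptions; only the 500 Hz to 5000 Hz range comes from the text above.

```python
import numpy as np

def stepped_excitation(f_start=500.0, f_stop=5000.0, n_steps=10,
                       burst_s=0.05, fs=44100):
    """Build a stepped excitation signal of short sine bursts.

    The signal starts at a low frequency and continues to high
    frequencies, as in step 222. Burst length, step count and sample
    rate are illustrative assumptions only.
    """
    t = np.arange(int(burst_s * fs)) / fs
    freqs = np.linspace(f_start, f_stop, n_steps)
    bursts = [np.sin(2.0 * np.pi * f * t) for f in freqs]
    return freqs, np.concatenate(bursts)

# The concatenated signal would be fed to the probe speaker; the echo
# received by the microphone is then analysed one frequency step at a time.
freqs, excitation = stepped_excitation()
```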
An air pressure pulse for use in tympanometry can also be added to the optical examination step. It is passed along a separate tube (not shown in Figure 2a) inside the probe, and the movement caused by the pulse in the tympanic membrane is videotaped.
The temperature measurement starts next (step 240). In a simple embodiment of the invention, the temperature is read from an infrared sensor 204 which is applied to the ear canal so that the measurement result comes from the surface of the tympanic membrane 29. The measurement can also be combined with the optical phase so that the temperature measurement takes place with the camera focused on imaging the entire tympanic membrane, whereby it can be ensured that the measured value comes from the surface of the tympanic membrane. The measurement result is sent to the artificial intelligence unit 11 in step 241, after which the measurement is determined to have been performed (step 242). One or more temperature sensors may also be in the camera image cell 203, preferably in its central region, in which case a separate infrared sensor is not required and temperature measurement is combined with the optical tympanic examination.
Many other sensors can be connected to the measuring head 20 according to the invention, and their operation steps continue as described above (step 250). In step 260, the measurement event is declared complete and one can wait for the results of the analysis from the cloud system 10 (step 106 in Figure 1b).
Figure 2c shows an embodiment of the invention based on acousto-optical stroboscopic measurement. In it, optical imaging is used not only for normal video imaging but also in connection with the acoustic measurement, so that the light of the light source 202 illuminating the tympanic membrane 29 is pulsed in synchronization with the acoustic vibration. According to the stroboscopic principle, the light is pulsed at a frequency 1 to 10 hertz lower or higher than the acoustic oscillation (e.g. 3 hertz), whereby the video image of the tympanic membrane 29 shows the vibration of the tympanic membrane slowed down to the differential frequency (e.g. 3 Hz as above). In this embodiment of the invention, it is preferred to place one or more striped or spotted light sources 209 near the tympanic membrane 29 at the edge portion of the ear canal 27 (in the case of multiple light sources, at different edge portions of the ear canal) so that the tilted, striped or spotted light is seen on the vibrating tympanic membrane as streaks or stains moving in the direction of the tympanic surface. The video camera 201 records this view.
In this embodiment of the invention, two pulsed and striped or spotted light sources of different colours can advantageously be placed on opposite sides of the ear canal, providing a slow-motion video image of the tympanic surface, with those colours in different phases of the vibration hitting one on top of the other and interlacing side by side as the tympanic membrane nears and moves away from the light sources. Thus, for example, by using blue and yellow light on the surface of the tympanic membrane, the above-mentioned 3 hertz differential frequency shows streaks or spots whose pattern pulses at 3 hertz in blue, yellow and green, providing an enhanced view of the mobility of the tympanic membrane in different parts of its surface. From the slow-motion video, the mobility of the tympanic membrane can be measured as a function of location, which is useful information in determining the presence and amount of possible mucus in the middle ear. In this embodiment of the invention, there may be two or more light sources, and they may be located on different sides of the ear canal but also on the same side at different distances from the tympanic membrane. In this way, the slow-motion video image generated from the tympanic membrane can be adjusted and optimized. With this embodiment of the invention, the mobility of the tympanic membrane can also be viewed visually in real time, either on the screen of the mobile device or with a suitable lens combination directly from the surface of the tympanic membrane.
In this embodiment of the invention, the acoustic excitation periods are preferably longer (e.g., 1 to 5 seconds) to provide sufficient optical slow-motion video of the tympanic membrane movements at each acoustic frequency. Recording video at several different acoustic frequencies provides additional information for measuring the tympanic membrane condition, the middle ear pressure difference, and possible mucus.
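The stroboscopic slow-motion principle reduces to simple arithmetic: pulsing the light at a frequency offset from the acoustic excitation makes the membrane appear to vibrate at the differential frequency. A minimal sketch, with the 3 Hz offset taken from the example above and the 2000 Hz excitation frequency assumed for illustration:

```python
def strobe_frequency(f_acoustic: float, f_slow: float = 3.0,
                     above: bool = True) -> float:
    """Strobe rate that shows the membrane vibrating at f_slow hertz.

    Pulsing the light at f_acoustic +/- f_slow makes the tympanic
    membrane appear in the video to vibrate at the differential
    frequency f_slow.
    """
    return f_acoustic + f_slow if above else f_acoustic - f_slow

# Example: a 2000 Hz acoustic excitation (an assumed value) strobed at
# 2003 Hz appears as a 3 Hz slow-motion vibration of the membrane.
print(strobe_frequency(2000.0))  # -> 2003.0
```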
Figure 3 shows the different operating stages of the user interface 31 operating in the mobile device 30 of the user 3 of the otitis media measurement and monitoring system according to the invention. On the display screen of the mobile device, the application 32 according to the invention is shown as one application among others. Clicking on it opens the welcome screen 320, where the user can be shown e.g. news 321 about products and services, as well as new application versions. In the upper left corner of each box there is an arrow-back button 323 to access the previous box, and in the upper right corner an arrow-forward button 324 to access the next box if it has already been visited during an ongoing measurement event. The measurement is accessed by clicking the START button 322, which opens the patient selection screen 330. In case the current mobile device-probe combination has not yet been calibrated or a longer period of time has elapsed since the calibration, the mobile device 30 performs the calibration in cooperation with the artificial intelligence system (steps 511-521), followed immediately by the measurement event.
In the patient selection box 330 (needed, for example, if there are multiple children in the family), the current patient button 331 is clicked. If the patient is new, the NEW button 332 is clicked to open the patient's personal and background information input screen and then return to the list of selectable patients 331. If there are many patients, the list changes to up-down scrolling. Clicking on the patient's name opens the patient's measurement screen 340. Double-clicking (or long-pressing) the patient's name opens a screen where all the patient's personal data, diagnoses and measurement data can be downloaded to the mobile device and/or deleted from the patient database 16. The anonymous part of the measurement data used to teach the artificial intelligence unit 11 remains in the anonymous database 17. This function complies with the data security legislation of each country.
The patient measurement screen 340 displays the patient's name, which is from now on displayed on all screens. Next, the probe 20 is placed in the patient's right (or left) ear and the right ear icon 341 (or left icon 342, respectively) is clicked on the screen. The ears can be measured in either order, or only one ear can be measured. If either one is to be left unmeasured, the SKIP button 343 or 344 of that ear icon can be clicked, whereupon the measurement automatically continues with the remaining ear. The measurement event comprises the measurements of all the sensors at the probe as shown in Figure 2b and may therefore take a moment, during which the probe is kept stationary in the ear. At this point, an image of the ear canal provided by the video camera 201 opens on the screen 340 and an instruction window requests the user to move the measuring head 20 so that the image shows the entire tympanic membrane 29. When the measurement event has been completed, the data transferred to the artificial intelligence unit 11 for processing, and the report retrieved, a result screen 350 corresponding to that ear of the patient opens on the mobile device 30. The screen shows the icon 351 of that ear, with the colour indicating the condition of the ear: green means healthy, red inflamed and yellow a borderline case. In the centre of the icon is a photograph or a rotating video clip of the tympanic membrane 352. The screen also shows the tympanic membrane temperature 353 and possible measurements from other sensors 354. If the ear has been measured previously, a curve 355 of historical data from the time the disease has been monitored is displayed. Historical data is displayed as daily values on a scale of 0 to 100, for example, with large values at the top coloured red, small values at the bottom coloured green, and medium values at the centre coloured yellow. At the top of the history screen is a measurement date tag 356 that can be moved to compare the results of previous measurements (e.g., temperature 353, tympanic image 352).
The arrow-back button 323 or the NEXT button 357 moves on to measuring the second ear, which produces a result box 360 equivalent to result box 350. From there, the NEXT button 361 leads to the report box 370.
Report box 370 shows the last measurement results for both ears and the daily history so far for both ears. The results for the right ear are displayed on the left side of the screen and the results for the left ear on the right side, so that the view corresponds to the natural situation where the patient and the viewer are face to face. In the report box 370, the colours (red-yellow-green) of the ear icons 371 and 372 give a quick snapshot of the condition of the ears. In the centre of each of the icons 371 and 372 is a photograph taken by the video camera 201 or a short continuously repeated video clip of the tympanic membrane in question. Temperature values 373 and 374 show the minimum and maximum values for the current measurement period. History curves 375 and 376 show the daily condition estimates for the measurement period as in result boxes 350 and 360. By moving the measurement day tags 377, each day's measurement data and tympanic images are displayed at the top of the report. In addition, the report box 370 has a verbal report window 378, which shows e.g. comments written by a physician and texts generated by the artificial intelligence. Physician comments may include personalized additional information for interpreting the results. Comments from the artificial intelligence may contain, for example, information about the measurement uncertainty (if, for example, the tympanic membrane image is of poor quality) or about strangely divergent measurement results. From the report box 370, the NEXT button 379 moves to the communication box 380.
In the communication box 380 it is possible to choose to send a message either to one's own doctor by clicking on the image 381 or to another physician selected from the drop-down list behind the OTHER button 382. The message automatically contains the information of the report 370, but the user can also write his/her own comments or greetings in the message window 383. The message is sent with the SEND button 385, which opens an acknowledgment screen 390, from which the application can be closed with the CLOSE button 391 or returned to the beginning with the RETURN button 392. The message can be omitted by clicking the NO button 384, which opens the corresponding acknowledgment screen with the text MESSAGE NOT SENT.
Figure 4 shows the different stages of operation of the user interface 41 operating in the mobile device 40 of the doctor 4 of the otitis media measuring and monitoring arrangement according to the invention. On the display screen of the mobile device, the application 42 according to the invention is shown as one application among others. Clicking on it opens the welcome screen 420, where the user can be shown e.g. news 421 about products and services as well as new application versions. In the upper left corner of each box there is an arrow-back button 423 to access the previous box, and in the upper right corner an arrow-forward button 424 to access the next box if it has already been visited during an ongoing operation. To access the report review, the START button 422 is clicked, which opens the patient queue screen 430.
In the patient queue box 430, one of the queued patient buttons in list 431 is clicked. If there are many patients in the patient queue box 430, the list changes to be scrolled up and down. Each patient has an Urgent mark 434 that illuminates and/or flashes depending on how urgently the condition of that patient requires action. The information about the urgency comes from the artificial intelligence unit 11 of the arrangement according to the invention via the patient database 16. If the patient is new, the NEW button 432 is clicked to open the patient's personal and background information input screen, and upon return, the name of the new patient appears in the list of selectable patients 431. There may be more than one new patient in the queue at the same time. For the admission of new patients, a separate data query and input event can also be arranged between the patient and the doctor, which involves a procedure for granting access rights to the patient information system and a billing agreement. At its simplest, the acceptance of a new patient takes place only as an exchange of information between the user 3 and the patient database 16 and between the doctor 4 and the patient database 16.
Clicking on a patient's name in patient list 431 opens the report box 440 for that patient.
Report box 440 displays the most recent measurement results for both ears and the daily history to date for both ears. The results for the right ear are displayed on the left side of the screen and the results for the left ear on the right side, so that the view corresponds to the natural situation where the patient and the viewer are face to face. In the report box 440, the colours (red-yellow-green) of the ear icons 441 and 442 give a quick snapshot of the condition of the ears. In the centre of each of the icons 441 and 442 is a photograph taken by the video camera 201, a repetitive series of images, or a short continuously repeated video clip of the tympanic membrane in question (e.g., a tympanometer video of the tympanic membrane motion caused by an air pressure pulse). Temperature values 443 and 444 show the minimum and maximum values for the current measurement period. History curves 445 and 446 show the daily condition estimates for the measurement period. By moving the measurement day tags 447, the daily measurement data and images of the tympanic membranes are displayed at the top of the report. In addition, the report box 440 has a verbal report window 448, which shows e.g. comments written by a doctor and texts generated by the artificial intelligence. The report box 440 seen by the physician is substantially similar to the report box 370 seen by the user. From the report box 440, the physician can use the FULL REPORT button 449 to move to a more detailed Full report box 450 containing physician-relevant information.
In the full report box 450, measurement and analysis curves associated with the right ear 451 or the left ear 452 may be displayed, which may include original measurement results, light and video images of the tympanic membrane, and time-evolution curves. By moving the measurement date tag 453, a diagnosis or other conclusions about the nature and progression of the disease appear in the diagnosis window 454 for each stage. In addition, moving the measurement date tag 453 displays the actions taken at each step in the action window 455, e.g., initiated drug regimens, tympanic membrane punctures, piping, etc. Based on this full report information, the physician can write a report on the situation in the message box 456 and send it to the user's report box 370, where it appears in the message window 378. Preferably, the physician's report is at least partially structured so that it can be read and understood unambiguously by the artificial intelligence. To this end, part of the reporting may be to answer questions from the artificial intelligence about the patient's condition and/or disease progression, so that the information can be used to refine the artificial intelligence algorithms for future similar cases.
After sending the report to the user, an acknowledgment screen 460 for the successful transmission of the report 461 opens for the physician. At the same time, the physician is asked whether the data can also be sent for the teaching use of the artificial intelligence 462. With the NOT YET button 463, the physician can skip sending the teaching material. By clicking on the SEND button 464, the data identifying the patient is deleted and the rest of the data is sent to the anonymous database 17 as teaching material for the artificial intelligence unit 11. Both buttons 463 and 464 lead to the thank-you box 470, which may include advertising 471 and from which the application may be closed with the CLOSE button 472, or the treatment of the next patient may be continued with the RETURN button 473.
When the user 3 sends a message from his communication box 380 to the doctor 4 by clicking the SEND button 385 when the ear of patient 2 is sore, a message box 480 opens for the doctor, showing which patient is in question 481 and an emergency sign 482 indicating how urgent an answer the situation requires. With the ANSWER button 483 he can move immediately to the report box 440 of said patient and make the necessary interpretations, decisions, and actions. The REMIND LATER button 484 adds the patient's name to the end of the list 431 in the patient queue box 430.
Figure 5 shows the different stages of operation of the artificial intelligence unit 11 and the database unit 15 of the measurement and monitoring arrangement according to the invention and their connection to the mobile user interface 31 of the user 3 and the mobile user interface 41 of the doctor 4. Mobile UI events are organized in the Mobile UI column 501, artificial intelligence processor events in the artificial intelligence processor column 502, patient database events in the Patient DB column 503, computation algorithm generator events in the Algorithm Generator column 504, and anonymous database events in the Anonyme DB column.
The measurement is started from the mobile device 30 with the START button 322 (step 102 in Figure 1b). Initially, the mobile device initiates a calibration of the current mobile device-probe combination (step 511) and checks the patient database to see whether the calibration has already been performed and is valid (step 512). If the calibration is OK (step 514), the calibration step is skipped and the process moves directly to the ear measurement step 522. If the calibration is not valid (check in step 513), the patient database 16 sends the data needed to generate the calibration signal to the mobile device 30, which converts the data into a calibration excitation signal (step 515) and feeds it to the probe 20, which is held in the air during calibration. The probe receives the response signal measured by the probe microphone (step 516) and sends it to the artificial intelligence processor for analysis. The artificial intelligence processor 12 determines the correction parameters necessary for this mobile device-probe combination (30, 20), related e.g. to frequency and delay responses (step 517), and stores them in the patient database 16 in the data of the user in question. At the same time, the correction information also goes to the algorithm generator 13, which makes the corresponding corrections to the artificial intelligence calculation algorithms (step 519) and stores the corrected algorithms (step 520). Thus the calibration step ends (step 521) and the process moves to the measurement step 522. The correction parameters can be utilized, for example, by selecting the actual measurement excitation signal used in the actual measurement (step 524) so as to produce a substantially similar acoustic excitation in the ear canal (step 525) regardless of the acoustic and electronic frequency and delay response of the probe 20 and the mobile device 30. The data generated in the calibration measurement can also be utilized in the artificial intelligence unit 11 to fine-tune the algorithms used in processing the measurement results (steps 528 and 519), so that the data provided by the calibration measurement serves as one set of reference data in the comparison functions.
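One plausible way to derive the frequency and delay correction parameters of step 517 from an in-air calibration measurement is sketched below. This is an assumption about the signal processing involved, not the patent's prescribed method; the cross-correlation and spectral-ratio approach is just one standard technique.

```python
import numpy as np

def correction_parameters(excitation, response, fs, eps=1e-9):
    """Estimate delay and per-frequency gain corrections from an
    in-air calibration measurement of one probe-mobile device pair.

    Cross-correlation gives the bulk delay; the spectral ratio of the
    excitation to the measured response gives a gain correction per
    frequency bin. Both could be stored per user, as in step 517.
    """
    excitation = np.asarray(excitation, dtype=float)
    response = np.asarray(response, dtype=float)

    # Bulk delay from the peak of the cross-correlation.
    xcorr = np.correlate(response, excitation, mode="full")
    delay_s = (int(np.argmax(xcorr)) - (len(excitation) - 1)) / fs

    # Per-frequency gain correction as a spectral ratio.
    exc_spec = np.fft.rfft(excitation)
    res_spec = np.fft.rfft(response, n=len(excitation))
    gain_correction = np.abs(exc_spec) / (np.abs(res_spec) + eps)

    return delay_s, gain_correction
```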
In the measurement phase, the different measurements are performed sequentially, first on one ear and then on the other. Initially, the patient is selected in box 330, and when box 340 is opened, the ear to start with is selected, the probe 20 is inserted into the ear, and the acoustic reflectometry measurement is initiated (step 523, step 221 in Figure 2b) by clicking on the ear icon 341 or 342. The mobile device 30 retrieves the excitation signal data from the patient database (step 524), converts the data into a signal, sends it to the probe in the ear (step 525), receives an echo signal from the ear (step 526), processes the signal into digital format, and packages and sends it to the artificial intelligence for examination (step 527). The artificial intelligence processor 12 processes the data using valid algorithms (step 528) and stores the result in the patient database 16 (step 529).
Next, the mobile device 30 starts the optical measurement (step 530). With the probe 20 still in the ear, the viewfinder image of the video camera 201 opens on the screen of the mobile device, along with instructions guiding the user to orient the probe (step 531) so that the tympanic membrane appears in the viewfinder image. When the tympanic membrane is representatively visible, the video camera automatically takes a video clip or multiple video clips and still images of the tympanic membrane (step 532), and the mobile device processes, crops, packages, and sends (step 533) them to the artificial intelligence for analysis. The artificial intelligence analyses the tympanic membrane images and videos using algorithms that compare images from different disease states with typical images stored in the anonymous database, calculates colour and pattern parameters from them using different pattern recognition methods, and compares them with reference parameters stored in the anonymous database. The results of the analysis are stored in the patient database (step 535).
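As an illustration of the colour parameters mentioned above, the following sketch computes a few simple colour features from an RGB image of the tympanic membrane. The feature set is a hypothetical minimal example; a real analysis would segment the membrane first and use much richer pattern recognition features.

```python
import numpy as np

def tympanic_colour_features(rgb_image):
    """Compute simple colour features from an RGB membrane image.

    A hypothetical minimal feature set; a real analysis would segment
    the tympanic membrane first and add pattern-based features.
    """
    pixels = np.asarray(rgb_image, dtype=float).reshape(-1, 3)
    mean_r, mean_g, mean_b = pixels.mean(axis=0)
    total = mean_r + mean_g + mean_b + 1e-9
    return {
        "mean_red": mean_r,
        "redness_ratio": mean_r / total,            # tends to rise with inflammation
        "red_green_contrast": (mean_r - mean_g) / total,
    }
```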
When using a probe based on the stroboscopic principle shown in Fig. 2c, the orientation of the optical image (step 531) is repeated so that the tympanic membrane is properly displayed in the viewfinder image. Thereafter, steps 522 to 529 are repeated so that, in addition to the acoustic reflectometry measurement, the video camera 201 and the light sources 204 simultaneously perform the functions described in Fig. 2c and its description, synchronized to the excitation signal in steps 525 and 526. In step 527, in addition to the data produced by the echo signal, the slow-motion video data is packaged and sent to the artificial intelligence unit 11, which starts the algorithms assigned to the data analysis (step 528) and stores the results (step 529). The process then proceeds to step 540.
Next, the mobile device 30 initiates the temperature measurement (step 540) and performs it by means of an infrared sensor 204 oriented to point in the direction of the tympanic membrane, preferably while the optical image from the previous step is still visible. In practice, the temperature measurement (step 541) is timed to occur immediately after or during the optical imaging, in which case the probe is best positioned for the measurement. The result is sent to the artificial intelligence for analysis (step 542) and stored in the patient database (step 543).
The other measurements in use are then performed (steps 545 and 546), the results are sent to the artificial intelligence for analysis (step 547), and stored in the patient database (step 548).
When all the measurements included in the program have been completed, the probe is moved to the other ear (step 549) and the above sequence of events is repeated starting from step 522. After this, the ear measurements are complete (step 550), the probe can be removed from the ear, and the report is waited for. Alternatively, as shown in Figure 3 in boxes 350 and 360, the artificial intelligence may be set to process the report separately for each ear, as presented in the following steps, so that the report of the ear in question can be seen immediately after the measurement, before the next ear is measured.
The generation of the report starts from step 560 and is completed in step 569. The artificial intelligence processor 12 retrieves the measurement and analysis results (562) of the patient in question, including history (step 561), from the patient database 16, the reference data (step 564) selected for this patient, for example on the basis of medical history, from the anonymous database 17, and the previously refined and stored algorithms for this case (step 566) from the algorithm generator 13. Next, the artificial intelligence processor 12 applies the algorithms (step 567), with the reference data and the patient's measurement history data, to the current measurement and imaging results, processes the report (step 568), and sends it to the mobile device 30 of the user 3.
User 3 sees the report (step 569) and, according to the result, either finds the situation good and does not send the report to the doctor 4, or finds the situation bad and sends the report (step 570) with his own comments to the doctor.
The events in box 580 occur on the mobile device 40 of the physician 4. The physician receives notification of the arrival of the report and proceeds either immediately or later to examine it. He analyses the disease situation, makes a diagnosis, and plans actions (step 571), which he records in a report and sends back to the user (step 572). In addition, if desired, he also sends a report to teach the artificial intelligence (step 573) for similar situations. The training data (step 574) is stored in the anonymous database 17, from which the algorithm generator 13 extracts it and refines the algorithms accordingly (step 575). The advanced algorithms are stored (step 576) and are available the next time.
In an advanced embodiment of the invention, in which the user is subsequently asked for feedback on patient recovery, the artificial intelligence analyses, in data analysis step 567, the quality of the measurement data and the integrity of the measurement event (both ears measured and the relevant measurements, at least acoustic and temperature, performed) and, based on the result, places valid measurement events on the post-check list for teaching use. In this case, after a set time, for example 6 weeks, the artificial intelligence unit 11 sends a message to the user's mobile device 30 asking the user to measure the patient's ears again. The message also asks for information about the patient's recovery progress, possible antibiotic regimens, and other treatment measures. The artificial intelligence unit 11 links this information to the patient database 16 for use as training data to refine the patient's personally customized algorithms, but also as anonymized instructional data to the anonymous database 17 for use in refining general algorithms in the algorithm generator 13.
The artificial intelligence unit 11 can be programmed to use, for example, classification or clustering of measurement results based on pattern recognition, and calculation of disease status/changes based on categories, in the processing of measurement results. Classification can be done, for example, by the kNN method (k Nearest Neighbour), in which the measurement result is compared with the reference data by searching for the k nearest neighbours, which mainly means similar cases. Neighbours are selected by calculating the distance of the point describing the measurement result in the M-dimensional vector space from the other points describing measurement results, where M is the number of features obtained from the measurement results. Features extracted from the measurement results can be used as dimensions, such as the responses of the acoustic reflection at different frequencies, the values of the acoustic response or the ultrasonic envelope at different times (e.g. reflection from the tympanic membrane, reflection from the posterior wall of the middle ear), the tympanic colour coded as a numerical value, the amplitude of the tympanic motion in the middle of the tympanic membrane and separately in its lower part, the temperature value of the tympanic membrane, the measurement value given by a chemical sensor, etc. The diagnosis given by a doctor or other expert body, and the information provided by the user in connection with the follow-up measurement about the treatments used and the course of healing, can also be coded into features that the algorithm can use as dimensions in said vector space. In addition, individual weighting factors can be defined for the features, which can be used to fine-tune the algorithm.
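By way of illustration only, the following is a minimal sketch of such a kNN classification with per-feature weighting factors, assuming the feature extraction described above has already produced one numerical vector per measurement; the function and variable names are hypothetical and only NumPy is used:

```python
import numpy as np

def knn_classify(query, reference_features, reference_labels, k=10, weights=None):
    """Classify one measurement by majority vote among its k nearest
    neighbours in the M-dimensional feature space.

    query              -- 1-D array of M features from the current measurement
    reference_features -- (N, M) array of features from reference cases
    reference_labels   -- length-N sequence of diagnosis categories
    weights            -- optional per-feature weighting factors
    """
    q = np.asarray(query, dtype=float)
    refs = np.asarray(reference_features, dtype=float)
    w = np.ones(q.shape[0]) if weights is None else np.asarray(weights, dtype=float)
    # Weighted Euclidean distance from the query point to every reference point
    d = np.sqrt((w * (refs - q) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    # Majority vote: the category into which most of the k neighbours fall
    categories, counts = np.unique(np.asarray(reference_labels)[nearest],
                                   return_counts=True)
    return categories[np.argmax(counts)]
```

Called with, for example, a feature vector of acoustic responses at several frequencies, the numerically coded tympanic colour and the membrane temperature, and k = 10, the sketch assigns the current case to the category into which most of the ten nearest reference cases fall, in the spirit of the kNN variants recited in the claims.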
Other pattern recognition and clustering methods used in artificial intelligence technology can also be applied in the measurement and monitoring system for otitis media according to the invention.
The artificial intelligence unit 11 can also be programmed to process the measurement results with a SOM neural network (Self-Organizing Map). In the SOM neural network, statistical relationships between elements of a multidimensional input data set are converted into simple geometric relationships.
The SOM neural network is updated by the following algorithm (1):

$$\lVert x(t_k) - m_i(t_k) \rVert = \min_j \{\, \lVert x(t_k) - m_j(t_k) \rVert \,\}, \qquad (1)$$

where $x(t_k)$ is the multidimensional data vector received by the SOM neural network and $m_j(t_k)$ is the artificial neuron or weight vector. The time is expressed by the variable $t_k$.

Equations (2) and (3) can be used as the update rule for the weight vector:

$$m_i(t_{k+1}) = m_i(t_k) + \alpha(t_k)\,[\,x(t_k) - m_i(t_k)\,], \quad \text{if } i \in N_c, \qquad (2)$$

$$m_i(t_{k+1}) = m_i(t_k), \quad \text{otherwise}. \qquad (3)$$
The parameter $\alpha$ is a "forgetfulness" factor whose magnitude determines how much of the old neuron value is retained in the update. It also controls network stability. $N_c$ is a topological neighbourhood, i.e. the set of neurons closest in the network to the neuron that minimizes expression (1).
The map update rule means that the neurons $m_j$ closest to the data vector $x$ are moved towards the data vector $x$. Thus, neurons in the SOM neural network learn and/or tune through the input variables they receive. The task of the teaching algorithms is to implement the updating activities so that the intelligence of the neural network increases. In the measurement and monitoring system according to the invention, the SOM neural network learns or is taught to know the acoustic reflections of the ears to be measured, for example their envelopes and/or frequency spectra, and to compare them with acoustic reflections previously recorded from the same ear and from other ears belonging to the same category. In addition, optical images/videos of the tympanic membrane, amplitudes of the tympanic membrane movement from the middle and separately from the lower part of the tympanic membrane, temperature measurements and other measurement results from various sensors, features of the diagnosis as determined by a physician or other expert body, and the post-verification information provided by the user in connection with the follow-up measurement about the treatments used and the course of healing may be included in the neural network, in the same way as in the pattern recognition methods.
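For illustration, a minimal sketch of one update step following equations (1) to (3) is given below; the neighbourhood function and all names are hypothetical assumptions, and only NumPy is required:

```python
import numpy as np

def som_update(weights, x, alpha, neighbourhood_of):
    """One SOM update step following equations (1) to (3).

    weights          -- (n_neurons, M) float array of weight vectors m_i(t_k)
    x                -- M-dimensional data vector x(t_k)
    alpha            -- "forgetfulness" parameter alpha(t_k)
    neighbourhood_of -- function returning the index set N_c for the winner
    """
    # Equation (1): the winning neuron minimizes the distance to x
    winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    # Equation (2): neurons in the topological neighbourhood N_c move towards x
    for i in neighbourhood_of(winner):
        weights[i] += alpha * (x - weights[i])
    # Equation (3): all other neurons are left unchanged
    return weights
```

In this sketch the data vector x could hold, for example, the envelope or frequency-spectrum samples of one acoustic reflection, so that repeated updates organize the map around the reflection patterns of healthy and diseased ears.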
Neural network learning takes place automatically by updating its algorithm partially or completely self-driven. If necessary, the teaching routine of the neural network can be given significant freedom to choose methods with different basic theories to implement learning. However, the results of this learning need to be observed and managed. To this end, an internal validation step may be included in the teaching routine, the task of which is to ensure, on the basis of the available data, that the proposed new neural network operates at a sufficiently good performance. The validation methods used here may be, for example, of the leave-one-out or leave-N-out type, in which part of the teaching material is omitted and network learning is verified by appropriate statistical methods based on the data excluded from the teaching material. Validation methods can also be regression-based: in them, for example, the relationship between the desired response of a neural network and the response it produces is examined by regression analysis using conventional parameters such as a correlation coefficient. An application-specific definition of, for example, the sensitivity or specificity of an otitis media diagnosis within the range of available data can also serve as a performance parameter. In complex cases, neural network learning can also be controlled in such a way that the teaching process continuously produces candidates for a new, better-than-before neural network and presents the corresponding performance parameters. If the situation is too complex for the machine to deduce, an expert monitoring the neural network learning can also be asked at this stage to decide on the best of the candidates. Following this decision, made by either a machine or a human, the selected neural network is deployed. In order to control the learning of the neural network, the operating principle of the neural network in use at any given time can be documented. Basically, this is done by storing in memory the neural networks used by the application at each point in time. In addition, an interpretation report in a human-understandable form can be stored, which describes the performance and operation of the neural network in question in the light of the data currently in use. This report may also include a description of the principle by which the new neural network was chosen to succeed the previous one. This neural network documentation, accumulated over time, can also serve as feedback in automating the neural network update criteria. In practice, updating the neural network means that the algorithms of the artificial intelligence unit 11 are refined as reference material accumulates, so that the advanced algorithms are tested with previously measured data before deployment and approved only if they give improved relevance to the assessment of disease quality, status and cause.
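As an illustration of the leave-one-out validation mentioned above, the following minimal sketch estimates classification performance by holding out each case in turn; the `classifier` argument could be, for example, the hypothetical `knn_classify` from the earlier sketch, and the names here are likewise assumptions:

```python
import numpy as np

def leave_one_out_accuracy(features, labels, classifier):
    """Leave-one-out validation: each case is held out once and predicted
    from all remaining cases; the overall hit rate is returned."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    hits = 0
    for i in range(len(labels)):
        keep = np.arange(len(labels)) != i   # omit case i from the teaching material
        predicted = classifier(features[i], features[keep], labels[keep])
        hits += int(predicted == labels[i])
    return hits / len(labels)
```

Instead of plain accuracy, the sensitivity or specificity of the otitis media diagnosis could be computed from the same held-out predictions, in line with the performance parameters discussed above.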
The same principle can be applied with neural network types other than SOM, for example feed-forward neural networks (single-layer perceptron, multi-layer perceptron, deep neural networks) and their numerous variations and subtypes. Genetic algorithms and fuzzy logic, for example, can also be utilized to implement the principle. The implementation of the neural networks and/or teaching routines can be done either with program code (SW) or with electronics (HW). If efficient HW-based methods (including parallel processing if necessary) are used in the implementation, this speeds up data processing and widens the range of means for identifying features from the data in mass-based calculations where the power of a SW solution would not be sufficient.
Various neural network solutions can be placed in the algorithm generator 13, and the artificial intelligence unit 11 can be programmed to select for use the neural network principle with the best performance depending on the situation. The selection may also include different pattern recognition methods, in which case the artificial intelligence unit 11 may apply neuro-computing or pattern recognition or a combination thereof, depending on the situation.
Neural network computing can be programmed to operate separately for each measurement mode, such as acoustic reflectometry, optical tympanic membrane imaging, acousto-optical stroboscopic imaging, ultrasound measurements, tympanometry videos, temperature, chemical measurements, etc. Finally, neural network computation can be used to combine the results obtained from these into one value of the disease status and its change, of which the disease status is displayed on the screen of the mobile device as a green, yellow or red ear image; a minimal sketch of such a fusion step is given below.

Some preferred embodiments of the method and apparatus of the invention have been described above. The invention is not limited to the solutions just described, but the inventive idea can be applied in numerous ways within the limits set by the claims.
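The following sketch illustrates the fusion step described above, combining per-modality disease scores into one status value mapped to the green/yellow/red ear image; the score range, the weighting scheme and the thresholds are hypothetical assumptions for illustration, not values given by the invention:

```python
def fuse_modalities(scores, weights=None):
    """Combine per-modality disease scores (each assumed in [0, 1]) into
    one status value and map it to the green/yellow/red ear display."""
    w = weights or {m: 1.0 for m in scores}
    status = sum(scores[m] * w[m] for m in scores) / sum(w[m] for m in scores)
    # Illustrative thresholds only
    if status < 0.33:
        colour = "green"    # situation looks good
    elif status < 0.66:
        colour = "yellow"   # uncertain, follow-up advisable
    else:
        colour = "red"      # disease state likely
    return status, colour

# Example with hypothetical scores from per-modality networks
status, colour = fuse_modalities(
    {"acoustic": 0.70, "optical": 0.55, "temperature": 0.60})
```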

Claims
1. Arrangement for measuring and monitoring otitis media, measuring changes in the middle ear and/or tympanic membrane caused by disease through the ear canal with a probe (20) inserted into the patient's ear canal and connected, wired or wirelessly, to a mobile device (30), characterized in that it comprises
- means for transmitting acoustic signals to the ear and receiving echoes,
- means for tympanic membrane illumination and video recording,
- means for measuring the temperature of the tympanic membrane by means of an infrared sensor,
- means for digitizing, compressing and sending the measurement results to an information network,
- means for storing the measurement results in a patient database (16) operating in the data network,
- means for processing the measurement results by a learning artificial intelligence unit (11) operating in the data network, comprising, in addition to the above-mentioned patient database (16), an artificial intelligence processor (12), an algorithm generator (13) and an anonymous database (17),
- program code means in said artificial intelligence unit (11) for comparing the acoustic, optical and temperature measurement results in the artificial intelligence processor (12) with the reference data collected in the anonymous database (17), using taught algorithms generated by the algorithm generator, looking for similarities and differences in the measurement results and their changes, and inferring the disease status therefrom,
- program code means for teaching the above-mentioned algorithms with new measurement data collected in the anonymous database (17), provided with medical information defined by a doctor or other expert body on the nature and condition of the disease and the treatment measures to be prescribed,
- program code means for teaching the above-mentioned algorithms with measurement data collected from the user to the patient database (16) as a follow-up for 1 to 3 months, provided with data collected from the user on the treatment measures used and the progress of improvement, and
- program code means for transmitting and presenting the results describing the disease state resulting from the processing on the mobile device.
2. Arrangement for measuring and monitoring otitis media according to claim 1, characterized in that it further comprises, located in the artificial intelligence unit (11) and the mobile device (30),
- program code means for calibrating the acoustic measurement of the measuring means formed by the probe (20) and the mobile device (30) by performing at least one calibration measurement (513 to 518) with the probe in the air before the actual ear measurement, and sending the measurement result to the artificial intelligence unit (11), and
- program code means for utilizing the data from the calibration measurement in selecting an acoustic excitation signal (524) for the actual measurements, so that the excitation signal used for the actual measurements produces a substantially similar acoustic excitation (525) in the ear canal regardless of the acoustic and electronic frequency and delay response of the probe (20) and the mobile device (30).
3. Arrangement for measuring and monitoring otitis media according to claim 2, characterized in that the data generated in the calibration measurement is utilized in the artificial intelligence unit (11) for fine-tuning (519) the algorithms of the measurement results processing (528), so that the calibration measurement data serves as one of the reference data.
4. Arrangement for measuring and monitoring otitis media according to claim 1, 2 or 3, characterized in that the measurement uses a stepwise acoustic excitation signal with successive short periods starting from low frequencies (e.g. 500 Hz) and continuing to high frequencies (e.g. 5000 Hz), and that the sound waves reflected from the tympanic membrane (29) and the back surfaces of the middle ear (28) are digitized and sent to the artificial intelligence unit (11), where they are analysed as a frequency spectrum.
5. Arrangement for measuring and monitoring otitis media according to claim 1, 2 or 3, characterized in that the measurement uses one or more (e.g. five) acoustic excitation signal bursts oscillating at different frequencies, the echoes reflected from the tympanic membrane (29) and the posterior surfaces of the middle ear (28) being digitized as a function of time and sent to the artificial intelligence unit (11), where they are analysed as envelope curves and the locations and intensities of echoes reflected from different surfaces are identified from their envelope curves.
6. Arrangement for measuring and monitoring otitis media according to claim 1, 2, 3, 4 or 5, characterized in that, in addition to (or instead of one or more of) the acoustic, optical and temperature measurements, a combined acousto-optical stroboscopic measurement is performed, in which, simultaneously during acoustic excitation, the tympanic membrane is illuminated obliquely by streaked or spotted light pulsed at a frequency different from the frequency of the acoustic excitation (frequency difference e.g. 1 to 3 Hz), and at the same time the streaked or spotted oscillation visible on the surface of the tympanic membrane, slowed down along the surface, is recorded with a video camera, and based on the angle of incidence of the light, the motion scales of the tympanic membrane are identified from the obtained video image as a function of location, which motion scales are used in the artificial intelligence unit as measurement results.
7. Arrangement for measuring and monitoring otitis media according to claim 6, characterized in that there are two or more light sources illuminating the tympanic membrane from an oblique direction and emitting light of different colours from different directions, so that when the tympanic membrane oscillates they overlap one another and interlace side by side depending on the momentary distance of the tympanic membrane from the light sources, thus producing a variable-colour slow-motion video image of the tympanic membrane.
8. Arrangement for measuring and monitoring otitis media according to claims 1 to 6 or 7, characterized in that the measurement results are processed using
- a self-organizing SOM neural network, which uses sound responses measured at different frequencies from the tympanic membrane and the posterior surfaces of the middle ear as data vector elements and is taught to compare different frequency responses with the sound responses of reference data stored in the anonymous database (17), or
- pattern recognition by the kNN method, in which the different frequency responses form, for example, a 50-dimensional vector space in which the measurement result is compared with reference data stored in the anonymous database (17) by searching for the ten (k = 10) closest similar cases and classifying the present case according to the category into which most of those ten cases fall.
9. Arrangement for measuring and monitoring otitis media according to claim 8, characterized in that the measurement results are additionally processed by using
- a self-organizing SOM neural network, in which the data vector elements are features extracted from the optical video image recorded from the tympanic membrane, i.e. features such as the colour of the tympanic membrane at its centre, bottom and other selected sample points, and the uniformity of colour at various points on the tympanic membrane, and which is taught by these features to compare the video with the video images of reference material stored in the anonymous database (17), or
- pattern recognition by the kNN method, in which the features extracted from the optical video image recorded from the tympanic membrane form, for example, a 50-dimensional vector space, wherein the measurement result to be processed is compared with the reference data stored in the anonymous database (17) by searching for, for example, the ten (k = 10) closest similar cases and classifying the present case according to the category into which most of those ten cases fall.
10. Arrangement for measuring and monitoring otitis media according to claim 6, 7, 8 or 9, characterized in that the measurement results are additionally processed by using
- a self-organizing SOM neural network, in which the data vector elements are features extracted from the acousto-optic stroboscopic video image recorded from the tympanic membrane, i.e. features such as the colour of the tympanic membrane at its centre, bottom and other selected sample points, and the uniformity of colour at various points on the tympanic membrane, and which is taught by these features to compare the acousto-optic stroboscopic video with the video images of reference material stored in the anonymous database (17), or
- pattern recognition by the kNN method, in which the features extracted from the acousto-optic stroboscopic video image recorded from the tympanic membrane form, for example, a 50-dimensional vector space, wherein the measurement result to be processed is compared with the reference data stored in the anonymous database (17) by searching for, for example, the ten (k = 10) closest similar cases and classifying the present case according to the category into which most of those ten cases fall.
11. Arrangement for measuring and monitoring otitis media according to one of claims 1 to 10, characterized in that it further comprises
- means for at least one of the following known functions:
for chemical determination of tympanic membrane secretions,
for measuring the mobility of the tympanic membrane by means of an air pressure pulse, and
for inspecting the tympanic membrane and the middle ear by means of ultrasonic echoes, and
- program code means for incorporating the measurement results produced by the above-mentioned functions into the data processing in the artificial intelligence unit (11) to produce and refine the results describing the disease state and its change.
12. Arrangement for measuring and monitoring otitis media according to one of claims 1 to 11, characterized in that it also comprises
- program code and user interface means for displaying measurement results, results describing the disease state and its changes, optical images, video images and written statements on a mobile device and for sending them to the user and the attending physician or other expert body, and
- program code and user interface means for the mobile device that allow a physician or other expert body to determine the nature, condition and causes of the disease, as well as the treatment measures to be given in each patient case, and to transmit the data, in addition to the patient database (16), to the anonymous database (17) for teaching the artificial intelligence.
13. Arrangement for measuring and monitoring otitis media according to one of claims 1 to 12, characterized in that it also comprises
- program code and user interface means for downloading the patient's personal data, measurement results, results describing the disease state and its changes, optical images, video images, acousto-optical stroboscopic video images and written statements to the memory of the mobile device or to a memory unit selected by the user, and
- program code and user interface means for the mobile device that allow the user to delete the desired data from the patient database (16).
14. A cloud server for measuring and monitoring otitis media, comprising means
- to control the measurement event,
- to store the measurement results in an online patient database (16),
- to process the measurement results with a learning artificial intelligence unit (11) operating in the data network, comprising, in addition to the above-mentioned patient database (16), an artificial intelligence processor (12), an algorithm generator (13) and an anonymous database (17), and
- program code for transmitting and receiving signals based on acoustic reflectometry to and from the user's mobile device,
- program code for performing measurement based on optical video recording, and
- program code for receiving measurement results from other devices used in ear examinations,
characterized in that the cloud server (10) also comprises
- program code for storing and processing the measurement results by means of algorithms which compare the measurement results in the artificial intelligence processor (12) with the reference data collected from a large number of patients in the anonymous database (17), using the taught algorithms generated by the algorithm generator (13), looking for similarities and differences in the measurement results and their changes, and inferring the disease status from them,
- program code for teaching the above algorithms with new measurement data collected in the anonymous database (17), provided with medical information as defined by a doctor or other expert body on the nature, condition and causes of the disease and the treatment measures to be prescribed, and/or
- program code for teaching the above-mentioned algorithms with measurement data collected from the user from the same patients into the patient database (16) as a follow-up for 1 to 3 months, which is provided with data collected from the user on the treatment procedures used and the progress of recovery, and which is transferred anonymised to the anonymous database (17) for use in the analysis of other similar patient cases, and
- program code for transmitting and displaying on a mobile device the results describing the disease state or its change resulting from the processing.
PCT/FI2020/000007 2019-04-10 2020-04-07 Multimodal measuring and tracking of middle ear otitis by artificial intelligence WO2020208291A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20197064A FI20197064A1 (en) 2019-04-10 2019-04-10 Multimodal measuring and monitoring of middle ear inflammation using artificial intelligence
FI20197064 2019-04-10

Publications (1)

Publication Number Publication Date
WO2020208291A1 (en)

Family

ID=72750440

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2020/000007 WO2020208291A1 (en) 2019-04-10 2020-04-07 Multimodal measuring and tracking of middle ear otitis by artificial intelligence

Country Status (2)

Country Link
FI (1) FI20197064A1 (en)
WO (1) WO2020208291A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014063162A1 (en) * 2012-10-19 2014-04-24 Tawil Jack Modular telemedicine enabled clinic and medical diagnostic assistance systems
US20150065803A1 (en) * 2013-09-05 2015-03-05 Erik Scott DOUGLAS Apparatuses and methods for mobile imaging and analysis
US9867528B1 (en) * 2013-08-26 2018-01-16 The Board Of Trustees Of The University Of Illinois Quantitative pneumatic otoscopy using coherent light ranging techniques
WO2018045269A1 (en) * 2016-09-02 2018-03-08 Ohio State Innovation Foundation System and method of otoscopy image analysis to diagnose ear pathology
CN207755252U (en) * 2017-09-11 2018-08-24 合肥德易电子有限公司 A kind of intelligent radio scope pickup-light source system
US20180353073A1 (en) * 2015-05-12 2018-12-13 Ryan Boucher Devices, Methods, and Systems for Acquiring Medical Diagnostic Information and Provision of Telehealth Services


Also Published As

Publication number Publication date
FI20197064A1 (en) 2020-10-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20787653

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20787653

Country of ref document: EP

Kind code of ref document: A1