WO2022079251A1 - Wireless communication system for medical assistance - Google Patents


Info

Publication number
WO2022079251A1
Authority
WO
WIPO (PCT)
Prior art keywords
location
mobile device
input
living
treatment
Prior art date
Application number
PCT/EP2021/078630
Other languages
French (fr)
Inventor
Walther Dietmar BOON VAN OCHSEE
Original Assignee
B.V. Maritime Medical Applications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by B.V. Maritime Medical Applications
Publication of WO2022079251A1


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/10: ICT specially adapted for therapies or health-improving plans relating to drugs or medications, e.g. for ensuring correct administration to patients
    • G16H 20/17: ICT specially adapted for therapies or health-improving plans relating to drugs or medications delivered via infusion or injection
    • G16H 20/30: ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G16H 20/40: ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G16H 40/60: ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/67: ICT specially adapted for the operation of medical equipment or devices for remote operation
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/30: ICT specially adapted for calculating health indices; for individual health risk assessment
    • G16H 70/20: ICT specially adapted for the handling or processing of medical references relating to practices or guidelines
    • G16H 80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0022: Monitoring a patient using a global network, e.g. telephone networks, internet
    • A61B 5/7275: Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor


Abstract

The present invention is in the field of a wireless communication system for medical assistance, and of a use of said wireless system for training and providing medical assistance, wherein assistance is typically provided over a long distance (i.e. remotely). The medical assistance is typically provided by a well-trained professional to laymen who have at best been partly trained.

Description

Wireless communication system for medical assistance
FIELD OF THE INVENTION
The present invention is in the field of a wireless communication system for medical assistance, and of a use of said wireless system for training and providing medical assistance, wherein assistance is typically provided over a long distance. The medical assistance is typically provided by a well-trained professional to laymen who have at best been partly trained.
BACKGROUND OF THE INVENTION
The present invention is in the field of a system for medical assistance wherein assistance is typically provided over a long distance, in view of a medical professional not being available on site. For instance, on ships and other waterborne or airborne vessels, no medically trained personnel may be present. Provision of medical care is therefore legally transferred to the captain of the ship and/or a ship's officer. The people responsible for medical care receive some basic training, typically delivered on shore. This basic training is found to be insufficient in many cases: partly because of insufficient experience with many medical cases, sometimes because of inadequate training for the specific and often complex case at hand, and also because a lack of familiarity with cases may result in mental hurdles.
However, in many cases the people legally responsible for medical care do have to perform medical actions and carry out medical procedures, sometimes to prevent worse from happening, sometimes to save lives, sometimes to provide accurate care, and so on. In such cases they rely on their basic training and on handbooks or the like. Apart from the fact that training and handbooks are typically at least a few years out of date, they do not provide all the information needed to perform the medical actions required; at best they provide generic information and instructions, which may be of limited relevance to the case. In addition, it is quite often difficult to establish what the medical disorder, disease or injury actually is, especially from handbooks or, likewise, the internet. Therefore quite often there is a need to consult a medical professional. Apart from the fact that distance, language, time zone, availability, the exact knowledge of the professional, etc. are already issues to overcome, the medical professional still has to rely on spoken or written information from the people in charge of medical care, and vice versa. Especially when time becomes an issue, the risk of wrong or inadequate treatment is significant.
In most, if not all, cases the crew of a ship has insufficient medical knowledge and skills to take optimal action in the event of a medical emergency. Therefore, it is difficult to provide the best possible care for a patient in remote locations. Currently, doctors on shore are contacted via satellite phone or email to help provide medical care. However, often not all relevant available medical data are collected by the crew, and additional time is required to obtain the missing information, if that is possible at all. As a result, any decision is made based on incomplete information. Actions based on such a decision may lead to poor quality of care, to preventable injury, and to high additional unneeded risk and costs. In addition, a final decision by the captain or shipowner on how to react (divert the ship (or likewise an airplane) to a port, keep going, evacuate the patient using a helicopter or ship, and so on) is then also made based on partial information, resulting in unnecessarily high costs for evacuation and diversion of ships (~€200k per incident), and may pose avoidable risk to the patient, the ship and its crew. There are currently no applications available that facilitate systematic collection of better-quality data or that provide captains with an objective risk assessment on which to base their decision making.
WO 2018/231059 A1 recites a wireless communication system for medical assistance, and a use of said wireless system for training and providing medical assistance, wherein assistance is typically provided over a long distance (i.e. remotely). The medical assistance is typically provided by a well-trained professional to laymen who have at best been partly trained. Gomez-Gonzalez et al., in "Artificial Intelligence in medicine and healthcare: a review and classification of current and near future applications and their ethical and social impact", https://arxiv.org/ftp/arxiv/papers/2001/2001.09778.pdf, provide such an overview, potential benefits and pitfalls, and issues that can be considered controversial and are not deeply discussed in the literature. WO 2019/162054 A1 recites a system for providing client-side physiological condition estimations during a live video session. In some embodiments, the system includes a first client computer system that is caused to: (i) store a neural network on one or more computer-readable storage media of the first client computer system, (ii) obtain a live video stream of an individual via a camera of the first client computer system during a video streaming session between the first client computer system and a second client computer system, (iii) provide, during the video streaming session, video data of the live video stream as input to the neural network to obtain physiological condition information from the neural network, and (iv) provide, during the video streaming session, the physiological condition information for presentation at the second client computer system.
WO 2014/186838 A1 recites a system for use in remote medical diagnosis of a biological subject, the system including one or more electronic processing devices that receive image data indicative of at least one image of part of the subject's eye from a client device via a communications network, review subject data indicative of at least one subject attribute, select at least one analysis process using results of the review of the subject data, use the analysis process to quantify at least one feature in the image data, and generate an indicator value indicative of the quantified at least one feature, the indicator value being used in the assessment of a condition status of at least one condition.
Another issue is that the people in charge of medical care, as well as the medical professional, are typically trained in a different location and/or setting. As a consequence, a potential risk of inconsistencies is present, which may lead to wrong diagnosis, inadequate treatment, insufficient treatment, neglect of certain aspects of treatment, etc. The "system" of treatment may as a result be considered unreliable and may form a risk for the treatment of the patient, such as in view of claims. Such may especially be the case for ships and vessels out at sea.
In addition, communication over these long distances may be hampered by instability or insufficient capabilities of the communication system used, such as limited bandwidth, noise, disturbances, etc. In principle, complex and costly systems might be used to overcome some of the problems mentioned, but these are not used in practice, not even on very large ships.
Some prior art not particularly relevant to the present invention is US 2017/069227 A1, WO 2017/072616 A1 and US 2015/0077502 A2. US 2017/069227 A1 recites a system and method for providing real-time object recognition. Such is not a goal of the present invention; further, the recognized objects are not physical reality, but a representation thereof. The recognized objects are provided as tactile or audio feedback to a typically visually impaired user. Only one end-user is involved. WO 2017/072616 A1 recites a remote assistance workstation for remote assistance using AR glasses, which may be regarded as a classical one-way system. The goal is to assist people when using AEDs, electrodes or a metronome in acute medical situations. However, the system is not symmetrical and there is no two-way augmentation at both ends of the system. The users are not visible to each other. US 2015/0077502 A2 recites a GUI which is in any case background art to the present invention.
The present invention relates to a wireless communication system for medical assistance, and to a use of said wireless system for training and providing medical assistance, wherein assistance is typically provided over a long distance, which overcomes one or more of the above disadvantages without jeopardizing functionality and advantages.
SUMMARY OF THE INVENTION
The present invention relates in a first aspect to a wireless communication system according to claim 1. Typical steps taken are visualized in fig. 1. Steps that may be taken in case of a medical emergency on board a ship are:
Steps 1-3: Gather data on the state of the patient through anamnesis, physical examination and objective measurements.
Step 4: Contact a doctor on shore.
Step 5: The captain/shipowner makes a decision based on the location of the ship, estimates of the duration of the journey, and the doctor's advice on the treatment of the patient and on the risk to patient/crew/ship.
Step 6: The patient is treated on board; the ship continues, deviates or evacuates.
The present communication system used to provide medical assistance is wireless in view of the typically large distances between the person in charge of providing medical care and a medical professional, or in view of the physical inability of a professional to be present. The system is intended to provide medical assistance, typically to the person in charge thereof. It is specifically noted for some jurisdictions that said assistance does not relate to non-statutory subject-matter, such as methods of treatment, surgery, therapy, or diagnosis, but at most to providing information and instructions to that extent. As mentioned above, said person may be regarded as largely a layman, despite some training. In order to provide proper instructions it has been found essential that the layman can make use of a simple device, such as a mobile phone, a tablet, a smartphone, or even a (small) computer, which device is typically available. The device should have a display in order to present optical (visual) information, such as images. The layman is typically at a first location that is moving from one location to another, such as on a ship far from the shore, or at a remote location, such as a remote house or village, where it is impossible or too complicated to provide medical assistance in person, such as by flying in a doctor or by transporting the patient to shore. In addition, means of communication are thereby inherently limited. It is therefore important that information about the patient, such as optical information on a physical reality, e.g. the condition of the patient, can be made available.
The first mobile device further has a microphone for obtaining audio input at the first location, a speaker, a display, and a transceiver, preferably a wireless transceiver. The second mobile device is configured in a similar, though not necessarily identical, manner. Further, both the first mobile device and the second mobile device are configured to augment reality, and for that purpose comprise a first optical input, such as a camera. With the camera an image can be taken of a patient. The image can be sent, and thereby shared with the medical professional, and displayed on the second mobile device, which is configured to augment reality; the second mobile device is thereby comparable in characteristics and/or features to the first mobile device. The wireless system further comprises a computing device adapted to communicate with the first mobile device and with the second mobile device, the computing device comprising a data processor, and a computer program product comprising instructions for operating the system, the instructions causing the system to carry out the following steps:
- receiving a live video stream and a live audio stream from said first mobile device, said live streams comprising at least one time slice with at least one frame and/or at least one audio fragment comprising visual or audio input characterizing a living being or at least one part thereof at said first location;
- categorizing said living being or at least one part thereof appearing in said live video stream with a computer vision system, and/or categorizing said living being or at least one part thereof appearing in said live audio stream with a computer audio system;
- optionally receiving further input categorizing said living being or at least one part thereof, such as at least one of an identity checker, a credentials checker, a unique session identifier, ECG, blood pressure, vital parameters, blood and urine analysis, blood oxygen level, body temperature, blood sugar level, heart rate, and the results of palpation, auscultation and percussion;
- if said categorization is considered incomplete or insufficient for a final categorization, requesting through said first mobile device further audio and/or video input, such as further photos or detailed photos, possibly assisted by AR to indicate details to be photographed;
- making a final categorization;
- optionally requesting further audio and/or video input characterizing said living being or at least one part thereof through said second mobile device;
- receiving information with respect to said first location comprising at least one of said first location, an intended trajectory of said first location, a speed of said first location, a cargo of said first location, a time constraint of said first location, and a distance to a port;
- assessing said final categorization, the further audio and/or video input, and said information;
- providing at least one treatment, which may be regarded as an integral treatment, preferably at least three treatments, more preferably an order of treatments, wherein a treatment comprises a location of treatment, a time frame of treatment, a person performing said treatment, costs of said treatment, risks associated with said treatment, legal and practical boundary conditions of said treatment, and actions and/or absence of actions, and optionally means of transportation of said living being;
- assessing the risks associated with the at least one treatment, in particular objective risks, wherein the risks include a risk to the patient, a risk to the ship, and a risk to the crew of the ship; and
- providing at least one treatment based on the risks associated therewith, such as based on a risk score.
The computer program product may be loaded on a single computing device, on a distributed computing device, on more than one computing device, and combinations thereof.
Legal and practical boundary conditions of said treatment may, for instance, relate to the cargo present on a ship or airplane, the amount of available fuel, single or combined travel time, weather conditions, the weather forecast, presence or absence of medical assistance, insurance requirements, accessibility of a port (e.g. in view of quarantine), and combinations thereof.
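The treatment attributes listed above (location, time frame, performer, costs, risks, boundary conditions, actions) can be illustrated with a minimal data-structure sketch. All class and field names below are hypothetical, chosen for illustration only; the example values are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Treatment:
    # Hypothetical fields mirroring the attributes named in the description.
    location: str               # location of treatment
    time_frame_hours: float     # time frame of treatment
    performed_by: str           # person performing the treatment
    cost_eur: float             # costs of the treatment
    risk_score: float           # aggregated risk to patient, ship and crew
    boundary_conditions: list = field(default_factory=list)
    actions: list = field(default_factory=list)

def rank_treatments(treatments):
    """Order candidate treatments by their aggregated risk score, lowest first,
    as in the 'order of treatments' the description mentions."""
    return sorted(treatments, key=lambda t: t.risk_score)

options = [
    Treatment("on board", 2.0, "crew", 500.0, 0.4,
              ["weather OK"], ["administer medication"]),
    Treatment("port hospital", 36.0, "doctor", 200_000.0, 0.2,
              ["port accessible"], ["divert ship"]),
]
best = rank_treatments(options)[0]
print(best.location)  # → port hospital
```

In this sketch the ordering criterion is the risk score alone; a real system would weigh costs, time frames and boundary conditions as well.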
The applicant has therewith developed a unique application that provides an integrated system intended to improve all six steps taken during a medical emergency on board a ship (or at any remote location, for that matter), and may combine this with an assessment of the risks and costs of evacuation or diversion of the ship or airplane based on all available information (e.g. the ship's current location). The present system uses Artificial Intelligence (AI) and Augmented Reality (AR) to improve all six steps in this process, which may include at least one of:
1. Digitalize and automate the anamnesis via AI to ensure the right information is collected;
2. Measurements are taken and entered into the system (e.g. using OCR, AI);
3. Physical examination by a doctor supported by intelligent two-way AR;
4. Data from steps 1-3 are sent to the doctor on shore; the assessment is improved due to a better anamnesis, a clear overview of measurements, and the doctor's own examination of the patient through two-way AR, leading to better objective and quantifiable data;
5. The doctor's advice and all data about the ship (e.g. location, destination, cargo, deadlines, etc.) are assessed, and an automated risk assessment over three scenarios (i.e. divert to port, evacuate patient, stay on trajectory) is given or calculated as a risk score; and
6. Treatment is improved through the two-way AR connection with the on-shore doctor.
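The automated three-scenario risk assessment of step 5 can be sketched in outline: combine scenario-specific risk components into a single score and recommend the scenario with the lowest score. The weighting scheme and all numeric values below are illustrative assumptions, not taken from the application:

```python
def risk_score(patient_risk, ship_risk, crew_risk, weights=(0.5, 0.3, 0.2)):
    """Aggregate the three risk components named in the description (risk to
    patient, ship and crew) into one score. Each component is a value in
    [0, 1]; the weights are illustrative assumptions."""
    wp, ws, wc = weights
    return wp * patient_risk + ws * ship_risk + wc * crew_risk

# The three scenarios from step 5, with made-up component risks.
scenarios = {
    "divert to port":     risk_score(0.2, 0.4, 0.1),
    "evacuate patient":   risk_score(0.1, 0.3, 0.8),
    "stay on trajectory": risk_score(0.6, 0.1, 0.1),
}
recommended = min(scenarios, key=scenarios.get)  # lowest aggregated risk
print(recommended)  # → divert to port
```

A real implementation would of course derive the component risks from the collected medical data and the ship's situational data rather than from fixed numbers.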
The present system is considered the first system that integrates potential improvements to all of the steps in this process. In addition, it includes AI and AR components that simplify the decision-making process and help to collect health data and to treat the patient in the best way possible. This is a large improvement compared to the current state-of-the-art, in which the captain needs to call a health professional by satellite phone in order to receive medical advice. It allows the captain/shipowner to make a well-informed decision based on objective data; it allows the crew to treat a patient in the best way possible; the patient gets better treatment; and the shipping company makes a better-informed decision and therefore spends less money.
Potential users are shipping companies and airline companies all over the world. It is noted that the global shipping industry has 116,000 ships at sea and hundreds of billions of dollars invested therein. Ships carry 90% of all global goods. Off-shore and remote markets may also benefit from this application (e.g. off-shore rigs, aircraft, remote communities).
Thereby the present invention provides a solution to one or more of the above-mentioned problems. Advantages of the present invention are detailed throughout the description.
DETAILED DESCRIPTION OF THE INVENTION
The present invention relates in a first aspect to a wireless communication system according to claim 1.
In an exemplary embodiment of the present system the computer program product comprises instructions for recognition of information in digital image data and/or digital audio data, comprising a learning phase on a data set of example digital data having known information, computing characteristics of categories automatically from each example digital datum and comparing the computed characteristics to its known category, preferably comprising in said learning phase training a convolutional neural network comprising network parameters using said data set, more preferably in which learning phase, via deep learning, each layer of said convolutional neural network is represented by a linear decomposition into basis functions of all filters as learned in each layer. In an embodiment of the computer program product said neural network comprises weights that are learnt using a sample dataset; in particular, said weights are learned for a whole patch at once.
The invention relates to neural networks that apply convolution to data of data sets. A set of images is an example of such a data set. Images usually contain data that has spatial coherence. The neural network applies convolution using a set of filters. In the art, this set needs to be complete, or information regarding the data is needed in order to select the right filter set to start with. Here, a set of basis functions is used with which the filters are defined. It was found that the filters can be described using this set of basis functions. Furthermore, or alternatively, when the fitting accuracy needs to be higher, for fitting higher-order information, the basis functions can be made more accurate. In combination, or alternatively, the basis can be expanded with more of the relatively standard functions. This is done without using knowledge of the data set. In the recognition of categorical information as machine-learned from digital image data, convolutional neural networks are an important modern tool. "Recognition of categorical information" here means that a label (for example "cow", "refrigerator", "birthday party", or any other category) is attached to a digital picture; the category may refer to an object in the digital image or to a condition in the scene. Thus, in fact, data that comprises locally coherent data can be binned. Often, such bins are discrete, like the label or category examples above. It is also possible to categorize the data in multidimensional bins; an example is assigning "small-medium-large" to an object. In fact, the network can even categorize on the basis of information that is continuous, for instance a continuous variable like size. In such a categorization, regression analysis is possible. In this respect, data that comprises local coherency may be multi-dimensional.
Such data has, in at least one dimension, data points that are coherent within an area around at least one of its data points. Examples of such data are images, video (which has position coherence and time coherence), speech data, and time series. Data points hold some information on their neighbouring data points.
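The spatial coherence this passage relies on can be made concrete: in a smooth image, neighbouring pixels are strongly correlated, while in pure noise they are not. The following NumPy sketch is purely illustrative (the image sizes and noise levels are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbour_correlation(img):
    """Pearson correlation between each pixel and its right-hand neighbour."""
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    return np.corrcoef(a, b)[0, 1]

# A smooth gradient image is highly coherent; white noise is not.
x = np.linspace(0, 1, 64)
smooth = np.tile(x, (64, 1)) + 0.01 * rng.standard_normal((64, 64))
noise = rng.standard_normal((64, 64))

print(round(neighbour_correlation(smooth), 2))  # close to 1.0
print(round(neighbour_correlation(noise), 2))   # close to 0.0
```

This is exactly the property that makes convolution with local filters a natural operation on images, video and speech.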
Purpose
This is the purpose of recognition: to automatically label a yet-unseen image from features and characteristics computed from the digital image alone.
Learning and application phases
The process of recognition of categorical information consists of two phases: a learning phase and an application phase. In the learning phase, unique characteristics of categories are derived automatically from the features computed from each example digital image and compared to its known category (e.g. a stack of digital pictures each labelled "cow", a stack of digital pictures each labelled "refrigerator", etc., for all categories involved). These characteristics derived from features are transferred to the application phase. In the application phase, the same features are computed from an unknown image. By computation on the features, it is then established whether these features include the unique characteristics of a category A. If so, the unknown image is automatically labelled with this category A.
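The two phases can be sketched generically. Here a toy nearest-centroid classifier stands in for the network (the feature vectors and function names are invented for illustration): the learning phase derives one characteristic per category from labelled examples, and the application phase labels an unseen sample with the nearest category:

```python
import numpy as np

def learn(examples):
    """Learning phase: derive one characteristic per category (here simply the
    mean feature vector) from labelled example data."""
    return {label: np.mean(feats, axis=0) for label, feats in examples.items()}

def apply_phase(characteristics, sample):
    """Application phase: compute the same features on an unknown sample and
    assign the category whose characteristic is nearest."""
    return min(characteristics,
               key=lambda c: np.linalg.norm(sample - characteristics[c]))

# Toy feature vectors standing in for features computed from digital images.
examples = {
    "cow":          np.array([[1.0, 0.1], [0.9, 0.2]]),
    "refrigerator": np.array([[0.1, 1.0], [0.2, 0.9]]),
}
chars = learn(examples)
print(apply_phase(chars, np.array([0.95, 0.15])))  # → cow
```

In the invention the "characteristics" are of course learned network parameters rather than mean vectors, but the train-then-apply structure is the same.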
Introduction to the new approach
Convolutional neural network learning can be seen as a series of transformations of representations of the original data. Images, as well as signals in general, are special in that they demonstrate spatial coherence, being the correlation of the value of a pixel with the values in the pixel’s neighbourhood almost everywhere. (Only at the side of steep edges does it remain undecided whether a pixel belongs to one side or to the other. The steepness of camera-recorded edges is limited by the bandwidth, as a consequence of which the steepest edges will not occur in practice.) When looking at the intermediate layers of convolutional neural networks, the learned image filters are spatially coherent themselves, not only for the first layers [Mallat] but also for all but the last, fully-connected layers, although there is nothing in the network itself which forces the filters into spatial coherence. See figure 1 for an illustration of intermediate layers.
Approach
Different from standard convolutional neural nets, we pre-program the layers of the network with Gaussian-shaped filters to decompose the image as a linear decomposition onto a local (Taylor- or Hermite-) functional expansion.
The invention further relates to a method for recognition of information in digital image data, said method comprising a learning phase on a data set of example digital images having known information, in which characteristics of categories are computed automatically from each example digital image and compared to its known category, said method comprising training a convolutional neural network comprising network parameters using said data set, in which via deep learning each layer of said convolutional neural network is represented by a linear decomposition of all filters as learned in each layer into basis functions.
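As an illustration of the decomposition onto a Gaussian (Taylor- or Hermite-) basis, the sketch below samples a one-dimensional Gaussian and its first two derivatives and represents a filter as a linear combination of these fixed basis functions. The kernel radius, sigma, and combination weights are assumptions for demonstration only, not parameters from the disclosure.

```python
# Illustrative sketch: discrete Gaussian-derivative kernels form a small
# fixed basis; a "learned" filter is then a linear combination of these
# basis functions, so learning only has to find the combination weights.
import math

def gaussian_basis(sigma, radius):
    """Sampled Gaussian plus first and second derivative (Hermite-style basis)."""
    xs = range(-radius, radius + 1)
    g0 = [math.exp(-x * x / (2 * sigma * sigma)) for x in xs]
    g1 = [-x / (sigma * sigma) * v for x, v in zip(xs, g0)]
    g2 = [(x * x / sigma ** 4 - 1 / sigma ** 2) * v for x, v in zip(xs, g0)]
    return g0, g1, g2

def combine(weights, basis):
    """A filter expressed as a linear decomposition onto the basis."""
    return [sum(w * b[i] for w, b in zip(weights, basis))
            for i in range(len(basis[0]))]

basis = gaussian_basis(sigma=1.0, radius=2)
# Weight only the first derivative: an antisymmetric, edge-like filter.
edge_like_filter = combine([0.0, 1.0, 0.0], basis)
print([round(v, 3) for v in edge_like_filter])
```

Because the basis is spatially coherent by construction, any filter built this way is spatially coherent as well, which is the property observed in the intermediate layers discussed above.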
In an exemplary embodiment of the present system the computer program further comprises instructions for - at least once sub-categorizing said living being or at least one part thereof that appear in said live video stream with a computer vision system, and/or
- at least once sub-categorizing said living being or at least one part thereof that appear in said live audio stream with a computer audio system, - optionally receiving further input sub-categorizing said living being or at least one part thereof, such as at least one of an identity checker, credentials checker, a unique session identifier, ECG, blood pressure, for vital parameters, blood and urine analysis, blood oxygen level, body temperature, blood sugar level, heart rate and the results of palpation, auscultation and percussion, - based on said sub-categorization if considered incomplete or insufficient for final sub-categorization requesting through said first mobile device further audio and/or video input, - making a final sub-categorization,
- optionally requesting further audio and/or video input characterizing said living being or at least one part thereof through said second mobile device, and - receiving information with respect to said location comprising at least one of said first location, an intended trajectory of said first location, a speed of said first location, a cargo of said first location, a time constraint of said first location, and a distance to a port, - assessing said final categorization and further audio and/or video input, and said information.
In an exemplary embodiment of the present system categories and sub-categories are selected from living being diseases, living being disorders, living being injuries, living being illnesses, living being mental illnesses, categories not-requiring treatment, and combinations thereof. In an exemplary embodiment of the present system said first location is a ship or an airplane, and wherein said port is a harbour or an airport, respectively.
In an exemplary embodiment of the present system said person performing said treatment is a captain, a pilot, or a crew member.
In an exemplary embodiment of the present system said location of treatment is selected from the first location, and a further location, such as a port.
In an exemplary embodiment of the present system a time frame of treatment is selected from direct, within 30 minutes, within 1, 2, 4 or 12 hours, within 1 day, within 3 days, within 10 days, within 30 days, or after 30 days.
In an exemplary embodiment of the present system actions and/or absence of actions are selected from administration of medication, surgery, application of medical supports, such as gauzes, or a combination thereof, and/or wherein means of transportation of said living being is selected from a helicopter, a ship, or a combination thereof.
In an exemplary embodiment of the present system costs of said treatment include treatment costs, costs of transportation, and commercial costs in view of ship or airplane movements.
In an exemplary embodiment of the present system said further audio and/or video input characterizing said living being or at least one part thereof relate to physical examination, measurement of a body parameter, anamnesis, impression of well-being of said living being, or a combination thereof.
In an exemplary embodiment of the present system the computer program product comprises instructions for receiving augmented reality input from the first or second mobile device, and instructions for transmitting augmented reality output to the first or second mobile device. Therewith, based on AI, augmented reality can be provided on the first or second mobile device respectively, such as a visual indication on where to perform treatment or part thereof, such as by highlighting an area of the living being, such as by highlighting with a green (positive) or red (negative) colour.
In an exemplary embodiment of the present system both devices have implemented a two-directional transmitting system, wherein the transmitting system in use receives at least one layer of first optical input relating to a physical reality from the first device and transmits the at least one layer of first optical input to the second mobile device, and receives at least one layer of second optical input relating to a physical reality from the second device and transmits the at least one layer of second optical input to the first mobile device. In addition thereto the first device in use displays second optical input of the second device superimposed over the first optical input, and the second device in use displays first optical input of the first device subimposed under the second optical input, wherein the displayed input on the first device is preferably equal or partly equal to the displayed input on the second device, that is, the combined or merged inputs, either superimposed or subimposed, are equal to one another. It is noted that the terms “superimposed” and “subimposed” are relative and in principle are considered interchangeable as long as the various layers of input are projected over one another in a usable fashion. Therewith both devices are provided with at least one layer of augmented reality superimposed over an image representing reality at the first or second location, respectively. Therewith effectively the user of the first device and the user of the second device look at the same image, or at least part thereof; in other words the displayed images on the first and second devices respectively are the same, though the full display need not be used for displaying said images. The image may be displayed together with at least one further image, or not. The medical professional can now give input to the layman, such as directions, advice, and medical details, which input can be directly seen by the layman. The present image may likewise relate to a continuous optical recording.
In principle more than one layer of optical and augmented reality can be provided to the first and/or second mobile device, such as 2-5 layers, such as 3-4 layers. A first layer may represent direct input, a second layer may represent input from a database, a third input may represent actions to be taken, a fourth input may represent graphical input, and so on.
The at least one layer of (first or second) optical input may be provided against a background that is neutral for recording purposes, such as a blue background.
In addition to the above the users of the first and second devices can each independently switch layers of graphical input on or off, therewith increasing or reducing the amount of augmented reality. For instance a first user may look at the first reality and augmented reality layers 2 and 3 provided by the second user, whereas the second user looks at the first physical reality (being typically augmented reality layer 1 for the second device) and physical reality from the second device, and so on. Therewith the present system is very flexible and versatile.
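The layered display and the per-user layer switching described above can be sketched as follows. The 2x2 greyscale frames, the layer names, and the use of `None` for transparent overlay pixels are illustrative assumptions, not the system's actual data model.

```python
# Sketch of compositing a base frame (physical reality) with any number of
# augmented-reality overlay layers, where each device can independently
# enable or disable each layer.

def composite(base, layers, enabled):
    """Superimpose the enabled overlay layers over the base frame."""
    out = [row[:] for row in base]  # never mutate the physical-reality frame
    for name, layer in layers.items():
        if not enabled.get(name, False):
            continue  # this layer is switched off on this device
        for r, row in enumerate(layer):
            for c, px in enumerate(row):
                if px is not None:  # opaque overlay pixel wins
                    out[r][c] = px
    return out

reality = [[10, 10], [10, 10]]  # layer 1: the camera's physical reality
annotations = {
    "drawing": [[None, 255], [None, None]],   # layer 2: professional's drawing
    "guidance": [[None, None], [200, None]],  # layer 3: actions to be taken
}

# Helper's device shows drawing + guidance; professional's shows only drawing.
print(composite(reality, annotations, {"drawing": True, "guidance": True}))
print(composite(reality, annotations, {"drawing": True}))
```

With all layers disabled the device simply shows reality, which corresponds to the switch for deactivating the superimposed display mentioned later in the text.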
In addition to the above the first and second user may each independently use further functionality of the mobile devices, such as audio, vibration, recording, measurement capabilities, etc.
In an exemplary embodiment the present system may further comprise a tracking system for recording of instructions and actions performed.
In an exemplary embodiment of the present system the first optical input may be provided by a first camera, which may or may not use an additional optical system such as lenses.
In an exemplary embodiment of the present system the second optical input may be provided by a second camera that may or may not use an additional optical system such as lenses. In an exemplary embodiment of the present system the first device and/or second device may display further optical input.
In an exemplary embodiment of the present system optical input may further be provided by a touch screen, a mouse pad, and graphics.
In an exemplary embodiment the present system may further comprise at least one further mobile device having a further optical input, and a (wireless) transceiver, and implemented thereon the two-directional transmitting system.
In an exemplary embodiment of the present system at least one location may be a remote location, such as remote from a shore, such as at least 200 km from a shore or remote from a medical professional, such as at least 200 km.
In an exemplary embodiment the present system may further comprise a digitally and/or physically accessible reference document or images, that may or may not be presented in an additional augmented reality layer, the reference document comprising in view of medical actions instructions for preparation thereof, instructions for triaging, instructions for diagnosing, instructions for performing measurements, instructions for carrying out, instructions for logging data, instructions for after care, a database, and an overview of contents, preferably organized in a layered manner. As such available, and typically regularly updated information, is directly available. In addition also artificial intelligence may be used to further support the layman.
In an exemplary embodiment the present system may further comprise a coordinator for establishing contact between the first mobile device and a second mobile device, wherein the coordinator selects the second device based on at least one of availability, distance, language capabilities of the owner, specific medical expertise of the owner, time zone, and stability of the transmitting system. As such the best available support can be delivered to a subject.
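A coordinator of the kind described above could, for instance, score candidate professionals on the listed criteria. The scoring weights, field names, and candidate records below are assumptions for illustration only.

```python
# Sketch of a coordinator selecting the second (professional's) device based
# on availability, distance, language, medical expertise, and link stability.

def score(candidate, request):
    """Higher is better; unavailable professionals are never selected."""
    if not candidate["available"]:
        return -1.0
    s = 0.0
    s += 2.0 if request["language"] in candidate["languages"] else 0.0
    s += 3.0 if request["specialty"] == candidate["specialty"] else 0.0
    s += candidate["link_stability"]          # 0..1, transmitting-system quality
    s -= candidate["distance_km"] / 10000.0   # mild preference for nearby help
    return s

def select_second_device(candidates, request):
    best = max(candidates, key=lambda c: score(c, request))
    return best if score(best, request) >= 0 else None

candidates = [
    {"id": "dr_a", "available": True, "languages": ["en"], "specialty": "trauma",
     "link_stability": 0.9, "distance_km": 800},
    {"id": "dr_b", "available": True, "languages": ["en", "nl"], "specialty": "cardiology",
     "link_stability": 0.7, "distance_km": 200},
]
request = {"language": "nl", "specialty": "cardiology"}
print(select_second_device(candidates, request)["id"])  # -> dr_b
```

Time zone, mentioned in the text, could be folded in the same way as distance; it is omitted here for brevity.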
In an exemplary embodiment the present system may further comprise at least one of an identity checker, credentials checker, a unique session identifier, such as a calibration sticker. Therewith secured information can be transferred as well as information on the condition of the subject.
In an exemplary embodiment the present system may further comprise a data-logging system, and a sensor, such as a medical sensor, such as for ECG, blood pressure, for vital parameters, blood and urine analysis, and blood oxygen level. Therewith information on the treatment of a subject as well as details of the subject can be transmitted and stored.
In an exemplary embodiment the present system may further comprise a switch for activating or deactivating superimposed display on one or both devices. For some applications the superimposed display may interfere with a process of treating the subject and can better be switched off. In an exemplary embodiment of the present system the second device may retrieve input from a database. Therewith the layman can be assisted directly by a computer or the like.
In a second aspect the present invention relates to a use of the present system for training and for providing real-time medical assistance.
The invention is further detailed by the accompanying figure and examples, which are exemplary and explanatory of nature and are not limiting the scope of the invention. To the person skilled in the art it may be clear that many variants, being obvious or not, may be conceivable falling within the scope of protection, defined by the present claims.
SUMMARY OF THE FIGURES
Figure 1 shows schematics of the prior art and figure 2 of the present invention.
DETAILED DESCRIPTION OF THE FIGURES
Figure 1 shows schematics of the prior art.
Figure 1 describes the current situation:
In a remote location, such as a ship, when there is a patient, a layperson such as a ship’s officer will have to:
Take an anamnesis (1), ask questions and might base these on a book, form or likewise;
Take medical measurements (2), like blood pressure, heart rate and so on. Usually without recent, and often without any, training;
Do a physical examination (3), like checking out chest and abdomen by doing palpations, auscultation and so on. Again with little or no prior training;
The combined (1+2+3) information will be communicated to a doctor onshore by email, radio, or satellite telephone. The doctor will make a clinical decision, try to make a diagnosis, and give advice on a treatment (4);
Based on the doctor’s diagnosis and advice, the captain or the shipping company will make a decision on what to do with the logistic part of the treatment, like diverting the ship (5). Steps 4 and 5 are usually not taken in coordination;
Finally the on board treatment of the patient needs to be carried out by the layperson/ship’s officer (6). Usually he/she has to work from memory, any past training and a book, and perform procedures like giving injections, setting up IV drips and so on. Figure 2 describes the situation using the system:
In a remote location, such as a ship, when there is a patient, a layperson such as a ship’s officer will have to:
Take an anamnesis (1) supported by using a mobile device with AI that will ask the right questions, go into detail where needed, and can interact directly with the patient or the caretaker. This leads to a relevant, accurate, detailed, and digitally stored anamnesis;
Take measurements (2), like blood pressure, heart rate, and so on. Using the 2-Way-AR functionality via a mobile device the doctor or onshore expert can guide, instruct, and coach the layperson in real time and ‘hands-on’ on exactly what he/she has to do to take the measurement. This way the quality of the measurements taken can be assessed and improved by giving visual guidance;
Do a physical examination (3), like checking out chest and abdomen by doing palpations, auscultation and so on. Using the 2-Way-AR functionality on a mobile device the doctor or onshore expert can guide, instruct and coach the layperson in real time and ‘hands-on’ on exactly what he/she has to do to perform a good examination. This way the quality of the physical examination can be assessed, the patient’s response to it taken into account, and the execution and quality of the examination can be improved by giving visual guidance, instructions and coaching;
The combined (1+2+3) information gathered will be of substantially higher quality and give the expert/doctor better information to base his/her diagnosis on (4). For that reason it will be more accurate;
This likely diagnosis and the medical treatment will now be combined with other relevant data (like location, direction, speed, cargo, sailing distances to nearby ports, weather, and so on) from the ship (4-5). Using AI and based on relevant data the system will present the captain and/or shipping company with treatment options to optimize the outcome for the patient, taking the ship’s safety, risks involved and costs into account (5);
Finally the on board treatment of the patient needs to be carried out by the layperson/ship’s officer (6). Using the 2-Way-AR functionality on a mobile device the doctor or onshore expert can guide, instruct and coach the layperson in real time and ‘hands-on’ on exactly what he/she has to do when treating the patient. All medical procedures can be supported and supervised. Working as a team the layman and the expert can carry out on board treatment in a safe and successful way. This way the quality of the treatment can be assessed and improved by giving visual guidance;
All steps (1-6) can be digitized and can, with the approval of parties involved, also be used to improve future care and learn from outcomes.
The system will improve remote medical care by improving various aspects, such as by adding 2-Way-AR and AI, and by supporting decision making.
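The risk-based presentation of treatment options in step 5 above can be sketched as a weighted risk score over patient, ship, and crew. The options, weights, and risk values below are illustrative assumptions only, not data or scores from the disclosure.

```python
# Sketch: rank treatment options by an objective risk score combining
# the risk to the patient, the risk to the ship, and the risk to the crew.

def risk_score(option, weights=(0.5, 0.25, 0.25)):
    """Weighted sum of patient, ship, and crew risk (each 0 = none, 1 = maximal)."""
    wp, ws, wc = weights
    return (wp * option["patient_risk"]
            + ws * option["ship_risk"]
            + wc * option["crew_risk"])

options = [
    {"name": "treat on board, continue voyage",
     "patient_risk": 0.4, "ship_risk": 0.1, "crew_risk": 0.1},
    {"name": "divert to nearest port",
     "patient_risk": 0.2, "ship_risk": 0.5, "crew_risk": 0.3},
    {"name": "helicopter evacuation",
     "patient_risk": 0.1, "ship_risk": 0.2, "crew_risk": 0.7},
]

# Present the options from lowest to highest combined risk.
for opt in sorted(options, key=risk_score):
    print(f"{risk_score(opt):.3f}  {opt['name']}")
```

Costs, legal boundary conditions, and time frames (all mentioned in the claims) could be added as further weighted terms in the same scheme.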
EXAMPLES/EXPERIMENTS
The invention, although described in a detailed explanatory context, may be best understood in conjunction with the accompanying examples.
Practical example
Step by Step instructions on using AR system for remote medical support
The example relates to a situation wherein Device 1 is at a remote location and Device 2 is a device at the medical support location (doctor’s device). Additional devices may be present, in so far as required and feasible, and may relate to any additional unit. It is the combination of devices (minimum two: devices 1 and 2) that may be regarded as the present basic system. In order to start, both devices should be powered up, meet minimum technical specifications, use a compatible software platform (Android/iOS/Windows/Linux etc.), have additional drivers and software installed for two-directional transmitting, and be connected to the internet and/or have an IP address. When using the present AR-application/platform the two devices should connect, identify, and use a secure connection.
The following steps are typically performed.
1) The person(s) using device 1 (hereafter: helper, typically a layman) and the person(s) using device 2 (hereafter: the medical professional, typically a doctor) and the patient or object (hereafter: the subject) can be positively identified by voice, vision, available ID or any other legal or required means.
2) If possible they agree to their roles and accept terms and conditions of use by 1) signing in with known and verified credentials or by 2) direct input (typing, touching, clicking) or clearly stating verbally, with the patient/object responsible for accepting and consenting to the help offered.
3) Device 1 may be positioned either:
- static, using a fixing device that will hold device 1 in place,
- or dynamic, by being attached to the helper in their line of sight, using goggles or any other means to comfortably attach the device,
- or is positioned in such a way relative to the subject that the helper can work, manipulate and use instruments/tools on the subject, while keeping the subject in view/on screen of the device.
4) Device 2 may be positioned either:
- static, using a fixing device that will hold device 2 in place,
- or dynamic, by being attached to the professional in their line of sight, using goggles or any other means to comfortably attach the device,
- or is positioned in such a way that the professional can work, manipulate and use instruments/tools in front of the device while keeping the subject on screen of the device.
5) A calibration sticker can (but does not need to) be used: it has a unique session number, a colour calibration print and a fixed size for reference and will be placed close or next to the subject within the vision frame of device 1 (camera) and device 2 can calibrate after detection of the sticker.
6) By using device 2 the professional can in so far as required:
- make drawings, display pictures, project video(s) using device 2 as input device, by touching the screen, or using any input device (mouse, stylus, touchpad, controller, etc.) or retrieving pictures, images and video material from a database or any other source
- manipulate his hands, arms and any other part of his body, such as to indicate to the helper to perform certain actions, or manipulate instruments or tools in such manner that the camera of device 2 captures these movements, pictures or projections, such as to provide a visual example.
7) These inputs created by the professional using device 2 can be projected on device 1 (and likewise device 2) onto an overlaying visual layer that results in a projection of the reality (the actual view of the subject, relating to a physical reality) with the visual (AR) layer (relating to at least one of a further physical reality, images, graphics, pictures, etc.) superimposed on the screen of device 1. The helper will see both the subject and the (superimposed) input made by the professional on device 2.
8) By using device 1, the helper can see the subject (in reality and on screen), see directly what the professional shows, explains, and/or instructs him to do (in AR overlay), see pictures, images and supporting information from a database or any other source (in AR overlay), see videos on how to carry out certain skills and techniques (in AR overlay), can switch the AR layer on and off on device 1, can see, train and prepare for copying and re-doing the manipulations and instructions shown by the professional, and can execute or perform the manipulations, instructions or skills shown in the AR layer in reality on the subject.
9) The professional can see the subject, see the AR overlay as visible on device 1, including any visual input used, see any actions by helper, give directions and instructions to the helper, using voice, text, video and/or AR overlay, and can switch the AR layer on and off on device 1 and 2.
10) All actions on device 1 and 2 are logged and saved in a database.
In view of operation additional functionality may be added. For instance device 1 can have sensors attached that will help to monitor the condition of the subject. In the case of a patient, ECG, heart rate, blood pressure and other vital parameters can be monitored and made visible on devices 1 and 2. The helper can use device 1 without a professional (no device 2), using images, videos, and explanations of skills and techniques retrieved from a database or any other source, which can be superimposed (via an AR layer) on the subject. Using artificial intelligence and machine learning, the collected input from the database can be used to recognize and diagnose certain conditions and predict any required actions to be taken by the helper.
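The logging of all actions (step 10) and of attached sensor readings could, for instance, look like the sketch below. The class, field names, and the session identifier format are assumptions, not part of the disclosure.

```python
# Sketch of a session log: every action and sensor reading from either
# device is timestamped and appended under the unique session identifier
# (e.g. read from the calibration sticker), then exported for storage.
import json
import time

class SessionLog:
    def __init__(self, session_id):
        self.session_id = session_id
        self.entries = []

    def record(self, device, kind, payload):
        """Append one timestamped event from a device."""
        self.entries.append({"t": time.time(), "device": device,
                             "kind": kind, "payload": payload})

    def export(self):
        """Serialize the whole session for saving to a database."""
        return json.dumps({"session": self.session_id, "entries": self.entries})

log = SessionLog("STICKER-0042")
log.record("device1", "sensor", {"ecg_bpm": 72, "blood_pressure": "120/80"})
log.record("device2", "instruction", {"text": "apply pressure to wound"})
print(len(log.entries))  # two logged events
```

A log of this shape also supports the later machine-learning use of the collected data, since every entry carries its device, type, and timestamp.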
Selecting of treatment
In an example a ship travelled in relatively rough weather, with high seas, strong winds, and a strong tide. The ship had left the harbour of departure a few hours earlier. Unfortunately the ship’s captain slipped and fell. He noticed a crack, and realized that a severe injury might have happened. The captain was first stabilized. The first mate realized that quick medical treatment could be required, and contacted a medical professional at a remote location. The medical professional, based on the medical input, advised to return to the harbour of departure. The first mate considered the return risky, in view of the clearly unfavourable weather conditions. Albeit the considerations are inherently subjective, objective support data in view of the risks of a journey to the ship, the crew, and the patient would have been welcome. The first mate continued the voyage to a port of destination. There the patient, that is the captain, was treated. Serious injuries were found, in particular to his neck. The captain fully recovered. The above may seem rather trivial, but that is mainly because boundary conditions urged actions in a certain direction. More important is that, for instance in aviation, captains would have redirected the airplane in question back to the (air)port of departure.
Such a subjective decision is contrary to objective information indicating that the chance of survival, when going back to the airport of departure, is reduced by a factor (typically about 10) compared to choosing an alternative option. This is indicative of (well-trained) humans making risky decisions contrary to objective information; one might conclude that, at least in risky situations, human beings judge poorly. This therefore indicates the present need for an objective risk assessment.

Claims

1. Wireless communication system for medical assistance comprising a first mobile device (11) at a first location, the first mobile device having a first optical input for obtaining optical input from a physical reality, a microphone for obtaining audio input at the first location, a speaker, a display, and a transceiver, preferably a wireless transceiver, a second mobile device (12) at a second location, the second device having a second optical input for obtaining optical input from a physical reality, a microphone for obtaining audio input at the second location, a speaker, a display, and a transceiver, preferably a wireless transceiver, wherein the first mobile device is configured to augment reality, wherein the second mobile device is configured to augment reality, a computing device adapted to communicate with the first mobile device and with the second mobile device, the computing device comprising a data processor, and a computer program product comprising instructions for operating the system, the instructions causing the system to carry out the following steps;
- receiving a live video stream and/or a live audio stream from said first mobile device, said live streams comprising at least one time slice with at least one frame and/or at least one audio fragment comprising visual or audio input characterizing a living being or at least one part thereof at said first location;
- categorizing said living being or at least one part thereof that appear in said live video stream with a computer vision system, and/or
- categorizing said living being or at least one part thereof that appear in said live audio stream with a computer audio system,
- optionally receiving further input categorizing said living being or at least one part thereof, such as at least one of an identity checker, credentials checker, a unique session identifier, ECG, blood pressure, for vital parameters, blood and urine analysis, blood oxygen level, body temperature, blood sugar level, heart rate and the results of palpation, auscultation and percussion,
- based on said categorization if considered incomplete or insufficient for final categorization requesting through said first mobile device further audio and/or video input,
- making a final categorization,
- optionally requesting further audio and/or video input characterizing said living being or at least one part thereof through said second mobile device,
- receiving information with respect to said location comprising at least one of said first location, an intended trajectory of said first location, a speed of said first location, a cargo of said first location, a time constraint of said first location, and a distance to a port,
- assessing said final categorization and further audio and/or video input, and said information,
- providing at least one treatment, preferably at least three treatments, wherein a treatment comprises a location of treatment, a time frame of treatment, a person performing said treatment, costs of said treatment, legal and practical boundary conditions of said treatment, and actions and/or absence of actions, and optionally means of transportation of said living being,
- assessing the risks associated with the at least one treatment, in particular objective risks, wherein the risks include a risk to the patient, a risk to the ship, and a risk to the crew of the ship, and
- providing at least one treatment based on the risks associated therewith, such as based on a risk score.
2. System according to claim 1, wherein the computer program product comprises instructions for recognition of information in digital image data and/or digital audio data, comprising a learning phase on a data set of example digital data having known information, and computing characteristics of categories automatically from each example digital data and comparing computed characteristics to their known category, preferably comprising in said learning phase training a convolutional neural network comprising network parameters using said data set, more preferably in which said learning phase, via deep learning, each layer of said convolutional neural network is represented by a linear decomposition into basis functions of all filters as learned in each layer.
3. System according to any of claims 1-2, further comprising
- at least once sub-categorizing said living being or at least one part thereof that appear in said live video stream with a computer vision system, and/or
- at least once sub-categorizing said living being or at least one part thereof that appear in said live audio stream with a computer audio system,
- optionally receiving further input sub-categorizing said living being or at least one part thereof, such as at least one of an identity checker, credentials checker, a unique session identifier, ECG, blood pressure, for vital parameters, blood and urine analysis, blood oxygen level, body temperature, blood sugar level, heart rate and the results of palpation, auscultation and percussion,
- based on said sub-categorization if considered incomplete or insufficient for final sub-categorization requesting through said first mobile device further audio and/or video input,
- making a final sub-categorization,
- optionally requesting further audio and/or video input characterizing said living being or at least one part thereof through said second mobile device, and
- receiving information with respect to said location comprising at least one of said first location, an intended trajectory of said first location, a speed of said first location, a cargo of said first location, a time constraint of said first location, and a distance to a port,
- assessing said final categorization and further audio and/or video input, and said information.
4. System according to any of claims 1-3, wherein categories and sub-categories are selected from living being diseases, living being disorders, living being illnesses, living being mental illnesses, categories not-requiring treatment, and combinations thereof.
5. System according to any of claims 1-4, wherein said first location is a ship or an airplane, and wherein said port is a harbour or an airport, respectively, and/or wherein said person performing said treatment is a captain, a pilot, or a crew member, and/or wherein said location of treatment is selected from the first location, and a further location, such as a port, and/or wherein a time frame of treatment is selected from direct, within 30 minutes, within 12 hours, within 1 day, within 3 days, within 10 days, within 30 days, or after 30 days, and/or wherein actions and/or absence of actions are selected from administration of medication, surgery, application of medical supports, such as gauzes, or a combination thereof, and/or wherein means of transportation of said living being is selected from a helicopter, a boat, or a combination thereof, and/or wherein costs of said treatment include treatment costs, costs of transportation, and commercial costs in view of ship or airplane movements.
6. System according to any of claims 1-5, wherein said further audio and/or video input characterizing said living being or at least one part thereof relates to physical examination, measurement of a body parameter, anamnesis, impression of well-being of said living being, or a combination thereof.
7. System according to any of claims 1-6, wherein the computer program product comprises instructions for receiving augmented reality input from the first or second mobile device, and instructions for transmitting augmented reality output to the first or second mobile device, preferably comprising a two-directional transmitting system implemented on both devices, wherein the transmitting system is configured to receive at least one layer of first optical input (31) relating to a physical reality from the first device and is configured to transmit the at least one layer of first optical input to the second mobile device, and is configured to receive at least one layer of second optical input (32) relating to a physical reality from the second device and is configured to transmit the at least one layer of second optical input to the first mobile device, wherein the first device is configured to display the at least one layer of second optical input of the second device superimposed over the first optical input for forming augmented reality, and wherein the second device is configured to display the at least one layer of first optical input of the first device subimposed under the second optical input for forming augmented reality, and wherein the superimposed displayed inputs on the first device are equal to the subimposed displayed inputs on the second device.
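The display symmetry required by claim 7, where the superimposed composite on the first device equals the subimposed composite on the second, can be illustrated by modelling optical layers as ordered tuples. All names here are illustrative assumptions, not the claimed implementation:

```python
def superimpose(top, bottom):
    # Draw `top` over `bottom`: bottom layers first, then the top layers.
    return tuple(bottom) + tuple(top)

def subimpose(bottom, top):
    # Draw `bottom` under `top`: same stacking order, described from below.
    return tuple(bottom) + tuple(top)

layer_31 = ("first_optical_input",)   # physical reality captured on the first device
layer_32 = ("expert_annotation",)     # optical input originating on the second device

first_view = superimpose(layer_32, layer_31)   # layer 32 superimposed over layer 31
second_view = subimpose(layer_31, layer_32)    # layer 31 subimposed under layer 32
assert first_view == second_view               # the equality required by the claim
```

The point of the sketch is that "superimposed over" on one device and "subimposed under" on the other describe the same stacking order, so both users see an identical composite.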
8. System according to any of claims 1-7, further comprising a tracking system for recording of instructions and actions performed.
9. System according to any of the preceding claims, wherein the first optical input is provided by a first camera, and/or wherein the second optical input is provided by at least one of a second camera, a graphical input, a video input, a touch screen, a mouse, a touch pad, a stylus, a controller, or a database.
10. System according to any of the preceding claims, wherein the first device and/or second device is configured to display further optical input.
11. System according to any of the preceding claims, wherein optical input is further provided by a touch screen, a mouse pad, and graphics.
12. System according to any of the preceding claims, comprising at least one further mobile device having a further optical input, and a (wireless) transceiver, and implemented thereon the two-directional transmitting system.
13. System according to any of the preceding claims, wherein at least one location is a remote location, such as at least 200 km from a shore or at least 200 km from a medical professional.
14. System according to any of the preceding claims, further comprising a digitally and/or physically accessible reference document, the reference document comprising, in view of medical actions, instructions for preparation thereof, instructions for triaging, instructions for diagnosing, instructions for performing measurements, instructions for carrying out, instructions for logging data, instructions for after care, a database, and an overview of contents, preferably organized in a layered manner.
15. System according to any of the preceding claims, further comprising a coordinator configured to establish contact between the first mobile device and a second mobile device, wherein the coordinator is configured to select the second device based on at least one of availability, distance, language capabilities of the owner, specific medical expertise of the owner, time zone, and stability of the transmitting system.
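The coordinator's selection in claim 15 amounts to ranking candidate expert devices over the listed criteria. A minimal weighted-scoring sketch follows; the field names, weights, and tie-breaking rule are assumptions for illustration, not the claimed method:

```python
WEIGHTS = {
    "language_match": 2.0,   # owner speaks the caller's language
    "expertise_match": 3.0,  # owner has the relevant medical expertise
    "timezone_overlap": 1.0,
    "link_stability": 2.0,   # measured stability of the transmitting system
}

def select_second_device(candidates):
    """Return the candidate expert device with the best weighted score.
    Availability is treated as a hard requirement; distance acts as a
    mild penalty (nearer experts are preferred, all else being equal)."""
    def score(c):
        if not c["available"]:
            return float("-inf")  # never select an unavailable expert
        return sum(w * c[k] for k, w in WEIGHTS.items()) - 0.001 * c["distance_km"]
    return max(candidates, key=score)
```

In practice the coordinator could weigh the criteria differently or apply them as sequential filters; the sketch only shows one plausible way to combine them into a single choice.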
16. System according to any of the preceding claims, further comprising at least one of an identity checker, credentials checker, a unique session identifier, such as a calibration sticker, a data-logging system, and a sensor, such as selected from a medical sensor, such as for ECG, blood pressure, for vital parameters, blood and urine analysis, and blood oxygen level.
17. System according to any of the preceding claims, further comprising a switch for activating or deactivating superimposed display on one or both devices.
18. System according to any of the preceding claims, wherein the second device is configured to retrieve input from a database.
19. Use of a system according to any of the preceding claims for training and for providing real-time medical assistance.
PCT/EP2021/078630 2020-10-15 2021-10-15 Wireless communication system for medical assistance WO2022079251A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NL2026677A NL2026677B1 (en) 2020-10-15 2020-10-15 Wireless communication system for medical assistance
NL2026677 2020-10-15

Publications (1)

Publication Number Publication Date
WO2022079251A1 true WO2022079251A1 (en) 2022-04-21

Family

ID=73402085

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/078630 WO2022079251A1 (en) 2020-10-15 2021-10-15 Wireless communication system for medical assistance

Country Status (2)

Country Link
NL (1) NL2026677B1 (en)
WO (1) WO2022079251A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014186838A1 (en) 2013-05-19 2014-11-27 Commonwealth Scientific And Industrial Research Organisation A system and method for remote medical diagnosis
US20150077502A1 (en) 2012-05-22 2015-03-19 Intouch Technologies, Inc. Graphical user interfaces including touchpad driving interfaces for telemedicine devices
US20170069227A1 (en) 2010-02-24 2017-03-09 Nant Holdings Ip, Llc Augmented Reality Panorama Supporting Visually Impaired Individuals
WO2017072616A1 (en) 2015-10-29 2017-05-04 Koninklijke Philips N.V. Remote assistance workstation, method and system with a user interface for remote assistance with spatial placement tasks via augmented reality glasses
WO2018231059A2 (en) 2017-06-13 2018-12-20 Maritime Medical Applications B.V. Wireless communication system for remote medical assistance
WO2019162054A1 (en) 2018-02-20 2019-08-29 Koninklijke Philips N.V. System and method for client-side physiological condition estimations based on a video of an individual
CN110313896A (en) * 2019-06-26 2019-10-11 杜剑波 The data processing system and method for dedicated remote diagnosis are removed based on augmented reality liquid layered water

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170069227A1 (en) 2010-02-24 2017-03-09 Nant Holdings Ip, Llc Augmented Reality Panorama Supporting Visually Impaired Individuals
US20150077502A1 (en) 2012-05-22 2015-03-19 Intouch Technologies, Inc. Graphical user interfaces including touchpad driving interfaces for telemedicine devices
WO2014186838A1 (en) 2013-05-19 2014-11-27 Commonwealth Scientific And Industrial Research Organisation A system and method for remote medical diagnosis
WO2017072616A1 (en) 2015-10-29 2017-05-04 Koninklijke Philips N.V. Remote assistance workstation, method and system with a user interface for remote assistance with spatial placement tasks via augmented reality glasses
WO2018231059A2 (en) 2017-06-13 2018-12-20 Maritime Medical Applications B.V. Wireless communication system for remote medical assistance
WO2019162054A1 (en) 2018-02-20 2019-08-29 Koninklijke Philips N.V. System and method for client-side physiological condition estimations based on a video of an individual
CN110313896A (en) * 2019-06-26 2019-10-11 杜剑波 The data processing system and method for dedicated remote diagnosis are removed based on augmented reality liquid layered water

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GÓMEZ-GONZÁLEZ EMILIO ET AL: "Artificial intelligence in medicine and healthcare: a review and classification of current and near-future applications and their ethical and social Impact", ARXIV, 22 January 2020 (2020-01-22), XP055818120, Retrieved from the Internet <URL:https://arxiv.org/ftp/arxiv/papers/2001/2001.09778.pdf> [retrieved on 20210625] *
GOMEZ-GONZALEZ ET AL., ARTIFICIAL INTELLIGENCE IN MEDICINE AND HEALTHCARE: A REVIEW AND CLASSIFICATION OF CURRENT AND NEAR-FUTURE APPLICATIONS AND THEIR ETHICAL AND SOCIAL IMPACT., Retrieved from the Internet <URL:https://arxiv.org/ftp/arxiv/papers/2001/2001.09778.pdf.>
WIKIPEDIA: "Semi-supervised learning - Wikipedia", 2 June 2020 (2020-06-02), XP055899993, Retrieved from the Internet <URL:https://en.wikipedia.org/w/index.php?title=Semi-supervised_learning&oldid=960353636> [retrieved on 20220310] *

Also Published As

Publication number Publication date
NL2026677B1 (en) 2022-06-14

Similar Documents

Publication Publication Date Title
US20210327304A1 (en) System and method for augmented reality guidance for use of equipment systems
US9232040B2 (en) Community-based response system
US20210295048A1 (en) System and method for augmented reality guidance for use of equipment systems
US20220308664A1 (en) System and methods for evaluating images and other subjects
WO2009005600A1 (en) Totally integrated intelligent dynamic systems display
CN114072258A (en) Medical artificial intelligent robot arrangement for robot doctor
US20230039882A1 (en) Artificial intelligence-based platform to optimize skill training and performance
KR20220095104A (en) Big data and cloud system based AI(artificial intelligence) emergency medical care decision-making and emergency patient transfer system and method thereof
CN113053514A (en) Integrated system of wisdom city doctor based on 5G communication technology
Zhou et al. Cognition-driven navigation assistive system for emergency indoor wayfinding (CogDNA): proof of concept and evidence
JP5317818B2 (en) Medical diagnosis support system and medical diagnosis support method
WO2022079251A1 (en) Wireless communication system for medical assistance
Chourasia et al. Redefining industry 5.0 in ophthalmology and digital metrology: a global perspective
US20210249127A1 (en) Wireless Communication System for Remote Medical Assistance
US11728033B2 (en) Dynamic adaptation of clinical procedures and device assessment generation based on determined emotional state
CN114566275A (en) Pre-hospital emergency auxiliary system based on mixed reality
US11033227B1 (en) Digital eyewear integrated with medical and other services
Metelmann et al. Mobile Health Applications in Prehospital Emergency Medicine
Kornelsen et al. Rural patient transport and transfer: Findings from a realist review
Vinekar Screening for ROP
Files et al. THURSDAY, MAY 9, 2019
Rolon et al. Hospital Logistics Management Using Industry 4.0 Techniques.
Sbaih Issues in accident and emergency nursing
AU2016314069A1 (en) System and method for aiding an operator in an emergency situation involving a patient
LECTURE WEDNESDAY MAY 25, 2022

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21823769

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21823769

Country of ref document: EP

Kind code of ref document: A1