NL2026677B1 - Wireless communication system for medical assistance - Google Patents

Wireless communication system for medical assistance

Info

Publication number
NL2026677B1
NL2026677B1
Authority
NL
Netherlands
Prior art keywords
location
input
mobile device
living
audio
Prior art date
Application number
NL2026677A
Other languages
Dutch (nl)
Inventor
Dietmar Boon Van Ochssee Walther
Original Assignee
B V Maritime Medical Applications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by B.V. Maritime Medical Applications
Priority to NL2026677A
Priority to PCT/EP2021/078630 (published as WO2022079251A1)
Application granted
Publication of NL2026677B1

Classifications

    • G16H 50/20 ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • A61B 5/0022 Monitoring a patient using a global network, e.g. telephone networks, internet
    • A61B 5/7275 Determining trends in physiological measurement data; predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • G16H 20/10 ICT specially adapted for therapies or health-improving plans relating to drugs or medications, e.g. for ensuring correct administration to patients
    • G16H 20/17 ICT specially adapted for therapies relating to drugs or medications delivered via infusion or injection
    • G16H 20/30 ICT specially adapted for therapies relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G16H 40/67 ICT specially adapted for the operation of medical equipment or devices for remote operation
    • G16H 50/30 ICT specially adapted for calculating health indices; for individual health risk assessment
    • G16H 70/20 ICT specially adapted for the handling or processing of medical references relating to practices or guidelines
    • G16H 80/00 ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • G16H 20/40 ICT specially adapted for therapies relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture

Abstract

The present invention is in the field of a wireless communication system for medical assistance, and a use of said wireless system for training and providing medical assistance, wherein assistance is typically provided over a long distance (being remote). The medical assistance is typically provided by a well-trained professional to laymen who have at best been partly trained.

Description

Nr P100566NL00
Wireless communication system for medical assistance
FIELD OF THE INVENTION
The present invention is in the field of a wireless communication system for medical assistance, and a use of said wireless system for training and providing medical assistance, wherein assistance is typically provided over a long distance. The medical assistance is typically provided by a well-trained professional to laymen who have at best been partly trained.
BACKGROUND OF THE INVENTION
The present invention is in the field of a system for medical assistance wherein assistance is typically provided over a long distance, in view of a medical professional not being available on site. For instance, on ships and waterborne or airborne vessels no medically trained personnel may be present. Provision of medical care is therefore legally transferred to the captain of the ship and/or a ship's officer. The people responsible for medical care obtain some basic training, typically delivered on shore. This basic training is found to be insufficient in many cases: partly in view of insufficient experience with many medical cases, sometimes in view of inadequate training for the specific and often complex case, and also in view of a lack of familiarity with cases, which may result in mental hurdles.
However, in many cases the people legally responsible for medical care do have to perform medical actions and carry out medical procedures, sometimes to prevent worse from happening, sometimes to save lives, sometimes to provide accurate care, and so on. In such cases they rely on their basic training and on handbooks or the like. Apart from the fact that training and handbooks are typically at least a few years out of date, they do not provide all the information typically needed to perform the medical actions required; at best they provide generic information and instructions, which may be of limited relevance to the case.
In addition, it is quite often difficult to establish what the medical disorder, disease or injury actually is, especially from handbooks or, likewise, the internet.
Therefore quite often there is a need to consult a medical professional.
Apart from the fact that distance, language, time zone, availability, the exact knowledge of the professional, etc. are already issues to overcome, the medical professional still has to rely on spoken or written information from the people in charge of medical care, and vice versa.
Especially when time becomes an issue, the risk of wrong or inadequate treatment is significant.
In most, if not all, cases the crew of a ship has insufficient medical knowledge and skills to take optimal action in the event of a medical emergency.
Therefore, it is difficult to provide the best possible care for a patient in remote locations.
Currently, doctors on shore are contacted via satellite phone or email to help provide medical care.
However, often not all relevant available medical data is collected by the crew, and additional time will be required to get the missing information, if it can be obtained at all. As a result, any decision will be made based on incomplete information. Actions based on such decisions may lead to poor quality of care, to preventable injury, and to high additional, unneeded risk and costs.
In addition, a final decision by the captain or shipowner on how to react (divert the ship (or likewise airplane) to a port, keep going, evacuate the patient using a helicopter or ship, and so on) is then also made based on partial information, resulting in unnecessarily high costs for evacuation and diversion of ships (~€200k per incident), and may pose avoidable risk to the patient, the ship and its crew.
There are currently no applications available that facilitate systematic collection of better-quality data or provide an objective risk assessment on which captains can base their decision making.
Another issue is that the people in charge of medical care, as well as the medical professional, are typically trained in a different location and/or setting. As a consequence, a potential risk of inconsistencies is present, which may lead to wrong diagnosis, inadequate treatment, insufficient treatment, neglect of certain aspects of treatment, etc. The "system" of treatment may as a result be considered unreliable and may form a risk for the treatment of the patient, such as in view of claims. Such may especially be the case for ships and vessels out at sea.
In addition, communication over these long distances may be hampered by instability or insufficient capabilities of the communication system used, such as limited bandwidth, noise, disturbances, etc.
In principle, complex and costly systems might be used to overcome some of the problems mentioned, but these are not used in practice, not even on very large ships.
Some prior art not particularly relevant to the present invention is US2017/069227 A1, WO 2017/072616 A1 and US2015/0077502 A2. US2017/069227 A1 recites a system and method for providing real-time object recognition. Such is not a goal of the present invention; further, the recognized objects are not physical reality, but a representation thereof. The recognized objects are provided as tactile or audio feedback to a typically visually impaired user. Only one end-user is involved. WO 2017/072616 A1 recites a remote assistance workstation for remote assistance using AR glasses, which may be regarded as a classical one-way system. The goal is to assist people when using AEDs, electrodes or a metronome in acute medical situations. However, the system is not symmetrical and there is no two-way augmentation on both ends of the system. The users are not visible to each other. US2015/0077502 A2 recites a GUI which is in any case background art to the present invention.
The present invention relates to a wireless communication system for medical assistance, and a use of said wireless system for training and providing medical assistance, wherein assistance is typically provided over a long distance, which overcomes one or more of the above disadvantages, without jeopardizing functionality and advantages.
SUMMARY OF THE INVENTION
The present invention relates in a first aspect to a wireless communication system according to claim 1. Typical steps taken are visualized in fig. 1. Steps that may be taken in case of a medical emergency on board a ship are:
Step 1-3: gather data on the state of the patient through anamnesis, physical examination and objective measurements.
Step 4: contact a doctor on shore.
Step 5: the captain/shipowner makes a decision based on the location of the ship, estimates of the duration of the journey, and the doctor's advice on treatment of the patient and risk to patient/crew/ship.
Step 6: the patient is treated on board; the ship continues, deviates or evacuates.
The present communication system used to provide medical assistance is wireless in view of the typically too large distances between the person in charge of providing medical care and a medical professional, or in view of the physical inability of a professional to be present. The system is intended for providing medical assistance, typically to the person in charge thereof. It is specifically noted for some jurisdictions that said assistance does not relate to non-statutory subject-matter, such as methods of treatment, surgery, therapy, or diagnosis, but at most to providing information and instructions to that extent. As mentioned above, said person may be regarded as largely a layman, despite some training. In order to provide proper instructions it has been found essential that the layman can make use of a simple device, such as a mobile phone, a tablet, a smart phone, or even a (small) computer, which device is typically available. The device should have a display in order to present optical (visual) information, such as images. The layman is typically at a first location that is moving from one location to another, such as on a ship far from the shore, or at a remote location, such as a remote house or village, where it is impossible or too complicated to provide medical assistance, such as by flying in a doctor, by transporting the patient to shore, and so on.
In addition, means of communication are thereby inherently limited. It is therefore important that information about a patient, such as optical information on a physical reality, e.g. a condition of the patient, can be made available. The first mobile device further has a microphone for obtaining audio input at the first location, a speaker, a display, and a transceiver, preferably a wireless transceiver. The second mobile device is configured in a similar, not necessarily identical, manner. Further, the first mobile device as well as the second mobile device are configured to augment reality, and each comprises a first optical input, such as a camera. With the camera an image can be taken of a patient. The image can be sent, and thereby shared, to the medical professional and displayed on a second mobile device, which is configured to augment reality; thereby the second mobile device is comparable in characteristics and/or features with the first mobile device.
The wireless system further comprises a computing device adapted to communicate with the first mobile device and with the second mobile device, the computing device comprising a data processor, and a computer program product comprising instructions for operating the system, the instructions causing the system to carry out the following steps:
- receiving a live video stream and a live audio stream from said first mobile device, said live streams comprising at least one time slice with at least one frame and/or at least one audio fragment comprising visual or audio input characterizing a living being, or at least one part thereof, at said first location;
- categorizing said living being or at least one part thereof that appears in said live video stream with a computer vision system, and/or categorizing said living being or at least one part thereof that appears in said live audio stream with a computer audio system;
- optionally receiving further input categorizing said living being or at least one part thereof, such as at least one of an identity checker, a credentials checker, a unique session identifier, ECG, blood pressure, vital parameters, blood and urine analysis, blood oxygen level, body temperature, blood sugar level, heart rate, and the results of palpation, auscultation and percussion;
- if said categorization is considered incomplete or insufficient for final categorization, requesting further audio and/or video input through said first mobile device, such as further photos, or detailed photos, possibly assisted by AR to indicate details to be photographed;
- making a final categorization;
- optionally requesting further audio and/or video input characterizing said living being or at least one part thereof through said second mobile device;
- receiving information with respect to said first location comprising at least one of said first location, an intended trajectory of said first location, a speed of said first location, a cargo of said first location, a time constraint of said first location, and a distance to a port;
- assessing said final categorization and further audio and/or video input, and said information;
- providing at least one treatment, which may be regarded as an integral treatment, preferably at least three treatments, more preferably an order of treatments, wherein a treatment comprises a location of treatment, a time frame of treatment, a person performing said treatment, costs of said treatment, risks associated with said treatment, legal and practical boundary conditions of said treatment, and actions and/or absence of actions, and optionally means of transportation of said living being.
The computer program product may be loaded on a single computing device, on a distributed computing device, on more than one computing device, and combinations thereof. Legal and practical boundary conditions of said treatment may for instance relate to a cargo present on a ship or airplane, the amount of available fuel, single or combined travel time, weather conditions, the weather forecast, presence or absence of medical assistance, insurance requirements, accessibility of a port, e.g. in view of quarantine, and combinations thereof.
The applicant has therewith developed a unique application that provides an integrated system intended to improve all six steps taken during a medical emergency on board a ship (or any remote location for that matter), and may combine this with an assessment of the risks and costs for evacuation or diversion of the ship or airplane based on all the available information (e.g. the ship's current location). The present system uses Artificial Intelligence (AI) and Augmented Reality (AR) to improve all six steps in this process, which may include at least one of:
1. Digitalize and automate the anamnesis via AI to ensure the right information is collected;
2. Measurements are taken and entered into the system (e.g. using OCR, AI);
3. Physical examination by the doctor supported by intelligent two-way AR;
4. Data from steps 1-3 are sent to the doctor on shore; assessment is improved due to a better anamnesis, a clear overview of measurements, and the doctor's own examination of the patient through two-way AR, leading to better objective and quantifiable data;
5. The doctor's advice and all data about the ship (e.g. location, destination, cargo, deadlines, etc.) are assessed and an automated risk assessment with three scenarios (i.e. divert to port, evacuate patient, stay on trajectory) is given or calculated as a risk score; and
6. Treatment is improved through a two-way AR connection with the on-shore doctor.
The present system is considered the first system that integrates potential improvements to all of the steps in this process. Besides, it includes components of AI and AR that simplify the decision-making process and help to collect health data and to treat the patient in the best way possible. This is a large improvement compared to the current state of the art, in which the captain needs to call a health professional by satellite phone in order to receive medical advice. It allows the captain/shipowner to make a well-informed decision based on objective data; it allows the crew to treat a patient in the best way possible; the patient gets better treatment; and the shipping company makes a better informed decision and therefore spends less money. Potential users are shipping companies and airline companies all over the world. It is noted that the global shipping industry has 116,000 ships at sea and hundreds of billions of dollars invested therein. Ships carry 90% of all global goods. The off-shore and remote markets may also benefit from this application (e.g. off-shore rigs, aircraft, remote communities). Thereby the present invention provides a solution to one or more of the above-mentioned problems.
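The automated risk assessment of step 5 could, as one possible sketch, be expressed as a weighted scoring function over the three scenarios. All factor names, weights and example values below are illustrative assumptions for explanation only; the patent does not specify a particular formula.

```python
# Illustrative sketch of step 5: scoring the three scenarios
# (divert to port, evacuate patient, stay on trajectory).
# All factor names, weights and values are assumptions, not from the patent.

def risk_score(patient_risk, delay_hours, cost_eur,
               w_risk=0.6, w_delay=0.2, w_cost=0.2):
    """Combine normalized factors into a single 0..1 risk score."""
    norm_delay = min(delay_hours / 72.0, 1.0)    # cap the delay factor at 3 days
    norm_cost = min(cost_eur / 200_000.0, 1.0)   # ~cost of one evacuation incident
    return w_risk * patient_risk + w_delay * norm_delay + w_cost * norm_cost

scenarios = {
    "divert to port":     risk_score(patient_risk=0.3, delay_hours=36, cost_eur=150_000),
    "evacuate patient":   risk_score(patient_risk=0.2, delay_hours=6,  cost_eur=200_000),
    "stay on trajectory": risk_score(patient_risk=0.7, delay_hours=0,  cost_eur=0),
}

# The scenario with the lowest combined score would be recommended.
best = min(scenarios, key=scenarios.get)
```

In a real deployment the patient-risk factor would come from the final categorization, and the delay and cost factors from the received location information (trajectory, speed, distance to port).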
Advantages of the present invention are detailed throughout the description.
DETAILED DESCRIPTION OF THE INVENTION
The present invention relates in a first aspect to a wireless communication system according to claim 1.
In an exemplary embodiment of the present system the computer program product comprises instructions for recognition of information in digital image data and/or digital audio data, comprising a learning phase on a data set of example digital data having known information, computing characteristics of categories automatically from each example digital data, and comparing computed characteristics to their known category, preferably comprising in said learning phase training a convolutional neural network comprising network parameters using said data set, more preferably in which, in said learning phase via deep learning, each layer of said convolutional neural network is represented by a linear decomposition into basis functions of all filters as learned in each layer. In an embodiment of the computer program product said neural network comprises weights that are learnt using a sample dataset; in particular, said weights are learned for a whole patch at once.
The invention relates to neural networks that apply convolution to data of data sets. A set of images is an example of such a data set. Images usually have data that has spatial coherence. The neural network applies convolution using a set of filters. In the art, this set needs to be complete, or information regarding the data is needed in order to select the right filter set to start with. Here, a set of basis functions is used with which the filters are defined. It was found that the filters can be described using the set of basis functions. Furthermore, or alternatively, when the fitting accuracy needs to be higher, for fitting higher-order information, the basis functions can be set more accurately. In combination, or alternatively, the basis functions can be expanded to more of the relatively standard functions. This without using knowledge of the data set.
In the recognition of categorical information as machine-learned from digital image data, convolutional neural networks are an important modern tool. "Recognition of categorical information" here means that a label (for example "cow", "refrigerator", "birthday party", or any other category) is attached to a digital picture; the category may refer to an object in the digital image or it may refer to a condition in the scene. Thus, in fact, data that comprises locally coherent data can be binned. Often, such bins are discrete, like the label or category examples above. It is also possible to categorise the data in multidimensional bins. An example is for instance assigning "small-medium-large" to an object. In fact, the network can even categorize on the basis of information that is continuous, for instance a continuous variable like size. In such a categorisation, regression analysis is possible. In this respect, data that comprises local coherency relates to data that can be multi-dimensional.
This data in at least one dimension has data points that are coherent in an area around at least one of its data points. Examples of such data are images, video (which has position coherence and time coherence), speech data, and time series. Data points hold some information on their neighbouring data points.
Purpose
This is the purpose of recognition: to automatically label a yet-unseen image from features and characteristics computed from the digital image alone.
Learning and application phases
The process of recognition of categorical information consists of two steps: a learning phase and an application phase. In the processing of the learning phase, unique characteristics of categories are derived automatically from the features computed from each example digital image and compared to its known category (i.e. a stack of digital pictures each labelled "cow", a stack of digital pictures each labelled "refrigerator", etc., for all categories involved). These characteristics derived from features are transferred to the application phase.
In the processing of the application phase, the same features are computed from an unknown image. By computation on the features again, it is established whether these features include the unique characteristics of a category A. If so, the unknown image is automatically labelled with this category A.
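The two phases described above can be sketched with a minimal nearest-centroid classifier: the learning phase derives one characteristic per category (here simply the mean feature vector), and the application phase computes the same features for an unknown image and compares them against each category's characteristic. The feature vectors below are stand-ins; a real system would compute features from the digital images themselves.

```python
# Minimal sketch of the learning/application phases described above.
# The "features" are placeholder vectors, not features of real images.
import numpy as np

def learn(examples):
    """Learning phase: derive one characteristic (the mean) per category."""
    return {label: np.mean(vectors, axis=0) for label, vectors in examples.items()}

def apply_phase(characteristics, features):
    """Application phase: label unknown features with the nearest category."""
    return min(characteristics,
               key=lambda label: np.linalg.norm(features - characteristics[label]))

examples = {
    "cow":          np.array([[1.0, 0.1], [0.9, 0.2]]),
    "refrigerator": np.array([[0.1, 1.0], [0.2, 0.9]]),
}
model = learn(examples)
label = apply_phase(model, np.array([0.95, 0.15]))  # close to the "cow" examples
```

The convolutional-network approach of the invention replaces the trivial mean characteristic with features learned layer by layer, but the learn-then-apply structure is the same.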
Introduction to the new approach
Convolutional neural network learning can be seen as a series of transformations of representations of the original data.
Images, as well as signals in general, are special in that they demonstrate spatial coherence, being the correlation of the value of a pixel with the values in the pixel's neighbourhood almost everywhere. (Only at the side of steep edges does it remain undecided whether a pixel belongs to one side or to the other. The steepness of camera-recorded edges is limited by the bandwidth, as a consequence of which the steepest edges will not occur in practice.) When looking at the intermediate layers of convolutional neural networks, the learned image filters are spatially coherent themselves, not only for the first layers [Mallat] but also for all but the last, fully-connected layers, although there is nothing in the network itself which forces the filters into spatial coherence. See figure 1 for an illustration of intermediate layers.
Approach
Different from standard convolutional neural nets, we pre-program the layers of the network with Gaussian-shaped filters to decompose the image as a linear decomposition onto a local (Taylor- or Hermite-) functional expansion. The invention further relates to a method for recognition of information in digital image data, said method comprising a learning phase on a data set of example digital images having known information, wherein characteristics of categories are computed automatically from each example digital image and compared to its known category, said method comprising training a convolutional neural network comprising network parameters using said data set, in which, via deep learning, each layer of said convolutional neural network is represented by a linear decomposition of all filters as learned in each layer into basis functions.
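As a sketch of the pre-programmed filters, one can build a small basis of 2-D Gaussian derivative (Hermite-type) filters and express an arbitrary filter as a least-squares linear decomposition onto that basis. The kernel size, scale and basis order below are illustrative choices, not values specified by the patent.

```python
# Sketch: a 2-D Gaussian derivative basis and a linear decomposition onto it.
# size, sigma and order are illustrative choices.
import numpy as np

def gaussian_derivative_basis(size=9, sigma=2.0, order=2):
    """Return 2-D Gaussian derivative filters up to `order`, flattened as columns."""
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    # 1-D Gaussian derivatives: G, G' = -(x/sigma^2) G, G'' = ((x^2 - sigma^2)/sigma^4) G
    d = [g, -(x / sigma**2) * g, ((x**2 - sigma**2) / sigma**4) * g][: order + 1]
    # 2-D separable basis: outer products of the 1-D derivatives.
    basis = [np.outer(dy, dx).ravel() for dy in d for dx in d]
    return np.stack(basis, axis=1)              # shape: (size*size, n_basis)

def decompose(filt, basis):
    """Least-squares coefficients of a 2-D filter in the given basis."""
    coeffs, *_ = np.linalg.lstsq(basis, filt.ravel(), rcond=None)
    return coeffs

B = gaussian_derivative_basis()
# A filter that lies in the span of the basis is reconstructed (almost) exactly.
target = B[:, 1].reshape(9, 9)
reconstruction = (B @ decompose(target, B)).reshape(9, 9)
```

In the claimed method, the coefficients of such a decomposition are what the network learns per layer, while the Gaussian-shaped basis itself is fixed in advance.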
In an exemplary embodiment of the present system the computer program further comprises instructions for:
- at least once sub-categorizing said living being or at least one part thereof that appears in said live video stream with a computer vision system, and/or at least once sub-categorizing said living being or at least one part thereof that appears in said live audio stream with a computer audio system;
- optionally receiving further input sub-categorizing said living being or at least one part thereof, such as at least one of an identity checker, a credentials checker, a unique session identifier, ECG, blood pressure, vital parameters, blood and urine analysis, blood oxygen level, body temperature, blood sugar level, heart rate, and the results of palpation, auscultation and percussion;
- if said sub-categorization is considered incomplete or insufficient for final sub-categorization, requesting further audio and/or video input through said first mobile device;
- making a final sub-categorization;
- optionally requesting further audio and/or video input characterizing said living being or at least one part thereof through said second mobile device;
- receiving information with respect to said location comprising at least one of said first location, an intended trajectory of said first location, a speed of said first location, a cargo of said first location, a time constraint of said first location, and a distance to a port; and
- assessing said final categorization and further audio and/or video input, and said information.
In an exemplary embodiment of the present system categories and sub-categories are selected from living being diseases, living being disorders, living being injuries, living being illnesses, living being mental illnesses, categories not-requiring treatment, and combinations thereof.
In an exemplary embodiment of the present system said first location is a ship or an airplane, and wherein said port is a harbour or an airport, respectively.
In an exemplary embodiment of the present system said person performing said treatment is a captain, a pilot, or a crew member.
In an exemplary embodiment of the present system said location of treatment is selected from the first location,
and a further location, such as a port.
In an exemplary embodiment of the present system a time frame of treatment is selected from direct, within 30 minutes, within 1, 2, 4 or 12 hours, within 1 day, within 3 days, within 10 days, within 30 days, or after 30 days.
In an exemplary embodiment of the present system actions and/or absence of actions are selected from administration of medication, surgery, application of medical supports, such as gauzes, or a combination thereof, and/or wherein means of transportation of said living being is selected from a helicopter, a ship, or a combination thereof.
In an exemplary embodiment of the present system costs of said treatment include treatment costs, costs of transportation, and commercial costs in view of ship or airplane movements.
In an exemplary embodiment of the present system said further audio and/or video input characterizing said living being or at least one part thereof relates to physical examination, measurement of a body parameter, anamnesis, impression of well-being of said living being, or a combination thereof.
In an exemplary embodiment of the present system the computer program product comprises instructions for receiving augmented reality input from the first or second mobile device, and instructions for transmitting augmented reality output to the first or second mobile device. Therewith, based on AI, augmented reality can be provided on the first or second mobile device respectively, such as a visual indication on where to perform treatment or part thereof, such as by highlighting an area of the living being, such as by highlighting with a green (positive) or red (negative) colour.
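Such a highlight — tinting a region of the camera frame green (act here) or red (do not) — can be sketched with plain alpha blending. The function name, signature, and the rectangular region model are illustrative assumptions.

```python
import numpy as np

GREEN, RED = (0, 255, 0), (255, 0, 0)

def highlight_region(frame, top, left, height, width, colour=GREEN, alpha=0.4):
    """Blend a coloured AR highlight over a rectangular region of an RGB frame.

    Green conventionally marks where to perform treatment (positive),
    red where not to (negative). Returns a new frame; the input is untouched.
    """
    out = frame.astype(float).copy()
    region = out[top:top + height, left:left + width]
    out[top:top + height, left:left + width] = (
        (1 - alpha) * region + alpha * np.array(colour, dtype=float))
    return out.astype(np.uint8)
```

A production overlay would use arbitrary masks rather than rectangles, but the blending step is the same.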
In an exemplary embodiment of the present system both devices have implemented a two-directional transmitting system, wherein the transmitting system in use receives at least one layer of first optical input relating to a physical reality from the first device and transmits the at least one layer of first optical input to the second mobile device, and receives at least one layer of second optical input relating to a physical reality from the second device and transmits the at least one layer of second optical input to the first mobile device. In addition thereto the first device in use displays second optical input of the second device superimposed over the first optical input, and the second device in use displays first optical input of the first device subimposed under the second optical input, wherein the displayed input on the first device is preferably equal or partly equal to the displayed input on the second device; that is, the combined or merged inputs, either superimposed or subimposed, are equal to one another. It is noted that the terms "superimposed" and "subimposed" are relative and in principle are considered interchangeable as long as the various layers of input are projected over one another in a usable fashion. Therewith both devices are provided with at least one layer of augmented reality superimposed over an image representing reality at the first or second location, respectively.
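The key invariant above — both devices merge the same layers in the same order, so the composite on one device equals the composite on the other — can be sketched with a toy layer representation. The dict-of-pixels model and all names are illustrative assumptions.

```python
def merge_layers(layers):
    """Merge optical-input layers in a fixed order.

    Each layer is a dict of {pixel_position: value}; later layers are drawn
    over earlier ones. Because both devices merge the same layers in the
    same order, the composite shown on the first device equals the one on
    the second, regardless of which side calls a layer "superimposed" or
    "subimposed".
    """
    composite = {}
    for layer in layers:
        composite.update(layer)
    return composite

# Device 1 captures physical reality; device 2 contributes an annotation layer.
reality = {(0, 0): "skin", (0, 1): "skin"}
annotation = {(0, 1): "incision-mark"}
view_on_device_1 = merge_layers([reality, annotation])
view_on_device_2 = merge_layers([reality, annotation])
```

The interchangeability of "superimposed" and "subimposed" noted in the text corresponds to the fact that only the merge order of the layer list matters, not which device holds which layer.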
Therewith effectively the user of the first device and the user of the second device look at the same image, or at least part thereof; in other words the displayed images on the first and second devices respectively are the same, though the full display need not be used for displaying said images. The image may be displayed together with at least one further image, or not. The medical professional now can give input to the layman, such as directions and advice, and can provide medical details, etc., which input can be directly seen by the layman. The present image may likewise relate to a continuous optical recording.
In principle more than one layer of optical and augmented reality can be provided to the first and/or second mobile device, such as 2-5 layers, such as 3-4 layers. A first layer may represent direct input, a second layer may represent input from a database, a third layer may represent actions to be taken, a fourth layer may represent graphical input, and so on.
The at least one layer of (first or second) optical input may be provided against a background that is neutral for recording, such as a blue background.
In addition to the above the users of the first and second devices can each independently switch layers of graphical input on or off, therewith increasing or reducing the amount of augmented reality. For instance a first user may look at the first reality and augmented reality layers 2 and 3 provided by the second user, whereas the second user looks at the first physical reality (being typically augmented reality layer 1 for the second device) and physical reality from the second device, and so on. Therewith the present system is very flexible and versatile.
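The independent per-user toggling can be sketched as a per-device view over a shared layer stack; the class and method names are illustrative assumptions.

```python
class LayerView:
    """Per-device view over a shared stack of AR layers.

    Each user can switch layers on or off independently, without
    affecting the other device's view.
    """

    def __init__(self, layer_names):
        # All layers start enabled; insertion order is the draw order.
        self.enabled = {name: True for name in layer_names}

    def toggle(self, name, on):
        """Switch a single layer on (True) or off (False)."""
        self.enabled[name] = on

    def visible(self):
        """Names of the layers currently shown, in draw order."""
        return [name for name, on in self.enabled.items() if on]
```

Two `LayerView` instances, one per device, over the same layer names give each user an independent on/off state as described.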
In addition to the above the first and second user may each independently use further functionality of the mobile devices, such as audio, vibration, recording, measurement capabilities, etc.
In an exemplary embodiment the present system may further comprise a tracking system for recording of instructions and actions performed.
In an exemplary embodiment of the present system the first optical input may be provided by a first camera, which may or may not use an additional optical system such as lenses.
In an exemplary embodiment of the present system the second optical input may be provided by a second camera that may or may not use an additional optical system such as lenses.
In an exemplary embodiment of the present system the first device and/or second device may display further optical input.
In an exemplary embodiment of the present system optical input may further be provided by a touch screen, a mouse pad, and graphics.
In an exemplary embodiment the present system may further comprise at least one further mobile device having a further optical input, and a (wireless) transceiver, and implemented thereon the two-directional transmitting system.
In an exemplary embodiment of the present system at least one location may be a remote location, such as remote from a shore, such as at least 200 km from a shore, or remote from a medical professional, such as at least 200 km.
In an exemplary embodiment the present system may further comprise a digitally and/or physically accessible reference document or images, that may or may not be presented in an additional augmented reality layer, the reference document comprising in view of medical actions instructions for preparation thereof, instructions for triaging, instructions for diagnosing, instructions for performing measurements, instructions for carrying out, instructions for logging data, instructions for after care, a database, and an overview of contents, preferably organized in a layered manner. As such, typically regularly updated information is directly available. In addition, artificial intelligence may be used to further support the layman.
In an exemplary embodiment the present system may further comprise a coordinator for establishing contact between the first mobile device and a second mobile device, wherein the coordinator selects the second device based on at least one of availability, distance, language capabilities of the owner, specific medical expertise of the owner, time zone, and stability of the transmitting system. As such the best available support can be delivered to a subject.
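One straightforward way to realise such a coordinator is a weighted score over the listed criteria. The criteria names, the normalisation to 0..1 (higher is better, so distance and time-zone offset pre-inverted), and the weights are all illustrative assumptions, not from the patent.

```python
def select_second_device(candidates, weights=None):
    """Pick the best second (professional's) device from a candidate list.

    Each candidate is a dict scoring the coordinator's criteria in 0..1,
    higher being better. A weighted sum ranks the candidates; missing
    criteria count as 0.
    """
    weights = weights or {"availability": 3, "distance": 2, "language": 2,
                          "expertise": 3, "timezone": 1, "stability": 2}

    def score(candidate):
        return sum(w * candidate.get(k, 0) for k, w in weights.items())

    return max(candidates, key=score)
```

A deployment would tune the weights (for example, weighting medical expertise above link stability for non-urgent cases) and fill the candidate dicts from a registry of on-call professionals.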
In an exemplary embodiment the present system may further comprise at least one of an identity checker, credentials checker, a unique session identifier, such as a calibration sticker. Therewith secured information can be transferred as well as information on the condition of the subject.
In an exemplary embodiment the present system may further comprise a data-logging system, and a sensor, such as a medical sensor, such as for ECG, blood pressure, vital parameters, blood and urine analysis, and blood oxygen level.
Therewith information on the treatment of a subject as well as details of the subject can be transmitted and stored.
In an exemplary embodiment the present system may further comprise a switch for activating or deactivating superimposed display on one or both devices. For some applications the superimposed display may interfere with a process of treating the subject and can better be switched off.
In an exemplary embodiment of the present system the second device may retrieve input from a database. Therewith the layman can be assisted directly by a computer or the like.
In a second aspect the present invention relates to a use of the present system for training and for providing real-time medical assistance.
The invention is further detailed by the accompanying
figure and examples, which are exemplary and explanatory in nature and do not limit the scope of the invention. To the person skilled in the art it may be clear that many variants, being obvious or not, may be conceivable falling within the scope of protection, defined by the present claims.
SUMMARY OF THE FIGURES
Figure 1 shows schematics of the prior art and figure 2 of the present invention.
DETAILED DESCRIPTION OF THE FIGURES
Figure 1 shows schematics of the prior art.
Figure 1 describes the current situation: in a remote location, such as a ship, when there is a patient, a layperson like a ship's officer will have to:
- Take an anamnesis (1), ask questions, and might base these on a book, form or likewise;
- Take medical measurements (2), like blood pressure, heart rate and so on, usually without recent, and often without any, training;
- Do a physical examination (3), like checking out chest and abdomen by doing palpations, auscultation and so on, again with little or no prior training;
- The combined (1+2+3) information will be communicated with a doctor onshore by email, radio, or satellite telephone. The doctor will make a clinical decision, try to make a diagnosis, and give advice on a treatment (4);
- Based on the doctor's diagnosis and advice the captain or the shipping company will make a decision on what to do with the logistic part of the treatment, like diverting the ship (5). Steps 4 and 5 are usually not taken in coordination;
- Finally the on-board treatment of the patient needs to be carried out by the layperson/ship's officer (6). Usually he/she has to work from memory, any past training and a book, and perform procedures like giving injections, setting up IV drips and so on.
Figure 2 describes the situation using the system:
In a remote location, such as a ship, when there is a patient, a layperson like a ship's officer will have to:
- Take an anamnesis (1) supported by using a mobile device with AI that will ask the right questions, go into detail where needed, and can interact directly with the patient or the caretaker.
This leads to a relevant, accurate, detailed, and digitally stored anamnesis;
- Take measurements (2), like blood pressure, heartrate, and so on.
Using the 2-Way-AR functionality via a mobile device the doctor or onshore expert can guide, instruct, and coach the layperson in real time and ‘hands-on’ on what exactly he/she has to do to take the measurement.
This way the quality of the measurements taken can be assessed and improved by giving visual guidance;
- Do a physical examination (3), like checking out chest and abdomen by doing palpations, auscultation and so on.
Using the 2-Way-AR functionality on a mobile device the doctor or onshore expert can guide, instruct and coach the layperson in real time and ‘hands-on’ on what exactly he/she has to do to perform a good examination.
This way the quality of the physical examination can be assessed, the patient's response to it taken into account, and the execution and quality of the examination can be improved by giving visual guidance, instructions and coaching;
- The combined (1+2+3) information gathered will be of substantially higher quality and give the expert/doctor better information to base his/her diagnosis on (4). For that reason it will be more accurate;
- This likely diagnosis and the medical treatment will now be combined with other relevant data (like location, direction, speed, cargo, sailing distances to nearby ports, weather, and so on) from the ship (4-5). Using AI and based on relevant data the system will present the captain and/or shipping company with treatment options to optimize the outcome for the patient, taking the ship's safety, risks involved and costs into account (5);
- Finally the on-board treatment of the patient needs to be carried out by the layperson/ship's officer (6). Using the 2-Way-AR functionality on a mobile device the doctor or onshore expert can guide, instruct and coach the layperson in real time and ‘hands-on’ on what exactly he/she has to do when treating the patient.
All medical procedures can be supported and supervised.
Working as a team the layman and the expert can carry out on board treatment in a safe and successful way.
This way the quality of the treatment can be assessed and improved by giving visual guidance;
- All steps (1-6) can be digitized and can, with the approval of parties involved, also be used to improve future care and learn from outcomes.
- The system will improve remote medical care by improving various aspects, such as by adding 2-Way-AR and AI, and support decision making.
EXAMPLES/EXPERIMENTS
The invention, although described in a detailed explanatory context, may be best understood in conjunction with the accompanying examples.
Practical example
Step-by-step instructions on using the AR system for remote medical support.
The example relates to a situation wherein Device 1 is on a remote location and Device 2 is a device on the medical support location (the doctor's device). Additional devices may be present, in so far as required and feasible, and these may relate to any additional unit. It is the combination of devices (minimum two: devices 1 and 2) that may be regarded as the present basic system.
In order to start, both devices should be powered up, meet minimum technical specifications, use a compatible software platform (Android/iOS/Windows/Linux etc.), have additional drivers and software installed for two-directional transmitting, and be connected to the internet and/or have an IP address. When using the present AR-application/platform the two devices should connect, identify and use a secure connection.
The following steps are typically performed.
1) The person(s) using device 1 (hereafter: the helper, typically a layman) and the person(s) using device 2 (hereafter: the medical professional, typically a doctor) and the patient or object (hereafter: the subject) can be positively identified by voice, vision, available ID or any other legal or required means.
2) If possible they agree to their role and accept terms and conditions of use by 1) signing in with known and verified credentials or by 2) direct input (typing, touching, clicking) or clearly stating verbally, with the patient/object responsible for accepting and consenting to the help offered.
3) Device 1 may be positioned either:
- static, using a fixing device that will hold device 1 in place,
- or dynamic, by being attached to the helper in their line of sight, using goggles or any other means to comfortably attach the device,
- or positioned in such a way relative to the subject that the helper can work, manipulate and use instruments/tools on the subject, while keeping the subject in view/on screen of the device.
4) Device 2 may be positioned either:
- static, using a fixing device that will hold device 2 in place,
- or dynamic, by being attached to the professional in their line of sight, using goggles or any other means to comfortably attach the device,
- or positioned in such a way that the professional can work, manipulate and use instruments/tools in front of the device while keeping the subject on screen of the device.
5) A calibration sticker can (but does not need to) be used: it has a unique session number, a colour calibration print and a fixed size for reference, and will be placed close or next to the subject within the vision frame of device 1 (camera), and device 2 can calibrate after detection of the sticker.
6) By using device 2 the professional can, in so far as required:
- make drawings, display pictures, project video(s) using device 2 as input device, by touching the screen, or using any input device (mouse, stylus, touchpad, controller, etc.), or retrieving pictures, images and video material from a database or any other source;
- manipulate his hands, arms and any other part of his body, such as to indicate to the helper to perform certain actions, and manipulate instruments or tools in such manner that the camera of device 2 captures these movements, pictures or projections, such as to provide a visual example.
7) These inputs created by the professional using device 2 can be projected on device 1 (and likewise device 2) onto an overlaying visual layer that results in a projection of the reality (the actual view of the subject, relating to a physical reality) with the visual (AR) layer (relating to at least one of a further physical reality, images, graphics, pictures, etc.) superimposed on the screen of device 1. The helper will see both the subject as well as the (superimposed) input made by the professional on device 2.
8) By using device 1, the helper can see the subject (in reality and on screen), see directly what the professional shows, explains, and/or instructs him to do (in AR overlay), see pictures, images and supporting information from a database or any other source (in AR overlay), see videos on how to carry out certain skills and techniques (in AR overlay), switch the AR layer on and off on device 1, see, train and prepare for copying and re-doing the manipulations and instructions shown by the professional, and execute or perform the manipulations, instructions or skills shown in the AR layer in reality on the subject.
9) The professional can see the subject, see the AR overlay as visible on device 1, including any visual input used, see any actions by helper, give directions and instructions to the helper, using voice, text, video and/or AR overlay, and can switch the AR layer on and off on device 1 and 2.
10) All actions on device 1 and 2 are logged and saved in a database.
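The calibration sticker of step 5, with its fixed printed size and known colours, supports two simple derivations: a pixels-per-millimetre scale from the measured pixel width, and a per-channel colour correction from a known white patch. The sketch below illustrates both; the function name and the single-white-patch gain model are assumptions, not from the patent.

```python
def calibrate_from_sticker(sticker_px_width, sticker_mm_width,
                           observed_white, true_white=(255, 255, 255)):
    """Derive scale and colour correction from a reference sticker.

    sticker_px_width: measured width of the sticker in the camera frame
    sticker_mm_width: the sticker's known printed width in mm
    observed_white:   RGB value the camera recorded for a known white patch
    Returns (pixels_per_mm, per_channel_gains).
    """
    px_per_mm = sticker_px_width / sticker_mm_width
    # Simple white-balance gains; max(o, 1) guards against division by zero.
    gains = tuple(t / max(o, 1) for t, o in zip(true_white, observed_white))
    return px_per_mm, gains
```

With these values device 2 can report physical sizes (for example, of a wound) from pixel measurements and roughly normalise colours before assessment.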
In view of operation additional functionality may be added. For instance device 1 can have sensors attached that will help to monitor the condition of the subject. In case of a patient, ECG, heart rate, blood pressure and other vital parameters can be monitored and made visible on devices 1 and 2. The helper can use device 1 without a professional (no device 2), using images, videos, and explanations on skills and techniques retrieved from a database or any other source, which can be superimposed (via the AR layer) on the subject. Using artificial intelligence and machine learning, the collected input from the database can be used to recognize and diagnose certain conditions and predict any required actions to be taken by the helper.
The next section is added to support the search, and the
section thereafter is considered to be a full translation thereof into Dutch.
1. Wireless communication system for medical assistance comprising
a first mobile device (11) at a first location, the first mobile device having a first optical input for obtaining optical input from a physical reality, a microphone for obtaining audio input at the first location, a speaker, a display, and a transceiver, preferably a wireless transceiver,
a second mobile device (12) at a second location, the second device having a second optical input for obtaining optical input from a physical reality, a microphone for obtaining audio input at the second location, a speaker, a display, and a transceiver, preferably a wireless transceiver,
wherein the first mobile device is configured to augment reality,
wherein the second mobile device is configured to augment reality,
a computing device adapted to communicate with the first mobile device and with the second mobile device, the computing device comprising a data processor, and
a computer program product comprising instructions for operating the system, the instructions causing the system to carry out the following steps:
- receiving a live video stream and/or a live audio stream from said first mobile device, said live streams comprising at least one time slice with at least one frame and/or at least one audio fragment comprising visual or audio input characterizing a living being or at least one part thereof at said first location;
- categorizing said living being or at least one part thereof that appear in said live video stream with a computer vision system, and/or
- categorizing said living being or at least one part thereof that appear in said live audio stream with a computer audio system,
- optionally receiving further input categorizing said living being or at least one part thereof, such as at least one of an identity checker, credentials checker, a unique session identifier, ECG, blood pressure, vital parameters, blood and urine analysis, blood oxygen level, body temperature, blood sugar level, heart rate and the results of palpation, auscultation and percussion,
- based on said categorization, if considered incomplete or insufficient for final categorization, requesting through said first mobile device further audio and/or video input,
- making a final categorization,
- optionally requesting further audio and/or video input characterizing said living being or at least one part thereof through said second mobile device,
- receiving information with respect to said location comprising at least one of said first location, an intended trajectory of said first location, a speed of said first location, a cargo of said first location, a time constraint of said first location, and a distance to a port,
- assessing said final categorization and further audio and/or video input, and said information,
- providing at least one treatment, preferably at least three treatments, wherein a treatment comprises a location of treatment, a time frame of treatment, a person performing said treatment, costs of said treatment, legal and practical boundary conditions of said treatment, and actions and/or absence of actions, and optionally means of transportation of said living being.
2. System according to embodiment 1, wherein the computer program product comprises instructions for recognition of information in digital image data and/or digital audio data, comprising a learning phase on a data set of example digital data having known information, and computing characteristics of categories automatically from each example digital data and comparing computed characteristics to their known category, preferably comprising in said learning phase training a convolutional neural network comprising network parameters using said data set, more preferably in which in said learning phase via deep learning each layer of said convolutional neural network is represented by a linear decomposition into basis functions of all filters as learned in each layer.
3. System according to any of embodiments 1-2, further comprising
- at least once sub-categorizing said living being or at least one part thereof that appear in said live video stream with a computer vision system, and/or
- at least once sub-categorizing said living being or at least one part thereof that appear in said live audio stream with a computer audio system,
- optionally receiving further input sub-categorizing said living being or at least one part thereof, such as at least one of an identity checker, credentials checker, a unique session identifier, ECG, blood pressure, vital parameters, blood and urine analysis, blood oxygen level, body temperature, blood sugar level, heart rate and the results of palpation, auscultation and percussion,
- based on said sub-categorization, if considered incomplete or insufficient for final sub-categorization, requesting through said first mobile device further audio and/or video input,
- making a final sub-categorization,
- optionally requesting further audio and/or video input characterizing said living being or at least one part thereof through said second mobile device, and
- receiving information with respect to said location comprising at least one of said first location, an intended trajectory of said first location, a speed of said first location, a cargo of said first location, a time constraint of said first location, and a distance to a port,
- assessing said final categorization and further audio and/or video input, and said information.
4. System according to any of embodiments 1-3, wherein categories and sub-categories are selected from living being diseases, living being disorders, living being illnesses, living being mental illnesses, categories not-requiring treatment, and combinations thereof.
5. System according to any of embodiments 1-4, wherein said first location is a ship or an airplane, and wherein said port is a harbour or an airport, respectively, and/or
wherein said person performing said treatment is a captain, a pilot, or a crew member, and/or
wherein said location of treatment is selected from the first location, and a further location, such as a port, and/or
wherein a time frame of treatment is selected from direct, within 30 minutes, within 12 hours, within 1 day, within 3 days, within 10 days, within 30 days, or after 30 days, and/or
wherein actions and/or absence of actions are selected from administration of medication, surgery, application of medical supports, such as gauzes, or a combination thereof, and/or
wherein means of transportation of said living being is selected from a helicopter, a boat, or a combination thereof, and/or
wherein costs of said treatment include treatment costs, costs of transportation, and commercial costs in view of ship or airplane movements.
6. System according to any of embodiments 1-5, wherein said further audio and/or video input characterizing said living being or at least one part thereof relates to physical examination, measurement of a body parameter, anamnesis, impression of well-being of said living being, or a combination thereof.
7. System according to any of embodiments 1-6, wherein the computer program product comprises instructions for receiving augmented reality input from the first or second mobile device, and instructions for transmitting augmented reality output to the first or second mobile device, preferably comprising, implemented on both devices, a two-directional transmitting system, wherein the transmitting system is configured to receive at least one layer of first optical input (31) relating to a physical reality from the first device and is configured to transmit the at least one layer of first optical input to the second mobile device, and is configured to receive at least one layer of second optical input (32) relating to a physical reality from the second device and is configured to transmit the at least one layer of second optical input to the first mobile device,
wherein the first device is configured to display the at least one layer of second optical input of the second device superimposed over the first optical input for forming augmented reality, and wherein the second device is configured to display the at least one layer of first optical input of the first device subimposed under the second optical input for forming augmented reality, and
wherein the superimposed displayed inputs on the first device are equal to the subimposed displayed inputs on the second device.
8. System according to any of embodiments 1-7, further comprising a tracking system for recording of instructions and actions performed.
9. System according to any of the preceding embodiments, wherein the first optical input is provided by a first camera, and/or wherein the second optical input is provided by at least one of a second camera, a graphical input, a video input, a touch screen, a mouse, a touch pad, a stylus, a controller, or a database.
10. System according to any of the preceding embodiments, wherein the first device and/or second device is configured to display further optical input.
11. System according to any of the preceding embodiments, wherein optical input is further provided by a touch screen, a mouse pad, and graphics.
12. System according to any of the preceding embodiments, comprising at least one further mobile device having a further optical input, and a (wireless) transceiver, and implemented thereon the two-directional transmitting system.
13. System according to any of the preceding embodiments, wherein at least one location is a remote location, such as at least 200 km from a shore or at least 200 km from a medical professional.
14. System according to any of the preceding embodiments, further comprising a digitally and/or physically accessible reference document, the reference document comprising in view of medical actions instructions for preparation thereof, instructions for triaging, instructions for diagnosing, instructions for performing measurements, instructions for carrying out, instructions for logging data, instructions for after care, a database, and an overview of contents, preferably organized in a layered manner.
15. System according to any of the preceding embodiments, further comprising a coordinator configured to establish contact between the first mobile device and a second mobile device, wherein the coordinator is configured to select the second device based on at least one of availability, distance, language capabilities of the owner, specific medical expertise of the owner, time zone, and stability of the transmitting system.
16. System according to any of the preceding embodiments, further comprising at least one of an identity checker, credentials checker, a unique session identifier, such as a calibration sticker, a data-logging system, and a sensor, such as selected from a medical sensor, such as for ECG, blood pressure, vital parameters, blood and urine analysis, and blood oxygen level.
17. System according to any of the preceding embodiments, further comprising a switch for activating or deactivating superimposed display on one or both devices.
18. System according to any of the preceding embodiments, wherein the second device is configured to retrieve input from a database.
19. Use of a system according to any of the preceding embodiments for training and for providing real-time medical assistance.

Claims (19)

ten minste een deel daarvan kenmerken,-— possibly requesting more audio and/or video input via the second mobile device to characterize said living being or at least part thereof, — het ontvangen van informatie met betrekking tot de ge- noemde locatie, omvattend ten minste één van de genoemde eer-— receiving information related to said location, including at least one of said earlier ste locatie, een beoogde koers van de genoemde eerste loca- tie, een snelheid van de genoemde eerste locatie, een lading van de genoemde eerste locatie, een tijdsbeperking van de ge- noemde eerste locatie, en een afstand tot een haven,th location, a target course of said first location, a speed of said first location, a payload of said first location, a time constraint of said first location, and a distance from a port, - de beoordeling van deze definitieve indeling in cate-- the assessment of this definitive classification into categories gorieën en verdere audio- en/of video-invoer, en de genoemde informatie,gories and further audio and/or video input, and the said information, — het verstrekken van ten minste één behandeling, bij voorkeur ten minste drie behandelingen, waarbij een behande- ling omvat een plaats van behandeling, een tijdsbestek van— the provision of at least one treatment, preferably at least three treatments, where a treatment includes a site of treatment, a time frame of Nr P100566NL00 34 behandeling, een persoon die deze behandeling uitvoert, de kosten van deze behandeling, de wettelijke en praktische randvoorwaarden van deze behandeling, en handelingen en/of het afzien van handelingen, en eventueel middelen van vervoer van deze levende wezens.Nr P100566EN00 34 treatment, a person who performs this treatment, the costs of this treatment, the legal and practical preconditions for this treatment, and actions and/or refraining from actions, and any means of transport of these living beings. 2. 
A system according to claim 1, wherein the computer program comprises instructions for the recognition of information in digital image data and/or digital audio data, comprising a learning phase over a dataset of example digital data with known information, automatically calculating characteristics of categories from each example of digital data, and comparing the calculated characteristics with their known category, preferably comprising, in said learning phase, training a convolutional neural network comprising network parameters using said dataset, wherein in said learning phase, through deep learning, each layer of said convolutional neural network is represented by a linear decomposition into basis functions of all filters as learned in each layer.

3.
A system according to any one of claims 1-2, further comprising
- subcategorizing, at least once, said living being or at least a part thereof appearing in said live video stream with a computer vision system, and/or
- subcategorizing, at least once, said living being or at least a part thereof appearing in said live audio stream with a computer audio system,
- optionally receiving further input for subcategorizing said living being or at least a part thereof, such as at least one of an identity checker, a credentials checker, a unique session identifier, ECG, blood pressure, vital parameters, blood and urine analysis, blood oxygen level, body temperature, blood sugar level, palpations, auscultation, and heart rate,
- on the basis of this subcategorization, if it is deemed incomplete or insufficient for the final subcategorization, requesting further audio and/or video input via the first mobile device,
- making a final subcategorization,
- optionally requesting further audio and/or video input characterizing said living being or at least a part thereof via said second mobile device, and
- receiving information relating to said location, comprising at least one of said first location, an intended course of said first location, a speed of said first location, a cargo of said first location, a time constraint of said first location, and a distance to a port,
- assessing this final subcategorization and further audio and/or video input, and said information.

4. A system according to any one of claims 1-3, wherein categories and subcategories are selected from diseases of living beings, disorders of living beings, mental illnesses of living beings, categories requiring no treatment, and combinations thereof.

5.
A system according to any one of claims 1-4, wherein said first location is a ship or an aircraft, and said arrival is a port or an airport, respectively, and/or
wherein the person performing this treatment is a captain, a pilot, or a crew member, and/or
wherein said location of the treatment is chosen from the first location and another location, such as a port, and/or
wherein a time frame of the treatment is chosen from immediately, within 30 minutes, within 12 hours, within 1 day, within 3 days, within 10 days, within 30 days, or after 30 days, and/or
wherein actions and/or refraining from actions is chosen from the administration of medicines, surgery, the application of medical devices, such as gauzes, or a combination thereof, and/or
wherein the means of transport of the living being is chosen from a helicopter, a boat, or a combination thereof, and/or
wherein the costs of this treatment comprise the treatment costs, the transport costs, and the commercial costs in view of the ship or aircraft movements.

6. A system according to any one of claims 1-5, wherein said further audio and/or video input characterizing said living beings or at least a part thereof relates to physical examination, measurement of a body parameter, anamnesis, an impression of the well-being of said living beings, or a combination thereof.

7.
A system according to any one of claims 1-6, wherein the computer program comprises instructions for receiving augmented-reality input from the first or second mobile device, and instructions for sending augmented-reality output to the first or second mobile device, preferably a two-way transmission system implemented on both devices, wherein the transmission system, in use, receives at least one layer of first optical input (31) relating to a physical reality of the first device and transmits the at least one layer of first optical input to the second mobile device, and receives at least one layer of second optical input (32) relating to a physical reality of the second device and transmits the at least one layer of second optical input to the first mobile device, wherein the first device, in use, displays the at least one layer of second optical input of the second device over the first optical input, and wherein the second device, in use, displays the at least one layer of first optical input of the first device under the second optical input, and wherein the displayed input on the first device is equal to the displayed input on the second device.

8. A system according to any one of claims 1-7, further comprising a tracking system for recording instructions and actions performed.

9. A system according to any one of the preceding claims, wherein the first optical input is provided by a first camera, and/or wherein the second optical input is provided by at least one of a second camera, a graphics input, a video input, a touch screen, a mouse, a touchpad, a stylus, a controller, or a database.

10. A system according to any one of the preceding claims, wherein the first device and/or the second device is configured to display further optical input.

11.
A system according to any one of the preceding claims, wherein the optical input is further provided by a touch screen, a mouse pad, and images.

12. A system according to any one of the preceding claims, wherein at least one additional mobile device with a further optical input and a (wireless) transceiver is used and implemented on the bi-directional transmission system.

13. A system according to any one of the preceding claims, wherein at least one location is a remote location, such as at least 200 km from a shore or at least 200 km from a medical expert.

14.
A system according to any one of the preceding claims, further comprising a digitally and/or physically accessible reference document, wherein, in view of medical actions, the reference document contains instructions for the preparation thereof, instruments for triage, instructions for diagnosis, instructions for performing measurements, instructions for performance, instructions for recording data, instructions for aftercare, a database, and an overview of the contents, preferably organized in layers.

15. A system according to any one of the preceding claims, further comprising a coordinator configured to establish contact between the first mobile device and a second mobile device, wherein the coordinator is configured to select the second device on the basis of at least one of availability, distance, the language skills of the owner, the specific medical expertise of the owner, the time zone, and the stability of the transmission system.

16.
A system according to any one of the preceding claims, further comprising at least one of an identity checker, a credentials checker, a unique session identifier, such as a calibration sticker, a data-logging system, and a sensor, such as a medical sensor for ECG, blood pressure, vital parameters, blood and urine analysis, or blood oxygen level.

17. A system according to any one of the preceding claims, further comprising a switch for activating or deactivating a superimposed display on one or both devices.

18. A system according to any one of the preceding claims, wherein the second device is configured to retrieve input from a database.

19. Use of a system according to any one of the preceding claims for training and for providing medical assistance in real time.
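The stepwise procedure recited in claim 1 — receive a stream segment, categorize it, request further input via the first mobile device while the categorization is deemed insufficient, then make a final categorization — can be sketched as follows. This is an illustrative sketch only: the names (`Segment`, `Categorization`, `final_categorization`), the confidence threshold, and the evidence-counting placeholder are assumptions, not part of the claimed system, which would run trained computer vision and audio models instead.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    frames: list   # at least one frame of visual input from the first location
    audio: list    # at least one sound fragment from the first location

@dataclass
class Categorization:
    label: str
    confidence: float  # categorization deemed "sufficient" above a threshold

def categorize(segment, extra_inputs=()):
    """Placeholder for the computer vision/audio system of claim 1.

    Confidence simply grows with the amount of available input here;
    a real system would score the frames and audio with trained models."""
    evidence = len(segment.frames) + len(segment.audio) + len(extra_inputs)
    return Categorization(label="condition-A", confidence=min(1.0, evidence / 5))

def final_categorization(segment, request_more_input, threshold=0.8, max_rounds=5):
    """Request further audio/video input via the first mobile device until
    the categorization is no longer incomplete or insufficient."""
    extra = []
    cat = categorize(segment, extra)
    rounds = 0
    while cat.confidence < threshold and rounds < max_rounds:
        extra.append(request_more_input())  # e.g. another frame or a vital sign
        cat = categorize(segment, extra)
        rounds += 1
    return cat
```

In use, `request_more_input` would prompt the crew member holding the first device; here any callable returning new input will do.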
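The learning phase of claim 2 — automatically calculating characteristics from each labelled example and comparing the calculated characteristics with their known category — is sketched below with a deliberately minimal nearest-centroid classifier standing in for the claimed convolutional neural network; the tiny dataset and the feature vectors are invented for illustration only.

```python
def train(dataset):
    """Learning phase over example data with known categories: average the
    feature vectors of each category into a centroid (a minimal stand-in
    for fitting the network parameters of claim 2's CNN)."""
    grouped = {}
    for features, label in dataset:
        grouped.setdefault(label, []).append(features)
    return {label: [sum(col) / len(col) for col in zip(*vecs)]
            for label, vecs in grouped.items()}

def classify(centroids, features):
    """Compare calculated characteristics against each known category and
    return the closest one (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))
```

Swapping the centroid model for a CNN trained by deep learning, as the claim prefers, changes `train` and `classify` internally but not this learn-then-compare structure.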
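Claim 7 requires that the composited display be equal on both devices: the first device shows the remote (second) optical input over its own input, the second shows the first-device input under its own, and the result is the same stack. A minimal sketch of that invariant, with list-of-layers standing in for real rendering (the layer names and functions are illustrative assumptions):

```python
def composed_view(first_optical_layers, second_optical_layers):
    """Render first-device layers under second-device layers, bottom-up."""
    return list(first_optical_layers) + list(second_optical_layers)

def view_on_first_device(local_layers, remote_layers):
    # First device: second optical input (32) displayed OVER its own input (31).
    return composed_view(local_layers, remote_layers)

def view_on_second_device(local_layers, remote_layers):
    # Second device: first optical input (31) displayed UNDER its own input (32).
    return composed_view(remote_layers, local_layers)
```

Because both functions funnel into the same `composed_view` ordering, the displayed input is identical on either side, which is the equality the claim demands.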
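The coordinator of claim 15 selects the second device on the basis of availability, language skills, medical expertise, time zone, and transmission stability. One way such a selection could be sketched is a simple scoring function; the weights and candidate fields below are invented for illustration and carry no significance beyond it:

```python
def select_second_device(candidates, needed_language, needed_expertise):
    """Pick the best available remote expert/device for a session.

    Unavailable candidates are excluded outright; the rest are ranked by
    an (illustrative) weighted sum of language match, expertise match,
    and link stability."""
    def score(c):
        return (2 * (needed_language in c["languages"])
                + 3 * (needed_expertise in c["expertise"])
                + c["link_stability"])  # 0.0 .. 1.0
    available = [c for c in candidates if c["available"]]
    if not available:
        return None  # no contact can be established
    return max(available, key=score)
```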
NL2026677A 2020-10-15 2020-10-15 Wireless communication system for medical assistance NL2026677B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
NL2026677A NL2026677B1 (en) 2020-10-15 2020-10-15 Wireless communication system for medical assistance
PCT/EP2021/078630 WO2022079251A1 (en) 2020-10-15 2021-10-15 Wireless communication system for medical assistance


Publications (1)

Publication Number Publication Date
NL2026677B1 true NL2026677B1 (en) 2022-06-14

Family

ID=73402085


Country Status (2)

Country Link
NL (1) NL2026677B1 (en)
WO (1) WO2022079251A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014186838A1 (en) * 2013-05-19 2014-11-27 Commonwealth Scientific And Industrial Research Organisation A system and method for remote medical diagnosis
US20150077502A1 (en) 2012-05-22 2015-03-19 Intouch Technologies, Inc. Graphical user interfaces including touchpad driving interfaces for telemedicine devices
US20170069227A1 (en) 2010-02-24 2017-03-09 Nant Holdings Ip, Llc Augmented Reality Panorama Supporting Visually Impaired Individuals
WO2017072616A1 (en) 2015-10-29 2017-05-04 Koninklijke Philips N.V. Remote assistance workstation, method and system with a user interface for remote assistance with spatial placement tasks via augmented reality glasses
WO2018231059A2 (en) * 2017-06-13 2018-12-20 Maritime Medical Applications B.V. Wireless communication system for remote medical assistance
WO2019162054A1 (en) * 2018-02-20 2019-08-29 Koninklijke Philips N.V. System and method for client-side physiological condition estimations based on a video of an individual

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110313896A (en) * 2019-06-26 2019-10-11 杜剑波 The data processing system and method for dedicated remote diagnosis are removed based on augmented reality liquid layered water


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GÓMEZ-GONZÁLEZ EMILIO ET AL: "Artificial intelligence in medicine and healthcare: a review and classification of current and near-future applications and their ethical and social Impact", ARXIV, 22 January 2020 (2020-01-22), XP055818120, Retrieved from the Internet <URL:https://arxiv.org/ftp/arxiv/papers/2001/2001.09778.pdf> [retrieved on 20210625] *

Also Published As

Publication number Publication date
WO2022079251A1 (en) 2022-04-21
