US20200375467A1 - Telemedicine application of video analysis and motion augmentation - Google Patents
Telemedicine application of video analysis and motion augmentation
- Publication number
- US20200375467A1 (application US16/995,894)
- Authority
- US
- United States
- Prior art keywords
- camera
- action
- person
- user
- anomaly
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
- A61B5/1113—Local tracking of patients, e.g. in a hospital or private home
- A61B5/1114—Tracking parts of the body
- A61B5/7275—Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
- A61B5/0022—Monitoring a patient using a global network, e.g. telephone networks, internet
- A61B5/747—Arrangements for interactive communication between patient and care services in case of emergency, i.e. alerting emergency services
- G16H30/40—ICT specially adapted for processing medical images, e.g. editing
- G16H50/20—ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
- G16H40/67—ICT specially adapted for the remote operation of medical equipment or devices
- G16H80/00—ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
Definitions
- the present application relates to technologies for video and motion augmentation, and more particularly, to a system and method for providing video analysis and motion augmentation, particularly in the context of telemedicine applications.
- Invasive medical tools include devices, such as, but not limited to, endoscopes, catheters, probes, and surgical robots.
- physicians use endoscopes fitted with lens systems and eyepieces to examine a region inside a patient's body.
- a physician typically inserts the endoscope directly into an opening or organ of the patient's body.
- the endoscope itself is often useful in detecting abnormalities in the body of a patient.
- using an endoscope often causes patient discomfort and even physical trauma at the site at which the endoscope is inserted.
- probes, catheters, surgical robots, and other invasive medical tools also can cause discomfort and physical trauma.
- Non-invasive diagnostic tools include devices, such as, but not limited to, X-ray machines, Magnetic Resonance Imaging (MRI) machines, computerized tomography (CT) machines, positron emission tomography (PET) machines, and other non-invasive diagnostic devices.
- physicians utilize MRI machines to generate magnetic fields and pulses of radio wave energy to generate pictures of organs and physical structures inside a patient's body. While MRI machines and other similar technologies produce helpful images and information to assist a physician in detecting anomalies and confirming medical diagnoses, such technologies are often very expensive, cumbersome, non-portable, or a combination thereof. As a result, there is still significant room to enhance current methodologies and technologies for detecting anomalies, obtaining patient information, and confirming medical diagnoses.
- FIG. 1 is a schematic diagram of a system for providing video analysis and motion augmentation for telemedicine applications according to an embodiment of the present disclosure.
- FIG. 2 is a flow diagram illustrating a sample method for providing video analysis and motion augmentation for telemedicine applications according to an embodiment of the present disclosure.
- FIG. 3 is a schematic diagram of a machine in the form of a computer system within which a set of instructions, when executed, may cause the machine to perform any one or more of the methodologies or operations of the systems and methods for providing video analysis and motion augmentation for telemedicine applications.
- a system and accompanying methods for providing video analysis and motion augmentation for applications, such as telemedicine applications, are disclosed.
- the system and methods may involve utilizing video analysis and motion augmentation to assist in the detection of various types of physical anomalies and to assist in the determination of diagnoses for beings, such as humans and animals.
- the system and methods may involve utilizing cameras and other technologies to focus on two primary modalities associated with such beings: physiological changes and movements. Both of these modalities may be available at macro (e.g., body or body region) and micro (e.g., specific body structure or body part) levels.
- the system and methods may utilize the cameras and other technologies to capture video and/or other media content of a being in a particular environment, such as an office, home, or other environment. Based on the captured video and/or other media content of the being, the system and methods may include performing an analysis of the content to detect physiological changes and/or movements of the being at macro and micro levels. For example, based on the video of the being, the systems and methods may detect a micro-movement of a body part of the being or a change in skin pigmentation of the being.
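The macro- and micro-level change detection described above can be illustrated with plain frame differencing. The sketch below is a hypothetical simplification, not the disclosed implementation: the function name, the numpy frame representation, and the fixed threshold are all assumptions. A `region` argument narrows the comparison from the whole frame (macro level) to a single body part (micro level).

```python
import numpy as np

def detect_change(prev_frame, curr_frame, region=None, threshold=12.0):
    """Flag a change between two grayscale frames (uint8 arrays).

    region: optional (row_slice, col_slice) restricting the comparison
    to a body part for "micro"-level checks; None compares the whole
    frame ("macro" level). Returns (changed, mean_abs_diff).
    """
    a = prev_frame.astype(np.int16)  # widen so the subtraction cannot wrap
    b = curr_frame.astype(np.int16)
    if region is not None:
        rows, cols = region
        a, b = a[rows, cols], b[rows, cols]
    mean_abs_diff = float(np.abs(b - a).mean())
    return mean_abs_diff > threshold, mean_abs_diff
```

A localized pigmentation change or micro-movement can dominate its own region while barely moving the whole-frame average, which is why the same pair of frames can be unremarkable at the macro level yet anomalous at the micro level.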
- the system and methods may include submitting the content, the detected changes, and information associated with the content and changes for further processing in the system.
- the submitted content, changes, and information may be aggregated with similar information for other beings. Based on a comparison between the aggregated data, the content, the changes, and the information, the system and methods may detect one or more anomalies associated with the being.
- the system and methods may include transmitting a signal to cause the cameras to be adjusted so that additional media content of the being may be obtained from a different vantage point and/or transmitting a signal to a device of the being to instruct the being to perform a particular action, such as move a body part in the presence of the camera.
- the cameras may obtain the additional media content based on the adjusted position of the camera and/or the action performed by the being in response to the instruction.
- the system and methods may include utilizing the additional media content from the cameras in combination with the initial media content obtained of the user to confirm the existence of an anomaly.
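The capture, camera-adjustment, and instruction steps above form a confirmation loop, which might be organized as follows. This is a minimal sketch: the four callback interfaces (`capture`, `adjust_camera`, `instruct_user`, `analyze`) are hypothetical names standing in for the camera and analysis components described in the text.

```python
def confirm_anomaly(capture, adjust_camera, instruct_user, analyze,
                    max_attempts=3):
    """Re-capture media from new vantage points, and after instructed
    actions, until an anomaly is detected or attempts are exhausted.

    capture() -> media; adjust_camera(attempt) repositions the camera;
    instruct_user(action) asks the being to perform an action;
    analyze(media) -> anomaly-or-None. All four are assumed interfaces.
    """
    anomaly = analyze(capture())
    attempt = 0
    while anomaly is None and attempt < max_attempts:
        adjust_camera(attempt)  # obtain a different vantage point
        instruct_user("move the affected body part into view")
        anomaly = analyze(capture())  # additional media content
        attempt += 1
    return anomaly
```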
- the system and methods may include transmitting one or more alerts to a device of the being or to a device of a physician monitoring the being, which indicate the presence and type of anomaly detected.
- the system and methods may also include generating and transmitting one or more proposed interactions to be performed with the being based on the detected anomaly. For example, if the cameras obtain video content of a person that shows that the person's left eye is twitching in an anomalous manner, the system may generate and transmit a proposed interaction that indicates that the person should blink their left eye in a certain manner and/or that the physician monitoring the person should perform some type of interaction with the person so as to obtain additional information.
- the system and methods may receive information back from the person and/or the physician that relates to the proposed interaction, and may utilize the information, in conjunction with the content, changes, aggregated data, and other information, to generate a diagnosis for the being. For example, using the example above, based on the media content showing the person's left eye twitching, information gathered after the person blinks their eye, aggregated information for other individuals experiencing similar symptoms, previously stored historical patient information for the person, and/or other information, the system and methods may diagnose the person with a certain eye disease. The system and methods may include continuing to monitor the being to confirm the diagnosis, to track the being's progress, to determine trends in a population, update the being's historical information, or to perform any other desired function.
- the content obtained from the cameras may be combined with other technologies, such as, but not limited to, infrared imaging content, thermal imaging content, MRI content, CT content, PET content, and/or any type of other content to confirm anomalies, confirm diagnoses, generate proposed interactions, transmit alerts, or any combination thereof.
- a system for providing video analysis and motion augmentation for telemedicine applications may include a memory that stores instructions and a processor that executes the instructions to perform various operations of the system.
- the system may perform an operation that includes capturing first media content of a being within a range of a camera monitoring the being.
- the system may then perform an operation that includes analyzing the first media content to detect a first change associated with the being.
- the first change associated with the being may be a movement of the being, a change in a condition of the being, or a combination thereof.
- the system may proceed to perform an operation that includes detecting an anomaly associated with the being based on comparing the first change associated with the being to aggregated data for a plurality of beings including the being.
- the system may perform an operation that includes determining, based on the anomaly, a proposed interaction with the being.
- the system may perform an operation that includes transmitting the proposed interaction to a device of the being.
- the system may perform an operation that includes determining, based on the anomaly, the aggregated data, and on information obtained in response to transmitting the proposed interaction, a diagnosis associated with the being.
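One simple way to realize the step of comparing a detected change against aggregated data for a plurality of beings is a population z-score test. This is an illustrative sketch only; the metric, the threshold, and the use of Python's statistics module are assumptions rather than the claimed method.

```python
import statistics

def detect_anomaly(change_value, aggregated_values, z_threshold=2.0):
    """Compare one being's measured change (e.g., tremor amplitude)
    against aggregated values for a plurality of beings.

    The change is flagged anomalous when it lies more than z_threshold
    standard deviations from the population mean.
    """
    mean = statistics.fmean(aggregated_values)
    stdev = statistics.pstdev(aggregated_values)
    if stdev == 0:
        return change_value != mean  # degenerate population: any deviation is anomalous
    z = abs(change_value - mean) / stdev
    return z > z_threshold
```

In practice the aggregated values would be drawn from the stored data for other beings exhibiting the same modality (physiological change or movement) at the same macro or micro level.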
- a method for providing video analysis and motion augmentation for telemedicine applications may include utilizing a memory that stores instructions, and a processor that executes the instructions to perform the various functions of the method.
- the method may include obtaining, during a first time interval, first media content of a being within a range of a camera monitoring the being.
- the method may include detecting, based on the first media content, a first change associated with the being.
- the first change associated with the being may include a movement of the being, a first change in a condition of the being, or a combination thereof.
- the method may include detecting an anomaly associated with the being based on comparing the first change associated with the being to aggregated data for a plurality of beings. The method may then include determining, based on the anomaly associated with the being, a proposed interaction with the being, and then transmitting the proposed interaction to a device associated with the being. Finally, the method may include determining, based on the anomaly, the aggregated data, and on information obtained in response to transmitting the proposed interaction, a diagnosis associated with the being.
- a computer-readable device having instructions for providing video analysis and motion augmentation for telemedicine applications.
- the computer instructions which when loaded and executed by a processor, may cause the processor to perform operations including: capturing first media content of a being within a range of a camera; analyzing the first media content to detect a first change associated with the being, wherein the first change associated with the being comprises a movement of the being, a first change in a condition of the being, or a combination thereof; detecting an anomaly associated with the being based on comparing the first change associated with the being to aggregated data for a plurality of beings; determining, based on the anomaly associated with the being, a proposed interaction with the being to determine a diagnosis associated with the being; transmitting the proposed interaction to a device associated with the being; and determining, based on the anomaly, the aggregated data, and on information obtained in response to transmitting the proposed interaction, the diagnosis associated with the being.
- a system 100 and accompanying methods for providing video analysis and motion augmentation for applications are disclosed.
- the system 100 and methods may involve utilizing video analysis and motion augmentation to assist in the detection of various types of anomalies and to assist in the determination of diagnoses for various types of beings, such as humans and animals.
- the system 100 and methods may involve utilizing cameras 120 and other technologies to focus on two primary modalities associated with such beings: physiological changes and movements. Each of these modalities may be available at macro (e.g., body or body region) and micro (e.g., specific body structure or body part) levels.
- the system 100 and methods may utilize the cameras 120 and other technologies to capture video and/or other media content of a being in a particular environment, such as a doctor's office, a home, or other environment. Based on the captured video and/or other media content of the being, the system 100 and methods may analyze the content to detect physiological changes and/or movements of the being at both macro and micro levels.
- the system 100 and methods may submit the content, the detected changes, and information associated with the content and changes for further processing in the system 100.
- the submitted content, changes, and information may be aggregated with similar information for other beings.
- the system 100 and methods may detect one or more anomalies associated with the being. If an anomaly is not detected or a detected anomaly needs to be confirmed, the system 100 and methods may include transmitting a signal to cause the cameras 120 to be adjusted so that additional media content of the being may be obtained from a different position.
- the system 100 and methods may include transmitting a signal to a device of the being to instruct the being to perform a particular action, such as move a body part in front of the camera 120 .
- the cameras 120 may obtain the additional media content based on the adjusted position of the camera 120 and/or the action performed by the being in response to the instruction sent to the being.
- the system 100 and methods may then include utilizing the additional media content from the cameras 120 in combination with the initial video content depicting the user to confirm the existence of an anomaly.
- the system 100 and methods may include transmitting one or more alerts to a device of the being and/or to a device of a physician monitoring the being.
- the alerts may be utilized to indicate the presence and type of anomaly detected.
- the system 100 and methods may also include generating and transmitting one or more proposed interactions to be performed with the being based on the detected anomaly. For example, if the cameras 120 obtain video content of a person that shows that his skin pigmentation on his right arm is changing, the system 100 may generate and transmit a proposed interaction that indicates that the person should rotate his arm in a certain manner and/or that the physician monitoring the person should perform some type of interaction with the person so as to obtain additional information relating to the anomaly.
- the system 100 and methods may receive information back from the person and/or the physician that relates to the proposed interaction, and may utilize the information, in conjunction with the content, changes, aggregated data, and other information, to generate a diagnosis for the person. For example, using the example above, based on the media content showing the change in skin pigmentation, information gathered after the person rotates his arm, aggregated information for other individuals experiencing similar symptoms, previously stored historical patient information for the person, and/or other information, the system 100 and methods may diagnose the person with a certain skin disease. The system 100 and methods may include continuing to monitor the person to confirm the diagnosis, to track the person's progress, to determine trends in a population, update the person's historical information, or to perform any other desired function.
- the content obtained from the cameras 120 may be combined with information obtained from other technologies, such as, but not limited to, infrared imaging content, thermal imaging content, MRI content, CT content, PET content, and/or any other type of content to confirm anomalies, confirm diagnoses, generate proposed interactions, transmit alerts, or any combination thereof.
- a system 100 is disclosed for providing video analysis and motion augmentation for applications such as, but not limited to, telemedicine applications.
- the system 100 may be configured to support, but is not limited to supporting, cloud computing services, content delivery services, satellite services, telephone services, voice-over-internet protocol services (VoIP), software as a service (SaaS) applications, gaming applications and services, productivity applications and services, mobile applications and services, and any other computing applications and services.
- the system may include a first user 101, which may be any type of being, such as, but not limited to, a human, an animal, or any other being.
- the first user 101 may utilize a first user device 102 to access data, content, and services, or to perform a variety of other tasks and functions.
- the first user 101 may utilize first user device 102 to transmit signals to access various online services, such as those provided by a content provider or service provider associated with communications network 135 .
- the first user device 102 may include a memory 103 that includes instructions, and a processor 104 that executes the instructions from the memory 103 to perform the various operations that are performed by the first user device 102 .
- the processor 104 may be hardware, software, or a combination thereof.
- the first user device 102 may also include a camera 105 , which may be configured to record and store video and/or audio content within a viewing range and/or auditory range of the camera 105 .
- the camera 105 may be any type of camera including, but not limited to, a video camera, a photo camera, an infrared camera, a thermal imaging camera, any type of imaging device, or any combination thereof.
- the first user device 102 may be a computer, a medical device, a laptop, a tablet device, a phablet, a server, a mobile device, a smartphone, a smart watch, or any other type of computing device.
- the first user device 102 is shown as a smartphone device in FIG. 1, and the first user 101 is a person.
- the system 100 may also include a second user 110, which may be any type of being, such as, but not limited to, a human, an animal, or any other being.
- the second user 110 may utilize a second user device 111 to also access data, content, and services, and to perform a variety of other functions.
- the second user device 111 may be utilized by the second user 110 to transmit signals to request various types of content, services, and data provided by providers associated with communications network 135 or any other network in the system 100 .
- the second user device 111 may include a memory 112 that includes instructions, and a processor 113 that executes the instructions from the memory 112 to perform the various operations that are performed by the second user device 111 .
- the processor 113 may be hardware, software, or a combination thereof.
- the second user device 111 may also include a camera 114 , which may be configured to record and store content within a viewing range of the camera 114 .
- the camera 114 may be any type of camera including, but not limited to, a video camera, a photo camera, an infrared camera, a thermal imaging camera, any type of imaging device, or any combination thereof.
- the second user device 111 may be a computer, a laptop, a tablet device, a phablet, a server, a mobile device, a smartphone, a smart watch, or any other type of computing device.
- the second user device 111 is shown as a tablet device in FIG. 1 , and the second user 110 is a person.
- the system 100 may also include a third user 115 , which may be any type of being, such as, but not limited to, a human, an animal, or any other being.
- the third user 115 may utilize a third user device 116 to also access data, content, and services, and to perform a variety of other functions.
- the third user device 116 may be utilized by the third user 115 to transmit signals to request various types of content, services, and data provided by providers associated with communications network 135 or any other network in the system 100 .
- the third user device 116 may communicate with first and second user devices 102 , 111 .
- the third user device 116 may include a memory 117 that includes instructions, and a processor 118 that executes the instructions from the memory 117 to perform the various operations that are performed by the third user device 116 .
- the processor 118 may be hardware, software, or a combination thereof.
- the third user device 116 may also include a camera 119 , which may be configured to record and store content within a viewing range of the camera 119 .
- the camera 119 may also record audio content as well.
- the camera 119 may be any type of camera including, but not limited to, a video camera, a photo camera, an infrared camera, a thermal imaging camera, any type of imaging device, or any combination thereof.
- the third user device 116 may be a computer, a laptop, a tablet device, a phablet, a server, a mobile device, a smartphone, a smart watch, or any other type of computing device.
- the third user device 116 is shown as a tablet device in FIG. 1 , and the third user 115 is a physician associated with the first and second users 101 , 110 .
- the first user device 102 , the second user device 111 , and the third user device 116 may have any number of software applications and/or application services stored and/or accessible thereon.
- the first, second, and third user devices 102 , 111 , 116 may include cloud-based applications, mapping applications, location tracking applications, database applications, gaming applications, internet-based applications, browser applications, mobile applications, service-based applications, productivity applications, video applications, music applications, streaming media applications, social media applications, any other type of applications, any types of application services, or a combination thereof.
- the software applications and services may include one or more graphical user interfaces so as to enable the first, second, and third users 101 , 110 , 115 to readily interact with the software applications.
- the software applications and services may also be utilized by the first, second, and third users 101 , 110 , 115 to interact with any device in the system 100 , any network in the system 100 , or any combination thereof.
- the first user device 102 , the second user device 111 , and the third user device 116 may include associated telephone numbers, device identities, or any other identifiers to uniquely identify the first, second, and third user devices 102 , 111 , 116 .
- the system 100 may also include a camera 120 , which may be utilized to record any type of media content or any type of other content.
- the media content may include, but is not limited to, video content, audio content, image content, any type of content, or any combination thereof.
- the camera 120 may be any type of camera, such as, but not limited to, a video camera, a thermal imaging camera, an infrared camera, an X-ray-enabled camera, any type of imaging device, any type of media content recording device, a surveillance device, or any combination thereof, that may be utilized to capture and record media content associated with the first and second users 101 , 110 .
- the camera 120 can record video of the first user 101 and any sounds that the first user 101 makes when the first user 101 is within a viewing range for the camera 120 or the system 100 .
- the camera 120 may record sounds by utilizing a microphone, which may reside within the camera 120 or in proximity to the camera 120 .
- the camera 120 may be communicatively linked with any of the devices and networks in the system 100 , and may transmit recorded media content to any of the devices and networks in the system 100 .
- the system 100 may also include a device 125 , which may be any type of device including, but not limited to, an MRI machine, a CT machine, a PET machine, a thermal imaging device, an X-ray machine, an infrared imaging device, any type of medical imaging device, any type of device, any type of computing device, or any combination thereof.
- the device 125 may communicate with any of the devices and components in the system 100 , such as, but not limited to, the first, second, and third user devices 102 , 111 , 116 .
- the device 125 may include a memory 126 that includes instructions, and a processor 127 that executes the instructions from the memory 126 to perform the various operations that are performed by the device 125 .
- the processor 127 may be hardware, software, or a combination thereof.
- the device 125 may be configured to record imaging data and content associated with the first and second users 101 , 110 . For example, if the device 125 is a thermal imaging device, the device 125 may be configured to take thermal images of the first and second users 101 , 110 .
- the thermal images may then be transmitted to any component or device of the system 100 for further processing and may be combined with content obtained from the first, second, and third user devices 102 , 111 , 116 to assist in detecting anomalies associated with the first and second users 101 , 110 , and to determine diagnoses for the first and second users 101 , 110 .
- the system 100 may further include a communications network 135 .
- the communications network 135 of the system 100 may be configured to link each of the devices in the system 100 to one another. Additionally, the communications network 135 may be configured to transmit, generate, and receive any information and data traversing the system 100 .
- the communications network 135 may include any number of servers, databases, or other componentry.
- the communications network 135 may also include and be connected to a cloud-computing network, a wireless network, an Ethernet network, a satellite network, a broadband network, a cellular network, a private network, a cable network, the Internet, an internet protocol network, a multiprotocol label switching (MPLS) network, a content distribution network, any network or any combination thereof.
- MPLS multiprotocol label switching
- servers 140 and 145 are shown as being included within communications network 135 , and the communications network 135 is shown as a content delivery network.
- the communications network 135 may be part of a single autonomous system that is located in a particular geographic region, or be part of multiple autonomous systems that span several geographic regions.
- the functionality of the system 100 may be supported and executed by using any combination of the servers 140 , 145 , and 160 .
- the server 140 may include a memory 141 that includes instructions, and a processor 142 that executes the instructions from the memory 141 to perform various operations that are performed by the server 140 .
- the processor 142 may be hardware, software, or a combination thereof.
- the server 145 may include a memory 146 that includes instructions, and a processor 147 that executes the instructions from the memory 146 to perform the various operations that are performed by the server 145 .
- the servers 140 , 145 , and 160 may be network servers, routers, gateways, computers, mobile devices or any other suitable computing device.
- the servers 140 , 145 may be communicatively linked to the communications network 135 , any network, any device in the system 100 , or any combination thereof.
- the database 155 of the system 100 may be utilized to store and relay information that traverses the system 100 , cache content that traverses the system 100 , store data about each of the devices in the system 100 and perform any other typical functions of a database.
- the database 155 may be connected to or reside within the communications network 135 , any other network, or a combination thereof.
- the database 155 may serve as a central repository for any information associated with any of the devices and information associated with the system 100 .
- the database 155 may include a processor and memory or be connected to a processor and memory to perform the various operations associated with the database 155 .
- the database 155 may be connected to the camera 120 , the servers 140 , 145 , 160 , the first user device 102 , the second user device 111 , the third user device 116 , the device 125 , the communications network 135 , or any combination thereof.
- the database 155 may also store information and metadata obtained from the system 100 , store metadata and other information associated with the first, second, and third users 101 , 110 , 115 , store user profiles associated with the first, second, and third users 101 , 110 , 115 , store device profiles associated with any device in the system 100 , store communications traversing the system 100 , store user preferences, store information associated with any device or signal in the system 100 , store information relating to patterns of usage relating to the first, second, and third user devices 102 , 111 , 116 , store any information obtained from the communications network 135 , store any information generated by or associated with the camera 120 , store performance data for the devices, store information generated by or associated with the device 125 , store historical data associated with the first and second users 101 , 110 , store health data associated with the first and second users 101 , 110 , store information relating to medical conditions, store information associated with anomalies and/or symptoms associated with various medical conditions, store content obtained from the camera 120 or any device in the system 100 , or any combination thereof.
- the system 100 may provide video analysis and motion augmentation for applications, such as telemedicine applications, as shown in the following exemplary scenario.
- the first user 101 may be located in an office environment and may be utilizing first user device 102 , which may be a smartphone or other similar device.
- the camera 120 of the system 100 may also be located in the office of the first user 101 .
- the camera 120 may record media content of the first user 101 , such as video content of the user, while the first user 101 is sitting in his office.
- the camera 105 may be utilized to record media content associated with the first user 101 , either alone or in combination with the camera 120 .
- the media content may be transmitted by the camera 120 to the communications network 135 for further processing.
- the system 100 may analyze the media content to detect one or more changes associated with the first user 101 .
- the changes may be macro changes, micro changes, changes in the condition of the first user 101 , or a combination thereof.
- Macro changes may be changes or movements specific to the first user's 101 entire body or to a specific region (e.g., chest region, back region, head region, leg region, etc.) of the first user's 101 body.
- Micro changes may be changes or movements specific to specific body parts and/or to specific body structures (e.g., parts of the face, a single finger, a toe, etc.).
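The macro/micro distinction described above may be illustrated with a minimal sketch; the structure and region names below are illustrative assumptions rather than an enumeration from the disclosure:

```python
# Hypothetical sketch of the macro/micro classification; the structure and
# region names below are illustrative assumptions.

MICRO_STRUCTURES = {"eye", "eyelid", "finger", "toe", "mouth", "nose"}
MACRO_REGIONS = {"whole_body", "chest", "back", "head", "leg", "arm"}

def classify_change(structure: str) -> str:
    """Return 'micro' for a specific body structure, 'macro' for a body
    region or the entire body, and 'unknown' otherwise."""
    if structure in MICRO_STRUCTURES:
        return "micro"
    if structure in MACRO_REGIONS:
        return "macro"
    return "unknown"
```

A production system would of course derive the affected structure from the video itself; the lookup here only fixes the vocabulary of the two categories.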
- the system 100 may detect the first user's 101 blood flow via skin pigmentation changes detected in the media content recorded of the first user 101 .
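Pigmentation-based blood-flow detection of this kind is commonly implemented with remote photoplethysmography, in which the average green-channel intensity of skin pixels is tracked across frames. The following is a minimal sketch under that assumption, using a simple zero-crossing count in place of the spectral analysis a production system would use:

```python
# Illustrative sketch (an assumption, not the disclosed method): estimate a
# pulse rate from per-frame average green-channel values of skin pixels, as
# in remote photoplethysmography.

def estimate_pulse_bpm(green_means: list, fps: float) -> float:
    """Count rising zero-crossings of the mean-centred signal and convert
    the crossing rate to beats per minute."""
    mean = sum(green_means) / len(green_means)
    centred = [g - mean for g in green_means]
    crossings = sum(1 for a, b in zip(centred, centred[1:]) if a < 0 <= b)
    duration_s = len(green_means) / fps
    return crossings / duration_s * 60.0
```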
- the system 100 may detect various types of range of motion for certain body parts or even detect various types of “tics” (e.g., eye twitching or restless leg) or habits that the first user 101 has.
- Changes in movement may involve detecting that a particular body region is moving in an irregular direction or magnitude.
- the system 100 may be configured to perform a shape analysis (e.g., finding the right rotation or contour of a body part) to help diagnose and normalize automatic observations.
- the system 100 may detect one or more anomalies associated with the first user 101 based on comparing the detected changes to previously stored information, such as health information, for the first user 101 and/or to aggregated information for a selected population of users.
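The comparison of a detected change against stored historical or population information may be sketched as a simple statistical outlier test; the z-score threshold of 3.0 is an illustrative assumption:

```python
# Illustrative sketch: compare a detected change against a stored baseline
# (the person's own history or aggregated population data). The z-score
# threshold is an assumption made for illustration.
import statistics

def detect_anomaly(observed: float, baseline: list,
                   z_threshold: float = 3.0) -> bool:
    """Flag the observed measurement as anomalous if it deviates from the
    baseline mean by more than z_threshold standard deviations."""
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        return observed != mean
    return abs(observed - mean) / stdev > z_threshold
```

The same test applies whether the baseline comes from the user's prior recordings or from aggregated data for a selected population; only the source of the baseline list changes.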
- the information obtained from the media content may be combined with images and information obtained from other technologies to confirm the presence of an anomaly. For example, if the first user 101 had an X-ray of his chest and the X-ray shows an anomaly in a certain region, and the media content shows the same anomaly, then the anomaly may be confirmed by utilizing the image provided by the X-ray in conjunction with the video recording of the first user 101 .
- the information from a thermal imaging scan may similarly be utilized to confirm that an anomaly does not exist, and can, therefore, reduce false alarms.
- the first user 101 may be identified by analyzing the media content, and, in other embodiments, the identity of the first user 101 may be kept anonymous. If the first user 101 is identified, the anomaly may be confirmed by comparing the media content recording of the first user 101 to the first user's 101 medical records, which may be accessible by accessing the third user device 116 of the third user 115 , who may be a physician.
- the system 100 may request that the first user's 101 physician confirm the existence of the anomaly, such as via the third user device 116 . Additionally, the system 100 may transmit a signal to automatically adjust a position of the camera 120 or request the user to adjust the position of the first user device 102 so that a new media content recording from a different vantage point may be obtained. Furthermore, the system 100 may transmit a signal to the first user device 102 instructing the user to move a body part or move in a particular manner so that new media content may be recorded to confirm whether an anomaly exists.
- the system 100 may present a visual representation of the first user 101 on a visual interface of the first user device 102 that shows where the detected anomaly is on the first user 101 .
- the system 100 may enable the first user 101 to interact with the visual representation, such as via a software application, to confirm whether the anomaly exists or to input additional information, such as text commentary, associated with the anomaly.
- the system 100 may transmit an alert to the third user device 116 of the physician and/or an alert to the first user device 102 confirming the presence of the anomaly. Based on the detection of the anomaly, the recorded media content, aggregated data for a population and/or historical information for the first user 101 , the system 100 may determine one or more proposed interactions for interacting with the first user 101 . For example, if the detected anomaly is a bruised wrist, the system 100 may transmit a signal to the third user device 116 requesting the doctor to prescribe medication for dealing with pain associated with the bruised wrist, to input notes relating to the bruised wrist and/or to input a regimen for the first user 101 to perform to heal the bruised wrist.
- the system 100 may also transmit a signal to the first user device 102 requesting the user to input additional information regarding the cause of the bruising or to input additional information relating to the bruised wrist.
- the signal may advise the first user 101 to adjust a position of the cameras 120 , 105 so that further media content may be recorded of the first user 101 so that additional information associated with a detected condition may be obtained.
- the requested interactions may be handled by a health mediation service to individually personalize care for the first user 101 and/or to distill the information for a human operator.
- the proposed interactions may be adjusted by the physician as necessary.
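The determination of proposed interactions, such as those in the bruised-wrist example above, may be sketched as a lookup keyed by the detected anomaly and the recipient; the anomaly keys and table entries are illustrative assumptions:

```python
# Hypothetical mapping from a detected anomaly to proposed interactions for
# the patient and the physician; entries are illustrative assumptions.

PROPOSED_INTERACTIONS = {
    "bruised_wrist": {
        "physician": ["prescribe pain medication", "input a healing regimen"],
        "patient": ["describe the cause of the bruising",
                    "adjust the camera position"],
    },
    "eye_micro_movement": {
        "physician": ["review the recording", "schedule an examination"],
        "patient": ["record additional eye footage"],
    },
}

def interactions_for(anomaly: str, recipient: str) -> list:
    """Return the proposed interactions for a recipient, or an empty list
    if no interaction is defined for the anomaly."""
    return PROPOSED_INTERACTIONS.get(anomaly, {}).get(recipient, [])
```

In practice the physician could then adjust the proposed entries, consistent with the paragraph above.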
- the system 100 may automatically anonymize any of the interactions with the first user 101 , the physician, and the system 100 . Any information gathered from the interactions may also be anonymized. In certain embodiments, if the identity of the first user 101 is known, confidential information may be scrubbed to ensure the privacy of the first user 101 . Additionally, in certain embodiments, based on the severity of the anomaly or condition detected, the system 100 may also automatically scrub information identifying the first user 101 to ensure privacy and confidentiality. The system 100 may generate and transmit any number of interactions to the physician and the first user 101 , and may utilize information gathered from the interactions to further supplement the media content and other information obtained for the first user 101 .
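The anonymization and scrubbing of identifying information described above may be sketched as follows; the field names are assumptions made for illustration:

```python
# Illustrative sketch of scrubbing identifying information from an
# interaction record before storage; the field names are assumptions.

IDENTIFYING_FIELDS = {"name", "phone_number", "address", "device_id"}

def anonymize(record: dict) -> dict:
    """Return a copy of the record with identifying fields removed; the
    original record is left unmodified."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
```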
- the system 100 may include determining a diagnosis for the first user 101 . For example, using the example above, the system 100 may determine that the first user 101 suffered a specific type of contusion. The determined diagnosis may be provided to the physician and/or confirmed by the physician. If necessary, the determination of the diagnosis may trigger the automatic scheduling of a medical appointment with the physician, such as by accessing digital calendars on the third user device 116 and first user device 102 . In certain embodiments, processes provided by the system 100 may be repeated as necessary until enough information associated with the first user 101 is obtained and a confirmation of the diagnosis is possible.
- the system 100 may also be utilized to complement and support any type of telemedicine applications as well.
- the system 100 may be extended to monitor athlete performance, worker performance, or for any other purpose. Any of the data generated by the system 100 may be stored in a record associated with the first user 101 and may be combined with aggregated data for a population so as to determine various trends in the population and various health conditions for the population.
- the system 100 may perform any of the operative functions disclosed herein by utilizing the processing capabilities of server 160 , the storage capacity of the database 155 , or any other component of the system 100 to perform the operative functions disclosed herein.
- the server 160 may include one or more processors 162 that may be configured to process any of the various functions of the system 100 .
- the processors 162 may be software, hardware, or a combination of hardware and software.
- the server 160 may also include a memory 161 , which stores instructions that the processors 162 may execute to perform various operations of the system 100 .
- the server 160 may assist in processing loads handled by the various devices in the system 100 , such as, but not limited to, capturing media content of a being; analyzing media content to detect micro and macro movements and changes associated with the being; detecting anomalies based on the media content; transmitting signals to adjust a position of the camera 120 ; transmitting signals to a device that instruct the being to adjust the being's position; determining proposed interactions with the being; receiving information from the being; determining a diagnosis for the being based on the media content, aggregated data, and other information; and performing any other suitable operations conducted in the system 100 or otherwise.
- multiple servers 160 may be utilized to process the functions of the system 100 .
- the server 160 and other devices in the system 100 may utilize the database 155 for storing data about the devices in the system 100 or any other information that is associated with the system 100 .
- multiple databases 155 may be utilized to store data in the system 100 .
- FIG. 1 illustrates a specific example configuration of the various components of the system 100
- the system 100 may include any configuration of the components, which may include using a greater or lesser number of the components.
- the system 100 is illustratively shown as including a first user device 102 , a second user device 111 , a third user device 116 , a camera 120 , a device 125 , a communications network 135 , a server 140 , a server 145 , a server 160 , and a database 155 .
- the system 100 may include multiple first user devices 102 , multiple second user devices 111 , multiple third user devices 116 , multiple cameras 120 , multiple devices 125 , multiple communications networks 135 , multiple servers 140 , multiple servers 145 , multiple servers 160 , multiple databases 155 , or any number of any of the other components inside or outside the system 100 .
- substantial portions of the functionality and operations of the system 100 may be performed by other networks and systems that may be connected to system 100 .
- the method 200 may include, at step 202 , capturing first media content of a being within a range of a camera 120 monitoring the being.
- the method 200 may involve utilizing the camera 120 to capture a video recording and/or video stream of the first user 101 at a selected time in a selected environment, such as the first user's 101 office.
- capturing of the media content may be performed by utilizing the first user device 102 , the second user device 111 , the third user device 116 , the camera 120 , the device 125 , the server 140 , the server 145 , the server 160 , the communications network 135 , any combination thereof, or by utilizing any other appropriate program, system, or device.
- the method 200 may include analyzing the captured media content to detect a first change associated with the being that is monitored.
- the analyzing may be performed by utilizing the first user device 102 , the second user device 111 , the third user device 116 , the camera 120 , the device 125 , the server 140 , the server 145 , the server 160 , the communications network 135 , any combination thereof, or by utilizing any other appropriate program, system, or device.
- the captured media content may be utilized to detect the first change by detecting both micro and macro changes occurring for the being monitored by the camera 120 .
- the micro changes may include changes to a specific body structure of the being, such as, but not limited to, a leg, a hand, a finger, an eye, a mouth, a nose, a thigh, a head, a back, or any other body structure of the being.
- the macro changes may include changes to the entire body of the being and/or a specific region of the body of the being.
- the macro and micro changes may be any type of physiological change or even changes that may be symptomatic of certain mental conditions.
- the physiological changes may include, but are not limited to, any type of body movement, any type of body part movement, any type of skin pigmentation change, any type of skin change, any type of color change, any type of perspiration change, any body fluid change, any type of physiological change, any type of wound, any type of infection, any type of allergic reaction, or any combination thereof.
- textural changes associated with the body of the monitored being may also be detected. For example, a change in texture associated with the being's skin may be detected.
- the method 200 may include detecting an anomaly associated with the being based on comparing the detected first change with previous historical information for the being, aggregated information for a plurality of other beings, medical information, other information, or a combination thereof.
- the previous historical information may be obtained from a person's medical records or other records.
- the historical information may even be inputted by a person into the system 100 , such as via a digital form or other input instrument.
- the aggregated information may be information, such as, but not limited to, health information, demographic information, psychographic information, or any other type of information, for any number of individuals of a selected population.
- the medical information may include, but is not limited to, information identifying expected health metrics associated with various types of medical conditions, media content displaying healthy body structures, media content displaying unhealthy medical conditions or body structures, healthy and unhealthy human anatomy information, any other types of medical information, or a combination thereof.
- An anomaly may be detected, for example, if a specific micro-movement of a user's eye captured in the media content is indicative of a degenerative eye disease when the media content is compared with medical information stored in the system 100 .
- the same anomaly may be detected by comparing micro-movement detected in the media content to a person's own medical history, previous media content of the person, or to aggregated data associated with the eyes of a multitude of other people.
- the detecting may be performed by utilizing the first user device 102 , the second user device 111 , the third user device 116 , the camera 120 , the device 125 , the server 140 , the server 145 , the server 160 , the communications network 135 , any combination thereof, or by utilizing any other appropriate program, system, or device.
- the method 200 may include determining if additional information is needed to confirm the existence of the anomaly. In certain embodiments, the determining may be performed by utilizing the first user device 102 , the second user device 111 , the third user device 116 , the camera 120 , the device 125 , the server 140 , the server 145 , the server 160 , the communications network 135 , any combination thereof, or by utilizing any other appropriate program, system, or device. If it is determined that additional information is needed to confirm the existence of the anomaly, the method 200 may include, at step 210 , transmitting a signal to adjust a position of the camera 120 and/or a signal to a device of the being to instruct the being to adjust a body part in a prescribed manner.
- a signal may be transmitted by the system 100 to the camera 120 to automatically adjust the camera 120 to a new position so that additional video recordings of the first user 101 may be obtained.
- a signal may be transmitted by the system 100 to the first user device 102 , which may cause a user interface of the first user device 102 to display instructions to the first user 101 to move his arm up and down in a certain manner.
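The two follow-up signals described above, one repositioning the camera and one instructing the monitored person via a user device, may be sketched as simple command messages; the message fields are assumptions made for illustration:

```python
# Hypothetical command messages for the two follow-up signals: one that
# repositions the camera and one that displays an instruction on a user
# device. The message fields are assumptions, not a disclosed protocol.

def camera_adjust_signal(pan_deg: float, tilt_deg: float) -> dict:
    """Build a message directing the camera to a new vantage point."""
    return {"target": "camera", "action": "adjust",
            "pan_deg": pan_deg, "tilt_deg": tilt_deg}

def user_instruction_signal(device_id: str, instruction: str) -> dict:
    """Build a message asking a user device to display an instruction."""
    return {"target": device_id, "action": "display", "text": instruction}
```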
- the method 200 may include, at step 212 , capturing second media content of the being while the camera 120 position is adjusted, after the camera 120 position is adjusted, while the being adjusts the body part, after the being adjusts the body part, or any combination thereof, to confirm the existence of the anomaly.
- the capturing of the second media content may be performed by utilizing the first user device 102 , the second user device 111 , the third user device 116 , the camera 120 , the device 125 , the server 140 , the server 145 , the server 160 , the communications network 135 , any combination thereof, or by utilizing any other appropriate program, system, or device.
- the method 200 may include, at step 214 , determining a proposed interaction with the being based on the detection of the anomaly.
- the determining may be performed by utilizing the first user device 102 , the second user device 111 , the third user device 116 , the camera 120 , the device 125 , the server 140 , the server 145 , the server 160 , the communications network 135 , any combination thereof, or by utilizing any other appropriate program, system, or device.
- a proposed interaction may include, but is not limited to, a request for the being to perform a certain action, a request for a physician to perform some action with respect to the being, a request to change the position of the camera 120 further, a request to have the being take a certain medication, any type of interaction, or any combination thereof.
- the method 200 may include transmitting the proposed interaction to the device of the being, to a device of a person monitoring the being, or to another device. Also, the method 200 may include transmitting one or more alerts indicating the presence of the anomaly to the being, the person monitoring the being, or a combination thereof. The one or more alerts may also be sent to any device of the system 100 . In certain embodiments, the transmitting of the proposed interaction and/or the transmitting of the alerts may be performed by utilizing the server 140 , the server 145 , the server 160 , the communications network 135 , any combination thereof, or by utilizing any other appropriate program, system, or device. At step 218 , the method 200 may include receiving information in response to the proposed interaction.
- the information may include, but is not limited to, information associated with an action being performed by the being, information gathered based on an interaction between the being and the person monitoring the being, any information provided in response to the proposed interaction, or any combination thereof.
- the information may include information provided by a doctor indicating certain additional symptoms associated with the anomaly that the doctor has determined to be occurring during the doctor's interaction with the person being monitored.
- the information may be received by utilizing the server 140 , the server 145 , the server 160 , the communications network 135 , any combination thereof, or by utilizing any other appropriate program, system, or device.
- the method 200 may include determining, based on the detected anomaly, the aggregated data, the historical information associated with the being, and/or the information received in response to the interaction, a diagnosis for the being. For example, based on the micro-movement of the first user's 101 eye, aggregated data associated with the eyes of multiple other people, previous medical records for the first user 101 , and information provided by the doctor relating to the symptoms associated with the micro-movement of the eye, a diagnosis for the first user 101 may be determined. In this case, the system 100 may determine that the first user 101 may have a degenerative eye condition based on the information and media content obtained for the first user 101 . The steps in the method 200 may be repeated as necessary until a diagnosis is confirmed and/or until enough information associated with the being and the being's condition is obtained. Notably, the method 200 may further incorporate any of the features and functionality described for the system 100 or as otherwise described herein.
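The combination of evidence sources into a diagnosis determination, repeated until confirmation is possible, may be sketched as a weighted scoring scheme; the weights and the confirmation threshold are illustrative assumptions, not values from the disclosure:

```python
# Illustrative sketch: combine the evidence sources named above into a
# confidence score for a candidate diagnosis. Weights and threshold are
# assumptions made for illustration.

EVIDENCE_WEIGHTS = {
    "media_content": 0.3,
    "aggregated_data": 0.2,
    "historical_records": 0.2,
    "physician_input": 0.3,
}

def diagnosis_confidence(evidence: dict) -> float:
    """Weighted sum of per-source support scores, each in [0, 1]."""
    return sum(EVIDENCE_WEIGHTS.get(source, 0.0) * score
               for source, score in evidence.items())

def diagnosis_confirmed(evidence: dict, threshold: float = 0.75) -> bool:
    """True once the accumulated evidence meets the confirmation threshold,
    mirroring the repeat-until-confirmed loop of the method."""
    return diagnosis_confidence(evidence) >= threshold
```

Until `diagnosis_confirmed` returns true, the method would loop back to gather further media content or physician input.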
- the systems and methods disclosed herein may include additional functionality and features.
- the systems and methods may be configured to allow for the detection of micro and macro changes associated with a being during selected time intervals.
- the system 100 may transmit a signal to the camera 120 to record and/or stream video content or other media content of a being for a selected time period, such as from 5:00 pm-6:00 pm or a fixed time period of 1 hour.
- the systems and methods may compare micro and macro changes detected during certain time intervals to micro and macro changes detected during other time intervals. Such comparisons may assist in determining whether an anomaly exists and/or whether a certain diagnosis is accurate.
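The comparison of changes detected during one time interval against those detected during another may be sketched as a simple ratio test; the significance factor of 2.0 is an illustrative assumption:

```python
# Illustrative sketch of comparing detected-change counts across two
# monitoring intervals (e.g., 5:00-6:00 pm versus another hour); the
# significance factor is an assumption made for illustration.

def differs_significantly(changes_a: int, changes_b: int,
                          factor: float = 2.0) -> bool:
    """Flag interval pairs whose detected-change counts differ by more
    than the given factor."""
    hi = max(changes_a, changes_b)
    lo = max(min(changes_a, changes_b), 1)  # avoid division by zero
    return hi / lo > factor
```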
- the cameras 120 may be configured to be placed in any location where a being may be located.
- one or more cameras 120 , 114 , 105 may be placed in mass-transit areas, such as, but not limited to, airports, train stations, subways, shopping malls, concerts, theme parks, or any other area.
- the media content obtained from the cameras 120 may be aggregated and stored in the system 100 .
- the aggregated media content may be utilized to detect pandemics associated with a group of people being monitored, confirm health trends associated with a group of people being monitored, detect anomalies associated with a group of people being monitored, determine diagnoses associated with a group of people being monitored, or a combination thereof.
- media content obtained for a being may be compared to the aggregated media content for a certain population of beings so as to detect the foregoing as well.
- the systems and methods may also include detecting anomalies based on a comparison with a corpus of known “normal” or healthy conditions.
- the corpus may include the monitored being's healthy conditions, a selected population's healthy conditions, or a combination thereof.
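A minimal sketch of such a corpus-based check, assuming a simple fractional-deviation test against both the being's own healthy baseline and a population baseline (both the test and the blink-rate values are illustrative):

```python
def deviates(value, baseline, tol=0.2):
    """True if a value differs from a healthy baseline by more than
    the fractional tolerance `tol`."""
    return abs(value - baseline) / baseline > tol

def detect_anomaly(observation, personal_baseline, population_baseline):
    """Flag an observation only if it departs from both the being's own
    healthy corpus and the selected population's healthy corpus."""
    return (deviates(observation, personal_baseline)
            and deviates(observation, population_baseline))

# Resting blink rate in blinks/minute; baseline values are assumptions.
flagged = detect_anomaly(28.0, personal_baseline=15.0,
                         population_baseline=17.0)
normal = detect_anomaly(16.0, personal_baseline=15.0,
                        population_baseline=17.0)
```

Requiring deviation from both corpora reflects the text's option of combining the monitored being's healthy conditions with a selected population's healthy conditions.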
- the anomalies and/or diagnoses may be confirmed based on a comparison against images and outputs generated by other technologies. For example, when evaluated against certain models, probabilistic results, obtained media content, and/or other information generated in the system 100 may be coupled with outputs generated by X-ray machines, CT machines, MRI machines, PET machines, thermal imaging machines, infrared devices, any other technologies, or a combination thereof, to detect anomalies and/or determine diagnoses.
- the systems and methods may utilize inputs obtained from a traditional medical practitioner's office, such as an office equipped with cameras 120, 105, 114; inputs obtained from a self-serve environment (e.g., a phone booth with high-rate/high-resolution cameras 120, 114, 105, a vehicle, or another environment); and/or any number of mobile devices, such as the first and second user devices 102, 111.
- Such inputs may be combined and/or compared with the media content obtained of the being to detect anomalies and/or determine diagnoses.
- the system 100 may have multiple modes of operation.
- the system 100 may have an “always on” passive monitoring mode.
- cameras 120 , 105 , 114 may monitor any number of beings passively and on a continuous basis.
- Such a mode may be particularly beneficial in a workplace environment and/or environments that are associated with certain expected injury types.
- the system 100 may be utilized to monitor office employees to conduct a repetitive stress analysis in an office or to monitor elderly individuals who may benefit from continuous monitoring.
- information and media content obtained may be aggregated for many people in a feed (e.g., for the public at large or an entire workforce) so as to preempt a pandemic spread of disease or to prevent other conditions from spreading.
- the system 100 may also have a second mode, which may be an “intentional” scan mode.
- the system 100 may be specifically activated from a sleep state to record and/or stream media content for specific areas and/or beings.
- the “intentional scan” mode may be particularly beneficial, for example, while a patient visits a doctor at the doctor's office for a medical appointment.
- the system 100 may further include a “periodic scan” mode, which may involve having the system 100 obtain media content and information for selected areas and beings during periodic time intervals. This may be helpful to determine trends during particular times of the day, to determine when certain conditions occur, or a combination thereof.
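The three modes described above can be summarized in a small sketch; the mode names follow the text, while the function, its parameters, and the choice of minute-granularity scheduling are assumptions:

```python
from enum import Enum

class ScanMode(Enum):
    PASSIVE = "always-on passive monitoring"
    INTENTIONAL = "intentional scan, activated from a sleep state"
    PERIODIC = "periodic scan at fixed intervals"

def should_capture(mode, minute_of_day, activated=False, period=60):
    """Decide whether a camera should capture at a given minute of day."""
    if mode is ScanMode.PASSIVE:
        return True          # always on
    if mode is ScanMode.INTENTIONAL:
        return activated     # only when explicitly woken from sleep
    return minute_of_day % period == 0  # PERIODIC: start of each period
```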
- the systems and methods may include utilizing user feedback when detecting anomalies and/or making diagnoses.
- the system 100 may transmit a signal to a device of a being, such as the first user device 102 of the first user 101 , that requests the being to provide feedback regarding the media content recorded for the being, regarding the being's symptoms, regarding what foods the being ate, regarding any type of information, or any combination thereof.
- the feedback from the being may be input via a graphical user interface on a device of the being, and may be transmitted to the system 100 for further analysis.
- the feedback may be utilized to confirm whether an anomaly exists, adjust a determined diagnosis, confirm an area on the being's body for diagnosis, supplement records associated with the being, supplement detected symptoms for the being, or a combination thereof.
- System feedback may be utilized to echo symptoms of a region on the being's body that is suspected to be injured and/or infected.
- the systems and methods may include obtaining the media content and information using the cameras 120 , 105 , 114 after tracers and/or biochemical solutions are either ingested, injected, or otherwise put into the body of a being.
- a certain tracer may be utilized that is known to trigger an expected response by the body of the being.
- the cameras 120, 105, 114 may obtain media content of the being while the tracer is coursing through the body of the being, and the media content may be compared to a standard response, which may previously have been imperceptible without micro-change analysis or may previously have required an invasive probe/monitor.
- the systems and methods may also guarantee that the media content and information obtained for each of the beings is confidential.
- the systems and methods may allow for a distributed, network-based analysis of video and other media content feeds obtained from a variety of cameras 120 , 105 , 114 positioned in various areas.
- the media content and information obtained from the cameras 120 , 105 , 114 may be linked with “big data” collections of diagnoses, which may be particularly helpful in identifying conditions associated with a pandemic before it spreads.
- the systems and methods may also institute various triggers based on the media content and information obtained in the system 100 . For example, if a certain anomaly and/or diagnosis is detected and/or determined, the systems and methods may automatically transmit alerts or transmit instructions indicating that certain medication should be provided to a particular being. Alerts may also be sent to emergency personnel or even certain government institutions advising of a pandemic, advising of a certain condition, or a combination thereof.
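Such triggers could be sketched as a simple routing table; the finding names and recipient lists are hypothetical:

```python
# Hypothetical routing table mapping a detected finding to the parties
# that should be alerted, per the triggers described above.
ALERT_ROUTES = {
    "pandemic_indicator": ["emergency_personnel", "government_institution"],
    "medication_needed": ["patient_device", "pharmacy"],
}

def route_alerts(finding):
    """Return who should be alerted for a finding; default to the
    monitoring physician for findings without a specific route."""
    return ALERT_ROUTES.get(finding, ["monitoring_physician"])

recipients = route_alerts("medication_needed")
```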
- the systems and methods may include utilizing one or more cameras 120 , 105 , 114 of different resolutions, capture rates, sampling rates, and capabilities (e.g., storage, lens, processing power, focusing power, pixel dimensions, color capabilities, etc.).
- a camera 120 , 105 , 114 that captures media content of a being at one sampling rate may be used to detect certain micro-movements or changes that a camera 120 , 105 , 114 that has a different sampling rate for capturing media content may not be able to detect.
- media content captured at certain resolutions or by cameras 120 , 105 , 114 of certain capabilities may be utilized to detect certain macro and micro changes that other media content captured with other cameras 120 , 105 , 114 or having different resolutions are unable to show.
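One concrete reason a camera's sampling rate limits which micro-movements it can detect is the Nyquist criterion from signal processing (background theory, not stated in the text): a periodic movement can only be resolved if the camera samples at more than twice the movement's frequency.

```python
def min_sampling_rate(movement_hz):
    """Nyquist criterion: resolving a periodic movement requires
    sampling at more than twice the movement's frequency."""
    return 2.0 * movement_hz

def camera_can_resolve(camera_fps, movement_hz):
    """True if a camera's frame rate exceeds the required minimum."""
    return camera_fps > min_sampling_rate(movement_hz)

# A 30 fps camera cannot resolve a 20 Hz micro-tremor; 120 fps can.
slow_ok = camera_can_resolve(30, 20)
fast_ok = camera_can_resolve(120, 20)
```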
- the system 100 may adjust the resolutions, capture rates, sampling rates, and capabilities of each individual camera 120 , 105 , 114 by transmitting one or more signals to each camera 120 , 105 , 114 indicating what parameters should be adjusted.
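The per-camera adjustment signal could be sketched as follows; the JSON wire format, parameter names, and validation are assumptions, since the disclosure only states that signals indicate which parameters should be adjusted:

```python
import json

# Assumed set of adjustable camera parameters.
ADJUSTABLE = {"resolution", "capture_rate", "sampling_rate", "focus"}

def build_adjustment_signal(camera_id, **params):
    """Serialize a per-camera adjustment message as JSON, rejecting
    parameters the camera does not support."""
    unknown = set(params) - ADJUSTABLE
    if unknown:
        raise ValueError(f"unsupported parameters: {sorted(unknown)}")
    return json.dumps({"camera_id": camera_id, "adjust": params})

signal = build_adjustment_signal(120, sampling_rate=240, resolution="4K")
```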
- the cameras 120, 105, 114, the first and second user devices 102, 111, and/or other devices in the system 100 may be utilized to preprocess the media content prior to streaming the media content to the communications network 135 for the detection of anomalies, the determination of diagnoses, and/or the storage of the media content and information.
- the media content and information obtained from the cameras 120 , 105 , 114 may be transmitted directly to the components (e.g., edge nodes of the communications network 135 ) of the system without performing any preprocessing of the media content and information.
- features may be streamed to a model-based comparison system that looks for either local anomalies (e.g., if the being's identity is known and a history for the being is present) or anomalies that are found based on a comparison with information for a general population of beings.
- copies of media content and/or any information in the system 100 that is associated with a being may be anonymized (with any personally identifiable information, if known, removed) and stored securely in a cloud-based service, such as a service provided by the communications network 135. This may ensure that patient privacy is preserved and that the identity of the being is concealed.
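Anonymization before cloud storage could be sketched as replacing direct identifiers with a salted one-way token; the field names and salt handling are assumptions:

```python
import hashlib

def anonymize_record(record, salt="assumed-secret-salt"):
    """Replace direct identifiers with a salted one-way token so the
    record can be stored in the cloud without revealing identity."""
    ident = record.pop("name", "") + record.pop("patient_id", "")
    token = hashlib.sha256((salt + ident).encode()).hexdigest()[:16]
    record["subject_token"] = token
    return record

rec = anonymize_record({"name": "Jane Doe", "patient_id": "12345",
                        "observation": "left-eye micro-tremor"})
```

The token still lets the system link records for the same being across observations while keeping the identity concealed, as the text requires.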
- the being may be allowed to interact with the system in a “self-service usage mode.”
- the “self-service usage mode” may utilize additional filtering and distillation of the media content and information associated with the being.
- the additional filtering and distillation of the media content and information may be utilized to link with ancillary information sources (e.g., patient records, image data provided by MRI machines, CT machines, or other machines, general population data, etc.), connect and schedule medical appointments for the being, schedule follow-up appointments, schedule medical procedures, connect the being with a pharmacy, or any combination thereof.
- the systems and methods may include coupling the functionality and operations of the system 100 with information associated with various medications and drugs that induce a response that the functionality and operations of the system 100 may detect.
- a certain drug may cause a skin reaction, which may be detectable by the cameras 105 , 114 , 120 of the system 100 and be utilized to detect anomalies, update the monitored being's records, and/or confirm diagnoses.
- a trace chemical may cause micro-movements around a bone fracture that may also be detectable by the cameras 105 , 114 , 120 of the system 100 .
- the systems and methods may be extended to determine anomalies and diagnoses for a large crowd or the public at large to immediately determine health trends, needs, and conditions.
- the systems and methods may allow for the distributed analysis of medical content in the cloud. Additionally, the systems and methods may be combined with the functionality of a medical robot or electronic doctor that may visit or remotely communicate with a person. On-site care may be provided to the person based on the functionality provided by the systems and methods. Furthermore, the functionality and features of the systems and methods may also be combined with robotic surgical devices and may facilitate surgical procedures and the diagnoses of certain medical conditions.
- the methodologies and techniques described with respect to the exemplary embodiments of the system 100 can incorporate a machine, such as, but not limited to, computer system 300 , or other computing device within which a set of instructions, when executed, may cause the machine to perform any one or more of the methodologies or functions discussed above.
- the machine may be configured to facilitate various operations conducted by the system 100 .
- the machine may be configured to, but is not limited to, assist the system 100 by providing processing power to assist with processing loads experienced in the system 100 , by providing storage capacity for storing instructions or data traversing the system 100 , or by assisting with any other operations conducted by or within the system 100 .
- the machine may operate as a standalone device.
- the machine may be connected (e.g., using communications network 135 , another network, or a combination thereof) to and assist with operations performed by other machines and systems, such as, but not limited to, the first user device 102 , the second user device 111 , the third user device 116 , the camera 120 , the device 125 , the server 140 , the server 145 , the database 155 , the server 160 , or any combination thereof.
- the machine may be connected with any component in the system 100 .
- the machine may operate in the capacity of a server or a client user machine in a server-client user network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
- the machine may comprise a server computer, a client user computer, a personal computer (PC), a tablet PC, a laptop computer, a desktop computer, a control system, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
- the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
- the computer system 300 may include a processor 302 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 304, and a static memory 306, which communicate with each other via a bus 308.
- the computer system 300 may further include a video display unit 310 , which may be, but is not limited to, a liquid crystal display (LCD), a flat panel, a solid state display, or a cathode ray tube (CRT).
- the computer system 300 may include an input device 312 , such as, but not limited to, a keyboard, a cursor control device 314 , such as, but not limited to, a mouse, a disk drive unit 316 , a signal generation device 318 , such as, but not limited to, a speaker or remote control, and a network interface device 320 .
- the disk drive unit 316 may include a machine-readable medium 322 on which is stored one or more sets of instructions 324 , such as, but not limited to, software embodying any one or more of the methodologies or functions described herein, including those methods illustrated above.
- the instructions 324 may also reside, completely or at least partially, within the main memory 304 , the static memory 306 , or within the processor 302 , or a combination thereof, during execution thereof by the computer system 300 .
- the main memory 304 and the processor 302 also may constitute machine-readable media.
- Dedicated hardware implementations, including, but not limited to, application-specific integrated circuits, programmable logic arrays, and other hardware devices, can likewise be constructed to implement the methods described herein.
- Applications that may include the apparatus and systems of various embodiments broadly include a variety of electronic and computer systems. Some embodiments implement functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an application-specific integrated circuit.
- the example system is applicable to software, firmware, and hardware implementations.
- the methods described herein are intended for operation as software programs running on a computer processor.
- software implementations, including, but not limited to, distributed processing, component/object distributed processing, parallel processing, and virtual machine processing, can also be constructed to implement the methods described herein.
- the present disclosure contemplates a machine-readable medium 322 containing instructions 324 so that a device connected to the communications network 135 , another network, or a combination thereof, can send or receive voice, video or data, and communicate over the communications network 135 , another network, or a combination thereof, using the instructions.
- the instructions 324 may further be transmitted or received over the communications network 135 , another network, or a combination thereof, via the network interface device 320 .
- While the machine-readable medium 322 is shown in an example embodiment to be a single medium, the term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
- the term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure.
- The term "machine-readable medium" shall accordingly be taken to include, but not be limited to: memory devices; solid-state memories, such as a memory card or other package that houses one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories; and magneto-optical or optical media, such as a disk or tape. Any other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium.
- the “machine-readable medium,” “machine-readable device,” or “computer-readable device” may be non-transitory, and, in certain embodiments, may not include a wave or signal per se. Accordingly, the disclosure is considered to include any one or more of a machine-readable medium or a distribution medium, as listed herein and including art-recognized equivalents and successor media, in which the software implementations herein are stored.
Abstract
A system for providing video analysis and motion augmentation, particularly in the context of telemedicine applications, is disclosed. In particular, the system may utilize cameras and other devices to detect macro and micro changes and movements of a being so as to assist in the detection of an anomaly associated with the being. After detecting the anomaly based on the detected macro and micro changes and movements, the system may transmit an alert identifying the anomaly and formulate a proposed request for interaction with the being. The request for interaction may be transmitted to the being, and information obtained in response to the request for interaction may be utilized by the system to assist in the determination of a diagnosis of a condition of the being. The process may be repeated as necessary until the diagnosis is confirmed and enough information associated with the being is obtained.
Description
- The subject patent application is a continuation of, and claims priority to, U.S. patent application Ser. No. 14/885,746, filed Oct. 16, 2015, and entitled “TELEMEDICINE APPLICATION OF VIDEO ANALYSIS AND MOTION AUGMENTATION,” the entirety of which application is hereby incorporated by reference herein.
- The present application relates to technologies for video and motion augmentation, and more particularly, to a system and method for providing video analysis and motion augmentation, particularly in the context of telemedicine applications.
- In today's society, medical professionals often rely on various types of devices to assist in the detection of various types of physical abnormalities and to assist in determining which disease or condition explains such abnormalities. Currently existing technologies for detecting anomalies often require the use of invasive medical tools or require non-portable and expensive devices to conduct various types of scans on patients. Invasive medical tools include devices, such as, but not limited to, endoscopes, catheters, probes, and surgical robots. As an example, physicians currently utilize endoscopes fitted with lens systems and eyepieces to examine a region inside a patient's body. In order to examine the patient's body, a physician typically inserts the endoscope directly into an opening or organ of the patient's body. While the endoscope itself is often useful in detecting abnormalities in the body of a patient, using an endoscope often causes patient discomfort and even physical trauma at the site at which the endoscope is inserted. Similarly, probes, catheters, surgical robots, and other invasive medical tools also can cause discomfort and physical trauma.
- As an alternative to or in addition to using invasive tools, physicians may also utilize non-invasive diagnostic tools. Non-invasive diagnostic tools include devices, such as, but not limited to, X-ray machines, Magnetic Resonance Imaging (MRI) machines, computerized tomography (CT) machines, positron emission tomography (PET) machines, and other non-invasive diagnostic devices. As an example, physicians utilize MRI machines to generate magnetic fields and pulses of radio wave energy to generate pictures of organs and physical structures inside a patient's body. While MRI machines and other similar technologies produce helpful images and information to assist a physician in detecting anomalies and confirming medical diagnoses, such technologies are often very expensive, cumbersome, non-portable, or a combination thereof. As a result, there is still significant room to enhance current methodologies and technologies for detecting anomalies, obtaining patient information, and confirming medical diagnoses.
- FIG. 1 is a schematic diagram of a system for providing video analysis and motion augmentation for telemedicine applications according to an embodiment of the present disclosure.
- FIG. 2 is a flow diagram illustrating a sample method for providing video analysis and motion augmentation for telemedicine applications according to an embodiment of the present disclosure.
- FIG. 3 is a schematic diagram of a machine in the form of a computer system within which a set of instructions, when executed, may cause the machine to perform any one or more of the methodologies or operations of the systems and methods for providing video analysis and motion augmentation for telemedicine applications.
- A system and accompanying methods for providing video analysis and motion augmentation for applications, such as telemedicine applications, are disclosed. In particular, the system and methods may involve utilizing video analysis and motion augmentation to assist in the detection of various types of physical anomalies and to assist in the determination of diagnoses for beings, such as humans and animals. In order to accomplish the foregoing, the system and methods may involve utilizing cameras and other technologies to focus on two primary modalities associated with such beings: physiological changes and movements. Both of these modalities may be available at macro (e.g., body or body region) and micro (e.g., specific body structure or body part) levels. The system and methods may utilize the cameras and other technologies to capture video and/or other media content of a being in a particular environment, such as an office, home, or other environment. Based on the captured video and/or other media content of the being, the system and methods may include performing an analysis of the content to detect physiological changes and/or movements of the being at macro and micro levels. For example, based on the video of the being, the systems and methods may detect a micro-movement of a body part of the being or a change in skin pigmentation of the being.
- Once the video or other content of the being is obtained and changes associated with the being are detected, the system and methods may include submitting the content, the detected changes, and information associated with the content and changes for further processing in the system. The submitted content, changes, and information may be aggregated with similar information for other beings. Based on a comparison among the aggregated data, the content, the changes, and the information, the system and methods may detect one or more anomalies associated with the being. If an anomaly is not detected or a detected anomaly needs to be confirmed, the system and methods may include transmitting a signal to cause the cameras to be adjusted so that additional media content of the being may be obtained from a different vantage point and/or transmitting a signal to a device of the being to instruct the being to perform a particular action, such as moving a body part in the presence of the camera. The cameras may obtain the additional media content based on the adjusted position of the camera and/or the action performed by the being in response to the instruction. Then, the system and methods may include utilizing the additional media content from the cameras in combination with the initial media content obtained for the user to confirm the existence of an anomaly.
- Once an anomaly is detected, the system and methods may include transmitting one or more alerts to a device of the being or to a device of a physician monitoring the being, which indicate the presence and type of anomaly detected. The system and methods may also include generating and transmitting one or more proposed interactions to be performed with the being based on the detected anomaly. For example, if the cameras obtain video content of a person that shows that the person's left eye is twitching in an anomalous manner, the system may generate and transmit a proposed interaction that indicates that the person should blink their left eye in a certain manner and/or that the physician monitoring the person should perform some type of interaction with the person so as to obtain additional information. The system and methods may receive information back from the person and/or the physician that relates to the proposed interaction, and may utilize the information, in conjunction with the content, changes, aggregated data, and other information, to generate a diagnosis for the being. For example, using the example above, based on the media content showing the person's left eye twitching, information gathered after the person blinks their eye, aggregated information for other individuals experiencing similar symptoms, previously stored historical patient information for the person, and/or other information, the system and methods may diagnose the person with a certain eye disease. The system and methods may include continuing to monitor the being to confirm the diagnosis, to track the being's progress, to determine trends in a population, update the being's historical information, or to perform any other desired function. 
In certain embodiments, the content obtained from the cameras may be combined with other technologies, such as, but not limited to, infrared imaging content, thermal imaging content, MRI content, CT content, PET content, and/or any type of other content to confirm anomalies, confirm diagnoses, generate proposed interactions, transmit alerts, or any combination thereof.
- In one embodiment, a system for providing video analysis and motion augmentation for telemedicine applications is disclosed. The system may include a memory that stores instructions and a processor that executes the instructions to perform various operations of the system. The system may perform an operation that includes capturing first media content of a being within a range of a camera monitoring the being. The system may then perform an operation that includes analyzing the first media content to detect a first change associated with the being. In certain embodiments, the first change associated with the being may be a movement of the being, a change in a condition of the being, or a combination thereof. The system may proceed to perform an operation that includes detecting an anomaly associated with the being based on comparing the first change associated with the being to aggregated data for a plurality of beings including the being. Once an anomaly is detected, the system may perform an operation that includes determining, based on the anomaly, a proposed interaction with the being. The system may perform an operation that includes transmitting the proposed interaction to a device of the being. Finally, the system may perform an operation that includes determining, based on the anomaly, the aggregated data, and on information obtained in response to transmitting the proposed interaction, a diagnosis associated with the being.
- In another embodiment, a method for providing video analysis and motion augmentation for telemedicine applications is disclosed. The method may include utilizing a memory that stores instructions, and a processor that executes the instructions to perform the various functions of the method. In particular, the method may include obtaining, during a first time interval, first media content of a being within a range of a camera monitoring the being. Additionally, the method may include detecting, based on the first media content, a first change associated with the being. The first change associated with the being may include a movement of the being, a first change in a condition of the being, or a combination thereof. Once the first change is detected, the method may include detecting an anomaly associated with the being based on comparing the first change associated with the being to aggregated data for a plurality of beings. The method may then include determining, based on the anomaly associated with the being, a proposed interaction with the being, and then transmitting the proposed interaction to a device associated with the being. Finally, the method may include determining, based on the anomaly, the aggregated data, and on information obtained in response to transmitting the proposed interaction, a diagnosis associated with the being.
- According to yet another embodiment, a computer-readable device having instructions for providing video analysis and motion augmentation for telemedicine applications is provided. The computer instructions, which when loaded and executed by a processor, may cause the processor to perform operations including: capturing first media content of a being within a range of a camera; analyzing the first media content to detect a first change associated with the being, wherein the first change associated with the being comprises a movement of the being, a first change in a condition of the being, or a combination thereof; detecting an anomaly associated with the being based on comparing the first change associated with the being to aggregated data for a plurality of beings; determining, based on the anomaly associated with the being, a proposed interaction with the being to determine a diagnosis associated with the being; transmitting the proposed interaction to a device associated with the being; and determining, based on the anomaly, the aggregated data, and on information obtained in response to transmitting the proposed interaction, the diagnosis associated with the being.
- These and other features of the systems and methods for providing video analysis and motion augmentation for telemedicine applications are described in the following detailed description, drawings, and appended claims.
- A
system 100 and accompanying methods for providing video analysis and motion augmentation for applications, such as, but not limited to, telemedicine applications, are disclosed. In particular, the system 100 and methods may involve utilizing video analysis and motion augmentation to assist in the detection of various types of anomalies and to assist in the determination of diagnoses for various types of beings, such as humans and animals. In order to accomplish the foregoing, the system 100 and methods may involve utilizing cameras 120 and other technologies to focus on two primary modalities associated with such beings: physiological changes and movements. Each of these modalities may be available at macro (e.g., body or body region) and micro (e.g., specific body structure or body part) levels. The system 100 and methods may utilize the cameras 120 and other technologies to capture video and/or other media content of a being in a particular environment, such as a doctor's office, a home, or other environment. Based on the captured video and/or other media content of the being, the system 100 and methods may analyze the content to detect physiological changes and/or movements of the being at both macro and micro levels. - Once the video or other content of the being is obtained and the changes of the being are detected, the
system 100 and methods may submit the content, the detected changes, and information associated with the content and changes for further processing in the system 100. The submitted content, changes, and information may be aggregated with similar information for other beings. Based on a comparison among the aggregated data, the content, the changes, and the information, the system 100 and methods may detect one or more anomalies associated with the being. If an anomaly is not detected or a detected anomaly needs to be confirmed, the system 100 and methods may include transmitting a signal to cause the cameras 120 to be adjusted so that additional media content of the being may be obtained from a different position. Additionally, the system 100 and methods may include transmitting a signal to a device of the being to instruct the being to perform a particular action, such as move a body part in front of the camera 120. The cameras 120 may obtain the additional media content based on the adjusted position of the camera 120 and/or the action performed by the being in response to the instruction sent to the being. The system 100 and methods may then include utilizing the additional media content from the cameras 120 in combination with the initial video content depicting the user to confirm the existence of an anomaly. - Once an anomaly is detected, the
system 100 and methods may include transmitting one or more alerts to a device of the being and/or to a device of a physician monitoring the being. The alerts may be utilized to indicate the presence and type of anomaly detected. The system 100 and methods may also include generating and transmitting one or more proposed interactions to be performed with the being based on the detected anomaly. For example, if the cameras 120 obtain video content of a person showing that the skin pigmentation on his right arm is changing, the system 100 may generate and transmit a proposed interaction that indicates that the person should rotate his arm in a certain manner and/or that the physician monitoring the person should perform some type of interaction with the person so as to obtain additional information relating to the anomaly. - The
system 100 and methods may receive information back from the person and/or the physician that relates to the proposed interaction, and may utilize the information, in conjunction with the content, changes, aggregated data, and other information, to generate a diagnosis for the person. For example, using the example above, based on the media content showing the change in skin pigmentation, information gathered after the person rotates his arm, aggregated information for other individuals experiencing similar symptoms, previously stored historical patient information for the person, and/or other information, the system 100 and methods may diagnose the person with a certain skin disease. The system 100 and methods may include continuing to monitor the person to confirm the diagnosis, to track the person's progress, to determine trends in a population, to update the person's historical information, or to perform any other desired function. In certain embodiments, the content obtained from the cameras 120 may be combined with information obtained from other technologies, such as, but not limited to, infrared imaging content, thermal imaging content, MRI content, CT content, PET content, and/or any other type of content to confirm anomalies, confirm diagnoses, generate proposed interactions, transmit alerts, or any combination thereof. - As shown in
FIG. 1, a system 100 for providing video analysis and motion augmentation for applications, such as, but not limited to, telemedicine applications, is disclosed. The system 100 may be configured to support, but is not limited to supporting, cloud computing services, content delivery services, satellite services, telephone services, voice-over-internet protocol (VoIP) services, software as a service (SaaS) applications, gaming applications and services, productivity applications and services, mobile applications and services, and any other computing applications and services. The system may include a first user 101, which may be any type of being, such as, but not limited to, a human, an animal, or any other being. The first user 101 may utilize a first user device 102 to access data, content, and services, or to perform a variety of other tasks and functions. As an example, the first user 101 may utilize first user device 102 to transmit signals to access various online services, such as those provided by a content provider or service provider associated with communications network 135. The first user device 102 may include a memory 103 that includes instructions, and a processor 104 that executes the instructions from the memory 103 to perform the various operations that are performed by the first user device 102. In certain embodiments, the processor 104 may be hardware, software, or a combination thereof. The first user device 102 may also include a camera 105, which may be configured to record and store video and/or audio content within a viewing range and/or auditory range of the camera 105. In certain embodiments, the camera 105 may be any type of camera including, but not limited to, a video camera, a photo camera, an infrared camera, a thermal imaging camera, any type of imaging device, or any combination thereof.
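The paragraph above describes cameras that record content "within a viewing range" of the camera. As a purely illustrative sketch of one way such a check could be expressed, the function below tests whether a subject lies inside a camera's field of view and maximum usable range; the geometry, parameter names, and default values are all assumptions and are not part of the disclosure.

```python
import math

def within_viewing_range(subject_xy, camera_xy, camera_heading_deg,
                         fov_deg=60.0, max_range=10.0):
    """Return True if the subject is within `max_range` of the camera and
    inside its horizontal field of view (all distances in the same units).
    Hypothetical helper; not taken from the disclosure."""
    dx = subject_xy[0] - camera_xy[0]
    dy = subject_xy[1] - camera_xy[1]
    distance = math.hypot(dx, dy)
    if distance > max_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # Signed angular offset from the camera's heading, normalized to [-180, 180)
    offset = (bearing - camera_heading_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= fov_deg / 2.0
```

A capture loop could use such a test to decide when to start recording or when to prompt the subject to reposition.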
In certain embodiments, the first user device 102 may be a computer, a medical device, a laptop, a tablet device, a phablet, a server, a mobile device, a smartphone, a smart watch, or any other type of computing device. Illustratively, the first user device 102 is shown as a smartphone device in FIG. 1, and the first user 101 is a person. - In addition to the
first user 101, the system 100 may also include a second user 110, which may be any type of being, such as, but not limited to, a human, an animal, or any other being. The second user 110 may utilize a second user device 111 to also access data, content, and services, and to perform a variety of other functions. For example, the second user device 111 may be utilized by the second user 110 to transmit signals to request various types of content, services, and data provided by providers associated with communications network 135 or any other network in the system 100. The second user device 111 may include a memory 112 that includes instructions, and a processor 113 that executes the instructions from the memory 112 to perform the various operations that are performed by the second user device 111. In certain embodiments, the processor 113 may be hardware, software, or a combination thereof. The second user device 111 may also include a camera 114, which may be configured to record and store content within a viewing range of the camera 114. In certain embodiments, the camera 114 may be any type of camera including, but not limited to, a video camera, a photo camera, an infrared camera, a thermal imaging camera, any type of imaging device, or any combination thereof. Similar to the first user device 102, in certain embodiments, the second user device 111 may be a computer, a laptop, a tablet device, a phablet, a server, a mobile device, a smartphone, a smart watch, or any other type of computing device. Illustratively, the second user device 111 is shown as a tablet device in FIG. 1, and the second user 110 is a person. - The
system 100 may also include a third user 115, which may be any type of being, such as, but not limited to, a human, an animal, or any other being. The third user 115 may utilize a third user device 116 to also access data, content, and services, and to perform a variety of other functions. For example, the third user device 116 may be utilized by the third user 115 to transmit signals to request various types of content, services, and data provided by providers associated with communications network 135 or any other network in the system 100. Additionally, the third user device 116 may communicate with first and second user devices 102, 111. The third user device 116 may include a memory 117 that includes instructions, and a processor 118 that executes the instructions from the memory 117 to perform the various operations that are performed by the third user device 116. In certain embodiments, the processor 118 may be hardware, software, or a combination thereof. The third user device 116 may also include a camera 119, which may be configured to record and store content within a viewing range of the camera 119. The camera 119 may also record audio content. In certain embodiments, the camera 119 may be any type of camera including, but not limited to, a video camera, a photo camera, an infrared camera, a thermal imaging camera, any type of imaging device, or any combination thereof. In certain embodiments, the third user device 116 may be a computer, a laptop, a tablet device, a phablet, a server, a mobile device, a smartphone, a smart watch, or any other type of computing device. Illustratively, the third user device 116 is shown as a tablet device in FIG. 1, and the third user 115 is a physician associated with the first and second users 101, 110. - In certain embodiments,
first user device 102, the second user device 111, and the third user device 116 may have any number of software applications and/or application services stored and/or accessible thereon. For example, the first, second, and third user devices 102, 111, 116 may include software applications that may be utilized by the first, second, and third users 101, 110, 115 to interact with any device in the system 100, any network in the system 100, or any combination thereof. In certain embodiments, the first user device 102, the second user device 111, and the third user device 116 may include associated telephone numbers, device identities, or any other identifiers to uniquely identify the first, second, and third user devices 102, 111, 116. - The
system 100 may also include a camera 120, which may be utilized to record any type of media content or any type of other content. The media content may include, but is not limited to, video content, audio content, image content, any type of content, or any combination thereof. The camera 120 may be any type of camera, such as, but not limited to, a video camera, a thermal imaging camera, an infrared camera, an X-ray-enabled camera, any type of imaging device, any type of media content recording device, a surveillance device, or any combination thereof, that may be utilized to capture and record media content associated with the first and second users 101, 110. For example, the camera 120 can record video of the first user 101 and any sounds that the first user 101 makes when the first user 101 is within a viewing range for the camera 120 or the system 100. The camera 120 may record sounds by utilizing a microphone, which may reside within the camera 120 or in proximity to the camera 120. In certain embodiments, the camera 120 may be communicatively linked with any of the devices and networks in the system 100, and may transmit recorded media content to any of the devices and networks in the system 100. - In addition to the
camera 120, the system 100 may also include a device 125, which may be any type of device including, but not limited to, an MRI machine, a CT machine, a PET machine, a thermal imaging device, an X-ray machine, an infrared imaging device, any type of medical imaging device, any type of device, any type of computing device, or any combination thereof. In certain embodiments, the device 125 may communicate with any of the devices and components in the system 100, such as, but not limited to, the first, second, and third user devices 102, 111, 116. The device 125 may include a memory 126 that includes instructions, and a processor 127 that executes the instructions from the memory 126 to perform the various operations that are performed by the device 125. In certain embodiments, the processor 127 may be hardware, software, or a combination thereof. The device 125 may be configured to record imaging data and content associated with the first and second users 101, 110. For example, if the device 125 is a thermal imaging device, the device 125 may be configured to take thermal images of the first and second users 101, 110. The imaging data may be transmitted to the system 100 for further processing and may be combined with content obtained from the first, second, and third user devices 102, 111, 116 to detect anomalies associated with the first and second users 101, 110 and to determine diagnoses for the first and second users 101, 110. - The
system 100 may further include a communications network 135. The communications network 135 of the system 100 may be configured to link each of the devices in the system 100 to one another. Additionally, the communications network 135 may be configured to transmit, generate, and receive any information and data traversing the system 100. In certain embodiments, the communications network 135 may include any number of servers, databases, or other componentry. The communications network 135 may also include and be connected to a cloud-computing network, a wireless network, an Ethernet network, a satellite network, a broadband network, a cellular network, a private network, a cable network, the Internet, an internet protocol network, a multiprotocol label switching (MPLS) network, a content distribution network, any network, or any combination thereof. Illustratively, servers 140 and 145 are shown as being included within communications network 135, and the communications network 135 is shown as a content delivery network. In certain embodiments, the communications network 135 may be part of a single autonomous system that is located in a particular geographic region, or be part of multiple autonomous systems that span several geographic regions. - Notably, the functionality of the
system 100 may be supported and executed by using any combination of the servers 140, 145, and 160. In certain embodiments, the server 140 may include a memory 141 that includes instructions, and a processor 142 that executes the instructions from the memory 141 to perform various operations that are performed by the server 140. The processor 142 may be hardware, software, or a combination thereof. Similarly, the server 145 may include a memory 146 that includes instructions, and a processor 147 that executes the instructions from the memory 146 to perform the various operations that are performed by the server 145. In certain embodiments, the servers 140, 145, and 160 may be communicatively linked to the communications network 135, any network, any device in the system 100, or any combination thereof. - The
database 155 of the system 100 may be utilized to store and relay information that traverses the system 100, cache content that traverses the system 100, store data about each of the devices in the system 100, and perform any other typical functions of a database. In certain embodiments, the database 155 may be connected to or reside within the communications network 135, any other network, or a combination thereof. In certain embodiments, the database 155 may serve as a central repository for any information associated with any of the devices and information associated with the system 100. Furthermore, the database 155 may include a processor and memory or be connected to a processor and memory to perform the various operations associated with the database 155. In certain embodiments, the database 155 may be connected to the camera 120, the servers 140, 145, 160, the first user device 102, the second user device 111, the third user device 116, the device 125, the communications network 135, or any combination thereof. - The
database 155 may also store information and metadata obtained from the system 100, store metadata and other information associated with the first, second, and third users 101, 110, 115 and their interactions with the system 100, store communications traversing the system 100, store user preferences, store information associated with any device or signal in the system 100, store information relating to patterns of usage relating to the first, second, and third user devices 102, 111, 116, store any information traversing the communications network 135, store any information generated by or associated with the camera 120, store performance data for the devices, store information generated or associated with device 125, store historical data associated with the first and second users 101, 110, store media content of the first and second users 101, 110 recorded by the cameras 120 or any device in the system, store any of the information disclosed for any of the operations and functions disclosed herewith, store any information traversing the system 100, or any combination thereof. Furthermore, the database 155 may be configured to process queries sent to it by any device in the system 100. - Operatively, the
system 100 may provide video analysis and motion augmentation for applications, such as telemedicine applications, as shown in the following exemplary scenario. In the example scenario, the first user 101 may be located in an office environment and may be utilizing first user device 102, which may be a smartphone or other similar device. The camera 120 of the system 100 may also be located in the office of the first user 101. At a selected time or on a continual basis, the camera 120 may record media content of the first user 101, such as video content of the user, while the first user 101 is sitting in his office. In certain embodiments, the camera 105 may be utilized to record media content associated with the first user 101, either alone or in combination with the camera 120. The media content may be transmitted by the camera 120 to the communications network 135 for further processing. Once the communications network 135 receives the media content, the system 100 may analyze the media content to detect one or more changes associated with the first user 101. For example, the changes may be macro changes, micro changes, changes in the condition of the first user 101, or a combination thereof. - Macro changes may be changes or movements specific to the first user's 101 entire body or to a specific region (e.g., chest region, back region, head region, leg region, etc.) of the first user's 101 body. Micro changes may be changes or movements specific to specific body parts and/or to specific body structures (e.g., parts of the face, a single finger, a toe, etc.). For example, the
system 100 may detect the first user's 101 blood flow via skin pigmentation changes detected in the media content recorded of the first user 101. As another example, the system 100 may detect various types of range of motion for certain body parts or even detect various types of "tics" (e.g., eye twitching or restless leg) or habits that the first user 101 has. Changes in movement, for example, may involve detecting that a particular body region is moving in an irregular direction or magnitude. In certain embodiments, the system 100 may be configured to perform a shape analysis (e.g., finding the right rotation or contour of a body part) to help diagnose and normalize automatic observations. - Once the one or more changes associated with the
first user 101 are detected based on the media content, the system 100 may detect one or more anomalies associated with the first user 101 based on comparing the detected changes to previously stored information, such as health information, for the first user 101 and/or to aggregated information for a selected population of users. When detecting anomalies, the information obtained from the media content may be combined with images and information obtained from other technologies to confirm the presence of an anomaly. For example, if the first user 101 had an X-ray of his chest and the X-ray shows an anomaly in a certain region, and the media content shows the same anomaly, then the anomaly may be confirmed by utilizing the image provided by the X-ray in conjunction with the video recording of the first user 101. As another example, if the first user 101 underwent a thermal imaging scan using device 125 and the thermal imaging scan showed that the first user's 101 condition is normal, but the media content recording shows an anomaly, the information from the thermal imaging scan may be utilized to confirm that an anomaly does not exist, and can, therefore, reduce false alarms. In certain embodiments, the first user 101 may be identified by analyzing the media content, and, in other embodiments, the identity of the first user 101 may be kept anonymous. If the first user 101 is identified, the anomaly may be confirmed by comparing the media content recording of the first user 101 to the first user's 101 medical records, which may be accessible by accessing the third user device 116 of the third user 115, who may be a physician. - In certain embodiments, if an anomaly has not been detected or if the existence of an anomaly needs to be confirmed, the
system 100 may request that the first user's 101 physician confirm the existence of the anomaly, such as via the third user device 116. Additionally, the system 100 may transmit a signal to automatically adjust a position of the camera 120 or request the user to adjust the position of the first user device 102 so that a new media content recording from a different vantage point may be obtained. Furthermore, the system 100 may transmit a signal to the first user device 102 instructing the user to move a body part or move in a particular manner so that new media content may be recorded to confirm whether an anomaly exists. In certain embodiments, the system 100 may present a visual representation of the first user 101 on a visual interface of the first user device 102 that shows where the detected anomaly is on the first user 101. The system 100 may enable the first user 101 to interact with the visual representation, such as via a software application, to confirm whether the anomaly exists or to input additional information, such as text commentary, associated with the anomaly. - If an anomaly is detected, the
system 100 may transmit an alert to the third user device 116 of the physician and/or an alert to the first user device 102 confirming the presence of the anomaly. Based on the detection of the anomaly, the recorded media content, aggregated data for a population, and/or historical information for the first user 101, the system 100 may determine one or more proposed interactions for interacting with the first user 101. For example, if the detected anomaly is a bruised wrist, the system 100 may transmit a signal to the third user device 116 requesting the doctor to prescribe medication for dealing with pain associated with the bruised wrist, to input notes relating to the bruised wrist, and/or to input a regimen for the first user 101 to perform to heal the bruised wrist. The system 100 may also transmit a signal to the first user device 102 requesting the user to input additional information regarding the cause of the bruising or to input additional information relating to the bruised wrist. In certain embodiments, the signal may advise the first user 101 to adjust a position of the cameras and/or of the first user 101 so that additional information associated with a detected condition may be obtained. In certain embodiments, the requested interactions may be handled by a health medication service to individually personalize care for the first user 101 and/or to distill the information for a human cooperator. In certain embodiments, the proposed interactions may be adjusted by the physician as necessary. - In certain embodiments, the
system 100 may automatically anonymize any of the interactions with the first user 101, the physician, and the system 100. Any information gathered from the interactions may also be anonymized. In certain embodiments, if the identity of the first user 101 is known, confidential information may be scrubbed to ensure the privacy of the first user 101. Additionally, in certain embodiments, based on the severity of the anomaly or condition detected, the system 100 may also automatically scrub information identifying the first user 101 to ensure privacy and confidentiality. The system 100 may generate and transmit any number of interactions to the physician and the first user 101, and may utilize information gathered from the interactions to further supplement the media content and other information obtained for the first user 101. - Based on the information gathered from the interactions, the detected anomaly, the aggregated data, the historical data for the
first user 101, or a combination thereof, the system 100 may determine a diagnosis for the first user 101. For example, using the example above, the system 100 may determine that the first user 101 suffered a specific type of contusion. The determined diagnosis may be provided to the physician and/or confirmed by the physician. If necessary, the determination of the diagnosis may trigger the automatic scheduling of a medical appointment with the physician, such as by accessing digital calendars on the third user device 116 and first user device 102. In certain embodiments, processes provided by the system 100 may be repeated as necessary until enough information associated with the first user 101 is obtained and a confirmation of the diagnosis is possible. The system 100 may also be utilized to complement and support any type of telemedicine application as well. In certain embodiments, the system 100 may be extended to monitor athlete performance, worker performance, or for any other purpose. Any of the data generated by the system 100 may be stored in a record associated with the first user 101 and may be combined with aggregated data for a population so as to determine various trends and various health conditions for the population. - Notably, as shown in
FIG. 1, the system 100 may perform any of the operative functions disclosed herein by utilizing the processing capabilities of server 160, the storage capacity of the database 155, or any other component of the system 100 to perform the operative functions disclosed herein. The server 160 may include one or more processors 162 that may be configured to process any of the various functions of the system 100. The processors 162 may be software, hardware, or a combination of hardware and software. - Additionally, the
server 160 may also include a memory 161, which stores instructions that the processors 162 may execute to perform various operations of the system 100. For example, the server 160 may assist in processing loads handled by the various devices in the system 100, such as, but not limited to, capturing media content of a being; analyzing media content to detect micro and macro movements and changes associated with the being; detecting anomalies based on the media content; transmitting signals to adjust a position of the camera 120; transmitting signals to a device that instruct the being to adjust the being's position; determining proposed interactions with the being; receiving information from the being; determining a diagnosis for the being based on the media content, aggregated data, and other information; and performing any other suitable operations conducted in the system 100 or otherwise. In one embodiment, multiple servers 160 may be utilized to process the functions of the system 100. The server 160 and other devices in the system 100 may utilize the database 155 for storing data about the devices in the system 100 or any other information that is associated with the system 100. In one embodiment, multiple databases 155 may be utilized to store data in the system 100. - Although
FIG. 1 illustrates a specific example configuration of the various components of the system 100, the system 100 may include any configuration of the components, which may include using a greater or lesser number of the components. For example, the system 100 is illustratively shown as including a first user device 102, a second user device 111, a third user device 116, a camera 120, a device 125, a communications network 135, a server 140, a server 145, a server 160, and a database 155. However, the system 100 may include multiple first user devices 102, multiple second user devices 111, multiple third user devices 116, multiple cameras 120, multiple devices 125, multiple communications networks 135, multiple servers 140, multiple servers 145, multiple servers 160, multiple databases 155, or any number of any of the other components inside or outside the system 100. Furthermore, in certain embodiments, substantial portions of the functionality and operations of the system 100 may be performed by other networks and systems that may be connected to system 100. - As shown in
FIG. 2, an exemplary method 200 for providing video analysis and motion augmentation for applications, such as telemedicine applications, is schematically illustrated. The method 200 may include, at step 202, capturing first media content of a being within a range of a camera 120 monitoring the being. For example, the method 200 may involve utilizing the camera 120 to capture a video recording and/or video stream of the first user 101 at a selected time in a selected environment, such as the first user's 101 office. In certain embodiments, capturing of the media content may be performed by utilizing the first user device 102, the second user device 111, the third user device 116, the camera 120, the device 125, the server 140, the server 145, the server 160, the communications network 135, any combination thereof, or by utilizing any other appropriate program, system, or device. - At
step 204, the method 200 may include analyzing the captured media content to detect a first change associated with the being that is monitored. In certain embodiments, the analyzing may be performed by utilizing the first user device 102, the second user device 111, the third user device 116, the camera 120, the device 125, the server 140, the server 145, the server 160, the communications network 135, any combination thereof, or by utilizing any other appropriate program, system, or device. In certain embodiments, the captured media content may be utilized to detect the first change by detecting both micro and macro changes occurring for the being monitored by the camera 120. The micro changes may include changes to a specific body structure of the being, such as, but not limited to, a leg, a hand, a finger, an eye, a mouth, a nose, a thigh, a head, a back, or any other body structure of the being. The macro changes may include changes to the entire body of the being and/or a specific region of the body of the being. The macro and micro changes may be any type of physiological change or even changes that may be symptomatic of certain mental conditions. The physiological changes may include, but are not limited to, any type of body movement, any type of body part movement, any type of skin pigmentation change, any type of skin change, any type of color change, any type of perspiration change, any body fluid change, any type of physiological change, any type of wound, any type of infection, any type of allergic reaction, or any combination thereof. In certain embodiments, textural changes associated with the body of the monitored being may also be detected. For example, a change in texture associated with the being's skin may be detected. - At
- At step 206, the method 200 may include detecting an anomaly associated with the being based on comparing the detected first change with previous historical information for the being, aggregated information for a plurality of other beings, medical information, other information, or a combination thereof. For example, the previous historical information may be obtained from a person's medical records or other records. The historical information may even be inputted by a person into the system 100, such as via a digital form or other input instrument. The aggregated information may be information, such as, but not limited to, health information, demographic information, psychographic information, or any other type of information, for any number of individuals of a selected population. The medical information may include, but is not limited to, information identifying expected health metrics associated with various types of medical conditions, media content displaying healthy body structures, media content displaying unhealthy medical conditions or body structures, healthy and unhealthy human anatomy information, any other types of medical information, or a combination thereof. An anomaly may be detected, for example, if a specific micro-movement of a user's eye that was captured in the media content is indicative of a degenerative eye disease when the media content is compared with medical information stored in the system 100. The same anomaly may be detected by comparing micro-movement detected in the media content to a person's own medical history, previous media content of the person, or to aggregated data associated with the eyes of a multitude of other people.
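One way to picture the comparison in step 206 is a deviation test against a baseline distribution, where the baseline comes from the person's own history or from aggregated population data. The z-score rule, the threshold, and the plain-list data format below are assumptions made for illustration, not the patent's method.

```python
# Hypothetical anomaly test: a measured change is anomalous when it lies
# more than `z_threshold` standard deviations from the baseline mean.

from statistics import mean, stdev

def is_anomaly(measurement, baseline, z_threshold=3.0):
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return measurement != mu
    return abs(measurement - mu) / sigma > z_threshold

# Baseline: prior eye micro-movement amplitudes drawn from the person's
# history or from an aggregated population (values invented for the example).
history = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]
```

The same function covers both comparison paths in the text: pass the person's own prior measurements as `baseline` for a history comparison, or pass population-level measurements for an aggregate comparison.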
In certain embodiments, the detecting may be performed by utilizing the first user device 102, the second user device 111, the third user device 116, the camera 120, the device 125, the server 140, the server 145, the server 160, the communications network 135, any combination thereof, or by utilizing any other appropriate program, system, or device.
- At step 208, the method 200 may include determining if additional information is needed to confirm the existence of the anomaly. In certain embodiments, the determining may be performed by utilizing the first user device 102, the second user device 111, the third user device 116, the camera 120, the device 125, the server 140, the server 145, the server 160, the communications network 135, any combination thereof, or by utilizing any other appropriate program, system, or device. If it is determined that additional information is needed to confirm the existence of the anomaly, the method 200 may include, at step 210, transmitting a signal to adjust a position of the camera 120 and/or a signal to a device of the being to instruct the being to adjust a body part in a prescribed manner. For example, if the first user 101 is being monitored, a signal may be transmitted by the system 100 to the camera 120 to automatically adjust the camera 120 to a new position so that additional video recordings of the first user 101 may be obtained. As another example, a signal may be transmitted by the system 100 to the first user device 102, which may cause a user interface of the first user device 102 to display instructions to the first user 101 to move his arm up and down in a certain manner.
- Once the signal to adjust the position of the camera 120 and/or the signal to the device including instructions is sent, the method 200 may include, at step 212, capturing second media content of the being while the camera 120 position is adjusted, after the camera 120 position is adjusted, while the being adjusts the body part, after the being adjusts the body part, or any combination thereof, to confirm the existence of the anomaly. In certain embodiments, the capturing of the second media content may be performed by utilizing the first user device 102, the second user device 111, the third user device 116, the camera 120, the device 125, the server 140, the server 145, the server 160, the communications network 135, any combination thereof, or by utilizing any other appropriate program, system, or device. Once the second media content is captured, or if it is determined that additional information is not needed to confirm the existence of the anomaly, the method 200 may include, at step 214, determining a proposed interaction with the being based on the detection of the anomaly. In certain embodiments, the determining may be performed by utilizing the first user device 102, the second user device 111, the third user device 116, the camera 120, the device 125, the server 140, the server 145, the server 160, the communications network 135, any combination thereof, or by utilizing any other appropriate program, system, or device. A proposed interaction may include, but is not limited to, a request for the being to perform a certain action, a request for a physician to perform some action with respect to the being, a request to change the position of the camera 120 further, a request to have the being take a certain medication, any type of interaction, or any combination thereof.
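Steps 208 through 214 form a small confirmation loop: decide whether more information is needed, send adjustment signals, re-capture, then propose an interaction. The sketch below compresses that loop into two functions; the signal strings and the callback stand-ins for the system's analysis are invented for illustration and are not the patent's actual device commands.

```python
# Hypothetical sketch of steps 208-214. `needs_more_info` and
# `second_capture_shows` stand in for the system's analysis routines;
# "signals" are returned as strings instead of real camera/device commands.

def confirm_anomaly(anomaly, needs_more_info, second_capture_shows):
    """Return (anomaly confirmed?, signals sent during this pass)."""
    signals = []
    if not needs_more_info(anomaly):
        return True, signals                      # proceed to step 214 directly
    signals.append("adjust_camera_position")      # step 210, to the camera 120
    signals.append("instruct_body_part_motion")   # step 210, to the user device
    confirmed = second_capture_shows(anomaly)     # step 212, second capture
    return confirmed, signals

def propose_interaction(confirmed):
    """Step 214: pick a proposed interaction once the anomaly is assessed."""
    return "request_physician_review" if confirmed else "no_action"
```

In a fuller sketch, `confirm_anomaly` would be called repeatedly, mirroring how the method 200 repeats until enough information is obtained.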
- At step 216, the method 200 may include transmitting the proposed interaction to the device of the being, to a device of a person monitoring the being, or to another device. Also, the method 200 may include transmitting one or more alerts indicating the presence of the anomaly to the being, the person monitoring the being, or a combination thereof. The one or more alerts may also be sent to any device of the system 100. In certain embodiments, the transmitting of the proposed interaction and/or the transmitting of the alerts may be performed by utilizing the server 140, the server 145, the server 160, the communications network 135, any combination thereof, or by utilizing any other appropriate program, system, or device. At step 218, the method 200 may include receiving information in response to the proposed interaction. The information may include, but is not limited to, information associated with an action being performed by the being, information gathered based on an interaction between the being and the person monitoring the being, any information provided in response to the proposed interaction, or any combination thereof. For example, the information may include information provided by a doctor indicating certain additional symptoms associated with the anomaly that the doctor has determined to be occurring during the doctor's interaction with the person being monitored. In certain embodiments, the information may be received by utilizing the server 140, the server 145, the server 160, the communications network 135, any combination thereof, or by utilizing any other appropriate program, system, or device.
- At step 220, the method 200 may include determining, based on the detected anomaly, the aggregated data, the historical information associated with the being, and/or the information received in response to the interaction, a diagnosis for the being. For example, based on the micro-movement of the first user's 101 eye, aggregated data associated with the eyes of multiple other people, previous medical records for the first user 101, and information provided by the doctor relating to the symptoms associated with the micro-movement of the eye, a diagnosis for the first user 101 may be determined. In this case, the system 100 may determine that the first user 101 may have a degenerative eye condition based on the information and media content obtained for the first user 101. The steps in the method 200 may be repeated as necessary until a diagnosis is confirmed and/or until enough information associated with the being and the being's condition is obtained. Notably, the method 200 may further incorporate any of the features and functionality described for the system 100 or as otherwise described herein.
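Step 220's combination of evidence sources can be pictured as pooling per-source confidences and accepting the diagnosis only when the pooled score clears a confirmation threshold, repeated across passes of the method 200. The source names, scores, and threshold below are invented for illustration; the patent does not specify how the evidence is combined.

```python
# Hypothetical sketch of step 220: average per-source confidence scores
# and propose the diagnosis only when the combined score is high enough.

def combine_evidence(scores, threshold=0.7):
    """scores: {evidence source: confidence in [0, 1]} -> (score, confirmed?)"""
    combined = sum(scores.values()) / len(scores)
    return combined, combined >= threshold

evidence = {
    "detected_anomaly": 0.9,     # eye micro-movement seen in the media content
    "aggregated_data": 0.8,      # eyes of a multitude of other people
    "patient_history": 0.6,      # previous medical records
    "physician_feedback": 0.9,   # symptoms reported at step 218
}
```

A simple average is only one of many pooling rules; a weighted sum or a learned model would fit the same interface.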
- The systems and methods disclosed herein may include additional functionality and features. For example, the systems and methods may be configured to allow for the detection of micro and macro changes associated with a being during selected time intervals. For example, the system 100 may transmit a signal to the camera 120 to record and/or stream video content or other media content of a being for a selected time period, such as from 5:00 pm-6:00 pm, or a fixed time period of 1 hour. Additionally, the systems and methods may compare micro and macro changes detected during certain time intervals to micro and macro changes detected during other time intervals. Such comparisons may assist in determining whether an anomaly exists and/or whether a certain diagnosis is accurate. In certain embodiments, the cameras 120 may be configured to be placed in any location where a being may be located. Media content obtained from one or more cameras 120 may be aggregated and stored in the system 100. The aggregated media content may be utilized to detect pandemics associated with a group of people being monitored, confirm health trends associated with a group of people being monitored, detect anomalies associated with a group of people being monitored, determine diagnoses associated with a group of people being monitored, or a combination thereof. In certain embodiments, media content obtained for a being may be compared to the aggregated media content for a certain population of beings so as to detect the foregoing as well.
- The systems and methods may also include detecting anomalies based on a comparison with a corpus of known "normal" or healthy conditions. The corpus may include the monitored being's healthy conditions, a selected population's healthy conditions, or a combination thereof. The anomalies and/or diagnoses may be confirmed based on a comparison against images and outputs generated by other technologies. For example, when evaluated against certain models, probabilistic results, obtained media content, and/or other information generated in the system 100 may be coupled with outputs generated by X-ray machines, CT machines, MRI machines, PET machines, thermal imaging machines, infrared devices, any other technologies, or a combination thereof, to detect anomalies and/or determine diagnoses. The systems and methods may utilize inputs obtained from a traditional medical practitioner's office, such as an office outfitted with cameras 120, high-resolution cameras, and second user devices 111.
- In certain embodiments, the system 100 may have multiple modes of operation. As a first mode, the system 100 may have an "always on" passive monitoring mode. In such a mode, the cameras 120 may continuously capture media content, and the system 100 may be utilized to monitor office employees to conduct a repetitive stress analysis in an office or to monitor elderly individuals who may benefit from continuous monitoring. During the "always on" mode, information and media content obtained may be aggregated for many people in a feed (e.g., for the public at large or an entire workforce) so as to preempt a pandemic spread of disease or to prevent other conditions from spreading. The system 100 may also have a second mode, which may be an "intentional" scan mode. During "intentional scan" mode, the system 100 may be specifically activated from a sleep state to record and/or stream media content for specific areas and/or beings. The "intentional scan" mode may be particularly beneficial, for example, while a patient visits a doctor at the doctor's office for a medical appointment. The system 100 may further include a "periodic scan" mode, which may involve having the system 100 obtain media content and information for selected areas and beings during periodic time intervals. This may be helpful to determine trends during particular times of the day, to determine when certain conditions occur, or a combination thereof.
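The three modes described above can be reduced to a small recording policy. The mode names match the description; the minute-based clock, the 5:00 pm-6:00 pm default window, and the function signature are illustrative assumptions rather than anything specified in the patent.

```python
# Hypothetical recording policy for the system 100's modes of operation.
# `now_minutes` is minutes since midnight; `window` is a (start, end) pair.

def should_record(mode, now_minutes, scan_requested=False,
                  window=(17 * 60, 18 * 60)):
    if mode == "always_on":
        return True                      # passive monitoring never sleeps
    if mode == "intentional":
        return scan_requested            # woken from a sleep state on request
    if mode == "periodic":
        start, end = window              # e.g., the 5:00 pm-6:00 pm interval
        return start <= now_minutes < end
    raise ValueError(f"unknown mode: {mode}")
```

A scheduler could call this once per frame or once per interval to decide whether the cameras 120 stream content at that moment.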
- In certain embodiments, the systems and methods may include utilizing user feedback when detecting anomalies and/or making diagnoses. The system 100 may transmit a signal to a device of a being, such as the first user device 102 of the first user 101, that requests the being to provide feedback regarding the media content recorded for the being, regarding the being's symptoms, regarding what foods the being ate, regarding any type of information, or any combination thereof. The feedback from the being may be input via a graphical user interface on a device of the being, and may be transmitted to the system 100 for further analysis. The feedback may be utilized to confirm whether an anomaly exists, adjust a determined diagnosis, confirm an area on the being's body for diagnosis, supplement records associated with the being, supplement detected symptoms for the being, or a combination thereof. System feedback may be utilized to echo symptoms of a region on the being's body that is suspected to be injured and/or infected.
- In further embodiments, the systems and methods may include obtaining the media content and information using the cameras 120 and transmitting the obtained media content and information over the communications network 135. The systems and methods may allow for a distributed, network-based analysis of video and other media content feeds obtained from a variety of cameras 120.
- The systems and methods may also institute various triggers based on the media content and information obtained in the system 100. For example, if a certain anomaly and/or diagnosis is detected and/or determined, the systems and methods may automatically transmit alerts or transmit instructions indicating that certain medication should be provided to a particular being. Alerts may also be sent to emergency personnel or even certain government institutions advising of a pandemic, advising of a certain condition, or a combination thereof. In certain embodiments, the systems and methods may include utilizing one or more cameras 120 in combination, such that one camera 120 captures media content of a being while other cameras 120 capture additional media content from other vantage points. The system 100 may adjust the resolutions, capture rates, sampling rates, and capabilities of each individual camera 120 as needed.
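Coordinating several cameras 120 can be modeled by retargeting and reconfiguring the remaining cameras when one of them flags an anomaly. The Camera class, its fields, and the chosen resolution and frame-rate floors are all invented for illustration; the patent names the adjustable properties but not a coordination scheme.

```python
# Hypothetical multi-camera coordination: when the camera `reporting_id`
# flags an anomaly in `region`, point every other camera at that region
# and raise its resolution and capture rate to at least a chosen floor.

class Camera:
    def __init__(self, cam_id, resolution=720, fps=30):
        self.cam_id = cam_id
        self.resolution = resolution   # vertical pixels
        self.fps = fps                 # capture rate, frames per second
        self.target = None             # region of interest, if any

def retarget(cameras, reporting_id, region):
    for cam in cameras:
        if cam.cam_id == reporting_id:
            continue                   # the reporting camera keeps its view
        cam.target = region
        cam.resolution = max(cam.resolution, 1080)
        cam.fps = max(cam.fps, 60)
    return cameras
```

Using `max` means cameras already configured above the floor keep their higher settings rather than being downgraded.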
- In certain embodiments, the cameras 120, the second user devices 111, and/or other devices of the system 100 may be utilized to preprocess the media content prior to streaming the media content to the communications network 135 for the detection of anomalies, the determination of diagnoses, and/or the storage of the media content and information. In certain embodiments, as the media content and information obtained from the cameras 120 are transmitted to the system 100 for further analysis, features may be streamed to a model-based comparison system that looks for both local anomalies (e.g., if the being's identity is known and a history for the being is present) or anomalies that are found based on a comparison with information for a general population of beings.
- In various embodiments, copies of media content and/or any information in the system 100 that is associated with a being may be anonymized (with personally identifiable information removed, if known) and stored securely in a cloud-based service, such as a service provided by the communications network 135. This may ensure that patient privacy is preserved and that the identity of the being is concealed. In certain embodiments, the being may be allowed to interact with the system in a "self-service usage mode." The "self-service usage mode" may utilize additional filtering and distillation of the media content and information associated with the being. The additional filtering and distillation of the media content and information may be utilized to link with ancillary information sources (e.g., patient records, image data provided by MRI machines, CT machines, or other machines, general population data, etc.), connect and schedule medical appointments for the being, schedule follow-up appointments, schedule medical procedures, connect the being with a pharmacy, or any combination thereof.
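The anonymization step can be sketched as stripping direct identifiers and replacing the identity with a salted one-way hash, so that records for the same person can still be linked without revealing who the person is. The field names and token scheme are hypothetical; a real deployment would follow an established de-identification standard rather than this toy.

```python
# Hypothetical anonymization before cloud storage: drop identifying fields
# and substitute a salted SHA-256 token for the person's identity.

import hashlib

IDENTIFYING_FIELDS = {"name", "address", "phone"}

def anonymize(record, salt):
    out = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    token = hashlib.sha256((salt + record["name"]).encode()).hexdigest()[:16]
    out["subject_token"] = token       # stable link between a person's records
    return out
```

Because the token is deterministic for a given salt, follow-up media for the same person maps to the same stored subject, supporting the history comparisons described earlier without exposing the name.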
- In further embodiments, the systems and methods may include coupling the functionality and operations of the system 100 with information associated with various medications and drugs that induce a response that the system 100 may detect. For example, a certain drug may cause a skin reaction, which may be detectable by the cameras 120 of the system 100 and be utilized to detect anomalies, update the monitored being's records, and/or confirm diagnoses. As another example, a trace chemical may cause micro-movements around a bone fracture that may also be detectable by the cameras 120 of the system 100. The systems and methods may be extended to determine anomalies and diagnoses for a large crowd or the public at large to immediately determine health trends, needs, and conditions. The systems and methods may allow for the distributed analysis of medical content in the cloud. Additionally, the systems and methods may be combined with the functionality of a medical robot or electronic doctor that may visit or remotely communicate with a person. On-site care may be provided to the person based on the functionality provided by the systems and methods. Furthermore, the functionality and features of the systems and methods may also be combined with robotic surgical devices and may facilitate surgical procedures and the diagnoses of certain medical conditions.
- Referring now also to FIG. 3, at least a portion of the methodologies and techniques described with respect to the exemplary embodiments of the system 100 can incorporate a machine, such as, but not limited to, computer system 300, or other computing device within which a set of instructions, when executed, may cause the machine to perform any one or more of the methodologies or functions discussed above. The machine may be configured to facilitate various operations conducted by the system 100. For example, the machine may be configured to, but is not limited to, assist the system 100 by providing processing power to assist with processing loads experienced in the system 100, by providing storage capacity for storing instructions or data traversing the system 100, or by assisting with any other operations conducted by or within the system 100.
- In some embodiments, the machine may operate as a standalone device. In some embodiments, the machine may be connected (e.g., using communications network 135, another network, or a combination thereof) to and assist with operations performed by other machines and systems, such as, but not limited to, the first user device 102, the second user device 111, the third user device 116, the camera 120, the device 125, the server 140, the server 145, the database 155, the server 160, or any combination thereof. The machine may be connected with any component in the system 100. In a networked deployment, the machine may operate in the capacity of a server or a client user machine in a server-client user network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may comprise a server computer, a client user computer, a personal computer (PC), a tablet PC, a laptop computer, a desktop computer, a control system, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
- The computer system 300 may include a processor 302 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 304 and a static memory 306, which communicate with each other via a bus 308. The computer system 300 may further include a video display unit 310, which may be, but is not limited to, a liquid crystal display (LCD), a flat panel, a solid state display, or a cathode ray tube (CRT). The computer system 300 may include an input device 312, such as, but not limited to, a keyboard, a cursor control device 314, such as, but not limited to, a mouse, a disk drive unit 316, a signal generation device 318, such as, but not limited to, a speaker or remote control, and a network interface device 320.
- The disk drive unit 316 may include a machine-readable medium 322 on which is stored one or more sets of instructions 324, such as, but not limited to, software embodying any one or more of the methodologies or functions described herein, including those methods illustrated above.
- The instructions 324 may also reside, completely or at least partially, within the main memory 304, the static memory 306, or within the processor 302, or a combination thereof, during execution thereof by the computer system 300. The main memory 304 and the processor 302 also may constitute machine-readable media. - Dedicated hardware implementations including, but not limited to, application specific integrated circuits, programmable logic arrays, and other hardware devices can likewise be constructed to implement the methods described herein. Applications that may include the apparatus and systems of various embodiments broadly include a variety of electronic and computer systems. Some embodiments implement functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an application-specific integrated circuit. Thus, the example system is applicable to software, firmware, and hardware implementations.
- In accordance with various embodiments of the present disclosure, the methods described herein are intended for operation as software programs running on a computer processor. Furthermore, software implementations can include, but are not limited to, distributed processing, component/object distributed processing, parallel processing, and virtual machine processing, which can also be constructed to implement the methods described herein.
- The present disclosure contemplates a machine-readable medium 322 containing instructions 324 so that a device connected to the communications network 135, another network, or a combination thereof, can send or receive voice, video or data, and communicate over the communications network 135, another network, or a combination thereof, using the instructions. The instructions 324 may further be transmitted or received over the communications network 135, another network, or a combination thereof, via the network interface device 320. - While the machine-readable medium 322 is shown in an example embodiment to be a single medium, the term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-readable medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. - The terms "machine-readable medium," "machine-readable device," or "computer-readable device" shall accordingly be taken to include, but not be limited to: memory devices; solid-state memories such as a memory card or other package that houses one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories; magneto-optical or optical media such as a disk or tape; or another self-contained information archive or set of archives considered a distribution medium equivalent to a tangible storage medium. The "machine-readable medium," "machine-readable device," or "computer-readable device" may be non-transitory, and, in certain embodiments, may not include a wave or signal per se. Accordingly, the disclosure is considered to include any one or more of a machine-readable medium or a distribution medium, as listed herein and including art-recognized equivalents and successor media, in which the software implementations herein are stored.
- The illustrations of arrangements described herein are intended to provide a general understanding of the structure of various embodiments, and they are not intended to serve as a complete description of all the elements and features of apparatus and systems that might make use of the structures described herein. Other arrangements may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Figures are also merely representational and may not be drawn to scale. Certain proportions thereof may be exaggerated, while others may be minimized. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
- Thus, although specific arrangements have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific arrangement shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments and arrangements of the various embodiments. Combinations of the above arrangements, and other arrangements not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. Therefore, it is intended that the disclosure not be limited to the particular arrangement(s) disclosed as the best mode contemplated for carrying out the various embodiments, but that the various embodiments will include all embodiments and arrangements falling within the scope of the appended claims.
- The foregoing is provided for purposes of illustrating, explaining, and describing the various embodiments. Modifications and adaptations to these embodiments will be apparent to those skilled in the art and may be made without departing from the scope or spirit of the various embodiments. Upon reviewing the aforementioned embodiments, it would be evident to an artisan with ordinary skill in the art that said embodiments can be modified, reduced, or enhanced without departing from the scope and spirit of the claims described below.
Claims (20)
1. A method, comprising:
detecting, by a system comprising a processor, an anomaly associated with a being based on a physiological condition associated with the being identified in first media representative of first motion of the being from a first camera;
transmitting, by the system to a device associated with the being, a proposed action to be performed in relation to the being based on the anomaly;
determining, by the system, based on information obtained in response to performance of the proposed action, a diagnosis associated with the being; and
confirming, by the system, the diagnosis based on second media representative of second motion of the being from a second camera, wherein the second motion comprises a movement of the being that the first camera is unable to detect.
2. The method of claim 1 , wherein the physiological condition is a first physiological condition, and wherein the confirming comprises analyzing the second media to detect a second physiological condition associated with the being.
3. The method of claim 1 , wherein the proposed action to be performed in relation to the being comprises the proposed action to be performed by the being.
4. The method of claim 1 , wherein the proposed action to be performed in relation to the being comprises the proposed action to be performed by a physician treating the being.
5. The method of claim 1 , wherein the proposed action to be performed in relation to the being comprises adjusting a body part of the being.
6. The method of claim 1 , wherein the proposed action to be performed in relation to the being comprises performing a motion with a body part of the being.
7. The method of claim 1 , wherein the confirming comprises adjusting the diagnosis in response to determining that the diagnosis is not confirmed.
8. A system, comprising:
a processor; and
a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations, comprising:
identifying an anomaly associated with a person based on a physiological movement associated with the person identified in first media that has captured first motion of the person by a first camera;
transmitting, to a device associated with the person, an action to be performed in relation to the person based on the anomaly;
determining, based on information obtained in response to the action being performed, a diagnosis associated with the person; and
confirming the diagnosis based on second media that has captured second motion of the person by a second camera, wherein the second motion captured by the second camera comprises a movement of the person that the first camera is unable to detect.
9. The system of claim 8 , wherein the physiological movement is a first physiological movement, and wherein the confirming comprises analyzing the second media to detect a second physiological movement associated with the person.
10. The system of claim 8 , wherein the action to be performed in relation to the person comprises the action to be performed by the person.
11. The system of claim 8 , wherein the action to be performed in relation to the person comprises the action to be performed by a medical professional treating the person.
12. The system of claim 8 , wherein the action to be performed in relation to the person comprises adjusting a limb of the person.
13. The system of claim 8 , wherein the action to be performed in relation to the person comprises performing a motion with a limb of the person.
14. The system of claim 8 , wherein the confirming comprises adjusting the diagnosis in response to determining that the diagnosis is not confirmed.
15. A non-transitory machine-readable medium, comprising executable instructions that, when executed by a processor, facilitate performance of operations, comprising:
determining an anomaly associated with a being based on a physiological change associated with the being identified in first media capturing first motion of the being via a first camera;
transmitting, to a device associated with the being, an action to be performed in relation to the being based on the anomaly;
determining, based on information obtained in response to the action being performed, a diagnosis associated with the being; and
confirming the diagnosis based on second media capturing second motion of the being via a second camera, wherein the second motion captured via the second camera comprises a movement of the being that the first camera is unable to detect.
16. The non-transitory machine-readable medium of claim 15 , wherein the physiological change is a first physiological change, and wherein the confirming comprises analyzing the second media to detect a second physiological change associated with the being.
17. The non-transitory machine-readable medium of claim 15 , wherein the action to be performed in relation to the being comprises the action to be performed by the being.
18. The non-transitory machine-readable medium of claim 15 , wherein the action to be performed in relation to the being comprises the action to be performed by a medical robot treating the being.
19. The non-transitory machine-readable medium of claim 15 , wherein the action to be performed in relation to the being comprises adjusting an eye of the being.
20. The non-transitory machine-readable medium of claim 15 , wherein the action to be performed in relation to the being comprises performing a motion with an eye of the being.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/995,894 US20200375467A1 (en) | 2015-10-16 | 2020-08-18 | Telemedicine application of video analysis and motion augmentation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/885,746 US10779733B2 (en) | 2015-10-16 | 2015-10-16 | Telemedicine application of video analysis and motion augmentation |
US16/995,894 US20200375467A1 (en) | 2015-10-16 | 2020-08-18 | Telemedicine application of video analysis and motion augmentation |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/885,746 Continuation US10779733B2 (en) | 2015-10-16 | 2015-10-16 | Telemedicine application of video analysis and motion augmentation |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200375467A1 true US20200375467A1 (en) | 2020-12-03 |
Family
ID=58523343
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/885,746 Active 2039-03-17 US10779733B2 (en) | 2015-10-16 | 2015-10-16 | Telemedicine application of video analysis and motion augmentation |
US16/995,894 Abandoned US20200375467A1 (en) | 2015-10-16 | 2020-08-18 | Telemedicine application of video analysis and motion augmentation |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/885,746 Active 2039-03-17 US10779733B2 (en) | 2015-10-16 | 2015-10-16 | Telemedicine application of video analysis and motion augmentation |
Country Status (1)
Country | Link |
---|---|
US (2) | US10779733B2 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109310340A (en) * | 2016-04-19 | 2019-02-05 | Mc10股份有限公司 | Method and system for measuring sweat |
WO2018049308A1 (en) * | 2016-09-09 | 2018-03-15 | Fanuc America Corporation | Program and variable change analysis |
EP3586725A1 (en) * | 2018-06-28 | 2020-01-01 | Koninklijke Philips N.V. | Blood pressure measurement analysis method and system |
US11557387B2 (en) * | 2019-05-02 | 2023-01-17 | Lg Electronics Inc. | Artificial intelligence robot and method of controlling the same |
US11580628B2 (en) * | 2019-06-19 | 2023-02-14 | Deere & Company | Apparatus and methods for augmented reality vehicle condition inspection |
US11045271B1 (en) * | 2021-02-09 | 2021-06-29 | Bao Q Tran | Robotic medical system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140153794A1 (en) * | 2011-01-25 | 2014-06-05 | John Varaklis | Systems and methods for medical use of motion imaging and capture |
US20150094823A1 (en) * | 2013-10-02 | 2015-04-02 | Samsung Electronics Co., Ltd. | Walking assistance devices and methods of controlling the same |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5441047A (en) | 1992-03-25 | 1995-08-15 | David; Daniel | Ambulatory patient health monitoring techniques utilizing interactive visual communication |
US5701904A (en) | 1996-01-11 | 1997-12-30 | Krug International | Telemedicine instrumentation pack |
DE69731901T2 (en) | 1996-09-19 | 2005-12-22 | Ortivus Ab | PORTABLE TELEMEDICAL DEVICE |
WO1999004043A1 (en) | 1997-07-14 | 1999-01-28 | Abbott Laboratories | Telemedicine |
WO1999027842A1 (en) | 1997-12-04 | 1999-06-10 | Virtual-Eye.Com | Visual field testing via telemedicine |
US6697103B1 (en) | 1998-03-19 | 2004-02-24 | Dennis Sunga Fernandez | Integrated network for monitoring remote objects |
US20030181790A1 (en) | 2000-05-18 | 2003-09-25 | Daniel David | Methods and apparatus for facilitated, hierarchical medical diagnosis and symptom coding and definition |
US20050149364A1 (en) | 2000-10-06 | 2005-07-07 | Ombrellaro Mark P. | Multifunction telemedicine software with integrated electronic medical record |
WO2003102851A1 (en) | 2002-05-31 | 2003-12-11 | The Texas A & M University System | Communicating medical information in a communication network |
GB2393356B (en) | 2002-09-18 | 2006-02-01 | E San Ltd | Telemedicine system |
US9471751B1 (en) * | 2006-03-03 | 2016-10-18 | Dp Technologies, Inc. | Telemedicine system for preliminary remote diagnosis of a patient |
US20110034209A1 (en) * | 2007-06-18 | 2011-02-10 | Boris Rubinsky | Wireless technology as a data conduit in three-dimensional ultrasonogray |
EP2207479A1 (en) | 2007-10-22 | 2010-07-21 | Idoc24 Ab | Telemedicine care |
TWI377046B (en) | 2007-10-31 | 2012-11-21 | Netown Corp | A telemedicine device and system |
DK200800018U3 (en) | 2008-01-31 | 2008-03-28 | Gits As | Patient case for remote communication |
DE102008054442A1 (en) | 2008-12-10 | 2010-06-17 | Robert Bosch Gmbh | Procedures for remote diagnostic monitoring and support of patients as well as facility and telemedicine center |
WO2010127216A2 (en) * | 2009-05-01 | 2010-11-04 | Telcodia Technologies, Inc. | Automated determination of quasi-identifiers using program analysis |
US20130246097A1 (en) * | 2010-03-17 | 2013-09-19 | Howard M. Kenney | Medical Information Systems and Medical Data Processing Methods |
US20110267418A1 (en) | 2010-04-30 | 2011-11-03 | Alejandro Javier Patron Galindo | Telemedicine system |
US9098611B2 (en) | 2012-11-26 | 2015-08-04 | Intouch Technologies, Inc. | Enhanced video interaction for a user interface of a telepresence network |
US20130018672A1 (en) | 2011-07-15 | 2013-01-17 | David Wong | Method And Apparatus For Providing Telemedicine Services |
US11133096B2 (en) | 2011-08-08 | 2021-09-28 | Smith & Nephew, Inc. | Method for non-invasive motion tracking to augment patient administered physical rehabilitation |
US8891841B2 (en) * | 2012-06-04 | 2014-11-18 | Verizon Patent And Licensing Inc. | Mobile dermatology collection and analysis system |
US9092556B2 (en) | 2013-03-15 | 2015-07-28 | eagleyemed, Inc. | Multi-site data sharing platform |
JP6303297B2 (en) * | 2013-06-14 | 2018-04-04 | 富士通株式会社 | Terminal device, gaze detection program, and gaze detection method |
US20150065812A1 (en) | 2013-09-02 | 2015-03-05 | Ebm Technologies Incorported | Telemedicine information system, monitoring method and computer-accessible storage medium |
WO2015060897A1 (en) | 2013-10-22 | 2015-04-30 | Eyenuk, Inc. | Systems and methods for automated analysis of retinal images |
US20150120312A1 (en) | 2013-10-31 | 2015-04-30 | Elwha Llc | Telemedicine device with usage monitoring |
US9208243B2 (en) * | 2014-01-07 | 2015-12-08 | Google Inc. | Systems and methods for processing machine readable codes by a locked device |
- 2015-10-16: US US14/885,746 patent/US10779733B2/en, status Active
- 2020-08-18: US US16/995,894 patent/US20200375467A1/en, status Abandoned
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140153794A1 (en) * | 2011-01-25 | 2014-06-05 | John Varaklis | Systems and methods for medical use of motion imaging and capture |
US20150094823A1 (en) * | 2013-10-02 | 2015-04-02 | Samsung Electronics Co., Ltd. | Walking assistance devices and methods of controlling the same |
Also Published As
Publication number | Publication date |
---|---|
US10779733B2 (en) | 2020-09-22 |
US20170105621A1 (en) | 2017-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200375467A1 (en) | Telemedicine application of video analysis and motion augmentation | |
Farahani et al. | Healthcare iot | |
Mukati et al. | Healthcare assistance to COVID-19 patient using internet of things (IoT) enabled technologies | |
US11515030B2 (en) | System and method for artificial agent based cognitive operating rooms | |
US10117617B2 (en) | Automated systems and methods for skin assessment and early detection of a latent pathogenic bio-signal anomaly | |
Poppas et al. | Telehealth is having a moment: will it last? | |
JP2020072958A (en) | System, method and computer program product for physiological monitoring | |
US20170011196A1 (en) | System and Method of Tracking Mobile Healthcare Worker Personnel In A Telemedicine System | |
US11205306B2 (en) | Augmented reality medical diagnostic projection | |
Sample et al. | Short-wavelength automated perimetry and motion automated perimetry in patients with glaucoma | |
US20160364549A1 (en) | System and method for patient behavior and health monitoring | |
US20050256392A1 (en) | Systems and methods for remote body imaging evaluation | |
Osipov et al. | Impact of digital technologies on the efficiency of healthcare delivery | |
Shem et al. | Getting started: mechanisms of telerehabilitation | |
Huang et al. | Challenges and prospects of visual contactless physiological monitoring in clinical study | |
US20200371738A1 (en) | Virtual and augmented reality telecommunication platforms | |
US20080205589A1 (en) | Clinical workflow for visualization and management of catheter intervention | |
US10512401B2 (en) | Methods and systems for facilitating medical care | |
Narang et al. | Impact of Industry 4.0 and Healthcare 4.0 for controlling the various challenges related to healthcare industries | |
Israni et al. | Human‐Machine Interaction in Leveraging the Concept of Telemedicine | |
US20240029888A1 (en) | Generating and traversing data structures for automated classification | |
WO2023053267A1 (en) | Information processing system, information processing device, information processing method, and non-transitory computer-readable medium having program stored therein | |
US20050256379A1 (en) | Systems and methods for remote touch-dependent evaluation | |
WO2023053260A1 (en) | Information processing system, information processing device, information processing method, and non-transitory computer readable medium having program stored thereon | |
WO2022270584A1 (en) | Information processing system, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: AT&T INTELLECTUAL PROPERTY I, L.P., GEORGIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PRATT, JAMES H.;JACKSON, JAMES E.;ZAVESKY, ERIC;SIGNING DATES FROM 20151013 TO 20151021;REEL/FRAME:053520/0327
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |