CN112828911B - Medical accompanying robot system based on Internet of things - Google Patents


Info

Publication number
CN112828911B
CN112828911B (application CN202110158636.9A)
Authority
CN
China
Prior art keywords
robot
accompanying
server
accompanying object
medical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110158636.9A
Other languages
Chinese (zh)
Other versions
CN112828911A (en)
Inventor
陈森德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jinxunyu Technology Co ltd
Original Assignee
Shenzhen Jinxunyu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jinxunyu Technology Co ltd filed Critical Shenzhen Jinxunyu Technology Co ltd
Priority to CN202110158636.9A priority Critical patent/CN112828911B/en
Publication of CN112828911A publication Critical patent/CN112828911A/en
Application granted granted Critical
Publication of CN112828911B publication Critical patent/CN112828911B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1118Determining activity level
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/162Testing reaction times
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety

Abstract

The invention provides a medical accompanying robot system based on the Internet of Things, comprising a robot system and a medical institution system. The robot system comprises a robot, a router, a local area network and a robot server. The robot converses with an accompanying object; the sound of the accompanying object is digitized by an AI system and converted into text format through a voice recognition/conversation database. The medical institution system comprises a management server, an electronic medical record server and a terminal; the management server comprises an AI server, a medical dictionary database and an analysis database. The AI server is connected with the AI system in the robot server, analyzes keywords in the text format through a deep learning function so as to update the analysis database, and at the same time reads the content of the database of the electronic medical record server and outputs analysis data about the conversation between the accompanying object and the robot.

Description

Medical accompanying robot system based on Internet of things
Technical Field
The invention relates to a robot system, in particular to a medical accompanying robot system based on the Internet of things.
Background
Nursing science is a comprehensive applied science that studies how to maintain, promote and restore human health, drawing on nursing theory, knowledge, skills and their laws of development, and grounded in natural and social science. As medical systems steadily improve and develop, nursing staff must repeat many simple, low-skill operations every day, such as delivering medicine to each accompanying object in the department one by one, or going from ward to ward to observe conditions and record nursing notes. Under this trend, medical accompanying robots will see increasingly wide application. Intelligent medical robots exist in the prior art, but most cannot meet the demands of accompaniment: some robots provide early warning, for example, but they collect too little information and their warning content is too narrow to support quick and efficient diagnosis for the user, which can delay treatment.
Disclosure of Invention
In view of the above technical problems in the prior art, the invention provides a medical accompanying robot system based on the Internet of Things, comprising a robot system and a medical institution system;
the robot system comprises a robot, a router, a local area network and a robot server;
the robot server comprises an AI system, a voice recognition/conversation database, a storage module and a health test module; the robot converses with an accompanying object, the conversation is sent to a robot server through a router and a local area network, the sound of the accompanying object is digitalized through an AI system, and the sound is converted into a text format through a voice recognition/conversation database;
the medical institution system comprises a management server, an electronic medical record server and a terminal; the management server comprises an AI server, a medical dictionary database and an analysis database; the AI server is connected with an AI system in the robot server, analyzes the keywords in the text format through a deep learning function, thereby updating and downloading an analysis database, simultaneously identifies the content of the database of the electronic medical record server, and outputs analysis data about the conversation condition between the accompanying object and the robot;
the robot comprises a display system and a video system, wherein the display system is provided with a recognition device for recognizing an accompanying object and an image display device for displaying an image, the recognition device and the image display device are both connected with a control device, the image display device comprises an image display and a projector, the image display device is configured to move along with the robot, and the projector takes a patient moving area as a projection position.
Further, the recognition device comprises a first recognition device and a second recognition device; the first recognition device recognizes the shadow area occluded by the accompanying object, and the second recognition device senses feature points corresponding to the joints of the accompanying object and recognizes the orientation of the human body. When the image display device has moved to a plurality of positions and the recognition positions of the first and second recognition devices coincide, the video system is awakened to monitor the condition of the accompanying object's activity area when the accompanying object is judged to be in bed or abnormal.
Further, the second recognition device abstracts the human skeleton of the accompanying object into a stick-figure image, detects the feature points of each joint of the human body and the lines connecting them, and thereby judges the inclination direction of the head or body.
Further, the health test module comprises an application program, a user data mapping unit and an analysis unit; the accompanying object interacts with the application program in the health test module, yielding data on the training situation.
Further, the user data mapping unit comprises a reaction test function set, a finger test function set, a psychological test function set and a neurological test function set. The reaction test function set tests the reaction capability of the accompanying object through a mini-game, the finger test function set tests the finger control capability of the accompanying object, and the psychological test function set tests the vigilance or attention of the accompanying object. The psychological test function set combines its results with those of the reaction test function set or the finger test function set, which are mapped many-to-one to the analysis unit, and the analysis unit gives an analysis result according to the accompanying object's case record and the multiple items of test data.
Further, the neurological test function set tests eye movement data and facial movement data of the accompanying object; the measured eye and facial movement data are mapped to the analysis unit through the user data mapping unit, and the analysis unit gives an analysis result according to the accompanying object's case record together with the eye and facial movement data.
Further, after the robot is assigned to an accompanying object, the robot determines from the pre-entered personal data whether this is the accompanying object's first accompaniment; if not, the robot server is notified to retrieve the data from the previous accompaniment, and if so, the robot server is notified to establish a new data record.
Drawings
Fig. 1 is a schematic view of a robot system in a medical accompanying robot system of the present invention;
fig. 2 is a schematic view of a medical institution system in the medical accompanying robot system of the present invention;
fig. 3 is a schematic view of a robot system in a preferred embodiment in the medical accompanying robot system of the present invention;
FIG. 4 is a schematic diagram of a display system and video system of the robot of the present invention;
Detailed Description
The medical accompanying robot system based on the Internet of Things comprises a robot system 10 and a medical institution system 20. Referring to fig. 1, the robot system 10 includes a robot 1, a router 3, a local area network 5 and a robot server 11. The robot 1 has an AI function and can hold a conversation with the accompanying object 7; it includes a microphone to pick up the voice of the accompanying object 7 and a speaker through which to speak to the accompanying object 7. The robot server 11 includes an AI system 13 and a voice recognition/conversation database 17, and the robot 1 communicates with the accompanying object 7 using these: the conversation is transmitted to the robot server via the router and the local area network, and the voice of the accompanying object is digitized by the AI system and converted into text format through the voice recognition/conversation database.
A robot 1 may be provided for each accompanying object in a hospital; it is also possible to install a plurality of robots N in the hospital and connect them to the robot server 11 through the router 3 and the local area network 5.
Referring to fig. 2, the medical institution system 20 includes a management server 22, an electronic medical record server 21, and a terminal 23 used by an operator. The management server 22 includes an AI server 221, a medical dictionary database 222, and an analysis database 223, and the electronic medical record server 21 includes a database 211 capable of storing and reading the accompanying object's diagnosis, treatment status, medication information and the like.
The AI server 221 is connected to the AI system 13 in the robot server 11 and communicates with the accompanying object 7 as well as doctors and nurses. The AI server 221 analyzes keywords in the text format through its deep learning function so as to update the analysis database 223, reads the contents of the database 211 of the electronic medical record server 21, and outputs analysis data on the state of the conversation between the accompanying object and the robot.
After the robot is assigned to an accompanying object, the robot determines from the pre-entered personal data whether this is the accompanying object's first accompaniment; if not, the robot server 11 is notified to retrieve the data from the previous accompaniment, and if so, the robot server 11 is notified to establish a new data record.
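The first-accompaniment decision above can be sketched as follows. The data model here (`RobotServer`, its `records` dictionary, and the profile fields) is hypothetical and merely stands in for the patent's robot server 11:

```python
# Illustrative sketch: on assignment, the robot checks pre-entered personal
# data to decide between loading the previous accompaniment record and
# creating a new one.

class RobotServer:
    def __init__(self):
        self.records = {}  # patient_id -> list of accompaniment records

    def fetch_previous(self, patient_id):
        # Retrieve the most recent accompaniment record.
        return self.records[patient_id][-1]

    def create_record(self, patient_id, profile):
        # Establish newly entered data for a first-time accompanying object.
        record = {"profile": profile, "dialogue_log": []}
        self.records.setdefault(patient_id, []).append(record)
        return record


def assign_robot(server, patient_id, profile):
    """Return the accompaniment record the robot should work from."""
    if server.records.get(patient_id):
        # Not a first accompaniment: call up the previous data.
        return server.fetch_previous(patient_id)
    # First accompaniment: create a fresh record.
    return server.create_record(patient_id, profile)
```

A second assignment for the same patient then returns the stored record instead of creating a duplicate.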
The robot server 11 then needs profiles of the persons related to the accompanying object to be entered into the robot 1; an operator can manually enter and save these profiles from the terminal 23, and enter medical information related to the accompanying object into the voice recognition/conversation database 17 of the robot server 11.
In a preferred embodiment, referring to fig. 3, the robot server 11 further includes a storage module 15 and a health test module 19, and the medical information is stored in the storage module 15 of the robot server 11 through the management server 22. The AI server 221 of the management server 22 reads the contents of the database 211 of the electronic medical record server 21, for example a profile of the diagnosis or treatment status of the accompanying object 7. Further, because the AI server 221 of the management server 22 has a deep learning function, the voice recognition/conversation function of the robot server 11 can be trained in advance so that items frequently asked by accompanying objects are answered in real time. For example, when the accompanying object 7 puts a question to the robot 1, the voice of the accompanying object 7 is digitized in the robot 1, protected with SSL, and transmitted to the robot server 11 via the router 3 and the local area network 5. The AI system 13 of the robot server 11 converts the digitized voice of the question into text, extracts the keywords, and searches the medical dictionary database 222 and the analysis database 223 through the AI server 221 of the management server 22. When an answer to the question of the accompanying object 7 is retrieved, the robot server 11 outputs the result to the voice recognition/conversation database 17 of the accompanying robot; when no answer is retrieved, it outputs a message such as "I tried to look it up, but I do not understand" or "Sorry".
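The question-answering flow above can be illustrated with a minimal sketch. The dictionary contents, the substring-based keyword matching, and the fallback string are stand-ins for the AI server's deep-learning retrieval, not the patent's actual implementation:

```python
# Hypothetical sketch of the Q&A flow: digitized speech has already been
# converted to text; keywords are matched against a medical dictionary, and
# a fallback message is returned when no answer is found.

MEDICAL_DICTIONARY = {
    "blood pressure": "Your blood pressure reading from this morning was normal.",
    "medication": "Your next dose is scheduled for 6 pm.",
}
FALLBACK = "Sorry, I tried to look that up but could not find an answer."


def answer_query(question_text: str) -> str:
    """Match known dictionary keywords in the question; fall back otherwise."""
    text = question_text.lower()
    for keyword, answer in MEDICAL_DICTIONARY.items():
        if keyword in text:
            return answer
    return FALLBACK
```

In the real system the lookup would run on the management server's medical dictionary database 222 and analysis database 223 rather than an in-memory dictionary.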
The health test module 19 may contain, for example, an application 191, a user data mapping unit 192 and an analysis unit 193. Data on the accompanying object's training situation are obtained through the accompanying object's interaction with the application 191 of the health test module 19.
The user data mapping unit 192 may include one or more user health test function sets, such as a reaction test function set 196, a finger test function set 197, a psychological test function set 198, and a neurological test function set 199.
The reaction test function set 196 tests the reaction capability of the accompanying object through a mini-game; the measured reaction time data are mapped to the analysis unit 193 through the user data mapping unit 192, and the analysis unit 193 gives an analysis result according to the accompanying object's case record and the reaction time data.
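A reaction-time mini-game of this kind might be sketched as below; the delay range, trial count, and the `respond` callback standing in for the patient's input are illustrative assumptions:

```python
# Sketch of a reaction-time test: each trial waits a random interval,
# presents a stimulus, and times the response; the mean goes to the
# analysis unit.
import random
import time


def reaction_trial(respond):
    """Run one trial and return the measured reaction time in seconds.

    `respond` is a callback standing in for the patient's input device.
    """
    time.sleep(random.uniform(0.01, 0.05))  # shortened delay for illustration
    start = time.perf_counter()  # stimulus shown here
    respond()                    # patient reacts
    return time.perf_counter() - start


def run_test(respond, trials=5):
    """Mean reaction time over several trials (the value mapped onward)."""
    times = [reaction_trial(respond) for _ in range(trials)]
    return sum(times) / len(times)
```

A real deployment would read touch or button events instead of a callback, but the timing structure is the same.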
The finger test function set 197 has the accompanying object perform a game task by moving a mouse, trackball, stylus or the like; the measured finger-manipulation data are mapped to the analysis unit 193 through the user data mapping unit 192, and the analysis unit 193 gives an analysis result according to the accompanying object's case record and the finger-manipulation data.
The psychological test function set 198 tests the vigilance or attention of the accompanying object. It combines its own results with the test results of the reaction test function set 196 or the finger test function set 197 and maps them many-to-one to the analysis unit 193, which gives an analysis result according to the accompanying object's case record and the multiple items of test data.
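The many-to-one mapping into the analysis unit can be sketched as follows; the feature names, thresholds, and the flagging rule are invented for illustration and are not taken from the patent:

```python
# Hedged sketch: several test results feed one analysis input (many-to-one),
# and the analysis unit weighs them against the patient's case record.


def map_to_analysis(psych_score, reaction_ms=None, finger_score=None):
    """Bundle available test results into one feature set for the analysis unit."""
    features = {"psych": psych_score}
    if reaction_ms is not None:
        features["reaction"] = reaction_ms
    if finger_score is not None:
        features["finger"] = finger_score
    return features


def analyze(case_record: dict, features: dict) -> str:
    """Illustrative rule: slow reactions plus low attention flag for review."""
    if features.get("reaction", 0) > 500 and features["psych"] < 0.5:
        return "flag for clinician review"
    return "within expected range"
```

The point of the structure is that the analysis unit sees one consolidated input regardless of which test sets contributed.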
The neurological test function set 199 tests eye movement data and facial movement data of the accompanying object; the measured data are mapped to the analysis unit 193 through the user data mapping unit 192, and the analysis unit 193 gives an analysis result according to the accompanying object's case record together with the eye and facial movement data.
Referring to fig. 4, in a preferred embodiment the robot 1 includes a display system 14 and a video system 12. The display system 14 has a recognition device 141 for recognizing the target accompanying object and an image display device 142 for displaying images, both connected to a control device. The image display device 142 includes an image display 32 and a projector 34; the projector 34 uses the accompanying object's activity area as its display position, and the image display device 142 may be configured to move following the robot.
The projector casts projection light onto the accompanying object's activity area. The recognition device 141 comprises a first recognition device and a second recognition device, which are wirelessly connected; they are mounted separately on the walls of the accompanying object's activity area, and several of each may be provided. The first recognition device recognizes the shadow area occluded by the accompanying object, and the second recognition device senses feature points corresponding to the joints and the like and recognizes the orientation of the human body. When the image display device 142 has moved to a plurality of positions to project onto the target accompanying object and at each of them the recognition positions of the first and second recognition devices coincide, the video system 12 is awakened to monitor the condition of the accompanying object's activity area when the accompanying object is judged to be in bed or abnormal.
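The coincidence check that gates waking the video system might look like the following sketch, assuming each recognition device reports 2-D positions and using a hypothetical tolerance:

```python
# Sketch (hypothetical geometry): wake the video system only when the
# positions reported by the first device (shadow region) and the second
# device (skeleton feature points) agree at every display position visited.


def positions_coincide(p, q, tol=0.1):
    """True when two reported 2-D positions agree within tolerance."""
    return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol


def should_wake_video(shadow_positions, skeleton_positions, tol=0.1):
    """True when the two devices' readings coincide at all positions."""
    return all(
        positions_coincide(p, q, tol)
        for p, q in zip(shadow_positions, skeleton_positions)
    )
```

The coordinate frame, units, and tolerance would come from the calibration of the wall-mounted devices in a real installation.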
Specifically, the second recognition device works as follows: the human skeleton is abstracted into a stick-figure image, and the feature points of each joint of the human body and the lines connecting them are detected. By treating the human skeleton as a stick figure, the positions of the subject's hands, feet, joints and so on can be recognized easily.
Referring to fig. 4, the coordinates of the vertex feature point A and the mandible feature point B are detected. The inclination direction of the head is then identified by calculating the angle between the line connecting the two feature points and a reference plane.
The coordinates of the right-shoulder feature point C and the left-shoulder feature point D are detected in the same way, and the inclination direction of the body is determined from the position of the line connecting feature points C and D.
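The tilt computation for both the head (points A, B) and the body (points C, D) reduces to the angle between a line segment and a reference direction, which can be sketched as follows; the coordinates are hypothetical 2-D image points:

```python
# Sketch of the tilt computation: the inclination is the angle between the
# line joining two skeleton feature points and a reference direction
# (vertical for the head, horizontal for the shoulder line).
import math


def tilt_angle(p1, p2, reference=(0.0, 1.0)):
    """Angle in degrees between the line p1->p2 and a reference direction.

    The default reference is vertical, matching an upright head (A above B).
    """
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    dot = vx * reference[0] + vy * reference[1]
    norm = math.hypot(vx, vy) * math.hypot(*reference)
    return math.degrees(math.acos(dot / norm))


# Head tilt: vertex A relative to mandible B.
upright = tilt_angle((0.0, 0.0), (0.0, 1.0))  # head held straight
leaning = tilt_angle((0.0, 0.0), (1.0, 1.0))  # head leaning to one side
```

For the body, the same function is applied to shoulder points C and D with a horizontal reference, e.g. `tilt_angle(C, D, reference=(1.0, 0.0))`.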
The above describes only a preferred embodiment of the invention, but the scope of the invention is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed by the invention, according to its technical solutions and inventive concept, shall fall within the scope of the invention.

Claims (4)

1. A medical accompanying robot system based on the Internet of Things, characterized by comprising a robot system and a medical institution system;
the robot system comprises a robot, a router, a local area network and a robot server;
the robot server comprises an AI system, a voice recognition/conversation database, a storage module and a health test module; the robot converses with an accompanying object, the conversation is sent to the robot server through the router and the local area network, and the sound of the accompanying object is digitized by the AI system and converted into text format through the voice recognition/conversation database;
the medical institution system comprises a management server, an electronic medical record server and a terminal; the management server comprises an AI server, a medical dictionary database and an analysis database; the AI server is connected with the AI system in the robot server and analyzes keywords in the text format through a deep learning function so as to update the analysis database, while also reading the content of the database of the electronic medical record server and outputting analysis data about the conversation between the accompanying object and the robot;
the robot comprises a display system and a video system; the display system is provided with a recognition device for recognizing the accompanying object and an image display device for displaying images, both connected to a control device; the image display device comprises an image display and a projector, is configured to move along with the robot, and the projector uses the patient's activity area as its projection position;
the health test module comprises an application program, a user data mapping unit and an analysis unit; the accompanying object interacts with the application program in the health test module, yielding data on the training situation; the user data mapping unit comprises a reaction test function set, a finger test function set, a psychological test function set and a neurological test function set; the reaction test function set tests the reaction capability of the accompanying object through a mini-game, the finger test function set tests the finger control capability of the accompanying object, and the psychological test function set tests the vigilance or attention of the accompanying object; the psychological test function set combines its results with the test results of the reaction test function set or the finger test function set, which are mapped many-to-one to the analysis unit, and the analysis unit gives an analysis result according to the accompanying object's medical history and the multiple items of test data; the neurological test function set tests eye movement data and facial movement data of the accompanying object, the measured data are mapped to the analysis unit through the user data mapping unit, and the analysis unit gives an analysis result according to the accompanying object's case record together with the eye and facial movement data.
2. The medical accompanying robot system based on the Internet of Things as claimed in claim 1, wherein the recognition device comprises a first recognition device and a second recognition device; the first recognition device recognizes the shadow area occluded by the accompanying object, and the second recognition device senses feature points corresponding to the joints of the accompanying object and recognizes the orientation of the human body; when the image display device has moved to a plurality of positions and the recognition positions of the first and second recognition devices coincide, the video system is awakened to monitor the condition of the accompanying object's activity area when the accompanying object is judged to be in bed or abnormal.
3. The Internet-of-Things-based medical accompanying robot system as claimed in claim 2, wherein the second recognition device abstracts the human skeleton of the accompanying object into a stick-figure image, detects the feature points of each joint of the human body and the lines connecting them, and thereby judges the inclination direction of the head or body.
4. The medical accompanying robot system based on the Internet of Things as claimed in claim 1, wherein after the robot is assigned to an accompanying object, the robot determines from the pre-entered personal data whether this is the accompanying object's first accompaniment; if not, the robot server is notified to retrieve the data from the previous accompaniment, and if so, the robot server is notified to establish a new data record.
CN202110158636.9A 2021-02-04 2021-02-04 Medical accompanying robot system based on Internet of things Active CN112828911B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110158636.9A CN112828911B (en) 2021-02-04 2021-02-04 Medical accompanying robot system based on Internet of things


Publications (2)

Publication Number Publication Date
CN112828911A (2021-05-25)
CN112828911B (2021-08-31)

Family

ID=75932253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110158636.9A Active CN112828911B (en) 2021-02-04 2021-02-04 Medical accompanying robot system based on Internet of things

Country Status (1)

Country Link
CN (1) CN112828911B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114152283A (en) * 2021-11-24 2022-03-08 山东蓝创网络技术股份有限公司 Family old-care nursing bed service supervision system based on stereoscopic dot matrix technology

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108814907A (en) * 2018-07-05 2018-11-16 广州龙之杰康复养老科技有限公司 A kind of intelligence ambulation training electro-optical photo system
CN109887238A (en) * 2019-03-12 2019-06-14 朱利 A kind of fall detection system and detection alarm method of view-based access control model and artificial intelligence
CN110405789A (en) * 2019-08-01 2019-11-05 昆山市工研院智能制造技术有限公司 Make the rounds of the wards accompany and attend to robot, robot of one kind makes the rounds of the wards system and method for accompanying and attending to
CN111789579A (en) * 2020-07-02 2020-10-20 江苏蔷盛文化传媒有限公司 Healthy robot of thing networking
CN111951620A (en) * 2020-09-02 2020-11-17 李婕 Fast and firm matchmaker application guide teaching method and system
CN112001177A (en) * 2020-08-24 2020-11-27 浪潮云信息技术股份公司 Electronic medical record named entity identification method and system integrating deep learning and rules
CN112022116A (en) * 2020-09-11 2020-12-04 合肥创兆电子科技有限公司 Patient condition nursing monitoring system based on intelligent wearable watch

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8179418B2 (en) * 2008-04-14 2012-05-15 Intouch Technologies, Inc. Robotic based health care system
US20140139616A1 (en) * 2012-01-27 2014-05-22 Intouch Technologies, Inc. Enhanced Diagnostics for a Telepresence Robot


Also Published As

Publication number Publication date
CN112828911A (en) 2021-05-25

Similar Documents

Publication Publication Date Title
US11776669B2 (en) System and method for synthetic interaction with user and devices
US20210186312A1 (en) Systems and methods for semi-automated medical processes
US20220331028A1 (en) System for Capturing Movement Patterns and/or Vital Signs of a Person
US20210251523A1 (en) Biometric identification in medical devices
CN109310317A (en) System and method for automated medicine diagnosis
US11935656B2 (en) Systems and methods for audio medical instrument patient measurements
Lin et al. Toward unobtrusive patient handling activity recognition for injury reduction among at-risk caregivers
US20220044821A1 (en) Systems and methods for diagnosing a stroke condition
CN109313817A (en) System and method for generating medical diagnosis
WO2020112147A1 (en) Method of an interactive health status assessment and system thereof
CN112828911B (en) Medical accompanying robot system based on Internet of things
JP2004157941A (en) Home care system, its server, and toy device for use with home care system
KR20180055234A (en) Voice using the dental care system and methods
CN109310330A (en) System and method for medical device patient measurement
Fiorini et al. User profiling to enhance clinical assessment and human–robot interaction: A feasibility study
US20230018077A1 (en) Medical information processing system, medical information processing method, and storage medium
Sonntag Medical and health systems
KR20210144208A (en) Augmented reality based cognitive rehabilitation training system and method
JP2003339796A (en) Robot for nursing care
KR101861741B1 (en) Apparatus and method for diagnosing patient's disease using pulse
EP4362033A1 (en) Patient consent
CN113811954A (en) Storing and presenting audio and/or visual content of a user
Sonntag Medical and Health
TW202044268A (en) Medical robot and medical record integration system
CN114171146A (en) Intelligent medical monitoring method, system and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant