WO2022027921A1 - Medical robot device, system and method - Google Patents

Medical robot device, system and method

Info

Publication number
WO2022027921A1
WO2022027921A1 · PCT/CN2021/000162
Authority
WO
WIPO (PCT)
Prior art keywords
module
robot
medical
recognition
equipment
Prior art date
Application number
PCT/CN2021/000162
Other languages
English (en)
French (fr)
Inventor
谈斯聪
于皓
于梦非
Original Assignee
谈斯聪
于皓
Priority date
Filing date
Publication date
Application filed by 谈斯聪, 于皓
Priority to AU2021321650A (published as AU2021321650A1)
Publication of WO2022027921A1

Classifications

    • G16H 40/60 — ICT specially adapted for the operation of medical equipment or devices
    • A61B 10/0051 — Devices for taking saliva or sputum samples
    • A61B 10/007 — Devices for taking urine samples
    • A61B 5/0088 — Diagnosis using light, adapted for oral or dental tissue
    • A61B 5/151 — Devices specially adapted for taking samples of capillary blood
    • A61B 5/15109 — Piercing fully automatically triggered, without deliberate action by the user
    • A61B 7/04 — Electric stethoscopes
    • A61B 90/94 — Identification means for patients or instruments, coded with symbols
    • A61M 5/427 — Locating the point where the body is to be pierced, e.g. vein location means
    • G06F 18/24 — Classification techniques (pattern recognition)
    • G06N 3/08 — Neural network learning methods
    • G16H 80/00 — ICT for facilitating communication between medical practitioners or patients
    • G06V 2201/03 — Recognition of patterns in medical or anatomical images

Definitions

  • The present invention belongs to the technical field of artificial-intelligence healthcare robotics, and relates to robotics, intelligent image recognition methods, and intelligent devices and systems.
  • Background: robots currently used in the medical field identify diseases with poor accuracy, cover only limited specialist fields and medical professions, and have difficulty handling complex illnesses.
  • Remote control by administrators, remote joint consultation, joint ward rounds by specialists, and combined-therapy robotic devices require a robotic platform combining robotic theory and practical techniques.
  • Robotic arms are used to autonomously collect oral test samples, blood test samples, and urine and stool test samples, to perform autonomous injection, and to manage and dispense drugs and medical supplies.
  • Machine vision and various intelligent recognition methods assist in identifying disease symptoms and the associated diseases, enabling remote detection, autonomous detection, detection of infectious cases, and intelligent data analysis, thereby effectively preventing the spread of infectious diseases, plagues, and other major diseases.
  • The purpose of the present invention is to overcome the above shortcomings and deficiencies of the prior art and to provide a medical robot device that supports remote consultation, multi-department joint consultation, and remote doctor's orders, addressing poor doctor-patient communication and limited understanding of the disease.
  • Ultrasound image acquisition, intraoral acquisition, blood acquisition, and remote-controlled acquisition and sharing of CT and DR radiology images are implemented by the robot to realize image sharing, reducing errors in manual diagnosis and treatment and overcoming the limitations of a single clinic and the monotony of diagnostic protocols.
  • The present invention also provides an optimized management system for multi-task allocation in outpatient clinics and wards; a multi-user-robot voice-interaction method for real-time collection and sharing of medical pictures; a dispensing management method; and a remote-control and autonomous sample collection and injection management method with three-way matching of medical staff, patient, and robot.
  • The technical solution adopted in the present invention is a medical robot device comprising: a robot main system module, used to realize the main control of the robot; a voice module for interaction with the user; a visual recognition module; a heart sound and lung rale recognition module; a medical scene recognition and radar autonomous-movement real-time mapping module; a blood collection and injection action planning module; and a robotic-arm pick, place, code-scanning, and management action planning control module.
  • A voice module, used to collect the voices of doctors and patients and the scene sounds of outpatient clinics and wards.
  • The voice module also provides voice guidance, voice commands, and voice interaction between the main control system and the user.
  • the visual recognition module is connected to an image acquisition device, and collects and recognizes images.
  • the image acquisition device includes one or more of a general camera, a depth camera, and a binocular camera, but is not limited to the above image acquisition devices.
  • the visual recognition module includes: face recognition, human facial features recognition, human body feature position recognition, medical scene recognition, medical supplies recognition, and drug recognition.
  • Face recognition covers patient users and medical administrators.
  • Facial-features recognition identifies the facial features and their positions and the angle and position of the oral cavity, and is used for nucleic acid detection, biological specimen detection, and other oral testing.
  • Human-body feature position recognition identifies the shoulder, wrist, elbow, and finger joints and their positions; it is used to locate fingers, toe ends, wrists, elbows, and shoulder joints and, under a blood-vessel amplifier (vein finder), the positions of the wrist veins, cubital veins, and the intramuscular injection site near the shoulder, for positioning blood vessels and other key locations.
  • An improved neural network method is applied to recognize the medical scene, covering comprehensive scenes including outpatient clinics, wards, patients, doctors, and the alphanumeric characters of room numbers.
  • Medical supplies recognition covers the blood pressure meter, blood glucose meter, thermometer, stethoscope, heart rate device, breathing device, negative-pressure device, and 24-hour monitoring device in the basic medical equipment area carried by the robot, as well as devices of various other specialties. Supplies and equipment are identified and managed with an improved neural network method based on shape, color, digital-code, and QR-code features. According to the doctor's orders and task arrangement, identified supplies are matched against the identified patient's face and wristband QR code and managed accordingly.
  • Drug identification reads the digital code, QR code, characters, color, and shape features on the drug's outer label to identify the drug name and quantity, and matches them against the recognized patient face and wristband QR code for management.
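The match-and-verify step described above can be sketched as a simple check of the scanned drug code and wristband QR code against the doctor's order. This is an illustrative sketch only; the `Order` structure, field names, and `verify_dispense` function are assumptions, not from the patent.

```python
# Illustrative sketch: match scanned drug codes and the patient's wristband
# QR code against the doctor's order before dispensing. All names here
# (Order, verify_dispense) are hypothetical.
from dataclasses import dataclass

@dataclass
class Order:
    patient_id: str   # decoded from the wristband QR code
    drug_code: str    # digital code on the drug's outer label
    quantity: int

def verify_dispense(order: Order, wristband_qr: str,
                    scanned_drug_code: str, scanned_count: int) -> bool:
    """Return True only if patient, drug, and quantity all match the order."""
    return (order.patient_id == wristband_qr
            and order.drug_code == scanned_drug_code
            and order.quantity == scanned_count)

order = Order(patient_id="P-0042", drug_code="6901234567890", quantity=2)
print(verify_dispense(order, "P-0042", "6901234567890", 2))  # True
print(verify_dispense(order, "P-0042", "6901234567890", 1))  # False
```

A real system would decode the QR and bar codes from camera frames; the comparison logic itself stays this simple.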
  • The heart sound and lung rale recognition module extracts voiceprint features of heart sounds and lung rales and applies an improved sound recognition algorithm to intelligently identify abnormal heart sounds and rales.
  • The radar autonomous positioning, navigation, and real-time mapping module fuses the medical scene recognition of the visual recognition module (department, alphanumeric ward room numbers, bed numbers) with the radar's real-time map, so that the robot autonomously positions itself, navigates, and moves to the corresponding department, ward, and bed.
  • Action planning is configured by the administrator setting parameters; through an improved neural network method, the robot is trained to learn and plan actions and to adaptively adjust the action-planning parameters.
  • The collection and injection module includes: a blood collection and injection module; an oral test sample collection module; a urine and stool sample storage and management module; and a medical image collection and sharing module.
  • The blood collection and injection module (fingertip blood collection module and injection needle module), on the basis of identifying the positions of the fingers, toe ends, and arm joints, applies a blood-vessel amplifier and an arm fixing device to locate the toe ends, wrist, and elbow.
  • The oral test sample collection module uses the facial-features recognition of the visual recognition module to identify and locate the oral cavity, tooth, and oral-wall positions; using the oral collector mounted on the robotic arm, with collection swab and oral mirror, it plans movements that slide along the wall in the left-right and front-back directions to accurately collect saliva, oral specimens, and intraoral images.
  • The urine and stool sample storage and management module is used for the robot to tour the wards and match beds and patients with their corresponding QR codes and digital codes; the robotic arm automatically identifies, grasps, moves, and places the urine and stool samples in the sample collection area.
  • The medical image acquisition and sharing module is used for acquiring ultrasound images and CT images, for image sharing, and for remote-controlled acquisition and sharing of DR radiology and MRI images, supporting remote consultation and multi-department joint consultation.
  • An action planning module is used for fitting medical devices onto the patient. The medical devices here are the equipment carried by the robot and the breathing equipment, negative-pressure equipment, 24-hour monitoring equipment, and other devices in the medical area; they are controlled by the robot main system, and the robot uses the facial-features recognition of the visual recognition module for positioning.
  • The medical supplies and medicine pick-and-place configuration management module can pick up and place medicines, treatment equipment, rehabilitation equipment, and other medical supplies, and scan their digital codes and QR codes for effective management and distribution.
  • The visual recognition module identifies the patient's face and scans the wristband QR code to compare the bed and hand-card information; the digital codes and QR codes of medical devices and drugs are matched against the doctor's order information.
  • An optimized task management system includes a medical robot device (the medical robot device of any of the above schemes), the medical-care task subsystems of multiple departments, and a call subsystem.
  • The medical task subsystems and the call subsystem are connected with the robot main control system and built on the optimized task management system platform.
  • The medical administrator can schedule time slots for patients in multiple departments and wards, with tasks for each time period, and can add, modify, delete, query, and dynamically schedule the robot's tasks in real time.
  • Connected with the call system of the medical area, the administrator can conduct remote consultations, jointly diagnose and treat patients in the jurisdiction, send doctor's order information, accept patient messages, and reply to them.
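The add/modify/delete/query task management described above can be sketched as a small in-memory scheduler. Class and field names are illustrative assumptions, not the patent's implementation:

```python
# Minimal sketch of the optimized task management: CRUD operations over
# tasks keyed by department, ward, and time slot. Names are hypothetical.
class TaskScheduler:
    def __init__(self):
        self.tasks = {}      # task_id -> task dict
        self.next_id = 1

    def add(self, department, ward, time_slot, description):
        tid = self.next_id
        self.next_id += 1
        self.tasks[tid] = {"department": department, "ward": ward,
                           "time_slot": time_slot, "description": description}
        return tid

    def modify(self, tid, **changes):
        self.tasks[tid].update(changes)

    def delete(self, tid):
        self.tasks.pop(tid, None)

    def query(self, **filters):
        return [t for t in self.tasks.values()
                if all(t[k] == v for k, v in filters.items())]

s = TaskScheduler()
tid = s.add("cardiology", "ward-3", "09:00-10:00", "collect blood samples")
s.modify(tid, time_slot="10:00-11:00")   # dynamic re-scheduling
print(len(s.query(department="cardiology")))  # 1
```

A production system would persist tasks and push changes to the robot over the call subsystem; the CRUD surface stays the same.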
  • A multi-user-robot voice-interaction joint consultation method for collecting and sharing medical pictures in real time comprises the following steps:
  • The robot uses speech recognition and speech synthesis technology to explain the patient's condition.
  • The administrator uses the messaging service carried by the robot platform to subscribe to the picture data service and publish images, so that multiple users and robots share medical information such as pictures and voice.
  • The administrator uses the real-time voice interaction and voice recognition modules carried by the robot platform for real-time multi-user voice conversation, voice-to-text with attached picture information, recorded multi-user voice interaction, and voice conferencing.
  • A management method for autonomous picking and distribution of medicines and medical devices with three-way matching of medical staff, patient, and robot includes the following steps:
  • S1: The administrator's communication module publishes doctor's order messages and services; the robot's voice module subscribes to receive them, and patient users subscribe to receive them.
  • S2: The robot uses speech recognition, speech synthesis, voice recording, and speech-to-text to recognize the doctor's orders.
  • S3: The robot uses the visual recognition module to identify equipment and medicines and their corresponding location information.
  • S4: The equipment and medicine location-information service is published by the communication module; the radar positioning and navigation module subscribes to it, and the robot autonomously moves to the equipment and medicine placement area.
  • S5: The robot uses the action planning module to pick up equipment and medicines and scan their digital codes and QR codes.
  • S6: The robot uses the communication module to publish patient location information, including ward, department, and bed; the radar positioning and navigation module subscribes to it, and the robot moves autonomously to the hospital bed.
  • S7: The robot uses the visual recognition module to recognize the department's medical scene, the alphanumeric ward room number, and the bed number, and uses face recognition to check the match; if consistent, proceed to step S8, otherwise reposition and navigate again.
  • S8: Using the motion planning module, the robot scans the digital code and QR code on the patient's wristband and checks them against the QR codes, digital codes, and doctor's order information on the equipment and medicines. If the scan matches, the equipment and medicines are distributed; otherwise a message is returned to the administrator.
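The method above repeatedly relies on a publish/subscribe pattern (orders, locations, images). This toy in-memory broker illustrates the pattern only; a real robot would use middleware topics, and the topic names below are assumptions:

```python
# Illustrative pub/sub sketch: the administrator publishes doctor's orders,
# and both the robot's voice module and the patient terminal subscribe.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:
            cb(message)

broker = Broker()
received = []
broker.subscribe("doctor_orders", received.append)  # robot voice module
broker.subscribe("doctor_orders", received.append)  # patient terminal
broker.publish("doctor_orders", {"patient": "P-0042", "drug": "aspirin"})
print(len(received))  # 2: both subscribers got the order
```

The same shape covers the location-information and image-sharing services in steps S4 and S6, only the topic and message payload change.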
  • A remote-control and autonomous sample collection and injection management method with three-way matching of medical staff, patient, and robot includes the following steps:
  • The administrator's communication module publishes doctor's order messages and services; the robot's voice module subscribes to receive them, and patient users subscribe to receive them.
  • The robot uses speech recognition, speech synthesis, voice recording, and speech-to-text to recognize the doctor's orders.
  • The robot uses the communication module to publish patient location information, including ward, department, and bed.
  • The radar positioning and navigation module subscribes to the patient's location information, and the robot moves autonomously to the hospital bed.
  • The vision module identifies, and the communication module publishes, the information services; the radar positioning and navigation module subscribes to the location-information service, and the robot autonomously moves to the equipment and medicine placement areas.
  • S6: The robot uses the visual recognition module to recognize faces, facial features, and their positions; it identifies fingers, toe ends, and arm joints and their positions, applies the blood-vessel amplifier and arm fixing device, and locates the toe-end positions, wrist and elbow vein positions, and the upper-arm intramuscular injection position.
  • The robot uses the communication module to publish the collection position information; the robotic arm subscribes to the fixing-device, collection, and injection position information, and the motion planning module subscribes to the position information.
  • S7: The robot performs the oral, image, blood collection, and injection actions according to the position information from step S6 and the action planning module.
  • The collection modules in step S7 include: the blood collection and injection action planning module, the oral collection action planning module, and the urine and stool sample storage action planning module.
  • The blood collection and injection action planning module (fingertip blood collection module and injection needle module), on the basis of the identified positions of the fingers, toe ends, and arm joints, applies the blood-vessel amplifier and arm fixing device, locates the toe ends, wrist and elbow vein positions, and the upper-arm intramuscular injection position, and applies the collection needle or injection needle to collect blood or perform intravenous or intramuscular injection.
  • The oral collection action planning module of step S7 uses the facial-features recognition of the visual recognition module to locate the oral cavity, tooth, and oral-wall positions; with the oral collector carried by the robotic arm, collection swab, and oral mirror, it plans movements that slide along the wall in the left-right and front-back directions to accurately collect saliva, oral specimens, and intraoral images.
  • The urine and stool sample storage action planning module of step S7 is used for the robot to tour the corresponding ward, bed, and patient and match their QR codes and digital codes; the robotic arm automatically identifies, grasps, moves, and places the urine and stool samples in the sample collection area.
  • The robot uses the communication module to publish the locations of the recovery areas; the radar positioning and navigation module subscribes to the recovery-area location service, and the robot moves autonomously to the saliva, biological specimen, blood, urine, and stool sample recovery areas, where the robotic-arm action module places and recovers the samples.
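The recovery step above is essentially a routing table from sample type to recovery area. A minimal sketch; the patent names the recovery areas but gives no identifiers, so the labels below are assumptions:

```python
# Illustrative only: route each collected sample type to its recovery area.
RECOVERY_AREAS = {
    "saliva": "saliva sample recovery area",
    "biological": "biological specimen recovery area",
    "blood": "blood sample recovery area",
    "urine": "urine sample recovery area",
    "feces": "stool sample recovery area",
}

def recovery_route(samples):
    """Return the ordered list of recovery areas to visit for the samples."""
    return [RECOVERY_AREAS[s] for s in samples]

print(recovery_route(["blood", "urine"]))
```

Each area name would be published as a location message for the navigation module to subscribe to, as in the steps above.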
  • The robot's visual recognition module publishes the coordinates of the external body regions corresponding to the external features of each organ.
  • The main system subscribes to the location and coordinates of the external acquisition area.
  • Under the remote main control system, or autonomously, the robotic arm moves the mounted ultrasound probe and scans the acquisition area of the body according to the subscribed area location and the robotic-arm image-acquisition action planning module.
  • The ultrasound probe and ultrasound device publish the collected image information, to which the robot main system and visual recognition module subscribe.
  • The robot main system and visual recognition module take the internal contours of the image and the feature values of each organ as input, and use a deep neural network with a weight optimizer to obtain the output values and the classification and recognition result for the internal organ.
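The final classification step can be sketched shape-wise as a small forward pass: feature values go through a hidden layer and a softmax output. The patent's "improved" network and optimizer are not specified, so the toy weights and organ labels below are assumptions:

```python
# Illustrative forward pass for organ classification from image features.
# Weights, features, and labels are toy values, not trained parameters.
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classify(features, W1, W2, labels):
    # One hidden layer with ReLU, then a softmax over organ classes.
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features))) for row in W1]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W2]
    probs = softmax(logits)
    return labels[probs.index(max(probs))]

labels = ["liver", "kidney", "heart"]
W1 = [[0.5, -0.2], [0.1, 0.9]]
W2 = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
print(classify([1.0, 0.2], W1, W2, labels))  # liver
```

In practice the features would come from contour extraction on the ultrasound frames and the weights from supervised training with the optimizer; only the inference shape is shown here.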
  • Through the medical robot device, the present invention enables remote-controlled isolated collection, autonomous injection, and autonomous positioning, movement, and navigation; it realizes unmanned and isolated collection and independently completes various medical and nursing tasks in outpatient clinics and wards, relieving the heavy workload and excessive night shifts of doctors and nurses.
  • Fig. 1 is a schematic diagram of the modules of the medical robot device in the specification of this application.
  • 101 - robot main system; 102 - blood collection and injection action planning module; 103 - camera vision module; 104 - ultrasound, CT, and DR image acquisition module; 105 - voice module; 106 - heart sound and lung sound acquisition module; 107 - medical data acquisition module; 108 - radar mapping, positioning, and navigation module; 109 - pick, place, code-scanning, and management action planning control module.
  • The purpose of the present invention is to design a remote-controlled robot that can replace human work and realize remote robotic-arm collection, while also solving autonomous collection: collecting oral test samples for nucleic acid and biological specimen detection, collecting blood samples, and collecting urine and stool samples.
  • the present invention is further described in detail below with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
  • The general idea of the technical solution of the present application for solving the above technical problems is as follows: through the robot's main control system and the ultrasound image acquisition device, intraoral acquisition device, and blood acquisition device carried by the robot, CT and DR radiology images are remotely acquired and shared; through the blood-vessel amplifier, intravenous injector, and other injection devices carried by the robot, remote-controlled and autonomous injection and autonomous drug configuration are realized; and through the radar and vision cameras, the robot makes rounds and picks up and places medical equipment.
  • The invention also provides an optimized management system for multi-task allocation in outpatient clinics and wards; a multi-user-robot voice-interaction joint consultation method with real-time collection and sharing of medical pictures; a dispensing management method; a remote-control and autonomous sample collection and injection management method with three-way matching of medical staff, patient, and robot; and a method for autonomously locating human organ feature positions and acquiring and classifying ultrasound and CT images of internal organs.
  • Embodiment 1. As shown in FIG. 1:
  • A medical robot device includes: a robot main system 101, used to realize the main control of the robot; a voice module 105, connected to the robot main system 101 for user interaction; a visual recognition module 103, used for face, human body, and medical scene recognition; and a heart sound and lung sound recognition module 106, used for collecting heart sounds and lung sounds.
  • The radar 108 is used for autonomous movement and real-time mapping; the robotic arm carries the blood collection and injection action planning module 102, used for collecting image samples, oral test samples, blood samples, and urine and stool samples, and for intravenous and intramuscular injection.
  • The robotic-arm pick, place, code-scanning, and management action planning control module 109 is used to pick up, place, scan the codes of, and manage medical equipment and medicines.
  • A voice module 105 is used to collect the voices of doctors and patients and the scene sounds of outpatient clinics and wards.
  • the robot main control system 101 interacts with the user and provides voice guidance, voice commands, and voice interaction.
  • Face recognition in the visual recognition module 103 recognizes the faces of patient users and medical administrators, and is used to match patients with their corresponding samples, medical equipment, and drugs.
  • The visual recognition module 103 recognizes human facial features and their positions and the position of the oral cavity, used for collecting oral samples to be tested.
  • The visual recognition module 103 recognizes human-body feature positions: the wrist, elbow, and finger joints and their positions and, under the blood-vessel amplifier, the wrist and elbow vein positions, used for blood-vessel positioning, blood collection, and intravenous injection. It identifies the shoulder and waist joints for locating the proximal-shoulder intramuscular injection site and for remote and autonomous injection.
  • The medical scene recognition in the visual recognition module 103 identifies clinics, wards, patients, doctors, the alphanumeric characters of room numbers, and so on; the voice module 105 collects medical scene sounds, and the two are combined to comprehensively recognize the medical scene.
  • The medical supplies recognition in the visual recognition module 103 identifies breathing equipment, negative-pressure equipment, 24-hour monitoring equipment, and other medical equipment of various specialties.
  • The heart sound and lung rale recognition module 106 extracts features of heart sounds and lung rales and uses an improved sound recognition method to intelligently identify abnormal heart sounds and rales.
  • the blood collection, the injection action planning module 102, and the visual recognition module 103 identify the positions of the fingers, the pubic end, and the joints of the arm, using the blood vessel amplifier, the arm fixing device, to locate the position of the end of the toe, the wrist of the arm, and the vein of the elbow Vascular position, upper arm, waist joint intramuscular injection position, application of collection needle, injection needle to collect blood, intravenous injection, intramuscular injection.
  • The robotic arm autonomously collects blood samples and moves and places them in the sample placement area.
  • For oral collection, the facial-feature recognition of the visual recognition module 103 locates the oral cavity, teeth, and oral walls; using the oral collector, collector swab, and oral mirror carried by the robotic arm, the planned movements slide along the walls in the left-right and front-back directions to accurately collect saliva, biological specimens in the oral cavity, and intraoral images.
  • The urine and stool sample storage action planning module works with the visual recognition module 103 during ward rounds to match the corresponding wards, beds, and patients against their QR codes and digital codes, and uses the robotic arm to automatically identify, grasp, move, and place urine and stool samples in the sample collection area.
  • The medical image acquisition and sharing module 104 is connected to the robot main system 101 and is used to acquire ultrasound and CT images, share images, remotely control the acquisition and sharing of DR radiology images, and support remote and multi-department joint consultations.
  • The action planning module for respiratory equipment, negative-pressure equipment, and 24-hour monitoring equipment applies the facial-feature and body-feature recognition of the visual recognition module to identify and locate the mouth, nose, ears, eyes, and characteristic body positions, and plans the robotic arm to pick up, move, place, put on, and remove equipment and to monitor its normal operation.
  • The medical supplies and medicine picking, placing, configuration, and management module 109 is used to pick up and place medicines, treatment equipment, and rehabilitation equipment, scan digital codes and QR codes, and effectively manage and distribute equipment.
  • The visual recognition module 103 identifies the patient's face and scans the wristband QR code to compare bed and hand-card information, matching the digital codes and QR codes of medical devices and drugs against the doctor's order information, so that medical devices can be retrieved, scanned, and managed autonomously. As shown in FIG. 2:
  • An optimized task management system and a method of using the medical robot device are as follows: using the optimized task management system, the medical administrator schedules times for patients across multiple departments and wards, together with the tasks for each time period, and adds all tasks to the system; the medical robot device then receives the tasks assigned by the system administrator according to date, time, and the corresponding department and ward.
  • Administrator users and expert users can log in to the optimized task management system to remotely control robots, manage the robots within their respective departments and wards, add, modify, delete, query, and dynamically schedule robot tasks in real time, connect to the medical-area call system, hold remote and joint consultations for patients in their jurisdiction, send doctor's order information, and receive and reply to patient messages.
  • The radar module 108 and the vision module 103 plan the routes for the tasks of each time period.
  • The medical supplies and medicine picking, placing, configuration, and management module 109, the blood collection and injection action planning module 102, the voice module 105, and the ultrasound/CT/DR image acquisition module 104 each handle their respective tasks.
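The per-time-period dispatching described above can be sketched as a schedule keyed by time slot that hands each queued task to its handling module. The class, its methods, and the data layout are illustrative assumptions; only the module reference numerals (109, 102, 104, 105) come from the text.

```python
from collections import defaultdict

class TaskScheduler:
    """Minimal sketch of the optimized task management idea: the
    administrator queues tasks per time slot, and each task is tagged
    with the module that handles it (109 = supplies/medicine management,
    102 = collection/injection, 104 = imaging, 105 = voice).
    """
    def __init__(self):
        self.slots = defaultdict(list)

    def add(self, slot: str, module: int, task: str):
        self.slots[slot].append((module, task))

    def dispatch(self, slot: str):
        # Hand back this slot's tasks in the order they were queued,
        # clearing the slot once dispatched.
        return self.slots.pop(slot, [])

sched = TaskScheduler()
sched.add("09:00", 109, "deliver medication to ward 3")
sched.add("09:00", 102, "draw blood, bed 12")
for module, task in sched.dispatch("09:00"):
    print(module, task)
```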
  • The management and configuration tasks use the robot motion planning of the medical supplies and medicine picking, placing, configuration, and management module 109; the steps are as follows:
  • S1. The administrator issues the medical order and assigns the task.
  • S2. The robot uses the voice device 215 and the voice recognition module 105, with speech synthesis, voice recording, and speech-to-text, to recognize the doctor's order.
  • S3. The robot uses the visual recognition module 103 to identify the equipment and medicine and their corresponding positions.
  • S4. The robot uses the radar 207 and the radar autonomous movement, medical scene recognition, and mapping module 108 to locate, navigate, and move autonomously to the equipment and medicine placement area.
  • S5. The robot uses the medical supplies and medicine picking, placing, configuration, and management module 109 to pick up the equipment and medicine and scan their information codes.
  • S6. The patient location information used by the robot includes ward, department, and bed position; radar positioning and navigation move the robot autonomously to the hospital bed.
  • S7. The robot uses the medical scene recognition of the visual recognition module 103 to identify the department, the alphanumeric ward door plate, and the bed number, and uses the vision module to recognize the patient's face and check the match; if they are consistent, step S8 is performed, otherwise the robot re-localizes and navigates again.
  • S8. The robot uses the information scanning device 212 to scan the digital code and QR code on the patient's wristband and check them against the QR codes and digital codes on the equipment and medicine and the digital code of the doctor's order; if the scan matches, the equipment and medicine are distributed, otherwise a message is returned to the administrator.
  • S9. The upper left arm 208 and upper right arm 205 of the robotic arm place the distributed equipment and medicines in the medicine box and equipment placement area.
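The wristband-versus-order verification step above amounts to a three-way code comparison before anything is dispensed. The sketch below shows that check; the dictionary field names and code formats are hypothetical, since the patent does not specify a data schema.

```python
def dispense_allowed(wristband_code: str, item_code: str, order: dict) -> bool:
    """Three-way match: the scanned wristband must belong to the patient
    named in the doctor's order, and the scanned device/drug code must be
    the one the order prescribes. Field names are illustrative.
    """
    return (wristband_code == order.get("patient_code")
            and item_code == order.get("item_code"))

order = {"patient_code": "P-0042", "item_code": "MED-7781"}
print(dispense_allowed("P-0042", "MED-7781", order))  # match: dispense
print(dispense_allowed("P-0042", "MED-9999", order))  # mismatch: report back
```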
  • When handling a collection task, the robot uses the blood collection and injection action planning module 102.
  • The collection and injection steps are as follows:
  • S1. The administrator issues the medical order and assigns the task.
  • S2. The robot uses the voice device 215 and the voice module 105, with speech synthesis, voice recording, and speech-to-text, to recognize the doctor's order.
  • S3. The robot uses the patient's location information: ward, department, and bed position. The radar 207 navigates the robot autonomously to the hospital bed.
  • S4. The robot uses the camera 201 and the vision module 103 to recognize the face, facial features, and their positions, and to identify the fingers, toe tips, and arm joints and their positions; applying the blood vessel amplifier 209 and the arm fixing device 213, it locates the toe tips, the wrist and elbow veins of the arm, and the upper-arm intramuscular injection site.
  • S5. According to the position information from step S4, the robot performs the oral, image, blood collection, and injection actions following the action planning module.
  • In step S5, the blood collection and injection action planning module and the fingertip peripheral blood collection module, using the collector 210 and the syringe needle 211, build on the identified positions of the fingers, toe tips, and arm joints: the blood vessel amplifier 209 and the arm fixing device 213 locate the toe tips, the wrist and elbow veins of the arm, and the upper-arm intramuscular injection site; the collector 210 collects blood, and the syringe 211 performs intravenous and intramuscular injection.
  • In step S5, the oral collection action planning module uses the facial-feature recognition of the visual recognition module 103 to identify and locate the oral cavity, teeth, and oral walls; using the oral collector 210 with collector swab and oral mirror carried by the robotic arm, it plans movements that slide along the walls in the left-right and front-back directions to accurately collect saliva, oral specimens, and intraoral images.
  • In step S5, the urine and stool sample collection module is used on the robot's rounds: at the corresponding ward and bed, the patient's QR code and digital code are scanned and matched with the information scanning device 212, and the upper right arm 205 and upper left arm 208 of the robotic arm automatically identify, grasp, move, and place urine and stool samples in the sample placement area 214.
  • S6. The robot uses the radar 207 to locate, navigate, and move autonomously to the sample recovery area.
  • The multi-user robot voice-interaction joint consultation method includes the following steps:
  • S1. The administrator connects and communicates with other users through the voice device 215 carried on the robot platform and its connected voice module 105.
  • S2. The robot uses speech recognition and speech synthesis to explain the patient's condition.
  • S3. The administrator uses the robot's ultrasound/CT/DR image acquisition module 104 to acquire ultrasound and CT images in real time. The acquisition steps are as in step S6.
  • S4. The administrator uses the robot platform to share voice, previously collected and real-time medical pictures, and text, so that multiple users and the robot share medical information.
  • S5. The blood pressure monitor, glucometer, thermometer, stethoscope, and heart-rate equipment in the basic medical equipment area carried by the robot collect basic medical information, which is shared among multiple users.
  • S6. The administrator uses the real-time voice interaction and the voice recognition module 105 carried by the robot platform for real-time multi-user voice conversations, speech-to-text with attached picture information, recording of multi-user voice interactions, and voice conferences.
  • S7. The steps for classifying and identifying internal organs from body-surface organ feature positions and ultrasound/CT images, locating autonomously, and acquiring ultrasound and CT images are:
  • Step 1. The robot visual recognition module 103 recognizes external organ features, including the shoulder joints, breasts and nipples, navel, characteristic genitals, and waist joints, together with the coordinates of their corresponding external body regions.
  • Step 2. According to the external body-region coordinates corresponding to each organ's external features, the ultrasound probe 203 and ultrasound device 204 carried by the robotic arm scan the external acquisition region.
  • Step 3. The remote main control system 202 and the ultrasound probe 203 mounted on the autonomous robotic arm move and scan the body acquisition region according to the actions of the robotic-arm image acquisition action planning module; the ultrasound probe 203 and ultrasound device 204 publish the acquired image information.
  • Step 4. The robot main system 202 and the visual recognition module 103 take the internal contours of the ultrasound and CT images and the feature values of each organ as input, and use a deep neural network method with a weight optimizer to obtain the output values and the internal-organ classification results.
  • Step 5. According to the output results, the ultrasound and CT images of human organs are accurately classified and identified, and the results are linked to the intelligent disease-identification system for each organ; the identification results, with their corresponding disease signs and disease information, are published to the administrators and users of the robot main system.
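Steps 4 and 5 describe a feed-forward classifier over organ contour/feature values. The sketch below shows the shape of such a forward pass with a tiny NumPy MLP; the layer sizes, random weights, and class labels are made up for illustration, and the patent's "improved deep neural network method and weight optimizer" are not specified, so no training step is shown.

```python
import numpy as np

ORGANS = ["liver", "kidney", "heart"]  # illustrative class labels

def mlp_forward(features: np.ndarray, w1, b1, w2, b2) -> str:
    """One hidden layer with ReLU, then argmax over logits: classify an
    organ from feature values extracted from an ultrasound or CT image."""
    hidden = np.maximum(0.0, features @ w1 + b1)
    logits = hidden @ w2 + b2
    return ORGANS[int(np.argmax(logits))]

# Toy weights standing in for a trained network.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)
label = mlp_forward(np.array([0.2, 0.7, 0.1, 0.5]), w1, b1, w2, b2)
print(label)
```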

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Hematology (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Evolutionary Computation (AREA)
  • Vascular Medicine (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Primary Health Care (AREA)
  • Acoustics & Sound (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Business, Economics & Management (AREA)
  • Pulmonology (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Dentistry (AREA)
  • Dermatology (AREA)
  • Anesthesiology (AREA)
  • Bioinformatics & Cheminformatics (AREA)

Abstract

Provided are a medical robot device, system, and method. Using artificial-intelligence robotics, a robot device for remote joint diagnosis and assisted treatment realizes remote consultation, multi-department joint consultation, and remote doctor's orders, addressing problems such as patients' unclear understanding of their condition and non-compliance with treatment. The ultrasound image acquisition device (104), intraoral collection device (210), and blood collection device carried by the robot, together with autonomous and remotely controlled acquisition and sharing of medical images, realize image sharing and overcome the limitations of single-department diagnosis and of a single diagnostic plan. The blood vessel amplifier (209) and intravenous syringe (211) carried by the robot enable remote control, autonomous injection, autonomous preparation of medicines, and touring pickup and placement of medical equipment, relieving the heavy workload and frequent night shifts of medical staff. The device improves the efficiency of remote consultation, ward rounds, and multi-department joint consultation by experts and doctors, lets experts resolve clinical cases by consensus, and is applicable to outpatient clinics, wards, and overseas medical institutions.

Description

一 种医疗用 机器人装 置、 系统及方 法 技术领域 本发 明属于人工智能机器人健康医疗设备技术领域, 涉及机器人技术领域, 图像智能识 别方法, 智能化设备及系统。 背景技术 目前应用于医疗领域, 在检查过程, 由于各种人为因素分析, 识别病情精准度差, 各专 科医生领域及医疗专业受限, 多科室多专家联合会诊, 在门诊, 病房与患者一起讨论病情很 难实现。 管理员远端控制, 远端联合会诊, 病房专家联合查房, 联合治疗的机器人装置, 机 器人平台涉及机器人理论, 实践技术。 因疫情等感染性高, 效率低下, 人工采集不精准, 瘟 疫传播等问题严重, 利用机器臂自主采集口腔检测样本, 血液检测样本, 尿液, 粪便检测样 本, 自主注射, 自主管理, 配置药品, 医疗用品。 利 用机器人搭载的机器臂及摄像头, 机器视觉及各种智能识别方法, 辅助识别疾病征兆 关联疾病的识别, 实现远端检测, 自主检测, 感染性检测, 智能化分析数据, 有效防止传染 病, 瘟疫等重大疾病蔓延。 技术问题 本发明的目的就在于克服上述现有技术的缺点和不足, 提供一种医疗用机器人装置, 利 用远端会诊, 多科室联合会诊, 远端医嘱, 病患-医生沟通不畅, 病情理解不清, 治疗方法不 配合等问题, 利用机器人搭载的超声图像采集装置, 口腔内采集装置, 血液采集装置, CT图 像以及 DR放射科图像远端控制采集与共享实现图像共享,解决了人为的诊断治疗失误, 以及 单一诊疗科的局限性及诊断方案的单一性等问题。 通过机器人搭载的血管放大器, 静脉注射器, 及其他注射装置实现机器人远端控制, 自 主注射, 自主配置药品, 巡回取放医疗设备, 解决医护人员作业压力大, 夜班多等问题。 提 高专家, 医生远端问诊, 查房, 多科室联合会诊的灵活性, 高效率多治疗方案多专家共同意 见解决临床案例。 本发明还提供了一种门诊病房多任务分配最优化管理系统及图片共享医疗 图片实时采集共享多用户 -机器人语音交互联合问诊方法; 一种医护, 患者,机器人三方匹配 的药品医疗器械自主拾取发放管理方法; 一种医护, 患者, 机器人三方匹配远端控制及自主 采集样本, 注射管理方法。一种自主定位识别人体器官特征位置, 采集, 分类内部脏器超声, CT图像的方法。 本发明的采用的技术方案 一种医疗用机器人装置包括 : 机器人 主系统, 所述机器人主系统模块用于实现机器人的主控制, 语音模块和用户间交 互, 语音模块, 视觉识别模块, 心音, 肺部螺音识别模块, 医疗场景识别, 雷达自主移动实 时建图模块, 血液采集, 注射动作规划模块动作规划模块, 机器臂拾取, 放置, 扫码, 管理 动作规划控制模块。 语音模块 , 所述语音模块用于采集医患者声音, 门诊病房场景声音。 所述语音模块用于 主控制系统与用户间交互和语音引导, 语音命令, 语音交互。 视觉识别模块, 与图像采集装置连接, 采集并识别图像, 所述的图像采集装置包括: 一 般摄像头, 深度摄像头, 双目摄像头中的一种及多种但不限于上述图像采集装置。 所述视觉 识别模块, 包括: 人脸识别, 人体五官识别, 人体特征位置识别, 医疗场景识别, 医疗用品 识别, 药品识别。 所述 人脸识别, 是患者用户, 医护管理员的人脸识别。 人体五官识别, 是人脸五官及其 位置的识别, 口腔角度位置, 用于核酸检测, 生物特征检测, 及其他口腔检测。 人体特征位 置识别, 是指关节位置识别, 包括: 肩部, 腕部, 臂肘部, 手指各关节及其位置识别, 用于 识别手指, 趾末端, 腕, 肘部, 肩的手臂关节, 在血管放大器下, 腕静脉, 肘部静脉血管的 位置, 近肩部肌肉注射位置识别, 用于血管定位, 其他关键位置定位。 所 述医疗场景识别, 应用改进的神经网络方法, 识别医疗场景。包括门诊, 病房, 病患, 医生, 门牌字母数字文字等对综合场景的识别。 所述 医疗用品识别, 包括: 机器人搭载的基本的医疗设备区的血压仪, 血糖仪, 体温计, 听诊器, 心率设备用于采集医疗信息, 呼吸设备, 负压设备, 及 24小时监测设备, 及其他用 于各专科的医疗器械。 应用形状, 颜色, 数字码, 二维码特征改进的神经网络方法, 识别并 管理医疗用品设备。 遵照医嘱, 按照医生任务安排, 识别的医疗用品与识别的患者人脸, 手 环二维码对应, 并对其进行匹配管理。 所述药品识别, 包括: 药品的外标签数字码, 二维码, 文字, 颜色, 形状的特征识别药 品的名称, 数量, 及其与识别的患者人脸, 手环二维码对应, 并对其进行匹配管理。 心音, 肺部螺音识别模块, 所述心音, 肺部螺音识别模块用于心音, 肺部螺音声纹特征 提取, 利用改进的声音识别算法, 
智能识别心音, 螺音异常。 雷达自主移动, 医疗场景识别, 建图模块。 所述雷达自主定位, 导航, 实时建图模块, 应用视觉识别模块的医疗场景识别科室, 病房门牌字母数字文字, 床位号与雷达实时建图融 合, 利用自主定位, 导航, 移动至对应科室, 病房, 床位位置。 所述的动作规划, 是通过管理员调解设置参数及通过神经网络改进方法训练机器人学习 规划动作及自适应调解设置动作规划参数, 用于动作规划, 包括: 采集, 注射模块, 医疗设 备佩戴使用动作规划模块, 医疗用品, 药品取放配置管理模块。 所述 的采集, 注射模块, 包括: 血液采集, 注射模块, 口腔检测样本采集模块, 尿液, 粪便样本收纳管理模块, 医疗图像采集共享模块。 所述 血液采集, 注射模块, 指端末梢血液采集模块, 采集注射针头模块, 在识别手指, 眦末端, 手臂各关节位置的基础上, 应用血管放大器, 手臂固定装置, 定位趾端末位置, 手 臂腕部, 肘部静脉血管位置, 上臂部肌肉注射位置, 应用采集针, 注射针头采集血液, 静脉 注射, 肌肉注射。 进一步 , 口腔检测样本采集模块, 所述口腔采集动作规划模块, 应用视觉识别模块的人 脸五官识别, 识别, 定位口腔位置, 牙齿位置, 口腔壁位置, 利用机器臂搭载的口腔采集器, 口腔采集器棉, 口腔镜, 规划移动, 左右前后方向沿壁滑动, 采集动作, 精准采集唾液, 口 腔内的口腔特征物, 口腔内图像。 进一步 , 尿液, 粪便样本收纳管理模块, 所述尿液, 粪便样本收纳动作规划模块用于机 器人巡回与对应的病房, 病床, 患者及其对应二维码, 数字码匹配, 利用机器臂自动识别并 抓取, 移动, 放置尿液, 粪便样本在样本收集区。 进一 步, 医疗图像采集共享模块, 其特征在于, 所述医疗图像采集共享模块用于采集超 声图像 CT图像, 图像共享, DR放射科及 MRI核磁图像远端控制采集与共享及远端会诊, 多 科室联合会诊。 作 为本发明的又一步改进, 医疗设备佩戴使用动作规划模块。 其特征在于, 所述医疗设 备是指机器人搭载的设备及医疗区的呼吸设备, 负压设备, 24小时监测设备及其他各项医疗 设备, 由机器人主系统控制, 机器人应用视觉识别模块的五官识别及身体特征识别, 识别口, 鼻, 耳, 眼, 身体的特征位置, 定位位置, 设计及自适应学习规划机器臂拾取, 移动, 放置, 佩戴, 摘取, 使用设备, 监测设备正常运行动作。 作 为本发明的又一步改进, 所述的医疗用品, 药品取放配置管理模块, 其特征在于, 药 品, 治疗设备, 康复设备及其他医疗用品, 拾取, 放置, 扫描数字码, 二维码, 有效管理, 配送设备。 应用视觉识别模块识别患者人脸, 手环扫描二维码比对床位, 手牌信息, 医疗器 械及药物的数字码, 二维码匹配, 比对医嘱信息。 自主取物, 扫描信息, 归还, 管理医疗器 械。 一种最优化任务管理系统, 包括一种医疗用机器人装置、 多个科室的医护任务和一个呼 叫子系统, 所述医疗用机器人装置为上述任一方案中医疗用机器人装置, 所有多个科室的医 护任务子系统和一个呼叫子系统与机器人主控制系统连接, 搭建在最优化任务管理系统平台 上。 利用最优化任务管理系统, 实现医护管理员为多个科室,病房的的患者排配时间-及其各 时间段对应的任务, 添加, 修改, 删除, 查询, 动态实时排班机器人各种任务, 与医疗区呼 叫系统连接, 远端问诊, 联合会诊治疗管辖区患者, 发送医嘱信息, 接受患者留言, 回复患 者留言。 远端控制机器人, 管理各自科室, 病房管辖区下的机器人, 按照时间段及对应时间 段的机器人任务。 一种医疗图片实时采集共享多用户 -机器人语音交互联合问诊方法,所述方法包括以下步 骤:
51、 利用管理员通过机器人平台上搭载的语音装置, 及其连接语音模块, 与其他用户连 接, 通信。
52、 机器人利用语音识别, 语音合成技术, 语音解说患者病情。
53、 管理员利用机器人平台搭载的消息信息, 图片数据服务订阅, 发布图像, 多用户 - 机器人共享医疗信息, 如图片, 语音。
54、管理员利用机器人平台搭载的实时语音交互, 语音识别模块, 实时多用户语音会话, 语音转文字附加图片信息, 记录多用户语音交互, 语音会议; 一种 医护, 患者, 机器人三方匹配的药品医疗器械自主拾取发放管理方法, 所述方法包 括以下步骤:
51、 管理员通信模块, 发布医嘱消息, 服务, 机器人语音模块订阅接受医嘱消息, 病患 用户订阅接受医嘱消息, 服务。
52、 机器人利用语音识别, 语音合成技术, 语音记录, 语音转文字识别医嘱。
53、 机器人利用视觉识别模块, 识别器材, 药品及其对应的位置信息。
54、 机器人利用视觉模块, 通信模块发布的器材, 药品位置信息服务, 雷达定位导航模 块订阅位置信息服务, 自主移动到器材, 药品位置放置区。
55、 机器人利用动作规划模块, 拾取器材, 药品, 扫描数字码, 二维码。
56、 机器人利用通信模块, 发布病患位置信息包括: 病患病房, 科室, 床位位置信息。 雷达定位导航模块订阅病患位置信息, 自主移动到病床。
57、机器人利用视觉识别模块的医疗场景识别科室,病房门牌字母数字文字,病床号利用 机器人视觉模块识别人脸, 核对匹配, 如果一致, 执行步骤 8如果不一致, 重新定位导航。
58、 机器人利用动作规划模块, 扫描患者手环的数字码, 二维码, 与器材, 药品上的二 维码, 数字码, 医嘱信息数字码核对, 匹配。 如果扫码结果正确, 发放器材, 药品。 否则返 回信息至管理员。
59、 利用机器臂动作规划模块, 放置, 发放器材, 药品至药品箱, 器械放置区.
S10、 结束此时间段的任务。 一种医护, 患者, 机器人三方匹配远端控制及自主采集样本, 注射管理方法, 所述方法 包括以 卜步骤:
51、 管理员通信模块, 发布医嘱消息, 服务, 机器人语音模块订阅接受医嘱消息, 病患 用户订阅接受医嘱消息, 服务。
52、 机器人利用语音识别, 语音合成技术, 语音记录, 语音转文字识别医嘱。
53、 机器人利用通信模块, 发布病患位置信息包括: 病患病房, 科室, 床位位置信息。 雷达定位导航模块订阅病患位置信息, 自主移动到病床 -
54、 机器人利用视觉模块, 识别通信模块发布信息服务, 雷达定位导航模块订阅位置信 息服务, 自主移动到器材, 药品位置放置区。
55、机器人利用视觉识别模块识别, 人脸, 五官, 特征, 及其位置。 识别手指, 趾末端, 手臂各关节, 各关节位置。 应用血管放大器, 手臂固定装置, 定位趾端末位置, 手臂腕部, 肘部静脉血管位置, 上臂部肌肉注射位置, 位置信息。
56、 机器人利用通信模块发布采集位置信息, 机器臂订阅固定装置, 采集位置信息, 注 射位置信息, 动作规划模块订阅位置信息。
57、机器人依照步骤 S6的位置信息, 按照动作规划模块, 采集口腔, 图像, 血液, 注射 动作, 所述采集模块包括: 血液采集, 注射动作规划模块, 口腔采集动作规划模块, 尿液粪 便样本收纳动作规划模块。 步骤 S7中, 所述血液采集, 注射动作规划模块, 指端末梢血液采集模块, 采集注射针头 模块, 在识别手指, 趾末端, 手臂各关节位置的基础上, 应用血管放大器, 手臂固定装置, 定位趾端末位置, 手臂腕部, 肘部静脉血管位置, 上臂部肌肉注射位置, 应用采集针, 注射 针头采集血液, 静脉注射, 肌肉注射。 步骤 S7中,所述口腔采集动作规划模块, 所述口腔采集动作规划模块, 应用视觉识别模 块的人脸五官识别, 定位口腔位置, 牙齿位置, 口腔壁位置, 利用机器臂搭载的口腔采集器, 口腔采集器棉, 口腔镜, 规划移动, 左右前后方向沿壁滑动, 采集动作, 精准采集唾液, 口 腔内的口腔特征物, 口腔内图像。 步骤 S7中, 所述尿液, 粪便样本收集模块, 所述尿液, 粪便样本收纳动作规划模块用于 机器人巡回与对应的病房, 病床, 患者及其对应二维码, 数字码匹配, 利用机器臂自动识别 并抓取, 移动, 放置尿液, 粪便样本在样本收集区。
58、 机器人利用通信模块发布回收区位置信息, 雷达定位导航模块订阅回收区位置信息 服务, 自主移动到, 唾液样本回收区, 生物信息样本回收区, 血液样本回收区, 尿液样本回 收区, 粪便样本回收区, 利用机器臂动作模块, 放置, 回收样本。
59、 返回任务完成信息至管理员。 如果未完成, 将任务移入下一时间段。 一种机器人自主定位并识别人体器官特征位置方法, 分类图像的脏器, 图像的采集方法 包括以下步骤: 人体器官特征位置及医疗图像的内部脏器分类识别方法:
51、 建立人体器官特征模型, 包括: 肩关节, 乳房及乳头, 肚脯肚脐, 特征生殖器, 腰 关节。
52、 抽取图像器官的内部轮廓, 各器官的特征值及其对应的外部特征所对应的人体外部 位置区。
53、 输入各器官外部特征值所对应的人体内部器官图像的特征值, 改进深度神经网络方 法及权值优化器, 通过图像训练, 得到输出值及内部器官分类, 器官识别结果。
S4、 输出结果, 精准分类, 识别人体器官的图像。 机器人 自主定位, 采集医疗图像的方法:
51、 机器人视觉识别模块发布各器官外部特征所对应的人体外部位置区坐标
52、依据各器官外部特征所对应的人体外部位置区坐标,机器人机器臂搭载的超声探头, 主系统订阅外部位置采集区位置及坐标。
53、 远端主控制系统及自主机器臂搭载的超声探头依照订阅的采集区位置, 依照机器臂 图像采集动作规划模块的动作, 移动, 扫描人体采集区。 超声探头及超声装置发布采集的图 像信息, 机器人主系统及视觉识别模块订阅图像信息。
54、 机器人主系统及视觉识别模块输入图像内部轮廓, 各器官的特征值, 利用深度神经 网络方法及权值优化器, 得到输出值及内部器官分类识别结果。
55、 依据输出结果, 精准分类, 识别人体器官的图像, 识别结果关联各器官疾病智能识 别系统。 发布识别结果及其对应的疾病征兆, 疾病信息至机器人主系统的管理员及用户。 综上, 本发明的有益效果是: 本 发明能够通过医疗用机器人装置, 解决远端控制机器人远端隔离采集, 自主注射, 自 主定位, 移动, 导航。 实现无人采集, 隔离采集, 自主完成门诊, 病房的各项医护任务。 以 及为改善了医生, 护士工作压力大, 夜班多等问题。 同时, 实现实时多专家远端联合会诊, 实时获取机器人采集的数据及图像, 大幅度提高工作效率。 本发明能够通过最优化任务管理 系统, 管理, 排配机器人任务, 实时动态排班各机器人任务, 可有效与医疗用机器人装置, 与医疗区呼叫系统联合作业。 附图说明: 图 1是本申请说明书中医疗用机器人装置模块示意图; 附图 1标记: 机器人主系统; 102-M集, 注射动作规划模块; 103 -摄像头视觉模块; 104 -超声, CT, DR 图像采集模块; 105 -语音模块; 106 -心音肺音采集模块; 107 -医疗数据采集模块; 108 - 雷达建图定位导航模块;
109 -放置扫码管理动作规划模块 图 2是本申请说明书中医疗用机器人装置组成结构示意图; 附图 2标记:
201 -摄像头; 202 -机器人主系统; 203 -超声装置; 204 -超声探头;
205 -右上臂; 206 -医疗设备区; 207 -雷达; 208 -左上臂; 209 -血管放大器; 210-采集器; 211 -注射器; 212 -信息扫描装置; 213 -手臂固定装置; 214 -样本存储区; 215 -语音装置; 具体实施方式 本发 明的目的是设计取代人类工作的可远端控制机器人, 实现远端控制机器臂采集, 同 时有效解决自主采集, 采集口腔检测样本用于核酸检测, 生物特征检测, 采集血液样本, 采 集尿液, 粪便样本。 利用人工智能机器人技术, 自动化领域的自主采集, 机器臂动作规划, 深度摄像头采集人脸, 口腔, 手臂, 人体外部特征, 关节图像。 实现自主查房, 实现远端多用户 -机器人语音会诊, 多科室联合会诊, 远端语音医嘱, 多 用户语音交互, 多专家联合会诊。 实现远端控制机器人及自主采集超声图像, 口腔内采集唾液及其他生理特征物图像, 采 集血液, 远端控制 CT采集装置及 DR放射科图像, 共享图像, 解决了人为的诊断治疗失误, 实现机器人远端及自主静脉注射及肌肉注射, 自主配置药品, 巡回取放药品, 医疗设备, 提 高了智能采集的精准度和医疗数据异常识别的准确度 。 为了更好的理解上述技术方案, 下面 结合实施例及附图, 对本发明作进一步地的详细说明, 但本发明的实施方式不限于此。 本申请实施中的技术方案为解决上述技术问题的总体思路如下: 通 过机器人的主控制系统, 机器人搭载的超声图像采集装置, 口腔内采集装置, 血液采 集装置, CT图像以及 DR放射科图像远端控制采集与共享图像, 通过机器人箱载的血管放大 器, 静脉注射器, 及其他注射装置实现机器人远端控制, 自主注射, 自主配置药品, 通过雷 达及视觉摄像头, 巡回查房, 取放医疗设备。 本发明还提供了一种门诊病房多任务分配最优 化管理系统及医疗图片实时采集共享多用户 -机器人语音交互联合问诊方法;一种医护,患者, 机器人三方匹配的药品医疗器械 自主拾取发放管理方法; 一种医护, 患者, 机器人三方匹配 远端控制及自主采集样本, 注射管理方法; 一种自主定位识别人体器官特征位置, 采集, 分 类内部脏器超声, CT图像的方法。 实施例 1: 如图 1所示, 一种医疗用机器人装置包括: 机器人主系统 101 , 所述机器人主系统 101用于实现机器人的主控制, 语音模块 105和 机器人主系统 101连接, 用于用户间交互, 视觉识别模块 103, 用于人脸, 人体器官, 医疗 场景识别, 心音, 肺部螺音识别模块 106, 用于采集心音, 肺部螺音。 雷达 108用于自主移 动实时建图, 机器臂搭载血液采集, 注射动作规划模块 102用于图像样本, 口腔检测物样本, 血液样本, 尿液粪便样本采集, 静脉注射, 肌肉注射。 机器臂与拾取, 放置, 扫码, 管理动 作规划控制模块 109用于医疗器械, 药品拾取, 放置, 扫码, 管理。 语音模块 105, 所述语音模块用于采集医患者声音, 门诊病房场景声音。 机器人主控制 系统 101与用户间交互和语音引导, 语音命令, 语音交互。 视觉识别模块 103, 所述视觉识别模块 103中人脸识别, 识别患者用户, 医护管理员的 人脸, 用于患者及其对应样本采集, 医疗器械, 药品管理。 所述视觉识别模块 103中人体五 官识别, 识别人脸五官及其位置, 口腔位置, 用于采集需检测的口腔样本。 所述视觉识别模 块 103中人体特征位置识别, 识别腕部, 臂肘部, 手指各关节及其位置, 在血管放大器下, 腕静脉, 肘部静脉血管的位置, 用于血管定位, 血液采集, 静脉注射。 识别肩部关节, 腰部 关节, 用于近肩部肌肉注射位置识别, 定位, 远端及自主注射。 所述 的视觉识别模块 103中所述医疗场景识别, 识别门诊, 病房, 病患, 医生, 门牌字 母数字文字等, 以及语音模块采集 105的医疗场景语音, 对医疗场景综合识别。 所述 的视觉识别模块 103中所述医疗用品识别呼吸设备,负压设备,及 24小时监测设备, 及其他用于各专科的医疗器械。 用于遵照医嘱的医疗用品与识别的医疗用品, 患者人睑手环 信息码匹配管理。 所述 的视觉识别模块 103中所述药品识别药品的外标签的数字码, 二维码, 文字,颜色, 形状, 药品的名称, 数量与患者的人脸, 手环二维码, 数字码对应匹配管理。 所述的心音, 肺部螺音识别模块 106, 用于心音, 肺部螺音声纹特征提取, 利用改进的 声音识别方法, 智能识别心音, 螺音异常。 雷达自主移动, 医疗场景识别, 建图模块 108与所述的视觉识别模块 103中医疗场景识 别科室, 病房门牌字母数字文字, 床位号与雷达实时建图融合, 用于自主定位, 
导航移动至 对应科室, 病房, 床位位置。 血液采集, 注射动作规划模块 102, 与所述的视觉识别模块 103中手指, 耻末端, 手臂 各关节位置的识别, 利用血管放大器, 手臂固定装置, 定位趾端末位置, 手臂腕部, 肘部静 脉血管位置, 上臂部, 腰部关节肌肉注射位置, 应用采集针, 注射针头采集血液, 静脉注射, 肌肉注射。 机器臂自主采集, 移动, 放置血液样本至样本放置区。 口腔采集与所述的视觉识别模块 103中人脸五官识别, 定位口腔位置, 牙齿位置, 口腔 壁位置, 利用机器臂搭载的口腔采集器, 口腔采集器棉, 口腔镜, 规划移动, 左右前后方向 沿壁滑动, 采集动作, 精准采集唾液, 口腔内的生物检测物, 口腔内图像。 尿液, 粪便样本收纳动作规划模块, 所述尿液, 粪便样本收纳动作与所述的视觉识别模 块 103中用于机器人巡回与对应的病房, 病床, 患者及其对应二维码, 数字码匹配, 利用机 器臂自动识别并抓取, 移动, 放置尿液, 粪便样本在样本收集区。 医疗图像采集共享模块 104, 与机器人主系统 101连接, 用于采集超声图像 CT图像, 图 像共享, DR放射科图像远端控制采集与共享及远端会诊, 多科室联合会诊。 呼吸设备, 负压设备, 及 24小时监测设备的动作规划模块。 所述呼吸设备, 负压设备, 及 24小时监测设备的动作规划模块应用视觉识别模块的五官识别及身体特征识别, 识别口, 鼻, 耳, 眼, 身体的特征位置, 定位位置, 设计机器臂拾取, 移动, 放置, 佩戴, 摘取设备, 监测设备正常运行动作。 医疗用品, 药品取, 放, 配置, 管理模块 109, 用于药品, 治疗设备, 康复设备, 拾取, 放置, 扫描数字码, 二维码, 有效管理, 配送设备。 应用视觉识别模块 103识别患者人脸, 手环扫描二维码比对床位, 手牌信息, 医疗器械及药物的数字码, 二维码匹配, 比对医嘱信 息。 自主取物, 扫描, 管理医疗器械。 如图 2所示, 一种最优化任务管理系统及一种医疗用机器人装置使用方法如下: 利用最优化任务管理系统,医护管理员为多个科室,病房的的患者排配时间 -及其各时间 段对应的任务, 将所有的任务添加到最优化任务管理系统, 医疗用机器人装置按照日期, 时 间、 对应科室, 病房, 收到最优化任务管理系统管理员分配的任务。 管理员用户, 专家用户登录最优化任务管理系, 远端控制机器人, 管理各自科室, 病房 管辖区下的机器人, 添加, 修改, 删除, 查询, 动态实时排班机器人各种任务, 与医疗区呼 叫系统连接, 远端问诊, 联合会诊治疗管辖区患者, 发送医嘱信息, 接受患者留言, 回复患 者留言。 按照时间段及对应时间段的机器人任务, 各时间段的对任务用雷达模块 108及视觉模块 103进行路径规划。 并应用医疗用品, 药品取, 放, 配置, 管理模块 109, 血液采集, 注射动 作规划模块 102, 语音模块 105, 超声, CT, DR图像采集模块 104分别处理不同任务。 当机器人在固定时间段接收到发放, 配置任务时, 管理配置任务利用机器人动作规划的 医疗用品, 药品取, 放, 配置, 管理模块 109, 步骤如下:
S1、 管理员发布医嘱, 排配任务。
52、 机器人利用语音装置 215语音识别模块 105, 语音合成技术, 语音记录, 语音转文 字识别医嘱。
53、 当机器人在固定时间段接收到发放, 配置任务时, 机器人利用视觉识别模块 103, 识别器材, 药品及其对应的位置。
54、 机器人利用雷达 207及雷达自主移动, 医疗场景识别, 建图模块 108, 定位, 导航, 自主移动到器材, 药品位置放置区。
55、 机器人利用医疗用品, 药品取, 放, 配置, 管理模块 109, 拾取器材, 药品, 扫描 信息码。
56、 机器人利病患位置信息包括: 病患病房, 科室, 床位位置信息。 雷达定位, 导航自 主移动到病床。
57、机器人利用视觉识别模块 103的医疗场景识别科室,病房门牌字母数字文字,病床号 利用机器人视觉模块识别人脸, 核对匹配, 如果一致, 执行步骤 8如果不一致, 重新定位导 航。
58、 机器人利用 212信息扫描装置, 扫描患者手环的数字码, 二维码, 与器材, 药品上 的二维码, 数字码, 医嘱信息数字码核对, 匹配。 如果扫码结果正确, 发放器材, 药品。 否 则返回信息至管理员。 S9、 利用机器臂左上臂 208, 右上臂 205, 放置发放器材药品至药品箱器械放置区.
S10、 结束此时间段的任务。 如果未完成, 将任务移入下一时间段。 当处理采集任务时, 机器人利用动作规划的血液采集, 注射动作规划模块 102。 采集注 射步骤如 卜:
S1、 管理员发布医嘱, 排配任务。
52、 机器人利用语音装置 215语音模块 105, 语音合成技术, 语音记录, 语音转文字识 别医嘱。
53、 机器人利用病患位置信息, 病患病房, 科室, 床位位置信息。 雷达 207导航自主移 动到病床。
54、 机器人利用摄像头 201及视觉模块 103, 识别人脸, 五官, 特征, 及其位置。 识别 手指, 趾末端, 手臂各关节, 各关节位置。应用血管放大器 209, 手臂固定装置 213, 定位趾 端末位置, 手臂腕部, 肘部静脉血管位置, 上臂部肌肉注射位置, 位置信息。
55、机器人依照步骤 S4的位置信息, 按照动作规划模块, 采集口腔, 图像, 血液, 注射 动作。 步骤 S5中, 所述血液采集, 注射动作规划模块, 指端末梢血液采集模块, 采集器 210注 射器针 211 , 在识别手指, 趾末端, 手臂各关节位置的基础上, 应用血管放大器 209, 手臂固 定装置 213, 定位趾端末位置, 手臂腕部, 肘部静脉血管位置, 上臂部肌肉注射位置, 应用 采集器 210, 采集血液, 利用注射器 211, 静脉注射, 肌肉注射。 步骤 S5中,所述口腔采集动作规划模块,应用视觉识别模块 103的人脸五官识别,识别, 定位口腔位置, 牙齿位置, 口腔壁位置, 利用机器臂搭载的口腔采集器 210, 口腔采集器棉 210, 口腔镜 210, 规划移动, 左右前后方向沿壁滑动, 采集动作, 精准采集唾液, 口腔内的 口腔特征物, 口腔内图像。 步骤 S5中, 所述尿液, 粪便样本收集模块, 块用于机器人巡回与对应的病房, 病床, 患 者利用信息扫描装置 212扫描其对应二维码, 数字码匹配, 利用机器臂机械爪右上臂 205, 左上臂 208自动识别并抓取, 移动, 放置尿液, 粪便样本在样本放置区 214。
56、 机器人利用雷达 207定位导航自主移动到样本回收区。
57、 返回任务完成信息至管理员。 如果未完成, 将任务移入下一时间段。 当处理查房, 多科室多专家联合会诊, 远端联合问诊任务, 利用语音装置 215, 语音模 块 105, 医疗图片实时采集共享多用户 -机器人语音交互联合问诊方法, 所述方法包括以下步 骤:
51、利用管理员通过机器人平台上搭载的语音装置 215, 及其连接语音模块 105, 与其他 用户连通信。
52、 机器人利用语音识别, 语音合成技术, 语音解说患者病情。
53、 管理员利用机器人利用超声, CT, DR图像采集模块 104, 实时采集超声图像, CT图 像。 采集步骤如 S6。
S4、管理员利用机器人平台共享语音, 已采集的及实时采集的医疗图片, 文字, 多用户- 机器人共享医疗信息。
S5 利用机器人搭载的基本的医疗设备区的血压仪, 血糖仪, 体温计, 听诊器, 心率设备 采集基本的医疗信息, 并多用户共享医疗信息。
56、 管理员利用机器人平台搭载的实时语音交互, 语音识别模块 105, 实时多用户语音 会话, 语音转文字附加图片信息, 记录多用户语音交互, 语音会议。
57、管理员人体器官特征位置及超声图像 CT图像的内部脏器分类识别, 自主定位, 采集 超声, CT图像的步骤:
Stepl、机器人视觉识别模块 103识别器官外部特征包括: 肩关节, 乳房及乳房头, 肚脯 肚脐, 特征生殖器, 腰关节及其对应的人体外部位置区坐标。
Step2、依据各器官外部特征所对应的人体外部位置区坐标,机器人机器臂搭载的超声探 头 203, 及超声装置 204,扫描外部位置采集区。
Step3、 远端主控制系统 202及自主机器臂搭载的超声探头 203, 依照机器臂图像采集动 作规划模块的动作, 移动, 扫描人体采集区。超声探头 203及超声装置 204采集的图像信息。
Step4、 机器人主系统 202及视觉识别模块 103输入超声, CT图像内部轮廓, 各器官的 特征值, 利用深度神经网络方法及权值优化器, 得到输出值及内部器官分类识别结果。
Step5、 依据输出结果, 精准分类, 识别人体器官的超声, CT图像, 识别结果关联各器 官疾病智能识别系统。 发布识别结果及其对应的疾病征兆, 疾病信息至机器人主系统的管理 员及用户。

Claims

权 利 要 求 书
1. 一种医疗用机器人装置、 系统及方法, 其特征在于, 一种医疗用机器人装置包括: 机器人主 系统, 所述机器人主系统模块, 用于连接并控制机器人装置模块, 包括: 语音 模块, 视觉模块及视觉识别模块, 心音, 肺部螺音识别模块, 雷达定位导航模块, 采集, 注 射模块, 医疗设备佩戴使用动作规划模块, 医疗用品, 药品取放配置管理的动作规划模块; 摄像装置及视觉识别模块,机器人主系统与摄像装置连接, 用于采集并识别图像,包括: 人脸识别, 五宫识别, 人体特征位置识别, 医疗场景识别, 医疗用品识别, 药品识别, 所述 的人体特征位置识别是指: 关键关节位置及其识别, 以及人体的其他特殊特征; 语音装置及 语音模块, 机器人主控制系统与语音装置连接, 所述语音模块包括: 声音采 集装置、麦克装置、扬声器、拾音装置, 用于采集并识别声音, 用户间管理员间的语音交互, 语音命令, 语音文字互转, 语音合成, 声纹识别; 雷达 自主移动, 视觉识别建图模块, 机器人主控制系统与雷达, 摄像装置, 移动底座连 接, 实现雷达自主移动医疗场景识别, 建图; 采集 , 注射模块, 机器人主控制系统与摄像装置, 机器臂, 采集注射装置, 超声探头, 超声装置, 其他医疗图像采集控制装置, 采集器, 注射器, 血管放大器, 手臂固定装置连接, 所述模块包括: 血液样本采集, 注射动作规划模块, 口腔唾液及身体特征物采集动作规划模 块, 尿液粪便样本收纳动作规划模块, 医疗图像采集模块; 医疗设备佩戴使用动作规划模块及医疗用品, 药品取放配置管理的动作规划模块, 机器 人主控制系统与所述的机器人主控制系统与摄像装置, 雷达, 机器臂, 信息扫描装置连接, 所述医疗设备包括: 机器人搭载的医疗设备区的血压仪, 血糖仪, 体温计, 听诊器, 心率设 备及医疗区的呼吸设备, 呼吸设备, 负压设备, 及 24小时监测设备的摘戴动作规划及医疗用 品, 药品取, 放, 配置, 管理;
2. 根据权利要求 1所述的一种医疗用机器人装置,其特征在于,所述的语音装置及语音模块, 采集并识别医患声音, 门诊病房场景声音, 机器人主控制系统与多用户, 管理员间的语音交 互, 语音命令, 语音文字互转, 语音合成, 声纹识别。
3. 根据权利要求 1所述的一种医疗用机器人装置, 其特征在于,所述雷达自主移动医疗场景 识别建图模块是将雷达, 移动底座, 摄像装置与主系统连接, 雷达自主定位, 导航, 实时建 图, 及视觉识别人脸及医疗场景, 医疗场景包括: 科室, 病房门牌字母数字文字床位号与雷 达实时建图融合, 自主定位, 导航, 移动至对应科室, 病房, 床位位置。
4. 根据权利要求 1所述的一种医疗用机器人装置, 其特征在于, 采集、注射模块, 机器人主 控制系统与机器臂连接, 所述的采集注射模块是通过管理员调解设置参数及通过神经网络改 进方法训练机器人机器臂学习规划动作及自适应调解设置动作规划参数, 用于动作规划, 实 现采集, 注射; 所述的采集、 注射模块包括: 血液样本采集, 注射模块, 口腔检测样本采集 模块, 尿液, 粪便样本收纳管埋模块, 医疗图像采集共享模块; 所述血 液样本采集, 注射模块是通过采集器, 在识别趾末端, 手臂各关节位置的基础上, 应用血管放大器, 手臂固定装置, 定位手, 眦端末位置, 手臂腕部, 肘部静脉血管位置, 应 用采集器, 采集血液, 利用注射器, 在肩关节腰关节部肌肉注射位置, 应用注射器静脉注射, 肌肉注射; 所述 的口腔喉部鼻腔采集模块,所述的口腔喉部鼻腔采集模块包括口腔喉部鼻腔采集器, 口腔喉部鼻腔采集器棉, 口腔喉部鼻腔镜, 应用视觉识别模块的人脸五官识别, 识别口腔喉 部鼻腔位置, 牙齿喉部位置, 口腔喉部鼻腔壁位置, 利用机器臂搭载的口腔喉部鼻腔采集器, 口腔喉部鼻腔采集器棉, 口腔喉部鼻腔镜, 规划移动, 左右前后方向沿壁滑动, 采集动作, 精准采集唾液, 口腔喉部鼻腔内的口腔喉部鼻腔特征物, 口腔喉部鼻腔内图像; 所述 的尿液, 粪便样本收纳, 管理模块, 所述尿液, 粪便样本收纳动作规划模块用于机 器人巡回与对应的病房, 病床, 患者及其对应一维码, 数字码匹配, 利用机器臂自动识别并 抓取, 移动, 放置尿液, 粪便样本在样本收集区; 所述 的医疗图像采集共享模块, 其特征在于, 所述医疗图像采集共享模块用于采集超声 图像, CT图像, DR放射科图像, MRI核磁图像远端控制采集与共享及远端会诊, 多科室联合 会诊, 图像共享。
5. 根据权利要求 1所述的一种医疗用机器人装置,其特征在于,所述的医疗设备佩戴使用动 作规划模块及医疗用品, 药品取放配置管理模块, 机器人主控制系统与机器臂连接, 通过管 理员调解设置参数及通过神经网络改进方法训练机器人学习规划动作及自适应调解设置动作 规划参数, 用于机器人搭载的医疗设备区的血压仪, 血糖仪, 体温计, 听诊器, 心率设备及 医疗区的呼吸设备, 负压设备, 24小时监测设备及其他各项医疗设备使用的动作规划; 所述的医疗设备佩戴使用动作规划模块是指机器人连接的设备及医疗区的呼吸设备, 负 压设备, 24小时监测设备及其他各项医疗设备与机器人主系统连接, 由机器人主系统控制, 机器人应用视觉识别模块的五官识别及身体特征识别, 识别口, 鼻, 耳, 眼, 身体的特征位 置, 定位位置, 设计及自适应学习规划机器臂拾取, 移动, 放置, 佩戴, 摘取, 使用设备, 监测设备正常运行动作; 进一步, 所述的医疗设备区的听诊器与机器人主系统连接, 由机器人主系统控制, 所述 心音, 肺部螺音识别模块连接, 采集并识别心音, 肺部螺音肺部螺音声纹特征提取, 利用改 进的声音识别算法, 智能识别心音, 螺音异常; 所述的医疗用品, 药品取放配置管理模块, 机器人主控制系统与机器臂连接, 机器臂拾 14 取, 放置, 扫描数字码, 二维码, 有效管理药品, 治疗设备, 康复设备及其他各医疗用品, 配送设备, 应用视觉识别模块识别患者人脸, 手环扫描二维码比对床位, 手牌信息, 医疗器 械及药物的数字码, 一维码匹配, 比对医嘱信息, 自主取物, 扫描信息, 归还, 管理医疗器 械。
6. 一种医疗用机器人装置、 系统及方法, 其特征在于, 一种最优化任务管理系统, 包括: 一 种医疗用机器人装置、 多个科室的医护任务管理子系统和 1个呼叫于系统, 所述医疗用机器 人装置为上述任一方案中医疗用机器人装置, 多个科室的医护任务管理子系统和 1个呼叫系 子系统与机器人主控制系统连接在所述的最优化任务管理系统平台。
7. 一种医疗用机器人装置、 系统及方法, 其特征在于, 一种医疗图片实时采集共享多用户 - 机器人语音交互联合问诊方法, 所述方法包括以下步骤:
51、利用管理员通过机器人平台上搭载的语音装置, 及其连接语音模块, 与其他用户连接, 通信;
52、 机器人利用语音识别, 语音合成技术, 语音解说患者病情;
53、管理员利用机器人平台搭载的消息信息, 图片数据服务订阅, 发布图像, 多用户 -机器 人共享医疗信息, 如图片, 语音;
54、 管理员利用机器人平台搭载的实时语音交互, 语音识别模块, 实时多用户语音会话, 语音转文字附加图片信息, 记录多用户语音交互, 语音会议。
8. 一种医疗用机器人装置、 系统及方法, 其特征在于, 一种医护, 患者, 机器人三方匹配的 药品医疗器械自主拾取发放管理方法, 所述管理方法包括以下步骤:
S1、 管理员通信模块, 发布医嘱消息, 服务, 机器人语音模块订阅接受医嘱消息, 病患用户订阅接受医嘱消息, 服务;
52、 机器人利用语音识别, 语音合成技术, 语音记录, 语音转文字识别医嘱:
53、 机器人利用视觉识别模块, 识别器材, 药品及其对应的位置信息;
54、 机器人利用视觉模块, 通信模块发布的器材, 药品位置信息服务, 雷达定位导航模块 订阅位置信息服务, 自主移动到器材, 药品位置放置区;
55、 机器人利用动作规划模块, 拾取器材, 药品, 扫描数字码, 二维码;
56、 机器人利用通信模块, 发布病患位置信息包括: 病患病房, 科室, 床位位置信息, 雷 达定位导航模块订阅病患位置信息, 自主移动到病床;
57、机器人利用视觉识别模块的医疗场景识别科室,病房门牌字母数字文字,病床号利用机 器人视觉模块识别人脸, 核对匹配, 如果一致, 执行步骤 8如果不一致, 重新定位导航;
S8、 机器人利用动作规划模块, 扫描患者手环的数字码, 二维码, 与器材, 药品上的二维 15 码, 数字码, 医嘱信息数字码核对, 匹配, 如果扫码结果正确, 发放器材, 药品, 否则返回 信息至管理员;
S9、 利用机器臂动作规划模块, 放置, 发放器材, 药品至药品箱, 器械放置区;
S10、 结束此时间段的任务。
9. 一种医疗用机器人装置、 系统及方法, 其特征在于, 一种医护, 患者, 机器人三方匹配远 端控制及自主采集样本, 注射管理方法, 所述采集方法包括以下步骤:
51、 管理员通信模块, 发布医嘱消息, 服务, 机器人语音模块订阅接受医嘱消息, 病患 用户订阅接受医嘱消息, 服务:
52、 机器人利用语音识别, 语音合成技术, 语音记录, 语音转文字识别医嘱;
53、 机器人利用通信模块, 发布病患位置信息包括: 病患病房, 科室, 床位位置信息, 雷达定位导航模块订阅病患位置信息, 自主移动到病床;
54、 机器人利用视觉模块, 识别通信模块发布信息服务, 雷达定位导航模块订阅位置信 息服务, 自主移动到器材, 药品位置放置区;
55、 机器人利用视觉识别模块识别, 人脸, 五官, 特征, 及其位置, 识别手指, 眦末端, 手臂各关节, 各关节位置, 应用血管放大器, 手臂固定装置, 定位趾端末位置, 手臂腕部, 肘部静脉血管位置, 上臂部肌肉注射位置, 位置信息;
56、 机器人利用通信模块发布采集位置信息, 机器臂订阅固定装置, 采集位置信息, 注 射位置信息, 动作规划模块订阅位置信息;
57、机器人依照步骤 S6的位置信息, 按照动作规划模块, 采集口腔, 图像, 血液, 注射 动作, 所述采集模块包括: 血液采集, 注射动作规划模块, 口腔采集动作规划模块, 尿液, 粪便样本收纳动作规划模块; 进一步, 所述采集动作规划模块, 步骤 S7中, 所述血液采集, 注射动作规划模块, 指端末梢血液采集模块, 采集注射针头 模块, 在识别手指, fit末端, 手臂各关节位置的基础上, 应用血管放大器, 手臂固定装置, 定位趾端末位置, 手臂腕部, 肘部静脉血管位置, 上臂部肌肉注射位置, 应用采集针, 注射 针头采集血液, 静脉注射, 肌肉注射; 步骤 S7中,所述口腔采集动作规划模块, 所述口腔采集动作规划模块,应用视觉识别模 块的人脸五官识别, 识别, 定位口腔位置, 牙齿位置, 口腔壁位置, 利用机器臂搭载的口腔 采集器, 口腔采集器棉, 口腔镜, 规划移动, 左右前后方向沿壁滑动, 采集动作, 精准采集 唾液, 口腔内的口腔特征物, 口腔内图像; 步骤 S7中, 所述尿液, 粪便样本收集模块, 所述尿液, 粪便样本收纳动作规划模块用于 16 机器人巡回与对应的病房, 病床, 患者及其对应二维码, 数字码匹配, 利用机器臂自动识别 并抓取, 移动, 放置尿液, 粪便样本在样本收集区:
58、 机器人利用通信模块发布回收区位置信息, 雷达定位导航模块订阅回收区位置信息 服务, 自主移动到, 唾液样本回收区, 生物信息样本回收区, 血液样本回收区, 尿液样本回 收区, 粪便样本回收区, 利用机器臂动作模块, 放置, 回收样本;
59、 返回任务完成信息至管理员, 如果未完成, 将任务移入下一时间段;
10. 一种医疗用机器人装置、 系统及方法, 其特征在于, 医疗图像采集共享模块中的一种机 器人自主定位并识别人体器官特征位置方法, 分类图像的内部脏器, 图像的采集方法包括以 下步骤: 人体器官特征位置及医疗图像的内部脏器分类识别方法, 包括以下步骤:
51、 建立人体器官特征模型, 包括: 肩关节, 乳房及乳头, 肚脯肚脐, 特征生殖器, 腰 关节;
52、 抽取图像器官的内部轮廓, 各器官的特征值及其对应的外部特征所对应的人体外部 位置区;
53、 输入各器官外部特征值所对应的人体内部器官图像的特征值, 改进深度神经网络方 法及权值优化器, 通过图像训练, 得到输出值及内部器官分类, 器官识别结果;
54、 输出结果, 精准分类, 识别人体器官的图像; 机 器人自主定位, 采集医疗图像的方法, 包括以下步骤:
S1、 机器人视觉识别模块发布各器官外部特征所对应的人体外部位置区坐标;
S2、依据各器官外部特征所对应的人体外部位置区坐标,机器人机器臂搭载的超声探头, 主系统订阅外部位置采集区位置及坐标;
53、 远端主控制系统及自主机器臂搭载的超声探头依照订阅的采集区位置, 依照机器臂 图像采集动作规划模块的动作, 移动, 扫描人体采集区, 超声探头及超声装置发布采集的图 像信息, 机器人主系统及视觉识别模块订阅图像信息;
54、 机器人主系统及视觉识别模块输入图像内部轮廓, 各器官的特征值, 利用深度神经 网络方法及权值优化器, 得到输出值及内部器官分类识别结果;
55、 依据输出结果, 精准分类, 识别人体器官的图像, 识别结果关联各器官疾病智能识 别系统, 发布识别结果及其对应的疾病征兆, 疾病信息至机器人主系统的管理员及用户。
PCT/CN2021/000162 2020-08-05 2021-07-29 一种医疗用机器人装置、系统及方法 WO2022027921A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2021321650A AU2021321650A1 (en) 2020-08-05 2021-07-29 Medical robotic device, system, and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010780479.0 2020-08-05
CN202010780479.0A CN111916195A (zh) Medical robotic device, system, and method

Publications (1)

Publication Number Publication Date
WO2022027921A1 true WO2022027921A1 (zh) 2022-02-10

Family

ID=73287855

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/000162 WO2022027921A1 (zh) 2020-08-05 2021-07-29 Medical robotic device, system, and method

Country Status (3)

Country Link
CN (1) CN111916195A (zh)
AU (1) AU2021321650A1 (zh)
WO (1) WO2022027921A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114886476A (zh) * 2022-07-14 2022-08-12 清华大学 Automatic throat swab collection robot
CN115781686A (zh) * 2022-12-26 2023-03-14 北京悬丝医疗科技有限公司 Robotic arm for remote pulse diagnosis and control method
CN116129112A (zh) * 2022-12-28 2023-05-16 深圳市人工智能与机器人研究院 Oral three-dimensional point cloud segmentation method for a nucleic acid testing robot, and robot

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021254427A1 (zh) * 2020-06-17 2021-12-23 谈斯聪 Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition
CN111916195A (zh) * 2020-08-05 2020-11-10 谈斯聪 Medical robotic device, system, and method
WO2021253809A1 (zh) * 2020-06-19 2021-12-23 谈斯聪 Integrated device, system, and method for blood collection and analysis and intelligent image recognition and diagnosis
CN114800538A (zh) * 2021-01-21 2022-07-29 谈斯聪 Companion and care robot device, adaptive learning system, and method
CN112951230A (zh) * 2021-02-08 2021-06-11 谈斯聪 Remote and autonomous laboratory robot device, management system, and method
CN113110325A (zh) * 2021-04-12 2021-07-13 谈斯聪 Multi-arm sorting and mobile delivery device, optimized management system, and method
CN115192051A (zh) * 2021-04-13 2022-10-18 佳能医疗系统株式会社 Medical imaging apparatus, medical imaging system, and auxiliary examination method in a medical imaging apparatus
CN112990101B (zh) * 2021-04-14 2021-12-28 深圳市罗湖医院集团 Machine-vision-based facial organ localization method and related equipment
CN113425332A (zh) * 2021-06-29 2021-09-24 尹丰 Integrated nucleic acid collection and vaccination device and method
CN113478457A (zh) * 2021-08-03 2021-10-08 爱在工匠智能科技(苏州)有限公司 Medical service robot
CN113858219A (zh) * 2021-08-23 2021-12-31 谈斯聪 Medical robotic device, system, and method
CN113855067A (zh) * 2021-08-23 2021-12-31 谈斯聪 Method for fused recognition of visual and medical images and autonomous positioning and scanning
CN113855068A (zh) * 2021-08-27 2021-12-31 谈斯聪 Method for intelligently recognizing chest organs and autonomously positioning and scanning them
CN113855250A (zh) * 2021-08-27 2021-12-31 谈斯聪 Medical robotic device, system, and method
CN114310957A (zh) * 2022-01-04 2022-04-12 中国科学技术大学 Robot system for medical testing and testing method
WO2023167830A1 (en) * 2022-03-01 2023-09-07 The Johns Hopkins University Autonomous robotic point of care ultrasound imaging
CN117245635B (zh) * 2022-12-12 2024-10-15 北京小米机器人技术有限公司 Robot, control method and device therefor, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150273697A1 (en) * 2014-03-27 2015-10-01 Fatemah A.J.A. Abdullah Robot for medical assistance
CN107030714A (zh) * 2017-05-26 2017-08-11 深圳市天益智网科技有限公司 Medical nursing robot
CN107322602A (zh) * 2017-06-15 2017-11-07 重庆柚瓣家科技有限公司 Home service robot for telemedicine
CN107788958A (zh) * 2017-10-20 2018-03-13 深圳市前海安测信息技术有限公司 Medical monitoring robot and medical monitoring method
WO2019175675A2 (en) * 2019-07-01 2019-09-19 Wasfi Alshdaifat Dr robot medical artificial intelligence robotic arrangement
CN110477956A (zh) * 2019-09-27 2019-11-22 哈尔滨工业大学 Intelligent scanning method for an ultrasound-image-guided robotic diagnostic system
CN111916195A (zh) * 2020-08-05 2020-11-10 谈斯聪 Medical robotic device, system, and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120130739A1 (en) * 2010-11-21 2012-05-24 David Crane Unsupervised Telemedical Office for Remote &/or Autonomous & Automated Medical Care of Patients
CN206780416U (zh) * 2017-05-23 2017-12-22 周葛 Intelligent medical assistant robot
CN111358439A (zh) * 2020-03-14 2020-07-03 厦门波耐模型设计有限责任公司 General practitioner robot system


Also Published As

Publication number Publication date
CN111916195A (zh) 2020-11-10
AU2021321650A1 (en) 2023-04-13

Similar Documents

Publication Publication Date Title
WO2022027921A1 (zh) Medical robotic device, system, and method
US20210030275A1 (en) System and method for remotely adjusting sound acquisition sensor parameters
AU2012219076B2 (en) System and method for performing an automatic and self-guided medical examination
CN107752984A (zh) Highly intelligent general-practice medical robot based on big data
WO2021254444A1 (zh) Robot and platform for facial-feature and surgical medical data acquisition, analysis, and diagnosis
US20210166812A1 (en) Apparatus and methods for the management of patients in a medical setting
CN109044656B (zh) Medical nursing equipment
WO2023024399A1 (zh) Medical robotic device, system, and method
CN111844078A (zh) Intelligent nursing robot for assisting nurses in clinical work
CN108942952A (zh) Medical robot
US20200027568A1 (en) Physician House Call Portal
WO2019100585A1 (zh) Fundus-camera-based traditional Chinese medicine preventive-care monitoring system and method
WO2012111013A1 (en) System and method for performing an automatic and remote trained personnel guided medical examination
WO2023024397A1 (zh) Medical robotic device, system, and method
Gritsenko et al. Current state and prospects for the development of digital medicine
CN115844346A (zh) Wireless vital-sign parameter monitoring device applied to disease examination, observation, and treatment
CN110660487A (zh) Closed-loop neonatal pain management system and method
CN108577884A (zh) Remote auscultation system and method
JP2022000763A (ja) System and method for performing an automatic and remote trained-personnel-guided medical examination
EP4371493A1 (en) Method for ecg reading service providing
TW202044268A (zh) Medical robot and medical record information integration system
CN115644807A (zh) Traditional Chinese medicine analysis system based on facial and tongue image acquisition and method thereof
CN115813358A (zh) Wireless vital-sign parameter detector applied to health management

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21852262

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021852262

Country of ref document: EP

Effective date: 20230306

ENP Entry into the national phase

Ref document number: 2021321650

Country of ref document: AU

Date of ref document: 20210729

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 21852262

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.09.2023)
