WO2021254427A1 - Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition - Google Patents

Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition

Info

Publication number
WO2021254427A1
WO2021254427A1 (PCT application No. PCT/CN2021/100562)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
image
ultrasound
module
data
Prior art date
Application number
PCT/CN2021/100562
Other languages
English (en)
French (fr)
Inventor
谈斯聪
Original Assignee
谈斯聪
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202010556720.1A external-priority patent/CN111973228A/zh
Priority claimed from CN202010780479.0A external-priority patent/CN111916195A/zh
Application filed by 谈斯聪 filed Critical 谈斯聪
Priority to CN202180008741.2A priority Critical patent/CN116507286A/zh
Priority to AU2021292112A priority patent/AU2021292112A1/en
Publication of WO2021254427A1 publication Critical patent/WO2021254427A1/zh

Links

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08: Detecting organic movements or changes, e.g. tumours, cysts, swellings

Definitions

  • The invention belongs to the technical field of artificial-intelligence robotic health-examination equipment, and relates to medical data analysis and intelligent medical-image recognition systems.
  • The purpose of the present invention is to provide a health-examination system based on an artificial intelligence robot: a physical-examination medical-data acquisition and analysis robot platform built by combining the robot system with various data-acquisition devices and other nodes.
  • In the artificial-intelligence robot medical-data acquisition and analysis system for health examination, the robot device includes:
  • Robot main system: the main-system module realizes the robot's master control, handles communication from the camera and medical-ultrasound acquisition modules to the medical-data analysis module, and manages the interaction among the arm motion-planning control module, the voice module, and the user.
  • Camera and sensor data-acquisition module: collects ultrasound medical images and other measured medical data from the camera.
  • Voice module: provides interaction and voice guidance between the main control system and the user.
  • Medical-data analysis module: compares medical data against standard values and flags abnormal readings.
  • Image-classification module: classifies ultrasound medical images, including intra-organ ultrasound images.
  • Ultrasound image module: the data-acquisition module for the medical ultrasound equipment; it collects the equipment's medical data and medical images.
  • Robot-arm motion-planning acquisition module: handles motion planning and the interaction between arm motion and the user.
  • In this scheme, the robot's main control system works with the camera and sensor data-acquisition module and the ultrasound module to gather medical data from cardiac and other detection equipment, together with ultrasound images of the internal organs. Guided by the arm motion-planning acquisition module, the voice module, and remote voice commands, it strengthens the interaction between robot and user and achieves intelligent acquisition.
  • Medical-data analysis compares readings against standard values to intelligently flag abnormal data.
  • The image-classification module accurately classifies ultrasound images, intelligently locates scan positions, and classifies intra-organ images. This improves the precision of intelligent acquisition and of abnormality detection, and makes remote acquisition, classification, analysis, and recognition of medical images more flexible and feasible.
  • The robot main system realizes master control, data acquisition, image classification, voice interaction, and action interaction, enabling intelligent acquisition, intelligent analysis of abnormal data, and both local and remote intelligent recognition.
  • A camera recognizes faces, colour marks, and external organ-collection areas, while the medical detection and ultrasound equipment collects medical data and intra-organ ultrasound images.
  • The voice module includes remote voice-command acquisition and voice recognition, for interaction and voice guidance between the main control system and the user.
  • The action module comprises an action-planning module and an action-acquisition module, used for action interaction between the main control system and the user and for capturing images of the robotic arm's movements.
  • In a further refinement, the action module also includes an ultrasound-section acquisition plan and a cardiac medical-data collection plan.
  • Method for reaching the arm toward an abdominal target (the head tracks the ultrasound collector):
  • STEP 1: Set the target
  • STEP 2: Set the target parameters (target name, left/right arm joints)
  • STEP 3: Set the communication target
  • STEP 4: Publish the target and parameters (target pose, pose marker)
  • STEP 5: Set the pose marker
  • STEP 6: Set the target's head-frame id, target pose, and orientation value
  • STEP 7: Set the timestamp
  • STEP 8: Set the pose marker as the coordinate origin with an orientation value
  • The vision camera communicates with the ultrasound collector as follows:
  • Step 1: Initialize the point-cloud node
  • Step 2: Set the gripper publisher-node parameters (target name, pose marker)
  • Step 3: Set the camera subscriber-node parameters (point cloud, recent point-cloud list)
  • Step 4: Define and obtain the list of nearest point clouds
  • Step 5: Define the nearest points and convert them into a point array
  • Step 6: Compute the centre of gravity (COG)
  • Step 7: Confirm the parameters and return the point-cloud information
  • Step 8: Set the pose orientation value as a point object
  • Step 9: Publish the COG as the target pose
  • Step 10: Set the target parameters (pose marker, timestamp, head-frame id, COG target pose, orientation value)
  • Step 11: Publish the gripper's target node
  • Ultrasound image acquisition method:
  • Step 1: Set the allowable position and attitude error
  • Step 2: Allow re-planning when motion planning fails
  • Step 3: Set the reference coordinate frame of the target position
  • Step 4: Set the time limit for each motion plan
  • Step 5: Set the placement positions of the medical bed, arms, and legs, including the bed height and the arm and leg placement areas
  • Step 6: Build the examination-and-diagnosis DEMO for the bed, arm, and leg positions (bed ID and pose; left-arm ID and pose; right-arm ID and pose; left-leg ID and pose; right-leg ID and pose) and add these parameters to the DEMO
  • Step 7: Attach colours, AR tags, and other special marks to the bed, arm, and leg positions
  • Step 8: Set the position target, i.e. the movement position (colour labels among the body-position marks for lying supine, on the left side, and on the right side)
  • Step 9: Set the scene colours
  • Step 10: Set the colour labels for the supine, left-side, and right-side postures and other special marks
  • Step 11: Apply the colours to the DEMO: initialize the planning-scene object, monitor and apply scene differences, set the colours, and publish the colour labels for the supine, left-side, and right-side scenes and other special marks
  • A method for patient face recognition, recognition of the external positions of human organs, and colour-mark recognition comprises the following steps:
  • Using the position images collected outside the body and the external position information of the organ-collection area, an improved deep neural network intelligently recognizes face images, joint images, and colour-marked images, accurately locates each organ's external collection site, and collects data intelligently.
  • An improved machine-learning classification method for classifying organ images comprises the following steps.
  • A disease-recognition method under the deep-neural-network organ model comprises the following steps.
  • By collecting data with the camera and ultrasound probe carried by the robot, the present invention solves the prior-art problems of low examination efficiency, difficult data collection, and inaccurate data collection.
  • The intelligent examination research-and-development platform enables health management: it effectively detects, analyzes, and identifies abnormalities of the heart, breast, and abdominal organs, and supports intelligent and remote identification of problems within the ultrasound cavity, including abnormal organ diseases and other health issues.
  • The ultrasound probe collects data such as images of the heart and internal organs, enabling accurate analysis, classification of abnormal data for each organ, and accurate identification of common conditions such as internal-organ abnormalities and heart disease.
  • FIG. 1 is a schematic structural diagram of the physical-examination medical-data acquisition and analysis robot in Embodiment 1 of the present application.
  • FIG. 2 is a schematic diagram of the camera and ultrasound image-acquisition modules in Embodiment 1 of the present application.
  • FIG. 3 is a positioning diagram of the ultrasound collection positions on the human body.
  • Labels in FIG. 1: 100-robot main system; 101-voice module; 102-medical-image acquisition module; 103-robot-arm motion-planning module; 104-camera image-acquisition module.
  • Labels in FIG. 2: 10-robot main-control-system simulation device; 20-camera simulation device; 30-voice module; 40-radar mobile base; 50-image-acquisition device module; 60-robot-arm module; 100-human face; 300-external body position corresponding to an organ (internal-organ collection area).
  • Labels in FIG. 3: 200-colour mark; 400-shoulder joint; 601-atrium; 602-breast; 603-liver; 604-spleen and stomach; 605-kidney; 606-uterus, bladder, and ovaries; 607-prostate.
  • The embodiments of the application provide a physical-examination robot system, an ultrasound device for medical-data acquisition and analysis, and an organ-classification disease-identification method. They solve the prior-art problems of low examination efficiency, difficulty with remote and autonomous data collection, and inaccurate data collection, achieving effective detection, data analysis, identification of bodily abnormalities, intelligent recognition, and identification of diseases within the ultrasound cavity and of health problems such as abnormal organ diseases.
  • The robot device includes: the robot main system, whose module realizes the robot's master control, handles communication from the camera, ultrasound, and equipment data-acquisition modules to the medical-data analysis module, and manages the interaction among the arm motion-planning control module, the voice module, and the user.
  • The data-collection module collects ultrasound medical images and measured medical data such as cardiac readings.
  • The voice module provides interaction and voice guidance between the main control system and the user.
  • The image-classification module classifies the ultrasound images; the ultrasound-equipment data-acquisition module collects the ultrasound equipment's medical data and medical images.
  • The robot-arm motion-planning acquisition module handles motion planning and the interaction between the robot's actions and the user.
  • An artificial-intelligence robot medical-data acquisition and analysis system for health examination, in which the robot device includes:
  • The robot's main control system 10, which realizes master control and communicates with the camera module and the ultrasound image-acquisition module; the main control system carries the robotic arm, communicates with the ultrasound-equipment data-acquisition module for motion-planning acquisition, and communicates with the voice module for voice interaction between robot and user.
  • The camera 20, voice module 30, and ultrasound image-acquisition module 50 collect medical ultrasound images of the internal organs. Guided by the arm motion-planning acquisition module 103 and the voice module 101, the system directs the user, strengthens robot-user interaction, and achieves intelligent acquisition.
  • Medical-data analysis compares readings against standard values to intelligently flag abnormal data; the image-classification module accurately classifies ultrasound medical images, intelligently locates scan positions, and classifies intra-organ ultrasound images.
  • Through system 10 and the depth-camera simulation unit 20, the robot main control system connects to the robot-arm simulation device 60; the main-control-system simulation device 10 also has communication links to the voice module 30, to the ultrasound image-acquisition module 102 under test, and to the arm-mounted ultrasound-equipment data-acquisition module 50.
  • The robot main control system is connected to a depth camera for face and ultrasound image collection, used for voice interaction and image acquisition.
  • The camera simulation unit 20 collects face images; on instruction from the main-control-system simulation device 10, it publishes image data and communicates with the image-recognition nodes to recognize faces, colour marks, and joints.
  • The main control system returns the colour-mark, joint, and external organ-position information, and the robot arm 60 moves to the external collection position on the body, thereby accurately locating the face, joints, and ultrasound collection area.
  • The robot main system plans the action interaction to realize data collection: robot actions are designed around the camera and other collection positions so that human-robot interaction is friendly and data collection is efficient.
  • The voice module 30 handles voice commands, voice recognition, and voice consultation.
  • The platform's main control system 10 communicates with the voice module 30 so that the main system can be voice-controlled.
  • The main system 10 sends action instructions to the arm action-planning acquisition module 60.
  • The voice module supports speech recognition, speech synthesis, autonomous robot voice consultation, answering of disease-knowledge questions, and remote voice consultations with family doctors and specialists.
  • The ultrasound acquisition module 50 collects medical images of the internal organs. On instruction from the main-control-system simulation device 10, it publishes the medical image data; the main control system 10 uses the TF package to return the body position information, and the arm 60 moves to the internal-organ collection positions, thereby accurately locating the organs and returning each organ's name, image, and data values.
  • The arm action-planning acquisition module 60 moves to collect ultrasound medical images: it computes positions and timing from the action plan and the action instructions of the simulation device 10, communicates with the organ-recognition node via the camera module 20 to identify colour marks and joint marks, determines the position of the organ to be scanned, and moves to the corresponding external location.
  • An arm package under the robot system implements the arm motion planning and data collection.
  • The arm engineering package plans the arm movements; with the camera and other devices mounted on the arm, motion planning and action interaction allow effective collection of ultrasound data for the heart, breast, and abdominal organs, achieving accurate data collection.
  • The methods for patient face recognition, external organ-position recognition, and colour-mark recognition include:
  • Build a mathematical model of the face 100 and a model for individual face-image recognition; extract facial features, colour labels 200, and the corresponding external organ positions 300, including colour, face, and joint 400 features, and extract the positions of the external organs.
  • Extract the feature values (mark colour value; shoulder, waist, and lower-limb joint positions; face) and input the feature values of the detection items.
  • Improve the weight optimizer and obtain the output values through image training; from the output, derive the externally collected position images and the external position information of the organ-collection area.
  • The improved deep neural network then intelligently recognizes the face image 100 and the colour-marked image 200, accurately locates the external collection site 300 and the joint 400, and collects data intelligently.
  • The ultrasound organ-image classification method includes:
  • For the external organ position, an internal ultrasound collection area 500 is established.
  • Build a mathematical model of the internal organs 600: extract intra-organ contour features, including colour, shape, and contour; extract the image feature values (colour, shape, contour); input the item feature values; and compute the output values.
  • The organ images are classified from the output.
  • The accurately classified ultrasound images include the atrium 601, breast 602, liver 603, spleen and stomach 604, kidney 605, uterus, bladder, and ovaries 606, prostate 607, and other images.
  • The disease-identification method for organs 601-607 under the deep-neural-network algorithm includes:
  • Input the ultrasound organ image into the mathematical model of the corresponding organ 601-607; extract the input image's features, including the colour, contour, and texture of the organ image and the image features of diseases common to that organ, converting features such as vessel colour values into input data.
  • After computation by the algorithm's weight accelerator and optimizer, the output values are obtained, the organ's disease type is classified from the output, and the disease is accurately identified.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

The integrated ultrasound image data acquisition, analysis, and recognition robot and platform combine robotics, an image-acquisition device (50), and medical big-data image-recognition technology. The robot arm (60) communicates with the ultrasound equipment to realize remotely controlled arm motion for acquisition, autonomous image-data collection, analysis of the collected data, intelligent disease recognition, and robot voice interaction. By combining the camera (20) mounted on the robot platform with the image-acquisition device (50), intelligent recognition of external organ positions is matched with the internal organs recognized in the medical images, giving dual precise localization of the organs, high-precision image acquisition, image classification, intelligent recognition of image abnormalities, and intelligent diagnosis of common intra-organ diseases. Neural-network and machine-learning methods classify the images and intelligently recognize internal-organ diseases. Voice guidance supports remote autonomous data collection, and remote consultation, acquisition, analysis, and intelligent diagnosis of common diseases in each organ effectively screen for abnormal symptoms.

Description

Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition
Technical field
The invention belongs to the technical field of artificial-intelligence robotic health-examination equipment, and relates to medical data analysis and intelligent medical-image recognition systems.
Background
In current health-examination practice, various human factors make it hard to recognize ultrasound images reliably by eye and to identify the diseases they contain; examination efficiency is low, and data collection is difficult and imprecise.
This costs time and effort. To address low examination efficiency and difficult, imprecise data collection, the camera and ultrasound probe carried by the robot use medical image data to intelligently monitor abnormal indicators in the abdominal cavity, effectively identify ultrasound-visible diseases and organ abnormalities, classify organ images, and support remote control, autonomous acquisition, intelligent recognition, classification of medical data, screening of abnormal data, classified identification, intelligent feedback of abnormalities and disease results, and regular check-ups for health monitoring and physical examination.
Technical solution
The purpose of the present invention is to provide a health-examination system based on an artificial intelligence robot: a physical-examination medical-data acquisition and analysis robot platform built by combining the robot system with various data-acquisition devices and other nodes.
In the artificial-intelligence robot medical-data acquisition and analysis system for health examination, the robot device includes:
A robot main system, whose module realizes the robot's master control, handles communication from the camera and medical-ultrasound acquisition modules to the medical-data analysis module, and manages the interaction among the arm motion-planning control module, the voice module, and the user.
A camera and sensor data-acquisition module, which collects ultrasound medical images and other measured medical data from the camera.
A voice module, which provides interaction and voice guidance between the main control system and the user.
Medical-data analysis, which compares medical data against standard values and flags abnormal readings.
An image-classification module, which classifies ultrasound medical images and intra-organ ultrasound images.
An ultrasound image module, the data-acquisition module for the medical ultrasound equipment, which collects the equipment's medical data and medical images.
A robot-arm motion-planning acquisition module, which handles motion planning and the interaction between arm motion and the user.
In this scheme, the robot's main control system, the camera and sensor data-acquisition module, and the ultrasound module gather medical data from cardiac and other detection equipment together with ultrasound images of the internal organs. Guided by the arm motion-planning acquisition module, the voice module, and remote voice commands, the system strengthens robot-user interaction and achieves intelligent acquisition. Medical-data analysis compares readings against standard values to intelligently flag abnormal data; the image-classification module accurately classifies ultrasound images, intelligently locates scan positions, and classifies intra-organ images. This improves the precision of intelligent acquisition and abnormality detection, and makes remote acquisition, classification, analysis, and recognition of medical images more flexible and feasible.
Further, the robot main system realizes master control, data acquisition, image classification, voice interaction, and action interaction, enabling intelligent acquisition, intelligent analysis of abnormal data, and both local and remote intelligent recognition.
As a further improvement of the invention, a camera recognizes faces, colour marks, and external organ-collection areas, while the medical detection and ultrasound equipment collects medical data and intra-organ ultrasound images.
As a further improvement, the voice module includes remote voice-command acquisition and voice recognition, for interaction and voice guidance between the main control system and the user.
As yet a further improvement, the action module comprises an action-planning module and an action-acquisition module, used for action interaction between the main control system and the user and for capturing images of the arm's movements.
As yet a further improvement, the action module includes the action-planning module, an ultrasound-section acquisition plan, and a cardiac medical-data collection plan.
Further, the method for reaching the arm toward an abdominal target (the head tracks the ultrasound collector):
STEP 1: Set the target
STEP 2: Set the target parameters (target name, left/right arm joints)
STEP 3: Set the communication target
STEP 4: Publish the target and parameters (target pose, pose marker)
STEP 5: Set the pose marker
STEP 6: Set the target's head-frame id, target pose, and orientation value
STEP 7: Set the timestamp
STEP 8: Set the pose marker as the coordinate origin with an orientation value
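The STEP 1-8 procedure above can be sketched as plain Python that builds and "publishes" a target-pose message. The dict layout mimics a ROS PoseStamped, but all field and topic names here are assumptions for illustration, not taken from the patent.

```python
# Hypothetical sketch of STEP 1-8: build a target-pose message for the arm
# with the pose marker at the coordinate origin, then publish it.
import time

def make_target(name, joint_side, frame_id="head"):
    return {
        "target_name": name,            # STEP 2: target name
        "arm": joint_side,              # STEP 2: left/right arm joints
        "header": {
            "frame_id": frame_id,       # STEP 6: target relative to head id
            "stamp": time.time(),       # STEP 7: timestamp
        },
        # STEP 8: pose marker at the coordinate origin, identity orientation
        "pose": {
            "position": {"x": 0.0, "y": 0.0, "z": 0.0},
            "orientation": {"x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0},
        },
    }

published = []
def publish(topic, msg):                # STEP 3/4: communication target, publish
    published.append((topic, msg))

publish("/arm/target_pose", make_target("abdomen", "right"))
print(published[0][1]["header"]["frame_id"])  # head
```

In a ROS system the dict would be a `geometry_msgs/PoseStamped` and `publish` a topic publisher; the sketch only shows the message shape the steps imply.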
Further, the vision camera communicates with the ultrasound collector:
Step 1: Initialize the point-cloud node
Step 2: Set the gripper publisher-node parameters (target name, pose marker)
Step 3: Set the camera subscriber-node parameters (point cloud, recent point-cloud list)
Step 4: Define and obtain the list of nearest point clouds
Step 5: Define the nearest points and convert them into a point array
Step 6: Compute the COG
Step 7: Confirm the parameters and return the point-cloud information
Step 8: Set the pose orientation value as a point object
Step 9: Publish the COG as the target pose
Step 10: Set the target parameters (pose marker, timestamp, head-frame id, COG target pose, orientation value)
Step 11: Publish the gripper's target node
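A minimal sketch of the point-cloud pipeline above: keep the nearest points, compute their centre of gravity (COG), and use it as the target pose. It assumes the cloud is given as 3-D tuples with the sensor at the origin, and has no ROS dependency.

```python
# Hypothetical sketch of Step 4-Step 9: nearest points -> COG -> target pose.
import math

def nearest_points(cloud, k):
    # Step 4/5: sort by distance from the sensor origin, keep the k nearest
    return sorted(cloud, key=lambda p: math.dist(p, (0, 0, 0)))[:k]

def cog(points):
    # Step 6: centre of gravity = per-axis mean of the selected points
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

cloud = [(0.1, 0.0, 0.5), (0.1, 0.2, 0.5), (2.0, 2.0, 2.0)]
target = cog(nearest_points(cloud, 2))   # Step 9: COG becomes the target pose
print(target)  # (0.1, 0.1, 0.5)
```

A real node would subscribe to a `PointCloud2` stream and publish the COG as a pose; the distant point `(2.0, 2.0, 2.0)` shows how filtering to the nearest points rejects background clutter.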
Further, the ultrasound image acquisition method:
Step 1: Set the allowable position and attitude error
Step 2: Allow re-planning when motion planning fails
Step 3: Set the reference coordinate frame of the target position
Step 4: Set the time limit for each motion plan
Step 5: Set the placement positions of the medical bed, arms, and legs, including the bed height and the arm and leg placement areas
Step 6: Build the examination-and-diagnosis DEMO for the bed, arm, and leg positions (bed ID and pose; left-arm ID and pose; right-arm ID and pose; left-leg ID and pose; right-leg ID and pose) and add these parameters to the DEMO
Step 7: Attach colours, AR tags, and other special marks to the bed, arm, and leg positions
Step 8: Set the position target, i.e. the movement position (colour labels among the body-position marks for lying supine, on the left side, and on the right side)
Step 9: Set the scene colours
Step 10: Set the colour labels for the supine, left-side, and right-side postures and other special marks
Step 11: Apply the colours to the DEMO: initialize the planning-scene object, monitor and apply scene differences, set the colours, and publish the colour labels for the supine, left-side, and right-side scenes and other special marks
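The planning parameters and colour-labelled scene of Steps 1-11 can be sketched as configuration data. The keys echo common MoveIt-style settings (goal tolerances, replanning, reference frame, planning time), but every name and value below is an invented example, not the patent's actual configuration.

```python
# Hypothetical sketch of Steps 1-11 as configuration data.
planner = {
    "goal_position_tolerance": 0.01,     # Step 1: allowed position error (m)
    "goal_orientation_tolerance": 0.05,  # Step 1: allowed attitude error (rad)
    "allow_replanning": True,            # Step 2: replan on failure
    "reference_frame": "base_link",      # Step 3: target reference frame
    "planning_time_limit": 5.0,          # Step 4: seconds per planning attempt
}

# Steps 5-7: bed/arm/leg placements with IDs, poses, and colour markers
scene = {
    "bed":       {"id": "bed",       "pose": (0.0, 0.0, 0.6),  "color": "green"},
    "left_arm":  {"id": "left_arm",  "pose": (-0.4, 0.2, 0.6), "color": "blue"},
    "right_arm": {"id": "right_arm", "pose": (0.4, 0.2, 0.6),  "color": "blue"},
}

# Steps 8-10: posture labels (supine / left side / right side) by colour
posture_labels = {"supine": "yellow", "left_side": "red", "right_side": "cyan"}
print(planner["allow_replanning"], posture_labels["supine"])  # True yellow
```

In MoveIt these would map to calls such as setting goal tolerances and planning time on a move group and adding coloured collision objects to the planning scene.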
A method for patient face recognition, recognition of the external positions of human organs, and colour-mark recognition, comprising the following steps:
S1. Build a mathematical model of the face and a model for individual face-image recognition
S2. Extract facial features, colour labels, and the corresponding external organ positions, including colour, face, and joint features
S3. Extract the feature values of the external organ-position images (mark colour value; shoulder, waist, and lower-limb joint positions; face)
S4. Input the feature values of the detection items
S5. Improve the weight optimizer and obtain the output values through image training
S6. From the output, use the externally collected position images and the external position information of the organ-collection area; the improved deep neural network recognizes face, joint, and colour-mark images, accurately locates each organ's external collection site, and collects data intelligently
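A minimal stand-in for S1-S6: mapping a detected colour mark to an external collection site by nearest match in feature space. The reference RGB values and site names are invented; the patent's improved deep neural network would replace this toy matcher, which only illustrates the feature-to-site lookup.

```python
# Hypothetical sketch: locate the external collection site from a colour mark.
import math

# S1-S3: feature "model": reference RGB values for each marked site (invented)
SITE_FEATURES = {
    "heart":   (200, 40, 40),
    "breast":  (40, 200, 40),
    "abdomen": (40, 40, 200),
}

def locate_site(rgb):
    # S4-S6: score each site against the observed colour, return the best match
    return min(SITE_FEATURES, key=lambda s: math.dist(SITE_FEATURES[s], rgb))

print(locate_site((190, 50, 60)))  # heart
```

The same nearest-match idea extends to joint positions and face features; a trained network replaces the hand-written reference table.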
Further, an improved machine-learning classification method for organ images, comprising the following steps:
S1. Build a mathematical model of the internal organs
S2. Extract intra-organ contour features, including colour, shape, and contour
S3. Input the item feature values
S4. Improve the machine-learning algorithm and compute the output values
S5. Classify the organ images from the output; accurately classify organ images including the breast, lungs, liver, gallbladder, spleen, and kidneys; the improved machine-learning algorithm intelligently classifies the organ images and precisely locates each organ
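A sketch of S1-S5 as a linear one-vs-rest scorer over (colour, shape, contour) feature values, the kind of decision function an SVM-style classifier learns. The weights and feature numbers are invented for illustration; a real system would train them on labelled ultrasound images.

```python
# Hypothetical sketch: score each organ class with a linear decision function
# over (colour, shape, contour) features and pick the highest score.
ORGAN_WEIGHTS = {  # (weights over the three features, bias) per organ class
    "breast": ((1.5, -0.5, 0.2), -0.4),
    "liver":  ((-0.3, 1.2, -0.1), -0.2),
    "kidney": ((-0.2, 0.1, 1.4), -0.5),
}

def classify(features):
    def score(organ):
        weights, bias = ORGAN_WEIGHTS[organ]
        return sum(w * f for w, f in zip(weights, features)) + bias
    return max(ORGAN_WEIGHTS, key=score)

print(classify((0.9, 0.2, 0.3)))  # breast
```

With scikit-learn, `SVC(decision_function_shape="ovr")` fitted on labelled feature vectors would learn equivalent per-class decision functions.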
A disease-recognition method under the deep-neural-network organ model, comprising the following steps:
S1. Input the mathematical model of the corresponding organ
S2. Extract disease features, including the colour, contour, and texture of the organ image and the image features of diseases common to each organ (shoulder joints, breast and nipple, belly and navel, genitals, waist joints, and vessel colour), converting features such as vessel colour values into input data
S3. Extract the internal contours of the imaged organs, each organ's feature values, and the external body regions corresponding to their external features; build a mathematical model of the image features and input the detection-item feature values
S4. Input the feature values of the internal-organ images corresponding to each organ's external feature values; improve the deep-neural-network method and the weight optimizer and obtain, through image training, the output values, the internal-organ classification, and the organ-recognition result
S5. Improve the weight optimizer to train on images quickly and obtain the output values
S6. Classify the organ's disease type from the output and accurately identify the disease
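A sketch of S1-S6 as a single-layer forward pass turning organ-image features (colour, contour, texture, vessel colour) into disease-class scores via a softmax. The feature values, class names, and weights are all invented; the patent's improved weight optimizer would learn real weights from training images.

```python
# Hypothetical sketch: features -> linear layer -> softmax -> disease class.
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

FEATURES = [0.6, 0.1, 0.8, 0.3]            # S2/S3: extracted feature values
WEIGHTS = [                                 # one row of weights per class
    ("normal", [0.2, 0.1, -0.5, 0.0], 0.3),
    ("lesion", [0.1, 0.4, 0.9, 0.6], -0.2),
]

scores = softmax([sum(w * f for w, f in zip(ws, FEATURES)) + b
                  for _, ws, b in WEIGHTS])
best = max(zip((name for name, _, _ in WEIGHTS), scores), key=lambda p: p[1])
print(best[0])  # lesion
```

A real classifier would stack convolutional layers before this output layer and train the weights by backpropagation; the sketch shows only the final scoring step that S6 describes.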
In summary, the beneficial effects of the invention are:
Addressing low examination efficiency, wasted time and effort, and poor disease-recognition rates, the invention collects data through the robot-mounted camera and ultrasound probe, solving the prior-art problems of low examination efficiency and difficult, imprecise data collection.
Through the medical indicators in ultrasound images and medical data, it recognizes abnormalities and diseases that are hard to identify by eye, identifies diseases under the organ models within the ultrasound cavity, and manages disease efficiently. The intelligent examination research-and-development platform enables health management: it effectively detects, analyzes, and identifies abnormalities of the heart, breast, and abdominal organs, and supports intelligent and remote identification of problems within the ultrasound cavity and of abnormal organ diseases.
Remote identification of intra-organ abnormalities and diseases improves the accuracy and efficiency of physical examination, with intelligent detection, analysis, and recognition of disease, effectively building an "AI robot + physical examination" medical system.
Beneficial effects
Images are collected effectively, with the ultrasound probe acquiring data such as images of the heart and internal organs; the data are analyzed accurately and each organ's abnormal data classified; common conditions such as internal-organ abnormalities and heart disease are identified accurately.
Description of the drawings
FIG. 1 is a schematic structural diagram of the physical-examination medical-data acquisition and analysis robot in Embodiment 1 of the present application.
FIG. 2 is a schematic diagram of the camera and ultrasound image-acquisition modules in Embodiment 1 of the present application.
FIG. 3 is a positioning diagram of the ultrasound collection positions on the human body.
Labels in FIG. 1: 100-robot main system; 101-voice module; 102-medical-image acquisition module; 103-robot-arm motion-planning module; 104-camera image-acquisition module.
Labels in FIG. 2: 10-robot main-control-system simulation device; 20-camera simulation device; 30-voice module; 40-radar mobile base; 50-image-acquisition device module; 60-robot-arm module; 100-human face; 300-external body position corresponding to an organ (internal-organ collection area).
Labels in FIG. 3: 200-colour mark; 400-shoulder joint; 601-atrium; 602-breast; 603-liver; 604-spleen and stomach; 605-kidney; 606-uterus, bladder, and ovaries; 607-prostate.
Embodiments of the invention
The embodiments of the application provide a physical-examination robot system, an ultrasound device for medical-data acquisition and analysis, and an organ-classification disease-identification method. They solve the prior-art problems of low examination efficiency, difficulty with remote and autonomous data collection, and inaccurate data collection, achieving effective detection, data analysis, identification of bodily abnormalities, intelligent recognition, identification of diseases within the ultrasound cavity, and identification of health problems such as abnormal organ diseases.
The overall approach of the technical solution in the embodiments of the application is as follows:
In the artificial-intelligence robot medical-data acquisition and analysis system for health examination, the robot device includes: the robot main system, whose module realizes master control and the communication from the camera, ultrasound, and equipment data-acquisition modules to the medical-data analysis module, and manages the interaction among the arm motion-planning control module, the voice module, and the user; the camera, whose data-acquisition module collects ultrasound medical images and measured medical data such as cardiac readings; the voice module, for interaction and voice guidance between the main control system and the user; the image-classification module and the ultrasound image module, with the ultrasound-equipment data-acquisition module collecting the equipment's medical data and medical images; and the arm motion-planning acquisition module, for motion planning and the interaction between the robot's actions and the user.
For a better understanding of the above technical solution, the invention is described in further detail below with reference to the embodiments and drawings, but the embodiments of the invention are not limited thereto.
Embodiment 1
As shown in FIG. 1, an artificial-intelligence robot medical-data acquisition and analysis system for health examination, in which the robot device includes:
The robot's main control system 10, which realizes master control and communicates with the camera module and the ultrasound image-acquisition module; the main control system carries the robotic arm, communicates with the ultrasound-equipment data-acquisition module for motion-planning acquisition, and communicates with the voice module for voice interaction between robot and user.
The camera 20, voice module 30, and ultrasound image-acquisition module 50 collect medical ultrasound images of the internal organs. Guided by the arm motion-planning acquisition module 103 and the voice module 101, the system directs the user, strengthens robot-user interaction, and achieves intelligent acquisition. Medical-data analysis compares readings against standard values to intelligently flag abnormal data; the image-classification module accurately classifies ultrasound medical images, intelligently locates scan positions, and classifies intra-organ ultrasound images.
The robot's main control system 10 communicates with each module to realize master control: it communicates with the camera 20, the voice module 30, and the ultrasound image-acquisition module 50; the main control system carries the arm with the ultrasound module 50 for motion-planning acquisition, and communicates with the voice module 30 for voice interaction between robot and user.
In this embodiment, the robot main control system connects through system 10 and the depth-camera simulation unit 20 to the robot-arm simulation device 60; the main-control-system simulation device 10 also has communication links to the voice module 30, to the ultrasound image-acquisition module 102 under test, and to the arm-mounted ultrasound-equipment data-acquisition module 50. The main control system is connected to a depth camera for face and ultrasound image collection, used for voice interaction and image acquisition.
The camera simulation unit 20 collects face images; on instruction from the main-control-system simulation device 10, it publishes image data and communicates with the image-recognition nodes to recognize faces, colour marks, and joints. The main control system returns the colour-mark, joint, and external organ-position information, and the arm 60 moves to the external collection position on the body, thereby accurately locating the face, joints, and ultrasound collection area. The main system plans the action interaction to realize data collection: robot actions are designed around the camera and other collection positions, making human-robot interaction friendly and data collection efficient.
The voice module 30 handles voice commands, voice recognition, and voice consultation. The platform's main control system 10 communicates with the voice module 30 so that the main system can be voice-controlled, and sends action instructions to the arm action-planning acquisition module 60. The voice module supports speech recognition, speech synthesis, autonomous robot voice consultation, answering of disease-knowledge questions, and remote voice consultations with family doctors and specialists.
The ultrasound acquisition module 50 collects medical images of the internal organs; on instruction from the simulation device 10, it publishes the medical image data, the main control system 10 uses the TF package to return the body position information, and the arm 60 moves to the internal-organ collection positions, thereby accurately locating the organs and returning each organ's name, image, and data values.
The arm action-planning acquisition module 60 moves to collect ultrasound medical images: it computes positions and timing from the action plan and the simulation device 10's action instructions, communicates with the organ-recognition node via the camera module 20 to identify colour and joint marks, determines the position of the organ to be scanned, and moves to the corresponding external location. An arm package under the robot system implements the arm motion planning and data collection; the arm engineering package plans the arm movements, and with the camera and other devices mounted on the arm, motion planning and action interaction effectively collect ultrasound data of the heart, breast, and abdominal organs for accurate acquisition.
Embodiment 2
Building on Embodiment 1, several ultrasound positioning and recognition methods are provided, as shown in FIG. 3:
The methods for patient face recognition, external organ-position recognition, and colour-mark recognition include:
Build a mathematical model of the face 100 and a model for individual face-image recognition; extract facial features, colour labels 200, and the corresponding external organ positions 300, including colour, face, and joint 400 features; extract the feature values of the external organ-position images (mark colour value; shoulder, waist, and lower-limb joint positions; face); and input the detection-item feature values. Improve the weight optimizer and obtain the output values through image training. From the output, the externally collected position images and the external position information of the organ-collection area are used by the improved deep neural network to recognize the face image 100 and colour-marked image 200, accurately locate the external collection site 300 and joint 400, and collect data intelligently.
The ultrasound organ-image classification method includes:
For the external organ position 300, establish an internal ultrasound collection area 500. Build a mathematical model of the internal organs 600; extract intra-organ contour features, including colour, shape, and contour; extract the image feature values (colour, shape, contour); input the item feature values; and compute the outputs. Classify the organ images from the output: the accurately classified ultrasound images include the atrium 601, breast 602, liver 603, spleen and stomach 604, kidney 605, uterus, bladder, and ovaries 606, prostate 607, and other images.
The disease-identification method for organs 601-607 under the deep neural network includes:
Input the ultrasound organ image into the mathematical model of the corresponding organ 601-607; extract the input image's features, including the organ image's colour, contour, and texture and the image features of diseases common to that organ, converting features such as vessel colour values into input data; after computation by the algorithm's weight accelerator and optimizer, obtain the output values, classify the organ's disease type from the output, and accurately identify the disease.

Claims (8)

  1. An integrated robot and platform for ultrasound image data acquisition, analysis, and recognition, characterized in that an artificial-intelligence robot main system, with a camera, ultrasound device, and other equipment mounted on the robot arm, intelligently collects medical data and medical images, analyzes the data, and classifies the ultrasound and other images, improving the efficiency of physical examination and data collection; in the artificial-intelligence robot medical-data acquisition and analysis system for health examination, the robot device comprises:
    a robot main system, whose module realizes the robot's master control, handles communication from the camera and medical-ultrasound acquisition modules to the medical-data analysis module, and manages the interaction among the arm motion-planning control module, the voice module, and the user;
    a camera and sensor data-acquisition module, which collects images and measured medical data;
    a voice module, which provides interaction and voice guidance between the main control system and the user;
    medical-data analysis, which compares medical data against standard values and flags abnormal readings;
    an image-classification module, which classifies the medical images;
    a medical-image module and medical-equipment data-acquisition module, which collect medical data and medical images;
    a robot-arm motion-planning acquisition module, which handles motion planning and the interaction between arm motion and the user.
  2. The integrated robot and platform for ultrasound image data acquisition, analysis, and recognition, characterized in that an improved neural-network method classifies and recognizes human ultrasound images and intelligently locates body tissues and organs, thereby precisely identifying internal tissues and organs and collecting at their locations.
  3. The robot device according to claim 1, characterized in that the robot system uses colour marks, joint marks, and other special marks, with a coordinate-transform package returning the colour marks and the body's ultrasound position information; the robot system drives the connected arm to each body-part collection position, thereby precisely locating internal tissues and organs and precisely acquiring their images.
  4. The robot device according to claim 1, characterized in that the robot arm is connected to an ultrasound probe to acquire images, and the camera and sensor data-acquisition module, with the camera mounted on the arm, collects image data of the face, external body parts, and joints; a neural-network algorithm recognizes the face, external body positions, and joint images and computes the return values, greatly improving the efficiency of intelligent disease recognition and of detecting abnormal examination data.
  5. The robot device according to claim 1, characterized in that the robot connects to the camera, sensors, and probe mounted on the arm, with the arm-mounted probe acquiring images; the ultrasound acquisition device collects organ data, and the ultrasound image module and image-classification module, based on an improved machine-learning method, build feature models of the ultrasound-region contour and of the internal tissues and organs; the improved method intelligently classifies the organ regions, thereby indicating the direction and position of arm movement and achieving classification, precise recognition, intelligent localization, and disease recognition in ultrasound images.
  6. The integrated robot and platform for ultrasound image data acquisition, analysis, and recognition, characterized in that improved machine-learning methods based on, but not limited to, SVM build feature models of the ultrasound-region contour and of the internal tissues and organs and improve the intelligent classification of organ positions in ultrasound images, thereby indicating the direction and position of arm movement and efficiently classifying ultrasound organ-image contours and internal tissues and organs.
  7. The integrated robot and platform for ultrasound image data acquisition, analysis, and recognition, characterized in that an improved neural-network method builds a mathematical model for image recognition and for the appearance features of disease, comprising: extracting graphical features of the ultrasound cavity from the image, guiding a deformable-model contour to evolve toward the target features, and identifying organ diseases through organ contours, vessel position and shape, image colour, grey-level contrast, and symptom features; the image feature values (colour, shape, contour) are extracted, the detection-item feature values are input, the improved deep-neural-network method adjusts the weight parameters to obtain the output values, and the range of the output values identifies the corresponding organ's normal signs or disease.
  8. The integrated robot and platform for ultrasound image data acquisition, analysis, and recognition, characterized in that the robot arm and its motion-planning design realize arm movement, grasping, and effective action guidance, thereby achieving remote autonomous data collection.
PCT/CN2021/100562 2020-06-17 2021-06-17 Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition WO2021254427A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180008741.2A 2020-06-17 2021-06-17 Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition
AU2021292112A AU2021292112A1 (en) 2020-06-17 2021-06-17 Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010556720.1A 2020-06-17 2020-06-17 Integrated robot and platform for B-mode ultrasound data acquisition, analysis, and diagnosis
CN202010556720.1 2020-06-17
CN202010780479.0A 2020-08-05 2020-08-05 Medical robot device, system, and method
CN202010780479.0 2020-08-05

Publications (1)

Publication Number Publication Date
WO2021254427A1 true WO2021254427A1 (zh) 2021-12-23

Family

ID=79268472

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/100562 WO2021254427A1 (zh) 2020-06-17 2021-06-17 超声图像数据采集分析识别一体化机器人,平台

Country Status (2)

Country Link
AU (1) AU2021292112A1 (zh)
WO (1) WO2021254427A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114536323A (zh) * 2021-12-31 2022-05-27 中国人民解放军国防科技大学 A classification robot based on image processing

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007037848A2 (en) * 2005-09-28 2007-04-05 Siemens Medical Solutions Usa, Inc. Systems and methods for computer aided diagnosis and decision support in whole-body imaging
US20190262084A1 (en) * 2018-02-27 2019-08-29 NavLab, Inc. Artificial intelligence guidance system for robotic surgery
CN110288574A * 2019-06-13 2019-09-27 南通市传染病防治院(南通市第三人民医院) System and method for ultrasound-assisted diagnosis of liver masses
CN110477956A * 2019-09-27 2019-11-22 哈尔滨工业大学 Intelligent scanning method for a robotic diagnosis system guided by ultrasound images
US20190358822A1 * 2018-05-23 2019-11-28 Aeolus Robotics, Inc. Robotic interactions for observable signs of core health
CN111916195A * 2020-08-05 2020-11-10 谈斯聪 Medical robot device, system, and method
CN111973152A * 2020-06-17 2020-11-24 谈斯聪 Robot and platform for facial-features and surgical medical data acquisition, analysis, and diagnosis
CN111973228A * 2020-06-17 2020-11-24 谈斯聪 Integrated robot and platform for B-mode ultrasound data acquisition, analysis, and diagnosis



Also Published As

Publication number Publication date
AU2021292112A1 (en) 2023-03-02

Similar Documents

Publication Publication Date Title
CN112155729B (zh) Intelligent automated planning method and system for surgical puncture paths, and medical system
CN116507286A (zh) Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition
CN109567942B (zh) Craniomaxillofacial surgical robot assistance system using artificial-intelligence technology
WO2021254444A1 (zh) Robot and platform for facial-features and surgical medical data acquisition, analysis, and diagnosis
Li et al. An overview of systems and techniques for autonomous robotic ultrasound acquisitions
WO2022027921A1 (zh) Medical robot device, system, and method
Li et al. Autonomous multiple instruments tracking for robot-assisted laparoscopic surgery with visual tracking space vector method
JP2016080671A5 (zh)
Suligoj et al. RobUSt–an autonomous robotic ultrasound system for medical imaging
CN112998749B (zh) Automatic ultrasound examination system based on visual servoing
CN112270993B (zh) Online decision-making method and system for an ultrasound robot with diagnostic results as feedback
CN108030496A (zh) Method for measuring the coupling between the rotation centre of the glenohumeral joint and the elevation angle of the upper arm
Peng et al. Autonomous recognition of multiple surgical instruments tips based on arrow OBB-YOLO network
WO2023024396A1 (zh) Fusion recognition of visual and medical images and autonomous positioning and scanning method
CN112132805A (zh) Method and system for normalizing ultrasound-robot state based on human-body features
WO2023024398A1 (zh) Method for intelligently recognizing chest organs and autonomously positioning and scanning them
WO2021254427A1 (zh) Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition
Mathur et al. A semi-autonomous robotic system for remote trauma assessment
WO2021253809A1 (zh) Integrated device, system, and method for blood collection and analysis and intelligent image recognition and diagnosis
CN114310957A (zh) Robot system for medical testing and testing method
CN109993116A (zh) Pedestrian re-identification method based on mutual learning of human skeletons
CN102370478B (zh) ECG electrode placement and positioning device
JP2016035651A (ja) Home rehabilitation system
Vitali et al. A new approach for medical assessment of patient's injured shoulder
WO2023024397A1 (zh) Medical robot device, system, and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21826621

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase

Ref document number: 202180008741.2

Country of ref document: CN

ENP Entry into the national phase

Ref document number: 2022581727

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021826621

Country of ref document: EP

Effective date: 20230117

ENP Entry into the national phase

Ref document number: 2021292112

Country of ref document: AU

Date of ref document: 20210617

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021826621

Country of ref document: EP

Effective date: 20230117

122 Ep: pct application non-entry in european phase

Ref document number: 21826621

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/09/2023)