WO2023024396A1 - Recognition, autonomous positioning and scanning method for visual image and medical image fusion - Google Patents

Recognition, autonomous positioning and scanning method for visual image and medical image fusion

Info

Publication number
WO2023024396A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
medical
scan
scanning
Prior art date
Application number
PCT/CN2022/000117
Other languages
French (fr)
Chinese (zh)
Inventor
谈斯聪
于皓
于梦非
Original Assignee
谈斯聪
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 谈斯聪 filed Critical 谈斯聪
Priority to CN202280056988.6A priority Critical patent/CN118338850A/en
Priority to AU2022335276A priority patent/AU2022335276A1/en
Publication of WO2023024396A1 publication Critical patent/WO2023024396A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B8/0891Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of blood vessels

Definitions

  • The invention belongs to the field of medical artificial intelligence and relates to data analysis, robot motion planning, intelligent image recognition, and techniques and methods for AI-based analysis and recognition of medical data.
  • The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a device for remote, autonomous robotic collection of medical images, together with the invention's intelligent recognition based on the fusion of visual and medical images and autonomous positioning and scanning by a robotic arm.
  • The method addresses errors introduced by manual scanning, inspection, collection, diagnosis, and treatment.
  • Such recognition and positioning constitutes the key technical problem of autonomous positioning and scanning.
  • The present invention provides a method for remote, adaptive scanning of the neck, the left and right thyroid lobes, the isthmus, and the parotid gland, with collection of images and video; a remote, adaptive scanning method for the blood vessels of the lower extremities, collecting medical images and video of organs and tissues; and a remote, adaptive method for scanning bones, joints, muscles, and nerves and collecting the corresponding medical images and video.
  • The method fuses general visual images, depth images, ultrasound images, and endoscopic images.
  • The general image model takes the face, facial features, ears, lips, mandible, neck, navel, nipples, characteristic genitalia, and other characteristic parts and their positions as feature items, and uses a neural network algorithm and its improved variants to intelligently identify human body features and their positions.
  • A method for a robot to autonomously locate the coordinates of an external position area of the human body and to scan and collect medical images, comprising the following steps:
  • According to acquisition tasks and doctor's-order messages issued by the administrator, the doctor communication module, the robot, and the medical device, obtain the organ to be imaged and the coordinates of its external position area, set it as the target, and set the target name, target parameters, position information, and communication target.
  • The robot vision acquisition device and visual recognition module publish the external features corresponding to each organ and the external-scan feature information of neighboring characteristic organs recognized by the general image model.
  • The robotic arm, ultrasound device, ultrasound probe, robot, and medical device subscribe to the position information of the corresponding organ's external scan area.
  • The remote main control system and the ultrasound probe mounted on the autonomous robotic arm move to and scan the acquisition area of the human body according to the subscribed acquisition-area position and the motions planned by the robotic arm's image-acquisition motion planning module.
  • The ultrasound probe and ultrasound device publish the collected image information, blood vessel color information, blood vessel position information, and the contour shape, structural, and color features of organs in the ultrasound image.
  • The robot, the medical device, and the visual recognition module subscribe to the image information.
  • The blood vessel color information, blood vessel position information, and the contour shape, structural, and color features of organs in the ultrasound image are input into the calculation model of claim 8, which intelligently identifies whether the image shows the target organ or tissue and, if so, treats it as the scan target.
  • The robot and the medical device determine, from the returned completion information, that all target-organ collection tasks are finished.
  • In the method of autonomously locating the coordinates of the external position area of the human body and scanning to collect medical images, the robotic arm, ultrasound device, ultrasound probe, robot, and medical device subscribe to the position information of the external scan areas of the neck, the left and right thyroid lobes, the isthmus, and the parotid gland, as well as the scanning method, probe angle, subscription target, parameters, target pose, pose marker, target header id, orientation values, and timestamp.
  • The robot, medical device, and robotic arm remotely and adaptively adjust the acquisition parameters of the medical device: image parameters, gain, color gain, sensitivity time control, time gain control, focus, depth, frame size, blood-flow velocity scale, video parameters, and image acquisition method; the robotic arm and ultrasound probe remotely and adaptively adjust the probe rotation angle, tilt angle, scanning mode, probe angle, and parameters so that the following organs and tissues are acquired effectively and completely.
  • The ultrasound probe scans along the lower boundary of the bone at the base of the ear and collects images and video of the neck, cervical spine, common carotid artery, internal carotid artery, and external carotid artery.
  • The ultrasound probe moves along the bone below the midline of the lips and along the carotid artery from the cranial side to the caudal side to scan the left and right thyroid lobes and the isthmus, collecting medical images and video.
  • The blood vessel color information, blood vessel position information, and organ information under the ultrasound image are used to intelligently identify the abdominal aorta, groin, superficial femoral artery, popliteal artery, anterior tibial artery, and posterior tibial artery.
  • Using the fusion of the ultrasound image with multiple medical images and a dynamic recognition method under ultrasound, the pulsation position is identified as the groin area.
  • The ultrasound probe scans the superficial femoral artery along the groin area, moves to the knee joint, locates the dorsal popliteal fossa, and scans the nearby vessels from the popliteal fossa toward the dorsal side of the knee joint, collecting medical images and video.
  • The ultrasound probe scans downward along the popliteal, anterior tibial, and posterior tibial arteries, compresses with a pressing device, and scans the nearby vessels from the knee joint along the anterior tibial artery, collecting medical images and video.
  • Through the depth vision acquisition device and the fusion of multiple medical images, the shoulders, shoulder joints, bones, elbow joints, and ankle joints and their positions can be intelligently identified.
  • The blood vessel color information, blood vessel position information, and organ information under the ultrasound image are used to intelligently identify the abdominal aorta, groin, superficial femoral artery, popliteal artery, anterior tibial artery, and posterior tibial artery.
  • S6: remotely and adaptively move the robotic arm and ultrasound probe to the ankle joint, the fibula, and the talus.
  • The push-pull flip-angle adjustment device of the robotic arm assists in bending the ankle joint; the probe scans the anterior talofibular ligament, moves to the rounded bone surface of the fibula, scans the lateral edge of the talus, and collects triangular images of the talus and the anterior talofibular ligament, collecting medical images and video.
  • The present invention enables a medical robot device to perform remote, isolated, autonomous collection: autonomously locating and identifying the characteristic positions of human organs, collecting ultrasound images, and independently completing outpatient and ward medical-care tasks, thereby relieving the problems of heavy collection workloads, high work pressure, and frequent night shifts.
  • The present invention provides a remote, adaptive method for collecting images and video of the neck, the left and right thyroid lobes, the isthmus, and the parotid gland; a remote, adaptive scanning method for the blood vessels of the lower extremities, collecting medical images and video of organs and tissues; and a remote, adaptive method for scanning bones, joints, muscles, and nerves and collecting the corresponding medical images and video. These achieve remote, autonomous, effective, and complete collection of medical images and video, sharing of medical pictures and video, remote joint consultation by multiple experts, and real-time access to the data, images, and video collected by the robot, greatly improving work efficiency.
  • Fig. 1 is a flow chart of the intelligent recognition method described in this application for recognizing human facial features, organs, bones, joints, blood vessels, and body features and their positions through the fusion of visual images and medical images;
  • Fig. 2 is a flow chart of the method described in this application by which robots and medical devices autonomously position the external position areas of the human body and scan to collect medical images and video;
  • The purpose of the present invention is to design a remotely controllable robot that replaces human work, realizing remote-controlled acquisition by the robotic arm and effectively solving autonomous image and video acquisition.
  • Artificial-intelligence robot technology, autonomous collection in the field of automation, robotic arm motion planning, and depth cameras are used to capture images of the face, facial features, arms, external body features, bones, and joints.
  • The fusion of visual images and medical images for intelligent recognition of human body features, organs, bones, joints, and blood vessels and their positions, together with the autonomous positioning by robots, robotic arms, and medical devices of the external position areas of the human body and the scanning and collection of medical images, constitutes the autonomous positioning and scanning method.
  • The present invention provides a remote, adaptive method for intelligent recognition and scanning of the neck, the left and right thyroid lobes, the isthmus, and the parotid gland, with collection of images and video; a remote, adaptive method for intelligent recognition, scanning, and collection of the blood vessels of the lower extremities, collecting medical images and video of organs and tissues; and a remote, adaptive method for intelligent recognition and scanning of bones, joints, muscles, and nerves, collecting the corresponding medical images and video.
  • The invention provides an intelligent recognition method based on the fusion of medical images and visual images for recognizing human facial features, organs, bones, joints, blood vessels, and body features and their positions, and for establishing a general visual image model, human organ models, blood vessel models, and feature models.
  • The general image model takes the face, facial features, ears, lips, mandible, neck, navel, nipples, characteristic genitalia, and other characteristic parts and their positions as feature items, and uses neural network algorithms and their improved variants to intelligently identify human body features and their positions.
  • A remote, adaptive method for collecting images and video of the neck, the left and right thyroid lobes, the isthmus, and the parotid gland; a remote, adaptive method for collecting the blood vessels of the lower extremities and medical images and video of organs and tissues; and a remote, adaptive method for scanning bones, joints, muscles, and nerves and collecting the corresponding medical images and video.
  • The inputs include the general visual image and human organs, blood vessels, and features: the face, facial features, ears, lips, mandible, neck, navel, nipples, characteristic genitalia, and other characteristic parts and their positions; and the depth visual image and bone information: the mandible, the bones at the base of the ribs, the xiphoid process, the spine and its position, the lower bone boundary, the shoulder joints, knee joints, feet, bones, foot joints, and waist joints and their positions, and the position of each joint.
  • The blood vessel color information, blood vessel position information, and the contour shape, structural, and color features of organs in the ultrasound image are combined with the organ information and organ features under the ultrasound image into a combined information item that serves as a feature item and input item of the ultrasound image model.
  • The external-scan feature information and external-scan bone information are used as the external scan information; an improved deep neural network and weight optimizer are applied, and through image training the output values and the identification of organs, blood vessels, bones, and the scanned organ and position on the body are obtained, producing the result of autonomously positioning organs, blood vessels, and bones and their positions on the human body.
  • The doctor issues the acquisition task and doctor's-order message; the organ to be imaged and the coordinates of its external position area are obtained and set as the target, and the target name, target parameters, position information, and communication target are set.
  • The robot vision acquisition device and visual recognition module publish the external features corresponding to each organ and the external-scan feature information of neighboring characteristic organs recognized by the general image model.
  • The robotic arm, ultrasound device, ultrasound probe, and robot subscribe to the position information of the corresponding organ's external scan area, the subscription target, parameters, target pose, pose marker, target header id, orientation values, and timestamp; the remote main control system and the ultrasound probe carried by the autonomous robotic arm move to and scan the acquisition area of the human body according to the subscribed acquisition-area position and the motions planned by the robotic arm's image-acquisition motion planning module.
  • The ultrasound probe and ultrasound device publish the collected image information, blood vessel color information, blood vessel position information, and the contour shape, structural, and color features of organs in the ultrasound image.
  • The robot, medical device, and visual recognition module subscribe to the image information, extract the blood vessel color information, blood vessel position information, and the contour shape, structural, and color features of organs in the ultrasound image, feed them into the calculation model, and intelligently identify whether the image shows the target organ or tissue and, if so, treat it as the scan target.
  • Set the acquisition target parameters: pose marker, timestamp, target header id, COG target pose, and orientation value.
  • Set the allowable position and orientation error, and allow re-planning when motion planning fails.
  • Set the reference coordinate frame of the target position and the time limit for each motion-planning attempt.
  • The robot and the medical device determine, from the returned completion information, that all target-organ collection tasks are finished.
  • The human ear and auricle and their positions and the human lips and their position are intelligently identified; through the depth vision acquisition device and the fusion of multiple medical images, bones are identified, including the mandible at the base of the ear, the spine, bone positions, spine positions, and the bone below the midline of the lips.
  • In the method of autonomously locating the coordinates of the external position area of the human body and scanning to collect medical images, the robotic arm, ultrasound device, ultrasound probe, robot, and medical device subscribe to the position information of the external scan areas of the neck, the left and right thyroid lobes, the isthmus, and the parotid gland, as well as the scanning method, probe angle, subscription target, parameters, target pose, pose marker, target header id, orientation values, and timestamp.
  • Remotely and adaptively move the robotic arm; the ultrasound probe scans along the lower boundary of the bone at the base of the ear and collects images and video of the neck, cervical spine, common carotid artery, internal carotid artery, and external carotid artery.
  • The ultrasound probe moves along the bone below the midline of the lips and along the carotid artery from the cranial side to the caudal side to scan the left and right thyroid lobes and the isthmus, collecting medical images and video.
  • Remotely and adaptively move the robotic arm and ultrasound probe along the base of the ear and the mandible to scan the parotid gland, submandibular gland, and sublingual gland down to the bone below the midline of the lips, collecting medical images and video.
  • Through the general vision device and the fusion of multiple medical images, the human ear and auricle and their positions and the navel and its position can be intelligently identified.
  • The blood vessel color information, blood vessel position information, and organ information under the ultrasound image are used to intelligently identify the abdominal aorta, groin, superficial femoral artery, popliteal artery, anterior tibial artery, and posterior tibial artery.
  • The dynamic recognition method under ultrasound is used to identify the pulsation position as the groin area.
  • The ultrasound probe scans downward along the popliteal, anterior tibial, and posterior tibial arteries, compresses with a pressing device, and scans the nearby vessels from the knee joint along the anterior tibial artery, collecting medical images and video.
  • Remotely and adaptively move the robotic arm and ultrasound probe to the foot bones and foot joints; scan the dorsalis pedis artery along the dorsum of the foot and the plantar artery along the sole, collecting medical images and video.
  • Remotely and adaptively move the robotic arm and ultrasound probe to the knee joint, move the probe to the dorsal popliteal fossa and the medial side of the knee midline, and scan downward along the vessels to cover the lower-extremity veins, including the femoral vein, superficial femoral vein, deep femoral vein, external iliac vein, and great saphenous vein, collecting medical images and video.
  • Through the depth vision acquisition device and the fusion of multiple medical images, the shoulders, shoulder joints, bones, elbow joints, and ankle joints and their positions can be intelligently identified.
  • The blood vessel color information, blood vessel position information, and organ information under the ultrasound image are used to intelligently identify the abdominal aorta, groin, superficial femoral artery, popliteal artery, anterior tibial artery, and posterior tibial artery.
  • Remotely and adaptively move the robotic arm and ultrasound probe to the shoulder joint; scan the long head biceps tendon, deltoid, subscapularis tendon, and tendon sheath along the shoulder joint toward the arm, and scan the supraspinatus and infraspinatus tendons along the contour of the central shoulder bone, collecting medical images and video.
  • Remotely and adaptively move the robotic arm and ultrasound probe to the knee joint; scan along the lateral side of the knee joint and the side of the patella, then scan toward the hand to image the capitellum and the anterior radial head, and along the lateral side of the elbow joint and the lateral edge of the olecranon to image the capitellum and the posterior radial head, collecting medical images and video.
  • The push-pull flip-angle adjustment device of the robotic arm assists in bending the ankle joint; the probe scans the anterior talofibular ligament, moves to the rounded bone surface of the fibula, scans the lateral edge of the talus, and collects triangular images of the talus and the anterior talofibular ligament, collecting medical images and video.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Vascular Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

Recognition, autonomous positioning and scanning methods based on the fusion of visual images and medical images, using artificial-intelligence robot technology. Provided are a method for autonomously positioning and recognizing the characteristic positions of human organs to collect ultrasound images, remotely and adaptively collecting images and video of the neck, the left and right thyroid lobes and isthmus, and the parotid gland; a method for remotely and adaptively scanning the blood vessels of the lower limbs and collecting medical images and video of organs and tissues; and a method for remotely and adaptively scanning bones, joints, muscles, and nerves. By autonomously and remotely collecting and sharing medical images, image sharing is implemented, alleviating problems in medical care such as high stress and frequent night shifts, improving the efficiency of remote collection and sharing of medical images, consultations, and ward rounds, and bringing expert opinion to clinical cases. The invention is applicable to outpatient clinics, wards, and overseas medical institutions.

Description

Recognition, autonomous positioning and scanning method for the fusion of visual images and medical images

Technical field

The invention belongs to the field of medical artificial intelligence and relates to data analysis, robot motion planning, intelligent image recognition, and techniques and methods for AI-based analysis and recognition of medical data.
Background art

At present, in the medical field, various human factors during examinations lead to poor quality and low standardization of collected medical images and video, and poor accuracy in identifying the patient's condition. Each specialist is limited to his or her own field and medical specialty. Under remote control by an administrator, the robotic arm, vision device, and depth vision device carried by the robot, together with various neural network methods and their improvements, intelligently recognize faces, human organs, and bones and assist in collecting medical images and video.

During an epidemic, infectivity is high, collection carries a large risk of transmission, efficiency is low, and manual collection is imprecise and can spread disease. Using the robotic arm for remote and autonomous collection of medical images and video enables remote collection, autonomous collection, intelligent recognition and analysis of data, images, and video, and autonomous positioning and scanning, effectively preventing the spread of infectious diseases, epidemics, and other major diseases.

Intelligent recognition based on the fusion of visual and medical images, an autonomous positioning and scanning method, adaptive adjustment of the robotic arm, and intelligent recognition of the human body, organs, and bones allow the robotic arm, scanning probe, and scanning device to be moved remotely and autonomously, to locate the scan site and organ autonomously, and to scan according to the intelligent scanning method, collecting medical images and video and greatly improving the efficiency of intelligent recognition and of collecting medical data, images, and video.
Technical problem

The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a device for remote, autonomous robotic collection of medical images, together with the invention's intelligent recognition based on the fusion of visual and medical images and autonomous positioning and scanning by a robotic arm, solving the problem of errors introduced by manual scanning, inspection, collection, diagnosis, and treatment.

A method for the fusion of multiple medical images to intelligently identify human facial features, organs, bones, joints, blood vessels, and body features and their positions, together with a method by which robots, robotic arms, and medical devices autonomously locate the external position areas of the human body and scan to collect medical images, is the key technical problem of autonomous positioning and scanning.

Effective and complete scanning is another important technical challenge in ultrasound scanning. The present invention provides a method for remote, adaptive scanning of the neck, the left and right thyroid lobes, the isthmus, and the parotid gland, with collection of images and video; a remote, adaptive scanning method for the blood vessels of the lower extremities, collecting medical images and video of organs and tissues; and a remote, adaptive method for scanning bones, joints, muscles, and nerves and collecting the corresponding medical images and video.
Technical solution adopted by the present invention

An intelligent recognition method for human facial features, organs, bones, joints, blood vessels, and body features and their positions based on the fusion of visual images and medical images. The method integrates general visual images, depth images, ultrasound images, and endoscopic images to intelligently identify human organs, bones, joints, blood vessels, and body feature points and their positions, and to autonomously locate them, and comprises the following steps:
S1. Establish a general visual image model, human organ models, blood vessel models, and feature models. The general image model takes the face, facial features, ears, lips, mandible, neck, navel, nipples, characteristic genitalia, and other characteristic parts and their positions as feature items, and uses a neural network algorithm and its improved variants to intelligently identify human body features and their positions.
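As one illustrative reading of S1 (a minimal sketch, not the application's own implementation; the class list, network layout, and hyperparameters below are assumptions), a small convolutional network can map a camera frame to per-feature-item class scores and a coarse normalized position:

```python
import torch
import torch.nn as nn

# Hypothetical feature-item classes drawn from the S1 list (illustrative only).
FEATURE_ITEMS = ["face", "ear", "lips", "mandible", "neck", "navel", "nipple", "other_part"]

class BodyFeatureNet(nn.Module):
    """Minimal CNN sketch: predicts which feature item is visible and a coarse
    normalized (x, y) position, standing in for the 'neural network algorithm
    and its improved variants' mentioned in S1."""
    def __init__(self, num_classes: int = len(FEATURE_ITEMS)):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(32, num_classes)   # which feature item
        self.pos_head = nn.Linear(32, 2)             # normalized (x, y) position

    def forward(self, image: torch.Tensor):
        h = self.backbone(image)
        return self.cls_head(h), torch.sigmoid(self.pos_head(h))

# Usage sketch: one RGB frame from the robot's vision acquisition device.
logits, position = BodyFeatureNet()(torch.rand(1, 3, 224, 224))
```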
S2. Establish the depth vision device, the depth vision image model, and the skeleton model.

S3. Combine the depth information, the position information of the body where each bone lies, and the neighboring characteristic-organ information recognized by the general image model of S1 as the feature items and input items of the intelligent bone recognition model.

S4. Apply a neural network algorithm and its improved variants to intelligently identify each bone and its position: the mandible, the bones at the base of the ribs, the xiphoid process, the spine and its position, the lower bone boundary, the shoulder joints, knee joints, feet, bones, foot joints, and waist joints and their positions, and the position of each joint.

S5. Take the neighboring characteristic-organ information recognized by the general image model of S1 as the external-scan feature information, and the bone information recognized in S4 as the external-scan bone information.

S6. Establish the feature model under the ultrasound image from the blood vessel color information, blood vessel position information, and the contour shape, structural, and color features of organs in the ultrasound image.

S7. Combine the blood vessel information and blood vessel position information with the organ information and organ features under the ultrasound image into a combined information item that serves as a feature item and input item of the ultrasound image model.

S8. Take the external-scan feature information and external-scan bone information as the external scan information; input the neighboring characteristic-organ information recognized by the general image model of S1, the bone information recognized in S4, and the feature items of the ultrasound image model of S7 together with their position areas into the neural network, its improved variants, and the weight optimizer, and obtain the output values through image training.

S9. With the improved deep neural network and weight optimizer, image training yields the output values and the identification results for organs, blood vessels, bones, and the scanned organ and position on the body where they lie.
S10. Output the result as the externally scanned, autonomously positioned organs, blood vessels, and bones and their positions on the human body.
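A minimal sketch of the S6-S9 fusion and training step, assuming the combined information item is a flat numeric vector (concatenated external-scan, bone, and ultrasound vessel/organ features); the feature widths, optimizer choice, and loss terms are assumptions for illustration, not details specified in the application:

```python
import torch
import torch.nn as nn

# Assumed feature widths for the three information sources (illustrative only).
EXT_SCAN_DIM, BONE_DIM, ULTRASOUND_DIM, NUM_ORGANS = 64, 32, 128, 10

class FusionNet(nn.Module):
    """Deep network over the combined information item of S7/S8: concatenated
    external-scan, bone, and ultrasound features -> organ identity + position."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(EXT_SCAN_DIM + BONE_DIM + ULTRASOUND_DIM, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.organ_head = nn.Linear(128, NUM_ORGANS)  # which organ/vessel/bone
        self.pos_head = nn.Linear(128, 3)             # body-frame position estimate

    def forward(self, ext_scan, bone, ultrasound):
        h = self.trunk(torch.cat([ext_scan, bone, ultrasound], dim=-1))
        return self.organ_head(h), self.pos_head(h)

def train_step(model, optimizer, batch):
    """One weight-optimizer update, standing in for the 'image training' of S8/S9."""
    ext_scan, bone, ultrasound, organ_label, position_label = batch
    logits, position = model(ext_scan, bone, ultrasound)
    loss = nn.functional.cross_entropy(logits, organ_label) \
         + nn.functional.mse_loss(position, position_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = FusionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer
```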
A method for a robot to autonomously locate the coordinates of an external position area of the human body and to scan and collect medical images, comprising the following steps:

S1. According to acquisition tasks and doctor's-order messages issued by the administrator, the doctor communication module, the robot, and the medical device, obtain the organ to be imaged and the coordinates of its external position area, set it as the target, and set the target name, target parameters, position information, and communication target.

S2. The robot vision acquisition device and visual recognition module publish the external features corresponding to each organ, the external-scan feature information of neighboring characteristic organs recognized by the general image model, and the coordinates of the external position area of the human body; the depth camera publishes the depth information and the neighboring bone information as the external-scan bone information.

S3. The robotic arm, ultrasound device, ultrasound probe, robot, and medical device subscribe to the position information of the corresponding organ's external scan area, subscribing to the target, parameters, target pose, and pose marker, and setting the target header id, target pose, orientation value, and timestamp.
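The publish/subscribe wording and the header-id/timestamp fields in S2-S3 read like a ROS-style middleware. As a hedged sketch (the topic name and the choice of geometry_msgs/PoseStamped are assumptions, not disclosed in the application), the published scan-area target could be carried like this:

```python
import rospy
from geometry_msgs.msg import PoseStamped

def publish_scan_area_target(publisher, x, y, z, frame="body_surface"):
    """Publish one external-scan-area target: header id (frame), timestamp,
    target pose and orientation value, matching the fields listed in S3."""
    target = PoseStamped()
    target.header.frame_id = frame          # header id / reference frame
    target.header.stamp = rospy.Time.now()  # timestamp
    target.pose.position.x = x
    target.pose.position.y = y
    target.pose.position.z = z
    target.pose.orientation.w = 1.0         # orientation (direction) value
    publisher.publish(target)

rospy.init_node("scan_area_publisher")
pub = rospy.Publisher("/organ_scan_area/target", PoseStamped, queue_size=1)  # assumed topic
```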
S4. The remote main control system and the ultrasound probe mounted on the autonomous robotic arm move to and scan the acquisition area of the human body according to the subscribed acquisition-area position and the motions planned by the robotic arm's image-acquisition motion planning module. The ultrasound probe and ultrasound device publish the collected image information, blood vessel color information, blood vessel position information, and the contour shape, structural, and color features of organs in the ultrasound image.

S5. The robot, medical device, and visual recognition module subscribe to the image information. According to claim 8, the blood vessel color information, blood vessel position information, and the contour shape, structural, and color features of organs in the ultrasound image are extracted and input into the calculation model, which intelligently identifies whether the image shows the target organ or tissue and, if so, treats it as the scan target.

S6. Set the acquisition target parameters (pose marker, timestamp, target header id, COG target pose, orientation value), set the allowable position and orientation error, allow re-planning when motion planning fails, set the reference coordinate frame of the target position, and set the time limit for each motion-planning attempt.
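S6 maps naturally onto a MoveIt-style motion-planning interface. The following sketch is only an assumption about how those settings could be expressed (the planning-group name, tolerance values, and time limit are illustrative), not the application's own implementation:

```python
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize([])
rospy.init_node("probe_motion_planner")
arm = moveit_commander.MoveGroupCommander("ultrasound_arm")  # assumed planning group

# S6: allowable position/orientation error, re-planning, reference frame, time limit.
arm.set_goal_position_tolerance(0.005)        # 5 mm position tolerance (assumed value)
arm.set_goal_orientation_tolerance(0.02)      # orientation tolerance in radians
arm.allow_replanning(True)                    # re-plan when motion planning fails
arm.set_pose_reference_frame("body_surface")  # reference frame of the target position
arm.set_planning_time(5.0)                    # time limit per planning attempt (seconds)

def move_probe_to(target: PoseStamped) -> bool:
    """Plan and execute a move of the probe to the subscribed scan-area target."""
    arm.set_pose_target(target)
    ok = arm.go(wait=True)
    arm.stop()
    arm.clear_pose_targets()
    return ok
```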
S7. The robot, medical device, and robotic arm remotely and adaptively adjust the image parameters, video parameters, and image acquisition method of the acquisition device, and check whether they meet the recognition standards for images and video and whether the acquisition is effective.

S8. The robot, medical device, and robotic arm remotely and adaptively adjust the scanning mode, probe angle, and parameters of the ultrasound probe, check the acquisition position of the target organ, and check whether the collected images and video of the organ or tissue cover every scanning mode and constitute a complete acquisition of the target organ or tissue.

S9. Return the target-organ acquisition-completion information; the robot, medical device, and robotic arm subscribe to the task information and remotely and adaptively move the ultrasound probe to the external scan position area of the next target organ.

S10. The robot and the medical device determine, from the returned target-organ acquisition-completion information, that all target-organ acquisition tasks are finished.
A remote, adaptive method for scanning the neck, the left and right thyroid lobes, the isthmus, and the parotid gland and collecting medical images and video:

S1. According to claim 6, using the general vision device and the fusion of multiple medical images, intelligently identify the human ear and auricle and their positions and the human lips and their position; through the depth vision acquisition device and the fusion of multiple medical images, intelligently identify bones, including the mandible at the base of the ear, the spine, bone positions, spine positions, and the bone below the midline of the lips.

S2. According to claim 7, following the method by which the robot autonomously locates the coordinates of the external position area of the human body and scans to collect medical images, the robotic arm, ultrasound device, ultrasound probe, robot, and medical device subscribe to the position information of the external scan areas of the neck, the left and right thyroid lobes, the isthmus, and the parotid gland, as well as the scanning method, probe angle, subscription target, parameters, target pose, pose marker, target header id, orientation values, and timestamp.

S3. The robot, medical device, and robotic arm remotely and adaptively adjust the acquisition parameters of the medical device: image parameters, gain, color gain, sensitivity time control, time gain control, focus, depth, frame size, blood-flow velocity scale, video parameters, and image acquisition method. The robotic arm and ultrasound probe remotely and adaptively adjust the probe rotation angle, tilt angle, scanning mode, probe angle, and parameters so that the organs and tissues listed below are acquired effectively and completely.
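To make the S3 parameter set concrete, here is a minimal sketch of how the remotely adjustable acquisition settings might be grouped; the field names, units, and default values are illustrative assumptions, not values taken from the application:

```python
from dataclasses import dataclass

@dataclass
class UltrasoundAcquisitionParams:
    """Remotely/adaptively adjustable settings listed in S3 (assumed units)."""
    gain_db: float = 60.0                # overall gain
    color_gain_db: float = 50.0          # color Doppler gain
    stc_curve: tuple = (0.4, 0.5, 0.6)   # sensitivity time control per depth zone
    tgc_curve: tuple = (0.5, 0.6, 0.7)   # time gain control per depth zone
    focus_depth_cm: float = 3.0          # focus position
    imaging_depth_cm: float = 5.0        # imaging depth
    frame_size_px: tuple = (640, 480)    # frame (box) size
    flow_velocity_scale_cm_s: float = 20.0  # blood-flow velocity scale
    probe_rotation_deg: float = 0.0      # probe rotation angle
    probe_tilt_deg: float = 0.0          # probe tilt angle

# Example preset for a superficial target such as the thyroid (illustrative only).
thyroid_preset = UltrasoundAcquisitionParams(focus_depth_cm=2.0, imaging_depth_cm=4.0)
```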
S4. Remotely and adaptively move the robotic arm; the ultrasound probe scans along the lower boundary of the bone at the base of the ear and collects images and video of the neck, cervical spine, common carotid artery, internal carotid artery, and external carotid artery.

S5. Remotely and adaptively move the robotic arm; the ultrasound probe moves along the bone below the midline of the lips and along the carotid artery from the cranial side to the caudal side to scan the left and right thyroid lobes and the isthmus, collecting medical images and video.

S6. Remotely and adaptively move the robotic arm and ultrasound probe along the base of the ear and the mandible to scan the parotid gland, submandibular gland, and sublingual gland down to the bone below the midline of the lips, collecting medical images and video.
A remote, adaptive method for scanning the blood vessels of the lower extremities and collecting medical images and video of the vessels, organs, and tissues:

S1. According to claim 6, using the general vision device and the fusion of multiple medical images, intelligently identify the human ear and auricle and their positions and the navel and its position.

S2. Through the depth vision acquisition device and the fusion of multiple medical images, intelligently identify bones, the bones at the base of the ribs and the position of the xiphoid process, the ischium, the knee joints and their positions, the dorsal popliteal fossa and its position, and the foot bones and foot joints and their positions.

S3. According to claim 6, through the fusion of ultrasound images with multiple medical images, use the blood vessel color information, blood vessel position information, and the organ information under the ultrasound image to intelligently identify the abdominal aorta, groin, superficial femoral artery, popliteal artery, anterior tibial artery, and posterior tibial artery.

S4. According to claim 6, through the fusion of ultrasound images with multiple medical images and the dynamic recognition method under ultrasound, identify the pulsation position as the groin area.

S5. Remotely and adaptively move the robotic arm and ultrasound probe to the groin area; the probe scans from the xiphoid process downward toward the navel along the abdominal aorta to the groin area, collecting medical images and video.

S6. Remotely and adaptively move the robotic arm; the ultrasound probe scans the superficial femoral artery along the groin area, moves to the knee joint, locates the dorsal popliteal fossa, and scans the nearby vessels from the popliteal fossa toward the dorsal side of the knee joint, collecting medical images and video.

S7. Remotely and adaptively move the robotic arm; the ultrasound probe scans downward along the popliteal, anterior tibial, and posterior tibial arteries, compresses with the pressing device, and scans the nearby vessels from the knee joint along the anterior tibial artery, collecting medical images and video.

S8. Remotely and adaptively move the robotic arm and ultrasound probe to the foot bones and foot joints; scan the dorsalis pedis artery along the dorsum of the foot and the plantar artery along the sole, collecting medical images and video.

S9. Remotely and adaptively move the robotic arm and ultrasound probe to the knee joint, move the probe to the dorsal popliteal fossa and the medial side of the knee midline, and scan downward along the vessels to cover the lower-extremity veins, including the femoral vein, superficial femoral vein, deep femoral vein, external iliac vein, and great saphenous vein, collecting medical images and video.
A remote, adaptive method for scanning bones, joints, muscles, and nerves and collecting medical images and video of bones, organs, and tissues:

S1. According to claim 6, through the depth vision acquisition device and the fusion of multiple medical images, intelligently identify the shoulders, shoulder joints, bones, elbow joints, and ankle joints and their positions.

S2. Through the fusion of ultrasound images with multiple medical images, use the blood vessel color information, blood vessel position information, and the organ information under the ultrasound image to intelligently identify the abdominal aorta, groin, superficial femoral artery, popliteal artery, anterior tibial artery, and posterior tibial artery.

S3. Remotely and adaptively move the robotic arm and ultrasound probe to the shoulder joint; scan the long head biceps tendon, deltoid, subscapularis tendon, and tendon sheath along the shoulder joint toward the arm, and scan the supraspinatus and infraspinatus tendons along the contour of the central shoulder bone, collecting medical images and video.

S4. Remotely and adaptively move the robotic arm and ultrasound probe to the elbow joint and olecranon fossa; scan the capitellum and the anterior radial head along the dorsal side of the elbow toward the hand, scan the capitellum, the posterior radial head, muscles, and nerves along the lateral side of the elbow and the lateral edge of the olecranon, use the push-pull flip-angle adjustment device to bend the elbow to 90 degrees, scan the medial epicondyle and the ulna, and collect medical images and video.

S5. Remotely and adaptively move the robotic arm and ultrasound probe to the knee joint; scan along the lateral side of the knee joint and the side of the patella, then scan toward the hand to image the capitellum and the anterior radial head, and along the lateral side of the elbow joint and the lateral edge of the olecranon to image the capitellum and the posterior radial head, collecting medical images and video.

S6. Remotely and adaptively move the robotic arm and ultrasound probe to the ankle joint, the fibula, and the talus. The push-pull flip-angle adjustment device of the robotic arm assists in bending the ankle joint; the probe scans the anterior talofibular ligament, moves to the rounded bone surface of the fibula, scans the lateral edge of the talus, and collects triangular images of the talus and the anterior talofibular ligament, collecting medical images and video.
In summary, the beneficial effects of the present invention are as follows.

The present invention enables a medical robot device to perform remote, isolated, autonomous collection: autonomously locating and identifying the characteristic positions of human organs, collecting ultrasound images, and independently completing outpatient and ward medical-care tasks, thereby relieving the problems of heavy collection workloads, high work pressure, and frequent night shifts.

At the same time, the present invention provides a remote, adaptive method for collecting images and video of the neck, the left and right thyroid lobes, the isthmus, and the parotid gland; a remote, adaptive scanning method for the blood vessels of the lower extremities, collecting medical images and video of organs and tissues; and a remote, adaptive method for scanning bones, joints, muscles, and nerves and collecting the corresponding medical images and video. These achieve remote, autonomous, effective, and complete collection of medical images and video, sharing of medical pictures and video, remote joint consultation by multiple experts, and real-time access to the data, images, and video collected by the robot, greatly improving work efficiency.
Description of drawings

Fig. 1 is a flow chart of the intelligent recognition method described in this application for recognizing human facial features, organs, bones, joints, blood vessels, and body features and their positions through the fusion of visual images and medical images;

Fig. 2 is a flow chart of the method described in this application by which the robot and medical device autonomously position the external position areas of the human body and scan to collect medical images and video;
Detailed description of the embodiments

The purpose of the present invention is to design a remotely controllable robot that replaces human work, realizing remote-controlled acquisition by the robotic arm while effectively solving autonomous image and video acquisition. Artificial-intelligence robot technology, autonomous collection in the field of automation, robotic arm motion planning, and depth cameras are used to capture images of the face, facial features, arms, external body features, bones, and joints.

Intelligent recognition of human facial features, organs, bones, joints, blood vessels, and body features and their positions through the fusion of visual images and medical images, together with the autonomous positioning by robots, robotic arms, and medical devices of the external position areas of the human body and the scanning and collection of medical images, constitutes the autonomous positioning and scanning method.

The present invention provides a remote, adaptive method for intelligent recognition and scanning of the neck, the left and right thyroid lobes, the isthmus, and the parotid gland, with collection of images and video; a remote, adaptive method for intelligent recognition, scanning, and collection of the blood vessels of the lower extremities, collecting medical images and video of organs and tissues; and a remote, adaptive method for intelligent recognition and scanning of bones, joints, muscles, and nerves, collecting the corresponding medical images and video.

Remote control of the robot, autonomous collection of medical images and video, remote control of the medical image and video acquisition device, and sharing of images solve human errors in diagnosis and treatment and improve the precision of intelligent collection and the accuracy of identifying abnormalities in medical data. For a better understanding of the above technical solutions, the present invention is described in further detail below with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.

The overall idea of the technical solution in this application for solving the above technical problems is as follows:

The invention provides an intelligent recognition method based on the fusion of medical images and visual images for recognizing human facial features, organs, bones, joints, blood vessels, and body features and their positions, and for establishing a general visual image model, human organ models, blood vessel models, and feature models. The general image model takes the face, facial features, ears, lips, mandible, neck, navel, nipples, characteristic genitalia, and other characteristic parts and their positions as feature items, and uses neural network algorithms and their improved variants to intelligently identify human body features and their positions. A depth vision device, a depth vision image model, and a skeleton model are established, and the depth information and the position information of the body where each bone lies are used in a method for autonomously locating and identifying the characteristic positions of human organs and collecting ultrasound images;

as well as a remote, adaptive method for collecting images and video of the neck, the left and right thyroid lobes, the isthmus, and the parotid gland; a remote, adaptive scanning method for the blood vessels of the lower extremities, collecting medical images and video of organs and tissues; and a remote, adaptive method for scanning bones, joints, muscles, and nerves and collecting the corresponding medical images and video.
Embodiment 1:

The present invention is described in further detail below with reference to this embodiment and Fig. 1, but the embodiments of the present invention are not limited thereto. The inputs are the general visual image and human organs, blood vessels, and features, including the face, facial features, ears, lips, mandible, neck, navel, nipples, characteristic genitalia, and other characteristic parts and their positions, together with the depth visual image and bone information: the mandible, the bones at the base of the ribs, the xiphoid process, the spine and its position, the lower bone boundary, the shoulder joints, knee joints, feet, bones, foot joints, and waist joints and their positions, and the position of each joint.

Under the ultrasound image, the blood vessel color information, blood vessel position information, and the contour shape, structural, and color features of organs are combined: the blood vessel information and blood vessel position information are combined with the organ information and organ features under the ultrasound image into a combined information item that serves as a feature item and input item of the ultrasound image model. The external-scan feature information and external-scan bone information are used as the external scan information; the improved deep neural network and weight optimizer are applied, and through image training the output values and the identification of organs, blood vessels, bones, and the scanned organ and position on the body are obtained, yielding the result of autonomously positioning organs, blood vessels, and bones and their positions on the human body.
In accordance with the acquisition task and physician-order messages published by the administrator or physician, the organ to be imaged for the acquisition task and the coordinates of its external position region are obtained and set as the target; the target name, target parameters and position information are set, and the communication target is set.
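The structure of the task message itself is not disclosed. As an assumption purely for illustration, the acquisition-task record implied by this step might be grouped like this (every field name and value is hypothetical):

```python
# Hypothetical structure for an acquisition task as described above; every field
# name and value here is an assumption for illustration only.
from dataclasses import dataclass

@dataclass
class AcquisitionTask:
    target_name: str            # organ to be imaged, e.g. "thyroid_left_lobe"
    target_params: dict         # probe / imaging parameters for this target
    external_region_xyz: tuple  # coordinates of the external body region
    communication_target: str   # which node/device should receive the task

task = AcquisitionTask(
    target_name="thyroid_left_lobe",
    target_params={"probe_angle_deg": 30, "scan_mode": "transverse"},
    external_region_xyz=(0.12, -0.04, 0.95),
    communication_target="ultrasound_arm_controller",
)
```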
The robot vision acquisition device and the visual recognition module publish the external features corresponding to each organ, namely the external scan feature information of the neighboring characteristic organs identified by the general image model and the coordinates of the external position regions on the human body; the depth camera publishes the depth information and the neighboring skeleton information as the external scan skeleton information.
The robotic arm, ultrasound device, ultrasound probe and robot subscribe to the position information of the corresponding external organ scan regions, subscribe to the target, parameters, target pose and pose marker, set the target header id, target pose and orientation value, and set the timestamp. The remote main control system and the ultrasound probe mounted on the autonomous robotic arm move to and scan the acquisition region of the human body according to the subscribed acquisition-region position and the motions of the robotic-arm image-acquisition motion planning module. The ultrasound probe and the ultrasound device publish the acquired image information, the blood vessel color information, the blood vessel position information, and the contour shape, structural and color features of the organs in the ultrasound image. The robot, the medical devices and the visual recognition module subscribe to the image information. The blood vessel color information, blood vessel position information and the contour shape, structural and color features of the organs under the ultrasound image are extracted and fed into the computation model, which intelligently identifies whether the image shows the target organ or tissue; if so, the target organ is scanned.
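The publish/subscribe wording (targets, pose markers, header id, timestamps) suggests a ROS-style middleware, although the patent does not name one. A minimal rospy sketch under that assumption, with hypothetical topic names, and reading the "header id" and "timestamp" as the standard message header frame_id and stamp, is:

```python
# Sketch only: publishing an external scan-region pose and subscribing to it from
# the arm/ultrasound side, assuming ROS 1 (rospy). Topic names are hypothetical.
import rospy
from geometry_msgs.msg import PoseStamped

def on_scan_region(msg):
    # The arm controller would pass this pose to its motion-planning module.
    rospy.loginfo("scan region at x=%.3f y=%.3f z=%.3f",
                  msg.pose.position.x, msg.pose.position.y, msg.pose.position.z)

rospy.init_node("scan_region_demo")
pub = rospy.Publisher("/scan_region/target_pose", PoseStamped, queue_size=10)
sub = rospy.Subscriber("/scan_region/target_pose", PoseStamped, on_scan_region)

target = PoseStamped()
target.header.stamp = rospy.Time.now()        # "set the timestamp"
target.header.frame_id = "base_link"          # read here as the "target header id"
target.pose.position.x, target.pose.position.y, target.pose.position.z = 0.12, -0.04, 0.95
target.pose.orientation.w = 1.0               # "orientation value"

rospy.sleep(0.5)   # give the connection time to establish
pub.publish(target)
rospy.spin()
```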
The acquisition target parameters are set (pose marker, timestamp, target header id, COG target pose, orientation value), the allowable position and orientation errors are set, re-planning is allowed when motion planning fails, the reference coordinate frame of the target position is set, and a time limit is set for each motion plan. The robot, the medical device and the distal end of the robotic arm remotely and adaptively adjust the image parameters, video parameters and image acquisition method of the acquiring medical device and check whether they meet the image and video recognition standards and whether the acquisition is effective. The robot, the medical device and the distal end of the robotic arm remotely and adaptively adjust the scanning mode, probe angle and parameters of the ultrasound probe, check the acquisition position of the target organ, and check whether all images and videos of the acquired organs and tissues cover all scanning modes and constitute a complete acquisition of the target organ and tissue; the target-organ acquisition completion information is returned, the robot and the robotic arm subscribe to the task information, and the robot, the medical device and the distal end of the robotic arm remotely and adaptively move the ultrasound probe to the external scan position region of the next target organ. The robot and the medical device determine, according to the returned target-organ acquisition completion information, that all target-organ acquisition tasks are completed.
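The planning settings listed above (pose tolerances, re-planning on failure, reference frame, per-plan time limit) map naturally onto a MoveIt-style interface; the patent does not name MoveIt, so the following is only an assumed illustration using the moveit_commander Python API with a hypothetical planning-group name:

```python
# Illustrative only: configuring pose tolerances, re-planning, reference frame
# and planning-time limit for the probe-carrying arm, assuming ROS + MoveIt.
# The group name "ultrasound_arm" and all numeric values are assumptions.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("probe_motion_demo")
arm = moveit_commander.MoveGroupCommander("ultrasound_arm")

arm.set_goal_position_tolerance(0.005)      # allowable position error (m)
arm.set_goal_orientation_tolerance(0.05)    # allowable orientation error (rad)
arm.allow_replanning(True)                  # re-plan when motion planning fails
arm.set_pose_reference_frame("base_link")   # reference frame of the target position
arm.set_planning_time(5.0)                  # time limit per motion plan (s)

goal = PoseStamped()
goal.header.frame_id = "base_link"
goal.header.stamp = rospy.Time.now()
goal.pose.position.x, goal.pose.position.y, goal.pose.position.z = 0.12, -0.04, 0.95
goal.pose.orientation.w = 1.0

arm.set_pose_target(goal)
success = arm.go(wait=True)     # plan and move to the external scan region
arm.stop(); arm.clear_pose_targets()
```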
Example 2:
The present invention is described in further detail below with reference to the embodiments and accompanying Figures 1 and 2, but the embodiments of the present invention are not limited thereto. A method for remote and adaptive scanning of the neck, the left and right thyroid lobes, the isthmus and the parotid gland and for acquiring images and video:
Using the general vision device and the multi-modal medical image fusion recognition method, the ear and auricle and their positions and the lips and their position are intelligently identified. Using the depth vision acquisition device and the multi-modal medical image fusion recognition method, the bones are intelligently identified, including the mandible at the base of the ear, the spine and the positions of these bones, as well as the bone position below the midline of the lips.
In accordance with the robot's autonomous positioning of external body regions and the method of scanning and acquiring medical images, the robotic arm, ultrasound device, ultrasound probe, robot and medical device subscribe to the position information of the external scan regions of the neck, the left and right thyroid lobes, the isthmus and the parotid gland, the scanning method and probe angle, and subscribe to the target, parameters, target pose and pose marker; the target header id, target pose and orientation value are set, and the timestamp is set.
The robot, the medical device and the distal end of the robotic arm remotely and adaptively adjust the acquisition parameters of the medical device: image parameters, gain, color gain, sensitivity time control (STC), time gain compensation (TGC), focus, depth, sampling-box size, blood-flow velocity scale and video parameters, as well as the image acquisition method. The robotic arm and ultrasound probe are remotely controlled and adaptively adjust the probe rotation angle, tilt angle, scanning mode, probe angle and parameters so that the following organs and tissues are acquired effectively and completely.
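The adjustable acquisition parameters named above could, purely for illustration, be grouped into a single settings record; none of the following field names or values come from the patent, and the validity check is only a toy stand-in for "effective acquisition":

```python
# Hypothetical grouping of the adaptive ultrasound acquisition settings named in
# the text (gain, colour gain, STC/TGC, focus, depth, box size, velocity scale,
# probe rotation/tilt). Values are placeholders, not recommended settings.
ultrasound_settings = {
    "gain_db": 45,
    "color_gain_db": 30,
    "stc": [0.8, 0.9, 1.0, 1.0, 1.1, 1.2],   # sensitivity time control curve
    "tgc": [0.7, 0.8, 1.0, 1.1, 1.2, 1.3],   # time gain compensation curve
    "focus_depth_cm": 3.0,
    "imaging_depth_cm": 4.5,
    "sample_box": {"x": 64, "y": 48, "width": 128, "height": 96},
    "velocity_scale_cm_s": 25,
    "probe_rotation_deg": 30,
    "probe_tilt_deg": 10,
    "video": {"fps": 30, "duration_s": 6},
}

def within_limits(settings, max_depth_cm=20.0):
    """Toy validity check standing in for 'is the acquisition effective'."""
    return 0 < settings["imaging_depth_cm"] <= max_depth_cm

assert within_limits(ultrasound_settings)
```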
The robotic arm is remotely controlled and adaptively moved so that the ultrasound probe scans along the lower boundary of the bone at the base of the ear and acquires images and video of the neck, spine, common carotid artery, internal carotid artery and external carotid artery.
The robotic arm is remotely controlled and adaptively moved so that the ultrasound probe moves along the bone position below the midline of the lips and along the carotid artery from cranial to caudal, scanning the left and right thyroid lobes and the isthmus and acquiring medical images and video.
The robotic arm and ultrasound probe are remotely controlled and adaptively moved along the base of the ear and the corresponding bone position, then along the mandible, scanning the parotid gland, submandibular gland and sublingual gland down to the bone position below the midline of the lips, acquiring medical images and video.
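The three scan passes above all move the probe between externally identified landmarks. The trajectory generation itself is not disclosed; purely as a sketch of that idea, a landmark-to-landmark path could be interpolated as below, with every coordinate a stand-in value:

```python
# Sketch: build a straight-line sequence of probe waypoints between two
# landmarks located by the vision stage (e.g. below-lip bone -> caudal end of
# the thyroid along the carotid). Coordinates and step count are illustrative.
import numpy as np

def scan_waypoints(start_xyz, end_xyz, steps=20):
    """Evenly spaced probe positions from one landmark to the next."""
    start, end = np.asarray(start_xyz, float), np.asarray(end_xyz, float)
    return [tuple(start + (end - start) * t) for t in np.linspace(0.0, 1.0, steps)]

below_lip_bone = (0.02, 0.00, 1.45)   # assumed landmark from the skeleton model
thyroid_region = (0.02, 0.00, 1.32)   # assumed caudal end of the thyroid scan

for wp in scan_waypoints(below_lip_bone, thyroid_region, steps=5):
    print("move probe to", wp)        # each waypoint would be sent to the arm
```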
Example 3:
The present invention is described in further detail below with reference to the embodiments and accompanying Figures 1 and 2. A method for remote and adaptive scanning of the blood vessels of the lower extremities and acquisition of medical images and video of organs and tissues:
Using the general vision device and the multi-modal medical image fusion recognition method, the ear and auricle and their positions and the navel and its position are intelligently identified.
Using the depth vision acquisition device and the multi-modal medical image fusion recognition method, the bones are intelligently identified: the bones at the base of the ribs and the xiphoid process, the ischium, the knee joints, the dorsal popliteal fossa, the foot bones and the foot joints, together with their positions.
Using the ultrasound image and multi-modal medical image fusion recognition method, the blood vessel color information and blood vessel position information are combined with the organ information under the ultrasound image to intelligently identify the abdominal aorta, the groin, the superficial femoral artery, the popliteal artery, the anterior tibial artery and the posterior tibial artery.
Using the ultrasound image and multi-modal medical image fusion recognition method, a dynamic recognition method applied to the ultrasound images identifies the pulsating position as the groin region.
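The "dynamic recognition" of the pulsating region is not detailed in the patent. One simple stand-in, shown purely as an assumption, is to take the region of maximum frame-to-frame intensity variation in a short ultrasound clip as the pulsating (arterial) area:

```python
# Illustrative stand-in for dynamic pulsation detection: the pixel region whose
# intensity varies most over a short clip is treated as the pulsating (arterial)
# area. This is an assumed heuristic, not the patent's disclosed method.
import numpy as np

def pulsation_center(frames):
    """frames: (T, H, W) grayscale ultrasound clip; returns (row, col) of peak variation."""
    clip = np.asarray(frames, dtype=np.float32)
    variation = clip.std(axis=0)                      # per-pixel temporal std-dev
    return np.unravel_index(np.argmax(variation), variation.shape)

# Toy clip: static background with a small "pulsating" patch.
t = np.arange(30, dtype=np.float32)                   # 30 frames
clip = np.zeros((30, 64, 64), dtype=np.float32)
clip[:, 40:44, 20:24] = 50 + 20 * np.sin(2 * np.pi * t / 30)[:, None, None]
print(pulsation_center(clip))                         # -> a pixel inside the patch
```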
The robotic arm is remotely controlled and adaptively moved to bring the ultrasound probe to the groin region; the probe scans from the xiphoid process downward toward the navel along the abdominal aorta, acquiring medical images and video.
The robotic arm is remotely controlled and adaptively moved so that the probe scans the superficial femoral artery along the groin region, moves to the knee joint and then to the dorsal popliteal fossa, and scans from the dorsal popliteal fossa toward the blood vessels near the dorsal side of the knee joint, acquiring medical images and video.
The robotic arm is remotely controlled and adaptively moved so that the probe scans downward along the popliteal artery, the anterior tibial artery and the posterior tibial artery; the pressing device applies compression, and the nearby vessels are scanned from the knee joint along the anterior tibial artery, acquiring medical images and video.
The robotic arm is remotely controlled and adaptively moved to bring the probe to the foot bones and foot joints; the dorsalis pedis artery is scanned along the dorsum of the foot and the plantar artery along the sole, acquiring medical images and video.
The robotic arm is remotely controlled and adaptively moved to bring the probe to the knee joint; the probe is moved to the dorsal popliteal fossa and to the medial side of the knee-joint midline and scans downward along the vessels, scanning the veins of the lower extremities, including the femoral vein, superficial femoral vein, deep femoral vein, external iliac vein and great saphenous vein, and acquiring medical images and video.
Example 4:
The present invention is described in further detail below with reference to the embodiments and accompanying Figures 1 and 2. A method for remote and adaptive scanning of bones, joints, muscles and nerves and acquisition of medical images and video of organs and tissues:
Using the depth vision acquisition device and the multi-modal medical image fusion recognition method, the shoulder and shoulder joint, the bones, the elbow joint and the ankle joint, together with their positions, are intelligently identified.
Using the ultrasound image and multi-modal medical image fusion recognition method, the blood vessel color information and blood vessel position information are combined with the organ information under the ultrasound image to intelligently identify the abdominal aorta, the groin, the superficial femoral artery, the popliteal artery, the anterior tibial artery and the posterior tibial artery.
The robotic arm is remotely controlled and adaptively moved to bring the ultrasound probe to the shoulder joint; the long head of the biceps tendon, the deltoid muscle and the subscapularis tendon and sheath are scanned along the shoulder joint toward the arm, and the supraspinatus and infraspinatus tendons are scanned along the shape of the central bone of the shoulder, acquiring medical images and video.
The robotic arm is remotely controlled and adaptively moved to bring the probe to the elbow joint and the olecranon fossa; the capitellum of the humerus and the front of the radial head are scanned along the dorsal side of the elbow toward the hand, and the capitellum, the back of the radial head and the surrounding muscles and nerves are scanned along the lateral side of the elbow at the lateral edge of the olecranon. The push-pull and flip angle adjustment device adjusts the elbow to a 90-degree flexed position, and the medial epicondyle and the ulna are scanned, acquiring medical images and video.
The robotic arm is remotely controlled and adaptively moved to bring the probe to the knee joint; scanning proceeds along the lateral side of the knee joint and the patellar side, then toward the hand over the capitellum of the humerus and the front of the radial head, and along the lateral side of the elbow at the lateral edge of the olecranon over the capitellum and the back of the radial head, acquiring medical images and video.
The robotic arm is remotely controlled and adaptively moved to bring the probe to the ankle joint, the fibula and the talus. The push-pull and flip angle adjustment device of the robotic arm assists in bending the ankle joint; the anterior talofibular ligament is scanned, the probe is moved to the rounded bone surface of the fibula, the lateral edge of the talus is scanned, and images of the talar triangle, the talus and the anterior (talo)fibular ligament are acquired, together with medical images and video.

Claims (6)

  1. A method for fused recognition of visual images and medical images and for autonomous positioning and scanning, characterized by an intelligent recognition method that fuses the facial features, organs, bones, joints, blood vessels and characteristic parts of the human body and their positions; the method integrates general visual images, depth images, ultrasound images and endoscopic images to intelligently identify human organs, bones, joints, blood vessels and human body feature points and their positions, and to autonomously locate the positions of human organs, bones, joints, blood vessels and human body feature points, and comprises the following steps:
    S1. Establish a general visual image model, a human organ model, a blood vessel model and a feature model. The general image model takes the face, facial features, ears, lips, mandible, neck, navel, nipples, characteristic genitals and other characteristic parts, together with their positions, as feature items, and uses a neural network algorithm and its improved variants to intelligently identify human body features and their positions.
    S2. Establish a depth vision device, a depth vision image model and a skeleton model.
    S3. Combine the depth information, the position information of the skeleton within the human body, and the neighboring characteristic organ information identified by the general image model of S1 as the feature items of the intelligent skeleton recognition model, and use them as input items.
    S4. Apply a neural network algorithm and its improved variants to intelligently identify each bone and its position: the mandible, the bones at the base of the ribs, the xiphoid process, the spine and its position, the lower boundary of the skeleton, the shoulder joints, knee joints, feet, foot bones, foot joints and lumbar joints, and the position of each joint.
    S5. Designate the neighboring characteristic organ information identified by the general image model of S1 as the external scan feature information, and the skeleton information identified in S4 as the external scan skeleton information.
    S6. Establish a feature model under the ultrasound image from the blood vessel color information, the blood vessel position information, and the contour shape, structural and color features of the organs in the ultrasound image.
    S7. Combine the blood vessel information and blood vessel position information with the organ information and organ features under the ultrasound image into a combined information item that serves as a feature item and input item of the ultrasound image model.
    S8. Take the external scan feature information and the external scan skeleton information as the external scan information; input the neighboring characteristic organ information identified by the general image model of S1, the skeleton information identified in S4, and the feature items of the ultrasound image model of S7 together with their position regions into the neural network, its improved variants and the weight optimizer, and obtain the output values through image training.
    S9. With the improved deep neural network method and the weight optimizer, obtain through image training the output values and the recognition results for the organs, blood vessels and bones and their position information on the human body.
    S10. Output the result as the external-scan result of autonomously locating the organs, blood vessels and bones and their positions on the human body.
  2. The method for fused recognition of visual images and medical images and for autonomous positioning and scanning, characterized by a method in which a robot autonomously locates external body regions and scans and acquires medical images, comprising the following steps:
    S1. According to the acquisition task and physician-order messages published by the administrator, the physician communication module and the robot main system, obtain the organ to be imaged for the acquisition task and the coordinates of its external position region, set it as the target, set the target name, target parameters and position information, and set the communication target.
    S2. The robot vision acquisition device and the visual recognition module publish the external features corresponding to each organ, namely the external scan feature information of the neighboring characteristic organs identified by the general image model and the coordinates of the external position regions on the human body; the depth camera publishes the depth information and the neighboring skeleton information as the external scan skeleton information.
    S3. The robotic arm, ultrasound device, ultrasound probe and robot subscribe to the position information of the corresponding external organ scan region, subscribe to the target, parameters, target pose and pose marker, set the target header id, target pose and orientation value, and set the timestamp.
    S4. The remote main control system and the ultrasound probe mounted on the autonomous robotic arm move to and scan the acquisition region of the human body according to the subscribed acquisition-region position and the motions of the robotic-arm image-acquisition motion planning module. The ultrasound probe and the ultrasound device publish the acquired image information, the blood vessel color information, the blood vessel position information, and the contour shape, structural and color features of the organs in the ultrasound image.
    S5. The robot main system and the visual recognition module subscribe to the image information. According to claim 1, the blood vessel color information, the blood vessel position information and the contour shape, structural and color features of the organs under the ultrasound image are extracted and input into the computation model, and according to claim 1 it is intelligently identified whether the image shows the target organ or tissue; if so, the target organ is scanned.
    S6. Set the acquisition target parameters (pose marker, timestamp, target header id, COG target pose, orientation value), set the allowable position and orientation errors, allow re-planning when motion planning fails, set the reference coordinate frame of the target position, and set the time limit for each motion plan.
    S7. The robot main system and the distal end of the robotic arm remotely and adaptively adjust the image parameters, video parameters and image acquisition method of the acquiring medical device, and check whether they meet the image and video recognition standards and whether the acquisition is effective.
    S8. The robot main system and the distal end of the robotic arm remotely and adaptively adjust the scanning mode, probe angle and parameters of the ultrasound probe, check the acquisition position of the target organ, and check whether all images and videos of the acquired organs and tissues cover all scanning modes and constitute a complete acquisition of the target organ and tissue.
    S9. Return the target-organ acquisition completion information; the robot main system and the robotic arm subscribe to the task information, and the robot main system and the distal end of the robotic arm remotely and adaptively move the ultrasound probe to the external scan position region of the next target organ.
    S10. The robot main system determines, according to the returned target-organ acquisition completion information, that all target-organ acquisition tasks are completed.
  3. The method for fused recognition of visual images and medical images and for autonomous positioning and scanning, characterized by a method of remote and adaptive scanning of the neck, the left and right thyroid lobes, the isthmus and the parotid gland and acquisition of medical images and video, with the following steps:
    S1. According to claim 1, using the general vision device and the multi-modal medical image fusion recognition method, intelligently identify the ear and auricle and their positions and the lips and their position; using the depth vision acquisition device and the multi-modal medical image fusion recognition method, intelligently identify the bones, including the mandible at the base of the ear, the spine and the positions of these bones, as well as the bone position below the midline of the lips.
    S2. According to claim 2, in accordance with the robot's autonomous positioning of external body regions and the method of scanning and acquiring medical images, the robotic arm, ultrasound device, ultrasound probe and robot main system subscribe to the position information of the external scan regions of the neck, the left and right thyroid lobes, the isthmus and the parotid gland, the scanning method and probe angle, and subscribe to the target, parameters, target pose and pose marker; the target header id, target pose and orientation value are set, and the timestamp is set.
    S3. The robot main system and the distal end of the robotic arm remotely and adaptively adjust the acquisition parameters of the medical device: image parameters, gain, color gain, sensitivity time control (STC), time gain compensation (TGC), focus, depth, sampling-box size, blood-flow velocity scale and video parameters, as well as the image acquisition method; the robotic arm and ultrasound probe are remotely controlled and adaptively adjust the probe rotation angle, tilt angle, scanning mode, probe angle and parameters so that the following organs and tissues are acquired effectively and completely.
    S4. The robotic arm is remotely controlled and adaptively moved so that the ultrasound probe scans along the lower boundary of the bone at the base of the ear and acquires images and video of the neck, spine, common carotid artery, internal carotid artery and external carotid artery.
    S5. The robotic arm is remotely controlled and adaptively moved so that the ultrasound probe moves along the bone position below the midline of the lips and along the carotid artery from cranial to caudal, scanning the left and right thyroid lobes and the isthmus and acquiring medical images and video.
    S6. The robotic arm and ultrasound probe are remotely controlled and adaptively moved along the base of the ear and the corresponding bone position, then along the mandible, scanning the parotid gland, submandibular gland and sublingual gland down to the bone position below the midline of the lips, acquiring medical images and video.
  4. The method for fused recognition of visual images and medical images and for autonomous positioning and scanning, characterized by a method of remote and adaptive scanning of the blood vessels of the lower extremities and acquisition of medical images and video of the lower-extremity vessels, organs and tissues, with the following steps:
    S1. According to claim 1, using the general vision device and the multi-modal medical image fusion recognition method, intelligently identify the ear and auricle and their positions and the navel and its position.
    S2. Using the depth vision acquisition device and the multi-modal medical image fusion recognition method, intelligently identify the bones: the bones at the base of the ribs and the xiphoid process, the ischium, the knee joints, the dorsal popliteal fossa, the foot bones and the foot joints, together with their positions.
    S3. According to claim 1, using the ultrasound image and multi-modal medical image fusion recognition method, combine the blood vessel color information and blood vessel position information with the organ information under the ultrasound image to intelligently identify the abdominal aorta, the groin, the superficial femoral artery, the popliteal artery, the anterior tibial artery and the posterior tibial artery.
    S4. According to claim 1, using the ultrasound image and multi-modal medical image fusion recognition method, apply a dynamic recognition method to the ultrasound images to identify the pulsating position as the groin region.
    S5. The robotic arm is remotely controlled and adaptively moved to bring the ultrasound probe to the groin region; the probe scans from the xiphoid process downward toward the navel along the abdominal aorta, acquiring medical images and video.
    S6. The robotic arm is remotely controlled and adaptively moved so that the probe scans the superficial femoral artery along the groin region, moves to the knee joint and then to the dorsal popliteal fossa, and scans from the dorsal popliteal fossa toward the blood vessels near the dorsal side of the knee joint, acquiring medical images and video.
    S7. The robotic arm is remotely controlled and adaptively moved so that the probe scans downward along the popliteal artery, the anterior tibial artery and the posterior tibial artery; the pressing device applies compression, and the nearby vessels are scanned from the knee joint along the anterior tibial artery, acquiring medical images and video.
    S8. The robotic arm is remotely controlled and adaptively moved to bring the probe to the foot bones and foot joints; the dorsalis pedis artery is scanned along the dorsum of the foot and the plantar artery along the sole, acquiring medical images and video.
    S9. The robotic arm is remotely controlled and adaptively moved to bring the probe to the knee joint; the probe is moved to the dorsal popliteal fossa and to the medial side of the knee-joint midline and scans downward along the vessels, scanning the veins of the lower extremities, including the femoral vein, superficial femoral vein, deep femoral vein, external iliac vein and great saphenous vein, and acquiring medical images and video.
  5. The method for fused recognition of visual images and medical images and for autonomous positioning and scanning, characterized by a method of remote and adaptive scanning of bones, joints, muscles and nerves and acquisition of medical images and video of bones, organs and tissues, with the following steps:
    S1. According to claim 1, using the depth vision acquisition device and the multi-modal medical image fusion recognition method, intelligently identify the shoulder and shoulder joint, the bones, the elbow joint and the ankle joint, together with their positions.
    S2. Using the ultrasound image and multi-modal medical image fusion recognition method, combine the blood vessel color information and blood vessel position information with the organ information under the ultrasound image to intelligently identify the abdominal aorta, the groin, the superficial femoral artery, the popliteal artery, the anterior tibial artery and the posterior tibial artery.
    S3. The robotic arm is remotely controlled and adaptively moved to bring the ultrasound probe to the shoulder joint; the long head of the biceps tendon, the deltoid muscle and the subscapularis tendon and sheath are scanned along the shoulder joint toward the arm, and the supraspinatus and infraspinatus tendons are scanned along the shape of the central bone of the shoulder, acquiring medical images and video.
    S4. The robotic arm is remotely controlled and adaptively moved to bring the probe to the elbow joint and the olecranon fossa; the capitellum of the humerus and the front of the radial head are scanned along the dorsal side of the elbow toward the hand, and the capitellum, the back of the radial head and the surrounding muscles and nerves are scanned along the lateral side of the elbow at the lateral edge of the olecranon; the push-pull and flip angle adjustment device adjusts the elbow to a 90-degree flexed position, and the medial epicondyle and the ulna are scanned, acquiring medical images and video.
    S5. The robotic arm is remotely controlled and adaptively moved to bring the probe to the knee joint; scanning proceeds along the lateral side of the knee joint and the patellar side, then toward the hand over the capitellum of the humerus and the front of the radial head, and along the lateral side of the elbow at the lateral edge of the olecranon over the capitellum and the back of the radial head, acquiring medical images and video.
    S6. The robotic arm is remotely controlled and adaptively moved to bring the probe to the ankle joint, the fibula and the talus; the push-pull and flip angle adjustment device of the robotic arm assists in bending the ankle joint; the anterior talofibular ligament is scanned, the probe is moved to the rounded bone surface of the fibula, the lateral edge of the talus is scanned, and images of the talar triangle, the talus and the anterior (talo)fibular ligament are acquired, together with medical images and video.
  6. The method for fused recognition of visual images and medical images and for autonomous positioning and scanning, characterized in that the method is applicable to a device for remote and autonomous positioning and scanning, which comprises one or more of a robot, a robotic arm, a medical device, a control device of the medical device, an ultrasound device and an ultrasound probe device, the above including one or more of a scanning device and a scanning control device, as well as one or more of a planar vision device and one or more of a depth vision device.
PCT/CN2022/000117 2021-08-23 2022-08-18 Recognition, autonomous positioning and scanning method for visual image and medical image fusion WO2023024396A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202280056988.6A CN118338850A (en) 2021-08-23 2022-08-18 Visual image and medical image fusion identification and autonomous positioning scanning method
AU2022335276A AU2022335276A1 (en) 2021-08-23 2022-08-18 Recognition, autonomous positioning and scanning method for visual image and medical image fusion

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110978078.0A CN113855067A (en) 2021-08-23 2021-08-23 Visual image and medical image fusion recognition and autonomous positioning scanning method
CN202110978078.0 2021-08-23

Publications (1)

Publication Number Publication Date
WO2023024396A1 true WO2023024396A1 (en) 2023-03-02

Family

ID=78988258

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/000117 WO2023024396A1 (en) 2021-08-23 2022-08-18 Recognition, autonomous positioning and scanning method for visual image and medical image fusion

Country Status (3)

Country Link
CN (2) CN113855067A (en)
AU (1) AU2022335276A1 (en)
WO (1) WO2023024396A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937219A (en) * 2023-03-14 2023-04-07 合肥合滨智能机器人有限公司 Ultrasonic image part identification method and system based on video classification

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113855067A (en) * 2021-08-23 2021-12-31 谈斯聪 Visual image and medical image fusion recognition and autonomous positioning scanning method
CN117017355B (en) * 2023-10-08 2024-01-12 合肥合滨智能机器人有限公司 Thyroid autonomous scanning system based on multi-modal generation type dialogue

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100128954A1 (en) * 2008-11-25 2010-05-27 Algotec Systems Ltd. Method and system for segmenting medical imaging data according to a skeletal atlas
CN103679175A (en) * 2013-12-13 2014-03-26 电子科技大学 Fast 3D skeleton model detecting method based on depth camera
CN111658003A (en) * 2020-06-19 2020-09-15 浙江大学 But pressure regulating medical science supersound is swept and is looked into device based on arm
CN111916195A (en) * 2020-08-05 2020-11-10 谈斯聪 Medical robot device, system and method
CN112001925A (en) * 2020-06-24 2020-11-27 上海联影医疗科技股份有限公司 Image segmentation method, radiation therapy system, computer device and storage medium
CN113855067A (en) * 2021-08-23 2021-12-31 谈斯聪 Visual image and medical image fusion recognition and autonomous positioning scanning method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111973152A (en) * 2020-06-17 2020-11-24 谈斯聪 Five sense organs and surgical medical data acquisition analysis diagnosis robot and platform
CN111973228A (en) * 2020-06-17 2020-11-24 谈斯聪 B-ultrasonic data acquisition, analysis and diagnosis integrated robot and platform

Also Published As

Publication number Publication date
CN113855067A (en) 2021-12-31
AU2022335276A1 (en) 2024-04-11
CN118338850A (en) 2024-07-12

Similar Documents

Publication Publication Date Title
WO2023024396A1 (en) Recognition, autonomous positioning and scanning method for visual image and medical image fusion
CN106875432B (en) Temporomandibular joint movement reconstruction system
Mündermann et al. The evolution of methods for the capture of human movement leading to markerless motion capture for biomechanical applications
JP2021154168A (en) Surgical navigation of the hip using fluoroscopy and tracking sensors
AU2021321650A1 (en) Medical robotic device, system, and method
CN112641511A (en) Joint replacement surgery navigation system and method
CN104274183A (en) Motion information processing apparatus
CN112270993B (en) Ultrasonic robot online decision-making method and system taking diagnosis result as feedback
CN112151169B (en) Autonomous scanning method and system of humanoid-operation ultrasonic robot
WO2023024398A1 (en) Method for intelligently identifying thoracic organ, autonomously locating and scanning thoracic organ
CN108968973A (en) A kind of acquisition of body gait and analysis system and method
CN111973228A (en) B-ultrasonic data acquisition, analysis and diagnosis integrated robot and platform
CN112998749A (en) Automatic ultrasonic inspection system based on visual servoing
CN112132805A (en) Ultrasonic robot state normalization method and system based on human body characteristics
CN118338997A (en) Medical robot device, system and method
Fohanno et al. Improvement of upper extremity kinematics estimation using a subject-specific forearm model implemented in a kinematic chain
WO2023024397A1 (en) Medical robot apparatus, system and method
JP6598422B2 (en) Medical information processing apparatus, system, and program
CN108877897A (en) Dental diagnostic scheme generation method, device and diagnosis and therapy system
US20240041529A1 (en) Device for modeling cervical artificial disc based on artificial intelligence and method thereof
CN114974506A (en) Human body posture data processing method and system
CN114947823A (en) Integrated analytic data acquisition system
WO2021254427A1 (en) Integrated robot and platform for ultrasound image data acquisition, analysis, and recognition
Hopper et al. Integrating biomechanical and animation motion capture methods in the production of participant specific, scaled avatars
Mihy et al. Minimizing the effect of IMU misplacement with a functional orientation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22859758

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022335276

Country of ref document: AU

Ref document number: AU2022335276

Country of ref document: AU

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022335276

Country of ref document: AU

Date of ref document: 20220818

Kind code of ref document: A