AU2022335276A1 - Recognition, autonomous positioning and scanning method for visual image and medical image fusion

Info

Publication number
AU2022335276A1
Authority
AU
Australia
Prior art keywords
images
medical
scanning
information
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
AU2022335276A
Inventor
Sicong TAN
Hao Yu
Mengfei YU
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of AU2022335276A1
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0891 Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of blood vessels

Abstract

Recognition, autonomous positioning and scanning methods for visual image and medical image fusion are provided, based on artificial intelligence robot technology. Provided are a method for autonomously positioning and recognizing human organ feature positions so as to collect ultrasonic images; a method for remotely and adaptively collecting images and videos of the neck, the left and right lobes and isthmus of the thyroid, and the parotid gland; a method for remotely and adaptively scanning lower limb blood vessels and collecting medical images and videos of organs and tissues; and a method for remotely and adaptively scanning skeletal joints, muscles and nerves. Through autonomous and remote-controlled collection and sharing of medical images, image sharing is implemented, thereby alleviating problems of medical work such as heavy collection workloads, high working pressure and frequent night shifts. Remote collection and sharing of medical images, the efficiency of consultations and ward rounds, and the use of expert opinion in clinical cases are thereby improved. The invention is applicable to outpatient clinics, wards and overseas medical institutions.

Description

RECOGNITION, AUTONOMOUS POSITIONING AND SCANNING METHODS FOR VISUAL IMAGE AND MEDICAL IMAGE FUSION
TECHNICAL FIELD
The present invention belongs to the technical field of medical artificial intelligence, and relates to data analysis technology, robot motion planning technology, intelligent image recognition methods of artificial intelligence, medical data analysis methods and recognition technology.
BACKGROUND
At present, in the field of medical treatment, the examination process is affected by various human factors: the collection quality of medical images and videos is poor and their degree of standardization is low, which reduces the accuracy of illness-state recognition, and each specialist is limited by his or her own field and medical specialty. Under the remote control of an administrator, the robotic arm carried by a robot, together with a vision device, a depth vision device and various neural network methods and their improved variants, can intelligently recognize human faces, human organs and bones and assist in collecting medical images and videos. During an epidemic, manual collection is highly infective, carries a large infection risk, is inefficient and inaccurate, and can propagate the epidemic. Remote-controlled and autonomous collection of medical images and videos by the robotic arm realizes remote-controlled collection, autonomous collection, intelligent recognition and analysis of data, images and videos, and autonomous localization and scanning, so as to effectively prevent the spread of major diseases such as infectious diseases and plague. A visual image and medical image fusion intelligent recognition and autonomous location scanning method remotely controls and self-adaptively adjusts the robot arm, intelligently recognizes the human body, organs and bones, and autonomously moves the robot arm, scanning probe and scanning device to autonomously locate the scanning part and organ. It autonomously scans and collects medical images and videos according to an intelligent scanning method, greatly improving intelligent recognition and the efficiency of collecting medical data and medical image videos.
TECHNICAL PROBLEMS
The object of the present invention is to overcome the above-mentioned shortcomings and deficiencies of the prior art and to provide an intelligent recognition method and a robotic arm autonomous location and scanning method, in which a remote-controlled, autonomous robot collects medical images with a medical device and fuses the visual image and the medical image of the present disclosure, thereby solving the problem of errors in human scanning, examination, collection, diagnosis and treatment. Medical image fusion intelligent recognition methods for intelligently recognizing the five sense organs, organs, bones, joints, blood vessels, human body features and their positions, together with methods by which a robot, a robot arm and a medical device autonomously locate an external position area of the human body and scan and collect medical images, become the key technical problems of autonomous positioning and scanning. Effective and complete scanning methods are another important technical problem for ultrasonic scanning. The present invention provides a method for remotely and self-adaptively scanning the neck, the left and right lobes and isthmus of the thyroid, and the parotid gland to collect images and videos; a self-adaptive scanning method for collecting medical images and videos of lower limb blood vessels, organs and tissues; and a method for remotely and self-adaptively scanning bone joints, muscles and nerves and collecting medical images and videos of organs and tissues.
TECHNICAL SOLUTIONS OF THE PRESENT INVENTION
A method for intelligently recognizing the human five sense organs, organs, bones, joints, blood vessels, human body features and their positions by fusing visual images and medical images, and for autonomously locating the positions of the five sense organs, bones, joints, blood vessels and human body feature points from planar visual images, depth visual images, ultrasonic images and endoscope images, includes the following steps:
S1, Establish a general visual image model, a five sense organ model, a blood vessel model and a feature model, wherein the general image model takes the human face, the five sense organs, the ear, the lip, the mandible, the neck, the navel, the nipple, the characteristic genital organ and the feature parts of organs and their positions as feature items, and intelligently recognizes the features and positions of the human body by a neural network algorithm and its improved method.
S2, Establish a depth vision device, a depth vision image model and a bone model.
S3, Combine the depth information and the location information of the human body where the bones are located with the neighboring feature organ information recognized by the general image model described in S1 as the feature items and input items of the bone intelligent recognition model.
S4, Apply the neural network algorithm and its improved method to intelligently recognize each bone of the body and its location, including the mandible, the bone at the bottom end of the ribs, the xiphoid process, the vertebral column and spine and their positions, the lower boundary position of the bones, all bones and joints of the shoulder, knee, foot and waist and their positions, and the other bones and joints of the body and their positions.
S5, Obtain external scanning feature information from the neighboring feature organ information recognized by the general image model described in S1, and obtain external scanning skeleton information from the skeleton information recognized in S4.
S6, Establish a feature model of the ultrasonic image, and map the color information of blood vessels, the position information of blood vessels, and the contour features, shape features, structural features and color features of ultrasonic images of organs.
S7, Combine the blood vessel information, the blood vessel location information, the organ information of the ultrasonic images and the organ features into information combination items as the feature items and input items of the ultrasonic image model.
S8, Taking the external scanning feature information and the external scanning skeleton information as external scanning information, input the skeleton information recognized in S4, the feature items of the ultrasonic image model and the position region into the neural network, the improved method and the weight optimizer of the ultrasonic image model, and obtain output values by image training.
S9, Improve the deep neural network method and the weight optimizer, and obtain as output values the organs, blood vessels, bones and human body positions where the scan is located, recognized by image training.
S10, Output, as the result, the external scanning positions of the autonomously located organs, blood vessels, bones and human body.
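As a clarifying illustration only (not part of the claimed method), the following sketch shows one possible way to fuse the three feature groups described in S3, S7 and S8 (visual feature items, depth and bone feature items, and ultrasonic feature items) into a single neural network that outputs organ, bone and blood vessel classes together with a position. All class counts, feature sizes and layer widths are assumptions.

```python
# Hedged sketch: a minimal fusion network over concatenated feature groups.
# Feature dimensions and head sizes are illustrative assumptions, not values
# taken from the patent.
import torch
import torch.nn as nn

class FusionRecognitionModel(nn.Module):
    def __init__(self, visual_dim=128, bone_dim=64, ultrasound_dim=96,
                 num_classes=32):
        super().__init__()
        # One small encoder per feature group (S1 visual items, S3 bone items,
        # S6/S7 ultrasonic items).
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, 128), nn.ReLU())
        self.bone_enc = nn.Sequential(nn.Linear(bone_dim, 128), nn.ReLU())
        self.us_enc = nn.Sequential(nn.Linear(ultrasound_dim, 128), nn.ReLU())
        # Shared trunk over the fused representation (S8).
        self.trunk = nn.Sequential(nn.Linear(3 * 128, 256), nn.ReLU())
        # Two heads: what was recognized, and where it is (S9/S10).
        self.class_head = nn.Linear(256, num_classes)   # organ/bone/vessel class
        self.position_head = nn.Linear(256, 3)          # x, y, z of the region

    def forward(self, visual_feats, bone_feats, us_feats):
        fused = torch.cat([self.visual_enc(visual_feats),
                           self.bone_enc(bone_feats),
                           self.us_enc(us_feats)], dim=-1)
        h = self.trunk(fused)
        return self.class_head(h), self.position_head(h)

# Example forward pass with random feature vectors (batch of 1).
model = FusionRecognitionModel()
logits, position = model(torch.randn(1, 128), torch.randn(1, 64), torch.randn(1, 96))
```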
A method by which the robot autonomously locates the coordinates of the external location area of the human body and scans and collects medical images includes the following steps:
S1, Publish collection tasks and medical advice messages from the administrator and doctor through the communication module to the robot and the medical device, obtain the human body organs to be collected and the coordinates of the external location areas corresponding to the collection tasks, set the organs as targets, set the target name, target parameters and position information, and set the communication target.
S2, The robot vision collection device and the visual recognition module publish the external features corresponding to each organ; the external scanning feature information of the neighboring feature organs recognized by the general image model scans the coordinates of the external location area of the human body, and the depth information published by the depth camera and the neighboring bone information scan the bone information.
S3, The robot arm, ultrasonic device, ultrasonic probe, robot and medical device subscribe to the location information of the external scanning area corresponding to the organ, the subscription target, the parameters, the target pose and the pose marker; set the target header ID, the target pose and the direction value, and set the timestamp.
S4, The remote main control system and the ultrasonic probe carried by the autonomous robot arm move to and scan the collection area of the human body according to the location of the subscribed collection area and the motions planned by the motion planning module of the robot arm and the image collection module. The ultrasonic probe and the ultrasonic device publish the collected image information, the color information of the blood vessels, the location information of the blood vessels, and the contour features, shape features, structural features and color features of the ultrasonic images.
S5, The robot, the medical device and the visual recognition module subscribe to the image information; the color information of the blood vessels, the location information of the blood vessels, and the contour features, shape features, structural features and color features of the ultrasonic images are extracted and input into the calculation model, which intelligently recognizes whether the images show the tissue of the target organ and whether the collected images are images of the scanning target organ.
S6, Set the collection target parameters, which include the pose marker, timestamp, target header ID, COG target pose and direction value; set the allowable error of location and attitude; if motion planning fails, re-plan the motions; set the reference coordinate system of the target position; and set the time limit of each motion planning.
S7, The robot, medical device, robot arm, ultrasonic device and ultrasonic probe remotely control and self-adaptively move the robotic arm, self-adaptively adjust the parameters of the images, the parameters of the medical devices and the parameters of the video, and determine whether the images and videos are standard collections and whether the collected images are valid.
S8, The robot, the medical device and the remote-controlled robot arm self-adaptively adjust the scanning mode, the probe angle and the parameters of the ultrasonic probe, determine whether the current position is the collection position of the target organ, collect all images of the organs and tissues, and determine whether the images and videos cover all scanning modes, whether the collected videos are valid, whether the videos are collections of the target organ or tissue, and whether the collected images are images of the scanning target organ.
S9, Return the collection completion information of the target organ; the robot, the medical device and the robot arm subscribe to the task information; the robot and the medical device remotely control and self-adaptively move the robot arm, and self-adaptively move the ultrasonic probe to the external scanning location area of the next target organ.
S10, The robot and the medical device determine that all target organ collection tasks are completed according to the returned collection completion information of the target organs.
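For illustration only, the following plain-Python sketch mimics the publish/subscribe flow of S1 to S10: the administrator publishes a collection task with a target organ, target pose, header ID and timestamp; an arm controller subscribes, scans, and publishes a completion message so the next target can be dispatched. The message fields mirror the ones named above, but the bus, topic names and classes are hypothetical, not an API defined by the patent.

```python
# Hedged sketch of the task publish/subscribe flow (S1-S10).
# The MessageBus, topic names and message classes are illustrative assumptions.
import time
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class CollectionTask:                      # published in S1
    target_name: str                       # e.g. "thyroid_left_lobe"
    header_id: str                         # reference frame of the pose
    target_pose: tuple                     # (x, y, z) of the external area
    direction: tuple                       # probe direction value
    timestamp: float = field(default_factory=time.time)

@dataclass
class CompletionInfo:                      # returned in S9
    target_name: str
    valid: bool

class MessageBus:
    """Minimal in-process publish/subscribe bus."""
    def __init__(self):
        self._subs = defaultdict(list)
    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)
    def publish(self, topic, msg):
        for cb in self._subs[topic]:
            cb(msg)

bus = MessageBus()

def arm_controller(task: CollectionTask):
    # S3/S4: the arm subscribes to the task, moves and scans (stubbed here),
    # then S9: publishes completion so the next target can be dispatched.
    print(f"scanning {task.target_name} at {task.target_pose} in frame {task.header_id}")
    bus.publish("collection/complete", CompletionInfo(task.target_name, valid=True))

bus.subscribe("collection/task", arm_controller)
bus.subscribe("collection/complete",
              lambda info: print(f"{info.target_name} done, valid={info.valid}"))

# S1: administrator publishes one collection task.
bus.publish("collection/task",
            CollectionTask("thyroid_left_lobe", "base_link", (0.2, 0.1, 0.4), (0, 0, 1)))
```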
A method for remotely and self-adaptively scanning the neck, the left and right lobes and isthmus of the thyroid, and the parotid gland to collect medical images and videos:
S1, According to claim 6, using the general vision device and the plurality of medical image fusion intelligent recognition methods, intelligently recognize the human ears, the ear portions and their positions, and the human lips and their positions; using the depth visual collection device and the plurality of medical image fusion intelligent recognition methods, intelligently recognize the bones, the lower jaw bone, the spine, the bone positions at the bottom ends of the ears and the spine positions, and the bone positions below the midline of the lip.
S2, According to claim 7, according to the method by which the robot autonomously locates the coordinates of the external location area of the human body, remotely controls and self-adaptively moves the robot arm and the ultrasonic probe, and scans and collects the medical images, the robot and the medical device subscribe to the position information of the external scanning areas of the neck, the left and right lobes and isthmus of the thyroid, and the parotid gland, the probe angle, the subscription target, the parameters, the target pose and the pose marker; set the target header ID, the target pose and the direction value, and set the timestamp.
S3, The robot and the medical device remotely control and self-adaptively move the robot arm, and self-adaptively adjust the parameters of the images: the gain, the color gain, the sensitivity time control, the time gain control, the focusing, the depth, the frame size, the blood flow velocity scale and the video parameters; remotely control and self-adaptively move the robot arm and the ultrasonic probe, and adjust the rotation angle and inclination angle of the probe, the scanning mode of the ultrasonic probe, the probe angle and the parameters to the following organs, for effective and complete collection of the tissue.
S4, Remotely control and self-adaptively move the robot arm, wherein the ultrasonic probe scans and collects images and videos of the neck, the spine, the common carotid artery and the internal carotid artery along the lower boundary of the bone at the bottom end of the ear.
S5, Remotely control and self-adaptively move the robot arm, wherein the ultrasound probe moves from the head side to the foot side along the carotid artery and scans the left and right lobes and the isthmus of the thyroid along the bone position below the midline of the lip, and collects medical images and videos.
S6, Remotely control and self-adaptively move the robot arm and the ultrasonic probe; scan along the bone position at the bottom end of the ear and along the lower jaw bone positions; scan the parotid gland, the submandibular gland and the sublingual gland up to the bone position below the midline of the lip, and collect medical images and videos.
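The parameter list in S3 (gain, color gain, sensitivity and time gain control, focus, depth, frame size, flow velocity scale) suggests a simple feedback loop. The sketch below is a hypothetical illustration of such self-adaptive adjustment based on mean image brightness; the `UltrasoundDevice` interface and all thresholds are assumptions, not an API from the patent.

```python
# Hedged sketch: self-adaptive adjustment of ultrasound imaging parameters (S3).
# UltrasoundDevice is a hypothetical interface; thresholds are illustrative.
import numpy as np

class UltrasoundDevice:
    """Stand-in for the real probe/device controller."""
    def __init__(self):
        self.params = {"gain": 50, "color_gain": 50, "tgc": 50,
                       "focus_mm": 20, "depth_mm": 40, "velocity_scale": 10}
    def grab_frame(self) -> np.ndarray:
        return np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    def apply(self, params: dict):
        self.params.update(params)

def adapt_parameters(dev: UltrasoundDevice, target_brightness=110, step=5,
                     max_iters=20):
    """Nudge the gain until the mean frame brightness is near the target."""
    for _ in range(max_iters):
        frame = dev.grab_frame()
        brightness = float(frame.mean())
        if abs(brightness - target_brightness) < 10:
            break                                   # image judged acceptable
        if brightness < target_brightness:
            dev.apply({"gain": min(dev.params["gain"] + step, 100)})
        else:
            dev.apply({"gain": max(dev.params["gain"] - step, 0)})
    return dev.params

print(adapt_parameters(UltrasoundDevice()))
```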
A method for remotely and self-adaptively scanning lower limb blood vessels and collecting medical recognition images and videos of lower limb blood vessels, organs and tissues:
S1, According to claim 6, using the planar vision device and the various medical image intelligent recognition methods, intelligently recognize the human ears and auricles and their positions and the navel and its position.
S2, Using the depth visual collection device and the various medical image intelligent recognition methods, intelligently recognize the locations of the bones, the bones at the bottom ends of the ribs and the xiphoid process, the ischium, the knee joints and their positions, the popliteal fossae and their positions, and the foot bones and foot joints and their positions.
S3, According to claim 6, intelligently recognize the color information of the blood vessels, the location information of the blood vessels and the organ information of the ultrasonic images; using the medical image intelligent recognition methods, intelligently recognize the abdominal aorta, the inguinal region, the superficial femoral artery, the popliteal artery, the anterior tibial artery and the posterior tibial artery.
S4, According to claim 6, using the medical image intelligent recognition methods, recognize the pulsation location as the groin region by the dynamic recognition method of ultrasonic images.
S5, Remotely control and self-adaptively move the robot arm and the ultrasonic probe to the groin area; the ultrasonic probe scans from the xiphoid position toward the lower navel direction along the abdominal aorta, scans the inguinal area, and collects the medical images and videos.
S6, Remotely control and self-adaptively move the robot arm; the ultrasonic probe scans the superficial femoral artery from the groin area, moves to the knee joint and its position and to the position of the popliteal fossa, scans the blood vessels near the back side of the knee joint from the popliteal fossa position, and collects the medical images and videos.
S7, Remotely control and self-adaptively move the robotic arm; the ultrasound probe scans downward along the popliteal artery, the anterior tibial artery and the posterior tibial artery, presses the blood vessels with the pressing device, scans the nearby blood vessels from the knee joint along the anterior tibial artery, and collects the medical images and videos.
S8, Remotely control and self-adaptively move the robotic arm and the ultrasound probe to the foot bones and foot joints and their positions, scan the dorsalis pedis artery and the plantar arteries along the foot, and collect the medical images and videos.
S9, Remotely control and self-adaptively move the robotic arm; the ultrasonic probe moves to the knee joint and its position, moves to the popliteal fossa and the inner side of the center line of the knee joint, scans downward along the blood vessels, scans the lower limb veins, including the femoral vein, the superficial femoral vein, the deep femoral vein, the external vein and the great saphenous vein, and collects medical images and videos.
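As an illustrative aid for S5 to S8 (not the patent's algorithm), the sketch below turns recognized 3D landmarks (groin, knee and popliteal fossa, ankle, foot) into an ordered sequence of probe waypoints by linear interpolation, which a motion planner could then follow down the limb. The landmark names, coordinates and step spacing are assumptions.

```python
# Hedged sketch: build probe waypoints down the lower limb from recognized
# landmarks (S5-S8). Coordinates and spacing are illustrative assumptions.
import numpy as np

# Hypothetical landmark positions (metres, robot base frame) as they might be
# produced by the recognition steps S2-S4.
landmarks = {
    "groin":          np.array([0.30, 0.10, 0.60]),
    "knee_popliteal": np.array([0.32, 0.12, 0.25]),
    "ankle":          np.array([0.33, 0.13, 0.02]),
    "foot_dorsum":    np.array([0.38, 0.13, 0.00]),
}

def interpolate_segment(start, end, spacing=0.02):
    """Evenly spaced waypoints from start to end (inclusive)."""
    length = np.linalg.norm(end - start)
    n = max(int(length / spacing), 1)
    return [start + (end - start) * t for t in np.linspace(0.0, 1.0, n + 1)]

def lower_limb_scan_path(lm):
    order = ["groin", "knee_popliteal", "ankle", "foot_dorsum"]
    path = []
    for a, b in zip(order, order[1:]):
        segment = interpolate_segment(lm[a], lm[b])
        path.extend(segment if not path else segment[1:])  # avoid duplicates
    return path

waypoints = lower_limb_scan_path(landmarks)
print(f"{len(waypoints)} probe waypoints, first {waypoints[0]}, last {waypoints[-1]}")
```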
A method for remotely and self-adaptively scanning bones, joints, muscles and nerves and collecting medical images and videos of bones, organs and tissues:
S1, According to claim 6, intelligently recognize all bones and joints of the shoulder and their positions, all bones and joints of the body and their positions, all bones and joints of the elbow and their positions, and all bones and joints of the ankle and their positions, using the depth visual collection device and the various medical image fusion intelligent recognition methods.
S2, Intelligently recognize the color information of the blood vessels, the position information of the blood vessels and the organ information of the ultrasonic images by the plurality of medical image fusion intelligent recognition methods, and intelligently recognize the abdominal aorta, the inguinal region, the superficial femoral artery, the popliteal artery and the anterior tibial artery.
S3, Move the robot arm and the ultrasonic probe to scan in remote-controlled mode and self-adaptive control mode: from the shoulder joint along the arm direction, scan the shoulder joint, the long head of the biceps tendon (LHBT), the deltoid muscle, the lower tendon and the tendon sheath; scan the supraspinatus tendon and the infraspinatus tendon along the central bone contour of the shoulder; and collect the medical images and videos.
S4, Move the robot arm and the ultrasonic probe to scan in remote-controlled mode and self-adaptive control mode: scan the elbow joint, the olecranon fossa, the capitulum of the humerus and the front of the radial head; scan from the back side of the elbow joint along the hand direction, sweeping along the outer side of the elbow joint and along the outer edge of the olecranon fossa; scan the capitulum of the humerus, the rear of the radius, the position behind the radial head, the muscles and the nerves; push or pull the turnover angle adjustment device so that the elbow joint is adjusted to 90 degrees; scan the epicondyle and the ulna; and collect the medical images and videos.
S5, Move the robot arm and the ultrasonic probe to scan in remote-controlled mode and self-adaptive control mode: scan all bones and joints of the knee joint and their positions, scan along the outer side of the knee joint, scan from the patellar side, scan the fibular head from the front and from the rear along the outer side of the knee joint, and collect the medical images and videos.
S6, Move the robot arm and the ultrasonic probe to scan in remote-controlled mode and self-adaptive control mode: scan the ankle joint, the fibula, the talus and their positions; the robot arm pushes or pulls the turnover angle adjustment device to assist in bending the ankle joint; scan the anterior talofibular ligament; move the probe to the rounded surface of the fibula; scan along the outer edge of the talus; collect images of the deltoid ligament, the talus and the anterior talofibular ligament; and collect the medical images and videos.
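For illustration only, the per-joint scan steps in S3 to S6 can be represented as a small protocol table driving a generic scan loop; the dictionary below paraphrases those steps, and the `move_probe_to`/`record_clip` helpers are hypothetical placeholders rather than functions defined by the patent.

```python
# Hedged sketch: musculoskeletal scan protocol as data (S3-S6).
# Helper functions are hypothetical stubs; the step lists paraphrase the text.
MSK_PROTOCOL = {
    "shoulder": ["shoulder joint", "long head of biceps tendon", "deltoid muscle",
                 "supraspinatus tendon", "infraspinatus tendon"],
    "elbow":    ["olecranon fossa", "capitulum of humerus", "radial head (front)",
                 "radial head (rear)", "epicondyle", "ulna"],
    "knee":     ["knee joint (outer side)", "patellar side", "fibular head"],
    "ankle":    ["ankle joint", "anterior talofibular ligament", "talus edge",
                 "deltoid ligament"],
}

def move_probe_to(target: str):
    """Placeholder for the remote-controlled / self-adaptive arm motion."""
    print(f"moving probe to: {target}")

def record_clip(target: str):
    """Placeholder for collecting the medical images and videos."""
    print(f"recording images/videos of: {target}")

def run_msk_scan(joint: str):
    for target in MSK_PROTOCOL[joint]:
        move_probe_to(target)
        record_clip(target)

run_msk_scan("shoulder")
```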
IN SUMMARY, THE PRESENT INVENTION HAS THE FOLLOWING BENEFICIAL EFFECTS
The present invention collects ultrasound images with medical robotic devices through remote, isolated and autonomous collection, and autonomously localizes and recognizes human organ features and their positions, so that the collection tasks of the outpatient service can be completed autonomously. This alleviates problems such as the large number of collection tasks, high working pressure and frequent night shifts. At the same time, the present invention provides a remote-controlled robot and methods for self-adaptively collecting images and videos of the neck, the left and right lobes and isthmus of the thyroid, and the parotid gland; for remotely and self-adaptively collecting medical images and videos of the lower limbs; and for remotely and self-adaptively scanning bone joints, muscles and nerves and collecting medical images and videos of organs and tissues, realizing effective remote and autonomous collection.
The collection of medical images and videos, the sharing of medical pictures and videos, and multi-expert remote joint consultation are implemented; the image and video data are collected by the robot in real time, which can greatly improve working efficiency.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of the method for intelligently recognizing human body features, organs, bones, joints and blood vessels and their positions by fusing visual images and medical images in the description of the present application; FIG. 2 is a flowchart of the method by which the robot and the medical device autonomously locate the external location area of the human body and scan and collect medical images and videos in the description of the present application.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The objective of the present invention is to design a remote-controlled robot to replace human work, to realize collection by a remotely controlled robot arm, and to effectively solve the problem of autonomous collection of images and videos. Using artificial intelligence robot technology, autonomous collection in the field of automation, robotic arm motion planning and depth camera collection of images of the human face, the five sense organs, the arms, external human body features, bones and joints are combined. The visual images and the medical images are fused to intelligently recognize the five sense organs, the organs of the human body, the bones, the joints, the blood vessels and human body features and their positions, and the robot, the robot arm and the medical device autonomously locate the external location area of the human body and autonomously locate, scan and collect medical images. The present invention provides a method for remote control and self-adaptive intelligent recognition that scans the neck, the left and right thyroid lobes, the isthmus and the parotid gland for image and video collection; a self-adaptive intelligent recognition and scanning method for collecting medical images and videos of lower limb blood vessels, organs and tissues; and a remote-controlled and self-adaptive intelligent recognition method for scanning bone joints, muscles and nerves and collecting medical images and videos of organs and tissues. The remote-controlled robot, the autonomous collection of medical images and videos, the remotely controlled medical image and video collection device and image sharing are realized to solve the problem of diagnosis and treatment errors and to improve the accuracy of intelligent collection and of medical data anomaly recognition. In order to understand the above technical solutions, the present invention is further described in detail below with reference to the embodiments and drawings, but the embodiments of the present invention are not limited thereto. The general idea of the technical solution in the implementation of the present application for solving the above technical problems is as follows: the present invention provides an intelligent recognition method that fuses medical images and visual images and intelligently recognizes the five sense organs, organs, bones, joints, blood vessels and human body features and their positions. A general visual image model, a human organ model, a blood vessel model and a feature model are established; the general image model takes the ear, the lip, the mandible, the neck, the navel, the nipple, the characteristic genital organ and the feature parts and positions as feature items, and a neural network algorithm and its improved method are used to intelligently recognize human body features and positions. A depth vision device, a depth vision image model and a bone model are established. The positions of the human organ features are recognized by autonomous location according to the depth information and the position information of the human body where the bones are located, and ultrasonic images are collected. A method is provided for remotely and self-adaptively collecting images and videos of the neck, the left and right lobes and isthmus of the thyroid, and the parotid gland; a method for remotely and self-adaptively collecting medical images and videos of lower limb blood vessels, organs and tissues; and a method for remotely and self-adaptively collecting medical images and videos of bone joints, muscles and nerves in a scanning manner, and of organs and tissues.
Example 1
The present disclosure will be further described in detail below with reference to the embodiments and FIG. 1, but the embodiments of the present disclosure are not limited thereto. Input the general visual image, human organ, blood vessel and feature information, including: the human face, the five sense organs, the ear, the lip, the mandible, the neck, the navel, the nipple, the characteristic genital organ and the feature parts and positions; input the depth visual image and bone information: the mandible, the bone at the bottom end of the ribs, the xiphoid process, the spine and spine position, the lower boundary position of the bones, the shoulder joints, the knee joints, the foot bones and foot joints, and the waist joints and their positions.
The blood vessel color information, the blood vessel position information, and the contour features, shape features, structural features and color features of the organs in the ultrasonic images are mapped. The blood vessel information, the blood vessel position information, the organ information of the ultrasonic images and the organ features are combined into feature items and synthesized into an information combination item of the ultrasonic image model, which is input as an entry. The external scanning feature information and the external scanning skeleton information are used as external scanning information; the improved deep neural network method and the weight optimizer are applied through image training; the output values of the organs, blood vessels, bones and human body and the scanned organs and positions where they are located are obtained; and the result of the autonomously located positions of the organs, blood vessels, bones and human body is output. According to the administrator, the doctor issues a collection task and a medical advice message, obtains the human body organ to be collected and the coordinates of the external location area corresponding to the collection task, sets it as the target, sets the target name, target parameters and location information, and sets the communication target. The robot vision collection device and the visual recognition module publish the external features corresponding to each organ; the external scanning feature information of the neighboring feature organs recognized by the general image model scans the coordinates of the external location area of the human body, and the depth information published by the depth camera and the neighboring skeleton information scan the bone information. The robot arm, the ultrasonic device, the ultrasonic probe and the robot subscribe to the location information of the external scanning area corresponding to the organ, the subscription target, the parameters, the target pose and the pose marker, set the target header ID, the target pose and the direction value, and set the timestamp; the remote master control system and the ultrasonic probe mounted on the autonomous robotic arm move to and scan the collection area of the human body according to the position of the subscribed collection area and the motions planned by the motion planning module and the image collection module of the robotic arm. The ultrasonic probe and the ultrasonic device publish the collected image information, the color information and location information of the blood vessels, and the contour features, shape features, structural features and color features of the organs. The robot, the medical device and the visual recognition module subscribe to the image information, extract the color information and position information of the blood vessels and the contour features, shape features, structural features and color features of the ultrasonic images, input them into the calculation model, and intelligently recognize whether the image shows the target organ or tissue and whether the image is an image of the scanning target organ. The blood vessel location information, the organ information of the ultrasonic images and the organ features are combined into an information combination as the feature items and input items of the ultrasonic image model. The collection target parameters are set, including the pose marker, the timestamp, the target header ID, the COG target pose and the direction value.
The allowable error of location and attitude is set; if the motion planning fails, the motions are re-planned; the reference coordinate system of the target position is set; and the time limit of each motion planning is set. The robot, medical device, robot arm, ultrasonic device and ultrasonic probe remotely control and self-adaptively move the robotic arm, self-adaptively adjust the parameters of the images, the parameters of the medical devices and the parameters of the video, and confirm whether the images and videos are standard collections and whether the collected images are valid. The robot, the medical device and the remote-controlled robot arm self-adaptively adjust the scanning mode, the probe angle and the parameters of the ultrasonic probe, determine whether the current position is the collection position of the target organ, collect all images of the organs and tissues, and confirm whether the images and videos cover all scanning modes, whether the collected videos are valid, whether the videos are collections of the target organ or tissue, and whether the collected images are images of the scanning target organ. The collection completion information of the target organ is returned; the robot, the medical device and the robot arm subscribe to the task information; the robot and the medical device remotely control and self-adaptively move the robot arm, and self-adaptively move the ultrasonic probe to the external scanning location area of the next target organ. The robot and the medical device determine that all target organ collection tasks are completed according to the returned collection completion information of the target organs.
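The planning settings described here (allowable position and attitude error, re-planning on failure, the reference coordinate system and a per-plan time limit) map naturally onto common motion-planning interfaces. The sketch below shows one possible realization using the MoveIt `moveit_commander` Python API; this framework choice is an assumption made for illustration, not something specified by the patent, and the group name, frame and pose values are hypothetical.

```python
# Hedged sketch: setting the planning parameters of the example (tolerance,
# re-planning, reference frame, time limit) with MoveIt's Python commander.
# The group name "arm", the frame and the pose values are assumptions.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("ultrasound_probe_planner")

group = moveit_commander.MoveGroupCommander("arm")
group.set_pose_reference_frame("base_link")      # reference coordinate system
group.set_goal_position_tolerance(0.005)         # allowable error of location (m)
group.set_goal_orientation_tolerance(0.05)       # allowable error of attitude (rad)
group.allow_replanning(True)                     # re-plan if planning fails
group.set_planning_time(5.0)                     # time limit of each planning

target = PoseStamped()
target.header.frame_id = "base_link"             # "target header ID"
target.header.stamp = rospy.Time.now()           # timestamp
target.pose.position.x = 0.30                    # target pose (illustrative)
target.pose.position.y = 0.10
target.pose.position.z = 0.45
target.pose.orientation.w = 1.0                  # direction value

group.set_pose_target(target)
success = group.go(wait=True)                    # plan and move the probe
group.stop()
group.clear_pose_targets()
```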
Example 2
The present disclosure will be further described in detail below with reference to the embodiments and FIG. 1 and FIG. 2, but the embodiments of the present disclosure are not limited thereto; the remote-controlled and self-adaptive scanning of the neck and of the left and right lobes of the thyroid is described. Using the general vision device and the plurality of medical image intelligent recognition methods, intelligently recognize the human ears, the ear portions and their positions, and the human lips and their positions; using the depth visual collection device and the plurality of medical image intelligent recognition methods, intelligently recognize the bones, the lower jaw bone, the spine, the bone positions at the bottom ends of the ears and the spine positions, and the bone positions below the midline of the lip. According to the method by which the robot autonomously locates the coordinates of the external location area of the human body, remotely controls and self-adaptively moves the robot arm and the ultrasonic probe, and scans and collects the medical images, the robot and the medical device subscribe to the position information of the external scanning areas of the neck, the left and right lobes and isthmus of the thyroid, and the parotid gland, the scanning method, the probe angle, the subscription target, the parameters, the target pose and the pose marker; set the target header ID, the target pose and the direction value, and set the timestamp. The robot and the medical device remotely control and self-adaptively move the robot arm and self-adaptively adjust the parameters of the images: the gain, the color gain, the sensitivity time control, the time gain control, the focusing, the depth, the frame size, the blood flow velocity scale, the video parameters and the image collection method; remotely control and self-adaptively move the robot arm and the ultrasonic probe, and adjust the rotation angle and inclination angle of the probe, the scanning mode of the ultrasonic probe, the probe angle and the parameters to the following organs, for effective and complete collection of the tissue. Remotely control and self-adaptively move the robot arm, wherein the ultrasonic probe scans and collects images and videos of the neck, the spine, the common carotid artery and the internal carotid artery along the lower boundary of the bone at the bottom end of the ear. Remotely control and self-adaptively move the robot arm, wherein the ultrasound probe moves from the head side to the foot side along the carotid artery and scans the left and right lobes and the isthmus of the thyroid along the bone position below the midline of the lip, and collects medical images and videos. Remotely control and self-adaptively move the robot arm and the ultrasonic probe; scan along the bone position at the bottom end of the ear and along the lower jaw bone positions; scan the parotid gland, the submandibular gland and the sublingual gland up to the bone position below the midline of the lip, and collect medical images and videos.
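Purely as an illustration of the head-to-foot thyroid sweep described above, the sketch below steps a probe from a landmark at the bottom of the ear toward the foot side in small increments, stopping when a hypothetical frame classifier `looks_like_thyroid` reports thyroid tissue; the classifier, landmark coordinates and step size are assumptions introduced for the example.

```python
# Hedged sketch: head-to-foot sweep along the carotid toward the thyroid.
# looks_like_thyroid() and all coordinates are illustrative assumptions.
import numpy as np

def looks_like_thyroid(frame: np.ndarray) -> bool:
    """Placeholder for the ultrasonic-image recognition model."""
    return float(frame.mean()) > 120          # dummy criterion for the sketch

def grab_frame() -> np.ndarray:
    return np.random.randint(0, 256, (480, 640), dtype=np.uint8)

def sweep_to_thyroid(ear_bottom_xyz, step_mm=5.0, max_steps=40):
    """Move the probe footward from the ear-bottom landmark until the frame
    is recognized as thyroid tissue, recording each position visited."""
    pos = np.asarray(ear_bottom_xyz, dtype=float)
    footward = np.array([0.0, 0.0, -step_mm / 1000.0])   # -z: toward the feet
    visited = []
    for _ in range(max_steps):
        visited.append(pos.copy())
        if looks_like_thyroid(grab_frame()):
            break
        pos = pos + footward
    return visited

path = sweep_to_thyroid([0.05, 0.12, 0.55])
print(f"probe stopped after {len(path)} positions at {path[-1]}")
```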
Example 3
In combination with the embodiments and FIG. 1 and FIG. 2, the method for collecting medical images and videos of lower limb blood vessels, organs and tissues is further described in detail as follows. Using the planar vision device and the various medical image intelligent recognition methods, the human ears and auricles and their positions and the navel and its position are intelligently recognized. Using the depth visual collection device and the various medical image fusion intelligent recognition methods, the locations of the bones, the bones at the bottom ends of the ribs and the xiphoid process, the ischium, the knee joints and their positions, the popliteal fossae and their positions, and the foot bones and foot joints and their positions are intelligently recognized. The color information of the blood vessels, the location information of the blood vessels and the organ information of the ultrasonic images are intelligently recognized. Through the ultrasonic images and the various medical image fusion intelligent recognition methods, the abdominal aorta, the inguinal region, the superficial femoral artery, the popliteal artery, the anterior tibial artery and the posterior tibial artery are intelligently recognized, and the pulsation location is recognized as the groin area by the dynamic recognition method under the ultrasonic images. Remotely control and self-adaptively move the robot arm and the ultrasonic probe to the groin area; the ultrasonic probe scans from the xiphoid position toward the lower navel direction along the abdominal aorta, scans the inguinal area, and collects the medical images and videos. Remotely control and self-adaptively move the robot arm; the ultrasonic probe scans the superficial femoral artery from the groin area, moves to the knee joint and its position and to the position of the popliteal fossa, scans the blood vessels near the back side of the knee joint from the popliteal fossa position, and collects the medical images and videos. Remotely control and self-adaptively move the robotic arm; the ultrasound probe scans downward along the popliteal artery, the anterior tibial artery and the posterior tibial artery, presses the blood vessels with the pressing device, scans the nearby blood vessels from the knee joint along the anterior tibial artery, and collects the medical images and videos.
Remotely control and self-adaptively move the robotic arm and the ultrasound probe to the foot bones and foot joints and their positions, scan the dorsalis pedis artery and the plantar arteries along the foot, and collect the medical images and videos. Remotely control and self-adaptively move the robotic arm; the ultrasonic probe moves to the knee joint and its position, moves to the popliteal fossa and the inner side of the center line of the knee joint, scans downward along the blood vessels, scans the lower limb veins, including the femoral vein, the superficial femoral vein, the deep femoral vein, the external vein and the great saphenous vein, and collects medical images and videos.
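The "dynamic recognition method" that identifies the groin by the arterial beat suggests looking for periodic frame-to-frame intensity variation in the ultrasound stream. The sketch below is one hedged way to score such pulsation over a short frame buffer; the frame source, window length and threshold are assumptions, not values from the patent.

```python
# Hedged sketch: detect arterial pulsation ("beat location") from temporal
# intensity variation across recent ultrasound frames. Thresholds are
# illustrative assumptions.
import numpy as np

def pulsation_score(frames: np.ndarray) -> float:
    """frames: array of shape (T, H, W), a short time window of B-mode frames.
    Returns the mean per-pixel temporal standard deviation, which rises when
    a pulsating vessel is in view."""
    return float(frames.std(axis=0).mean())

def is_over_pulsating_vessel(frames: np.ndarray, threshold=8.0) -> bool:
    return pulsation_score(frames) > threshold

# Synthetic demo: a window of frames with a slow sinusoidal brightness beat.
t = np.arange(30).reshape(-1, 1, 1)
frames = 100 + 12 * np.sin(2 * np.pi * t / 15) + np.random.randn(30, 64, 64)
print(pulsation_score(frames), is_over_pulsating_vessel(frames))
```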
Example 4
The present disclosure will be further described in detail below with reference to the embodiments and FIG. 1 and FIG. 2, and the method for scanning bone joints, muscles and nerves and collecting medical images and videos of organs and tissues is further described as follows. All bones and joints of the shoulder and their positions, all bones and joints of the body and their positions, all bones and joints of the elbow and their positions, and all bones and joints of the ankle and their positions are intelligently recognized by the depth visual collection device and the various medical image fusion intelligent recognition methods. The color information of the blood vessels, the position information of the blood vessels and the organ information of the ultrasonic images are intelligently recognized by the plurality of medical image fusion intelligent recognition methods, and the abdominal aorta, the inguinal region, the superficial femoral artery, the popliteal artery and the anterior tibial artery are intelligently recognized. Move the robot arm and the ultrasonic probe to scan in remote-controlled mode and self-adaptive control mode: from the shoulder joint along the arm direction, scan the shoulder joint, the long head of the biceps tendon (LHBT), the deltoid muscle, the lower tendon and the tendon sheath; scan the supraspinatus tendon and the infraspinatus tendon along the central bone contour of the shoulder; and collect the medical images and videos. Move the robot arm and the ultrasonic probe to scan in remote-controlled mode and self-adaptive control mode: scan the elbow joint, the olecranon fossa, the capitulum of the humerus and the front of the radial head; scan from the back side of the elbow joint along the hand direction, sweeping along the outer side of the elbow joint and along the outer edge of the olecranon fossa; scan the capitulum of the humerus, the rear of the radius, the position behind the radial head, the muscles and the nerves; push or pull the turnover angle adjustment device so that the elbow joint is adjusted to 90 degrees; scan the epicondyle and the ulna; and collect the medical images and videos. Move the robot arm and the ultrasonic probe to scan in remote-controlled mode and self-adaptive control mode: scan all bones and joints of the knee joint and their positions, scan along the outer side of the knee joint, scan from the patellar side, scan the fibular head from the front and from the rear along the outer side of the knee joint, and collect the medical images and videos. Move the robot arm and the ultrasonic probe to scan in remote-controlled mode and self-adaptive control mode: scan the ankle joint, the fibula, the talus and their positions; the robot arm pushes or pulls the turnover angle adjustment device to assist in bending the ankle joint; scan the anterior talofibular ligament; move the probe to the rounded surface of the fibula; scan along the outer edge of the talus; collect images of the deltoid ligament, the talus and the anterior talofibular ligament; and collect the medical images and videos.
RECOGNITION, AUTONOMOUS POSITIONING AND SCANNING METHODS FOR VISUAL IMAGE AND MEDICAL IMAGE FUSION
Claim 1. Recognition, autonomous positioning and scanning methods for visual image and medical image fusion, comprising a method for intelligently recognizing the human five sense organs, organs, bones, joints, blood vessels and human body features and their positions by fusing visual images and medical images, and for autonomously locating the positions of the five sense organs, bones, joints, blood vessels and human body feature points from planar visual images, depth visual images, ultrasonic images and endoscope images, wherein the method comprises the following steps: S1, establish a general visual image model, a five sense organ model, a blood vessel model and a feature model, wherein the general image model takes the human face, the five sense organs, the ear, the lip, the mandible, the neck, the navel, the nipple, the characteristic genital organ and the feature parts of organs and their positions as feature items, and intelligently recognizes the features and positions of the human body by a neural network algorithm and its improved method; S2, establish a depth vision device, a depth vision image model and a bone model; S3, combine the depth information and the location information of the human body where the bones are located with the neighboring feature organ information recognized by the general image model described in S1 as the feature items and input items of the bone intelligent recognition model; S4, apply the neural network algorithm and its improved method to intelligently recognize each bone of the body and its location, including the mandible, the bone at the bottom end of the ribs, the xiphoid process, the vertebral column and spine and their positions, the lower boundary position of the bones, all bones and joints of the shoulder, knee, foot and waist and their positions, and the other bones and joints of the body and their positions; S5, obtain external scanning feature information from the neighboring feature organ information recognized by the general image model described in S1, and obtain external scanning skeleton information from the skeleton information recognized in S4; S6, establish a feature model of the ultrasonic image, and map the blood vessel color information, the blood vessel position information, and the contour features, shape features, structural features and color features of ultrasonic images of organs; S7, combine the blood vessel information, the blood vessel location information, the organ information of the ultrasonic images and the organ features into information combination items as the feature items and input items of the ultrasonic image model; S8, taking the external scanning feature information and the external scanning skeleton information as external scanning information, input the skeleton information recognized in S4, the feature items of the ultrasonic image model and the position region into the neural network, the improved method and the weight optimizer of the ultrasonic image model, and obtain output values by image training; S9, improve the deep neural network method and the weight optimizer, and obtain as output values the organs, blood vessels, bones and human body positions where the scan is located, recognized by image training; S10, output, as the result, the external scanning positions of the autonomously located organs, blood vessels, bones and human body.
Claim 2. Recognition, autonomous positioning and scanning methods for visual image and medical image fusion, comprising a method by which the robot autonomously locates the coordinates of the external location area of the human body and scans and collects medical images, wherein the method comprises the following steps: S1, publish collection tasks and medical advice messages from the administrator and doctor through the communication module to the robot and the medical device, obtain the human body organs to be collected and the coordinates of the external location areas corresponding to the collection tasks, set the organs as targets, set the target name, target parameters and position information, and set the communication target; S2, the robot vision collection device and the visual recognition module publish the external features corresponding to each organ, the external scanning feature information of the neighboring feature organs recognized by the general image model scans the coordinates of the external location area of the human body, and the depth information published by the depth camera and the neighboring bone information scan the bone information; S3, the robot arm, ultrasonic device, ultrasonic probe, robot and medical device subscribe to the location information of the external scanning area corresponding to the organ, the subscription target, the parameters, the target pose and the pose marker, set the target header ID, the target pose and the direction value, and set the timestamp; S4, the remote main control system and the ultrasonic probe carried by the autonomous robot arm move to and scan the collection area of the human body according to the location of the subscribed collection area and the motions planned by the motion planning module of the robot arm and the image collection module;

Claims

the ultrasonic probe and the ultrasonic device publish the collected image information, the color information of the blood vessels, the location information of the blood vessels, and the contour features, shape features, structural features and color features of the ultrasonic images; S5, the robot, the medical device and the visual recognition module subscribe to the image information; the color information of the blood vessels, the location information of the blood vessels, and the contour features, shape features, structural features and color features of the ultrasonic images are extracted and input into the calculation model, which intelligently recognizes whether the images show the target organ and whether the collected images are images of the scanning target organ; S6, set the collection target parameters, which include the pose marker, the timestamp, the target header ID, the COG target pose and the direction value, set the allowable error of location and attitude, re-plan the motions if the motion planning fails, set the reference coordinate system of the target position, and set the time limit of each motion planning; S7, the robot, medical device, robot arm, ultrasonic device and ultrasonic probe remotely control and adaptively move the robotic arm, adaptively adjust the parameters of the images, the parameters of the medical devices and the parameters of the video, and determine whether the images and videos are standard collections and whether the collected images are valid; S8, the robot, the medical device and the remote-controlled robot arm adaptively adjust the scanning mode, the probe angle and the parameters of the ultrasonic probe, determine whether the current position is the collection position of the target organ, collect all images of the organs and tissues, and determine whether the images and videos cover all scanning modes, whether the collected videos are valid, whether the videos are collections of the target organ or tissue, and whether the collected images are images of the scanning target organ; S9, return the collection completion information of the target organ, the robot, the medical device and the robot arm subscribe to the task information, the robot and the medical device remotely control and adaptively move the robot arm, and adaptively move the ultrasonic probe to the external scanning location area of the next target organ; S10, the robot and the medical device determine that all target organ collection tasks are completed according to the returned collection completion information of the target organs.
    Claim 3. Recognition, autonomous positioning and scanning methods for visual image and medical image fusion, comprising a method for remotely and adaptively scanning the neck, the left and right lobes and isthmus of the thyroid, and the parotid gland to collect medical images and videos, wherein the method comprises the following steps: S1, according to claim 6, using the vision devices and the plurality of medical image fusion intelligent recognition methods, intelligently recognize the human ears, the ear portions and their positions, and the human lips and their positions, and, using the depth visual collection device and the plurality of medical image fusion intelligent recognition methods, intelligently recognize the bones, the lower jaw bones, the spine, the bone positions at the bottom ends of the ears and the spine positions, and the bone positions below the midline of the lip; S2, according to claim 7, according to the method by which the robot autonomously locates the coordinates of the external location area of the human body, remotely controls and adaptively moves the robot arm and the ultrasonic probe, and scans and collects the medical images, the robot and the medical device subscribe to the position information of the external scanning areas of the neck, the left and right lobes and isthmus of the thyroid, and the parotid gland, the scanning method, the probe angle, the subscription target, the parameters, the target pose and the pose marker, set the target header ID, the target pose and the direction value, and set the timestamp; S3, the robot and the medical device remotely control and adaptively move the robot arm and adaptively adjust the parameters of the images, the gain, the color gain, the sensitivity time control, the time gain control, the focusing, the depth, the frame size, the blood flow velocity scale and the video parameters, remotely control and adaptively move the robot arm and the ultrasonic probe, and adjust the rotation angle and inclination angle of the probe, the scanning mode of the ultrasonic probe, the probe angle and the parameters to the following organs, for effective and complete collection of the tissue; S4, remotely control and adaptively move the robot arm, and the ultrasonic probe scans and collects images and videos of the neck, the spine, the common carotid artery and the internal carotid artery along the lower boundary of the bone at the bottom end of the ear; S5, remotely control and adaptively move the robot arm, and the ultrasound probe moves from the head side to the foot side along the carotid artery, scans the left and right lobes and the isthmus of the thyroid along the bone position below the midline of the lip, and collects medical images and videos; S6, remotely control and adaptively move the robot arm and the ultrasonic probe, scan along the bone position at the bottom end of the ear and along the lower jaw bone positions, scan the parotid gland, the submandibular gland and the sublingual gland up to the bone position below the midline of the lip, and collect medical images and videos.
    Claim 4. Recognition, autonomous positioning and scanning methods for visual image and medical image fusion, comprising a method for remotely and adaptively scanning lower limb vessels and collecting medical recognition images and videos of lower limb vessels, organs and tissues, wherein the method comprises the following steps: S1, according to claim 6, using the planar vision device and the improved medical image intelligent recognition methods, intelligently recognize the human ears and auricles and their positions and the navel and its position; S2, using the depth visual collection device and the various image intelligent recognition methods, intelligently recognize the locations of the bones, the bones at the bottom ends of the ribs and the xiphoid process, the ischium, the knee joints and their positions, the popliteal fossae and their positions, and the foot bones and foot joints and their positions; S3, according to claim 6, intelligently recognize the color information of the blood vessels, the location information of the blood vessels and the organ information of the ultrasonic images, and, using the plurality of medical image intelligent recognition methods, intelligently recognize the abdominal aorta, the inguinal region, the superficial femoral artery, the popliteal artery, the anterior tibial artery and the posterior tibial artery; S4, according to claim 6, using the ultrasonic images and the plurality of medical image intelligent recognition methods, recognize the pulsation location as the groin region by the dynamic recognition method of ultrasonic images; S5, remotely control and adaptively move the robot arm and the ultrasonic probe to the groin area, and the ultrasonic probe scans from the xiphoid position toward the lower navel direction along the abdominal aorta, scans the inguinal area, and collects the medical images and videos; S6, remotely control and adaptively move the robot arm, and the ultrasonic probe scans the superficial femoral artery from the groin area, moves to the knee joint and its position and to the position of the popliteal fossa, scans the blood vessels near the back side of the knee joint from the popliteal fossa position, and collects the medical images and videos; S7, remotely control and self-adaptively move the robotic arm, and the ultrasound probe scans downward along the popliteal artery, the anterior tibial artery and the posterior tibial artery, presses the blood vessels with the pressing device, scans the nearby blood vessels from the knee joint along the anterior tibial artery, and collects the medical images and videos; S8, remotely control and self-adaptively move the robotic arm and the ultrasound probe to the foot bones and foot joints and their positions, scan the dorsalis pedis artery and the plantar arteries along the foot, and collect the medical images and videos; S9, remotely control and self-adaptively move the robotic arm, and the ultrasonic probe moves to the knee joint and its position, moves to the popliteal fossa and the inner side of the center line of the knee joint, scans downward along the blood vessels, scans the lower limb veins, including the femoral vein, the superficial femoral vein, the deep femoral vein, the external vein and the great saphenous vein, and collects medical images and videos.
Claim5 Recognition, autonomous positioning and scanning methods for visual image and medical image fusion, wherein the method for remotely and adaptively scanning the bones, joints, muscles and nerves and collecting medical images and videos of the bones, organs and tissues comprises the following steps:
S1, According to claim 6, intelligently recognize all of the bones and joints of the shoulder and their positions, all of the bones and joints of the body and their positions, all of the bones and joints of the elbow and their positions, and all of the bones and joints of the ankle and their positions by means of the depth visual collection device and the various medical image fusion intelligent recognition methods.
S2, Intelligently recognize the color information of the blood vessels, the position information of the blood vessels and the organ information of the ultrasonic images by the plurality of medical image fusion intelligent recognition methods, and intelligently recognize the abdominal aorta, the inguinal artery, the superficial femoral artery, the popliteal artery and the anterior tibial artery.
S3, Move the robot arm and ultrasonic probe to scan in remote-controlled mode and self-adaptive mode; scan from the shoulder joint along the arm direction to cover the shoulder joint, the long head of the biceps tendon (LHBT), the deltoid muscle, the lower tendon and the tendon sheath; scan the supraspinatus tendon and the infraspinatus tendon along the central bone contour of the shoulder; and collect the medical images and videos (see the waypoint sketch following this claim).
S4, Move the robot arm and ultrasonic probe to scan in remote-controlled mode and self-adaptive mode; scan the elbow joint, the olecranon fossa, the capitulum of the humerus and the front of the radial head; scan from the back side of the elbow joint along the hand direction, sweeping along the outer side of the elbow joint and the outer edge of the olecranon fossa; scan the capitulum of the humerus, the rear of the radius, the position behind the radial head, the muscles and the nerves; push or pull the turnover angle adjustment device to adjust the elbow joint to 90 degrees; scan the epicondyle and the ulna; and collect the medical images and videos.
S5, Move the robot arm and ultrasonic probe to scan in remote-controlled mode and self-adaptive mode; scan all of the bones and joints of the knee joint and their positions, scan along the outer side of the knee joint and its position, scan from the patella side, scan the fibular head from the front and from the rear along the outer side of the joint, and collect the medical images and videos.
S6, Move the robot arm and ultrasonic probe to scan in remote-controlled mode and self-adaptive mode; move the robot arm to scan the ankle joint, the fibula, the talus and their positions; the robot arm pushes or pulls the turnover angle adjustment device to assist in bending the ankle joint; scan the anterior talofibular ligament; move the probe to the rounded bone surface of the fibula; scan along the outer edge of the talus; and collect images of the deltoid ligament, the talus and the anterior talofibular ligament, together with the medical images and videos.
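Step S3 of this claim sweeps the probe from the shoulder joint along the arm direction. The sketch below generates straight-line probe waypoints for such a sweep; the joint coordinates and step size are placeholder assumptions, since in the claimed method they would come from the depth-vision recognition step.

```python
# Minimal sketch under stated assumptions: evenly spaced probe waypoints
# "from the shoulder joint along the arm direction" as in S3. The joint
# coordinates below are hard-coded placeholders in the robot base frame.
import numpy as np

def scan_waypoints(start_xyz, end_xyz, step_m=0.02):
    """Interpolate probe positions every `step_m` metres along a straight
    segment, e.g. shoulder joint -> elbow joint for the LHBT / deltoid sweep."""
    start, end = np.asarray(start_xyz, float), np.asarray(end_xyz, float)
    n = max(int(np.linalg.norm(end - start) / step_m), 1)
    return [tuple(start + (end - start) * i / n) for i in range(n + 1)]

shoulder = (0.30, 0.10, 0.80)   # placeholder coordinates, metres
elbow    = (0.30, 0.10, 0.52)
for wp in scan_waypoints(shoulder, elbow, step_m=0.07):
    print(wp)                    # each waypoint would be sent to the arm controller
```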
Claim6 Recognition, autonomous positioning and scanning methods for visual image and medical image fusion, wherein the recognition method of visual image and medical image fusion is used in a medical robot apparatus for remote-controlled and autonomous locating and scanning; the medical robot apparatus includes a robot, a robot arm, a medical apparatus, a controller device of the medical apparatus, at least one ultrasonic image collection device and ultrasonic probe apparatus, at least one panoramic visual apparatus, and at least one depth visual device.
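Claim 6 enumerates the minimum device inventory of the medical robot apparatus. The following sketch, using assumed Python class and field names, shows one way such a configuration could be represented and checked; it is illustrative only and not defined by the patent.

```python
# Illustrative sketch only: representing the apparatus inventory that claim 6
# enumerates, so control software can verify the minimum configuration
# (at least one ultrasonic collection device and probe, at least one
# panoramic visual apparatus, at least one depth visual device).
from dataclasses import dataclass, field
from typing import List

@dataclass
class MedicalRobotApparatus:
    robot: str
    robot_arm: str
    medical_apparatus: str
    controller: str
    ultrasonic_devices: List[str] = field(default_factory=list)
    panoramic_cameras: List[str] = field(default_factory=list)
    depth_cameras: List[str] = field(default_factory=list)

    def satisfies_claim_minimum(self) -> bool:
        return (len(self.ultrasonic_devices) >= 1
                and len(self.panoramic_cameras) >= 1
                and len(self.depth_cameras) >= 1)

rig = MedicalRobotApparatus(
    robot="mobile_base_01", robot_arm="arm_6dof_01",
    medical_apparatus="ultrasound_unit_01", controller="us_controller_01",
    ultrasonic_devices=["probe_linear_01"],
    panoramic_cameras=["pano_cam_01"],
    depth_cameras=["depth_cam_01"],
)
print(rig.satisfies_claim_minimum())   # True for this minimal configuration
```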
Figure 1. Flowchart of the fusion recognition method: establish the depth vision device, the depth vision image model and the bone model, and establish the general visual image model; establish the feature model of the ultrasonic images, mapping the blood vessel color information, blood vessel position information, and the contour, shape, structural and color features of the ultrasonic images of the organs; set the parameters and apply the neural network algorithm and the improved method (improved deep neural network and weight optimizer) for image training; intelligently recognize the human face and five sense organs (ear, lip, mandible, neck, navel, nipple, characteristic genital organs) and their positions as feature items, and intelligently recognize the features and positions of each bone of the human body as external scanning skeleton information; input the skeleton information and the feature items into the ultrasonic image model, obtain the output value, and output the result as the external scanning position.
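The Figure 1 flow combines visual feature items with skeleton information before the ultrasonic image model produces the external scanning position. The Python sketch below illustrates that data flow only; the recogniser functions, landmark names and coordinates are placeholders assumed for illustration, not the patent's trained models.

```python
# Sketch of the Figure 1 data flow under stated assumptions; the recognisers
# are stand-ins for the patent's neural-network models.
def recognise_feature_items(rgb_image):
    """Stand-in for the visual-image model: returns named feature parts
    (ear, lip, mandible, neck, navel, ...) with image positions."""
    return {"ear": (412, 180), "lip": (390, 240), "mandible": (385, 260)}

def recognise_skeleton(depth_image):
    """Stand-in for the depth/bone model: returns bone landmarks."""
    return {"mandible_bone": (386, 262), "spine_top": (400, 300)}

def external_scanning_position(rgb_image, depth_image):
    """Fuse feature items and skeleton information (Figure 1's 'input items
    of the ultrasonic image model') and output the external scanning result."""
    features = recognise_feature_items(rgb_image)
    skeleton = recognise_skeleton(depth_image)
    fused = {**features, **skeleton}
    # In the patent the output is the external scanning area of the target
    # organ; this sketch simply returns the fused landmark dictionary.
    return fused

print(external_scanning_position(rgb_image=None, depth_image=None))
```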
Figure 2. Flowchart of the autonomous positioning and scanning process: obtain the organ collection tasks of the human body and the coordinates of the external location areas; the robot arm executes the motions of the motion planning module to scan the collection area according to the coordinates of the external location area of the human body and the depth information; self-adaptively adjust the scanning mode, the probe angle and the parameters of the ultrasonic probe; the ultrasonic probe and ultrasonic device publish the image information; check whether the collected images are valid, whether they are standard, and whether they show the target organ (using the blood vessel color and location information and the contour, shape and structural features), and if not, adaptively move the robot arm and adjust the medical image parameters; repeat until all images of the organs and tissues are collected, then adaptively move the robot arm and ultrasonic probe from the current external scanning location area to the next target organ, until all collection tasks are completed.
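Figure 2 describes an acquire-check-adjust loop repeated per organ until all collection tasks are done. The sketch below mirrors that loop under stated assumptions; the Arm and Probe classes and the single acceptance check are simplified stand-ins for the motion-planning, parameter-adjustment and image-recognition modules.

```python
# Sketch of the Figure 2 control loop under stated assumptions. All classes,
# checks and values here are illustrative, not the patent's implementation.
import random

class Arm:
    def move_to(self, area): print(f"arm -> {area}")
    def nudge(self):         print("arm: adaptive fine adjustment")

class Probe:
    def adjust(self, gain):  print(f"probe: gain={gain}")
    def collect(self):       return random.random()   # fake "image quality" score

def image_acceptable(img) -> bool:
    # Stand-in for the "valid + standard + target organ" checks in Figure 2.
    return img > 0.6

def scan_all_organs(tasks, max_retries=5):
    arm, probe, results = Arm(), Probe(), {}
    for organ, area in tasks:                 # obtain collection task + coordinates
        arm.move_to(area)                     # motion-planning module moves the arm
        for attempt in range(max_retries):
            probe.adjust(gain=50 + 5 * attempt)   # self-adaptive parameter step
            img = probe.collect()
            if image_acceptable(img):         # valid, standard, and the target organ
                results[organ] = img
                break
            arm.nudge()                       # otherwise adapt and rescan
    return results                            # loop ends when all tasks are completed

print(scan_all_organs([("thyroid_left_lobe", (0.10, 0.00, 0.40)),
                       ("carotid_artery", (0.12, 0.02, 0.45))]))
```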
AU2022335276A 2021-08-23 2022-08-18 Recognition, autonomous positioning and scanning method for visual image and medical image fusion Pending AU2022335276A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN202110978078.0A CN113855067A (en) 2021-08-23 2021-08-23 Visual image and medical image fusion recognition and autonomous positioning scanning method
CN202110978078.0 2021-08-23
PCT/CN2022/000117 WO2023024396A1 (en) 2021-08-23 2022-08-18 Recognition, autonomous positioning and scanning method for visual image and medical image fusion

Publications (1)

Publication Number Publication Date
AU2022335276A1 (en) 2024-04-11

Family

ID=78988258

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2022335276A Pending AU2022335276A1 (en) 2021-08-23 2022-08-18 Recognition, autonomous positioning and scanning method for visual image and medical image fusion

Country Status (3)

Country Link
CN (1) CN113855067A (en)
AU (1) AU2022335276A1 (en)
WO (1) WO2023024396A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113855067A (en) * 2021-08-23 2021-12-31 谈斯聪 Visual image and medical image fusion recognition and autonomous positioning scanning method
CN115937219B (en) * 2023-03-14 2023-05-12 合肥合滨智能机器人有限公司 Ultrasonic image part identification method and system based on video classification
CN117017355B (en) * 2023-10-08 2024-01-12 合肥合滨智能机器人有限公司 Thyroid autonomous scanning system based on multi-modal generation type dialogue

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2597615A1 (en) * 2008-11-25 2013-05-29 Algotec Systems Ltd. Method and system for segmenting medical imaging data according to a skeletal atlas
CN103679175B (en) * 2013-12-13 2017-02-15 电子科技大学 Fast 3D skeleton model detecting method based on depth camera
CN111973228A (en) * 2020-06-17 2020-11-24 谈斯聪 B-ultrasonic data acquisition, analysis and diagnosis integrated robot and platform
CN111916195A (en) * 2020-08-05 2020-11-10 谈斯聪 Medical robot device, system and method
CN111973152A (en) * 2020-06-17 2020-11-24 谈斯聪 Five sense organs and surgical medical data acquisition analysis diagnosis robot and platform
CN111658003B (en) * 2020-06-19 2021-08-20 浙江大学 But pressure regulating medical science supersound is swept and is looked into device based on arm
CN112001925B (en) * 2020-06-24 2023-02-28 上海联影医疗科技股份有限公司 Image segmentation method, radiation therapy system, computer device and storage medium
CN113855067A (en) * 2021-08-23 2021-12-31 谈斯聪 Visual image and medical image fusion recognition and autonomous positioning scanning method

Also Published As

Publication number Publication date
WO2023024396A1 (en) 2023-03-02
CN113855067A (en) 2021-12-31
