CN109473168A - Medical image robot, control method therefor, and medical image recognition method - Google Patents
- Publication number: CN109473168A (application CN201811171881.8A)
- Authority: CN (China)
- Legal status: Pending (assumed status; not a legal conclusion)
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/008—Manipulators for service tasks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/04—Viewing devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Abstract
The invention discloses a medical image robot together with a control method and a medical image recognition method for it. The robot is provided with an acquisition module, a processing module, and a motion module. Environmental information collected by the acquisition module is sent to the processing module to generate a virtual map; when the user selects a target area on the display screen, motion control information is generated automatically so that the motion module drives the robot to the designated location. After a medical image is input, it is detected and identified automatically and medical advice information is obtained, realizing mobile medical image recognition.
Description
Technical field
The present invention relates to the field of intelligent robotics, and in particular to a medical image robot, a control method therefor, and a medical image recognition method.
Background art
At present, analysis of medical conditions through medical images is an important means in medical practice. Because the analysis of medical images usually involves complicated operations, in the prior art the analysis and processing of medical images is mainly completed by computers and server equipment. Although the prior art can complete such analysis and processing, the computers are usually fixed in a doctor's office, and the equipment is large and immovable, with poor mobility; it cannot be used when work is needed outdoors or when communicating with patients or other doctors in a ward.
Summary of the invention
To solve the above problems, the purpose of the present invention is to provide a robot with a medical image recognition function that can, in practical applications, automatically plan a path according to a destination and move to that destination, and into whose main control chip a medical image can be input through an interactive module for medical image recognition, thereby realizing mobile medical image recognition assistance.
The technical solution adopted by the present invention is a medical image robot comprising an acquisition module and a processing module, the acquisition module including a laser radar for acquiring scan values for map construction and a depth camera for acquiring distance information;
the processing module includes a first main control chip for processing the data transmitted by the acquisition module and a second main control chip for transmitting motion data; the input terminal of the first main control chip is connected with the output terminal of the acquisition module, the second main control chip is connected with the first main control chip, and the first main control chip sends motion control information to the second main control chip in response to the acquisition information sent by the acquisition module;
the robot further includes a motion module connected with the second main control chip, the second main control chip sending an enabling signal to the motion module in response to the motion control information;
the robot further includes a display screen connected with the first main control chip, the display screen displaying in response to a display signal sent by the first main control chip.
Further, the motion module includes a stepless motor speed controller for motor speed regulation and DC brushless motors; the input terminal of the stepless motor speed controller is connected with the output terminal of the second main control chip, and the stepless motor speed controller controls the operation of the DC brushless motors in response to the enabling signal sent by the second main control chip. The output terminal of the stepless motor speed controller is connected with the input terminals of the DC brushless motors through a CAN bus. The motion module further includes Mecanum wheels, each connected to a DC brushless motor through a flange plate. An anti-vibration structure is further arranged between the Mecanum wheels and the DC brushless motors; the anti-vibration structure includes a hydraulic shock absorber, a hinged carbon-fiber connecting plate, and an aluminum alloy fixing plate.
Further, the robot includes a power module and a power conversion module for controlling circuit current; the output terminal of the power module is connected with the input terminal of the power conversion module.
Further, the display screen is a capacitive-touch LCD display connected with the first main control chip through an HDMI video cable; a lifting frame for controlling the height of the display screen is further arranged at the bottom of the display screen.
A control method of the medical image robot comprises the following steps:
constructing a map according to the environmental information collected by the acquisition module, and sending the constructed map to the display screen for display;
reading the target area information clicked by the user on the display screen and sending it to the first main control chip;
the first main control chip obtaining motion control information according to the current location information and the target area information and sending it to the second main control chip;
after the second main control chip obtains the motion control information, controlling the motion module to run.
Further, the environmental information includes spatial information collected by the laser radar and distance information collected by the depth camera; the motion control information includes a moving direction, a moving speed, and a moving distance.
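As a concrete illustration of the third step, the motion control information named above (moving direction, moving speed, moving distance) can be derived from the current location and the clicked target area by plane geometry. The sketch below is not from the patent; the coordinate convention, the fixed speed, and all names are illustrative assumptions.

```python
import math

def motion_control_info(current, target, speed=0.5):
    """Derive moving direction, moving speed, and moving distance from a
    current (x, y) map position and a clicked target (x, y) position.
    Units (metres, degrees) and the fixed speed are assumptions."""
    dx = target[0] - current[0]
    dy = target[1] - current[1]
    distance = math.hypot(dx, dy)                 # moving distance
    direction = math.degrees(math.atan2(dy, dx))  # moving direction
    return {"direction_deg": direction, "speed_mps": speed, "distance_m": distance}

info = motion_control_info((0.0, 0.0), (3.0, 4.0))
```

In practice the first main control chip would compute such a command for each segment of the planned path rather than for the straight line to the goal.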
A medical image recognition method comprises the following steps:
when a click on patient information on the display screen is detected, reading the medical image corresponding to the patient information from the database;
inputting the medical image into a target detection model, marking the lesion locations in the medical image, and setting them as lesion images;
sending the lesion images to a deep learning classification model for feature extraction to obtain the lesion classes to which the lesion images belong;
reading the corresponding medical advice information from the database according to the lesion classes;
sending the lesion locations, lesion classes, and medical advice information corresponding to the lesion images to the display screen for display.
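The five steps above can be sketched as a small pipeline. The detection model, the classification model, and the database are replaced by stubs here — the box coordinates, class names, and advice strings are invented for illustration and do not come from the patent.

```python
def detect_lesions(image):
    """Stand-in for the target detection model: returns lesion boxes."""
    return [{"box": (40, 40, 80, 80)}]

def classify_lesion(lesion_image):
    """Stand-in for the deep learning classification model."""
    return "nodule"

image_db = {"patient-001": "chest_image"}          # patient -> medical image
advice_db = {"nodule": "Recommend follow-up CT."}  # lesion class -> advice

def recognize(patient_id):
    image = image_db[patient_id]                   # step 1: read image
    results = []
    for lesion in detect_lesions(image):           # step 2: mark lesion locations
        cls = classify_lesion(lesion)              # step 3: obtain lesion class
        advice = advice_db.get(cls, "")            # step 4: read advice
        results.append({"box": lesion["box"], "class": cls, "advice": advice})
    return results                                 # step 5: send to display

out = recognize("patient-001")
```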
Further, the target detection model and the deep learning classification model are convolutional neural networks trained in advance.
Further, the target detection model and the deep learning classification model are convolutional neural networks trained in advance, and the training method comprises the following steps:
converting the medical images into JPG picture format and, according to the annotated lesion region information, generating XML files for target detection training; entering the corresponding label into each lesion region image according to the classification result;
randomly dividing the medical image data annotated for lesion region target detection and classification into training set data and validation set data;
carrying out deep learning training for target detection and classification through a deep convolutional neural network;
obtaining the trained model and verifying the model using the validation data.
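The random division into training set data and validation set data can be sketched as follows; the 80/20 split and the fixed seed are illustrative choices not specified in the patent.

```python
import random

def split_dataset(samples, val_fraction=0.2, seed=0):
    """Randomly divide annotated samples into training set data and
    validation set data, as in the training method above."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]      # (train, validation)

train, val = split_dataset([f"img_{i:03d}.jpg" for i in range(100)])
```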
Further, the target detection model and the deep learning classification model each include 19 convolutional layers and 5 maximum pooling layers.
The beneficial effects of the present invention are as follows. The present invention provides a medical image robot together with a control method and a medical image recognition method. By arranging the acquisition module and the processing module in the medical image robot, environmental data is collected and motion control information is generated by the processing module, completing control of the motion module and realizing automatic movement. A display screen is also provided on the robot, and recognition of medical images can be realized through operations on the display screen. Compared with prior-art methods that can perform medical image recognition only at a fixed site, the method of the present invention realizes mobile medical image recognition, which greatly improves the convenience of medical procedures; preliminary medical advice information can also be obtained through the recognition, ensuring the timeliness of communication between doctor and patient and improving the user experience.
Detailed description of the invention
The invention will be further described below with reference to the accompanying drawings and examples.
Fig. 1 is a schematic structural diagram of the robot of the present invention;
Fig. 2 is a module diagram of the robot of the present invention;
Fig. 3 is a schematic diagram of the detection process of the present invention;
Fig. 4 is a control flowchart of the present invention;
Fig. 5 is a detailed step diagram of the control method of the present invention;
Fig. 6 is a flowchart of the medical image recognition method of the present invention;
Fig. 7 is a flowchart of the training of the medical image recognition model of the present invention;
Fig. 8 is a flowchart of the automatic tracking of the second embodiment of the present invention.
Description of reference numerals:
1. laser radar; 2. depth camera; 3. display screen; 4. first main control chip; 5. second main control chip; 6. stepless motor speed controller; 7. Mecanum wheel; 8. power conversion module; 9. lifting frame; 10. DC brushless motor; 11. power module; 12. anti-vibration structure.
Specific embodiment
Referring to Figs. 1-3, a medical image robot of the invention comprises an acquisition module and a processing module. The acquisition module includes a laser radar 1 for acquiring scan values for map construction and a depth camera 2 for acquiring distance information.
The processing module includes a first main control chip 4 for processing the data transmitted by the acquisition module and a second main control chip 5 for transmitting motion data. The input terminal of the first main control chip 4 is connected with the output terminal of the acquisition module, and the second main control chip 5 is connected with the first main control chip 4; the first main control chip 4 sends motion control information to the second main control chip 5 in response to the acquisition information sent by the acquisition module.
The robot further includes a motion module connected with the second main control chip 5, the second main control chip 5 sending an enabling signal to the motion module in response to the motion control information.
The robot further includes a display screen 3 connected with the first main control chip 4, the display screen 3 displaying in response to a display signal sent by the first main control chip 4.
The body of the robot is composed of three layers of carbon fiber board arranged from top to bottom: the depth camera 2, the laser radar 1, and the display screen 3 are arranged on the first-layer carbon fiber board; the first main control chip 4 and the power module 11 are arranged on the second-layer carbon fiber board; and the second main control chip 5, the stepless motor speed controller 6, and the power conversion module 8 are arranged on the third-layer carbon fiber board.
The depth camera 2 is a USB-interface high-speed high-definition camera providing depth, RGB, and infrared images; it is fixed to the first-layer carbon fiber board by a support frame.
Preferably, the laser radar 1 is connected to the first main control chip 4 through a serial data conversion cable; transmission through the serial data conversion cable accelerates the transmission speed and ensures a timely response.
The first main control chip 4 and the second main control chip 5 are connected through a serial data cable; the wired connection ensures that the link between the two main control chips remains stable during motion.
Further, the motion module includes the stepless motor speed controller 6 for motor speed regulation and DC brushless motors 10. The input terminal of the stepless motor speed controller 6 is connected with the output terminal of the second main control chip 5, and the stepless motor speed controller 6 controls the operation of the DC brushless motors 10 in response to the enabling signal sent by the second main control chip 5. The output terminal of the stepless motor speed controller 6 is connected with the input terminals of the DC brushless motors 10 through a CAN bus. The motion module further includes Mecanum wheels 7, each connected to a DC brushless motor 10 through a flange plate. An anti-vibration structure 12 is further arranged between the Mecanum wheels 7 and the DC brushless motors 10; the anti-vibration structure 12 includes a hydraulic shock absorber, a hinged carbon-fiber connecting plate, and an aluminum alloy fixing plate.
The anti-vibration structure 12 is connected to the third-layer carbon fiber board of the robot by a hinge, and the DC brushless motors 10 are fixed to the third-layer carbon fiber board by bolts.
The robot is provided with 4 Mecanum wheels 7 and accordingly 4 DC brushless motors 10; each DC brushless motor 10 works independently and individually responds to the motion control information sent by the second main control chip 5.
Further, the robot includes a power module 11 and a power conversion module 8 for controlling circuit current; the output terminal of the power module 11 is connected with the input terminal of the power conversion module 8.
The power conversion module is also used for signal isolation, main-circuit current-limiting protection, feedback compensation, and over-voltage protection.
Further, the display screen 3 is a capacitive-touch LCD display connected with the first main control chip 4 through an HDMI video cable; a lifting frame 9 for controlling the height of the display screen 3 is further arranged at the bottom of the display screen 3.
When the display screen 3 is not in use, the lifting frame 9 can be controlled so that the display screen 3 is retracted to the surface of the first-layer carbon fiber board, which is convenient for the storage and placement of the robot.
A control method of the medical image robot comprises the following steps:
constructing a map according to the environmental information collected by the acquisition module, and sending the constructed map to the display screen 3 for display;
reading the target area information clicked by the user on the display screen 3 and sending it to the first main control chip 4;
the first main control chip 4 obtaining motion control information according to the current location information and the target area information and sending it to the second main control chip 5;
after the second main control chip 5 obtains the motion control information, controlling the motion module to run.
The map construction technology used in the present embodiment is SLAM (simultaneous localization and mapping). With this technology, a virtual map can be constructed automatically once the spatial and distance information is obtained, a route can be planned automatically after the target area information is input, and obstacles on the path can be avoided automatically.
Preferably, the motion module also sends motion feedback information to the second main control chip 5 during operation; the motion feedback information includes the speed difference between the actual speed and the preset speed. After receiving the motion feedback information, the second main control chip 5 forwards it to the first main control chip 4, and the first main control chip 4 recalculates the motion control information according to the feedback information and the target area information and sends it to the second main control chip 5.
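One plausible reading of this feedback loop is a proportional correction of the speed command from the fed-back speed difference. The gain value below is an assumption for illustration; the patent does not specify the correction law.

```python
def corrected_speed(preset, actual, gain=0.5):
    """Recompute the speed command from the fed-back difference between
    the preset speed and the actual speed. The proportional gain is an
    illustrative assumption, not taken from the patent."""
    error = preset - actual
    return preset + gain * error  # push the command toward the shortfall

cmd = corrected_speed(preset=1.0, actual=0.8)
```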
Further, the environmental information includes spatial information collected by the laser radar 1 and distance information collected by the depth camera 2; the motion control information includes a moving direction, a moving speed, and a moving distance.
The control process of the medical image robot is described below through specific steps:
Step 101: the laser radar 1 performs a laser scan of the shape data of the obstacles around the robot, and sends the acquired shape data to the first main control chip 4.
Step 102: the depth camera 2 detects the distance data of the obstacles around the robot, and sends the acquired distance data to the first main control chip 4.
Step 103: after receiving the shape data and the distance data, the first main control chip 4 constructs a virtual space map through the SLAM algorithm and displays it on the display screen 3.
Step 104: the target area information selected by the user's click on the display screen 3 is read; the first main control chip 4 obtains a planned path from the current location information and the target area information in combination with the virtual map, and calculates from the planned path the motion control information required for execution, including the moving speed, moving direction, and moving distance.
Step 105: the first main control chip 4 sends the motion control information to the second main control chip 5; after receiving it, the second main control chip 5 sends the motion control information to the stepless motor speed controller 6, which controls the rotation speed of the DC brushless motors 10 and the direction of the Mecanum wheels 7 to complete the movement.
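Step 105 leaves implicit the mapping from a body motion command to the four wheels. For a 4-wheel Mecanum base, the standard inverse kinematics gives each of the four independently driven DC brushless motors its own wheel speed; the geometry values below are illustrative, and sign conventions vary with how the rollers are mounted.

```python
def mecanum_wheel_speeds(vx, vy, wz, lx=0.2, ly=0.2, r=0.05):
    """Standard Mecanum inverse kinematics: convert a desired body
    velocity (vx forward m/s, vy sideways m/s, wz rad/s rotation) into
    four wheel angular speeds (rad/s). Half-wheelbase lx, half-track ly,
    and wheel radius r are illustrative values, not from the patent."""
    k = lx + ly
    return {
        "front_left":  (vx - vy - k * wz) / r,
        "front_right": (vx + vy + k * wz) / r,
        "rear_left":   (vx + vy - k * wz) / r,
        "rear_right":  (vx - vy + k * wz) / r,
    }

w = mecanum_wheel_speeds(vx=0.5, vy=0.0, wz=0.0)   # pure forward motion
```

A pure sideways command (vy only) makes diagonal wheel pairs counter-rotate, which is what lets the Mecanum base translate laterally without turning.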
With reference to Fig. 6, a medical image recognition method comprises the following steps:
when a click on patient information on the display screen 3 is detected, reading the medical image corresponding to the patient information from the database;
inputting the medical image into a target detection model, marking the lesion locations in the medical image, and setting them as lesion images;
sending the lesion images to a deep learning classification model for feature extraction to obtain the lesion classes to which the lesion images belong;
reading the corresponding medical advice information from the database according to the lesion classes;
sending the lesion locations, lesion classes, and medical advice information corresponding to the lesion images to the display screen 3 for display.
The medical images saved in the database are DICOM medical images, and each medical image in the database corresponds to patient information.
The patient information shown on the display screen 3 is a patient list read from the server.
After the medical image is input into the target detection model, the target detection model extracts the features in the medical image using a feature extraction network trained in advance, and then classifies the extracted features according to preset types; in the present embodiment these are lesion classes.
Preferably, when it is detected that the lesion class corresponding to the medical image is not empty, it is determined that the medical image contains a lesion, and the preset medical advice information is then read from the database according to the lesion class, realizing a preliminary rapid diagnosis.
The lesion locations of the lesion images are displayed; in the present embodiment the lesion locations are marked on the lesion image in the form of identification boxes.
Preferably, a memory for storing the database is provided in the robot. When the robot is connected to the Internet, the database on the server is automatically synchronized into the memory, and the memory is connected with the first main control chip 4. The medical advice information is saved in the memory after being generated, and when the robot is connected to the Internet, the medical advice information is synchronized to the corresponding position of the database on the server. This effectively realizes offline medical image recognition, improves the scope of application of the robot, and ensures the timeliness of the data.
Further, the target detection model and the deep learning classification model are convolutional neural networks trained in advance.
With models trained in advance, the medical images can be recognized quickly.
With reference to Fig. 7, further, the target detection model and the deep learning classification model are convolutional neural networks trained in advance, and the training method comprises the following steps:
converting the medical images into JPG picture format and, according to the annotated lesion region information, generating XML files for target detection training; entering the corresponding label into each lesion region image according to the classification result;
randomly dividing the medical image data annotated for lesion region target detection and classification into training set data and validation set data;
carrying out deep learning training for target detection and classification through a deep convolutional neural network;
obtaining the trained model and verifying the model using the validation data.
The annotated lesion region information is provided by manual annotation, which supplies initial parameters for the training network; the XML file format embodies the one-to-one correspondence between the lesion region information and the lesion class more intuitively. The corresponding labels are entered into the lesion region images manually, providing a learning and recognition benchmark for the training network.
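The patent does not fix a schema for the XML files used in target detection training; a Pascal-VOC-like layout, sketched below with Python's standard library, is a common assumption for such annotation files. The file name, label, and coordinates are illustrative.

```python
import xml.etree.ElementTree as ET

def annotation_xml(filename, boxes):
    """Serialize manually annotated lesion regions as a Pascal-VOC-style
    XML annotation (an assumed schema, not specified in the patent).
    `boxes` maps a class label to (xmin, ymin, xmax, ymax)."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    for label, (xmin, ymin, xmax, ymax) in boxes.items():
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        box = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                            (xmin, ymin, xmax, ymax)):
            ET.SubElement(box, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")

xml_text = annotation_xml("scan_001.jpg", {"lesion": (40, 40, 80, 80)})
```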
After the medical images are input into the training network, when the input data is detected to be training set data, feature extraction and classification training are carried out on the medical images; when the input data is detected to be validation set data, the trained data is compared with the validation set data after the classification training, so as to verify, and thereby improve, the accuracy of the training.
With a deep convolutional neural network, the features in the image can be extracted through multiple convolutional layers and maximum pooling layers, which effectively improves the accuracy and computational efficiency of the deep learning training.
Further, the target detection model and the deep learning classification model each include 19 convolutional layers and 5 maximum pooling layers.
In the present embodiment, the Darknet-19 base model of the YOLOv2 deep convolutional neural network is used; the model includes 19 convolutional layers and 5 maximum pooling layers.
The convolution kernels of the convolutional layers are 3 × 3, and the maximum pooling layers use 2 × 2 windows with a stride of 2.
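Since only the 5 maximum pooling layers (2 × 2, stride 2) change the spatial resolution in Darknet-19 while the padded 3 × 3 convolutions preserve it, the feature map side length can be traced through the network. The 416 × 416 input commonly used with YOLOv2 is an assumption; the patent does not state the input size.

```python
def feature_map_sizes(input_size=416, num_pools=5):
    """Trace the feature map side length through a network in which only
    the max-pooling layers (2x2 windows, stride 2) shrink the map and the
    padded 3x3 convolutions keep its size unchanged."""
    sizes = [input_size]
    for _ in range(num_pools):
        sizes.append(sizes[-1] // 2)  # each stride-2 pool halves the side
    return sizes

sizes = feature_map_sizes()
```

For a 416-pixel input this yields a 13 × 13 final feature map, the grid on which YOLOv2 predicts its detection boxes.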
With reference to Fig. 8, a second embodiment of the medical image robot and its control and medical image recognition methods has essentially the same basic structure and process as the first embodiment, with the following difference in the control method: when the user selects a humanoid image in the constructed map on the display screen 3, the first main control chip 4 starts robot target tracking and sets the location information corresponding to the humanoid image as the target area information; when the humanoid image is detected to move, real-time motion control information is obtained from the moving real-time position information and sent to the second main control chip 5 to control the motion module, thereby realizing automatic tracking of a moving target.
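This tracking behaviour can be sketched as a loop that, each time the tracked person is observed to have moved, makes the new position the target area and recomputes the motion command. All names, and the simplifying assumption that the robot reaches each target before the next observation, are illustrative rather than from the patent.

```python
import math

def follow(robot_pos, observed_positions):
    """Return the successive (direction_deg, distance) commands issued as
    the tracked humanoid image is observed at a sequence of (x, y) map
    positions; each observation becomes the new target area."""
    commands = []
    for target in observed_positions:   # each detected movement of the target
        dx = target[0] - robot_pos[0]
        dy = target[1] - robot_pos[1]
        commands.append((math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)))
        robot_pos = target              # assume the robot reaches the target
    return commands

cmds = follow((0.0, 0.0), [(1.0, 0.0), (1.0, 1.0)])
```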
The above are only preferred embodiments of the present invention; the invention is not limited to the above embodiments. Any implementation that achieves the technical effect of the invention by identical means shall fall within the protection scope of the present invention.
Claims (10)
1. A medical image robot, characterized by comprising: an acquisition module and a processing module, the acquisition module including a laser radar for acquiring scan values for map construction and a depth camera for acquiring distance information;
the processing module including a first main control chip for processing the data transmitted by the acquisition module and a second main control chip for transmitting motion data, the input terminal of the first main control chip being connected with the output terminal of the acquisition module, the second main control chip being connected with the first main control chip, and the first main control chip sending motion control information to the second main control chip in response to the acquisition information sent by the acquisition module;
further comprising a motion module, the motion module being connected with the second main control chip, the second main control chip sending an enabling signal to the motion module in response to the motion control information;
further comprising a display screen, the display screen being connected with the first main control chip and displaying in response to a display signal sent by the first main control chip.
2. The medical image robot according to claim 1, wherein the motion module comprises a stepless motor speed controller for motor speed regulation and a DC brushless motor; an input terminal of the stepless motor speed controller is connected to an output terminal of the second main control chip, and the stepless motor speed controller controls operation of the DC brushless motor in response to the enabling signal sent by the second main control chip; the output terminal of the stepless motor speed controller is connected to an input terminal of the DC brushless motor via a CAN bus; the motion module further comprises Mecanum wheels, the Mecanum wheels being connected to the DC brushless motor through a flange plate;
an anti-vibration structure is further provided between the Mecanum wheels and the DC brushless motor, the anti-vibration structure comprising a hydraulic shock absorber, a hinged carbon fiber connecting plate, and an aluminium alloy fixing plate.
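Claim 2 only states that speed commands travel from the controller to the motor over a CAN bus; it does not disclose a frame protocol. The sketch below shows one plausible way to pack such a command into an 8-byte CAN payload. The arbitration ID, command byte, and payload layout are all assumptions, not the patent's actual protocol:

```python
import struct

# Assumed base arbitration ID for the wheel-motor speed controllers;
# each of the four Mecanum wheels gets its own ID offset.
MOTOR_CMD_BASE_ID = 0x141

def pack_speed_command(wheel: int, rpm: int) -> tuple[int, bytes]:
    """Pack an enable + target-speed command for one Mecanum wheel motor.

    Returns (arbitration_id, 8-byte payload). Layout (illustrative only):
    command byte 0xA2, wheel index, signed 16-bit rpm, 4 padding bytes.
    """
    if not 0 <= wheel <= 3:
        raise ValueError("a Mecanum platform has four wheels: 0..3")
    payload = struct.pack("<BBh4x", 0xA2, wheel, rpm)
    return MOTOR_CMD_BASE_ID + wheel, payload

can_id, data = pack_speed_command(2, -1500)  # wheel 2, reverse at 1500 rpm
```

On real hardware this frame would be handed to a CAN interface (e.g. via the `python-can` library); here only the packing step is shown.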
3. The medical image robot according to claim 1, further comprising a power module and a power conversion module for controlling circuit current, wherein an output terminal of the power module is connected to an input terminal of the power conversion module.
4. The medical image robot according to claim 1, wherein the display screen is a capacitive touch LCD display, the LCD display being connected to the first main control chip via an HDMI cable; a lifting frame for controlling the height of the display screen is further provided at the bottom of the display screen.
5. A control method for a medical image robot, characterized by comprising the following steps: constructing a map from the environmental information acquired by the acquisition module, and sending the constructed map to the display screen for display;
reading the target area information clicked by a user on the display screen and sending it to the first main control chip;
obtaining, by the first main control chip, motion control information according to current position information and the target area information, and sending it to the second main control chip;
after the second main control chip receives the motion control information, controlling operation of the motion module.
6. The control method for a medical image robot according to claim 5, wherein the environmental information comprises spatial information acquired by the laser radar and distance information acquired by the depth camera; the motion control information comprises a movement direction, a movement speed, and a movement distance.
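Claim 6 defines the motion control information as a direction, a speed, and a distance. A minimal sketch of how the first main control chip might derive those three values from the current position and the clicked target, assuming 2-D map coordinates in metres and a fixed cruise speed (both assumptions, not disclosed in the patent):

```python
import math

CRUISE_SPEED_MPS = 0.5  # illustrative fixed cruise speed, not from the patent

def motion_control_info(current: tuple[float, float],
                        target: tuple[float, float]) -> dict:
    """Compute claim-6 motion control information: direction, speed, distance."""
    dx = target[0] - current[0]
    dy = target[1] - current[1]
    return {
        "direction_rad": math.atan2(dy, dx),  # heading toward the target
        "speed_mps": CRUISE_SPEED_MPS,
        "distance_m": math.hypot(dx, dy),     # straight-line distance
    }

info = motion_control_info((0.0, 0.0), (3.0, 4.0))
```

A real implementation would plan around obstacles on the constructed map rather than heading straight for the target; the straight-line version is only the simplest reading of the claim.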
7. A medical image recognition method, characterized by comprising the following steps:
upon detecting a click on patient information on the display screen, reading the medical image corresponding to the patient information from a database;
inputting the medical image into a target detection model to identify the lesion position in the medical image, the identified region being set as a lesion image;
sending the lesion image to a deep learning classification model for feature extraction, to obtain the lesion category to which the lesion image belongs;
reading the corresponding medical advice information from the database according to the lesion category;
sending the lesion position, the lesion category, and the medical advice information corresponding to the lesion image to the display screen for display.
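The claim-7 pipeline (detect lesion regions, classify each region, look up advice, display) can be sketched as follows. The stub detector, classifier, and advice table below are placeholders standing in for the patent's trained CNNs and hospital database; every name and value is illustrative:

```python
# Sketch of the claim-7 recognition pipeline with stub components.
def detect_lesions(image):
    # stand-in for the target detection model: returns bounding boxes
    return [(10, 10, 50, 50)]

def classify_lesion(image, box):
    # stand-in for the deep learning classification model
    return "nodule"

# stand-in for the database of medical advice, keyed by lesion category
ADVICE_DB = {"nodule": "Recommend follow-up CT in 3 months."}

def recognize(image):
    """Detect, classify, and attach advice for each lesion in the image."""
    results = []
    for box in detect_lesions(image):
        category = classify_lesion(image, box)
        results.append({
            "position": box,
            "category": category,
            "advice": ADVICE_DB.get(category, "No advice on record."),
        })
    return results  # what claim 7 sends to the display screen

report = recognize(object())  # a real image array would be passed here
```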
8. The medical image recognition method according to claim 7, wherein the target detection model and the deep learning classification model are pre-trained convolutional neural networks.
9. The medical image recognition method according to claim 8, wherein the target detection model and the deep learning classification model are convolutional neural networks trained in advance, the training method comprising the following steps:
converting the medical images into JPG picture format and, according to the annotated lesion-region information, generating XML files for target detection training; assigning corresponding labels to the lesion-region images according to the classification results;
randomly dividing the fully annotated medical image data for lesion-region detection and classification into training set data and validation set data;
performing deep learning training for target detection and classification using a deep convolutional neural network;
obtaining the trained model and verifying the model with the validation data.
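The data-preparation steps of claim 9 (generate XML annotation files from lesion regions, then randomly split into training and validation sets) can be sketched as below. A Pascal-VOC-like XML layout is assumed because it is a common choice for detection training; the patent does not specify its schema:

```python
import random
import xml.etree.ElementTree as ET

def annotation_xml(filename: str, boxes: list) -> str:
    """Build a Pascal-VOC-style annotation XML string (assumed schema).

    boxes: list of (label, (xmin, ymin, xmax, ymax)) lesion regions.
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    for label, (xmin, ymin, xmax, ymax) in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        bb = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                            (xmin, ymin, xmax, ymax)):
            ET.SubElement(bb, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")

def split_dataset(samples: list, train_frac: float = 0.8, seed: int = 0):
    """Randomly divide samples into training and validation sets."""
    rng = random.Random(seed)          # seeded for reproducibility
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

xml_doc = annotation_xml("case001.jpg", [("lesion", (30, 40, 120, 160))])
train, val = split_dataset([f"case{i:03d}.jpg" for i in range(10)])
```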
10. The medical image recognition method according to claim 9, wherein the target detection model and the deep learning classification model each comprise 19 convolutional layers and 5 max pooling layers.
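Claim 10's count of 19 convolutional layers and 5 max-pooling layers matches the publicly documented Darknet-19 backbone used by YOLOv2. The listing below reproduces that public architecture as channel counts and verifies the layer tally; treating it as the patent's network is an assumption, offered only as one plausible reading of the claim:

```python
# Darknet-19 layer sequence (layer_type, output_channels); "max" entries
# are 2x2 max-pooling layers. This is the public YOLOv2 backbone, assumed
# here to be the architecture claim 10 refers to.
DARKNET19 = (
    [("conv", 32), ("max", None)] +
    [("conv", 64), ("max", None)] +
    [("conv", 128), ("conv", 64), ("conv", 128), ("max", None)] +
    [("conv", 256), ("conv", 128), ("conv", 256), ("max", None)] +
    [("conv", 512), ("conv", 256), ("conv", 512),
     ("conv", 256), ("conv", 512), ("max", None)] +
    [("conv", 1024), ("conv", 512), ("conv", 1024),
     ("conv", 512), ("conv", 1024)] +
    [("conv", 1000)]  # final 1x1 classifier convolution
)

n_conv = sum(1 for layer_type, _ in DARKNET19 if layer_type == "conv")
n_pool = sum(1 for layer_type, _ in DARKNET19 if layer_type == "max")
```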
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811171881.8A CN109473168A (en) | 2018-10-09 | 2018-10-09 | A kind of medical image robot and its control, medical image recognition methods |
PCT/CN2018/113464 WO2020073389A1 (en) | 2018-10-09 | 2018-11-01 | Medical image robot and control method therefor, and medical image identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811171881.8A CN109473168A (en) | 2018-10-09 | 2018-10-09 | A kind of medical image robot and its control, medical image recognition methods |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109473168A true CN109473168A (en) | 2019-03-15 |
Family
ID=65664762
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811171881.8A Pending CN109473168A (en) | 2018-10-09 | 2018-10-09 | A kind of medical image robot and its control, medical image recognition methods |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109473168A (en) |
WO (1) | WO2020073389A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112150543A (en) * | 2020-09-24 | 2020-12-29 | 上海联影医疗科技股份有限公司 | Imaging positioning method, device and equipment of medical imaging equipment and storage medium |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080161672A1 (en) * | 2006-10-17 | 2008-07-03 | General Electric Company | Self-guided portable medical diagnostic system |
CN104573385A (en) * | 2015-01-24 | 2015-04-29 | 无锡桑尼安科技有限公司 | Robot system for acquiring data of sickrooms |
CN204462851U (en) * | 2015-03-16 | 2015-07-08 | 武汉汉迪机器人科技有限公司 | Mecanum wheel Omni-mobile crusing robot |
CN105479433A (en) * | 2016-01-04 | 2016-04-13 | 江苏科技大学 | Omnidirectional moving transfer robot with Mecanum wheels |
CN106709254A (en) * | 2016-12-29 | 2017-05-24 | 天津中科智能识别产业技术研究院有限公司 | Medical diagnostic robot system |
CN106780460A (en) * | 2016-12-13 | 2017-05-31 | 杭州健培科技有限公司 | A kind of Lung neoplasm automatic checkout system for chest CT image |
CN106909778A (en) * | 2017-02-09 | 2017-06-30 | 北京市计算中心 | A kind of Multimodal medical image recognition methods and device based on deep learning |
CN107368073A (en) * | 2017-07-27 | 2017-11-21 | 上海工程技术大学 | A kind of full ambient engine Multi-information acquisition intelligent detecting robot system |
CN107644419A (en) * | 2017-09-30 | 2018-01-30 | 百度在线网络技术(北京)有限公司 | Method and apparatus for analyzing medical image |
AU2017268489B1 (en) * | 2016-12-02 | 2018-05-17 | Avent, Inc. | System and method for navigation to a target anatomical object in medical imaging-based procedures |
CN108229584A (en) * | 2018-02-02 | 2018-06-29 | 莒县人民医院 | A kind of Multimodal medical image recognition methods and device based on deep learning |
WO2018120942A1 (en) * | 2016-12-31 | 2018-07-05 | 西安百利信息科技有限公司 | System and method for automatically detecting lesions in medical image by means of multi-model fusion |
CN108364006A (en) * | 2018-01-17 | 2018-08-03 | 超凡影像科技股份有限公司 | Medical Images Classification device and its construction method based on multi-mode deep learning |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9947102B2 (en) * | 2016-08-26 | 2018-04-17 | Elekta, Inc. | Image segmentation using neural network method |
IL250382B (en) * | 2017-01-31 | 2021-01-31 | Arbe Robotics Ltd | A radar-based system and method for real-time simultaneous localization and mapping |
CN106808480B (en) * | 2017-03-23 | 2023-01-06 | 北京瑞华康源科技有限公司 | Robot medical guidance system |
CN108478348B (en) * | 2018-05-29 | 2023-12-01 | 华南理工大学 | Intelligent wheelchair with indoor autonomous navigation Internet of things and control method |
2018
- 2018-10-09: CN application CN201811171881.8A filed (publication CN109473168A, status: pending)
- 2018-11-01: PCT application PCT/CN2018/113464 filed (publication WO2020073389A1)
Non-Patent Citations (1)
Title |
---|
YU Dapeng: "Control of CAN bus motors and their application in robot competitions", China Information Technology Education, no. 24, pages 60-64 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111341441A (en) * | 2020-03-02 | 2020-06-26 | 刘四花 | Gastrointestinal disease model construction method and diagnosis system |
CN112109096A (en) * | 2020-09-21 | 2020-12-22 | 深圳市明锐信息科技有限公司 | High-precision medical image robot and identification method thereof |
CN112109096B (en) * | 2020-09-21 | 2022-04-19 | 安徽省幸福工场医疗设备有限公司 | High-precision medical image robot and identification method thereof |
CN112151169A (en) * | 2020-09-22 | 2020-12-29 | 深圳市人工智能与机器人研究院 | Ultrasonic robot autonomous scanning method and system based on human-simulated operation |
CN112151169B (en) * | 2020-09-22 | 2023-12-05 | 深圳市人工智能与机器人研究院 | Autonomous scanning method and system of humanoid-operation ultrasonic robot |
CN115294515A (en) * | 2022-07-05 | 2022-11-04 | 南京邮电大学 | Artificial intelligence-based comprehensive anti-theft management method and system |
CN115294515B (en) * | 2022-07-05 | 2023-06-13 | 南京邮电大学 | Comprehensive anti-theft management method and system based on artificial intelligence |
Also Published As
Publication number | Publication date |
---|---|
WO2020073389A1 (en) | 2020-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109473168A (en) | A kind of medical image robot and its control, medical image recognition methods | |
CN110026987B (en) | Method, device and equipment for generating grabbing track of mechanical arm and storage medium | |
CN108885459B (en) | Navigation method, navigation system, mobile control system and mobile robot | |
WO2021103987A1 (en) | Control method for sweeping robot, sweeping robot, and storage medium | |
CN100487724C (en) | Quick target identification and positioning system and method | |
CN108776773B (en) | Three-dimensional gesture recognition method and interaction system based on depth image | |
CN102854983B (en) | A kind of man-machine interaction method based on gesture identification | |
CN111906784A (en) | Pharyngeal swab double-arm sampling robot based on machine vision guidance and sampling method | |
CN103679203A (en) | Robot system and method for detecting human face and recognizing emotion | |
CN107428004A (en) | The automatic collection of object data and mark | |
CN108161882A (en) | A kind of robot teaching reproducting method and device based on augmented reality | |
JP2022542241A (en) | Systems and methods for augmenting visual output from robotic devices | |
US9008442B2 (en) | Information processing apparatus, information processing method, and computer program | |
CN206780416U (en) | A kind of intelligent medical assistant robot | |
CN107214700A (en) | A kind of robot autonomous patrol method | |
CN106468917B (en) | A kind of long-range presentation exchange method and system of tangible live real-time video image | |
CN208629445U (en) | Autonomous introduction system platform robot | |
CN105500370A (en) | Robot offline teaching programming system and method based on somatosensory technology | |
CN105759650A (en) | Method used for intelligent robot system to achieve real-time face tracking | |
CN106272446A (en) | The method and apparatus of robot motion simulation | |
CN108044625A (en) | A kind of robot arm control method based on the virtual gesture fusions of more Leapmotion | |
CN108664125A (en) | A kind of power transformer maintenance exception and defects simulation equipment | |
Zhang et al. | Robot programming by demonstration: A novel system for robot trajectory programming based on robot operating system | |
CN116386414A (en) | Digital mirror image-based ergonomic adjustment line training system and method | |
CN110070039A (en) | Computer room cabinet and master control borad perception and coordinate measuring method and device based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||