CN112382374A - Tumor segmentation device and segmentation method - Google Patents

Tumor segmentation device and segmentation method

Info

Publication number
CN112382374A
CN112382374A (application CN202011341340.2A)
Authority
CN
China
Prior art keywords: tumor, data, button, server, dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011341340.2A
Other languages
Chinese (zh)
Other versions
CN112382374B (en)
Inventor
徐庸辉 (Xu Yonghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202011341340.2A
Publication of CN112382374A
Application granted
Publication of CN112382374B
Legal status: Active


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/40: ICT specially adapted for the handling or processing of medical images, for processing medical images, e.g. editing
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60: ICT specially adapted for the management or operation of medical equipment or devices, for the operation of medical equipment or devices
    • G16H40/63: ICT specially adapted for the management or operation of medical equipment or devices, for local operation

Abstract

The invention relates to a tumor segmentation device comprising a hand lever, an intelligent glove, a server and a visualization terminal. The hand lever is provided with a function button and a function pressure sensor; the function button is connected with the intelligent glove, and the function pressure sensor is arranged in correspondence with the function button. The intelligent glove carries a three-dimensional positioning device that acquires posture data and motion data as the glove's posture and position change, and sends them to the server. The server rotates, moves or segments the tumor image according to the posture and motion data, adjusts the speed of rotating or moving the tumor image according to the pressure data on the function button, generates a three-dimensional tumor model from the segmented tumor image, and sends the model to the visualization terminal. The device can be controlled with both of the operator's hands simultaneously, so the lesion site can be quickly located and segmented and a three-dimensional tumor model generated, improving tumor segmentation efficiency. The invention also relates to a tumor segmentation method.

Description

Tumor segmentation device and segmentation method
Technical Field
The invention relates to the technical field of medical image processing, in particular to a tumor segmentation device and a tumor segmentation method.
Background
Radiotherapy, which uses high-energy radiation to destroy cancer cells, is an important means of tumor therapy. Accurately segmenting the tumor in CT or MRI images and determining the extent of the target area is a key step in tumor radiotherapy.
Tumor segmentation methods based on machine learning and deep learning learn, from massive amounts of labeled tumor-region image data, a model that can accurately distinguish the tumor foreground region from the background region in a high-dimensional space. The segmentation accuracy of the resulting three-dimensional tumor model depends on the number and quality of the labels the physician draws in two-dimensional views. Obtaining an accurate tumor segmentation model requires hundreds of thousands of high-quality labeled three-dimensional tumor models, which consumes considerable manpower and material resources and is economically impractical.
Non-machine-learning tumor segmentation relies on traditional interactive devices, namely a mouse and keyboard, to manipulate CT or MRI images. When segmenting a three-dimensional tumor model, an expert typically needs to view and label the tumor region from multiple three-dimensional perspectives. Switching viewing angles and marking the tumor requires frequent rotation, scaling and translation of the three-dimensional tumor image, and when the image is moved or rotated one-handed with a mouse, the movement or rotation speed is generally constant. When a tumor scan has many slices, this uniform-speed operation makes it hard to locate the lesion quickly, so tumor segmentation efficiency is low.
Disclosure of Invention
Aiming at the technical problems in the prior art, one purpose of the invention is to provide a tumor segmentation device that can be controlled with both of the operator's hands simultaneously, can quickly locate and segment the lesion site and generate a three-dimensional tumor model, requires no massive tumor-region image data, and improves tumor segmentation efficiency.
Aiming at the technical problems in the prior art, a second purpose of the invention is to provide a tumor segmentation method with the corresponding advantages: the operator controls the device with both hands simultaneously, the lesion site can be quickly located and segmented and a three-dimensional tumor model generated, no massive tumor-region image data are needed, and tumor segmentation efficiency is high.
In order to achieve the purpose, the invention adopts the following technical scheme:
a tumor segmentation device comprises a hand lever, intelligent gloves, a server and a visual terminal;
the hand lever is provided with a function button and a function pressure sensor; the function button is connected with the intelligent glove and used for enabling the glove's functions of simulated rotation and simulated movement of the tumor image; the function pressure sensor is arranged in correspondence with the function button and used for acquiring pressure data on the function button and sending it to the server;
the intelligent gloves are used for simulating rotation and moving tumor images through posture change and motion change;
the intelligent gloves are provided with three-dimensional positioning devices and used for acquiring attitude data and motion data of the intelligent gloves in the attitude change and motion change processes and sending the attitude data and the motion data to the server;
the server is used for receiving the attitude data and the motion data of the intelligent gloves and the pressure data on the function buttons, rotating, moving or dividing the tumor images according to the attitude data and the motion data, adjusting the speed of rotating or moving the tumor images according to the pressure data on the function buttons, generating a three-dimensional tumor model according to the divided tumor images and sending the three-dimensional tumor model to the visualization terminal;
the visualization terminal is used for receiving and displaying the three-dimensional tumor model sent by the server.
Further, the hand lever is cylindrical, and the function button comprises a rotation button and a movement button, which are respectively arranged on the side wall of the hand lever and respectively used for enabling the intelligent glove's simulated rotation and simulated movement of the tumor image; the function pressure sensor comprises a rotation pressure sensor arranged in correspondence with the rotation button and a movement pressure sensor arranged in correspondence with the movement button.
Further, the intelligent glove comprises finger sleeves that are straightened or bent to simulate zooming the tumor image. Each finger sleeve is provided with a curvature sensor connected with the server, which acquires the sleeve's curvature data and sends it to the server; the server zooms the tumor image according to the received curvature data. The hand lever is further provided with a zoom button and a zoom pressure sensor connected with the server. The zoom button is arranged on the side wall of the hand lever and used for enabling the intelligent glove's simulated zooming of the tumor image; the zoom pressure sensor is arranged in correspondence with the zoom button and used for acquiring pressure data on the zoom button and sending it to the server, which adjusts the zooming speed of the tumor image according to the received pressure data.
Furthermore, the hand lever is also provided with a segmentation button and a segmentation pressure sensor connected with the server. The segmentation button is arranged on the side wall of the hand lever and used for enabling the intelligent glove's simulated segmentation of the tumor image; the segmentation pressure sensor is arranged in correspondence with the segmentation button and used for acquiring pressure data on the segmentation button and sending it to the server, which starts to segment the tumor image upon receiving the pressure data.
Further, the three-dimensional positioning device comprises a posture sensor, and the posture sensor is arranged at the back of the hand of the intelligent glove and connected with the server.
Furthermore, the intelligent gloves are provided with a main control module, and the main control module is respectively connected with the attitude sensor, the curvature sensor, the rotating pressure sensor, the moving pressure sensor, the zooming pressure sensor, the dividing pressure sensor and the server.
Furthermore, the top end of the hand lever is provided with a starting button, and the starting button is connected with the main control module and used for starting the intelligent gloves.
A segmentation method of a tumor segmentation device comprises the following steps,
acquiring gesture data and motion data of the intelligent glove, and rotating, moving or segmenting the tumor image according to the gesture data and the motion data of the intelligent glove;
acquiring pressure data on a function button of the hand lever, and adjusting the speed of rotating and/or moving the tumor image according to the pressure data on the function button;
and generating a three-dimensional tumor model according to the segmented tumor image and sending the three-dimensional tumor model to the visualization terminal so that the visualization terminal receives and displays the three-dimensional tumor model.
The method further comprises the following steps of receiving the curvature data of the fingerstall of the intelligent glove and zooming the tumor image according to the curvature data of the fingerstall.
Furthermore, the generation of a three-dimensional tumor model from the segmented tumor images is achieved as follows:
storing the segmented tumor image into a two-dimensional tumor marking area, respectively projecting the two-dimensional tumor marking area from a plurality of visual angles to a three-dimensional coordinate space by adopting a marking migration method to obtain a plurality of three-dimensional marking coordinates, and synthesizing a three-dimensional tumor model by the plurality of three-dimensional marking coordinates.
In summary, the present invention has the following advantages:
The device can be controlled with both of the operator's hands simultaneously, conforms to ergonomic design, requires no massive tumor-region image data, can quickly locate and segment the lesion site and generate a three-dimensional tumor model, and improves tumor segmentation efficiency.
Drawings
Fig. 1 is a schematic structural diagram of an embodiment of the present invention.
Fig. 2 is a schematic perspective view of a handle according to an embodiment of the present invention.
Fig. 3 is a schematic plan structure view of a smart glove according to an embodiment of the present invention.
Fig. 4 is a flow chart of an embodiment of the present invention.
Fig. 5 is a flow chart of server data processing according to an embodiment of the present invention.
Fig. 6 is a flow chart of data processing of the label migration module according to an embodiment of the present invention.
Description of reference numerals:
1-visual terminal;
2-a server;
3-intelligent glove; 31-little finger curvature sensor; 32-ring finger curvature sensor; 33-middle finger curvature sensor; 34-index finger curvature sensor; 35-thumb curvature sensor; 36-attitude sensor; 37-main control module;
4-data line;
5-hand lever; 51-start button; 511-start pressure sensor; 52-rotation button; 521-rotation pressure sensor; 53-movement button; 531-movement pressure sensor; 54-zoom button; 541-zoom pressure sensor; 55-segmentation button; 551-segmentation pressure sensor.
Detailed Description
The present invention will be described in further detail below.
As shown in fig. 1 to 3, a tumor segmentation device includes a hand lever 5, an intelligent glove 3, a server 2 and a visualization terminal 1, wherein the hand lever 5 is provided with a function button and a function pressure sensor, the function button is connected with the intelligent glove 3 and is used for starting the functions of simulating rotation and moving tumor images of the intelligent glove 3; the function pressure sensor is arranged corresponding to the function button and used for acquiring pressure data on the function button and sending the pressure data to the server 2; the intelligent glove 3 is used for simulating rotation and moving tumor images through posture change and motion change; the intelligent gloves 3 are provided with three-dimensional positioning devices and used for acquiring attitude data and motion data of the intelligent gloves 3 in the attitude change and motion change processes and sending the attitude data and the motion data to the server 2; the server 2 receives the attitude data and the motion data of the intelligent gloves 3 and the pressure data on the function buttons, rotates, moves or segments the tumor images according to the attitude data and the motion data, adjusts the speed of rotating or moving the tumor images according to the pressure data on the function buttons, generates a three-dimensional tumor model according to the segmented tumor images and sends the three-dimensional tumor model to the visualization terminal 1; the visualization terminal 1 receives and displays the three-dimensional tumor model transmitted from the server 2.
The operator wears the intelligent glove 3 on one hand and holds the hand lever 5 in the other. The glove 3 can be made in a left-hand or right-hand version, with the hand lever 5 correspondingly designed for the right or left hand, to suit operators of different handedness. The device is described below with the intelligent glove 3 worn on the right hand and the hand lever 5 held in the left hand.
The intelligent glove 3 simulates rotating and moving the tumor image shown in the visualization terminal 1 through its own posture and motion changes. Specifically, the operator presses a function button with the left hand to enable the glove's simulated-rotation and simulated-movement functions. When the operator moves or rotates the glove 3, the three-dimensional positioning device inside it acquires posture data and motion data during the posture and motion changes and sends them to the server 2; the server 2 is connected with the visualization terminal 1, which displays the moving or rotating state of the tumor image at the current viewing angle. While the function button is pressed, the corresponding function pressure sensor acquires the pressure applied to the button and transmits it to the server 2, which adjusts the movement or rotation speed of the tumor image accordingly. The acceleration with which the server 2 rotates or moves the tumor image is directly proportional to the pressure on the function button: the greater the pressure, the faster the image rotates or moves, which helps the operator quickly locate the lesion site.
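The specification states only that the rotation or movement speed scales with the pressure on the function button; a minimal sketch of such a mapping, assuming a linear law (the constants `max_pressure`, `base_speed` and `max_speed` are illustrative, not taken from the patent):

```python
def rotation_speed(pressure: float, max_pressure: float = 10.0,
                   base_speed: float = 5.0, max_speed: float = 90.0) -> float:
    """Map function-button pressure (N) to a rotation speed (deg/s).

    The patent only says the speed is proportional to the pressure;
    the linear mapping and constants here are illustrative assumptions.
    """
    p = min(max(pressure, 0.0), max_pressure)   # clamp to the sensor's range
    return base_speed + (max_speed - base_speed) * (p / max_pressure)
```

Pressing harder on the rotation button 52 would thus move `rotation_speed` from the gentle base rate toward the maximum rate, matching the "greater pressure, faster rotation" behavior described above.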
During movement of the intelligent glove 3, the server 2 records and processes the posture and motion data transmitted by the three-dimensional positioning device, segments the tumor image around the lesion site, generates a three-dimensional tumor model from the segmented image, and sends it to the visualization terminal 1 for display. The tumor segmentation device of this embodiment can thus be operated with both of the operator's hands simultaneously, conforms to ergonomic design, requires no massive tumor-region image data, can quickly locate and segment the lesion site and generate a three-dimensional tumor model, and improves tumor segmentation efficiency.
The hand lever 5 is cylindrical. The function button comprises a rotation button 52 and a movement button 53, respectively arranged on the side wall of the hand lever 5 and respectively used to enable the glove's simulated rotation and simulated movement of the tumor image. The function pressure sensor comprises a rotation pressure sensor 521 arranged in correspondence with the rotation button 52 and a movement pressure sensor 531 arranged in correspondence with the movement button 53.
The cylindrical hand lever 5 is convenient to hold, and the rotation button 52 and movement button 53 on its side wall are easy to press with the fingers; the design conforms to ergonomics and improves the user experience.
The intelligent glove 3 comprises finger sleeves that are straightened or bent to simulate zooming the tumor image. Each finger sleeve carries a curvature sensor connected with the server 2, which acquires the sleeve's curvature data and sends it to the server 2; the server 2 zooms the tumor image according to the received data. The hand lever 5 is further provided with a zoom button 54 and a zoom pressure sensor 541 connected with the server 2. The zoom button 54 on the side wall of the hand lever 5 enables the glove's simulated zooming of the tumor image; the zoom pressure sensor 541, arranged in correspondence with the zoom button 54, acquires the pressure data on the button and sends it to the server 2, which adjusts the zooming speed of the tumor image according to that pressure data.
When the operator wearing the intelligent glove 3 bends a finger, the corresponding finger sleeve bends with it; the curvature sensor on that sleeve detects the sleeve's curvature data and sends it to the server 2, which zooms the tumor image accordingly so the operator can inspect image details better and faster. When the zoom button 54 is pressed, the zoom pressure sensor 541 detects the pressure on the button and sends it to the server 2, which adjusts the zooming speed according to the received pressure: the greater the pressure on the zoom button 54, the faster the zoom, which helps locate the lesion site quickly.
The hand lever 5 is further provided with a segmentation button 55 and a segmentation pressure sensor 551 connected with the server 2. The segmentation button 55 is arranged on the side wall of the hand lever 5 and enables the glove's simulated segmentation of the tumor image; the segmentation pressure sensor 551, arranged in correspondence with the segmentation button 55, acquires the pressure data on the button and sends it to the server 2, which starts segmenting the tumor image upon receiving it.
When the segmentation button 55 is pressed, the segmentation pressure sensor 551 detects pressure on the button and sends a segmentation signal to the server 2. The server 2 then records the movement track and movement area of the intelligent glove 3 from the movement acceleration and angular velocity fed back by the attitude sensor 36, stores them as a tumor marking region, and generates the three-dimensional tumor model from that region.
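The server is said to reconstruct the glove's movement track from the acceleration fed back by the attitude sensor 36. A toy dead-reckoning sketch of that reconstruction, assuming gravity-compensated accelerometer samples and ignoring the gyroscope/magnetometer fusion a real device would need to limit drift:

```python
import numpy as np

def integrate_track(accel: np.ndarray, dt: float) -> np.ndarray:
    """Dead-reckon a 3-D glove trajectory from accelerometer samples.

    accel: (N, 3) acceleration samples in m/s^2, gravity already removed
    (an assumption; the patent does not describe the preprocessing).
    dt: sampling interval in seconds.
    Returns an (N, 3) array of positions via a twice-integrated Euler scheme.
    """
    vel = np.cumsum(accel * dt, axis=0)   # first integration: velocity
    pos = np.cumsum(vel * dt, axis=0)     # second integration: position
    return pos
```

The recorded positions would then be accumulated into the tumor marking region while the segmentation button 55 stays pressed.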
The three-dimensional positioning device comprises a posture sensor 36, wherein the posture sensor 36 is arranged at the back of the hand of the intelligent glove 3 and is connected with the server 2.
Arranging the attitude sensor 36 on the back of the hand of the intelligent glove 3 lets it detect the glove's gestures more accurately and transmit the glove's posture changes to the server 2 in real time.
The intelligent glove 3 is provided with a main control module 37, and the main control module 37 is respectively connected with the attitude sensor 36, the curvature sensor, the rotating pressure sensor 521, the moving pressure sensor 531, the zooming pressure sensor 541, the dividing pressure sensor 551 and the server 2.
In the present embodiment, the attitude sensor 36 includes a 3-axis acceleration sensor, a 3-axis gyroscope, and a 3-axis magnetic field sensor.
The finger sleeves of the intelligent glove 3 comprise a thumb sleeve, index finger sleeve, middle finger sleeve, ring finger sleeve and little finger sleeve, to which strip-shaped curvature sensors are correspondingly bound: the thumb curvature sensor 35, index finger curvature sensor 34, middle finger curvature sensor 33, ring finger curvature sensor 32 and little finger curvature sensor 31. When the zoom button 54 is pressed, the server 2 averages the curvature data of the sensors on the finger sleeves, divides the average by the sensors' maximum curvature, and magnifies the tumor image by the resulting proportion, so the operator can view the image more clearly. After viewing, straightening the fingers straightens the curvature sensors, shrinking the tumor image again so it can be surveyed quickly.
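The averaging-and-normalizing rule above can be sketched as follows. Mapping the normalized ratio onto a 1x to 2x magnification factor is an assumption for illustration; the patent only says the image is scaled "in the same proportion":

```python
def zoom_scale(curvatures, max_curvature: float = 90.0) -> float:
    """Average the five finger-sleeve bend readings (degrees), normalise by
    the sensors' maximum bend, and map onto a magnification factor.

    The 1x-2x output range and the 90-degree maximum are illustrative
    assumptions, not values given in the patent.
    """
    avg = sum(curvatures) / len(curvatures)
    ratio = min(max(avg / max_curvature, 0.0), 1.0)  # clamp to [0, 1]
    return 1.0 + ratio   # straight fingers -> 1x, fully bent -> 2x
```

Straightening all fingers drives the average back to zero, which returns the factor to 1x, matching the "shrink to survey the whole image" behavior described above.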
The main control module 37 is arranged at the wrist of the intelligent glove 3 and comprises a lithium battery powering the glove, a central processing unit, and a Bluetooth communication module. The central processing unit is connected with the hand lever 5 through a data line 4 and with the server 2 through the Bluetooth communication module. When the operator moves or rotates the glove, bends or straightens its finger sleeves, or presses a button on the hand lever 5, the sensors of the hand lever 5 and the glove transmit their data to the main control module 37, whose Bluetooth communication module then forwards the sensor data to the server 2.
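The patent does not specify the Bluetooth payload. Purely as an illustration, one frame of glove and hand-lever sensor data could be serialized as fixed-size floats before transmission; the field layout, order and endianness below are all assumptions:

```python
import struct

# Hypothetical wire format for one sensor frame: 5 finger-sleeve curvatures,
# 3 acceleration components, 3 angular velocities, 5 button pressures,
# all little-endian float32 (16 floats, 64 bytes).
FRAME_FMT = "<5f3f3f5f"

def pack_frame(curv, accel, gyro, pressures) -> bytes:
    """Serialize one sensor frame for the Bluetooth link (layout assumed)."""
    return struct.pack(FRAME_FMT, *curv, *accel, *gyro, *pressures)

def unpack_frame(data: bytes):
    """Deserialize a frame back into its four sensor groups."""
    vals = struct.unpack(FRAME_FMT, data)
    return vals[:5], vals[5:8], vals[8:11], vals[11:]
```

On the server side, `unpack_frame` would hand the curvature, attitude and pressure groups to the zooming, rotation and speed-adjustment logic respectively.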
The visual terminal 1 is a display, a tablet computer or a smart phone.
The server 2 is a cloud server or a local computer.
The top end of the handle bar 5 is provided with a start button 51, and the start button 51 is connected with the main control module 37 and used for starting the intelligent gloves 3.
In this embodiment, the rotation button 52, movement button 53, zoom button 54 and segmentation button 55 are arranged from top to bottom on the side wall of the hand lever 5, so the operator can press them with the index, middle, ring and little fingers respectively. The start button 51 at the top end of the hand lever 5 is convenient to press with the thumb. A start pressure sensor 511 is arranged beneath the start button 51; once it detects pressure on the start button 51, the intelligent glove 3 is switched on.
As shown in fig. 5, a segmentation method of a tumor segmentation apparatus includes the following steps,
acquiring the attitude data and the motion data of the intelligent glove 3, and rotating, moving or segmenting the tumor image according to the attitude data and the motion data of the intelligent glove 3;
acquiring pressure data on the function buttons of the hand lever 5, and adjusting the speed of rotating and/or moving the tumor image according to that pressure data;
and generating a three-dimensional tumor model according to the segmented tumor image and sending the three-dimensional tumor model to the visualization terminal 1, so that the visualization terminal 1 receives and displays the three-dimensional tumor model.
The operator controls the hand lever 5 and the intelligent glove 3 with one hand each. The tumor image is rotated, moved or segmented according to the glove's posture and motion data, and the speed of rotating and/or moving it is adjusted according to the pressure on the function buttons. Through the cooperation of the hand lever 5 and the intelligent glove 3, the lesion site can be quickly located and segmented and a three-dimensional tumor model generated, without training a segmentation model on massive tumor-region image data, so tumor segmentation efficiency is high.
The method further comprises the following steps of receiving the curvature data of the finger sleeves of the intelligent glove 3, and zooming the tumor image according to the curvature data of the finger sleeves.
The server 2 averages the curvature data of the glove's finger sleeves, divides it by the curvature sensors' maximum curvature, and magnifies the tumor image by the resulting proportion; zooming through the finger sleeves of the intelligent glove 3 lets the details of the tumor image be inspected more clearly. After viewing, straightening the fingers straightens the curvature sensors, shrinking the tumor image so its entirety can be viewed.
A three-dimensional tumor model can be generated from the segmented tumor image using existing algorithms. In this embodiment, it is generated as follows:
storing the segmented tumor image into a two-dimensional tumor marking area, respectively projecting the two-dimensional tumor marking area from a plurality of visual angles to a three-dimensional coordinate space by adopting a marking migration method to obtain a plurality of three-dimensional marking coordinates, and synthesizing a three-dimensional tumor model by the plurality of three-dimensional marking coordinates.
The working process of the invention is as follows:
As shown in fig. 4, after the tumor image is opened on the visualization terminal 1, the operator selects a three-dimensional viewing angle by manipulating the hand lever 5 and the intelligent glove 3. At the selected angle, the operator presses the segmentation button 55 on the hand lever 5 and moves the intelligent glove 3; the server 2 records the glove's movement track and area from the movement acceleration and angular velocity fed back by the attitude sensor 36 and stores them as the tumor marking region.
As shown in fig. 6, after a tumor model is initialized, a viewing angle is selected and the two-dimensional tumor marking region at that angle is extracted. The image coordinates in the two-dimensional marking region are converted into the three-dimensional coordinate space according to the current viewing-angle coordinates, yielding a set of three-dimensional marker coordinates. The set is traversed, checking for each coordinate whether the tumor model already contains it; if not, the three-dimensional marker coordinate is added to the model. The viewing angle is then changed and the next set of three-dimensional marker coordinates is checked, until no marks remain. The three-dimensional marker coordinates accumulated in three-dimensional space form the three-dimensional tumor model, which is output to the visualization terminal 1.
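The traversal just described can be sketched as a voxel-set union. Representing each view as a `(rotation, points)` pair and rounding to integer voxel coordinates are assumptions for illustration; the patent does not give the exact coordinate transform:

```python
import numpy as np

def migrate_labels(views):
    """Multi-view label migration, sketched as a voxel-set union.

    `views` is a list of (rotation_matrix, marked_points) pairs: each 3x3
    rotation maps view-local 3-D coordinates (2-D mark plus slice depth)
    into the common model frame. This data layout is an assumption; the
    patent only describes projecting 2-D marks from several viewing angles
    into one 3-D space and keeping coordinates the model lacks.
    """
    model = set()
    for rot, points in views:
        # Transform the view's marked points into the shared model frame.
        world = np.asarray(points, float) @ np.asarray(rot, float).T
        for xyz in map(tuple, world.round().astype(int)):
            if xyz not in model:      # skip marks the model already contains
                model.add(xyz)
    return model
```

A second view whose marks land on voxels already contributed by an earlier view adds nothing new, which is exactly the containment check in the flow chart of fig. 6.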
The invention addresses the complicated tumor segmentation workflow, low segmentation efficiency and inflexible tumor-image manipulation of the prior art. It uses tumor marking information from multiple viewing angles to assist segmentation, improving efficiency, while flexibly manipulating the tumor image by acquiring the pressing pressure of the operator's left hand and the acceleration, angular velocity and attitude-angle information of the right hand's movement and rotation, thereby providing a tumor segmentation device based on multi-view label migration.
Compared with the prior art, the invention has the following advantages:
1. Existing tumor image control devices, such as a mouse and keyboard, can only rotate or move the tumor image on a two-dimensional plane. The device provided by the invention uses the hand lever 5 and the intelligent glove 3 together to control the tumor image, directly rotating, translating, zooming and segmenting it in three-dimensional space by capturing the finger pressing pressure of the operator's left hand and the gesture data of the right hand.
2. Existing tumor image manipulation devices, such as a mouse and keyboard, generally rotate or move the tumor image at a constant speed, which is inefficient. In the present device, when the rotation button 52 or the movement button 53 is pressed, the speed at which the server 2 rotates or translates the tumor image is proportional to the pressure data from the rotation pressure sensor 521 or the movement pressure sensor 531: the larger the pressure, the faster the server 2 rotates or translates the image. The operator can therefore adjust the rotation and translation speed to his own proficiency by applying different pressures to the rotation button 52 and the movement button 53.
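The pressure-proportional control described above reduces to a simple linear mapping from button pressure to manipulation speed. In the sketch below, the normalized pressure range, the gain, and the clamp value are illustrative assumptions; the patent only states that speed is proportional to the measured pressure.

```python
def manipulation_speed(pressure, gain=90.0, max_speed=90.0):
    """Map a normalized button pressure (0..1) from the rotation or
    movement pressure sensor to a rotation/translation speed.

    Speed grows linearly with pressure and is clamped to max_speed,
    so an operator pressing harder manipulates the image faster.
    gain and max_speed (e.g. degrees per second) are assumed values.
    """
    return min(max_speed, gain * max(0.0, pressure))
```

A light press thus yields a slow, precise adjustment while a firm press sweeps the image quickly, which is the per-operator tuning the passage describes.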
3. Non-machine-learning tumor segmentation methods require the image characteristics of the tumor to be summarized manually, for example distinguishing the tumor region from the background region through a segmentation threshold or through the image texture and color of the tumor region; this process is complex and inconvenient to implement. Machine-learning-based tumor segmentation methods, in turn, need hundreds of thousands of high-quality labeled images to train a segmentation model. The segmentation device based on multi-view marker migration provided by the invention needs only tumor images marked from dozens of view angles to construct a three-dimensional tumor model directly, eliminating the step in the prior art in which a physicist must still reconstruct the three-dimensional tumor model from the marked regions, and thus completes tumor segmentation more efficiently.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited thereto; any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and is included within the scope of protection of the present invention.

Claims (10)

1. A tumor segmentation apparatus, characterized by comprising a hand lever, an intelligent glove, a server and a visualization terminal;
the hand lever is provided with function buttons and function pressure sensors; the function buttons are connected with the intelligent glove and used for enabling the intelligent glove's functions of simulating rotation and movement of the tumor image; each function pressure sensor is arranged corresponding to a function button and used for acquiring pressure data on that function button and sending the pressure data to the server;
the intelligent glove is used for simulating rotation and movement of the tumor image through changes of its attitude and motion;
the intelligent glove is provided with a three-dimensional positioning device for acquiring attitude data and motion data of the intelligent glove during the attitude and motion changes and sending them to the server;
the server is used for receiving the attitude data and motion data of the intelligent glove and the pressure data on the function buttons, rotating, moving or segmenting the tumor image according to the attitude data and motion data, adjusting the speed of rotating or moving the tumor image according to the pressure data on the function buttons, generating a three-dimensional tumor model from the segmented tumor image, and sending it to the visualization terminal;
the visualization terminal is used for receiving and displaying the three-dimensional tumor model sent by the server.
2. A tumor segmentation apparatus as claimed in claim 1, wherein: the hand lever is cylindrical; the function buttons comprise a rotation button and a movement button, respectively arranged on the side wall of the hand lever and used for enabling the intelligent glove's functions of simulating rotation and movement of the tumor image; the function pressure sensors comprise a rotation pressure sensor arranged corresponding to the rotation button and a movement pressure sensor arranged corresponding to the movement button.
3. A tumor segmentation apparatus as claimed in claim 2, wherein: the intelligent glove comprises fingerstalls, which simulate zooming of the tumor image by straightening or bending; each fingerstall is provided with a curvature sensor connected with the server and used for acquiring curvature data of the fingerstall and sending them to the server, and the server zooms the tumor image according to the received curvature data; the hand lever is further provided with a zoom button and a zoom pressure sensor, the zoom pressure sensor being connected with the server; the zoom button is arranged on the side wall of the hand lever and used for enabling the intelligent glove's function of simulating zooming of the tumor image; the zoom pressure sensor is arranged corresponding to the zoom button and used for acquiring pressure data on the zoom button and sending them to the server.
4. A tumor segmentation apparatus as claimed in claim 3, wherein: the hand lever is further provided with a segmentation button and a segmentation pressure sensor, the segmentation pressure sensor being connected with the server; the segmentation button is arranged on the side wall of the hand lever and used for enabling the intelligent glove's function of simulating segmentation of the tumor image; the segmentation pressure sensor is arranged corresponding to the segmentation button and used for acquiring pressure data on the segmentation button and sending them to the server; upon receiving pressure data on the segmentation button, the server starts segmenting the tumor image.
5. A tumor segmentation apparatus as claimed in claim 4, wherein: the three-dimensional positioning device comprises an attitude sensor, which is arranged on the back of the hand of the intelligent glove and connected with the server.
6. A tumor segmentation apparatus as claimed in claim 5, wherein: the intelligent glove is provided with a main control module, which is respectively connected with the attitude sensor, the curvature sensors, the rotation pressure sensor, the movement pressure sensor, the zoom pressure sensor, the segmentation pressure sensor and the server.
7. A tumor segmentation apparatus as claimed in claim 6, wherein: the top end of the hand lever is provided with a start button, which is connected with the main control module and used for starting the intelligent glove.
8. A segmentation method using the tumor segmentation apparatus according to any one of claims 1 to 7, characterized by comprising the following steps:
acquiring attitude data and motion data of the intelligent glove, and rotating, moving or segmenting the tumor image according to the attitude data and motion data of the intelligent glove;
acquiring pressure data on the function buttons of the hand lever, and adjusting the speed of rotating and/or moving the tumor image according to the pressure data on the function buttons;
and generating a three-dimensional tumor model from the segmented tumor image and sending it to the visualization terminal, so that the visualization terminal receives and displays the three-dimensional tumor model.
9. A segmentation method as claimed in claim 8, wherein: the method further comprises receiving curvature data of the fingerstalls of the intelligent glove, and zooming the tumor image according to the curvature data of the fingerstalls.
10. A segmentation method as claimed in claim 8, wherein the generation of a three-dimensional tumor model from the segmented tumor image comprises:
storing the segmented tumor image as two-dimensional tumor marking regions, projecting the two-dimensional tumor marking regions from a plurality of view angles into a three-dimensional coordinate space by a marker migration method to obtain a plurality of three-dimensional marking coordinates, and synthesizing a three-dimensional tumor model from the plurality of three-dimensional marking coordinates.
CN202011341340.2A 2020-11-25 2020-11-25 Tumor segmentation device and segmentation method Active CN112382374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011341340.2A CN112382374B (en) 2020-11-25 2020-11-25 Tumor segmentation device and segmentation method


Publications (2)

Publication Number Publication Date
CN112382374A true CN112382374A (en) 2021-02-19
CN112382374B CN112382374B (en) 2024-04-12

Family

ID=74588698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011341340.2A Active CN112382374B (en) 2020-11-25 2020-11-25 Tumor segmentation device and segmentation method

Country Status (1)

Country Link
CN (1) CN112382374B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140278139A1 (en) * 2010-09-30 2014-09-18 Fitbit, Inc. Multimode sensor devices
US20170103160A1 (en) * 2015-10-12 2017-04-13 Milsco Manufacturing Company, A Unit Of Jason Incorporated Customer Comfort Optimization Method, Apparatus, and System
WO2018107679A1 (en) * 2016-12-12 2018-06-21 华为技术有限公司 Method and device for acquiring dynamic three-dimensional image
KR20190054223A (en) * 2017-11-13 2019-05-22 주식회사 휴먼인사이트 Three-axis sensor-based postural visualization and management system
CN110647939A (en) * 2019-09-24 2020-01-03 广州大学 Semi-supervised intelligent classification method and device, storage medium and terminal equipment


Non-Patent Citations (1)

Title
石宇强; 刘岩; 徐桓; 张曦; 杜鹏; 卢虹冰; 刘洋; 徐肖攀: "Development of a virtual cystoscopy system based on a medical image processing platform", China Medical Equipment, no. 07, 10 July 2018 (2018-07-10), pages 42-46 *

Also Published As

Publication number Publication date
CN112382374B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
US11911214B2 (en) System and methods for at home ultrasound imaging
CN105045398B (en) A kind of virtual reality interactive device based on gesture identification
EP2755194B1 (en) 3d virtual training system and method
CN105074617B (en) Three-dimensional user interface device and three-dimensional manipulating processing method
US6734847B1 (en) Method and device for processing imaged objects
CN108701429A (en) The virtual and/or augmented reality that physics interactive training is carried out with operating robot is provided
CN113362452B (en) Hand posture three-dimensional reconstruction method and device and storage medium
CN105534694A (en) Human body characteristic visualization device and method
CN106530293A (en) Manual assembly visual detection error prevention method and system
CN110751681B (en) Augmented reality registration method, device, equipment and storage medium
CN102915111A (en) Wrist gesture control system and method
WO2011065034A1 (en) Method for controlling action of robot, and robot system
CN109460150A (en) A kind of virtual reality human-computer interaction system and method
CN106569673A (en) Multi-media case report displaying method and displaying device for multi-media case report
Bornik et al. A hybrid user interface for manipulation of volumetric medical data
CN104156068B (en) Virtual maintenance interaction operation method based on virtual hand interaction feature layer model
CN111639531A (en) Medical model interaction visualization method and system based on gesture recognition
CN113672099A (en) Electronic equipment and interaction method thereof
CN112198962A (en) Method for interacting with virtual reality equipment and virtual reality equipment
CN109671505A (en) A kind of head three-dimensional data processing method for medical consultations auxiliary
CN101869501B (en) Computer-aided needle scalpel positioning system
CN114706490A (en) Mouse model mapping method, device, equipment and storage medium
CN115328304A (en) 2D-3D fused virtual reality interaction method and device
CN213935663U (en) Tumor segmentation device
CN112382374B (en) Tumor segmentation device and segmentation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant