CN112382374B - Tumor segmentation device and segmentation method - Google Patents

Tumor segmentation device and segmentation method

Info

Publication number
CN112382374B (application CN202011341340.2A)
Authority
CN
China
Prior art keywords
tumor, button, data, server, dimensional
Prior art date
Legal status
Active
Application number
CN202011341340.2A
Other languages
Chinese (zh)
Other versions
CN112382374A (en)
Inventor
徐庸辉
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT
Priority to CN202011341340.2A
Publication of CN112382374A
Application granted
Publication of CN112382374B

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images, for processing medical images, e.g. editing
    • G16H40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 - ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/63 - ICT specially adapted for the management or operation of medical equipment or devices, for local operation

Abstract

The invention relates to a tumor segmentation device comprising a hand lever, an intelligent glove, a server and a visual terminal. The hand lever is provided with function buttons and function pressure sensors; the function buttons are connected with the intelligent glove, and each function pressure sensor is arranged corresponding to a function button. The intelligent glove is provided with a three-dimensional positioning device that acquires the glove's gesture data and motion data during its posture and motion changes and sends them to the server. The server rotates, moves or segments the tumor image according to the gesture data and motion data, adjusts the speed of rotating or moving the tumor image according to the pressure data on the function buttons, generates a three-dimensional tumor model from the segmented tumor image, and sends it to the visual terminal. The device is operated with both hands, can rapidly locate and segment the lesion and generate a three-dimensional tumor model, and improves tumor segmentation efficiency. The invention also relates to a tumor segmentation method.

Description

Tumor segmentation device and segmentation method
Technical Field
The invention relates to the technical field of medical image processing, in particular to a tumor segmentation device and a segmentation method.
Background
Radiotherapy, which uses high-energy radiation to destroy cancer cells, is an important tool in tumor therapy. Accurately segmenting the tumor in CT or MRI images and determining the extent of the target region are key steps in tumor radiotherapy.
Tumor segmentation methods based on machine learning and deep learning learn, from massive amounts of annotated tumor-region image data, a model that can accurately distinguish tumor foreground from background regions in a high-dimensional space. The segmentation accuracy of a three-dimensional tumor model depends on the number and quality of the annotations the physician makes in two-dimensional views. Obtaining an accurate tumor segmentation model requires annotating hundreds of thousands of high-quality three-dimensional tumor models, which consumes substantial manpower and material resources and is uneconomical.
Non-machine-learning tumor segmentation methods manipulate CT or MRI images through traditional interaction devices such as a mouse and keyboard. To segment a three-dimensional tumor model, an expert typically needs to view and mark the tumor region from multiple three-dimensional perspectives, which requires frequently rotating, scaling and translating the three-dimensional tumor image while switching viewing angles and marking the tumor. When the image is moved or rotated with a mouse in one hand, the movement or rotation speed is generally constant; for tumor images with many scan slices, such uniform-speed operation makes it hard to locate the lesion quickly, so tumor segmentation efficiency is low.
Disclosure of Invention
In view of the technical problems in the prior art, a first object of the invention is to provide a tumor segmentation device that can be operated with both hands, rapidly locates and segments the lesion, generates a three-dimensional tumor model, requires no massive tumor-region image dataset, and improves tumor segmentation efficiency.
A second object of the invention is to provide a tumor segmentation method in which the operator controls the tumor segmentation device with both hands, so that the lesion can be rapidly located and segmented, a three-dimensional tumor model is generated, no massive tumor-region image dataset is needed, and tumor segmentation efficiency is high.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a tumor segmentation device comprises a hand lever, an intelligent glove, a server and a visual terminal;
the hand lever is provided with function buttons and function pressure sensors; the function buttons are connected with the intelligent glove and are used for activating the functions of the intelligent glove that rotate and move the tumor image; each function pressure sensor is arranged corresponding to a function button and is used for acquiring the pressure data on that button and sending the pressure data to the server;
the intelligent glove is used for rotating and moving the tumor image through changes in its posture and motion;
the intelligent glove is provided with a three-dimensional positioning device for acquiring the gesture data and motion data of the intelligent glove during its posture and motion changes and sending them to the server;
the server is used for receiving the gesture data and motion data of the intelligent glove and the pressure data on the function buttons, rotating, moving or segmenting the tumor image according to the gesture data and motion data, adjusting the speed of rotating or moving the tumor image according to the pressure data on the function buttons, generating a three-dimensional tumor model from the segmented tumor image and sending it to the visual terminal;
the visual terminal is used for receiving and displaying the three-dimensional tumor model sent by the server.
Further, the hand lever is cylindrical; the function buttons comprise a rotation button and a movement button, which are respectively arranged on the side wall of the hand lever and respectively activate the rotation and movement functions of the intelligent glove; the function pressure sensors comprise a rotation pressure sensor arranged corresponding to the rotation button and a movement pressure sensor arranged corresponding to the movement button.
Further, the intelligent glove comprises fingerstalls that are straightened or bent to zoom the tumor image. Each fingerstall is provided with a bending sensor connected with the server, which acquires the fingerstall's bending data and sends it to the server; the server zooms the tumor image according to the received bending data. The hand lever is further provided with a zoom button and a zoom pressure sensor connected with the server; the zoom button is arranged on the side wall of the hand lever and activates the zoom function of the intelligent glove, the zoom pressure sensor is arranged corresponding to the zoom button and acquires the pressure data on the zoom button and sends it to the server, and the server adjusts the zoom speed according to the received pressure data on the zoom button.
Further, the hand lever is further provided with a segmentation button and a segmentation pressure sensor connected with the server; the segmentation button is arranged on the side wall of the hand lever and activates the segmentation function of the intelligent glove, the segmentation pressure sensor is arranged corresponding to the segmentation button and acquires the pressure data on the segmentation button and sends it to the server, and the server starts segmenting the tumor image upon receiving pressure data on the segmentation button.
Further, the three-dimensional positioning device comprises an attitude sensor, and the attitude sensor is arranged at the back of the hand of the intelligent glove and is connected with the server.
Further, the intelligent glove is provided with a main control module, which is respectively connected with the attitude sensor, the bending sensors, the rotation pressure sensor, the movement pressure sensor, the zoom pressure sensor, the segmentation pressure sensor and the server.
Further, a start button is arranged at the top end of the hand lever; the start button is connected with the main control module and is used for starting the intelligent glove.
A segmentation method using the tumor segmentation device comprises the following steps:
acquiring gesture data and motion data of the intelligent glove, and rotating, moving or segmenting the tumor image according to the gesture data and motion data of the intelligent glove;
acquiring pressure data on the function buttons of the hand lever, and adjusting the speed of rotating and/or moving the tumor image according to the pressure data on the function buttons;
and generating a three-dimensional tumor model from the segmented tumor image and sending it to the visual terminal, so that the visual terminal receives and displays the three-dimensional tumor model.
Further, the method comprises receiving bending data of the fingerstalls of the intelligent glove and zooming the tumor image according to the bending data of the fingerstalls.
Further, the three-dimensional tumor model is generated from the segmented tumor image as follows:
the segmented tumor image is stored as a two-dimensional tumor marker region; using a marker migration method, the two-dimensional tumor marker regions from multiple viewing angles are each projected into a three-dimensional coordinate space to obtain multiple three-dimensional marker coordinates, and the three-dimensional tumor model is synthesized from these three-dimensional marker coordinates.
In general, the invention has the following advantages:
the method can be controlled by both hands of an operator, accords with the ergonomic design, does not need to provide massive tumor area image data, can rapidly locate and divide focus positions and generate a three-dimensional tumor model, and improves the tumor division efficiency.
Drawings
Fig. 1 is a schematic structural diagram of an embodiment of the present invention.
Fig. 2 is a schematic perspective view of a hand lever according to an embodiment of the present invention.
Fig. 3 is a schematic plan view of an intelligent glove according to an embodiment of the present invention.
Fig. 4 is a flow chart of an implementation of an embodiment of the present invention.
Fig. 5 is a flow chart of the server data processing according to an embodiment of the present invention.
Fig. 6 is a flow chart of the data processing of the marker migration module according to an embodiment of the present invention.
Reference numerals illustrate:
1 - visual terminal;
2 - server;
3 - intelligent glove; 31 - little finger bending sensor; 32 - ring finger bending sensor; 33 - middle finger bending sensor; 34 - index finger bending sensor; 35 - thumb bending sensor; 36 - attitude sensor; 37 - main control module;
4 - data line;
5 - hand lever; 51 - start button; 511 - start pressure sensor; 52 - rotation button; 521 - rotation pressure sensor; 53 - movement button; 531 - movement pressure sensor; 54 - zoom button; 541 - zoom pressure sensor; 55 - segmentation button; 551 - segmentation pressure sensor.
Detailed Description
The present invention will be described in further detail below.
As shown in figs. 1 to 3, a tumor segmentation device comprises a hand lever 5, an intelligent glove 3, a server 2 and a visual terminal 1. The hand lever 5 is provided with function buttons and function pressure sensors; the function buttons are connected with the intelligent glove 3 and activate its functions of rotating and moving the tumor image, and each function pressure sensor is arranged corresponding to a function button, acquiring the pressure data on that button and sending it to the server 2. The intelligent glove 3 rotates and moves the tumor image through changes in its posture and motion, and is provided with a three-dimensional positioning device that acquires the gesture data and motion data of the intelligent glove 3 during these changes and sends them to the server 2. The server 2 receives the gesture data and motion data of the intelligent glove 3 and the pressure data on the function buttons, rotates, moves or segments the tumor image according to the gesture data and motion data, adjusts the speed of rotating or moving the tumor image according to the pressure data on the function buttons, generates a three-dimensional tumor model from the segmented tumor image, and sends it to the visual terminal 1. The visual terminal 1 receives and displays the three-dimensional tumor model sent from the server 2.
The operator wears the intelligent glove 3 on one hand and holds the hand lever 5 in the other. The intelligent glove 3 can be designed for the left or the right hand and, correspondingly, the hand lever 5 for the right or the left hand, to suit operators of different handedness. The device is described below taking the case in which the intelligent glove 3 is worn on the right hand and the hand lever 5 is held in the left hand.
Through its posture and motion changes, the intelligent glove 3 rotates and moves the tumor image shown on the visual terminal 1. Specifically, the operator presses a function button with the left hand, activating the corresponding rotation or movement function of the intelligent glove 3. When the operator moves or rotates the right hand wearing the intelligent glove 3, the three-dimensional positioning device in the glove acquires its gesture data and motion data during the posture and motion changes and sends them to the server 2; the server 2 is connected with the visual terminal 1, which displays the movement or rotation of the tumor image at the current viewing angle. When the left hand presses the function button harder, the function pressure sensor under that button acquires the pressure data applied to it and transmits the data to the server 2, which adjusts the moving or rotating speed of the tumor image accordingly. The speed at which the server 2 rotates or moves the tumor image is directly proportional to the pressure data on the function button: the greater the pressure, the faster the rotation or movement, so the operator can quickly locate the lesion.
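As an illustration of this proportional control, the following Python sketch maps a raw button-pressure reading to a manipulation speed. It is a minimal sketch under stated assumptions, not the patented implementation: PRESSURE_MAX, BASE_SPEED, GAIN and the linear mapping are invented for illustration, since the description states only that the speed is proportional to the pressure on the function button.

# Minimal sketch of the pressure-to-speed mapping described above.
# PRESSURE_MAX, BASE_SPEED and GAIN are assumed values; the patent only
# states that speed grows with the pressure applied to the button.
PRESSURE_MAX = 1023   # assumed full-scale reading of a button pressure sensor
BASE_SPEED = 5.0      # assumed base speed (degrees or voxels per second)
GAIN = 4.0            # assumed proportionality gain

def manipulation_speed(pressure_raw: int) -> float:
    """Map a raw button-pressure reading to a rotation/translation speed."""
    pressure = max(0, min(pressure_raw, PRESSURE_MAX)) / PRESSURE_MAX
    return BASE_SPEED * (1.0 + GAIN * pressure)

print(manipulation_speed(100))   # light press, ~7: slow, fine positioning
print(manipulation_speed(900))   # firm press, ~23: fast traversal of many slices

A linear mapping is one natural reading of the description; a stepped or exponential mapping would equally let a firmer press traverse many scan slices faster.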
While the intelligent glove 3 moves, the server 2 records and processes the gesture data and motion data transmitted by the three-dimensional positioning device, segments the tumor image at the lesion, generates a three-dimensional tumor model from the segmented tumor image and sends it to the visual terminal 1 for display. The tumor segmentation device of the embodiment can thus be operated with both hands simultaneously, follows ergonomic design principles, requires no massive tumor-region image dataset, rapidly locates and segments the lesion, generates a three-dimensional tumor model, and improves tumor segmentation efficiency.
The hand lever 5 is cylindrical. The function buttons comprise a rotation button 52 and a movement button 53, which are respectively arranged on the side wall of the hand lever 5 and respectively activate the rotation and movement functions of the intelligent glove 3; the function pressure sensors comprise a rotation pressure sensor 521 arranged corresponding to the rotation button 52 and a movement pressure sensor 531 arranged corresponding to the movement button 53.
The cylindrical hand lever 5 is easy to hold, and placing the rotation button 52 and the movement button 53 on its side wall lets the operator press them with the fingers; this ergonomic design improves the user experience.
The intelligent glove 3 comprises fingerstalls that are straightened or bent to zoom the tumor image. Each fingerstall carries a bending sensor connected with the server 2, which acquires the fingerstall's bending data and sends it to the server 2; the server 2 zooms the tumor image according to the received bending data. The hand lever 5 is further provided with a zoom button 54 and a zoom pressure sensor 541 connected with the server 2. The zoom button 54 is arranged on the side wall of the hand lever 5 and activates the zoom function of the intelligent glove 3; the zoom pressure sensor 541 is arranged corresponding to the zoom button 54, acquires the pressure data on the zoom button 54 and sends it to the server 2, and the server 2 adjusts the zoom speed according to the received pressure data.
When the operator wearing the intelligent glove 3 bends a finger, the corresponding fingerstall bends with it; the bending sensor on that fingerstall detects the bending data and sends it to the server 2, which zooms the tumor image accordingly, letting the operator inspect image details better and faster. When the zoom button 54 is pressed, the zoom pressure sensor 541 detects the pressure data on the button and sends it to the server 2, which adjusts the zoom speed in proportion: the greater the pressure on the zoom button 54, the faster the zoom, which helps locate the lesion rapidly.
The hand lever 5 is also provided with a segmentation button 55 and a segmentation pressure sensor 551 connected with the server 2. The segmentation button 55 is arranged on the side wall of the hand lever 5 and activates the segmentation function of the intelligent glove 3; the segmentation pressure sensor 551 is arranged corresponding to the segmentation button 55, acquires the pressure data on the segmentation button 55 and sends it to the server 2, and the server 2 starts segmenting the tumor image upon receiving that pressure data.
When the segmentation button 55 is pressed and the segmentation pressure sensor 551 detects pressure on it, a segmentation signal is sent to the server 2. The server 2 then records the movement trajectory and area of the intelligent glove 3 from the acceleration and angular velocity fed back by the attitude sensor 36, stores them as a tumor marker region, and generates the three-dimensional tumor model from the tumor marker regions.
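The following Python sketch shows one way the server 2 could turn that sensor stream into a recorded trajectory. It is a hedged sketch: the patent specifies only that the trajectory is recorded from the acceleration and angular velocity fed back by attitude sensor 36, so the dead-reckoning integration, the sampling period dt and the data layout are all assumptions.

import numpy as np

def record_marker_trajectory(samples, dt=0.01):
    """Integrate glove acceleration into positions while the segmentation
    button is held; samples is an iterable of (accel_xyz, pressed) pairs."""
    velocity = np.zeros(3)
    position = np.zeros(3)
    trajectory = []
    for accel, pressed in samples:
        velocity += np.asarray(accel, dtype=float) * dt  # acceleration -> velocity
        position += velocity * dt                        # velocity -> position
        if pressed:                  # record only while the button is pressed
            trajectory.append(position.copy())
    return np.array(trajectory)     # the stored tumor marker trajectory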
The three-dimensional positioning device comprises an attitude sensor 36, wherein the attitude sensor 36 is arranged at the back of the hand of the intelligent glove 3 and is connected with the server 2.
Arranging the attitude sensor 36 at the back of the hand of the intelligent glove 3 allows it to detect the glove's various postures more accurately and to transmit the posture changes of the intelligent glove 3 to the server 2 in real time.
The intelligent glove 3 is provided with a main control module 37, which is respectively connected with the attitude sensor 36, the bending sensors, the rotation pressure sensor 521, the movement pressure sensor 531, the zoom pressure sensor 541, the segmentation pressure sensor 551 and the server 2.
In the present embodiment, the attitude sensor 36 includes a 3-axis acceleration sensor, a 3-axis gyroscope, and a 3-axis geomagnetic field sensor.
The fingerstalls of the intelligent glove 3 comprise a thumb stall, an index finger stall, a middle finger stall, a ring finger stall and a little finger stall, to which strip-shaped bending sensors are correspondingly bound: a thumb bending sensor 35, an index finger bending sensor 34, a middle finger bending sensor 33, a ring finger bending sensor 32 and a little finger bending sensor 31. When the zoom button 54 is pressed, the server 2 averages the bending data of the sensors on the fingerstalls, divides the average by the sensors' maximum bend, and enlarges the tumor image in proportion to the resulting value, so the operator can view the tumor image more clearly. After inspection, straightening the fingers straightens the bending sensors, shrinking the tumor image so its overall view can be checked quickly.
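A short Python sketch of this zoom computation, as described above: average the five bend readings, divide by the sensors' maximum bend, and scale the image by the resulting ratio. MAX_BEND and the 1x-3x zoom range are illustrative assumptions, since the patent gives only the ratio, not the units or range.

MAX_BEND = 180.0                 # assumed full-scale bend reading, in degrees
ZOOM_MIN, ZOOM_MAX = 1.0, 3.0    # assumed zoom range

def zoom_factor(bend_readings):
    """bend_readings: values for thumb, index, middle, ring, little finger."""
    ratio = (sum(bend_readings) / len(bend_readings)) / MAX_BEND  # 0..1
    return ZOOM_MIN + (ZOOM_MAX - ZOOM_MIN) * ratio

print(zoom_factor([0, 0, 0, 0, 0]))            # straight fingers: 1.0, full view
print(zoom_factor([170, 165, 175, 160, 150]))  # bent fingers: ~2.8, zoomed detail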
The main control module 37 is arranged at the wrist of the intelligent glove 3 and comprises a lithium battery that powers the intelligent glove 3, a central processing unit and a Bluetooth communication module. The central processing unit is connected with the hand lever 5 through a data line 4 and with the server 2 through the Bluetooth communication module. When the operator moves or rotates the intelligent glove 3, bends or straightens its fingerstalls, or presses any button on the hand lever 5, the sensors of the hand lever 5 and the intelligent glove 3 send their data to the main control module 37, whose Bluetooth communication module then transmits the sensor data to the server 2.
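The patent does not define a wire format for this Bluetooth link; the Python dataclass below is a hypothetical payload collecting the readings named in this description (the 9-axis attitude sensor 36, the five fingerstall bending sensors and the hand-lever button pressures) into one packet the main control module 37 might relay to the server 2. The field names and serialization are assumptions.

from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class GlovePacket:
    accel: Tuple[float, float, float]   # 3-axis accelerometer (attitude sensor 36)
    gyro: Tuple[float, float, float]    # 3-axis gyroscope
    mag: Tuple[float, float, float]     # 3-axis geomagnetic field sensor
    finger_bend: Tuple[float, float, float, float, float]  # sensors 35..31
    button_pressure: Dict[str, int] = field(default_factory=dict)  # buttons 51-55

def forward(packet: GlovePacket, send) -> None:
    """Relay one packet to the server; send() stands in for the Bluetooth link."""
    send(repr(packet).encode("utf-8"))  # placeholder serialization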
The visual terminal 1 is a display, a tablet computer or a smart phone.
The server 2 is a cloud server or a local computer.
The top end of the hand lever 5 is provided with a start button 51, which is connected with the main control module 37 and is used for starting the intelligent glove 3.
In this embodiment, the rotation button 52, the movement button 53, the zoom button 54 and the segmentation button 55 are arranged on the side wall of the hand lever 5 from top to bottom, so the operator can press them with the index, middle, ring and little fingers respectively. The start button 51 is at the top end of the hand lever 5, within easy reach of the thumb. A start pressure sensor 511 is arranged below the start button 51; when it detects pressure on the start button 51, the intelligent glove 3 is started.
As shown in fig. 5, a segmentation method using the tumor segmentation apparatus comprises the following steps:
acquiring gesture data and motion data of the intelligent glove 3, and rotating, moving or segmenting the tumor image according to the gesture data and motion data of the intelligent glove 3;
acquiring pressure data on the function buttons of the hand lever 5, and adjusting the speed of rotating and/or moving the tumor image according to the pressure data on the function buttons;
and generating a three-dimensional tumor model from the segmented tumor image and sending it to the visual terminal 1, so that the visual terminal 1 receives and displays the three-dimensional tumor model.
The operator's two hands operate the hand lever 5 and the intelligent glove 3 respectively: the tumor image is rotated, moved or segmented according to the gesture data and motion data of the intelligent glove 3, and the speed of rotating and/or moving the tumor image is adjusted according to the pressure data on the function buttons. Through this cooperation of the hand lever 5 and the intelligent glove 3, the lesion can be rapidly located and segmented and a three-dimensional tumor model generated, without training a segmentation model on massive tumor-region image data, so tumor segmentation efficiency is high.
The method further comprises receiving bending data from the fingerstalls of the intelligent glove 3 and zooming the tumor image according to the bending data of the fingerstalls.
The server 2 averages the bending data of the fingerstalls of the intelligent glove 3, divides the average by the sensors' maximum bend, and enlarges the tumor image in proportion to the result: the operator enlarges the tumor image by bending the fingerstalls of the intelligent glove 3 to view its details more clearly, and after observation straightens the fingers to straighten the bending sensors, shrinking the tumor image so its overall view can be observed.
From the segmented tumor images, a three-dimensional tumor model can be generated using existing algorithms. In this embodiment, the three-dimensional tumor model is generated from the segmented tumor image as follows:
the segmented tumor image is stored as a two-dimensional tumor marker region; using a marker migration method, the two-dimensional tumor marker regions from multiple viewing angles are each projected into a three-dimensional coordinate space to obtain multiple three-dimensional marker coordinates, and the three-dimensional tumor model is synthesized from these coordinates.
The working process of the invention is as follows:
as shown in fig. 4, after the visual terminal 1 opens the tumor image, the operator selects a three-dimensional view angle by controlling the hand lever 5 and the smart glove 3. Under the selected three-dimensional view angle, the operator presses the hand lever 5 to split the button 55, moves the intelligent glove 3, and the server 2 records the moving track and area of the intelligent glove 3 according to the moving acceleration and angular velocity of the intelligent glove 3 fed back by the gesture sensor 36 and stores the moving track and area as a tumor marking area.
As shown in fig. 6, after the tumor model is initialized, a viewing angle is selected and the two-dimensional tumor marker region at that angle is extracted. The image coordinates in the two-dimensional marker region are converted into the three-dimensional coordinate space according to the current viewing-angle coordinates, yielding a set of three-dimensional marker coordinates. This set is traversed, and for each coordinate the tumor model is checked: if the model does not yet contain it, the three-dimensional marker coordinate is added to the tumor model. The viewing angle is then changed and the next set of three-dimensional marker coordinates is processed, until no marked viewing angles remain; the three-dimensional marker coordinates accumulated in three-dimensional space form the three-dimensional tumor model, which is output to the visual terminal 1.
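The Python sketch below mirrors the fig. 6 loop under stated assumptions. The patent does not give the projection model, so the back-projection here (applying the inverse of a per-view rotation matrix to image coordinates at a marked depth) and the voxel rounding are illustrative; the duplicate check before adding a coordinate is the step fig. 6 does specify.

import numpy as np

def migrate_marks(views):
    """views: list of (rotation_matrix, marked_points) pairs, where
    marked_points are (u, v, depth) image coordinates of a 2D marker region.
    Returns the set of 3D marker coordinates forming the tumor model."""
    tumor_model = set()
    for rotation, marked_points in views:
        inv_rot = np.linalg.inv(rotation)        # view space -> world space
        for u, v, depth in marked_points:
            world = inv_rot @ np.array([u, v, depth], dtype=float)
            coord = tuple(np.round(world).astype(int))  # snap to a voxel grid
            if coord not in tumor_model:         # fig. 6: add only new coordinates
                tumor_model.add(coord)
    return tumor_model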
The invention addresses the current situation in which the tumor segmentation process is cumbersome, segmentation efficiency is low and tumor image manipulation is inflexible. It uses tumor marker information from multiple viewing angles to assist segmentation, improving segmentation efficiency, and manipulates the tumor image flexibly by capturing the pressing pressure of the operator's left hand together with the acceleration, angular velocity and attitude angle of the right hand's movement and rotation, thereby providing a tumor segmentation device based on multi-view marker migration.
Compared with the prior art, the invention has the following advantages:
1. Existing tumor image manipulation devices such as a mouse and keyboard can only rotate or move the tumor image on a two-dimensional plane. The proposed device manipulates the tumor image with the hand lever 5 and the intelligent glove 3 simultaneously, capturing the finger-press pressure data of the operator's left hand and the gesture data of the right hand to rotate, translate, zoom and segment the tumor image directly in three-dimensional space.
2. Existing devices generally rotate or move the tumor image at a constant speed. In the proposed device, the speed at which the server 2 rotates or translates the tumor image is proportional to the pressure data of the rotation pressure sensor 521 or the movement pressure sensor 531 when the rotation button 52 or the movement button 53 is pressed: the greater the pressure, the faster the image rotates or translates. By applying different pressures to the rotation button 52 and the movement button 53, the operator can adjust the rotation and translation speed individually, according to his or her own proficiency.
3. Non-machine-learning tumor segmentation methods require manually summarizing tumor image characteristics, for example distinguishing the tumor region from the background region by a segmentation threshold or by the image texture and color of the tumor region, a process that is complex and inconvenient. The proposed segmentation device based on multi-view marker migration needs no such summary of image characteristics: the operator marks the tumor region at the current viewing angle simply by moving the intelligent glove 3. Compared with machine-learning-based tumor segmentation, the proposed device directly constructs a three-dimensional tumor model from tumor images marked at only a few tens of viewing angles, eliminating the prior-art step in which a medical physicist reconstructs the three-dimensional tumor model from the marked regions, so tumor segmentation is completed more efficiently.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to them; any other change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the present invention is an equivalent replacement and is included in the protection scope of the present invention.

Claims (8)

1. A tumor segmentation apparatus, characterized in that: it comprises a hand lever, an intelligent glove, a server and a visual terminal;
the hand lever is provided with function buttons and function pressure sensors; the function buttons are connected with the intelligent glove and are used for activating the functions of the intelligent glove that rotate and move the tumor image; each function pressure sensor is arranged corresponding to a function button and is used for acquiring the pressure data on that button and sending the pressure data to the server;
the intelligent glove is used for rotating and moving the tumor image through changes in its posture and motion;
the intelligent glove is provided with a three-dimensional positioning device for acquiring the gesture data and motion data of the intelligent glove during its posture and motion changes and sending them to the server;
the server is used for receiving the gesture data and motion data of the intelligent glove and the pressure data on the function buttons, rotating, moving or segmenting the tumor image according to the gesture data and motion data, adjusting the speed of rotating or moving the tumor image according to the pressure data on the function buttons, generating a three-dimensional tumor model from the segmented tumor image and sending it to the visual terminal;
the visual terminal is used for receiving and displaying the three-dimensional tumor model sent by the server;
the hand lever is cylindrical; the function buttons comprise a rotation button and a movement button, which are respectively arranged on the side wall of the hand lever and respectively activate the rotation and movement functions of the intelligent glove; the function pressure sensors comprise a rotation pressure sensor arranged corresponding to the rotation button and a movement pressure sensor arranged corresponding to the movement button;
the intelligent glove is provided with a main control module; a start button is arranged at the top end of the hand lever, and the start button is connected with the main control module and is used for starting the intelligent glove.
2. A tumor segmentation apparatus according to claim 1, wherein: the intelligent glove comprises fingerstalls that are straightened or bent to zoom the tumor image; each fingerstall is provided with a bending sensor connected with the server, which acquires the fingerstall's bending data and sends it to the server, and the server zooms the tumor image according to the received bending data; the hand lever is further provided with a zoom button and a zoom pressure sensor connected with the server, the zoom button is arranged on the side wall of the hand lever and activates the zoom function of the intelligent glove, the zoom pressure sensor is arranged corresponding to the zoom button and acquires the pressure data on the zoom button and sends it to the server, and the server adjusts the zoom speed according to the received pressure data on the zoom button.
3. A tumor segmentation apparatus according to claim 1, wherein: the hand lever is further provided with a segmentation button and a segmentation pressure sensor connected with the server; the segmentation button is arranged on the side wall of the hand lever and activates the segmentation function of the intelligent glove, the segmentation pressure sensor is arranged corresponding to the segmentation button and acquires the pressure data on the segmentation button and sends it to the server, and the server starts segmenting the tumor image upon receiving pressure data on the segmentation button.
4. A tumor segmentation apparatus according to claim 1, wherein: the three-dimensional positioning device comprises an attitude sensor, which is arranged at the back of the hand of the intelligent glove and is connected with the server.
5. A tumor segmentation apparatus according to claim 1, wherein: the main control module is respectively connected with the attitude sensor, the bending sensors, the rotation pressure sensor, the movement pressure sensor, the zoom pressure sensor, the segmentation pressure sensor and the server.
6. A segmentation method using the tumor segmentation apparatus according to any one of claims 1-5, characterized in that it comprises the following steps:
acquiring gesture data and motion data of the intelligent glove, and rotating, moving or segmenting the tumor image according to the gesture data and motion data of the intelligent glove;
acquiring pressure data on the function buttons of the hand lever, and adjusting the speed of rotating and/or moving the tumor image according to the pressure data on the function buttons;
and generating a three-dimensional tumor model from the segmented tumor image and sending it to the visual terminal, so that the visual terminal receives and displays the three-dimensional tumor model.
7. A segmentation method according to claim 6, wherein: the method further comprises receiving bending data of the fingerstalls of the intelligent glove and zooming the tumor image according to the bending data of the fingerstalls.
8. A segmentation method according to claim 7, wherein: the three-dimensional tumor model is generated from the segmented tumor image as follows:
the segmented tumor image is stored as a two-dimensional tumor marker region; using a marker migration method, the two-dimensional tumor marker regions from multiple viewing angles are each projected into a three-dimensional coordinate space to obtain multiple three-dimensional marker coordinates, and the three-dimensional tumor model is synthesized from these three-dimensional marker coordinates.
CN202011341340.2A 2020-11-25 2020-11-25 Tumor segmentation device and segmentation method Active CN112382374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011341340.2A CN112382374B (en) 2020-11-25 2020-11-25 Tumor segmentation device and segmentation method


Publications (2)

Publication Number Publication Date
CN112382374A CN112382374A (en) 2021-02-19
CN112382374B (en) 2024-04-12

Family

ID=74588698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011341340.2A Active CN112382374B (en) 2020-11-25 2020-11-25 Tumor segmentation device and segmentation method

Country Status (1)

Country Link
CN (1) CN112382374B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018107679A1 (en) * 2016-12-12 2018-06-21 华为技术有限公司 Method and device for acquiring dynamic three-dimensional image
KR20190054223A (en) * 2017-11-13 2019-05-22 주식회사 휴먼인사이트 Three-axis sensor-based postural visualization and management system
CN110647939A (en) * 2019-09-24 2020-01-03 广州大学 Semi-supervised intelligent classification method and device, storage medium and terminal equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10216893B2 (en) * 2010-09-30 2019-02-26 Fitbit, Inc. Multimode sensor devices
US20170103160A1 (en) * 2015-10-12 2017-04-13 Milsco Manufacturing Company, A Unit Of Jason Incorporated Customer Comfort Optimization Method, Apparatus, and System


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Development of a virtual cystoscopy system based on a medical image processing platform; 石宇强, 刘岩, 徐桓, 张曦, 杜鹏, 卢虹冰, 刘洋, 徐肖攀; 中国医学装备 (China Medical Equipment); 2018-07-10 (No. 07); pp. 42-46 *


Similar Documents

Publication Publication Date Title
EP2755194B1 (en) 3d virtual training system and method
CN105074617B (en) Three-dimensional user interface device and three-dimensional manipulating processing method
Song et al. WYSIWYF: exploring and annotating volume data with a tangible handheld device
CN107106245B (en) Interaction between user interface and master controller
US6336052B1 (en) Data acquistion image analysis image manipulation interface
US20050251290A1 (en) Method and a system for programming an industrial robot
JP2001522098A (en) Image processing method and apparatus
JP2011110620A (en) Method of controlling action of robot, and robot system
Bornik et al. A hybrid user interface for manipulation of volumetric medical data
CN111639531A (en) Medical model interaction visualization method and system based on gesture recognition
CN113672099A (en) Electronic equipment and interaction method thereof
CN114706490A (en) Mouse model mapping method, device, equipment and storage medium
CN113786152B (en) Endoscope lens tracking method and endoscope system
CN115328304A (en) 2D-3D fused virtual reality interaction method and device
CN112382374B (en) Tumor segmentation device and segmentation method
CN109102571B (en) Virtual image control method, device, equipment and storage medium thereof
JP6924285B2 (en) Information processing device
CN213935663U (en) Tumor segmentation device
KR101903996B1 (en) Method of simulating medical image and device thereof
CN102629155A (en) Method and device for implementing non-contact operation
KR101467218B1 (en) Method for implementing interface showing information on golf swing and recording medium for recording the same readable by computing device
CN108205373B (en) Interaction method and system
CN111580677A (en) Man-machine interaction method and man-machine interaction system
CN106991398B (en) Gesture recognition method based on image recognition and matched with graphical gloves
KR20200073031A (en) 3D Hand Model Manufacturing Method for Hand Motion Tracking having High Accuracy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant