CN112906205A - Virtual learning method for total hip replacement surgery - Google Patents

Virtual learning method for total hip replacement surgery

Info

Publication number
CN112906205A
CN112906205A (application CN202110126745.2A)
Authority
CN
China
Prior art keywords
virtual
force feedback
equipment
training
learning method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110126745.2A
Other languages
Chinese (zh)
Other versions
CN112906205B (en)
Inventor
张日威 (Zhang Riwei)
何燕 (He Yan)
蔡述庭 (Cai Shuting)
熊晓明 (Xiong Xiaoming)
郭靖 (Guo Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202110126745.2A priority Critical patent/CN112906205B/en
Publication of CN112906205A publication Critical patent/CN112906205A/en
Application granted granted Critical
Publication of CN112906205B publication Critical patent/CN112906205B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/20: Design optimisation, verification or simulation
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00: Simulators for teaching or training purposes

Abstract

The invention discloses a virtual learning method for total hip replacement surgery. First, the system is initialized: a communication connection is established between the input end and the controller, and the mapping between the force feedback device and the controller is completed in a real-time virtual scene. During operation, the input end interacts with the surgical instrument model in the virtual scene through three-dimensional graphic input control, while the operator follows the training picture through timely image feedback and receives prompts of key information throughout the training process. The invention provides vivid and intuitive perception, improves the understanding of abstract principles, eases the strain on scarce and expensive medical and surgical resources across China's regions, and reduces surgical risk.

Description

Virtual learning method for total hip replacement surgery
Technical Field
The invention relates to the technical field of total hip replacement, in particular to a virtual learning method for total hip replacement surgery.
Background
Surgery has progressed through three stages: open surgery, manual minimally invasive surgery, and robot-assisted surgery, and it continues to move toward more precise and intelligent approaches.
In open surgery, the traditional approach, bones and internal organs are exposed through an incision from the body surface. Its advantage is that the surgeon can directly see the anatomy of the lesion area with the naked eye and therefore has a firm grasp of the operation. Its disadvantages are large wounds, severe pain, heavy bleeding, long hospital stays and slow wound healing, which may increase the risk of postoperative complications. Minimally invasive surgery is performed mainly with modern medical instruments such as laparoscopes and thoracoscopes and related equipment. With small wounds, little blood loss and short hospital stays, it is widely used in surgical treatment. However, manual minimally invasive surgery suffers from low positioning accuracy, confined working space, intraoperative fatigue, and hand tremor caused by long operating hours, so robot-assisted surgery has become one of the leading international research topics.
With the development of automation and robotics, surgical robots have begun to penetrate every link of surgical planning, minimally invasive positioning and non-invasive treatment. A surgical robot filters out hand tremor and scales the surgeon's hand motion down to finer movements, controlling a manipulator to complete a series of surgical operations and improving the surgeon's accuracy. In robot-assisted surgery, however, the operating habits differ from those of traditional surgery: the master-slave control mode is not as flexible and convenient as purely manual operation, so a physician's operating skills must be trained repeatedly before the robot can be controlled proficiently. Common training modes are human-computer interaction and virtual learning. In the human-computer interaction mode, a joystick is directly manipulated to make the robot complete specified actions; this is costly, and a single system cannot accommodate the diversity of surgical objects, so it cannot be widely used. A virtual learning system instead builds proficiency in advance and then transitions to the human-computer interaction mode for subsequent operations. Based on computer hardware and software and assisted by related technical means, a virtual learning system is an advanced computer application that lets people genuinely experience a simulation of a known or unknown world. A virtual trainer provides interactive three-dimensional dynamic visualization with multi-source information fusion and system simulation of entity behavior, which is of great significance for surgeons' surgical training, prediction of surgical outcomes, formulation of surgical plans and navigation.
The current prior art includes:
1) A digital virtual-simulation dental training method for oral clinical skill training. It evaluates the time required to complete an operation in the system, the degree of target completion, error scores and a comprehensive score, and is a novel and effective oral experimental teaching method.
2) A virtual learning method for mandibular surgery, which supports grasping, cutting, drilling, suturing and other operating modes at the user's operating end. For cutting and drilling, the system renders the optimal planned path in the virtual scene in real time and displays the procedure's progress as a percentage. When the error rate exceeds 10%, the cutting line is tracked in time and failure is declared.
3) A haptic-feedback-based learning method for myocardial cell surgery, which plays a key role in training a surgeon's hand-eye coordination and the ability to execute high-precision injection tasks.
4) A virtual orthopedic surgery simulator learning method, which enables a surgeon to train for fracture replacement surgery on a personal computer without any expensive hardware.
However, no virtual learning method currently exists for total hip replacement surgery under the master-slave control mode, so in a real operation the surgery may be declared a failure as soon as an error occurs.
Disclosure of Invention
The present invention aims to overcome the above problems in the prior art by providing a virtual learning method for total hip replacement surgery.
To achieve the above purpose, the technical solution provided by the invention is as follows:
A virtual learning method for total hip replacement surgery: first, the system is initialized, a communication connection is established between the input end and the controller, and the mapping between the force feedback device and the controller is completed in a real-time virtual scene; during operation, the input end interacts with the surgical instrument model in the virtual scene through three-dimensional graphic input control, while the operator follows the training picture through timely image feedback and receives prompts of key information throughout the training process.
Further, the specific process is as follows:
S1, connecting the force feedback device and the controller;
S2, installing the "Unity 5 Haptic Plugin for Geomagic OpenHaptics 3.3" in Unity 5.2.3 or above to communicate with the force feedback device;
S3, calling the programming interface of the "Haptic Plugin for Geomagic OpenHaptics 3.3" to acquire the relevant parameters of the force feedback device;
S4, realizing real-time interaction between the force feedback device and the virtual device set up in Unity through 3D graphic input;
S5, selecting the attitude-angle input mode to complete data communication with the device;
S6, initializing the working space and working mode of the plugin;
S7, continuously updating the working space according to the camera position, completing real-time synchronization between the virtual force feedback device and the acquired values;
S8, establishing a bone model in the virtual scene;
S9, creating the surgical scene and surgical instruments in the virtual scene;
S10, setting virtual key points on the training object model;
S11, designing the image feedback information;
S12, designing the camera focusing function;
S13, training and evaluation.
Further, in step S3, the programming interface is divided into the Haptic Device API and the Haptic Library API;
the Haptic Device API provides the device's low-level interface, through which real-time parameters of each device in the device state query table are accessed directly via callback functions;
the Haptic Library API provides the upper-level interface; during operation, the host computer program obtains the force feedback device's information, including position, attitude, joint angles, end velocity and joint angular velocities, at a frequency of 1000 Hz through the programming interface and continuously sends it to the controller to realize closed-loop control.
Further, in step S5, in the communication process based on 3D graphic input and force feedback, device initialization is completed through the plugin's "hdInitDevice" function, whose default parameter is "HD_DEFAULT_DEVICE"; the current device is then found with the "hdGetCurrentDevice" function, and device parameters are read with the "hdGetDoublev" callback function, including the current translational position, the current translational velocity and acceleration, and the current attitude angle; "hdGetDoublev" is called in the form "(parameter name, return value type)"; "hdBeginFrame" and "hdEndFrame" are then used as the start and end points of data access to guarantee access consistency, so that the data of the force feedback device and the controller are updated synchronously; in testing, the translational position, velocity and acceleration are read directly through the access interface provided by the Haptic Device API.
Further, for the acquisition of attitude-angle information, a 16-element end attitude array is first obtained through the Haptic Device API access interface and then converted into the velocity of each axis in the equivalent axis-angle coordinate system; written in the equivalent axis-angle representation, the 16-element end attitude array is the homogeneous matrix of formula (1):

\[
R_K(\theta)=
\begin{bmatrix}
k_x^2 v_\theta + c_\theta & k_x k_y v_\theta - k_z s_\theta & k_x k_z v_\theta + k_y s_\theta & 0 \\
k_x k_y v_\theta + k_z s_\theta & k_y^2 v_\theta + c_\theta & k_y k_z v_\theta - k_x s_\theta & 0 \\
k_x k_z v_\theta - k_y s_\theta & k_y k_z v_\theta + k_x s_\theta & k_z^2 v_\theta + c_\theta & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{1}
\]

where \(c_\theta=\cos\theta\), \(s_\theta=\sin\theta\) and \(v_\theta=1-\cos\theta\); \(R_K(\theta)\) is the 4×4 homogeneous rotation matrix whose entries are the "16 elements"; \(K=[k_x\ k_y\ k_z]^T\) is the equivalent rotation axis of the current attitude relative to the base coordinate system; and \(\theta\) is the corresponding rotation angle about that axis; the velocity of the attitude angle in the x, y and z directions is obtained approximately by differencing;

formulas (2) and (3) can be derived from formula (1), with \(r_{ij}\) denoting the entries of the rotation part of \(R_K(\theta)\):

\[
\theta=\arccos\!\left(\frac{r_{11}+r_{22}+r_{33}-1}{2}\right)
\tag{2}
\]

\[
K=\frac{1}{2\sin\theta}
\begin{bmatrix}
r_{32}-r_{23} \\
r_{13}-r_{31} \\
r_{21}-r_{12}
\end{bmatrix}
\tag{3}
\]
further, the step S6 is specifically as follows:
approximating the force feedback device in the real scene activity space as a cube, converting its size data from the "float 3" array to "IntPtr" via the "convertfoat 3to IntPtr" instruction in "sethapticWorkSpace";
then, in the GetHapticWorkSpace, utilizing ConvertetIntPtrToFloat 3to convert the IntPtr into a float3 array in the Unity editor so as to determine the space size;
next, updating a working space by using an 'UpdateHapticWorkspace' function in the plug-in and setting a training interaction mode by using an 'indica mode' function according to the position of the camera;
next, in Unity3D, the created object state is set to "Touchable Face", which is any one of "front", "back", "front and back";
in the process of setting force feedback, relevant attributes including amplitude, duration and gain are created and set in scripts of ' Environment stability ', ' vision ', spring effect ' and ' learning effect '; setting an object requires acquiring all object arrays with "desk" tags, acquiring the grid attributes of the object, then drawing a geometric body, and reading the characteristics of the geometric body to start force feedback events of all different objects.
Further, in step S8 a bone model is established in the virtual scene, in two steps: CT image segmentation and three-dimensional modeling;
the CT image segmentation is based on Mimics' threshold method, with region growing used to complete the image segmentation of the bone CT data; after the patient's CT atlas is imported into the Mimics interface, a threshold is selected for binarization, retaining the pixels of the CT atlas whose gray values lie within the threshold range; threshold segmentation mainly uses the difference in gray-level characteristics between the target region to be extracted and its background to divide the CT image into a target region and a background region, generating the corresponding binary image; the binary image is then divided into blocks with the region-growing tool, and floating pixels are removed.
Further, in step S9, the surgical environment is inserted in the form of textures, and the force feedback model is assembled from the simple solid models built into Unity3D; a main Camera is created in the "Hierarchy" panel to render the running picture seen during surgical training, a "Skybox" component is added in the "Inspector" to design the texture style, and "Clear Flags" in the Camera component is switched to "Skybox" mode; the surgical environment is a three-dimensional space stitched together from six textures (front, back, left, right, top and bottom), and the virtual surgical tool is assembled from a sphere and a capsule, with the center of the sphere serving as the positioning coordinate of the whole tool.
Further, in step S12, a script created on the Camera sets the "Field of view" property of the Camera component, and focusing is performed with the mouse buttons during operation.
Further, in step S13, the experiment proceeds in two main steps: while the Game panel is running, focusing is performed first to switch the training picture to the angle most comfortable for the operator; the force feedback device is then controlled to approach the key points at the correct angle until both key points turn blue; the operator's training familiarity is assessed by the time spent on the Game panel during training.
Compared with the prior art, the principles and advantages of this solution are as follows:
(1) Through the "OpenHaptics" toolkit, developers can apply force and tactile feedback devices in a wide range of fields, such as surgical simulation and medical training, aerospace and military training, assistance for blind or visually impaired people, and game entertainment.
(2) The coordinated operation of the force feedback device and the virtual device is completed based on 3D graphic input; compared with keyboard character input, the input signal of the force feedback device can be collected flexibly and the interaction completed rapidly.
(3) The attitude-angle input mode is selected to complete data communication with the device; compared with the translation input mode, it makes full use of the force feedback device's six degrees of freedom to complete data input.
(4) The texture of the surgical instruments and the feel of the virtual objects can be simulated in the learning system, so the surgical scene is experienced more realistically and vividly.
(5) The virtual femoral head model is built strictly according to the patient's CT images, and the key point information on the bone is set according to the requirements of current total hip replacement surgery, so the whole training model is realistic.
(6) The training scene can be oriented to diverse surgical environments, and the training complexity can easily be set in Unity according to the difficulty of the actual procedure. The real-time display of time and distance in the image feedback link gives the user path guidance during operation to help complete the training.
(7) Camera focusing is performed in the virtual scene with the mouse buttons, so scene information can be switched easily and as desired during training.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a virtual learning system for total hip replacement surgery;
FIG. 2 is a schematic view of a virtual bone model scene based on mimics;
fig. 3 is a schematic view of a virtual simulation training scenario.
Detailed Description
The invention will be further illustrated with reference to specific examples:
referring to fig. 1 to 3, a virtual learning method for total hip replacement surgery includes initializing a system, establishing a communication connection between an input end and a controller, and completing mapping between a force feedback device and the controller in a real-time virtual scene; in the operation process, the input end interacts with the surgical instrument model in the virtual scene in a three-dimensional graphic input control mode, and simultaneously, an operator can see a training picture in time through image feedback and is provided with key information prompts in the training process.
The communication between the input end and the controller and the mapping to the virtual scene mainly use the OpenHaptics interface to access the data of the force feedback device, package them into a plug-in and pass them to the Unity3D thread, thereby completing the Unity interface control.
The specific process is as follows:
s1, connecting the force feedback device and the controller; the provided USB interface is used to connect the force feedback device and the controller, and then a power cord connection socket is powered with a connector on the back of the force feedback device. After the connection is completed, if the blue light emitting diode rapidly flickers twice, the controller is successfully connected with the force feedback device.
S2, installing the "Unity 5 Haptic Plugin for Geomagic OpenHaptics 3.3" in Unity 5.2.3 or above to communicate with the force feedback device.
S3, calling the programming interface of the "Haptic Plugin for Geomagic OpenHaptics 3.3" to acquire the relevant parameters of the force feedback device. The programming interface is divided into the Haptic Device API (HDAPI) and the Haptic Library API (HLAPI).
The HDAPI provides the device's low-level interface; real-time parameters of each device in the device state query table can be accessed directly through callback functions.
The HLAPI provides an upper-level interface for programmers familiar with OpenGL. During operation, the host computer program obtains the force feedback device's position, attitude, joint angles, end velocity, joint angular velocities and other information at a frequency of 1000 Hz through the programming interface and continuously sends it to the controller to realize closed-loop control.
S4, realizing real-time interaction between the force feedback device and the virtual device set up in Unity through 3D graphic input.
S5, selecting the attitude-angle input mode to complete data communication with the device. Specifically, in the communication process based on 3D graphic input and force feedback, device initialization is completed through the plugin's "hdInitDevice" function, whose default parameter is "HD_DEFAULT_DEVICE"; the current device is then found with the "hdGetCurrentDevice" function, and device parameters are read with the "hdGetDoublev" callback function, including the current translational position, the current translational velocity and acceleration, and the current attitude angle; "hdGetDoublev" is called in the form "(parameter name, return value type)"; "hdBeginFrame" and "hdEndFrame" are then used as the start and end points of data access to guarantee access consistency, so that the data of the force feedback device and the controller are updated synchronously. In testing, the translational position, velocity and acceleration are read directly through the access interface provided by the Haptic Device API.
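For illustration only, the following C# sketch shows how this polling might look on the Unity side, assuming the plugin exposes the HDAPI entry points named above through P/Invoke; the native library name "HapticsDirect" and the numeric values of the HD_* query constants are placeholders, not the plugin's documented values:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

// Hypothetical C# bindings for the HDAPI calls named above.
public static class HapticNative
{
    const string Lib = "HapticsDirect";              // assumed native library name
    public const int HD_CURRENT_POSITION  = 0x2050;  // placeholder value (see hd.h)
    public const int HD_CURRENT_VELOCITY  = 0x2051;  // placeholder value (see hd.h)
    public const int HD_CURRENT_TRANSFORM = 0x2052;  // placeholder value (see hd.h)

    [DllImport(Lib)] public static extern uint hdInitDevice(string configName);
    [DllImport(Lib)] public static extern uint hdGetCurrentDevice();
    [DllImport(Lib)] public static extern void hdBeginFrame(uint device);
    [DllImport(Lib)] public static extern void hdEndFrame(uint device);
    [DllImport(Lib)] public static extern void hdGetDoublev(int pname, double[] values);
}

public class HapticPoller : MonoBehaviour
{
    uint device;
    readonly double[] position = new double[3];   // current translation
    readonly double[] velocity = new double[3];   // current translational velocity
    readonly double[] pose     = new double[16];  // 16-element end attitude array

    void Start()
    {
        // A null config name stands in for the HD_DEFAULT_DEVICE default parameter.
        device = HapticNative.hdInitDevice(null);
    }

    void Update()
    {
        // hdBeginFrame/hdEndFrame bracket one consistent snapshot of device state.
        HapticNative.hdBeginFrame(device);
        HapticNative.hdGetDoublev(HapticNative.HD_CURRENT_POSITION, position);
        HapticNative.hdGetDoublev(HapticNative.HD_CURRENT_VELOCITY, velocity);
        HapticNative.hdGetDoublev(HapticNative.HD_CURRENT_TRANSFORM, pose);
        HapticNative.hdEndFrame(device);
    }
}
```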
For the acquisition of attitude-angle information, a 16-element end attitude array is first obtained through the Haptic Device API access interface and then converted into the velocity of each axis in the equivalent axis-angle coordinate system; written in the equivalent axis-angle representation, the 16-element end attitude array is the homogeneous matrix of formula (1):

\[
R_K(\theta)=
\begin{bmatrix}
k_x^2 v_\theta + c_\theta & k_x k_y v_\theta - k_z s_\theta & k_x k_z v_\theta + k_y s_\theta & 0 \\
k_x k_y v_\theta + k_z s_\theta & k_y^2 v_\theta + c_\theta & k_y k_z v_\theta - k_x s_\theta & 0 \\
k_x k_z v_\theta - k_y s_\theta & k_y k_z v_\theta + k_x s_\theta & k_z^2 v_\theta + c_\theta & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{1}
\]

where \(c_\theta=\cos\theta\), \(s_\theta=\sin\theta\) and \(v_\theta=1-\cos\theta\); \(R_K(\theta)\) is the 4×4 homogeneous rotation matrix whose entries are the "16 elements"; \(K=[k_x\ k_y\ k_z]^T\) is the equivalent rotation axis of the current attitude relative to the base coordinate system; and \(\theta\) is the corresponding rotation angle about that axis; the velocity of the attitude angle in the x, y and z directions is obtained approximately by differencing.

Formulas (2) and (3) can be derived from formula (1), with \(r_{ij}\) denoting the entries of the rotation part of \(R_K(\theta)\):

\[
\theta=\arccos\!\left(\frac{r_{11}+r_{22}+r_{33}-1}{2}\right)
\tag{2}
\]

\[
K=\frac{1}{2\sin\theta}
\begin{bmatrix}
r_{32}-r_{23} \\
r_{13}-r_{31} \\
r_{21}-r_{12}
\end{bmatrix}
\tag{3}
\]
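A minimal C# sketch of this conversion, applying formulas (2) and (3) and then differencing successive attitude angles; the row-major layout of the 16-element array is an assumption:

```csharp
using UnityEngine;

public static class AttitudeMath
{
    // pose is the 16-element attitude array; R(i, j) indexes its 3x3 rotation part.
    static double R(double[] pose, int i, int j) => pose[4 * i + j];

    // Recover the equivalent axis K and angle theta per formulas (2) and (3).
    public static void ToAxisAngle(double[] pose, out Vector3 k, out float theta)
    {
        double trace = R(pose, 0, 0) + R(pose, 1, 1) + R(pose, 2, 2);
        theta = Mathf.Acos(Mathf.Clamp((float)((trace - 1.0) / 2.0), -1f, 1f)); // formula (2)
        float s = 2f * Mathf.Sin(theta);
        if (Mathf.Abs(s) < 1e-6f) { k = Vector3.zero; return; } // near-singular: theta ~ 0 or pi
        k = new Vector3(                                         // formula (3)
            (float)(R(pose, 2, 1) - R(pose, 1, 2)),
            (float)(R(pose, 0, 2) - R(pose, 2, 0)),
            (float)(R(pose, 1, 0) - R(pose, 0, 1))) / s;
    }

    // Difference quotient approximating the x/y/z attitude-angle velocity.
    public static Vector3 Rate(Vector3 prev, Vector3 curr, float dt) => (curr - prev) / dt;
}
```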
s6, initializing the working space and the working mode of the plug-in, wherein the specific process is as follows:
approximating the activity space of the force feedback device in a real scene to a cube form, converting the size data (for example, the length, width and height are both 20cm) of the force feedback device from a "float 3" array to "IntPtr" through a "convertfoat 3to IntPtr" instruction in "SetHapticWorkSpace";
then, in the GetHapticWorkSpace, utilizing ConvertetIntPtrToFloat 3to convert the IntPtr into a float3 array in the Unity editor so as to determine the space size;
next, updating a working space by using an 'UpdateHapticWorkspace' function in the plug-in and setting a training interaction mode by using an 'indica mode' function according to the position of the camera;
next, in Unity3D, the created object state is set to "Touchable Face", which is any one of "front", "back", "front and back";
in the process of setting force feedback, relevant attributes including amplitude, duration and gain are created and set in scripts of ' Environment stability ', ' vision ', spring effect ' and ' learning effect '; setting an object requires acquiring all object arrays with "desk" tags, acquiring the grid attributes of the object, then drawing a geometric body, and reading the characteristics of the geometric body to start force feedback events of all different objects.
S7, continuously updating the working space in the "void Update" loop: the working space is updated continuously according to the camera position, completing real-time synchronization between the virtual force feedback device and the acquired values.
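A sketch of steps S6 and S7 together, assuming the plugin exposes the workspace functions named above to C#; the entry-point signatures and the "HapticsDirect" library name are illustrative:

```csharp
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class HapticWorkspace : MonoBehaviour
{
    // Assumed plugin entry points (names from the description, signatures guessed).
    [DllImport("HapticsDirect")] static extern void SetHapticWorkSpace(IntPtr size);
    [DllImport("HapticsDirect")] static extern void UpdateHapticWorkSpace();

    // Mirrors "ConvertFloat3ToIntPtr": copy a float3 into unmanaged memory.
    static IntPtr ConvertFloat3ToIntPtr(Vector3 v)
    {
        IntPtr p = Marshal.AllocHGlobal(3 * sizeof(float));
        Marshal.Copy(new[] { v.x, v.y, v.z }, 0, p, 3);
        return p;
    }

    void Start()
    {
        // S6: approximate the device's activity space as a 20 cm cube.
        IntPtr size = ConvertFloat3ToIntPtr(new Vector3(0.2f, 0.2f, 0.2f));
        SetHapticWorkSpace(size);
        Marshal.FreeHGlobal(size);
    }

    void Update()
    {
        // S7: keep remapping the workspace to follow the camera every frame.
        UpdateHapticWorkSpace();
    }
}
```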
S8, establishing a bone model in the virtual scene, in two steps: CT image segmentation and three-dimensional modeling.
The CT image segmentation is based on Mimics' threshold method, with region growing used to complete the image segmentation of the bone CT data. After the patient's CT atlas is imported into the Mimics interface, a threshold is selected for binarization, retaining the pixels of the CT atlas whose gray values lie within the threshold range; threshold segmentation mainly uses the difference in gray-level characteristics between the target region to be extracted and its background to divide the CT image into a target region and a background region, generating the corresponding binary image; the binary image is then divided into blocks with the region-growing tool, and floating pixels are removed.
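Conceptually, the threshold step (performed inside Mimics rather than hand-coded in this system) amounts to the following sketch:

```csharp
public static class CtSegmentation
{
    // Keep the CT pixels whose gray value lies inside [lower, upper]; the
    // result is the binary image that the region-growing tool then splits
    // into connected blocks, discarding floating pixels.
    public static byte[,] ThresholdSlice(short[,] ct, short lower, short upper)
    {
        int rows = ct.GetLength(0), cols = ct.GetLength(1);
        var mask = new byte[rows, cols];
        for (int y = 0; y < rows; y++)
            for (int x = 0; x < cols; x++)
                mask[y, x] = (ct[y, x] >= lower && ct[y, x] <= upper) ? (byte)1 : (byte)0;
        return mask;
    }
}
```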
The segmented sequential CT images are then computed to recover the three-dimensional structure. Common three-dimensional reconstruction algorithms fall into two major categories: surface rendering and volume rendering. Surface rendering extracts the contours of one or more tissues of interest from the images and, by means of intermediate geometric primitives and an algorithm, generates a mesh model for subsequent processing. Surface rendering algorithms mainly include tomographic contour reconstruction and marching cubes, with contour reconstruction the classic and most widely applied. Volume rendering directly resamples the acquired voxels through an algorithm and then completes the three-dimensional reconstruction; common volume rendering methods include ray casting, splatting, shear-warp and texture mapping. The advantage of volume rendering is that it reproduces every detail of the three-dimensional model, with high image quality and convenient parallel processing; its disadvantages are the large volume of data to process, long computation time and complex algorithms, which reduce reconstruction efficiency. Surface rendering processes only a small part of the data, so it is fast; but it severs the connection between a structure's contour and the whole, and key point information may be lost. The virtual learning system of the invention has high requirements on the details of three-dimensional objects, so the bone model is built by volume rendering based on ray casting. The training object model of this patent mainly consists of a virtual bone model created with Mimics and a semi-transparent human body model created with SolidWorks. Both models are saved in "stl" format, converted to "obj" format in 3D Builder, and imported into the Unity3D scene.
S9, creating the surgical scene and surgical instruments in the virtual scene;
the surgical environment is inserted in a chartlet form, and the force feedback model is formed by combining simple three-dimensional models carried in Unity 3D; creating a main Camera in a 'Hierarchy' panel for shooting a running face painting seen in the operation training process, adding a 'Skybox' component in an 'observer' to design a chartlet style, and switching 'Clear Flags' in a Camera component to a 'Skybox' mode; the operation environment is a three-dimensional space formed by splicing front, back, left, right, upper and lower six sticking pictures, the virtual operation tool is formed by assembling a sphere and a capsule, and the circle center of the sphere is used as the positioning coordinate of the whole operation tool.
S10, setting virtual key points on the training object model;
In total hip replacement surgery, a metal acetabular cup usually must be attached to the patient's acetabulum and fixed with 2-3 screws to mount the femoral head prosthesis. The positions of the acetabular cup and the screws are the key to the whole operation and directly determine its success or failure. According to surgical experience, a Kirschner wire is usually inserted vertically about 2 cm above the acetabulum in its 12 o'clock direction (where the bone plate is thicker and the wire is less likely to shift) to locate the acetabular cup, so this embodiment designs two red virtual key points to simulate the Kirschner wire positioning. Because such a point is small and hard to observe in the Game panel, it is simulated by a sphere of radius 1 cm centered on the point, the center's coordinates being the key point's position coordinates.
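A sketch of such a key point sphere, assuming one Unity unit corresponds to one meter:

```csharp
using UnityEngine;

public static class KeyPointFactory
{
    // Create one red virtual key point: a sphere of radius 1 cm whose center
    // coordinates are the key point's position coordinates.
    public static GameObject Create(Vector3 position)
    {
        var kp = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        kp.transform.position = position;
        kp.transform.localScale = Vector3.one * 0.02f;  // 2 cm diameter = 1 cm radius
        kp.GetComponent<Renderer>().material.color = Color.red;
        return kp;
    }
}
```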
S11, designing image feedback information;
the image feedback information consists of a time display of the operation and the real-time distance of the surgical tool to the key point. The time display is updated in "fixedupdate" at a fixed frequency and the integer form is converted into the form of components and seconds; when the time exceeds 60 seconds, the color is changed from white to red to remind the user that the time is beyond expectation. The distance real-time prompt is obtained by the difference value of the Position coordinates of the surgical tool and the key point in the "Position" in the Transform "in the moving process. When the distance between the virtual surgical instrument and the coordinate of the circle center is less than 3cm, the displayed numerical value is changed from red to green to prompt the operator to reach the target immediately; when the distance is less than 1cm and the surgical instrument angle satisfies the abduction angle of 45 degrees and the anteversion angle of 15 degrees, changing the color of the sphere from red to blue indicates that the target position has been reached.
S12, designing the camera focusing function: since the scene in the Game panel is not the best observation position for training, a new script on the Camera sets the "Field of view" of the Camera component, and focusing is performed with the mouse buttons during operation.
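A minimal sketch of such a focusing script; the mapping of zoom-in/zoom-out to the left and right mouse buttons is an assumption:

```csharp
using UnityEngine;

// Holding a mouse button narrows or widens the Camera's "Field of view".
public class CameraFocus : MonoBehaviour
{
    public float minFov = 20f, maxFov = 60f, zoomSpeed = 30f;
    Camera cam;

    void Start() { cam = GetComponent<Camera>(); }

    void Update()
    {
        if (Input.GetMouseButton(0))       // left button: zoom in
            cam.fieldOfView = Mathf.Max(minFov, cam.fieldOfView - zoomSpeed * Time.deltaTime);
        else if (Input.GetMouseButton(1))  // right button: zoom out
            cam.fieldOfView = Mathf.Min(maxFov, cam.fieldOfView + zoomSpeed * Time.deltaTime);
    }
}
```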
S13, starting training and evaluation: the experiment proceeds in two main steps. While the Game panel is running, focusing is performed first to switch the training picture to the angle most comfortable for the operator; the force feedback device is then controlled to approach the key points at the correct angle until both key points turn blue. The operator's training familiarity is assessed by the time spent on the Game panel during training.
The above-described embodiments are merely preferred embodiments of the present invention, and the scope of the present invention is not limited to them; variations made on the basis of the shape and principle of the present invention shall also fall within the protection scope of the present invention.

Claims (10)

1. A virtual learning method for total hip replacement surgery, characterized in that: first, the system is initialized, a communication connection is established between the input end and the controller, and the mapping between the force feedback device and the controller is completed in a real-time virtual scene; during operation, the input end interacts with the surgical instrument model in the virtual scene through three-dimensional graphic input control, while the operator follows the training picture through timely image feedback and receives prompts of key information throughout the training process.
2. The virtual learning method for total hip replacement surgery as claimed in claim 1, wherein the specific process is as follows:
S1, connecting the force feedback device and the controller;
S2, installing the "Unity 5 Haptic Plugin for Geomagic OpenHaptics 3.3" in Unity 5.2.3 or above to communicate with the force feedback device;
S3, calling the programming interface of the "Haptic Plugin for Geomagic OpenHaptics 3.3" to acquire the relevant parameters of the force feedback device;
S4, realizing real-time interaction between the force feedback device and the virtual device set up in Unity through 3D graphic input;
S5, selecting the attitude-angle input mode to complete data communication with the device;
S6, initializing the working space and working mode of the plugin;
S7, continuously updating the working space according to the camera position, completing real-time synchronization between the virtual force feedback device and the acquired values;
S8, establishing a bone model in the virtual scene;
S9, creating the surgical scene and surgical instruments in the virtual scene;
S10, setting virtual key points on the training object model;
S11, designing the image feedback information;
S12, designing the camera focusing function;
S13, training and evaluation.
3. The virtual learning method for total hip replacement surgery as claimed in claim 2, wherein in step S3 the programming interface is divided into the Haptic Device API and the Haptic Library API;
the Haptic Device API provides the device's low-level interface, through which real-time parameters of each device in the device state query table are accessed directly via callback functions;
the Haptic Library API provides the upper-level interface; during operation, the host computer program obtains the force feedback device's information, including position, attitude, joint angles, end velocity and joint angular velocities, at a frequency of 1000 Hz through the programming interface and continuously sends it to the controller to realize closed-loop control.
4. The virtual learning method for total hip replacement surgery as claimed in claim 2, wherein in step S5, in the communication process based on 3D graphic input and force feedback, device initialization is completed through the plugin's "hdInitDevice" function, whose default parameter is "HD_DEFAULT_DEVICE"; the current device is then found with the "hdGetCurrentDevice" function, and device parameters are read with the "hdGetDoublev" callback function, including the current translational position, the current translational velocity and acceleration, and the current attitude angle; "hdGetDoublev" is called in the form "(parameter name, return value type)"; "hdBeginFrame" and "hdEndFrame" are then used as the start and end points of data access to guarantee access consistency, so that the data of the force feedback device and the controller are updated synchronously; in testing, the translational position, velocity and acceleration are read directly through the access interface provided by the Haptic Device API.
5. The virtual learning method for total hip replacement surgery as claimed in claim 4, wherein for the acquisition of attitude-angle information, a 16-element end attitude array is first obtained through the Haptic Device API access interface and then converted into the velocity of each axis in the equivalent axis-angle coordinate system; written in the equivalent axis-angle representation, the 16-element end attitude array is the homogeneous matrix of formula (1):

\[
R_K(\theta)=
\begin{bmatrix}
k_x^2 v_\theta + c_\theta & k_x k_y v_\theta - k_z s_\theta & k_x k_z v_\theta + k_y s_\theta & 0 \\
k_x k_y v_\theta + k_z s_\theta & k_y^2 v_\theta + c_\theta & k_y k_z v_\theta - k_x s_\theta & 0 \\
k_x k_z v_\theta - k_y s_\theta & k_y k_z v_\theta + k_x s_\theta & k_z^2 v_\theta + c_\theta & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{1}
\]

where \(c_\theta=\cos\theta\), \(s_\theta=\sin\theta\) and \(v_\theta=1-\cos\theta\); \(R_K(\theta)\) is the 4×4 homogeneous rotation matrix whose entries are the "16 elements"; \(K=[k_x\ k_y\ k_z]^T\) is the equivalent rotation axis of the current attitude relative to the base coordinate system; and \(\theta\) is the corresponding rotation angle about that axis; the velocity of the attitude angle in the x, y and z directions is obtained approximately by differencing;

formulas (2) and (3) can be derived from formula (1), with \(r_{ij}\) denoting the entries of the rotation part of \(R_K(\theta)\):

\[
\theta=\arccos\!\left(\frac{r_{11}+r_{22}+r_{33}-1}{2}\right)
\tag{2}
\]

\[
K=\frac{1}{2\sin\theta}
\begin{bmatrix}
r_{32}-r_{23} \\
r_{13}-r_{31} \\
r_{21}-r_{12}
\end{bmatrix}
\tag{3}
\]
6. The virtual learning method for total hip replacement surgery as claimed in claim 2, wherein step S6 is specifically as follows:
the activity space of the force feedback device in the real scene is approximated as a cube, and its size data is converted from a "float3" array to an "IntPtr" by the "ConvertFloat3ToIntPtr" instruction in "SetHapticWorkSpace";
then, in "GetHapticWorkSpace", "ConvertIntPtrToFloat3" is used to convert the "IntPtr" back into a "float3" array in the Unity editor so as to determine the space size;
next, the working space is updated according to the camera position with the plugin's "UpdateHapticWorkSpace" function, and the training interaction mode is set with the "indicate mode" function;
next, in Unity3D, the created object's state is set to "Touchable Face", which is any one of "front", "back" and "front and back";
in the process of setting the force feedback, the relevant attributes, including amplitude, duration and gain, are created and set in the "Environment constant force", "vision", "spring effect" and "massage effect" scripts; setting up an object requires acquiring the array of all objects tagged "desk", obtaining each object's mesh attributes, then drawing the geometry and reading its characteristics to start the force feedback events of all the different objects.
7. The virtual learning method for total hip replacement surgery as claimed in claim 2, wherein in step S8 a bone model is established in the virtual scene in two steps: CT image segmentation and three-dimensional modeling;
the CT image segmentation is based on Mimics' threshold method, with region growing used to complete the image segmentation of the bone CT data; after the patient's CT atlas is imported into the Mimics interface, a threshold is selected for binarization, retaining the pixels of the CT atlas whose gray values lie within the threshold range; threshold segmentation mainly uses the difference in gray-level characteristics between the target region to be extracted and its background to divide the CT image into a target region and a background region, generating the corresponding binary image; the binary image is then divided into blocks with the region-growing tool, and floating pixels are removed.
8. The virtual learning method for total hip replacement surgery as claimed in claim 2, wherein in step S9 the surgical environment is inserted in the form of textures, and the force feedback model is assembled from the simple solid models built into Unity3D; a main Camera is created in the "Hierarchy" panel to render the running picture seen during surgical training, a "Skybox" component is added in the "Inspector" to design the texture style, and "Clear Flags" in the Camera component is switched to "Skybox" mode; the surgical environment is a three-dimensional space stitched together from six textures (front, back, left, right, top and bottom), and the virtual surgical tool is assembled from a sphere and a capsule, with the center of the sphere serving as the positioning coordinate of the whole tool.
9. The virtual learning method for total hip replacement surgery as claimed in claim 2, wherein in step S12 a script on the Camera sets the "Field of view" of the Camera component, and focusing is performed with the mouse buttons during operation.
10. The virtual learning method for total hip replacement surgery as claimed in claim 2, wherein in step S13 the experiment proceeds in two main steps: while the Game panel is running, focusing is performed first to switch the training picture to the angle most comfortable for the operator; the force feedback device is then controlled to approach the key points at the correct angle until both key points turn blue; the operator's training familiarity is assessed by the time spent on the Game panel during training.
CN202110126745.2A 2021-01-29 2021-01-29 Virtual learning method for total hip replacement surgery Active CN112906205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110126745.2A CN112906205B (en) 2021-01-29 2021-01-29 Virtual learning method for total hip replacement surgery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110126745.2A CN112906205B (en) 2021-01-29 2021-01-29 Virtual learning method for total hip replacement surgery

Publications (2)

Publication Number Publication Date
CN112906205A true CN112906205A (en) 2021-06-04
CN112906205B CN112906205B (en) 2023-01-20

Family

ID=76121113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110126745.2A Active CN112906205B (en) 2021-01-29 2021-01-29 Virtual learning method for total hip replacement surgery

Country Status (1)

Country Link
CN (1) CN112906205B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113633376A (en) * 2021-08-06 2021-11-12 吉林大学 Full hip joint naked eye three-dimensional virtual replacement method
CN115273583A (en) * 2022-05-16 2022-11-01 华中科技大学同济医学院附属协和医院 Multi-person interactive orthopedics clinical teaching method based on mixed reality
CN117711611A (en) * 2024-02-05 2024-03-15 四川省医学科学院·四川省人民医院 MDT remote consultation system and method based on scene fusion and mr

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050113973A1 (en) * 2003-08-25 2005-05-26 Sony Corporation Robot and attitude control method of robot
CN104537939A (en) * 2014-12-31 2015-04-22 佛山市中医院 Virtual method and device for pedicle screw implantation
US20170000563A1 (en) * 2013-11-26 2017-01-05 Shenzhen Institutes Of Advanced Technology Chinse Academy Of Sciences Method, apparatus and system for simulating force interaction between bone drill and skeleton
CN109192030A (en) * 2018-09-26 2019-01-11 郑州大学第附属医院 True hysteroscope Minimally Invasive Surgery simulation training system and method based on virtual reality
US20190053851A1 (en) * 2017-08-15 2019-02-21 Holo Surgical Inc. Surgical navigation system and method for providing an augmented reality image during operation
US20190355278A1 (en) * 2018-05-18 2019-11-21 Marion Surgical Inc. Virtual reality surgical system including a surgical tool assembly with haptic feedback

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050113973A1 (en) * 2003-08-25 2005-05-26 Sony Corporation Robot and attitude control method of robot
US20170000563A1 (en) * 2013-11-26 2017-01-05 Shenzhen Institutes Of Advanced Technology Chinse Academy Of Sciences Method, apparatus and system for simulating force interaction between bone drill and skeleton
CN104537939A (en) * 2014-12-31 2015-04-22 佛山市中医院 Virtual method and device for pedicle screw implantation
US20190053851A1 (en) * 2017-08-15 2019-02-21 Holo Surgical Inc. Surgical navigation system and method for providing an augmented reality image during operation
US20190355278A1 (en) * 2018-05-18 2019-11-21 Marion Surgical Inc. Virtual reality surgical system including a surgical tool assembly with haptic feedback
CN109192030A (en) * 2018-09-26 2019-01-11 郑州大学第附属医院 True hysteroscope Minimally Invasive Surgery simulation training system and method based on virtual reality

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
康朋等 (Kang Peng et al.), "3D surgical simulation to measure the true acetabular morphology in Crowe type IV developmental dysplasia of the hip", Chinese Journal of Tissue Engineering Research (《中国组织工程研究》) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113633376A (en) * 2021-08-06 2021-11-12 吉林大学 Full hip joint naked eye three-dimensional virtual replacement method
CN113633376B (en) * 2021-08-06 2024-03-15 吉林大学 Naked eye three-dimensional virtual replacement method for total hip joint
CN115273583A (en) * 2022-05-16 2022-11-01 华中科技大学同济医学院附属协和医院 Multi-person interactive orthopedics clinical teaching method based on mixed reality
CN117711611A (en) * 2024-02-05 2024-03-15 四川省医学科学院·四川省人民医院 MDT remote consultation system and method based on scene fusion and mr
CN117711611B (en) * 2024-02-05 2024-04-19 四川省医学科学院·四川省人民医院 MDT remote consultation system and method based on scene fusion and mr

Also Published As

Publication number Publication date
CN112906205B (en) 2023-01-20

Similar Documents

Publication Publication Date Title
CN112906205B (en) Virtual learning method for total hip replacement surgery
CN107822690B (en) Hybrid image/scene renderer with hands-free control
Rosen et al. Evolution of virtual reality [Medicine]
EP2896034B1 (en) A mixed reality simulation method and system
US9153146B2 (en) Method and system for simulation of surgical procedures
US20100167250A1 (en) Surgical training simulator having multiple tracking systems
KR20170083091A (en) Integrated user environments
CN102207997A (en) Force-feedback-based robot micro-wound operation simulating system
Ecke et al. Virtual reality: preparation and execution of sinus surgery
Heredia‐Pérez et al. Virtual reality simulation of robotic transsphenoidal brain tumor resection: Evaluating dynamic motion scaling in a master‐slave system
CN108492693A (en) A kind of laparoscopic surgery simulated training system shown based on computer aided medicine
US9230452B2 (en) Device and method for generating a virtual anatomic environment
WO1996016389A1 (en) Medical procedure simulator
CN113379929B (en) Bone tissue repair virtual reality solution method based on physical simulation
CN114333482A (en) Virtual anatomy teaching system based on mixed reality technology
Wagner et al. Intraocular surgery on a virtual eye
CN110097944B (en) Display regulation and control method and system for human organ model
CN114503211A (en) Automatically configurable simulation system and method
KR20040084243A (en) Virtual surgical simulation system for total hip arthroplasty
Tang et al. Virtual laparoscopic training system based on VCH model
Zhang et al. Development of a virtual training system for master-slave hip replacement surgery
Scharver et al. Pre-surgical cranial implant design using the PARIS/spl trade/prototype
TW202207242A (en) System and method for augmented reality spine surgery
Rasool et al. Image-driven haptic simulation of arthroscopic surgery
Deakyne et al. Immersive anatomical scenes that enable multiple users to occupy the same virtual space: a tool for surgical planning and education

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant