CN114387836B - Virtual operation simulation method and device, electronic equipment and storage medium

Info

Publication number: CN114387836B
Application number: CN202111534312.7A
Authority: CN (China)
Prior art keywords: virtual, hand, model, operator, space
Legal status: Active
Other versions: CN114387836A (Chinese)
Inventors: 于洪波, 赵涵江, 程奂翀, 程梦佳, 庄瑜, 李萌, 沈国芳
Assignee: Ninth Peoples Hospital Shanghai Jiaotong University School of Medicine
Application filed by Ninth Peoples Hospital Shanghai Jiaotong University School of Medicine

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00: Simulators for teaching or training purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual surgery simulation method and device, an electronic device and a storage medium. The method comprises the following steps: determining a corresponding virtual hand model from pre-collected three-dimensional hand information of the virtual surgical operator; displaying a virtual space and establishing a coordinate system, where the virtual space contains a virtual surgical instrument model, a virtual human tissue model and the virtual hand model; collecting the three-dimensional hand information in real time to recognize the gestures of the virtual surgical operator and the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model; and determining the instructions and operation actions of the virtual surgical operator from the gestures and positional relationships, so as to synchronously display the virtual picture of the virtual surgery. The technical solution provided by the invention solves the prior-art problem that, when performing a virtual surgery, the operator's gestures can be associated with the virtual hand model on the display only by relying on information input devices or wearable equipment such as data gloves.

Description

Virtual operation simulation method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of medical technology, and in particular to a virtual surgery simulation method, a virtual surgery simulation device, an electronic device, and a storage medium.
Background
Advances in science and technology have driven social progress, and emerging technologies applied in the medical field have had a profound influence on the medical industry.
Virtual surgery simulation systems are a typical application of such technology in the medical field. Medical operations are complex and varied, and preoperative planning, intraoperative execution and postoperative prediction are all difficult. Virtual reality technology provides doctors with a virtual surgical environment and an interactive operating platform on which the whole clinical procedure can be simulated, offering an ideal way to address these difficulties. Compared with traditional modes of surgery simulation, virtual surgery based on three-dimensional display is non-destructive, repeatable and designable.
To obtain a realistic virtual surgical effect, recognition of gestures and surgical actions is of paramount importance. Current virtual surgery systems generally obtain information and instructions through a keyboard and mouse or by wearing data gloves. However, the information entered with a keyboard and mouse differs greatly from an actual operation performed while holding an instrument, which greatly reduces the realism of the virtual surgery. Although data gloves can capture gestures by collecting hand-joint data in real time, they are generally expensive, restrictive in use and costly to apply, and are therefore hard to popularize.
Disclosure of Invention
The invention provides a virtual surgery simulation method and device, an electronic device and a storage medium, aiming to effectively solve the prior-art problem that, when performing a virtual surgery, the gestures of the virtual surgical operator can be associated with the virtual hand model on the display only by relying on information input devices or wearable equipment such as data gloves.
According to one aspect of the present invention, there is provided a virtual surgery simulation method, the method comprising:
determining a corresponding virtual hand model according to the hand three-dimensional information of the pre-collected virtual operator;
displaying a virtual space for performing the virtual surgery and establishing a coordinate system associated with the virtual space, wherein the virtual space comprises a virtual surgical instrument model, a virtual human tissue model serving as the virtual surgical object and the virtual hand model;
in the virtual operation implementation process, acquiring the hand three-dimensional information in real time, and identifying gestures of the virtual operation operator and the position relation among the virtual operation instrument model, the virtual human body tissue model and the virtual hand model based on the hand three-dimensional information;
determining an instruction and an operation action of the virtual surgical operator based on the gesture and the positional relationship, and synchronously displaying a virtual picture of the virtual surgery in the virtual space according to the instruction and the operation action.
Further, during the virtual surgery implementation process, the acquiring the three-dimensional information of the hand in real time includes:
acquiring the three-dimensional hand information of the virtual surgical operator in real time by means of time-of-flight depth detection.
Further, the real-time acquisition of the three-dimensional information of the hand by the depth detection mode based on the flight time includes:
driving two depth detection devices positioned on the same plane and at different positions to emit modulated light pulses continuously in a space scanning mode so as to capture depth information of different parts on the hands of the virtual operator in real time;
and calculating the three-dimensional information of the hand according to a triangulation method based on the depth information of different parts on the hand detected by the two depth detection devices.
Further, the driving the two depth detection devices located at different positions on the same plane to emit modulated light pulses continuously in a spatial scanning manner to capture depth information of different parts on the hand of the virtual operator in real time includes:
acquiring, for each scanning line that can scan the hand of the virtual operator, the emission time of the modulated light pulse corresponding to the scanning line and the reception times of the signals reflected back from different parts of the virtual operator's hand;
depth information of different parts on the virtual operator's hand in the coordinate system relative to the depth detection means emitting the modulated light pulses is determined based on the difference between the emission time and the reception time.
Further, the virtual hand model is composed of 15 rigid bodies preset based on anatomy and kinematics, and has 22 degrees of freedom.
Further, the identifying the gesture of the virtual surgical operator based on the three-dimensional hand information, and the positional relationship among the virtual surgical instrument model, the virtual human tissue model, and the virtual hand model include:
determining a hand feature vector based on the hand three-dimensional information, the hand feature vector comprising at least one of: the bending degree of the finger, the number of the extended fingers, the included angle between the adjacent fingers and the distance between the adjacent fingertips;
Inputting the hand feature vector into a trained classifier;
the trained classifier identifies gestures of the virtual surgical operator based on the input hand feature vectors.
Further, the identifying the gesture of the virtual surgical operator and the positional relationship among the virtual surgical instrument model, the virtual human tissue model, and the virtual hand model based on the three-dimensional hand information further includes:
identifying whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model based on the hand three-dimensional information, and identifying whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model;
and synchronously displaying preset actions associated with the space position collision in the virtual space when the space position collision between the virtual surgical instrument model and the virtual hand model and/or the space position collision between the virtual human tissue model and the virtual surgical instrument model are identified.
Further, the identifying, based on the hand three-dimensional information, whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model, and identifying whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model include:
Constructing a multi-level bounding box model based on the virtual surgical instrument model, the virtual human tissue model and the virtual hand model, wherein the multi-level bounding box model comprises attribute information of bounding boxes associated with the virtual surgical instrument model, the virtual human tissue model and the virtual hand model, and the attribute information at least comprises the size, the position information and the patch table information of each level of bounding box;
and identifying whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model according to the multi-stage bounding box model, and identifying whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model.
Further, the identifying, according to the multi-stage bounding box model, whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model, and identifying whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model includes:
determining whether the two bounding boxes collide in space positions according to the maximum nominal radius and the minimum nominal radius of each bounding box and the distance between the geometric center points of the two bounding boxes for each group of the two bounding boxes to be identified;
If the distance between the geometric center points of the two bounding boxes is larger than the sum of the maximum nominal radiuses of the two bounding boxes, determining that the models respectively associated with the two bounding boxes do not collide in space positions;
if the distance between the geometric center points of the two bounding boxes is smaller than the sum of the respective minimum nominal radiuses of the two bounding boxes, determining that the models respectively associated with the two bounding boxes collide in space positions;
otherwise, further identifying whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model and/or whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model by adopting a bounding box-patch detection mode.
Further, the identifying, by means of bounding box-patch detection, whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model and/or whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model includes:
for each group of two bounding boxes to be identified, selecting the space geometric relation between one bounding box and a surface patch forming the other bounding box, and determining whether the two bounding boxes collide in space position according to a separation axis algorithm;
If it is recognized that one bounding box collides with one of the patches forming the other bounding box in space, determining that models respectively associated with the two bounding boxes collide in space, otherwise, determining that models respectively associated with the two bounding boxes do not collide in space.
According to another aspect of the present invention, there is provided a virtual surgery simulation apparatus, including: a preprocessing module, configured to determine a corresponding virtual hand model according to pre-collected three-dimensional hand information of the virtual surgical operator, to display a virtual space for performing the virtual surgery and to establish a coordinate system associated with the virtual space, where the virtual space includes a virtual surgical instrument model, a virtual human tissue model serving as the virtual surgical object, and the virtual hand model; an information acquisition and recognition module, configured to acquire the three-dimensional hand information of the virtual surgical operator in real time during the implementation of the virtual surgery, and to recognize the gestures of the virtual surgical operator and the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model based on the three-dimensional hand information; and a virtual synchronization module, configured to determine the instructions and operation actions of the virtual surgical operator based on the gestures and positional relationships, and to synchronously display the virtual picture of the virtual surgery in the virtual space according to the instructions and operation actions.
According to another aspect of the present invention, there is provided an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing any of the virtual surgical simulation methods described above when executing the computer program.
According to another aspect of the present invention there is provided a storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform any of the virtual surgical simulation methods described above.
Through one or more of the above embodiments of the present invention, at least the following technical effects can be achieved:
The three-dimensional hand information of the virtual surgical operator can be collected in real time and the corresponding virtual hand model determined from it, so no equipment needs to be worn on the hands and the surgery simulation can be carried out conveniently and flexibly. During the implementation of the virtual surgery, the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model are recognized from the three-dimensional hand information; collisions between the different virtual models can be judged from these positional relationships, and the virtual picture of the virtual surgery is determined by the resulting instructions and operation actions, so the real surgical process is simulated quickly and accurately. The hand gestures of the virtual surgical operator are displayed in real time and directly control the progress of the surgery, and acquiring instructions and operation actions no longer depends on a keyboard and mouse. The hand movements of the virtual surgical operator are thus unconstrained by other equipment, and the operator can perform the surgery simulation as if present at a real operation.
Drawings
The technical solution and other advantageous effects of the present invention will be made apparent by the following detailed description of the specific embodiments of the present invention with reference to the accompanying drawings.
FIG. 1 is a flowchart illustrating a virtual operation simulation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the rigid bodies and degrees of freedom of a virtual hand model;
FIG. 3 shows static gestures used for recognizing the gestures of a virtual surgical operator;
FIG. 4 is a schematic diagram of a bounding box binary tree segmentation method;
FIG. 5 is a schematic diagram of a bounding box nominal radius judgment method;
fig. 6 is a schematic structural diagram of a virtual operation simulation device according to a third embodiment of the present invention.
Detailed Description
The embodiments of the invention provide a virtual surgery simulation method and device, an electronic device and a storage medium, which can solve the prior-art problem that, when performing a virtual surgery, the gestures of the virtual surgical operator can be associated with the virtual hand model on the display only by relying on information input devices or wearable equipment such as data gloves.
To solve the above technical problem, the general idea of the technical solution in the embodiments of the invention is as follows:
In the embodiments of the invention, a corresponding virtual hand model is determined according to pre-collected three-dimensional hand information of the virtual surgical operator; a virtual space for performing the virtual surgery is displayed and a coordinate system associated with the virtual space is established, the virtual space including a virtual surgical instrument model, a virtual human tissue model serving as the virtual surgical object, and the virtual hand model; during the implementation of the virtual surgery, the three-dimensional hand information is collected in real time, and the gestures of the virtual surgical operator and the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model are recognized from it; the instructions and operation actions of the virtual surgical operator are then determined from the gestures and positional relationships, and the virtual picture of the virtual surgery is synchronously displayed in the virtual space accordingly. This solves the prior-art problem that the operator's gestures can be associated with the virtual hand model on the display only by relying on information input devices or wearable equipment such as data gloves. Compared with the prior art, the operator's hand gestures can be displayed on the display in real time and directly control the surgical process; unlike traditional keyboard and mouse input, the operator's hands are freed from those devices during the simulated surgery, and no data-collection equipment needs to be worn, so the virtual surgery simulation is flexible and convenient, the learning process is natural, intuitive and easy to carry out, the surgical procedure can be simulated immersively, and the user experience is improved.
In addition, the term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. Unless otherwise specified, the character "/" herein generally indicates an "or" relationship between the associated objects.
For a better understanding of the above technical solutions, they are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the embodiments and their specific features are detailed explanations of the technical solutions of the invention rather than limitations of them, and that the embodiments and their technical features may be combined with each other where no conflict arises.
Example 1
The present invention provides a virtual surgery simulation method that can be applied to a device; the invention does not limit what that device specifically is.
As shown in fig. 1, the main flow of the virtual surgery simulation method is as follows:
step S101: determining a corresponding virtual hand model according to the hand three-dimensional information of the pre-collected virtual operator;
Step S102: displaying a virtual space for performing a virtual operation and establishing a coordinate system associated with the virtual space, wherein the virtual space comprises a virtual surgical instrument model, a virtual human body tissue model serving as a virtual operation object and a virtual hand model;
Step S103: during the implementation of the virtual surgery, collecting the three-dimensional hand information in real time, and recognizing the gestures of the virtual surgical operator and the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model based on the three-dimensional hand information;
Step S104: determining the instructions and operation actions of the virtual surgical operator based on the gestures and positional relationships, and synchronously displaying the virtual picture of the virtual surgery in the virtual space according to the instructions and operation actions.
In step S101, a corresponding virtual hand model is determined according to pre-collected three-dimensional hand information of the virtual surgical operator. For example, in a virtual surgery scenario, a sensor must be configured to capture the fine hand movements of the virtual operator. From the three-dimensional hand information, the acceleration, speed and direction of the hand's movement in three-dimensional space, and the rotation angle, angular speed and rotation direction of the palm when the hand rotates, can be obtained, and the corresponding virtual hand model can be determined. The hand information may be collected by a single sensor or by multiple sensors; in practical applications, those skilled in the art can choose the sensor type according to specific requirements, and the invention is not limited in this respect.
After step S101, step S102 is performed: a virtual space for performing the virtual surgery is displayed and a coordinate system associated with the virtual space is established; the virtual space includes a virtual surgical instrument model, a virtual human tissue model serving as the virtual surgical object, and a virtual hand model. For example, in a virtual surgery system, the virtual space must be shown on a display interface, and a coordinate system associated with it must be established so that the virtual models can be displayed accurately. The display device may be a VR head-mounted display, a computer monitor or a liquid crystal display; the VR head-mounted display may be VR glasses, a VR eyecup or a VR helmet, and the invention is not limited to any type of display device. The virtual space contains at least a virtual surgical instrument model, which is determined by the type of surgery, such as a scalpel, surgical scissors, tissue forceps or sterilized cotton. Likewise, the virtual human tissue model serving as the virtual surgical object is determined by the type of surgery, for example a biomechanical model of craniomaxillofacial tissue or a stomach model. In addition to these two models, a virtual hand model corresponding to the gestures of the virtual operator is displayed in real time, so that the hand movements the operator makes in reality are synchronously shown on the display.
After step S102, step S103 is performed: during the implementation of the virtual surgery, the three-dimensional hand information is collected in real time, and the gestures of the virtual operator and the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model are recognized from it. For example, the virtual surgery system collects the three-dimensional hand information in real time and performs depth calculations on it to recognize the operator's gestures; the positional relationships among the three models, including the precise positions at which different virtual models come into contact, are also obtained from the three-dimensional hand information. During the virtual surgery, the virtual surgical instrument model and the virtual human tissue model are displayed statically as long as nothing touches them, while the virtual hand model moves with the operator's hand. When the virtual hand model moves toward a virtual surgical instrument model, the system judges from the real-time hand data whether the hand model touches the instrument model and, if so, the precise position of the contact. When the virtual hand model grasps the instrument model, the system must likewise judge, from the real-time hand data, whether the instrument model touches the virtual human tissue model and the precise position of that contact.
After step S103, step S104 is performed: the instructions and operation actions of the virtual operator are determined based on the gestures and positional relationships, and the virtual picture of the virtual surgery is synchronously displayed in the virtual space accordingly. For example, the virtual operator may issue instructions that control the simulated surgery through preset gestures; such instructions control the type of surgery and the operation steps or operation actions during its execution. For instance, a scissor-hand gesture may indicate that the operation action is to cut the virtual human tissue model, while a preset two-hand gesture may indicate that the next operation step should be executed. The collision state between virtual models, and the specific positions of any collisions, are determined from the positional relationships. When the virtual hand model touches the virtual surgical instrument model, an operation instruction must be obtained to determine the surgical operation and hence the action the virtual hand model is to perform. For example, when the virtual hand model touches the virtual scalpel model, the instruction and operation action for grasping it must be determined, such as holding the scalpel in one of the bow-holding, pen-holding, palm-grasping or reverse-pick grips. Once the operation is determined, it is executed at the contact position: for example, the virtual scalpel model is fixed between the thumb and index finger of the right virtual hand and then moves together with the virtual hand model. Similarly, if the virtual surgical instrument model touches the virtual human tissue model, the specific surgical operation to be performed on the tissue, such as cutting, piercing or suturing, is determined from the instruction and operation action. Optionally, if the positional relationship shows that the virtual hand model itself touches the virtual human tissue model, the action to be performed by the hand, such as pulling, pressing, turning, lifting or lowering, can be determined from the operation instruction.
The virtual picture of the virtual surgery is then synchronously displayed in the virtual space according to the instruction and operation action. The specific operation action can be determined from the instruction, and the position on the virtual human tissue model at which the operation is executed can be obtained from the positional relationship and the contact position; once both the operation position and the operation action are determined, the corresponding operation effect is displayed at that position. For example, in a virtual surgery based on a craniomaxillofacial tissue biomechanical model, the operator's gestures control the virtual hand model to grasp the virtual scalpel model and move its blade onto the virtual maxillofacial soft-tissue model; when the system receives the instruction to puncture and determines that the operation action is puncturing, the effect of the scalpel puncturing the maxillofacial soft tissue is dynamically simulated and displayed on the tissue model, thereby reproducing a real surgical environment.
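To make the control flow of steps S101-S104 concrete, the sketch below shows one way the per-frame loop could be organized. It is a minimal illustration, not code from the patent; the scene object and all helper callables (acquire_hand_frame, recognize_gesture, collide, map_to_action) are hypothetical and would be supplied by the host system.

```python
def simulation_loop(scene, acquire_hand_frame, recognize_gesture, collide, map_to_action):
    """Hypothetical per-frame loop for the simulation described above."""
    while scene.running:
        # Step S103: collect three-dimensional hand information in real time
        hand_points = acquire_hand_frame()
        scene.hand_model.update_from_points(hand_points)

        # Recognize the operator's gesture from the hand data
        gesture = recognize_gesture(hand_points)

        # Check the positional relationships (collisions) between models
        hand_touches_tool = collide(scene.hand_model, scene.instrument_model)
        tool_touches_tissue = collide(scene.instrument_model, scene.tissue_model)

        # Step S104: map gesture + positional relationship to an instruction,
        # apply the operation action, and redraw the virtual picture
        action = map_to_action(gesture, hand_touches_tool, tool_touches_tissue)
        scene.apply(action)
        scene.render()
```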
In the embodiment of the invention, the three-dimensional information of the hand of the virtual operation operator can be acquired in real time, and the corresponding virtual hand model is determined according to the three-dimensional information of the hand, so that the user does not need to wear data acquisition equipment on the hand, and can free hands, thereby being convenient for flexibly performing operation simulation. The hand gesture of the virtual operation operator can be displayed in real time, the operation progress can be directly controlled through the hand gesture, and the limitation of a keyboard and a mouse can be separated when the instruction and the operation action are acquired, so that the technical effect that the hand movement of the virtual operation operator is not influenced by other equipment and the operator can perform operation simulation on the spot is obtained.
Example two
Based on the same inventive concept as the virtual surgery simulation method in the first embodiment of the present invention, the virtual surgery simulation method in the second embodiment of the present invention includes:
further, three-dimensional hand information of the virtual operation operator is acquired in real time in a depth detection mode based on the flight time. The time-of-flight ranging method belongs to two-way ranging technology, and mainly utilizes the time of flight of signals to and fro between two asynchronous transceivers or reflected surfaces to measure the distance between nodes. In the virtual surgery simulation system, information is acquired by a depth detection mode based on flight time, specifically, three-dimensional information of hands of a virtual surgery operator including each skeleton and joint of fingers, palm and wrist and the like needs to be acquired in real time.
Further, two depth detection devices located at different positions on the same plane are driven to continuously emit modulated light pulses in a spatial scanning manner, so as to capture depth information of different parts of the virtual operator's hand in real time. From the depth information detected separately by the two devices, the three-dimensional hand information is calculated by triangulation. Specifically, the virtual operator's hand can be tracked by depth detection devices such as cameras and sensors to obtain the relevant hand information. A depth detection device measures the round-trip time of light between itself and the operator's hand and computes the distance from it: the device continuously emits modulated light pulses toward multiple positions on the hand in a spatial scanning manner, receives the pulses reflected back from the hand, and captures the depth of the different parts of the hand in real time through depth calculation.
Based on the depth information of the different hand parts detected by the two depth detection devices, the three-dimensional hand information is calculated by triangulation. Specifically, the two devices capture images of the virtual operator's hand, yielding two photographs of the same scene from different viewing angles; depth is calculated by comparing the differences between the images captured by the two cameras at the same moment, and the position and displacement of the operator's hand in three-dimensional space are obtained through multi-angle stereo imaging.
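As an illustration of the triangulation step, the sketch below recovers a 3D point from a matched pair of pixel observations in two cameras separated by a known baseline. This is the standard rectified-stereo relation z = f·B/disparity, not code from the patent; the assumptions of rectified cameras and a principal point at the image origin are mine.

```python
import numpy as np

def triangulate_point(p_left, p_right, baseline_m, focal_px):
    """Standard rectified-stereo triangulation (illustrative).
    p_left, p_right: (u, v) pixel coordinates of the same hand point in
    the left and right images; baseline_m: distance between the two
    devices; focal_px: focal length in pixels."""
    disparity = p_left[0] - p_right[0]        # horizontal pixel offset
    if disparity <= 0:
        raise ValueError("matched point must lie in front of both cameras")
    z = focal_px * baseline_m / disparity     # depth from the disparity
    x = p_left[0] * z / focal_px              # back-project into 3D
    y = p_left[1] * z / focal_px
    return np.array([x, y, z])
```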
Further, for each scanning line that can scan the virtual operator's hand, the emission time of the modulated light pulse corresponding to that scanning line and the reception times of the signals reflected back from different parts of the hand are acquired, and the depth of those parts relative to the depth detection device emitting the pulses is determined, in the coordinate system, from the difference between the emission and reception times. In an exemplary embodiment, after the emission time of the modulated light pulse and the reception time of the reflected signal are obtained for a scanning line, the difference between the two is calculated, and the depth of the different hand parts relative to the emitting device is determined from this flight time and the speed of light. In this way the acceleration, speed and direction of the hand's movement in three-dimensional space, and the rotation angle, angular speed and rotation direction of the palm when the hand rotates, can all be acquired in real time.
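The depth computation itself is a one-line relation: the pulse travels to the hand and back, so the one-way distance is half of the flight time multiplied by the speed of light. A minimal sketch (function name mine):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(emission_time_s, reception_time_s):
    """Depth of a reflecting hand point from the round-trip time of a
    modulated light pulse: distance = c * (t_rx - t_tx) / 2."""
    flight_time = reception_time_s - emission_time_s
    return SPEED_OF_LIGHT * flight_time / 2.0
```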
Further, the virtual hand model is composed of 15 rigid bodies preset on the basis of anatomy and kinematics and has 22 degrees of freedom. By way of example, fig. 2 shows a schematic representation of the rigid bodies and degrees of freedom of such a virtual hand model. During movement, bones do not change shape and can therefore be treated as rigid bodies, so 15 rigid bodies are designed for each virtual hand model, corresponding to the bones in the anatomical structure of a hand. As shown in fig. 2, the 15 cuboids in one virtual hand model represent the 15 rigid bodies: the 3 bones of each finger correspond to 3 rigid bodies, so the 15 finger bones of one hand correspond to 15 rigid bodies. The 22 ellipses in fig. 2 represent the 22 degrees of freedom, corresponding to the movable joints between bones, of which 7 degrees of freedom are assigned to the palm to improve the accuracy of gesture recognition. During the virtual surgery, the spatial position, geometric pose, grasping state and so on of the virtual hand model in the virtual space can be controlled by controlling the positions and angles of these rigid bodies and degrees of freedom.
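A minimal data-structure sketch of this skeleton is given below. The totals (15 rigid bodies, 22 degrees of freedom, 7 of them at the palm) follow the text; the segment naming and the assignment of exactly one rotational degree of freedom per phalanx joint are illustrative assumptions.

```python
from dataclasses import dataclass

PALM_DOFS = 7  # degrees of freedom assigned to the palm, per the text

@dataclass
class RigidBody:
    name: str        # e.g. "index_proximal" (illustrative naming)
    joint_dofs: int  # rotational degrees of freedom at this body's joint

def build_virtual_hand():
    fingers = ("thumb", "index", "middle", "ring", "little")
    segments = ("proximal", "middle", "distal")
    bodies = [RigidBody(f"{f}_{s}", joint_dofs=1)
              for f in fingers for s in segments]
    assert len(bodies) == 15                                    # 15 rigid bodies
    assert sum(b.joint_dofs for b in bodies) + PALM_DOFS == 22  # 22 DOF in total
    return bodies
```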
Further, a hand feature vector is determined from the three-dimensional hand information, the vector containing at least one of the following: the degree of curvature of the fingers, the number of extended fingers, the angles between adjacent fingers, and the distances between adjacent fingertips. The hand feature vector is input into a trained classifier, which recognizes the gesture of the virtual operator from it. Illustratively, the hand feature vector is calculated from the three-dimensional hand information, and which of the above features it contains is determined by the requirements of the virtual surgery simulation system. The classifier is trained for multi-class classification using an optimal classification function; after the training step comes an online recognition stage, which requires fast, high-precision recognition and maps hand feature vectors to labels numbered 0, 1, 2, and so on, each corresponding to a gesture. Fig. 3 shows the static gesture database pre-stored in the virtual surgery simulation system for recognizing the operator's gestures; it stores the static gestures that serve as the objects to be recognized. The gesture of the virtual operator can then be matched against this static gesture database according to the hand feature vector.
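The sketch below computes such a feature vector from finger joint positions. The four feature groups follow the text; the 0.15 curvature threshold for counting a finger as extended, the input layout, and the suggestion of a multi-class SVM as the classifier are assumptions, since the text only requires "a trained classifier".

```python
import numpy as np

FINGERS = ("thumb", "index", "middle", "ring", "little")

def hand_feature_vector(joints):
    """joints: dict mapping finger name -> (4, 3) array of joint
    positions from base to tip, in the virtual-space coordinate system."""
    curvatures, directions, tips = [], [], []
    for f in FINGERS:
        pts = np.asarray(joints[f], dtype=float)
        chord = np.linalg.norm(pts[-1] - pts[0]) + 1e-12  # base-to-tip distance
        arc = sum(np.linalg.norm(b - a) for a, b in zip(pts, pts[1:]))
        curvatures.append(1.0 - chord / arc)              # 0 for a straight finger
        directions.append((pts[-1] - pts[0]) / chord)     # overall finger direction
        tips.append(pts[-1])
    extended = sum(1 for c in curvatures if c < 0.15)     # assumed threshold
    angles = [float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
              for a, b in zip(directions, directions[1:])]  # adjacent-finger angles
    tip_dists = [float(np.linalg.norm(b - a))
                 for a, b in zip(tips, tips[1:])]           # adjacent fingertips
    return np.asarray(curvatures + [float(extended)] + angles + tip_dists)

# A multi-class SVM is one plausible classifier choice, e.g.:
#   from sklearn.svm import SVC
#   clf = SVC(kernel="rbf").fit(X_train, y_train)  # y: gesture labels 0, 1, 2, ...
#   gesture = clf.predict([hand_feature_vector(joints)])[0]
```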
Whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model, and between the virtual human tissue model and the virtual surgical instrument model, is identified based on the three-dimensional hand information. When a spatial position collision between the virtual surgical instrument model and the virtual hand model and/or between the virtual human tissue model and the virtual surgical instrument model is identified, the preset actions associated with the collision are synchronously displayed in the virtual space. In other words, the collision relationships between the different virtual models, both hand-instrument and instrument-tissue, are recognized from the three-dimensional hand information, and the gesture actions associated with each collision are displayed synchronously in the virtual space, simulating the operating scene of a real surgery.
Further, a multi-level bounding box model is constructed from the virtual surgical instrument model, the virtual human tissue model and the virtual hand model. The multi-level bounding box model contains the attribute information of the bounding boxes associated with each of these models, and this attribute information includes at least the size, position information and patch-table information of each level of bounding box. Spatial position collisions between the virtual surgical instrument model and the virtual hand model, and between the virtual human tissue model and the virtual surgical instrument model, are then identified from the multi-level bounding box model. Illustratively, collision detection is based on a multi-level bounding box model whose attribute information may include description information, patch information (points and normal vectors) and bounding box binary-tree information. The description information describes basic properties of the model, such as the model ID, name, precision and root bounding box size; the patch information includes the vertex list, normal-vector list and patch table of the patches. The bounding boxes of each virtual model are subdivided level by level: boxes that no longer intersect the geometry are removed, and intersecting ones are kept. For example, fig. 4 illustrates the binary-tree subdivision of a bounding box: the root bounding box is split into left and right child boxes; if both still intersect the model geometry, a second-level split produces 4 boxes, of which the non-intersecting ones are removed; a third-level split of the remaining boxes produces up to 8 boxes, again keeping the intersecting ones and discarding the rest. The same subdivision is applied until the bounding boxes of each virtual model meet the precision requirement.
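The sketch below shows one way to build such a hierarchy over triangular patches: an axis-aligned box is split along its longest axis, children containing no geometry are never created, and splitting stops once the box size meets the precision requirement. The split rule and stopping size are assumptions; the text only requires level-by-level binary subdivision to a given precision.

```python
import numpy as np

class BoxNode:
    """One node of the bounding-box binary tree (illustrative)."""
    def __init__(self, tris):
        self.tris = np.asarray(tris, dtype=float)   # (n, 3, 3) triangle vertices
        pts = self.tris.reshape(-1, 3)
        self.lo, self.hi = pts.min(axis=0), pts.max(axis=0)
        self.children = []

def build_tree(tris, min_size=0.01):
    node = BoxNode(tris)
    if max(node.hi - node.lo) <= min_size or len(node.tris) <= 1:
        return node                                  # precision reached: leaf
    axis = int(np.argmax(node.hi - node.lo))         # split the longest axis
    mid = 0.5 * (node.lo[axis] + node.hi[axis])
    centers = node.tris.mean(axis=1)[:, axis]
    left, right = node.tris[centers <= mid], node.tris[centers > mid]
    if len(left) == 0 or len(right) == 0:
        return node                                  # cannot split further: leaf
    for part in (left, right):                       # empty boxes are pruned above,
        node.children.append(build_tree(part, min_size))  # mirroring the removal step
    return node
```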
Further, for each group of two bounding boxes to be identified, whether the two boxes collide in spatial position is determined from their respective maximum and minimum nominal radii and the distance between their geometric center points. If the distance between the geometric center points is larger than the sum of the two maximum nominal radii, the models associated with the two bounding boxes do not collide in spatial position. If the distance is smaller than the sum of the two minimum nominal radii, the associated models do collide. Otherwise, bounding box-patch detection is used to further identify whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model and/or between the virtual human tissue model and the virtual surgical instrument model.
The positional relationship between bounding boxes is judged by a nominal-radius method. For the bounding box of each virtual model, the maximum distance from the box center to a point on the box surface is called the maximum nominal radius r_max, and the minimum such distance is called the minimum nominal radius r_min. For a rectangular bounding box, r_max equals half the length of the box diagonal and r_min equals half the length of its shortest side.
FIG. 5 illustrates the nominal-radius judgment. Let the two bounding boxes to be tested for intersection be B1 and B2, with maximum and minimum nominal radii r_max1, r_min1 and r_max2, r_min2 respectively, let their centers be C1 and C2, and let d be the distance between C1 and C2. The basic principle of the intersection test between B1 and B2 is then as follows:
When d is smaller than the sum of the minimum nominal radii, r_min1 + r_min2, the two bounding boxes intersect, i.e. the models corresponding to them collide in spatial position. When d is greater than the sum of the maximum nominal radii, r_max1 + r_max2, the two bounding boxes are separated, and the corresponding models do not collide in spatial position. When d lies between r_min1 + r_min2 and r_max1 + r_max2, the collision relationship between the two boxes cannot be decided by this test, and bounding box-patch detection is used to further identify spatial position collisions between the virtual models.
Further, for each group of two bounding boxes to be identified, the spatial geometric relationship between one bounding box and the patches forming the other is examined, and whether the two boxes collide in spatial position is determined by the separating-axis algorithm. If one bounding box is found to collide in spatial position with one of the patches forming the other, the models associated with the two boxes are determined to collide; otherwise they are determined not to collide. Illustratively, collision detection is performed by testing the intersection of a bounding box with patches; since in practice the patches are usually triangular, the separating-axis judgment is used to speed up the box-triangle intersection test. The separating-axis algorithm is a common method for detecting intersections between convex polyhedra in space; in collision detection a triangular patch is treated as a degenerate convex polyhedron, so the intersection test can be carried out between patches and bounding boxes. For the two bounding boxes to be identified in the virtual surgery system, the spatial geometric relationship between one box and the patches forming the other is determined, and their spatial collision state is decided by the separating-axis algorithm: when one box collides in spatial position with one of the patches forming the other, the models corresponding to the two boxes are determined to collide in spatial position; otherwise the two models are determined not to collide.
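A compact version of the standard 13-axis box-triangle separating-axis test is sketched below (three box face normals, the triangle's plane normal, and the nine cross products of box axes with triangle edges). The text names the separating-axis algorithm but not a particular formulation, so this specific arrangement is an assumption.

```python
import numpy as np

def box_triangle_intersect(center, half, tri):
    """Separating-axis test between an axis-aligned box (center, half-extents)
    and a triangle given as three vertices. Returns True if they intersect."""
    h = np.asarray(half, dtype=float)
    v = np.asarray(tri, dtype=float) - np.asarray(center, dtype=float)
    edges = (v[1] - v[0], v[2] - v[1], v[0] - v[2])

    # Axes 1-3: the box's face normals (the coordinate axes).
    for i in range(3):
        if v[:, i].max() < -h[i] or v[:, i].min() > h[i]:
            return False

    # Axis 4: the triangle's plane normal.
    n = np.cross(edges[0], edges[1])
    if abs(np.dot(n, v[0])) > np.dot(h, np.abs(n)):
        return False

    # Axes 5-13: cross products of each box axis with each triangle edge.
    for e in edges:
        for a in np.eye(3):
            axis = np.cross(a, e)
            p = v @ axis                     # triangle projection onto the axis
            r = np.dot(h, np.abs(axis))      # box projection radius
            if p.max() < -r or p.min() > r:
                return False
    return True                              # no separating axis exists
```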
Example III
Based on the same inventive concept as the virtual surgery simulation method in the first embodiment, the third embodiment of the present invention provides a virtual surgery simulation apparatus. Referring to fig. 6, the apparatus includes:
a preprocessing module 201, configured to determine a corresponding virtual hand model according to three-dimensional hand information of a pre-collected virtual operator, and display a virtual space for performing the virtual operation and establish a coordinate system associated with the virtual space, where the virtual space includes a virtual surgical instrument model, a virtual human body tissue model as the virtual surgical object, and the virtual hand model;
the information acquisition and recognition module 202 is configured to acquire the hand three-dimensional information in real time during the virtual surgery implementation process, and recognize a gesture of the virtual surgery operator and a positional relationship among the virtual surgery instrument model, the virtual human tissue model and the virtual hand model based on the hand three-dimensional information;
and the virtual synchronization module 203 is configured to determine the instructions and operation actions of the virtual surgical operator based on the gestures and positional relationships, and to synchronously display the virtual picture of the virtual surgery in the virtual space according to the instructions and operation actions.
According to another aspect of the present invention, there is provided an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the virtual operation simulation method according to any of the embodiments of the present invention when executing the computer program.
The present invention also provides a storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the virtual surgical simulation method of any of the embodiments of the present invention.
In summary, although the present invention has been described in terms of preferred embodiments, these embodiments are not limiting: one skilled in the art can make various modifications and changes without departing from the spirit and scope of the invention, and the scope of protection of the invention is therefore defined by the appended claims.

Claims (10)

1. A virtual surgical simulation method applied to a virtual maxillofacial soft tissue model, the method comprising:
determining a corresponding virtual hand model according to the hand three-dimensional information of the pre-collected virtual operator;
displaying a virtual space for performing the virtual surgery and establishing a coordinate system associated with the virtual space, wherein the virtual space comprises a virtual surgical instrument model, a virtual human tissue model serving as the virtual surgical object and the virtual hand model;
in the virtual operation implementation process, acquiring the hand three-dimensional information in real time, and identifying gestures of the virtual operation operator and the position relation among the virtual operation instrument model, the virtual human body tissue model and the virtual hand model based on the hand three-dimensional information;
determining an instruction and an operation action of the virtual operation operator based on the gesture and the position relation, and synchronously displaying a virtual picture of the virtual operation in the virtual space according to the instruction and the operation action;
wherein the acquiring the three-dimensional information of the hand in real time comprises:
acquiring three-dimensional hand information of the virtual operation operator in real time in a depth detection mode based on flight time;
the real-time acquisition of the three-dimensional information of the hand by a depth detection mode based on flight time comprises the following steps:
driving two depth detection devices positioned on the same plane and at different positions to emit modulated light pulses continuously in a space scanning mode so as to capture depth information of different parts on the hands of the virtual operator in real time;
calculating the three-dimensional information of the hand according to a triangulation method based on the depth information of different parts on the hand detected respectively by the two depth detection devices;
the driving the two depth detection devices located at different positions on the same plane to emit modulated light pulses continuously in a spatial scanning manner to capture depth information of different parts on the hand of the virtual operator in real time comprises:
acquiring, for each scanning line capable of scanning to the hand of the virtual operator, a transmission time of a modulated light pulse corresponding to the scanning line and a reception time of a signal reflected back from a different portion of the hand of the virtual operator;
depth information of different parts on the virtual operator's hand in the coordinate system relative to the depth detection means emitting the modulated light pulses is determined based on the difference between the emission time and the reception time.
2. The method of claim 1, wherein the virtual hand model is composed of 15 rigid bodies preset based on anatomy and kinematics, and the virtual hand model has 22 degrees of freedom.
3. The method of claim 2, wherein the identifying the gesture of the virtual surgical operator based on the hand three-dimensional information, and the positional relationship among the virtual surgical instrument model, the virtual human tissue model, and the virtual hand model comprises:
determining a hand feature vector based on the hand three-dimensional information, the hand feature vector comprising at least one of: the bending degree of the finger, the number of the extended fingers, the included angle between the adjacent fingers and the distance between the adjacent fingertips;
inputting the hand feature vector into a trained classifier;
the trained classifier identifies gestures of the virtual surgical operator based on the input hand feature vectors.
4. The method of claim 3, wherein the identifying the gesture of the virtual surgical operator and the positional relationship between the virtual surgical instrument model, the virtual human tissue model, and the virtual hand model based on the hand three-dimensional information further comprises:
identifying whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model based on the hand three-dimensional information, and identifying whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model;
and synchronously displaying preset actions associated with the space position collision in the virtual space when the space position collision between the virtual surgical instrument model and the virtual hand model and/or the space position collision between the virtual human tissue model and the virtual surgical instrument model are identified.
5. The method of claim 4, wherein the identifying whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model, and whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model, based on the hand three-dimensional information comprises:
constructing a multi-level bounding box model based on the virtual surgical instrument model, the virtual human tissue model, and the virtual hand model, wherein the multi-level bounding box model comprises attribute information of the bounding boxes associated with each of the three models, the attribute information comprising at least the size, position information, and patch table information of each bounding box at each level;
and identifying, according to the multi-level bounding box model, whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model, and whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model.
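One conventional realization of such a multi-level bounding box model is a binary hierarchy of axis-aligned boxes built over each model's triangular patches, each node carrying exactly the attributes the claim names: position and size (as min/max corners here) plus the indices of its patches in the model's patch table. The axis-aligned box type and the median split on the longest extent are illustrative assumptions:

```python
# Sketch of a multi-level bounding box build for claim 5; the AABB type
# and the split rule are assumptions, not fixed by the patent.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class BoxNode:
    lo: np.ndarray                  # min corner (position information)
    hi: np.ndarray                  # max corner (lo/hi together give size)
    patches: list                   # indices into the model's patch table
    children: list = field(default_factory=list)

def build(verts: np.ndarray, tris: np.ndarray, ids: list,
          leaf_size: int = 8) -> BoxNode:
    """verts: (V, 3) vertex positions; tris: (T, 3) patch vertex indices."""
    pts = verts[tris[ids].reshape(-1)]
    node = BoxNode(pts.min(axis=0), pts.max(axis=0), list(ids))
    if len(ids) > leaf_size:
        centroids = verts[tris[ids]].mean(axis=1)       # one per patch
        axis = int(np.argmax(node.hi - node.lo))        # longest box extent
        order = np.argsort(centroids[:, axis])          # median split
        half = len(ids) // 2
        node.children = [
            build(verts, tris, [ids[i] for i in order[:half]], leaf_size),
            build(verts, tris, [ids[i] for i in order[half:]], leaf_size),
        ]
    return node
```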
6. The method of claim 5, wherein the identifying, according to the multi-level bounding box model, whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model, and whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model comprises:
for each group of two bounding boxes to be identified, determining whether the two bounding boxes collide in spatial position according to the maximal nominal radius and the minimal nominal radius of each bounding box and the distance between the geometric center points of the two bounding boxes;
if the distance between the geometric center points of the two bounding boxes is greater than the sum of the maximal nominal radii of the two bounding boxes, determining that the models respectively associated with the two bounding boxes do not collide in spatial position;
if the distance between the geometric center points of the two bounding boxes is less than the sum of the minimal nominal radii of the two bounding boxes, determining that the models respectively associated with the two bounding boxes collide in spatial position;
otherwise, further identifying, in a bounding box-patch detection manner, whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model and/or between the virtual human tissue model and the virtual surgical instrument model.
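Reading the maximal nominal radius as the radius of a sphere circumscribing a bounding box and the minimal nominal radius as that of a sphere inscribed in it (one natural reading; the claim does not define the terms), the three-way test sketches as:

```python
# Sketch of claim 6's coarse test; the sphere reading of the nominal
# radii is an assumption.
from enum import Enum
import numpy as np

class Coarse(Enum):
    NO_COLLISION = 0
    COLLISION = 1
    UNDECIDED = 2   # fall through to bounding box-patch detection

def coarse_test(c1, r1_max, r1_min, c2, r2_max, r2_min) -> Coarse:
    d = np.linalg.norm(np.asarray(c1) - np.asarray(c2))  # center distance
    if d > r1_max + r2_max:    # even the circumscribed spheres are disjoint
        return Coarse.NO_COLLISION
    if d < r1_min + r2_min:    # the inscribed spheres already overlap
        return Coarse.COLLISION
    return Coarse.UNDECIDED
```

Only the undecided middle band pays for the finer bounding box-patch test of claim 7.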
7. The method of claim 6, wherein the employing bounding box-patch detection to further identify whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model and/or between the virtual human tissue model and the virtual surgical instrument model comprises:
for each group of two bounding boxes to be identified, examining the spatial geometric relationship between one bounding box and the patches forming the other bounding box, and determining whether the two bounding boxes collide in spatial position according to a separating-axis algorithm;
if the one bounding box is identified as colliding in spatial position with any one of the patches forming the other bounding box, determining that the models respectively associated with the two bounding boxes collide in spatial position; otherwise, determining that the models respectively associated with the two bounding boxes do not collide in spatial position.
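For axis-aligned boxes, that box-versus-patch step is the standard triangle-box separating-axis test over 13 candidate axes: the three box face normals, the patch normal, and the nine edge cross products. The axis-aligned restriction below is our assumption; claim 7 does not mandate a box type.

```python
# Sketch of the separating-axis test named in claim 7, for one AABB
# against one triangular patch; the AABB restriction is an assumption.
import numpy as np
from itertools import product

def patch_hits_box(tri: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> bool:
    """tri: (3, 3) patch vertices; lo/hi: the box's min/max corners."""
    corners = np.array(list(product(*zip(lo, hi))))    # the 8 box corners
    edges = [tri[1] - tri[0], tri[2] - tri[1], tri[0] - tri[2]]
    axes = [np.eye(3)[i] for i in range(3)]            # box face normals
    axes.append(np.cross(edges[0], edges[1]))          # patch normal
    axes += [np.cross(e, n) for e in edges for n in np.eye(3)]  # 9 crosses
    for axis in axes:
        if np.linalg.norm(axis) < 1e-12:               # degenerate axis
            continue
        t, b = tri @ axis, corners @ axis              # project both shapes
        if t.max() < b.min() or b.max() < t.min():     # gap on this axis
            return False                               # separating axis found
    return True   # no separating axis: the patch and box intersect
```

A single separating axis is enough to rule the collision out; only when all 13 projections overlap is a spatial position collision reported, matching the claim's all-or-nothing decision.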
8. A virtual surgical simulation apparatus, applied to a virtual maxillofacial soft tissue model, comprising:
a preprocessing module, configured to determine a corresponding virtual hand model according to pre-collected hand three-dimensional information of a virtual surgical operator, display a virtual space in which the virtual surgery is performed, and establish a coordinate system associated with the virtual space, wherein the virtual space comprises a virtual surgical instrument model, a virtual human tissue model serving as the virtual surgical object, and the virtual hand model;
an information acquisition and identification module, configured to acquire the hand three-dimensional information in real time during the implementation of the virtual surgery, and to identify, based on the hand three-dimensional information, the gesture of the virtual surgical operator and the positional relationship among the virtual surgical instrument model, the virtual human tissue model, and the virtual hand model;
a virtual synchronization module, configured to determine an instruction and an operation action of the virtual surgical operator based on the gesture and the positional relationship, and to synchronously display, in the virtual space, a virtual picture of the virtual surgery according to the instruction and the operation action;
wherein the acquiring the hand three-dimensional information in real time comprises:
acquiring the hand three-dimensional information of the virtual surgical operator in real time by means of time-of-flight depth detection;
wherein the acquiring the hand three-dimensional information in real time by means of time-of-flight depth detection comprises:
driving two depth detection devices, located on the same plane but at different positions, to continuously emit modulated light pulses in a spatial scanning manner, so as to capture depth information of different parts of the virtual surgical operator's hand in real time;
calculating the hand three-dimensional information by triangulation based on the depth information of the different parts of the hand detected by each of the two depth detection devices;
wherein the driving the two depth detection devices located on the same plane but at different positions to continuously emit modulated light pulses in a spatial scanning manner, so as to capture depth information of different parts of the virtual surgical operator's hand in real time, comprises:
acquiring, for each scan line that can reach the virtual surgical operator's hand, the emission time of the modulated light pulse corresponding to that scan line and the reception time of the signal reflected back from each part of the hand;
determining, based on the difference between the emission time and the reception time, the depth information of the different parts of the hand in the coordinate system relative to the depth detection device that emitted the modulated light pulses.
9. An electronic device comprising the virtual surgical simulation apparatus of claim 8.
10. A storage medium having stored therein computer-executable instructions adapted to be loaded by a processor to perform the virtual surgical simulation method of any one of claims 1 to 7.
CN202111534312.7A 2021-12-15 2021-12-15 Virtual operation simulation method and device, electronic equipment and storage medium Active CN114387836B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111534312.7A CN114387836B (en) 2021-12-15 2021-12-15 Virtual operation simulation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114387836A (en) 2022-04-22
CN114387836B (en) 2024-03-22

Family

ID=81197905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111534312.7A Active CN114387836B (en) 2021-12-15 2021-12-15 Virtual operation simulation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114387836B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019591B (en) * 2022-08-05 2022-11-04 上海华模科技有限公司 Operation simulation method, device and storage medium
CN115830229B (en) * 2022-11-24 2023-10-13 江苏奥格视特信息科技有限公司 Digital virtual human 3D model acquisition device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004344491A (en) * 2003-05-23 2004-12-09 Hideo Fujimoto Virtual surgery simulation system
CN104778894A (en) * 2015-04-28 2015-07-15 关宏刚 Virtual simulation bone-setting manipulation training system and establishment method thereof
CN106325509A (en) * 2016-08-19 2017-01-11 北京暴风魔镜科技有限公司 Three-dimensional gesture recognition method and system
CN108256461A (en) * 2018-01-11 2018-07-06 深圳市鑫汇达机械设计有限公司 A kind of gesture identifying device for virtual reality device
CN108983978A (en) * 2018-07-20 2018-12-11 北京理工大学 virtual hand control method and device
CN111191322A (en) * 2019-12-10 2020-05-22 中国航空工业集团公司成都飞机设计研究所 Virtual maintainability simulation method based on depth perception gesture recognition
CN112714900A (en) * 2018-10-29 2021-04-27 深圳市欢太科技有限公司 Display screen operation method, electronic device and readable storage medium
CN112904994A (en) * 2019-11-19 2021-06-04 深圳岱仕科技有限公司 Gesture recognition method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN114387836A (en) 2022-04-22

Similar Documents

Publication Publication Date Title
JP6000387B2 (en) Master finger tracking system for use in minimally invasive surgical systems
JP5982542B2 (en) Method and system for detecting the presence of a hand in a minimally invasive surgical system
CN114387836B (en) Virtual operation simulation method and device, electronic equipment and storage medium
JP5702798B2 (en) Method and apparatus for hand gesture control in a minimally invasive surgical system
KR101789064B1 (en) Method and system for hand control of a teleoperated minimally invasive slave surgical instrument
WO2011065035A1 (en) Method of creating teaching data for robot, and teaching system for robot
KR20140048128A (en) Method and system for analyzing a task trajectory
JP2011110620A (en) Method of controlling action of robot, and robot system
CN115576426A (en) Hand interaction method for mixed reality flight simulator
CN105117000A (en) Method and device for processing medical three-dimensional image
KR101900922B1 (en) Method and system for hand presence detection in a minimally invasive surgical system
Chaman Surgical robotic nurse
JP2019197278A (en) Image processing apparatus, method of controlling image processing apparatus, and program
KR101953730B1 (en) Medical non-contact interface system and method of controlling the same
US20240185432A1 (en) System, method, and apparatus for tracking a tool via a digital surgical microscope
KR20180058484A (en) Medical non-contact interface system and method of controlling the same
Hummel et al. New Techniques for Hand Pose Estimation Based on Kinect Depth Data.
Casals et al. Quasi hands free interaction with a robot for online task correction
Parida Addressing hospital staffing shortages: dynamic surgical tool tracking and delivery using baxter
Fangwen et al. Vision-based 3D Shape Reconstructure of a Non-Rigid Object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant