CN114387836A - Virtual surgery simulation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN114387836A (application CN202111534312.7A)
Authority: CN (China)
Prior art keywords: virtual, model, hand, information, operator
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN114387836B
Inventors: 于洪波, 赵涵江, 程奂翀, 程梦佳, 庄瑜, 李萌, 沈国芳
Current and original assignee: Ninth Peoples Hospital Shanghai Jiaotong University School of Medicine (the listed assignee may be inaccurate)

Application filed by Ninth Peoples Hospital Shanghai Jiaotong University School of Medicine
Priority to CN202111534312.7A
Publication of CN114387836A
Application granted; publication of CN114387836B

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00: Simulators for teaching or training purposes

Abstract

The invention discloses a virtual surgery simulation method and apparatus, an electronic device, and a storage medium. The method comprises: determining a corresponding virtual hand model from pre-acquired three-dimensional hand information of a virtual surgery operator; displaying a virtual space and establishing a coordinate system, the virtual space containing a virtual surgical instrument model, a virtual human tissue model and the virtual hand model; acquiring the three-dimensional hand information in real time to recognize the operator's gestures and the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model; and determining the operator's instructions and operation actions from the gestures and positional relationships so as to synchronously display a virtual picture of the virtual surgery. The technical solution solves the prior-art problem that, when performing virtual surgery, the operator's gestures can be associated with the virtual hand model on the display only by relying on wearable devices such as information input devices or data gloves.

Description

Virtual surgery simulation method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of medical technology, and in particular, to a virtual surgery simulation method, apparatus, electronic device, and storage medium.
Background
Advances in science and technology drive social progress, and new technologies applied to the medical field have a profound influence on the medical industry.
Virtual surgery simulation systems are a typical application of such technology in medicine. Medical operations are complicated and varied, and preoperative design, intraoperative operation and postoperative prediction are all difficult. Virtual reality technology provides doctors with a virtual surgical environment and an interactive operation platform on which the whole process of a clinical operation can be simulated, offering an ideal way to address these difficulties. Compared with traditional surgery simulation, virtual surgery based on three-dimensional display has the advantages of being non-invasive, repeatable and designable.
Recognition of gestures and surgical actions is crucial for obtaining a realistic virtual surgical effect. Current virtual surgery generally relies on a keyboard, a mouse or a data glove to obtain information and instructions. However, the information obtained with a keyboard and mouse differs greatly from an actual operation performed by holding an instrument in the hand, which greatly reduces the virtual surgical effect. Although data gloves can capture gesture information by acquiring hand joint data in real time, they are generally expensive, restrictive in use and costly to apply, and are therefore hard to popularize.
Disclosure of Invention
The invention provides a virtual surgery simulation method and apparatus, an electronic device, and a storage medium, aiming to effectively solve the prior-art problem that, when performing virtual surgery, the gesture of the operator can be associated with the virtual hand model on the display only by relying on wearable devices such as information input devices or data gloves.
According to one aspect of the invention, there is provided a virtual surgery simulation method, the method comprising:
determining a corresponding virtual hand model according to the pre-acquired three-dimensional hand information of the virtual operator;
displaying a virtual space for performing the virtual surgery and establishing a coordinate system associated with the virtual space, wherein the virtual space comprises a virtual surgical instrument model, a virtual human tissue model serving as the virtual surgical object and the virtual hand model;
in the virtual surgery implementation process, the hand three-dimensional information is collected in real time, the gesture of the virtual surgery operator is identified based on the hand three-dimensional information, and the position relation among the virtual surgery instrument model, the virtual human tissue model and the virtual hand model is identified;
and determining instructions and operation actions of the virtual operation operator based on the gestures and the position relation, and synchronously displaying a virtual picture of the virtual operation in the virtual space according to the instructions and the operation actions.
Further, in the virtual surgery implementation process, the acquiring the three-dimensional hand information in real time includes:
and acquiring the three-dimensional hand information of the virtual surgery operator in real time by means of time-of-flight based depth detection.
Further, the acquiring the three-dimensional hand information in real time by means of time-of-flight based depth detection includes:
driving two depth detection devices which are positioned on the same plane and at different positions to respectively continuously emit modulated light pulses in a spatial scanning mode so as to capture depth information of different parts on the hand of the virtual operator in real time;
and calculating the three-dimensional hand information according to a triangulation method based on the depth information of different parts on the hand detected by the two depth detection devices respectively.
Further, the driving two depth detection devices located at different positions in the same plane to continuously emit modulated light pulses in a spatial scanning manner to capture depth information of different positions on the hand of the virtual surgical operator in real time includes:
for each scanning line which can scan the hand of the virtual surgical operator, acquiring the emission time of the modulated light pulse corresponding to the scanning line and the receiving time of the signal reflected from different parts of the hand of the virtual surgical operator;
determining depth information of different parts on the virtual operator's hand in the coordinate system relative to the depth detection device that emitted the modulated light pulse based on a difference between the emission time and the reception time.
Further, the virtual hand model is composed of 15 rigid bodies set in advance based on anatomy and kinematics, and has 22 degrees of freedom.
Further, the recognizing the gesture of the virtual surgical operator based on the three-dimensional hand information, and the position relationship among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model comprises:
determining a hand feature vector based on the hand three-dimensional information, the hand feature vector including at least one of the following information: the bending degree of the fingers, the number of the extending fingers, the included angle between the adjacent fingers and the distance between the adjacent finger tips;
inputting the hand feature vector into a trained classifier;
the trained classifier identifies gestures of the virtual surgical operator based on the input hand feature vector.
Further, the recognizing the gesture of the virtual surgical operator and the position relationship among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model based on the hand three-dimensional information further comprises:
identifying whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model and whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model based on the hand three-dimensional information;
and when the spatial position collision between the virtual surgical instrument model and the virtual hand model and/or the spatial position collision between the virtual human tissue model and the virtual surgical instrument model are recognized, synchronously displaying preset actions associated with the spatial position collision in the virtual space.
Further, the identifying whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model based on the three-dimensional hand information, and the identifying whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model comprises:
constructing a multi-level bounding box model based on the virtual surgical instrument model, the virtual human tissue model and the virtual hand model, wherein the multi-level bounding box model comprises attribute information of bounding boxes related to the virtual surgical instrument model, the virtual human tissue model and the virtual hand model, and the attribute information at least comprises the size and position information of each level of bounding box and surface patch information of each level of bounding box;
and identifying whether space position collision occurs between the virtual surgical instrument model and the virtual hand model according to the multi-stage bounding box model, and identifying whether space position collision occurs between the virtual human tissue model and the virtual surgical instrument model.
Further, the identifying whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model according to the multi-level bounding box model, and the identifying whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model includes:
for each group of two bounding boxes to be identified, determining whether the two bounding boxes collide with each other according to the maximum nominal radius and the minimum nominal radius of the two bounding boxes respectively and the distance between the geometric center points of the two bounding boxes;
wherein if the distance between the geometric center points of the two bounding boxes is greater than the sum of the respective maximum nominal radii of the two bounding boxes, then determining that no spatial position collision occurs with the models respectively associated with the two bounding boxes;
if the distance between the geometric center points of the two bounding boxes is smaller than the sum of the minimum nominal radii of the two bounding boxes, determining that the models respectively associated with the two bounding boxes have space position collision;
otherwise, further identifying whether space position collision occurs between the virtual surgical instrument model and the virtual hand model and/or whether space position collision occurs between the virtual human tissue model and the virtual surgical instrument model by adopting a bounding box-patch detection mode.
Further, the further identifying whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model and/or whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model by using a bounding box-patch detection method includes:
selecting a space geometric relationship between one bounding box and a surface patch forming the other bounding box according to each group of two bounding boxes to be identified, and determining whether the two bounding boxes are subjected to space position collision according to a separation axis algorithm;
and if one bounding box is identified to have a spatial position collision with one of the surface patches forming the other bounding box, determining that the models respectively associated with the two bounding boxes have the spatial position collision, and otherwise, determining that the models respectively associated with the two bounding boxes do not have the spatial position collision.
According to another aspect of the present invention, there is provided a virtual surgery simulation apparatus comprising a preprocessing module, an information acquisition and identification module, and a virtual synchronization module. The preprocessing module is used for determining a corresponding virtual hand model according to pre-acquired three-dimensional hand information of a virtual surgery operator, displaying a virtual space for performing the virtual surgery, and establishing a coordinate system associated with the virtual space, wherein the virtual space comprises a virtual surgical instrument model, a virtual human tissue model serving as the virtual surgical object, and the virtual hand model. The information acquisition and identification module is used for acquiring the three-dimensional hand information of the virtual surgery operator in real time during the virtual surgery, and for identifying, based on that information, the gesture of the operator and the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model. The virtual synchronization module is used for determining the instructions and operation actions of the operator based on the gestures and positional relationships, and for synchronously displaying the virtual picture of the surgery in the virtual space according to those instructions and actions.
According to another aspect of the present invention, there is provided an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing any of the virtual surgery simulation methods described above when executing the computer program.
According to another aspect of the present invention, there is provided a storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform any of the virtual surgical simulation methods described above.
Through one or more of the above embodiments of the present invention, at least the following technical effects can be achieved:
The three-dimensional hand information of the person performing the virtual surgery can be collected in real time, and the corresponding virtual hand model is determined from it, so no data acquisition device needs to be worn on the hand; the hands are freed and the surgery simulation can be performed flexibly. Meanwhile, during the virtual surgery, the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model are identified from the three-dimensional hand information, collisions between the different virtual models are judged from those positional relationships, and the virtual picture of the surgery is determined from the resulting instructions and operation actions, so a real surgical process can be simulated quickly and accurately. The operator's hand gestures can be displayed in real time and the surgical process controlled directly through them, free of the limitations of keyboard and mouse when obtaining instructions and operation actions; the operator's hand movement is thus unaffected by other equipment, achieving an immersive surgical simulation.
Drawings
The technical solution and other advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, which is to be read in connection with the accompanying drawings.
Fig. 1 is a flowchart illustrating steps of a virtual surgery simulation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the rigid bodies and degrees of freedom of a virtual hand model;
FIG. 3 shows the static gestures used to recognize the gestures of a virtual surgery operator;
FIG. 4 is a schematic diagram of the bounding box binary tree segmentation method;
FIG. 5 is a schematic diagram of the nominal bounding box radius determination method;
fig. 6 is a schematic structural diagram of a virtual surgery simulation apparatus according to a third embodiment of the present invention.
Detailed Description
Embodiments of the present invention provide a virtual surgery simulation method, an apparatus, an electronic device, and a storage medium, which can solve the technical problem that a gesture of a virtual surgery operator and a virtual hand model on a display need to be associated with each other only by means of a wearable device such as an information input device or a data glove when performing a virtual surgery in the prior art.
In order to solve the technical problems, the technical scheme in the embodiment of the invention has the following general idea:
in the embodiment of the invention, a corresponding virtual hand model is determined according to the pre-collected three-dimensional hand information of the virtual operator; displaying a virtual space for performing a virtual operation and establishing a coordinate system associated with the virtual space, wherein the virtual space comprises a virtual operation instrument model, a virtual human tissue model serving as a virtual operation object and a virtual hand model; in the virtual surgery implementation process, acquiring three-dimensional hand information in real time, and identifying gestures of a virtual surgery operator and position relations among a virtual surgery instrument model, a virtual human tissue model and a virtual hand model on the basis of the three-dimensional hand information; and determining the instruction and the operation action of the virtual operation operator based on the gesture and the position relation, and synchronously displaying the virtual picture of the virtual operation in the virtual space according to the instruction and the operation action. The invention provides the method, which is used for solving the technical problem that the gesture of a virtual operation operator can be associated with a virtual hand model on a display only by relying on wearing equipment such as information input equipment or data gloves when the virtual operation is performed in the prior art. Compare in prior art, this scheme can show virtual operation operating personnel's hand gesture in real time on the display, and through hand gesture direct control operation process, compare traditional keyboard, mouse input, in the simulation operation in-process, the hand of virtual operation can break away from the restriction of keyboard and mouse, also need not dress data acquisition equipment, be convenient for carry out virtual operation simulation in a flexible way, make the learning process natural, directly perceived, easy to carry out, enable the people and face its country simulation operation implementation process, promote user experience.
In addition, the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the preceding and following objects are in an "or" relationship, unless otherwise specified.
For a better understanding of the technical solutions of the present invention, they are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples are detailed illustrations of the technical solutions rather than limitations of them, and that the technical features in the embodiments and examples may be combined with each other in the absence of conflict.
Example one
The first embodiment of the present invention provides a virtual surgery simulation method that can be applied to a device; the invention does not limit what kind of equipment that device is.
As shown in fig. 1, the main flow of the virtual surgery simulation method is described as follows:
step S101: determining a corresponding virtual hand model according to the pre-acquired three-dimensional hand information of the virtual operator;
step S102: displaying a virtual space for performing a virtual operation and establishing a coordinate system associated with the virtual space, wherein the virtual space comprises a virtual operation instrument model, a virtual human tissue model serving as a virtual operation object and a virtual hand model;
step S103: in the virtual surgery implementation process, acquiring three-dimensional hand information in real time, and identifying gestures of a virtual surgery operator and position relations among a virtual surgery instrument model, a virtual human tissue model and a virtual hand model on the basis of the three-dimensional hand information;
step S104: and determining the instruction and the operation action of the virtual operation operator based on the gesture and the position relation, and synchronously displaying the virtual picture of the virtual operation in the virtual space according to the instruction and the operation action.
In step S101, a corresponding virtual hand model is determined according to the pre-acquired three-dimensional hand information of the virtual surgery operator. For example, a virtual surgery scene needs to be equipped with sensors that acquire fine hand motion information from the operator; from the three-dimensional hand information one can calculate the movement acceleration, speed and direction in three-dimensional space while the hand is moving, and the rotation angle, angular speed and rotation direction of the palm while the hand rotates, and thereby determine the corresponding virtual hand model. In the present invention the hand information may be obtained by a single sensor or by several; in practice, a person skilled in the art can choose the sensor type according to specific needs, and the invention is not limited in this respect.
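As a rough sketch of that computation (an illustration only, not code from the patent; the 60 Hz sampling rate and the palm-centre data layout are assumptions), the motion quantities can be estimated from successive three-dimensional samples by finite differences:

```python
import numpy as np

def hand_kinematics(positions, palm_angles, dt=1.0 / 60.0):
    """Estimate hand motion quantities from successive 3D samples.

    positions:   (N, 3) palm-centre coordinates per frame (assumed layout)
    palm_angles: (N,) palm rotation angle in radians per frame
    dt:          sampling interval; a 60 Hz frame rate is assumed
    """
    positions = np.asarray(positions, dtype=float)
    velocity = np.gradient(positions, dt, axis=0)      # movement speed and direction
    acceleration = np.gradient(velocity, dt, axis=0)   # movement acceleration
    speed = np.linalg.norm(velocity, axis=1)
    direction = velocity / np.maximum(speed[:, None], 1e-9)
    angular_speed = np.gradient(np.asarray(palm_angles, dtype=float), dt)
    return speed, direction, acceleration, angular_speed
```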
After step S101 is executed, step S102 is executed to display a virtual space for performing a virtual operation and establish a coordinate system associated with the virtual space, where the virtual space includes a virtual surgical instrument model, a virtual human tissue model as a virtual operation object, and a virtual hand model. For example, in a virtual surgery system, a virtual space for performing a virtual surgery needs to be displayed on a display interface, and a coordinate system associated with the virtual space is established in the virtual space to accurately display a virtual model. The display device may be a VR head display, a computer display or a liquid crystal display, and the VR head display may be VR glasses, VR eyecups or VR helmets, and the type of the display device is not limited in the present invention. In the virtual space, at least a virtual surgical instrument model, a virtual human tissue model as a virtual surgical object, and a virtual hand model are included, wherein the virtual surgical instrument model is determined according to the surgical type, and may include, for example, a scalpel, a surgical scissors, a tissue forceps, a sterilized cotton, and the like. Accordingly, the virtual human tissue model is also determined according to the operation type, such as a virtual craniomaxillofacial multi-tissue biomechanical model, a stomach model and the like. In addition to the virtual surgical instrument model and the virtual human tissue model, a virtual hand model corresponding to the gesture of the virtual surgical operator is displayed in real time, and the hand motions made by the virtual surgical operator in reality are synchronously displayed on the display.
After step S102 is executed, step S103 is executed: during the virtual surgery, the three-dimensional hand information is collected in real time, and the gesture of the virtual surgery operator and the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model are identified from it. Illustratively, the virtual surgery system collects the three-dimensional hand information in real time, performs depth calculation on it to recognize the operator's gesture, and obtains from it the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model, including the precise position at which different virtual models touch. During the virtual surgery, the virtual surgical instrument model and the virtual human tissue model are displayed statically as long as they are not touched, while the virtual hand model moves with the operator's hand; when the virtual hand model moves to a virtual surgical instrument model, the system judges from the real-time three-dimensional hand information whether the virtual hand model touches that instrument model and, if so, the precise position of the touch. After the virtual hand model grasps the virtual surgical instrument model, the three-dimensional hand information must again be acquired in real time to determine whether the instrument model touches the virtual human tissue model and the precise position of that touch.
After step S103 is executed, step S104 is executed: the instructions and operation actions of the virtual surgery operator are determined based on the gestures and positional relationships, and the virtual picture of the virtual surgery is displayed synchronously in the virtual space accordingly. For example, the operator may issue instructions controlling the simulated surgery through preset specific gestures; such an instruction controls the type of virtual surgery and the operation steps or actions during its execution. For instance, when the operator makes a scissor-hand gesture, the operation action is to cut the virtual human tissue model, and when the operator makes the same hand action twice in succession, the next step of the virtual surgical operation is performed. From the positional relationships among the virtual models, the collision state between them and the specific collision position can be determined. When the virtual hand model touches the virtual surgical instrument model, the operation instruction for the surgical operation must be acquired and the operation action to be executed by the virtual hand model determined. For example, when the virtual hand model touches the virtual scalpel model, the instruction and action for grasping it must be determined, such as holding it by one of the bow-holding, pen-holding, grasping and reverse-pick methods. Once the operation action is determined, it is executed according to the touch position; for example, the virtual scalpel model is fixed by the thumb and index finger of the right hand of the virtual hand model and then controlled to move with the virtual hand model. Similarly, if the virtual surgical instrument model touches the virtual human tissue model, the specific surgical operation performed on the tissue, such as cutting, puncturing or suturing, is determined through the instructions and operation actions. Optionally, if the positional relationship shows the virtual hand model touching the virtual human tissue model directly, the operation to be performed, such as lifting, pressing, turning, raising or releasing, can be determined through the operation instruction.
The virtual picture of the virtual surgery is displayed synchronously in the virtual space according to the instructions and operation actions. Illustratively, the specific operation action of the virtual surgery can be determined from the instruction and operation action, and the operation position on the virtual human tissue model can be obtained from the positional relationship and the touch position; once both the operation position and the operation action are determined, the corresponding surgical effect is displayed at that position. For example, in a virtual operation based on a craniomaxillofacial multi-tissue biomechanical model, the operator controls the virtual hand model through gestures to grasp the virtual scalpel model and move its tip to the virtual maxillofacial soft tissue model; the virtual surgery system acquires an instruction to perform a puncture, determines that the operation action is puncturing, and dynamically simulates and displays on the virtual maxillofacial soft tissue model the effect of the scalpel puncturing the soft tissue, simulating a real surgical environment.
In the embodiment of the invention, the three-dimensional hand information of the person performing the virtual surgery can be collected in real time and the corresponding virtual hand model determined from it, so no data acquisition device needs to be worn on the hand, the hands are freed, and the surgery simulation can be performed flexibly. Meanwhile, during the virtual surgery, the positional relationships among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model are identified from the three-dimensional hand information, and the virtual picture of the surgery is determined from the gestures and positional relationships, so the real surgical process can be simulated accurately. The operator's hand gestures can be displayed in real time and the surgical process controlled directly through them, free of the limitations of keyboard and mouse when obtaining instructions and operation actions; the operator's hand movement is thus unaffected by other equipment, achieving an immersive surgical simulation.
Example two
Based on the same inventive concept as the virtual surgery simulation method in the first embodiment of the present invention, the virtual surgery simulation method in the second embodiment of the present invention includes:
furthermore, the hand three-dimensional information of the virtual operation operator is collected in real time through a depth detection mode based on the flight time. The time-of-flight ranging method belongs to the two-way ranging technology, and mainly utilizes the time-of-flight of signals back and forth between two asynchronous transceivers or reflected surfaces to measure the distance between nodes. In the virtual surgery simulation system, information is collected through a depth detection mode based on flight time, and particularly, three-dimensional information of a hand of a virtual surgery operator needs to be collected in real time, wherein the hand comprises each skeleton and joint of fingers, a palm, a wrist and the like.
Further, two depth detection devices located at different positions in the same plane are driven to continuously emit modulated light pulses in a spatial scanning manner, so as to capture depth information of different parts of the virtual surgery operator's hand in real time; the three-dimensional hand information is then calculated by triangulation from the depth information detected by the two devices. Specifically, the operator's hand can be tracked by depth detection devices such as cameras or sensors to acquire hand information. A depth detection device measures the round-trip time of light between itself and the operator's hand, and from that calculates the distance. It continuously emits modulated light pulses toward multiple positions on the hand in a spatial scanning manner, receives the pulses reflected from the hand, and captures the depth information of the different hand parts in real time through depth calculation.
The three-dimensional hand information is calculated by triangulation from the depth information of different hand parts detected by each of the two depth detection devices. Specifically, the two devices can capture picture information of the operator's hand, yielding two views of the same scene from different angles; by comparing the differences between the images obtained by the different cameras at the same moment, the depth information is computed algorithmically, and the position and displacement of the operator's hand in three-dimensional space are obtained through multi-angle three-dimensional imaging.
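As a rough illustration of this triangulation step, the sketch below recovers a three-dimensional hand point from a matched pixel pair in two rectified views; the rectified stereo setup, the principal-point-centred pixel coordinates and the parameter names are assumptions rather than details taken from the patent:

```python
import numpy as np

def triangulate_point(pixel_left, pixel_right, focal_px, baseline_m):
    """Recover a 3D point seen by two devices lying in the same plane.

    pixel_left/right: (x, y) coordinates of the same hand point in the two
                      views, measured from each image's principal point
    focal_px:         focal length in pixels (identical devices assumed)
    baseline_m:       distance between the two devices in meters
    """
    disparity = pixel_left[0] - pixel_right[0]
    if disparity <= 0:
        raise ValueError("point cannot be triangulated from these views")
    z = focal_px * baseline_m / disparity   # depth from similar triangles
    x = pixel_left[0] * z / focal_px        # back-project into 3D
    y = pixel_left[1] * z / focal_px
    return np.array([x, y, z])
```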
Further, for each scan line that can scan the operator's hand, the emission time of the modulated light pulse corresponding to that scan line and the reception times of the signals reflected from different parts of the hand are obtained, and the depth information of those parts in the coordinate system, relative to the depth detection device that emitted the pulse, is determined from the difference between emission and reception times. Illustratively, after acquiring the emission time of the pulse and the reception time of the reflected signal, the difference between the two is calculated, and from this time of flight and the speed of light the depth of each hand part relative to the emitting device is determined. In this way the motion acceleration, moving speed and moving direction of the operator's hand in three-dimensional space, and the rotation angle, angular speed and rotation direction of the palm as the hand rotates, can all be obtained in real time.
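The depth computation itself reduces to the round-trip relation just described; a minimal sketch, assuming timestamps in seconds:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(emit_time_s, receive_time_s):
    """Depth of a hand point from one scan line's modulated light pulse.

    The pulse travels to the hand and back, so the one-way distance is
    c * (t_receive - t_emit) / 2.
    """
    return SPEED_OF_LIGHT * (receive_time_s - emit_time_s) / 2.0
```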
Further, the virtual hand model is composed of 15 rigid bodies set in advance based on anatomy and kinematics, and has 22 degrees of freedom. Exemplarily, fig. 2 is a schematic diagram of the rigid bodies and degrees of freedom of a virtual hand model constructed with 15 rigid bodies and 22 degrees of freedom. Since bones do not change shape during movement, each can be treated as a rigid body; corresponding to the bones in the anatomical structure of the hand, 15 rigid bodies are designed for each virtual hand model. As shown in fig. 2, the 15 rectangular solids in one virtual hand model represent the 15 rigid bodies: the 3 bones of each finger correspond to 3 rigid bodies, so the 15 finger bones of one hand correspond to 15 rigid bodies. The 22 ellipses in fig. 2 represent the 22 degrees of freedom of the joints that move between the bones, 7 of which are assigned to the palm in order to improve the accuracy of gesture recognition. In the virtual surgery, the spatial position, geometric posture, grasping state and so on of the virtual hand model in the virtual space can be controlled by controlling the positions and angles of these rigid bodies and degrees of freedom.
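One possible data layout consistent with this 15-rigid-body, 22-degree-of-freedom description is sketched below; the segment names and the split of one degree of freedom per finger bone plus seven for the palm are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RigidBody:
    name: str
    dof_angles: List[float]  # joint angles for this body's degrees of freedom

@dataclass
class VirtualHand:
    """15 rigid bodies (3 bones x 5 fingers) plus 7 palm degrees of freedom."""
    bodies: List[RigidBody] = field(default_factory=lambda: [
        RigidBody(f"{finger}_{segment}", [0.0])
        for finger in ("thumb", "index", "middle", "ring", "little")
        for segment in ("proximal", "middle", "distal")
    ])
    palm_dof: List[float] = field(default_factory=lambda: [0.0] * 7)

    def total_dof(self) -> int:
        return sum(len(b.dof_angles) for b in self.bodies) + len(self.palm_dof)

assert VirtualHand().total_dof() == 22  # matches the description above
```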
Further, a hand feature vector is determined based on the three-dimensional hand information; it contains at least one of the following: the bending degree of the fingers, the number of extended fingers, the angle between adjacent fingers, and the distance between adjacent fingertips. The hand feature vector is input into a trained classifier, which recognizes the operator's gesture from it. Illustratively, the hand feature vector can be computed from the three-dimensional hand information, its composition being determined by the needs of the virtual surgery simulation system as one or more of finger curvature, number of extended fingers, angles between adjacent fingers and distances between adjacent fingertips. A multi-class classifier is trained using an optimal classification function; after training comes an online recognition stage requiring high-speed, high-precision recognition, in which hand feature vectors are mapped to labels arranged sequentially as 0, 1, 2 and so on, each corresponding to a gesture. Fig. 3 shows the static gesture database pre-stored in the virtual surgery simulation system for recognizing the operator's gestures; it stores a number of static gestures as the objects to be recognized. From the hand feature vector, the operator's gesture can be matched against this static gesture database.
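The patent does not name a specific classifier; purely as an illustration, the sketch below assembles the four feature groups into one vector and trains a multi-class support vector machine whose labels 0, 1, 2, ... index the static gesture database:

```python
import numpy as np
from sklearn.svm import SVC

def hand_feature_vector(finger_curvatures, n_extended, inter_finger_angles, tip_distances):
    """Concatenate the four feature groups named above into one vector."""
    return np.concatenate([
        np.atleast_1d(finger_curvatures),
        [float(n_extended)],
        np.atleast_1d(inter_finger_angles),
        np.atleast_1d(tip_distances),
    ])

def train_gesture_classifier(features, labels):
    """features: one vector per labeled sample gesture (training data assumed);
    labels: 0, 1, 2, ... mapped to entries of the static gesture database."""
    clf = SVC(kernel="rbf", decision_function_shape="ovr")  # one-vs-rest multi-class SVM
    clf.fit(np.asarray(features), np.asarray(labels))
    return clf

def recognize_gesture(clf, feature_vector):
    return int(clf.predict(np.asarray(feature_vector).reshape(1, -1))[0])
```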
Whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model, and between the virtual human tissue model and the virtual surgical instrument model, is identified based on the three-dimensional hand information. When such a collision is recognized, the preset action associated with it is displayed synchronously in the virtual space. Illustratively, the collision relationships between the different virtual models, including between the virtual surgical instrument model and the virtual hand model and between the virtual human tissue model and the virtual surgical instrument model, can be identified from the three-dimensional hand information. Meanwhile, the gesture actions associated with the spatial position collision are displayed synchronously in the virtual space, simulating the operating scene of a real operation.
Further, a multi-level bounding box model is constructed based on the virtual surgical instrument model, the virtual human tissue model and the virtual hand model; it contains attribute information of the bounding boxes associated with each model, at least the size and position of each level of bounding box and the patch information of each level. Whether spatial position collisions occur between the virtual surgical instrument model and the virtual hand model, and between the virtual human tissue model and the virtual surgical instrument model, is then identified from this multi-level bounding box model. Illustratively, collision detection between virtual models is performed on the multi-level bounding box model, whose attribute information may specifically include description information, patch information (points and normal vectors) and bounding box binary tree information. The description information describes basic information of the model, such as model ID, name, precision and root bounding box size. The patch information includes a vertex linked list, a normal vector linked list and a patch table. The bounding boxes of a virtual model are divided level by level: boxes in a non-intersecting state are removed and boxes in an intersecting state are retained. For example, fig. 4 is a schematic diagram of the bounding box binary tree segmentation method: the root bounding box is first divided into left and right boxes; if both intersect the figure, a second division (shown by the two-dot chain line) splits them into 4 boxes; non-intersecting boxes are removed and a third division is applied to the remainder, splitting the boxes of the second division into 8; intersecting boxes are kept, non-intersecting ones are discarded, and the same division is repeated until the bounding boxes of each virtual model are subdivided to the required precision.
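A minimal sketch of such a bounding box binary tree follows; splitting across the longest axis and stopping at a fixed patch count stand in for the precision requirement, which the text does not specify numerically:

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class BoxNode:
    center: np.ndarray            # geometric centre of this level's box
    half_extents: np.ndarray      # half the box size along each axis
    patch_ids: List[int]          # patches (faces) contained in the box
    left: Optional["BoxNode"] = None
    right: Optional["BoxNode"] = None

def build_box_tree(centroids: np.ndarray, ids: List[int], min_patches: int = 8) -> BoxNode:
    """Recursively split a box across its longest axis until each leaf
    holds at most min_patches patches (an assumed stop criterion).
    centroids: (N, 3) array of patch centroids for the whole model."""
    pts = centroids[ids]
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    node = BoxNode(center=(lo + hi) / 2, half_extents=(hi - lo) / 2, patch_ids=list(ids))
    if len(ids) > min_patches:
        axis = int(np.argmax(node.half_extents))
        mid = node.center[axis]
        left_ids = [i for i in ids if centroids[i][axis] <= mid]
        right_ids = [i for i in ids if centroids[i][axis] > mid]
        if left_ids and right_ids:  # guard against degenerate splits
            node.left = build_box_tree(centroids, left_ids, min_patches)
            node.right = build_box_tree(centroids, right_ids, min_patches)
    return node
```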
Further, for each group of two bounding boxes to be identified, whether the two bounding boxes collide with each other in space position is determined according to the maximum nominal radius and the minimum nominal radius of the two bounding boxes respectively and the distance between the geometric center points of the two bounding boxes. Wherein if the distance between the geometric center points of the two bounding boxes is greater than the sum of the respective maximum nominal radii of the two bounding boxes, then it is determined that the models respectively associated with the two bounding boxes do not suffer a spatial location collision. And if the distance between the geometric center points of the two bounding boxes is less than the sum of the minimum nominal radii of the two bounding boxes, determining that the models respectively associated with the two bounding boxes are subjected to space position collision. Otherwise, whether space position collision occurs between the virtual surgical instrument model and the virtual hand model and/or whether space position collision occurs between the virtual human tissue model and the virtual surgical instrument model is further identified by adopting a bounding box-surface patch detection mode.
Illustratively, the positional relationship between bounding boxes is judged by the nominal radius method. For each virtual model's bounding box, the maximum distance from the box centre to a point on the box surface is called the maximum nominal radius r_max, and the minimum such distance is called the minimum nominal radius r_min. For a bounding box, the maximum nominal radius r_max equals half the diagonal length of the box, and the minimum nominal radius r_min equals half the length of its shortest side.
FIG. 5 is a schematic diagram of the nominal bounding box radius determination method. Let the two bounding boxes requiring intersection detection be B1 and B2; the maximum and minimum nominal radii of B1 are r_max1 and r_min1, those of B2 are r_max2 and r_min2; the centres of the two bounding boxes are C1 and C2, and the distance between C1 and C2 is d. The basic principle of the intersection detection method for B1 and B2 is then as follows:
d > r_max1 + r_max2: B1 and B2 are separated (no spatial position collision);
d < r_min1 + r_min2: B1 and B2 intersect (spatial position collision);
r_min1 + r_min2 <= d <= r_max1 + r_max2: indeterminate; further detection is required.
when the distance between the bounding boxes of the two virtual models is less than the minimum nominal radius r of the two bounding boxesminWhen the two bounding boxes are added, the two bounding boxes are in an intersecting state, namely, the models corresponding to the two bounding boxes generate space position collision. When the distance between the bounding boxes of the two virtual models is larger than the maximum nominal radius r of the two bounding boxesmaxWhen the sum is obtained, the two bounding boxes are in a separated state, and the models corresponding to the two bounding boxes do not collide in a spatial position. When the distance between the bounding boxes of the two virtual models is larger than the minimum nominal radius r of the two bounding boxesminSum of, and less than the maximum nominal radius r of the two bounding boxesmaxAnd when the sum is obtained, the collision relation of the two bounding boxes cannot be judged, and then the spatial position collision condition between the virtual models is further identified by adopting a bounding box-patch detection mode.
Further, for each group of two bounding boxes to be identified, the spatial geometric relationship between one bounding box and the patches forming the other is examined, and whether the two collide in spatial position is determined by the separating axis algorithm. If one bounding box is found to collide in spatial position with one of the patches forming the other, the models associated with the two boxes are determined to collide; otherwise they do not. For example, collision detection proceeds by intersection tests between bounding boxes and patches; since the patches in practice are usually triangular, the separating axis method is adopted to speed up the intersection test between a bounding box and a triangular patch. The separating axis algorithm is a common method for detecting intersection between convex polyhedra in space; in collision detection, a triangular patch is regarded as a degenerate convex polyhedron, and the intersection test is carried out between the patch and the bounding box. For two bounding boxes to be identified in the virtual surgery system, the spatial geometric relationship between one box and the patches forming the other is determined, and the collision state of the two boxes follows from the separating axis algorithm: when one box collides in spatial position with one of the patches forming the other, the models corresponding to the two boxes collide in spatial position; otherwise the two models do not collide.
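As an illustration of the separating axis test between a bounding box and a triangular patch, the sketch below uses the standard 13-axis formulation (3 box face normals, the triangle normal and 9 edge cross products) for an axis-aligned box; treating the boxes as axis-aligned is an inference from the binary tree subdivision, not something the text states:

```python
import numpy as np

def box_triangle_intersect(center, half_extents, triangle):
    """Separating axis test between an axis-aligned box and a triangle.

    center, half_extents: box parameters; triangle: (3, 3) vertex array.
    Returns True if no separating axis exists, i.e. the two intersect.
    """
    h = np.asarray(half_extents, dtype=float)
    v = np.asarray(triangle, dtype=float) - np.asarray(center, dtype=float)
    edges = [v[1] - v[0], v[2] - v[1], v[0] - v[2]]
    axes = [np.eye(3)[i] for i in range(3)]       # 3 box face normals
    axes.append(np.cross(edges[0], edges[1]))     # 1 triangle normal
    for e in edges:                               # 9 edge cross products
        for i in range(3):
            axes.append(np.cross(np.eye(3)[i], e))
    for a in axes:
        if np.linalg.norm(a) < 1e-12:             # skip degenerate axes
            continue
        tri_proj = v @ a                          # triangle projection interval
        box_radius = np.abs(a) @ h                # box projection half-length
        if tri_proj.min() > box_radius or tri_proj.max() < -box_radius:
            return False                          # separating axis found
    return True
```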
Example three
Based on the same inventive concept as the virtual surgery simulation method in the first embodiment, an embodiment of the present invention provides an apparatus; referring to fig. 6, the apparatus includes:
a preprocessing module 201, configured to determine a corresponding virtual hand model according to pre-acquired three-dimensional hand information of a virtual operation operator, display a virtual space for performing the virtual operation, and establish a coordinate system associated with the virtual space, where the virtual space includes a virtual operation instrument model, a virtual human tissue model serving as the virtual operation object, and the virtual hand model;
the information acquisition and identification module 202 is configured to acquire the three-dimensional hand information in real time during the virtual surgery, and identify the gesture of the virtual surgery operator and the position relationship among the virtual surgical instrument model, the virtual human tissue model and the virtual hand model based on the three-dimensional hand information;
and the virtual synchronization module 203 is configured to determine an instruction and an operation action of the virtual operation operator based on the gesture and the position relationship, and synchronously display a virtual picture of the virtual operation in the virtual space according to the instruction and the operation action.
According to another aspect of the present invention, the present invention provides an electronic device, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the virtual surgery simulation method according to any embodiment of the present invention when executing the computer program.
The present invention also provides a storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform a virtual surgery simulation method according to any of the embodiments of the present invention.
In summary, although the present invention has been described with reference to preferred embodiments, those embodiments are not intended to limit it; those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention, and the scope of the invention shall therefore be determined by the appended claims.

Claims (13)

1. A virtual surgical simulation method, comprising:
determining a corresponding virtual hand model according to the pre-acquired three-dimensional hand information of the virtual operator;
displaying a virtual space for performing the virtual surgery and establishing a coordinate system associated with the virtual space, wherein the virtual space comprises a virtual surgical instrument model, a virtual human tissue model serving as the virtual surgical object and the virtual hand model;
in the virtual surgery implementation process, the hand three-dimensional information is collected in real time, the gesture of the virtual surgery operator is identified based on the hand three-dimensional information, and the position relation among the virtual surgery instrument model, the virtual human tissue model and the virtual hand model is identified;
and determining instructions and operation actions of the virtual operation operator based on the gestures and the position relation, and synchronously displaying a virtual picture of the virtual operation in the virtual space according to the instructions and the operation actions.
2. The method of claim 1, wherein said collecting said hand three-dimensional information in real-time during said virtual surgery comprises:
and acquiring the three-dimensional hand information of the virtual surgery operator in real time by means of time-of-flight based depth detection.
3. The method of claim 2, wherein the acquiring the three-dimensional hand information in real-time by means of time-of-flight based depth detection comprises:
driving two depth detection devices which are positioned on the same plane and at different positions to respectively continuously emit modulated light pulses in a spatial scanning mode so as to capture depth information of different parts on the hand of the virtual operator in real time;
and calculating the three-dimensional hand information according to a triangulation method based on the depth information of different parts on the hand detected by the two depth detection devices respectively.
4. The method of claim 3, wherein said actuating two depth detection devices located at different positions in the same plane each to emit modulated light pulses continuously in a spatially scanned manner to capture depth information of different locations on the virtual operator's hand in real time comprises:
for each scanning line which can scan the hand of the virtual surgical operator, acquiring the emission time of the modulated light pulse corresponding to the scanning line and the receiving time of the signal reflected from different parts of the hand of the virtual surgical operator;
determining depth information of different parts on the virtual operator's hand in the coordinate system relative to the depth detection device that emitted the modulated light pulse based on a difference between the emission time and the reception time.
5. The method of claim 1, wherein the virtual hand model is composed of 15 rigid bodies preset based on anatomy and kinematics, and the virtual hand model has 22 degrees of freedom.
6. The method of claim 5, wherein the identifying, based on the three-dimensional hand information, the gesture of the virtual surgical operator and the positional relationship among the virtual surgical instrument model, the virtual human tissue model, and the virtual hand model comprises:
determining a hand feature vector based on the three-dimensional hand information, the hand feature vector including at least one of the following: the degree of bending of each finger, the number of extended fingers, the angle between adjacent fingers, and the distance between adjacent fingertips;
inputting the hand feature vector into a trained classifier;
and identifying, by the trained classifier, the gesture of the virtual surgical operator based on the input hand feature vector.
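A sketch of how such a feature vector might be assembled from 3-D joint positions; the bend measure, the extension threshold, and the array layout are all assumptions, and any standard trained classifier (e.g. an SVM or a small MLP) could fill the claimed classifier's role.

```python
import numpy as np

def angle_between(u: np.ndarray, v: np.ndarray) -> float:
    """Angle in radians between two 3-D vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def hand_feature_vector(tips, bases, palm_center):
    """Assemble claim 6's features from fingertip positions `tips` and
    finger-base positions `bases` (lists of np.ndarray, thumb to little
    finger) plus the palm center; the layout is an illustrative choice."""
    bend = [angle_between(tip - base, palm_center - base)
            for tip, base in zip(tips, bases)]              # finger bending
    extended = float(sum(b > 2.0 for b in bend))            # extended-finger count
    splay = [angle_between(a - palm_center, b - palm_center)
             for a, b in zip(tips, tips[1:])]               # adjacent-finger angles
    tip_gaps = [float(np.linalg.norm(b - a))
                for a, b in zip(tips, tips[1:])]            # adjacent fingertip distances
    return np.array(bend + [extended] + splay + tip_gaps)
```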
7. The method of claim 6, wherein the identifying, based on the three-dimensional hand information, the gesture of the virtual surgical operator and the positional relationship among the virtual surgical instrument model, the virtual human tissue model, and the virtual hand model further comprises:
identifying, based on the three-dimensional hand information, whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model, and whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model;
and when a spatial position collision between the virtual surgical instrument model and the virtual hand model and/or between the virtual human tissue model and the virtual surgical instrument model is identified, synchronously displaying, in the virtual space, the preset action associated with that spatial position collision.
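A minimal dispatch sketch for this reaction step, with a hypothetical scene object and action table (none of these names come from the patent):

```python
# Map each colliding model pair to the preset action shown in the
# virtual space; the pairs and action names are illustrative only.
PRESET_ACTIONS = {
    ("instrument", "hand"): "grasp_instrument",
    ("tissue", "instrument"): "deform_tissue",
}

def on_collision(model_pair, scene):
    action = PRESET_ACTIONS.get(model_pair)
    if action is not None:
        scene.play(action)  # synchronously displayed preset action
```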
8. The method of claim 7, wherein the identifying, based on the three-dimensional hand information, whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model, and whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model comprises:
constructing a multi-level bounding box model based on the virtual surgical instrument model, the virtual human tissue model, and the virtual hand model, wherein the multi-level bounding box model comprises attribute information of the bounding boxes associated with each of the three models, the attribute information including at least the size and position information of each level of bounding box and the patch information of each level of bounding box;
and identifying, according to the multi-level bounding box model, whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model, and whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model.
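One way to hold the claimed attribute information (per-level size, position, and patch information) is a simple tree node; the field names are illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BoundingBoxNode:
    """One level of a multi-level bounding box hierarchy. Leaf nodes
    carry the surface patches (triangles) of the wrapped model region."""
    center: Tuple[float, float, float]        # position information
    half_extents: Tuple[float, float, float]  # size information
    patches: List[Tuple[int, int, int]] = field(default_factory=list)  # triangle vertex indices
    children: List["BoundingBoxNode"] = field(default_factory=list)    # finer-level boxes
```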
9. The method of claim 8, wherein the identifying, according to the multi-level bounding box model, whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model, and whether a spatial position collision occurs between the virtual human tissue model and the virtual surgical instrument model comprises:
for each pair of bounding boxes to be tested, determining whether the two bounding boxes collide according to their respective maximum and minimum nominal radii and the distance between their geometric center points;
wherein, if the distance between the geometric center points of the two bounding boxes is greater than the sum of their maximum nominal radii, it is determined that no spatial position collision occurs between the models associated with the two bounding boxes;
if the distance between the geometric center points of the two bounding boxes is smaller than the sum of their minimum nominal radii, it is determined that a spatial position collision occurs between the models associated with the two bounding boxes;
otherwise, a bounding box-patch detection approach is further used to identify whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model and/or between the virtual human tissue model and the virtual surgical instrument model.
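The three-way pretest reads naturally as: reject when even the circumscribed spheres (maximum nominal radius) are apart, accept when even the inscribed spheres (minimum nominal radius) overlap, and otherwise defer to the finer test of claim 10. A sketch with illustrative names:

```python
import math

def bounding_box_pretest(c1, r1_min, r1_max, c2, r2_min, r2_max):
    """Returns False (no collision), True (collision), or None
    (undecided: fall through to the bounding box-patch test).
    c1, c2 are geometric center points; *_min / *_max are each box's
    minimum and maximum nominal radii."""
    dist = math.dist(c1, c2)
    if dist > r1_max + r2_max:
        return False   # circumscribed spheres do not even touch
    if dist < r1_min + r2_min:
        return True    # inscribed spheres already overlap
    return None        # boundary case: needs the patch-level test
```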
10. The method of claim 9, wherein the further identifying, using a bounding box-patch detection approach, whether a spatial position collision occurs between the virtual surgical instrument model and the virtual hand model and/or between the virtual human tissue model and the virtual surgical instrument model comprises:
for each pair of bounding boxes to be tested, determining, according to a separating axis algorithm applied to the spatial geometric relationship between one bounding box and the patches forming the other bounding box, whether a spatial position collision occurs between the two;
and if one bounding box is identified as colliding with any one of the patches forming the other bounding box, determining that a spatial position collision occurs between the models associated with the two bounding boxes; otherwise, determining that no spatial position collision occurs between the models associated with the two bounding boxes.
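At the patch level, the separating axis algorithm declares a collision only if no candidate axis separates the projections of the two convex shapes. A generic sketch, where the caller supplies the candidate axes (for a box-vs-triangle test: the box's face normals, the triangle normal, and the edge-edge cross products):

```python
import numpy as np

def separated_on(axis: np.ndarray, verts_a: np.ndarray, verts_b: np.ndarray) -> bool:
    """True if the projection intervals of the two vertex sets onto
    `axis` do not overlap, i.e. `axis` is a separating axis."""
    pa, pb = verts_a @ axis, verts_b @ axis
    return pa.max() < pb.min() or pb.max() < pa.min()

def sat_collides(verts_a: np.ndarray, verts_b: np.ndarray, axes) -> bool:
    """Two convex shapes (vertex arrays of shape (N, 3)) collide iff
    no candidate axis separates them."""
    return not any(separated_on(axis, verts_a, verts_b) for axis in axes)
```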
11. A virtual surgery simulation device, comprising:
a preprocessing module, configured to determine a corresponding virtual hand model according to pre-acquired three-dimensional hand information of a virtual surgical operator, display a virtual space for performing the virtual surgery, and establish a coordinate system associated with the virtual space, wherein the virtual space comprises a virtual surgical instrument model, a virtual human tissue model serving as the virtual surgical object, and the virtual hand model;
an information acquisition and identification module, configured to collect the three-dimensional hand information in real time during the implementation of the virtual surgery, and to identify, based on the three-dimensional hand information, the gesture of the virtual surgical operator and the positional relationship among the virtual surgical instrument model, the virtual human tissue model, and the virtual hand model;
and a virtual synchronization module, configured to determine instructions and operation actions of the virtual surgical operator based on the gesture and the positional relationship, and to synchronously display a virtual picture of the virtual surgery in the virtual space according to the instructions and the operation actions.
12. An electronic device, comprising the virtual surgery simulation device of claim 11.
13. A storage medium having stored therein computer-executable instructions adapted to be loaded by a processor to perform the virtual surgery simulation method of any one of claims 1 to 10.
CN202111534312.7A 2021-12-15 2021-12-15 Virtual operation simulation method and device, electronic equipment and storage medium Active CN114387836B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111534312.7A CN114387836B (en) 2021-12-15 2021-12-15 Virtual operation simulation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111534312.7A CN114387836B (en) 2021-12-15 2021-12-15 Virtual operation simulation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114387836A true CN114387836A (en) 2022-04-22
CN114387836B CN114387836B (en) 2024-03-22

Family

ID=81197905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111534312.7A Active CN114387836B (en) 2021-12-15 2021-12-15 Virtual operation simulation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114387836B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004344491A (en) * 2003-05-23 2004-12-09 Hideo Fujimoto Virtual surgery simulation system
CN104778894A (en) * 2015-04-28 2015-07-15 关宏刚 Virtual simulation bone-setting manipulation training system and establishment method thereof
CN106325509A (en) * 2016-08-19 2017-01-11 北京暴风魔镜科技有限公司 Three-dimensional gesture recognition method and system
CN108256461A (en) * 2018-01-11 2018-07-06 深圳市鑫汇达机械设计有限公司 A kind of gesture identifying device for virtual reality device
CN108983978A (en) * 2018-07-20 2018-12-11 北京理工大学 virtual hand control method and device
CN112714900A (en) * 2018-10-29 2021-04-27 深圳市欢太科技有限公司 Display screen operation method, electronic device and readable storage medium
CN112904994A (en) * 2019-11-19 2021-06-04 深圳岱仕科技有限公司 Gesture recognition method and device, computer equipment and storage medium
CN111191322A (en) * 2019-12-10 2020-05-22 中国航空工业集团公司成都飞机设计研究所 Virtual maintainability simulation method based on depth perception gesture recognition

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019591A (en) * 2022-08-05 2022-09-06 上海华模科技有限公司 Operation simulation method, operation simulation device and storage medium
CN115019591B (en) * 2022-08-05 2022-11-04 上海华模科技有限公司 Operation simulation method, device and storage medium
CN115830229A (en) * 2022-11-24 2023-03-21 江苏奥格视特信息科技有限公司 Digital virtual human 3D model acquisition device
CN115830229B (en) * 2022-11-24 2023-10-13 江苏奥格视特信息科技有限公司 Digital virtual human 3D model acquisition device

Also Published As

Publication number Publication date
CN114387836B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
JP6000387B2 (en) Master finger tracking system for use in minimally invasive surgical systems
JP5982542B2 (en) Method and system for detecting the presence of a hand in a minimally invasive surgical system
EP2904472B1 (en) Wearable sensor for tracking articulated body-parts
US11422530B2 (en) Systems and methods for prototyping a virtual model
JP5702798B2 (en) Method and apparatus for hand gesture control in a minimally invasive surgical system
JP5702797B2 (en) Method and system for manual control of remotely operated minimally invasive slave surgical instruments
CN114387836B (en) Virtual operation simulation method and device, electronic equipment and storage medium
CN110476168A (en) Method and system for hand tracking
Wen et al. A robust method of detecting hand gestures using depth sensors
KR20140048128A (en) Method and system for analyzing a task trajectory
JP2016506260A (en) Markerless tracking of robotic surgical instruments
JP2011110621A (en) Method of producing teaching data of robot and robot teaching system
Speidel et al. Tracking of instruments in minimally invasive surgery for surgical skill analysis
Becker et al. Real-time retinal vessel mapping and localization for intraocular surgery
CN105117000A (en) Method and device for processing medical three-dimensional image
KR101956900B1 (en) Method and system for hand presence detection in a minimally invasive surgical system
Chaman Surgical robotic nurse
KR101953730B1 (en) Medical non-contact interface system and method of controlling the same
Parida Addressing hospital staffing shortages: dynamic surgical tool tracking and delivery using baxter
WO2022216810A2 (en) System, method, and apparatus for tracking a tool via a digital surgical microscope
Zhang et al. The Algorithm Researching of Hand Pose Estimation Based on 3D Joint Regression
Daroltia et al. Measurement of surgical motion with a marker free computer vision based system
Gadd et al. Hand-Eye: A Vision-Based Approach to Data Glove Calibration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant