CN115662234A - Thoracoscope surgery teaching system based on virtual reality - Google Patents

Info

Publication number: CN115662234A (application CN202211371509.8A)
Authority: CN (China)
Prior art keywords: operation box, endoscope, surgical instrument, coordinates, view
Legal status: Granted
Application number: CN202211371509.8A
Other languages: Chinese (zh)
Other versions: CN115662234B (en)
Inventors: 谷倬宇, 齐宇, 李一鑫, 白睿敏, 伍东红, 马骞
Current Assignee: Zhengzhou University
Original Assignee: Zhengzhou University
Application filed by Zhengzhou University
Priority to CN202211371509.8A
Publication of CN115662234A
Application granted; publication of CN115662234B
Legal status: Active

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a virtual-reality-based thoracoscopic surgery teaching system. Coordinates of the marks on a surgical instrument within an operation box are obtained according to at least one mark on the instrument, and coordinates of the marks on an endoscope within the operation box are obtained according to at least one mark on the endoscope. A three-dimensional view of the thoracic cavity is constructed, the coordinates of the marks on the surgical instrument and the endoscope in the operation box are converted into coordinates in the three-dimensional view, and the surgical instrument and the endoscope are simulated in the view according to those coordinates. By monitoring the endoscope and the surgical instrument in the operation box in real time and projecting them into the three-dimensional thoracic view, the invention faithfully reproduces the surgeon's manipulation of the endoscope and surgical instrument, improving the training effect.

Description

Thoracoscope surgery teaching system based on virtual reality
Technical Field
The invention relates to the field of medicine, in particular to a thoracoscope surgery teaching system based on virtual reality.
Background
Surgical training is an essential part of a physician's education. Existing surgical training mostly uses animals as surgical subjects; however, animals differ considerably from the human body, their anatomy is not identical, and they are costly, all of which makes training difficult. Virtual reality (VR) is a simulation of reality; applying virtual reality technology to surgical simulation lets a physician understand human anatomy more intuitively and better master the points requiring attention during an operation.
Virtual surgery nevertheless has many shortcomings. It does not simulate human structures such as the chest realistically, since patients differ in obesity, constitution, and organs, and it does not simulate the instruments used in an operation realistically. In a real operation an instrument is moved, and the amplitude of that movement directly determines the effect on the internal organs; such handling of the instruments is difficult to train in virtual surgical training.
Disclosure of Invention
In order to truly simulate surgical instruments and human thoracic cavity structures, the invention provides a virtual reality-based thoracoscopic surgery teaching system, which comprises surgical instruments, an endoscope, an operation box, a processing device and a display device, wherein at least one mark is engraved on each of the surgical instruments and the endoscope, and the processing device comprises the following modules:
the image processing module is used for acquiring coordinates of the mark on the surgical instrument in the operation box according to the at least one mark on the surgical instrument and acquiring coordinates of the mark on the endoscope in the operation box according to the at least one mark on the endoscope;
and the scene simulation module is used for constructing a three-dimensional thoracic view, converting the coordinates marked in the operation box on the surgical instrument and the endoscope into coordinates in the three-dimensional thoracic view, and simulating the surgical instrument and the endoscope in the three-dimensional thoracic view according to the coordinates marked in the three-dimensional thoracic view.
Preferably, acquiring the coordinates of the marks on the surgical instrument in the operation box according to the at least one mark on the surgical instrument, and acquiring the coordinates of the marks on the endoscope in the operation box according to the at least one mark on the endoscope, specifically comprises:
acquiring, from images captured by a binocular camera directly above the operation box and a binocular camera directly in front of the operation box, the distance from each mark on the surgical instrument to the top of the operation box and to the front of the operation box, and determining the coordinates of each mark on the surgical instrument in the operation box from those two distances;
acquiring, from images captured by the binocular camera directly above the operation box and the binocular camera directly in front of the operation box, the distance from each mark on the endoscope to the top of the operation box and to the front of the operation box, and determining the coordinates of each mark on the endoscope in the operation box from those two distances.
Preferably, converting the coordinates of the marks on the surgical instrument and the endoscope in the operation box into coordinates in the three-dimensional view of the thoracic cavity specifically comprises:
acquiring the ratio of the size in the three-dimensional view of the chest to the size of the real chest, and converting the three-dimensional coordinates of the operation box into the three-dimensional coordinates in the three-dimensional view of the chest based on the ratio; wherein the scale is further used to translate movement of the endoscope and the surgical instrument into movement of the endoscope and the surgical instrument in the three-dimensional view of the thoracic cavity.
Preferably, the three-dimensional view of the thoracic cavity is constructed by:
constructing a thoracic 3D view according to the CT scanning image, and segmenting the thoracic 3D view to obtain a segmented region, wherein the segmented region comprises soft tissues and bones; the soft tissue comprises heart, lung, artery, nerve, trachea;
for each tissue in the soft tissue, segmenting the tissue into a plurality of cubes according to the voxels of the tissue's three-dimensional view, filling mass points at the vertices of each cube, connecting each mass point of a cube to the other seven mass points by a spring structure, constructing a tissue model filled with the cubes, and rendering the tissue model;
and replacing the rendered tissue with the tissue of the corresponding segmentation area to form a new thoracic 3D view containing bones, hearts, lungs, arteries, nerves and tracheas, constructing skin through a mass point-spring model according to the thickness of the skin tissue, and covering the skin on the new thoracic 3D view to form an elastic thoracic three-dimensional view.
Preferably, force feedback assemblies are respectively provided at the connections between the surgical instrument, the endoscope, and the operation box; the attributes of the surgical instrument comprise operation and force, and the operation on the three-dimensional view of the thoracic cavity is simulated according to the operation of the surgical instrument.
Preferably, the three-dimensional view of the thoracic cavity is constructed by: and generating a three-dimensional view of the chest cavity by adopting a 3D animation simulation mode.
In a further aspect, the invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of:
s1, acquiring coordinates of a mark on the surgical instrument in the operation box according to at least one mark on the surgical instrument, and acquiring coordinates of the mark on the endoscope in the operation box according to at least one mark on the endoscope;
s2, constructing a three-dimensional thoracic view, converting the coordinates of the surgical instrument and the mark on the endoscope in the operation box into coordinates in the three-dimensional thoracic view, and simulating the surgical instrument and the endoscope in the three-dimensional thoracic view according to the coordinates marked in the three-dimensional thoracic view.
Preferably, acquiring the coordinates of the marks on the surgical instrument in the operation box according to the at least one mark on the surgical instrument, and acquiring the coordinates of the marks on the endoscope in the operation box according to the at least one mark on the endoscope, specifically comprises:
acquiring, from images captured by a binocular camera directly above the operation box and a binocular camera directly in front of the operation box, the distance from each mark on the surgical instrument to the top of the operation box and to the front of the operation box, and determining the coordinates of each mark on the surgical instrument in the operation box from those two distances;
acquiring, from images captured by the binocular camera directly above the operation box and the binocular camera directly in front of the operation box, the distance from each mark on the endoscope to the top of the operation box and to the front of the operation box, and determining the coordinates of each mark on the endoscope in the operation box from those two distances.
Preferably, converting the coordinates of the marks on the surgical instrument and the endoscope in the operation box into coordinates in the three-dimensional view of the thoracic cavity specifically comprises:
acquiring the ratio of the size in the three-dimensional view of the chest to the size of the real chest, and converting the three-dimensional coordinates of the operation box into the three-dimensional coordinates in the three-dimensional view of the chest based on the ratio; wherein the scale is further used to translate movement of the endoscope and the surgical instrument into movement of the endoscope and the surgical instrument in the three-dimensional view of the thoracic cavity.
Preferably, the three-dimensional view of the thoracic cavity is constructed by:
constructing a thoracic 3D view according to the CT scanning image, and segmenting the thoracic 3D view to obtain a segmented region, wherein the segmented region comprises soft tissues and bones; the soft tissue comprises heart, lung, artery, nerve, trachea;
for each tissue in the soft tissue, segmenting the tissue into a plurality of cubes according to the voxels of the tissue's three-dimensional view, filling mass points at the vertices of each cube, connecting each mass point of a cube to the other seven mass points by a spring structure, constructing a tissue model filled with the cubes, and rendering the tissue model;
and replacing the rendered tissue with the tissue of the corresponding segmentation area to form a new thoracic 3D view containing bones, hearts, lungs, arteries, nerves and tracheas, constructing skin through a mass point-spring model according to the thickness of the skin tissue, and covering the skin on the new thoracic 3D view to form an elastic thoracic three-dimensional view.
With the invention, the surgical instruments actually used in thoracic surgery can be placed directly into the operation box; their operation, such as movement, is simulated realistically and fed back into the three-dimensional thoracic view, which greatly helps train a surgeon's proficiency with the instruments and lets the surgeon better master instrument movement distances in a real operation. In addition, the invention provides a method for constructing a three-dimensional view of a real patient's chest from a CT scan: the elastic three-dimensional thoracic view is built before the patient's operation, and the system is used for preoperative rehearsal, so that the operation is better mastered when it is performed for real.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic view of an operating box;
FIG. 2 is an identification schematic of a surgical instrument;
FIG. 3 is a diagram of a mass-spring model of the present invention;
FIG. 4 is a flowchart of an embodiment;
FIG. 5 is an embodiment flowchart.
Detailed Description
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
The invention provides a virtual reality-based thoracoscopic surgery teaching system, which comprises a surgical instrument, an endoscope, an operation box, a processing device and a display device, wherein the operation box is shown in figure 1, at least one mark is respectively engraved on the surgical instrument and the endoscope, and the processing device comprises the following modules:
the image processing module is used for acquiring coordinates of the mark on the surgical instrument in the operation box according to the at least one mark on the surgical instrument and acquiring coordinates of the mark on the endoscope in the operation box according to the at least one mark on the endoscope;
surgical instruments and endoscopes are indispensable devices for thoracic surgery, and in surgical training, the grasping of the instruments is also important. In the present invention, indicia are provided on the instrument, including but not limited to colors, such as yellow, etc. The positions and angles of the endoscope and the surgical instrument can be acquired according to the identification information.
And the scene simulation module is used for constructing a thoracic cavity three-dimensional view, converting the coordinates of the surgical instrument and the mark on the endoscope in the operation box into coordinates in the thoracic cavity three-dimensional view, and simulating the surgical instrument and the endoscope in the thoracic cavity three-dimensional view according to the coordinates marked in the thoracic cavity three-dimensional view.
The operation of the instruments during the simulated procedure must also be matched to the three-dimensional view of the thoracic cavity so that force and displacement distance are perceived better. To simulate the surgical instruments and the endoscope in the view in real time, their coordinates must be converted into the view in real time; the instruments can be simulated from the coordinates of a few marked points. The surgeon's manipulation of the surgical instruments and the endoscope is thus displayed in real time in the three-dimensional view of the chest cavity.
For coordinate acquisition, the invention uses binocular cameras: two camera rigs are mounted directly above and directly in front of the operation box, so the distances from the marks to the top and to the front of the operation box can be acquired in real time, and the specific coordinates are then determined from the size of the operation box. Specifically, acquiring the coordinates of the marks on the surgical instrument in the operation box according to the at least one mark on the surgical instrument, and acquiring the coordinates of the marks on the endoscope in the operation box according to the at least one mark on the endoscope, comprises:
acquiring, from images captured by a binocular camera directly above the operation box and a binocular camera directly in front of the operation box, the distance from each mark on the surgical instrument to the top of the operation box and to the front of the operation box, and determining the coordinates of each mark on the surgical instrument in the operation box from those two distances;
acquiring, from images captured by the binocular camera directly above the operation box and the binocular camera directly in front of the operation box, the distance from each mark on the endoscope to the top of the operation box and to the front of the operation box, and determining the coordinates of each mark on the endoscope in the operation box from those two distances.
"Directly above" and "directly in front of" the operation box refer to the inner sides of the box's top panel and front panel; that is, the binocular cameras are mounted inside the box body.
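The distance-to-coordinate step described above can be sketched with the classic pinhole stereo relation depth = f·B/d. This is a hedged illustration only: the rig parameter tuples, the box-frame conventions, and the function names are assumptions, not taken from the patent.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("marker must be visible in both images of the rig")
    return focal_px * baseline_m / disparity_px

def marker_box_coords(top_rig, front_rig, box_height_m):
    """Combine the depth seen by the top rig (distance below the top panel)
    and by the front rig (distance in from the front panel) with a lateral
    position to get (x, y, z) in an assumed operation-box frame."""
    f, b, disp, x_lateral = top_rig
    dist_from_top = stereo_depth(f, b, disp)
    f2, b2, disp2, _ = front_rig
    dist_from_front = stereo_depth(f2, b2, disp2)
    x = x_lateral                       # lateral position, assumed already metric
    y = dist_from_front                 # distance in from the front panel
    z = box_height_m - dist_from_top    # height above the box floor
    return (x, y, z)
```

For a 0.5 m tall box, a mark seen 0.2 m below the top panel and 0.4 m in from the front panel sits at height 0.3 m in this frame.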
The operation box and the three-dimensional view of the thoracic cavity differ in size, and their coordinates do not correspond directly, so the two must be matched. Specifically, converting the coordinates of the marks on the surgical instrument and the endoscope in the operation box into coordinates in the three-dimensional view of the thoracic cavity comprises:
acquiring the ratio of the size in the three-dimensional view of the chest to the size of the real chest, and converting the three-dimensional coordinates of the operation box into the three-dimensional coordinates in the three-dimensional view of the chest based on the ratio; wherein the scale is further used to translate movement of the endoscope and the surgical instrument into movement of the endoscope and the surgical instrument in the three dimensional view of the chest cavity.
When a surgeon moves the surgical instrument or endoscope some distance in the operation box, that movement must actually be represented in the three-dimensional view of the thoracic cavity. For example, moving 5 cm in the operation box corresponds to 5 cm in a real operation; but because the three-dimensional view differs in size from the real chest, the movement is scaled by the ratio of the view size to the real chest size before it is displayed in the view.
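The scale conversion just described amounts to multiplying box-frame coordinates and displacements by the view-to-real size ratio. A minimal sketch (the function names and ratio value are illustrative assumptions):

```python
def box_to_view(coord_box, view_to_real_ratio):
    """Map a coordinate measured in the operation box (which stands in
    for the real chest) into the three-dimensional thoracic view."""
    return tuple(c * view_to_real_ratio for c in coord_box)

def movement_to_view(delta_box, view_to_real_ratio):
    """An instrument movement in the box is scaled the same way before
    being rendered as movement in the view."""
    return tuple(d * view_to_real_ratio for d in delta_box)
```

With a ratio of 0.5, for instance, a 5 cm movement in the operation box is drawn as a 2.5 cm movement in the view.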
Human tissues and organs are elastic, and operations such as cutting occur during surgery, so the generated three-dimensional thoracic view should match reality as closely as possible. The prior art simulates real body tissue by soft-tissue modelling; a common method is the mass-spring model and its refinements, such as the Maxwell model, but because different patients' bodies differ, these lack patient specificity. The invention builds a 3D view of the thorax from CT scan images and then establishes the three-dimensional thoracic view to simulate a real person's chest, making training more realistic; for a patient with a complicated condition, a surgeon can rehearse with the system and improve the operation's success rate. In one embodiment, the three-dimensional view of the thoracic cavity is constructed by:
constructing a thoracic 3D view according to a CT scanning image, and segmenting the thoracic 3D view to obtain a segmented region, wherein the segmented region comprises soft tissues and bones; the soft tissue comprises heart, lung, artery, nerve, trachea;
for each tissue in the soft tissue, segmenting the tissue into a plurality of cubes according to the voxels of the tissue's three-dimensional view, filling mass points at the vertices of each cube, connecting each mass point of a cube to the other seven mass points by a spring structure, as shown in fig. 3, constructing a tissue model filled with a plurality of cubes, and rendering the tissue model;
and replacing the rendered tissue with the tissue of the corresponding segmentation area to form a new thoracic 3D view containing bones, hearts, lungs, arteries, nerves and tracheas, constructing skin through a mass point-spring model according to the thickness of the skin tissue, and covering the skin on the new thoracic 3D view to form an elastic thoracic three-dimensional view.
A voxel is the basic element from which a CT scan constructs a 3D view — in effect, the pixel of 3D space. To build a three-dimensional thoracic view with realistic elasticity, the invention segments each tissue into a plurality of cubes according to its voxels, each cube containing several voxels, as shown in fig. 3, in order to describe the tissue's shape better. Each cube has 8 mass points; each mass point is connected to the others by springs, so that operating on one mass point affects the others, simulating real tissue.
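The cube cell described above — 8 mass points, each joined to the other 7 by springs — can be sketched as follows. The stiffness, mass, and explicit-Euler step values are illustrative assumptions; a production simulator would use a stabler integrator and damping.

```python
import itertools
import math

class CubeMassSpring:
    """Toy version of one cube cell: 8 mass points at the vertices, every
    pair of points joined by a spring, so each point connects to the
    other 7.  k, mass, and dt are illustrative values."""

    def __init__(self, size=1.0, k=50.0, mass=1.0):
        self.pos = [(x * size, y * size, z * size)
                    for x in (0, 1) for y in (0, 1) for z in (0, 1)]
        # rest length of the spring between every pair of vertices
        self.rest = {(i, j): math.dist(self.pos[i], self.pos[j])
                     for i, j in itertools.combinations(range(8), 2)}
        self.vel = [(0.0, 0.0, 0.0)] * 8
        self.k = k
        self.mass = mass

    def step(self, dt=0.01):
        """Accumulate Hooke spring forces, then take one explicit Euler step."""
        forces = [[0.0, 0.0, 0.0] for _ in range(8)]
        for (i, j), rest_len in self.rest.items():
            d = math.dist(self.pos[i], self.pos[j])
            if d == 0.0:
                continue
            f = self.k * (d - rest_len)               # positive when stretched
            direction = [(self.pos[j][a] - self.pos[i][a]) / d for a in range(3)]
            for a in range(3):
                forces[i][a] += f * direction[a]      # i pulled toward j
                forces[j][a] -= f * direction[a]      # j pulled toward i
        for i in range(8):
            self.vel[i] = tuple(self.vel[i][a] + dt * forces[i][a] / self.mass
                                for a in range(3))
            self.pos[i] = tuple(self.pos[i][a] + dt * self.vel[i][a]
                                for a in range(3))
```

Displacing one vertex and stepping the simulation moves the other vertices as the springs pull back — the propagation effect that makes the tissue respond as a whole.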
In another embodiment, a 3D view of each tissue is constructed by segmenting the CT scan image; the 3D view of each tissue is then reconstructed with the mass-spring model, and finally the reconstructed tissue views are combined into the thoracic cavity.
Cutting and instrument movement are inevitable in an operation; to simulate the real situation, the invention also provides a force feedback assembly, which belongs to the prior art and is not described further. Force feedback lets the surgeon feel the operating force in real time. In addition, attributes such as operation and force are set for every surgical instrument; an instrument's force attribute is distinct from the operating force the surgeon feels in real time and is used to control the surgical process, for example determining the cutting opening from the force magnitude and the instrument position. In a particular development of the invention, the surgical instruments are arranged in classes, each class comprising several attributes.
For novice users, to familiarize them with the proposed system as quickly as possible, a simple three-dimensional thoracic view is provided: the three-dimensional view of the chest cavity is generated by 3D animation simulation. The invention can also generate all three-dimensional thoracic views by 3D animation simulation.
Example two
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method, as shown in fig. 4, of:
s1, obtaining coordinates of a mark on the surgical instrument in the operation box according to at least one mark on the surgical instrument, and obtaining coordinates of the mark on the endoscope in the operation box according to at least one mark on the endoscope;
s2, constructing a three-dimensional thoracic view, converting the coordinates marked on the surgical instrument and the endoscope in the operation box into coordinates in the three-dimensional thoracic view, and simulating the surgical instrument and the endoscope in the three-dimensional thoracic view according to the coordinates marked in the three-dimensional thoracic view.
Preferably, acquiring the coordinates of the marks on the surgical instrument in the operation box according to the at least one mark on the surgical instrument, and acquiring the coordinates of the marks on the endoscope in the operation box according to the at least one mark on the endoscope, specifically comprises:
acquiring, from images captured by a binocular camera directly above the operation box and a binocular camera directly in front of the operation box, the distance from each mark on the surgical instrument to the top of the operation box and to the front of the operation box, and determining the coordinates of each mark on the surgical instrument in the operation box from those two distances;
acquiring, from images captured by the binocular camera directly above the operation box and the binocular camera directly in front of the operation box, the distance from each mark on the endoscope to the top of the operation box and to the front of the operation box, and determining the coordinates of each mark on the endoscope in the operation box from those two distances.
Preferably, converting the coordinates of the marks on the surgical instrument and the endoscope in the operation box into coordinates in the three-dimensional view of the thoracic cavity specifically comprises:
acquiring the ratio of the size in the three-dimensional view of the chest to the size of the real chest, and converting the three-dimensional coordinates of the operation box into the three-dimensional coordinates in the three-dimensional view of the chest based on the ratio; wherein the scale is further used to translate movement of the endoscope and the surgical instrument into movement of the endoscope and the surgical instrument in the three-dimensional view of the thoracic cavity.
Preferably, the three-dimensional view of the thorax is constructed by:
constructing a thoracic 3D view according to a CT scanning image, and segmenting the thoracic 3D view to obtain a segmented region, wherein the segmented region comprises soft tissues and bones; the soft tissue comprises heart, lung, artery, nerve, trachea;
for each tissue in the soft tissue, segmenting the tissue into a plurality of cubes according to the voxels of the tissue's three-dimensional view, filling mass points at the vertices of each cube, connecting each mass point of a cube to the other seven mass points by a spring structure, constructing a tissue model filled with the cubes, and rendering the tissue model;
and replacing the rendered tissue with the tissue of the corresponding segmentation area to form a new thoracic 3D view containing bones, hearts, lungs, arteries, nerves and tracheas, constructing skin through a mass point-spring model according to the thickness of the skin tissue, and covering the skin on the new thoracic 3D view to form an elastic thoracic three-dimensional view.
Example three
The invention also provides a method for constructing the three-dimensional thoracic view based on CT scanning, which comprises the following steps as shown in figure 5:
constructing a thoracic 3D view according to the CT scanning image, and segmenting the thoracic 3D view to obtain a segmented region, wherein the segmented region comprises soft tissues and bones; the soft tissue comprises heart, lung, artery, nerve, trachea;
for each tissue in the soft tissue, segmenting the tissue into a plurality of cubes according to the voxels of the tissue's three-dimensional view, filling mass points at the vertices of each cube, connecting each mass point of a cube to the other seven mass points by a spring structure, constructing a tissue model filled with the cubes, and rendering the tissue model;
and replacing the rendered tissue with the tissue of the corresponding segmentation area to form a new thoracic 3D view containing bones, hearts, lungs, arteries, nerves and tracheas, constructing skin through a mass point-spring model according to the thickness of the skin tissue, and covering the skin on the new thoracic 3D view to form an elastic thoracic three-dimensional view.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by a necessary general hardware platform, and may also be implemented by a combination of hardware and software. With this understanding in mind, the above-described solutions and/or portions thereof that are prior art may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media having computer-usable program code embodied therein (including but not limited to disk storage, CD-ROM, optical storage, etc.).
Finally, it should be noted that the above examples are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in those embodiments may still be modified, or some of their technical features replaced by equivalents, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A virtual-reality-based thoracoscopic surgery teaching system, the system comprising surgical instruments, an endoscope, an operation box, a processing device and a display device, each of the surgical instruments and the endoscope having at least one mark engraved thereon, characterized in that the processing device comprises the following modules:
an image processing module, configured to acquire the coordinates of the mark on the surgical instrument in the operation box according to the at least one mark on the surgical instrument, and to acquire the coordinates of the mark on the endoscope in the operation box according to the at least one mark on the endoscope;
and a scene simulation module, configured to construct a three-dimensional thoracic view, convert the coordinates of the marks on the surgical instrument and the endoscope in the operation box into coordinates in the three-dimensional thoracic view, and simulate the surgical instrument and the endoscope in the three-dimensional thoracic view according to the coordinates of the marks in the three-dimensional thoracic view.
2. The system according to claim 1, wherein the acquiring of the coordinates of the mark on the surgical instrument in the operation box according to the at least one mark on the surgical instrument, and the acquiring of the coordinates of the mark on the endoscope in the operation box according to the at least one mark on the endoscope, are specifically:
acquiring, from images captured by a binocular camera right above the operation box and a binocular camera right in front of the operation box, the distance from each mark on the surgical instrument to the position right above the operation box and to the position right in front of the operation box, and determining the coordinates of each mark on the surgical instrument in the operation box based on those two distances;
acquiring, from images captured by the binocular camera right above the operation box and the binocular camera right in front of the operation box, the distance from each mark on the endoscope to the position right above the operation box and to the position right in front of the operation box, and determining the coordinates of each mark on the endoscope in the operation box based on those two distances.
3. The system according to claim 1, wherein the converting of the coordinates of the marks on the surgical instrument and the endoscope in the operation box into coordinates in the three-dimensional thoracic view is specifically:
acquiring the ratio between dimensions in the three-dimensional thoracic view and those of the real thorax, and converting coordinates in the operation box into coordinates in the three-dimensional thoracic view based on the ratio; wherein the ratio is further used to convert movement of the endoscope and the surgical instrument into movement of the endoscope and the surgical instrument in the three-dimensional thoracic view.
4. The system of claim 1, wherein the constructing of the three-dimensional thoracic view is specifically:
constructing a thoracic 3D view from CT scan images and segmenting the thoracic 3D view into regions, wherein the segmented regions comprise soft tissue and bone, and the soft tissue comprises the heart, lungs, arteries, nerves and trachea;
for each tissue in the soft tissue, dividing the tissue into a plurality of cubes according to the voxels of the tissue's three-dimensional view, placing a mass point at each cube vertex, connecting each mass point of a cube to the cube's other seven mass points through spring structures, thereby constructing a cube-filled tissue model, and rendering the tissue model;
and replacing the tissue in each segmented region with the corresponding rendered tissue model to form a new thoracic 3D view containing the bones, heart, lungs, arteries, nerves and trachea, constructing skin with a mass point-spring model according to the skin tissue thickness, and covering the new thoracic 3D view with the skin to form an elastic three-dimensional thoracic view.
5. The system of any of claims 1-3, wherein the surgical instrument, the endoscope and the portion of the operation box to which they couple each have a force feedback assembly; the attributes of the surgical instrument include operation and force, and operations on the three-dimensional thoracic view are simulated according to the operation of the surgical instrument.
6. The system according to claim 1, wherein the constructing of the three-dimensional thoracic view is specifically: generating the three-dimensional thoracic view by means of 3D animation simulation.
7. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the following method:
S1, acquiring coordinates of the mark on the surgical instrument in the operation box according to at least one mark on the surgical instrument, and acquiring coordinates of the mark on the endoscope in the operation box according to at least one mark on the endoscope;
S2, constructing a three-dimensional thoracic view, converting the coordinates of the marks on the surgical instrument and the endoscope in the operation box into coordinates in the three-dimensional thoracic view, and simulating the surgical instrument and the endoscope in the three-dimensional thoracic view according to the coordinates of the marks in the three-dimensional thoracic view.
8. The storage medium according to claim 7, wherein the acquiring of the coordinates of the mark on the surgical instrument in the operation box according to the at least one mark on the surgical instrument, and the acquiring of the coordinates of the mark on the endoscope in the operation box according to the at least one mark on the endoscope, are specifically:
acquiring, from images captured by a binocular camera right above the operation box and a binocular camera right in front of the operation box, the distance from each mark on the surgical instrument to the position right above the operation box and to the position right in front of the operation box, and determining the coordinates of each mark on the surgical instrument in the operation box based on those two distances;
acquiring, from images captured by the binocular camera right above the operation box and the binocular camera right in front of the operation box, the distance from each mark on the endoscope to the position right above the operation box and to the position right in front of the operation box, and determining the coordinates of each mark on the endoscope in the operation box based on those two distances.
9. The storage medium according to claim 7, wherein the converting of the coordinates of the marks on the surgical instrument and the endoscope in the operation box into coordinates in the three-dimensional thoracic view is specifically:
acquiring the ratio between dimensions in the three-dimensional thoracic view and those of the real thorax, and converting coordinates in the operation box into coordinates in the three-dimensional thoracic view based on the ratio; wherein the ratio is further used to convert movement of the endoscope and the surgical instrument into movement of the endoscope and the surgical instrument in the three-dimensional thoracic view.
10. The storage medium according to claim 7, wherein the constructing of the three-dimensional thoracic view is specifically:
constructing a thoracic 3D view from CT scan images and segmenting the thoracic 3D view into regions, wherein the segmented regions comprise soft tissue and bone, and the soft tissue comprises the heart, lungs, arteries, nerves and trachea;
for each tissue in the soft tissue, dividing the tissue into a plurality of cubes according to the voxels of the tissue's three-dimensional view, placing a mass point at each cube vertex, connecting each mass point of a cube to the cube's other seven mass points through spring structures, thereby constructing a cube-filled tissue model, and rendering the tissue model;
and replacing the tissue in each segmented region with the corresponding rendered tissue model to form a new thoracic 3D view containing the bones, heart, lungs, arteries, nerves and trachea, constructing skin with a mass point-spring model according to the skin tissue thickness, and covering the new thoracic 3D view with the skin to form an elastic three-dimensional thoracic view.
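As a rough illustration of the coordinate pipeline in claims 2 and 3, the sketch below shows how a binocular camera pair can recover a mark's distance from image disparity, and how a single view-to-thorax ratio converts operation-box coordinates into view coordinates. The function names and the pinhole-stereo simplification are assumptions for illustration, not the patent's method:

```python
import numpy as np

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard binocular-stereo relation Z = f * B / d: one camera pair
    above the box and one in front of it each yield a distance to a mark."""
    return focal_px * baseline_m / disparity_px

def box_to_view(coords_box, scale):
    """Claim 3's conversion: multiply operation-box coordinates by the ratio
    of the thoracic-view size to the real chest size. The same scale applies
    to displacements of the endoscope and the surgical instruments."""
    return np.asarray(coords_box, dtype=float) * scale
```

For example, an 800 px focal length, a 0.1 m baseline and a 40 px disparity place a mark 2 m from the camera pair, and a 0.5 scale halves every box coordinate in the view.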
CN202211371509.8A 2022-11-03 2022-11-03 Thoracic surgery teaching system based on virtual reality Active CN115662234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211371509.8A CN115662234B (en) 2022-11-03 2022-11-03 Thoracic surgery teaching system based on virtual reality


Publications (2)

Publication Number Publication Date
CN115662234A (en) 2023-01-31
CN115662234B (en) 2023-09-08

Family

ID=84995115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211371509.8A Active CN115662234B (en) 2022-11-03 2022-11-03 Thoracic surgery teaching system based on virtual reality

Country Status (1)

Country Link
CN (1) CN115662234B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010027541A * 1999-09-14 2001-04-06 Yun Jong-yong Apparatus and method for viewing an optimal three-dimensional image of a surgical instrument and automatically controlling the position of the endoscope in minimally invasive surgery
US20110014596A1 * 2008-01-25 2011-01-20 University Of Florida Research Foundation, Inc. Devices and methods for implementing endoscopic surgical procedures and instruments within a virtual environment
CN102254475A * 2011-07-18 2011-11-23 Guangzhou Saibao Lianrui Information Technology Co., Ltd. Method for realizing a 3D platform system for endoscopic minimally invasive surgery simulation training
CN202422539U * 2011-12-23 2012-09-05 Navy General Hospital of the Chinese People's Liberation Army Virtual surgery operation device
CN203397592U * 2013-08-16 2014-01-15 First Affiliated Hospital of Guangzhou Medical College Simulated training system for an endoscope
CN107195215A * 2017-07-04 2017-09-22 Fourth Military Medical University of the Chinese People's Liberation Army Thoracoscopic surgery simulated training box
CN107240344A * 2017-06-28 2017-10-10 Ezhou Institute of Industrial Technology, Huazhong University of Science and Technology Virtual laparoscopic bile duct exploration training method and system
CN108648548A * 2018-04-19 2018-10-12 Zhejiang University of Technology Neurosurgery virtual operation training system
CN109192030A * 2018-09-26 2019-01-11 Affiliated Hospital of Zhengzhou University Realistic endoscopic minimally invasive surgery simulation training system and method based on virtual reality
CN211555175U * 2020-04-15 2020-09-22 Zhejiang Yinlibo Network Technology Co., Ltd. Experiment box based on virtual reality technology
US20210059760A1 * 2019-08-30 2021-03-04 National Central University Digital image reality aligning kit and method applied to mixed reality system for surgical navigation
CN112566578A * 2018-06-19 2021-03-26 Tornier, Inc. Mixed-reality-assisted teaching using virtual models or virtual representations for orthopedic surgery
CN115054367A * 2022-06-20 2022-09-16 Shanghai Chest Hospital Focus positioning method and device based on mixed reality, and electronic equipment


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Li Yandong et al.: "Soft tissue deformation modeling and simulation based on a local dynamic model", Computer Science, vol. 40, no. 10, pages 283-287 *
Pan Zhenkuan, Gao Bo: "Simulation of human tissue deformation based on a mass point-spring model in surgical simulation", Journal of Qingdao University (Engineering & Technology Edition), no. 03, pages 9-14 *
Zang Xiaojun et al.: "Augmented-reality-based navigation system for nasal endoscopic surgery", Transactions of Beijing Institute of Technology, vol. 30, no. 1, pages 69-73 *
Chen Weisheng et al.: "Experimental study on pulmonary artery localization in thoracoscopic lobectomy", Chinese Journal of Clinical Anatomy, vol. 25, no. 1, pages 43-45 *


Similar Documents

Publication Publication Date Title
US11361516B2 (en) Interactive mixed reality system and uses thereof
US20160328998A1 (en) Virtual interactive system for ultrasound training
US20100179428A1 (en) Virtual interactive system for ultrasound training
CN109410680A (en) A kind of virtual operation training method and system based on mixed reality
WO2003023737A1 (en) Medical procedure training system
US20210233429A1 (en) Mixed-reality endoscope and surgical tools with haptic feedback for integrated virtual-reality visual and haptic surgical simulation
CN115005981A (en) Surgical path planning method, system, equipment, medium and surgical operation system
CN113035038A (en) Virtual orthopedic surgery exercise system and simulation training method
CN115457008A (en) Real-time abdominal puncture virtual simulation training method and device
Cotin et al. Volumetric deformable models for simulation of laparoscopic surgery
CN115472051A (en) Medical student operation simulation dummy and use method
KR20200081540A (en) System for estimating orthopedics surgery based on simulator of virtual reality
Guo et al. Automatically addressing system for ultrasound-guided renal biopsy training based on augmented reality
Li et al. Research and development of the regulation training system of minimally invasive knee surgery based on VR technology
CN115662234B (en) Thoracic surgery teaching system based on virtual reality
CN111768494B (en) Method for training reduction of joint dislocation
CN114913309A (en) High-simulation surgical operation teaching system and method based on mixed reality
CN115188232A (en) Medical teaching comprehensive training system and method based on MR-3D printing technology
Mastrangelo Jr et al. Inclusion of 3-D computed tomography rendering and immersive VR in a third year medical student surgery curriculum
Rasool et al. Image-driven haptic simulation of arthroscopic surgery
Vidal et al. Principles and Applications of Medical Virtual Environments.
Soler et al. Virtual reality, augmented reality and robotics in surgical procedures of the liver
Nakao et al. Haptic Reproduction and Interactive Visualization of a Beating Heart Based on Cardiac Morphology
Ra et al. Visually guided spine biopsy simulator with force feedback
Blezek et al. Simulation of spinal nerve blocks for training anesthesiology residents

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant