WO2021007935A1 - VR-based method for building a guide plate 3D printing model - Google Patents

VR-based method for building a guide plate 3D printing model

Info

Publication number
WO2021007935A1
WO2021007935A1 (PCT/CN2019/104404, CN2019104404W)
Authority
WO
WIPO (PCT)
Prior art keywords
cutting
guide plate
model
mesh
vertices
Prior art date
Application number
PCT/CN2019/104404
Other languages
English (en)
Chinese (zh)
Inventor
覃文军
林国丛
董智伟
张力
王同亮
杨金柱
栗伟
曹鹏
冯朝路
赵大哲
Original Assignee
东北大学
Priority date
Filing date
Publication date
Application filed by 东北大学
Publication of WO2021007935A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B29 WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00 Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/30 Auxiliary operations or equipment
    • B29C64/379 Handling of additively manufactured objects, e.g. using robots
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B33 ADDITIVE MANUFACTURING TECHNOLOGY
    • B33Y ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y50/00 Data acquisition or data processing for additive manufacturing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/41 Medical
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2008 Assembling, disassembling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes

Definitions

  • The invention relates to the field of 3D technology, and in particular to a VR-based method for establishing a 3D printing model of a guide plate.
  • 3D printing can be customized to individual needs. In the medical field, for example, it can exactly match materials to diseased sites, print bone-defect regions in situ, and produce free-form complex structures that are difficult to manufacture with traditional methods.
  • 3D printing has been successfully applied at scale in orthopedics and dentistry.
  • A number of patient-customized 3D-printed osteotomy guides, orthopedic implants and dental implants have obtained European Conformity (CE) certification and United States Food and Drug Administration (FDA) approval for clinical use.
  • The full name of the surgical guide is the "surgical navigation physical template". It is one of the earliest major medical applications of 3D printing. Its main functions are to aid accurate fracture reduction, to help screws and other implants or instruments reach their predetermined positions, and to guide precise scalpel cuts, which not only makes the operation more convenient but also significantly improves its accuracy.
  • Using 3D-printed guide plates to assist surgery offers high accuracy, reduced intraoperative risk, high surgical safety and satisfactory clinical efficacy; it is an effective, feasible technology worth promoting. However, the actual diseased part must be printed for observation, which wastes printing material and printing time, and a printed model cannot be restored after a cutting rehearsal and must be printed again.
  • The present invention provides a VR-based method for building a guide plate 3D printing model, which solves the prior-art problems of high printing material consumption and long printing time during surgical planning and preoperative training with 3D printing.
  • The main technical solutions adopted by the present invention include the following.
  • An embodiment of the present invention provides a VR-based method for building a 3D printing model of a guide plate, which includes:
  • the cutting position and the cutting target model are obtained by plane-cutting the model to be cut.
  • Obtaining the cutting position and the cutting target model by plane-cutting the model to be cut includes:
  • dividing the model to be cut according to the positions of its mesh vertices relative to the cutting plane to generate a model cut, wherein the model to be cut is composed of a plurality of triangular faces, each triangular face being formed by three mesh vertices;
  • filling the model cut by adding a triangular mesh to form the cutting target model.
  • Dividing the model to be cut according to the positions of its mesh vertices relative to the cutting plane includes:
  • for each mesh vertex, computing the dot product between the vector from a point on the cutting plane to the vertex (in local coordinates) and the normal vector of the plane; if the dot product is greater than or equal to 0, adding the vertex to the first vertex set and setting the corresponding label value in the label array to 1; if the dot product is less than 0, adding the vertex to the second vertex set and setting the corresponding label value to 0.
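  • As an illustration only (not part of the patent text; names such as classify_vertices are hypothetical), a minimal Python sketch of this dot-product side test could look as follows:

      def classify_vertices(vertices, plane_point, plane_normal):
          """Split mesh vertices by which side of the cutting plane they fall on.

          vertices     -- list of (x, y, z) tuples in local coordinates
          plane_point  -- any point lying on the cutting plane
          plane_normal -- normal vector of the cutting plane
          Returns (first_set, second_set, labels); labels[i] is 1 when the dot
          product is >= 0 and 0 otherwise, mirroring the label array in the text.
          """
          first_set, second_set, labels = [], [], []
          for v in vertices:
              # Vector from a point on the plane to the vertex.
              d = tuple(v[i] - plane_point[i] for i in range(3))
              dot = sum(d[i] * plane_normal[i] for i in range(3))
              if dot >= 0:
                  first_set.append(v)
                  labels.append(1)
              else:
                  second_set.append(v)
                  labels.append(0)
          return first_set, second_set, labels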
  • Dividing the model to be cut according to the positions of its mesh vertices relative to the cutting plane further includes:
  • if the three mesh vertices of a triangular face are all located on one side of the cutting plane, assigning the triangular face to the first triangular face set or the second triangular face set accordingly;
  • otherwise, cutting the triangular face according to the label values corresponding to its vertex indices, and adding the resulting triangular faces to the first triangular face set and the second triangular face set, respectively.
  • Cutting the triangular face according to the label values corresponding to its vertex indices includes:
  • forming two cutting points where the cutting plane intersects the triangular face, obtaining the coordinates of the two cutting points, and connecting them to form the boundary of the model cut;
  • adding the two cutting points to the first vertex set and the second vertex set, respectively.
  • Performing surface mesh extraction according to the mesh information of the cutting target model and generating a film with a preset thickness on the surface of the cutting target model includes:
  • connecting the cutting points on the edges of the upper surface of the film and the lower surface of the film pair by pair to form triangular faces, thereby obtaining the film.
  • Extracting the mesh information of the cutting target model with the spherical-region mesh extraction method includes:
  • for a triangular face of the cutting target model, if its three vertex indices are all inside the spherical region, the triangular face is located inside the spherical region;
  • otherwise, the triangular face is cut to obtain two cutting points, and the new triangles generated by the division, formed between the vertex indices and the two cutting points and located inside the spherical region, are assigned to the spherical region.
  • The step of using a rectangular plate to cut the film surface at the cutting position to obtain the position of the guide plate slot and to form the guide plate surface includes:
  • generating the mesh vertices of the guide plate from the corner coordinates of the rectangular plate, and connecting the mesh vertices of the guide plate with the cutting points on the film to form a triangular mesh, which forms the guide plate surface.
  • Constructing four planes from the coordinates of the rectangular plate includes:
  • after the rectangular plate is translated by the preset thickness distance, using it as the planes that cut the two ends of the upper surface of the film, thereby obtaining the remaining two of the four planes.
  • Connecting the mesh vertices of the guide plate with the cutting points on the film to form a triangular mesh and thereby form the guide plate surface includes:
  • connecting the upper half of the guide plate's mesh vertices with the upper half-circle of cutting points to form a triangular mesh, obtaining the upper surface of the guide plate;
  • connecting the lower half of the guide plate's mesh vertices with the lower half-circle of cutting points to form a triangular mesh, obtaining the lower surface of the guide plate;
  • forming the guide plate surface from the upper surface and the lower surface of the guide plate.
  • The beneficial effects of the present invention are as follows.
  • With the VR-based guide plate 3D printing model establishment method provided by the embodiments of the present invention, on the one hand, by modeling the cutting position and guide plate of the cut model in the VR scene, cutting operations can be repeated on the real surgical object in virtual reality any number of times without limitation, and a 3D guide plate print model is generated at the cutting position.
  • On the other hand, building the 3D guide plate printing model in virtual reality reduces operating difficulty and production time compared with the traditional approach of directly designing the guide plate model for 3D printing.
  • FIG. 1 is a flowchart of a VR-based method for establishing a 3D printing model of a guide plate according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of a common UI interaction interface provided in an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of a UI interaction interface of a VR scene provided in an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of the UI interface in FIG. 3 according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the settings interface for the tools shown in FIG. 4 according to an embodiment of the present invention;
  • FIG. 6 is a flowchart of step S110 in FIG. 1 according to an embodiment of the present invention;
  • FIG. 7 is a flowchart of a model cutting algorithm in an embodiment of the present invention;
  • FIG. 8 is a flowchart of step S120 in FIG. 1 according to an embodiment of the present invention;
  • FIG. 9 is a schematic diagram of triangular face division in an embodiment of the present invention;
  • FIG. 10 is a schematic diagram of a cutting target model constructed in a VR scene in an embodiment of the present invention;
  • FIG. 11 is a schematic diagram of the guide plate model when the handle is used to lock the guide plate model in an embodiment of the present invention;
  • FIG. 12 is a schematic diagram of the slot model when the handle is used to lock the slot model in an embodiment of the present invention;
  • FIG. 13 is a schematic diagram of the guide plate model when the handle is used to move the guide plate model in an embodiment of the present invention;
  • FIG. 14 is a schematic diagram of generating a double guide plate model in an embodiment of the present invention;
  • FIG. 15 is a schematic diagram of generating a semi-guide plate model in an embodiment of the present invention.
  • Virtual reality is an emerging technology that has been applied in the medical field in recent years, and its combination with traditional medical practice also has great potential.
  • For surgical pitfalls that are discovered, corresponding preparations can be made for the difficulties that may be encountered during the operation, which helps determine the surgical site and the corresponding guide plate design.
  • 3D printing alone not only wastes printing material and printing time; the printed model also cannot be restored after a cutting rehearsal and must be printed again.
  • In VR, the model of real human tissue and organs can be cut repeatedly until the optimal solution is found, which fully reflects the advantages of virtual reality.
  • Fig. 1 is a flowchart of a method for establishing a 3D printing model of a guide plate based on VR according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
  • Step S110: in the virtual reality (VR) scene, the cutting position and the cutting target model are obtained by plane-cutting the model to be cut;
  • Step S120: surface mesh extraction is performed according to the mesh information of the cutting target model, and a film with a preset thickness is generated on the surface of the cutting target model;
  • Step S130: a rectangular plate is used to cut the film surface at the cutting position to obtain the position of the guide plate slot and to form the guide plate surface;
  • Step S140: a 3D printing model of the guide plate in the VR scene is constructed according to the position of the guide plate slot and the guide plate surface.
  • The present invention provides a VR-based 3D printing model establishment method and software system.
  • The innovation of the present invention lies in combining virtual reality technology and exploiting the realism and three-dimensionality of virtual reality.
  • It can automatically generate a 3D guide plate model of the surgical site in virtual reality, or support manual personalized design; compared with ordinary 3D printing design software operated with a computer screen and mouse, it is more three-dimensional, immersive and convenient, reducing operating difficulty and production time.
  • In specific implementation, the present invention mainly includes the following three steps:
  • the first step is to construct a virtual workbench scene: construct an operating room workbench environment, and realize real-time import and export of models;
  • the second step is to design the UI interface suitable for the computer display screen and the virtual interactive interface in virtual reality respectively;
  • the third step is to construct various cutting tools, guide generation tools and design corresponding cutting algorithms and guide automatic generation algorithms.
  • Step 1: Build a virtual workbench scene.
  • Step 2: Design a user interface (UI) for virtual interaction.
  • FIG. 2 is a schematic diagram of a common UI interaction interface provided in an embodiment of the present invention.
  • the common scene UI interface includes a menu option list and buttons corresponding to different functions.
  • The menu option list includes options such as Load Model, Settings, and Import Model.
  • Clicking the "Load Model" button opens the file explorer, where the model file can be selected and imported into the system.
  • The model is then displayed in the center of the interface. Rotation operations are also provided; for example, the model can be set to rotate up and down or left and right by checking the corresponding option.
  • Clicking the "Reset" button cancels the imported model and returns to the initial state; clicking the "Import Model" button confirms the import and loads the VR scene.
  • FIG. 3 is a schematic diagram of a VR scene UI interaction interface provided in an embodiment of the present invention.
  • The VR scene UI requires a specially designed interaction interface because of the uniqueness of its input and output devices, the VR headset and the handle controllers.
  • On ordinary hardware the area in which an interface can be placed is determined by the size of the screen (a phone or computer display); in the virtual world the user has a 360° field of view.
  • The interface can therefore be placed anywhere. If its position is fixed, the user will likely have to turn around frequently to interact with it; if its position is not fixed, the interface can easily become embedded in objects in the VR scene as it moves, which degrades the user experience.
  • In the system of the present invention, the interface is called up by clicking the menu button on the handle, and its placement is specified by the user with the handle.
  • The user can click the menu button again at any time to reposition the interface.
  • The scene includes the console in the foreground and the UI interface in the background.
  • The current mainstream interaction method is to have the handle emit a ray that hovers over the virtual interface in the scene.
  • With this method the user neither needs to walk nor reach out to touch the UI, avoiding problems such as bumping into a real wall.
  • Its disadvantages are that the interface must be placed far from the user, it is easily blocked by objects in the scene, and ray-based operation does not match the user's behavioral habits. Therefore, since the interface buttons of the present system are simple and the interface pops up close to the handle, the direct-contact interaction method is adopted instead, which is simpler, easier to use, and consistent with real-world behavior.
  • Toolbar menu: click the menu button on the handle to open the toolbar.
  • FIG. 4 is a schematic diagram of the UI interface in FIG. 3 according to an embodiment of the present invention. As shown in FIG. 4, the interface provides five operating tools. Clicking "Create" with the handle's selection key generates the corresponding tool in front of the interface; clicking "Clear" deletes the corresponding tools from the scene in order.
  • The five operating tools are:
  • (1) Cutting tool: cuts, as a whole, the plane of a model imported into the system or a model generated by the system;
  • (2) Partial cutting tool: cuts the local mesh of a model imported into the system or a model generated by the system;
  • (3) Single guide plate generation tool: provides a rectangular plate for the user and uses it to generate a guide plate of corresponding size at the cutting position;
  • (4) Double guide plate generation tool: provides two rectangular plates for the user and uses them to generate, at the cutting position, two guide plates of corresponding size that share a mesh;
  • (5) Semi-guide plate generation tool: provides a rectangular plate for the user and uses it to generate a single-sided guide plate of corresponding size at the cutting position.
  • Model operation menu: select the object to be operated on, hold down the selection button on the handle, and click the handle's menu button to open the related settings menu.
  • FIG. 5 is a schematic diagram of the settings interface for the tools shown in FIG. 4 according to an embodiment of the present invention. As shown in FIG. 5, the following settings can be made:
  • (1) Set size: drag the slider bars in the interface with the handle to change the length, width and height of the object.
  • The text box on the right displays the corresponding value; click the text box to enter a precise value manually.
  • (2) Set color: several color buttons are provided; clicking a button changes the object to that button's color, and clicking "More" changes the colors offered by the buttons in the interface.
  • (3) Copy object: duplicates the object.
  • Step S110: in the virtual reality (VR) scene, the cutting position and the cutting target model are obtained by plane-cutting the model to be cut.
  • The cutting tool in this embodiment is a scalpel tool used to cut, as a whole, models imported into the system and models generated by the system; it can cut not only the imported human tissue model but also trim the guide plates automatically generated by the system.
  • Fig. 6 is a flowchart of step S110 in Fig. 1 according to an embodiment of the present invention, which specifically includes the following steps:
  • Step S601: the cutting position and the cutting plane for the model to be cut are determined according to the position of the cutting tool.
  • Step S602: the model to be cut is divided according to the positions of its mesh vertices relative to the cutting plane to generate a model cut; the model is composed of a plurality of triangular faces, each formed by three mesh vertices.
  • This step includes dividing the mesh vertices, specifically:
  • if the dot product is greater than or equal to 0, the mesh vertex is added to the first vertex set and the corresponding tag value in the tag array is set to 1; if the dot product is less than 0, the mesh vertex is added to the second vertex set and the corresponding tag value in the tag array is set to 0.
  • This step further includes dividing the triangular faces, specifically:
  • cutting a triangular face according to the label values corresponding to its vertex indices further includes:
  • two cutting points are formed where the cutting plane intersects the triangular face, their coordinates are obtained, and the two cutting points are connected to form the boundary of the model cut;
  • the coordinates of the two cutting points together with the original three mesh vertices of the triangular face form three new triangular faces, which are added to the first and second triangular face sets, and the two cutting points are added to the first vertex set and the second vertex set, respectively.
  • Step S603: the model cut is filled by adding a triangular mesh to form the cutting target model.
  • The principle of the model cutting algorithm used in step S110 is to determine a plane in space from the position of the scalpel, compute the position of each mesh vertex of the cut model relative to that plane, and create two new vertex arrays that hold the mesh vertices on the left and right sides respectively.
  • Triangles whose three vertices all lie on one side are assigned to the left or right triangle array according to this mapping; triangles whose vertices do not all lie on one side intersect the cutting plane.
  • After the intersection points are computed, these triangles are divided, the division points are added to the corresponding vertex arrays, and finally the division points on both sides are connected to stitch the triangles and the holes exposed by the cut object are filled.
  • Fig. 7 is a flowchart of a model cutting algorithm in an embodiment of the present invention, which specifically includes the following steps:
  • Step S701: divide the mesh vertices on the left side of the cutting plane into group A and the mesh vertices on the right side into group B;
  • Step S702: traverse the triangular face array;
  • Step S703: judge whether the three mesh vertices of the face are all in group A; if yes, add the face to group A, otherwise go to step S704;
  • Step S704: judge whether the three mesh vertices are all in group B; if yes, add the face to group B, otherwise go to step S705;
  • Step S705: cut the triangular face;
  • Step S706: add the resulting mesh vertices to group A or group B respectively;
  • Step S707: stitch the triangular faces according to the cutting points.
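  • Read as code, the loop of steps S701 to S707 could be sketched roughly as follows (an illustrative Python reading of the flowchart; the helper and variable names are assumptions, and the cutting of straddling faces in step S705 and the stitching in step S707 are only collected here, not performed):

      def side(v, plane_point, plane_normal):
          """Signed side of vertex v relative to the cutting plane (>= 0 means group A)."""
          return sum((v[i] - plane_point[i]) * plane_normal[i] for i in range(3))

      def split_triangles(vertices, triangles, plane_point, plane_normal):
          """Steps S701 to S706: assign whole triangles to group A or group B and
          collect the straddling triangles that still need to be cut and stitched."""
          group_a, group_b, straddling = [], [], []
          labels = [1 if side(v, plane_point, plane_normal) >= 0 else 0 for v in vertices]
          for tri in triangles:                       # tri = (i0, i1, i2) vertex indices
              count = labels[tri[0]] + labels[tri[1]] + labels[tri[2]]
              if count == 3:                          # all three vertices in group A
                  group_a.append(tri)
              elif count == 0:                        # all three vertices in group B
                  group_b.append(tri)
              else:                                   # face intersects the cutting plane
                  straddling.append(tri)
          return group_a, group_b, straddling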
  • The key to dividing a triangular face is to compute the spatial coordinates of the cutting points M1 and M2 where the cutting plane intersects the triangular face.
  • M1 is obtained by interpolating along the triangle edge from R[xi] to R[yi], where Scalar1 is the ratio of the distance from M1 to R[xi] to the distance from R[yi] to R[xi].
  • M2 can be calculated in the same way.
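  • The formula itself is not reproduced in this text; based on the stated ratio, the cut point can be recovered by linear interpolation along the edge, for example with the following sketch (an assumed reconstruction, not the patent's verbatim formula):

      def edge_plane_cut_point(p_start, p_end, plane_point, plane_normal):
          """Point where the edge from p_start (R[xi]) to p_end (R[yi]) crosses the
          cutting plane, using the interpolation ratio described for Scalar1."""
          def signed(p):
              return sum((p[i] - plane_point[i]) * plane_normal[i] for i in range(3))
          d0, d1 = signed(p_start), signed(p_end)
          # Scalar1 = distance(M1, R[xi]) / distance(R[yi], R[xi]); valid when the
          # edge actually crosses the plane, i.e. d0 and d1 have opposite signs.
          scalar1 = d0 / (d0 - d1)
          return tuple(p_start[i] + scalar1 * (p_end[i] - p_start[i]) for i in range(3))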
  • At this point the original object mesh is divided into two halves; however, because the model meshes are all surface meshes whose interior is empty and contains no mesh, cutting the model exposes the interior and forms a cavity, and a model with a cavity does not meet the watertightness requirement of 3D printing. It is therefore necessary to add a triangular mesh to the model cut for stitching, as follows:
  • The first step is to find the vertices on the cut boundary of the model. From the triangle-cutting step above, all the M1 and M2 cutting points generated by the cut triangles, when connected, form the cut boundary of the model, so it suffices to create a new vertex set M during the cutting step and put all the cutting points into M in pairs. Then, taking the first cutting point of the first pair in M as the reference, the remaining cutting-point pairs are traversed, and each pair is connected with the reference point to form a triangular face; after the traversal is complete, the cavity is filled by this triangular mesh.
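  • A minimal sketch of this fan-style stitching (assuming the cutting points are already collected as ordered pairs; all names are illustrative):

      def fill_cut_with_fan(cut_point_pairs):
          """Fill the cut cavity with a triangle fan.

          cut_point_pairs -- list of (M1, M2) pairs collected while cutting triangles.
          Returns a list of triangles (each a triple of 3D points) anchored at the
          first cutting point of the first pair, as described in the text.
          """
          if not cut_point_pairs:
              return []
          anchor = cut_point_pairs[0][0]          # reference vertex of the fan
          triangles = []
          for m1, m2 in cut_point_pairs[1:]:      # every remaining pair adds one face
              triangles.append((anchor, m1, m2))
          return triangles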
  • Next, the model of the surgical guide is constructed. Based on the mesh information of the cut region of the human tissue model, a film of a certain extent and thickness is automatically generated along the surface of the tissue model. Because the film fits the target surface exactly, after 3D printing it can be used as a surgical aid for support, fixation and accurate positioning during the operation. In addition, after the doctor performs multiple cutting rehearsals on the patient's tissue and organ model in the VR scene to determine the intraoperative cutting position, this technique can generate varied guide plates aligned exactly with the cutting direction at the target cutting site, which plays a key role in assisting the surgical cut.
  • Step S120: surface mesh extraction is performed according to the mesh information of the cutting target model, and a film with a preset thickness is generated on the surface of the cutting target model.
  • The system of the present invention uses a standard rectangular plate in Unity to simulate the guide plate slot, and the corresponding guide plate size is obtained by setting the length, width and height of the rectangular plate.
  • In this step, a surface mesh of a certain extent is extracted from the cutting target model and, after duplication, inversion, displacement and edge stitching, a film with the preset thickness is obtained.
  • Fig. 8 is a flowchart of step S120 in Fig. 1 of an embodiment of the disclosure, which specifically includes the following steps:
  • Step S801: the mesh information of the cutting target model is extracted with the spherical-region mesh extraction method to obtain the lower surface of the film.
  • Step S802: the lower surface of the film is displaced by the preset thickness along the direction of its normal vectors, and duplicate points are searched for among the displaced mesh vertices.
  • Step S803: the displaced coordinates of the duplicate points are calculated, and the displaced duplicate points are corrected based on these coordinates to obtain the upper surface of the film.
  • Step S804: the cutting points on the edges of the upper surface of the film and the lower surface of the film are connected pair by pair to form triangular faces, yielding the film.
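  • Step S804 can be read as building the side wall of the film by joining matching boundary points of the two surfaces; a hedged sketch follows (the index-aligned pairing of boundary points is an assumption):

      def stitch_film_edges(lower_edge, upper_edge):
          """Join matching boundary points of the lower and upper film surfaces.

          lower_edge, upper_edge -- lists of 3D boundary points, assumed to have the
          same length and ordering. Each consecutive pair of boundary points yields
          two triangles, which together form the side wall closing the film.
          """
          wall = []
          n = len(lower_edge)
          for i in range(n):
              j = (i + 1) % n                     # wrap around the closed boundary
              lo_i, lo_j = lower_edge[i], lower_edge[j]
              up_i, up_j = upper_edge[i], upper_edge[j]
              wall.append((lo_i, lo_j, up_j))     # first triangle of the quad
              wall.append((lo_i, up_j, up_i))     # second triangle of the quad
          return wall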
  • The extraction range of the surface mesh is the extent of the film. Because actual operations vary widely and surgical sites differ greatly, the requirements on the size and shape of the film also differ, and no extraction rule, however designed, can be universally applicable to different surgical sites. The system of the present invention therefore adopts a unified spherical-region mesh extraction method and provides cutting tools for the doctor, so that the doctor can manually adjust and modify the generated film and guide to make its shape and size meet the diverse needs of surgery.
  • Step S801, extracting the mesh information of the cutting target model with the spherical-region mesh extraction method, includes:
  • for a triangular face that is only partly inside the region, the triangular face is cut to obtain two cutting points, and the new triangles generated by the division, formed between the vertex indices and the two cutting points and located inside the spherical region, are assigned to the spherical region.
  • The specific implementation of the spherical-region mesh extraction method in the system of the present invention is as follows:
  • the single vertex inside the region is denoted Top;
  • the vertices outside the region are denoted L and R;
  • the cutting points generated by the cut are denoted LS and RS.
  • RS is computed in the same way as LS; taking LS as an example, it is the point on the edge from Top to L at which the edge leaves the spherical region.
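  • The calculation formula itself is not reproduced in this text. As an illustrative assumption (not the patent's own formula), if the region boundary is a sphere with a given center and radius, LS can be computed as the intersection of the edge from Top to L with that sphere:

      import math

      def sphere_edge_cut_point(top, outside, center, radius):
          """Point where the edge from the inside vertex (Top) to an outside vertex
          (L or R) crosses the assumed spherical region boundary."""
          d = [outside[i] - top[i] for i in range(3)]      # edge direction
          f = [top[i] - center[i] for i in range(3)]       # from sphere center to Top
          a = sum(di * di for di in d)
          b = 2.0 * sum(f[i] * d[i] for i in range(3))
          c = sum(fi * fi for fi in f) - radius * radius
          # Top is inside (c < 0) and the other vertex outside, so the larger root
          # of a*t^2 + b*t + c = 0 lies in [0, 1] and gives the boundary crossing.
          t = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
          return tuple(top[i] + t * d[i] for i in range(3))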
  • After the surface mesh is extracted in the previous step, it needs to be duplicated to produce a film with the preset thickness, because a mesh model without thickness cannot be 3D printed.
  • The specific steps for producing the film are as follows:
  • Thickness is the thickness of the film: a new layer of mesh is obtained by displacing each mesh vertex along the direction of its normal vector by Thickness units. This new layer of mesh is used as the upper surface of the film, and the original mesh as the lower surface of the film.
  • The specific processing is as follows: first find the duplicate points; because the number of duplicate points varies from group to group, an ArrayList structure is used to store them, and each group of duplicate points is nested inside an outer ArrayList that records how many groups there are; finally, the displaced coordinates of each group of duplicate points are calculated:
  • Rate is the ratio of the averaged normal vector of the duplicate points to the normal vector of the new point,
  • and vi is the coordinate of any point in the group of duplicate points.
  • To use the original mesh of step (1) as the lower-surface mesh, the normal vectors and the triangle face indices must be reversed: the normal vector of each original mesh vertex is flipped one by one, and the winding order of the triangle face data is reversed.
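  • A minimal Python sketch of this duplication step (illustrative only; the handling of duplicate points follows an assumed reading of the Rate correction described above, and all function and variable names are hypothetical):

      def make_film(vertices, triangles, normals, thickness):
          """Duplicate an extracted surface mesh into a film of the given thickness.

          vertices  -- list of (x, y, z) mesh vertices of the extracted surface
          triangles -- list of (i0, i1, i2) vertex-index triples
          normals   -- per-vertex normal vectors, same length as vertices
          Returns (upper_vertices, upper_triangles, lower_vertices, lower_triangles).
          Duplicate vertices sharing one position are displaced along their averaged
          normal, an assumed reading of the Rate correction described in the text.
          """
          # Group indices of duplicate points by (rounded) position.
          groups = {}
          for idx, v in enumerate(vertices):
              groups.setdefault(tuple(round(c, 6) for c in v), []).append(idx)

          upper = list(vertices)
          for idxs in groups.values():
              # Average the normals of all duplicates at this position and normalize.
              avg = [sum(normals[i][k] for i in idxs) / len(idxs) for k in range(3)]
              length = max((avg[0] ** 2 + avg[1] ** 2 + avg[2] ** 2) ** 0.5, 1e-12)
              avg = [a / length for a in avg]
              for i in idxs:
                  upper[i] = tuple(vertices[i][k] + thickness * avg[k] for k in range(3))

          # The original mesh becomes the lower surface: reverse the triangle winding
          # so its faces point away from the film (per-vertex normals would be negated
          # in the same pass when they are kept).
          lower_vertices = list(vertices)
          lower_triangles = [(i2, i1, i0) for (i0, i1, i2) in triangles]
          return upper, list(triangles), lower_vertices, lower_triangles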
  • Step S130: a rectangular plate is used to cut the film surface at the cutting position to obtain the position of the guide plate slot and to form the guide plate surface.
  • This step specifically includes the following.
  • Four planes are constructed from the coordinates of the rectangular plate, the upper surface and the lower surface of the film are cut based on these four planes, and the cut location is the position of the guide plate slot.
  • The four planes are obtained as follows: the left and right sides of the rectangular plate serve as the planes that cut the two ends of the film's lower surface, giving two of the four planes; after the rectangular plate is translated by the preset thickness distance, its sides serve as the planes that cut the two ends of the film's upper surface, giving the remaining two planes.
  • The mesh vertices of the guide plate are generated from the corner coordinates of the rectangular plate, and the mesh vertices of the guide plate are connected with the cutting points on the film to form a triangular mesh, which forms the guide plate surface.
  • The coordinate information of the rectangular plate is used to construct the four planes, and the upper and lower surfaces of the film are each cut twice to remove the mesh in the middle part.
  • The freed part is the position of the slot.
  • This step includes: 1) obtaining the cutting direction from the coordinates of the upper and lower vertices of the rectangular plate, and, using that direction, cutting the film four times along the four planes to obtain the upper half-circle of cutting points and the lower half-circle of cutting points; 2) connecting the upper half of the guide plate's mesh vertices with the upper half-circle of cutting points to form a triangular mesh, obtaining the upper surface of the guide plate; 3) connecting the lower half of the guide plate's mesh vertices with the lower half-circle of cutting points to form a triangular mesh, obtaining the lower surface of the guide plate; 4) forming the guide plate surface from the upper surface and the lower surface of the guide plate.
  • The middle part of the film needs to be cut away to free the position of the guide plate.
  • The inner surface of the guide plate connects to the lower surface of the film, and its outer surface connects to the upper surface of the film. Because the guide plate itself also has thickness, the inner and outer surfaces must be staggered; the system of the present invention therefore selects the left and right sides of the rectangular plate as the planes that cut the two ends of the film's lower surface, and translates these two sides outward by the thickness distance to serve as the planes that cut the two ends of the film's upper surface.
  • The principle of this cutting is essentially the same as the model cutting algorithm described above and is not repeated here; a sketch of the plane construction is given below.
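  • The following sketch shows how such four cutting planes could be derived from the plate sides (an illustration under assumed conventions for the plane representation and translation direction, not the patent's own construction):

      def slot_cutting_planes(left_point, right_point, side_normal, thickness):
          """Build the four (point, normal) cutting planes for the guide plate slot.

          left_point / right_point -- points on the left and right sides of the plate
          side_normal              -- unit vector along the plate's left-to-right axis
          thickness                -- preset film thickness used to stagger the planes
          The first two planes cut the ends of the film's lower surface at the plate
          sides; the other two are those sides translated outward by the thickness
          and cut the ends of the upper surface.
          """
          n = side_normal
          shift = tuple(thickness * c for c in n)
          lower_left = (left_point, n)
          lower_right = (right_point, tuple(-c for c in n))
          upper_left = (tuple(left_point[i] - shift[i] for i in range(3)), n)
          upper_right = (tuple(right_point[i] + shift[i] for i in range(3)),
                         tuple(-c for c in n))
          return lower_left, lower_right, upper_left, upper_right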
  • The Ci corresponding to the minimum value of Value[i] is the rightmost cutting point, and the Ci corresponding to the maximum value of Value[i] is the leftmost cutting point;
  • the cutting points saved in the array are the cutting points of the upper half-circle, and the cutting points of the lower half-circle can be obtained in the same way.
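  • Value[i] is not defined in this text; a plausible reading (an assumption) is that it is the projection of each cutting point Ci onto a left-right axis, in which case the extreme points can be selected as follows:

      def extreme_cut_points(cut_points, left_axis):
          """Pick the leftmost and rightmost cutting points.

          cut_points -- list of 3D cutting points Ci
          left_axis  -- vector pointing from right to left (assumed meaning of Value)
          Value[i] is taken here as the projection of Ci onto left_axis, so the
          minimum projection gives the rightmost point and the maximum the leftmost.
          """
          values = [sum(p[k] * left_axis[k] for k in range(3)) for p in cut_points]
          rightmost = cut_points[values.index(min(values))]
          leftmost = cut_points[values.index(max(values))]
          return leftmost, rightmost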
  • Step S140: the 3D printing model of the guide plate in the VR scene is constructed according to the position of the guide plate slot and the guide plate surface.
  • FIG. 10 is a schematic diagram of a cutting target model constructed in a VR scene in an embodiment of the present invention.
  • FIGS. 11 to 13 are schematic diagrams of the process of automatically generating a single guide plate model in a VR scene in an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of locking the guide plate model with the handle in an embodiment of the present invention,
  • FIG. 12 is a schematic diagram of locking the slot model with the handle in an embodiment of the present invention, and
  • FIG. 13 is a schematic diagram of moving the guide plate model with the handle in an embodiment of the present invention.
  • Double guide plate mode: the system of the present invention provides two rectangular plates for the user, representing the positions of the two cut guide plates. After the first guide plate is generated, only the mesh it produced and information such as the upper- and lower-surface meshes are passed to the second guide plate, which then executes the guide plate generation method again on the basis of the film generated for the first.
  • FIG. 14 is a schematic diagram of generating a double guide plate model in an embodiment of the present invention.
  • FIG. 15 is a schematic diagram of generating the semi-guide plate model in an embodiment of the present invention.
  • With the above method provided by the embodiments of the present invention, on the one hand, by modeling the cutting position, guide plate and the like of the cut model in the VR scene, the cutting operation can be repeated on the real surgical object in virtual reality any number of times without restriction, and a 3D guide plate print model is generated at the cutting position.
  • On the other hand, building the 3D guide plate printing model in virtual reality reduces operating difficulty and production time compared with the traditional approach of directly designing the guide plate model for 3D printing.
  • Although modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.
  • The technical solution according to the embodiments of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive or removable hard disk) or on a network, and which includes several instructions that cause a computing device (such as a personal computer, a server, a touch terminal or a network device) to execute the method according to the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manufacturing & Machinery (AREA)
  • Materials Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Chemical & Material Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Robotics (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A VR-based method for building a guide plate 3D printing model, comprising the following steps: in a virtual reality (VR) scene, plane-cutting a model to be cut to obtain a cutting position and a cutting target model; performing surface mesh extraction according to the mesh information of the cutting target model and generating a film with a preset thickness on the surface of the cutting target model; using a rectangular plate to cut the film surface at the cutting position to obtain the position of a guide plate slot and form a guide plate surface; and, according to the position of the guide plate slot and the guide plate surface, building a 3D printing model of the guide plate in the VR scene. By modeling the cutting position of the cut model, the guide plate and the like in the VR scene, the present invention makes it possible to repeatedly cut a real surgical object multiple times in VR without limitation, and generating a 3D guide plate print model at the cutting position reduces operating difficulty and production time.
PCT/CN2019/104404 2019-07-12 2019-09-04 VR-based method for building a guide plate 3D printing model WO2021007935A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910630656.4 2019-07-12
CN201910630656.4A CN110341192B (zh) 2019-07-12 2019-07-12 一种基于vr的导板3d打印模型建立方法

Publications (1)

Publication Number Publication Date
WO2021007935A1 (fr)

Family

ID=68175148

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/104404 WO2021007935A1 (fr) 2019-07-12 2019-09-04 VR-based method for building a guide plate 3D printing model

Country Status (2)

Country Link
CN (1) CN110341192B (fr)
WO (1) WO2021007935A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114722688A (zh) * 2022-06-07 2022-07-08 中国城市规划设计研究院(住房和城乡建设部遥感应用中心) 一种三维建筑模型自动分层方法
WO2024055799A1 (fr) * 2022-09-17 2024-03-21 深圳市创必得科技有限公司 Procédé et appareil d'orientation de plaque de guidage d'implant dentaire, et dispositif électronique et support de stockage

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111376482B (zh) * 2020-02-18 2022-11-04 珠海赛纳三维科技有限公司 手术训练模型及其打印方法、打印系统
CN111833462B (zh) * 2020-07-14 2024-05-17 深圳市瑞立视多媒体科技有限公司 基于虚幻引擎的切割方法、装置、设备及存储介质
CN112699426B (zh) * 2020-12-09 2024-08-23 深圳微步信息股份有限公司 一种io挡板设计方法、io挡板
CN112764538A (zh) * 2021-01-13 2021-05-07 杭州师范大学 一种vr环境下基于手势交互的空间能力提升方法
CN113343325B (zh) * 2021-05-21 2023-02-28 成都东极六感信息科技有限公司 榫卯加工模拟系统及其方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648548A (zh) * 2018-04-19 2018-10-12 浙江工业大学 一种脑神经外科虚拟手术训练系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104739513A (zh) * 2015-03-12 2015-07-01 徐贵升 制作人体组织模拟手术模型及导板的方法
CN105105833A (zh) * 2015-07-24 2015-12-02 武汉市普仁医院 腓骨近端骨肿瘤病灶清除导向器制备装置及方法
CN105816232A (zh) * 2016-05-17 2016-08-03 南方医科大学 一种个体化骨骼模型的解剖型接骨板的设计及成型方法
WO2018007628A1 (fr) * 2016-07-07 2018-01-11 Levels3D Méthode et système de reconstruction d'une représentation tridimensionnelle
CN106388978A (zh) * 2016-09-13 2017-02-15 中南大学湘雅医院 一种基于三维重建技术的髋臼侧模型和导板的制备方法
WO2018093921A1 (fr) * 2016-11-16 2018-05-24 Terarecon, Inc. Système et procédé d'impression tridimensionnelle, de rendu de réalité holographique et virtuelle à partir d'un traitement d'image médicale
CN107348987A (zh) * 2017-05-19 2017-11-17 华南理工大学 一种膝关节置换手术用股骨远端个性化切骨导板及实现方法

Also Published As

Publication number Publication date
CN110341192B (zh) 2020-07-17
CN110341192A (zh) 2019-10-18

Similar Documents

Publication Publication Date Title
WO2021007935A1 (fr) VR-based method for building a guide plate 3D printing model
JP6883177B2 (ja) 解剖学的アイテムのコンピュータによる視覚化
US7672822B2 (en) Automated three-dimensional alternative position viewer
Bruyns et al. Interactive cutting of 3D surface meshes
US20140324400A1 (en) Gesture-Based Visualization System for Biomedical Imaging and Scientific Datasets
US20220346888A1 (en) Device and system for multidimensional data visualization and interaction in an augmented reality virtual reality or mixed reality environment
Mirhosseini et al. Benefits of 3D immersion for virtual colonoscopy
US20120223945A1 (en) Calibrated natural size views for visualizations of volumetric data sets
CN105225272B (zh) 一种基于多轮廓线三角网重构的三维实体建模方法
Zhang Virtual reality technology
CN109360219A (zh) 一种增强现实辅助手术方法及系统
Zachow et al. Draw and cut: intuitive 3D osteotomy planning on polygonal bone models
TWI241533B (en) Methods and systems for interaction with three-dimensional computer models
Sørensen et al. A new virtual reality approach for planning of cardiac interventions
KR101903996B1 (ko) 의료 영상 시뮬레이션 방법 및 그 방법을 이용한 장치
Mills et al. IMEX: A tool for image display and contour management in a windowing environment
CN108389203B (zh) 三维虚拟器官的体积计算方法、装置、存储介质及设备
Krapichler et al. Physicians in virtual environments—multimodal human–computer interaction
Chen et al. Computer-aided liver surgical planning system using CT volumes
US20160180584A1 (en) Virtual model user interface pad
Monclús et al. Data-aware picking for medical models
Williams A method for viewing and interacting with medical volumes in virtual reality
Li et al. Object-in-hand feature displacement with physically-based deformation
KR20200094397A (ko) 3d 의료정보 입력 방법 및 시스템
Antonijuan Tresens et al. Hybrid-reality: Collaborative biomedical data exploration exploiting 2-d and 3-d correspondence

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19937376

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19937376

Country of ref document: EP

Kind code of ref document: A1
