CN110992477B - Bioepidermal marking method and system for virtual surgery - Google Patents

Bioepidermal marking method and system for virtual surgery

Info

Publication number
CN110992477B
CN110992477B (application CN201911356159.6A)
Authority
CN
China
Prior art keywords
virtual
marking
user
mark
patient model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911356159.6A
Other languages
Chinese (zh)
Other versions
CN110992477A (en)
Inventor
Wang Qiang (王强)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Chuxin Medical Technology Co ltd
Original Assignee
Shanghai Chuxin Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Chuxin Medical Technology Co., Ltd.
Priority: CN201911356159.6A
Publication of CN110992477A
Application granted
Publication of CN110992477B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/004: Annotating, labelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a biological epidermis marking method for virtual surgery. A three-dimensional virtual drawing environment containing a pre-drawn virtual patient model is displayed on a display screen. Because the virtual patient model is drawn in advance in the three-dimensional virtual drawing environment, its cost is negligible compared with manufacturing a physical patient model. Different virtual patient models can also be provided, so users can practise the different marking methods required for different virtual patient models under different operations. The method receives a marking track drawn by the user on the virtual patient model; the track can be drawn freely, giving the user a high degree of freedom, and the marking style and content can be customised. After the user has marked, the method judges whether the position of the marking track on the virtual patient model is the required marking position and sends the judgment result to the user, so the user knows clearly whether the marking position is correct.

Description

Bioepidermal marking method and system for virtual surgery
Technical Field
The application relates to the technical field of medical education equipment, in particular to a biological epidermis marking method and system for virtual surgery.
Background
Surgery is the medical specialty most characterized by high technology, high risk and high difficulty. The operating room, as the main site of surgery, carries a heavy workload and a high volume of operations; risk is ever-present and its consequences are serious. Preventing wrong-site errors during surgery has therefore become increasingly important to hospital managers. The Chinese Medical Doctor Association requires that, to prevent operating on the wrong surgical site, a surgical-site identification and marking system be established: in the actual surgical workflow, after the surgeon's order is issued, the operator or the first surgical assistant marks the surgical site on the patient before the pre-operative preparation and waiting process begins.
In the prior art, the process of marking a patient's epidermis can be simulated by virtual surgical training products. These products adopt a static marker-mapping scheme: a pre-made marker picture is attached to the epidermis of a specified patient model to simulate a mark. However, this method consumes resources making each patient model, cannot provide a variety of patient models to users, does not allow the marking pattern and content to be customized, and cannot detect or give feedback on the marking result.
Disclosure of Invention
To overcome at least some of the problems associated with the prior art, the present application provides a method and system for virtual surgical bioepidermal marking.
The scheme of the application is as follows:
according to a first aspect of an embodiment of the present application, there is provided a method for marking a biological epidermis for a virtual operation, comprising:
displaying a three-dimensional virtual drawing environment through a display screen, wherein the three-dimensional virtual drawing environment comprises: a pre-drawn virtual patient model;
receiving a mark track drawn by a user on the virtual patient model;
and judging whether the position of the marking track on the virtual patient model is the required marking position, and sending the judgment result to the user.
Preferably, in one implementation manner of the present application, the method further includes:
rendering a plurality of virtual subcutaneous tissue structures in the three-dimensional virtual rendering environment, the virtual subcutaneous tissue structures comprising at least: viscera, pathogens, bones;
drawing virtual epidermis tissue structures corresponding to the virtual subcutaneous tissue structures one by one in the three-dimensional virtual drawing environment, and marking the virtual epidermis tissue structures, wherein the marks carry the corresponding relation between the virtual epidermis tissue structures and the virtual subcutaneous tissue structures;
splicing the virtual subcutaneous tissue structure and the virtual epidermal tissue structure to obtain the virtual patient model;
decomposing the virtual patient model into model grids, and determining UV coordinates of each vertex of the model grids;
and mapping the virtual patient model according to the UV coordinates of each vertex of the model grid.
Preferably, in one implementation manner of the present application, receiving the marking track drawn by the user on the virtual patient model specifically comprises:
acquiring screen coordinates of a user input position, converting the screen coordinates into three-dimensional space coordinates, vertically transmitting rays to the virtual patient model by taking the three-dimensional space coordinates as a starting point, and generating marking points at contact points of the rays and a virtual epidermis tissue structure of the virtual patient model;
and generating a plurality of marking points according to the continuous input of the user, and rendering the marking points into continuous marking tracks.
Preferably, in one implementation manner of the present application, the method further includes:
after the user's continuous input ends, sending the user an option of whether to continue drawing;
if the user selects no, judging whether the position of the mark track on the virtual patient model is the required mark position or not, and sending a judging result to the user;
if the user selects yes, the marking track drawn by the user on the virtual patient model is received again.
Preferably, in one implementation of the present application, the required marker position is a virtual epidermal tissue structure corresponding to a pathogen;
the determining whether the position of the marking track on the virtual patient model is a required marking position specifically comprises:
acquiring the identifier of the virtual epidermal tissue structure at the position of the marking track, and determining the virtual subcutaneous tissue structure corresponding to the virtual epidermal tissue structure according to the identifier; if the virtual subcutaneous tissue structure is a pathogen, the marking position is correct; if the virtual subcutaneous tissue structure is not a pathogen, the marking position is wrong.
Preferably, in an implementation manner of the present application, the sending the determination result to the user specifically includes:
sending virtual subcutaneous tissue structure information corresponding to the virtual epidermal tissue structure at the position of the marking track to the user through sound and/or text;
and sending a correct mark position prompt or an incorrect mark position prompt to a user through sound and/or text and/or special effect animation.
Preferably, in one implementation manner of the present application, the method further includes:
if the ray fails to contact the virtual patient model, screen coordinates of the user input location are reacquired.
Preferably, in one implementation manner of the present application, the method further includes: judging whether to first receive the mark track drawn by the current user on the virtual patient model, if so, creating a mark track storage map; if the mark track drawn by the current user on the virtual patient model is not received for the first time, selecting whether to erase the data stored in the mark track storage map according to preset conditions;
generating a plurality of marking points according to the user's continuous input, and rendering the marking points into a continuous marking track, which specifically comprises the following steps:
after the marking track storage map is ready, generating the marking points according to the user's input, and sampling the marking points;
determining the marking point generated from the user's first input position as the marking starting point, and judging whether the current marking point is the marking starting point;
if the current marking point is the marking starting point, storing it into the marking track storage map;
if the current marking point is not the marking starting point, performing point-complementing processing between the current marking point and the immediately preceding marking point, generating intermediate complementary marking points, and storing the current marking point and the intermediate complementary marking points into the marking track storage map;
and rendering all the marking points and intermediate complementary marking points stored in the marking track storage map into a continuous marking track.
Preferably, in one implementation manner of the present application, the method further includes:
and correcting the positions of the marking points at the turning portions of the line segments forming the marking track.
According to a second aspect of an embodiment of the present application, there is provided a biological epidermis marker system for virtual surgery, comprising:
the display screen module is used for displaying a three-dimensional virtual drawing environment, and the three-dimensional virtual drawing environment comprises: a pre-drawn virtual patient model;
the input module is used for receiving a mark track drawn by a user on the virtual patient model;
the processing module is used for judging whether the position of the marking track on the virtual patient model is a required marking position or not;
and the output module is used for sending the judging result to the user.
The technical scheme provided by the application can comprise the following beneficial effects:
in the application, a three-dimensional virtual drawing environment containing a pre-drawn virtual patient model is displayed through a display screen. Because the virtual patient model is drawn in advance in the three-dimensional virtual drawing environment, the cost consumed is negligible compared with manufacturing a physical patient model; moreover, various virtual patient models can be drawn in the environment, so different virtual patient models can be provided to users, who can practise the different marking methods required for different virtual patient models under different operations. The marking track drawn by the user on the virtual patient model is received; the track can be drawn freely, giving a high degree of freedom, and the user can customise the marking style and content. After the user has marked, whether the position of the marking track on the virtual patient model is the required marking position is judged, and the judgment result is sent to the user, so the user knows clearly whether the marking position is correct.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart of a method for virtual surgical bioepidermal marking according to an embodiment of the present application;
FIG. 2 is a flowchart showing a method for receiving a user-drawn marking trajectory on a virtual patient model in accordance with one embodiment of the present application;
FIG. 3 is a schematic diagram, according to an embodiment of the present application, of correcting the positions of marking points at the turning portions of the line segments forming the marking track;
fig. 4 is a block diagram of a virtual surgical bio-epidermal marker system according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
Fig. 1 is a flowchart of a biological epidermis marking method for virtual surgery according to an embodiment of the present application. Referring to fig. 1, the method comprises:
s11: displaying a three-dimensional virtual drawing environment through a display screen, wherein the three-dimensional virtual drawing environment comprises: a pre-drawn virtual patient model;
the display screen may be a touch screen or a non-touch screen.
Three-dimensional virtual drawing environments are a well-established technology; three-dimensional graphics can be drawn with software such as 3ds Max or Maya.
Pre-rendering a virtual patient model, comprising:
drawing a plurality of virtual subcutaneous tissue structures in a three-dimensional virtual drawing environment, the virtual subcutaneous tissue structures comprising at least: viscera, pathogens, bones;
drawing virtual epidermis tissue structures corresponding to the virtual subcutaneous tissue structures one by one in a three-dimensional virtual drawing environment, and marking the virtual epidermis tissue structures, wherein the marks carry the corresponding relation between the virtual epidermis tissue structures and the virtual subcutaneous tissue structures;
splicing the virtual subcutaneous tissue structure and the virtual epidermis tissue structure to obtain a virtual patient model;
decomposing the virtual patient model into model grids, and determining UV coordinates of each vertex of the model grids;
and (5) mapping the virtual patient model according to the UV coordinates of each vertex of the model mesh. After the virtual subcutaneous tissue structure and the virtual epidermis tissue structure are spliced into a virtual patient model, the virtual epidermis tissue structure is endowed with a map with the same color, so that a user cannot distinguish the virtual subcutaneous tissue structure inside the virtual epidermis tissue structure only by the color of the virtual epidermis tissue structure. Different colored maps are assigned to the virtual subcutaneous tissue structures to distinguish.
The identifier of each virtual epidermal tissue structure may specifically be implemented as follows: the virtual epidermal tissues corresponding to different virtual subcutaneous tissue structures use different materials, and the material of each virtual epidermal tissue is bound to the map colour value of its corresponding virtual subcutaneous tissue structure.
The virtual subcutaneous tissue structures are spliced together first, and the virtual epidermal tissue structures are then spliced onto the outside of the virtual subcutaneous tissue structures.
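As an illustrative sketch of the identifier binding described above (the structure names and colour values are hypothetical, not taken from the patent), each virtual epidermal region's material can carry the map colour value of the subcutaneous structure it covers:

```python
# Hypothetical sketch: each epidermal region's material is bound, at modelling
# time, to the map colour value of the subcutaneous structure beneath it.
SUBCUTANEOUS_BY_COLOR = {
    (255, 0, 0): "pathogen",
    (0, 255, 0): "viscera",
    (0, 0, 255): "bone",
}

class EpidermisRegion:
    def __init__(self, name, material_color):
        self.name = name
        self.material_color = material_color  # identifier bound at modelling time

    def subcutaneous_structure(self):
        # Resolve the covered subcutaneous structure from the bound colour value.
        return SUBCUTANEOUS_BY_COLOR[self.material_color]

abdomen = EpidermisRegion("abdomen_patch", (255, 0, 0))
```

Because all epidermal regions render with the same visible colour, this hidden binding is what lets the system later resolve which structure a mark covers.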
S12: receiving a mark track drawn by a user on a virtual patient model;
referring specifically to fig. 2:
s121: acquiring screen coordinates of a user input position, converting the screen coordinates into three-dimensional space coordinates, vertically transmitting rays to the virtual patient model by taking the three-dimensional space coordinates as a starting point, and generating marking points at contact points of the rays and a virtual epidermis tissue structure of the virtual patient model;
the user can input the drawing track through the mouse, and when the display screen is a touch screen, the user can also input the drawing track through handwriting.
Obtaining the screen coordinates of the user's current input position and converting them into three-dimensional space coordinates is among the most basic techniques in virtual reality.
Three-dimensional space coordinates: a single coordinate system observed by all objects in the virtual three-dimensional scene, recording the unique position and orientation of each object in the world, e.g. (0, 0, 0).
Screen coordinates: the window coordinates of the software, which are related to resolution; for example, with a window resolution of 500 × 500, the lower left corner of the screen is (0, 0) and the upper right corner is (500, 500).
Rays are emitted vertically toward the virtual patient model with the three-dimensional space coordinates as the starting point, so that all rays are parallel. A marking point is generated at the contact point between the ray and the virtual epidermal tissue structure of the virtual patient model.
S122: generating a plurality of mark points according to the continuous input of the user, and rendering the plurality of mark points into a continuous mark track.
If the user holds down the left mouse button continuously, or draws continuously on the touch screen with a finger, continuous input is recognised. Continuous input generates a plurality of consecutive marking points; connecting these points and rendering them as a continuous marking track completes the user's marking of the virtual patient model.
S13: and judging whether the position of the mark track on the virtual patient model is the required mark position or not, and sending the judging result to the user.
In surgery, the position of the pathogen needs to be marked, so the required marked position is a virtual epidermal tissue structure corresponding to the pathogen.
Judging whether the position of the mark track on the virtual patient model is the required mark position or not, and specifically comprising the following steps:
The identifier of the virtual epidermal tissue structure at the position of the marking track is obtained, and the corresponding virtual subcutaneous tissue structure is determined according to the identifier; if that virtual subcutaneous tissue structure is a pathogen, the marking position is correct; if it is not a pathogen, the marking position is wrong.
Specifically, the material of the virtual epidermal tissue structure at the position of the marking track is obtained, the bound map colour value is determined from that material, and the corresponding virtual subcutaneous tissue structure is judged from the map colour value. If the virtual subcutaneous tissue structure is a pathogen, the marking position is correct; otherwise the marking position is wrong.
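A minimal sketch of the judgment step (colour values and structure names are illustrative assumptions): each sampled point on the track carries the map colour value bound to the epidermis material beneath it, and the mark is correct only if every value resolves to the pathogen:

```python
# Illustrative colour-to-structure binding; not the patent's actual values.
COLOR_TO_STRUCTURE = {
    (255, 0, 0): "pathogen",
    (0, 255, 0): "viscera",
    (0, 0, 255): "bone",
}

def judge_track(track_colors):
    # Resolve each sampled colour value to its subcutaneous structure;
    # the marking position is correct only if every sample covers the pathogen.
    structures = {COLOR_TO_STRUCTURE[c] for c in track_colors}
    return "correct" if structures == {"pathogen"} else "wrong"
```

The returned string would drive the sound/text/animation feedback described below.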
In this embodiment, a three-dimensional virtual drawing environment containing a pre-drawn virtual patient model is displayed through the display screen. Because the virtual patient model is drawn in advance in the three-dimensional virtual drawing environment, the cost consumed is negligible compared with manufacturing a physical patient model; various virtual patient models can be drawn in the environment and provided to users, who are thereby trained in the different marking methods required for different virtual patient models. The marking track drawn by the user on the virtual patient model is received; the track can be drawn freely, giving a high degree of freedom, and the user can customise the marking style and content. After the user has marked, whether the position of the marking track on the virtual patient model is the required marking position is judged, and the judgment result is sent to the user, so the user knows clearly whether the marking position is correct.
The method of bioepidermal marking for virtual surgery in some embodiments further comprises:
after the user's continuous input ends, sending the user an option of whether to continue drawing;
if the user selects no, judging whether the position of the marking track on the virtual patient model is the required marking position, and sending the judgment result to the user;
if the user selects yes, receiving again the marking track drawn by the user on the virtual patient model.
After the user's continuous input ends, it is judged that the user has finished drawing one marking track. At this time, a continue-drawing option is sent to the user to ask whether drawing should continue.
If the user selects no, the steps of judging whether the position of the marking track on the virtual patient model is the required marking position and sending the judgment result to the user are executed.
If the user selects yes, the marking track drawn by the user on the virtual patient model is received again.
This repeats until the user selects no; drawing ends after the judging and result-sending steps have been executed.
The method for marking the biological epidermis of the virtual operation in some embodiments sends the judgment result to the user, and specifically includes:
sending virtual subcutaneous tissue structure information corresponding to the virtual epidermal tissue structure at the position of the marking track to the user through sound and/or text;
and sending a correct mark position prompt or an incorrect mark position prompt to a user through sound and/or text and/or special effect animation.
The judgment result can optionally be sent to the user in different ways.
And sending virtual subcutaneous tissue structure information corresponding to the virtual epidermal tissue structure of the position of the marking track to the user through sound and/or characters, so that the user can conveniently know the virtual subcutaneous tissue structure corresponding to the marking position, and if the marking is wrong, the user can also know the error reason in time, thereby further improving the marking skill.
Through sound and/or text and/or special-effect animation, a correct marking position prompt or a wrong marking position prompt is sent to the user, so that the user knows in time whether the marking position is correct.
If the voice prompt is selected, the corresponding voice file is loaded from the voice library of the corresponding language according to the current software language environment, and the virtual subcutaneous tissue structure information or the marking position correct prompt or the marking position error prompt is played.
If the text prompt is selected, corresponding text contents are extracted from a text library of the corresponding language according to the current software language environment, and virtual subcutaneous tissue structure information or marking position correct prompt or marking position error prompt is displayed.
If the special effect prompt is selected, directly loading the corresponding special effect resource, and playing a mark position correct prompt or a mark position error prompt with a preset position.
The method of bioepidermal marking for virtual surgery in some embodiments further comprises:
if the ray fails to contact the virtual patient model, screen coordinates of the user input location are reacquired.
If the ray fails to contact the virtual patient model, the user's input position is judged to be unreasonable, and the user can be prompted that the current input position is invalid and drawing cannot be completed from it. After the user reselects an input position, the screen coordinates of the new input position are acquired.
The above process repeats until the user's input position is reasonable, i.e. the ray emitted vertically toward the virtual patient model, starting from the three-dimensional space coordinates converted from the screen coordinates of the input position, is able to contact the virtual patient model.
The method of bioepidermal marking for virtual surgery in some embodiments further comprises: judging whether to first receive the mark track drawn by the current user on the virtual patient model, if so, creating a mark track storage map; if the mark track drawn by the current user on the virtual patient model is not received for the first time, selecting whether to erase the data stored in the mark track storage map according to preset conditions;
because after the continuous input of the user is finished, whether to continue drawing is sent to the user, if yes, the marked track drawn by the user on the virtual patient model is received again. At this time, the data of the current user when the user marks the mark last time is already stored in the mark track storage map, so that it is required to determine whether the data stored when the current user marks the mark last time needs to be erased according to a preset condition.
The preset conditions are as follows: judging whether the last marking position of the user is correct or not, and if the marking position of the user is wrong, erasing the data corresponding to the wrong marking position. If the marking position of the user is correct and incorrect, the data corresponding to the correct marking position is reserved.
Generating a plurality of marking points according to the user's continuous input, and rendering them into a continuous marking track, specifically comprises the following steps:
after the marking track storage map is ready, generating marking points according to the user's input, and sampling the marking points;
determining the marking point generated from the user's first input position as the marking starting point, and judging whether the current marking point is the marking starting point;
if the current marking point is the marking starting point, storing it into the marking track storage map;
if the current marking point is not the marking starting point, performing point-complementing processing between the current marking point and the immediately preceding marking point, generating intermediate complementary marking points, and storing the current marking point and the intermediate complementary marking points into the marking track storage map;
and rendering all the marking points and intermediate complementary marking points stored in the marking track storage map into a continuous marking track.
The point-complementing processing is done to make the marking track coherent; otherwise the track might render as discontinuous line segments or isolated points.
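The point-complementing step can be sketched as linear interpolation between consecutive sampled points; the spacing threshold `max_gap` is an assumed parameter for illustration, not a value from the patent:

```python
import math

def complement_points(prev, curr, max_gap=0.1):
    # Insert evenly spaced intermediate marking points between two sampled
    # points so the rendered track has no visible gaps when input sampling
    # is sparse (e.g. during a fast mouse stroke).
    n = int(math.dist(prev, curr) // max_gap)  # number of points to insert
    return [
        tuple(p + (c - p) * i / (n + 1) for p, c in zip(prev, curr))
        for i in range(1, n + 1)
    ]
```

Two points a unit apart with a 0.25 gap threshold get four intermediate points, so consecutive stored points are never farther apart than the threshold.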
Preferably, storing the current marking point, or the current marking point and the intermediate complementary marking points, into the marking track storage map specifically comprises:
storing them first into a track point queue; in each update frame, the marking points and intermediate complementary marking points newly added to the track point queue are written into the track storage map.
Further, the method further comprises:
the positions of the marking points constituting the line segment turning portions of the marking track are corrected.
In order to make the line segment turning part of the marking track smooth enough during drawing, the position of the marking point needs to be corrected by using a third-order Bezier curve formula.
Referring to fig. 3, four points P0, P1, P2, P3 define a cubic bezier curve in a plane or in three-dimensional space. The curve starts at P0, goes to P1, and goes from P2 to P3. Typically without passing through P1 or P2; these two points merely provide a directional reference. The spacing between P0 and P1 determines how long the curve "runs" in the direction of P2 before turning to P3.
The curve formula is:
B(t) = P₀(1-t)³ + 3P₁t(1-t)² + 3P₂t²(1-t) + P₃t³, t ∈ [0, 1]
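The formula can be evaluated componentwise for control points given as coordinate tuples; this is a direct transcription of the cubic Bezier formula, not the patent's code:

```python
def bezier3(p0, p1, p2, p3, t):
    # Evaluate B(t) = P0(1-t)^3 + 3*P1*t(1-t)^2 + 3*P2*t^2(1-t) + P3*t^3
    # componentwise; t runs from 0 (at P0) to 1 (at P3).
    u = 1.0 - t
    return tuple(
        a * u**3 + 3 * b * t * u**2 + 3 * c * t**2 * u + d * t**3
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Sampling t over [0, 1] yields smoothed replacement positions for the marking points at a turning portion of the track.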
Preferably, rendering all the marking points and intermediate complementary marking points stored in the marking track storage map into a continuous marking track specifically comprises: converting the positions of the marking points and intermediate complementary marking points into UV coordinates, obtaining the pixel data in the virtual patient model map through the UV coordinates, and modifying the corresponding pixel data to the drawing effect of the marking track.
UV is a set of data recording how and where a map (texture) is attached to a model. UV coordinates can be understood as two-dimensional coordinates in [0, 1]; each UV coordinate corresponds to a vertex of the model mesh, and the 0–1 values express positions as a percentage of the map's extent. For example, to stretch a single map over a square model, the UVs of the four vertices in the model mesh need only be set to (0, 0) at the lower-left corner, (1, 0) at the lower-right corner, (0, 1) at the upper-left corner and (1, 1) at the upper-right corner, so that the four corners of the map line up with the four corners of the model.
The mark track storage map stores position data, UV data and the like of all mark points and intermediate mark complementary points.
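The UV-to-pixel step can be sketched as below, assuming the map is a simple height x width grid of pixels and that V increases upward while pixel rows grow downward (conventions differ between engines; `paint_mark` is an illustrative name):

```python
def paint_mark(texture, u, v, color):
    """Map UV coordinates in [0, 1] to a texel of `texture` (a
    height x width nested list of pixel values) and overwrite it with
    `color`. V is flipped here on the assumption that row 0 is the top
    of the image; returns the (x, y) pixel indices that were painted."""
    height, width = len(texture), len(texture[0])
    x = min(int(u * width), width - 1)            # clamp u == 1.0 to last column
    y = min(int((1.0 - v) * height), height - 1)  # flip v, clamp v == 0.0
    texture[y][x] = color
    return (x, y)
```

A real implementation would paint a small brush footprint rather than a single texel, but the coordinate conversion is the same.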
Fig. 4 is a block diagram of a bioepidermal marker system for virtual surgery according to an embodiment of the present application, and referring to fig. 4, a bioepidermal marker system for virtual surgery includes:
the display screen module 31 is configured to display a three-dimensional virtual drawing environment, where the three-dimensional virtual drawing environment includes: a pre-drawn virtual patient model;
an input module 32 for receiving a user drawn marker trajectory on the virtual patient model;
a processing module 33, configured to determine whether the position of the marker track on the virtual patient model is a required marker position;
and the output module 34 is used for sending the judgment result to the user.
Further, the system further comprises: a drawing module for drawing the mark track on the virtual patient model.
The drawing module is specifically used for acquiring screen coordinates of a user input position, converting the screen coordinates into three-dimensional space coordinates, vertically transmitting rays to the virtual patient model by taking the three-dimensional space coordinates as a starting point, and generating marking points at contact points of the rays and a virtual epidermis tissue structure of the virtual patient model;
generating a plurality of mark points according to the continuous input of the user, and rendering the plurality of mark points into a continuous mark track.
When the current mark point is not a mark starting point, point complement processing is performed between the current mark point and the mark point immediately preceding it.
The positions of the mark points forming the turning portions of line segments in the marking track are corrected.
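The ray-casting step performed by the drawing module can be sketched with a sphere standing in for the virtual epidermis surface; a production system would instead intersect the ray with the patient model's mesh, and all names here are illustrative:

```python
def ray_sphere_hit(origin, direction, center, radius):
    """Intersect a marking ray with a sphere that stands in for the
    virtual patient's skin surface (a placeholder shape, not the
    patent's actual model). Returns the nearest hit point as a 3-tuple,
    or None if the ray misses the surface."""
    # Offset the ray origin into the sphere's local frame.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    # Coefficients of the quadratic |o + t*d|^2 = r^2 in t.
    a = sum(d * d for d in direction)
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray never touches the surface
    t = (-b - disc ** 0.5) / (2.0 * a)  # nearest of the two roots
    if t < 0.0:
        return None  # surface lies behind the ray origin
    return tuple(origin[i] + t * direction[i] for i in range(3))
```

Returning None corresponds to the case where the ray fails to contact the model, after which the screen coordinates of the user input would be reacquired.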
It is to be understood that the same or similar parts of the above embodiments may refer to one another, and that what is not described in detail in one embodiment may refer to the same or similar description in another embodiment.
It should be noted that in the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, the meaning of "plurality" means at least two.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one of, or a combination of, the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (8)

1. A method of virtual surgical bioepidermal marking, comprising:
displaying a three-dimensional virtual drawing environment through a display screen, wherein the three-dimensional virtual drawing environment comprises: a pre-drawn virtual patient model;
receiving a mark track drawn by a user on the virtual patient model;
judging whether the position of the mark track on the virtual patient model is a required mark position or not, and sending a judging result to the user;
the method further comprises the steps of:
rendering a plurality of virtual subcutaneous tissue structures in the three-dimensional virtual rendering environment, the virtual subcutaneous tissue structures comprising at least: viscera, pathogens, bones;
drawing virtual epidermis tissue structures corresponding to the virtual subcutaneous tissue structures one by one in the three-dimensional virtual drawing environment, and marking the virtual epidermis tissue structures, wherein the marks carry the corresponding relation between the virtual epidermis tissue structures and the virtual subcutaneous tissue structures;
splicing the virtual subcutaneous tissue structure and the virtual epidermal tissue structure to obtain the virtual patient model;
decomposing the virtual patient model into model grids, and determining UV coordinates of each vertex of the model grids;
mapping the virtual patient model according to the UV coordinates of each vertex of the model mesh;
the required marking position is a virtual epidermis tissue structure corresponding to a pathogen;
the determining whether the position of the marking track on the virtual patient model is a required marking position specifically comprises:
acquiring an identifier of the virtual epidermal tissue structure at the position of the marking track, and determining the virtual subcutaneous tissue structure corresponding to the virtual epidermal tissue structure according to the identifier; if the virtual subcutaneous tissue structure is a pathogen, the mark position is correct; if the virtual subcutaneous tissue structure is not a pathogen, the mark position is incorrect.
2. The method according to claim 1, wherein the receiving a mark track drawn by a user on the virtual patient model specifically comprises:
acquiring screen coordinates of a user input position, converting the screen coordinates into three-dimensional space coordinates, vertically transmitting rays to the virtual patient model by taking the three-dimensional space coordinates as a starting point, and generating marking points at contact points of the rays and a virtual epidermis tissue structure of the virtual patient model;
and generating a plurality of marking points according to the continuous input of the user, and rendering the marking points into continuous marking tracks.
3. The method as recited in claim 2, further comprising:
after the continuous input of the user is finished, sending the user an option of whether to continue drawing;
if the user selects no, judging whether the position of the mark track on the virtual patient model is the required mark position or not, and sending a judging result to the user;
if the user selects yes, the marking track drawn by the user on the virtual patient model is received again.
4. The method according to claim 1, wherein the sending the determination result to the user specifically includes:
sending virtual subcutaneous tissue structure information corresponding to the virtual epidermis tissue structure of the position of the marking track to a user through sound and/or characters;
and sending a correct mark position prompt or an incorrect mark position prompt to a user through sound and/or text and/or special effect animation.
5. The method as recited in claim 2, further comprising:
if the ray fails to contact the virtual patient model, screen coordinates of the user input location are reacquired.
6. The method as recited in claim 2, further comprising: judging whether to first receive the mark track drawn by the current user on the virtual patient model, if so, creating a mark track storage map; if the mark track drawn by the current user on the virtual patient model is not received for the first time, selecting whether to erase the data stored in the mark track storage map according to preset conditions;
generating a plurality of marking points according to continuous input of a user, and rendering the marking points into continuous marking tracks, wherein the method specifically comprises the following steps of:
after the mark track storage mapping is ready, generating the mark points according to the input of a user, and sampling the mark points;
determining a mark point generated according to the first input position of the user as a mark starting point, and judging whether the current mark point is the mark starting point or not;
if the current marking point is a marking starting point, storing the current marking point into the marking track storage mapping;
if the current marking point is not a marking starting point, carrying out point supplementing processing on the current marking point and a marking point which is the last marking point of the current marking point, generating an intermediate marking point supplementing, and storing the current marking point and the intermediate marking point supplementing into the marking track storage mapping;
and rendering all the mark points and the middle mark complementary points stored in the mark track storage map into a continuous mark track.
7. The method as recited in claim 6, further comprising:
and correcting the position of the marking point of the line segment turning part forming the marking track.
8. A virtual surgical biological epidermis marker system comprising:
the display screen module is used for displaying a three-dimensional virtual drawing environment, and the three-dimensional virtual drawing environment comprises: a pre-drawn virtual patient model;
the input module is used for receiving a mark track drawn by a user on the virtual patient model;
the processing module is used for judging whether the position of the marking track on the virtual patient model is a required marking position or not;
the output module is used for sending the judging result to the user;
a model building module for rendering a plurality of virtual subcutaneous tissue structures in the three-dimensional virtual rendering environment, the virtual subcutaneous tissue structures comprising at least: viscera, pathogens, bones; drawing virtual epidermis tissue structures corresponding to the virtual subcutaneous tissue structures one by one in the three-dimensional virtual drawing environment, and marking the virtual epidermis tissue structures, wherein the marks carry the corresponding relation between the virtual epidermis tissue structures and the virtual subcutaneous tissue structures; splicing the virtual subcutaneous tissue structure and the virtual epidermal tissue structure to obtain the virtual patient model; decomposing the virtual patient model into model grids, and determining UV coordinates of each vertex of the model grids; mapping the virtual patient model according to the UV coordinates of each vertex of the model mesh;
wherein the required marking position is a virtual epidermal tissue structure corresponding to a pathogen;
the determining whether the position of the marking track on the virtual patient model is a required marking position specifically comprises:
acquiring an identifier of the virtual epidermal tissue structure at the position of the marking track, and determining the virtual subcutaneous tissue structure corresponding to the virtual epidermal tissue structure according to the identifier; if the virtual subcutaneous tissue structure is a pathogen, the mark position is correct; if the virtual subcutaneous tissue structure is not a pathogen, the mark position is incorrect.
CN201911356159.6A 2019-12-25 2019-12-25 Bioepidermal marking method and system for virtual surgery Active CN110992477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911356159.6A CN110992477B (en) 2019-12-25 2019-12-25 Bioepidermal marking method and system for virtual surgery

Publications (2)

Publication Number Publication Date
CN110992477A CN110992477A (en) 2020-04-10
CN110992477B (en) 2023-10-20

Family

ID=70075422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911356159.6A Active CN110992477B (en) 2019-12-25 2019-12-25 Bioepidermal marking method and system for virtual surgery

Country Status (1)

Country Link
CN (1) CN110992477B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101262830A (en) * 2005-07-20 2008-09-10 Bracco Imaging S.p.A. Method and system for mapping dummy model of object to object
CN104274247A (en) * 2014-10-20 2015-01-14 Shanghai Dianji University Medical surgical navigation method
CN104680911A (en) * 2015-03-12 2015-06-03 Suzhou Minxing Medical Information Technology Co., Ltd. Tagging method based on puncture virtual teaching and training system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2751360T3 (en) * 2010-10-29 2020-03-31 Cleveland Clinic Found System and method for the association of a guide device with a patient tissue

Also Published As

Publication number Publication date
CN110992477A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN103565470B (en) Based on ultrasonoscopy automatic marking method and the system of three-dimensional virtual image
Konukseven et al. Development of a visio‐haptic integrated dental training simulation system
EP1884896A2 (en) Converting deformation data for a mesh to animation data for a skeleton, skinning and shading in a runtime computer graphics animation engine
US20080281182A1 (en) Method and apparatus for improving and/or validating 3D segmentations
CN109448137B (en) Interaction method, interaction device, electronic equipment and storage medium
US9773347B2 (en) Interacting with a three-dimensional object dataset
CN111430014A (en) Display method, interaction method and storage medium of glandular medical image
US11382603B2 (en) System and methods for performing biomechanically driven image registration using ultrasound elastography
JP4492645B2 (en) Medical image display apparatus and program
CN111124233B (en) Medical image display method, interaction method and storage medium
RU2662868C2 (en) Support apparatus for supporting user in diagnosis process
Trier et al. The visible ear surgery simulator
CN113645896A (en) System for surgical planning, surgical navigation and imaging
KR101275938B1 (en) Method for virtual surgery medical simulation and apparatus for thereof
CN111142753A (en) Interactive method, information processing method and storage medium
US20210233330A1 (en) Virtual or Augmented Reality Aided 3D Visualization and Marking System
WO2001097174A1 (en) Point inputting device and method for three-dimensional images
CN110992477B (en) Bioepidermal marking method and system for virtual surgery
US20200320778A1 (en) System and method for image processing
CN110097944B (en) Display regulation and control method and system for human organ model
CN101310303A (en) Method for displaying high resolution image data together with time-varying low resolution image data
CN111145877A (en) Interaction method, information processing method, display method, and storage medium
CN114767270A (en) Navigation display system for lung operation puncture
RU2750278C2 (en) Method and apparatus for modification of circuit containing sequence of dots located on image
Sutherland et al. Towards an augmented ultrasound guided spinal needle insertion system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant