CN110992477A - Biological epidermis marking method and system for virtual operation - Google Patents

Info

Publication number
CN110992477A
Authority
CN
China
Prior art keywords
virtual
marking
user
track
mark
Prior art date
Legal status
Granted
Application number
CN201911356159.6A
Other languages
Chinese (zh)
Other versions
CN110992477B (en)
Inventor
王强 (Wang Qiang)
Current Assignee
Shanghai Chuxin Medical Technology Co ltd
Original Assignee
Shanghai Chuxin Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Chuxin Medical Technology Co., Ltd.
Priority to CN201911356159.6A
Publication of CN110992477A
Application granted
Publication of CN110992477B
Status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 — Manipulating 3D models or images for computer graphics
    • G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2219/00 — Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/004 — Annotating, labelling

Abstract

The application relates to a biological epidermis marking method for virtual surgery. A three-dimensional virtual drawing environment containing a pre-rendered virtual patient model is displayed on a display screen. Because the virtual patient model is drawn in advance in the three-dimensional virtual drawing environment, its cost is negligible compared with producing a physical patient model, and different virtual patient models can be provided so that the user can practice the marking methods appropriate to different models under different operations. The marking track drawn by the user on the virtual patient model is received; the user can draw the track freely, with a high degree of freedom, and can customize the marking style and content. After the user marks, whether the position of the marking track on the virtual patient model is the required marking position is judged, and the judgment result is sent to the user, so that the user clearly knows whether the marking position is correct.

Description

Biological epidermis marking method and system for virtual operation
Technical Field
The application relates to the technical field of medical education equipment, in particular to a biological epidermis marking method and system for virtual surgery.
Background
Surgery is a major department of the medical industry, characterized by high technical demands, high risk, and high difficulty. The operating room, as the main site of surgery, carries a heavy workload and a high volume of operations; risks are everywhere, and their consequences are serious. How to avoid errors at the surgical site during an operation has therefore drawn increasing attention from managers. The Chinese Medical Association requires that, to prevent operating on the wrong site of a surgical patient, a surgical-site identification marking system be established: in the actual surgical workflow, after the surgeon's order is issued, the operating surgeon or the first surgical assistant marks the surgical site on the patient, and only then do the pre-operative preparation and waiting procedures begin.
In the prior art, virtual surgical training products simulate the process of marking a patient's epidermis using a static marking-map scheme: a pre-made marking picture is attached to the epidermis of a specified patient model to simulate marking. However, producing such patient models consumes resources, so these products cannot offer users a variety of patient models; nor can users customize the marking style and content, and the marking result can be neither detected nor fed back.
Disclosure of Invention
To overcome, at least to some extent, the problems in the related art, the present application provides a biological epidermis marking method and system for virtual surgery.
The scheme of the application is as follows:
according to a first aspect of the embodiments of the present application, there is provided a biological epidermis marking method for virtual surgery, comprising:
displaying a three-dimensional virtual drawing environment through a display screen, wherein the three-dimensional virtual drawing environment comprises: a pre-rendered virtual patient model;
receiving a marker track drawn by a user on the virtual patient model;
and judging whether the position of the marking track on the virtual patient model is the required marking position, and sending the judgment result to the user.
Preferably, in an implementation manner of the present application, the method further includes:
rendering a plurality of virtual subcutaneous tissue structures in the three-dimensional virtual rendering environment, the virtual subcutaneous tissue structures including at least: viscera, pathogens, bones;
drawing virtual epidermal tissue structures which are in one-to-one correspondence with the virtual subcutaneous tissue structures in the three-dimensional virtual drawing environment, and identifying the virtual epidermal tissue structures, wherein the identification carries the correspondence between the virtual epidermal tissue structures and the virtual subcutaneous tissue structures;
splicing the virtual subcutaneous tissue structure and the virtual epidermal tissue structure to obtain the virtual patient model;
decomposing the virtual patient model into a model mesh, and determining UV coordinates of each vertex of the model mesh;
and endowing a chartlet to the virtual patient model according to the UV coordinates of each vertex of the model mesh.
Preferably, in an implementable manner of the present application, the receiving of the marking track drawn by the user on the virtual patient model specifically includes:
acquiring screen coordinates of a user input position, converting the screen coordinates into three-dimensional space coordinates, vertically emitting a ray toward the virtual patient model with the three-dimensional space coordinates as a starting point, and generating a marking point at the contact point of the ray with a virtual epidermal tissue structure of the virtual patient model;
and generating a plurality of the marking points according to the continuous input of the user, and rendering the marking points into a continuous marking track.
Preferably, in an implementation manner of the present application, the method further includes:
after the continuous input of the user ends, sending the user the option of whether to continue drawing;
if the user selects no, judging whether the position of the marking track on the virtual patient model is the required marking position, and sending the judgment result to the user;
if the user selects yes, the mark track drawn on the virtual patient model by the user is received again.
Preferably, in an implementable manner of the present application, the desired marker location is a virtual epidermal tissue structure corresponding to the pathogen;
the judging whether the position of the marking track on the virtual patient model is a required marking position specifically comprises:
acquiring the identifier of the virtual epidermal tissue structure at the position of the marking track, and determining the corresponding virtual subcutaneous tissue structure according to the identifier; if the virtual subcutaneous tissue structure is a pathogen, the marking position is correct; if the virtual subcutaneous tissue structure is not a pathogen, the marking position is wrong.
Preferably, in an implementation manner of the present application, the sending the determination result to the user specifically includes:
sending virtual subcutaneous tissue structure information corresponding to the virtual epidermal tissue structure at the position of the marking track to the user through sound and/or text;
and sending a prompt of correct marking position or wrong marking position to the user through sound and/or text and/or special effect animation.
Preferably, in an implementation manner of the present application, the method further includes:
if the ray does not contact the virtual patient model, acquiring the screen coordinates of the user input position again.
Preferably, in an implementation manner of the present application, the method further includes: judging whether a marking track drawn by the current user on the virtual patient model is being received for the first time; if so, creating a marking track storage map; if not, selecting according to a preset condition whether to erase the data stored in the marking track storage map;
the generating a plurality of marking points according to the continuous input of the user and rendering the marking points into a continuous marking track specifically includes:
after the marking track storage map is ready, generating the marking points according to the input of the user, and sampling the marking points;
determining the marking point generated from the user's first input position as the marking starting point, and judging whether the current marking point is the marking starting point;
if the current marking point is the marking starting point, storing the current marking point into the marking track storage map;
if the current marking point is not the marking starting point, performing point supplementing between the current marking point and its previous marking point to generate intermediate supplementary marking points, and storing the current marking point and the intermediate supplementary marking points into the marking track storage map;
rendering all the marking points and intermediate supplementary marking points stored in the marking track storage map into a continuous marking track.
Preferably, in an implementation manner of the present application, the method further includes:
and correcting the positions of the marking points at the line-segment turning portions of the marking track.
According to a second aspect of the embodiments of the present application, there is provided a biological epidermis marking system for virtual surgery, comprising:
a display screen module for displaying a three-dimensional virtual rendering environment, the three-dimensional virtual rendering environment comprising: a pre-rendered virtual patient model;
an input module for receiving a marker track drawn by a user on the virtual patient model;
the processing module is used for judging whether the position of the marking track on the virtual patient model is a required marking position;
and the output module is used for sending the judgment result to the user.
The technical scheme provided by the application can comprise the following beneficial effects:
In this application, a three-dimensional virtual drawing environment containing a pre-rendered virtual patient model is displayed on a display screen. Because the virtual patient model is drawn in advance in the three-dimensional virtual drawing environment, its cost is negligible compared with producing a physical patient model; many kinds of virtual patient models can be drawn in the environment, so different virtual patient models can be provided to the user, who can practice the marking methods for different virtual patient models under different operations. The marking track drawn by the user on the virtual patient model is received; the user can draw the track freely, with a high degree of freedom, and can customize the marking style and content. After the user marks, whether the position of the marking track on the virtual patient model is the required marking position is judged, and the judgment result is sent to the user, so that the user clearly knows whether the marking position is correct.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flowchart of a biological epidermis marking method for virtual surgery according to an embodiment of the present application;
FIG. 2 is a flowchart of receiving a marking track drawn by a user on the virtual patient model in a biological epidermis marking method for virtual surgery according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of correcting the positions of the marking points at the line-segment turning portions of the marking track in a biological epidermis marking method for virtual surgery according to an embodiment of the present application;
FIG. 4 is a block diagram of a biological epidermis marking system for virtual surgery according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Fig. 1 is a flowchart of a biological epidermis marking method for virtual surgery according to an embodiment of the present application. Referring to fig. 1, the method includes:
s11: displaying a three-dimensional virtual drawing environment through a display screen, wherein the three-dimensional virtual drawing environment comprises: a pre-rendered virtual patient model;
the display screen may be a touch screen or a non-touch screen.
The three-dimensional virtual drawing environment is mature prior art; three-dimensional graphics can be drawn with software such as 3ds Max and Maya.
Pre-rendering a virtual patient model, comprising:
rendering a plurality of virtual subcutaneous tissue structures in a three-dimensional virtual rendering environment, the virtual subcutaneous tissue structures including at least: viscera, pathogens, bones;
drawing virtual epidermal tissue structures corresponding to the virtual subcutaneous tissue structures one by one in a three-dimensional virtual drawing environment, identifying each virtual epidermal tissue structure, wherein the identification carries the corresponding relation between the virtual epidermal tissue structure and the virtual subcutaneous tissue structure;
splicing the virtual subcutaneous tissue structure and the virtual epidermal tissue structure to obtain a virtual patient model;
decomposing the virtual patient model into a model mesh, and determining UV coordinates of each vertex of the model mesh;
the virtual patient model is assigned a map based on the UV coordinates of each vertex of the model mesh. After the virtual subcutaneous tissue structure and the virtual epidermal tissue structure are spliced into the virtual patient model, a map with the same color is given to the virtual epidermal tissue structure, so that a user cannot distinguish the virtual subcutaneous tissue structure inside the virtual epidermal tissue structure only through the color of the virtual epidermal tissue structure. The virtual subcutaneous tissue structures are differentiated by giving different colored maps.
Identifying each virtual epidermal tissue structure may specifically be done as follows: the virtual epidermal tissues corresponding to different virtual subcutaneous tissue structures use different materials, and each material is bound to the map colour value of its corresponding virtual subcutaneous tissue structure.
The virtual subcutaneous tissue structures are spliced together, and the virtual epidermal tissue structures are spliced onto the outside of the virtual subcutaneous tissue structures.
S12: receiving a marking track drawn on the virtual patient model by a user;
with particular reference to fig. 2:
s121: acquiring screen coordinates of a user input position, converting the screen coordinates into three-dimensional space coordinates, vertically emitting a ray toward the virtual patient model with the three-dimensional space coordinates as a starting point, and generating a marking point at the contact point of the ray with a virtual epidermal tissue structure of the virtual patient model;
the user can input the drawing track through a mouse, and when the display screen is a touch screen, the user can input the drawing track through handwriting.
The method comprises the steps of acquiring screen coordinates of a current input position of a user, and converting the screen coordinates into three-dimensional space coordinates, and is the most basic technology in the virtual reality technology.
Three-dimensional space coordinates: all objects in the virtual three-dimensional scene share a single coordinate system, which records the unique position and orientation of every object in the world, for example (0, 0, 0).
Screen coordinates: these can be understood as the window coordinates of the software and depend on the resolution; for example, with a window resolution of 500 × 500, the lower left corner of the screen is (0, 0) and the upper right corner is (500, 500).
Rays are emitted vertically toward the virtual patient model with the three-dimensional space coordinates as starting points, so that all rays are parallel. A marking point is generated at the contact point of each ray with the virtual epidermal tissue structure of the virtual patient model.
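As an illustration of this ray-casting step, the sketch below assumes a simple orthographic mapping from a 500 × 500 window to world x/y, rays travelling along −Z, and a unit sphere standing in for the epidermis of the virtual patient model; the function names, window extent, and sphere stand-in are all assumptions for illustration, not details prescribed by the patent.

```python
import math

def screen_to_world(sx, sy, width=500, height=500, extent=2.0):
    """Map window coordinates (origin at the lower-left corner, as in the
    example above) to world x/y, starting the ray well in front of the model."""
    wx = (sx / width - 0.5) * extent
    wy = (sy / height - 0.5) * extent
    return wx, wy, 5.0

def cast_marker_point(sx, sy, center=(0.0, 0.0, 0.0), radius=1.0):
    """Fire a ray along -Z from the converted input position and return the
    contact point with a spherical stand-in for the patient epidermis,
    or None when the ray misses the model (input position unreasonable)."""
    x, y, _z0 = screen_to_world(sx, sy)
    cx, cy, cz = center
    dx, dy = x - cx, y - cy
    d2 = dx * dx + dy * dy
    if d2 > radius * radius:
        return None  # no contact: the screen coordinates must be re-acquired
    z = cz + math.sqrt(radius * radius - d2)  # nearest (front) surface hit
    return (x, y, z)
```

A click at the window centre lands a marking point on the front of the sphere, while a click at a corner misses and signals that the input position should be re-acquired.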
S122: generating a plurality of marking points according to the continuous input of the user, and rendering the marking points into a continuous marking track.
If the user draws while holding down the left mouse button or while keeping a finger in contact with the touch screen, this is treated as continuous input. Continuous input generates a series of marking points; connecting these points and rendering them as a continuous marking track yields the user's mark on the virtual patient model.
S13: judging whether the position of the marking track on the virtual patient model is the required marking position, and sending the judgment result to the user.
In surgery, the position of the pathogen needs to be marked, so the required marking position is the virtual epidermal tissue structure corresponding to the pathogen.
Judging whether the position of the marking track on the virtual patient model is the required marking position specifically includes the following steps:
acquiring the identifier of the virtual epidermal tissue structure at the position of the marking track, and determining the corresponding virtual subcutaneous tissue structure according to the identifier; if the virtual subcutaneous tissue structure is a pathogen, the marking position is correct, and if it is not a pathogen, the marking position is wrong.
Specifically, the material of the virtual epidermal tissue structure at the position of the marking track is obtained, the bound map colour value is determined from the material, and the corresponding virtual subcutaneous tissue structure is determined from that map colour value. If the virtual subcutaneous tissue structure is a pathogen, the marking position is correct; otherwise, the marking position is wrong.
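In code, this judgment reduces to a lookup from the bound map colour value to the subcutaneous structure. The colour values and structure names below are invented for illustration; the patent only requires that each epidermal material be bound to some identifying colour value.

```python
# Hypothetical binding of map colour values to virtual subcutaneous structures.
COLOR_TO_STRUCTURE = {
    (255, 0, 0): "pathogen",
    (0, 255, 0): "viscera",
    (0, 0, 255): "bone",
}

def judge_mark(epidermis_color):
    """Return (structure, is_correct): the subcutaneous structure bound to the
    marked epidermis colour, and whether the mark sits over the pathogen."""
    structure = COLOR_TO_STRUCTURE.get(epidermis_color, "unknown")
    return structure, structure == "pathogen"
```

The `is_correct` flag then drives the correct-position or wrong-position prompt sent back to the user.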
In this embodiment, a three-dimensional virtual drawing environment containing a pre-rendered virtual patient model is displayed on a display screen. Because the virtual patient model is drawn in advance in the three-dimensional virtual drawing environment, its cost is negligible compared with producing a physical patient model; many kinds of virtual patient models can be drawn in the environment, so different virtual patient models can be provided to the user, who can practice the marking methods for different virtual patient models. The marking track drawn by the user on the virtual patient model is received; the user can draw the track freely, with a high degree of freedom, and can customize the marking style and content. After the user marks, whether the position of the marking track on the virtual patient model is the required marking position is judged, and the judgment result is sent to the user, so that the user clearly knows whether the marking position is correct.
In some embodiments, the biological epidermis marking method for virtual surgery further includes:
after the continuous input of the user ends, sending the user the option of whether to continue drawing;
if the user selects no, judging whether the position of the marking track on the virtual patient model is the required marking position, and sending the judgment result to the user;
if the user selects yes, the mark track drawn on the virtual patient model by the user is received again.
After the user's continuous input ends, it is judged that the user has finished drawing a marking track. The option of whether to continue drawing is then sent, asking the user whether to continue.
If the user selects no, executing the step of judging whether the position of the marking track on the virtual patient model is the required marking position or not, and sending the judgment result to the user.
If the user selects yes, the mark track drawn on the virtual patient model by the user is received again.
This is repeated until the user selects no; the step of judging whether the position of the marking track on the virtual patient model is the required marking position is then executed, the judgment result is sent to the user, and drawing is finished.
In some embodiments of the biological epidermis marking method for virtual surgery, sending the judgment result to the user specifically includes:
sending virtual subcutaneous tissue structure information corresponding to the virtual epidermal tissue structure at the position of the marking track to the user through sound and/or text;
and sending a correct-marking-position or wrong-marking-position prompt to the user through sound and/or text and/or special-effect animation.
The judgment result may be sent to the user in different ways.
Sending the virtual subcutaneous tissue structure information corresponding to the virtual epidermal tissue structure at the position of the marking track through sound and/or text lets the user know which virtual subcutaneous tissue structure lies under the marked position; if the marking is wrong, the user can learn the reason for the error in time and further improve their marking skill.
Sending a correct-position or wrong-position prompt through sound and/or text and/or special-effect animation lets the user know in time whether the marking position is correct.
If voice prompting is selected, the corresponding voice file is loaded from the voice library for the current software language, and the virtual subcutaneous tissue structure information, or the correct-position or wrong-position prompt, is played.
If text prompting is selected, the corresponding text is extracted from the text library for the current software language, and the virtual subcutaneous tissue structure information, or the correct-position or wrong-position prompt, is displayed.
If special-effect prompting is selected, the corresponding special-effect resource is loaded directly, and the correct-position or wrong-position prompt is played at a preset position.
In some embodiments, the biological epidermis marking method for virtual surgery further includes:
if the ray does not contact the virtual patient model, acquiring the screen coordinates of the user input position again.
If the ray does not contact the virtual patient model, the user's input position is judged to be unreasonable; preferably, the user is prompted that the current input position is unreasonable and that drawing cannot be completed from it. After the user reselects an input position, the screen coordinates of the input position are acquired again.
The above process is repeated until the user's input position is reasonable. A reasonable input position is one whose screen coordinates, once converted into three-dimensional space coordinates, yield a ray that, emitted vertically toward the virtual patient model from those coordinates, can contact the virtual patient model.
In some embodiments, the biological epidermis marking method for virtual surgery further includes: judging whether a marking track drawn by the current user on the virtual patient model is being received for the first time; if so, creating a marking track storage map; if not, selecting according to a preset condition whether to erase the data stored in the marking track storage map;
After the user's continuous input ends, the option of whether to continue drawing is sent to the user; if the user selects yes, the marking track drawn by the user on the virtual patient model is received again. At this point, the data from the current user's previous marking is already stored in the marking track storage map, so whether that stored data needs to be erased must be judged according to the preset condition.
The preset condition is as follows: judge whether the user's previous marking position was correct; data corresponding to a wrong marking position is erased, while data corresponding to a correct marking position is retained.
Generating a plurality of marking points according to the continuous input of the user and rendering them into a continuous marking track specifically includes the following steps:
after the marking track storage map is ready, generating marking points according to the input of the user, and sampling the marking points;
determining the marking point generated from the user's first input position as the marking starting point, and judging whether the current marking point is the marking starting point;
if the current marking point is the marking starting point, storing the current marking point into the marking track storage map;
if the current marking point is not the marking starting point, performing point supplementing between the current marking point and its previous marking point to generate intermediate supplementary marking points, and storing both the current marking point and the intermediate supplementary marking points into the marking track storage map;
rendering all the marking points and intermediate supplementary marking points stored in the marking track storage map into a continuous marking track.
The point supplementing is performed to make the marking track more coherent; otherwise the marking track could appear as discontinuous line segments or isolated points.
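The point supplementing between consecutive marking points can be sketched as linear interpolation whenever the spacing between the previous and current points exceeds a threshold. The threshold value and function name below are illustrative assumptions, not details from the patent:

```python
def supplement_points(prev, curr, max_gap=0.1):
    """Return the intermediate supplementary points to insert between two
    consecutive marking points (tuples of equal dimension) so that no gap
    along the track exceeds max_gap."""
    dist = sum((b - a) ** 2 for a, b in zip(prev, curr)) ** 0.5
    n = int(dist // max_gap)  # number of intermediate points to insert
    return [
        tuple(a + (b - a) * i / (n + 1) for a, b in zip(prev, curr))
        for i in range(1, n + 1)
    ]
```

Close together, two points need no supplementing and the list is empty; far apart, evenly spaced intermediate points fill the gap before everything is written to the marking track storage map.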
Preferably, storing the current marking point, or the current marking point and the intermediate supplementary marking points, into the marking track storage map specifically includes:
storing the current marking point, or the current marking point and the intermediate supplementary marking points, into a track point queue. In each update frame, the newly added marking points and intermediate supplementary marking points in the track point queue are stored into the marking track storage map.
Further, the method further comprises:
the positions of the mark points constituting the turning portions of the line segments of the mark trajectory are corrected.
During drawing, in order to make the segment turning part of the marking track smooth enough, a third-order Bezier curve formula is needed to correct the position of the marking point.
Referring to fig. 3, four points P0, P1, P2, P3 define a cubic bezier curve in a plane or in a three-dimensional space. The curve starts from P0 to P1 and goes from the direction of P2 to P3. Generally does not pass through P1 or P2; these two points provide only a directional reference. The spacing between P0 and P1 determines how long the curve "goes" in length "in the direction of P2 before turning to P3.
The curve formula is:
B(t) = P0(1-t)^3 + 3P1·t(1-t)^2 + 3P2·t^2(1-t) + P3·t^3, t ∈ [0,1]
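An illustrative evaluation of this formula (the function name `cubic_bezier` and the tuple representation of points are assumptions, not part of the disclosure):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate B(t) = (1-t)^3*P0 + 3t(1-t)^2*P1 + 3t^2(1-t)*P2 + t^3*P3
    componentwise for control points given as coordinate tuples."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Evaluating at t = 0 and t = 1 returns P0 and P3 respectively, confirming that the curve starts and ends at the outer control points while P1 and P2 only bend it.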
Preferably, rendering all the mark points and intermediate supplementary points stored in the mark track storage map into a continuous mark track includes: converting the positions of the mark points and the intermediate supplementary points into UV coordinates, obtaining the corresponding pixel data in the map of the virtual patient model through the UV coordinates, and modifying the drawing effect of that pixel data to show the mark track.
UV is a set of data recording how and where a map should be applied to a model. A UV coordinate can be understood as a two-dimensional coordinate with each component in [0,1]; each UV coordinate corresponds to vertex data of the model mesh, and the 0-1 values express a position as a fraction of the map. Taking a square model as an example, to cover it fully with one map, the UVs of the 4 vertices of the model mesh need only be set to (0,0) for the lower left corner, (1,0) for the lower right corner, (0,1) for the upper left corner and (1,1) for the upper right corner, corresponding respectively to the four corners of the map.
The position data, UV data and the like of all the mark points and intermediate supplementary points are stored in the mark track storage map.
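The UV-to-pixel lookup described above can be sketched as follows (illustrative; the function names and the row-major list of rows standing in for the map are assumptions, and v = 0 is taken as the bottom row, matching the corner convention above):

```python
def uv_to_pixel(u, v, width, height):
    """Convert a UV coordinate in [0,1] x [0,1] to integer pixel indices
    of a width x height map, clamping the u = 1 / v = 1 edge."""
    px = min(int(u * width), width - 1)
    py = min(int(v * height), height - 1)
    return px, py

def draw_mark(texture, u, v, color):
    """Modify the pixel addressed by (u, v); texture is a list of rows,
    with row 0 taken as the bottom of the map."""
    h, w = len(texture), len(texture[0])
    px, py = uv_to_pixel(u, v, w, h)
    texture[py][px] = color
```

In an engine, the same lookup would address the track storage map's pixel buffer; here a nested list keeps the sketch self-contained.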
Fig. 4 is a block diagram of a bio-epidermal marking system of a virtual surgery according to an embodiment of the present application, and referring to fig. 4, the bio-epidermal marking system of the virtual surgery includes:
a display screen module 31, configured to display a three-dimensional virtual rendering environment, where the three-dimensional virtual rendering environment includes: a pre-rendered virtual patient model;
an input module 32 for receiving a marker trajectory drawn by a user on the virtual patient model;
the processing module 33 is used for judging whether the position of the marking track on the virtual patient model is the required marking position;
and the output module 34 is used for sending the judgment result to the user.
Further, the system also comprises: a drawing module for drawing the mark track on the virtual patient model.
The drawing module is specifically configured to acquire the screen coordinates of a user input position, convert the screen coordinates into three-dimensional space coordinates, vertically emit a ray toward the virtual patient model with the three-dimensional space coordinates as the starting point, and generate a mark point at the contact point of the ray and the virtual epidermal tissue structure of the virtual patient model;
and to generate a plurality of mark points according to the continuous input of the user, and render the mark points into a continuous mark track.
When the current mark point is not the mark starting point, point supplementing processing is performed between the current mark point and the previous mark point.
The positions of the mark points forming the turning portions of the line segments of the mark track are corrected.
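The ray contact test performed by the drawing module can be illustrated with a minimal sketch (not part of the disclosure; a sphere stands in for the virtual epidermal tissue mesh, against which a real system would ray-cast triangle by triangle):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the first intersection point of a ray with a sphere, or
    None if the ray misses. The sphere stands in for the virtual
    epidermis surface; the hit point becomes the generated mark point."""
    ox, oy, oz = (o - c for o, c in zip(origin, center))
    dx, dy, dz = direction
    # Solve |O + tD - C|^2 = r^2 as a quadratic in t.
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the surface entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    if t < 0:
        return None  # nearest hit is behind the ray origin
    return tuple(o + t * d for o, d in zip(origin, (dx, dy, dz)))
```

When the function returns None, this corresponds to the case in which the ray does not contact the virtual patient model and the screen coordinates of the input position are acquired again.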
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method description in a flow chart, or otherwise described herein, may be understood as representing a module, segment or portion of code which includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including in a substantially concurrent manner or in the reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the above method embodiments may be implemented by instructing related hardware through a program. The program may be stored in a computer readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A biological epidermis marking method for virtual surgery, comprising:
displaying a three-dimensional virtual drawing environment through a display screen, wherein the three-dimensional virtual drawing environment comprises: a pre-rendered virtual patient model;
receiving a marker track drawn by a user on the virtual patient model;
and judging whether the position of the marking track on the virtual patient model is the required marking position or not, and sending the judgment result to the user.
2. The method of claim 1, further comprising:
rendering a plurality of virtual subcutaneous tissue structures in the three-dimensional virtual rendering environment, the virtual subcutaneous tissue structures including at least: viscera, pathogens, bones;
drawing virtual epidermal tissue structures which are in one-to-one correspondence with the virtual subcutaneous tissue structures in the three-dimensional virtual drawing environment, and identifying the virtual epidermal tissue structures, wherein the identification carries the correspondence between the virtual epidermal tissue structures and the virtual subcutaneous tissue structures;
splicing the virtual subcutaneous tissue structure and the virtual epidermal tissue structure to obtain the virtual patient model;
decomposing the virtual patient model into a model mesh, and determining UV coordinates of each vertex of the model mesh;
and assigning a map to the virtual patient model according to the UV coordinates of each vertex of the model mesh.
3. The method according to claim 2, wherein receiving the mark track drawn by the user on the virtual patient model specifically comprises:
acquiring a screen coordinate of a user input position, converting the screen coordinate into a three-dimensional space coordinate, vertically emitting rays to the virtual patient model by taking the three-dimensional space coordinate as a starting point, and generating a mark point at a contact point of the rays and a virtual epidermis tissue structure of the virtual patient model;
and generating a plurality of the marking points according to the continuous input of the user, and rendering the marking points into a continuous marking track.
4. The method of claim 3, further comprising:
after the continuous input of the user is finished, sending the user an option of whether to continue drawing;
if the user selects no, judging whether the position of the marking track on the virtual patient model is the required marking position, and sending the judgment result to the user;
if the user selects yes, the mark track drawn on the virtual patient model by the user is received again.
5. The method of claim 2, wherein the desired marker location is a virtual epidermal tissue structure corresponding to a pathogen;
the judging whether the position of the marking track on the virtual patient model is a required marking position specifically comprises:
acquiring the identifier of the virtual epidermal tissue structure at the position of the mark track, and determining the virtual subcutaneous tissue structure corresponding to the virtual epidermal tissue structure according to the identifier; if the virtual subcutaneous tissue structure is a pathogen, the mark position is correct; if the virtual subcutaneous tissue structure is not a pathogen, the mark position is wrong.
6. The method according to claim 5, wherein the sending the determination result to the user specifically includes:
sending virtual subcutaneous tissue structure information corresponding to the virtual epidermal tissue structure at the position of the marking track to a user through sound and/or characters;
and sending a prompt of correct marking position or wrong marking position to the user through sound and/or text and/or special effect animation.
7. The method of claim 3, further comprising:
if the ray does not contact the virtual patient model, acquiring the screen coordinates of the user input position again.
8. The method of claim 3, further comprising: judging whether a marking track drawn on the virtual patient model by the current user is received for the first time, and if the marking track drawn on the virtual patient model by the current user is received for the first time, creating a marking track storage map; if the marking track drawn by the current user on the virtual patient model is not received for the first time, selecting whether to erase the data stored in the marking track storage map or not according to preset conditions;
the generating a plurality of the mark points according to the continuous input of the user, rendering the mark points into a continuous mark track, specifically including:
after the mark track storage map is ready, generating the mark points according to the input of the user, and sampling the mark points;
determining the mark point generated from the first input position of the user as the mark starting point, and judging whether the current mark point is the mark starting point;
if the current mark point is the mark starting point, storing the current mark point in the mark track storage map;
if the current mark point is not the mark starting point, performing point supplementing processing between the current mark point and the previous mark point to generate intermediate supplementary points, and storing the current mark point and the intermediate supplementary points in the mark track storage map;
rendering all the mark points and intermediate supplementary points stored in the mark track storage map into a continuous mark track.
9. The method of claim 8, further comprising:
and correcting the position of the mark point of the segment turning part forming the mark track.
10. A biological epidermis marking system for virtual surgery, comprising:
a display screen module for displaying a three-dimensional virtual rendering environment, the three-dimensional virtual rendering environment comprising: a pre-rendered virtual patient model;
an input module for receiving a marker track drawn by a user on the virtual patient model;
the processing module is used for judging whether the position of the marking track on the virtual patient model is a required marking position;
and the output module is used for sending the judgment result to the user.
CN201911356159.6A 2019-12-25 2019-12-25 Bioepidermal marking method and system for virtual surgery Active CN110992477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911356159.6A CN110992477B (en) 2019-12-25 2019-12-25 Bioepidermal marking method and system for virtual surgery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911356159.6A CN110992477B (en) 2019-12-25 2019-12-25 Bioepidermal marking method and system for virtual surgery

Publications (2)

Publication Number Publication Date
CN110992477A true CN110992477A (en) 2020-04-10
CN110992477B CN110992477B (en) 2023-10-20

Family

ID=70075422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911356159.6A Active CN110992477B (en) 2019-12-25 2019-12-25 Bioepidermal marking method and system for virtual surgery

Country Status (1)

Country Link
CN (1) CN110992477B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101262830A (en) * 2005-07-20 2008-09-10 布拉科成像S.P.A.公司 Method and system for mapping dummy model of object to object
US20120143267A1 (en) * 2010-10-29 2012-06-07 The Cleveland Clinic Foundation System and method for association of a guiding aid with a patient tissue
CN104274247A (en) * 2014-10-20 2015-01-14 上海电机学院 Medical surgical navigation method
CN104680911A (en) * 2015-03-12 2015-06-03 苏州敏行医学信息技术有限公司 Tagging method based on puncture virtual teaching and training system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101262830A (en) * 2005-07-20 2008-09-10 布拉科成像S.P.A.公司 Method and system for mapping dummy model of object to object
US20120143267A1 (en) * 2010-10-29 2012-06-07 The Cleveland Clinic Foundation System and method for association of a guiding aid with a patient tissue
CN104274247A (en) * 2014-10-20 2015-01-14 上海电机学院 Medical surgical navigation method
CN104680911A (en) * 2015-03-12 2015-06-03 苏州敏行医学信息技术有限公司 Tagging method based on puncture virtual teaching and training system

Also Published As

Publication number Publication date
CN110992477B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN110249367A (en) System and method for real-time rendering complex data
CN111061375B (en) Intelligent disinfection training method and equipment based on virtual operation
CN105956395A (en) Medical image processing method, device and system
CN114711962B (en) Augmented reality operation planning navigation system and method
US20230248439A1 (en) Method for generating surgical simulation information and program
CN111070664B (en) 3D printing slice generation method, device, equipment and storage medium
US9760993B2 (en) Support apparatus for supporting a user in a diagnosis process
Bryan et al. Virtual temporal bone dissection: a case study
CN111613122A (en) Virtual-actual fused vascular interventional operation simulation system
Jönsson et al. Intuitive exploration of volumetric data using dynamic galleries
CN111430014A (en) Display method, interaction method and storage medium of glandular medical image
CN110136522A (en) Skull base surgery simulation teching training system
Trier et al. The visible ear surgery simulator
CN111124233A (en) Medical image display method, interaction method and storage medium
Low et al. Three-dimensional printing: current use in rhinology and endoscopic skull base surgery
CN115457008A (en) Real-time abdominal puncture virtual simulation training method and device
CN110097944B (en) Display regulation and control method and system for human organ model
CN112655029A (en) Virtual or augmented reality assisted 3D visualization and tagging system
CN110992477B (en) Bioepidermal marking method and system for virtual surgery
CN110491517A (en) A kind of threedimensional model locally translucent display operation implementation method and device
CN104680911A (en) Tagging method based on puncture virtual teaching and training system
CN110827960A (en) Medical image display method and display equipment
CN105869218B (en) The neoplastic lesion edit methods and device of blood vessel mathematical model
JP3762482B2 (en) Shading method and apparatus by volume rendering method
CN117093100A (en) Method and device for adjusting installation angle of hip joint prosthesis model and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant