CN101320526B - Apparatus and method for operation estimation and training - Google Patents


Info

Publication number
CN101320526B
CN101320526B CN2008101416023A CN200810141602A
Authority
CN
China
Prior art keywords
data
operative site
unit
deformation
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2008101416023A
Other languages
Chinese (zh)
Other versions
CN101320526A (en)
Inventor
吴剑煌
陈辉
马炘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Institute Of Advanced Technology Chinese Academy Of Sciences Co ltd
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN2008101416023A priority Critical patent/CN101320526B/en
Publication of CN101320526A publication Critical patent/CN101320526A/en
Application granted granted Critical
Publication of CN101320526B publication Critical patent/CN101320526B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The present invention discloses a surgical prediction and training device and a method thereof. The device comprises a model generation module, a monitoring device, an operating device and a physical computation module. The model generation module uses external images of the human body, anatomical structure images and attribute data of the tissues of the human body to simulate the states of the various tissues; the monitoring device is connected with the model generation module; the operating device is used for receiving surgical instructions, which comprise operating action data and surgical position information; the physical computation module is connected with the operating device and transforms the operating action data into deformation data of the tissues corresponding to the surgical positions; the model generation module also comprises a data correcting unit connected with the physical computation module, and the data correcting unit uses the deformation data to correct the attribute data corresponding to the surgical position. The device and the method can predict the surgery directly, lower the difficulty of surgical prediction, and reduce the cost of surgical training.

Description

Apparatus and method for surgical prediction and training
Technical field
The present invention relates to education or demonstration equipment, and in particular to an apparatus and method for surgical prediction and training.
Background art
Modern surgical techniques are developing toward minimally invasive surgery characterized by minimal trauma, small incisions and less pain. In such an operation, a miniature video camera and cutting tools are inserted into the patient's body, and the surgical procedure is completed while watching a monitor. Minimally invasive surgery helps reduce the cost of an operation and relieves the patient's postoperative pain and scarring, and it has been widely applied in otolaryngology, gastroenterology, urology, gynecology and neurology.
Surgical prediction means formulating an operation plan before the operation is performed. Traditional surgical prediction relies on the operator's subjective experience; for example, in an osteotomy, the surgical approach, the osteotomy site, the direction and distance over which bone segments are moved, and the occlusal relationship must all be determined through a series of predictive analyses and simulated operations before the success of the operation can be ensured. This places very high demands on the operator and increases the difficulty of the surgical procedure. If a minimally invasive operation fails and has to be converted to a traditional open operation, the wound is larger and the patient suffers serious injury; this is especially true for craniofacial plastic and aesthetic surgery whose very purpose is cosmetic improvement. Surgical prediction in the prior art is therefore very difficult.
In addition, in minimally invasive procedures the incision is small, so the operator cannot see his own hands and must rely on instruments and on his own experience to judge the operation, which demands high surgical skill and excellent hand-eye coordination. Before performing a real operation, a novice needs a large amount of training to master human anatomy and to develop the ability to handle various emergencies. Traditional surgical training mainly uses animals or cadavers as experimental subjects, but training effectiveness is affected by the differences between animal and human anatomy and by the histological differences between cadavers and living bodies; moreover, because animals and cadavers cannot be reused, the training cost of hospitals increases.
Therefore, the prior art still needs to be improved and developed.
Summary of the invention
The object of the present invention is to provide an apparatus and method for surgical prediction and training with which the result of an operation can be predicted intuitively, thereby reducing the difficulty of surgical prediction and the cost of surgical training.
The technical solution of the present invention is as follows:
A surgical prediction and training apparatus comprises: a model generation module, for simulating the states of various tissues using external images of the human body, anatomical structure images and attribute data of each tissue of the human body; and a monitoring device connected to the model generation module; the apparatus further comprises: an operating device for receiving an operation instruction, the operation instruction comprising operation action data and operative site information; and a physical computation module connected to the operating device, for converting the operation action data into deformation data of the tissue corresponding to the operative site. The model generation module comprises: an attribute data unit, a medical imaging device, an image processing unit, a three-dimensional geometry unit and a data fusion unit; the medical imaging device is used to acquire image sequences of modalities such as CT or MRI of the patient's outline and anatomical structure; the image processing unit is connected to the medical imaging device and performs image enhancement, noise removal, rigid registration and analysis on the acquired image sequences; the three-dimensional geometry unit is connected to the image processing unit and builds a three-dimensional geometric model from the processed and analyzed image sequences; the attribute data unit acquires the various attribute data, is connected to the physical computation module, and revises the attribute data of the corresponding tissue according to the deformation data generated by the physical computation module; the data fusion unit fuses the three-dimensional geometric model with the attribute data so as to simulate the state of each structure of the human body;
Wherein, the attribute data unit comprises a data correction unit connected to the physical computation module; the data correction unit is used to correct the attribute data corresponding to the operative site using the deformation data.
In the apparatus, the physical computation module comprises: an action conversion unit and a position extraction unit connected to the operating device, and a force conversion unit connected to the action conversion unit and the position extraction unit; the action conversion unit 210 converts the operation action data into external force data; the position extraction unit 220 extracts the operative site information from the operation instruction; the force conversion unit 230 computes, from the external force data, the deformation data produced by the tissue corresponding to the operative site.
In the apparatus, the operating device comprises: an operating member suitable for manipulation by the operator's hand; a sensing module connected to the operating member, the sensing module being used to sense state parameters of the operating member; and a processing module connected to the sensing module and to the physical computation module, the processing module being used to convert the state parameters of the operating member into the operation instruction.
The apparatus further comprises: a biomechanics module connected to the physical computation module, for generating reaction force data of the operative site from the deformation data of the tissue corresponding to the operative site; and a force feedback module arranged on the operating member and connected to the biomechanics module, for applying a force to the operating member using the reaction force data.
In the apparatus, the operating member comprises: gloves connected to the sensing module.
In the apparatus, the operating member further comprises: a surgical instrument connected to the sensing module.
In the apparatus, the processing module comprises: a tool unit for storing information about surgical instruments; and an interactive selection unit connected to the tool unit, for selecting surgical instrument information from the tool unit.
In the apparatus, the monitoring device is a pair of stereoscopic glasses.
The present invention also provides a surgical prediction and training method, applied in a virtual environment provided by a computer unit, comprising the following steps: S1, simulating and visually displaying the states of various tissues using the acquired external images of the human body, anatomical structure images and attribute data of each tissue of the human body; S2, receiving an operation instruction from the operator, the operation instruction comprising operation action data and operative site information; S3, converting the operation action data into deformation data of the tissue corresponding to the operative site; S4, correcting the attribute data corresponding to the operative site using the deformation data.
In the method, step S2 comprises the following steps: S21, acquiring the state parameters of the operator's hand movements; S22, converting the state parameters into the operation instruction.
The method further comprises the following steps: S5, generating reaction force data of the operative site from the deformation data of the tissue corresponding to the operative site; S6, applying a force to the operator's hand using the reaction force data.
In the method, step S3 comprises: S31, converting the operation action data contained in the operation instruction into external force data; S32, extracting the operative site information from the operation instruction; S33, computing, from the external force data, the deformation data produced by the tissue corresponding to the operative site.
With the surgical prediction and training apparatus and method provided by the present invention, the operating device receives operation instructions for surgical prediction or surgical training, the physical computation module converts the operation instructions received by the operating device into force and deformation data of the corresponding tissue, and the model generation module simulates the states of the various tissues of the human body using the external images of the human body, the anatomical structure images, the attribute data of each tissue and the attribute data corrected by the deformation data. The result of an operation can thus be predicted intuitively, which reduces the difficulty of surgical prediction and the cost of surgical training.
Description of drawings
Fig. 1 is a block diagram of the first embodiment of the present invention;
Fig. 2 is a rendering of the three-dimensional geometric model of the present invention;
Fig. 3 is a rendering before the present invention simulates deformation of the human forehead;
Fig. 4 is a rendering after the present invention simulates deformation of the human forehead;
Fig. 5 is a block diagram of the operating device of the second embodiment of the present invention;
Fig. 6 is a block diagram of the third embodiment of the present invention;
Fig. 7 is a block diagram of the operating device of the fourth embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings.
The surgical prediction and training apparatus of the present invention is applied in a virtual environment provided by a computer unit. Through the operating device, the operator can act on the model generation module, immerse himself in the virtual surgical environment, and complete surgical prediction and training, experiencing and learning how to carry out various operations with simulated surgical instruments.
A first embodiment of the present invention, as shown in Fig. 1, comprises: an operating device 100, a physical computation module 200, a model generation module 300 and a monitoring device 400.
The operating device 100 receives operation instructions; the operation instructions comprise operation measurement data, operation path data, operation action data and operative site information. The operation measurement data include the selection of landmark points, the area and volume of the target site, and straight-line and arc distances on the anatomical structure in the vertical, sagittal and coronal directions; for example, the operation measurement data measured in a craniofacial operation include the straight-line distance between the glabella and the most posterior point of the head (maximum head length), the distance between the left and right lateral measuring points (maximum head breadth), the projected distance from the gnathion to the vertex (total head height), and the arc length along the sagittal plane from the nasion to the inion (sagittal arc). The operation path data are the concrete operation data entered according to the operation plan; the action data mainly describe the displacement and timing of surgical actions; the operative site information reflects the tissue at which the surgical action is directed.
The physical computation module 200 is connected to the operating device 100 and converts the operation action data in the operation instruction into deformation data of the tissue corresponding to the operative site. The physical computation module 200 comprises: an action conversion unit 210, a position extraction unit 220 and a force conversion unit 230, as shown in Fig. 2.
The action conversion unit 210 is connected to the operating device 100 and converts the operation action data contained in the operation instruction into external force data, computing a velocity from the position and time parameters and the force corresponding to that velocity;
the position extraction unit 220 is connected to the operating device 100 and extracts the operative site information from the operation instruction, that is, the specific tissue (such as skin, fat or muscle) at which the surgical action is directed and its characteristics;
the force conversion unit 230 is connected to the action conversion unit 210 and the position extraction unit 220 and computes, from the external force data, the deformation data produced by the tissue corresponding to the operative site; the deformation data include, for example, data on the individual or combined deformation of soft tissues such as ligament, muscle, fat, blood vessel and skin.
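As an illustration of the kind of computation the action conversion unit 210 and force conversion unit 230 perform, the following Python sketch estimates an external force from sampled instrument motion and converts it into a displacement for a linearly elastic tissue. The function names and the parameter values are hypothetical assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def motion_to_force(positions, timestamps, effective_mass=0.05):
    """Action conversion, roughly: estimate instrument velocity and acceleration
    from sampled positions/times and derive an external force vector."""
    positions = np.asarray(positions, dtype=float)    # shape (n, 3), metres
    timestamps = np.asarray(timestamps, dtype=float)  # shape (n,), seconds
    velocity = np.gradient(positions, timestamps, axis=0)
    acceleration = np.gradient(velocity, timestamps, axis=0)
    return effective_mass * acceleration[-1]           # latest external force (N)

def force_to_deformation(force, stiffness):
    """Force conversion, roughly: for a linearly elastic tissue with scalar
    stiffness k (N/m), the displacement is force / k."""
    return force / stiffness
```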
The model generation module 300 collects the external images of the human body, the anatomical structure images and the attribute data of each tissue of the human body, and uses this information to simulate the states of the various tissues of the human body. As shown in Fig. 2, an attribute data unit 340 is provided in the model generation module 300; the attribute data unit 340 in turn comprises an attribute data acquisition unit 341 and an attribute data correction unit 342. The attribute data acquisition unit 341 acquires the various attribute data and is connected to the physical computation module 200; the attribute data correction unit 342 corrects the attribute data corresponding to the operative site according to the deformation data generated by the physical computation module 200.
The attribute data include: data reflecting the material properties and geometric distribution of different tissues and organs (such as skin, fat and muscle); data reflecting the size, shape, hardness, roughness and texture of abnormal sites (such as tumors); and data reflecting the size, shape, hardness, roughness and texture of the operative site.
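The attribute data described above can be thought of as a simple record per tissue region. The following sketch shows one possible container together with a deliberately simplified correction rule of the kind the data correction unit applies; the field names and the correction rule are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class TissueAttributes:
    name: str                 # e.g. "skin", "fat", "muscle", "tumour"
    size_mm: float            # characteristic size of the region
    shape: str                # coarse shape descriptor
    stiffness: float          # hardness/softness, e.g. Young's modulus in kPa
    roughness: float          # surface roughness
    texture_id: int           # index into a texture table
    is_abnormal: bool = False # True for abnormal sites such as tumours

def correct_attributes(attrs: TissueAttributes, deformation_mm: float) -> TissueAttributes:
    """Hypothetical data-correction rule: update the size of the operative-site
    tissue by the simulated deformation."""
    attrs.size_mm += deformation_mm
    return attrs
```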
The model generation module 300 further comprises: a medical imaging device 310, an image processing unit 320, a three-dimensional geometry unit 330 and a data fusion unit 350. The medical imaging device 310 acquires image sequences of modalities such as CT or MRI of the patient's outline and anatomical structure; the image processing unit 320 is connected to the medical imaging device 310 and performs image enhancement, noise removal, rigid registration and analysis on the acquired image sequences; the three-dimensional geometry unit 330 is connected to the image processing unit 320 and builds a three-dimensional geometric model from the processed and analyzed image sequences; the attribute data unit 340 acquires the various attribute data, is connected to the physical computation module 200, and revises the attribute data of the corresponding tissue according to the deformation data generated by the physical computation module 200; the data fusion unit 350 fuses the three-dimensional geometric model with the attribute data so as to simulate the state of each structure of the human body.
The monitoring device 400 is connected to the model generation module 300 and is used to display to the operator the state of the simulated human body and its various tissues at the current moment.
The apparatus of the present invention is described in detail below, taking craniofacial plastic and aesthetic surgery as an example.
First, CT or MRI image sequences of the patient's craniofacial outline and anatomical structure are acquired by the medical imaging device 310.
The image processing unit 320 performs image enhancement and noise removal on the acquired image sequences using image smoothing, sharpening and filtering methods, so as to improve image quality. At the same time, the images are segmented using bone density and continuity, and the segmentation results are revised using anatomical knowledge to obtain the regions of interest. Because the patient cannot be kept in exactly the same fixed position throughout scanning, changes of posture cause mismatches between adjacent projections scanned in the same direction; the image sequence therefore needs to be registered so that corresponding points of two images coincide both in spatial position and in anatomical structure. Given the characteristics of the images, a rigid registration method can be used; for two-dimensional images, the rigid transformation to be found has exactly three parameters: the translations Dx and Dy in the x and y directions, and the rotation angle θ.
$$\begin{pmatrix} X_t \\ Y_t \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} X \\ Y \end{pmatrix} + \begin{pmatrix} D_x \\ D_y \end{pmatrix}$$
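For matched landmark points in two slices, the three parameters Dx, Dy and θ of this rigid transformation can be recovered in closed form. The following Python/NumPy sketch is one possible way to estimate and apply the transform; it is an assumption about how the registration could be realized, not the patent's algorithm.

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares estimate of the rotation angle theta and translation (Dx, Dy)
    mapping matched 2D landmark points src onto dst."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c = src - src.mean(axis=0)            # center both point sets
    dst_c = dst - dst.mean(axis=0)
    # Closed-form least-squares rotation for the 2D case
    num = np.sum(src_c[:, 0] * dst_c[:, 1] - src_c[:, 1] * dst_c[:, 0])
    den = np.sum(src_c[:, 0] * dst_c[:, 0] + src_c[:, 1] * dst_c[:, 1])
    theta = np.arctan2(num, den)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = dst.mean(axis=0) - R @ src.mean(axis=0)   # (Dx, Dy)
    return theta, t

def apply_rigid_2d(points, theta, t):
    """Apply the rigid transform of the formula above: X_t = R·X + (Dx, Dy)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T + t
```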
After the image sequence has been processed and analyzed, the three-dimensional geometry unit 330 builds a three-dimensional geometric model from it, obtaining three-dimensional geometric surface models of the bone, cartilage and soft tissue of the craniofacial region respectively. According to the characteristics of the obtained model, the geometric model is then post-processed by simplification, smoothing, mesh optimization, fragment removal, subdivision surface fitting, mesh cutting and volume meshing; depending on the specific processing method, serial or parallel processing (for example, GPU-based parallel processing) can be used. The model is cut with a voxel-removal cutting algorithm or a voxel-subdivision cutting algorithm. Fig. 2 shows the three-dimensional geometric model obtained from a GPU-based three-dimensional reconstruction of a patient's head scan.
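As an example of the voxel-removal cutting mentioned above, the following sketch removes the voxels of a volume mesh whose centers fall inside the cutting tool. The regular-grid layout and the spherical tool model are simplifying assumptions made for illustration.

```python
import numpy as np

def cut_voxels(occupancy, origin, spacing, tool_center, tool_radius):
    """Remove occupied voxels inside a spherical tool region.
    occupancy: (nx, ny, nz) boolean volume; origin: grid origin (3,);
    spacing: voxel size; tool_center: (3,); tool_radius: scalar."""
    nx, ny, nz = occupancy.shape
    ix, iy, iz = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    centers = np.stack([ix, iy, iz], axis=-1) * spacing + origin
    inside_tool = np.linalg.norm(centers - tool_center, axis=-1) <= tool_radius
    return occupancy & ~inside_tool      # voxels inside the tool are cut away
```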
The attribute data unit 340 is used to acquire the attribute data, which include: data reflecting the material properties and geometric distribution of skin, of muscle, of fat and of other tissues and organs; and data reflecting the size, shape, hardness, roughness and texture of each site. The sites in turn include the operative site and abnormal sites (for example, tumors).
The data fusion unit 350 fuses the three-dimensional geometric model with the attribute data to simulate the state of the human body and its various tissues, and the result is finally displayed to the operator by the monitoring device 400.
The operating device 100 receives an operation instruction; the operation instruction contains operation action data and operative site information.
The operation instruction is passed to the physical computation module 200; the action conversion unit 210 therein converts the operation action data contained in the operation instruction into external force data, using the operation action data to compute the velocity of the surgical action and the force corresponding to that velocity.
The position extraction unit 220 is connected to the operating device 100 and extracts the operative site from the operation instruction, that is, the specific tissue (such as skin, fat or muscle) at which the surgical action is directed.
The force conversion unit 230 is connected to the action conversion unit 210 and the position extraction unit 220 and computes, from the external force data, the deformation data produced by the tissue corresponding to the operative site. Using the data on the size, shape, hardness, roughness and texture of the operative site extracted by the position extraction unit 220 and the external force data computed by the action conversion unit 210, a material-mechanics and deformation computation model is built, and the deformation data produced by the corresponding tissue under this external force are computed from that model; the deformation data include, for example, data on the individual or combined deformation of soft tissues such as ligament, muscle, fat, blood vessel and skin. A mass-spring model, a finite-element linear elastic model or a dynamic linear elastic model, or a combination of these models, can be used; for example, the soft-tissue deformation of the simulated operative site can be computed with a dynamic linear elastic finite element model, see formula (1):
$$M\frac{\partial^2 \vec{u}}{\partial t^2} + D\frac{\partial \vec{u}}{\partial t} + K\vec{u} = \vec{f} \qquad (1)$$
where M is the mass matrix of the object, D is the damping matrix of the object, K is the global stiffness matrix, u is the displacement, t is time, and f is the equivalent force vector. The object refers to the simulated object of the virtual operation; in a facial operation, for example, if the whole head is imaged and the geometric model covers the whole head, the object is the whole head, whereas if only a part, such as the lower jaw, is acquired, that part alone can be called the surgical simulation object. Depending on the real-time requirements of the simulation, the computational models can be built on the GPU (Graphics Processing Unit) or on the CPU.
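A minimal sketch of how formula (1) might be advanced in time on the CPU, assuming already-assembled M, D and K matrices and a semi-implicit Euler scheme; the integration scheme is one possible choice for illustration, not necessarily the patent's.

```python
import numpy as np

def step_dynamic_linear_elastic(u, v, M, D, K, f_ext, dt):
    """One semi-implicit Euler step of  M u'' + D u' + K u = f  (formula (1)).
    M, D, K are (n, n) matrices; u, v, f_ext are length-n vectors."""
    # Solve M a = f_ext - D v - K u for the acceleration a
    a = np.linalg.solve(M, f_ext - D @ v - K @ u)
    v_next = v + dt * a            # update velocity first (semi-implicit)
    u_next = u + dt * v_next       # then update displacement with the new velocity
    return u_next, v_next
```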
These deformation data are sent in real time to the attribute data unit 340; the attribute data unit 340 corrects the attribute data using the deformation data, so that the tissue simulated by the data fusion unit 350 changes correspondingly. The simulated deformation of the human forehead is shown in Fig. 3 and Fig. 4.
In the surgical prediction and training apparatus of this embodiment, the physical computation module 200 generates deformation data reflecting the individual or combined deformation of each soft tissue according to the operation instruction, and the model generation module 300 simulates the state of each tissue from these deformation data, which is observed through the monitoring device 400. The surgical effect can thus be predicted visually. On the one hand, this helps the doctor formulate an operation plan for the specific patient, optimize the surgical path, reduce damage to tissue, improve the accuracy of locating the lesion and predict the surgical outcome; furthermore, the surgeon can carry out preoperative planning, intraoperative simulation and postoperative effect prediction on a real patient, discover problems in the operation plan in advance by previewing the operation and correct them in time, and obtain guidance from an expert surgery system built from expert experience, making the operation safer, more reliable and more accurate, all of which is of great significance for improving the success rate of operations. The patient can also observe the predicted surgical effect dynamically until the patient is satisfied and the operation plan is feasible.
In a second embodiment of the present invention, the following improvement is made on the basis of the first embodiment: as shown in Fig. 5, the operating device 100 comprises: an operating member 110, a sensing module 120 connected to the operating member 110, and a processing module 130 connected to the sensing module 120 and to the physical computation module 200.
The operating member 110 is suitable for manipulation by the operator's hand and may take the form of gloves that wrap the hand, a surgical instrument, or a hand-held manipulator. Surgical instruments include forceps, tweezers, clamps, scalpels, surgical scissors and gauges.
The sensing module 120 is connected to the operating member 110 and to the physical computation module 200; it usually uses sensors to sense the state parameters of the operating member 110. The state parameters mainly characterize the displacement vector of the operating member 110 as it moves and the deformation of the operating member 110 itself.
The processing module 130 is connected to the sensing module 120 and converts these state parameters into the operation instruction.
When the operating member 110 uses both gloves and a surgical instrument, the sensors may be arranged on the surgical instrument only, or on both the surgical instrument and the gloves.
When the operating member 110 uses only gloves or a hand-held manipulator, the sensors are arranged on the gloves or the manipulator.
The operating device 100 of this embodiment comprises the operating member 110 and the sensing module 120 connected to it, so that operation instructions are entered through the operator's hand movements. This lets the operator take part in the surgical procedure in person, which helps novices learn and master human anatomy, carry out surgical training, and develop the ability to handle various emergencies.
In a third embodiment of the present invention, a biomechanics module 500 and a force feedback module 600 are added on the basis of the second embodiment, as shown in Fig. 6.
The biomechanics module 500 is connected to the physical computation module 200 and uses the deformation data from the force conversion unit 230, together with the tissue corresponding to the operative site and its characteristics extracted by the position extraction unit 220, to generate reaction force data for that tissue. A mass-spring model, a finite-element linear elastic model or a dynamic linear elastic model, or a combination of these models, can be used, for example a dynamic linear elastic finite element model of the soft-tissue deformation of the simulated operative site. In practice, the physical computation module 200 and the biomechanics module 500 can be integrated into the same module.
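A simple way to derive such reaction force data from a linear elastic model is to evaluate the internal elastic force K·u at the nodes touched by the instrument. The following sketch is a hypothetical illustration of that idea, not the biomechanics module's actual computation.

```python
import numpy as np

def reaction_force(K, u, contact_dofs):
    """For a linear elastic model, the internal force is K·u; the force fed back
    to the haptic device is its negative at the contacted degrees of freedom."""
    f_internal = K @ u                  # internal elastic nodal forces
    return -f_internal[contact_dofs]    # reaction felt through the instrument
```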
The force feedback module 600 is connected to the biomechanics module 500 and is arranged on the operating member 110; it uses the reaction force data to apply a force to the operating member 110.
When the operating member 110 is a hand-held manipulator, the force feedback module 600 can conduct force through a force arm connected to the manipulator. When the operating member 110 uses gloves and a surgical instrument, the force feedback module 600 can be arranged on the gloves, on the surgical instrument, or on both; the force feedback module 600 can use vibration units controlled by electrical signals, or miniature air bags.
With the force feedback module 600 of this embodiment, the reaction force of each tissue during the operation can be simulated in addition to the previous embodiment, which enriches the operator's tactile experience and greatly improves the operator's sense of immersion and realism. In addition, the monitoring device 400 can be a monitor or a display screen, and preferably stereoscopic glasses: stereoscopic glasses use the polarization principle, that is, two images of the same scene are produced from the disparity between the viewpoints of the two eyes and each eye is allowed to see only one of them, so that through the retina the brain perceives the stereoscopic depth of the scene. This further improves the operator's sense of immersion and presence.
The above embodiments use a real surgical instrument for surgical prediction and training. The following embodiment differs from the second and third embodiments in that, as shown in Fig. 7, a tool unit 131 and an interactive selection unit 132 are provided in the processing module 130. The tool unit 131 stores information about surgical instruments, for example the specification, material and deformation coefficient of each kind of instrument; the interactive selection unit 132 is connected to the tool unit and is used to select surgical instrument information from the tool unit; its external hardware can be a common device such as a mouse or a touch screen, and the operator completes the selection of surgical instrument information through this interactive selection unit 132.
Compared with the two previous embodiments, in this embodiment the operator can select a surgical instrument through the external hardware of the interactive selection unit 132 before training begins; the processing module 130 merges the instrument information with the operator's acquired hand movements, converts them into an operation instruction, and sends the operation instruction to the physical computation module 200. The surgical instrument of this embodiment is also simulated virtually, abandoning the traditional physical instrument, which simplifies the hardware needed for training and further reduces the cost of surgical training.
The present invention is applied in a virtual environment provided by a computer unit. This virtual environment comprises a mathematical representation of a three-dimensional geometric model of the surgical target, reflecting the geometric, mechanical and biomechanical characteristics of the target, and a virtual instrument controlled by a physical input device, which can influence and act on the target model. The method steps include: building a three-dimensional model of the craniofacial region; aesthetic measurement and analysis based on three-dimensional landmarks of the reconstructed model; force-feedback modeling; and surgical simulation and training based on enhanced visual and haptic information. This changes the traditional mode of diagnosing, analyzing and designing an operation plan from subjective experience alone, improves the safety of the operation and the accuracy of predicting the plastic and aesthetic effect, reduces the patient's suffering and improves the quality of the operation.
The present invention also provides a surgical prediction and training method, comprising the following steps:
10. Simulate and visually display the states of various tissues using the acquired external images of the human body, anatomical structure images and attribute data of each tissue of the human body:
11. Acquire CT or MRI image sequences of the patient's outline and anatomical structure, together with the attribute data of each tissue of the human body;
12. Perform image enhancement, noise removal, rigid registration and analysis on the acquired image sequences;
13. Build a three-dimensional geometric model from the processed and analyzed image sequences;
14. Fuse the three-dimensional geometric model with the attribute data to simulate the state of each structure of the human body;
15. Visually display the simulation result.
20. Receive an operation instruction from the operator; the operation instruction comprises operation action data and operative site information.
The operation instruction comprises: operation measurement data, operation path data, operation action data and operative site information. The operation measurement data include the selection of landmark points, the area and volume of the target site, and straight-line and arc distances on the anatomical structure in the vertical, sagittal and coronal directions; for example, the operation measurement data measured in a craniofacial operation include the straight-line distance between the glabella and the most posterior point of the head (maximum head length), the distance between the left and right lateral measuring points (maximum head breadth), the projected distance from the gnathion to the vertex (total head height), and the arc length along the sagittal plane from the nasion to the inion (sagittal arc). The operation path data are the concrete operation data entered according to the operation plan; the action data mainly describe the displacement and timing of surgical actions; the operative site information reflects the tissue at which the surgical action is directed.
For surgical training, the operation instruction in this step is obtained through the following steps:
21. Acquire the state parameters of the operator's hand movements;
22. Convert the state parameters into the operation instruction.
30. Convert the operation action data into deformation data of the tissue corresponding to the operative site:
31. Convert the operation action data contained in the operation instruction into external force data;
32. Extract the operative site information from the operation instruction, including data on the shape, hardness, roughness and texture of the tissue at the operative site;
33. Compute, from the external force data, the deformation data produced by the tissue corresponding to the operative site.
A material-mechanics and deformation computation model is built, and the deformation data produced by the corresponding tissue under this external force are computed from that model; the deformation data include, for example, data on the individual or combined deformation of soft tissues such as ligament, muscle, fat, blood vessel and skin. A mass-spring model, a finite-element linear elastic model or a dynamic linear elastic model, or a combination of these models, can be used; for example, the soft-tissue deformation of the simulated operative site can be computed with a dynamic linear elastic finite element model, see formula (1).
40. Correct the attribute data corresponding to the operative site using the deformation data.
Deformation data reflecting the individual or combined deformation of each soft tissue are generated according to the operation instruction, and the state of each tissue is simulated from these deformation data. The surgical effect can thus be predicted visually. On the one hand, this helps the doctor formulate an operation plan for the specific patient, optimize the surgical path, reduce damage to tissue, improve the accuracy of locating the lesion and predict the surgical outcome; furthermore, the surgeon can carry out preoperative planning, intraoperative simulation and postoperative effect prediction on a real patient, discover problems in the operation plan in advance by previewing the operation and correct them in time, and obtain guidance from an expert surgery system built from expert experience, making the operation safer, more reliable and more accurate, all of which is of great significance for improving the success rate of operations. The patient can also observe the predicted surgical effect dynamically until the patient is satisfied and the operation plan is feasible.
50. Generate the reaction force data of the operative site from the deformation data of the tissue corresponding to the operative site.
Using the deformation data and the operative site information, a mass-spring model, a finite-element linear elastic model or a dynamic linear elastic model, or a combination of these models, can be used, for example a dynamic linear elastic finite element model of the soft-tissue deformation of the simulated operative site, to generate the reaction force data of the tissue corresponding to the operative site.
60. Apply a force to the operator's hand using the reaction force data; the force can be applied through an object held in the operator's hand.
In summary, in the surgical prediction and training apparatus and method provided by the present invention, the operating device receives operation instructions for surgical prediction or surgical training, the physical computation module converts the operation instructions received by the operating device into force and deformation data of the corresponding tissue, and the model generation module simulates the states of the various tissues of the human body using the external images of the human body, the anatomical structure images, the attribute data of each tissue and the attribute data corrected by the deformation data. The result of an operation can thus be predicted intuitively, which reduces the difficulty of surgical prediction and the cost of surgical training.
It should be understood that the application of the present invention is not limited to the above examples; those of ordinary skill in the art can make improvements or transformations based on the above description, and all such improvements and transformations shall fall within the protection scope of the appended claims of the present invention.

Claims (8)

1. A surgical prediction and training apparatus, comprising:
a model generation module, for simulating the states of various tissues using external images of the human body, anatomical structure images and attribute data of each tissue of the human body;
a monitoring device connected to the model generation module;
characterized in that the apparatus further comprises:
an operating device for receiving an operation instruction, the operation instruction comprising: operation measurement data, operation path data, operation action data and operative site information;
a physical computation module connected to the operating device, for converting the operation action data into deformation data of the tissue corresponding to the operative site;
the model generation module comprises: an attribute data unit, a medical imaging device, an image processing unit, a three-dimensional geometry unit and a data fusion unit; the medical imaging device is used to acquire CT or MRI image sequences of the patient's outline and anatomical structure; the image processing unit is connected to the medical imaging device and is used to perform image enhancement, noise removal, rigid registration and analysis on the acquired image sequences; the three-dimensional geometry unit is connected to the image processing unit and is used to build a three-dimensional geometric model from the processed and analyzed image sequences; the attribute data unit is used to acquire the various attribute data, is connected to the physical computation module, and revises the attribute data of the corresponding tissue according to the deformation data generated by the physical computation module; the data fusion unit is used to fuse the three-dimensional geometric model with the attribute data so as to simulate the state of each structure of the human body;
wherein the attribute data unit comprises a data correction unit connected to the physical computation module, the data correction unit being used to correct the attribute data corresponding to the operative site using the deformation data;
the attribute data comprise: data reflecting the material properties and geometric distribution of different tissues and organs; data reflecting the size, shape, hardness, roughness and texture of abnormal sites; and data reflecting the size, shape, hardness, roughness and texture of the operative site;
the physical computation module comprises:
an action conversion unit and a position extraction unit connected to the operating device, and a force conversion unit connected to the action conversion unit and the position extraction unit;
the action conversion unit is used to convert the operation action data into external force data;
the position extraction unit is used to extract the operative site information from the operation instruction, including data on the shape, hardness, roughness and texture of the tissue at the operative site;
the force conversion unit is used to compute, from the external force data, the deformation data produced by the tissue corresponding to the operative site: using the data on the size, shape, hardness, roughness and texture of the operative site extracted by the position extraction unit together with the external force data computed by the action conversion unit, it builds a material-mechanics and deformation computation model and computes from that model the deformation data produced by the corresponding tissue under this external force.
2. The apparatus according to claim 1, characterized in that the operating device comprises:
an operating member suitable for manipulation by the operator's hand;
a sensing module connected to the operating member, the sensing module being used to sense state parameters of the operating member;
and a processing module connected to the sensing module and to the physical computation module, the processing module being used to convert the state parameters of the operating member into the operation instruction.
3. The apparatus according to claim 2, characterized in that it further comprises:
a biomechanics module connected to the physical computation module, for generating reaction force data of the tissue corresponding to the operative site, using the deformation data from the force conversion unit together with the tissue corresponding to the operative site and its characteristics extracted by the position extraction unit;
and a force feedback module arranged on the operating member and connected to the biomechanics module, for applying a force to the operating member using the reaction force data.
4. The apparatus according to claim 3, characterized in that the operating member comprises: gloves connected to the sensing module.
5. The apparatus according to claim 4, characterized in that the operating member further comprises: a surgical instrument connected to the sensing module.
6. The apparatus according to claim 2, characterized in that the processing module comprises: a tool unit for storing information about surgical instruments;
and an interactive selection unit connected to the tool unit, for selecting surgical instrument information from the tool unit.
7. The apparatus according to claim 1, characterized in that the monitoring device is a pair of stereoscopic glasses.
8. A surgical prediction and training method, applied in a virtual environment provided by a computer unit, comprising the following steps:
S1, simulating and visually displaying the states of various tissues using the acquired external images of the human body, anatomical structure images and attribute data of each tissue of the human body;
S2, receiving an operation instruction from the operator, the operation instruction comprising: operation measurement data, operation path data, operation action data and operative site information;
S3, converting the operation action data into deformation data of the tissue corresponding to the operative site;
S4, correcting the attribute data corresponding to the operative site using the deformation data;
S5, generating reaction force data of the operative site from the deformation data of the tissue corresponding to the operative site;
S6, applying a force to the operator's hand using the reaction force data;
wherein the attribute data comprise: data reflecting the material properties and geometric distribution of different tissues and organs; data reflecting the size, shape, hardness, roughness and texture of abnormal sites; and data reflecting the size, shape, hardness, roughness and texture of the operative site;
step S2 comprises the following steps:
S21, acquiring the state parameters of the operator's hand movements;
S22, converting the state parameters into the operation instruction;
step S3 comprises:
S31, converting the operation action data contained in the operation instruction into external force data;
S32, extracting the operative site information from the operation instruction, including data on the shape, hardness, roughness and texture of the tissue at the operative site;
S33, computing, from the external force data, the deformation data produced by the tissue corresponding to the operative site; specifically, using the extracted operative site information from the operation instruction, including data on the size, shape, hardness, roughness and texture of the tissue at the operative site, together with the computed external force data, a material-mechanics and deformation computation model is built, and the deformation data produced by the corresponding tissue under this external force are computed from that model.
CN2008101416023A 2008-07-11 2008-07-11 Apparatus and method for operation estimation and training Active CN101320526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101416023A CN101320526B (en) 2008-07-11 2008-07-11 Apparatus and method for operation estimation and training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008101416023A CN101320526B (en) 2008-07-11 2008-07-11 Apparatus and method for operation estimation and training

Publications (2)

Publication Number Publication Date
CN101320526A CN101320526A (en) 2008-12-10
CN101320526B true CN101320526B (en) 2010-12-22

Family

ID=40180553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101416023A Active CN101320526B (en) 2008-07-11 2008-07-11 Apparatus and method for operation estimation and training

Country Status (1)

Country Link
CN (1) CN101320526B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5984235B2 (en) * 2011-07-19 2016-09-06 東芝メディカルシステムズ株式会社 Image processing system, apparatus, method, and medical image diagnostic apparatus
US8824752B1 (en) 2013-03-15 2014-09-02 Heartflow, Inc. Methods and systems for assessing image quality in modeling of patient anatomic or blood flow characteristics
US11246666B2 (en) 2013-09-06 2022-02-15 The Brigham And Women's Hospital, Inc. System and method for a tissue resection margin measurement device
CN105096670B (en) * 2014-05-23 2018-07-06 香港理工大学 A kind of intelligent immersion tutoring system and device for nose catheter operation real training
CN105321415A (en) * 2014-08-01 2016-02-10 卓思生命科技有限公司 Surgery simulation system and method
CN104318840A (en) * 2014-10-22 2015-01-28 北京航空航天大学 Simulation method of medical surgical instrument guide wire on basis of spring proton model
CN104771246B (en) * 2015-04-21 2017-02-01 四川大学华西医院 Visual knee joint cruciate ligament reconstruction method
DE102015208804A1 (en) * 2015-05-12 2016-11-17 Siemens Healthcare Gmbh Apparatus and method for computer-aided simulation of surgical procedures
CN104915519B (en) * 2015-06-30 2018-05-11 中国人民解放军第三军医大学第二附属医院 A kind of cranio-maxillofacial method for establishing model and device
CN106503451B (en) * 2016-10-24 2019-04-05 京东方科技集团股份有限公司 A kind of online operation plan analysis system
WO2020047761A1 (en) * 2018-09-05 2020-03-12 天津天堰科技股份有限公司 Medical simulator, and medical training system and method
CN109979600A (en) * 2019-04-23 2019-07-05 上海交通大学医学院附属第九人民医院 Orbital Surgery training method, system and storage medium based on virtual reality
CN111317581B (en) * 2020-02-27 2021-01-29 吉林大学第一医院 Working method and system of equipment for cleaning operation wound
CN112168361B (en) * 2020-10-29 2021-11-19 上海工程技术大学 Catheter surgical robot pose prediction method capable of effectively relieving time delay influence

Also Published As

Publication number Publication date
CN101320526A (en) 2008-12-10

Similar Documents

Publication Publication Date Title
CN101320526B (en) Apparatus and method for operation estimation and training
Chabanas et al. Patient specific finite element model of the face soft tissues for computer-assisted maxillofacial surgery
CN104778894B (en) A kind of virtual emulation bone-setting manipulation training system and its method for building up
Basdogan et al. VR-based simulators for training in minimally invasive surgery
Tendick et al. Sensing and manipulation problems in endoscopic surgery: experiment, analysis, and observation
Schendel et al. Three-dimensional imaging and computer simulation for office-based surgery
CN109979600A (en) Orbital Surgery training method, system and storage medium based on virtual reality
JP5866346B2 (en) A method to determine joint bone deformity using motion patterns
CN104274183A (en) Motion information processing apparatus
Schendel et al. 3D orthognathic surgery simulation using image fusion
KR102536732B1 (en) Device and method for the computer-assisted simulation of surgical interventions
TW202038867A (en) Optical tracking system and training system for medical equipment
CN108766579A (en) A kind of virtual cerebral surgery operation emulation mode based on high degrees of fusion augmented reality
JP4129527B2 (en) Virtual surgery simulation system
CN113554912A (en) Planting operation training system based on mixed reality technology
CN111026269A (en) Haptic feedback method, device and equipment of biological tissue structure based on force feedback
Chui et al. Haptics in computer-mediated simulation: Training in vertebroplasty surgery
Müller et al. The virtual reality arthroscopy training simulator
US20170278432A1 (en) Medical procedure simulator
Van der Eerden et al. CAREN-computer assisted rehabilitation environment
Suzuki et al. Surgical planning system for soft tissues using virtual reality
Bluteau et al. Vibrotactile guidance for trajectory following in computer aided surgery
Müller-Wittig Virtual reality in medicine
Długosz et al. An improved kinematic model of the spine for three-dimensional motion analysis in the Vicon system
Avşar et al. Automatic 3D modeling and simulation of bone-fixator system in a novel graphical user interface

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230313

Address after: 519085 101, Building 5, Longyuan Smart Industrial Park, No. 2, Hagongda Road, Tangjiawan Town, High-tech Zone, Zhuhai City, Guangdong Province

Patentee after: ZHUHAI INSTITUTE OF ADVANCED TECHNOLOGY CHINESE ACADEMY OF SCIENCES Co.,Ltd.

Address before: 518067, A, Nanshan Medical Instrument Industrial Park, No. 1019 Nanhai Road, Shekou, Guangdong, Shenzhen, Nanshan District

Patentee before: SHENZHEN INSTITUTES OF ADVANCED TECHNOLOGY

TR01 Transfer of patent right