CN206863747U - Minimally invasive spine surgery training system with real force feedback
- Publication number: CN206863747U
- Application number: CN201720446394.2U
- Authority: CN (China)
- Legal status: Active
Abstract
The utility model relates to a minimally invasive spine surgery training system with real force feedback, comprising an experiment platform, a surgical instrument, and a PC. The experiment platform includes an operating platform resting on the floor; a spine physical model clamping structure is mounted in the middle of the operating platform panel to hold a spine physical model. A binocular camera is placed directly above the spine physical model, perpendicular to the operating platform; it is fixed to the operating platform by a camera support and connected to the PC by cable. A reference frame is attached to the end of the surgical instrument. Because the force feedback of the utility model is real, it gives the surgeon a lifelike tactile sensation and greatly shortens training time; it lets the trainee observe the implantation of a pedicle screw from multiple angles and orientations, providing real visual feedback and truly achieving the purpose of training; and it meets the high-precision requirements of a minimally invasive spine surgery training system.
Description
Technical field
The utility model relates to the fields of medical training and computer vision, and specifically to a minimally invasive spine surgery training system with real force feedback.
Background art
Most current virtual simulation training systems, both domestic and foreign, target the simulation and training of soft-tissue cutting and deformation in laparoscopic and endoscopic sinus surgery; for surgical simulation and training on bone tissue, only a small number of foreign scholars have carried out related research. For example, D. Morris et al. developed a virtual simulation system for temporal bone surgery training. The system integrates tactile, visual, and auditory feedback and can simulate and train some simple drilling operations. Petersik et al. studied a force haptic rendering algorithm based on multi-point contact detection; the petrous bone surgery simulation system they developed on that algorithm provides a very lifelike sense of vibration during drilling simulation. J. Cordes et al. developed a virtual simulation training system for treating spinal disease; through a human-computer interaction interface and expert knowledge, the system simulates spinal surgery operations.
The above training systems share a common problem: they cannot provide the trainee with real and accurate force feedback. The main reason is that the mechanical model of bone-tissue surgery is complicated; it depends not only on bone-tissue attributes such as bone density and bone thickness, but is also influenced by factors such as cutting speed, cutting depth, and the micro-vibration frequency of the surgical instrument during the operation, so it is very difficult to establish a physical model for simulating bone-tissue mechanics. Moreover, spinal surgery has strict precision requirements: the corridor for screw placement is only 10 mm to 15 mm wide, and even a slight deviation will injure the spinal nerves and cause irreversible consequences. Therefore, obtaining accurate, lifelike force feedback is crucial in a surgical training system, and this has always been the bottleneck limiting the quality of virtual surgery training systems.
Content of the utility model
In view of the shortcomings of the prior art, the utility model provides a minimally invasive spine surgery training system that obtains real force feedback through a real surgical instrument and a physical spine model, giving the trainee a more realistic surgical experience.
The technical scheme adopted by the utility model to achieve the above object is as follows:
A minimally invasive spine surgery training system with real force feedback comprises an experiment platform, a surgical instrument 6, and a PC. The experiment platform includes an operating platform 1 resting on the floor; a spine physical model clamping structure 2 is mounted in the middle of the panel of the operating platform 1 to hold a spine physical model 3. A binocular camera 4 is placed directly above the spine physical model 3, perpendicular to the operating platform 1; the binocular camera 4 is fixed to the operating platform 1 by a camera support 5 and connected to the PC by cable. A reference frame 7 is attached to the end of the surgical instrument 6.
The reference frame 7 includes four marker balls and two crossbeams. Two marker balls of the same color are fixed at the ends of each crossbeam, the marker balls on the two crossbeams differ in color, and every marker ball is the same distance from the center of the reference frame.
A surgical instrument bracket is mounted on the operating platform 1 for holding the surgical instrument 6.
The spine physical model 3 is produced by 3D printing to imitate a real human spine.
The resolution of the binocular camera 4 is at least 1280 × 960.
The utility model has the following beneficial effects and advantages:
1. Because the force feedback of the utility model is real, it gives the surgeon a lifelike tactile sensation and greatly shortens the time needed to train a surgeon.
2. The utility model lets the trainee observe the implantation of a pedicle screw from multiple angles and orientations, giving the trainee real visual feedback and truly achieving the purpose of training.
3. The utility model meets the high-precision requirements of a minimally invasive spine surgery training system.
Brief description of the drawings
Fig. 1 is the system structure connection diagram of the utility model;
Fig. 2 is the schematic diagram of the surgical instrument of the utility model;
Fig. 3 is the method flow chart of the utility model;
Fig. 4 is the flow chart for extracting the partial point cloud of the real-space spine model;
Fig. 5 is the flow chart of meanshift moving-target tracking combined with affine moment invariant matching and Kalman filtering;
Fig. 6 is the flow chart of three-dimensional ranging of the marker balls on the reference frame.
Embodiment
The utility model is described in further detail below with reference to the accompanying drawings and an example.
Fig. 1 shows the system structure connection diagram of the utility model.
The minimally invasive spine surgery training system comprises an experiment platform, a surgical instrument 6, and a PC. The experiment platform includes an operating platform 1 resting on the floor, which carries the other equipment of the system. A spine physical model clamping structure 2 is mounted in the middle of the panel of the operating platform 1; it holds the spine physical model 3, and its clamping force can be loosened or tightened.
A binocular camera 4 is placed directly above the spine physical model 3 with its lenses perpendicular to the operating platform 1; the binocular camera 4 is fixed to the operating platform 1 by a camera support 5 and communicates with the PC over a cable connection. The surgical instrument 6 has a tip and an end; the reference frame 7 is attached to the end of the surgical instrument 6.
A surgical instrument bracket is mounted on the operating platform 1 for holding the surgical instrument 6.
The spine physical model 3 is produced by 3D printing to imitate a real human spine.
The resolution of the binocular camera 4 is at least 1280 × 960.
Fig. 2 shows the schematic diagram of the surgical instrument of the utility model.
The reference frame 7 includes four marker balls and two crossbeams. Two marker balls of the same color are fixed at the ends of each crossbeam, the marker balls on the two crossbeams differ in color, and every marker ball is the same distance from the center of the reference frame.
Fig. 3 shows the method flow chart of the utility model. The method includes the following steps:
Step 1: Calibrate the binocular camera with a planar checkerboard calibration algorithm to obtain the transformation between the camera coordinate system and the image pixel coordinate system.
Step 2: Perform three-dimensional reconstruction of the spine model with the binocular camera to obtain a partial point cloud of the real-space spine model.
Step 3: CT-scan the spine model and perform three-dimensional reconstruction from the scan data to obtain the global point cloud of the spine; import the reconstructed spine into the virtual interface.
Step 4: Register real space and virtual space with the ICP spatial registration algorithm to obtain the transformation between real space and virtual space.
Step 5: Dynamically track the surgical instrument in real time with the binocular camera, and display the position of the surgical instrument relative to the spine in the virtual interface in real time.
The binocular camera is calibrated with the planar checkerboard template described above to obtain the transformation between the real-space coordinate system and the image pixel coordinate system. The specific implementation steps are as follows:
Step 1: Preparation of the planar checkerboard calibration board. The system uses a custom checkerboard calibration board with 7 × 10 inner corner points and a cell size of 25 mm × 25 mm.
Step 2: Image acquisition. The two cameras simultaneously capture the calibration board at different angles. In general, the more images collected, the better the calibration result; 10 to 20 groups are recommended, and the system collects 15 groups. For a better calibration result, the calibration board should occupy more than 1/2 of the whole image, and its tilt angle relative to the operating platform should be less than 45 degrees. To model distortion well, the calibration board should be moved as close as possible to the edges of the common field of view of the left and right cameras.
Step 3: MATLAB calibration. First, the left and right camera images are read in, and the real cell size of the calibration board, 25 mm, is set. Then the distortion model is set; it can consider radial distortion only, or both radial and tangential distortion. Finally, the calibration of the binocular camera is completed with the MATLAB toolbox.
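The camera-to-pixel transformation that this calibration produces can be sketched as follows: a minimal pinhole-model illustration in Python with NumPy. The intrinsic values (focal lengths, principal point) below are made-up placeholders, not the calibrated values of this system.

```python
import numpy as np

def project_to_pixels(points_cam, K):
    """Project 3-D points given in the camera coordinate system
    to image pixel coordinates using the intrinsic matrix K."""
    pts = np.asarray(points_cam, dtype=float)   # shape (N, 3)
    uv = (K @ pts.T).T                          # homogeneous pixel coords
    return uv[:, :2] / uv[:, 2:3]               # divide by depth

# Hypothetical intrinsics (fx, fy, cx, cy are placeholders).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 480.0],
              [  0.0,   0.0,   1.0]])

# A point on the optical axis at 1 m projects to the principal point.
print(project_to_pixels([[0.0, 0.0, 1.0]], K))  # [[640. 480.]]
```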
The distortion model set by the system combines radial terms $k_1, k_2$ and tangential terms $p_1, p_2$, with $r^2 = u^2 + v^2$ in normalized image coordinates:

$u' = u(1 + k_1 r^2 + k_2 r^4) + 2 p_1 u v + p_2 (r^2 + 2u^2)$
$v' = v(1 + k_1 r^2 + k_2 r^4) + p_1 (r^2 + 2v^2) + 2 p_2 u v$

where $(u', v')$ is the new position after correction and $(u, v)$ is the original position.
Step 4: Improving calibration accuracy. First, delete images with large reprojection error, blurry images, images with inaccurate corner extraction, and images with an excessive tilt angle. Second, add pictures when fewer than 10 pairs of images remain, when the template does not sufficiently cover every corner of the camera field of view, or when the template does not have enough angle variation relative to the camera. After this processing, re-calibrate.
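The radial-tangential distortion model above can be sketched directly in code. This is a NumPy illustration of applying the model to one normalized point, assuming the standard form of the radial and tangential terms; the coefficient values are arbitrary.

```python
import numpy as np

def apply_distortion(u, v, k1, k2, p1, p2):
    """Apply the radial (k1, k2) + tangential (p1, p2) distortion
    model to a point (u, v) in normalized image coordinates."""
    r2 = u * u + v * v
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    u_d = u * radial + 2.0 * p1 * u * v + p2 * (r2 + 2.0 * u * u)
    v_d = v * radial + p1 * (r2 + 2.0 * v * v) + 2.0 * p2 * u * v
    return u_d, v_d

# With all coefficients zero the point is unchanged.
print(apply_distortion(0.3, -0.2, 0.0, 0.0, 0.0, 0.0))  # (0.3, -0.2)
```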
Fig. 4 shows the flow chart for extracting the partial point cloud of the real-space spine model. The specific implementation process is as follows:
Pictures are collected by the left and right cameras, and stereo rectification is applied to the left and right images. Edges are extracted with the Canny algorithm, stereo matching is performed with the sum of absolute differences (SAD) algorithm to compute an initial disparity map, and the initial disparity map is then optimized with a left-right consistency filter, a confidence filter, and a uniqueness filter. The spine model is reconstructed in three dimensions, and the partial point cloud of the real-space spine physical model is extracted.
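The SAD matching step can be sketched for a single pixel: slide a block along the same row of the right image and keep the horizontal shift with the smallest sum of absolute differences. This is a minimal NumPy illustration, not the filtered full-image pipeline described above.

```python
import numpy as np

def sad_disparity(left, right, row, col, block=3, max_disp=8):
    """Disparity of pixel (row, col) in the left image by minimizing
    the sum of absolute differences (SAD) over a block against
    horizontally shifted blocks in the rectified right image."""
    h = block // 2
    patch_l = left[row - h:row + h + 1, col - h:col + h + 1]
    best_d, best_sad = 0, np.inf
    for d in range(max_disp + 1):
        c = col - d
        if c - h < 0:
            break
        patch_r = right[row - h:row + h + 1, c - h:c + h + 1]
        sad = np.abs(patch_l.astype(int) - patch_r.astype(int)).sum()
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

# Synthetic pair: the right image is the left shifted 2 px leftward,
# so the true disparity is 2.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(20, 20))
right = np.roll(left, -2, axis=1)
print(sad_disparity(left, right, 10, 10))  # 2
```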
The real-space coordinate system (the camera coordinate system) described above and the coordinate system of the virtual space are spatially registered. The specific implementation process is as follows:
First, a coarse registration is performed using the three-dimensional coordinates of feature points of the spine in real space and in virtual space. Then the ICP algorithm performs a fine registration between the partial 3D point cloud of the real-space spine and the global 3D point cloud of the virtual-space spine, yielding the spatial transformation between real space and virtual space.
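The core of each ICP iteration, once correspondences are fixed, is a least-squares rigid transform. This sketch shows only that inner SVD (Kabsch) step with known correspondences, not the full nearest-neighbor ICP loop used for the fine registration.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    via the SVD-based Kabsch solution used inside an ICP iteration
    (correspondences src[i] <-> dst[i] are assumed known)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # proper rotation
    t = cd - R @ cs
    return R, t

# Recover a known rotation about z and a translation.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
src = np.random.default_rng(1).random((10, 3))
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```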
Fig. 5 shows the flow chart of meanshift moving-target tracking combined with affine moment invariant matching and Kalman filtering. The specific implementation steps are as follows:
Step 1: Manually select the initial position of the moving target in the video collected by the camera, compute the color histogram of the target model, determine the initial state vector of the Kalman filter, and initialize the other necessary parameters.
The initial state vector of the Kalman filter is set to $[x_0, y_0, 0, 0]^T$, where $x_0$ and $y_0$ are the center coordinates of the initial target model.
The other parameters of the Kalman filter are set as follows:
(1) the initial error covariance matrix $P_0 = 0$;
(2) the state transition matrix

$A = \begin{pmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

where $\Delta t$ is the time interval between two adjacent frames; in this system $\Delta t = 0.1\,\mathrm{s}$;
(3) the observation matrix

$H = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$

(4) the system noise covariance matrix and the observation noise covariance matrix are set to the fourth-order and second-order identity matrices, respectively.
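With the matrices fixed as above, one predict/update cycle of the filter is short enough to write out. This NumPy sketch uses the stated settings ($\Delta t = 0.1$, $P_0 = 0$, identity noise covariances); the initial position and measurement values are arbitrary.

```python
import numpy as np

dt = 0.1  # time between adjacent frames, as in the system

# State [x, y, vx, vy]; matrices as set in the text.
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)       # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)        # observation
Q = np.eye(4)                              # system noise covariance
R = np.eye(2)                              # observation noise covariance

def kalman_step(x, P, z):
    """One predict + update cycle of the Kalman filter."""
    x_pred = A @ x                         # predict state
    P_pred = A @ P @ A.T + Q               # predict covariance
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)  # update with observation z
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x = np.array([100.0, 50.0, 0.0, 0.0])      # [x0, y0, 0, 0]
P = np.zeros((4, 4))                       # P0 = 0
x, P = kalman_step(x, P, z=np.array([104.0, 52.0]))
print(np.round(x[:2], 2))                  # [102.  51.]
```

With $P_0 = 0$ and unit noise covariances, the first update splits the innovation evenly between prediction and measurement, which is why the estimate lands halfway.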
Step 2: Use the initialized Kalman filter to predict the candidate model position in the current frame.
Step 3: Use the target candidate model position predicted by the Kalman filter as the iteration starting point of the meanshift algorithm, and run meanshift iterations until the Euclidean distance between the target position in the previous frame and the local optimum found by the meanshift search is less than 0.5; then stop iterating and compute the color histogram of the candidate model.
Step 4: Measure similarity with the similarity measurement function. Let y1 be the best candidate model position obtained by the meanshift algorithm and P1 the similarity computed at y1. If P1 > 0.4, take y1 as the observation vector of the Kalman filter and update the Kalman filter; otherwise start the affine moment invariant algorithm, compare the affine moment invariants with the canonical affine moment invariants of a circle, and obtain a new candidate model position y2.
Step 5: Compute the color histogram of the new candidate model and compute its similarity P2 with the similarity measurement function. Compare P1 and P2, and take the candidate model position with the larger similarity as the observation vector to update the Kalman filter.
The specific implementation process of the meanshift algorithm described above is as follows:
Step 1: Establishment of the initial-frame target model and the current-frame candidate model.
Taking color values at a fixed interval as one unit, the feature space of pixel color values is divided into multiple feature values $u$. The target and candidate models are the probabilities of the $u$-th feature value within the search window containing the target.
The target model is defined as:

$\hat{q}_u = C \sum_{i=1}^{n} k\!\left(\left\|\frac{x_i - x_0}{h}\right\|^2\right) \delta[b(x_i) - u]$

The candidate model is defined as:

$\hat{p}_u(y) = C_h \sum_{i=1}^{n} k\!\left(\left\|\frac{x_i - y}{h}\right\|^2\right) \delta[b(x_i) - u]$

where $x_0$ and $y$ are the center pixel coordinates of the target and candidate model search windows (of $n$ pixels), and $x_i$ is the coordinate of the $i$-th pixel; $k(\|x\|^2)$ is the kernel function and $h$ is the kernel bandwidth, generally equal to half the window width; the functions $b$ and $\delta$ decide whether the color value of $x_i$ belongs to feature value $u$; $C$ and $C_h$ are normalization constants so that the probabilities of all feature values sum to 1.
Step 2: Similarity calculation between the target model and the candidate model.
The similarity function describes the similarity between the initial-frame target model and the current-frame candidate model; it is defined as the Bhattacharyya coefficient used in standard meanshift tracking:

$\rho(y) = \sum_{u=1}^{m} \sqrt{\hat{p}_u(y)\,\hat{q}_u}$

where $\hat{p}_u(y)$ is the candidate model of the current frame, $\hat{q}_u$ is the initial target model, and $m$ is the number of feature values $u$.
Step 3: The meanshift algorithm iterates to obtain the optimal position of the target in the current frame.
By maximizing the similarity function, the meanshift vector can be derived:

$m_h(y) = \frac{\sum_{i=1}^{n} x_i\, w_i\, g\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n} w_i\, g\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right)} - y, \qquad w_i = \sum_{u=1}^{m} \sqrt{\frac{\hat{q}_u}{\hat{p}_u(y)}}\,\delta[b(x_i) - u], \quad g = -k'$

The direction of the meanshift vector is the direction in which the similarity increases; iterating finally yields the optimal position of the target in the current frame.
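The similarity measurement function itself is a one-liner over two normalized histograms; this NumPy sketch shows it with small made-up histograms (3 bins rather than a real color feature space).

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity rho = sum_u sqrt(p_u * q_u) between two normalized
    histograms: candidate model p and target model q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(p * q).sum())

q = np.array([0.5, 0.3, 0.2])      # target model histogram
print(bhattacharyya(q, q))         # 1.0 for identical histograms
print(round(bhattacharyya([0.2, 0.3, 0.5], q), 3))  # 0.932
```

Identical histograms give similarity 1, and the value falls as the candidate drifts away from the target model, which is what the P1 > 0.4 test in Step 4 exploits.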
Traditional moment invariants are effective only under target rotation, translation, and scale change; in real three-dimensional space, under the different viewing angles of the binocular camera, they fail when the target's image undergoes an affine transformation. Therefore, affine moment invariants are introduced here on the basis of traditional moment invariants. The system uses three moment invariants that remain constant under affine transformation (the standard Flusser-Suk affine invariants, written in scale-normalized moments $\eta_{pq}$):

$I_1 = \eta_{20}\eta_{02} - \eta_{11}^2$
$I_2 = \eta_{30}^2\eta_{03}^2 - 6\eta_{30}\eta_{21}\eta_{12}\eta_{03} + 4\eta_{30}\eta_{12}^3 + 4\eta_{21}^3\eta_{03} - 3\eta_{21}^2\eta_{12}^2$
$I_3 = \eta_{20}(\eta_{21}\eta_{03} - \eta_{12}^2) - \eta_{11}(\eta_{30}\eta_{03} - \eta_{21}\eta_{12}) + \eta_{02}(\eta_{30}\eta_{12} - \eta_{21}^2)$

where $\eta_{pq}$ is the scale-normalized moment of the contour curve, defined as follows.
The $(p+q)$-order moment of a plane curve $L$ is:

$m_{pq} = \oint_L x^p y^q \, ds$

where $ds$ is the arc differential of the curve $L$ and $m_{0,0}$ is the contour perimeter.
The $(p+q)$-order central moment of the plane curve $L$ is:

$\mu_{pq} = \oint_L (x - x_0)^p (y - y_0)^q \, ds$

where $(x_0, y_0) = (m_{10}/m_{00},\, m_{01}/m_{00})$ is the centroid coordinate.
The scale-normalized moment of the contour curve is:

$\eta_{pq} = \frac{\mu_{pq}}{m_{0,0}^{\,p+q+1}}$

where $m_{0,0}$ is the zeroth-order moment and $\mu_{pq}$ is the $(p+q)$-order central moment.
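The first invariant can be checked numerically. This sketch uses area (region) moments of a binary mask rather than the curve moments above, because with area normalization the circle gives the closed form $I_1 = 1/(16\pi^2) \approx 0.006333$, which agrees with the circle reference value 0.006332 (tolerance 0.0003) that the tracker uses in the next section.

```python
import numpy as np

def affine_invariant_I1(mask):
    """First affine moment invariant I1 = (mu20*mu02 - mu11^2) / mu00^4,
    computed from the area (region) moments of a binary mask."""
    ys, xs = np.nonzero(mask)
    x, y = xs.astype(float), ys.astype(float)
    mu00 = float(len(x))
    xc, yc = x.mean(), y.mean()
    mu20 = ((x - xc) ** 2).sum()
    mu02 = ((y - yc) ** 2).sum()
    mu11 = ((x - xc) * (y - yc)).sum()
    return (mu20 * mu02 - mu11 ** 2) / mu00 ** 4

# A filled disk: analytically I1 = 1/(16*pi^2) ~ 0.006333,
# matching the circle reference value used by the tracker.
n, r = 512, 200
yy, xx = np.mgrid[:n, :n]
disk = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 <= r * r
print(round(affine_invariant_I1(disk), 6))
```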
The steps for determining the candidate model position y2 in the dynamic tracking algorithm of the system described above are as follows:
Step 1: Extract the image of the current frame, apply Gaussian denoising, and perform edge detection with the Canny algorithm.
Step 2: Compute the affine moment invariants of all closed edges and compare them with the canonical affine moment invariants of a circle to find the marker-ball edges at the target key positions. The circle conditions are |I1 - 0.006332| ≤ 0.0003, |I2| < 0.0000001, and |I3| < 0.0000001; any edge satisfying all three conditions is a key position.
Step 3: Find the circle center of each marker-ball edge with an ellipse fitting algorithm and, according to the color information, take the average of the circle-center coordinates of the marker balls of the same color as y2.
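The center extraction in Step 3 can be sketched with a simpler stand-in for ellipse fitting: an algebraic (Kasa) least-squares circle fit, which suffices when the marker ball projects as a near-circle. This is an illustrative NumPy sketch, not the ellipse fitting algorithm the system actually uses.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then recover
    center (-a/2, -b/2) and radius."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs ** 2 + ys ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx * cx + cy * cy - c)
    return cx, cy, r

# Points on a circle of center (30, 40), radius 5.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
cx, cy, r = fit_circle(30 + 5 * np.cos(t), 40 + 5 * np.sin(t))
print(round(cx), round(cy), round(r))  # 30 40 5
```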
Fig. 6 shows the flow chart of three-dimensional ranging of the marker balls on the reference frame. On the basis of the dynamic tracking algorithm, the pixel coordinates of the marker balls in the target model image are extracted, and the marker balls are measured in three dimensions by the binocular camera. First, the pictures collected by the camera are processed with adaptive binarization, Gaussian denoising, and morphology algorithms. Then the circle-center coordinates of the marker balls are extracted with the Hough transform algorithm; if the extraction precision is not high, the parameters of the Hough transform algorithm can be changed and the extraction repeated. Finally, the marker balls are measured in three dimensions by the reconstruction principle of the binocular camera; from the real-space positions of the four marker balls on the reference frame, the real-space position of the surgical instrument is obtained.
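The reconstruction principle for a rectified binocular pair reduces to disparity-based triangulation: depth $Z = f b / d$, then back-projection to $X, Y$. This NumPy sketch uses hypothetical camera parameters (focal length, baseline, principal point are placeholders, not the system's calibrated values).

```python
import numpy as np

def triangulate(u_left, u_right, v, f, baseline, cx, cy):
    """Depth from a rectified stereo pair: Z = f*b/d, where the
    disparity is d = u_left - u_right; then back-project to X, Y
    in the left camera frame."""
    d = u_left - u_right
    Z = f * baseline / d
    X = (u_left - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z

# Hypothetical camera: f = 800 px, baseline b = 0.12 m,
# principal point (640, 480) -- placeholders only.
X, Y, Z = triangulate(u_left=680.0, u_right=660.0, v=480.0,
                      f=800.0, baseline=0.12, cx=640.0, cy=480.0)
print(round(X, 3), round(Y, 3), round(Z, 3))  # 0.24 0.0 4.8
```

Doing this for all four marker balls gives their real-space positions, from which the pose of the instrument tip follows from the known reference-frame geometry.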
Claims (4)
1. a kind of minimally invasive spine surgical training system with true force feedback, including experiment porch, operating theater instruments (6) and PC
Machine, it is characterised in that the experiment porch includes the operating platform (1) for being arranged at bottom surface, the middle panel in operating platform (1)
Backbone physical model clamp structure (2) is set, for clamping backbone physical model (3);In the surface of backbone physical model (3)
Binocular camera (4) is placed perpendicular to operating platform (1);The binocular camera (4) is set by binocular camera shooting machine support (5)
In on operating platform (1), and pass through cable connection PC;In the end of the operating theater instruments (6), frame of reference (7) is set.
2. the minimally invasive spine surgical training system according to claim 1 with true force feedback, it is characterised in that:It is described
Frame of reference (7) includes four mark balls and two crossbeams, wherein each crossbeam both ends set two color identical mark balls, and
Mark ball color on two crossbeams is different, each to indicate ball to frame of reference center apart from all same.
3. the minimally invasive spine surgical training system according to claim 1 with true force feedback, it is characterised in that:Institute
Setting operating theater instruments bracket on operating platform (1) is stated, for placing operating theater instruments (6).
4. the minimally invasive spine surgical training system according to claim 1 with true force feedback, it is characterised in that:It is described
The resolution ratio of binocular camera (4) is at least 1280 × 960.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201720446394.2U CN206863747U (en) | 2017-04-26 | 2017-04-26 | A kind of minimally invasive spine surgical training system with true force feedback |
Publications (1)
Publication Number | Publication Date |
---|---|
CN206863747U true CN206863747U (en) | 2018-01-09 |
Family
ID=60822223
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201720446394.2U Active CN206863747U (en) | 2017-04-26 | 2017-04-26 | A kind of minimally invasive spine surgical training system with true force feedback |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN206863747U (en) |
Legal Events
Date | Code | Title | Description
---|---|---|---
| GR01 | Patent grant |