CN107767464A - An augmented reality method for minimally invasive surgery - Google Patents

An augmented reality method for minimally invasive surgery

Info

Publication number
CN107767464A
CN107767464A
Authority
CN
China
Prior art keywords
frame
key frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710927040.4A
Other languages
Chinese (zh)
Inventor
夏春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201710927040.4A
Publication of CN107767464A
Legal status: Withdrawn

Landscapes

  • Image Processing (AREA)

Abstract

An augmented reality method for minimally invasive surgery is proposed in the present invention. Its main components are landmark-point detection, frame-by-frame pose estimation, keyframe-based bundle adjustment, dense stereo matching, and geometric-mesh extension. The process first estimates the camera pose of each frame with a simultaneous localization and mapping (SLAM) system, then performs dense stereo matching and reconstruction for each keyframe and transforms the result into a global surface mesh; finally, the global surface mesh is re-projected back onto the two-dimensional image plane, completing the pixel-level conversion. The invention can handle images of human organ structures under varying lighting conditions and provides an interactive geometric layer that solves camera-position estimation while improving image recognition and localization accuracy.

Description

An augmented reality method for minimally invasive surgery
Technical field
The present invention relates to the field of augmented reality, and in particular to an augmented reality method for minimally invasive surgery.
Background art
Augmented reality is a technology that combines knowledge from several fields, including computer technology, communication technology, and computer graphics. Its purpose is to fuse computer-generated supplementary information into the real-world scene seen by the user, enhancing the user's perception of the real scene. Because it combines the virtual and the real, it has broad prospects in fields such as medicine, robot learning, military applications, education, and manufacturing; for example, working environments hazardous to humans, virtual military targets and strategic-planning information generated from real training grounds, and the manufacture and maintenance of complex machinery all hold considerable development potential. In the medical field, three-dimensional human graphics obtained from CT or MRI scans can be superimposed onto the corresponding body part, helping the doctor guide the surgical procedure. Augmented reality can also be used for virtual human anatomy, virtual human physiology, virtual surgery simulation, remote surgery, and so on; in particular, it has important applications in minimally invasive surgery.
When augmented reality is used for medical-image fusion, the following problems remain: 1. because image acquisition is affected by body-surface reflection, ambient noise, and similar factors, object features in the captured image are not obvious and are difficult to extract and identify; 2. because of distortion and camera-calibration errors in the augmented reality image-fusion and alignment system, the accuracy of the three-dimensional coordinates of the recovered spatial points still needs to be improved; 3. because surgical sites differ, the system platform must adapt to the doctor and the patient's position, so its intelligence and adaptability still leave room for improvement.
The present invention proposes a new framework that performs stereo matching based on a zero-mean normalized cross-correlation criterion. A simultaneous localization and mapping (SLAM) system estimates the camera pose of each frame; dense stereo matching and reconstruction are then performed for each keyframe and transformed into a global surface mesh; finally, the global surface mesh is re-projected back onto the two-dimensional image plane, completing the pixel-level conversion. The invention can handle images of human organ structures under varying lighting conditions and provides an interactive geometric layer that solves camera-position estimation while improving image recognition and localization accuracy.
Summary of the invention
To solve the problems of applying augmented reality in minimally invasive surgery, the object of the present invention is to provide an augmented reality method for minimally invasive surgery, proposing a new framework that performs stereo matching based on a zero-mean normalized cross-correlation criterion.
To solve the above problems, the present invention provides an augmented reality method for minimally invasive surgery, whose main components include:
(1) landmark-point detection;
(2) frame-by-frame pose estimation;
(3) keyframe-based bundle adjustment;
(4) dense stereo matching;
(5) geometric-mesh extension.
In the landmark-point detection, a detector finds corresponding points in the left and right images to feed back the tracked camera position. The x-coordinates of a keypoint P in the two images are denoted $x_P^l$ and $x_P^r$. Given the focal length f and the baseline B, the perpendicular distance Z from each image point to the endoscope follows from similar-triangle triangulation:
$$\frac{B - (x_P^l - x_P^r)}{Z - f} = \frac{B}{Z} \;\Rightarrow\; Z = \frac{f \cdot B}{d_P} \qquad (1)$$
where $d_P = x_P^l - x_P^r$ is the disparity of the keypoint between the left and right images.
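For illustration, formula (1) reduces to a one-line computation. The following minimal Python sketch (function and variable names are ours, assuming rectified stereo images) shows the depth recovery:

```python
import numpy as np

def depth_from_disparity(x_left, x_right, f, B):
    """Perpendicular distance Z from the endoscope, per formula (1).

    x_left, x_right: x-coordinates x_P^l and x_P^r of the same keypoint
    in the rectified left and right images (pixels).
    f: focal length in pixels; B: stereo baseline (e.g. in mm).
    Z is returned in the same units as B.
    """
    d_P = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    return f * B / d_P
```

For example, with f = 500 px, B = 4 mm, and a disparity of 20 px, Z = 500 · 4 / 20 = 100 mm.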
The frame-by-frame pose estimation includes a position update and a rotation estimation.
In the position update, augmented reality applications require the camera position to be tracked frame by frame in real time. Based on the current linear velocity $v_t$ and angular velocity $w_t$, after a short interval $\Delta t$ following initialization, a constant-motion model estimates the position $r_{t+1}$ and the rotation quaternion $q_{t+1}$:
$$\begin{cases} r_{t+1} = r_t + v_t \cdot \Delta t \\ q_{t+1} = q_t \times q(w_t \cdot \Delta t) \\ v_t = v_{t-1} + a_{t-1} \cdot \Delta t \\ w_t = w_{t-1} + \alpha_{t-1} \cdot \Delta t \end{cases} \qquad (2)$$
where a and α are the linear and angular update coefficients, respectively.
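As an illustrative sketch of formula (2) in Python (the quaternion helpers are minimal assumptions of ours, with quaternions stored as (w, x, y, z)):

```python
import numpy as np

def quat_from_rotvec(rv):
    """Unit quaternion encoding a rotation by the vector rv = w_t * dt."""
    theta = np.linalg.norm(rv)
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = rv / theta
    return np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * axis))

def quat_mul(q1, q2):
    """Hamilton product q1 * q2 for (w, x, y, z) quaternions."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def predict_pose(r, q, v, w, a, alpha, dt):
    """One step of the constant-motion model, formula (2)."""
    r_next = r + v * dt                                # position
    q_next = quat_mul(q, quat_from_rotvec(w * dt))     # orientation
    v_next = v + a * dt                                # linear velocity
    w_next = w + alpha * dt                            # angular velocity
    return r_next, q_next, v_next, w_next
```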
The rotation estimation uses the random sample consensus (RANSAC) algorithm. In each iteration, three pairs of corresponding 3-D points are randomly selected from the current point set $\{p_t^i\}$ and the next point set $\{p_{t+1}^i\}$ and used to compute the rotation matrix R and the translation T; the result is obtained by minimizing the following objective function:
$$\min \sum_{i=1}^{n} \left\| p_t^i - \left( R \cdot p_{t+1}^i + T \right) \right\| \qquad (3)$$
Each iteration searches for the R and T that minimize the value of formula (3).
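The RANSAC loop of formula (3) might be sketched as follows; the closed-form inner solver (Kabsch/SVD rigid alignment) is a standard choice, not one this description itself specifies:

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares R, T with Q ~ R @ p + T (Kabsch / SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # D guards against a reflection
    return R, cq - R @ cp

def ransac_rigid(P_t, P_t1, iters=200, thresh=2.0):
    """RANSAC over 3-point samples; the model maps frame t+1 back to
    frame t, matching the residual ||p_t^i - (R p_{t+1}^i + T)||."""
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = np.random.choice(len(P_t), 3, replace=False)
        R, T = fit_rigid(P_t1[idx], P_t[idx])
        res = np.linalg.norm(P_t - (P_t1 @ R.T + T), axis=1)
        n_in = int((res < thresh).sum())
        if n_in > best_inliers:
            best_inliers, best = n_in, (R, T)
    return best
```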
The keyframe-based bundle adjustment includes keyframe screening and a bundle-adjustment computation.
The keyframe screening retains frames with material impact according to a fixed criterion, to increase backtracking capability and improve robustness while reducing the computational load of global optimization. The criterion is that, after initialization, any frame sharing fewer than 80% of its keypoints, but still more than 50 keypoints, with the last fixed keyframe (i.e., between the keyframe to be determined and the previous fixed keyframe) is classified as a keyframe. A sketch of this rule follows.
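A minimal sketch of the screening rule, under the assumption that the 80% is measured against the keypoint count of the last fixed keyframe (the criterion leaves the denominator implicit):

```python
def is_keyframe(n_shared, n_keypoints_last_kf):
    """Keyframe criterion: fewer than 80% of keypoints shared with the
    last fixed keyframe, but still more than 50 shared keypoints."""
    return n_shared < 0.8 * n_keypoints_last_kf and n_shared > 50
```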
In the bundle-adjustment computation, once the keyframes are determined, bundle adjustment is used to re-estimate the three-dimensional position of each keyframe $KF_i$ and landmark point $P_j$. Specifically, the three-dimensional positions in the camera view are obtained by minimizing the total Huber robust loss over the re-projection errors of the two-dimensional keypoints:
$$\arg\min_{KF_i,\,P_j} \sum_{i,j} \rho_h\!\left( \left\| p_{i,j} - \mathrm{CamProj}(KF_i, P_j) \right\| \right) \qquad (4)$$
where $\rho_h$ is the constant factor of the loss function and CamProj(·) denotes the camera projection function.
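One plausible way to set up the robust objective of formula (4) is SciPy's least_squares with its built-in Huber loss; the 6-DoF pose packing and the cam_proj model below are assumptions of ours, since no parameterization is fixed here:

```python
import numpy as np
from scipy.optimize import least_squares

def ba_residuals(params, n_kf, observations, cam_proj):
    """Stacked re-projection residuals p_ij - CamProj(KF_i, P_j).

    params packs n_kf 6-DoF keyframe poses followed by the 3-D landmarks;
    observations is a list of (kf_index, landmark_index, uv) tuples;
    cam_proj(pose, X) -> 2-D pixel is the (assumed) projection model.
    """
    poses = params[:6 * n_kf].reshape(-1, 6)
    points = params[6 * n_kf:].reshape(-1, 3)
    return np.concatenate(
        [uv - cam_proj(poses[i], points[j]) for i, j, uv in observations])

# least_squares applies a Huber loss rho_h to the residuals, matching
# the robust objective of formula (4):
# sol = least_squares(ba_residuals, x0, loss='huber', f_scale=rho_h,
#                     args=(n_kf, observations, cam_proj))
```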
The dense stereo matching performs a dense reconstruction of the landmark points to better describe changes in the environment. A matching cost C is designed from the corresponding pixel p in the left and right images and its disparity d; specifically,
$$C(p,d) = \frac{\sum_{q \in N_p} \left( I_L(q) - \bar{I}_L(q) \right) \left( I_R(q-d) - \bar{I}_R(q-d) \right)}{\sqrt{\sum_{q \in N_p} \left( I_L(q) - \bar{I}_L(q) \right)^2 \cdot \sum_{q \in N_p} \left( I_R(q-d) - \bar{I}_R(q-d) \right)^2}} \qquad (5)$$
where $\bar{I}$ denotes the mean intensity over the image patch $N_p$ centred on pixel p.
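Formula (5) is the zero-mean normalized cross-correlation (ZNCC) of two patches. A direct, unoptimized sketch (square patches and integer disparity are assumptions of ours; the caller must keep p and p - d away from the image border):

```python
import numpy as np

def zncc_cost(I_L, I_R, p, d, half=3):
    """ZNCC of the patch N_p centred at p=(row, col) in the left image
    against the patch centred at (row, col-d) in the right image,
    as in formula (5). Values near 1 indicate a good match."""
    r, c = p
    L = I_L[r-half:r+half+1, c-half:c+half+1].astype(float)
    R = I_R[r-half:r+half+1, c-d-half:c-d+half+1].astype(float)
    L = L - L.mean()   # zero-mean the left patch
    R = R - R.mean()   # zero-mean the right patch
    denom = np.sqrt((L**2).sum() * (R**2).sum())
    return (L * R).sum() / denom if denom > 0 else 0.0
```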
The geometric-mesh extension uses the transformation matrix $T_{f2w}$ to transform the estimated three-dimensional point set from frame space into real (world) space. Specifically, the surface mesh of the point set is first reconstructed by triangulation; the estimated camera pose is then used to extend the global surface mesh; finally, the image is projected back onto the two-dimensional plane consistent with the camera viewing angle.
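A sketch of the mesh-extension and re-projection steps under stated assumptions (4×4 homogeneous transforms; 2-D Delaunay over the x-y plane as a stand-in for the unspecified surface-triangulation method):

```python
import numpy as np
from scipy.spatial import Delaunay

def extend_global_mesh(points_frame, T_f2w, global_points):
    """Transform a frame's point set into world space with T_f2w,
    append it to the global point set, and re-triangulate the surface."""
    pts_h = np.hstack([points_frame, np.ones((len(points_frame), 1))])
    world = (T_f2w @ pts_h.T).T[:, :3]
    all_pts = np.vstack([global_points, world]) if len(global_points) else world
    faces = Delaunay(all_pts[:, :2]).simplices   # triangle index list
    return all_pts, faces

def project_to_image(K, T_w2c, points_w):
    """Project world points back to the 2-D image plane of the current
    camera (K: 3x3 intrinsics, T_w2c: 4x4 world-to-camera transform)."""
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
    cam = (T_w2c @ pts_h.T).T[:, :3]
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]
```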
Brief description of the drawings
Fig. 1 is the system flow chart of the augmented reality method for minimally invasive surgery according to the present invention.
Fig. 2 is a schematic example of a measurement application of the augmented reality method for minimally invasive surgery according to the present invention.
Fig. 3 is a schematic example of other applications of the augmented reality method for minimally invasive surgery according to the present invention.
Detailed description of the embodiments
It should be noted that, where no conflict arises, the embodiments of the present application and the features of those embodiments may be combined with one another. The present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is the system flow chart of the augmented reality method for minimally invasive surgery according to the present invention. The method mainly comprises landmark-point detection, frame-by-frame pose estimation, keyframe-based bundle adjustment, dense stereo matching, and geometric-mesh extension.
In the landmark-point detection, a detector finds corresponding points in the left and right images to feed back the tracked camera position; the x-coordinates of a keypoint in the two images are $x_P^l$ and $x_P^r$. Given the focal length f and the baseline B, the perpendicular distance Z from each image point to the endoscope is obtained by similar-triangle triangulation as in formula (1), where $d_P = x_P^l - x_P^r$ is the disparity of the keypoint between the left and right images.
The frame-by-frame pose estimation includes the position update and the rotation estimation.
In the position update, the camera position is tracked frame by frame in real time; based on the current linear velocity $v_t$ and angular velocity $w_t$, after a short interval $\Delta t$ following initialization, the constant-motion model of formula (2) estimates the position $r_{t+1}$ and the rotation quaternion $q_{t+1}$, where a and α are the linear and angular update coefficients.
The rotation estimation uses RANSAC: in each iteration, three pairs of corresponding 3-D points are randomly selected from the current point set $\{p_t^i\}$ and the next point set $\{p_{t+1}^i\}$ to compute the rotation matrix R and the translation T, and each iteration searches for the R and T that minimize the objective function of formula (3).
The keyframe-based bundle adjustment includes keyframe screening and the bundle-adjustment computation.
The keyframe screening retains frames with material impact according to a fixed criterion, increasing backtracking capability and robustness while reducing the computational load of global optimization. The criterion is that, after initialization, any frame sharing fewer than 80% of its keypoints, but still more than 50 keypoints, with the last fixed keyframe (i.e., between the keyframe to be determined and the previous fixed keyframe) is classified as a keyframe.
In the bundle-adjustment computation, once the keyframes are determined, bundle adjustment re-estimates the three-dimensional position of each keyframe $KF_i$ and landmark point $P_j$ by minimizing the total Huber robust loss over the two-dimensional re-projection errors, as in formula (4), where $\rho_h$ is the constant factor of the loss function and CamProj(·) is the camera projection function.
The dense stereo matching densely reconstructs the landmark points to better describe changes in the environment, using the matching cost C of formula (5), built from the corresponding pixel p in the left and right images and its disparity d, where $\bar{I}$ is the mean intensity over the patch $N_p$ centred on p.
The geometric-mesh extension uses the transformation matrix $T_{f2w}$ to transform the estimated three-dimensional point set from frame space into real space: the surface mesh of the point set is first reconstructed by triangulation, the estimated camera pose is then used to extend the global surface mesh, and finally the image is projected back onto the two-dimensional plane consistent with the camera viewing angle.
Fig. 2 is a schematic example of a measurement application of the augmented reality method for minimally invasive surgery according to the present invention. As shown, measurement on the two-dimensional plane (figures a and c in the left column) is not intuitive; after applying the enhancement technique of the present invention, the three-dimensionally projected measurement is more intuitive and easier to understand.
Fig. 3 is a schematic example of other applications of the augmented reality method for minimally invasive surgery according to the present invention. As shown, figures a and b in the first row illustrate how labelling and view-angle conversion are applied on a human organ; figures c and d in the second row illustrate how a human organ against a complex, dimly lit background is processed with contrast enhancement and highlighting so as to localize the most critical site.
Those skilled in the art will appreciate that the present invention is not limited to the details of the above embodiments and can be realized in other specific forms without departing from the spirit or scope of the invention. Furthermore, those skilled in the art may make various changes and modifications to the present invention without departing from its spirit and scope, and such improvements and modifications shall also be regarded as falling within the protection scope of the invention. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and variations falling within the scope of the invention.

Claims (10)

1. An augmented reality method for minimally invasive surgery, characterized in that it mainly comprises landmark-point detection (1); frame-by-frame pose estimation (2); keyframe-based bundle adjustment (3); dense stereo matching (4); and geometric-mesh extension (5).
2. The landmark-point detection (1) according to claim 1, characterized in that a detector finds corresponding points in the left and right images to feed back the tracked camera position; the x-coordinates of a keypoint P in the two images are denoted $x_P^l$ and $x_P^r$; given the focal length f and the baseline B, the perpendicular distance Z from each image point to the endoscope is obtained by similar-triangle triangulation:
$$\frac{B - (x_P^l - x_P^r)}{Z - f} = \frac{B}{Z} \;\Rightarrow\; Z = \frac{f \cdot B}{d_P} \qquad (1)$$
where $d_P = x_P^l - x_P^r$ is the disparity of the keypoint between the left and right images.
3. The frame-by-frame pose estimation (2) according to claim 1, characterized in that it includes a position update and a rotation estimation.
4. The position update according to claim 3, characterized in that augmented reality applications require the camera position to be tracked frame by frame in real time; based on the current linear velocity $v_t$ and angular velocity $w_t$, after a short interval $\Delta t$ following initialization, a constant-motion model estimates the position $r_{t+1}$ and the rotation quaternion $q_{t+1}$:
$$\begin{cases} r_{t+1} = r_t + v_t \cdot \Delta t \\ q_{t+1} = q_t \times q(w_t \cdot \Delta t) \\ v_t = v_{t-1} + a_{t-1} \cdot \Delta t \\ w_t = w_{t-1} + \alpha_{t-1} \cdot \Delta t \end{cases} \qquad (2)$$
where a and α are the linear and angular update coefficients, respectively.
5. The rotation estimation according to claim 3, characterized in that a random sample consensus (RANSAC) algorithm is used; in each iteration, three pairs of corresponding 3-D points are randomly selected from the current point set $\{p_t^i\}$ and the next point set $\{p_{t+1}^i\}$ and used to compute the rotation matrix R and the translation T, the result being obtained by minimizing the following objective function:
$$\min \sum_{i=1}^{n} \left\| p_t^i - \left( R \cdot p_{t+1}^i + T \right) \right\| \qquad (3)$$
Each iteration searches for the R and T that minimize the value of formula (3).
6. The keyframe-based bundle adjustment (3) according to claim 1, characterized in that it includes keyframe screening and a bundle-adjustment computation.
7. The keyframe screening according to claim 6, characterized in that frames with material impact are retained according to a fixed criterion, to increase backtracking capability and improve robustness while reducing the computational load of global optimization; the criterion is that, after initialization, any frame sharing fewer than 80% of its keypoints, but still more than 50 keypoints, with the last fixed keyframe (i.e., between the keyframe to be determined and the previous fixed keyframe) is classified as a keyframe.
8. The bundle-adjustment computation according to claim 6, characterized in that, once the keyframes are determined, bundle adjustment is used to re-estimate the three-dimensional position of each keyframe $KF_i$ and landmark point $P_j$; specifically, the three-dimensional positions in the camera view are obtained by minimizing the total Huber robust loss over the re-projection errors of the two-dimensional keypoints:
$$\arg\min_{KF_i,\,P_j} \sum_{i,j} \rho_h\!\left( \left\| p_{i,j} - \mathrm{CamProj}(KF_i, P_j) \right\| \right) \qquad (4)$$
where $\rho_h$ is the constant factor of the loss function and CamProj(·) denotes the camera projection function.
9. The dense stereo matching (4) according to claim 1, characterized in that the landmark points are densely reconstructed to better describe changes in the environment; a matching cost C is designed from the corresponding pixel p in the left and right images and its disparity d; specifically,
$$C(p,d) = \frac{\sum_{q \in N_p} \left( I_L(q) - \bar{I}_L(q) \right) \left( I_R(q-d) - \bar{I}_R(q-d) \right)}{\sqrt{\sum_{q \in N_p} \left( I_L(q) - \bar{I}_L(q) \right)^2 \cdot \sum_{q \in N_p} \left( I_R(q-d) - \bar{I}_R(q-d) \right)^2}} \qquad (5)$$
where $\bar{I}$ denotes the mean intensity over the image patch $N_p$ centred on pixel p.
10. The geometric-mesh extension (5) according to claim 1, characterized in that a transformation matrix $T_{f2w}$ transforms the estimated three-dimensional point set from frame space into real space; specifically, the surface mesh of the point set is first reconstructed by triangulation, the estimated camera pose is then used to extend the global surface mesh, and finally the image is projected back onto the two-dimensional plane consistent with the camera viewing angle.
CN201710927040.4A 2017-10-09 2017-10-09 An augmented reality method for minimally invasive surgery Withdrawn CN107767464A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710927040.4A CN107767464A (en) 2017-10-09 2017-10-09 An augmented reality method for minimally invasive surgery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710927040.4A CN107767464A (en) 2017-10-09 2017-10-09 An augmented reality method for minimally invasive surgery

Publications (1)

Publication Number Publication Date
CN107767464A true CN107767464A (en) 2018-03-06

Family

ID=61266554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710927040.4A Withdrawn CN107767464A (en) 2017-10-09 2017-10-09 A kind of augmented reality method for minimally invasive surgery

Country Status (1)

Country Link
CN (1) CN107767464A (en)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LONG CHEN et al., "Real-time Geometry-Aware Augmented Reality in Minimally Invasive Surgery", arXiv:1708.01234v1 *

Similar Documents

Publication Publication Date Title
JP5877053B2 (en) Posture estimation apparatus and posture estimation method
ES2812578T3 (en) Estimating a posture based on silhouette
WO2020054442A1 (en) Articulation position acquisition method and device, and motion acquisition method and device
EP2739036B1 (en) Plane-characteristic-based markerless augmented reality system and method for operating same
CN102609942B (en) Depth map is used to carry out mobile camera location
CN104995666B (en) Method for indicating virtual information in true environment
US20170140552A1 (en) Apparatus and method for estimating hand position utilizing head mounted color depth camera, and bare hand interaction system using same
Canessa et al. Calibrated depth and color cameras for accurate 3D interaction in a stereoscopic augmented reality environment
CN103543830B (en) Method for mapping human skeleton points to virtual three-dimensional space points in three-dimensional display
CN102510506B (en) Virtual and real occlusion handling method based on binocular image and range information
WO2018019272A1 (en) Method and apparatus for realizing augmented reality on the basis of plane detection
JP7423683B2 (en) image display system
JP2011238222A5 (en)
JP7427188B2 (en) 3D pose acquisition method and device
US11436790B2 (en) Passthrough visualization
Rodas et al. See it with your own eyes: Markerless mobile augmented reality for radiation awareness in the hybrid room
EP3185212B1 (en) Dynamic particle filter parameterization
CN110555869A (en) method and system for extracting primary and secondary motion in augmented reality systems
CN106683163A (en) Imaging method and system used in video monitoring
CN111489392B (en) Single target human motion posture capturing method and system in multi-person environment
Chen et al. Camera networks for healthcare, teleimmersion, and surveillance
Zou et al. Automatic reconstruction of 3D human motion pose from uncalibrated monocular video sequences based on markerless human motion tracking
CN104732586B (en) A kind of dynamic body of 3 D human body and three-dimensional motion light stream fast reconstructing method
CN103260008B (en) A kind of image position is to the projection conversion method of physical location
CN102663812A (en) Direct method of three-dimensional motion detection and dense structure reconstruction based on variable optical flow

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20180306)