CN102221884B - Visual tele-existence device based on real-time calibration of camera and working method thereof - Google Patents


Info

Publication number
CN102221884B
CN102221884B CN2011101607395A CN201110160739A
Authority
CN
China
Prior art keywords
camera
frame
real
mounted display
helmet mounted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2011101607395A
Other languages
Chinese (zh)
Other versions
CN102221884A (en)
Inventor
秦学英
李超
王延可
钟凡
彭群生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN2011101607395A priority Critical patent/CN102221884B/en
Publication of CN102221884A publication Critical patent/CN102221884A/en
Application granted granted Critical
Publication of CN102221884B publication Critical patent/CN102221884B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a visual tele-existence device based on real-time camera calibration, and its working method. The device comprises two computers, a head-mounted display (HMD), and a binocular camera whose rotation is driven by a pan-tilt head. The basic working method is: calibrate the HMD camera to compute the orientation and motion trajectory of the operator's head; calibrate the binocular camera to compute the accurate position and orientation of the pan-tilt head; combine the two to generate rotation commands so that the pan-tilt head follows the same trajectory as the operator's head; and send the left- and right-eye images acquired by the binocular camera back to the local computer in real time, displaying them on the HMD in real time, so that a remote three-dimensional scene that follows the motion of the operator's head is seen locally. The device displays the remote scene in true stereo, and the invention can also be used in other applications such as virtual reality.

Description

A visual tele-existence device based on real-time camera calibration, and its working method
Technical field
The present invention relates to telepresence systems, and in particular to a visual tele-existence device based on real-time camera calibration and its working method.
Background technology
As early as the mid-1980s, Susumu Tachi proposed the concept of tele-existence and carried out a large amount of early work, putting forward various forward-looking ideas and testing them under the limited hardware conditions of the time, thereby laying the foundation of tele-existence research. Work in this period concentrated mainly on how the visual and auditory information of a remote scene could be experienced through local devices, focusing on "being present" in the remote scene. In 1990 Susumu Tachi first proposed a Tele-existence Master-Slave System for remote control: sensing equipment captures the motion of a human arm, a computer converts it into control commands for a remote robot, and the remote robot arm reproduces the same motion, so that a human operator can perform remote-operation tasks with the sensation of being present, through the tele-robot, in the remote scene.
1. Recent developments
With the rapid development of modern technology, measuring instruments have become more and more precise, and the capture of human motion information faster and more accurate; robotics, meanwhile, has matured, and robots have become increasingly capable — in particular, humanoid robots can complete ever more complex tasks — so tele-existence technology has developed rapidly. In 1993 Eimei Oyama and Naoki Tsunemoto performed remote operation in a virtual-reality environment, freeing tele-existence research from the limitations of real environments and allowing various experiments to be carried out virtually. Susumu Tachi later proposed an interactive telepresence system in a virtual environment, in which multiple operators can see real-time projections of each other's operations.
In 2000, C. DiSalvo et al. proposed a design and cognition method for a humanoid robot head. In 2003 Susumu Tachi invented a tele-existence cockpit system for controlling a remote humanoid robot, taking the operator's sense of reality a step further. Also in 2003, H. Baier et al. proposed an interactive stereoscopic vision scheme for remote exhibition. In 2005 R. Tadakuma, Y. Asahara et al. invented a multi-degree-of-freedom humanoid robot arm for a tele-existence master-slave system, capable of completing dexterous remote-operation tasks and even communicating with other people by gesture. In 2008 Kouichi Watanabe et al. developed TORSO, a 6-DOF robot that can accurately reproduce the motion of a human head and neck, and built a tele-existence vision system, the HMD-TORSO System.
2. Interactive telepresence systems
As tele-existence technology matured, more attention turned to the research and realization of interactive telepresence systems. M. Inami et al. proposed a new visual presence technique, RPT (Retro-reflective Projection Technology), which can project a person's image onto a robot to make interaction more realistic; S. Tachi later described the application of this technique to tele-existence in detail. In 2005 S. Tachi proposed two development patterns for interactive telepresence systems, TELESAR and TWISTER, surveyed the robots used in existing telepresence systems, put forward two lines of robot development — "the robot as an independent intelligent agent" and "the robot as an extension of the human" — and proposed the concept of a next-generation human-robot network system. In 2005 N. Kawakami et al. proposed telesarPHONE, a conceptual prototype for robot-mediated face-to-face remote communication, and in 2008 Susumu Tachi refined the RPT-based telesarPHONE, enabling fast, realistic interaction with the people and objects of the remote scene, with various sensations fed back directly to the user.
In current tele-existence research the various techniques are increasingly refined and the resulting sensations increasingly realistic, but the motion of the operator's head is mostly captured with sensors, which both raises system cost and limits tracking accuracy. A computer-vision approach — using a real-time camera-calibration algorithm to compute the operator's head motion — can achieve good tracking accuracy without any extra equipment.
Summary of the invention
The object of the invention is to solve the problem of tracking the head motion of the user of a telepresence system, by providing a visual tele-existence device based on real-time camera calibration together with its working method. The device is simple: two computers, a binocular camera rotated by a pan-tilt head, and a head-mounted display (HMD). The method requires no additional sensing equipment on the HMD: the user's head motion is obtained directly by calibrating the HMD camera in real time, the remote computer drives the pan-tilt head through the corresponding motion, and meanwhile the left- and right-eye images captured by the binocular camera are sent back to the local computer and displayed on the HMD in real time.
A visual tele-existence device based on real-time camera calibration comprises two computers, a binocular camera rotated by a pan-tilt head, and an HMD. The local computer is directly connected to the HMD; the remote computer is directly connected to the pan-tilt head and the binocular camera; the local and remote computers are interconnected over a network.
The working method of the visual tele-existence device based on real-time camera calibration is as follows:
1) The HMD camera shoots a complete offline video sequence of the local scene, and the binocular camera shoots a complete offline video sequence of the remote scene. From these sequences, SIFT feature points are extracted and matched, their three-dimensional coordinates are solved, and key frames are selected, yielding three-dimensional descriptions of the local and remote scenes;
2) The image sequence captured in real time by the HMD camera is sent to the local computer; at the same time, the left- and right-eye image sequences captured in real time by the binocular camera are sent to the remote computer;
3) When the current frame is the first frame of the sequence, the local computer computes the initial camera parameters of the HMD camera and sends them to the remote computer, which turns the pan-tilt head to the same orientation as the HMD camera, so that the HMD and the binocular camera face the same way;
4) From the acquired image sequences, the local and remote computers each compute their camera parameters;
5) The local computer sends the camera parameters obtained by real-time calibration to the remote computer; the remote computer combines the HMD camera parameters received from the local computer with the currently solved binocular-camera parameters to generate a rotation command for the pan-tilt head;
6) The remote computer sends the left- and right-eye images acquired by the binocular camera back to the local computer in real time, where they are displayed on the HMD in real time;
7) If the current frame is the last frame, the job ends; otherwise the next frame becomes the current frame and steps (4) to (6) are repeated.
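As a rough illustration, the per-frame control loop of steps 3) to 7) can be sketched as follows. This is a minimal sketch, not the patent's implementation; `calibrate` and `make_pan_tilt_command` are hypothetical placeholders for the real-time calibration and the command-generation stages (the binocular-side calibration of step 5 is folded into the command function for brevity).

```python
def run_tele_existence(frames, calibrate, make_pan_tilt_command):
    """Process an image sequence frame by frame; return the pan-tilt
    commands issued. `frames` stands in for the HMD camera stream."""
    commands = []
    for idx, frame in enumerate(frames):
        # Steps 3)/4): first frame gives the initial HMD camera
        # parameters; later frames are calibrated in real time.
        hmd_params = calibrate(frame)
        # Step 5): the remote side turns the HMD parameters (together
        # with the binocular calibration, omitted here) into a command.
        commands.append(make_pan_tilt_command(hmd_params, hmd_params))
        # Step 6), sending images back, is independent of this loop.
    # Step 7): the loop ends at the last frame.
    return commands
```

With identity placeholders, each frame simply produces one command per calibration result.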
Five aspects of the invention are described in detail:
1) GPU-accelerated real-time camera calibration
The real-time calibration algorithm has an offline stage and an online stage. The offline stage builds a three-dimensional description of the real scene, including setting up a SIFT feature-point library and selecting key frames: an offline video sequence of the scene is acquired, SIFT feature points are extracted, their spherical coordinates are solved, and the feature points are organized by key frame to form the offline feature-point library. In the online stage, we use KLT to track features across successive frames and solve the camera parameters of adjacent frames from the tracking results. In this process the calibration error accumulates gradually frame by frame, so a suitable strategy periodically selects a frame on which SIFT feature points are extracted and matched against a suitable key frame, accurate camera parameters are computed, and the accumulated error is eliminated.
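The interplay between fast incremental KLT tracking and periodic SIFT relocalization against key frames can be sketched as follows. The function names and the correction interval are hypothetical stand-ins, not the patent's code; the point is the structure: incremental tracking accumulates drift, and a periodic key-frame match resets it.

```python
KEYFRAME_PERIOD = 30  # hypothetical drift-correction interval (frames)

def track_sequence(n_frames, klt_step, sift_relocalize, pose0):
    """Per-frame pose estimates: incremental tracking each frame,
    with an accurate key-frame relocalization every KEYFRAME_PERIOD."""
    poses = [pose0]
    for i in range(1, n_frames):
        pose = klt_step(poses[-1])        # fast, drifting estimate
        if i % KEYFRAME_PERIOD == 0:
            pose = sift_relocalize(i)     # accurate, drift-free estimate
        poses.append(pose)
    return poses
```

Simulating drift (each KLT step overshoots by 1%) against an exact relocalizer shows the periodic reset at frame 30.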
Because the data rate of a video sequence is very high, extracting, matching, and tracking the feature points of each frame take considerable time and are the bottleneck of the calibration computation. Running the KLT and SIFT algorithms directly on a single CPU at 640x480 resolution takes on the order of hundreds of milliseconds. Yet both KLT and SIFT — in image filtering, pyramid construction, feature-point extraction, and the per-feature computations — repeat the same operations over large numbers of pixels and feature points, so the algorithms have a very high degree of parallelism. The parallel computing capability of the GPU can therefore greatly reduce the running time.
Our KLT and SIFT algorithms are implemented in CUDA. CUDA (Compute Unified Device Architecture) is a general parallel computing architecture on the GPU; compared with GPU shading languages such as Cg and HLSL, it needs no mapping to a graphics API in order to manage the GPU and perform data-parallel computation, and it offers more varied ways of reading data, so CUDA programming is easier and more flexible and exploits the computing power of the GPU more fully. When implementing KLT and SIFT on the GPU, the image data and intermediate results are kept in GPU global memory, where repeated reads would waste a great deal of time; we therefore bind the data to be read to textures to obtain high read speeds, and use shared memory to keep threads within a block from repeatedly reading the same data. In addition, when the KLT algorithm selects feature points, the minimum-eigenvalue data to be evaluated is passed back to the CPU for point selection; to reduce transfer time, this data is pre-screened during computation and, using atomic functions, only the screened data is saved. Likewise, when SIFT performs local-extremum detection it must test every pixel of the middle DoG layer, while the actual number of extrema is small; we use atomic functions for exclusive access to the critical section, so that only the detected extremum data is saved, reducing the thread count of subsequent steps. Finally, when computing the SIFT feature descriptors, the per-feature computation and resource usage are large while the number of features is small, so we use shared memory and compute each feature point's descriptor with 4 parallel threads, shortening per-thread running time and increasing thread parallelism.
In our experiments, for 640x480 image sequences with GPU acceleration, the KLT feature-tracking time drops below 10 milliseconds and SIFT feature extraction shrinks to 20-30 milliseconds, which satisfies the real-time requirement well.
2) Solving adjacent-frame parameters
A video sequence shot by a camera has good continuity, so localization between frames should take full advantage of it. The camera geometric model under a fixed viewpoint is as follows.
If the camera position is taken as the origin of the world coordinate system, a point M = [X Y Z 1]^T in world coordinates and its projection m_i = [x y 1]^T on frame i of the video sequence are related by

    m_i ~ K_i R_i M        (1)
where K_i is the intrinsic-parameter matrix of the camera at frame i, and R_i is the rotation matrix from the world coordinate system to the camera coordinate system of frame i. R_i can also be represented by Euler angles, R = R_Z(γ) R_X(β) R_Z(α), using the Z-X-Z Euler rotation convention. The relation between corresponding points of any two frames of the sequence can be expressed as:
    m_{i+n} ~ K_{i+n} R_{i+n} R_i^{-1} K_i^{-1} m_i        (2)
Therefore, the point correspondence between any two frames can be expressed by a homography matrix.
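Assuming known intrinsic matrices and the Z-X-Z Euler convention above, the inter-frame homography of equation (2) can be formed directly. A minimal numpy sketch (helper names are ours, not the patent's):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def euler_zxz(alpha, beta, gamma):
    """Z-X-Z Euler rotation, R = R_Z(gamma) R_X(beta) R_Z(alpha)."""
    return rot_z(gamma) @ rot_x(beta) @ rot_z(alpha)

def interframe_homography(K_i, R_i, K_j, R_j):
    """Homography of equation (2): m_j ~ K_j R_j R_i^{-1} K_i^{-1} m_i."""
    return K_j @ R_j @ np.linalg.inv(R_i) @ np.linalg.inv(K_i)

def transfer(H, m):
    """Map a homogeneous image point through H and renormalize."""
    p = H @ np.asarray(m, dtype=float)
    return p / p[2]
```

For a pure rotation between frames (the fixed-viewpoint model above), this homography transfers points exactly, with no dependence on scene depth.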
Solving with the camera homography, however, still yields a nonlinear function of the camera parameters whose optimization is hard to solve. We therefore exploit the continuity between frames: when the camera moves smoothly and the rotation angle is sufficiently small, the transformation between two adjacent frames caused by camera rotation can be approximated as a translation plus rotation in the same plane, and lens distortion between the two frames can be ignored, which simplifies the model.
If point m_{i-1} = (x_{i-1}, y_{i-1}) in frame i-1 and point m_i = (x_i, y_i) in frame i are corresponding feature points, they are related by

    [x_i]          [ cos Δγ_i    sin Δγ_i ] [x_{i-1}]   [Δx_i]
    [y_i] = Δλ_i * [-sin Δγ_i    cos Δγ_i ] [y_{i-1}] + [Δy_i]        (5)

where both m_i and m_{i-1} are expressed with the image center as origin; Δγ_i is the rotation angle from frame i-1 to frame i, Δλ_i is the zoom factor from frame i-1 to frame i, and (Δx_i, Δy_i) is the displacement of the image center from frame i-1 to frame i. (Δx_i, Δy_i, Δλ_i, Δγ_i) constitute the transformation parameters from frame i-1 to frame i; their correspondence with the camera intrinsic focal length f_i and the rotation angles (α_i, β_i, γ_i) is:
    f_i = Δλ_i f_{i-1}
    γ_i = γ_{i-1} + Δγ_i        (6)

[The corresponding update formulas for α_i and β_i, and the definitions that followed, appear only as equation images in the source and are not recoverable here.]
To solve for the values of (Δx_i, Δy_i, Δλ_i, Δγ_i), we iterate in two alternating steps, computing (Δx_i, Δy_i) and (Δλ_i, Δγ_i) respectively, until both converge simultaneously. (Δx_i, Δy_i) can be found by linear least squares; the estimation of (Δλ_i, Δγ_i) is nonlinear, so we search for it iteratively, yielding a fast solution algorithm.
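A minimal numpy sketch of this two-step alternating solution. The closed-form substeps chosen here (mean residual for the translation; the linear substitution a = Δλ·cos Δγ, b = Δλ·sin Δγ for the scale/rotation) are one plausible realization, not necessarily the patent's exact method:

```python
import numpy as np

def similarity_step(prev_pts, cur_pts, iters=50, tol=1e-9):
    """Estimate (dx, dy, dlam, dgam) of equation (5) by alternating
    between the translation and the scale/rotation sub-problems."""
    s = np.asarray(prev_pts, dtype=float)   # points in frame i-1
    q = np.asarray(cur_pts, dtype=float)    # matching points in frame i
    t, lam, gam = np.zeros(2), 1.0, 0.0
    for _ in range(iters):
        R = np.array([[np.cos(gam),  np.sin(gam)],
                      [-np.sin(gam), np.cos(gam)]])   # rotation of eq. (5)
        # Sub-problem 1: with (lam, gam) fixed, the least-squares
        # translation is the mean residual.
        t_new = (q - lam * s @ R.T).mean(axis=0)
        # Sub-problem 2: with t fixed, substitute a = lam*cos(gam),
        # b = lam*sin(gam); the model becomes linear in (a, b).
        r = q - t_new
        denom = (s ** 2).sum()
        a = (s[:, 0] * r[:, 0] + s[:, 1] * r[:, 1]).sum() / denom
        b = (s[:, 1] * r[:, 0] - s[:, 0] * r[:, 1]).sum() / denom
        lam_new, gam_new = float(np.hypot(a, b)), float(np.arctan2(b, a))
        done = (np.abs(t_new - t).max() < tol
                and abs(lam_new - lam) < tol and abs(gam_new - gam) < tol)
        t, lam, gam = t_new, lam_new, gam_new
        if done:
            break
    return t[0], t[1], lam, gam
```

Once (Δx_i, Δy_i, Δλ_i, Δγ_i) is found, equation (6) updates the focal length as f_i = Δλ_i·f_{i-1} and the angle as γ_i = γ_{i-1} + Δγ_i.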
3) Computing the user's head trajectory with the real-time calibration algorithm
Existing telepresence systems capture the user's head motion with sensors, which must be additionally mounted on the HMD, are expensive, and have limited precision. With our vision-based method, the user's head motion is obtained directly by calibrating the HMD camera in real time, so no extra sensors are needed and system cost is reduced.
4) Precise pan-tilt positioning with the real-time calibration algorithm
In general the rotation of the pan-tilt head always has some error, which may grow large after the system has run for a while; the exact orientation of the head is then unknown. We therefore calibrate the binocular camera from the image data acquired on the remote computer, computing its accurate camera parameters and thereby obtaining the accurate rotational position of the pan-tilt head.
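Given the calibrated orientation of the binocular camera (and hence of the pan-tilt head) and the target orientation from the HMD calibration, the corrective rotation could be computed as follows. This is a hedged sketch under the assumption that both orientations are orthonormal rotation matrices in a shared world frame; the function names are ours:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def correction_rotation(R_hmd, R_pan_tilt):
    """Rotation taking the pan-tilt head's calibrated orientation to
    the HMD's orientation: correction @ R_pan_tilt = R_hmd."""
    return R_hmd @ R_pan_tilt.T   # orthonormal, so inverse = transpose

def rotation_angle(R):
    """Magnitude of a rotation, recovered from its trace."""
    return float(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
```

The correction's angle can be compared against a threshold to decide whether a new pan-tilt command is worth issuing.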
5) Stereo display based on the binocular camera
The binocular camera captures left- and right-eye images of the remote scene and sends them back to the local computer in real time for stereoscopic display on the HMD, giving the user a real-time three-dimensional impression of the remote scene.
Description of drawings
Fig. 1 is the software flowchart of the method of the invention.
Fig. 2 is a structural schematic diagram of the device of the invention.
Wherein:
1. head-mounted display
2. helmet camera
3. display device
4. local computer
5. GPU
6. CPU
7. pan-tilt binocular camera unit
8. pan-tilt head
9. binocular camera
10. remote computer
11. image data
12. camera parameters
Embodiment
The invention is further described below with reference to the accompanying drawings and an example.
Embodiment:
A tele-existence device based on real-time camera calibration, as shown in Fig. 2, comprises two computers, a binocular camera rotated by a pan-tilt head, and an HMD; it is characterized in that the local computer is directly connected to the HMD, the remote computer is directly connected to the pan-tilt head and the binocular camera, and the local and remote computers are interconnected over a network.
The working method of the tele-existence device based on real-time camera calibration, shown in Fig. 1, is as follows:
1) The HMD camera shoots a complete offline video sequence of the local scene, and the binocular camera shoots a complete offline video sequence of the remote scene. From these sequences, SIFT feature points are extracted and matched, their three-dimensional coordinates are solved, and key frames are selected, yielding three-dimensional descriptions of the local and remote scenes;
2) The image sequence captured in real time by the HMD camera is sent to the local computer; at the same time, the left- and right-eye image sequences captured in real time by the binocular camera are sent to the remote computer;
3) When the current frame is the first frame of the sequence, the local computer computes the initial camera parameters of the HMD camera and sends them to the remote computer, which turns the pan-tilt head to the same orientation as the HMD camera, so that the HMD and the binocular camera face the same way;
4) From the acquired image sequences, the local and remote computers each compute their camera parameters;
5) The local computer sends the camera parameters obtained by real-time calibration to the remote computer; the remote computer combines the HMD camera parameters received from the local computer with the currently solved binocular-camera parameters to generate a rotation command for the pan-tilt head;
6) The remote computer sends the left- and right-eye images acquired by the binocular camera back to the local computer in real time, where they are displayed on the HMD in real time;
7) If the current frame is the last frame, the job ends; otherwise the next frame becomes the current frame and steps (4) to (6) are repeated.

Claims (1)

  1. A working method of a visual tele-existence device based on real-time camera calibration, characterized in that the working method is as follows:
    1) The HMD camera shoots a complete offline video sequence of the local scene and the binocular camera shoots a complete offline video sequence of the remote scene; a GPU-accelerated real-time camera-calibration method calibrates the HMD camera and the binocular camera in real time. The calibration method has an offline stage and an online stage. The offline stage builds a three-dimensional description of the real scene, including setting up a SIFT feature-point library and selecting key frames: an offline video sequence of the scene is acquired, SIFT feature points are extracted, their spherical coordinates are solved, and the feature points are organized by key frame to form the offline feature-point library. In the online stage, KLT tracks features across successive frames and the camera parameters of adjacent frames are solved from the tracking results; since the calibration error accumulates gradually frame by frame, SIFT feature points are extracted and matched against selected key frames, accurate camera parameters are computed, and the accumulated error is eliminated;
    The extraction, matching, and tracking of the feature points of each frame are the bottleneck of the calibration computation, so the KLT and SIFT algorithms are implemented in CUDA (Compute Unified Device Architecture), a general parallel computing architecture on the GPU. In addition, when the KLT algorithm selects feature points, the minimum-eigenvalue data to be evaluated is passed back to the CPU for point selection; this data is pre-screened during computation and, using atomic functions, only the screened data is saved. Furthermore, when the SIFT algorithm performs local-extremum detection it must test every pixel of the middle DoG layer, and only the detected extremum data is saved. Finally, when computing the SIFT feature descriptors, shared memory is used and each feature point's descriptor is computed with 4 parallel threads;
    2) The image sequence captured in real time by the HMD camera is sent to the local computer; at the same time, the left- and right-eye image sequences captured in real time by the binocular camera are sent to the remote computer;
    3) When the current frame is the first frame of the sequence, the local computer computes the initial camera parameters of the HMD camera and sends them to the remote computer, which turns the pan-tilt head to the same orientation as the HMD camera, so that the HMD and the binocular camera face the same way;
    4) From the acquired image sequences, the local and remote computers each compute their camera parameters;
    5) The local computer sends the camera parameters obtained by real-time calibration to the remote computer; the remote computer combines the HMD camera parameters received from the local computer with the currently solved binocular-camera parameters to generate a rotation command for the pan-tilt head;
    6) The remote computer sends the left- and right-eye images acquired by the binocular camera back to the local computer in real time, where they are displayed on the HMD in real time;
    7) If the current frame is the last frame, the job ends; otherwise the next frame becomes the current frame and steps (4) to (6) are repeated.
CN2011101607395A 2011-06-15 2011-06-15 Visual tele-existence device based on real-time calibration of camera and working method thereof Active CN102221884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011101607395A CN102221884B (en) 2011-06-15 2011-06-15 Visual tele-existence device based on real-time calibration of camera and working method thereof


Publications (2)

Publication Number Publication Date
CN102221884A CN102221884A (en) 2011-10-19
CN102221884B true CN102221884B (en) 2013-04-24

Family

ID=44778449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011101607395A Active CN102221884B (en) 2011-06-15 2011-06-15 Visual tele-existence device based on real-time calibration of camera and working method thereof

Country Status (1)

Country Link
CN (1) CN102221884B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103929669A (en) * 2014-04-30 2014-07-16 成都理想境界科技有限公司 Interactive video generator, player, generating method and playing method

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101926563B1 (en) * 2012-01-18 2018-12-07 삼성전자주식회사 Method and apparatus for camera tracking
CN104240214A (en) * 2012-03-13 2014-12-24 湖南领创智能科技有限公司 Depth camera rapid calibration method for three-dimensional reconstruction
CN102799271A (en) * 2012-07-02 2012-11-28 Tcl集团股份有限公司 Method and system for identifying interactive commands based on human hand gestures
CN105388851B (en) * 2015-10-30 2018-03-27 黑龙江大学 Movable body vision control system and method, electromechanical movement body and mobile terminal
CN106095134B (en) * 2016-06-07 2019-01-18 苏州佳世达电通有限公司 A kind of electronic device and its record and display methods
CN106303246A (en) * 2016-08-23 2017-01-04 刘永锋 Real-time video acquisition methods based on Virtual Realization
CN106931962A (en) * 2017-03-29 2017-07-07 武汉大学 A kind of real-time binocular visual positioning method based on GPU SIFT
CN107918765A (en) * 2017-11-17 2018-04-17 中国矿业大学 A kind of Moving target detection and tracing system and its method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101794349A (en) * 2010-02-09 2010-08-04 北京邮电大学 Experimental system and method for augmented reality of teleoperation of robot
CN101916429A (en) * 2010-07-09 2010-12-15 浙江大学 Geometric correction and disparity extraction device of binocular camera



Also Published As

Publication number Publication date
CN102221884A (en) 2011-10-19


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant