CN103472923B - Method for selecting scene objects with a three-dimensional virtual gesture - Google Patents

Method for selecting scene objects with a three-dimensional virtual gesture

Info

Publication number
CN103472923B
CN103472923B (application CN201310445376.9A)
Authority
CN
China
Prior art keywords
candidate
desktop
gesture
dimensional
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310445376.9A
Other languages
Chinese (zh)
Other versions
CN103472923A (en)
Inventor
冯志全
张廷芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN201310445376.9A priority Critical patent/CN103472923B/en
Publication of CN103472923A publication Critical patent/CN103472923A/en
Application granted granted Critical
Publication of CN103472923B publication Critical patent/CN103472923B/en

Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for selecting a scene object with a three-dimensional virtual gesture, comprising the following steps: (1) establish a three-dimensional gesture model library; the method is characterized by further comprising the steps: (2) determine the plane equation of the tabletop on which the objects are placed; (3) initialize the time T = 0; (4) predict the motion trajectory of the three-dimensional gesture: from the gesture positions H_T and H_{T+1} at times T and T+1, predict a straight line l; (5) preliminarily select the candidate object set C1; (6) finalize the candidate object set C2; (7) display the candidates by highlighting every object in C2; (8) perform collision detection on each candidate in C2 in turn; if a collision is detected, the selection ends; otherwise set T = T + 1 (T is the time) and return to step (4). The invention enables a three-dimensional gesture to select objects in a scene accurately.

Description

A Method for Selecting Scene Objects with Three-dimensional Virtual Gestures

Technical Field

The invention relates to the field of three-dimensional virtual gestures, and in particular to a method for selecting scene objects with three-dimensional virtual gestures.

Background Art

Selecting an object by pointing is the simplest action in reality: given a selection direction, the selected object can be determined by an intersection test. In immersive environments the selection direction is usually formed from the viewpoint and the position of the virtual hand, while in a desktop environment only the mouse position is needed. The technique dates back to the "put-that-there" interface developed at MIT in 1980, in which voice was used to confirm the selection. Since then a large number of selection techniques have been developed; they differ in two design variables: how the selection direction is formed and the selection range (which determines how many objects may be selected at once).

Since pointing selection requires essentially no physical movement from the user, it achieves good user performance for the selection itself. However, because of its limited degrees of freedom, pointing is unsuitable for object positioning: the user can only place an object on an arc around the viewpoint and cannot move it along the selection direction.

Ray-casting is the simplest pointing-selection technique. The selection direction is determined by a vector p attached to the virtual hand and the hand position h: p(a) = h + ap, 0 < a < +∞. In theory the technique can select arbitrarily distant objects, but selection becomes harder as distance grows and object size shrinks, and hardware jitter gradually raises the selection error rate.
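As a rough illustration (not part of the patent), the ray-casting test can be sketched against bounding spheres standing in for scene objects; the sphere approximation and all names here are assumptions of this sketch:

```python
# Ray-casting sketch: the ray p(a) = h + a*d from the virtual-hand position h
# along direction d is tested against a sphere stand-in for a scene object.

def ray_sphere_hit(h, d, center, radius):
    """Return True if the ray h + a*d (a > 0) intersects the sphere."""
    oc = [c - o for c, o in zip(center, h)]   # origin -> sphere centre
    dd = sum(x * x for x in d)                # |d|^2 (d need not be unit length)
    a = sum(p * q for p, q in zip(oc, d)) / dd
    if a < 0:
        return False                          # sphere is behind the hand
    closest = [o + a * x for o, x in zip(h, d)]
    dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist2 <= radius ** 2

# Example: a ray from the origin along +x grazes a sphere centred at (5, 0.3, 0)
print(ray_sphere_hit((0, 0, 0), (1, 0, 0), (5, 0.3, 0), 0.5))  # True
```

The same test, repeated over all scene objects, is the "intersection test" the background section refers to.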

The Spotlight technique replaces the ray with a cone originating at the start point of the selection vector; any object falling inside the cone is selected, as if illuminated. This is especially helpful for selecting tiny objects, but several objects may fall inside the cone at once, which increases misoperation.

Aperture improves on Spotlight by letting the user interactively control the projected range while selecting objects with the cone. The cone's direction is determined by a head tracker and the virtual hand, and the user interactively adjusts the distance between hand and head to resize the cone and so disambiguate the selection.

Image-Plane builds on a basic principle of graphics: every object we see is a perspective projection onto a plane behind the view frustum, so a single point on the projection plane suffices for selection. Like the techniques above, it cannot change the distance between the selected object and the operator. Fishing-Reel compensates for this by adding input hardware, such as a button or slider on the tracker, so the user can interactively move the selected object along the selection vector; but because the object's six degrees of freedom are controlled separately, user performance still suffers.

Summary of the Invention

The technical problem to be solved by the present invention is to provide a method for selecting scene objects with three-dimensional virtual gestures, so that a three-dimensional gesture can accurately select objects in the scene.

The present invention achieves this object with the following technical scheme:

A method for selecting scene objects with a three-dimensional virtual gesture, comprising the following steps:

(1) Establish a three-dimensional gesture model library;

The method is characterized by further comprising the following steps:

(2) Determine the plane equation of the tabletop on which the objects are placed;

(3) Initialize the time T = 0;

(4) Predict the motion trajectory of the three-dimensional gesture: using the positions H_T and H_{T+1} of the three-dimensional gesture at times T and T+1, predict a straight line l:

The equation of the line l is:

x = ((t - (T+1)) / (T - (T+1))) · x_T + ((t - T) / ((T+1) - T)) · x_{T+1}
y = ((t - (T+1)) / (T - (T+1))) · y_T + ((t - T) / ((T+1) - T)) · y_{T+1}
z = ((t - (T+1)) / (T - (T+1))) · z_T + ((t - T) / ((T+1) - T)) · z_{T+1}

for t = 0, 1, 2, …, n, where n is the number of frames to be predicted, H_T = (x_T, y_T, z_T), and H_{T+1} = (x_{T+1}, y_{T+1}, z_{T+1});
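The line equation of step (4) is plain linear (Lagrange) interpolation through the two hand samples, so a minimal sketch might look like this; the function and variable names are illustrative, not from the patent:

```python
# Sketch of step (4): two hand positions H_T and H_{T+1} define a straight
# line, evaluated at frame parameters t = 0..n (Lagrange form, as in the
# patent's equation; the denominators are T-(T+1) = -1 and (T+1)-T = 1).

def predict_line(HT, HT1, T, n):
    """Return the n+1 predicted points on the line through HT and HT1."""
    points = []
    for t in range(n + 1):
        w0 = (t - (T + 1)) / (T - (T + 1))  # weight of the sample at time T
        w1 = (t - T) / ((T + 1) - T)        # weight of the sample at time T+1
        points.append(tuple(w0 * a + w1 * b for a, b in zip(HT, HT1)))
    return points

pts = predict_line(HT=(0.0, 0.0, 1.0), HT1=(0.1, 0.0, 0.9), T=0, n=3)
print(pts[2])  # extrapolates past H_{T+1} toward the table: (0.2, 0.0, 0.8)
```

Because only two samples are used, the "trajectory" is the straight line through the latest two hand positions, re-predicted every frame.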

(5) Preliminarily select the candidate object set C1;

(5.1) Obtain the intersection point o of the line l with the tabletop plane Z;

(5.2) Set the selection threshold R, where M_i is the coordinate of object i and N is the number of objects remaining on the tabletop;

(5.3) Compute in turn the distance r_i from each object to the intersection point o; every object with r_i < R is a candidate and is added to the candidate set C1;
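Assuming for concreteness a horizontal tabletop at z = 0 (the patent works with a general plane Z), steps (5.1)-(5.3) can be sketched as follows; the object list and all names are illustrative:

```python
# Sketch of step (5): intersect the predicted line with the tabletop plane and
# keep every object within radius R of the intersection point o.
import math

def candidate_set_c1(HT, HT1, objects, R):
    """objects: {name: (x, y, z)}; returns names within R of the table hit point."""
    # Solve HT.z + s * (HT1.z - HT.z) = 0 for the assumed plane z = 0.
    dz = HT1[2] - HT[2]
    if dz == 0:
        return []  # hand moving parallel to the table: no intersection
    s = -HT[2] / dz
    o = tuple(a + s * (b - a) for a, b in zip(HT, HT1))  # intersection point o
    return [name for name, m in objects.items() if math.dist(m, o) < R]

objs = {"cup": (1.0, 0.0, 0.0), "book": (4.0, 4.0, 0.0)}
hit = candidate_set_c1((0.0, 0.0, 1.0), (0.2, 0.0, 0.8), objs, R=1.5)
print(hit)  # ['cup']
```

For a general tabletop plane the same solve-for-s step applies with the plane equation of step (2) in place of z = 0.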

(6) Finalize the candidate object set C2;

(6.1) For each candidate in the set C1, compute the cosine of the angle between the vector formed by the current position of the three-dimensional gesture and the candidate and the normal vector of the hand, i.e. cos<n, H_{T+1} - M_i>, where M_i is the coordinate of object i and n is the normal vector of the hand, the vector perpendicular to the palm;

(6.2) Check whether cos<n, H_{T+1} - M_i> is greater than 0; if so, the object remains a candidate and is added to the candidate set C2. This step excludes candidates that lie behind the three-dimensional gesture;
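Steps (6.1)-(6.2) reduce to a sign test on a dot product, since cos<n, H_{T+1} - M_i> > 0 exactly when n · (H_{T+1} - M_i) > 0. A minimal sketch, with the palm-normal direction an assumption chosen so that the object below the hand passes the test:

```python
# Sketch of step (6): keep a candidate M_i only when the angle between the palm
# normal n and the vector H_{T+1} - M_i has a positive cosine. Only the sign of
# the cosine matters, so the dot product suffices.

def candidate_set_c2(H, n, candidates):
    """H: hand position, n: palm normal, candidates: {name: position}."""
    kept = []
    for name, m in candidates.items():
        v = tuple(h - c for h, c in zip(H, m))      # H_{T+1} - M_i
        if sum(a * b for a, b in zip(n, v)) > 0:    # cos > 0 iff dot > 0
            kept.append(name)
    return kept

c2 = candidate_set_c2(H=(0, 0, 1), n=(0, 0, 1),
                      candidates={"cup": (1, 0, 0), "lamp": (0, 0, 3)})
print(c2)  # ['cup'] -- the lamp above the hand is excluded
```

The sign convention of n follows the patent's formula as written; an implementation must fix the palm normal's orientation consistently for the front/behind test to hold.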

(7) Display the candidates by highlighting every object in the set C2;

(8) Perform collision detection on the candidates: test each object in the set C2 in turn; if a collision is detected, the selection ends; otherwise set T = T + 1 (T is the time) and return to step (4).
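Step (8) can be sketched with a simple bounding-sphere overlap test standing in for whatever collision detection an implementation uses; the radii and names are illustrative assumptions:

```python
# Sketch of step (8): test the hand against each highlighted candidate in C2;
# success selects the object, otherwise the caller advances to the next frame
# (T = T + 1) and re-predicts the line.
import math

def collide(p, pr, q, qr):
    """Bounding-sphere test: True when the two spheres overlap."""
    return math.dist(p, q) <= pr + qr

def select_from_c2(hand_pos, c2, hand_r=0.1):
    """c2 maps object name -> (position, radius); returns the first hit or None."""
    for name, (pos, r) in c2.items():
        if collide(hand_pos, hand_r, pos, r):
            return name  # collision detection succeeded: this object is selected
    return None  # no hit this frame; the caller sets T = T + 1 and loops

picked = select_from_c2((1.0, 0.0, 0.05),
                        {"cup": ((1.0, 0.0, 0.0), 0.2),
                         "book": ((4.0, 4.0, 0.0), 0.3)})
print(picked)  # cup
```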

As a further limitation of this technical scheme, step (2) comprises the following steps:

(2.1) In the three-dimensional scene, the three-dimensional coordinates of the lower-left corner point A of the tabletop are known, as are the vectors along the two edges meeting at that corner, i.e. two intersecting vectors lying in the tabletop;

(2.2) From the point A and the two edge vectors, obtain the plane equation of the tabletop: Z = f(x, y, z).
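A common way to realize step (2) is to take the cross product of the two edge vectors as the plane normal; the patent shows the vectors only as figures, so u and v here are assumed names:

```python
# Sketch of step (2): the table corner A and the two edge vectors u, v spanning
# the tabletop determine the plane n . P = d via the cross product n = u x v.

def table_plane(A, u, v):
    """Return (n, d) for the plane n . P = d through A spanned by u and v."""
    n = (u[1] * v[2] - u[2] * v[1],   # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    d = sum(a * b for a, b in zip(n, A))  # plane passes through A
    return n, d

# A horizontal 2 x 1 tabletop at height z = 0.5:
n, d = table_plane(A=(0, 0, 0.5), u=(2, 0, 0), v=(0, 1, 0))
print(n, d)  # (0, 0, 2) 1.0
```

Any point P with n · P = d then lies on the tabletop, which is the form needed for the line-plane intersection of step (5.1).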

Compared with the prior art, the advantages and positive effects of the present invention are: 1. The better selection techniques mentioned in the background require extra hardware, such as a button or slider added to the tracker; the present invention does not. 2. The selection techniques mentioned in the background select only by object position, whereas the present invention selects objects from the state information of the gesture and the continuously updated relative position between the hand and the scene objects. 3. The background techniques are essentially long-distance selection, so the probability of selecting an object also depends on its distance and size; for example, smaller objects in the scene are harder to select. The present invention selects at close range, choosing an object only when the hand is near it, so small objects are no longer hard to select and selection accuracy is high.

Brief Description of the Drawings

Fig. 1 is the processing flowchart of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawing and a preferred embodiment.

Referring to Fig. 1, a three-dimensional virtual gesture is a computer-generated three-dimensional hand model, consistent with the user's gesture, produced with computer-vision tracking or reconstruction methods; this is prior art, so the step of building the three-dimensional gesture model library is omitted here. The method of selecting scene objects with the three-dimensional virtual gesture is as follows:

Step 1.

Determine the plane equation of the tabletop on which the objects are placed. In the three-dimensional scene, the three-dimensional coordinates of the lower-left corner point A of the tabletop are known, as are the vectors along the two edges meeting at that corner, i.e. two intersecting vectors lying in the tabletop. From the point A and these vectors, obtain the plane equation of the tabletop: Z = f(x, y, z).

Step 2. Initialize the time T = 0.

Step 3. Predict the motion curve of the three-dimensional hand, i.e. its motion trajectory. Using the hand positions H_T and H_{T+1} at times T and T+1, predict a straight line l:

The equation of the line l is:

x = ((t - (T+1)) / (T - (T+1))) · x_T + ((t - T) / ((T+1) - T)) · x_{T+1}
y = ((t - (T+1)) / (T - (T+1))) · y_T + ((t - T) / ((T+1) - T)) · y_{T+1}
z = ((t - (T+1)) / (T - (T+1))) · z_T + ((t - T) / ((T+1) - T)) · z_{T+1}

for t = 0, 1, 2, …, n, where n is the number of frames to be predicted, H_T = (x_T, y_T, z_T), and H_{T+1} = (x_{T+1}, y_{T+1}, z_{T+1}).

Step 4.

Preliminarily select the candidate object set. Set the selection threshold R, obtain the intersection point o of the line l with the tabletop plane Z, and compute in turn the distance r_i from each object to o; every object with r_i < R is a candidate and is added to the candidate set C1. Here M_i is the coordinate of object i and N is the number of objects remaining on the tabletop.

Step 5.

Finalize the candidate object set. For each object in the set C1, compute the cosine of the angle between the vector formed with the hand and the normal vector of the hand, i.e. cos<n, H_{T+1} - M_i>. Every object with cos<n, H_{T+1} - M_i> > 0 is a candidate and is added to the set C2. This step excludes candidates that lie behind the hand: when cos<n, H_{T+1} - M_i> > 0 the candidate is in front of the hand, and when cos<n, H_{T+1} - M_i> < 0 it is behind the hand. Here n is the normal vector of the hand.

Step 6. Display the candidates: highlight every object in the set C2.

Step 7.

Perform collision detection on the candidates. Test each object in C2 in turn; if a collision is detected, the process ends; otherwise set T = T + 1 and go to Step 3.

Technical features of the present invention not described here can be realized with the prior art and are not repeated. Of course, the above description is not a limitation of the present invention, nor is the invention limited to the example above; changes, modifications, additions, or substitutions made by those of ordinary skill in the art within the essential scope of the invention also fall within its protection scope.

Claims (2)

1. A method for selecting scene objects with a three-dimensional virtual gesture, comprising the steps of:
(1) establishing a three-dimensional gesture model library;
characterized by further comprising the steps of:
(2) determining the plane equation of the tabletop on which the objects are placed;
(3) initializing the time T = 0;
(4) predicting the motion trajectory of the three-dimensional gesture: using the positions H_T and H_{T+1} of the three-dimensional gesture at times T and T+1, predicting a straight line l,
the equation of the line l being:
x = ((t - (T+1)) / (T - (T+1))) · x_T + ((t - T) / ((T+1) - T)) · x_{T+1}
y = ((t - (T+1)) / (T - (T+1))) · y_T + ((t - T) / ((T+1) - T)) · y_{T+1}
z = ((t - (T+1)) / (T - (T+1))) · z_T + ((t - T) / ((T+1) - T)) · z_{T+1}
for t = 0, 1, 2, …, n, where n is the number of frames to be predicted, H_T = (x_T, y_T, z_T), and H_{T+1} = (x_{T+1}, y_{T+1}, z_{T+1});
(5) preliminarily selecting the candidate object set C1:
(5.1) obtaining the intersection point o of the line l with the tabletop plane Z;
(5.2) setting the selection threshold R, where M_i is the coordinate of object i and N is the number of objects remaining on the tabletop;
(5.3) computing in turn the distance r_i from each object to the intersection point o, every object with r_i < R being a candidate added to the candidate set C1;
(6) finalizing the candidate object set C2:
(6.1) computing in turn, for each candidate in C1, the cosine of the angle between the vector formed by the current position of the three-dimensional gesture and the candidate and the normal vector of the hand, i.e. cos<n, H_{T+1} - M_i>, where M_i is the coordinate of object i and n is the normal vector of the hand, the vector perpendicular to the palm;
(6.2) judging whether cos<n, H_{T+1} - M_i> is greater than 0, and if so, adding the object as a candidate to the candidate set C2, this step excluding candidates that lie behind the three-dimensional gesture;
(7) displaying the candidates by highlighting every object in the set C2;
(8) performing collision detection on the candidates: testing each object in the set C2 in turn; if a collision is detected, ending; otherwise setting T = T + 1, T being the time, and returning to step (4).
2. The method for selecting scene objects with a three-dimensional virtual gesture according to claim 1, characterized in that step (2) comprises the steps of:
(2.1) in the three-dimensional scene, the three-dimensional coordinates of the lower-left corner point A of the tabletop being known, as well as the vectors along the two edges meeting at that corner, i.e. two intersecting vectors lying in the tabletop;
(2.2) obtaining from the point A and the edge vectors the plane equation of the tabletop: Z = f(x, y, z).
CN201310445376.9A 2013-09-23 2013-09-23 Method for selecting scene objects with a three-dimensional virtual gesture Expired - Fee Related CN103472923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310445376.9A CN103472923B (en) 2013-09-23 2013-09-23 Method for selecting scene objects with a three-dimensional virtual gesture


Publications (2)

Publication Number Publication Date
CN103472923A CN103472923A (en) 2013-12-25
CN103472923B true CN103472923B (en) 2016-04-06

Family

ID=49797806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310445376.9A Expired - Fee Related CN103472923B (en) Method for selecting scene objects with a three-dimensional virtual gesture

Country Status (1)

Country Link
CN (1) CN103472923B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105929947B (en) * 2016-04-15 2020-07-28 济南大学 A Human-Computer Interaction Method Based on Scene Situational Awareness
CN109271023B (en) * 2018-08-29 2020-09-01 浙江大学 Selection method based on three-dimensional object outline free-hand gesture action expression
CN110837326B (en) * 2019-10-24 2021-08-10 浙江大学 Three-dimensional target selection method based on object attribute progressive expression
CN114366295B (en) * 2021-12-31 2023-07-25 杭州脉流科技有限公司 Microcatheter path generation method, shaping method of shaping needle, computer device, readable storage medium, and program product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101952818A (en) * 2007-09-14 2011-01-19 智慧投资控股67有限责任公司 Processing based on the user interactions of attitude
CN102270275A (en) * 2010-06-04 2011-12-07 汤姆森特许公司 Method for selection of an object in a virtual environment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7227526B2 (en) * 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US7755608B2 (en) * 2004-01-23 2010-07-13 Hewlett-Packard Development Company, L.P. Systems and methods of interfacing with a machine

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101952818A (en) * 2007-09-14 2011-01-19 智慧投资控股67有限责任公司 Processing based on the user interactions of attitude
CN102270275A (en) * 2010-06-04 2011-12-07 汤姆森特许公司 Method for selection of an object in a virtual environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Arm-Pointer: 3D Pointing Interface for Real-World Interaction; Eiichi Hosoya et al.; Computer Vision in Human-Computer Interaction; 2004-05-13; pp. 72-82 *

Also Published As

Publication number Publication date
CN103472923A (en) 2013-12-25

Similar Documents

Publication Publication Date Title
Vanacken et al. Exploring the effects of environment density and target visibility on object selection in 3D virtual environments
Hackenberg et al. Lightweight palm and finger tracking for real-time 3D gesture control
EP2681649B1 (en) System and method for navigating a 3-d environment using a multi-input interface
US9436369B2 (en) Touch interface for precise rotation of an object
Millette et al. DualCAD: integrating augmented reality with a desktop GUI and smartphone interaction
Shah et al. A survey on human computer interaction mechanism using finger tracking
JP2014511534A5 (en)
CN102236414A (en) Picture operation method and system in three-dimensional display space
US12148081B2 (en) Immersive analysis environment for human motion data
CN103472923B (en) A kind of three-dimensional virtual gesture selects the method for object scene
US9432652B2 (en) Information processing apparatus, stereoscopic display method, and program
CN103488292A (en) Three-dimensional application icon control method and device
Renner et al. A path-based attention guiding technique for assembly environments with target occlusions
CN103033145B (en) For identifying the method and system of the shape of multiple object
CN107463261A (en) Three-dimensional interaction system and method
Cui et al. Feasibility study on free hand geometric modelling using leap motion in VRML/X3D
US20190050133A1 (en) Techniques for transitioning from a first navigation scheme to a second navigation scheme
Jung et al. Boosthand: Distance-free object manipulation system with switchable non-linear mapping for augmented reality classrooms
CN114931746A (en) Interaction method, device and medium for 3D game based on pen type and touch screen interaction
Choi et al. Interactive display robot: projector robot with natural user interface
CN114175103A (en) Virtual Reality Simulation Using Surface Tracking
CN104820584A (en) Natural control 3D gesture interface and system facing hierarchical information
CN104050721B (en) The method and device of smooth control three dimensional object
KR102392675B1 (en) Interfacing method for 3d sketch and apparatus thereof
Wither et al. Evaluating techniques for interaction at a distance

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160406