CN103472923A - Method for selecting scene objects with three-dimensional virtual gesture - Google Patents
- Publication number
- CN103472923A CN103472923A CN2013104453769A CN201310445376A CN103472923A CN 103472923 A CN103472923 A CN 103472923A CN 2013104453769 A CN2013104453769 A CN 2013104453769A CN 201310445376 A CN201310445376 A CN 201310445376A CN 103472923 A CN103472923 A CN 103472923A
- Authority
- CN
- China
- Prior art keywords
- candidate
- gesture
- desktop
- dimensional
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a method for selecting scene objects with a three-dimensional virtual gesture, comprising the following steps: (1) a three-dimensional gesture model library is established; the method is characterized by further comprising: (2) the desktop equation of the surface on which the objects are placed is determined; (3) the initial time is set to T = 0; (4) the movement track of the three-dimensional gesture is predicted, a straight line l being predicted from the gesture positions H_T and H_{T+1} at times T and T+1; (5) a preliminary candidate object set C1 is selected; (6) the final candidate object set C2 is determined; (7) the candidates are displayed, the candidate objects in set C2 being highlighted; (8) collision detection is applied to the candidates, each candidate object in set C2 being tested in turn; if a collision test succeeds, the procedure ends; otherwise T is set to T+1 (T being the time) and the method returns to step (4). With this method, objects in a scene are selected accurately with a three-dimensional gesture.
Description
Technical field
The present invention relates to the field of three-dimensional gestures, and specifically to a method for selecting scene objects with a three-dimensional gesture.
Background technology
Selecting an object by pointing is the simplest such action in reality: given a pointing direction, the chosen object can be determined by an intersection test. In immersive environments the pointing direction is usually formed from the viewpoint position and the virtual hand, while in a desktop environment the mouse position suffices. The technique can be traced back to MIT's "put-that-there" interface of 1980, in which voice commands confirmed selection and manipulation. A large number of selection techniques have since been developed; they differ in two design variables: how the pointing direction is formed, and the scope of each selection (which determines how many objects may be selected at once).
Because pointing requires essentially no physical motion from the user, selection alone already yields good user performance. However, owing to its dimensional limits, this pointing technique is ill-suited to object positioning: the user can only position objects on an arc around the viewpoint and cannot move an object along the pointing direction.
Ray-casting is the simplest pointing-based selection technique. The current pointing direction is determined by a vector p attached to the virtual hand and the hand position h: p(a) = h + a·p, 0 < a < +∞. In theory this technique can select arbitrarily distant objects, but as distance grows and object size shrinks, selection becomes increasingly difficult; meanwhile, hardware jitter gradually raises the error rate.
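The ray-casting formulation p(a) = h + a·p can be sketched as follows. This is a minimal illustration, not the patent's own code; the function name, the sphere-proxy objects, and the hit radius are assumptions for the example.

```python
import numpy as np

def ray_cast_hits(hand_pos, direction, objects, radius=0.1):
    """Return names of objects whose centers pass within `radius` of the
    ray p(a) = hand_pos + a * direction, a > 0 (point-to-ray distance)."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    hits = []
    for name, center in objects.items():
        v = np.asarray(center, float) - np.asarray(hand_pos, float)
        a = float(np.dot(v, d))        # ray parameter of the closest point
        if a <= 0:
            continue                   # object lies behind the hand
        dist = np.linalg.norm(v - a * d)
        if dist < radius:
            hits.append((a, name))
    return [name for _, name in sorted(hits)]  # nearest object first
```

Note how the test degrades exactly as the background describes: a distant small object needs `dist < radius` to hold over a long ray, so any jitter in `direction` moves the closest point far off target.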
The "searchlight" (Spotlight) technique replaces the ray with a cone starting at the origin of the selection vector: any object that falls inside the cone can be selected, as if it were illuminated. This is especially helpful for small objects, but several objects may fall inside the cone at once, which can instead increase misoperation.
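The cone-containment test behind Spotlight can be sketched as a simple angle check; the function name and the cone half-angle are hypothetical, since the patent's background does not specify them.

```python
import numpy as np

def in_spotlight(apex, axis, obj_center, half_angle_deg=15.0):
    """Spotlight test: is the object inside the selection cone that starts
    at the selection vector's origin? (half-angle is an assumed parameter)"""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    v = np.asarray(obj_center, float) - np.asarray(apex, float)
    cos_a = float(np.dot(v, axis)) / np.linalg.norm(v)
    # Inside the cone iff the angle to the axis is below the half-angle.
    return cos_a > np.cos(np.radians(half_angle_deg))
```

Because the test passes for every object inside the cone, the ambiguity the text mentions arises whenever two object centers satisfy it simultaneously.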
Aperture is a refinement of Spotlight that lets the user interactively control the projection scope while choosing objects through the cone. The cone's projection direction is determined by trackers on the head and the virtual hand, and the user adjusts the cone size by interactively varying the distance between the virtual hand and the head, thereby resolving the ambiguity of selection.
The idea of Image-Plane comes from a basic principle of graphics: every object we see has a perspective projection on the back plane of the view frustum, so specifying a point on the projection plane suffices to select it. Like the preceding techniques, it cannot change the distance of the selected object from the operator. Fishing-Reel compensates for this shortcoming by adding an extra input device, such as a button or slider on the tracker, so that the user can interactively move the selected object along the selection vector; but because the object's six degrees of freedom are controlled separately, user performance still suffers.
Summary of the invention
The technical problem the present invention solves is to provide a method for selecting scene objects with a three-dimensional gesture, so that the three-dimensional gesture accurately selects objects in the scene.
The present invention achieves this goal with the following technical scheme:
A method for selecting scene objects with a three-dimensional gesture comprises the steps of:
(1) establishing a three-dimensional gesture model library;
characterized in that the method also comprises the steps of:
(2) determining the desktop equation of the surface on which the objects are placed;
(3) initializing the time to T = 0;
(4) predicting the movement track of the three-dimensional gesture: from the gesture positions H_T and H_{T+1} at times T and T+1, predict a straight line l with the equation
l(s) = H_T + s·(H_{T+1} − H_T), T = 0, 1, 2, …, n,
where n is the number of predicted frames, H_T = (x_T, y_T, z_T), and H_{T+1} = (x_{T+1}, y_{T+1}, z_{T+1});
(5) preliminarily selecting the candidate object set C1;
(6) finally determining the candidate object set C2;
(7) displaying the candidates: the candidate objects in set C2 are highlighted;
(8) applying collision detection to the candidates: each candidate object in set C2 is tested for collision in turn; if a collision test succeeds, the procedure ends; otherwise set T = T+1 (T being the time) and go to step (4).
As a further restriction of the technical scheme, step (2) comprises:
(2.1) the three-dimensional coordinates of the desktop's lower-left corner point A in the three-dimensional scene are known, as are the vectors along the two edges meeting at that corner, i.e. two intersecting vectors on the desktop;
As a further restriction of the technical scheme, step (5) comprises:
(5.1) setting the selection threshold R, where M_i is the coordinate of object i and N is the number of objects on the desktop;
(5.2) intersecting the straight line l with the desktop Z to obtain the intersection point o;
(5.3) computing in turn the distance r_i from each object to the intersection point o; every object with r_i < R is a candidate object and is added to the candidate set C1.
As a further restriction of the technical scheme, step (6) comprises:
(6.1) computing in turn, for each object in the candidate set C1, the cosine cos⟨n, H_{T+1} − M_i⟩ of the angle between the vector H_{T+1} − M_i and the normal vector n of the hand;
(6.2) judging whether cos⟨n, H_{T+1} − M_i⟩ is greater than 0; if so, the object is a candidate object and is added to the candidate set C2. This step excludes candidate objects lying behind the hand.
Compared with the prior art, the present invention has the following advantages and positive effects. 1. The selection techniques discussed in the background require extra equipment, such as a button or slider added to the tracker; the present invention does not. 2. The techniques in the background select only according to object position, whereas the present invention selects according to the state of the gesture, updating the relative positions of the hand and the scene objects in real time. 3. The techniques in the background are essentially remote selection, so the probability of choosing an object is affected by its distance and size: small objects in the scene are hard to pick. The present invention selects at close range, choosing an object once the hand is near it, so small objects are no longer difficult to select and the selection accuracy is high.
Description of the drawings
Fig. 1 is a processing flowchart of the present invention.
Embodiment
The present invention is further described in detail below with reference to the accompanying drawing and a preferred embodiment.
Referring to Fig. 1: the three-dimensional gesture is a computer-generated three-dimensional gesture model, consistent with the user's actual gesture, obtained by computer-vision-based tracking or reconstruction methods. Since this is prior art, the step of establishing the three-dimensional gesture model library is omitted here. The method of selecting scene objects with the three-dimensional gesture is as follows:
Step 1. Determine the desktop equation of the surface on which the objects are placed. The three-dimensional coordinates of the desktop's lower-left corner point A in the scene are known, as are the vectors along the two edges meeting at that corner, i.e. two intersecting vectors on the desktop. From point A and these vectors, the plane equation of the desktop is obtained: Z = f(x, y, z).
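A plane through a corner point and two edge vectors is fixed by the cross product of the vectors. The sketch below shows this construction; the function name and the (n, d) representation of the plane are choices made for the example, not notation from the patent.

```python
import numpy as np

def desktop_plane(A, u, v):
    """Plane through corner point A spanned by edge vectors u and v.
    Returns (n, d) such that every point x on the desktop satisfies n . x = d."""
    A = np.asarray(A, float)
    n = np.cross(np.asarray(u, float), np.asarray(v, float))
    n = n / np.linalg.norm(n)          # unit normal of the desktop
    return n, float(np.dot(n, A))      # d = n . A, since A lies on the plane
```

The implicit form n·x = d is convenient for the later line-plane intersection, where the same (n, d) pair is reused.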
Step 2. Initialize the time T = 0.
Step 3. Predict the movement curve of the three-dimensional hand, i.e. its movement track. From the hand positions H_T and H_{T+1} at times T and T+1, predict a straight line l with the equation
l(s) = H_T + s·(H_{T+1} − H_T), T = 0, 1, 2, …, n,
where n is the number of predicted frames, H_T = (x_T, y_T, z_T), and H_{T+1} = (x_{T+1}, y_{T+1}, z_{T+1}).
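The predicted line through the two hand positions can be written as a one-line parametric function. This is a sketch of the reconstructed equation l(s) = H_T + s·(H_{T+1} − H_T); the function name is hypothetical.

```python
import numpy as np

def predicted_line(H_T, H_T1):
    """Parametric line l(s) = H_T + s * (H_T1 - H_T) through the hand
    positions at times T and T+1; s > 1 extrapolates the hand's motion."""
    H_T = np.asarray(H_T, float)
    H_T1 = np.asarray(H_T1, float)
    return lambda s: H_T + s * (H_T1 - H_T)
```

For example, extrapolating a hand that moved from (0, 0, 2) to (0, 0, 1) with s = 2 lands on the desktop plane z = 0.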
Step 4. Preliminarily select the candidate object set. Set the selection threshold R, intersect the straight line l with the desktop Z to obtain the intersection point o, and compute in turn the distance r_i from each object to o. Every object with r_i < R is a candidate object and is added to the candidate set C1. Here M_i is the coordinate of object i and N is the number of objects on the desktop.
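Step 4 can be sketched end to end: intersect the predicted line with the desktop plane (written as n·x = d) and keep objects within distance R of the intersection o. The function name, the (n, d) plane form, and the dict of object coordinates are assumptions of this sketch; the source's own threshold formula for R is not reproduced in the text, so R is taken as a given parameter.

```python
import numpy as np

def preliminary_candidates(H_T, H_T1, n, d, objects, R):
    """Intersect the line through H_T and H_T1 with the desktop plane
    n . x = d, then keep objects whose distance r_i to the intersection
    point o is below the threshold R (the set C1)."""
    H_T = np.asarray(H_T, float)
    H_T1 = np.asarray(H_T1, float)
    direction = H_T1 - H_T
    denom = float(np.dot(n, direction))
    if abs(denom) < 1e-9:
        return None, []                # motion parallel to the desktop
    s = (d - float(np.dot(n, H_T))) / denom
    o = H_T + s * direction            # intersection point with the desktop
    C1 = [i for i, M in objects.items()
          if np.linalg.norm(np.asarray(M, float) - o) < R]
    return o, C1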
Step 5. Finally determine the candidate object set. For each object in the candidate set C1, compute in turn the cosine cos⟨n, H_{T+1} − M_i⟩ of the angle between the vector H_{T+1} − M_i and the normal vector n of the hand. Every object with cos⟨n, H_{T+1} − M_i⟩ > 0 is a candidate object and is added to the candidate set C2. This step excludes candidate objects lying behind the hand: when cos⟨n, H_{T+1} − M_i⟩ > 0 the candidate object is in front of the hand, and when cos⟨n, H_{T+1} − M_i⟩ < 0 it is behind the hand. Here n is the normal vector of the hand.
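The cosine filter of Step 5 can be sketched as follows; only the sign of cos⟨n, H_{T+1} − M_i⟩ matters, so the normalization affects nothing but numerical clarity. The function name is an assumption of this sketch.

```python
import numpy as np

def final_candidates(C1, objects, H_T1, hand_normal):
    """Keep only the candidates in front of the palm: those with
    cos<n, H_{T+1} - M_i> > 0, where n is the hand's normal vector.
    Objects behind the hand are excluded (the set C2)."""
    H_T1 = np.asarray(H_T1, float)
    n = np.asarray(hand_normal, float)
    C2 = []
    for i in C1:
        v = H_T1 - np.asarray(objects[i], float)   # vector from object to hand
        c = float(np.dot(n, v)) / (np.linalg.norm(n) * np.linalg.norm(v))
        if c > 0:
            C2.append(i)
    return C2
```

With the palm normal pointing toward the desktop, objects on the far side of the hand give a negative cosine and drop out of C2, matching the front/back distinction stated in the text.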
Step 6. Display the candidates: the candidate objects in set C2 are highlighted.
Step 7. Apply collision detection to the candidates. Each candidate object in C2 is tested for collision in turn; if a collision test succeeds, the procedure ends; otherwise set T = T+1 and return to Step 3.
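The final loop of Step 7 can be sketched with a sphere-sphere overlap as a stand-in for collision detection, since the patent does not specify a particular collision algorithm; the function name and both radii are assumptions of this sketch.

```python
import numpy as np

def collision_test(hand_pos, candidates, objects, hand_r=0.05, obj_r=0.05):
    """Test candidates in C2 in turn; a sphere-sphere overlap between the
    hand and an object counts as a successful selection (a simplified
    proxy for the unspecified collision detection)."""
    hand_pos = np.asarray(hand_pos, float)
    for i in candidates:
        gap = np.linalg.norm(hand_pos - np.asarray(objects[i], float))
        if gap < hand_r + obj_r:
            return i                   # selection succeeds; the loop ends
    return None                        # no hit: advance T and re-predict
```

Returning `None` corresponds to the "otherwise" branch: the caller increments T and repeats the prediction from Step 3.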
Technical features not described in the present invention can be realized by or with existing techniques and are not repeated here. The above description is, of course, not a limitation of the present invention, which is not restricted to the examples given; variations, modifications, additions, or substitutions made by those skilled in the art within the essential scope of the present invention also belong to its scope of protection.
Claims (4)
1. A method for selecting scene objects with a three-dimensional gesture, comprising the steps of:
(1) establishing a three-dimensional gesture model library;
characterized in that the method also comprises the steps of:
(2) determining the desktop equation of the surface on which the objects are placed;
(3) initializing the time to T = 0;
(4) predicting the movement track of the three-dimensional gesture: from the gesture positions H_T and H_{T+1} at times T and T+1, predict a straight line l with the equation
l(s) = H_T + s·(H_{T+1} − H_T), T = 0, 1, 2, …, n,
where n is the number of predicted frames, H_T = (x_T, y_T, z_T), and H_{T+1} = (x_{T+1}, y_{T+1}, z_{T+1});
(5) preliminarily selecting the candidate object set C1;
(6) finally determining the candidate object set C2;
(7) displaying the candidates: the candidate objects in set C2 are highlighted;
(8) applying collision detection to the candidates: each candidate object in set C2 is tested for collision in turn; if a collision test succeeds, the procedure ends; otherwise set T = T+1 (T being the time) and go to step (4).
2. The method for selecting scene objects with a three-dimensional gesture according to claim 1, characterized in that step (2) comprises:
(2.1) selecting the known three-dimensional coordinates of the desktop's lower-left corner point A in the three-dimensional scene, together with the known vectors along the two edges meeting at that corner, i.e. two intersecting vectors on the desktop;
3. The method for selecting scene objects with a three-dimensional gesture according to claim 1, characterized in that step (5) comprises:
(5.1) setting the selection threshold R, where M_i is the coordinate of object i and N is the number of objects on the desktop;
(5.2) intersecting the straight line l with the desktop Z to obtain the intersection point o;
(5.3) computing in turn the distance r_i from each object to the intersection point o; every object with r_i < R is a candidate object and is added to the candidate set C1.
4. The method for selecting scene objects with a three-dimensional gesture according to claim 1, characterized in that step (6) comprises:
(6.1) computing in turn, for each candidate object in the set C1, the cosine cos⟨n, H_{T+1} − M_i⟩ of the angle between the vector formed by the current gesture position and the candidate object and the normal vector n of the hand, the normal vector of the hand being the vector perpendicular to the palm plane;
(6.2) judging whether cos⟨n, H_{T+1} − M_i⟩ is greater than 0; if so, the object is a candidate object and is added to the candidate set C2; this step excludes candidate objects lying behind the three-dimensional gesture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310445376.9A CN103472923B (en) | 2013-09-23 | 2013-09-23 | A kind of three-dimensional virtual gesture selects the method for object scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103472923A true CN103472923A (en) | 2013-12-25 |
CN103472923B CN103472923B (en) | 2016-04-06 |
Family
ID=49797806
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310445376.9A Active CN103472923B (en) | 2013-09-23 | 2013-09-23 | A kind of three-dimensional virtual gesture selects the method for object scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103472923B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105929947A (en) * | 2016-04-15 | 2016-09-07 | 济南大学 | Scene situation perception based man-machine interaction method |
CN109271023A (en) * | 2018-08-29 | 2019-01-25 | 浙江大学 | A kind of selection method based on three dimensional object appearance profile free hand gestures manual expression |
CN110837326A (en) * | 2019-10-24 | 2020-02-25 | 浙江大学 | Three-dimensional target selection method based on object attribute progressive expression |
CN114366295A (en) * | 2021-12-31 | 2022-04-19 | 杭州脉流科技有限公司 | Microcatheter path generation method, shaping method of shaped needle, computer device, readable storage medium and program product |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050166163A1 (en) * | 2004-01-23 | 2005-07-28 | Chang Nelson L.A. | Systems and methods of interfacing with a machine |
US20080030460A1 (en) * | 2000-07-24 | 2008-02-07 | Gesturetek, Inc. | Video-based image control system |
CN101952818A (en) * | 2007-09-14 | 2011-01-19 | 智慧投资控股67有限责任公司 | Processing based on the user interactions of attitude |
CN102270275A (en) * | 2010-06-04 | 2011-12-07 | 汤姆森特许公司 | Method for selection of an object in a virtual environment |
Non-Patent Citations (1)
Title |
---|
EIICHI HOSOYA等: "Arm-Pointer:3D Pointing Interface for Real-World Interaction", 《COMPUTER VISION IN HUMAN-COMPUTER INTERACTION》, 13 May 2004 (2004-05-13), pages 72 - 82, XP019006110 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11875012B2 (en) | Throwable interface for augmented reality and virtual reality environments | |
Hackenberg et al. | Lightweight palm and finger tracking for real-time 3D gesture control | |
JP6839778B2 (en) | Gesture detection methods and devices on the user-referenced spatial coordinate system | |
Rabbi et al. | A survey on augmented reality challenges and tracking | |
US20170153713A1 (en) | Head mounted display device and control method | |
US10120526B2 (en) | Volumetric image display device and method of providing user interface using visual indicator | |
JP2019519387A (en) | Visualization of Augmented Reality Robot System | |
US9361665B2 (en) | Methods and systems for viewing a three-dimensional (3D) virtual object | |
CN103793060A (en) | User interaction system and method | |
US20130215109A1 (en) | Designating Real World Locations for Virtual World Control | |
CN104317391A (en) | Stereoscopic vision-based three-dimensional palm posture recognition interactive method and system | |
CN102609942A (en) | Mobile camera localization using depth maps | |
CN102236414A (en) | Picture operation method and system in three-dimensional display space | |
CN103472923A (en) | Method for selecting scene objects with three-dimensional virtual gesture | |
US20180113596A1 (en) | Interface for positioning an object in three-dimensional graphical space | |
CN103677240A (en) | Virtual touch interaction method and equipment | |
Renner et al. | A path-based attention guiding technique for assembly environments with target occlusions | |
CN115335894A (en) | System and method for virtual and augmented reality | |
US9870119B2 (en) | Computing apparatus and method for providing three-dimensional (3D) interaction | |
US20220379473A1 (en) | Trajectory plan generation device, trajectory plan generation method, and trajectory plan generation program | |
JP2004265222A (en) | Interface method, system, and program | |
KR101338958B1 (en) | system and method for moving virtual object tridimentionally in multi touchable terminal | |
Wang et al. | Disocclusion headlight for selection assistance in vr | |
Rodriguez et al. | Robust vision-based hand tracking using single camera for ubiquitous 3D gesture interaction | |
Lee et al. | Tunnelslice: Freehand subspace acquisition using an egocentric tunnel for wearable augmented reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |