CN103472923B - Method for selecting a scene object with a three-dimensional virtual gesture - Google Patents

Method for selecting a scene object with a three-dimensional virtual gesture

Info

Publication number
CN103472923B
CN103472923B (application CN201310445376.9A)
Authority
CN
China
Prior art keywords
candidate
desktop
gesture
collection
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310445376.9A
Other languages
Chinese (zh)
Other versions
CN103472923A (en)
Inventor
冯志全 (Feng Zhiquan)
张廷芳 (Zhang Tingfang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN201310445376.9A priority Critical patent/CN103472923B/en
Publication of CN103472923A publication Critical patent/CN103472923A/en
Application granted granted Critical
Publication of CN103472923B publication Critical patent/CN103472923B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for selecting a scene object with a three-dimensional virtual gesture, comprising the steps of: (1) building a three-dimensional gesture model library; characterized in that the method further comprises the steps of: (2) determining the plane equation of the desktop on which the objects are placed; (3) initializing the run time T = 0; (4) predicting the movement trajectory of the three-dimensional gesture: using the gesture positions H_T and H_{T+1} at times T and T+1 to predict the straight line l; (5) preliminarily selecting the candidate object set C1; (6) finally determining the candidate object set C2; (7) displaying the objects to be selected by highlighting the candidates in set C2; (8) performing collision detection on the objects to be selected: testing each candidate in C2 in turn; if a collision is detected, terminating; otherwise setting T = T+1 (T is the time step) and returning to step (4). The invention enables a three-dimensional gesture to accurately select objects in a scene.

Description

Method for selecting a scene object with a three-dimensional virtual gesture
Technical field
The present invention relates to the field of three-dimensional virtual gestures, and in particular to a method for selecting a scene object with a three-dimensional virtual gesture.
Background technology
Selecting an object by pointing is one of the simplest real-world actions: given a selection direction, the chosen object can be determined by an intersection test. In immersive environments the selection direction is usually formed from the viewpoint and the position of the virtual hand, while in a desktop environment only the position of the mouse pointer is needed. The technique can be traced back to MIT's "Put-That-There" interface of 1980, in which selections and manipulations were confirmed by voice. Many selection techniques have since been developed; they differ in two design variables: how the selection direction is formed, and the scope of selection (how many objects may be selected at a time).
Because pointing requires essentially no physical action from the user beyond the selection itself, it yields good user performance. However, the technique's limited dimensionality makes it unsuitable for positioning objects: the user can only place objects on an arc around the viewpoint and cannot move an object along the selection direction.
Ray-Casting is the simplest pointing-based selection technique. The current selection direction is determined by a vector p attached to the virtual hand and the hand position h: p(a) = h + a·p, 0 < a < +∞. In theory this technique can select arbitrarily distant objects, but as distance grows and object size shrinks, selection becomes increasingly difficult; hardware jitter also gradually raises the error rate.
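As an illustration, the Ray-Casting rule p(a) = h + a·p can be sketched as follows. This is a minimal sketch; representing each object by a bounding sphere is an assumption made for the example, not part of the technique as described.

```python
import numpy as np

def ray_cast_select(h, p, objects, radii):
    """Return the index of the nearest object whose bounding sphere the
    ray p(a) = h + a*p (a > 0) intersects, or None if no object is hit.
    h: hand position; p: pointing vector; objects: sphere centers."""
    p = p / np.linalg.norm(p)          # normalize the pointing direction
    best, best_a = None, np.inf
    for i, (m, r) in enumerate(zip(objects, radii)):
        v = m - h
        a = np.dot(v, p)               # distance along the ray to the closest point
        if a <= 0:
            continue                   # object lies behind the hand
        d = np.linalg.norm(v - a * p)  # perpendicular distance from center to ray
        if d < r and a < best_a:       # hit, and nearer than any previous hit
            best, best_a = i, a
    return best
```

As the background notes, the same perpendicular-distance test explains why small or distant objects are hard to hit: the admissible miss distance shrinks with the object's radius.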
" searchlight " selecting technology (Spotlight) replaces original ray with one from the cone selected vector starting point, as long as there have object to drop in this cone to be then selected, be similar to object and be illuminated the same.This technology is selected especially favourable for small items, but also there will be multiple object drops on situation in cone simultaneously, can increase maloperation on the contrary.
Aperture is an improvement on Spotlight that lets the user interactively control the projection scope of the cone while choosing an object. The projection direction of the cone is determined by trackers on the head and the virtual hand, and the user can adjust the cone size by interactively changing the distance between hand and head, thereby removing selection ambiguity.
The idea of Image-Plane comes from a basic principle of graphics: every object we see is a perspective projection onto the back plane of the view frustum, so it suffices to specify a point on the projection plane. Like the techniques above, it cannot change an object's distance from the operator. Fishing-Reel compensates for this shortcoming with an extra input device, such as a button or slider added to the tracker, letting the user interactively move the selected object along the selection vector; but because the object's six degrees of freedom are controlled separately, user performance is still reduced.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method for selecting a scene object with a three-dimensional virtual gesture, enabling a three-dimensional gesture to accurately select objects in a scene.
The present invention achieves this object with the following technical solution:
A method for selecting a scene object with a three-dimensional virtual gesture comprises the steps of:
(1) building a three-dimensional gesture model library;
characterized in that the method further comprises the steps of:
(2) determining the plane equation of the desktop on which the objects are placed;
(3) initializing the run time T = 0;
(4) predicting the movement trajectory of the three-dimensional gesture: using the gesture position H_T at time T and H_{T+1} at time T+1 to predict the straight line l;
The equation of the straight line l is:
x(t) = (t − (T+1))/(T − (T+1)) · x_T + (t − T)/((T+1) − T) · x_{T+1}
y(t) = (t − (T+1))/(T − (T+1)) · y_T + (t − T)/((T+1) − T) · y_{T+1}
z(t) = (t − (T+1))/(T − (T+1)) · z_T + (t − T)/((T+1) − T) · z_{T+1},
t = 0, 1, 2, …, n, where n is the number of frames to predict, H_T = (x_T, y_T, z_T), and H_{T+1} = (x_{T+1}, y_{T+1}, z_{T+1});
(5) preliminarily selecting the candidate object set C1;
(5.1) obtaining the intersection point o of the straight line l and the desktop Z;
(5.2) setting the selection threshold R, where M_i is the coordinate of object i and N is the number of objects remaining on the desktop;
(5.3) computing in turn the distance r_i from each object to the intersection point o: objects with r_i < R are candidates and are added to the candidate set C1;
(6) finally determining the candidate object set C2;
(6.1) for each object in the set C1, computing the cosine of the angle between the vector formed by the current gesture position and the object and the normal vector of the hand, i.e. cos<n, H_{T+1} − M_i>, where M_i is the coordinate of object i and n is the normal vector of the hand, the vector perpendicular to the palm;
(6.2) judging whether cos<n, H_{T+1} − M_i> is greater than 0; if so, the object is a candidate and is added to the candidate set C2; this step excludes objects located behind the three-dimensional gesture;
(7) displaying the objects to be selected: highlighting the candidates in set C2;
(8) performing collision detection on the objects to be selected: testing each candidate in C2 in turn; if a collision is detected, terminating; otherwise setting T = T+1 (T is the time step) and going to step (4).
As a further limitation of the technical solution, step (2) comprises the steps of:
(2.1) the three-dimensional coordinate of the desktop's lower-left corner point A in the three-dimensional scene is known, as are the vectors of the two edges meeting at that corner, i.e. two intersecting vectors on the desktop are known;
(2.2) from the point A and the two vectors, obtaining the plane equation of the desktop: Z = f(x, y, z).
Compared with the prior art, the advantages and positive effects of the present invention are: 1. The selection techniques mentioned in the background rely on additional equipment, such as a button or slider added to the tracker, whereas the present invention does not. 2. The techniques mentioned in the background select only by object position, whereas the present invention selects objects according to the state of the gesture, updating the relative positions of the hand and scene objects in real time. 3. The techniques mentioned in the background are essentially remote selection, where the probability of choosing an object is affected by its distance and size, so small objects in the scene are hard to select. The present invention selects at close range, choosing an object only when the hand is near it, so small objects pose no difficulty and selection accuracy is high.
Accompanying drawing explanation
Fig. 1 is the processing flow chart of the present invention.
Embodiment
The present invention is described below in further detail with reference to the accompanying drawing and a preferred embodiment.
Referring to Fig. 1: a three-dimensional virtual gesture is a computer-generated three-dimensional gesture model, consistent with the user's gesture, produced with computer-vision-based tracking or reconstruction methods; as this is prior art, the step of building the three-dimensional gesture model library is omitted here. The method of selecting a scene object with the three-dimensional virtual gesture is as follows:
Step 1. Determine the plane equation of the desktop on which the objects are placed. The three-dimensional coordinate of the desktop's lower-left corner point A in the three-dimensional scene is known, as are the vectors of the two edges meeting at the lower-left corner, i.e. two intersecting vectors on the desktop are known. From the point A and the two vectors, obtain the plane equation of the desktop: Z = f(x, y, z).
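A minimal sketch of Step 1 under the stated assumptions. The function name is illustrative, and carrying the plane in point-normal form n·x = d is one concrete reading of the abstract form Z = f(x, y, z):

```python
import numpy as np

def desktop_plane(A, u, v):
    """Build the desktop plane from the lower-left corner point A and the
    two edge vectors u, v that meet at A.  Returns (n, d) so that the
    plane is the set of points x with n . x = d."""
    n = np.cross(u, v)           # normal vector of the desktop
    n = n / np.linalg.norm(n)    # unit length for stable distance tests
    d = np.dot(n, A)             # plane offset: A lies on the plane
    return n, d
```

The returned pair (n, d) is all the later line-plane intersection step needs.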
Step 2. Initialize the run time T = 0.
Step 3. Predict the movement curve of the three-dimensional hand, i.e. its movement trajectory. Using the hand positions H_T and H_{T+1} at times T and T+1, predict the straight line l:
The equation of the straight line l is:
x(t) = (t − (T+1))/(T − (T+1)) · x_T + (t − T)/((T+1) − T) · x_{T+1}
y(t) = (t − (T+1))/(T − (T+1)) · y_T + (t − T)/((T+1) − T) · y_{T+1}
z(t) = (t − (T+1))/(T − (T+1)) · z_T + (t − T)/((T+1) − T) · z_{T+1},
t = 0, 1, 2, …, n, where n is the number of frames to predict, H_T = (x_T, y_T, z_T), and H_{T+1} = (x_{T+1}, y_{T+1}, z_{T+1}).
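Because the denominators in the line equation are T − (T+1) = −1 and (T+1) − T = 1, the prediction reduces to H(t) = (T+1−t)·H_T + (t−T)·H_{T+1}. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def predict_positions(H_T, H_T1, T, n):
    """Extrapolate the hand position along the straight line through
    H_T (time T) and H_T1 (time T+1) for the next n frames.
    Two-point Lagrange interpolation with a unit time step."""
    H_T = np.asarray(H_T, dtype=float)
    H_T1 = np.asarray(H_T1, dtype=float)
    return [(T + 1 - t) * H_T + (t - T) * H_T1 for t in range(T, T + n + 1)]
```

At t = T this returns H_T and at t = T+1 it returns H_{T+1}; larger t values extend the same straight line forward.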
Step 4. Preliminarily select the candidate object set. Set the selection threshold R and obtain the intersection point o of the straight line l and the desktop Z; compute in turn the distance r_i from each object to o; objects with r_i < R are candidates and are added to the candidate set C1. The threshold R is set herein from M_i, the coordinate of object i, and N, the number of objects remaining on the desktop.
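Step 4 can be sketched as follows. The desktop is taken in point-normal form n·x = d, and the threshold R (whose exact formula is not reproduced in this text) is passed in as a parameter; both are assumptions of the sketch:

```python
import numpy as np

def preliminary_candidates(H_T, H_T1, n_plane, d_plane, objects, R):
    """Intersect the predicted line through H_T, H_T1 with the desktop
    plane n . x = d, then keep the indices of objects within distance R
    of the intersection point o.  Returns (o, C1)."""
    H_T = np.asarray(H_T, dtype=float)
    H_T1 = np.asarray(H_T1, dtype=float)
    direction = H_T1 - H_T
    denom = np.dot(n_plane, direction)
    if abs(denom) < 1e-9:
        return None, []                          # line parallel to the desktop
    t = (d_plane - np.dot(n_plane, H_T)) / denom
    o = H_T + t * direction                      # intersection point o on the desktop
    C1 = [i for i, m in enumerate(objects)
          if np.linalg.norm(np.asarray(m, dtype=float) - o) < R]
    return o, C1
```

Objects whose distance r_i to o reaches or exceeds R are excluded, exactly as in (5.3).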
Step 5. Finally determine the candidate object set. For each object in the set C1, compute in turn the cosine of the angle between the vector from the object to the hand and the normal vector of the hand, i.e. cos<n, H_{T+1} − M_i>. Objects with cos<n, H_{T+1} − M_i> > 0 are candidates and are added to the set C2. This step excludes objects located behind the hand: when cos<n, H_{T+1} − M_i> > 0 the candidate object is in front of the hand, and when cos<n, H_{T+1} − M_i> < 0 it is behind the hand. Here n is the normal vector of the hand.
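Since a cosine has the same sign as the underlying dot product, the Step 5 test cos<n, H_{T+1} − M_i> > 0 reduces to n·(H_{T+1} − M_i) > 0. A minimal sketch, assuming the palm normal n is available from the gesture model:

```python
import numpy as np

def refine_candidates(C1, objects, H_T1, n_hand):
    """Keep only candidates in front of the hand: object i survives when
    n . (H_{T+1} - M_i) > 0, which has the same sign as the cosine of
    the angle between the palm normal and the object-to-hand vector."""
    H_T1 = np.asarray(H_T1, dtype=float)
    return [i for i in C1
            if np.dot(n_hand, H_T1 - np.asarray(objects[i], dtype=float)) > 0]
```

Skipping the division by the vector norms avoids needless work, because only the sign of the cosine matters here.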
Step 6. Display the objects to be selected: highlight the candidate objects in set C2.
Step 7. Perform collision detection on the objects to be selected. Test each candidate in C2 in turn; if a collision is detected, terminate; otherwise set T = T+1 and go to Step 3.
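The text does not fix a particular collision test for Step 7; a common minimal choice is a sphere-sphere test between the hand and each candidate. The sphere proxies, radii, and function name are assumptions of this sketch, not part of the described method:

```python
import numpy as np

def detect_collision(hand_pos, hand_r, candidates, objects, radii):
    """Test a hand sphere (center hand_pos, radius hand_r) against each
    candidate object's sphere in turn; return the first colliding
    candidate's index, or None if no collision occurred this frame."""
    hand_pos = np.asarray(hand_pos, dtype=float)
    for i in candidates:
        m = np.asarray(objects[i], dtype=float)
        if np.linalg.norm(hand_pos - m) < hand_r + radii[i]:
            return i          # collision: this object is selected
    return None
```

A None result corresponds to the "otherwise" branch of Step 7: advance T and re-predict the trajectory.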
Technical features not described herein can be implemented with existing techniques and are not repeated here. Of course, the above description is not a limitation of the present invention, nor is the present invention limited to the above examples; changes, modifications, additions or substitutions made by those skilled in the art within the essential scope of the present invention shall also fall within the protection scope of the present invention.

Claims (2)

1. A method for selecting a scene object with a three-dimensional virtual gesture, comprising the steps of:
(1) building a three-dimensional gesture model library;
characterized in that the method further comprises the steps of:
(2) determining the plane equation of the desktop on which the objects are placed;
(3) initializing the run time T = 0;
(4) predicting the movement trajectory of the three-dimensional gesture: using the gesture position H_T at time T and H_{T+1} at time T+1 to predict the straight line l;
The equation of the straight line l is:
x(t) = (t − (T+1))/(T − (T+1)) · x_T + (t − T)/((T+1) − T) · x_{T+1}
y(t) = (t − (T+1))/(T − (T+1)) · y_T + (t − T)/((T+1) − T) · y_{T+1}
z(t) = (t − (T+1))/(T − (T+1)) · z_T + (t − T)/((T+1) − T) · z_{T+1},
t = 0, 1, 2, …, n, where n is the number of frames to predict, H_T = (x_T, y_T, z_T), and H_{T+1} = (x_{T+1}, y_{T+1}, z_{T+1});
(5) preliminarily selecting the candidate object set C1;
(5.1) obtaining the intersection point o of the straight line l and the desktop Z;
(5.2) setting the selection threshold R, where M_i is the coordinate of object i and N is the number of objects remaining on the desktop;
(5.3) computing in turn the distance r_i from each object to the intersection point o: objects with r_i < R are candidates and are added to the candidate set C1;
(6) finally determining the candidate object set C2;
(6.1) for each object in the set C1, computing the cosine of the angle between the vector formed by the current gesture position and the object and the normal vector of the hand, i.e. cos<n, H_{T+1} − M_i>, where M_i is the coordinate of object i and n is the normal vector of the hand, the vector perpendicular to the palm;
(6.2) judging whether cos<n, H_{T+1} − M_i> is greater than 0; if so, the object is a candidate and is added to the candidate set C2; this step excludes objects located behind the three-dimensional gesture;
(7) displaying the objects to be selected: highlighting the candidates in set C2;
(8) performing collision detection on the objects to be selected: testing each candidate in C2 in turn; if a collision is detected, terminating; otherwise setting T = T+1 (T is the time step) and going to step (4).
2. The method for selecting a scene object with a three-dimensional virtual gesture according to claim 1, characterized in that step (2) comprises the steps of:
(2.1) the three-dimensional coordinate of the desktop's lower-left corner point A in the three-dimensional scene is known, as are the vectors of the two edges meeting at the desktop's lower-left corner, i.e. two intersecting vectors on the desktop are known;
(2.2) from the point A and the two vectors, obtaining the plane equation of the desktop: Z = f(x, y, z).
CN201310445376.9A 2013-09-23 2013-09-23 Method for selecting a scene object with a three-dimensional virtual gesture Expired - Fee Related CN103472923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310445376.9A CN103472923B (en) 2013-09-23 2013-09-23 Method for selecting a scene object with a three-dimensional virtual gesture


Publications (2)

Publication Number Publication Date
CN103472923A CN103472923A (en) 2013-12-25
CN103472923B true CN103472923B (en) 2016-04-06

Family

ID=49797806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310445376.9A Expired - Fee Related CN103472923B (en) 2013-09-23 2013-09-23 Method for selecting a scene object with a three-dimensional virtual gesture

Country Status (1)

Country Link
CN (1) CN103472923B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105929947B (en) * 2016-04-15 2020-07-28 济南大学 Man-machine interaction method based on scene situation perception
CN109271023B (en) * 2018-08-29 2020-09-01 浙江大学 Selection method based on three-dimensional object outline free-hand gesture action expression
CN110837326B (en) * 2019-10-24 2021-08-10 浙江大学 Three-dimensional target selection method based on object attribute progressive expression
CN114366295B (en) * 2021-12-31 2023-07-25 杭州脉流科技有限公司 Microcatheter path generation method, shaping method of shaping needle, computer device, readable storage medium, and program product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101952818A * 2007-09-14 2011-01-19 Intellectual Ventures Holding 67 LLC Gesture-based processing of user interactions
CN102270275A * 2010-06-04 2011-12-07 Thomson Licensing Method for selection of an object in a virtual environment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7227526B2 (en) * 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US7755608B2 (en) * 2004-01-23 2010-07-13 Hewlett-Packard Development Company, L.P. Systems and methods of interfacing with a machine


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Arm-Pointer: 3D Pointing Interface for Real-World Interaction; Eiichi Hosoya et al.; Computer Vision in Human-Computer Interaction; 2004-05-13; pp. 72-82 *

Also Published As

Publication number Publication date
CN103472923A (en) 2013-12-25


Legal Events

C06 / PB01 — Publication
C10 / SE01 — Entry into substantive examination
C14 / GR01 — Grant of patent or utility model (granted publication date: 20160406)
CF01 — Termination of patent right due to non-payment of annual fee