CN103955267A - Double-hand man-machine interaction method in x-ray fluoroscopy augmented reality system - Google Patents

Double-hand man-machine interaction method in x-ray fluoroscopy augmented reality system Download PDF

Info

Publication number
CN103955267A
Authority
CN
China
Prior art keywords
hand
gesture
described step
user
ray fluoroscopy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310569738.5A
Other languages
Chinese (zh)
Other versions
CN103955267B (en)
Inventor
陈一民
黄晨
傅之成
邹一波
张典华
李泽宇
姚杰
董世明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201310569738.5A
Publication of CN103955267A
Application granted
Publication of CN103955267B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention relates to a two-handed human-machine interaction method in a see-through augmented reality system. The method comprises the following operation steps: 1) initialize the system environment, set the system parameters, and scan and check the system devices; 2) apply a correction algorithm to the readings of the magnetic tracking system; 3) apply an adaptive-processing algorithm to the data gloves; 4) measure and correct the offset between the magnetic tracking sensor and the user's eyes; 5) recognize interaction gestures in real time; 6) generate feedback animation according to the user's operations; 7) seamlessly fuse the real scene with the virtual model; 8) the user views the result through the see-through head-mounted display. With this method the user is freed from traditional human-machine interaction devices such as the keyboard, mouse and touch pad and instead interacts directly with both hands, which gives the user an interaction experience much closer to everyday habits.

Description

Two-handed human-machine interaction method in a see-through augmented reality system
Technical field
The present invention relates to a two-handed human-machine interaction method in a see-through augmented reality system.
Background art
Augmented reality (AR) is an important new direction of multimedia technology in the three-dimensional field: a technology that uses three-dimensional information generated by a computer system to enhance the user's perception of the real world. Unlike traditional virtual reality (VR), which aims at complete immersion in a virtual world, AR seeks to seamlessly superimpose computer-generated virtual objects, scenes or system prompts onto the real scene, creating a world that combines the real and the virtual and thereby "augmenting" the real world. Azuma, an authoritative scholar in this field, has given a fairly detailed survey of AR systems and their related technologies and summarized the principal features of an AR system as: 1) tracking and registration; 2) real-virtual fusion; 3) real-time interaction.
In recent years AR research has flourished at home and abroad: the number of related symposia keeps growing, and academic journals and international conferences provide researchers with ample space for exchange. The research emphasis has developed from basic system frameworks and hardware tracking to the current evaluation of interactive performance. A large number of scientific papers on AR appear every year, and several research groups devoted to AR have emerged. AR is thus developing rapidly, and its research and application will be of far-reaching value. Institutions currently engaged in AR research include the University of North Carolina, the Massachusetts Institute of Technology, Columbia University, the University of Rochester, Boeing, the University of Toronto in Canada, and Sony Computer Science Laboratories. Domestic research in this area concentrates mainly on registration technology and system applications, and its scope remains rather limited.
Over the more than twenty years since AR was born, the interest of most researchers has concentrated on two fields: three-dimensional registration and information integration. In recent years, with the rapid development of tracking and sensing technologies, real-time human-machine interaction in AR has become a research hotspot. In daily life people are used to completing all kinds of tasks with both hands, from simple grasping to complex, delicate manipulation. These processes require the two hands to coordinate with each other and to complete the task in a steady, natural way. How to apply people's two-handed skills to interaction with a virtual environment has become a hot issue in AR systems.
The goal of the present invention is to provide an effective implementation of natural two-handed human-machine interaction in a see-through AR system, so that the user can interact with virtual objects in the real environment in a friendlier way. In particular, the user should be freed from traditional human-machine interaction devices such as the keyboard, mouse and touch pad, and should interact directly with both hands, which brings an interaction experience closer to everyday habits. According to our investigation and literature search, no mature solution for two-handed human-machine interaction in AR systems has yet been published; the present invention appears to be the first of its kind.
Summary of the invention
In view of the problems and deficiencies of the prior art, the object of the present invention is to provide a two-handed human-machine interaction method in a see-through augmented reality system that allows the user to perform human-machine interaction in a more natural way and improves the immersion of the system.
In the see-through augmented reality system adopted by the present invention, the hardware configuration of the two-handed human-machine interaction system is shown in Figure 1: magnetic tracking sensors (6, 8) are mounted on the two data gloves (5) and on the see-through head-mounted display (7); a magnetic tracking transmitter (4) is placed on the operator's console (3); the magnetic tracking sensors (6, 8) and the magnetic tracking transmitter (4) are connected to the magnetic tracking control box (2); the magnetic tracking control box (2), the see-through head-mounted display (7) and the data gloves (5) are connected to the system host (1); the user wears the see-through head-mounted display (7) and the two data gloves (5).
To achieve the above object, the technical solution adopted by the present invention is as follows:
A two-handed human-machine interaction method in a see-through augmented reality system, characterized by the following concrete operation steps:
1): initialize the system environment, set the system parameters, and scan and check the system devices;
2): apply a correction algorithm to the readings of the magnetic tracking sensors (6, 8), and use the corrected tracking data for real-time tracking of the user's head and hands and for three-dimensional registration of the virtual model;
3): apply an adaptive-processing algorithm to the readings of the data gloves (5), and use the processed data to obtain the bending state of the user's fingers in real time;
4): measure the distance between the magnetic tracking sensor (8) and the user's eyes, and correct the viewpoint offset and the viewing angle;
5): analyze the spatial position tracking data of the hands and the finger bending states to recognize interaction gestures in real time;
6): model the virtual scene in real time according to the user's input, transform and render the scene model according to the interaction, and generate feedback animation;
7): seamlessly fuse the real scene with the virtual model;
8): the user views the result through the see-through head-mounted display (7).
Because the magnetic field strength decays geometrically with increasing distance, the field strength near the magnetic tracking transmitter (4) differs greatly from that far away, so the relation between magnetic tracking coordinates and real-world coordinates is no longer linear, as shown in Figure 2. The readings of the magnetic tracking sensors (6, 8) therefore need to be corrected. The correction algorithm for the magnetic tracking sensors (6, 8) of step 2) comprises the following sub-steps:
2)-1: partition the spatial grid: divide the workspace S into n independent small cubes C_i (i = 1, 2, ..., n);
2)-2: build a correspondence table between magnetic tracking coordinates and real-world coordinates: establish the correspondence between coordinate points in the reading coordinate space of the magnetic tracking sensors (6, 8) and points in the real-world coordinate space;
2)-3: locate the point to be corrected: the decay of the magnetic field strength deforms the spatial grid, so to avoid positioning errors the search space is reduced along the X, Y and Z axes; taking the X direction as an example, let X_i denote the set of X-axis coordinate values of the grid points lying in the i-th plane of the magnetic tracking coordinate system, so that X_(i+1)min > X_(i)min and X_(i+1)max > X_(i)max; given a point to be corrected P(x_p, y_p, z_p), start from the maximum value in the positive X direction and traverse each X_(i)min along the negative X direction to find the smallest i satisfying x_p < X_(i)min, which gives the right boundary of the search space of P in the X direction; start from the minimum value in the negative X direction and traverse each X_(j)max along the positive X direction to find the largest j satisfying x_p > X_(j)max, which gives the left boundary of the search space of P in the X direction, with j < i; in the same way find the upper and lower boundaries Y_k, Y_l (l < k) of the Y direction and the front and back boundaries Z_m, Z_n (n < m) of the Z direction; the search space of P is thereby narrowed to the smaller region bounded by the planes through the vertices X_i, X_j, Y_k, Y_l, Z_m, Z_n; finally, traverse the vertices contained in this region to determine which unit cell the point P belongs to;
2)-4: solve the real-world coordinates of the point to be corrected by inverse-distance-weighted interpolation: within the located search space, compute the distance from each vertex to the point P and find the vertex Q nearest to P; the point P necessarily lies inside the hexahedron ABCD-A'B'C'D' centered on Q, as shown in Figure 3; because Q is the vertex most similar to P, Q is given a larger weight, and the real-world coordinate value of P is expressed as an inverse-distance-weighted combination of the vertex coordinates, where d_0 is the distance between the point P and the point Q, t_0 is the weight of d_0, d_i is the distance between P and vertex i, t_i is the weight of d_i, n is the number of vertices excluding Q, and w is a defined weight coefficient; in practical use w = n.
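The interpolation formula itself survives only as an image in the published text. A form consistent with the definitions above (inverse-distance weights, with the nearest vertex Q boosted by the coefficient w) would be the following; this reconstruction is an assumption, not the patent's verbatim formula:

P' = t_0 * Q' + sum over i of t_i * V_i', with t_0 = (w/d_0) / (w/d_0 + sum over i of 1/d_i) and t_i = (1/d_i) / (w/d_0 + sum over i of 1/d_i),

where Q' and V_i' are the real-world coordinates stored for Q and for the remaining vertices in the table of step 2)-2; the weights sum to one, so P' is a convex combination of the vertex coordinates. A minimal code sketch of the locate-and-interpolate procedure of steps 2)-3 and 2)-4 under the same assumption (all names are illustrative; the per-plane extrema are assumed precomputed from the correspondence table):

import numpy as np

def axis_bounds(p, plane_min, plane_max):
    # Bracketing planes (j, i), j < i, for one coordinate of the point P
    # (step 2-3). plane_min[k] / plane_max[k] are the smallest / largest
    # coordinate of the grid points in warped plane k; assumes P lies
    # inside the workspace.
    i = len(plane_min) - 1
    while i > 0 and p < plane_min[i - 1]:      # walk inward from the far end
        i -= 1
    j = 0
    while j < i - 1 and p > plane_max[j + 1]:  # walk inward from the near end
        j += 1
    return j, i

def correct_point(p, cell_vertices, cell_real_coords, w=None):
    # Inverse-distance-weighted real-world coordinates of P (step 2-4).
    # cell_vertices / cell_real_coords: tracker-space and real-world
    # coordinates of the vertices of the unit cell found in step 2-3.
    p = np.asarray(p, dtype=float)
    d = np.linalg.norm(np.asarray(cell_vertices, dtype=float) - p, axis=1)
    q = int(np.argmin(d))          # nearest vertex Q gets the extra weight
    n = len(d) - 1                 # number of vertices other than Q
    if w is None:
        w = n                      # the patent sets w = n in practice
    weights = 1.0 / np.maximum(d, 1e-9)
    weights[q] *= w
    weights /= weights.sum()
    return weights @ np.asarray(cell_real_coords, dtype=float)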
Human hands vary: fingers differ in length, and users differ in how they wear the gloves, so the positions of the bend sensors built into the data gloves (5) relative to the user's fingers differ from person to person. The raw bend readings of the data gloves (5) therefore inevitably contain large errors, and the readings must be normalized. The adaptive-processing algorithm for the data glove (5) readings of step 3) comprises the following sub-steps:
3)-1: the operator puts on the data gloves (5);
3)-2: acquire training data: the operator clenches the fist and stretches the hand in the most natural manner, repeating each 10 times; the data glove (5) reading at each clench is recorded into the array bendMax[10], and the reading at each stretch into the array bendMin[10];
3)-3: normalize the bend data: let bMax = max(bendMax[i], i ∈ [0, 9]) and bMin = min(bendMin[j], j ∈ [0, 9]); bMax is the maximum bend value of each finger of the data glove (5) and bMin the minimum; with bendCurrent denoting the real-time raw glove reading, the normalized value is bCurrent = (bendCurrent - bMin)/(bMax - bMin).
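As a minimal sketch of this calibration for one bend sensor (class and method names are illustrative, not from the patent):

class BendCalibration:
    # Min-max calibration of one bend sensor, following steps 3)-1 to 3)-3.
    def __init__(self, bend_max, bend_min):
        # bend_max / bend_min: the 10 readings recorded while clenching /
        # stretching in step 3)-2
        if len(bend_max) != 10 or len(bend_min) != 10:
            raise ValueError("expected 10 clench and 10 stretch samples")
        self.b_max = max(bend_max)   # bMax
        self.b_min = min(bend_min)   # bMin

    def normalize(self, bend_current):
        # bCurrent = (bendCurrent - bMin) / (bMax - bMin), clamped to [0, 1]
        b = (bend_current - self.b_min) / (self.b_max - self.b_min)
        return min(1.0, max(0.0, b))

One such object would be kept per sensor and rebuilt whenever a new user puts on the gloves, which is what makes the processing adaptive.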
Because the offset between the eyes and the magnetic tracking sensor (8) differs from user to user when the see-through head-mounted display (7) is worn, the offset between the two must be measured, and the viewpoint and viewing angle corrected. Step 4) comprises the following sub-steps:
4)-1: the operator puts on the see-through head-mounted display (7) so that the operator's visual axis coincides with the centerline of the see-through head-mounted display (7);
4)-2: correct the offset error between the magnetic tracking sensor (8) and the observing eye: let the three Euler angles obtained from the magnetic tracking sensor (8) be a, e and r, with corresponding individual rotation matrices M_r, M_e and M_a; the total rotation matrix is M = M_r M_e M_a, where M_a, M_e and M_r are the rotation matrices for the azimuth, elevation and roll angles respectively.
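The three individual matrices survive only as images in this text; assuming the usual magnetic-tracker convention (azimuth a about the Z axis, elevation e about the Y axis, roll r about the X axis, right-handed), they take the standard forms, written here row by row:

M_a = [ cos a, sin a, 0; -sin a, cos a, 0; 0, 0, 1 ]
M_e = [ cos e, 0, -sin e; 0, 1, 0; sin e, 0, cos e ]
M_r = [ 1, 0, 0; 0, cos r, sin r; 0, -sin r, cos r ]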
Measure the offset T(x, y, z) between the magnetic tracking sensor (8) and the observing eye; with P(x, y, z) denoting the corrected reading of the head magnetic tracking sensor (8), the real-space coordinate of the observing eye is P' = P + T·M;
4)-3: compute the viewing angle of the observing eye: measure the distance d between the observing eye and the see-through head-mounted display (7); with l the length and w the width of the see-through head-mounted display (7), the viewing angle is θ = arctan(l/(w + 2d)).
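A minimal sketch of steps 4)-2 and 4)-3 under the axis convention assumed above (function names are illustrative):

import numpy as np

def rotation_matrix(a, e, r):
    # Total rotation M = Mr @ Me @ Ma from azimuth a, elevation e, roll r
    # (radians), under the axis convention assumed above.
    ca, sa = np.cos(a), np.sin(a)
    ce, se = np.cos(e), np.sin(e)
    cr, sr = np.cos(r), np.sin(r)
    Ma = np.array([[ca, sa, 0], [-sa, ca, 0], [0, 0, 1]])
    Me = np.array([[ce, 0, -se], [0, 1, 0], [se, 0, ce]])
    Mr = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
    return Mr @ Me @ Ma

def eye_position(p_sensor, t_offset, a, e, r):
    # Real-space eye position P' = P + T.M (step 4-2); P and T are treated
    # as row vectors.
    return np.asarray(p_sensor, dtype=float) + \
        np.asarray(t_offset, dtype=float) @ rotation_matrix(a, e, r)

def viewing_angle(l, w, d):
    # Viewing angle theta = arctan(l / (w + 2 d)) from step 4-3
    return np.arctan(l / (w + 2 * d))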
So that the user can interact naturally with both hands in the augmented reality system, the two-handed interaction rules are first defined according to the kinematic chain model, and the user's gesture operations are then recognized in real time. Step 5) comprises the following sub-steps:
5)-1: define interaction gestures based on the kinematic chain model;
5)-2: recognize the user's interaction gestures in real time according to the rules;
5)-3: map gestures to semantics with a finite state machine.
Defining the interaction gestures in step 5)-1 comprises the following sub-steps:
5)-1-1: analyze two-handed interaction characteristics: the hand movements with which people complete tasks in real life can be divided into 1) one-handed behavior, 2) symmetric two-handed behavior and 3) asymmetric two-handed behavior; taking a right-handed user as an example, the cardinal principles of the kinematic chain model are: 1) the motion of the right hand takes the left hand as its reference, the left hand providing a reference frame for the right hand's motion; 2) the right hand performs precise operations within a small range, while the left hand performs rough operations over a large range; 3) the action of the left hand precedes that of the right hand in time;
5)-1-2: define the interaction gesture rules: according to the principles of step 5)-1-1, take the bend values of the user's hands and their relative spatial relation as features, construct feature vectors as the set of recognition targets, and build the gesture rule set G.
Real-time recognition of the user's interaction gestures in step 5)-2 comprises the following sub-steps:
5)-2-1: acquire static hand features: from step 3) obtain the bend values B (b_t, b_i, b_m, b_r, b_l) of the fingers of the user's hand (t thumb, i index finger, m middle finger, r ring finger, l little finger), and set the corresponding thresholds T (t_t, t_i, t_m, t_r, t_l); the current finger state S is determined by comparing each bend value b_x with its threshold t_x, where x = t, i, m, r, l and w is a range threshold;
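The defining equation for S survives only as an image; one plausible reading, consistent with w acting as a tolerance band around each threshold, is the following (an assumption, not the patent's verbatim definition):

S_x = 1 (bent) if b_x >= t_x + w; S_x = 0 (extended) if b_x <= t_x - w; otherwise S_x keeps its previous value, for x ∈ {t, i, m, r, l}.

A dead band of width 2w keeps the state from flickering when a bend value hovers near its threshold.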
5)-2-2: acquire dynamic hand features: from the real-time spatial position coordinates of the user's hand obtained in step 2), compute the motion vector V between the current position P and the position P' of the previous instant, which yields the direction and speed of the hand motion;
5)-2-3: recognize the user's one-handed gestures: from the static hand features S obtained in step 5)-2-1 and the hand motion vector V obtained in step 5)-2-2, look up the corresponding one-handed gesture in the gesture rule set G defined in step 5)-1-2;
5)-2-4: recognize the user's two-handed gestures: from the one-handed gestures of the left and right hands obtained in step 5)-2-3, taking a right-handed user as an example, use the left-hand gesture as the constraint condition and, in combination with the right-hand gesture, look up the corresponding two-handed gesture in the gesture rule set G defined in step 5)-1-2.
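As a sketch of the rule lookups in steps 5)-2-3 and 5)-2-4 (the rule sets below are illustrative stand-ins for the gesture table of Fig. 6):

import numpy as np

def motion_token(v, speed_eps=1e-3):
    # Reduce a motion vector V to a coarse token: dominant axis or "still".
    v = np.asarray(v, dtype=float)
    if np.linalg.norm(v) < speed_eps:
        return "still"
    axis = int(np.argmax(np.abs(v)))
    return ("x+", "x-", "y+", "y-", "z+", "z-")[2 * axis + int(v[axis] < 0)]

# One-handed rules: (finger-state tuple S, motion token) -> gesture name.
G_ONE_HAND = {
    ((1, 1, 1, 1, 1), "still"): "fist",
    ((0, 0, 0, 0, 0), "still"): "open",
    ((1, 0, 1, 1, 1), "x+"): "point_right",
}

# Two-handed rules: the left-hand gesture constrains, the right-hand refines.
G_TWO_HAND = {
    ("fist", "point_right"): "rotate_face",
    ("open", "open"): "scale_cube",
}

def recognize(left_state, left_v, right_state, right_v):
    # Look up one-handed gestures, then combine them into a two-handed one.
    lg = G_ONE_HAND.get((left_state, motion_token(left_v)))
    rg = G_ONE_HAND.get((right_state, motion_token(right_v)))
    return G_TWO_HAND.get((lg, rg))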
Gesture semantic mapping in step 5)-3 comprises the following sub-steps:
5)-3-1: establish constraint conditions: according to the principles cited in step 5)-1-1, taking a right-handed user as an example, the system uses the left-hand state as the pre-condition and performs state transitions in combination with changes of the right-hand state;
5)-3-2: build the finite state machine model: from the state transitions between the gestures defined in the gesture rule set G of step 5)-1-2, build a finite state machine model and minimize it;
5)-3-3: map gestures to semantics: with the operator's gesture obtained in step 5)-2-4, perform a state transition of the finite state machine of step 5)-3-2 and trigger the system function corresponding to the new state.
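As a sketch of the finite-state-machine mapping of steps 5)-3-1 to 5)-3-3 (the states, gestures and transition table here are illustrative placeholders, not the patent's actual gesture set of Fig. 6):

class GestureFSM:
    # Minimal finite state machine for gesture-to-semantics mapping.
    # Transitions are keyed on (current state, left-hand gesture, right-hand
    # gesture): the left hand acts as the pre-condition, the right hand
    # drives the state change (step 5-3-1). Each new state triggers a
    # system function (step 5-3-3).
    def __init__(self, transitions, actions, start="idle"):
        self.transitions = transitions  # {(state, left, right): next_state}
        self.actions = actions          # {state: callback}
        self.state = start

    def step(self, left_gesture, right_gesture):
        key = (self.state, left_gesture, right_gesture)
        if key in self.transitions:     # undefined inputs leave the state unchanged
            self.state = self.transitions[key]
            self.actions.get(self.state, lambda: None)()  # trigger system function
        return self.state

# Illustrative table: a left fist "holds" the cube, a right point rotates a face.
fsm = GestureFSM(
    transitions={("idle", "fist", "point_right"): "rotate_face",
                 ("rotate_face", "fist", "open"): "idle"},
    actions={"rotate_face": lambda: print("rotate selected face")},
)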
Compared with the prior art, the present invention has the following apparent substantive features and notable advantages:
The present invention proposes a method and solution for two-handed human-machine interaction in a see-through augmented reality system, provides a more natural human-computer interface, and offers a reference implementation procedure for similar systems. It frees the user from traditional human-machine interaction devices such as the keyboard, mouse and touch pad and lets the user interact directly with both hands, bringing an interaction experience closer to everyday habits.
Brief description of the drawings
Fig. 1 Structure of the two-handed human-machine interaction system in the see-through augmented reality system.
Fig. 2 Scatter diagram of magnetic tracking coordinates.
Fig. 3 Locating the point P to be corrected.
Fig. 4 Schematic diagram of the finite state machine.
Fig. 5 Flow chart of the two-handed interaction system in the see-through augmented reality system.
Fig. 6 Table of two-handed interaction gestures.
Embodiment
A preferred embodiment of the present invention is described below.
The embodiment takes two-handed Rubik's cube interaction as an example: in the see-through augmented reality system the user can rotate any face of the cube model, rotate the whole cube, or scale the whole cube with both hands. The operation steps are as follows (see Figs. 1 to 5):
1): initialize the system environment, set the system parameters, and scan and check the system devices;
2): apply a correction algorithm to the readings of the magnetic tracking sensors (6, 8), and use the corrected tracking data for real-time tracking of the user's head and hands and for three-dimensional registration of the virtual model;
3): apply an adaptive-processing algorithm to the readings of the data gloves (5), and use the processed data to obtain the bending state of the user's fingers in real time;
4): measure the distance between the magnetic tracking sensor (8) and the user's eyes, and correct the viewpoint offset and the viewing angle;
5): analyze the spatial position tracking data of the hands and the finger bending states to recognize interaction gestures in real time;
6): model the virtual scene in real time according to the user's input, transform and render the scene model according to the interaction, and generate feedback animation;
7): seamlessly fuse the real scene with the virtual model;
8): the user views the result through the see-through head-mounted display (7).
Because the magnetic field strength decays geometrically with increasing distance, the field strength near the magnetic tracking transmitter (4) differs greatly from that far away, so the relation between magnetic tracking coordinates and real-world coordinates is no longer linear, as shown in Figure 2. The readings of the magnetic tracking sensors (6, 8) therefore need to be corrected. The correction algorithm for the magnetic tracking sensors (6, 8) of step 2) comprises the following sub-steps:
2)-1: partition the spatial grid: divide the workspace S into n independent small cubes C_i (i = 1, 2, ..., n);
2)-2: build a correspondence table between magnetic tracking coordinates and real-world coordinates: establish the correspondence between coordinate points in the reading coordinate space of the magnetic tracking sensors (6, 8) and points in the real-world coordinate space;
2)-3: locate the point to be corrected: the decay of the magnetic field strength deforms the spatial grid, so to avoid positioning errors the search space is reduced along the X, Y and Z axes; taking the X direction as an example, let X_i denote the set of X-axis coordinate values of the grid points lying in the i-th plane of the magnetic tracking coordinate system, so that X_(i+1)min > X_(i)min and X_(i+1)max > X_(i)max; given a point to be corrected P(x_p, y_p, z_p), start from the maximum value in the positive X direction and traverse each X_(i)min along the negative X direction to find the smallest i satisfying x_p < X_(i)min, which gives the right boundary of the search space of P in the X direction; start from the minimum value in the negative X direction and traverse each X_(j)max along the positive X direction to find the largest j satisfying x_p > X_(j)max, which gives the left boundary of the search space of P in the X direction, with j < i; in the same way find the upper and lower boundaries Y_k, Y_l (l < k) of the Y direction and the front and back boundaries Z_m, Z_n (n < m) of the Z direction; the search space of P is thereby narrowed to the smaller region bounded by the planes through the vertices X_i, X_j, Y_k, Y_l, Z_m, Z_n; finally, traverse the vertices contained in this region to determine which unit cell the point P belongs to;
2)-4: solve the real-world coordinates of the point to be corrected by inverse-distance-weighted interpolation: within the located search space, compute the distance from each vertex to the point P and find the vertex Q nearest to P; the point P necessarily lies inside the hexahedron ABCD-A'B'C'D' centered on Q, as shown in Figure 3; because Q is the vertex most similar to P, Q is given a larger weight, and the real-world coordinate value of P is expressed as an inverse-distance-weighted combination of the vertex coordinates, where d_0 is the distance between the point P and the point Q, t_0 is the weight of d_0, d_i is the distance between P and vertex i, t_i is the weight of d_i, n is the number of vertices excluding Q, and w is a defined weight coefficient; in practical use w = n.
Human hands vary: fingers differ in length, and users differ in how they wear the gloves, so the positions of the bend sensors built into the data gloves (5) relative to the user's fingers differ from person to person. The raw bend readings of the data gloves (5) therefore inevitably contain large errors, and the readings must be normalized. The adaptive-processing algorithm for the data glove (5) readings of step 3) comprises the following sub-steps:
3)-1: the operator puts on the data gloves (5);
3)-2: acquire training data: the operator clenches the fist and stretches the hand in the most natural manner, repeating each 10 times; the data glove (5) reading at each clench is recorded into the array bendMax[10], and the reading at each stretch into the array bendMin[10];
3)-3: normalize the bend data: let bMax = max(bendMax[i], i ∈ [0, 9]) and bMin = min(bendMin[j], j ∈ [0, 9]); bMax is the maximum bend value of each finger of the data glove (5) and bMin the minimum; with bendCurrent denoting the real-time raw glove reading, the normalized value is bCurrent = (bendCurrent - bMin)/(bMax - bMin).
Because the offset between the eyes and the magnetic tracking sensor (8) differs from user to user when the see-through head-mounted display (7) is worn, the offset between the two must be measured, and the viewpoint and viewing angle corrected. Step 4) comprises the following sub-steps:
4)-1: the operator puts on the see-through head-mounted display (7) so that the operator's visual axis coincides with the centerline of the see-through head-mounted display (7);
4)-2: correct the offset error between the magnetic tracking sensor (8) and the observing eye: let the three Euler angles obtained from the magnetic tracking sensor (8) be a, e and r, with corresponding individual rotation matrices M_r, M_e and M_a; the total rotation matrix is M = M_r M_e M_a, where M_a, M_e and M_r are the rotation matrices for the azimuth, elevation and roll angles respectively.
Measure the offset T(x, y, z) between the magnetic tracking sensor (8) and the observing eye; with P(x, y, z) denoting the corrected reading of the head magnetic tracking sensor (8), the real-space coordinate of the observing eye is P' = P + T·M;
4)-3: compute the viewing angle of the observing eye: measure the distance d between the observing eye and the see-through head-mounted display (7); with l the length and w the width of the see-through head-mounted display (7), the viewing angle is θ = arctan(l/(w + 2d)).
So that the user can interact naturally with both hands in the augmented reality system, the two-handed interaction rules are first defined according to the kinematic chain model, and the user's gesture operations are then recognized in real time. Step 5) comprises the following sub-steps:
5)-1: define interaction gestures based on the kinematic chain model;
5)-2: recognize the user's interaction gestures in real time according to the rules;
5)-3: map gestures to semantics with a finite state machine.
Defining the interaction gestures in step 5)-1 comprises the following sub-steps:
5)-1-1: analyze two-handed interaction characteristics: the hand movements with which people complete tasks in real life can be divided into 1) one-handed behavior, 2) symmetric two-handed behavior and 3) asymmetric two-handed behavior; taking a right-handed user as an example, the cardinal principles of the kinematic chain model are: 1) the motion of the right hand takes the left hand as its reference, the left hand providing a reference frame for the right hand's motion; 2) the right hand performs precise operations within a small range, while the left hand performs rough operations over a large range; 3) the action of the left hand precedes that of the right hand in time;
5)-1-2: define the interaction gesture rules: according to the principles of step 5)-1-1, take the bend values of the user's hands and their relative spatial relation as features, construct feature vectors as the set of recognition targets, and build the gesture rule set G.
Real-time recognition of the user's interaction gestures in step 5)-2 comprises the following sub-steps:
5)-2-1: acquire static hand features: from step 3) obtain the bend values B (b_t, b_i, b_m, b_r, b_l) of the fingers of the user's hand (t thumb, i index finger, m middle finger, r ring finger, l little finger), and set the corresponding thresholds T (t_t, t_i, t_m, t_r, t_l); the current finger state S is determined by comparing each bend value b_x with its threshold t_x, where x = t, i, m, r, l and w is a range threshold;
5)-2-2: acquire dynamic hand features: from the real-time spatial position coordinates of the user's hand obtained in step 2), compute the motion vector V between the current position P and the position P' of the previous instant, which yields the direction and speed of the hand motion;
5)-2-3: recognize the user's one-handed gestures: from the static hand features S obtained in step 5)-2-1 and the hand motion vector V obtained in step 5)-2-2, look up the corresponding one-handed gesture in the gesture rule set G defined in step 5)-1-2;
5)-2-4: recognize the user's two-handed gestures: from the one-handed gestures of the left and right hands obtained in step 5)-2-3, taking a right-handed user as an example, use the left-hand gesture as the constraint condition and, in combination with the right-hand gesture, look up the corresponding two-handed gesture in the gesture rule set G defined in step 5)-1-2.
Gesture semantic mapping in step 5)-3 comprises the following sub-steps:
5)-3-1: establish constraint conditions: according to the principles cited in step 5)-1-1, taking a right-handed user as an example, the system uses the left-hand state as the pre-condition and performs state transitions in combination with changes of the right-hand state;
5)-3-2: build the finite state machine model: from the state transitions between the gestures defined in the gesture rule set G of step 5)-1-2, build a finite state machine model and minimize it;
5)-3-3: map gestures to semantics: with the operator's gesture obtained in step 5)-2-4, perform a state transition of the finite state machine of step 5)-3-2 and trigger the system function corresponding to the new state.
Generating the feedback animation in step 6) comprises the following sub-steps:
6)-1: determine the state of the Rubik's cube model: according to the system function triggered in step 5)-3-3, apply the corresponding operation (face rotation, whole-cube rotation or whole-cube scaling) to the cube model; the rotation angle and the scale factor are obtained from the relative angle and the relative distance of the two hands during the operation (see the sketch after this list);
6)-2: render the Rubik's cube model: render the virtual cube model according to the model state obtained in step 6)-1, and adjust the model pose according to the virtual camera parameters obtained in steps 4)-2 and 4)-3 so that it fuses seamlessly into the real scene.
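A minimal sketch of how step 6)-1 could turn the relative pose of the two hands into a rotation angle and a scale factor (names are illustrative; the patent gives no code):

import numpy as np

def two_hand_transform(left_pos, right_pos, left_pos0, right_pos0):
    # Rotation angle and scale factor from the relative pose of the hands.
    # *_pos0 are the hand positions when the gesture began; the rotation
    # angle follows the change of the inter-hand direction, the scale factor
    # the change of the inter-hand distance (step 6-1). The rotation axis
    # and sign are omitted for brevity.
    v0 = np.asarray(right_pos0, dtype=float) - np.asarray(left_pos0, dtype=float)
    v1 = np.asarray(right_pos, dtype=float) - np.asarray(left_pos, dtype=float)
    scale = np.linalg.norm(v1) / np.linalg.norm(v0)        # relative distance -> zoom
    cos_t = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    angle = np.arccos(np.clip(cos_t, -1.0, 1.0))           # relative angle -> rotation
    return angle, scale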
Fig. 1 shows the structure of the two-handed human-machine interaction system in the see-through augmented reality system adopted by this embodiment.
Fig. 5 shows the flow chart of the two-handed human-machine interaction method of this embodiment in the see-through augmented reality system.

Claims (7)

1. A two-handed human-machine interaction method in a see-through augmented reality system, characterized by comprising the following operation steps:
1): initialize the system environment, set the system parameters, and scan and check the system devices;
2): apply a correction algorithm to the readings of the magnetic tracking sensors (6, 8), and use the corrected tracking data for real-time tracking of the user's head and hands and for three-dimensional registration of the virtual model;
3): apply an adaptive-processing algorithm to the readings of the data gloves (5), and use the processed data to obtain the bending state of the user's fingers in real time;
4): measure the distance between the magnetic tracking sensor (8) and the user's eyes, and correct the viewpoint offset and the viewing angle;
5): analyze the spatial position tracking data of the hands and the finger bending states to recognize interaction gestures in real time;
6): model the virtual scene in real time according to the user's input, transform and render the scene model according to the interaction, and generate feedback animation;
7): seamlessly fuse the real scene with the virtual model;
8): the user views the result through the see-through head-mounted display (7).
2. The two-handed human-machine interaction method in a see-through augmented reality system according to claim 1, characterized in that said step 2) comprises the following sub-steps:
2)-1: partition the spatial grid: divide the workspace S into n independent small cubes C_i (i = 1, 2, ..., n);
2)-2: build a correspondence table between magnetic tracking coordinates and real-world coordinates: establish the correspondence between coordinate points in the reading coordinate space of the magnetic tracking sensors (6, 8) and points in the real-world coordinate space;
2)-3: locate the point to be corrected: the decay of the magnetic field strength deforms the spatial grid, so to avoid positioning errors the search space is reduced along the X, Y and Z axes; taking the X direction as an example, let X_i denote the set of X-axis coordinate values of the grid points lying in the i-th plane of the magnetic tracking coordinate system, so that X_(i+1)min > X_(i)min and X_(i+1)max > X_(i)max; given a point to be corrected P(x_p, y_p, z_p), start from the maximum value in the positive X direction and traverse each X_(i)min along the negative X direction to find the smallest i satisfying x_p < X_(i)min, which gives the right boundary of the search space of P in the X direction; start from the minimum value in the negative X direction and traverse each X_(j)max along the positive X direction to find the largest j satisfying x_p > X_(j)max, which gives the left boundary of the search space of P in the X direction, with j < i; in the same way find the upper and lower boundaries Y_k, Y_l (l < k) of the Y direction and the front and back boundaries Z_m, Z_n (n < m) of the Z direction; the search space of P is thereby narrowed to the smaller region bounded by the planes through the vertices X_i, X_j, Y_k, Y_l, Z_m, Z_n; finally, traverse the vertices contained in this region to determine which unit cell the point P belongs to;
2)-4: solve the real-world coordinates of the point to be corrected by inverse-distance-weighted interpolation: within the located search space, compute the distance from each vertex to the point P and find the vertex Q nearest to P; the point P necessarily lies inside the hexahedron ABCD-A'B'C'D' centered on Q; because Q is the vertex most similar to P, Q is given a larger weight, and the real-world coordinate value of P is expressed as an inverse-distance-weighted combination of the vertex coordinates, where d_0 is the distance between the point P and the point Q, t_0 is the weight of d_0, d_i is the distance between P and vertex i, t_i is the weight of d_i, n is the number of vertices excluding Q, and w is a defined weight coefficient; in practical use w = n.
3. The two-handed human-machine interaction method in a see-through augmented reality system according to claim 1, characterized in that said step 4) comprises the following sub-steps:
4)-1: the operator puts on the see-through head-mounted display (7) so that the operator's visual axis coincides with the centerline of the see-through head-mounted display (7);
4)-2: correct the offset error between the magnetic tracking sensor (8) and the observing eye: let the three Euler angles obtained from the magnetic tracking sensor (8) be a, e and r, with corresponding individual rotation matrices M_r, M_e and M_a for the roll, elevation and azimuth angles, and total rotation matrix M = M_r M_e M_a;
measure the offset T(x, y, z) between the magnetic tracking sensor (8) and the observing eye; with P(x, y, z) denoting the corrected reading of the magnetic tracking sensor (8), the real-space coordinate of the observing eye is P' = P + T·M;
4)-3: compute the viewing angle of the observing eye: measure the distance d between the observing eye and the see-through head-mounted display (7); with l the length and w the width of the see-through head-mounted display (7), the viewing angle is θ = arctan(l/(w + 2d)), which is set as the attribute of the virtual camera in the virtual world coordinate system.
4. The two-handed human-machine interaction method in a see-through augmented reality system according to claim 1, characterized in that said step 5) comprises the following sub-steps:
5)-1: define interaction gestures based on the kinematic chain model;
5)-2: recognize the user's interaction gestures in real time according to the rules;
5)-3: map gestures to semantics with a finite state machine.
5. The two-handed human-machine interaction method in a see-through augmented reality system according to claim 4, characterized in that said step 5)-1 comprises the following sub-steps:
5)-1-1: analyze two-handed interaction characteristics: the hand movements with which people complete tasks in real life can be divided into 1) one-handed behavior, 2) symmetric two-handed behavior and 3) asymmetric two-handed behavior; taking a right-handed user as an example, the cardinal principles of the kinematic chain model are: 1) the motion of the right hand takes the left hand as its reference, the left hand providing a reference frame for the right hand's motion; 2) the right hand performs precise operations within a small range, while the left hand performs rough operations over a large range; 3) the action of the left hand precedes that of the right hand in time;
5)-1-2: define the interaction gesture rules: according to the principles of step 5)-1-1, take the bend values of the user's hands and their relative spatial relation as features, construct feature vectors as the set of recognition targets, and build the gesture rule set G.
6. The two-handed human-machine interaction method in a see-through augmented reality system according to claim 4, characterized in that said step 5)-2 comprises the following sub-steps:
5)-2-1: acquire static hand features: from said step 3) obtain the bend values B (b_t, b_i, b_m, b_r, b_l) of the fingers of the user's hand, where t denotes the thumb, i the index finger, m the middle finger, r the ring finger and l the little finger; set the corresponding thresholds T (t_t, t_i, t_m, t_r, t_l); the current finger state S is determined by comparing each bend value b_x with its threshold t_x, where x = t, i, m, r, l and w is a range threshold;
5)-2-2: acquire dynamic hand features: from the real-time spatial position coordinates of the user's hand obtained in said step 2), compute the motion vector V between the current position P and the position P' of the previous instant, which yields the direction and speed of the hand motion;
5)-2-3: recognize the user's one-handed gestures: from the static hand features S obtained in step 5)-2-1 and the hand motion vector V obtained in step 5)-2-2, look up the corresponding one-handed gesture in the gesture rule set G defined in step 5)-1-2;
5)-2-4: recognize the user's two-handed gestures: from the one-handed gestures of the left and right hands obtained in step 5)-2-3, taking a right-handed user as an example, use the left-hand gesture as the constraint condition and, in combination with the right-hand gesture, look up the corresponding two-handed gesture in the gesture rule set G defined in step 5)-1-2.
7. The two-handed human-machine interaction method in a see-through augmented reality system according to claim 4, characterized in that said step 5)-3 comprises the following sub-steps:
5)-3-1: establish constraint conditions: according to the principles cited in step 5)-1-1, taking a right-handed user as an example, the system uses the left-hand state as the pre-condition and performs state transitions in combination with changes of the right-hand state;
5)-3-2: build the finite state machine model: from the state transitions between the gestures defined in the gesture rule set G of step 5)-1-2, build a finite state machine model and minimize it;
5)-3-3: map gestures to semantics: with the operator's gesture obtained in step 5)-2-4, perform a state transition of the finite state machine of step 5)-3-2 and trigger the system function corresponding to the new state.
CN201310569738.5A 2013-11-13 2013-11-13 Two-handed human-machine interaction method in a see-through augmented reality system Active CN103955267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310569738.5A CN103955267B (en) 2013-11-13 2013-11-13 Two-handed human-machine interaction method in a see-through augmented reality system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310569738.5A CN103955267B (en) 2013-11-13 2013-11-13 Two-handed human-machine interaction method in a see-through augmented reality system

Publications (2)

Publication Number Publication Date
CN103955267A true CN103955267A (en) 2014-07-30
CN103955267B CN103955267B (en) 2017-03-15

Family

ID=51332552

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310569738.5A Active CN103955267B (en) 2013-11-13 2013-11-13 Two-handed human-machine interaction method in a see-through augmented reality system

Country Status (1)

Country Link
CN (1) CN103955267B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101539804A (en) * 2009-03-11 2009-09-23 上海大学 Real time human-machine interaction method and system based on augmented virtual reality and anomalous screen
CN102142055A (en) * 2011-04-07 2011-08-03 上海大学 True three-dimensional design method based on augmented reality interactive technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yao Zhengwei et al., "Grasp recognition based on visual perception in augmented reality systems", Journal of Computer-Aided Design & Computer Graphics *
Huang Chen et al., "Research on large-range magnetic tracking registration in augmented reality systems", Computer Engineering *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10394318B2 (en) 2014-08-13 2019-08-27 Empire Technology Development Llc Scene analysis for improved eye tracking
CN105373218A (en) * 2014-08-13 2016-03-02 英派尔科技开发有限公司 Scene analysis for improved eye tracking
CN109062415A (en) * 2014-08-13 2018-12-21 英派尔科技开发有限公司 For improving the scene analysis of eyes tracking
CN105373218B (en) * 2014-08-13 2018-09-18 英派尔科技开发有限公司 Scene analysis for improving eyes tracking
CN104765448A (en) * 2015-03-17 2015-07-08 重庆邮电大学 Natural hand interaction method in augmented reality environment
CN104765448B (en) * 2015-03-17 2018-02-27 重庆邮电大学 Natural hand interaction method in augmented reality environment
CN107548470A (en) * 2015-04-15 2018-01-05 索尼互动娱乐股份有限公司 Nip and holding gesture navigation on head mounted display
CN107548470B (en) * 2015-04-15 2020-06-26 索尼互动娱乐股份有限公司 Pinch and hold gesture navigation on head mounted display
CN106101470A (en) * 2015-04-28 2016-11-09 京瓷办公信息系统株式会社 Information processor and the operation instruction method to image processing apparatus
CN106101470B (en) * 2015-04-28 2019-01-18 京瓷办公信息系统株式会社 Information processing unit and operation instruction method to image processing apparatus
CN105005986A (en) * 2015-06-19 2015-10-28 北京邮电大学 Three-dimensional registering method and apparatus
CN105404384A (en) * 2015-11-02 2016-03-16 深圳奥比中光科技有限公司 Gesture operation method, method for positioning screen cursor by gesture, and gesture system
CN105915877A (en) * 2015-12-27 2016-08-31 乐视致新电子科技(天津)有限公司 Free film watching method and device of three-dimensional video
US10573075B2 (en) 2016-05-19 2020-02-25 Boe Technology Group Co., Ltd. Rendering method in AR scene, processor and AR glasses
CN106095090A (en) * 2016-06-07 2016-11-09 北京行云时空科技有限公司 Control method, device and the system of spatial scene based on intelligence system
CN106502420A (en) * 2016-11-14 2017-03-15 北京视据科技有限公司 Based on the virtual key triggering method that image aberration is recognized
CN107085467A (en) * 2017-03-30 2017-08-22 北京奇艺世纪科技有限公司 A kind of gesture identification method and device
CN107576731A (en) * 2017-08-30 2018-01-12 天津大学 Model experiment structural crack expansion process real-time fluoroscopic method based on mixed reality
CN109683700A (en) * 2017-10-18 2019-04-26 深圳市掌网科技股份有限公司 The human-computer interaction implementation method and device of view-based access control model perception
CN107764262B (en) * 2017-11-09 2019-10-25 深圳创维新世界科技有限公司 Virtual reality shows equipment, system and pose calibrating method
CN107764262A (en) * 2017-11-09 2018-03-06 深圳创维新世界科技有限公司 Virtual reality display device, system and pose calibrating method
CN108211355A (en) * 2017-12-29 2018-06-29 武汉市马里欧网络有限公司 Three-dimensional puzzle based on AR
TWI702548B (en) * 2018-04-23 2020-08-21 財團法人工業技術研究院 Controlling system and controlling method for virtual display
US10890979B2 (en) 2018-04-23 2021-01-12 Industrial Technology Research Institute Controlling system and controlling method for virtual display
CN108898062B (en) * 2018-05-31 2021-12-10 电子科技大学 Hand motion recognition method based on improved signal segment extraction algorithm
CN108898062A (en) * 2018-05-31 2018-11-27 电子科技大学 A kind of hand motion recognition method based on improved signal segment extraction algorithm
CN109032355B (en) * 2018-07-27 2021-06-01 济南大学 Flexible mapping interaction method for corresponding multiple gestures to same interaction command
CN109032355A (en) * 2018-07-27 2018-12-18 济南大学 Various gestures correspond to the flexible mapping interactive algorithm of same interactive command
CN109308741A (en) * 2018-08-08 2019-02-05 长春理工大学 A kind of natural interaction craftwork creative design system based on Meta2
CN109582138A (en) * 2018-11-16 2019-04-05 重庆邮电大学 The bearing calibration of Tracing Registration precision in a kind of augmented reality system
CN112181133A (en) * 2020-08-24 2021-01-05 东南大学 Model evaluation method and system based on static and dynamic gesture interaction tasks
CN112181134A (en) * 2020-08-24 2021-01-05 东南大学 Model evaluation method and system based on finger click interaction task in virtual environment
CN112181134B (en) * 2020-08-24 2024-03-08 东南大学 Model evaluation method and system based on finger click interaction task in virtual environment
CN113204306A (en) * 2021-05-12 2021-08-03 同济大学 Object interaction information prompting method and system based on augmented reality environment
CN113838177A (en) * 2021-09-22 2021-12-24 上海拾衷信息科技有限公司 Hand animation production method and system

Also Published As

Publication number Publication date
CN103955267B (en) 2017-03-15

Similar Documents

Publication Publication Date Title
CN103955267A (en) Double-hand man-machine interaction method in x-ray fluoroscopy augmented reality system
CN108052202B (en) 3D interaction method and device, computer equipment and storage medium
Bowman et al. An introduction to 3-D user interface design
JP5695758B2 (en) Method, circuit and system for human machine interface with hand gestures
US9857868B2 (en) Method and system for ergonomic touch-free interface
CN104854537B (en) It is interacted from, multi-modal natural user with the multiple spurs of computing device
US20170193289A1 (en) Transform lightweight skeleton and using inverse kinematics to produce articulate skeleton
Linqin et al. Dynamic hand gesture recognition using RGB-D data for natural human-computer interaction
Wang et al. Immersive human–computer interactive virtual environment using large-scale display system
Aditya et al. Recent trends in HCI: A survey on data glove, LEAP motion and microsoft kinect
Huang et al. Conceptual three-dimensional modeling using intuitive gesture-based midair three-dimensional sketching technique
Xiao et al. A hand gesture-based interface for design review using leap motion controller
Gao Key technologies of human–computer interaction for immersive somatosensory interactive games using VR technology
Boruah et al. Development of a learning-aid tool using hand gesture based human computer interaction system
Prasad et al. A wireless dynamic gesture user interface for HCI using hand data glove
CN104820584B (en) Construction method and system of 3D gesture interface for hierarchical information natural control
Wang et al. Immersive wysiwyg (what you see is what you get) volume visualization
Mine Exploiting proprioception in virtual-environment interaction
CN108664126A (en) Deformable hand captures exchange method under a kind of reality environment
CN206097049U (en) Human -computer interaction equipment
CN107633551A (en) The methods of exhibiting and device of a kind of dummy keyboard
Mahdikhanlou et al. Object manipulation and deformation using hand gestures
CN105204630A (en) Method and system for garment design through motion sensing
Ogiela et al. Natural user interfaces for exploring and modeling medical images and defining gesture description technology
Saremi et al. Optimisation Algorithms for Hand Posture Estimation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant