CN106200964B - Method for human-computer interaction in virtual reality based on motion-track recognition - Google Patents


Info

Publication number
CN106200964B
CN106200964B
Authority
CN
China
Prior art keywords
motion track
user
characteristic image
vector
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610540904.2A
Other languages
Chinese (zh)
Other versions
CN106200964A (en)
Inventor
王锐 (Rui Wang)
鲍虎军 (Hujun Bao)
张孝舟 (Xiaozhou Zhang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201610540904.2A priority Critical patent/CN106200964B/en
Publication of CN106200964A publication Critical patent/CN106200964A/en
Application granted granted Critical
Publication of CN106200964B publication Critical patent/CN106200964B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for human-computer interaction in virtual reality based on motion-track recognition, comprising: Step 1, the user defines a characteristic-image set; Step 2, the user defines a track-event set; Step 3, the user manipulates one or more characteristic images from the characteristic-image set, and the operation is captured in real time by a camera; Step 4, from the positions of the characteristic images in the captured video, the motion track of the manipulated characteristic image in space is computed; Step 5, the motion track of the manipulated characteristic image is compared with the user-defined characteristic-image spatial motion tracks, and when one or more characteristic-image spatial motion tracks match a user-defined track, the corresponding event is triggered. The human-computer interaction method provided by the invention can quickly recognize motion tracks input by the user and, according to the recognition result, manipulate the virtual-reality scene, improving the user experience.

Description

Method for human-computer interaction in virtual reality based on motion-track recognition
Technical field
The present invention relates to the virtual-reality field of computer science, and in particular to a method for human-computer interaction in virtual reality based on motion-track recognition.
Background technology
Virtual reality (VR) was proposed by Jaron Lanier, founder of the US company VPL. After emerging in the early 1980s and going through thorough failure in the 1990s, it has reappeared prominently in the public eye. Its specific meaning is: a technology that comprehensively uses computer graphics systems and various display and control interface devices to provide a sense of immersion in an interactive three-dimensional environment generated on a computer. VR technology can be widely applied to urban planning, interior design, industrial simulation, restoration of historic sites, bridge and highway design, real-estate sales, tourism, teaching, hydraulic and electric power, geological hazards, education and training, and many other fields, providing practical solutions for them.
Nowadays the most widely used VR device is Google Cardboard: its price is low and it can be used directly with a mobile phone, so most users are willing to try this emerging field through it. However, using a mobile phone as a VR device also has a drawback: without a suitable input device, the user can no longer operate the software by touching the screen. Searching for VR in an application market, the software found falls roughly into three classes: video players, roaming experiences, and games. The video-player and roaming-experience classes use almost no interaction at all; all the user needs to do is put on the Cardboard and earphones and sit back in quiet appreciation. Beyond that, the interaction modes used by other applications are essentially the following.
1. Using the physical button on the Cardboard box
Compared with earlier versions, Cardboard 2.0 adds a physical button to the carton. The button is connected to a triangular flap whose end carries a conductive pad; when the button is pressed, the flap contacts the touch screen. This reproduces the effect of tapping the touch screen even though the user cannot easily reach it. However, the trigger position of the button is fixed, always directly above the screen, so the button is generally used only as a return or escape key and has little interactive value.
2. Gaze triggering at the optical center
This is the most common interaction mode in current VR applications on the market. A small aiming dot is placed at the center of the screen; since the user cannot touch the screen, the user rotates the head so that the dot is aligned with a trigger object, and staring at the object for a short time triggers the corresponding event. Most roaming-experience and game software uses this mode. For example, in a lunar-experience application, centering the dot on different numbers shows different information; in a zombie-shooting game, shots are always fired towards the screen center, and the player only needs to rotate the head to align the center with a zombie.
3. Virtual buttons
The virtual button is an augmented-reality interaction mode provided by the Vuforia SDK: the camera recognizes a characteristic image appearing in real space, and a virtual button is created at the corresponding position on the screen; if the user touches this button with a finger, the corresponding event is triggered. However, the response accuracy of this interaction mode is not high, and its speed is also poor.
Summary of the invention
The present invention provides a method for human-computer interaction in virtual reality based on motion-track recognition; the method captures the motion track of a characteristic image through a camera and manipulates virtual objects accordingly, based on the recognition result of the motion track.
A method for human-computer interaction in virtual reality based on motion-track recognition comprises:
Step 1, the user defines several characteristic images, which form a characteristic-image set;
Step 2, the user defines one or more characteristic-image spatial motion tracks and the event corresponding to each single or multiple characteristic-image spatial motion track, which form a track-event set;
Step 3, the user manipulates one or more characteristic images from the characteristic-image set, and the operation is captured in real time by a camera;
Step 4, from the positions of the characteristic images in the captured video, the motion track of the manipulated characteristic image in space is computed;
Step 5, the motion track of the manipulated characteristic image is compared with the user-defined characteristic-image spatial motion tracks, and when one or more characteristic-image spatial motion tracks match a user-defined track, the corresponding event is triggered.
In step 1, all characteristic images constitute the characteristic-image set. In step 2, each single or multiple characteristic-image spatial motion track corresponds to the triggering of one event, and the mapping relations between spatial motion tracks and events constitute the track-event set. In step 5, when the track of the manipulated characteristic image is judged identical to a user-defined characteristic-image spatial motion track, the event corresponding to that track is triggered according to the track-event mapping.
Preferably, a characteristic-image spatial motion track consists of several sequentially connected two-dimensional vectors. In step 4, computing the motion track of the manipulated characteristic image in space comprises:
Step 4-1, sampling the motion track input by the user: taking the characteristic-image coordinate in each video frame as an endpoint, the motion track is converted into several sequentially connected vectors;
Step 4-2, defining eight directions in the two-dimensional plane; for each vector, the direction among the eight with the smallest angle to the vector is taken as the direction of the vector;
Step 4-3, comparing, in the order of the motion track, the direction of the current vector with that of the previous vector; if the two directions are identical, the two vectors are merged; if the two directions differ, the current vector direction is kept;
Step 4-4, after denoising the merged vectors, the edit distance Lev_{a,b}(i, j) to each candidate motion track (the track used for matching) is computed according to the following formula; the candidate track with the shortest edit distance, provided that distance is less than the acceptable upper limit, is taken as the motion track input by the user:

Lev_{a,b}(i, j) = max(i, j), if min(i, j) = 0;
Lev_{a,b}(i, j) = min(Lev_{a,b}(i-1, j) + 1, Lev_{a,b}(i, j-1) + 1, Lev_{a,b}(i-1, j-1) + cost(a_i, b_j)), otherwise;

wherein the cost function is:

cost(a_i, b_j) = 0 if a_i = b_j; a reduced cost (less than 1) if a_i and b_j are adjacent directions; and 1 otherwise;

in the formula:
Lev_{a,b}(i, j) is the edit distance between the first i characters of string a and the first j characters of string b;
a is the encoding of the motion track input by the user;
i is a character index in the string of the motion track input by the user;
a_i is the i-th character of the string of the motion track input by the user;
b is the encoding of the candidate motion track;
j is a character index in the string of the candidate motion track;
b_j is the j-th character of the string of the candidate motion track.
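As an illustrative sketch only (not the patented implementation), the revised edit distance of step 4-4 can be realized with standard dynamic programming. The adjacent-direction substitution cost of 0.5 and the function names are assumptions; the text states only that substitution between adjacent directions pays a cost smaller than the normal cost of 1.

```python
def sub_cost(a: str, b: str) -> float:
    """Substitution cost between two direction codes on the 8-direction ring."""
    if a == b:
        return 0.0
    d = abs(int(a) - int(b))
    d = min(d, 8 - d)          # circular distance between directions 0..7
    return 0.5 if d == 1 else 1.0   # 0.5 for adjacent directions is an assumption

def track_distance(a: str, b: str) -> float:
    """Edit distance Lev_{a,b}(len(a), len(b)) with direction-aware costs."""
    m, n = len(a), len(b)
    # dp[i][j] = distance between the first i chars of a and the first j chars of b
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = float(i)
    for j in range(n + 1):
        dp[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1,                                  # deletion
                dp[i][j - 1] + 1,                                  # insertion
                dp[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]),   # substitution
            )
    return dp[m][n]
```

With this cost function, substituting "0" for "1" (adjacent directions) costs 0.5, while substituting "0" for "3" costs the full 1.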
The present invention captures the motion track of a characteristic image with a camera and triggers corresponding response events by recognizing different motion tracks, for example moving an object in the scene, or scaling a virtual object by changing the physical distance between two real objects. The two objects serve as characteristic images in the captured video, and the corresponding event is triggered by recognizing the motion track between the characteristic images.
A candidate motion track (the track used for matching) is a track preset by the system; these tracks correspond one-to-one with the events to be triggered. The motion track of the characteristic image is acquired and then matched against the preset tracks; if the match succeeds, the corresponding event is triggered.
The motion-track recognition algorithm provided by the invention can recognize tracks quickly and with high accuracy, without requiring training data or a learning process.
The edit distance reflects the degree of similarity between the motion track to be recognized and a candidate motion track; it must be less than the acceptable upper limit on the distance for the two tracks to be judged similar.
The longer the string of the motion track, the larger the acceptable upper limit on the distance; the shorter the string, the smaller the upper limit.
Preferably, in step 4-1, video is captured while the user inputs the motion track, and the characteristic-image coordinate in each frame is obtained; if the characteristic-image coordinate pauses for longer than a threshold, the recorded characteristic-image coordinates are processed, and the characteristic-image coordinates of each pair of adjacent frames construct one vector.
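The construction of step 4-1 amounts to differencing consecutive per-frame coordinates. A minimal sketch, with illustrative names and (x, y) tuples standing in for the recorded characteristic-image coordinates:

```python
def track_to_vectors(positions):
    """Convert a list of per-frame (x, y) coordinates into displacement vectors.

    Each pair of adjacent frames yields one vector,
    positions[i+1] - positions[i], as the text describes.
    """
    return [
        (positions[i + 1][0] - positions[i][0],
         positions[i + 1][1] - positions[i][1])
        for i in range(len(positions) - 1)
    ]
```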
Preferably, in step 4-2, eight directions are defined in the two-dimensional plane, with an angle of 45 degrees between adjacent directions.
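The 45-degree quantization of step 4-2 can be sketched with atan2. The numbering convention assumed here (code 0 for a rightward vector, increasing counter-clockwise) is consistent with the example encodings used later in the text, such as "0" for a rightward track; in screen coordinates with the y axis pointing down, the vertical codes would be mirrored.

```python
import math

def direction_code(vx: float, vy: float) -> int:
    """Return the code 0..7 of the 45-degree direction closest to (vx, vy)."""
    angle = math.atan2(vy, vx)             # angle in (-pi, pi]
    sector = round(angle / (math.pi / 4))  # nearest multiple of 45 degrees
    return sector % 8                      # map negative sectors to 0..7
```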
Preferably, in step 4-4, the denoising steps are as follows:
Step 4-4-1, delete the shortest vector among all current vectors; after the deletion, compare, in the order of the motion track, the direction of the current vector with that of the previous vector; if the two directions are identical, merge the two vectors; if the two directions differ, keep the current vector direction;
Step 4-4-2, repeat step 4-4-1 until the total length of the remaining vectors is just no less than 60% of the original total length (this value can be adjusted to the actual use case, generally between 60% and 90%).
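The denoising loop of steps 4-4-1 and 4-4-2 can be sketched as below. The direction quantizer is passed in as a function (any implementation of step 4-2 will do), the 60% floor is the default stated in the text, and the helper names are illustrative.

```python
import math

def merge_runs(vectors, code):
    """Merge consecutive vectors whose quantized directions are identical."""
    merged = []
    for v in vectors:
        if merged and code(merged[-1]) == code(v):
            last = merged.pop()
            merged.append((last[0] + v[0], last[1] + v[1]))
        else:
            merged.append(v)
    return merged

def denoise(vectors, code, floor=0.6):
    """Repeatedly delete the shortest vector and re-merge, keeping at least
    `floor` of the original total length (a deletion that would cross the
    floor is cancelled, as in step 4-4-2)."""
    length = lambda v: math.hypot(v[0], v[1])
    total = sum(length(v) for v in vectors)
    current = list(vectors)
    while len(current) > 1:
        candidate = list(current)
        candidate.remove(min(candidate, key=length))  # drop shortest vector
        candidate = merge_runs(candidate, code)       # re-merge equal runs
        if sum(length(v) for v in candidate) < floor * total:
            break                                     # cancel this deletion
        current = candidate
    return current
```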
Preferably, each of the eight directions in the two-dimensional plane corresponds to one code, and the vectors remaining after the denoising of step 4-4 are encoded in sequence according to their directions.
Preferably, when encoding, both forward and reverse codes are added for the same motion track, or similar codes are added for the same motion track.
In actual use, preferences differ from person to person: some people like to draw a circle clockwise, for example, while others draw it counterclockwise, so forward and reverse codes can be added for the same track. Alternatively, similar codes can be added for the same track, for example using both "0" and "010" to represent a rightward motion track.
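A sketch of how forward and reverse codes could both be registered for the same track: reversing a direction string reverses the character order and rotates every direction by 180 degrees (adding 4 modulo 8). The helper names and the dictionary-based template set are illustrative assumptions, not the patent's data structures.

```python
def reverse_code(code: str) -> str:
    """Direction string of the same track drawn in the opposite sense."""
    return "".join(str((int(c) + 4) % 8) for c in reversed(code))

def expand_templates(templates):
    """Map each event to both the forward and the reverse encoding."""
    expanded = {}
    for code, event in templates.items():
        expanded[code] = event
        expanded[reverse_code(code)] = event
    return expanded
```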
The method for human-computer interaction in virtual reality based on motion-track recognition provided by the invention can quickly recognize motion tracks input by the user and, according to the recognition result, manipulate the virtual-reality scene, improving the user experience.
Brief description of the drawings
Fig. 1 is a flow chart of computing the motion track of the manipulated characteristic image in space according to the present invention;
Fig. 2 shows a motion track input by the user;
Fig. 3 shows the motion track obtained by sampling;
Fig. 4 shows the result after vector merging;
Fig. 5 is a flow chart of the denoising;
Fig. 6 is a schematic diagram of the directions defined in the two-dimensional plane in the present invention;
Fig. 7 is a flow chart of the method for human-computer interaction in virtual reality based on motion-track recognition of the present invention.
Detailed description of the embodiments
The method for human-computer interaction in virtual reality based on motion-track recognition of the present invention is described in detail below with reference to the accompanying drawings.
The present embodiment is implemented in the Unity3D engine; its image-recognition function uses the characteristic-image recognition provided by Vuforia, which can efficiently recognize any high-contrast image seen by the camera.
As shown in Fig. 7, a method for human-computer interaction in virtual reality based on motion-track recognition comprises:
Step 1, the user prints several two-dimensional-code patterns, each pattern serving as a characteristic image and the set of patterns serving as the characteristic-image set.
Step 2, the user defines one or more characteristic-image spatial motion tracks and the event corresponding to each track, forming the track-event set.
Step 3, the user manipulates one or more characteristic images from the characteristic-image set, and the operation is captured in real time by a camera.
Step 4, the video of the user inputting the motion track is captured. In Unity's OnUpdate function, the characteristic-image coordinate in each frame is obtained and saved, denoted positions. If the image stays paused for longer than a predetermined time, all saved characteristic-image coordinates are processed and converted into vectors using the formula vectors[i] = positions[i+1] - positions[i], where positions[i] is the characteristic-image coordinate in the i-th frame and positions[i+1] is the characteristic-image coordinate in the (i+1)-th frame.
The motion track input by the user is shown in Fig. 2; the track after converting the characteristic-image coordinates into vectors is shown in Fig. 3.
Step 5, eight directions of the plane are defined as shown in Fig. 6, numbered 0 to 7 in sequence. The direction of a vector is defined as the closest of the eight plane directions.
Step 6, a merging pass is performed over all vectors: in the order of the motion track, the direction of the current vector is compared with that of the previous vector, and if the two directions are the same the vectors are merged using the formula newVector = vectors[i] + vectors[i-1], where vectors[i] is the i-th vector and vectors[i-1] is the (i-1)-th vector. The result after merging is shown in Fig. 4.
Step 7, the merged vectors are denoised and encoded, and the edit distance is then computed.
Owing to factors such as screen jitter, hand shake, and user data-acquisition errors, small erroneous vectors often remain among the merged direction vectors and affect the judgment of the track. To remove their influence, the shortest vector among all current vectors is deleted repeatedly, with a merging pass performed again after each deletion.
To ensure the main information is unaffected, not too many vectors should be deleted: the total length of the remaining vectors should stay above a certain percentage of the original total length. Testing shows that a reasonable range for this percentage is 70% to 90%.
In practice, as shown in Fig. 5, the shortest vector is deleted each time; if after some deletion the total vector length falls below 70% of the original total length, that last deletion is cancelled, and the resulting vectors are taken as the denoised vectors.
The denoised vectors are encoded; the encoding is simply the string of all vector directions in sequence. For example, a rightward track is encoded as "0", and a Z-shaped track as "050".
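The encoding described above can be sketched as a direct concatenation of direction codes, again under the assumption that direction 0 is rightward and codes increase counter-clockwise (which reproduces the "0" and "050" examples in the text):

```python
import math

def encode_track(vectors) -> str:
    """Concatenate the 8-direction codes of a vector sequence into a string."""
    def code(v):
        # nearest multiple of 45 degrees, mapped to 0..7
        return round(math.atan2(v[1], v[0]) / (math.pi / 4)) % 8
    return "".join(str(code(v)) for v in vectors)
```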
In the traditional edit-distance computation, each substitution, insertion, or deletion adds a cost of 1. However, direction 0 is very close to direction 1 but differs greatly from direction 3, so a substitution between two adjacent directions should pay a smaller cost. The present embodiment therefore revises the recurrence formula of the Levenshtein distance to:

Lev_{a,b}(i, j) = max(i, j), if min(i, j) = 0;
Lev_{a,b}(i, j) = min(Lev_{a,b}(i-1, j) + 1, Lev_{a,b}(i, j-1) + 1, Lev_{a,b}(i-1, j-1) + cost(a_i, b_j)), otherwise;

wherein the cost function is:

cost(a_i, b_j) = 0 if a_i = b_j; a reduced cost (less than 1) if a_i and b_j are adjacent directions; and 1 otherwise.
In the formula:
Lev_{a,b}(i, j) is the edit distance between the first i characters of string a and the first j characters of string b;
a is the encoding of the motion track input by the user;
i is a character index in the string of the motion track input by the user;
a_i is the i-th character of the string of the motion track input by the user;
b is the encoding of the candidate motion track (the track used for matching);
j is a character index in the string of the candidate motion track;
b_j is the j-th character of the string of the candidate motion track.
After the edit distance is computed in step 7, the candidate motion track with the shortest edit distance, provided that distance is less than the acceptable upper limit, is taken as the motion track input by the user. The flow of steps 4 to 7 is shown in Fig. 1.
Step 8, the motion track of the manipulated characteristic image is compared with the user-defined characteristic-image spatial motion tracks; when one or more characteristic-image spatial motion tracks match a user-defined track, the corresponding event is triggered.
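The final selection among candidate tracks can be sketched as below. The distance function is passed in as a parameter (any edit-distance variant fits), and the length-proportional acceptance limit uses a purely illustrative factor of 0.4, since the text states only that the acceptable upper limit grows with the string length.

```python
def best_match(user_code, candidates, distance, limit_factor=0.4):
    """Return the candidate code with the smallest distance to user_code,
    or None if even the best candidate exceeds the acceptance limit."""
    best, best_d = None, float("inf")
    for cand in candidates:
        d = distance(user_code, cand)
        if d < best_d:
            best, best_d = cand, d
    # acceptance limit grows with the input string length (factor is illustrative)
    limit = limit_factor * max(len(user_code), 1)
    return best if best is not None and best_d <= limit else None
```

The returned code would then be looked up in the track-event set to trigger the corresponding event.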

Claims (7)

1. A method for human-computer interaction in virtual reality based on motion-track recognition, characterized by comprising:
Step 1, the user defines several characteristic images, which form a characteristic-image set;
Step 2, the user defines one or more characteristic-image spatial motion tracks and the event corresponding to each single or multiple characteristic-image spatial motion track, which form a track-event set;
Step 3, the user manipulates one or more characteristic images from the characteristic-image set, and the operation is captured in real time by a camera;
Step 4, from the positions of the characteristic images in the captured video, the motion track of the manipulated characteristic image in space is computed, specifically comprising:
Step 4-1, sampling the motion track input by the user: taking the characteristic-image coordinate in each video frame as an endpoint, the motion track is converted into several sequentially connected vectors;
Step 4-2, defining eight directions in the two-dimensional plane; for each vector, the direction among the eight with the smallest angle to the vector is taken as the direction of the vector;
Step 4-3, comparing, in the order of the motion track, the direction of the current vector with that of the previous vector; if the two directions are identical, merging the two vectors; if the two directions differ, keeping the current vector direction;
Step 4-4, after denoising the merged vectors, computing the edit distance Lev_{a,b}(i, j) to each candidate motion track (the track used for matching) according to the following formula, and taking the candidate track with the shortest edit distance, provided that distance is less than the acceptable upper limit, as the motion track input by the user:

Lev_{a,b}(i, j) = max(i, j), if min(i, j) = 0;
Lev_{a,b}(i, j) = min(Lev_{a,b}(i-1, j) + 1, Lev_{a,b}(i, j-1) + 1, Lev_{a,b}(i-1, j-1) + cost(a_i, b_j)), otherwise;

wherein the cost function is:

cost(a_i, b_j) = 0 if a_i = b_j; a reduced cost (less than 1) if a_i and b_j are adjacent directions; and 1 otherwise;

in the formula:
Lev_{a,b}(i, j) is the edit distance between the first i characters of string a and the first j characters of string b;
a is the encoding of the motion track input by the user;
i is a character index in the string of the motion track input by the user;
a_i is the i-th character of the string of the motion track input by the user;
b is the encoding of the candidate motion track;
j is a character index in the string of the candidate motion track;
b_j is the j-th character of the string of the candidate motion track;
Step 5, comparing the motion track of the manipulated characteristic image with the user-defined characteristic-image spatial motion tracks, and triggering the corresponding event when one or more characteristic-image spatial motion tracks match a user-defined track.
2. The method for human-computer interaction in virtual reality based on motion-track recognition according to claim 1, characterized in that a characteristic-image spatial motion track consists of several sequentially connected two-dimensional vectors.
3. The method for human-computer interaction in virtual reality based on motion-track recognition according to claim 1, characterized in that, in step 4-1, video is captured while the user inputs the motion track, and the characteristic-image coordinate in each frame is obtained; if the characteristic-image coordinate pauses for longer than a threshold, the characteristic-image coordinates are processed, and the characteristic-image coordinates of each pair of adjacent frames construct one vector.
4. The method for human-computer interaction in virtual reality based on motion-track recognition according to claim 1, characterized in that, in step 4-2, eight directions are defined in the two-dimensional plane, with an angle of 45 degrees between adjacent directions.
5. The method for human-computer interaction in virtual reality based on motion-track recognition according to claim 1, characterized in that, in step 4-4, the denoising steps are as follows:
Step 4-4-1, deleting the shortest vector among all current vectors; after the deletion, comparing, in the order of the motion track, the direction of the current vector with that of the previous vector; if the two directions are identical, merging the two vectors; if the two directions differ, keeping the current vector direction;
Step 4-4-2, repeating step 4-4-1 until the total length of the remaining vectors is just no less than 60% of the original total length.
6. The method for human-computer interaction in virtual reality based on motion-track recognition according to claim 1, characterized in that each of the eight directions in the two-dimensional plane corresponds to one code, and the vectors remaining after the denoising of step 4-4 are encoded in sequence according to their directions.
7. The method for human-computer interaction in virtual reality based on motion-track recognition according to claim 6, characterized in that, when encoding, both forward and reverse codes are added for the same motion track, or similar codes are added for the same motion track.
CN201610540904.2A 2016-07-06 2016-07-06 Method for human-computer interaction in virtual reality based on motion-track recognition Active CN106200964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610540904.2A CN106200964B (en) 2016-07-06 2016-07-06 Method for human-computer interaction in virtual reality based on motion-track recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610540904.2A CN106200964B (en) 2016-07-06 2016-07-06 Method for human-computer interaction in virtual reality based on motion-track recognition

Publications (2)

Publication Number Publication Date
CN106200964A CN106200964A (en) 2016-12-07
CN106200964B true CN106200964B (en) 2018-10-26

Family

ID=57473270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610540904.2A Active CN106200964B (en) 2016-07-06 2016-07-06 Method for human-computer interaction in virtual reality based on motion-track recognition

Country Status (1)

Country Link
CN (1) CN106200964B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170130582A (en) * 2015-08-04 2017-11-28 구글 엘엘씨 Hover behavior for gaze interaction in virtual reality
CN106774995B (en) * 2016-12-14 2019-05-03 吉林大学 A kind of three-dimensional style of brushwork recognition methods based on localization by ultrasonic
CN108459782A (en) * 2017-02-17 2018-08-28 阿里巴巴集团控股有限公司 A kind of input method, device, equipment, system and computer storage media

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298345A (en) * 2014-07-28 2015-01-21 浙江工业大学 Control method for man-machine interaction system
CN104808790A (en) * 2015-04-08 2015-07-29 冯仕昌 Method of obtaining invisible transparent interface based on non-contact interaction
CN105487673A (en) * 2016-01-04 2016-04-13 京东方科技集团股份有限公司 Man-machine interactive system, method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9323337B2 (en) * 2010-12-29 2016-04-26 Thomson Licensing System and method for gesture recognition


Also Published As

Publication number Publication date
CN106200964A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
Wang et al. Mask-pose cascaded cnn for 2d hand pose estimation from single color image
Segen et al. Human-computer interaction using gesture recognition and 3D hand tracking
Aliprantis et al. Natural Interaction in Augmented Reality Context.
CN106200964B (en) Method for human-computer interaction in virtual reality based on motion-track recognition
Schlattman et al. Simultaneous 4 gestures 6 dof real-time two-hand tracking without any markers
Zhao et al. Synthesizing diverse human motions in 3d indoor scenes
Segen et al. Fast and accurate 3D gesture recognition interface
Du et al. Vision based gesture recognition system with single camera
Shen et al. Interaction-based human activity comparison
Ismail et al. Multimodal fusion: gesture and speech input in augmented reality environment
Seo et al. One-handed interaction with augmented virtual objects on mobile devices
Cirik et al. Following formulaic map instructions in a street simulation environment
Ismail et al. Vision-based technique and issues for multimodal interaction in augmented reality
CN108628455B (en) Virtual sand painting drawing method based on touch screen gesture recognition
Balcisoy et al. Interaction techniques with virtual humans in mixed environments
CN104239119A (en) Method and system for realizing electric power training simulation upon kinect
Tran et al. Easy-to-use virtual brick manipulation techniques using hand gestures
WO2020195017A1 (en) Path recognition method, path recognition device, path recognition program, and path recognition program recording medium
Alam et al. Affine transformation of virtual 3D object using 2D localization of fingertips
CN111860086A (en) Gesture recognition method, device and system based on deep neural network
Jeong et al. Hand gesture user interface for transforming objects in 3d virtual space
Rehman et al. Gesture-based guidance for navigation in virtual environments
Wang Real-time hand-tracking as a user input device
Ismail et al. Real Hand Gesture in Augmented Reality Drawing with Markerless Tracking on Mobile
Chen et al. Accurate fingertip detection from binocular mask images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant