CN106598233A - Input method and input system based on gesture recognition - Google Patents

Input method and input system based on gesture recognition

Info

Publication number
CN106598233A
CN106598233A
Authority
CN
China
Prior art keywords
hand
model
collision body
keyboard
button
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611052589.5A
Other languages
Chinese (zh)
Inventor
王雷
刘享军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Storm Mirror Technology Co Ltd
Original Assignee
Beijing Storm Mirror Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Storm Mirror Technology Co Ltd filed Critical Beijing Storm Mirror Technology Co Ltd
Priority to CN201611052589.5A priority Critical patent/CN106598233A/en
Publication of CN106598233A publication Critical patent/CN106598233A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 Character input methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an input method and an input system based on gesture recognition. The input method based on gesture recognition comprises the following steps: acquiring gesture information generated from a hand movement trajectory; generating a hand model from the gesture information, wherein the hand model is provided with a hand collider; activating a keyboard model, wherein each key of the keyboard model is provided with a key collider; and monitoring whether each key collider collides with the hand collider and, if so, transmitting the key value of the corresponding key to an input method editor module. According to the technical scheme provided by the embodiments of the invention, whether a key has been pressed is judged by introducing a hand model generated through gesture recognition and monitoring collisions between the two kinds of colliders in combination with the keyboard model, which solves the problem of low text input efficiency in the existing virtual reality field.

Description

Input method and input system based on gesture recognition
Technical field
The present disclosure relates generally to the field of computer technology, and more particularly to an input method and input system based on gesture recognition.
Background art
In the field of virtual reality, the user's eyes are isolated from the real world: the user cannot see his or her own hands and has no peripheral devices such as a keyboard or mouse. Input is therefore usually completed on a virtual keyboard by means such as head control or a Bluetooth handle. Clicking with head control or a Bluetooth handle is slow, so character input efficiency is low. Head control is one interaction mode of the virtual reality field. In a virtual reality scene, the user's line of sight is no longer fixed in one direction; it can rotate freely, 360 degrees horizontally and 180 degrees vertically, and each viewing angle shows different objects, as if the user were immersed in the scene. Rotating the head is therefore the most natural mode of human-computer interaction in the virtual reality field, and this interaction mode is called head control.
Summary of the invention
In view of the above drawbacks and deficiencies of the prior art, it is desirable to provide an efficient input method suitable for the virtual reality (VR) field. To solve the above problems, the present application proposes an input method and input system based on gesture recognition.
In a first aspect, an input method based on gesture recognition is provided. The method includes:
acquiring gesture information generated from a hand movement trajectory;
generating a hand model from the gesture information, the hand model being provided with a hand collider;
activating a keyboard model, each key of the keyboard model being provided with a key collider; and
monitoring whether each key collider collides with the hand collider and, if so, sending the key value of the corresponding key to an input method editor module.
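To make the four steps concrete, the following is a minimal, self-contained sketch of the claimed flow. It is illustrative only: the sphere-collider test and the names SphereCollider, Key and InputMethodEditor are assumptions of this sketch, not part of the application, which leaves the collision test to the underlying VR engine.

```python
import math
from dataclasses import dataclass


@dataclass
class SphereCollider:
    """Toy collider: a center point in 3-D space plus a radius."""
    x: float
    y: float
    z: float
    r: float

    def collides_with(self, other: "SphereCollider") -> bool:
        # Two spheres collide when their centers are closer than
        # the sum of their radii.
        d = math.dist((self.x, self.y, self.z), (other.x, other.y, other.z))
        return d <= self.r + other.r


@dataclass
class Key:
    key_value: str            # e.g. "a" or "Enter"
    collider: SphereCollider  # each key of the keyboard model has a key collider


class InputMethodEditor:
    """Stand-in for the input method editor module that turns key values into characters."""
    def commit(self, key_value: str) -> None:
        print(f"IME received key value: {key_value}")


def monitor_collisions(hand_collider: SphereCollider,
                       keys: list[Key], ime: InputMethodEditor) -> None:
    # The monitoring step: check every key collider against the hand
    # collider and, on a collision, send the key value to the IME module.
    for key in keys:
        if key.collider.collides_with(hand_collider):
            ime.commit(key.key_value)


# Usage: a hand collider (from the hand model) overlapping the "a" key.
hand = SphereCollider(0.0, 0.0, 0.0, 0.02)
keys = [Key("a", SphereCollider(0.0, 0.0, 0.01, 0.015)),
        Key("b", SphereCollider(0.1, 0.0, 0.01, 0.015))]
monitor_collisions(hand, keys, InputMethodEditor())  # prints: IME received key value: a
```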
In a second aspect, an input system based on gesture recognition is provided. The system includes:
a gesture information acquisition unit configured to acquire gesture information generated from a hand movement trajectory;
a hand model generation unit configured to generate a hand model from the gesture information, the hand model being provided with a hand collider;
a keyboard model activation unit configured to activate a keyboard model, each key of the keyboard model being provided with a key collider; and
a monitoring unit configured to monitor whether each key collider collides with the hand collider and, if so, to send the key value of the corresponding key to an input method editor module.
According to the technical solutions provided by the embodiments of the present application, a hand model generated through gesture recognition is introduced and, in combination with the keyboard model, collisions between the two kinds of colliders are monitored to determine whether a key has been pressed, which solves the problem of low text input efficiency in the existing virtual reality field.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
Fig. 1 shows an exemplary flowchart of the input method based on gesture recognition according to an embodiment of the present application;
Fig. 2 shows a schematic structural diagram of the input system based on gesture recognition according to an embodiment of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein serve only to explain the related invention and do not limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Referring to Fig. 1, an exemplary flowchart of the input method based on gesture recognition according to an embodiment of the present application is shown.
As shown in Fig. 1, in step 101, gesture information generated from a hand movement trajectory is acquired.
For ease of understanding, the input method of the present application is explained by taking its application to virtual reality glasses (hereinafter referred to as VR glasses) and a portable terminal as an example. It should be apparent that the present application is not limited to VR glasses and portable terminals, and is applicable to other devices in the virtual reality field.
Preferably, step 101 includes:
collecting image information of the hand;
recognizing the hand movement trajectory from the image information to generate the gesture information.
The VR glasses include lenses and an infrared depth camera. When the user's hand moves into the coverage area of the infrared depth camera, the camera collects image information of the hand in three-dimensional space, including the position and orientation of the arm, the palm and each finger, and the hand movement trajectory is recognized from this image information to generate the gesture information. When the user connects the VR glasses to a portable terminal and needs to enter user information in a user interface popped up in the virtual reality scene, the VR glasses send the gesture information to the portable terminal through a USB interface.
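As an illustration of what the gesture information produced by the infrared depth camera might look like, the sketch below defines a per-frame record holding the positions and orientations the paragraph enumerates (arm, palm, each finger) and serializes it for the USB link to the portable terminal. The field names and the JSON encoding are assumptions of this sketch; the application does not specify a data layout or wire format.

```python
import json
from dataclasses import dataclass, asdict, field

Vec3 = tuple[float, float, float]


@dataclass
class FingerState:
    tip_position: Vec3   # fingertip position in camera space
    direction: Vec3      # unit vector the finger points along


@dataclass
class GestureInfo:
    """Hypothetical per-frame gesture information from the depth camera."""
    timestamp_ms: int
    arm_position: Vec3
    palm_position: Vec3
    palm_normal: Vec3
    fingers: dict[str, FingerState] = field(default_factory=dict)

    def to_usb_payload(self) -> bytes:
        # One possible encoding for sending a frame to the portable terminal.
        return json.dumps(asdict(self)).encode("utf-8")


frame = GestureInfo(
    timestamp_ms=1234,
    arm_position=(0.0, -0.3, 0.4),
    palm_position=(0.0, -0.1, 0.35),
    palm_normal=(0.0, 0.0, -1.0),
    fingers={"index": FingerState((0.02, -0.05, 0.33), (0.0, 0.5, -0.87))},
)
payload = frame.to_usb_payload()  # bytes ready for the USB link
```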
Then, in step 102, a hand model is generated from the gesture information, the hand model being provided with a hand collider.
After reading the gesture information through a physical-layer interface, the portable terminal generates a hand model on the basis of this information and adjusts the position, orientation and other properties of the model in real time according to the gesture information, so that in the virtual world the user sees a virtual hand model that moves in synchrony with his or her own hand. A collider is attached to the hand model so that it can interact with other objects in the virtual reality scene through collisions.
Preferably, each finger of the hand model is set as a collider. In the three-dimensional space of the virtual reality field, contact between objects is a collision, and collisions are registered only between objects marked as colliders. Therefore, to increase the speed at which the hand model strikes the keyboard model, each finger of the hand model can be set as a collider.
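The point of this paragraph — only objects marked as colliders register collisions, so marking every finger lets several fingers strike keys independently — can be sketched as follows. The FingerCollider type and the collider_enabled flag are hypothetical; in a typical VR engine this corresponds to attaching a small collider component to each finger bone of the hand model.

```python
from dataclasses import dataclass


@dataclass
class FingerCollider:
    name: str
    position: tuple[float, float, float]
    radius: float
    collider_enabled: bool = True  # only marked objects take part in collisions


def build_hand_colliders(finger_tips: dict[str, tuple[float, float, float]]):
    """Attach one small collider to every finger of the hand model.

    Marking each finger separately (instead of one collider for the whole
    hand) lets several fingers strike different keys in the same frame,
    which is the speed-up the description refers to.
    """
    return [FingerCollider(name, pos, radius=0.008)
            for name, pos in finger_tips.items()]


colliders = build_hand_colliders({
    "thumb": (0.03, -0.02, 0.30), "index": (0.02, 0.01, 0.28),
    "middle": (0.00, 0.02, 0.28), "ring": (-0.02, 0.01, 0.28),
    "little": (-0.04, -0.01, 0.29),
})
# -> five independent colliders, one per finger
```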
Then, in step 103, the keyboard model is activated, each key of the keyboard model being provided with a key collider.
After the hand model is generated, the keyboard model provided with key colliders is activated, so that characters can be input.
The keyboard model can be activated in a variety of ways, including:
monitoring whether the hand collider collides with a predefined keyboard-activation collider and, if so, activating the keyboard model; or
activating the keyboard model through a predefined human action; or
activating the keyboard model through a voice instruction; or
activating the keyboard model through a physical button.
In certain embodiments, the keyboard model is activated when the hand collider collides with a text input box; that is, the above keyboard-activation collider is a text input box in the virtual reality user interface. The text input box may be, for example, a user account input box, a password input box or a URL input field.
In certain embodiments, the keyboard model may also be activated through a predefined human action. Preferably, the keyboard model is activated when the user gazes at the text input user interface for a predetermined time. For example, when the input method of the present application is applied to VR glasses, whether the user intends to input text is judged from how long the user's gaze rests on the text input box in the user interface, and the keyboard model is activated accordingly.
In certain embodiments, the keyboard model is activated through a voice instruction. Voice can be received through a microphone, and the keyboard model is activated upon recognizing that the voice contains an instruction of the "start keyboard model" kind.
In certain embodiments, the keyboard model is activated through a physical button. A physical button is provided on the VR device that uses the input method of the present application, and the keyboard model is activated when the physical button is pressed.
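The four activation routes described above can be combined into one dispatcher, sketched below. The helper inputs (gaze dwell time, recognized phrase, button state) stand in for the gaze tracker, speech recognizer and hardware button driver that the description assumes but does not detail, and the 2-second threshold is an assumed value for the "predetermined time".

```python
GAZE_DWELL_S = 2.0  # assumed value for the "predetermined time"


def should_activate_keyboard(hand_hits_activation_collider: bool,
                             gaze_time_on_input_box_s: float,
                             heard_phrase: str | None,
                             physical_button_pressed: bool) -> bool:
    # Route 1: the hand collider touched the keyboard-activation collider
    # (e.g. a text input box in the VR user interface).
    if hand_hits_activation_collider:
        return True
    # Route 2: predefined human action - gazing at the text input
    # user interface for a predetermined time.
    if gaze_time_on_input_box_s >= GAZE_DWELL_S:
        return True
    # Route 3: a voice instruction of the "start keyboard model" kind.
    if heard_phrase is not None and "start keyboard" in heard_phrase.lower():
        return True
    # Route 4: a physical button on the VR device.
    return physical_button_pressed


# e.g. gaze dwell alone is enough to activate the keyboard model:
assert should_activate_keyboard(False, 2.5, None, False)
```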
Then, in step 104, it is monitored whether each key collider collides with the hand collider; if so, the key value of the corresponding key is sent to the input method editor module.
For example, the keyboard model can listen for the activation event and then monitor the collision of each of its keys to obtain the corresponding key value; specifically, collision monitoring is carried out between the collider of the hand model and the colliders of the keyboard model. Finally, the resulting key value is sent to the input method editor module, which converts the key value into a character.
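One detail left implicit above is that a key value should be emitted once per keystroke rather than on every frame in which the two colliders overlap. The sketch below layers that edge detection on top of the collision check; the frame_overlaps callback and the set-based press tracking are implementation choices of this sketch, not part of the application.

```python
from typing import Callable


def monitor_key_collisions(frame_overlaps: Callable[[str], bool],
                           key_values: list[str],
                           pressed: set[str],
                           send_to_ime: Callable[[str], None]) -> None:
    """Per-frame step 104: emit each key value once per press.

    frame_overlaps(key) returns True while that key's collider and the
    hand collider intersect in the current frame.
    """
    for key in key_values:
        overlapping = frame_overlaps(key)
        if overlapping and key not in pressed:
            pressed.add(key)       # rising edge: key just struck
            send_to_ime(key)       # hand the key value to the IME module
        elif not overlapping:
            pressed.discard(key)   # colliders separated: re-arm the key


# Usage across three frames: "a" is emitted once even though it overlaps twice.
state: set[str] = set()
for frame in [{"a"}, {"a"}, set()]:
    monitor_key_collisions(lambda k, f=frame: k in f, ["a", "b"], state, print)
```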
It should be noted that the input method based on gesture recognition of the present application may be executed by a terminal device or by a server, and the hand model or the keyboard model may likewise reside on the terminal device or on the server. In some embodiments, the hand model or keyboard model is set up, and collision monitoring is performed, on the terminal. Specifically, the VR glasses collect the gesture information and send it to a portable terminal such as a mobile phone; the portable terminal generates the hand model from the gesture information, receives the keyboard model activation instruction sent by the VR glasses, activates the keyboard model, monitors in real time, based on the gesture information sent by the VR glasses, whether the hand model collides with the keyboard model, generates the corresponding key value, and sends the character corresponding to that key value to the VR glasses for display. In other embodiments, the hand model or keyboard model resides on a server, which performs the collision monitoring and, after a collision, sends the generated key value to the terminal device to be converted into the corresponding character and displayed. Specifically, the terminal sends the collected gesture information to the server; the server generates the hand model, activates the keyboard model according to the keyboard model activation instruction received from the terminal, monitors from the gesture information received in real time whether the hand model collides with the keyboard model, generates the corresponding key value, and sends the key value to the terminal.
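The server-hosted deployment just described amounts to a small request/response protocol between terminal and server. The sketch below illustrates that split with in-process calls standing in for the network; the message types and fields are hypothetical.

```python
class Server:
    """Holds the hand model and keyboard model; performs collision monitoring."""

    def __init__(self) -> None:
        self.keyboard_active = False

    def handle(self, message: dict) -> dict:
        if message["type"] == "activate_keyboard":
            # Keyboard model activation instruction received from the terminal.
            self.keyboard_active = True
            return {"type": "ack"}
        if message["type"] == "gesture_info":
            if not self.keyboard_active:
                return {"type": "no_op"}
            # Update the hand model from the gesture info, check collisions
            # against the keyboard model, and return any key value struck.
            key_value = self.check_collisions(message["frame"])
            return {"type": "key_value", "value": key_value}
        return {"type": "error"}

    def check_collisions(self, frame: dict) -> str | None:
        # Placeholder: a real server would run the collider test here.
        return frame.get("struck_key")


# Terminal side: forward gesture info, convert returned key values to text.
server = Server()
server.handle({"type": "activate_keyboard"})
reply = server.handle({"type": "gesture_info", "frame": {"struck_key": "a"}})
if reply["type"] == "key_value" and reply["value"]:
    print(f"display character for key value: {reply['value']}")
```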
The terminal may be any of various electronic devices, including but not limited to a PC, VR glasses, a VR display, a smartphone, a smart watch, a tablet computer, a personal digital assistant and the like.
The server may be a server that provides various services. The server may store, analyze and otherwise process the received data, and feed the processing result back to the terminal.
Fig. 2 shows a schematic structural diagram of the input system based on gesture recognition according to an embodiment of the present application.
As shown in Fig. 2, the input system 200 based on gesture recognition includes:
a gesture information acquisition unit 210 configured to acquire gesture information generated from a hand movement trajectory;
a hand model generation unit 220 configured to generate a hand model from the gesture information, the hand model being provided with a hand collider;
a keyboard model activation unit 230 configured to activate a keyboard model, each key of the keyboard model being provided with a key collider; and
a monitoring unit 240 configured to monitor whether each key collider collides with the hand collider and, if so, to send the key value of the corresponding key to the input method editor module.
Preferably, the gesture information acquisition unit 210 includes:
a unit configured to collect image information of the hand; and
a unit configured to recognize the hand movement trajectory from the hand image information to generate the gesture information.
For example, in a virtual reality application using VR glasses, the VR glasses include lenses and an infrared depth camera. When the user's hand moves into the coverage area of the infrared depth camera, the camera collects image information of the hand in three-dimensional space, and the hand movement trajectory is recognized from this image information to generate the gesture information. When the user connects the VR glasses to a portable terminal and needs to enter user information in a user interface popped up in the virtual reality scene, the VR glasses send the gesture information to the portable terminal through an underlying interface. It can thus be seen that text input based on gesture recognition can be realized by arranging the gesture information acquisition unit 210 on the VR glasses and arranging the hand model generation unit 220, the keyboard model activation unit 230 and the monitoring unit 240 on the portable terminal.
Preferably, the keyboard model activation unit 230 includes:
a unit configured to monitor whether the hand collider collides with a predefined keyboard-activation collider and, if so, to activate the keyboard model; or
a unit configured to activate the keyboard model through a predefined human action; or
a unit configured to activate the keyboard model through a voice instruction; or
a unit configured to activate the keyboard model through a physical button.
Preferably, the predefined human action includes the user gazing at the text input user interface for a predetermined time, whereupon the keyboard model is activated.
Preferably, the hand model generation unit 220 includes:
a collider setting unit 221 configured to set each finger of the hand model as a collider.
The flowcharts and block diagrams in the drawings illustrate possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The above description is merely a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the particular combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.

Claims (10)

1. An input method based on gesture recognition, characterized in that the method comprises:
acquiring gesture information generated from a hand movement trajectory;
generating a hand model from the gesture information, the hand model being provided with a hand collider;
activating a keyboard model, each key of the keyboard model being provided with a key collider; and
monitoring whether each key collider collides with the hand collider and, if so, sending the key value of the corresponding key to an input method editor module.
2. The method according to claim 1, characterized in that acquiring the gesture information generated from the hand movement trajectory comprises:
collecting image information of the hand;
recognizing the hand movement trajectory from the image information to generate the gesture information.
3. The method according to claim 1, characterized in that activating the keyboard model comprises:
monitoring whether the hand collider collides with a predefined keyboard-activation collider and, if so, activating the keyboard model; or
activating the keyboard model through a predefined human action; or
activating the keyboard model through a voice instruction; or
activating the keyboard model through a physical button.
4. The method according to claim 3, characterized in that the predefined human action comprises the user gazing at the text input user interface for a predetermined time, whereupon the keyboard model is activated.
5. The method according to any one of claims 1-4, characterized in that each finger of the hand model is set as a collider.
6. An input system based on gesture recognition, characterized in that the system comprises:
a gesture information acquisition unit configured to acquire gesture information generated from a hand movement trajectory;
a hand model generation unit configured to generate a hand model from the gesture information, the hand model being provided with a hand collider;
a keyboard model activation unit configured to activate a keyboard model, each key of the keyboard model being provided with a key collider; and
a monitoring unit configured to monitor whether each key collider collides with the hand collider and, if so, to send the key value of the corresponding key to an input method editor module.
7. The system according to claim 6, characterized in that the gesture information acquisition unit comprises:
a unit configured to collect image information of the hand; and
a unit configured to recognize the hand movement trajectory from the hand image information to generate the gesture information.
8. The system according to claim 6, characterized in that the keyboard model activation unit comprises:
a unit configured to monitor whether the hand collider collides with a predefined keyboard-activation collider and, if so, to activate the keyboard model; or
a unit configured to activate the keyboard model through a predefined human action; or
a unit configured to activate the keyboard model through a voice instruction; or
a unit configured to activate the keyboard model through a physical button.
9. The system according to claim 8, characterized in that the predefined human action comprises the user gazing at the text input user interface for a predetermined time, whereupon the keyboard model is activated.
10. The system according to any one of claims 6-9, characterized in that the hand model generation unit comprises:
a collider setting unit configured to set each finger of the hand model as a collider.
CN201611052589.5A 2016-11-25 2016-11-25 Input method and input system based on gesture recognition Pending CN106598233A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611052589.5A CN106598233A (en) 2016-11-25 2016-11-25 Input method and input system based on gesture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611052589.5A CN106598233A (en) 2016-11-25 2016-11-25 Input method and input system based on gesture recognition

Publications (1)

Publication Number Publication Date
CN106598233A true CN106598233A (en) 2017-04-26

Family

ID=58593229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611052589.5A Pending CN106598233A (en) 2016-11-25 2016-11-25 Input method and input system based on gesture recognition

Country Status (1)

Country Link
CN (1) CN106598233A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1502098A * 2001-02-05 2004-06-02 System and method for keyboard independent touch typing
CN102163077A * 2010-02-16 2011-08-24 Microsoft Corp. Capturing screen objects using a collision volume
CN104866075A * 2014-02-21 2015-08-26 Lenovo (Beijing) Co., Ltd. Input method, device and electronic equipment
CN104978016A * 2014-04-14 2015-10-14 Acer Inc. Electronic device with virtual input function
CN105138118A * 2015-07-31 2015-12-09 Nubia Technology Co., Ltd. Intelligent glasses, method and mobile terminal for implementing human-computer interaction

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109743892A (en) * 2017-07-04 2019-05-10 腾讯科技(深圳)有限公司 The display methods and device of virtual reality content
CN109743892B (en) * 2017-07-04 2020-10-13 腾讯科技(深圳)有限公司 Virtual reality content display method and device
US11282264B2 (en) 2017-07-04 2022-03-22 Tencent Technology (Shenzhen) Company Limited Virtual reality content display method and apparatus
US11397463B2 (en) 2019-01-12 2022-07-26 Microsoft Technology Licensing, Llc Discrete and continuous gestures for enabling hand rays
CN111158476A * 2019-12-25 2020-05-15 National Defense Technology Innovation Institute, Academy of Military Sciences of the Chinese PLA Key identification method, system, equipment and storage medium of virtual keyboard

Similar Documents

Publication Publication Date Title
CN108874126B (en) Interaction method and system based on virtual reality equipment
CN106462242B User interface control using eye tracking
CN102789313B (en) User interaction system and method
CN106845335B (en) Gesture recognition method and device for virtual reality equipment and virtual reality equipment
CN106055088B Air writing and gesture system of an interactive wearable device
US9207771B2 (en) Gesture based user interface
EP2877909B1 (en) Multimodal interaction with near-to-eye display
TWI411935B (en) System and method for generating control instruction by identifying user posture captured by image pickup device
US20150084859A1 (en) System and Method for Recognition and Response to Gesture Based Input
CN107066081B (en) Interactive control method and device of virtual reality system and virtual reality equipment
US20120229509A1 (en) System and method for user interaction
WO2013114806A1 (en) Biometric authentication device and biometric authentication method
WO2012119371A1 (en) User interaction system and method
CN111583355B (en) Face image generation method and device, electronic equipment and readable storage medium
CN102779000A (en) User interaction system and method
CN106598233A (en) Input method and input system based on gesture recognition
CN113867531A (en) Interaction method, device, equipment and computer readable storage medium
CN104460967A (en) Recognition method of upper limb bone gestures of human body
CN112328831A (en) Body-building interaction method based on smart television, terminal equipment and readable storage medium
Shajideen et al. Hand gestures-virtual mouse for human computer interaction
CN108646578B (en) Medium-free aerial projection virtual picture and reality interaction method
JP6225612B2 (en) Program, information processing apparatus, and method
KR101567154B1 (en) Method for processing dialogue based on multiple user and apparatus for performing the same
CN104460962A (en) 4D somatosensory interaction system based on game engine
KR101525011B1 Tangible virtual reality display control device based on NUI, and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170426