CN110780732A - Input system based on space positioning and finger clicking

Input system based on space positioning and finger clicking

Info

Publication number
CN110780732A
CN110780732A (application CN201910842233.9A)
Authority
CN
China
Prior art keywords
user
input
character
characters
finger
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910842233.9A
Other languages
Chinese (zh)
Inventor
翁冬冬 (Weng Dongdong)
江海燕 (Jiang Haiyan)
胡翔 (Hu Xiang)
王聪 (Wang Cong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Beijing Institute of Technology BIT
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201910842233.9A
Publication of CN110780732A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an input system based on space positioning and finger clicking, which belongs to the technical field of virtual reality and ensures accuracy and convenience of input while enabling effective character input. The system comprises a touch detection module, a hand space positioning module, a data processing module and a display module. The touch detection module detects touch action data of the user's fingers. The hand space positioning module detects spatial position data of the user's hand. The data processing module acquires the touch action data of the user's fingers and the spatial position data of the user's hand, and determines the input character according to preset correspondences between the touch action data of the user's fingers and/or the spatial position data of the user's hand and the input characters. The display module receives the input character, generates an input result image using a set input method, and renders the input result image in the virtual environment for display.

Description

Input system based on space positioning and finger clicking
Technical Field
The invention relates to the technical field of virtual reality, in particular to an input system based on space positioning and finger clicking.
Background
At present, in virtual reality, a user in a mobile scenario cannot use peripherals such as a keyboard and mouse and therefore cannot perform effective character input. Input is generally performed by selecting characters with a handheld controller, but this method has low input efficiency.
The existing text input methods include the following two types:
first, patent No. 201611052589.5 discloses an input method and system based on gesture recognition, which generates gesture information based on a hand motion trajectory and generates a hand model from the gesture information, where the hand model is provided with a hand collision body. Meanwhile, each key of the keyboard model is provided with a key collision body. Character input is performed by monitoring whether each key collision body collides with a hand collision body.
Because gesture information acquisition is error-prone and of low accuracy, the user cannot quickly and accurately select keys; moreover, generating a hand model and performing collision detection introduces delay, reducing the user's comfort.
The second is a handheld-controller technique for virtual reality text entry described in the article "PizzaText: Text Entry for Virtual Reality Systems Using Dual Thumbsticks". Its circular arrangement of 26 characters divides a circle into seven slices, each containing four characters. The user uses the right thumbstick to traverse the slices and the left thumbstick to select a letter for text entry.
This approach requires a handheld controller as the input device; the user must operate with both hands and cannot simultaneously perform other tasks, and traversing the pie slices in this two-step input process reduces input efficiency.
In view of the above technical solutions, enabling effective character input in current virtual reality technology while ensuring accuracy and convenience of input is an urgent problem to be solved.
Disclosure of Invention
In view of this, the present invention provides an input system based on spatial positioning and finger clicking, which can ensure the accuracy and convenience of input while performing effective character input.
To achieve the above purpose, the technical solution of the invention is as follows: the system comprises a touch detection module, a hand space positioning module, a data processing module and a display module.
The touch detection module is used for detecting touch action data of a finger of a user.
The hand space positioning module is used for detecting the hand space position data of the user.
The data processing module acquires touch action data of fingers of a user and spatial position data of hands of the user, and determines input characters according to preset touch action data of the fingers of the user and/or corresponding relations between the spatial position data of the hands of the user and the input characters.
The display module is used for receiving input characters, generating an input result image by adopting a set input method, and rendering the input result image in a virtual environment for display.
Further, the touch detection module is a touch detection device mounted on the fingertips of the user's fingers, specifically a pressure sensor or a conductive material. When the fingertip of the thumb touches another fingertip, a pressure or current signal is generated as the touch action data of the user's finger; different pairs of touching fingertips generate different pressure or current signals.
Furthermore, the hand space positioning module positions the hands of the user by adopting laser or sound waves, and obtains the three-dimensional pose of the hands of the user in the space as the hand space position data of the user.
Further, the specific steps of determining the input character according to the preset correspondences between the touch action data of the user's fingers and/or the spatial position data of the user's hand and the input characters are as follows: a set number of block-shaped spaces is divided within the spatial range the user's hands can reach, each block-shaped space corresponding to one character block, and each character block containing a set number of characters; the touch action data of the user's fingers correspond one-to-one to the characters in the character block; the character block containing the input character is determined from the block space corresponding to the user's hand spatial position data; and the specific input character is determined from the touch action data of the user's fingers.
Further, a set number of characters in the character block are arranged in line order; dividing character spaces in the block-shaped space according to the same number of lines as the character blocks, wherein each character space corresponds to one line of characters; determining the input character according to the preset touch action data of the user finger and/or the corresponding relation between the user hand space position data and the input character further comprises: determining a line where an input character is located according to a character space corresponding to the hand space position data of the user; and determining specific input characters according to the touch action data of the fingers of the user.
Further, the characters with set number in the character block are arranged according to the row and column sequence; dividing character spaces in the block-shaped space according to lines and rows with the same number as the characters, wherein each character space corresponds to one character; determining the input character according to the preset touch action data of the user finger and/or the corresponding relation between the user hand space position data and the input character further comprises: and determining the input characters according to the character space corresponding to the hand space position data of the user.
Further, the characters are alphabetic characters, numeric characters, or punctuation characters.
Advantageous effects:
The invention provides an input system based on space positioning and finger clicking that realizes input of all letters, numbers and symbols (hereinafter collectively referred to as characters) based on the spatial position of the user's hand and touches between the thumb and the fingertips of the remaining fingers. The system expands the selection area using spatial position to realize input of all characters, and uses short-distance finger motion and the haptic feedback of the hand to improve the user's selection speed and accuracy and increase comfort. A tracking device tracks the three-dimensional position of the user's hand in space to select a character area, each character area containing several specific characters; a touch detection device detects touches between the user's thumb fingertip and the remaining fingertips, and these touches select the specific character.
When text is input with the provided input system, the user may choose one-handed or two-handed input; in the one-handed mode the user's other hand is free for other activities. The user does not need to carry additional physical equipment. In particular, this mode is suitable for fast text input in mobile scenarios and can be used for input on mobile devices such as virtual reality head-mounted displays and smart watches.
In addition, this mode has high environmental adaptability, is not limited by illumination, and can be conveniently applied in a variety of real environments. Compared with virtual keyboard input in current virtual reality, this mode relies entirely on the user's proprioception and haptic sense, requires no visual participation, and improves the user's input experience.
Drawings
FIG. 1 is a block diagram of an input system based on spatial localization and finger clicking according to the present invention;
FIG. 2 is a schematic diagram of an input method for text input using an input system based on spatial localization and finger clicking according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of an input method of a character M in an input method for inputting text by using an input system based on spatial localization and finger clicking according to a first embodiment of the present invention;
FIG. 4 is a schematic diagram of an input method for inputting a character Q in a text input method using an input system based on spatial localization and finger clicking according to a first embodiment of the present invention;
FIG. 5 is a schematic diagram of an input method for text input using an input system based on spatial localization and finger clicking according to a second embodiment of the present invention;
fig. 6 is a schematic diagram of an input method for text input by using an input system based on spatial positioning and finger clicking according to a third embodiment of the present invention.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention provides an input system based on space positioning and finger clicking which, as shown in figure 1, is composed of a touch detection module, a hand space positioning module, a data processing module and a display module.
The touch detection module is used for detecting touch action data of a user finger; in the embodiment of the invention, the touch detection module is a touch detection device arranged on the fingertip of the user finger, and is specifically a pressure sensor or a conductive material.
When the fingertip of the thumb touches another fingertip, a pressure or current signal is generated as the touch action data of the user's finger.
Different pairs of touching fingertips generate different pressure or current signals, so that the two touching fingers can be conveniently distinguished.
The hand space positioning module is used for detecting the spatial position data of the user's hand. It positions the user's hands using laser or sound waves, or estimates the spatial position of the hands by acquiring body posture images of the user in real time and applying image processing methods such as deep learning. The three-dimensional pose of the user's hand in space is obtained as the user's hand spatial position data.
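As a rough sketch (not part of the patent), the positioning module's continuous 3-D output could be quantized into the discrete block-shaped spaces described below; the axis range, units and block count here are illustrative assumptions.

```python
def block_index(x, n_blocks=4, x_min=-0.4, x_max=0.4):
    """Map the hand's lateral position (metres, body-centred, assumed range)
    to a block-space index 0..n_blocks-1."""
    x = max(x_min, min(x, x_max - 1e-9))   # clamp into the reachable range
    width = (x_max - x_min) / n_blocks     # equal-width block spaces
    return int((x - x_min) // width)

print(block_index(-0.3))  # leftmost block -> 0
print(block_index(0.1))   # -> 2
```

The same quantization could be applied per axis (up-down, front-back) to obtain the row or character-space indices used in the embodiments.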
The data processing module acquires touch action data of fingers of a user and spatial position data of the hands of the user, and determines input characters according to preset touch action data of the fingers of the user and the spatial position data of the hands of the user and/or corresponding relations of the input characters;
in the embodiment of the present invention, the input character is determined according to the preset touch action data of the user's finger and the corresponding relationship between the user's hand spatial position data and the input character, which may specifically adopt the following manner:
dividing block-shaped spaces with set quantity in a space range in which the hands of a user can move, wherein each block-shaped space corresponds to one character block, and each character block comprises the characters with the set quantity; the touch action data of the fingers of the user correspond to the characters in the character blocks one by one; determining a character block where an input character is located according to a block space corresponding to the hand space position data of the user; and determining specific input characters according to the touch action data of the fingers of the user.
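A minimal sketch of this lookup in the data processing module: the block space selected by hand position picks a character block, and the fingertip the thumb pinched picks the character within it. The two-block layout and finger names below are illustrative assumptions, not the patent's actual character assignment.

```python
# Each character block maps a pinched fingertip to one character (assumed layout).
CHAR_BLOCKS = [
    {"index": "a", "middle": "b", "ring": "c"},
    {"index": "d", "middle": "e", "ring": "f"},
]

def resolve_character(block, finger):
    """Return the character for a (block space, pinched finger) pair, or None."""
    if 0 <= block < len(CHAR_BLOCKS):
        return CHAR_BLOCKS[block].get(finger)
    return None

print(resolve_character(1, "middle"))  # -> e
```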
If the set number of characters in the character block are arranged according to the row sequence: dividing character spaces in the block-shaped space according to the same number of lines as the character blocks, wherein each character space corresponds to one line of characters; determining the input character according to the preset touch action data of the user finger and/or the corresponding relation between the user hand space position data and the input character further comprises: determining a line where an input character is located according to a character space corresponding to the hand space position data of the user; and determining specific input characters according to the touch action data of the fingers of the user.
If the characters with the set number in the character block are arranged according to the row-column sequence: dividing character spaces in the block-shaped space according to lines and rows with the same number as the characters, wherein each character space corresponds to one character; determining the input character according to the preset touch action data of the user finger and/or the corresponding relation between the user hand space position data and the input character further comprises: and determining the input characters according to the character space corresponding to the hand space position data of the user.
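In this row-and-column variant, the character space the hand occupies selects the character directly, with no pinch discrimination; a sketch under an assumed 3x3 grid:

```python
# Assumed row-major grid of characters for one block (illustrative only).
GRID = [
    ["Q", "W", "E"],
    ["A", "S", "D"],
    ["Z", "X", "C"],
]

def char_at(row, col):
    """Select a character purely from the hand's (row, column) character space."""
    return GRID[row][col]

print(char_at(0, 2))  # -> E
```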
The characters in the invention are alphabetic characters, numeric characters or punctuation characters.
The display module is used for receiving input characters, generating an input result image by adopting a set input method, and rendering the input result image in a virtual environment for display.
Several specific examples are given below.
The first embodiment:
As shown in fig. 2, the user's hands are positioned in front of the user's waist, and the user inputs with both hands: the left hand inputs character block 1 and character block 2, and the right hand inputs character block 3 and character block 4. The characters in a character block are arranged in row-column order; the user moves a hand left and right to switch character blocks, and up and down to switch rows within a block. When the user's left hand is directly in front, the first row of characters in character block 2 can be input; moving down, the second row can be input; continuing down, the third row can be input. Moving the left hand leftwards switches to character block 1, with up-and-down movement again switching rows. Similarly, the right hand directly in front inputs characters in character block 3 and, after moving right, inputs characters in character block 4. A pinch of the thumb with one of the remaining fingertips inputs a specific character: for example, when the user's left hand is at the top left, pinching the thumb and index fingertip inputs the character E, pinching the thumb and middle fingertip inputs W, and pinching the thumb and ring fingertip inputs Q; at the lowest position directly in front, pinching the thumb and index fingertip inputs F, the thumb and middle fingertip inputs G, and the thumb and ring fingertip inputs H. Specifically, as shown in fig. 3, the user's right hand is at the lowest position directly in front, and pinching the thumb and index fingertip of the right hand inputs the character M; as shown in fig. 4, the user's left hand is at the top left, and the character Q is input by pinching the thumb and the tip of the middle finger of the left hand.
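The examples above can be collected into a lookup table; this partial reconstruction of the first embodiment's mapping (figs. 2-4) uses assumed zone names and includes only the characters the text names explicitly.

```python
# (hand, horizontal zone, vertical row) -> {pinched finger: character}
# Partial, assumed layout reconstructed from the named examples only.
LAYOUT = {
    ("left", "left", "top"):      {"index": "E", "middle": "W", "ring": "Q"},
    ("left", "front", "bottom"):  {"index": "F", "middle": "G", "ring": "H"},
    ("right", "front", "bottom"): {"index": "M"},
}

def enter(hand, zone, row, finger):
    """Look up the character for a hand pose plus pinched fingertip."""
    return LAYOUT.get((hand, zone, row), {}).get(finger)

print(enter("left", "left", "top", "ring"))        # -> Q
print(enter("right", "front", "bottom", "index"))  # -> M
```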
The second embodiment:
In contrast to the first embodiment, in which the user selects different rows by moving the hand up and down, in the second embodiment the user selects rows by moving the hand back and forth.
Third embodiment:
In contrast to the first embodiment, in which each row's characters are selected by pinching the thumb with one of three fingertips (index, middle, ring), in the third embodiment characters are selected by pinching the thumb with one of four fingertips (index, middle, ring, little), as shown in fig. 5. The left hand controls input of the characters in character block 1; the right hand controls input of the characters in character block 2 and character block 3 by moving left and right. Row selection within a character block is controlled by moving up and down. For example, when the right hand is at the top directly in front, the user inputs the character I by pinching the thumb and the tip of the little finger.
The fourth embodiment:
Compared with the first embodiment, the user inputs characters with the left hand only, and four positions along the left-right direction of the left hand represent four different character blocks.
Fifth embodiment:
The user's hands are placed vertically. As shown in fig. 6, the user controls character block 1 and character block 2 with the left hand, selecting the character block by moving back and forth and selecting the row within the block by moving away from or toward the body; the right hand likewise controls character block 3 and character block 4.
In all of the above modes, input can be performed with two hands or one hand; character-block selection and row selection within a block are controlled by the spatial position of the hand, and the position space and the movement pattern may be varied. A specific character is selected by a pinch of the thumb with a fingertip, and this selection scheme may also be varied.
In all embodiments, the user can use a set switch button to switch among capital letters, symbols and the like, so as to input characters not currently in a character block; this reduces hand movement in space while allowing more characters to be input. For example, after pressing the switch button in the first embodiment, the letters in character block 1 are switched to symbols, and when the user's left hand is at the top left, pinching the thumb and index fingertip inputs the corresponding symbol character.
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. An input system based on space positioning and finger clicking is characterized by comprising a touch detection module, a hand space positioning module, a data processing module and a display module;
the touch detection module is used for detecting touch action data of a user finger;
the hand space positioning module is used for detecting the hand space position data of the user;
the data processing module acquires touch action data of the fingers of the user and spatial position data of the hands of the user, and determines input characters according to preset touch action data of the fingers of the user and/or the corresponding relation between the spatial position data of the hands of the user and the input characters;
the display module is used for receiving the input characters, generating an input result image by adopting a set input method, and rendering the input result image in a virtual environment for display.
2. The system of claim 1, wherein the touch detection module is a touch detection device mounted on a fingertip of a user's finger, in particular a pressure sensor or a conductive material;
when the fingertip of the thumb touches the fingertip of another finger, a pressure or current signal is generated as the touch action data of the user's finger;
when the fingertips of the two touching fingers differ, the generated pressure or current signals differ.
3. The system of claim 1, wherein the hand space positioning module positions the hand of the user by means of laser or sound waves, and obtains a three-dimensional pose of the hand of the user in space as the hand spatial position data of the user.
4. The system according to claim 1, wherein the determining of the input character according to the preset data of the touch action of the finger of the user and the data of the spatial position of the hand of the user and/or the corresponding relationship of the input character specifically comprises:
dividing block-shaped spaces with set quantity in a space range in which the hands of a user can move, wherein each block-shaped space corresponds to one character block, and each character block comprises the characters with the set quantity;
the touch action data of the user finger corresponds to the characters in the character block one by one;
determining a character block where an input character is located according to a block space corresponding to the user hand space position data;
and determining specific input characters according to the touch action data of the fingers of the user.
5. The system of claim 4, wherein the set number of characters in the block of characters are arranged in a row order;
dividing character spaces in the block-shaped space according to the same number of lines as the character blocks, wherein each character space corresponds to one line of characters;
determining the input character according to the preset touch action data of the user finger and/or the corresponding relation between the user hand space position data and the input character further comprises:
determining a line where an input character is located according to a character space corresponding to the user hand space position data;
and determining specific input characters according to the touch action data of the fingers of the user.
6. The system of claim 4, wherein the set number of characters in the character block are arranged in a row-column order;
dividing character spaces in the block-shaped space according to lines and rows with the same number as the characters, wherein each character space corresponds to one character;
determining the input character according to the preset touch action data of the user finger and/or the corresponding relation between the user hand space position data and the input character further comprises:
and determining input characters according to the character space corresponding to the hand space position data of the user.
7. The system of any one of claims 1 to 6, wherein the characters are alphabetic characters, numeric characters, or punctuation characters.
CN201910842233.9A 2019-09-06 2019-09-06 Input system based on space positioning and finger clicking Pending CN110780732A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910842233.9A CN110780732A (en) 2019-09-06 2019-09-06 Input system based on space positioning and finger clicking


Publications (1)

Publication Number Publication Date
CN110780732A true CN110780732A (en) 2020-02-11

Family

ID=69384065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910842233.9A Pending CN110780732A (en) 2019-09-06 2019-09-06 Input system based on space positioning and finger clicking

Country Status (1)

Country Link
CN (1) CN110780732A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111782031A (en) * 2020-05-26 2020-10-16 北京理工大学 Text input system and method based on head movement and finger micro-gestures
CN111782032A (en) * 2020-05-26 2020-10-16 北京理工大学 Input system and method based on finger micro-gestures
CN111831112A (en) * 2020-05-26 2020-10-27 北京理工大学 Text input system and method based on eye movement and finger micro-gesture

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246349A (en) * 2013-04-03 2013-08-14 汕头大学 Input method of wearable finer-sensing wireless communication, and input device using input method
CN103793057A (en) * 2014-01-26 2014-05-14 华为终端有限公司 Information processing method, device and equipment
US20140285443A1 (en) * 2013-03-19 2014-09-25 Unisys Corporation Method and system for keyglove fingermapping an input device of a computing device
CN104484073A (en) * 2014-12-31 2015-04-01 北京维信诺光电技术有限公司 Hand touch interaction system
CN108932100A (en) * 2017-05-26 2018-12-04 成都理想境界科技有限公司 A kind of operating method and head-mounted display apparatus of dummy keyboard



Similar Documents

Publication Publication Date Title
EP2817693B1 (en) Gesture recognition device
JP5166008B2 (en) A device for entering text
US20060279532A1 (en) Data input device controlled by motions of hands and fingers
CN110780732A (en) Input system based on space positioning and finger clicking
CN106104421A (en) A kind of finger ring type wireless finger sense controller, control method and control system
EP3371678B1 (en) Data entry device for entering characters by a finger with haptic feedback
EP1377460A1 (en) Improved keyboard
KR20130088752A (en) Multidirectional button, key, and keyboard
WO2009002787A2 (en) Swipe gestures for touch screen keyboards
WO2010016065A1 (en) Method and device of stroke based user input
US8576170B2 (en) Joystick type computer input device with mouse
JP6740389B2 (en) Adaptive user interface for handheld electronic devices
WO2016018789A1 (en) Dual directional control for text entry
KR100499391B1 (en) Virtual input device sensed finger motion and method thereof
CN105138136A (en) Hand gesture recognition device, hand gesture recognition method and hand gesture recognition system
US20130194190A1 (en) Device for typing and inputting symbols into portable communication means
CN111831112A (en) Text input system and method based on eye movement and finger micro-gesture
CN101124532B (en) Computer input device
US9557825B2 (en) Finger position sensing and display
KR101826552B1 (en) Intecrated controller system for vehicle
CN105138148A (en) Wearable gesture input device and input method
US20230236673A1 (en) Non-standard keyboard input system
US20010035858A1 (en) Keyboard input device
CN107817911B (en) Terminal control method and control equipment thereof
CN105242795A (en) Method for inputting English letters by azimuth gesture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200211