
CN117075727A - Method and equipment for realizing keyboard input in virtual space

Info

Publication number
CN117075727A
Authority
CN
China
Prior art keywords
finger
virtual
palm
joint
hand
Prior art date
Legal status
Pending
Application number
CN202311030170.XA
Other languages
Chinese (zh)
Inventor
潘庭安
潘仲光
Current Assignee
Dengying Technology Co ltd
Original Assignee
Dengying Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Dengying Technology Co ltd
Priority to CN202311030170.XA
Publication of CN117075727A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/018 Input/output arrangements for oriental characters
    • G06F 3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)

Abstract

The invention provides a method and a device for realizing keyboard input in a virtual space, applied to XR (extended reality) wearable devices. First, a pre-trained hand detection model capable of acquiring the planar positions of the hand joints is constructed, and a coordinate system with any joint point of a single hand as the origin is established. Virtual keys are set at M different orientations of the palm surface of each hand and at the preset positions reached when each finger extends in N different directions, and output content is defined for each virtual key. The fingers are tracked, the direction in which a finger extends is judged, and the fingertip or joint position is calculated; when the fingertip or joint position exceeds the position threshold of a virtual key, the content defined by that key is output, thereby realizing keyboard input in the virtual space. The method does not require the user to visually align the fingertips with keyboard keys rendered in the virtual space, can overcome misjudgment of the active finger caused by linkage or occlusion of the middle, ring, and little fingers, and can overcome drift or jitter of the hand-joint coordinate positions in the picture during use.

Description

Method and equipment for realizing keyboard input in virtual space
Technical Field
The invention belongs to the technical field of virtual keyboards, and particularly relates to a method and a device for realizing keyboard input in the virtual space of XR (extended reality) wearable devices and head-mounted display devices.
Background
Extended reality (XR) refers to a combined real-and-virtual, human-computer interactive environment generated by computer technology and wearable devices; it is an umbrella term covering augmented reality (AR), virtual reality (VR), mixed reality (MR), and similar forms.
With the popularization and development of XR (extended reality) across industries, the keyboard has been carried over as one of the most common input devices for user interaction with a system. A traditional physical keyboard offers tactile feedback: by learning the key positions and feeling them with the fingertips, we can touch-type. In the era of tablets, smartphones, and smartwatches, however, the physical keyboard was replaced by the touch screen; every point of the glass feels the same to the fingertip, so touch typing is no longer possible. XR glasses continue the touch-screen input paradigm, but now even the virtual keyboard's flat glass is gone, making touch typing even less feasible. Typically, a virtual keyboard is rendered and projected in front of the user's view by an XR head-mounted display, but such a virtual keyboard lacks the feel of a physical one: the user must visually align a fingertip with the position of a virtual key, the error often exceeds 1 to 2 cm, and during the aiming and pressing motion the vision computation sometimes cannot distinguish whether the fingertip pressed deeply on the intended key or onto the key below it, causing false triggering.
An earlier application, "Virtual keyboard interaction method and system" (patent No. ZL 202110505160.1, filed May 10, 2021), obtains the three-dimensional spatial coordinates of all fingertips in the image data to be detected, relative to a preset reference position, through a pre-trained fingertip detection model; determines the touch area corresponding to each fingertip from those coordinates; when the touch area overlaps a preset sensing area of the virtual keyboard, obtains the volume of the touch area falling into the sensing area; and judges, from that volume and preset rules, whether the virtual key owning the sensing area is triggered, so that the user can interact with the virtual keyboard conveniently and quickly. Because triggering depends on the volume information of the sensing area, the overall virtual keyboard cannot be made smaller without risking false triggers or a fingertip brushing several keys at once, which constrains the layout of the virtual space.
This interaction of aiming with both eyes and touching with the fingertips, natural as it is for a person, has the following problems in a virtual environment:
(1) Linkage and occlusion: as shown in FIG. 1, the extensor tendons (EDC) of the middle, ring, and little fingers are connected by oblique juncturae tendinum, which makes the three fingers move in linkage. The camera on the glasses observes finger motion from the back of the hand, where the ring and little fingers are frequently occluded. When the ring and little fingers lift together in linkage without being occluded, the system cannot tell which finger the user intends to use, and a fingertip detection model based on finger image data will misjudge the active finger (the finger performing the keystroke). Conventional keyboards and touch screens have the advantage that a real touch determines where the click lands: even with linked fingers, the user only has to make sure the intended fingertip touches the correct position, so traditional computers, tablets, and phones never needed to solve this problem. The pre-trained hand-joint detection open-source packages on the market that obtain the planar positions of hand joints (Mediapipe, OpenPose, MobileHand, FuseNet, SeqHand, AlphaPose, etc.) predict occluded finger-joint positions by machine learning and do so with error, which is why XR glasses still have no good finger-tracking application.
(2) Drift and jitter: the environment seen by the camera is three-dimensional and the two hands move within that three-dimensional environment, but the captured frames are two-dimensional, and the open-source vision software that obtains planar hand-joint positions outputs the two-dimensional coordinates (X, Y) of the wrist, metacarpal joints, and finger joints within the frame. Because the camera on the glasses and the hands move relative to each other during use, the three-dimensional background also moves relatively, so the coordinate positions of the joints in the frame visibly drift and/or jitter. This seriously affects the computed joint positions, and the error in visually judging whether a fingertip touches the corresponding key position is especially large. In particular, when pressing a virtual key the user thinks of the motion as passing through the key in depth only; but because the pre-trained fingertip detection model outputs a two-dimensional position, the vision computation misjudges the fingertip as sliding onto the key below.
Disclosure of Invention
The invention aims to provide a method and a device for realizing keyboard input in the virtual space of XR (extended reality) wearable devices and head-mounted display devices, enabling the user to touch-type quickly in the virtual space, overcoming misjudgment of the active finger caused by linkage or occlusion of the middle, ring, and little fingers, and overcoming drift or jitter of the hand-joint coordinate positions in the picture during use.
The invention discloses a method for realizing keyboard input in a virtual space, applied to various XR (extended reality) wearable devices and head-mounted display devices, comprising the following steps:
step 1, constructing a pre-trained hand detection model capable of acquiring the positions of the hand joints, establishing a coordinate system with any joint point of a single hand as the origin, acquiring the two-dimensional spatial position of each joint point of the hand through the hand detection model, and converting it into each joint point's two-dimensional spatial position relative to the origin of that hand;
step 2, setting a virtual key and its position threshold at the preset position reached when each finger of the two hands extends in each of N different directions, and defining output content for each virtual key;
step 3, presetting a finger preparation action, and performing no joint-point tracking or fingertip position calculation while the hand detection model recognizes that the fingers are in the preparation action;
and step 4, tracking the finger that leaves the preparation action, judging its extension direction and calculating the fingertip position, then judging whether the fingertip position exceeds the position threshold of the virtual key; if so, outputting the content defined by that virtual key, thereby realizing keyboard input in the virtual space.
In step 2, the N different directions in which each finger extends are defined as follows: when the hands are suspended in the air making typing-like motions, four directions are defined relative to the plane formed by the metacarpal joints, namely upward, forward, downward, and inward toward the palm center; when the hands rest on a desktop making typing-like motions, three directions are defined, namely upward, forward, and inward toward the palm center relative to the plane formed by the metacarpal joints, plus a fourth direction in which a non-fingertip joint of the finger lifts upward.
Following the habit of two-handed keyboard typing, the virtual keys set in the four extension directions of each finger of the left and right hands (upward, forward, downward, and toward the palm center) correspond respectively, from top to bottom, to the four numbers and letters of that finger's column on the keyboard.
In step 2, the N different directions in which each finger extends may further include the directions the finger points after the wrist/finger swings left or right.
In step 4, the finger that leaves the preparation action is tracked and its extension direction and fingertip position are judged by the following rules:
if the finger leaving the preparation action is the thumb or index finger, the extension direction is judged and the fingertip position calculated from the spatial positions of the thumb's three joints or the index finger's four joints obtained by the hand detection model;
if the finger leaving the preparation action is the middle finger, the middle-finger judgment rule is used to judge its extension direction and calculate the fingertip position: from the spatial positions of the four middle-finger joints obtained by the hand detection model, the total length of the middle and distal segments and the degree of bending between the proximal and middle segments are calculated and converted into the spatial position of the middle fingertip relative to the origin;
if the finger leaving the preparation action is the ring finger: when the ring finger extends forward, downward, or toward the palm center relative to the plane formed by the metacarpal joints, the middle-finger judgment rule is used to judge the extension direction and calculate the fingertip position;
if the fingers leaving the preparation action are the ring finger and the little finger: when the proximal segments of the ring finger and the little finger lift at the same time, the middle-finger judgment rule is used to judge only the ring finger's extension direction and calculate its fingertip position;
if the finger leaving the preparation action is the little finger: when the little finger extends upward or forward relative to the plane formed by the metacarpal joints, the middle-finger judgment rule is used to judge the extension direction and calculate the fingertip position; when the little finger's proximal segment is not seen but the ring finger's proximal segment still is, the little finger is judged to extend downward relative to the plane formed by the metacarpal joints, and the fingertip position at the joint between the middle and proximal segments is calculated at a 45-degree oblique angle using the total length of the distal and middle segments; when the little finger is not seen and the ring finger's proximal segment follows it downward, the little finger is judged to extend inward toward the palm center relative to the plane formed by the metacarpal joints, and the fingertip position at the joint between the middle and proximal segments is calculated at a 90-degree bending angle using the total length of the distal and middle segments.
Another method for realizing keyboard input in a virtual space, applied to various XR (extended reality) wearable devices and head-mounted display devices, comprises the following steps:
step 1, constructing a pre-trained hand detection model capable of acquiring the planar positions of the hand joints, establishing a coordinate system with any joint point of a single hand as the origin, acquiring the two-dimensional spatial position of each joint point of the hand through the hand detection model, and converting it into each joint point's two-dimensional spatial position relative to the origin of that hand;
step 2, defining N different orientations for the palms of the two hands, setting a virtual key and its position threshold at a preset position of each finger for each preset orientation of the palm, and defining output content for each virtual key;
step 3, presetting a finger preparation action, and performing no joint-point tracking or fingertip position calculation while the hand detection model recognizes that the fingers are in the preparation action;
and step 4, determining the orientation by rotating the palm; when the palm is in a preset orientation and a finger leaves the preparation action, the hand detection model tracks the finger performing the straightening action, calculates and judges whether the fingertip position exceeds the position threshold of the virtual key, and if so outputs the content defined by that virtual key, thereby realizing keyboard input in the virtual space.
In step 4, alternatively, the orientation is determined by rotating the palm; when the palm is in a preset orientation, the preset position of each finger for that orientation displays the content of the corresponding virtual key; if a finger leaves the preparation action, the hand detection model tracks the finger being retracted or pressed down, calculates and judges whether the fingertip position falls short of the position threshold of the virtual key, and if so outputs the content defined by that virtual key, thereby realizing keyboard input in the virtual space.
In step 1, when multiple cameras provide synchronized video frames, the two-dimensional position of each joint point relative to the single-hand origin obtained from each individual camera is combined to compute each joint point's three-dimensional position relative to the single-hand origin.
An XR (extended reality) wearable device comprises at least one video processor connected to a binocular camera and to the driving modules of the left/right display modules; the binocular camera comprises at least a left and a right monocular camera group, each consisting of one or more camera sensors and generating image frames according to exposure instructions; the video processor performs the processing steps of any of the above methods for realizing keyboard input in a virtual space.
A chip for realizing keyboard input in a virtual space comprises an internally packaged integrated circuit substrate configured to perform the processing steps of any of the above methods for realizing keyboard input in a virtual space.
A further method for realizing keyboard input in a virtual space, applied to various XR (extended reality) wearable devices and head-mounted display devices, comprises the following steps:
step 1, acquiring the pointing and motion information of the palm or of each finger through a brain-computer interface;
step 2, defining N different directions for the palm or each finger of the two hands, setting a virtual key and its position threshold at a preset position for each preset direction pointed to by the palm or each finger, and defining output content for each virtual key;
step 3, presetting a finger preparation action;
and step 4, determining the orientation by rotating the palm or extending a finger; when the palm or finger is in a preset direction and the finger leaves the preparation action, judging from the information acquired by the brain-computer interface whether the fingertip exceeds the position threshold of the virtual key, and if so outputting the content defined by that virtual key, thereby realizing keyboard input in the virtual space.
The invention has the following beneficial effects:
(1) Solving the jitter problem: a coordinate system is established with a chosen hand joint as the origin, turning screen coordinates into coordinates relative to that hand joint.
(2) Solving the drift problem: the three-dimensional virtual keys of traditional virtual-keyboard input, larger than the fingers, are discretized into virtual keys that surround the fingertips as the fingers straighten in multiple directions; input no longer requires moving the palm to touch a key, so the whole hand need not move to type, and keyboard input is achieved by pointing each finger in one of N directions, or the palm surface in one of M orientations relative to the glasses plane, which solves the drift problem.
(3) Solving fingertip occlusion: the joint bending angle is calculated from the total length of the finger's distal and middle segments together with the time-sequenced position information of all its joint points, and whether the finger straightens, and in which direction, is judged relative to the plane formed by the palm; the exact fingertip position can thus be predicted from the time-sequenced positions of the visible joint points, solving the problem that an occluded fingertip's position cannot be computed accurately.
(4) Solving misjudgment of the active finger under linkage: the invention stipulates that when the proximal segments of the ring and little fingers lift at the same time, the ring finger is judged to be the typing finger, only its fingertip position is calculated, and the little finger's motion is ignored; when the proximal segments of the little and ring fingers go down at the same time, the little finger is judged to be the typing finger, straightening inward toward the palm.
Because the user does not need to watch and move the hands and fingertips to aim at virtual key positions, but only needs to remember the keys bound to the N straightening/bending directions of each finger and/or the M orientations of the palm surface relative to the glasses plane, fast touch typing of keyboard input can be achieved in the virtual space; the vision computation based on the MR glasses is therefore simple and not prone to misjudgment.
Drawings
FIG. 1 is a schematic view of the extensor tendons (EDC) of the middle, ring, and little fingers, with the oblique juncturae tendinum between them;
FIG. 2 shows the 21 joint points of a human hand and their names, as identified on the Mediapipe official site;
FIG. 3 shows the input content defined for the input gestures of each finger of both hands in the first embodiment of the present invention;
FIG. 4 is a schematic diagram of a human hand with each finger divided into proximal, middle, and distal segments;
FIG. 5 is a schematic diagram of the index finger lifting upward to display the input content "7" in the embodiment of the present invention;
FIG. 6 is a schematic diagram of the index finger clicking forward to display the input content "Y" in the embodiment of the present invention;
FIG. 7 is a schematic diagram of the index finger pointing downward to display the input content "H" in the embodiment of the present invention;
FIG. 8 is a schematic diagram of the index finger pointing toward the palm to display the input content "N" in the embodiment of the present invention;
FIG. 9 is a schematic diagram of the ring finger lifting upward to display the input content "9" in the embodiment of the present invention;
FIG. 10 is a schematic diagram of the ring finger clicking forward to display the input content "O" in the embodiment of the present invention;
FIG. 11 is a schematic diagram of the ring finger clicking downward to display the input content "L" in the embodiment of the present invention;
FIG. 12 is a schematic diagram of the middle finger clicking forward to display the input content "U" in the first embodiment of the present invention;
FIG. 13 shows the four keys "6", "Y", "H" and "B" assigned to the right index finger in conventional keyboard input;
FIG. 14 is a schematic representation of binding the four keys "6", "Y", "H" and "B" to the four directions of the right index finger in the present invention;
FIG. 15 is a schematic diagram of typing with a straightened finger with the palm facing up in an embodiment of the present invention;
FIG. 16 is a schematic illustration of typing with the index finger straightened with the palm facing forward in an embodiment of the present invention;
FIG. 17 is a schematic diagram of typing with a straightened finger with the palm facing down in an embodiment of the present invention;
FIG. 18 is a schematic illustration of typing with a straightened finger with the palm facing inward in an embodiment of the present invention;
FIG. 19 is a diagram of the four palm orientations and of straightening as the preparation action in the third embodiment of the present invention;
FIG. 20 is a schematic view of typing by retracting the index finger with the palm up in the third embodiment of the present invention;
FIG. 21 is a schematic diagram of typing by pressing the index finger down with the palm up in the third embodiment of the present invention;
FIG. 22 is a schematic view of a non-fingertip joint of the index finger lifting upward while the hand rests on the desktop, in the first embodiment of the present invention;
FIG. 23 is a schematic view of the index finger pointing forward while the hand rests on the desktop with the fingertips touching it, in the first embodiment of the present invention;
FIG. 24 is a schematic view of the index finger lifting upward while the hand rests on the desktop with the fingertips touching it, in the first embodiment of the present invention;
FIG. 25 is a schematic view of the index finger pointing into the palm while the hand rests on the desktop with the fingertips touching it, in the first embodiment of the present invention;
FIG. 26 is a flow diagram of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the invention is described in further detail below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention; all other embodiments obtained by those of ordinary skill in the art based on these embodiments without inventive effort fall within the scope of the invention.
Description of principle:
The invention can use any pre-trained hand-joint detection open-source software on the market that acquires the planar positions of the hand joints; Mediapipe is taken as the example here. Mediapipe is an open-source Google project, a machine-learning tool library centered on vision algorithms; it integrates a large number of models such as face detection, face key points, gesture recognition, portrait segmentation, and pose recognition, and can output time-sequenced position information for the 21 joint points (also called key points) of a human hand in a video picture, as shown in FIG. 2.
Because the camera on the XR glasses and the hands move relative to each other during use, the three-dimensional environment serving as the background also moves relatively: the camera cannot hold camera, hands, and background in fixed correspondence while shooting, and the relative positions of background and hand mislead the vision computation. The joint coordinates (X, Y) produced by the vision computation are coordinates of the screen frame, and the computation adjusts the hand joints' (X, Y) as seen by the camera against the three-dimensional environment used as the coordinate reference; as a result, the display shows the coordinate positions of all joints drifting and/or jittering within the frame.
The invention focuses on:
(1) Solving the jitter problem: a coordinate system is established with a chosen hand joint as the origin, turning screen coordinates into coordinates relative to that joint. That is, the origin (0, 0) is no longer a fixed two-dimensional position of the camera image; instead a joint of the hand is set as the origin (0, 0). The invention takes the center point of the wrist as the origin (0, 0); all metacarpal and finger joint positions (X, Y) are then positions relative to the wrist origin. Because the overall size of the hand is fixed, the positions of the other joints, computed against the wrist origin, have no jitter problem. The concrete adjustment is: given the origin joint position (Xo, Yo) and the other joint positions (Xi, Yi), subtract (Xo, Yo) from all joint positions; the new position of the origin joint is (0, 0), and each other joint becomes (Xi - Xo, Yi - Yo).
Example: suppose that in a conventionally captured picture the wrist is at position (100, 100) in the first frame and the index fingertip at (80, 280), and that in the second frame the wrist position becomes (105, 105) and the index fingertip (85, 285). With the wrist position of the first frame as the origin (0, 0), the index fingertip position in the first frame is X = 80 - 100 = -20 and Y = 280 - 100 = 180, i.e. (-20, 180); with the wrist position (105, 105) of the second frame as the origin (0, 0), the index fingertip position is X = 85 - 105 = -20 and Y = 285 - 105 = 180, i.e. (-20, 180). The index fingertip positions of the first and second frames are identical, so the jitter problem is eliminated.
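The origin subtraction can be expressed as a minimal sketch (the joint indexing follows Mediapipe's hand layout, and the coordinate values reproduce the two-frame example above; this is an illustration, not the patented implementation):

```python
# Minimal sketch of the wrist-origin transform described above. Index 0 is
# the wrist; the numbers reproduce the two-frame example, where the whole
# hand shifts by (5, 5) between frames due to camera jitter.

def to_wrist_relative(joints):
    """Convert screen-space (x, y) joint positions to wrist-relative ones."""
    xo, yo = joints[0]                       # origin joint (wrist)
    return [(x - xo, y - yo) for x, y in joints]

frame1 = [(100, 100), (80, 280)]             # [wrist, index fingertip]
frame2 = [(105, 105), (85, 285)]             # same pose, shifted by (5, 5)

for frame in (frame1, frame2):
    print(to_wrist_relative(frame)[1])       # (-20, 180) both times
```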
(2) Solving the drift problem: large movements cause drift. An augmented-reality image of keyboard keys is superimposed on the captured picture; the key image may be anchored at a fixed position of the screen (0DoF), at a position relative to the glasses (3DoF), or at a position in the background three-dimensional environment (6DoF), where DoF = degree of freedom. When typing, because the total area of the keyboard is larger than the hand, the user must move the hand and the relevant fingertip to the virtual key to be touched. Since the fingertip is touching a virtual key that does not exist in the air and gives no tactile feedback at all, the user must align fingertip and virtual key in the background three-dimensional environment by watching the camera's picture with the eyes. This amounts to the eyes judging across four layers at once: camera picture, finger, virtual key, and background three-dimensional environment. Worse, when observing through XR glasses with both eyes, the hand, background, and keyboard are three-dimensional objects at different distances, i.e. objects with different parallax for the two eyes; the eyes cannot resolve the distances of the multiple ghost images, which makes alignment even harder.
The invention discretizes the three-dimensional virtual keys of traditional virtual-keyboard input, larger than the fingers, into virtual keys that surround the fingertips or chosen joints when the fingers straighten or bend in several directions, as shown in FIGS. 13 and 14 (the invention is exemplified with four directions: up, forward, down, and toward the palm). Input no longer requires moving the palm to touch a virtual key, so typing does not require moving the whole hand; the fingers or the palm each point in one of N directions, which solves the drift problem.
(3) Solving fingertip occlusion: as shown in FIG. 4, each finger of a human hand is divided into three segments from the palm to the fingertip: proximal, middle, and distal. As shown in FIG. 2, taking the index finger as an example, the total length of its distal segment (the distance between joint points 8 and 7) and middle segment (joint points 7 and 6) can be calculated from the position information of joint points 6 to 8; the bending angle of the joint can be calculated from the time-sequenced position information of joint points 5 to 8; and the index finger's straightening action and direction can be judged relative to the plane formed by the wrist origin and the metacarpal joints. The exact fingertip position can thus be predicted from the time-sequenced positions of the visible joint points, solving the problem that an occluded fingertip's position cannot be computed accurately.
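Under the stated assumptions (FIG. 2's joint numbering, a 2D image plane, and a calibration pass while the finger is fully visible), the fingertip prediction can be sketched as follows; the function name, example coordinates, and the sign convention of the bend angle are illustrative, not taken from the patent:

```python
import math

# Index-finger joints per FIG. 2: 5 = MCP, 6 = PIP, 7 = DIP, 8 = fingertip.
# While all joints are visible, calibrate the middle+distal total length:
j6, j7, j8 = (0.0, 40.0), (0.0, 62.0), (0.0, 80.0)
tip_mid_len = math.dist(j8, j7) + math.dist(j7, j6)   # = 40.0 here

def predict_fingertip(mcp, pip, length, bend_deg):
    """Predict an occluded fingertip from the visible MCP and PIP joints,
    assuming the middle+distal segments bend bend_deg off the proximal
    segment's direction and span the calibrated total length."""
    base = math.atan2(pip[1] - mcp[1], pip[0] - mcp[0])  # proximal direction
    ang = base + math.radians(bend_deg)                  # assumed joint bend
    return (pip[0] + length * math.cos(ang),
            pip[1] + length * math.sin(ang))

# e.g. a 45-degree bend, as assumed later for the occluded little finger:
print(predict_fingertip((0.0, 0.0), j6, tip_mid_len, 45.0))
```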
(4) Solving misjudgment of the active finger under linkage: the invention stipulates that when the proximal segments of the ring and little fingers lift at the same time, the ring finger is judged to be the typing finger, only its fingertip position is calculated, and the little finger's motion is ignored; when the proximal segments of the little and ring fingers go down at the same time, the little finger is judged to be the typing finger, straightening inward toward the palm.
FIGS. 5 to 12 show how the two-dimensional coordinates of the virtual keys triggered when a finger straightens in the four different directions correspond to the joint-point coordinates in the two-dimensional image. If the vision computation can obtain two-dimensional video from two cameras, the depth of each joint point can be derived from the distance between the cameras, yielding three-dimensional joint-point positions and more accurate spatial computation. However, the invention can confirm the typing action of the three-dimensional virtual keyboard from the position information of a two-dimensional image alone, without computing binocular parallax. The judging steps of the invention are: judge that the finger performing the straightening action is the typing finger; judge its straightening direction from the time-sequenced positions of the typing finger's four joints and the metacarpal joints; and output the virtual key's defined content when the typing finger's fingertip extends beyond the key's preset position (including possible overlapping positions).
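The trigger test can be sketched as a projection against a per-key threshold; the direction vectors, the threshold value, and the "6"/"B" binding (taken from FIG. 14's right-index example) are illustrative assumptions:

```python
# A key stores a direction unit vector and a scalar threshold; it fires when
# the fingertip's displacement from the metacarpal joint, projected onto that
# direction, exceeds the threshold. Positions are wrist-relative 2D values.

def key_triggered(fingertip, mcp, key_dir, threshold):
    disp = (fingertip[0] - mcp[0], fingertip[1] - mcp[1])
    return disp[0] * key_dir[0] + disp[1] * key_dir[1] >= threshold

RIGHT_INDEX_KEYS = {
    (0.0, 1.0): ("6", 60.0),     # straighten upward (per FIG. 14)
    (0.0, -1.0): ("B", 60.0),    # toward the palm   (per FIG. 14)
}

tip, mcp = (5.0, 75.0), (0.0, 0.0)
for key_dir, (char, thr) in RIGHT_INDEX_KEYS.items():
    if key_triggered(tip, mcp, key_dir, thr):
        print(char)              # prints '6' for this fingertip position
```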
A conventional keyboard is shown in FIG. 13. Because the keyboard is planar and larger than a hand, the user must aim a fingertip, move it through three-dimensional space, and touch the desired key, which the vision computation then triggers. One application of the invention is a three-dimensional virtual keyboard in which virtual touches or keys appear on the four sides (up, front, down, inward) of the palm and fingers. The three-dimensional keyboard can be used fixed in space, the keys of the virtual keyboard fitting over the user's hands; or the virtual keyboard is the user's hand itself and moves with the hand. The invention thus does away with the keyboard in the virtual space: each finger is surrounded by several virtual keys, the virtual keyboard separates the left-hand and right-hand halves, and since the hands can type in any visible space with the keys following the fingers, there is no fixed-size problem. The vision computation based on MR glasses is simple and not prone to misjudgment. Because the user does not need to look at and move the hands to aim at virtual keys for typing, but only to remember the keys bound to the N directions of each finger or palm, fast touch typing of keyboard input can be achieved in the virtual space.
Example one (case of determining direction by finger)
As shown in FIG. 26, the first embodiment of the present invention relates to a method for realizing keyboard input in a virtual space, applied to various XR (extended reality) wearable devices and head-mounted display devices, comprising the following steps:
Step 1: construct a pre-trained hand detection model capable of acquiring the planar positions of the hand joints, and establish a coordinate system with any joint point of a single hand as the origin (0, 0). The two-dimensional spatial position (X, Y) of each joint point of the hand is acquired through the hand detection model. The model may be developed from scratch, or realized by adapting existing open-source software (such as Mediapipe): the existing software outputs the two-dimensional coordinate positions (X, Y) of the 21 joint points of a single hand in the picture after vision computation, and the adaptation converts each joint point's picture coordinates (Xi, Yi) into coordinates (Xi - Xo, Yi - Yo) relative to that hand's origin (0, 0). If the glasses carry multiple cameras, the two-dimensional positions (Xi, Yi) acquired by each camera from synchronized video frames are combined to compute the three-dimensional position of each joint point relative to the single-hand origin.
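A minimal sketch of such an adaptation using the (legacy) Mediapipe Hands solutions API; the wrist, landmark 0, serves as the origin, and camera capture, pixel scaling, and multi-camera fusion are omitted. The function name is illustrative:

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2,
                                 min_detection_confidence=0.5)

def wrist_relative_landmarks(bgr_frame):
    """Return each detected hand's 21 landmarks as (x, y) pairs relative
    to that hand's wrist (landmark 0). Coordinates are Mediapipe's
    normalized image coordinates."""
    results = hands.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    out = []
    for hand in results.multi_hand_landmarks or []:
        xo, yo = hand.landmark[0].x, hand.landmark[0].y   # wrist origin
        out.append([(p.x - xo, p.y - yo) for p in hand.landmark])
    return out
```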
Step 2: as shown in FIGS. 5 to 12, set a virtual key and its position threshold at the preset position reached when each finger of the two hands extends in each of N different directions, and define output content (letters, numbers, symbols, emoticons, etc.) for each virtual key. The N directions per finger can be defined as required: for example, when the hands are suspended making typing-like motions, a finger can extend upward, forward, downward, and toward the palm center relative to the plane formed by the origin and the metacarpal joints; when the hands rest on a desktop making typing-like motions, a finger can extend upward and forward relative to that plane and toward the palm center (see FIGS. 23 to 25), and a non-fingertip joint can lift upward (see FIG. 22). The virtual key is placed and displayed near the fingertip position when the finger is extended (for example, at the middle of the distal and middle segments). In this embodiment, as shown in FIG. 3, to achieve fast touch typing of the virtual keyboard, and following the habit of two-handed keyboard typing, the virtual keys in each finger's four extension directions (upward, forward, downward or "L"-shaped lifting, and toward the palm center) correspond, from top to bottom, to the four numbers and letters of that finger's column on the keyboard; for example, the left little finger binds "1" upward, "Q" forward, "A" downward, and "Z" toward the palm center. The content bound to each virtual key and its preset position can be defined per application scenario and usage habit; the invention does not limit them specifically. The extension directions can be subdivided further, e.g. after the wrist/finger rotates left, center, or right, an upward extension splits into upper-left, upper-middle, and upper-right, letting a single hand bind more virtual keys. A sketch of such a binding table follows this paragraph.
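Only the left little finger (from the text above) and the right index finger (from FIG. 14) are filled in; the remaining fingers would follow the same keyboard-column pattern. The table structure is an assumption for illustration:

```python
# (hand, finger) -> {direction: output content}; an assumed partial layout.
KEY_MAP = {
    ("left", "little"): {"up": "1", "forward": "Q", "down": "A", "palm": "Z"},
    ("right", "index"): {"up": "6", "forward": "Y", "down": "H", "palm": "B"},
}

def lookup(hand, finger, direction):
    return KEY_MAP.get((hand, finger), {}).get(direction)

print(lookup("right", "index", "down"))   # -> 'H'
```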
Step 3, presetting a preparation action of a finger, for example, presetting a gesture of bending the finger on a desktop as the preparation action, and not carrying out joint tracking and fingertip position calculation when the hand detection model recognizes that the bending of the finger is in the preparation action;
step 4, tracking the finger separated from the preparation action, judging the extending direction of the finger and calculating the position of the fingertip, further judging whether the position of the fingertip exceeds the position threshold of the virtual key, and outputting the content defined by the virtual key if the position of the fingertip exceeds the position threshold of the virtual key, so that the keyboard input is realized in the virtual space, specifically:
step 4.1, tracking the finger separated from the preparation action and judging the extending direction and the fingertip position of the finger, wherein the specific judgment rule is as follows:
if the finger leaving the preparation action is the thumb or index finger, as shown in FIGS. 5 to 8: because the thumb and index finger of a hand are not normally occluded, the extension direction is judged and the fingertip position calculated from the spatial positions of the thumb's three joints or the index finger's four joints obtained by the hand detection model;
if the finger leaving the preparation action is the middle finger: as shown in FIG. 4, each finger of a human hand is divided into three segments from the palm to the fingertip: proximal, middle, and distal. Because the distal segment's independent mobility is limited (it only needs to curl when making a fist, and when typing it moves directly with the middle segment and cannot bend much on its own), the middle-finger judgment rule is used to judge the middle finger's extension direction and calculate its fingertip position: from the spatial positions of the four middle-finger joints obtained by the hand detection model, calculate the total length of the middle and distal segments and the degree of bending between the proximal and middle segments, and convert these into the spatial position of the middle fingertip relative to the origin, see FIG. 12;
if the finger leaving the preparation action is the ring finger: as shown in FIGS. 9 to 11, lifting the ring finger drags the middle and little fingers along, but the ring finger has no linkage problem in the forward, downward, and palm-center directions relative to the plane formed by the wrist origin and the metacarpal joints, so the middle-finger judgment rule is used to judge the extension direction and calculate the fingertip position;
if the fingers leaving the preparation action are the ring finger and the little finger: the ring finger has the problem that lifting it upward drags the middle and little fingers up with it unless the middle finger is deliberately pressed down; in particular, for the ring fingertip to straighten upward, the proximal segment of the little finger must lift. Conversely, with the other fingers bent downward, the little finger alone can straighten upward. Therefore, when the proximal segments of the ring and little fingers lift at the same time, the middle-finger judgment rule is used to judge only the ring finger's extension direction and calculate its fingertip position;
if the finger leaving the preparation action is the little finger: the little finger, like the index finger, has two extensor tendons and can move up, down, or outward on its own; its ability to lift and straighten is strong, so the user can lift the little finger alone without lifting the ring finger. The real problem for the little finger is occlusion: not only the fingertip, but often the middle segment is invisible because the ring finger sits in front of it. If the middle segment is not seen, its position cannot be calculated even knowing the total length of the middle and distal segments, and the little fingertip's spatial position relative to the origin cannot determine the extension direction. When the little finger extends upward or forward, all joints are visible and the middle-finger judgment rule applies without difficulty. When the little finger extends downward or inward toward the palm center: because two oblique juncturae tendinum connect the extensor tendons (EDC) of the ring and little fingers, typing toward the palm pulls the ring finger down. When the little finger types downward, the ring finger only trembles without truly pulling down, but once the little finger starts moving down toward the palm, the ring finger's proximal segment begins to sink; when the little fingertip touches the palm, the ring finger's proximal segment is barely visible. Therefore, when the little finger's proximal segment is not seen but the ring finger's proximal segment still is, the little finger is judged to extend downward, and the fingertip position at the joint between the middle and proximal segments is calculated at a 45-degree oblique angle using the total length of the distal and middle segments; when the little finger is not visible and the ring finger's proximal segment follows it downward, even becoming completely invisible, the little finger is judged to extend inward toward the palm center, and the fingertip position at the joint between the middle and proximal segments is calculated at a 90-degree bending angle using the total length of the distal and middle segments.
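The little-finger disambiguation can be sketched as a small decision function; the visibility flags would come from the hand detection model, the returned bend angles feed a fallback fingertip calculation like the predict_fingertip() sketch earlier, and all names are illustrative:

```python
def judge_little_finger(pinky_all_visible, pinky_proximal_visible,
                        ring_proximal_visible, ring_proximal_down):
    """Return (judged extension direction, assumed bend angle in degrees)."""
    if pinky_all_visible:
        return "middle-finger rule", None   # up/forward: all joints are seen
    if not pinky_proximal_visible and ring_proximal_visible:
        return "down", 45.0                 # 45-degree oblique fallback
    if ring_proximal_down:
        return "palm", 90.0                 # 90-degree bend fallback
    return "undetermined", None

print(judge_little_finger(False, False, True, False))   # ('down', 45.0)
```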
Step 4.2: if the fingertip position exceeds the position threshold of the virtual key, output the content defined by that virtual key.
Example two (palm direction determination case)
The second embodiment of the present invention differs from the first in that, as shown in FIGS. 15 to 18, the orientation is determined by rotating the palm; meanwhile, the hand detection model tracks the finger that leaves the preparation action and straightens, calculates and judges whether the fingertip position exceeds the position threshold of the virtual key, and if so outputs the content defined by that virtual key, thereby realizing keyboard input in the virtual space.
The palm orientations can be customized as required; this embodiment gives an example for ease of understanding. Treat the XR glasses as a plane when worn and the palm surface formed by the user's metacarpal joints as a plane, and define the angle as 0 degrees when the palm surface is parallel to the glasses surface. Then:
the invention defines the four palm orientations as follows (see FIG. 19): an angle of 0 to 22.5 degrees between the palm surface and the glasses surface is upward; >22.5 to 45 degrees is forward; >45 to 67.5 degrees is downward; >67.5 to 90 degrees is inward; and all fingers bent is the preparation action. When the palm points upward, a corresponding virtual key appears at a position above each finger, with a threshold position preset for it; when the corresponding finger straightens, the hand detection model tracks the finger, and when the calculated fingertip position is judged to exceed the virtual key's preset position, the content defined by that virtual key is output.
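The quadrant bucketing of FIG. 19 reduces to a small mapping; computing the palm/glasses angle itself (e.g. from the metacarpal joint positions) is assumed done elsewhere:

```python
def palm_orientation(angle_deg):
    """Map the palm-surface/glasses-surface angle (0..90 degrees)
    to the four orientations of FIG. 19."""
    if angle_deg <= 22.5:
        return "up"
    if angle_deg <= 45.0:
        return "forward"
    if angle_deg <= 67.5:
        return "down"
    return "inward"

assert palm_orientation(10.0) == "up"
assert palm_orientation(50.0) == "down"
```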
The benefit of using the palm to determine direction is that there is no occlusion problem as in embodiment one: the palm is visible, and the fingers have only one direction, straightening parallel to the palm, which is likewise seen without obstruction. The disadvantage is that the wrist turns up and down through a larger range than the fingers, and triggering each key adds an action: the fingers straighten and recover while the wrist turns.
Example III (case of straightening into preparation action)
The preparation action of both previous embodiments is finger bending, and only straightening is the typing action. The invention can also be reversed, with all fingers straightened as the preparation action. Using the method of the second embodiment, the angle of the palm surface relative to the glasses surface determines the orientation; when the palm is in a preset orientation, the preset position of each finger for that orientation displays the content of the corresponding virtual key; when a finger retracts or presses down, leaving the preparation action, the hand detection model tracks the finger, and when the calculated fingertip position falls short of the virtual key's position threshold, the content defined by that virtual key is output, see FIGS. 20 and 21.
For ease of understanding, take the left palm facing up as an example: when all fingers are straightened, virtual key "1" is displayed on the DIP joint of the left little finger, "2" on the DIP joint of the left ring finger, "3" on the DIP joint of the left middle finger, "4" on the DIP joint of the left index finger, and "5" on the DIP joint of the left thumb. If the left index finger leaves the straightened state (retracted/curled/pressed down) so that its fingertip position falls away from, i.e. below, the preset position of virtual key "4", the "4" defined by that key is output, and the left index finger then returns to the straightened preparation action.
Fourth embodiment (brain-computer interface typing method)
A brain-computer interface can acquire specific output content as well as motion information of the body's joints. Acquiring specific alphanumerics, symbols, and emoticons is difficult and usually requires electrode wires implanted in the cerebral vessels to transmit the corresponding signals; but if one only needs to know which joints of the body straighten or bend, no implanted electrode wires are needed. The invention can therefore replace the vision computation with a brain-computer interface that obtains signals of which joint/wrist bends or straightens in which direction, and apply the same typing judgment logic.
The fourth embodiment of the invention provides a method for realizing keyboard input in a virtual space, applied to various XR (extended reality) wearable devices and head-mounted display devices, comprising the following steps:
step 1, acquiring the pointing and motion information of the palm or of each finger through a brain-computer interface;
step 2, defining N different directions for the palm or each finger of the two hands, setting a virtual key and its position threshold at a preset position for each preset direction pointed to by the palm or each finger, and defining output content for each virtual key;
step 3, presetting a finger preparation action;
and step 4, determining the orientation by rotating the palm or extending a finger; when the palm or finger is in a preset direction and the finger leaves the preparation action, judging from the information acquired by the brain-computer interface whether the fingertip exceeds the position threshold of the virtual key, and if so outputting the content defined by that virtual key, thereby realizing keyboard input in the virtual space.
Example five (other joints typing than finger tips)
Besides typing methods in which the fingertip position exceeds a nearby preset key threshold, other joints may be used for typing. Suppose the preparation action is fingers bent with the fingertips touching the desktop; the downward press of embodiment one cannot be used then, short of piercing the desktop. The system can instead define typing with the joint between the proximal and middle segments. FIG. 22 takes the index finger as an example: instead of requiring a downward motion of the index fingertip, joint #6 of the index finger is set to lift upward; the virtual key is displayed about 0.5 cm above joint #6, and when joint #6 lifts beyond the key's set position, the system outputs the preset content.
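A sketch of this variant's trigger; the rest height and offset values are illustrative, and the text's 0.5 cm would be converted to image units by the system:

```python
def joint_lift_triggered(joint6_y, rest_y, key_offset):
    """Fire when index joint #6 rises past the virtual key displayed
    key_offset above its rest position (y grows upward in this sketch)."""
    return joint6_y - rest_y >= key_offset

print(joint_lift_triggered(118.0, 100.0, 15.0))   # True: lifted past the key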
Example six
The sixth embodiment of the invention is an XR (extended reality) wearable device comprising at least one video processor connected to a binocular camera and to the driving modules of the left/right display modules; the binocular camera comprises at least a left and a right monocular camera group, each consisting of one or more camera sensors and generating image frames according to exposure instructions; the video processor performs the processing steps of any of the above methods for realizing keyboard input in a virtual space.
Seventh embodiment
The seventh embodiment of the present invention is a virtual-space keyboard input chip. The chip includes an internally packaged integrated circuit substrate, and the integrated circuit substrate is configured to execute the processing steps described in any one of the above methods for implementing keyboard input in a virtual space.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative elements and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The foregoing description of the embodiments is provided to illustrate the general principles of the invention and is not intended to limit the invention to the particular embodiments described; any modifications, equivalents, improvements, and the like that fall within the spirit and principles of the invention are intended to be included within its scope.

Claims (11)

1. A method for realizing keyboard input in a virtual space, applied to various XR (extended reality) wearable devices and head-mounted display devices, characterized by comprising the following steps:
step 1, constructing a pre-trained hand detection model capable of acquiring the positions of the joints of a hand, establishing a coordinate system with any joint point of a single hand as the origin, acquiring the two-dimensional spatial position of each joint point of the hand through the hand detection model, and converting each joint point's position into a two-dimensional spatial position relative to the single-hand origin;
step 2, setting virtual keys and position thresholds thereof at preset positions when each finger of the two hands stretches to point to N different directions, and defining output contents for the virtual keys;
step 3, presetting a finger preparation action; while the hand detection model recognizes that a finger is in the preparation action, joint point tracking and fingertip position calculation are not performed;
step 4, tracking the finger that has left the preparation action, judging its extension direction and calculating its fingertip position, then judging whether the fingertip position exceeds the position threshold of the virtual key, and if so, outputting the content defined by the virtual key, thereby realizing keyboard input in the virtual space.
2. The method for realizing keyboard input in virtual space according to claim 1, wherein in step 2 each finger extends to point in N different directions as follows: when the hands are suspended in a typing-like action, four directions are defined relative to the plane formed by the metacarpal joints, namely upward, forward, downward, and toward the palm center; when the hands are placed on a desktop in a typing-like action, three directions are defined, namely upward, forward, and extending toward the palm center relative to the plane formed by the metacarpal joints, plus a fourth direction in which a non-fingertip joint of the finger lifts upward.
3. The method for realizing keyboard input in virtual space according to claim 2, wherein the virtual keys arranged in the four extension directions of each finger of the left and right hands (upward, forward, downward, and toward the palm center) respectively correspond, from top to bottom, to the four rows of numbers and letters on the keyboard, in accordance with the habit of typing on a keyboard with both hands.
4. The method for implementing keyboard input in virtual space according to claim 2, wherein in step 2 the N different directions to which each finger extends include directions of finger extension reached after swinging the wrist or finger.
5. The method for implementing keyboard input in virtual space according to claim 1, wherein in step 4 the finger that has left the preparation action is tracked and its extension direction and fingertip position are determined according to the following rules:
if the finger that has left the preparation action is the thumb or the index finger, the extension direction is judged and the fingertip position calculated from the spatial positions of the three or four joints of the thumb or index finger obtained by the hand detection model;
if the finger that has left the preparation action is the middle finger, the middle-finger judgment rule is used to judge its extension direction and calculate its fingertip position: from the spatial positions of the four middle-finger joints obtained by the hand detection model, the total length of the middle and distal segments and the degree of bending between the proximal and middle segments are used to derive the spatial position of the middle fingertip relative to the origin;
if the finger that has left the preparation action is the ring finger: when the ring finger extends forward, downward, or toward the palm center relative to the plane formed by the metacarpal joints, the middle-finger judgment rule is used to judge its extension direction and calculate its fingertip position;
if the fingers that have left the preparation action are the ring finger and the little finger: when the proximal segments of the ring finger and the little finger lift at the same time, the middle-finger judgment rule is used to judge only the ring finger's extension direction and calculate its fingertip position;
if the finger that has left the preparation action is the little finger: when the little finger extends upward or forward relative to the plane formed by the metacarpal joints, the middle-finger judgment rule is used to judge its extension direction and calculate its fingertip position; when the proximal segment of the little finger is not visible but the proximal segment of the ring finger is, the little finger is judged to extend downward relative to the plane formed by the metacarpal joints, and the fingertip position between the middle and proximal segments is calculated from a 45-degree bend angle together with the total length of the distal and middle segments; when the little finger is not visible and the proximal segment of the ring finger follows it downward, the little finger is judged to extend inward toward the palm center relative to the plane formed by the metacarpal joints, and the fingertip position between the middle and proximal segments is calculated from a 90-degree bend angle together with the total length of the distal and middle segments.
6. A method for realizing keyboard input in a virtual space, applied to various XR (extended reality) wearable devices and head-mounted display devices, characterized by comprising the following steps:
step 1, constructing a pre-trained hand detection model capable of acquiring the planar positions of the joints of a hand, establishing a coordinate system with any joint point of a single hand as the origin, acquiring the two-dimensional spatial position of each joint point of the hand through the hand detection model, and converting each joint point's position into a two-dimensional spatial position relative to the single-hand origin;
step 2, respectively defining N different directions for the palms of the two hands to point to, setting a virtual key and its position threshold at a preset position for each finger in each preset direction pointed to by the palm, and defining output content for the virtual key;
step 3, presetting a finger preparation action; while the hand detection model recognizes that a finger is in the preparation action, joint point tracking and fingertip position calculation are not performed;
step 4, determining different orientations by rotating the palm; when the palm is in a preset direction and a finger leaves the preparation action, the hand detection model tracks the finger performing the straightening action, calculates and judges whether the fingertip position exceeds the position threshold of the virtual key, and if so, outputs the content defined by the virtual key, thereby realizing keyboard input in the virtual space.
7. The method for implementing keyboard input in virtual space according to claim 6, wherein in step 4 different orientations are determined by rotating the palm; when the palm is in a preset direction, the preset position of each finger corresponding to that direction displays the content of the corresponding virtual key; if a finger leaves the preparation action, the hand detection model tracks the finger being retracted or pressed down, calculates and judges whether the fingertip position falls short of the position threshold of the virtual key, and if so, outputs the content defined by the virtual key, thereby realizing keyboard input in the virtual space.
8. The method for implementing keyboard input in virtual space according to claim 1 or 6, wherein in step 1 the two-dimensional spatial positions of each joint point relative to the single-hand origin obtained from individual cameras are combined, using synchronized video frames collected by multiple cameras, to calculate the three-dimensional spatial position of each joint point relative to the single-hand origin.
9. An XR (extended reality) wearable device, comprising at least one video processor, the video processor being connected to a binocular camera and to the driving modules of the left and right display modules; the binocular camera comprises at least a left monocular camera group and a right monocular camera group, each monocular camera group being composed of one or more camera sensors and used to generate image frames according to exposure instructions; characterized in that the video processor performs the processing steps described in the method for implementing keyboard input in a virtual space of any one of claims 1 to 8.
10. A virtual-space keyboard input chip, comprising an internally packaged integrated circuit substrate, characterized in that the integrated circuit substrate is adapted to perform the processing steps described in the method for implementing keyboard input in a virtual space of any one of claims 1 to 8.
11. A method for realizing keyboard input in a virtual space, applied to various XR (extended reality) wearable devices and head-mounted display devices, characterized by comprising the following steps:
step 1, acquiring pointing and action information of a palm or each finger through a brain-computer interface;
step 2, respectively defining N different directions for the palm or each finger of the two hands to point to, setting a virtual key and its position threshold at a preset position for each preset direction pointed to by the palm or each finger, and defining output content for the virtual key;
step 3, presetting a preparation action of the finger;
step 4, determining different orientations by rotating the palm or extending the fingers; when the palm or a finger is in a preset direction and the finger leaves the preparation action, judging from the information acquired by the brain-computer interface whether the fingertip exceeds the position threshold of the virtual key, and if so, outputting the content defined by that virtual key, thereby realizing keyboard input in the virtual space.
CN202311030170.XA 2023-08-16 2023-08-16 Method and equipment for realizing keyboard input in virtual space Pending CN117075727A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311030170.XA CN117075727A (en) 2023-08-16 2023-08-16 Method and equipment for realizing keyboard input in virtual space

Publications (1)

Publication Number Publication Date
CN117075727A (en) 2023-11-17

Family

ID=88710836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311030170.XA Pending CN117075727A (en) 2023-08-16 2023-08-16 Method and equipment for realizing keyboard input in virtual space

Country Status (1)

Country Link
CN (1) CN117075727A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117472189A (en) * 2023-12-27 2024-01-30 大连三通科技发展有限公司 Typing or touch control realization method with physical sense
CN117472189B (en) * 2023-12-27 2024-04-09 大连三通科技发展有限公司 Typing or touch control realization method with physical sense

Similar Documents

Publication Publication Date Title
EP3090331B1 (en) Systems with techniques for user interface control
KR101620777B1 (en) Enhanced virtual touchpad and touchscreen
US9529523B2 (en) Method using a finger above a touchpad for controlling a computerized system
US11360551B2 (en) Method for displaying user interface of head-mounted display device
JP6524661B2 (en) INPUT SUPPORT METHOD, INPUT SUPPORT PROGRAM, AND INPUT SUPPORT DEVICE
US9477874B2 (en) Method using a touchpad for controlling a computerized system with epidermal print information
WO2012039140A1 (en) Operation input apparatus, operation input method, and program
US9857868B2 (en) Method and system for ergonomic touch-free interface
US9063573B2 (en) Method and system for touch-free control of devices
US20150363038A1 (en) Method for orienting a hand on a touchpad of a computerized system
US20140267121A1 (en) Method using a predicted finger location above a touchpad for controlling a computerized system
KR20150040580A (en) virtual multi-touch interaction apparatus and method
CN117075727A (en) Method and equipment for realizing keyboard input in virtual space
Cui et al. Mid-air interaction with optical tracking for 3D modeling
KR102184243B1 (en) System for controlling interface based on finger gestures using imu sensor
US20140253486A1 (en) Method Using a Finger Above a Touchpad During a Time Window for Controlling a Computerized System
CN117472189B (en) Typing or touch control realization method with physical sense
KR101488662B1 (en) Device and method for providing interface interacting with a user using natural user interface device
US20140253515A1 (en) Method Using Finger Force Upon a Touchpad for Controlling a Computerized System
Zhang et al. A novel human-3DTV interaction system based on free hand gestures and a touch-based virtual interface
Hoshino Hand gesture interface for entertainment games
WO2015178893A1 (en) Method using finger force upon a touchpad for controlling a computerized system
Kin et al. STMG: A Machine Learning Microgesture Recognition System for Supporting Thumb-Based VR/AR Input
Gil WearPut: Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions
JP7513262B2 (en) Terminal device, virtual object operation method, and virtual object operation program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination