CN116360589A - Method and medium for inputting information by virtual keyboard and electronic equipment - Google Patents


Info

Publication number
CN116360589A
CN116360589A
Authority
CN
China
Prior art keywords
virtual
hand
key
finger
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310195808.9A
Other languages
Chinese (zh)
Inventor
黄怡菲
陈可卿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TIANJIN JIHAO TECHNOLOGY CO LTD
Original Assignee
TIANJIN JIHAO TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TIANJIN JIHAO TECHNOLOGY CO LTD filed Critical TIANJIN JIHAO TECHNOLOGY CO LTD
Priority to CN202310195808.9A priority Critical patent/CN116360589A/en
Publication of CN116360589A publication Critical patent/CN116360589A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Input From Keyboards Or The Like (AREA)

Abstract

Embodiments of the present application provide a method, a medium, and an electronic device for inputting information via a virtual keyboard, wherein the method includes: confirming that a finger of a first hand selects a first virtual key on a virtual keyboard; and confirming, according to the gesture of a second hand, whether to use the first virtual key as an input key. By completing character input in a manner similar to an actual keyboard through the combined use of both hands, the embodiments of the present application ensure input accuracy while achieving an input speed significantly faster than the handle-ray input mode.

Description

Method and medium for inputting information by virtual keyboard and electronic equipment
Technical Field
The present application relates to the field of information input, and in particular to a method, a medium, and an electronic device for inputting information via a virtual keyboard.
Background
The technologies involved in existing head-mounted display devices mainly include virtual reality (VR), augmented reality (AR), and mixed reality (MR), as well as combinations and/or derivatives thereof. Their common principle is to process the display content in some way before presenting it to the user, so as to provide the user with a more immersive experience.
However, information input methods based on virtual keyboards in the related art have many drawbacks. For example, selecting keys by clicking the virtual keyboard with a handle ray (i.e., a ray emitted from an input device such as a handle held by the user) is slow, and converting voice input into text is insufficiently accurate. That is, in the related art, input is accomplished either by operating keys on a virtual keyboard through an external control device (e.g., a handle) or through voice; controlling the virtual keyboard through an external control device suffers from slow processing speed, while voice-based text input has low accuracy and is easily disturbed by ambient noise.
Disclosure of Invention
Embodiments of the present application aim to provide a method, a medium, and an electronic device for inputting information via a virtual keyboard. Completing character input on the virtual keyboard through the combined use of both hands ensures input accuracy, and the character input speed is significantly faster than ray-based input by means of an input device such as a handle.
In a first aspect, an embodiment of the present application provides a method for inputting information by using a virtual keyboard, where the method includes: confirming that a finger of a first hand selects a first virtual key on a virtual keyboard; and confirming whether the first virtual key is used as an input key according to the gesture of the second hand.
In these embodiments, selection of an input key is completed through the combination of two hands (one hand selects a key on the virtual keyboard, and the other hand confirms the selection), which improves the accuracy of input-key selection.
In some embodiments, the confirming whether to use the first virtual key as an input key according to the gesture of the second hand includes: if the second hand is monitored completing the target action while the first virtual key is selected, using the first virtual key as the input key; and if completion of the target action by the second hand is not monitored while the first virtual key is selected, not using the first virtual key as the input key.
In these embodiments, whether a given virtual key is used as the input key is determined from the relative states of the first hand and the second hand, which improves the accuracy of input-key selection.
In some embodiments, the monitoring that the second hand has completed the target action includes: if the continuously acquired multi-frame images show the second hand presenting a plurality of ordered states, confirming that the second hand has completed the target action, wherein the plurality of ordered states includes: a first state of preparing to make the target action, a second state of maintaining the target action, and a third state of ending the target action; or the plurality of ordered states includes: the second state of maintaining the target action and the third state of ending the target action.
In these embodiments, the target action of the second hand is confirmed only after a plurality of consecutive states of the second hand have been recognized, which improves the accuracy of the confirmation result.
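As a hedged illustration only, the multi-frame monitoring described above can be sketched as an ordered-subsequence check over per-frame hand states; the state labels and function names below are assumptions for illustration, not taken from this application.

```python
# Hypothetical sketch: confirm the target action when the per-frame hand
# states contain one of the required ordered state sequences.

READY, HELD, ENDED = "ready", "held", "ended"  # first/second/third states

def contains_in_order(states, pattern):
    """True if `pattern` occurs in `states` as an ordered subsequence."""
    it = iter(states)
    return all(any(s == p for s in it) for p in pattern)

def target_action_completed(frame_states):
    """Completed if the frames show ready -> held -> ended, or merely
    held -> ended, matching the two variants of ordered states above."""
    return (contains_in_order(frame_states, (READY, HELD, ENDED))
            or contains_in_order(frame_states, (HELD, ENDED)))
```

A per-frame classifier would feed `frame_states` from the continuously acquired images; requiring the full ordered sequence rather than a single frame is what filters out transient misdetections.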
In some embodiments, after the first virtual key is used as the input key, the method further comprises: generating target input information corresponding to the input key according to the attribute information of the target action, wherein the input key corresponds to multiple kinds of input information, and the target input information is one of them.

In these embodiments, the input information corresponding to the virtual key is switched through the attribute information of the target action, which improves the universality of the technical scheme.
In some embodiments of the present application, the attribute information includes: the duration of the target action, the frequency of the target action, or the type of finger engaged in the target action.
In some embodiments, the target action is a pinch of at least two fingers.
In these embodiments, pinching is used as the target action performed by the second hand; because a pinch gesture is easy to recognize, the accuracy of recognizing the target action is improved, which further improves the accuracy of input-key selection.
In some embodiments, the first state of preparing the target action is a ready-to-pinch state, the second state of maintaining the target action is a pinching state, and the third state of ending the target action is a released (non-pinch) state.
In these embodiments, completion of the target action by the second hand is confirmed by recognizing the ready-to-pinch state (the two or more fingers to be brought into contact are close together), the pinching state (the two or three fingers are in contact), and the released state, or a subset of these three states; this improves the accuracy of recognizing the completed-target-action event, and further improves the accuracy of input-key selection.
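A minimal sketch of how a single frame might be classified into the three pinch states from fingertip positions; the distance thresholds and the thumb-index pairing are illustrative assumptions, not values specified by this application.

```python
import math

# Hypothetical distance thresholds (metres); a real system would tune
# these to the accuracy of its hand-tracking pipeline.
TOUCH_DIST = 0.010  # fingertips in contact -> pinching (second state)
NEAR_DIST = 0.030   # fingertips close      -> ready to pinch (first state)

def pinch_state(thumb_tip, index_tip):
    """Classify one frame from the thumb-index fingertip distance as
    'pinching', 'ready', or 'released' (the pinch-ended third state)."""
    d = math.dist(thumb_tip, index_tip)
    if d <= TOUCH_DIST:
        return "pinching"
    if d <= NEAR_DIST:
        return "ready"
    return "released"
```

Running this classifier on each frame yields the state sequence whose ordering is then checked to confirm the completed target action.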
In some embodiments, the first virtual key is a letter key; the method further includes: acquiring the duration of the pinching state; wherein the generating of target input information corresponding to the input key according to the attribute information of the target action further includes: selecting the uppercase or lowercase letter corresponding to the letter key as the target input information according to the duration.
In these embodiments, the uppercase or lowercase letter corresponding to the selected target virtual key is chosen as the input according to the pinch duration, which improves the universality of the technical scheme.
In some embodiments, the selecting, according to the duration, of the uppercase or lowercase letter corresponding to the letter key as the target input information includes: determining whether the target input information is the uppercase or the lowercase letter by comparing the duration with a duration threshold.
In these embodiments, the uppercase or lowercase letter corresponding to the target virtual key is selected as the final input information according to whether the pinch duration exceeds a preset duration threshold, which improves the convenience of case switching and the processing speed of case-switching events.
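The duration-based case switch could be sketched as follows; the threshold value, the long-pinch-means-uppercase convention, and the function name are illustrative assumptions.

```python
# Hypothetical pinch-duration threshold, in seconds.
CASE_THRESHOLD_S = 0.5

def letter_for_key(letter_key, pinch_duration_s):
    """Select the uppercase or lowercase letter of the selected letter key
    from the pinch duration (here, assumed: longer pinch -> uppercase)."""
    if pinch_duration_s >= CASE_THRESHOLD_S:
        return letter_key.upper()
    return letter_key.lower()
```

The same comparison-against-a-threshold pattern extends to other attribute information (e.g., pinch frequency or which finger pinched) selecting among a key's multiple input values.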
In some embodiments, the confirming that the finger of the first hand selects the first virtual key on the virtual keyboard comprises: if it is confirmed that the finger of the first hand is in contact with the first virtual key on the virtual keyboard, confirming that the finger of the first hand selects the first virtual key; or if it is confirmed that the virtual finger corresponding to the finger of the first hand is in contact with the first virtual key of the virtual keyboard, confirming that the finger of the first hand selects the first virtual key.
Some embodiments of the present application determine whether a virtual key is selected by the first hand as a candidate input key according to whether the finger (or its corresponding virtual finger) is in contact with that virtual key, thereby improving the speed and accuracy of data processing.
In some embodiments, the method further comprises: acquiring the finger skeleton point coordinates of the finger of the first hand in a real space to obtain first skeleton point coordinates; converting the first bone point coordinates into second bone point coordinates in a virtual space; wherein the finger of the first hand contacts the first virtual key on the virtual keyboard, comprising: and confirming that the finger of the first hand is contacted with the first virtual key through the coordinates of the second skeleton point and the coordinates of the first virtual key in the virtual space.
Some embodiments of the present application increase the speed of data processing by using the finger coordinates of the first hand to determine whether it is in contact with a virtual key.
In some embodiments, the method further comprises: generating a virtual hand corresponding to the first hand to obtain a first virtual hand; acquiring finger skeleton point coordinates of a virtual finger of the first virtual hand in a virtual space to obtain second skeleton point coordinates; wherein, the virtual finger corresponding to the finger of the first hand contacts with the first virtual key of the virtual keyboard, including: and confirming that the virtual finger is contacted with the first virtual key through the coordinates of the second skeleton point and the coordinates of the first virtual key in the virtual space.
In these embodiments, whether a certain virtual key on the virtual keyboard is touched is determined through the constructed virtual hand of the first hand, and the touched key is used as the candidate virtual key; the virtual keyboard can thus be placed flexibly in space, which improves the universality of the technical scheme.
In some embodiments, the method further comprises: the virtual keyboard is generated and provided to a target object through a wearable display, wherein the virtual keyboard includes a plurality of virtual keys.
In a second aspect, some embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs a method according to any embodiment of the first aspect.
In a third aspect, some embodiments of the present application provide an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, may implement a method as in any embodiment of the first aspect.
In a fourth aspect, some embodiments of the present application provide a computer program product comprising a computer program, wherein the computer program is executable by a processor to implement a method according to any embodiment of the first aspect.
In a fifth aspect, some embodiments of the present application provide an apparatus for inputting information by a virtual keyboard, the apparatus comprising: a candidate input key selection module configured to confirm that a finger of a first hand selects a first virtual key on the virtual keyboard; and the confirmation module is configured to confirm whether the first virtual key is used as an input key according to the gesture of the second hand.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be considered limiting of its scope; a person skilled in the art may obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a flowchart of a method for inputting information by a virtual keyboard according to an embodiment of the present application;
FIG. 3 is a second flowchart of a method for inputting information by a virtual keyboard according to an embodiment of the present application;
FIG. 4 is a block diagram of a head mounted display for performing a method for virtual keyboard input information provided in an embodiment of the present application;
FIG. 5 is a block diagram of a device for inputting information by a virtual keyboard according to an embodiment of the present application;
fig. 6 is a schematic diagram of the composition of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
To solve at least these problems, as well as the problem of a higher input-key error rate when information is input on a virtual keyboard with one hand, some embodiments of the present application design a character input method similar to that of an actual keyboard, completed through a two-hand combination (one hand selects a candidate input key, and the other hand confirms the candidate as the final selected input key). This ensures input accuracy while making character input significantly faster than the handle-ray input mode.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of some embodiments of the present application. A user 100 wears a head-mounted display 230, through which the user 100 can observe a virtual keyboard 308 and a virtual input box 332 in a virtual environment. It is understood that, in addition to the virtual keyboard and the virtual input box, the user can also observe other elements rendered by a rendering engine (real-world elements, elements required for a game scenario, etc.). It should be noted that fig. 1 is only used to illustrate the types of virtual interactive elements that a user wearing the head-mounted display can observe through the device; the placement of the virtual interactive elements in fig. 1 does not represent their actual placement in the virtual space (for example, the virtual interactive elements would be placed facing the user in the virtual environment so that the user can conveniently observe and operate them).
One application scenario of the present application, a hybrid virtual reality system, is exemplarily described below. The external real physical three-dimensional environment (i.e., the real environment) is captured in real time by an image acquisition unit on the VR head-mounted display device; the captured images are then rendered and otherwise processed, and finally presented to the user 100 through the VR head-mounted display device, so that the user 100 can view the external real physical three-dimensional environment in real time through the device. In a hybrid virtual reality system, some virtual interactive scene elements (e.g., the virtual keyboard 308 and the virtual input box 332) also need to be rendered and overlaid in real time onto the external three-dimensional real environment. It is to be appreciated that embodiments of the present application may be applied in scenes formed by virtual reality (VR), augmented reality (AR), mixed reality (MR), and combinations and/or derivatives thereof.
In the related art, in order to input text into the virtual input box 332 of fig. 1, a user may hold an external control device (e.g., a handle) to select a virtual key on the virtual keyboard 308, or select a virtual key from the virtual keyboard 308 with one hand; text input is then completed according to the letter corresponding to the selected virtual key, or an operation is performed on the content in the virtual input box according to the operation corresponding to that key.
Unlike the related art, some embodiments of the present application at least improve the problem of high erroneous input caused by single-handed input key selection, and require that the selection of a single input key be accomplished by cooperation of the left hand 210 and the right hand 220 of the user 100. For example, the left hand is used to select an alternate input key (e.g., the first virtual hand 340 corresponding to the left hand of FIG. 1 selects the first virtual key 344 as the alternate input key by touching) and the right hand is used to confirm whether this selection is the key that the user 100 is actually selecting.
It should be noted that the embodiments of the present application may be applied to office scenarios or game scenarios in which a virtual keyboard is superimposed; the embodiments are not limited to specific application scenarios. Some embodiments of the present disclosure may be applied to text input or editing operations in virtual reality (VR), augmented reality (AR), mixed reality (MR), and combinations and/or derivatives thereof.
The embodiments of the present application do not limit the type of physical keyboard to which the virtual keyboard corresponds. Example keyboard types include the QWERTY keyboard, the Doughnut keyboard (Multitap, T9, LetterWise, TiltText, ChordTap), and the chord keyboard (Chord Keyboard); those skilled in the art may also design other keyboard types as desired.
The technical scheme of the method for inputting information via a virtual keyboard provided by the embodiments of the present application can be applied to various devices that require keyboard-based information input. For example, it is also suitable for wearable devices, such as smart glasses, on which input via a physical keyboard or touch screen is inconvenient, greatly extending the controllability of smart devices when input is performed via a virtual keyboard.
A method for inputting information via a virtual keyboard, performed by a head-mounted display (e.g., glasses or a helmet), is exemplarily described below in connection with fig. 2.
As shown in fig. 2, an embodiment of the present application provides a method for inputting information via a virtual keyboard, where the method includes: S101, confirming that a finger of a first hand selects a first virtual key on a virtual keyboard; and S102, confirming whether to use the first virtual key as an input key according to the gesture of a second hand.
It should be noted that, in some embodiments of the present application, the first hand starts to select the first virtual key only after the second hand is monitored performing the confirmation action; in other embodiments, the second hand starts to perform the confirmation action only after the first hand is already in the state of selecting the first virtual key; in still other embodiments, the first hand may be monitored selecting the first virtual key at the same time as the second hand begins the confirmation action. In some embodiments of the present application, when it is confirmed that the first hand is still selecting the first virtual key and the second hand is monitored completing the confirmation action, the first virtual key is used as the input key.
It is understood that the first hand and the second hand in the embodiments of the present application represent real or virtual hands of users. The first hand and the second hand are different hands: for example, in some embodiments they may be the left hand and the right hand of the same user, respectively, while in other embodiments they may belong to different users. In some embodiments, the first virtual key selected by the first hand serves only as an alternative or candidate input key, and becomes the final input key only after a confirmation action by the second hand is detected. It should be noted that in some embodiments the roles of the two hands may be switched at any time: for example, the left hand may preliminarily select a virtual key on the virtual keyboard while the right hand confirms whether that key is actually to be input, or, after switching, the right hand selects a virtual key while the left hand confirms whether the key selected by the right hand is actually to be input.
In some embodiments of the present application, the first virtual key selected by the first hand may be a letter key, a numeric key, or a function key (for example, a delete key). Correspondingly, the selected input key may be a letter key used for text input or a function key that performs a certain operation, for example, an operation on the content entered in the input interface (such as deletion).
It is to be understood that in some embodiments of the present application, selection of an input key is completed through a combination of two hands (one hand is used for selecting a key on a virtual keyboard, and the other hand is used for confirming), so that accuracy of selection of the input key is improved.
The implementation of S101 is exemplarily set forth below.
To determine whether a finger of a first hand has selected a virtual key on the virtual keyboard, some embodiments of the present application determine whether the finger (or a corresponding virtual finger) is in contact with a virtual key on the virtual keyboard.
It should be noted that, in some embodiments of the present application, the virtual keyboard may be placed according to the spatial range that the first hand can touch; that is, the position of the virtual keyboard is determined according to the three-dimensional coordinates of the first hand in physical space, so that the first hand can contact the virtual keys on the virtual keyboard. Accordingly, some embodiments of the present application determine whether a virtual key is selected based on whether a finger of the first hand is in contact with that key on the virtual keyboard, an exemplary implementation of which is described in the following paragraphs.
In some embodiments of the present application, the process of confirming that the finger of the first hand selects the first virtual key on the virtual keyboard in S101 includes: if the finger of the first hand is confirmed to be in contact with the first virtual key on the virtual keyboard, confirming that the finger of the first hand selects the first virtual key. For example, contact is determined by whether the three-dimensional coordinates of the corresponding fingertip are the same as, or sufficiently close to, the three-dimensional coordinates of a given virtual key in the same space (e.g., the same virtual space or the same real space).
For example, in some embodiments of the present application, the method further comprises: acquiring the finger skeleton point coordinates of the finger of the first hand in a real space to obtain first skeleton point coordinates; converting the first bone point coordinates into second bone point coordinates in a virtual space; wherein said confirming that the finger of the first hand is in contact with the first virtual key on the virtual keyboard comprises: and confirming that the finger of the first hand is contacted with the first virtual key through the coordinates of the second skeleton point and the coordinates of the first virtual key in the virtual space. Some embodiments of the present application increase the speed of data processing by determining the finger coordinates of the first hand to determine whether it is in contact with a virtual key.
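A hedged sketch of this contact test: map the real-space fingertip skeleton point into virtual space, then compare it with the key's position. The rigid transform and the contact radius are assumptions for illustration; an actual system would use the headset's calibrated tracking-to-virtual-space transform.

```python
import math

def to_virtual_space(p_real, rotation, translation):
    """Map a real-space skeleton point (x, y, z) into virtual space with an
    assumed rigid transform: a 3x3 rotation matrix plus a translation vector
    (i.e., first bone point coordinates -> second bone point coordinates)."""
    return tuple(
        sum(rotation[i][j] * p_real[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

def finger_touches_key(fingertip_virtual, key_center, contact_radius=0.012):
    """Treat the first virtual key as contacted when the transformed fingertip
    lies within a small radius of the key's centre, both in virtual space."""
    return math.dist(fingertip_virtual, key_center) <= contact_radius
```

Checking a distance against a contact radius, rather than exact coordinate equality, is what makes "sufficiently close" fingertip positions register as contact despite tracking noise.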
It should be noted that in some embodiments of the present application the virtual hand is the same size as the real hand, or larger; accordingly, in order to enable the virtual hand to select a virtual key on the virtual keyboard, the virtual hand may be adjusted to a position close to the virtual keyboard. In some embodiments the virtual keyboard may be located far from the first hand, in which case a first virtual hand corresponding to the first hand may be constructed in the virtual space, whose fingers may be longer than the real fingers. In such cases, whether the first hand has selected a virtual key can be determined by whether the corresponding virtual finger has contacted that key on the virtual keyboard; an exemplary implementation is described in the following section.
In some embodiments of the present application, the process of confirming that the finger of the first hand selects the first virtual key on the virtual keyboard in S101 includes: and if the virtual finger corresponding to the finger of the first hand is confirmed to be in contact with the first virtual key of the virtual keyboard, confirming that the finger of the first hand selects the first virtual key.
For example, in some embodiments of the present application, the method further comprises: generating a virtual hand corresponding to the first hand to obtain a first virtual hand; acquiring finger skeleton point coordinates of a virtual finger of the first virtual hand in a virtual space to obtain second skeleton point coordinates; wherein the confirming that the virtual finger corresponding to the finger of the first hand is in contact with the first virtual key of the virtual keyboard comprises: and confirming that the virtual finger is contacted with the first virtual key through the coordinates of the second skeleton point and the coordinates of the first virtual key in the virtual space. According to the method and the device for determining the virtual keyboard, whether a certain virtual key on the virtual keyboard is touched or not is determined through the constructed virtual hand of the first hand, the key is used as a candidate virtual key, the position of the virtual keyboard can be flexibly placed in space, and the universality of the technical scheme is improved.
It can be appreciated that some embodiments of the present application increase the speed and accuracy of data processing by taking a virtual key as a candidate input key according to whether the corresponding virtual finger of the first hand is in contact with that virtual key.
The implementation procedure of S102 is exemplarily set forth below.
In some embodiments of the present application, confirming in S102 whether to use the first virtual key as the input key according to the gesture of the second hand includes: in the state where the first virtual key is selected, if the second hand is monitored to complete a target action (for example, a gesture such as pinching or making a fist), using the first virtual key as the input key; and in the state where the first virtual key is selected, if the second hand is not monitored to end the target action, not using the first virtual key as the input key. Completing the target action here is a confirmation action for confirming whether the virtual key selected by the finger of the first hand is to be actually input as a character. In some embodiments of the present application, after the finger of the first hand releases the selected first virtual key, whether the second hand completes the target action may still be determined from the gesture of the second hand, and if the target action is completed, the first virtual key is used as the input key. It can be understood that the input mode of the virtual keyboard provided by the embodiments of the present application shortens the process of character input through the virtual keyboard, providing high input efficiency and high input accuracy.
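The selection-plus-confirmation rule above can be sketched as a single predicate (a minimal illustration; the argument names are assumptions):

```python
def confirm_input_key(selected_key, target_action_completed):
    """selected_key: the first virtual key currently selected by the first
    hand (None if no key is selected). target_action_completed: True once
    the second hand has been monitored to complete the target action
    (e.g. pinch and release). Returns the key to input, or None."""
    if selected_key is not None and target_action_completed:
        return selected_key   # the first virtual key becomes the input key
    return None               # the key remains a candidate only
```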
It should be noted that completing the target action in S102 involves monitoring that the corresponding hand presents a plurality of successive states, whereas ending the target action in S102 corresponds to a single state of that hand; the state of ending the target action is one of the states that make up the event of completing the target action.
The following exemplarily illustrates, according to some embodiments of the present application, the order in which the selection event related to S101 and the target-action completion event related to S102 may occur.
In some embodiments of the present application, the first hand is monitored to start selecting the first virtual key while the second hand is performing the target action; in some embodiments of the present application, the second hand is monitored to start performing the target action only when the first hand is already in the state of selecting the first virtual key; in some embodiments of the present application, the first hand may be monitored to start selecting the first virtual key at the same time as the second hand starts performing the target action. In some embodiments of the present application, when it is determined that the first hand is still in the state of selecting the first virtual key and the second hand is monitored to have completed the target action, the first virtual key is used as the input key.
That is, some embodiments of the present application may monitor that the first hand begins to select the first virtual key while the second hand is performing the target action, or may monitor that the second hand begins to perform the target action while the first hand has already selected the first virtual key.
It will be appreciated that, in some embodiments of the present application, if it is confirmed that the first hand is in the state of selecting the first virtual key during a first time period, and the second hand is monitored to complete the target action within the first time period or within a sufficiently short period after the first time period ends, the first virtual key may be used as the input key.
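This timing rule can be expressed as a small predicate over the selection period and the moment the target action ends (a sketch; the grace-period value is an assumption):

```python
def action_confirms_selection(select_start, select_end, action_end, grace=0.15):
    """True when the second hand's target action ends while the first
    virtual key is selected ([select_start, select_end]) or within a
    sufficiently short grace period after that period ends. Times in seconds."""
    return select_start <= action_end <= select_end + grace
```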
It should be noted that, in some embodiments of the present application, whether a certain virtual key is used as an input key is determined together through the relevant states of the first hand and the second hand, so as to improve the accuracy of selecting the input key.
For example, in some embodiments of the present application, the process in S102 of monitoring that the second hand has completed the target action includes: if the second hand is monitored, from continuously acquired multi-frame images, to present a plurality of ordered states, confirming that the second hand has completed the target action, wherein the plurality of ordered states includes: a first state of preparing to make the target action, a second state of holding the target action, and a third state of ending the target action; or the plurality of ordered states includes: the second state of holding the target action and the third state of ending the target action.
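Checking the ordered states against per-frame gesture labels can be sketched as a subsequence test (the label strings below are illustrative assumptions):

```python
def completed_target_action(frame_states, required=("ready", "hold", "end")):
    """frame_states: the gesture state classified for each successive frame.
    True when the required states appear in order, each possibly lasting
    several consecutive frames."""
    it = iter(frame_states)
    # membership testing on the shared iterator enforces the ordering
    return all(want in it for want in required)
```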
In order to improve the accuracy of key selection, in some embodiments of the present application, the first hand must remain in the state of selecting the first virtual key throughout the whole process of the second hand completing the target action (i.e., the second hand needs to be detected to be sequentially in the three states of preparing to make the target action, holding the target action and ending the target action) before the first virtual key can be used as the input key. In other embodiments of the present application, the first virtual key is used as the input key as long as the first hand remains in the state of selecting the first virtual key during a partial process of the second hand completing the target action (the partial process being that the second hand is detected to be sequentially in the state of holding the target action and the state of ending the target action).
For example, in some embodiments of the present application, if it is detected at a first time that an index finger of the first hand is in contact with a second virtual key, the second virtual key is used as a candidate input key; if the first hand is detected to remain in the state of selecting the second virtual key during a first time period after the first time, the index finger is no longer in the state of selecting the second virtual key after the first time period, and the second hand is detected during the first time period to remain in the state of holding the target action or in the state of preparing to make the target action (i.e., the second hand is never detected to end the target action during the whole process of the second virtual key being selected), then the second virtual key is not used as the input key.

For example, in some embodiments of the present application, if it is detected at a first time that an index finger of the first hand is in contact with a second virtual key, the second virtual key is used as a candidate input key; if the first hand is detected to remain in the state of selecting the second virtual key during a first time period after the first time, the index finger is no longer in the state of selecting the second virtual key after the first time period, and the second hand is detected during the first time period to be sequentially in: the state of preparing to make the target action, the state of holding the target action, and the state of ending the target action, then the second virtual key is used as the input key.

For example, in some embodiments of the present application, if it is detected at a first time that an index finger of the first hand is in contact with a second virtual key, the second virtual key is used as a candidate input key; if the first hand is detected to remain in the state of selecting the second virtual key during a first time period after the first time, the index finger is no longer in the state of selecting the second virtual key after the first time period, and the second hand is detected during the first time period to be sequentially in: the state of holding the target action and the state of ending the target action, then the second virtual key is used as the input key.

For example, in some embodiments of the present application, if it is detected at a first time that an index finger of the first hand is in contact with a second virtual key, the second virtual key is used as a candidate input key; if the first hand is detected to remain in the state of selecting the second virtual key during a first time period after the first time, the index finger is no longer in the state of selecting the second virtual key after the first time period, and only the state of the second hand ending the target action is detected during the first time period, with neither the state of preparing to make the target action nor the state of holding the target action detected, then the second virtual key is not used as the input key.
It can be appreciated that using a pinch as the target action performed by the second hand can improve the accuracy of recognizing the target action, and thus the accuracy of selecting the input key, because a pinch gesture is easier to recognize.
The following further illustrates, with pinching as the target action, what completing the target action in S102 means.
For example, in some embodiments of the present application, the first state of preparing to make the target action is a ready-to-pinch state, the second state of holding the target action is a pinched state, and the third state of ending the target action is a pinch-released state. For example, monitoring in S102 that the second hand presents a plurality of ordered states according to the continuously acquired multi-frame images includes: confirming from the multi-frame images that at least two fingers of the second hand present a plurality of ordered states, the plurality of ordered states including: the ready-to-pinch state, the pinched state and the pinch-released state; or confirming from the multi-frame images that at least two fingers of the second hand present a plurality of ordered states, the plurality of ordered states including: the pinched state and the pinch-released state. That is, some embodiments of the present application confirm that the second hand has completed the target action by recognizing whether the fingers of the second hand successively present the ready-to-pinch state, the pinched state and the pinch-released state, or some of these three states, which can improve the accuracy of recognizing the target-action completion event and thus the accuracy of selecting the input key.
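A hedged sketch of classifying these pinch states from the distance between two fingertips (e.g. thumb and index finger); the distance thresholds and state names are assumptions:

```python
def pinch_state(thumb_tip, index_tip, touch_dist=0.01, near_dist=0.03):
    """Classify one frame as 'pinch' (fingertips in contact, holding the
    target action), 'ready' (fingers gathered, preparing to pinch) or
    'apart' (pinch released / no action), from 3-D coordinates in metres."""
    dx, dy, dz = (t - i for t, i in zip(thumb_tip, index_tip))
    d = (dx * dx + dy * dy + dz * dz) ** 0.5
    if d <= touch_dist:
        return "pinch"
    if d <= near_dist:
        return "ready"
    return "apart"
```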
It will be appreciated that some embodiments of the present application may improve the accuracy of the validation result by identifying a plurality of successive states of the second hand before confirming that the second hand has completed the target action.
It should be noted that, in order to facilitate switching among the multiple input states corresponding to a virtual key on the virtual keyboard, some embodiments of the present application also use the attribute information of the target action completed by the second hand to switch among the multiple pieces of input information of the same key.
For example, in some embodiments of the present application, after the first virtual key is used as the input key in S102, the method further includes: generating target input information corresponding to the input key according to the attribute information of the target action, where the input key corresponds to multiple pieces of input information and the target input information is one of them. That is, in some embodiments of the present application, the input information corresponding to the finally selected virtual key is switched through the attribute information of the target action, which can improve the universality of the technical scheme.
It should be noted that, the types of attribute information of the target actions according to some embodiments of the present application may include: the duration of the target action, the frequency of the target action (e.g., the number of times an action is repeated within a set duration), or the type of finger engaged in the target action.
The following exemplarily describes, with a pinch as the target action, the process of switching between the uppercase and lowercase letters corresponding to a letter key (as one type of input key), where one piece of input information corresponding to the letter key is the uppercase letter and the other is the lowercase letter.
For example, in some embodiments of the present application, the first virtual key is a letter key, and the method further includes: acquiring the duration of the pinched state. Accordingly, the above process of generating the target input information corresponding to the input key according to the attribute information of the target action exemplarily includes: selecting, according to the duration, the uppercase letter or the lowercase letter corresponding to the letter key as the target input information. That is, some embodiments of the present application select the uppercase or lowercase letter of the selected input key as the input by judging the duration of the pinch, improving the convenience of text input.
For example, in some embodiments of the present application, selecting the uppercase letter or the lowercase letter corresponding to the letter key as the target input information according to the duration includes: determining whether the target input information is the uppercase letter or the lowercase letter according to the relation between the duration and a duration threshold. That is, some embodiments of the present application switch between the uppercase and lowercase letters of the input key as the final input information by comparing the pinch duration with a preset duration threshold, improving both the convenience of case switching and the speed of determining a case-switching event.
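The duration-threshold comparison reduces to one line (a sketch; the function name is an assumption, and the 1-second default mirrors the example value used elsewhere in the description):

```python
def letter_for_pinch_duration(letter, duration, threshold=1.0):
    """Return the uppercase letter for a long pinch (duration >= threshold)
    and the lowercase letter for a short pinch, as the target input."""
    return letter.upper() if duration >= threshold else letter.lower()
```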
It will be appreciated that, in order to perform S101 and S102, the images of the two hands acquired in real time by the image capture device need to be monitored to obtain each frame of the two hands, and the specific states of the two hands are then determined from these frames (i.e., the virtual key selected by the first hand is determined from a single frame, and whether the second hand completes the target action is determined from multiple continuously captured frames). For example, the image data may be acquired in real time by an image capture device integrated on the head-mounted display to complete hand tracking.
For example, in some embodiments of the present application, the method further includes: acquiring image data; where confirming that the finger of the first hand selects the first virtual key on the virtual keyboard includes: confirming, through the image data, the first virtual key selected by the finger of the first hand; and monitoring that the second hand completes the target action includes: confirming, through the gesture of the second hand monitored in the image data, that the second hand completes the target action. Some embodiments of the present application confirm whether the first hand selects a virtual key and whether the second hand completes the target action by tracking the captured images of the two hands.
It should be noted that, in order to execute S101 and S102, a virtual keyboard is also required to be generated and provided. For example, in some embodiments of the present application, the method further comprises: the virtual keyboard is generated and provided to a target object through a wearable display, wherein the virtual keyboard includes a plurality of virtual keys.
The method of inputting information with a virtual keyboard of the present application is exemplarily described below with reference to fig. 3 and 4, taking a pinch as the target action and the case switching of an input key as an example; it will be understood that the method of fig. 3 may be performed by the head-mounted display of fig. 4.
As shown in fig. 3, a method for inputting information by using a virtual keyboard according to some embodiments of the present application includes:
first, start.
Second, acquiring an image.
As shown in fig. 4, image data may be obtained by capturing the state of the user's hands in real time or periodically by an image capture device on a head mounted display.
It will be appreciated that the image acquisition device (e.g., camera) may not be provided on the head mounted display shown in fig. 4, but may be placed separately in real space at other locations or other devices where the status of both hands of the user may be captured.
Third, three-dimensional hand tracking.
Three-dimensional hand tracking of the two hands is performed according to the acquired image data to obtain the skeletal point coordinates of the fingers of both hands, or the fingertip coordinates of the relevant fingers; for example, if the index finger is used as the target finger for selecting virtual keys, only the index fingers of the two hands need to be tracked and their skeletal point coordinates obtained.
Fourth, pinch-state judgment: the pinch state of the second hand is judged, and the interactive response of the subsequent steps is performed according to the pinch state.
It will be appreciated that the second hand may be either the left hand or the right hand. In some embodiments of the present application, the second hand remaining in the pinched state means that at least two fingers of the second hand are pinched, i.e., the fingertips of two or more fingers (e.g., thumb and index finger) are in contact. In other embodiments of the present application, a pinch may also be the contact of the fingertips of more than two fingers, or the contact of the thumb with the fingertips of other fingers. In some embodiments of the present application, the ready-to-pinch state means that the gathering of the fingers that are to make contact is detected from the image data, and the pinch-ended state means that the fingers that were in contact during the pinch are detected from the image data to be gradually moving apart.
That is, some embodiments of the present application will distinguish between left and right hands based on image data and identify the second hand that is in a pinch state.
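Distinguishing the pinching (second) hand from the pointing (first) hand can be sketched as follows; the landmark layout, finger names, and thresholds are assumptions for illustration:

```python
def is_pinching(hand_tips, touch_dist=0.01):
    """hand_tips: mapping of finger name -> fingertip (x, y, z). A hand is
    pinching when its thumb tip contacts any other fingertip."""
    thumb = hand_tips["thumb"]
    for name, tip in hand_tips.items():
        if name == "thumb":
            continue
        d = sum((a - b) ** 2 for a, b in zip(thumb, tip)) ** 0.5
        if d <= touch_dist:
            return True
    return False

def find_second_hand(hands):
    """hands: {'left': tips, 'right': tips}. Return the label of the single
    pinching hand, or None when neither (or both) hand is pinching."""
    pinching = [label for label, tips in hands.items() if is_pinching(tips)]
    return pinching[0] if len(pinching) == 1 else None
```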
Fifth, extracting the fingertips of the non-pinching fingers.
For example, the fingertip three-dimensional coordinates of the non-pinch finger are extracted from the image data.
It will be appreciated that if the second hand is the left hand (i.e., the left hand is in the pinched state), this step extracts the fingertips of the right hand's non-pinching fingers; if the second hand is the right hand (i.e., the right hand is in the pinched state), this step extracts the fingertips of the left hand's non-pinching fingers. Whether a key on the virtual keyboard is selected is then determined according to the fingertip coordinates.
Sixth, key-contact judgment.
For example, in some embodiments of the present application, whether a fingertip contacts a virtual key is determined according to the three-dimensional fingertip coordinates extracted in the fifth step; if the fingertip contacts no virtual key, the current cycle is ended and the next frame of image is acquired.
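The per-frame cycle of the steps so far can be sketched as a loop that skips to the next frame when no key is contacted (all names below are illustrative assumptions):

```python
def run_input_cycle(frames, contacted_key, interact):
    """frames: successive fingertip observations; contacted_key(frame) ->
    key label or None; interact(frame, key) -> input result or None.
    When no virtual key is contacted the current cycle ends and the next
    frame is processed; otherwise the interaction response runs."""
    inputs = []
    for frame in frames:
        key = contacted_key(frame)
        if key is None:
            continue                      # no contact: acquire next frame
        result = interact(frame, key)     # step seven: interaction response
        if result is not None:
            inputs.append(result)
    return inputs
```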
It should be noted that the above process of judging hand gestures such as pinching and non-pinching may be implemented by a gesture detection program corresponding to the gesture detection unit on the head-mounted display of fig. 4, where the gesture detection unit can identify the hand in the pinched state and identify whether a non-pinching finger of the other hand contacts a virtual key on the virtual keyboard. For example, the gesture detection unit may implement the following operations: confirming that a finger of the first hand is in contact with the target virtual key of the virtual keyboard according to the finger skeleton point coordinates obtained from the image data and the position of the virtual keyboard in the real environment, and confirming from the image data that the second hand has completed the confirmation action.
It will be appreciated that in some embodiments of the present application the gesture detection unit may not be located on a head mounted display as shown in fig. 4, but rather the implementation of the gesture detection algorithm corresponding to the unit may be performed by other computer devices.
Seventh, if a fingertip is in contact with a virtual key, the interactive response is completed according to the pinch state (the virtual key is rendered by the rendering engine in fig. 4, which is also used to render other elements to be displayed in the virtual space, such as the virtual hands corresponding to the two hands).
Eighth, the interactive response is mainly divided into four states.
a) Activating: after the first hand touches the first virtual key (i.e., it is monitored that the first hand begins to touch or selects the first virtual key), the second hand is ready to pinch.
It is to be understood that the preparing-to-pinch in a) may be one specific example of the first state of preparing to make the target action. Taking the pinch of thumb and index finger as an example, being ready to pinch means the thumb and index finger are gathered together. It should be noted that, in some embodiments of the present application, the second hand may be confirmed to be in the ready-to-pinch state after one or more consecutive images showing the thumb and index finger gathered together are obtained. For example, in some embodiments of the present application, the first state of preparing to make the target action may also be monitored by checking in the image whether the second hand makes a target gesture; e.g., if the gesture of the second hand recorded in an image is confirmed to be two fingers open with the remaining fingers curled, the hand is confirmed to be in the first state of preparing to make the target action.
b) Preparation: the first hand remains in contact with the first virtual key (i.e., confirms that the first virtual key is in the selected state) and the second hand completes the pinch.
It is understood that completing the pinch in b) may be taken as a specific example of the second state of holding the target action. Taking the pinch of the thumb and index finger as an example, completing the pinch means the thumb and index finger are in contact. It should be noted that, in some embodiments of the present application, the second hand may be confirmed to be in the pinched state after one or more consecutive images showing the thumb and index finger in contact are obtained.
c) Confirmation: the first hand maintains contact with the first virtual key (i.e., the first virtual key is confirmed to be in the selected state), and the second hand quickly releases the pinch (as an example of the third state of ending the target action, the released state being one in which the thumb and index finger move apart).
It is understood that the rapid pinch-release in c) is one specific example of the third state of ending the target action. Taking the pinch of thumb and index finger as an example, the released state is one in which the thumb and index finger move apart from contact. It should be noted that, in some embodiments of the present application, the second hand may be confirmed to be in the pinch-released state after one or more consecutive images showing the thumb and index finger apart are obtained.
d) Case judgment: when the duration from the pinched state to the pinch-released state is less than 1 s, the key is judged to be short-pressed, and the lowercase letter of the selected target key is input as the final input information; when the duration is longer than 1 s, the key is judged to be long-pressed, and the uppercase letter of the selected target key is input as the final input information.
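The four interaction-response states a) to d) can be sketched as a small transition table; the state names and event encoding are assumptions:

```python
# (current state, first-hand key selected?, second-hand pinch state) -> next
TRANSITIONS = {
    ("idle",      True, "ready"): "activated",   # a) activation
    ("activated", True, "pinch"): "prepared",    # b) preparation
    ("prepared",  True, "apart"): "confirmed",   # c) confirmation
}

def step(state, key_selected, pinch):
    """Advance the interaction-response state machine by one frame; any
    unlisted combination leaves the state unchanged. After 'confirmed',
    d) compares the pinch duration with a threshold to pick the case."""
    return TRANSITIONS.get((state, key_selected, pinch), state)
```

A fuller implementation would also reset to the idle state when the first hand stops selecting the key, matching the requirement that the key remain selected throughout.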
Ninth, if the second hand (i.e., the hand that pinched in the above steps) is detected to have pinched and then released the pinch, the first virtual key contacted by the finger of the first hand (i.e., the hand contacting the virtual keyboard in the above steps) is confirmed as the input key. For the influence of short and long presses on the input information corresponding to the input key, reference may be made to the above description, which is not repeated here.
It will be appreciated that by performing the above steps, the target input information (e.g., uppercase letters or lowercase letters corresponding to alphabetic keys) corresponding to the selected input keys may be input into the text input box shown by the user interface engine of fig. 4, so as to complete text or character input or delete the input text.
Referring to fig. 5, fig. 5 shows a device for inputting information by using a virtual keyboard according to an embodiment of the present application, and it should be understood that the device corresponds to the method embodiment of fig. 2, and is capable of executing the steps related to the method embodiment, and specific functions of the device may be referred to the above description, and detailed descriptions are omitted herein for avoiding repetition. The means for inputting information includes at least one software functional module capable of being stored in a memory in the form of software or firmware or being solidified in an operating system of the device, the means for inputting information comprising: candidate input key selection module 501 and confirmation module 502.
The candidate input key selection module 501 is configured to confirm that a finger of the first hand selects a first virtual key on the virtual keyboard.
A confirmation module 502 configured to confirm whether the first virtual key is to be used as an input key according to the gesture of the second hand.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the information input device described above may refer to the corresponding process in the foregoing method, and will not be described in detail herein.
Some embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method described in any of the above embodiments of the method for inputting information with a virtual keyboard.
As shown in fig. 6, some embodiments of the present application provide an electronic device 600, including a memory 610, a processor 620, and a computer program stored on the memory 610 and executable on the processor 620, wherein the processor 620, when reading the program via a bus 630 and executing the program, can implement the method as described in any of the embodiments related to the method for inputting information by a virtual keyboard.
The processor 620 may process the digital signals and may include various computing structures. Such as a complex instruction set computer architecture, a reduced instruction set computer architecture, or an architecture that implements a combination of instruction sets. In some examples, the processor 620 may be a microprocessor.
Memory 610 may be used for storing instructions to be executed by processor 620 or data related to execution of the instructions. Such instructions and/or data may include code to implement some or all of the functions of one or more modules described in embodiments of the present application. The processor 620 of the disclosed embodiments may be used to execute instructions in the memory 610 to implement the methods shown in fig. 2 or 3. Memory 610 includes dynamic random access memory, static random access memory, flash memory, optical memory, or other memory known to those skilled in the art.
Some embodiments of the present application provide a computer program product, where the computer program product includes a computer program, where the computer program when executed by a processor may implement a method as described in any of the embodiments related to the method for inputting information by a virtual keyboard.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application, and various modifications and variations may be suggested to one skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The foregoing is merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.

Claims (15)

1. A method of inputting information using a virtual keyboard, the method comprising:
confirming that a finger of a first hand selects a first virtual key on the virtual keyboard; and
confirming, according to a gesture of a second hand, whether to use the first virtual key as an input key.
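For illustration only (not part of the claim), the two-step scheme of claim 1 can be sketched as follows. The function names, the planar rectangular key layout, and the boolean gesture flag are all hypothetical; the claim does not prescribe any particular implementation:

```python
def select_key(fingertip_pos, keyboard_layout):
    """Return the virtual key whose rectangle contains the first hand's
    fingertip position (x, y), or None if no key is touched."""
    x, y = fingertip_pos
    for key, (x0, y0, x1, y1) in keyboard_layout.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return key
    return None

def confirm_input(selected_key, second_hand_gesture_done):
    """A selected key becomes an input key only if the second hand
    completed the confirmation gesture while the key was selected."""
    if selected_key is not None and second_hand_gesture_done:
        return selected_key
    return None
```

Decoupling selection (first hand) from confirmation (second hand) is what lets a fingertip hover over keys without triggering spurious input.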
2. The method of claim 1, wherein the confirming, according to the gesture of the second hand, whether to use the first virtual key as the input key comprises:
when it is monitored, in a state in which the first virtual key is selected, that the second hand has completed a target action, using the first virtual key as the input key; and
in the state in which the first virtual key is selected, if a state in which the second hand ends the target action is not monitored, not using the first virtual key as the input key.
3. The method of claim 2, wherein the monitoring that the second hand has completed the target action comprises:
if the second hand is monitored, according to continuously acquired multi-frame images, to present a plurality of ordered states, confirming that the second hand has completed the target action, wherein the plurality of ordered states comprises: a first state of preparing to make the target action, a second state of maintaining the target action, and a third state of ending the target action; or the plurality of ordered states comprises: the second state of maintaining the target action and the third state of ending the target action.
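The per-frame ordered-state check of claim 3 amounts to an ordered-subsequence test over the states observed in successive frames. A minimal sketch, with hypothetical state labels for the pinch-style action introduced in later claims:

```python
# Hypothetical per-frame state labels for the second hand.
PREPARE, PINCHED, RELEASED = "prepare", "pinched", "released"

def contains_in_order(seq, pattern):
    """True if the items of pattern appear in seq in the given order
    (not necessarily consecutively)."""
    it = iter(seq)
    return all(any(s == p for s in it) for p in pattern)

def target_action_completed(frame_states):
    """Confirm the target action if the states observed over the
    continuously acquired frames contain, in order, either
    (prepare -> pinched -> released) or just (pinched -> released)."""
    return (contains_in_order(frame_states, [PREPARE, PINCHED, RELEASED])
            or contains_in_order(frame_states, [PINCHED, RELEASED]))
```

Allowing the shorter two-state sequence tolerates frame drops or a fast gesture in which the preparation phase was never observed.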
4. The method according to any one of claims 2 to 3, wherein after the first virtual key is used as the input key, the method further comprises:
generating target input information corresponding to the input key according to attribute information of the target action, wherein the input key corresponds to multiple pieces of input information, and the target input information is one of the multiple pieces of input information.
5. The method of claim 4, wherein the attribute information comprises: a duration of the target action, a frequency of the target action, or a type of finger participating in the target action.
6. The method according to claim 3, wherein the target action is a pinch of at least two fingers.
7. The method of claim 6, wherein the second state of maintaining the target action is a pinched state, and the third state of ending the target action is a pinch-released state.
8. The method of claim 7, wherein the first virtual key is a letter key, and the method further comprises:
acquiring a duration of the pinched state;
wherein the generating target input information corresponding to the input key according to the attribute information of the target action comprises:
selecting, according to the duration, an uppercase letter or a lowercase letter corresponding to the letter key as the target input information.
9. The method according to any one of claims 1 to 3, wherein the confirming that the finger of the first hand selects the first virtual key on the virtual keyboard comprises:
if the finger of the first hand is in contact with the first virtual key on the virtual keyboard, confirming that the finger of the first hand selects the first virtual key; or
if a virtual finger corresponding to the finger of the first hand is in contact with the first virtual key of the virtual keyboard, confirming that the finger of the first hand selects the first virtual key.
10. The method of claim 9, further comprising:
acquiring finger skeleton point coordinates of the finger of the first hand in a real space to obtain first skeleton point coordinates; and
converting the first skeleton point coordinates into second skeleton point coordinates in a virtual space;
wherein the finger of the first hand being in contact with the first virtual key on the virtual keyboard comprises:
confirming, through the second skeleton point coordinates and coordinates of the first virtual key in the virtual space, that the finger of the first hand is in contact with the first virtual key.
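The real-to-virtual conversion and contact test of claim 10 can be sketched with a uniform scale-plus-translation mapping and a distance threshold. Both the affine mapping and the spherical contact radius are hypothetical simplifications; the claim only requires that contact be judged from the converted skeleton point coordinates and the key's virtual-space coordinates:

```python
import math

def to_virtual_space(skeleton_point, scale, offset):
    """Convert a real-space skeleton point (x, y, z) into virtual-space
    coordinates using a hypothetical uniform scale and translation."""
    return tuple(c * scale + o for c, o in zip(skeleton_point, offset))

def touches_key(fingertip_virtual, key_center, key_radius):
    """Treat the fingertip as contacting the virtual key when it lies
    within key_radius of the key's centre in virtual space."""
    return math.dist(fingertip_virtual, key_center) <= key_radius
```

In practice the mapping would come from the tracking system's calibration, and the contact volume from the rendered key geometry.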
11. The method of claim 9, further comprising:
generating a virtual hand corresponding to the first hand to obtain a first virtual hand; and
acquiring finger skeleton point coordinates of a virtual finger of the first virtual hand in a virtual space to obtain second skeleton point coordinates;
wherein the virtual finger corresponding to the finger of the first hand being in contact with the first virtual key of the virtual keyboard comprises:
confirming, through the second skeleton point coordinates and coordinates of the first virtual key in the virtual space, that the virtual finger is in contact with the first virtual key.
12. The method according to any one of claims 1 to 3, further comprising:
generating and presenting the virtual keyboard through a wearable display, wherein the virtual keyboard comprises a plurality of virtual keys.
13. A computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-12.
14. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1-12.
15. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-12.
CN202310195808.9A 2023-02-28 2023-02-28 Method and medium for inputting information by virtual keyboard and electronic equipment Pending CN116360589A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310195808.9A CN116360589A (en) 2023-02-28 2023-02-28 Method and medium for inputting information by virtual keyboard and electronic equipment

Publications (1)

Publication Number Publication Date
CN116360589A true CN116360589A (en) 2023-06-30

Family

ID=86938730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310195808.9A Pending CN116360589A (en) 2023-02-28 2023-02-28 Method and medium for inputting information by virtual keyboard and electronic equipment

Country Status (1)

Country Link
CN (1) CN116360589A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117193540A (en) * 2023-11-06 2023-12-08 南方科技大学 Control method and system of virtual keyboard
CN117193540B (en) * 2023-11-06 2024-03-12 南方科技大学 Control method and system of virtual keyboard

Similar Documents

Publication Publication Date Title
CN106648434B (en) Method and device for controlling application interface through dragging gesture
US9041660B2 (en) Soft keyboard control
US7849421B2 (en) Virtual mouse driving apparatus and method using two-handed gestures
KR102247020B1 (en) Keyboard Typing System and Keyboard Typing Method with Finger Gesture
JP5575645B2 (en) Advanced camera-based input
CN105867599A (en) Gesture control method and device
WO2013049861A1 (en) Tactile glove for human-computer interaction
CN111596757A (en) Gesture control method and device based on fingertip interaction
EP4307096A1 (en) Key function execution method, apparatus and device, and storage medium
CN116360589A (en) Method and medium for inputting information by virtual keyboard and electronic equipment
CN113190109A (en) Input control method and device of head-mounted display equipment and head-mounted display equipment
CN112527112A (en) Multi-channel immersive flow field visualization man-machine interaction method
Siam et al. Human computer interaction using marker based hand gesture recognition
JP5762075B2 (en) Information processing apparatus, information processing method, and program
CN108769395B (en) Wallpaper switching method and mobile terminal
CN111007942A (en) Wearable device and input method thereof
CN111062360A (en) Hand tracking system and tracking method thereof
KR20160100789A (en) Method and apparatus for providing user interface
CN110727345B (en) Method and system for realizing man-machine interaction through finger intersection movement
TWI807955B (en) Method for inputting letters, host, and computer readable storage medium
TWI435280B (en) Gesture recognition interaction system
Lee et al. Finger controller: Natural user interaction using finger gestures
Kurosu Human-Computer Interaction. Interaction Technologies: 20th International Conference, HCI International 2018, Las Vegas, NV, USA, July 15–20, 2018, Proceedings, Part III
CN116909387A (en) Method for controlling interactive object based on gesture, storage medium and electronic equipment
Huerta et al. Hand Gesture Recognition With a Novel Particle Filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination