CN118210426A - Key operation method and device of virtual keyboard - Google Patents


Info

Publication number
CN118210426A
CN118210426A (application CN202410353994.9A)
Authority
CN
China
Prior art keywords
palm
virtual keyboard
user
target
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410353994.9A
Other languages
Chinese (zh)
Inventor
吴冬悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Lifesmart Technology Co ltd
Original Assignee
Hangzhou Lifesmart Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Lifesmart Technology Co ltd filed Critical Hangzhou Lifesmart Technology Co ltd
Priority to CN202410353994.9A priority Critical patent/CN118210426A/en
Publication of CN118210426A publication Critical patent/CN118210426A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the technical field of virtual keyboards, and in particular to a key operation method and device for a virtual keyboard. The method is used on a wearable virtual device on which an image acquisition device is arranged. The image acquisition device captures an image of the target azimuth in real time; when the user's palm is present in that image, a virtual keyboard is displayed above the palm, and when a target finger of the user's palm overlaps a key position of the virtual keyboard, the input operation of the target key corresponding to that finger is determined and responded to. With this scheme, whenever the user's palm faces the camera of the wearable virtual device, the virtual keyboard is displayed above the palm, so the displayed keyboard is unobstructed: the user can quickly find the edge of each key, accurately perceive whether a finger has touched a key's edge or surface, and operate with greater certainty.

Description

Key operation method and device of virtual keyboard
Technical Field
The invention relates to the technical field of virtual keyboards, in particular to a key operation method and device of a virtual keyboard.
Background
In recent years, virtual-reality technology has advanced rapidly, and people increasingly entertain themselves, work, and create content in virtual spaces. Operating a virtual keyboard through hand motions recognized by an image acquisition device is a vivid embodiment of this trend.
Currently, virtual keyboards based on conventional keyboard layouts are widely used in virtual-system software. However, the experience of using a virtual keyboard differs from that of a traditional physical keyboard in several significant ways. First, a virtual keyboard lacks the physical feedback of a physical keyboard, which makes it difficult to confirm when a key press has completed. A traditional keyboard gives the user definite force feedback when a key is pressed, informing the user that the key has been triggered; on a virtual keyboard, the user can judge whether a key was pressed only through visual feedback or software-simulated haptics, which adds a degree of uncertainty to the operation. Second, the keys of a traditional keyboard have well-defined shapes and boundaries, and the user can quickly find the edge of each key through muscle memory. On a virtual keyboard, however, a finger placed on the keyboard tends to obscure part of it, making it hard for the user to see the key edges; at the same time, the lack of physical feedback makes it difficult to perceive accurately whether the finger has touched a key's edge, which increases the likelihood of erroneous operation. Finally, a key on a traditional keyboard naturally stops when its stroke bottoms out, a result of the physical mechanism. On a virtual keyboard the user receives no such bottoming-out feedback and must decide alone when to stop the press; if the finger is not retracted in time it may "pass through" the virtual keyboard, which affects the accuracy of the next tap and increases the complexity of the operation path.
In the scheme above, although the virtual keyboard imitates the layout of a traditional keyboard, the lack of physical feedback and the visual limitations mean that its use experience still differs from that of a traditional keyboard, and erroneous operations occur easily.
Disclosure of Invention
In view of the above, the present invention provides a key operation method and device for a virtual keyboard, so as to address the problem that a virtual keyboard, lacking physical feedback and subject to visual limitations, is prone to erroneous operation.
In a first aspect, the present invention provides a key operation method for a virtual keyboard. The method is used on a wearable virtual device provided with an image acquisition device, and includes:
acquiring an image of the target azimuth in real time through the image acquisition device on the wearable virtual device;
when it is detected that the user's palm is present in the image of the target azimuth, displaying a virtual keyboard above the user's palm;
when a target finger corresponding to the user's palm overlaps a key position of the virtual keyboard, determining the input operation of the target key corresponding to the target finger, and responding to the input operation.
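The three claimed steps can be sketched as a per-frame loop. The helper `keyboard_rect_above`, the 2-D coordinate convention, and the frame inputs below are illustrative assumptions, not details taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle used as a stand-in for the keyboard region."""
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def keyboard_rect_above(palm_x: float, palm_y: float,
                        width: float = 30.0, height: float = 10.0) -> Rect:
    # Place the keyboard rectangle directly above the detected palm centre
    # (smaller y = "up" in this sketch's convention).
    return Rect(palm_x - width / 2, palm_y - height - 2.0, width, height)

def process_frame(palm, fingertip):
    """palm: (x, y) or None; fingertip: (x, y) or None.
    Returns the pending input event for this frame, or None."""
    if palm is None:
        return None                      # no palm in the target azimuth: no keyboard
    kb = keyboard_rect_above(*palm)
    if fingertip is not None and kb.contains(*fingertip):
        return ("key_press", fingertip)  # fingertip overlaps the key region
    return None
```

In a real system the `palm`/`fingertip` inputs would come from the device's hand-tracking pipeline rather than being passed in directly.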
According to this scheme, the image acquisition device on the wearable virtual device acquires an image containing the user's palm, so that when the palm faces the camera of the wearable virtual device, a virtual keyboard is displayed above it. The displayed keyboard is unobstructed, so the user can quickly find the edge of each key, accurately perceive whether a finger has touched a key's edge or surface, and operate with greater certainty.
In an alternative embodiment, the wearable virtual device has an image display component; displaying a virtual keyboard above the user's palm when the palm is detected in the image of the target azimuth includes:
detecting the position of the user's first palm center and the position of the user's second palm center in the image;
and displaying a virtual keyboard at a designated position on the image display component according to the positions of the first and second palm centers, so that, from the viewing angle from the image display component to the user's palms, the virtual keyboard is located above the first and second palm centers.
In an alternative embodiment, the displaying a virtual keyboard on the designated position of the image display component according to the position of the first palm and the position of the second palm includes:
And determining a first designated position on an image display interface of the image display assembly and a first virtual keyboard size parameter according to the position of the first palm and the position of the second palm so as to display a first virtual keyboard at the first designated position.
According to this scheme, the first virtual keyboard is displayed above the first and second palm centers and is appropriately enlarged or reduced according to their positions, which better suits the situation where the palm centers face the eyes.
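A minimal sketch of one way to derive a single designated position and size parameter from the two palm-center positions; the centring and width rules below are assumptions, since the text only requires the keyboard to sit above both palm centers and be "properly enlarged or reduced":

```python
def single_keyboard_placement(p1, p2, margin: float = 4.0) -> dict:
    """One first virtual keyboard spanning both palms: centred between the
    two palm centres, with its width tied to their separation.
    p1, p2: (x, y) palm-center positions on the display interface."""
    (x1, y1), (x2, y2) = p1, p2
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    width = abs(x2 - x1) + 2 * margin   # widen as the hands move apart
    return {"cx": cx, "cy": cy - 6.0, "width": width}  # offset upward above the palms
```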
In an alternative embodiment, the displaying a virtual keyboard on the designated position of the image display component according to the position of the first palm and the position of the second palm includes:
determining a second designated position on an image display interface of the image display component and a second virtual-keyboard size parameter according to the position of the first palm center, and displaying a second virtual keyboard at the second designated position;
and determining a third designated position on the image display interface of the image display component and a third virtual-keyboard size parameter according to the position of the second palm center, and displaying a third virtual keyboard at the third designated position.
According to this scheme, the second and third virtual keyboards are displayed above the first and second palm centers respectively, so that each keyboard can move with the position of the forearm or wrist; the second virtual keyboard is appropriately enlarged or reduced according to the position of the first palm center, and the third virtual keyboard according to the position of the second palm center, which better suits the situation where the palm centers face the eyes.
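One hedged way to place a separate half-keyboard above each palm center and size it per palm: scale inversely with the palm's distance from the camera. The inverse-depth rule and all constants are assumptions used only for illustration:

```python
def split_keyboard_layout(left_palm, right_palm,
                          base_width: float = 20.0, ref_depth: float = 50.0):
    """Sketch: one half-keyboard above each palm center, scaled per palm.
    left_palm/right_palm: (x, y, depth) with depth = distance from the camera."""
    def placement(palm):
        x, y, depth = palm
        scale = ref_depth / depth          # nearer palm -> larger keyboard
        width = base_width * scale
        # anchor the half-keyboard centred above its own palm center,
        # so it follows that forearm/wrist independently
        return {"cx": x, "cy": y - 8.0 * scale, "width": width}
    return placement(left_palm), placement(right_palm)
```

Because each placement depends only on its own palm, moving one wrist moves only that half-keyboard, matching the behaviour described above.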
In an alternative embodiment, the virtual keyboard is in a semi-transparent state.
According to this scheme, the virtual keyboard is designed to be semi-transparent so that the user can clearly see each finger.
In an alternative embodiment, after displaying the virtual keyboard over the palm of the user, the method further comprises:
identifying the distance between each of the user's fingers and the corresponding key, and marking any finger that enters the pre-click position; the pre-click position indicates that the distance between the finger and the corresponding key is within the target range.
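The pre-click position can be sketched as a simple distance filter over per-finger distances; the `near`/`far` bounds standing in for the "target range" are illustrative values, not from the patent:

```python
def mark_pre_click(finger_key_distances: dict, near: float = 1.0, far: float = 5.0) -> set:
    """Return the set of fingers whose distance to their corresponding key
    lies inside the pre-click band [near, far]; those fingers get marked."""
    return {finger for finger, d in finger_key_distances.items() if near <= d <= far}
```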
According to this scheme, adding the mark allows the position and contact information of the finger relative to the corresponding key to be obtained accurately, reducing misjudgments.
In an optional implementation, when the target finger corresponding to the user's palm coincides with a key position of the virtual keyboard, determining the input operation of the target key corresponding to the target finger, and responding to the input operation, includes:
when the image acquisition device recognizes that the mark on the target finger corresponding to the user's palm coincides with a key position of the virtual keyboard, determining that the input operation of the corresponding target key is a click operation, and performing a first dynamic confirmation display in response to the click operation.
In an optional implementation, when the target finger corresponding to the user's palm coincides with a key position of the virtual keyboard, determining the input operation of the target key corresponding to the target finger, and responding to the input operation, includes:
when the image acquisition device recognizes that the mark on the target finger has coincided with a key position of the virtual keyboard for longer than a first time threshold, determining that the input operation of the corresponding target key is a long-press operation, and performing a second dynamic confirmation display in response to the long-press operation.
In an optional implementation, when the target finger corresponding to the user's palm coincides with a key position of the virtual keyboard, determining the input operation of the target key corresponding to the target finger, and responding to the input operation, includes:
when the image acquisition device recognizes that the speed at which the mark on the target finger coincides with a key position of the virtual keyboard is less than a first speed threshold, determining that the input operation of the corresponding target key is a tap operation, and performing a third dynamic confirmation display in response to the tap operation.
In an optional implementation, when the target finger corresponding to the user's palm coincides with a key position of the virtual keyboard, determining the input operation of the target key corresponding to the target finger, and responding to the input operation, includes:
when the image acquisition device recognizes that the speed at which the mark on the target finger coincides with a key position of the virtual keyboard is greater than a second speed threshold, determining that the input operation of the corresponding target key is a repeated-press operation, and performing a fourth dynamic confirmation display in response to the repeated-press operation.
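The four press types in the preceding embodiments can be sketched as one classifier over the mark's overlap duration and approach speed. The ordering of the checks and all threshold values are assumptions — the patent names first/second thresholds without giving values:

```python
def classify_press(overlap_s: float, approach_speed: float,
                   t_long: float = 0.8, v_low: float = 5.0, v_high: float = 30.0) -> str:
    """Map the mark's overlap duration (seconds) and approach speed
    (units/s) to the press type that drives the dynamic confirmation
    display (first through fourth)."""
    if overlap_s > t_long:          # coincided longer than the first time threshold
        return "long_press"
    if approach_speed > v_high:     # faster than the second speed threshold
        return "repeat_press"
    if approach_speed < v_low:      # slower than the first speed threshold
        return "tap"
    return "click"                  # plain coincidence of mark and key
```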
In an optional implementation, the virtual keyboard is provided with a touch pad, and when the target finger corresponding to the user's palm coincides with a key position of the virtual keyboard, determining the input operation of the target key corresponding to the target finger, and responding to the input operation, includes:
when the image acquisition device recognizes that the mark on the target finger is in contact with the touch pad and sliding, determining that the input operation corresponding to the target finger is a touch-pad sliding operation, and performing a fifth dynamic confirmation display in response to the sliding operation.
According to this scheme, the dynamic confirmation display reminds the user of the completion status of a key press, prompting the user when a key action is finished. This reduces the physical effort wasted in retracting an over-extended finger after misjudging the key stroke, and improves the user experience.
In an alternative embodiment, the method further comprises:
triggering a virtual-keyboard cancel instruction when it is detected that the user's palm in the image of the target azimuth has turned downward;
and canceling the display of the virtual keyboard in response to the cancel instruction.
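Palm-orientation-based cancellation might be sketched with a palm-normal test. The camera-space convention below (z pointing away from the camera, so a palm facing the camera has a negative-z normal) is an assumption:

```python
def palm_facing_camera(normal) -> bool:
    """normal: unit palm-normal vector (x, y, z) in camera coordinates,
    with +z pointing away from the camera. The palm faces the camera
    when its normal points back toward it."""
    return normal[2] < 0

def update_keyboard_visibility(visible: bool, normal) -> bool:
    # Turning the palm away (normal flips toward +z) triggers the
    # cancel instruction; turning it back toward the camera re-shows
    # the keyboard, per the display condition in step S202.
    if visible and not palm_facing_camera(normal):
        return False    # cancel the virtual keyboard display
    if not visible and palm_facing_camera(normal):
        return True     # palm toward camera: display the keyboard
    return visible
```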
In a second aspect, the present invention provides a key operation device of a virtual keyboard, the device comprising:
an image acquisition module, configured to acquire an image of the target azimuth in real time through the image acquisition device on the wearable virtual device;
a virtual keyboard display module, configured to display a virtual keyboard above the user's palm center when the palm center is detected in the image of the target azimuth;
and an input operation response module, configured to determine the input operation of the target key corresponding to the target finger when the target finger coincides with a key position of the virtual keyboard, and to respond to the input operation.
In a third aspect, the present invention provides a computer device, comprising a memory and a processor in communication connection with each other; the memory stores computer instructions, and the processor executes those instructions to perform the key operation method of the virtual keyboard according to the first aspect or any of its corresponding implementations.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon computer instructions for causing a computer to execute a key operation method of the virtual keyboard of the first aspect or any one of the embodiments corresponding thereto.
In a fifth aspect, the present invention provides a computer program product comprising computer instructions for causing a computer to perform the key operation method of the virtual keyboard of the first aspect or any of the embodiments corresponding thereto.
The technical scheme provided by the invention can comprise the following beneficial effects:
According to the invention, the image acquisition device on the wearable virtual device acquires an image containing the user's palm, so that when the palm faces the camera of the wearable virtual device, the virtual keyboard is displayed above it; the displayed keyboard is free of any obstruction, and the user can quickly find the edge of each key, accurately perceive whether a finger has touched a key's edge or surface, and operate with greater certainty. In addition, the virtual keyboard is designed to be semi-transparent so that the user can clearly see each finger, and the marks added to the fingers allow the position and contact information of the fingers relative to the corresponding keys to be obtained accurately, reducing misjudgments.
In addition, the invention can not only display a single virtual keyboard above both the first and second palm centers, but also display a second virtual keyboard and a third virtual keyboard above the first and second palm centers respectively, so that the keyboards can move with the position of the forearm or wrist; the second virtual keyboard is appropriately enlarged or reduced according to the position of the first palm center, and the third virtual keyboard according to the position of the second palm center, which better suits the situation where the palm centers face the eyes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a wearable virtual device according to an embodiment of the present invention;
FIG. 2 is a flow chart of a key operation method of a virtual keyboard according to an embodiment of the invention;
FIG. 3 is a flowchart of another key operation method of a virtual keyboard according to an embodiment of the present invention;
FIG. 4 is a control schematic of a first virtual keyboard according to an embodiment of the present invention;
FIG. 5 is a schematic illustration of marking of a target finger according to an embodiment of the invention;
FIG. 6 is a flowchart of a key operation method of a virtual keyboard according to another embodiment of the present invention;
FIG. 7 is a control schematic diagram of a second virtual keyboard and a third virtual keyboard according to an embodiment of the invention;
FIG. 8 is a schematic view of a fixed point setting of a user's forearm according to an embodiment of the invention;
fig. 9 is a block diagram of a key operation device of a virtual keyboard according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic diagram of a wearable virtual device according to an example embodiment. As shown in fig. 1, the wearable virtual device may be a virtual reality head-mounted display (VR glasses): a device that places a screen in front of the eyes and can wrap the user completely in a virtual world, displaying a virtual keyboard in front of the user's eyes.
In an alternative embodiment, as shown in fig. 1, the wearable virtual device includes an image capturing device 10 configured to capture real-world image and video information. Specifically, the image capturing device 10 generally includes one or more cameras that capture image data of the environment and objects around the user in real time; that is, the image capturing device 10 can capture images of the target azimuth in real time. The wearable virtual device further includes a processor, to which the captured image data is transmitted and, after a series of processing and analysis, fused with the virtual keyboard.
In an alternative embodiment, as shown in fig. 1, the wearable virtual device further has an image display component 20, which is responsible for overlaying the virtual keyboard onto the user's real field of view to achieve an augmented-reality effect. The image display component 20 generally comprises an image display interface responsible for generating and displaying the virtual keyboard; this interface may be implemented using micro-OLED (organic light-emitting diode) or LCoS (liquid crystal on silicon) technology.
In an alternative embodiment, the wearable virtual device may also have sensors (such as an accelerometer, a gyroscope, an eyeball tracker, etc.) thereon for capturing information such as head movement, eyeball position, direction, etc. of the user in real time, where the sensors are used to provide key data for user interaction and environmental perception for the wearable virtual device.
In an alternative embodiment, the wearable virtual device may also have a power system (including a battery and a power management module) thereon, to provide a continuous power supply to the wearable virtual device, ensuring proper operation of the components.
In an alternative embodiment, the wearable virtual device may further have an interaction interface (including a touch pad, a voice recognition module, etc.) thereon, for receiving an input instruction of a user, and implementing an interaction operation with the wearable virtual device.
In an alternative embodiment, the wearable virtual device may also have a connection module (including Wi-Fi, bluetooth, etc. wireless communication technology module) thereon, for data transmission and communication with other external devices (such as a mobile phone, a personal computer device, etc.).
According to an embodiment of the present invention, there is provided a key operation method embodiment of a virtual keyboard, it should be noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases, the steps shown or described may be performed in an order different from that shown or described herein.
In this embodiment, a key operation method of a virtual keyboard is provided, which is used for a wearable virtual device as shown in fig. 1, where the wearable virtual device has an image acquisition device 10, and fig. 2 is a flowchart of a key operation method of a virtual keyboard according to an embodiment of the present invention, and as shown in fig. 2, the flowchart includes the following steps:
step S201, acquiring an image of a target azimuth in real time through the image acquisition device on the wearable virtual equipment.
In an alternative embodiment, the image capturing device on the wearable virtual device typically consists of high-resolution cameras. When the user wears the device and faces the target azimuth, the cameras capture images of that azimuth in real time and convert them into digital signals for transmission and processing, so as to extract useful information from the image (e.g., whether the user's palm is present) and fuse it with the virtual keyboard. The user thus sees a real image fused with the virtual keyboard, displayed through the image display component of the wearable virtual device.
In step S202, when it is detected that the palm of the user exists in the image of the target azimuth, a virtual keyboard is displayed above the palm of the user.
In an alternative embodiment, when the image capturing device on the wearable virtual device detects that the user's palm is present in the image of the target azimuth (meaning the palm faces the camera of the virtual device), the wearable virtual device displays a virtual keyboard above the palm. First, the image acquisition device continuously captures images of the target azimuth in real time and transmits them to the processor, which runs an image recognition algorithm to analyze the images and detect palm features (which may include the palm's shape, color, texture, and so on); by comparison with a preset model or template, the processor can accurately recognize the user's palm. Once the palm is detected, the processor in the wearable virtual device immediately triggers the display mechanism of the virtual keyboard: the keyboard is shown through the image display component and presented above the user's palm as a three-dimensional image consistent with the user's view. Displaying the virtual keyboard above the palm in this way provides a more natural and intuitive interaction experience; the displayed keyboard is unobstructed, so the user can quickly find the edge of each key, accurately perceive whether a finger has touched a key's edge or surface, and operate with greater certainty.
In step S203, when the target finger corresponding to the palm of the user coincides with the key position of the virtual keyboard, the input operation of the target key corresponding to the target finger is determined, and the input operation is responded.
In an alternative embodiment, when the target finger corresponding to the user's palm coincides with a key position of the virtual keyboard, the wearable virtual device recognizes and responds to the user's input operation. First, the image acquisition device captures images of the user's palm and fingers in real time and transmits them to the processor. Through image recognition, the processor accurately identifies the position of the palm and of the corresponding target finger (such as the index or middle finger), then analyzes the correspondence between the target finger's position and the keys on the virtual keyboard to determine whether the finger coincides with a key position. Once the target finger coincides with a target key on the virtual keyboard, the processor triggers the corresponding input operation and executes the appropriate response according to the operation's type and requirements. For example, if the target key represents a letter or number, the processor may insert the corresponding character into the text entry field; if the target key represents a function key, the processor may execute the corresponding function or command.
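The response step can be sketched as a small dispatcher that distinguishes character keys from function keys, as in the example above; the key names and the command mapping are illustrative assumptions:

```python
def respond_to_key(key: str):
    """Dispatch sketch for step S203: a character key yields an insert
    action, a function key yields its command, anything else is ignored."""
    commands = {"enter": "submit", "backspace": "delete_char"}  # hypothetical mapping
    if key in commands:
        return ("command", commands[key])     # function key -> run its command
    if len(key) == 1 and key.isalnum():
        return ("insert", key)                # letter/number -> insert the character
    return ("ignore", key)
```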
In summary, the image acquisition device on the wearable virtual device acquires an image containing the user's palm, so that when the user's palm faces the camera of the wearable virtual device, a virtual keyboard is displayed above the palm. The displayed virtual keyboard is not occluded, so the user can quickly find the edge of each key, accurately sense whether a finger touches the edge or surface of a key, and operate with greater certainty.
In this embodiment, a key operation method of a virtual keyboard is provided, which is used for a wearable virtual device shown in fig. 1, where the wearable virtual device has an image acquisition device 10, and fig. 3 is a schematic flow chart of another key operation method of a virtual keyboard according to an embodiment of the present invention, and as shown in fig. 3, the flow includes the following steps:
step S301, an image of a target azimuth is acquired in real time through an image acquisition device on the wearable virtual equipment. Please refer to step S201 in the embodiment shown in fig. 2 in detail, which is not described herein.
In step S302, when it is detected that the palm of the user exists in the image of the target azimuth, the position of the first palm and the position of the second palm of the user in the image are detected.
In an alternative embodiment, referring to the control schematic diagram of the first virtual keyboard shown in fig. 4, the image capturing device captures a high-definition image of the target azimuth and transmits it to the processor of the wearable virtual device. The image recognition algorithm in the processor performs a deep analysis of the image to find palm features, which may include the color, texture, shape, and edges of the palm. When the user's palms are detected in the image of the target azimuth (meaning that the palms of the user's two hands face the camera of the wearable virtual device), the processor locates and recognizes the positions of the first palm center and the second palm center in the image using a palm detection algorithm. In this embodiment, detection of the palm center positions may be implemented through techniques such as edge detection, shape matching, and pattern recognition. Once the positions of the first palm center and the second palm center are successfully detected, they may be stored in memory and updated in real time for subsequent gesture recognition, interaction control, and positioning and presentation of the virtual keyboard.
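The store-and-update step described above can be sketched as a small per-frame tracker; the class and method names are illustrative assumptions for this sketch.

```python
# Sketch of keeping the two detected palm-centre positions in memory and
# updating them each frame for later gesture recognition and keyboard
# placement. Coordinates are simple (x, y) pixel pairs.

class PalmTracker:
    def __init__(self):
        self.first = None    # (x, y) of the first palm centre, or None
        self.second = None   # (x, y) of the second palm centre, or None

    def update(self, first, second):
        """Store the latest detected positions (called once per frame)."""
        self.first, self.second = first, second

    def both_detected(self):
        """True when both palm centres have been located."""
        return self.first is not None and self.second is not None

tracker = PalmTracker()
tracker.update((120, 300), (360, 310))
```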
Step S303, displaying a virtual keyboard at a designated position on the image display assembly according to the position of the first palm and the position of the second palm, so that, from the viewing angle from the image display assembly to the user's palm, the virtual keyboard is positioned above the first palm and the second palm. The virtual keyboard is in a semitransparent state. In this embodiment the virtual keyboard may be appropriately enlarged to better suit the case in which the palms face the eyes.
Specifically, the step S303 includes:
Step S3031, determining a first designated position on the image display interface of the image display assembly and a first virtual keyboard size parameter according to the position of the first palm and the position of the second palm, so as to display a first virtual keyboard at the first designated position.
Specifically, the user may preset modes, for example a single-hand keyboard mode and a two-hand keyboard mode; in the two-hand keyboard mode, the virtual keyboard is not displayed when only one palm is detected. The accurate positions of the user's first palm center and second palm center are captured by the image acquisition device, and the processor uses these positions, combined with the viewing range and viewing angle of the image display assembly, to calculate the first designated position on the image display interface. This position should ensure that, when the user looks through the image display assembly, the virtual keyboard is located exactly above the first palm and the second palm, providing an intuitive and natural interaction experience. At the same time, the processor may determine the size parameter of the first virtual keyboard according to the spacing and size of the first palm and the second palm, so that the first virtual keyboard fits the user's palm spacing while keeping the keys reasonably distributed and convenient to operate. The processor then sends the first designated position and the first virtual keyboard size parameter to the image display assembly to trigger display of the first virtual keyboard at the first designated position on the image display interface. The user can then see their hands under (or behind) the first virtual keyboard in the image display interface of the wearable virtual device; because the first virtual keyboard is semitransparent, each finger remains clearly visible. When the user moves a palm or adjusts the viewing angle, the position and size of the first virtual keyboard are updated in real time to maintain the best interaction experience.
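The position-and-size calculation described above can be sketched as simple 2D geometry; the vertical offset, width factor, and minimum width below are illustrative assumptions, not values from the patent.

```python
# Sketch of deriving the first virtual keyboard's position and size from
# the two palm-centre positions: centred between the palms, offset upward
# so it sits above both, with width adapted to the palm spacing.

def keyboard_geometry(first, second, y_offset=80, width_factor=1.5, min_width=200):
    """Return the keyboard centre (x, y) and its width on the display interface."""
    cx = (first[0] + second[0]) / 2
    cy = min(first[1], second[1]) - y_offset        # above both palms (y grows downward)
    spacing = abs(second[0] - first[0])
    width = max(min_width, spacing * width_factor)  # adapt to the palm spacing
    return (cx, cy), width

pos, width = keyboard_geometry((100, 400), (300, 420))
```

When the palms move, recomputing this per frame gives the real-time update of position and size mentioned above.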
In an optional implementation, the wearable device is provided with an image projection device that can project the virtual keyboard at a position spaced apart from the user's palm. When the user's preset mode is the two-hand keyboard mode, the positions of the user's first palm center and second palm center in the image are detected first; a first palm plane is determined according to these two positions; and a first projection position between the first palm plane and the image projection device is then determined according to the first palm plane, so that a fourth virtual keyboard is projected and displayed at the first projection position.
Step S304, identifying the distance between each finger of the user and the corresponding key, and marking the finger entering the pre-click position; the pre-click position is used for indicating that the distance between the finger and the corresponding key is in the target range.
Specifically, the processor of this embodiment processes the hand image using image processing and computer vision algorithms (which may include finger segmentation, feature extraction, and position identification), determines the specific position and posture of a target finger by identifying the contour and feature points of the finger, and calculates the distance between the target finger and the corresponding key. The pre-click position is a set area indicating that the distance between the finger and the corresponding key is within the target range.
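The pre-click marking step can be sketched as a simple range test per finger; the range bounds below are illustrative assumptions (the patent only says the distance lies "in the target range").

```python
# Sketch of pre-click marking: fingers whose distance to their
# corresponding key falls inside the target range are marked.

def mark_pre_click(finger_key_distances, near=5.0, far=30.0):
    """Return the set of fingers whose key distance d satisfies near <= d < far (mm)."""
    return {finger for finger, d in finger_key_distances.items() if near <= d < far}

# Distances (mm, illustrative) from each finger to its corresponding key:
marks = mark_pre_click({"index": 12.0, "middle": 45.0, "thumb": 2.0})
```

Only the index finger lands in the pre-click range here; the middle finger is too far and the thumb is already effectively touching its key.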
In step S305, when the target finger corresponding to the palm of the user coincides with a key position of the virtual keyboard, the input operation of the target key corresponding to the target finger is determined and responded to.
In an optional implementation manner, when the image acquisition device identifies that the mark on the target finger corresponding to the palm of the user is coincident with the key position of the virtual keyboard, determining that the input operation of the target key corresponding to the target finger is a click operation, and responding to the click operation, performing a first dynamic confirmation display.
Specifically, the image capturing device in this embodiment continuously tracks the user's palm and finger positions; as the target finger moves toward the target key, its mark approaches the key. When the mark on the target finger is recognized as coinciding with a key position of the virtual keyboard, the processor determines that a click operation has occurred and triggers a first dynamic confirmation display, which provides immediate visual feedback informing the user that the input operation has been successfully recognized and accepted. The specific form of the first dynamic confirmation display can be customized according to application requirements and user experience design; for example, a short animation effect such as enlargement, a color change, or flashing may be shown around the clicked target key to highlight it, optionally accompanied by auxiliary feedback such as sound or vibration to further strengthen the user's perception. The dynamic confirmation display not only improves the accuracy of user operation but also enhances the interaction between the user and the virtual keyboard; the user can confirm the input operation by observing it.
In an optional implementation manner, when the image acquisition device recognizes that the coincidence time of the mark on the target finger corresponding to the palm of the user and the key position of the virtual keyboard exceeds a first time threshold, it is determined that the input operation of the target key corresponding to the target finger is a long-press operation, and a second dynamic confirmation display is performed in response to the long-press operation.
Specifically, to accurately determine a long-press operation, the processor records the moment when the mark on the target finger first coincides with the key position, continuously monitors the coincidence state, and calculates the coincidence time. If the coincidence time exceeds a preset first time threshold (which can be set according to the actual application scene and the user's habits), a long-press operation is determined to have occurred, and the wearable virtual device performs a second dynamic confirmation display. Like the first dynamic confirmation display, this provides instant visual feedback, but its specific form can be more striking and longer-lasting to highlight the distinct nature of the long press. For example, the system may display a continuously flashing animation effect around the long-pressed key, possibly combined with auxiliary feedback such as sound or vibration to enhance the user's perceived experience. Through the second dynamic confirmation display, the user can clearly know that a long-press operation has been performed and that the wearable virtual device has responded to it.
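The record-start-then-compare logic for distinguishing a long press from a click can be sketched as follows; the threshold value and function name are illustrative assumptions (the patent leaves the first time threshold configurable).

```python
# Sketch of the long-press determination: record when the finger mark
# first coincides with a key, then classify by how long the coincidence
# has lasted relative to the first time threshold.

FIRST_TIME_THRESHOLD = 0.6  # seconds; tunable per scene and user habit

def classify_press(coincide_start, now, threshold=FIRST_TIME_THRESHOLD):
    """Return 'long_press' once the coincidence exceeds the threshold, else 'click'."""
    if coincide_start is None:          # mark not currently on any key
        return None
    elapsed = now - coincide_start
    return "long_press" if elapsed > threshold else "click"

op = classify_press(coincide_start=10.0, now=10.8)
```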
In an optional embodiment, when the image capturing device recognizes that the speed at which the mark on the target finger corresponding to the palm of the user coincides with a key position of the virtual keyboard is less than a first speed threshold, it is determined that the input operation of the target key corresponding to the target finger is a tap operation, and a third dynamic confirmation display is performed in response to the tap operation.
Specifically, the wearable virtual device can calculate the movement speed and acceleration of the target finger by analyzing the change of the finger position across consecutive image frames, thereby tracking the finger's movement speed. Once the finger is detected to coincide with a key position and the speed at the moment of coincidence is less than the preset first speed threshold, the input operation of the target key corresponding to the target finger is determined to be a tap operation. The third dynamic confirmation display differs from the confirmation displays of the long-press and click operations mentioned above and is intended to inform the user, through a distinctive visual effect, that the tap operation has been recognized; for example, it may employ a soft animation effect such as a slight depression of the key or a gentle glow, to distinguish it from other types of operation feedback. Through the third dynamic confirmation display, the user can intuitively perceive that the tap operation has been recognized and responded to by the wearable virtual device.
In an optional implementation, when the image acquisition device recognizes that the speed at which the mark on the target finger corresponding to the palm of the user coincides with a key position of the virtual keyboard is greater than a second speed threshold, it is determined that the input operation of the target key corresponding to the target finger is a heavy-press operation, and a fourth dynamic confirmation display is performed in response to the heavy-press operation. The first speed threshold is less than or equal to the second speed threshold.
Specifically, the movement speed and acceleration of the target finger can be calculated by analyzing the change of the finger position across consecutive image frames, thereby tracking the finger's movement speed. Once coincidence of the finger with a key position is detected and the speed at the moment of coincidence is greater than the preset second speed threshold, the input operation of the target key corresponding to the target finger is determined to be a heavy-press operation. The wearable virtual device then makes a fourth dynamic confirmation display, which may be more striking and forceful than the first through third dynamic confirmation displays to match the strong nature of the heavy press. For example, the system may employ a strong animation effect, such as rapid depression of the key or a bright flash, to attract the user's attention, possibly combined with a low-pitched sound effect or vibration feedback to enhance the perceived experience.
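The two speed thresholds (tap below the first, heavy press above the second, an ordinary click in between) and the frame-to-frame speed estimate can be sketched together; the threshold values and units are illustrative assumptions.

```python
# Sketch of the speed-based operation distinction. The patent requires
# only that the first speed threshold <= the second speed threshold;
# the concrete values here are placeholders.

FIRST_SPEED_THRESHOLD = 0.2   # m/s
SECOND_SPEED_THRESHOLD = 0.8  # m/s; must be >= the first threshold

def classify_by_speed(speed):
    """Classify the coincidence speed into tap / click / heavy press."""
    if speed < FIRST_SPEED_THRESHOLD:
        return "tap"           # soft feedback (slight depression, gentle glow)
    if speed > SECOND_SPEED_THRESHOLD:
        return "heavy_press"   # strong feedback (rapid depression, sound/vibration)
    return "click"             # ordinary press between the two thresholds

def finger_speed(p0, p1, dt):
    """Speed from two successive frame positions and the frame interval dt."""
    return ((p1[0] - p0[0]) ** 2 + (p1[1] - p0[1]) ** 2) ** 0.5 / dt
```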
In an alternative embodiment, when the image acquisition device recognizes that the mark on the target finger corresponding to the palm of the user contacts with the touch pad and slides, it is determined that the input operation corresponding to the target finger is a touch pad sliding operation, and a fifth dynamic confirmation display is performed in response to the touch pad sliding operation.
Specifically, in order to accurately identify the sliding operation of the touch pad, the wearable virtual device needs to continuously monitor the contact state of the target finger with the touch pad and the movement track of the finger, and once the contact between the target finger and the touch pad is identified and the sliding starts, a corresponding processing mechanism is triggered immediately. In response to the touch panel sliding operation, a fifth dynamic confirmation display is performed. The fifth dynamic confirmation display may be designed to be both intuitive and practical so that the user can clearly see the movement track and effect of the finger on the touch pad. For example, a virtual cursor that follows the movement of the finger may be displayed on the touch panel, or the sliding region may be highlighted by changing the background color, adding an animation effect, or the like. In addition, to provide a richer user experience, the fifth dynamic confirmation display may also incorporate additional feedback such as sound or vibration.
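The contact-plus-track monitoring described above can be sketched as a simple travel-distance test; the minimum-travel threshold and function name are illustrative assumptions.

```python
# Sketch of touch-pad slide recognition: a slide is reported once a
# contacting finger's track accumulates more than a small travel
# distance (so tiny jitters while resting are not treated as slides).

def detect_slide(contact, track, min_travel=10.0):
    """True when a contacting finger's track length exceeds min_travel (pixels)."""
    if not contact or len(track) < 2:
        return False
    travel = sum(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
                 for (x0, y0), (x1, y1) in zip(track, track[1:]))
    return travel > min_travel

sliding = detect_slide(True, [(0, 0), (6, 8), (12, 16)])
```

On each recognized slide, the device would update the fifth dynamic confirmation display (e.g. move a virtual cursor along the track).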
Step S306, triggering a virtual keyboard canceling instruction when detecting that the palm of a user in the image of the target azimuth turns downwards; and in response to the virtual keyboard canceling instruction, canceling display of the virtual keyboard.
Specifically, the wearable virtual device can accurately identify the turning action of the user's palm by analyzing the change of palm orientation across consecutive image frames. Once the palm is detected turning from an upward or sideways state to a downward state, it can be judged that the user intends to cancel the virtual keyboard; the wearable virtual device then immediately responds and cancels the display of the virtual keyboard, which may be realized by gradually fading out or immediately removing the virtual keyboard image, so that the user clearly sees the result of the operation and can continue with other tasks.
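The orientation-change rule for the cancel gesture can be sketched as a two-state comparison between frames; the orientation labels and function names are illustrative assumptions.

```python
# Sketch of the cancel gesture: when the palm orientation changes from
# up/side in the previous frame to down in the current frame, the
# virtual keyboard display is cancelled.

def should_cancel(prev_orientation, curr_orientation):
    """True when the palm turns from an up/side state to a downward state."""
    return prev_orientation in ("up", "side") and curr_orientation == "down"

def handle_frame(state, orientation):
    """Update per-frame state; hide the keyboard on a detected palm flip."""
    if should_cancel(state.get("orientation"), orientation):
        state["keyboard_visible"] = False  # fade out or remove immediately
    state["orientation"] = orientation
    return state

state = {"orientation": "up", "keyboard_visible": True}
handle_frame(state, "down")
```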
In summary, the image acquisition device on the wearable virtual device acquires an image containing the user's palm, so that when the user's palm faces the camera of the wearable virtual device, a virtual keyboard is displayed above the palm. The displayed virtual keyboard is not occluded, so the user can quickly find the edge of each key, accurately sense whether a finger touches the edge or surface of a key, and operate with greater certainty. In addition, the virtual keyboard is designed to be semitransparent so that the user can clearly see each finger, and the marks added to the fingers allow the position and contact information between a finger and the corresponding key to be obtained accurately, reducing misjudgments.
In this embodiment, a key operation method of a virtual keyboard is provided, which is used for a wearable virtual device as shown in fig. 1, where the wearable virtual device has an image acquisition device 10, and fig. 6 is a flowchart of a key operation method of another virtual keyboard according to an embodiment of the present invention, and as shown in fig. 6, the flowchart includes the following steps:
Step S601, acquiring an image of a target azimuth in real time through an image acquisition device on the wearable virtual equipment. Please refer to step S201 in the embodiment shown in fig. 2 in detail, which is not described herein.
In step S602, when it is detected that the palm of the user exists in the image of the target azimuth, the position of the first palm and the position of the second palm of the user in the image are detected. Please refer to step S302 in the embodiment shown in fig. 3 in detail, which is not described herein.
Step S603, determining a second designated position on the image display interface of the image display assembly and a second virtual keyboard size parameter according to the position of the first palm, and displaying a second virtual keyboard at the second designated position.
Step S604, determining a third designated position on the image display interface of the image display assembly and a third virtual keyboard size parameter according to the position of the second palm, and displaying a third virtual keyboard at the third designated position.
Specifically, referring to the control schematic diagram of the second virtual keyboard and the third virtual keyboard shown in fig. 7, when the preset keyboard mode is the single-hand keyboard mode, this embodiment may provide an adsorption function for the virtual keyboard position, so that the second virtual keyboard is automatically adsorbed onto a fixed point of the user's forearm corresponding to the first palm center, and the third virtual keyboard onto a fixed point of the user's forearm corresponding to the second palm center, reducing misoperation and instability. The adsorption function relies on accurate image recognition and position tracking: referring to the schematic diagram of the forearm fixed-point setting shown in fig. 8, the position of the user's palm is continuously monitored by the image acquisition device and the fixed points on the user's forearms are determined. Once the fixed points are determined, the virtual keyboards are adsorbed onto these positions, ensuring that each keyboard's position relative to the user's forearm remains unchanged, so that the keyboard plane rotates with the forearm (the plane of the second virtual keyboard rotates with the forearm corresponding to the first palm center, and the plane of the third virtual keyboard with the forearm corresponding to the second palm center). This suits keeping the wrist stationary and moving only the fingers. With the adsorption function, the user does not need to frequently adjust the position of the virtual keyboard, reducing misoperation and instability.
At the same time, this design better matches ergonomic characteristics: the user can operate the keyboard by moving only the fingers while the wrist remains fixed, improving the comfort and convenience of operation. In addition, this embodiment also provides a custom keyboard layout option: the user can adjust the arrangement and size of the keys according to personal operating habits and preferences to obtain the best input experience. To realize the custom layout, the wearable virtual device may provide a visual configuration interface that allows the user to adjust the keyboard layout through operations such as dragging and zooming.
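The "adsorption" behaviour described above can be sketched as keeping each keyboard at a fixed offset from its tracked forearm point, so the keyboard follows the forearm frame by frame; the offset value and function name are illustrative assumptions.

```python
# Sketch of the adsorption function: the keyboard centre stays at a
# fixed offset from the forearm fixed point, so as the forearm moves
# the keyboard follows it without the user repositioning anything.

def absorb_keyboard(forearm_point, offset=(0, -60)):
    """Keyboard centre at a fixed offset from the tracked forearm point."""
    return (forearm_point[0] + offset[0], forearm_point[1] + offset[1])

# Forearm fixed point tracked over three frames (illustrative pixel coords):
frames = [(100, 500), (110, 495), (130, 480)]
positions = [absorb_keyboard(p) for p in frames]
```

The keyboard-to-forearm offset stays constant across frames, which is exactly the "relative position remains unchanged" property the adsorption function guarantees.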
In an optional implementation, the wearable device is provided with an image projection device that can project the virtual keyboard at a position spaced apart from the user's palm. When the user's preset mode is the single-hand keyboard mode, the positions of the user's first palm center and second palm center in the image are detected first; a second projection position between the first palm and the image projection device is determined according to the position of the first palm; a third projection position between the second palm and the image projection device is determined according to the position of the second palm; and the image projection device is then controlled to project at the second projection position and the third projection position, displaying a fifth virtual keyboard and a sixth virtual keyboard respectively.
Step S605, the distance between each finger of the user and the corresponding key is identified, and the finger entering the pre-click position is marked; the pre-click position is used for indicating that the distance between the finger and the corresponding key is in the target range. Please refer to step S304 in the embodiment shown in fig. 3 in detail, which is not described herein.
In step S606, when the target finger corresponding to the palm of the user coincides with a key position of the virtual keyboard, the input operation of the target key corresponding to the target finger is determined and responded to. Please refer to step S305 in the embodiment shown in fig. 3 in detail, which is not described herein.
Step S607, triggering a virtual keyboard canceling instruction when detecting that the palm of a user in the image of the target azimuth turns downwards; and in response to the virtual keyboard canceling instruction, canceling display of the virtual keyboard. Please refer to step S306 in the embodiment shown in fig. 3 in detail, which is not described herein.
In summary, the image acquisition device on the wearable virtual device acquires an image containing the user's palm, so that when the user's palm faces the camera of the wearable virtual device, a virtual keyboard is displayed above the palm. The displayed virtual keyboard is not occluded, so the user can quickly find the edge of each key, accurately sense whether a finger touches the edge or surface of a key, and operate with greater certainty. In addition, the virtual keyboard is designed to be semitransparent so that the user can clearly see each finger, and the marks added to the fingers allow the position and contact information between a finger and the corresponding key to be obtained accurately, reducing misjudgments.
In addition, the invention not only can display the same first virtual keyboard above the first palm center and the second palm center, but can also display a second virtual keyboard and a third virtual keyboard above the first palm center and the second palm center respectively, so that each virtual keyboard can move with the position of the forearm or wrist. The second virtual keyboard is appropriately enlarged or reduced according to the position of the first palm center, and the third virtual keyboard according to the position of the second palm center, making them better suited to the case in which the palms face the eyes.
The embodiment also provides a key operation device of a virtual keyboard, which is used for implementing the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a key operation device of a virtual keyboard, as shown in fig. 9, including:
The image acquisition module 901 is used for acquiring an image of a target azimuth in real time through the image acquisition device on the wearable virtual equipment;
The virtual keyboard display module 902 is configured to display a virtual keyboard above a palm of a user when it is detected that the palm of the user exists in the image of the target azimuth;
The input operation response module 903 is configured to determine an input operation of a target key corresponding to a target finger when the target finger corresponding to the palm of the user coincides with a key position of the virtual keyboard, and respond to the input operation.
In some alternative embodiments, the wearable virtual device has an image display component thereon; when detecting that the palm of the user exists in the image of the target azimuth, the virtual keyboard display module 902 is further configured to:
Detecting the position of a first palm and the position of a second palm of a user in the image;
And displaying a virtual keyboard on the appointed position of the image display component according to the positions of the first palm and the second palm, so that the virtual keyboard is positioned above the first palm and the second palm from the view angle from the image display component to the palm of the user.
In some alternative embodiments, the virtual keyboard display module 902 is further configured to:
And determining a first designated position on an image display interface of the image display assembly and a first virtual keyboard size parameter according to the position of the first palm and the position of the second palm so as to display a first virtual keyboard at the first designated position.
In some alternative embodiments, the virtual keyboard display module 902 is further configured to:
Determining a second designated position on an image display interface of the image display assembly and a second virtual keyboard size parameter according to the position of the first palm center, and displaying a second virtual keyboard at the second designated position;
And determining a third designated position on the image display interface of the image display assembly and a third virtual keyboard size parameter according to the position of the second palm center, and displaying a third virtual keyboard at the third designated position.
In some alternative embodiments, the virtual keyboard is semi-transparent.
In some alternative embodiments, after displaying the virtual keyboard over the palm of the user, the apparatus is further configured to:
identifying the distance between each finger of the user and the corresponding key, and marking the finger entering the pre-click position; the pre-click position is used for indicating that the distance between the finger and the corresponding key is in the target range.
In some alternative embodiments, the input operation response module 903 is further configured to:
When the image acquisition device identifies that the mark on the target finger corresponding to the palm of the user is overlapped with the key position of the virtual keyboard, determining that the input operation of the target key corresponding to the target finger is a click operation, and responding to the click operation, performing first dynamic confirmation display.
In some alternative embodiments, the input operation response module 903 is further configured to:
When the image acquisition device recognizes that the coincidence time of the mark on the target finger corresponding to the palm of the user and the key position of the virtual keyboard exceeds a first time threshold, determining that the input operation of the target key corresponding to the target finger is a long-press operation, and responding to the long-press operation, performing second dynamic confirmation display.
In some alternative embodiments, the input operation response module 903 is further configured to:
When the image acquisition device recognizes that the speed at which the mark on the target finger corresponding to the palm of the user coincides with the key position of the virtual keyboard is smaller than the first speed threshold, determining that the input operation of the target key corresponding to the target finger is a tap operation, and responding to the tap operation with a third dynamic confirmation display.
In some alternative embodiments, the input operation response module 903 is further configured to:
When the image acquisition device recognizes that the speed at which the mark on the target finger corresponding to the palm of the user coincides with the key position of the virtual keyboard is greater than the second speed threshold, determining that the input operation of the target key corresponding to the target finger is a heavy-press operation, and responding to the heavy-press operation with a fourth dynamic confirmation display.
In some alternative embodiments, the input operation response module 903 is further configured to:
when the image acquisition device recognizes that the mark on the target finger corresponding to the palm of the user is contacted with the touch pad and slides, the input operation corresponding to the target finger is determined to be the touch pad sliding operation, and a fifth dynamic confirmation display is performed in response to the touch pad sliding operation.
In some alternative embodiments, the input operation response module 903 is further configured to:
triggering a virtual keyboard canceling instruction when the fact that the palm of a user in the image of the target azimuth turns downwards is detected;
And in response to the virtual keyboard canceling instruction, canceling display of the virtual keyboard.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
A key operation device of a virtual keyboard in this embodiment is presented as functional units, where a unit refers to an ASIC (Application Specific Integrated Circuit), a processor and memory that execute one or more software or firmware programs, and/or other devices that can provide the above functions.
In summary, the image acquisition device on the wearable virtual device acquires an image containing the user's palm, so that when the user's palm faces the camera of the wearable virtual device, the virtual keyboard is displayed above the user's palm. The displayed virtual keyboard is thus not occluded: the user can quickly find the edge of each key and accurately sense whether a finger touches the edge or surface of a key, which increases operational certainty. In addition, the virtual keyboard is rendered in a semi-transparent state so that the user can clearly see each finger, and the marks added to the fingers allow the position and contact information of each finger relative to its corresponding key to be obtained accurately, reducing misjudgments.
In addition, the invention can not only display the same first virtual keyboard above both the first palm and the second palm, but also display a second virtual keyboard and a third virtual keyboard above the first palm and the second palm respectively, so that each virtual keyboard moves with the position of the forearm or wrist. The second virtual keyboard is enlarged or reduced appropriately according to the position of the first palm, and the third virtual keyboard according to the position of the second palm, which is better suited to the case where a palm faces the eyes.
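The per-palm placement and scaling described above can be sketched as follows. The fixed vertical offset, the inverse-depth scaling relation, and all numeric values are illustrative assumptions: the disclosure says each keyboard is displayed above its palm and enlarged or reduced according to the palm's position, but does not fix how the size parameter is computed.

```python
def keyboard_layout_for_palm(palm_center, base_size=1.0, reference_depth=0.4):
    """Derive a designated display position and a size parameter for one
    virtual keyboard from one palm position (x, y, depth in metres).

    The keyboard is anchored a fixed offset above the palm and scaled
    inversely with depth, so it appears larger as the palm approaches
    the eyes (an assumed relation, not taken from the disclosure)."""
    x, y, depth = palm_center
    offset_above_palm = 0.08  # metres above the palm; illustrative value
    position = (x, y + offset_above_palm, depth)
    size = base_size * reference_depth / max(depth, 1e-6)
    return position, size

def two_keyboard_layout(first_palm, second_palm):
    """Claim-4 style layout: one independently sized keyboard per palm."""
    return keyboard_layout_for_palm(first_palm), keyboard_layout_for_palm(second_palm)
```

For example, a palm at half the reference depth yields a keyboard twice the base size, matching the intuition that a palm brought toward the eyes should show a larger keyboard.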
The embodiment of the invention also provides a computer device equipped with the key operation device of the virtual keyboard shown in fig. 9.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a computer device according to an alternative embodiment of the present invention. As shown in fig. 10, the computer device includes: one or more processors 10, a memory 20, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to an interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computer devices may be connected, with each device providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in fig. 10.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
The memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform the methods of the above embodiments.
The memory 20 may include a storage program area and a storage data area, where the storage program area may store an operating system and at least one application program required by a function, and the storage data area may store data created according to the use of the computer device, etc. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, the memory 20 may optionally include memory located remotely from the processor 10, connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 20 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 20 may also comprise a combination of the above types of memories.
The computer device also includes a communication interface 30 for the computer device to communicate with other devices or communication networks.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the embodiments of the present invention described above may be implemented in hardware or firmware, or as computer code which may be recorded on a storage medium, or as computer code originally stored on a remote storage medium or a non-transitory machine-readable storage medium and downloaded through a network to be stored on a local storage medium, so that the method described herein may be processed by such software stored on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, a flash memory, a hard disk, a solid state disk, or the like; further, the storage medium may also comprise a combination of the above types of memories. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes a storage element that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the methods illustrated by the above embodiments.
Portions of the present invention may be implemented as a computer program product, such as computer program instructions which, when executed by a computer, invoke or provide the methods and/or technical solutions of the present invention through the operation of the computer. Those skilled in the art will appreciate that the forms in which computer program instructions exist in a computer-readable medium include, but are not limited to, source files, executable files, installation package files, etc.; accordingly, the ways in which computer program instructions are executed by a computer include, but are not limited to: the computer executes the instructions directly, or the computer compiles the instructions and then executes the corresponding compiled program, or the computer reads and executes the instructions, or the computer reads and installs the instructions and then executes the corresponding installed program. Herein, a computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (16)

1. A key operation method of a virtual keyboard, wherein the method is used for a wearable virtual device, and the wearable virtual device is provided with an image acquisition device, and the method comprises the following steps:
Acquiring an image of a target azimuth in real time through the image acquisition device on the wearable virtual equipment;
When it is detected that the palm of the user exists in the image of the target azimuth, displaying a virtual keyboard above the palm of the user;
when the target finger corresponding to the palm of the user is overlapped with the key position of the virtual keyboard, determining the input operation of the target key corresponding to the target finger, and responding to the input operation.
2. The method of claim 1, wherein the wearable virtual device has an image display component thereon; when detecting that the palm of the user exists in the image of the target azimuth, displaying a virtual keyboard above the palm of the user comprises:
Detecting the position of a first palm and the position of a second palm of a user in the image;
And displaying a virtual keyboard on the appointed position of the image display assembly according to the positions of the first palm and the second palm, so that the virtual keyboard is positioned above the first palm and the second palm from the view angle from the image display assembly to the palm of the user.
3. The method of claim 2, wherein displaying a virtual keyboard on the designated location of the image display assembly based on the location of the first palm and the location of the second palm comprises:
And determining a first designated position on an image display interface of the image display assembly and a first virtual keyboard size parameter according to the position of the first palm and the position of the second palm so as to display a first virtual keyboard at the first designated position.
4. The method of claim 2, wherein displaying a virtual keyboard on the designated location of the image display assembly based on the location of the first palm and the location of the second palm comprises:
Determining a second designated position on an image display interface of the image display assembly and a second virtual keyboard size parameter according to the position of the first palm center, and displaying a second virtual keyboard at the second designated position;
And determining a third designated position on an image display interface of the image display assembly and a third virtual keyboard size parameter according to the position of the second palm center, and displaying a third virtual keyboard at the third designated position.
5. The method of claim 1, wherein the virtual keyboard is in a semi-transparent state.
6. The method of any one of claims 1 to 5, wherein after displaying a virtual keyboard over the palm of the user, the method further comprises:
Identifying the distance between each finger of the user and the corresponding key, and marking the finger entering the pre-click position; the pre-click position is used for indicating that the distance between the finger and the corresponding key is in the target range.
7. The method according to claim 6, wherein when the target finger corresponding to the palm of the user coincides with the key position of the virtual keyboard, determining the input operation of the target key corresponding to the target finger, and responding to the input operation, comprises:
When the image acquisition device identifies that the mark on the target finger corresponding to the palm of the user is overlapped with the key position of the virtual keyboard, determining that the input operation of the target key corresponding to the target finger is a click operation, and responding to the click operation, performing first dynamic confirmation display.
8. The method according to claim 6, wherein when the target finger corresponding to the palm of the user coincides with the key position of the virtual keyboard, determining the input operation of the target key corresponding to the target finger, and responding to the input operation, comprises:
When the image acquisition device recognizes that the coincidence time of the mark on the target finger corresponding to the palm of the user and the key position of the virtual keyboard exceeds a first time threshold, determining that the input operation of the target key corresponding to the target finger is a long-press operation, and responding to the long-press operation, performing second dynamic confirmation display.
9. The method according to claim 6, wherein when the target finger corresponding to the palm of the user coincides with the key position of the virtual keyboard, determining the input operation of the target key corresponding to the target finger, and responding to the input operation, comprises:
When the image acquisition device identifies that the speed of the mark on the target finger corresponding to the palm of the user and the key position of the virtual keyboard are overlapped is smaller than a first speed threshold, determining that the input operation of the target key corresponding to the target finger is a tapping operation, and responding to the tapping operation, and performing third dynamic confirmation display.
10. The method according to claim 6, wherein when the target finger corresponding to the palm of the user coincides with the key position of the virtual keyboard, determining the input operation of the target key corresponding to the target finger, and responding to the input operation, comprises:
When the image acquisition device recognizes that the speed of the mark on the target finger corresponding to the palm of the user and the key position of the virtual keyboard are overlapped is greater than a second speed threshold, determining that the input operation of the target key corresponding to the target finger is a repeated pressing operation, and responding to the repeated pressing operation, and performing fourth dynamic confirmation display.
11. The method of claim 6, wherein the virtual keyboard has a touch pad thereon, and wherein determining an input operation of a target key corresponding to a target finger when the target finger corresponds to a palm of a user coincides with a key position of the virtual keyboard, and responding to the input operation, comprises:
When the image acquisition device identifies that the mark on the target finger corresponding to the palm of the user is contacted with the touch pad and slides, determining that the input operation corresponding to the target finger is touch pad sliding operation, and responding to the touch pad sliding operation, and performing fifth dynamic confirmation display.
12. The method according to claim 1, wherein the method further comprises:
triggering a virtual keyboard canceling instruction when it is detected that the palm of the user in the image of the target azimuth turns downward;
and canceling display of the virtual keyboard in response to the virtual keyboard canceling instruction.
13. A key operation device of a virtual keyboard, the device comprising:
The image acquisition module is used for acquiring an image of the target azimuth in real time through the image acquisition device on the wearable virtual equipment;
The virtual keyboard display module is used for displaying a virtual keyboard above the palm center of the user when detecting that the palm center of the user exists in the image of the target azimuth;
and the input operation response module is used for determining the input operation of the target key corresponding to the target finger when the target finger corresponding to the palm of the user is overlapped with the key position of the virtual keyboard, and responding to the input operation.
14. A computer device, comprising:
a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform a key operation method of a virtual keyboard as claimed in any one of claims 1 to 12.
15. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform a key operation method of a virtual keyboard according to any one of claims 1 to 12.
16. A computer program product comprising computer instructions for causing a computer to perform a method of key operation of a virtual keyboard as claimed in any one of claims 1 to 12.
CN202410353994.9A 2024-03-26 2024-03-26 Key operation method and device of virtual keyboard Pending CN118210426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410353994.9A CN118210426A (en) 2024-03-26 2024-03-26 Key operation method and device of virtual keyboard

Publications (1)

Publication Number Publication Date
CN118210426A true CN118210426A (en) 2024-06-18

Family

ID=91447264

Country Status (1)

Country Link
CN (1) CN118210426A (en)

Similar Documents

Publication Publication Date Title
US11625103B2 (en) Integration of artificial reality interaction modes
CN106845335B (en) Gesture recognition method and device for virtual reality equipment and virtual reality equipment
US10452155B2 (en) Display method of on-screen keyboard and computer program product and non-transitory computer readable storage medium thereof
US8902198B1 (en) Feature tracking for device input
US20060209021A1 (en) Virtual mouse driving apparatus and method using two-handed gestures
WO2016189390A2 (en) Gesture control system and method for smart home
US11714540B2 (en) Remote touch detection enabled by peripheral device
GB2483168A (en) Controlling movement of displayed object based on hand movement and size
KR20190133080A (en) Touch free interface for augmented reality systems
US11030980B2 (en) Information processing apparatus, information processing system, control method, and program
US20140053115A1 (en) Computer vision gesture based control of a device
JP2004246578A (en) Interface method and device using self-image display, and program
CN111736691A (en) Interactive method and device of head-mounted display equipment, terminal equipment and storage medium
KR20170133754A (en) Smart glass based on gesture recognition
JP6841232B2 (en) Information processing equipment, information processing methods, and programs
CN115598831A (en) Optical system and associated method providing accurate eye tracking
US20240185516A1 (en) A Method for Integrated Gaze Interaction with a Virtual Environment, a Data Processing System, and Computer Program
US9940900B2 (en) Peripheral electronic device and method for using same
CN118210426A (en) Key operation method and device of virtual keyboard
CN110291495B (en) Information processing system, information processing method, and program
CN107526439A (en) A kind of interface return method and device
US11869145B2 (en) Input device model projecting method, apparatus and system
KR20230122711A (en) Augmented reality transparent display device with gesture input function and implementation method
CN113270006A (en) HoloLens-based printing machine operation training system and method
CN116166161A (en) Interaction method based on multi-level menu and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination