CN111158476A - Key identification method, system, equipment and storage medium of virtual keyboard - Google Patents
- Publication number: CN111158476A (application number CN201911357466.6A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F3/014 — Hand-worn input/output arrangements, e.g. data gloves
- G06F18/251 — Fusion techniques of input or preprocessed data
- G06F3/013 — Eye tracking input arrangements
- G06F3/023 — Arrangements for converting discrete items of information into a coded form, e.g. interpreting keyboard-generated codes as alphanumeric, operand or instruction codes
- G06F3/04883 — GUI interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06F3/04886 — GUI interaction techniques partitioning the touch-screen display area into independently controllable areas, e.g. virtual keyboards or menus
- G06V10/30 — Image preprocessing: noise filtering
- G06V30/153 — Segmentation of character regions using recognition of characters or words
- G06V40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language
- G06V30/10 — Character recognition
- Y02D30/70 — Reducing energy consumption in wireless communication networks
Abstract
The application discloses a key identification method, system, equipment and storage medium for a virtual keyboard. In the method, an inertial measurement unit data glove acquires pre-filtered and noise-reduced motion information of the user's finger keystroke movements and sends it to augmented reality glasses through a pre-stored Bluetooth sending module. After receiving this information, the augmented reality glasses acquire pre-processed image information, input both into a pre-stored data processing module for fusion to generate a fused data sample, and feed the fused sample into a preset neural network model to produce a key prediction result. The key prediction result is then input into a pre-stored mode conversion module and converted into characters recognizable by the augmented reality glasses, which are finally displayed. The embodiments of the application thereby reduce the space occupied by the keyboard and improve the user experience.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, a system, a device, and a storage medium for identifying keys of a virtual keyboard.
Background
The typewriter, which appeared in the 18th century, aroused worldwide enthusiasm and changed the way people controlled machines. Its continuous innovation and refinement developed into the prototype of today's keyboard. The emergence of the keyboard was a major innovation in the field of human-computer interaction: it brought people's control of machines and intelligent devices to a new level of convenience, speed and accuracy.
The keyboard is the most common and principal input device: through an ordinary keyboard, English letters, numbers, various punctuation marks and so on can be entered into the computer, commands can be sent to it, and the corresponding data output. The keyboards currently on the market are mainly classified, according to their working principle, into mechanical keyboards, plastic-membrane keyboards, conductive-rubber keyboards and the like, and their common shortcomings are mainly the following: 1. A physical external device is needed, occupying considerable space. 2. A supporting surface is required for data input. 3. Both hands are fixed in position, limiting flexibility; moreover, keeping both hands fixed also affects the comfort and feel of keyboard input.
Disclosure of Invention
The embodiments of the application provide a key identification method, system, equipment and storage medium for a virtual keyboard. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview; it is intended neither to identify key or critical elements nor to delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description presented later.
In a first aspect, an embodiment of the present application provides a key identification method for a virtual keyboard, applied to an inertial measurement unit data glove, the method including:
obtaining pre-filtered and noise-reduced motion information of the user's finger keystroke movements;
and sending the pre-filtered and noise-reduced motion information of the user's finger keystroke movements to the augmented reality glasses through a pre-stored Bluetooth sending module.
Optionally, before obtaining the pre-filtered and noise-reduced motion information of the user's finger keystroke movements, the method further includes:
when a finger-keystroke movement instruction input for the inertial measurement unit data glove is received, acquiring motion information of the user's finger keystroke movement;
inputting the motion information of the user's finger keystroke movement into a pre-stored filtering and noise-reduction module to generate filtered and noise-reduced motion information, which is taken as the pre-filtered and noise-reduced motion information of the user's finger keystroke movement.
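The application does not specify the filtering algorithm used by the glove's noise-reduction module. As a minimal illustrative sketch (the moving-average filter and window size are assumptions, not taken from the disclosure), a stream of six-axis samples could be smoothed like this:

```python
def moving_average_filter(samples, window=5):
    """Smooth a sequence of six-axis IMU samples (ax, ay, az, gx, gy, gz)
    with a simple moving average -- a hypothetical stand-in for the
    glove's filtering and noise-reduction module."""
    filtered = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)
        window_samples = samples[start:i + 1]
        # Average each of the six axes over the trailing window.
        filtered.append(tuple(
            sum(s[axis] for s in window_samples) / len(window_samples)
            for axis in range(6)
        ))
    return filtered
```

In practice an embodiment might instead use a low-pass or Kalman filter; the point here is only that raw sensor jitter is suppressed before transmission.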
Optionally, the inertial measurement unit data glove comprises an inertial measurement unit motion sensor module, a Bluetooth sending module, a filtering and noise-reduction module, and a wireless charging module;
the inertial measurement unit motion sensor module consists of six-axis inertial measurement unit motion sensors that record the motion information of both hands during key presses, the motion information comprising three-axis acceleration and three-axis angular velocity; the sensors are located on the five fingers and the back of the hand, with each finger sensor connected to the back-of-hand sensor;
the filtering and noise-reduction module is mainly used for preprocessing the collected inertial measurement unit motion data to ensure the validity of the data;
the Bluetooth sending module is mainly used for sending the inertial measurement unit motion information from the data glove to the augmented reality glasses for processing;
the wireless charging module is mainly used for charging the data glove to extend its battery endurance; wireless charging improves the convenience of the keyboard system.
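The disclosure does not define a transmission format for the Bluetooth sending module. A minimal sketch of how one filtered six-axis sample per sensor (five fingers plus the back of the hand, per the module description above) might be packed into a frame — the `<36f` little-endian layout is a hypothetical wire format, not part of the application:

```python
import struct

SENSORS_PER_HAND = 6  # five finger sensors plus the back-of-hand sensor

def pack_hand_frame(samples):
    """Pack six six-axis float samples into a byte frame for Bluetooth
    transmission; the '<36f' layout is an assumed wire format."""
    if len(samples) != SENSORS_PER_HAND:
        raise ValueError("expected one sample per sensor")
    flat = [v for sample in samples for v in sample]  # 6 sensors x 6 axes
    return struct.pack("<36f", *flat)

def unpack_hand_frame(frame):
    """Inverse of pack_hand_frame, as the glasses' Bluetooth receiving
    module might decode it."""
    flat = struct.unpack("<36f", frame)
    return [tuple(flat[i:i + 6]) for i in range(0, 36, 6)]
```

A fixed 144-byte frame keeps per-packet overhead low, which matters for a battery-powered glove streaming at sensor rate.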
In a second aspect, an embodiment of the present application provides a key identification method for a virtual keyboard, applied to augmented reality glasses, the method including:
receiving the motion information of the user's finger keystroke movement sent by the inertial measurement unit data glove;
acquiring pre-processed image information;
inputting the motion information of the user's finger keystroke movement and the pre-processed image information into a pre-stored data processing module for fusion, generating a fused data sample;
inputting the fused data sample into a preset neural network model to generate a key prediction result;
inputting the key prediction result into a pre-stored mode conversion module, which converts it into characters recognizable by the augmented reality glasses;
and displaying the characters recognizable by the augmented reality glasses.
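The fusion scheme and network architecture are not fixed by the steps above. A minimal sketch of the fuse-then-predict flow, where feature concatenation, the feature dimensions, and a single linear layer with softmax are all assumptions standing in for the preset neural network model:

```python
import numpy as np

KEYS = list("abcdefghijklmnopqrstuvwxyz")

def fuse(imu_features, image_features):
    """Fuse glove IMU features with camera image features by simple
    concatenation -- one plausible form of the 'fused data sample'."""
    return np.concatenate([imu_features, image_features])

def predict_key(fused, weights, bias):
    """Score each candidate key with a linear layer plus softmax;
    a toy stand-in for the preset neural network model."""
    logits = weights @ fused + bias
    probs = np.exp(logits - logits.max())  # subtract max for stability
    probs /= probs.sum()
    return KEYS[int(np.argmax(probs))], probs
```

With, say, a 36-dimensional IMU feature and a 64-dimensional image feature, `weights` would have shape (26, 100); a real embodiment would replace the linear scorer with the trained convolutional/recurrent model described later in the specification.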
Optionally, before acquiring the pre-processed image information, the method further includes:
acquiring an image information set of the user's finger keystrokes;
and inputting the image information set into a pre-stored filtering and noise-reduction module to generate processed image information, which is taken as the pre-processed image information.
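The image-side filtering step is likewise unspecified. As one illustrative possibility (the box blur and kernel size are assumptions; a real embodiment might use Gaussian or median filtering), a grayscale frame could be denoised like this:

```python
import numpy as np

def denoise_image(img, k=3):
    """Apply a k x k box blur to a grayscale image array -- a simple
    illustrative stand-in for the glasses' image filtering and
    noise-reduction module."""
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros((h, w), dtype=float)
    # Sum the k*k shifted copies of the padded image, then normalize.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)
```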
Optionally, the augmented reality glasses comprise a binocular camera module, a mode conversion module, a filtering and noise-reduction module, a data processing module, a Bluetooth receiving module, a vibration feedback module and a display module, wherein:
the binocular camera module is located on the lower-front part of the augmented reality glasses and is used for capturing images of the data glove;
the filtering and noise-reduction module is used for preprocessing the recorded image information;
the data processing module is used for effectively fusing the collected two-hand inertial measurement unit information with the image information captured by the binocular camera, and for loading a neural network model to obtain a preliminary key prediction result;
the Bluetooth receiving module is used for receiving the inertial measurement unit signal information transmitted by the data glove;
the vibration feedback module vibrates slightly after a key is successfully identified, giving feedback to the user and enhancing the sense of human-computer interaction;
the mode conversion module performs fuzzy intention reasoning on the preliminary prediction result obtained by the data processing module so as to achieve input error correction, and converts the final prediction result into characters recognizable by the augmented reality glasses;
and the display module displays the final result recognized by the mode conversion module in the augmented reality glasses.
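The "fuzzy intention reasoning" used for input error correction is not detailed in the disclosure. One toy sketch of the idea — the QWERTY adjacency table, the vocabulary lookup, and the single-substitution strategy are all assumptions — corrects an unknown predicted word toward a known word whose differing letter is physically adjacent on the keyboard:

```python
QWERTY_NEIGHBORS = {
    "q": "wa", "w": "qes", "e": "wrd", "r": "etf", "t": "ryg",
    "y": "tuh", "u": "yij", "i": "uok", "o": "ipl", "p": "ol",
    "a": "qsz", "s": "awdx", "d": "sefc", "f": "drgv", "g": "fthb",
    "h": "gyjn", "j": "hukm", "k": "jil", "l": "kop",
    "z": "ax", "x": "zsc", "c": "xdv", "v": "cfb", "b": "vgn",
    "n": "bhm", "m": "nj",
}

def correct_word(word, vocabulary):
    """If the predicted word is unknown, try replacing each letter with a
    QWERTY neighbour and return the first variant found in the vocabulary;
    a toy stand-in for the mode conversion module's error correction."""
    if word in vocabulary:
        return word
    for i, ch in enumerate(word):
        for alt in QWERTY_NEIGHBORS.get(ch, ""):
            candidate = word[:i] + alt + word[i + 1:]
            if candidate in vocabulary:
                return candidate
    return word  # no plausible correction found
```

The intuition is that misclassified keystrokes most often land on a neighbouring key, so restricting candidate corrections to adjacent keys keeps the search small.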
In a third aspect, an embodiment of the present application provides a key identification system for a virtual keyboard, including: inertial measurement unit data gloves and augmented reality glasses;
the inertial measurement unit data glove is used for acquiring motion information of the user's finger keystroke movement, inputting it into a pre-stored filtering and noise-reduction module to generate filtered and noise-reduced motion information, and then sending the filtered and noise-reduced motion information to the augmented reality glasses through the pre-stored Bluetooth sending module;
the augmented reality glasses are used for receiving the motion information of the user's finger keystroke movement sent by the inertial measurement unit data glove, then acquiring pre-processed image information, inputting the motion information and the pre-processed image information into a pre-stored data processing module for fusion to generate a fused data sample, inputting the fused data sample into a preset neural network model to generate a key prediction result, inputting the key prediction result into a pre-stored mode conversion module to convert it into characters recognizable by the augmented reality glasses, and finally displaying those characters.
Optionally, the augmented reality glasses are further configured to:
acquire an image information set of the user's finger keystrokes, and input the image information set into a pre-stored filtering and noise-reduction module to generate processed image information.
In a fourth aspect, an embodiment of the present application provides a key identification apparatus for a virtual keyboard, applied to an inertial measurement unit data glove, the apparatus including:
a first information acquisition module, for acquiring pre-filtered and noise-reduced motion information of the user's finger keystroke movements;
and an information sending module, for sending the pre-filtered and noise-reduced motion information of the user's finger keystroke movements to the augmented reality glasses through the pre-stored Bluetooth sending module.
Optionally, the apparatus further comprises:
a second information acquisition module, for acquiring motion information of the user's finger keystroke movement when a finger-keystroke movement instruction input for the inertial measurement unit data glove is received;
and a first information generation module, for inputting the motion information of the user's finger keystroke movement into a pre-stored filtering and noise-reduction module to generate filtered and noise-reduced motion information, which is taken as the pre-filtered and noise-reduced motion information of the user's finger keystroke movement.
In a fifth aspect, an embodiment of the present application provides a key identification device for a virtual keyboard, which is applied to augmented reality glasses, and the device includes:
an information receiving module, for receiving the motion information of the user's finger keystroke movement sent by the inertial measurement unit data glove;
a third information acquisition module, for acquiring the pre-processed image information;
a sample generation module, for inputting the motion information of the user's finger keystroke movement and the pre-processed image information into a pre-stored data processing module for fusion, generating a fused data sample;
a result generation module, for inputting the fused data sample into a preset neural network model to generate a key prediction result;
a character generation module, for inputting the key prediction result into a pre-stored mode conversion module and converting it into characters recognizable by the augmented reality glasses;
and a character display module, for displaying the characters recognizable by the augmented reality glasses.
Optionally, the apparatus further comprises:
a set acquisition module, for acquiring an image information set of the user's finger keystrokes;
and a second information generation module, for inputting the image information set into a pre-stored filtering and noise-reduction module to generate processed image information, which is taken as the pre-processed image information.
In a sixth aspect, an embodiment of the present application provides a key identification device for a virtual keyboard, including:
one or more processors, and a storage device storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method steps described above.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the above-mentioned method steps.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the embodiments of the application, the inertial measurement unit data glove first obtains pre-filtered and noise-reduced motion information of the user's finger keystroke movement and sends it to the augmented reality glasses through the pre-stored Bluetooth sending module. After receiving this information, the augmented reality glasses acquire the pre-processed image information and input both into the pre-stored data processing module for fusion, generating a fused data sample; the fused sample is input into the preset neural network model to generate a key prediction result, which is then input into the pre-stored mode conversion module and converted into characters recognizable by the augmented reality glasses; finally, those characters are displayed. Because a virtual keyboard combining the inertial measurement unit data glove with Augmented Reality (AR) technology replaces the traditional mechanical keyboard, the method reduces the space the keyboard occupies, is easy to carry, and offers high flexibility and a good user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of a key identification method of a virtual keyboard according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a key identification method for a virtual keyboard of an IMU data glove according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a key identification method for a virtual keyboard of augmented reality glasses according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a hardware framework of an IMU data glove according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware framework of augmented reality glasses according to an embodiment of the present disclosure;
FIG. 6 is a structural framework diagram of a virtual keyboard system according to an embodiment of the present application;
FIG. 7 is a software framework diagram of a processing layer provided by an embodiment of the present application;
FIG. 8 is a flow chart illustrating an exemplary embodiment of a virtual keyboard system;
FIG. 9 is a block diagram of a virtual keyboard system according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a first apparatus for identifying keys of a virtual keyboard according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a key identification device of a second virtual keyboard according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a key identification apparatus of a third virtual keyboard according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a fourth apparatus for recognizing keys of a virtual keyboard according to an embodiment of the present application.
Detailed Description
The following description and the annexed drawings set forth in detail certain illustrative embodiments of the application so as to enable those skilled in the art to practice them.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Up to now, the keyboard has been the most common and primary input device; through an ordinary keyboard, English letters, numbers, various punctuation marks and the like can be input into a computer, so as to send commands to the computer and output corresponding data. Keyboards currently on the market are mainly classified by working principle into mechanical keyboards, membrane keyboards, conductive rubber keyboards and the like, and their common defects mainly include the following points: 1. They require physical external equipment and occupy a large space. 2. A supporting surface is required for data input. 3. Both hands are fixed in position, which limits flexibility, and the fixed hand position also affects the comfort and experience of keyboard input. Therefore, the present application provides a method, a system, a device and a storage medium for identifying keys of a virtual keyboard, so as to solve the problems in the related art. In the technical scheme provided by the application, because a virtual keyboard formed by combining an inertial measurement unit data glove with the Augmented Reality (AR for short) technology produced by augmented reality glasses replaces the traditional mechanical keyboard, the method can reduce the space occupied by the keyboard, and is convenient to carry, highly flexible, and provides a high degree of user experience.
The method for identifying keys of a virtual keyboard according to the embodiment of the present application will be described in detail below with reference to fig. 1 to 8. The method may be implemented by a computer program and run on a key identification device of a virtual keyboard based on the von Neumann architecture.
Referring to fig. 1, a schematic flow chart of a method for identifying keys of a virtual keyboard is provided in an embodiment of the present application. As shown in fig. 1, the method of the embodiment of the present application may include the steps of:
Step 101, obtaining motion information of the user's finger key movement after pre-filtering and noise reduction;
The pre-filtering and noise reduction means that the collected motion information of the user's finger key movement is input into a pre-stored filtering and noise reduction module for processing. The motion information of the user's finger key movement refers to the hand motion information recorded, while the hands perform key actions, by the IMU (inertial measurement unit) motion sensor module in the IMU data gloves worn on the hands.
Generally, as shown in fig. 4, the IMU (inertial measurement unit) data glove is pre-equipped with an IMU motion sensor module, a Bluetooth sending module, a filtering and noise reduction module and a wireless charging module. The IMU motion sensor module comprises six IMU sensors 100, each recording three-axis acceleration and three-axis angular velocity information (x, y and z axes); the sensors are located at the five fingers and the back of the hand, and the finger sensors are respectively connected to the back-of-hand sensor.
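As an illustration of the sensor layout just described, one glove frame might be modelled as below; the sensor names, tuple layout and the `GloveFrame` type are hypothetical, introduced only for this sketch and not part of the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Assumed ordering: five fingers plus the back of the hand (six IMU sensors).
SENSORS = ["thumb", "index", "middle", "ring", "little", "back_of_hand"]

@dataclass
class ImuSample:
    accel: Tuple[float, float, float]  # (ax, ay, az) three-axis acceleration
    gyro: Tuple[float, float, float]   # (gx, gy, gz) three-axis angular velocity

@dataclass
class GloveFrame:
    samples: List[ImuSample]  # one sample per sensor, in SENSORS order

    def flatten(self) -> List[float]:
        """36 values per frame: 6 sensors x (3 accel + 3 gyro)."""
        out: List[float] = []
        for s in self.samples:
            out.extend(s.accel)
            out.extend(s.gyro)
        return out
```

A frame flattened this way would be one time step of the glove's multi-channel motion signal.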
In one possible implementation, a user first wears the IMU data glove, then performs simulated typing according to the typing intent of the user, and during typing, the IMU data glove moves along with the displacement of the fingers, generates movement data information during the movement, and then records and stores the movement data information of the fingers.
Step 102, sending the motion information of the user finger key movement after pre-filtering and noise reduction to the augmented reality glasses through a pre-stored Bluetooth sending module;
Wherein, the Bluetooth sending module is the module that sends the IMU motion information of the data glove to the augmented reality glasses for processing. The augmented reality glasses are the equipment that receives and processes the information sent by the data glove.
Generally, augmented reality glasses mainly include a binocular camera module, a mode conversion module, a filtering and noise reduction module, a data processing module, a bluetooth receiving module, a vibration feedback module and a display module.
For example, as shown in fig. 5, the augmented reality glasses 200 include a vibration feedback module 300 and a binocular camera 400. The binocular camera is located at the obliquely lower part of the augmented reality glasses; it can capture images of the data gloves and distinguish differences between key actions through the motion trajectories of the lines connecting the six IMU sensors on a data glove, and a 50-frame binocular camera is adopted to record multi-frame image information of two-hand key presses. The filtering and noise reduction module is mainly used for processing and preprocessing the recorded image information. The data processing module is mainly used for effectively fusing the collected two-hand IMU information with the image information collected by the binocular camera, and for loading the neural network model to obtain a preliminary key prediction result. The Bluetooth receiving module is mainly used for receiving the IMU signal information transmitted by the data glove. The vibration feedback module is mainly used for vibrating slightly after a key is successfully identified, providing feedback to the user and enhancing the sense of human-machine interaction. The mode conversion module is mainly used for performing fuzzy intention reasoning on the preliminary prediction result obtained by the data processing module, so as to achieve input error correction, and for converting the final prediction result into characters recognizable by the augmented reality glasses. The display module is mainly used for displaying the final result identified by the mode conversion module in the augmented reality glasses.
In a possible implementation manner, the stored movement data information of the fingers can be obtained based on step 101. When the IMU data glove detects the stored information, it first obtains the Bluetooth sending module stored in the IMU data glove and sends the information to it; after receiving the information, the Bluetooth sending module connects to the augmented reality glasses through an internal program, and once the connection succeeds, the information is sent over the wireless network.
Step 103, receiving the motion information of the user finger key movement sent by the inertial measurement unit data glove;
in this embodiment of the application, after the information is sent based on step 102, when the augmented reality glasses detect a data information request sent by the IMU data glove, the augmented reality glasses first obtain the bluetooth receiving module that is saved in advance, and after the bluetooth receiving module is obtained, the bluetooth receiving module obtains the sent information and saves the information in the augmented reality glasses.
Step 104, acquiring image information after preprocessing;
the image information after the pre-processing is generated after the filtering and noise reduction module in the augmented reality glasses processes the image information collected by the binocular camera, wherein the image information collected by the binocular camera is the image information of the multi-frame IMU data glove key movement collected by the binocular camera on the augmented reality glasses.
In the embodiment of the application, when the binocular camera on the augmented reality glasses detects that the IMU data glove moves, the augmented reality glasses perform multi-frame acquisition on image information of the IMU data glove key movement through an internal program, and then input the image information of the IMU data glove key movement into the pre-stored filtering noise reduction module to process the image and generate pre-processed image information.
Step 105, inputting the motion information of the user finger key movement and the pre-processed image information into a pre-stored data processing module for fusion, to generate a fused data sample;
firstly, data information sent by the IMU data glove is acquired based on step 103, and then image information after pre-processing is acquired according to step 104.
In the embodiment of the present application, for example, as shown in fig. 6, fig. 6 is a structural framework of a virtual keyboard system, which mainly comprises three major parts, an input layer, a processing layer and an output layer.
The input layer includes multi-modal information input. The multi-modal information is mainly recorded while the user types in mid-air, and comprises the two-hand IMU motion information (acceleration and angular velocity on the x, y and z axes) recorded by the data glove and the image information recorded by the binocular camera at the bottom of the augmented reality glasses; the user needs to adopt the standard typing method. The motion information recorded by the input layer is sent to the processing layer.
In the processing layer, the layer mainly comprises three parts, namely a data processing part, a neural network model prediction part and a fuzzy intention reasoning and error correcting part.
In the data processing part, after the multi-modal information of the input layer is obtained, noise reduction and filtering are performed on the IMU motion information, using a Butterworth filter with 9-300 Hz band-pass filtering and a 50 Hz notch filter; feature extraction is performed after filtering. Feature extraction methods for the IMU motion sensor include MAV, RMS and the like, where MAV is the mean absolute value of the amplitude and RMS is the root mean square. Noise reduction and filtering are also performed on the image information acquired by the binocular camera, using a Chebyshev filter with 30-150 Hz band-pass filtering and a 50 Hz notch filter. Because the user adopts the standard typing method, each key press has one or two corresponding moving fingers, so a pre-classification can be performed: the fingers of both hands are divided into four categories (little finger, ring finger, middle finger and index finger), i.e., once a key action is detected, it can be assigned to one of the four categories according to the IMU signal of the data glove.
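The IMU pre-processing chain described above (9-300 Hz Butterworth band-pass, 50 Hz notch, then MAV/RMS features) might be sketched as follows; the 1 kHz sampling rate, the filter order and the function names are assumptions not given in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 1000  # assumed IMU sampling rate in Hz (not specified in the patent)

def preprocess_imu(signal):
    """Band-pass 9-300 Hz (Butterworth) followed by a 50 Hz notch filter."""
    b, a = butter(4, [9, 300], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, signal)
    bn, an = iirnotch(50.0, Q=30.0, fs=FS)
    return filtfilt(bn, an, filtered)

def mav(window):
    """MAV feature: mean absolute value of the amplitude over a window."""
    return float(np.mean(np.abs(window)))

def rms(window):
    """RMS feature: root mean square over a window."""
    return float(np.sqrt(np.mean(np.square(window))))
```

After filtering, `mav` and `rms` would be computed over sliding windows of each sensor channel to form the feature vectors fed to the classifier.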
Step 106, inputting the fused data sample into a preset neural network model to generate a key prediction result;
in the embodiment of the present application, the processing layer obtained based on step 105 is mainly composed of three parts: a data processing part, a neural network model prediction part and a fuzzy intention reasoning and error correction part.
In the neural network prediction part, the prediction mainly comprises two types: the first applies an LSTM model to the IMU motion sensor data, and the second applies a CNN model to the image information of the binocular camera. First, the processing of the IMU information: an LSTM is a special RNN that learns long-term dependencies and is suitable for processing long sequence information; its internal structure has an input gate, a forget gate and an output gate, and its internal operation principle is as follows. The first LSTM step determines what information may pass through the memory cell. This decision is controlled by the forget gate layer through an activation function, which produces a value between 0 and 1 based on the output of the previous time step and the current input, determining whether to pass, or partially pass, the information learned at the previous time step. The formula of this step is as follows:
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)

where σ is the activation function, h_{t-1} is the output of the previous time step, x_t is the current input, b_f is a bias, and f_t is the forget gate.
The second step is to generate the new information that needs to be updated. This step comprises two parts: an input gate layer, which determines which values to update through a sigmoid activation function, and a tanh layer, which generates a new candidate value ~C_t that may be added to the memory cell as the candidate generated by the current layer. The values generated by these two parts are combined to perform the update:
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
~C_t = tanh(W_C · [h_{t-1}, x_t] + b_C)

where σ is the activation function, h_{t-1} is the output of the previous time step, x_t is the current input, b_i and b_C are biases, tanh is an activation function, and i_t is the input gate.
The third step is to update the old memory cell. First, the old memory cell C_{t-1} is multiplied by f_t to forget the information that is not needed; then i_t · ~C_t is added to obtain the new cell state. The formula is as follows:

C_t = f_t · C_{t-1} + i_t · ~C_t

where f_t is the output of the forget gate, C_{t-1} is the old memory cell, C_t is the new memory cell, and i_t is the output of the input gate.
The final step determines the output of the model. First, an initial output is obtained through the sigmoid layer; then tanh is used to scale the value of C_t to between -1 and 1, and this is multiplied element-wise by the sigmoid output to obtain the output of the model. The formula of this step is as follows:
O_t = σ(W_o · [h_{t-1}, x_t] + b_o)
h_t = O_t · tanh(C_t)

where O_t is the output of the output gate, h_{t-1} is the output of the previous time step, x_t is the current input, b_o is a bias, and C_t is the new memory cell.
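The four LSTM steps above can be collected into a single cell update. The following NumPy sketch is a minimal illustration of those equations only; the weight shapes, gate ordering and zero initialisation are assumptions, not the patent's trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    """One LSTM time step.

    W maps [h_prev; x_t] to the four stacked gate pre-activations
    (forget, input, candidate, output), each of size n = h_prev.size.
    """
    z = W @ np.concatenate([h_prev, x_t]) + b
    n = h_prev.size
    f = sigmoid(z[0 * n:1 * n])          # forget gate  f_t
    i = sigmoid(z[1 * n:2 * n])          # input gate   i_t
    c_tilde = np.tanh(z[2 * n:3 * n])    # candidate    ~C_t
    o = sigmoid(z[3 * n:4 * n])          # output gate  O_t
    c_t = f * c_prev + i * c_tilde       # C_t = f_t*C_{t-1} + i_t*~C_t
    h_t = o * np.tanh(c_t)               # h_t = O_t * tanh(C_t)
    return h_t, c_t
```

With all weights and biases zero, every sigmoid gate outputs 0.5 and the candidate is 0, so the new cell state is simply half the old one, which is a convenient sanity check.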
The second type, binocular camera image processing, adopts a Convolutional Neural Network (CNN), a class of feed-forward neural networks that include convolutional computation and have a deep structure; the overall structure is input - convolutional layer - pooling layer - fully connected layer - output. The function of the convolutional layer is to extract features from the input data; it internally contains multiple convolution kernels, and each element of a kernel corresponds to a weight coefficient and a bias, similar to a neuron of a feed-forward neural network. After feature extraction in the convolutional layer, the output feature map is passed to the pooling layer for feature selection and information filtering. The pooling layer contains a preset pooling function whose role is to replace the result of a single point in the feature map with a statistic of its neighbouring region. The model adopts LP pooling, a type of pooling model inspired by the hierarchy of the visual cortex, generally represented in the form:
A_k^l(i, j) = [ Σ_x Σ_y ( A_k^{l-1}(s_0·i + x, s_0·j + y) )^p ]^{1/p}

where s_0 is the stride, (i, j) are pixel coordinates, and p is a pre-specified parameter; this patent adopts p = 1, i.e., taking the average value over the pooling region, also called average pooling.
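For the p = 1 case adopted here (average pooling), a minimal sketch might look like this; the 2x2 window and stride 2 are assumed example values, not parameters stated in the patent.

```python
import numpy as np

def average_pool(feature_map, k=2, s=2):
    """Replace each k x k region (stride s) by its mean, i.e. LP pooling with p = 1."""
    h, w = feature_map.shape
    out_h = (h - k) // s + 1
    out_w = (w - k) // s + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = feature_map[i * s:i * s + k, j * s:j * s + k].mean()
    return out
```

Each output pixel summarises a pooling region of the previous feature map, halving each spatial dimension with these example parameters.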
The pooling layer is followed by a fully connected layer, which in a convolutional neural network is equivalent to the hidden layer in a conventional feed-forward neural network. The fully connected layer is located at the last part of the hidden portion of the convolutional neural network and only transmits signals to other fully connected layers. The feature map loses its spatial topology in the fully connected layer, being expanded into a vector and passed through the excitation function. The CNN model for binocular camera image recognition and the LSTM model for motion sensor recognition are both obtained by pre-training, and the pre-training data come from users typing with the standard method in a suspended (mid-air) state. The two models each output a probability matrix; let the output of the CNN model be the probability matrix x and the output of the LSTM model be the probability matrix y. The final prediction matrix z of the neural network is obtained by weighting the corresponding elements of the two matrices as follows.
z=a*x+b*y
Where a and b are the weighting coefficients of the two models.
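A toy numeric check of the fusion rule z = a*x + b*y above; the weights a = 0.6, b = 0.4 and the 5-way distributions are made-up example values. When a + b = 1, a weighted sum of two probability distributions remains a probability distribution.

```python
import numpy as np

def fuse(x, y, a=0.6, b=0.4):
    """Element-wise weighted sum of the CNN (x) and LSTM (y) probability outputs."""
    return a * np.asarray(x) + b * np.asarray(y)

x = np.array([0.7, 0.1, 0.1, 0.05, 0.05])  # example CNN output
y = np.array([0.2, 0.5, 0.1, 0.1, 0.1])    # example LSTM output
z = fuse(x, y)                              # fused prediction
```

With these example values the fused vector still sums to 1, and the CNN's preferred class keeps the largest fused probability.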
The third part is the fuzzy intention reasoning part. Its principle mainly utilizes the prior knowledge and regularity of Chinese pinyin: in pinyin, the letters that may follow each letter are limited, and the probability of all other letters is 0.
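The pinyin constraint described above could be represented as a successor table that zeroes out impossible letters. The two entries below are a small hand-made excerpt for illustration, not a complete pinyin rule set.

```python
# Partial, hand-made excerpt of legal pinyin successors (assumption for illustration).
ALLOWED_NEXT = {
    "z": set("haiueo"),  # e.g. zh-, za, zi, zu, ze, zo...
    "x": set("iu"),      # x is only followed by i or u (xi, xu, xia, xue, ...)
}

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def prior_vector(prev_letter):
    """Probability prior over the next letter: 1.0 if allowed, 0.0 otherwise."""
    allowed = ALLOWED_NEXT.get(prev_letter)
    if allowed is None:  # unknown context: apply no constraint
        return {c: 1.0 for c in ALPHABET}
    return {c: (1.0 if c in allowed else 0.0) for c in ALPHABET}
```

A prior of this shape can be point-multiplied with the neural network's probability output to suppress letters that are impossible in the current pinyin context.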
Step 107, inputting the key prediction result into a pre-stored mode conversion module, and converting the key prediction result into characters which can be recognized by augmented reality glasses;
in the embodiment of the present application, for example, as shown in fig. 6, when the output layer produces its output, two probability matrices are obtained based on step 106: the neural network probability matrix and the fuzzy intention inference error correction matrix. Finally, the error correction matrix and the neural network probability matrix are point-multiplied to obtain the final probability matrix, and the value whose index corresponds to the maximum in this probability matrix is the final output value.
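The point multiplication and maximum-index selection just described might be sketched as follows; the 26-letter key set and the example probabilities are illustrative assumptions.

```python
import numpy as np

KEYS = "abcdefghijklmnopqrstuvwxyz"

def decode(nn_probs, prior):
    """Point-multiply the network output by the error-correction prior,
    then return the key at the index of the maximum value."""
    final = np.asarray(nn_probs) * np.asarray(prior)
    return KEYS[int(np.argmax(final))]
```

In the example below the network favours 'z', but the prior rules it out, so the next-best surviving key is selected instead.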
For example, as shown in fig. 7, the output value is finally converted into characters recognizable by the augmented reality glasses through the mode conversion module, and the characters are displayed in the first-person view within the augmented reality glasses worn by the user.
Step 108, displaying the characters recognizable by the augmented reality glasses.
In the embodiment of the present application, for example, fig. 8 shows the identification process of the virtual keyboard system: the IMU signal is first acquired by the IMU data glove, then noise reduction and filtering are performed on the IMU signal, followed by MAV feature extraction. Image signals are acquired by the binocular camera on the augmented reality glasses, and noise reduction and filtering are then performed on them. Next, the system judges which finger category the motion belongs to (index finger, middle finger, ring finger or little finger); the judged finger motion types are then analyzed and processed by the LSTM and CNN neural networks; after the analysis and processing are finished, the probability matrices of the models are generated; finally, the maximum-probability character is output via matrix point multiplication and displayed in the augmented reality glasses.
In the embodiment of the application, the inertial measurement unit data glove first obtains the pre-filtered and noise-reduced motion information of the user's finger key movement and sends it to the augmented reality glasses through the pre-stored Bluetooth sending module. After receiving the information, the augmented reality glasses obtain the pre-processed image information and input both into the pre-stored data processing module for fusion, generating a fused data sample that is input into the pre-stored neural network model to generate a key prediction result. The key prediction result is then input into the pre-stored mode conversion module to be converted into characters recognizable by the augmented reality glasses, and finally these characters are displayed. Because a virtual keyboard combining the inertial measurement unit data glove with Augmented Reality (AR) technology replaces the traditional mechanical keyboard, the method reduces the space occupied by the keyboard, is convenient to carry, highly flexible, and provides a high degree of user experience.
Referring to fig. 2, a schematic flow chart of a method for identifying keys of a virtual keyboard applied to a data glove of an inertial measurement unit is provided for an embodiment of the present application. As shown in fig. 2, the method of the embodiment of the present application may include the steps of:
S201, when a user finger key movement instruction input for the inertial measurement unit data glove is received, acquiring the motion information of the user finger key movement;
in one possible implementation, a user first wears the IMU data glove, then performs simulated typing according to the typing intent of the user, and during typing, the IMU data glove moves along with the displacement of the fingers, generates movement data information during the movement, and then records and stores the movement data information of the fingers.
S202, inputting the motion information of the movement of the user finger key into a pre-stored filtering and noise reduction module to generate the motion information of the movement of the user finger key after filtering and noise reduction, and taking the motion information of the movement of the user finger key after filtering and noise reduction as the motion information of the movement of the user finger key after pre-filtering and noise reduction;
in this embodiment of the application, based on the movement data information of the finger which is acquired and recorded in step S201 and stored, when the movement data information of the finger is acquired, the filtering and noise reduction module stored in the IMU data glove is first acquired, then the acquired movement data information of the finger is input into the filtering and noise reduction module in the IMU data glove for processing, and the motion information of the movement of the user finger key after filtering and noise reduction is generated after processing.
S203, obtaining the motion information of the movement of the finger keys of the user after pre-filtering and denoising;
specifically, refer to step 101, which is not described herein again.
S204, sending the motion information of the user finger key movement after pre-filtering and noise reduction to the augmented reality glasses through the pre-stored Bluetooth sending module.
Specifically, refer to step 102, which is not described herein again.
In the embodiment of the application, the inertial measurement unit data glove first obtains the pre-filtered and noise-reduced motion information of the user's finger key movement and sends it to the augmented reality glasses through the pre-stored Bluetooth sending module. After receiving the information, the augmented reality glasses obtain the pre-processed image information and input both into the pre-stored data processing module for fusion, generating a fused data sample that is input into the pre-stored neural network model to generate a key prediction result. The key prediction result is then input into the pre-stored mode conversion module to be converted into characters recognizable by the augmented reality glasses, and finally these characters are displayed. Because a virtual keyboard combining the inertial measurement unit data glove with Augmented Reality (AR) technology replaces the traditional mechanical keyboard, the method reduces the space occupied by the keyboard, is convenient to carry, highly flexible, and provides a high degree of user experience.
Referring to fig. 3, a schematic flow chart of a method for recognizing keys of a virtual keyboard applied to augmented reality glasses is provided in an embodiment of the present application. As shown in fig. 3, the method of the embodiment of the present application may include the steps of:
S301, acquiring an image information set of the user's finger keys;
in the embodiment of the application, when the binocular camera on the augmented reality glasses detects that the IMU data glove moves, the augmented reality glasses perform multi-frame acquisition on image information of the IMU data glove key movement through an internal program.
S302, inputting the image information set into a pre-stored filtering and noise reduction module to generate processed image information, and taking the processed image information as the pre-processed image information;
in the embodiment of the present application, based on the image information set of the user' S finger key acquired in step S301, the image information of the key movement is input into a pre-saved filtering and noise reduction module to process the image, and then pre-processed image information is generated.
S303, receiving the motion information of the movement of the finger key of the user, which is sent by the inertial measurement unit data glove;
specifically, refer to step 103, which is not described herein.
S304, acquiring the image information after the pre-processing;
refer to step 104 specifically, and will not be described herein.
S305, inputting the motion information of the movement of the finger key of the user and the image information after the pre-processing into a pre-stored data processing module for fusion to generate a fused data sample;
specifically, refer to step 105, which is not described herein again.
S306, inputting the fused data sample into a preset neural network model to generate a key prediction result;
see step 106 for details, which are not described herein.
S307, inputting the key prediction result into a pre-stored mode conversion module, and converting the key prediction result into characters which can be recognized by augmented reality glasses;
refer to step 107 specifically, and are not described herein.
And S308, displaying the characters which can be recognized by the augmented reality glasses.
See step 108 for details, which are not described herein.
In the embodiment of the application, the inertial measurement unit data glove first obtains the pre-filtered and noise-reduced motion information of the user's finger key movement and sends it to the augmented reality glasses through the pre-stored Bluetooth sending module. After receiving the information, the augmented reality glasses obtain the pre-processed image information and input both into the pre-stored data processing module for fusion, generating a fused data sample that is input into the pre-stored neural network model to generate a key prediction result. The key prediction result is then input into the pre-stored mode conversion module to be converted into characters recognizable by the augmented reality glasses, and finally these characters are displayed. Because a virtual keyboard combining the inertial measurement unit data glove with Augmented Reality (AR) technology replaces the traditional mechanical keyboard, the method reduces the space occupied by the keyboard, is convenient to carry, highly flexible, and provides a high degree of user experience.
The following are embodiments of the system of the present application.
Referring to fig. 9, a block diagram of a virtual keyboard system according to an exemplary embodiment of the present application is shown. The hardware framework of the virtual keyboard system is mainly composed of two parts: the first part is the data gloves worn on both hands, and the second part is the augmented reality glasses worn on the head. They will be described separately below.
The data glove part consists of IMU data gloves worn on both hands and mainly comprises an IMU motion sensor module, a Bluetooth sending module, a filtering and noise reduction module and a wireless charging module. The six-axis IMU motion sensor module is mainly used for recording the motion information of both hands during key pressing, including three-axis acceleration and three-axis angular velocity information (x, y and z axes); the sensors are located at the five fingers and the back of the hand, and the finger sensors are respectively connected to the back-of-hand sensor. The filtering and noise reduction module is mainly used for preprocessing the acquired IMU motion data to ensure its validity. The Bluetooth sending module is mainly used for sending the IMU motion information of the data glove to the augmented reality glasses for processing. The wireless charging module is mainly used for charging the data glove to extend its battery life, and wireless charging also improves the convenience of the keyboard system.
The augmented reality glasses part is worn on the head and is mainly composed of a binocular camera module, a mode conversion module, a filtering and noise reduction module, a data processing module, a Bluetooth receiving module, a vibration feedback module and a display module. The binocular camera module is located at the obliquely lower part of the augmented reality glasses; it can capture images of the data gloves and distinguish differences between key actions through the motion trajectories of the lines connecting the six IMU sensors on a data glove, and a 50-frame binocular camera is adopted to record multi-frame image information of two-hand key presses. The filtering and noise reduction module is mainly used for processing and preprocessing the recorded image information. The data processing module is mainly used for effectively fusing the collected two-hand IMU information with the image information collected by the binocular camera and loading the neural network model to obtain a preliminary key prediction result. The Bluetooth receiving module is mainly used for receiving the IMU signal information transmitted by the data glove. The vibration feedback module is mainly used for vibrating slightly after a key is successfully identified, providing feedback to the user and enhancing the sense of human-machine interaction. The mode conversion module is mainly used for performing fuzzy intention reasoning on the preliminary prediction result obtained by the data processing module, so as to achieve input error correction, and for converting the final prediction result into characters recognizable by the augmented reality glasses. The display module is mainly used for displaying the final result identified by the mode conversion module in the augmented reality glasses.
In the embodiment of the application, the inertial measurement unit data glove first obtains the pre-filtered and noise-reduced motion information of the user's finger key movement and sends it to the augmented reality glasses through the pre-stored Bluetooth sending module. After receiving the information, the augmented reality glasses obtain the pre-processed image information and input both into the pre-stored data processing module for fusion, generating a fused data sample that is input into the pre-stored neural network model to generate a key prediction result. The key prediction result is then input into the pre-stored mode conversion module to be converted into characters recognizable by the augmented reality glasses, and finally these characters are displayed. Because a virtual keyboard combining the inertial measurement unit data glove with Augmented Reality (AR) technology replaces the traditional mechanical keyboard, the method reduces the space occupied by the keyboard, is convenient to carry, highly flexible, and provides a high degree of user experience.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 10, a schematic structural diagram of a key identification apparatus of a virtual keyboard according to an exemplary embodiment of the present application is shown. The key identification apparatus of the virtual keyboard can be implemented by software, hardware, or a combination of the two to form all or part of a terminal. The apparatus 1 comprises a first information acquisition module 10 and an information sending module 20.
The first information acquisition module 10 is configured to acquire the motion information of the user's finger key movement after pre-filtering and noise reduction;
and the information sending module 20 is configured to send the pre-filtered and noise-reduced motion information of the user's finger key movement to the augmented reality glasses through a pre-stored Bluetooth sending module.
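The format of the Bluetooth payload is not specified in the application. Purely as a hypothetical illustration (the `pack_imu_sample` helper, the `<B6f` struct layout, and the sensor-id field are all assumptions), the filtered six-axis samples could be serialized into fixed-size packets before transmission:

```python
import struct

def pack_imu_sample(sensor_id: int, sample: list[float]) -> bytes:
    """Pack one filtered six-axis IMU sample (ax, ay, az, gx, gy, gz)
    plus a one-byte sensor id into a little-endian packet suitable
    for transfer over a Bluetooth link."""
    assert len(sample) == 6
    return struct.pack("<B6f", sensor_id, *sample)

# One sample from a hypothetical sensor id 3: 1 id byte + 6 * 4 float
# bytes = 25 bytes per packet.
packet = pack_imu_sample(3, [0.1, 0.2, 9.8, 0.0, 0.0, 0.01])
```

A real implementation would also need framing and sequencing on top of this, which the application leaves unspecified.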
Optionally, as shown in fig. 11, the apparatus 1 further includes:
the second information acquisition module 30 is configured to acquire motion information of the movement of the user finger key when a user finger key movement instruction input for the inertia measurement unit data glove is received;
the first information generating module 40 is configured to input the motion information of the movement of the user finger key into a pre-stored filtering and denoising module to generate motion information of the movement of the user finger key after filtering and denoising, and use the motion information of the movement of the user finger key after filtering and denoising as the motion information of the movement of the user finger key after pre-filtering and denoising.
Referring to fig. 11, a schematic structural diagram of a key identification apparatus of a virtual keyboard according to an exemplary embodiment of the present application is shown. The key identification apparatus of the virtual keyboard can be implemented by software, hardware, or a combination of the two to form all or part of a terminal. The apparatus 2 comprises an information receiving module 10, a third information acquisition module 20, a sample generation module 30, a result generation module 40, a character generation module 50 and a character display module 60.
The information receiving module 10 is used for receiving the motion information of the user's finger key movement sent by the inertial measurement unit data glove;
a third information obtaining module 20, configured to obtain pre-processed image information;
the sample generation module 30 is configured to input the motion information of the movement of the user finger key and the pre-processed image information into a pre-stored data processing module for fusion, so as to generate a fused data sample;
the result generation module 40 is used for inputting the fused data samples into a preset neural network model to generate a key prediction result;
the character generation module 50 is used for inputting the key prediction result into a pre-stored mode conversion module and converting the key prediction result into a character which can be recognized by the augmented reality glasses;
and a character display module 60, configured to display characters that can be recognized by the augmented reality glasses.
Optionally, as shown in fig. 12, the apparatus 2 further includes:
a set obtaining module 60, configured to obtain an image information set of the user finger key;
and a second information generating module 70, configured to input the image information set into a pre-stored filtering and noise reduction module to generate processed image information, and to use the processed image information as the pre-processed image information.
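The image-side preprocessing is likewise described only at a block level. As a hypothetical sketch (the grayscale frame format and the three-frame temporal averaging window are assumptions, not the patent's actual preprocessing), the set of key-press frames could be denoised before fusion like this:

```python
import numpy as np

def preprocess_frames(frames: np.ndarray) -> np.ndarray:
    """Temporal denoising for a stack of grayscale key-press frames
    (shape [N, H, W]): replace each frame by the mean of itself and
    its immediate neighbours, replicating the first and last frames
    at the boundaries."""
    padded = np.concatenate([frames[:1], frames, frames[-1:]], axis=0)
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

frames = np.zeros((10, 8, 8))  # 10 dummy 8x8 frames
out = preprocess_frames(frames)
```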
It should be noted that the key identification apparatus of the virtual keyboard provided in the foregoing embodiments is illustrated only by the division into the above functional modules; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the key identification apparatus of the virtual keyboard provided in the above embodiments and the key identification method of the virtual keyboard belong to the same concept; details of the implementation process are given in the method embodiments and are not repeated here.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The present application further provides a computer-readable medium on which program instructions are stored; when the program instructions are executed by a processor, the key identification method of the virtual keyboard provided by the above method embodiments is implemented.
The present application further provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the method for identifying keys of a virtual keyboard according to the above-mentioned method embodiments.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, it should be understood that the disclosed methods, articles of manufacture (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be understood that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. The present application is not limited to the procedures and structures that have been described above and shown in the drawings, and various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (10)
1. A key identification method of a virtual keyboard is applied to an inertial measurement unit data glove, and is characterized by comprising the following steps:
obtaining the motion information of the movement of the finger keys of the user after pre-filtering and denoising;
and sending the motion information of the movement of the finger keys of the user after the pre-filtering and noise reduction to the augmented reality glasses through a pre-stored Bluetooth sending module.
2. The method of claim 1, wherein before the obtaining the motion information of the user's finger key movement after pre-filtering and noise reduction, the method further comprises:
when a user finger key movement instruction input by aiming at the inertial measurement unit data glove is received, acquiring movement information of the user finger key movement;
inputting the motion information of the movement of the user finger key into a pre-stored filtering and noise reduction module to generate the motion information of the movement of the user finger key after filtering and noise reduction, and taking the motion information of the movement of the user finger key after filtering and noise reduction as the motion information of the movement of the user finger key after pre-filtering and noise reduction.
3. The method of claim 2, wherein the inertial measurement unit data glove comprises an inertial measurement unit motion sensor module, a bluetooth transmission module, a filtering and noise reduction module, a wireless charging module;
the inertial measurement unit motion sensor module is a six-axis inertial measurement unit motion sensor and is used for recording motion information of two hands during key pressing, the motion information comprises three-axis acceleration and three-axis angular velocity information, the sensors are located at five fingers and the back of the hand, and the sensors at the fingers are respectively connected with the sensors at the back of the hand;
the filtering and noise reduction module is mainly used for preprocessing the collected motion of the inertia measurement unit to ensure the validity of data;
the Bluetooth sending module is mainly used for sending the motion information of the inertia measurement unit of the data glove to the augmented reality glasses for processing;
the wireless charging module is mainly used for charging the data glove to extend its battery life, and wireless charging improves the convenience of the keyboard system.
4. A key identification method of a virtual keyboard is applied to augmented reality glasses, and is characterized by comprising the following steps:
receiving motion information of the movement of the finger key of the user sent by the data glove of the inertial measurement unit;
acquiring image information after pre-processing;
inputting the motion information of the movement of the finger key of the user and the image information after the pre-processing into a pre-stored data processing module for fusion to generate a fused data sample;
inputting the fused data sample into a preset neural network model to generate a key prediction result;
inputting the key prediction result into a pre-stored mode conversion module, and converting the key prediction result into characters which can be recognized by augmented reality glasses;
and displaying the characters which can be recognized by the augmented reality glasses.
5. The method of claim 4, wherein before the obtaining the pre-processed image information, further comprising:
acquiring an image information set of a finger key of a user;
and inputting the image information set into a pre-stored filtering and noise reduction module to generate processed image information, and taking the processed image information as the pre-processed image information.
6. The method of claim 4, wherein the augmented reality glasses comprise a binocular camera module, a mode conversion module, a filtering and noise reduction module, a data processing module, a Bluetooth receiving module, a vibration feedback module, and a display module, wherein,
the binocular camera module is positioned at the part obliquely below the augmented reality glasses and used for acquiring images of the data gloves;
the filtering and noise reducing module is used for processing and preprocessing the recorded image information;
the data processing module is used for effectively fusing the collected information of the two-hand inertia measurement unit and the image information collected by the binocular camera and loading a neural network model to obtain a preliminary key prediction result;
the Bluetooth receiving module is used for receiving signal information of the inertial measurement unit transmitted by the data glove;
the vibration feedback module is used for vibrating slightly after a key is successfully identified, so as to give feedback to the user and enhance the sense of human-computer interaction;
the mode conversion module is used for carrying out fuzzy intention reasoning on the preliminary prediction result obtained by the data processing module so as to achieve the function of input error correction, and converting the final prediction result into characters which can be recognized by augmented reality glasses;
and the display module is used for displaying the final result identified by the mode conversion module in the augmented reality glasses.
7. A key identification system for a virtual keyboard, comprising: inertial measurement unit data gloves and augmented reality glasses;
the inertia measurement unit data glove is used for acquiring motion information of the movement of the finger key of the user, inputting the motion information of the movement of the finger key of the user into a pre-stored filtering and noise reduction module to generate motion information of the movement of the finger key of the user after filtering and noise reduction, and then sending the motion information of the movement of the finger key of the user after filtering and noise reduction to the augmented reality glasses through the pre-stored Bluetooth sending module;
the augmented reality glasses are used for receiving motion information of movement of a user finger key sent by a data glove of an inertial measurement unit, then acquiring pre-processed image information, inputting the motion information of the user finger key movement and the pre-processed image information into a pre-stored data processing module for fusion to generate a fused data sample, then inputting the fused data sample into a preset neural network model to generate a key prediction result, inputting the key prediction result into a pre-stored mode conversion module to convert the key prediction result into characters which can be recognized by the augmented reality glasses, and finally displaying the characters which can be recognized by the augmented reality glasses.
8. The system of claim 7, wherein the augmented reality glasses are further configured for:
acquiring an image information set of a user finger key, and inputting the image information set into a pre-stored filtering and noise reduction module to generate processed image information.
9. A key recognition device for a virtual keyboard, comprising:
one or more processors, storage devices storing one or more programs;
the one or more programs, when executed by the one or more processors, implement the method of any of claims 1-6.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911357466.6A CN111158476B (en) | 2019-12-25 | 2019-12-25 | Key recognition method, system, equipment and storage medium of virtual keyboard |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911357466.6A CN111158476B (en) | 2019-12-25 | 2019-12-25 | Key recognition method, system, equipment and storage medium of virtual keyboard |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111158476A true CN111158476A (en) | 2020-05-15 |
CN111158476B CN111158476B (en) | 2023-05-23 |
Family
ID=70556712
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911357466.6A Active CN111158476B (en) | 2019-12-25 | 2019-12-25 | Key recognition method, system, equipment and storage medium of virtual keyboard |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111158476B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111930225A (en) * | 2020-06-28 | 2020-11-13 | 北京理工大学 | Virtual-real converged keyboard system and method for mobile devices |
CN113821139A (en) * | 2021-09-24 | 2021-12-21 | 维沃移动通信有限公司 | Information display method, information display device, glasses and medium |
Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007286987A (en) * | 2006-04-18 | 2007-11-01 | Ricoh Co Ltd | Image forming apparatus, program for the same, and recording medium |
CN102063183A (en) * | 2011-02-12 | 2011-05-18 | 深圳市亿思达显示科技有限公司 | Virtual input device of grove type |
US20120268376A1 (en) * | 2011-04-20 | 2012-10-25 | Qualcomm Incorporated | Virtual keyboards and methods of providing the same |
CN102880304A (en) * | 2012-09-06 | 2013-01-16 | 天津大学 | Character inputting method and device for portable device |
CN103019377A (en) * | 2012-12-04 | 2013-04-03 | 天津大学 | Head-mounted visual display equipment-based input method and device |
US20140022165A1 (en) * | 2011-04-11 | 2014-01-23 | Igor Melamed | Touchless text and graphic interface |
CN104317403A (en) * | 2014-10-27 | 2015-01-28 | 黄哲军 | Wearable equipment for sign language recognition |
EP2977855A1 (en) * | 2014-07-23 | 2016-01-27 | Wincor Nixdorf International GmbH | Virtual keyboard and input method for a virtual keyboard |
WO2016097841A2 (en) * | 2014-12-16 | 2016-06-23 | Quan Xiao | Methods and apparatus for high intuitive human-computer interface and human centric wearable "hyper" user interface that could be cross-platform / cross-device and possibly with local feel-able/tangible feedback |
WO2016189372A2 (en) * | 2015-04-25 | 2016-12-01 | Quan Xiao | Methods and apparatus for human centric "hyper ui for devices"architecture that could serve as an integration point with multiple target/endpoints (devices) and related methods/system with dynamic context aware gesture input towards a "modular" universal controller platform and input device virtualization |
CN106484119A (en) * | 2016-10-24 | 2017-03-08 | 网易(杭州)网络有限公司 | Virtual reality system and virtual reality system input method |
CN106575159A (en) * | 2014-08-22 | 2017-04-19 | 索尼互动娱乐股份有限公司 | Glove interface object |
CN106598233A (en) * | 2016-11-25 | 2017-04-26 | 北京暴风魔镜科技有限公司 | Input method and input system based on gesture recognition |
US20170123487A1 (en) * | 2015-10-30 | 2017-05-04 | Ostendo Technologies, Inc. | System and methods for on-body gestural interfaces and projection displays |
CN106648093A (en) * | 2016-12-19 | 2017-05-10 | 珠海市魅族科技有限公司 | Input method and device of virtual reality device |
CN107357434A (en) * | 2017-07-19 | 2017-11-17 | 广州大西洲科技有限公司 | Information input equipment, system and method under a kind of reality environment |
RU176318U1 (en) * | 2017-06-07 | 2018-01-16 | Федоров Александр Владимирович | VIRTUAL REALITY GLOVE |
WO2018098861A1 (en) * | 2016-11-29 | 2018-06-07 | 歌尔科技有限公司 | Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus |
CN108519855A (en) * | 2018-04-17 | 2018-09-11 | 北京小米移动软件有限公司 | Characters input method and device |
CN108874119A (en) * | 2017-05-16 | 2018-11-23 | 芬奇科技有限公司 | The mobile input to generate computer system of tracking arm |
CN110443113A (en) * | 2019-06-18 | 2019-11-12 | 中国人民解放军军事科学院国防科技创新研究院 | A kind of virtual reality Writing method, system and storage medium |
CN110442233A (en) * | 2019-06-18 | 2019-11-12 | 中国人民解放军军事科学院国防科技创新研究院 | A kind of augmented reality key mouse system based on gesture interaction |
CN111722713A (en) * | 2020-06-12 | 2020-09-29 | 天津大学 | Multi-mode fused gesture keyboard input method, device, system and storage medium |
Patent Citations (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007286987A (en) * | 2006-04-18 | 2007-11-01 | Ricoh Co Ltd | Image forming apparatus, program for the same, and recording medium |
CN102063183A (en) * | 2011-02-12 | 2011-05-18 | 深圳市亿思达显示科技有限公司 | Virtual input device of grove type |
US20140022165A1 (en) * | 2011-04-11 | 2014-01-23 | Igor Melamed | Touchless text and graphic interface |
US20120268376A1 (en) * | 2011-04-20 | 2012-10-25 | Qualcomm Incorporated | Virtual keyboards and methods of providing the same |
CN102880304A (en) * | 2012-09-06 | 2013-01-16 | 天津大学 | Character inputting method and device for portable device |
CN103019377A (en) * | 2012-12-04 | 2013-04-03 | 天津大学 | Head-mounted visual display equipment-based input method and device |
EP2977855A1 (en) * | 2014-07-23 | 2016-01-27 | Wincor Nixdorf International GmbH | Virtual keyboard and input method for a virtual keyboard |
CN106575159A (en) * | 2014-08-22 | 2017-04-19 | 索尼互动娱乐股份有限公司 | Glove interface object |
CN104317403A (en) * | 2014-10-27 | 2015-01-28 | 黄哲军 | Wearable equipment for sign language recognition |
WO2016097841A2 (en) * | 2014-12-16 | 2016-06-23 | Quan Xiao | Methods and apparatus for high intuitive human-computer interface and human centric wearable "hyper" user interface that could be cross-platform / cross-device and possibly with local feel-able/tangible feedback |
WO2016189372A2 (en) * | 2015-04-25 | 2016-12-01 | Quan Xiao | Methods and apparatus for human centric "hyper ui for devices"architecture that could serve as an integration point with multiple target/endpoints (devices) and related methods/system with dynamic context aware gesture input towards a "modular" universal controller platform and input device virtualization |
CN107896508A (en) * | 2015-04-25 | 2018-04-10 | 肖泉 | Multiple target/end points can be used as(Equipment)" method and apparatus of the super UI " architectures of equipment, and correlation technique/system of the gesture input with dynamic context consciousness virtualized towards " modularization " general purpose controller platform and input equipment focusing on people of the integration points of sum |
US20170123487A1 (en) * | 2015-10-30 | 2017-05-04 | Ostendo Technologies, Inc. | System and methods for on-body gestural interfaces and projection displays |
CN108431736A (en) * | 2015-10-30 | 2018-08-21 | 奥斯坦多科技公司 | The system and method for gesture interface and Projection Display on body |
CN106484119A (en) * | 2016-10-24 | 2017-03-08 | 网易(杭州)网络有限公司 | Virtual reality system and virtual reality system input method |
CN106598233A (en) * | 2016-11-25 | 2017-04-26 | 北京暴风魔镜科技有限公司 | Input method and input system based on gesture recognition |
WO2018098861A1 (en) * | 2016-11-29 | 2018-06-07 | 歌尔科技有限公司 | Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus |
CN106648093A (en) * | 2016-12-19 | 2017-05-10 | 珠海市魅族科技有限公司 | Input method and device of virtual reality device |
CN108874119A (en) * | 2017-05-16 | 2018-11-23 | 芬奇科技有限公司 | The mobile input to generate computer system of tracking arm |
RU176318U1 (en) * | 2017-06-07 | 2018-01-16 | Федоров Александр Владимирович | VIRTUAL REALITY GLOVE |
CN107357434A (en) * | 2017-07-19 | 2017-11-17 | 广州大西洲科技有限公司 | Information input equipment, system and method under a kind of reality environment |
CN108519855A (en) * | 2018-04-17 | 2018-09-11 | 北京小米移动软件有限公司 | Characters input method and device |
CN110443113A (en) * | 2019-06-18 | 2019-11-12 | 中国人民解放军军事科学院国防科技创新研究院 | A kind of virtual reality Writing method, system and storage medium |
CN110442233A (en) * | 2019-06-18 | 2019-11-12 | 中国人民解放军军事科学院国防科技创新研究院 | A kind of augmented reality key mouse system based on gesture interaction |
CN111722713A (en) * | 2020-06-12 | 2020-09-29 | 天津大学 | Multi-mode fused gesture keyboard input method, device, system and storage medium |
Non-Patent Citations (4)
Title |
---|
STEINICKE, F. et al.: "Towards applicable 3D user interfaces for everyday working environments", Lecture Notes in Artificial Intelligence *
LIU Xiaowei: "Research and Implementation of Virtual Gesture Interaction Technology Based on Binocular Stereo Vision" *
ZHANG Congcong et al.: "Human Action Recognition Method Based on Key-Frame Two-Stream Convolutional Networks", Journal of Nanjing University of Information Science & Technology (Natural Science Edition) *
WANG Chunhui: "A Rapid SSVEP Target Selection Method Based on a Dynamic Adaptive Strategy", Journal of Tsinghua University (Science and Technology) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111930225A (en) * | 2020-06-28 | 2020-11-13 | 北京理工大学 | Virtual-real converged keyboard system and method for mobile devices |
CN111930225B (en) * | 2020-06-28 | 2022-12-02 | 北京理工大学 | Virtual-real converged keyboard system and method for mobile devices |
CN113821139A (en) * | 2021-09-24 | 2021-12-21 | 维沃移动通信有限公司 | Information display method, information display device, glasses and medium |
WO2023045908A1 (en) * | 2021-09-24 | 2023-03-30 | 维沃移动通信有限公司 | Information display method and apparatus, glasses, and medium |
Also Published As
Publication number | Publication date |
---|---|
CN111158476B (en) | 2023-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108986801B (en) | Man-machine interaction method and device and man-machine interaction terminal | |
Hasan et al. | RETRACTED ARTICLE: Static hand gesture recognition using neural networks | |
CN110532861B (en) | Behavior recognition method based on framework-guided multi-mode fusion neural network | |
CN109196526A (en) | For generating the method and system of multi-modal digital picture | |
CN112906604B (en) | Behavior recognition method, device and system based on skeleton and RGB frame fusion | |
CN109685037B (en) | Real-time action recognition method and device and electronic equipment | |
CN107150347A (en) | Robot perception and understanding method based on man-machine collaboration | |
CN106997236A (en) | Based on the multi-modal method and apparatus for inputting and interacting | |
CN108475113B (en) | Method, system, and medium for detecting hand gestures of a user | |
CN111274998B (en) | Parkinson's disease finger knocking action recognition method and system, storage medium and terminal | |
CN111966217A (en) | Unmanned aerial vehicle control method and system based on gestures and eye movements | |
CN110008839B (en) | Intelligent sign language interaction system and method for self-adaptive gesture recognition | |
CN111722713A (en) | Multi-mode fused gesture keyboard input method, device, system and storage medium | |
CN111158476B (en) | Key recognition method, system, equipment and storage medium of virtual keyboard | |
CN111444488A (en) | Identity authentication method based on dynamic gesture | |
Krishnaraj et al. | A Glove based approach to recognize Indian Sign Languages | |
CN111552383A (en) | Finger identification method and system of virtual augmented reality interaction equipment and interaction equipment | |
CN113268143B (en) | Multimodal man-machine interaction method based on reinforcement learning | |
CN116449947B (en) | Automobile cabin domain gesture recognition system and method based on TOF camera | |
Gaikwad et al. | Recognition of American sign language using image processing and machine learning | |
CN117115911A (en) | Hypergraph learning action recognition system based on attention mechanism | |
CN113887373B (en) | Attitude identification method and system based on urban intelligent sports parallel fusion network | |
CN108268125A (en) | A kind of motion gesture detection and tracking based on computer vision | |
CN108255285A (en) | It is a kind of based on the motion gesture detection method that detection is put between the palm | |
Dhamanskar et al. | Human computer interaction using hand gestures and voice |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |