CN111158476B - Key recognition method, system, equipment and storage medium of virtual keyboard - Google Patents

Key recognition method, system, equipment and storage medium of virtual keyboard

Info

Publication number
CN111158476B
CN111158476B · Application CN201911357466.6A
Authority
CN
China
Prior art keywords
module
key
augmented reality
reality glasses
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911357466.6A
Other languages
Chinese (zh)
Other versions
CN111158476A (en)
Inventor
闫野
范博辉
姜志杰
裴育
邓宝松
谢良
印二威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center and National Defense Technology Innovation Institute PLA Academy of Military Science
Priority to CN201911357466.6A
Publication of CN111158476A
Application granted
Publication of CN111158476B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014Hand-worn input/output arrangements, e.g. data gloves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a key identification method, system, equipment and storage medium for a virtual keyboard. The method comprises the following steps: the inertial measurement unit data glove acquires pre-filtered, noise-reduced motion information of the user's finger key movements and sends it to the augmented reality glasses through a pre-stored Bluetooth sending module; after receiving the information, the augmented reality glasses acquire pre-processed image information, input the motion information and the pre-processed image information into a pre-stored data processing module for fusion to generate a fused data sample, and input the fused data sample into a preset neural network model to generate a key prediction result; the key prediction result is input into a pre-stored mode conversion module and converted into characters that the augmented reality glasses can recognize; and the characters recognizable by the augmented reality glasses are displayed. By adopting the embodiments of the application, the space occupied by the keyboard is reduced and the user experience is improved.

Description

Key recognition method, system, equipment and storage medium of virtual keyboard
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, a system, an apparatus, and a storage medium for identifying keys of a virtual keyboard.
Background
The typewriter of the 18th century set off a craze worldwide and changed the way people control machines. Continuous innovation and improvement of the typewriter led to the modern keyboard, and the appearance of the keyboard is an important innovation in the field of human-computer interaction, making people's control of machines and intelligent equipment efficient, convenient, rapid and accurate.
The keyboard is the most commonly used and principal input device; English letters, numbers, various punctuation marks and the like can be input into a computer through an ordinary keyboard, so as to send commands to the computer and output corresponding data. Keyboards currently on the market are mainly divided, by working principle, into mechanical keyboards, membrane keyboards, conductive-rubber keyboards and the like, and their common shortcomings mainly include the following points: 1. a physical external device is required and occupies considerable space; 2. data entry depends on a supporting surface; 3. both hands are fixed in position and flexibility is insufficient, and the fixed hands also affect the comfort and feel of keyboard input.
Disclosure of Invention
The embodiment of the application provides a key identification method, system and equipment of a virtual keyboard and a storage medium. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present application provides a key recognition method of a virtual keyboard, applied to an inertial measurement unit data glove, the method including:
acquiring motion information of the user finger key movement after pre-filtering and noise reduction;
and transmitting the motion information of the user finger key movement after the pre-filtering and noise reduction to the augmented reality glasses through a pre-stored Bluetooth transmission module.
Optionally, before the obtaining the motion information of the user finger key movement after the pre-filtering and noise reduction, the method further includes:
when a user finger key movement instruction input for the inertial measurement unit data glove is received, acquiring the motion information of the user's finger key movement;
and inputting the motion information of the user finger key movement into a pre-stored filtering noise reduction module to generate motion information of the user finger key movement after filtering noise reduction, and taking the motion information of the user finger key movement after filtering noise reduction as the motion information of the user finger key movement after pre-filtering noise reduction.
Optionally, the inertial measurement unit data glove comprises an inertial measurement unit motion sensor module, a bluetooth transmitting module, a filtering noise reduction module and a wireless charging module;
The inertial measurement unit motion sensor module consists of six-axis inertial measurement unit motion sensors used to record the motion information of the two hands when they perform key-press actions; the motion information comprises three-axis acceleration and three-axis angular velocity. The sensors are positioned at the five fingers and the back of the hand, and the finger sensors are each connected to the back-of-hand sensor (a data-layout sketch follows this module list);
the filtering noise reduction module is mainly used for preprocessing the collected inertial measurement unit motion data and guaranteeing the validity of the data;
the Bluetooth transmitting module is mainly used for transmitting the inertial measurement unit motion information of the data glove to the augmented reality glasses for processing;
the wireless charging module is mainly used for charging the data glove to extend its battery endurance, and wireless charging also improves the convenience of the keyboard system.
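For illustration only, the following minimal Python sketch shows one way a single glove frame could be laid out, assuming six sensors (five fingers plus the back of the hand) each reporting three-axis acceleration and three-axis angular velocity; all names and the per-hand layout are assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ImuSample:
    """One six-axis IMU reading: three-axis acceleration, three-axis angular velocity."""
    ax: float; ay: float; az: float   # acceleration along x, y, z
    gx: float; gy: float; gz: float   # angular velocity about x, y, z

@dataclass
class GloveFrame:
    """One time step from one hand: five finger sensors plus the back-of-hand
    sensor, i.e. 6 sensors x 6 axes = 36 values per hand."""
    timestamp_ms: int
    fingers: List[ImuSample]          # thumb, index, middle, ring, little
    back_of_hand: ImuSample
```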
In a second aspect, an embodiment of the present application provides a key recognition method of a virtual keyboard, which is applied to augmented reality glasses, and the method includes:
receiving the motion information of the user's finger key movement sent by the inertial measurement unit data glove;
acquiring pre-processed image information;
inputting the motion information of the finger key movement of the user and the pre-processed image information into a pre-stored data processing module for fusion, and generating a fused data sample;
Inputting the fused data sample into a preset neural network model to generate a key prediction result;
inputting the key prediction result into a pre-stored mode conversion module, and converting the key prediction result into characters which can be identified by the augmented reality glasses;
and displaying the characters which can be identified by the augmented reality glasses.
Optionally, before the acquiring the pre-processed image information, the method further includes:
acquiring an image information set of a user finger key;
and inputting the image information set into a pre-stored filtering noise reduction module to generate processed image information, the processed image information being taken as the pre-processed image information.
Optionally, the augmented reality glasses comprise a binocular camera module, a mode conversion module, a filtering noise reduction module, a data processing module, a Bluetooth receiving module, a vibration feedback module and a display module, wherein,
the binocular camera module is positioned at the lower front of the augmented reality glasses, angled obliquely downward, and is used for acquiring images of the data glove;
the filtering noise reduction module is used for preprocessing the recorded image information;
the data processing module is used for effectively fusing the collected information of the two-hand inertial measurement unit and the image information collected by the binocular camera, and loading a neural network model to obtain a preliminary key prediction result;
The Bluetooth receiving module is used for receiving the inertial measurement unit signal information transmitted by the data glove;
the vibration feedback module is used for slightly vibrating after the key is successfully identified, feeding back to a user and enhancing man-machine interaction;
the mode conversion module is used for carrying out fuzzy intention reasoning on the preliminary prediction result obtained by the data processing module so as to achieve the function of input error correction, and converting the final prediction result into characters which can be recognized by the augmented reality glasses;
the display module is used for displaying the final result identified by the mode conversion module in the augmented reality glasses.
In a third aspect, an embodiment of the present application provides a key identification system of a virtual keyboard, including: inertial measurement unit data glove and augmented reality glasses;
the inertial measurement unit data glove is used for acquiring the motion information of the user's finger key movement, inputting it into a pre-stored filtering noise reduction module to generate filtered, noise-reduced motion information, and then sending the pre-filtered, noise-reduced motion information to the augmented reality glasses through a pre-stored Bluetooth sending module;
The augmented reality glasses are used for receiving the motion information of the user's finger key movement sent by the inertial measurement unit data glove, acquiring the pre-processed image information, inputting the motion information and the pre-processed image information into a pre-stored data processing module for fusion to generate a fused data sample, inputting the fused data sample into a preset neural network model to generate a key prediction result, inputting the key prediction result into a pre-stored mode conversion module to convert it into characters that the augmented reality glasses can recognize, and finally displaying those characters.
Optionally, the augmented reality glasses are further used for:
acquiring an image information set of the user's finger keys, and inputting the image information set into a pre-stored filtering noise reduction module to generate processed image information.
In a fourth aspect, an embodiment of the present application provides a key recognition device of a virtual keyboard, applied to an inertial measurement unit data glove, the device including:
the first information acquisition module is used for acquiring the motion information of the user finger key movement after the pre-filtering noise reduction;
And the information sending module is used for sending the motion information of the user finger key movement after the pre-filtering noise reduction to the augmented reality glasses through the pre-stored Bluetooth sending module.
Optionally, the apparatus further includes:
the second information acquisition module is used for acquiring the motion information of the user's finger key movement when a user finger key movement instruction input for the inertial measurement unit data glove is received;
the first information generation module is used for inputting the motion information of the user finger key movement into the pre-stored filtering noise reduction module to generate the motion information of the user finger key movement after filtering noise reduction, and taking the motion information of the user finger key movement after filtering noise reduction as the motion information of the user finger key movement after pre-filtering noise reduction.
In a fifth aspect, an embodiment of the present application provides a key recognition device of a virtual keyboard, which is applied to augmented reality glasses, and the device includes:
the information receiving module is used for receiving the movement information of the user finger key movement sent by the inertial measurement unit data glove;
the third information acquisition module is used for acquiring the image information after the preprocessing;
The sample generation module is used for inputting the motion information of the finger key movement of the user and the pre-processed image information into a pre-stored data processing module for fusion, and generating a fused data sample;
the result generation module is used for inputting the fused data sample into a preset neural network model to generate a key prediction result;
the character generation module is used for inputting the key prediction result into a pre-stored mode conversion module and converting the key prediction result into characters which can be recognized by the augmented reality glasses;
and the character display module is used for displaying the characters which can be identified by the augmented reality glasses.
Optionally, the apparatus further includes:
the collection acquisition module is used for acquiring an image information collection of the finger keys of the user;
and the second information generation module is used for inputting the image information set into a pre-stored filtering noise reduction module to generate processed image information, the processed image information being taken as the pre-processed image information.
In a sixth aspect, an embodiment of the present application provides a key identification device of a virtual keyboard, including:
one or more processors; and a storage device storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the above-described method steps.
In a seventh aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the above-described method steps.
The technical scheme provided by the embodiment of the application can comprise the following beneficial effects:
In the embodiment of the application, the inertial measurement unit data glove first acquires the pre-filtered, noise-reduced motion information of the user's finger key movements and transmits it to the augmented reality glasses through the pre-stored Bluetooth transmission module. Upon receiving the information, the augmented reality glasses acquire the pre-processed image information, input the motion information and the pre-processed image information into the pre-stored data processing module for fusion to generate a fused data sample, input the fused data sample into the preset neural network model to generate a key prediction result, then input the key prediction result into the pre-stored mode conversion module to convert it into characters that the augmented reality glasses can recognize, and finally display those characters. Because a virtual keyboard combining the inertial measurement unit data glove with augmented reality (Augmented Reality, AR) technology replaces the traditional mechanical keyboard, the method reduces the space occupied by the keyboard, is convenient to carry, offers high flexibility, and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flowchart of a key recognition method of a virtual keyboard according to an embodiment of the present application;
fig. 2 is a flowchart of a key identification method of a virtual keyboard for IMU data glove according to an embodiment of the present application;
fig. 3 is a flowchart of a key recognition method of a virtual keyboard for augmented reality glasses according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a hardware framework of an IMU data glove provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a hardware framework of augmented reality glasses provided in an embodiment of the present application;
FIG. 6 is a structural framework diagram of a virtual keyboard system according to an embodiment of the present application;
FIG. 7 is a software framework diagram of a processing layer provided by an embodiment of the present application;
FIG. 8 is a flowchart for identifying a virtual keyboard system according to an embodiment of the present application;
FIG. 9 is a block diagram of a virtual keyboard system according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a key recognition device of a first virtual keyboard according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a key recognition device of a second virtual keyboard according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a key recognition device of a third virtual keyboard according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a key recognition device of a fourth virtual keyboard according to an embodiment of the present application.
Detailed Description
The following description and the drawings illustrate specific embodiments of the application sufficiently to enable those skilled in the art to practice them.
It should be understood that the described embodiments are merely some, but not all, of the embodiments of the present application. All other embodiments, based on the embodiments herein, which would be apparent to one of ordinary skill in the art without making any inventive effort, are intended to be within the scope of the present application.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
In the description of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of such terms in this application will be understood by those of ordinary skill in the art in the specific context. Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
To date, the keyboard remains the most commonly used and principal input device; English letters, numbers, various punctuation marks and the like can be input into a computer through an ordinary keyboard, so as to send commands to the computer and output corresponding data. Keyboards currently on the market are mainly divided, by working principle, into mechanical keyboards, membrane keyboards, conductive-rubber keyboards and the like, and their common shortcomings mainly include the following points: 1. a physical external device is required and occupies considerable space; 2. data entry depends on a supporting surface; 3. both hands are fixed in position and flexibility is insufficient, and the fixed hands also affect the comfort and feel of keyboard input. For this reason, the present application provides a key recognition method, system, device and storage medium for a virtual keyboard, so as to solve the problems existing in the related art. In the technical scheme provided by the application, because a virtual keyboard combining the inertial measurement unit data glove with augmented reality (Augmented Reality, AR for short) technology replaces the traditional mechanical keyboard, the method reduces the space occupied by the keyboard, is convenient to carry, offers high flexibility, and improves the user experience.
The key recognition method of the virtual keyboard provided in the embodiment of the present application will be described in detail with reference to fig. 1 to 8. The method may be implemented by a computer program and may run on a key recognition device of a virtual keyboard based on a von Neumann architecture.
Referring to fig. 1, a flowchart of a key recognition method of a virtual keyboard is provided in an embodiment of the present application. As shown in fig. 1, the method according to the embodiment of the present application may include the following steps:
step 101, obtaining motion information of the user finger key movement after pre-filtering and noise reduction;
The pre-filtering noise reduction means inputting the acquired motion information of the user's finger key movements into a pre-stored filtering noise reduction module for processing. The motion information of the user's finger key movements refers to the motion information recorded by the IMU (inertial measurement unit) motion sensor module in the IMU data glove worn on the hands when the hands perform key-press actions.
Generally, as shown in fig. 4, an IMU motion sensor module, a Bluetooth transmitting module, a filtering noise reduction module and a wireless charging module are pre-stored in the IMU (inertial measurement unit) data glove. The IMU motion sensor module comprises six-axis IMU sensors 100 recording three-axis acceleration and three-axis angular velocity information (x-axis, y-axis and z-axis); the sensors are positioned at the five fingers and the back of the hand, and the finger sensors are each connected to the back-of-hand sensor.
In one possible implementation, the user first wears the IMU data glove and then performs simulated typing according to his or her typing intent; during typing, the IMU data glove moves with the displacement of the fingers, movement data are generated during the motion, and the finger movement data are then recorded and saved.
Step 102, transmitting the motion information of the user finger key movement after the pre-filtering and noise reduction to the augmented reality glasses through a pre-stored Bluetooth transmission module;
The Bluetooth sending module is used for sending the IMU motion information of the data glove to the augmented reality glasses for processing. The augmented reality glasses are the device that receives and processes the information transmitted by the data glove.
In general, the augmented reality glasses mainly comprise a binocular camera module, a mode conversion module, a filtering noise reduction module, a data processing module, a Bluetooth receiving module, a vibration feedback module and a display module.
For example, as shown in fig. 5, the augmented reality glasses 200 include a haptic feedback device 300 and a binocular camera 400. The binocular camera is positioned at the lower front of the augmented reality glasses, angled obliquely downward; it can photograph the data glove and can distinguish different key actions from the motion trajectories of the lines connecting the six IMU sensors on the glove, and a binocular camera at 50 frames is adopted for multi-frame recording of the image information of the two hands' keystrokes. The filtering noise reduction module is mainly used for preprocessing the recorded image information; the data processing module is mainly used for effectively fusing the collected two-hand IMU information with the image information collected by the binocular camera, and loading a neural network model to obtain a preliminary key prediction result; the Bluetooth receiving module is mainly used for receiving the IMU signal information transmitted by the data glove. The vibration feedback module is mainly used for vibrating slightly after a key is successfully identified, providing feedback to the user and enhancing human-computer interaction. The mode conversion module is mainly used for performing fuzzy intention reasoning on the preliminary prediction result obtained by the data processing module, so as to achieve input error correction, and for converting the final prediction result into characters that the augmented reality glasses can recognize; the display module is mainly used for displaying the final result recognized by the mode conversion module in the augmented reality glasses.
In a possible implementation, based on the stored finger movement data obtained in step 101, when the IMU data glove detects the stored movement data, it first obtains the Bluetooth transmission module stored in the glove and passes the information to it; after receiving the information, the Bluetooth transmission module connects to the augmented reality glasses through an internal program, and once the connection succeeds, the information is transmitted over the wireless link.
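As a hedged illustration of this Bluetooth hand-off, the sketch below uses the cross-platform bleak library; the device address, characteristic UUID and float-packed payload are assumptions, since the patent does not specify the transport protocol.

```python
import asyncio
import struct
from bleak import BleakClient

GLASSES_ADDRESS = "AA:BB:CC:DD:EE:FF"                    # assumed BLE address of the AR glasses
IMU_CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"   # assumed GATT characteristic

async def send_imu_frame(values: list) -> None:
    """Pack one 36-value IMU frame as little-endian floats and write it out."""
    payload = struct.pack(f"<{len(values)}f", *values)
    async with BleakClient(GLASSES_ADDRESS) as client:   # connect, then transmit
        await client.write_gatt_char(IMU_CHAR_UUID, payload)

# asyncio.run(send_imu_frame([0.0] * 36))
```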
Step 103, receiving motion information of the user finger key movement sent by the inertial measurement unit data glove;
In this embodiment of the present application, after the information is sent in step 102, when the augmented reality glasses detect a data information request from the IMU data glove, they first acquire the pre-stored Bluetooth receiving module; once acquired, the Bluetooth receiving module obtains the transmitted information and stores it in the augmented reality glasses.
Step 104, acquiring the pre-processed image information;
The pre-processed image information is generated by the filtering noise reduction module in the augmented reality glasses processing the image information acquired by the binocular camera, where that image information is the multi-frame imagery of the IMU data glove's key movements captured by the binocular camera on the augmented reality glasses.
In the embodiment of the application, when the binocular camera on the augmented reality glasses detects that the IMU data glove is moving, the augmented reality glasses perform multi-frame acquisition on the image information of the IMU data glove key movement through an internal program, and then the image information of the IMU data glove key movement is input into the pre-stored filtering noise reduction module to process the image and then generate pre-processed image information.
Step 105, inputting the motion information of the finger key movement of the user and the pre-processed image information into a pre-stored data processing module for fusion, and generating a fused data sample;
First, the data information sent by the IMU data glove is obtained based on step 103, and then the pre-processed image information is obtained according to step 104.
In the embodiment of the present application, fig. 6 shows the structural framework of the virtual keyboard system, which mainly includes three major parts: an input layer, a processing layer and an output layer.
The input layer handles multi-modal information input and is mainly used to record the multi-modal information of the user during mid-air typing, including the two-hand IMU motion information recorded by the data glove (x-, y- and z-axis acceleration and angular velocity) and the image information recorded by the binocular camera at the bottom of the augmented reality glasses; the user is required to use the standard typing method. The motion information recorded by the input layer is sent to the processing layer.
The processing layer mainly comprises three parts: a data processing part, a neural network model prediction part and a fuzzy intention reasoning error correction part.
In the data processing part, after the multi-modal information from the input layer is obtained, the IMU motion information is noise-reduction filtered using a Butterworth 9-300 Hz band-pass filter and a 50 Hz notch filter, and features are extracted after filtering. The feature extraction methods for the IMU motion sensor include MAV, RMS and the like, where MAV is the mean absolute value of the amplitude and RMS is the root mean square. The image information collected by the binocular camera is noise-reduction filtered using a Chebyshev 30-150 Hz band-pass filter and a 50 Hz notch filter. Because the user types with the standard typing method, each key action moves one finger, or two corresponding fingers, so a pre-sorting step can be performed in which the keys of the two hands are divided evenly into four major categories; that is, when a key action is detected, it can be assigned to one of the four categories according to the IMU signals of the data glove (a preprocessing sketch follows this paragraph).
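A minimal sketch of this preprocessing in Python with SciPy, assuming a 1 kHz IMU sampling rate and a 200-sample window (neither is specified in the text); the Butterworth band-pass, 50 Hz notch, MAV and RMS follow the description above.

```python
import numpy as np
from scipy import signal

FS = 1000.0  # assumed IMU sampling rate in Hz (not given in the text)

def preprocess_imu(x: np.ndarray) -> np.ndarray:
    """Band-pass 9-300 Hz (Butterworth), then notch out 50 Hz mains interference."""
    b, a = signal.butter(4, [9, 300], btype="bandpass", fs=FS)
    x = signal.filtfilt(b, a, x)
    bn, an = signal.iirnotch(50.0, Q=30.0, fs=FS)
    return signal.filtfilt(bn, an, x)

def mav(x: np.ndarray) -> float:
    """MAV: mean absolute value of the amplitude."""
    return float(np.mean(np.abs(x)))

def rms(x: np.ndarray) -> float:
    """RMS: root mean square of the amplitude."""
    return float(np.sqrt(np.mean(x ** 2)))

# Example: one angular-velocity channel over an assumed 200 ms window.
raw = np.random.randn(200)           # placeholder for a real IMU channel
clean = preprocess_imu(raw)
features = [mav(clean), rms(clean)]
```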
Step 106, inputting the fused data sample into a preset neural network model to generate a key prediction result;
In the embodiment of the present application, as noted in step 105, the processing layer is mainly composed of three major parts: a data processing part, a neural network model prediction part and a fuzzy intention reasoning error correction part.
In the neural network prediction part, the prediction comprises two branches: in the first, the motion sensor (IMU) information is processed with an LSTM model; in the second, the binocular camera image information is processed with a CNN model. First, the IMU information processing. LSTM is a special kind of RNN that can learn long-term dependencies and is well suited to processing long time-series information; its internal structure contains an input gate, a forget gate and an output gate, and it works as follows. The first LSTM step determines what information may pass through the memory cell. This is controlled by the forget gate layer through an activation function, which produces a value between 0 and 1 from the output of the previous time step and the input of the current time step, deciding whether the information learned at the previous time step passes fully or partially. The formula of this step is as follows:

f_t = σ(W_f · [h_{t-1}, x_t] + b_f)

where σ is the activation function, h_{t-1} is the output of the previous time step, x_t is the current input, b_f is the bias, and f_t is the forget gate.
The second step generates the new information to be updated. It comprises two parts: first, an input gate layer decides which values to update by means of a sigmoid activation function; second, a tanh layer generates the new candidate value C̃_t, which may be added to the memory cell as the candidate produced by the current layer. The values generated by the two parts are combined for the update:

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)

where σ is the activation function, h_{t-1} is the output of the previous time step, x_t is the current input, b_i and b_C are the biases, tanh is an activation function, and i_t is the input gate.
The third step updates the old memory cell: the old memory cell is first multiplied by f_t to forget the unneeded information, and i_t · C̃_t is then added to obtain the new cell state. The formula is as follows:

C_t = f_t · C_{t-1} + i_t · C̃_t

where f_t is the output of the forget gate, C_{t-1} is the old memory cell, C_t is the new memory cell, and i_t is the output of the input gate.
The final step determines the output of the model: first a sigmoid layer obtains an initial output, then tanh scales the values of C_t to between -1 and 1, and these are multiplied element-wise with the sigmoid output, giving the output of the model. The formulas of this step are as follows:

O_t = σ(W_o · [h_{t-1}, x_t] + b_o)
h_t = O_t · tanh(C_t)

where O_t is the output of the output gate, h_{t-1} is the output of the previous time step, x_t is the current input, b_o is the bias, and C_t is the new memory cell.
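The four gate equations above can be collected into one cell-update step. Below is a minimal NumPy sketch of a single LSTM time step that follows those equations; the dimensions (36 inputs, 64 hidden units) and random weights are illustrative only, since the patent's model is pre-trained.

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W_f, b_f, W_i, b_i, W_C, b_C, W_o, b_o):
    """One LSTM time step following the gate equations in the text."""
    concat = np.concatenate([h_prev, x_t])    # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ concat + b_f)         # forget gate
    i_t = sigmoid(W_i @ concat + b_i)         # input gate
    C_tilde = np.tanh(W_C @ concat + b_C)     # candidate value
    C_t = f_t * C_prev + i_t * C_tilde        # new memory cell
    O_t = sigmoid(W_o @ concat + b_o)         # output gate
    h_t = O_t * np.tanh(C_t)                  # model output
    return h_t, C_t

# Illustrative dimensions: 36 IMU values in, 64 hidden units.
n_in, n_hid = 36, 64
rng = np.random.default_rng(0)
W = lambda: rng.standard_normal((n_hid, n_hid + n_in)) * 0.1
b = lambda: np.zeros(n_hid)
h, C = np.zeros(n_hid), np.zeros(n_hid)
h, C = lstm_step(rng.standard_normal(n_in), h, C,
                 W(), b(), W(), b(), W(), b(), W(), b())
```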
The second branch, binocular camera image processing, adopts a CNN. A convolutional neural network (Convolutional Neural Networks, CNN) is a feedforward neural network with a deep structure that includes convolution computation; its overall structure is input - convolutional layer - pooling layer - fully connected layer - output. The convolutional layer performs feature extraction on the input data; it contains multiple convolution kernels, and each element of a kernel corresponds to a weight coefficient and a bias, analogous to a neuron of a feedforward neural network. After the convolutional layer extracts features, the output feature map is passed to the pooling layer for feature selection and information filtering. The pooling layer contains a preset pooling function whose role is to replace the value of a single point in the feature map with a statistic of its neighbouring region. The model adopts L_p pooling, a class of pooling models inspired by the hierarchical structure within the visual cortex [35]. Its general expression is

A(i, j) = [ (1/|R|) · Σ_{(m,n)∈R} a(s_0·i + m, s_0·j + n)^p ]^{1/p}

where s_0 is the pooling stride, (i, j) indexes a pixel, R is the pooling region, and p is a pre-specified parameter; this patent adopts p = 1, i.e. the average value is taken over the pooling region, which is also called average pooling.
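A small NumPy sketch of the L_p pooling expression above; with p = 1 it reduces to averaging over the pooling region, matching the average pooling the patent adopts. Window size and stride here are illustrative.

```python
import numpy as np

def lp_pool2d(a: np.ndarray, f: int = 2, s0: int = 2, p: float = 1.0) -> np.ndarray:
    """L_p pooling over f x f regions with stride s0; p = 1 gives average pooling."""
    H, W = a.shape
    out = np.empty(((H - f) // s0 + 1, (W - f) // s0 + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            region = a[s0 * i:s0 * i + f, s0 * j:s0 * j + f]
            out[i, j] = np.mean(np.abs(region) ** p) ** (1.0 / p)
    return out

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(lp_pool2d(fmap))   # average pooling of each 2x2 block
```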
The pooling layer is followed by a fully connected layer, which is equivalent to the hidden layer of a conventional feedforward neural network. The fully connected layer is located in the last part of the hidden layers of the convolutional neural network and only passes signals to other fully connected layers. The feature map loses its spatial topology in the fully connected layers: it is expanded into a vector and passed through the excitation function. Both the CNN model for binocular camera image recognition and the LSTM model for motion sensor recognition are obtained by pre-training, where the pre-training data are collected from users typing in mid-air with the standard typing method. Each model outputs a probability matrix: let the output of the CNN model be the probability matrix x and the output of the LSTM model be the probability matrix y. The corresponding elements of the two matrices are weighted as follows to obtain the final prediction matrix z of the neural network:

z = a·x + b·y

where a and b are the weight coefficients of the two models.
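The weighted combination is a one-line operation; in this sketch the weights a and b are placeholder values, as the patent does not state how they are chosen.

```python
import numpy as np

def fuse(x: np.ndarray, y: np.ndarray, a: float = 0.5, b: float = 0.5) -> np.ndarray:
    """Element-wise weighted sum of the CNN (x) and LSTM (y) probability matrices."""
    return a * x + b * y

x = np.array([0.1, 0.7, 0.2])    # CNN output probabilities (illustrative)
y = np.array([0.2, 0.5, 0.3])    # LSTM output probabilities (illustrative)
z = fuse(x, y)                   # final prediction matrix of the network
```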
The third part is the fuzzy intention reasoning part. Its principle is mainly to exploit prior knowledge and the regularity of Chinese character pinyin: in pinyin, the letters that may immediately follow a given letter are limited, and the probability of all other letters occurring is 0. Accordingly, when a key is input, an error correction matrix can be derived from the previous letter, in which every letter other than the possible ones is assigned probability 0.
Step 107, inputting the key prediction result into a pre-stored mode conversion module, and converting the key prediction result into characters which can be identified by the augmented reality glasses;
In this embodiment of the present application, for example as shown in fig. 6, two matrices are first obtained based on step 106: the neural network probability matrix and the fuzzy intention reasoning error correction matrix. The error correction matrix and the neural network probability matrix are then multiplied element-wise (dot multiplication) to obtain the final probability matrix, and the value at the index of the maximum of this matrix is the final output.
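A hedged sketch of this final step: the pinyin mask below is a toy example of an error correction matrix (in the real system the permitted letters depend on the previous letter), and the dot multiplication and argmax follow the description above.

```python
import numpy as np

ALPHABET = list("abcdefghijklmnopqrstuvwxyz")

def correct_and_pick(z: np.ndarray, mask: np.ndarray) -> str:
    """Dot-multiply the fused network output with the pinyin error correction
    mask, then take the argmax as the final output character."""
    final = z * mask                       # element-wise (dot) multiplication
    return ALPHABET[int(np.argmax(final))]

z = np.random.default_rng(1).random(26)    # fused network probabilities (toy)
mask = np.zeros(26)
for ch in "aeiou":                         # toy prior: only vowels may follow here
    mask[ALPHABET.index(ch)] = 1.0
print(correct_and_pick(z, mask))
```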
For example, as shown in fig. 7, the output value is finally converted by the mode conversion module into a character recognizable by the augmented reality glasses, and the character is displayed in the first-person view of the augmented reality glasses worn by the user.
And step 108, displaying the characters which can be recognized by the augmented reality glasses.
In this embodiment of the present application, fig. 8 shows the recognition flow of the virtual keyboard system. First, the IMU data gloves acquire IMU signals, which are then noise-reduction filtered and subjected to MAV feature extraction. The binocular camera on the augmented reality glasses acquires image signals, which are likewise noise-reduction filtered. The system then judges which finger-motion category the action belongs to (index finger, middle finger, ring finger or little finger); the judged finger-motion type is analysed and processed with the LSTM and CNN neural networks, the probability matrices of the models are generated after the analysis, and finally the matrices are dot-multiplied, the maximum-probability character is output, and it is displayed in the augmented reality glasses.
In the embodiment of the application, the inertial measurement unit data glove first acquires the pre-filtered, noise-reduced motion information of the user's finger key movements and transmits it to the augmented reality glasses through the pre-stored Bluetooth transmission module. Upon receiving the information, the augmented reality glasses acquire the pre-processed image information, input the motion information and the pre-processed image information into the pre-stored data processing module for fusion to generate a fused data sample, input the fused data sample into the preset neural network model to generate a key prediction result, then input the key prediction result into the pre-stored mode conversion module to convert it into characters that the augmented reality glasses can recognize, and finally display those characters. Because a virtual keyboard combining the inertial measurement unit data glove with augmented reality (Augmented Reality, AR) technology replaces the traditional mechanical keyboard, the method reduces the space occupied by the keyboard, is convenient to carry, offers high flexibility, and improves the user experience.
Referring to fig. 2, a flow chart of a key recognition method of a virtual keyboard applied to an inertial measurement unit data glove is provided in an embodiment of the present application. As shown in fig. 2, the method according to the embodiment of the present application may include the following steps:
s201, when a user finger key movement instruction input by the glove aiming at the inertial measurement unit data is received, acquiring movement information of the user finger key movement;
In one possible implementation, the user first wears the IMU data glove and then performs simulated typing according to his or her typing intent; during typing, the IMU data glove moves with the displacement of the fingers, movement data are generated during the motion, and the finger movement data are then recorded and saved.
S202, inputting the motion information of the user finger key movement into a pre-stored filtering noise reduction module to generate motion information of the user finger key movement after filtering noise reduction, and taking the motion information of the user finger key movement after filtering noise reduction as the motion information of the user finger key movement after pre-filtering noise reduction;
In the embodiment of the present application, the finger movement data obtained in step S201 are stored. When the finger movement data are obtained, the filtering noise reduction module stored in the IMU data glove is acquired first, the finger movement data are then input into the filtering noise reduction module for processing, and the filtered, noise-reduced motion information of the user's finger key movements is generated after processing.
S203, obtaining motion information of the user finger key movement after pre-filtering and noise reduction;
See step 101 for details, which are not repeated here.
S204, the motion information of the user finger key movement after the pre-filtering and noise reduction is sent to the augmented reality glasses through a pre-stored Bluetooth sending module.
See step 102 for details, which are not repeated here.
In the embodiment of the application, the inertial measurement unit data glove first acquires the pre-filtered, noise-reduced motion information of the user's finger key movements and transmits it to the augmented reality glasses through the pre-stored Bluetooth transmission module. Upon receiving the information, the augmented reality glasses acquire the pre-processed image information, input the motion information and the pre-processed image information into the pre-stored data processing module for fusion to generate a fused data sample, input the fused data sample into the preset neural network model to generate a key prediction result, then input the key prediction result into the pre-stored mode conversion module to convert it into characters that the augmented reality glasses can recognize, and finally display those characters. Because a virtual keyboard combining the inertial measurement unit data glove with augmented reality (Augmented Reality, AR) technology replaces the traditional mechanical keyboard, the method reduces the space occupied by the keyboard, is convenient to carry, offers high flexibility, and improves the user experience.
Referring to fig. 3, a flow chart of a key recognition method of a virtual keyboard applied to augmented reality glasses is provided for an embodiment of the present application. As shown in fig. 3, the method according to the embodiment of the present application may include the following steps:
s301, acquiring an image information set of a user finger key;
In the embodiment of the application, when the binocular camera on the augmented reality glasses detects that the IMU data glove is moving, the augmented reality glasses perform multi-frame acquisition of the image information of the IMU data glove's key movements through an internal program.
S302, inputting the image information set into a pre-stored filtering noise reduction module to generate processed image information, and taking the processed image information as the pre-processed image information;
In the embodiment of the present application, based on the image information set of the user's finger keys acquired in step S301, the image information of the key movements is input into the pre-stored filtering noise reduction module, and the pre-processed image information is generated after the images are processed.
S303, receiving motion information of the user finger key movement sent by the inertial measurement unit data glove;
See step 103 for details, which are not repeated here.
S304, acquiring pre-processed image information;
See step 104 for details, which are not repeated here.
S305, inputting the motion information of the finger key movement of the user and the pre-processed image information into a pre-stored data processing module for fusion, and generating a fused data sample;
See step 105 for details, which are not repeated here.
S306, inputting the fused data sample into a preset neural network model to generate a key prediction result;
See step 106 for details, which are not repeated here.
S307, inputting the key prediction result into a pre-stored mode conversion module, and converting the key prediction result into characters which can be identified by the augmented reality glasses;
See step 107 for details, which are not repeated here.
And S308, displaying the characters which can be recognized by the augmented reality glasses.
See step 108 for details, which are not described in detail herein.
In the embodiment of the application, the inertial measurement unit data glove first acquires the pre-filtered, noise-reduced motion information of the user's finger key movements and transmits it to the augmented reality glasses through the pre-stored Bluetooth transmission module. Upon receiving the information, the augmented reality glasses acquire the pre-processed image information, input the motion information and the pre-processed image information into the pre-stored data processing module for fusion to generate a fused data sample, input the fused data sample into the preset neural network model to generate a key prediction result, then input the key prediction result into the pre-stored mode conversion module to convert it into characters that the augmented reality glasses can recognize, and finally display those characters. Because a virtual keyboard combining the inertial measurement unit data glove with augmented reality (Augmented Reality, AR) technology replaces the traditional mechanical keyboard, the method reduces the space occupied by the keyboard, is convenient to carry, offers high flexibility, and improves the user experience.
The following is an embodiment of the system of the present application.
Referring to fig. 9, a schematic diagram of the module framework of a virtual keyboard system according to an exemplary embodiment of the present application is shown. The hardware framework of the virtual keyboard system mainly consists of two parts: the data gloves worn on both hands, and the augmented reality glasses worn on the head. The two parts are described in turn below.
The IMU data glove part is worn on both hands and mainly comprises an IMU motion sensor module, a Bluetooth sending module, a filtering noise reduction module and a wireless charging module. The six-axis IMU motion sensor records the motion information of the two hands during key-press actions, including three-axis acceleration and three-axis angular velocity information (x-, y- and z-axis); sensors are located at the five fingers and the back of the hand, and each finger sensor is connected to the back-of-hand sensor. The filtering noise reduction module preprocesses the acquired IMU motion data to ensure its validity; the Bluetooth sending module sends the glove's IMU motion information to the augmented reality glasses for processing; and the wireless charging module charges the data glove to extend its battery endurance, which also improves the convenience of the keyboard system.
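For illustration only, the following Python sketch shows one plausible shape for the glove's raw samples and its filtering noise reduction module; the embodiment does not fix a particular filter, so the exponential moving average, the smoothing factor and the sensor-site ordering below are assumptions.

```python
import numpy as np

SENSOR_SITES = ["thumb", "index", "middle", "ring", "little", "back_of_hand"]

def low_pass(samples, alpha=0.2):
    """Exponential moving average over raw six-axis IMU samples shaped
    (N, 6 sites, 6 axes: ax, ay, az, gx, gy, gz); one plausible
    realization of the glove's filtering noise reduction module."""
    samples = np.asarray(samples, dtype=np.float32)
    smoothed = np.empty_like(samples)
    smoothed[0] = samples[0]
    for t in range(1, len(samples)):
        # Blend the new sample with the running estimate
        smoothed[t] = alpha * samples[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed
```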
The augmented reality glasses part is worn on the head and mainly comprises a binocular camera module, a mode conversion module, a filtering noise reduction module, a data processing module, a Bluetooth receiving module, a vibration feedback module and a display module. The binocular camera module is located at the obliquely lower part of the augmented reality glasses; it captures images of the data gloves and can distinguish different key actions from the motion trajectories of the lines connecting the six IMU sensors on each glove, using a 50 fps binocular camera to record multiple frames of image information of the two hands' key presses. The filtering noise reduction module preprocesses the recorded image information. The data processing module effectively fuses the collected two-hand IMU information with the image information from the binocular camera and loads the neural network model to obtain a preliminary key prediction result. The Bluetooth receiving module receives the IMU signal information transmitted by the data gloves. The vibration feedback module vibrates slightly after a key is successfully recognized, providing feedback to the user and enhancing human-machine interaction. The mode conversion module performs fuzzy intention reasoning on the preliminary prediction result from the data processing module to achieve input error correction, and converts the final prediction result into characters which can be recognized by the augmented reality glasses. The display module displays the final result recognized by the mode conversion module in the augmented reality glasses.
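Putting the modules together, the glasses-side processing can be pictured as the following sketch, which composes the earlier fragments; `extract_features` is a hypothetical helper for turning denoised frames into an image feature vector, and none of these names reflect the embodiment's actual software interfaces.

```python
import torch

def recognize_key(imu_window, frames, model, prefixes, current_word=""):
    # End-to-end sketch of the glasses-side pipeline, composed from the
    # earlier sketches; extract_features is a hypothetical helper that
    # turns denoised frames into a feature vector.
    clean = denoise_frames(frames)
    fused = fuse(low_pass(imu_window), extract_features(clean))
    logits = model(torch.from_numpy(fused))
    probs = torch.softmax(logits, dim=-1).tolist()
    return to_character(probs, current_word, prefixes)
```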
In the embodiment of the application, the inertial measurement unit data glove first acquires the pre-filtered, noise-reduced motion information of the user's finger key movement and transmits it to the augmented reality glasses through the pre-stored Bluetooth transmission module. When the augmented reality glasses receive this information, they acquire the pre-processed image information, input the motion information and the pre-processed image information into the pre-stored data processing module for fusion to generate a fused data sample, input the fused data sample into the preset neural network model to generate a key prediction result, then input the key prediction result into the pre-stored mode conversion module to convert it into characters which can be recognized by the augmented reality glasses, and finally display those characters. Because a virtual keyboard combining the inertial measurement unit data glove with augmented reality (Augmented Reality, AR) technology replaces the traditional mechanical keyboard, the method reduces the space occupied by the keyboard, is convenient to carry, highly flexible, and offers a better user experience.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Fig. 10 is a schematic structural diagram of a key recognition device of a virtual keyboard according to an exemplary embodiment of the present application. The key recognition device of the virtual keyboard may be implemented as all or part of a terminal through software, hardware or a combination of the two. The device 1 comprises a first information acquisition module 10 and an information sending module 20.
The first information acquisition module 10 is used for acquiring the motion information of the user finger key movement after the pre-filtering noise reduction;
the information sending module 20 is configured to send the motion information of the user finger key movement after the pre-filtering and noise reduction to the augmented reality glasses through the pre-stored bluetooth sending module.
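As a rough sketch of what the information sending module's payload could look like, the fragment below serializes one filtered sample over an RFCOMM Bluetooth socket; the packet format, device address and channel are assumptions, and AF_BLUETOOTH requires a Linux build of Python.

```python
import socket
import struct

def send_motion(sock, filtered_sample):
    """Serialize one filtered 6x6 IMU sample (36 floats, little-endian)
    and send it over an already-connected RFCOMM Bluetooth socket."""
    flat = [axis for site in filtered_sample for axis in site]
    sock.sendall(struct.pack("<36f", *flat))

# Hypothetical pairing with the glasses (Linux builds of Python only):
# sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
#                      socket.BTPROTO_RFCOMM)
# sock.connect(("AA:BB:CC:DD:EE:FF", 1))  # placeholder address and channel
```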
Optionally, as shown in fig. 11, the apparatus 1 further includes:
the second information obtaining module 30 is configured to obtain movement information of a user finger key movement when receiving a user finger key movement instruction input for the inertial measurement unit data glove;
the first information generating module 40 is configured to input the motion information of the user finger key movement into a pre-stored filtering noise reduction module to generate motion information of the user finger key movement after filtering noise reduction, and take the motion information of the user finger key movement after filtering noise reduction as the motion information of the user finger key movement after pre-filtering noise reduction.
Referring to fig. 11, a schematic structural diagram of a key recognition device of a virtual keyboard according to an exemplary embodiment of the present application is shown. The key recognition device of the virtual keyboard may be implemented as all or part of a terminal through software, hardware or a combination of the two. The apparatus 2 comprises an information receiving module 10, a third information acquisition module 20, a sample generation module 30, a result generation module 40, a character generation module 50 and a character display module 60.
The information receiving module 10 is used for receiving the motion information of the user finger key movement sent by the inertial measurement unit data glove;
the third information acquisition module 20 is used for acquiring the pre-processed image information;
the sample generation module 30 is configured to input the motion information of the user finger key movement and the pre-processed image information into a pre-stored data processing module for fusion, and generate a fused data sample;
the result generation module 40 is configured to input the fused data sample into a preset neural network model to generate a key prediction result;
the character generating module 50 is configured to input the key prediction result into a pre-stored mode conversion module, and convert the key prediction result into characters that can be identified by the augmented reality glasses;
And the character display module 60 is used for displaying characters which can be recognized by the augmented reality glasses.
Optionally, as shown in fig. 12, the apparatus 2 further includes:
the set acquisition module 60 is configured to acquire an image information set of the finger keys of the user;
the second information generating module 70 is configured to input the image information set into a pre-stored filtering noise reduction module to generate processed image information, and to take the processed image information as the pre-processed image information.
It should be noted that the key recognition device of the virtual keyboard provided in the foregoing embodiment is illustrated only by the division of the above functional modules; in practical applications, these functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the key recognition device of the virtual keyboard provided in the above embodiment belongs to the same concept as the key recognition method embodiments of the virtual keyboard; its detailed implementation process is described in the method embodiments and is not repeated here.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
In the embodiment of the application, the inertial measurement unit data glove first acquires the pre-filtered, noise-reduced motion information of the user's finger key movement and transmits it to the augmented reality glasses through the pre-stored Bluetooth transmission module. When the augmented reality glasses receive this information, they acquire the pre-processed image information, input the motion information and the pre-processed image information into the pre-stored data processing module for fusion to generate a fused data sample, input the fused data sample into the preset neural network model to generate a key prediction result, then input the key prediction result into the pre-stored mode conversion module to convert it into characters which can be recognized by the augmented reality glasses, and finally display those characters. Because a virtual keyboard combining the inertial measurement unit data glove with augmented reality (Augmented Reality, AR) technology replaces the traditional mechanical keyboard, the method reduces the space occupied by the keyboard, is convenient to carry, highly flexible, and offers a better user experience.
The application also provides a computer readable medium, on which program instructions are stored, which when executed by a processor implement the key recognition method of the virtual keyboard provided by the above method embodiments.
The application also provides a computer program product containing instructions, which when run on a computer, cause the computer to execute the key recognition method of the virtual keyboard in the above method embodiments.
Those of skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementation is not to be considered as outside the scope of this application. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments disclosed herein, it should be understood that the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form. The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
It should be appreciated that the flow charts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. The present application is not limited to the flow and structure that has been described above and shown in the drawings, and various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (6)

1. A key recognition method of a virtual keyboard, applied to an inertial measurement unit data glove and augmented reality glasses, characterized in that the method comprises:
the inertial measurement unit data glove acquires the motion information of the user finger key movement after pre-filtering and noise reduction; wherein,
before the motion information of the user finger key movement after the pre-filtering noise reduction is obtained, the method further comprises the following steps:
when a user finger key movement instruction input by aiming at the inertial measurement unit data glove is received, acquiring movement information of the user finger key movement;
inputting the motion information of the user finger key movement into a pre-stored filtering noise reduction module to generate motion information of the user finger key movement after filtering noise reduction, and taking the motion information of the user finger key movement after filtering noise reduction as the motion information of the user finger key movement after pre-filtering noise reduction;
the inertial measurement unit data glove transmits the motion information of the user finger key movement after the pre-filtering noise reduction to the augmented reality glasses through a pre-stored Bluetooth transmission module;
the augmented reality glasses receive motion information of the user finger key movement sent by the inertial measurement unit data glove;
the augmented reality glasses acquire the pre-processed image information;
the augmented reality glasses input the motion information of the finger key movement of the user and the pre-processed image information into a pre-stored data processing module for fusion, and a fused data sample is generated;
the augmented reality glasses input the fused data samples into a preset neural network model to generate a key prediction result;
the augmented reality glasses input the key prediction result into a pre-stored mode conversion module to convert the key prediction result into characters which can be identified by the augmented reality glasses;
the augmented reality glasses display characters which can be recognized by the augmented reality glasses; wherein,
before the pre-processed image information is acquired, the method further comprises the following steps:
acquiring an image information set of a user finger key;
and inputting the image information set into a pre-stored filtering noise reduction module to generate processed image information, and taking the processed image information as the pre-processed image information.
2. A key recognition system of a virtual keyboard, comprising: inertial measurement unit data glove and augmented reality glasses;
The inertial measurement unit data glove is used for acquiring movement information of movement of a user finger key, inputting the movement information of movement of the user finger key into a pre-stored filtering noise reduction module to generate movement information of movement of the user finger key after filtering noise reduction, and then sending the movement information of movement of the user finger key after filtering noise reduction to the augmented reality glasses through the pre-stored Bluetooth sending module;
the augmented reality glasses are used for receiving movement information of movement of a user finger key sent by the data glove of the inertia measurement unit, acquiring pre-processed image information, inputting the movement information of movement of the user finger key and the pre-processed image information into a pre-stored data processing module for fusion, generating a fused data sample, inputting the fused data sample into a preset neural network model for generating a key prediction result, inputting the key prediction result into a pre-stored mode conversion module for converting the key prediction result into characters which can be recognized by the augmented reality glasses, and finally displaying the characters which can be recognized by the augmented reality glasses; wherein,
the augmented reality glasses are further used for acquiring an image information set of the user finger keys, and inputting the image information set into a pre-stored filtering noise reduction module to generate the processed image information.
3. The system of claim 2, wherein the inertial measurement unit data glove comprises an inertial measurement unit motion sensor module, a bluetooth transmission module, a filtering noise reduction module, and a wireless charging module;
the inertial measurement unit motion sensor module is a six-axis inertial measurement unit motion sensor used for recording motion information of the two hands during key-press actions, including three-axis acceleration and three-axis angular velocity information; the sensors are located at the five fingers and the back of the hand, and the sensors at the fingers are respectively connected with the sensor at the back of the hand;
the filtering noise reduction module is mainly used for preprocessing the collected motion of the inertial measurement unit and guaranteeing the effectiveness of data;
the Bluetooth transmitting module is mainly used for transmitting the motion information of the inertial measurement unit of the data glove to the augmented reality glasses for processing;
the wireless charging module is mainly used for charging the data glove to extend its battery endurance, and the wireless charging also improves the convenience of the keyboard system.
4. The system of claim 2, wherein the augmented reality glasses comprise a binocular camera module, a mode conversion module, a filtering noise reduction module, a data processing module, a Bluetooth receiving module, a vibration feedback module, and a display module, wherein,
the binocular camera module is positioned at the obliquely lower part of the augmented reality glasses and is used for acquiring images of the data glove;
the filtering noise reduction module is used for processing and preprocessing the recorded image information;
the data processing module is used for effectively fusing the collected information of the two-hand inertial measurement unit and the image information collected by the binocular camera, and loading a neural network model to obtain a preliminary key prediction result;
the Bluetooth receiving module is used for receiving the inertial measurement unit signal information transmitted by the data glove;
the vibration feedback module is used for slightly vibrating after the key is successfully identified, feeding back to a user and enhancing man-machine interaction;
the mode conversion module is used for carrying out fuzzy intention reasoning on the preliminary prediction result obtained by the data processing module so as to achieve the function of input error correction, and converting the final prediction result into characters which can be recognized by the augmented reality glasses;
The display module is used for displaying the final result identified by the mode conversion module in the augmented reality glasses.
5. A key recognition apparatus of a virtual keyboard, comprising:
one or more processors, a storage device storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method of claim 1.
6. A computer readable storage medium, on which a computer program is stored which, when being executed by a processor, implements the method according to claim 1.
CN201911357466.6A 2019-12-25 2019-12-25 Key recognition method, system, equipment and storage medium of virtual keyboard Active CN111158476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911357466.6A CN111158476B (en) 2019-12-25 2019-12-25 Key recognition method, system, equipment and storage medium of virtual keyboard

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911357466.6A CN111158476B (en) 2019-12-25 2019-12-25 Key recognition method, system, equipment and storage medium of virtual keyboard

Publications (2)

Publication Number Publication Date
CN111158476A CN111158476A (en) 2020-05-15
CN111158476B true CN111158476B (en) 2023-05-23

Family

ID=70556712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911357466.6A Active CN111158476B (en) 2019-12-25 2019-12-25 Key recognition method, system, equipment and storage medium of virtual keyboard

Country Status (1)

Country Link
CN (1) CN111158476B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930225B (en) * 2020-06-28 2022-12-02 北京理工大学 Virtual-real converged keyboard system and method for mobile devices
CN113821139A (en) * 2021-09-24 2021-12-21 维沃移动通信有限公司 Information display method, information display device, glasses and medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140022165A1 (en) * 2011-04-11 2014-01-23 Igor Melamed Touchless text and graphic interface
US8928589B2 (en) * 2011-04-20 2015-01-06 Qualcomm Incorporated Virtual keyboards and methods of providing the same
CN102880304A (en) * 2012-09-06 2013-01-16 天津大学 Character inputting method and device for portable device
CN103019377A (en) * 2012-12-04 2013-04-03 天津大学 Head-mounted visual display equipment-based input method and device
EP2977855B1 (en) * 2014-07-23 2019-08-28 Wincor Nixdorf International GmbH Virtual keyboard and input method for a virtual keyboard
CN104317403B (en) * 2014-10-27 2017-10-27 黄哲军 A kind of wearable device for Sign Language Recognition
CN107209582A (en) * 2014-12-16 2017-09-26 肖泉 The method and apparatus of high intuitive man-machine interface
WO2016189372A2 (en) * 2015-04-25 2016-12-01 Quan Xiao Methods and apparatus for human centric "hyper ui for devices"architecture that could serve as an integration point with multiple target/endpoints (devices) and related methods/system with dynamic context aware gesture input towards a "modular" universal controller platform and input device virtualization
CN106484119A (en) * 2016-10-24 2017-03-08 网易(杭州)网络有限公司 Virtual reality system and virtual reality system input method
CN106598233A (en) * 2016-11-25 2017-04-26 北京暴风魔镜科技有限公司 Input method and input system based on gesture recognition
CN106648093A (en) * 2016-12-19 2017-05-10 珠海市魅族科技有限公司 Input method and device of virtual reality device
RU176318U1 (en) * 2017-06-07 2018-01-16 Федоров Александр Владимирович VIRTUAL REALITY GLOVE
CN108519855A (en) * 2018-04-17 2018-09-11 北京小米移动软件有限公司 Characters input method and device
CN110442233B (en) * 2019-06-18 2020-12-04 中国人民解放军军事科学院国防科技创新研究院 Augmented reality keyboard and mouse system based on gesture interaction
CN110443113A (en) * 2019-06-18 2019-11-12 中国人民解放军军事科学院国防科技创新研究院 A kind of virtual reality Writing method, system and storage medium
CN111722713A (en) * 2020-06-12 2020-09-29 天津大学 Multi-mode fused gesture keyboard input method, device, system and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007286987A (en) * 2006-04-18 2007-11-01 Ricoh Co Ltd Image forming apparatus, program for the same, and recording medium
CN102063183A (en) * 2011-02-12 2011-05-18 深圳市亿思达显示科技有限公司 Virtual input device of grove type
CN106575159A (en) * 2014-08-22 2017-04-19 索尼互动娱乐股份有限公司 Glove interface object
US20170123487A1 (en) * 2015-10-30 2017-05-04 Ostendo Technologies, Inc. System and methods for on-body gestural interfaces and projection displays
WO2018098861A1 (en) * 2016-11-29 2018-06-07 歌尔科技有限公司 Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus
CN108874119A (en) * 2017-05-16 2018-11-23 芬奇科技有限公司 The mobile input to generate computer system of tracking arm
CN107357434A (en) * 2017-07-19 2017-11-17 广州大西洲科技有限公司 Information input equipment, system and method under a kind of reality environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Xiaowei, "Research and Implementation of Virtual Gesture Interaction Technology Based on Binocular Stereo Vision", China Master's Theses Full-text Database, Information Science and Technology, No. 10, pp. I138-790. *

Also Published As

Publication number Publication date
CN111158476A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN109196526A (en) For generating the method and system of multi-modal digital picture
CN106997236A (en) Based on the multi-modal method and apparatus for inputting and interacting
CN107150347A (en) Robot perception and understanding method based on man-machine collaboration
EP3398035B1 (en) Detection of hand gestures using gesture language discrete values
CN110008839B (en) Intelligent sign language interaction system and method for self-adaptive gesture recognition
CN111158476B (en) Key recognition method, system, equipment and storage medium of virtual keyboard
CN105787478A (en) Face direction change recognition method based on neural network and sensitivity parameter
CN111722713A (en) Multi-mode fused gesture keyboard input method, device, system and storage medium
CN112906604A (en) Behavior identification method, device and system based on skeleton and RGB frame fusion
CN111966217A (en) Unmanned aerial vehicle control method and system based on gestures and eye movements
CN107357434A (en) Information input equipment, system and method under a kind of reality environment
CN110928432A (en) Ring mouse, mouse control device and mouse control system
US11809616B1 (en) Twin pose detection method and system based on interactive indirect inference
CN111552383A (en) Finger identification method and system of virtual augmented reality interaction equipment and interaction equipment
Zhao et al. Comparing head gesture, hand gesture and gamepad interfaces for answering Yes/No questions in virtual environments
Krishnaraj et al. A Glove based approach to recognize Indian Sign Languages
CN115686193A (en) Virtual model three-dimensional gesture control method and system in augmented reality environment
Nishino et al. Interactive two-handed gesture interface in 3D virtual environments
Stassen et al. Telemanipulation and telepresence
CN113268143B (en) Multimodal man-machine interaction method based on reinforcement learning
CN113408443B (en) Gesture posture prediction method and system based on multi-view images
CN110413106B (en) Augmented reality input method and system based on voice and gestures
CN116449947A (en) Automobile cabin domain gesture recognition system and method based on TOF camera
CN106512391A (en) Two-hand gesture recognition method, and simulation driving system and method based on two-hand gesture recognition method
CN113887373B (en) Attitude identification method and system based on urban intelligent sports parallel fusion network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant