CN111930225B - Virtual-real converged keyboard system and method for mobile devices - Google Patents

Virtual-real converged keyboard system and method for mobile devices

Info

Publication number
CN111930225B
CN111930225B (application CN202010598565.XA)
Authority
CN
China
Prior art keywords
keyboard
image
virtual
real
mobile equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010598565.XA
Other languages
Chinese (zh)
Other versions
CN111930225A (en)
Inventor
翁冬冬
胡明伟
胡翔
江海燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang New Century Conference And Exhibition Center Co ltd
Nanchang Virtual Reality Detection Technology Co ltd
Beijing Institute of Technology BIT
Original Assignee
Nanchang New Century Conference And Exhibition Center Co ltd
Nanchang Virtual Reality Detection Technology Co ltd
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang New Century Conference And Exhibition Center Co ltd, Nanchang Virtual Reality Detection Technology Co ltd, Beijing Institute of Technology BIT filed Critical Nanchang New Century Conference And Exhibition Center Co ltd
Priority to CN202010598565.XA priority Critical patent/CN111930225B/en
Publication of CN111930225A publication Critical patent/CN111930225A/en
Application granted granted Critical
Publication of CN111930225B publication Critical patent/CN111930225B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0231 - Cordless keyboards
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a virtual-real fusion keyboard system and method for mobile devices, belonging to the technical field of virtual reality. The scheme involves a small amount of data processing and therefore suits the processing speed of a mobile device, and because the mobile device and the virtual reality display device are arranged separately, user comfort is not affected. The technical scheme of the invention is as follows: a virtual-real fusion keyboard system for a mobile device comprises a physical keyboard, the mobile device, and a fusion display module. The mobile device is provided with a video capture module and an image conversion module. The mobile device is fixed at a set position above the physical keyboard such that the physical keyboard lies within the field of view of the video capture module. The image conversion module converts the real-time scene image of the physical keyboard captured by the video capture module into a virtual keyboard image from the viewpoint of the user's eyes. The fusion display module performs virtual-real fusion display of the keyboard virtual image and the virtual environment.

Description

Virtual-real converged keyboard system and method for mobile devices
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a virtual-real fusion keyboard system and method for mobile devices.
Background
Many virtual reality text-input schemes have been proposed, such as text input through gesture recognition, or character selection in the virtual environment by pointing a handheld controller's ray at keys. However, the input efficiency of these schemes is low, far below that of typing directly on a keyboard.
There are currently two ways of building a virtual-real fusion keyboard system. In the first, a tracker follows the keyboard to obtain its position while a data glove captures the pose of the user's hand; the keyboard and hand information are transmitted to a computer, which drives a corresponding virtual keyboard and virtual hand model so that the user can operate the virtual keyboard through the virtual hand. In this approach the keyboard position and hand pose must be processed in real time; the real-time hand-pose data in particular is voluminous, placing computational demands on the computer that ordinary mobile devices cannot meet. In the second, a camera is attached to the VR glasses, and when the user looks down at the keyboard the captured keyboard picture is projected directly into the virtual environment, forming a keyboard input system. However, attaching a camera to the helmet increases its weight, which reduces user comfort.
The patent application CN201810330801.2 provides a virtual-real fusion keyboard system for virtual reality: a method and system for fusing a keyboard into the virtual scene based on the position tracker of the virtual reality equipment. The system tracks the position of a real keyboard with a position tracker, acquires the position and gesture state of the hand through a hand-shape acquisition device, and transmits this information to a computer. The computer renders the information in real time and generates corresponding virtual keyboard and hand models in the virtual environment.
Such real-time tracking and rendering places high demands on computing speed; at present, ordinary mobile devices (such as mobile phones and tablet computers) cannot reach that speed, so the approach cannot be applied to mobile virtual reality experiences.
The article: hawKEY, eicient and Versatile Text Entry for visual Reality provides a solution: the structure that hangs a tray on user's chest puts the keyboard on the tray in front of the chest, pastes a degree of depth camera simultaneously on the helmet, carries out video shooting to the keyboard when user's head-lowering to in the projection virtual environment, help user to use the entity keyboard in virtual environment.
Because this method uses the depth camera to acquire depth and video information simultaneously, the computation load is large and the demands on device computing power are high, which ordinary mobile devices can hardly meet. Moreover, existing mobile devices do not provide an interface for a depth camera and rarely carry one; even for the few phones equipped with depth cameras, their precision falls short of a professional depth camera. Finally, the camera glued to the helmet adds head load, and the tray strap hung around the neck adds neck load, seriously affecting user comfort.
Therefore, there is a need for a keyboard system that is adaptable to the processing speed of a mobile device without affecting user comfort.
Disclosure of Invention
In view of this, the present invention provides a virtual-real fusion keyboard system and method for a mobile device, which has a small data-processing load suited to the processing speed of the mobile device; because the mobile device and the virtual reality display device are arranged separately, user comfort is not affected.
In order to achieve this purpose, the technical scheme of the invention is as follows: a virtual-real fusion keyboard system for a mobile device comprises a physical keyboard, the mobile device, and a fusion display module.
The mobile device is provided with a video capture module and an image conversion module. The mobile device is fixed at a set position above the physical keyboard such that the physical keyboard lies within the field of view of the video capture module. The image conversion module converts the real-time scene image of the physical keyboard captured by the video capture module into a virtual keyboard image from the viewpoint of the user's eyes.
The fusion display module performs virtual-real fusion display of the keyboard virtual image and the virtual environment.
Furthermore, the mobile device is fixed at a set position above the physical keyboard through a connecting frame. The connecting frame comprises a fixedly connected clamping portion and supporting portion: the clamping portion clamps the edge of the physical keyboard, the supporting portion is fixed perpendicular to the clamping portion, and a device slot in the supporting portion holds the mobile device.
Further, the relative position of the mobile device and the physical keyboard remains unchanged.
Further, the real-time scene image of the physical keyboard captured by the video capture module contains image information of the physical keyboard together with hand image information of the user operating the keyboard.
Further, the image conversion module is specifically a trained image-conversion neural network model, trained as follows: construct a neural network model; acquire keyboard scene images from the mobile device's viewpoint through the video capture module as training sample images; acquire keyboard scene images from the user's eye viewpoint as result images; train the neural network model with the training sample images as input and the result images as output. The trained neural network model is the image-conversion neural network model.
Preferably, the fusion display module is a head-mounted virtual reality device.
Preferably, the fusion display module specifically operates as follows:
using a virtual-real fusion calibration method, the keyboard virtual image is mapped directly onto a virtual keyboard as a dynamic texture in the virtual environment.
The invention also provides a virtual-real fusion keyboard method for a mobile device, comprising the following steps:
acquire a real-time scene image of the physical keyboard from the position of the mobile device as the viewpoint;
convert the real-time scene image of the physical keyboard into a keyboard virtual image with the user's eyes as the viewpoint;
perform virtual-real fusion display of the keyboard virtual image and the virtual environment.
Further, converting the real-time scene image of the physical keyboard into a keyboard virtual image with the user's eyes as the viewpoint specifically comprises: construct a neural network model; acquire keyboard scene images from the mobile device's viewpoint through the video capture module as training sample images; acquire keyboard scene images from the user's eye viewpoint as result images; train the neural network model with the training sample images as input and the result images as output, the trained model being the image-conversion neural network model; then input the real-time scene image of the physical keyboard into the image-conversion neural network model to obtain the keyboard virtual image with the user's eyes as the viewpoint.
Beneficial effects:
the virtual-real fusion keyboard system and method for the mobile equipment, provided by the invention, have the advantages that the fixed support of the mobile equipment is arranged on the physical keyboard, the mobile equipment is fixed at the front upper part of the Bluetooth keyboard (or can be arranged at the left or the right of the mobile keyboard), the relative position of the mobile equipment and the Bluetooth keyboard is ensured not to change, and a camera of the mobile equipment is utilized to shoot a certain area around the Bluetooth keyboard. According to the invention, after the image directly shot by the mobile equipment is input into the trained neural network for image migration, an image which accords with the observation visual angle of the user is generated and fused to the corresponding position in the virtual environment, so that the virtual-real fusion input system is realized. The virtual-real fusion keyboard system can be quickly and efficiently built, a keyboard is shot by directly utilizing a camera of the mobile equipment, an image directly shot by the camera is input into a trained neural network to output a keyboard image which accords with the visual angle observed by human eyes, the keyboard image is projected into a virtual environment, the virtual-real fusion keyboard system is realized, and a user can quickly, accurately and comfortably input texts by utilizing the system. The method reduces the requirement of the system on computing power, is suitable for a virtual reality environment taking a mobile phone or a tablet as a computing end, ensures the comfort of a user, and enhances the input efficiency of the user.
Drawings
Fig. 1 is a block diagram of a virtual-real converged keyboard system for a mobile device according to an embodiment of the present invention;
fig. 2 is a structural diagram of the physical keyboard, connecting frame, and mobile device in a virtual-real converged keyboard system for a mobile device according to an embodiment of the present invention;
fig. 3 is a flowchart of a virtual-real keyboard fusion method for a mobile device according to an embodiment of the present invention.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention provides a virtual-real converged keyboard system for mobile equipment, which is shown in a block diagram in figure 1 and comprises a physical keyboard, the mobile equipment and a converged display module.
The mobile device is provided with a video capture module and an image conversion module. The mobile device is fixed at a set position above the physical keyboard such that the physical keyboard lies within the field of view of the video capture module. The image conversion module converts the real-time scene image of the physical keyboard captured by the video capture module into a virtual keyboard image from the viewpoint of the user's eyes.
The fusion display module performs virtual-real fusion display of the keyboard virtual image and the virtual environment. In the embodiment of the invention it can be a head-mounted virtual reality device, such as a VR head-mounted display.
In the embodiment of the invention, the mobile device is fixed at a set position above the physical keyboard through a connecting frame. The connecting frame comprises a fixedly connected clamping portion and supporting portion: the clamping portion clamps the edge of the physical keyboard, the supporting portion is fixed perpendicular to the clamping portion, and a device slot in the supporting portion holds the mobile device. The connection of the mobile device, connecting frame, and physical keyboard is shown in fig. 2. The bracket may be plastic or another material; it is fixed on the physical keyboard by the clamping mechanism so that its position relative to the Bluetooth keyboard stays fixed, and the mobile device is fixed on the bracket. For example, a mobile device (such as a mobile phone) is placed in the device slot and secured by a fastening device, while its camera can shoot the full view of the physical keyboard through a camera hole in the slot. The physical keyboard in the embodiment may be a Bluetooth keyboard. The whole physical connection module ensures that the relative position between the physical keyboard and the mobile device is unchanged.
The video capture module in the embodiment of the invention stably shoots the physical keyboard. It relies primarily on the mobile device's own camera: ordinary mobile devices such as phones and tablets have built-in cameras, which serve as the video capture module. The camera shoots the physical keyboard and its surroundings in real time; the captured real-time scene image includes image information of the physical keyboard along with hand image information of the user operating the keyboard.
So as not to intrude on the user's input space, the system places the bracket in front of the keyboard (or at the left, right, or another position from which a complete keyboard picture can be shot). As a result, the keyboard image taken by the mobile device is shot from the front of the keyboard toward the back, so the orientation of the characters in the image does not match the keyboard view the user sees in daily use, which may affect comfort and even reduce input efficiency. The system therefore converts the image shot by the mobile device through the image conversion module, finally generating a keyboard input image that matches the viewing habits of the human eye.
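The viewpoint conversion described above can be illustrated geometrically: if the keyboard is treated as a planar surface, the remapping from the device camera's view to the user's eye view is a planar homography. The sketch below is an illustration only (the patent itself uses a learned image-conversion network, and the corner coordinates are hypothetical); it estimates such a homography from four corner correspondences with the Direct Linear Transform:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 planar homography H mapping src points to dst points.

    Direct Linear Transform: each pair contributes two rows of A, and the
    homography is the null vector of A (last right-singular vector).
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply H to a 2-D point via homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical corner correspondences: the keyboard as seen by the device
# camera (a trapezoid) versus the rectangle the user's eye would see.
cam_corners = [(100, 50), (540, 50), (600, 400), (40, 400)]
eye_corners = [(0, 0), (640, 0), (640, 200), (0, 200)]
H = homography_from_points(cam_corners, eye_corners)
```

A homography only remaps the keyboard plane; it cannot re-render the user's hands from a new viewpoint, which is one reason a learned image-to-image model is used instead.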
In the embodiment of the invention, the image conversion module is mainly an image-conversion neural network (such as SingleGAN). The module converts the keyboard image shot directly by the mobile device into an image that is convenient for the user to observe.
Specifically, the image conversion module is a trained image-conversion neural network model, trained as follows: construct a neural network model; acquire keyboard scene images from the mobile device's viewpoint through the video capture module as training sample images; acquire keyboard scene images from the user's eye viewpoint as result images; and train the neural network model with the training sample images as input and the result images as output. The trained network is the image-conversion neural network model, and it need not be retrained when the user later uses the system. The system feeds the real-time image shot by the mobile device into the trained network, which converts it into a keyboard input image convenient for the user to view.
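The training recipe above can be made concrete with a toy sketch. The model below is a single linear layer trained by gradient descent on synthetic paired frames, where the device-to-eye viewpoint change is simulated as a vertical flip; this is a minimal stand-in for the SingleGAN-style network and photographed image pairs the patent describes, not an implementation of it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired dataset standing in for (device-view, eye-view) photographs.
# The viewpoint change is simulated as a vertical flip of 8x8 frames.
n, side = 200, 8
X = rng.standard_normal((n, side * side))                # device-view inputs
Y = X.reshape(n, side, side)[:, ::-1, :].reshape(n, -1)  # eye-view targets

# A single linear layer is the simplest possible "model"; it suffices to
# represent a fixed pixel permutation such as the flip above.
W = np.zeros((side * side, side * side))
lr = 0.5
for _ in range(300):                      # gradient descent on MSE loss
    residual = X @ W - Y                  # predictions minus targets
    W -= lr * (X.T @ residual) / n

train_loss = float(np.mean((X @ W - Y) ** 2))
```

After training, new device-view frames are pushed through the same mapping at run time, mirroring the "train once, then only infer" usage stated above.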
In the embodiment of the invention, the fusion display module is a head-mounted virtual reality device. It specifically operates as follows: using a virtual-real fusion calibration method, the keyboard virtual image is mapped directly onto a virtual keyboard as a dynamic texture in the virtual environment.
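The dynamic-texture step amounts to compositing the converted keyboard frame into the rendered scene at a calibrated region. A real system would update a GPU texture inside the VR engine each frame; the numpy sketch below (region coordinates and opacity are illustrative assumptions) shows the per-frame blend:

```python
import numpy as np

def blend_keyboard(env_frame, kbd_img, top_left, alpha=0.9):
    """Composite the converted keyboard image into the rendered VR frame.

    env_frame: HxWx3 float array, the rendered virtual environment
    kbd_img:   hxwx3 float array, the user-viewpoint keyboard image
    top_left:  (row, col) of the calibrated keyboard region (hypothetical)
    alpha:     opacity of the keyboard texture over the scene
    """
    out = env_frame.copy()
    r, c = top_left
    h, w = kbd_img.shape[:2]
    # Alpha-blend the keyboard into its calibrated rectangle only.
    out[r:r + h, c:c + w] = alpha * kbd_img + (1 - alpha) * out[r:r + h, c:c + w]
    return out
```

In an engine such as Unity the same effect would be achieved by uploading each converted frame to a texture on the virtual keyboard mesh, which is what "dynamic texture" refers to here.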
Another embodiment of the present invention further provides a virtual-real keyboard fusion method for a mobile device, as shown in fig. 3, including the following steps:
s1, acquiring a real-time scene image of an entity keyboard by taking the position of a mobile device as a visual angle;
s2, converting the real-time scene image of the physical keyboard into a virtual keyboard image taking human eyes of a user as a visual angle;
s2 comprises the following steps:
s201, constructing a neural network model.
S202, acquiring a keyboard scene image of a mobile device view angle through a video capture module on the mobile device to serve as a training sample image.
And S203, acquiring a keyboard scene image of the human eye visual angle of the user as a result image.
And S204, training the neural network model by taking the training image as input and the result image as output, wherein the trained neural network model is the image conversion neural network model.
S205, inputting the real-time scene image of the physical keyboard into an image conversion neural network model to obtain a virtual image of the keyboard with human eyes of a user as a visual angle.
And S3, carrying out virtual-real fusion display on the keyboard virtual image and the virtual environment.
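Steps S1 to S3 can be wired together as a per-frame pipeline. In the sketch below every function is a hypothetical stub: capture would read the device camera, conversion would call the trained network, and display would update the VR scene:

```python
def capture_frame():
    # Stand-in for S1: the video capture module reading the mobile
    # device's camera. Returns a tiny dummy "device-view" frame.
    return [[1, 2], [3, 4]]

def convert_view(frame):
    # Stand-in for S2: the trained image-conversion network. Here the
    # device-to-eye viewpoint change is mimicked by flipping the rows.
    return frame[::-1]

def fuse_and_display(kbd_img):
    # Stand-in for S3: the fusion display module, which would upload the
    # converted image as a dynamic texture on the virtual keyboard.
    return {"scene": "virtual environment", "keyboard_texture": kbd_img}

def run_pipeline():
    # S1 -> S2 -> S3, executed once per captured frame.
    return fuse_and_display(convert_view(capture_frame()))
```

Running `run_pipeline()` once per camera frame keeps the keyboard texture in the virtual scene synchronized with the user's real hands.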
In this technical scheme the physical keyboard and the mobile device are fixed together by the bracket to form a whole, and the camera on the mobile device shoots the keyboard and the user's hands, obtaining the physical keyboard and hand images in real time. The mobile device is connected to the VR head-mounted display and can serve as its computing end. The pre-trained neural network converts the images shot by the device into images matching the user's viewpoint, which are projected into the virtual environment for virtual-real fusion, helping the user input text with the physical keyboard.
In addition to text entry, the user may also perform simple command control via the keyboard.
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A virtual-real fusion keyboard system for a mobile device, characterized by comprising a physical keyboard, the mobile device, and a fusion display module:
the mobile device is provided with a video capturing module and an image conversion module; the mobile equipment is fixed at a set position above the physical keyboard, and the position relation between the mobile equipment and the physical keyboard is as follows: the physical keyboard is within a field of view of the video capture module; the image conversion module is used for converting the real-time scene image of the physical keyboard captured by the video capture module into a virtual keyboard image taking human eyes of a user as a visual angle;
the fusion display module is used for performing virtual-real fusion display on the keyboard virtual image and the virtual environment;
the mobile equipment is fixed at a set position above the physical keyboard through a connecting frame; the connecting frame comprises a clamping part and a supporting part which are fixedly connected, the clamping part is used for clamping the edge of the physical keyboard, the supporting part and the clamping part are vertically fixed, and an equipment groove is formed in the supporting part and used for fixing the mobile equipment;
the relative position of the mobile device and the physical keyboard is unchanged;
the real-time scene image of the physical keyboard captured by the video capturing module comprises image information of the physical keyboard, so that hand image information of a user when the user operates the keyboard is obtained;
the image conversion module specifically comprises: the image conversion module is a trained image conversion neural network model; the training mode of the image transformation neural network model is as follows: constructing a neural network model; acquiring a keyboard scene image of a mobile device visual angle as a training sample image through a video capturing module on the mobile device; acquiring a keyboard scene image of a human eye visual angle of a user as a result image; training a neural network model by taking the training sample image as input and the result image as output, wherein the trained neural network model is an image conversion neural network model; after the network is trained, the user does not need to train again when using the network; the system inputs the real-time image shot by the mobile equipment into the trained neural network, and the real-time image can be converted into a keyboard input image which is convenient for a user to watch.
2. The system of claim 1, wherein the converged display module is a head-mounted virtual reality device.
3. The system according to claim 2, wherein the fusion display module is specifically:
and using a virtual-real fusion calibration method to directly map the keyboard virtual image to a virtual keyboard as a dynamic texture in a virtual environment.
4. A virtual-real fusion keyboard method for a mobile device, characterized by comprising the following steps:
acquiring a real-time scene image of the physical keyboard by taking the position of the mobile equipment as a visual angle;
converting the real-time scene image of the physical keyboard into a virtual keyboard image taking human eyes of a user as a visual angle;
and performing virtual-real fusion display on the keyboard virtual image and the virtual environment.
5. The method of claim 4, wherein the converting the real-time scene image of the physical keyboard into a virtual image of the keyboard with the human eyes of the user as a visual angle comprises:
constructing a neural network model;
acquiring a keyboard scene image of a mobile device visual angle as a training sample image through a video capturing module on the mobile device;
acquiring a keyboard scene image of a human eye visual angle of a user as a result image;
training a neural network model by taking the training sample image as input and the result image as output, wherein the trained neural network model is an image conversion neural network model;
and inputting the real-time scene image of the physical keyboard into the image conversion neural network model to obtain the virtual image of the keyboard with the human eyes of the user as the visual angle.
CN202010598565.XA 2020-06-28 2020-06-28 Virtual-real converged keyboard system and method for mobile devices Active CN111930225B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010598565.XA CN111930225B (en) 2020-06-28 2020-06-28 Virtual-real converged keyboard system and method for mobile devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010598565.XA CN111930225B (en) 2020-06-28 2020-06-28 Virtual-real converged keyboard system and method for mobile devices

Publications (2)

Publication Number Publication Date
CN111930225A CN111930225A (en) 2020-11-13
CN111930225B (en) 2022-12-02

Family

ID=73316714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010598565.XA Active CN111930225B (en) 2020-06-28 2020-06-28 Virtual-real converged keyboard system and method for mobile devices

Country Status (1)

Country Link
CN (1) CN111930225B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019377A (en) * 2012-12-04 2013-04-03 天津大学 Head-mounted visual display equipment-based input method and device
CN108334203A (en) * 2018-04-13 2018-07-27 北京理工大学 A kind of virtual reality fusion keyboard system for virtual reality
CN108875730A (en) * 2017-05-16 2018-11-23 中兴通讯股份有限公司 A kind of deep learning sample collection method, apparatus, equipment and storage medium
CN111158476A (en) * 2019-12-25 2020-05-15 中国人民解放军军事科学院国防科技创新研究院 Key identification method, system, equipment and storage medium of virtual keyboard

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105955456B (en) * 2016-04-15 2018-09-04 深圳超多维科技有限公司 The method, apparatus and intelligent wearable device that virtual reality is merged with augmented reality

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019377A (en) * 2012-12-04 2013-04-03 天津大学 Head-mounted visual display equipment-based input method and device
CN108875730A (en) * 2017-05-16 2018-11-23 中兴通讯股份有限公司 A kind of deep learning sample collection method, apparatus, equipment and storage medium
CN108334203A (en) * 2018-04-13 2018-07-27 北京理工大学 A kind of virtual reality fusion keyboard system for virtual reality
CN111158476A (en) * 2019-12-25 2020-05-15 中国人民解放军军事科学院国防科技创新研究院 Key identification method, system, equipment and storage medium of virtual keyboard

Also Published As

Publication number Publication date
CN111930225A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
CN105487673B (en) A kind of man-machine interactive system, method and device
CN104699247B (en) A kind of virtual reality interactive system and method based on machine vision
CN106873778B (en) Application operation control method and device and virtual reality equipment
CN104159032B (en) A kind of real-time adjustment camera is taken pictures the method and device of U.S. face effect
US10095033B2 (en) Multimodal interaction with near-to-eye display
US9442571B2 (en) Control method for generating control instruction based on motion parameter of hand and electronic device using the control method
CN111263066B (en) Composition guiding method, composition guiding device, electronic equipment and storage medium
CN105867626A (en) Head-mounted virtual reality equipment, control method thereof and virtual reality system
CN108712603B (en) Image processing method and mobile terminal
CN104536579A (en) Interactive three-dimensional scenery and digital image high-speed fusing processing system and method
KR20140010541A (en) Method for correcting user's gaze direction in image, machine-readable storage medium and communication terminal
CN106652590A (en) Teaching method, teaching recognizer and teaching system
CN111831119A (en) Eyeball tracking method and device, storage medium and head-mounted display equipment
CN105653037A (en) Interactive system and method based on behavior analysis
JP2009284175A (en) Calibration method and apparatus of display
CN107784885A (en) Operation training method and AR equipment based on AR equipment
CN107357434A (en) Information input equipment, system and method under a kind of reality environment
CN107943842A (en) A kind of photo tag generation method, mobile terminal
US20220366717A1 (en) Sensor-based Bare Hand Data Labeling Method and System
JP2019192116A (en) Image generation device and image generation program
CN109918005A (en) A kind of displaying control system and method based on mobile terminal
TW201814590A (en) Mobile electronic device and server
CN109992111A (en) Augmented reality extended method and electronic equipment
CN113365085A (en) Live video generation method and device
CN106502401B (en) Image control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant