CN106845335B - Gesture recognition method and device for virtual reality equipment and virtual reality equipment - Google Patents

Gesture recognition method and device for virtual reality equipment and virtual reality equipment

Info

Publication number
CN106845335B
CN106845335B
Authority
CN
China
Prior art keywords
current
virtual reality
gesture recognition
current user
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611073934.3A
Other languages
Chinese (zh)
Other versions
CN106845335A (en)
Inventor
张茜
张绍谦
张超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN201611073934.3A priority Critical patent/CN106845335B/en
Priority to PCT/CN2016/111062 priority patent/WO2018098861A1/en
Publication of CN106845335A publication Critical patent/CN106845335A/en
Application granted granted Critical
Publication of CN106845335B publication Critical patent/CN106845335B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/117Biometrics derived from hands

Abstract

The invention discloses a gesture recognition method and apparatus for virtual reality equipment, and virtual reality equipment. The gesture recognition method comprises the following steps: controlling a depth camera to capture a current hand image of the current user; judging, according to the current hand image, whether the current user performs a tapping action, and if so: extracting current features from the current hand image; and matching the current features with reference features in a model and determining, according to the matching result, the key struck by the current user. The invention brings a virtual keyboard into practical use, improves the flexibility with which a user works, frees the space taken by a physical keyboard, and reduces the trouble that complex character input causes the user, thereby improving the user experience.

Description

Gesture recognition method and device for virtual reality equipment and virtual reality equipment
Technical Field
The invention relates to the technical field of virtual reality equipment, in particular to a gesture recognition method and device for virtual reality equipment and the virtual reality equipment.
Background
Virtual reality (VR) is a new high technology that has emerged in recent years. Virtual reality technology is a key technology for supporting a comprehensive, integrated, multidimensional information space that combines qualitative and quantitative, rational and perceptual cognition. As network speeds increase, an internet era based on virtual reality technology is quietly arriving, and it will greatly change how people work and live. One can imagine people experiencing and interacting with a virtual world, daring to try activities such as spacewalking and parachuting simply by wearing a VR headset.
At present, the emerging virtual reality technology has penetrated fields such as office work and entertainment and has brought change to many industries. However, existing VR headset interaction relies mainly on voice, gestures and the like and cannot support complex handling of text. To enrich VR headset interaction and improve the user experience, it is therefore very valuable to provide a VR virtual keyboard based on hand recognition.
Disclosure of Invention
One object of the present invention is to provide a new technical solution for gesture recognition of virtual reality devices.
According to a first aspect of the present invention, there is provided a gesture recognition method for a virtual reality device, the virtual reality device including a depth camera, the gesture recognition method including:
controlling the depth camera to acquire a current hand image of a current user;
judging, according to the current hand image, whether the current user performs a tapping action, and if so:
extracting a current feature from the current hand image;
and matching the current feature with a reference feature in a model, and determining the key struck by the current user according to the matching result.
Optionally, the gesture recognition method further includes:
controlling the depth camera to acquire a reference hand image of a reference user;
extracting the reference features from the reference hand image and storing the reference features in the model.
Optionally, the virtual reality device further includes a display screen, and the judging, according to the current hand image, whether the current user performs a tapping action further includes:
displaying a keyboard image and an initial position of the current user's finger on the keyboard image on the display screen.
Optionally, the gesture recognition method further includes:
displaying the current position of the current user's finger on the keyboard image on the display screen according to the key struck by the current user.
According to a second aspect of the present invention, there is provided a gesture recognition apparatus for a virtual reality device, comprising:
the first control module is used for controlling the depth camera to acquire a current hand image of a current user;
the judging module is used for judging, according to the current hand image, whether the current user performs a tapping action;
the current feature extraction module is used for extracting current features from the current hand image when the judgment result of the judging module is yes;
and the matching module is used for matching the current features with the reference features in the model and determining the key struck by the current user according to the matching result.
Optionally, the gesture recognition apparatus further includes:
the second control module is used for controlling the depth camera to collect a reference hand image of a reference user;
a reference feature extraction module for extracting the reference features from the reference hand image and storing the reference features in the model.
Optionally, the virtual reality device further includes a display screen, and the gesture recognition apparatus further includes:
the first display module is used for displaying a keyboard image and the initial position of the current user's fingers on the keyboard image on the display screen.
Optionally, the gesture recognition apparatus further includes:
the second display module is used for displaying the current position of the current user's finger on the keyboard image on the display screen according to the key struck by the current user as determined by the matching module.
According to a third aspect of the present invention, there is provided a virtual reality device comprising the gesture recognition apparatus according to the second aspect of the present invention.
According to a fourth aspect of the present invention, there is provided a virtual reality device, comprising a depth camera for acquiring an image, a processor and a memory for storing instructions for controlling the processor to execute the gesture recognition method according to the first aspect of the present invention.
The inventors found that the prior art has the problem that interaction with head-mounted virtual reality devices relies mainly on voice, gestures and the like and cannot support complex handling of text. The technical task to be achieved and the technical problem to be solved by the present invention were therefore never conceived of or anticipated by those skilled in the art, and the present invention is thus a new technical solution.
The method has the advantage that, by using depth-sensor gesture recognition to distinguish various gestures, identify the left and right hands, and acquire fingertip coordinates, a virtual keyboard can be brought into practical use. This improves the flexibility with which a user works, frees the space taken by a physical keyboard, and reduces the trouble that complex character input causes the user, thereby improving the user experience.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram of one embodiment of a gesture recognition method for a virtual reality device in accordance with the present invention;
FIG. 2 is a block diagram of an implementation structure of a gesture recognition apparatus for a virtual reality device according to the present invention;
FIG. 3 is a block schematic diagram of an implementation structure of a virtual reality device according to the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
To solve the prior-art problem that interaction with head-mounted virtual reality equipment relies mainly on voice, gestures and the like and cannot support complex handling of text, a gesture recognition method for virtual reality equipment is provided, the virtual reality equipment including a depth camera.
A depth camera, also called a depth sensor or 3D sensor (for example, a TOF camera), emits modulated near-infrared light, which is reflected when it strikes an object. By calculating the time difference or phase difference between emission and reflection, the camera converts it into the distance of the photographed object and thereby generates depth information. In addition, combined with conventional camera imaging, the three-dimensional contour of the object can be presented by rendering different distances in different colors.
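For concreteness, the distance conversion described above follows the standard continuous-wave TOF relation d = c·Δφ / (4π·f_mod), where Δφ is the measured phase difference and f_mod the modulation frequency. The sketch below is a minimal illustration of that relation only, not an implementation from the patent; the 20 MHz modulation frequency and the function name are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth_from_phase(phase_rad: np.ndarray, f_mod: float = 20e6) -> np.ndarray:
    """Convert the per-pixel phase difference (radians) between emitted
    and reflected modulated near-infrared light into one-way distance (m).

    The light travels out and back, covering c * phase / (2 * pi * f_mod);
    halving the round trip gives the depth of the reflecting surface.
    """
    return C * phase_rad / (4.0 * np.pi * f_mod)

# A pi/2 phase shift at 20 MHz modulation corresponds to about 1.87 m.
print(tof_depth_from_phase(np.array([np.pi / 2])))
```

At 20 MHz the unambiguous range is c / (2·f_mod) ≈ 7.5 m, comfortably covering the distance from a headset-mounted sensor to the hands.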
Fig. 1 is a flowchart of an embodiment of a gesture recognition method for a virtual reality device according to the present invention.
As shown in fig. 1, the gesture recognition method includes the following steps:
Step S110, controlling the depth camera to acquire the current hand image of the current user.
Specifically, the TOF camera, for example, is controlled to emit modulated near-infrared light, which is reflected when it strikes the hand of the current user. By calculating the time difference or phase difference between emission and reflection, the camera converts it into the distance of any position on the current user's hand and thereby generates depth information. In addition, combined with conventional camera imaging, the three-dimensional contour of the hand can be presented as the current hand image by rendering different distances in different colors.
Step S120, judging whether the current user performs a tapping action according to the current hand image; if so, executing step S130; if not, returning to step S110.
The curvature of each point on the hand contour in the current hand image is obtained from the depth camera, and from it the fingertip positions can be calculated. The curvature is evaluated at contour points sampled at a fixed step length; since fingertip curvature lies within a known range, checking whether each point's curvature falls in that range identifies the fingertip positions. The positions of the other key points of the hand, which may be the joints, are then calculated morphologically from the gesture recognition result and the fingertip positions.
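The patent does not fix a particular curvature estimator. One common choice consistent with the description is the k-cosine measure evaluated at contour points sampled at a fixed step; the sketch below assumes a binary hand mask (for example, obtained by thresholding the depth map), and the step length and curvature threshold are illustrative values, not figures from the patent.

```python
import cv2
import numpy as np

def fingertip_candidates(mask: np.ndarray, step: int = 16,
                         cos_min: float = 0.5) -> list[tuple[int, int]]:
    """Find high-curvature contour points (fingertip candidates) in a
    binary hand mask (uint8, hand pixels = 255) with the k-cosine measure.

    For each contour point p, take the neighbors `step` samples away on
    either side; the cosine of the angle between (prev - p) and (next - p)
    approaches 1 where the contour bends sharply, as at a fingertip.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return []
    contour = max(contours, key=cv2.contourArea).squeeze(1)
    n = len(contour)
    tips = []
    for i in range(n):
        p = contour[i].astype(np.float64)
        a = contour[(i - step) % n] - p
        b = contour[(i + step) % n] - p
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0 and np.dot(a, b) / denom > cos_min:
            tips.append((int(contour[i][0]), int(contour[i][1])))
    return tips
```

Candidates found this way include the valleys between fingers as well as the tips; a practical implementation keeps only the candidates lying on the convex hull of the contour and merges adjacent candidates by non-maximum suppression.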
The depth sensor may be placed on the virtual reality device or anywhere in front of the user's hands. In one embodiment of the invention, the depth sensor is located on the virtual reality device. After the depth sensor captures the current hand image of the current user, the current fingertip coordinates of the current user are obtained by computing the curvature of the hand contour points in the current hand image, the other key points of the user's hand are calculated from the fingertip coordinates, and the fingertips and the other key points are imaged onto a VR keyboard image through 3D rendering. The depth change of the user's fingers can then be confirmed from the current hand image to determine whether the keyboard has been tapped.
With 3D rendering, the image is clear, its resolution is improved, and the viewing angle and viewing distance can be adjusted appropriately.
Since the depth sensor produces a depth map, depth varies with the distance between the hand and the sensor: the depth value is smaller when a finger is lifted and larger when it falls. From this change it can be determined whether the current user has performed a tapping action.
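A minimal sketch of this depth-based tap test, assuming a per-frame depth value sampled at each fingertip coordinate; the window length and the depth threshold are assumptions, since the patent specifies neither.

```python
from collections import deque

class TapDetector:
    """Detect a key tap from one fingertip's depth trace.

    Seen from a head-mounted depth sensor, a lifted finger is nearer
    (smaller depth) and a dropped finger is farther (larger depth); a tap
    shows up as a rapid rise from the recent minimum depth.
    """

    def __init__(self, window: int = 5, threshold_m: float = 0.015):
        self.depths = deque(maxlen=window)  # recent depths, meters
        self.threshold_m = threshold_m

    def update(self, fingertip_depth_m: float) -> bool:
        self.depths.append(fingertip_depth_m)
        if len(self.depths) < self.depths.maxlen:
            return False  # not enough history yet
        # Tap: the finger just fell by more than the threshold relative
        # to its most lifted position within the window.
        return self.depths[-1] - min(self.depths) > self.threshold_m
```

One detector instance would be kept per finger and updated every frame; a real implementation would also debounce the output so that a single keystroke is not reported on several consecutive frames.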
In an embodiment of the present invention, the virtual reality device further includes a display screen, and before performing step S120, the method further includes:
displaying the keyboard image and the initial positions of the current user's fingers on the keyboard image on the display screen.
When a user uses the virtual reality device, which may for example be a head-mounted device, to input characters, a surrounding flat object such as a table may be used to reduce arm discomfort. The current user places both hands in the posture used on a physical keyboard; the depth camera obtains the current hand image and the fingertip coordinates, and through normalization of the index fingertips the left index finger is placed at the F key of the keyboard and the right index finger at the J key, as in the sketch below. Guided by the imaging of the virtual reality device, the current user can then adjust the other fingers appropriately so that each finger falls on its correct initial position.
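As an illustration of the index-fingertip normalization just described, the sketch below translates each hand's fingertip coordinates so that the index fingers land on the F and J keys. The per-hand rigid translation and the fingertip naming scheme are assumptions for illustration, not details given in the patent.

```python
import numpy as np

def normalize_home_row(fingertips: dict[str, np.ndarray],
                       key_f: np.ndarray,
                       key_j: np.ndarray) -> dict[str, np.ndarray]:
    """Shift each hand so its index fingertip lands on its home key.

    `fingertips` maps names such as "left_index" or "right_pinky" to 2-D
    keyboard-image coordinates; one translation is applied per hand.
    """
    offsets = {
        "left": key_f - fingertips["left_index"],
        "right": key_j - fingertips["right_index"],
    }
    return {name: pos + offsets[name.split("_")[0]]
            for name, pos in fingertips.items()}

tips = {"left_index": np.array([10.0, 5.0]),
        "left_middle": np.array([8.0, 4.0]),
        "right_index": np.array([30.0, 5.0])}
home = normalize_home_row(tips, key_f=np.array([12.0, 6.0]),
                          key_j=np.array([28.0, 6.0]))
print(home["left_index"], home["right_index"])  # [12. 6.] [28. 6.]
```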
In this way, the current user can clearly see the positions of the keys to be pressed on the keyboard, so that tapping actions can be performed deliberately, improving the user experience.
In step S130, the current feature is extracted from the current hand image.
In a specific embodiment of the present invention, extraction of the current features may be implemented by an improved convolutional neural network (CNN) algorithm. The improved CNN obtains feature points of the user's hand (for example, 10 of them) by convolution and then aggregates them through the max-pooling and fully connected layers; the CNN may be built, for example, on a framework such as Caffe or TensorFlow, and the current features are then extracted from the network.
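The patent names only the building blocks (convolution, max pooling, fully connected layers) and the Caffe and TensorFlow frameworks. The sketch below is one possible TensorFlow/Keras arrangement of those blocks; the input resolution, all layer sizes, and the 10-dimensional output are assumptions, not a disclosed architecture.

```python
import tensorflow as tf

def build_hand_feature_cnn(input_shape=(96, 96, 1), feature_dim=10):
    """A small convolution -> max-pool -> fully-connected network mapping
    a single-channel depth crop of the hand to a feature vector."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(feature_dim),  # the "current features"
    ])

model = build_hand_feature_cnn()
model.summary()
```

Such a network would be trained with backpropagation (per the G06N3/084 classification above) on labeled hand images before deployment.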
Step S140, matching the current features with the reference features in the model, and determining the key struck by the current user according to the matching result.
The reference feature may be stored in the model before the virtual reality device is shipped, or may be stored by a current user before the virtual reality device is used.
In an embodiment of the invention, before performing step S140, the gesture recognition method further includes:
controlling the depth camera to acquire a reference hand image of a reference user;
the reference features are extracted from the reference hand image and stored in the model.
Specifically, reference hand images of a user's various key-striking actions on a keyboard are collected, reference features are extracted from each reference hand image, and a model containing the reference features corresponding to each hand action is built. The resulting model can then be matched against the current features; if the matching succeeds, the key struck by the current user, which corresponds to the successfully matched reference features, can be determined.
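The matching criterion is left open in the patent; a minimal sketch is nearest-neighbor matching of the current feature vector against per-key reference features, with Euclidean distance and a single rejection threshold as assumed choices.

```python
import numpy as np

def match_key(current: np.ndarray, model: dict[str, np.ndarray],
              max_dist: float = 1.0) -> str | None:
    """Return the key whose reference features lie closest to the current
    features, or None when no reference is within max_dist."""
    best_key, best_dist = None, max_dist
    for key, ref in model.items():
        dist = float(np.linalg.norm(current - ref))
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key

# Toy example with 10-dimensional features for two keys.
model = {"F": np.zeros(10), "J": np.ones(10)}
print(match_key(np.full(10, 0.1), model))  # -> "F"
```

A per-key threshold or a trained classifier could replace the single max_dist; the nearest-match idea stays the same.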
After step S140 is executed, the gesture recognition method further includes:
displaying the current position of the current user's finger on the keyboard image on the display screen according to the key struck by the current user.
Specifically, the method of displaying the current position of the current user's finger on the keyboard image on the display screen may be the same as the method of displaying the initial position described above.
In this way, by using depth-sensor gesture recognition to distinguish various gestures, identify the left and right hands, and acquire fingertip coordinates, a virtual keyboard can be brought into practical use: the user completes the input of characters or numbers by tapping with the fingers on the keyboard picture presented before the eyes. This improves the flexibility with which the user works, frees the space taken by a physical keyboard, and reduces the trouble that complex character input causes the user, thereby improving the user experience.
The invention also provides a gesture recognition device for the virtual reality equipment, and fig. 2 is a block schematic diagram of an implementation structure of the gesture recognition device for the virtual reality equipment.
Referring to fig. 2, the gesture recognition apparatus 200 includes a first control module 210, a determination module 220, a current feature extraction module 230, and a matching module 240.
The first control module 210 is configured to control the depth camera to acquire a current hand image of a current user;
the judging module 220 is configured to judge whether the current user performs a tapping action according to the current hand image;
the current feature extraction module 230 is configured to extract current features from the current hand image when the judgment result of the judging module 220 is yes;
the matching module 240 is configured to match the current features with the reference features in the model and determine the key struck by the current user according to the matching result.
Specifically, the gesture recognition device further comprises a second control module and a reference feature extraction module, wherein the second control module is used for controlling the depth camera to acquire a reference hand image of a reference user; the reference feature extraction module is used for extracting reference features from the reference hand images and storing the reference features in the model.
Further, the virtual reality device also includes a display screen, and the gesture recognition apparatus further includes a first display module for displaying the keyboard image and the initial positions of the current user's fingers on the keyboard image on the display screen.
On this basis, the gesture recognition apparatus further includes a second display module for displaying the current position of the current user's fingers on the keyboard image on the display screen according to the key struck by the current user as determined by the matching module.
The invention also provides a virtual reality device, which according to one aspect comprises the gesture recognition apparatus 200 for a virtual reality device. The virtual reality device may be, for example, virtual reality glasses or a virtual reality helmet.
Fig. 3 is a block schematic diagram of an implementation structure of the virtual reality device according to another aspect of the present invention.
As shown in fig. 3, the virtual reality device 300 comprises a memory 301 and a processor 302, wherein the memory 301 is used for storing instructions for controlling the processor 302 to operate so as to execute the gesture recognition method for the virtual reality device.
In addition, as shown in fig. 3, the virtual reality device 300 further includes an interface device 303, an input device 304, a display device 305, a communication device 306, and the like. Although a plurality of devices are shown in fig. 3, the present invention may relate to only some of them, for example, the memory 301, the processor 302, the interface device 303, and the like.
The communication device 306 can perform wired or wireless communication, for example.
The interface device 303 includes, for example, a headphone jack, a USB interface, and the like.
The input device 304 may include, for example, a touch screen, a key, and the like.
The display device 305 is, for example, a liquid crystal display panel, a touch panel, or the like.
The embodiments in the present disclosure are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments, but it should be clear to those skilled in the art that the embodiments described above can be used alone or in combination with each other as needed. In addition, for the device embodiment, since it corresponds to the method embodiment, the description is relatively simple, and for relevant points, refer to the description of the corresponding parts of the method embodiment. The system embodiments described above are merely illustrative, in that modules illustrated as separate components may or may not be physically separate.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (8)

1. A gesture recognition method for a virtual reality device, the virtual reality device comprising a depth camera, the gesture recognition method comprising:
controlling the depth camera to acquire a current hand image of a current user;
judging whether the current user performs a tapping action according to the current hand image, and if so:
extracting a current feature from the current hand image;
matching the current feature with a reference feature in a model, and determining the key struck by the current user according to the matching result;
the judging whether the current user performs a tapping action according to the current hand image comprises: calculating fingertip positions according to the curvature of each point of the hand contour in the current hand image, calculating the positions of other key points of the hand according to the fingertip positions, and confirming the depth change of the fingers according to the fingertip positions and the positions of the other key points of the hand, so as to judge whether the current user performs a tapping action;
the virtual reality device further comprises a display screen, and the judging whether the current user performs a tapping action according to the current hand image further comprises:
displaying a keyboard image and an initial position of the current user's finger on the keyboard image on the display screen.
2. The gesture recognition method according to claim 1, further comprising:
controlling the depth camera to acquire a reference hand image of a reference user;
extracting the reference features from the reference hand image and storing the reference features in the model.
3. The gesture recognition method according to claim 1, further comprising:
displaying the current position of the current user's finger on the keyboard image on the display screen according to the key struck by the current user.
4. A gesture recognition apparatus for a virtual reality device, comprising:
the first control module is used for controlling the depth camera to acquire a current hand image of a current user;
the judging module is used for judging whether the current user performs a tapping action according to the current hand image;
the judging module is specifically used for calculating fingertip positions according to the curvature of each point of the hand contour in the current hand image, calculating the positions of other key points of the hand according to the fingertip positions, and confirming the depth change of the fingers according to the fingertip positions and the positions of the other key points of the hand, so as to judge whether the current user performs a tapping action;
the current feature extraction module is used for extracting current features from the current hand image when the judgment result of the judging module is yes;
the matching module is used for matching the current features with the reference features in a model and determining the key struck by the current user according to the matching result;
the virtual reality equipment still includes the display screen, gesture recognition device still includes:
the first display module is used for displaying a keyboard image and the initial position of the current user finger on the keyboard image on the display screen.
5. The gesture recognition device according to claim 4, further comprising:
the second control module is used for controlling the depth camera to collect a reference hand image of a reference user;
a reference feature extraction module for extracting the reference features from the reference hand image and storing the reference features in the model.
6. The gesture recognition device according to claim 4, further comprising:
the second display module is used for displaying the current position of the current user's finger on the keyboard image on the display screen according to the key struck by the current user as determined by the matching module.
7. A virtual reality device, characterized in that it comprises a gesture recognition apparatus according to any one of claims 4-6.
8. A virtual reality device comprising a depth camera for capturing images, a processor, and a memory for storing instructions for controlling the processor to perform the gesture recognition method of any one of claims 1-3.
CN201611073934.3A 2016-11-29 2016-11-29 Gesture recognition method and device for virtual reality equipment and virtual reality equipment Active CN106845335B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611073934.3A CN106845335B (en) 2016-11-29 2016-11-29 Gesture recognition method and device for virtual reality equipment and virtual reality equipment
PCT/CN2016/111062 WO2018098861A1 (en) 2016-11-29 2016-12-20 Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611073934.3A CN106845335B (en) 2016-11-29 2016-11-29 Gesture recognition method and device for virtual reality equipment and virtual reality equipment

Publications (2)

Publication Number Publication Date
CN106845335A CN106845335A (en) 2017-06-13
CN106845335B (en) 2020-03-17

Family

ID=59145422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611073934.3A Active CN106845335B (en) 2016-11-29 2016-11-29 Gesture recognition method and device for virtual reality equipment and virtual reality equipment

Country Status (2)

Country Link
CN (1) CN106845335B (en)
WO (1) WO2018098861A1 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107490983A (en) * 2017-09-29 2017-12-19 中国船舶重工集团公司第七〇四研究所 A kind of emulation mode for simulating parachute jumping full experience
CN107693117B (en) * 2017-09-29 2020-06-12 苏州蓝软智能医疗科技有限公司 Auxiliary operation system and method for automatically matching 3D model and operation patient in superposition mode
CN107644631A (en) * 2017-10-13 2018-01-30 深圳市明德智慧教育科技有限公司 Method, system and the virtual reality device of music input based on virtual reality
CN109857244B (en) * 2017-11-30 2023-09-01 百度在线网络技术(北京)有限公司 Gesture recognition method and device, terminal equipment, storage medium and VR glasses
CN108052277A (en) * 2017-12-14 2018-05-18 深圳市艾德互联网络有限公司 A kind of AR positioning learning methods and device
CN108519855A (en) * 2018-04-17 2018-09-11 北京小米移动软件有限公司 Characters input method and device
CN108815845B (en) * 2018-05-15 2019-11-26 百度在线网络技术(北京)有限公司 The information processing method and device of human-computer interaction, computer equipment and readable medium
CN109508635B (en) * 2018-10-08 2022-01-07 海南师范大学 Traffic light identification method based on TensorFlow combined with multilayer CNN network
CN109598998A (en) * 2018-11-30 2019-04-09 深圳供电局有限公司 Power grid training wearable device and its exchange method based on gesture identification
CN109933190B (en) * 2019-02-02 2022-07-19 青岛小鸟看看科技有限公司 Head-mounted display equipment and interaction method thereof
CN110096166A (en) * 2019-04-23 2019-08-06 广东工业大学华立学院 A kind of virtual input method
CN110321174A (en) * 2019-06-25 2019-10-11 Oppo广东移动通信有限公司 A kind of starting-up method and device, equipment, storage medium
CN111158476B (en) * 2019-12-25 2023-05-23 中国人民解放军军事科学院国防科技创新研究院 Key recognition method, system, equipment and storage medium of virtual keyboard
CN111443831A (en) * 2020-03-30 2020-07-24 北京嘉楠捷思信息技术有限公司 Gesture recognition method and device
CN111766947A (en) * 2020-06-30 2020-10-13 歌尔科技有限公司 Display method, display device, wearable device and medium
CN112462937B (en) 2020-11-23 2022-11-08 青岛小鸟看看科技有限公司 Local perspective method and device of virtual reality equipment and virtual reality equipment
CN113269089B (en) * 2021-05-25 2023-07-18 上海人工智能研究院有限公司 Real-time gesture recognition method and system based on deep learning
CN113299132A (en) * 2021-06-08 2021-08-24 上海松鼠课堂人工智能科技有限公司 Student speech skill training method and system based on virtual reality scene

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104636725A (en) * 2015-02-04 2015-05-20 华中科技大学 Gesture recognition method based on depth image and gesture recognition system based on depth images
WO2016010797A1 (en) * 2014-07-15 2016-01-21 Microsoft Technology Licensing, Llc Holographic keyboard display

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2011220382A1 (en) * 2010-02-28 2012-10-18 Microsoft Corporation Local advertising content on an interactive head-mounted eyepiece
JP2012252584A (en) * 2011-06-03 2012-12-20 Nakayo Telecommun Inc Virtual keyboard input method
CN104246682B (en) * 2012-03-26 2017-08-25 苹果公司 Enhanced virtual touchpad and touch-screen
CN102778951B (en) * 2012-06-15 2016-02-10 惠州华阳通用电子有限公司 Use input equipment and the input method of virtual key
US9305229B2 (en) * 2012-07-30 2016-04-05 Bruno Delean Method and system for vision based interfacing with a computer
CN103105930A (en) * 2013-01-16 2013-05-15 中国科学院自动化研究所 Non-contact type intelligent inputting method based on video images and device using the same
US10262462B2 (en) * 2014-04-18 2019-04-16 Magic Leap, Inc. Systems and methods for augmented and virtual reality
CN104423578B (en) * 2013-08-25 2019-08-06 杭州凌感科技有限公司 Interactive input system and method
CN105224069B (en) * 2014-07-03 2019-03-19 王登高 A kind of augmented reality dummy keyboard input method and the device using this method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016010797A1 (en) * 2014-07-15 2016-01-21 Microsoft Technology Licensing, Llc Holographic keyboard display
CN104636725A (en) * 2015-02-04 2015-05-20 华中科技大学 Gesture recognition method based on depth image and gesture recognition system based on depth images

Also Published As

Publication number Publication date
CN106845335A (en) 2017-06-13
WO2018098861A1 (en) 2018-06-07

Similar Documents

Publication Publication Date Title
CN106845335B (en) Gesture recognition method and device for virtual reality equipment and virtual reality equipment
US11262840B2 (en) Gaze detection in a 3D mapping environment
CN107810465B (en) System and method for generating a drawing surface
US20180218545A1 (en) Virtual content scaling with a hardware controller
TW201814438A (en) Virtual reality scene-based input method and device
US20230152902A1 (en) Gesture recognition system and method of using same
JP6165485B2 (en) AR gesture user interface system for mobile terminals
US9599825B1 (en) Visual indicator for transparent display alignment
CN110968187B (en) Remote touch detection enabled by a peripheral device
WO2016032892A1 (en) Navigating augmented reality content with a watch
WO2013028279A1 (en) Use of association of an object detected in an image to obtain information to display to a user
KR102147430B1 (en) virtual multi-touch interaction apparatus and method
KR20120068253A (en) Method and apparatus for providing response of user interface
US20170140215A1 (en) Gesture recognition method and virtual reality display output device
KR20210033394A (en) Electronic apparatus and controlling method thereof
US20150123901A1 (en) Gesture disambiguation using orientation information
US11054941B2 (en) Information processing system, information processing method, and program for correcting operation direction and operation amount
CN111007942A (en) Wearable device and input method thereof
US20230306097A1 (en) Confirm Gesture Identity
US20220236809A1 (en) Wink Gesture Control System
CN114923418A (en) Point selection based measurement
CN116954367A (en) Virtual reality interaction method, system and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20201012

Address after: 261031 north of Yuqing street, east of Dongming Road, high tech Zone, Weifang City, Shandong Province (Room 502, Geer electronic office building)

Patentee after: GoerTek Optical Technology Co.,Ltd.

Address before: 266104 Laoshan Qingdao District North House Street investment service center room, Room 308, Shandong

Patentee before: GOERTEK TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221212

Address after: 266104 No. 500, Songling Road, Laoshan District, Qingdao, Shandong

Patentee after: GOERTEK TECHNOLOGY Co.,Ltd.

Address before: 261031 north of Yuqing street, east of Dongming Road, high tech Zone, Weifang City, Shandong Province (Room 502, Geer electronics office building)

Patentee before: GoerTek Optical Technology Co.,Ltd.