CN106406527A - Input method and device based on virtual reality and virtual reality device - Google Patents
- Publication number
- CN106406527A (application CN201610808139.8A)
- Authority
- CN
- China
- Prior art keywords
- user
- virtual reality
- camera
- handwriting trace
- case
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/153—Segmentation of character regions using recognition of characters or words
Abstract
The invention relates to a virtual-reality-based input method and device, and to a virtual reality device. The method comprises the steps of: starting a camera when the current virtual reality scene requires character input; acquiring the handwriting trace of the user through the camera; and recognizing the acquired handwriting trace to obtain the input characters. Because the camera is started only when the scene requires input, and the input characters are obtained by recognizing the user's handwriting trace captured by that camera, the method, the device and the virtual reality device can improve the accuracy and flexibility of character input in virtual reality without increasing hardware cost.
Description
Technical field
The present invention relates to the field of information technology, and more particularly to a virtual-reality-based input method and device, and to a virtual reality device.
Background technology
Virtual reality technology is an important branch of simulation technology, combining simulation with computer graphics, human-machine interface technology, multimedia technology, sensing technology and network technology. Existing virtual reality systems provide two input modes: virtual focus point input and voice input. In the virtual focus point mode, the system displays a virtual focus point through which the user can select functions, but this mode cannot be used to input text. In the voice mode, the user's speech is recognized and converted into text. Voice input, however, is easily disturbed by the environment and is prone to recognition errors when the user's pronunciation is non-standard or the speaking rate is too fast; moreover, when an error occurs, the user cannot correct the individual misrecognized words.
Summary of the invention
Technical problem
In view of this, the technical problem to be solved by the present invention is that the existing virtual-reality-based input techniques are relatively low in flexibility and accuracy.
Solution
To solve the above technical problem, according to one embodiment of the present invention, a virtual-reality-based input method is provided, comprising:
starting a camera when the current virtual reality scene requires character input;
obtaining the handwriting trace of the user through the camera; and
recognizing the acquired handwriting trace to obtain the input characters.
For the above method, in a possible implementation, obtaining the handwriting trace of the user through the camera comprises: upon detecting a first preset gesture, triggering the camera to start acquiring the handwriting trace of the user.
For the above method, in a possible implementation, the method further comprises: upon detecting a second preset gesture, stopping the acquisition of the handwriting trace of the user.
For the above method, in a possible implementation, recognizing the acquired handwriting trace comprises: upon detecting a third preset gesture, performing recognition on the acquired handwriting trace.
For the above method, in a possible implementation, obtaining the handwriting trace of the user through the camera comprises: shooting a video through the camera, and identifying the fingertip position of the user in each video frame in chronological order, so as to determine the handwriting trace produced by the user's moving fingertip.
For the above method, in a possible implementation, obtaining the handwriting trace of the user through the camera comprises: continuously capturing images through the camera, and identifying the fingertip position of the user in each image in shooting order, so as to determine the handwriting trace produced by the user's moving fingertip.
For the above method, in a possible implementation, recognizing the acquired handwriting trace comprises: when the recognition result includes multiple candidates, displaying the candidates.
For the above method, in a possible implementation, the method further comprises: displaying a character box when the current virtual reality scene requires character input.
For the above method, in a possible implementation, the method further comprises: prompting the user to write within the character box.
To solve the above technical problem, according to another embodiment of the present invention, a virtual-reality-based input device is provided, comprising:
a camera starting module, configured to start a camera when the current virtual reality scene requires character input;
an acquisition module, configured to obtain the handwriting trace of the user through the camera; and
a recognition processing module, configured to recognize the acquired handwriting trace to obtain the input characters.
For the above device, in a possible implementation, the acquisition module is configured to: upon detecting a first preset gesture, trigger the camera to start acquiring the handwriting trace of the user.
For the above device, in a possible implementation, the device further comprises: a stop-acquisition module, configured to stop acquiring the handwriting trace of the user upon detecting a second preset gesture.
For the above device, in a possible implementation, the recognition processing module is configured to: upon detecting a third preset gesture, perform recognition on the acquired handwriting trace.
For the above device, in a possible implementation, the acquisition module is configured to: shoot a video through the camera, and identify the fingertip position of the user in each video frame in chronological order, so as to determine the handwriting trace produced by the user's moving fingertip.
For the above device, in a possible implementation, the acquisition module is configured to: continuously capture images through the camera, and identify the fingertip position of the user in each image in shooting order, so as to determine the handwriting trace produced by the user's moving fingertip.
For the above device, in a possible implementation, the recognition processing module is configured to: when the recognition result includes multiple candidates, display the candidates.
For the above device, in a possible implementation, the device further comprises: a character box display module, configured to display a character box when the current virtual reality scene requires character input.
For the above device, in a possible implementation, the device further comprises: a prompting module, configured to prompt the user to write within the character box.
To solve the above technical problem, according to yet another embodiment of the present invention, a virtual reality device is provided, comprising a virtual reality glasses box and a mobile terminal. The virtual reality glasses box includes an aperture; the mobile terminal includes a camera and the above virtual-reality-based input device. The position of the camera corresponds to the position of the aperture, so that the camera can shoot through it.
For the above device, in a possible implementation, the position of the aperture is adjustable.
Beneficial effect
When the current virtual reality scene requires character input, the camera is started, the handwriting trace of the user is obtained through the camera, and the acquired trace is recognized to obtain the input characters. Thus, the virtual-reality-based input method, device and virtual reality device according to the embodiments of the present invention can improve the accuracy and flexibility of virtual-reality-based character input without increasing hardware cost.
Other features and aspects of the present invention will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present invention and, together with the description, serve to explain the principles of the invention.
Fig. 1 illustrates a flowchart of a virtual-reality-based input method according to an embodiment of the present invention;
Fig. 2 illustrates an exemplary implementation flowchart of the virtual-reality-based input method according to an embodiment of the present invention;
Fig. 3 illustrates another exemplary implementation flowchart of the virtual-reality-based input method according to an embodiment of the present invention;
Fig. 4 illustrates yet another exemplary implementation flowchart of the virtual-reality-based input method according to an embodiment of the present invention;
Fig. 5 illustrates a structural block diagram of a virtual-reality-based input device according to another embodiment of the present invention;
Fig. 6 illustrates an exemplary structural block diagram of a virtual-reality-based input device according to another embodiment of the present invention;
Figs. 7a-7c illustrate schematic diagrams of a virtual reality device according to another embodiment of the present invention;
Fig. 8 illustrates a structural block diagram of a virtual-reality-based input device according to a further embodiment of the present invention.
Detailed description of the embodiments
Various exemplary embodiments, features and aspects of the present invention are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The word "exemplary" here means "serving as an example, embodiment or illustration". Any embodiment described here as "exemplary" should not be construed as preferred over or advantageous to other embodiments.
In addition, numerous specific details are given in the following embodiments in order to better illustrate the present invention. Those skilled in the art will understand that the present invention can equally be practiced without some of these details. In some instances, methods, means, elements and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the present invention.
Embodiment 1
Fig. 1 illustrates a flowchart of a virtual-reality-based input method according to an embodiment of the present invention. The method may be executed by a virtual reality device (for example, wearable virtual reality glasses), by a mobile terminal such as a smartphone, or by another virtual-reality-based input device; no limitation is imposed here. As shown in Fig. 1, the method mainly comprises:
In step S101, when the current virtual reality scene requires character input, a camera is started.
Here, a character may be one or more of words, letters, numbers and symbols.
As an example of this embodiment, when the current virtual reality scene requires character input, it is detected whether the camera is already on. If the camera is detected to be on, it is kept on; if not, it is started. The camera may be a rear camera or a front camera, without limitation. For example, when the user wears virtual reality glasses, a mobile phone or another virtual reality device on the head to watch a virtual reality scene, the camera may be the rear camera of the virtual reality glasses or of the phone, located on the side opposite the display screen, so that it can capture the user's gestures while the user is watching, allowing convenient input during viewing.
In step S102, the handwriting trace of the user is obtained through the camera.
As an example of this embodiment, this may be done by shooting a video through the camera, and identifying the fingertip position of the user in each video frame in chronological order, so as to determine the handwriting trace produced by the user's moving fingertip.
As another example, the camera may continuously capture images, and the fingertip position of the user is identified in each image in shooting order, so as to determine the handwriting trace produced by the user's moving fingertip.
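Both acquisition variants reduce to the same idea: locate the fingertip frame by frame and concatenate the positions in time order. A minimal sketch under stated assumptions follows; the fingertip detector is replaced by a brightest-pixel stand-in on synthetic frames (a real system would use skin segmentation or a trained hand model):

```python
import numpy as np

def fingertip_position(frame):
    """Locate the fingertip in one frame.

    Stand-in detector: picks the brightest pixel of a grayscale frame.
    A real implementation would use skin segmentation or a trained
    hand/fingertip model instead.
    """
    y, x = np.unravel_index(int(np.argmax(frame)), frame.shape)
    return (int(x), int(y))

def handwriting_trace(frames):
    """Build the trace by processing frames in chronological order."""
    return [fingertip_position(f) for f in frames]

# Synthetic demo: a bright "fingertip" moving across three 64x64 frames.
frames = []
for x, y in [(10, 10), (20, 25), (30, 40)]:
    f = np.zeros((64, 64), dtype=np.uint8)
    f[y, x] = 255  # row index is y, column index is x
    frames.append(f)

print(handwriting_trace(frames))  # [(10, 10), (20, 25), (30, 40)]
```

The same `handwriting_trace` loop covers both the video-frame and the continuously-captured-image variants, since each supplies an ordered sequence of frames.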
In step S103, the acquired handwriting trace is recognized to obtain the input characters.
For example, the acquired handwriting trace may be recognized through OCR (Optical Character Recognition) technology to obtain the input characters.
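Before any OCR engine can run, the fingertip point sequence has to be rendered as an image. A minimal sketch of that rasterization step, assuming a 64x64 canvas; the recognizer itself is left unspecified, as in the patent:

```python
import numpy as np

def rasterize_trace(trace, size=64):
    """Render a fingertip trace as a binary image for an OCR engine.

    Consecutive trace points are joined by linearly interpolated
    segments so the stroke is continuous.
    """
    img = np.zeros((size, size), dtype=np.uint8)
    for (x0, y0), (x1, y1) in zip(trace, trace[1:]):
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for t in range(steps + 1):
            x = round(x0 + (x1 - x0) * t / steps)
            y = round(y0 + (y1 - y0) * t / steps)
            img[y, x] = 255
    return img

# A vertical stroke, roughly the digit "1": column 32, rows 10..50.
canvas = rasterize_trace([(32, 10), (32, 50)])
print(int(canvas.sum()) // 255)  # 41 ink pixels
```

The resulting bitmap can then be handed to any off-the-shelf handwriting OCR engine; which engine is used is an implementation choice outside the patent's scope.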
Since the input method provided by this embodiment does not rely on voice recognition, it avoids the problem that voice recognition is easily disturbed by the environment.
In a possible implementation, obtaining the handwriting trace of the user through the camera comprises: upon detecting a first preset gesture, triggering the camera to start acquiring the handwriting trace of the user. For example, the first preset gesture may be clenching the fingers.
In a possible implementation, the method further comprises: upon detecting a second preset gesture, stopping the acquisition of the handwriting trace of the user. For example, the second preset gesture may be spreading the fingers.
In a possible implementation, recognizing the acquired handwriting trace comprises: upon detecting a third preset gesture, performing recognition on the acquired handwriting trace. For example, the third preset gesture may be waving to the left.
It should be noted that the first, second and third preset gestures may also be set flexibly according to the user's personal preference and/or the practical application scene, without limitation.
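The three-gesture session described above (start capture, stop capture, recognize) amounts to a small state machine. A minimal sketch follows; the gesture token strings are assumptions standing in for a gesture detector's output, since the patent leaves the concrete gestures configurable:

```python
from enum import Enum, auto

class Phase(Enum):
    IDLE = auto()
    CAPTURING = auto()
    READY = auto()       # capture stopped, trace awaiting recognition
    RECOGNIZED = auto()

# Assumed gesture tokens (the patent's examples, named hypothetically).
FIRST = "clench_fingers"   # first preset gesture: start acquiring
SECOND = "spread_fingers"  # second preset gesture: stop acquiring
THIRD = "wave_left"        # third preset gesture: recognize the trace

def step(phase, gesture):
    """Advance the input session when a preset gesture is detected."""
    if phase is Phase.IDLE and gesture == FIRST:
        return Phase.CAPTURING
    if phase is Phase.CAPTURING and gesture == SECOND:
        return Phase.READY
    if phase is Phase.READY and gesture == THIRD:
        return Phase.RECOGNIZED
    return phase  # out-of-order or unknown gestures are ignored

phase = Phase.IDLE
for g in ["clench_fingers", "spread_fingers", "wave_left"]:
    phase = step(phase, g)
print(phase.name)  # RECOGNIZED
```

Ignoring out-of-order gestures (rather than raising an error) matches the patent's assumption that gestures are user-configurable and may be detected spuriously.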
Fig. 2 illustrates an exemplary implementation flowchart of the virtual-reality-based input method according to an embodiment of the present invention. As shown in Fig. 2, the method comprises:
In step S201, when the current virtual reality scene requires character input, the camera is started.
In step S202, upon detecting the first preset gesture, the camera is triggered to start acquiring the handwriting trace of the user.
In step S203, upon detecting the second preset gesture, the acquisition of the handwriting trace is stopped.
In step S204, upon detecting the third preset gesture, the acquired handwriting trace is recognized to obtain the input characters.
In a possible implementation, recognizing the acquired handwriting trace comprises: when the recognition result includes multiple candidates, displaying the candidates.
For example, when there are multiple candidates, the current candidate may be displayed in a first preset color and the other candidates in a second preset color; the first preset color may be red and the second green. Upon detecting a fourth preset gesture, the current candidate is confirmed as the input character; upon detecting a fifth preset gesture, the candidate to the left of the current one becomes the new current candidate; upon detecting a sixth preset gesture, the candidate to the right of the current one becomes the new current candidate. The fourth preset gesture may be, for example, pausing the finger, waving upward or waving downward; the fifth preset gesture may be waving to the left; the sixth preset gesture may be waving to the right.
As another example, when there are multiple candidates, the current candidate may be indicated by a preset cursor. Upon detecting the fourth preset gesture, the current candidate is confirmed as the input character; upon detecting the fifth preset gesture, the candidate to the left becomes the new current candidate and the cursor moves to it; upon detecting the sixth preset gesture, the candidate to the right becomes the new current candidate and the cursor moves to it.
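Candidate navigation with the fourth, fifth and sixth preset gestures can be sketched as below; the tokens "confirm", "left" and "right" are hypothetical stand-ins for the detected gestures, and moves past either end of the list are simply ignored:

```python
def navigate(candidates, gestures, start=0):
    """Walk the candidate list under gesture control.

    Returns the confirmed candidate, or None if the user never
    confirms one.
    """
    current = start
    for g in gestures:
        if g == "confirm":                                   # 4th gesture
            return candidates[current]
        if g == "left" and current > 0:                      # 5th gesture
            current -= 1
        if g == "right" and current < len(candidates) - 1:   # 6th gesture
            current += 1
    return None

# Three look-alike recognition candidates for a handwritten character.
candidates = ["wang", "yu", "zhu"]
print(navigate(candidates, ["right", "right", "left", "confirm"]))  # yu
```

The same selection loop works for both display variants the patent describes (color highlighting or a preset cursor); only how `current` is rendered differs.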
In a possible implementation, the method further comprises: displaying a character box when the current virtual reality scene requires character input. For example, the color of the character box may be green, without limitation.
In a possible implementation, the method further comprises: prompting the user to write within the character box. For example, a text prompt displayed in the character box may ask the user to reach a finger into the box and write, or the same prompt may be given by voice.
Fig. 3 illustrates another exemplary implementation flowchart of the virtual-reality-based input method according to an embodiment of the present invention. As shown in Fig. 3, the method comprises:
In step S301, when the current virtual reality scene requires character input, the camera is started and a character box is displayed.
In step S302, the user is prompted to write within the character box.
In step S303, the handwriting trace of the user is obtained through the camera.
In step S304, the acquired handwriting trace is recognized to obtain the input characters.
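The Fig. 3 flow can be sketched as a small orchestration function. Every step (camera start, box display, prompt, capture, recognizer) is a hypothetical injected callable, since the patent fixes the control flow but no concrete API:

```python
def vr_text_input(scene_needs_input, start_camera, show_box, prompt,
                  capture_trace, recognize):
    """Orchestrate steps S301-S304 with each step injected as a callable,
    so the control flow can be exercised without real VR hardware."""
    if not scene_needs_input():
        return None
    start_camera()            # S301: start the camera ...
    show_box()                # ... and display the character box
    prompt()                  # S302: prompt the user to write in the box
    trace = capture_trace()   # S303: obtain the trace via the camera
    return recognize(trace)   # S304: recognize the trace into characters

log = []
result = vr_text_input(
    scene_needs_input=lambda: True,
    start_camera=lambda: log.append("camera"),
    show_box=lambda: log.append("box"),
    prompt=lambda: log.append("prompt"),
    capture_trace=lambda: [(1, 1), (2, 2)],
    recognize=lambda trace: "A" if trace else "",
)
print(result, log)  # A ['camera', 'box', 'prompt']
```

Injecting the steps keeps the sketch testable and makes explicit that the camera, display and recognizer backends are assumptions, not parts of the claimed method.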
Fig. 4 illustrates yet another exemplary implementation flowchart of the virtual-reality-based input method according to an embodiment of the present invention. As shown in Fig. 4, the method comprises:
In step S401, when the current virtual reality scene requires character input, the camera is started and a character box is displayed.
In step S402, the user is prompted to write within the character box.
In step S403, upon detecting the first preset gesture, the camera is triggered to start acquiring the handwriting trace of the user.
As an example of this embodiment, the detection of the first preset gesture may be treated as equivalent to a mouse press, and a cursor insertion point may be displayed in the character box to inform the user that the camera has started acquiring the trace, thereby prompting the user to start writing.
As another example, when it is detected that the user has reached a finger into the character box and the first preset gesture is detected, the cursor insertion point may be displayed in the character box to the same effect.
In step S404, upon detecting the second preset gesture, the acquisition of the handwriting trace is stopped.
In step S405, upon detecting the third preset gesture, the acquired handwriting trace is recognized to obtain the input characters.
In this way, by starting the camera when the current virtual reality scene requires character input, obtaining the user's handwriting trace through the camera, and recognizing the acquired trace to obtain the input characters, the virtual-reality-based input method according to this embodiment of the present invention can improve the accuracy and flexibility of virtual-reality-based character input without increasing hardware cost.
Embodiment 2
Fig. 5 illustrates the structural block diagram of a virtual-reality-based input device according to another embodiment of the present invention. The device of Fig. 5 may be used to carry out the virtual-reality-based input methods shown in Figs. 1 to 4. For ease of description, only the parts related to this embodiment are shown in Fig. 5.
As shown in Fig. 5, the device comprises: a camera starting module 51, configured to start a camera when the current virtual reality scene requires character input; an acquisition module 52, configured to obtain the handwriting trace of the user through the camera; and a recognition processing module 53, configured to recognize the acquired handwriting trace to obtain the input characters.
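The three-module structure of Fig. 5 can be mirrored as a small class; the camera and recognizer backends are hypothetical injected callables, since the patent specifies the modules and their responsibilities but not their implementations:

```python
class VRInputDevice:
    """Sketch of the Fig. 5 module structure under stated assumptions."""

    def __init__(self, camera_start, trace_source, recognizer):
        self._camera_start = camera_start  # camera starting module 51
        self._trace_source = trace_source  # acquisition module 52
        self._recognizer = recognizer      # recognition processing module 53

    def input_character(self):
        self._camera_start()
        trace = self._trace_source()
        return self._recognizer(trace)

device = VRInputDevice(
    camera_start=lambda: None,
    trace_source=lambda: [(0, 0), (5, 5)],
    recognizer=lambda trace: "X" if trace else "",
)
print(device.input_character())  # X
```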
Fig. 6 illustrates an exemplary structural block diagram of a virtual-reality-based input device according to another embodiment of the present invention. The device of Fig. 6 may be used to carry out the virtual-reality-based input methods shown in Figs. 1 to 4. For ease of description, only the parts related to this embodiment are shown in Fig. 6; components with the same reference numerals as in Fig. 5 have the same functions, and their detailed description is omitted for brevity. As shown in Fig. 6:
In a possible implementation, the acquisition module 52 is configured to: upon detecting a first preset gesture, trigger the camera to start acquiring the handwriting trace of the user.
In a possible implementation, the device further comprises: a stop-acquisition module 54, configured to stop acquiring the handwriting trace of the user upon detecting a second preset gesture.
In a possible implementation, the recognition processing module 53 is configured to: upon detecting a third preset gesture, perform recognition on the acquired handwriting trace.
In a possible implementation, the acquisition module 52 is configured to: shoot a video through the camera, and identify the fingertip position of the user in each video frame in chronological order, so as to determine the handwriting trace produced by the user's moving fingertip.
In a possible implementation, the acquisition module 52 is configured to: continuously capture images through the camera, and identify the fingertip position of the user in each image in shooting order, so as to determine the handwriting trace produced by the user's moving fingertip.
In a possible implementation, the recognition processing module 53 is configured to: when the recognition result includes multiple candidates, display the candidates.
In a possible implementation, the device further comprises: a character box display module 55, configured to display a character box when the current virtual reality scene requires character input.
In a possible implementation, the device further comprises: a prompting module 56, configured to prompt the user to write within the character box.
In this way, by starting the camera when the current virtual reality scene requires character input, obtaining the user's handwriting trace through the camera, and recognizing the acquired trace to obtain the input characters, the virtual-reality-based input device according to this embodiment of the present invention can improve the accuracy and flexibility of virtual-reality-based character input without increasing hardware cost.
Embodiment 3
Fig. 7 a-7c illustrates the schematic diagram of virtual reality device according to another embodiment of the present invention.As shown in Fig. 7 a-7c,
This device includes virtual reality glasses box 71 and mobile terminal 72, and virtual reality glasses box 71 includes perforate 711, mobile terminal
72 include photographic head 721 and the input equipment based on virtual reality, and the position of photographic head 721 is relative with the position of perforate 711
Should, so that photographic head 721 can be shot.Wherein, the base that the input equipment based on virtual reality can provide for embodiment 2
Input equipment in virtual reality.Mobile terminal 72 can be smart mobile phone, is not limited thereto.
In a kind of possible implementation, the position-adjustable of perforate 711 is so that for the different shooting of installation site
721, all outside outdoor scene can be shot by perforate 711.
In this way, by starting the camera when the current virtual reality scene requires character input, obtaining the user's handwriting trace through the camera, and recognizing the acquired trace to obtain the input characters, the virtual reality device according to this embodiment of the present invention can improve the accuracy and flexibility of virtual-reality-based character input without increasing hardware cost.
Embodiment 4
Fig. 8 shows the structural block diagram of a virtual-reality-based input device according to a further embodiment of the present invention. The virtual-reality-based input device 1100 may be a host server with computing capability, a personal computer (PC), a portable computer, a terminal, or the like; the specific embodiments of the present invention do not limit the concrete implementation of the computing node.
The virtual-reality-based input device 1100 comprises a processor 1110, a communications interface 1120, a memory 1130 and a bus 1140. The processor 1110, the communications interface 1120 and the memory 1130 communicate with one another through the bus 1140.
The communications interface 1120 is used to communicate with network devices, which include, for example, a virtual machine management center and shared storage.
The processor 1110 is used to execute programs. The processor 1110 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The memory 1130 is used to store files. The memory 1130 may comprise a high-speed RAM memory, and may also comprise a non-volatile memory, for example at least one disk memory. The memory 1130 may also be a memory array, and may be partitioned into blocks that can be combined into virtual volumes according to certain rules.
In a possible embodiment, the above program may be program code including computer operation instructions, and is specifically used to carry out the operations of the steps in Embodiment 1.
Those of ordinary skill in the art will appreciate that the exemplary units and algorithm steps described in the embodiments herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are realized in hardware or in software depends on the particular application and the design constraints of the technical solution. Skilled persons may use different methods to realize the described functions for each specific application, but such realization should not be considered to go beyond the scope of the present invention.
If the functions are realized in the form of computer software and sold or used as an independent product, the technical solution of the present invention, in whole or in part (for example, the part contributing to the prior art), may to some extent be regarded as embodied in the form of a computer software product. The computer software product is generally stored in a computer-readable non-volatile storage medium and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the claims.
Claims (20)
1. An input method based on virtual reality, characterized in that it comprises:
in the case that the current virtual reality scene requires character input, starting a camera;
acquiring a handwriting trace of a user through the camera;
performing recognition processing on the acquired handwriting trace to obtain an input character.
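The three steps of claim 1 can be sketched as a small state holder. This is a non-authoritative illustration: the class and function names (`VRInputSession`, `recognize`, the toy recognizer) are invented here and are not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class VRInputSession:
    camera_on: bool = False
    trace: list = field(default_factory=list)  # (x, y) fingertip points

    def on_character_input_needed(self):
        # Step 1: start the camera when the VR scene requires character input.
        self.camera_on = True

    def add_fingertip_point(self, x, y):
        # Step 2: accumulate the user's handwriting trace from camera frames.
        if self.camera_on:
            self.trace.append((x, y))

    def recognize(self, recognizer):
        # Step 3: run recognition on the acquired trace to get the character.
        return recognizer(self.trace)

session = VRInputSession()
session.on_character_input_needed()
for p in [(0, 0), (1, 1), (2, 2)]:
    session.add_fingertip_point(*p)
# Toy recognizer: any trace with two or more points "recognizes" a slash.
result = session.recognize(lambda t: "/" if len(t) >= 2 else "?")
```

In a real system the recognizer would be a handwriting-recognition model rather than a lambda; the point here is only the ordering of the three claimed steps.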
2. The method according to claim 1, characterized in that acquiring the handwriting trace of the user through the camera comprises:
in the case that a first preset gesture is detected, triggering the camera to start acquiring the handwriting trace of the user.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
in the case that a second preset gesture is detected, stopping acquiring the handwriting trace of the user.
4. The method according to claim 1, characterized in that performing recognition processing on the acquired handwriting trace comprises:
in the case that a third preset gesture is detected, performing recognition processing on the acquired handwriting trace.
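The gesture-controlled flow of claims 2 to 4 amounts to a small dispatcher: a first preset gesture starts trace acquisition, a second stops it, and a third triggers recognition. The sketch below is illustrative only; the gesture labels and class name are placeholders, not terms from the patent.

```python
class GestureDispatcher:
    def __init__(self, recognizer):
        self.recognizer = recognizer  # callable: trace -> recognized character(s)
        self.acquiring = False
        self.trace = []
        self.result = None

    def on_gesture(self, gesture):
        if gesture == "first":       # first preset gesture: start acquiring
            self.acquiring = True
        elif gesture == "second":    # second preset gesture: stop acquiring
            self.acquiring = False
        elif gesture == "third":     # third preset gesture: recognize the trace
            self.result = self.recognizer(self.trace)

    def on_fingertip(self, point):
        # Fingertip points are recorded only while acquisition is active.
        if self.acquiring:
            self.trace.append(point)

d = GestureDispatcher(recognizer=lambda trace: f"{len(trace)} pts")
d.on_fingertip((0, 0))   # ignored: acquisition has not started yet
d.on_gesture("first")
d.on_fingertip((1, 1))
d.on_fingertip((2, 2))
d.on_gesture("second")
d.on_fingertip((3, 3))   # ignored: acquisition has stopped
d.on_gesture("third")
print(d.result)          # → 2 pts
```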
5. The method according to claim 1, characterized in that acquiring the handwriting trace of the user through the camera comprises:
shooting a video through the camera, and identifying the fingertip position of the user in each video frame according to the chronological order of the video frames, so as to determine the handwriting trace produced by the user moving the fingertip.
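Claim 5 builds the trace by ordering frames in time and locating the fingertip in each. A minimal sketch, assuming a per-frame fingertip detector is available; `detect_fingertip` is a stand-in for such a detector, not a real API.

```python
def trace_from_frames(frames, detect_fingertip):
    """Build a handwriting trace from video frames in chronological order.

    frames: list of (timestamp, frame) pairs, possibly out of order.
    detect_fingertip: callable frame -> (x, y), or None when no fingertip
    is visible in that frame.
    """
    trace = []
    for _, frame in sorted(frames, key=lambda pair: pair[0]):
        point = detect_fingertip(frame)
        if point is not None:
            trace.append(point)
    return trace

# Frames arrive out of order; the sort restores chronological order.
frames = [(2, "f2"), (0, "f0"), (1, "f1")]
tips = {"f0": (0, 0), "f1": (1, 2), "f2": (2, 4)}
print(trace_from_frames(frames, tips.get))  # → [(0, 0), (1, 2), (2, 4)]
```

The same loop covers claim 6 if the (timestamp, frame) pairs come from continuously shot still images instead of video frames.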
6. The method according to claim 1, characterized in that acquiring the handwriting trace of the user through the camera comprises:
continuously shooting images through the camera, and identifying the fingertip position of the user in each image according to the order in which the images were shot, so as to determine the handwriting trace produced by the user moving the fingertip.
7. The method according to claim 1 or 4, characterized in that performing recognition processing on the acquired handwriting trace comprises:
in the case that the recognition result includes multiple candidates, displaying the multiple candidates.
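For claim 7, a recognizer typically returns scored candidates, and several are shown when no single one clearly wins. A hypothetical selection rule (the threshold, the scores, and the characters below are invented for illustration):

```python
def candidates_to_show(scored, threshold=0.1):
    """Return candidates whose score is within `threshold` of the best.

    scored: list of (character, score) pairs from the recognizer.
    """
    ranked = sorted(scored, key=lambda c: c[1], reverse=True)
    best = ranked[0][1]
    return [ch for ch, score in ranked if best - score <= threshold]

# "木" and "本" are close in score, so both are displayed for the user
# to pick from; "术" falls outside the threshold and is dropped.
print(candidates_to_show([("木", 0.90), ("本", 0.85), ("术", 0.40)]))
```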
8. The method according to claim 1, characterized in that the method further comprises:
in the case that the current virtual reality scene requires character input, displaying a character box.
9. The method according to claim 8, characterized in that after displaying the character box, the method further comprises:
prompting the user to write within the character box.
10. An input device based on virtual reality, characterized in that it comprises:
a camera starting module, configured to start a camera in the case that the current virtual reality scene requires character input;
an acquisition module, configured to acquire a handwriting trace of a user through the camera;
a recognition processing module, configured to perform recognition processing on the acquired handwriting trace to obtain an input character.
11. The device according to claim 10, characterized in that the acquisition module is configured to:
in the case that a first preset gesture is detected, trigger the camera to start acquiring the handwriting trace of the user.
12. The device according to claim 10 or 11, characterized in that the device further comprises:
a stop-acquisition module, configured to stop acquiring the handwriting trace of the user in the case that a second preset gesture is detected.
13. The device according to claim 10, characterized in that the recognition processing module is configured to:
in the case that a third preset gesture is detected, perform recognition processing on the acquired handwriting trace.
14. The device according to claim 10, characterized in that the acquisition module is configured to:
shoot a video through the camera, and identify the fingertip position of the user in each video frame according to the chronological order of the video frames, so as to determine the handwriting trace produced by the user moving the fingertip.
15. The device according to claim 10, characterized in that the acquisition module is configured to:
continuously shoot images through the camera, and identify the fingertip position of the user in each image according to the order in which the images were shot, so as to determine the handwriting trace produced by the user moving the fingertip.
16. The device according to claim 10 or 13, characterized in that the recognition processing module is configured to:
in the case that the recognition result includes multiple candidates, display the multiple candidates.
17. The device according to claim 10, characterized in that the device further comprises:
a character box display module, configured to display a character box in the case that the current virtual reality scene requires character input.
18. The device according to claim 17, characterized in that the device further comprises:
a prompting module, configured to prompt the user to write within the character box.
19. A virtual reality device, characterized in that it comprises a virtual reality glasses box and a mobile terminal, wherein the virtual reality glasses box includes an opening, the mobile terminal includes a camera and the virtual-reality-based input device according to any one of claims 10 to 18, and the position of the camera corresponds to the position of the opening so that the camera is able to shoot.
20. The device according to claim 19, characterized in that the position of the opening is adjustable.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610808139.8A CN106406527A (en) | 2016-09-07 | 2016-09-07 | Input method and device based on virtual reality and virtual reality device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106406527A true CN106406527A (en) | 2017-02-15 |
Family
ID=57998984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610808139.8A Pending CN106406527A (en) | 2016-09-07 | 2016-09-07 | Input method and device based on virtual reality and virtual reality device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106406527A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107015645A (en) * | 2017-03-24 | 2017-08-04 | 广州幻境科技有限公司 | A kind of character input method based on gesture |
CN107273806A (en) * | 2017-05-18 | 2017-10-20 | 上海斐讯数据通信技术有限公司 | A kind of painting and calligraphy exercising method and system based on virtual reality |
CN107368179A (en) * | 2017-06-12 | 2017-11-21 | 广东网金控股股份有限公司 | The input method and device of a kind of virtual reality system |
CN108459782A (en) * | 2017-02-17 | 2018-08-28 | 阿里巴巴集团控股有限公司 | A kind of input method, device, equipment, system and computer storage media |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101174332A (en) * | 2007-10-29 | 2008-05-07 | 张建中 | Method, device and system for interactively combining real-time scene in real world with virtual reality scene |
US20090190049A1 (en) * | 2007-12-14 | 2009-07-30 | Hyung Ki Hong | Electrically-driven liquid crystal lens and stereoscopic display device using the same |
CN102200830A (en) * | 2010-03-25 | 2011-09-28 | 夏普株式会社 | Non-contact control system and control method based on static gesture recognition |
CN102662465A (en) * | 2012-03-26 | 2012-09-12 | 北京国铁华晨通信信息技术有限公司 | Method and system for inputting visual character based on dynamic track |
CN103092343A (en) * | 2013-01-06 | 2013-05-08 | 深圳创维数字技术股份有限公司 | Control method based on camera and mobile terminal |
CN105242776A (en) * | 2015-09-07 | 2016-01-13 | 北京君正集成电路股份有限公司 | Control method for intelligent glasses and intelligent glasses |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102173123B1 (en) | Method and apparatus for recognizing object of image in electronic device | |
CN104104768B (en) | The device and method of additional information are provided by using calling party telephone number | |
CN103688273B (en) | Amblyopia user is aided in carry out image taking and image review | |
CN106326406B (en) | Question searching method and device applied to electronic terminal | |
CN107835366B (en) | Multimedia playing method, device, storage medium and electronic equipment | |
CN106406527A (en) | Input method and device based on virtual reality and virtual reality device | |
CN110602516A (en) | Information interaction method and device based on live video and electronic equipment | |
CN108958503A (en) | input method and device | |
CN106055707A (en) | Bullet screen display method and device | |
CN109871843A (en) | Character identifying method and device, the device for character recognition | |
CN106612396A (en) | Photographing device, photographing terminal and photographing method | |
EP4300431A1 (en) | Action processing method and apparatus for virtual object, and storage medium | |
CN107885483A (en) | Method of calibration, device, storage medium and the electronic equipment of audio-frequency information | |
CN109815462A (en) | A kind of document creation method and terminal device | |
CN112351327A (en) | Face image processing method and device, terminal and storage medium | |
CN108256071B (en) | Method and device for generating screen recording file, terminal and storage medium | |
CN109460556A (en) | A kind of interpretation method and device | |
CN108898649A (en) | Image processing method and device | |
CN108833952A (en) | The advertisement placement method and device of video | |
CN108174270A (en) | Data processing method, device, storage medium and electronic equipment | |
CN103984415B (en) | A kind of information processing method and electronic equipment | |
CN113936697B (en) | Voice processing method and device for voice processing | |
KR20200127928A (en) | Method and apparatus for recognizing object of image in electronic device | |
CN110363187B (en) | Face recognition method, face recognition device, machine readable medium and equipment | |
CN112381091A (en) | Video content identification method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170215 |