CN111007942A - Wearable device and input method thereof - Google Patents

Wearable device and input method thereof

Info

Publication number
CN111007942A
CN111007942A
Authority
CN
China
Prior art keywords
preset
wearable device
character
input method
characters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911360493.9A
Other languages
Chinese (zh)
Inventor
张超 (Zhang Chao)
刘超 (Liu Chao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Optical Technology Co Ltd
Original Assignee
Goertek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Inc
Priority to CN201911360493.9A
Priority to PCT/CN2019/130475 (published as WO2021128414A1)
Publication of CN111007942A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02CSPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C11/00Non-optical adjuncts; Attachment thereof
    • G02C11/10Electronic devices other than hearing aids
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type

Abstract

The invention discloses an input method for a wearable device in which preset characters are superimposed, through a display lens, on preset regions of the hand, so that the user can see the preset characters displayed on those regions through the lens, select a preset character by pointing a preset feature of a finger at the corresponding preset region, and have the operation corresponding to that character executed. The hand itself therefore serves as the input means: no additional peripheral is needed, the drawbacks of poor convenience with wired connections and low reliability with wireless connections are avoided, and portability and reliability are high. In addition, the applicable range and usage scenarios of the wearable device are expanded, improving its practicality and adoption. The invention also discloses a wearable device with the same beneficial effects as the input method.

Description

Wearable device and input method thereof
Technical Field
The invention relates to the technical field of wearable devices, and in particular to a wearable device and an input method thereof.
Background
AR (Augmented Reality) glasses are a new type of glasses for augmented-reality applications. Most AR glasses provide diversified functions such as display, photography, video calls, text processing, e-mail, and gaming. On top of the real scene, the wearer can see virtual information and even interact with it, making AR glasses a new form of future intelligent hardware.
External input devices (hereinafter, peripherals) of existing AR glasses generally include a keyboard, a handle, a stylus, a ring, and the like. A peripheral can be connected to the AR glasses by wire, but such a connection is inconvenient to use, store, and carry. A peripheral can instead be connected wirelessly, for example over Bluetooth or WiFi; this removes the inconvenience of the wired connection but places performance demands on the battery module, and a wireless peripheral is easily misplaced. Once the battery is exhausted or the peripheral is lost, most functions of the AR glasses are limited or even unusable, so reliability is low.
Therefore, how to provide a solution to the above technical problem is an issue that needs to be addressed by those skilled in the art.
Disclosure of Invention
The invention aims to provide a wearable device and an input method thereof that avoid drawbacks such as the poor convenience of wired connections and the low reliability of wireless connections, and that offer high portability and reliability; in addition, they expand the applicable range and usage scenarios of the wearable device, improving its practicality and adoption.
In order to solve the above technical problem, the present invention provides an input method for a wearable device, where the wearable device includes a display lens and an image acquisition device for photographing the area in front of the display lens to obtain a front image, the input method including:
identifying a preset area of a hand of a user from the front image;
displaying preset characters to the preset area in an overlapping mode through the display lens;
recognizing preset characteristics of an input finger from the front image;
and judging whether the preset feature satisfies an input confirmation condition, and if so, executing the operation corresponding to the preset character of the preset area where the preset feature is located.
Preferably, when the preset character corresponding to the preset region where the preset feature is located is a text character,
executing the operation corresponding to that preset character includes:
composing text from the text characters.
Preferably, after composing the text from the text characters, the method further includes:
judging whether the hand of the user executes a preset action or not through the front image;
and if so, executing the functional operation corresponding to the preset action.
Preferably, when the preset character corresponding to the preset region where the preset feature is located is a functional character,
executing the operation corresponding to that preset character includes:
executing the functional operation corresponding to the functional character.
Preferably, the functional characters include confirmation characters and/or deletion characters;
executing a functional operation corresponding to the functional character, including:
and executing a confirmation operation and/or a deletion operation corresponding to the functional character.
Preferably, the preset region includes a knuckle of a finger and/or the palm, and the preset feature is a fingernail.
Preferably, the judging whether the preset feature satisfies an input confirmation condition includes:
and judging whether the preset feature has stopped moving and whether the duration for which it has stopped exceeds a duration threshold.
Preferably, the judging whether the preset feature satisfies an input confirmation condition includes:
and judging whether the distance between the preset feature and the preset region where it is located is smaller than a distance threshold.
Preferably, the determining whether the distance between the preset feature and the preset region where the preset feature is located is smaller than a distance threshold includes:
acquiring first depth information of the preset feature and second depth information of the preset region where the preset feature is located;
subtracting the first depth information from the second depth information to obtain a difference value;
and judging whether the difference value is smaller than the distance threshold.
In order to solve the above technical problem, the present invention further provides a wearable device, which includes a display lens and further includes:
an image acquisition device for photographing the area in front of the display lens to obtain a front image;
an optical engine for projecting preset characters onto the display lens under the control of the processor;
and the processor, configured to implement the steps of the above input method of the wearable device when executing a computer program.
The invention provides an input method for a wearable device in which preset characters are superimposed, through a display lens, on preset regions of the hand, so that the user can conveniently see the preset characters displayed on those regions through the lens, select a preset character by pointing a preset feature of a finger at the corresponding preset region, and have the operation corresponding to that character executed. The hand itself therefore serves as the input means: no additional peripheral is needed, the drawbacks of poor convenience with wired connections and low reliability with wireless connections are avoided, and portability and reliability are high. In addition, the applicable range and usage scenarios of the wearable device are expanded, improving its practicality and adoption.
The invention also provides wearable equipment which has the same beneficial effects as the input method.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the prior art and the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a process flow diagram of an input method of a wearable device provided in the present invention;
fig. 2 is a schematic structural diagram of a wearable device provided in the present invention;
FIG. 3 is a diagram illustrating a correspondence relationship between a predetermined region of a hand and a predetermined character according to the present invention;
fig. 4 is a diagram of a positional relationship between a hand and a wearable device when data input is performed by the wearable device provided by the present invention;
FIG. 5 is a schematic diagram of a right-hand input according to the present invention;
FIG. 6 is a schematic diagram of a user making a fist according to the present invention;
fig. 7 is a schematic structural diagram of another wearable device provided by the present invention.
Detailed Description
The core of the invention is to provide a wearable device and an input method thereof that avoid drawbacks such as the poor convenience of wired connections and the low reliability of wireless connections, and that offer high portability and reliability; in addition, they expand the applicable range and usage scenarios of the wearable device, improving its practicality and adoption.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the present application, the wearable device can superimpose virtual content on the real scene; it may be, but is not limited to, AR glasses. To solve the prior-art problems that peripherals of wearable devices are inconvenient when wired and unreliable when wireless, the design idea of this application is as follows: divide the user's hand into a plurality of regions in advance according to the creases of the fingers, and establish a correspondence between each region and a character. In subsequent use, the device only needs to recognize the preset regions of the user's hand and superimpose each preset character on its preset region; the user then indicates which character to select with a preset feature of a finger of the other hand, and the operation corresponding to that character is executed.
Referring to fig. 1, fig. 1 is a process flow chart of an input method of a wearable device provided by the present invention.
The wearable device includes a display lens and an image acquisition device for photographing the area in front of the display lens to obtain a front image.
The input method comprises the following steps:
S11: identifying a preset region of the user's hand from the front image;
S12: superimposing preset characters on the preset region through the display lens;
S13: recognizing a preset feature of an input finger from the front image;
S14: judging whether the preset feature satisfies an input confirmation condition, and if so, proceeding to S15;
S15: executing the operation corresponding to the preset character of the preset region where the preset feature is located.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a wearable device provided by the present invention. The wearable device is provided with a display lens and an image acquisition device, and further includes an optical engine and a processor. The image acquisition device may be a camera, which may be the camera already present on the wearable device; it photographs the area in front of the display lens in real time or periodically to obtain a front image, and the processor performs image recognition on the front image to identify the preset regions and the preset feature. In operation, the processor projects virtual images and other information through the optical engine into the coupling-in area of the display lens; the lens couples the projected light into its glass substrate, the light propagates by total internal reflection to the coupling-out area, and the projected light is then released from the display lens. Meanwhile, the outside world remains visible through the display lens, so virtual and real scenes can be superimposed.
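For illustration, the capture, recognition, and overlay cycle described above can be sketched in Python as a simple frame loop. This is a minimal sketch, not the patent's implementation: OpenCV is assumed to be available for camera access, and recognize_hand_regions and project_labels are hypothetical placeholders for the image-recognition and optical-engine steps.

```python
import cv2  # OpenCV, assumed available for camera capture and drawing

def recognize_hand_regions(frame):
    """Hypothetical placeholder: return {region_id: (x, y, w, h)} bounding
    boxes for the knuckle/palm preset regions found in the frame."""
    return {}

def project_labels(frame, regions, layout):
    """Hypothetical placeholder: draw each region's character roughly where
    the optical engine would superimpose it on the display lens."""
    for region_id, (x, y, w, h) in regions.items():
        label = layout.get(region_id, "")
        cv2.putText(frame, label, (x, y + h // 2),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame

def run(layout):
    cap = cv2.VideoCapture(0)  # stands in for the forward-facing camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        regions = recognize_hand_regions(frame)         # step S11
        frame = project_labels(frame, regions, layout)  # step S12
        cv2.imshow("preview", frame)  # desktop stand-in for the display lens
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```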
Specifically, referring to fig. 3, fig. 3 is a diagram illustrating a correspondence between preset regions of the hand and preset characters according to the present invention. In practical applications, the preset regions of the hand may first be delimited using the creases of the fingers, the shape of the hand, and the like. A preset region may be a knuckle of a finger or the palm, and the knuckles may be one or more knuckles of the thumb, index finger, middle finger, ring finger, and little finger. Each preset region is also given a function, that is, a preset character is defined for it. A character may be a single character or a character string: single characters include digits, letters, operators, punctuation marks, and so on, while character strings may be English (e.g., delete), Chinese (e.g., confirm, delete), emoticons, and the like; the present application is not limited in this respect.
In addition, a preset region may correspond to one character or to several characters; in fig. 3, for example, each knuckle corresponds to several characters. Specifically, the creases between the knuckles divide the palm side of the hand into 3 × 4 = 12 different preset regions, and the processor assigns preset characters to the hand according to a set algorithm similar to the nine-grid input method, based on the position of each preset region. For example, when the left hand carries the nine-grid layout, the preset regions of the index finger are assigned, from left to right, roughly as '1 .,?!', '2 ABC', and '3 DEF', and the remaining knuckles are assigned the remaining letters or values in the usual input manner. Depending on the input mode (numbers, common symbols, and so on), the assignment algorithm for the preset regions can be changed according to actual requirements. This finger nine-grid input method gives the wearable device greater stickiness among future intelligent hardware products, making it more practical and easier to popularize.
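The region-to-character assignment can be represented as a simple lookup table. The Python sketch below encodes one plausible 3 × 4 layout; the region keys and the exact character groupings are illustrative assumptions, since the patent text only specifies a nine-grid-like assignment.

```python
# One plausible encoding of the 3 x 4 layout of Fig. 3 (region keys and
# groupings are illustrative assumptions, not taken from the patent text).
T9_LAYOUT = {
    ("index", 0): "1 .,?!", ("index", 1): "2 ABC", ("index", 2): "3 DEF",
    ("middle", 0): "4 GHI", ("middle", 1): "5 JKL", ("middle", 2): "6 MNO",
    ("ring", 0): "7 PQRS",  ("ring", 1): "8 TUV",  ("ring", 2): "9 WXYZ",
    ("little", 0): "mode",  ("little", 1): "0",    ("little", 2): "del",
}

# e.g. the middle segment of the ring finger carries "8 TUV"
print(T9_LAYOUT[("ring", 1)])
```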
After the correspondence between the preset regions of the hand and the preset characters has been established, when the user needs text input while using the wearable device (entering a password, replying to a text message, and so on), the user can extend one hand (for example, the left hand) in front of the display lens; see fig. 4, which shows the positional relationship between the hand and the wearable device during data input. The processor then performs image recognition on the front image collected by the image acquisition device to identify the preset regions of the hand. Once they are identified, the processor projects the virtual preset characters onto the display lens according to the correspondence between preset regions and preset characters; since the user sees the hand through the display lens, from the user's viewpoint the preset characters appear displayed on the preset regions of the hand.
It should be noted that even if the hand moves during use, the image acquisition device captures the front image in real time or periodically (with a very short period), so the processor can again accurately identify the preset regions in the new front image and superimpose the preset characters on them through the display lens. The preset characters thus follow the preset regions of the hand, keeping characters and regions matched in real time.
Referring to fig. 5, fig. 5 is a schematic diagram of a right-hand input according to the present invention.
After seeing the preset characters displayed on the preset regions, when the user wants to select a character, the user moves a finger of the other hand (for example, the right hand), referred to in this application as the input finger, onto the preset region corresponding to that character. After the preset feature of the input finger is recognized in the front image, the device judges whether the preset feature satisfies the input confirmation condition; if so, it executes the operation corresponding to the preset character of the preset region where the preset feature is located. The input confirmation condition is used to decide whether the user actually intends to select the preset character at that position.
In addition, some preset characters are text characters that are input directly; the processor receives them and composes text from them, for example a password or a Chinese-character message. Other preset characters are used to perform corresponding functions and do not themselves form text, for example a character meaning 'confirm' after a password has been entered. Different types of preset characters therefore require different operations from the processor, and the specific operation to perform can be predefined for each preset character.
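The paragraph above amounts to a dispatch on character type: text characters extend a buffer, functional characters trigger an operation. A minimal Python sketch of that dispatch follows; the Editor class and the FUNCTION_OPS set are illustrative assumptions, not names from the patent.

```python
FUNCTION_OPS = {"confirm", "delete"}  # assumed functional characters

class Editor:
    """Minimal text buffer driven by region selections."""

    def __init__(self):
        self.buffer = []

    def on_select(self, char):
        if char in FUNCTION_OPS:   # functional character -> perform a function
            self.apply(char)
        else:                      # text character -> extend the text
            self.buffer.append(char)

    def apply(self, op):
        if op == "delete" and self.buffer:
            self.buffer.pop()
        elif op == "confirm":
            print("submitted:", "".join(self.buffer))
            self.buffer.clear()

editor = Editor()
for c in ["1", "2", "delete", "3", "confirm"]:
    editor.on_select(c)            # prints: submitted: 13
```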
In summary, the present invention provides an input method for a wearable device in which preset characters are superimposed on preset regions of the hand through the display lens, so that the user sees the preset characters displayed on those regions, selects one by pointing a preset feature of a finger at the corresponding preset region, and has the operation corresponding to that character executed. The hand itself is the input means: no additional peripheral is needed, the drawbacks of poor convenience with wired connections and low reliability with wireless connections are avoided, portability and reliability are high, the device is more integrated, and interaction with the virtual scene becomes stronger, more immersive, and more interesting. In addition, the applicable range and usage scenarios of the wearable device are expanded, improving its practicality and adoption.
On the basis of the above-described embodiment:
In a preferred embodiment, when the preset character corresponding to the preset region where the preset feature is located is a text character,
executing the operation corresponding to that preset character includes:
composing text from the text characters.
Specifically, text characters are usually digits, letters, punctuation marks, and the like. When a password or Chinese characters are input, the text is composed from text characters: a password is formed directly from digits and letters, while Chinese characters are obtained by spelling with letters. When the preset character selected via the preset feature is determined to be a text character, the processor composes text from it directly. In doing so, the processor may decide, according to the current scene (for example, password-input mode versus message-reply mode), whether to use the character directly as text or to derive the text through further processing.
In addition, taking Chinese-character reply as an example, how the processor handles received letters also depends on the keyboard mode in which the preset characters are projected and on the corresponding input method. In figs. 3 and 5, the preset characters are projected onto the preset regions in a layout similar to a nine-grid keyboard, so the processor processes text characters in the manner of a nine-grid input method: combining the letters input by the user with the input method, it derives candidate Chinese characters and displays them in other preset regions, such as the palm. Because the same pinyin may correspond to several Chinese characters, the processor may display several candidates on the palm for the user to choose from.
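As a toy illustration of this nine-grid pinyin step, the sketch below maps a digit sequence to the pinyin syllables it could spell and then to Chinese-character candidates. The tiny syllable dictionary is an invented example; a real input method would use a full lexicon and a frequency model.

```python
# Digit assigned to each letter under a standard nine-grid keypad.
DIGIT_OF = {c: d for d, letters in {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}.items() for c in letters}

# Invented mini-lexicon: pinyin syllable -> candidate Chinese characters.
PINYIN_DICT = {"ni": ["你", "尼", "泥"], "mi": ["米", "迷"], "hao": ["好", "号"]}

def candidates(digits):
    """Return candidates whose pinyin spells the given digit sequence."""
    out = []
    for syllable, chars in PINYIN_DICT.items():
        if "".join(DIGIT_OF[c] for c in syllable) == digits:
            out.extend(chars)
    return out

print(candidates("64"))   # "ni" and "mi" both spell 64 -> 你 尼 泥 米 迷
print(candidates("426"))  # "hao" -> 好 号
```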
For example, suppose a password composed of digits and letters must be entered; see fig. 5. A selected letter can be used directly as text. When switching between letters and digits is needed, the user can select the preset region corresponding to '123'; after the processor determines, via the preset feature of the input finger, that this region has been selected, it switches the letters in the preset regions to digits so that the user can then select digits, and each selected digit is used directly as text by the processor and displayed.
In this way, the hand itself is the input means: no additional peripheral is needed, the drawbacks of poor convenience with wired connections and low reliability with wireless connections are avoided, portability and reliability are high, the device is more integrated, and interaction with the virtual scene becomes stronger, more immersive, and more interesting. In addition, the applicable range and usage scenarios of the wearable device are expanded, improving its practicality and adoption.
As a preferred embodiment, after composing the text from the text characters, the method further includes:
judging, from the front image, whether the user's hand performs a preset action;
and if so, executing the functional operation corresponding to the preset action.
Specifically, after the user has finished entering text, operations such as confirmation, sending, or deletion may follow. Because the number of preset regions on the hand is limited, this embodiment implements such functions through preset actions performed by the user, which reduces how many preset regions they occupy. The functional operation may be a confirmation operation, a sending operation, a deletion operation, and so on; the preset actions and the functional operations corresponding to them can be defined according to actual needs.
Referring to fig. 6, fig. 6 is a schematic view of a user making a fist according to the present invention. In practical applications, making a fist may be set as the confirmation operation. For example, after entering a password the user may make a fist (with either the left or the right hand); when the processor determines from the front image that the user's hand has formed a fist, the password input can be considered complete. Likewise, after editing a reply text, the user may make a fist, and when the processor determines from the front image that the user's hand has formed a fist, it sends the reply text.
This approach therefore reduces how many preset regions on the hand the functional operations occupy, making it possible to extend more functions.
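A gesture-triggered function of this kind reduces to a small lookup from recognized hand pose to operation. The Python sketch below shows that shape under stated assumptions: detect_gesture is a hypothetical placeholder for the image analysis, and the GESTURE_OPS table is an invented example mapping.

```python
GESTURE_OPS = {"fist": "confirm"}  # assumed preset-action table

def detect_gesture(frame):
    """Hypothetical placeholder for the image analysis that classifies the
    hand pose in the front image; returns e.g. "fist" or None."""
    return None

def handle_frame(frame, execute_op):
    gesture = detect_gesture(frame)
    op = GESTURE_OPS.get(gesture)
    if op is not None:
        execute_op(op)  # e.g. confirm the password or send the reply
```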
As a preferred embodiment, when the preset character corresponding to the preset area where the preset feature is located is a functional character,
executing the operation corresponding to that preset character includes:
executing the functional operation corresponding to the functional character.
Besides completing confirmation, sending, deletion, and similar operations after text input via the preceding embodiments, functional characters can also be defined to implement these functions. Taking figs. 3 and 5 as an example, a knuckle of the thumb or the palm can be mapped to functional characters, which may include a confirmation character and/or a deletion character; accordingly, executing the functional operation corresponding to the functional character includes performing the confirmation operation and/or deletion operation corresponding to it.
Of course, the functional characters herein may also include other characters, and the present application is not particularly limited herein.
As a preferred embodiment, the preset region includes a knuckle of a finger and/or the palm, and the preset feature is a fingernail.
Specifically, because there are deep creases between the knuckles of each finger, the knuckles and the palm can be identified accurately by combining those creases with the shape of the palm. Furthermore, a fingernail is clearly distinguishable from the other parts of the finger, so the fingernail can be chosen as the preset feature and recognized by its shape. Of course, the preset region and the preset feature may also be other regions and features; the present application is not specifically limited here, and the choice depends on the actual situation.
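One crude way to sketch shape-based nail detection is to threshold the image and keep small, roughly nail-proportioned blobs. The Python/OpenCV sketch below illustrates the idea only; the thresholding method, area bounds, and aspect-ratio test are arbitrary assumptions, and a production system would more likely use a trained detector.

```python
import cv2

def find_nail_candidates(frame_bgr, min_area=50, max_area=800):
    """Return bounding boxes of small, roughly nail-proportioned blobs."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    nails = []
    for c in contours:
        area = cv2.contourArea(c)
        if min_area < area < max_area:        # nail-sized blob
            x, y, w, h = cv2.boundingRect(c)
            if 0.5 < w / max(h, 1) < 2.0:     # roughly as wide as tall
                nails.append((x, y, w, h))
    return nails
```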
As a preferred embodiment, the determining whether the preset feature satisfies the input confirmation condition includes:
and judging whether the preset feature has stopped moving and whether the duration for which it has stopped exceeds a duration threshold.
To distinguish the preset region the user intends to select from the other preset regions the input finger merely passes over while moving, the preset feature of the input finger should dwell on the region to be selected for a period of time.
Specifically, the processor judges whether the preset feature has stopped moving; if so, it judges whether the duration of the stop exceeds a duration threshold; and if it does, the preset feature satisfies the input confirmation condition, and the preset region where the preset feature is located is the region the user wants to select. The duration threshold may be, but is not limited to, 2 s; this embodiment does not limit it specifically, and it depends on the actual situation. This judgment method is simple and highly reliable.
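The dwell test ("stopped moving, and for long enough") can be expressed as a small stateful check fed with the feature's position every frame. The Python sketch below is a minimal version; the 2 s hold time matches the example above, while the 5-pixel motion tolerance is an assumed value.

```python
import math
import time

class DwellDetector:
    """Reports selection once the tracked feature has stayed put long enough."""

    def __init__(self, hold_seconds=2.0, move_tolerance_px=5.0):
        self.hold_seconds = hold_seconds            # duration threshold (2 s example)
        self.move_tolerance_px = move_tolerance_px  # assumed jitter allowance
        self._anchor = None   # (x, y) where the feature settled
        self._since = None    # time when it settled

    def update(self, x, y, now=None):
        """Feed the preset feature's pixel position each frame; returns True
        once it has stayed within tolerance for hold_seconds."""
        now = time.monotonic() if now is None else now
        if (self._anchor is None or
                math.dist(self._anchor, (x, y)) > self.move_tolerance_px):
            self._anchor, self._since = (x, y), now  # moved: restart the clock
            return False
        return now - self._since >= self.hold_seconds
```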
As a preferred embodiment, the determining whether the preset feature satisfies the input confirmation condition includes:
and judging whether the distance between the preset feature and the preset region where it is located is smaller than a distance threshold.
The applicant considers that when the user wants to select a preset region, the user may also do so by bringing the input finger close to, or touching, that region. Therefore, in this embodiment the processor may instead judge whether the distance between the preset feature and the preset region where it is located is smaller than a distance threshold; if so, the preset feature satisfies the input confirmation condition, and that preset region is the region the user wants to select. The distance threshold is set in connection with the choice of preset feature: for example, when the preset feature is the fingernail, the nail remains a certain distance from the preset region even when the input finger touches it. This embodiment does not limit the distance threshold specifically; it depends on the actual situation. This judgment method is likewise simple and highly reliable.

As a preferred embodiment, judging whether the distance between the preset feature and the preset region where the preset feature is located is smaller than a distance threshold includes:
acquiring first depth information of the preset feature and second depth information of the preset region where the preset feature is located;
subtracting the first depth information from the second depth information to obtain a difference value;
and judging whether the difference value is smaller than the distance threshold.
Specifically, to obtain the distance between the preset feature and the preset region where it is located, the image acquisition device may acquire first depth information D1 of the preset feature and second depth information D2 of that preset region; the difference ΔD = D2 - D1 is then computed. If ΔD is smaller than the distance threshold, the preset feature is close to the preset region at that moment, and the preset region where the preset feature is located is the region the user wants to select.
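This check is a one-line computation once the two depth readings are available. The sketch below transcribes it directly; the millimetre units and the 15 mm threshold are deployment-specific assumptions, not values from the patent.

```python
def is_selected(d1_feature_mm: float, d2_region_mm: float,
                threshold_mm: float = 15.0) -> bool:
    """True when the nail (depth D1) is close enough to the region (depth D2)."""
    delta = d2_region_mm - d1_feature_mm  # dD = D2 - D1, both from the camera
    return delta < threshold_mm

# Example: nail at 412 mm, knuckle surface at 420 mm -> dD = 8 mm -> selected.
assert is_selected(412.0, 420.0)
assert not is_selected(300.0, 420.0)  # finger far above the hand: no selection
```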
Referring to fig. 7, fig. 7 is a schematic structural diagram of another wearable device provided by the present invention. The wearable device includes a display lens 1 and further includes:
an image acquisition device 2 for photographing the area in front of the display lens 1 to obtain a front image;
an optical engine 3 for projecting preset characters onto the display lens 1 under the control of the processor 4;
and a processor 4 for implementing the steps of the input method of the wearable device described above when executing the computer program.
For an introduction to the wearable device provided by the present invention, please refer to the method embodiments above; details are not repeated here.
It should be noted that, in the present specification, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An input method of a wearable device, wherein the wearable device comprises a display lens and an image acquisition device for photographing the area in front of the display lens to obtain a front image, the input method comprising:
identifying a preset area of a hand of a user from the front image;
displaying preset characters to the preset area in an overlapping mode through the display lens;
recognizing preset characteristics of an input finger from the front image;
and judging whether the preset feature satisfies an input confirmation condition, and if so, executing the operation corresponding to the preset character of the preset area where the preset feature is located.
2. The input method of the wearable device according to claim 1, wherein when a preset character corresponding to a preset region where the preset feature is located is a text character,
executing the operation corresponding to the preset character of the preset region where the preset feature is located comprises:
composing text from the text characters.
3. The input method of the wearable device of claim 2, after composing text from the text characters, further comprising:
judging whether the hand of the user executes a preset action or not through the front image;
and if so, executing the functional operation corresponding to the preset action.
4. The input method of the wearable device according to claim 1, wherein when a preset character corresponding to a preset region where the preset feature is located is a functional character,
executing the operation corresponding to the preset character of the preset region where the preset feature is located comprises:
executing the functional operation corresponding to the functional character.
5. The input method of the wearable device according to claim 4, wherein the functional character comprises a confirmation character and/or a deletion character;
executing a functional operation corresponding to the functional character, including:
and executing a confirmation operation and/or a deletion operation corresponding to the functional character.
6. The input method of the wearable device according to claim 1, wherein the preset region comprises a knuckle of a finger and/or the palm, and the preset feature is a fingernail.
7. The input method of the wearable device of any of claims 1 to 6, wherein determining whether the preset feature satisfies an input confirmation condition comprises:
and judging whether the preset feature has stopped moving and whether the duration for which it has stopped exceeds a duration threshold.
8. The input method of the wearable device of any of claims 1 to 6, wherein determining whether the preset feature satisfies an input confirmation condition comprises:
and judging whether the distance between the preset feature and the preset area where the preset feature is located is smaller than a distance threshold.
9. The input method of the wearable device of claim 8, wherein determining whether the distance between the preset feature and the preset area where the preset feature is located is less than a distance threshold comprises:
acquiring first depth information of the preset feature and second depth information of the preset area where the preset feature is located;
subtracting the first depth information from the second depth information to obtain a difference value;
and judging whether the difference value is smaller than a distance threshold value.
10. A wearable device comprising a display lens, further comprising:
the image acquisition device is used for photographing the front of the display lens to obtain a front image;
the optical engine is used for projecting preset characters onto the display lens based on the control of the processor;
the processor, when executing the computer program, implementing the steps of the input method of the wearable device according to any of claims 1 to 9.
CN201911360493.9A 2019-12-25 2019-12-25 Wearable device and input method thereof Pending CN111007942A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911360493.9A CN111007942A (en) 2019-12-25 2019-12-25 Wearable device and input method thereof
PCT/CN2019/130475 WO2021128414A1 (en) 2019-12-25 2019-12-31 Wearable device and input method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911360493.9A CN111007942A (en) 2019-12-25 2019-12-25 Wearable device and input method thereof

Publications (1)

Publication Number Publication Date
CN111007942A true CN111007942A (en) 2020-04-14

Family

ID=70118662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911360493.9A Pending CN111007942A (en) 2019-12-25 2019-12-25 Wearable device and input method thereof

Country Status (2)

Country Link
CN (1) CN111007942A (en)
WO (1) WO2021128414A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160171780A1 (en) * 2011-07-03 2016-06-16 Neorai Vardi Computer device in form of wearable glasses and user interface thereof
KR101947034B1 (en) * 2012-08-21 2019-04-25 삼성전자 주식회사 Apparatus and method for inputting of portable device
CN104793731A (en) * 2015-01-04 2015-07-22 北京君正集成电路股份有限公司 Information input method for wearable device and wearable device
US20180329209A1 (en) * 2016-11-24 2018-11-15 Rohildev Nattukallingal Methods and systems of smart eyeglasses
JP2018142168A (en) * 2017-02-28 2018-09-13 セイコーエプソン株式会社 Head mounted type display device, program, and control method for head mounted type display device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6771294B1 (en) * 1999-12-29 2004-08-03 Petri Pulli User interface
CN104272225A (en) * 2012-05-09 2015-01-07 索尼公司 Information processing device, information processing method, and program
CN105980965A (en) * 2013-10-10 2016-09-28 视力移动科技公司 Systems, devices, and methods for touch-free typing
US20150253862A1 (en) * 2014-03-06 2015-09-10 Lg Electronics Inc. Glass type mobile terminal
US20150309629A1 (en) * 2014-04-28 2015-10-29 Qualcomm Incorporated Utilizing real world objects for user input
CN108834066A (en) * 2018-06-27 2018-11-16 三星电子(中国)研发中心 Method and apparatus for generating information

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113347526A (en) * 2021-07-08 2021-09-03 歌尔科技有限公司 Sound effect adjusting method and device of earphone and readable storage medium
CN113347526B (en) * 2021-07-08 2022-11-22 歌尔科技有限公司 Sound effect adjusting method and device of earphone and readable storage medium

Also Published As

Publication number Publication date
WO2021128414A1 (en) 2021-07-01

Similar Documents

Publication Publication Date Title
CN106845335B (en) Gesture recognition method and device for virtual reality equipment and virtual reality equipment
EP2717123A2 (en) Input method and apparatus of portable device
US11782514B2 (en) Wearable device and control method thereof, gesture recognition method, and control system
CN103914138A (en) Identification and use of gestures in proximity to a sensor
US10621766B2 (en) Character input method and device using a background image portion as a control region
CN109074224A (en) For the method for insertion character and corresponding digital device in character string
CN111142673A (en) Scene switching method and head-mounted electronic equipment
CN111596757A (en) Gesture control method and device based on fingertip interaction
EP4307096A1 (en) Key function execution method, apparatus and device, and storage medium
CN107239222A (en) The control method and terminal device of a kind of touch-screen
Tung et al. RainCheck: overcoming capacitive interference caused by rainwater on smartphones
CN112463016B (en) Display control method and device, electronic equipment and wearable display equipment
US11500453B2 (en) Information processing apparatus
CN111007942A (en) Wearable device and input method thereof
JP6033061B2 (en) Input device and program
KR101559424B1 (en) A virtual keyboard based on hand recognition and implementing method thereof
CN106445152A (en) Method for managing menus in virtual reality environments and virtual reality equipment
CN116360589A (en) Method and medium for inputting information by virtual keyboard and electronic equipment
CN117480483A (en) Text input method for augmented reality device
CN112698723B (en) Payment method and device and wearable equipment
CN114578956A (en) Equipment control method and device, virtual wearable equipment and storage medium
CN112416121A (en) Intelligent interaction method and device based on object and gesture induction and storage medium
US11054941B2 (en) Information processing system, information processing method, and program for correcting operation direction and operation amount
CN112818825A (en) Working state determination method and device
CN111062360A (en) Hand tracking system and tracking method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201013

Address after: 261031 north of Yuqing street, east of Dongming Road, high tech Zone, Weifang City, Shandong Province (Room 502, Geer electronic office building)

Applicant after: GoerTek Optical Technology Co.,Ltd.

Address before: 261031 No. 268 Dongfang Road, Weifang hi tech Industrial Development Zone, Shandong, Weifang

Applicant before: GOERTEK Inc.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200414