WO2019003747A1 - Wearable terminal - Google Patents
Wearable terminal
- Publication number
- WO2019003747A1 (application PCT/JP2018/020283, JP2018020283W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- input
- wearable terminal
- input member
- camera
- indicator
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
Definitions
- the present invention relates to a wearable terminal capable of performing input via an input member.
- wearable terminals such as head-mounted displays (HMDs) place the display in front of the user's eyes when worn on the head, so the user can view the displayed information without holding the display by hand.
- a keyboard widely used as a user interface can be connected to the HMD by wire or wirelessly, for example via Bluetooth (registered trademark), and characters and numbers can then be input by operating the keyboard.
- because a keyboard contains a large number of keys corresponding to letters and numbers, it is generally bulkier than a wearable terminal, and having to carry it for input risks spoiling the portability that is an advantage of the wearable terminal.
- Patent Document 1 discloses a technique of analyzing the position of a finger captured by a camera in the HMD, and controlling the input by associating the position information with the position information of the button displayed on the display.
- with Patent Document 1, parallax corresponding to the offset between the user's line of sight and the optical axis of the camera causes a shift between the reference screen and the operation screen.
- to correct this shift, the finger position must be analyzed in one coordinate system and matched against the display coordinate system.
- this matching requires complex arithmetic processing at high speed and a high-performance CPU or the like, which may significantly increase the cost of the wearable terminal.
- the present invention has been made in view of the above-described circumstances, and an object of the present invention is to provide a wearable terminal capable of performing input easily through a simple input member.
- a wearable terminal reflecting one aspect of the present invention is worn on the head of the user and inputs information via an input member in which at least two or more types of characters, numbers and/or symbols are each formed in a partitioned area and whose partitioned areas can be designated with an input indicator used by the user,
- a display unit that displays an image so that the user can view it;
- a camera which images a subject and outputs an image signal;
- a processing unit that processes an image signal output from the camera;
- the processing unit determines, based on the image signal of the subject output from the camera and the feature of the image of the input member stored in the storage unit, whether the subject imaged by the camera is at least a part of the input member.
- FIG. 1 is a perspective view of the HMD 100 constituting the wearable terminal according to this embodiment.
- FIG. 2 is a block diagram of the HMD 100 according to this embodiment.
- FIG. 3 is a diagram showing a keyboard.
- FIG. 4 is a diagram showing a state in which input is performed via the keyboard KB using the HMD 100 according to this embodiment.
- FIG. 5 is a flowchart showing the control for performing input via the keyboard KB using the HMD 100.
- FIG. 6 is a diagram showing an example of an image displayed on the image display unit 104. FIG. 7 is a diagram showing an input member according to a modification.
- FIG. 1 is a perspective view of an HMD 100 constituting a wearable terminal according to the present embodiment.
- FIG. 2 is a block diagram of the HMD 100 according to the present embodiment.
- the HMD 100 has a frame 101.
- the frame 101, which is substantially U-shaped as viewed from above, has the rectangular frame 102 attached to the lower part of its front end and the main body portion 103 attached above it.
- the main body unit 103 includes an image display unit 104, a camera 105 for imaging an object, and a microphone 106 which is a sound acquisition device.
- the main unit 103 is connected to the control box CB by the wiring CD, and is supplied with power from a battery (not shown) built in the control box CB.
- with the frame 101 mounted on the user's head, the rectangular frame 102 is positioned in front of the user's right eye.
- an image display unit 104 capable of through display is disposed in the rectangular frame 102.
- the control box CB includes a control unit (processing unit) 107 and a memory (storage unit) 108.
- the memory 108 stores the shape that is an inherent feature of the keyboard KB (see FIG. 3), the shape that is a feature of the user's finger FG functioning as an indicator (see FIG. 4) (or the unique shape or color of the seal mark MK attached to the nail of the finger FG), and the characteristics of the unique sound (keying sound) emitted when a key KY of the keyboard KB is pressed.
- the control unit 107 has a function of receiving an image signal output from the camera 105 and performing image processing and pattern matching to be described later.
- FIG. 3 is a diagram showing a keyboard.
- the keyboard KB as an input member is an actual keyboard here, but even so it is used without being connected to the HMD 100.
- a general keyboard KB is standardized (alphabet arrangement, JIS arrangement, etc.), and different types of characters, numbers and symbols (hereinafter, characters, etc.) are each formed in the center of a key (partitioned area) KY.
- a toy keyboard or a printed picture or illustration of a keyboard may be used as an input member, thereby improving portability. These are collectively referred to as "an object shaped like a keyboard”.
- the “partitioned area” refers to, for example, a range recognized as being surrounded by an edge formed around a character or the like in an image acquired by imaging.
- “to indicate with a pointer” includes, for example, the overlapping of the tip of the pointer with the sectioned area in an image acquired by imaging, and in this case, performing key operation with a finger.
- FIG. 4 is a diagram showing a state in which an input is performed via the keyboard KB using the HMD 100 according to the present embodiment, and is shown in the state of an image captured by the camera 105.
- a seal mark MK of a red triangle (but not limited to this color and shape) is attached to the nail of the user's index finger FG.
- Such a seal mark MK is easy to pattern match, which will be described later, and can be attached, for example, to the finger of a gloved hand, so that even a user working with a glove can input.
- input can also be performed without using the seal mark MK.
- FIG. 5 is a flowchart showing control for performing input via the keyboard KB using the HMD 100.
- FIG. 6 is a diagram showing an example of an image displayed on the image display unit 104. First, it is assumed that the HMD 100 is operating, the camera 105 is imaging a subject, and the control unit 107 is processing the image signal from the camera 105 and performing pattern matching as needed. However, at this point the input mode has not yet been executed, the HMD 100 can execute only its minimum functions, and a password input or the like is required to enable higher functions.
- pattern matching refers to a method of applying edge extraction to a reference image (the shape of the keyboard KB stored in the memory 108) and to the subject image obtained from the image signal acquired by imaging, generating a model edge from the reference image and a searched edge from the subject image, and calculating the similarity between the model edge and the searched edge.
- the control unit 107 causes the image display unit 104 to display an image such as that shown in FIG. 6, so that the user observing this image can tell that a part of the keyboard KB has come within the angle of view of the camera 105.
- in step S11 of FIG. 5, the control unit 107 waits until at least a part of the keyboard KB is imaged by the camera 105, and when it determines from the pattern matching result that at least a part of the keyboard KB has been imaged (the similarity described above exceeds the threshold), the flow proceeds to step S12.
- control unit 107 shifts to the input mode.
- the control unit 107 can cause the image display unit 104 to display a screen on which the input character can be confirmed as shown in FIG.
- the control unit 107 can infer that the imaged subject is identical to the keyboard KB whose shape is stored in the memory 108, so the position coordinates of each key, character, etc. become known in a two-dimensional coordinate system whose origin is, for example, the upper left corner of the keyboard KB. Therefore, even if a key KY is covered by the finger FG bearing the seal mark MK (or a bare finger), the characters, etc. formed on it can be recognized.
- the control unit 107 searches by pattern matching for whether the seal mark MK (or the finger FG) is present in the subject imaged by the camera 105. If it determines that the seal mark MK (or the finger FG) is present, the control unit 107 then judges by the following detection methods that the finger FG bearing the seal mark MK (or a bare finger) has performed a key operation.
- the detection methods (1) to (3) use pattern matching.
- the finger (or mere finger) FG on which the seal mark MK is attached is retained on any key KY of the keyboard KB for a predetermined time (for example, 1 second) or more.
- the finger (or mere finger) FG on which the seal mark MK is attached is displaced in the direction in which any key KY of the keyboard KB is pressed. This can be read by the displacement of the edge of the key KY.
- an “overlapping key” refers to a key overlapped by the center of the seal mark MK or of the nail of the finger FG.
- the microphone 106 collects a keying sound emitted when the key KY of the keyboard KB is pressed.
- Two or more of the above detection methods (1) to (4) may be used in combination.
- the detection accuracy is improved by using the detection method (4) in combination with any of the detection methods (1) to (3).
- when the user performs a key operation, the control unit 107 judges through the above detection methods that the finger FG bearing the seal mark MK (or a bare finger) has performed a key operation and, in the subsequent step S14, determines the character, etc. of the operated key KY, displays it as an input character on the screen shown in FIG. 6, stores it in the memory 108, and moves the flow to step S15. When it does not judge that a key operation has been performed, the control unit 107 moves the flow to step S15.
- in step S15, when the control unit 107 determines that at least a part of the keyboard KB is no longer imaged by the camera 105, it recognizes that the user wants to end the input mode and moves the flow to step S16.
- in step S16, the control unit 107 confirms the characters, etc. input and stored so far (the characters displayed on the screen of FIG. 6 in the order in which they were input), for example as a password, and ends the input mode. This enables the HMD 100 to realize more sophisticated operations. However, if not even one character has been input, the control unit 107 treats this as no input.
- if at least a part of the keyboard KB continues to be imaged, the control unit 107 infers in step S15 that the user has more characters, etc. to add and returns the flow to step S13 to perform the same input operation. This makes it possible to input a plurality of characters, etc.; for example, when used for password input, the password can be made more complex and security is strengthened.
- the keyboard KB as the input member and the finger FG performing key operations as the input indicator are imaged by the same camera, and the user can perform input by actually touching the desired key KY with the finger FG, so an intuitive input operation is possible with a simple configuration.
- because it is the relationship between a finger and a key in contact with each other that is evaluated, the method is hardly affected by parallax arising from the difference between the user's line of sight and the optical axis of the camera.
- FIG. 7 is a view showing an input member according to a modification.
- the input member shown in FIG. 7 may be, for example, a controller CT of a working machine.
- the controller CT has three keys meaning “up”, “down” and “stop”, and without being connected to the HMD 100, the user can operate a working machine (not shown) by performing key operations with a finger bearing the seal mark MK (or a bare finger).
- by storing the shape of the controller CT in the memory 108 in advance, the control unit 107 can determine that the controller CT has been imaged by the camera 105, and the working machine can then be operated by executing the above-described input mode.
- not only a standardized keyboard as described above but also the shape of an arbitrary controller or the like, as in this modification, may be registered (stored) and used as an input member.
- the same input can be performed by capturing an image of a keyboard or the like using a camera mounted on a tablet or the like. Further, although the user's finger is used as the input indicator, the key may be indicated using a tip of a pen or a bar whose shape is stored in advance.
- 100 HMD, 101 frame, 102 rectangular frame, 103 main body portion, 104 image display unit, 105 camera, 106 microphone, 107 control unit, 108 memory, CB control box, CD wiring, CT controller, FG finger, KB keyboard, KY key, MK seal mark
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Provided is a wearable terminal with which it is possible to easily perform input via a simple input member. A wearable terminal, worn on the head of a user, for performing input of information via an input member which forms at least two or more kinds of characters, numerals, and/or symbols in a delimited region each and with which it is possible to indicate the delimited region by an input indicator used by the user. The wearable terminal has a display unit for displaying an image in such a way as to be visible to the user, a camera for imaging a subject and outputting an image signal, a processing unit for processing the image signal outputted from the camera, and a storage unit for storing the feature of the image of the input member. The processing unit: starts execution of an input mode when it is determined, on the basis of the image signal of the subject outputted from the camera and the feature of the image of the input member stored in the storage unit, that the subject imaged by the camera is at least a portion of the input member; and inputs information that corresponds to the characters, numerals, and/or symbols formed in the indicated delimited region when it is determined, while the input mode is being executed, that the input indicator has indicated one of the delimited regions of the input member.
Description
The present invention relates to a wearable terminal capable of performing input via an input member.
Wearable terminals such as head-mounted displays (hereinafter, HMDs), which have developed rapidly in recent years, are worn on the user's head so that the display sits in front of the user's eyes; the user can therefore view the displayed information without holding the display by hand, and such terminals are easy to carry and handle.
When working while wearing an HMD and viewing its display, the user may need to enter characters, numbers, and the like, for example to enter a password at user login or to enter a search string. One option is to connect a keyboard, widely used as a user interface, to the HMD by wire or wirelessly (for example via Bluetooth (registered trademark)) and to input characters and numbers by operating the keyboard. However, because a keyboard contains a large number of keys corresponding to letters and numbers, it is generally bulkier than the wearable terminal, and having to carry it for input risks spoiling the portability that is an advantage of the wearable terminal.
Furthermore, if the HMD and the keyboard are connected by wire, the wiring gets in the way, hinders input and ease of handling, and may require an additional connector, increasing cost. If they are connected wirelessly, operations such as pairing are needed in advance so that the devices recognize each other, and when one keyboard is shared by a plurality of wearable terminals, erroneous input may occur due to connection contention or the like. A virtual keyboard, a newer technology in which the user performs input on an image emitted from a projector, has also been developed, but introducing it first requires a projector, which enlarges the wearable terminal, impairs portability, and increases cost.
In contrast, Patent Document 1 discloses a technique in which the position of a finger captured by the camera of an HMD is analyzed and input is controlled by matching that position information against the position information of a button displayed on the display.
However, with the technique of Patent Document 1, parallax corresponding to the offset between the user's line of sight and the optical axis of the camera inherently causes a shift between the reference screen and the operation screen. To correct this shift, the finger position must be analyzed in one coordinate system and matched against the coordinate system of the display. This matching requires complex arithmetic processing at high speed and a high-performance CPU or the like, which may significantly increase the cost of the wearable terminal.
The present invention has been made in view of the above circumstances, and an object of the present invention is to provide a wearable terminal capable of performing input easily through a simple input member.
In order to achieve at least one of the above objects, a wearable terminal reflecting one aspect of the present invention is worn on the head of the user and inputs information via an input member in which at least two or more types of characters, numbers and/or symbols are each formed in a partitioned area and whose partitioned areas can be designated with an input indicator used by the user, the wearable terminal comprising:
a display unit that displays an image so that the user can view it;
a camera that images a subject and outputs an image signal;
a processing unit that processes the image signal output from the camera; and
a storage unit that stores a feature of an image of the input member,
wherein the processing unit starts execution of an input mode when it determines, based on the image signal of the subject output from the camera and the feature of the image of the input member stored in the storage unit, that the subject imaged by the camera is at least a part of the input member, and, while the input mode is being executed, inputs information corresponding to the characters, numbers and/or symbols formed in a designated partitioned area when it determines that the input indicator has designated one of the partitioned areas of the input member.
According to the present invention, it is possible to provide a wearable terminal capable of performing input easily through a simple input member.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. FIG. 1 is a perspective view of the HMD 100 constituting the wearable terminal according to this embodiment. FIG. 2 is a block diagram of the HMD 100 according to this embodiment.
As shown in FIG. 1, the HMD 100 has a frame 101. The frame 101, which is substantially U-shaped as viewed from above, has a rectangular frame 102 attached to the lower part of its front end and a main body portion 103 attached above it. The main body portion 103 includes an image display unit 104, a camera 105 for imaging a subject, and a microphone 106 serving as a sound acquisition device. The main body portion 103 is connected to a control box CB by wiring CD and is supplied with power from a battery (not shown) built into the control box CB. With the frame 101 mounted on the user's head, the rectangular frame 102 is positioned in front of the user's right eye. An image display unit 104 capable of through (see-through) display is disposed in the rectangular frame 102.
In FIG. 2, the control box CB includes a control unit (processing unit) 107 and a memory (storage unit) 108. The memory 108 stores the shape that is an inherent feature of the keyboard KB (see FIG. 3), the shape that is a feature of the user's finger FG functioning as an indicator (see FIG. 4) (or the unique shape or color of the seal mark MK attached to the nail of the finger FG), and the characteristics of the unique sound (keying sound) emitted when a key KY of the keyboard KB is pressed. The control unit 107 has the function of receiving the image signal output from the camera 105 and performing image processing and the pattern matching described later.
FIG. 3 is a diagram showing a keyboard. The keyboard KB as an input member is an actual keyboard here, but even so it is used without being connected to the HMD 100. A general keyboard KB is standardized (alphabet arrangement, JIS arrangement, etc.), and different types of characters, numbers and symbols (hereinafter, characters, etc.) are each formed in the center of a key (partitioned area) KY. Instead of the keyboard KB, however, a toy keyboard or a printed photograph or illustration of a keyboard may be used as the input member, which improves portability; these are collectively referred to as “an object shaped like a keyboard”. The “partitioned area” refers, for example, to a range recognized as being surrounded by an edge formed around a character, etc. in an image acquired by imaging. Further, “to designate with an indicator” includes, for example, the tip of the indicator overlapping the partitioned area in an image acquired by imaging, and here means performing a key operation with a finger.
Next, the operation of this embodiment will be described. FIG. 4 is a diagram showing a state in which input is performed via the keyboard KB using the HMD 100 according to this embodiment, shown as an image captured by the camera 105. As shown in FIG. 4, a seal mark MK in the form of a red triangle (though not limited to this color and shape) is attached to the nail of the user's index finger FG. Such a seal mark MK is easy to pattern-match, as described later, and can also be attached, for example, to the finger of a gloved hand, so even a user working with gloves can perform input. However, input can also be performed without using the seal mark MK.
FIG. 5 is a flowchart showing the control for performing input via the keyboard KB using the HMD 100. FIG. 6 is a diagram showing an example of an image displayed on the image display unit 104. First, it is assumed that the HMD 100 is operating, the camera 105 is imaging a subject, and the control unit 107 is processing the image signal from the camera 105 and performing pattern matching as needed. However, at this point the input mode has not yet been executed, the HMD 100 can execute only its minimum functions, and a password input or the like is required to enable higher functions.
Here, “pattern matching” refers to a method of applying edge extraction to a reference image (the shape of the keyboard KB stored in the memory 108) and to the subject image obtained from the image signal acquired by imaging, generating a model edge from the reference image and a searched edge from the subject image, and calculating the similarity between the model edge and the searched edge.
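As a rough illustration only (not code from the patent), the following Python/OpenCV sketch applies edge extraction to a stored reference image and to a camera frame and scores their similarity by normalized template matching; the file names, Canny parameters and threshold value are assumptions.

```python
import cv2

# Minimal sketch of the edge-based pattern matching described above.
# Assumptions: "keyboard_ref.png" is the stored reference image of the
# keyboard KB (memory 108) and "camera_frame.png" stands in for one
# frame from the camera 105.

def edge_similarity(frame_gray, ref_edges):
    """Return the best match score and location of the reference edges in the frame."""
    frame_edges = cv2.Canny(frame_gray, 50, 150)          # searched edges from the subject image
    result = cv2.matchTemplate(frame_edges, ref_edges, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_val, max_loc

ref = cv2.imread("keyboard_ref.png", cv2.IMREAD_GRAYSCALE)
ref_edges = cv2.Canny(ref, 50, 150)                        # model edges from the reference image

frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
similarity, origin = edge_similarity(frame, ref_edges)

SIMILARITY_THRESHOLD = 0.6   # illustrative value; the text only says the similarity must exceed a threshold
if similarity > SIMILARITY_THRESHOLD:
    print("keyboard detected at", origin, "-> shift to input mode (step S12)")
```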
When the user wants to execute the input mode, the user moves his or her head so that the camera 105 points at the keyboard KB. At that time, the control unit 107 causes the image display unit 104 to display an image such as that shown in FIG. 6, so that the user observing this image can tell that a part of the keyboard KB has come within the angle of view of the camera 105.
In step S11 of FIG. 5, the control unit 107 waits until at least a part of the keyboard KB is imaged by the camera 105; when it determines as a result of pattern matching that at least a part of the keyboard KB has been imaged (that is, the similarity described above exceeds a threshold), the flow proceeds to step S12.
In step S12, the control unit 107 shifts to the input mode. Along with executing the input mode, the control unit 107 can cause the image display unit 104 to display a screen on which the input characters can be confirmed, as shown in FIG. 6.
Once the input mode has been entered, the control unit 107 can infer that the imaged subject is identical to the keyboard KB whose shape is stored in the memory 108, so the position coordinates of each key, character, etc. become known in a two-dimensional coordinate system whose origin is, for example, the upper left corner of the keyboard KB. Therefore, even if a key KY is covered by the finger FG bearing the seal mark MK (or a bare finger), the characters, etc. formed on it can be recognized.
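In other words, once the keyboard's origin has been located in the frame, the key under a given image point can be looked up from a stored layout rather than re-detected, which is why a covered key can still be identified. The layout table, fixed scale and coordinate values in the sketch below are illustrative assumptions, not data from the patent.

```python
# Sketch: resolving which key a point in the camera image falls on, using a
# stored key layout defined in keyboard coordinates (origin = upper-left corner).

KEY_LAYOUT = {           # key label -> (x, y, width, height) in keyboard-reference pixels (assumed values)
    "Q": (10, 40, 30, 30),
    "W": (42, 40, 30, 30),
    "E": (74, 40, 30, 30),
    # ... remaining keys omitted for brevity
}

def key_at(point_xy, keyboard_origin, scale=1.0):
    """Map an image point (e.g. the centre of the seal mark MK) to a key label."""
    px = (point_xy[0] - keyboard_origin[0]) / scale
    py = (point_xy[1] - keyboard_origin[1]) / scale
    for label, (x, y, w, h) in KEY_LAYOUT.items():
        if x <= px < x + w and y <= py < y + h:
            return label
    return None   # the point is not over any registered key

# Example: mark centre at (120, 95) with the keyboard origin found at (50, 50) -> "W"
print(key_at((120, 95), (50, 50)))
```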
In the subsequent step S13, the control unit 107 searches by pattern matching for whether the seal mark MK (or the finger FG) is present in the subject imaged by the camera 105. If it determines that the seal mark MK (or the finger FG) is present, the control unit 107 then judges that the finger FG bearing the seal mark MK (or a bare finger) has performed a key operation by the following detection methods, of which (1) to (3) use pattern matching (a combined sketch of such checks is given after the list).
(1) The finger FG bearing the seal mark MK (or a bare finger) has remained on one of the keys KY of the keyboard KB for a predetermined time (for example, 1 second) or more.
(2) The finger FG bearing the seal mark MK (or a bare finger) has been displaced in the direction of pressing one of the keys KY of the keyboard KB. This can be read from the displacement of the edge of the key KY.
(3) One of the keys KY of the keyboard KB overlapping the finger FG bearing the seal mark MK (or a bare finger) has been displaced in the pressed direction. Here, an “overlapping key” is a key overlapped by the center of the seal mark MK or of the nail of the finger FG.
(4) The microphone 106 has picked up the keying sound emitted when a key KY of the keyboard KB is pressed.
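As a minimal sketch of how two of these checks might be combined (dwell time from method (1) confirmed by a keying sound from method (4)), the snippet below assumes hypothetical helper functions key_at and mark_center and illustrative threshold values; it is not an implementation from the patent.

```python
import time
import numpy as np

DWELL_SECONDS = 1.0        # method (1): remain on one key for >= 1 s (example value from the text)
SOUND_THRESHOLD = 0.2      # method (4): RMS level taken as a keying sound (assumed value)

def detect_key_presses(frames_and_audio, key_at, mark_center):
    """Yield a key label whenever dwell (method 1) and keying sound (method 4) agree.

    frames_and_audio: iterable of (frame, audio_chunk) pairs from camera 105
    and microphone 106; key_at and mark_center are hypothetical helpers that
    map a frame to the key currently under the seal mark MK.
    """
    current_key, since = None, None
    for frame, audio in frames_and_audio:
        key = key_at(mark_center(frame))
        now = time.monotonic()
        if key != current_key:             # the mark moved to a different key
            current_key, since = key, now
            continue
        dwelled = key is not None and (now - since) >= DWELL_SECONDS
        heard = float(np.sqrt(np.mean(np.square(audio)))) > SOUND_THRESHOLD
        if dwelled and heard:
            yield key                      # treat as a key operation (step S14)
            since = now                    # re-arm for the next press
```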
Two or more of the above detection methods (1) to (4) may be used in combination. In particular, using detection method (4) together with any of detection methods (1) to (3) improves the detection accuracy.
When the user performs a key operation, the control unit 107 judges, through the above detection methods, that the finger FG bearing the seal mark MK (or a bare finger) has performed a key operation and, in the subsequent step S14, determines the character, etc. of the operated key KY, displays it as an input character on the screen shown in FIG. 6, stores it in the memory 108, and moves the flow to step S15. When it does not judge that the finger FG bearing the seal mark MK (or a bare finger) has performed a key operation, the control unit 107 moves the flow to step S15.
In step S15, when the control unit 107 determines that at least a part of the keyboard KB is no longer being imaged by the camera 105, it recognizes that the user wants to end the input mode and moves the flow to step S16. In step S16, the control unit 107 confirms the characters, etc. input and stored so far (the characters displayed on the screen of FIG. 6 in the order in which they were input), for example as a password, and ends the input mode. This enables the HMD 100 to realize more sophisticated operations. However, if not even one character has been input, the control unit 107 treats this as no input.
On the other hand, if at least a part of the keyboard KB continues to be imaged by the camera 105, the control unit 107 infers in step S15 that the user has more characters, etc. to add, returns the flow to step S13, and performs the same input operation. This makes it possible to input a plurality of characters, etc.; for example, when used for password input, the password can be made more complex and security is strengthened.
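The overall flow of FIG. 5 (wait for the keyboard in S11, enter input mode in S12, collect key operations in S13/S14, and confirm the input in S16 when the keyboard leaves the view in S15) can be summarized as a small loop; the helper functions keyboard_visible and next_key_press below are assumed stand-ins for the pattern matching and the detection methods, not code disclosed in the patent.

```python
# Sketch of the S11-S16 control flow, under assumed helper functions:
#   keyboard_visible(frame) -> bool          (pattern-matching result, steps S11/S15)
#   next_key_press(frame)   -> str or None   (detection methods (1)-(4), step S13)

def run_input_mode(frames, keyboard_visible, next_key_press):
    entered = []
    frames = iter(frames)
    # S11: wait until at least part of the keyboard is imaged
    for frame in frames:
        if keyboard_visible(frame):
            break                     # S12: shift to input mode
    else:
        return None                   # camera stream ended before a keyboard appeared
    # S13/S14/S15: collect characters while the keyboard stays in view
    for frame in frames:
        if not keyboard_visible(frame):
            break                     # S15 -> S16: keyboard left the view, finish
        key = next_key_press(frame)
        if key is not None:
            entered.append(key)       # S14: store and display the character
    # S16: confirm the input (e.g. as a password); no characters counts as no input
    return "".join(entered) or None
```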
According to the embodiment described above, the keyboard KB as the input member and the finger FG performing key operations as the input indicator are imaged by the same camera, and the user can perform input by actually touching the desired key KY with the finger FG, so an intuitive input operation is possible with a simple configuration. Moreover, because it is the relationship between a finger and a key in contact with each other that is evaluated, the method is hardly affected by parallax arising from the difference between the user's line of sight and the optical axis of the camera.
FIG. 7 is a diagram showing an input member according to a modification. The input member shown in FIG. 7 may be, for example, a controller CT of a working machine. The controller CT has three keys meaning “up”, “down” and “stop”, and without being connected to the HMD 100, the user can operate a working machine (not shown) by performing key operations with a finger bearing the seal mark MK (or a bare finger). Specifically, by storing the shape of the controller CT in the memory 108 in advance, the control unit 107 can determine that the controller CT has been imaged by the camera 105, and the working machine can then be operated by executing the above-described input mode. Not only a standardized keyboard as described above but also the shape of an arbitrary controller or the like, as in this modification, may be registered (stored) and used as an input member.
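One way to picture such registration, purely as an assumed illustration, is a small registry keyed by input-member name that holds an edge template plus the layout of its partitioned areas; the file names and layout values below are made up for the example.

```python
import cv2

# Assumed template registry: one edge template and one key layout per input member.
INPUT_MEMBERS = {}

def register_input_member(name, image_path, key_layout):
    """Store the edge template and partitioned-area layout for one input member."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    INPUT_MEMBERS[name] = {
        "edges": cv2.Canny(gray, 50, 150),   # template used by the matching step
        "keys": key_layout,                  # labels of the partitioned areas
    }

# A standardized keyboard and the three-key controller CT of the modification
register_input_member("keyboard_KB", "keyboard_ref.png", {"A": (0, 0, 30, 30)})
register_input_member("controller_CT", "controller_ct.png",
                      {"up": (0, 0, 60, 40), "down": (0, 45, 60, 40), "stop": (0, 90, 60, 40)})
```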
The same input can be performed by imaging a keyboard or the like with a camera mounted on a tablet or the like. Also, although the user's finger is used as the input indicator, a key may be designated using, for example, the tip of a pen or rod whose shape has been stored in advance.
The present invention is not limited to the embodiments and modifications described in this specification; it will be apparent to those skilled in the art from the embodiments, modifications and technical ideas described herein that other embodiments and modifications are included. The description and the embodiments are for illustration only, and the scope of the present invention is indicated by the claims below.
100 HMD
101 frame
102 rectangular frame
103 main body portion
104 image display unit
105 camera
106 microphone
107 control unit
108 memory
CB control box
CD wiring
CT controller
FG finger
KB keyboard
KY key
MK seal mark
Claims (10)
- A wearable terminal worn on the head of a user, which inputs information via an input member in which at least two or more types of characters, numbers and/or symbols are each formed in a partitioned area and whose partitioned areas can be designated with an input indicator used by the user, the wearable terminal comprising:
a display unit that displays an image so that the user can view it;
a camera that images a subject and outputs an image signal;
a processing unit that processes the image signal output from the camera; and
a storage unit that stores a feature of an image of the input member,
wherein the processing unit starts execution of an input mode when it determines, based on the image signal of the subject output from the camera and the feature of the image of the input member stored in the storage unit, that the subject imaged by the camera is at least a part of the input member, and, while the input mode is being executed, inputs information corresponding to the characters, numbers and/or symbols formed in a designated partitioned area when it determines that the input indicator has designated one of the partitioned areas of the input member.
- The wearable terminal according to claim 1, wherein the input member is a keyboard or an object shaped like a keyboard.
- The wearable terminal according to claim 2, wherein the partitioned area is a key of the keyboard or of the object shaped like a keyboard.
- The wearable terminal according to any one of claims 1 to 3, wherein the indicator bears a unique mark, the storage unit stores a feature of the unique mark, and the processing unit inputs information corresponding to the characters, numbers and/or symbols formed in the designated partitioned area when it determines, based on the image signal of the subject output from the camera and the feature of the unique mark stored in the storage unit, that the input indicator bearing the unique mark has designated one of the partitioned areas of the input member.
- The wearable terminal according to any one of claims 1 to 3, wherein the indicator has a unique shape, the storage unit stores a feature of the unique shape of the indicator, and the processing unit inputs information corresponding to the characters, numbers and/or symbols formed in the designated partitioned area when it determines, based on the image signal of the subject output from the camera and the feature of the unique shape of the indicator stored in the storage unit, that the input indicator having the unique shape has designated one of the partitioned areas of the input member.
- The wearable terminal according to claim 5 or 6, wherein the indicator is a finger.
- The wearable terminal according to any one of claims 1 to 6, wherein the processing unit determines that a partitioned area has been designated when it detects that the input indicator has remained in any one of the partitioned areas of the input member for a predetermined time or more.
- The wearable terminal according to any one of claims 1 to 7, wherein the processing unit determines that a partitioned area has been designated when it detects that the input indicator overlapping any one of the partitioned areas of the input member has been displaced in a predetermined direction.
- The wearable terminal according to any one of claims 1 to 8, wherein the processing unit determines that a partitioned area has been designated when it detects that a partitioned area of the input member overlapping the input indicator has been displaced in a predetermined direction.
- The wearable terminal according to any one of claims 1 to 9, comprising sound acquisition means for acquiring a unique sound generated when a partitioned area of the input member is displaced, wherein the storage unit stores information on the unique sound, and the processing unit determines that a partitioned area has been designated when the sound acquisition means detects the unique sound while the input indicator overlaps any one of the partitioned areas of the input member.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017-123960 | 2017-06-26 | ||
JP2017123960 | 2017-06-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019003747A1 true WO2019003747A1 (en) | 2019-01-03 |
Family
ID=64740512
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/020283 WO2019003747A1 (en) | 2017-06-26 | 2018-05-28 | Wearable terminal |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019003747A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100182240A1 (en) * | 2009-01-19 | 2010-07-22 | Thomas Ji | Input system and related method for an electronic device |
JP2014109876A (en) * | 2012-11-30 | 2014-06-12 | Toshiba Corp | Information processor, information processing method and program |
- 2018-05-28: WO PCT/JP2018/020283 patent/WO2019003747A1/en, active, Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100182240A1 (en) * | 2009-01-19 | 2010-07-22 | Thomas Ji | Input system and related method for an electronic device |
JP2014109876A (en) * | 2012-11-30 | 2014-06-12 | Toshiba Corp | Information processor, information processing method and program |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6288372B2 (en) | Interface control system, interface control device, interface control method, and program | |
US10585488B2 (en) | System, method, and apparatus for man-machine interaction | |
US9134800B2 (en) | Gesture input device and gesture input method | |
JP5930618B2 (en) | Spatial handwriting system and electronic pen | |
EP3076334A1 (en) | Image analyzing apparatus and image analyzing method | |
US10234955B2 (en) | Input recognition apparatus, input recognition method using maker location, and non-transitory computer-readable storage program | |
US20150109197A1 (en) | Information processing apparatus, information processing method, and program | |
JP6344530B2 (en) | Input device, input method, and program | |
US20170294048A1 (en) | Display control method and system for executing the display control method | |
JP6632322B2 (en) | Information communication terminal, sharing management device, information sharing method, computer program | |
JP6483556B2 (en) | Operation recognition device, operation recognition method and program | |
US20210165492A1 (en) | Program, recognition apparatus, and recognition method | |
JP2013125487A (en) | Space hand-writing system and electronic pen | |
WO2015093130A1 (en) | Information processing device, information processing method, and program | |
JP6127564B2 (en) | Touch determination device, touch determination method, and touch determination program | |
TWI668600B (en) | Method, device, and non-transitory computer readable storage medium for virtual reality or augmented reality | |
KR20110087407A (en) | Camera simulation system and localization sensing method using the same | |
WO2019150430A1 (en) | Information processing device | |
KR20160072306A (en) | Content Augmentation Method and System using a Smart Pen | |
KR20210138923A (en) | Electronic device for providing augmented reality service and operating method thereof | |
WO2019003747A1 (en) | Wearable terminal | |
CN110489026A (en) | A kind of handheld input device and its blanking control method and device for indicating icon | |
CN108196676A (en) | Track and identify method and system | |
CN114721508A (en) | Virtual keyboard display method, apparatus, device, medium, and program product | |
KR102118434B1 (en) | Mobile device and, the method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18823108 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18823108 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: JP |