JP2002259046A - System for entering character and symbol handwritten in air - Google Patents

System for entering character and symbol handwritten in air

Info

Publication number
JP2002259046A
JP2002259046A (application JP2001105219A)
Authority
JP
Japan
Prior art keywords
characters
symbols
air
character
drawn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2001105219A
Other languages
Japanese (ja)
Inventor
Tomoya Sonoda
智也 園田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to JP2001105219A priority Critical patent/JP2002259046A/en
Publication of JP2002259046A publication Critical patent/JP2002259046A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Character Discrimination (AREA)
  • Closed-Circuit Television Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

PROBLEM TO BE SOLVED: When a character or symbol is written in the air with a finger or a gesture, it is difficult to identify the start and end of the user's input and the start and end points of the line segments that make up the character. A method is also needed that detects the motion stably under the varied lighting conditions and backgrounds of a wearable computer environment.

SOLUTION: In this system for entering characters and symbols handwritten in the air, together with a recognition terminal device, a method for representing and designing characters and symbols drawn in the air, and a virtual touch-panel button system, the motion of writing a character or symbol in the air with a finger or gesture is captured by a video camera 10, while sound and voice during capture are picked up by a microphone 12. The captured video and collected audio are transmitted to a computer 14, where image analysis and audio analysis extract the drawn character pattern, which is matched against patterns in a database and recognized. The recognition result is shown on a display device 16, and the entered character can be confirmed by sound from a speaker 18.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION 1. Field of the Invention: The present invention relates to a system for entering characters and symbols handwritten in the air. More specifically, the motion of drawing a character or symbol in the air with a finger or a gesture is captured by a video camera, sound and voice during capture are picked up by a microphone, and the captured video and collected audio are transmitted over a wireless or wired cable to a computer. On the computer, the drawn character or symbol pattern is recognized, by image analysis alone or by image analysis combined with audio analysis, by matching it against patterns in a database. The recognition result is shown on a display, or the entered character or symbol can be confirmed by sound or voice from a speaker connected to the computer by a wireless or wired cable. The invention also relates to a recognition terminal device, a method for representing and designing characters and symbols drawn in the air, and a virtual touch-panel button system.

[0002]

PROBLEMS TO BE SOLVED BY THE INVENTION: When a character or symbol is drawn in the air, no trace of the stroke remains, so it is difficult to identify the start and end of the user's input and the start and end points of the line segments that make up the character. In addition, a wearable computer environment requires a method that detects the motion stably under varied lighting conditions and backgrounds.

[0003] One possible solution is to detect the hand leaving the frame: when the user finishes drawing a character or symbol, the hand moves outside the image frame, so its presence or absence can be detected. Under the assumed capture conditions, however, the background is not completely stationary and the conditions differ every time, so the design must operate stably against a variety of backgrounds. The lighting conditions are likewise not constant.

[0004] Known handwriting recognition systems decode strokes drawn on a pressure-sensitive panel or with a computer mouse; they cannot decode characters or symbols drawn in the air.

[0005] Furthermore, known gesture recognition does not presuppose the recognition of characters and symbols, and it assumes that the video always comes from a fixed camera. Recognizing characters and symbols has therefore been difficult in the environment of a wearable computer or a mobile computer terminal equipped with a video camera.

[0006] When characters are written in the air in a wearable computer or mobile computer terminal environment, recognition must be carried out by image analysis under varied lighting conditions and against backgrounds that are not a single color.

[0007] Under varied lighting, non-monochromatic backgrounds, and capture by a camera that is not fixed, recognizing characters and symbols drawn in the air by image analysis is very difficult.

[0008] Moreover, the fingertip tilts in various directions while drawing characters and symbols in the air. Known gesture recognition tracks the trajectory of the hand by having the user wear gloves of a special color or hold an object of a special color; in an environment where nothing is held in the hand, tracking the trajectory of the written characters has been difficult.

[0009] As noted above, the trajectory of a character or symbol drawn in the air does not actually remain, so it has been difficult to identify the start and end of character input and the start and end points of the line segments making up the character. Some characters, such as 'K' and 'R', are also hard to distinguish from the airborne trajectory alone.

[0010]

SUMMARY OF THE INVENTION: The present invention provides: a system for entering characters and symbols handwritten in the air, in which the motion of drawing a character or symbol in the air, with nothing worn on or held in the hand, is captured by a video camera, the images are processed on a computer, the trajectory pattern of the fingertip or gesture is identified from the amount of change in the analyzed images, and the character or symbol drawn in the air is recognized by matching against patterns in a database, with sound and voice information also usable for recognition; a terminal device for recognizing characters and symbols handwritten in the air, consisting of a video camera, a microphone, a computer for image and audio analysis, and a display that shows the recognition result or a speaker that announces it, so that the entered character or symbol can be confirmed; a method for representing and designing characters and symbols that can be recognized with high accuracy on the premise that no trajectory remains in the air; and a virtual touch-panel button system inside the image frame, which makes it possible to obtain the character-input start operation and the region information of the hand at a desired time, so that the hand region can be located with high accuracy under varied lighting and background conditions.

[0011]

OPERATION OF THE INVENTION: According to the present invention, a method that roughly extracts the hand region by obtaining the pixel information of the user's hand immediately before character input, a method that analyzes the moving position of the finger from the centroid of the luminance difference between adjacent image frames, and continuous DP matching together make it possible to enter characters and symbols handwritten in the air without wearing or holding anything special on the hand, enabling character and symbol input in wearable computer and mobile computer terminal environments.

[0012]

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS: An embodiment of the system for entering characters and symbols handwritten in the air according to claim 1 will now be described. Referring to FIGS. 1 to 4, the system acquires video frames with the video camera 10 (S101) and, at the same time, acquires sound and voice with the microphone 12 (S102). The acquired video frames are transmitted to the computer 14 over a wireless or wired cable, and the video frames and the audio signal are analyzed on the computer 14 (S103).

[0013] On the computer 14, image analysis alone, or image analysis combined with audio analysis, obtains the input start signal for characters and symbols drawn in the air and, to pinpoint the input motion with high accuracy, obtains the region information of the hand within the video frame captured by the camera 10 (S104). The input start signal can be obtained through the virtual touch-panel input system according to claim 4 or through a specific sound or voice.

[0014] FIG. 6 shows an embodiment of the virtual touch-panel input system according to claim 4. It represents a camera image of the user captured on a wearable computer or mobile computer terminal 16. When the user makes the gesture of pressing the differently colored portion (the button) displayed at the center of the image frame, image analysis on the computer 14 determines that the button has been pressed and performs a specific process. For example, when the user presses the button, the user's hand can be assumed to lie at a specific position in the image frame; by acquiring the luminance and color information there, only the hand region is extracted under a variety of lighting and background conditions.

[0015] In FIG. 6, to determine that the user has pressed the button, the system detects the state in which the pixels with a large luminance difference between adjacent image frames, obtained continuously from the video camera 10, are concentrated only in the button region of the image frame.
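The concentration test in [0015], that is, changed pixels lying almost only inside the button region, can be sketched like this. It is a hedged illustration: the luminance threshold and the 90% concentration ratio are invented values, not from the patent.

```python
import numpy as np

def button_pressed(prev, curr, button_box, diff_thresh=30, ratio=0.9):
    """Return True when the pixels whose inter-frame luminance
    difference exceeds diff_thresh are concentrated in the button
    region, as described in paragraph [0015]."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    ys, xs = np.nonzero(diff)
    if len(ys) == 0:
        return False  # no motion at all
    y0, y1, x0, x1 = button_box
    inside = (ys >= y0) & (ys < y1) & (xs >= x0) & (xs < x1)
    return bool(inside.mean() >= ratio)
```

Motion anywhere else in the frame (a moving background, for instance) lowers the concentration ratio and suppresses a false press.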

[0016] In FIG. 1, after the input start signal is obtained, the centroid position G of the specified luminance change between adjacent image frames is computed (S105); the character or symbol drawn by the trajectory of G is determined, and the end signal of the input motion is obtained (S106). To determine the character or symbol from the trajectory of G, the trajectory is matched against character and symbol trajectory patterns stored in advance in a storage device of the computer 14, using time-series matching such as DP matching or an HMM (hidden Markov model). The end signal of the input motion is realized, for example, by detecting that the user's hand has left the image frame, or by detecting the operation of pressing an end button on the virtual touch panel described above.

[0017] The drawn character or symbol is displayed on the display 16 for the system user to confirm (S107), or is confirmed by sound or voice via the microphone 12 and speaker 18 (S108).

[0018] In FIG. 2, the recognition terminal device according to claim 2 inputs image frames from the video camera 10 to the computer 14 and, at the same time, acquires the audio signal from the microphone 12 and inputs it to the computer 14. The computer 14 performs image analysis and audio analysis, determines the trajectory of the characters and symbols drawn in the air by gestures and the like, displays the result on the display 16 (S108), and outputs it to the speaker or earphone 18. In FIG. 5, the recognition terminal device of FIG. 2 is adapted for use in the dark: the video camera 10 is replaced by an infrared camera 20, and an infrared light 22 illuminates the scene during capture. In FIGS. 2 and 5, the device can also be used with image processing alone, with the microphone 12 and the speaker 18 removed.

[0019] FIG. 3 shows an embodiment in which the recognition terminal device according to claim 2 is used in a wearable computer environment. It can also be applied to a system using the camera, processor, and display of a mobile computer terminal, and a system with the infrared camera 20 and infrared light 22 of FIG. 5 can likewise be constructed.

[0020] FIG. 4 shows an embodiment in which the recognition terminal device according to claim 2 is used with a fixed camera. This can also be mounted on a robot, and a system with the infrared camera 20 and infrared light 22 of FIG. 5 can likewise be constructed.

[0021] FIG. 6 shows an embodiment of the touch-panel input system according to claim 4. A touch-panel button 32 is displayed within the image frame 30 shown on the display 16, the character or symbol currently being entered is shown in display area 34, and the type of the character or symbol currently being entered is shown in display area 36.

[0022] FIG. 7 shows an embodiment of the method for representing and designing characters and symbols drawn in the air according to claim 3, giving alphanumeric characters intended for use in a wearable computer or mobile computer terminal environment. In this figure, the frame around each character represents the image frame; it is assumed that the character is begun inside the frame, written in a single stroke, and finished by moving the hand out of the image frame.

[0023] When characters and symbols are hand-drawn in the air, the drawn trajectory becomes a single stroke, and the characters and symbols must be designed accordingly. Characters such as 'K' and 'R' in particular are extremely hard to distinguish when no trajectory remains, so a single-stroke notation makes them clearly distinguishable. Moreover, by fixing the direction in which the hand leaves the image frame after writing, characters such as 'B' and 'P' become easier to tell apart.

[0024]

EFFECTS OF THE INVENTION: (1) The system for entering characters and symbols handwritten in the air according to the present invention immediately recognizes characters and symbols drawn in the air without holding anything in the hand, so that notes can be taken quickly in a wearable computer or mobile computer terminal environment equipped with a video camera. (2) The terminal device for recognizing characters and symbols drawn in the air can be mounted on a wearable computer or mobile computer terminal equipped with a video camera. (3) By mounting the recognition terminal device on a robot, characters and symbols can be conveyed to the robot, extending the means of communication between humans and robots. (4) By incorporating the recognition terminal device into teaching materials or video games for children, characters and symbols can be learned or used in play. (5) Because the recognition terminal device can accept input even at night through the use of an infrared light, bedridden elderly people or hospital patients can perform input by drawing characters and symbols with one hand, without getting up. (6) The method of representing and designing characters and symbols drawn in the air exploits the single-stroke property of airborne writing, enabling effective and rapid input. (7) When the virtual touch-panel button system is applied to the input interface of a wearable computer or mobile computer terminal with a video camera, the user can press a button at a desired time to operate the terminal, and the color information of a specific region such as the hand can be acquired at any moment, so a highly accurate input system can be constructed.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flowchart showing the processing flow of the system for entering characters and symbols handwritten in the air according to the present invention.

FIG. 2 is a schematic diagram of the recognition terminal device of the system according to the present invention.

FIG. 3 is a schematic diagram of an embodiment in which the recognition terminal device according to the present invention is applied to a wearable computer.

FIG. 4 is a schematic diagram of an embodiment in which the recognition terminal device according to the present invention is applied to a fixed camera.

FIG. 5 is a schematic diagram of an embodiment using an infrared camera and an infrared light so that the recognition terminal device according to the present invention can be used in the dark.

FIG. 6 is a design diagram of the virtual touch-panel button system according to the present invention.

FIG. 7 shows example notations of alphanumeric characters for the system for entering characters and symbols handwritten in the air according to the present invention.

EXPLANATION OF SYMBOLS

10 … video camera
12 … microphone
14 … computer for image processing, audio processing, and computation
16 … display
18 … speaker (or earphone)
20 … infrared camera
22 … infrared light
30 … image frame treated as a virtual touch panel
32 … virtual touch-panel button
34 … display area for the character or symbol being entered on the virtual touch panel
36 … input-mode display for the character or symbol being entered on the virtual touch panel

Continuation of front page: (51) Int.Cl.7 / FI: H04N 7/18; F-terms (reference): 5B064 AB04 BA05 DD16 EA24 FA03 FA16; 5B068 AA05 BD09 BD17 BE08 CC06 CC17 CC19; 5C054 AA01 CC02 CH01 EA01 EA03 EA05 FC00 FE00 HA00 HA05; 5L096 BA16 BA20 CA04 DA02 FA60 FA70 GA45 HA02 HA08 JA16 JA20

Claims (4)

[Claims]

[Claim 1] A system for entering characters and symbols handwritten in the air, in which, for recognition of characters and symbols drawn in the air with a finger or a gesture, the motion of the user drawing the character or symbol is captured by a video camera, the sound and voice during input are picked up by a microphone, the captured video and collected audio are transmitted to a computer over a wireless or wired cable, the image and audio are analyzed by the computer, the trajectory pattern drawn by the hand or fingertip in the image is obtained from the image alone or from a combination of image and audio, the pattern is matched against patterns in a database and recognized, and the recognized character or symbol is displayed on a display connected to the computer by the wireless or wired cable, or can be confirmed by sound or voice from a speaker connected to the computer by the wireless or wired cable.
[Claim 2] A terminal device for recognizing characters and symbols handwritten in the air, comprising a video camera that captures the motion of drawing characters and symbols in the air with a finger or a gesture, a microphone that picks up sound and voice, a computer that analyzes the images and sound, and a display and speaker for confirming the entered characters and symbols, the device being used for recognizing characters and symbols drawn in the air with a finger or a gesture, carriable or wearable by the user or installable at a desired location, and allowing the entered characters and symbols to be confirmed.
[Claim 3] A method of representing and designing characters and symbols drawn in the air, in which characters and symbols drawn in the air are entered in a single stroke, so that characters and symbols that leave no visible handwriting are easy to distinguish and can be entered quickly.
[Claim 4] A virtual touch-panel button system in which a virtual touch-panel button is assumed within the camera frame of a video camera that captures the motion of drawing characters and symbols in the air with a finger or a gesture, the button is displayed on a display, the motion of the user pressing the virtual touch-panel button is recognized by image analysis, and a button-press input operation to the computer can be performed at any time.
JP2001105219A 2001-02-28 2001-02-28 System for entering character and symbol handwritten in air Pending JP2002259046A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2001105219A JP2002259046A (en) 2001-02-28 2001-02-28 System for entering character and symbol handwritten in air

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2001105219A JP2002259046A (en) 2001-02-28 2001-02-28 System for entering character and symbol handwritten in air

Publications (1)

Publication Number Publication Date
JP2002259046A true JP2002259046A (en) 2002-09-13

Family

ID=18957943

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2001105219A Pending JP2002259046A (en) 2001-02-28 2001-02-28 System for entering character and symbol handwritten in air

Country Status (1)

Country Link
JP (1) JP2002259046A (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006095780A1 (en) * 2005-03-08 2006-09-14 Kohei Nishino Character conversion device, recording medium containing program for information processing device, reception content display method, and reception content display system
JP2007080242A (en) * 2005-08-15 2007-03-29 Kobe Steel Ltd Information processing apparatus and program for same
JP2007145106A (en) * 2005-11-25 2007-06-14 Xanavi Informatics Corp On-vehicle information terminal
US7840912B2 (en) 2006-01-30 2010-11-23 Apple Inc. Multi-touch gesture dictionary
JP2007296248A (en) * 2006-05-02 2007-11-15 Sony Computer Entertainment Inc Game device
JP2007323268A (en) * 2006-05-31 2007-12-13 Oki Electric Ind Co Ltd Video providing device
US8115732B2 (en) 2006-08-08 2012-02-14 Microsoft Corporation Virtual controller for visual displays
WO2008018943A1 (en) * 2006-08-08 2008-02-14 Microsoft Corporation Virtual controller for visual displays
US8552976B2 (en) 2006-08-08 2013-10-08 Microsoft Corporation Virtual controller for visual displays
KR101292467B1 (en) 2006-08-08 2013-08-05 마이크로소프트 코포레이션 Virtual controller for visual displays
US7907117B2 (en) 2006-08-08 2011-03-15 Microsoft Corporation Virtual controller for visual displays
US8049719B2 (en) 2006-08-08 2011-11-01 Microsoft Corporation Virtual controller for visual displays
US9311528B2 (en) 2007-01-03 2016-04-12 Apple Inc. Gesture learning
WO2008085783A1 (en) * 2007-01-03 2008-07-17 Apple Inc. Gesture learning
US8413075B2 (en) 2008-01-04 2013-04-02 Apple Inc. Gesture movies
US20100103092A1 (en) * 2008-10-23 2010-04-29 Tatung University Video-based handwritten character input apparatus and method thereof
JP2011034437A (en) * 2009-08-04 2011-02-17 Mitsubishi Electric Corp Character recognition device, and character recognition program
JP2015038747A (en) * 2009-08-28 2015-02-26 Robert Bosch GmbH Inputting of information and command based on gesture for motor vehicle use
US8432367B2 (en) 2009-11-19 2013-04-30 Google Inc. Translating user interaction with a touch screen into input commands
JP2011237953A (en) * 2010-05-10 2011-11-24 Nec Corp User interface device
JP2011258130A (en) * 2010-06-11 2011-12-22 Namco Bandai Games Inc Program, information storage medium, and image generation system
EP2395454A2 (en) 2010-06-11 2011-12-14 NAMCO BANDAI Games Inc. Image generation system, shape recognition method, and information storage medium
WO2014054716A1 (en) * 2012-10-03 2014-04-10 Rakuten, Inc. User interface device, user interface method, program, and computer-readable information storage medium
US9348144B2 (en) 2013-01-07 2016-05-24 Seiko Epson Corporation Display device and control method thereof
KR101499044B1 (en) * 2013-10-07 2015-03-11 Hongik University Industry-Academia Cooperation Foundation Wearable computer obtaining text based on gesture and voice of user and method of obtaining the text
JPWO2015083266A1 (en) * 2013-12-05 2017-03-16 三菱電機株式会社 Display control apparatus and display control method
JP2014199673A (en) * 2014-06-23 2014-10-23 日本電気株式会社 User interface device
CN108140361A (en) * 2016-09-23 2018-06-08 Apple Inc. Watch theater mode
US11307757B2 (en) 2016-09-23 2022-04-19 Apple Inc. Watch theater mode
US12050771B2 (en) 2016-09-23 2024-07-30 Apple Inc. Watch theater mode
US11955100B2 (en) 2017-05-16 2024-04-09 Apple Inc. User interface for a flashlight mode on an electronic device
CN107395968A (en) * 2017-07-26 2017-11-24 TCL Mobile Communication Technology (Ningbo) Co., Ltd. Mobile terminal, method for detecting and processing video-recording operations thereof, and storage medium
CN111679745A (en) * 2019-03-11 2020-09-18 Shenzhen Grandsun Electronic Co., Ltd. Speaker control method, apparatus, device, wearable device, and readable storage medium

Similar Documents

Publication Publication Date Title
JP2002259046A (en) System for entering character and symbol handwritten in air
EP3258423B1 (en) Handwriting recognition method and apparatus
CN105487673B (en) A kind of man-machine interactive system, method and device
EP1186162B1 (en) Multi-modal video target acquisition and re-direction system and method
JP3114813B2 (en) Information input method
JP6747446B2 (en) Information processing apparatus, information processing method, and program
US9639744B2 (en) Method for controlling and requesting information from displaying multimedia
US20110273551A1 (en) Method to control media with face detection and hot spot motion
US20090284469A1 (en) Video based apparatus and method for controlling the cursor
JPH0844490A (en) Interface device
JP2002196877A (en) Electronic equipment using image sensor
CN110888532A (en) Man-machine interaction method and device, mobile terminal and computer readable storage medium
JP2009193323A (en) Display apparatus
JPH07141101A (en) Input system using picture
CN107742446A (en) Book reader
WO2009142098A1 (en) Image processing device, camera, image processing method, and program
US20140300535A1 (en) Method and electronic device for improving performance of non-contact type recognition function
JPH0340860B2 (en)
JP2005301583A (en) Typing input device
KR20110053396A (en) Apparatus and method for providing user interface by gesture
CN113282164A (en) Processing method and device
JP4025516B2 (en) Mouse replacement method, mouse replacement program, and recording medium recording the program
CN110007748B (en) Terminal control method, processing device, storage medium and terminal
US20190073808A1 (en) Terminal apparatus, information processing system, and display control method
JP4972013B2 (en) Information presenting apparatus, information presenting method, information presenting program, and recording medium recording the program