WO2012030153A2 - Non-contact type input device - Google Patents

Non-contact type input device

Info

Publication number
WO2012030153A2
WO2012030153A2 · PCT/KR2011/006428
Authority
WO
WIPO (PCT)
Prior art keywords
signature
information
virtual
keypad
area
Prior art date
Application number
PCT/KR2011/006428
Other languages
French (fr)
Korean (ko)
Other versions
WO2012030153A3 (en)
Inventor
유병문
황두성
Original Assignee
주식회사 엘앤와이비젼
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020100085878A external-priority patent/KR101036452B1/en
Priority claimed from KR1020100111341A external-priority patent/KR101216537B1/en
Application filed by 주식회사 엘앤와이비젼 filed Critical 주식회사 엘앤와이비젼
Publication of WO2012030153A2 publication Critical patent/WO2012030153A2/en
Publication of WO2012030153A3 publication Critical patent/WO2012030153A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • G06F3/0426Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected tracking fingers with respect to a virtual keyboard projected or printed on the surface
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Definitions

  • The present invention relates to a contactless input device. More particularly, when a user's finger touches an invisible virtual input surface or signs within a virtual input space area, a key value is input or the signature is displayed on the display device as if the user had touched the actual physical display device. This prevents disease transmission and exposure of personal information such as fingerprints caused by contact, and allows per-user signature characteristic information to be used for identity verification during electronic payment.
  • Input devices in wide use today, such as touch screens, keyboards, keypads, mice, push buttons, light pens, digital tablets, and trackballs, are contact types in which the user inputs data through direct physical contact with the input device.
  • Keypads related to the present invention are used in various fields such as digital door locks, automated teller machines (ATMs), and credit authorization terminals. Both button types using physical buttons and touch screens are in use, and all are contact types.
  • Such a contact keypad has several problems: 1) because the user enters data through physical contact with the keypad, it serves as a medium for transmitting diseases (germs) through indirect contact in places with many visiting customers, such as hospitals, banks, marts, and restaurants; 2) wear from frequent mechanical contact risks exposing information such as passwords; and 3) personal information such as the user's fingerprints is left on the input device.
  • Korean Patent No. 10-0894544 discloses a technology that projects an optical plane including a keypad pattern in the air and, when a finger is positioned at a specific point on the projected optical plane, detects the finger's position using an IR light source unit and an IR receiver unit so that a key input is made. Because no mechanical key input device is required, the device can be small and used semi-permanently, and it can be applied to door locks, digital home appliance keypads, and the like. A schematic diagram is shown in FIG. 1.
  • The space touch screen device includes a pattern projection unit 10, an IR light source unit 20, an IR sensing unit 30, a signal processing unit 40, and an optical plane 50.
  • The pattern projection unit 10 generates a character or image and displays it in the air; the optical plane 50 is the image projected in the air through the pattern projection unit 10 for receiving input from a user, and may be a keypad, a still image, or the like.
  • The IR light source unit 20 irradiates an IR optical signal to detect a touch signal provided by the user on the optical plane 50. It independently modulates IR light for each channel to generate a signal corresponding to each input button displayed on the optical plane 50 and provides it to the optical plane 50, thereby keeping the optical plane 50 in an input-ready state.
  • The IR sensing unit 30 amplifies and demodulates the reflected IR optical signal generated when the keypad image provided on the optical plane 50 is touched in mid-air, and transmits it to the signal processor 40. The signal processor 40 processes the IR optical signal transmitted from the sensing unit 30 to find the input button touched by the user, and transmits the output signal for that input to the pattern projection unit 10.
  • The registered patent can be used semi-permanently because its non-contact keypad requires no mechanical key input device, and it can be applied to a door lock or the keypad of a digital home appliance. However, it requires a projection device to generate the optical plane 50, so manufacturing cost is high and miniaturization is difficult.
  • Conventionally, a user inputs a password by touching a button on the keypad or a button on the touch screen.
  • Likewise, the user inputs his or her signature on the touch screen using a touch pen or a pointed object.
  • Touch signature pads, which use a touch pen or pointed object to enter the user's signature on the touch screen, likewise require the user to enter data through physical contact with an input device such as a touch pen, raising the same concerns in hospitals, banks, marts, and restaurants.
  • The present invention has been made to solve the problems of the conventional contact keypad described above. By using two or more cameras to form an invisible input space area above the physical display device, contactless key selection or signature input becomes possible, preventing disease transmission and exposure of personal information such as fingerprints caused by contact.
  • Another object of the present invention is to enable the characteristic information generated when the user signs in the virtual three-dimensional signature space area to be used for identity verification when making an electronic payment.
  • To this end, the present invention comprises: a main body having a predefined virtual input space area on one side; a physical display device installed on the other side of the main body; two or more cameras installed in the main body facing the virtual input space area to generate captured images of the position or movement of the touch means in the virtual input space area; and an image processing unit that analyzes the captured images to calculate the position coordinates of the touch means and converts those coordinates into projection position coordinates on the display device according to preset mapping information between the virtual input space area and the display device.
  • According to the present invention, a user can input a key value or a signature without any physical contact with an actual input device, which effectively prevents disease (germ) transmission through indirect contact in places with many visiting customers such as hospitals, banks, marts, and restaurants.
  • In addition, because input in the present invention is made in a non-contact manner, there is no mechanical friction on the keypad or between the touch pen and the touch screen of a signature pad, thereby extending mechanical life.
  • Furthermore, the characteristic information generated when the user signs in the virtual three-dimensional signature space area can be used for identity verification during electronic payment.
  • FIG. 1 is a configuration diagram of a space touch screen device according to Korean Patent No. 10-0894544.
  • FIG. 2 is a block diagram of a contactless keypad device according to the first embodiment of the present invention.
  • FIG. 3 is a two-dimensional block diagram of the contactless keypad device according to the first embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating a process of performing virtual keypad setting in the contactless keypad device according to the first embodiment.
  • FIG. 5 is a flowchart illustrating a process of performing a method of detecting a contactless key input using the contactless keypad device according to the first embodiment.
  • FIG. 6 is a block diagram illustrating a contactless signature pad according to the second embodiment.
  • FIG. 7 is a perspective view showing a schematic configuration of a contactless signature pad according to a second embodiment.
  • FIG. 8 illustrates a signature pattern on a virtual signature space area and a signature pattern projected onto a display device.
  • FIG. 10 illustrates a principle of generating mapping information using a calibration tool.
  • FIG. 11 is a flowchart illustrating a process of generating mapping information using a calibration tool.
  • FIG. 12 is a flowchart illustrating a process of receiving a signature in a contactless manner from the contactless signature pad according to the second embodiment and expressing the signature.
  • As the contactless input device, a keypad will be described as an example in the first embodiment and a signature pad in the second embodiment, but the technical concept of the present invention is not limited thereto and extends to any non-contact input device that enables contactless input by detecting position coordinates in the input space.
  • FIG. 2 is a block diagram of the contactless keypad device according to the first embodiment of the present invention, and FIG. 3 is a two-dimensional configuration diagram of the contactless keypad device according to the first embodiment.
  • The contactless keypad device comprises the main body 1 and, installed on the main body 1, the physical keypad 100, the camera 200, the interface unit 300, an image processing unit 400, a storage unit 500, an output unit 600, and a control unit 700.
  • The physical keypad 100 visually shows the keypad shape to the user and may be a physical key input device such as physical buttons or an LCD screen. As long as a visual keypad is presented to the user, various printed forms, such as paper with a keypad pattern printed on it, may also be used.
  • The camera 200 is installed above the physical keypad 100 to capture images of the space above the physical keypad 100, and two cameras are combined to generate a stereo image.
  • the interface unit 300 is for connection with a server (not shown) connected with the keypad device.
  • The image processor 400 detects the position of the touch means, such as the user's finger, from the images photographed by the camera 200, and determines the key 810 on the virtual keypad area 800 corresponding to the detected position of the touch means.
  • The storage unit 500 stores information on the virtual keypad area 800, a virtual input space located in a predetermined area above the physical keypad 100; information about the size of the physical keypad 100 and its keys 110; position information of the keys 810 in the virtual keypad area 800; and information for converting the virtual keypad area as photographed by the camera 200 into the actual virtual keypad area 800.
  • The virtual keypad area 800 is an invisible, virtually defined area corresponding to the physical keypad 100 at a predetermined height h above the physical keypad 100, used for detecting a contactless key input by the touch means.
  • The output unit 600 generates sound or light to give a feeling of a key touch when the image processor 400 determines that a key 810 on the virtual keypad area 800 has been touched. When a specific key 810 is determined to be touched, the output unit 600 changes the color or brightness of the corresponding key 110 on the physical keypad 100 or generates sound or light to give a feeling of touch.
  • the control unit 700 is a part in charge of the overall control of the contactless keypad device using the camera according to the present invention.
  • A dam 900 may be installed at one side of the main body 1 to stabilize the background when the stereo image is acquired.
  • FIG. 4 is a flowchart illustrating a process of performing virtual keypad setting in the contactless keypad device according to the first embodiment.
  • The physical keypad information may include the size of the physical keypad 100, position information of the keys, and the like.
  • Using a calibration tool having markers formed at points corresponding to the height of the virtual keypad area, a plurality of points on the physical keypad 100 are touched simultaneously or sequentially while the camera 200 captures stereo images. The controller detects the markers from the captured stereo images to determine and store information about the virtual keypad area 800 (S410). The spatial correlation of the camera 200 with respect to the physical keypad 100 or the virtual keypad area 800 is then calculated and stored using the detected marker information (S420).
  • The spatial correlation may be the relative position or relative angle of the camera 200 with respect to the virtual keypad area 800 or the physical keypad 100.
  • The conversion information is generated using the marker information obtained by touching a plurality of points on the physical keypad 100 with the calibration tool and the marker information extracted from the images acquired by the camera 200. The conversion information is a matrix for transforming between the actual coordinate information and the coordinates as seen by the camera 200; generating the transform matrix and its inverse is well known to those skilled in the art, so a detailed description is omitted.
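The conversion between camera-frame coordinates and real keypad coordinates can be illustrated with a least-squares fit over marker correspondences. This is only a sketch assuming an affine model; the patent's actual transform may be projective, and the marker coordinates used here are hypothetical:

```python
import numpy as np

def fit_affine_3d(cam_pts, real_pts):
    """Least-squares affine map real ~= A @ cam + t from marker correspondences."""
    cam = np.asarray(cam_pts, float)
    real = np.asarray(real_pts, float)
    # Homogeneous design matrix: one row [x, y, z, 1] per marker observation
    X = np.hstack([cam, np.ones((len(cam), 1))])
    M, *_ = np.linalg.lstsq(X, real, rcond=None)  # M is (4, 3): A^T stacked on t
    return M[:3].T, M[3]

# Hypothetical correspondences: camera-frame marker positions vs. keypad-frame ones
cam_pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
real_pts = [(10, 20, 0), (12, 20, 0), (10, 22, 0), (10, 20, 2), (12, 22, 2)]
A, t = fit_affine_3d(cam_pts, real_pts)
```

The inverse transform mentioned in the text would then be `np.linalg.inv(A)` applied after subtracting `t`.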
  • FIG. 5 is a flowchart illustrating a process of performing a method of detecting a contactless key input using the contactless keypad device according to the first embodiment.
  • Once the virtual keypad area 800 located in a predetermined region above the physical keypad 100 is defined through the process of FIG. 4, the process of detecting a non-contact key input shown in FIG. 5 is performed.
  • Images of the space above the physical keypad 100 are continuously captured by the camera 200 to determine whether a touch means is detected. When the touch means is detected and is located in the virtual keypad area 800 according to the captured images, the touch area of the touch means is extracted (S500), and the touch position on the virtual keypad area 800 is determined based on the extracted area (S510).
  • Next, the nearest key 810 on the virtual keypad area 800 is found (S520), the distance between the touch position and that key 810 is compared with a reference value, and if the distance is less than or equal to the reference value the key is recognized as selected (S530). The corresponding key signal is then transmitted to the server, and in response the color or brightness of the key 110 on the physical keypad 100 is changed or sound or light is generated to give a feeling of touch (S540).
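Steps S520–S530 reduce to a nearest-key search with a distance threshold. A minimal sketch; the key layout, coordinates, and threshold are illustrative assumptions:

```python
import math

def select_key(touch_xy, key_centers, threshold):
    """Find the nearest key center (S520); accept it only within the threshold (S530)."""
    best_key, best_d = None, float("inf")
    for key, (kx, ky) in key_centers.items():
        d = math.hypot(touch_xy[0] - kx, touch_xy[1] - ky)
        if d < best_d:
            best_key, best_d = key, d
    return best_key if best_d <= threshold else None

keys = {"1": (0, 0), "2": (20, 0), "3": (40, 0)}  # hypothetical key centers (mm)
select_key((18, 3), keys, threshold=10)     # near key "2" -> selected
select_key((100, 100), keys, threshold=10)  # too far from any key -> None
```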
  • FIG. 6 is a block diagram illustrating a contactless signature pad according to a second embodiment
  • FIG. 7 is a perspective view illustrating a schematic configuration of a contactless signature pad according to a second embodiment.
  • The contactless signature pad according to the second embodiment is composed largely of a non-contact signature unit 1, through which the user signs in a virtual signature space area S serving as a virtual input space and which converts the input signature into position coordinates corresponding to the display area of the display device 2, and a physical display device 2 that visually expresses the user's signature.
  • The non-contact signature unit 1 includes a main body 40 having an invisible virtual signature space area S formed on one side, two or more cameras 10 installed on one side of the main body 40, an image processor 20 that converts the signature pattern made in the virtual signature space area S into position coordinates on the display device 2, and an output unit 30 that generates sound or light to give a feeling of touch when a signature is input.
  • The two or more cameras 10 are installed in the main body 40 facing the virtual signature space area S and photograph the movement of the signature input means in the virtual signature space area S; the two cameras are combined to generate a stereo captured image.
  • The main body 40 is formed to surround four sides of the virtual signature space area S, and the two or more cameras 10 are installed on one of the four surrounding sides; this enclosing frame allows the two or more cameras 10 to acquire images more stably.
  • Of course, it is also possible for the virtual signature space area S not to be surrounded by the main body 40 but to be formed in the open space in front of the two or more cameras 10.
  • The image processing unit 20 is installed inside the main body 40. It continuously analyzes the captured images generated by the two or more cameras 10 to calculate the position coordinates of the signature input means, and generates signature pattern data by converting those coordinates into projection position coordinates on the display device 2 according to preset mapping information between the virtual signature space area and the display device.
  • The image processing unit 20 may further include mapping means for performing mapping between the physical display device 2 and the virtual signature space area S, position extraction means for extracting the signature input means from the images acquired by the two or more cameras 10, and position determination means for determining the position of the signature input means in the virtual signature space area S using the position information of the two or more cameras 10.
  • The output unit 30 may include a lamp 31 that expresses the start of a signature or the detection of the signature input means as light, or a speaker 32 that outputs it as sound.
  • The display device 2 is a device for displaying the signature pattern received from the image processing unit 20, with a display unit 21 formed on one side.
  • The display unit 21 may be an LCD, and since the signature is made in a non-contact manner, it does not need to be a touch screen like a conventional signature pad.
  • The non-contact signature unit 1 and the display device 2 may be formed integrally, but are more preferably formed separately for convenience of use. In the latter case, the two must be connected by wire or wirelessly.
  • The image processing unit 20 and the output unit 30 are illustrated as being in the main body 40, but they may instead be installed in the display device 2 or in a separate host device. In that case the generated captured images may be analyzed by the host device, which maps the position coordinates and then transmits them to the display device 2.
  • FIG. 8 illustrates a signature pattern on a virtual signature space area and a signature pattern projected onto a display device.
  • When the user signs in the virtual signature space area S, the signature motion is captured by the two or more cameras 10; the image processing unit 20 generates a signature pattern by detecting the continuous change of position of the finger 60, and the generated signature pattern is projected onto the display unit 21 of the display device 2 and displayed.
  • The thickness of the signature pattern 70 is preferably varied according to the height of the finger 60 (the z value in the virtual signature space area S) during the signing motion, so that the displayed pattern resembles the user's actual signature.
  • The main body 40 can be installed separately from the display device 2. This is because the virtual signature space area S is defined in the open space formed in the main body 40 through the calibration process using the calibration tool 50, and the mapping information between the virtual signature space area S and the actual physical display device 2 is set in advance. Therefore, according to the present invention, the main body 40 does not need to be installed integrally with the display device 2 and can be installed and used at any position or angle convenient for use.
  • FIG. 9 shows an installation example that makes signing easy and allows the user to easily see the signature pattern displayed on the display device 2.
  • FIG. 10 illustrates the principle of generating mapping information using a calibration tool, and FIG. 11 is a flowchart illustrating the process of generating the mapping information.
  • The calibration tool 50 used in the present invention has a structure similar to a cube: markers 51 are formed at the eight vertices of the cube and are connected by connecting rods 52, and legs 53 are formed at the lower end of the calibration tool 50 so that it is spaced a predetermined distance from the bottom. The height of the legs 53 may be set equal to the distance between the display device 2 and the virtual signature space area.
  • The mapping information generation process is as follows.
  • First, the calibration tool 50 of FIG. 10 is installed so as to contact the four corners of the display device 2 (S600). The cube region corresponding to the virtual signature space area is thereby positioned, by the legs 53 of the calibration tool 50, at a predetermined height above the display device 2.
  • The calibration tool 50 is then photographed using the two or more cameras 10 (S610), and the invisible virtual signature space area S is determined from the images captured by the two or more cameras 10 (S620). That is, the position information of the eight markers 51 is obtained from the captured images, and the cubic space formed by the eight markers 51 is defined as the virtual signature space area S.
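Defining the cubic space from the eight marker positions (S620) can be sketched as taking the bounding box of the detected markers; this assumes the marker positions are expressed in an axis-aligned world frame, and the sample coordinates are hypothetical:

```python
import numpy as np

def space_from_markers(markers):
    """Axis-aligned cube spanned by the eight detected markers 51 (S620)."""
    m = np.asarray(markers, float)
    return m.min(axis=0), m.max(axis=0)  # (lower corner, upper corner)

# Hypothetical marker positions: a 30 x 20 footprint floating 2 units above the display
markers = [(0, 0, 2), (30, 0, 2), (0, 20, 2), (30, 20, 2),
           (0, 0, 12), (30, 0, 12), (0, 20, 12), (30, 20, 12)]
lo, hi = space_from_markers(markers)
```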
  • The relative positions of the two or more cameras 10 with respect to the physical display device 2 or the virtual signature space area S are calculated using the marker information obtained from the images captured by the two or more cameras 10 (S630).
  • Transformation information is generated using the preset position information of the markers 51 of the calibration tool 50 and the marker position information extracted from the images acquired by the two or more cameras 10, and the mapping relationship between the virtual three-dimensional signature space area and the two-dimensional display device 2 is thereby determined (S640).
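As a toy illustration of the resulting mapping, the (x, y) footprint of the virtual signature space can be scaled onto the display's pixel grid. A real implementation would apply the transformation information derived from the markers; the simple linear scaling here is an assumption for clarity:

```python
def map_to_display(pt, space_min, space_max, disp_w, disp_h):
    """Map an (x, y, z) point in the virtual signature space to display pixels (S730)."""
    x, y, _z = pt  # z is used later for stroke thickness, not for position
    u = (x - space_min[0]) / (space_max[0] - space_min[0]) * disp_w
    v = (y - space_min[1]) / (space_max[1] - space_min[1]) * disp_h
    return u, v

# Hypothetical 30 x 20 space footprint mapped to a 300 x 200 pixel display area
map_to_display((15, 10, 5), (0, 0), (30, 20), 300, 200)
```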
  • FIG. 12 is a flowchart illustrating a process of receiving a signature in a contactless manner from the contactless signature pad according to the second embodiment and expressing the signature.
  • Images of the virtual signature space area S are continuously captured by the two or more cameras 10 to determine whether a signature input means such as a finger is detected (S700).
  • When detected, the touch area of the signature input means is extracted, and the position of the signature input means is detected within the extracted touch area (S710). The position of the signature input means is detected in order to track the change in the coordinates of the user's finger and generate the signature pattern.
  • Next, a three-dimensional touch position is calculated using triangulation (S720). That is, vectors are formed connecting the positions of the two or more cameras 10 with the position of the signature input means in the virtual signature space area S, and the intersection of the two vectors connecting the two camera positions with the signature input means is recognized as the touch position in the virtual signature space area S.
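In practice the two camera rays rarely intersect exactly, so a common implementation choice (an assumption here, not something the patent specifies) is to take the midpoint of the shortest segment between the two rays:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1 + s*d1 and c2 + t*d2 (S720)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b  # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return (c1 + s * d1 + c2 + t * d2) / 2

# Two hypothetical camera rays that meet at the fingertip position (1, 2, 3)
p = triangulate(np.array([0., 0, 0]), np.array([1., 2, 3]),
                np.array([2., 0, 0]), np.array([-1., 2, 3]))
```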
  • The user's finger position (x, y coordinates) determined in the invisible virtual signature space area S is converted into the projection position on the display device 2 using the mapping information calculated in the setting step (S730).
  • The thickness of the signature displayed on the display device 2 is determined by the height position (z coordinate) of the user's finger (S740). That is, the smaller the z coordinate, the deeper the user's finger is located in the virtual signature space area S, and the thicker the signature pattern is expressed. Since the height of the user's finger varies continuously during the signing motion, the thickness of the signature pattern varies accordingly, so a signature pattern very similar to the user's actual signature can be obtained.
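The z-to-thickness rule of S740 can be sketched as a clamped linear map from finger depth to stroke width; the thickness range used here is an illustrative assumption:

```python
def stroke_thickness(z, z_top, t_min=1.0, t_max=6.0):
    """Deeper finger (smaller z) -> thicker stroke, clamped to [t_min, t_max] (S740)."""
    depth = max(0.0, min(1.0, (z_top - z) / z_top))  # 0 at the surface, 1 at full depth
    return t_min + depth * (t_max - t_min)

stroke_thickness(10, z_top=10)  # finger just entering the space -> thinnest stroke
stroke_thickness(0, z_top=10)   # finger at full depth -> thickest stroke
```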
  • A smoothing process is performed as post-processing to prevent discontinuities caused by numerical errors generated while digitizing the continuously projected finger position and thickness information (S750). Smoothing converts the continuously projected position coordinates and thickness information into digital information and filters adjacent points so that the line connecting a plurality of points follows their overall shape.
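The S750 smoothing can be approximated by a moving-average filter over the sampled (x, y, thickness) points; the window size is an illustrative choice, not taken from the patent:

```python
def smooth(points, window=5):
    """Moving-average filter over sampled (x, y, thickness) tuples (S750)."""
    half = window // 2
    out = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        seg = points[lo:hi]
        # Average each coordinate over the window to suppress digitization jitter
        out.append(tuple(sum(c) / len(seg) for c in zip(*seg)))
    return out

smooth([(0, 0, 1), (2, 2, 1), (4, 4, 1)], window=3)
```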
  • The signature pattern data converted according to the mapping information is transmitted to the display device 2 and displayed on the display unit 21 (S760).
  • Signature characteristic information is then generated (S770), and the generated signature characteristic information is transmitted to the electronic payment system and used for user identity verification (S780).
  • The signature characteristic information may include the signature pattern, signature height, signature speed, signature direction, and the like; the image processor 20 analyzes the captured images to calculate this information and generates the signature characteristic information from it.
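A minimal sketch of deriving such characteristic information from per-frame finger positions; the specific features and field names below are illustrative assumptions, not the patent's definition:

```python
import math

def signature_features(samples, fps=30.0):
    """Signature speed, depth, and direction from per-frame (x, y, z) positions (S770).
    Field names are illustrative; a real system would define its own feature set."""
    dt = 1.0 / fps
    speeds, directions = [], []
    for (x0, y0, _), (x1, y1, _) in zip(samples, samples[1:]):
        dx, dy = x1 - x0, y1 - y0
        speeds.append(math.hypot(dx, dy) / dt)  # in-plane signing speed
        directions.append(math.atan2(dy, dx))   # stroke direction (radians)
    return {
        "mean_speed": sum(speeds) / len(speeds),
        "mean_depth": sum(s[2] for s in samples) / len(samples),
        "mean_direction_deg": math.degrees(sum(directions) / len(directions)),
    }

# Hypothetical samples: a straight horizontal stroke at constant depth
features = signature_features([(0, 0, 5), (1, 0, 5), (2, 0, 5)], fps=1.0)
```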
  • information about the user's signature that is, signature pattern information or signature characteristic information, should be stored in advance in the electronic payment system.
  • the present invention is an invention that allows the input of the keypad input or the signature pad in a non-contact manner, it is a very useful invention that can be applied to the keypad or signature pad used in banks, card merchants and the like.
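By way of illustration, the rendering steps S730–S750 above can be sketched as follows. This is a minimal sketch, not the disclosed implementation: the homography mapping, the depth range (`z_min`, `z_max`), the thickness range, and the moving-average window are illustrative assumptions.

```python
import numpy as np

def render_signature(points, mapping, z_min=0.0, z_max=80.0,
                     t_min=1.0, t_max=6.0, window=5):
    """Project sampled finger positions onto the display and derive
    stroke thickness from depth (smaller z -> thicker line).

    points  : (N, 3) array of (x, y, z) samples from the virtual space
    mapping : 3x3 homography mapping virtual (x, y) to display pixels
              (assumed obtained in the calibration/setting step)
    """
    pts = np.asarray(points, dtype=float)

    # S730: project (x, y) through the calibrated mapping (homogeneous coords)
    ones = np.ones((len(pts), 1))
    proj = (mapping @ np.hstack([pts[:, :2], ones]).T).T
    xy = proj[:, :2] / proj[:, 2:3]

    # S740: deeper finger (smaller z) -> thicker stroke
    depth = np.clip((z_max - pts[:, 2]) / (z_max - z_min), 0.0, 1.0)
    thickness = t_min + depth * (t_max - t_min)

    # S750: moving-average smoothing to remove quantization jitter
    kernel = np.ones(window) / window
    smooth = lambda v: np.convolve(v, kernel, mode="same")
    return np.column_stack([smooth(xy[:, 0]), smooth(xy[:, 1]),
                            smooth(thickness)])
```

The returned (x, y, thickness) triples would then be drawn as a variable-width polyline on the display unit (S760).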


Abstract

The present invention relates to a non-contact type input device. The non-contact type input device includes a main body having a predefined virtual input space area at one side; a physical display device installed at another side of the main body; at least two cameras installed at the main body so as to face the virtual input space area and generate captured images of the position or movement of a touch means in the virtual input space area; and an image processing unit for calculating position coordinates of the touch means by analyzing the captured images and converting the position coordinates of the touch means into projected position coordinates on the display device. According to the present invention, a user may input a key value or signature without any physical contact with an actual input device. Therefore, infection of a user by disease (germs) through indirect contact in crowded places such as hospitals, banks, marts, and restaurants may be prevented.

Description

Contactless input device
The present invention relates to a contactless input device and, more particularly, to a technique in which, when a user's finger touches an invisible virtual input surface or signs within a virtual input space area, a key value is input or a signature is displayed on the display device as if the user had touched an actual physical display device. This prevents disease transmission through contact and exposure of personal information such as fingerprints, and allows per-user signature characteristic information to be used for identity verification during electronic payment.
With the rapid development of electronic and information communication technology, there is hardly a field in which electronic devices such as computers are not used, and the input (touch) means of these devices are equally varied.
Input devices currently in wide use, such as touch screens, keyboards, keypads, mice, push buttons, light pens, digital tablets, and trackballs, are contact-type devices in which the user inputs data through direct physical contact with the input device.
Keypads relevant to the present invention are used in various fields such as digital door locks, ATMs (Automatic Teller Machines), and credit authorization terminals. Both button-type keypads using physical buttons and touch-screen keypads are in use, and all are contact-type.
However, such contact keypads have the following problems: 1) since users input data through physical contact with the keypad, the keypad acts as a medium that transmits disease (germs) through indirect contact in places with many visitors such as hospitals, banks, marts, and restaurants; 2) wear from frequent mechanical contact creates a risk of exposing information such as passwords; and 3) information such as the user's fingerprints remains on the input device, risking exposure of personal information.
Regarding contactless keypads, Korean Patent No. 10-0894544 discloses a technique in which an optical plane containing a keypad pattern is projected in the air, and when a finger is placed at a specific position on the projected optical plane, the finger position is detected using an IR light source unit and an IR receiver so that a key input is made. Since no mechanical key input device is needed, the device can be compact and used semi-permanently, and it can be applied to door locks, digital appliance keypads, and the like. FIG. 1 shows the configuration of this registered patent.
Referring to FIG. 1, the space touch screen device according to the registered patent includes a pattern projection unit 10, an IR light source unit 20, an IR sensing unit 30, a signal processing unit 40, and an optical plane 50.
The pattern projection unit 10 generates characters or images and displays them in the air. The optical plane 50 is the image projected in the air by the pattern projection unit 10 and may be a keypad, a still image, or the like for receiving input from the user.
The IR light source unit 20 emits IR optical signals to detect touch signals provided by the user on the optical plane 50. It generates independently modulated IR optical signals for each channel corresponding to each input button displayed on the optical plane 50 and provides them to the optical plane 50, thereby keeping the optical plane 50 in an input-ready state.
The IR sensing unit 30 amplifies and demodulates the IR optical signals reflected when the user touches the aerial keypad or image on the optical plane 50 and passes them to the signal processing unit 40. The signal processing unit 40 computes and processes the IR optical signals received from the IR sensing unit 30 to identify the input button the user touched, and delivers an output signal for the input to the pattern projection unit 10.
The registered patent provides a contactless keypad that requires no mechanical key input device, enabling semi-permanent use and application to door locks, digital appliance keypads, and the like. However, it requires a projection device to generate the optical plane 50, as well as as many IR light source units 20 and IR sensing units 30 as there are input buttons displayed on the optical plane 50, which raises the manufacturing cost and makes miniaturization difficult.
Meanwhile, as modern society has progressed into a credit and information society, payment means such as credit cards, electronic money, and electronic payment have become commonplace, and various user authentication methods exist for verifying user identity in payment systems. Information used by an electronic payment system to verify a user's identity may include fingerprints, iris patterns, DNA information, passwords, signatures, and the like; passwords and signatures are widely used because the information is easy to obtain and the technology is easy to implement.
In the password input method, the user inputs a password by touching buttons on a keypad or touch screen. In the signature method, the user inputs his or her own signature on a touch screen using a touch pen or a pointed object.
Contact-type signature pads, in which the user inputs a signature on a touch screen with a touch pen or pointed object, expose users to the risk of disease (germ) transmission through indirect contact in places with many visitors such as hospitals, banks, marts, and restaurants, since data is input through physical contact with an input device such as a touch pen. In addition, wear from mechanical contact between the touch pen and the touch screen shortens the device's lifespan.
The present invention has been devised to solve the problems of conventional contact keypads described above. By using two or more cameras to form an invisible input space area above a physical display device, key selection or signature input can be made without contact, preventing disease transmission through contact and exposure of personal information such as fingerprints.
Another object of the present invention is to allow the characteristic information of a user's signature, made within a virtual three-dimensional signature space area, to be used for identity verification during electronic payment.
According to a preferred embodiment of the present invention for achieving the above objects, there is provided a contactless input device comprising: a main body having a predefined virtual input space area on one side; a physical display device installed on another side of the main body; two or more cameras installed on the main body so as to face the virtual input space area and generate captured images of the position or movement of a touch means within the virtual input space area; and an image processing unit that analyzes the captured images to calculate the position coordinates of the touch means and converts them into projection position coordinates on the display device according to preset mapping information between the virtual input space area and the display device.
According to the present invention, a user can input a key value or signature without any physical contact with an actual input device, so transmission of disease (germs) to users through indirect contact in places with many visitors such as hospitals, banks, marts, and restaurants can be prevented.
In addition, since input is made without contact, there is no mechanical friction between a touch pen and a touch screen as in a keypad or signature pad, extending the mechanical lifespan of the device.
Furthermore, when the input device is a signature pad, the characteristic information of the user's signature made within the virtual three-dimensional signature space area can be used for identity verification during electronic payment, enabling more accurate user authentication.
FIG. 1 is a configuration diagram of the space touch screen device according to Korean Patent No. 10-0894544.
FIG. 2 is a block diagram of a contactless keypad device according to a first embodiment of the present invention.
FIG. 3 is a two-dimensional configuration diagram of the contactless keypad device according to the first embodiment of the present invention.
FIG. 4 is a flowchart illustrating the virtual keypad setting process in the contactless keypad device according to the first embodiment.
FIG. 5 is a flowchart illustrating a method of detecting a contactless key input using the contactless keypad device according to the first embodiment.
FIG. 6 is a block diagram of a contactless signature pad according to a second embodiment.
FIG. 7 is a perspective view showing the schematic configuration of the contactless signature pad according to the second embodiment.
FIG. 8 shows a signature pattern in the virtual signature space area and the signature pattern projected on the display device.
FIG. 9 shows an example of the installation of the main body and the display device.
FIG. 10 illustrates the principle of generating mapping information using a calibration tool.
FIG. 11 is a flowchart illustrating the process of generating mapping information using a calibration tool.
FIG. 12 is a flowchart illustrating the process of receiving a signature without contact and displaying it on the contactless signature pad according to the second embodiment.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. The embodiments describe a keypad (first embodiment) and a signature pad (second embodiment) as examples of a contactless input device, but the technical idea of the present invention is not limited thereto and encompasses any contactless input device that uses cameras to detect the position coordinates of a touch means within a virtual input space so that input can be made without contact.
First Embodiment: Contactless Keypad
FIG. 2 is a block diagram of the contactless keypad device according to the first embodiment of the present invention, and FIG. 3 is a two-dimensional configuration diagram of the same device.
Referring to FIGS. 2 and 3, the contactless keypad device according to the first embodiment of the present invention comprises a main body 1 and, installed on the main body 1, a physical keypad 100, cameras 200, an interface unit 300, an image processing unit 400, a storage unit 500, an output unit 600, and a control unit 700.
The physical keypad 100 visually presents the keypad layout. It may be an actual key input device such as physical buttons or an LCD screen, but any means that provides the user with a visual layout, such as paper printed with a keypad pattern, may be used.
The cameras 200 are installed above the physical keypad 100 to capture images of the space above it; two cameras are combined to generate stereo images.
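To illustrate why two overlapping views suffice to locate a fingertip in space, the following sketch shows depth recovery for an idealized rectified stereo pair; the focal length and baseline values are illustrative assumptions, and a real device would first rectify and calibrate the camera pair.

```python
def triangulate(x_left, x_right, y, focal_px=700.0, baseline_mm=60.0):
    """Recover a 3-D point from a rectified stereo pair.

    x_left, x_right : horizontal pixel coordinates of the fingertip
                      in the left and right images
    y               : vertical pixel coordinate (same row when rectified)
    Returns (X, Y, Z) in millimetres, camera-centred.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point is at infinity or images are swapped")
    depth = focal_px * baseline_mm / disparity   # Z = f * B / d
    return (x_left * depth / focal_px,
            y * depth / focal_px,
            depth)
```

A larger disparity between the two views means the fingertip is closer to the cameras, which is what lets the device decide whether the touch means has descended into the virtual keypad area.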
The interface unit 300 connects the keypad device to a server (not shown).
The image processing unit 400 detects the position of a touch means, such as the user's finger, from the images captured by the cameras 200, and determines the key 810 in the virtual keypad area 800 (described later) that corresponds to the detected position.
The storage unit 500 stores information about the size of the physical keypad 100 and its keys 110, information about the virtual keypad area 800 (a virtual input space located a certain distance above the physical keypad 100) and the positions of the keys 810 within it, and information for converting the virtual keypad area as captured by the cameras 200 into the actual virtual keypad area 800.
Here, the virtual keypad area 800 is a virtually defined area corresponding to the physical keypad 100 at a predetermined height h above it, used to detect contactless key inputs by the touch means; it is invisible to the user.
The output unit 600 generates sound or light to provide a sense of key touch when the image processing unit 400 determines that a key 810 in the virtual keypad area 800 has been touched. When a specific key 810 in the virtual keypad area 800 is determined to have been touched, the color or brightness of the corresponding key 110 on the physical keypad 100 is changed, or sound or light is emitted to convey the feeling of a touch.
The control unit 700 is responsible for the overall control of the contactless keypad device using cameras according to the present invention.
In addition, a dam 900 is preferably installed on one side of the main body 1 to stabilize the background when acquiring stereo images.
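A membership test for the virtual keypad area described above can be sketched as a simple box-plus-tolerance check; the keypad extents and the tolerance band around height h are illustrative assumptions, not values from the disclosure.

```python
def in_virtual_keypad(pt, origin, width, depth, height_h, tol=5.0):
    """Return True if the 3-D point pt lies on the virtual keypad plane
    located height_h above the physical keypad, within a tolerance band.

    pt            : (x, y, z) position of the touch means
    origin        : (x, y) of one corner of the physical keypad
    width, depth  : extents of the physical keypad
    tol           : how close to the plane the fingertip must be
    """
    x, y, z = pt
    ox, oy = origin
    return (ox <= x <= ox + width and
            oy <= y <= oy + depth and
            abs(z - height_h) <= tol)
```

Only when this test passes would the device go on to resolve which key was touched.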
FIG. 4 is a flowchart illustrating the virtual keypad setting process in the contactless keypad device according to the first embodiment.
Referring to FIG. 4, first, the information of the physical keypad 100 is obtained and stored (S400). The physical keypad information may include the size of the physical keypad 100, the positions of its keys, and the like.
Once the information of the physical keypad 100 is obtained, a calibration tool with markers formed at points corresponding to the height of the virtual keypad area is used to touch multiple points of the physical keypad 100 simultaneously or in sequence; the markers are detected from the stereo images captured by the cameras 200, and the information about the virtual keypad area 800 is determined and stored (S410).
Next, the spatial correlation of the cameras 200 with respect to the physical keypad 100 or the virtual keypad area 800 is calculated and stored using the detected marker information (S420). Here, the spatial correlation may be the relative position or relative angle of the cameras 200 with respect to the virtual keypad area 800 or the physical keypad 100.
Finally, conversion information is generated and stored using the marker information obtained while touching multiple points of the physical keypad 100 with the calibration tool and the marker information obtained from the images acquired by the cameras 200 (S430). Here, the conversion information is a matrix for transforming between actual coordinates and the coordinates as seen by the cameras 200; generating such a transformation matrix and its inverse is well known to those skilled in the art, so a detailed description is omitted.
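One common way to estimate such a conversion matrix from marker correspondences is a least-squares fit; the 2-D affine formulation below (NumPy) is a sketch of that general idea, not the patent's specific method, and the marker coordinates used are illustrative.

```python
import numpy as np

def fit_affine(camera_pts, world_pts):
    """Fit a 2-D affine transform M such that world ~= M @ [x, y, 1].

    camera_pts, world_pts : (N, 2) arrays of corresponding marker
    positions (N >= 3) in camera-image and real-world coordinates.
    Returns a 2x3 matrix; the inverse transform can be fit by
    swapping the arguments.
    """
    cam = np.asarray(camera_pts, dtype=float)
    wld = np.asarray(world_pts, dtype=float)
    ones = np.ones((len(cam), 1))
    design = np.hstack([cam, ones])              # (N, 3) design matrix
    coef, *_ = np.linalg.lstsq(design, wld, rcond=None)
    return coef.T                                # (2, 3) affine matrix

def apply_affine(M, pt):
    """Map a single (x, y) point through the fitted transform."""
    x, y = pt
    return tuple(M @ np.array([x, y, 1.0]))
```

With four or more well-spread markers, the fit is overdetermined and averages out small detection errors.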
FIG. 5 is a flowchart illustrating a method of detecting a contactless key input using the contactless keypad device according to the first embodiment.
Once the virtual keypad area 800 located at a certain height above the physical keypad 100 is defined through the process of FIG. 4, the contactless key input detection process of FIG. 5 is performed.
Images of the space above the physical keypad 100 are continuously captured by the cameras 200 to determine whether a touch means is present.
If a touch means is detected and is located within the virtual keypad area 800 based on the captured images of the cameras 200, the touch area of the touch means is detected (S500).
When the touch area of the touch means is detected (S500), the touch position in the virtual keypad area 800 is determined based on the detected area (S510).
Based on the detected position of the touch means in the virtual keypad area, the nearest key 810 in the virtual keypad area 800 is found (S520), and the distance between the touch position and that key 810 is compared with a reference value; if the distance is below the reference value, the key is recognized as selected (S530). The corresponding key signal is then transmitted to the server, and as feedback, the color or brightness of the key 110 on the physical keypad 100 is changed, or sound or light is emitted to convey the feeling of a touch (S540).
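Steps S520–S530 amount to a nearest-neighbour lookup with a distance threshold. A minimal sketch (the key layout and threshold value are illustrative assumptions):

```python
import math

def detect_key(touch_pos, key_centers, threshold=12.0):
    """Return the label of the nearest virtual key, or None if the
    touch is farther than `threshold` from every key centre (S520-S530).

    touch_pos   : (x, y) position of the touch means in the virtual
                  keypad area
    key_centers : mapping of key label -> (x, y) key centre
    """
    label, center = min(key_centers.items(),
                        key=lambda kv: math.dist(touch_pos, kv[1]))
    return label if math.dist(touch_pos, center) <= threshold else None
```

Returning None for out-of-range touches is what prevents a hand merely passing over the keypad from registering a key press.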
Second Embodiment: Contactless Signature Pad
FIG. 6 is a block diagram of the contactless signature pad according to the second embodiment, and FIG. 7 is a perspective view showing its schematic configuration.
Referring to FIGS. 6 and 7, the contactless signature pad according to the second embodiment broadly consists of a contactless signature unit 1, in which the user signs through a virtual signature space area S serving as a virtual input space and which converts the input signature into position coordinates corresponding to the display area of the display device and provides them to the display device 2, and a physical display device 2 that visually displays the user's signature.
The contactless signature unit 1 comprises a main body 40 with an invisible virtual signature space area S formed on one side, two or more cameras 10 installed on one side of the main body 40, an image processing unit 20 that converts the signature pattern in the virtual signature space area S into position coordinates on the display device 2, and an output unit 30 that generates sound or light to provide a sense of touch when a signature is input.
The two or more cameras 10 are installed in the main body 40 so as to face the virtual signature space area S and capture the movement of the signature input means within it; two cameras are combined to generate captured images. The main body 40 is formed to surround four sides of the virtual signature space area S, and the two or more cameras 10 are installed on one of those four sides, allowing them to acquire images stably. Surrounding the four sides of the virtual signature space area S with the main body 40 serves to acquire images more stably; it is of course also possible for the virtual signature space area S to be formed in the open space in front of the two or more cameras 10 without being enclosed by the main body 40.
The image processing unit 20 is installed inside the main body 40, analyzes the captured images generated by the two or more cameras 10 to continuously calculate the position coordinates of the signature input means, and generates signature pattern data by converting each position coordinate into projection position coordinates on the display device 2 according to preset mapping information between the virtual signature space area and the display device.
Although not shown, the image processing unit 20 may further include mapping means for performing the mapping between the physical display device 2 and the virtual signature space area S, position extraction means for extracting the position of the signature input means from the images acquired by the two or more cameras 10, and position determination means for determining the position of the signature input means within the virtual signature space area S using the position information of the two or more cameras 10.
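The three means just described can be thought of as stages of a per-frame pipeline. The sketch below is purely structural: the three callables and their signatures are hypothetical stand-ins, not interfaces defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

Point2 = Tuple[float, float]
Point3 = Tuple[float, float, float]

@dataclass
class SignatureProcessor:
    """Per-frame pipeline mirroring the three means described above.

    extract    : finds the fingertip pixel position in one camera image
    locate     : combines the two pixel positions into an (x, y, z)
                 point in the virtual signature space area S
    to_display : maps virtual (x, y) to display-unit coordinates
    """
    extract: Callable[[object], Point2]
    locate: Callable[[Point2, Point2], Point3]
    to_display: Callable[[Point2], Point2]

    def process_frame(self, img_left, img_right) -> Tuple[Point2, float]:
        p_left = self.extract(img_left)
        p_right = self.extract(img_right)
        x, y, z = self.locate(p_left, p_right)
        # return the projected position plus depth, which the renderer
        # uses for stroke thickness
        return self.to_display((x, y)), z
```

Running this over successive frames yields the stream of (position, depth) samples from which the signature pattern is built.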
When a signature is input, that is, when the entry of the signature input means into the virtual signature space area S is detected, the output unit 30 indicates the start of the signature or the detection of the signature input means through a lamp 31 that emits light or a speaker 32 that outputs sound.
The display device 2 displays the signature pattern received from the image processing unit 20 and has a display unit 21 formed on one side. The display unit 21 may be an LCD; since the signature is made without contact, it need not be a touch screen as in a conventional signature pad.
The contactless signature unit 1 and the display device 2 may be formed as a single unit, but it is more preferable to form them separately for convenience of use; in the latter case, the two must be connected by wire or wirelessly.
In addition, although this embodiment illustrates the image processing unit 20 and the output unit 30 installed in the main body 40, they may instead be installed in the display device 2 or in a separate host device, in which case the captured images generated by the contactless signature unit 1 are analyzed by the host device, which maps the position coordinates and transmits them to the display device 2.
FIG. 8 shows a signature pattern in the virtual signature space area and the signature pattern projected on the display device.
As shown in FIG. 8, when the user places a finger 60 within the virtual signature space area S and signs in the space, the signature images are captured by the two or more cameras 10, the image processing unit 20 detects the continuous change in the position of the finger 60 to generate a signature pattern, and the generated signature pattern is projected and displayed on the display unit 21 of the display device 2. As described later, the thickness of the signature pattern 70 is preferably varied according to the height of the finger 60 (the z value within the virtual signature space area S) during the signature operation so that it resembles the user's actual signature.
FIG. 9 shows an example of the installation of the main body and the display device.
One advantage of the non-contact signature pad according to the second embodiment is that the main body 40 can be installed separately from the display device 2. This is possible because the virtual signature space area S is defined within the open space formed in the main body 40 through the calibration process using the calibration tool 50, and the mapping information between the virtual signature space area S and the actual physical display device 2 is set in advance. Therefore, according to the present invention, the main body 40 need not be installed integrally with the display device 2 and can be installed and used at any convenient position or angle.
As an example of this freedom of installation, FIG. 9 shows the display device 2 installed vertically and the main body, i.e. the non-contact signature unit 1, installed in front of the display device 2 at a 45° angle, so that the user can easily sign with a finger 60 while also easily viewing the signature pattern displayed on the display device 2.
Various other arrangements are also possible, such as placing the display device 2 horizontally with the non-contact signature unit 1 at an arbitrary height above it, or arranging the two devices parallel to each other horizontally.
FIG. 10 illustrates the principle of generating mapping information using the calibration tool, and FIG. 11 is a flowchart illustrating the process of generating mapping information using the calibration tool.
As shown in FIG. 10, the calibration tool 50 used in the present invention has a cube-like structure: markers 51 are formed at the eight vertices of the cube, the markers 51 are joined by connecting rods 52, and legs 53 are formed at the bottom of the calibration tool 50 so that it stands a fixed distance above the floor surface. The height of the legs 53 may be set equal to the distance between the display device 2 and the virtual signature space area.
The mapping information generation process is described with reference to FIG. 11. First, the calibration tool 50 of FIG. 10 is placed so as to contact the four corners of the display device 2 (S600). The legs 53 of the calibration tool 50 then hold the cubic region corresponding to the virtual signature space area at a fixed height above the display device 2.
Next, the calibration tool 50 is photographed using the two or more cameras 10 (S610), and the invisible virtual signature space area S is determined from the captured images (S620). That is, the position information of the eight markers 51 is obtained from the captured images, and the cubic space formed by the eight markers 51 is defined as the virtual signature space area S.
Once the virtual signature space area S is determined, the relative positions of the two or more cameras 10 with respect to the physical display device 2 or the virtual signature space area S are calculated using the marker information obtained from the images captured by the cameras (S630).
Finally, conversion information is generated using the known marker 51 position information of the calibration tool 50 and the marker 51 position information extracted from the image information acquired from the two or more cameras 10, thereby determining the mapping relationship between the three-dimensional virtual signature space area and the two-dimensional display device 2 (S640).
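The mapping step S640 can be sketched roughly as follows. This is only an illustrative simplification: it assumes the virtual signature space is an axis-aligned box recovered from the eight marker positions and maps its (x, y) footprint linearly onto the display's pixel grid; the function and parameter names are not from the disclosure, and a real implementation would use the camera geometry computed in S630.

```python
# Hypothetical sketch of mapping the virtual space footprint to display pixels.
# Assumes an axis-aligned marker box; names are illustrative, not from the patent.

def build_mapping(marker_points, display_w, display_h):
    """marker_points: eight (x, y, z) marker positions of the calibration tool."""
    xs = [p[0] for p in marker_points]
    ys = [p[1] for p in marker_points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)

    def to_display(x, y):
        # Linearly interpolate the virtual-space (x, y) into display pixels.
        u = (x - x_min) / (x_max - x_min) * display_w
        v = (y - y_min) / (y_max - y_min) * display_h
        return u, v

    return to_display
```

With a unit box and an 800x600 display, the box corners map to the display corners and the box center to the display center.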
FIG. 12 is a flowchart illustrating the process by which the non-contact signature pad according to the second embodiment receives a signature without contact and displays it.
Referring to FIG. 12, the two or more cameras 10 continuously capture images of the virtual signature space area S, and it is determined whether a signature input means such as a finger is detected (S700).
If a signature input means is detected, the touch region of the signature input means is extracted, and the position of the signature input means is detected within the extracted touch region (S710). The position of the signature input means is detected in order to track the change in the coordinates of the user's finger and thereby generate the signature pattern.
When the position of the signature input means has been detected, the three-dimensional touch position is calculated by triangulation (S720). That is, vectors are constructed connecting the positions of the two or more cameras 10 to the position of the signature input means in the virtual signature space area S, and the intersection of the two vectors connecting the positions of the two cameras to the position of the signature input means is recognized as the touch position in the virtual signature space area S.
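The triangulation in S720 can be sketched as follows. The patent describes intersecting the two camera-to-fingertip vectors; since two 3-D rays rarely intersect exactly under measurement noise, a common practical realization (assumed here, not stated in the disclosure) is the midpoint of the rays' closest approach. All names are illustrative.

```python
# Hypothetical triangulation sketch: two rays p + t*d, one per camera, aimed
# at the detected fingertip; the touch point is taken as the midpoint of
# their closest approach. Near-parallel rays yield no reliable fix.

def closest_point_between_rays(p1, d1, p2, d2):
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None  # rays are (nearly) parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * v for p, v in zip(p1, d1))  # closest point on ray 1
    q2 = tuple(p + s * v for p, v in zip(p2, d2))  # closest point on ray 2
    return tuple((u + v) / 2 for u, v in zip(q1, q2))
```

For two rays that do intersect, e.g. from cameras at (0, 0, 0) and (2, 0, 0) both aimed at (1, 1, 1), the function returns the exact intersection.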
Once the three-dimensional touch position has been calculated, the projection position on the display device 2 is determined from the user's finger position (x, y coordinates) within the invisible virtual signature space area S, using the mapping information calculated during the setup stage (S730).
The thickness of the signature displayed on the display device 2 is then determined by the height position (z coordinate) of the user's finger (S740). That is, the smaller the z coordinate, the deeper the user's finger lies within the virtual signature space area S, and the thicker the signature pattern is drawn. Since the height of the user's finger varies continuously during the signing motion, the thickness of the signature pattern varies accordingly, yielding a signature pattern very similar to the user's actual signature.
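Step S740 amounts to a depth-to-thickness transfer function. A minimal sketch, in which the linear shape of the curve and the pixel range are assumptions (the patent only requires that a smaller z produce a thicker stroke):

```python
# Illustrative z-to-thickness mapping: a finger at the bottom of the virtual
# box (z = 0) draws the thickest stroke, one at the top the thinnest,
# mimicking pen pressure. Range values are assumed, not from the patent.

def stroke_thickness(z, box_height, min_px=1.0, max_px=8.0):
    depth = max(0.0, min(1.0, 1.0 - z / box_height))  # 1.0 at bottom, 0.0 at top
    return min_px + depth * (max_px - min_px)
```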
Once the continuous position coordinates of the signature pattern have been obtained, a smoothing process is performed as a post-process to prevent the continuously projected finger position and thickness information from becoming discontinuous due to mathematical errors introduced during digitization (S750). Smoothing converts the continuously projected position coordinates and thickness information into digital information and filters over a plurality of adjacent points: when the line connecting the points is formed, any discontinuous point has its coordinates adjusted to conform to the overall shape of the line.
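The smoothing post-process S750 can be sketched with a simple neighborhood filter. The 3-point moving average below is an assumed choice; the disclosure only specifies filtering over adjacent points, so any low-pass filter that pulls outlier samples back toward the stroke would fit.

```python
# Hypothetical smoothing sketch: average each digitized sample with its two
# neighbors; endpoints are kept as-is so the stroke's start and end stay put.

def smooth_stroke(points):
    """points: list of (x, y, thickness) samples along the signature."""
    if len(points) < 3:
        return list(points)
    out = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        out.append(tuple((a + b + c) / 3 for a, b, c in zip(prev, cur, nxt)))
    out.append(points[-1])
    return out
```

A spike sample between two identical neighbors is pulled back toward the line, which is exactly the discontinuity correction the text describes.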
When the smoothing process is complete, the signature pattern data converted according to the mapping information is transmitted to the display device 2 and displayed on the display unit 21 (S760).
Once the signature pattern has been displayed, signature characteristic information is generated (S770) and transmitted to the electronic payment system, where it is used for purposes such as user identity verification (S780). The signature characteristic information may include the signature pattern, signature height, signature speed, and signature direction; the image processing unit 20 analyzes the captured images to calculate this information and generates the signature characteristic information from it.
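Two of the characteristics named for S770, signing speed and direction, can be derived from timestamped sample points. A minimal sketch under assumed names; the sample format (t, x, y) and the per-segment metrics are illustrative, not from the disclosure.

```python
# Illustrative signature-feature sketch: average signing speed and
# per-segment direction angles from timestamped (t, x, y) samples.
import math

def signature_features(samples):
    """samples: list of (t, x, y); returns (average speed, segment angles)."""
    speeds, angles = [], []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        if t1 > t0:
            speeds.append(dist / (t1 - t0))
        angles.append(math.atan2(y1 - y0, x1 - x0))
    avg_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return avg_speed, angles
```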
In order for the user's identity to be verified using the signature characteristic information, information about the user's signature, that is, signature pattern information or signature characteristic information, must be stored in advance in the electronic payment system.
The present invention enables keypad or signature-pad input without physical contact, and is highly useful industrially, being applicable to keypads and signature pads used in banks, card merchant stores, and the like.

Claims (11)

1. A non-contact input device comprising:
    a main body having a predefined virtual input space area on one side;
    a physical display device installed on another side of the main body;
    two or more cameras installed in the main body so as to face the virtual input space area, the cameras generating captured images of the position or movement of a touch means within the virtual input space area; and
    an image processing unit which analyzes the captured images to calculate position coordinates of the touch means, and which converts the position coordinates of the touch means into projected position coordinates on the display device according to preset mapping information between the virtual input space area and the display device.
2. The non-contact input device of claim 1, further comprising a storage unit which stores information about the physical display device, information about the virtual input space area, and conversion information for converting the virtual input space area captured by the cameras into the display area of the actual display device.
3. The non-contact input device of claim 1, wherein the display device is a keypad on which a key pattern is formed, and the image processing unit detects a touch area of the touch means from the captured images and determines the key corresponding to the touch area of the touch means within the virtual input space area.
4. The non-contact input device of claim 3, further comprising an output unit which generates sound or light to provide a sense of key touch when the image processing unit determines that a key has been touched in the virtual input space area.
5. The non-contact input device of claim 3, wherein a dam having a height corresponding to the height of the virtual keypad is installed on one side of the physical keypad.
6. The non-contact input device of claim 3, wherein the virtual keypad area and the conversion information are obtained by:
    detecting the markers of a marker-bearing calibration tool from images captured by the cameras while the calibration tool touches the physical keypad, thereby determining a virtual keypad area as the virtual input space area;
    calculating the relative position or relative angle of the cameras with respect to the virtual keypad area or the physical keypad using the information of the detected markers; and
    generating and storing conversion information using the marker information obtained when the physical keypad is touched with the calibration tool and the marker information extracted from the image information acquired from the cameras.
7. The non-contact input device of claim 1, wherein the image processing unit continuously calculates the position coordinates of the touch means and generates signature pattern data in which each calculated position coordinate of the touch means is converted into projected position coordinates on the display device according to the preset mapping information between the virtual input space area and the display device, and wherein the display device is a sign pad which displays the signature pattern received from the image processing unit.
8. The non-contact signature pad of claim 7, wherein the image processing unit continuously calculates height information of the signature input means from the captured images and converts the height information into thickness information of the signature pattern.
9. The non-contact signature pad of claim 7, wherein the image processing unit converts the continuously projected position coordinates and thickness information into digital information and smooths the signature pattern by filtering over a plurality of adjacent points.
10. The non-contact signature pad of claim 7, wherein the image processing unit calculates signature pattern, signature height, signature speed, and signature direction information of the signature input means from the captured images and transmits them to an electronic payment system for use as user identity verification information.
11. The non-contact signature pad of claim 7, wherein the mapping information is obtained by:
    positioning a calibration tool, on which markers are formed at points corresponding to the vertices of the virtual signature space area, so as to be spaced apart from the display device by a predetermined height;
    determining the invisible virtual signature space area from images captured by two or more cameras;
    calculating the relative positions of the two or more cameras with respect to the physical display device or the virtual signature space area using marker information obtained from the images captured by the two or more cameras; and
    generating conversion information using the marker position information of the calibration tool and the marker position information extracted from the image information acquired from the two or more cameras, thereby determining the mapping relationship between the three-dimensional virtual signature space area and the two-dimensional display device.
PCT/KR2011/006428 2010-09-02 2011-08-31 Non-contact type input device WO2012030153A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2010-0085878 2010-09-02
KR1020100085878A KR101036452B1 (en) 2010-09-02 2010-09-02 Contactless keypad and realization method thereof
KR1020100111341A KR101216537B1 (en) 2010-11-10 2010-11-10 Contactless Sign Pad
KR10-2010-0111341 2010-11-10

Publications (2)

Publication Number Publication Date
WO2012030153A2 true WO2012030153A2 (en) 2012-03-08
WO2012030153A3 WO2012030153A3 (en) 2012-05-03

Family

ID=45773389

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2011/006428 WO2012030153A2 (en) 2010-09-02 2011-08-31 Non-contact type input device

Country Status (1)

Country Link
WO (1) WO2012030153A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113908304A (en) * 2021-09-29 2022-01-11 安徽省东超科技有限公司 Self-service terminal equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020087938A * 2000-12-27 2002-11-23 NTT Docomo, Inc. Handwriting data input device and method, and authenticating device and method
KR20030072591A * 2001-01-08 2003-09-15 VKB Inc. A data input device
KR20050075031A * 2002-11-20 2005-07-19 Koninklijke Philips Electronics N.V. User interface system based on pointing device
KR100929162B1 * 2009-04-20 2009-12-01 District Holdings Co., Ltd. Interactive hologram information service apparatus and method using gesture


Also Published As

Publication number Publication date
WO2012030153A3 (en) 2012-05-03

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11822128

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11822128

Country of ref document: EP

Kind code of ref document: A2