WO2009066998A2 - Apparatus and method for multiple-touch spatial sensors - Google Patents

Apparatus and method for multiple-touch spatial sensors

Info

Publication number
WO2009066998A2
WO2009066998A2 (PCT/MY2008/000164)
Authority
WO
WIPO (PCT)
Prior art keywords
spatial
image data
camera
point
view
Prior art date
Application number
PCT/MY2008/000164
Other languages
French (fr)
Other versions
WO2009066998A3 (en)
Inventor
Hock Woon Hon
Shern Shiou Tan
Original Assignee
Mimos Berhad
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mimos Berhad
Publication of WO2009066998A2
Publication of WO2009066998A3


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 Detection arrangements using opto-electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/366 Image reproducers using viewer tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04808 Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to an apparatus and a method for multiple-touch three-dimensional contactless control for spatial sensing. The apparatus comprises two cameras (101, 102) having spatial sensors to capture object position in the form of an image, a register for registering the spatial directions as sensed by the spatial sensors, a data processing unit, and a computer for computing object point derivation and blob analysis. The method for multiple-touch three-dimensional contactless control for spatial sensing comprises the steps of: positioning a first and a second camera (101, 102) and capturing image data of an object (107) with the cameras (101, 102); transferring the captured image data of the object (107) through background and foreground segmentation using an image processing function; determining the spatial position of each of the image data of the object (107) and deriving a three-dimensional spatial position of the point; and processing the captured image data through blob analysis.

Description

APPARATUS AND METHOD FOR MULTIPLE-TOUCH SPATIAL SENSORS
The present invention relates to spatial sensing. More particularly, the present invention relates to an apparatus and a method for multiple-touch three-dimensional contactless control for spatial sensing.
BACKGROUND TO THE INVENTION
Currently, most available three-dimensional applications and systems for measurement of spatial coordinates rely on single control sensors such as a mouse, trackball, or touchpad for three-dimensional control. These systems normally do not provide spatial coordinates as accurate as systems based on a plurality of angle sensors of the corresponding type. An example of such a system is found in U.S. Patent No. 6,856,259, which discloses a touch input system for use with information display systems and methods for distinguishing multiple touches overlapping in time. The systems and methods analyze and optimize data collected on the x-axis over time independently from that collected on the y-axis, and for each (x, y) pair corresponding to a potential touch location, correlation values between x magnitudes and y magnitudes are calculated. However, these systems are based only on two-dimensional touch-based capacitive sensors.
Another example of these systems is found in U.S. Patent No. 6,947,032, which discloses a touch system and method for determining pointer contacts on a touch surface based on touch-based sensing, and only provides two-dimensional sensing.
U.S. Patent No. 6,359,680 discloses a process and device for measuring three-dimensional objects through optical exposures, projected patterns and calculations of triangulation, wherein the measurement is carried out without any contact. However, this invention does not provide three-dimensional control of a user interface.
SUMMARY OF THE INVENTION
The present invention is directed to overcoming one or more of the problems set forth above.
In one aspect of the present invention, there is provided an apparatus for multiple-touch three-dimensional contactless control for spatial sensing. The apparatus comprises a first and a second camera having spatial sensors to capture object position in the form of an image, a register for registering the spatial directions as sensed by the spatial sensors, a data processing unit, and a computer for computing object point derivation and blob analysis.
In another aspect of the present invention, there is provided a method for multiple-touch three-dimensional contactless control for spatial sensing. The method comprises the following steps:
positioning a first and a second camera and capturing image data of an object with the cameras;
transferring the captured image data through background and foreground segmentation using an image processing function;
determining the spatial position of each of the image data and deriving a three-dimensional spatial position of the point; and
processing the captured image data through blob analysis.
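The segmentation and blob analysis steps above can be illustrated with a short per-camera sketch, assuming an OpenCV-style pipeline with a static reference background; the function name, threshold and minimum blob area are illustrative assumptions and not part of the disclosure.

```python
# Illustrative per-camera sketch (not the disclosed implementation):
# background/foreground segmentation followed by blob analysis to locate
# fingertip blobs in one camera view.
import cv2
import numpy as np

def fingertip_blobs(frame, background, thresh=30, min_area=50):
    """Return centroids (u, v) of foreground blobs in a single camera view."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    bg = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    # Background subtraction against the reference frame.
    diff = cv2.absdiff(gray, bg)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening removes speckle noise before blob analysis.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    # Blob analysis: external contours, keeping centroids of large blobs.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```

Each camera view would be processed independently in this way, and the resulting two-dimensional centroids feed the object point derivation described below.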
It is an object of the present invention to demonstrate the feasibility and the advantages of multiple-touch contactless spatial sensors over conventional single-input two-dimensional touch-based sensors, such as a keyboard or keypad.
Another object of the present invention is to secure user interface applications such as the Automated Teller Machine (ATM), where the three-dimensional spatial sensors act as a user interface replacing the ATM keypad, avoiding direct contact with the keypad that may leave fingerprints which could be used to identify the PIN code.
An advantage of the present invention is that users can use their fingertips to enable three-dimensional control without the need to wear a motion or spatial sensor, e.g. glove sensors.
Another advantage of the present invention is that the apparatus provides multiple discrete three-dimensional points (x, y and z) as inputs to interface with a computer, and can potentially achieve high-resolution three-dimensional sampling along the x-, y- and z-axes, depending on the resolution of the cameras used.
The three-dimensional spatial sensors can also be used for applications in the graphics or video industry, specifically where return forces are important, for example punching, to detect force and acceleration in three-dimensional space so that the application can respond to this extra input accordingly.
Moreover, it can also be used in software that requires geometrical manipulation such as translation, zoom, and rotation, for example in three-dimensional Computer-aided design (CAD) and virtual reality applications.
The three-dimensional spatial sensors can also be used to manipulate three-dimensional space with a stereoscopic display, i.e. objects appearing in a three-dimensional visualization can be touched as if they resided in a physical volume.
Further, the multiple-touch spatial sensor of the present invention is able to emulate future keyboards and keypads, where the spatial sensor captures coordinates and sends the input signal to the computer.
These and other aspects, objects, features and advantages of the present invention will be more clearly understood and appreciated from a review of the following detailed description of the preferred embodiment and appended claims, and by reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The specific features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Fig. 1a illustrates an apparatus for multiple-touch three-dimensional contactless control for spatial sensing according to the present invention;
Fig. 1b illustrates an enlarged view of planes where cameras are positioned according to the present invention;
Fig. 2 illustrates a method to derive a three-dimensional object point of an object of interest using the apparatus of Fig. 1 according to the present invention;
Fig. 3 illustrates an embodiment of a two-dimensional input device captured by the apparatus of Fig. 1 according to the present invention;
Fig. 4 illustrates an embodiment of a stereoscopic/volumetric object captured by the apparatus of Fig. 1 according to the present invention; and
Fig. 5 illustrates geometric transform methods that are applicable on the stereoscopic/volumetric object captured by the apparatus of Fig. 1 according to the present invention.
DETAILED DESCRIPTION OF THE INVENTION
In the following description of preferred embodiments of the present invention, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
Fig. 1a and Fig. 1b illustrate an apparatus for multiple-touch three-dimensional contactless control for spatial sensing. The apparatus comprises two cameras 101, 102, whereby camera 101 is positioned at the XZ-plane to capture the object point coordinate (x, z) from a below view. Camera 102, on the other hand, is positioned at the XY-plane to capture the object point coordinate (x, y) from a side view.
Both cameras 101, 102 have two fields of view 103, 104 and their respective datum points 105, 106. Field of view 103 is the coverage area of camera 101, while field of view 104 is the coverage area of camera 102.
The intersection of field of view 103 and field of view 104 forms a volumetric region; any point falling within the volumetric region can be represented by two (2) two-dimensional coordinates in the XZ and XY planes. For example, point 107 at coordinate (x1, y1, z1) in object space is mapped into field of view 103 for camera 101 and field of view 104 for camera 102.
The dimensions of the fields of view 103, 104, i.e. the width and height that form the volumetric region, depend on the distance between cameras 101, 102 and the sensor position. As the camera distance increases, the field of view increases accordingly. Another factor governing the volumetric region is the focal length: the volumetric region increases as the focal length of cameras 101, 102 is reduced.
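This scaling can be illustrated with a simple pinhole-camera approximation; the sensor size, focal lengths and distances below are illustrative assumptions only.

```python
# Pinhole approximation of how the covered field of view, and hence the
# volumetric region, scales with camera distance and focal length.
def fov_extent(distance_mm, focal_length_mm, sensor_extent_mm):
    """Linear field-of-view extent covered at a given working distance."""
    return distance_mm * sensor_extent_mm / focal_length_mm

# Doubling the camera distance doubles the covered extent...
print(fov_extent(500, 8, 4.8))   # 300.0 mm
print(fov_extent(1000, 8, 4.8))  # 600.0 mm
# ...and halving the focal length doubles it as well.
print(fov_extent(500, 4, 4.8))   # 600.0 mm
```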
Datum points 105 and 106 are used as reference points to determine the three-dimensional coordinate position of object points, such as point 107 at coordinate (x1, y1, z1), by matching the individual two-dimensional coordinate information from the XZ plane with that from the XY plane. The detected three-dimensional object point is then transferred out to the system by different means, for example a parallel port, serial port, USB or other PC input/output interfaces.
In Fig. 2, object point 107 is projected onto the XZ plane as image point 107A, and the same point is projected onto the XY plane as image point 107B. By combining the two-dimensional coordinate information 107A, 107B, the three-dimensional coordinate of object point 107 can be determined; this is represented by coordinate 108 (x1, y1, z1).
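A minimal sketch of this derivation, assuming both image points are already expressed in a common metric scale relative to their datum points, with a hypothetical tolerance on the shared x component:

```python
# Sketch of the object point derivation of Fig. 2: the XZ view supplies
# (x, z), the XY view supplies (x, y), and the shared x component ties the
# two views together. The tolerance is an illustrative assumption.
def derive_point(xz_point, xy_point, datum_xz=(0.0, 0.0), datum_xy=(0.0, 0.0),
                 x_tolerance=5.0):
    """Combine image points 107A (XZ view) and 107B (XY view) into (x, y, z)."""
    x1, z1 = xz_point[0] - datum_xz[0], xz_point[1] - datum_xz[1]
    x2, y1 = xy_point[0] - datum_xy[0], xy_point[1] - datum_xy[1]
    if abs(x1 - x2) > x_tolerance:
        return None  # views disagree on x: not the same object point
    return ((x1 + x2) / 2.0, y1, z1)

print(derive_point((120.0, 40.0), (121.0, 85.0)))  # (120.5, 85.0, 40.0)
```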
Fig. 3 shows that the multiple-touch spatial sensor 302 can be used in many applications to replace the mouse as an input device 301, as it is capable of producing x, y and z points representing true object space coordinates. The output of the multiple-touch spatial sensor is connected to a PC through a PC I/O interface, including USB, parallel or serial ports.
One example of using the multiple-touch sensor 402 as an input device is shown in Fig. 4. Stereoscopic or volumetric display devices 401 always require three coordinate inputs in order to fully address every single point in the display, and the multiple-touch spatial sensor is capable of providing this. Whenever the user points a finger within the volumetric region 403, the three-dimensional cursor 404 points at the corresponding position on the stereoscopic display device.
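One way such a cursor mapping could work is a simple normalization of the fingertip position from the sensing volume to display coordinates; the region bounds and display size below are illustrative assumptions.

```python
# Sketch of mapping a fingertip inside the volumetric region 403 to the
# three-dimensional cursor 404 on a stereoscopic/volumetric display.
def to_cursor(point, region_min, region_max, display_size):
    """Normalize an (x, y, z) point in the sensing volume to display units."""
    return tuple(
        (p - lo) / (hi - lo) * d
        for p, lo, hi, d in zip(point, region_min, region_max, display_size)
    )

# A fingertip at the centre of a 300 mm cube maps to the display centre.
print(to_cursor((150, 150, 150), (0, 0, 0), (300, 300, 300), (1024, 768, 256)))
# -> (512.0, 384.0, 128.0)
```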
Another example is shown in Fig. 5, where the object in the stereoscopic or volumetric display is controlled through a number of geometric transform methods, for example zooming 501, translation 502 and rotation 503. Rotation 503 typically requires two points to perform the action: the fixation point and the rotation point. With a two-dimensional input device, the rotation action is performed in two steps: the first touch sets the fixation point, and the same point then rotates the object. With the multiple-touch spatial sensor, this can be performed in one go: one finger fixes the fixation point while another finger rotates the object.
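The one-gesture rotation can be sketched as follows: one fingertip holds the fixation point while the second sweeps around it, and the swept angle is applied to the object. The point layout and names are illustrative assumptions; angle wrap-around is not handled in this sketch.

```python
import math

def rotation_angle(fixation, start, current):
    """Angle in radians swept by the rotating fingertip about the fixation point."""
    a0 = math.atan2(start[1] - fixation[1], start[0] - fixation[0])
    a1 = math.atan2(current[1] - fixation[1], current[0] - fixation[0])
    return a1 - a0

# The rotating finger sweeps a quarter turn around a fixation point at the origin.
print(math.degrees(rotation_angle((0, 0), (1, 0), (0, 1))))  # 90.0
```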
Furthermore, the multiple-touch contactless sensing is extended to a three-dimensional gesture recognition method incorporating object classifiers to recognize three-dimensional gestures from both camera views. The software is trained with predefined gestures from the two camera views, and the input images from both camera views are fed into two (2) object classifiers for multidimensional gesture recognition.
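A minimal sketch of such two-view recognition, assuming two pre-trained scikit-learn-style classifiers (one per camera view) and simple score averaging as the fusion rule; the disclosure does not specify a classifier type or fusion method.

```python
import numpy as np

# One classifier per camera view; per-view class scores are fused into a
# single gesture decision. Both classifiers are assumed to share the same
# label ordering. The classifiers here are stand-ins, not the disclosure's.
def recognize_gesture(view_xz_img, view_xy_img, clf_xz, clf_xy, labels):
    """Fuse per-view class probabilities from the two object classifiers."""
    p_xz = clf_xz.predict_proba(view_xz_img.reshape(1, -1))[0]
    p_xy = clf_xy.predict_proba(view_xy_img.reshape(1, -1))[0]
    fused = (p_xz + p_xy) / 2.0  # simple average fusion across views
    return labels[int(np.argmax(fused))]
```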

Claims

1. An apparatus for multiple-touch three-dimensional contactless control for spatial sensing comprising:
a first camera (101) positioned at the XZ-plane capturing image data for the (x, z) coordinate with a field of view (103) and a datum point (105) for the field of view (103);
a second camera (102) positioned at the XY-plane capturing image data for the (x, y) coordinate with a field of view (104) and a datum point (106) for the field of view (104);
a register for registering the spatial directions as sensed by the spatial sensors;
a data processing unit; and
a computer for computing object point derivation and blob analysis.
2. A method for multiple-touch three-dimensional contactless control for spatial sensing comprising the steps of:
positioning two cameras (101, 102) having spatial sensors to capture an object (107) in the form of image data;
capturing image data of the object (107) by the cameras (101, 102);
transferring the captured image data of the object (107) through background and foreground segmentation using an image processing function;
determining spatial positions of each of the image data of the object (107) and deriving a three-dimensional spatial position of the point; and processing the captured image data using a blob analysis method.
3. A method according to claim 2, wherein the overlapping field of view of the spatial sensors forms a volumetric grid to determine the spatial sensing for the object (107) falling within the volume.
4. A method according to claim 2, wherein camera (101) is positioned at the XZ-plane to capture image data for the (x, z) coordinate and camera (102) is positioned at the XY-plane to capture image data for the (x, y) coordinate.
5. A method according to claim 2, wherein the object (107) has to be within a field of view (103) of first camera (101) and the field of view (104) of the second camera (102).
6. A method according to claim 2, wherein the field of view (103, 104) is adjustable through distance or focal length of the cameras (101, 102).
7. A method according to claim 5, wherein the object (107) can be a single fingertip or multiple fingertips.
8. A method according to claim 7, wherein the point coordinates of the object (107) are determined by referencing datum point (105) as a starting point position for camera (101) and datum point (106) as a starting point position for camera (102), which further includes:
determining the z1 position by referencing datum point (105);
determining the x1 position by referencing datum point (105);
determining the y1 position from datum point (105) or datum point (106); and
matching XZ-plane coordinates and XY-plane coordinates.
9. A method according to claim 2, wherein the captured image data of the object (107) is transferred through background and foreground segmentation using an image processing function to retrieve a touch blob for the XZ-plane and the XY-plane.
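As an illustration of the matching step recited in claim 8 when multiple fingertips are present, the following sketch pairs XZ-plane blobs with XY-plane blobs by their shared x component; the greedy nearest-x pairing and the tolerance are illustrative assumptions.

```python
# Pair (x, z) blobs from the XZ view with (x, y) blobs from the XY view
# into (x, y, z) fingertip points, matching on the shared x component.
def match_planes(xz_points, xy_points, x_tolerance=5.0):
    remaining = list(xy_points)
    matched = []
    for (x1, z) in sorted(xz_points):
        # Choose the XY blob whose x lies closest to this XZ blob's x.
        best = min(remaining, key=lambda p: abs(p[0] - x1), default=None)
        if best is not None and abs(best[0] - x1) <= x_tolerance:
            matched.append(((x1 + best[0]) / 2.0, best[1], z))
            remaining.remove(best)
    return matched

# Two fingertips seen in both views.
print(match_planes([(100, 40), (200, 55)], [(101, 80), (199, 90)]))
# -> [(100.5, 80, 40), (199.5, 90, 55)]
```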
PCT/MY2008/000164 2007-11-23 2008-11-24 Apparatus and method for multiple-touch spatial sensors WO2009066998A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI20072085A MY147059A (en) 2007-11-23 2007-11-23 Apparatus and method for multiple-touch spatial sensors
MYPI20072085 2007-11-23

Publications (2)

Publication Number Publication Date
WO2009066998A2 (en) 2009-05-28
WO2009066998A3 WO2009066998A3 (en) 2009-10-15

Family

ID=40668031

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2008/000164 WO2009066998A2 (en) 2007-11-23 2008-11-24 Apparatus and method for multiple-touch spatial sensors

Country Status (2)

Country Link
MY (1) MY147059A (en)
WO (1) WO2009066998A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2947348A1 (en) * 2009-06-25 2010-12-31 Immersion Object's i.e. car, three-dimensional representation visualizing and modifying device, has wall comprising face oriented toward user to reflect user's image and identification unit and visualize representation of object
CN107945172A (en) * 2017-12-08 2018-04-20 博众精工科技股份有限公司 A kind of character detection method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0913790A1 (en) * 1997-10-29 1999-05-06 Takenaka Corporation Hand pointing apparatus
US20040108990A1 (en) * 2001-01-08 2004-06-10 Klony Lieberman Data input device
KR20070061153A (en) * 2005-12-08 2007-06-13 한국전자통신연구원 3d input apparatus by hand tracking using multiple cameras and its method
US7277599B2 (en) * 2002-09-23 2007-10-02 Regents Of The University Of Minnesota System and method for three-dimensional video imaging using a single camera


Also Published As

Publication number Publication date
MY147059A (en) 2012-10-15
WO2009066998A3 (en) 2009-10-15

Similar Documents

Publication Publication Date Title
EP3629129A1 (en) Method and apparatus of interactive display based on gesture recognition
Mayer et al. Estimating the finger orientation on capacitive touchscreens using convolutional neural networks
CN102902473B (en) The mode sensitive of touch data is handled
JP5658500B2 (en) Information processing apparatus and control method thereof
US10082935B2 (en) Virtual tools for use with touch-sensitive surfaces
KR101890459B1 (en) Method and system for responding to user's selection gesture of object displayed in three dimensions
CN107710111A (en) It is determined that for the angle of pitch close to sensitive interaction
CN103797446A (en) Method for detecting motion of input body and input device using same
CN103809733A (en) Man-machine interactive system and method
CN102163108B (en) Method and device for identifying multiple touch points
WO2012054060A1 (en) Evaluating an input relative to a display
CN102306053B (en) Virtual touch screen-based man-machine interaction method and device and electronic equipment
JP6487642B2 (en) A method of detecting a finger shape, a program thereof, a storage medium of the program, and a system for detecting a shape of a finger.
WO2011146070A1 (en) System and method for reporting data in a computer vision system
WO2010082226A1 (en) Pointing device, graphic interface and process implementing the said device
JP2012203563A (en) Operation input detection device using touch panel
CN112363629A (en) Novel non-contact man-machine interaction method and system
Obukhov et al. Organization of three-dimensional gesture control based on machine vision and learning technologies
WO2009066998A2 (en) Apparatus and method for multiple-touch spatial sensors
KR101406855B1 (en) Computer system using Multi-dimensional input device
Kim et al. Visual multi-touch air interface for barehanded users by skeleton models of hand regions
Schlattmann et al. Markerless 4 gestures 6 DOF real‐time visual tracking of the human hand with automatic initialization
Mallik et al. Virtual Keyboard: A Real-Time Hand Gesture Recognition-Based Character Input System Using LSTM and Mediapipe Holistic.
Cheng et al. Fingertip-based interactive projector–camera system
KR101171239B1 (en) Non-touch data input and operating method using image processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08852074

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08852074

Country of ref document: EP

Kind code of ref document: A2