IL266179A - A method and system for detecting the location of a pointing finger - Google Patents
- Publication number: IL266179A
- Authority: IL (Israel)
- Prior art keywords: user, depth, interaction surface, contour, point
Description
38362/18

A METHOD AND SYSTEM FOR DETECTING THE LOCATION OF A POINTING FINGER

Field of the Invention

The present invention relates to the field of computerized input devices. More particularly, the invention relates to a method and system for rapidly detecting the location of the tip of a pointing finger with respect to an interaction surface.
Background of the Invention

Many applications, such as computer games and graphics, require tracking the user's operations in order to detect when the user intends to provide an input. One of the most convenient input devices is a touch pad. Conventional touch pads have an array of sensors for detecting touch operations, along with the estimated touch location and timing. The location and timing are then used to determine what the user's input was.
Other advanced techniques use touchpads which are simply touch surfaces, such as rectangular planes, without inherent touch sensors. Such a touch plane may be, for example, the surface of a table, which may have predetermined borders for determining locations on the surface. This type of touchpad uses cameras, such as video or still cameras, for tracking the gestures of the user's hand(s). The images acquired by these cameras are processed and analyzed in order to identify the user's touch gestures, which are likely to be finger touches.
However, the above conventional techniques require complex processing and analysis, which take more time and may introduce latency. In addition, in order to obtain accurate results, the location of the user's fingertip must be estimated at high rates, exceeding 30 times per second. A system which can rapidly detect the location of the tip of a pointing finger is therefore required.

It is therefore an object of the present invention to provide a method and system for rapidly detecting the location of the tip of a pointing finger with respect to an interaction surface.
It is another object of the present invention to provide a method and system for rapidly detecting the location of the tip of a pointing finger with respect to an interaction surface, with high accuracy, so as to estimate the point on the surface which is likely to be touched.
Other objects and advantages of the invention will become apparent as the description proceeds.
Summary of the Invention

A method for rapidly and accurately detecting the location of the tip of a pointing finger with respect to an interaction surface, comprising:
a) periodically acquiring an image of the area above the interaction surface, by at least one depth camera located above the interaction surface, while assuming that the maximal distance of the camera above the interaction surface is predetermined;
b) creating a "background model", being an image in the form of a depth frame that is updated over time and looks like the interaction area without any user;
c) identifying new objects in the acquired (current) depth frame, such that only objects that belong to the user's hands are kept, and the rest of the objects are filtered out;
d) using thresholding to find where there are parts of the user's body that are within some range of depths;
e) converting the acquired image to a binary image;
f) obtaining a contour of the user's hand by interception of a horizontal plane above the interaction surface with blobs in the binary image;
g) finding the depth of each point on the contour; and
h) evaluating the location (x-y coordinates) of the tip of the user's pointing finger with respect to the interaction surface, using the depth of each point on the contour.
In one aspect, the location of the tip of the pointing finger is found by:
a) finding the tallest point of the contour which is pointing downwards, where every point on the contour represents a voxel;
b) finding all points within half the total contour's radius from the tallest point;
c) finding the centroid of these points;
d) finding the farthest point from the hand base;
e) extending the pointing finger base away from the hand base, by a predetermined factor.
In one aspect, objects that belong to the user's hands are kept according to their size and mobility.

The centroid may be a point in the middle of the user's hand, closer to the wrist.
In one aspect, the toe of the user's foot is found by using the user's ankle instead of the wrist.

A system for rapidly and accurately detecting the location of the tip of a pointing finger with respect to an interaction surface, comprising:
a) at least one depth camera located above the interaction surface, for periodically acquiring an image of the area above the interaction surface, while assuming that the maximal distance of the camera above the interaction surface is predetermined; and
b) at least one processor, adapted to:
b.1) create a "background model", being an image in the form of a depth frame that is updated over time and looks like the interaction area without any user;
b.2) identify new objects in the acquired (current) depth frame, such that only objects that belong to the user's hands are kept, and the rest of the objects are filtered out;
b.3) use thresholding to find where there are parts of the user's body that are within some range of depths;
b.4) convert the acquired image to a binary image;
b.5) obtain a contour of the user's hand by interception of a horizontal plane above the interaction surface with blobs in the binary image;
b.6) find the depth of each point on the contour; and
b.7) evaluate the location (x-y coordinates) of the tip of the user's pointing finger with respect to the interaction surface, using the depth of each point on the contour.
Brief Description of the Drawings

The above and other characteristics and advantages of the invention will be better understood through the following illustrative and non-limitative detailed description of preferred embodiments thereof, with reference to the appended drawings, wherein:
- Fig. 1 illustrates the process of finding the tip of the pointing finger with high accuracy, according to an embodiment of the invention;
- Fig. 2 illustrates a setup for obtaining the accurate location of the tip of the pointing finger, according to an embodiment of the invention; and
- Fig. 3 illustrates a binary image with blobs that resemble the user's hand.
Detailed Description of Preferred Embodiments

The present invention proposes a method and system for rapidly and accurately detecting the location of the tip of a pointing finger with respect to an interaction surface. The goal is to rapidly find the x-y coordinates of the user's pointing fingertip with respect to an interaction surface, and/or where he is pointing to, using very simple means. Since the finger points down onto the table (or another interaction surface), the wrist will almost always be the tallest part of the hand, while the tip of the pointing finger is assumed to be very close to the interaction surface. In most practical cases, the pointing finger will most likely be the farthest point from the wrist. These two assumptions allow finding the pointing finger location quickly and accurately. Also, it is possible to assume that a horizontal plane at a predetermined height above the interaction surface (e.g., 20 cm) will intercept only hands or stationary objects that are laid on the interaction surface.
The system and method proposed by the present invention use a single 3D depth camera which samples the area above the interaction surface, in order to find all pointing fingers which are touching a designated interaction area on the interaction surface (e.g., a table).
The method proposed by the present invention assumes that the maximal and minimal distance of the camera above the interaction surface is predetermined, and that the camera is roughly parallel to the surface and directly above it (i.e., there is no significant offset with respect to the center of the surface).

Fig. 1 illustrates the process of finding the tip of the pointing finger with high accuracy, according to an embodiment of the invention. Fig. 2 illustrates a setup for obtaining the accurate location of the tip of the pointing finger, according to an embodiment of the invention.
Preprocessing: Looking now at Fig. 1, at the first step 100, the system 10 (illustrated in Fig. 2) acquires a depth frame from the 3D depth camera 20 (illustrated in Fig. 2), which samples the area 21 above the interaction surface 22. At the next step 101, the system creates a "background model", which is an image that is updated over time and looks like the interaction area without any people. At the next step 102, the system identifies new objects in the acquired (current) depth frame, such that at the next step 103, only objects that belong to the user's hands are kept (according to their size and mobility), and the rest of the objects are filtered out (the user's hands are considered to be the parts of the image that differ from the background). This method is commonly referred to as background subtraction. At the next step 104, basic thresholding is used to find where there are parts of the user's body that are within some range of depths (i.e., between a minimum and maximum distance above the background depth), as shown in Fig. 2.
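The background-model update and depth-range thresholding described above can be sketched as follows. This is an illustrative reconstruction in Python with NumPy, not the patented implementation: the smoothing factor `alpha` and the height range `min_h`/`max_h` are assumed values, since the text only says the model is "updated over time" and that body parts lie "between a minimum and maximum distance above the background depth".

```python
import numpy as np

def update_background(background, frame, alpha=0.02):
    """Running-average background model of the empty interaction area.
    alpha (assumed) controls how quickly the model adapts over time."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, min_h=0.01, max_h=0.20):
    """Basic thresholding: keep pixels whose depth differs from the
    background by a height within [min_h, max_h] metres above the
    surface. The camera looks down, so smaller depth = taller object."""
    height = background - frame
    return (height > min_h) & (height < max_h)
```

With a background at a constant 1.0 m and a single pixel raised 10 cm above it, only that pixel survives the thresholding.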
This thresholding converts the image to a binary image 201, such that it returns pixels with value "1" (which may be black) when there is some object in the specified range, and returns pixels with value "0" (which may be white) otherwise. Connected pixels with value "1" are called blobs (or connected components). Fig. 2 shows two blobs 23-24, which represent contours of the user's feet 25-26, respectively.
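Extracting blobs from the binary image is standard connected-component labeling; a minimal sketch follows. This is not the patented implementation: the `min_area` size filter is an assumption standing in for keeping objects "according to their size and mobility", and 4-connectivity is an assumed choice.

```python
import numpy as np
from collections import deque

def extract_blobs(binary, min_area=50):
    """Label 4-connected components ('blobs') in a boolean mask and
    keep only those large enough to plausibly be a hand or foot."""
    binary = np.asarray(binary, bool)
    visited = np.zeros_like(binary)
    rows, cols = binary.shape
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] and not visited[r, c]:
                # Breadth-first flood fill collects one blob's pixels.
                blob, queue = [], deque([(r, c)])
                visited[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and binary[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if len(blob) >= min_area:
                    blobs.append(blob)
    return blobs
```

A 100-pixel square blob passes the size filter, while an isolated single pixel (sensor noise) is discarded.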
In the case of the user's hands, the blobs may resemble hands, as illustrated in Fig. 3. For a given blob 30, the method proposed by the present invention finds a contour line 31 (the outline), which is obtained by interception of the horizontal plane with the user's hand.
This contour allows evaluation of the location of the user's hand with respect to the interaction surface (x-y coordinates). Then, the depth of each point on that contour may be found as well (since the binary image is obtained from a depth image, the depth at each pixel in the blob is also known). The tip of the pointing finger is then found with high accuracy (step 104), for interaction of the user's hand with the surface, according to the following steps:
1) At the first step, the tallest point of the contour (which is very likely to be the wrist, or at least far away from the pointing finger) which is pointing downwards is found. Every point on the contour represents a voxel (a unit of graphic information that defines a point in three-dimensional space).
2) At the next step, all points within half the total contour's radius from the tallest point (roughly, the half of the points that are closest to the tallest point) are found.
3) At the next step, the centroid of these points is found. The centroid should be some point (called the "hand base") in the middle of the hand, but a bit closer to the wrist.
4) At the next step, the farthest point from the hand base is found (since the 3-D depth camera provides estimated heights). This is the point on the contour (called the "finger base") that should be the closest to the pointing finger.
5) Typically, the depth map is not extremely accurate, and so the fingers are not fully recognized. Instead, it is possible to see roughly up to the first joint of each finger. To compensate for the missing end of the finger, at the next step the pointing finger base is extended away from the hand base (in the opposite direction) by some empirically found factor.
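The five steps above can be sketched as a single function over the contour points and their depths. This is an illustrative reconstruction, not the patented implementation: the contour radius is taken here as the largest distance from the contour's mean point, and the extension factor 1.25 is a placeholder for the "empirically found factor" the text leaves unspecified.

```python
import numpy as np

def fingertip_xy(contour_xy, depth, extend=1.25):
    """Estimate the pointing fingertip's x-y location from a hand contour.
    contour_xy: (N, 2) x-y points on the contour; depth: (N,) height of
    each contour point above the interaction surface."""
    contour_xy = np.asarray(contour_xy, float)
    depth = np.asarray(depth, float)
    # 1) Tallest contour point (very likely the wrist).
    tallest = contour_xy[np.argmax(depth)]
    # 2) All points within half the contour's radius of the tallest point.
    center = contour_xy.mean(axis=0)
    radius = np.linalg.norm(contour_xy - center, axis=1).max()
    near = contour_xy[np.linalg.norm(contour_xy - tallest, axis=1) <= radius / 2.0]
    # 3) Centroid of those points -> the "hand base".
    hand_base = near.mean(axis=0)
    # 4) Farthest contour point from the hand base -> the "finger base".
    finger_base = contour_xy[np.argmax(np.linalg.norm(contour_xy - hand_base, axis=1))]
    # 5) Extend past the finger base, away from the hand base, to
    #    compensate for the fingertip missing from the depth map.
    return hand_base + extend * (finger_base - hand_base)
```

On a toy rectangular contour with the tall (wrist-side) points at one end, the estimate lands beyond the farthest contour point, in the direction away from the hand base.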
Using this process allows obtaining a very accurate and stable estimate of the pointing finger edge in about a millisecond or less, per hand.

Under the same assumptions, the system proposed by the present invention can also find the toe of the user's foot, using the ankle instead of the wrist. In this case, no extension of the foot is needed, since the foot is typically fully visible in the depth frame.
The above examples and description have of course been provided only for the purpose of illustration, and are not intended to limit the invention in any way. As will be appreciated by the skilled person, the invention can be carried out in a great variety of ways, employing more than one technique from those described above, all without exceeding the scope of the invention.
Claims (6)
1. A method for rapidly and accurately detecting the location of the tip of a pointing finger with respect to an interaction surface, comprising:
a) periodically acquiring an image of the area above the interaction surface, by at least one depth camera located above said interaction surface, while assuming that the maximal distance of the camera above the interaction surface is predetermined;
b) creating a "background model", being an image in the form of a depth frame that is updated over time and looks like the interaction area without any user;
c) identifying new objects in the acquired (current) depth frame, such that only objects that belong to the user's hands are kept, and the rest of the objects are filtered out;
d) using thresholding to find where there are parts of the user's body that are within some range of depths;
e) converting said acquired image to a binary image;
f) obtaining a contour of the user's hand by interception of a horizontal plane above said interaction surface with blobs in said binary image;
g) finding the depth of each point on said contour; and
h) evaluating the location (x-y coordinates) of the tip of the user's pointing finger with respect to said interaction surface, using the depth of each point on said contour.
2. A method according to claim 1, wherein the location of the tip of the pointing finger is found by:
a) finding the tallest point of the contour which is pointing downwards, where every point on the contour represents a voxel;
b) finding all points within half the total contour's radius from said tallest point;
c) finding the centroid of these points;
d) finding the farthest point from the hand base;
e) extending the pointing finger base away from the hand base, by a predetermined factor.
3. A method according to claim 1, wherein objects that belong to the user's hands are kept according to their size and mobility.
4. A method according to claim 1, wherein the centroid is a point in the middle of the user’s hand, being closer to the wrist.
5. A method according to claim 1, wherein the toe of the user's foot is found by using the user's ankle instead of the wrist.
6. A system for rapidly and accurately detecting the location of the tip of a pointing finger with respect to an interaction surface, comprising:
a) at least one depth camera located above said interaction surface, for periodically acquiring an image of the area above the interaction surface, while assuming that the maximal distance of the camera above the interaction surface is predetermined; and
b) at least one processor, adapted to:
b.1) create a "background model", being an image in the form of a depth frame that is updated over time and looks like the interaction area without any user;
b.2) identify new objects in the acquired (current) depth frame, such that only objects that belong to the user's hands are kept, and the rest of the objects are filtered out;
b.3) use thresholding to find where there are parts of the user's body that are within some range of depths;
b.4) convert said acquired image to a binary image;
b.5) obtain a contour of the user's hand by interception of a horizontal plane above said interaction surface with blobs in said binary image;
b.6) find the depth of each point on said contour; and
b.7) evaluate the location (x-y coordinates) of the tip of the user's pointing finger with respect to said interaction surface, using the depth of each point on said contour.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IL266179A (en) | 2019-04-21 | 2019-04-21 | A method and system for detecting the location of a pointing finger |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IL266179A (en) | 2019-04-21 | 2019-04-21 | A method and system for detecting the location of a pointing finger |
Publications (1)
Publication Number | Publication Date |
---|---|
IL266179A (en) | 2020-10-28 |
Family
ID=67734397
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
IL266179A (en) | | 2019-04-21 | 2019-04-21 |
Country Status (1)
Country | Link |
---|---|
IL (1) | IL266179A (en) |
- 2019-04-21: IL patent application IL266179A filed (status unknown)