EP1579304A1 - Handzeigegerät - Google Patents

Handzeigegerät (Hand pointing apparatus)

Info

Publication number
EP1579304A1
Authority
EP
European Patent Office
Prior art keywords
user
pointing
hand
cameras
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02796729A
Other languages
English (en)
French (fr)
Inventor
Alberto Del Bimbo
Alessandro Valli
Carlo Colombo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Universita degli Studi di Firenze
Original Assignee
Universita degli Studi di Firenze
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Universita degli Studi di Firenze filed Critical Universita degli Studi di Firenze
Publication of EP1579304A1
Current legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected

Definitions

  • This invention refers to a hand pointing detection apparatus for determining a specific location pointed at by the user.
  • Human-machine interfaces enabling the transfer of information between the user and the system represent a field of growing importance.
  • Human-machine interfaces enable bi-directional communication; on the one side input devices allow users to send commands to the system, on the other side output devices provide users with both responses to commands and feedback about user actions.
  • Keyboards, mice and touch screens are typical input devices, while displays, loudspeakers and printers are output devices.
  • An important drawback of the most common input devices stems from the physical contact between the user and some of their mechanical parts, which ends up wearing the device out.
  • These kinds of input devices also require the user to be close to the PC, making it difficult to input data when distant from the computer.
  • A certain degree of training and familiarity with the device is required of the user for efficient use of the device itself.
  • Vision-based hand pointing systems appear to be particularly promising. These systems are typically based on a certain number of cameras, a video projector, a screen and a data processing system such as a personal computer. The cameras are located so as to have the user and the screen in view; the system output is displayed by the projector onto the screen, whose locations can be pointed at by the user. The presence of the screen is not strictly necessary: the user's pointing action can be detected even when it is directed at objects located in a closed space (e.g. appliances in a room) or in an open one (e.g. the landscape in front of the user).
  • The present invention overcomes the above drawbacks by introducing a method and an apparatus for the detection of the hand pointing of a user, based on standard, low-cost hardware equipment.
  • This method and apparatus is independent of the number of cameras used, the minimum number being two, and no constraints are set on camera placement, save that the user must be in view of at least two cameras.
  • The user is allowed to move freely while pointing, and the system is independent of environmental changes and user position.
  • Users of the apparatus described in the present invention are not required to calibrate the system before interacting with it, since self-calibration at run time ensures adaptation to user characteristics such as physical dimensions and pointing style.
  • Fig. 1 is an overview of a typical embodiment of the present invention.
  • Fig. 2 shows how the location of the point P pointed at by the user is calculated as the intersection of the screen plane and the line L described by the user's pointing arm.
  • Fig. 3 is a block diagram of the algorithm followed by the data processing unit to detect the hand pointing action.
  • Fig. 4 is the flowchart of the "Background Learning" step of the algorithm.
  • Fig. 5 is the flowchart of the "Calibration" step of the algorithm.
  • Fig. 6 is the flowchart of the "User Detection" step of the algorithm.
  • Fig. 7 is the flowchart of the "Lighting Adaptation" step of the algorithm.
  • Fig. 8 is the flowchart of the "User Localization" step of the algorithm.
  • Fig. 9 is the flowchart of the "Re-mapping" step of the algorithm.
  • Fig. 10 is the flowchart of the "Selection" step of the algorithm.
  • Fig. 11 is the flowchart of the "Adaptation" step of the algorithm.
  • A preferred embodiment of the present invention is depicted in Fig. 1, where we can see the system's components:
  • a personal computer (23) that processes the data received from the cameras and turns them into interaction parameters and then into commands for its graphical interface.
  • An image projector (24) driven by the graphical interface of the personal computer.
  • The projector illuminates the screen (22) pointed at by the user (21).
  • Graphical interface operation is based on both spatial and temporal analysis of the user's actions.
  • The screen location P currently pointed at by the user is continuously evaluated as the intersection of the pointing direction with the screen plane. From each acquisition of the system's cameras, the positions of the head and of the pointing arm of the user are detected and input to the next processing phase, which is based on a stereo triangulation algorithm.
  • The system also monitors persistency: when point P is detected within a limited portion of the screen for an appropriate amount of time, a discrete event similar to a mouse click, i.e. a selection action, is generated for the interface.
  • The overall behavior of the interaction system is that of a one-button mouse, whose "drags" and "clicks" reflect respectively changes and fixations of interest, as communicated by the user through his natural hand pointing actions.
  • The operation of the hand pointing system described in the present invention can be sketched as in Fig. 3. After the cameras have acquired the images of the user, said images are transferred to the PC, which processes them following three distinct operational steps: Initialization (200), Feature Extraction (201) and Runtime (203). Feature Extraction is a procedure used by both of the other two phases, since it is the one that determines where, in the images, the head and the arm of the user are located.
  • The Initialization step is composed of two sub-steps: a Background Learning phase (A) and a Calibration phase (B).
  • The Background Learning phase is described in Fig. 4.
  • A number N of frames is chosen for background modeling.
  • The N frames acquired by the cameras are input to the PC (100); then, for each chromatic channel, the mean value and the variance are calculated at each pixel (101).
  • In this way the mean value and the variance of the three color channels at each pixel of the background images are calculated (103) (a hedged sketch of this background model, and of the thresholding that uses it, is given after this list).
  • The Initialization phase then proceeds with the Calibration step (Fig. 3 - B), which will be described later on.
  • The next operational step is called Feature Extraction (201), at the end of which the system has acquired the information regarding the possible presence of a user in the cameras' field of view and his possible pointing action.
  • The Feature Extraction starts with the User Detection phase (Fig. 6).
  • The current frame is acquired by the cameras (100); then the previously learned background image is subtracted from the acquired frame (104).
  • The difference image (current frame minus background image) is evaluated against the background variance (105); the calculated value is then compared to an appropriate threshold value X to decide whether the pixel under consideration belongs to the background (calculated difference less than X) or to the foreground (calculated difference greater than X).
  • The system then updates its parameters based on the light level of the current frame acquired by the cameras.
  • The statistics of the background pixels are thus recalculated in terms of mean value and variance (107).
  • The system also updates the thresholds used for the image binarization during the previous step.
  • The number of isolated foreground pixels is computed (108) in order to estimate the noise level of the camera's CCD (charge-coupled device) sensor and consequently update the threshold values (109) used to binarize the image, so as to dynamically adjust system sensitivity (a hedged sketch of this lighting adaptation is given after this list).
  • The updated parameters will be used by the system to compute the image binarization of the next acquired frame, while the binary mask used at the previous acquisition cycle is refined by topological filters (Fig. 6 - 110).
  • The user's presence is then classified by shape and finally detected (112).
  • The User Localization step (Fig. 8) is then started and carried out through two different, parallel processes.
  • In the first process, the silhouette of the user is estimated by detecting the positions of the user's head and arm (115), using the previously computed binary mask and geometrical heuristics.
  • In the second process, the system analyzes the color of the detected user shape to determine the zones of exposed skin within it. This process runs through several sub-steps: first, the foreground is split into skin and non-skin parts (116) by applying the previously computed binary mask and a skin color model to the image acquired by the cameras.
  • The detected skin parts are aggregated into connected blobs (117), and the user's head and arm are again identified by means of geometrical heuristics (115) (a hedged sketch of this skin-based localization is given after this list).
  • The results of the above estimation are filtered by a smoothing filter (118) and a predictor (119) to reach the final estimate of the color-based user localization step.
  • The shape-based and color-based estimates are then combined (120), and the coefficients of the image line in each acquired image are finally determined (121), where the image line is the line ideally connecting the head and the hand of the user and represents the pointing direction.
  • Once the pointing direction is determined for every single frame, the next step is the Runtime processing (Fig. 3 - 202).
  • The first sub-step of this phase is called Remapping (Fig. 9).
  • The system described in the present invention determines, in the way described above, as many image lines as the cameras employed (l_i,dx; l_i,sx). Each of these lines, together with the point at which the corresponding camera is located (C_dx, C_sx), determines a plane in real 3D space (Π_p,dx; Π_p,sx). Each of these planes in turn determines a screen line (l_p,dx; l_p,sx) as its intersection with the plane of the screen (Π) pointed at by the user. The point to be determined is thus the intersection P of these screen lines.
  • The Remapping phase starts with the computation of the screen lines described above (122), one per iteration (123). Once the screen lines are all determined, the location the user is pointing at is computed as the pseudo-intersection of the screen lines (124) (a hedged geometric sketch of this remapping is given after this list).
  • After remapping, the system enters the Selection phase (Fig. 3 - G). With reference to Fig. 10, the screen point detected at the end of the previous phase is recorded (125); its position is then periodically checked against a certain radius R (126, 127, 128 and 129) to determine whether the point maintains the same position for a time recognized as long enough to indicate a pointing action by the user, in which case the system performs a "clicking" action in response to the persisting pointing (a hedged sketch of this dwell-based selection is given after this list).
  • The current screen point (130) represents the input datum for the following Adaptation phase, by which the system is trained to work with different users.
  • The system calibration parameters are recomputed by optimization (132).
  • The Calibration phase is shown in detail in Fig. 5.
  • The PC drives the projector to show on the screen the calibration points (133) that have to be pointed at by the user; the image line coefficients coming from the User Localization phase are then recorded (134), and these steps are repeated for each of the K points chosen for the calibration (135).
  • A new set of optimized system calibration parameters is finally estimated (136) (a hedged sketch of such an optimization is given after this list).
  • The above-described system can be implemented in the presence of any kind of actuator and any kind of interface driven by the data processing unit.
  • For example, the present invention can be applied to home automation systems, where the target of the user's pointing action might be a set of appliances and the computer interface might simply be a control board for switching the appliances on and off.
  • Alternatively, the target of the user's pointing action could be the landscape in front of the user.
  • The computer interface, in this case, can simply be a driver for an audio playback system providing, for example, information regarding the monuments and locations pointed at by the user.
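
The sketches that follow are illustrative only: they are not taken from the patent text, and every function name, parameter value and data layout in them is an assumption made for the sake of example.

First, the Background Learning statistics (100, 101, 103) and the binarization of the difference image against the threshold X (104, 105) might be sketched as follows in Python/NumPy; normalizing the difference by the background standard deviation is one plausible reading of the description, not the patented formula.

```python
import numpy as np

def learn_background(frames):
    """Per-pixel, per-channel mean and variance over N background frames
    (the statistics computed in steps 100, 101 and 103)."""
    stack = np.asarray(frames, dtype=np.float64)   # shape (N, H, W, 3) assumed
    mean = stack.mean(axis=0)                      # (H, W, 3) mean value per pixel
    var = stack.var(axis=0) + 1e-6                 # (H, W, 3) variance (epsilon avoids /0)
    return mean, var

def binarize_foreground(frame, mean, var, x_threshold=3.0):
    """Classify each pixel as foreground (True) or background (False).

    The difference image (current frame minus background mean, step 104)
    is normalized by the background standard deviation (105) and compared
    with the threshold X; the exact normalization is an assumption."""
    diff = np.abs(frame.astype(np.float64) - mean) / np.sqrt(var)
    # A pixel is foreground if any of its three color channels exceeds X.
    return (diff > x_threshold).any(axis=-1)
```

The resulting boolean mask is what the description then refines with topological filters (110) before classifying the user's shape (112).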
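
The Lighting Adaptation bullets (107, 108, 109) recompute the background statistics and adjust the binarization threshold from the number of isolated foreground pixels. A minimal assumed version, with a running-average background update, an isolated-pixel count as noise proxy and a multiplicative threshold update, could be:

```python
import numpy as np

def update_background(mean, var, frame, fg_mask, alpha=0.05):
    """Recompute the background statistics on pixels currently classified
    as background (107), using an exponential running average (assumed form)."""
    bg = ~fg_mask
    f = frame.astype(np.float64)
    mean[bg] = (1 - alpha) * mean[bg] + alpha * f[bg]
    var[bg] = (1 - alpha) * var[bg] + alpha * (f[bg] - mean[bg]) ** 2
    return mean, var

def count_isolated_foreground(fg_mask):
    """Count foreground pixels whose 4-neighbourhood is entirely background,
    taken here as a proxy for CCD sensor noise (108)."""
    padded = np.pad(fg_mask, 1, constant_values=False)
    neighbours = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
                  padded[1:-1, :-2] | padded[1:-1, 2:])
    return int(np.count_nonzero(fg_mask & ~neighbours))

def adapt_threshold(x_threshold, fg_mask, target_ratio=1e-3, gain=0.05):
    """Raise or relax the binarization threshold X (109) depending on the
    estimated noise level; target_ratio and gain are illustrative values."""
    noise_ratio = count_isolated_foreground(fg_mask) / fg_mask.size
    if noise_ratio > target_ratio:
        return x_threshold * (1.0 + gain)
    return max(1.0, x_threshold * (1.0 - gain))
```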
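
For the color-based branch of User Localization (116, 117, 115, 121), a rough sketch is given below. The normalized r-g skin box, the blob-size cutoff and the "highest blob is the head, farthest blob is the hand" rule are assumptions standing in for the unspecified skin color model and geometrical heuristics; scipy.ndimage is used for the connected-blob aggregation.

```python
import numpy as np
from scipy import ndimage

def skin_mask(frame, fg_mask):
    """Very rough skin color model in normalized r-g space (116); the box
    bounds are assumptions, since the text only mentions 'a skin color model'."""
    rgb = frame.astype(np.float64) + 1e-6
    s = rgb.sum(axis=-1)
    r, g = rgb[..., 0] / s, rgb[..., 1] / s
    skin = (r > 0.38) & (r < 0.52) & (g > 0.26) & (g < 0.36)
    return skin & fg_mask

def head_and_hand(skin):
    """Aggregate skin pixels into connected blobs (117) and pick head and
    hand with a simple geometric heuristic (115): the highest large blob is
    taken as the head, the large blob farthest from it as the pointing hand."""
    labels, n = ndimage.label(skin)
    idx = list(range(1, n + 1))
    centroids = ndimage.center_of_mass(skin, labels, idx)
    sizes = ndimage.sum(skin, labels, idx)
    big = [c for c, s in zip(centroids, sizes) if s > 50]   # size cutoff assumed
    if len(big) < 2:
        return None
    head = min(big, key=lambda c: c[0])   # smallest row index = highest in image
    hand = max(big, key=lambda c: np.hypot(c[0] - head[0], c[1] - head[1]))
    return head, hand

def image_line(head, hand):
    """Coefficients (a, b, c) of the image line a*x + b*y + c = 0 through the
    head and hand points - the pointing direction in the image (121)."""
    (y1, x1), (y2, x2) = head, hand
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    return a, b, c
```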
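
The Re-mapping step (Fig. 2 and Fig. 9, steps 122-124) reduces to plane and line geometry: each camera's image line is back-projected through the camera centre into a 3D plane, that plane is intersected with the screen plane to obtain a screen line, and the pointed location P is taken as the least-squares pseudo-intersection of the screen lines. The sketch below assumes calibrated 3x4 projection matrices, planes written as homogeneous 4-vectors (n, d) with n·X + d = 0, and lines as point-direction pairs; none of these representational choices comes from the patent text.

```python
import numpy as np

def backproject_line(proj_matrix, line):
    """Plane (homogeneous 4-vector) through the camera centre containing the
    back-projection of the image line 'line' = (a, b, c); proj_matrix is the
    camera's 3x4 projection matrix, assumed known from calibration."""
    return proj_matrix.T @ np.asarray(line, dtype=np.float64)

def plane_intersection(plane_a, plane_b):
    """Intersection line of two non-parallel planes given as 4-vectors (n, d)
    with n.X + d = 0; returns (point_on_line, unit_direction)."""
    n1, d1 = plane_a[:3], plane_a[3]
    n2, d2 = plane_b[:3], plane_b[3]
    direction = np.cross(n1, n2)
    direction /= np.linalg.norm(direction)
    # Minimum-norm point satisfying both plane equations.
    A = np.vstack([n1, n2])
    b = -np.array([d1, d2])
    point = np.linalg.lstsq(A, b, rcond=None)[0]
    return point, direction

def pseudo_intersection(lines):
    """Least-squares point closest to a set of (at least two, non-parallel)
    3D lines [(point, unit_direction), ...] - the pointed location P (124)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in lines:
        proj = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        A += proj
        b += proj @ np.asarray(p, dtype=np.float64)
    return np.linalg.solve(A, b)
```

For two cameras this would be used by intersecting the back-projection of each image line with the screen plane and feeding the two resulting screen lines to pseudo_intersection to obtain P.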
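
The Selection phase (125-130) is essentially dwell detection: a click is fired when the pointed screen location stays within a radius R for long enough. A small stateful helper, with purely illustrative values for R and the dwell time, might look like this:

```python
import time
import numpy as np

class DwellSelector:
    """Generates a 'click' when the pointed screen location stays within a
    radius R for a dwell time T (125-130); R and T are illustrative values."""

    def __init__(self, radius=30.0, dwell_time=1.0):
        self.radius = radius            # screen-space radius R
        self.dwell_time = dwell_time    # seconds of persistence required
        self._anchor = None
        self._since = None

    def update(self, point, now=None):
        """Feed the latest pointed location; returns True when a selection
        (mouse-click-like) event should be fired."""
        now = time.monotonic() if now is None else now
        p = np.asarray(point, dtype=np.float64)
        if self._anchor is None or np.linalg.norm(p - self._anchor) > self.radius:
            # The pointing moved outside the radius: restart the dwell timer.
            self._anchor, self._since = p, now
            return False
        if now - self._since >= self.dwell_time:
            self._anchor, self._since = None, None   # fire once, then re-arm
            return True
        return False
```

Here update() returning True plays the role of the "clicking" action generated for the one-button-mouse style interface.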
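
Finally, the Calibration phase (133-136) estimates the system calibration parameters from the K calibration points shown on screen and the recorded image-line coefficients. The description does not spell out the parameterization, so the sketch below only shows the generic shape of such a fit: a non-linear least-squares refinement of whatever parameter vector drives the remapping, via scipy.optimize.least_squares; the remap callback and its signature are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate(params0, remap, calib_targets, recorded_features):
    """Refine the system calibration parameters by non-linear least squares (136).

    params0           initial guess for the calibration parameter vector
    remap(params, f)  hypothetical callback mapping one recorded feature set
                      (e.g. the image line coefficients from User Localization)
                      to a screen point under the given parameters
    calib_targets     (K, 2) array with the K calibration points shown on
                      screen (133)
    recorded_features list of the K recorded feature sets (134, 135)"""
    targets = np.asarray(calib_targets, dtype=np.float64)

    def residuals(params):
        predicted = np.array([remap(params, f) for f in recorded_features])
        return (predicted - targets).ravel()   # 2K residuals: x and y errors

    result = least_squares(residuals, np.asarray(params0, dtype=np.float64))
    return result.x
```

The Adaptation phase (131, 132) could re-run the same routine at run time, with the screen points selected by the user taking the place of the projected calibration targets.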
EP02796729A 2002-12-23 2002-12-23 Handzeigegerät Withdrawn EP1579304A1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2002/014739 WO2004057450A1 (en) 2002-12-23 2002-12-23 Hand pointing apparatus

Publications (1)

Publication Number Publication Date
EP1579304A1 (de) 2005-09-28

Family

ID=32668686

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02796729A Withdrawn EP1579304A1 (de) 2002-12-23 2002-12-23 Handzeigegerät

Country Status (3)

Country Link
EP (1) EP1579304A1 (de)
AU (1) AU2002361212A1 (de)
WO (1) WO2004057450A1 (de)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013206569B4 (de) 2013-04-12 2020-08-06 Siemens Healthcare Gmbh Gestensteuerung mit automatisierter Kalibrierung

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU6666894A (en) * 1993-04-22 1994-11-08 Pixsys, Inc. System for locating relative positions of objects
US6226395B1 (en) * 1996-04-22 2001-05-01 Malcolm T. Gilliland Method and apparatus for determining the configuration of a workpiece
US6198485B1 (en) * 1998-07-29 2001-03-06 Intel Corporation Method and apparatus for three-dimensional input entry
US6147678A (en) * 1998-12-09 2000-11-14 Lucent Technologies Inc. Video hand image-three-dimensional computer interface with multiple degrees of freedom

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2004057450A1 *

Also Published As

Publication number Publication date
WO2004057450A1 (en) 2004-07-08
AU2002361212A1 (en) 2004-07-14

Similar Documents

Publication Publication Date Title
US11887312B2 (en) Fiducial marker patterns, their automatic detection in images, and applications thereof
US20190126484A1 (en) Dynamic Multi-Sensor and Multi-Robot Interface System
US8698898B2 (en) Controlling robotic motion of camera
CN108283018B (zh) 电子设备和用于电子设备的姿态识别的方法
CN107852447B (zh) 基于设备运动和场景距离使电子设备处的曝光和增益平衡
JP2018522348A (ja) センサーの3次元姿勢を推定する方法及びシステム
KR20120014925A (ko) 가변 자세를 포함하는 이미지를 컴퓨터를 사용하여 실시간으로 분석하는 방법
US11675178B2 (en) Virtual slide stage (VSS) method for viewing whole slide images
Akman et al. Multi-cue hand detection and tracking for a head-mounted augmented reality system
EP1579304A1 (de) Handzeigegerät
Arita et al. Maneuvering assistance of teleoperation robot based on identification of gaze movement
Lee et al. Robust multithreaded object tracker through occlusions for spatial augmented reality
US20160110881A1 (en) Motion tracking device control systems and methods
EP3745332A1 (de) Systeme, vorrichtung und verfahren zur verwaltung einer gebäudeautomatisierungsumgebung
JP2021174089A (ja) 情報処理装置、情報処理システム、情報処理方法およびプログラム
EP3734960A1 (de) Informationsverarbeitungsvorrichtung, informationsverarbeitungsverfahren und informationsverarbeitungssystem
Espinosa et al. Minimalist artificial eye for autonomous robots and path planning
Shanmugapriya et al. Gesture Recognition using a Touch less Feeler Machine
CN116204060A (zh) 鼠标指针基于手势的移动和操纵
CN115702320A (zh) 信息处理装置、信息处理方法和程序
CN114527922A (zh) 一种基于屏幕识别实现触控的方法及屏幕控制设备
Nair et al. 3D Position based multiple human servoing by low-level-control of 6 DOF industrial robot
Nair et al. Visual servoing of presenters in augmented virtual reality TV studios
KR20150001242A (ko) 손 제스처 인식용 초광각 스테레오 카메라 시스템 장치 및 방법

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20050722

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20090701