FR2887660A1 - Volume or surface e.g. computer screen, interactive rendering device for computing application, has bar with matrix image sensors, and computer that stores region of interest formed of line and column segments of image


Info

Publication number
FR2887660A1
Authority
FR
France
Prior art keywords
step
pointer
surface
image
position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
FR0506337A
Other languages
French (fr)
Inventor
Frederic Jacques Guerault
Christophe Jean Louis Chesnaud
Frederic Omnes
Luis Pedro Gomes
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SIMAG DEVELOPPEMENT SOCIETE A RESPONSABILITE LIMITEE
SIMAG DEV SARL
Original Assignee
SIMAG DEVELOPPEMENT SOCIETE A RESPONSABILITE LIMITEE
SIMAG DEV SARL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SIMAG DEVELOPPEMENT SOCIETE A RESPONSABILITE LIMITEE, SIMAG DEV SARL filed Critical SIMAG DEVELOPPEMENT SOCIETE A RESPONSABILITE LIMITEE
Priority to FR0506337A
Priority claimed from PCT/FR2006/001395 (WO2006136696A1)
Publication of FR2887660A1
Application status: Withdrawn


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0421 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means by interrupting or reflecting a light beam, e.g. optical touch-screen
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 - Control or interface arrangements specially adapted for digitisers
    • G06F3/0418 - Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment

Abstract

The device has an elongated bar (100) comprising two matrix image sensors (120, 160), each providing a two-dimensional image formed of a set of lines and columns and each placed at one end of the bar. The optical fields of the sensors overlap and cover a surface. A computer (125) stores, for each sensor, a region of interest formed of a set of line segments and a set of column segments of the image provided by that sensor. The computer determines the position of a pointer intersecting the surface from the parts of the images of the pointer found in the regions of interest. An independent claim is also included for a method for rendering a volume or surface interactive.

Description

METHOD AND DEVICE FOR MAKING A VOLUME OR A SURFACE INTERACTIVE

  The present invention relates to a method and a device for making a volume or a surface interactive. It applies, in particular, to allowing a user, by means of gestures made in this surface or this volume, to control actions, for example computer applications such as playing media files and/or consulting data.

  Many devices are known for making a volume or a surface interactive.

  US 4,746,770, EP 0279652 and JP 63223819 describe a method for isolating and manipulating graphic objects. It implements a frame that supports a plurality of light sources and optical sensors, and opaque objects, for example fingers, that obstruct a portion of the light normally received by the sensors. This method and the associated device, based on optical occlusion by a finger, therefore require the use of a complex, bulky and expensive frame, since it must carry, on at least two facing sides, light sources and sensors. In addition, its resolution is limited to the product of the number of sensors and the number of light sources, generally only a few hundred discriminable points.

  The touch screen described in US 5,162,783 has the same disadvantages.

  WO 99/40562 and US 19980020812 disclose a video-camera touch system in the form of a bar placed above a computer screen. This system includes no lighting and therefore must be used indoors, against a background below the screen that contains no light sources or reflections.

  In addition, this system uses complex periscope equipment to capture two images of the scene just in front of the computer screen, from different angles. Finally, this system does not identify pointers outside the surface of the screen.

  US 6,061,177 discloses a touch data input device for a rear-projection system. It comprises a transparent surface and a diffuser onto which an image is projected and captured from the rear by a camera. When the screen is touched, the light reflection is interrupted by absorption at the surface of the finger touching the screen, and this is captured by the camera. This system requires a large clear volume, corresponding to the sum of the optical field of the camera and the optical field of the projector, which limits its use on large surfaces. In addition, this system cannot be used outdoors, where stray incident light can dazzle the image sensor of the camera. Finally, this system does not allow pointer detection outside the area in which the image is projected by the projector.

  The device described in JP 2003 312123 has the same drawbacks.

  Similarly, the device described in DE 1995 1322 and WO 01/31424 has the same disadvantages.

  Similarly, the device described in GB 2315859 has the same disadvantages.

  The present invention aims to overcome these drawbacks and, in particular, to provide a device operating outdoors or indoors, comprising modules containing image sensors and lighting sources that can be positioned in the plane of the interactive surface without having to respect strict positioning criteria, while ensuring a high accuracy in discriminating the position of one or more pointers, for example fingers.

  It is another object of the present invention to provide a device for making a surface or a volume interactive in which all of the electrical components are incorporated in a linear bar that is easy to transport, to install on site or to integrate into a piece of furniture, and that operates facing said surface regardless of the position of the frame members, these having no electrical components.

  For this purpose, the present invention relates, in a first aspect, to a method for rendering a surface or a volume interactive, characterized in that it comprises, on the one hand, an initialization step which comprises: a step of positioning at least two matrix image sensors in the plane of said surface, the optical field of each matrix image sensor covering the whole of said surface; a step of positioning at least one light source whose radiation covers the whole of said surface; a step of storing a region of interest in the image supplied by each of said matrix sensors, said regions of interest representing the image, seen in section, of said surface and of each pointer intersecting said surface; a step of memorizing coordinates of a pointer successively placed at extreme points of at least one active zone in said surface, said coordinates coming from the processing of the images of the pointer provided by the image sensors; and a step of associating, with each active zone, at least one action to be performed; and, on the other hand, an operating step which comprises: a step of determining the position of at least one pointer and, when the position of a pointer corresponds to one of the active zones, a step of triggering an action associated with said active zone.

  Thanks to these provisions, each time a user positions a pointer facing the interior of one of the active zones, the processing of the images of this pointer in the regions of interest of the two sensors makes it possible to determine that it is inside the active zone considered and to launch an action assigned to said active zone.

  Thanks to these provisions, an installer can position the cameras in any positions, provided that their optical field covers the surface or the volume to be made interactive and that the illumination provided by the light source(s) allows the detection of pointers by each camera over all of this area; only the minimum width of the pointer and the accuracy of the pointer position determination can suffer from a non-optimized positioning of the cameras.

  It is therefore possible to provide installers or untrained operators with independent modules, each comprising a matrix camera, and with software implementing the method of the present invention.

  According to particular features, the method as briefly described above further comprises: a step of positioning at least one matrix display parallel to and facing a portion of said surface; and a step of memorizing coordinates of a pointer placed successively facing the end points of said matrix display, said coordinates defining an active zone, said active zone being associated with at least one action to be performed.

  With these provisions, a user can interact with an animated image or a computer screen, as with a mouse.

  According to particular features, the method as briefly described above further comprises: a step of displaying, on said matrix display, at least one reference point; a step of memorizing coordinates of a pointer placed facing each said reference point, said coordinates coming from the processing of the images of the pointer provided by the image sensors; and a step of storing data representative of an optical distortion.

  Thanks to these arrangements, the accuracy of determining the coordinates of a point of the display designated by a pointer can be increased, allowing operation as a computer pointing device similar to a mouse.

  According to particular features, the method as briefly described above further comprises: a step of determining successive positions of a pointer in the plane of the surface rendered interactive, by processing the last two images received from the image sensors, each position having two coordinates; and a position correction step, during which a position that is a function of at least two successive positions is assigned as the corrected position.

  Thanks to these arrangements, even if the image captures of the two matrix sensors are not synchronized, the position errors resulting from the movement of the pointer between the capture instants of the cameras are at least partially compensated. This compensation is effective whether the last image provided by each camera is used at all times or whether a new image from both cameras is awaited before performing the processing.

  According to particular features, during the position correction step, the corrected position is the weighted centroid of the last, penultimate and antepenultimate positions, the last and antepenultimate positions each having a weight equal to half the weight of the penultimate position. Thus, the compensated position is very close to the actual position of the pointer, while adding a response time shorter than the time interval between two successive image captures by the same camera, typically 20 milliseconds.

  According to particular features, the method as briefly described above further comprises: a step of successively positioning, along a frame placed in the plane of the surface to be made interactive, outside said surface, a movable light source facing each camera, in at least two positions in the optical field of that camera; for each position of said light source, a step of storing the coordinates, in the image plane provided by the sensor of said camera, of the point corresponding to said movable light source; and a step of determining the region of interest, in the image plane provided by said sensor, said region of interest comprising each line segment drawn between two points corresponding to successive positions of said movable light source.

  Thanks to these provisions, the determination of the region of interest of each image provided by an image sensor is easy and accurate.

  According to particular features, the method as briefly described above further comprises: a step of pointing, in an image taken by each matrix sensor, at least two positions corresponding to points of a region of interest; for each position, a step of storing the coordinates, in the image plane provided by the sensor of said camera, of the corresponding point; and a step of determining the region of interest, in the image plane provided by said sensor, said region of interest comprising each line segment drawn between two successive points whose coordinates have been stored.

  Thanks to these arrangements, it is not necessary to have a light source in the field of the cameras, and the determination of the regions of interest can be carried out remotely from an image supplied by each camera, for example by transmission over the Internet.

  According to particular characteristics, during the region of interest determination step, the region of interest related to each matrix image sensor is set as the set of image points lying, in the direction parallel to the sensor side most nearly perpendicular to said segment, within less than a predetermined number of image points of said segment.

  Thanks to these provisions, the regions of interest correspond to several successive lines of the captured image, and the processing carried out on the columns perpendicular to these lines makes it possible to reject interference, reflections or electronic noise that could trigger the false detection of a pointer in the sensor field.

  According to particular characteristics, in order to detect a pointer, for each column of picture elements of the region of interest of each sensor, the minimum brightness over the points of that column is determined; this minimum is compared with a brightness threshold value, and the number of consecutive minima exceeding this threshold value is compared with at least one apparent width threshold value.

  This determines whether the pointer crosses the entire region of interest and meets width criteria distinguishing it from a reflection or a stray object.

  According to particular features, the method as briefly described above comprises: a step of assigning a computer application to at least one movement of the pointer and to at least one active zone or to the entire interactive surface and, when said movement is detected in said active zone or in the interactive surface, respectively, a step of running said computer application.

  Thanks to these provisions, it is possible to associate horizontal or vertical hand movements, or holds in position of different durations, with different computer applications such as screen changes, or the launching or closing of an application or of a file.

  According to particular characteristics, at least one said movement is defined by its amplitude, its duration or speed, its spatial tolerance, its direction and its sense.
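As an illustration only, such a movement definition could be represented and matched against recent pointer positions as in the following sketch; the class, field and function names are illustrative and not taken from the patent (the direction vector encodes both direction and sense).

```python
# Hypothetical gesture descriptor: all names are illustrative, not from the patent.
class Gesture:
    def __init__(self, name, min_amplitude, max_duration, spatial_tolerance, direction):
        self.name = name                            # action identifier
        self.min_amplitude = min_amplitude          # minimum travel, in plane units
        self.max_duration = max_duration            # maximum duration, in seconds
        self.spatial_tolerance = spatial_tolerance  # allowed deviation from the direction line
        self.direction = direction                  # unit vector (dx, dy); sign gives the sense

def matches(gesture, track):
    """track: list of (t, x, y) pointer positions, oldest first."""
    if len(track) < 2:
        return False
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    dx, dy = x1 - x0, y1 - y0
    # signed amplitude along the gesture direction
    along = dx * gesture.direction[0] + dy * gesture.direction[1]
    # deviation perpendicular to the gesture direction
    across = abs(dx * gesture.direction[1] - dy * gesture.direction[0])
    return (along >= gesture.min_amplitude
            and (t1 - t0) <= gesture.max_duration
            and across <= gesture.spatial_tolerance)

# Example: a rightward swipe of at least 0.2 plane units completed within one second.
swipe_right = Gesture("swipe_right", 0.2, 1.0, 0.05, (1.0, 0.0))
```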

  According to particular characteristics, when at least two pointers are detected, the method comprises a step of assigning to at least one of said pointers the one of the possible positions whose distance to a Kalman prediction is minimal.

  According to particular characteristics, when at least two pointers are detected, the method comprises a step of assigning to said pointers the possible positions, compatible with the images of the pointers, that minimize the sum of their distances to the Kalman predictions.

  According to a second aspect, the present invention relates to a device for rendering a surface or a volume interactive, characterized in that it comprises: two matrix image sensors in the plane of said surface, the optical field of each matrix image sensor covering the whole of said surface; at least one light source whose radiation covers the whole of said surface; means for storing a region of interest in the image provided by each of said matrix sensors, said regions of interest representing the image, in sectional view, of said surface and of each pointer intersecting said surface; means for memorizing coordinates of a pointer placed successively at the end points of at least one active zone in said surface, said coordinates coming from the processing of the pointer images provided by the image sensors; means for associating, with each active zone, at least one action to be performed; and action triggering means adapted, when, in an operating phase, the position of a pointer corresponds to one of the active zones, to trigger an action associated with said active zone.

  Since the advantages, aims and particular characteristics of this device are similar to those of the method as succinctly described above, they are not recalled here.

  Other advantages, aims and features of the present invention will emerge from the description which follows, made for explanatory and non-limiting purposes with reference to the accompanying drawings, in which:
- Figure 1 shows, schematically, in section parallel to the interactive surface, part of a first embodiment of the device of the present invention;
- Figure 2 shows, schematically, in section perpendicular to the interactive surface, part of the first embodiment of the device of the present invention;
- Figure 3 schematically represents, in section parallel to the interactive surface, part of a second embodiment of the device of the present invention;
- Figure 4 schematically represents, in section parallel to the interactive surface, part of a third embodiment of the device of the present invention;
- Figure 5 schematically represents, in section parallel to the interactive surface, part of a fourth embodiment of the device of the present invention;
- Figures 6 and 7 represent a pointing light box implemented for the installation of particular embodiments of the device of the present invention;
- Figure 8 shows, schematically, four positions taken by the light box illustrated in Figures 6 and 7;
- Figure 9 represents points, pointer positions and corrected pointer positions implemented in particular embodiments of the method of the present invention;
- Figure 10 represents possible positions of two pointers and Kalman predictions of two pointers, implemented in particular embodiments of the method of the present invention;
- Figure 11 shows positions of a pointer obscured by another pointer, and Kalman estimates and predictions of the obscured pointer;
- Figure 12 represents measured positions and corrected positions of a moving pointer according to particular embodiments of the method of the present invention;
- Figures 13A to 13C represent, in the form of a logic diagram, steps implemented in particular embodiments of the method of the present invention; and
- Figures 14 to 16 show frame profiles comprising concave parts.

  Throughout the description, we speak of a surface to be made interactive. However, the present invention is not limited to this particular planar case but extends, on the contrary, to volumes defined as the intersection of the cones having the frame as base and the optical centers of the cameras as apexes. Thus, the larger the frame, the larger the volume that is made interactive.

  FIG. 1 shows a bar 100 comprising two parts 110 and 150 situated longitudinally at the left and right ends, respectively, of the bar 100. Each part 110 and 150 has a window, respectively 112 and 152, facing the surface to be made interactive and provided with a filter, respectively 115 and 155, blocking visible light and passing rays in the near infrared, a monochrome camera, respectively 120 and 160, whose sensitivity spectrum covers the visible and the near infrared, with a lens, respectively 122 and 162, and a matrix image sensor, respectively 124 and 164, and a longitudinal opening for ventilation. Each camera is oriented, through the corresponding window, towards the surface to be made interactive. The bar 100, whose length is, for example, at least equal to five times its other dimensions, comprises, between the parts 110 and 150, a longitudinal cylindrical Fresnel lens 130 oriented towards the surface to be made interactive, one or two lines of light-emitting diodes 135 emitting in the near infrared and oriented towards the lens 130, a longitudinal fan 140 and a set of links 145 providing power to the cameras and light-emitting diodes and transporting the signals from the cameras to a computer 125. The lens 130 converges the light rays from the light-emitting diodes 135 towards the interactive surface.

  FIG. 2 shows a sectional view of an installation comprising the bar 100 above a surface 170 rendered interactive, and a lower frame member 175. Side frame members 180, not shown in Figure 2, bound the optical field of the cameras around the surface to be made interactive. The frame elements have very little reflectivity at the wavelengths passed by the filters 115 and 155. For example, the frame can be formed of a black strip 2 to 4 centimeters wide, painted with Krylon ultra flat black paint, reference K1602 for the spray (registered trademarks), and placed all around the desired interactive area. The further the lower part of the frame is from the bar (the HMI value of the documentation becoming larger), the wider this black strip must be.

  The profile of the frame may be planar, possibly bevelled or chamfered on the side opposite the support. The profile of the frame can also take a concave shape with at least two plane walls forming between them an angle greater than or equal to 90 degrees, at least one of said walls being able to be vertical to limit the stray light incident on the inside of the profile, as illustrated by the exemplary profiles 1400, 1500 and 1600 in FIGS. 14 to 16.

  In the installation mode illustrated in FIG. 2, a wall 185 carries the bar 100, a window 190 placed facing the surface to be made interactive, and the frame elements 175 and 180. It is observed that the presence of a window or of any object, wall, or linear or furniture shelving is in no way necessary for the operation of the device.

  The steps of installation, parameterization and operation performed with the computer 125 for the particular embodiment of the device of the present invention illustrated in Figures 1 and 2 are described later, with reference to Figures 9 and following.

  In a particular embodiment, the bar is made extendable. In this embodiment, illustrated in FIG. 3, a bar comprises the parts 110 and 150 and intermediate elements 305 carrying light-emitting diodes 310 and longitudinal cylindrical Fresnel lenses, these elements 305 being mounted on a deformable structure 320 which keeps the elements 305 equidistant from each other during the elongation of the bar.

  In a particular embodiment, illustrated in FIG. 4, a bar 400 is positioned at a corner with respect to the surface to be rendered interactive and comprises the same elements as the bar 100, the windows and the cameras 420 and 460 then being positioned in parallel, and an L-shaped frame whose horizontal element 475 and vertical element 480 are positioned around the opposite corner of the interactive surface.

  In a particular embodiment, illustrated in FIG. 5, each camera module 500 comprises a window 505, a filter 510, a camera 520 comprising a lens 522 and a matrix image sensor 524, two lenses 530, two or four lines of light-emitting diodes 535 placed on either side of the camera 520, two ventilation apertures 540 and a fan 550.

  Two of these camera modules can be positioned anywhere in the plane of the surface to be made interactive, provided that the intersection of their optical fields covers all of this interactive surface and that the light from the light-emitting diodes 535 is sufficient, over the entire interactive surface, to detect the light reflected by a pointer.

  FIGS. 6 and 7 show a pointing box 600 comprising a light-emitting diode 605 emitting in the near infrared and having an emission angle greater than 60 degrees. At the rear of this diode 605, two planes 610 and 615 are formed, oblique with respect to the optical axis of the diode 605 and forming with this axis an angle of approximately 30 degrees. Thus, when the box 600 is placed against the side frame 180 at the maximum height of the surface to be made interactive, with the optical axis of the diode 605 oriented upwards, each camera in the opposite corner is illuminated by the diode 605, as illustrated in FIG. 6. Similarly, when the box 600 is placed in a corner, resting on the lower frame element 175, the two cameras are always in the emission field of the diode 605, as illustrated in FIG. 7.

  In alternative embodiments, the diode 605 is replaced by a set of diodes forming a one- or two-dimensional array, to increase the number of image points corresponding to the region of interest. In variants, the pointing box 600 has, on one of its lateral faces, a switch so that the installer can trigger the lighting of the diode or diodes. In variants having said switch and at least two diodes positioned laterally with respect to the plane perpendicular to the representations of the box 600 in FIGS. 6 to 8, the diodes are lit in phase-shifted sequences so that, by processing the images taken by the cameras, the coordinates of the image points corresponding to the direct view of the diodes can be differentiated from the coordinates of the image points corresponding to the reflection of the light emitted by these diodes.

  To aid the identification of the light-emitting diode in the optical field of the cameras, this diode is preferably controlled to flash, for example with a period of one second.

  The installation procedure described below applies to the case of bar 100.

  Those skilled in the art know how to extend it to the other embodiments of the device illustrated in FIGS. 3 to 5, taking into account the angle between the optical axes of the lenses of the cameras.

  To automatically adjust the parameters of each camera and, in particular, the illumination or gain factor, this factor is set to the lowest value and is gradually increased, while the box illustrated in FIG. 6 is in the field of the camera, until the level at which the light-emitting diode of this box is perceived corresponds to a predetermined value, for example the white level.
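A minimal sketch of such a gain-adjustment loop is given below; the camera interface (`set_gain`, `grab`) and the helper `led_peak_level` are hypothetical names introduced for illustration and are not part of the patent.

```python
def led_peak_level(image):
    # Brightest pixel in the image; a real system would restrict this to the
    # area where the pointing-box LED is expected.
    return max(max(row) for row in image)

def adjust_gain(camera, target_level=255, step=1, max_gain=255):
    """Increase the camera gain from its lowest value until the pointing-box LED
    reaches the target level (e.g. the white level). `camera.set_gain` and
    `camera.grab` are assumed helpers; `grab()` returns a 2-D array of pixel
    intensities."""
    gain = 0
    camera.set_gain(gain)
    while gain < max_gain:
        image = camera.grab()
        if led_peak_level(image) >= target_level:
            break
        gain += step
        camera.set_gain(gain)
    return gain
```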

  In a variant, a permanent automatic gain control is used to adapt the sensitivity of the camera to the ambient brightness.

  The cameras used in the devices of the present invention being preferentially matrix cameras, they provide images which represent, successively along their smallest dimension: on the one hand, the environment close to a first side of the surface to be made interactive, for example a showcase; then the volume defined by the frame and the lens of the camera, a narrow volume surrounding the plane parallel to the surface to be made interactive that passes through the center of the lens of the camera, called the region of interest or ROI; and finally the environment close to this volume on the other side of the surface to be made interactive or, if this surface is reflective, the reflection of the near environment on the first side.

  It is observed that the region of interest ROI is not necessarily parallel to the larger side of this image, because of the mechanical positioning inaccuracies of the camera and the optical components and the sensor inside the camera considered.

  Likewise, this region of interest ROI consists of two diamond shapes in the image supplied by the camera when, as is generally the case, the part of the frame seen by each camera consists of two rectangles. To determine which part of the image is the region of interest ROI, one of the procedures indicated below is preferentially carried out.

  In a first procedure, as illustrated in FIG. 8, the pointing box illustrated in FIGS. 6 and 7 is used by positioning it successively along the frame in at least three positions 811 to 814 located in the optical field of each camera, directing the axis of the light source towards the cameras. In the present case, when the box is located in positions 811 to 813, it is in the optical field of the camera 820 and, when the box is in positions 812 to 814, it is in the optical field of the camera 860.

  The coordinates, in the image plane, of each light point indicate points of the region of interest ROI. From these at least three points, two half-lines are determined in the image plane of the camera, and the points of these half-lines, as well as the points lying at a distance from these half-lines less than a predetermined limit distance, are considered as the region of interest ROI; for example, the points of the half-lines shifted, upward or downward in the image, by at most five picture elements or pixels.
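A minimal sketch of building such a region-of-interest mask from the pointed image positions is given below, assuming integer (column, row) coordinates and a vertical tolerance of a few pixels around the segments joining consecutive points; all names are illustrative.

```python
def roi_mask(points, image_width, image_height, tolerance=5):
    """Boolean ROI mask from pointed image positions.

    `points` are (column, row) coordinates of the light source seen by one
    camera, ordered along the frame. Pixels within `tolerance` rows of the
    polyline joining consecutive points are kept (the "shifted by at most five
    pixels" band of the description)."""
    mask = [[False] * image_width for _ in range(image_height)]
    for (c0, r0), (c1, r1) in zip(points, points[1:]):
        if c1 == c0:
            continue
        for c in range(min(c0, c1), max(c0, c1) + 1):
            if not 0 <= c < image_width:
                continue
            r = r0 + (r1 - r0) * (c - c0) / (c1 - c0)   # row of the segment at column c
            for dr in range(-tolerance, tolerance + 1):
                rr = int(round(r)) + dr
                if 0 <= rr < image_height:
                    mask[rr][c] = True
    return mask
```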

  In a second procedure, the installer moves a pointer, finger, fist or hand over the entire surface to be rendered interactive and, for each camera, the narrow diamond-shaped dark areas surrounding a moving luminous area corresponding to the pointer are determined.

  In particular embodiments, once the region of interest has been determined, the sensitivity of each camera is determined or, which amounts to the same thing apart from the glare phenomena extending over the surface of the image sensor (blooming and smearing), the amplification of the output signal and the digitized range of values that will be taken into account, so that the region of interest has, over its entire surface, a value considered black. It is observed that this normalization of the signal can be carried out point by point and dynamically, by considering, for each image point of the region of interest, the average value over a long period, for example five minutes, as the black value.
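As an illustration of this point-by-point, dynamic normalization, the sketch below keeps a running per-pixel black reference using an exponential forgetting factor; the class name, the update rule and the decay value are assumptions, the patent only stating that a long-period average (for example five minutes) is used as the black value.

```python
class BlackLevel:
    """Running per-pixel black reference for the region of interest."""
    def __init__(self, decay=0.999):
        self.decay = decay
        self.reference = None    # per-pixel average, same shape as the ROI

    def normalize(self, roi_pixels):
        """roi_pixels: list of rows of brightness values (the ROI only).
        Returns the brightness above the current black reference and updates it."""
        if self.reference is None:
            self.reference = [row[:] for row in roi_pixels]
        out = []
        for ref_row, row in zip(self.reference, roi_pixels):
            out_row = []
            for i, value in enumerate(row):
                ref_row[i] = self.decay * ref_row[i] + (1.0 - self.decay) * value
                out_row.append(max(0, value - ref_row[i]))   # excess over black
            out.append(out_row)
        return out
```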

  In particular embodiments, the brightness limit value beyond which it will be considered that a pointer is potentially facing a point of the region of interest is determined, point by point or globally.

  In particular embodiments, to detect a pointer, the minimum brightness over the points of each column of the region of interest ROI is determined. A threshold value is then determined, for example equal to twice the average of these minima. It is observed that the use of this minimum brightness has many advantages. On the one hand, small movements of the finger perpendicular to the interactive surface can be detected since, as soon as the pointer leaves one line of the column dark, the minimum takes this low brightness value. On the other hand, reflections and artifacts on the frame do not disturb the detection because they cause brightnesses greater than the minimum brightness. Finally, even if the region of interest covers more than the dark frame, the detection of the pointer is not disturbed.

  The number of successive columns for which the minimum considered is greater than the threshold value is then determined, which provides a central point and a width for each pointer seen by each camera.
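The following sketch implements this per-column minimum detection under the stated rule (threshold equal to twice the average of the column minima); the function and parameter names are illustrative.

```python
def detect_pointers(roi, width_threshold=3):
    """Detect pointers in one camera's region of interest.

    `roi` is a list of rows of brightness values (the ROI only). For each column,
    the minimum over the rows is taken; the detection threshold is twice the mean
    of these minima, and runs of consecutive columns whose minimum exceeds the
    threshold and that are at least `width_threshold` columns wide are reported
    as (center_column, width)."""
    n_cols = len(roi[0])
    minima = [min(row[c] for row in roi) for c in range(n_cols)]
    threshold = 2.0 * sum(minima) / n_cols
    pointers, start = [], None
    for c, m in enumerate(minima + [0]):          # sentinel closes a trailing run
        if m > threshold and start is None:
            start = c
        elif m <= threshold and start is not None:
            width = c - start
            if width >= width_threshold:
                pointers.append(((start + c - 1) / 2.0, width))
            start = None
    return pointers
```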

  In embodiments, the detection of a real pointer is performed when a sufficient number of image points representing a rectangle in the image (for example three columns by eight lines) exhibit a brightness higher than the brightness limit value.

  The lenses of the cameras create a radial distortion. During the calibration phase, the parameters of this distortion are measured and modeled by a polynomial of the third degree which makes it possible to correct its effects (for example a polynomial of the form rc = c0 + c1.r + c2.r^2 + c3.r^3, where rc is the corrected radius, r is the distance between the current point and the intersection of the optical axis of the camera with the image sensor, and c0 to c3 are correction coefficients).
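A minimal sketch of applying this third-degree radial model to an image point is shown below; the exact convention (whether points are pulled toward or pushed away from the optical center) depends on the calibration and is an assumption here.

```python
import math

def correct_radius(r, c0, c1, c2, c3):
    """rc = c0 + c1*r + c2*r**2 + c3*r**3, with r the distance of the current
    point from the intersection of the optical axis with the image sensor."""
    return c0 + c1 * r + c2 * r**2 + c3 * r**3

def undistort(x, y, cx, cy, coeffs):
    """Apply the radial correction to an image point (x, y) around the optical
    center (cx, cy). `coeffs` = (c0, c1, c2, c3) comes from factory calibration."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0:
        return x, y
    rc = correct_radius(r, *coeffs)
    return cx + dx * rc / r, cy + dy * rc / r
```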

  The calibration phase made in the factory makes it possible to calculate the focal length of the objectives of each camera and the angle between the optical axis of each camera and the horizontal.

  The two image coordinates provided by the cameras for each pointer make it possible to determine the position of the pointer in the user plane, that is to say the interactive surface. The coordinate system used for the computation is a reference frame whose origin is the optical center of the camera shown on the left in Figure 9. The abscissa axis is the horizontal axis oriented towards the camera shown on the right in Figure 9. The distance between the optical centers of the lenses of the cameras defines the unit of length. The ordinate axis points downward, its unit length being equal to that of the abscissa axis.

  The coordinates in the plane thus provided with an orthogonal coordinate system are determined as a function of the angles p1 and p2, formed between the lines starting from each of the optical centers of the lenses and passing through the center of the pointer on the one hand, and the horizontal axis on the other hand, in the following way: Tx = sin(p1) x cos(p2) / sin(p1 + p2) and Ty = sin(p1) x sin(p2) / sin(p1 + p2), where the angles p1 and p2 are functions of the angle between the optical axis of the corresponding camera and the horizontal, and of the abscissa of the center of the image of the pointer formed on the image sensor.
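A direct transcription of these triangulation formulas is sketched below; the association of p1 and p2 with the left-hand and right-hand cameras follows the convention of the description and is otherwise an assumption.

```python
import math

def triangulate(p1, p2):
    """Pointer position in the plane from the two viewing angles (radians),
    with the distance between the optical centers as the unit of length:
        Tx = sin(p1) * cos(p2) / sin(p1 + p2)
        Ty = sin(p1) * sin(p2) / sin(p1 + p2)"""
    s = math.sin(p1 + p2)
    tx = math.sin(p1) * math.cos(p2) / s
    ty = math.sin(p1) * math.sin(p2) / s
    return tx, ty
```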

  From the coordinates in the plane, screen coordinates are defined, which are positions on an image formed near the interactive surface, for example by projection onto a frosted part of a window, for example between 0 and 65535 in each direction. To obtain these screen coordinates, the positions of the four corners of this image, represented at 820 in FIG. 8, are memorized, and a bilinear transformation of the coordinates in the plane is carried out.
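One possible way to realize this mapping is to invert the bilinear interpolation defined by the four memorized corners, as in the sketch below; the patent only states that a bilinear transformation is applied, so this Newton-based inversion and all names are assumptions.

```python
def screen_coordinates(p, corners, scale=65535, iterations=10):
    """Map a plane point p = (x, y) to screen coordinates in [0, scale].

    `corners` = (c0, c1, c2, c3) are the plane positions of the screen-area
    corners, ordered around the quadrilateral (c0-c1 one edge, c3-c2 opposite)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    # Forward map: P(u, v) = A + B*u + C*v + D*u*v, with P(0,0)=c0 ... P(1,1)=c2.
    ax, ay = x0, y0
    bx, by = x1 - x0, y1 - y0
    cx, cy = x3 - x0, y3 - y0
    dx, dy = x2 - x1 - x3 + x0, y2 - y1 - y3 + y0
    u, v = 0.5, 0.5
    for _ in range(iterations):                     # Newton iterations on (u, v)
        fx = ax + bx * u + cx * v + dx * u * v - p[0]
        fy = ay + by * u + cy * v + dy * u * v - p[1]
        j00, j01 = bx + dx * v, cx + dx * u         # Jacobian, x row
        j10, j11 = by + dy * v, cy + dy * u         # Jacobian, y row
        det = j00 * j11 - j01 * j10
        if det == 0:
            break
        u -= (fx * j11 - fy * j01) / det
        v -= (fy * j00 - fx * j10) / det
    return u * scale, v * scale
```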

  From the coordinates in the plane, metric coordinates (on the interactive surface) are defined by multiplying the coordinates in the plane by the metric distance between the optical centers of the lenses of the cameras.

  Since the measurements are tainted with errors, an optimization phase is preferentially carried out, during which successive points of intersection of a regular grid, represented by crosses in Figure 9, are displayed in the image projected on the screen area 820, and the installer successively points at each of these points in the interactive surface with the physical pointer, for example his finger, as shown in Figure 9. By processing the data thus collected, illustrated by black dots in Figure 9, a finer correction than in the initial calibration step is performed.

  The processing of several targets or pointers detected in the region of interest of a camera differs depending on whether or not one of the pointers masks another in the image of one of the cameras.

  It is assumed that the pointers are successively detected by the cameras, several new pointers not being detected simultaneously.

  In the case where none of the pointers is masked, the possible positions of the pointers are first determined. For example, there are four possible positions when two pointers are detected by each of the two cameras.

  For each old pointer, that is to say for each pointer that has already been detected, the Kalman prediction of the position of said pointer is then determined.

  The distance between the Kalman prediction and each of the possible positions is calculated.

  If there is only one old pointer, it is assigned the possible position nearest to the Kalman prediction, and the remaining position is associated with the new pointer.

  If there are two old pointers, they are assigned the two compatible positions that minimize the sum of the distances between these compatible positions and the Kalman predictions. If there is a new pointer, it is assigned the position compatible with the positions of the other pointers.

And so on.

  Figure 10 shows the case where there are two old pointers, whose Kalman predictions are represented at 1005 and 1010. There are four possible positions 1015 to 1030 for these pointers, defining a quadrilateral. The four sums of the distances from the predictions to the pairs of opposite vertices of the quadrilateral are then defined:

  D1 = dist(1005, 1015) + dist(1010, 1025)
  D2 = dist(1005, 1020) + dist(1010, 1030)
  D3 = dist(1005, 1025) + dist(1010, 1015)
  D4 = dist(1005, 1030) + dist(1010, 1020)
  The smallest of these sums of distances is selected and the corresponding positions are assigned to the pointers according to this smallest sum. In the case of FIG. 10, the pointer whose Kalman prediction is represented at 1005 is assigned the position 1020, and the pointer whose Kalman prediction is represented at 1010 is assigned the position 1030.
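A minimal sketch of this assignment rule is given below; the data structures are illustrative, and the Kalman predictions themselves are assumed to be produced elsewhere.

```python
import math
from itertools import permutations

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def assign_pointers(predictions, candidate_pairs):
    """Choose, among the pairs of candidate positions compatible with the camera
    images (for two pointers seen by two cameras, the two pairs of opposite
    vertices of the quadrilateral of possible positions), the assignment that
    minimizes the sum of distances to the Kalman predictions of the pointers."""
    best, best_sum = None, float("inf")
    for pair in candidate_pairs:
        for ordering in permutations(pair):
            total = sum(dist(p, q) for p, q in zip(predictions, ordering))
            if total < best_sum:
                best, best_sum = ordering, total
    return best   # best[i] is the position assigned to predictions[i]

# For the case of figure 10: predictions = [pos_1005, pos_1010] and
# candidate_pairs = [(pos_1015, pos_1025), (pos_1020, pos_1030)].
```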

  Alternatively, a model break is detected when the minimum of the distances between one Kalman prediction and the four possible positions is much smaller, for example at least four times smaller, than the distances between the other Kalman prediction and the four possible positions. In this case, the pointer whose Kalman prediction is closest to one of the possible positions is assigned this position, and the other pointer is assigned the opposite position in the quadrilateral.

  In the case where one of the pointers is obscured by another, there are half as many possible positions, and the Kalman prediction is retained as the position of each pointer, without updating the Kalman filter, as shown in Figure 11.

  As a variant, in the case where one of the pointers is obscured by another, there are half as many possible positions, and the position retained for each pointer is the one that minimizes the sum of the distances between the Kalman predictions and the compatible positions, with or without updating the Kalman filter.

  In the case where the cameras are not synchronized, processing the last image from each camera causes an irregularity of the trajectory around the real trajectory of the pointer, finger, fist or hand, as soon as it is moving. Indeed, during triangulation, the direction of the pointer determined from the image taken earlier is increasingly erroneous as the speed of movement of the pointer and the duration of the time interval separating the two image captures increase.

  Figure 12 illustrates two treatments applied to the erroneous positions thus determined.

  In particular embodiments of the present invention, to overcome these disadvantages, after triangulation taking into account the last image taken by each camera to determine a new position of the pointer, called the primary position, the middle position of the last two primary positions is determined and considered as the actual position of the pointer. This middle position is called secondary and is therefore the isobarycenter of the last two determined primary positions.

  In particular embodiments of the present invention, the so-called tertiary middle position of the last two secondary middle positions thus determined is computed and considered as the actual position of the pointer. The tertiary position is therefore the isobarycenter of the last two secondary positions, and also the centroid of the last three primary positions assigned respective weights 1, 2 and 1.
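The 1-2-1 weighted centroid described here reduces to a short computation over the last three primary positions, as in this sketch (function name illustrative):

```python
def corrected_position(primary_positions):
    """Tertiary (corrected) position from the last three primary positions,
    weighted 1, 2, 1. `primary_positions` holds (x, y) pairs, most recent last."""
    (x1, y1), (x2, y2), (x3, y3) = primary_positions[-3:]
    return ((x1 + 2 * x2 + x3) / 4.0, (y1 + 2 * y2 + y3) / 4.0)
```

As stated above, the added latency stays below the interval between two successive captures by the same camera, typically 20 milliseconds.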

  It is observed that, by changing the mode of detection of the pointer, for example by determining the minimum per column for two halves of the region of interest representing two halves of the dark frame, a third dimension, perpendicular to the interactive surface, can be exploited. Two different behaviors can then be associated with the two actions of touching a window or merely approaching it. A zoom function depending on the distance between the pointer and the screen area can also be performed.

  Figures 13A to 13C summarize the steps performed during the installation of a device object of the present invention and during operation of said device.

  Although, for purposes of explanation, a single-task processing carried out by the computer 125 is described, in addition to its processing for displaying a screen page, the various processing operations described below are preferably carried out in multitasking.

  During a step 1300, the installer positions the device, including the matrix image sensors and the light sources, so that the surface to be made interactive is covered by the optical field of each image sensor and by all the light sources, and also positions the frame so that it lies in the union of the optical fields of the cameras.

  During a step 1302, the installer positions the box illustrated in FIG. 6 successively along the frame, in at least three positions in the optical field of each camera, orienting the axis of the light source towards the cameras. The coordinates, in the image plane, of each light point indicate points of the region of interest ROI. During this same step 1302, the region of interest is defined in the images captured by each image sensor and its coordinates in these images are stored.

  During a step 1304, to automatically adjust the parameters of each camera and, in particular, the illumination or gain factor, this factor is set to the lowest value and is gradually increased, while the box illustrated in FIG. 6 is in the field of the camera, until the level at which the light-emitting diode of this box is perceived corresponds to a predetermined value, for example the white level. In a variant, from step 1304 onwards, a permanent automatic gain control is implemented to adapt the sensitivity of the camera to the ambient brightness.

  From the at least three points per camera, during a step 1306, two half-lines are determined in the image plane of the camera, and the points of these half-lines, as well as the points at a distance from these half-lines less than a predetermined limit distance, are considered as the region of interest ROI; for example, the points of the half-lines shifted upward or downward in the image by not more than five picture elements or pixels.

  During a step 1308, the sensitivity of each camera is determined or, which amounts to the same thing apart from the glare phenomena extending over the surface of the image sensor (blooming and smearing), the amplification of the output signal and the digitized range of values that will be taken into account, so that the region of interest has, over its entire surface, a value considered black. It is observed that this normalization of the signal can be carried out point by point and dynamically, during the operation of the device, by considering, for each image point of the region of interest, the average value over a long period, for example five minutes, as the black value.

  From step 1316 onwards, to detect a pointer, the minimum brightness over the points of each column of the region of interest ROI is determined. A threshold value is then determined, for example equal to twice the average of these minima. The number of successive columns for which the minimum considered is greater than the threshold value is then determined, which provides a central point and a width for each pointer seen by each camera.

  During a step 1316, an angle registration phase is carried out by pointing, in the interactive surface, at least two points lying on a horizontal line and processing their coordinates in the image planes. During this step 1316, an optimization phase is also carried out, during which successive points of intersection of a regular grid are displayed in the projected image, and the installer successively points at each of these points in the interactive surface with the physical pointer, for example his finger, as shown in Figure 9. The computer 125 performs, by processing the data thus collected, a finer correction than in the initial calibration step.

  During a step 1320, the active zones, represented at 820 to 828 in FIG. 8, are defined in the interactive surface: the screen areas, here 820, associated with a video projector, and 821, associated with a flat computer screen placed near the interactive surface, and the sensitive areas 822 to 828, by pointing, in turn, at their corners. Then, during a step 1322, the installer assigns, to each gesture and each sensitive zone, an action to be performed. It is observed that a zone can be reserved for the detection of gestures, outside the screen areas and the areas associated with graphics corresponding to application launches. Similarly, a single zone, called a keyboard zone, facing a poster representing alphanumeric characters in a matrix and identified by its four corners, can allow the input of these characters without the user having to point at each of the corners of each of the sub-fields carrying a character, the device automatically decomposing the keyboard zone into sub-fields assigned to the different characters, the layout of the characters on the poster being otherwise known to the device.
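A minimal sketch of this automatic decomposition of the keyboard zone into character sub-fields is given below; the coordinates are assumed to be normalized within the zone (for instance via the same bilinear mapping as the screen area), and the layout shown is purely an example.

```python
def keyboard_character(u, v, layout):
    """Return the character of the keyboard zone pointed at.

    (u, v) are the pointer coordinates normalized to [0, 1) within the keyboard
    zone, and `layout` is the matrix of characters printed on the poster, known
    to the device. The zone is split into equal sub-fields, one per character."""
    rows, cols = len(layout), len(layout[0])
    row = min(int(v * rows), rows - 1)
    col = min(int(u * cols), cols - 1)
    return layout[row][col]

# Example poster layout (assumption):
# layout = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ0123", "456789"]
```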

  For the screen areas, during a step 1322, the computer 125 defines the actions that will be launched when icons, graphics, texts or images are selected with the pointer, which then acts as a mouse, or by the horizontal and vertical gestures detected in these screen areas.

  During a step 1324, the operation of the device is started.

  During a step 1326, it is determined whether a pointer is in the field of the cameras. If not, step 1326 is repeated. If so, during a step 1328, the coordinates in the plane of the detected pointer are determined.

  The two image coordinates provided by the cameras for a pointer make it possible to determine the position of the pointer in the user plane, that is to say the interactive surface. The coordinate system used for the computation is a reference frame whose origin is the optical center of the camera shown on the left in Figure 9. The abscissa axis is the horizontal axis oriented towards the camera shown on the right in Figure 9. The distance between the optical centers of the lenses of the cameras defines the unit of length. The ordinate axis points downward, its unit length being equal to that of the abscissa axis.

  The coordinates in the plane thus provided with an orthogonal coordinate system are determined as a function of the angles p1 and p2, formed between the lines starting from each of the optical centers of the lenses and passing through the center of the pointer on the one hand, and the horizontal axis on the other hand, in the following way: Tx = sin(p1) x cos(p2) / sin(p1 + p2) and Ty = sin(p1) x sin(p2) / sin(p1 + p2), where the angles p1 and p2 are functions of the angle between the optical axis of the corresponding camera and the horizontal, and of the abscissa of the center of the image of the pointer formed on the image sensor.

  During a step 1330, it is determined whether at least one new image has been supplied by one of the cameras (only one new image is received at a time if the cameras are not synchronized, and two images are received simultaneously if the cameras are synchronized). If not, step 1330 is repeated. If so, during a step 1332, it is again determined whether at least one new image has been supplied by one of the cameras. If not, step 1332 is repeated. If so, during a step 1334, the coordinates in the plane of the pointer are determined for each pair of images successively received from one or the other of the cameras and, during a step 1336, the first pointer is assigned, as corrected position, the centroid of the last three positions determined from these successive pairs, assigned respective weights 1, 2 and 1.

  Then, during a step 1338, it is determined whether the pointer is a finger or a hand, by estimating the actual diameter of the pointer as a function of the apparent diameters and the real coordinates of the center of the pointer, and comparing it with a threshold value, for example 3 centimeters. Still during step 1338, it is determined whether a gesture (the term gesture also covering immobility) can be identified from the last corrected positions of each pointer and the parameters defining each gesture. If so, during a step 1340, the action corresponding to said gesture and to the active zone in which this gesture is performed is started, and the process proceeds to step 1348. Otherwise, the process proceeds to step 1342.

  During a step 1342, it is determined whether at least one new image has been provided by one of the cameras (only one new image is received at a time if the cameras are not synchronized, and two images are received simultaneously if the cameras are synchronized). If not, step 1342 is repeated. If so, during a step 1344, it is determined whether at least two pointers have been detected. If not, during a step 1346, the coordinates in the plane of the pointer are determined for each pair of images received successively from one or the other of the cameras and, during a step 1348, the first pointer is assigned, as corrected position, the centroid of the last three positions determined from these successive pairs, assigned respective weights 1, 2 and 1.

  If the result of step 1344 is positive, that is to say if at least two pointers are detected, it is determined, during a step 1352, whether one of the pointers hides another in the optical field of at least one camera. If not, during a step 1354, the possible positions of the pointers are determined. For example, there are four possible positions when two pointers are detected by each of the two cameras. Then, during a step 1356, for each old pointer, that is to say for each pointer that has already been detected, the Kalman prediction of the position of said pointer is determined.

  Then, during a step 1358, for each old pointer, the distance between the Kalman prediction and each of the possible positions is calculated.

  During a step 1360, it is determined whether a model break occurs, that is to say whether the minimum of the distances between one Kalman prediction and the possible positions is much smaller (for example at least four times smaller) than the distances between another Kalman prediction and the possible positions. If not, the process proceeds to step 1364. If so, during a step 1362, this position is assigned to the pointer whose Kalman prediction is closest to one of the possible positions, and this step is repeated for the other pointers, eliminating the possible positions corresponding to each position already allocated. The process then proceeds to step 1366.

  During step 1364, the old pointers are assigned the compatible positions that minimize the sum of the distances between these compatible positions and the Kalman predictions. The process then proceeds to step 1366.

  If there is a new pointer, during step 1366, it is assigned the position compatible with the positions of the old pointers determined during step 1362 or 1364.

  Then, during a step 1368, each pointer detected in at least three successive images is assigned, as corrected position, the centroid of the last three positions determined from these successive pairs, assigned respective weights 1, 2 and 1. The process then proceeds to step 1338.

  If the result of step 1352 is positive, that is to say if one of the pointers hides another in the optical field of at least one camera, during a step 1370, the position retained for each pointer is the one that minimizes the sum of the distances between the Kalman predictions and the compatible positions, without updating the Kalman filter.

  Then, during a step 1372, each pointer detected in at least three successive images is assigned, as corrected position, the centroid of the last three positions determined from these successive pairs, assigned respective weights 1, 2 and 1. The process then returns to step 1338.

  Below, we describe a particular embodiment of the present invention implementing several software programs associated with the device illustrated in Figures 1 and 2.

  To be able to operate, the system must have been set up logically with the Installation Kit. The installer is responsible for defining the location of the interactive zone as well as the settings adapted to the system environment. In other words, we define the field of action that must be considered by the system.

  This field of action can never go beyond the previously installed black frame. If the frame exceeds in size the measurements indicated in the installation guide, it is these measurements that will correspond to the limit of the interactive zone.

  Then, the location of the frame relative to the bar is set logically and the system is adjusted to take into account distances and the light environment. This also allows the accuracy of cursor tracking to be checked and improved.

  We then define the screen area (video projection, plasma screen, etc.) and the sensitive areas. These areas will be included in the previously defined field of action.

  The DK (Development Kit) and Reader software, for their part, make it possible to exploit these detections, to organize the interactive zone by dividing it into zones, and to parameterize the behavior of the system during the execution of a gesture by the user.

  The config.roi file is the file that gives the system the location of the frame relative to the camera bar to read the region of interest.

  The val.lum file is the storage file for the system parameters in terms of sensitivity and operating mode (automatic or fixed). It is generated by the Installation Kit but can be modified under the DK and Reader software. The DebugFile.smg file is the file that will be sent to the contact, that is to say to the installer, for example over the Internet, to benefit from remote assistance. It will be used to retrieve the information needed for remote help and remote configuration file generation. The DebugFile.smg file is reset each time the Installation Kit software is used. Under the DK or Reader software, this file is reinitialized upon a request to display the images taken by the cameras, in particular for the verification and correction of the regions of interest, with a pointing of the points corresponding to the extremities and corners of the frame.

  The file with extension 3df is the file in which the configuration of the showcase is stored. This file contains the location of the different active zones, the applications associated with these active zones and the behavior settings, in terms of associations of gestures and actions in each of these zones. This is the file that must be opened explicitly for the settings to be active. All other files are loaded implicitly by the software; they are searched for in the installation directory. This file is loaded by the operator. Thus, for the same environment in terms of bar, frame and light, various behaviors can be obtained.

The Development Kit defines the interactive environment of the installation and its applications. It therefore makes it possible to define: the locations of the specific areas on the showcase, the applications associated with the different zones, and the gesture/action associations of the different zones.

The Development Kit allows the software deployment to be prepared in advance, before installation on site. The 3df file can be generated without being connected to a physical system. This makes it possible to prepare the set of applications that will be used and to test their parameterization under the DK software by simulating a real activation.

The Development Kit thus covers all the preparatory work for the final deployment of the system: prior configuration, online or offline.

The Reader software allows a document previously saved with the DK software to be opened.

It is this version that is launched when the user double-clicks on a document of type 3df.

The Reader software corresponds to daily use of the final installation. The possible settings are minimal and correspond to adapting, during the final deployment, the work done under the DK.

The main screen area is the main area of the system. It is a quadrilateral zone. This zone corresponds to the display zone, i.e. a video projection zone or an area opposite which a flat screen is placed behind the interactive surface.

Within this zone take place cursor movements and gesture detections.

The main zone is defined first because it is used to define the other types of zones. Note that, preferably, if the main area is not exploited by the end users, it can nevertheless be defined, for example small and in a corner, so that it has a physical, exploitable existence without interfering with end users. Sensitive areas are usually simple detection zones (that is to say, without association of gestures and actions), facing a graphic, an object, a poster, an opening, a text or a photo, for example.

When enabled, they can trigger a list of actions (launching applications, sending keyboard shortcuts) and change the setting of the screen area in terms of combination of gestures and actions.

A sensitive zone is an area that corresponds, in automation terms, to a simple button. It is a generally quadrilateral zone.

To activate a simple detection-only sensitive zone, it is enough for the end user to touch it. Typically, on a showcase, a simple sensitive zone sits next to a sticker on the periphery of the main screen area. Each sticker corresponding to a theme, different applications, and therefore a different mode of operation, can be associated with each zone.

  The applications that will be used by the device must have been chosen to match the message that is to be delivered.

As far as gesture and action associations are concerned, the installer can make the most of the possibilities of the device in terms of visual effects: on detection of a horizontal gesture, open a bonus window, change the appearance of objects, and so on.

The .3df file contains: the default configuration of the screen area; the list of posters (sensitive zones) and their configuration (program, resource, option, keyboard messages, and configuration of the screen area after a detection in the sensitive zone); the application to launch when the document is opened; and the level of protection of the 3df file under the Reader software (to set this up, a dialog box appears on the first save).
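The text above only lists the information carried by a .3df document; its actual file format is not described. The following sketch, with field names that are pure assumptions, merely makes that structure concrete.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SensitiveZoneConfig:
    program: str                                   # executable launched on activation
    resource: str = ""                             # optional resource passed to it
    option: str = ""                               # optional command-line option
    keyboard_messages: List[str] = field(default_factory=list)
    screen_config_after_detection: Dict[str, str] = field(default_factory=dict)

@dataclass
class ShowcaseDocument:                            # rough equivalent of one .3df file
    default_screen_config: Dict[str, str]
    sensitive_zones: List[SensitiveZoneConfig]
    startup_application: str                       # launched when the document is opened
    reader_protection_level: int                   # protection asked for on the first save
```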

Use this command to save and name the active document; it displays the Save As dialog box for assigning a name to the document.

  To save a document under its current name and in the current directory, use the save command.

  DK software is used to define a main screen area where users can interact with their hand or finger. This software can also create sensitive areas.

To delimit a screen area, the installer successively points at the four corners of the screen according to the instructions displayed on this screen. Preferably, each corner requires the finger to remain immobile for 1 to 2 seconds, so that the position of the stabilized finger is captured, providing better accuracy.
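A possible way to implement this corner capture, sketched below under assumed names and thresholds: the corner is validated only once the finger has stayed within a small radius for the required hold time, and the retained coordinate is the average of the samples collected during that stable interval.

```python
import math
import time

def capture_corner(read_finger_position, hold_seconds=1.5, radius_px=5.0):
    """read_finger_position() -> (x, y) or None when no finger is detected.
    Returns the averaged position of a finger held still for hold_seconds."""
    samples, start = [], None
    while True:
        time.sleep(0.02)                           # poll at roughly 50 Hz
        pos = read_finger_position()
        if pos is None or (samples and math.dist(pos, samples[0]) > radius_px):
            samples, start = [], None              # finger absent or moved: restart
            continue
        if start is None:
            start = time.monotonic()
        samples.append(pos)
        if time.monotonic() - start >= hold_seconds:
            xs, ys = zip(*samples)
            return (sum(xs) / len(xs), sum(ys) / len(ys))
```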

Once the screen area is delimited, the sensitive zones can be created. The device makes it possible to set the action to be performed when the device is launched and the action to be performed when an active zone is touched. An executable to launch can be chosen, and a resource can be associated with it.

To set the gesture/action behavior associated with an application associated with an active zone, that is to say the triggering of actions based on the gestures discriminated by the device, a "configure active zone" menu is used.

The device offers, for example, the following possible actions:

1. The Click action. A mouse click is generated at the current position. In the case of a click or double-click gesture, the click is generated where the gesture was detected.

2. The Double Click action. A double mouse click is generated at the current position. In the case of a click or double-click gesture, the click is generated where the gesture was detected.

3. The Keyboard Shortcut action. A keyboard message is sent to the application that has the focus: 0 to 9, +, -, ., a to z, F1 to F12, Enter, Space, Escape, Alt, Control, Shift, Windows (registered trademark), and the left, right, up and down arrows.

All of these keys can be combined with the Alt, Control, Shift and Windows keys.

4. The Launch Program action. A program is launched, possibly with an associated resource and option, with a minimum wait of 500 ms before any further action is allowed. This delay prevents several launches of the same application if the gesture is detected several times in quick succession (a point with a short duration).

Paths relative to the location of the program are taken into account. The launched application is not closed if a hotspot is enabled.
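A sketch of the 500 ms guard described for this action, using Python's standard subprocess module purely for illustration (the device's real launcher is not documented here): triggers that arrive during the waiting period are simply ignored, so a gesture detected several times in quick succession starts the program only once.

```python
import subprocess
import time

class ProgramLauncher:
    def __init__(self, min_delay_s=0.5):
        self.min_delay_s = min_delay_s
        self._last_launch = float("-inf")

    def launch(self, program, resource=None, option=None):
        """Launch the program unless another launch happened too recently."""
        now = time.monotonic()
        if now - self._last_launch < self.min_delay_s:
            return None                            # too soon: ignore this trigger
        self._last_launch = now
        cmd = [program]
        if option:
            cmd.append(option)
        if resource:
            cmd.append(resource)
        return subprocess.Popen(cmd)
```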

5. The Held Click action. Everything happens as if the left mouse button were held down. The click is released when no finger is detected in the interactive area any more.

6. The Return to Default Configuration action. This action closes the last application launched by the system, if it can be closed, and restarts the default application with its associated setting. The launched application is closed if a hotspot is enabled.

  The device has a list of detectable gestures.

1. The Pointer gesture. Definition: an immobile finger points at the screen area for a certain duration.

The notion of immobility is concretely expressed by a small area around the finger: if the finger remains within this area during the required time, it is considered immobile. Thus, when a single "pointer" gesture is parameterized, once the pointer gesture has been detected, no further gesture is detected as long as the finger does not leave this area. Use: the pointer must be made with the finger straight (perpendicular to the interactive plane), otherwise a hand will be detected.

For example, if a duration of 3 seconds is set with the launch of an application, you can move your finger without anything happening. If the finger is stabilized (immobile) for 3 seconds, the application starts. It does not restart as long as the finger remains stationary in the same area. On the other hand, as soon as the finger leaves the zone and then stabilizes again (elsewhere or in the same zone), the application can be restarted. Several "pointer" gestures parameterized: suppose two pointer gestures are parameterized with two different duration parameters A and B, with A < B. Let (A) denote the detection of the pointer gesture with duration parameter A ms and (B) the detection of the pointer gesture with duration parameter B ms. If (A) occurs and the finger is moved out of the immobility zone before the duration B has elapsed, then (B) does not take place. If the finger remains stationary for the duration B, action (A) is executed after A ms, then action (B) after B ms.
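The following sketch illustrates this behavior under assumed names and thresholds; it is not the device's own code. A finger counts as immobile while it stays inside a small zone around its first position, and each configured pointer duration (for example A and B with A < B) fires once while the finger remains in that zone; leaving the zone resets everything.

```python
import math

class PointerGestureDetector:
    def __init__(self, durations_s, zone_radius=8.0):
        self.durations = sorted(durations_s)       # e.g. [A, B] with A < B
        self.zone_radius = zone_radius
        self.anchor = None                         # centre of the immobility zone
        self.anchor_time = None
        self.fired = set()

    def update(self, position, t):
        """Feed the current finger position (or None) and a timestamp in seconds;
        returns the durations whose associated action should be triggered now."""
        moved = (position is not None and self.anchor is not None
                 and math.dist(position, self.anchor) > self.zone_radius)
        if position is None or moved:
            self.anchor, self.anchor_time, self.fired = None, None, set()
            if position is None:
                return []
        if self.anchor is None:                    # finger (re)stabilizes here
            self.anchor, self.anchor_time = position, t
        triggered = [d for d in self.durations
                     if d not in self.fired and t - self.anchor_time >= d]
        self.fired.update(triggered)
        return triggered
```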

2. The Double Pointer gesture. While a first immobile finger is pointing at the screen area, a second finger (remote from the first) enters and exits the same screen area.

The constraints of using the double pointer are: the two detections must be far enough apart from each other, otherwise they may be detected as a hand, and the cursor moves between the two points.

3. The Hand gesture. A hand is present in the screen area.

The constraints of using the hand gesture are: if the hand is too close to the bar, it may pose a location problem, and the precision for selecting and tracking the cursor is low, so this gesture must be associated with active areas of sufficient size to allow detection.

4. The Vertical Movement gesture. Vertical movement of the finger characterized by a given duration, amplitude and direction.

An amplitude of 25 cm, for example, is sufficient. To differentiate between cursor tracking during a vertical movement and the vertical movement gesture, a duration of 700 ms is usually appropriate.

The longer the duration, the slower the user's gesture may be to cover the distance.

5. The Horizontal Movement gesture. Horizontal movement of the finger characterized by a given duration, amplitude and direction.

6. The Nothing gesture. This gesture is detected when the user no longer uses the interactivity of the showcase: no more gestures are made and no finger is detected. This gesture is characterized by a duration in ms.

  Typically, it is used to reset the showcase after a user's departure.

7. The Click gesture. The finger taps the showcase with a dry, sharp motion and immediately withdraws. There are no parameters associated with this gesture.

8. The Double-Click gesture. The finger taps the showcase twice in a row, entering and exiting with a dry, lively motion. There are no parameters associated with this gesture.

The double-click gesture corresponds to two "click" gestures made very quickly in the same place. A double click made by the user therefore generates the detection of both a click and a double click.

9. Other gestures. Circular movements of the hand, of a diameter greater than a predetermined diameter, in one direction or the other, may provide two gestures interpreted by the device and associated, alone or in combination with other gestures, with actions, for example controlling the speed and direction of an audiovisual program, a slide show or page turning. Similarly, X or + crosses and N or Z zigzags can provide gestures interpreted by the device and associated, alone or in combination with other gestures, with actions, for example moving to the next offer, putting offers into a selection, pausing and resuming.
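For the horizontal and vertical movement gestures, a sketch under assumed units and interfaces (coordinates in centimetres, positions fed in by the tracker) is given below; the 25 cm amplitude and 700 ms duration quoted above are used as defaults.

```python
class MovementGestureDetector:
    def __init__(self, axis="x", direction=+1, amplitude_cm=25.0, max_duration_s=0.7):
        self.axis = 0 if axis == "x" else 1        # "x": horizontal, "y": vertical
        self.direction = direction                 # +1 or -1
        self.amplitude = amplitude_cm
        self.max_duration = max_duration_s
        self.track = []                            # recent (timestamp, coordinate) pairs

    def update(self, position, t):
        """Feed finger positions; returns True when the movement is detected."""
        if position is None:
            self.track = []
            return False
        self.track.append((t, position[self.axis]))
        # Keep only the samples that fall within the allowed duration window.
        self.track = [(ts, c) for ts, c in self.track if t - ts <= self.max_duration]
        displacement = position[self.axis] - self.track[0][1]
        if self.direction * displacement >= self.amplitude:
            self.track = []                        # reset after a detection
            return True
        return False
```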

Simultaneous parameterization of gestures: the system makes it possible to manage simultaneously: three pointer gesture detections at the same time, provided they have three different durations; a single double-pointer detection; a single hand detection; two horizontal movements, one per direction (the other two numerical parameters, duration and amplitude, can only be set for the first direction; for the second, they must be identical to the first); two vertical movements, one per direction (with the same constraint on duration and amplitude); a click; a double-click; and a Nothing gesture.

Hand and pointer: the combination of the pointer and the hand is possible in the same application but requires a precise pointer. Indeed, if the device does not see the finger sufficiently finely, it sees a hand. This is why care must be taken when combining these two gestures in the same application.

Click and pointer: if the "click" gesture and a "pointer" gesture with a very short duration are set at the same time, it should be noted that a click generates two gesture detections: the pointer and the click.

A single user gesture thus results in the detection of two system gestures.

If both have the effect of generating a mouse click, two mouse clicks are generated in quick succession, which produces a mouse double-click. This behavior can, for example, select text, or prevent the operation of a drop-down list by opening it (first click) and then immediately closing it (second click).

Click, double-click and hand: detection of the click gesture is difficult if the hand gesture is also parameterized, because the position of the finger must then be straight.

  If you tap on the glass with a double-click but with the hand, the double-click gesture will be detected.

Hand and double pointer: the hand and the double pointer are the gestures most likely to be detected undesirably. If they are used together, there is a risk of joint detections.

Hand and horizontal/vertical gestures: if the hand gesture is made dynamically (the user puts his hand into the interactive zone and moves it), the horizontal or vertical gestures can be detected. If these gestures are parameterized together, the following sequence of detections can occur: hand, horizontal, hand.

Pointer and double pointer: if the user makes a pointer, brings in another finger and then removes it, the detection series pointer, second target, pointer can occur. If two "pointer" gestures are set together with the double-pointer gesture, the time needed for pointer1 (the pointer with the shortest duration) must elapse again before it is re-detected; on the other hand, if pointer2 (the pointer with the longest duration) is configured, the sequence pointer1, second target, pointer2 does not occur, the gestures being reset.

For the setting of sensitive areas: a hotspot can be viewed as a single zone (a button) that can be activated. It is a rectangular area. It is associated with a list of actions to perform upon its activation (application launches, keyboard events, etc.) and with a modification of the screen area settings. Activating this zone corresponds to touching the physical part that defines the zone. Typically, on a showcase, this would be a sticker on the periphery of the screen area. Each sticker corresponding to a theme, different applications, and therefore a different mode of operation, can be associated with each zone.
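As a minimal sketch of such a zone, assuming planar coordinates and callable actions (none of these names come from the patent): a rectangular hotspot that runs its list of actions when the detected pointer position falls inside it.

```python
from typing import Callable, List, Tuple

class Hotspot:
    def __init__(self, x0: float, y0: float, x1: float, y1: float,
                 actions: List[Callable[[], None]]):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1
        self.actions = actions                     # run in order on activation

    def contains(self, position: Tuple[float, float]) -> bool:
        x, y = position
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

    def activate_if_touched(self, position) -> bool:
        if position is not None and self.contains(position):
            for action in self.actions:
                action()
            return True
        return False

# Example: a zone next to a sticker that launches a themed application.
sticker_zone = Hotspot(0, 0, 20, 20, [lambda: print("launch themed application")])
sticker_zone.activate_if_touched((12.5, 7.0))
```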

The system has a list of hotspots. Thus, for each zone of the list, it is possible to: delete an existing sensitive zone ("Delete" button); define the actions to be carried out immediately after the activation of the sensitive zone ("Set" button); define the screen zone setting after activation of the hotspot ("Advanced" button); change the location of the selected hotspot ("Repoint" button); and create a new hotspot that is empty and inactive but has the other parameters (post-detection actions and gesture/action setting) of the copied zone ("Create a copy" button). This setting can be made either by selecting the item of the window that lists the sensitive areas and activating the corresponding buttons, or by right-clicking on the representation of the showcase in the existing sensitive zone and activating the desired functionality. Note 1: inactive sensitive zones are grayed out in the list of sensitive areas and are placed to the right of the representation of the showcase.

Note 2: in the absence of a config.cfg file, all the sensitive zones (active or not) are placed to the right of the representation of the showcase, because their positioning cannot be done without a config.cfg file.

In the DK software, the representation of the showcase is an approximate representation of the location of the different areas that form a showcase. It makes it possible to visualize the location of the zones relative to each other, but the positions are not exact.

First example of use of horizontal gestures: a PowerPoint (trademark) slideshow moves to the next page when a horizontal gesture to the right is made and returns to the previous page when a horizontal gesture to the left is made.

To set up such a usage, the installer or the operator goes into the setup / sensitive areas menu of the DK software and adds a new empty hotspot. For its definition, he sets the program to the "PowerPoint Viewer" software (registered trademarks) and, as a resource, a file with the ".ppt" extension (trademark). He does not set an option because PowerPoint Viewer launches files directly in viewer mode.

As parameterization in terms of gesture/action associations, he removes everything that exists and then sets the horizontal gesture associated with a keyboard shortcut.

For the "left to right" direction, he sets a gesture with an amplitude of 15 cm, a tolerance of 10 cm and a duration of 2 s. He sets the keyboard shortcut "Page Down" and clicks Add.

For the "right to left" direction, the parameter values are, by default, identical to those of the other direction. The installer sets the keyboard shortcut "Page Up" and clicks Add.

Second example of use: the installer or the operator has an HTML application in which he wants to move from link to link. In addition, this application contains a specific area where a double mouse click on a position within this area causes a graphic object to move under the double-click position.

This application is associated with an object. This object will be accessible to the end user, who can grab it.

On the representation of the showcase, a circle represents the hole that pierces the interactive surface, through which an object placed on a shelf behind the surface can be grabbed. A rectangle represents the screen area, where cursor tracking is managed. Small areas represent sensitive zones for which stickers have been applied. When the user reaches for the object, his hand passes through the defined area and thus activates the application.

The sensitive area to be defined is a rectangular area that encompasses the circle. By default, taking the object and putting it back would cause the application enabled on detection to be restarted when the user puts the object back. To avoid this, the activation mode of the sensitive zone is changed: mode 2 with infinite duration is used to prevent the application from being reset.

In addition, for the gesture/action settings, the click and double-click gestures (made with the finger) must be associated with the corresponding click actions (simulating the use of the mouse button).

This example of use makes it possible, among other things, to create a withdrawal point for objects bought or rented on the Internet, on a site whose pages are programmed in HTML, the object being accessible only after payment or identification of the user.

Other applications of the present invention include the following. To make an interactive showcase, once the device and the corresponding frame are set up along the showcase, a computer screen image is formed behind the showcase (for example on a computer screen) or on the showcase itself (for example by video projection onto a frosted area of the showcase). The device is then calibrated and actions are associated with the different gestures that can be performed in relation to the different active zones defined on the showcase, possibly with the help of the displayed computer screen image.

In the case where the interactive surface covers openings, for example lockers, taking an object through one of these openings may trigger actions such as adding an amount corresponding to the contents of the locker to a total amount to be paid.

In a particular application, the positions, shapes and trajectories of the pointers, fingers or hands control musical parameters, such as the sound volume, the balance between several loudspeakers, the playback speed of a piece, etc.

In variants, the cameras mentioned in the description are provided with a polarizing filter to reduce the influence of reflections on the support or on the frame. Indeed, these reflections have a particular polarization. In these variants, the polarizing filters preferably have a direction of polarization perpendicular to that of these reflections.

Claims (2)

1 - Method for rendering a surface (170) interactive, characterized in that it comprises, on the one hand, an initialization step which comprises: - a step (1300) of positioning at least two matrix image sensors in the plane of said surface, the optical field of each matrix image sensor covering the whole of said surface, - a step (1300) of positioning at least one light source whose radiation covers the whole of said surface, - a step (1302) of storing a region of interest in the image provided by each of said matrix sensors, said regions of interest representing the image, seen in section, of said surface and of each pointer intersecting said surface, - a step (1320) of storing coordinates of a pointer successively placed at extreme points of at least one active area in said surface, said coordinates coming from the processing of the pointer images provided by the image sensors, and - a step (1322) of associating, with each active zone, at least one action to be performed, and, on the other hand, an exploitation step which comprises: - a step (1326) of determining the position of at least one pointer and, when the position of a pointer corresponds to one of the active zones, a step (1340) of triggering an action associated with said active zone.
2 - Method according to claim 1, characterized in that it further comprises: - a step of positioning at least one matrix display parallel to and facing a portion of said surface, and - a step (1320) of storing coordinates of a pointer placed successively opposite the end points of said matrix display, said coordinates defining an active area, said active area being associated with at least one action to be performed.
3 - Method according to claim 1 or 2, characterized in that it further comprises: - a step (1316) of displaying at least one reference point on said matrix display, - a step (1316) of memorizing coordinates of a pointer placed opposite each said reference point, said coordinates coming from the processing of the pointer images provided by the image sensors, and - a step (1316) of processing said coordinates to determine a correction to apply to the images.
4 - Method according to any one of claims 1 to 3, characterized in that it further comprises: - a step (1328, 1334) of determining successive positions of a pointer in the plane of the surface rendered interactive, by processing the last two images received from the image sensors, each position having two coordinates, and - a position correction step (1336) for the pointer, during which a position that is a function of at least two successive positions is retained as the corrected position.
5 - Method according to claim 4, characterized in that, during the position correction step (1336), the corrected position is the weighted centroid of the last, penultimate and antepenultimate positions, the last and antepenultimate positions each having a weight equal to half the weight of the penultimate position.
6 - Method according to any one of claims 1 to 5, characterized in that it further comprises: - a step (1302) of sequentially positioning a movable light source facing each camera, at at least two positions in the optical field of that camera, along a frame placed in the plane of the surface to be rendered interactive, outside said surface, - for each position of said light source, a step (1302) of storing the coordinates, in the image plane provided by the sensor of said camera, of the point corresponding to said movable light source, and - a step (1302) of determining the region of interest, in the image plane provided by said sensor, said region of interest comprising each line segment drawn between two points corresponding to successive positions of said movable light source.
7 - Method according to any one of claims 1 to 5, characterized in that it further comprises: - a step of pointing, in an image taken by each matrix sensor, at at least two positions corresponding to points of a region of interest, - for each position, a step of memorizing the coordinates, in the image plane provided by the sensor of said camera, of the corresponding point, and - a step of determining the region of interest, in the image plane provided by said sensor, said region of interest comprising each line segment drawn between two successive points whose coordinates have been stored.
8 - Method according to any one of claims 1 to 7, characterized in that, for detecting a pointer, the minimum brightness of the points of each column of pixels of the region of interest of each sensor is determined, this minimum is compared to a brightness threshold value, and the number of consecutive minima exceeding this threshold value is compared to at least one threshold value of apparent width.
9 - Method according to any one of claims 1 to 8, characterized in that it comprises: - a step (1322) of assigning a computer application to at least one movement of the pointer and to at least one active zone or to the entire interactive surface, and - when said movement is detected (1338) in said active area or in the interactive surface, respectively, a step of operating said computer application.
10 - Method according to claim 9, characterized in that, during the step (1322) of assigning a computer application, at least one said movement is defined by its amplitude, its duration or speed, its spatial tolerance, its direction and its sense.
11 - Method according to any one of claims 1 to 10, characterized in that it comprises, when at least two pointers are detected, a step of assigning to at least one of said pointers the one of the possible positions whose distance to a Kalman prediction is minimal (1356 to 1366).
12 - Method according to any one of claims 1 to 11, characterized in that it comprises, when at least two pointers are detected, a step of assigning to the pointers those of the possible positions, compatible with the images of the pointers, which minimize the sum of their distances to the Kalman predictions (1356 to 1366).
13 - Device for rendering a surface (170) interactive, characterized in that it comprises: - two matrix image sensors (120, 160) in the plane of said surface, the optical field of each matrix image sensor covering the whole of said surface, - at least one light source (135) whose radiation covers the whole of said surface, - means (125) for storing a region of interest in the image provided by each of said matrix sensors, said regions of interest representing the image, seen in section, of said surface and of each pointer intersecting said surface, - means (125) for memorizing coordinates of a pointer successively placed at extreme points of at least one active zone in said surface, said coordinates coming from the processing of the pointer images provided by the image sensors, each active zone being assigned at least one action to be performed, and - action triggering means (125) adapted, when, in an operating phase, the position of a pointer corresponds to one of the active zones, to trigger an action associated with said active zone.
FR0506337A 2005-06-22 2005-06-22 Volume or surface e.g. computer screen, interactive rendering device for computing application, has bar with matrix image sensors, and computer that stores region of interest formed of line and column segments of image Withdrawn FR2887660A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
FR0506337A FR2887660A1 (en) 2005-06-22 2005-06-22 Volume or surface e.g. computer screen, interactive rendering device for computing application, has bar with matrix image sensors, and computer that stores region of interest formed of line and column segments of image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0506337A FR2887660A1 (en) 2005-06-22 2005-06-22 Volume or surface e.g. computer screen, interactive rendering device for computing application, has bar with matrix image sensors, and computer that stores region of interest formed of line and column segments of image
PCT/FR2006/001395 WO2006136696A1 (en) 2005-06-22 2006-06-21 Method and device for rendering interactive a volume or surface

Publications (1)

Publication Number Publication Date
FR2887660A1 true FR2887660A1 (en) 2006-12-29

Family

ID=35517401

Family Applications (1)

Application Number Title Priority Date Filing Date
FR0506337A Withdrawn FR2887660A1 (en) 2005-06-22 2005-06-22 Volume or surface e.g. computer screen, interactive rendering device for computing application, has bar with matrix image sensors, and computer that stores region of interest formed of line and column segments of image

Country Status (1)

Country Link
FR (1) FR2887660A1 (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0279652A2 (en) * 1987-02-17 1988-08-24 Sensor Frame Incorporated Method and apparatus for isolating and manipulating graphic objects on computer video monitor
US20030001825A1 (en) * 1998-06-09 2003-01-02 Katsuyuki Omura Coordinate position inputting/detecting device, a method for inputting/detecting the coordinate position, and a display board system
US6570103B1 (en) * 1999-09-03 2003-05-27 Ricoh Company, Ltd. Method and apparatus for coordinate inputting capable of effectively using a laser ray
EP1126236A1 (en) * 2000-02-18 2001-08-22 Ricoh Company, Ltd. Coordinate input/detection device detecting installation position of light-receiving device used for detecting coordinates
US20030085871A1 (en) * 2001-10-09 2003-05-08 E-Business Information Technology Coordinate input device working with at least display screen and desk-top surface as the pointing areas thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEGTERS G R ET AL: "A MATHEMATICAL MODEL FOR COMPUTER IMAGE TRACKING" IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. PAMI-4, no. 6, novembre 1982 (1982-11), pages 583-594, XP000867183 ISSN: 0162-8828 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9721060B2 (en) 2011-04-22 2017-08-01 Pepsico, Inc. Beverage dispensing system with social media capabilities
US9218704B2 (en) 2011-11-01 2015-12-22 Pepsico, Inc. Dispensing system and user interface
US10005657B2 (en) 2011-11-01 2018-06-26 Pepsico, Inc. Dispensing system and user interface

Similar Documents

Publication Publication Date Title
CA2823651C (en) Light-based finger gesture user interface
KR101016981B1 (en) Data processing system, method of enabling a user to interact with the data processing system and computer-readable medium having stored a computer program product
US9298279B2 (en) Cursor control device
US6061064A (en) System and method for providing and using a computer user interface with a view space having discrete portions
US8611667B2 (en) Compact interactive tabletop with projection-vision
US5900863A (en) Method and apparatus for controlling computer without touching input device
US9268413B2 (en) Multi-touch touchscreen incorporating pen tracking
US7538759B2 (en) Touch panel display system with illumination and detection provided from a single edge
EP2082186B1 (en) Object position and orientation detection system
US8842076B2 (en) Multi-touch touchscreen incorporating pen tracking
US7002556B2 (en) Touch responsive display unit and method
US6710770B2 (en) Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device
US7467380B2 (en) Invoking applications with virtual objects on an interactive display
US7593593B2 (en) Method and system for reducing effects of undesired signals in an infrared imaging system
US8115753B2 (en) Touch screen system with hover and click input methods
US7970211B2 (en) Compact interactive tabletop with projection-vision
US7552402B2 (en) Interface orientation using shadows
CN101617271B (en) Enhanced input using flashing electromagnetic radiation
US8466934B2 (en) Touchscreen interface
CN103314391B (en) The user interface method and system based on the natural gesture
US8060840B2 (en) Orientation free user interface
US8971565B2 (en) Human interface electronic device
US20060284874A1 (en) Optical flow-based manipulation of graphical objects
Kratz et al. HoverFlow: expanding the design space of around-device interaction
US8847924B2 (en) Reflecting light

Legal Events

Date Code Title Description
TP Transmission of property
ST Notification of lapse

Effective date: 20120229

RN Application for restoration

Effective date: 20120320

FC Favourable decision of INPI director general on an application for restoration.

Effective date: 20120509

ST Notification of lapse

Effective date: 20130228