EP3973376A1 - System for detecting interactions with a surface - Google Patents
System for detecting interactions with a surface
- Publication number
- EP3973376A1 (application EP20742856.6A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- cluster
- sensors
- interactions
- objects
- detecting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
- G06F3/0428—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means by sensing at the edges of the touch surface the interruption of optical paths, e.g. an illumination plane, parallel to the touch surface which may be virtual
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1423—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3141—Constructional details thereof
- H04N9/3147—Multi-projection systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3191—Testing thereof
- H04N9/3194—Testing thereof including sensor feedback
Definitions
- the present invention relates to a system for detecting interactions with a surface.
- the topology of the system components is designed to obtain the least number of occlusions
- Capacitive surfaces permit detecting the touch of a finger due to a surface which is sensitive to variations in the electric field induced by current passing between the finger and the surface at the contact point. Because of the loss of signal along the electric paths, capacitive surfaces have a maximum size of approximately 2.5 metres per side.
- Interaction is detected when an object of adequate size (greater than 2 cm) interrupts such horizon; therefore, the system can only perceive objects that cross or touch such horizon.
- the static horizon is positioned manually during installation, generally such that it corresponds to the half-plane near the surface to be made interactive.
- Recognition occurs by means of background subtraction operations: for each pixel, the corresponding horizon intensity value captured in the absence of any objects (referred to as background) is subtracted from the horizon intensity value captured by the sensors. After this subtraction operation on the linear horizon, those pixels whose difference exceeds a given static threshold value are considered as foreground pixels (corresponding to the object that has crossed the horizon).
- the system includes an infrared-light illuminator having a linear shape, positioned at the opposite edge, in front of the sensors.
- the horizon must be positioned manually above the linear segment corresponding to the lighted illuminator in the image of the video camera.
- DBscan is used as a clustering algorithm.
- a further optical approach defined as zerotouch builds a multi-touch frame by positioning infrared sensors in a rectangular frame, which can perceive the interactions occurring within.
- Capacitive surfaces do not fulfil requirements 1) and 2) because, in order to detect interaction, they need a current flow and direct contact between the surface and the contact part (finger or hand), so that the user may not wear gloves or any other insulating garments; for this reason, they cannot detect single ungrounded objects thrown towards the wall; moreover, their dimensions cannot exceed a certain limit because of problems in terms of signal propagation along the electric paths.
- the DBscan clustering algorithm allows for object distinction only by analysing the dimension of the interrupted horizon, and does not permit a hierarchical classification and hence a distinction among fingers, hands and objects, which could be obtained by analysing other factors (e.g. shape, input and output speed, colour when using RGB sensors, etc.).
- the sensors are managed by a single controller, and cannot be scaled up beyond a certain limit (due to passband problems, there is a maximum limit as regards the number of sensors that can be connected to a single controller).
- the zerotouch approach while still being an optical system, does not comply with requirement 3) because, due to signal propagation problems, it is not possible to build frames exceeding approximately 3 metres in length.
- Interactive tabletops and other optical systems using rear projection or a rear camera do not comply with requirement 1) because they employ sensors behind the surface, so that it is not possible to use such technologies to make an existing surface become interactive without modifying the original environment.
- optical sensors preferably emit and receive at one frequency only, which is the same as that of the illuminator (in the infrared range), so as to be independent of the type of environmental illumination and of environmental noise.
- Each sensor sees the object from its own point of view, identifying an imaginary line running from the sensor position up to the point where the horizon has been interrupted. Repeating the process for all sensors, it is possible to obtain the positions of the objects by executing a triangulation operation (which will be described below) on the various straight lines passing through the contact point.
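- By way of illustration only, the following minimal Python sketch shows the kind of triangulation just described: each sensor contributes an imaginary line (origin and direction), and the intersection of two such lines gives an estimated contact position. All coordinates, angles and function names are hypothetical.

```python
import numpy as np

def ray(sensor_pos, angle_rad):
    """Imaginary line from a sensor towards the point where its horizon is interrupted."""
    return np.asarray(sensor_pos, float), np.array([np.cos(angle_rad), np.sin(angle_rad)])

def intersect(ray_a, ray_b):
    """Intersect two 2-D rays; returns the estimated contact point, or None if parallel."""
    (p, r), (q, s) = ray_a, ray_b
    cross = r[0] * s[1] - r[1] * s[0]
    if abs(cross) < 1e-9:                      # parallel lines: no usable intersection
        return None
    t = ((q - p)[0] * s[1] - (q - p)[1] * s[0]) / cross
    return p + t * r

# Two sensors on the top edge, each seeing the same finger at a different angle.
point = intersect(ray((0.0, 0.0), np.deg2rad(-60)),
                  ray((0.5, 0.0), np.deg2rad(-120)))
print(point)   # estimated (x, y) of the interaction in surface coordinates
```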
- the present invention aims, therefore, to propose a system for detecting interactions with a surface, which can overcome all of the above-mentioned drawbacks.
- the term "surface" refers to a surface of any kind, even a large one, substantially flat, smooth or characterised by a certain roughness, whether of the active type, such as, for example, a display panel capable of autonomously projecting still or moving images, or of the passive type, i.e. capable of reflecting still or moving images projected from the outside, whether from the rear or from the front with respect to the system installation position, or provided with an image or a pattern impressed thereon. Its orientation is preferably vertical, but other arrangements are possible as well, e.g. horizontal or oblique.
- interaction refers to contact with or vicinity to the surface by one or more objects, typically one or more fingers of one hand or of different hands of persons approaching the surface, but also different types of objects, normally having dimensions comparable with those of human fingers.
- the system of the invention is based on a low-cost technology for making large interactive surfaces.
- Such technology, which is based on an optical approach, allows detecting the presence of multiple entities (objects or parts of the human body) near the surface and determining the position thereof, which may also be variable over time, with respect to the surface.
- the system essentially comprises a series of optical sensors positioned in front of the surface plane, suitably aligned and oriented.
- broader-band sensors are used, which operate in both the visible and the infrared ranges.
- infrared-light illuminators, of which two types are possible:
- illuminator positioned on the opposite side of the surface with respect to the sensors.
- the sensors are positioned in the top edge of the surface and oriented downwards, and the illuminator lies on the bottom edge; nevertheless, the opposite arrangement is possible as well. In this way, the horizon will appear bright, whereas the objects will appear dark (because they will create a shadow in the light coming from the illuminator and directed towards the video camera).
- infrared-light laser illuminator positioned in the same plane as the sensors and oriented, like the sensors, towards the surface to be made interactive. In this way, the objects will be bright, in that they will reflect the light of the illuminator, while the background will be dark.
- the system detects an object (or a person’s fingers) that is touching or approaching the surface by analysing the images coming from each sensor and discerning between the outline of the object and the background.
- the contact-point detection procedure is thus facilitated by the high contrast between the object and the background/horizon.
- the sensors (and the illuminator, if any) create a thin, continuous detection zone (or beam) in front of the surface.
- Detection of the contact point occurs via several image processing operations that allow:
- the main goals of the invention are the following:
- EP2122416 describes an optical sensor realized by means of an algorithmic image-processing approach to interaction computation.
- the interactive-component computation algorithm of the present invention employs a “background subtraction” algorithm for determining those image portions which correspond to objects considered to be interactive.
- Each sensor determines a distinction between "foreground" and "background" by computing the image difference between a background image, calculated as the average of the last n frames, and the current image.
- the above-mentioned patent uses a background produced by a single (atomic) image obtained at a given moment (with the illuminator off).
- the present invention uses an adaptive background produced by an average of images, while the illuminator, if present, is always on.
- the system of the above-mentioned patent employs two optical sensors to be used at distinct times: one sensor is used while the illuminator is on (the contact points are lit), whereas the other sensor is used while the illuminator is off (the scene framed with the illuminator off represents the background to be subtracted from the image coming from the first sensor). The difference between these two images permits computing the background.
- in the system of the present invention:
- each sensor is independent; one distinct image per sensor (and hence one background per sensor) is considered;
- EP2443481 describes an array of infrared-light emitters and an array of sensors (sensitive to infrared light) which are used for computing the position of objects detected as interactive, with an optical approach.
- the system described in such patent utilizes an illuminator only for computing the occlusions and producing a final image where the shadow point is considered as an interaction point.
- the system of the present invention considers the whole scene framed by the video cameras.
- the system described in the above-mentioned patent uses a synchronism between the illuminator and the sensor to produce the image containing the light occlusions (interaction points).
- in the system of the present invention there is no synchronism, and background computation is entrusted to an adaptive "background subtraction" algorithm.
- the sensors are arranged along one edge of the surface
- the infrared-ray emitter, if present, can be positioned at will, whether on the surface side opposite to the sensors or on the same side as the sensors;
- Each sensor is independent; one distinct image per sensor (and hence one background per sensor) is considered;
- EP2443472 describes a system for sensing the direction of a light source within a sensing region.
- the system described in the above patent determines the position of light sources, whereas the system of the present invention detects the interaction of objects which are not light sources (i.e. they do not project self-generated light, but can reflect or block a luminous radiation);
- EP2487624 generally describes a system for detecting the position of objects that reflect a luminous radiation, which are recognised among a predefined set of objects.
- the system of the present invention envisages recognising objects touching the surface (touch), pre-touching the surface (pre-touch), and also recognising certain types of gestures by means of a system for recognising movements over time (touch-start, touch-end, touch-move).
- the system described in the above patent employs a training algorithm for the recognition of objects, which uses a polynomial model representing a set of training points in a multi-dimensional space, whereas the system of the present invention uses a recognition algorithm based on a state machine, feature-based algorithms and a threshold system.
- the present invention relates to a system for detecting interactions with a surface, said surface being substantially flat, said interactions involving contact with or vicinity to the surface by one or more objects, said system comprising:
- a plurality of optical sensors arranged along one edge of said surface (S), capable of generating view cones (FW) that are partially overlapped, independent and provided with computational capacity, generating a continuous detection zone (Z) in front of the surface, which is adapted to effect said interaction detection;
- processing means associated with each independent sensor, adapted for analysing the signals coming from said optical sensors and configured for executing successive detection, recognition and event generation operations, so that said sensors can be grouped into independent modules adapted to be applied to the surface to be made interactive;
- said detection operations comprising: pre-filtering, convolution and "feature-based" algorithms adapted to determine the position of said one or more objects within said continuous detection zone (Z) and to determine the type thereof by discerning among hands, fingers and objects entering a field of view of the sensors;
- recognition operations comprising: triangulation with windowing for computing the positions of the interactions among said positions, hierarchical clustering for determining the interaction type, tracking of said positions to define the variations of said positions within a period of time;
- said event generation operations comprising: transformation of said positions and time variations into displayed events, thereby detecting said interactions.
- the system of the present invention is based on a technology conceived for making a surface become interactive.
- Such technology consists of a subsystem of optical sensors for interaction detection (input), a multi -projection subsystem for image visualisation (output), and a subsystem for managing interactive applications.
- the system may essentially comprise the following parts:
- This subsystem uses optical sensors (C1, ..., C5, Figure 1) arranged along one edge of the surface S, adjacent to and oriented towards such surface.
- Each sensor C comprises an infrared-sensitive video camera VC and a microcomputer mR which, by analysing the images, can determine the position of objects and body parts (typically fingers) approaching and/or touching the surface (detecting) (see also Figure 4).
- an infrared illuminator I oriented towards the centre of the surface and located on the side opposite to the sensors with respect to the surface (Figure 3).
- Each sensor sends information about what has been detected to another microcomputer dealing with the recognition phase (recognition): a hierarchical clustering algorithm (which will be described below) identifies the real interactions by discerning them from noise and recognises the interaction events.
- the interaction events that can be recognised are the following: touch (touch-start, touch-move, touch-end), hand, object-hit, pre-touch.
- Such events are subsequently sent to the driver, which, within the kernel space of the operating system, takes care of generating the events and entering them into the event queue.
- the components of the subsystem are the following:
- each sensor (C1, ..., C5) is composed of an infrared-sensitive video camera and a microcomputer mR that analyses the images, determining the positions where interaction has occurred and sending such information to the next component;
- microcomputer: it analyses what has been detected by the sensors and recognises events;
- n network devices: they provide component interconnection;
- n power supply units: they supply power to the devices.
- the layout of the components is the following:
- modules: the components are organized into modules M (Figure 2), each one containing: 4 sensors, a network device NS (switch), a 5V power supply unit P (for supplying power to the 4 sensors and the switch), and network and power connections;
- the sensors C are arranged along one edge of the surface S, directed towards the centre, with the axis of the view cone FoV (Field of View) (Figures 1, 3) perpendicular to the surface edge, and creating a thin and continuous detection zone (or beam) FW in front of the surface.
- 1 network device: inside the module box, on the side opposite to the sensors with respect to the surface;
- network cable: to the computer/workstation whereon the driver and the interactive software applications have been installed.
- the number of optical sensors employed depends on the chosen field of view FW (aperture angle of the video camera). In order to optimize the multi-finger or multi-object discrimination power, it is preferable that the field of view FW be limited.
- in order to identify N distinct objects, N+1 optical sensors are de facto necessary. For example, if one wants to be able to identify 100 fingers, 101 optical sensors must be included, depending on the size of the wall to be used (5 optical sensors per metre will be optimal). This will overcome the limitations of capacitive surfaces, where, as aforementioned, the signal cannot propagate over long distances.
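- As a purely illustrative back-of-the-envelope check of the figures quoted above (5 sensors per metre, N+1 sensors for N fingers), one might compute the sensor count as follows; the function name and the 20 m wall width are hypothetical.

```python
def sensors_needed(wall_width_m, max_fingers, sensors_per_metre=5):
    """Rule-of-thumb sensor count: coverage (5 per metre) vs. N+1 sensors for N fingers."""
    by_coverage = round(wall_width_m * sensors_per_metre)
    by_fingers = max_fingers + 1
    return max(by_coverage, by_fingers)

print(sensors_needed(wall_width_m=20, max_fingers=100))   # -> 101
```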
- the phases (and subphases) of the subsystem are the following:
- pre-filtering algorithms are applied to provide image processing for noise reduction and detail enhancement, by applying a convolution mask NxN with a Gaussian kernel (as will be described below);
- o sensing: a convolution is applied which determines the position where an object or a body part enters the field of view of the sensor; the information (size, position, shape) detected in this phase, which is still raw, will be subsequently cleaned and analysed in the next phase (recognition);
- o network: the information is sent to the component that will carry out the recognition;
- a triangulation algorithm calculates the positions of the interactions, transforming the information from the reference space of the sensor (camera) to the reference space of the surface;
- a hierarchical clustering algorithm is applied (which will be described below) in order to determine the type of interaction, classifying it on the basis of several parameters (size, interaction time, shape, position taken over time); the triangulation and clustering operations are carried out by considering a window of n sensors, predefined during the calibration phase according to the topology of the subsystem; this approach makes it possible to speed up the calculations, with each window being processed in parallel in an independent manner;
- o tracking: a tracking algorithm analyses the variations occurring in the position of each recognised object over the interaction time, thereby defining the events;
- o network: the information is sent to the component that will generate the events;
- event generation: the computer, whereon the interactive applications are in execution, receives the events from the network, translates such information into events of the operating system, and enters them into the event queue;
- o driver: the events are generated and entered into the event queue of the operating system;
- o network: the events that the operating system cannot handle are sent to the interactive application via web socket;
- the events that can be recognised are the following:
- o touch-start: a finger begins interacting with the surface;
- o touch-move: the finger moves and continues the interaction while remaining in contact with the surface;
- o touch-end: the finger is no longer in contact with the surface;
- o hand-start: a hand begins interacting with the surface;
- o hand-move: the hand moves and continues the interaction while remaining in contact with the surface;
- o hand-end: the hand is no longer in contact with the surface;
- o object-hit: an object hits the surface (atomic, i.e. not tracked over a time interval);
- o pre-touch: a finger approaches the surface.
- image visualization is entrusted to a subsystem capable of projecting the images of n projectors PR1...PRn, calibrated and rectified, on a wall.
- Other on-surface visualization methods are also possible, such as, for example, self-projecting display or fixed pattern.
- the goal is to produce a series of images from the multi-projection system, generating a resulting image which will lie as much as possible within a perfect rectangle on the total interactive surface and which will show no visible discontinuity between adjacent projectors.
- the components of the subsystem are the following:
- n projectors: each projector controls a portion of the interactive surface;
- 1 workstation: the workstation, whereon the interactive applications reside, takes care of dividing the total image, sending it to each projector, and controlling the deformation (warping) and the overlap of two adjacent projections (blending);
- the phases of the subsystem are the following:
- the workstation that handles the multi-projection process computes the image that will have to be displayed by each projector, calculating the shape of the edges and the colour intensity of the pixels in the overlapping zone; the system unites the projectors' images through a blending algorithm that overlaps the lateral strips of adjacent projectors by approximately 20% and executes an interpolation in the overlapping zone according to the formula:
- x is the normalized pixel value from 0 to 1;
- a (from 0 to 1) is the total blending contribution;
- p is the (exponential) contribution of the intensity scale to the formula.
- the blending operates by colour band (red, green, blue) for the purpose of obtaining a better image efficiency.
- the parameters a and p will be different according to the colour and will be calibrated according to the projector type, since projectors of different brands will have different efficiencies.
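- The blending formula itself is not reproduced in this extract; purely as an illustration of the kind of per-band interpolation described above (a ramp over the normalized position x, weighted by a total contribution a and an exponent p), one might sketch it as follows. The specific a and p values are hypothetical placeholders for the per-projector calibration.

```python
import numpy as np

def blend_weight(x, a, p):
    """Illustrative edge-blending ramp (not the patented formula, which is not
    reproduced here): x is the normalized pixel position across the overlap
    strip (0 to 1), a the total blending contribution, p the exponent of the
    intensity scale. Returns the multiplier applied to one projector's pixels."""
    return a * np.power(x, p)

# Per-band weights, since a and p are calibrated separately for red, green and blue.
x = np.linspace(0.0, 1.0, 256)
weights = {band: blend_weight(x, a, p)
           for band, (a, p) in {"red": (1.0, 2.2),
                                "green": (1.0, 2.0),
                                "blue": (1.0, 2.4)}.items()}
```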
- the workstation computes a transformation grid for rectifying the image that will have to be projected on the surface by using a triangle tessellation pattern. Such grid will be used for deforming the images coming from the operating system in order to solve the warping and for computing the images that, once projected, will be rectified.
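- A minimal sketch of the rectification idea, assuming OpenCV and a single homography in place of the triangle-tessellated grid (the grid refines the same mapping patch by patch); the corner coordinates are hypothetical measurements.

```python
import cv2
import numpy as np

# Single-homography stand-in for the transformation grid: the system described
# above uses a triangle-tessellated grid, which refines the same mapping patch
# by patch. The corner coordinates below are hypothetical measurements.
measured = np.float32([[12, 8], [1905, 20], [1890, 1072], [5, 1060]])   # observed footprint corners
ideal = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])        # target rectangle

H = cv2.getPerspectiveTransform(measured, ideal)     # observed -> rectified mapping
observed = np.zeros((1080, 1920, 3), np.uint8)       # e.g. a captured test frame
rectified = cv2.warpPerspective(observed, H, (1920, 1080))
# Applied in the inverse direction, the same mapping can pre-deform the frames
# sent to a projector so that the physical projection appears rectified.
```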
- the components of the subsystem are the following:
- workstation: it handles the events that are generated by the driver and the software applications;
- multi-channel audio: a multi-channel audio system ensures that multiple users can make use of the information at the same time.
- the phases of the subsystem are the following:
- event-handling: the workstation handles the events and sends them to the interactive applications. Two different behaviours are envisaged for handling two event types:
- o standard event: events that comply with the touch standard (touch-start, touch-end, touch-move) are managed natively by the operating system. Therefore, such events are directly entered into the event queue;
- o non-standard event: events that cannot be managed by the operating system (hand, object-hit, pre-touch) are sent to the application via web socket;
- multi-user spatial audio: provided through a special API that handles audio contents (audio files, video files).
- the volume percentage of each loudspeaker AR1...ARn (Figure 8) is computed on the basis of the position of the user on the surface. For example, if a user displays a video in the centre of the surface, the audio will be reproduced at the highest volume by the loudspeaker closest to the user, while the lateral loudspeakers will be muted.
- the active loudspeakers will be those in proximity to the users, so that both users will be able to display different contents with no sound interference.
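- The text only states that the loudspeakers closest to the interaction play at the highest volume while the others are muted; the exact weighting law is not given. A minimal sketch under the assumption of a clipped linear falloff along the wall:

```python
def speaker_volumes(user_x, speaker_xs, falloff_m=2.0):
    """Volume (0..1) of each loudspeaker, decreasing linearly with the
    horizontal distance from the interaction position and clipped to zero."""
    return [max(0.0, 1.0 - abs(user_x - sx) / falloff_m) for sx in speaker_xs]

# Loudspeakers every 2 m along a 10 m wall; the user interacts at x = 5 m.
print(speaker_volumes(5.0, [0, 2, 4, 6, 8, 10]))
# -> [0.0, 0.0, 0.5, 0.5, 0.0, 0.0]   only the loudspeakers near the user play
```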
- a calibration procedure is carried out in order to determine all those system data which are still unknown, such as, for example, the position of each sensor in the surface plane during the first installation, and to update the data whenever the system is altered by external perturbations.
- Some examples of perturbations are micro-variations in the position of the sensors or in the position of the illuminator.
- the calibration procedure comprises the following steps:
- the architecture of the system follows a modular approach.
- the system is divided into modules that can be arranged side by side to cover surfaces of any size.
- the numerous intersections produced by the triangulation step are processed by a hierarchical clustering algorithm, which groups the contact points that fall within a near area. See, for example, Figure 5.
- the algorithm continues to group by increasingly large areas, discriminating between smaller and bigger objects, distinguishing the fingers of a hand and identifying the exact position thereof. This procedure is also useful for determining any false contact points due to the triangulation procedure.
- the points 51 falling within poorly populated clusters are, in fact, labelled as noise and automatically discarded.
- the clustering algorithm is of the bottom-up type.
- Each recognised contact point is classified as belonging to the cluster 52.
- the cluster is analysed by repeatedly comparing it with all the others (B) for the purpose of grouping the points and classifying the group according to a link criterion; for example, the one used herein is Average Linkage;
- if cluster B fulfils the criterion of vicinity to A, then B is entered into the cluster of A and the centroid 54 is updated (Cartesian barycentre of all points belonging to the cluster);
- a parallel calculation is adopted by spatially dividing the contact points along x (the abscissa in pixels) into "windows", since points that are far apart in x will belong to different clusters.
- the window separation threshold for parallel calculation depends on the distance in cm between the points: e.g. 50 cm. In order to execute this step, it is therefore necessary to know first how many pixels are contained in one centimetre, and then divide the distance in pixels by the number of pixels per centimetre.
- if the cluster is formed by at least 4 points and complies with the system's topology, it is classified as a hand;
- the data are correlated in order to detect the type of the element coming in contact with the surface, detecting whether such element is an object or a hand.
- Each sensor, through the use of the already known feature-based SIFT (Scale Invariant Feature Transform) algorithm, analyses the images and computes the features of each detected object. These features are then correlated by the clustering algorithm, which, upon receiving the features of the objects detected by the sensors, will associate the type with the detected cluster, distinguishing between two different cluster sets: object cluster and hand cluster.
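- A minimal sketch of the bottom-up grouping described above, using an off-the-shelf average-linkage agglomeration (SciPy) in place of the patented implementation; the pixel thresholds and the minimum cluster sizes are hypothetical, and the split into windows along x is only indicated in a comment.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_window(points_px, link_threshold_px=40):
    """Average-linkage agglomeration of the triangulated contact points of one
    window (points far apart along x are placed in separate windows and
    processed in parallel). Thresholds are hypothetical placeholders."""
    if len(points_px) < 2:
        return {}
    Z = linkage(points_px, method="average")
    labels = fcluster(Z, t=link_threshold_px, criterion="distance")
    clusters = {}
    for label, p in zip(labels, points_px):
        clusters.setdefault(label, []).append(p)
    return clusters

def classify(clusters, min_points=2, hand_points=4):
    """Discard poorly populated clusters as noise; label well-populated ones as hands."""
    out = []
    for pts in clusters.values():
        if len(pts) < min_points:
            continue                                 # noise: discarded
        centroid = np.mean(pts, axis=0)              # Cartesian barycentre
        kind = "hand" if len(pts) >= hand_points else "finger"
        out.append((kind, centroid))
    return out

points = np.array([[100.0, 40.0], [104.0, 42.0], [108.0, 41.0], [103.0, 45.0], [900.0, 60.0]])
print(classify(cluster_window(points)))   # one 4-point "hand" cluster; the stray point is noise
```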
- One of the calibration goals is to determine the luminous horizon in the image coming from the sensors.
- the standard procedure still requires manual intervention: the user must manually provide two points in the form of pixel coordinates x and y, corresponding to the start and end points of the horizon, for the image of each sensor.
- the novelty introduced by the present invention lies in the fact that this step is automated, assuming that both the surface and the horizon have a regular and linear shape in the sensor’s field of view.
- the automatic recognition of the horizon occurs at preset periodic intervals, typically every 30 seconds, by means of smoothing filters and the "HoughLines" algorithm, which is per se known, e.g. as described in (http://www.ai.sri.com/pubs/files/tn036-duda71.pdf).
- a Gaussian noise reduction filter is applied.
- the threshold value T(x,y) is computed as a weighted sum over a 7x7 neighbourhood (kernel), whose weights follow a Gaussian distribution G (Gaussian window).
- the pixels forming the image of the threshold filter are considered as a binary mask: anything above the threshold is considered as foreground.
- the HoughLines algorithm is applied in order to determine the lines in the images, the longest one parallel to the image base being considered as the horizon (corresponding to the wall as viewed from the camera’s perspective).
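- A minimal OpenCV sketch of the horizon detection steps just described (Gaussian smoothing, Gaussian-weighted 7x7 adaptive threshold, line extraction); the probabilistic HoughLinesP variant and all numeric parameters are assumptions, not the patented values.

```python
import cv2
import numpy as np

def find_horizon(gray_frame):
    """Sketch: smooth, threshold with a Gaussian-weighted 7x7 window, extract
    lines, and keep the longest segment roughly parallel to the image base.
    gray_frame is assumed to be an 8-bit single-channel image."""
    blurred = cv2.GaussianBlur(gray_frame, (7, 7), 0)
    mask = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 7, 2)
    lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=80,
                            minLineLength=gray_frame.shape[1] // 3, maxLineGap=10)
    if lines is None:
        return None
    horizon, best_len = None, 0
    for x1, y1, x2, y2 in lines[:, 0]:
        if abs(y2 - y1) <= 3 and abs(x2 - x1) > best_len:   # near-horizontal and longest
            horizon, best_len = (x1, y1, x2, y2), abs(x2 - x1)
    return horizon
```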
- the horizon determination procedure is an automatic procedure which is necessary upon the first configuration and whenever the system's topology changes.
- Contact-point detection.
- a horizon average is computed in a predefined time window of 5 seconds (i.e. approx. 500 frames, considering 100 frames per second, according to the selected specifications of the optical sensors and video cameras). Each pixel of a new frame will update the average:
- if the pixel differs from the average value by less than a normalized value of 0.1, then it will be considered as an ambient brightness variation and will update the average with a weight of 1/500.
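- A minimal sketch of the running-average update just described, assuming frames normalized to [0, 1]; array shapes and variable names are illustrative.

```python
import numpy as np

def update_background(avg, frame, window=500, tol=0.1):
    """Running-average update of the horizon: pixels whose normalized difference
    from the average stays within tol are treated as ambient brightness
    variations and pulled into the average with weight 1/window; the remaining
    pixels are left to the foreground (contact-point) logic."""
    stable = np.abs(frame - avg) <= tol
    avg[stable] += (frame[stable] - avg[stable]) / window
    return avg, ~stable                    # ~stable marks candidate interaction pixels

avg = np.full((1, 640), 0.8)               # hypothetical 1 x 640 horizon strip
frame = avg.copy(); frame[0, 320] = 0.2    # a finger occludes one pixel
avg, foreground = update_background(avg, frame)
print(foreground.sum())                    # -> 1
```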
- the trajectory of each cluster is identified by analysing the following properties of the consecutive frames:
- the identifier associated with each cluster is a number that is incremented every time a cluster ends its trajectory (disappears) and a next cluster is identified (appears). For each frame, the following sets are considered:
- D: set of clusters detected in the current frame (not yet associated with an id)
- T: set of tracked clusters (with which an id has been associated)
- For each frame, the tracking algorithm must analyse the information in order to modify the set T by:
- at the first frame, set D and set T are empty.
- the clustering algorithm has identified n clusters and the tracking algorithm must analyse the n clusters to update set T, associating each cluster of set D with the clusters of set T of frame t-1.
- the following threshold formula (with weights) is applied, comparing such cluster A with each cluster B of set T at frame t-1
- diffDist, diffVel, diffAcc, diffNCluster, diffDistCluster are, respectively, the differences between the distance, speed, acceleration, number of near clusters and mean distance of the two clusters being compared. If the application of this formula returns a value greater than a given threshold (typically 18.456), then cluster A will be considered as not belonging to the trajectory of cluster B. The threshold value can be modified during the calibration phase. Conversely, if the comparison value is below the threshold, then cluster A will be considered as the next position of cluster B. Therefore, as aforementioned, three different cases may occur:
- Cluster A is entered into set T as the new position of cluster B, and the information (position, speed, acceleration, etc.) is updated, while the identifier id remains unchanged (the identifier of cluster B is kept, in that it has been detected that the same cluster has taken a new position over time).
- cluster A is not associated with any cluster of set T. In this case, it is considered as a new cluster and entered into set T with a new identifier, incrementing by one unit the last assigned identifier. This case occurs when the hand initially touches the surface.
- cluster B is not assigned to any cluster of set D. In this case, cluster B is removed from set T, ending its trajectory. This case occurs when the hand is no longer in contact with the surface.
- the clustering algorithm associates three event types, respectively: touch-move, touch-start, touch-end.
- a further event called finger hold is considered, which is generated when a hand cluster remains in the same position for longer than two seconds.
- the tracking algorithm applies the same calculations while keeping the object cluster set and hand cluster set separate, so that in addition to the above-mentioned events there will also be the following event types: object-move, object-start, object-end, object-hold.
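- A minimal sketch of the per-frame tracking step described above. The weighted comparison mirrors the listed terms (diffDist, diffVel, diffAcc, diffNCluster, diffDistCluster) and the 18.456 decision threshold, but the individual weights, the Cluster fields and the matching strategy are assumptions.

```python
import math
from dataclasses import dataclass

THRESHOLD = 18.456                    # decision threshold quoted in the text
W = (1.0, 0.5, 0.25, 2.0, 0.5)        # hypothetical weights for the five terms

@dataclass
class Cluster:
    x: float
    y: float
    vel: float = 0.0
    acc: float = 0.0
    n_near: int = 0
    mean_dist: float = 0.0
    id: int = -1

def score(a, b):
    """Weighted sum of the differences between cluster A and a tracked cluster B."""
    terms = (math.hypot(a.x - b.x, a.y - b.y), abs(a.vel - b.vel), abs(a.acc - b.acc),
             abs(a.n_near - b.n_near), abs(a.mean_dist - b.mean_dist))
    return sum(w * t for w, t in zip(W, terms))

def track(T, D, next_id):
    """T: clusters tracked at frame t-1 (keyed by id); D: clusters detected at frame t."""
    events, new_T, unmatched = [], {}, dict(T)
    for a in D:
        best = min(unmatched.values(), key=lambda b: score(a, b), default=None)
        if best is not None and score(a, best) < THRESHOLD:
            a.id = best.id                          # same trajectory -> touch-move
            events.append(("touch-move", a.id))
            del unmatched[a.id]
        else:
            a.id, next_id = next_id, next_id + 1    # new trajectory -> touch-start
            events.append(("touch-start", a.id))
        new_T[a.id] = a
    events += [("touch-end", b.id) for b in unmatched.values()]   # ended trajectories
    return new_T, events, next_id

T, events, next_id = track({}, [Cluster(100.0, 40.0)], next_id=0)
print(events)   # -> [('touch-start', 0)]
```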
- the purpose of the calibration is to compute the position of all sensors C1...Cn (Figure 6) by analysing the frames belonging to the various sensors and determining the global reference system in pixel coordinates (x, y) of the screen starting from the reference system of the individual sensors in pixel coordinates of the image coming from the individual camera.
- the guided procedure envisages, for example, a single manual intervention for calibrating the entire system. Subsequent interventions may be required in special cases only.
- the single intervention consists of touching with a finger some points displayed on the screen (two per sensor) to allow the system to automatically determine the position of the sensors.
- a series of points R1...Rn arranged in a grid pattern is shown on the screen S (projection), offset from the edge by approx. 10 cm.
- the distance between one point and another in x and y is, for example, 50 cm.
- the point is highlighted with respect to all the other ones (by changing the colour of the point on the interface), and the user is asked to place a finger on the centre of that point for 3 seconds.
- the point disappears and the next one appears.
- the angles are computed, starting from the positions of the contact points on the horizon of the camera, in order to determine the orientation of the video camera.
- this information is used in order to compute the imaginary line running from the video camera up to the contact point.
- the imaginary line will subsequently be used by the calibration algorithm.
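- A minimal sketch of the pixel-to-angle conversion and of the resulting imaginary line, assuming a simple linear mapping between pixel position and angle across the known field of view; sensor position, heading and all numeric values are hypothetical.

```python
import math

def pixel_to_angle(px, image_width, fov_deg):
    """Angle of a contact point relative to the camera axis, from its pixel
    position on the horizon (simple linear pixel-to-angle approximation)."""
    half_fov = math.radians(fov_deg) / 2.0
    return ((px / (image_width - 1)) * 2.0 - 1.0) * half_fov

def imaginary_line(sensor_pos, heading_rad, px, image_width, fov_deg):
    """Origin and direction of the line from the sensor towards the touched point."""
    angle = heading_rad + pixel_to_angle(px, image_width, fov_deg)
    return sensor_pos, (math.cos(angle), math.sin(angle))

# A calibration touch seen at pixel 512 of a 640-pixel horizon, with a 60-degree FoV:
print(imaginary_line((1.2, 0.0), math.radians(-90), 512, 640, 60))
```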
- the object of the invention consists of a multi-touch interactive system using a multi-projection system combined with optical sensors to create a large, scalable, multi-user interactive surface.
- intersections are computed by positioning the sensors along one edge of the surface, oriented towards the surface that must be made interactive;
- object recognition: the system can recognise objects positioned on the interactive surface and distinguish them based on different characteristics thereof (colour, shape, size);
- the information about the position of the user during the use of the interactive surface is used for calculating the volume levels of the loudspeakers positioned on the same interactive surface, providing sound spatiality.
- Tilt sensor: the system incorporates an acceleration sensor (accelerometer) to be installed on the surface near the modules, which can detect the vibrations caused by the users on the surface during use. This technology is employed for detecting shocks on the surface, generating a tilt event when the forces applied by the users are too strong.
- the event, captured by the software application, is used in order to freeze the interface while issuing a warning (when displays - monitors or projectors - are used) to prompt the user to apply a lighter pressure;
- The "tilt" event is added to the event set.
- Proximity detection: through the adoption of a depth sensor (depth camera) oriented towards the user, the system can detect the presence of proximal users (up to 5 metres away from the surface). This functionality is used in order to freeze the interface when no user is present (screensaver) and release the interface when users are detected in front of the surface. Two events, "user-detected" and "no-user", are added to the event set.
- the present invention can advantageously be implemented by means of a computer program, which comprises coding means for implementing one or more steps of the method when said program is executed by a computer. It is understood, therefore, that the protection scope extends to said computer program and also to computer-readable means that comprise a recorded message, said computer-readable means comprising program coding means for implementing one or more steps of the method when said program is executed by a computer.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Analysing Materials By The Use Of Radiation (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IT102019000007040A IT201900007040A1 (en) | 2019-05-21 | 2019-05-21 | System for detecting interactions with a surface |
PCT/IB2020/054720 WO2020234757A1 (en) | 2019-05-21 | 2020-05-19 | System for detecting interactions with a surface |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3973376A1 true EP3973376A1 (en) | 2022-03-30 |
Family
ID=67875979
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20742856.6A Pending EP3973376A1 (en) | 2019-05-21 | 2020-05-19 | System for detecting interactions with a surface |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3973376A1 (en) |
IT (1) | IT201900007040A1 (en) |
WO (1) | WO2020234757A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116524004B (en) * | 2023-07-03 | 2023-09-08 | 中国铁路设计集团有限公司 | Method and system for detecting size of steel bar based on HoughLines algorithm |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7355593B2 (en) * | 2004-01-02 | 2008-04-08 | Smart Technologies, Inc. | Pointer tracking across multiple overlapping coordinate input sub-regions defining a generally contiguous input region |
CN102831387B (en) | 2005-01-07 | 2016-12-14 | 高通股份有限公司 | Object in detect and track image |
CN101617271B (en) | 2007-02-15 | 2015-07-15 | 高通股份有限公司 | Enhanced input using flashing electromagnetic radiation |
US20130318479A1 (en) * | 2012-05-24 | 2013-11-28 | Autodesk, Inc. | Stereoscopic user interface, view, and object manipulation |
KR101593574B1 (en) * | 2008-08-07 | 2016-02-18 | 랩트 아이피 리미티드 | Method and apparatus for detecting a multitouch event in an optical touch-sensitive device |
CN105182284A (en) | 2009-06-16 | 2015-12-23 | 百安托国际有限公司 | Two-dimensional and three-dimensional position sensing systems and sensors therefor |
WO2010145038A1 (en) | 2009-06-18 | 2010-12-23 | Baanto International Ltd. | Systems and methods for sensing and tracking radiation blocking objects on a surface |
GB201205303D0 (en) * | 2012-03-26 | 2012-05-09 | Light Blue Optics Ltd | Touch sensing systems |
US20140354602A1 (en) * | 2013-04-12 | 2014-12-04 | Impression.Pi, Inc. | Interactive input system and method |
-
2019
- 2019-05-21 IT IT102019000007040A patent/IT201900007040A1/en unknown
-
2020
- 2020-05-19 WO PCT/IB2020/054720 patent/WO2020234757A1/en unknown
- 2020-05-19 EP EP20742856.6A patent/EP3973376A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
IT201900007040A1 (en) | 2020-11-21 |
WO2020234757A1 (en) | 2020-11-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4965653B2 (en) | Virtual controller for visual display | |
US10564731B2 (en) | Processing of gesture-based user interactions using volumetric zones | |
US6775014B2 (en) | System and method for determining the location of a target in a room or small area | |
US8890812B2 (en) | Graphical user interface adjusting to a change of user's disposition | |
KR100886056B1 (en) | Method and apparatus for light input device | |
KR102335132B1 (en) | Multi-modal gesture based interactive system and method using one single sensing system | |
US9996197B2 (en) | Camera-based multi-touch interaction and illumination system and method | |
US10210629B2 (en) | Information processor and information processing method | |
US20060044282A1 (en) | User input apparatus, system, method and computer program for use with a screen having a translucent surface | |
US20110234481A1 (en) | Enhancing presentations using depth sensing cameras | |
US9632592B1 (en) | Gesture recognition from depth and distortion analysis | |
US9041691B1 (en) | Projection surface with reflective elements for non-visible light | |
US9703371B1 (en) | Obtaining input from a virtual user interface | |
EP2302491A2 (en) | Optical touch system and method | |
NZ525717A (en) | A method of tracking an object of interest using multiple cameras | |
JP5510907B2 (en) | Touch position input device and touch position input method | |
WO2020234757A1 (en) | System for detecting interactions with a surface | |
US20100295823A1 (en) | Apparatus for touching reflection image using an infrared screen | |
KR102158613B1 (en) | Method of space touch detecting and display device performing the same | |
US9489077B2 (en) | Optical touch panel system, optical sensing module, and operation method thereof | |
KR102506037B1 (en) | Pointing method and pointing system using eye tracking based on stereo camera | |
US10289203B1 (en) | Detection of an input object on or near a surface | |
Michel et al. | Building a Multi-Touch Display Based on Computer Vision Techniques. | |
Mustafa et al. | Background reflectance modeling for robust finger gesture detection in highly dynamic illumination | |
Bailey et al. | Interactive exploration of historic information via gesture recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20211105 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20230707 |