EP1743277A2 - Verfolgung bimanueller bewegungen - Google Patents
Verfolgung bimanueller Bewegungen (Tracking bimanual movements)
- Publication number
- EP1743277A2 (application EP05733722A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- hands
- hand
- occlusion
- occluded
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Definitions
- This invention relates to data processing.
- BACKGROUND
- Interacting with computers is not limited to the mouse and keyboard. Sensing the movement of a person to recognize his/her gesture is the subject of a wide spectrum of research in Human Computer Interaction and Computer Vision. Recognizing human hand gestures in particular provides computers with a natural method of communication. Applications from medical to surveillance and security may use the technology described herein. Learning and recognizing hand movements are significant components of such technologies.
- Bimanual movements in general form a large subset of hand movements in which both hands move simultaneously in order to do a task or imply a meaning. Clapping, opening a bottle, typing on a keyboard and drumming are some common bimanual movements. Sign Languages also use bimanual movements to accommodate sets of gestures for communication.
- Objects may be tracked using stereo imaging.
- Particle filtering may be used for tracking and resolving occlusion problems.
- Other tracking algorithms may use techniques such as, for example, Bayesian Networks, object model matching based on probabilistic tracking functions, minimization of cost functions, and analytic model matching.
- Several tracking algorithms include non-linear optimizations.
- One or more described implementations allow two hands to be tracked before an occlusion, the occlusion to be identified as such, and the separate hands to be reacquired and tracked after the occlusion.
- the tracking is independent of camera view, of hand shape, and of a changing hand shape such as occurs, for example, when fingers are moving.
- a gesture being performed by the hands may be recognized, including portions of the gesture being performed before, during, and after the occlusion.
- One or more tracking algorithms are able to deal with occlusions in real-time, to track non-rigid objects such as human hands, and are tolerant of changes caused by moving the position of a camera.
- one or more described systems is able to reacquire the hands when occlusion ends, and can do so without requiring the hands to be wearing different color gloves.
- One or more disclosed systems handles the variability of an object's shape due to the object's non-rigid nature. Such a system does not necessarily lose its tracking clue when the shape of the object changes quickly.
- One or more disclosed systems use a tracking algorithm that is independent of the camera view direction. Therefore, a change in the view direction may be tolerated by the algorithm.
- An interesting application of this is tracking hands while the camera moves. Dynamic changes in camera position are often inevitable in active vision applications such as mobile robots.
- Neural Networks are used for recognition in one or more systems, as are Bayesian Networks and in particular Hidden Markov Models (HMM).
- One or more disclosed implementations uses a recognition technique that tolerates hand-hand occlusion. During a bimanual movement one hand may cover the other hand partially or completely.
- One or more disclosed implementations uses a recognition technique that tolerates a hand temporarily moving out of the region of interest. In such a case, two hands are not present over the whole period of a bimanual gesture. A disclosed recognition technique also tolerates a hand being completely occluded by some other object, like the body of a person.
- One or more implementations uses a recognition technique that recognizes continuous (concatenated) periodic bimanual movements. A periodic bimanual movement like clapping typically includes a short cycle of movement of two hands repeated several times. In many Virtual Reality applications, a few bimanual movements are concatenated in order to interact with the virtual environment, and these movements should be recognized and movement transitions should be detected.
- a Cognitive System for tracking the hands of a person, resolving left hand and right hand in the presence of occlusion, and recognizing bimanual movements is presented.
- the two hands of a person are tracked by a novel tracking algorithm based on one or more neuroscience phenomena.
- a gesture recognition algorithm recognizes the movement of each hand and combines the results in order to recognize the performed bimanual movement.
- the system may be useful in tracking and recognizing hand movements for interacting with computers, helping deaf people to communicate with others, and security applications.
- movement of two occluded hands is tracked during an occlusion period, and the two occluded hands are tracked as a unit.
- a type of synchronization is determined that characterizes the two occluded hands during the occlusion period.
- the type of synchronization is based, at least in part, on the tracked movement of the two occluded hands.
- Based at least in part on the determined type of synchronization, it is determined whether directions of travel for each of the two occluded hands change during the occlusion period.
- Implementations may include one or more of the following features. For example, determining whether directions change may be further based on the tracked movement of the two occluded hands. Determining whether directions change may include determining whether the two hands pass each other during the occlusion period, pause during the occlusion period, or collide with each other during the occlusion period.
- Determining whether directions change may include determining whether each of the two hands goes, after the occlusion period, to the direction from which it came, or to the direction opposite from which it came.
- the directions may include one or more of a vertical direction, a horizontal direction, and a diagonal direction.
- Determining a type of synchronization may include determining whether the two hands are positively or negatively synchronized, and determining whether directions change may be further based on whether the two hands are negatively synchronized.
- Determining a type of synchronization may include determining a measure of the occluded hands' velocities. The measure may include a standard deviation of a difference of velocities of parallel sides of a rectangle formed to surround the occluded hands.
- Tracking movement of the two occluded hands may include tracking movement of a rectangle formed to surround the occluded hands, and determining whether directions change may include determining a measure of the occluded hands' velocities based on velocities of one or more sides of the rectangle. Determining whether directions change may be based on whether the measure goes below a threshold.
- the measure may be a function of a square root of a sum of squares of velocities of parallel sides of the rectangle. Determining whether directions change may be based on one or more probability distributions of the measure.
- the measure may be a function of a difference of velocities of parallel sides of the rectangle.
- the one or more probability distributions may include a first set of distributions associated with a first velocity pattern and a second set of distributions associated with a second velocity pattern.
- the first velocity pattern may be indicative of the two hands passing each other during the occlusion period.
- the second velocity pattern may be indicative of the two hands not passing each other during the occlusion period.
- Determining whether directions change may further include determining a first and a second probability, and comparing the first probability with the second probability.
- the first probability may be based on the first set of distributions, and be the probability that the first velocity pattern produced the measure of the occluded hands' velocities.
- the second probability may be based on the second set of distributions, and be the probability that the second velocity pattern produced the measure of the occluded hands' velocities. Based on a result obtained during the comparing, it may be determined whether the two occluded hands passed each other during the occlusion period.
- a first hand and a second hand are occluded, the first hand having come from a first direction and the second hand having come from a second direction.
- the movement of the occluded hands is tracked as a unit.
- a type of synchronization is determined that characterizes the occluded hands.
- the type of synchronization is determined, at least in part, based on the tracked movement of the occluded hands. It is determined that the first hand and the second hand are no longer occluded and, after this determination, the first hand is distinguished from the second hand based at least in part on the determined type of synchronization.
- FIGS. 1(a) and 1(b) show three main components of a particular system and a hierarchy for recognizing bimanual movements.
- FIG. 2 shows a rectangle around each of two hands.
- FIG. 3 shows the rectangles of FIG. 2 overlapping with no hand-hand occlusion.
- FIG. 4 shows a progression of movement of the rectangles of FIG. 2 creating a hand-hand occlusion.
- FIG. 5 shows the rectangles of FIG. 2 modeled by their sides.
- FIG. 6 illustrates a prediction of the intersection of two rectangles.
- FIG. 7 illustrates a scenario in which two hands may be labeled interchangeably in two consecutive images.
- FIGS. 8(a)-8(n) illustrate 14 models of bimanual movements.
- H1 and H2 represent hand number one and hand number two.
- the thick ellipses represent the occlusion areas (a, c, d, e, f, h, i, j, and n), and the solid small rectangles represent collision (b, g, k, and l).
- FIG. 9 illustrates an occlusion-rectangle formed around the big blob of hands.
- FIG. 10 shows a progression of images in which the vertical sides of the occlusion-rectangle are pushed back because the hands pass each other and push the vertical sides in opposite directions.
- FIGS. 11(a) and 11(b) illustrate the velocity changes for movements in which the hands (a) pause/collide and return, or (b) pass each other.
- FIGS. 12(a) and 12(b) illustrate sequences of Gaussian distributions used to model the occlusion-rectangle sides' velocities during the two categories of (a) hand-pause and (b) hand-pass.
- FIG. 13 illustrates hand movements being separated and projected into blank sequences of images.
- FIG. 14 shows an image frame divided into 8 equal regions to represent direction of movement.
- FIG. 15 includes a series of images illustrating hand movement and an extracted vector for the movement.
- FIG. 16 illustrates the segmentation of a bimanual movement over a period of time.
- the separate lines at segments A, C, and D show the separated hands.
- in segment B, the overlapped lines show hand-hand occlusion.
- FIG. 17 shows a Bayesian network for fusing Hidden Markov Models for the recognition of bimanual movements.
- FIG. 18 shows an abstracted Bayesian network, based on FIG. 17, for the recognition of bimanual movements.
- FIG. 19 shows a 2-state left-to-right Hidden Markov Model assigned to partial gestures.
- FIG. 20(a) graphs the local belief of the root node for three concatenated bimanual movements.
- FIGS. 20(b)-(e) isolate various graphs from FIG. 20(a) associated with particular gestures.
- FIG. 21 graphs the local belief of the root node with limited memory for the three concatenated bimanual movements of FIG. 20.
- FIG. 22 shows a hardware implementation
- FIG. 23 illustrates a process for recognizing a bimanual gesture.
- one or more disclosed implementations includes a cognitive system 100 for learning and understanding bimanual movements that entails three fundamental components: low-level processing 110 to deal with sensory data, intelligent hand tracking 120 to recognize the left hand from the right hand, and bimanual movement recognition 130 for recognizing the movements.
- the hands are to be extracted from the images.
- the second component 120 includes hand tracking, which may be complicated by hand-hand occlusion. When one hand covers the other hand partially or completely, the two hands should be reacquired correctly at the end of occlusion.
- An implementation uses a Kalman filtering based technique to monitor hands' velocities, to detect pauses and to recognize synchronization between the hands. By detecting the synchronization and pauses, particularly during a hand-hand occlusion period, the tracking algorithm of an implementation recognizes the right hand from the left hand when occlusion ends.
- the tracking algorithm of one implementation is also used for segmenting a bimanual movement.
- each part of the movement receives a label that indicates whether the part is an occlusion or non-occlusion segment.
- a non-occlusion category may include three different segments, namely beginning, middle, and ending segments. Therefore, the tracking algorithm of the implementation divides a bimanual movement into up to four different segments depending on the nature of the movement.
- the tracking algorithm takes a general view of the tracking problem. For example, from a pure pattern recognition point of view, a movement can be recognized differently when it is seen from different viewing directions.
- a general set of movement models that are generally independent of view direction are defined so that a model can be found for a bimanual movement when it is seen from different viewing angles.
- bimanual synchronization may also make the tracking algorithm of one or more described implementations independent of the hand shapes. Independence of hand shape and view direction may make a tracking algorithm useful in mobile vision applications (e.g., Active Vision in Robotics).
- the tracking algorithm of one implementation contains a model that is independent of the actual positions and velocities of the hands. Consequently, this tracking algorithm can be used in applications where the visual system moves or turns. For instance, assuming that a camera is installed on a mobile robot, the tracker can track the hands of a subject while the robot moves.
- the third component 130 includes gesture recognition, and, referring to FIG. 1(b), may be represented by a hierarchical cognitive system 140.
- System 140 analyzes hand shapes at a bottom level 150, which may use image analysis and pattern recognition for hand shape extraction and detection.
- System 140 learns the individual partial movement of each hand at an intermediate level 160, using, for example, spatio-temporal single-hand gesture recognition.
- System 140 combines the partial movements at a top level 170 to recognize the whole movement.
- Statistical and spatio-temporal pattern recognition methods such as Principal Component Analysis and Hidden Markov Models may be used in the bottom 150 and intermediate 160 levels of the system 140.
- a Bayesian inference network at the top level may perceive the movements as a combination of a set of recognized partial hand movements.
- a bimanual movement may be divided into individual movements of the two hands. Given that the hands may partially or completely occlude each other, or a hand can disappear due to occlusion by another object, the fusion network at the bottom level may be designed to be able to deal with these cases.
- the occlusion and non-occlusion parts of a movement, which are treated as different segments, may be recognized separately.
- Individual Hidden Markov Models at the intermediate level may be assigned to the segments of the gestures of the hands.
- partial movements are recognized at the intermediate level.
- the hand shapes and the movement of each hand in each frame of a given image sequence are recognized and labeled.
- the recognition and labeling may be done at the bottom level of the hierarchy using Principal Component Analysis and motion vector analysis.
- system 140 has been developed so that it learns single movements and recognizes both single and continuous (concatenated) periodic bimanual movements. As mentioned earlier, recognizing continuous movements may be particularly useful in interacting with a virtual environment through virtual reality and immersive technologies.
- Recognition of hand gestures may be more realistic when both hands are tracked and any overlapping is taken into account.
- the gestures of both hands together typically make a single gesture. Movement of one hand in front of the other is one source of occlusion in bimanual movements. Also, for the bimanual movements where there is no occlusion in the essence of the movement, changing the view direction of the camera can cause one hand to be occluded by the other occasionally.
- By using pixel grey-level detection, hands may be extracted from a dark background. In an extracted image, only the pixels with a non-zero value can belong to the hands.
- the Grassfire algorithm may be used in order to extract the hands.
- Grassfire may be described as a region-labelling or blob-analysis algorithm, and the Grassfire algorithm may scan an image from left to right, top to bottom to find the pixels of connected regions with values belonging to the range of the hands' grey-level. For the first pixel found in that range the algorithm turns around the pixel to find other pixels. The algorithm attempts to find all the connected regions and label them. In order to track hands, we detect occlusion. Two types of occlusion are considered here: first, the case where one hand occludes the other, which we call hand-hand occlusion; second, the case in which something else occludes a hand or the hand hides behind another object, e.g., the body, partially or completely.
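- As a rough illustration only, the Grassfire-style scan described above can be sketched as a simple region-labelling flood fill. The grey-level range, the function name, and the use of 4-connectivity below are assumptions, not the patent's exact implementation.

```python
import numpy as np
from collections import deque

def grassfire_label(image, lo=1, hi=255):
    """Label connected regions whose grey levels fall in [lo, hi].

    Scans left-to-right, top-to-bottom; each unvisited in-range pixel
    starts a new blob that is "burned" outward (4-connectivity).
    """
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    n_blobs = 0
    for y in range(h):
        for x in range(w):
            if labels[y, x] == 0 and lo <= image[y, x] <= hi:
                n_blobs += 1
                queue = deque([(y, x)])
                labels[y, x] = n_blobs
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and labels[ny, nx] == 0
                                and lo <= image[ny, nx] <= hi):
                            labels[ny, nx] = n_blobs
                            queue.append((ny, nx))
    return labels, n_blobs
```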
- a rectangle 210, 220 is constructed around each hand in an image.
- the sides of a rectangle represent the top, bottom, left, and right edges of the corresponding hand's blob. Therefore, by moving a hand its rectangle moves in the same way.
- By tracking these rectangles we detect the start and end points of a hand-hand occlusion.
- To detect the beginning point we look at the movement of the rectangles. If at some stage there is any intersection between the rectangles, it could be recognized as occlusion.
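- A minimal sketch of the rectangle-intersection test used to raise the occlusion alarm; the tuple layout of a rectangle is an assumption.

```python
def rectangles_intersect(r1, r2):
    """Overlap test for two axis-aligned hand rectangles, each given
    as (x1, y1, x2, y2) with x1 < x2 and y1 < y2; True corresponds to
    a possible hand-hand occlusion."""
    return (r1[0] < r2[2] and r2[0] < r1[2] and
            r1[1] < r2[3] and r2[1] < r1[3])
```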
- In one implementation, the rectangles' movement is predicted with a Kalman filter based on the standard state and measurement equations $x_{k+1} = \Phi_k x_k + w_k$ and $z_k = H_k x_k + v_k$, where:
- $x_k$: the state vector of the process at time $t_k$
- $\Phi_k$: a matrix relating $x_k$ to $x_{k+1}$
- $w_k$: a white noise sequence with known covariance structure
- $z_k$: measurement vector at time $t_k$
- $H_k$: matrix giving the noiseless connection between the measurement and the state vector at time $t_k$
- $v_k$: measurement error, assumed to be a white noise sequence with known covariance structure.
- rectangles 220 and 210 each include two vertical sides, $x_1^i$ and $x_2^i$, and two horizontal sides, $y_1^i$ and $y_2^i$, where $i$ indexes the rectangle.
- the movement of a rectangle can be modelled by the movement of its sides (see Figure 5). Therefore, Equation 3 is expanded to,
- $x_{1,k}^i$, $x_{2,k}^i$, $y_{1,k}^i$ and $y_{2,k}^i$ are the sides of rectangle $i$ at time $k$; that is, they describe the positions of the sides of rectangle $i$ at time $k$.
- $x$ is position, and the first and second derivatives of $x$ are the velocity and acceleration, respectively.
- Equation 4 is expanded to Equation 7 for $i = 1, 2$.
- $x_1^i$, $x_2^i$, $y_1^i$, and $y_2^i$ are assumed to have continuous first and second order derivatives, denoted by one-dot and double-dot variables, and $h > 0$ is the sampling time.
- Equation 11 predicts the next state of vector $x$ one step in advance. In other words, Equation 11 predicts the position of the rectangle $i$ one step in advance. The prediction can also be performed for more than one step by increasing the power of $\Phi$.
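- Since the filter is built on the kinematic equations of motion, the multi-step prediction can be sketched by raising the transition matrix to the n-th power, as described above. This is a minimal illustration assuming a constant-acceleration state [position, velocity, acceleration] per rectangle side and sampling time h; the noise terms and the measurement-update step of the full Kalman filter are omitted.

```python
import numpy as np

def kinematic_phi(h):
    """Transition matrix for a per-side state [position, velocity,
    acceleration] under constant acceleration with sampling time h."""
    return np.array([[1.0, h, 0.5 * h * h],
                     [0.0, 1.0, h],
                     [0.0, 0.0, 1.0]])

def predict_side(x, h, steps=1):
    """Predict the state of one rectangle side `steps` samples ahead
    by raising the transition matrix to the given power."""
    return np.linalg.matrix_power(kinematic_phi(h), steps) @ x

# Example: a side at 120 px moving left at 8 px/frame, two frames ahead.
x_pred = predict_side(np.array([120.0, -8.0, 0.0]), h=1.0, steps=2)
```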
- the occlusion alarm is set. 4. In the next captured image, if only one hand is detected by Grassfire and the occlusion alarm is already set, hand-hand occlusion is assumed to have happened. Otherwise, if we see one hand in the image and the occlusion alarm is not set, the other type of occlusion (e.g., occlusion by a part of the body or leaving the scene) is assumed to have happened. One or more variables may be set to indicate that occlusion of a particular type has been detected. 5. Image capturing is continued.
- the first shape found in an image is labelled as the first hand.
- a window at time "t" shows a hand 720 labeled "1" because the search finds hand 720 first, and a hand 730 labeled "2" because the search finds hand 730 second.
- a window 740 shows that at time "t+1," hand 720 has moved down slightly, and hand 730 has moved up slightly, such that the left to right, top to bottom search finds hand 730 first and hand 720 second — as indicated by labeling hand 730 with "1" and labeling hand 720 with "2.” Such re-labeling of hands 720 and 730 may cause confusion, but may be avoided if hands 720 and 730 are tracked.
- Another implementation uses the centroids of the hands to track them in a sequence of images. The centroid-based algorithm finds the centroids of the hands and compares them in two consecutive frames. By using this technique we are able to track the hands correctly even when something else occludes them.
- For example, if one of the hands is occluded or gets totally hidden by the body for some moments and then reappears, it can be tracked correctly by keeping records of its last position before occlusion and the position of the other hand. This is expected because when a hand moves behind another object like the body, or moves out of the image frame, it most probably reappears in an area close to its last position before the occlusion. We also have the other hand tracked over the occlusion period. Therefore, if at some point there is only one hand in the image the algorithm may keep tracking the hands properly without any confusion. Other implementations may track the hands using an indicator other than the centroid. In a bimanual movement, when one hand, completely or partially, covers the other hand, the hand extraction algorithm detects one big blob in the images.
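- A sketch of the centroid comparison between consecutive frames; the greedy nearest-neighbour matching and all names are illustrative assumptions, and it presumes no more blobs than previously tracked hands.

```python
import numpy as np

def match_hands(prev_centroids, new_centroids):
    """Greedy nearest-centroid matching between consecutive frames.

    prev_centroids: dict hand_label -> (x, y), the last known position
    of each hand; new_centroids: list of (x, y) blob centroids in the
    current frame (assumed no longer than prev_centroids).
    Returns dict hand_label -> (x, y) for the current frame.
    """
    assigned = {}
    remaining = dict(prev_centroids)
    for cx, cy in new_centroids:
        label = min(remaining, key=lambda k: np.hypot(
            cx - remaining[k][0], cy - remaining[k][1]))
        assigned[label] = (cx, cy)
        del remaining[label]
    return assigned
```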
- a tracking system recognizes these classes and identifies the hands correctly at the end of occlusion.
- clapping can be represented by model g, tying a knot by model j, etc.
- Temporal coordination implies that the hands' velocities are synchronized in bimanual movements. Also, the hands' pauses happen simultaneously. We may exploit the hands' temporal coordination to track the hands in the presence of occlusion.
- a rectangle is constructed around each hand.
- a rectangle 910 around the big blob is formed.
- rectangle 910 the occlusion-rectangle.
- $v_c = \dot{x}_c$: the velocity of side $c$, with the velocities of sides $a$, $b$, and $d$ defined analogously.
- the subscript $k$ indicates the discrete time index, and the defined terms are referred to as "velocities."
- In the movements where the hands either collide or pause (for example, classes 2 and 3), the hands return to the same sides that they were on prior to the occlusion period. In these movements the parallel sides of the rectangle in either the horizontal or vertical direction pause when the hands pause or collide. For example, in models e, f, and l the hands horizontally pause and return to their previous sides. In models g and j they pause and return in both horizontal and vertical directions. The horizontal pauses of the hands are captured by the pauses of the vertical sides of the occlusion-rectangle, and the vertical pauses of the hands are captured by the pauses of the horizontal sides.
- the pauses of the parallel sides are typically simultaneous.
- the parallel sides associated with the horizontal and vertical movements of hands typically pause simultaneously.
- the horizontal sides of the occlusion-rectangle typically pause simultaneously when the hands pause or collide vertically during occlusion.
- the velocities of the horizontal sides of the occlusion-rectangle reach zero. This is captured by $v_{v,k}$ in the hand-pause model.
- a small threshold $\varepsilon > 0$ can provide a safe margin because we are working in discrete time and our images are captured at discrete points in time.
- When $v_{v,k}$ or $v_{h,k}$ falls below the threshold, we conclude that the hands have paused vertically or horizontally. By detecting the pauses in the horizontal or vertical direction we may conclude that the hands have paused or collided and returned to the sides they occupied prior to occlusion in that direction.
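- Assuming the measure is the root-sum-of-squares of a pair of parallel sides' velocities (one of the measure forms mentioned earlier), the pause test can be sketched as follows; the threshold value is illustrative.

```python
import numpy as np

def pair_speed(v1, v2):
    """Combined speed of a pair of parallel occlusion-rectangle sides."""
    return np.sqrt(v1 ** 2 + v2 ** 2)

def pause_detected(v1, v2, eps=0.5):
    """A pause/collision is concluded when the combined speed of the
    relevant side pair falls below a small threshold eps > 0 (the safe
    margin needed because images are sampled at discrete times)."""
    return pair_speed(v1, v2) < eps

# vertical pause of the hands   -> test the horizontal sides a and b
# horizontal pause of the hands -> test the vertical sides c and d
```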
- a window 1010 shows two hands 1020 and 1030 approaching each other, resulting in vertical sides "c” and “d” approaching each other.
- a window 1040 shows, at a point in time later than window 1010, hands 1020 and 1030 pushing past each other such that vertical sides "c" and "d" are pushing away from each other. Therefore, the signs of the velocities change without passing through zero. If no hand pause is detected, we conclude that the hands have passed each other.
- the hand shapes may change during an occlusion period.
- the fingers may also move simultaneously so that the shape ofthe hand changes.
- the movement of fingers may be considered in an attempt to detect simultaneous pauses of the hands.
- Research shows that the fingers and the hand are also coordinated in the movement of one hand. In other words, the hand and fingers are temporally synchronized.
- Our experiments show that the velocity of the hand and the velocity of the fingers are highly synchronized with almost no phase difference. Therefore, the pauses of the hand and the pauses of the fingers that change the hand shape may be expected to happen simultaneously.
- the hand-finger coordination typically guarantees that the velocities of the parallel sides of the rectangle are synchronized and the pauses happen simultaneously, regardless of whether finger movement causes the hands to change shape. This phenomenon typically makes the algorithm independent of the changing hand shape.
- an unwanted pause may be detected in the vertical or horizontal direction because the velocity of the static direction (vertical or horizontal) will be small according to Equation 12. For example, when the hands move only horizontally (see FIG. 8(d)) a vertical pause may be detected, because vertically the hands do not have much movement and the speed of the vertical sides may reach zero.
- N is the number of images (frames) during the occlusion period
- $i$ and $j$ are the frame indices
- $v_k^a$, $v_k^b$, $v_k^c$, and $v_k^d$ are the velocities of sides $a$, $b$, $c$, and $d$ at the $k$-th frame during hand-hand occlusion.
- the tracking is performed based on the pauses of the other sides of the occlusion-rectangle. For example, if a small $s_v$ is observed, we base the tracking on the pauses of the other sides, c and d.
- a small standard deviation in the velocity-synchronization model means that a pair of parallel sides of the rectangle has been positively synchronized with quite similar velocities during occlusion.
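- The velocity-synchronization measure can be sketched as the standard deviation of the difference of two parallel sides' velocities over the N occlusion frames; the helper name is hypothetical.

```python
import numpy as np

def side_sync(v1, v2):
    """Standard deviation of the difference of two parallel sides'
    velocities over the occlusion period; a small value means the
    pair was positively synchronized with quite similar velocities."""
    return float(np.std(np.asarray(v1) - np.asarray(v2)))

# e.g. if side_sync(v_a, v_b) is small, base the tracking decision on
# the pauses of the other pair of sides, c and d.
```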
- the above algorithm tracks the hands during a hand-hand occlusion and makes a decision on the positions of the hands at the end of occlusion with respect to their positions prior to occlusion.
- the above algorithm 2 may be modified in various ways to provide information on the position of the hands after occlusion.
- the form of algorithm 2 presented above typically provides enough information to distinguish the left and right hands after occlusion.
- Implementations of algorithm 2, and other algorithms may provide increased robustness by verifying that (1) the vertical sides are negatively synchronized in step 1, and/or (2) the horizontal sides are negatively synchronized in step 2.
- Another implementation uses a tracking algorithm having a different hand-pause and hand-pass detection methodology.
- the number of images should ideally be large enough so that the velocities converge to zero in the cases of hand collisions and pauses.
- the algorithm should have enough time and images so that the rectangle's sides' velocities reach zero in the cases that a collision or pause occurs.
- the proposed Kalman filter is based on the kinematic equations of motion. Therefore, in a fast movement (with an insufficient number of images), the sides of the occlusion-rectangle have the potential to move further rather than to stop quickly. That is, if the samples are too far apart, the velocities below the threshold may be missed.
- the algorithm may not detect collisions and pauses accurately. Also, in some applications where the visual system moves (e.g., active vision) the velocities may not exactly reach zero. Therefore, we develop a technique to make the algorithm independent of the actual velocities, and investigate the speed changes of the occlusion-rectangle's sides.
- this function results in a velocity equal to the sum of the individual velocities.
- An important feature of this function is that it makes the algorithm independent of the actual velocities. Therefore, in some applications (e.g., active vision) the effect of a constant value added to both velocities is eliminated.
- FIG. 12(a) shows distributions 1205-1240 in the movements where a pause is detected.
- FIG. 12(b) shows distributions 1245-1280 for the movements where the hands pass each other.
- each ellipse 1205-1280 represents a 2-dimensional Gaussian distribution.
- a decision on whether the hands have passed each other or paused and returned is made based on the probabilities that Function 14 for a given movement matches each of the two patterns in FIGS. 12(a) and (b).
- the probabilities are calculated using the following equation,
- $v_O = \{v_O^1, v_O^2, \ldots\}$, where $v_O$ stands for the set of observed velocities over a given occlusion period calculated by Function 14
- $v_O^j = v(j)$ is the observed velocity at time $j$ during occlusion
- $H_k$ is the $k$-th Gaussian distribution in the pattern $H$
- $P(v_O^j \mid H_k)$ is calculated using the multidimensional Gaussian probability density function
- $\sigma_{k,l}$ stands for the standard deviation of distribution $H_k$ on the $l$-th principal axis of the $k$-th distribution
- $\mu_{k,l}$ is the mean of the distribution on the $l$-th principal axis of the $k$-th distribution.
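- As an illustration of scoring the observed velocities against the two Gaussian patterns of FIGS. 12(a) and (b), the sketch below evaluates a 2-D velocity sequence against a sequence of axis-aligned Gaussians. The time alignment by resampling is an assumption; the patent's exact correspondence between observations and distributions is not reproduced here.

```python
import numpy as np

def pattern_log_likelihood(v_obs, means, stds):
    """Log-probability that a pattern of K axis-aligned 2-D Gaussians
    produced the observed velocity sequence.

    v_obs: (T, 2) velocities from the occlusion period (Function 14);
    means, stds: (K, 2) parameters of the pattern's Gaussians. Each
    observation is aligned to a Gaussian by resampling the pattern
    along time (an assumed alignment).
    """
    v_obs = np.asarray(v_obs, dtype=float)
    T, K = len(v_obs), len(means)
    idx = np.minimum(np.arange(T) * K // T, K - 1)
    mu = np.asarray(means, dtype=float)[idx]
    sd = np.asarray(stds, dtype=float)[idx]
    return -0.5 * np.sum(((v_obs - mu) / sd) ** 2
                         + np.log(2.0 * np.pi * sd ** 2))

# the hands are judged to have passed if the "pass" pattern explains
# the observations better than the "pause" pattern:
# passed = (pattern_log_likelihood(v, mu_pass, sd_pass)
#           > pattern_log_likelihood(v, mu_pause, sd_pause))
```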
- the beginning and the end of the occlusion period are detected. 1. If the horizontal sides of the rectangle are positively synchronized
- each hand is tracked and separately projected into a blank sequence of images. For example, two hands 1310 and 1320 on an image 1330 are separately projected onto individual images 1340 and 1350, respectively.
- the direction of movement of each hand is recorded.
- Referring to FIG. 14, to record the direction of movement, we divide a 2-dimensional space of an image frame 1410 into 8 equal regions 1420-1455. We call the divided frame 1410 the regional-map. The index (1-8) of each region represents the direction of movement in that region.
- An index of zero (not shown in frame 1410) represents a stationary hand.
- a vector representing the movement is extracted for every single frame. This vector represents the movement from the last image to the present one.
- a hand 1510 is shown at time “t” in frame 1520 and at time “t+1" in frame 1530.
- the movement of hand 1510 from time “t” to time “t+1” is represented by a vector 1540 in window 1550.
- the angle of the vector with respect to the horizontal axis determines the region in the regional-map onto which the vector maps.
- the region index is recorded for the movement at each time t.
- Implementations may consider the speed ofthe gesture, for example, by determining and analyzing an appropriate magnitude for vector 1540.
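- A sketch of mapping a frame-to-frame motion vector onto the regional-map index: 0 for a (nearly) stationary hand, otherwise 1-8, one index per 45-degree region. Placing region 1 at the positive horizontal axis is an assumed convention.

```python
import numpy as np

def direction_index(dx, dy, eps=1e-3):
    """Map a motion vector (dx, dy) between consecutive frames onto
    the regional-map: 0 if the hand is (nearly) stationary, else the
    index 1-8 of the 45-degree region containing the vector angle."""
    if np.hypot(dx, dy) < eps:
        return 0
    angle = np.arctan2(dy, dx) % (2.0 * np.pi)   # 0 .. 2*pi
    return int(angle // (np.pi / 4.0)) + 1       # 1 .. 8
```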
- a bimanual movement is constituted from two groups of parts, the occlusion parts in which one hand is occluded, and the other parts.
- the parts in which the hands are recognizable separately are called non-occlusion parts. Since a bimanual movement can be a periodic movement, like clapping, we separate different parts, which we call segments. Four segments are obtained as follows:
- Referring to FIG. 16, an example of a segmented bimanual movement is illustrated in window 1610 over the time axis.
- the movement starts and ends in non-occlusion segments
- other implementations extend the algorithm to other cases.
- the process is the same with only one segment (a beginning segment) for the whole gesture.
- In FIG. 16 there are 3 occlusion segments labelled "B," and 2 middle segments labelled "C," as well as a beginning segment labelled "A" and an ending segment labelled "D".
- the implementation is able to deal with multiple occlusion and middle segments as well as the beginning and the ending segments in order to understand the whole bimanual movement.
- the movement of a hand within a segment is treated as a single movement appearing in the sequence of images of the segment.
- These movements are modelled and recognized by Hidden Markov Models, although other models may be used. Therefore, for a bimanual movement we get a set of recognized movements of each of the two hands, and the recognized movements of the occlusion parts. This information is combined to recognize the bimanual movement.
- One implementation uses a Bayesian network in which the whole gesture is divided into the movements of the two hands. Referring to FIG. 17, the movement of each hand is also divided into the four segments through the evidence nodes of BEG, MID, OCC, and END.
- the occluded part of a gesture is a common part for both hands.
- a tree 1700 includes a top node "Bimanual Gesture" 1705, that includes a left-hand gesture node 1710 and a right-hand gesture node 1715.
- Left-hand gesture node 1710 and right-hand gesture node 1715 include BEG evidence nodes 1720 and 1750, respectively, MID evidence nodes 1725 and 1745, respectively, and END evidence nodes 1730 and 1740, respectively, and share a common OCC node 1735.
- each node in this tree represents a multi-valued variable.
- every node is a vector with length g, as shown with vectors 1720a, 1735a, and 1750a.
- the three top nodes of Bimanual Gesture, Left Hand Gesture, and Right Hand Gesture are non-evidence nodes updated by the messages communicated by the evidence nodes.
- the evidence nodes are fed by the Hidden Markov Models of different segments separately, as shown with models 1755a, 1755g, 1760a, 1760g, 1765a, and 1765g.
- the causal tree 1700 can be abstracted to tree 1800 that includes non-occlusion segment nodes (NS nodes) 1810 and 1820, and occlusion segment node (OS node) 1830.
- Node 1810 is associated with vector 1810a, and with models 1840a through 1840g.
- node 1830 is associated with vector 1830a and with models 1850a through 1850g.
- the NS nodes 1810 and 1820 represent the evidences of the beginning, middle, and ending segments at different times for each hand.
- an eigenspace is created for each hand.
- An eigenspace is made by using a set of training images of a hand in a given segment and Principal Component Analysis.
- the covariance matrix of the set of images is made and the eigenvalues and eigenvectors of the covariance matrix are calculated.
- the set of eigenvectors associated with the largest eigenvalues are chosen to form the eigenspace.
- the projection of the set of training images into the eigenspace is the Principal Components.
- a separate eigenspace is also created for the occlusion segments. These eigenspaces are made by the movements in the training set. By projecting all the images of one hand into its own eigenspace a cloud of points is created. Another dimension, the motion vector extracted using the regional-map, is also added to the subspaces.
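- A minimal sketch of building such an eigenspace with Principal Component Analysis, with flattened images as rows; the extra motion-vector dimension is omitted here, and the function names are illustrative.

```python
import numpy as np

def build_eigenspace(train_images, n_components):
    """Eigenspace from training images of one hand (one flattened
    image per row, e.g. 1024 pixels). Returns the mean image and the
    eigenvectors associated with the largest eigenvalues."""
    X = np.asarray(train_images, dtype=float)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    basis = eigvecs[:, ::-1][:, :n_components]   # keep the largest
    return mean, basis

def principal_components(images, mean, basis):
    """Projection of a set of images into the eigenspace."""
    return (np.asarray(images, dtype=float) - mean) @ basis
```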
- a set of codewords is extracted for each eigenspace using Vector Quantization.
- the set of extracted codewords in each eigenspace is used for both training and recognition.
- By projecting a segment of a gesture into the corresponding eigenspace a sequence of codewords is extracted.
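- Extracting the codeword sequence can be sketched as nearest-codeword assignment in the eigenspace, assuming a codebook already produced by vector quantization:

```python
import numpy as np

def codeword_sequence(points, codebook):
    """Replace each projected point (plus its motion-vector dimension)
    by the index of the nearest codeword, yielding the discrete symbol
    sequence fed to the HMMs. points: (T, D); codebook: (C, D)."""
    points = np.asarray(points, dtype=float)
    codebook = np.asarray(codebook, dtype=float)
    d = np.linalg.norm(points[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```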
- a 2-state left-to-right Hidden Markov Model 1900 is assigned to each hand in a non-occlusion segment. Due to the fact that a partial movement of a hand in a segment is normally a short movement, a 2-state HMM is typically suitable to capture the partial movement. Every segment of a gesture has its individual HMMs. Thus, for every gesture in the vocabulary of bimanual movements seven HMMs are assigned: two for the beginning segments for the two hands, one for the occlusion segments, two for the middle segments, and two for the ending segments. By using the extracted sequence of codewords the HMM of each hand in a segment is trained.
- the HMMs of the occlusion segments are trained by the extracted sequence of codewords of the projected images in the corresponding eigenspace. For example, for a vocabulary of 10 bimanual movements, 70 HMMs are created and trained. In the recognition phase the same procedure is performed. A given gesture is segmented. Images of each segment are projected into the corresponding eigenspace and the sequences of codewords are extracted. By employing the trained HMMs, the partial gesture of each hand presented in a segment is recognized. However, we use the HMMs to calculate the likelihoods that a given partial gesture is each of the corresponding partial gestures in the vocabulary.
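- The likelihood that a segment's codeword sequence was produced by a given HMM can be computed with the standard forward algorithm; this is a generic discrete-HMM sketch, not the patent's implementation.

```python
import numpy as np

def forward_log_likelihood(obs, log_pi, log_A, log_B):
    """Log-likelihood of a codeword sequence under a discrete HMM.

    obs: codeword indices; log_pi: (S,) initial log-probabilities;
    log_A: (S, S) transition log-probabilities; log_B: (S, C) emission
    log-probabilities. For the 2-state left-to-right model, S = 2 and
    log_A[1, 0] = -inf (no backward transition).
    """
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = log_B[:, o] + np.logaddexp.reduce(
            alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)
```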
- a normalized vector of the likelihoods for a given partial gesture in a segment is passed to one of the evidence nodes in the Bayesian network of FIG. 18.
- the second scalar in the NS vector 1810a of the left hand is the likelihood that: • In a beginning segment: the given partial gesture is the beginning segment of gesture number 2 in the vocabulary, calculated by the HMM of the beginning segment of the left hand of gesture number 2
- • In a middle segment: the given partial gesture is the middle segment of gesture number 2 in the vocabulary, calculated by the HMM of the middle segment of the left hand of gesture number 2, and so on.
- the occlusion vector, which is fed by the likelihoods of the HMMs of the occlusion segments, is a shared message communicated to the LH and RH nodes and, ultimately, the BG node, as evidence for the two hands.
- the LH, RH, and BG nodes calculate their beliefs, that is, their vectors of the likelihoods of the possible gestures, using, for example, the well-known belief propagation algorithm.
- three sets of training images are extracted from videos of gestures.
- Each image may contain, for example, 1024 pixels.
- eigenspaces of lower dimensionality are determined for the training data.
- the training data is projected into the eigenspace to produce reduced dimensionality training data.
- codewords are determined for the eigenspaces.
- HMMs are then developed using the sequences of codewords corresponding to appropriate segments of the training data for given gestures.
- Images of a given gesture are then projected into the appropriate eigenspace and the closest codewords are determined, producing a sequence of codewords for a given set of images corresponding to a segment of a gesture.
- the sequence of codewords is then fed into the appropriate HMMs (segment and gesture specific) to produce likelihoods that the segment belongs to each of the trained gestures. These likelihoods are then combined using, for example, the belief propagation algorithm.
- the network looks loopy (containing a loop).
- the nodes of BG, LH, OS, and RH form a loop. Therefore, the network does not seem to be singly connected and a message may circulate indefinitely.
- the node OS is an evidence node. Referring to the belief propagation rules of Bayesian networks, the evidence nodes do not receive messages and they always transmit the same vector. Therefore, the NS and OS nodes are not updated by the messages of the LH and RH nodes. In fact, the LH and RH nodes do not send messages to the evidence nodes. Therefore, although the network looks like a loopy network, the occlusion node OS cuts the loop off and no message can circulate in the loop. This enables us to use the belief propagation rules of singly connected networks in this network.
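- Because the evidence nodes only transmit, the belief update at a non-evidence node can be sketched as a normalized elementwise product of the incoming likelihood vectors; this is a simplification of the full belief propagation rules, not the patent's exact update.

```python
import numpy as np

def fuse_evidence(likelihood_vectors):
    """Belief of a non-evidence node as the normalized elementwise
    product of the likelihood vectors transmitted by its children;
    evidence nodes only transmit and are never updated."""
    belief = np.ones_like(np.asarray(likelihood_vectors[0], dtype=float))
    for v in likelihood_vectors:
        belief = belief * np.asarray(v, dtype=float)
    return belief / belief.sum()

# e.g. bg = fuse_evidence([lh_belief, rh_belief]), where each hand's
# belief fused its BEG/MID/END messages with the shared OCC vector.
```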
- a bimanual gesture is segmented by a tracking algorithm
- a sequence of codewords is extracted for each hand using, for example, the Principal Components and the motion vectors
- the vectors of likelihoods are passed into the corresponding NS nodes while the vector of the occlusion node is set to a vector of all 1s
- 3.1. the image sequence of the segment is projected into the eigenspace of the occlusion segments
- 3.2. a sequence of codewords is extracted using the Principal Components and the motion vectors
- 3.3. the vector of likelihoods is calculated and normalized by using the corresponding HMMs
- 3.4. the vector is passed to the OS node
- the nodes' beliefs are updated by the belief propagation algorithm
- the vectors of likelihoods are calculated and normalized by using the corresponding HMMs
- 4.4. the vectors of likelihoods are passed to the corresponding NS nodes
- the nodes' beliefs are updated by the belief propagation algorithm
- The nodes' beliefs are updated by the belief propagation algorithm
- The sequence of codewords is extracted using the Principal Components and the motion vectors
- 7.3. the vectors of likelihoods are calculated and normalized by using the corresponding HMMs
- The vectors are passed to the corresponding NS nodes
- the nodes' beliefs are updated by the belief propagation algorithm
- bimanual movements are periodic in essence. Clapping and drumming are some examples. In the environments where the bimanual movements are used as a communication method, e.g., Virtual Reality, concatenated periodic movements should be recognized.
- the HMM of every segment of a gesture was trained by the 5 samples in the training set. Three bimanual gestures were selected to create concatenated periodic bimanual movements. From the 15 movements, first gesture number 3 was repeated 5 times. It was followed by gesture number 2 repeated 30 times, and followed by gesture number 5 repeated 41 times. Therefore, the first gesture is divided into 11 segments, including a beginning segment, 5 occluded segments separated by 4 middle segments, and an end segment. The second gesture is divided into 61 segments, including a beginning segment, 30 occluded segments, 29 middle segments, and an end segment. The third gesture is divided into 83 segments, including a beginning segment, 41 occluded segments, 40 middle segments, and an end segment. Given the fact that the first segment in the graph of local beliefs represents the belief of the initialization, the first gesture transition should appear in the 13th segment (the beginning segment associated with the second gesture) and the second transition in the 74th segment (the beginning segment associated with the third gesture).
- a plot 2010 shows multiple graphs (15 graphs) including a first graph 2020 for the first gesture, rising at approximately segment 2 to a belief of approximately 1, and falling at approximately segment 12 to a belief of approximately 0.
- Plot 2010 also shows a second graph 2030 for the second gesture, rising at approximately segment 13 to a belief of approximately 1, and falling at approximately segment 73 to a belief of approximately 0.
- Plot 2010 also shows a third graph 2040 for the third gesture, rising at approximately segment 74 to a belief of approximately 1, and stopping at approximately segment 156.
- Plot 2010 shows a fourth graph 2050 having a positive belief around, for example, segment 40.
- Second graph 2030 also includes several dips, particularly around segment 40.
- the belief is higher for the gesture associated with fourth graph 2050 than for the second gesture.
- the gestures are correctly recognized most ofthe time.
- the gesture transitions are detected properly.
- the belief is not very stable and it varies such that at some points it falls below the graphs of other gestures. This happens when the partial gestures of one or two hands are recognized incorrectly. Although the confusion can be treated as temporary spikes, an algorithm may incorrectly determine that the gesture has changed at some points.
- Each of the graphs 2020, 2030, 2040, and 2050 is isolated in one of FIGS. 20(b)-20(e).
- An implementation avoids these confusing spikes by changing the belief propagation algorithm. Specifically, the previous belief of the root node is given greater weight so that temporary confusing evidence does not change the belief easily. To give greater weight to a previous belief, we add memory to the root node of the network. This is done, for example, by treating the current belief of the root node as the prior probability of the node in the next step. When a hypothesis (that one of the gestures in the vocabulary is the correct gesture) is strengthened multiple times by the messages received from the HMMs, many strong pieces of evidence are needed to change this belief.
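- A sketch of the memory mechanism, treating the current root belief as the prior for the next step; this is a minimal recursive update, not the full modified belief propagation.

```python
import numpy as np

def update_root_belief(prev_belief, evidence):
    """Root-node update with memory: the current belief serves as the
    prior for the next step, so one segment of confusing evidence
    cannot easily flip a repeatedly strengthened hypothesis."""
    b = np.asarray(prev_belief, dtype=float) * np.asarray(evidence,
                                                          dtype=float)
    return b / b.sum()
```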
- FIG. 21 shows a first graph 2120, a second graph 2130, and a third graph 2140, corresponding to the first, second, and third gestures, respectively.
- an imaging device 2240 (e.g., a CCD camera) captures sequences of images of a person doing a bimanual movement. The images are transferred to a computing device 2210 running the algorithms described.
- a storage device 2230, such as, for example, a database, contains the training information required by the tracking and recognition algorithms. Storage device 2230 may also store the code for the algorithms. During a training phase, the training information of the tracking algorithm, including the threshold values and distributions, is stored in the storage device 2230.
- a process 2300 may be used to recognize bimanual gestures, and includes many operations discussed in this disclosure.
- Process 2300 includes receiving or otherwise accessing a series of images of a bimanual gesture (2310). Left and right hands are extracted and tracked from the received images (2320) and a hand-hand occlusion is predicted (2330).
- the hand-hand occlusion is detected (2340) and a single blob including both hands is extracted and tracked from the images in which the occlusion exists (2345).
- the synchronization of the left and right hands during the occlusion is determined (2350), the behavior of the hands (whether they passed each other or they paused/collided and returned) is recognized (2355), and the left and right hands are identified after the occlusion ends (2360).
- the left and right hands are extracted and tracked post-occlusion (2365).
- based on the movements in each of the segments (pre-occlusion, occlusion, and post-occlusion), the overall gesture is recognized (2370).
- Determining the synchronization of the left and right hands (2350) may generally involve determining any relationship between the two hands.
- the relationship may be, for example, a relationship between component-velocities of parallel sides of a rectangle surrounding a blob, as described earlier. In other implementations, however, the relationship relates to other characteristics of the hands, or the single blob.
- One variation of process 2300 may be performed by a plug-in to a bimanual gesture recognition engine.
- the plug-in may perform some variation of tracking a blob (2345), determining a type of synchronization (2350), and determining whether the two hands change their direction of travel during the occlusion period.
- a plug-in may be used with a gesture recognition engine that is unable to deal with hand-hand occlusion.
- the gesture recognition engine may track the left and right hands until a hand-hand occlusion occurs, then call the plug-in.
- the plug-in may track the blob, determine if the two hands changed direction during the occlusion, and then transfer control of the recognition process back to the gesture recognition engine. In transferring control back to the gesture recognition engine, the plug-in may tell the gesture recognition engine whether the two hands changed direction during the occlusion.
- the gesture recognition engine can reacquire the left and right hands and continue tracking the two hands.
- Implementations may attempt to discern whether two occluded hands have passed each other, have collided with each other, or have merely paused.
- the result of a pause may typically be the same as the result of a collision: the two hands return to the directions from which they came.
- the velocity profile of a "pause” may be similar to the velocity profile of a "collision,” and any differences may be insignificant given expected noise. However, implementations may attempt to separately detect a "collision” and a "pause.”
- the directions referred to with respect to various implementations may refer, for example, to the direction of the velocity vector or the direction of a component of the velocity vector.
- the direction of a velocity vector may be described as being, for example, a left direction, a right direction, a top direction, a bottom direction, and a diagonal direction.
- Components of a velocity vector may include, for example, a horizontal component and a vertical component.
- Implementations may be applied to tracking bimanual gestures performed by a single person using the person's left and right hands. Other implementations may be applied to gestures being performed by, for example, two people each using a single hand, one or more robots using one or more gesturing devices, or combinations of people and robots or machines, particularly if a coordination similar to the bimanual coordination exists between the hands.
- Implementations may include, for example, a process, a device, or a device for carrying out a process.
- implementations may include one or more devices configured to perform one or more processes.
- a device may include, for example, discrete or integrated hardware, firmware, and software.
- a device may include, for example, computing device 2210 or another computing or processing device, particularly if programmed to perform one or more described processes or variations thereof.
- Such computing or processing devices may include, for example, a processor, an integrated circuit, a programmable logic device, a personal computer, a personal digital assistant, a game device, a cell phone, a calculator, and a device containing a software application.
- Implementations also may be embodied in a device that includes one or more computer readable media having instructions for carrying out one or more processes.
- the computer readable media may include, for example, storage device 2230, memory 2220, and formatted electromagnetic waves encoding or transmitting instructions.
- Computer readable media also may include, for example, a variety of non-volatile or volatile memory structures, such as, for example, a hard disk, a flash memory, a random access memory, a read-only memory, and a compact diskette. Instructions may be, for example, in hardware, firmware, software, and in an electromagnetic wave.
- computing device 2210 may represent an implementation of a computing device programmed to perform a described implementation
- storage device 2230 may represent a computer readable medium storing instructions for carrying out a described implementation.
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- User Interface Of Digital Computer (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US56232604P | 2004-04-15 | 2004-04-15 | |
PCT/US2005/013033 WO2005104010A2 (en) | 2004-04-15 | 2005-04-15 | Tracking bimanual movements |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1743277A2 true EP1743277A2 (de) | 2007-01-17 |
EP1743277A4 EP1743277A4 (de) | 2011-07-06 |
Family
ID=35197616
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP05733722A Withdrawn EP1743277A4 (de) | 2004-04-15 | 2005-04-15 | Verfolgung bimanueller bewegungen |
Country Status (5)
Country | Link |
---|---|
US (3) | US7379563B2 (de) |
EP (1) | EP1743277A4 (de) |
JP (1) | JP4708422B2 (de) |
CN (1) | CN100573548C (de) |
WO (1) | WO2005104010A2 (de) |
Families Citing this family (442)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8352400B2 (en) | 1991-12-23 | 2013-01-08 | Hoffberg Steven M | Adaptive pattern recognition based controller apparatus and method and human-factored interface therefore |
US7904187B2 (en) | 1999-02-01 | 2011-03-08 | Hoffberg Steven M | Internet appliance system and method |
US8035612B2 (en) * | 2002-05-28 | 2011-10-11 | Intellectual Ventures Holding 67 Llc | Self-contained interactive video display system |
US8300042B2 (en) * | 2001-06-05 | 2012-10-30 | Microsoft Corporation | Interactive video display system using strobed light |
US7259747B2 (en) | 2001-06-05 | 2007-08-21 | Reactrix Systems, Inc. | Interactive video display system |
US6990639B2 (en) | 2002-02-07 | 2006-01-24 | Microsoft Corporation | System and process for controlling electronic components in a ubiquitous computing environment using multimodal integration |
US7710391B2 (en) | 2002-05-28 | 2010-05-04 | Matthew Bell | Processing an image utilizing a spatially varying pattern |
AU2003301043A1 (en) | 2002-12-13 | 2004-07-09 | Reactrix Systems | Interactive directed light/sound system |
US7665041B2 (en) * | 2003-03-25 | 2010-02-16 | Microsoft Corporation | Architecture for controlling a computer using hand gestures |
US8745541B2 (en) | 2003-03-25 | 2014-06-03 | Microsoft Corporation | Architecture for controlling a computer using hand gestures |
US9182937B2 (en) | 2010-10-01 | 2015-11-10 | Z124 | Desktop reveal by moving a logical display stack with gestures |
US9047047B2 (en) * | 2010-10-01 | 2015-06-02 | Z124 | Allowing multiple orientations in dual screen view |
US7038661B2 (en) * | 2003-06-13 | 2006-05-02 | Microsoft Corporation | Pointing device and cursor for use in intelligent computing environments |
US7536032B2 (en) | 2003-10-24 | 2009-05-19 | Reactrix Systems, Inc. | Method and system for processing captured image information in an interactive video display system |
CN1902930B (zh) * | 2003-10-24 | 2010-12-15 | 瑞克楚斯系统公司 | 管理交互式视频显示系统的方法和系统 |
US20050227217A1 (en) * | 2004-03-31 | 2005-10-13 | Wilson Andrew D | Template matching on interactive surface |
CN100573548C (zh) | 2004-04-15 | 2009-12-23 | 格斯图尔泰克股份有限公司 | 跟踪双手运动的方法和设备 |
US7394459B2 (en) | 2004-04-29 | 2008-07-01 | Microsoft Corporation | Interaction between objects and a virtual environment display |
JP4172793B2 (ja) * | 2004-06-08 | 2008-10-29 | 株式会社東芝 | ジェスチャ検出方法、ジェスチャ検出プログラムおよびジェスチャ検出装置 |
US7787706B2 (en) * | 2004-06-14 | 2010-08-31 | Microsoft Corporation | Method for controlling an intensity of an infrared source used to detect objects adjacent to an interactive display surface |
US7593593B2 (en) * | 2004-06-16 | 2009-09-22 | Microsoft Corporation | Method and system for reducing effects of undesired signals in an infrared imaging system |
US8560972B2 (en) | 2004-08-10 | 2013-10-15 | Microsoft Corporation | Surface UI for gesture-based interaction |
JP3970877B2 (ja) * | 2004-12-02 | 2007-09-05 | National Institute of Advanced Industrial Science and Technology | Tracking device and tracking method |
HUE049974T2 (hu) * | 2005-01-07 | 2020-11-30 | Qualcomm Inc | Detecting and tracking objects in images |
EP1851750A4 (de) | 2005-02-08 | 2010-08-25 | Oblong Ind Inc | System and method for a gesture-based control system |
US8111873B2 (en) * | 2005-03-18 | 2012-02-07 | Cognimatics Ab | Method for tracking objects in a scene |
KR100687737B1 (ko) * | 2005-03-19 | 2007-02-27 | Electronics and Telecommunications Research Institute | Virtual mouse apparatus and method based on two-handed gestures |
US9128519B1 (en) | 2005-04-15 | 2015-09-08 | Intellectual Ventures Holding 67 Llc | Method and system for state-based control of objects |
US8081822B1 (en) * | 2005-05-31 | 2011-12-20 | Intellectual Ventures Holding 67 Llc | System and method for sensing a feature of an object in an interactive video display |
US7911444B2 (en) * | 2005-08-31 | 2011-03-22 | Microsoft Corporation | Input method for surface of interactive display |
US7697827B2 (en) | 2005-10-17 | 2010-04-13 | Konicek Jeffrey C | User-friendlier interfaces for a camera |
JP2007122218A (ja) * | 2005-10-26 | 2007-05-17 | Fuji Xerox Co Ltd | Image analysis device |
JP4682820B2 (ja) * | 2005-11-25 | 2011-05-11 | Sony Corporation | Object tracking device, object tracking method, and program |
US8098277B1 (en) | 2005-12-02 | 2012-01-17 | Intellectual Ventures Holding 67 Llc | Systems and methods for communication between a reactive video system and a mobile communication device |
US8060840B2 (en) * | 2005-12-29 | 2011-11-15 | Microsoft Corporation | Orientation free user interface |
US9075441B2 (en) * | 2006-02-08 | 2015-07-07 | Oblong Industries, Inc. | Gesture based control using three-dimensional information extracted over an extended depth of field |
US8370383B2 (en) | 2006-02-08 | 2013-02-05 | Oblong Industries, Inc. | Multi-process interactive systems and methods |
US9823747B2 (en) | 2006-02-08 | 2017-11-21 | Oblong Industries, Inc. | Spatial, multi-modal control device for use with spatial operating system |
US8537112B2 (en) * | 2006-02-08 | 2013-09-17 | Oblong Industries, Inc. | Control system for navigating a principal dimension of a data space |
US8537111B2 (en) * | 2006-02-08 | 2013-09-17 | Oblong Industries, Inc. | Control system for navigating a principal dimension of a data space |
US9910497B2 (en) * | 2006-02-08 | 2018-03-06 | Oblong Industries, Inc. | Gestural control of autonomous and semi-autonomous systems |
US8531396B2 (en) | 2006-02-08 | 2013-09-10 | Oblong Industries, Inc. | Control system for navigating a principal dimension of a data space |
TW200805111A (en) * | 2006-07-14 | 2008-01-16 | Asustek Comp Inc | Method for controlling the function of application software and computer readable recording medium for storing program thereof |
US7907117B2 (en) * | 2006-08-08 | 2011-03-15 | Microsoft Corporation | Virtual controller for visual displays |
KR100783552B1 (ko) * | 2006-10-11 | 2007-12-07 | Samsung Electronics Co., Ltd. | Method and apparatus for input control of a portable terminal |
US8356254B2 (en) * | 2006-10-25 | 2013-01-15 | International Business Machines Corporation | System and method for interacting with a display |
JP4771543B2 (ja) * | 2006-11-14 | 2011-09-14 | The University of Electro-Communications | Object recognition system, object recognition method, and object recognition robot |
KR100790896B1 (ko) * | 2006-11-17 | 2008-01-03 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling an application using motion of an imaging unit |
EP2087469A2 (de) * | 2006-12-01 | 2009-08-12 | Thomson Licensing | Estimating the location of an object in an image |
US20080156989A1 (en) * | 2006-12-28 | 2008-07-03 | O2Micro Inc. | Motion sensing/recognition by camera applications |
JP4934810B2 (ja) * | 2006-12-28 | 2012-05-23 | Kyushu Institute of Technology | Motion capture method |
US8212857B2 (en) | 2007-01-26 | 2012-07-03 | Microsoft Corporation | Alternating light sources to reduce specular reflection |
US8005238B2 (en) | 2007-03-22 | 2011-08-23 | Microsoft Corporation | Robust adaptive beamforming with enhanced noise suppression |
KR101055554B1 (ko) * | 2007-03-27 | 2011-08-23 | Samsung Medison Co., Ltd. | Ultrasound system |
US20080252596A1 (en) * | 2007-04-10 | 2008-10-16 | Matthew Bell | Display Using a Three-Dimensional vision System |
US8577126B2 (en) * | 2007-04-11 | 2013-11-05 | Irobot Corporation | System and method for cooperative remote vehicle behavior |
JP5905662B2 (ja) | 2007-04-24 | 2016-04-20 | Oblong Industries, Inc. | Proteins, pools, and slawx processing environment |
WO2008137708A1 (en) * | 2007-05-04 | 2008-11-13 | Gesturetek, Inc. | Camera-based user input for compact devices |
US8005237B2 (en) | 2007-05-17 | 2011-08-23 | Microsoft Corp. | Sensor array beamformer post-processor |
US9261979B2 (en) * | 2007-08-20 | 2016-02-16 | Qualcomm Incorporated | Gesture-based mobile interaction |
CN107102723B (zh) * | 2007-08-20 | 2019-12-06 | Qualcomm Incorporated | Method, apparatus, device, and non-transitory computer-readable medium for gesture-based mobile interaction |
AU2008299883B2 (en) | 2007-09-14 | 2012-03-15 | Facebook, Inc. | Processing of gesture-based user interactions |
US8629976B2 (en) | 2007-10-02 | 2014-01-14 | Microsoft Corporation | Methods and systems for hierarchical de-aliasing time-of-flight (TOF) systems |
JP5228439B2 (ja) * | 2007-10-22 | 2013-07-03 | Mitsubishi Electric Corporation | Operation input device |
US8005263B2 (en) * | 2007-10-26 | 2011-08-23 | Honda Motor Co., Ltd. | Hand sign recognition using label assignment |
US8159682B2 (en) | 2007-11-12 | 2012-04-17 | Intellectual Ventures Holding 67 Llc | Lens system |
US9171454B2 (en) * | 2007-11-14 | 2015-10-27 | Microsoft Technology Licensing, Llc | Magic wand |
WO2009085233A2 (en) * | 2007-12-21 | 2009-07-09 | 21Ct, Inc. | System and method for visually tracking with occlusions |
US8259163B2 (en) | 2008-03-07 | 2012-09-04 | Intellectual Ventures Holding 67 Llc | Display with built in 3D sensing |
JP5029470B2 (ja) * | 2008-04-09 | 2012-09-19 | Denso Corporation | Prompter-type operation device |
US10642364B2 (en) | 2009-04-02 | 2020-05-05 | Oblong Industries, Inc. | Processing tracking and recognition data in gestural recognition systems |
US9684380B2 (en) | 2009-04-02 | 2017-06-20 | Oblong Industries, Inc. | Operating environment with gestural control and multiple client devices, displays, and users |
US9952673B2 (en) | 2009-04-02 | 2018-04-24 | Oblong Industries, Inc. | Operating environment comprising multiple client devices, multiple displays, multiple users, and gestural control |
US8723795B2 (en) | 2008-04-24 | 2014-05-13 | Oblong Industries, Inc. | Detecting, representing, and interpreting three-space input: gestural continuum subsuming freespace, proximal, and surface-contact modes |
US9495013B2 (en) | 2008-04-24 | 2016-11-15 | Oblong Industries, Inc. | Multi-modal gestural interface |
US9740922B2 (en) | 2008-04-24 | 2017-08-22 | Oblong Industries, Inc. | Adaptive tracking system for spatial input devices |
US9740293B2 (en) | 2009-04-02 | 2017-08-22 | Oblong Industries, Inc. | Operating environment with gestural control and multiple client devices, displays, and users |
US8113991B2 (en) * | 2008-06-02 | 2012-02-14 | Omek Interactive, Ltd. | Method and system for interactive fitness training program |
US8595218B2 (en) | 2008-06-12 | 2013-11-26 | Intellectual Ventures Holding 67 Llc | Interactive display management systems and methods |
KR101652535B1 (ko) * | 2008-06-18 | 2016-08-30 | Oblong Industries, Inc. | Gesture-based control system for vehicle interfaces |
US8385557B2 (en) | 2008-06-19 | 2013-02-26 | Microsoft Corporation | Multichannel acoustic echo reduction |
US8325909B2 (en) | 2008-06-25 | 2012-12-04 | Microsoft Corporation | Acoustic echo suppression |
US8194921B2 (en) * | 2008-06-27 | 2012-06-05 | Nokia Corporation | Method, apparatus and computer program product for providing gesture analysis |
US8203699B2 (en) | 2008-06-30 | 2012-06-19 | Microsoft Corporation | System architecture design for time-of-flight system having reduced differential pixel size, and time-of-flight systems so designed |
US8847739B2 (en) | 2008-08-04 | 2014-09-30 | Microsoft Corporation | Fusing RFID and vision for surface object tracking |
US20100031202A1 (en) * | 2008-08-04 | 2010-02-04 | Microsoft Corporation | User-defined gesture set for surface computing |
JP5250827B2 (ja) * | 2008-09-19 | 2013-07-31 | Hitachi, Ltd. | Method and system for generating behavior history |
DE102008052928A1 (de) * | 2008-10-23 | 2010-05-06 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Device, method, and computer program for recognizing a gesture in an image, and device, method, and computer program for controlling an apparatus |
US20100105479A1 (en) | 2008-10-23 | 2010-04-29 | Microsoft Corporation | Determining orientation in an external reference frame |
US10086262B1 (en) | 2008-11-12 | 2018-10-02 | David G. Capper | Video motion capture for wireless gaming |
US9586135B1 (en) | 2008-11-12 | 2017-03-07 | David G. Capper | Video motion capture for wireless gaming |
US9383814B1 (en) | 2008-11-12 | 2016-07-05 | David G. Capper | Plug and play wireless video game |
US8681321B2 (en) | 2009-01-04 | 2014-03-25 | Microsoft International Holdings B.V. | Gated 3D camera |
US8448094B2 (en) * | 2009-01-30 | 2013-05-21 | Microsoft Corporation | Mapping a natural input device to a legacy system |
US7996793B2 (en) | 2009-01-30 | 2011-08-09 | Microsoft Corporation | Gesture recognizer system architecture |
US8588465B2 (en) | 2009-01-30 | 2013-11-19 | Microsoft Corporation | Visual target tracking |
US8295546B2 (en) | 2009-01-30 | 2012-10-23 | Microsoft Corporation | Pose tracking pipeline |
US8565476B2 (en) | 2009-01-30 | 2013-10-22 | Microsoft Corporation | Visual target tracking |
US8565477B2 (en) | 2009-01-30 | 2013-10-22 | Microsoft Corporation | Visual target tracking |
US8487938B2 (en) | 2009-01-30 | 2013-07-16 | Microsoft Corporation | Standard Gestures |
US8267781B2 (en) | 2009-01-30 | 2012-09-18 | Microsoft Corporation | Visual target tracking |
US8577085B2 (en) | 2009-01-30 | 2013-11-05 | Microsoft Corporation | Visual target tracking |
US8294767B2 (en) | 2009-01-30 | 2012-10-23 | Microsoft Corporation | Body scan |
US8682028B2 (en) | 2009-01-30 | 2014-03-25 | Microsoft Corporation | Visual target tracking |
US9652030B2 (en) | 2009-01-30 | 2017-05-16 | Microsoft Technology Licensing, Llc | Navigation of a virtual plane using a zone of restriction for canceling noise |
US20100199231A1 (en) * | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Predictive determination |
US8577084B2 (en) | 2009-01-30 | 2013-11-05 | Microsoft Corporation | Visual target tracking |
US8624962B2 (en) | 2009-02-02 | 2014-01-07 | YDreams - Informatica, S.A. | Systems and methods for simulating three-dimensional virtual interactions from two-dimensional camera images |
US8517834B2 (en) * | 2009-02-17 | 2013-08-27 | Softkinetic Studios Sa | Computer videogame system with body position detector that requires user to assume various body positions |
WO2010096279A2 (en) * | 2009-02-17 | 2010-08-26 | Omek Interactive , Ltd. | Method and system for gesture recognition |
US8423182B2 (en) | 2009-03-09 | 2013-04-16 | Intuitive Surgical Operations, Inc. | Adaptable integrated energy control system for electrosurgical tools in robotic surgical systems |
US8773355B2 (en) | 2009-03-16 | 2014-07-08 | Microsoft Corporation | Adaptive cursor sizing |
US8988437B2 (en) | 2009-03-20 | 2015-03-24 | Microsoft Technology Licensing, Llc | Chaining animations |
US9256282B2 (en) | 2009-03-20 | 2016-02-09 | Microsoft Technology Licensing, Llc | Virtual object manipulation |
US9313376B1 (en) | 2009-04-01 | 2016-04-12 | Microsoft Technology Licensing, Llc | Dynamic depth power equalization |
US10824238B2 (en) | 2009-04-02 | 2020-11-03 | Oblong Industries, Inc. | Operating environment with gestural control and multiple client devices, displays, and users |
US9317128B2 (en) | 2009-04-02 | 2016-04-19 | Oblong Industries, Inc. | Remote devices used in a markerless installation of a spatial operating environment incorporating gestural control |
US8649554B2 (en) | 2009-05-01 | 2014-02-11 | Microsoft Corporation | Method to control perspective for a camera-controlled computer |
US9377857B2 (en) * | 2009-05-01 | 2016-06-28 | Microsoft Technology Licensing, Llc | Show body position |
US8253746B2 (en) | 2009-05-01 | 2012-08-28 | Microsoft Corporation | Determine intended motions |
US8181123B2 (en) | 2009-05-01 | 2012-05-15 | Microsoft Corporation | Managing virtual port associations to users in a gesture-based computing environment |
US8660303B2 (en) | 2009-05-01 | 2014-02-25 | Microsoft Corporation | Detection of body and props |
US8340432B2 (en) | 2009-05-01 | 2012-12-25 | Microsoft Corporation | Systems and methods for detecting a tilt angle from a depth image |
US9498718B2 (en) | 2009-05-01 | 2016-11-22 | Microsoft Technology Licensing, Llc | Altering a view perspective within a display environment |
US8942428B2 (en) | 2009-05-01 | 2015-01-27 | Microsoft Corporation | Isolate extraneous motions |
US9898675B2 (en) | 2009-05-01 | 2018-02-20 | Microsoft Technology Licensing, Llc | User movement tracking feedback to improve tracking |
US9015638B2 (en) | 2009-05-01 | 2015-04-21 | Microsoft Technology Licensing, Llc | Binding users to a gesture based system and providing feedback to the users |
US8503720B2 (en) | 2009-05-01 | 2013-08-06 | Microsoft Corporation | Human body pose estimation |
US8638985B2 (en) * | 2009-05-01 | 2014-01-28 | Microsoft Corporation | Human body pose estimation |
US9417700B2 (en) * | 2009-05-21 | 2016-08-16 | Edge3 Technologies | Gesture recognition systems and related methods |
US9400559B2 (en) | 2009-05-29 | 2016-07-26 | Microsoft Technology Licensing, Llc | Gesture shortcuts |
US8542252B2 (en) * | 2009-05-29 | 2013-09-24 | Microsoft Corporation | Target digitization, extraction, and tracking |
US8509479B2 (en) | 2009-05-29 | 2013-08-13 | Microsoft Corporation | Virtual object |
US8856691B2 (en) | 2009-05-29 | 2014-10-07 | Microsoft Corporation | Gesture tool |
US8320619B2 (en) | 2009-05-29 | 2012-11-27 | Microsoft Corporation | Systems and methods for tracking a model |
US9383823B2 (en) | 2009-05-29 | 2016-07-05 | Microsoft Technology Licensing, Llc | Combining gestures beyond skeletal |
US8418085B2 (en) | 2009-05-29 | 2013-04-09 | Microsoft Corporation | Gesture coach |
US9182814B2 (en) | 2009-05-29 | 2015-11-10 | Microsoft Technology Licensing, Llc | Systems and methods for estimating a non-visible or occluded body part |
US20100302365A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Depth Image Noise Reduction |
US8625837B2 (en) | 2009-05-29 | 2014-01-07 | Microsoft Corporation | Protocol and format for communicating an image from a camera to a computing environment |
US20100306685A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | User movement feedback via on-screen avatars |
US8744121B2 (en) | 2009-05-29 | 2014-06-03 | Microsoft Corporation | Device for identifying and tracking multiple humans over time |
US8379101B2 (en) | 2009-05-29 | 2013-02-19 | Microsoft Corporation | Environment and/or target segmentation |
US8693724B2 (en) | 2009-05-29 | 2014-04-08 | Microsoft Corporation | Method and system implementing user-centric gesture control |
US8487871B2 (en) | 2009-06-01 | 2013-07-16 | Microsoft Corporation | Virtual desktop coordinate transformation |
US8390680B2 (en) | 2009-07-09 | 2013-03-05 | Microsoft Corporation | Visual representation expression based on player expression |
US9159151B2 (en) | 2009-07-13 | 2015-10-13 | Microsoft Technology Licensing, Llc | Bringing a visual representation to life via learned input from the user |
US8264536B2 (en) | 2009-08-25 | 2012-09-11 | Microsoft Corporation | Depth-sensitive imaging via polarization-state mapping |
US9141193B2 (en) | 2009-08-31 | 2015-09-22 | Microsoft Technology Licensing, Llc | Techniques for using human gestures to control gesture unaware programs |
US8508919B2 (en) | 2009-09-14 | 2013-08-13 | Microsoft Corporation | Separation of electrical and optical components |
US8330134B2 (en) | 2009-09-14 | 2012-12-11 | Microsoft Corporation | Optical fault monitoring |
JP2011065303A (ja) * | 2009-09-16 | 2011-03-31 | Brother Industries Ltd | Input system, input device, and input method |
US8760571B2 (en) | 2009-09-21 | 2014-06-24 | Microsoft Corporation | Alignment of lens and image sensor |
US8976986B2 (en) | 2009-09-21 | 2015-03-10 | Microsoft Technology Licensing, Llc | Volume adjustment based on listener position |
US8428340B2 (en) | 2009-09-21 | 2013-04-23 | Microsoft Corporation | Screen space plane identification |
US9014546B2 (en) | 2009-09-23 | 2015-04-21 | Rovi Guides, Inc. | Systems and methods for automatically detecting users within detection regions of media devices |
US8452087B2 (en) * | 2009-09-30 | 2013-05-28 | Microsoft Corporation | Image selection techniques |
US8723118B2 (en) | 2009-10-01 | 2014-05-13 | Microsoft Corporation | Imager for constructing color and depth images |
US8564534B2 (en) | 2009-10-07 | 2013-10-22 | Microsoft Corporation | Human tracking system |
US8963829B2 (en) | 2009-10-07 | 2015-02-24 | Microsoft Corporation | Methods and systems for determining and tracking extremities of a target |
US7961910B2 (en) | 2009-10-07 | 2011-06-14 | Microsoft Corporation | Systems and methods for tracking a model |
US8867820B2 (en) | 2009-10-07 | 2014-10-21 | Microsoft Corporation | Systems and methods for removing a background of an image |
US9971807B2 (en) | 2009-10-14 | 2018-05-15 | Oblong Industries, Inc. | Multi-process interactive systems and methods |
US9933852B2 (en) | 2009-10-14 | 2018-04-03 | Oblong Industries, Inc. | Multi-process interactive systems and methods |
US9400548B2 (en) | 2009-10-19 | 2016-07-26 | Microsoft Technology Licensing, Llc | Gesture personalization and profile roaming |
US8698888B2 (en) * | 2009-10-30 | 2014-04-15 | Medical Motion, Llc | Systems and methods for comprehensive human movement analysis |
US8988432B2 (en) | 2009-11-05 | 2015-03-24 | Microsoft Technology Licensing, Llc | Systems and methods for processing an image for target tracking |
US8843857B2 (en) | 2009-11-19 | 2014-09-23 | Microsoft Corporation | Distance scalable no touch computing |
KR20110055062A (ko) * | 2009-11-19 | 2011-05-25 | Samsung Electronics Co., Ltd. | Robot system and control method thereof |
US8325136B2 (en) | 2009-12-01 | 2012-12-04 | Raytheon Company | Computer display pointer device for a display |
US9244533B2 (en) | 2009-12-17 | 2016-01-26 | Microsoft Technology Licensing, Llc | Camera navigation for presentations |
US20110150271A1 (en) | 2009-12-18 | 2011-06-23 | Microsoft Corporation | Motion detection using depth images |
US8320621B2 (en) | 2009-12-21 | 2012-11-27 | Microsoft Corporation | Depth projector system with integrated VCSEL array |
US8631355B2 (en) | 2010-01-08 | 2014-01-14 | Microsoft Corporation | Assigning gesture dictionaries |
US9019201B2 (en) | 2010-01-08 | 2015-04-28 | Microsoft Technology Licensing, Llc | Evolving universal gesture sets |
US9268404B2 (en) | 2010-01-08 | 2016-02-23 | Microsoft Technology Licensing, Llc | Application gesture interpretation |
US8933884B2 (en) | 2010-01-15 | 2015-01-13 | Microsoft Corporation | Tracking groups of users in motion capture system |
US8334842B2 (en) | 2010-01-15 | 2012-12-18 | Microsoft Corporation | Recognizing user intent in motion capture system |
US8676581B2 (en) | 2010-01-22 | 2014-03-18 | Microsoft Corporation | Speech recognition analysis via identification information |
US8265341B2 (en) | 2010-01-25 | 2012-09-11 | Microsoft Corporation | Voice-body identity correlation |
US20110187678A1 (en) * | 2010-01-29 | 2011-08-04 | Tyco Electronics Corporation | Touch system using optical components to image multiple fields of view on an image sensor |
US8864581B2 (en) | 2010-01-29 | 2014-10-21 | Microsoft Corporation | Visual based identitiy tracking |
US8891067B2 (en) | 2010-02-01 | 2014-11-18 | Microsoft Corporation | Multiple synchronized optical sources for time-of-flight range finding systems |
US8687044B2 (en) | 2010-02-02 | 2014-04-01 | Microsoft Corporation | Depth camera compatibility |
US8619122B2 (en) | 2010-02-02 | 2013-12-31 | Microsoft Corporation | Depth camera compatibility |
US8717469B2 (en) | 2010-02-03 | 2014-05-06 | Microsoft Corporation | Fast gating photosurface |
US8499257B2 (en) | 2010-02-09 | 2013-07-30 | Microsoft Corporation | Handles interactions for human-computer interface |
US8659658B2 (en) | 2010-02-09 | 2014-02-25 | Microsoft Corporation | Physical interaction zone for gesture-based user interfaces |
DE102010007455A1 (de) | 2010-02-10 | 2011-08-11 | Ident Technology AG, 82234 | System and method for contactless detection and recognition of gestures in three-dimensional space |
US8522308B2 (en) * | 2010-02-11 | 2013-08-27 | Verizon Patent And Licensing Inc. | Systems and methods for providing a spatial-input-based multi-user shared display experience |
US8633890B2 (en) | 2010-02-16 | 2014-01-21 | Microsoft Corporation | Gesture detection based on joint skipping |
US20110199302A1 (en) * | 2010-02-16 | 2011-08-18 | Microsoft Corporation | Capturing screen objects using a collision volume |
US8928579B2 (en) | 2010-02-22 | 2015-01-06 | Andrew David Wilson | Interacting with an omni-directionally projected display |
US8655069B2 (en) | 2010-03-05 | 2014-02-18 | Microsoft Corporation | Updating image segmentation following user input |
US8422769B2 (en) * | 2010-03-05 | 2013-04-16 | Microsoft Corporation | Image segmentation using reduced foreground training data |
US8411948B2 (en) | 2010-03-05 | 2013-04-02 | Microsoft Corporation | Up-sampling binary images for segmentation |
US20110223995A1 (en) | 2010-03-12 | 2011-09-15 | Kevin Geisner | Interacting with a computer based application |
US8279418B2 (en) | 2010-03-17 | 2012-10-02 | Microsoft Corporation | Raster scanning for depth detection |
US8213680B2 (en) | 2010-03-19 | 2012-07-03 | Microsoft Corporation | Proxy training data for human body tracking |
US8514269B2 (en) | 2010-03-26 | 2013-08-20 | Microsoft Corporation | De-aliasing depth images |
US8523667B2 (en) | 2010-03-29 | 2013-09-03 | Microsoft Corporation | Parental control settings based on body dimensions |
US8605763B2 (en) | 2010-03-31 | 2013-12-10 | Microsoft Corporation | Temperature measurement and control for laser and light-emitting diodes |
US9646340B2 (en) | 2010-04-01 | 2017-05-09 | Microsoft Technology Licensing, Llc | Avatar-based virtual dressing room |
US9098873B2 (en) | 2010-04-01 | 2015-08-04 | Microsoft Technology Licensing, Llc | Motion-based interactive shopping environment |
US8818027B2 (en) * | 2010-04-01 | 2014-08-26 | Qualcomm Incorporated | Computing device interface |
JP6203634B2 (ja) | 2010-04-09 | 2017-09-27 | ZOLL Medical Corporation | System and method for an EMS device communication interface |
US8351651B2 (en) | 2010-04-26 | 2013-01-08 | Microsoft Corporation | Hand-location post-process refinement in a tracking system |
US8379919B2 (en) | 2010-04-29 | 2013-02-19 | Microsoft Corporation | Multiple centroid condensation of probability distribution clouds |
US8593402B2 (en) | 2010-04-30 | 2013-11-26 | Verizon Patent And Licensing Inc. | Spatial-input-based cursor projection systems and methods |
US8284847B2 (en) | 2010-05-03 | 2012-10-09 | Microsoft Corporation | Detecting motion for a multifunction sensor device |
GB2480140B (en) * | 2010-05-04 | 2014-11-12 | Timocco Ltd | System and method for tracking and mapping an object to a target |
US8498481B2 (en) | 2010-05-07 | 2013-07-30 | Microsoft Corporation | Image segmentation using star-convexity constraints |
US8885890B2 (en) | 2010-05-07 | 2014-11-11 | Microsoft Corporation | Depth map confidence filtering |
US20110289455A1 (en) * | 2010-05-18 | 2011-11-24 | Microsoft Corporation | Gestures And Gesture Recognition For Manipulating A User-Interface |
US8457353B2 (en) | 2010-05-18 | 2013-06-04 | Microsoft Corporation | Gestures and gesture modifiers for manipulating a user-interface |
US8396252B2 (en) | 2010-05-20 | 2013-03-12 | Edge 3 Technologies | Systems and related methods for three dimensional gesture recognition in vehicles |
US8803888B2 (en) | 2010-06-02 | 2014-08-12 | Microsoft Corporation | Recognition system for sharing information |
US9008355B2 (en) | 2010-06-04 | 2015-04-14 | Microsoft Technology Licensing, Llc | Automatic depth camera aiming |
US8751215B2 (en) | 2010-06-04 | 2014-06-10 | Microsoft Corporation | Machine based sign language interpreter |
US9557574B2 (en) | 2010-06-08 | 2017-01-31 | Microsoft Technology Licensing, Llc | Depth illumination and detection optics |
US8330822B2 (en) | 2010-06-09 | 2012-12-11 | Microsoft Corporation | Thermally-tuned depth camera light source |
US8749557B2 (en) | 2010-06-11 | 2014-06-10 | Microsoft Corporation | Interacting with user interface via avatar |
US9384329B2 (en) | 2010-06-11 | 2016-07-05 | Microsoft Technology Licensing, Llc | Caloric burn determination from body movement |
US8675981B2 (en) | 2010-06-11 | 2014-03-18 | Microsoft Corporation | Multi-modal gender recognition including depth data |
US8982151B2 (en) | 2010-06-14 | 2015-03-17 | Microsoft Technology Licensing, Llc | Independently processing planes of display data |
US8639020B1 (en) | 2010-06-16 | 2014-01-28 | Intel Corporation | Method and system for modeling subjects from a depth map |
US8558873B2 (en) | 2010-06-16 | 2013-10-15 | Microsoft Corporation | Use of wavefront coding to create a depth image |
US8670029B2 (en) | 2010-06-16 | 2014-03-11 | Microsoft Corporation | Depth camera illuminator with superluminescent light-emitting diode |
US8296151B2 (en) | 2010-06-18 | 2012-10-23 | Microsoft Corporation | Compound gesture-speech commands |
US8381108B2 (en) | 2010-06-21 | 2013-02-19 | Microsoft Corporation | Natural user input for driving interactive stories |
US8416187B2 (en) | 2010-06-22 | 2013-04-09 | Microsoft Corporation | Item navigation using motion-capture data |
JP5601045B2 (ja) * | 2010-06-24 | 2014-10-08 | Sony Corporation | Gesture recognition device, gesture recognition method, and program |
CN101901339B (zh) * | 2010-07-30 | 2012-11-14 | South China University of Technology | Human hand motion detection method |
US9075434B2 (en) | 2010-08-20 | 2015-07-07 | Microsoft Technology Licensing, Llc | Translating user motion into multiple object responses |
US8613666B2 (en) | 2010-08-31 | 2013-12-24 | Microsoft Corporation | User selection and navigation based on looped motions |
US8655093B2 (en) | 2010-09-02 | 2014-02-18 | Edge 3 Technologies, Inc. | Method and apparatus for performing segmentation of an image |
US8666144B2 (en) | 2010-09-02 | 2014-03-04 | Edge 3 Technologies, Inc. | Method and apparatus for determining disparity of texture |
US9167289B2 (en) | 2010-09-02 | 2015-10-20 | Verizon Patent And Licensing Inc. | Perspective display systems and methods |
US8582866B2 (en) | 2011-02-10 | 2013-11-12 | Edge 3 Technologies, Inc. | Method and apparatus for disparity computation in stereo images |
US8467599B2 (en) | 2010-09-02 | 2013-06-18 | Edge 3 Technologies, Inc. | Method and apparatus for confusion learning |
US8437506B2 (en) | 2010-09-07 | 2013-05-07 | Microsoft Corporation | System for fast, probabilistic skeletal tracking |
US20120058824A1 (en) | 2010-09-07 | 2012-03-08 | Microsoft Corporation | Scalable real-time motion recognition |
US8988508B2 (en) | 2010-09-24 | 2015-03-24 | Microsoft Technology Licensing, Llc. | Wide angle field of view active illumination imaging system |
US8681255B2 (en) | 2010-09-28 | 2014-03-25 | Microsoft Corporation | Integrated low power depth camera and projection device |
US8749484B2 (en) | 2010-10-01 | 2014-06-10 | Z124 | Multi-screen user interface with orientation based control |
US8548270B2 (en) | 2010-10-04 | 2013-10-01 | Microsoft Corporation | Time-of-flight depth imaging |
US9484065B2 (en) | 2010-10-15 | 2016-11-01 | Microsoft Technology Licensing, Llc | Intelligent determination of replays based on event identification |
US8957856B2 (en) | 2010-10-21 | 2015-02-17 | Verizon Patent And Licensing Inc. | Systems, methods, and apparatuses for spatial input associated with a display |
US8592739B2 (en) | 2010-11-02 | 2013-11-26 | Microsoft Corporation | Detection of configuration changes of an optical element in an illumination system |
US8866889B2 (en) | 2010-11-03 | 2014-10-21 | Microsoft Corporation | In-home depth camera calibration |
US8667519B2 (en) | 2010-11-12 | 2014-03-04 | Microsoft Corporation | Automatic passive and anonymous feedback system |
US8730157B2 (en) * | 2010-11-15 | 2014-05-20 | Hewlett-Packard Development Company, L.P. | Hand pose recognition |
US10726861B2 (en) | 2010-11-15 | 2020-07-28 | Microsoft Technology Licensing, Llc | Semi-private communication in open environments |
US9349040B2 (en) | 2010-11-19 | 2016-05-24 | Microsoft Technology Licensing, Llc | Bi-modal depth-image analysis |
US10234545B2 (en) | 2010-12-01 | 2019-03-19 | Microsoft Technology Licensing, Llc | Light source module |
US8553934B2 (en) | 2010-12-08 | 2013-10-08 | Microsoft Corporation | Orienting the position of a sensor |
US8618405B2 (en) | 2010-12-09 | 2013-12-31 | Microsoft Corp. | Free-space gesture musical instrument digital interface (MIDI) controller |
US8408706B2 (en) | 2010-12-13 | 2013-04-02 | Microsoft Corporation | 3D gaze tracker |
US9171264B2 (en) | 2010-12-15 | 2015-10-27 | Microsoft Technology Licensing, Llc | Parallel processing machine learning decision tree training |
US8884968B2 (en) | 2010-12-15 | 2014-11-11 | Microsoft Corporation | Modeling an object from image data |
US8920241B2 (en) | 2010-12-15 | 2014-12-30 | Microsoft Corporation | Gesture controlled persistent handles for interface guides |
US8448056B2 (en) | 2010-12-17 | 2013-05-21 | Microsoft Corporation | Validation analysis of human target |
US8803952B2 (en) | 2010-12-20 | 2014-08-12 | Microsoft Corporation | Plural detector time-of-flight depth mapping |
US9848106B2 (en) | 2010-12-21 | 2017-12-19 | Microsoft Technology Licensing, Llc | Intelligent gameplay photo capture |
US8385596B2 (en) | 2010-12-21 | 2013-02-26 | Microsoft Corporation | First person shooter control with virtual skeleton |
US9821224B2 (en) | 2010-12-21 | 2017-11-21 | Microsoft Technology Licensing, Llc | Driving simulator control with virtual skeleton |
US8994718B2 (en) | 2010-12-21 | 2015-03-31 | Microsoft Technology Licensing, Llc | Skeletal control of three-dimensional virtual world |
US9823339B2 (en) | 2010-12-21 | 2017-11-21 | Microsoft Technology Licensing, Llc | Plural anode time-of-flight sensor |
US9123316B2 (en) | 2010-12-27 | 2015-09-01 | Microsoft Technology Licensing, Llc | Interactive content creation |
US8488888B2 (en) | 2010-12-28 | 2013-07-16 | Microsoft Corporation | Classification of posture states |
US8811663B2 (en) * | 2011-01-05 | 2014-08-19 | International Business Machines Corporation | Object detection in crowded scenes |
US8401225B2 (en) | 2011-01-31 | 2013-03-19 | Microsoft Corporation | Moving object segmentation using depth images |
US8587583B2 (en) | 2011-01-31 | 2013-11-19 | Microsoft Corporation | Three-dimensional environment reconstruction |
US8401242B2 (en) | 2011-01-31 | 2013-03-19 | Microsoft Corporation | Real-time camera tracking using depth maps |
US9247238B2 (en) | 2011-01-31 | 2016-01-26 | Microsoft Technology Licensing, Llc | Reducing interference between multiple infra-red depth cameras |
US8724887B2 (en) | 2011-02-03 | 2014-05-13 | Microsoft Corporation | Environmental modifications to mitigate environmental factors |
US8970589B2 (en) | 2011-02-10 | 2015-03-03 | Edge 3 Technologies, Inc. | Near-touch interaction with a stereo camera grid structured tessellations |
US8942917B2 (en) | 2011-02-14 | 2015-01-27 | Microsoft Corporation | Change invariant scene recognition by an agent |
US8497838B2 (en) | 2011-02-16 | 2013-07-30 | Microsoft Corporation | Push actuation of interface controls |
CN102096471B (zh) * | 2011-02-18 | 2013-04-10 | Guangdong Vtron Technology Co., Ltd. | Machine-vision-based human-computer interaction method |
US9551914B2 (en) | 2011-03-07 | 2017-01-24 | Microsoft Technology Licensing, Llc | Illuminator with refractive optical element |
US9067136B2 (en) | 2011-03-10 | 2015-06-30 | Microsoft Technology Licensing, Llc | Push personalization of interface controls |
US8571263B2 (en) | 2011-03-17 | 2013-10-29 | Microsoft Corporation | Predicting joint positions |
US9470778B2 (en) | 2011-03-29 | 2016-10-18 | Microsoft Technology Licensing, Llc | Learning from high quality depth measurements |
US9842168B2 (en) | 2011-03-31 | 2017-12-12 | Microsoft Technology Licensing, Llc | Task driven user intents |
US9298287B2 (en) | 2011-03-31 | 2016-03-29 | Microsoft Technology Licensing, Llc | Combined activation for natural user interface systems |
US10642934B2 (en) | 2011-03-31 | 2020-05-05 | Microsoft Technology Licensing, Llc | Augmented conversational understanding architecture |
US9760566B2 (en) | 2011-03-31 | 2017-09-12 | Microsoft Technology Licensing, Llc | Augmented conversational understanding agent to identify conversation context between two humans and taking an agent action thereof |
US8503494B2 (en) | 2011-04-05 | 2013-08-06 | Microsoft Corporation | Thermal management system |
US8824749B2 (en) | 2011-04-05 | 2014-09-02 | Microsoft Corporation | Biometric recognition |
US8620113B2 (en) | 2011-04-25 | 2013-12-31 | Microsoft Corporation | Laser diode modes |
US9259643B2 (en) | 2011-04-28 | 2016-02-16 | Microsoft Technology Licensing, Llc | Control of separate computer game elements |
US8702507B2 (en) | 2011-04-28 | 2014-04-22 | Microsoft Corporation | Manual and camera-based avatar control |
US10671841B2 (en) | 2011-05-02 | 2020-06-02 | Microsoft Technology Licensing, Llc | Attribute state classification |
US8888331B2 (en) | 2011-05-09 | 2014-11-18 | Microsoft Corporation | Low inductance light source module |
US9137463B2 (en) | 2011-05-12 | 2015-09-15 | Microsoft Technology Licensing, Llc | Adaptive high dynamic range camera |
US9064006B2 (en) | 2012-08-23 | 2015-06-23 | Microsoft Technology Licensing, Llc | Translating natural language utterances to keyword search queries |
US8788973B2 (en) | 2011-05-23 | 2014-07-22 | Microsoft Corporation | Three-dimensional gesture controlled avatar configuration interface |
US8760395B2 (en) | 2011-05-31 | 2014-06-24 | Microsoft Corporation | Gesture recognition techniques |
US8526734B2 (en) | 2011-06-01 | 2013-09-03 | Microsoft Corporation | Three-dimensional background removal for vision system |
US9594430B2 (en) | 2011-06-01 | 2017-03-14 | Microsoft Technology Licensing, Llc | Three-dimensional foreground selection for vision system |
US9013489B2 (en) | 2011-06-06 | 2015-04-21 | Microsoft Technology Licensing, Llc | Generation of avatar reflecting player appearance |
US9208571B2 (en) | 2011-06-06 | 2015-12-08 | Microsoft Technology Licensing, Llc | Object digitization |
US8929612B2 (en) | 2011-06-06 | 2015-01-06 | Microsoft Corporation | System for recognizing an open or closed hand |
US8597142B2 (en) | 2011-06-06 | 2013-12-03 | Microsoft Corporation | Dynamic camera based practice mode |
US8897491B2 (en) | 2011-06-06 | 2014-11-25 | Microsoft Corporation | System for finger recognition and tracking |
US9098110B2 (en) | 2011-06-06 | 2015-08-04 | Microsoft Technology Licensing, Llc | Head rotation tracking from depth-based center of mass |
US9724600B2 (en) | 2011-06-06 | 2017-08-08 | Microsoft Technology Licensing, Llc | Controlling objects in a virtual environment |
US10796494B2 (en) | 2011-06-06 | 2020-10-06 | Microsoft Technology Licensing, Llc | Adding attributes to virtual representations of real-world objects |
US9597587B2 (en) | 2011-06-08 | 2017-03-21 | Microsoft Technology Licensing, Llc | Locational node device |
US11048333B2 (en) | 2011-06-23 | 2021-06-29 | Intel Corporation | System and method for close-range movement tracking |
JP6074170B2 (ja) | 2011-06-23 | 2017-02-01 | Intel Corporation | System and method for close-range movement tracking |
US8786730B2 (en) | 2011-08-18 | 2014-07-22 | Microsoft Corporation | Image exposure using exclusion regions |
KR101189633B1 (ko) * | 2011-08-22 | 2012-10-10 | Sungkyunkwan University Research & Business Foundation | Method for recognizing pointer control commands based on finger movements, and mobile terminal for controlling a pointer according to finger movements |
US9557836B2 (en) | 2011-11-01 | 2017-01-31 | Microsoft Technology Licensing, Llc | Depth image compression |
US9117281B2 (en) | 2011-11-02 | 2015-08-25 | Microsoft Corporation | Surface segmentation from RGB and depth images |
JP5607012B2 (ja) * | 2011-11-04 | 2014-10-15 | Honda Motor Co., Ltd. | Sign language motion generating device and communication robot |
US8854426B2 (en) | 2011-11-07 | 2014-10-07 | Microsoft Corporation | Time-of-flight camera with guided light |
US9672609B1 (en) | 2011-11-11 | 2017-06-06 | Edge 3 Technologies, Inc. | Method and apparatus for improved depth-map estimation |
US8724906B2 (en) | 2011-11-18 | 2014-05-13 | Microsoft Corporation | Computing pose and/or shape of modifiable entities |
US8509545B2 (en) | 2011-11-29 | 2013-08-13 | Microsoft Corporation | Foreground subject detection |
US9072929B1 (en) * | 2011-12-01 | 2015-07-07 | Nebraska Global Investment Company, LLC | Image capture system |
US8958631B2 (en) | 2011-12-02 | 2015-02-17 | Intel Corporation | System and method for automatically defining and identifying a gesture |
US8635637B2 (en) | 2011-12-02 | 2014-01-21 | Microsoft Corporation | User interface presenting an animated avatar performing a media reaction |
CN103135754B (zh) * | 2011-12-02 | 2016-05-11 | Shenzhen Taishan Sports Technology Co., Ltd. | Method for realizing interaction using an interactive device |
US8803800B2 (en) | 2011-12-02 | 2014-08-12 | Microsoft Corporation | User interface control based on head orientation |
US9100685B2 (en) | 2011-12-09 | 2015-08-04 | Microsoft Technology Licensing, Llc | Determining audience state or interest using passive sensor data |
US8879831B2 (en) | 2011-12-15 | 2014-11-04 | Microsoft Corporation | Using high-level attributes to guide image processing |
US8971612B2 (en) | 2011-12-15 | 2015-03-03 | Microsoft Corporation | Learning image processing tasks from scene reconstructions |
US8630457B2 (en) | 2011-12-15 | 2014-01-14 | Microsoft Corporation | Problem states for pose tracking pipeline |
US8811938B2 (en) | 2011-12-16 | 2014-08-19 | Microsoft Corporation | Providing a user interface experience based on inferred vehicle state |
US9110502B2 (en) * | 2011-12-16 | 2015-08-18 | Ryan Fink | Motion sensing display apparatuses |
US9342139B2 (en) | 2011-12-19 | 2016-05-17 | Microsoft Technology Licensing, Llc | Pairing a computing device to a user |
FR2985065B1 (fr) * | 2011-12-21 | 2014-01-10 | Univ Paris Curie | Method for estimating optical flow from an asynchronous light sensor |
US8971571B1 (en) * | 2012-01-06 | 2015-03-03 | Google Inc. | Visual completion |
US9501152B2 (en) | 2013-01-15 | 2016-11-22 | Leap Motion, Inc. | Free-space user interface and control using virtual constructs |
US9679215B2 (en) | 2012-01-17 | 2017-06-13 | Leap Motion, Inc. | Systems and methods for machine control |
US8638989B2 (en) | 2012-01-17 | 2014-01-28 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
US11493998B2 (en) | 2012-01-17 | 2022-11-08 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
US10691219B2 (en) | 2012-01-17 | 2020-06-23 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
US8693731B2 (en) | 2012-01-17 | 2014-04-08 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging |
US9720089B2 (en) | 2012-01-23 | 2017-08-01 | Microsoft Technology Licensing, Llc | 3D zoom imager |
US9336456B2 (en) | 2012-01-25 | 2016-05-10 | Bruno Delean | Systems, methods and computer program products for identifying objects in video data |
GB2500416B8 (en) * | 2012-03-21 | 2017-06-14 | Sony Computer Entertainment Europe Ltd | Apparatus and method of augmented reality interaction |
US20130253375A1 (en) * | 2012-03-21 | 2013-09-26 | Henry Nardus Dreifus | Automated Method Of Detecting Neuromuscular Performance And Comparative Measurement Of Health Factors |
US8898687B2 (en) | 2012-04-04 | 2014-11-25 | Microsoft Corporation | Controlling a media program based on a media reaction |
US9477303B2 (en) | 2012-04-09 | 2016-10-25 | Intel Corporation | System and method for combining three-dimensional tracking with a three-dimensional display for a user interface |
KR101370022B1 (ko) * | 2012-05-02 | 2014-03-25 | Macron Co., Ltd. | Motion recognition remote controller |
US9210401B2 (en) | 2012-05-03 | 2015-12-08 | Microsoft Technology Licensing, Llc | Projected visual cues for guiding physical movement |
CA2775700C (en) | 2012-05-04 | 2013-07-23 | Microsoft Corporation | Determining a future portion of a currently presented media program |
US9588582B2 (en) | 2013-09-17 | 2017-03-07 | Medibotics Llc | Motion recognition clothing (TM) with two different sets of tubes spanning a body joint |
JP6018707B2 (ja) | 2012-06-21 | 2016-11-02 | Microsoft Corporation | Avatar construction using a depth camera |
US9836590B2 (en) | 2012-06-22 | 2017-12-05 | Microsoft Technology Licensing, Llc | Enhanced accuracy of user presence status determination |
US9697418B2 (en) | 2012-07-09 | 2017-07-04 | Qualcomm Incorporated | Unsupervised movement detection and gesture recognition |
US9305229B2 (en) | 2012-07-30 | 2016-04-05 | Bruno Delean | Method and system for vision based interfacing with a computer |
US9696427B2 (en) | 2012-08-14 | 2017-07-04 | Microsoft Technology Licensing, Llc | Wide angle depth detection |
JP5652445B2 (ja) * | 2012-08-31 | 2015-01-14 | Yaskawa Electric Corporation | Robot |
US9301811B2 (en) | 2012-09-17 | 2016-04-05 | Intuitive Surgical Operations, Inc. | Methods and systems for assigning input devices to teleoperated surgical instrument functions |
WO2014052802A2 (en) | 2012-09-28 | 2014-04-03 | Zoll Medical Corporation | Systems and methods for three-dimensional interaction monitoring in an ems environment |
TWI475422B (zh) * | 2012-10-31 | 2015-03-01 | Wistron Corp | Gesture recognition method and electronic device |
US10864048B2 (en) | 2012-11-02 | 2020-12-15 | Intuitive Surgical Operations, Inc. | Flux disambiguation for teleoperated surgical systems |
US10631939B2 (en) | 2012-11-02 | 2020-04-28 | Intuitive Surgical Operations, Inc. | Systems and methods for mapping flux supply paths |
JP5991532B2 (ja) * | 2012-12-07 | 2016-09-14 | Hiroshima University | Human body motion evaluation device, method, and program |
US8882310B2 (en) | 2012-12-10 | 2014-11-11 | Microsoft Corporation | Laser die light source module with low inductance |
TWI454968B (zh) | 2012-12-24 | 2014-10-01 | Ind Tech Res Inst | Three-dimensional interactive device and control method thereof |
US9857470B2 (en) | 2012-12-28 | 2018-01-02 | Microsoft Technology Licensing, Llc | Using photometric stereo for 3D environment modeling |
US9459697B2 (en) | 2013-01-15 | 2016-10-04 | Leap Motion, Inc. | Dynamic, free-space user interactions for machine control |
US9251590B2 (en) | 2013-01-24 | 2016-02-02 | Microsoft Technology Licensing, Llc | Camera pose estimation for 3D reconstruction |
US20140210707A1 (en) * | 2013-01-25 | 2014-07-31 | Leap Motion, Inc. | Image capture system and method |
US9785228B2 (en) | 2013-02-11 | 2017-10-10 | Microsoft Technology Licensing, Llc | Detecting natural user-input engagement |
US9052746B2 (en) | 2013-02-15 | 2015-06-09 | Microsoft Technology Licensing, Llc | User center-of-mass and mass distribution extraction using depth images |
US9558555B2 (en) | 2013-02-22 | 2017-01-31 | Leap Motion, Inc. | Adjusting motion capture based on the distance between tracked objects |
US9940553B2 (en) | 2013-02-22 | 2018-04-10 | Microsoft Technology Licensing, Llc | Camera/object pose from predicted coordinates |
US9158381B2 (en) | 2013-02-25 | 2015-10-13 | Honda Motor Co., Ltd. | Multi-resolution gesture recognition |
US9135516B2 (en) | 2013-03-08 | 2015-09-15 | Microsoft Technology Licensing, Llc | User body angle, curvature and average extremity positions extraction using depth images |
US9092657B2 (en) | 2013-03-13 | 2015-07-28 | Microsoft Technology Licensing, Llc | Depth image processing |
US9274606B2 (en) | 2013-03-14 | 2016-03-01 | Microsoft Technology Licensing, Llc | NUI video conference controls |
US9704350B1 (en) | 2013-03-14 | 2017-07-11 | Harmonix Music Systems, Inc. | Musical combat game |
US9702977B2 (en) | 2013-03-15 | 2017-07-11 | Leap Motion, Inc. | Determining positional information of an object in space |
US9733715B2 (en) | 2013-03-15 | 2017-08-15 | Leap Motion, Inc. | Resource-responsive motion capture |
US10721448B2 (en) | 2013-03-15 | 2020-07-21 | Edge 3 Technologies, Inc. | Method and apparatus for adaptive exposure bracketing, segmentation and scene organization |
US9953213B2 (en) | 2013-03-27 | 2018-04-24 | Microsoft Technology Licensing, Llc | Self discovery of autonomous NUI devices |
US9916009B2 (en) | 2013-04-26 | 2018-03-13 | Leap Motion, Inc. | Non-tactile interface systems and methods |
US9442186B2 (en) | 2013-05-13 | 2016-09-13 | Microsoft Technology Licensing, Llc | Interference reduction for TOF systems |
US9829984B2 (en) | 2013-05-23 | 2017-11-28 | Fastvdo Llc | Motion-assisted visual language for human computer interfaces |
CN103257713B (zh) * | 2013-05-31 | 2016-05-04 | South China University of Technology | Gesture control method |
US11243611B2 (en) | 2013-08-07 | 2022-02-08 | Nike, Inc. | Gesture recognition |
US10281987B1 (en) | 2013-08-09 | 2019-05-07 | Leap Motion, Inc. | Systems and methods of free-space gestural interaction |
US9721383B1 (en) | 2013-08-29 | 2017-08-01 | Leap Motion, Inc. | Predictive information for free space gesture control and communication |
US9462253B2 (en) | 2013-09-23 | 2016-10-04 | Microsoft Technology Licensing, Llc | Optical modules that reduce speckle contrast and diffraction artifacts |
US9632572B2 (en) | 2013-10-03 | 2017-04-25 | Leap Motion, Inc. | Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation |
US9443310B2 (en) | 2013-10-09 | 2016-09-13 | Microsoft Technology Licensing, Llc | Illumination modules that emit structured light |
US10152136B2 (en) * | 2013-10-16 | 2018-12-11 | Leap Motion, Inc. | Velocity field interaction for free space gesture interface and control |
US9996638B1 (en) | 2013-10-31 | 2018-06-12 | Leap Motion, Inc. | Predictive information for free space gesture control and communication |
US9674563B2 (en) | 2013-11-04 | 2017-06-06 | Rovi Guides, Inc. | Systems and methods for recommending content |
US20150123890A1 (en) * | 2013-11-04 | 2015-05-07 | Microsoft Corporation | Two hand natural user input |
US9769459B2 (en) | 2013-11-12 | 2017-09-19 | Microsoft Technology Licensing, Llc | Power efficient laser diode driver circuit and method |
US9508385B2 (en) | 2013-11-21 | 2016-11-29 | Microsoft Technology Licensing, Llc | Audio-visual project generator |
US9891712B2 (en) | 2013-12-16 | 2018-02-13 | Leap Motion, Inc. | User-defined virtual interaction space and manipulation of virtual cameras with vectors |
CN103645807B (zh) * | 2013-12-23 | 2017-08-25 | Nubia Technology Co., Ltd. | Mid-air gesture input method and device |
US9607319B2 (en) | 2013-12-30 | 2017-03-28 | Adtile Technologies, Inc. | Motion and gesture-based mobile advertising activation |
US9971491B2 (en) | 2014-01-09 | 2018-05-15 | Microsoft Technology Licensing, Llc | Gesture library for natural user input |
US9740923B2 (en) * | 2014-01-15 | 2017-08-22 | Lenovo (Singapore) Pte. Ltd. | Image gestures for edge input |
JP6229554B2 (ja) * | 2014-03-07 | 2017-11-15 | Fujitsu Limited | Detection device and detection method |
RU2014109439A (ru) * | 2014-03-12 | 2015-09-20 | LSI Corporation | Image processor comprising a gesture recognition system with hand position matching based on contour features |
US9990046B2 (en) | 2014-03-17 | 2018-06-05 | Oblong Industries, Inc. | Visual collaboration interface |
US10166061B2 (en) | 2014-03-17 | 2019-01-01 | Intuitive Surgical Operations, Inc. | Teleoperated surgical system equipment with user interface |
RU2014113049A (ru) * | 2014-04-03 | 2015-10-10 | LSI Corporation | Image processor comprising a gesture recognition system with object tracking based on computed contour features for two or more objects |
WO2015160849A1 (en) * | 2014-04-14 | 2015-10-22 | Motionsavvy, Inc. | Systems and methods for recognition and translation of gestures |
CN204480228U (zh) | 2014-08-08 | 2015-07-15 | Leap Motion, Inc. | Motion sensing and imaging device |
US20160085312A1 (en) * | 2014-09-24 | 2016-03-24 | Ncku Research And Development Foundation | Gesture recognition system |
WO2016183020A1 (en) | 2015-05-11 | 2016-11-17 | Magic Leap, Inc. | Devices, methods and systems for biometric user recognition utilizing neural networks |
US10437463B2 (en) | 2015-10-16 | 2019-10-08 | Lumini Corporation | Motion-based graphical input system |
CN106650554A (zh) * | 2015-10-30 | 2017-05-10 | Chengdu Idealsee Technology Co., Ltd. | Static gesture recognition method |
WO2017104272A1 (ja) * | 2015-12-18 | 2017-06-22 | Sony Corporation | Information processing device, information processing method, and program |
US10412280B2 (en) | 2016-02-10 | 2019-09-10 | Microsoft Technology Licensing, Llc | Camera with light valve over sensor array |
US10257932B2 (en) | 2016-02-16 | 2019-04-09 | Microsoft Technology Licensing, Llc. | Laser diode chip on printed circuit board |
AU2017230184B2 (en) | 2016-03-11 | 2021-10-07 | Magic Leap, Inc. | Structure learning in convolutional neural networks |
US10462452B2 (en) | 2016-03-16 | 2019-10-29 | Microsoft Technology Licensing, Llc | Synchronizing active illumination cameras |
US9684822B1 (en) * | 2016-04-08 | 2017-06-20 | International Business Machines Corporation | Mechanism to create pattern gesture transmissions to create device-sourcing emergency information |
CN105912727B (zh) * | 2016-05-18 | 2019-02-15 | University of Electronic Science and Technology of China | Fast recommendation method for an online social network annotation system |
US10981060B1 (en) | 2016-05-24 | 2021-04-20 | Out of Sight Vision Systems LLC | Collision avoidance system for room scale virtual reality system |
US10650591B1 (en) | 2016-05-24 | 2020-05-12 | Out of Sight Vision Systems LLC | Collision avoidance system for head mounted display utilized in room scale virtual reality system |
US10529302B2 (en) | 2016-07-07 | 2020-01-07 | Oblong Industries, Inc. | Spatially mediated augmentations of and interactions among distinct devices and applications via extended pixel manifold |
US20180088671A1 (en) * | 2016-09-27 | 2018-03-29 | National Kaohsiung University Of Applied Sciences | 3D Hand Gesture Image Recognition Method and System Thereof |
CN106570490B (zh) * | 2016-11-15 | 2019-07-16 | South China University of Technology | Real-time pedestrian tracking method based on fast clustering |
US10437342B2 (en) | 2016-12-05 | 2019-10-08 | Youspace, Inc. | Calibration systems and methods for depth-based interfaces with disparate fields of view |
US10303259B2 (en) | 2017-04-03 | 2019-05-28 | Youspace, Inc. | Systems and methods for gesture-based interaction |
US10303417B2 (en) | 2017-04-03 | 2019-05-28 | Youspace, Inc. | Interactive systems for depth-based input |
US11164378B1 (en) | 2016-12-08 | 2021-11-02 | Out of Sight Vision Systems LLC | Virtual reality detection and projection system for use with a head mounted display |
US9983687B1 (en) | 2017-01-06 | 2018-05-29 | Adtile Technologies Inc. | Gesture-controlled augmented reality experience using a mobile communications device |
US11847426B2 (en) * | 2017-11-08 | 2023-12-19 | Snap Inc. | Computer vision based sign language interpreter |
IL256288B (en) | 2017-12-07 | 2019-02-28 | Ophir Yoav | Mutual interactivity between mobile devices based on position and orientation |
CN108549489B (zh) * | 2018-04-27 | 2019-12-13 | Harbin Tuobo Technology Co., Ltd. | Gesture control method and system based on hand shape, posture, position, and motion features |
US11875012B2 (en) | 2018-05-25 | 2024-01-16 | Ultrahaptics IP Two Limited | Throwable interface for augmented reality and virtual reality environments |
US10642369B2 (en) * | 2018-06-14 | 2020-05-05 | Dell Products, L.P. | Distinguishing between one-handed and two-handed gesture sequences in virtual, augmented, and mixed reality (xR) applications |
US11068530B1 (en) * | 2018-11-02 | 2021-07-20 | Shutterstock, Inc. | Context-based image selection for electronic media |
EP3973468A4 (de) | 2019-05-21 | 2022-09-14 | Magic Leap, Inc. | Hand pose estimation |
CN112733577A (zh) * | 2019-10-28 | 2021-04-30 | Fujitsu Limited | Method and device for detecting hand movements |
US11925863B2 (en) | 2020-09-18 | 2024-03-12 | Snap Inc. | Tracking hand gestures for interactive game control in augmented reality |
WO2022216784A1 (en) * | 2021-04-08 | 2022-10-13 | Snap Inc. | Bimanual interactions between mapped hand regions for controlling virtual and graphical elements |
EP4327185A1 (de) | 2021-04-19 | 2024-02-28 | Snap, Inc. | Hand gestures for animating and controlling virtual and graphical elements |
Family Cites Families (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4746770A (en) | 1987-02-17 | 1988-05-24 | Sensor Frame Incorporated | Method and apparatus for isolating and manipulating graphic objects on computer video monitor |
US5164992A (en) | 1990-11-01 | 1992-11-17 | Massachusetts Institute Of Technology | Face recognition system |
US5534917A (en) | 1991-05-09 | 1996-07-09 | Very Vivid, Inc. | Video image based control system |
US5483261A (en) | 1992-02-14 | 1996-01-09 | Itu Research, Inc. | Graphical input controller and method with rear screen image detection |
US5982352A (en) | 1992-09-18 | 1999-11-09 | Pryor; Timothy R. | Method for providing human input to a computer |
US6008800A (en) | 1992-09-18 | 1999-12-28 | Pryor; Timothy R. | Man machine interfaces for entering data into a computer |
US5454043A (en) | 1993-07-30 | 1995-09-26 | Mitsubishi Electric Research Laboratories, Inc. | Dynamic and static hand gesture recognition through low-level image analysis |
US5528263A (en) | 1994-06-15 | 1996-06-18 | Daniel M. Platzker | Interactive projected video image display system |
JPH08179888A (ja) | 1994-12-21 | 1996-07-12 | Hitachi Ltd | Input device for large-screen display |
US5710833A (en) | 1995-04-20 | 1998-01-20 | Massachusetts Institute Of Technology | Detection, recognition and coding of complex objects using probabilistic eigenspace analysis |
US6526156B1 (en) | 1997-01-10 | 2003-02-25 | Xerox Corporation | Apparatus and method for identifying and tracking objects with view-based representations |
JP3876942B2 (ja) | 1997-06-13 | 2007-02-07 | Wacom Co., Ltd. | Optical digitizer |
US6720949B1 (en) | 1997-08-22 | 2004-04-13 | Timothy R. Pryor | Man machine interfaces and applications |
EP0905644A3 (de) * | 1997-09-26 | 2004-02-25 | Matsushita Electric Industrial Co., Ltd. | Hand gesture recognition device |
JPH11174948A (ja) * | 1997-09-26 | 1999-07-02 | Matsushita Electric Ind Co Ltd | Hand motion recognition device |
JP3795647B2 (ja) * | 1997-10-29 | 2006-07-12 | Takenaka Corporation | Hand pointing device |
JP4033582B2 (ja) | 1998-06-09 | 2008-01-16 | Ricoh Co., Ltd. | Coordinate input/detection device and electronic blackboard system |
US6204852B1 (en) * | 1998-12-09 | 2001-03-20 | Lucent Technologies Inc. | Video hand image three-dimensional computer interface |
US6147678A (en) * | 1998-12-09 | 2000-11-14 | Lucent Technologies Inc. | Video hand image-three-dimensional computer interface with multiple degrees of freedom |
US6791531B1 (en) | 1999-06-07 | 2004-09-14 | Dot On, Inc. | Device and method for cursor motion control calibration and object selection |
JP4332649B2 (ja) * | 1999-06-08 | 2009-09-16 | National Institute of Information and Communications Technology | Hand shape and posture recognition device, hand shape and posture recognition method, and recording medium storing a program for implementing the method |
US6275214B1 (en) | 1999-07-06 | 2001-08-14 | Karl C. Hansen | Computer presentation system and method with optical tracking of wireless pointer |
US6803906B1 (en) | 2000-07-05 | 2004-10-12 | Smart Technologies, Inc. | Passive touch system and method of detecting user input |
US7227526B2 (en) | 2000-07-24 | 2007-06-05 | Gesturetek, Inc. | Video-based image control system |
JP4059620B2 (ja) | 2000-09-20 | 2008-03-12 | Ricoh Co., Ltd. | Coordinate detection method, coordinate input/detection device, and storage medium |
US7058204B2 (en) | 2000-10-03 | 2006-06-06 | Gesturetek, Inc. | Multiple camera control system |
US6774889B1 (en) * | 2000-10-24 | 2004-08-10 | Microsoft Corporation | System and method for transforming an ordinary computer monitor screen into a touch screen |
US8035612B2 (en) | 2002-05-28 | 2011-10-11 | Intellectual Ventures Holding 67 Llc | Self-contained interactive video display system |
US7259747B2 (en) | 2001-06-05 | 2007-08-21 | Reactrix Systems, Inc. | Interactive video display system |
US6594616B2 (en) * | 2001-06-18 | 2003-07-15 | Microsoft Corporation | System and method for providing a mobile input device |
US6977643B2 (en) * | 2002-01-10 | 2005-12-20 | International Business Machines Corporation | System and method implementing non-physical pointers for computer devices |
US7170492B2 (en) | 2002-05-28 | 2007-01-30 | Reactrix Systems, Inc. | Interactive video display system |
US20050122308A1 (en) | 2002-05-28 | 2005-06-09 | Matthew Bell | Self-contained interactive video display system |
US7710391B2 (en) | 2002-05-28 | 2010-05-04 | Matthew Bell | Processing an image utilizing a spatially varying pattern |
US7348963B2 (en) | 2002-05-28 | 2008-03-25 | Reactrix Systems, Inc. | Interactive video display system |
US7103225B2 (en) | 2002-11-07 | 2006-09-05 | Honda Motor Co., Ltd. | Clustering appearances of objects under varying illumination conditions |
US7576727B2 (en) | 2002-12-13 | 2009-08-18 | Matthew Bell | Interactive directed light/sound system |
EP1668786B1 (de) | 2003-09-23 | 2012-08-22 | Nxp B.V. | Initial synchronization for receivers |
US7536032B2 (en) | 2003-10-24 | 2009-05-19 | Reactrix Systems, Inc. | Method and system for processing captured image information in an interactive video display system |
CN1902930B (zh) | 2003-10-24 | 2010-12-15 | Reactrix Systems, Inc. | Method and system for managing an interactive video display system |
JP2005203873A (ja) * | 2004-01-13 | 2005-07-28 | Alps Electric Co Ltd | Patch antenna |
CN100573548C (zh) * | 2004-04-15 | 2009-12-23 | GestureTek, Inc. | Method and apparatus for tracking bimanual movements |
US7432917B2 (en) * | 2004-06-16 | 2008-10-07 | Microsoft Corporation | Calibration of an interactive display system |
US7454039B2 (en) * | 2004-07-12 | 2008-11-18 | The Board Of Trustees Of The University Of Illinois | Method of performing shape localization |
- 2005-04-15 CN CNB200580019474XA patent/CN100573548C/zh not_active Expired - Fee Related
- 2005-04-15 US US11/106,729 patent/US7379563B2/en active Active
- 2005-04-15 JP JP2007508608A patent/JP4708422B2/ja not_active Expired - Fee Related
- 2005-04-15 WO PCT/US2005/013033 patent/WO2005104010A2/en active Application Filing
- 2005-04-15 EP EP05733722A patent/EP1743277A4/de not_active Withdrawn
- 2007-10-31 US US11/932,766 patent/US8259996B2/en active Active
- 2012-08-01 US US13/564,492 patent/US8515132B2/en active Active
Non-Patent Citations (5)
Title |
---|
Shamaie, A. et al.: "A Dynamic Model for Real-Time Tracking of Hands in Bimanual Movements", Gesture-Based Communication in Human-Computer Interaction (Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence, LNCS), Springer-Verlag, Berlin/Heidelberg, 6 February 2004 (2004-02-06), pages 172-179, XP019003032, ISBN: 978-3-540-21072-6 * the whole document * |
See also references of WO2005104010A2 * |
Shamaie, A. et al.: "Bayesian fusion of Hidden Markov Models for understanding bimanual movements", Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, IEEE, Piscataway, NJ, USA, 17 May 2004 (2004-05-17), pages 602-607, XP010949499, ISBN: 978-0-7695-2122-0 * |
Shamaie, A. et al.: "Hand tracking in bimanual movements", Image and Vision Computing, Elsevier, Guildford, GB, vol. 23, no. 13, 29 November 2005 (2005-11-29), pages 1131-1149, XP026733377, ISSN: 0262-8856, DOI: 10.1016/j.imavis.2005.07.010 [retrieved on 2005-10-31] * |
Coogan, T. et al.: "Real Time Hand Gesture Recognition Including Hand Segmentation and Tracking", Advances in Visual Computing (Lecture Notes in Computer Science, LNCS), Springer, Berlin, DE, 1 January 2006 (2006-01-01), pages 495-504, XP019050681, ISBN: 978-3-540-48628-2 * references * |
Also Published As
Publication number | Publication date |
---|---|
JP4708422B2 (ja) | 2011-06-22 |
EP1743277A4 (de) | 2011-07-06 |
US20050238201A1 (en) | 2005-10-27 |
JP2007533043A (ja) | 2007-11-15 |
CN101073089A (zh) | 2007-11-14 |
WO2005104010A2 (en) | 2005-11-03 |
US7379563B2 (en) | 2008-05-27 |
US20080219502A1 (en) | 2008-09-11 |
US8515132B2 (en) | 2013-08-20 |
US20120293408A1 (en) | 2012-11-22 |
US8259996B2 (en) | 2012-09-04 |
WO2005104010A3 (en) | 2007-05-31 |
CN100573548C (zh) | 2009-12-23 |
Similar Documents
Publication | Title |
---|---|
US8515132B2 (en) | Tracking bimanual movements |
Athira et al. | A signer independent sign language recognition with co-articulation elimination from live videos: an Indian scenario | |
Yang et al. | Continuous hand gesture recognition based on trajectory shape information | |
Hamid et al. | Argmode-activity recognition using graphical models | |
Motiian et al. | Online human interaction detection and recognition with multiple cameras | |
Han et al. | Automatic skin segmentation and tracking in sign language recognition | |
Ma et al. | A survey of human action recognition and posture prediction | |
Zhou et al. | Adaptive fusion of particle filtering and spatio-temporal motion energy for human tracking | |
EP2899706B1 (de) | Method and system for analyzing human behavior in an intelligent surveillance system |
KR20170025535A (ko) | Video-based interactive activity modeling method using skeleton pose datasets |
Elmezain et al. | Spatio-temporal feature extraction-based hand gesture recognition for isolated american sign language and arabic numbers | |
Kumar | Motion trajectory based human face and hands tracking for sign language recognition | |
Parisi et al. | HandSOM-Neural clustering of hand motion for gesture recognition in real time | |
Yu et al. | Vision-based continuous sign language recognition using product HMM | |
Bhuyan et al. | Continuous hand gesture segmentation and co-articulation detection | |
Shamaie et al. | Hand tracking in bimanual movements | |
Tsai et al. | Visual Hand Gesture Segmentation Using Three-Phase Model Tracking Technique for Real-Time Gesture Interpretation System. | |
Datcu et al. | Automatic bi-modal emotion recognition system based on fusion of facial expressions and emotion extraction from speech | |
Chakraborty et al. | View-invariant human action detection using component-wise hmm of body parts | |
Zhu et al. | Robust Hand Gesture Recognition Using Machine Learning With Positive and Negative Samples. | |
Shamaie et al. | Bayesian fusion of hidden markov models for understanding bimanual movements | |
Khan et al. | A Constructive Review on Pedestrian Action Detection, Recognition and Prediction | |
Aran et al. | Sequential belief-based fusion of manual and non-manual information for recognizing isolated signs | |
Kumar | Gesture Recognition Using Hidden Markov Models Augmented with Active Difference Signatures | |
Schmidt et al. | Automatic initialization for body tracking-using appearance to learn a model for tracking human upper body motions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
| 17P | Request for examination filed | Effective date: 20061016 |
| AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR |
| AX | Request for extension of the European patent | Extension state: AL BA HR LV MK YU |
| PUAK | Availability of information related to the publication of the international search report | Free format text: ORIGINAL CODE: 0009015 |
| DAX | Request for extension of the European patent (deleted) | |
| A4 | Supplementary search report drawn up and despatched | Effective date: 20110606 |
| RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: QUALCOMM INCORPORATED |
| 17Q | First examination report despatched | Effective date: 20120511 |
| STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 20121122 |