US20170330031A1 - Fusing device and image motion for user identification, tracking and device association - Google Patents

Fusing device and image motion for user identification, tracking and device association

Info

Publication number
US20170330031A1
US20170330031A1 (Application No. US15/592,344)
Authority
US
United States
Prior art keywords
image
mobile device
computer
implemented process
acceleration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/592,344
Inventor
Andrew D. Wilson
Hrvoje Benko
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing LLC filed Critical Microsoft Technology Licensing LLC
Priority to US15/592,344
Publication of US20170330031A1
Assigned to MICROSOFT CORPORATION reassignment MICROSOFT CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BENKO, HRVOJE, WILSON, ANDREW D.
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC reassignment MICROSOFT TECHNOLOGY LICENSING, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION

Classifications

    • G06K9/00624
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1694Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/211Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/038Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2200/00Indexing scheme relating to G06F1/04 - G06F1/32
    • G06F2200/16Indexing scheme relating to G06F1/16 - G06F1/18
    • G06F2200/163Indexing scheme relating to constructional details of the computer
    • G06F2200/1637Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer

Definitions

  • Tracking a smart phone can be used to identify and track the smart phone's owner in order to provide indoor location-based services, such as establishing the smart phone's connection with nearby infrastructure such as a wall display, or for providing the user of the phone location-specific information and advertisements.
  • the cross-modal sensor fusion technique described herein provides a cross-modal sensor fusion approach to track mobile devices and the users carrying them.
  • the technique matches motion features captured using sensors on a mobile device to motion features captured in images of the device in order to track the mobile device and/or its user.
  • the technique matches the velocities of a mobile device, as measured by an onboard measurement unit, to similar velocities observed in images of the device to track the device and any object rigidly attached thereto (e.g., a user).
  • This motion feature matching process is conceptually simple.
  • the technique does not require a model of the appearance of either the user or the device, nor in many cases a direct line of sight to the device.
  • the technique can track the location of the device even when it is not visible (e.g., it is in a user's pocket).
  • the technique can operate in real time and can be applied to a wide variety of scenarios.
  • the cross-modal sensor fusion technique locates and tracks a mobile device and its user in video using accelerations.
  • the technique matches the mobile device's accelerations with accelerations observed in images (e.g., color and depth images) of the video on a per pixel basis, computing the difference between the image motion features and the device motion features at a number of pixel locations in one or more of the captured images.
  • the number of pixels can be predetermined if desired, as can the pixel locations that are selected.
  • the technique uses the inertial sensors common to many mobile devices to find the mobile device's acceleration.
  • Device and image accelerations are compared in the 3D coordinate frame of the environment, thanks to the absolute orientation sensing capabilities common in today's mobile computing devices such as, for example, smart phones, as well as the range sensing capability of depth cameras which enables computing the real world coordinates (meters) of image features.
  • the device and image accelerations are compared at a predetermined number of pixels at various locations in an image. The smallest difference indicates the presence of the mobile device at the location.
  • FIG. 1 depicts a flow diagram of a process for practicing one exemplary embodiment of the cross-modal sensor fusion technique described herein.
  • FIG. 2 depicts a flow diagram of a process for practicing another exemplary embodiment of the cross-modal sensor fusion technique described herein.
  • FIG. 3 depicts a flow diagram of a process for practicing yet another exemplary embodiment of the cross-modal sensor fusion technique described herein.
  • FIG. 4 shows one exemplary environment for using a system which correlates motion features obtained from a mobile device and motion features obtained from images of the device in order to track the device according to the cross-modal sensor fusion technique described herein.
  • FIG. 5 shows a high-level depiction of an exemplary cross-modal sensor fusion system that can be used in the exemplary environment shown in FIG. 4 .
  • FIG. 6 shows an illustrative mobile device for use in the system of FIG. 5 .
  • FIG. 7 shows an illustrative external camera system for use in the system of FIG. 5 .
  • FIG. 8 shows an illustrative cross-modal sensor fusion system that can be used in conjunction with the external camera system of FIG. 7 .
  • FIG. 9 is a schematic of an exemplary computing environment which can be used to practice the cross-modal sensor fusion technique.
  • the cross-modal sensor fusion technique is a sensor fusion approach to locating and tracking a mobile device and its user in video.
  • the technique matches motion features measured by sensors on the device with image motion features extracted from images taken of the device. These motion features can be velocities or accelerations, for example.
  • the technique matches device acceleration with acceleration of the device observed in images (e.g., color and depth images) taken by a camera (such as a depth camera, for example). It uses the inertial sensors common to many mobile devices to find the device's acceleration in three dimensions. Device and image accelerations are compared in a 3D coordinate frame of the environment, thanks to the absolute orientation sensing capabilities common in today's smart phones, as well as the range sensing capability of depth cameras which enables computing the real world coordinates (meters) of image features.
  • ShakeID considers which of up to four tracked hands is holding the device.
  • fusion can be performed at every pixel in a video image and requires no separate process to suggest candidate objects to track.
  • the cross-modal sensor technique requires no knowledge of the appearance of the device or the user, and allows for a wide range of camera placement options and applications.
  • An interesting and powerful consequence of the technique is that the mobile device user, and in many cases the device itself, may be reliably tracked even if the device is in the user's pocket, fully out of view of the camera.
  • Tracking of a mobile device and its user can be useful in many real-world applications. For example, it can be used to provide navigation instructions to the user or it can be used to provide location-specific advertisements. It may also be used in physical security related applications. For example, it may be used to track objects of interest or people of interest. Many, many other applications are possible.
  • “Sensor fusion” refers to the combination of multiple disparate sensors to obtain a more useful signal.
  • fusion techniques seek to associate two devices by finding correlation among sensor values taken from both. For example, when two mobile devices are held together and shaken, accelerometer readings from both devices will be highly correlated. Detecting such correlation can cause application software to pair or connect the devices in some useful way. Similarly, when a unique event is observed to happen at the same time at both devices, various pairings may be established. Perhaps the simplest example is connecting two devices by pressing buttons on both devices simultaneously, but the same idea can be applied across a variety of sensors. For example, two devices that are physically bumped together will measure acceleration peaks at the same moment in time. These interactions are sometimes referred to as “synchronous gestures.”
  • a mobile phone may be located and paired with an interactive surface by correlating an acceleration peak in the device with the appearance of a touch contact, or when the surface detects the visible flashing of a phone at the precise moment it is triggered.
  • An object tagged with a Radio Frequency Identification (RFID) chip can be detected and located as it is placed on an interactive surface by correlating the appearance of a new surface contact with the appearance of a new RFID code.
  • Some researchers have proposed correlating accelerometers worn at the waist with visual features to track young children in school. They consider tracking head-worn red LEDs, as well as tracking the position of motion blobs. For the accelerometer measurements, they consider integrating to obtain position for direct comparison with the visual tracking data, as well as deriving pedometer-like features. This research favors pedometer features in combination with markerless motion blob visual features.
  • Still other researchers have proposed identifying and tracking people across multiple existing security cameras by correlating mobile device accelerometer and magnetometer readings. They describe a hidden Markov model-based approach to find the best assignment of sensed devices to tracked people. They rely on an external process to generate tracked objects and use a large matching window, though they demonstrate how their approach can recover from some common tracking failures.
  • One system matches smart phone accelerometer values with the acceleration of up to four hands tracked by a depth camera (e.g., Microsoft Corporation's Kinect® sensor).
  • the hand holding the phone is inferred by matching the device acceleration with acceleration of hand positions over a short window of time (1 s).
  • a Kalman filter is used to estimate the acceleration of each hand.
  • the hand with the most similar pattern of acceleration is determined to be holding the device. This work further studies the correlation of contacts on a touch screen by the opposite hand.
  • touch contacts are associated with the held device by way of the Kinect® tracked skeleton that is seen to be holding the device.
  • Kinect® skeletal tracking requires a fronto-parallel view of the users. Relying on Kinect® skeletal tracking thus constrains where the camera may be placed. For example, skeletal tracking fails when the camera is mounted in the ceiling for an unobstructed top-down view of the room.
  • the cross-modal sensor fusion technique described herein avoids the difficulty of choosing candidate objects by matching low level motion features throughout the image. It may be used in many situations where skeletal tracking is noisy or fails outright and thus can be used in a wide variety of application scenarios. Whereas most of the above discussed work performs matching over a significant window in time, the cross-modal sensor fusion technique described herein uses a fully recursive formulation that relies on storing only the previous frame's results, not a buffer of motion history. In fact, the recursive nature of the computation allows it to be applied everywhere in the image in real time, avoiding the need to track discrete objects.
  • the best approach is to match image motion directly, since as with “synchronous gestures” the pattern of image motion will provide the discriminative power to robustly detect the device or its user. Making fewer assumptions about the appearance of the device or user extends the range of applicability of the approach, and makes the technique less complex, more robust, and ultimately more useful.
  • the technique matches motion features measured on a mobile device with motion features observed in images of the device in order to track the device (and its user).
  • Some embodiments of the technique use color and depth images as described in the following paragraphs, but it is possible to practice the technique using grayscale and/or just two dimensional images.
  • the matching process is performed at a predetermined number of pixels selected from various locations in a color image.
  • the pixels used for matching can be selected based on a variety of distributions in the image.
  • the matching process is performed at each pixel in a color image.
  • FIG. 1 depicts one exemplary process 100 for practicing the cross-modal sensor fusion technique.
  • motion features of a mobile device are measured by sensors on the device and images of the device and any object to which it is rigidly attached are simultaneously captured.
  • Image motion features of the device in the captured images are found (block 104 ).
  • the image motion features can be either velocities or accelerations which are determined on a pixel by pixel basis at various locations in an image.
  • the image motion features are converted into the same coordinate frame of the mobile device, as shown in block 106 .
  • Device motion features measured on the device are then matched with the image motion features of the device, as shown in block 108 .
  • the device motion features can be velocities or accelerations measured by sensors on the device.
  • the difference between the image motion features and the device motion features is computed on a per pixel basis, at a number of pixel locations in one or more of the captured images in the common (possibly real-world) coordinate system, as shown in block 110 .
  • This number of pixels can be, for example, every pixel in an image, every other pixel in an image, a random distribution of pixels in the image, a uniform distribution of pixels and the like.
  • the number of pixels can be predetermined if desired, as can the pixel locations that are selected.
  • the real world coordinates of the device's motions are provided by the sensors on the device, while the real world coordinates of the image motion features are determined using the coordinates from the camera that captured the images.
  • the presence of the device and the object rigidly attached to it are then determined using the difference at the chosen pixels, as shown in block 112 .
  • the smallest difference in an image determines the device location (and any rigid object attached to it, such as the user of the device) in the common (e.g., real-world) coordinate system.
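The per-pixel comparison and minimum-difference localization of blocks 108-112 can be illustrated with a minimal Python/NumPy sketch. It assumes the per-pixel 3D image motion features have already been expressed in the device's world coordinate frame; the function name and array shapes are illustrative, not part of the patent.

```python
import numpy as np

def locate_device(image_features, device_feature):
    """Per-pixel matching step (illustrative sketch).

    image_features : (H, W, 3) per-pixel 3D motion features (velocities or
                     accelerations) in the device's world coordinate frame.
    device_feature : (3,) motion feature reported by the mobile device for
                     the same instant.

    Returns the (row, col) of the pixel whose motion best matches the device,
    i.e. the presumed device/user location, together with the difference map.
    """
    # Euclidean difference between image motion and device motion at every pixel.
    diff = np.linalg.norm(image_features - device_feature, axis=2)

    # The smallest difference indicates the presence of the device at that pixel.
    row, col = np.unravel_index(np.argmin(diff), diff.shape)
    return (row, col), diff
```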
  • FIG. 2 depicts another exemplary process 200 for practicing the cross-modal sensor fusion technique that matches motion features that are accelerations.
  • in block 202 , mobile device acceleration and color and depth images of the mobile device and its user are simultaneously captured.
  • Three-dimensional (3D) image accelerations are found in the captured images, as shown in block 204 . These can be found, for example, by computing a 2D optical flow on the captured color images and using corresponding depth images to compute the 3D acceleration. These 3D image accelerations are then converted into the same coordinate frame of the mobile device, as shown in block 206 .
  • the device accelerations measured by sensors on the device and the image accelerations are then matched, as shown in block 208 .
  • the difference between image and device acceleration is computed on a per pixel basis, at a number of pixel locations in the color images, as shown in block 210 .
  • the smallest difference value indicates the presence of the device at that pixel or point, as shown in block 212 .
  • FIG. 3 depicts yet another exemplary process 300 for practicing the cross-modal sensor fusion technique.
  • mobile device acceleration is found.
  • color and depth images of the mobile device, and optionally its user, are captured.
  • Two-dimensional (2D) image motion is found in the captured images, as shown in block 304 , by simultaneously computing a dense optical flow (a field of flow vectors) on the captured color images.
  • Each flow vector is converted to a 3D motion using the depth images, as shown in block 306 , and each flow vector is transformed to the coordinate frame of the mobile device, as shown in block 308 .
  • Image acceleration is estimated, as shown in block 310 . This 3D acceleration is estimated by a Kalman filter at each point of the image, with the 3D flow at the point provided as input.
  • the 3D device and image accelerations are then matched, as shown in block 312 .
  • the difference between image and device acceleration is computed at a number of pixels or points throughout one or more of the color images.
  • the number of pixel or point locations can be predetermined if desired, as can the pixel or point locations that are selected.
  • the smallest difference value in each image indicates the presence of the device at those pixel or point locations, as shown in block 314 .
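The sequence of blocks 302-314 can be outlined as a single per-frame routine. The sketch below is an assumption-laden outline: the four helper callables (compute_flow, backproject, to_enu, kalman_update) are hypothetical stand-ins for the steps detailed later in this description, and reusing the current depth image for both flow endpoints is a simplification of this illustration.

```python
import numpy as np

def track_device_frame(color_prev, color_cur, depth_cur, device_accel_enu,
                       compute_flow, backproject, to_enu, kalman_update):
    """One frame of the FIG. 3 style process (illustrative outline only).

    Hypothetical helpers:
      compute_flow(color_prev, color_cur) -> (H, W, 2) flow from frame t to t-1
      backproject(depth, xs, ys)          -> (H, W, 3) camera-space points (meters)
      to_enu(points)                      -> same points in the device's ENU frame
      kalman_update(flow_3d)              -> (H, W, 3) per-pixel 3D accelerations
    """
    flow = compute_flow(color_prev, color_cur)               # block 304: 2D image motion
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    p_cur = backproject(depth_cur, xs, ys)                   # block 306: lift to 3D using depth
    p_prev = backproject(depth_cur, xs + flow[..., 0], ys + flow[..., 1])
    flow_3d = to_enu(p_cur) - to_enu(p_prev)                 # block 308: device (ENU) frame
    accel_3d = kalman_update(flow_3d)                        # block 310: per-pixel acceleration
    diff = np.linalg.norm(accel_3d - device_accel_enu, axis=2)    # block 312: match
    return np.unravel_index(np.argmin(diff), diff.shape)          # block 314: smallest difference
```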
  • FIG. 4 shows an illustrative environment 400 which serves as a vehicle for introducing a system for practicing the cross-modal sensor fusion technique described herein.
  • the system receives motion information from a mobile device 402 . More specifically, the system receives device motion features measured by sensors on at least one mobile device 402 .
  • the system further receives captured images of the mobile device 402 , from which image motion features are computed, from at least one external camera system 404 .
  • the device motion features from the mobile device are generated by the mobile device 402 itself, with respect to a frame of reference 406 of the mobile device 402 .
  • the captured images are captured by the external camera system 404 from a frame of reference 408 that is external to the mobile device 402 . In other words, the external camera system 404 observes the mobile device 402 from a vantage point that is external to the mobile device 402 .
  • the mobile device 402 is associated with at least one object. That object can be, for example, a user 412 which moves within a scene.
  • the mobile device 402 comprises a handheld unit that is rigidly attached to a user 412 . Any of the parts of an object (e.g., a user) 412 may be in motion at any given time.
  • one purpose of the system is to track the object (for example, the user 412 ) that is associated with the mobile device 402 .
  • the system seeks to track the user 412 that is holding the mobile device 402 .
  • the system performs this task by correlating the device motion features obtained from mobile device 402 with the image motion features of the mobile device 402 obtained from the captured images.
  • the system matches the device motion features from the mobile device (which are generated by sensors on the mobile device 402 ) with image motion features extracted from the captured images.
  • the system then computes the difference between the motion features from the mobile device (which are generated by the mobile device 402 ) and the motion features extracted from the captured images.
  • the difference is computed on a pixel by pixel basis for a predetermined number of pixels at various locations in an image.
  • the location with the smallest difference is determined to be the location of the mobile device 402 (with the user 412 rigidly attached thereto). The system can then use this conclusion to perform any environment-specific actions.
  • the mobile device 402 corresponds to a piece of equipment that the user grasps and manipulates with a hand.
  • this type of equipment may comprise a pointing device, a mobile telephone device, a game controller device, a game implement (such as a paddle or racket) and so on.
  • the mobile device 402 can correspond to any piece of equipment of any size and shape and functionality that can monitor its own movement and report that movement to the system.
  • the mobile device 402 may correspond to any piece of equipment that is worn by the user 412 or otherwise detachably fixed to the user.
  • the mobile device 402 can be integrated with (or otherwise associated with) a wristwatch, pair of pants, dress, shirt, shoe, hat, belt, wristband, sweatband, patch, button, pin, necklace, ring, bracelet, eyeglasses, goggles, and so on.
  • a scene contains two or more subjects, such as two or more users (not shown in FIG. 4 ). Each user may hold (or wear) his or her own mobile device.
  • the system can determine the association between mobile devices and respective users.
  • the matching process is run for each device.
  • image motion estimation, which is computationally expensive, needs to be run only once regardless of how many devices are matched.
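For the multi-device case just described, a sketch of how the expensive per-pixel image motion estimate can be computed once and reused for every device might look like this; the dictionary-based interface is purely illustrative.

```python
import numpy as np

def associate_devices(image_accels, device_accels):
    """Match several devices against one shared per-pixel acceleration map.

    image_accels  : (H, W, 3) per-pixel 3D image accelerations (computed once
                    per frame; the expensive part).
    device_accels : dict mapping a device id to its reported (3,) acceleration
                    for this frame.
    Returns a dict mapping each device id to its best-matching pixel location.
    """
    locations = {}
    for dev_id, d in device_accels.items():
        # Per-device matching is cheap compared to flow and Kalman filtering.
        diff = np.linalg.norm(image_accels - d, axis=2)
        locations[dev_id] = np.unravel_index(np.argmin(diff), diff.shape)
    return locations
```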
  • the object that is associated with the mobile device 402 is actually a part of the mobile device 402 itself.
  • the object may correspond to the housing of a mobile phone, the paddle of a game implement, etc.
  • Still further interpretations of the terms “mobile device” and “object” are possible.
  • the object corresponds to the user 412 which holds or is otherwise associated with the mobile device 402 .
  • FIG. 5 shows a high-level block depiction of a system 500 that performs the functions summarized above.
  • the system 500 includes a mobile device 502 , an external camera system 504 , and a cross-modal sensor fusion processing system 506 .
  • the mobile device 502 supplies device motion features measured on the mobile device to the cross-modal sensor fusion processing system 506 .
  • the external camera system 504 captures images of the device 502 and sends these to the cross-modal sensor fusion processing system 506 .
  • the cross-modal sensor fusion processing system 506 computes the image motion features. It also performs a correlation analysis of the motion features measured on the mobile device and the image motion features obtained from the captured images at various locations in the images.
  • the cross-modal sensor fusion processing system 506 computes the difference between the device motion features measured on the mobile device and the image motion features obtained from the captured images at these pixel locations, and the smallest difference indicates the location of the mobile device (and therefore the user attached thereto) in that image.
  • FIG. 6 shows an overview of one type of mobile device 602 .
  • the mobile device 602 incorporates or is otherwise associated with one or more position-determining devices 610 .
  • the mobile device 602 can include one or more accelerometers 604 , one or more gyro devices 606 , one or more magnetometers 608 , one or more GPS units (not shown), one or more dead reckoning units (not shown), and so on.
  • Each of the position-determining devices 610 uses a different technique to detect movement of the device, and, as a result, to provide a part of the motion features measured on the mobile device 602 .
  • the mobile device 602 may include one or more other device processing components 612 which make use of the mobile device's motion features for any environment-specific purpose (unrelated to the motion analysis functionality described herein).
  • the mobile device 602 also sends the mobile device's motion features to one or more destinations, such as the cross-modal sensor fusion processing system ( 506 of FIG. 5 ).
  • the mobile device 602 can also send the mobile device's motion features to any other target system, such as a game system.
  • FIG. 7 shows an overview of one type of external camera system 704 .
  • the external camera system 704 can use one or more data capture techniques to capture a scene which contains the mobile device and an object, such as the user.
  • the external camera system 704 can investigate the scene by irradiating it using any kind of electromagnetic radiation, including one or more of visible light, infrared light, radio waves, etc.
  • the external camera system 704 can optionally include an illumination source 702 which bathes the scene in infrared light.
  • the infrared light may correspond to structured light which provides a pattern of elements (e.g., dots, lines, etc.).
  • the structured light deforms as it is cast over the surfaces of the objects in the scene.
  • a depth camera 710 can capture the manner in which the structured light is deformed. Based on that information, the depth camera 710 can derive the distances between different parts of the scene and the external camera system 704 .
  • the depth camera 710 can alternatively, or in addition, use other techniques to generate the depth image, such as a time-of-flight technique, a stereoscopic correspondence technique, etc.
  • the external camera system 704 can alternatively, or in addition, capture other images of the scene.
  • a video camera 706 can capture an RGB video image of the scene or a grayscale video image of the scene.
  • An image processing module 708 can process the depth images provided by the depth camera 710 and/or one or more other images of the scene provided by other capture units.
  • the Kinect® controller provided by Microsoft Corporation of Redmond, Washington, can be used to implement at least parts of the external camera system.
  • the external camera system 704 can capture a video image of the scene.
  • the external camera system 704 sends the video images to the cross-modal sensor fusion system 806 , described in greater detail with respect to FIG. 8 .
  • the cross-modal sensor fusion processing system 806 resides on computing device 900 that is described in greater detail with respect to FIG. 9 .
  • the cross-modal sensor fusion processing system 806 receives device motion features measured onboard of a mobile device and images captured by the external camera system previously discussed.
  • the image motion features are computed by the cross-modal sensor fusion processing system 806 .
  • the device motion features can be velocities or 3D accelerations reported by sensors on the mobile device.
  • the motion features of the mobile device and the captured images can be transmitted to the cross-modal sensor fusion system 806 via a communications link, such as, for example, a WiFi link or other communications link.
  • the system 806 includes a velocity determination module 802 that determines the 2D velocity of the image features.
  • the system 806 also includes an image acceleration estimation module that estimates 3D image accelerations by adding depth information to the 2D image velocities.
  • a conversion module 814 converts the image coordinates into a common (e.g., real-world) coordinate frame used by the mobile device.
  • the system 806 also includes a matching module 810 that matches the device motion features and the image motion features (e.g. that matches the image velocities to the device velocities, or that matches the image accelerations to the device accelerations, depending what type of motion features are being used).
  • a difference computation module 812 computes the differences between the device motion features and the image motion features (e.g., 3D device accelerations and the 3D image accelerations) at points in the captured images. The difference computation module 812 determines the location of the mobile device as the point in each image where the difference is the smallest.
  • orientation is computed by combining information from the onboard accelerometers, gyroscopes and magnetometers. Because this orientation is with respect to magnetic north (as measured by the magnetometer) and gravity (as measured by the accelerometer, when the device is not moving), it is often considered an “absolute” orientation.
  • the mobile device reports orientation to a standard “ENU” (east, north, up) coordinate system. While magnetic north is disturbed by the presence of metal and other magnetic fields present in indoor environments, in practice it tends to be constant in a given room. It is only important that magnetic north not change dramatically as the device moves about the area imaged by the depth camera (e.g., Kinect® sensor).
  • Mobile device accelerometers report device acceleration in the 3D coordinate frame of the device. Having computed absolute orientation using the magnetometers, gyros and accelerometers, it is easy to transform the accelerometer outputs to the ENU coordinate frame and subtract acceleration due to gravity. Some mobile devices provide an API that performs this calculation to give the acceleration of the device in the ENU coordinate frame, without acceleration due to gravity. Of course, because it depends on device orientation, its accuracy is only as good as that of the orientation estimate. One mobile device in a prototype implementation transmits this device acceleration (ENU coordinates, gravity removed) over WiFi to the cross-modal sensor fusion system that performs sensor fusion.
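A minimal sketch of the transformation just described, assuming the device's fused absolute orientation has already been converted to a body-to-ENU rotation matrix. The gravity constant and sign convention are assumptions of this sketch, and many mobile APIs return this gravity-free "linear acceleration" directly.

```python
import numpy as np

# Reaction to gravity as seen by a stationary accelerometer, expressed in an
# ENU frame where +z points up. Sign conventions vary by platform; this is an
# assumption of this sketch.
GRAVITY_REACTION_ENU = np.array([0.0, 0.0, 9.81])

def device_accel_to_enu(accel_body, R_body_to_enu):
    """Rotate a raw accelerometer sample into the ENU frame and remove gravity.

    accel_body    : (3,) accelerometer reading in the device body frame (m/s^2).
    R_body_to_enu : (3, 3) rotation matrix derived from the device's fused
                    absolute orientation (magnetometer + gyro + accelerometer).
    """
    accel_enu = R_body_to_enu @ accel_body
    # A stationary device then reads approximately GRAVITY_REACTION_ENU, so
    # subtracting it leaves only the acceleration due to motion.
    return accel_enu - GRAVITY_REACTION_ENU
```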
  • the cross-modal sensor fusion technique compares image motion features from images of the device and device motion features from sensors on the device in order to track the device (and its user).
  • only velocities are computed.
  • accelerations are also computed.
  • the following discussion focuses more on using accelerations in tracking the mobile device.
  • the processing used when velocities are used to track the mobile device is basically a subset of that used for accelerations. For example, estimating velocity from images is already accomplished by computing optical flow. Computing like velocities on the mobile device involves integrating the accelerometer values from the device.
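For the velocity variant, device velocities can be obtained by integrating the gravity-free accelerations. In the sketch below, the small decay term that limits drift from accelerometer bias is an assumption of this illustration, not something specified here.

```python
import numpy as np

def integrate_device_velocity(accel_samples, dt, v0=np.zeros(3), decay=0.98):
    """Integrate gravity-free ENU accelerations into a velocity estimate.

    accel_samples : iterable of (3,) accelerations (m/s^2) in the ENU frame.
    dt            : sample period in seconds.
    decay         : mild leak toward zero to limit drift from sensor bias
                    (an assumption of this sketch).
    """
    v = np.asarray(v0, dtype=float)
    for a in accel_samples:
        v = decay * (v + a * dt)   # simple Euler integration with leak
    return v
```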
  • the cross-modal sensor fusion technique compares the 3D acceleration of the mobile device with 3D acceleration observed in video.
  • the technique finds acceleration in video by first computing the velocity of movement all of the pixels in a color image using a standard optical flow technique. This 2D image-space velocity is augmented with depth information and converted to velocity in real world 3D coordinates (meters per second). Acceleration is estimated at each point in the image using a Kalman filter. The following paragraphs describe each of these steps in detail.
  • image motion is found by computing a dense optical flow on an entire color image.
  • Dense optical flow algorithms model the motion observed in a pair of images as a displacement u, v at each pixel.
  • optical flow algorithms There are a variety of optical flow algorithms.
  • One implementation of the technique uses an optical flow algorithm known for its accuracy that performs a nonlinear optimization over multiple factors.
  • there are many other ways to compute flow, including a conceptually simpler block matching technique in which, for each point in the image at time t, the closest patch around the point is found in the neighborhood of the point at time t+1 using the sum of squared differences of image pixel intensities or other similarity metrics.
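A sketch of the block matching alternative described above, for a single pixel. The patch and search-window sizes are arbitrary illustrative values, and image borders are not handled.

```python
import numpy as np

def block_match_flow(img_t, img_t1, x, y, patch=7, search=8):
    """Flow at one pixel by block matching (the simpler alternative mentioned above).

    Finds the displacement (u, v) that minimizes the sum of squared differences
    between the patch around (x, y) in img_t and patches in img_t1.
    img_t, img_t1 are 2D float arrays (grayscale intensities).
    """
    r = patch // 2
    ref = img_t[y - r:y + r + 1, x - r:x + r + 1]
    best_ssd, best_uv = np.inf, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            cand = img_t1[y + v - r:y + v + r + 1, x + u - r:x + u + r + 1]
            ssd = np.sum((cand - ref) ** 2)
            if ssd < best_ssd:
                best_ssd, best_uv = ssd, (u, v)
    return best_uv
```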
  • the cross-modal sensor fusion technique computes flow from the current frame at time t to the frame at time t−1.
  • the velocity u, v at each point x, y is denoted as u_{x,y} and v_{x,y}. It is noted that x, y are integer-valued, while u, v are real-valued.
  • Depth cameras such as for example Microsoft Corporation's Kinect® sensor, report distance to the nearest surface at every point in its depth image. Knowing the focal lengths of the depth and color cameras, and their relative position and orientation, the 3D position of a point in the color image may be calculated.
  • One known external camera system provides an API to compute the 3D position of a point in the color camera in real world units (meters). The 3D position corresponding to a 2D point x, y in the color image at time t is denoted as z_{x,y,t}.
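Where such an API is not used, the back-projection can be sketched with a simple pinhole model, assuming the depth image is already registered to the color camera; the intrinsic parameters here are placeholders.

```python
import numpy as np

def backproject(x, y, depth_m, fx, fy, cx, cy):
    """3D camera-space position (meters) of color pixel (x, y) with depth depth_m.

    Assumes a registered depth image (depth resampled to the color camera) and
    a pinhole model with focal lengths fx, fy and principal point (cx, cy);
    real systems expose an equivalent mapping through their own APIs.
    """
    X = (x - cx) * depth_m / fx
    Y = (y - cy) * depth_m / fy
    return np.array([X, Y, depth_m])
```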
  • one embodiment of the cross-modal sensor fusion technique uses a Kalman filter-based technique that estimates velocity and acceleration at each pixel.
  • Some embodiments of the cross-modal sensor fusion technique uses a Kalman filter to estimate acceleration of moving objects in the image.
  • the Kalman filter incorporates knowledge of sensor noise and is recursive (that is, it incorporates all previous observations). The technique thus allows much better estimates of acceleration compared to the approach of using finite differences.
  • the basics of estimating acceleration employed in one embodiment of the cross-modal sensor fusion technique are described below.
  • the Kalman filter is closely related to the simpler “exponential” filter.
  • the exponential filter computes a smoothed estimate x_t of a scalar observation z_t using the recursive relation x_t = x_{t−1} + k·(z_t − x_{t−1}), where the gain k (0 < k < 1) controls the amount of smoothing. This can be improved by replacing x_{t−1} with a prediction: x_t = x*_t + k·(z_t − x*_t), where x*_t is a prediction of x_t given x_{t−1}.
  • the Kalman filter is essentially this improved exponential filter, and includes a principled means to set the value of the gain given the uncertainty in both the prediction x*_t and the observation z_t.
  • x*_t = x_{t−1} + v_{t−1}·Δt + ½·a_{t−1}·Δt²
  • given an observation z_t of the position of a tracked object, the technique updates the estimates of position, velocity and acceleration with
  • x_t = x*_t + k_x·(z_t − x*_t)
  • v_t = v_{t−1} + k_v·(z_t − x*_t)
  • a_t = a_{t−1} + k_a·(z_t − x*_t)
  • Kalman gains k_x, k_v, k_a relate the innovation, or error in the prediction of position, to changes in each of the estimates of position, velocity and acceleration.
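Putting the prediction and update equations above together, a constant-gain version of the filter (with the gains assumed already converged, as discussed below) can be sketched as follows; the function name is illustrative.

```python
def kalman_step(x, v, a, z, dt, kx, kv, ka):
    """One predict/update step of the constant-gain filter described above.

    x, v, a    : previous estimates of position, velocity and acceleration
                 (scalars, or NumPy arrays for the 3D case).
    z          : new position observation z_t.
    kx, kv, ka : constant Kalman gains.
    """
    # Constant-acceleration prediction: x*_t = x + v*dt + 0.5*a*dt^2
    x_pred = x + v * dt + 0.5 * a * dt ** 2
    innovation = z - x_pred          # error in the prediction of position
    x_new = x_pred + kx * innovation
    v_new = v + kv * innovation
    a_new = a + ka * innovation
    return x_new, v_new, a_new
```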
  • Kalman gain is computed via a conventional method for computing the optimal Kalman gain using two distinct phases of prediction and update.
  • the predict phase uses the state estimate from a previous time step to produce an estimate of the state at the current time step.
  • This predicted state estimate, or a priori state estimate is an estimate of the state at the current time step, but does not include observation information from the current time step.
  • the current a priori prediction is combined with current observation information to refine the state estimate (called the a posteriori state estimate).
  • the two phases alternate, with the prediction advancing the state until the next observation, and the update incorporating the observation, but this is not necessary.
  • the Kalman gain is a function of the uncertainty in the predictive model x*_t and the observations z_t.
  • the uncertainty in z t is related to the noise of the sensor.
  • Kalman gain is time-varying. However, if the uncertainty of the predictive model and observations is constant, Kalman gain converges to a constant value, as presented above. This leads to a simplified implementation of the update equations, and further underscores the relationship between the Kalman filter and the simpler exponential filter.
  • the cross-modal sensor fusion technique maintains a Kalman filter of the form described above to estimate 3D acceleration at pixel locations in the image (in some embodiments at each pixel).
  • the estimated position, velocity and acceleration at each pixel location x, y are denoted as x_{x,y,t}, v_{x,y,t} and a_{x,y,t}, respectively.
  • Optical flow information is used in two ways: first, the flow at a point in the image is a measurement of the velocity of the object under that point. It thus acts as input to estimate of acceleration using the Kalman filter. Second, the technique can use flow to propagate motion estimates spatially, so that they track the patches of the image whose motion is being estimated. In this way the Kalman filter can use many observations to accurately estimate the acceleration of a given patch of an object as it moves about the image. This is accomplished in the following manner:
  • x_{x,y,t} = x_{x+u,y+v,t−1} + k_x·(z_{x,y,t} − x*_{x,y,t})
  • v_{x,y,t} = v_{x+u,y+v,t−1} + k_v·(z_{x,y,t} − x*_{x,y,t})
  • a_{x,y,t} = a_{x+u,y+v,t−1} + k_a·(z_{x,y,t} − x*_{x,y,t})
  • x, y are integer-valued, while u, v are real-valued.
  • x_{x,y,t−1}, v_{x,y,t−1} and a_{x,y,t−1} are stored as arrays with the same dimensions as the color image, but because x+u and y+v are real-valued, the quantities x_{x+u,y+v,t−1}, v_{x+u,y+v,t−1}, and a_{x+u,y+v,t−1} are best computed by bilinear interpolation.
  • the Kalman filter at x, y updates motion estimates found at x+u, y+v in the previous time step. In this way motion estimates track the objects whose motion is being estimated.
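A sketch of this flow-propagated per-pixel update, assuming the flow is computed from frame t back to frame t−1 as described above. The bilinear sampler is a straightforward helper, and the array shapes and function names are illustrative.

```python
import numpy as np

def bilinear(arr, xs, ys):
    """Bilinearly sample arr of shape (H, W, C) at real-valued coordinates (xs, ys)."""
    h, w = arr.shape[:2]
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    ax = (np.clip(xs, 0, w - 1) - x0)[..., None]
    ay = (np.clip(ys, 0, h - 1) - y0)[..., None]
    top = (1 - ax) * arr[y0, x0] + ax * arr[y0, x0 + 1]
    bot = (1 - ax) * arr[y0 + 1, x0] + ax * arr[y0 + 1, x0 + 1]
    return (1 - ay) * top + ay * bot

def propagate_and_update(x_prev, v_prev, a_prev, z, flow, dt, kx, kv, ka):
    """Per-pixel Kalman step with flow-based propagation (illustrative sketch).

    x_prev, v_prev, a_prev : (H, W, 3) estimates from the previous frame.
    z    : (H, W, 3) 3D positions observed at each pixel in the current frame.
    flow : (H, W, 2) optical flow (u, v) from the current frame to the previous frame.
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # (x+u, y+v) is where this patch was in the previous frame; sample the
    # previous estimates there (bilinear, since u and v are real-valued).
    xs_prev = xs + flow[..., 0]
    ys_prev = ys + flow[..., 1]
    x_old = bilinear(x_prev, xs_prev, ys_prev)
    v_old = bilinear(v_prev, xs_prev, ys_prev)
    a_old = bilinear(a_prev, xs_prev, ys_prev)

    x_pred = x_old + v_old * dt + 0.5 * a_old * dt ** 2   # constant-acceleration prediction
    innovation = z - x_pred
    return (x_pred + kx * innovation,
            v_old + kv * innovation,
            a_old + ka * innovation)
```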
  • the mobile device is placed display-side down on a plane that is easily observed by the camera, such as a wall or desk. Viewing the color video stream of the camera, the user clicks on three or more points on the plane.
  • the 3D unit normal n_k of the plane in coordinates of the camera is computed by first calculating the 3D position of each clicked point and fitting a plane by a least-squares procedure.
  • the same normal n_w in ENU coordinates is computed by rotating the unit vector z (out of the display of the device) by the device orientation.
  • the gravity unit vector g_k in camera coordinates is taken from the 3-axis accelerometer built into some camera systems, such as, for example, the Kinect® sensor.
  • Gravity g_w in the ENU coordinate frame is, by definition, −z.
  • the 3×3 rotation matrix M_{camera→world} that brings a 3D camera point to the ENU coordinate frame is calculated by matching the normals n_k and n_w, as well as the gravity vectors g_k and g_w, and forming orthonormal bases K and W by successive cross products.
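The exact basis construction is not spelled out here, so the sketch below shows one plausible way to form K and W by successive cross products and combine them into M_{camera→world}; treat the particular choice of basis vectors as an assumption of this illustration.

```python
import numpy as np

def basis_from(n, g):
    """Right-handed orthonormal basis built from a unit normal n and a gravity
    direction g by successive cross products (one plausible construction)."""
    e1 = n / np.linalg.norm(n)
    e2 = np.cross(g, e1)
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    return np.column_stack([e1, e2, e3])

def camera_to_world_rotation(n_k, g_k, n_w, g_w):
    """Rotation taking camera-frame vectors to the ENU (world) frame, found by
    aligning the plane normal and gravity as observed in each frame."""
    K = basis_from(n_k, g_k)   # basis expressed in camera coordinates
    W = basis_from(n_w, g_w)   # the same physical basis in world (ENU) coordinates
    return W @ K.T             # maps camera coordinates to world coordinates
```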
  • 3D image accelerations are estimated at each pixel and transformed to the ENU coordinate system as described above.
  • the acceleration observed at each pixel may be compared directly to the device acceleration d_t by taking the magnitude of their difference, r_{x,y,t} = ‖a_{x,y,t} − d_t‖.
  • Regions of the image that move with the device will give small values of r_{x,y,t}.
  • the hope is that pixels that lie on the device will give the smallest values. If one assumes that the device is present in the scene, it may suffice to locate its position in the image by finding x*, y* that minimizes r_{x,y,t}.
  • other objects that momentarily move with the device such as those rigidly attached (e.g., the hand holding the device and the arm) may also match well.
  • locating the device by computing the instantaneous minimum over r_{x,y,t} will fail to find the device when it is momentarily still or moving with constant velocity.
  • device acceleration may be near zero and so matches many parts of the scene that are not moving, such as the background.
  • to address this, the technique smooths r_{x,y,t} with an exponential filter to obtain s_{x,y,t}. This smoothed value is “tracked” using optical flow and bilinear interpolation, in the same manner as the Kalman motion estimates.
  • the latency of the depth camera (e.g., the Kinect® sensor) differs from that of the mobile device's sensor readings, so without compensation the measure of similarity may be inaccurate.
  • the cross-modal sensor fusion technique accounts for the relative latency of the camera (e.g., Kinect® sensor) by artificially lagging the mobile device readings by some small number of frames. In one prototype implementation this lag is tuned empirically to four frames, approximately 64 ms.
  • the minimum value over s_{x,y,t} can be checked against a threshold to reject matches of poor quality.
  • the minimum value at x*, y* is denoted as s*.
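Combining the residual, the exponential smoothing, the artificial device lag and the quality threshold described above gives a per-frame matching routine along these lines. The smoothing gain and threshold values are placeholders, and warping the smoothed map along the optical flow is assumed to happen outside this function.

```python
import numpy as np
from collections import deque

LAG_FRAMES = 4       # empirical camera-vs-device latency (about 64 ms in the prototype)
ALPHA = 0.1          # exponential smoothing gain (assumed value)
S_THRESHOLD = 1.0    # reject matches of poor quality (m/s^2; assumed value)

device_history = deque(maxlen=LAG_FRAMES + 1)

def match_frame(image_accels, device_accel, s_prev_tracked):
    """Smooth the per-pixel residual and locate the device (illustrative sketch).

    image_accels   : (H, W, 3) per-pixel 3D accelerations in the ENU frame.
    device_accel   : (3,) device acceleration for the current frame.
    s_prev_tracked : previous smoothed residual, already warped along the
                     optical flow (as with the Kalman estimates), or None.
    """
    # Artificially lag the device readings to line up with the slower camera.
    device_history.append(device_accel)
    d_lagged = device_history[0]

    r = np.linalg.norm(image_accels - d_lagged, axis=2)        # instantaneous residual
    s = r if s_prev_tracked is None else (1 - ALPHA) * s_prev_tracked + ALPHA * r

    row, col = np.unravel_index(np.argmin(s), s.shape)
    if s[row, col] > S_THRESHOLD:
        return None, s                                          # poor-quality match
    return (row, col), s
```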
  • FIG. 9 illustrates a simplified example of a general-purpose computer system on which various embodiments and elements of the cross-modal sensor fusion technique, as described herein, may be implemented. It should be noted that any boxes that are represented by broken or dashed lines in FIG. 9 represent alternate embodiments of the simplified computing device, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
  • FIG. 9 shows a general system diagram showing a simplified computing device 900 .
  • Such computing devices can typically be found in devices having at least some minimum computational capability, including, but not limited to, personal computers, server computers, hand-held computing devices, laptop or mobile computers, communications devices such as cell phones and PDAs, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, audio or video media players, etc.
  • the device should have a sufficient computational capability and system memory to enable basic computational operations.
  • the computational capability is generally illustrated by one or more processing unit(s) 910 , and may also include one or more GPUs 915 , either or both in communication with system memory 920 .
  • the processing unit(s) 910 of the general computing device may be specialized microprocessors, such as a DSP, a VLIW, or other micro-controller, or can be conventional CPUs having one or more processing cores, including specialized GPU-based cores in a multi-core CPU.
  • the computing device can be implemented as an ASIC or FPGA, for example.
  • the simplified computing device of FIG. 9 may also include other components, such as, for example, a communications interface 930 .
  • the simplified computing device of FIG. 9 may also include one or more conventional computer input devices 940 (e.g., pointing devices, keyboards, audio and speech input devices, video input devices, haptic input devices, devices for receiving wired or wireless data transmissions, etc.).
  • the simplified computing device of FIG. 9 may also include other optional components, such as, for example, one or more conventional computer output devices 950 (e.g., display device(s) 955 , audio output devices, video output devices, devices for transmitting wired or wireless data transmissions, etc.).
  • typical communications interfaces 930 , input devices 940 , output devices 950 , and storage devices 960 for general-purpose computers are well known to those skilled in the art, and will not be described in detail herein.
  • the simplified computing device of FIG. 9 may also include a variety of computer readable media.
  • Computer readable media can be any available media that can be accessed by computer 900 via storage devices 960 and includes both volatile and nonvolatile media that is either removable 970 and/or non-removable 980 , for storage of information such as computer-readable or computer-executable instructions, data structures, program modules, or other data.
  • Computer readable media may comprise computer storage media and communication media.
  • Computer storage media refers to tangible computer or machine readable media or storage devices such as DVD's, CD's, floppy disks, tape drives, hard drives, optical drives, solid state memory devices, RAM, ROM, EEPROM, flash memory or other memory technology, magnetic cassettes, magnetic tapes, magnetic disk storage, or other magnetic storage devices, or any other device which can be used to store the desired information and which can be accessed by one or more computing devices.
  • the terms “modulated data signal” or “carrier wave” generally refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection carrying one or more modulated data signals, and wireless media such as acoustic, RF, infrared, laser, and other wireless media for transmitting and/or receiving one or more modulated data signals or carrier waves. Combinations of any of the above should also be included within the scope of communication media.
  • software, programs, and/or computer program products embodying some or all of the various embodiments of the cross-modal sensor fusion technique described herein, or portions thereof, may be stored, received, transmitted, or read from any desired combination of computer or machine readable media or storage devices and communication media in the form of computer executable instructions or other data structures.
  • cross-modal sensor fusion technique described herein may be further described in the general context of computer-executable instructions, such as program modules, being executed by a computing device.
  • program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • the embodiments described herein may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices, or within a cloud of one or more devices, that are linked through one or more communications networks.
  • program modules may be located in both local and remote computer storage media including media storage devices.
  • the aforementioned instructions may be implemented, in part or in whole, as hardware logic circuits, which may or may not include a processor.

Abstract

The cross-modal sensor fusion technique described herein tracks mobile devices and the users carrying them. The technique matches motion features from sensors on a mobile device to image motion features obtained from images of the device. For example, the acceleration of a mobile device, as measured by an onboard inertial measurement unit, is compared to similar acceleration observed in the color and depth images of a depth camera. The technique does not require a model of the appearance of either the user or the device, nor in many cases a direct line of sight to the device. The technique can operate in real time and can be applied to a wide variety of ubiquitous computing scenarios.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a Continuation Application of U.S. patent application Ser. No. 14/096,840, filed on Dec. 4, 2013 by Wilson, et al., and entitled “FUSING DEVICE AND IMAGE MOTION FOR USER IDENTIFICATION, TRACKING AND DEVICE ASSOCIATION,” and claims priority to U.S. patent application Ser. No. 14/096,840.
  • BACKGROUND
  • The ability to track the position of a mobile device and its owner in indoor settings is useful for a number of ubiquitous computing scenarios. Tracking a smart phone can be used to identify and track the smart phone's owner in order to provide indoor location-based services, such as establishing the smart phone's connection with nearby infrastructure such as a wall display, or for providing the user of the phone location-specific information and advertisements.
  • SUMMARY
  • This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
  • The cross-modal sensor fusion technique described herein provides a cross-modal sensor fusion approach to track mobile devices and the users carrying them. The technique matches motion features captured using sensors on a mobile device to motion features captured in images of the device in order to track the mobile device and/or its user. For example, in one embodiment the technique matches the velocities of a mobile device, as measured by an onboard measurement unit, to similar velocities observed in images of the device to track the device and any object rigidly attached thereto (e.g., a user). This motion feature matching process is conceptually simple. The technique does not require a model of the appearance of either the user or the device, nor in many cases a direct line of sight to the device. In fact, the technique can track the location of the device even when it is not visible (e.g., it is in a user's pocket). The technique can operate in real time and can be applied to a wide variety of scenarios.
  • In one embodiment, the cross-modal sensor fusion technique locates and tracks a mobile device and its user in video using accelerations. The technique matches the mobile device's accelerations with accelerations observed in images (e.g., color and depth images) of the video on a per pixel basis, computing the difference between the image motion features and the device motion features at a number of pixel locations in one or more of the captured images. The number of pixels can be predetermined if desired, as can the pixel locations that are selected. The technique uses the inertial sensors common to many mobile devices to find the mobile device's acceleration. Device and image accelerations are compared in the 3D coordinate frame of the environment, thanks to the absolute orientation sensing capabilities common in today's mobile computing devices such as, for example, smart phones, as well as the range sensing capability of depth cameras which enables computing the real world coordinates (meters) of image features. The device and image accelerations are compared at a predetermined number of pixels at various locations in an image. The smallest difference indicates the presence of the mobile device at the location.
  • DESCRIPTION OF THE DRAWINGS
  • The specific features, aspects, and advantages of the disclosure will become better understood with regard to the following description, appended claims, and accompanying drawings where:
  • FIG. 1 depicts a flow diagram of a process for practicing one exemplary embodiment of the cross-modal sensor fusion technique described herein.
  • FIG. 2 depicts a flow diagram of a process for practicing another exemplary embodiment of the cross-modal sensor fusion technique described herein.
  • FIG. 3 depicts a flow diagram of a process for practicing yet another exemplary embodiment of the cross-modal sensor fusion technique described herein.
  • FIG. 4 shows one exemplary environment for using a system which correlates motion features obtained from a mobile device and motion features obtained from images of the device in order to track the device according to the cross-modal sensor fusion technique described herein.
  • FIG. 5 shows a high-level depiction of an exemplary cross-modal sensor fusion system that can be used in the exemplary environment shown in FIG. 4.
  • FIG. 6 shows an illustrative mobile device for use in the system of FIG. 5.
  • FIG. 7 shows an illustrative external camera system for use in the system of FIG. 5.
  • FIG. 8 shows an illustrative cross-modal sensor fusion system that can be used in conjunction with the external camera system of FIG. 7.
  • FIG. 9 is a schematic of an exemplary computing environment which can be used to practice the cross-modal sensor fusion technique.
  • DETAILED DESCRIPTION
  • In the following description of the cross-modal sensor fusion technique, reference is made to the accompanying drawings, which form a part thereof, and which show by way of illustration examples by which the cross-modal sensor fusion technique described herein may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the claimed subject matter.
  • 1.0 Cross-Modal Sensor Fusion Technique
  • The following sections provide an introduction to the cross-modal sensor fusion technique, a discussion of sensor fusion, as well as exemplary embodiments of processes and a system for practicing the technique. Details of various embodiments and components of the cross-modal sensor fusion technique are also provided.
  • As a preliminary matter, some of the figures that follow describe concepts in the context of one or more structural components, variously referred to as functionality, modules, features, elements, etc. The various components shown in the figures can be implemented in any manner. In one case, the illustrated separation of various components in the figures into distinct units may reflect the use of corresponding distinct components in an actual implementation. Alternatively, or in addition, any single component illustrated in the figures may be implemented by plural actual components. Alternatively, or in addition, the depiction of any two or more separate components in the figures may reflect different functions performed by a single actual component.
  • Other figures describe the concepts in flowchart form. In this form, certain operations are described as constituting distinct blocks performed in a certain order. Such implementations are illustrative and non-limiting. Certain blocks described herein can be grouped together and performed in a single operation, certain blocks can be broken apart into plural component blocks, and certain blocks can be performed in an order that differs from that which is illustrated herein (including a parallel manner of performing the blocks). The blocks shown in the flowcharts can be implemented in any manner.
  • 1.1 Introduction
  • The ability to track the position of a mobile device and its owner in indoor settings is useful for a number of scenarios. However, many smart phones are small, shiny, and dark in color, making them difficult to image clearly. It might be impossible to differentiate two devices of the same model. A device held in a user's hand may be partially occluded, while a device kept in a purse or clothes pocket cannot be seen at all. Active markers such as an infrared Light Emitting Diode (LED) can assist in tracking and identification. For example, the controllers for next generation game consoles use infrared and visible LEDs to assist in tracking and associating the controllers with players. However, such active markers are rare, and require a line of sight; i.e., the camera must be able to view them.
  • The cross-modal sensor fusion technique is a sensor fusion approach to locating and tracking a mobile device and its user in video. The technique matches motion features measured by sensors on the device with image motion features extracted from images taken of the device. These motion features can be velocities or accelerations, for example. In one embodiment, the technique matches device acceleration with acceleration of the device observed in images (e.g., color and depth images) taken by a camera (such as a depth camera, for example). It uses the inertial sensors common to many mobile devices to find the device's acceleration in three dimensions. Device and image accelerations are compared in a 3D coordinate frame of the environment, thanks to the absolute orientation sensing capabilities common in today's smart phones, as well as the range sensing capability of depth cameras which enables computing the real world coordinates (meters) of image features.
  • A number of works explore the fusion of device sensors and visual features to find the user carrying the device. These rely on some external means of suggesting candidate objects in the video. For example, one system called ShakeID considers which of up to four tracked hands is holding the device. In the cross-modal sensor fusion technique described herein, rather than compare the motion of a small number of candidate objects in the video, fusion can be performed at every pixel in a video image and requires no separate process to suggest candidate objects to track. The cross-modal sensor fusion technique requires no knowledge of the appearance of the device or the user, and allows for a wide range of camera placement options and applications. An interesting and powerful consequence of the technique is that the mobile device user, and in many cases the device itself, may be reliably tracked even if the device is in the user's pocket, fully out of view of the camera.
  • Tracking of a mobile device and its user can be useful in many real-world applications. For example, it can be used to provide navigation instructions to the user or it can be used to provide location-specific advertisements. It may also be used in physical security related applications. For example, it may be used to track objects of interest or people of interest. Many, many other applications are possible.
  • 1.2 Sensor Fusion
  • “Sensor fusion” refers to the combination of multiple disparate sensors to obtain a more useful signal.
  • 1.2.1 Device Association Using Sensor Fusion
  • There are some fusion techniques that seek to associate two devices by finding correlation among sensor values taken from both. For example, when two mobile devices are held together and shaken, accelerometer readings from both devices will be highly correlated. Detecting such correlation can cause application software to pair or connect the devices in some useful way. Similarly, when a unique event is observed to happen at the same time at both devices, various pairings may be established. Perhaps the simplest example is connecting two devices by pressing buttons on both devices simultaneously, but the same idea can be applied across a variety of sensors. For example, two devices that are physically bumped together will measure acceleration peaks at the same moment in time. These interactions are sometimes referred to as “synchronous gestures.”
  • It can be particularly useful to establish correlations across very different modalities, since often such modalities complement each other. A few of these “cross-modal” approaches are mentioned below. For example, a mobile phone may be located and paired with an interactive surface by correlating an acceleration peak in the device with the appearance of a touch contact, or when the surface detects the visible flashing of a phone at the precise moment it is triggered. An object tagged with a Radio Frequency Identification (RFID) chip can be detected and located as it is placed on an interactive surface by correlating the appearance of a new surface contact with the appearance of a new RFID code.
  • 1.2.2 Correlating Image and Device Motion
  • A small number of works have investigated the idea of correlating mobile device inertial sensor readings with movement observed in a video camera.
  • Some researchers have proposed correlating accelerometers worn at the waist with visual features to track young children in school. They consider tracking head-worn red LEDs, as well as tracking the position of motion blobs. For the accelerometer measurements, they consider integrating to obtain position for direct comparison with the visual tracking data, as well as deriving pedometer-like features. This research favors pedometer features in combination with markerless motion blob visual features.
  • Other researchers propose computing normalized cross correlation between the motion trajectory of an object and device accelerometer readings to determine which of several tracked objects contains the device. Their approach requires a window of many samples to perform correlation and relies on an external process to find and track objects from monocular video. Other researchers use a similar approach to synchronize inertial sensors and video cameras.
  • Still other researchers have proposed identifying and tracking people across multiple existing security cameras by correlating mobile device accelerometer and magnetometer readings. They describe a hidden Markov model-based approach to find the best assignment of sensed devices to tracked people. They rely on an external process to generate tracked objects and use a large matching window, though they demonstrate how their approach can recover from some common tracking failures.
  • One system, called the ShakeID system, matches smart phone accelerometer values with the acceleration of up to four hands tracked by a depth camera (e.g., Microsoft Corporation's Kinect® sensor). The hand holding the phone is inferred by matching the device acceleration with acceleration of hand positions over a short window of time (1 s). A Kalman filter is used to estimate the acceleration of each hand. The hand with the most similar pattern of acceleration is determined to be holding the device. This work further studies the correlation of contacts on a touch screen by the opposite hand. Ultimately touch contacts are associated with the held device by way of the Kinect® tracked skeleton that is seen to be holding the device.
  • All of the works discussed above that correlate device motion with motion in video require that a small number of candidate objects are first tracked. The subsequent correlation process involves determining which of these objects' motion most closely matches that of the device. The step of generating candidate objects can be prone to failure. For example, ShakeID compares the motion of the tracked hands of the one or two users detected by the Kinect® sensor skeletal tracking process. If the device is not held in the hand, or if the Kinect® skeletal tracking fails, the device cannot be tracked. Furthermore, holding a mobile device can impact the hand tracking process to an extent that estimating hand acceleration robustly is difficult. Kinect® skeletal tracking requires a fronto-parallel view of the users. Thus, relying on Kinect® skeletal tracking constrains where the camera may be placed. For example, skeletal tracking fails when the camera is mounted in the ceiling for an unobstructed top-down view of the room.
  • The cross-modal sensor fusion technique described herein avoids the difficulty of choosing candidate objects by matching low level motion features throughout the image. It may be used in many situations where skeletal tracking is noisy or fails outright and thus can be used in a wide variety of application scenarios. Whereas most of the above-discussed work performs matching over a significant window in time, the cross-modal sensor fusion technique described herein uses a fully recursive formulation that relies on storing only the previous frame's results, not a buffer of motion history. In fact, the recursive nature of the computation allows it to be applied everywhere in the image in real time, avoiding the need to track discrete objects.
  • Arguably to correlate image and device motion for the purposes of locating the device or the user carrying it, the best approach is to match image motion directly, since as with “synchronous gestures” the pattern of image motion will provide the discriminative power to robustly detect the device or its user. Making fewer assumptions about the appearance of the device or user extends the range of applicability of the approach, and makes the technique less complex, more robust, and ultimately more useful.
  • 1.2.3 Exemplary Processes for Practicing the Technique
  • The following paragraphs describe various exemplary processes for practicing the cross-modal sensor fusion technique. In general, the technique matches motion features measured on a mobile device with motion features observed in images of the device in order to track the device (and its user). Some embodiments of the technique use color and depth images as described in the following paragraphs, but it is possible to practice the technique using grayscale and/or just two dimensional images.
  • In one embodiment of the technique, the matching process is performed at a predetermined number of pixels selected from various locations in a color image. The pixels used for matching can be selected based on a variety of distributions in the image. In one embodiment, the matching process is performed at each pixel in a color image. By virtue of the absolute orientation sensing available on a mobile device such as, for example, a smart phone, and the ability to determine the 3D position of an observed point in the color image taken by a depth camera, the match is performed in a common 3D coordinate frame.
  • FIG. 1 depicts one exemplary process 100 for practicing the cross-modal sensor fusion technique. As shown in block 102, motion features of a mobile device are measured by sensors on the device and images of the device and any object to which it is rigidly attached are simultaneously captured.
  • Image motion features of the device in the captured images are found (block 104). For example, the image motion features can be either velocities or accelerations which are determined on a pixel by pixel basis at various locations in an image. The image motion features are converted into the same coordinate frame of the mobile device, as shown in block 106.
  • Device motion features measured on the device are then matched with the image motion features of the device, as shown in block 108. For example, the device motion features can be velocities or accelerations measured by sensors on the device. The difference between the image motion features and the device motion features is computed on a per pixel basis, at a number of pixel locations in one or more of the captured images in the common (possibly real-world) coordinate system, as shown in block 110. This number of pixels can be, for example, every pixel in an image, every other pixel in an image, a random distribution of pixels in the image, a uniform distribution of pixels, and the like. Furthermore, the number of pixels can be predetermined if desired, as can the pixel locations that are selected. In one embodiment of the technique the real world coordinates of the device's motions are provided by the sensors on the device, while the real world coordinates of the image motion features are determined using the coordinates from the camera that captured the images.
  • The presence of the device and the object rigidly attached to it are then determined using the difference at the chosen pixels, as shown in block 112. The smallest difference in an image determines the device location (and any rigid object attached to it, such as the user of the device) in the common (e.g., real-world) coordinate system.
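  • As a concrete, non-limiting illustration of the actions in blocks 110 and 112, the following Python sketch computes the per-pixel difference between a dense field of image motion features and a single device motion feature and returns the pixel with the smallest difference. The array names, shapes and the use of a Euclidean distance are assumptions of the sketch, not requirements of the process.

```python
import numpy as np

def locate_device(image_features, device_feature):
    """Find the pixel whose image motion feature best matches the device's.

    image_features: H x W x 3 per-pixel motion features (e.g., 3D velocity or
                    acceleration) already expressed in the device's coordinate frame.
    device_feature: length-3 motion feature reported by the device's sensors.
    """
    # Difference between the device feature and every pixel's image feature.
    diff = np.linalg.norm(image_features - np.asarray(device_feature), axis=2)
    # The smallest difference indicates the likely device (and user) location.
    row, col = np.unravel_index(np.argmin(diff), diff.shape)
    return (row, col), diff[row, col]
```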
  • FIG. 2 depicts another exemplary process 200 for practicing the cross-modal sensor fusion technique that matches motion features that are accelerations. As shown in block 202, mobile device acceleration and color and depth images of the mobile device and its user are simultaneously captured.
  • Three-dimensional (3D) image accelerations are found in the captured images, as shown in block 204. These can be found, for example, by computing a 2D optical flow on the captured color images and using corresponding depth images to compute the 3D acceleration. These 3D image accelerations are then converted into the same coordinate frame of the mobile device, as shown in block 206.
  • The device accelerations measured by sensors on the device and the image accelerations are then matched, as shown in block 208. The difference between image and device acceleration is computed on a per pixel basis, at a number of pixel locations in the color images, as shown in block 210. The smallest difference value indicates the presence of the device at that pixel or point, as shown in block 212.
  • FIG. 3 depicts yet another exemplary process 300 for practicing the cross-modal sensor fusion technique. As shown in block 302, mobile device acceleration is found. Simultaneously with the capture of the mobile device's acceleration, color and depth images of the mobile device, and optionally its user, are captured.
  • Two-dimensional (2D) image motion is found in the captured images, as shown in block 304, by simultaneously computing a dense optical flow of flow vectors on the captured color images. Each flow vector is converted to a 3D motion using the depth images, as shown in block 306, and each flow vector is transformed to the coordinate frame of the mobile device, as shown in block 308. Image acceleration is estimated, as shown in block 310. This 3D acceleration is estimated by a Kalman filter at each point of the image, with the 3D flow at the point provided as input.
  • The 3D device and image accelerations are then matched, as shown in block 312. The difference between image and device acceleration is computed at a number of pixels or points throughout one or more of the color images. The number of pixel or point locations can be predetermined if desired, as can the pixel or point locations that are selected. The smallest difference value in each image indicates the presence of the device at those pixel or point locations, as shown in block 314.
  • The above-described exemplary processes for practicing the cross-modal sensor fusion technique provide a general description of these processes. Section 2 of this specification provides specific details of the computations performed in each of the actions performed in the processes.
  • Several exemplary processes for practicing the cross-modal sensor fusion technique having been described, the next section describes an exemplary system that can be used for practicing the technique.
  • 1.2.4 An Exemplary System for Practicing the Technique
  • FIG. 4 shows an illustrative environment 400 which serves as a vehicle for introducing a system for practicing the cross-modal sensor fusion technique described herein. The system receives motion information from a mobile device 402. More specifically, the system receives device motion features measured by sensors on at least one mobile device 402. The system further receives captured images of the mobile device 402, from which image motion features are computed, from at least one external camera system 404. The device motion features from the mobile device are generated by the mobile device 402 itself, with respect to a frame of reference 406 of the mobile device 402. The captured images are captured by the external camera system 404 from a frame of reference 408 that is external to the mobile device 402. In other words, the external camera system 404 observes the mobile device 402 from a vantage point that is external to the mobile device 402.
  • Generally speaking, the mobile device 402 is associated with at least one object. That object can be, for example, a user 412 which moves within a scene. For example, the mobile device 402 comprises a handheld unit that is rigidly attached to a user 412. Any of the parts of an object (e.g., a user) 412 may be in motion at any given time.
  • As will be explained in detail below, one purpose of the system is to track the object (for example, the user 412) that is associated with the mobile device 402. For example, in FIG. 4, the system seeks to track the user 412 that is holding the mobile device 402. The system performs this task by correlating the device motion features obtained from mobile device 402 with the image motion features of the mobile device 402 obtained from the captured images. For example, the system matches the device motion features from the mobile device (which are generated by sensors on the mobile device 402) with image motion features extracted from the captured images. The system then computes the difference between the motion features from the mobile device (which are generated by the mobile device 402) and the motion features extracted from the captured images. In one embodiment of the system the difference is computed on a pixel by pixel basis for a predetermined number of pixels at various locations in an image. The smallest difference is determined as the location of the mobile device 402 (with the user 412 rigidly attached thereto). The system can then use this conclusion to perform any environment-specific actions.
  • The system can be applied to many other scenarios. For example, in FIG. 4, the mobile device 402 corresponds to a piece of equipment that the user grasps and manipulates with a hand. For example, this type of equipment may comprise a pointing device, a mobile telephone device, a game controller device, a game implement (such as a paddle or racket) and so on. But, more generally, the mobile device 402 can correspond to any piece of equipment of any size and shape and functionality that can monitor its own movement and report that movement to the system. For example, in other environments, the mobile device 402 may correspond to any piece of equipment that is worn by the user 412 or otherwise detachably fixed to the user. For example, the mobile device 402 can be integrated with (or otherwise associated with) a wristwatch, pair of pants, dress, shirt, shoe, hat, belt, wristband, sweatband, patch, button, pin, necklace, ring, bracelet, eyeglasses, goggles, and so on.
  • In other cases, a scene contains two or more subjects, such as two or more users (not shown in FIG. 4). Each user may hold (or wear) his or her own mobile device. In this context, the system can determine the association between mobile devices and respective users. In the case of more than one mobile device, the matching process is run for each device. However, image motion estimation, which is a computationally expensive computation, needs to be run only once regardless of how many devices are matched.
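  • To illustrate this point, the following hypothetical sketch estimates the image-acceleration field once per frame (outside the function) and repeats only the inexpensive matching step for each device; the dictionary bookkeeping and Euclidean distance are assumptions of the sketch.

```python
import numpy as np

def locate_all_devices(image_accel, device_accels):
    """Match several devices against one per-frame image-acceleration field.

    image_accel:   H x W x 3 image accelerations, estimated once per frame.
    device_accels: dict mapping device id -> length-3 device acceleration (ENU).
    """
    locations = {}
    for dev_id, accel in device_accels.items():
        diff = np.linalg.norm(image_accel - np.asarray(accel), axis=2)
        locations[dev_id] = np.unravel_index(np.argmin(diff), diff.shape)
    return locations
```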
  • In yet other cases, the object that is associated with the mobile device 402 is actually a part of the mobile device 402 itself. For example, the object may correspond to the housing of a mobile phone, the paddle of a game implement, etc. Still further interpretations of the terms "mobile device" and "object" are possible. However, to facilitate explanation, most of the examples which follow will assume that the object corresponds to the user 412, who holds or is otherwise associated with the mobile device 402.
  • FIG. 5 shows a high-level block depiction of a system 500 that performs the functions summarized above. The system 500 includes a mobile device 502, an external camera system 504, and a cross-modal sensor fusion processing system 506. The mobile device 502 supplies device motion features measured on the mobile device to the cross-modal sensor fusion processing system 506. The external camera system 504 captures images of the device 502 and sends these to the cross-modal sensor fusion processing system 506. The cross-modal sensor fusion processing system 506 computes the image motion features. It also performs a correlation analysis of the motion features measured on the mobile device and the image motion features obtained from the captured images at various locations in the images. Using a pixel by pixel analysis at a number of pixel locations in an image, the cross-modal sensor fusion processing system 506 computes the difference between the device motion features measured on the mobile device and the image motion features obtained from the captured image at these pixel locations and the smallest difference indicates the location of the mobile device (and therefore the user attached thereto) in that image.
  • FIG. 6 shows an overview of one type of mobile device 602. The mobile device 602 incorporates or is otherwise associated with one or more position-determining devices 610. For example, the mobile device 602 can include one or more accelerometers 604, one or more gyro devices 606, one or more magnetometers 608, one or more GPS units (not shown), one or more dead reckoning units (not shown), and so on. Each of the position-determining devices 610 uses a different technique to detect movement of the device, and, as a result, to provide a part of the motion features measured on the mobile device 602.
  • The mobile device 602 may include one or more other device processing components 612 which make use of the mobile device's motion features for any environment-specific purpose (unrelated to the motion analysis functionality described herein). The mobile device 602 also sends the mobile device's motion features to one or more destinations, such as the cross-modal sensor fusion processing system (506 of FIG. 5). The mobile device 602 can also send the mobile device's motion features to any other target system, such as a game system.
  • FIG. 7 shows an overview of one type of external camera system 704. In general, the external camera system 704 can use one or more data capture techniques to capture a scene which contains the mobile device and an object, such as the user. For example, the external camera system 704 can investigate the scene by irradiating it using any kind of electromagnetic radiation, including one or more of visible light, infrared light, radio waves, etc.
  • The external camera system 704 can optionally include an illumination source 702 which bathes the scene in infrared light. For example, the infrared light may correspond to structured light which provides a pattern of elements (e.g., dots, lines, etc.). The structured light deforms as it is cast over the surfaces of the objects in the scene. A depth camera 710 can capture the manner in which the structured light is deformed. Based on that information, the depth camera 710 can derive the distances between different parts of the scene and the external camera system 704. The depth camera 710 can alternatively, or in addition, use other techniques to generate the depth image, such as a time-of-flight technique, a stereoscopic correspondence technique, etc.
  • The external camera system 704 can alternatively, or in addition, capture other images of the scene. For example, a video camera 706 can capture an RGB video image of the scene or a grayscale video image of the scene.
  • An image processing module 708 can process the depth images provided by the depth camera 710 and/or one or more other images of the scene provided by other capture units.
  • The Kinect® controller provided by Microsoft Corporation of Redmond, Washington, can be used to implement at least parts of the external camera system.
  • As discussed above, the external camera system 704 can capture a video image of the scene. The external camera system 704 sends the video images to the cross-modal sensor fusion system 806, described in greater detail with respect to FIG. 8.
  • As shown in FIG. 8, one embodiment of the cross-modal sensor fusion processing system 806 resides on computing device 900 that is described in greater detail with respect to FIG. 9. The cross-modal sensor fusion processing system 806 receives device motion features measured onboard a mobile device and images captured by the external camera system previously discussed. The image motion features are computed by the cross-modal sensor fusion processing system 806. The device motion features can be velocities or 3D accelerations reported by sensors on the mobile device. The motion features of the mobile device and the captured images can be transmitted to the cross-modal sensor fusion system 806 via a communications link, such as, for example, a WiFi link or other communications link.
  • The system 806 includes a velocity determination module 802 that determines the 2D velocity of the image features. The system 806 also includes an image acceleration estimation module that estimates 3D image accelerations by adding depth information to the 2D image velocities. A conversion module 814 converts the image coordinates into a common (e.g., real-world) coordinate frame used by the mobile device.
  • The system 806 also includes a matching module 810 that matches the device motion features and the image motion features (e.g. that matches the image velocities to the device velocities, or that matches the image accelerations to the device accelerations, depending what type of motion features are being used). A difference computation module 812 computes the differences between the device motion features and the image motion features (e.g., 3D device accelerations and the 3D image accelerations) at points in the captured images. The difference computation module 812 determines the location of the mobile device as the point in each image where the difference is the smallest.
  • The above-described exemplary system for practicing the cross-modal sensor fusion technique provides a general description of a system that can be used for practicing the technique. Section 2 of this specification provides specific details of the computations performed in each of the components of the system.
  • 2.0 Details of the Processes and System for Practicing the Cross-modal Sensor Fusion Technique
  • The following sections describe in greater detail the computations performed by the processes and system components of the cross-modal sensor fusion technique depicted in FIGS. 1 through 8.
  • 2.1 Device Motion
  • Many mobile device APIs offer real-time device orientation information. In many devices orientation is computed by combining information from the onboard accelerometers, gyroscopes and magnetometers. Because this orientation is with respect to magnetic north (as measured by the magnetometer) and gravity (as measured by the accelerometer, when the device is not moving), it is often considered an “absolute” orientation. In some embodiments of the cross-modal sensor fusion technique, the mobile device reports orientation to a standard “ENU” (east, north, up) coordinate system. While magnetic north is disturbed by the presence of metal and other magnetic fields present in indoor environments, in practice it tends to be constant in a given room. It is only important that magnetic north not change dramatically as the device moves about the area imaged by the depth camera (e.g., Kinect® sensor).
  • Mobile device accelerometers report device acceleration in the 3D coordinate frame of the device. Having computed absolute orientation using the magnetometers, gyros and accelerometers, it is easy to transform the accelerometer outputs to the ENU coordinate frame and subtract acceleration due to gravity. Some mobile devices provide an API that performs this calculation to give the acceleration of the device in the ENU coordinate frame, without acceleration due to gravity. Of course, because it depends on device orientation, its accuracy is only as good as that of the orientation estimate. One mobile device in a prototype implementation transmits this device acceleration (ENU coordinates, gravity removed) over WiFi to the cross-modal sensor fusion system that performs sensor fusion.
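  • Where a platform does not expose linear acceleration directly, the transformation described above might be sketched as follows; this is a minimal illustration assuming the device-to-ENU rotation matrix has already been derived from the device's absolute orientation.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def device_accel_to_enu(accel_device, R_device_to_enu):
    """Rotate a raw accelerometer reading into ENU and remove gravity.

    accel_device:    length-3 accelerometer reading in the device frame (m/s^2).
    R_device_to_enu: 3x3 rotation derived from the device's absolute orientation
                     (magnetometer + gyroscope + accelerometer fusion).
    """
    accel_enu = R_device_to_enu @ np.asarray(accel_device, dtype=float)
    # At rest the accelerometer measures the reaction to gravity (+g on the
    # "up" axis in ENU); subtract it to obtain linear acceleration.
    accel_enu[2] -= GRAVITY
    return accel_enu
```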
  • 2.2 Image Motion
  • As discussed above with respect to FIGS. 1-3 and 5-8, the cross-modal sensor fusion technique compares image motion features from images of the device and device motion features from sensors on the device in order to track the device (and its user). In some embodiments of the technique, only velocities are computed. In other embodiments, accelerations are also computed. The following discussion focuses more on using accelerations in tracking the mobile device. The processing for tracking the mobile device using velocities is basically a subset of that for using accelerations. For example, estimating velocity from images is already accomplished by computing optical flow. Computing like velocities on the mobile device involves integrating the accelerometer values from the device.
  • In one embodiment, the cross-modal sensor fusion technique compares the 3D acceleration of the mobile device with the 3D acceleration observed in video. The technique finds acceleration in video by first computing the velocity of movement of all of the pixels in a color image using a standard optical flow technique. This 2D image-space velocity is augmented with depth information and converted to velocity in real world 3D coordinates (meters per second). Acceleration is estimated at each point in the image using a Kalman filter. The following paragraphs describe each of these steps in detail.
  • 2.2.1 Finding 2D Velocity with Optical Flow
  • Rather than tracking the position of a discrete set of known objects in the scene, image motion is found by computing a dense optical flow on an entire color image. Dense optical flow algorithms model the motion observed in a pair of images as a displacement u, v at each pixel. There are a variety of optical flow algorithms. One implementation of the technique uses an optical flow algorithm known for its accuracy that performs a nonlinear optimization over multiple factors. However, there are many other ways to compute flow, including a conceptually simpler block matching technique, where for each point in the image at time t, the closest patch around the point is found in the neighborhood of the point at time t+1, using the sum of the squared differences on image pixel intensities, or other similarity metrics. While optical flow is typically used to compute the motion forward from time t−1 to the frame at time t, for reasons explained later the cross-modal sensor fusion technique computes flow from the current frame at time t to the frame at time t−1. The velocity u, v at each point x, y is denoted as ux,y and vx,y. It is noted that x, y are integer valued, while u, v are real-valued.
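  • The text does not mandate a particular flow algorithm; as one readily available stand-in, the sketch below uses OpenCV's Farneback dense flow, passing the frames in reverse order so that the flow maps each pixel at time t to its predecessor at time t−1. The parameter values are illustrative assumptions.

```python
import cv2

def reverse_dense_flow(frame_t, frame_t_minus_1):
    """Dense optical flow from the current frame (time t) back to time t-1.

    frame_t, frame_t_minus_1: BGR color images of identical size.
    Returns an H x W x 2 array of per-pixel displacements (u, v).
    """
    gray_t = cv2.cvtColor(frame_t, cv2.COLOR_BGR2GRAY)
    gray_prev = cv2.cvtColor(frame_t_minus_1, cv2.COLOR_BGR2GRAY)
    # Note the argument order: flow is computed from t to t-1, so every integer
    # pixel (x, y) at time t has a well-defined predecessor at (x+u, y+v).
    return cv2.calcOpticalFlowFarneback(
        gray_t, gray_prev, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
```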
  • 2.2.2 Converting to 3D Motion
  • Depth cameras, such as for example Microsoft Corporation's Kinect® sensor, report distance to the nearest surface at every point in its depth image. Knowing the focal lengths of the depth and color cameras, and their relative position and orientation, the 3D position of a point in the color image may be calculated. One known external camera system provides an API to compute the 3D position of a point in the color camera in real world units (meters). The 3D position corresponding to a 2D point x, y in the color image at time t is denoted as zx,y,t.
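  • A minimal sketch of this back-projection is shown below, under the assumptions of a registered depth image and known pinhole intrinsics; the external camera system's API performs an equivalent mapping internally.

```python
import numpy as np

def pixel_to_3d(x, y, depth_m, fx, fy, cx, cy):
    """Back-project pixel (x, y) with depth depth_m (meters) to camera-space 3D.

    fx, fy, cx, cy: pinhole intrinsics; the depth image is assumed to be
    registered to the color image so the same (x, y) indexes both.
    """
    X = (x - cx) * depth_m / fx
    Y = (y - cy) * depth_m / fy
    return np.array([X, Y, depth_m])
```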
  • Rather than converting 2D velocities (as computed by optical flow) to 3D quantities directly, one embodiment of the cross-modal sensor fusion technique uses a Kalman filter-based technique that estimates velocity and acceleration at each pixel.
  • 2.2.3 Estimating Acceleration
  • Some embodiments of the cross-modal sensor fusion technique uses a Kalman filter to estimate acceleration of moving objects in the image. The Kalman filter incorporates knowledge of sensor noise and is recursive (that is, it incorporates all previous observations). The technique thus allows much better estimates of acceleration compared to the approach of using finite differences. The basics of estimating acceleration employed in one embodiment of the cross-modal sensor fusion technique are described below.
  • The Kalman filter is closely related to the simpler "exponential" filter. The exponential filter computes a smoothed estimate of a scalar z_t using the recursive relation:
  • x_t = x_{t−1} + α(z_t − x_{t−1})
  • where the gain α ∈ (0,1) controls the degree to which the filter incorporates the "innovation" z_t − x_{t−1}. The smaller the gain, the less the filter follows the observation z_t, and the more the signal is smoothed. An improved version of this filter is
  • x_t = x_{t−1} + α(z_t − x_t*)
  • where x_t* is a prediction of x_t given x_{t−1}. The Kalman filter is essentially this improved exponential filter, and includes a principled means to set the value of the gain given the uncertainty in both the prediction x_t* and the observation z_t.
  • For the problem of estimating acceleration from image motion, the motion of a single object in 3D is first considered. The equations of motion predict the object's position x_t*, velocity v_t* and acceleration a_t* from the previous values x_{t−1}, v_{t−1} and a_{t−1}:
  • x_t* = x_{t−1} + v_{t−1} Δt + ½ a_{t−1} Δt²
  • v_t* = v_{t−1} + a_{t−1} Δt
  • a_t* = a_{t−1}
  • Given an observation z_t of the position of a tracked object, the technique updates the estimates of position, velocity and acceleration with
  • x_t = x_{t−1} + k_x * (z_t − x_t*)
  • v_t = v_{t−1} + k_v * (z_t − x_t*)
  • a_t = a_{t−1} + k_a * (z_t − x_t*)
  • where * denotes element-wise multiplication, and the Kalman gains k_x, k_v, k_a relate the innovation, or error in the prediction of position, to changes in each of the estimates of position, velocity and acceleration. The Kalman gain is computed via a conventional method for computing the optimal Kalman gain using two distinct phases of prediction and update. The predict phase uses the state estimate from the previous time step to produce an estimate of the state at the current time step. This predicted state estimate, or a priori state estimate, is an estimate of the state at the current time step, but does not include observation information from the current time step. In the update phase, the current a priori prediction is combined with the current observation information to refine the state estimate (called the a posteriori state estimate). Typically, the two phases alternate, with the prediction advancing the state until the next observation, and the update incorporating the observation, but this is not necessary. Hence, the Kalman gain is a function of the uncertainty in the predictive model x_t* and the observations z_t. In particular, it is preferable to assign a high uncertainty to the estimate of acceleration a_t to reflect the belief that the acceleration of the object varies over time. Similarly, the uncertainty in z_t is related to the noise of the sensor.
  • Finally, it is noted that the usual formulation of Kalman gain is time-varying. However, if the uncertainty of the predictive model and observations is constant, Kalman gain converges to a constant value, as presented above. This leads to a simplified implementation of the update equations, and further underscores the relationship between the Kalman filter and the simpler exponential filter.
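  • For a single tracked point, the constant-gain form of these update equations can be sketched as follows; the specific gain values are assumptions to be tuned from the model and sensor uncertainties, not values given in the text.

```python
import numpy as np

def kalman_step(x, v, a, z, dt, kx, kv, ka):
    """One constant-gain update for a single tracked 3D point.

    x, v, a: previous position, velocity and acceleration estimates (length-3 arrays).
    z:       observed 3D position at the current frame.
    kx, kv, ka: fixed gains; with constant model and sensor uncertainties the
                optimal time-varying Kalman gains converge to such constants.
    """
    x_pred = x + v * dt + 0.5 * a * dt * dt      # motion-model prediction x_t*
    innovation = z - x_pred                      # z_t - x_t*
    # Update equations as written above: each estimate is corrected from its
    # previous value by a gain times the position innovation.
    return (x + kx * innovation,
            v + kv * innovation,
            a + ka * innovation)
```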
  • 2.2.4 Incorporating Flow
  • The cross-modal sensor fusion technique maintains a Kalman filter of the form described above to estimate 3D acceleration at pixel locations in the image (in some embodiments at each pixel). The estimated position, velocity and acceleration at each pixel location x, y are denoted as xx,y,t, vx,y,t and ax,y,t respectively.
  • Optical flow information is used in two ways: first, the flow at a point in the image is a measurement of the velocity of the object under that point. It thus acts as input to estimate of acceleration using the Kalman filter. Second, the technique can use flow to propagate motion estimates spatially, so that they track the patches of the image whose motion is being estimated. In this way the Kalman filter can use many observations to accurately estimate the acceleration of a given patch of an object as it moves about the image. This is accomplished in the following manner:
  • The Kalman update equations are elaborated to indicate that there is a separate instance of the filter at each pixel, and to incorporate flow ux,y and vx,y (which is abbreviated as u and v):

  • x_{x,y,t} = x_{x+u,y+v,t−1} + k_x * (z_{x,y,t} − x*_{x,y,t})
  • v_{x,y,t} = v_{x+u,y+v,t−1} + k_v * (z_{x,y,t} − x*_{x,y,t})
  • a_{x,y,t} = a_{x+u,y+v,t−1} + k_a * (z_{x,y,t} − x*_{x,y,t})
  • It should be noted that x, y are integer-valued, while u, v are real-valued. In practice, xx,y,t−1, vx,y,t−1 and ax,y,t−1 are stored as arrays of the same dimensions as the color image, but because x+u and y+v are real-valued, the quantities xx+u,y+v,t−1, vx+u,y+v,t−1, and ax+u,y+v,t−1 are best computed by bilinear interpolation. In this process, the Kalman filter at x, y updates motion estimates found at x+u, y+v in the previous time step. In this way motion estimates track the objects whose motion is being estimated.
  • This interpolation motivates computing optical flow in reverse fashion, from time t to time t−1: ux,y and vx,y are defined for all integer values x, y. Computing flow in the usual fashion from time t−1 to time t might leave some pixels without “predecessors” from the previous frame, even if previous motion estimates are distributed across multiple pixels using bilinear interpolation. Computing flow from time t to time t−1 avoids this problem.
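  • A minimal sketch of this per-pixel update is given below, using SciPy's bilinear interpolation as a stand-in for the interpolation described above; the constant gains and array layouts are assumptions of the sketch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def per_pixel_kalman_update(prev_x, prev_v, prev_a, flow, z, dt, kx, kv, ka):
    """Constant-gain Kalman update at every pixel, propagated along optical flow.

    prev_x, prev_v, prev_a: H x W x 3 position/velocity/acceleration estimates
                            from the previous frame (ENU coordinates).
    flow: H x W x 2 flow (u, v) computed from time t back to time t-1.
    z:    H x W x 3 observed 3D positions at time t.
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Predecessor coordinates (x+u, y+v) are real-valued, so the previous
    # estimates are sampled with bilinear interpolation.
    coords = [ys + flow[..., 1], xs + flow[..., 0]]

    def warp(field):
        return np.stack([map_coordinates(field[..., c], coords, order=1, mode='nearest')
                         for c in range(field.shape[-1])], axis=-1)

    x_prev, v_prev, a_prev = warp(prev_x), warp(prev_v), warp(prev_a)
    x_pred = x_prev + v_prev * dt + 0.5 * a_prev * dt * dt
    innovation = z - x_pred
    return (x_prev + kx * innovation,
            v_prev + kv * innovation,
            a_prev + ka * innovation)
```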
  • 2.3 Sensor Fusion
  • The following paragraphs describe the sensor fusion computations employed in some embodiments of the technique.
  • 2.3.1 Common Coordinate System
  • In the following, a one-time calibration procedure which obtains the camera's orientation with respect to the ENU coordinate frame of the mobile device is described. Motion observed in the camera may then be transformed to ENU coordinates and compared to device accelerations directly.
  • While there are many ways to compute the relative orientation of the depth camera to the coordinate system used by the mobile device, a straightforward semi-automatic procedure that is easy to implement and gives good results is adopted in one embodiment of the technique. First the mobile device is placed display-side down on a plane that is easily observed by the camera, such as a wall or desk. Viewing the color video stream of the camera, the user clicks on three or more points on the plane.
  • The 3D unit normal nk of the plane in coordinates of the camera is computed by first calculating the 3D position of each clicked point and fitting a plane by a least-squares procedure. The same normal nw in ENU coordinates is computed by rotating the unit vector z (out of the display of the device) by the device orientation. Similarly, the gravity unit vector gk in camera coordinates is taken from the 3-axis accelerometer built into some camera systems, such as, for example, the Kinect® sensor. Gravity gw in the ENU coordinate frame is by definition −z.
  • The 3×3 rotation matrix Mcamera→world that brings a 3D camera point to the ENU coordinate frame is calculated by matching the normals nk and nw, as well as gravity vectors gk and gw, and forming orthonormal bases K and W by successive cross products:
  • k_1 = n_k,   k_2 = (n_k × g_k) / ‖n_k × g_k‖,   k_3 = n_k × k_2,   K = [k_1 k_2 k_3]
  • w_1 = n_w,   w_2 = (n_w × g_w) / ‖n_w × g_w‖,   w_3 = n_w × w_2,   W = [w_1 w_2 w_3]
  • M_{camera→world} = K^{−1} W
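  • A sketch of this calibration step is given below, assuming the basis vectors are stacked as rows and points are treated as row vectors, which is the convention under which the product K^{−1}W above maps camera coordinates to ENU coordinates; the helper names are hypothetical.

```python
import numpy as np

def camera_to_world_rotation(n_k, g_k, n_w, g_w):
    """Rotation relating camera coordinates to the device's ENU frame.

    n_k, g_k: plane normal and gravity direction in camera coordinates.
    n_w, g_w: the same two directions in ENU coordinates.
    Basis vectors are stacked as rows and points treated as row vectors, so a
    camera-frame point p_camera maps to ENU as p_world = p_camera @ M.
    """
    def basis(n, g):
        n = np.asarray(n, dtype=float)
        g = np.asarray(g, dtype=float)
        b1 = n / np.linalg.norm(n)
        b2 = np.cross(n, g)
        b2 = b2 / np.linalg.norm(b2)
        b3 = np.cross(n, b2)
        return np.vstack([b1, b2, b3])   # successive cross products, as above

    K = basis(n_k, g_k)
    W = basis(n_w, g_w)
    return np.linalg.inv(K) @ W          # M_camera->world = K^-1 W
```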
  • 2.3.2 Matching
  • In one embodiment of the cross-modal sensor fusion technique, 3D image accelerations are estimated at each pixel and transformed to the ENU coordinate system as described above. The acceleration observed at each pixel may be compared directly to the device acceleration dt:

  • r_{x,y,t} = √(‖a_{x,y,t} − d_t‖²)
  • Regions of the image that move with the device will give small values of rx,y,t. In particular, the hope is that pixels that lie on the device will give the smallest values. If one assumes that the device is present in the scene, it may suffice to locate its position in the image by finding x*, y* that minimizes rx,y,t. However, other objects that momentarily move with the device, such as those rigidly attached (e.g., the hand holding the device and the arm) may also match well.
  • In practice, in some embodiments of the technique locating the device by computing the instantaneous minimum over rx,y,t will fail to find the device when it is momentarily still or moving with constant velocity. In these cases device acceleration may be near zero and so matches many parts of the scene that are not moving, such as the background. This is addressed by smoothing rx,y,t with an exponential filter to obtain sx,y,t. This smoothed value is "tracked" using optical flow and bilinear interpolation, in the same manner as the Kalman motion estimates. Small values over the smoothed value sx,y,t will pick out objects that match device acceleration over the recent past (depending on smoothing parameter α) and "remember" the moments when some non-zero device acceleration uniquely identified it in the image. In the case where the device stops moving, the small values sx,y,t will stay with the device for some time, hopefully until the device moves again.
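  • A sketch of the match score and its smoothed counterpart follows; the smoothing gain is an illustrative assumption, and the previous smoothed score is assumed to have already been warped along the optical flow as described above.

```python
import numpy as np

ALPHA = 0.1  # smoothing gain; illustrative value, not taken from the text

def update_match_score(image_accel, device_accel, s_prev_warped):
    """Per-pixel acceleration difference and its exponentially smoothed version.

    image_accel:   H x W x 3 image accelerations in ENU coordinates.
    device_accel:  length-3 device acceleration (ENU, gravity removed, lagged).
    s_prev_warped: previous smoothed score, already "tracked" along the optical
                   flow with the same bilinear warp used for the Kalman state.
    """
    r = np.linalg.norm(image_accel - np.asarray(device_accel), axis=2)
    s = s_prev_warped + ALPHA * (r - s_prev_warped)   # exponential smoothing
    return r, s
```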
  • An important consideration in performing the above matching process is that the latency of the depth camera (e.g., the Kinect® sensor) is much greater than that of the mobile device, including WiFi communications. Without accounting for this difference, the measure of similarity may be inaccurate. In one embodiment, the cross-modal sensor fusion technique accounts for the relative latency of the camera (e.g., Kinect® sensor) by artificially lagging the mobile device readings by some small number of frames. In one prototype implementation this lag is tuned empirically to four frames, approximately 64 ms.
  • In some applications it may not be appropriate to assume that the device is in the scene. For example, the user holding the device may leave the field of view of the camera. In this case the minimum value over sx,y,t can be checked against a threshold to reject matches of poor quality. The minimum value at x*, y* is denoted as s*.
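  • The latency compensation and match-quality check might be sketched as follows; the four-frame lag mirrors the prototype value mentioned above, while the acceptance threshold is an illustrative assumption.

```python
from collections import deque
import numpy as np

LAG_FRAMES = 4          # prototype value noted above (about 64 ms)
MATCH_THRESHOLD = 1.0   # m/s^2; illustrative value, not taken from the text

device_accel_buffer = deque(maxlen=LAG_FRAMES + 1)

def lagged_device_accel(current_accel):
    """Delay device readings to line up with the higher-latency camera frames."""
    device_accel_buffer.append(current_accel)
    return device_accel_buffer[0]   # oldest buffered reading (shorter lag while filling)

def best_match(s):
    """Return the best-matching pixel (x*, y*), or None if the match is too poor."""
    y, x = np.unravel_index(np.argmin(s), s.shape)
    return (x, y) if s[y, x] < MATCH_THRESHOLD else None
```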
  • 3.0 Exemplary Operating Environment:
  • The cross-modal sensor fusion technique described herein is operational within numerous types of general purpose or special purpose computing system environments or configurations. FIG. 9 illustrates a simplified example of a general-purpose computer system on which various embodiments and elements of the cross-modal sensor fusion technique, as described herein, may be implemented. It should be noted that any boxes that are represented by broken or dashed lines in FIG. 9 represent alternate embodiments of the simplified computing device, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
  • For example, FIG. 9 shows a general system diagram showing a simplified computing device 900. Such computing devices can typically be found in devices having at least some minimum computational capability, including, but not limited to, personal computers, server computers, hand-held computing devices, laptop or mobile computers, communications devices such as cell phones and PDA's, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, audio or video media players, etc.
  • To allow a device to implement the cross-modal sensor fusion technique, the device should have a sufficient computational capability and system memory to enable basic computational operations. In particular, as illustrated by FIG. 9, the computational capability is generally illustrated by one or more processing unit(s) 910, and may also include one or more GPUs 915, either or both in communication with system memory 920. Note that the processing unit(s) 910 of the general computing device may be specialized microprocessors, such as a DSP, a VLIW, or other micro-controller, or can be conventional CPUs having one or more processing cores, including specialized GPU-based cores in a multi-core CPU. When used in special purpose devices implementing the cross-modal sensor fusion technique, the computing device can be implemented as an ASIC or FPGA, for example.
  • In addition, the simplified computing device of FIG. 9 may also include other components, such as, for example, a communications interface 930. The simplified computing device of FIG. 9 may also include one or more conventional computer input devices 940 (e.g., pointing devices, keyboards, audio and speech input devices, video input devices, haptic input devices, devices for receiving wired or wireless data transmissions, etc.). The simplified computing device of FIG. 9 may also include other optional components, such as, for example, one or more conventional computer output devices 950 (e.g., display device(s) 955, audio output devices, video output devices, devices for transmitting wired or wireless data transmissions, etc.). Note that typical communications interfaces 930, input devices 940, output devices 950, and storage devices 960 for general-purpose computers are well known to those skilled in the art, and will not be described in detail herein.
  • The simplified computing device of FIG. 9 may also include a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 900 via storage devices 960 and includes both volatile and nonvolatile media that is either removable 970 and/or non-removable 980, for storage of information such as computer-readable or computer-executable instructions, data structures, program modules, or other data. Computer readable media may comprise computer storage media and communication media. Computer storage media refers to tangible computer or machine readable media or storage devices such as DVD's, CD's, floppy disks, tape drives, hard drives, optical drives, solid state memory devices, RAM, ROM, EEPROM, flash memory or other memory technology, magnetic cassettes, magnetic tapes, magnetic disk storage, or other magnetic storage devices, or any other device which can be used to store the desired information and which can be accessed by one or more computing devices.
  • Storage of information such as computer-readable or computer-executable instructions, data structures, program modules, etc., can also be accomplished by using any of a variety of the aforementioned communication media to encode one or more modulated data signals or carrier waves, or other transport mechanisms or communications protocols, and includes any wired or wireless information delivery mechanism. Note that the terms “modulated data signal” or “carrier wave” generally refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. For example, communication media includes wired media such as a wired network or direct-wired connection carrying one or more modulated data signals, and wireless media such as acoustic, RF, infrared, laser, and other wireless media for transmitting and/or receiving one or more modulated data signals or carrier waves. Combinations of any of the above should also be included within the scope of communication media.
  • Further, software, programs, and/or computer program products embodying some or all of the various embodiments of the cross-modal sensor fusion technique described herein, or portions thereof, may be stored, received, transmitted, or read from any desired combination of computer or machine readable media or storage devices and communication media in the form of computer executable instructions or other data structures.
  • Finally, the cross-modal sensor fusion technique described herein may be further described in the general context of computer-executable instructions, such as program modules, being executed by a computing device. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The embodiments described herein may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices, or within a cloud of one or more devices, that are linked through one or more communications networks. In a distributed computing environment, program modules may be located in both local and remote computer storage media including media storage devices. Still further, the aforementioned instructions may be implemented, in part or in whole, as hardware logic circuits, which may or may not include a processor.
  • It should also be noted that any or all of the aforementioned alternate embodiments described herein may be used in any combination desired to form additional hybrid embodiments. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. The specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

1. A computer-implemented process for locating an object to which a device is attached, comprising:
using one or more computing devices to perform the following process actions, the computing devices being in communication with each other via a computer network whenever a plurality of computing devices is used:
capturing images of the device and the attached object;
finding image motion features in the captured images;
matching device motion features measured by the device with the image motion features in the captured images;
computing the difference between the image motion features and the device motion features at a number of locations in the captured images; and
using the computed difference in a number of the said locations in determining the location of the device and attached object in coordinates of a common coordinate system.
2. The computer-implemented process of claim 1 wherein the number of locations are pixel locations.
3. The computer-implemented process of claim 2 wherein the pixel locations are every pixel in an image.
4. The computer-implemented process of claim 1 wherein the image motion features and the device motion features are expressed in terms of velocities, and wherein the velocities of the image motion features are found by computing the velocity for all of the pixels in an image using an optical flow technique.
5. The computer-implemented process of claim 3 wherein the optical flow is computed using only a current image frame at time t and a previous image frame at time t−1.
6. The computer-implemented process of claim 1 wherein the common coordinate system is a real world coordinate system.
7. The computer-implemented process of claim 1 wherein the device captures real-time device orientation.
8. The computer-implemented process of claim 7 wherein the device reports device orientation to a standard east, north, up (ENU) coordinate system.
9. The computer-implemented process of claim 1 wherein the image motion features and the device motion features are expressed in terms of accelerations.
10. The computer-implemented process of claim 9, wherein the accelerations of the image motion features in an image are found by:
computing the velocity of movement for all of the pixels in the image using an optical flow technique;
augmenting the computed velocity of movement for the pixels in the image by corresponding depth information;
converting the augmented velocity of movement for the pixels in the image into a three dimensional coordinate system of the device; and
estimating acceleration for all the pixels in the image in three dimensional coordinates using the converted augmented velocity of movement.
11. The computer-implemented process of claim 10 wherein the acceleration at a pixel is estimated using a Kalman filter.
12. The computer-implemented process of claim 11 wherein the Kalman filter uses the flow at a point in the image to measure the velocity of an object under that point and tracks patches of the image whose motion is being estimated to estimate the acceleration of a given patch of an object as it moves in the image.
13. The computer-implemented process of claim 10 wherein the optical flow information at a point in an image is a measurement of the velocity of an object under that point.
14. The computer-implemented process of claim 10 wherein optical flow information is used to track patches of image whose motion is being estimated.
15. The computer-implemented process of claim 1 further comprising matching device motion features and image motion features in a common coordinate system comprising:
obtaining a camera's orientation that captures an image with respect to a coordinate frame of the mobile device;
transforming motion observed in the captured image to the coordinate frame of the mobile device to obtain a difference.
16. The computer-implemented process of claim 1 wherein no predetermined model of the device or an object rigidly attached to the device is required in order to determine the presence of the device or an object rigidly attached to the device.
17. A system for determining mobile device location, comprising:
one or more computing devices, said computing devices being in communication with each other via a computer network whenever there is a plurality of computing devices, and
a computer program having a plurality of sub-programs executed by said one or more computing devices, wherein the sub-programs cause said one or more computing devices to,
receive mobile device 3D accelerations from sensors on the mobile device;
determine 3D image accelerations in images of the mobile device captured by a depth camera simultaneously with the 3D device accelerations received from the mobile device;
in a common coordinate system, on a per pixel basis, compute the differences between the 3D device accelerations and the 3D image accelerations at a number of locations in the captured images; and
use the computed differences to determine the 3D location of the mobile device.
18. The system of claim 17 wherein there is no direct line of sight to the mobile device when capturing images of the mobile device.
19. The system of claim 18 wherein the differences between 3D device accelerations and 3D image accelerations are smoothed.
20. A computer-implemented process for determining the location of a mobile device, comprising:
using one or more computing devices to perform the following process actions, the computing devices being in communication with each other via a computer network whenever a plurality of computing devices is used:
capturing mobile device three-dimensional (3D) acceleration;
simultaneously with the capture of the mobile device's 3D acceleration, capturing color and depth images of the mobile device;
finding 2D image motion in the captured images by computing a dense optical flow of flow vectors on the captured color images;
converting each flow vector to a 3D motion using the depth images;
transforming each flow vector to a real-world coordinate system of the mobile device;
estimating 3D image acceleration from the transformed flow vectors;
computing the difference between the 3D image acceleration and the 3D device acceleration at a number of point locations in the captured color images; and
using the computed difference at a number of the point locations to determine the presence of the device in real-world coordinates.
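The conversion of image motion to 3D recited in claims 4, 10 and 20 — compute a dense optical flow on the color images, augment each flow vector with depth, and express the result in a 3D coordinate frame — can be illustrated with the short Python/OpenCV sketch below. This is an editorial illustration rather than the patented implementation: the camera intrinsics (fx, fy, cx, cy), the camera-to-world rotation R_cam_to_world and the Farnebäck flow parameters are assumed to be available from calibration and tuning (Farnebäck's two-frame method is one suitable dense optical flow technique; see the non-patent citations below).

```python
import cv2
import numpy as np

def pixel_grid(h, w):
    """Float32 pixel-coordinate grids for an h x w image."""
    ys, xs = np.mgrid[0:h, 0:w]
    return xs.astype(np.float32), ys.astype(np.float32)

def backproject(xs, ys, depth, fx, fy, cx, cy):
    """Back-project pixel coordinates plus depth (in meters) to 3D camera-frame points."""
    z = depth.astype(np.float32)
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    return np.stack([x, y, z], axis=-1)                      # H x W x 3

def image_velocity_3d(prev_gray, gray, prev_depth, depth, dt,
                      fx, fy, cx, cy, R_cam_to_world):
    """Per-pixel 3D velocity from two color+depth frames, rotated into a world frame."""
    # Dense two-frame optical flow (Farneback); parameter values are illustrative.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray.shape
    xs, ys = pixel_grid(h, w)
    # 3D position of each pixel in the previous frame, and of the point it flowed to.
    p0 = backproject(xs, ys, prev_depth, fx, fy, cx, cy)
    map_x, map_y = xs + flow[..., 0], ys + flow[..., 1]
    depth_dst = cv2.remap(depth.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)
    p1 = backproject(map_x, map_y, depth_dst, fx, fy, cx, cy)
    v_cam = (p1 - p0) / dt                                    # 3D velocity, camera frame
    return v_cam @ R_cam_to_world.T                           # rotate into the world frame
```

Differencing or filtering successive velocity fields (as in the next sketch) then yields the per-pixel 3D image acceleration called for in claim 10.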
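Claims 11 and 12 recite estimating the acceleration of a tracked image patch with a Kalman filter whose measurement is the optical-flow-derived velocity. A minimal constant-acceleration filter along those lines is sketched below; the class name, state layout and noise levels are hypothetical choices made for illustration only.

```python
import numpy as np

class PatchAccelerationKF:
    """Constant-acceleration Kalman filter for one tracked image patch.

    The state per axis is [velocity, acceleration]; the measurement is the patch's
    3D velocity obtained from optical flow plus depth. q and r are assumed noise levels.
    """

    def __init__(self, dt, q=1e-2, r=1e-1):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # v' = v + a*dt, a' = a
        self.H = np.array([[1.0, 0.0]])              # only velocity is measured
        self.Q = q * np.eye(2)
        self.R = np.array([[r]])
        self.x = np.zeros((3, 2))                    # one [v, a] state per axis (x, y, z)
        self.P = np.stack([np.eye(2)] * 3)

    def update(self, v_meas):
        """v_meas: measured 3D patch velocity; returns the estimated 3D acceleration."""
        a_est = np.zeros(3)
        for i in range(3):
            # Predict
            x = self.F @ self.x[i]
            P = self.F @ self.P[i] @ self.F.T + self.Q
            # Correct with the velocity measurement
            y = v_meas[i] - (self.H @ x)[0]
            S = (self.H @ P @ self.H.T + self.R)[0, 0]
            K = (P @ self.H.T)[:, 0] / S
            self.x[i] = x + K * y
            self.P[i] = (np.eye(2) - np.outer(K, self.H[0])) @ P
            a_est[i] = self.x[i][1]                  # acceleration component of the state
        return a_est
```

Running one such filter per tracked patch (or per pixel) produces the smoothed 3D acceleration estimates used in the matching step.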
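Finally, the matching step of claims 1, 17-19 and 20 compares the device's measured acceleration, rotated into the common coordinate system using its reported ENU orientation, with the per-pixel image accelerations and attributes the device to the best-matching pixel. A possible sketch, with a hypothetical locate_device helper and an assumed exponential smoothing constant for the difference map (cf. claim 19), is:

```python
import numpy as np

def locate_device(image_accel, device_accel, prev_score=None, alpha=0.7):
    """Find the pixel whose 3D image acceleration best matches the device acceleration.

    image_accel  : H x W x 3 per-pixel accelerations in the common (world) frame.
    device_accel : length-3 accelerometer vector rotated into the same frame.
    prev_score   : previous smoothed difference map, for temporal smoothing.
    Returns ((row, col), score): the best-matching pixel and the smoothed difference map.
    """
    diff = np.linalg.norm(image_accel - device_accel[None, None, :], axis=-1)
    score = diff if prev_score is None else alpha * diff + (1.0 - alpha) * prev_score
    row, col = np.unravel_index(np.argmin(score), score.shape)
    return (row, col), score
```

Because the comparison is made at every pixel, no predetermined appearance model of the device or of the object rigidly attached to it is needed (claim 16): the device is attributed to whichever image region moves the way the accelerometer says the device is moving.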
US15/592,344 2013-12-04 2017-05-11 Fusing device and image motion for user identification, tracking and device association Abandoned US20170330031A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/592,344 US20170330031A1 (en) 2013-12-04 2017-05-11 Fusing device and image motion for user identification, tracking and device association

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/096,840 US9679199B2 (en) 2013-12-04 2013-12-04 Fusing device and image motion for user identification, tracking and device association
US15/592,344 US20170330031A1 (en) 2013-12-04 2017-05-11 Fusing device and image motion for user identification, tracking and device association

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/096,840 Continuation US9679199B2 (en) 2013-12-04 2013-12-04 Fusing device and image motion for user identification, tracking and device association

Publications (1)

Publication Number Publication Date
US20170330031A1 true US20170330031A1 (en) 2017-11-16

Family

ID=52101621

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/096,840 Active 2035-05-25 US9679199B2 (en) 2013-12-04 2013-12-04 Fusing device and image motion for user identification, tracking and device association
US15/592,344 Abandoned US20170330031A1 (en) 2013-12-04 2017-05-11 Fusing device and image motion for user identification, tracking and device association

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/096,840 Active 2035-05-25 US9679199B2 (en) 2013-12-04 2013-12-04 Fusing device and image motion for user identification, tracking and device association

Country Status (4)

Country Link
US (2) US9679199B2 (en)
EP (1) EP3077992B1 (en)
CN (1) CN105814609B (en)
WO (1) WO2015084667A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10121259B2 (en) * 2015-06-04 2018-11-06 New York University Langone Medical System and method for determining motion and structure from optical flow
CN109886326A (en) * 2019-01-31 2019-06-14 深圳市商汤科技有限公司 Cross-modal information retrieval method, apparatus and storage medium
US20220070667A1 (en) 2020-08-28 2022-03-03 Apple Inc. Near owner maintenance
US20220200789A1 (en) * 2019-04-17 2022-06-23 Apple Inc. Sharing keys for a wireless accessory
US11606669B2 (en) 2018-09-28 2023-03-14 Apple Inc. System and method for locating wireless accessories
US11863671B1 (en) 2019-04-17 2024-01-02 Apple Inc. Accessory assisted account recovery

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10343283B2 (en) * 2010-05-24 2019-07-09 Intouch Technologies, Inc. Telepresence robot system that can be accessed by a cellular phone
US9437000B2 (en) * 2014-02-20 2016-09-06 Google Inc. Odometry feature matching
EP2988288A1 (en) * 2014-08-22 2016-02-24 Moog B.V. Medical simulator handpiece
EP3205379A4 (en) * 2014-10-10 2018-01-24 Fujitsu Limited Skill determination program, skill determination method, skill determination device, and server
US9412169B2 (en) * 2014-11-21 2016-08-09 iProov Real-time visual feedback for user positioning with respect to a camera and a display
CN107111598B (en) * 2014-12-19 2020-09-15 深圳市大疆创新科技有限公司 Optical flow imaging system and method using ultrasound depth sensing
JP6428277B2 (en) * 2015-01-09 2018-11-28 富士通株式会社 Object association method, apparatus, and program
US10238976B2 (en) * 2016-07-07 2019-03-26 Disney Enterprises, Inc. Location-based experience with interactive merchandise
US20180204331A1 (en) * 2016-07-21 2018-07-19 Gopro, Inc. Subject tracking systems for a movable imaging system
CN107816990B (en) * 2016-09-12 2020-03-31 华为技术有限公司 Positioning method and positioning device
US10198828B2 (en) * 2016-10-07 2019-02-05 Samsung Electronics Co., Ltd. Image processing method and electronic device supporting the same
CN108696293B (en) * 2017-03-03 2020-11-10 株式会社理光 Wearable device, mobile device and connection method thereof
EP3591570A4 (en) * 2017-03-22 2020-03-18 Huawei Technologies Co., Ltd. Method for determining terminal held by subject in picture and terminal
EP3479767A1 (en) * 2017-11-03 2019-05-08 Koninklijke Philips N.V. Distance measurement devices, systems and methods, particularly for use in vital signs measurements
US10122969B1 (en) 2017-12-07 2018-11-06 Microsoft Technology Licensing, Llc Video capture systems and methods
US11194842B2 (en) * 2018-01-18 2021-12-07 Samsung Electronics Company, Ltd. Methods and systems for interacting with mobile device
US10706556B2 (en) 2018-05-09 2020-07-07 Microsoft Technology Licensing, Llc Skeleton-based supplementation for foreground image segmentation
EP3605287A1 (en) * 2018-07-31 2020-02-05 Nokia Technologies Oy An apparatus, method and computer program for adjusting output signals
CN114185071A (en) * 2021-12-10 2022-03-15 武汉市虎联智能科技有限公司 Positioning system and method based on object recognition and spatial position perception
CN114821006B (en) * 2022-06-23 2022-09-20 盾钰(上海)互联网科技有限公司 Twin state detection method and system based on interactive indirect reasoning

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4185052B2 (en) 2002-10-15 2008-11-19 University of Southern California Enhanced virtual environment
US7236091B2 (en) 2005-02-10 2007-06-26 Pinc Solutions Position-tracking system
US20060240866A1 (en) * 2005-04-25 2006-10-26 Texas Instruments Incorporated Method and system for controlling a portable communication device based on its orientation
US7761233B2 (en) 2006-06-30 2010-07-20 International Business Machines Corporation Apparatus and method for measuring the accurate position of moving objects in an indoor environment
US8380246B2 (en) 2007-03-01 2013-02-19 Microsoft Corporation Connecting mobile devices via interactive input medium
WO2009021068A1 (en) 2007-08-06 2009-02-12 Trx Systems, Inc. Locating, tracking, and/or monitoring personnel and/or assets both indoors and outdoors
US8295546B2 (en) 2009-01-30 2012-10-23 Microsoft Corporation Pose tracking pipeline
CN101950200B (en) * 2010-09-21 2011-12-21 Zhejiang University Camera-based method and device for controlling game map and character movement with the eyes
US20130046505A1 (en) 2011-08-15 2013-02-21 Qualcomm Incorporated Methods and apparatuses for use in classifying a motion state of a mobile device
US9939888B2 (en) 2011-09-15 2018-04-10 Microsoft Technology Licensing Llc Correlating movement information received from different sources
US20130113704A1 (en) 2011-11-04 2013-05-09 The Regents Of The University Of California Data fusion and mutual calibration for a sensor network and a vision system
WO2013099537A1 (en) 2011-12-26 2013-07-04 Semiconductor Energy Laboratory Co., Ltd. Motion recognition device
US9460029B2 (en) 2012-03-02 2016-10-04 Microsoft Technology Licensing, Llc Pressure sensitive keys

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Farneback, "Two-Frame Motion Estimation Based on Polynomial Expansion", July 2003, Springer, Proceedings of the 13th Scandinavian Conference on Image Analysis, p. 363-370. *
Sanchez et al., "An Efficient Algorithm for Estimating the Inverse Optical Flow", June 2013, Springer, Proceedings of the 6th Iberian Conference on Pattern Recognition and Image Analysis, p. 390-397. *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10121259B2 (en) * 2015-06-04 2018-11-06 New York University Langone Medical System and method for determining motion and structure from optical flow
US11606669B2 (en) 2018-09-28 2023-03-14 Apple Inc. System and method for locating wireless accessories
US11641563B2 (en) 2018-09-28 2023-05-02 Apple Inc. System and method for locating wireless accessories
CN109886326A (en) * 2019-01-31 2019-06-14 深圳市商汤科技有限公司 Cross-modal information retrieval method, apparatus and storage medium
US20220200789A1 (en) * 2019-04-17 2022-06-23 Apple Inc. Sharing keys for a wireless accessory
US11863671B1 (en) 2019-04-17 2024-01-02 Apple Inc. Accessory assisted account recovery
US20220070667A1 (en) 2020-08-28 2022-03-03 Apple Inc. Near owner maintenance
US11889302B2 (en) 2020-08-28 2024-01-30 Apple Inc. Maintenance of wireless devices

Also Published As

Publication number Publication date
EP3077992A1 (en) 2016-10-12
CN105814609A (en) 2016-07-27
CN105814609B (en) 2019-11-12
EP3077992B1 (en) 2019-11-06
US9679199B2 (en) 2017-06-13
WO2015084667A1 (en) 2015-06-11
US20150154447A1 (en) 2015-06-04

Similar Documents

Publication Publication Date Title
US9679199B2 (en) Fusing device and image motion for user identification, tracking and device association
US20180328753A1 (en) Local location mapping method and system
US9875579B2 (en) Techniques for enhanced accurate pose estimation
US9529426B2 (en) Head pose tracking using a depth camera
CN109947886B (en) Image processing method, image processing device, electronic equipment and storage medium
US9576183B2 (en) Fast initialization for monocular visual SLAM
US9411037B2 (en) Calibration of Wi-Fi localization from video localization
Elloumi et al. Indoor pedestrian localization with a smartphone: A comparison of inertial and vision-based methods
US9646384B2 (en) 3D feature descriptors with camera pose information
US9303999B2 (en) Methods and systems for determining estimation of motion of a device
JP5181704B2 (en) Data processing apparatus, posture estimation system, posture estimation method and program
WO2017215024A1 (en) Pedestrian navigation device and method based on novel multi-sensor fusion technology
Zhao et al. Enhancing camera-based multimodal indoor localization with device-free movement measurement using WiFi
TW201715476A (en) Navigation system based on augmented reality technique that determines the user's direction of movement by analyzing optical flow in planar images captured by the image unit
JP2016534450A (en) Inertial navigation based on vision
US20170311125A1 (en) Method of setting up a tracking system
WO2016126863A1 (en) Estimating heading misalignment between a device and a person using optical sensor
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN115699096B (en) Tracking augmented reality devices
CN116249872A (en) Indoor positioning with multiple motion estimators
Li et al. RD-VIO: Robust visual-inertial odometry for mobile augmented reality in dynamic environments
CN114187509B (en) Object positioning method and device, electronic equipment and storage medium
Koc et al. Indoor mapping and positioning using augmented reality
US20240062541A1 (en) Detection system, detection method, and recording medium
US20220282978A1 (en) Estimating camera motion through visual tracking in low contrast high motion single camera systems

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WILSON, ANDREW D.;BENKO, HRVOJE;SIGNING DATES FROM 20131129 TO 20131203;REEL/FRAME:046192/0829

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:046192/0880

Effective date: 20141014

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION