US20170090007A1 - Vision and Radio Fusion Based Precise Indoor Localization - Google Patents
- Publication number
- US20170090007A1 (application US 14/865,531)
- Authority
- US
- United States
- Prior art keywords
- location data
- storage medium
- computer readable
- readable storage
- data
- Legal status
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/021—Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/02—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
- G01S5/0257—Hybrid positioning
- G01S5/0263—Hybrid positioning by combining or switching between positions derived from two or more separate positioning systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/02—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
- G01S5/0257—Hybrid positioning
- G01S5/0263—Hybrid positioning by combining or switching between positions derived from two or more separate positioning systems
- G01S5/0264—Hybrid positioning by combining or switching between positions derived from two or more separate positioning systems at least one of the systems being a non-radio wave positioning system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/33—Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/16—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Definitions
- Embodiments of the invention concern sensors.
- FIG. 1 depicts a localization system in an embodiment.
- FIG. 2 depicts a process in an embodiment.
- FIG. 3A depicts radio based tracking.
- FIG. 3B depicts vision based tracking.
- FIG. 3C depicts radio/vision fusion based tracking in an embodiment of the invention.
- FIG. 4 includes a system for use with an embodiment of the invention.
- FIG. 5 includes a system for use with an embodiment of the invention.
- embodiments described herein address multimodal location fusion systems that provide sub-meter accurate (within one meter of accuracy) indoor localization by detecting and tracking moving objects (e.g., fork lifts, flying drones, mobile robots, animals, humans) with radio and visual sensors.
- Such embodiments utilize some or all available sensors at a location, without demanding extra hardware, to provide highly accurate localization based on integrating radio-based and visual-based signals, accentuating the strengths of each modality while de-emphasizing its weaknesses.
- embodiments address at least three issues: (1) how to provide a sub-meter level location accuracy without demanding additional/excessive hardware, (2) how to effectively interact with different sensor modalities, and (3) how to enhance individual sensor pipelines with help from other sensors in a way that leads to overall system performance improvement.
- embodiments successfully interact with different sensory modalities (e.g., vision and radio) and use high-level temporal-spatial features (e.g., trajectory patterns and shape), instead of raw measurement data.
- radio signals have unique identifiers (e.g., sender_ID or device_ID included within a radio data packet) and relatively high precision in terms of trajectory patterns.
- radio signals are often broadcast relatively infrequently (low frequency) and therefore suffer from poor location accuracy.
- embodiments can leverage the unique identifiers of radio signals to accurately determine the number of users/radio broadcasters in a given space.
- radio is an excellent way to detect and track multiple moving objects.
- visual signals have high localization accuracy and are output relatively frequently (high frequency).
- visual signals have difficulty correctly identifying moving objects that may occlude one another. Such visual tracking programs incorrectly merge multiple objects when those objects draw near one another.
- embodiments apply clustering techniques to remove shadows and/or group segments of a person together, and then combine that improved visual output with radio based accurate determinations of the IDs of users in the video frame.
- IDs stay with their correct objects even when those objects start apart, draw near one another, and then move away from each other again.
- Such a scenario is addressed more fully with regard to FIG. 3 . Therefore, understanding the characteristics of different sensor modalities and fusing the best of those characteristics provides advantages to embodiments.
- FIG. 1 depicts a localization system in an embodiment.
- Mobile computing nodes 104 A, 104 B (such as the node of FIG. 4 ) correspond to users 102 A, 102 B.
- the mobile devices 104 A, 104 B may include any suitable processor-driven computing device including, but not limited to, a desktop computing device, a laptop computing device, a server, a smartphone, a tablet, and so forth.
- the mobile devices 104 A, 104 B may communicate with one or more access points 110 A, 110 B, 110 C (sometimes collectively referred to herein as 110 ). Each access point 110 A, 110 B, 110 C may be configured with a unique identifier and, optionally, additional information about the access points.
- the access points 110 may provide wireless signal coverage for an area.
- Mobile device 104 A may capture measurements associated with the communication between the mobile device and the access points 110 .
- the measurements may include received signal strength indicators (RSSIs), each of which is a measurement of the power present in a received radio signal.
- the measurements may be transmitted to a server 106 (such as the node of FIG. 5 ).
- While some embodiments use Wi-Fi networks for the access points 110 A, 110 B, 110 C, other embodiments may utilize other wireless technologies, including but not limited to Bluetooth, BLE, RFID, UWB, ZigBee, cellular signals, and the like.
- the server 106 may also communicate with one or more image capture devices 108 A, 108 B, 108 C (e.g., cameras), which capture images of objects (e.g., users 102 A, 102 B).
- the server may also receive data from one or more datastores 108 , such as floor map information for a specific location and/or access point radio fingerprint data associated with the one or more access points 110 .
- Radio fingerprinting data may include information that identifies the one or more access points 110 or any other radio transmitter by the unique “fingerprint” that characterizes its signal transmission.
- An electronic fingerprint makes it possible to identify a wireless device by its unique radio transmission characteristics.
- the location accuracy from radio fingerprinting may depend on environment factors such as the density of the access points, the relative placement of the access points, and the temporal/spatial variations of Wi-Fi signals.
- the server 106 may determine a current location of the mobile devices 104 A, 104 B (and users 102 A, 102 B) based at least in part on the visual data, measurement data (e.g., RSSI) conveyed by the mobile devices 104 A, 104 B, access point fingerprint data, and/or floor map information.
- the location may be used to provide location-based services to the identified mobile device 104 and/or another computing node 105 .
- FIG. 2 depicts a process in an embodiment.
- Process 200 includes three main parts: preliminary sensor activity 205 , individual sensory data processing 210 , and integrated sensory data processing 215 .
- Preliminary sensor activity 205 involves a radio sensor 201 (e.g., Wi-Fi device, RFID device, BLE device) and a visual sensor 202 (e.g., an image capture device such as a surveillance camera).
- devices 201 , 202 may use the Network Time Protocol (NTP) protocol to time-synchronize with one another.
- Before integrated processing at portion 215 , location data from each modality pipeline (i.e., radio and visual) are processed in parallel (but that is not mandatory in all embodiments).
- server 106 may receive data from a mobile device 104 .
- the data received may include wireless data measurements associated with the mobile device 104 and one or more access points 110 .
- the wireless data measurements may be RSSI data.
- server 106 may retrieve radio fingerprinting data associated with the one or more access points 110 .
- the radio fingerprint data may be retrieved from one or more datastores 108 .
- the server 106 may also retrieve one or more maps for an area where the mobile device 104 is generally located.
- coordinates (Xm, Ym) may be determined for each of the m mobile devices 104 identified.
- An embodiment employs RSSI as well as the number of detected Wi-Fi access points for generating the Wi-Fi fingerprinting.
- Not all embodiments are limited to the RSSI fingerprinting technique; other Wi-Fi location techniques (e.g., RSSI triangulation, Time of Flight trilateration) may be used as well.
- Other embodiments blend data from various radio sources, such as combining Wi-Fi with RFID or BLE data.
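The RSSI fingerprinting described above can be sketched as a nearest-neighbor lookup against an offline survey database. This is a minimal sketch: the AP names, survey positions, and RSSI values below are hypothetical, and a real deployment would build the database from a site survey, typically averaging many scans per position.

```python
import math

# Hypothetical offline fingerprint database: known (x, y) positions in meters
# mapped to the mean RSSI (dBm) observed from each access point there.
FINGERPRINT_DB = {
    (0.0, 0.0): {"AP_110A": -40, "AP_110B": -65, "AP_110C": -70},
    (5.0, 0.0): {"AP_110A": -55, "AP_110B": -50, "AP_110C": -68},
    (5.0, 5.0): {"AP_110A": -70, "AP_110B": -52, "AP_110C": -45},
    (0.0, 5.0): {"AP_110A": -60, "AP_110B": -72, "AP_110C": -50},
}

def rssi_distance(scan, fingerprint):
    """Euclidean distance in RSSI space over the APs common to both."""
    common = scan.keys() & fingerprint.keys()
    return math.sqrt(sum((scan[ap] - fingerprint[ap]) ** 2 for ap in common))

def locate(scan, k=2):
    """k-nearest-neighbor fingerprint match: average the k closest survey
    positions, ranked by distance in RSSI space."""
    ranked = sorted(FINGERPRINT_DB,
                    key=lambda p: rssi_distance(scan, FINGERPRINT_DB[p]))
    nearest = ranked[:k]
    x = sum(p[0] for p in nearest) / k
    y = sum(p[1] for p in nearest) / k
    return (x, y)
```

A live scan close to the fingerprint surveyed at (5.0, 5.0) resolves to a position near that corner; averaging k neighbors trades a little bias for robustness against the temporal Wi-Fi signal variations noted above.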
- Block 212 concerns sensor disparity adjustment.
- Such adjustments include, for example, frequency matching between radio 201 and camera 202 .
- To address the frequency disparity between the sensors, an embodiment associates data from radio 201 and camera 202 based on the closest timestamps between the devices. For example, a camera may capture 29 frames per second while only 1 Wi-Fi signal is produced every 3 to 4 seconds.
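The closest-timestamp association can be sketched as follows. The sampling rates are illustrative (29 fps video versus one Wi-Fi sample every few seconds, per the example above), and the pairing direction (one frame per Wi-Fi sample) is an assumption.

```python
import bisect

def associate_by_timestamp(wifi_samples, frames):
    """Pair each Wi-Fi sample with the video frame whose timestamp is closest.
    wifi_samples: list of (timestamp, payload)
    frames:       list of (timestamp, payload), sorted by timestamp
    """
    frame_times = [t for t, _ in frames]
    pairs = []
    for t, payload in wifi_samples:
        i = bisect.bisect_left(frame_times, t)
        # Candidate frames: the one just before and the one just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(frames)]
        best = min(candidates, key=lambda j: abs(frame_times[j] - t))
        pairs.append((payload, frames[best][1]))
    return pairs
```

Binary search keeps the matching O(log n) per Wi-Fi sample, which matters when the camera side produces roughly a hundred timestamps for every radio timestamp.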
- Block 212 also concerns coordinate transformation between radio 201 and camera 202 . Since every sensor speaks its own language, an embodiment determines a common language for the sensors 201 , 202 . For example, Wi-Fi signals (from radio 201 ) may be measured in the world coordinate system, while visual signals (from camera 202 ) are measured in the pixel coordinate system. Before integrating different sensory data in process portion 225 , an embodiment converts one set of coordinates (Xn, Yn coordinates from radio 201 or Pxn, Pyn coordinates from camera 202 ) to another set of coordinates. An embodiment may use, for example, a Homography transformation matrix for coordinate transformation. Radio and visual coordinates may both be transformed to a common coordinate system in some embodiments.
- An embodiment uses pixel coordinates as a reference coordinate for effective trajectory pattern matching in the association block (block 222 ), since when converting pixel to world coordinates a small off-pixel error can be interpreted as a huge difference in the world coordinate system (especially for those pixels near vanishing points in a frame).
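The homography-based coordinate transformation is the usual 3x3 projective mapping with a perspective divide. The matrix below is a made-up example (a uniform 100-pixels-per-meter floor mapping); in practice H would be estimated from at least four pixel/world point correspondences, such as floor markers. Note the division by w: near a vanishing point w approaches zero, which is why a small pixel error can blow up in world coordinates.

```python
def apply_homography(H, px, py):
    """Map pixel (px, py) to world coordinates using a 3x3 homography H
    (row-major nested lists), including the perspective divide by w."""
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return (x / w, y / w)

# Illustrative H: a flat 100 px/m scaling with no perspective component.
H = [[0.01, 0.0, 0.0],
     [0.0, 0.01, 0.0],
     [0.0, 0.0, 1.0]]
```

With this H, pixel (500, 300) maps to roughly (5.0, 3.0) meters; the inverse of H maps world coordinates back to pixels when pixel space is used as the reference, as described above.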
- Block 213 concerns vision tracking to determine visual coordinates.
- An embodiment tracks motion based on two phases: (a) detecting moving objects in each frame using a background subtraction algorithm (e.g., based on Gaussian mixture model), and (b) associating the detections corresponding to the same object over time using a Kalman filter.
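The second phase of the two-phase tracker (associating detections over time with a Kalman filter) can be illustrated with a minimal constant-velocity filter. Tracking each axis with an independent 2-state filter, and the specific noise values q and r, are simplifying assumptions made here for brevity; a real tracker would typically use one 4-state filter over (x, y, vx, vy), and the detection phase (background subtraction) is not shown.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one axis (a minimal sketch)."""

    def __init__(self, x0, dt=1.0, q=1e-3, r=0.25):
        self.x = [x0, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.dt, self.q, self.r = dt, q, r

    def predict(self):
        """Time update: x' = F x, P' = F P F^T + Q for F = [[1, dt], [0, 1]]."""
        dt = self.dt
        x, v = self.x
        self.x = [x + v * dt, v]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]

    def update(self, z):
        """Measurement update for a position-only measurement: H = [1, 0]."""
        s = self.P[0][0] + self.r      # innovation covariance
        k0 = self.P[0][0] / s          # Kalman gain (position)
        k1 = self.P[1][0] / s          # Kalman gain (velocity)
        y = z - self.x[0]              # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        self.P = [
            [(1 - k0) * p00, (1 - k0) * p01],
            [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01],
        ]
        return self.x[0]
```

Fed a sequence of per-frame detections along one axis, the filter smooths measurement noise and carries the track through short detection gaps (by calling predict without update), which is what links detections of the same object across frames.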
- an embodiment applies clustering techniques to group segments of a person together or to remove shadows based on the knowledge of already identified radio IDs obtained from radio data from radio 201 . Since the embodiment knows how many people should be in the visual scene from the number of electronic IDs, the embodiment can apply K-means clustering to the objects (where K equals the number of electronic identifiers) to group segments together (block 214 ). After finding distinct groups, if there is a shadow, 2-means clustering can be applied to each distinct group to remove the shadow: one cluster for an object and the other for its cast shadow. The centroid and moment of inertia of an object together can determine the existence of a moving shadow.
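The radio-informed clustering step can be sketched as plain K-means over foreground pixel coordinates, with K set to the number of distinct radio IDs heard in the scene. This is a from-scratch sketch under that assumption (the patent does not prescribe an implementation); the same routine with k=2 could then be run inside each group to split an object from its cast shadow.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means over 2D points: k comes from the count of radio IDs."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster.
        new_centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters
```

Given foreground pixels from two well-separated people, k=2 recovers one cluster per person even when the background-subtraction mask arrives as disconnected segments.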
- radio signal based location tracking gives reliable overall movement estimations over a time window (e.g., trajectory patterns, direction of movement), but each location estimate may have error of a few meters due to the inherent Wi-Fi signal variations.
- Visual signal based object tracking provides high localization accuracy (e.g. sub-meter), but often is confused between multiple objects and/or reports an incorrect number of objects due to factors such as shadowing, object obstruction or crowded scenes.
- Portion 225 may occur at server 106 to provide a user trajectory by fusing the trajectories of radio (Wi-Fi) and vision with a floor map. While in some embodiments portion 215 is handled separately (e.g., coordinates are determined on a smart phone and within a camera system), in other embodiments those tasks are handled by server 106 and/or by one of devices 201 , 202 or by some other node.
- Block 221 includes extracting trajectory features in each time window (e.g., frame).
- An embodiment begins feature extraction with an initial set of measured data and builds derived values (features) intended to be informative and non-redundant.
- feature extraction is related to dimensionality reduction.
- If the input data to an algorithm is too large to be processed and is suspected to be redundant (e.g., repetitiveness of images presented as pixels), then it can be transformed into a reduced set of features (also named a "feature vector").
- the extracted features are expected to contain the relevant information from the input data, so that tracking is more easily performed.
- Features may vary and may include trajectory, shape, color, and the like.
- Block 222 includes solving an association or assignment problem.
- An assignment problem consists of finding a maximum weight matching (or minimum weight perfect matching) in a weighted bipartite graph.
- the task is to correlate a radio based coordinate (tied to an electronic ID) to a likely more accurate (but improperly identified) vision based coordinate.
- An embodiment forms this association using a Hungarian optimization algorithm, which finds the minimum cost assignment based on Euclidean distance and a cost matrix, to map electronic identifiers of objects to visual appearances of those same objects. The vision pipeline may produce an incorrectly assigned ID of a moving object in a time window; this misidentification may be corrected by solving the association problem.
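The association step can be sketched as a minimum-cost assignment between radio trajectories (which carry device IDs) and vision trajectories, with summed Euclidean distance between time-aligned trajectory samples as the cost. The patent names the Hungarian algorithm, which solves this in O(n^3); the exhaustive search below is only a readable stand-in that is exact for small numbers of objects. The device IDs and coordinates are made up.

```python
import math
from itertools import permutations

def min_cost_assignment(radio_tracks, vision_tracks):
    """Map each radio ID to the vision track whose trajectory matches best.
    radio_tracks:  dict of device_ID -> list of (x, y) over a time window
    vision_tracks: list of trajectories, each a list of (x, y) over the
                   same window (time-aligned, in a common coordinate system)
    """
    def cost(a, b):
        # Summed point-to-point distance between two aligned trajectories.
        return sum(math.dist(p, q) for p, q in zip(a, b))

    ids = list(radio_tracks)
    best, best_cost = None, math.inf
    for perm in permutations(range(len(vision_tracks)), len(ids)):
        c = sum(cost(radio_tracks[ids[i]], vision_tracks[j])
                for i, j in enumerate(perm))
        if c < best_cost:
            best, best_cost = {ids[i]: j for i, j in enumerate(perm)}, c
    return best
```

Because the cost compares whole trajectory shapes rather than single noisy fixes, a few meters of Wi-Fi error per sample still leaves the correct pairing cheapest, which is what lets the radio IDs repair a vision-side identity switch.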
- the location of a moving object taken from the visual location data can be used as an output location, because visual signals are more accurate (<1 m) in localization (block 223 ).
- the fused location data may be displayed on a display to a user or otherwise communicated.
- the location output can be further fed into an adaptive particle filter framework.
- an embodiment can leverage floor map constraints to correct location errors and speed up the convergence of relative confidence estimation.
- an embodiment employs particle filter (also known as Sequential Monte Carlo (SMC)) methods to estimate the location (i.e., planar coordinates and/or heading orientation) of an object.
- Particle filtering or SMC methods are on-line algorithms that estimate the posterior density of a state space by directly implementing the Bayesian recursion equations.
- Particle filtering may utilize a discretization approach, using a set of particles to represent the posterior density.
- Particle filtering may provide a methodology for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions.
- the state-space model may be non-linear and the initial state and noise distributions may take any form required.
- Particle filtering may implement the Bayesian recursion equations directly by using an ensemble based approach.
- the samples from the distribution may be represented by a set of particles.
- Each particle may be associated with a weight that represents the probability of that particle being sampled from the probability density function.
- Weight disparity leading to weight collapse may be mitigated by including a resampling step before the weights become too uneven. In the resampling step, the particles with negligible weights may be replaced by new particles in the proximity of the particles with higher weights.
- particle filtering may be used to track the locations of particles and track the relative confidence between the radio fingerprint location and the visual location tracking.
- floor map constraints may be leveraged to speed up the convergence of relative confidence estimation.
- a sensor update may involve particles that are weighted by the measurements taken from additional sensors.
- Sensor update may use the wireless signal strengths (e.g., RSSI), the radio fingerprint database, and the floor plan map (if available) to calculate the weight of a particle.
- a particle may be penalized if its location is within the wall or if the fingerprint does not match the fingerprint database.
- the set of particles may degenerate and not be able to effectively represent a distribution due to the significant difference in weighting among particles. Resampling is a means to avoid the problem of degeneracy of the algorithm. It is achieved by subsampling from the set of particles to form another set to represent the exact same distribution.
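The resampling step above can be sketched with systematic resampling, a common low-variance scheme, together with the effective-sample-size heuristic often used to decide when to resample. These specific choices are illustrative assumptions; the patent only requires that negligible-weight particles be replaced by particles near the high-weight ones.

```python
import random

def systematic_resample(particles, weights, seed=0):
    """Systematic resampling: draw n particles with one random offset and
    evenly spaced pointers, so high-weight particles are duplicated and
    negligible-weight particles are dropped."""
    rng = random.Random(seed)
    n = len(particles)
    step = sum(weights) / n
    u = rng.uniform(0, step)
    resampled, cum, i = [], weights[0], 0
    for _ in range(n):
        while cum < u:
            i += 1
            cum += weights[i]
        resampled.append(particles[i])
        u += step
    return resampled

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights; a common heuristic
    resamples when N_eff drops below half the particle count."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return 1.0 / sum(w * w for w in norm)
```

After a sensor update penalizes particles inside walls or mismatching the fingerprint database, the weight disparity grows; once N_eff falls below the threshold, one resampling pass restores a well-spread particle set representing the same distribution.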
- process 200 also provides for the server updating a location estimate that is represented by a set of particles with the wireless data measurements received from the mobile device 104 .
- the location for each of the particles may be based at least in part on the radio fingerprinting data and visual data associated with the mobile device 104 .
- a fusion engine may calculate a respective weight for the respective location for each of the plurality of particles. In some embodiments, the fusion engine may calculate a respective penalty associated with the respective location for each of the plurality of particles.
- the fusion engine may calculate a weight associated with the determined location for each of the particles based at least in part on radio fingerprint data and the respective weight for each of the determined locations for each of the particles.
- the fusion engine may determine whether the particles are degenerated. If the particles are not degenerated, then the method may reiterate and/or end. In some embodiments, the fusion engine may determine the particles are degenerated based at least in part on the calculated weight associated with the particles. If the fusion engine determines that the particles are degenerated, then the fusion engine may initiate a resampling of at least a portion of the wireless data measurements. Resampling may include requesting additional wireless data measurements and additional visual data associated with the mobile device 104 from the mobile device 104 and iterating through the method.
- an embodiment receives processed data from the radio 201 and camera 202 nodes.
- devices 201 , 202 may typically produce raw measurement data for Wi-Fi (e.g., device_ID, RSSI, timestamp) and other raw measurement data for vision (e.g., vision-assigned ID, pixel location, timestamp) per each signal.
- the format of the processed data would be different for Wi-Fi (device ID, X, Y, timestamp) and vision (vision-assigned ID, X, Y, timestamp).
- An embodiment examines the sequences of traces (radio and visual) in a time window and fuses them with map constraints. An embodiment may look at a sequence of traces to allow the embodiment to derive additional data properties from both Wi-Fi and vision (e.g., direction of movement).
- FIG. 3A depicts radio based tracking
- FIG. 3B depicts vision based tracking
- FIG. 3C depicts radio/vision fusion based tracking.
- The vision tracking uses a stationary camera, such as a surveillance camera.
- In FIG. 3A, Wi-Fi tracks paths 301 , 302 and does not get confused when those paths approach each other at their midpoints, but the location data is not very precise.
- In FIG. 3B, visual tracking shows great granularity or detail but confuses the paths, identifying paths 303 , 304 , 305 when there are only two objects moving. Furthermore, an identity switch incorrectly occurs for path 303 .
- FIG. 3C shows results based on fusion of radio and vision tracking to properly identify the two paths 306 , 307 for two objects, while doing so with granularity and precision. This occurs after solving the association problem.
- the fusion embodiment corrected the improperly switched IDs of FIG. 3B and provides sub-meter accurate location information.
- embodiments described herein provide precise indoor localization by intelligently fusing different sensory modalities and leveraging existing infrastructure (vision and radio sensors) without additional hardware. Since the proposed multimodal fusion embodiments are highly accurate and cost-efficient localization solutions, they can be used in a wide range of internet-of-things (IoT) applications.
- system 900 may be a smartphone or other wireless communicator or any other IoT device.
- a baseband processor 905 is configured to perform various signal processing with regard to communication signals to be transmitted from or received by the system.
- baseband processor 905 is coupled to an application processor 910 , which may be a main CPU of the system to execute an OS and other system software, in addition to user applications such as many well-known social media and multimedia apps.
- Application processor 910 may further be configured to perform a variety of other computing operations for the device.
- application processor 910 can couple to a user interface/display 920 , e.g., a touch screen display.
- application processor 910 may couple to a memory system including a non-volatile memory, namely a flash memory 930 and a system memory, namely a DRAM 935 .
- flash memory 930 may include a secure portion 932 in which secrets and other sensitive information may be stored.
- application processor 910 also couples to a capture device 945 such as one or more image capture devices (e.g., camera) that can record video and/or still images.
- a universal integrated circuit card (UICC) 940 comprises a subscriber identity module, which in some embodiments includes a secure storage 942 to store secure user information.
- System 900 may further include a security processor 950 that may couple to application processor 910 .
- a plurality of sensors 925 including one or more multi-axis accelerometers may couple to application processor 910 to enable input of a variety of sensed information such as motion and other environmental information.
- one or more authentication devices 995 may be used to receive, e.g., user biometric input for use in authentication operations.
- a near field communication (NFC) contactless interface 960 is provided that communicates in an NFC near field via an NFC antenna 965 . While separate antennae are shown in FIG. 4 , understand that in some implementations one antenna or a different set of antennae may be provided to enable various wireless functionality.
- a power management integrated circuit (PMIC) 915 couples to application processor 910 to perform platform level power management. To this end, PMIC 915 may issue power management requests to application processor 910 to enter certain low power states as desired. Furthermore, based on platform constraints, PMIC 915 may also control the power level of other components of system 900 .
- a radio frequency (RF) transceiver 970 and a wireless local area network (WLAN) transceiver 975 may be present.
- RF transceiver 970 may be used to receive and transmit wireless data and calls according to a given wireless communication protocol such as 3G or 4G wireless communication protocol such as in accordance with a code division multiple access (CDMA), global system for mobile communication (GSM), long term evolution (LTE) or other protocol.
- GPS sensor 980 may be present, with location information being provided to security processor 950 for use as described herein when context information is to be used in a pairing process.
- wireless communications such as receipt or transmission of radio signals, e.g., AM/FM and other signals may also be provided.
- Via WLAN transceiver 975 , local wireless communications, such as according to a Bluetooth™ or IEEE 802.11 standard, can also be realized.
- Multiprocessor system 1000 is a point-to-point interconnect system such as a server system, and includes a first processor 1070 and a second processor 1080 coupled via a point-to-point interconnect 1050 .
- processors 1070 and 1080 may be multicore processors such as SoCs, including first and second processor cores (i.e., processor cores 1074 a and 1074 b and processor cores 1084 a and 1084 b ), although potentially many more cores may be present in the processors.
- processors 1070 and 1080 each may include a secure engine 1075 and 1085 to perform security operations such as attestations, IoT network onboarding or so forth.
- First processor 1070 further includes a memory controller hub (MCH) 1072 and point-to-point (P-P) interfaces 1076 and 1078 .
- second processor 1080 includes a MCH 1082 and P-P interfaces 1086 and 1088 .
- MCH's 1072 and 1082 couple the processors to respective memories, namely a memory 1032 and a memory 1034 , which may be portions of main memory (e.g., a DRAM) locally attached to the respective processors.
- First processor 1070 and second processor 1080 may be coupled to a chipset 1090 via P-P interconnects 1052 and 1054 , respectively. As shown in FIG. 5 , chipset 1090 includes P-P interfaces 1094 and 1098 .
- chipset 1090 includes an interface 1092 to couple chipset 1090 with a high performance graphics engine 1038 , by a P-P interconnect 1039 .
- chipset 1090 may be coupled to a first bus 1016 via an interface 1096 .
- Various input/output (I/O) devices 1014 may be coupled to first bus 1016 , along with a bus bridge 1018 which couples first bus 1016 to a second bus 1020 .
- Various devices may be coupled to second bus 1020 including, for example, a keyboard/mouse 1022 , communication devices 1026 and a data storage unit 1028 such as a non-volatile storage or other mass storage device.
- data storage unit 1028 may include code 1030 , in one embodiment.
- data storage unit 1028 also includes a trusted storage 1029 to store sensitive information to be protected.
- an audio I/O 1024 may be coupled to second bus 1020 .
- Embodiments may be used in many different types of systems.
- a communication device can be arranged to perform the various methods and techniques described herein.
- the scope of the present invention is not limited to a communication device, and instead other embodiments can be directed to other types of apparatus for processing instructions, or one or more machine readable media including instructions that in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein.
- Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. Embodiments also may be implemented in data and may be stored on a non-transitory storage medium, which if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform one or more operations.
- the storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.
- a module as used herein refers to any hardware, software, firmware, or a combination thereof. Module boundaries that are illustrated as separate often vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware.
- use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices. However, in another embodiment, logic also includes software or code integrated with hardware, such as firmware or micro-code.
- a “location fusion” module or engine may include the above hardware, software, firmware, or a combination configured (programmed or designed) to fuse certain aspects of visual and radio location data together to form fused location information.
- a “server” is both a running instance of software that is capable of accepting requests from clients and the computer that executes such software. Servers operate within a client-server architecture, in which “servers” are computer programs running to serve the requests of other programs, the “clients”. This may be to share data, information, or hardware and software resources. Examples of typical computing servers are database servers, file servers, mail servers, print servers, web servers, gaming servers, and application servers. The clients may run on the same computer as the server or may connect to it through a network. In the hardware sense, a computer primarily designed as a server may be specialized in some way for its task. Sometimes more powerful and reliable than standard desktop computers, servers may conversely be simpler and more disposable if clustered in large numbers.
- Example 1a includes at least one computer readable storage medium comprising instructions that when executed enable a system to: receive (a)(i) first radio signal location data for a first object from a radio sensor; and (a)(ii) first visual signal location data for the first object from a camera sensor; perform feature extraction on (b)(i) the first radio signal location data to determine first extracted radio signal features; and (b)(ii) the first visual signal location data to determine first extracted visual signal features; solve a first association problem between the first extracted radio signal features and the first extracted visual signal features to determine first fused location data; and store the first fused location data in the at least one computer readable storage medium.
- Example 2a the subject matter of Example 1a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to display a location of the first object based on the first fused location data.
- Example 3a the subject matter of Examples 1a-2a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to apply a particle filter to the first fused location data to determine first filtered fused location data.
- Example 4a the subject matter of Examples 1a-3a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to apply the particle filter to the first fused location data based on a site map for a physical site corresponding to the first radio signal location data, the first visual signal location data, and the first object.
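The particle filtering of Examples 3a-4a can be sketched as below. This is a minimal illustration under assumed details: an occupancy-grid site map (a set of blocked cells), Gaussian motion noise, and a Gaussian-like likelihood around the fused measurement. None of these specifics come from the examples themselves.

```python
# Illustrative particle-filter step constrained by a site map. The
# occupancy grid, motion noise, and measurement model are assumptions,
# not the patent's specific implementation.
import random, math

def pf_step(particles, measurement, occupied, noise=0.2):
    """One predict/update/resample cycle.
    particles: list of (x, y); measurement: fused (x, y) estimate;
    occupied: set of blocked integer grid cells from the site map."""
    # predict: jitter each particle with motion noise
    moved = [(x + random.gauss(0, noise), y + random.gauss(0, noise))
             for x, y in particles]
    # site-map constraint: zero weight for particles inside walls
    def weight(p):
        if (int(p[0]), int(p[1])) in occupied:
            return 0.0
        d = math.hypot(p[0] - measurement[0], p[1] - measurement[1])
        return math.exp(-d * d)          # Gaussian-like likelihood
    w = [weight(p) for p in moved]
    total = sum(w) or 1.0
    # resample proportionally to weight
    return random.choices(moved, weights=[x / total for x in w],
                          k=len(particles))

random.seed(0)
particles = [(0.0, 0.0)] * 100
est = pf_step(particles, measurement=(0.5, 0.5), occupied={(5, 5)})
```

The site map enters only through the weight function: particles that drift into blocked cells are never resampled, which is one simple way a floor plan can constrain the filtered fused location.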
- Example 5a the subject matter of Examples 1a-4a can optionally include the at least one computer readable storage medium wherein the first radio signal location data and the first visual signal location data both correspond to a common coordinate system.
- Example 6a the subject matter of Examples 1a-5a can optionally include wherein the first radio signal location data is time synchronized to the first visual signal location data.
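One way (an assumption, not mandated by Example 6a) to time synchronize the low-rate radio stream to the high-rate video stream is to match each radio sample to the video frame with the nearest timestamp:

```python
# Hypothetical sketch: align low-rate radio samples to the nearest
# video frame by timestamp. The frame times and radio timestamp here
# are illustrative values.
from bisect import bisect_left

def nearest_frame_index(frame_times, t):
    """Return the index of the frame whose timestamp is closest to t.
    frame_times must be sorted ascending."""
    i = bisect_left(frame_times, t)
    if i == 0:
        return 0
    if i == len(frame_times):
        return len(frame_times) - 1
    # choose the closer of the two neighboring frames
    return i if frame_times[i] - t < t - frame_times[i - 1] else i - 1

frame_times = [0.0, 0.033, 0.066, 0.100]   # ~30 fps video
radio_t = 0.05                             # one RSSI sample
print(nearest_frame_index(frame_times, radio_t))  # -> 2
```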
- Example 7a the subject matter of Examples 1a-6a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to determine the first fused location data based on associating a first electronic identifier, corresponding to the first radio signal location data, with the first visual signal location data.
- Example 8a the subject matter of Examples 1a-7a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to perform the feature extraction to extract trajectory data for the first object based on the first visual signal location data; wherein the trajectory data is included in the first extracted radio signal features.
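A trajectory, in the sense of Example 8a, might be derived from the visual pipeline's per-frame bounding boxes; the (x, y, w, h) box format below is an illustrative assumption:

```python
# Hypothetical helper: derive a trajectory (sequence of center points)
# from per-frame bounding boxes produced by a vision tracker. The box
# format (x, y, w, h) is an assumption.
def trajectory_from_boxes(boxes):
    """boxes: list of (x, y, w, h) per frame -> list of (cx, cy)."""
    return [(x + w / 2.0, y + h / 2.0) for x, y, w, h in boxes]

track = trajectory_from_boxes([(0, 0, 2, 2), (1, 0, 2, 2), (2, 0, 2, 2)])
print(track)  # -> [(1.0, 1.0), (2.0, 1.0), (3.0, 1.0)]
```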
- Example 9a the subject matter of Examples 1a-8a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to: receive (a)(i) second radio signal location data for a second object from the radio sensor; and (a)(ii) second visual signal location data for the second object from the camera sensor; perform feature extraction on (b)(i) the second radio signal location data to determine second extracted radio signal features; and (b)(ii) the second visual signal location data to determine second extracted visual signal features; solve a second association problem between the second extracted radio signal features and the second extracted visual signal features to determine second fused location data; and store the second fused location data in the at least one computer readable storage medium; wherein the first and second visual signal location data corresponds to a single frame of video data.
- Example 10a the subject matter of Examples 1a-9a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to reassign a first electronic identifier from the second object to the first object based on solving the first and second association problems.
- Example 11a the subject matter of Examples 1a-10a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to: determine the first fused location data based on associating a first electronic identifier, corresponding to the first radio signal location data, with the first visual signal location data; and determine the second fused location data based on associating a second electronic identifier, corresponding to the second radio signal location data, with the second visual signal location data.
- Example 12a the subject matter of Examples 1a-11a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to conduct k-means clustering on video data corresponding to the first and second visual signal location data, wherein “k” is based on the first and second electronic identifiers.
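The k-means clustering of Example 12a could look like the following sketch, where k equals the number of distinct electronic identifiers reported by the radio sensor. Plain Lloyd's iterations with naive initialization are shown for illustration; a real pipeline would likely use a library implementation.

```python
# Sketch: cluster foreground pixel coordinates with k-means, where k
# comes from the number of distinct electronic identifiers the radio
# sensor has observed. The device IDs and pixel data are illustrative.
import math

def kmeans(points, k, iters=20):
    centers = points[:k]                       # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # recompute each center as the mean of its cluster
        centers = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers

device_ids = {"aa:01", "aa:02"}                # two radios -> k = 2
pixels = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10)]
print(sorted(kmeans(pixels, k=len(device_ids))))
```

Using the radio-derived count as k is what lets the vision side split a merged blob (e.g., two people whose silhouettes touch) into the right number of objects.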
- Example 13a the subject matter of Examples 1a-12a can optionally include the at least one computer readable storage medium wherein the first and second visual signal location data correspond to an occlusion with one of the first and second objects occluding another of the first and second objects.
- Example 14a the subject matter of Examples 1a-13a can optionally include the at least one computer readable storage medium wherein the first radio signal location data is based on at least one of received signal strength indicator (RSSI) fingerprinting, RSSI triangulation, and Time Of Flight trilateration.
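As a hedged illustration of the RSSI-based ranging mentioned in Example 14a, the widely used log-distance path-loss model converts signal strength to an approximate range that trilateration can then consume; the reference power and path-loss exponent below are assumed values:

```python
# One common way (an assumption here, not the patent's mandated model)
# to turn RSSI into range for trilateration: the log-distance
# path-loss model, with reference power rssi0 at 1 m and path-loss
# exponent n.
def rssi_to_distance(rssi, rssi0=-40.0, n=2.0):
    """Estimate distance in meters from a received signal strength (dBm)."""
    return 10 ** ((rssi0 - rssi) / (10.0 * n))

print(rssi_to_distance(-40.0))  # -> 1.0 (at the reference power)
print(rssi_to_distance(-60.0))  # -> 10.0 (20 dB weaker ~ 10x farther)
```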
- Example 15a the subject matter of Examples 1a-14a can optionally include the at least one computer readable storage medium wherein the first radio signal location data includes first coordinate data and the first visual signal location data includes second coordinate data.
- Example 16a the subject matter of Examples 1a-15a can optionally include the at least one computer readable storage medium wherein the first radio signal location data includes a first device identifier (ID) for the first object and a first time stamp and the first visual signal location data includes a second time stamp.
- Example 17a the subject matter of Examples 1a-16a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to solve the first association problem based on a least cost optimization between the first extracted radio signal features and the first extracted visual signal features.
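The least cost optimization of Example 17a can be illustrated with a toy association solver: the cost matrix holds a distance between each radio track and each visual track, and the assignment with the smallest total cost wins. Brute force over permutations suffices for the handful of tracked objects here; a production system would likely use the Hungarian algorithm instead.

```python
# Minimal least-cost association sketch. The cost values are
# illustrative; in practice they might be distances between radio and
# visual trajectory features.
from itertools import permutations

def associate(cost):
    """cost[i][j]: cost of pairing radio track i with visual track j.
    Returns (assignment, total) where assignment[i] = j."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))

# radio tracks 0,1 vs visual tracks 0,1: pairing 0->1 and 1->0 is cheapest
cost = [[9.0, 1.0],
        [2.0, 8.0]]
print(associate(cost))  # -> ((1, 0), 3.0)
```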
- Example 18a the subject matter of Examples 1a-17a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to communicate a first message to a first mobile computing node in response to determining the first fused location data.
- Such a message may include, for example, a coupon or advertisement that is specific to the fused location data (e.g., a coupon for product X located within 1 m of the tracked object).
- actuations based on the fused data are possible.
- Those actuations may be virtual or physical.
- an actuation signal may be sent to a solenoid coupled to a gate when an object, having a recognized or whitelisted device_id, is tracked and determined to be within a threshold distance of the gate.
- a virtual avatar may be projected to a user's virtual reality headset when the user, having a recognized or whitelisted device_id, is tracked and determined to be within a threshold distance of a location.
- Other actuations may include displaying privileged data to a user's mobile computing device (e.g., smartphone, wireless spectacles with a video display) when an embodiment determines the user is near a designated location.
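The gate-actuation scenario above might reduce to a check like the following, where the whitelist contents, gate position, and 1 m threshold are illustrative assumptions:

```python
# Hedged sketch of the gate-actuation logic: fire an actuation when a
# whitelisted device_id is fused to a position within a threshold
# distance of the gate. All values here are illustrative.
import math

GATE = (12.0, 4.0)            # gate position in site coordinates
WHITELIST = {"device-42"}     # recognized device_ids

def should_actuate(device_id, position, threshold_m=1.0):
    """True when a whitelisted device is within threshold_m of the gate."""
    near = math.dist(position, GATE) <= threshold_m
    return device_id in WHITELIST and near

print(should_actuate("device-42", (12.5, 4.0)))  # -> True
print(should_actuate("device-99", (12.5, 4.0)))  # -> False
```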
- Example 19a the subject matter of Examples 1a-18a can optionally include the at least one computer readable storage medium wherein the first fused location data includes a first electronic identifier included in the first radio signal location data and first coordinate data included in the first visual signal location data.
- Example 20a includes at least one computer readable storage medium comprising instructions that when executed enable a system to: receive (a)(i) first location data, corresponding to a first object, from a radio sensor; and (a)(ii) second location data, corresponding to the first object, from a camera sensor; obtain extracted features based on features from the first and second location data; and determine fused location data that (b)(i) is based on the extracted features, and (b)(ii) includes an electronic identifier included in the first location data and coordinate data included in the second location data.
- the feature extraction may occur apart from the computing node that determines (e.g., obtains, derives, calculates) the fused data.
- example 20a includes at least one computer readable storage medium comprising instructions that when executed enable a system to: receive (a)(i) first location data, corresponding to a first object, from a radio sensor; and (a)(ii) second location data, corresponding to the first object, from a camera sensor; obtain extracted features based on features from the first and second location data; and determine fused location data that (b)(i) is based on the extracted features, (b)(ii) includes an electronic identifier based on an identifier included in the first location data, and (b)(iii) includes coordinate data based on second coordinate data included in the second location data.
- the electronic identifier of the fused data may merely be the same identifier that was in the first location data, an additional instance of the same identifier that was in the first location data, or may be derived from the identifier in the first location data.
- the coordinate data of the fused data may be the same coordinate data that is in the second location data, an instance thereof, or may be derived from the coordinate data in the second location data.
- Example 21a the subject matter of Example 20a can optionally include the at least one computer readable storage medium, wherein the first location data and the second location data both correspond to a common coordinate system.
- Example 22a the subject matter of Examples 20a-21a can optionally include the at least one computer readable storage medium, wherein the first location data includes first coordinate data.
- Example 23a the subject matter of Examples 20a-22a can optionally include wherein the identifier included in the first location data includes a device identifier (ID) for the first object.
- the device ID may include a unique identifier for a device, such as a mobile computing node, that is included in a field of a container that includes or corresponds to the first location data.
- Example 24a includes an apparatus comprising: at least one memory and at least one processor, coupled to the at least one memory, to perform operations comprising: receiving (a)(i) first location data, corresponding to a first object, from a radio sensor; and (a)(ii) second location data, corresponding to the first object, from a camera sensor; obtaining extracted features based on features from the first and second location data; and determining fused location data that (b)(i) is based on the extracted features, and (b)(ii) includes an electronic identifier in the first location data and coordinate data in the second location data.
- example 24a includes an apparatus comprising: at least one memory and at least one processor, coupled to the at least one memory, to perform operations comprising: receiving (a)(i) first location data, corresponding to a first object, from a first sensor having a first sensing modality; and (a)(ii) second location data, corresponding to the first object, from a second sensor having a second sensing modality unequal to the first sensing modality; obtaining extracted features based on features from the first and second location data; and determining fused location data that (b)(i) is based on the extracted features, (b)(ii) includes an electronic identifier based on an identifier included in the first location data, and (b)(iii) includes coordinate data based on second coordinate data included in the second location data.
- Example 25a the subject matter of Example 24a can optionally include wherein (c)(i) the first location data includes first coordinate data; and (c)(ii) the identifier included in the first location data includes a device identifier (ID) corresponding to the first object.
- Example 1b includes an apparatus comprising: at least one memory and at least one processor, coupled to the at least one memory, to perform operations comprising: receiving (a)(i) first location data, corresponding to a first object, from a first sensor having a first sensing modality; and (a)(ii) second location data, corresponding to the first object, from a second sensor having a second sensing modality unequal to the first sensing modality; obtaining extracted features based on features from the first and second location data; and determining fused location data that (b)(i) is based on the extracted features, and (b)(ii) includes an electronic identifier in the first location data and coordinate data in the second location data.
- Example 2b the subject matter of Example 1b can optionally include wherein (c)(i) the first location data includes first coordinate data and the second location data includes second coordinate data; (c)(ii) the first location data includes a device identifier (ID) corresponding to the first object; and (c)(iii) the first sensor includes a radio sensor and the second sensor includes a vision sensor.
- the first modality may include Wi-Fi or other radio based sensing (BLE, RAIN RFID, NFC) while the second modality includes vision tracking, sound tracking (e.g., sonar), and the like.
- “camera” includes image capture devices in general. Such image capture devices include still frame cameras, video cameras, thermographic cameras (e.g., infrared cameras, thermal imaging cameras, and image capture devices that use heat vision), and the like. Cameras and “vision” as used herein are not limited to vision within visible light wavelengths.
- Example 1c includes an apparatus comprising: at least one memory and at least one processor, coupled to the at least one memory, to perform operations comprising: receiving (a)(i) first radio signal location data for a first object from a radio sensor; and (a)(ii) first visual signal location data for the first object from a camera sensor; performing feature extraction on (b)(i) the first radio signal location data to determine first extracted radio signal features; and (b)(ii) the first visual signal location data to determine first extracted visual signal features; solving a first association problem between the first extracted radio signal features and the first extracted visual signal features to determine first fused location data; and storing the first fused location data in the at least one memory.
- Example 1c includes an apparatus comprising: at least one memory, at least one processor, a location fusion module, and a feature extraction module, all coupled to the at least one memory, to perform operations comprising: the location fusion module receiving (a)(i) first radio signal location data for a first object from a radio sensor; and (a)(ii) first visual signal location data for the first object from a camera sensor; the feature extraction module performing feature extraction on (b)(i) the first radio signal location data to determine first extracted radio signal features; and (b)(ii) the first visual signal location data to determine first extracted visual signal features; the location fusion module solving a first association problem between the first extracted radio signal features and the first extracted visual signal features to determine first fused location data; and the location fusion module storing the first fused location data in the at least one memory.
- the location fusion module or engine may include hardware (e.g., transistors, registers, field programmable gate arrays (FPGAs)), software, firmware, or a combination configured (programmed or designed) to receive the location data and then associate the data to determine the fused location data.
- the extraction module or engine may include hardware (e.g., transistors, registers, FPGAs), software, firmware, or a combination configured (programmed or designed) to extract features (e.g., trajectory) from location data.
- the fusion module handles the actions addressed in portion 225 of FIG. 2.
- Example 2c the subject matter of Example 1c can optionally include wherein the at least one processor is to perform operations comprising displaying a location of the first object based on the first fused location data.
- Example 3c the subject matter of Examples 1c-2c can optionally include wherein the at least one processor is to perform operations comprising applying a particle filter to the first fused location data to determine first filtered fused location data.
- Example 4c the subject matter of Examples 1c-3c can optionally include wherein the at least one processor is to perform operations comprising applying the particle filter to the first fused location data based on a site map for a physical site corresponding to the first radio signal location data, the first visual signal location data, and the first object.
- Example 5c the subject matter of Examples 1c-4c can optionally include wherein the first radio signal location data and the first visual signal location data both correspond to a common coordinate system.
- Example 6c the subject matter of Examples 1c-5c can optionally include wherein the first radio signal location data is time synchronized to the first visual signal location data.
- Example 7c the subject matter of Examples 1c-6c can optionally include wherein the at least one processor is to perform operations comprising determining the first fused location data based on associating a first electronic identifier, corresponding to the first radio signal location data, with the first visual signal location data.
- Example 8c the subject matter of Examples 1c-7c can optionally include wherein the at least one processor is to perform operations comprising performing the feature extraction to extract trajectory data for the first object based on the first visual signal location data; wherein the trajectory data is included in the first extracted radio signal features.
- Example 9c the subject matter of Examples 1c-8c can optionally include wherein the at least one processor is to perform operations comprising receiving (a)(i) second radio signal location data for a second object from the radio sensor; and (a)(ii) second visual signal location data for the second object from the camera sensor; performing feature extraction on (b)(i) the second radio signal location data to determine second extracted radio signal features; and (b)(ii) the second visual signal location data to determine second extracted visual signal features; solving a second association problem between the second extracted radio signal features and the second extracted visual signal features to determine second fused location data; and storing the second fused location data in the at least one memory; wherein the first and second visual signal location data corresponds to a single frame of video data.
- Example 10c the subject matter of Examples 1c-9c can optionally include wherein the at least one processor is to perform operations comprising reassigning a first electronic identifier from the second object to the first object based on solving the first and second association problems.
- Example 11c the subject matter of Examples 1c-10c can optionally include wherein the at least one processor is to perform operations comprising determining the first fused location data based on associating a first electronic identifier, corresponding to the first radio signal location data, with the first visual signal location data; and determining the second fused location data based on associating a second electronic identifier, corresponding to the second radio signal location data, with the second visual signal location data.
- Example 12c the subject matter of Examples 1c-11c can optionally include wherein the at least one processor is to perform operations comprising conducting k-means clustering on video data corresponding to the first and second visual signal location data, wherein “k” is based on the first and second electronic identifiers.
- Example 13c the subject matter of Examples 1c-12c can optionally include wherein the first and second visual signal location data correspond to an occlusion with one of the first and second objects occluding another of the first and second objects.
- Example 14c the subject matter of Examples 1c-13c can optionally include wherein the first radio signal location data is based on at least one of received signal strength indicator (RSSI) fingerprinting, RSSI triangulation, and Time Of Flight trilateration.
- Example 15c the subject matter of Examples 1c-14c can optionally include wherein the first radio signal location data includes first coordinate data and the first visual signal location data includes second coordinate data.
- Example 16c the subject matter of Examples 1c-15c can optionally include wherein the first radio signal location data includes a first device identifier (ID) for the first object and a first time stamp and the first visual signal location data includes a second time stamp.
- Example 17c the subject matter of Examples 1c-16c can optionally include wherein the at least one processor is to perform operations comprising solving the first association problem based on a least cost optimization between the first extracted radio signal features and the first extracted visual signal features.
- Example 18c the subject matter of Examples 1c-17c can optionally include wherein the at least one processor is to perform operations comprising communicating a first message to a first mobile computing node in response to determining the first fused location data.
- Example 19c the subject matter of Examples 1c-18c can optionally include wherein the first fused location data includes a first electronic identifier included in the first radio signal location data and first coordinate data included in the first visual signal location data.
Abstract
An embodiment includes at least one computer readable storage medium comprising instructions that when executed enable a system to: receive (a)(i) first radio signal location data for a first object from a radio sensor; and (a)(ii) first visual signal location data for the first object from a camera sensor; perform feature extraction on (b)(i) the first radio signal location data to determine first extracted radio signal features; and (b)(ii) the first visual signal location data to determine first extracted visual signal features; solve a first association problem between the first extracted radio signal features and the first extracted visual signal features to determine first fused location data; and store the first fused location data in the at least one computer readable storage medium. Other embodiments are described herein.
Description
- Embodiments of the invention concern sensors.
- With the proliferation of mobile computing nodes and the advance of wireless technologies, the demand for precise indoor localization and its related services is becoming increasingly prevalent. Reliable and precise indoor localization can support a wide range of applications including location-based couponing (where a coupon is delivered to a user based on the user's proximity to a business organization distributing the coupon), friend tracking (allowing one user to know the whereabouts of another user), personal shopping assistants (where information regarding merchandise that is close to a user is displayed or communicated to the user), traffic heat maps, work flow verification and optimization, and the like.
- Features and advantages of embodiments of the present invention will become apparent from the appended claims, the following detailed description of one or more example embodiments, and the corresponding figures. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements:
- FIG. 1 depicts a localization system in an embodiment.
- FIG. 2 depicts a process in an embodiment.
- FIG. 3A depicts radio based tracking, FIG. 3B depicts vision based tracking, and FIG. 3C depicts radio/vision fusion based tracking in an embodiment of the invention.
- FIG. 4 includes a system for use with an embodiment of the invention.
- FIG. 5 includes a system for use with an embodiment of the invention.
- In the following description, numerous specific details are set forth but embodiments of the invention may be practiced without these specific details. Well-known circuits, structures and techniques have not been shown in detail to avoid obscuring an understanding of this description. “An embodiment”, “various embodiments” and the like indicate embodiment(s) so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Some embodiments may have some, all, or none of the features described for other embodiments. “First”, “second”, “third” and the like describe a common object and indicate different instances of like objects are being referred to. Such adjectives do not imply objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
- However, current indoor positioning systems are problematic. For example, such systems are often inaccurate, too complicated to implement, and/or too expensive. Many systems either rely on dedicated infrastructure (e.g., ultra-wide band (UWB), visible light), or adaptation and usage of existing communications infrastructure like wireless local area networks (LANs). The installation of new location infrastructure may lead to better accuracy but also leads to significant deployment and maintenance costs. Other solutions that use existing infrastructure (e.g., systems based on Wi-Fi and received signal strength indicator (RSSI) signals) suffer from poor location accuracy, with error between 5 m and 30 m. While such error may be acceptable in some situations, it is unacceptable in other situations such as, for example, knowing a precise location of a mobile robot or knowing whether a consumer is in front of product A in a grocery store aisle or in front of product B two meters further down the same aisle.
- To address the problem, embodiments described herein provide multimodal location fusion systems that achieve sub-meter accurate (within one meter of accuracy) indoor localization by detecting and tracking moving objects (e.g., fork lifts, flying drones, mobile robots, animals, humans) using radio and visual sensors. Such embodiments utilize some or all available sensors at a location, without demanding extra hardware, to provide highly accurate localization based on integrating radio-based and visual-based signals, accentuating the strengths of each modality while de-accentuating the weaknesses of each modality.
- In addressing the problem, embodiments address at least three issues: (1) how to provide a sub-meter level location accuracy without demanding additional/excessive hardware, (2) how to effectively interact with different sensor modalities, and (3) how to enhance individual sensor pipelines with help from other sensors in a way that leads to overall system performance improvement.
- Regarding issue (1), commonly found infrastructure sensors (e.g., surveillance cameras, Wi-Fi, radio frequency identification (RFID)) may be leveraged in various embodiments to provide greater localization accuracy without adding a great deal of infrastructure. Such embodiments “fuse” or combine output from such sensors by integrating electronic and visual signals and then associating an object's electronic identifier (found in many radio signals) and its highly accurate localization (achieved with visual tracking systems) to provide a fused location that is not only accurate to within 1 meter of an object but that also correctly identifies the object (something visual tracking systems struggle with). Such visual signals are easily extracted from surveillance cameras found in stores, warehouses, homes, and the like.
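The fusion described here, pairing the radio signal's electronic identifier with the visual signal's more accurate coordinates, might produce records shaped like this sketch (field names and values are assumptions):

```python
# Illustrative shape of a fused record: the electronic identifier comes
# from the radio reading and the (more accurate) coordinates from the
# matched visual reading. Field names are hypothetical.
def fuse(radio, visual):
    return {"device_id": radio["device_id"],
            "x": visual["x"], "y": visual["y"],
            "t": visual["t"]}

radio  = {"device_id": "aa:bb:01", "x": 3.1, "y": 7.9, "t": 10.02}
visual = {"x": 2.4, "y": 8.3, "t": 10.00}
print(fuse(radio, visual))
# -> {'device_id': 'aa:bb:01', 'x': 2.4, 'y': 8.3, 't': 10.0}
```

Note the coarse radio coordinates are discarded once the association is made; they served only to identify which visual track carries which device_id.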
- Regarding issue (2), embodiments successfully interact with different sensory modalities (e.g., vision and radio) and use high-level temporal-spatial features (e.g., trajectory patterns and shape) instead of raw measurement data. Because embodiments interact with radio and visual data in its already processed, commonly output form, their implementation is streamlined and more flexible, and can be integrated with different vision based and radio based location pipelines without modifications (or with relatively few modifications).
- Regarding issue (3), the interactions described above are targeted to leverage the best attributes of the different modalities. For instance, radio signals have unique identifiers (e.g., a sender_ID or device_ID included within a radio data packet) and relatively high precision in terms of trajectory patterns. However, radio signals are often broadcast relatively infrequently (low frequency) and so suffer from poor location accuracy. Further, radio signals can leverage the unique identifiers to accurately determine the number of users/radio broadcasters in a given space. As a result, radio is an excellent way to detect and track multiple moving objects. In contrast, visual signals have high localization accuracy and are output relatively frequently (high frequency). However, visual signals have difficulty correctly identifying moving objects that may occlude one another. Visual tracking programs incorrectly connect multiple objects when those objects near one another. This occurs due to lighting issues (e.g., shadows) and occlusions. To address the limitations of vision algorithms, embodiments apply clustering techniques to remove shadows and/or group segments of a person, and then combine that improved visual output with radio based accurate determinations of the IDs of users in the video frame. Thus, IDs stay with their correct objects even when those objects start apart from each other, near each other, and then move away from each other. Such a scenario is addressed more fully with regard to
FIG. 3. Therefore, understanding the characteristics of different sensor modalities and fusing the best of those characteristics provides advantages to embodiments. -
FIG. 1 depicts a localization system in an embodiment. Mobile computing nodes (such as the node of FIG. 4) correspond to users carrying mobile devices. - The
mobile devices may communicate with one or more access points 110. Mobile device 104A may capture measurements associated with the communication between the mobile device and the access points 110. The measurements may include received signal strength indicators (RSSIs), each of which is a measurement of the power present in a received radio signal. The measurements may be transmitted to a server 106 (such as the node of FIG. 5). Although embodiments are described herein in the context of Wi-Fi networks for the access points 110, other radio technologies (e.g., RFID, Bluetooth Low Energy (BLE)) may be used as well.
- The server 106 may also communicate with one or more image capture devices, which capture visual data of the users, and with one or more datastores 108, which may store data such as floor map information for a specific location and/or access point radio fingerprint data associated with the one or more access points 110.
- Radio fingerprinting data may include information that identifies the one or more access points 110 or any other radio transmitter by the unique "fingerprint" that characterizes its signal transmission. An electronic fingerprint makes it possible to identify a wireless device by its unique radio transmission characteristics. The location accuracy of radio fingerprinting may depend on environmental factors such as the density of the access points, the relative placement of the access points, and the temporal/spatial variations of Wi-Fi signals.
- The
server 106 may determine a current location of the mobile devices and, by extension, of the users carrying the mobile devices. The determined location may be communicated to a mobile computing node 105.
FIG. 2 depicts a process in an embodiment. Process 200 includes three main parts: preliminary sensor activity 205, individual sensory data processing 215, and integrated sensory data processing 225. - In
process portion 205, a radio sensor (e.g., Wi-Fi device, RFID device, BLE device) 201 performs calibration and time synchronization with a visual sensor (e.g., image capture device such as a surveillance camera) 202. For example, devices 201 and 202 may align their internal clocks so that samples from the two pipelines can later be matched by timestamp. - In
process portion 215, location data from each modality pipeline (i.e., radio and visual) are processed in parallel (but that is not mandatory in all embodiments). - For example, at
block 211 server 106 may receive data from a mobile device 104. In some embodiments, the data received may include wireless data measurements associated with the mobile device 104 and one or more access points 110. In some embodiments, the wireless data measurements may be RSSI data. At block 211 server 106 may retrieve radio fingerprinting data associated with the one or more access points 110. The radio fingerprint data may be retrieved from one or more datastores 108. In some embodiments, the server 106 may also retrieve one or more maps for an area where the mobile device 104 is generally located. At block 211 coordinates Xm, Ym may be determined for the m mobile devices 104 identified. - An embodiment employs RSSI as well as the number of detected Wi-Fi access points for generating the Wi-Fi fingerprinting. However, not all embodiments are limited to the RSSI fingerprinting technique, and other Wi-Fi location techniques (e.g., RSSI triangulation, Time of Flight trilateration) may be used as well. Also, some embodiments blend data from various radio sources, such as combining Wi-Fi with RFID or BLE data.
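- As a rough sketch of how block 211's fingerprint lookup might work (the access-point names, calibration grid, and the -100 dBm floor below are illustrative assumptions, not details from this disclosure), a stored database maps surveyed positions to RSSI vectors, and an observed vector is matched to the nearest stored fingerprint:

```python
import math

def rssi_distance(observed, reference):
    """Euclidean distance between two RSSI fingerprints.
    Fingerprints are dicts mapping access-point ID -> RSSI in dBm;
    APs missing from one fingerprint are treated as a weak -100 dBm."""
    aps = set(observed) | set(reference)
    return math.sqrt(sum(
        (observed.get(ap, -100.0) - reference.get(ap, -100.0)) ** 2
        for ap in aps))

def locate_by_fingerprint(observed, fingerprint_db):
    """Return the (x, y) of the calibration point whose stored
    fingerprint is closest to the observed one (nearest neighbor)."""
    return min(fingerprint_db,
               key=lambda xy: rssi_distance(observed, fingerprint_db[xy]))

# Hypothetical calibration database: (x, y) in meters -> fingerprint.
db = {
    (0.0, 0.0): {"ap1": -40, "ap2": -70},
    (5.0, 0.0): {"ap1": -70, "ap2": -45},
}
print(locate_by_fingerprint({"ap1": -42, "ap2": -68}, db))  # → (0.0, 0.0)
```

A production pipeline would interpolate between calibration points or use probabilistic matching; this nearest-neighbor form is only meant to show the data flow from RSSI measurements to the Xm, Ym coordinates of block 211.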
-
Block 212 concerns sensor disparity adjustment. Such adjustments include, for example, frequency matching between radio 201 and camera 202. For example, for frequency disparity between the sensors, an embodiment associates data from radio 201 and camera 202 based on the closest timestamps between the devices. For example, a camera may capture 29 frames per second while only 1 Wi-Fi signal is produced every 3 to 4 seconds. - Block 212 also concerns coordinate transformation between
radio 201 and camera 202. Since every sensor speaks its own language, an embodiment determines a common language for the sensors 201, 202. Accordingly, before the data reaches process portion 225, an embodiment converts one set of coordinates (Xn, Yn coordinates from radio 201 or Pxn, Pyn coordinates from camera 202) to another set of coordinates. An embodiment may use, for example, a homography transformation matrix for coordinate transformation. Both radio and visual coordinates may be transformed to a common coordinate system in some embodiments. An embodiment uses pixel coordinates as the reference coordinates for effective trajectory pattern matching in the association block (block 222), since when converting pixels to world coordinates a small pixel error can be interpreted as a huge difference in the world coordinate system (especially for those pixels near vanishing points in a frame). -
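To illustrate the coordinate transformation step, the sketch below applies a 3x3 homography matrix to a 2-D point, including the perspective divide. The matrix itself is a toy translation chosen for readability; in practice the matrix would be estimated during calibration from point correspondences between the camera's pixel plane and the radio coordinate plane.

```python
def apply_homography(H, x, y):
    """Map point (x, y) through the 3x3 homography H (row-major nested
    lists), returning Cartesian coordinates after the perspective divide."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Identity-plus-translation homography as a toy example: shifts by (10, 5).
H = [[1, 0, 10],
     [0, 1, 5],
     [0, 0, 1]]
print(apply_homography(H, 2.0, 3.0))  # → (12.0, 8.0)
```

-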
Block 213 concerns vision tracking to determine visual coordinates. An embodiment tracks motion based on two phases: (a) detecting moving objects in each frame using a background subtraction algorithm (e.g., based on a Gaussian mixture model), and (b) associating the detections corresponding to the same object over time using a Kalman filter. This type of tracking alone may not be sufficient. Real world scenarios may include people walking close to each other or making sudden changes in walking direction. In such cases it is difficult to track each object and continuously assign the correct ID to it. For example, tracking Mouse1 and Mouse2 in a cage may be relatively easy for visual tracking systems provided the mice do not touch. However, when the mice interact and one occludes the other, a tracking system that had applied "Object1" to Mouse1 and "Object2" to Mouse2 may now mistakenly apply "Object1" to Mouse2. Many visual tracking systems further connect separate people through shadows that couple the people together. Such systems sometimes break a single person into multiple objects due to occlusions. - Being aware of these current limitations, an embodiment applies clustering techniques to group segments of a person together or to remove shadows based on knowledge of the already identified radio IDs obtained from radio data from
radio 201. Since the embodiment knows how many people should be in the visual scene from the number of electronic IDs, the embodiment can apply K-means clustering to the objects (where K equals the number of electronic identifiers) to group segments together (block 214). After finding distinct groups, if there is a shadow, 2-means clustering can be applied to each distinct group to remove the shadow: one cluster for the object and the other for its cast shadow. The centroid and moment of inertia of an object together can determine the existence of a moving shadow. - In
process portion 225, integration or fusion of the disparate radio and visual location data is performed. As mentioned above, radio signal based location tracking gives reliable overall movement estimations over a time window (e.g., trajectory patterns, direction of movement), but each location estimate may have error of a few meters due to the inherent Wi-Fi signal variations. Visual signal based object tracking provides high localization accuracy (e.g., sub-meter), but often is confused between multiple objects and/or reports an incorrect number of objects due to factors such as shadowing, object obstruction, or crowded scenes. -
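The clustering step described above (block 214) can be sketched as plain Lloyd's K-means over foreground pixel coordinates, with K supplied by the radio pipeline's count of electronic IDs. The blob data and the deterministic seeding below are illustrative assumptions:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def kmeans(points, k, iters=20):
    """Lloyd's algorithm over 2-D points; k comes from the number of
    electronic (radio) identifiers detected in the scene."""
    # Deterministic seeding: spread the initial centers across the input.
    step = max(1, (len(points) - 1) // max(1, k - 1))
    centers = [points[min(i * step, len(points) - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            groups[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        # Recompute each center as its group's mean (keep it if group empty).
        centers = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                   if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups

# Two well-separated pixel blobs standing in for two people's segments;
# k = 2 because the radio pipeline reported two device IDs.
blob_a = [(x / 10, y / 10) for x in range(5) for y in range(5)]
blob_b = [(8 + x / 10, 8 + y / 10) for x in range(5) for y in range(5)]
centers, groups = kmeans(blob_a + blob_b, k=2)
print(sorted(len(g) for g in groups))  # → [25, 25]
```

The 2-means shadow-removal pass mentioned above could reuse the same routine within each distinct group.
-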
Portion 225 may occur at server 106 to provide a user trajectory by fusing the trajectories of radio (Wi-Fi) and vision with a floor map. While in some embodiments portion 215 is handled separately (e.g., coordinates are determined on a smart phone and within a camera system), in other embodiments those tasks are handled by server 106 and/or by one of devices 201, 202. -
Block 221 includes extracting trajectory features in each time window (e.g., frame). An embodiment begins feature extraction with an initial set of measured data and builds derived values (features) intended to be informative and non-redundant. Feature extraction is related to dimensionality reduction. When the input data to an algorithm is too large to be processed and is suspected to be redundant (e.g., repetitiveness of images presented as pixels), it can be transformed into a reduced set of features (also named a "feature vector"). The extracted features are expected to contain the relevant information from the input data, so that tracking is more easily performed. Features may vary and may include trajectory, shape, color, and the like. -
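A minimal version of the trajectory feature extraction in block 221 might reduce a window of timestamped positions to a short feature vector. The particular features below (net displacement, path length, heading, straightness) are illustrative choices, not a list taken from this disclosure:

```python
from math import atan2, hypot, degrees

def trajectory_features(track):
    """Reduce a window of (t, x, y) samples to a small feature vector.
    These stand in for the 'high-level temporal-spatial features'
    (trajectory pattern and shape) used instead of raw measurements."""
    steps = list(zip(track, track[1:]))
    seg_len = [hypot(x2 - x1, y2 - y1) for (_, x1, y1), (_, x2, y2) in steps]
    path = sum(seg_len)                      # total distance traveled
    (_, x0, y0), (_, xn, yn) = track[0], track[-1]
    net = hypot(xn - x0, yn - y0)            # start-to-end displacement
    heading = degrees(atan2(yn - y0, xn - x0))
    straightness = net / path if path else 1.0  # 1.0 == perfectly straight
    return {"net": net, "path": path,
            "heading_deg": heading, "straightness": straightness}

# A straight walk along a 3-4-5 direction: straightness should be 1.0.
walk = [(t, 3.0 * t, 4.0 * t) for t in range(5)]
f = trajectory_features(walk)
print(f["net"], f["straightness"])  # → 20.0 1.0
```

-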
Block 222 includes solving an association or assignment problem. An assignment problem consists of finding a maximum weight matching (or minimum weight perfect matching) in a weighted bipartite graph. Thus, in block 222 the task is to correlate a radio based coordinate (tied to an electronic ID) to a likely more accurate (but improperly identified) vision based coordinate. An embodiment forms this association using a Hungarian optimization algorithm, which finds the minimum cost assignment based on Euclidean distance and a cost matrix, to map electronic identifiers of objects to visual appearances of those same objects. As noted above, the vision pipeline may produce an incorrectly assigned ID for a moving object in a time window. This misidentification may be corrected by solving the association problem. - As a result, the location of a moving object taken from the visual location data can be used as an output location, because visual signals are more accurate (<1 m) in localization (block 223). Furthermore, the fused location data may be displayed on a display to a user or otherwise communicated. By solving the association problem, an embodiment fixes the object tracking/detection errors while still maintaining the sub-meter tracking accuracy from vision solutions.
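- The association in block 222 can be sketched as a minimum-cost matching between radio IDs and vision tracks. For readability the sketch below brute-forces all permutations rather than implementing the Hungarian algorithm, which finds the same minimum-cost assignment in polynomial time; the device names and track data are invented for illustration:

```python
from itertools import permutations
from math import dist

def assign_ids(radio_tracks, vision_tracks):
    """Minimum-cost assignment of radio IDs to vision tracks.  Cost is the
    summed Euclidean distance between corresponding points of a radio track
    and a vision track over the shared time window."""
    ids = list(radio_tracks)
    cost = {(rid, v): sum(dist(p, q) for p, q in
                          zip(radio_tracks[rid], vision_tracks[v]))
            for rid in ids for v in range(len(vision_tracks))}
    best = min(permutations(range(len(vision_tracks))),
               key=lambda perm: sum(cost[(rid, v)]
                                    for rid, v in zip(ids, perm)))
    return dict(zip(ids, best))

# Two people crossing paths: coarse radio tracks, precise vision tracks.
radio = {"dev_A": [(0, 0), (1, 1), (2, 2)],
         "dev_B": [(9, 0), (8, 1), (7, 2)]}
vision = [[(0.1, 0.0), (1.1, 0.9), (2.0, 2.1)],   # should get dev_A
          [(8.9, 0.1), (8.0, 1.1), (7.1, 1.9)]]   # should get dev_B
print(assign_ids(radio, vision))  # → {'dev_A': 0, 'dev_B': 1}
```

With n tracks the brute force is O(n!), so a real system would use the Hungarian method (roughly O(n^3)) to minimize the same cost matrix.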
- In an embodiment, the location output can be further fed into an adaptive particle filter framework. When a floor map is available an embodiment can leverage floor map constraints to correct location errors and speed up the convergence of relative confidence estimation.
- Regarding particle filters, an embodiment employs particle filter (also known as Sequential Monte Carlo (SMC)) methods to estimate the location (i.e., planar coordinates and/or heading orientation) of an object. Particle filtering or SMC methods may be on-line posterior density estimation algorithms to estimate the posterior density of a state-space by directly implementing Bayesian recursion equations. Particle filtering may utilize a discretization approach, using a set of particles to represent the posterior density. Particle filtering may provide a methodology for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions. The state-space model may be non-linear and the initial state and noise distributions may take any form required. Particle filtering may implement the Bayesian recursion equations directly by using an ensemble based approach. The samples from the distribution may be represented by a set of particles. Each particle may be associated with a weight that represents the probability of that particle being sampled from the probability density function. Weight disparity leading to weight collapse may be mitigated by including a resampling step before the weights become too uneven. In the resampling step, the particles with negligible weights may be replaced by new particles in the proximity of the particles with higher weights.
- In some embodiments, particle filtering may be used to track the locations of particles and track the relative confidence between the radio fingerprint location and the visual location tracking. When a floor map is available, floor map constraints may be leveraged to speed up the convergence of relative confidence estimation.
- A sensor update may involve particles that are weighted by the measurements taken from additional sensors. Sensor update may use the wireless signal strengths (e.g., RSSI), the radio fingerprint database, and the floor plan map (if available) to calculate the weight of a particle. For example, a particle may be penalized if its location is within the wall or if the fingerprint does not match the fingerprint database. For resampling, while the particle filtering reiterates, the set of particles may degenerate and not be able to effectively represent a distribution due to the significant difference in weighting among particles. Resampling is a means to avoid the problem of degeneracy of the algorithm. It is achieved by subsampling from the set of particles to form another set to represent the exact same distribution.
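- As one hedged sketch of the sensor update described above (the propagation model, the 4 dB noise figure, and the wall test are invented placeholders, not values from this disclosure), each particle's weight can combine fingerprint agreement with a floor-plan penalty:

```python
from math import exp

def weight_particle(particle, observed_rssi, fingerprint_of, inside_walls):
    """Weight one particle: a Gaussian-style likelihood comparing the
    observed RSSI with the fingerprint expected at the particle's location,
    zeroed out ('penalized') if the particle falls inside a wall."""
    if inside_walls(particle):
        return 0.0
    expected = fingerprint_of(particle)
    err_sq = sum((observed_rssi[ap] - expected.get(ap, -100.0)) ** 2
                 for ap in observed_rssi)
    return exp(-err_sq / (2 * 4.0 ** 2))  # assumed 4 dB RSSI noise

# Toy single-AP model: signal decays 2 dB per meter from an AP at x = 0.
fingerprint_of = lambda p: {"ap1": -40.0 - 2.0 * p[0]}
inside_walls = lambda p: p[0] < 0          # everything at x < 0 is a wall
observed = {"ap1": -50.0}                  # consistent with x ≈ 5 m
w_good = weight_particle((5.0, 0.0), observed, fingerprint_of, inside_walls)
w_far = weight_particle((20.0, 0.0), observed, fingerprint_of, inside_walls)
w_wall = weight_particle((-1.0, 0.0), observed, fingerprint_of, inside_walls)
print(w_good > w_far, w_wall == 0.0)  # → True True
```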
- Thus,
process 200 also provides for the server updating a location estimate, represented by a set of particles, with the wireless data measurements received from the mobile device 104. In some embodiments, the location for each of the particles may be based at least in part on the radio fingerprinting data and visual data associated with the mobile device 104.
- In some embodiments, the fusion engine may calculate a weight associated with the determined location for each of the particles based at least in part on radio fingerprint data and the respective weight for each of the determined locations for each of the particles.
- The fusion engine may determine whether the particles are degenerated. If the particles are not degenerated, then the method may reiterate and/or end. In some embodiments, the fusion engine may determine the particles are degenerated based at least in part on the calculated weight associated with the particles. If the fusion engine determines that the particles are degenerated, then the fusion engine may initiate a resampling of at least a portion of the wireless data measurements. Resampling may include requesting additional wireless data measurements and additional visual data associated with the mobile device 104 from the mobile device 104 and iterating through the method.
- Returning to process
portion 225, an embodiment receives processed data from theradio 201 andcamera 202 nodes. For example,devices -
FIG. 3A depicts radio based tracking, FIG. 3B depicts vision based tracking, and FIG. 3C depicts radio/vision fusion based tracking. A stationary camera (such as a surveillance camera) was used to track two people, moving in opposite directions, at a café. In FIG. 3A, Wi-Fi tracking recovers the two paths only coarsely. In FIG. 3B, visual tracking shows great granularity or detail but confuses the paths, incorrectly identifying portions of them as a single path 303. FIG. 3C shows results based on fusion of radio and vision tracking, which properly identifies the two paths, avoids the confusion of FIG. 3B, and provides sub-meter accurate location information. - Thus, embodiments described herein provide precise indoor localization by intelligently fusing different sensory modalities and leveraging existing infrastructure vision and radio sensors without additional hardware. Since the proposed multimodal fusion embodiments are highly accurate and cost-efficient localization solutions, they can be used in a wide range of internet-of-things (IoT) applications.
- Referring now to
FIG. 4, shown is a block diagram of an example system with which embodiments (such as mobile nodes 104) can be used. As seen, system 900 may be a smartphone or other wireless communicator or any other IoT device. A baseband processor 905 is configured to perform various signal processing with regard to communication signals to be transmitted from or received by the system. In turn, baseband processor 905 is coupled to an application processor 910, which may be a main CPU of the system to execute an OS and other system software, in addition to user applications such as many well-known social media and multimedia apps. Application processor 910 may further be configured to perform a variety of other computing operations for the device. - In turn,
application processor 910 can couple to a user interface/display 920, e.g., a touch screen display. In addition, application processor 910 may couple to a memory system including a non-volatile memory, namely a flash memory 930, and a system memory, namely a DRAM 935. In some embodiments, flash memory 930 may include a secure portion 932 in which secrets and other sensitive information may be stored. As further seen, application processor 910 also couples to a capture device 945 such as one or more image capture devices (e.g., camera) that can record video and/or still images. - Still referring to
FIG. 4, a universal integrated circuit card (UICC) 940 comprises a subscriber identity module, which in some embodiments includes a secure storage 942 to store secure user information. System 900 may further include a security processor 950 that may couple to application processor 910. A plurality of sensors 925, including one or more multi-axis accelerometers, may couple to application processor 910 to enable input of a variety of sensed information such as motion and other environmental information. In addition, one or more authentication devices 995 may be used to receive, e.g., user biometric input for use in authentication operations. - As further illustrated, a near field communication (NFC)
contactless interface 960 is provided that communicates in a NFC near field via an NFC antenna 965. While separate antennae are shown in FIG. 4, understand that in some implementations one antenna or a different set of antennae may be provided to enable various wireless functionality. - A power management integrated circuit (PMIC) 915 couples to
application processor 910 to perform platform level power management. To this end, PMIC 915 may issue power management requests to application processor 910 to enter certain low power states as desired. Furthermore, based on platform constraints, PMIC 915 may also control the power level of other components of system 900. - To enable communications to be transmitted and received such as in one or more IoT networks, various circuitry may be coupled between
baseband processor 905 and an antenna 990. Specifically, a radio frequency (RF) transceiver 970 and a wireless local area network (WLAN) transceiver 975 may be present. In general, RF transceiver 970 may be used to receive and transmit wireless data and calls according to a given wireless communication protocol such as a 3G or 4G wireless communication protocol, such as in accordance with a code division multiple access (CDMA), global system for mobile communication (GSM), long term evolution (LTE) or other protocol. In addition a GPS sensor 980 may be present, with location information being provided to security processor 950 for use as described herein when context information is to be used in a pairing process. Other wireless communications such as receipt or transmission of radio signals, e.g., AM/FM and other signals, may also be provided. In addition, via WLAN transceiver 975, local wireless communications, such as according to a Bluetooth™ or IEEE 802.11 standard, can also be realized. - Referring now to
FIG. 5, shown is a block diagram of a system (e.g., server 106) in accordance with another embodiment of the present invention. Multiprocessor system 1000 is a point-to-point interconnect system such as a server system, and includes a first processor 1070 and a second processor 1080 coupled via a point-to-point interconnect 1050. Each of processors 1070 and 1080 may be a multicore processor including first and second processor cores, and each of processors 1070 and 1080 may include a secure engine to perform security operations. -
First processor 1070 further includes a memory controller hub (MCH) 1072 and point-to-point (P-P) interfaces 1076 and 1078. Similarly, second processor 1080 includes an MCH 1082 and corresponding P-P interfaces. The MCHs may couple the processors to respective memories, namely a memory 1032 and a memory 1034, which may be portions of main memory (e.g., a DRAM) locally attached to the respective processors. First processor 1070 and second processor 1080 may be coupled to a chipset 1090 via respective P-P interconnects. As shown in FIG. 5, chipset 1090 includes P-P interfaces of its own. - Furthermore,
chipset 1090 includes an interface 1092 to couple chipset 1090 with a high performance graphics engine 1038, by a P-P interconnect 1039. In turn, chipset 1090 may be coupled to a first bus 1016 via an interface 1096. Various input/output (I/O) devices 1014 may be coupled to first bus 1016, along with a bus bridge 1018 which couples first bus 1016 to a second bus 1020. Various devices may be coupled to second bus 1020 including, for example, a keyboard/mouse 1022, communication devices 1026 and a data storage unit 1028 such as a non-volatile storage or other mass storage device. As seen, data storage unit 1028 may include code 1030, in one embodiment. As further seen, data storage unit 1028 also includes a trusted storage 1029 to store sensitive information to be protected. Further, an audio I/O 1024 may be coupled to second bus 1020. - Embodiments may be used in many different types of systems. For example, in one embodiment a communication device can be arranged to perform the various methods and techniques described herein. Of course, the scope of the present invention is not limited to a communication device, and instead other embodiments can be directed to other types of apparatus for processing instructions, or one or more machine readable media including instructions that in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein.
- Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. Embodiments also may be implemented in data and may be stored on a non-transitory storage medium, which if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform one or more operations. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.
- A module as used herein refers to any hardware, software, firmware, or a combination thereof. Often module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices. However, in another embodiment, logic also includes software or code integrated with hardware, such as firmware or micro-code.
- For example, a "location fusion" module or engine may include the above hardware, software, firmware, or a combination thereof configured (programmed or designed) to fuse certain aspects of visual and radio location data together to form fused location information.
- As used herein a “server” is both a running instance of some software that is capable of accepting requests from clients, and the computer that executes such software. Servers operate within a client-server architecture, in which “servers” are computer programs running to serve the requests of other programs, the “clients”. This may be to share data, information or hardware and software resources. Examples of typical computing servers are database server, file server, mail server, print server, web server, gaming server, and application server. The clients may run on the same computer, but may connect to the server through a network. In the hardware sense, a computer primarily designed as a server may be specialized in some way for its task. Sometimes more powerful and reliable than standard desktop computers, they may conversely be simpler and more disposable if clustered in large numbers.
- Example 1a includes at least one computer readable storage medium comprising instructions that when executed enable a system to: receive (a)(i) first radio signal location data for a first object from a radio sensor; and (a)(ii) first visual signal location data for the first object from a camera sensor; perform feature extraction on (b)(i) the first radio signal location data to determine first extracted radio signal features; and (b)(ii) the first visual signal location data to determine first extracted visual signal features; solve a first association problem between the first extracted radio signal features and the first extracted visual signal features to determine first fused location data; and store the first fused location data in the at least one computer readable storage medium.
- In example 2a the subject matter of Example 1a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to display a location of the first object based on the first fused location data.
- In example 3a the subject matter of Examples 1a-2a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to apply a particle filter to the first fused location data to determine first filtered fused location data.
- In example 4a the subject matter of Examples 1a-3a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to apply the particle filter to the first fused location data based on a site map for a physical site corresponding to the first radio signal location data, the first visual signal location data, and the first object.
- In example 5a the subject matter of Examples 1a-4a can optionally include the at least one computer readable storage medium wherein the first radio signal location data and the first visual signal location data both correspond to a common coordinate system.
- In example 6a the subject matter of Examples 1a-5a can optionally include wherein the first radio signal location data is time synchronized to the first visual signal location data.
- In example 7a the subject matter of Examples 1a-6a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to determine the first fused location data based on associating a first electronic identifier, corresponding to the first radio signal location data, with the first visual signal location data.
- In example 8a the subject matter of Examples 1a-7a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to perform the feature extraction to extract trajectory data for the first object based on the first visual signal location data; wherein the trajectory data is included in the first extracted radio signal features.
- In example 9a the subject matter of Examples 1a-8a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to: receive (a)(i) second radio signal location data for a second object from the radio sensor; and (a)(ii) second visual signal location data for the second object from the camera sensor; perform feature extraction on (b)(i) the second radio signal location data to determine second extracted radio signal features; and (b)(ii) the second visual signal location data to determine second extracted visual signal features; solve a second association problem between the second extracted radio signal features and the second extracted visual signal features to determine second fused location data; and store the second fused location data in the at least one computer readable storage medium; wherein the first and second visual signal location data corresponds to a single frame of video data.
- In example 10a the subject matter of Examples 1a-9a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to reassign a first electronic identifier from the second object to the first object based on the solving the first and second association problems.
- In example 11a the subject matter of Examples 1a-10a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to: determine the first fused location data based on associating a first electronic identifier, corresponding to the first radio signal location data, with the first visual signal location data; and determine the second fused location data based on associating a second electronic identifier, corresponding to the second radio signal location data, with the second visual signal location data.
- In example 12a the subject matter of Examples 1a-11a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to conduct k-means clustering on video data corresponding to the first and second visual signal location data, wherein “k” is based on the first and second electronic identifiers.
- In example 13a the subject matter of Examples 1a-12a can optionally include the at least one computer readable storage medium wherein the first and second visual signal location data correspond to an occlusion with one of the first and second objects occluding another of the first and second objects.
- In example 14a the subject matter of Examples 1a-13a can optionally include the at least one computer readable storage medium wherein the first radio signal location data is based on at least one of received signal strength indicator (RSSI) fingerprinting, RSSI triangulation, and Time Of Flight trilateration.
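The Time Of Flight trilateration mentioned in example 14a can be illustrated with a closed-form 2-D solver. The anchor positions and ranges below are hypothetical, and the linearization (subtracting the first circle equation from the others) is a standard textbook technique, not necessarily the specification's own method.

```python
import math

def trilaterate(anchors, dists):
    """Closed-form 2-D trilateration from three anchor/range pairs.

    Linearizes the three circle equations by subtracting the first,
    then solves the resulting 2x2 linear system with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical access points at known positions; ranges from time of flight.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]
pos = trilaterate(anchors, dists)  # recovers ~ (3.0, 4.0)
```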
- In example 15a the subject matter of Examples 1a-14a can optionally include the at least one computer readable storage medium wherein the first radio signal location data includes first coordinate data and the first visual signal location data includes second coordinate data.
- In example 16a the subject matter of Examples 1a-15a can optionally include the at least one computer readable storage medium wherein the first radio signal location data includes a first device identifier (ID) for the first object and a first time stamp and the first visual signal location data includes a second time stamp.
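Because the radio and visual location data carry separate time stamps (example 16a), fusing them typically requires pairing each radio fix with the nearest-in-time video frame. The helper below is an assumed sketch of such matching; `nearest_sample` and the frame times are hypothetical, not part of the specification.

```python
import bisect

def nearest_sample(timestamps, t):
    """Index of the stored sample whose time stamp is closest to t.

    Assumes timestamps is sorted ascending; used here to pair a radio
    fix with the nearest-in-time vision fix.
    """
    i = bisect.bisect_left(timestamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    return min(candidates, key=lambda j: abs(timestamps[j] - t))

# Vision frames at ~30 fps; a radio fix at t = 10.02 s pairs with frame 1.
frame_times = [10.000, 10.033, 10.067]
nearest_sample(frame_times, 10.02)  # -> 1
```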
- In example 17a the subject matter of Examples 1a-16a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to solve the first association problem based on a least cost optimization between the first extracted radio signal features and the first extracted visual signal features.
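One way to realize the least cost optimization of example 17a is a linear assignment between the radio position estimates (which carry device IDs) and the vision position estimates (which are precise but anonymous). The brute-force search below is a hedged sketch for a handful of objects; a Hungarian-algorithm solver would replace it at scale, and the device IDs and coordinates are invented for illustration.

```python
import itertools
import math

def associate(radio_tracks, vision_tracks):
    """Least-cost association between radio and vision position estimates.

    Exhaustive search over permutations (fine for a handful of objects);
    cost is the summed distance between paired estimates.
    """
    n = len(radio_tracks)
    ids = list(radio_tracks)
    best_cost, best_map = float("inf"), None
    for perm in itertools.permutations(range(n)):
        cost = sum(math.dist(radio_tracks[ids[i]], vision_tracks[perm[i]])
                   for i in range(n))
        if cost < best_cost:
            best_cost, best_map = cost, {ids[i]: perm[i] for i in range(n)}
    return best_map  # device ID -> vision track index

# Coarse radio fixes keyed by hypothetical device IDs vs. precise vision fixes.
radio = {"dev_a": (1.1, 0.8), "dev_b": (5.3, 5.2)}
vision = [(5.0, 5.0), (1.0, 1.0)]
mapping = associate(radio, vision)  # {"dev_a": 1, "dev_b": 0}
```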
- In example 18a the subject matter of Examples 1a-17a can optionally include the at least one computer readable storage medium further comprising instructions that when executed enable the system to communicate a first message to a first mobile computing node in response to determining the first fused location data.
- Such a message may include, for example, a coupon or advertisement that is specific to the fused location data (e.g., a coupon for product X located within 1 m of the tracked object). However, other actuations based on the fused data are possible. Those actuations may be virtual or physical. For example, an actuation signal may be sent to a solenoid coupled to a gate when an object, having a recognized or whitelisted device_id, is tracked and determined to be within a threshold distance of the gate. As another example, a virtual avatar may be projected to a user's virtual reality headset when the user, having a recognized or whitelisted device_id, is tracked and determined to be within a threshold distance of a location. Other actuations may include displaying privileged data on a user's mobile computing device (e.g., smartphone, wireless spectacles with a video display) when an embodiment determines the user is near a designated location.
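The gate actuation described above reduces to a whitelist check plus a distance threshold on the fused coordinates. A minimal sketch, assuming hypothetical names (`WHITELIST`, `GATE_POS`, `THRESHOLD_M`) and site coordinates in meters:

```python
import math

WHITELIST = {"dev_42"}          # hypothetical recognized device IDs
GATE_POS = (12.0, 3.0)          # gate location in site coordinates (m)
THRESHOLD_M = 1.5               # actuation radius (m)

def should_open_gate(device_id, fused_xy):
    """Fire the actuation only for whitelisted devices within range."""
    return (device_id in WHITELIST
            and math.dist(fused_xy, GATE_POS) <= THRESHOLD_M)

should_open_gate("dev_42", (12.5, 3.4))   # True: listed and ~0.64 m away
should_open_gate("dev_99", (12.5, 3.4))   # False: unrecognized device
```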
- In example 19a the subject matter of Examples 1a-18a can optionally include the at least one computer readable storage medium wherein the first fused location data includes a first electronic identifier included in the first radio signal location data and first coordinate data included in the first visual signal location data.
- Example 20a includes at least one computer readable storage medium comprising instructions that when executed enable a system to: receive (a)(i) first location data, corresponding to a first object, from a radio sensor; and (a)(ii) second location data, corresponding to the first object, from a camera sensor; obtain extracted features based on features from the first and second location data; and determine fused location data that (b)(i) is based on the extracted features, and (b)(ii) includes an electronic identifier included in the first location data and coordinate data included in the second location data.
- Thus, in some embodiments the feature extraction may occur apart from the computing node that determines (e.g., obtains, derives, calculates) the fused data.
- Another version of example 20a includes at least one computer readable storage medium comprising instructions that when executed enable a system to: receive (a)(i) first location data, corresponding to a first object, from a radio sensor; and (a)(ii) second location data, corresponding to the first object, from a camera sensor; obtain extracted features based on features from the first and second location data; and determine fused location data that (b)(i) is based on the extracted features, (b)(ii) includes an electronic identifier based on an identifier included in the first location data, and (b)(iii) includes coordinate data based on second coordinate data included in the second location data.
- Thus, the electronic identifier of the fused data may merely be the same identifier that was in the first location data, an additional instance of the same identifier that was in the first location data, or may be derived from the identifier in the first location data. Likewise, the coordinate data of the fused data may be the same coordinate data that is in the second location data, an instance thereof, or may be derived from the coordinate data in the second location data.
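The shape of such a fused record can be sketched as follows; the field names and the dictionary-based sensor fixes are assumptions for illustration, not the specification's data format.

```python
from dataclasses import dataclass

@dataclass
class FusedLocation:
    """Fused record: identifier from the radio path, coordinates from vision."""
    electronic_id: str      # taken (or derived) from the radio location data
    x: float                # taken (or derived) from the visual location data
    y: float
    timestamp: float

# Hypothetical inputs: the coarse radio fix carries the device ID,
# while the vision fix carries the precise coordinates.
radio_fix = {"device_id": "dev_a", "x": 1.4, "y": 0.7, "t": 10.02}
vision_fix = {"x": 1.02, "y": 0.98, "t": 10.00}
fused = FusedLocation(radio_fix["device_id"],
                      vision_fix["x"], vision_fix["y"], vision_fix["t"])
```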
- In example 21a the subject matter of Example 20a can optionally include the at least one computer readable storage medium, wherein the first location data and the second location data both correspond to a common coordinate system.
- In example 22a the subject matter of Examples 20a-21a can optionally include the at least one computer readable storage medium, wherein the first location data includes first coordinate data.
- In example 23a the subject matter of Examples 20a-22a can optionally include wherein the identifier included in the first location data includes a device identifier (ID) for the first object.
- For example, the device ID may include a unique identifier for a device, such as a mobile computing node, that is included in a field of a container that includes or corresponds to the first location data.
- Example 24a includes an apparatus comprising: at least one memory and at least one processor, coupled to the at least one memory, to perform operations comprising: receiving (a)(i) first location data, corresponding to a first object, from a radio sensor; and (a)(ii) second location data, corresponding to the first object, from a camera sensor; obtaining extracted features based on features from the first and second location data; and determining fused location data that (b)(i) is based on the extracted features, and (b)(ii) includes an electronic identifier in the first location data and coordinate data in the second location data.
- Another version of example 24a includes an apparatus comprising: at least one memory and at least one processor, coupled to the at least one memory, to perform operations comprising: receiving (a)(i) first location data, corresponding to a first object, from a first sensor having a first sensing modality; and (a)(ii) second location data, corresponding to the first object, from a second sensor having a second sensing modality unequal to the first sensing modality; obtaining extracted features based on features from the first and second location data; and determining fused location data that (b)(i) is based on the extracted features, (b)(ii) includes an electronic identifier based on an identifier included in the first location data, and (b)(iii) includes coordinate data based on second coordinate data included in the second location data.
- In example 25a the subject matter of Example 24a can optionally include wherein (c)(i) the first location data includes first coordinate data; and (c)(ii) the identifier included in the first location data includes a device identifier (ID) corresponding to the first object.
- Example 1b includes an apparatus comprising: at least one memory and at least one processor, coupled to the at least one memory, to perform operations comprising: receiving (a)(i) first location data, corresponding to a first object, from a first sensor having a first sensing modality; and (a)(ii) second location data, corresponding to the first object, from a second sensor having a second sensing modality unequal to the first sensing modality; obtaining extracted features based on features from the first and second location data; and determining fused location data that (b)(i) is based on the extracted features, and (b)(ii) includes an electronic identifier in the first location data and coordinate data in the second location data.
- In example 2b the subject matter of Example 1b can optionally include wherein (c)(i) the first location data includes first coordinate data and the second location data includes second coordinate data; (c)(ii) the first location data includes a device identifier (ID) corresponding to the first object; and (c)(iii) the first sensor includes a radio sensor and the second sensor includes a vision sensor.
- For example, the first modality may include WiFi or other radio based sensing (BLE, RAIN RFID, NFC) while the second modality includes vision tracking, sound tracking (e.g., sonar), and the like. As used herein, “camera” includes image capture devices in general. Such image capture devices include still frame cameras, video cameras, thermographic cameras (e.g., infrared cameras, thermal imaging cameras, and image capture devices that use heat vision), and the like. Cameras and “vision” as used herein are not limited to vision within visible light wavelengths.
- Example 1c includes an apparatus comprising: at least one memory and at least one processor, coupled to the at least one memory, to perform operations comprising: receiving (a)(i) first radio signal location data for a first object from a radio sensor; and (a)(ii) first visual signal location data for the first object from a camera sensor; performing feature extraction on (b)(i) the first radio signal location data to determine first extracted radio signal features; and (b)(ii) the first visual signal location data to determine first extracted visual signal features; solving a first association problem between the first extracted radio signal features and the first extracted visual signal features to determine first fused location data; and storing the first fused location data in the at least one computer readable storage medium.
- Another version of Example 1c includes an apparatus comprising: at least one memory, at least one processor, a location fusion module, and a feature extraction module, all coupled to the at least one memory, to perform operations comprising: the location fusion module receiving (a)(i) first radio signal location data for a first object from a radio sensor; and (a)(ii) first visual signal location data for the first object from a camera sensor; the feature extraction module performing feature extraction on (b)(i) the first radio signal location data to determine first extracted radio signal features; and (b)(ii) the first visual signal location data to determine first extracted visual signal features; the location fusion module solving a first association problem between the first extracted radio signal features and the first extracted visual signal features to determine first fused location data; and the location fusion module storing the first fused location data in the at least one computer readable storage medium.
- The location fusion module or engine may include hardware (e.g., transistors, registers, field programmable gate arrays (FPGAs)), software, firmware, or a combination configured (programmed or designed) to receive the location data and then associate the data to determine the fused location data. The extraction module or engine may include hardware (e.g., transistors, registers, FPGAs), software, firmware, or a combination configured (programmed or designed) to extract features (e.g., trajectory) from location data. In an embodiment, the fusion module handles the actions addressed in portion 225 of FIG. 2.
- In example 2c the subject matter of Example 1c can optionally include wherein the at least one processor is to perform operations comprising displaying a location of the first object based on the first fused location data.
- In example 3c the subject matter of Examples 1c-2c can optionally include wherein the at least one processor is to perform operations comprising applying a particle filter to the first fused location data to determine first filtered fused location data.
- In example 4c the subject matter of Examples 1c-3c can optionally include wherein the at least one processor is to perform operations comprising applying the particle filter to the first fused location data based on a site map for a physical site corresponding to the first radio signal location data, the first visual signal location data, and the first object.
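Examples 3c and 4c describe filtering the fused estimate with a particle filter constrained by a site map. The sketch below is one assumed realization: particles that diffuse into a hypothetical wall region receive zero weight before resampling, which is how the site map bounds the filtered estimate. The map geometry, motion noise, and measurement noise are all invented for illustration.

```python
import random
import math

# Hypothetical 10 m x 10 m site with a wall region particles may not enter.
def on_map(x, y):
    inside = 0.0 <= x <= 10.0 and 0.0 <= y <= 10.0
    in_wall = 4.0 <= x <= 6.0 and 0.0 <= y <= 5.0
    return inside and not in_wall

def particle_filter_step(particles, measurement, noise=0.5, rng=random):
    """One predict/weight/resample step over the fused location estimate."""
    moved, weights = [], []
    for (x, y) in particles:
        # Predict: diffuse each particle with a simple random-walk motion model.
        nx, ny = x + rng.gauss(0, 0.2), y + rng.gauss(0, 0.2)
        w = 0.0
        if on_map(nx, ny):
            # Weight: Gaussian likelihood of the fused measurement.
            d = math.dist((nx, ny), measurement)
            w = math.exp(-d * d / (2 * noise * noise))
        moved.append((nx, ny))
        weights.append(w)
    # Resample proportionally to weight; off-map particles are never kept.
    return rng.choices(moved, weights=weights, k=len(particles))

rng = random.Random(1)
particles = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(500)]
particles = particle_filter_step(particles, measurement=(2.0, 2.0), rng=rng)
est = (sum(x for x, _ in particles) / len(particles),
       sum(y for _, y in particles) / len(particles))  # near (2, 2)
```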
- In example 5c the subject matter of Examples 1c-4c can optionally include wherein the first radio signal location data and the first visual signal location data both correspond to a common coordinate system.
- In example 6c the subject matter of Examples 1c-5c can optionally include wherein the first radio signal location data is time synchronized to the first visual signal location data.
- In example 7c the subject matter of Examples 1c-6c can optionally include wherein the at least one processor is to perform operations comprising determining the first fused location data based on associating a first electronic identifier, corresponding to the first radio signal location data, with the first visual signal location data.
- In example 8c the subject matter of Examples 1c-7c can optionally include wherein the at least one processor is to perform operations comprising performing the feature extraction to extract trajectory data for the first object based on the first visual signal location data; wherein the trajectory data is included in the first extracted radio signal features.
- In example 9c the subject matter of Examples 1c-8c can optionally include wherein the at least one processor is to perform operations comprising receiving (a)(i) second radio signal location data for a second object from the radio sensor; and (a)(ii) second visual signal location data for the second object from the camera sensor; performing feature extraction on (b)(i) the second radio signal location data to determine second extracted radio signal features; and (b)(ii) the second visual signal location data to determine second extracted visual signal features; solving a second association problem between the second extracted radio signal features and the second extracted visual signal features to determine second fused location data; and storing the second fused location data in the at least one computer readable storage medium; wherein the first and second visual signal location data corresponds to a single frame of video data.
- In example 10c the subject matter of Examples 1c-9c can optionally include wherein the at least one processor is to perform operations comprising reassigning a first electronic identifier from the second object to the first object based on the solving the first and second association problems.
- In example 11c the subject matter of Examples 1c-10c can optionally include wherein the at least one processor is to perform operations comprising determining the first fused location data based on associating a first electronic identifier, corresponding to the first radio signal location data, with the first visual signal location data; and determining the second fused location data based on associating a second electronic identifier, corresponding to the second radio signal location data, with the second visual signal location data.
- In example 12c the subject matter of Examples 1c-11c can optionally include wherein the at least one processor is to perform operations comprising conducting k-means clustering on video data corresponding to the first and second visual signal location data, wherein “k” is based on the first and second electronic identifiers.
- In example 13c the subject matter of Examples 1c-12c can optionally include wherein the first and second visual signal location data correspond to an occlusion with one of the first and second objects occluding another of the first and second objects.
- In example 14c the subject matter of Examples 1c-13c can optionally include wherein the first radio signal location data is based on at least one of received signal strength indicator (RSSI) fingerprinting, RSSI triangulation, and Time Of Flight trilateration.
- In example 15c the subject matter of Examples 1c-14c can optionally include wherein the first radio signal location data includes first coordinate data and the first visual signal location data includes second coordinate data.
- In example 16c the subject matter of Examples 1c-15c can optionally include wherein the first radio signal location data includes a first device identifier (ID) for the first object and a first time stamp and the first visual signal location data includes a second time stamp.
- In example 17c the subject matter of Examples 1c-16c can optionally include wherein the at least one processor is to perform operations comprising solving the first association problem based on a least cost optimization between the first extracted radio signal features and the first extracted visual signal features.
- In example 18c the subject matter of Examples 1c-17c can optionally include wherein the at least one processor is to perform operations comprising communicating a first message to a first mobile computing node in response to determining the first fused location data.
- In example 19c the subject matter of Examples 1c-18c can optionally include wherein the first fused location data includes a first electronic identifier included in the first radio signal location data and first coordinate data included in the first visual signal location data.
- While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
Claims (25)
1. At least one computer readable storage medium comprising instructions that when executed enable a system to:
receive (a)(i) first radio signal location data for a first object from a radio sensor; and (a)(ii) first visual signal location data for the first object from a camera sensor;
perform feature extraction on (b)(i) the first radio signal location data to determine first extracted radio signal features; and (b)(ii) the first visual signal location data to determine first extracted visual signal features;
solve a first association problem between the first extracted radio signal features and the first extracted visual signal features to determine first fused location data; and
store the first fused location data in the at least one computer readable storage medium.
2. The at least one computer readable storage medium of claim 1 , further comprising instructions that when executed enable the system to display a location of the first object based on the first fused location data.
3. The at least one computer readable storage medium of claim 1 , further comprising instructions that when executed enable the system to apply a particle filter to the first fused location data to determine first filtered fused location data.
4. The at least one computer readable storage medium of claim 3 , further comprising instructions that when executed enable the system to apply the particle filter to the first fused location data based on a site map for a physical site corresponding to the first radio signal location data, the first visual signal location data, and the first object.
5. The at least one computer readable storage medium of claim 1 , wherein the first radio signal location data and the first visual signal location data both correspond to a common coordinate system.
6. The at least one computer readable storage medium of claim 5 , wherein the first radio signal location data is time synchronized to the first visual signal location data.
7. The at least one computer readable storage medium of claim 1 , further comprising instructions that when executed enable the system to determine the first fused location data based on associating a first electronic identifier, corresponding to the first radio signal location data, with the first visual signal location data.
8. The at least one computer readable storage medium of claim 1 , further comprising instructions that when executed enable the system to perform the feature extraction to extract trajectory data for the first object based on the first visual signal location data; wherein the trajectory data is included in the first extracted radio signal features.
9. The at least one computer readable storage medium of claim 1 , further comprising instructions that when executed enable the system to:
receive (a)(i) second radio signal location data for a second object from the radio sensor; and (a)(ii) second visual signal location data for the second object from the camera sensor;
perform feature extraction on (b)(i) the second radio signal location data to determine second extracted radio signal features; and (b)(ii) the second visual signal location data to determine second extracted visual signal features;
solve a second association problem between the second extracted radio signal features and the second extracted visual signal features to determine second fused location data; and
store the second fused location data in the at least one computer readable storage medium;
wherein the first and second visual signal location data corresponds to a single frame of video data.
10. The at least one computer readable storage medium of claim 9 , further comprising instructions that when executed enable the system to reassign a first electronic identifier from the second object to the first object based on the solving the first and second association problems.
11. The at least one computer readable storage medium of claim 9 , further comprising instructions that when executed enable the system to:
determine the first fused location data based on associating a first electronic identifier, corresponding to the first radio signal location data, with the first visual signal location data; and
determine the second fused location data based on associating a second electronic identifier, corresponding to the second radio signal location data, with the second visual signal location data.
12. The at least one computer readable storage medium of claim 11 , further comprising instructions that when executed enable the system to conduct k-means clustering on video data corresponding to the first and second visual signal location data, wherein “k” is based on the first and second electronic identifiers.
13. The at least one computer readable storage medium of claim 12 , wherein the first and second visual signal location data correspond to an occlusion with one of the first and second objects occluding another of the first and second objects.
14. The at least one computer readable storage medium of claim 1 , wherein the first radio signal location data is based on at least one of received signal strength indicator (RSSI) fingerprinting, RSSI triangulation, and Time Of Flight trilateration.
15. The at least one computer readable storage medium of claim 14 , wherein the first radio signal location data includes first coordinate data and the first visual signal location data includes second coordinate data.
16. The at least one computer readable storage medium of claim 15 , wherein the first radio signal location data includes a first device identifier (ID) for the first object and a first time stamp and the first visual signal location data includes a second time stamp.
17. The at least one computer readable storage medium of claim 1 , further comprising instructions that when executed enable the system to solve the first association problem based on a least cost optimization between the first extracted radio signal features and the first extracted visual signal features.
18. The at least one computer readable storage medium of claim 1 , further comprising instructions that when executed enable the system to communicate a first message to a first mobile computing node in response to determining the first fused location data.
19. The at least one computer readable storage medium of claim 1 , wherein the first fused location data includes a first electronic identifier included in the first radio signal location data and first coordinate data included in the first visual signal location data.
20. At least one computer readable storage medium comprising instructions that when executed enable a system to:
receive (a)(i) first location data, corresponding to a first object, from a radio sensor; and (a)(ii) second location data, corresponding to the first object, from a camera sensor;
obtain extracted features based on features from the first and second location data; and
determine fused location data that (b)(i) is based on the extracted features, (b)(ii) includes an electronic identifier based on an identifier included in the first location data, and (b)(iii) includes coordinate data based on second coordinate data included in the second location data.
21. The at least one computer readable storage medium of claim 20 , wherein the first location data and the second location data both correspond to a common coordinate system.
22. The at least one computer readable storage medium of claim 21 , wherein the first location data includes first coordinate data.
23. The at least one computer readable storage medium of claim 22 , wherein the identifier included in the first location data includes a device identifier (ID) for the first object.
24. An apparatus comprising:
at least one memory and at least one processor, coupled to the at least one memory, to perform operations comprising:
receiving (a)(i) first location data, corresponding to a first object, from a first sensor having a first sensing modality; and (a)(ii) second location data, corresponding to the first object, from a second sensor having a second sensing modality unequal to the first sensing modality;
obtaining extracted features based on features from the first and second location data; and
determining fused location data that (b)(i) is based on the extracted features, (b)(ii) includes an electronic identifier based on an identifier included in the first location data, and (b)(iii) includes coordinate data based on second coordinate data included in the second location data.
25. The apparatus of claim 24 , wherein (c)(i) the first location data includes first coordinate data; (c)(ii) the identifier included in the first location data includes a device identifier (ID) corresponding to the first object; and (c)(iii) the first sensor includes a radio sensor and the second sensor includes a vision sensor.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/865,531 US9772395B2 (en) | 2015-09-25 | 2015-09-25 | Vision and radio fusion based precise indoor localization |
TW105125468A TWI697689B (en) | 2015-09-25 | 2016-08-10 | Apparatus of vision and radio fusion based precise indoor localization and storage medium thereof |
TW109122104A TWI770544B (en) | 2015-09-25 | 2016-08-10 | Apparatus of vision and radio fusion based precise indoor localization and storage medium thereof |
PCT/US2016/048540 WO2017052951A1 (en) | 2015-09-25 | 2016-08-25 | Vision and radio fusion based precise indoor localization |
US15/716,041 US10571546B2 (en) | 2015-09-25 | 2017-09-26 | Vision and radio fusion based precise indoor localization |
US16/776,233 US20200309894A1 (en) | 2015-09-25 | 2020-01-29 | Vision and radio fusion based precise indoor localization |
US17/107,372 US11467247B2 (en) | 2015-09-25 | 2020-11-30 | Vision and radio fusion based precise indoor localization |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/865,531 US9772395B2 (en) | 2015-09-25 | 2015-09-25 | Vision and radio fusion based precise indoor localization |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/716,041 Continuation US10571546B2 (en) | 2015-09-25 | 2017-09-26 | Vision and radio fusion based precise indoor localization |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170090007A1 true US20170090007A1 (en) | 2017-03-30 |
US9772395B2 US9772395B2 (en) | 2017-09-26 |
Family
ID=58387247
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/865,531 Active US9772395B2 (en) | 2015-09-25 | 2015-09-25 | Vision and radio fusion based precise indoor localization |
US15/716,041 Active US10571546B2 (en) | 2015-09-25 | 2017-09-26 | Vision and radio fusion based precise indoor localization |
US16/776,233 Abandoned US20200309894A1 (en) | 2015-09-25 | 2020-01-29 | Vision and radio fusion based precise indoor localization |
US17/107,372 Active US11467247B2 (en) | 2015-09-25 | 2020-11-30 | Vision and radio fusion based precise indoor localization |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/716,041 Active US10571546B2 (en) | 2015-09-25 | 2017-09-26 | Vision and radio fusion based precise indoor localization |
US16/776,233 Abandoned US20200309894A1 (en) | 2015-09-25 | 2020-01-29 | Vision and radio fusion based precise indoor localization |
US17/107,372 Active US11467247B2 (en) | 2015-09-25 | 2020-11-30 | Vision and radio fusion based precise indoor localization |
Country Status (3)
Country | Link |
---|---|
US (4) | US9772395B2 (en) |
TW (2) | TWI697689B (en) |
WO (1) | WO2017052951A1 (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170160377A1 (en) * | 2011-12-20 | 2017-06-08 | Honeywell International Inc. | Systems and methods of accuracy mapping in a location tracking system |
US9996752B2 (en) * | 2016-08-30 | 2018-06-12 | Canon Kabushiki Kaisha | Method, system and apparatus for processing an image |
US20180241781A1 (en) * | 2017-02-17 | 2018-08-23 | Microsoft Technology Licensing, Llc | Security rules including pattern matching for iot devices |
US20180321687A1 (en) * | 2017-05-05 | 2018-11-08 | Irobot Corporation | Methods, systems, and devices for mapping wireless communication signals for mobile robot guidance |
US20190132696A1 (en) * | 2017-10-27 | 2019-05-02 | Hewlett Packard Enterprise Development Lp | Match bluetooth low energy (ble) moving patterns |
US20190380106A1 (en) * | 2018-06-07 | 2019-12-12 | Osense Technology Co., Ltd. | Indoor positioning system and method based on geomagnetic signals in combination with computer vision |
US10528725B2 (en) | 2016-11-04 | 2020-01-07 | Microsoft Technology Licensing, Llc | IoT security service |
US10571546B2 (en) | 2015-09-25 | 2020-02-25 | Intel Corporation | Vision and radio fusion based precise indoor localization |
US10901420B2 (en) | 2016-11-04 | 2021-01-26 | Intel Corporation | Unmanned aerial vehicle-based systems and methods for agricultural landscape modeling |
CN112461245A (en) * | 2020-11-26 | 2021-03-09 | 浙江商汤科技开发有限公司 | Data processing method and device, electronic equipment and storage medium |
US10972456B2 (en) | 2016-11-04 | 2021-04-06 | Microsoft Technology Licensing, Llc | IoT device authentication |
US11076264B2 (en) * | 2017-09-22 | 2021-07-27 | Softbank Robotics Europe | Localization of a mobile device based on image and radio words |
CN113316080A (en) * | 2021-04-19 | 2021-08-27 | 北京工业大学 | Indoor positioning method based on Wi-Fi and image fusion fingerprint |
US11330450B2 (en) | 2018-09-28 | 2022-05-10 | Nokia Technologies Oy | Associating and storing data from radio network and spatiotemporal sensors |
US20220182784A1 (en) * | 2020-12-03 | 2022-06-09 | Mitsubishi Electric Automotive America, Inc. | Apparatus and method for providing location |
US20220247723A1 (en) * | 2017-05-31 | 2022-08-04 | Crypto4A Technologies Inc. | Integrated network security appliance, platform and system |
KR20220121112A (en) * | 2021-02-24 | 2022-08-31 | 주식회사 카카오모빌리티 | Method and system for estimating location |
US20230087497A1 (en) * | 2021-09-22 | 2023-03-23 | Fortinet, Inc. | Systems and Methods for Incorporating Passive Wireless Monitoring With Video Surveillance |
US11803666B2 (en) | 2017-05-31 | 2023-10-31 | Crypto4A Technologies Inc. | Hardware security module, and trusted hardware network interconnection device and resources |
CN117474991A (en) * | 2023-10-24 | 2024-01-30 | 纬创软件(武汉)有限公司 | SpectFormer-based POI positioning method and device |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10444347B2 (en) * | 2016-11-30 | 2019-10-15 | GM Global Technology Operations LLC | Accurate self localization using automotive radar synthetic aperture radar |
US11209517B2 (en) * | 2017-03-17 | 2021-12-28 | Nec Corporation | Mobile body detection device, mobile body detection method, and mobile body detection program |
GB2585800B (en) * | 2018-03-27 | 2022-05-18 | Teledyne FLIR LLC | People counting and tracking systems and methods |
DE102018110327A1 (en) * | 2018-04-30 | 2019-10-31 | Tridonic Gmbh & Co Kg | System and method for determining a location-related radio field characteristic in interiors |
CN109996205B (en) * | 2019-04-12 | 2021-12-07 | 成都工业学院 | Sensor data fusion method and device, electronic equipment and storage medium |
CN111757258B (en) * | 2020-07-06 | 2021-05-14 | 江南大学 | Self-adaptive positioning fingerprint database construction method under complex indoor signal environment |
CN111811503B (en) * | 2020-07-15 | 2022-02-18 | 桂林电子科技大学 | Unscented Kalman filtering fusion positioning method based on ultra wide band and two-dimensional code |
WO2022145738A1 (en) * | 2020-12-28 | 2022-07-07 | Samsung Electronics Co., Ltd. | Intelligent object tracing system utilizing 3d map reconstruction for virtual assistance |
IL280630A (en) * | 2021-02-03 | 2022-09-01 | Elbit Systems C4I And Cyber Ltd | System and method for remote multimodal sensing of a measurement volume |
TWI781655B (en) * | 2021-06-15 | 2022-10-21 | 恆準定位股份有限公司 | Ultra-wideband positioning system combined with graphics |
WO2023215649A1 (en) * | 2022-05-04 | 2023-11-09 | Qualcomm Incorporated | Positioning with radio and video-based channel state information (vcsi) fusion |
Family Cites Families (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5243370A (en) | 1988-08-30 | 1993-09-07 | Dan Slater | Camera stabilizer |
US5801948A (en) | 1996-08-22 | 1998-09-01 | Dickey-John Corporation | Universal control system with alarm history tracking for mobile material distribution apparatus |
US5884205A (en) | 1996-08-22 | 1999-03-16 | Dickey-John Corporation | Boom configuration monitoring and control system for mobile material distribution apparatus |
US5972357A (en) | 1996-12-19 | 1999-10-26 | Kikkoman Corporation | Healthy foods and cosmetics |
US5911362A (en) | 1997-02-26 | 1999-06-15 | Dickey-John Corporation | Control system for a mobile material distribution device |
US20070159476A1 (en) | 2003-09-15 | 2007-07-12 | Armin Grasnick | Method for creating a stereoscopic image master for imaging methods with three-dimensional depth rendition and device for displaying a stereoscopic image master |
WO2007018523A2 (en) | 2004-07-28 | 2007-02-15 | Sarnoff Corporation | Method and apparatus for stereo, multi-camera tracking and rf and video track fusion |
US7706979B1 (en) | 2005-05-03 | 2010-04-27 | Stanley Robert Herwitz | Closest points of approach determination for unmanned aerial vehicle ground-based sense-and-avoid display system |
US8054177B2 (en) * | 2007-12-04 | 2011-11-08 | Avaya Inc. | Systems and methods for facilitating a first response mission at an incident scene using patient monitoring |
US20110137547A1 (en) | 2009-12-03 | 2011-06-09 | Electronics And Telecommunications Research Institute | System and method for generating spatial information |
WO2012024516A2 (en) | 2010-08-18 | 2012-02-23 | Nearbuy Systems, Inc. | Target localization utilizing wireless and camera sensor fusion |
TWM420168U (en) * | 2011-05-20 | 2012-01-11 | Yu-Qi Qiu | Pet tracker |
US9392746B2 (en) | 2012-02-10 | 2016-07-19 | Deere & Company | Artificial intelligence for detecting and filling void areas of agricultural commodity containers |
US8972357B2 (en) | 2012-02-24 | 2015-03-03 | Placed, Inc. | System and method for data collection to validate location data |
CN202602813U (en) * | 2012-03-16 | 2012-12-12 | 北京联合大学 | Pet monitoring and searching device |
TWI444645B (en) * | 2012-09-17 | 2014-07-11 | Quanta Comp Inc | Positioning method and positioning device |
DE102012109481A1 (en) * | 2012-10-05 | 2014-04-10 | Faro Technologies, Inc. | Device for optically scanning and measuring an environment |
US9953618B2 (en) * | 2012-11-02 | 2018-04-24 | Qualcomm Incorporated | Using a plurality of sensors for mapping and localization |
US9075415B2 (en) * | 2013-03-11 | 2015-07-07 | Airphrame, Inc. | Unmanned aerial vehicle and methods for controlling same |
US9326103B2 (en) | 2013-07-12 | 2016-04-26 | Microsoft Technology Licensing, Llc | Indoor location-finding using magnetic field anomalies |
CN103442436B (en) * | 2013-08-27 | 2017-06-13 | 华为技术有限公司 | A kind of indoor positioning terminal, network, system and method |
US9824596B2 (en) | 2013-08-30 | 2017-11-21 | Insitu, Inc. | Unmanned vehicle searches |
US9405972B2 (en) * | 2013-09-27 | 2016-08-02 | Qualcomm Incorporated | Exterior hybrid photo mapping |
US10250821B2 (en) | 2013-11-27 | 2019-04-02 | Honeywell International Inc. | Generating a three-dimensional model of an industrial plant using an unmanned aerial vehicle |
US9612598B2 (en) | 2014-01-10 | 2017-04-04 | Pictometry International Corp. | Unmanned aircraft structure evaluation system and method |
CA2941250A1 (en) | 2014-03-19 | 2015-09-24 | Neurala, Inc. | Methods and apparatus for autonomous robotic control |
US9369982B2 (en) | 2014-03-28 | 2016-06-14 | Intel Corporation | Online adaptive fusion framework for mobile device indoor localization |
US9681320B2 (en) | 2014-04-22 | 2017-06-13 | Pc-Tel, Inc. | System, apparatus, and method for the measurement, collection, and analysis of radio signals utilizing unmanned aerial vehicles |
US9336584B2 (en) | 2014-06-30 | 2016-05-10 | Trimble Navigation Limited | Active imaging systems for plant growth monitoring |
KR20160002178A (en) * | 2014-06-30 | 2016-01-07 | 현대자동차주식회사 | Apparatus and method for self-localization of vehicle |
WO2016029054A1 (en) | 2014-08-22 | 2016-02-25 | The Climate Corporation | Methods for agronomic and agricultural monitoring using unmanned aerial systems |
US20160124435A1 (en) | 2014-10-29 | 2016-05-05 | Lyle Thompson | 3d scanning and imaging method utilizing a self-actuating compact unmanned aerial device |
EP3265885A4 (en) | 2015-03-03 | 2018-08-29 | Prenav Inc. | Scanning environments and tracking unmanned aerial vehicles |
US9745060B2 (en) | 2015-07-17 | 2017-08-29 | Topcon Positioning Systems, Inc. | Agricultural crop analysis drone |
US9740208B2 (en) | 2015-07-30 | 2017-08-22 | Deere & Company | UAV-based sensing for worksite operations |
US10356179B2 (en) | 2015-08-31 | 2019-07-16 | Atheer, Inc. | Method and apparatus for switching between sensors |
US9772395B2 (en) | 2015-09-25 | 2017-09-26 | Intel Corporation | Vision and radio fusion based precise indoor localization |
US10301019B1 (en) | 2015-12-17 | 2019-05-28 | Amazon Technologies, Inc. | Source location determination |
US10198954B2 (en) | 2015-12-30 | 2019-02-05 | Motorola Solutions, Inc. | Method and apparatus for positioning an unmanned robotic vehicle |
US9944390B2 (en) | 2016-02-29 | 2018-04-17 | Intel Corporation | Technologies for managing data center assets using unmanned aerial vehicles |
JP6533619B2 (en) | 2016-02-29 | 2019-06-19 | 株式会社日立製作所 | Sensor calibration system |
US10564270B2 (en) | 2016-04-13 | 2020-02-18 | Caterpillar Inc. | Methods and systems for calibrating sensors |
US10338274B2 (en) | 2016-06-02 | 2019-07-02 | The Climate Corporation | Computer radar based precipitation estimate errors based on precipitation gauge measurements |
US10189567B2 (en) | 2016-06-09 | 2019-01-29 | Skycatch, Inc. | Identifying camera position of a UAV in flight utilizing real time kinematic satellite navigation |
US10238028B2 (en) | 2016-08-11 | 2019-03-26 | The Climate Corporation | Automatically detecting outlier values in harvested data |
US10037632B2 (en) | 2016-09-01 | 2018-07-31 | Ford Global Technologies, Llc | Surrogate vehicle sensors |
US10165725B2 (en) | 2016-09-30 | 2019-01-01 | Deere & Company | Controlling ground engaging elements based on images |
US10928821B2 (en) | 2016-11-04 | 2021-02-23 | Intel Corporation | Unmanned aerial vehicle-based systems and methods for generating landscape models |
TWI635413B (en) | 2017-07-18 | 2018-09-11 | 義隆電子股份有限公司 | Fingerprint sensing integrated circuit |
- 2015
  - 2015-09-25 US US14/865,531 patent/US9772395B2/en active Active
- 2016
  - 2016-08-10 TW TW105125468A patent/TWI697689B/en active
  - 2016-08-10 TW TW109122104A patent/TWI770544B/en active
  - 2016-08-25 WO PCT/US2016/048540 patent/WO2017052951A1/en active Application Filing
- 2017
  - 2017-09-26 US US15/716,041 patent/US10571546B2/en active Active
- 2020
  - 2020-01-29 US US16/776,233 patent/US20200309894A1/en not_active Abandoned
  - 2020-11-30 US US17/107,372 patent/US11467247B2/en active Active
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170160377A1 (en) * | 2011-12-20 | 2017-06-08 | Honeywell International Inc. | Systems and methods of accuracy mapping in a location tracking system |
US10267893B2 (en) * | 2011-12-20 | 2019-04-23 | Honeywell International Inc. | Systems and methods of accuracy mapping in a location tracking system |
US11467247B2 (en) | 2015-09-25 | 2022-10-11 | Intel Corporation | Vision and radio fusion based precise indoor localization |
US10571546B2 (en) | 2015-09-25 | 2020-02-25 | Intel Corporation | Vision and radio fusion based precise indoor localization |
US9996752B2 (en) * | 2016-08-30 | 2018-06-12 | Canon Kabushiki Kaisha | Method, system and apparatus for processing an image |
US10528725B2 (en) | 2016-11-04 | 2020-01-07 | Microsoft Technology Licensing, Llc | IoT security service |
US10901420B2 (en) | 2016-11-04 | 2021-01-26 | Intel Corporation | Unmanned aerial vehicle-based systems and methods for agricultural landscape modeling |
US10928821B2 (en) | 2016-11-04 | 2021-02-23 | Intel Corporation | Unmanned aerial vehicle-based systems and methods for generating landscape models |
US10972456B2 (en) | 2016-11-04 | 2021-04-06 | Microsoft Technology Licensing, Llc | IoT device authentication |
US20180241781A1 (en) * | 2017-02-17 | 2018-08-23 | Microsoft Technology Licensing, Llc | Security rules including pattern matching for iot devices |
US20180321687A1 (en) * | 2017-05-05 | 2018-11-08 | Irobot Corporation | Methods, systems, and devices for mapping wireless communication signals for mobile robot guidance |
US10664502B2 (en) * | 2017-05-05 | 2020-05-26 | Irobot Corporation | Methods, systems, and devices for mapping wireless communication signals for mobile robot guidance |
US11803666B2 (en) | 2017-05-31 | 2023-10-31 | Crypto4A Technologies Inc. | Hardware security module, and trusted hardware network interconnection device and resources |
US11916872B2 (en) * | 2017-05-31 | 2024-02-27 | Crypto4A Technologies Inc. | Integrated network security appliance, platform and system |
US20220247723A1 (en) * | 2017-05-31 | 2022-08-04 | Crypto4A Technologies Inc. | Integrated network security appliance, platform and system |
US11076264B2 (en) * | 2017-09-22 | 2021-07-27 | Softbank Robotics Europe | Localization of a mobile device based on image and radio words |
US10516982B2 (en) * | 2017-10-27 | 2019-12-24 | Hewlett Packard Enterprise Development Lp | Match Bluetooth low energy (BLE) moving patterns |
US20190132696A1 (en) * | 2017-10-27 | 2019-05-02 | Hewlett Packard Enterprise Development Lp | Match bluetooth low energy (ble) moving patterns |
US11012841B2 (en) | 2017-10-27 | 2021-05-18 | Hewlett Packard Enterprise Development Lp | Match bluetooth low energy (BLE) moving patterns |
US10645668B2 (en) * | 2018-06-07 | 2020-05-05 | Osense Technology Co., Ltd. | Indoor positioning system and method based on geomagnetic signals in combination with computer vision |
US20190380106A1 (en) * | 2018-06-07 | 2019-12-12 | Osense Technology Co., Ltd. | Indoor positioning system and method based on geomagnetic signals in combination with computer vision |
US11330450B2 (en) | 2018-09-28 | 2022-05-10 | Nokia Technologies Oy | Associating and storing data from radio network and spatiotemporal sensors |
CN112461245A (en) * | 2020-11-26 | 2021-03-09 | 浙江商汤科技开发有限公司 | Data processing method and device, electronic equipment and storage medium |
US20220182784A1 (en) * | 2020-12-03 | 2022-06-09 | Mitsubishi Electric Automotive America, Inc. | Apparatus and method for providing location |
US11956693B2 (en) * | 2020-12-03 | 2024-04-09 | Mitsubishi Electric Corporation | Apparatus and method for providing location |
KR20220121112A (en) * | 2021-02-24 | 2022-08-31 | 주식회사 카카오모빌리티 | Method and system for estimating location |
KR102625507B1 (en) * | 2021-02-24 | 2024-01-19 | 주식회사 카카오모빌리티 | Method and system for estimating location |
CN113316080A (en) * | 2021-04-19 | 2021-08-27 | 北京工业大学 | Indoor positioning method based on Wi-Fi and image fusion fingerprint |
US20230087497A1 (en) * | 2021-09-22 | 2023-03-23 | Fortinet, Inc. | Systems and Methods for Incorporating Passive Wireless Monitoring With Video Surveillance |
US11823538B2 (en) * | 2021-09-22 | 2023-11-21 | Fortinet, Inc. | Systems and methods for incorporating passive wireless monitoring with video surveillance |
CN117474991A (en) * | 2023-10-24 | 2024-01-30 | 纬创软件(武汉)有限公司 | SpectFormer-based POI positioning method and device |
Also Published As
Publication number | Publication date |
---|---|
TW201712361A (en) | 2017-04-01 |
US20200309894A1 (en) | 2020-10-01 |
US20180196118A1 (en) | 2018-07-12 |
US10571546B2 (en) | 2020-02-25 |
TWI697689B (en) | 2020-07-01 |
WO2017052951A1 (en) | 2017-03-30 |
US11467247B2 (en) | 2022-10-11 |
US20210149012A1 (en) | 2021-05-20 |
TWI770544B (en) | 2022-07-11 |
TW202119054A (en) | 2021-05-16 |
US9772395B2 (en) | 2017-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11467247B2 (en) | Vision and radio fusion based precise indoor localization | |
US10587987B2 (en) | High accuracy tracking and interaction for low observable devices | |
US11593951B2 (en) | Multi-device object tracking and localization | |
US20150023562A1 (en) | Hybrid multi-camera based positioning | |
US10212553B1 (en) | Direction determination of a wireless tag | |
US10459066B2 (en) | Self-adaptive system and method for robust Wi-Fi indoor localization in large public site | |
US10375667B2 (en) | Enhancing indoor positioning using RF multilateration and optical sensing | |
US11528452B2 (en) | Indoor positioning system using beacons and video analytics | |
Carrasco et al. | Indoor location service in support of a smart manufacturing facility | |
Feng et al. | Three-dimensional robot localization using cameras in wireless multimedia sensor networks | |
US10997474B2 (en) | Apparatus and method for person detection, tracking, and identification utilizing wireless signals and images | |
US10433118B1 (en) | Navigation tracking in an always aware location environment with mobile localization nodes | |
Gao et al. | A hybrid localization and tracking system in camera sensor networks | |
US11694346B2 (en) | Object tracking in real-time applications | |
KR102612792B1 (en) | Electronic device and method for determining entry in region of interest thereof | |
Pham et al. | Fusion of wifi and visual signals for person tracking | |
US20190065984A1 (en) | Method and electronic device for detecting and recognizing autonomous gestures in a monitored location | |
US10299081B1 (en) | Gesture profiles and location correlation | |
KR20190116039A (en) | Localization method and system for augmented reality in mobile devices | |
US20220262164A1 (en) | Apparatus, method and system for person detection and identification utilizing wireless signals and images | |
US11810387B2 (en) | Location system and method | |
Rizzardo | OPTAR: Automatic Coordinate Frame Registration between OpenPTrack and Google ARCore using Ambient Visual Features | |
KR20230076044A (en) | Method and system for improving target detection performance through dynamic learning | |
KR20230090732A (en) | Operating method of user terminal for ultra wide band(uwb) network configuration and user terminal thereof | |
Zhang et al. | IMM filter based human tracking using a distributed wireless sensor network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTEL CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PARK, MI S.;YANG, LEI;YANG, SHAO-WEN;AND OTHERS;SIGNING DATES FROM 20150924 TO 20150925;REEL/FRAME:036657/0957 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN) |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |