US9270952B2 - Target localization utilizing wireless and camera sensor fusion - Google Patents
- Publication number: US9270952B2 (application Ser. No. 14/064,020)
- Authority
- US
- United States
- Prior art keywords
- track
- computer
- location
- time
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- H04N 7/181 — Closed-circuit television (CCTV) systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
- G01S 17/00 — Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S 3/7864 — T.V. type tracking systems
- G01S 5/0257 — Hybrid positioning
- G01S 5/02585 — Hybrid positioning by combining or switching between measurements derived from different systems, at least one of the measurements being a non-radio measurement
- H04W 4/02 — Services making use of location information
- H04W 4/025 — Services making use of location information using location based information parameters
- H04W 4/028
- H04W 4/029 — Location-based management or tracking services
- H04W 4/043
- H04W 84/12 — WLAN (Wireless Local Area Networks)
Definitions
- This invention relates to location-based services (LBS) and the determination of the location of a person or object carrying a wireless device.
- LBS: location-based services
- GPS: global positioning system
- RSS: received signal strength
- TDOA: time difference of arrival
- AOA: angle of arrival
- TOA: time of arrival
- Wi-Fi infrastructure is widely deployed and the ability to use this infrastructure for LBS is desirable.
- Many LBS services require position accuracy of less than one meter. However, this is difficult to achieve in Wi-Fi location systems due to multipath.
- An additional feature of Wi-Fi is that identity can be determined from media access control (MAC) addresses or other network identity protocols (e.g., internet protocol addresses).
- Video camera networks can be used to track people or objects and determine their location. Location accuracy of less than one meter can be achieved using video cameras. Although this position accuracy is important, determining a person or object's identity is required for most applications. Determining identity based on appearance in a video is difficult and prone to error.
- Another method to locate the user of a mobile device is to use wireless infrastructure such as Wi-Fi access points to triangulate their location based on radio waves emitted by the device.
- Three or more wireless receivers record the received signal strength or the angle of arrival of the radio frequency signals from the mobile device. These receivers could be Wi-Fi, Bluetooth, RFID, or other wireless devices.
- A location server processes the data from these receivers to triangulate the mobile device's location.
- The application queries the location server for the user's device location.
- The location server ties the radio signals to a specific person's mobile device using MAC addresses or other network identity protocols.
- GPS and Wi-Fi triangulation frequently cannot give the accuracy necessary for emerging applications, especially for indoor environments with RF multipath.
- Another method is to use video cameras to visually determine a person's location. This method has the advantage of being very accurate; however, it has a significant disadvantage.
- A video camera system cannot identify which person is making a location-based query with their mobile device.
- There is no way for a mobile application to associate itself with a vision-based location system; the vision-based system can only calculate where someone is, it cannot determine who they are.
- Implementations described herein solve the above problems and limitations by fusing together the Wi-Fi and video localization modules.
- As a target moves, its trajectory can be tracked by both the Wi-Fi localization module and the video localization module.
- An estimate of the target's location can be calculated by fusing the Wi-Fi and video measurements.
- This spatio-temporal correlation fuses together the Wi-Fi and video tracks to determine an identity and location of an object. The accuracy of the video localization and the identity from the Wi-Fi network provide an accurate location of the Wi-Fi identified object.
- FIG. 1 illustrates an example system for target localization utilizing wireless and camera sensor fusion.
- FIG. 2 illustrates a floor plan of a building having cameras for performing target localization.
- FIG. 3 illustrates an example monocular localization process.
- FIG. 4 is a block diagram of an example video localization subsystem.
- FIG. 5A is a graph that illustrates two objects crossing and a hypothetical set of blobs for those two objects.
- FIG. 5B illustrates an inference graph that is generated from FIG. 5A .
- FIG. 6A is a graph that illustrates blobs that are associated with objects and tracks that have been maintained through an occlusion.
- FIG. 6B illustrates an inference graph that is generated from FIG. 6A .
- FIG. 7 illustrates an example system for multiple view tracking using multiple cameras.
- FIG. 8 illustrates an example occupancy map.
- FIG. 9 illustrates cameras having both overlapping and non-overlapping regions.
- FIGS. 10A-10C illustrate camera images covering a scene with six people.
- FIGS. 10D-10F illustrate occupancy grids generated based on the images of FIGS. 10A-10C .
- FIG. 11 illustrates an example combined occupancy grid generated by combining the occupancy maps of FIGS. 10D-10F .
- FIG. 12A illustrates an example inference graph for a video track.
- FIG. 12B illustrates an example inference graph where the Wi-Fi system reports a probability associated with each map grid.
- FIG. 13 illustrates an access point (AP)/Camera combination system.
- FIG. 14 illustrates an example AP Camera combination device.
- FIG. 15 illustrates an example system for warehouse location-based picking.
- FIG. 16 illustrates an example retail system.
- FIG. 17 illustrates an example system for performing a mobile search.
- FIG. 18 illustrates an example system for providing mobile product information.
- FIG. 19 illustrates an example user interface of a mobile device for displaying product information.
- FIG. 20 illustrates an example map for providing in-store directions.
- FIG. 21 illustrates an example graphical interface for requesting in-store assistance.
- FIG. 22 illustrates an example graphical interface for providing product recommendations.
- FIG. 23 illustrates an example graphical interface for presenting mobile in-store advertising.
- FIG. 24 illustrates a short message service advertisement.
- FIG. 25 is a block diagram of an exemplary system architecture implementing the features and processes of FIGS. 1-24 .
- FIG. 1 illustrates an example system 100 for target localization utilizing wireless and camera sensor fusion.
- System 100 can include a video localization module 102 for processing video frames 104 and generating video tracks.
- Video localization module 102 can calculate the probability of occupancy for positions or locations that have video coverage.
- Wireless localization module 106 can estimate the positions of targets through calculations comprising wireless feature vectors 108 and occupancy probabilities received from video localization module 102 .
- Fusion module 110 estimates the positions of targets 112 by combining target probabilities from wireless localization module 106 and occupancy probabilities from the video localization module 102 .
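One simple way to picture the fusion step is as an element-wise combination of the two modules' per-grid-cell probabilities. The sketch below is an illustrative assumption, not the patent's specified method: it multiplies the wireless and video distributions and renormalizes, so the fused estimate peaks where both modules agree.

```python
import numpy as np

def fuse_grids(p_wireless, p_video, eps=1e-9):
    """Fuse per-grid-cell target probabilities from the wireless and video
    modules by element-wise product, then renormalize to a distribution.
    (A sketch; the actual fusion rule in the patent may differ.)"""
    fused = np.asarray(p_wireless) * np.asarray(p_video) + eps
    return fused / fused.sum()

# A coarse wireless estimate combined with a sharp video occupancy grid.
p_wifi = np.array([0.1, 0.6, 0.3])
p_cam = np.array([0.05, 0.9, 0.05])
print(np.argmax(fuse_grids(p_wifi, p_cam)))  # cell 1
```
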
- FIG. 2 illustrates a floor plan 200 of a building having seven wireless access points (AP), or sensors, 202 - 214 distributed at the edges of the building.
- The wireless access points can be configured to comply with standards specified by IEEE 802.11. The goal is to determine the location of the person carrying a Wi-Fi enabled device 220.
- Multipath is a phenomenon where an electromagnetic wave follows multiple paths to a receiver, not just the direct path.
- Any pattern matching technique can be used to perform target localization. For example, time-of-arrival (TOA), time-difference-of-arrival (TDOA), forward link received signal strength, received signal strength histograms, and multipath signatures can be employed to perform target localization.
- Each localization target can transmit over a Wi-Fi network to multiple access points that measure the received signal strength (RSS).
- Wi-Fi fingerprinting creates a radio map of a given area based on the RSS data from several access points, generating a probability distribution of RSS values for each (x, y) location. Live RSS values can then be compared to the fingerprint to find the closest match and generate a predicted (x, y) location.
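The closest-match step of fingerprinting can be sketched as a nearest-neighbour lookup over stored RSS vectors. The fingerprint database, AP count, and distance metric below are illustrative assumptions:

```python
import numpy as np

# Hypothetical fingerprint database: mean RSS (dBm) from 3 APs at each
# calibrated (x, y) grid location.
fingerprints = {
    (0.0, 0.0): np.array([-40.0, -70.0, -80.0]),
    (5.0, 0.0): np.array([-70.0, -45.0, -75.0]),
    (0.0, 5.0): np.array([-75.0, -72.0, -42.0]),
}

def match_fingerprint(live_rss):
    """Return the calibrated location whose stored RSS vector is closest
    (Euclidean distance) to the live measurement."""
    return min(fingerprints,
               key=lambda loc: np.linalg.norm(fingerprints[loc] - live_rss))

print(match_fingerprint(np.array([-42.0, -68.0, -79.0])))  # (0.0, 0.0)
```
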
- Some wireless location approaches assume that the target device is static rather than mobile. Traditionally, these systems have located essentially fixed items, such as laptops sitting on desks for long periods of time. Determining the location of a mobile target entails solving some unique problems. First, small mobile clients do not have omni-directional antennas. Second, the user's body affects the wireless signal's propagation: a human body between a transmitting client and a receiving AP can cause a 10 dB loss in signal strength. Finally, mobile devices vary their transmit power to save battery life. The state-of-the-art static localization calculates the mean over the k-nearest neighbors weighted by each location's likelihood. For example, the likelihood can be calculated using an equation of the form:
- p(z | x_i) ∝ exp(−Σ_n (z_n − h_n(x_i))² / σ²), (1)
- where h is the wireless calibration vector for location x_i and the nth AP, and z_n is the RSS measured at the nth AP.
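The weighted k-nearest-neighbour estimate described above can be sketched as follows. The calibration data, Gaussian weighting, σ, and grid are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Hypothetical calibration data: one RSS vector h_i per grid location x_i.
locations = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
calib_rss = np.array([[-40.0, -70.0], [-70.0, -45.0],
                      [-75.0, -72.0], [-60.0, -60.0]])

def knn_locate(z, k=2, sigma=10.0):
    """Weighted k-nearest-neighbour localization: weight each of the k
    closest calibration points by a Gaussian likelihood of its RSS distance,
    then return the weighted mean location."""
    d = np.linalg.norm(calib_rss - z, axis=1)
    nearest = np.argsort(d)[:k]
    w = np.exp(-d[nearest] ** 2 / sigma ** 2)
    w /= w.sum()
    return w @ locations[nearest]

est = knn_locate(np.array([-45.0, -68.0]))
print(est)  # near (0, 0), pulled slightly toward the next-closest point
```
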
- Implementations described herein utilize a different approach and calculate the most likely path over time. Implementations attempt to solve the body effects and antenna issues by storing multiple feature vectors per location. By allowing multiple calibration measurements per location, the effects of the human body over orientation can be captured. This also allows the system to capture non-linear and non-Gaussian RF characteristics.
- Storing multiple measurements per location allows statistical analysis of the stored calibration data to remove old measurements as the RF environment changes over time.
- A method is needed to find the probability of a target being located at each location across a grid.
- The probability p_w(x_t^i | p_w(x_{t−1}), z_t^w, α) of being at location x_i at time t can be calculated given the probability of being at all locations x at time t−1, an RSS measurement from N access points, and a transition probability α:
- p_w(x_t^i | p_w(x_{t−1}), z_t^w, α), (2)
- where i = 1:L and L is the number of grid locations; α_t is the transition probabilities at time t; z_t^w is the wireless RSS measurement vector; and h is the wireless calibration vector for location x_i and the nth AP.
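The recursion sketched by equation (2) has the shape of a grid-based Bayes filter: propagate the previous distribution through the transition probabilities, weight by the measurement likelihood, and renormalize. The transition matrix and likelihood values below are illustrative assumptions:

```python
import numpy as np

def grid_bayes_update(p_prev, transition, likelihood):
    """One recursive step over the location grid: apply the motion model
    (transition probabilities), weight by the wireless RSS likelihood,
    and renormalize to a probability distribution."""
    prior = transition @ p_prev      # p(x_t | x_{t-1}) applied to the grid
    posterior = prior * likelihood   # measurement update from the N APs
    return posterior / posterior.sum()

# 3-cell grid, mostly-stay transition model, one noisy reading favouring cell 2.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
p0 = np.array([1.0, 0.0, 0.0])   # start certain in cell 0
lik = np.array([0.1, 0.2, 0.7])
p1 = grid_bayes_update(p0, T, lik)
print(np.argmax(p1))  # still cell 0: the prior dominates one noisy reading
```
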
- Video surveillance cameras can be used to track individuals in order to increase the accuracy of a localization system.
- Implementations described herein do not track targets with the goal of providing a definitive target position over time; instead, they produce a probability of occupancy over all locations without target labels. Since a target's identity and absolute position are not being tracked, the problems described above can be resolved in a robust way while video tracks remain available when they are reliable.
- Computer vision technology can be utilized to localize an object from a video in 2D space relative to a ground plane.
- The first step is to find the pixel in an image where the object touches the ground plane.
- This pixel's coordinates are then transformed through a ground plane homography to coordinates on a floor plan.
- Each video camera can have its intrinsic and extrinsic parameters calibrated.
- The intrinsic parameters encompass the focal length, image format, principal point, and lens distortion of the camera.
- The extrinsic parameters denote the coordinate system transformation from camera coordinates to world coordinates.
- The world coordinates can be relative to a building floor plan (e.g., floor plan 200).
- The extrinsic parameters can be extracted automatically.
- For example, the system can determine where the walls of the building meet the ground plane in a captured image. The points in the image where the walls meet the ground plane can then be fit to a floor plan to extract the extrinsic parameters.
- Monocular localization uses one camera on a scene to detect moving people or objects and, relative to a floor plan, report their locations.
- FIG. 3 illustrates an example monocular localization process 300 .
- A sequence of foreground blobs 304 can be created from image frames 302 by separating the foreground from the background through foreground segmentation 320.
- Foreground segmentation 320 can be performed through background subtraction. Background subtraction involves calculating a reference image, subtracting each new frame from this image, and thresholding the result. The result of thresholding is a binary segmentation of the image, which highlights regions of non-stationary objects. These highlighted regions are called "blobs".
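The subtract-and-threshold step above can be sketched in a few lines; the frame sizes, pixel values, and threshold are illustrative assumptions:

```python
import numpy as np

def segment_foreground(frame, background, thresh=25):
    """Background subtraction as described above: subtract the reference
    image, threshold the absolute difference, and return a binary mask
    whose connected regions are the foreground 'blobs'."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh

bg = np.full((4, 4), 100, dtype=np.uint8)  # static reference image
frame = bg.copy()
frame[1:3, 1:3] = 180                      # a moving object appears
mask = segment_foreground(frame, bg)
print(mask.sum())  # 4 foreground pixels
```
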
- Blobs can be a fragment of an object of interest, or they may be two or more objects that overlap in the camera's field-of-view. Each of these blobs needs to be tracked 306 and labeled to determine which are associated with objects. This labeling process is complicated when blobs fragment into smaller blobs, when blobs merge, or when the object of interest enters or leaves the field-of-view. Blob appearance/disappearance and split/merge events 324 caused by noise, reflections, and shadows can be analyzed to infer trajectories 308. Split and merge techniques 324 can maintain tracking even when the background subtraction is suboptimal.
- Tracking people or objects is further complicated when two or more objects 310 overlap within the field-of-view, causing an occlusion.
- Trajectory analysis techniques 326 aim to maintain object tracking through these occlusions.
- Appearance based models used to identify a person or object can be CPU intensive and are far from robust. Implementations described herein solve the recognition problem associated with camera-based localization.
- The Wi-Fi MAC address can be used to identify the person carrying a Wi-Fi device or the object with a Wi-Fi tag.
- FIG. 4 is a block diagram of an example video localization subsystem 400 .
- The subsystem 400 can include a camera 402, background subtraction 404, binary morphology and labeling 406, blob tracking 408, and localization components 410 for performing video localization within the floor plan 412 of a building.
- Background subtraction component 404 can perform background subtraction on an image or images captured using camera 402 . Segmentation by background subtraction is a useful technique for tracking objects that move frequently against a relatively static background. Although the background changes relatively slowly, it is usually not entirely static. Illumination changes and slight camera movements necessitate updating the background model over time.
- One approach is to build a simple statistical model for each of the pixels in the image frame. This model can be used to segment the current frame into background and foreground regions. For example, any pixel that does not fit the background model (e.g., a value too far from the mean) is assigned to the foreground. Models based on color features often suffer from an inability to separate a true foreground object from the object's shadow or reflection. To overcome this problem, the gradient of the frame can be computed; gradient features are resilient against shadows and reflections.
- Binary morphology and labeling component 406 can identify blobs in the foreground region of an image. For example, binary morphology can be used to remove small regions of noise in the foreground image. Once the noise is removed from the foreground, the remaining blobs can be flood filled. During connected component labeling, each blob can be identified, the height and width determined, and the size and centroid location calculated.
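The labeling step above can be sketched with a minimal 4-connected flood fill that reports each blob's size and centroid. The mask, connectivity choice, and omission of the morphology step are illustrative assumptions:

```python
import numpy as np
from collections import deque

def label_blobs(mask):
    """4-connected component labelling of a binary foreground mask, returning
    each blob's pixel count and centroid (row, col)."""
    labels = np.zeros(mask.shape, dtype=int)
    blobs, next_label = [], 1
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        queue, pixels = deque([seed]), []
        labels[seed] = next_label
        while queue:
            r, c = queue.popleft()
            pixels.append((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
        pix = np.array(pixels)
        blobs.append({"size": len(pixels), "centroid": pix.mean(axis=0)})
        next_label += 1
    return blobs

mask = np.zeros((5, 5), dtype=bool)
mask[0:2, 0:2] = True   # blob 1: 4 pixels
mask[4, 4] = True       # blob 2: 1 pixel
print([b["size"] for b in label_blobs(mask)])  # [4, 1]
```
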
- Blob tracking component 408 can track blobs as they move in the foreground of an image. Ideally, background subtraction would produce one connected silhouette that completely covers the pixels belonging to the foreground object. In practice, background subtraction may not work perfectly for all pixels. For example, moving pixels may go undetected due to partial occlusion or portions of the foreground whose appearance is similar to the background. A foreground silhouette can be fragmented, or multiple silhouettes can merge to temporarily create a single silhouette. As a result, blob tracks can be fragmented into components or merged with other tracks. The goal of blob tracking is to merge these fragmented track segments and create distinct, complete tracks for each object. As described below, implementations described herein solve a problem that trajectory-based blob tracking cannot: trajectory-based blob tracking loses object identity when merged tracks change their trajectory during a merge event.
- The primary entity is the "blob," which is defined as a fragment of an "object" or a group of "objects." The exact nature of the objects is irrelevant; they can be persons, forklifts, etc. It is important to note that a blob acts as a container that can hold one or more objects. It is also important to understand that the things being detected, via image processing, and tracked, whether in the absence or presence of occlusions, are blobs, not objects.
- FIG. 5A is a graph 500 that illustrates two objects crossing and a hypothetical set of blobs for those two objects over eight frames.
- One track starts from the upper left origin and moves to the bottom right in image space coordinates.
- The second track starts in the bottom left and moves to the upper right. This second object splits into two fragments in the second frame. Tracking the objects is further complicated when they cross, occluding each other and forming a group for three frames.
- Inference graph 550 of FIG. 5B can be generated to stitch together tracks that belong to the same object.
- Foreground pixel clusters (i.e., blobs) are detected in each frame, and each blob's tracker can be updated with each new frame.
- FIG. 5B illustrates an inference graph 550 that is generated from the target tracks in graph 500 .
- The first step is to record the merge and split events.
- The inference graph can be generated based on spatial connectedness. For example, for a set of blobs, B_i, a graph vertex is created for each blob. For split events, directed edges are created to the two child vertices. During a merge event, a parent vertex is added and directed edges are added to the merging vertices.
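The graph construction above can be sketched as a dictionary of directed edges. The event tuple format and blob names are illustrative assumptions:

```python
def build_inference_graph(events):
    """Build a split/merge inference graph: each vertex is a blob id; a split
    adds directed edges parent -> children, and a merge adds a new parent
    (group) vertex with directed edges to each merging blob."""
    edges = {}
    for _kind, parent, children in events:
        edges.setdefault(parent, [])
        for child in children:
            edges.setdefault(child, [])
            edges[parent].append(child)
    return edges

# Track B splits into fragments B1/B2; then B1 and A merge into group G.
events = [("split", "B", ["B1", "B2"]),
          ("merge", "G", ["A", "B1"])]
graph = build_inference_graph(events)
print(sorted(graph["G"]))  # ['A', 'B1']
```
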
- FIG. 6A is a graph 600 that illustrates blobs that are associated with objects and tracks that have been maintained through an occlusion.
- The algorithm takes the inference graph 550 (FIG. 5B) as input, along with the Kalman tracking states for each vertex.
- The nodes are labeled as either fragment, object, or group using a coherent motion constraint.
- The coherent motion constraint implies that any two target blobs from the same object have an average velocity difference vector that has zero-mean Gaussian statistics with a small variance.
- The velocity difference between two target blobs from different objects will be a Gaussian with zero mean and a large variance.
- A depth-first search can be used to traverse the graph, in a bottom-up fashion, and stitch together child vertices with parents until the coherent motion constraint is violated.
- The result is the inference graph 650 of FIG. 6B with fragment, object, and group labels.
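The coherent motion test can be sketched as a variance check on the per-frame velocity differences of two blobs. The velocity samples and variance threshold are illustrative assumptions:

```python
import numpy as np

def same_object(vel_a, vel_b, var_threshold=0.5):
    """Coherent-motion test: blobs from one object should have a velocity
    difference with near-zero mean and small variance; a large variance
    suggests the blobs belong to different objects."""
    diff = np.asarray(vel_a) - np.asarray(vel_b)
    return diff.var(axis=0).mean() < var_threshold

# Per-frame velocities: two fragments of one object vs. an unrelated object.
frag1 = np.array([[1.0, 0.1], [1.1, 0.0], [0.9, 0.1]])
frag2 = np.array([[1.0, 0.0], [1.0, 0.1], [1.0, 0.0]])
other = np.array([[-1.0, 2.0], [1.5, -2.0], [-0.5, 1.0]])
print(same_object(frag1, frag2), same_object(frag1, other))  # True False
```
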
- The blob fragments are then associated with objects.
- The tracks before and after a merging occlusion event need to be stitched together.
- The Kalman tracker can be used to predict the position of the object after the merging occlusion event.
- The tracks emerging from the occlusion are compared to the predicted location of the object, and the nearest track is associated with the object.
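The predict-and-associate step can be sketched with a constant-velocity prediction standing in for the Kalman tracker's predict step. The positions, velocities, and frame count are illustrative assumptions:

```python
import numpy as np

def predict_position(pos, vel, frames_occluded):
    """Constant-velocity prediction of where an occluded object should
    reappear (a simple stand-in for a Kalman predict step)."""
    return np.asarray(pos) + np.asarray(vel) * frames_occluded

def associate(predicted, emerging_tracks):
    """Associate the predicted position with the nearest emerging track."""
    dists = [np.linalg.norm(predicted - np.asarray(t)) for t in emerging_tracks]
    return int(np.argmin(dists))

# Object entered the occlusion at (10, 10) moving (+2, 0); three frames
# later two tracks emerge.  The nearer one inherits the object's identity.
pred = predict_position((10.0, 10.0), (2.0, 0.0), 3)
print(associate(pred, [(17.0, 10.5), (10.0, 20.0)]))  # track 0
```
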
- A difficulty in tracking through occlusions is reestablishing object identities following a merge/split.
- The identity of objects that change their trajectories during an occlusion can be lost.
- Appearance-based occlusion resolution methods are processor intensive and not robust.
- Video localization component 410 can determine the real-world location of a target object.
- The localization process includes two steps. First, the piercing point of each tracked object is found.
- The piercing point of an object is the pixel where the object meets the ground plane.
- For example, the piercing point of a human target is the center point of the target's shoes.
- Second, the piercing point's pixel coordinates are projected through a ground plane homography transformation. The result is the world coordinates of the target object, typically relative to a floor plan.
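The homography projection of a piercing point can be sketched as a 3×3 matrix multiply followed by the homogeneous divide. The matrix below is an illustrative assumption (a pure pixel-to-metre scale), not a calibrated camera's homography:

```python
import numpy as np

def project_to_floor(pixel, H):
    """Project a piercing-point pixel through a 3x3 ground-plane homography H
    to floor-plan coordinates (homogeneous divide included)."""
    u, v = pixel
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

# Illustrative homography: pure scale from pixels to metres (0.01 m/px).
H = np.array([[0.01, 0.0, 0.0],
              [0.0, 0.01, 0.0],
              [0.0, 0.0, 1.0]])
print(project_to_floor((320, 480), H))  # [3.2 4.8]
```
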
- FIG. 7 illustrates an example system 700 for multiple view tracking using multiple cameras.
- Multiple cameras can provide three-dimensional information that makes localization more robust.
- In FIG. 7, cylinder 702 can be viewed by cameras 704 and 706.
- Cylinder 702 can represent a person or object that can be tracked using the multiple camera system, for example.
- Image planes 708 and 710 for cameras 704 and 706 are transformed through a ground plane homography projection and fused together on the ground plane 712.
- The slices of the cylinder 714 and 716 on the ground plane 712 overlap in the ground plane homography.
- The other portions of the cylinder are parallaxed during the transform, as shown by the projections 718-724 from cylinder 702.
- Fusing can also be performed on planes parallel to the ground plane, as shown by plane 726.
- Multiple view tracking is not processor cycle efficient, as the entire image must be transformed through a homography rather than just the targets' piercing points.
- FIG. 8 illustrates an example occupancy map 800 .
- the previous section detailed the video localization technology and steps to use video tracking to improve localization. Due to occlusions, it is difficult to maintain consistent track labels even with state-of-the-art technologies.
- the probability of occupancy can be modeled over a grid to improve localization.
- An occupancy map can store the probability of each grid cell being either occupied or empty.
- the probability of occupancy p(x_t^i | I_t^C) can be estimated over locations x_t^i given images I_t^C from M cameras at time t.
- background subtraction, connected components, and blob tracking can be computed in order to find the target blobs' piercing points.
- a piercing point is the pixel where the blob touches the ground plane. By projecting the piercing point pixel through a ground plane homography, the target's location can be calculated.
- p(x_t^i | I_t^C) can be estimated as p_v(x_t^i | B_t), where C = {c_1, c_2, . . . , c_M} for M cameras and B_t = {b_t^1, b_t^2, . . . , b_t^M}, where b_t^m is the vector of blobs from camera image m.
- occlusions that occur in crowded spaces can be modeled. For example, an occlusion occurs when one target crosses in front of another or moves behind any structure that blocks the camera's view of the target. This includes when one person closer to a camera blocks the camera's view of another person.
- FIG. 8 illustrates a situation where person B cannot be distinguished from person A using a monocular camera blob tracker.
- the tracker cannot determine whether one or more people are occluded behind person A.
- This situation can be modeled probabilistically by a Gaussian distribution curve centered at the piercing point of the person closest to the camera and a uniform probability extending from the Gaussian distribution curve to the point where the blob's top pixel pierces the ground plane.
- the instantaneous probability of occupancy at location x i is modeled as a Gaussian distribution centered at the blob's lower piercing point.
- the variance of the Gaussian distribution is proportional to the distance between x i and the camera location.
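The instantaneous occupancy model described above can be sketched along one grid axis. The proportionality constant k and the grid spacing are hypothetical tuning values, not taken from the source.

```python
import math

def instantaneous_occupancy(grid_xs, piercing_x, camera_x, k=0.05):
    """Instantaneous occupancy probability over a row of grid cells.

    Modeled as a Gaussian centered at the blob's lower piercing point,
    with standard deviation proportional (hypothetical factor k) to
    the distance between the piercing point and the camera, so far-away
    detections are more uncertain. Returns normalized cell probabilities.
    """
    sigma = max(k * abs(piercing_x - camera_x), 1e-6)
    weights = [math.exp(-((x - piercing_x) ** 2) / (2 * sigma ** 2))
               for x in grid_xs]
    total = sum(weights)
    return [w / total for w in weights]

grid = [i * 0.5 for i in range(20)]  # grid cells every 0.5 m
probs = instantaneous_occupancy(grid, piercing_x=4.0, camera_x=0.0)
```

The peak of the returned distribution sits at the piercing point, and it widens as the target moves away from the camera.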
- An example demonstrating the creation of an occupancy grid is illustrated in FIGS. 10A-10F .
- the camera images ( FIGS. 10A-10C ) show three cameras covering a scene with six people. The cameras have both overlapping and non-overlapping regions, as illustrated by FIG. 9 .
- the camera images of FIGS. 10A-10C can correspond to the images captured by cameras 902 - 906 of FIG. 9 .
- FIGS. 10D-10F illustrate occupancy grids generated based on the images of FIGS. 10A-10C .
- multiple blobs across multiple cameras can be fused together into a combined occupancy grid.
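As a minimal sketch of cell-wise fusion, one can combine independent per-camera occupancy estimates with a noisy-OR rule; this particular combination rule is an illustrative assumption, not the patent's own fusion equation.

```python
def fuse_occupancy_maps(maps):
    """Fuse per-camera occupancy grids cell by cell.

    Each map is a list of occupancy probabilities over the same grid.
    Under a noisy-OR rule (an assumed combination, treating cameras as
    independent), a cell is empty only if every camera says it is empty.
    """
    fused = []
    for cell_probs in zip(*maps):
        p_empty = 1.0
        for p in cell_probs:
            p_empty *= (1.0 - p)  # probability every camera misses it
        fused.append(1.0 - p_empty)
    return fused

cam1 = [0.1, 0.8, 0.0]  # occupancy map from camera 1
cam2 = [0.2, 0.5, 0.0]  # occupancy map from camera 2
fused = fuse_occupancy_maps([cam1, cam2])
```

A cell seen as likely occupied by either camera stays likely occupied in the fused grid, while cells both cameras consider empty stay empty.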
- FIG. 11 illustrates an example combined occupancy grid 1100 generated by combining the occupancy maps of FIGS. 10D-10F .
- Bayesian filtering can be used to compute a posterior occupancy probability conditioned on the instantaneous occupancy probability measurement and velocity measured for each grid location.
- a prediction step can be used to compute a predicted prior distribution for the Bayesian filter. For example, the state of the system is given by the occupancy probability and velocity for each grid cell. The estimate of the posterior occupancy grid will include the velocity estimation in the prediction step.
- the set of velocities that brings a set of corresponding grid cells in the previous time step to the current grid are considered.
- the resulting distribution on the velocity of the current grid cell is updated by conditioning on the incoming velocities with respect to the current grid cell and on the measurements from the cameras.
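The prediction and update steps above can be sketched as a minimal one-dimensional grid filter. Representing the velocity as an integer cell offset, and the example numbers themselves, are simplifying assumptions for illustration.

```python
def predict(prior, velocity):
    """Prediction step: advance the occupancy distribution by the
    estimated velocity (an integer cell offset here), moving each
    cell's probability to the cell it flows into."""
    n = len(prior)
    predicted = [0.0] * n
    for i, p in enumerate(prior):
        j = i + velocity
        if 0 <= j < n:
            predicted[j] += p
    return predicted

def update(predicted, measurement):
    """Update step: condition the predicted prior on the instantaneous
    occupancy measurement from the cameras, then renormalize."""
    posterior = [p * m for p, m in zip(predicted, measurement)]
    total = sum(posterior) or 1.0
    return [p / total for p in posterior]

prior = [0.0, 1.0, 0.0, 0.0]        # target believed to be in cell 1
measurement = [0.1, 0.1, 0.7, 0.1]  # cameras see it near cell 2
posterior = update(predict(prior, velocity=1), measurement)
```

The prediction carries the belief forward to cell 2, where the camera measurement then concentrates the posterior.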
- the probability of occupancy models can be improved by measuring the height of the blobs. For example, a ground plane homography as well as a homography at head level can be performed. Choosing the head level homography height as the average human height, 5′9″, a blob can be declared short, average, or tall. For example, a failure in the background subtraction might result in a person's pants not being detected, resulting in a short blob. A tall example results when two people aligned coaxially with the camera form one blob in the camera's field-of-view. The height of each blob is one piece of information that can be used to improve the probability occupancy models, as described further below.
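The two-homography height check can be sketched as follows. The comparison rule, the tolerance, and the assumption that the camera sits at the floor-plan origin and is mounted above head level are all illustrative choices, not taken from the source.

```python
def classify_blob(ground_pt, head_pt, tolerance=0.5):
    """Classify a blob as short, average, or tall.

    ground_pt: floor-plan location of the blob's bottom pixel projected
               through the ground plane homography.
    head_pt:   floor-plan location of the blob's top pixel projected
               through the homography at average head height (5'9").
    If the two projections roughly coincide (within a hypothetical
    tolerance in floor-plan units), the blob is about average height.
    """
    dx = head_pt[0] - ground_pt[0]
    dy = head_pt[1] - ground_pt[1]
    if (dx * dx + dy * dy) ** 0.5 <= tolerance:
        return "average"
    # With a ceiling-mounted camera assumed at the origin, a top pixel
    # below head level projects closer to the camera than the ground
    # point does, indicating a short blob; beyond it indicates tall.
    ground_r = (ground_pt[0] ** 2 + ground_pt[1] ** 2) ** 0.5
    head_r = (head_pt[0] ** 2 + head_pt[1] ** 2) ** 0.5
    return "short" if head_r < ground_r else "tall"

label = classify_blob((4.0, 0.0), (4.1, 0.0))  # projections coincide
```

A short label could then flag a background-subtraction failure, and a tall label a possible merged blob of two coaxially aligned people.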
- the architecture of the space seen by the camera also can be used to improve the probability occupancy models.
- a wall or shelf can constrain the occupancy probability to one side of the wall or shelf.
- observing a person within an aisle can constrain them to that aisle.
- the probability model can be selected based on the type of space and the blob's relative height. For example, the probability model can be selected based on whether the blob is tall or short.
- the probability model can be selected based on whether the blob is in open space, partially obscured behind a wall, or between walls.
- the probability model can be selected based on the heights of different objects proximate to the detected blobs.
- computer vision detection methods can be used to help resolve occlusions.
- One method is histogram of gradient feature extraction used in conjunction with a classifier such as a support vector machine. The speed of these methods can be improved by performing detection only over the blobs from background subtraction rather than the entire frame. Detectors improve the occupancy map by replacing uniform probabilities over the region of an occlusion with Gaussians at specific locations.
- Creating an occupancy map from a depth camera such as a stereo camera is simpler than using a monocular camera.
- Monocular cameras suffer from occlusion ambiguity.
- the depth camera may resolve this ambiguity.
- For each pixel, a depth camera reports the distance of that pixel from the camera.
- the occupancy map can be created from depth camera measurements, with each detection modeled as a 2D Gaussian.
- probability occupancy models have advantages including providing a probabilistic approach to occlusion handling, easily combining multiple cameras, and computational efficiency.
- when a target is in the field-of-view of two monocular cameras, those two camera views can be used to compute the 3D coordinate of the target.
- multi-view geometry uses two or more cameras to compute a distance to the target using epipolar geometry.
- the vision probability occupancy map and blobs' velocity are inputs to the wireless localization module. These inputs improve the precision and accuracy of the Wi-Fi localization.
- the vision probability occupancy map as well as the velocity of blobs set the Wi-Fi transition probabilities for each grid location. This has two effects. First, the Wi-Fi localization calculations are more accurate as the calculations are limited to locations with non-zero probability specified by the vision localization module. Second, the blobs' velocity is a local motion model for the Wi-Fi localization calculations.
- the trajectories of Wi-Fi devices and the trajectories from the video camera network can be spatio-temporally correlated.
- a trajectory is the path a moving object takes through space over time.
- each of these trajectories is a track.
- each Wi-Fi track can be correlated with each video track in order to determine how similar each pair of trajectories is. This process relies on the fact that one object's location, measured two different ways, even when the measurements have different observation error statistics, should move coherently through time.
- the first step is to define a similarity measure between two tracks.
- a similarity measure can include L p norms, time warping, longest common subsequence (LCSS), or deformable Markov model templates, among others.
- the L 2 norm, i.e., the L p norm with p equal to two, can be used as a similarity measure.
- the Euclidean norm will find the similarity between the track v and the track w over a time series of data. For a real-time system it may be necessary to have an iterative algorithm that will update the similarity between tracks at every time sample without needing to store the entire track history, as described below and framed as a Bayesian inference graph.
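The iterative update described above can be sketched as a running accumulator that stores only a sum of squared differences, never the full track histories.

```python
class TrackSimilarity:
    """Iteratively accumulate the squared L2 distance between two
    tracks, one time sample at a time, without storing track history."""

    def __init__(self):
        self.sum_sq = 0.0
        self.n = 0

    def update(self, v_t, w_t):
        """Add one time sample; v_t and w_t are (x, y) positions of
        the two tracks at the same time instant."""
        dx = v_t[0] - w_t[0]
        dy = v_t[1] - w_t[1]
        self.sum_sq += dx * dx + dy * dy
        self.n += 1
        return self.distance()

    def distance(self):
        """Current L2 distance between the tracks; smaller values mean
        the two tracks move more coherently through time."""
        return self.sum_sq ** 0.5

sim = TrackSimilarity()
sim.update((0.0, 0.0), (3.0, 4.0))      # first sample, 3-4-5 offset
d = sim.update((1.0, 0.0), (1.0, 0.0))  # identical sample adds nothing
```

One such accumulator per Wi-Fi/video track pair gives a constant-memory, per-sample similarity suitable for a real-time system.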
- FIG. 12A illustrates an example inference graph 1200 for a video track.
- the leaves of the graph are the probability that a Wi-Fi device is associated with a particular camera track given a new observation.
- P(association | d1) is the probability that the Wi-Fi device with error vector d1 is associated with this camera track, given the new error vector d1.
- FIG. 12B illustrates an example inference graph 1250 where the Wi-Fi system reports a probability associated with each map grid.
- P(association | d1, g1) is the probability that the Wi-Fi device with error vector d1 is associated with this camera track, given the new error vector d1 at grid map location g1.
- This framework can allow new measurements to iteratively improve the association probability estimates.
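One hedged sketch of such an iterative update assumes a Gaussian likelihood on the error-vector magnitude when the device and track are associated, and a flat background likelihood otherwise; the sigma and background constant are illustrative choices, not values from the source.

```python
import math

def association_posterior(prior, error_dist, sigma=2.0):
    """One Bayesian update of the probability that a Wi-Fi device is
    associated with a camera track.

    prior:      current association probability.
    error_dist: magnitude of the new error vector between the Wi-Fi
                location and the camera track location.
    sigma:      hypothetical spread of the error under association.
    """
    lik_assoc = math.exp(-(error_dist ** 2) / (2 * sigma ** 2))
    lik_not = 0.05  # illustrative flat likelihood when not associated
    num = lik_assoc * prior
    return num / (num + lik_not * (1.0 - prior))

p = 0.5  # start undecided
for err in [0.5, 0.3, 0.8]:  # a run of small error vectors
    p = association_posterior(p, err)
```

A sequence of small error vectors drives the association probability toward one, while large errors would drive it toward zero, which is how new measurements iteratively improve the estimate.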
- the Wi-Fi likelihood and vision probability of occupancy are fused together using Bayesian inference to advance the target location through time.
- the probability of the target's position being at x i given images from each camera, the target's wireless measurement, and the previous grid probabilities equals the product of the vision grid probabilities, Wi-Fi grid probabilities, and predicted prior grid probabilities.
- the target's position is the state estimate of the system and may be computed in many ways: as the expectation over the grid, the maximum probability across the grid, or the average of k-largest probabilities. In one embodiment the state estimate's velocity is used to compute the predicted prior.
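A minimal sketch of the fusion and the three state-estimate readouts described above, assuming one-dimensional grids and illustrative probabilities:

```python
def fuse_grids(vision, wifi, prior):
    """Fuse vision, Wi-Fi, and predicted prior grid probabilities by
    cell-wise product, then renormalize (Bayesian fusion sketch)."""
    post = [v * w * p for v, w, p in zip(vision, wifi, prior)]
    total = sum(post) or 1.0
    return [x / total for x in post]

def state_estimate(grid, cells, k=2):
    """Three ways to read a position off the fused grid: the
    expectation over the grid, the maximum-probability cell, and the
    average position of the k-largest-probability cells."""
    expectation = sum(p * x for p, x in zip(grid, cells))
    argmax = cells[max(range(len(grid)), key=grid.__getitem__)]
    top_k = sorted(range(len(grid)), key=grid.__getitem__, reverse=True)[:k]
    k_avg = sum(cells[i] for i in top_k) / k
    return expectation, argmax, k_avg

cells = [0.0, 1.0, 2.0, 3.0]  # cell center positions
post = fuse_grids([0.1, 0.6, 0.3, 0.0],       # vision occupancy
                  [0.2, 0.5, 0.3, 0.0],       # Wi-Fi likelihood
                  [0.25, 0.25, 0.25, 0.25])   # flat predicted prior
est = state_estimate(post, cells)  # (expectation, argmax, k-avg)
```

The velocity of whichever readout is used as the state estimate can then feed the predicted prior for the next time step.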
- Implementations described herein utilize a grid filter for localization.
- other filters e.g., histogram, particle filters, etc.
- FIG. 13 illustrates an AP/Camera combination system 1300 .
- An RF access point that has an integrated camera has many advantages. First, provisioning APs and cameras can be expensive. One way to reduce the installation cost is to provision one piece of hardware that includes both the Wi-Fi radio and the video camera. Second, TDOA systems require the location of all APs. The installation process often does not precisely locate the APs. This can result in geometric dilution of precision in a TDOA system.
- device 1300 can include wireless access point subsystems.
- the wireless access point subsystems can include an antenna 1302 , RF radio 1306 , wireless chipset 1308 , and microprocessor 1310 .
- Device 1300 can include video subsystems.
- the video subsystems can include lens optics 1316 , image sensor 1314 , and a video encoder 1312 .
- FIG. 14 illustrates an example AP Camera combination device 1400 .
- the device 1400 can have one or more internal or external antennas for the wireless access point.
- the video subsystem can have a fixed lens, gimbaled lens, or a PTZ (pan-tilt-zoom) steerable lens.
- Device 1400 can have an integrated dome or an exposed camera.
- an LED (light emitting diode) can be attached to the AP/camera combination device 1400 .
- the LED can be modulated with an identifying sequence that can be used to identify the device to which the LED is attached.
- Other cameras can detect the LEDs and can report the location of the APs associated with the detected LEDs.
- the video camera identifies architectural features to self-calibrate the extrinsic parameters of the camera and calculate its location. First, it identifies architectural features such as walls and corners. Next, it finds the corresponding ground plane intersections of these features and extracts lines. These lines are matched to a floor plan CAD drawing. Finally, the system self-calibrates the extrinsic parameters of the camera using corresponding features between the camera image and the floor plan map. From the extrinsic parameters the camera position is calculated.
- Implementations described herein can be used to provide real-time indoor location tracking solutions for highly mobile client populations, leveraging Wi-Fi infrastructure and video analytics and taking advantage of enterprise trends towards multi-mode client devices, pervasive WLAN coverage, and video camera deployments. Implementations can provide rich location-aware applications that enable a whole new array of revenue generating and cost reducing solutions.
- a challenge with indoor location is providing a system that is accurate, real-time, and financially feasible.
- Some solutions in the marketplace attempt to solve indoor location, including RFID, Bluetooth, ZigBee, etc., but most of these are only viable in niche application scenarios.
- Enterprise organizations need solutions that traverse all of their disparate locations, work for all classes of devices, and are enabled throughout their network. For many of these other technologies to support this requirement, they need to deploy new infrastructure which brings the cost of the location solution to an unfeasible level. For this reason, Wi-Fi is looked upon as the viable solution for indoor location tracking.
- Most enterprises have already deployed, or are in the process of deploying, pervasive Wi-Fi coverage, often using network access as the main business driver for the infrastructure cost.
- the problem with Wi-Fi is that the technology was developed with a focus on efficiently passing traffic, not on conserving power or providing location accuracy. Add to the mix a highly mobile user population, and most Wi-Fi location systems do not provide accurate location determinations.
- Implementations described herein can overcome these challenges by developing technology that enables accurate tracking using Wi-Fi by incorporating video analytics into the tracking process. Leveraging traditional IP cameras, that most organizations are already deploying for security and asset protection, implementations can merge information from Wi-Fi associations and video streams in order to present a highly accurate and timely location result.
- Retailers need flexibility in their picking process, as oftentimes there are miscues (items out of stock, too many picked, not enough picked, etc.) and the retailer needs the ability to "pick to order" to deliver items when there is a backup or a rush. Implementations described herein can be used to track the location of the pickers throughout the distribution center, enabling the retailer to develop a truly waveless picking process where each next pick is dynamically generated based on the priority of the item and the real-time location of the picker.
- FIG. 15 illustrates an example system 1500 for warehouse location-based picking.
- warehouse management system 1502 can receive customer orders and provide the orders to warehouse control system 1506 .
- Location system 1504 can determine picker locations within the warehouse and provide the picker locations to warehouse control system 1506 .
- Warehouse control system 1506 can dynamically generate a picklist that can result in the efficient collection of the items in the customer order based on the current locations of pickers. Once the picklist is generated, the picklist can be transmitted from warehouse control system 1506 to a warehouse picker mobile device 1508 so that the picker can collect the ordered items.
- the list of "next picks," traditional in the wave-based picking process, can be dynamically updated by any change in the business, such as the need to get an item immediately.
- the ability of the retailer to marry the efficiency of waveless picking and the immediacy and flexibility of pick to order is made possible by the real-time location capabilities described herein.
- The Retail System
- a retailer can implement a secure “guest” Wi-Fi network utilizing the same wireless LAN infrastructure that the retailer has already deployed for back office functions like inventory tracking, POS, and secure corporate network access.
- When the retailer's customers enter a store, they may notice signage offering "free Wi-Fi" in the store. Since most big-box retailers have poor cellular coverage within their buildings, having high speed internet access is a valuable offering for those smart phone users who need to be connected.
- the retail system is then able to track the location of the customer based on a combination of Wi-Fi location and integration with the in-store video surveillance system, as described with respect to various implementations described herein.
- the Wi-Fi network can give an approximate location of the customer (usually within 10 m), and then the system can overlay real-time video data to refine that estimate, achieving accuracy down to the aisle the customer is standing in. This precise location data within the store can enable location-based services on many mobile applications.
- an appliance can be deployed in each retail location that will act as the captive portal and integrate with the in-store Wi-Fi network and video surveillance systems.
- a mobile applications platform can be deployed in the “cloud” which can integrate with the retailer's back-end systems (loyalty, CRM, POS, etc.) and the in-store appliance to enable rich in-store mobile applications.
- the applications will focus on improving the in-store shopping experience, providing self-service features, enabling interactions with store associates, and delivering location specific promotions.
- applications modules can be provided which can have all of the relevant customer information, location data, and store systems integration. The retailer will then be able to brand these applications and insert them into their existing mobile applications and web site, creating a retailer branded and controlled shopping experience.
- FIG. 16 illustrates an example retail system 1600 .
- System 1600 can include a mobile personalization engine (MPE).
- the MPE can generate information that can be re-used in shopping application scenarios.
- the MPE can generate client location information using location engine 1602 .
- the client location can be calculated by the location engine 1602 using the techniques described herein according to various implementations and can provide a near real-time and highly accurate location of the consumer.
- location engine 1602 can use information, such as a Wi-Fi location data, video data, barcode scans, etc., and determine a client location.
- the client location information generated by location engine 1602 can be stored in database 1604 as client location history data.
- the client location history is the saved historical data of the client location information.
- the MPE can generate client identification information using identification engine 1606 .
- identification engine 1606 can generate client identification information using the consumer's mobile device MAC address, the information they provided when registering for the Wi-Fi service via the captive portal (loyalty card information, Internet username and password, etc.) and any history the retailer has on this user from the CRM, loyalty card or internet website.
- the MPE can include event engine 1608 , which can generate shopping history information based on the consumer's current location, historical locations for that shopping experience, and any actions the consumer has taken within the retailer's mobile application on the current trip, including scanning a barcode or searching for products.
- the shopping history information can be stored in database 1610 .
- FIG. 17 illustrates an example system 1700 for performing a mobile search.
- upon entering a store, the consumer can open the retailer's mobile application on their smart phone, which can be connected to the retailer's WLAN, and provide a product search query 1702 by typing a description in a dialogue box, such as "bike".
- a mobile search algorithm can take into account retailer specific information and client location information to generate a result that is targeted and meaningful to the consumer.
- the system can pass the consumer's identification to the retailer's Internet website and conduct a web based search, cross reference those results with items that are within the local store's inventory, add weighting to those items that are on the local store promotion list, and add additional weighting based on personalization information from the retailer's CRM system.
- the system will take into account the current location of the consumer and add a weighting to items that are close by, and then also take into account other areas where the consumer has already gone in the store and also weight items that they have passed by.
- the final result will be a mobile location-based search result 1704 which will pass a few items (less than 10) back to the consumer's phone via the retailer mobile application.
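The weighting scheme described above can be sketched as follows. The field names and weighting constants are hypothetical; the text describes the weighting factors but not their values.

```python
def rank_products(products, promotions, visited_aisles, current_aisle,
                  max_results=10):
    """Rank web search results for an in-store mobile search.

    Each product dict carries a 'base_score' (web search relevance),
    'in_stock' (from store inventory), and 'aisle'. Promotion, nearby,
    and passed-by boosts use illustrative constants.
    """
    ranked = []
    for p in products:
        if not p["in_stock"]:            # cross-reference inventory
            continue
        score = p["base_score"]
        if p["name"] in promotions:      # boost local promotions
            score += 2.0
        if p["aisle"] == current_aisle:  # boost items close by
            score += 1.5
        if p["aisle"] in visited_aisles: # boost aisles already passed
            score += 0.5
        ranked.append((score, p["name"]))
    ranked.sort(reverse=True)
    return [name for _, name in ranked[:max_results]]

results = rank_products(
    [{"name": "bike", "base_score": 3.5, "in_stock": True, "aisle": 7},
     {"name": "bike bell", "base_score": 1.0, "in_stock": True, "aisle": 7},
     {"name": "bike rack", "base_score": 2.0, "in_stock": False, "aisle": 9}],
    promotions={"bike bell"}, visited_aisles={3}, current_aisle=7)
```

In this sketch the out-of-stock item is dropped, and the promoted, nearby accessory rises toward the top of the short list returned to the phone.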
- FIG. 18 illustrates an example system 1800 for providing mobile product information.
- the consumer can drill down into the mobile search results or may scan the barcode on one of the products that they are most interested in to obtain product information.
- the mobile application can present information about the product including product image, description and reviews, price and availability, in-store direction, ability to request help from in-store associates, and other products that they may be interested in or related items they should consider purchasing.
- Product information 1802 can include a product image, description and reviews.
- the product image, description and reviews can be taken directly from the retailer's web site or from a feed from the manufacturer.
- Price and availability information 1804 can be taken from the store inventory and pricing systems and may be unique for a particular store.
- FIG. 19 illustrates an example user interface 1900 of a mobile device for displaying product information.
- In-store directions 1806 can be based on the generated client location and a planogram that has the location of the product. This imagery can be displayed on the store floor plan and presented back to the customer on the customer's mobile device.
- FIG. 20 illustrates an example map 2000 that can be displayed on a mobile device for providing in-store directions. Map 2000 can indicate the location of the customer 2004 , the location of the product 2002 , and a route 2006 to follow to get from the customer location 2004 to the product location 2002 .
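Route generation over a floor plan can be sketched as a breadth-first search on an occupancy grid; the grid layout and coordinates here are illustrative stand-ins for a real planogram.

```python
from collections import deque

def shortest_route(grid, start, goal):
    """BFS route over a floor plan grid (0 = walkable, 1 = shelf or
    wall), returning the list of cells from the customer's location
    to the product's location, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # also serves as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path = []  # walk the predecessor chain back to start
            node = goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

floor = [[0, 0, 0],
         [1, 1, 0],  # a shelf blocks the direct route
         [0, 0, 0]]
route = shortest_route(floor, (0, 0), (2, 0))
```

The resulting cell sequence can be drawn on the store floor plan as the route 2006 presented back to the customer.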
- FIG. 21 illustrates an example graphical interface 2100 for requesting in-store assistance.
- Graphical interface 2100 can include help button 2102 for summoning help to the customer's in-store location.
- the system can also present product recommendations 1812 to the user through the mobile application. Based on knowing where the customer is and has been within the store, what inventory the store has, what items are on promotion, and what the customer has been interested in, the system 1800 can come up with a highly targeted list of items to suggest.
- FIG. 22 illustrates an example graphical interface 2200 for providing product recommendations.
- a purchase transaction can be completed on the customer's mobile device.
- the in-store appliance can receive the purchase information from the customer's mobile device and can send the purchase information to a store employee.
- the store employee can be directed to find the product (typically in the stock room) and bring the item to the customer located in the store.
- a benefit to the retailer of understanding the location of the shopper is the ability to promote products or services that are relevant to the current or past locations of that shopper.
- the current location and location history of the shopper enables the personalization of specific offers and promotions.
- Retailers and their manufacturer partners are very interested in being able to influence the consumer at the point of decision, and for many products that is the time when the shopper is in the aisle of the store where their products are sold.
- FIG. 23 illustrates an example graphical interface 2300 for presenting mobile in-store advertising.
- a retailer has an opportunity to provide location specific advertisement to the shopper in several different manners.
- One option could be providing a coupon as an alert 2302 to a shopping application provided by the retailer.
- Another option could be by sending a short message service (SMS) message 2402 to the phone of the shopper, as illustrated by FIG. 24 .
- location information can be bundled with personalization data typically found in a loyalty system in order to provide personalized offers that are location aware.
- location data can be collected and stored for analysis.
- this data can form the basis of rich information on shopper and employee patterns within the store. Retail marketers can use this information to better understand how shoppers use the store, where they dwell, and how certain offers and promotions can influence their use of the physical store. Internal departments can use this data to better understand how their employees travel the store to complete certain tasks, identify ways to optimize certain workflows, and gain a better understanding of where any given employee is at any particular time.
- Retail analytics is a massive field today, but unfortunately most solutions that focus on the store are only deployed in a few locations or provide a snapshot in time. With the location technology described herein, a retailer can have access to all of this analytical data in all of their store locations. Most electronic retailers have already instrumented their websites with analytics capabilities to understand how shoppers are using their site. Using implementations described herein, physical retailers can instrument their stores with similar analytics capabilities.
- FIG. 25 is a block diagram of an exemplary system architecture implementing the features and processes of FIGS. 1-24 .
- the architecture 2500 can be implemented on any electronic device that runs software applications derived from compiled instructions, including without limitation personal computers, servers, smart phones, media players, electronic tablets, game consoles, email devices, etc.
- the architecture 2500 can include one or more processors 2502 , one or more input devices 2504 , one or more display devices 2506 , one or more network interfaces 2508 and one or more computer-readable mediums 2510 . Each of these components can be coupled by bus 2512 .
- Display device 2506 can be any known display technology, including but not limited to display devices using Liquid Crystal Display (LCD) or Light Emitting Diode (LED) technology.
- Processor(s) 2502 can use any known processor technology, including but not limited to graphics processors and multi-core processors.
- Input device 2504 can be any known input device technology, including but not limited to a keyboard (including a virtual keyboard), mouse, track ball, and touch-sensitive pad or display.
- Bus 2512 can be any known internal or external bus technology, including but not limited to ISA, EISA, PCI, PCI Express, NuBus, USB, Serial ATA or FireWire.
- Computer-readable medium 2510 can be any medium that participates in providing instructions to processor(s) 2502 for execution, including without limitation, non-volatile storage media (e.g., optical disks, magnetic disks, flash drives, etc.) or volatile media (e.g., SDRAM, ROM, etc.).
- Computer-readable medium 2510 can include various instructions 2514 for implementing an operating system (e.g., Mac OS®, Windows®, Linux).
- the operating system can be multi-user, multiprocessing, multitasking, multithreading, real-time and the like.
- the operating system performs basic tasks, including but not limited to: recognizing input from input device 2504 ; sending output to display device 2506 ; keeping track of files and directories on computer-readable medium 2510 ; controlling peripheral devices (e.g., disk drives, printers, etc.) which can be controlled directly or through an I/O controller; and managing traffic on bus 2512 .
- Network communications instructions 2516 can establish and maintain network connections (e.g., software for implementing communication protocols, such as TCP/IP, HTTP, Ethernet, etc.).
- a graphics processing system 2518 can include instructions that provide graphics and image processing capabilities.
- the graphics processing system 2518 can implement the processes described with reference to FIGS. 1-24 .
- Application(s) 2520 can be an application that uses or implements the processes described in reference to FIGS. 1-24 .
- the processes can also be implemented in operating system 2514 .
- the described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device.
- a computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result.
- a computer program can be written in any form of programming language (e.g., Objective-C, Java), including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer.
- a processor will receive instructions and data from a read-only memory or a random access memory or both.
- the essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data.
- a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks.
- Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
- the processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
- the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
- the features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them.
- the components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.
- the computer system can include clients and servers.
- a client and server are generally remote from each other and typically interact through a network.
- the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
- An API can define one or more parameters that are passed between a calling application and other software code (e.g., an operating system, library routine, or function) that provides a service, provides data, or performs an operation or computation.
- the API can be implemented as one or more calls in program code that send or receive one or more parameters through a parameter list or other structure based on a calling convention defined in an API specification document.
- a parameter can be a constant, a key, a data structure, an object, an object class, a variable, a data type, a pointer, an array, a list, or another call.
- API calls and parameters can be implemented in any programming language.
- the programming language can define the vocabulary and calling convention that a programmer will employ to access functions supporting the API.
- an API call can report to an application the capabilities of a device running the application, such as input capability, output capability, processing capability, power capability, communications capability, etc.
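The parameter-passing and capability-reporting described above can be illustrated with a short sketch; the function name `get_capabilities`, its parameter, and the returned fields are invented for illustration and do not come from the patent or any real operating system API:

```python
# Hypothetical capability-reporting API call; names and fields are
# illustrative assumptions, not part of any real API.
def get_capabilities(device_id: str) -> dict:
    """Report the capabilities of the device running the application."""
    # A real implementation would query the OS or a driver; stubbed here.
    return {
        "input": ["touch", "keyboard"],
        "output": ["display", "audio"],
        "processing": "4-core CPU",
        "power": "battery",
        "communications": ["wifi", "bluetooth"],
    }

caps = get_capabilities("device-0")  # parameters pass per a defined calling convention
```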
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Electromagnetism (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
Description
$p\left(x_t^i \mid p_w(x_{t-1}),\, z_t^w,\, \tau_t\right),$ (2)
where $i = 1{:}L$ and $L$ is the number of grid locations; $\tau_t$ is the set of transition probabilities at time $t$; $z_t^w$ is the wireless RSS measurement vector.
$p(x_t^i \mid z_t^w) \propto p(z_t^w \mid x_t^i)\,\tilde{p}(x_t^i)$ (3)
The likelihood of receiving feature vector $z^w$ at location $x^i$ is given by equation (4), where $h$ is the wireless calibration vector for location $x^i$ and the $n$th AP.
The predicted prior probability is
$\tilde{p}(x_t^i) = p(x_t^i \mid x_{t-1}^i, \tau_t) = \sum_{j=1}^{L} x_{t-1}^j\, \tau_t^{i,j},$ (5)
where $\tau_t^{i,j}$ is the transition probability from location $j$ to location $i$ given that the target was at location $j$ at time $t-1$.
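Equations (3) and (5) together form a grid-based Bayes filter: the previous posterior is pushed through the transition probabilities, then reweighted by the measurement likelihood and renormalized. A minimal Python sketch with a toy 3-location grid; the transition matrix and likelihood values are made-up stand-ins (a real likelihood would come from the wireless calibration):

```python
import numpy as np

# tau[i, j] is the transition probability from location j to location i.

def predict_prior(posterior_prev, tau):
    """Equation (5): propagate the previous posterior through tau."""
    return tau @ posterior_prev  # tilde_p[i] = sum_j tau[i, j] * p_prev[j]

def bayes_update(posterior_prev, tau, likelihood):
    """Equation (3): posterior ∝ likelihood × predicted prior."""
    prior = predict_prior(posterior_prev, tau)
    post = likelihood * prior
    return post / post.sum()  # renormalize over the L grid locations

# Toy example with L = 3 grid locations (values are invented).
p_prev = np.array([0.2, 0.5, 0.3])           # posterior at time t-1
tau = np.array([[0.8, 0.1, 0.1],
                [0.1, 0.8, 0.1],
                [0.1, 0.1, 0.8]])
lik = np.array([0.1, 0.7, 0.2])              # p(z_t^w | x_t^i) from RSS calibration
p_t = bayes_update(p_prev, tau, lik)         # posterior at time t
```

Because the posterior is renormalized at each step, only the previous posterior vector needs to be kept in memory.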
where $Q_C$ is the number of blobs in camera $C$ at time $t$. Other methods of combining multiple occupancy grids include taking the summation or product of the occupancy probability grids from the different cameras.
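The per-camera fusion options just mentioned (maximum, summation, or product of occupancy probability grids) might be sketched as follows; the function name, the grid shapes, and the renormalization of the sum and product rules are illustrative assumptions:

```python
import numpy as np

def combine_grids(grids, method="max"):
    """Fuse per-camera occupancy probability grids into a single grid.

    grids: list of equally shaped 2-D arrays, one per camera.
    method: "max", "sum", or "product", mirroring the options in the text.
    """
    stack = np.stack(grids)                  # (num_cameras, H, W)
    if method == "max":
        return stack.max(axis=0)             # most confident camera wins per cell
    if method == "sum":
        s = stack.sum(axis=0)
        return s / s.sum()                   # renormalize to a probability grid
    if method == "product":
        p = stack.prod(axis=0)
        return p / p.sum()                   # cells must be supported by all cameras
    raise ValueError(f"unknown method: {method}")

# Two toy 2x2 camera grids.
cam_a = np.array([[0.1, 0.2], [0.3, 0.4]])
cam_b = np.array([[0.4, 0.1], [0.2, 0.3]])
fused = combine_grids([cam_a, cam_b], "max")
```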
$L_p(v, w) = \left( \sum_{i=1}^{|v|} \left| v_i - w_i \right|^p \right)^{1/p}$
where $v$ is the vector of $(x, y)$ positions from the video localization and $w$ is the vector of $(x, y)$ positions from the Wi-Fi localization. For the Euclidean norm, $p = 2$. The Euclidean norm measures the similarity between track $v$ and track $w$ over a time series of data. A real-time system may need an iterative algorithm that updates the similarity between tracks at every time sample without storing the entire track history, as described below and framed as a Bayesian inference graph.
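The $L_p$ track similarity above, plus an iterative variant that refreshes the value at every time sample without storing the full history, can be sketched as follows; all names are hypothetical, and treating each $(x, y)$ sample as two vector components of $v$ and $w$ is an assumed reading of the norm:

```python
def lp_distance(v, w, p=2):
    """L_p similarity between a video track v and a Wi-Fi track w.

    v and w are equal-length lists of (x, y) positions sampled at the
    same time instants; each coordinate is one vector component.
    """
    total = sum(abs(vx - wx) ** p + abs(vy - wy) ** p
                for (vx, vy), (wx, wy) in zip(v, w))
    return total ** (1.0 / p)

class RunningLp:
    """Iterative variant: update the similarity at each time sample
    while storing only a running sum, not the whole track history."""
    def __init__(self, p=2):
        self.p = p
        self.total = 0.0

    def add(self, v_xy, w_xy):
        (vx, vy), (wx, wy) = v_xy, w_xy
        self.total += abs(vx - wx) ** self.p + abs(vy - wy) ** self.p
        return self.total ** (1.0 / self.p)

dist = lp_distance([(0, 0), (1, 1)], [(0, 0), (1, 2)])  # Euclidean, p = 2
```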
$p(x_t^i \mid I_t, z_t^w)$
Claims (17)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/064,020 US9270952B2 (en) | 2010-08-18 | 2013-10-25 | Target localization utilizing wireless and camera sensor fusion |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US37498910P | 2010-08-18 | 2010-08-18 | |
| US13/211,969 US8615254B2 (en) | 2010-08-18 | 2011-08-17 | Target localization utilizing wireless and camera sensor fusion |
| US14/064,020 US9270952B2 (en) | 2010-08-18 | 2013-10-25 | Target localization utilizing wireless and camera sensor fusion |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/211,969 Division US8615254B2 (en) | 2010-08-18 | 2011-08-17 | Target localization utilizing wireless and camera sensor fusion |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20140285660A1 (en) | 2014-09-25 |
| US9270952B2 (en) | 2016-02-23 |
Family
ID=45594469
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/211,969 Active US8615254B2 (en) | 2010-08-18 | 2011-08-17 | Target localization utilizing wireless and camera sensor fusion |
| US14/064,020 Active 2031-09-08 US9270952B2 (en) | 2010-08-18 | 2013-10-25 | Target localization utilizing wireless and camera sensor fusion |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/211,969 Active US8615254B2 (en) | 2010-08-18 | 2011-08-17 | Target localization utilizing wireless and camera sensor fusion |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US8615254B2 (en) |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9875411B2 (en) * | 2015-08-03 | 2018-01-23 | Beijing Kuangshi Technology Co., Ltd. | Video monitoring method, video monitoring apparatus and video monitoring system |
| CN108446710A (en) * | 2018-01-31 | 2018-08-24 | 高睿鹏 | Indoor plane figure fast reconstructing method and reconstructing system |
| US10176379B1 (en) | 2018-06-01 | 2019-01-08 | Cisco Technology, Inc. | Integrating computer vision and wireless data to provide identification |
| CN109461295A (en) * | 2018-12-07 | 2019-03-12 | 连尚(新昌)网络科技有限公司 | A kind of household reporting method and apparatus |
| CN109617771A (en) * | 2018-12-07 | 2019-04-12 | 连尚(新昌)网络科技有限公司 | A home control method and corresponding routing device |
| US10445791B2 (en) | 2016-09-08 | 2019-10-15 | Walmart Apollo, Llc | Systems and methods for autonomous assistance and routing |
| US10650621B1 (en) | 2016-09-13 | 2020-05-12 | Iocurrents, Inc. | Interfacing with a vehicular controller area network |
| WO2020260731A1 (en) | 2019-06-28 | 2020-12-30 | Cubelizer S.L. | Method for analysing the behaviour of people in physical spaces and system for said method |
| US10921460B2 (en) | 2017-10-16 | 2021-02-16 | Samsung Electronics Co., Ltd. | Position estimating apparatus and method |
| WO2021080933A1 (en) * | 2019-10-21 | 2021-04-29 | Position Imaging, Inc. | System and method of personalized navigation inside a business enterprise |
| US11022443B2 (en) * | 2016-12-12 | 2021-06-01 | Position Imaging, Inc. | System and method of personalized navigation inside a business enterprise |
| US20220132270A1 (en) * | 2020-10-27 | 2022-04-28 | International Business Machines Corporation | Evaluation of device placement |
Families Citing this family (115)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8174931B2 (en) | 2010-10-08 | 2012-05-08 | HJ Laboratories, LLC | Apparatus and method for providing indoor location, position, or tracking of a mobile computer using building information |
| US9134137B2 (en) | 2010-12-17 | 2015-09-15 | Microsoft Technology Licensing, Llc | Mobile search based on predicted location |
| US10474858B2 (en) * | 2011-08-30 | 2019-11-12 | Digimarc Corporation | Methods of identifying barcoded items by evaluating multiple identification hypotheses, based on data from sensors including inventory sensors and ceiling-mounted cameras |
| US11288472B2 (en) * | 2011-08-30 | 2022-03-29 | Digimarc Corporation | Cart-based shopping arrangements employing probabilistic item identification |
| US9367770B2 (en) * | 2011-08-30 | 2016-06-14 | Digimarc Corporation | Methods and arrangements for identifying objects |
| US9330468B2 (en) | 2012-02-29 | 2016-05-03 | RetailNext, Inc. | Method and system for analyzing interactions |
| US8989774B2 (en) | 2012-10-11 | 2015-03-24 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system of semnatic indoor positioning using significant places as satellites |
| US9703274B2 (en) | 2012-10-12 | 2017-07-11 | Telefonaktiebolaget L M Ericsson (Publ) | Method for synergistic occupancy sensing in commercial real estates |
| US9224184B2 (en) | 2012-10-21 | 2015-12-29 | Digimarc Corporation | Methods and arrangements for identifying objects |
| US9894269B2 (en) | 2012-10-31 | 2018-02-13 | Atheer, Inc. | Method and apparatus for background subtraction using focus differences |
| GB201220584D0 (en) * | 2012-11-15 | 2013-01-02 | Roadpixel Ltd | A tracking or identification system |
| US9280833B2 (en) | 2013-03-05 | 2016-03-08 | International Business Machines Corporation | Topology determination for non-overlapping camera network |
| US9297654B2 (en) * | 2013-03-15 | 2016-03-29 | Raytheon Company | Associating signal intelligence to objects via residual reduction |
| US11743431B2 (en) | 2013-03-15 | 2023-08-29 | James Carey | Video identification and analytical recognition system |
| US11039108B2 (en) | 2013-03-15 | 2021-06-15 | James Carey | Video identification and analytical recognition system |
| US9167412B2 (en) * | 2013-03-15 | 2015-10-20 | Intel Corporation | Techniques for roaming between wireless local area networks belonging to a social network |
| US8913791B2 (en) | 2013-03-28 | 2014-12-16 | International Business Machines Corporation | Automatically determining field of view overlap among multiple cameras |
| US20140297485A1 (en) * | 2013-03-29 | 2014-10-02 | Lexmark International, Inc. | Initial Calibration of Asset To-Be-Tracked |
| BR112015026374B1 (en) | 2013-04-19 | 2022-04-12 | James Carey | Analytical recognition system |
| WO2015069320A2 (en) * | 2013-05-31 | 2015-05-14 | Andrew Llc | System and method for mobile identification and tracking in location systems |
| KR20150018037A (en) * | 2013-08-08 | 2015-02-23 | 주식회사 케이티 | System for monitoring and method for monitoring using the same |
| KR20150018696A (en) | 2013-08-08 | 2015-02-24 | 주식회사 케이티 | Method, relay apparatus and user terminal for renting surveillance camera |
| KR20150075224A (en) | 2013-12-24 | 2015-07-03 | 주식회사 케이티 | Apparatus and method for providing of control service |
| US10311457B2 (en) * | 2014-03-25 | 2019-06-04 | Nanyang Technological University | Computerized method and system for automating rewards to customers |
| KR102247891B1 (en) | 2014-04-22 | 2021-05-04 | 에스케이플래닛 주식회사 | Apparatus for recommending location inside building using access point fingerprinting and method using the same |
| US10824440B2 (en) | 2014-08-22 | 2020-11-03 | Sensoriant, Inc. | Deriving personalized experiences of smart environments |
| WO2016040874A1 (en) * | 2014-09-11 | 2016-03-17 | Carnegie Mellon University | Associating a user identity with a mobile device identity |
| US10338191B2 (en) * | 2014-10-30 | 2019-07-02 | Bastille Networks, Inc. | Sensor mesh and signal transmission architectures for electromagnetic signature analysis |
| US9804392B2 (en) | 2014-11-20 | 2017-10-31 | Atheer, Inc. | Method and apparatus for delivering and controlling multi-feed data |
| US9354066B1 (en) * | 2014-11-25 | 2016-05-31 | Wal-Mart Stores, Inc. | Computer vision navigation |
| US10169677B1 (en) * | 2014-12-19 | 2019-01-01 | Amazon Technologies, Inc. | Counting stacked inventory using image analysis |
| US10169660B1 (en) * | 2014-12-19 | 2019-01-01 | Amazon Technologies, Inc. | Counting inventory items using image analysis |
| US10671856B1 (en) | 2014-12-19 | 2020-06-02 | Amazon Technologies, Inc. | Detecting item actions and inventory changes at an inventory location |
| US9996818B1 (en) | 2014-12-19 | 2018-06-12 | Amazon Technologies, Inc. | Counting inventory items using image analysis and depth information |
| EP4343728A3 (en) | 2014-12-30 | 2024-06-19 | Alarm.com Incorporated | Digital fingerprint tracking |
| US9626589B1 (en) * | 2015-01-19 | 2017-04-18 | Ricoh Co., Ltd. | Preview image acquisition user interface for linear panoramic image stitching |
| US9594980B1 (en) * | 2015-01-19 | 2017-03-14 | Ricoh Co., Ltd. | Image acquisition user interface for linear panoramic image stitching |
| CN105987694B (en) * | 2015-02-09 | 2019-06-07 | 株式会社理光 | The method and apparatus for identifying the user of mobile device |
| US9519919B2 (en) * | 2015-03-10 | 2016-12-13 | Paypal, Inc. | In-store advertisement customization |
| EP3274976A1 (en) | 2015-03-24 | 2018-01-31 | Carrier Corporation | Systems and methods for providing a graphical user interface indicating intruder threat levels for a building |
| EP3274934A1 (en) | 2015-03-24 | 2018-01-31 | Carrier Corporation | Floor plan coverage based auto pairing and parameter setting |
| US10230326B2 (en) | 2015-03-24 | 2019-03-12 | Carrier Corporation | System and method for energy harvesting system planning and performance |
| US10756830B2 (en) | 2015-03-24 | 2020-08-25 | Carrier Corporation | System and method for determining RF sensor performance relative to a floor plan |
| CN107667552B (en) | 2015-03-24 | 2021-11-09 | 开利公司 | Floor plan based learning and registration method for distributed devices |
| WO2016154306A1 (en) | 2015-03-24 | 2016-09-29 | Carrier Corporation | System and method for capturing and analyzing multidimensional building information |
| WO2016154312A1 (en) | 2015-03-24 | 2016-09-29 | Carrier Corporation | Floor plan based planning of building systems |
| CN107660290B (en) | 2015-03-24 | 2022-03-22 | 开利公司 | Integrated system for sale, installation and maintenance of building systems |
| US10571547B2 (en) | 2015-03-27 | 2020-02-25 | Pcms Holdings, Inc. | System and method for indoor localization using beacons |
| US10217120B1 (en) | 2015-04-21 | 2019-02-26 | Videomining Corporation | Method and system for in-store shopper behavior analysis with multi-modal sensor fusion |
| US9569874B2 (en) * | 2015-06-05 | 2017-02-14 | International Business Machines Corporation | System and method for perspective preserving stitching and summarizing views |
| EP3304489B1 (en) * | 2015-07-03 | 2019-04-17 | Huawei Technologies Co., Ltd. | An image processing apparatus and method |
| CN108028902B (en) * | 2015-07-16 | 2021-05-04 | 博拉斯特运动有限公司 | Integrated Sensors and Video Motion Analysis Methods |
| EP3332549A4 (en) * | 2015-08-04 | 2018-08-08 | James Carey | Video identification and analytical recognition system |
| US10149091B2 (en) | 2015-11-24 | 2018-12-04 | Walmart Apollo, Llc | Device and method for directing employee movement |
| US11528452B2 (en) | 2015-12-29 | 2022-12-13 | Current Lighting Solutions, Llc | Indoor positioning system using beacons and video analytics |
| US11354683B1 (en) * | 2015-12-30 | 2022-06-07 | Videomining Corporation | Method and system for creating anonymous shopper panel using multi-modal sensor fusion |
| US10963893B1 (en) * | 2016-02-23 | 2021-03-30 | Videomining Corporation | Personalized decision tree based on in-store behavior analysis |
| US10277831B2 (en) * | 2016-03-25 | 2019-04-30 | Fuji Xerox Co., Ltd. | Position identifying apparatus and method, path identifying apparatus, and non-transitory computer readable medium |
| US10705179B2 (en) | 2016-04-22 | 2020-07-07 | Tandemlaunch | Device-free subject localization methods and systems using wireless signals |
| US12044789B2 (en) | 2016-04-22 | 2024-07-23 | Azar Zandifar | Systems and methods for occupancy detection using WiFi sensing technologies |
| CN109414119B (en) | 2016-05-09 | 2021-11-16 | 格拉班谷公司 | System and method for computer vision driven applications within an environment |
| WO2018013439A1 (en) | 2016-07-09 | 2018-01-18 | Grabango Co. | Remote state following devices |
| CA3052292A1 (en) | 2017-02-10 | 2018-08-16 | Grabango Co. | A dynamic customer checkout experience within an automated shopping environment |
| US11068721B2 (en) * | 2017-03-30 | 2021-07-20 | The Boeing Company | Automated object tracking in a video feed using machine learning |
| US10778906B2 (en) | 2017-05-10 | 2020-09-15 | Grabango Co. | Series-configured camera array for efficient deployment |
| BR112019027120A2 (en) * | 2017-06-21 | 2020-07-07 | Grabango Co. | method and system |
| GB2565142B (en) | 2017-08-04 | 2020-08-12 | Sony Interactive Entertainment Inc | Use of a camera to locate a wirelessly connected device |
| US10474991B2 (en) | 2017-08-07 | 2019-11-12 | Standard Cognition, Corp. | Deep learning-based store realograms |
| US11250376B2 (en) | 2017-08-07 | 2022-02-15 | Standard Cognition, Corp | Product correlation analysis using deep learning |
| US10650545B2 (en) | 2017-08-07 | 2020-05-12 | Standard Cognition, Corp. | Systems and methods to check-in shoppers in a cashier-less store |
| US11200692B2 (en) * | 2017-08-07 | 2021-12-14 | Standard Cognition, Corp | Systems and methods to check-in shoppers in a cashier-less store |
| US10474988B2 (en) | 2017-08-07 | 2019-11-12 | Standard Cognition, Corp. | Predicting inventory events using foreground/background processing |
| US11232687B2 (en) | 2017-08-07 | 2022-01-25 | Standard Cognition, Corp | Deep learning-based shopper statuses in a cashier-less store |
| TWI656512B (en) * | 2017-08-31 | 2019-04-11 | 群邁通訊股份有限公司 | Image analysis system and method |
| CN109427074A (en) | 2017-08-31 | 2019-03-05 | 深圳富泰宏精密工业有限公司 | Image analysis system and method |
| CA2978418C (en) | 2017-09-05 | 2018-12-18 | I3 International Inc. | System for tracking the location of people |
| US20190079591A1 (en) | 2017-09-14 | 2019-03-14 | Grabango Co. | System and method for human gesture processing from video input |
| US11170208B2 (en) * | 2017-09-14 | 2021-11-09 | Nec Corporation Of America | Physical activity authentication systems and methods |
| EP3460400B1 (en) * | 2017-09-22 | 2021-12-22 | Softbank Robotics Europe | Improved localization of a mobile device based on image and radio words |
| US20190098220A1 (en) * | 2017-09-26 | 2019-03-28 | WiSpear Systems Ltd. | Tracking A Moving Target Using Wireless Signals |
| US10963704B2 (en) | 2017-10-16 | 2021-03-30 | Grabango Co. | Multiple-factor verification for vision-based systems |
| JP6845790B2 (en) * | 2017-11-30 | 2021-03-24 | 株式会社東芝 | Position estimation device, position estimation method and terminal device |
| US10375667B2 (en) * | 2017-12-07 | 2019-08-06 | Cisco Technology, Inc. | Enhancing indoor positioning using RF multilateration and optical sensing |
| US20190180472A1 (en) * | 2017-12-08 | 2019-06-13 | Electronics And Telecommunications Research Institute | Method and apparatus for determining precise positioning |
| US10469590B2 (en) * | 2018-01-02 | 2019-11-05 | Scanalytics, Inc. | System and method for smart building control using directional occupancy sensors |
| US11481805B2 (en) | 2018-01-03 | 2022-10-25 | Grabango Co. | Marketing and couponing in a retail environment using computer vision |
| WO2019137624A1 (en) * | 2018-01-15 | 2019-07-18 | Here Global B.V. | Radio-based occupancies in venues |
| CN108229444B (en) * | 2018-02-09 | 2021-10-12 | 天津师范大学 | Pedestrian re-identification method based on integral and local depth feature fusion |
| US11330450B2 (en) | 2018-09-28 | 2022-05-10 | Nokia Technologies Oy | Associating and storing data from radio network and spatiotemporal sensors |
| US11288648B2 (en) | 2018-10-29 | 2022-03-29 | Grabango Co. | Commerce automation for a fueling station |
| US11164329B2 (en) * | 2018-11-01 | 2021-11-02 | Inpixon | Multi-channel spatial positioning system |
| US10950125B2 (en) * | 2018-12-03 | 2021-03-16 | Nec Corporation | Calibration for wireless localization and detection of vulnerable road users |
| US11126861B1 (en) | 2018-12-14 | 2021-09-21 | Digimarc Corporation | Ambient inventorying arrangements |
| CN109655790A (en) * | 2018-12-18 | 2019-04-19 | 天津大学 | Multi-target detection and identification system and method based on indoor LED light source |
| AU2020231365A1 (en) | 2019-03-01 | 2021-09-16 | Grabango Co. | Cashier interface for linking customers to virtual data |
| US12333739B2 (en) | 2019-04-18 | 2025-06-17 | Standard Cognition, Corp. | Machine learning-based re-identification of shoppers in a cashier-less store for autonomous checkout |
| US11232575B2 (en) | 2019-04-18 | 2022-01-25 | Standard Cognition, Corp | Systems and methods for deep learning-based subject persistence |
| US11087103B2 (en) | 2019-07-02 | 2021-08-10 | Target Brands, Inc. | Adaptive spatial granularity based on system performance |
| US11321944B2 (en) * | 2019-10-17 | 2022-05-03 | Drishti Technologies, Inc. | Cycle detection techniques |
| CN111563464B (en) * | 2020-05-11 | 2023-11-14 | 奇安信科技集团股份有限公司 | Image processing methods, devices, computing equipment and media |
| US11361468B2 (en) | 2020-06-26 | 2022-06-14 | Standard Cognition, Corp. | Systems and methods for automated recalibration of sensors for autonomous checkout |
| US12288294B2 (en) | 2020-06-26 | 2025-04-29 | Standard Cognition, Corp. | Systems and methods for extrinsic calibration of sensors for autonomous checkout |
| US11303853B2 (en) | 2020-06-26 | 2022-04-12 | Standard Cognition, Corp. | Systems and methods for automated design of camera placement and cameras arrangements for autonomous checkout |
| EP3937065B1 (en) | 2020-07-07 | 2022-05-11 | Axis AB | Method and device for counting a number of moving objects that cross at least one predefined curve in a scene |
| JP2022073138A (en) * | 2020-10-30 | 2022-05-17 | パナソニックIpマネジメント株式会社 | Sensor device, processing method, program |
| CN115205375A (en) * | 2021-04-12 | 2022-10-18 | 华为技术有限公司 | Target detection method, target tracking method and device |
| US11810387B2 (en) * | 2021-05-06 | 2023-11-07 | Hitachi, Ltd. | Location system and method |
| CN117795292A (en) * | 2021-08-02 | 2024-03-29 | 创峰科技 | Multi-person real-time positioning and map construction (SLAM) linked positioning and navigation |
| US12308892B2 (en) * | 2021-08-23 | 2025-05-20 | Verizon Patent And Licensing Inc. | Methods and systems for location-based audio messaging |
| US12373971B2 (en) | 2021-09-08 | 2025-07-29 | Standard Cognition, Corp. | Systems and methods for trigger-based updates to camograms for autonomous checkout in a cashier-less shopping |
| WO2023049197A1 (en) * | 2021-09-21 | 2023-03-30 | Verses Technologies Usa Inc. | Method and system for optimizing a warehouse |
| US12374211B2 (en) | 2021-09-23 | 2025-07-29 | Noonlight, Inc. | Systems and methods for alarm event data record processing |
| US12035201B2 (en) * | 2022-01-19 | 2024-07-09 | Qualcomm Incorporated | Determining communication nodes for radio frequency (RF) sensing |
| DE102022115597A1 (en) | 2022-06-22 | 2023-12-28 | Ariadne Maps Gmbh | METHOD FOR IMPROVING ACCURACY OF INDOOR POSITIONING AND SYSTEM FOR POSITION ESTIMATION OF AN INDIVIDUAL OR OBJECT |
| WO2025047248A1 (en) * | 2023-08-25 | 2025-03-06 | 日本電気株式会社 | Information processing device, information processing method, and recording medium |
Citations (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050093976A1 (en) | 2003-11-04 | 2005-05-05 | Eastman Kodak Company | Correlating captured images and timed 3D event data |
| US20060133648A1 (en) * | 2004-12-17 | 2006-06-22 | Xerox Corporation. | Identifying objects tracked in images using active device |
| US20070257985A1 (en) * | 2006-02-27 | 2007-11-08 | Estevez Leonardo W | Video Surveillance Correlating Detected Moving Objects and RF Signals |
| US20080303901A1 (en) * | 2007-06-08 | 2008-12-11 | Variyath Girish S | Tracking an object |
| US20090265105A1 (en) * | 2008-04-21 | 2009-10-22 | Igt | Real-time navigation devices, systems and methods |
| US20090268030A1 (en) * | 2008-04-29 | 2009-10-29 | Honeywell International Inc. | Integrated video surveillance and cell phone tracking system |
| US20090280824A1 (en) * | 2008-05-07 | 2009-11-12 | Nokia Corporation | Geo-tagging objects with wireless positioning information |
| KR20100025338A (en) | 2008-08-27 | 2010-03-09 | 삼성테크윈 주식회사 | System for tracking object using capturing and method thereof |
| KR20100026776A (en) | 2008-09-01 | 2010-03-10 | 주식회사 코아로직 | Camera-based real-time location system and method of locating in real-time using the same system |
| US20100103173A1 (en) | 2008-10-27 | 2010-04-29 | Minkyu Lee | Real time object tagging for interactive image display applications |
| US20100150404A1 (en) | 2008-12-17 | 2010-06-17 | Richard Lee Marks | Tracking system calibration with minimal user input |
| US20110065451A1 (en) * | 2009-09-17 | 2011-03-17 | Ydreams-Informatica, S.A. | Context-triggered systems and methods for information and services |
| US20110135149A1 (en) * | 2009-12-09 | 2011-06-09 | Pvi Virtual Media Services, Llc | Systems and Methods for Tracking Objects Under Occlusion |
| US20110169917A1 (en) * | 2010-01-11 | 2011-07-14 | Shoppertrak Rct Corporation | System And Process For Detecting, Tracking And Counting Human Objects of Interest |
- 2011
  - 2011-08-17: US application US13/211,969 granted as US8615254B2 (Active)
- 2013
  - 2013-10-25: US application US14/064,020 granted as US9270952B2 (Active)
Non-Patent Citations (2)
| Title |
|---|
| International Preliminary Report on Patentability for International Patent Application No. PCT/US2011/048294, filed Aug. 18, 2011, received Feb. 28, 2013, 6 pages. |
| International Search Report and the Written Opinion of the International Searching Authority dated Apr. 9, 2012 for Application No. PCT/US2011/048294, 9 pages. |
Cited By (19)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9875411B2 (en) * | 2015-08-03 | 2018-01-23 | Beijing Kuangshi Technology Co., Ltd. | Video monitoring method, video monitoring apparatus and video monitoring system |
| US10445791B2 (en) | 2016-09-08 | 2019-10-15 | Walmart Apollo, Llc | Systems and methods for autonomous assistance and routing |
| US11232655B2 (en) | 2016-09-13 | 2022-01-25 | Iocurrents, Inc. | System and method for interfacing with a vehicular controller area network |
| US10650621B1 (en) | 2016-09-13 | 2020-05-12 | Iocurrents, Inc. | Interfacing with a vehicular controller area network |
| US11774249B2 (en) * | 2016-12-12 | 2023-10-03 | Position Imaging, Inc. | System and method of personalized navigation inside a business enterprise |
| US20230035636A1 (en) * | 2016-12-12 | 2023-02-02 | Position Imaging, Inc. | System and method of personalized navigation inside a business enterprise |
| US11506501B2 (en) * | 2016-12-12 | 2022-11-22 | Position Imaging, Inc. | System and method of personalized navigation inside a business enterprise |
| US11022443B2 (en) * | 2016-12-12 | 2021-06-01 | Position Imaging, Inc. | System and method of personalized navigation inside a business enterprise |
| US10921460B2 (en) | 2017-10-16 | 2021-02-16 | Samsung Electronics Co., Ltd. | Position estimating apparatus and method |
| CN108446710A (en) * | 2018-01-31 | 2018-08-24 | 高睿鹏 | Indoor plane figure fast reconstructing method and reconstructing system |
| US10176379B1 (en) | 2018-06-01 | 2019-01-08 | Cisco Technology, Inc. | Integrating computer vision and wireless data to provide identification |
| CN109617771A (en) * | 2018-12-07 | 2019-04-12 | 连尚(新昌)网络科技有限公司 | A home control method and corresponding routing device |
| CN109617771B (en) * | 2018-12-07 | 2021-12-07 | 连尚(新昌)网络科技有限公司 | Home control method and corresponding routing equipment |
| CN109461295B (en) * | 2018-12-07 | 2021-06-11 | 连尚(新昌)网络科技有限公司 | Household alarm method and equipment |
| CN109461295A (en) * | 2018-12-07 | 2019-03-12 | 连尚(新昌)网络科技有限公司 | A kind of household reporting method and apparatus |
| WO2020260731A1 (en) | 2019-06-28 | 2020-12-30 | Cubelizer S.L. | Method for analysing the behaviour of people in physical spaces and system for said method |
| WO2021080933A1 (en) * | 2019-10-21 | 2021-04-29 | Position Imaging, Inc. | System and method of personalized navigation inside a business enterprise |
| US20220132270A1 (en) * | 2020-10-27 | 2022-04-28 | International Business Machines Corporation | Evaluation of device placement |
| US11805389B2 (en) * | 2020-10-27 | 2023-10-31 | International Business Machines Corporation | Evaluation of device placement |
Also Published As
| Publication number | Publication date |
|---|---|
| US20140285660A1 (en) | 2014-09-25 |
| US20120046044A1 (en) | 2012-02-23 |
| US8615254B2 (en) | 2013-12-24 |
Similar Documents
| Publication | Title |
|---|---|
| US9270952B2 (en) | Target localization utilizing wireless and camera sensor fusion |
| WO2012024516A2 (en) | Target localization utilizing wireless and camera sensor fusion |
| US9411037B2 (en) | Calibration of Wi-Fi localization from video localization |
| US11774249B2 (en) | System and method of personalized navigation inside a business enterprise |
| US10387896B1 (en) | At-shelf brand strength tracking and decision analytics |
| US11087130B2 (en) | Simultaneous object localization and attribute classification using multitask deep neural networks |
| US10455364B2 (en) | System and method of personalized navigation inside a business enterprise |
| US10262331B1 (en) | Cross-channel in-store shopper behavior analysis |
| US9569786B2 (en) | Methods and systems for excluding individuals from retail analytics |
| US10354262B1 (en) | Brand-switching analysis using longitudinal tracking of at-shelf shopper behavior |
| Rallapalli et al. | Enabling physical analytics in retail stores using smart glasses |
| US10217120B1 (en) | Method and system for in-store shopper behavior analysis with multi-modal sensor fusion |
| Xu et al. | ivr: Integrated vision and radio localization with zero human effort |
| US11354683B1 (en) | Method and system for creating anonymous shopper panel using multi-modal sensor fusion |
| US20190242968A1 (en) | Joint Entity and Object Tracking Using an RFID and Detection Network |
| TW201712361A (en) | Vision and radio fusion based precise indoor localization |
| US10664879B2 (en) | Electronic device, apparatus and system |
| Llorca et al. | Recognizing individuals in groups in outdoor environments combining stereo vision, RFID and BLE |
| Tavanti et al. | Review on systems combining computer vision and radio frequency identification |
| US12442915B1 (en) | Method and system for determining device orientation within augmented reality |
| Ecklbauer | A mobile positioning system for android based on visual markers |
| WO2021080933A1 (en) | System and method of personalized navigation inside a business enterprise |
| Balado Frías et al. | An overview of methods for control and estimation of capacity in COVID-19 pandemic from point cloud and imagery data |
| Zimmermann et al. | People Tracking Technology Use Cases in Brick-And-Mortar Retail |
| Peter | Crowd-sourced reconstruction of building interiors |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: NEARBUY SYSTEMS, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JAMTGAARD, MARK;MUELLER, NATHAN;REEL/FRAME:031483/0936. Effective date: 20110816 |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| | AS | Assignment | Owner name: RETAILNEXT, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NEARBUY SYSTEMS, INC.;REEL/FRAME:038689/0202. Effective date: 20160523 |
| | AS | Assignment | Owner name: TRIPLEPOINT VENTURE GROWTH BDC CORP., CALIFORNIA. Free format text: SECURITY INTEREST;ASSIGNOR:RETAILNEXT, INC.;REEL/FRAME:044176/0001. Effective date: 20171116 |
| | AS | Assignment | Owner name: SILICON VALLEY BANK, CALIFORNIA. Free format text: SECURITY INTEREST;ASSIGNOR:RETAILNEXT, INC.;REEL/FRAME:044252/0867. Effective date: 20171122 |
| | AS | Assignment | Owner name: ORIX GROWTH CAPITAL, LLC, NEW YORK. Free format text: SECURITY INTEREST;ASSIGNOR:RETAILNEXT, INC.;REEL/FRAME:046715/0067. Effective date: 20180827 |
| | AS | Assignment | Owner name: RETAILNEXT, INC., CALIFORNIA. Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:TRIPLEPOINT VENTURE GROWTH BDC CORP.;REEL/FRAME:046957/0896. Effective date: 20180827 |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY. Year of fee payment: 4 |
| | AS | Assignment | Owner name: SILICON VALLEY BANK, CALIFORNIA. Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE INCORRECTLY IDENTIFIED PATENT APPLICATION NUMBER 14322624 TO PROPERLY REFLECT PATENT APPLICATION NUMBER 14332624 PREVIOUSLY RECORDED ON REEL 044252 FRAME 0867. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY INTEREST;ASSIGNOR:RETAILNEXT, INC.;REEL/FRAME:053119/0599. Effective date: 20171122 |
| | AS | Assignment | Owner name: ALTER DOMUS (US) LLC, ILLINOIS. Free format text: SECURITY INTEREST;ASSIGNOR:RETAILNEXT, INC.;REEL/FRAME:056018/0344. Effective date: 20210423 |
| | AS | Assignment | Owner name: RETAILNEXT, INC., CALIFORNIA. Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:056055/0587. Effective date: 20210423. Owner name: RETAILNEXT, INC., CALIFORNIA. Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ORIX GROWTH CAPITAL, LLC;REEL/FRAME:056056/0825. Effective date: 20210423 |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY. Year of fee payment: 8 |
| | AS | Assignment | Owner name: EAST WEST BANK, AS ADMINISTRATIVE AGENT, CALIFORNIA. Free format text: SECURITY INTEREST;ASSIGNOR:RETAILNEXT, INC.;REEL/FRAME:064247/0925. Effective date: 20230713 |
| | AS | Assignment | Owner name: RETAILNEXT, INC., CALIFORNIA. Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ALTER DOMUS (US) LLC;REEL/FRAME:064298/0437. Effective date: 20230713 |
| | AS | Assignment | Owner name: BAIN CAPITAL CREDIT, LP, AS ADMINISTRATIVE AGENT, MASSACHUSETTS. Free format text: SECURITY INTEREST;ASSIGNOR:RETAILNEXT, INC.;REEL/FRAME:069495/0690. Effective date: 20241205 |
| | AS | Assignment | Owner name: RETAILNEXT, INC., CALIFORNIA. Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MGG INVESTMENT GROUP LP;REEL/FRAME:069511/0217. Effective date: 20241205 |