US20120041983A1 - System and method for information gatekeeper based on aggregate profile data - Google Patents

System and method for information gatekeeper based on aggregate profile data

Info

Publication number
US20120041983A1
Authority
US
United States
Prior art keywords
user
crowd
users
crowds
profile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/694,551
Inventor
Kenneth Jennings
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IP3 2017, Series 200 of Allied Security Trust I
Original Assignee
KOTA ENTERPRISES LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US12/694,551 (Critical)
Assigned to KOTA ENTERPRISES, LLC (assignment of assignors interest; assignor: JENNINGS, KENNETH)
Application filed by KOTA ENTERPRISES LLC
Assigned to WALDECK TECHNOLOGY, LLC (assignment of assignors interest; assignor: KOTA ENTERPRISES, LLC)
Publication of US20120041983A1
Assigned to CONCERT DEBT, LLC (security interest; assignor: WALDECK TECHNOLOGY, LLC)
Assigned to CONCERT DEBT, LLC (security interest; assignor: CONCERT TECHNOLOGY CORPORATION)
Assigned to WALDECK TECHNOLOGY, LLC (release by secured party; assignor: CONCERT DEBT, LLC)
Assigned to CONCERT TECHNOLOGY CORPORATION (release by secured party; assignor: CONCERT DEBT, LLC)
Assigned to IP3 2017, SERIES 200 OF ALLIED SECURITY TRUST I (assignment of assignors interest; assignor: WALDECK TECHNOLOGY, LLC)
Legal status: Abandoned

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04W WIRELESS COMMUNICATION NETWORKS
          • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
            • H04W 4/02 Services making use of location information
              • H04W 4/021 Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
              • H04W 4/029 Location-based management or tracking services
          • H04W 64/00 Locating users or terminals or network equipment for network management purposes, e.g. mobility management
        • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
            • H04L 41/08 Configuration management of networks or network elements
              • H04L 41/0893 Assignment of logical groups to network elements
          • H04L 67/00 Network arrangements or protocols for supporting network services or applications
            • H04L 67/2866 Architectures; Arrangements
              • H04L 67/30 Profiles
                • H04L 67/306 User profiles
            • H04L 67/50 Network services
              • H04L 67/52 Network services specially adapted for the location of the user terminal
              • H04L 67/535 Tracking the activity of the user
              • H04L 67/54 Presence management, e.g. monitoring or registration for receipt of user log-on information, or the connection status of the users
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
            • G06F 21/60 Protecting data
              • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
                • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
                  • G06F 21/6227 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database where protection concerns the structure of data, e.g. records, types, queries
        • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q 10/00 Administration; Management
            • G06Q 10/10 Office automation; Time management
          • G06Q 30/00 Commerce
            • G06Q 30/02 Marketing; Price estimation or determination; Fundraising

Definitions

  • the present invention relates to sharing digital items, and more particularly relates to an information gatekeeper that controls access to digital items in a sharing environment.
  • a computing device of a user stores a sharable item.
  • a sharing rule is configured for the sharable item, where the sharing rule is based on an element of aggregate profile data for a current location of the user, a crowd characteristic of one or more crowds that are currently relevant to the current location of the user, or both.
  • the element of the aggregate profile data, the crowd characteristic of the one or more crowds, or both are obtained and the sharing rule for the sharable item is resolved. Sharing of the sharable item is then provided according to a result of the resolution of the sharing rule for the sharable item.
  • a computing device of a user stores a sharable item.
  • a sharing rule is configured for the sharable item and is based on an element of aggregate profile data for a current location of the user.
  • the element of the aggregate profile data is either an element of a historical aggregate profile for the current location of the user or an element of an aggregate profile for one or more crowds of users currently relevant to the current location of the user.
  • the element of the aggregate profile data is obtained, and then the sharing rule for the sharable item is resolved based on the element of the aggregate profile data. Sharing of the sharable item is then provided according to a result of the resolution of the sharing rule for the sharable item.
  • a computing device of a user stores a sharable item.
  • a sharing rule is configured for the sharable item and is based on a crowd characteristic of one or more crowds currently relevant to a current location of the user. The crowd characteristic of the one or more crowds is obtained, and then the sharing rule for the sharable item is resolved based on the crowd characteristic of the one or more crowds. Sharing of the sharable item is then provided according to a result of the resolution of the sharing rule for the sharable item.
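  • As an illustration of the gatekeeper logic summarized above, the following minimal Java sketch resolves a sharing rule against aggregate profile data and a crowd characteristic; all type names, fields, and thresholds (SharingRule, AggregateProfile, CrowdCharacteristics, the keyword-ratio test) are hypothetical and are not defined by this disclosure.

```java
import java.util.Map;

/** Minimal sketch of resolving a sharing rule against aggregate profile data
    and crowd characteristics. All type and field names are illustrative only. */
public class SharingGatekeeperSketch {

    /** A sharing rule keyed on one aggregate-profile keyword and one crowd characteristic. */
    record SharingRule(String requiredKeyword, double minKeywordRatio, int minCrowdSize) {}

    /** Aggregate profile data: keyword -> fraction of users in the relevant crowd(s) matching it. */
    record AggregateProfile(Map<String, Double> keywordRatios) {}

    /** A crowd characteristic of the crowd(s) currently relevant to the user's location. */
    record CrowdCharacteristics(int crowdSize) {}

    /** Resolves the sharing rule; the sharable item is shared only if the rule is satisfied. */
    static boolean resolve(SharingRule rule, AggregateProfile aggregate, CrowdCharacteristics crowd) {
        double ratio = aggregate.keywordRatios().getOrDefault(rule.requiredKeyword(), 0.0);
        return ratio >= rule.minKeywordRatio() && crowd.crowdSize() >= rule.minCrowdSize();
    }

    public static void main(String[] args) {
        SharingRule rule = new SharingRule("photography", 0.30, 5);
        AggregateProfile aggregate = new AggregateProfile(Map.of("photography", 0.45, "music", 0.60));
        CrowdCharacteristics crowd = new CrowdCharacteristics(8);
        System.out.println(resolve(rule, aggregate, crowd)
                ? "Sharable item is made available"
                : "Sharing is withheld");
    }
}
```

  • In this sketch the item is shared only when the required interest keyword is sufficiently represented in the aggregate profile and the crowd is large enough; any other predicate over the same inputs could be substituted.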
  • FIG. 1 illustrates a Mobile Aggregate Profile (MAP) system according to one embodiment of the present disclosure
  • FIG. 2 is a block diagram of the MAP server of FIG. 1 according to one embodiment of the present disclosure
  • FIG. 3 is a block diagram of the MAP client of one of the mobile devices of FIG. 1 according to one embodiment of the present disclosure
  • FIG. 4 illustrates the operation of the system of FIG. 1 to provide user profiles and current locations of the users of the mobile devices to the MAP server according to one embodiment of the present disclosure
  • FIG. 5 illustrates the operation of the system of FIG. 1 to provide user profiles and current locations of the users of the mobile devices to the MAP server according to another embodiment of the present disclosure
  • FIGS. 6 and 7 graphically illustrate bucketization of users according to location for purposes of maintaining a historical record of anonymized user profile data by location according to one embodiment of the present disclosure
  • FIG. 8 is a flow chart illustrating the operation of a foreground bucketization process performed by the MAP server to maintain the lists of users for location buckets for purposes of maintaining a historical record of anonymized user profile data by location according to one embodiment of the present disclosure
  • FIG. 9 is a flow chart illustrating the anonymization and storage process performed by the MAP server for the location buckets in order to maintain a historical record of anonymized user profile data by location according to one embodiment of the present disclosure
  • FIG. 10 graphically illustrates anonymization of a user record according to one embodiment of the present disclosure
  • FIG. 11 is a flow chart for a quadtree based storage process that may be used to store anonymized user profile data for location buckets according to one embodiment of the present disclosure
  • FIG. 12 is a flow chart illustrating a quadtree algorithm that may be used to process the location buckets for storage of the anonymized user profile data according to one embodiment of the present disclosure
  • FIGS. 13A through 13E graphically illustrate the process of FIG. 12 for the generation of a quadtree data structure for one exemplary base quadtree region
  • FIG. 14 illustrates the operation of the system of FIG. 1 wherein a mobile device is enabled to request and receive historical data from the MAP server according to one embodiment of the present disclosure
  • FIGS. 15A and 15B illustrate a flow chart for a process for generating historical data in a time context in response to a historical request from a mobile device according to one embodiment of the present disclosure
  • FIG. 16 is an exemplary Graphical User Interface (GUI) that may be provided by the MAP application of one of the mobile devices of FIG. 1 in order to present historical aggregate profile data in a time context according to one embodiment of the present disclosure;
  • FIGS. 17A and 17B illustrate a flow chart for a process for generating historical data in a geographic context in response to a historical request from a mobile device according to one embodiment of the present disclosure
  • FIG. 18 illustrates an exemplary GUI that may be provided by the MAP application of one of the mobile devices of FIG. 1 to present historical data in the geographic context according to one embodiment of the present disclosure
  • FIG. 19 illustrates the operation of the system of FIG. 1 wherein the subscriber device is enabled to request and receive historical data from the MAP server according to one embodiment of the present disclosure
  • FIGS. 20A and 20B illustrate a process for generating historical data in a time context in response to a historical request from a subscriber device according to one embodiment of the present disclosure
  • FIGS. 21A and 21B illustrate a process for generating historical data in a geographic context in response to a historical request from a subscriber device according to one embodiment of the present disclosure.
  • FIG. 22 is a flow chart for a spatial crowd formation process according to one embodiment of the present disclosure.
  • FIGS. 23A through 23D graphically illustrate the crowd formation process of FIG. 22 for an exemplary bounding box
  • FIGS. 24A through 24D illustrate a flow chart for a spatial crowd formation process according to another embodiment of the present disclosure
  • FIGS. 25A through 25D graphically illustrate the crowd formation process of FIGS. 24A through 24D for a scenario where the crowd formation process is triggered by a location update for a user having no old location;
  • FIGS. 26A through 26F graphically illustrate the crowd formation process of FIGS. 24A through 24D for a scenario where the new and old bounding boxes overlap;
  • FIGS. 27A through 27E graphically illustrate the crowd formation process of FIGS. 24A through 24D in a scenario where the new and old bounding boxes do not overlap;
  • FIG. 28 illustrates the operation of the system of FIG. 1 to enable the mobile devices to request crowd data for currently formed crowds according to one embodiment of the present disclosure
  • FIG. 29A is a flow chart for a process for generating aggregate profiles for crowds identified in response to a crowd request from a mobile device according to one embodiment of the present disclosure
  • FIG. 29B is a flow chart for a process for generating aggregate profiles for crowds identified in response to a crowd request from a mobile device according to another embodiment of the present disclosure
  • FIG. 30 illustrates the operation of the system of FIG. 1 to enable a subscriber device to request crowd data for current crowds according to one embodiment of the present disclosure
  • FIG. 31 is a flow chart for a process for generating aggregate profiles for crowds identified for a crowd request in response to a crowd request from a subscriber device according to one embodiment of the present disclosure
  • FIGS. 32A through 32E illustrate a GUI for an exemplary embodiment of the MAP application of one of the mobile devices of FIG. 1 according to one embodiment of the present disclosure
  • FIGS. 33A through 33C illustrate an exemplary web interface provided by the MAP server and presented to the subscriber at the subscriber device according to one embodiment of the present disclosure
  • FIG. 34 is a flow chart illustrating a spatial crowd fragmentation process according to one embodiment of the present disclosure.
  • FIGS. 35A and 35B graphically illustrate the spatial crowd fragmentation process of FIG. 34 for an exemplary crowd
  • FIG. 36 illustrates a connectivity-based crowd fragmentation process according to one embodiment of the present disclosure
  • FIGS. 37A and 37B graphically illustrate the connectivity-based crowd fragmentation process of FIG. 36 for an exemplary crowd
  • FIG. 38 is a flow chart illustrating a recursive crowd fragmentation that uses both spatial crowd formation and connectivity-based crowd formation according to one embodiment of the present disclosure
  • FIG. 39 is a flow chart illustrating a recursive crowd fragmentation that uses both spatial crowd formation and connectivity-based crowd formation according to another embodiment of the present disclosure.
  • FIGS. 40A and 40B illustrate an exemplary graphical representation of the degree of fragmentation for a crowd according to one embodiment of the present disclosure
  • FIG. 41 is a flow chart for a process for determining a best-case and worst-case average degree of separation (DOS) for a crowd fragment of a crowd according to one embodiment of the present disclosure
  • FIG. 42 is a more detailed flow chart illustrating the process for determining a best-case and worst-case average DOS for a crowd fragment according to one embodiment of the present disclosure
  • FIGS. 43A through 43D illustrate an exemplary graphical representation of the best-case and worst-case average DOS for a crowd fragment according to one embodiment of the present disclosure
  • FIG. 44 is a flow chart for a process of determining a degree of bidirectionality of relationships between users in a crowd fragment according to one embodiment of the present disclosure
  • FIGS. 45A through 45C illustrate an exemplary graphical representation of the degree of bidirectionality of friendship relationships for a crowd fragment according to one embodiment of the present disclosure
  • FIG. 46 is a flow chart for a process for generating a quality level for an aggregate profile for a crowd according to one embodiment of the present disclosure
  • FIG. 47 illustrates an exemplary GUI for presenting an aggregate profile for a crowd and a quality level of the aggregate profile generated using the process of FIG. 46 according to one embodiment of the present disclosure
  • FIG. 48 illustrates another exemplary GUI for presenting an aggregate profile for a crowd and a quality level of the aggregate profile generated using the process of FIG. 46 according to another embodiment of the present disclosure
  • FIG. 49 illustrates a flow chart for a process for generating confidence factors for keywords included in an aggregate profile for a crowd based on confidence levels for current locations of users in the crowd according to one embodiment of the present disclosure
  • FIG. 50 illustrates an exemplary GUI for presenting an aggregate profile for a crowd including an indication of a confidence level for each of a number of keywords in the aggregate profile according to one embodiment of the present disclosure
  • FIG. 51 graphically illustrates modification of the confidence level of the current location of a user according to one embodiment of the present disclosure
  • FIG. 52 illustrates an exemplary third-party application that controls access to sharable items based on data from the MAP server of FIG. 1 according to one embodiment of the present disclosure
  • FIG. 53 illustrates the operation of the third-party application of FIG. 52 to control access to a sharable item based on data obtained from the MAP server according to one embodiment of the present disclosure
  • FIG. 54 illustrates a process for automatically configuring sharing rules for sharable media items according to one embodiment of the present disclosure
  • FIG. 55 is a more detailed illustration of the operation of the third-party application of FIG. 52 to control access to sharable items according to one embodiment of the present disclosure
  • FIG. 56 is a block diagram of the MAP server of FIG. 1 according to one embodiment of the present disclosure.
  • FIG. 57 is a block diagram of one of the mobile devices of FIG. 1 according to one embodiment of the present disclosure.
  • FIG. 58 is a block diagram of the subscriber device of FIG. 1 according to one embodiment of the present disclosure.
  • FIG. 59 is a block diagram of a computing device operating to host the third-party service of FIG. 1 according to one embodiment of the present disclosure.
  • FIG. 1 illustrates a Mobile Aggregate Profile (MAP) system 10 according to one embodiment of the present disclosure.
  • the system 10 includes a MAP server 12 , one or more profile servers 14 , a location server 16 , a number of mobile devices 18 - 1 through 18 -N having associated users 20 - 1 through 20 -N, a subscriber device 22 having an associated subscriber 24 , and a third-party service 26 communicatively coupled via a network 28 .
  • the network 28 may be any type of network or any combination of networks. Specifically, the network 28 may include wired components, wireless components, or both wired and wireless components.
  • the network 28 is a distributed public network such as the Internet, where the mobile devices 18 - 1 through 18 -N are enabled to connect to the network 28 via local wireless connections (e.g., WiFi or IEEE 802.11 connections) or wireless telecommunications connections (e.g., 3G or 4G telecommunications connections such as GSM, LTE, W-CDMA, or WiMAX connections).
  • the MAP server 12 operates to obtain current locations, including location updates, and user profiles of the users 20 - 1 through 20 -N of the mobile devices 18 - 1 through 18 -N.
  • the current locations of the users 20 - 1 through 20 -N can be expressed as positional geographic coordinates such as latitude-longitude pairs, and a height vector (if applicable), or any other similar information capable of identifying a given physical point in space in a two-dimensional or three-dimensional coordinate system.
  • the MAP server 12 is enabled to provide a number of features such as, but not limited to, maintaining a historical record of anonymized user profile data by location, generating aggregate profile data over time for a Point of Interest (POI) or Area of Interest (AOI) using the historical record of anonymized user profile data, identifying crowds of users using current locations and/or user profiles of the users 20 - 1 through 20 -N, generating aggregate profiles for crowds of users at a POI or in an AOI using the current user profiles of users in the crowds, and crowd tracking.
  • the MAP server 12 is illustrated as a single server for simplicity and ease of discussion, it should be appreciated that the MAP server 12 may be implemented as a single physical server or multiple physical servers operating in a collaborative manner for purposes of redundancy and/or load sharing.
  • the one or more profile servers 14 operate to store user profiles for a number of persons including the users 20 - 1 through 20 -N of the mobile devices 18 - 1 through 18 -N.
  • the one or more profile servers 14 may be servers providing social network services such as the Facebook® social networking service, the MySpace® social networking service, the LinkedIN® social networking service, or the like.
  • the MAP server 12 is enabled to directly or indirectly obtain the user profiles of the users 20 - 1 through 20 -N of the mobile devices 18 - 1 through 18 -N.
  • the location server 16 generally operates to receive location updates from the mobile devices 18 - 1 through 18 -N and make the location updates available to entities such as, for instance, the MAP server 12 .
  • the location server 16 is a server operating to provide Yahoo!'s FireEagle service.
  • the mobile devices 18 - 1 through 18 -N may be mobile smart phones, portable media player devices, mobile gaming devices, or the like. Some exemplary mobile devices that may be programmed or otherwise configured to operate as the mobile devices 18 - 1 through 18 -N are the Apple® iPhone, the Palm Pre, the Samsung Rogue, the Blackberry Storm, and the Apple® iPod Touch® device. However, this list of exemplary mobile devices is not exhaustive and is not intended to limit the scope of the present disclosure.
  • the mobile devices 18 - 1 through 18 -N include MAP clients 30 - 1 through 30 -N, MAP applications 32 - 1 through 32 -N, third-party applications 34 - 1 through 34 -N, and location functions 36 - 1 through 36 -N, respectively.
  • the MAP client 30 - 1 is preferably implemented in software.
  • the MAP client 30 - 1 is a middleware layer operating to interface an application layer (i.e., the MAP application 32 - 1 and the third-party applications 34 - 1 ) to the MAP server 12 .
  • the MAP client 30 - 1 enables the MAP application 32 - 1 and the third-party applications 34 - 1 to request and receive data from the MAP server 12 .
  • the MAP client 30 - 1 enables applications, such as the MAP application 32 - 1 and the third-party applications 34 - 1 , to access data from the MAP server 12 .
  • the MAP client 30 - 1 enables the MAP application 32 - 1 to request anonymized aggregate profiles for crowds of users located at a POI or within an AOI and/or request anonymized historical user profile data for a POI or AOI.
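  • The disclosure does not specify the signatures of the MAP access API; as a purely hypothetical sketch, a client-side request for an anonymized aggregate crowd profile at a POI might look like the following, where every interface, method, and record name is illustrative.

```java
import java.util.Map;

/** Hypothetical sketch of the kind of request an application might make through the MAP
    client; none of these interfaces, method names, or data shapes are defined by the
    disclosure. */
public class MapClientRequestSketch {

    /** Anonymized aggregate profile for crowds at a POI: keyword -> fraction of users. */
    record AggregateCrowdProfile(Map<String, Double> keywordRatios, int userCount) {}

    /** Illustrative stand-in for the MAP access API exposed by the MAP client. */
    interface MapAccessApi {
        AggregateCrowdProfile requestCrowdProfile(double poiLatitude, double poiLongitude);
    }

    public static void main(String[] args) {
        // A stand-in implementation; a real client would forward the request to the MAP server.
        MapAccessApi mapClient = (lat, lon) ->
                new AggregateCrowdProfile(Map.of("music", 0.6, "technology", 0.4), 12);

        AggregateCrowdProfile profile = mapClient.requestCrowdProfile(32.2427, -111.9249);
        System.out.println("Users in crowd(s): " + profile.userCount());
        System.out.println("Aggregate interests: " + profile.keywordRatios());
    }
}
```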
  • the MAP application 32 - 1 is also preferably implemented in software.
  • the MAP application 32 - 1 generally provides a user interface component between the user 20 - 1 and the MAP server 12 . More specifically, among other things, the MAP application 32 - 1 enables the user 20 - 1 to initiate historical requests for historical data or crowd requests for crowd data (e.g., aggregate profile data and/or crowd characteristics data) from the MAP server 12 for a POI or AOI.
  • the MAP application 32 - 1 also enables the user 20 - 1 to configure various settings.
  • the MAP application 32 - 1 may enable the user 20 - 1 to select a desired social networking service (e.g., Facebook, MySpace, LinkedIN, etc.) from which to obtain the user profile of the user 20 - 1 and provide any necessary credentials (e.g., username and password) needed to access the user profile from the social networking service.
  • the third-party applications 34 - 1 are preferably implemented in software.
  • the third-party applications 34 - 1 operate to access the MAP server 12 via the MAP client 30 - 1 .
  • the third-party applications 34 - 1 may utilize data obtained from the MAP server 12 in any desired manner.
  • one of the third-party applications 34 - 1 may be a gaming application that utilizes historical aggregate profile data to notify the user 20 - 1 of POIs or AOIs where persons having an interest in the game have historically congregated.
  • the location function 36 - 1 may be implemented in hardware, software, or a combination thereof. In general, the location function 36 - 1 operates to determine or otherwise obtain the location of the mobile device 18 - 1 .
  • the location function 36 - 1 may be or include a Global Positioning System (GPS) receiver.
  • the subscriber device 22 is a physical device such as a personal computer, a mobile computer (e.g., a notebook computer, a netbook computer, a tablet computer, etc.), a mobile smart phone, or the like.
  • the subscriber 24 associated with the subscriber device 22 is a person or entity. In general, the subscriber device 22 enables the subscriber 24 to access the MAP server 12 via a web browser 38 to obtain various types of data, preferably for a fee.
  • the subscriber 24 may pay a fee to have access to historical aggregate profile data for one or more POIs and/or one or more AOIs, pay a fee to have access to crowd data such as aggregate profiles for crowds located at one or more POIs and/or located in one or more AOIs, pay a fee to track crowds, or the like.
  • the web browser 38 is exemplary; in an alternative embodiment, the subscriber device 22 is enabled to access the MAP server 12 via a custom application.
  • the third-party service 26 is a service that has access to data from the MAP server 12 such as historical aggregate profile data for one or more POIs or one or more AOIs, crowd data such as aggregate profiles for one or more crowds at one or more POIs or within one or more AOIs, or crowd tracking data.
  • the third-party service 26 operates to provide a service such as, for example, targeted advertising.
  • the third-party service 26 may obtain anonymous aggregate profile data for one or more crowds located at a POI and then provide targeted advertising to known users located at the POI based on the anonymous aggregate profile data.
  • while targeted advertising is mentioned as an exemplary third-party service 26, other types of third-party services 26 may additionally or alternatively be provided.
  • Other types of third-party services 26 that may be provided will be apparent to one of ordinary skill in the art upon reading this disclosure.
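  • As a hedged sketch of the targeted-advertising example above, the snippet below picks an advertisement category from anonymous aggregate profile data (keyword-to-ratio pairs) for a crowd at a POI; the selection heuristic and all names are illustrative and are not part of the disclosure.

```java
import java.util.Map;

/** Illustrative sketch only: selecting a targeted advertisement category for users located
    at a POI from anonymous aggregate profile data (keyword -> fraction of the crowd). */
public class TargetedAdSketch {

    static String selectAdCategory(Map<String, Double> anonymousAggregateProfile) {
        // Target the interest that is most strongly represented in the crowd's aggregate profile.
        return anonymousAggregateProfile.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("general");
    }

    public static void main(String[] args) {
        Map<String, Double> aggregate = Map.of("technology", 0.55, "books", 0.30, "politics", 0.15);
        System.out.println("Ad category for users at this POI: " + selectAdCategory(aggregate));
    }
}
```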
  • while the system 10 of FIG. 1 illustrates an embodiment where the one or more profile servers 14 and the location server 16 are separate from the MAP server 12 , the present disclosure is not limited thereto. In an alternative embodiment, the functionality of the one or more profile servers 14 and/or the location server 16 may be implemented within the MAP server 12 .
  • FIG. 2 is a block diagram of the MAP server 12 of FIG. 1 according to one embodiment of the present disclosure.
  • the MAP server 12 includes an application layer 40 , a business logic layer 42 , and a persistence layer 44 .
  • the application layer 40 includes a user web application 46 , a mobile client/server protocol component 48 , and one or more data Application Programming Interfaces (APIs) 50 .
  • the user web application 46 is preferably implemented in software and operates to provide a web interface for users, such as the subscriber 24 , to access the MAP server 12 via a web browser.
  • the mobile client/server protocol component 48 is preferably implemented in software and operates to provide an interface between the MAP server 12 and the MAP clients 30 - 1 through 30 -N hosted by the mobile devices 18 - 1 through 18 -N.
  • the data APIs 50 enable third-party services, such as the third-party service 26 , to access the MAP server 12 .
  • the business logic layer 42 includes a profile manager 52 , a location manager 54 , a history manager 56 , a crowd analyzer 58 , and an aggregation engine 60 , each of which is preferably implemented in software.
  • the profile manager 52 generally operates to obtain the user profiles of the users 20 - 1 through 20 -N directly or indirectly from the one or more profile servers 14 and store the user profiles in the persistence layer 44 .
  • the location manager 54 operates to obtain the current locations of the users 20 - 1 through 20 -N including location updates. As discussed below, the current locations of the users 20 - 1 through 20 -N may be obtained directly from the mobile devices 18 - 1 through 18 -N and/or obtained from the location server 16 .
  • the history manager 56 generally operates to maintain a historical record of anonymized user profile data by location.
  • the crowd analyzer 58 operates to form crowds of users.
  • the crowd analyzer 58 utilizes a spatial crowd formation algorithm.
  • the crowd analyzer 58 may further characterize crowds to reflect degree of fragmentation, best-case and worst-case degree of separation (DOS), and/or degree of bi-directionality, as discussed below in more detail.
  • the crowd analyzer 58 may also operate to track crowds.
  • the aggregation engine 60 generally operates to provide aggregate profile data in response to requests from the mobile devices 18 - 1 through 18 -N, the subscriber device 22 , and the third-party service 26 .
  • the aggregate profile data may be historical aggregate profile data for one or more POIs or one or more AOIs or aggregate profile data for crowd(s) currently at one or more POIs or within one or more AOIs.
  • the persistence layer 44 includes an object mapping layer 62 and a datastore 64 .
  • the object mapping layer 62 is preferably implemented in software.
  • the datastore 64 is preferably a relational database, which is implemented in a combination of hardware (i.e., physical data storage hardware) and software (i.e., relational database software).
  • the business logic layer 42 is implemented in an object-oriented programming language such as, for example, Java.
  • the object mapping layer 62 operates to map objects used in the business logic layer 42 to relational database entities stored in the datastore 64 .
  • data is stored in the datastore 64 in a Resource Description Framework (RDF) compatible format.
  • the datastore 64 may be implemented as an RDF datastore. More specifically, the RDF datastore may be compatible with RDF technology adopted by Semantic Web activities. Namely, the RDF datastore may use the Friend-Of-A-Friend (FOAF) vocabulary for describing people, their social networks, and their interests.
  • the MAP server 12 may be designed to accept raw FOAF files describing persons, their friends, and their interests. These FOAF files are currently output by some social networking services such as Livejournal and Facebook. The MAP server 12 may then persist RDF descriptions of the users 20 - 1 through 20 -N as a proprietary extension of the FOAF vocabulary that includes additional properties desired for the MAP system 10 .
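  • To make the FOAF-based persistence concrete, the following sketch models RDF-style triples in plain Java; foaf:name, foaf:interest, and foaf:knows are genuine FOAF vocabulary terms, while the triple record, the example resource URIs, and the map:currentLocationBucket extension property are illustrative assumptions only.

```java
import java.util.List;

/** Minimal sketch of an RDF-style triple representation using FOAF vocabulary terms,
    in the spirit of the FOAF-based persistence described above. The record type, the
    example URIs, and the "map:" extension property are illustrative, not part of the
    disclosure. */
public class FoafTripleSketch {

    record Triple(String subject, String predicate, String object) {}

    public static void main(String[] args) {
        String user = "http://example.com/users/20-1"; // hypothetical resource URI
        List<Triple> triples = List.of(
            new Triple(user, "rdf:type", "foaf:Person"),
            new Triple(user, "foaf:name", "\"Example User\""),
            new Triple(user, "foaf:interest", "http://example.com/topics/photography"),
            new Triple(user, "foaf:knows", "http://example.com/users/20-2"),
            // A proprietary extension property of the kind the MAP system might add:
            new Triple(user, "map:currentLocationBucket", "\"32.2426,-111.9249\"")
        );
        triples.forEach(t -> System.out.println(t.subject() + " " + t.predicate() + " " + t.object()));
    }
}
```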
  • FIG. 3 illustrates the MAP client 30 - 1 of FIG. 1 in more detail according to one embodiment of the present disclosure. This discussion is equally applicable to the other MAP clients 30 - 2 through 30 -N.
  • the MAP client 30 - 1 includes a MAP access API 66 , a MAP middleware component 68 , and a mobile client/server protocol component 70 .
  • the MAP access API 66 is implemented in software and provides an interface by which the MAP application 32 - 1 and the third-party applications 34 - 1 are enabled to access the MAP client 30 - 1 .
  • the MAP middleware component 68 is implemented in software and performs the operations needed for the MAP client 30 - 1 to operate as an interface between the MAP application 32 - 1 and the third-party applications 34 - 1 at the mobile device 18 - 1 and the MAP server 12 .
  • the mobile client/server protocol component 70 enables communication between the MAP client 30 - 1 and the MAP server 12 via a defined protocol.
  • FIG. 4 illustrates the operation of the system 10 of FIG. 1 to provide the user profile of the user 20 - 1 of the mobile device 18 - 1 according to one embodiment of the present disclosure.
  • This discussion is equally applicable to user profiles of the other users 20 - 2 through 20 -N of the other mobile devices 18 - 2 through 18 -N.
  • an authentication process is performed (step 1000 ).
  • the mobile device 18 - 1 authenticates with the profile server 14 (step 1000 A) and the MAP server 12 (step 1000 B).
  • the MAP server 12 authenticates with the profile server 14 (step 1000 C).
  • authentication is performed using OpenID or similar technology.
  • authentication may alternatively be performed using separate credentials (e.g., username and password) of the user 20 - 1 for access to the MAP server 12 and the profile server 14 .
  • the profile server 14 returns an authentication succeeded message to the MAP server 12 (step 1000 D), and the profile server 14 returns an authentication succeeded message to the MAP client 30 - 1 of the mobile device 18 - 1 (step 1000 E).
  • a user profile process is performed such that a user profile of the user 20 - 1 is obtained from the profile server 14 and delivered to the MAP server 12 (step 1002 ).
  • the MAP client 30 - 1 of the mobile device 18 - 1 sends a profile request to the profile server 14 (step 1002 A).
  • the profile server 14 returns the user profile of the user 20 - 1 to the mobile device 18 - 1 (step 1002 B).
  • the MAP client 30 - 1 of the mobile device 18 - 1 then sends the user profile of the user 20 - 1 to the MAP server 12 (step 1002 C).
  • the MAP client 30 - 1 may filter the user profile of the user 20 - 1 according to criteria specified by the user 20 - 1 .
  • the user profile of the user 20 - 1 may include demographic information, general interests, music interests, and movie interests, and the user 20 - 1 may specify that the demographic information or some subset thereof is to be filtered, or removed, before sending the user profile to the MAP server 12 .
  • Upon receiving the user profile of the user 20 - 1 from the MAP client 30 - 1 of the mobile device 18 - 1 , the profile manager 52 of the MAP server 12 processes the user profile (step 1002 D). More specifically, in the preferred embodiment, the profile manager 52 includes social network handlers for the social network services supported by the MAP server 12 . Thus, for example, if the MAP server 12 supports user profiles from Facebook, MySpace, and LinkedIN, the profile manager 52 may include a Facebook handler, a MySpace handler, and a LinkedIN handler. The social network handlers process user profiles to generate user profiles for the MAP server 12 that include lists of keywords for each of a number of profile categories.
  • the profile categories may be the same for each of the social network handlers or different for each of the social network handlers.
  • the user profile of the user 20 - 1 is from Facebook.
  • the profile manager 52 uses a Facebook handler to process the user profile of the user 20 - 1 to map the user profile of the user 20 - 1 from Facebook to a user profile for the MAP server 12 including lists of keywords for a number of predefined profile categories.
  • the profile categories may be a demographic profile category, a social interaction profile category, a general interests profile category, a music interests profile category, and a movie interests profile category.
  • the user profile of the user 20 - 1 from Facebook may be processed by the Facebook handler of the profile manager 52 to create a list of keywords such as, for example, liberal, High School graduate, 35-44, College graduate, etc. for the demographic profile category, a list of keywords such as Seeking Friendship for the social interaction profile category, a list of keywords such as politics, technology, photography, books, etc. for the general interests profile category, a list of keywords including music genres, artist names, album names, or the like for the music interests profile category, and a list of keywords including movie titles, actor or actress names, director names, movie genres, or the like for the movie interests profile category.
  • the profile manager 52 may use natural language processing or semantic analysis. For example, if the Facebook user profile of the user 20 - 1 states that the user 20 - 1 is 20 years old, semantic analysis may result in the keyword of 18-24 years old being stored in the user profile of the user 20 - 1 for the MAP server 12 .
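  • The following sketch illustrates the kind of mapping a social network handler performs, turning a source profile into keyword lists per profile category and applying the age-to-range semantic mapping described above; the class, field, and category names are hypothetical.

```java
import java.util.List;
import java.util.Map;

/** Illustrative sketch of a social network handler mapping a source profile into lists of
    keywords per MAP profile category. All class, field, and category names are hypothetical. */
public class ProfileHandlerSketch {

    /** Bare-bones representation of a profile pulled from a social networking service. */
    record SourceProfile(int age, List<String> interests, List<String> favoriteArtists) {}

    static String ageToRangeKeyword(int age) {
        if (age < 18) return "under 18";
        if (age <= 24) return "18-24";
        if (age <= 34) return "25-34";
        if (age <= 44) return "35-44";
        return "45 and over";
    }

    /** Maps the source profile to keyword lists keyed by profile category. */
    static Map<String, List<String>> toMapProfile(SourceProfile source) {
        return Map.of(
            "demographics", List.of(ageToRangeKeyword(source.age())),
            "general interests", source.interests(),
            "music interests", source.favoriteArtists()
        );
    }

    public static void main(String[] args) {
        SourceProfile source = new SourceProfile(20, List.of("politics", "photography"), List.of("Example Artist"));
        System.out.println(toMapProfile(source));
        // demographics -> [18-24], matching the semantic-analysis example in the text
    }
}
```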
  • After processing the user profile of the user 20 - 1 , the profile manager 52 of the MAP server 12 stores the resulting user profile for the user 20 - 1 (step 1002 E). More specifically, in one embodiment, the MAP server 12 stores user records for the users 20 - 1 through 20 -N in the datastore 64 ( FIG. 2 ). The user profile of the user 20 - 1 is stored in the user record of the user 20 - 1 .
  • the user record of the user 20 - 1 includes a unique identifier of the user 20 - 1 , the user profile of the user 20 - 1 , and, as discussed below, a current location of the user 20 - 1 . Note that the user profile of the user 20 - 1 may be updated as desired. For example, in one embodiment, the user profile of the user 20 - 1 is updated by repeating step 1002 each time the user 20 - 1 activates the MAP application 32 - 1 .
  • the user profiles of the users 20 - 1 through 20 -N may be obtained in any desired manner.
  • the user 20 - 1 may identify one or more favorite websites.
  • the profile manager 52 of the MAP server 12 may then crawl the one or more favorite websites of the user 20 - 1 to obtain keywords appearing in the one or more favorite websites of the user 20 - 1 . These keywords may then be stored as the user profile of the user 20 - 1 .
  • a process is performed such that a current location of the mobile device 18 - 1 and thus a current location of the user 20 - 1 is obtained by the MAP server 12 (step 1004 ).
  • the MAP application 32 - 1 of the mobile device 18 - 1 obtains the current location of the mobile device 18 - 1 from the location function 36 - 1 of the mobile device 18 - 1 .
  • the MAP application 32 - 1 then provides the current location of the mobile device 18 - 1 to the MAP client 30 - 1
  • the MAP client 30 - 1 then provides the current location of the mobile device 18 - 1 to the MAP server 12 (step 1004 A).
  • step 1004 A may be repeated periodically or in response to a change in the current location of the mobile device 18 - 1 in order for the MAP application 32 - 1 to provide location updates for the user 20 - 1 to the MAP server 12 .
  • the location manager 54 of the MAP server 12 stores the current location of the mobile device 18 - 1 as the current location of the user 20 - 1 (step 1004 B). More specifically, in one embodiment, the current location of the user 20 - 1 is stored in the user record of the user 20 - 1 maintained in the datastore 64 of the MAP server 12 . Note that only the current location of the user 20 - 1 is stored in the user record of the user 20 - 1 . In this manner, the MAP server 12 maintains privacy for the user 20 - 1 since the MAP server 12 does not maintain a historical record of the location of the user 20 - 1 . As discussed below in detail, historical data maintained by the MAP server 12 is anonymized in order to maintain the privacy of the users 20 - 1 through 20 -N.
  • the location manager 54 sends the current location of the user 20 - 1 to the location server 16 (step 1004 C).
  • the MAP server 12 in return receives location updates for the user 20 - 1 from the location server 16 .
  • the MAP application 32 - 1 will not be able to provide location updates for the user 20 - 1 to the MAP server 12 unless the MAP application 32 - 1 is active.
  • in step 1006 , the location server 16 receives a location update for the user 20 - 1 directly or indirectly from another application running on the mobile device 18 - 1 or an application running on another device of the user 20 - 1 (step 1006 A).
  • the location server 16 then provides the location update for the user 20 - 1 to the MAP server 12 (step 1006 B).
  • the location manager 54 updates and stores the current location of the user 20 - 1 in the user record of the user 20 - 1 (step 1006 C).
  • the MAP server 12 is enabled to obtain location updates for the user 20 - 1 even when the MAP application 32 - 1 is not active at the mobile device 18 - 1 .
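  • A minimal sketch of the location-handling behavior described above, in which only the current location is kept per user and each update overwrites the previous value so that no location history accumulates at the MAP server; all names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch of a location manager that keeps only the *current* location per user record,
    overwriting it on every update so that no location history accumulates for the user.
    All names are illustrative. */
public class LocationManagerSketch {

    record Location(double latitude, double longitude) {}

    /** user ID -> current location only; previous values are discarded on update. */
    private final Map<String, Location> currentLocations = new ConcurrentHashMap<>();

    /** Handles a location update arriving from the MAP client or from the location server. */
    public void updateLocation(String userId, double latitude, double longitude) {
        currentLocations.put(userId, new Location(latitude, longitude)); // overwrite, never append
    }

    public Location getCurrentLocation(String userId) {
        return currentLocations.get(userId);
    }

    public static void main(String[] args) {
        LocationManagerSketch manager = new LocationManagerSketch();
        manager.updateLocation("user-20-1", 32.2427, -111.9249);
        manager.updateLocation("user-20-1", 32.2430, -111.9255); // old location is overwritten
        System.out.println(manager.getCurrentLocation("user-20-1"));
    }
}
```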
  • FIG. 5 illustrates the operation of the system 10 of FIG. 1 to provide the user profile of the user 20 - 1 of the mobile device 18 - 1 according to another embodiment of the present disclosure.
  • This discussion is equally applicable to user profiles of the other users 20 - 2 through 20 -N of the other mobile devices 18 - 2 through 18 -N.
  • an authentication process is performed (step 1100 ).
  • the mobile device 18 - 1 authenticates with the MAP server 12 (step 1100 A), and the MAP server 12 authenticates with the profile server 14 (step 1100 B).
  • authentication is performed using OpenID or similar technology.
  • authentication may alternatively be performed using separate credentials (e.g., username and password) of the user 20 - 1 for access to the MAP server 12 and the profile server 14 .
  • the profile server 14 returns an authentication succeeded message to the MAP server 12 (step 1100 C)
  • the MAP server 12 returns an authentication succeeded message to the MAP client 30 - 1 of the mobile device 18 - 1 (step 1100 D).
  • a user profile process is performed such that a user profile of the user 20 - 1 is obtained from the profile server 14 and delivered to the MAP server 12 (step 1102 ).
  • the profile manager 52 of the MAP server 12 sends a profile request to the profile server 14 (step 1102 A).
  • the profile server 14 returns the user profile of the user 20 - 1 to the profile manager 52 of the MAP server 12 (step 1102 B).
  • the profile server 14 may return a filtered version of the user profile of the user 20 - 1 to the MAP server 12 .
  • the profile server 14 may filter the user profile of the user 20 - 1 according to criteria specified by the user 20 - 1 .
  • the user profile of the user 20 - 1 may include demographic information, general interests, music interests, and movie interests, and the user 20 - 1 may specify that the demographic information or some subset thereof is to be filtered, or removed, before sending the user profile to the MAP server 12 .
  • Upon receiving the user profile of the user 20 - 1 , the profile manager 52 of the MAP server 12 processes the user profile (step 1102 C). More specifically, as discussed above, in the preferred embodiment, the profile manager 52 includes social network handlers for the social network services supported by the MAP server 12 .
  • the social network handlers process user profiles to generate user profiles for the MAP server 12 that include lists of keywords for each of a number of profile categories.
  • the profile categories may be the same for each of the social network handlers or different for each of the social network handlers.
  • After processing the user profile of the user 20 - 1 , the profile manager 52 of the MAP server 12 stores the resulting user profile for the user 20 - 1 (step 1102 D). More specifically, in one embodiment, the MAP server 12 stores user records for the users 20 - 1 through 20 -N in the datastore 64 ( FIG. 2 ). The user profile of the user 20 - 1 is stored in the user record of the user 20 - 1 .
  • the user record of the user 20 - 1 includes a unique identifier of the user 20 - 1 , the user profile of the user 20 - 1 , and, as discussed below, a current location of the user 20 - 1 . Note that the user profile of the user 20 - 1 may be updated as desired. For example, in one embodiment, the user profile of the user 20 - 1 is updated by repeating step 1102 each time the user 20 - 1 activates the MAP application 32 - 1 .
  • the user profiles of the users 20 - 1 through 20 -N may be obtained in any desired manner.
  • the user 20 - 1 may identify one or more favorite websites.
  • the profile manager 52 of the MAP server 12 may then crawl the one or more favorite websites of the user 20 - 1 to obtain keywords appearing in the one or more favorite websites of the user 20 - 1 . These keywords may then be stored as the user profile of the user 20 - 1 .
  • a process is performed such that a current location of the mobile device 18 - 1 and thus a current location of the user 20 - 1 is obtained by the MAP server 12 (step 1104 ).
  • the MAP application 32 - 1 of the mobile device 18 - 1 obtains the current location of the mobile device 18 - 1 from the location function 36 - 1 of the mobile device 18 - 1 .
  • the MAP application 32 - 1 then provides the current location of the user 20 - 1 of the mobile device 18 - 1 to the location server 16 (step 1104 A).
  • step 1104 A may be repeated periodically or in response to changes in the location of the mobile device 18 - 1 in order to provide location updates for the user 20 - 1 to the MAP server 12 .
  • the location server 16 then provides the current location of the user 20 - 1 to the MAP server 12 (step 1104 B).
  • the location server 16 may provide the current location of the user 20 - 1 to the MAP server 12 automatically in response to receiving the current location of the user 20 - 1 from the mobile device 18 - 1 or in response to a request from the MAP server 12 .
  • the location manager 54 of the MAP server 12 stores the current location of the mobile device 18 - 1 as the current location of the user 20 - 1 (step 1104 C). More specifically, in one embodiment, the current location of the user 20 - 1 is stored in the user record of the user 20 - 1 maintained in the datastore 64 of the MAP server 12 . Note that only the current location of the user 20 - 1 is stored in the user record of the user 20 - 1 . In this manner, the MAP server 12 maintains privacy for the user 20 - 1 since the MAP server 12 does not maintain a historical record of the location of the user 20 - 1 . As discussed below in detail, historical data maintained by the MAP server 12 is anonymized in order to maintain the privacy of the users 20 - 1 through 20 -N.
  • the use of the location server 16 is particularly beneficial when the mobile device 18 - 1 does not permit background processes, which is the case for the Apple® iPhone.
  • the MAP application 32 - 1 will not provide location updates for the user 20 - 1 to the location server 16 unless the MAP application 32 - 1 is active.
  • other applications running on the mobile device 18 - 1 may provide location updates to the location server 16 for the user 20 - 1 when the MAP application 32 - 1 is not active.
  • in step 1106 , the location server 16 receives a location update for the user 20 - 1 from another application running on the mobile device 18 - 1 or an application running on another device of the user 20 - 1 (step 1106 A).
  • the location server 16 then provides the location update for the user 20 - 1 to the MAP server 12 (step 1106 B).
  • the location manager 54 updates and stores the current location of the user 20 - 1 in the user record of the user 20 - 1 (step 1106 C).
  • the MAP server 12 is enabled to obtain location updates for the user 20 - 1 even when the MAP application 32 - 1 is not active at the mobile device 18 - 1 .
  • the MAP server 12 can provide a number of features.
  • a first feature that may be provided by the MAP server 12 is historical storage of anonymized user profile data by location. This historical storage of anonymized user profile data by location is performed by the history manager 56 of the MAP server 12 . More specifically, as illustrated in FIG. 6 , in the preferred embodiment, the history manager 56 maintains lists of users located in a number of geographic regions, or “location buckets.” Preferably, the location buckets are defined by floor(latitude, longitude) to a desired resolution. The higher the resolution, the smaller the size of the location buckets.
  • the location buckets are defined by floor(latitude, longitude) to a resolution of 1/10,000th of a degree such that the lower left-hand corners of the squares illustrated in FIG. 6 are defined by the floor(latitude, longitude) values at a resolution of 1/10,000th of a degree.
  • users are represented as dots, and location buckets 72 through 88 have lists of 1, 3, 2, 1, 1, 2, 1, 2, and 3 users, respectively.
  • at a predetermined time interval, such as, for example, every 15 minutes, the history manager 56 makes a copy of the lists of users in the location buckets, anonymizes the user profiles of the users in the lists to provide anonymized user profile data for the corresponding location buckets, and stores the anonymized user profile data in a number of history objects.
  • a history object is stored for each location bucket having at least one user.
  • a quadtree algorithm is used to efficiently create history objects for geographic regions (i.e., groups of one or more adjoining location buckets).
  • FIG. 7 graphically illustrates a scenario where a user moves from one location bucket to another, namely, from the location bucket 74 to the location bucket 76 .
  • the user is included on both the list for the location bucket 74 and the list for the location bucket 76 .
  • the user is flagged or otherwise marked as inactive for the location bucket 74 and active for the location bucket 76 .
  • users flagged as inactive are removed from the lists of users for the location buckets.
  • FIG. 8 is a flow chart illustrating the operation of a foreground “bucketization” process performed by the history manager 56 to maintain the lists of users for location buckets according to one embodiment of the present disclosure.
  • the history manager 56 receives a location update for a user (step 1200 ). For this discussion, assume that the location update is received for the user 20 - 1 .
  • the history manager 56 determines a location bucket corresponding to the updated location (i.e., the current location) of the user 20 - 1 (step 1202 ).
  • the location of the user 20 - 1 is expressed as latitude and longitude coordinates
  • the history manager 56 determines the location bucket by determining floor values of the latitude and longitude coordinates, which can be written as floor(latitude, longitude) at a desired resolution.
  • if, for example, the latitude and longitude coordinates for the location of the user 20 - 1 are 32.24267381553987 and −111.9249213502935, respectively, and the floor values are to be computed to a resolution of 1/10,000th of a degree, then the floor values for the latitude and longitude coordinates are 32.2426 and −111.9249.
  • the floor values for the latitude and longitude coordinates correspond to a particular location bucket.
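  • The bucket derivation can be sketched as follows. Truncation toward zero is used here because it reproduces the example values given above (32.2426 and −111.9249); a strict mathematical floor would instead map the negative longitude to −111.9250. The class and method names are illustrative.

```java
/** Sketch of deriving a location-bucket key from latitude/longitude at a resolution of
    1/10,000 of a degree. Truncation toward zero reproduces the example values in the text. */
public class LocationBucketSketch {

    static final double RESOLUTION = 10_000.0; // 1/10,000 of a degree

    static double bucketize(double coordinate) {
        return ((long) (coordinate * RESOLUTION)) / RESOLUTION;
    }

    static String bucketKey(double latitude, double longitude) {
        return bucketize(latitude) + "," + bucketize(longitude);
    }

    public static void main(String[] args) {
        System.out.println(bucketKey(32.24267381553987, -111.9249213502935)); // 32.2426,-111.9249
    }
}
```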
  • the history manager 56 determines whether the user 20 - 1 is new to the location bucket (step 1204 ). In other words, the history manager 56 determines whether the user 20 - 1 is already on the list of users for the location bucket. If the user 20 - 1 is new to the location bucket, the history manager 56 creates an entry for the user 20 - 1 in the list of users for the location bucket (step 1206 ). Returning to step 1204 , if the user 20 - 1 is not new to the location bucket, the history manager 56 updates the entry for the user 20 - 1 in the list of users for the location bucket (step 1208 ). At this point, whether proceeding from step 1206 or 1208 , the user 20 - 1 is flagged as active in the list of users for the location bucket (step 1210 ).
  • the history manager 56 determines whether the user 20 - 1 has moved from another location bucket (step 1212 ). More specifically, the history manager 56 determines whether the user 20 - 1 is included in the list of users for another location bucket and is currently flagged as active in that list. If the user 20 - 1 has not moved from another location bucket, the process proceeds to step 1216 . If the user 20 - 1 has moved from another location bucket, the history manager 56 flags the user 20 - 1 as inactive in the list of users for the other location bucket from which the user 20 - 1 has moved (step 1214 ).
  • the history manager 56 determines whether it is time to persist (step 1216 ). More specifically, as mentioned above, the history manager 56 operates to persist history objects at a predetermined time interval such as, for example, every 15 minutes. Thus, the history manager 56 determines that it is time to persist if the predetermined time interval has expired. If it is not time to persist, the process returns to step 1200 and is repeated for a next received location update, which will typically be for another user. If it is time to persist, the history manager 56 creates a copy of the lists of users for the location buckets and passes the copy of the lists to an anonymization and storage process (step 1218 ).
  • the anonymization and storage process is a separate process performed by the history manager 56 .
  • the history manager 56 then removes inactive users from the lists of users for the location buckets (step 1220 ). The process then returns to step 1200 and is repeated for a next received location update, which will typically be for another user.
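  • a non-authoritative sketch of this foreground bucketization loop (the data structures, the names buckets and handle_location_update, and the stubbed anonymize_and_store call are assumptions for illustration, not the disclosed implementation):

      import copy
      import time

      PERSIST_INTERVAL = 15 * 60          # e.g., persist every 15 minutes
      buckets = {}                        # bucket key -> {user_id: {"location": ..., "active": bool}}
      last_persist = time.time()

      def anonymize_and_store(snapshot):
          pass                            # placeholder for the separate anonymization and storage process

      def handle_location_update(user_id, latitude, longitude):
          global last_persist
          key = (int(latitude * 10000), int(longitude * 10000))          # steps 1200-1202
          entry = buckets.setdefault(key, {}).setdefault(user_id, {})    # steps 1204-1208
          entry["location"] = (latitude, longitude)
          entry["active"] = True                                         # step 1210
          for other_key, users in buckets.items():                       # steps 1212-1214
              if other_key != key and user_id in users:
                  users[user_id]["active"] = False
          if time.time() - last_persist >= PERSIST_INTERVAL:             # step 1216
              anonymize_and_store(copy.deepcopy(buckets))                # step 1218
              for users in buckets.values():                             # step 1220
                  for uid in [u for u, e in users.items() if not e["active"]]:
                      del users[uid]
              last_persist = time.time()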
  • FIG. 9 is a flow chart illustrating the anonymization and storage process performed by the history manager 56 at the predetermined time interval according to one embodiment of the present disclosure.
  • the anonymization and storage process receives the copy of the lists of users for the location buckets passed to the anonymization and storage process by the bucketization process of FIG. 8 (step 1300 ).
  • anonymization is performed for each of the location buckets having at least one user in order to provide anonymized user profile data for the location buckets (step 1302 ).
  • Anonymization prevents connecting information stored in the history objects stored by the history manager 56 back to the users 20 - 1 through 20 -N or at least substantially increases a difficulty of connecting information stored in the history objects stored by the history manager 56 back to the users 20 - 1 through 20 -N.
  • the anonymized user profile data for the location buckets is stored in a number of history objects (step 1304 ).
  • a separate history object is stored for each of the location buckets, where the history object of a location bucket includes the anonymized user profile data for the location bucket.
  • a quadtree algorithm is used to efficiently store the anonymized user profile data in a number of history objects such that each history object stores the anonymized user profile data for one or more location buckets.
  • FIG. 10 graphically illustrates one embodiment of the anonymization process of step 1302 of FIG. 9 .
  • anonymization is performed by creating anonymous user records for the users in the lists of users for the location buckets.
  • the anonymous user records are not connected back to the users 20 - 1 through 20 -N.
  • each user in the lists of users for the location buckets has a corresponding user record 90 .
  • the user record 90 includes a unique user identifier (ID) for the user, the current location of the user, and the user profile of the user.
  • the user profile includes keywords for each of a number of profile categories, which are stored in corresponding profile category records 92 - 1 through 92 -M.
  • Each of the profile category records 92 - 1 through 92 -M includes a user ID for the corresponding user which may be the same user ID used in the user record 90 , a category ID, and a list of keywords for the profile category.
  • an anonymous user record 94 is created from the user record 90 .
  • the user ID is replaced with a new user ID that is not connected back to the user, which is also referred to herein as an anonymous user ID.
  • This new user ID is different than any other user ID used for anonymous user records created from the user record of the user for any previous or subsequent time periods. In this manner, anonymous user records for a single user created over time cannot be linked to one another.
  • anonymous profile category records 96 - 1 through 96 -M are created for the profile category records 92 - 1 through 92 -M.
  • the user ID is replaced with a new user ID, which may be the same new user ID included in the anonymous user record 94 .
  • the anonymous profile category records 96 - 1 through 96 -M include the same category IDs and lists of keywords as the corresponding profile category records 92 - 1 through 92 -M. Note that the location of the user is not stored in the anonymous user record 94 . With respect to location, it is sufficient that the anonymous user record 94 is linked to a location bucket.
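  • a minimal sketch of this per-user anonymization, assuming a hypothetical anonymize helper and one-time random identifiers:

      import uuid

      def anonymize(user_record, profile_category_records):
          """Create an anonymous user record and anonymous profile category
          records: the user ID is replaced with a one-time ID (so records
          from different time periods cannot be linked), the exact location
          is dropped, and the category IDs and keywords are kept."""
          anon_id = uuid.uuid4().hex
          anonymous_user_record = {"user_id": anon_id}
          anonymous_category_records = [
              {"user_id": anon_id,
               "category_id": record["category_id"],
               "keywords": list(record["keywords"])}
              for record in profile_category_records
          ]
          return anonymous_user_record, anonymous_category_records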
  • the history manager 56 performs anonymization in a manner similar to that described above with respect to FIG. 10 .
  • the profile category records for the group of users in a location bucket, or the group of users in a number of location buckets representing a node in a quadtree data structure may be selectively randomized among the anonymous user records of those users.
  • each anonymous user record would have a user profile including a selectively randomized set of profile category records (including keywords) from a cumulative list of profile category records for all of the users in the group.
  • the history manager 56 may perform anonymization by storing an aggregate user profile for each location bucket, or each group of location buckets representing a node in a quadtree data structure (see below).
  • the aggregate user profile may include a list of all keywords and potentially the number of occurrences of each keyword in the user profiles of the corresponding group of users. In this manner, the data stored by the history manager 56 is not connected back to the users 20 - 1 through 20 -N.
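  • a sketch of this aggregate-profile form of anonymization, assuming each user profile is a flat list of keywords and counting each keyword at most once per user:

      from collections import Counter

      def aggregate_user_profile(user_keyword_lists):
          """Build an aggregate user profile for a location bucket or quadtree
          node: the list of all keywords plus the number of user profiles in
          which each keyword occurs, with no per-user records retained."""
          keyword_counts = Counter()
          for keywords in user_keyword_lists:
              keyword_counts.update(set(keywords))
          return {"total_users": len(user_keyword_lists),
                  "keyword_counts": dict(keyword_counts)}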
  • FIG. 11 is a flow chart illustrating the storing step (step 1304 ) of FIG. 9 in more detail according to one embodiment of the present disclosure.
  • the history manager 56 processes the location buckets using a quadtree algorithm to produce a quadtree data structure, where each node of the quadtree data structure includes one or more of the location buckets having a combined number of users that is at most a predefined maximum number of users (step 1400 ).
  • the history manager 56 then stores a history object for each node in the quadtree data structure having at least one user (step 1402 ).
  • Each history object includes location information, timing information, data, and quadtree data structure information.
  • the location information included in the history object defines a combined geographic area of the location bucket(s) forming the corresponding node of the quadtree data structure.
  • the location information may be latitude and longitude coordinates for a northeast corner of the combined geographic area of the node of the quadtree data structure and a southwest corner of the combined geographic area for the node of the quadtree data structure.
  • the timing information includes information defining a time window for the history object, which may be, for example, a start time for the corresponding time interval and an end time for the corresponding time interval.
  • the data includes the anonymized user profile data for the users in the list(s) maintained for the location bucket(s) forming the node of the quadtree data structure for which the history object is stored.
  • the data may include a total number of users in the location bucket(s) forming the node of the quadtree data structure.
  • the quadtree data structure information includes information defining a quadtree depth of the node in the quadtree data structure.
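  • for illustration, the fields described above might be collected in a structure along the following lines (the field names are assumptions, not the disclosed schema):

      from dataclasses import dataclass, field
      from typing import List, Tuple

      @dataclass
      class HistoryObject:
          ne_corner: Tuple[float, float]    # location information: northeast corner of the node's combined area
          sw_corner: Tuple[float, float]    # location information: southwest corner of the node's combined area
          start_time: float                 # timing information: start of the time interval
          end_time: float                   # timing information: end of the time interval
          anonymous_user_records: List[dict] = field(default_factory=list)  # anonymized user profile data
          total_users: int = 0              # total number of users in the node's location bucket(s)
          quadtree_depth: int = 0           # quadtree data structure information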
  • FIG. 12 is a flow chart illustrating a quadtree algorithm that may be used to process the location buckets to form the quadtree data structure in step 1400 of FIG. 11 according to one embodiment of the present disclosure.
  • a geographic area served by the MAP server 12 is divided into a number of geographic regions, each including multiple location buckets. These geographic regions are also referred to herein as base quadtree regions.
  • the geographic area served by the MAP server 12 may be, for example, a city, a state, a country, or the like. Further, the geographic area may be the only geographic area served by the MAP server 12 or one of a number of geographic areas served by the MAP server 12 .
  • the base quadtree regions have a size of 2^n × 2^n location buckets, where n is an integer greater than or equal to 1.
  • the history manager 56 determines whether there are any more base quadtree regions to process (step 1500 ). If there are more base quadtree regions to process, the history manager 56 sets a current node to the next base quadtree region to process, which for the first iteration is the first base quadtree region (step 1502 ). The history manager 56 then determines whether the number of users in the current node is greater than a predefined maximum number of users and whether a current quadtree depth is less than a maximum quadtree depth (step 1504 ). In one embodiment, the maximum quadtree depth may be reached when the current node corresponds to a single location bucket. However, the maximum quadtree depth may be set such that the maximum quadtree depth is reached before the current node reaches a single location bucket.
  • the history manager 56 creates a number of child nodes for the current node (step 1506 ). More specifically, the history manager 56 creates a child node for each quadrant of the current node. The users in the current node are then assigned to the appropriate child nodes based on the location buckets in which the users are located (step 1508 ), and the current node is then set to the first child node (step 1510 ). At this point, the process returns to step 1504 and is repeated.
  • the history manager 56 determines whether the current node has any more sibling nodes (step 1512 ). Sibling nodes are child nodes of the same parent node. If so, the history manager 56 sets the current node to the next sibling node of the current node (step 1514 ), and the process returns to step 1504 and is repeated. Once there are no more sibling nodes to process, the history manager 56 determines whether the current node has a parent node (step 1516 ). If so, since the parent node has already been processed, the history manager 56 determines whether the parent node has any sibling nodes that need to be processed (step 1518 ).
  • if so, the history manager 56 sets the next sibling node of the parent node to be processed as the current node (step 1520 ). From this point, the process returns to step 1504 and is repeated.
  • returning to step 1516 , if the current node does not have a parent node, the process returns to step 1500 and is repeated until there are no more base quadtree regions to process. Once there are no more base quadtree regions to process, the finished quadtree data structure is returned to the process of FIG. 11 such that the history manager 56 can then store the history objects for nodes in the quadtree data structure having at least one user (step 1522 ).
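  • the iterative parent/sibling traversal of FIG. 12 can be expressed more compactly as recursion; the following sketch (the node representation and the flat (x, y) coordinates are assumptions) captures the same subdivision rule:

      MAX_USERS = 3        # predefined maximum number of users per node (3 in the example of FIGS. 13A-13E)
      MAX_DEPTH = 3        # e.g., the depth at which a node covers a single location bucket

      def build_quadtree(bounds, users, depth=0):
          """bounds = (min_x, min_y, max_x, max_y); users = list of (x, y) locations.
          Returns the leaf nodes of the finished quadtree as (bounds, users, depth) tuples."""
          if len(users) <= MAX_USERS or depth >= MAX_DEPTH:
              return [(bounds, users, depth)]
          min_x, min_y, max_x, max_y = bounds
          mid_x, mid_y = (min_x + max_x) / 2.0, (min_y + max_y) / 2.0
          quadrants = [(min_x, min_y, mid_x, mid_y), (mid_x, min_y, max_x, mid_y),
                       (min_x, mid_y, mid_x, max_y), (mid_x, mid_y, max_x, max_y)]
          leaves = []
          for q in quadrants:
              # Assign users to the quadrant containing their location (simplified edge handling).
              inside = [u for u in users if q[0] <= u[0] < q[2] and q[1] <= u[1] < q[3]]
              leaves.extend(build_quadtree(q, inside, depth + 1))
          return leaves

      # History objects would then be stored only for leaves with at least one user:
      # [leaf for leaf in build_quadtree(region_bounds, user_locations) if leaf[1]]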
  • FIGS. 13A through 13E graphically illustrate the process of FIG. 12 for the generation of the quadtree data structure for one exemplary base quadtree region 98 .
  • FIG. 13A illustrates the base quadtree region 98 .
  • the base quadtree region 98 is an 8 ⁇ 8 square of location buckets, where each of the small squares represents a location bucket.
  • the history manager 56 determines whether the number of users in the base quadtree region 98 is greater than the predetermined maximum number of users. In this example, the predetermined maximum number of users is 3. Since the number of users in the base quadtree region 98 is greater than 3, the history manager 56 divides the base quadtree region 98 into four child nodes 100 - 1 through 100 - 4 , as illustrated in FIG. 13B .
  • the history manager 56 determines whether the number of users in the child node 100 - 1 is greater than the predetermined maximum, which again for this example is 3. Since the number of users in the child node 100 - 1 is greater than 3, the history manager 56 divides the child node 100 - 1 into four child nodes 102 - 1 through 102 - 4 , as illustrated in FIG. 13C . The child nodes 102 - 1 through 102 - 4 are children of the child node 100 - 1 . The history manager 56 then determines whether the number of users in the child node 102 - 1 is greater than the predetermined maximum number of users, which again is 3. Since there are more than 3 users in the child node 102 - 1 , the history manager 56 further divides the child node 102 - 1 into four child nodes 104 - 1 through 104 - 4 , as illustrated in FIG. 13D .
  • the history manager 56 determines whether the number of users in the child node 104 - 1 is greater than the predetermined maximum number of users, which again is 3. Since the number of users in the child node 104 - 1 is not greater than the predetermined maximum number of users, the child node 104 - 1 is identified as a node for the finished quadtree data structure, and the history manager 56 proceeds to process the sibling nodes of the child node 104 - 1 , which are the child nodes 104 - 2 through 104 - 4 .
  • the child nodes 104 - 2 through 104 - 4 are also identified as nodes for the finished quadtree data structure.
  • the history manager 56 identifies the parent node of the child nodes 104 - 1 through 104 - 4 , which in this case is the child node 102 - 1 .
  • the history manager 56 then processes the sibling nodes of the child node 102 - 1 , which are the child nodes 102 - 2 through 102 - 4 .
  • the number of users in each of the child nodes 102 - 2 through 102 - 4 is less than the predetermined maximum number of users.
  • the child nodes 102 - 2 through 102 - 4 are identified as nodes for the finished quadtree data structure.
  • the history manager 56 identifies the parent node of the child nodes 102 - 1 through 102 - 4 , which in this case is the child node 100 - 1 .
  • the history manager 56 then processes the sibling nodes of the child node 100 - 1 , which are the child nodes 100 - 2 through 100 - 4 . More specifically, the history manager 56 determines that the child node 100 - 2 includes more than the predetermined maximum number of users and, as such, divides the child node 100 - 2 into four child nodes 106 - 1 through 106 - 4 , as illustrated in FIG. 13E .
  • the child nodes 106 - 1 through 106 - 4 are identified as nodes for the finished quadtree data structure. Then, the history manager 56 proceeds to process the child nodes 100 - 3 and 100 - 4 . Since the number of users in each of the child nodes 100 - 3 and 100 - 4 is not greater than the predetermined maximum number of users, the child nodes 100 - 3 and 100 - 4 are identified as nodes for the finished quadtree data structure.
  • the quadtree data structure for the base quadtree region 98 includes the child nodes 104 - 1 through 104 - 4 , the child nodes 102 - 2 through 102 - 4 , the child nodes 106 - 1 through 106 - 4 , and the child nodes 100 - 3 and 100 - 4 , as illustrated in FIG. 13E .
  • the history manager 56 stores a history object for each of the nodes in the quadtree data structure including at least one user.
  • the history manager 56 stores history objects for the child nodes 104 - 2 and 104 - 3 , the child nodes 102 - 2 and 102 - 4 , the child nodes 106 - 1 and 106 - 4 , and the child node 100 - 3 .
  • no history objects are stored for the nodes that do not have any users (i.e., the child nodes 104 - 1 and 104 - 4 , the child node 102 - 3 , the child nodes 106 - 2 and 106 - 3 , and the child node 100 - 4 ).
  • FIG. 14 illustrates the operation of the system 10 of FIG. 1 wherein a mobile device is enabled to request and receive historical data from the MAP server 12 according to one embodiment of the present disclosure.
  • the MAP application 32 - 1 of the mobile device 18 - 1 sends a historical request to the MAP client 30 - 1 of the mobile device 18 - 1 (step 1600 ).
  • the historical request identifies either a POI or an AOI and a time window.
  • a POI is a geographic point whereas an AOI is a geographic area.
  • the historical request is for a POI and a time window, where the POI is a POI corresponding to the current location of the user 20 - 1 , a POI selected from a list of POIs defined by the user 20 - 1 of the mobile device 18 - 1 , a POI selected from a list of POIs defined by the MAP application 32 - 1 or the MAP server 12 , a POI selected by the user 20 - 1 from a map, a POI implicitly defined via a separate application (e.g., POI is implicitly defined as the location of the nearest Starbucks coffee house in response to the user 20 - 1 performing a Google search for “Starbucks”), or the like.
  • the list of POIs may include static POIs which may be defined by street addresses or latitude and longitude coordinates, dynamic POIs which may be defined as the current locations of one or more friends of the user 20 - 1 , or both.
  • the historical request is for an AOI and a time window
  • the AOI may be an AOI of a geographic area of a predefined shape and size centered at the current location of the user 20 - 1 , an AOI selected from a list of AOIs defined by the user 20 - 1 , an AOI selected from a list of AOIs defined by the MAP application 32 - 1 or the MAP server 12 , an AOI selected by the user 20 - 1 from a map, an AOI implicitly defined via a separate application (e.g., AOI is implicitly defined as an area of a predefined shape and size centered at the location of the nearest Starbucks coffee house in response to the user 20 - 1 performing a Google search for “Starbucks”), or the like.
  • the list of AOIs may include static AOIs, dynamic AOIs which may be defined as areas of a predefined shape and size centered at the current locations of one or more friends of the user 20 - 1 , or both.
  • the POI or AOI of the historical request may be selected by the user 20 - 1 via the MAP application 32 - 1 .
  • the MAP application 32 - 1 automatically uses the current location of the user 20 - 1 as the POI or as a center point for an AOI of a predefined shape and size.
  • the time window for the historical request may be relative to the current time.
  • the time window may be the last hour, the last day, the last week, the last month, or the like.
  • the time window may be an arbitrary time window selected by the user 20 - 1 such as, for example, yesterday from 7 pm-9 pm, last Friday, last week, or the like.
  • while, in this example, the historical request includes a single POI or AOI and a single time window, the historical request may include multiple POIs or AOIs and/or multiple time windows.
  • the historical request is made in response to user input from the user 20 - 1 of the mobile device 18 - 1 .
  • the user 20 - 1 selects either a POI or an AOI and a time window and then instructs the MAP application 32 - 1 to make the historical request by, for example, selecting a corresponding button on a graphical user interface.
  • the historical request is made automatically in response to some event such as, for example, opening the MAP application 32 - 1 .
  • Upon receiving the historical request from the MAP application 32 - 1 , the MAP client 30 - 1 forwards the historical request to the MAP server 12 (step 1602 ). Note that the MAP client 30 - 1 may, in some cases, process the historical request from the MAP application 32 - 1 before forwarding the historical request to the MAP server 12 . For example, if the historical request from the MAP application 32 - 1 is for multiple POIs/AOIs and/or for multiple time windows, the MAP client 30 - 1 may process the historical request from the MAP application 32 - 1 to produce multiple historical requests to be sent to the MAP server 12 . For instance, a separate historical request may be produced for each POI/AOI and time window combination. However, for this discussion, the historical request is for a single POI or AOI for a single time window.
  • Upon receiving the historical request from the MAP client 30 - 1 , the MAP server 12 processes the historical request (step 1604 ). More specifically, the historical request is processed by the history manager 56 of the MAP server 12 . First, the history manager 56 obtains history objects that are relevant to the historical request from the datastore 64 of the MAP server 12 . The relevant history objects are those recorded for locations relevant to the POI or AOI and the time window for the historical request. The history manager 56 then processes the relevant history objects to provide historical aggregate profile data for the POI or AOI in a time context and/or a geographic context.
  • the historical aggregate profile data is based on the user profiles of the anonymous user records in the relevant history objects as compared to the user profile of the user 20 - 1 or a select subset thereof. In another embodiment, the historical aggregate profile data is based on the user profiles of the anonymous user records in the relevant history objects as compared to a target user profile defined or otherwise specified by the user 20 - 1 .
  • the history manager 56 divides the time window for the historical request into a number of time bands. Each time band is a fragment of the time window. Then, for each time band, the history manager 56 identifies a subset of the relevant history objects that are relevant to the time band (i.e., history objects recorded for time periods within the time band or that overlap the time band) and generates an aggregate profile for each of those history objects based on the user profiles of the anonymous user records in the history objects and the user profile, or a select subset of the user profile, of the user 20 - 1 . Then, the history manager 56 averages or otherwise combines the aggregate profiles for the history objects relevant to the time band. The resulting data for the time bands forms historical aggregate profile data that is to be returned to the MAP client 30 - 1 , as discussed below.
  • For the geographic context, the history manager 56 generates an average aggregate profile for each of a number of grids surrounding the POI or within the AOI. More specifically, history objects relevant to the POI or the AOI and the time window of the historical request are obtained. Then, the user profiles of the anonymous users in the relevant history objects are used to generate average aggregate profiles for a number of grids, or geographic regions, at or surrounding the POI or the AOI. These average aggregate profiles for the grids form historical aggregate profile data that is to be returned to the MAP client 30 - 1 , as discussed below.
  • the MAP server 12 returns the resulting historical aggregate profile data to the MAP client 30 - 1 (step 1606 ).
  • the historical aggregate profile data may be in a time context or a geographic context.
  • the data returned to the MAP client 30 - 1 may be raw historical data.
  • the raw historical data may be the relevant history objects or data from the relevant history objects such as, for example, the user records in the relevant history objects, the user profiles of the anonymous user records in the relevant history objects, or the like.
  • Upon receiving the historical aggregate profile data, the MAP client 30 - 1 passes the historical aggregate profile data to the MAP application 32 - 1 (step 1608 ).
  • the MAP client 30 - 1 may process the raw historical data to provide desired data.
  • the MAP client 30 - 1 may process the raw historical data in order to generate average aggregate profiles for time bands within the time window of the historical request and/or to generate average aggregate profiles for regions near the POI or within the AOI of the historical request in a manner similar to that described above.
  • the MAP application 32 - 1 then presents the historical aggregate profile data to the user 20 - 1 (step 1610 ).
  • FIGS. 15A and 15B illustrate a flow chart for a process for generating historical aggregate profile data in a time context according to one embodiment of the present disclosure.
  • the history manager 56 establishes a bounding box for the historical request based on the POI or the AOI for the historical request (step 1700 ).
  • while a bounding box is used in this example, other geographic shapes may be used to define a bounding region for the historical request (e.g., a bounding circle).
  • the historical request is from a mobile device of a requesting user, which in this example is the user 20 - 1 . If the historical request is for a POI, the bounding box is a geographic region corresponding to or surrounding the POI.
  • the bounding box may be a square geographic region of a predefined size centered on the POI. If the historical request is for an AOI, the bounding box is the AOI. In addition to establishing the bounding box, the history manager 56 establishes a time window for the historical request (step 1702 ). For example, if the historical request is for the last week and the current date and time are Sep. 17, 2009 at 10:00 pm, the history manager 56 may generate the time window as Sep. 10, 2009 at 10:00 pm through Sep. 17, 2009 at 10:00 pm.
  • the history manager 56 obtains history objects relevant to the bounding box and the time window for the historical request from the datastore 64 of the MAP server 12 (step 1704 ).
  • the relevant history objects are history objects recorded for time periods within or intersecting the time window and for locations, or geographic areas, within or intersecting the bounding box for the historical request.
  • the history manager 56 also determines an output time band size (step 1706 ).
  • the output time band size is 1/100th of the amount of time from the start of the time window to the end of the time window for the historical request. For example, if the amount of time in the time window for the historical request is one week, the output time band size may be set to 1/100th of a week, which is 1.68 hours or 1 hour and 41 minutes.
  • the history manager 56 sorts the relevant history objects into the appropriate output time bands of the time window for the historical request. More specifically, in this embodiment, the history manager 56 creates an empty list for each output time band of the time window (step 1708 ). Then, the history manager 56 gets the next history object from the history objects identified in step 1704 as being relevant to the historical request (step 1710 ) and adds that history object to the list(s) for the appropriate output time band(s) (step 1712 ). Note that if the history object is recorded for a time period that overlaps two or more of the output time bands, then the history object may be added to all of the output time bands to which the history object is relevant. The history manager 56 then determines whether there are more relevant history objects to sort into the output time bands (step 1714 ). If so, the process returns to step 1710 and is repeated until all of the relevant history objects have been sorted into the appropriate output time bands.
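  • a sketch of this sorting step, assuming history objects carry start_time and end_time attributes and the 1/100-of-the-window default described above:

      def sort_into_time_bands(history_objects, window_start, window_end, num_bands=100):
          """Split the time window into equal output time bands and add each
          history object to every band its recorded time period overlaps."""
          band_size = (window_end - window_start) / num_bands
          bands = [[] for _ in range(num_bands)]
          for ho in history_objects:
              first = max(0, int((ho.start_time - window_start) // band_size))
              last = min(num_bands - 1, int((ho.end_time - window_start) // band_size))
              for index in range(first, last + 1):
                  bands[index].append(ho)
          return bands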
  • the history manager 56 determines an equivalent depth of the bounding box (D_BB) within the quadtree data structures used to store the history objects (step 1716 ). More specifically, the area of the base quadtree region (e.g., the base quadtree region 98 ) is referred to as A_BASE. Then, at each quadtree depth D, the area of the corresponding quadtree nodes is (1/4)^D * A_BASE. In other words, the area of a child node is 1/4th of the area of the parent node of that child node. The history manager 56 determines the equivalent depth of the bounding box (D_BB) by determining the quadtree depth at which the area of the corresponding quadtree nodes most closely matches the area of the bounding box (A_BB).
  • the equivalent quadtree depth of the bounding box (D_BB) determined in step 1716 is used below in order to efficiently determine the ratios of the area of the bounding box (A_BB) to the areas of the relevant history objects (A_HO).
  • note that the ratios of the area of the bounding box (A_BB) to the areas of the relevant history objects (A_HO) may be otherwise computed, in which case step 1716 would not be needed.
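  • one way to compute the equivalent depth, following the area relationship above (a brute-force sketch; a closed form using logarithms would work equally well):

      def equivalent_depth(a_bounding_box, a_base, max_depth):
          """Return the quadtree depth D at which the node area A_BASE / 4**D
          most closely matches the area of the bounding box."""
          best_depth, best_error = 0, float("inf")
          for depth in range(max_depth + 1):
              error = abs(a_base / (4 ** depth) - a_bounding_box)
              if error < best_error:
                  best_depth, best_error = depth, error
          return best_depth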
  • the process proceeds to FIG. 15B where the history manager 56 gets the list for the next output time band of the time window for the historical request (step 1718 ).
  • the history manager 56 then gets the next history object in the list for the output time band (step 1720 ).
  • the history manager 56 sets a relevancy weight for the history object, where the relevancy weight is indicative of a relevancy of the history object to the bounding box (step 1722 ).
  • a history object includes anonymized user profile data for a corresponding geographic area. If that geographic area is within or significantly overlaps the bounding box, then the history object will have a high relevancy weight. However, if the geographic area only overlaps the bounding box slightly, then the history object will have a low relevancy weight.
  • the relevancy weight for the history object is set to an approximate ratio of the area of the bounding box (A_BB) to the area of the history object (A_HO), computed based on a difference between the quadtree depth of the history object (D_HO) and the equivalent quadtree depth of the bounding box (D_BB).
  • the quadtree depth of the history object (D_HO) is stored in the history object. More specifically, in one embodiment, the relevancy weight of the history object is set according to the following:
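  • the expression itself is not reproduced in this text; purely as an assumption consistent with the area relationship just described, the weight could be the area ratio implied by the depth difference, capped at 1:

      def relevancy_weight(d_ho, d_bb):
          """Approximate A_BB / A_HO from the depth difference: each additional
          level of quadtree depth shrinks a node's area by a factor of 4. The
          cap at 1 (for history objects smaller than the bounding box) is an
          assumption, not the formula of the original disclosure."""
          return min(1.0, 4.0 ** (d_ho - d_bb))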
  • the history manager 56 generates an aggregate profile for the history object using the user profile of the requesting user, which for this example is the user 20 - 1 , or a select subset thereof (step 1724 ).
  • the requesting user 20 - 1 may be enabled to select a subset of his user profile to be compared to the user profiles of the anonymous user records in the history objects by, for example, selecting one or more desired profile categories.
  • the history manager 56 compares the user profile of the user 20 - 1 , or the select subset thereof, to the user profiles of the anonymous user records stored in the history object.
  • the resulting aggregate profile for the history object includes a number of user matches and a total number of users.
  • the number of user matches is the number of anonymous user records in the history object having user profiles that include at least one keyword that matches at least one keyword in the user profile of the user 20 - 1 or at least one keyword in the select subset of the user profile of the user 20 - 1 .
  • the total number of users is the total number of anonymous user records in the history object.
  • the aggregate profile for the history object may include a list of keywords from the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 having at least one user match.
  • the aggregate profile for the history object may include the number of user matches for each of the keywords from the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 having at least one user match.
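  • a sketch of this comparison, assuming each anonymous user record carries a list of keywords and the requesting user supplies his or her keywords (or a selected subset of them):

      from collections import Counter

      def aggregate_for_history_object(requesting_keywords, anonymous_user_records):
          """Count the anonymous user records that share at least one keyword
          with the requesting user, the total number of records, and the
          per-keyword match counts."""
          requesting = set(requesting_keywords)
          user_matches = 0
          keyword_matches = Counter()
          for record in anonymous_user_records:
              overlap = requesting & set(record["keywords"])
              if overlap:
                  user_matches += 1
                  keyword_matches.update(overlap)
          return {"user_matches": user_matches,
                  "total_users": len(anonymous_user_records),
                  "keyword_matches": dict(keyword_matches)}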
  • the history manager 56 determines whether there are more history objects in the list for the output time band (step 1726 ). If so, the process returns to step 1720 and is repeated until all of the history objects in the list for the output time band have been processed. Once all of the history objects in the list for the output time band have been processed, the history manager 56 combines the aggregate profiles of the history objects in the output time band to provide a combined aggregate profile for the output time band. More specifically, in this embodiment, the history manager 56 computes a weighted average of the aggregate profiles for the history objects in the output time band using the relevancy weights of the history objects (step 1728 ). In one embodiment, the aggregate profile of each of the history objects includes the number of user matches for the history object and the total number of users for the history object.
  • the weighted average of the aggregate profiles of the history objects in the output time band includes a weighted average of the number of user matches over all of the history objects in the output time band.
  • the average aggregate profile for the output time band also includes a weighted average of the total number of users over all of the history objects in the output time band.
  • in addition or alternatively, the average aggregate profile for the output time band may include a weighted average of the ratio of user matches to total users over all of the history objects in the output time band, where:
  • relevancy_i is the relevancy weight computed in step 1722 for the i-th history object
  • number_of_user_matches_i is the number of user matches from the aggregate profile of the i-th history object
  • total_users_i is the total number of users from the aggregate profile of the i-th history object
  • n is the number of history objects in the list for the output time band.
  • the average aggregate profile for the output time band may also include a weighted average of the number of user matches for each keyword from the user profile of the user 20 - 1 , or the select subset thereof, having at least one user match.
  • likewise, the average aggregate profile for the output time band may include a weighted average of the ratio of user matches to total users for each such keyword, where:
  • relevancy_i is the relevancy weight computed in step 1722 for the i-th history object
  • number_of_user_matches_KEYWORD_j,i is the number of user matches for the j-th keyword for the i-th history object
  • total_users_i is the total number of users from the aggregate profile of the i-th history object
  • n is the number of history objects in the list for the output time band.
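  • the weighted averages referenced in the preceding bullets are not written out here; a relevancy-weighted form consistent with the variable definitions, normalized by the sum of the relevancy weights (the normalization is an assumption), is sketched below, and the per-keyword averages follow the same pattern:

      def weighted_average(values, weights):
          """Relevancy-weighted average of per-history-object values."""
          total_weight = sum(weights)
          return sum(w * v for w, v in zip(weights, values)) / total_weight if total_weight else 0.0

      # For one output time band, with aggregate profiles "profiles" and relevancy
      # weights "weights" (hypothetical variable names):
      #   avg_user_matches = weighted_average([p["user_matches"] for p in profiles], weights)
      #   avg_total_users  = weighted_average([p["total_users"] for p in profiles], weights)
      #   avg_match_ratio  = weighted_average([p["user_matches"] / p["total_users"] for p in profiles], weights)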
  • the history manager 56 determines whether there are more output time bands to process (step 1730 ). If so, the process returns to step 1718 and is repeated until the lists for all output time bands have been processed. Once all of the output time bands have been processed, the history manager 56 outputs the combined aggregate profiles for the output time bands. More specifically, in this embodiment, the history manager 56 outputs the weighted average aggregate profiles computed in step 1728 for the output time bands as the historical aggregate profile data to be returned to the mobile device 18 - 1 (step 1732 ).
  • FIG. 16 is an exemplary Graphical User Interface (GUI) 108 that may be provided by the MAP application 32 - 1 of the mobile device 18 - 1 ( FIG. 1 ) in order to present historical aggregate profile data in a time context according to one embodiment of this disclosure.
  • the MAP application 32 - 1 issues a historical request for a POI 110 in the manner described above.
  • the MAP server 12 uses the process of FIGS. 15A and 15B to generate historical aggregate profile data in response to the historical request in the time context. More specifically, the historical aggregate profile data includes an average aggregate profile for each of a number of output time bands within a time window established for the historical request.
  • the time window is a four-week period extending from the week of July 5th to the week of July 26th.
  • the MAP application 32 - 1 uses the average aggregate profiles for the output time bands included in the historical aggregate profile data to generate a timeline 112 for the time window of the historical request.
  • the timeline 112 is a graphical illustration of the average aggregate profiles for the output time bands. For example, if the average aggregate profile for each of the output time bands includes a weighted average of the number of user matches and a weighted average of the number of total users for the output time band, the timeline 112 may be indicative of the ratio of the weighted average of user matches to the weighted average of total users for each of the output time bands.
  • the output time bands having a ratio of weighted average of user matches to weighted average of total users that is less than 0.25 are represented as having a low similarity
  • the output time bands having a ratio of weighted average of user matches to weighted average of total users that is in the range of 0.25-0.75 are represented as having varying degrees of intermediate similarity
  • the output time bands having a ratio of weighted average of user matches to weighted average of total users that is greater than 0.75 are represented as having a high similarity.
  • output time bands for which there are no history objects may be grayed-out or otherwise indicated.
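  • a sketch of this mapping from the weighted averages to the similarity levels shown on the timeline, with the thresholds taken from the description above:

      def similarity_level(avg_user_matches, avg_total_users):
          """Map the ratio of the weighted average of user matches to the
          weighted average of total users onto the displayed similarity levels."""
          if not avg_total_users:
              return "no data"          # e.g., a grayed-out output time band
          ratio = avg_user_matches / avg_total_users
          if ratio < 0.25:
              return "low"
          if ratio <= 0.75:
              return "intermediate"
          return "high"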
  • the GUI 108 also includes a second timeline 114 that zooms in on an area of the timeline 112 that includes the most activity or that includes the greatest number of output time bands having a high or medium similarity.
  • the GUI 108 includes an aggregate profile 116 for a crowd that is currently at the POI. Note that crowds and aggregate profiles for the crowds are discussed below in detail.
  • FIGS. 17A and 17B illustrate a flow chart of a process for generating historical aggregate profile data in a geographic context according to one embodiment of the present disclosure.
  • the history manager 56 establishes a bounding box for the historical request based on the POI or the AOI for the historical request (step 1800 ).
  • while a bounding box is used in this example, other geographic shapes may be used to define a bounding region for the historical request (e.g., a bounding circle).
  • the historical request is from a mobile device of a requesting user, which in this example is the user 20 - 1 . If the historical request is for a POI, the bounding box is a geographic region corresponding to or surrounding the POI.
  • the bounding box may be a square geographic region of a predefined size centered on the POI. If the historical request is for an AOI, the bounding box is the AOI. In addition to establishing the bounding box, the history manager 56 establishes a time window for the historical request (step 1802 ). For example, if the historical request is for the last week and the current date and time are Sep. 17, 2009 at 10:00 pm, the history manager 56 may generate the time window as Sep. 10, 2009 at 10:00 pm through Sep. 17, 2009 at 10:00 pm.
  • the history manager 56 obtains history objects relevant to the bounding box and the time window of the historical request from the datastore 64 of the MAP server 12 (step 1804 ).
  • the relevant history objects are history objects recorded for time periods within or intersecting the time window and for locations, or geographic areas, within or intersecting the bounding box for the historical request.
  • the history manager 56 sorts the relevant history objects into base quadtree regions. More specifically, in this embodiment, the history manager 56 creates an empty list for each relevant base quadtree region (step 1806 ).
  • a relevant base quadtree region is a base quadtree region within which all or at least a portion of the bounding box is located.
  • the history manager 56 then gets the next history object from the history objects identified in step 1804 as being relevant to the historical request (step 1808 ) and adds that history object to the list for the appropriate base quadtree region (step 1810 ). The history manager 56 then determines whether there are more relevant history objects to sort (step 1812 ). If so, the process returns to step 1808 and is repeated until all of the relevant history objects have been sorted into the appropriate base quadtree regions.
  • the history manager 56 then divides each base quadtree region into a grid, where the size of each grid location is set to the smallest history object size of all the history objects sorted into the list for that base quadtree region.
  • for each history object, aggregate profiles are generated for each of the grid locations covered by that history object.
  • a combined aggregate profile is generated for each grid location based on the aggregate profiles generated using the corresponding history objects.
  • the history manager 56 gets the list for the next base quadtree region (step 1814 ). The history manager 56 then gets the next history object in the list for the base quadtree region (step 1816 ). Next, the history manager 56 creates an aggregate profile for the history object using the user profile of the requesting user, which in this example is the user 20 - 1 , or a select subset of the user profile of the requesting user (step 1818 ). Note that the user 20 - 1 may be enabled to select a subset of his user profile to be used for aggregate profile creation by, for example, selecting one or more profile categories.
  • the history manager 56 compares the user profile of the user 20 - 1 , or the select subset thereof, to the user profiles of the anonymous user records stored in the history object.
  • the resulting aggregate profile for the history object includes a number of user matches and a total number of users.
  • the number of user matches is the number of anonymous user records in the history object having user profiles that include at least one keyword that matches at least one keyword in the user profile of the user 20 - 1 or at least one keyword in the select subset of the user profile of the user 20 - 1 .
  • the total number of users is the total number of anonymous user records in the history object.
  • the history manager 56 determines whether a size of the history object is greater than the smallest history object size in the list of history objects for the base quadtree region (step 1820 ). If not, the aggregate profile for the history object is added to an output list for the corresponding grid location for the base quadtree region (step 1822 ) and the process proceeds to step 1830 . If the size of the history object is greater than the smallest history object size, the history manager 56 splits the geographic area, or location, of the history object into a number of grid locations each of the smallest history object size of all the history objects in the list for the base quadtree region (step 1824 ).
  • the history manager 56 then divides the aggregate profile of the history object evenly over the grid locations for the history object (step 1826 ) and adds resulting aggregate profiles for the grid locations to output lists for those grid locations (step 1828 ). For example, if the geographic area of the history object is split into four grid locations and the aggregate profile for the history object includes eight user matches and sixteen total users, then the aggregate profile is divided evenly over the four grid locations such that each of the four grid locations is given an aggregate profile of two user matches and four total users.
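  • a sketch of this even division, reproducing the example above (8 user matches and 16 total users spread over 4 grid locations gives 2 matches and 4 users per location):

      def split_evenly(aggregate_profile, num_grid_locations):
          """Divide a history object's aggregate profile evenly over the grid
          locations its geographic area covers."""
          return [{"user_matches": aggregate_profile["user_matches"] / num_grid_locations,
                   "total_users": aggregate_profile["total_users"] / num_grid_locations}
                  for _ in range(num_grid_locations)]

      # split_evenly({"user_matches": 8, "total_users": 16}, 4)
      # -> four profiles of 2 user matches and 4 total users each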
  • the history manager 56 determines whether there are more history objects to process for the base quadtree region (step 1830 ). If so, the process returns to step 1816 and is repeated until all of the history objects for the base quadtree region are processed. At that point, for each grid location in the base quadtree region having at least one aggregate profile in its output list, the history manager 56 combines the aggregate profiles in the output list for the grid location to provide a combined aggregate profile for the grid location. More specifically, in this embodiment, the history manager 56 computes average aggregate profiles for the grid locations for the base quadtree region (step 1832 ). In one embodiment, for each grid location, the average aggregate profile for the grid location includes an average number of user matches and an average total number of users for all of the aggregate profiles in the output list for that grid location.
  • the history manager 56 determines whether there are more relevant base quadtree regions to process (step 1834 ). If so, the process returns to step 1814 and is repeated until all of the relevant base quadtree regions have been processed. At that point, the history manager 56 outputs the grid locations and the average aggregate profiles for the grid locations in each of the relevant base quadtree regions (step 1836 ). The grid locations and their corresponding average aggregate profiles form the historical aggregate profile data that is returned to the mobile device 18 - 1 of the user 20 - 1 in response to the historical request.
  • FIG. 18 illustrates an exemplary GUI 118 that may be provided by the MAP application 32 - 1 of the mobile device 18 - 1 ( FIG. 1 ) to present historical aggregate profile data in the geographic context to the user 20 - 1 in response to a historical request.
  • the GUI 118 includes a map 120 including a grid 122 .
  • the grid 122 provides graphical information indicative of aggregate profiles for grid locations returned by the MAP server 12 in response to a historical request.
  • the GUI 118 also includes buttons 124 and 126 enabling the user 20 - 1 to zoom in or zoom out on the map 120 , buttons 128 and 130 enabling the user 20 - 1 to toggle between the traditional map view as shown or a satellite map view, buttons 132 and 134 enabling the user 20 - 1 to switch between historical mode and a current mode (i.e., a view of current crowd data as discussed below in detail), and buttons 136 and 138 enabling the user 20 - 1 to hide or show POIs on the map 120 .
  • while the aggregate profiles in FIGS. 15A through 18 are generated based on the user profile of the user 20 - 1 or a select subset of the user profile of the user 20 - 1 , the aggregate profiles may alternatively be generated based on a target user profile defined or otherwise specified by the user 20 - 1 .
  • the user 20 - 1 may define a target profile for a type of person with which the user 20 - 1 would like to interact. Then, by making a historical request with the target profile, the user 20 - 1 can learn whether people matching the target profile are historically located at a POI or an AOI.
  • FIG. 19 illustrates the operation of the system 10 of FIG. 1 wherein the subscriber device 22 is enabled to request and receive historical aggregate profile data from the MAP server 12 according to one embodiment of the present disclosure.
  • the third-party service 26 may send historical requests to the MAP server 12 .
  • the subscriber device 22 sends a historical request to the MAP server 12 (step 1900 ).
  • the subscriber device 22 sends the historical request to the MAP server 12 via the web browser 38 .
  • the historical request identifies either a POI or an AOI and a time window.
  • the historical request may be made in response to user input from the subscriber 24 of the subscriber device 22 or made automatically in response to an event such as, for example, navigation to a website associated with a POI (e.g., navigation to a website of a restaurant).
  • Upon receiving the historical request, the MAP server 12 processes the historical request (step 1902 ). More specifically, as discussed above, the historical request is processed by the history manager 56 of the MAP server 12 . First, the history manager 56 obtains history objects that are relevant to the historical request from the datastore 64 of the MAP server 12 . The relevant history objects are those relevant to the POI or the AOI and the time window for the historical request. The history manager 56 then processes the relevant history objects to provide historical aggregate profile data for the POI or the AOI in a time context and/or a geographic context. In this embodiment, the historical aggregate profile data is based on comparisons of the user profiles of the anonymous user records in the relevant history objects to one another. In another embodiment, the aggregate profile data is based on comparisons of the user profiles of the anonymous user records in the relevant history objects and a target user profile.
  • the MAP server 12 returns the resulting historical aggregate profile data to the subscriber device 22 (step 1904 ).
  • the historical aggregate profile data may be in the time context or the geographic context.
  • the MAP server 12 formats the historical aggregate profile data in a suitable format before sending the historical aggregate profile data to the web browser 38 of the subscriber device 22 .
  • the web browser 38 of the subscriber device 22 presents the historical aggregate profile data to the subscriber 24 (step 1906 ).
  • FIGS. 20A and 20B illustrate a process for generating historical aggregate profile data in a time context in response to a historical request from the subscriber 24 at the subscriber device 22 according to one embodiment of the present disclosure.
  • the process of FIGS. 20A and 20B is substantially the same as that described above with respect to FIGS. 15A and 15B . More specifically, steps 2000 through 2022 are substantially the same as steps 1700 through 1722 of FIGS. 15A and 15B . Likewise, steps 2026 through 2032 are substantially the same as steps 1726 through 1732 of FIG. 15B . However, step 2024 of FIG. 20B is different from step 1724 of FIG. 15B with respect to the manner in which the aggregate profiles for the relevant history objects are computed.
  • the aggregate profile for the history object is generated by comparing the user profiles of the anonymous user records in the history object to one another.
  • the aggregate profile for the history object includes an aggregate list of keywords from the user profiles of the anonymous user records, the number of occurrences of each of those keywords in the user profiles of the anonymous user records, and the total number of anonymous user records in the history object.
  • the weighted average of the aggregate profiles for the history objects in the output time band may provide an average aggregate profile including, for each keyword occurring in the aggregate profile of at least one of the history objects, a weighted average of the number of occurrences of the keyword.
  • the average aggregate profile may include a weighted average of the total number of anonymous user records in the history objects.
  • the average aggregate profile may include, for each keyword, a weighted average of the number of occurrences of the keyword to the total number of anonymous user records.
  • FIGS. 21A and 21B illustrate a process for generating historical aggregate profile data in a geographic context in response to a historical request from the subscriber 24 at the subscriber device 22 according to one embodiment of the present disclosure.
  • the process of FIGS. 21A and 21B is substantially the same as that described above with respect to FIGS. 17A and 17B . More specifically, steps 2100 through 2116 and 2120 through 2136 are substantially the same as steps 1800 through 1816 and 1820 through 1836 of FIGS. 17A and 17B . However, step 2118 of FIG. 21B is different from step 1818 of FIG. 17B with respect to the manner in which the aggregate profiles for the history objects are computed.
  • the aggregate profile for the history object is generated by comparing the user profiles of the anonymous user records in the history object to one another.
  • the aggregate profile for the history object includes an aggregate list of keywords from the user profiles of the anonymous user records, the number of occurrences of each of those keywords in the user profiles of the anonymous user records, and the total number of anonymous user records in the history object.
  • the weighted average of the aggregate profiles for each of the grid locations may provide an average aggregate profile including, for each keyword, a weighted average of the number of occurrences of the keyword.
  • the average aggregate profile for each grid location may include a weighted average of the total number of anonymous user records.
  • the average aggregate profile for each grid location may include, for each keyword, a weighted average of the number of occurrences of the keyword to the total number of anonymous user records.
  • FIG. 22 begins a discussion of the operation of the crowd analyzer 58 to form crowds of users according to one embodiment of the present disclosure.
  • FIG. 22 is a flow chart for a spatial crowd formation process according to one embodiment of the present disclosure. Note that, in one embodiment, this process is performed in response to a request for crowd data for a POI or an AOI. In another embodiment, this process may be performed proactively by the crowd analyzer 58 as, for example, a background process.
  • the crowd analyzer 58 establishes a bounding box for the crowd formation process (step 2200 ).
  • while a bounding box is used in this example, other geographic shapes may be used to define a bounding region for the crowd formation process (e.g., a bounding circle).
  • the bounding box is established based on the POI or the AOI of the request. If the request is for a POI, then the bounding box is a geographic area of a predetermined size centered at the POI. If the request is for an AOI, the bounding box is the AOI. Alternatively, if the crowd formation process is performed proactively, the bounding box is a bounding box of a predefined size.
  • the crowd analyzer 58 then creates a crowd for each individual user in the bounding box (step 2202 ). More specifically, the crowd analyzer 58 queries the datastore 64 of the MAP server 12 to identify users currently located within the bounding box. Then, a crowd of one user is created for each user currently located within the bounding box. Next, the crowd analyzer 58 determines the two closest crowds in the bounding box (step 2204 ) and determines a distance between the two crowds (step 2206 ). The distance between the two crowds is a distance between crowd centers of the two crowds. Note that the crowd center of a crowd of one is the current location of the user in the crowd.
  • the crowd analyzer 58 determines whether the distance between the two crowds is less than an optimal inclusion distance (step 2208 ).
  • the optimal inclusion distance is a predefined static distance. If the distance between the two crowds is less than the optimal inclusion distance, the crowd analyzer 58 combines the two crowds (step 2210 ) and computes a new crowd center for the resulting crowd (step 2212 ). The crowd center may be computed based on the current locations of the users in the crowd using a center of mass algorithm. At this point the process returns to step 2204 and is repeated until the distance between the two closest crowds is not less than the optimal inclusion distance. At that point, the crowd analyzer 58 discards any crowds with less than three users (step 2214 ).
  • crowds are only maintained if the crowds include three or more users. However, while three users is the preferred minimum number of users in a crowd, the present disclosure is not limited thereto. The minimum number of users in a crowd may be defined as any number greater than or equal to two users.
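  • a sketch of this greedy crowd formation (the optimal inclusion distance value and the data layout are assumptions; the text describes the distance only as a predefined static distance):

      import math

      MIN_CROWD_SIZE = 3
      OPTIMAL_INCLUSION_DISTANCE = 20.0   # predefined static distance; value assumed for illustration

      def crowd_center(crowd):
          """Center of mass of the current locations of the users in the crowd."""
          return (sum(p[0] for p in crowd) / len(crowd),
                  sum(p[1] for p in crowd) / len(crowd))

      def form_crowds(user_locations):
          crowds = [[location] for location in user_locations]              # step 2202
          while len(crowds) > 1:
              distance, i, j = min((math.dist(crowd_center(a), crowd_center(b)), i, j)
                                   for i, a in enumerate(crowds)
                                   for j, b in enumerate(crowds) if i < j)  # steps 2204-2206
              if distance >= OPTIMAL_INCLUSION_DISTANCE:                    # step 2208
                  break
              crowds[i].extend(crowds[j])                                   # steps 2210-2212
              del crowds[j]
          return [c for c in crowds if len(c) >= MIN_CROWD_SIZE]            # step 2214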
  • FIGS. 23A through 23D graphically illustrate the crowd formation process of FIG. 22 for an exemplary bounding box 139 .
  • crowds are noted by dashed circles, and the crowd centers are noted by cross-hairs (+).
  • the crowd analyzer 58 creates crowds 140 through 148 for the users in the geographic area, where, at this point, each of the crowds 140 through 148 includes one user. The current locations of the users are the crowd centers of the crowds 140 through 148 .
  • the crowd analyzer 58 determines the two closest crowds and a distance between the two closest crowds.
  • the two closest crowds are crowds 142 and 144 , and the distance between the two closest crowds 142 and 144 is less than the optimal inclusion distance.
  • the two closest crowds 142 and 144 are combined by merging crowd 144 into crowd 142 , and a new crowd center (+) is computed for the crowd 142 , as illustrated in FIG. 23B .
  • the crowd analyzer 58 again determines the two closest crowds, which are now crowds 140 and 142 .
  • the crowd analyzer 58 determines a distance between the crowds 140 and 142 .
  • since the distance between the crowds 140 and 142 is also less than the optimal inclusion distance, the crowd analyzer 58 combines the two crowds 140 and 142 by merging the crowd 140 into the crowd 142 , and a new crowd center (+) is computed for the crowd 142 , as illustrated in FIG. 23C .
  • the crowd analyzer 58 discards crowds having less than three users, which in this example are crowds 146 and 148 .
  • the crowd 142 has been formed with three users, as illustrated in FIG. 23D .
  • FIGS. 24A through 24D illustrate a flow chart for a spatial crowd formation process according to another embodiment of the present disclosure.
  • the spatial crowd formation process is triggered in response to receiving a location update for one of the users 20 - 1 through 20 -N and is preferably repeated for each location update received for the users 20 - 1 through 20 -N.
  • the crowd analyzer 58 receives a location update, or a new location, for a user (step 2300 ). Assume that, for this example, the location update is received for the user 20 - 1 .
  • the crowd analyzer 58 retrieves an old location of the user 20 - 1 , if any (step 2302 ).
  • the old location is the current location of the user 20 - 1 prior to receiving the new location.
  • the crowd analyzer 58 then creates a new bounding box of a predetermined size centered at the new location of the user 20 - 1 (step 2304 ) and an old bounding box of a predetermined size centered at the old location of the user 20 - 1 , if any (step 2306 ).
  • the predetermined size of the new and old bounding boxes may be any desired size. As one example, the predetermined size of the new and old bounding boxes is 40 meters by 40 meters.
  • if the user 20 - 1 has no old location, the old bounding box is essentially null.
  • while bounding “boxes” are used in this example, the bounding areas may be of any desired shape.
  • the crowd analyzer 58 determines whether the new and old bounding boxes overlap (step 2308 ). If so, the crowd analyzer 58 creates a bounding box encompassing the new and old bounding boxes (step 2310 ). For example, if the new and old bounding boxes are 40×40 meter regions and a 1×1 meter square at the northeast corner of the new bounding box overlaps a 1×1 meter square at the southwest corner of the old bounding box, the crowd analyzer 58 may create a 79×79 meter square bounding box encompassing both the new and old bounding boxes.
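  • As an illustrative Python sketch of the overlap test of step 2308 and the encompassing box of step 2310, assuming axis-aligned boxes represented as (min_x, min_y, max_x, max_y) tuples in meters (the function names are hypothetical); for the 40×40 meter example above, the union of the two boxes is the 79×79 meter encompassing box.

      def boxes_overlap(a, b):
          # a and b are (min_x, min_y, max_x, max_y) axis-aligned bounding boxes.
          return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

      def encompassing_box(a, b):
          # Smallest axis-aligned box containing both a and b (step 2310).
          return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))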
  • the crowd analyzer 58 determines the individual users and crowds relevant to the bounding box created in step 2310 (step 2312 ).
  • the crowds relevant to the bounding box are crowds that are within or overlap the bounding box (e.g., have at least one user located within the bounding box).
  • the individual users relevant to the bounding box are users that are currently located within the bounding box and not already part of a crowd.
  • the crowd analyzer 58 computes an optimal inclusion distance for individual users based on user density within the bounding box (step 2314 ). More specifically, in one embodiment, the optimal inclusion distance for individuals, which is also referred to herein as an initial optimal inclusion distance, is set according to the following equation:
  • initial_optimal_inclusion_dist = a · √(A_BoundingBox / number_of_users),
  • where A_BoundingBox is the area of the bounding box, and
  • number_of_users is the total number of users in the bounding box.
  • the total number of users in the bounding box includes both individual users that are not already in a crowd and users that are already in a crowd. In one embodiment, a is ⅔.
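  • For illustration only, a minimal Python sketch of this computation follows. The square-root form used here is a reconstruction of the garbled equation above (it is the form that yields a quantity with units of distance from an area divided by a user count); the function name is hypothetical.

      import math

      def initial_optimal_inclusion_dist(bounding_box_area, number_of_users, a=2.0 / 3.0):
          # Density-based inclusion distance for crowds of one user (steps 2314 / 2344).
          if number_of_users == 0:
              return 0.0
          return a * math.sqrt(bounding_box_area / number_of_users)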
  • the crowd analyzer 58 then creates a crowd for each individual user within the bounding box that is not already included in a crowd and sets the optimal inclusion distance for the crowds to the initial optimal inclusion distance (step 2316 ).
  • the process proceeds to FIG. 24B where the crowd analyzer 58 analyzes the crowds relevant to the bounding box to determine whether any of the crowd members (i.e., users in the crowds) violate the optimal inclusion distance of their crowds (step 2318 ). Any crowd member that violates the optimal inclusion distance of his or her crowd is then removed from that crowd (step 2320 ).
  • the crowd analyzer 58 then creates a crowd of one user for each of the users removed from their crowds in step 2320 and sets the optimal inclusion distance for the newly created crowds to the initial optimal inclusion distance (step 2322 ).
  • the crowd analyzer 58 determines the two closest crowds for the bounding box (step 2324 ) and a distance between the two closest crowds (step 2326 ).
  • the distance between the two closest crowds is the distance between the crowd centers of the two closest crowds.
  • the crowd analyzer 58 determines whether the distance between the two closest crowds is less than the optimal inclusion distance of a larger of the two closest crowds (step 2328 ). If the two closest crowds are of the same size (i.e., have the same number of users), then the optimal inclusion distance of either of the two closest crowds may be used.
  • the optimal inclusion distances of both of the two closest crowds may be used such that the crowd analyzer 58 determines whether the distance between the two closest crowds is less than the optimal inclusion distances of both of the two closest crowds.
  • the crowd analyzer 58 may compare the distance between the two closest crowds to an average of the optimal inclusion distances of the two closest crowds.
  • the two closest crowds are combined or merged (step 2330 ), and a new crowd center for the resulting crowd is computed (step 2332 ).
  • a center of mass algorithm may be used to compute the crowd center of a crowd.
  • a new optimal inclusion distance for the resulting crowd is computed (step 2334 ).
  • in one embodiment, this new optimal inclusion distance is computed as the average of the initial optimal inclusion distance and the distances between the users in the crowd and the crowd center, plus one standard deviation.
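  • A minimal Python sketch of one reading of this computation, for illustration only; the function name is hypothetical, and the choice of the population standard deviation of the member distances as the "one standard deviation" term is an assumption.

      import statistics

      def new_optimal_inclusion_dist(initial_dist, member_distances):
          # member_distances: each crowd member's distance to the new crowd center.
          member_distances = list(member_distances)
          values = [initial_dist] + member_distances
          average = sum(values) / len(values)
          # Assumption: "one standard deviation" is the spread of the member distances.
          spread = statistics.pstdev(member_distances) if member_distances else 0.0
          return average + spread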
  • the crowd analyzer 58 determines whether a maximum number of iterations have been performed (step 2336 ).
  • the maximum number of iterations is a predefined number that ensures that the crowd formation process does not indefinitely loop over steps 2318 through 2334 or loop over steps 2318 through 2334 more than a desired maximum number of times. If the maximum number of iterations has not been reached, the process returns to step 2318 and is repeated until either the distance between the two closest crowds is not less than the optimal inclusion distance of the larger crowd or the maximum number of iterations has been reached. At that point, the crowd analyzer 58 discards crowds with less than three users, or members (step 2338 ) and the process ends.
  • the process proceeds to FIG. 24C and the bounding box to be processed is set to the old bounding box (step 2340 ).
  • the crowd analyzer 58 then processes the old bounding box in much the same manner as described above with respect to steps 2312 through 2338 . More specifically, the crowd analyzer 58 determines the individual users and crowds relevant to the bounding box (step 2342 ).
  • the crowds relevant to the bounding box are crowds that are within or overlap the bounding box (e.g., have at least one user located within the bounding box).
  • the individual users relevant to the bounding box are users that are currently located within the bounding box and not already part of a crowd.
  • the crowd analyzer 58 computes an optimal inclusion distance for individual users based on user density within the bounding box (step 2344 ). More specifically, in one embodiment, the optimal inclusion distance for individuals, which is also referred to herein as an initial optimal inclusion distance, is set according to the following equation:
  • initial_optimal_inclusion_dist = a · √(A_BoundingBox / number_of_users),
  • where A_BoundingBox is the area of the bounding box, and
  • number_of_users is the total number of users in the bounding box.
  • the total number of users in the bounding box includes both individual users that are not already in a crowd and users that are already in a crowd. In one embodiment, a is ⅔.
  • the crowd analyzer 58 then creates a crowd of one user for each individual user within the bounding box that is not already included in a crowd and sets the optimal inclusion distance for the crowds to the initial optimal inclusion distance (step 2346 ).
  • the crowd analyzer 58 analyzes the crowds for the bounding box to determine whether any crowd members (i.e., users in the crowds) violate the optimal inclusion distance of their crowds (step 2348 ). Any crowd member that violates the optimal inclusion distance of his or her crowd is then removed from that crowd (step 2350 ).
  • the crowd analyzer 58 then creates a crowd of one user for each of the users removed from their crowds in step 2350 and sets the optimal inclusion distance for the newly created crowds to the initial optimal inclusion distance (step 2352 ).
  • the crowd analyzer 58 determines the two closest crowds in the bounding box (step 2354 ) and a distance between the two closest crowds (step 2356 ).
  • the distance between the two closest crowds is the distance between the crowd centers of the two closest crowds.
  • the crowd analyzer 58 determines whether the distance between the two closest crowds is less than the optimal inclusion distance of a larger of the two closest crowds (step 2358 ). If the two closest crowds are of the same size (i.e., have the same number of users), then the optimal inclusion distance of either of the two closest crowds may be used.
  • the optimal inclusion distances of both of the two closest crowds may be used such that the crowd analyzer 58 determines whether the distance between the two closest crowds is less than the optimal inclusion distances of both of the two closest crowds.
  • the crowd analyzer 58 may compare the distance between the two closest crowds to an average of the optimal inclusion distances of the two closest crowds.
  • the two closest crowds are combined or merged (step 2360 ), and a new crowd center for the resulting crowd is computed (step 2362 ).
  • a center of mass algorithm may be used to compute the crowd center of a crowd.
  • a new optimal inclusion distance for the resulting crowd is computed (step 2364 ). As discussed above, in one embodiment, this new optimal inclusion distance is computed as the average of the initial optimal inclusion distance and the distances between the users in the crowd and the crowd center, plus one standard deviation.
  • the crowd analyzer 58 determines whether a maximum number of iterations have been performed (step 2366 ). If the maximum number of iterations has not been reached, the process returns to step 2348 and is repeated until either the distance between the two closest crowds is not less than the optimal inclusion distance of the larger crowd or the maximum number of iterations has been reached. At that point, the crowd analyzer 58 discards crowds with less than three users, or members (step 2368 ). The crowd analyzer 58 then determines whether the crowd formation process for the new and old bounding boxes is done (step 2370 ). In other words, the crowd analyzer 58 determines whether both the new and old bounding boxes have been processed.
  • if not, the bounding box is set to the new bounding box (step 2372 ), and the process returns to step 2342 and is repeated for the new bounding box. Once both the new and old bounding boxes have been processed, the crowd formation process ends.
  • FIGS. 25A through 25D graphically illustrate the crowd formation process of FIGS. 24A through 24D for a scenario where the crowd formation process is triggered by a location update for a user having no old location.
  • the crowd analyzer 58 creates a new bounding box 150 for the new location of the user, and the new bounding box 150 is set as the bounding box to be processed for crowd formation.
  • the crowd analyzer 58 identifies all individual users currently located within the bounding box 150 and all crowds located within or overlapping the bounding box.
  • crowd 152 is an existing crowd relevant to the bounding box 150 .
  • Crowds are indicated by dashed circles, crowd centers are indicated by cross-hairs (+), and users are indicated as dots.
  • the crowd analyzer 58 creates crowds 154 through 158 of one user for the individual users, and the optimal inclusion distances of the crowds 154 through 158 are set to the initial optimal inclusion distance.
  • the initial optimal inclusion distance is computed by the crowd analyzer 58 based on a density of users within the bounding box 150 .
  • the crowd analyzer 58 then identifies the two closest crowds 154 and 156 in the bounding box 150 and determines a distance between the two closest crowds 154 and 156 .
  • the distance between the two closest crowds 154 and 156 is less than the optimal inclusion distance.
  • the two closest crowds 154 and 156 are merged and a new crowd center and new optimal inclusion distance are computed, as illustrated in FIG. 25C .
  • the crowd analyzer 58 then repeats the process such that the two closest crowds 154 and 158 in the bounding box 150 are again merged, as illustrated in FIG. 25D .
  • the distance between the two closest crowds 152 and 154 is greater than the appropriate optimal inclusion distance.
  • the crowd formation process is complete.
  • FIGS. 26A through 26F graphically illustrate the crowd formation process of FIGS. 24A through 24D for a scenario where the new and old bounding boxes overlap.
  • a user moves from an old location to a new location, as indicated by an arrow.
  • the crowd analyzer 58 receives a location update for the user giving the new location of the user.
  • the crowd analyzer 58 creates an old bounding box 160 for the old location of the user and a new bounding box 162 for the new location of the user.
  • Crowd 164 exists in the old bounding box 160
  • crowd 166 exists in the new bounding box 162 .
  • the crowd analyzer 58 creates a bounding box 168 that encompasses both the old bounding box 160 and the new bounding box 162 , as illustrated in FIG. 26B .
  • the crowd analyzer 58 creates crowds 170 through 176 for individual users currently located within the bounding box 168 .
  • the optimal inclusion distances of the crowds 170 through 176 are set to the initial optimal inclusion distance computed by the crowd analyzer 58 based on the density of users in the bounding box 168 .
  • the crowd analyzer 58 analyzes the crowds 164 , 166 , and 170 through 176 to determine whether any members of the crowds 164 , 166 , and 170 through 176 violate the optimal inclusion distances of the crowds 164 , 166 , and 170 through 176 .
  • the crowd analyzer 58 removes the remaining users from the crowd 164 and creates crowds 178 and 180 of one user each for those users, as illustrated in FIG. 26C .
  • the crowd analyzer 58 then identifies the two closest crowds in the bounding box 168 , which in this example are the crowds 174 and 176 .
  • the crowd analyzer 58 computes a distance between the two crowds 174 and 176 .
  • the distance between the two crowds 174 and 176 is less than the initial optimal inclusion distance and, as such, the two crowds 174 and 176 are combined.
  • crowds are combined by merging the smaller crowd into the larger crowd. Since the two crowds 174 and 176 are of the same size, the crowd analyzer 58 merges the crowd 176 into the crowd 174 , as illustrated in FIG. 26D . A new crowd center and new optimal inclusion distance are then computed for the crowd 174 .
  • the crowd analyzer 58 repeats the process and determines that the crowds 166 and 172 are now the two closest crowds.
  • the distance between the two crowds 166 and 172 is less than the optimal inclusion distance of the larger of the two crowds 166 and 172 , which is the crowd 166 .
  • the crowd 172 is merged into the crowd 166 and a new crowd center and optimal inclusion distance are computed for the crowd 166 , as illustrated in FIG. 26E .
  • the crowd analyzer 58 discards any crowds having less than three members, as illustrated in FIG. 26F .
  • the crowds 170 , 174 , 178 , and 180 have less than three members and are therefore removed.
  • the crowd 166 has three or more members and, as such, is not removed.
  • the crowd formation process is complete.
  • FIGS. 27A through 27E graphically illustrate the crowd formation process of FIGS. 24A through 24D in a scenario where the new and old bounding boxes do not overlap.
  • the user moves from an old location to a new location.
  • the crowd analyzer 58 creates an old bounding box 182 for the old location of the user and a new bounding box 184 for the new location of the user.
  • Crowds 186 and 188 exist in the old bounding box 182
  • crowd 190 exists in the new bounding box 184 .
  • the crowd analyzer 58 processes the old and new bounding boxes 182 and 184 separately.
  • the remaining users in the crowd 186 no longer satisfy the optimal inclusion distance for the crowd 186 .
  • the remaining users in the crowd 186 are removed from the crowd 186 , and crowds 192 and 194 of one user each are created for the removed users as shown in FIG. 27C .
  • no two crowds in the old bounding box 182 are close enough to be combined.
  • processing of the old bounding box 182 is complete, and the crowd analyzer 58 proceeds to process the new bounding box 184 .
  • processing of the new bounding box 184 begins by the crowd analyzer 58 creating a crowd 196 of one user for the user.
  • the crowd analyzer 58 identifies the crowds 190 and 196 as the two closest crowds in the new bounding box 184 and determines a distance between the two crowds 190 and 196 .
  • the distance between the two crowds 190 and 196 is less than the optimal inclusion distance of the larger crowd, which is the crowd 190 .
  • the crowd analyzer 58 combines the crowds 190 and 196 by merging the crowd 196 into the crowd 190 , as illustrated in FIG. 27E .
  • a new crowd center and new optimal inclusion distance are then computed for the crowd 190 .
  • the crowd formation process is complete.
  • in one embodiment, a location accuracy of the location update received from the user in step 2300 is considered. More specifically, in step 2300 , the location update received by the MAP server 12 includes the updated location of the user 20 - 1 as well as a location accuracy for the location of the user 20 - 1 , which may be expressed as, for example, a radius in meters from the location of the user 20 - 1 .
  • the location accuracy of the location of the user 20 - 1 may be provided by the GPS receiver or derived from data from the GPS receiver, as will be appreciated by one having ordinary skill in the art.
  • in steps 2304 and 2306 , the sizes of the new and old bounding boxes centered at the new and old locations of the user 20 - 1 are set as a function of the location accuracy of the new and old locations of the user 20 - 1 . If the new location of the user 20 - 1 is inaccurate, then the new bounding box will be large. If the new location of the user 20 - 1 is accurate, then the new bounding box will be small.
  • the length and width of the new bounding box may be set to M times the location accuracy of the new location of the user 20 - 1 , where the location accuracy is expressed as a radius in meters from the new location of the user 20 - 1 .
  • the number M may be any desired number. For example, the number M may be 5.
  • the location accuracy of the old location of the user 20 - 1 may be used to set the length and width of the old bounding box.
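  • For illustration only, a minimal Python sketch of sizing a bounding box from a reported location accuracy, assuming planar (x, y) coordinates in meters, the (min_x, min_y, max_x, max_y) box representation used in the sketch above, and M defaulting to 5 as in the example; the function name is hypothetical.

      def bounding_box_from_accuracy(location, accuracy_radius_m, M=5):
          # The box is M times the accuracy radius on a side, centered at the location.
          half_side = (M * accuracy_radius_m) / 2.0
          x, y = location
          return (x - half_side, y - half_side, x + half_side, y + half_side)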
  • the location accuracy may be considered when computing the initial optimal inclusion distances used for crowds of one user in steps 2314 and 2344 .
  • the initial optimal inclusion distance is computed based on the following equation:
  • initial_optimal_inclusion_dist = a · √(A_BoundingBox / number_of_users),
  • where A_BoundingBox is the area of the bounding box, and
  • number_of_users is the total number of users in the bounding box.
  • the total number of users in the bounding box includes both individual users that are not already in a crowd and users that are already in a crowd.
  • a is ⅔.
  • if the computed value for the initial optimal inclusion distance of a crowd of one user is less than the location accuracy of that user's location, the location accuracy, rather than the computed value, is used for the initial optimal inclusion distance for that crowd.
  • as location accuracy decreases, crowds become larger and more inclusive.
  • as location accuracy increases, crowds become smaller and less inclusive.
  • the granularity with which crowds are formed is a function of the location accuracy.
  • when crowds are merged, the new optimal inclusion distance may first be computed as described above, i.e., as the average of the initial optimal inclusion distance and the distances between the users in the crowd and the crowd center, plus one standard deviation.
  • if the computed value for the new optimal inclusion distance is less than an average location accuracy of the users in the crowd, the average location accuracy of the users in the crowd, rather than the computed value, is used as the new optimal inclusion distance.
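  • As a hedged illustration, the following Python sketch captures the accuracy floor described above: the computed inclusion distance is never allowed to fall below the (average) location accuracy, so less accurate locations yield larger, more inclusive crowds. The function name is hypothetical.

      def floor_by_accuracy(computed_dist, accuracies):
          # accuracies: location-accuracy radii in meters; a single value for a crowd of
          # one user, or all members' accuracies for a merged crowd.
          if not accuracies:
              return computed_dist
          average_accuracy = sum(accuracies) / len(accuracies)
          return max(computed_dist, average_accuracy)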
  • FIG. 28 illustrates the operation of the system 10 of FIG. 1 to enable the mobile devices 18 - 1 through 18 -N to request crowd data for currently formed crowds according to one embodiment of the present disclosure.
  • the request is initiated by the MAP application 32 - 1 of the mobile device 18 - 1
  • this discussion is equally applicable to the MAP applications 32 - 2 through 32 -N of the other mobile devices 18 - 2 through 18 -N.
  • requests may be received from the third-party applications 34 - 1 through 34 -N.
  • the MAP application 32 - 1 sends a crowd request to the MAP client 30 - 1 (step 2400 ).
  • the crowd request is a request for crowd data for crowds currently formed near a specified POI or within a specified AOI.
  • the crowd request may be initiated by the user 20 - 1 of the mobile device 18 - 1 via the MAP application 32 - 1 or may be initiated automatically by the MAP application 32 - 1 in response to an event such as, for example, start-up of the MAP application 32 - 1 , movement of the user 20 - 1 , or the like.
  • the crowd request is for a POI, where the POI is a POI corresponding to the current location of the user 20 - 1 , a POI selected from a list of POIs defined by the user 20 - 1 , a POI selected from a list of POIs defined by the MAP application 32 - 1 or the MAP server 12 , a POI selected by the user 20 - 1 from a map, a POI implicitly defined via a separate application (e.g., POI is implicitly defined as the location of the nearest Starbucks coffee house in response to the user 20 - 1 performing a Google search for “Starbucks”), or the like.
  • the list of POIs may include static POIs which may be defined by street addresses or latitude and longitude coordinates, dynamic POIs which may be defined as the current locations of one or more friends of the user 20 - 1 , or both.
  • the user 20 - 1 may be enabled to define a POI by selecting a crowd center of a crowd as a POI, where the POI would thereafter remain static at that point and would not follow the crowd.
  • the crowd request is for an AOI
  • the AOI may be an AOI of a predefined shape and size centered at the current location of the user 20 - 1 , an AOI selected from a list of AOIs defined by the user 20 - 1 , an AOI selected from a list of AOIs defined by the MAP application 32 - 1 or the MAP server 12 , an AOI selected by the user 20 - 1 from a map, an AOI implicitly defined via a separate application (e.g., AOI is implicitly defined as an area of a predefined shape and size centered at the location of the nearest Starbucks coffee house in response to the user 20 - 1 performing a Google search for “Starbucks”), or the like.
  • the list of AOIs may include static AOIs, dynamic AOIs which may be defined as areas of a predefined shape and size centered at the current locations of one or more friends of the user 20 - 1 , or both.
  • the user 20 - 1 may be enabled to define an AOI by selecting a crowd such that an AOI is created of a predefined shape and size centered at the crowd center of the selected crowd. The AOI would thereafter remain static and would not follow the crowd.
  • the POI or the AOI of the crowd request may be selected by the user 20 - 1 via the MAP application 32 - 1 .
  • the MAP application 32 - 1 automatically uses the current location of the user 20 - 1 as the POI or as a center point for an AOI of a predefined shape and size.
  • Upon receiving the crowd request, the MAP client 30 - 1 forwards the crowd request to the MAP server 12 (step 2402 ). Note that in some embodiments, the MAP client 30 - 1 may process the crowd request before forwarding the crowd request to the MAP server 12 .
  • the crowd request may include more than one POI or more than one AOI. As such, the MAP client 30 - 1 may generate a separate crowd request for each POI or each AOI.
  • the MAP server 12 identifies one or more crowds relevant to the crowd request (step 2404 ). More specifically, in one embodiment, the crowd analyzer 58 performs a crowd formation process such as that described above in FIG. 22 to form one or more crowds relevant to the POI or the AOI of the crowd request. In another embodiment, the crowd analyzer 58 proactively forms crowds using a process such as that described above in FIGS. 24A through 24D and stores corresponding crowd records in the datastore 64 of the MAP server 12 . Then, rather than forming the relevant crowds in response to the crowd request, the crowd analyzer 58 queries the datastore 64 to identify the crowds that are relevant to the crowd request.
  • the crowds relevant to the crowd request may be those crowds within or intersecting a bounding region, such as a bounding box, for the crowd request. If the crowd request is for a POI, the bounding region is a geographic region of a predefined shape and size centered at the POI. If the crowd request is for an AOI, the bounding region is the AOI.
  • the MAP server 12 generates crowd data for the identified crowds (step 2406 ).
  • the crowd data for the identified crowds may include aggregate profiles for the crowds, information characterizing the crowds, or both.
  • the crowd data may include spatial information defining the locations of the crowds, the number of users in the crowds, the amount of time the crowds have been located at or near the POI or within the AOI of the crowd request, or the like.
  • the MAP server 12 then returns the crowd data to the MAP client 30 - 1 (step 2408 ).
  • Upon receiving the crowd data, the MAP client 30 - 1 forwards the crowd data to the MAP application 32 - 1 (step 2410 ). Note that in some embodiments the MAP client 30 - 1 may process the crowd data before sending the crowd data to the MAP application 32 - 1 . The MAP application 32 - 1 then presents the crowd data to the user 20 - 1 (step 2412 ). The manner in which the crowd data is presented depends on the particular implementation of the MAP application 32 - 1 . In one embodiment, the crowd data is overlaid upon a map. For example, the crowds may be represented by corresponding indicators overlaid on a map. The user 20 - 1 may then select a crowd in order to view additional crowd data regarding that crowd such as, for example, the aggregate profile of that crowd, characteristics of that crowd, or the like.
  • the MAP application 32 - 1 may operate to roll-up the aggregate profiles for multiple crowds into a rolled-up aggregate profile for those crowds.
  • the rolled-up aggregate profile may be the average of the aggregate profiles of the crowds.
  • the MAP application 32 - 1 may roll-up the aggregate profiles for multiple crowds at a POI and present the rolled-up aggregate profile for the multiple crowds at the POI to the user 20 - 1 .
  • the MAP application 32 - 1 may provide a rolled-up aggregate profile for an AOI.
  • the MAP server 12 may roll-up crowds for a POI or an AOI and provide the rolled-up aggregate profile in addition to or as an alternative to the aggregate profiles for the individual crowds.
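  • For illustration only, a minimal Python sketch of one way such a roll-up could be performed; the disclosure does not specify the averaging in detail, so representing each aggregate profile as a dictionary of per-keyword match ratios and taking the per-keyword mean across the crowds is an assumption, and the function name is hypothetical.

      from collections import defaultdict

      def roll_up(aggregate_profiles):
          # aggregate_profiles: one {keyword: match_ratio} dictionary per crowd.
          totals = defaultdict(float)
          for profile in aggregate_profiles:
              for keyword, ratio in profile.items():
                  totals[keyword] += ratio
          n = len(aggregate_profiles)
          return {keyword: total / n for keyword, total in totals.items()} if n else {}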
  • FIG. 29A is a flow chart illustrating step 2406 of FIG. 28 in more detail according to one embodiment of the present disclosure.
  • the crowd data returned by the MAP server 12 includes aggregate profiles for the crowds identified for the POI or the AOI.
  • Upon receiving the crowd request, the MAP server 12 triggers the crowd analyzer 58 to identify crowds relevant to the current request, and then passes the identified crowds to the aggregation engine 60 in order to generate aggregate profiles for the identified crowds.
  • once the identified crowds are passed to the aggregation engine 60 , the aggregation engine 60 selects a next crowd to process, which for the first iteration is the first crowd (step 2500 -A).
  • the aggregation engine 60 selects the next user in the crowd (step 2502 -A).
  • the aggregation engine 60 compares the user profile of the user in the crowd to the user profile of the requesting user, which for this example is the user 20 - 1 of the mobile device 18 - 1 , or a select subset of the user profile of the requesting user (step 2504 -A).
  • the user 20 - 1 may be enabled to select a subset of his user profile to be used for generation of the aggregate profile.
  • the user 20 - 1 may select one or more of the profile categories to be used for aggregate profile generation.
  • the aggregation engine 60 identifies matches between the user profile of the user in the crowd and the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 .
  • the user profiles are expressed as keywords in a number of profile categories. The aggregation engine 60 may then make a list of keywords from the user profile of the user in the crowd that match keywords in the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 .
  • the aggregation engine 60 determines whether there are more users in the crowd (step 2506 -A). If so, the process returns to step 2502 -A and is repeated for the next user in the crowd. Once all of the users in the crowd have been processed, the aggregation engine 60 generates an aggregate profile for the crowd based on data resulting from the comparisons of the user profiles of the users in the crowd to the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 (step 2508 -A). In an alternative embodiment, the aggregation engine 60 generates an aggregate profile for the crowd based on data resulting from the comparisons of the user profiles of the users in the crowd to a target user profile defined or otherwise specified by the user 20 - 1 .
  • the data resulting from the comparisons is a list of matching keywords for each of the users in the crowd.
  • the aggregate profile may then include a number of user matches over all keywords and/or a ratio of the number of user matches over all keywords to the number of users in the crowd.
  • the number of user matches over all keywords is a number of users in the crowd having at least one keyword in their user profile that matches a keyword in the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 .
  • the aggregate profile may additionally or alternatively include, for each keyword in the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 , a number of user matches for the keyword or a ratio of the number of user matches for the keyword to the number of users in the crowd.
  • keywords in the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 that have no user matches may be excluded from the aggregate profile.
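  • For illustration only, a minimal Python sketch of this comparison-based aggregation, assuming user profiles are flat sets of keywords (profile categories are ignored) and using hypothetical function and field names: for each crowd member, the keywords matching the requesting user's profile (or selected subset) are collected, and the aggregate profile reports the number of user matches over all keywords, per-keyword match counts, and the corresponding ratios, excluding keywords with no user matches.

      def aggregate_profile_vs_requester(crowd_profiles, requester_keywords):
          # crowd_profiles: list of keyword sets, one per user in the crowd.
          # requester_keywords: keyword set of the requesting user (or selected subset).
          per_keyword = {kw: 0 for kw in requester_keywords}
          user_matches = 0
          for profile in crowd_profiles:
              matching = requester_keywords & profile
              if matching:
                  user_matches += 1
              for kw in matching:
                  per_keyword[kw] += 1
          crowd_size = len(crowd_profiles)
          return {
              "crowd_size": crowd_size,
              "user_matches": user_matches,
              "user_match_ratio": user_matches / crowd_size if crowd_size else 0.0,
              # Keywords with no user matches are excluded from the aggregate profile.
              "keyword_matches": {kw: n for kw, n in per_keyword.items() if n > 0},
          }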
  • the aggregate profile for the crowd may include a total number of users in the crowd.
  • the aggregate profile for the crowd may additionally or alternatively include a match strength that is indicative of a degree of similarity between the user profiles of the users in the crowd and the user profile of the user 20 - 1 .
  • the match strength may be computed as a ratio of the number of user matches to the total number of users in the crowd.
  • the match strength may be computed as a function of the number of user matches per keyword and keyword weights assigned to the keywords.
  • the keyword weights may be assigned by the user 20 - 1 .
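  • As an illustrative sketch in Python of the two match-strength variants mentioned above: the simple ratio of user matches to crowd size, and a weighted variant combining per-keyword match counts with keyword weights assigned by the requesting user. Normalizing by the total weight is an assumption, and the function names are hypothetical.

      def match_strength_ratio(user_matches, crowd_size):
          return user_matches / crowd_size if crowd_size else 0.0

      def match_strength_weighted(per_keyword_matches, keyword_weights, crowd_size):
          # per_keyword_matches: {keyword: number of users in the crowd matching that keyword}
          # keyword_weights: {keyword: weight assigned by the requesting user}
          if not crowd_size:
              return 0.0
          total_weight = sum(keyword_weights.values())
          score = sum(weight * (per_keyword_matches.get(kw, 0) / crowd_size)
                      for kw, weight in keyword_weights.items())
          return score / total_weight if total_weight else 0.0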
  • the aggregation engine 60 determines whether there are more crowds to process (step 2510 -A). If so, the process returns to step 2500 -A and is repeated for the next crowd. Once aggregate profiles have been generated for all of the crowds relevant to the current request, the aggregate profiles for the crowds are returned (step 2512 -A). More specifically, the aggregate profiles are included in the crowd data returned to the MAP client 30 - 1 in response to the current request.
  • the user 20 - 1 is enabled to activate a “nearby POIs” feature. If this feature is enabled, the crowds identified by the crowd analyzer 58 and processed by the aggregation engine 60 to produce corresponding aggregate profiles may also include crowds located at or near any nearby POIs.
  • the nearby POIs may be POIs predefined by the user 20 - 1 , the MAP application 32 - 1 , and/or the MAP server 12 that are within a predefined distance from the POI or the AOI of the current request.
  • FIG. 29B is a flow chart illustrating step 2406 of FIG. 28 in more detail according to another embodiment of the present disclosure.
  • the crowd data returned by the MAP server 12 includes aggregate profiles for the crowds identified for the POI or the AOI.
  • Upon receiving the crowd request, the MAP server 12 triggers the crowd analyzer 58 to identify crowds relevant to the current request, and then passes the identified crowds to the aggregation engine 60 in order to generate aggregate profiles for the identified crowds.
  • once the identified crowds are passed to the aggregation engine 60 , the aggregation engine 60 selects a next crowd to process, which for the first iteration is the first crowd (step 2500 -B).
  • the aggregation engine 60 selects the next user in the crowd (step 2502 -B).
  • the aggregation engine 60 compares the user profile of the user in the crowd to the user profile of the requesting user, which for this example is the user 20 - 1 of the mobile device 18 - 1 , or a select subset of the user profile of the requesting user (step 2504 -B).
  • the user 20 - 1 may be enabled to select a subset of his user profile to be used for generation of the aggregate profile.
  • the user 20 - 1 may select one or more of the profile categories to be used for aggregate profile generation.
  • the aggregation engine 60 identifies matches between the user profile of the user in the crowd and the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 .
  • the user profiles are expressed as keywords in a number of profile categories.
  • the aggregation engine 60 may then make a list of keywords from the user profile of the user in the crowd that match keywords in the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 .
  • the aggregation engine 60 determines whether there are more users in the crowd (step 2506 -B). If so, the process returns to step 2502 -B and is repeated for the next user in the crowd. Once all of the users in the crowd have been processed, the aggregation engine 60 generates an aggregate profile for the crowd based on data resulting from the comparisons of the user profiles of the users in the crowd to the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 (step 2508 -B). In an alternative embodiment, the aggregation engine 60 generates an aggregate profile for the crowd based on data resulting from the comparisons of the user profiles of the users in the crowd to a target user profile defined or otherwise specified by the user 20 - 1 .
  • the data resulting from the comparisons is a list of matching keywords for each of the users in the crowd.
  • the aggregate profile may then include a number of user matches over all keywords and/or a ratio of the number of user matches over all keywords to the number of users in the crowd.
  • the number of user matches over all keywords is a number of users in the crowd having at least one keyword in their user profile that matches a keyword in the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 .
  • the aggregate profile may additionally or alternatively include, for each keyword in the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 , a number of user matches for the keyword or a ratio of the number of user matches for the keyword to the number of users in the crowd.
  • keywords in the user profile of the user 20 - 1 or the select subset of the user profile of the user 20 - 1 that have no user matches may be excluded from the aggregate profile.
  • the aggregate profile for the crowd may include a total number of users in the crowd.
  • the aggregate profile for the crowd may additionally or alternatively include a match strength that is indicative of a degree of similarity between the user profiles of the users in the crowd and the user profile of the user 20 - 1 .
  • the match strength may be computed as a ratio of the number of user matches to the total number of users in the crowd.
  • the match strength may be computed as a function of the number of user matches per keyword and keyword weights assigned to the keywords.
  • the keyword weights may be assigned by the user 20 - 1 .
  • the aggregation engine 60 compares the user profiles of the users in the crowd to one another to determine N keywords having the highest number of user matches among the users in the crowd (step 2510 -B).
  • N may be, for example, five.
  • the aggregation engine 60 then adds any of the N keywords that are not already in the aggregate profile to the aggregate profile and flags those keywords as non-matching keywords (step 2512 -B). These keywords are flagged as non-matching because they do not match any of the keywords in the user profile, or select subset thereof, of the user 20 - 1 .
  • the non-matching keywords are preferably differentiated from the matching keywords in the aggregate profile when presented to the user 20 - 1 .
  • the non-matching keywords are particularly beneficial where there are few or no matching keywords between the user profile of the user 20 - 1 and the user profiles of the users in the crowd. In this situation, the non-matching keywords would allow the user 20 - 1 to gain some understanding of the interests of the users in the crowd.
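  • For illustration only, a minimal Python sketch of steps 2510 -B and 2512 -B, assuming user profiles are flat keyword sets and using hypothetical names: keyword occurrences are counted across the crowd, the N most common keywords are taken, and any of them not already in the requester-relative aggregate profile are added and flagged as non-matching.

      from collections import Counter

      def add_top_keywords(matching_keywords, crowd_profiles, n=5):
          # matching_keywords: keywords already in the aggregate profile (matches to the requester).
          # crowd_profiles: list of keyword sets, one per user in the crowd.
          counts = Counter(kw for profile in crowd_profiles for kw in profile)
          top_n = [kw for kw, _ in counts.most_common(n)]
          matching = set(matching_keywords)
          # Non-matching keywords are flagged so they can be presented differently to the user.
          non_matching = [kw for kw in top_n if kw not in matching]
          return {"matching": sorted(matching), "non_matching": non_matching}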
  • the aggregation engine 60 determines whether there are more crowds to process (step 2514 -B). If so, the process returns to step 2500 -B and is repeated for the next crowd. Once aggregate profiles have been generated for all of the crowds relevant to the current request, the aggregate profiles for the crowds are returned (step 2516 -B). More specifically, the aggregate profiles are included in the crowd data returned to the MAP client 30 - 1 in response to the current request.
  • the user 20 - 1 is enabled to activate a “nearby POIs” feature. If this feature is enabled, the crowds identified by the crowd analyzer 58 and processed by the aggregation engine 60 to produce corresponding aggregate profiles may also include crowds located at or near any nearby POIs.
  • the nearby POIs may be POIs predefined by the user 20 - 1 , the MAP application 32 - 1 , and/or the MAP server 12 that are within a predefined distance from the POI or the AOI of the current request.
  • FIG. 30 illustrates the operation of the system 10 of FIG. 1 to enable the subscriber device 22 to request information regarding current crowds according to one embodiment of the present disclosure.
  • the subscriber device 22 sends a crowd request to the MAP server 12 (step 2600 ).
  • the crowd request is a request for current crowds at a specified POI or AOI.
  • the crowd request may be initiated by the subscriber 24 at the subscriber device 22 via the web browser 38 or a custom application enabled to access the MAP server 12 .
  • the subscriber 24 is enabled to identify the POI or the AOI for the crowd request by, for example, selecting the POI or the AOI on a map, selecting a crowd center of an existing crowd as a POI, selecting a crowd location of an existing crowd as a center of an AOI, selecting the POI or the AOI from a predefined list of POIs and/or AOIs, or the like.
  • the predefined list of POIs and/or AOIs may be defined by, for example, the subscriber 24 and/or the MAP server 12 .
  • the MAP server 12 identifies one or more crowds relevant to the crowd request (step 2602 ). More specifically, in one embodiment, the crowd analyzer 58 performs a crowd formation process such as that described above in FIG. 22 to form one or more crowds relevant to the POI or the AOI of the crowd request. In another embodiment, the crowd analyzer 58 proactively forms crowds using a process such as that described above in FIGS. 24A through 24D and stores corresponding crowd records in the datastore 64 of the MAP server 12 . Then, rather than forming the relevant crowds in response to the crowd request, the crowd analyzer 58 queries the datastore 64 to identify the crowds that are relevant to the crowd request.
  • the crowds relevant to the crowd request may be those crowds within or overlapping a bounding region, such as a bounding box, for the crowd request.
  • if the crowd request is for a POI, the bounding region is a geographic region of a predefined shape and size centered at the POI.
  • if the crowd request is for an AOI, the bounding region is the AOI.
  • the MAP server 12 generates crowd data for the identified crowds (step 2604 ).
  • the crowd data for the identified crowds may include aggregate profiles for the crowds, information characterizing the crowds, or both.
  • the crowd data may include the locations of the crowds, the number of users in the crowds, the amount of time the crowds have been located at or near the POI or within the AOI, or the like.
  • the MAP server 12 then returns the crowd data to the subscriber device 22 (step 2606 ).
  • the MAP server 12 formats the crowd data into a suitable web format before sending the crowd data to the subscriber device 22 .
  • the manner in which the crowd data is formatted depends on the particular implementation.
  • the crowd data is overlaid upon a map.
  • the MAP server 12 may provide the crowd data to the subscriber device 22 via one or more web pages. Using the one or more web pages, crowd indicators representative of the locations of the crowds may be overlaid on a map. The subscriber 24 may then select a crowd in order to view additional crowd data regarding that crowd such as, for example, the aggregate profile of that crowd, characteristics of that crowd, or the like.
  • Upon receiving the crowd data, the subscriber device 22 presents the crowd data to the subscriber 24 (step 2608 ).
  • the MAP server 12 may roll-up the aggregate profiles for multiple crowds at a POI or in an AOI to provide a rolled-up aggregate profile that may be returned in addition to or as an alternative to the aggregate profiles of the individual crowds.
  • the subscriber 24 may be enabled to specify filtering criteria via the web browser 38 or a custom application for interacting with the MAP server 12 .
  • the subscriber 24 may specify filtering criteria regarding types of crowds in which the subscriber 24 is or is not interested.
  • the crowd data may be presented to the subscriber 24 via one or more web pages that enable the subscriber 24 to select a filtering feature.
  • a list of keywords appearing in the user profiles of the crowds identified as being relevant to the current request may be presented to the subscriber 24 .
  • the subscriber 24 may then specify one or more keywords from the list such that crowds having users with user profiles that do not include any of the specified keywords are filtered, or removed, and are therefore not considered when generating the crowd data in response to a crowd request.
  • FIG. 31 is a flow chart illustrating step 2604 of FIG. 30 in more detail according to one embodiment of the present disclosure.
  • the crowd data returned by the MAP server 12 includes aggregate profiles for the crowds identified for the POI or the AOI.
  • Upon receiving the crowd request, the MAP server 12 triggers the crowd analyzer 58 to identify crowds relevant to the crowd request, and then passes the identified crowds to the aggregation engine 60 in order to generate aggregate profiles for the identified crowds.
  • once the identified crowds are passed to the aggregation engine 60 , the aggregation engine 60 selects a next crowd to process, which for the first iteration is the first crowd (step 2700 ).
  • the aggregation engine 60 then generates an aggregate profile for the crowd based on a comparison of the user profiles of the users in the crowd to one another (step 2702 ).
  • alternatively, the aggregation engine 60 generates an aggregate profile for the crowd based on a comparison of the user profiles of the users in the crowd to a target user profile defined by the subscriber 24 .
  • the user profiles are expressed as keywords for each of a number of profile categories.
  • the aggregation engine 60 may determine an aggregate list of keywords for the crowd.
  • the aggregate list of keywords is a list of all keywords appearing in the user profiles of the users in the crowd.
  • the aggregate profile for the crowd may then include a number of user matches for each keyword in the aggregate list of keywords for the crowd.
  • the number of user matches for a keyword is the number of users in the crowd having a user profile that includes that keyword.
  • the aggregate profile may include the number of user matches for all keywords in the aggregate list of keywords for the crowd or the number of user matches for keywords in the aggregate list of keywords for the crowd having more than a predefined number of user matches (e.g., more than 1 user match).
  • the aggregate profile may also include the number of users in the crowd.
  • the aggregate profile may include, for each keyword in the aggregate list or each keyword in the aggregate list having more than a predefined number of user matches, a ratio of the number of user matches for the keyword to the number of users in the crowd.
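  • For illustration only, a minimal Python sketch of this crowd-internal aggregation, assuming user profiles are flat keyword sets and using hypothetical names: the aggregate keyword list is built from all members' profiles, user matches are counted per keyword, keywords below a minimum match count may be dropped, and ratios are reported against the crowd size.

      from collections import Counter

      def aggregate_profile_internal(crowd_profiles, min_user_matches=1):
          # crowd_profiles: list of keyword sets, one per user in the crowd.
          counts = Counter(kw for profile in crowd_profiles for kw in profile)
          crowd_size = len(crowd_profiles)
          kept = {kw: n for kw, n in counts.items() if n >= min_user_matches}
          return {
              "crowd_size": crowd_size,
              "keyword_matches": kept,
              "keyword_ratios": {kw: n / crowd_size for kw, n in kept.items()},
          }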
  • the aggregation engine 60 determines whether there are more crowds to process (step 2704 ). If so, the process returns to step 2700 and is repeated for the next crowd. Once aggregate profiles have been generated for all of the crowds relevant to the crowd request, the aggregate profiles for the crowds are returned (step 2706 ). Note that in some embodiments the subscriber 24 is enabled to activate a “nearby POIs” feature. If this feature is enabled, the crowds identified by the crowd analyzer 58 and processed by the aggregation engine 60 to produce corresponding aggregate profiles may also include crowds located at or near any nearby POIs. The nearby POIs may be POIs predefined by the subscriber 24 and/or the MAP server 12 that are within a predefined distance from the POI or the AOI of the crowd request.
  • FIGS. 32A through 32E illustrate a GUI 198 for an exemplary embodiment of the MAP application 32 - 1 of the mobile device 18 - 1 ( FIG. 1 ).
  • the GUI 198 includes a settings screen 198 - 1 that is presented in response to selection of a corresponding settings button 200 by the user 20 - 1 .
  • a navigation button 202 may be selected to view a map and perform navigation functions such as obtaining directions to a desired location.
  • a list button 204 enables the user 20 - 1 to view a list of friends, crowds, POIs, and AOIs, as discussed below.
  • the user 20 - 1 is enabled to provide his Facebook® login information which, as described above, enables the user profile of the user 20 - 1 to be obtained from the Facebook® social networking service.
  • the user 20 - 1 has already been logged in to Facebook.
  • the user 20 - 1 may logout of Facebook by selecting a logout button 206 .
  • the user 20 - 1 is enabled to view his profile and select one or more profile categories to be used for aggregate profile generation.
  • the settings screen 198 - 1 also enables the user 20 - 1 to configure a number of privacy settings. Namely, the settings screen 198 - 1 enables the user 20 - 1 to set a stealth mode switch 210 to either an on position or an off position. When the stealth mode switch 210 is in the on position, the location of the user 20 - 1 is not reported to the friends of the user 20 - 1 . However, the location of the user 20 - 1 is still reported for use by the MAP server 12 .
  • the privacy settings also include a location refresh setting 212 that enables the user 20 - 1 to configure how often location updates are to be sent by the MAP application 32 - 1 .
  • the settings screen 198 - 1 includes an alerts setting 214 that enables the user 20 - 1 to configure one or more alerts.
  • an alert can be tied to a particular POI or AOI such that the user 20 - 1 is alerted, or notified, when a crowd at the particular POI or AOI satisfies one or more specified criteria.
  • an alert can be tied to a particular crowd such that the user 20 - 1 is alerted, or notified, when the crowd satisfies one or more specified criteria.
  • a user profile screen 198 - 2 is presented to the user 20 - 1 via the GUI 198 , as illustrated in FIG. 32B .
  • the user profile screen 198 - 2 shows a number of profile categories 216 A through 216 E and corresponding lists of keywords 218 A through 218 E, which form the user profile of the user 20 - 1 .
  • the user 20 - 1 is enabled to select one or more of the profile categories 216 A through 216 E to be used for aggregate profile generation (i.e., comparison to user profiles for history objects and crowds to create corresponding aggregate profiles for the user 20 - 1 ).
  • the user 20 - 1 has selected his “My Interests” profile category 216 C, where the corresponding list of keywords 218 C define general interests of the user 20 - 1 .
  • the user 20 - 1 can return to the settings screen 198 - 1 by selecting a settings button 220 .
  • FIGS. 32C and 32D illustrate a list screen 198 - 3 that is presented to the user 20 - 1 via the GUI 198 in response to selecting the list button 204 .
  • the list screen 198 - 3 includes a friends button 222 , a crowds button 224 , a POI button 226 , an areas button 228 , and an all button 230 .
  • the list screen 198 - 3 enables the user 20 - 1 to view a list of his friends by selecting the friends button 222 , a list of crowds at POIs or within AOIs of the user 20 - 1 by selecting the crowds button 224 , a list of POIs of the user 20 - 1 by selecting the POI button 226 , or a list of AOIs of the user 20 - 1 by selecting the areas button 228 .
  • the list screen 198 - 3 enables the user 20 - 1 to view a list that includes the friends of the user, the crowds at POIs or within AOIs of the user 20 - 1 , the POIs of the user 20 - 1 , and the AOIs of the user 20 - 1 by selecting the all button 230 .
  • the list screen 198 - 3 presents an AOI list 232 that includes a number of AOIs previously defined by the user 20 - 1 .
  • each of the AOIs may be a static AOI defining a static geographic area or a dynamic AOI that is defined relative to a dynamic location such as a location of a friend of the user 20 - 1 .
  • the “Near Jack Shephard” AOI is a geographic area of a defined shape and size that is centered at the current location of the user's friend Jack Shephard.
  • persons whose current locations may be used for dynamic AOIs are limited to the friends of the user 20 - 1 .
  • the user 20 - 1 may select an AOI from the AOI list 232 in order to view crowd data for the AOI.
  • the GUI 198 may present a map including the My Neighborhood AOI. Crowds relevant to the My Neighborhood AOI are presented on the map.
  • the user 20 - 1 may then select a desired crowd in order to view detailed information regarding that crowd such as, for example, the aggregate profile of the crowd, characteristics of the crowd, or both.
  • the list screen 198 - 3 also presents a crowds list 234 that includes a number of crowds that are at the POIs or within the AOIs of the user 20 - 1 . In this example, there are twelve crowds.
  • the GUI 198 enables the user 20 - 1 to select a crowd from the crowds list 234 in order to view additional information regarding the crowd. For example, by selecting the Crowd of 6, the user 20 - 1 may be presented with a map showing the current location of the Crowd of 6 and detailed information regarding the Crowd of 6 such as, for example, the aggregate profile of the Crowd of 6, characteristics of the Crowd of 6, or both.
  • the list screen 198 - 3 also includes a friends list 236 , as illustrated in FIG. 32D .
  • the user 20 - 1 may select a friend from the friends list 236 in order to view crowds nearby that friend.
  • the current locations of the friends of the user 20 - 1 are treated as temporary or dynamic POIs such that crowd data for current locations of the friends of the user 20 - 1 is obtained from the MAP server 12 .
  • the user 20 - 1 may choose to define an AOI centered at the current location of a friend to create a dynamic AOI, as discussed above.
  • the friends list 236 also presents the current location of the friends of the user 20 - 1 relative to the current location of the user 20 - 1 .
  • the list screen 198 - 3 also includes a POI list 238 that includes a number of POIs of the user 20 - 1 .
  • the user 20 - 1 may select a POI from the POI list 238 in order to view crowd data for the POI.
  • the GUI 198 may present a map including the Steve's house POI. Crowds at or near the Steve's house POI are presented on the map.
  • the user 20 - 1 may then select a desired crowd in order to view detailed information regarding that crowd such as, for example, the aggregate profile of the crowd, characteristics of the crowd, or both.
  • the list screen 198 - 3 includes a You item 240 that may be selected by the user 20 - 1 to access the user profile screen 198 - 2 ( FIG. 32B ).
  • FIG. 32E is a crowd data display screen 198 - 4 presented by the GUI 198 .
  • the user 20 - 1 has selected the Around You AOI from the AOI list 232 ( FIG. 32C ).
  • the GUI 198 presents the crowd data display screen 198 - 4 for the Around You AOI.
  • the crowd data display screen 198 - 4 includes a map area 242 .
  • the current location of the user 20 - 1 is used as the center of the Around You AOI.
  • the current location of the user 20 - 1 is represented in the map area 242 by a corresponding indicator 244 . Crowds in the Around You AOI are represented in the map area by crowd indicators 246 through 250 .
  • the crowd indicators 246 through 250 show the locations of the crowds as well as match strengths for the crowds.
  • the locations of the crowds are included in the crowd data.
  • the match strengths for the crowds may be included in the aggregate profiles for the crowds or may be determined based on the aggregate profiles for the crowds.
  • the match strength of a crowd is computed as a ratio of the number of user matches over all keywords to the number of users in the crowd. A ratio of one results in a highest match strength, and a ratio of zero results in a lowest match strength.
  • the user 20 - 1 is enabled to select a particular crowd in the map area 242 to view more detailed information for that crowd in a crowd detail area 252 of the crowd data display screen 198 - 4 .
  • the user 20 - 1 has selected the crowd indicator 246 .
  • more detailed information for the crowd represented by the crowd indicator 246 is presented in the crowd detail area 252 .
  • the more detailed information for the crowd is from the crowd data for the crowd or derived from the crowd data for the crowd.
  • the aggregate profile of the crowd is used to derive the match strength for the crowd, and the match strength is presented in the crowd detail area 252 .
  • the crowd size and number of user matches over all keywords are obtained from the aggregate profile for the crowd and presented in the crowd detail area 252 .
  • a quality factor for the crowd is also presented.
  • the quality factor of the crowd may be an average of a quality or confidence of the current locations of the users in the crowd.
  • the crowd data display screen 198 - 4 includes a keyword matches area 254 for presenting keyword matches for the selected crowd.
  • a font size of the keywords in the keyword matches area 254 reflects the number of user matches for that keyword. Therefore, in this example, the number of user matches for the keyword “technology” is greater than the number of user matches for the keyword “books.”
  • FIGS. 33A through 33C illustrate an exemplary web interface 256 provided by the MAP server 12 and presented to the subscriber 24 at the subscriber device 22 .
  • the web interface 256 includes a number of tabs 258 through 272 , namely, a home tab 258 , a realtime tab 260 , a historical tab 262 , a watch zones tab 264 , an alerts tab 266 , a filters tab 268 , a reports tab 270 , and an account tab 272 .
  • the home tab 258 enables the subscriber 24 to view a home screen.
  • the home screen may include any desired information such as, for example, a link to a Frequently Asked Question (FAQ) page, instructions on how to use the web interface 256 , or the like.
  • the realtime tab 260 enables the subscriber to view realtime crowd data for POIs and/or AOIs of the subscriber 24 .
  • the historical tab 262 enables the subscriber 24 to view historical data for a POI or an AOI in a time context and/or a geographic context in the manner described above.
  • the watch zones tab 264 enables the subscriber 24 to select POIs and/or AOIs of interest to the subscriber 24 .
  • the alerts tab 266 enables the subscriber 24 to configure one or more alerts.
  • the filters tab 268 enables the subscriber 24 to configure filters and/or select filters to be applied to the crowd data in the realtime or historical view.
  • the reports tab 270 enables the subscriber 24 to access reports previously generated for crowds of interest, POIs, and/or AOIs.
  • the account tab 272 enables the subscriber 24 to manage the subscriber's account.
  • FIG. 33A illustrates the web interface 256 when the realtime tab 260 has been selected by the subscriber 24 .
  • the web interface 256 presents a map area 274 that shows an AOI 276 and a number of crowds 278 through 282 currently located within the AOI 276 .
  • crowds 284 and 286 that are outside the AOI 276 are also illustrated.
  • the crowds 284 and 286 are crowds located at other POIs or within other AOIs of the subscriber 24 that are not currently being viewed by the subscriber 24 .
  • the subscriber 24 may view another POI or AOI by selecting the desired POI or AOI from a list presented in response to selection of a button 288 .
  • POIs and AOIs are generically referred to as watch zones.
  • the subscriber 24 selects the crowd 278 .
  • the web interface 256 presents an aggregate profile window 290 to the subscriber 24 , as illustrated in FIG. 33B .
  • the aggregate profile window 290 presents an aggregate profile of the crowd 278 , where in this embodiment the aggregate profile is in the form of an interest histogram showing the number of user matches in the crowd 278 for each of a number of keywords.
  • the subscriber 24 may be enabled to create an alert for the crowd 278 by selecting a create an alert button 292 .
  • the subscriber 24 may be enabled to utilize the keywords in the aggregate profile window 290 to create an alert.
  • the subscriber 24 may create an alert such that the subscriber 24 is notified when the number of user matches for the keyword “Sushi” in the crowd 278 reaches one hundred.
  • the subscriber 24 may also be enabled to create a report for the crowd 278 by selecting a create a report button 294 .
  • the report may, for example, include details about the crowd 278 such as, for example, the location of the crowd 278 , the size of the crowd 278 , the aggregate profile of the crowd 278 , the current time and date, or the like, where the report may be saved or printed by the subscriber 24 .
  • the subscriber 24 may be enabled to create a filter by selecting a create a filter button 296 .
  • a new filter screen 298 is presented to the subscriber 24 , as illustrated in FIG. 33C .
  • the subscriber 24 may then select keywords from the interest histogram for the crowd 278 to be used for the filter.
  • the subscriber 24 may be enabled to add new keywords to the filter by selecting an add keywords button 300 .
  • the subscriber 24 is enabled to create the filter by selecting a create button 302 . Once the filter is created, the filter may be used to filter crowds for any AOI or POI of the subscriber 24 .
  • FIGS. 34 through 45 describe the operation of the crowd analyzer 58 of the MAP server 12 to characterize crowds according to another embodiment of the present disclosure. More specifically, the crowd analyzer 58 may determine a degree of fragmentation, a best-case and worst-case average DOS, and/or a degree of bidirectionality for crowds. This information may then be included in crowd data for those crowds returned to the mobile devices 18 - 1 through 18 -N and/or the subscriber device 22 . In addition or alternatively, the data characterizing crowds may be used to filter crowds. For example, a filter may be applied such that crowds having a worst-case average DOS greater than a defined threshold are not presented to a user/subscriber.
  • the filtering may be performed by the MAP server 12 before returning crowd data to the requesting device (i.e., one of the mobile devices 18 - 1 through 18 -N, the subscriber device 22 , or a device hosting the third-party service 26 ).
  • the filtering may be performed by the mobile devices 18 - 1 through 18 -N, the subscriber device 22 , or a device hosting the third-party service 26 .
  • FIG. 34 is a flow chart illustrating a spatial crowd fragmentation process according to one embodiment of the present disclosure. This process is similar to the spatial crowd formation process discussed above with respect to FIG. 22 .
  • the crowd analyzer 58 creates a crowd fragment of one user for each user in a crowd (step 2800 ). Note that this spatial crowd fragmentation process may be performed reactively in response to a current request for crowd data for a POI or an AOI or performed proactively.
  • the crowd analyzer 58 determines the two closest crowd fragments in the crowd (step 2802 ) and a distance between the two closest crowd fragments (step 2804 ). The distance between the two closest crowd fragments is the distance between the crowd fragment centers of the two closest crowd fragments.
  • the crowd fragment center for a crowd fragment having only one user is the current location of that one user.
  • the crowd analyzer 58 determines whether the distance between the two closest crowd fragments is less than an optimal inclusion distance for a crowd fragment (step 2806 ).
  • the optimal inclusion distance for a crowd fragment is a predefined static value.
  • alternatively, the optimal inclusion distance of the crowd may vary. For example, if the spatial crowd formation process of FIGS. 24A through 24D is used for proactive crowd formation, the optimal inclusion distance for the crowd varies over time. As such, the optimal inclusion distance for a crowd fragment within the crowd may be defined as a fraction of the optimal inclusion distance of the crowd, such that the optimal inclusion distance for a crowd fragment varies along with the optimal inclusion distance for the crowd itself.
  • the two closest crowd fragments are combined (step 2808 ) and a new crowd fragment center is computed for the resulting crowd fragment (step 2810 ).
  • the crowd fragment center may be computed using, for example, a center of mass algorithm.
  • the process returns to step 2802 and is repeated.
  • the crowd analyzer 58 has created the crowd fragments or defined the crowd fragments for the crowd.
  • the crowd analyzer 58 may then represent the degree of fragmentation of the crowd based on the number of crowd fragments in the crowd and, optionally, an average number of users per crowd fragment.
  • the degree of fragmentation of the crowd may be included in the crowd data returned to the requesting device in response to a crowd request for a POI or an AOI to which the crowd is relevant.
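A minimal Python sketch of the spatial crowd fragmentation process of FIG. 34 is given below. It assumes each user's current location is an (x, y) point and uses a plain Euclidean distance and a center-of-mass calculation; the function names and the representation of fragments as lists of points are assumptions for illustration only.

    import math

    def fragment_spatially(user_locations, optimal_inclusion_distance):
        # Step 2800: start with one crowd fragment per user.
        fragments = [[loc] for loc in user_locations]
        centers = list(user_locations)

        def center_of_mass(fragment):
            return (sum(p[0] for p in fragment) / len(fragment),
                    sum(p[1] for p in fragment) / len(fragment))

        while len(fragments) > 1:
            # Steps 2802-2804: find the two closest fragments and their distance.
            closest = None
            for i in range(len(fragments)):
                for j in range(i + 1, len(fragments)):
                    d = math.hypot(centers[i][0] - centers[j][0],
                                   centers[i][1] - centers[j][1])
                    if closest is None or d < closest[0]:
                        closest = (d, i, j)
            d, i, j = closest
            # Step 2806: stop once the closest pair is too far apart.
            if d >= optimal_inclusion_distance:
                break
            # Steps 2808-2810: combine the fragments and recompute the fragment center.
            fragments[i] = fragments[i] + fragments[j]
            centers[i] = center_of_mass(fragments[i])
            del fragments[j]
            del centers[j]
        return fragments

Under these assumptions, the degree of fragmentation would then be reported as the number of returned fragments together with the average fragment size.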
  • FIGS. 35A and 35B graphically illustrate the spatial crowd fragmentation process of FIG. 34 for an exemplary crowd 304 having bounding box 305 .
  • FIG. 35A illustrates the crowd 304 before spatial crowd fragmentation.
  • FIG. 35B illustrates the crowd 304 after spatial crowd fragmentation.
  • the crowd 304 includes a number of crowd fragments 306 through 314 .
  • the crowd 304 has a degree of fragmentation of five crowd fragments with an average of approximately 2 users per crowd fragment.
  • the crowd 304 has a moderately high degree of fragmentation.
  • the highest degree of fragmentation for the crowd 304 would be to have eleven crowd fragments with an average of one user per crowd fragment.
  • the lowest degree of fragmentation for the crowd 304 would be to have one crowd fragment with an average of eleven users per crowd fragment.
  • FIG. 36 illustrates a connectivity-based crowd fragmentation process according to one embodiment of the present disclosure.
  • the crowd analyzer 58 creates a crowd fragment for each user in the crowd (step 2900 ). Note that this connectivity-based crowd fragmentation process may be performed reactively in response to a current request for crowd data for a POI or an AOI or performed proactively.
  • the crowd analyzer 58 selects a next pair of crowd fragments in the crowd (step 2902 ) and then selects one user from each of those crowd fragments (step 2904 ).
  • the crowd analyzer 58 determines a DOS between the users from the pair of crowd fragments (step 2906 ).
  • DOS is a measure of the degree to which the two users are related in a social network (e.g., the Facebook® social network, the MySpace® social network, or the LinkedIn® social network).
  • the two users have a DOS of one if one of the users is a friend of the other user, a DOS of two if one of the users is a friend of a friend of the other user, a DOS of three if one of the users is a friend of a friend of a friend of the other user, etc.
  • if no relationship between the two users is defined or known, the DOS for the two users is set to a value equal to or greater than the maximum DOS for a crowd fragment.
  • the crowd analyzer 58 determines whether the DOS between the two users is less than a predefined maximum DOS for a crowd fragment (step 2908 ).
  • the predefined maximum DOS may be three. However, other maximum DOS values may be used to achieve the desired crowd fragmentation. If the DOS between the two users is not less than the predefined maximum DOS, the process proceeds to step 2916 . If the DOS between the two users is less than the predefined maximum DOS, the crowd analyzer 58 determines whether a bidirectionality requirement is satisfied (step 2910 ).
  • the bidirectionality requirement specifies whether the relationship between the two users must be bidirectional (i.e., the first user must directly or indirectly know the second user and the second user must directly or indirectly know the first user).
  • Bidirectionality may or may not be required depending on the particular embodiment. If the two users satisfy the bidirectionality requirement, the crowd analyzer 58 combines the pair of crowd fragments (step 2912 ) and computes a new crowd fragment center for the resulting crowd fragment (step 2914 ). The process then returns to step 2902 and is repeated for a next pair of crowd fragments. If the two users do not satisfy the bidirectionality requirement, the process proceeds to step 2916 .
  • the crowd analyzer 58 determines whether all user pairs from the two crowd fragments have been processed (step 2916 ). If not, the process returns to step 2904 and is repeated for a new pair of users from the two crowd fragments. If all user pairs from the two crowd fragments have been processed, the crowd analyzer 58 then determines whether all crowd fragments have been processed (step 2918 ). If not, the process returns to step 2902 and is repeated until all crowd fragments have been processed. Once this process is complete, the crowd analyzer 58 has determined the number of crowd fragments in the crowd. The degree of fragmentation of the crowd may then be provided as the number of crowd fragments and the average number of users per crowd fragment.
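The connectivity-based fragmentation of FIG. 36 can be sketched in Python as follows. The dos_between and is_friend callables, the set representation of fragments, and the simplification of the bidirectionality check to direct friendship in both directions are all assumptions made for the sake of a compact example (the sketch also omits recomputing fragment centers).

    def fragment_by_connectivity(users, dos_between, is_friend, max_dos, require_bidirectional):
        # Step 2900: start with one crowd fragment per user.
        fragments = [{user} for user in users]
        merged = True
        while merged:
            merged = False
            # Steps 2902-2904: walk over pairs of fragments and pairs of users.
            for i in range(len(fragments)):
                for j in range(i + 1, len(fragments)):
                    related = any(
                        dos_between(a, b) < max_dos                      # steps 2906-2908
                        and (not require_bidirectional
                             or (is_friend(a, b) and is_friend(b, a)))   # step 2910 (simplified)
                        for a in fragments[i] for b in fragments[j])
                    if related:
                        # Step 2912: combine the pair of crowd fragments.
                        fragments[i] |= fragments[j]
                        del fragments[j]
                        merged = True
                        break
                if merged:
                    break
        return fragments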
  • FIGS. 37A and 37B graphically illustrate the connectivity-based crowd fragmentation process of FIG. 36 .
  • FIG. 37A illustrates a crowd 316 having a number of users and a bounding box 317 .
  • FIG. 37B illustrates the crowd 316 after the connectivity-based crowd fragmentation process has been performed.
  • the users in a particular crowd fragment may not be close to one another spatially since, in this embodiment, there is no spatial requirement for users of the crowd fragment other than that the users of the crowd fragment are in the same crowd.
  • FIG. 38 is a flow chart illustrating a recursive crowd fragmentation process that uses both spatial crowd fragmentation and connectivity-based crowd fragmentation according to one embodiment of the present disclosure.
  • the crowd analyzer 58 performs a spatial crowd fragmentation process to create a number of crowd fragments for a crowd (step 3000 ).
  • the spatial crowd fragmentation process may be the spatial crowd fragmentation process of FIG. 34 .
  • the crowd analyzer 58 selects a next crowd fragment of the crowd fragments created for the crowd (step 3002 ).
  • the crowd analyzer 58 performs a connectivity-based crowd fragmentation process to create a number of sub-fragments for the crowd fragment of the crowd (step 3004 ).
  • the connectivity-based crowd fragmentation process may be the connectivity-based crowd fragmentation process of FIG. 36 .
  • the crowd analyzer 58 determines whether the last crowd fragment of the crowd has been processed (step 3006 ). If not, the process returns to step 3002 and is repeated until the last crowd fragment of the crowd has been processed. At that point, the process is complete.
  • the degree of fragmentation for the crowd may then include the number of sub-fragments and average number of users per sub-fragment for each crowd fragment.
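The recursive variant of FIG. 38 is essentially a composition of the two fragmentation steps. A minimal sketch under the same assumptions as the sketches above (helper names are assumed) might look like this; the process of FIG. 39 is obtained by simply swapping the two fragmenters.

    def fragment_recursively(crowd, spatial_fragmenter, connectivity_fragmenter):
        # Step 3000: spatial fragmentation of the whole crowd.
        spatial_fragments = spatial_fragmenter(crowd)
        # Steps 3002-3006: connectivity-based sub-fragmentation of each spatial fragment.
        return [connectivity_fragmenter(fragment) for fragment in spatial_fragments]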
  • FIG. 39 is a flow chart illustrating a recursive crowd fragmentation process that uses both spatial crowd fragmentation and connectivity-based crowd fragmentation according to another embodiment of the present disclosure.
  • the crowd analyzer 58 performs a connectivity-based crowd fragmentation process to create a number of crowd fragments for a crowd (step 3100 ).
  • the connectivity-based crowd fragmentation process may be the connectivity-based crowd fragmentation process of FIG. 36 .
  • the crowd analyzer 58 selects a next crowd fragment of the crowd fragments created for the crowd (step 3102 ).
  • the crowd analyzer 58 performs a spatial crowd fragmentation process to create a number of sub-fragments for the crowd fragment of the crowd (step 3104 ).
  • the spatial crowd fragmentation process may be the spatial crowd fragmentation process of FIG. 34 .
  • the crowd analyzer 58 determines whether the last crowd fragment of the crowd has been processed (step 3106 ). If not, the process returns to step 3102 and is repeated until the last crowd fragment of the crowd has been processed. At that point, the process is complete.
  • the degree of fragmentation for the crowd may then include the number of sub-fragments and average number of users per sub-fragment for each crowd fragment.
  • FIGS. 40A and 40B illustrate an exemplary graphical representation of the degree of fragmentation for a crowd.
  • This exemplary graphical representation may be presented by the MAP application 32 - 1 based on corresponding crowd data provided by the MAP server 12 in response to a crowd request or presented by the MAP server 12 to the subscriber 24 via the web browser 38 of the subscriber device 22 .
  • FIG. 40A illustrates a graphical representation of the degree of fragmentation for a crowd having two crowd fragments with an average of twenty-five users per crowd fragment.
  • FIG. 40B illustrates a graphical representation of the degree of fragmentation for a crowd having twenty-five crowd fragments with an average of two users per crowd fragment.
  • FIG. 41 is a flow chart for a process for determining a best-case and worst-case average DOS for a crowd fragment of a crowd according to one embodiment of the present disclosure.
  • the crowd analyzer 58 counts the number of 1 DOS, 2 DOS, . . . , M DOS relationships in a crowd fragment (step 3200 ) and the number of user pairs in the crowd fragment for which explicit relationships are not defined or known (step 3202 ). More specifically, for each pair of users in the crowd fragment, the crowd analyzer 58 determines the DOS between the pair of users if it is known, or determines that the DOS between the pair of users is not defined or known. Based on these determinations, the crowd analyzer 58 counts the number of user pairs having a DOS of 1, the number of user pairs having a DOS of 2, etc. In addition, the crowd analyzer 58 counts the number of user pairs for which no relationship is defined or known.
  • the crowd analyzer 58 then computes a best-case average DOS for the crowd fragment using a best-case DOS for the user pairs in the crowd fragment for which explicit relationships are not defined (step 3204 ).
  • the best-case DOS is 1.
  • the best-case average DOS may be computed as:

      AverageDOS_BestCase = ( Σ_{i=1..M} ( i × DOS_count_i ) + DOS_BestCase × Num_Unknown ) / ( Σ_{i=1..M} DOS_count_i + Num_Unknown )

    where:
  • AverageDOS_BestCase is the best-case average DOS for the crowd fragment
  • DOS_count_i is the number of user pairs for the ith DOS
  • DOS_BestCase is the best-case DOS
  • Num_Unknown is the number of user pairs for which a relationship is not defined or is unknown.
  • the crowd analyzer 58 also computes the worst-case average DOS for the crowd fragment using a worst-case DOS for the user pairs in the crowd fragment for which explicit relationships are not defined (step 3206 ).
  • the worst-case DOS is the greatest possible DOS that the crowd analyzer 58 considers, which may be, for example, a DOS of greater than or equal to 7.
  • the worst-case DOS may be 10.
  • other values for the worst-case DOS may be used.
  • the worst-case average DOS may be computed as:

      AverageDOS_WorstCase = ( Σ_{i=1..M} ( i × DOS_count_i ) + DOS_WorstCase × Num_Unknown ) / ( Σ_{i=1..M} DOS_count_i + Num_Unknown )

    where:
  • AverageDOS_WorstCase is the worst-case average DOS for the crowd fragment
  • DOS_count_i is the number of user pairs for the ith DOS
  • DOS_WorstCase is the worst-case DOS
  • Num_Unknown is the number of user pairs for which a relationship is not defined or is unknown.
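Given these formulas, the best-case and worst-case average DOS for a crowd fragment can be computed with a single helper. The dictionary of DOS counts and the function name below are assumptions for illustration.

    def average_dos(dos_counts, num_unknown, fill_in_dos):
        # dos_counts[i] is the number of user pairs with a DOS of i; pairs with no
        # known relationship are assigned fill_in_dos (1 for the best case,
        # e.g. 10 for the worst case).
        weighted = sum(dos * count for dos, count in dos_counts.items())
        weighted += fill_in_dos * num_unknown
        total_pairs = sum(dos_counts.values()) + num_unknown
        return weighted / total_pairs if total_pairs else 0.0

    # Example: 3 pairs at DOS 1, 2 pairs at DOS 2, and 1 pair with no known relationship.
    counts = {1: 3, 2: 2}
    best_case = average_dos(counts, num_unknown=1, fill_in_dos=1)    # (3*1 + 2*2 + 1*1) / 6 ≈ 1.33
    worst_case = average_dos(counts, num_unknown=1, fill_in_dos=10)  # (3*1 + 2*2 + 1*10) / 6 ≈ 2.83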
  • FIG. 42 is a more detailed flow chart illustrating the process for determining a best-case and worst-case average DOS for a crowd fragment according to one embodiment of the present disclosure.
  • the crowd analyzer 58 selects the next user in the crowd fragment, which for the first iteration is the first user in the crowd fragment (step 3300 ), and clears a found member list (step 3302 ).
  • the crowd analyzer 58 sets a current DOS to one (step 3304 ).
  • the crowd analyzer 58 selects a next friend of the user (step 3306 ).
  • information identifying the friends of the user is obtained from the one or more profile servers 14 along with the user profile of the user.
  • the crowd analyzer 58 determines whether the friend of the user is also a member of the crowd fragment (step 3308 ). If not, the process proceeds to step 3314 . If the friend is also a member of the crowd fragment, the crowd analyzer 58 determines whether the friend is already in the found member list (step 3310 ). If so, the process proceeds to step 3314 . If the friend is also a member of the crowd fragment and is not already in the found member list, the crowd analyzer 58 increments a found count for the current DOS and adds the friend to the found member list (step 3312 ). At this point, whether proceeding from step 3308 or step 3310 , the crowd analyzer 58 then determines whether the user has more friends to process (step 3314 ). If so, the process returns to step 3306 and is repeated for the next friend of the user.
  • the crowd analyzer 58 performs steps 3306 through 3314 recursively for each newly found friend, incrementing the current DOS for each recursion, up to a maximum number of recursions (step 3316 ).
  • Newly found friends are friends added to the found member list in the iteration or recursion of steps 3306 through 3314 just completed.
  • steps 3306 through 3316 operate to find friends of the user selected in step 3300 that are also members of the crowd fragment and increment the found count for a DOS of 1 for each of the found friends of the user.
  • the crowd analyzer 58 finds friends of that friend of the user that are also members of the crowd fragment and increments the found count for a DOS of 2 for each of the found friends of the friend of the user. The process continues in this manner to count the number of user relationships between the user selected in step 3300 and other members in the crowd fragment up to the Mth DOS.
  • the crowd analyzer 58 determines a count of users in the crowd fragment that were not found as being directly or indirectly related to the user selected in step 3300 (step 3318 ). More specifically, by looking at the found member list and the total number of users in the crowd fragment, the crowd analyzer 58 is enabled to determine the count of users in the crowd fragment that were not found as being directly or indirectly related to the user.
  • the crowd analyzer 58 determines whether there are more users in the crowd fragment to process (step 3320 ). If so, the process returns to step 3300 and is repeated for the next user in the crowd fragment. Once all of the users in the crowd fragment have been processed, the crowd analyzer 58 computes a best-case average DOS for the crowd fragment (step 3322 ). Again, in one embodiment, the best-case average DOS for the crowd fragment is computed as:

      AverageDOS_BestCase = ( Σ_{i=1..M} ( i × found_count_DOSi ) + DOS_BestCase × Num_Unknown ) / ( Σ_{i=1..M} found_count_DOSi + Num_Unknown )

    where:
  • AverageDOS_BestCase is the best-case average DOS for the crowd fragment
  • found_count_DOSi is the found count for the ith DOS
  • DOS_BestCase is the best-case DOS, which may be set to, for example, 1
  • Num_Unknown is the total count of user pairs in the crowd fragment that were not found as being directly or indirectly related.
  • the crowd analyzer 58 computes a worst-case average DOS for the crowd fragment (step 3324 ).
  • the worst-case average DOS for the crowd fragment is computed as:

      AverageDOS_WorstCase = ( Σ_{i=1..M} ( i × found_count_DOSi ) + DOS_WorstCase × Num_Unknown ) / ( Σ_{i=1..M} found_count_DOSi + Num_Unknown )

    where:
  • AverageDOS_WorstCase is the worst-case average DOS for the crowd fragment
  • found_count_DOSi is the found count for the ith DOS
  • DOS_WorstCase is the worst-case DOS, which may be set to, for example, 10
  • Num_Unknown is the total count of user pairs in the crowd fragment that were not found as being directly or indirectly related.
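The per-user counting loop of FIG. 42 amounts to a bounded breadth-first walk over the friend graph. The sketch below (with an assumed friends_of callable and an assumed membership set) gathers the found counts and the number of fragment members that were never reached for one selected member.

    def found_counts_for_user(user, fragment_members, friends_of, max_dos):
        # Steps 3300-3316: breadth-first walk of the friend graph, one level per DOS,
        # counting fragment members first found at each DOS.
        found_counts = {dos: 0 for dos in range(1, max_dos + 1)}
        found_members = set()            # the "found member list" of step 3302
        frontier = [user]
        for dos in range(1, max_dos + 1):
            next_frontier = []
            for person in frontier:
                for friend in friends_of(person):
                    if (friend in fragment_members
                            and friend != user
                            and friend not in found_members):
                        found_counts[dos] += 1            # step 3312
                        found_members.add(friend)
                        next_frontier.append(friend)
            frontier = next_frontier
        # Step 3318: members never reached directly or indirectly (the user itself excluded,
        # assuming the user is counted in fragment_members).
        num_not_found = len(fragment_members) - 1 - len(found_members)
        return found_counts, num_not_found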
  • FIGS. 43A through 43D illustrate an exemplary graphical representation of the best-case and worst-case average DOS for a crowd fragment according to one embodiment of the present disclosure.
  • Such graphical representations may be presented to the mobile users 20 - 1 through 20 -N by the MAP applications 32 - 1 through 32 -N or presented to the subscriber 24 by the MAP server 12 via the web browser 38 at the subscriber device 22 based on data included in the crowd data for corresponding crowds.
  • FIG. 43A illustrates the graphical representation for a crowd fragment wherein all users in the crowd fragment are friends with one another. As such, both the best-case and worst-case average DOS for the crowd fragment are 1.
  • FIG. 43B illustrates the graphical representation for a crowd fragment wherein the best-case average DOS is 2 and the worst-case average DOS is 3.
  • FIG. 43C illustrates the graphical representation for a crowd fragment wherein the best-case average DOS is 4 and the worst-case average DOS is greater than 7.
  • FIG. 43D illustrates the graphical representation for a crowd fragment wherein the best-case average DOS is 6 and the worst-case average DOS is 7.
  • while the graphical representations shown are for the best-case and worst-case average DOS for a crowd fragment, a best-case and worst-case average DOS for a crowd may additionally or alternatively be computed by the MAP server 12 and presented to the users 20 - 1 through 20 -N or the subscriber 24 .
  • FIG. 44 is a flow chart for a process of determining a degree of bidirectionality of relationships between users in a crowd fragment according to one embodiment of the present disclosure. Note, however, that this same process may be used to determine a degree of bidirectionality of relationships between users in a crowd.
  • the crowd analyzer 58 selects the next user in a crowd fragment, which for the first iteration is the first user in the crowd fragment (step 3400 ).
  • the crowd analyzer 58 selects the next friend of the user (step 3402 ).
  • friends of the users 20 - 1 through 20 -N may have previously been obtained from the one or more profile servers 14 along with the user profiles of the users 20 - 1 through 20 -N and provided to the MAP server 12 .
  • the crowd analyzer 58 determines whether the friend of the user is a member of the crowd fragment (step 3404 ). If not, the process proceeds to step 3412 . If the friend of the user is a member of the crowd fragment, the crowd analyzer 58 increments a connection count (step 3406 ). In addition, the crowd analyzer 58 determines whether the relationship between the user and the friend is bidirectional (step 3408 ). In other words, the crowd analyzer 58 determines whether the user is also a friend of that friend. If not, the process proceeds to step 3412 . If so, the crowd analyzer 58 increments a bidirectional count (step 3410 ).
  • the crowd analyzer 58 determines whether the user has more friends to process (step 3412 ). If so, the process returns to step 3402 and is repeated for the next friend of the user. Once all of the friends of the user have been processed, the crowd analyzer 58 determines whether there are more users in the crowd fragment (step 3414 ). If so, the process returns to step 3400 and is repeated for the next user in the crowd fragment.
  • the crowd analyzer 58 computes a ratio of the bidirectional count (i.e., the number of bidirectional friend relationships) over the connection count (i.e., the number of unidirectional and bidirectional friend relationships) for the crowd fragment (step 3416 ). At this point, the process ends.
  • the ratio of the bidirectionality count to the connection count reflects the degree of bidirectionality of friendship relationships for the crowd fragment and may be returned to the requesting user or subscriber in the crowd data for the corresponding crowd.
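A compact sketch of the degree-of-bidirectionality calculation of FIG. 44 follows; the friends_of callable and the set representation of fragment members are assumptions for illustration.

    def degree_of_bidirectionality(fragment_members, friends_of):
        connection_count = 0
        bidirectional_count = 0
        for user in fragment_members:
            for friend in friends_of(user):
                if friend in fragment_members:        # step 3404
                    connection_count += 1             # step 3406
                    if user in friends_of(friend):    # steps 3408-3410
                        bidirectional_count += 1
        # Step 3416: ratio of bidirectional relationships to all relationships.
        return bidirectional_count / connection_count if connection_count else 0.0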
  • FIGS. 45A through 45C illustrate an exemplary graphical representation of the degree of bidirectionality of friendship relationships for a crowd fragment according to one embodiment of the present disclosure. Note that this graphical representation may also be used to present the degree of bidirectionality of friendship relationships for a crowd.
  • FIG. 45A illustrates the graphical representation for a crowd having a ratio of bidirectional friend relationships to total friend relationships of approximately 0.5.
  • FIG. 45B illustrates the graphical representation for a crowd having a ratio of bidirectional friend relationships to total friend relationships of approximately 0.2.
  • FIG. 45C illustrates the graphical representation for a crowd having a ratio of bidirectional friend relationships to total friend relationships of approximately 0.95. Graphical representations such as those in FIGS.
  • 45A through 45C may be presented to the mobile users 20 - 1 through 20 -N by the MAP applications 32 - 1 through 32 -N or presented to the subscriber 24 by the MAP server 12 via the web browser 38 at the subscriber device 22 based on data included in the crowd data for corresponding crowds.
  • FIGS. 46 through 51 describe embodiments of the present disclosure where confidence levels for the current locations of users in a crowd are determined and utilized to provide a quality level for the aggregate profile for the crowd and/or confidence levels for individual keywords included in the aggregate profile for the crowd.
  • the current locations of the users 20 - 1 through 20 -N are not updated instantaneously or even substantially instantaneously. There are many reasons why the current locations of the users 20 - 1 through 20 -N are not and possibly cannot be updated instantaneously.
  • battery life and performance limitations may all limit the ability of the mobile devices 18 - 1 through 18 -N to provide continuous location updates to the MAP server 12 .
  • the users 20 - 1 through 20 -N may move from their current locations stored by the MAP server 12 well before corresponding location updates are received by the MAP server 12 .
  • the mobile device 18 - 1 is unable to send location updates for the user 20 - 1 .
  • the current location stored for the user 20 - 1 at the MAP server 12 will no longer be accurate if the user 20 - 1 moves to a new location while the mobile device 18 - 1 is off.
  • FIGS. 46 through 51 describe embodiments where the contribution of the user profiles of the users 20 - 1 through 20 -N to aggregate profiles of corresponding crowds is modified based on an amount of time that has elapsed since receiving location updates for the users 20 - 1 through 20 -N. More specifically, FIG. 46 is a flow chart for a process for generating a quality level for an aggregate profile for a crowd according to one embodiment of the present disclosure.
  • the crowd analyzer 58 of the MAP server 12 creates an aggregate profile for one or more crowds relevant to a POI or an AOI in response to a crowd request from a requestor (i.e., one of the users 20 - 1 through 20 -N, the subscriber 24 , or the third-party service 26 ).
  • the aggregate profile may be generated based on comparisons of the user profiles of the users in the crowd to a user profile or a select subset of the user profile of a requesting user (e.g., one of the users 20 - 1 through 20 -N for which the aggregate profile is generated), comparisons of the user profiles of the users in the crowd to a target user profile, or comparisons of the user profiles of the users in the crowd to one another.
  • the crowd analyzer 58 can generate a quality level for the aggregate profile for one or more such crowds. Note that the quality level for the aggregate profile of a crowd may also be viewed as a quality level for the crowd itself particularly where a spatial crowd formation process has been used to form the crowd.
  • the crowd analyzer 58 of the MAP server 12 computes confidence levels for the current locations of the users in the crowd (step 3500 ).
  • the confidence level for the current location of a user ranges from 0 to 1, where the confidence level is set to 1 when the current location is updated and then linearly decreases to 0 over some desired period of time.
  • the confidence level of the current location of a user may be computed based on the following equation:

      CL_LOCATION = CL_LOCATION,PREVIOUS − ( DR × Δt )

    where:
  • CL_LOCATION is the confidence level of the current location of the user
  • Δt is an amount of time that has elapsed since the confidence level of the current location of the user was last computed
  • DR is a predefined decrease rate, or the rate at which the confidence level is to decrease over time
  • CL_LOCATION,PREVIOUS is the previous confidence level of the current location of the user.
  • the decrease rate (DR) is preferably selected such that the confidence level (CL) of the current location of the user will decrease from 1 to 0 over a desired amount of time. Note that the decrease rate (DR) may be defined separately for each user or may be the same for all users.
  • the decrease rate (DR) for a user may be defined once and re-used or defined on a case-by-case basis based on the user's current and past locations, profile, history, or the like.
  • the desired amount of time may be any desired amount of time such as, but not limited to, a desired number of hours.
  • the desired amount of time may be 12 hours, and the corresponding decrease rate (DR) is 1/12 if time is measured in hours and 1/(12×60×60×1000) if time is measured in milliseconds.
  • the MAP server 12 stores the confidence level (CL) of the user, a timestamp indicating when the confidence level (CL) was computed, and optionally a timestamp indicating when the current location of the user was last updated. This information may be stored in the user record for the user. Alternatively, only the timestamp of the last location update is stored in the user record for the user. If the initial confidence level (CL) varies per user, the initial confidence level (CL) is also stored in the user record.
  • the current confidence level (CL) is determined whenever it is needed by retrieving the last location update timestamp from the user record, determining an amount of elapsed time between the current time and the time of the last location update, and calculating the new confidence level based on the decrease rate (DR) and the initial confidence level (CL). Also note that while the confidence levels of the current locations of the users in the crowd are computed using a linear algorithm in the exemplary embodiment described above, nonlinear algorithms may alternatively be used.
  • the crowd analyzer 58 may also consider location confidence events. Note that timestamps of such location confidence events and the location confidence events themselves may also be stored to enable correct calculation of the confidence levels.
  • the location confidence events may include negative location confidence events such as, but not limited to, the passing of a known closing time of a business (e.g., restaurant, bar, shopping mall, etc.) at which a user is located or movement of a crowd with which a user has a high affinity.
  • the location confidence events may additionally or alternatively include positive location confidence events such as, but not limited to, frequent interaction with the corresponding MAP application by the user. Frequent interaction with the MAP application by the user may be indicated by reception of frequent location updates for the user.
  • in addition to or as an alternative to using location confidence events, other information such as location profiles, event information (e.g., a live music event, an open-mic night, etc.), current and past crowd histories, or the like may be used when computing the confidence levels for the current locations of the users in the crowds.
  • in response to detecting a negative location confidence event with respect to a user, the crowd analyzer 58 may increase the decrease rate (DR) used to compute the confidence level (CL) of the current location of the user.
  • conversely, in response to detecting a positive location confidence event with respect to a user, the crowd analyzer 58 may decrease the decrease rate (DR) used to compute the confidence level (CL) of the current location of the user or replace the decrease rate (DR) with an increase rate such that the confidence level of the user increases in response to the location confidence event or while the location confidence event continues (e.g., increases while the user frequently interacts with the MAP application).
  • the crowd analyzer 58 may decrease the confidence level (CL) of the current location of the user by a predefined amount. For example, if the negative location event is the passing of a closing time of a business at which the user is located, the crowd analyzer 58 may decrease the confidence level (CL) of the user to zero. Similarly, in response to detecting a positive location confidence event with respect to a user, the crowd analyzer 58 may increase the confidence level (CL) of the current location of the user by a predefined amount. For example, in response to detecting that the user is frequently interacting with the MAP application at his mobile device, the crowd analyzer 58 may increase the confidence level (CL) of the current location of the user by 0.1.
  • the crowd analyzer 58 determines a quality level for the aggregate profile of the crowd (step 3502 ).
  • the quality level for the crowd is computed as an average of the confidence levels of the current locations of the users in the crowd.
  • the quality level of the aggregate profile may then be provided along with the aggregate profile in the crowd data for the crowd returned to the requestor.
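A hedged sketch of the linear confidence decay and of the quality level of step 3502 as the average confidence over the crowd is shown below. The function names, the hour-based time unit, and the default decrease rate of 1/12 are assumptions drawn from the 12-hour example above.

    def location_confidence(last_update_time, now, decrease_rate, initial_confidence=1.0):
        # Linear decay from initial_confidence toward 0 as time elapses since the
        # last location update; with decrease_rate = 1/12 and time in hours the
        # confidence reaches 0 twelve hours after the update.
        elapsed = now - last_update_time
        return max(0.0, initial_confidence - decrease_rate * elapsed)

    def aggregate_profile_quality(last_update_times, now, decrease_rate=1.0 / 12):
        # Steps 3500-3502: quality level of the aggregate profile as the average of
        # the confidence levels of the current locations of the users in the crowd.
        levels = [location_confidence(t, now, decrease_rate) for t in last_update_times]
        return sum(levels) / len(levels) if levels else 0.0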
  • FIG. 47 illustrates an exemplary GUI 318 for presenting an aggregate profile 320 for a crowd and a quality level 322 of the aggregate profile 320 generated using the process of FIG. 46 according to one embodiment of the present disclosure.
  • FIG. 48 illustrates another exemplary GUI 324 for presenting an aggregate profile 326 for a crowd and a quality level 328 of the aggregate profile 326 generated using the process of FIG. 46 according to one embodiment of the present disclosure.
  • the aggregate profile 326 also indicates a relative number of user matches for each of a number of keywords in the aggregate profile 326 . More specifically, in a keyword area 330 of the GUI 324 , the sizes of the keywords indicate the relative number of user matches for the keywords. Therefore, in this example, the keyword “books” has a larger number of user matches than the keyword “politics,” as indicated by the size, or font size, of the two keywords in the keyword area 330 of the GUI 324 .
  • FIG. 49 illustrates a flow chart for a process for generating confidence factors for keywords included in an aggregate profile for a crowd based on confidence levels for current locations of users in the crowd according to one embodiment of the present disclosure.
  • the crowd analyzer 58 creates an aggregate profile for one or more crowds relevant to a POI or an AOI in response to a crowd request from a requestor (i.e., one of the users 20 - 1 through 20 -N, the subscriber 24 , or the third-party service 26 ).
  • the aggregate profile may be generated based on comparisons of the user profiles of the users in the crowd to a user profile or a select subset of the user profile of a requesting user (e.g., one of the users 20 - 1 through 20 -N for which the aggregate profile is generated), comparisons of the user profiles of the users in the crowd to a target user profile, or comparisons of the user profiles of the users in the crowd to one another.
  • the aggregate profile for a crowd includes a number of user matches for each of a number of keywords and/or a ratio of the number of user matches to the total number of users in the crowd for each of a number of keywords.
  • the crowd analyzer 58 of the MAP server 12 computes confidence levels for the current locations of the users in the crowd (step 3600 ).
  • the confidence levels for the current locations of the users may be computed as discussed above with respect to step 3500 of FIG. 46 . In general, the confidence levels for the current locations of the users may be computed based on an amount of time since the current location of the user was last updated, location confidence events, or both.
  • the crowd analyzer 58 determines a confidence level for each keyword in the aggregate profile of the crowd based on the confidence levels for the current locations of the corresponding users (step 3602 ).
  • the confidence level for the keyword is computed as an average of the confidence levels of the current locations of the users in the crowd having user profiles including the keyword. In other words, for each keyword, there are a number of user matches. The confidence levels of the current locations of the users corresponding to the user matches for the keyword are averaged to provide the confidence level for the keyword.
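Per-keyword confidence levels (step 3602) are then just averages of the location confidence levels of the users that matched each keyword. The sketch below assumes the keyword-to-users mapping and per-user confidence dictionary shown; the names are illustrative only.

    def keyword_confidence_levels(keyword_matches, location_confidence_by_user):
        # Steps 3600-3602: the confidence level of a keyword is the average of the
        # location confidence levels of the users whose profiles matched it.
        return {
            keyword: (sum(location_confidence_by_user[u] for u in users) / len(users)
                      if users else 0.0)
            for keyword, users in keyword_matches.items()
        }

    # Example: "technology" matched users u1 and u2; "books" matched only u3.
    confidences = keyword_confidence_levels(
        {"technology": ["u1", "u2"], "books": ["u3"]},
        {"u1": 1.0, "u2": 0.5, "u3": 0.8})
    # confidences == {"technology": 0.75, "books": 0.8}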
  • FIG. 50 illustrates an exemplary GUI 332 for presenting an aggregate profile 334 for a crowd including an indication of a confidence level for each of a number of keywords in the aggregate profile 334 according to one embodiment of the present disclosure.
  • the aggregate profile 334 includes a quality level 336 of the aggregate profile 334 generated using the process of FIG. 46 .
  • the quality level 336 of the aggregate profile 334 is optional.
  • the GUI 332 includes a keyword area 338 that graphically illustrates the keywords in the aggregate profile 334 and the confidence levels of the keywords.
  • the confidence levels of the keywords are graphically indicated via opacity of the keywords in the keyword area 338 . The lighter the text of the keyword, the lesser the confidence level of the keyword.
  • the size of the keywords in the keyword area 338 is indicative of the number of user matches for the keywords, as discussed above with respect to FIG. 48 .
  • the size of the keywords in the keyword area 338 may be indicative of the confidence levels of the keywords rather than the number of user matches for the keywords.
  • FIG. 51 graphically illustrates modification of the confidence level of the current location of a user according to one embodiment of the present disclosure.
  • a location update for the user is received by the MAP server 12 and, as such, the confidence level of the current location of the user is set to 1.
  • a positive location confidence event is detected. This positive location confidence event may be detected when, for example, the crowd analyzer 58 is generating an aggregate profile for a crowd in which the user is included and the user has been frequently interacting with the MAP application of his mobile device.
  • the confidence level for the current location of the user at time 1 is computed using an increase rate (i.e., a positive rate of change) rather than a decrease rate (DR).
  • the confidence level of the current location of the user increases from time 0 to time 1 as shown.
  • the confidence level for the current location of the user at time 1 may be increased by a predefined amount such as, for example, 0.1 points.
  • another positive location confidence event is detected.
  • the increase rate is further increased, and the confidence level for the current location of the user at time 2 is computed using the new increase rate.
  • the confidence level of the current location of the user further increases from time 1 to time 2 .
  • the confidence level for the current location of the user at time 2 may be further increased by the predefined amount such as, for example, 0.1 points.
  • the confidence level of the current location of the user is updated.
  • the confidence level of the current location of the user may be updated by the crowd analyzer 58 before generating an aggregate profile for a crowd in which the user is included.
  • the confidence level for the current location of the user is computed based on the previous confidence level computed at time 2 and a predefined decrease rate. As such, the confidence level for the current location of the user at time 3 is less than the confidence level for the current location of the user at time 2 .
  • a negative location confidence event is detected.
  • the decrease rate is increased, and the confidence level for the current location of the user at time 4 is computed based on the new decrease rate.
  • the confidence level for the current location of the user at time 4 is less than the confidence level for the current location of the user at time 3 .
  • the confidence level for the current location of the user continues to decrease until reaching 0 at approximately 4.5 hours after time 0 .
  • the confidence level for the current location of the user at time 4 may be decreased by a predefined amount in addition to or as an alternative to decreasing the confidence level by an amount determined by the amount of time that has elapsed between time 3 and time 4 and the decrease rate.
  • FIG. 52 illustrates an exemplary third-party application 34 - 1 that utilizes data from the MAP server 12 to control sharing of a number of sharable items 340 according to one embodiment of the present disclosure.
  • the third-party application 34 - 1 is generally any type of application that enables sharing of the sharable items 340 and is preferably implemented in software.
  • the third-party application 34 - 1 includes the sharable items 340 and a MAP gatekeeper module 342 (hereinafter “gatekeeper module 342 ”).
  • the sharable items 340 may be any type of digital item such as, for example, a user profile of the user 20 - 1 , a component of a user profile of the user 20 - 1 , a media item, or the like.
  • the user profile of the user 20 - 1 may be the same user profile used by the MAP server 12 for the user 20 - 1 , the same user profile of the user 20 - 1 that is obtained from the profile server 14 and processed by the MAP server 12 to provide the user profile used by the MAP server 12 , or a different user profile defined by the user 20 - 1 for the third-party application 34 - 1 .
  • a media item is a video item such as, for example, a user-generated video, a television program, a movie, or a video clip; an audio item such as, for example, a song, a podcast, or an audio clip; a picture (i.e., a digital image); or the like.
  • the gatekeeper module 342 is preferably implemented in software. In general, the gatekeeper module 342 enables the third-party application 34 - 1 to control sharing of the sharable items 340 based on data obtained from the MAP server 12 , where this data is also referred to herein as MAP data. In this embodiment, the gatekeeper module 342 includes a number of sharing rules 344 and a MAP resolution component 346 (hereinafter referred to as “resolution component 346 ”). The sharing rules 344 may be manually defined by the user 20 - 1 or automatically configured by the third-party application 34 - 1 .
  • Each of the sharing rules 344 is associated with, or mapped to, one or more of the sharable items 340 and defines conditions under which the sharable item(s) 340 to which it is mapped are to be shared. At least one of the sharing rules 344 , but potentially all of the sharing rules 344 , is based on MAP data obtained from the MAP server 12 .
  • the MAP data includes historical aggregate profile data relevant to the current location of the user 20 - 1 , aggregate profile data for current crowds relevant to the current location of the user 20 - 1 , crowd characteristics of current crowds relevant to the current location of the user 20 - 1 , or any combination thereof.
  • each of the sharing rules 344 identifies the sharable item 340 to which the sharing rule 344 applies, a sharing action, and a predicate that defines when the sharing action is to be performed.
  • the sharing action may be, for example, permit sharing, deny sharing, or prompt the user 20 - 1 for a decision as to whether the corresponding sharable item 340 is to be shared.
  • the predicate of at least one, but potentially all, of the sharing rules 344 is based on one or more MAP data elements from the MAP server 12 .
  • one of the sharing rules 344 may be: Permit Sharing of First Name IF ( best-case average DOS < 5 ), where:
  • First Name is the first name of the user 20 - 1 and is the sharable item 340 for which the sharing rule 344 is defined
  • Permit Sharing is the sharing action
  • the best-case average DOS ⁇ 5 is the predicate for the sharing rule 344 .
  • thus, according to this sharing rule 344 , the first name of the user 20 - 1 (i.e., the sharable item 340 ) is permitted to be shared if the best-case average DOS for the crowd(s) currently located at or near the current location of the user 20 - 1 is less than five (5).
  • the best-case average DOS is referred to herein as the MAP data, or more specifically the MAP data element, upon which the sharing rule 344 is based.
  • while the predicate in this example is based on a single MAP data element, the present disclosure is not limited thereto.
  • the predicate may be based on more than one MAP data element.
  • while some of the sharing rules 344 are based on MAP data, other sharing rules 344 may be based on other types of data provided by or otherwise accessible to the third-party application 34 - 1 .
  • for example, if the third-party application 34 - 1 is a social networking application, the sharing rule 344 for one of the sharable items 340 may state that the sharable item 340 is to be shared with a particular sharing partner if that sharing partner is within three DOS from the user 20 - 1 in the social network associated with the social networking application.
  • some of the sharing rules 344 may be based on both MAP data and other data provided by or otherwise accessible to the third-party application 34 - 1 .
  • the sharing rules 344 do not necessarily include sharing rules for all of the sharable items 340 .
  • a default sharing action (e.g., permit sharing, deny sharing, or prompt the user) may be applied to sharable items 340 for which no sharing rule 344 is defined.
  • a particular sharable item 340 may have more than one sharing rule 344 in which case the sharing rules 344 for the sharable item 340 are preferably prioritized.
  • the resolution component 346 is preferably implemented in software. In general, the resolution component 346 operates to resolve the sharing rules 344 . Specifically, for the sharing rules 344 that are based on MAP data, the resolution component 346 operates to obtain the MAP data needed for the sharing rules 344 from the MAP server 12 and resolve the sharing rules 344 based on the MAP data. In this embodiment, the resolution component 346 obtains the MAP data needed to resolve the sharing rules 344 from the MAP server 12 via the MAP client 30 - 1 .
  • the functionality of the MAP client 30 - 1 needed to obtain the MAP data from the MAP server 12 may be incorporated into the resolution component 346 such that the resolution component 346 may obtain the MAP data directly from the MAP server 12 via the network 28 .
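The gatekeeper structure described above might be modeled along the following lines. This is a sketch only: the SharingRule fields, the PERMIT/DENY/PROMPT constants, the example MAP data element name best_case_average_dos, and the fallback to a default action when the predicate fails are all assumptions rather than part of the disclosure.

    from dataclasses import dataclass
    from typing import Callable, Dict

    PERMIT, DENY, PROMPT = "permit", "deny", "prompt"

    @dataclass
    class SharingRule:
        item_id: str                                    # sharable item the rule applies to
        action: str                                     # PERMIT, DENY, or PROMPT
        predicate: Callable[[Dict[str, float]], bool]   # evaluated against MAP data elements
        priority: int = 0                               # used when an item has several rules

    # Example rule: permit sharing of the user's first name when the best-case
    # average DOS of the crowd(s) near the user is less than five.
    first_name_rule = SharingRule(
        item_id="first_name",
        action=PERMIT,
        predicate=lambda map_data: map_data["best_case_average_dos"] < 5,
    )

    def resolve(rule: SharingRule, map_data: Dict[str, float], default_action: str = DENY) -> str:
        # If the predicate holds for the MAP data returned by the server, the rule's
        # sharing action applies; otherwise fall back to a default action (an assumption).
        return rule.action if rule.predicate(map_data) else default_action

A resolution component along the lines of the resolution component 346 would fetch the MAP data elements named in the predicates and then evaluate each applicable rule in this fashion.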
  • the third-party application 34 - 1 is generally any type of application that shares the sharable items 340 with other users such as, for example, the users 20 - 2 through 20 -N of the other mobile devices 18 - 2 through 18 -N.
  • the third-party application 34 may be a social networking application, where the sharable items 340 include a user profile of the user 20 - 1 or components of the user profile of the user 20 - 1 .
  • the third-party application 34 may be a media sharing application, where the sharable items 340 are media items such as, for example, video items (e.g., user-generated videos, television programs, movies, or video clips), audio items (e.g., songs, podcasts, or audio clips), pictures (i.e., digital images), or the like.
  • FIG. 53 illustrates the operation of the third-party application 34 - 1 of FIG. 52 within the system 10 of FIG. 1 according to one embodiment of the present disclosure.
  • the third-party application 34 - 1 configures the sharing rules 344 (step 3700 ).
  • the sharing rules 344 are configured manually by the user 20 - 1 .
  • the third-party application 34 - 1 may enable the user 20 - 1 to configure the sharing rule 344 for the sharable item 340 by selecting a desired sharing action and defining a desired predicate for the sharing rule 344 .
  • the desired sharing action may be manually defined by the user 20 - 1 or selected from a system-defined list of possible sharing actions.
  • the desired predicate may be defined by enabling the user 20 - 1 to select a desired MAP data element from a system-defined list of available MAP data elements and a logical condition for when the desired sharing action is to be performed based on the desired MAP data element.
  • the available MAP data elements may vary depending on the particular implementation. In general, the available MAP data elements may include crowd characteristics such as degree of fragmentation, worst-case average DOS, best-case average DOS, degree of bidirectionality of social network relationships, or the like.
  • the available MAP data elements may include elements of a historical aggregate profile generated for the current location of the user 20 - 1 such as, for example, an average number of user matches over all keywords in the user profile of the user 20 - 1 , an average ratio of user matches to a total number of users over all keywords in the user profile of the user 20 - 1 , an average number of user matches for each individual keyword in the user profile of the user 20 - 1 , an average ratio of user matches to total users for each individual keyword in the user profile of the user 20 - 1 , or the like.
  • the available MAP data elements may additionally or alternatively include elements of aggregate profile data for current crowds relevant to the current location of the user 20 - 1 such as, for example, a number of user matches over all keywords in the user profile of the user 20 - 1 , a ratio of user matches to a total number of users over all keywords in the user profile of the user 20 - 1 , a number of user matches for each individual keyword in the user profile of the user 20 - 1 , a ratio of user matches to total users for each individual keyword in the user profile of the user 20 - 1 , or the like.
  • the sharing rules 344 may be configured automatically by the third-party application 34 - 1 .
  • the details of automatically configuring the sharing rules 344 are provided below with respect to FIG. 54 .
  • the sharing rules 344 may be modified by the user 20 - 1 as desired.
  • some of the sharing rules 344 may be manually configured by the user 20 - 1 while others are automatically configured by the third-party application 34 - 1 .
  • the third-party application 34 - 1 sends a MAP data request to the MAP client 30 - 1 (step 3702 ). More specifically, when the third-party application 34 - 1 desires to resolve one or more of the sharing rules 344 , the resolution component 346 then determines whether the one or more sharing rules 344 are based on MAP data. If so, the resolution component 346 sends a request (i.e., the MAP data request) to the MAP client 30 - 1 for the particular MAP data elements needed to resolve the one or more sharing rules 344 . The MAP client 30 - 1 then sends the MAP data request to the MAP server 12 (step 3704 ).
  • the MAP server 12 obtains the requested MAP data (step 3706 ).
  • the MAP server 12 obtains the requested MAP data in the manner described above. For example, if the requested MAP data is historical aggregate profile data for the current location of the user 20 - 1 , the MAP server 12 generates the historical aggregate profile in the manner described above.
  • the MAP server 12 returns the MAP data to the MAP client 30 - 1 (step 3708 ).
  • the MAP client 30 - 1 then returns the MAP data to the third-party application 34 - 1 (step 3710 ).
  • upon receiving the MAP data, the third-party application 34 - 1 , and specifically the resolution component 346 , resolves the one or more sharing rules 344 based on the MAP data to provide corresponding resolution results (step 3712 ).
  • resolving the sharing rule 344 generally refers to determining whether the corresponding sharable item 340 is to be shared based on the MAP data according to the sharing rule 344 . More specifically, in one embodiment, resolving the sharing rule 344 includes determining whether the sharing action defined by the sharing rule 344 is to be performed based on the MAP data and the predicate defined by the sharing rule 344 .
  • the results of resolving the one or more sharing rules 344 identify whether sharing of the corresponding sharable items 340 is permitted or denied or whether the user 20 - 1 is to be prompted for a final decision regarding whether the corresponding sharable items 340 are to be shared.
  • the third-party application 34 - 1 controls sharing of the one or more sharable items 340 for which the one or more sharing rules 344 were resolved based on the results of resolving the one or more sharing rules 344 (step 3714 ).
  • if the result for a sharable item 340 indicates that sharing is denied, the third-party application 34 - 1 does not share that sharable item 340 .
  • if the result for a sharable item 340 indicates that sharing is permitted, the third-party application 34 - 1 permits sharing of that sharable item 340 .
  • the third-party application 34 - 1 prompts the user 20 - 1 for a decision as to whether sharing of the sharable item is permitted. If the user 20 - 1 chooses to permit sharing, then the third-party application 34 - 1 permits sharing of the sharable item 340 . Otherwise, the third-party application 34 - 1 does not permit sharing of the sharable item 340 .
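By way of illustration only, the predicate-and-action structure of a sharing rule 344 and its resolution against a MAP data element can be sketched as follows. This is a minimal sketch in Java (the language mentioned for the business logic layer); the names SharingRule, MapDataElement, Action, and the example threshold are assumptions introduced here for clarity and are not part of the disclosed embodiments.

    import java.util.function.Predicate;

    // Minimal sketch of a sharing rule: a predicate evaluated against a MAP data
    // element plus a sharing action taken when the predicate is satisfied.
    public class SharingRuleSketch {

        // Possible outcomes of resolving a sharing rule.
        enum Action { PERMIT_SHARING, DENY_SHARING, PROMPT_USER }

        // A single MAP data element, e.g. the number of user matches for a keyword
        // in the crowd(s) at the user's current location.
        record MapDataElement(String name, double value) {}

        // A sharing rule pairs a predicate over a MAP data element with the action
        // taken when the predicate is satisfied; otherwise sharing is denied.
        record SharingRule(String mapDataElementName,
                           Predicate<Double> predicate,
                           Action actionIfSatisfied) {

            Action resolve(MapDataElement element) {
                return predicate.test(element.value()) ? actionIfSatisfied : Action.DENY_SHARING;
            }
        }

        public static void main(String[] args) {
            // Example from the text: permit sharing of the keyword "NCSU" when the
            // number of user matches for "NCSU" in nearby crowds exceeds 5.
            SharingRule rule = new SharingRule("userMatches:NCSU", v -> v > 5, Action.PERMIT_SHARING);
            MapDataElement element = new MapDataElement("userMatches:NCSU", 8);
            System.out.println(rule.resolve(element)); // PERMIT_SHARING
        }
    }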
  • FIG. 54 is a flow chart illustrating the operation of the third-party application 34 - 1 to automatically configure the sharing rules 344 for the sharable items 340 according to one embodiment of the present disclosure.
  • the third-party application 34 - 1 retrieves one of the sharable items 340 (step 3800 ).
  • the gatekeeper module 342 may obtain the sharable item 340 and/or, if available, metadata describing the sharable item 340 .
  • the metadata describing the sharable item 340 may include any data describing the sharable item 340 that may be used to configure, or create, a sharing rule 344 for the sharable item 340 .
  • the gatekeeper module 342 determines whether the sharable item 340 corresponds to, or matches, any of the available MAP data elements based on the sharable item 340 itself, metadata describing the sharable item, or both (step 3802 ). If not, the process proceeds to step 3806 . If the sharable item 340 corresponds to an available MAP data element, the gatekeeper module 342 creates a sharing rule 344 for the sharable item 340 based on the corresponding MAP data element (step 3804 ).
  • the available MAP data elements preferably include: (1) historical aggregate profile data elements for the average number of user matches for each individual keyword in the user profile of the user 20 - 1 , (2) historical aggregate profile data elements for an average ratio of user matches to total users for each individual keyword in the user profile of the user 20 - 1 , (3) aggregate profile data elements for the number of user matches for each individual keyword in the user profile of the user 20 - 1 for crowd(s) located at or near the current location of the user 20 - 1 , and/or (4) aggregate profile data elements for the ratio of user matches to total users for each individual keyword in the user profile of the user 20 - 1 for crowd(s) located at or near the current location of the user 20 - 1 .
  • the gatekeeper module 342 compares the sharable item 340 and/or the metadata describing the sharable item 340 to the available MAP data elements to determine whether the sharable item 340 matches any of the available MAP data elements. If so, the gatekeeper module 342 then creates the sharing rule 344 for the sharable item 340 based on the matching MAP data element(s). For instance, the sharing rule 344 for the sharable item 340 may be set to a predefined default sharing rule for the matching MAP data element(s).
  • the sharable item 340 is the keyword NCSU (North Carolina State University) from a user profile of the user 20 - 1 used by the third-party application 34 - 1 , which may or may not be the same as the user profile of the user 20 - 1 used by the MAP server 12 .
  • the gatekeeper module 342 compares the keyword NCSU to the available MAP data elements. If the user profile of the user 20 - 1 used by the MAP server 12 also includes the keyword NCSU and the available MAP data elements include a number of user matches for the keyword NCSU for crowd(s) located at or near the current location of the user 20 - 1 , then there is a match.
  • a sharing rule 344 will be automatically configured for the keyword NCSU such that a default sharing action (e.g., Permit Sharing) will be performed in response to satisfaction of a default predicate (e.g., if number of user matches >5).
  • the gatekeeper module 342 compares metadata describing the song (e.g., title and artist) to the available MAP data elements. If any of the available MAP data elements match the song, then a sharing rule 344 for the song is automatically configured.
  • a sharing rule 344 for the song is automatically configured, or created, using a default sharing action (e.g., Permit Sharing) and a default predicate (e.g., if ratio of user matches to total users >50%).
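The automatic configuration of FIG. 54 can be pictured as a simple matching loop over the sharable items 340. The sketch below is illustrative only: the helper names (AutoConfigureSketch, configureRule, AVAILABLE_MAP_ELEMENTS) are assumptions, and the default predicates (more than 5 user matches for a matched keyword, a match ratio above 50% for a matched media item) are taken from the examples above.

    import java.util.*;
    import java.util.function.Predicate;

    // Illustrative sketch of automatically configuring sharing rules for sharable
    // items by matching them against the available MAP data elements.
    public class AutoConfigureSketch {

        enum Action { PERMIT_SHARING, DENY_SHARING, PROMPT_USER }

        record SharableItem(String id, Set<String> metadataTerms) {}
        record SharingRule(String itemId, String mapDataElementName,
                           Predicate<Double> predicate, Action actionIfSatisfied) {}

        // Names of MAP data elements assumed to be available for the current user,
        // e.g. per-keyword user-match counts and match ratios for nearby crowds.
        static final Set<String> AVAILABLE_MAP_ELEMENTS =
                Set.of("userMatches:NCSU", "matchRatio:Black Eyed Peas");

        static Optional<SharingRule> configureRule(SharableItem item) {
            for (String term : item.metadataTerms()) {
                if (AVAILABLE_MAP_ELEMENTS.contains("userMatches:" + term)) {
                    // Default rule for a matched profile keyword: permit sharing when
                    // the number of user matches for the keyword exceeds 5.
                    return Optional.of(new SharingRule(item.id(), "userMatches:" + term,
                            v -> v > 5, Action.PERMIT_SHARING));
                }
                if (AVAILABLE_MAP_ELEMENTS.contains("matchRatio:" + term)) {
                    // Default rule for a matched media item: permit sharing when the
                    // ratio of user matches to total users exceeds 50%.
                    return Optional.of(new SharingRule(item.id(), "matchRatio:" + term,
                            v -> v > 0.5, Action.PERMIT_SHARING));
                }
            }
            return Optional.empty(); // no matching MAP data element; no rule is created
        }

        public static void main(String[] args) {
            SharableItem keyword = new SharableItem("keyword:NCSU", Set.of("NCSU"));
            SharableItem song = new SharableItem("song:Boom Boom Pow",
                    Set.of("Boom Boom Pow", "Black Eyed Peas"));
            System.out.println(configureRule(keyword)); // rule on userMatches:NCSU
            System.out.println(configureRule(song));    // rule on matchRatio:Black Eyed Peas
        }
    }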
  • a match may also occur if the sharable item 340 is sufficiently related to a MAP data element. For example, if the sharable item 340 is a song by Fergie and the available MAP data elements include a number of user matches for Black Eyed Peas, then a determination may be made that there is a match. In order to determine that closely related terms match for purposes of this automatic sharing rule configuration process, an ontology or similar data structure that defines relationships between terms may be used, together with processing techniques such as, for example, natural language processing. Using such techniques, the sharable item 340 and a MAP data element may be determined to be matching if there is a direct match or if they are related within a predefined number of degrees of separation (DOS) within the ontology.
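One possible way to decide that closely related terms match is a bounded search of the ontology graph. The following sketch assumes a simple undirected adjacency-list representation of the ontology and a hypothetical relationship between Fergie and Black Eyed Peas; the names and the breadth-first search are illustrative assumptions, not the disclosed method.

    import java.util.*;

    // Sketch: decide whether two terms match either directly or because they are
    // related within a predefined number of degrees of separation (DOS) in an
    // ontology, represented here as a simple undirected adjacency list.
    public class OntologyMatchSketch {

        static boolean matches(Map<String, Set<String>> ontology,
                               String termA, String termB, int maxDos) {
            if (termA.equalsIgnoreCase(termB)) return true; // direct match

            // Breadth-first search outward from termA, up to maxDos hops.
            Set<String> visited = new HashSet<>(List.of(termA));
            Queue<String> frontier = new ArrayDeque<>(List.of(termA));
            for (int hops = 0; hops < maxDos && !frontier.isEmpty(); hops++) {
                Queue<String> next = new ArrayDeque<>();
                for (String term : frontier) {
                    for (String related : ontology.getOrDefault(term, Set.of())) {
                        if (related.equalsIgnoreCase(termB)) return true;
                        if (visited.add(related)) next.add(related);
                    }
                }
                frontier = next;
            }
            return false;
        }

        public static void main(String[] args) {
            // Hypothetical ontology fragment: Fergie is a member of Black Eyed Peas.
            Map<String, Set<String>> ontology = Map.of(
                    "Fergie", Set.of("Black Eyed Peas"),
                    "Black Eyed Peas", Set.of("Fergie", "Hip hop"));
            System.out.println(matches(ontology, "Fergie", "Black Eyed Peas", 2)); // true
            System.out.println(matches(ontology, "Fergie", "NCSU", 2));            // false
        }
    }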
  • the gatekeeper module 342 determines whether there are more sharable items 340 to be processed (step 3806 ). If so, the process returns to step 3800 and is repeated for the next sharable item 340 . Once all of the sharable items 340 have been processed, the process ends.
  • FIG. 55 is a flow chart illustrating the operation of the third-party application 34 - 1 of FIG. 52 according to one embodiment of the present disclosure.
  • the third-party application 34 - 1 checks for sharing partners (step 3900 ).
  • a sharing partner is a user of another device with which the user 20 - 1 is enabled to share the sharable items 340 via the third-party application 34 - 1 .
  • the third-party application 34 - 1 determines whether a sharing partner has been found (step 3902 ). If not, the process returns to step 3900 and is repeated.
  • the gatekeeper module 342 utilizes the sharing rules 344 to determine which, if any, of the sharable items 340 are permitted to be shared at this time. More specifically, the resolution component 346 retrieves or otherwise obtains one of the sharing rules 344 (step 3904 ). Next, the resolution component 346 determines whether the sharing rule 344 is based on MAP data from the MAP server 12 (step 3906 ). If not, the process proceeds to step 3910 . If the sharing rule 344 is based on MAP data, the resolution component 346 obtains the MAP data needed to resolve the sharing rule 344 (step 3908 ). More specifically, in one embodiment, the resolution component 346 obtains the MAP data needed to resolve the sharing rule 344 from the MAP server 12 via the MAP client 30 - 1 .
  • the resolution component 346 resolves the sharing rule 344 to provide a corresponding result and adds the result to a set of compiled results for the resolutions of the sharing rules 344 (steps 3910 and 3912 ).
  • the result of resolving the sharing rule 344 indicates whether sharing of the sharable item 340 is permitted or denied.
  • the result may state that the user 20 - 1 is to be prompted for a decision as to whether sharing is permitted or denied.
  • the resolution component 346 determines whether there are more sharing rules 344 to process (step 3914 ). If so, the process returns to step 3904 and is repeated for the next sharing rule 344 . Once all of the sharing rules 344 have been processed, the resolution component 346 reports the compiled results for the resolutions of the sharing rules 344 to the third-party application 34 - 1 (step 3916 ). The third-party application 34 - 1 then shares the sharable items 340 according to the results of the resolutions of the sharing rules 344 (step 3918 ). Thus, for example, if the result of the sharing rule 344 for one of the sharable items 340 is to permit sharing, then the third-party application 34 - 1 permits sharing of the sharable item 340 . The process then returns to step 3900 and is repeated when a new sharing partner is found, or detected.
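The per-partner processing of FIG. 55 reduces to resolving each sharing rule 344, fetching MAP data only for rules that need it, and compiling the results. A condensed, hypothetical sketch of that loop follows; the MapClient interface merely stands in for the request path through the MAP client 30-1 to the MAP server 12, and all names are assumptions rather than the disclosed implementation.

    import java.util.*;

    // Condensed sketch of the gatekeeper loop run when a sharing partner is found:
    // resolve each sharing rule (fetching MAP data only when the rule needs it),
    // compile the results, and report them so items can be shared accordingly.
    public class GatekeeperLoopSketch {

        enum Result { PERMIT, DENY, PROMPT_USER }

        // Stand-in for the path through the MAP client to the MAP server.
        interface MapClient { double fetch(String mapDataElementName); }

        record Rule(String itemId, String mapDataElementName /* null if not MAP-based */,
                    double threshold) {

            Result resolve(MapClient mapClient) {
                // Non-MAP-based rules would be resolved by other means; prompt as a placeholder.
                if (mapDataElementName() == null) return Result.PROMPT_USER;
                double value = mapClient.fetch(mapDataElementName());     // cf. step 3908
                return value > threshold() ? Result.PERMIT : Result.DENY; // cf. step 3910
            }
        }

        static Map<String, Result> resolveAll(List<Rule> rules, MapClient mapClient) {
            Map<String, Result> compiled = new LinkedHashMap<>();
            for (Rule rule : rules) {
                compiled.put(rule.itemId(), rule.resolve(mapClient)); // cf. step 3912
            }
            return compiled; // reported so the application shares accordingly (cf. steps 3916-3918)
        }

        public static void main(String[] args) {
            MapClient fakeClient = name -> 8; // pretend the MAP server returned 8 user matches
            List<Rule> rules = List.of(
                    new Rule("keyword:NCSU", "userMatches:NCSU", 5),
                    new Rule("photo:42", null, 0));
            System.out.println(resolveAll(rules, fakeClient));
            // {keyword:NCSU=PERMIT, photo:42=PROMPT_USER}
        }
    }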
  • FIG. 56 is a block diagram of the MAP server 12 according to one embodiment of the present disclosure.
  • the MAP server 12 includes a controller 348 connected to memory 350 , one or more secondary storage devices 352 , and a communication interface 354 by a bus 356 or similar mechanism.
  • the controller 348 is a microprocessor, digital Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA), or the like.
  • the controller 348 is a microprocessor, and the application layer 40 , the business logic layer 42 , and the object mapping layer 62 ( FIG. 2 ) are implemented in software and stored in the memory 350 for execution by the controller 348 .
  • the datastore 64 ( FIG. 2 ) may be implemented in the one or more secondary storage devices 352 .
  • the communication interface 354 is a wired or wireless communication interface that communicatively couples the MAP server 12 to the network 28 ( FIG. 1 ).
  • the communication interface 354 may be an Ethernet interface, a local wireless interface such as a wireless interface operating according to one of the suite of IEEE 802.11 standards, or the like.
  • FIG. 57 is a block diagram of the mobile device 18 - 1 according to one embodiment of the present disclosure. This discussion is equally applicable to the other mobile devices 18 - 2 through 18 -N. As illustrated, the mobile device 18 - 1 includes a controller 358 connected to memory 360 , a communication interface 362 , one or more user interface components 364 , and the location function 36 - 1 by a bus 366 or similar mechanism.
  • the controller 358 is a microprocessor, digital ASIC, FPGA, or the like.
  • the controller 358 is a microprocessor, and the MAP client 30 - 1 , the MAP application 32 - 1 , and the third-party application(s) 34 - 1 are implemented in software and stored in the memory 360 for execution by the controller 358 .
  • the location function 36 - 1 is a hardware component such as, for example, a GPS receiver.
  • the communication interface 362 is a wireless communication interface that communicatively couples the mobile device 18 - 1 to the network 28 ( FIG. 1 ).
  • the communication interface 362 may be a local wireless interface such as a wireless interface operating according to one of the suite of IEEE 802.11 standards, a mobile communications interface such as a cellular telecommunications interface, or the like.
  • the one or more user interface components 364 include, for example, a touchscreen, a display, one or more user input components (e.g., a keypad), a speaker, or the like, or any combination thereof.
  • FIG. 58 is a block diagram of the subscriber device 22 according to one embodiment of the present disclosure.
  • the subscriber device 22 includes a controller 368 connected to memory 370 , one or more secondary storage devices 372 , a communication interface 374 , and one or more user interface components 376 by a bus 378 or similar mechanism.
  • the controller 368 is a microprocessor, digital ASIC, FPGA, or the like.
  • the controller 368 is a microprocessor, and the web browser 38 ( FIG. 1 ) is implemented in software and stored in the memory 370 for execution by the controller 368 .
  • the one or more secondary storage devices 372 are digital storage devices such as, for example, one or more hard disk drives.
  • the communication interface 374 is a wired or wireless communication interface that communicatively couples the subscriber device 22 to the network 28 ( FIG. 1 ).
  • the communication interface 374 may be an Ethernet interface, a local wireless interface such as a wireless interface operating according to one of the suite of IEEE 802.11 standards, a mobile communications interface such as a cellular telecommunications interface, or the like.
  • the one or more user interface components 376 include, for example, a touchscreen, a display, one or more user input components (e.g., a keypad), a speaker, or the like, or any combination thereof.
  • FIG. 59 is a block diagram of a computing device 380 operating to host the third-party service 26 according to one embodiment of the present disclosure.
  • the computing device 380 may be, for example, a physical server.
  • the computing device 380 includes a controller 382 connected to memory 384 , one or more secondary storage devices 386 , a communication interface 388 , and one or more user interface components 390 by a bus 392 or similar mechanism.
  • the controller 382 is a microprocessor, digital ASIC, FPGA, or the like.
  • the controller 382 is a microprocessor, and the third-party service 26 is implemented in software and stored in the memory 384 for execution by the controller 382 .
  • the one or more secondary storage devices 386 are digital storage devices such as, for example, one or more hard disk drives.
  • the communication interface 388 is a wired or wireless communication interface that communicatively couples the computing device 380 to the network 28 ( FIG. 1 ).
  • the communication interface 388 may be an Ethernet interface, a local wireless interface such as a wireless interface operating according to one of the suite of IEEE 802.11 standards, a mobile communications interface such as a cellular telecommunications interface, or the like.
  • the one or more user interface components 390 include, for example, a touchscreen, a display, one or more user input components (e.g., a keypad), a speaker, or the like, or any combination thereof.

Abstract

Systems and methods are provided for controlling access to sharable items. In general, a computing device of a user stores a sharable item. A sharing rule is configured for the sharable item, where the sharing rule is based on an element of aggregate profile data for a current location of the user, a crowd characteristic of one or more crowds that are currently relevant to the current location of the user, or both. Depending on the embodiment, the element of the aggregate profile data, the crowd characteristic of the one or more crowds, or both are obtained and the sharing rule for the sharable item is resolved. Sharing of the sharable item is then provided according to a result of the resolution of the sharing rule for the sharable item.

Description

    RELATED APPLICATIONS
  • This application claims the benefit of provisional patent application Ser. No. 61/149,205, filed Feb. 2, 2009, provisional patent application Ser. No. 61/227,192, filed Jul. 21, 2009, and provisional patent application Ser. No. 61/236,296, filed Aug. 24, 2009, the disclosures of which are hereby incorporated by reference in their entireties.
  • FIELD OF THE INVENTION
  • The present invention relates to sharing digital items, and more particularly relates to an information gatekeeper that controls access to digital items in a sharing environment.
  • BACKGROUND
  • Social networking applications are becoming prolific in today's mobile devices. For example, many mobile smart phones (e.g., the Apple® iPhone, Windows Mobile devices, and Blackberry devices) enable users to run social networking applications. However, sharing of personal information between devices in such a mobile environment raises privacy issues. Thus, there is a need for a system and method that controls sharing of sharable items, such as personal information, in a mobile environment.
  • SUMMARY
  • Systems and methods are provided for controlling access to sharable items. In general, a computing device of a user stores a sharable item. A sharing rule is configured for the sharable item, where the sharing rule is based on an element of aggregate profile data for a current location of the user, a crowd characteristic of one or more crowds that are currently relevant to the current location of the user, or both. Depending on the embodiment, the element of the aggregate profile data, the crowd characteristic of the one or more crowds, or both are obtained and the sharing rule for the sharable item is resolved. Sharing of the sharable item is then provided according to a result of the resolution of the sharing rule for the sharable item.
  • In one embodiment, a computing device of a user stores a sharable item. A sharing rule is configured for the sharable item and is based on an element of aggregate profile data for a current location of the user. The element of the aggregate profile data is either an element of a historical aggregate profile for the current location of the user or an element of an aggregate profile for one or more crowds of users currently relevant to the current location of the user. The element of the aggregate profile data is obtained, and then the sharing rule for the sharable item is resolved based on the element of the aggregate profile data. Sharing of the sharable item is then provided according to a result of the resolution of the sharing rule for the sharable item.
  • In another embodiment, a computing device of a user stores a sharable item. A sharing rule is configured for the sharable item and is based on a crowd characteristic of one or more crowds currently relevant to a current location of the user. The crowd characteristic of the one or more crowds is obtained, and then the sharing rule for the sharable item is resolved based on the crowd characteristic of the one or more crowds. Sharing of the sharable item is then provided according to a result of the resolution of the sharing rule for the sharable item.
  • Those skilled in the art will appreciate the scope of the present invention and realize additional aspects thereof after reading the following detailed description of the preferred embodiments in association with the accompanying drawing figures.
  • BRIEF DESCRIPTION OF THE DRAWING FIGURES
  • The accompanying drawing figures incorporated in and forming a part of this specification illustrate several aspects of the invention, and together with the description serve to explain the principles of the invention.
  • FIG. 1 illustrates a Mobile Aggregate Profile (MAP) system according to one embodiment of the present disclosure;
  • FIG. 2 is a block diagram of the MAP server of FIG. 1 according to one embodiment of the present disclosure;
  • FIG. 3 is a block diagram of the MAP client of one of the mobile devices of FIG. 1 according to one embodiment of the present disclosure;
  • FIG. 4 illustrates the operation of the system of FIG. 1 to provide user profiles and current locations of the users of the mobile devices to the MAP server according to one embodiment of the present disclosure;
  • FIG. 5 illustrates the operation of the system of FIG. 1 to provide user profiles and current locations of the users of the mobile devices to the MAP server according to another embodiment of the present disclosure;
  • FIGS. 6 and 7 graphically illustrate bucketization of users according to location for purposes of maintaining a historical record of anonymized user profile data by location according to one embodiment of the present disclosure;
  • FIG. 8 is a flow chart illustrating the operation of a foreground bucketization process performed by the MAP server to maintain the lists of users for location buckets for purposes of maintaining a historical record of anonymized user profile data by location according to one embodiment of the present disclosure;
  • FIG. 9 is a flow chart illustrating the anonymization and storage process performed by the MAP server for the location buckets in order to maintain a historical record of anonymized user profile data by location according to one embodiment of the present disclosure;
  • FIG. 10 graphically illustrates anonymization of a user record according to one embodiment of the present disclosure;
  • FIG. 11 is a flow chart for a quadtree based storage process that may be used to store anonymized user profile data for location buckets according to one embodiment of the present disclosure;
  • FIG. 12 is a flow chart illustrating a quadtree algorithm that may be used to process the location buckets for storage of the anonymized user profile data according to one embodiment of the present disclosure;
  • FIGS. 13A through 13E graphically illustrate the process of FIG. 12 for the generation of a quadtree data structure for one exemplary base quadtree region;
  • FIG. 14 illustrates the operation of the system of FIG. 1 wherein a mobile device is enabled to request and receive historical data from the MAP server according to one embodiment of the present disclosure;
  • FIGS. 15A and 15B illustrate a flow chart for a process for generating historical data in a time context in response to a historical request from a mobile device according to one embodiment of the present disclosure;
  • FIG. 16 is an exemplary Graphical User Interface (GUI) that may be provided by the MAP application of one of the mobile devices of FIG. 1 in order to present historical aggregate profile data in a time context according to one embodiment of the present disclosure;
  • FIGS. 17A and 17B illustrate a flow chart for a process for generating historical data in a geographic context in response to a historical request from a mobile device according to one embodiment of the present disclosure;
  • FIG. 18 illustrates an exemplary GUI that may be provided by the MAP application of one of the mobile devices of FIG. 1 to present historical data in the geographic context according to one embodiment of the present disclosure;
  • FIG. 19 illustrates the operation of the system of FIG. 1 wherein the subscriber device is enabled to request and receive historical data from the MAP server according to one embodiment of the present disclosure;
  • FIGS. 20A and 20B illustrate a process for generating historical data in a time context in response to a historical request from a subscriber device according to one embodiment of the present disclosure;
  • FIGS. 21A and 21B illustrate a process for generating historical data in a geographic context in response to a historical request from a subscriber device according to one embodiment of the present disclosure;
  • FIG. 22 is a flow chart for a spatial crowd formation process according to one embodiment of the present disclosure;
  • FIGS. 23A through 23D graphically illustrate the crowd formation process of FIG. 22 for an exemplary bounding box;
  • FIGS. 24A through 24D illustrate a flow chart for a spatial crowd formation process according to another embodiment of the present disclosure;
  • FIGS. 25A through 25D graphically illustrate the crowd formation process of FIGS. 24A through 24D for a scenario where the crowd formation process is triggered by a location update for a user having no old location;
  • FIGS. 26A through 26F graphically illustrate the crowd formation process of FIGS. 24A through 24D for a scenario where the new and old bounding boxes overlap;
  • FIGS. 27A through 27E graphically illustrate the crowd formation process of FIGS. 24A through 24D in a scenario where the new and old bounding boxes do not overlap;
  • FIG. 28 illustrates the operation of the system of FIG. 1 to enable the mobile devices to request crowd data for currently formed crowds according to one embodiment of the present disclosure;
  • FIG. 29A is a flow chart for a process for generating aggregate profiles for crowds identified in response to a crowd request from a mobile device according to one embodiment of the present disclosure;
  • FIG. 29B is a flow chart for a process for generating aggregate profiles for crowds identified in response to a crowd request from a mobile device according to another embodiment of the present disclosure;
  • FIG. 30 illustrates the operation of the system of FIG. 1 to enable a subscriber device to request crowd data for current crowds according to one embodiment of the present disclosure;
  • FIG. 31 is a flow chart for a process for generating aggregate profiles for crowds identified for a crowd request in response to a crowd request from a subscriber device according to one embodiment of the present disclosure;
  • FIGS. 32A through 32E illustrate a GUI for an exemplary embodiment of the MAP application of one of the mobile devices of FIG. 1 according to one embodiment of the present disclosure;
  • FIGS. 33A through 33C illustrate an exemplary web interface provided by the MAP server and presented to the subscriber at the subscriber device according to one embodiment of the present disclosure;
  • FIG. 34 is a flow chart illustrating a spatial crowd fragmentation process according to one embodiment of the present disclosure;
  • FIGS. 35A and 35B graphically illustrate the spatial crowd fragmentation process of FIG. 34 for an exemplary crowd;
  • FIG. 36 illustrates a connectivity-based crowd fragmentation process according to one embodiment of the present disclosure;
  • FIGS. 37A and 37B graphically illustrate the connectivity-based crowd fragmentation process of FIG. 36 for an exemplary crowd;
  • FIG. 38 is a flow chart illustrating a recursive crowd fragmentation that uses both spatial crowd formation and connectivity-based crowd formation according to one embodiment of the present disclosure;
  • FIG. 39 is a flow chart illustrating a recursive crowd fragmentation that uses both spatial crowd formation and connectivity-based crowd formation according to another embodiment of the present disclosure;
  • FIGS. 40A and 40B illustrate an exemplary graphical representation of the degree of fragmentation for a crowd according to one embodiment of the present disclosure;
  • FIG. 41 is a flow chart for a process for determining a best-case and worst-case average degree of separation (DOS) for a crowd fragment of a crowd according to one embodiment of the present disclosure;
  • FIG. 42 is a more detailed flow chart illustrating the process for determining a best-case and worst-case average DOS for a crowd fragment according to one embodiment of the present disclosure;
  • FIGS. 43A through 43D illustrate an exemplary graphical representation of the best-case and worst-case average DOS for a crowd fragment according to one embodiment of the present disclosure;
  • FIG. 44 is a flow chart for a process of determining a degree of bidirectionality of relationships between users in a crowd fragment according to one embodiment of the present disclosure;
  • FIGS. 45A through 45C illustrate an exemplary graphical representation of the degree of bidirectionality of friendship relationships for a crowd fragment according to one embodiment of the present disclosure;
  • FIG. 46 is a flow chart for a process for generating a quality level for an aggregate profile for a crowd according to one embodiment of the present disclosure;
  • FIG. 47 illustrates an exemplary GUI for presenting an aggregate profile for a crowd and a quality level of the aggregate profile generated using the process of FIG. 46 according to one embodiment of the present disclosure;
  • FIG. 48 illustrates another exemplary GUI for presenting an aggregate profile for a crowd and a quality level of the aggregate profile generated using the process of FIG. 46 according to another embodiment of the present disclosure;
  • FIG. 49 illustrates a flow chart for a process for generating confidence factors for keywords included in an aggregate profile for a crowd based on confidence levels for current locations of users in the crowd according to one embodiment of the present disclosure;
  • FIG. 50 illustrates an exemplary GUI for presenting an aggregate profile for a crowd including an indication of a confidence level for each of a number of keywords in the aggregate profile according to one embodiment of the present disclosure;
  • FIG. 51 graphically illustrates modification of the confidence level of the current location of a user according to one embodiment of the present disclosure;
  • FIG. 52 illustrates an exemplary third-party application that controls access to sharable items based on data from the MAP server of FIG. 1 according to one embodiment of the present disclosure;
  • FIG. 53 illustrates the operation of the third-party application of FIG. 52 to control access to a sharable item based on data obtained from the MAP server according to one embodiment of the present disclosure;
  • FIG. 54 illustrates a process for automatically configuring sharing rules for sharable media items according to one embodiment of the present disclosure;
  • FIG. 55 is a more detailed illustration of the operation of the third-party application of FIG. 52 to control access to sharable items according to one embodiment of the present disclosure;
  • FIG. 56 is a block diagram of the MAP server of FIG. 1 according to one embodiment of the present disclosure;
  • FIG. 57 is a block diagram of one of the mobile devices of FIG. 1 according to one embodiment of the present disclosure;
  • FIG. 58 is a block diagram of the subscriber device of FIG. 1 according to one embodiment of the present disclosure; and
  • FIG. 59 is a block diagram of a computing device operating to host the third-party service of FIG. 1 according to one embodiment of the present disclosure.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The embodiments set forth below represent the necessary information to enable those skilled in the art to practice the invention and illustrate the best mode of practicing the invention. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the invention and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims.
  • FIG. 1 illustrates a Mobile Aggregate Profile (MAP) system 10 according to one embodiment of the present disclosure. In this embodiment, the system 10 includes a MAP server 12, one or more profile servers 14, a location server 16, a number of mobile devices 18-1 through 18-N having associated users 20-1 through 20-N, a subscriber device 22 having an associated subscriber 24, and a third-party service 26 communicatively coupled via a network 28. The network 28 may be any type of network or any combination of networks. Specifically, the network 28 may include wired components, wireless components, or both wired and wireless components. In one exemplary embodiment, the network 28 is a distributed public network such as the Internet, where the mobile devices 18-1 through 18-N are enabled to connect to the network 28 via local wireless connections (e.g., WiFi or IEEE 802.11 connections) or wireless telecommunications connections (e.g., 3G or 4G telecommunications connections such as GSM, LTE, W-CDMA, or WiMAX connections).
  • As discussed below in detail, the MAP server 12 operates to obtain current locations, including location updates, and user profiles of the users 20-1 through 20-N of the mobile devices 18-1 through 18-N. The current locations of the users 20-1 through 20-N can be expressed as positional geographic coordinates such as latitude-longitude pairs, and a height vector (if applicable), or any other similar information capable of identifying a given physical point in space in a two-dimensional or three-dimensional coordinate system. Using the current locations and user profiles of the users 20-1 through 20-N, the MAP server 12 is enabled to provide a number of features such as, but not limited to, maintaining a historical record of anonymized user profile data by location, generating aggregate profile data over time for a Point of Interest (POI) or Area of Interest (AOI) using the historical record of anonymized user profile data, identifying crowds of users using current locations and/or user profiles of the users 20-1 through 20-N, generating aggregate profiles for crowds of users at a POI or in an AOI using the current user profiles of users in the crowds, and crowd tracking. Note that while the MAP server 12 is illustrated as a single server for simplicity and ease of discussion, it should be appreciated that the MAP server 12 may be implemented as a single physical server or multiple physical servers operating in a collaborative manner for purposes of redundancy and/or load sharing.
  • In general, the one or more profile servers 14 operate to store user profiles for a number of persons including the users 20-1 through 20-N of the mobile devices 18-1 through 18-N. For example, the one or more profile servers 14 may be servers providing social network services such as the Facebook® social networking service, the MySpace® social networking service, the LinkedIN® social networking service, or the like. As discussed below, using the one or more profile servers 14, the MAP server 12 is enabled to directly or indirectly obtain the user profiles of the users 20-1 through 20-N of the mobile devices 18-1 through 18-N. The location server 16 generally operates to receive location updates from the mobile devices 18-1 through 18-N and make the location updates available to entities such as, for instance, the MAP server 12. In one exemplary embodiment, the location server 16 is a server operating to provide Yahoo!'s FireEagle service.
  • The mobile devices 18-1 through 18-N may be mobile smart phones, portable media player devices, mobile gaming devices, or the like. Some exemplary mobile devices that may be programmed or otherwise configured to operate as the mobile devices 18-1 through 18-N are the Apple® iPhone, the Palm Pre, the Samsung Rogue, the Blackberry Storm, and the Apple® iPod Touch® device. However, this list of exemplary mobile devices is not exhaustive and is not intended to limit the scope of the present disclosure.
  • The mobile devices 18-1 through 18-N include MAP clients 30-1 through 30-N, MAP applications 32-1 through 32-N, third-party applications 34-1 through 34-N, and location functions 36-1 through 36-N, respectively. Using the mobile device 18-1 as an example, the MAP client 30-1 is preferably implemented in software. In general, in the preferred embodiment, the MAP client 30-1 is a middleware layer operating to interface an application layer (i.e., the MAP application 32-1 and the third-party applications 34-1) to the MAP server 12. More specifically, the MAP client 30-1 enables the MAP application 32-1 and the third-party applications 34-1 to request and receive data from the MAP server 12. In addition, the MAP client 30-1 enables applications, such as the MAP application 32-1 and the third-party applications 34-1, to access data from the MAP server 12. For example, as discussed below in detail, the MAP client 30-1 enables the MAP application 32-1 to request anonymized aggregate profiles for crowds of users located at a POI or within an AOI and/or request anonymized historical user profile data for a POI or AOI.
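The disclosure does not specify the MAP access API itself, but the kind of call a third-party application might make through such a middleware layer can be sketched as follows. All names and signatures here (MapClient, requestCrowdAggregateProfile, Poi) are hypothetical illustrations only.

    import java.util.Map;

    // Hypothetical sketch of a request an application might make through a MAP
    // client middleware layer: anonymized aggregate profile data for crowds at a
    // point of interest. None of these names come from the disclosure.
    public class MapClientSketch {

        // Latitude/longitude of a point of interest (POI).
        record Poi(double latitude, double longitude) {}

        // Aggregate profile data: keyword -> number of user matches in the crowd(s).
        interface MapClient {
            Map<String, Integer> requestCrowdAggregateProfile(Poi poi);
        }

        public static void main(String[] args) {
            // A stub client standing in for the request/response path to the MAP server.
            MapClient client = poi -> Map.of("NCSU", 8, "photography", 3);

            Poi coffeeShop = new Poi(35.7796, -78.6382);
            Map<String, Integer> aggregateProfile = client.requestCrowdAggregateProfile(coffeeShop);
            System.out.println("User matches per keyword: " + aggregateProfile);
        }
    }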
  • The MAP application 32-1 is also preferably implemented in software. The MAP application 32-1 generally provides a user interface component between the user 20-1 and the MAP server 12. More specifically, among other things, the MAP application 32-1 enables the user 20-1 to initiate historical requests for historical data or crowd requests for crowd data (e.g., aggregate profile data and/or crowd characteristics data) from the MAP server 12 for a POI or AOI. The MAP application 32-1 also enables the user 20-1 to configure various settings. For example, the MAP application 32-1 may enable the user 20-1 to select a desired social networking service (e.g., Facebook, MySpace, LinkedIN, etc.) from which to obtain the user profile of the user 20-1 and provide any necessary credentials (e.g., username and password) needed to access the user profile from the social networking service.
  • The third-party applications 34-1 are preferably implemented in software. The third-party applications 34-1 operate to access the MAP server 12 via the MAP client 30-1. The third-party applications 34-1 may utilize data obtained from the MAP server 12 in any desired manner. As an example, one of the third party applications 34-1 may be a gaming application that utilizes historical aggregate profile data to notify the user 20-1 of POIs or AOIs where persons having an interest in the game have historically congregated.
  • The location function 36-1 may be implemented in hardware, software, or a combination thereof. In general, the location function 36-1 operates to determine or otherwise obtain the location of the mobile device 18-1. For example, the location function 36-1 may be or include a Global Positioning System (GPS) receiver.
  • The subscriber device 22 is a physical device such as a personal computer, a mobile computer (e.g., a notebook computer, a netbook computer, a tablet computer, etc.), a mobile smart phone, or the like. The subscriber 24 associated with the subscriber device 22 is a person or entity. In general, the subscriber device 22 enables the subscriber 24 to access the MAP server 12 via a web browser 38 to obtain various types of data, preferably for a fee. For example, the subscriber 24 may pay a fee to have access to historical aggregate profile data for one or more POIs and/or one or more AOIs, pay a fee to have access to crowd data such as aggregate profiles for crowds located at one or more POIs and/or located in one or more AOIs, pay a fee to track crowds, or the like. Note that the web browser 38 is exemplary. In another embodiment, the subscriber device 22 is enabled to access the MAP server 12 via a custom application.
  • Lastly, the third-party service 26 is a service that has access to data from the MAP server 12 such as historical aggregate profile data for one or more POIs or one or more AOIs, crowd data such as aggregate profiles for one or more crowds at one or more POIs or within one or more AOIs, or crowd tracking data. Based on the data from the MAP server 12, the third-party service 26 operates to provide a service such as, for example, targeted advertising. For example, the third-party service 26 may obtain anonymous aggregate profile data for one or more crowds located at a POI and then provide targeted advertising to known users located at the POI based on the anonymous aggregate profile data. Note that while targeted advertising is mentioned as an exemplary third-party service 26, other types of third-party services 26 may additionally or alternatively be provided. Other types of third-party services 26 that may be provided will be apparent to one of ordinary skill in the art upon reading this disclosure.
  • Before proceeding, it should be noted that while the system 10 of FIG. 1 illustrates an embodiment where the one or more profile servers 14 and the location server 16 are separate from the MAP server 12, the present disclosure is not limited thereto. In an alternative embodiment, the functionality of the one or more profile servers 14 and/or the location server 16 may be implemented within the MAP server 12.
  • FIG. 2 is a block diagram of the MAP server 12 of FIG. 1 according to one embodiment of the present disclosure. As illustrated, the MAP server 12 includes an application layer 40, a business logic layer 42, and a persistence layer 44. The application layer 40 includes a user web application 46, a mobile client/server protocol component 48, and one or more data Application Programming Interfaces (APIs) 50. The user web application 46 is preferably implemented in software and operates to provide a web interface for users, such as the subscriber 24, to access the MAP server 12 via a web browser. The mobile client/server protocol component 48 is preferably implemented in software and operates to provide an interface between the MAP server 12 and the MAP clients 30-1 through 30-N hosted by the mobile devices 18-1 through 18-N. The data APIs 50 enable third-party services, such as the third-party service 26, to access the MAP server 12.
  • The business logic layer 42 includes a profile manager 52, a location manager 54, a history manager 56, a crowd analyzer 58, and an aggregation engine 60, each of which is preferably implemented in software. The profile manager 52 generally operates to obtain the user profiles of the users 20-1 through 20-N directly or indirectly from the one or more profile servers 14 and store the user profiles in the persistence layer 44. The location manager 54 operates to obtain the current locations of the users 20-1 through 20-N including location updates. As discussed below, the current locations of the users 20-1 through 20-N may be obtained directly from the mobile devices 18-1 through 18-N and/or obtained from the location server 16.
  • The history manager 56 generally operates to maintain a historical record of anonymized user profile data by location. The crowd analyzer 58 operates to form crowds of users. In one embodiment, the crowd analyzer 58 utilizes a spatial crowd formation algorithm. However, the present disclosure is not limited thereto. In addition, the crowd analyzer 58 may further characterize crowds to reflect degree of fragmentation, best-case and worst-case degree of separation (DOS), and/or degree of bi-directionality, as discussed below in more detail. Still further, the crowd analyzer 58 may also operate to track crowds. The aggregation engine 60 generally operates to provide aggregate profile data in response to requests from the mobile devices 18-1 through 18-N, the subscriber device 22, and the third-party service 26. The aggregate profile data may be historical aggregate profile data for one or more POIs or one or more AOIs or aggregate profile data for crowd(s) currently at one or more POIs or within one or more AOIs.
  • The persistence layer 44 includes an object mapping layer 62 and a datastore 64. The object mapping layer 62 is preferably implemented in software. The datastore 64 is preferably a relational database, which is implemented in a combination of hardware (i.e., physical data storage hardware) and software (i.e., relational database software). In this embodiment, the business logic layer 42 is implemented in an object-oriented programming language such as, for example, Java. As such, the object mapping layer 62 operates to map objects used in the business logic layer 42 to relational database entities stored in the datastore 64. Note that, in one embodiment, data is stored in the datastore 64 in a Resource Description Framework (RDF) compatible format.
  • In an alternative embodiment, rather than being a relational database, the datastore 64 may be implemented as an RDF datastore. More specifically, the RDF datastore may be compatible with RDF technology adopted by Semantic Web activities. Namely, the RDF datastore may use the Friend-Of-A-Friend (FOAF) vocabulary for describing people, their social networks, and their interests. In this embodiment, the MAP server 12 may be designed to accept raw FOAF files describing persons, their friends, and their interests. These FOAF files are currently output by some social networking services such as Livejournal and Facebook. The MAP server 12 may then persist RDF descriptions of the users 20-1 through 20-N as a proprietary extension of the FOAF vocabulary that includes additional properties desired for the MAP system 10.
  • FIG. 3 illustrates the MAP client 30-1 of FIG. 1 in more detail according to one embodiment of the present disclosure. This discussion is equally applicable to the other MAP clients 30-2 through 30-N. As illustrated, in this embodiment, the MAP client 30-1 includes a MAP access API 66, a MAP middleware component 68, and a mobile client/server protocol component 70. The MAP access API 66 is implemented in software and provides an interface by which the MAP application 32-1 and the third-party applications 34-1 are enabled to access the MAP client 30-1. The MAP middleware component 68 is implemented in software and performs the operations needed for the MAP client 30-1 to operate as an interface between the MAP application 32-1 and the third-party applications 34-1 at the mobile device 18-1 and the MAP server 12. The mobile client/server protocol component 70 enables communication between the MAP client 30-1 and the MAP server 12 via a defined protocol.
  • FIG. 4 illustrates the operation of the system 10 of FIG. 1 to provide the user profile of the user 20-1 of the mobile device 18-1 according to one embodiment of the present disclosure. This discussion is equally applicable to user profiles of the other users 20-2 through 20-N of the other mobile devices 18-2 through 18-N. First, an authentication process is performed (step 1000). For authentication, in this embodiment, the mobile device 18-1 authenticates with the profile server 14 (step 1000A) and the MAP server 12 (step 1000B). In addition, the MAP server 12 authenticates with the profile server 14 (step 1000C). Preferably, authentication is performed using OpenID or similar technology. However, authentication may alternatively be performed using separate credentials (e.g., username and password) of the user 20-1 for access to the MAP server 12 and the profile server 14. Assuming that authentication is successful, the profile server 14 returns an authentication succeeded message to the MAP server 12 (step 1000D), and the profile server 14 returns an authentication succeeded message to the MAP client 30-1 of the mobile device 18-1 (step 1000E).
  • At some point after authentication is complete, a user profile process is performed such that a user profile of the user 20-1 is obtained from the profile server 14 and delivered to the MAP server 12 (step 1002). In this embodiment, the MAP client 30-1 of the mobile device 18-1 sends a profile request to the profile server 14 (step 1002A). In response, the profile server 14 returns the user profile of the user 20-1 to the mobile device 18-1 (step 1002B). The MAP client 30-1 of the mobile device 18-1 then sends the user profile of the user 20-1 to the MAP server 12 (step 1002C). Note that while in this embodiment the MAP client 30-1 sends the complete user profile of the user 20-1 to the MAP server 12, in an alternative embodiment, the MAP client 30-1 may filter the user profile of the user 20-1 according to criteria specified by the user 20-1. For example, the user profile of the user 20-1 may include demographic information, general interests, music interests, and movie interests, and the user 20-1 may specify that the demographic information or some subset thereof is to be filtered, or removed, before sending the user profile to the MAP server 12.
  • Upon receiving the user profile of the user 20-1 from the MAP client 30-1 of the mobile device 18-1, the profile manager 52 of the MAP server 12 processes the user profile (step 1002D). More specifically, in the preferred embodiment, the profile manager 52 includes social network handlers for the social network services supported by the MAP server 12. Thus, for example, if the MAP server 12 supports user profiles from Facebook, MySpace, and LinkedIN, the profile manager 52 may include a Facebook handler, a MySpace handler, and a LinkedIN handler. The social network handlers process user profiles to generate user profiles for the MAP server 12 that include lists of keywords for each of a number of profile categories. The profile categories may be the same for each of the social network handlers or different for each of the social network handlers. Thus, for this example assume that the user profile of the user 20-1 is from Facebook. The profile manager 52 uses a Facebook handler to process the user profile of the user 20-1 to map the user profile of the user 20-1 from Facebook to a user profile for the MAP server 12 including lists of keywords for a number of predefined profile categories. For example, for the Facebook handler, the profile categories may be a demographic profile category, a social interaction profile category, a general interests profile category, a music interests profile category, and a movie interests profile category. As such, the user profile of the user 20-1 from Facebook may be processed by the Facebook handler of the profile manager 52 to create a list of keywords such as, for example, liberal, High School Graduate, 35-44, College Graduate, etc. for the demographic profile category, a list of keywords such as Seeking Friendship for the social interaction profile category, a list of keywords such as politics, technology, photography, books, etc. for the general interests profile category, a list of keywords including music genres, artist names, album names, or the like for the music interests profile category, and a list of keywords including movie titles, actor or actress names, director names, movie genres, or the like for the movie interests profile category. In one embodiment, the profile manager 52 may use natural language processing or semantic analysis. For example, if the Facebook user profile of the user 20-1 states that the user 20-1 is 20 years old, semantic analysis may result in the keyword of 18-24 years old being stored in the user profile of the user 20-1 for the MAP server 12.
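A social network handler of the kind described above can be sketched as a mapping from raw profile fields to keyword lists per profile category, including the age-to-range mapping from the example (age 20 mapped to the keyword 18-24). The class name, method names, category labels, and age thresholds below are assumptions introduced for illustration; only the general idea follows the text.

    import java.util.*;

    // Illustrative sketch of a social network handler that maps a raw profile into
    // keyword lists per profile category, including a simple age-to-range mapping.
    public class ProfileHandlerSketch {

        static Map<String, List<String>> handle(int age, List<String> interests,
                                                List<String> favoriteArtists) {
            Map<String, List<String>> categories = new LinkedHashMap<>();
            categories.put("demographics", List.of(ageRangeKeyword(age)));
            categories.put("general interests", List.copyOf(interests));
            categories.put("music interests", List.copyOf(favoriteArtists));
            return categories;
        }

        // Simple semantic mapping of a numeric age onto a demographic keyword.
        static String ageRangeKeyword(int age) {
            if (age < 18) return "under 18";
            if (age <= 24) return "18-24";
            if (age <= 34) return "25-34";
            if (age <= 44) return "35-44";
            return "45+";
        }

        public static void main(String[] args) {
            System.out.println(handle(20,
                    List.of("politics", "photography"),
                    List.of("Black Eyed Peas")));
            // {demographics=[18-24], general interests=[politics, photography], music interests=[Black Eyed Peas]}
        }
    }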
  • After processing the user profile of the user 20-1, the profile manager 52 of the MAP server 12 stores the resulting user profile for the user 20-1 (step 1002E). More specifically, in one embodiment, the MAP server 12 stores user records for the users 20-1 through 20-N in the datastore 64 (FIG. 2). The user profile of the user 20-1 is stored in the user record of the user 20-1. The user record of the user 20-1 includes a unique identifier of the user 20-1, the user profile of the user 20-1, and, as discussed below, a current location of the user 20-1. Note that the user profile of the user 20-1 may be updated as desired. For example, in one embodiment, the user profile of the user 20-1 is updated by repeating step 1002 each time the user 20-1 activates the MAP application 32-1.
  • Note that while the discussion herein focuses on an embodiment where the user profiles of the users 20-1 through 20-N are obtained from the one or more profile servers 14, the user profiles of the users 20-1 through 20-N may be obtained in any desired manner. For example, in one alternative embodiment, the user 20-1 may identify one or more favorite websites. The profile manager 52 of the MAP server 12 may then crawl the one or more favorite websites of the user 20-1 to obtain keywords appearing in the one or more favorite websites of the user 20-1. These keywords may then be stored as the user profile of the user 20-1.
  • At some point, a process is performed such that a current location of the mobile device 18-1 and thus a current location of the user 20-1 is obtained by the MAP server 12 (step 1004). In this embodiment, the MAP application 32-1 of the mobile device 18-1 obtains the current location of the mobile device 18-1 from the location function 36-1 of the mobile device 18-1. The MAP application 32-1 then provides the current location of the mobile device 18-1 to the MAP client 30-1, and the MAP client 30-1 then provides the current location of the mobile device 18-1 to the MAP server 12 (step 1004A). Note that step 1004A may be repeated periodically or in response to a change in the current location of the mobile device 18-1 in order for the MAP application 32-1 to provide location updates for the user 20-1 to the MAP server 12.
  • In response to receiving the current location of the mobile device 18-1, the location manager 54 of the MAP server 12 stores the current location of the mobile device 18-1 as the current location of the user 20-1 (step 1004B). More specifically, in one embodiment, the current location of the user 20-1 is stored in the user record of the user 20-1 maintained in the datastore 64 of the MAP server 12. Note that only the current location of the user 20-1 is stored in the user record of the user 20-1. In this manner, the MAP server 12 maintains privacy for the user 20-1 since the MAP server 12 does not maintain a historical record of the location of the user 20-1. As discussed below in detail, historical data maintained by the MAP server 12 is anonymized in order to maintain the privacy of the users 20-1 through 20-N.
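The privacy behavior described above, in which only the current location of the user 20-1 is retained, can be sketched as a user record whose location update simply overwrites the previous value rather than appending to a history. The names below are assumptions introduced for illustration and are not part of the disclosed embodiments.

    import java.util.List;

    // Sketch of a user record that stores only the user's *current* location; a
    // location update overwrites the previous value rather than appending to a
    // history, matching the privacy behavior described above.
    public class UserRecordSketch {

        static class UserRecord {
            final String userId;
            final List<String> profileKeywords;
            private double latitude;
            private double longitude;

            UserRecord(String userId, List<String> profileKeywords) {
                this.userId = userId;
                this.profileKeywords = profileKeywords;
            }

            // Overwrite the current location; no historical record of the raw
            // location is kept for this user.
            void applyLocationUpdate(double latitude, double longitude) {
                this.latitude = latitude;
                this.longitude = longitude;
            }

            @Override
            public String toString() {
                return userId + " @ (" + latitude + ", " + longitude + ")";
            }
        }

        public static void main(String[] args) {
            UserRecord record = new UserRecord("user-20-1", List.of("NCSU", "photography"));
            record.applyLocationUpdate(35.7796, -78.6382);
            record.applyLocationUpdate(35.7847, -78.6821); // previous location is discarded
            System.out.println(record);
        }
    }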
  • In addition to storing the current location of the user 20-1, the location manager 54 sends the current location of the user 20-1 to the location server 16 (step 1004C). In this embodiment, by providing location updates to the location server 16, the MAP server 12 in return receives location updates for the user 20-1 from the location server 16. This is particularly beneficial when the mobile device 18-1 does not permit background processes, which is the case for the Apple® iPhone. As such, if the mobile device 18-1 is an Apple® iPhone or similar device that does not permit background processes, the MAP application 32-1 will not be able to provide location updates for the user 20-1 to the MAP server 12 unless the MAP application 32-1 is active.
  • Therefore, when the MAP application 32-1 is not active, other applications running on the mobile device 18-1 (or some other device of the user 20-1) may directly or indirectly provide location updates to the location server 16 for the user 20-1. This is illustrated in step 1006 where the location server 16 receives a location update for the user 20-1 directly or indirectly from another application running on the mobile device 18-1 or an application running on another device of the user 20-1 (step 1006A). The location server 16 then provides the location update for the user 20-1 to the MAP server 12 (step 1006B). In response, the location manager 54 updates and stores the current location of the user 20-1 in the user record of the user 20-1 (step 1006C). In this manner, the MAP server 12 is enabled to obtain location updates for the user 20-1 even when the MAP application 32-1 is not active at the mobile device 18-1.
  • FIG. 5 illustrates the operation of the system 10 of FIG. 1 to provide the user profile of the user 20-1 of the mobile device 18-1 according to another embodiment of the present disclosure. This discussion is equally applicable to user profiles of the other users 20-2 through 20-N of the other mobile devices 18-2 through 18-N. First, an authentication process is performed (step 1100). For authentication, in this embodiment, the mobile device 18-1 authenticates with the MAP server 12 (step 1100A), and the MAP server 12 authenticates with the profile server 14 (step 1100B). Preferably, authentication is performed using OpenID or similar technology. However, authentication may alternatively be performed using separate credentials (e.g., username and password) of the user 20-1 for access to the MAP server 12 and the profile server 14. Assuming that authentication is successful, the profile server 14 returns an authentication succeeded message to the MAP server 12 (step 1100C), and the MAP server 12 returns an authentication succeeded message to the MAP client 30-1 of the mobile device 18-1 (step 1100D).
  • At some point after authentication is complete, a user profile process is performed such that a user profile of the user 20-1 is obtained from the profile server 14 and delivered to the MAP server 12 (step 1102). In this embodiment, the profile manager 52 of the MAP server 12 sends a profile request to the profile server 14 (step 1102A). In response, the profile server 14 returns the user profile of the user 20-1 to the profile manager 52 of the MAP server 12 (step 1102B). Note that while in this embodiment the profile server 14 returns the complete user profile of the user 20-1 to the MAP server 12, in an alternative embodiment, the profile server 14 may return a filtered version of the user profile of the user 20-1 to the MAP server 12. The profile server 14 may filter the user profile of the user 20-1 according to criteria specified by the user 20-1. For example, the user profile of the user 20-1 may include demographic information, general interests, music interests, and movie interests, and the user 20-1 may specify that the demographic information or some subset thereof is to be filtered, or removed, before sending the user profile to the MAP server 12.
  • Upon receiving the user profile of the user 20-1, the profile manager 52 of the MAP server 12 processes the user profile (step 1102C). More specifically, as discussed above, in the preferred embodiment, the profile manager 52 includes social network handlers for the social network services supported by the MAP server 12. The social network handlers process user profiles to generate user profiles for the MAP server 12 that include lists of keywords for each of a number of profile categories. The profile categories may be the same for each of the social network handlers or different for each of the social network handlers.
  • After processing the user profile of the user 20-1, the profile manager 52 of the MAP server 12 stores the resulting user profile for the user 20-1 (step 1102D). More specifically, in one embodiment, the MAP server 12 stores user records for the users 20-1 through 20-N in the datastore 64 (FIG. 2). The user profile of the user 20-1 is stored in the user record of the user 20-1. The user record of the user 20-1 includes a unique identifier of the user 20-1, the user profile of the user 20-1, and, as discussed below, a current location of the user 20-1. Note that the user profile of the user 20-1 may be updated as desired. For example, in one embodiment, the user profile of the user 20-1 is updated by repeating step 1102 each time the user 20-1 activates the MAP application 32-1.
• Note that while the discussion herein focuses on an embodiment where the user profiles of the users 20-1 through 20-N are obtained from the one or more profile servers 14, the user profiles of the users 20-1 through 20-N may be obtained in any desired manner. For example, in one alternative embodiment, the user 20-1 may identify one or more favorite websites. The profile manager 52 of the MAP server 12 may then crawl the one or more favorite websites of the user 20-1 to obtain keywords appearing in the one or more favorite websites of the user 20-1. These keywords may then be stored as the user profile of the user 20-1.
  • At some point, a process is performed such that a current location of the mobile device 18-1 and thus a current location of the user 20-1 is obtained by the MAP server 12 (step 1104). In this embodiment, the MAP application 32-1 of the mobile device 18-1 obtains the current location of the mobile device 18-1 from the location function 36-1 of the mobile device 18-1. The MAP application 32-1 then provides the current location of the user 20-1 of the mobile device 18-1 to the location server 16 (step 1104A). Note that step 1104A may be repeated periodically or in response to changes in the location of the mobile device 18-1 in order to provide location updates for the user 20-1 to the MAP server 12. The location server 16 then provides the current location of the user 20-1 to the MAP server 12 (step 1104B). The location server 16 may provide the current location of the user 20-1 to the MAP server 12 automatically in response to receiving the current location of the user 20-1 from the mobile device 18-1 or in response to a request from the MAP server 12.
  • In response to receiving the current location of the mobile device 18-1, the location manager 54 of the MAP server 12 stores the current location of the mobile device 18-1 as the current location of the user 20-1 (step 1104C). More specifically, in one embodiment, the current location of the user 20-1 is stored in the user record of the user 20-1 maintained in the datastore 64 of the MAP server 12. Note that only the current location of the user 20-1 is stored in the user record of the user 20-1. In this manner, the MAP server 12 maintains privacy for the user 20-1 since the MAP server 12 does not maintain a historical record of the location of the user 20-1. As discussed below in detail, historical data maintained by the MAP server 12 is anonymized in order to maintain the privacy of the users 20-1 through 20-N.
  • As discussed above, the use of the location server 16 is particularly beneficial when the mobile device 18-1 does not permit background processes, which is the case for the Apple® iPhone. As such, if the mobile device 18-1 is an Apple® iPhone or similar device that does not permit background processes, the MAP application 32-1 will not provide location updates for the user 20-1 to the location server 16 unless the MAP application 32-1 is active. However, other applications running on the mobile device 18-1 (or some other device of the user 20-1) may provide location updates to the location server 16 for the user 20-1 when the MAP application 32-1 is not active. This is illustrated in step 1106 where the location server 16 receives a location update for the user 20-1 from another application running on the mobile device 18-1 or an application running on another device of the user 20-1 (step 1106A). The location server 16 then provides the location update for the user 20-1 to the MAP server 12 (step 1106B). In response, the location manager 54 updates and stores the current location of the user 20-1 in the user record of the user 20-1 (step 1106C). In this manner, the MAP server 12 is enabled to obtain location updates for the user 20-1 even when the MAP application 32-1 is not active at the mobile device 18-1.
  • Using the current locations of the users 20-1 through 20-N and the user profiles of the users 20-1 through 20-N, the MAP server 12 can provide a number of features. A first feature that may be provided by the MAP server 12 is historical storage of anonymized user profile data by location. This historical storage of anonymized user profile data by location is performed by the history manager 56 of the MAP server 12. More specifically, as illustrated in FIG. 6, in the preferred embodiment, the history manager 56 maintains lists of users located in a number of geographic regions, or “location buckets.” Preferably, the location buckets are defined by floor(latitude, longitude) to a desired resolution. The higher the resolution, the smaller the size of the location buckets. For example, in one embodiment, the location buckets are defined by floor(latitude, longitude) to a resolution of 1/10,000th of a degree such that the lower left-hand corners of the squares illustrated in FIG. 6 are defined by the floor(latitude, longitude) values at a resolution of 1/10,000th of a degree. In the example of FIG. 6, users are represented as dots, and location buckets 72 through 88 have lists of 1, 3, 2, 1, 1, 2, 1, 2, and 3 users, respectively.
  • As discussed below in detail, at a predetermined time interval such as, for example, 15 minutes, the history manager 56 makes a copy of the lists of users in the location buckets, anonymizes the user profiles of the users in the lists to provide anonymized user profile data for the corresponding location buckets, and stores the anonymized user profile data in a number of history objects. In one embodiment, a history object is stored for each location bucket having at least one user. In another embodiment, a quadtree algorithm is used to efficiently create history objects for geographic regions (i.e., groups of one or more adjoining location buckets).
  • FIG. 7 graphically illustrates a scenario where a user moves from one location bucket to another, namely, from the location bucket 74 to the location bucket 76. As discussed below in detail, assuming that the movement occurs during the time interval between persistence of the historical data by the history manager 56, the user is included on both the list for the location bucket 74 and the list for the location bucket 76. However, the user is flagged or otherwise marked as inactive for the location bucket 74 and active for the location bucket 76. As discussed below, after making a copy of the lists for the location buckets to be used to persist the historical data, users flagged as inactive are removed from the lists of users for the location buckets. Thus, in sum, once a user moves from the location bucket 74 to the location bucket 76, the user remains in the list for the location bucket 74 until the predetermined time interval has expired and the anonymized user profile data is persisted. The user is then removed from the list for the location bucket 74.
  • FIG. 8 is a flow chart illustrating the operation of a foreground “bucketization” process performed by the history manager 56 to maintain the lists of users for location buckets according to one embodiment of the present disclosure. First, the history manager 56 receives a location update for a user (step 1200). For this discussion, assume that the location update is received for the user 20-1. The history manager 56 then determines a location bucket corresponding to the updated location (i.e., the current location) of the user 20-1 (step 1202). In the preferred embodiment, the location of the user 20-1 is expressed as latitude and longitude coordinates, and the history manager 56 determines the location bucket by determining floor values of the latitude and longitude coordinates, which can be written as floor(latitude, longitude) at a desired resolution. As an example, if the latitude and longitude coordinates for the location of the user 20-1 are 32.24267381553987 and −111.9249213502935, respectively, and the floor values are to be computed to a resolution of 1/10,000th of a degree, then the floor values for the latitude and longitude coordinates are 32.2426 and −111.9249. The floor values for the latitude and longitude coordinates correspond to a particular location bucket.
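• For concreteness, the bucket computation described above can be sketched as follows. This is a minimal Python sketch; the function name location_bucket and the tuple key format are illustrative assumptions, not part of the disclosure.

```python
import math

RESOLUTION = 10_000  # 1/10,000th of a degree

def location_bucket(latitude: float, longitude: float) -> tuple:
    """Return the floor(latitude, longitude) bucket key at the configured resolution."""
    lat_floor = math.floor(latitude * RESOLUTION) / RESOLUTION
    lon_floor = math.floor(longitude * RESOLUTION) / RESOLUTION
    return (lat_floor, lon_floor)

# Example coordinates from the text. Note that math.floor rounds toward negative
# infinity, so the negative longitude maps to -111.925 here; an implementation
# that truncates toward zero would instead yield -111.9249 as in the example above.
print(location_bucket(32.24267381553987, -111.9249213502935))
```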
  • After determining the location bucket for the location of the user 20-1, the history manager 56 determines whether the user 20-1 is new to the location bucket (step 1204). In other words, the history manager 56 determines whether the user 20-1 is already on the list of users for the location bucket. If the user 20-1 is new to the location bucket, the history manager 56 creates an entry for the user 20-1 in the list of users for the location bucket (step 1206). Returning to step 1204, if the user 20-1 is not new to the location bucket, the history manager 56 updates the entry for the user 20-1 in the list of users for the location bucket (step 1208). At this point, whether proceeding from step 1206 or 1208, the user 20-1 is flagged as active in the list of users for the location bucket (step 1210).
  • The history manager 56 then determines whether the user 20-1 has moved from another location bucket (step 1212). More specifically, the history manager 56 determines whether the user 20-1 is included in the list of users for another location bucket and is currently flagged as active in that list. If the user 20-1 has not moved from another location bucket, the process proceeds to step 1216. If the user 20-1 has moved from another location bucket, the history manager 56 flags the user 20-1 as inactive in the list of users for the other location bucket from which the user 20-1 has moved (step 1214).
• At this point, whether proceeding from step 1212 or 1214, the history manager 56 determines whether it is time to persist (step 1216). More specifically, as mentioned above, the history manager 56 operates to persist history objects at a predetermined time interval such as, for example, every 15 minutes. Thus, the history manager 56 determines that it is time to persist if the predetermined time interval has expired. If it is not time to persist, the process returns to step 1200 and is repeated for a next received location update, which will typically be for another user. If it is time to persist, the history manager 56 creates a copy of the lists of users for the location buckets and passes the copy of the lists to an anonymization and storage process (step 1218). In this embodiment, the anonymization and storage process is a separate process performed by the history manager 56. The history manager 56 then removes inactive users from the lists of users for the location buckets (step 1220). The process then returns to step 1200 and is repeated for a next received location update, which will typically be for another user.
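• The foreground bucketization process of FIG. 8 can be approximated by a single loop that maintains per-bucket user lists with active/inactive flags and hands a copy of the lists to a separate anonymization and storage process at the persistence interval. The following is a simplified, single-threaded Python sketch; names such as BucketEntry, handle_location_update, and PERSIST_INTERVAL are assumptions for illustration only.

```python
import time
from dataclasses import dataclass

@dataclass
class BucketEntry:
    user_id: str
    active: bool = True

buckets: dict = {}          # bucket key -> {user_id: BucketEntry}
last_persist = time.time()
PERSIST_INTERVAL = 15 * 60  # seconds (15 minutes)

def handle_location_update(user_id: str, bucket_key: tuple) -> None:
    # Steps 1202-1210: create or update the user's entry in the bucket's list
    # and flag the user as active in that bucket.
    entry = buckets.setdefault(bucket_key, {}).get(user_id)
    if entry is None:
        buckets[bucket_key][user_id] = BucketEntry(user_id)
    else:
        entry.active = True
    # Steps 1212-1214: if the user moved from another bucket, flag the user
    # as inactive in that other bucket.
    for key, users in buckets.items():
        if key != bucket_key and user_id in users:
            users[user_id].active = False

def maybe_persist(anonymize_and_store) -> None:
    # Steps 1216-1220: at the persistence interval, copy the lists, hand the
    # copy to the anonymization and storage process, and then remove users
    # flagged as inactive from the live lists.
    global last_persist
    if time.time() - last_persist < PERSIST_INTERVAL:
        return
    snapshot = {key: list(users.values()) for key, users in buckets.items()}
    anonymize_and_store(snapshot)
    for users in buckets.values():
        for uid in [u for u, e in users.items() if not e.active]:
            del users[uid]
    last_persist = time.time()
```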
  • FIG. 9 is a flow chart illustrating the anonymization and storage process performed by the history manager 56 at the predetermined time interval according to one embodiment of the present disclosure. First, the anonymization and storage process receives the copy of the lists of users for the location buckets passed to the anonymization and storage process by the bucketization process of FIG. 8 (step 1300). Next, anonymization is performed for each of the location buckets having at least one user in order to provide anonymized user profile data for the location buckets (step 1302). Anonymization prevents connecting information stored in the history objects stored by the history manager 56 back to the users 20-1 through 20-N or at least substantially increases a difficulty of connecting information stored in the history objects stored by the history manager 56 back to the users 20-1 through 20-N. Lastly, the anonymized user profile data for the location buckets is stored in a number of history objects (step 1304). In one embodiment, a separate history object is stored for each of the location buckets, where the history object of a location bucket includes the anonymized user profile data for the location bucket. In another embodiment, as discussed below, a quadtree algorithm is used to efficiently store the anonymized user profile data in a number of history objects such that each history object stores the anonymized user profile data for one or more location buckets.
  • FIG. 10 graphically illustrates one embodiment of the anonymization process of step 1302 of FIG. 9. In this embodiment, anonymization is performed by creating anonymous user records for the users in the lists of users for the location buckets. The anonymous user records are not connected back to the users 20-1 through 20-N. More specifically, as illustrated in FIG. 10, each user in the lists of users for the location buckets has a corresponding user record 90. The user record 90 includes a unique user identifier (ID) for the user, the current location of the user, and the user profile of the user. The user profile includes keywords for each of a number of profile categories, which are stored in corresponding profile category records 92-1 through 92-M. Each of the profile category records 92-1 through 92-M includes a user ID for the corresponding user which may be the same user ID used in the user record 90, a category ID, and a list of keywords for the profile category.
  • For anonymization, an anonymous user record 94 is created from the user record 90. In the anonymous user record 94, the user ID is replaced with a new user ID that is not connected back to the user, which is also referred to herein as an anonymous user ID. This new user ID is different than any other user ID used for anonymous user records created from the user record of the user for any previous or subsequent time periods. In this manner, anonymous user records for a single user created over time cannot be linked to one another.
  • In addition, anonymous profile category records 96-1 through 96-M are created for the profile category records 92-1 through 92-M. In the anonymous profile category records 96-1 through 96-M, the user ID is replaced with a new user ID, which may be the same new user ID included in the anonymous user record 94. The anonymous profile category records 96-1 through 96-M include the same category IDs and lists of keywords as the corresponding profile category records 92-1 through 92-M. Note that the location of the user is not stored in the anonymous user record 94. With respect to location, it is sufficient that the anonymous user record 94 is linked to a location bucket.
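• The anonymization of FIG. 10 amounts to re-keying each user record and its profile category records under a fresh, unlinkable identifier and dropping the location. The sketch below illustrates one way to do this; the record layout and the function name anonymize_user_record are illustrative assumptions.

```python
import uuid

def anonymize_user_record(user_record: dict, category_records: list) -> dict:
    """Create an anonymous user record with a fresh, unlinkable user ID.

    A new random ID is generated on every persistence pass, so anonymous
    records created for the same user in different time periods cannot be
    linked to one another. The current location is intentionally dropped;
    the record is only associated with its location bucket.
    """
    anonymous_id = uuid.uuid4().hex
    return {
        "user_id": anonymous_id,
        "profile_categories": [
            {"user_id": anonymous_id,
             "category_id": rec["category_id"],
             "keywords": list(rec["keywords"])}
            for rec in category_records
        ],
    }
```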
  • In another embodiment, the history manager 56 performs anonymization in a manner similar to that described above with respect to FIG. 10. However, in this embodiment, the profile category records for the group of users in a location bucket, or the group of users in a number of location buckets representing a node in a quadtree data structure (see below), may be selectively randomized among the anonymous user records of those users. In other words, each anonymous user record would have a user profile including a selectively randomized set of profile category records (including keywords) from a cumulative list of profile category records for all of the users in the group.
  • In yet another embodiment, rather than creating anonymous user records 94 for the users in the lists maintained for the location buckets, the history manager 56 may perform anonymization by storing an aggregate user profile for each location bucket, or each group of location buckets representing a node in a quadtree data structure (see below). The aggregate user profile may include a list of all keywords and potentially the number of occurrences of each keyword in the user profiles of the corresponding group of users. In this manner, the data stored by the history manager 56 is not connected back to the users 20-1 through 20-N.
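• An aggregate user profile of the kind described in this alternative can be built with a simple keyword count over the group, as in the sketch below. The function name aggregate_profile and the returned dictionary layout are illustrative assumptions; the stored representation in the actual system may differ.

```python
from collections import Counter

def aggregate_profile(user_profiles: list) -> dict:
    """Build an aggregate profile for a location bucket (or quadtree node).

    The aggregate is a keyword -> occurrence-count map over the user profiles
    of the group, plus the total number of users, so nothing in the stored
    data refers back to an individual user.
    """
    counts = Counter()
    for profile in user_profiles:
        for keywords in profile.values():   # profile: category -> list of keywords
            counts.update(keywords)
    return {"keyword_counts": dict(counts), "total_users": len(user_profiles)}
```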
  • FIG. 11 is a flow chart illustrating the storing step (step 1304) of FIG. 9 in more detail according to one embodiment of the present disclosure. First, the history manager 56 processes the location buckets using a quadtree algorithm to produce a quadtree data structure, where each node of the quadtree data structure includes one or more of the location buckets having a combined number of users that is at most a predefined maximum number of users (step 1400). The history manager 56 then stores a history object for each node in the quadtree data structure having at least one user (step 1402).
  • Each history object includes location information, timing information, data, and quadtree data structure information. The location information included in the history object defines a combined geographic area of the location bucket(s) forming the corresponding node of the quadtree data structure. For example, the location information may be latitude and longitude coordinates for a northeast corner of the combined geographic area of the node of the quadtree data structure and a southwest corner of the combined geographic area for the node of the quadtree data structure. The timing information includes information defining a time window for the history object, which may be, for example, a start time for the corresponding time interval and an end time for the corresponding time interval. The data includes the anonymized user profile data for the users in the list(s) maintained for the location bucket(s) forming the node of the quadtree data structure for which the history object is stored. In addition, the data may include a total number of users in the location bucket(s) forming the node of the quadtree data structure. Lastly, the quadtree data structure information includes information defining a quadtree depth of the node in the quadtree data structure.
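• One possible in-memory layout for such a history object is sketched below. The field names are assumptions chosen to mirror the description above (location information, timing information, anonymized data, user count, and quadtree depth), not the actual storage schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class HistoryObject:
    """Illustrative layout of a history object (field names are assumptions)."""
    ne_corner: tuple                 # (latitude, longitude) of the node's northeast corner
    sw_corner: tuple                 # (latitude, longitude) of the node's southwest corner
    start_time: datetime             # start of the time interval covered
    end_time: datetime               # end of the time interval covered
    anonymized_records: List[dict]   # anonymized user profile data for the node
    total_users: int                 # number of users in the node's location bucket(s)
    quadtree_depth: int              # depth of the node in the quadtree data structure
```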
• FIG. 12 is a flow chart illustrating a quadtree algorithm that may be used to process the location buckets to form the quadtree data structure in step 1400 of FIG. 11 according to one embodiment of the present disclosure. Initially, a geographic area served by the MAP server 12 is divided into a number of geographic regions, each including multiple location buckets. These geographic regions are also referred to herein as base quadtree regions. The geographic area served by the MAP server 12 may be, for example, a city, a state, a country, or the like. Further, the geographic area may be the only geographic area served by the MAP server 12 or one of a number of geographic areas served by the MAP server 12. Preferably, the base quadtree regions have a size of 2^n×2^n location buckets, where n is an integer greater than or equal to 1.
  • In order to form the quadtree data structure, the history manager 56 determines whether there are any more base quadtree regions to process (step 1500). If there are more base quadtree regions to process, the history manager 56 sets a current node to the next base quadtree region to process, which for the first iteration is the first base quadtree region (step 1502). The history manager 56 then determines whether the number of users in the current node is greater than a predefined maximum number of users and whether a current quadtree depth is less than a maximum quadtree depth (step 1504). In one embodiment, the maximum quadtree depth may be reached when the current node corresponds to a single location bucket. However, the maximum quadtree depth may be set such that the maximum quadtree depth is reached before the current node reaches a single location bucket.
  • If the number of users in the current node is greater than the predefined maximum number of users and the current quadtree depth is less than a maximum quadtree depth, the history manager 56 creates a number of child nodes for the current node (step 1506). More specifically, the history manager 56 creates a child node for each quadrant of the current node. The users in the current node are then assigned to the appropriate child nodes based on the location buckets in which the users are located (step 1508), and the current node is then set to the first child node (step 1510). At this point, the process returns to step 1504 and is repeated.
  • Once the number of users in the current node is not greater than the predefined maximum number of users or the maximum quadtree depth has been reached, the history manager 56 determines whether the current node has any more sibling nodes (step 1512). Sibling nodes are child nodes of the same parent node. If so, the history manager 56 sets the current node to the next sibling node of the current node (step 1514), and the process returns to step 1504 and is repeated. Once there are no more sibling nodes to process, the history manager 56 determines whether the current node has a parent node (step 1516). If so, since the parent node has already been processed, the history manager 56 determines whether the parent node has any sibling nodes that need to be processed (step 1518). If the parent node has any sibling nodes that need to be processed, the history manager 56 sets the next sibling node of the parent node to be processed as the current node (step 1520). From this point, the process returns to step 1504 and is repeated. Returning to step 1516, if the current node does not have a parent node, the process returns to step 1500 and is repeated until there are no more base quadtree regions to process. Once there are no more base quadtree regions to process, the finished quadtree data structure is returned to the process of FIG. 11 such that the history manager 56 can then store the history objects for nodes in the quadtree data structure having at least one user (step 1522).
  • FIGS. 13A through 13E graphically illustrate the process of FIG. 12 for the generation of the quadtree data structure for one exemplary base quadtree region 98. FIG. 13A illustrates the base quadtree region 98. As illustrated, the base quadtree region 98 is an 8×8 square of location buckets, where each of the small squares represents a location bucket. First, the history manager 56 determines whether the number of users in the base quadtree region 98 is greater than the predetermined maximum number of users. In this example, the predetermined maximum number of users is 3. Since the number of users in the base quadtree region 98 is greater than 3, the history manager 56 divides the base quadtree region 98 into four child nodes 100-1 through 100-4, as illustrated in FIG. 13B.
• Next, the history manager 56 determines whether the number of users in the child node 100-1 is greater than the predetermined maximum, which again for this example is 3. Since the number of users in the child node 100-1 is greater than 3, the history manager 56 divides the child node 100-1 into four child nodes 102-1 through 102-4, as illustrated in FIG. 13C. The child nodes 102-1 through 102-4 are children of the child node 100-1. The history manager 56 then determines whether the number of users in the child node 102-1 is greater than the predetermined maximum number of users, which again is 3. Since there are more than 3 users in the child node 102-1, the history manager 56 further divides the child node 102-1 into four child nodes 104-1 through 104-4, as illustrated in FIG. 13D.
  • The history manager 56 then determines whether the number of users in the child node 104-1 is greater than the predetermined maximum number of users, which again is 3. Since the number of users in the child node 104-1 is not greater than the predetermined maximum number of users, the child node 104-1 is identified as a node for the finished quadtree data structure, and the history manager 56 proceeds to process the sibling nodes of the child node 104-1, which are the child nodes 104-2 through 104-4. Since the number of users in each of the child nodes 104-2 through 104-4 is less than the predetermined maximum number of users, the child nodes 104-2 through 104-4 are also identified as nodes for the finished quadtree data structure.
  • Once the history manager 56 has finished processing the child nodes 104-1 through 104-4, the history manager 56 identifies the parent node of the child nodes 104-1 through 104-4, which in this case is the child node 102-1. The history manager 56 then processes the sibling nodes of the child node 102-1, which are the child nodes 102-2 through 102-4. In this example, the number of users in each of the child nodes 102-2 through 102-4 is less than the predetermined maximum number of users. As such, the child nodes 102-2 through 102-4 are identified as nodes for the finished quadtree data structure.
  • Once the history manager 56 has finished processing the child nodes 102-1 through 102-4, the history manager 56 identifies the parent node of the child nodes 102-1 through 102-4, which in this case is the child node 100-1. The history manager 56 then processes the sibling nodes of the child node 100-1, which are the child nodes 100-2 through 100-4. More specifically, the history manager 56 determines that the child node 100-2 includes more than the predetermined maximum number of users and, as such, divides the child node 100-2 into four child nodes 106-1 through 106-4, as illustrated in FIG. 13E. Because the number of users in each of the child nodes 106-1 through 106-4 is not greater than the predetermined maximum number of users, the child nodes 106-1 through 106-4 are identified as nodes for the finished quadtree data structure. Then, the history manager 56 proceeds to process the child nodes 100-3 and 100-4. Since the number of users in each of the child nodes 100-3 and 100-4 is not greater than the predetermined maximum number of users, the child nodes 100-3 and 100-4 are identified as nodes for the finished quadtree data structure. Thus, at completion, the quadtree data structure for the base quadtree region 98 includes the child nodes 104-1 through 104-4, the child nodes 102-2 through 102-4, the child nodes 106-1 through 106-4, and the child nodes 100-3 and 100-4, as illustrated in FIG. 13E.
  • As discussed above, the history manager 56 stores a history object for each of the nodes in the quadtree data structure including at least one user. As such, in this example, the history manager 56 stores history objects for the child nodes 104-2 and 104-3, the child nodes 102-2 and 102-4, the child nodes 106-1 and 106-4, and the child node 100-3. However, no history objects are stored for the nodes that do not have any users (i.e., the child nodes 104-1 and 104-4, the child node 102-3, the child nodes 106-2 and 106-3, and the child node 100-4).
  • FIG. 14 illustrates the operation of the system 10 of FIG. 1 wherein a mobile device is enabled to request and receive historical data from the MAP server 12 according to one embodiment of the present disclosure. As illustrated, in this embodiment, the MAP application 32-1 of the mobile device 18-1 sends a historical request to the MAP client 30-1 of the mobile device 18-1 (step 1600). In one embodiment, the historical request identifies either a POI or an AOI and a time window. A POI is a geographic point whereas an AOI is a geographic area. In one embodiment, the historical request is for a POI and a time window, where the POI is a POI corresponding to the current location of the user 20-1, a POI selected from a list of POIs defined by the user 20-1 of the mobile device 18-1, a POI selected from a list of POIs defined by the MAP application 32-1 or the MAP server 12, a POI selected by the user 20-1 from a map, a POI implicitly defined via a separate application (e.g., POI is implicitly defined as the location of the nearest Starbucks coffee house in response to the user 20-1 performing a Google search for “Starbucks”), or the like. If the POI is selected from a list of POIs, the list of POIs may include static POIs which may be defined by street addresses or latitude and longitude coordinates, dynamic POIs which may be defined as the current locations of one or more friends of the user 20-1, or both.
  • In another embodiment, the historical request is for an AOI and a time window, where the AOI may be an AOI of a geographic area of a predefined shape and size centered at the current location of the user 20-1, an AOI selected from a list of AOIs defined by the user 20-1, an AOI selected from a list of AOIs defined by the MAP application 32-1 or the MAP server 12, an AOI selected by the user 20-1 from a map, an AOI implicitly defined via a separate application (e.g., AOI is implicitly defined as an area of a predefined shape and size centered at the location of the nearest Starbucks coffee house in response to the user 20-1 performing a Google search for “Starbucks”), or the like. If the AOI is selected from a list of AOIs, the list of AOIs may include static AOIs, dynamic AOIs which may be defined as areas of a predefined shape and size centered at the current locations of one or more friends of the user 20-1, or both. Note that the POI or AOI of the historical request may be selected by the user 20-1 via the MAP application 32-1. In yet another embodiment, the MAP application 32-1 automatically uses the current location of the user 20-1 as the POI or as a center point for an AOI of a predefined shape and size.
  • The time window for the historical request may be relative to the current time. For example, the time window may be the last hour, the last day, the last week, the last month, or the like. Alternatively, the time window may be an arbitrary time window selected by the user 20-1 such as, for example, yesterday from 7 pm-9 pm, last Friday, last week, or the like. Note that while in this example the historical request includes a single POI or AOI and a single time window, the historical request may include multiple POIs or AOIs and/or multiple time windows.
  • In one embodiment, the historical request is made in response to user input from the user 20-1 of the mobile device 18-1. For instance, in one embodiment, the user 20-1 selects either a POI or an AOI and a time window and then instructs the MAP application 32-1 to make the historical request by, for example, selecting a corresponding button on a graphical user interface. In another embodiment, the historical request is made automatically in response to some event such as, for example, opening the MAP application 32-1.
  • Upon receiving the historical request from the MAP application 32-1, the MAP client 30-1 forwards the historical request to the MAP server 12 (step 1602). Note that the MAP client 30-1 may, in some cases, process the historical request from the MAP application 32-1 before forwarding the historical request to the MAP server 12. For example, if the historical request from the MAP application 32-1 is for multiple POIs/AOIs and/or for multiple time windows, the MAP client 30-1 may process the historical request from the MAP application 32-1 to produce multiple historical requests to be sent to the MAP server 12. For instance, a separate historical request may be produced for each POI/AOI and time window combination. However, for this discussion, the historical request is for a single POI or AOI for a single time window.
  • Upon receiving the historical request from the MAP client 30-1, the MAP server 12 processes the historical request (step 1604). More specifically, the historical request is processed by the history manager 56 of the MAP server 12. First, the history manager 56 obtains history objects that are relevant to the historical request from the datastore 64 of the MAP server 12. The relevant history objects are those recorded for locations relevant to the POI or AOI and the time window for the historical request. The history manager 56 then processes the relevant history objects to provide historical aggregate profile data for the POI or AOI in a time context and/or a geographic context. In this embodiment, the historical aggregate profile data is based on the user profiles of the anonymous user records in the relevant history objects as compared to the user profile of the user 20-1 or a select subset thereof. In another embodiment, the historical aggregate profile data is based on the user profiles of the anonymous user records in the relevant history objects as compared to a target user profile defined or otherwise specified by the user 20-1.
  • As discussed below in detail, for the time context, the history manager 56 divides the time window for the historical request into a number of time bands. Each time band is a fragment of the time window. Then, for each time band, the history manager 56 identifies a subset of the relevant history objects that are relevant to the time band (i.e., history objects recorded for time periods within the time band or that overlap the time band) and generates an aggregate profile for each of those history objects based on the user profiles of the anonymous user records in the history objects and the user profile, or a select subset of the user profile, of the user 20-1. Then, the history manager 56 averages or otherwise combines the aggregate profiles for the history objects relevant to the time band. The resulting data for the time bands forms historical aggregate profile data that is to be returned to the MAP client 30-1, as discussed below.
  • For the geographic context, the history manager 56 generates an average aggregate profile for each of a number of grids surrounding the POI or within the AOI. More specifically, history objects relevant to the POI or the AOI and the time window of the historical request are obtained. Then, the user profiles of the anonymous users in the relevant history objects are used to generate average aggregate profiles for a number of grids, or geographic regions, at or surrounding the POI or the AOI. These average aggregate profiles for the grids form historical aggregate profile data that is to be returned to the MAP client 30-1, as discussed below.
  • Once the MAP server 12 has processed the historical request, the MAP server 12 returns the resulting historical aggregate profile data to the MAP client 30-1 (step 1606). As discussed above, the historical aggregate profile data may be in a time context or a geographic context. In an alternative embodiment, the data returned to the MAP client 30-1 may be raw historical data. The raw historical data may be the relevant history objects or data from the relevant history objects such as, for example, the user records in the relevant history objects, the user profiles of the anonymous user records in the relevant history objects, or the like.
  • Upon receiving the historical aggregate profile data, the MAP client 30-1 passes the historical aggregate profile data to the MAP application 32-1 (step 1608). Note that in an alternative embodiment where the data returned by the MAP server 12 is raw historical data, the MAP client 30-1 may process the raw historical data to provide desired data. For example, the MAP client 30-1 may process the raw historical data in order to generate average aggregate profiles for time bands within the time window of the historical request and/or to generate average aggregate profiles for regions near the POI or within the AOI of the historical request in a manner similar to that described above. The MAP application 32-1 then presents the historical aggregate profile data to the user 20-1 (step 1610).
  • FIGS. 15A and 15B illustrate a flow chart for a process for generating historical aggregate profile data in a time context according to one embodiment of the present disclosure. First, upon receiving a historical request, the history manager 56 establishes a bounding box for the historical request based on the POI or the AOI for the historical request (step 1700). Note that while a bounding box is used in this example, other geographic shapes may be used to define a bounding region for the historical request (e.g., a bounding circle). In this embodiment, the historical request is from a mobile device of a requesting user, which in this example is the user 20-1. If the historical request is for a POI, the bounding box is a geographic region corresponding to or surrounding the POI. For example, the bounding box may be a square geographic region of a predefined size centered on the POI. If the historical request is for an AOI, the bounding box is the AOI. In addition to establishing the bounding box, the history manager 56 establishes a time window for the historical request (step 1702). For example, if the historical request is for the last week and the current date and time are Sep. 17, 2009 at 10:00 pm, the history manager 56 may generate the time window as Sep. 10, 2009 at 10:00 pm through Sep. 17, 2009 at 10:00 pm.
  • Next, the history manager 56 obtains history objects relevant to the bounding box and the time window for the historical request from the datastore 64 of the MAP server 12 (step 1704). The relevant history objects are history objects recorded for time periods within or intersecting the time window and for locations, or geographic areas, within or intersecting the bounding box for the historical request. The history manager 56 also determines an output time band size (step 1706). In one exemplary embodiment, the output time band size is 1/100th of the amount of time from the start of the time window to the end of the time window for the historical request. For example, if the amount of time in the time window for the historical request is one week, the output time band size may be set to 1/100th of a week, which is 1.68 hours or 1 hour and 41 minutes.
• The history manager 56 then sorts the relevant history objects into the appropriate output time bands of the time window for the historical request. More specifically, in this embodiment, the history manager 56 creates an empty list for each output time band of the time window (step 1708). Then, the history manager 56 gets the next history object from the history objects identified in step 1704 as being relevant to the historical request (step 1710) and adds that history object to the list(s) for the appropriate output time band(s) (step 1712). Note that if the history object is recorded for a time period that overlaps two or more of the output time bands, then the history object may be added to the lists for all of the output time bands to which the history object is relevant. The history manager 56 then determines whether there are more relevant history objects to sort into the output time bands (step 1714). If so, the process returns to step 1710 and is repeated until all of the relevant history objects have been sorted into the appropriate output time bands.
• Once sorting is complete, the history manager 56 determines an equivalent depth of the bounding box (DBB) within the quadtree data structures used to store the history objects (step 1716). More specifically, the area of the base quadtree region (e.g., the base quadtree region 98) is referred to as A_BASE. Then, at each depth D of the quadtree, the area of the corresponding quadtree nodes is (¼)^D·A_BASE. In other words, the area of a child node is ¼th of the area of the parent node of that child node. The history manager 56 determines the equivalent depth of the bounding box (DBB) by determining a quadtree depth at which the area of the corresponding quadtree nodes most closely matches an area of the bounding box (ABB).
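• One way to approximate this equivalent depth is sketched below: because node area shrinks by a factor of four per level, the depth whose node area best matches the bounding box is approximately log base 4 of A_BASE/A_BB, rounded and clamped. The function name equivalent_depth and the max_depth limit are illustrative assumptions.

```python
import math

def equivalent_depth(area_bb: float, area_base: float, max_depth: int = 16) -> int:
    """Return the quadtree depth whose node area (1/4)**D * area_base most
    closely matches the bounding-box area, computed by rounding
    log4(area_base / area_bb) and clamping the result to [0, max_depth]."""
    if area_bb >= area_base:
        return 0
    depth = round(math.log(area_base / area_bb, 4))
    return max(0, min(depth, max_depth))
```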
• Note that the equivalent quadtree depth of the bounding box (DBB) determined in step 1716 is used below in order to efficiently determine the ratios of the area of the bounding box (ABB) to areas of the relevant history objects (AHO). However, in an alternative embodiment, the ratios of the area of the bounding box (ABB) to the areas of the relevant history objects (AHO) may be otherwise computed, in which case step 1716 would not be needed.
• At this point, the process proceeds to FIG. 15B where the history manager 56 gets the list for the next output time band of the time window for the historical request (step 1718). The history manager 56 then gets the next history object in the list for the output time band (step 1720). Next, the history manager 56 sets a relevancy weight for the history object, where the relevancy weight is indicative of a relevancy of the history object to the bounding box (step 1722). For instance, a history object includes anonymized user profile data for a corresponding geographic area. If that geographic area is within or significantly overlaps the bounding box, then the history object will have a high relevancy weight. However, if the geographic area only overlaps the bounding box slightly, then the history object will have a low relevancy weight. In this embodiment, the relevancy weight for the history object is set to an approximate ratio of the area of the bounding box (ABB) to an area of the history object (AHO) computed based on a difference between the quadtree depth of the history object (DHO) and the equivalent quadtree depth of the bounding box (DBB). The quadtree depth of the history object (DHO) is stored in the history object. More specifically, in one embodiment, the relevancy weight of the history object is set according to the following:
• $$\mathrm{relevancy} = \frac{A_{BB}}{A_{HO}} \approx \left(\frac{1}{4}\right)^{D_{HO}-D_{BB}} \quad \text{for } D_{HO} > D_{BB}, \qquad \mathrm{relevancy} = 1 \quad \text{for } D_{HO} \le D_{BB}.$$
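• For concreteness, this piecewise weight can be sketched as follows. The function name relevancy_weight and the integer depth arguments are illustrative assumptions; the sketch simply evaluates the expression above.

```python
def relevancy_weight(depth_ho: int, depth_bb: int) -> float:
    """Relevancy weight of a history object relative to the bounding box,
    per the expression above: (1/4)**(D_HO - D_BB) when the history object's
    quadtree depth exceeds the equivalent depth of the bounding box, else 1."""
    if depth_ho > depth_bb:
        return 0.25 ** (depth_ho - depth_bb)
    return 1.0
```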
  • Next, the history manager 56 generates an aggregate profile for the history object using the user profile of the requesting user, which for this example is the user 20-1, or a select subset thereof (step 1724). Note that the requesting user 20-1 may be enabled to select a subset of his user profile to be compared to the user profiles of the anonymous user records in the history objects by, for example, selecting one or more desired profile categories. In order to generate the aggregate profile for the history object, the history manager 56 compares the user profile of the user 20-1, or the select subset thereof, to the user profiles of the anonymous user records stored in the history object. The resulting aggregate profile for the history object includes a number of user matches and a total number of users. In the embodiment where user profiles include lists of keywords for a number of profile categories, the number of user matches is the number of anonymous user records in the history object having user profiles that include at least one keyword that matches at least one keyword in the user profile of the user 20-1 or at least one keyword in the select subset of the user profile of the user 20-1. The total number of users is the total number of anonymous user records in the history object. In addition or alternatively, the aggregate profile for the history object may include a list of keywords from the user profile of the user 20-1 or the select subset of the user profile of the user 20-1 having at least one user match. Still further, the aggregate profile for the history object may include the number of user matches for each of the keywords from the user profile of the user 20-1 or the select subset of the user profile of the user 20-1 having at least one user match.
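• The per-history-object aggregate profile described above can be sketched as a keyword comparison over the anonymous user records, as below. The record layout (lists of profile categories with keyword lists) and the function name aggregate_for_history_object are illustrative assumptions.

```python
def aggregate_for_history_object(anonymized_records: list, requester_keywords: set) -> dict:
    """Compare the requesting user's keywords (or a selected subset of them)
    against the anonymous user records in a history object and return the
    aggregate profile described above: the number of user matches, the total
    number of users, and per-keyword match counts."""
    user_matches = 0
    keyword_matches: dict = {}
    for record in anonymized_records:
        record_keywords = {kw
                           for category in record["profile_categories"]
                           for kw in category["keywords"]}
        common = requester_keywords & record_keywords
        if common:
            user_matches += 1
            for kw in common:
                keyword_matches[kw] = keyword_matches.get(kw, 0) + 1
    return {
        "user_matches": user_matches,
        "total_users": len(anonymized_records),
        "keyword_matches": keyword_matches,
    }
```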
  • The history manager 56 then determines whether there are more history objects in the list for the output time band (step 1726). If so, the process returns to step 1720 and is repeated until all of the history objects in the list for the output time band have been processed. Once all of the history objects in the list for the output time band have been processed, the history manager 56 combines the aggregate profiles of the history objects in the output time band to provide a combined aggregate profile for the output time band. More specifically, in this embodiment, the history manager 56 computes a weighted average of the aggregate profiles for the history objects in the output time band using the relevancy weights of the history objects (step 1728). In one embodiment, the aggregate profile of each of the history objects includes the number of user matches for the history object and the total number of users for the history object. In this embodiment, the weighted average of the aggregate profiles of the history objects in the output time band (i.e., the average aggregate profile for the output time band) includes the weighted average of the number of user matches for all of the history objects in the output time band, which may be computed as:
• $$\mathrm{user\_matches}_{AVG} = \frac{\sum_{i=1}^{n} \left( \mathrm{relevancy}_i \cdot \mathrm{number\_of\_user\_matches}_i \right)}{\sum_{i=1}^{n} \mathrm{relevancy}_i},$$
  • where relevancyi is the relevancy weight computed in step 1722 for the i-th history object, number_of_user_matchesi is the number of user matches from the aggregate profile of the i-th history object, and n is the number of history objects in the list for the output time band. In a similar manner, in this embodiment, the average aggregate profile for the output time band includes the weighted average of the total number of users for all of the history objects in the output time band, which may be computed as:
• $$\mathrm{total\_users}_{AVG} = \frac{\sum_{i=1}^{n} \left( \mathrm{relevancy}_i \cdot \mathrm{total\_users}_i \right)}{\sum_{i=1}^{n} \mathrm{relevancy}_i},$$
  • where relevancyi is the relevancy weight computed in step 1722 for the i-th history object, total_usersi is the total number of users from the aggregate profile of the i-th history object, and n is the number of history objects in the list for the output time band. In addition or alternatively, the average aggregate profile for the output time band may include the weighted average of the ratio of user matches to total users for all of the history objects in the output time band, which may be computed as:
• $$\left(\frac{\mathrm{user\_matches}}{\mathrm{total\_users}}\right)_{AVG} = \frac{\sum_{i=1}^{n} \left( \mathrm{relevancy}_i \cdot \frac{\mathrm{number\_of\_user\_matches}_i}{\mathrm{total\_users}_i} \right)}{\sum_{i=1}^{n} \mathrm{relevancy}_i},$$
  • where relevancyi is the relevancy weight computed in step 1722 for the i-th history object, number_of_user_matchesi is the number of user matches from the aggregate profile of the i-th history object, total_usersi is the total number of users from the aggregate profile of the i-th history object, and n is the number of history objects in the list for the output time band.
  • In addition or alternatively, if the aggregate profiles for the history objects in the output time band include the number of user matches for each keyword in the user profile of the user 20-1, or the select subset thereof, having at least one user match, the average aggregate profile for the output time band may include a weighted average of the number of user matches for each of those keywords, which may be computed as:
• $$\mathrm{user\_matches}_{KEYWORD\_j,\,AVG} = \frac{\sum_{i=1}^{n} \left( \mathrm{relevancy}_i \cdot \mathrm{number\_of\_user\_matches}_{KEYWORD\_j,\,i} \right)}{\sum_{i=1}^{n} \mathrm{relevancy}_i},$$
  • where relevancyi is the relevancy weight computed in step 1722 for the i-th history object, number_of_user_matchesKEYWORD j,i is the number of user matches for the j-th keyword for the i-th history object, and n is the number of history objects in the list for the output time band. In addition or alternatively, the average aggregate profile for the output time band may include the weighted average of the ratio of the user matches to total users for each keyword, which may be computed as:
• $$\left(\frac{\mathrm{user\_matches}}{\mathrm{total\_users}}\right)_{KEYWORD\_j,\,AVG} = \frac{\sum_{i=1}^{n} \left( \mathrm{relevancy}_i \cdot \frac{\mathrm{number\_of\_user\_matches}_{KEYWORD\_j,\,i}}{\mathrm{total\_users}_i} \right)}{\sum_{i=1}^{n} \mathrm{relevancy}_i},$$
  • where relevancyi is the relevancy weight computed in step 1722 for the i-th history object, number_of_user_matchesKEYWORD j,i is the number of user matches for the j-th keyword for the i-th history object, total_usersi is the total number of users from the aggregate profile of the i-th history object, and n is the number of history objects in the list for the output time band.
  • Next, the history manager 56 determines whether there are more output time bands to process (step 1730). If so, the process returns to step 1718 and is repeated until the lists for all output time bands have been processed. Once all of the output time bands have been processed, the history manager 56 outputs the combined aggregate profiles for the output time bands. More specifically, in this embodiment, the history manager 56 outputs the weighted average aggregate profiles computed in step 1728 for the output time bands as the historical aggregate profile data to be returned to the mobile device 18-1 (step 1732).
  • FIG. 16 is an exemplary Graphical User Interface (GUI) 108 that may be provided by the MAP application 32-1 of the mobile device 18-1 (FIG. 1) in order to present historical aggregate profile data in a time context according to one embodiment of this disclosure. In operation, the MAP application 32-1 issues a historical request for a POI 110 in the manner described above. In response, the MAP server 12 uses the process of FIGS. 15A and 15B to generate historical aggregate profile data in response to the historical request in the time context. More specifically, the historical aggregate profile data includes an average aggregate profile for each of a number of output time bands within a time window established for the historical request. In this example, the time window is a four week period extending from the week of July 5th to the week of July 26th.
  • Using the average aggregate profiles for the output time bands included in the historical aggregate profile data, the MAP application 32-1 generates a timeline 112 for the time window of the historical request. The timeline 112 is a graphical illustration of the average aggregate profiles for the output time bands. For example, if the average aggregate profile for each of the output time bands includes a weighted average of the number of user matches and a weighted average of the number of total users for the output time band, the timeline 112 may be indicative of the ratio of the weighted average of user matches to the weighted average of total users for each of the output time bands. In this example, the output time bands having a ratio of weighted average of user matches to weighted average of total users that is less than 0.25 are represented as having a low similarity, the output time bands having a ratio of weighted average of user matches to weighted average of total users that is in the range of 0.25-0.75 are represented as having varying degrees of intermediate similarity, and the output time bands having a ratio of weighted average of user matches to weighted average of total users that is greater than 0.75 are represented as having a high similarity. Note that output time bands for which there are no history objects may be grayed-out or otherwise indicated.
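• The low/intermediate/high binning used by the timeline 112 in this example can be sketched as a simple threshold mapping; the function name similarity_level and the "no data" label are illustrative assumptions.

```python
def similarity_level(user_matches_avg: float, total_users_avg: float) -> str:
    """Map an output time band's ratio of weighted-average user matches to
    weighted-average total users onto the low/intermediate/high similarity
    levels used by the timeline in this example (thresholds 0.25 and 0.75)."""
    if total_users_avg == 0:
        return "no data"          # e.g., a grayed-out time band
    ratio = user_matches_avg / total_users_avg
    if ratio < 0.25:
        return "low"
    if ratio > 0.75:
        return "high"
    return "intermediate"
```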
  • In addition, in this example, the GUI 108 also includes a second timeline 114 that zooms in on an area of the timeline 112 that includes the most activity or that includes the greatest number of output time bands having a high or medium similarity. Lastly, in this example, the GUI 108 includes an aggregate profile 116 for a crowd that is currently at the POI. Note that crowds and aggregate profiles for the crowds are discussed below in detail.
  • FIGS. 17A and 17B illustrate a flow chart of a process for generating historical aggregate profile data in a geographic context according to one embodiment of the present disclosure. First, upon receiving a historical request, the history manager 56 establishes a bounding box for the historical request based on the POI or the AOI for the historical request (step 1800). Note that while a bounding box is used in this example, other geographic shapes may be used to define a bounding region for the historical request (e.g., a bounding circle). In this embodiment, the historical request is from a mobile device of a requesting user, which in this example is the user 20-1. If the historical request is for a POI, the bounding box is a geographic region corresponding to or surrounding the POI. For example, the bounding box may be a square geographic region of a predefined size centered on the POI. If the historical request is for an AOI, the bounding box is the AOI. In addition to establishing the bounding box, the history manager 56 establishes a time window for the historical request (step 1802). For example, if the historical request is for the last week and the current date and time are Sep. 17, 2009 at 10:00 pm, the history manager 56 may generate the time window as Sep. 10, 2009 at 10:00 pm through Sep. 17, 2009 at 10:00 pm.
• Next, the history manager 56 obtains history objects relevant to the bounding box and the time window of the historical request from the datastore 64 of the MAP server 12 (step 1804). The relevant history objects are history objects recorded for time periods within or intersecting the time window and for locations, or geographic areas, within or intersecting the bounding box for the historical request. The history manager 56 then sorts the relevant history objects into base quadtree regions. More specifically, in this embodiment, the history manager 56 creates an empty list for each relevant base quadtree region (step 1806). A relevant base quadtree region is a base quadtree region within which all or at least a portion of the bounding box is located. Therefore, for example, if a bounding box is located at the intersection of four base quadtree regions such that the bounding box overlaps a portion of each of the four base quadtree regions, then all four of those base quadtree regions would be identified as relevant base quadtree regions. In contrast, if the bounding box is contained within a single base quadtree region, then that base quadtree region is the only relevant base quadtree region.
  • The history manager 56 then gets the next history object from the history objects identified in step 1804 as being relevant to the historical request (step 1808) and adds that history object to the list for the appropriate base quadtree region (step 1810). The history manager 56 then determines whether there are more relevant history objects to sort (step 1812). If so, the process returns to step 1808 and is repeated until all of the relevant history objects have been sorted into the appropriate base quadtree regions.
• Once sorting is complete, the process proceeds to FIG. 17B. The following steps generally operate to divide each base quadtree region into a grid, where the size of each grid location is set to the smallest history object size of all the history objects sorted into the list for that base quadtree region. Using the history objects in the list for the base quadtree region, aggregate profiles are generated for each of the grid locations covered by those history objects. Then, a combined aggregate profile is generated for each grid location based on the aggregate profiles generated from the corresponding history objects.
  • More specifically, the history manager 56 gets the list for the next base quadtree region (step 1814). The history manager 56 then gets the next history object in the list for the base quadtree region (step 1816). Next, the history manager 56 creates an aggregate profile for the history object using the user profile of the requesting user, which in this example is the user 20-1, or a select subset of the user profile of the requesting user (step 1818). Note that the user 20-1 may be enabled to select a subset of his user profile to be used for aggregate profile creation by, for example, selecting one or more profile categories. In order to generate the aggregate profile for the history object, the history manager 56 compares the user profile of the user 20-1, or the select subset thereof, to the user profiles of the anonymous user records stored in the history object. The resulting aggregate profile for the history object includes a number of user matches and a total number of users. In the embodiment where user profiles include lists of keywords for a number of profile categories, the number of user matches is the number of anonymous user records in the history object having user profiles that include at least one keyword that matches at least one keyword in the user profile of the user 20-1 or at least one keyword in the select subset of the user profile of the user 20-1. The total number of users is the total number of anonymous user records in the history object.
  • Next, the history manager 56 determines whether a size of the history object is greater than the smallest history object size in the list of history objects for the base quadtree region (step 1820). If not, the aggregate profile for the history object is added to an output list for the corresponding grid location for the base quadtree region (step 1822) and the process proceeds to step 1830. If the size of the history object is greater than the smallest history object size, the history manager 56 splits the geographic area, or location, of the history object into a number of grid locations each of the smallest history object size of all the history objects in the list for the base quadtree region (step 1824). The history manager 56 then divides the aggregate profile of the history object evenly over the grid locations for the history object (step 1826) and adds resulting aggregate profiles for the grid locations to output lists for those grid locations (step 1828). For example, if the geographic area of the history object is split into four grid locations and the aggregate profile for the history object includes eight user matches and sixteen total users, then the aggregate profile is divided evenly over the four grid locations such that each of the four grid locations is given an aggregate profile of two user matches and four total users.
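• The even division of a history object's aggregate profile over its grid locations (steps 1824 through 1828) can be sketched as follows. The function name split_aggregate_over_grid and the dictionary keys are illustrative assumptions; fractional results are kept rather than rounded.

```python
def split_aggregate_over_grid(aggregate: dict, num_grid_locations: int) -> list:
    """Divide a history object's aggregate profile evenly over the grid
    locations that its geographic area was split into. For example, 8 user
    matches and 16 total users split over 4 grid locations yields 2 user
    matches and 4 total users per grid location."""
    return [{"user_matches": aggregate["user_matches"] / num_grid_locations,
             "total_users": aggregate["total_users"] / num_grid_locations}
            for _ in range(num_grid_locations)]
```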
  • The history manager 56 then determines whether there are more history objects to process for the base quadtree region (step 1830). If so, the process returns to step 1816 and is repeated until all of the history objects for the base quadtree region are processed. At that point, for each grid location in the base quadtree region having at least one aggregate profile in its output list, the history manager 56 combines the aggregate profiles in the output list for the grid location to provide a combined aggregate profile for the grid location. More specifically, in this embodiment, the history manager 56 computes average aggregate profiles for the grid locations for the base quadtree region (step 1832). In one embodiment, for each grid location, the average aggregate profile for the grid location includes an average number of user matches and an average total number of users for all of the aggregate profiles in the output list for that grid location.
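  • Steps 1824 through 1832 can likewise be sketched in a few lines. In the following illustrative Python fragment (the helper names and the tuple-keyed grid are assumptions, not the disclosed implementation), a history object's aggregate profile is divided evenly over the grid locations it covers, and the profiles collected for each grid location are then averaged.

```python
# Split a history object's aggregate profile evenly over its grid locations
# (e.g., 8 matches / 16 users over 4 cells -> 2 matches / 4 users per cell),
# then average the profiles accumulated in each grid location's output list.
from collections import defaultdict

def split_evenly(aggregate, grid_locations):
    n = len(grid_locations)
    share = {"user_matches": aggregate["user_matches"] / n,
             "total_users": aggregate["total_users"] / n}
    return {cell: dict(share) for cell in grid_locations}

def average_per_grid(output_lists):
    combined = {}
    for cell, profiles in output_lists.items():
        combined[cell] = {
            "avg_user_matches": sum(p["user_matches"] for p in profiles) / len(profiles),
            "avg_total_users": sum(p["total_users"] for p in profiles) / len(profiles),
        }
    return combined

output_lists = defaultdict(list)
for cell, profile in split_evenly({"user_matches": 8, "total_users": 16},
                                  [(0, 0), (0, 1), (1, 0), (1, 1)]).items():
    output_lists[cell].append(profile)
print(average_per_grid(output_lists))  # 2 matches / 4 users per cell
```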
  • Next, the history manager 56 determines whether there are more relevant base quadtree regions to process (step 1834). If so, the process returns to step 1814 and is repeated until all of the relevant base quadtree regions have been processed. At that point, the history manager 56 outputs the grid locations and the average aggregate profiles for the grid locations in each of the relevant base quadtree regions (step 1836). The grid locations and their corresponding average aggregate profiles form the historical aggregate profile data that is returned to the mobile device 18-1 of the user 20-1 in response to the historical request.
  • FIG. 18 illustrates an exemplary GUI 118 that may be provided by the MAP application 32-1 of the mobile device 18-1 (FIG. 1) to present historical aggregate profile data in the geographic context to the user 20-1 in response to a historical request. As illustrated, the GUI 118 includes a map 120 including a grid 122. The grid 122 provides graphical information indicative of aggregate profiles for grid locations returned by the MAP server 12 in response to a historical request. The GUI 118 also includes buttons 124 and 126 enabling the user 20-1 to zoom in or zoom out on the map 120, buttons 128 and 130 enabling the user 20-1 to toggle between the traditional map view as shown or a satellite map view, buttons 132 and 134 enabling the user 20-1 to switch between historical mode and a current mode (i.e., a view of current crowd data as discussed below in detail), and buttons 136 and 138 enabling the user 20-1 to hide or show POIs on the map 120.
  • It should be noted that while the aggregate profiles in FIGS. 15A through 18 are generated based on the user profile of the user 20-1 or a select subset of the user profile of the user 20-1, the aggregate profiles may alternatively be generated based on a target user profile defined or otherwise specified by the user 20-1. For example, the user 20-1 may define a target profile for a type of person with which the user 20-1 would like to interact. Then, by making a historical request with the target profile, the user 20-1 can learn whether people matching the target profile are historically located at a POI or an AOI.
  • FIG. 19 illustrates the operation of the system 10 of FIG. 1 wherein the subscriber device 22 is enabled to request and receive historical aggregate profile data from the MAP server 12 according to one embodiment of the present disclosure. Note that, in a similar manner, the third-party service 26 may send historical requests to the MAP server 12. As illustrated, in this embodiment, the subscriber device 22 sends a historical request to the MAP server 12 (step 1900). The subscriber device 22 sends the historical request to the MAP server 12 via the web browser 38. In one embodiment, the historical request identifies either a POI or an AOI and a time window. The historical request may be made in response to user input from the subscriber 24 of the subscriber device 22 or made automatically in response to an event such as, for example, navigation to a website associated with a POI (e.g., navigation to a website of a restaurant).
  • Upon receiving the historical request, the MAP server 12 processes the historical request (step 1902). More specifically, as discussed above, the historical request is processed by the history manager 56 of the MAP server 12. First, the history manager 56 obtains history objects that are relevant to the historical request from the datastore 64 of the MAP server 12. The relevant history objects are those relevant to the POI or the AOI and the time window for the historical request. The history manager 56 then processes the relevant history objects to provide historical aggregate profile data for the POI or the AOI in a time context and/or a geographic context. In this embodiment, the historical aggregate profile data is based on comparisons of the user profiles of the anonymous user records in the relevant history objects to one another. In another embodiment, the aggregate profile data is based on comparisons of the user profiles of the anonymous user records in the relevant history objects and a target user profile.
  • Once the MAP server 12 has processed the historical request, the MAP server 12 returns the resulting historical aggregate profile data to the subscriber device 22 (step 1904). The historical aggregate profile data may be in the time context or the geographic context. In this embodiment where the historical aggregate profile data is to be presented via the web browser 38 of the subscriber device 22, the MAP server 12 formats the historical aggregate profile data in a suitable format before sending the historical aggregate profile data to the web browser 38 of the subscriber device 22. Upon receiving the historical aggregate profile data, the web browser 38 of the subscriber device 22 presents the historical aggregate profile data to the subscriber 24 (step 1906).
  • FIGS. 20A and 20B illustrate a process for generating historical aggregate profile data in a time context in response to a historical request from the subscriber 24 at the subscriber device 22 according to one embodiment of the present disclosure. The process of FIGS. 20A and 20B is substantially the same as that described above with respect to FIGS. 15A and 15B. More specifically, steps 2000 through 2022 are substantially the same as steps 1700 through 1722 of FIGS. 15A and 15B. Likewise, steps 2026 through 2032 are substantially the same as steps 1726 through 1732 of FIG. 15B. However, step 2024 of FIG. 20B is different from step 1724 of FIG. 15B with respect to the manner in which the aggregate profiles for the relevant history objects are computed.
  • More specifically, in this embodiment, since the historical request is from the subscriber 24, the aggregate profile for the history object is generated by comparing the user profiles of the anonymous user records in the history object to one another. In this embodiment, the aggregate profile for the history object includes an aggregate list of keywords from the user profiles of the anonymous user records, the number of occurrences of each of those keywords in the user profiles of the anonymous user records, and the total number of anonymous user records in the history object. As such, in step 2028, the weighted average of the aggregate profiles for the history objects in the output time band may provide an average aggregate profile including, for each keyword occurring in the aggregate profile of at least one of the history objects, a weighted average of the number of occurrences of the keyword. In addition, the average aggregate profile may include a weighted average of the total number of anonymous user records in the history objects. In addition or alternatively, the average aggregate profile may include, for each keyword, a weighted average of the ratio of the number of occurrences of the keyword to the total number of anonymous user records.
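  • As a rough illustration of this subscriber-mode aggregation, the sketch below counts keyword occurrences across the anonymous user records of each history object and then forms a weighted average over the history objects in a time band. The data layout, helper names, and example weights are assumptions for illustration only; the weighting itself is whatever the history manager 56 applies.

```python
# Keyword-occurrence aggregation for subscriber requests (illustrative only).
from collections import Counter

def aggregate_history_object(anonymous_records):
    counts = Counter()
    for record in anonymous_records:
        for keywords in record["profile"].values():
            counts.update(keywords)
    return {"keyword_counts": counts, "total_users": len(anonymous_records)}

def weighted_average_profile(aggregates, weights):
    total_w = sum(weights)
    avg_counts = Counter()
    for agg, w in zip(aggregates, weights):
        for kw, count in agg["keyword_counts"].items():
            avg_counts[kw] += w * count / total_w
    avg_total = sum(w * a["total_users"] for a, w in zip(aggregates, weights)) / total_w
    return {"avg_keyword_counts": dict(avg_counts), "avg_total_users": avg_total}

obj_a = aggregate_history_object([{"profile": {"music": ["jazz"]}},
                                  {"profile": {"music": ["jazz", "blues"]}}])
obj_b = aggregate_history_object([{"profile": {"music": ["blues"]}}])
print(weighted_average_profile([obj_a, obj_b], weights=[0.75, 0.25]))
```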
  • FIGS. 21A and 21B illustrate a process for generating historical aggregate profile data in a geographic context in response to a historical request from the subscriber 24 at the subscriber device 22 according to one embodiment of the present disclosure. The process of FIGS. 21A and 21B is substantially the same as that described above with respect to FIGS. 17A and 17B. More specifically, steps 2100 through 2116 and 2120 through 2136 are substantially the same as steps 1800 through 1816 and 1820 through 1836 of FIGS. 17A and 17B. However, step 2118 of FIG. 21B is different from step 1818 of FIG. 17B with respect to the manner in which the aggregate profiles for the history objects are computed.
  • More specifically, in this embodiment, since the historical request is from the subscriber 24, the aggregate profile for the history object is generated by comparing the user profiles of the anonymous user records in the history object to one another. In this embodiment, the aggregate profile for the history object includes an aggregate list of keywords from the user profiles of the anonymous user records, the number of occurrences of each of those keywords in the user profiles of the anonymous user records, and the total number of anonymous user records in the history object. As such, in step 2132, the weighted average of the aggregate profiles for each of the grid locations may provide an average aggregate profile including, for each keyword, a weighted average of the number of occurrences of the keyword. In addition, the average aggregate profile for each grid location may include a weighted average of the total number of anonymous user records. In addition or alternatively, the average aggregate profile for each grid location may include, for each keyword, a weighted average of the ratio of the number of occurrences of the keyword to the total number of anonymous user records.
  • FIG. 22 begins a discussion of the operation of the crowd analyzer 58 to form crowds of users according to one embodiment of the present disclosure. Specifically, FIG. 22 is a flow chart for a spatial crowd formation process according to one embodiment of the present disclosure. Note that, in one embodiment, this process is performed in response to a request for crowd data for a POI or an AOI. In another embodiment, this process may be performed proactively by the crowd analyzer 58 as, for example, a background process.
  • First, the crowd analyzer 58 establishes a bounding box for the crowd formation process (step 2200). Note that while a bounding box is used in this example, other geographic shapes may be used to define a bounding region for the crowd formation process (e.g., a bounding circle). In one embodiment, if crowd formation is performed in response to a specific request, the bounding box is established based on the POI or the AOI of the request. If the request is for a POI, then the bounding box is a geographic area of a predetermined size centered at the POI. If the request is for an AOI, the bounding box is the AOI. Alternatively, if the crowd formation process is performed proactively, the bounding box is a bounding box of a predefined size.
  • The crowd analyzer 58 then creates a crowd for each individual user in the bounding box (step 2202). More specifically, the crowd analyzer 58 queries the datastore 64 of the MAP server 12 to identify users currently located within the bounding box. Then, a crowd of one user is created for each user currently located within the bounding box. Next, the crowd analyzer 58 determines the two closest crowds in the bounding box (step 2204) and determines a distance between the two crowds (step 2206). The distance between the two crowds is a distance between crowd centers of the two crowds. Note that the crowd center of a crowd of one is the current location of the user in the crowd. The crowd analyzer 58 then determines whether the distance between the two crowds is less than an optimal inclusion distance (step 2208). In this embodiment, the optimal inclusion distance is a predefined static distance. If the distance between the two crowds is less than the optimal inclusion distance, the crowd analyzer 58 combines the two crowds (step 2210) and computes a new crowd center for the resulting crowd (step 2212). The crowd center may be computed based on the current locations of the users in the crowd using a center of mass algorithm. At this point the process returns to step 2204 and is repeated until the distance between the two closest crowds is not less than the optimal inclusion distance. At that point, the crowd analyzer 58 discards any crowds with less than three users (step 2214). Note that throughout this disclosure crowds are only maintained if the crowds include three or more users. However, while three users is the preferred minimum number of users in a crowd, the present disclosure is not limited thereto. The minimum number of users in a crowd may be defined as any number greater than or equal to two users.
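  • A condensed sketch of this crowd formation loop is shown below. The static optimal inclusion distance, the coordinate representation, and the use of math.dist are illustrative assumptions; only the overall merge-until-no-pair-is-close-enough behavior follows the process of FIG. 22.

```python
# Greedy pairwise merging of crowds (illustrative sketch of FIG. 22).
import math

OPTIMAL_INCLUSION_DIST = 10.0   # predefined static distance (assumed meters)
MIN_CROWD_SIZE = 3              # crowds with fewer users are discarded

def center(points):
    """Center of mass of the users' current locations."""
    return (sum(x for x, _ in points) / len(points),
            sum(y for _, y in points) / len(points))

def form_crowds(user_locations):
    crowds = [[loc] for loc in user_locations]          # one crowd per user
    while len(crowds) > 1:
        # find the two crowds whose crowd centers are closest together
        dist, i, j = min((math.dist(center(a), center(b)), i, j)
                         for i, a in enumerate(crowds)
                         for j, b in enumerate(crowds) if i < j)
        if dist >= OPTIMAL_INCLUSION_DIST:
            break                                       # no pair close enough to merge
        crowds[i] = crowds[i] + crowds[j]               # combine the two closest crowds
        del crowds[j]
    return [c for c in crowds if len(c) >= MIN_CROWD_SIZE]

print(form_crowds([(0, 0), (2, 1), (1, 3), (50, 50), (90, 90)]))  # one crowd of three users
```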
  • FIGS. 23A through 23D graphically illustrate the crowd formation process of FIG. 22 for an exemplary bounding box 139. In FIGS. 23A through 23D, crowds are noted by dashed circles, and the crowd centers are noted by cross-hairs (+). As illustrated in FIG. 23A, initially, the crowd analyzer 58 creates crowds 140 through 148 for the users in the geographic area, where, at this point, each of the crowds 140 through 148 includes one user. The current locations of the users are the crowd centers of the crowds 140 through 148. Next, the crowd analyzer 58 determines the two closest crowds and a distance between the two closest crowds. In this example, at this point, the two closest crowds are crowds 142 and 144, and the distance between the two closest crowds 142 and 144 is less than the optimal inclusion distance. As such, the two closest crowds 142 and 144 are combined by merging crowd 144 into crowd 142, and a new crowd center (+) is computed for the crowd 142, as illustrated in FIG. 23B. Next, the crowd analyzer 58 again determines the two closest crowds, which are now crowds 140 and 142. The crowd analyzer 58 then determines a distance between the crowds 140 and 142. Since the distance is less than the optimal inclusion distance, the crowd analyzer 58 combines the two crowds 140 and 142 by merging the crowd 140 into the crowd 142, and a new crowd center (+) is computed for the crowd 142, as illustrated in FIG. 23C. At this point, there are no more crowds separated by less than the optimal inclusion distance. As such, the crowd analyzer 58 discards crowds having less than three users, which in this example are crowds 146 and 148. As a result, at the end of the crowd formation process, the crowd 142 has been formed with three users, as illustrated in FIG. 23D.
  • FIGS. 24A through 24D illustrate a flow chart for a spatial crowd formation process according to another embodiment of the present disclosure. In this embodiment, the spatial crowd formation process is triggered in response to receiving a location update for one of the users 20-1 through 20-N and is preferably repeated for each location update received for the users 20-1 through 20-N. As such, first, the crowd analyzer 58 receives a location update, or a new location, for a user (step 2300). Assume that, for this example, the location update is received for the user 20-1. In response, the crowd analyzer 58 retrieves an old location of the user 20-1, if any (step 2302). The old location is the current location of the user 20-1 prior to receiving the new location. The crowd analyzer 58 then creates a new bounding box of a predetermined size centered at the new location of the user 20-1 (step 2304) and an old bounding box of a predetermined size centered at the old location of the user 20-1, if any (step 2306). The predetermined size of the new and old bounding boxes may be any desired size. As one example, the predetermined size of the new and old bounding boxes is 40 meters by 40 meters. Note that if the user 20-1 does not have an old location (i.e., the location received in step 2300 is the first location received for the user 20-1), then the old bounding box is essentially null. Also note that while bounding “boxes” are used in this example, the bounding areas may be of any desired shape.
  • Next, the crowd analyzer 58 determines whether the new and old bounding boxes overlap (step 2308). If so, the crowd analyzer 58 creates a bounding box encompassing the new and old bounding boxes (step 2310). For example, if the new and old bounding boxes are 40×40 meter regions and a 1×1 meter square at the northeast corner of the new bounding box overlaps a 1×1 meter square at the southwest corner of the old bounding box, the crowd analyzer 58 may create a 79×79 meter square bounding box encompassing both the new and old bounding boxes.
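  • For example, the encompassing bounding box can be computed as the axis-aligned union of the two boxes, as in the following sketch (coordinates in meters; the box tuples are assumed for illustration):

```python
# Axis-aligned box that encompasses both the new and old bounding boxes.
def enclosing_box(box_a, box_b):
    # boxes are (min_x, min_y, max_x, max_y)
    return (min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
            max(box_a[2], box_b[2]), max(box_a[3], box_b[3]))

old_box = (0, 0, 40, 40)     # 40 m x 40 m old bounding box
new_box = (39, 39, 79, 79)   # 40 m x 40 m new bounding box overlapping its corner
print(enclosing_box(old_box, new_box))  # (0, 0, 79, 79) -> the 79 m x 79 m box
```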
  • The crowd analyzer 58 then determines the individual users and crowds relevant to the bounding box created in step 2310 (step 2312). The crowds relevant to the bounding box are crowds that are within or overlap the bounding box (e.g., have at least one user located within the bounding box). The individual users relevant to the bounding box are users that are currently located within the bounding box and not already part of a crowd. Next, the crowd analyzer 58 computes an optimal inclusion distance for individual users based on user density within the bounding box (step 2314). More specifically, in one embodiment, the optimal inclusion distance for individuals, which is also referred to herein as an initial optimal inclusion distance, is set according to the following equation:
  • $\text{initial\_optimal\_inclusion\_dist} = a \cdot \sqrt{\dfrac{A_{\text{BoundingBox}}}{\text{number\_of\_users}}}$,
  • where a is a number between 0 and 1, A_BoundingBox is the area of the bounding box, and number_of_users is the total number of users in the bounding box. The total number of users in the bounding box includes both individual users that are not already in a crowd and users that are already in a crowd. In one embodiment, a is ⅔.
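  • As a numeric illustration, the sketch below transcribes this formula directly (using the square-root form of the expression above and a = 2/3):

```python
# Initial optimal inclusion distance for crowds of one user.
import math

def initial_optimal_inclusion_dist(bounding_box_area, number_of_users, a=2/3):
    # the distance shrinks as user density within the bounding box grows
    return a * math.sqrt(bounding_box_area / number_of_users)

# e.g., a 40 m x 40 m bounding box containing 16 users (individual or in crowds)
print(initial_optimal_inclusion_dist(40 * 40, 16))  # about 6.67 m
```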
  • The crowd analyzer 58 then creates a crowd for each individual user within the bounding box that is not already included in a crowd and sets the optimal inclusion distance for the crowds to the initial optimal inclusion distance (step 2316). At this point, the process proceeds to FIG. 24B where the crowd analyzer 58 analyzes the crowds relevant to the bounding box to determine whether any of the crowd members (i.e., users in the crowds) violate the optimal inclusion distance of their crowds (step 2318). Any crowd member that violates the optimal inclusion distance of his or her crowd is then removed from that crowd (step 2320). The crowd analyzer 58 then creates a crowd of one user for each of the users removed from their crowds in step 2320 and sets the optimal inclusion distance for the newly created crowds to the initial optimal inclusion distance (step 2322).
  • Next, the crowd analyzer 58 determines the two closest crowds for the bounding box (step 2324) and a distance between the two closest crowds (step 2326). The distance between the two closest crowds is the distance between the crowd centers of the two closest crowds. The crowd analyzer 58 then determines whether the distance between the two closest crowds is less than the optimal inclusion distance of a larger of the two closest crowds (step 2328). If the two closest crowds are of the same size (i.e., have the same number of users), then the optimal inclusion distance of either of the two closest crowds may be used. Alternatively, if the two closest crowds are of the same size, the optimal inclusion distances of both of the two closest crowds may be used such that the crowd analyzer 58 determines whether the distance between the two closest crowds is less than the optimal inclusion distances of both of the two closest crowds. As another alternative, if the two closest crowds are of the same size, the crowd analyzer 58 may compare the distance between the two closest crowds to an average of the optimal inclusion distances of the two closest crowds.
  • If the distance between the two closest crowds is less than the optimal inclusion distance, the two closest crowds are combined or merged (step 2330), and a new crowd center for the resulting crowd is computed (step 2332). Again, a center of mass algorithm may be used to compute the crowd center of a crowd. In addition, a new optimal inclusion distance for the resulting crowd is computed (step 2334). In one embodiment, the new optimal inclusion distance for the resulting crowd is computed as:
  • $\text{average} = \dfrac{1}{n+1} \cdot \left( \text{initial\_optimal\_inclusion\_dist} + \sum_{i=1}^{n} d_i \right), \qquad \text{optimal\_inclusion\_dist} = \text{average} + \sqrt{\dfrac{1}{n} \cdot \sum_{i=1}^{n} \left( d_i - \text{average} \right)^2}$,
  • where n is the number of users in the crowd and d_i is a distance between the i-th user and the crowd center. In other words, the new optimal inclusion distance is computed as the average of the initial optimal inclusion distance and the distances between the users in the crowd and the crowd center, plus one standard deviation.
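  • A direct transcription of this computation is sketched below; the member distances are example values only.

```python
# New optimal inclusion distance for a merged crowd: mean of the initial
# optimal inclusion distance and the member-to-center distances, plus one
# (population) standard deviation of the member distances.
import math

def new_optimal_inclusion_dist(initial_dist, member_distances):
    n = len(member_distances)
    average = (initial_dist + sum(member_distances)) / (n + 1)
    std_dev = math.sqrt(sum((d - average) ** 2 for d in member_distances) / n)
    return average + std_dev

print(new_optimal_inclusion_dist(6.67, [2.0, 4.5, 5.5]))  # roughly 6.3 m
```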
  • At this point, the crowd analyzer 58 determines whether a maximum number of iterations have been performed (step 2336). The maximum number of iterations is a predefined number that ensures that the crowd formation process does not loop over steps 2318 through 2334 indefinitely or more than a desired maximum number of times. If the maximum number of iterations has not been reached, the process returns to step 2318 and is repeated until either the distance between the two closest crowds is not less than the optimal inclusion distance of the larger crowd or the maximum number of iterations has been reached. At that point, the crowd analyzer 58 discards crowds with less than three users, or members (step 2338), and the process ends.
  • Returning to step 2308 in FIG. 24A, if the new and old bounding boxes do not overlap, the process proceeds to FIG. 24C and the bounding box to be processed is set to the old bounding box (step 2340). In general, the crowd analyzer 58 then processes the old bounding box in much the same manner as described above with respect to steps 2312 through 2338. More specifically, the crowd analyzer 58 determines the individual users and crowds relevant to the bounding box (step 2342). The crowds relevant to the bounding box are crowds that are within or overlap the bounding box (e.g., have at least one user located within the bounding box). The individual users relevant to the bounding box are users that are currently located within the bounding box and not already part of a crowd. Next, the crowd analyzer 58 computes an optimal inclusion distance for individual users based on user density within the bounding box (step 2344). More specifically, in one embodiment, the optimal inclusion distance for individuals, which is also referred to herein as an initial optimal inclusion distance, is set according to the following equation:
  • $\text{initial\_optimal\_inclusion\_dist} = a \cdot \sqrt{\dfrac{A_{\text{BoundingBox}}}{\text{number\_of\_users}}}$,
  • where a is a number between 0 and 1, A_BoundingBox is the area of the bounding box, and number_of_users is the total number of users in the bounding box. The total number of users in the bounding box includes both individual users that are not already in a crowd and users that are already in a crowd. In one embodiment, a is ⅔.
  • The crowd analyzer 58 then creates a crowd of one user for each individual user within the bounding box that is not already included in a crowd and sets the optimal inclusion distance for the crowds to the initial optimal inclusion distance (step 2346). At this point, the crowd analyzer 58 analyzes the crowds for the bounding box to determine whether any crowd members (i.e., users in the crowds) violate the optimal inclusion distance of their crowds (step 2348). Any crowd member that violates the optimal inclusion distance of his or her crowd is then removed from that crowd (step 2350). The crowd analyzer 58 then creates a crowd of one user for each of the users removed from their crowds in step 2350 and sets the optimal inclusion distance for the newly created crowds to the initial optimal inclusion distance (step 2352).
  • Next, the crowd analyzer 58 determines the two closest crowds in the bounding box (step 2354) and a distance between the two closest crowds (step 2356). The distance between the two closest crowds is the distance between the crowd centers of the two closest crowds. The crowd analyzer 58 then determines whether the distance between the two closest crowds is less than the optimal inclusion distance of a larger of the two closest crowds (step 2358). If the two closest crowds are of the same size (i.e., have the same number of users), then the optimal inclusion distance of either of the two closest crowds may be used. Alternatively, if the two closest crowds are of the same size, the optimal inclusion distances of both of the two closest crowds may be used such that the crowd analyzer 58 determines whether the distance between the two closest crowds is less than the optimal inclusion distances of both of the two closest crowds. As another alternative, if the two closest crowds are of the same size, the crowd analyzer 58 may compare the distance between the two closest crowds to an average of the optimal inclusion distances of the two closest crowds.
  • If the distance between the two closest crowds is less than the optimal inclusion distance, the two closest crowds are combined or merged (step 2360), and a new crowd center for the resulting crowd is computed (step 2362). Again, a center of mass algorithm may be used to compute the crowd center of a crowd. In addition, a new optimal inclusion distance for the resulting crowd is computed (step 2364). As discussed above, in one embodiment, the new optimal inclusion distance for the resulting crowd is computed as:
  • $\text{average} = \dfrac{1}{n+1} \cdot \left( \text{initial\_optimal\_inclusion\_dist} + \sum_{i=1}^{n} d_i \right), \qquad \text{optimal\_inclusion\_dist} = \text{average} + \sqrt{\dfrac{1}{n} \cdot \sum_{i=1}^{n} \left( d_i - \text{average} \right)^2}$,
  • where n is the number of users in the crowd and d_i is a distance between the i-th user and the crowd center. In other words, the new optimal inclusion distance is computed as the average of the initial optimal inclusion distance and the distances between the users in the crowd and the crowd center, plus one standard deviation.
  • At this point, the crowd analyzer 58 determines whether a maximum number of iterations have been performed (step 2366). If the maximum number of iterations has not been reached, the process returns to step 2348 and is repeated until either the distance between the two closest crowds is not less than the optimal inclusion distance of the larger crowd or the maximum number of iterations has been reached. At that point, the crowd analyzer 58 discards crowds with less than three users, or members (step 2368). The crowd analyzer 58 then determines whether the crowd formation process for the new and old bounding boxes is done (step 2370). In other words, the crowd analyzer 58 determines whether both the new and old bounding boxes have been processed.
  • If not, the bounding box is set to the new bounding box (step 2372), and the process returns to step 2342 and is repeated for the new bounding box. Once both the new and old bounding boxes have been processed, the crowd formation process ends.
  • FIGS. 25A through 25D graphically illustrate the crowd formation process of FIGS. 24A through 24D for a scenario where the crowd formation process is triggered by a location update for a user having no old location. In this scenario, the crowd analyzer 58 creates a new bounding box 150 for the new location of the user, and the new bounding box 150 is set as the bounding box to be processed for crowd formation. Then, as illustrated in FIG. 25A, the crowd analyzer 58 identifies all individual users currently located within the bounding box 150 and all crowds located within or overlapping the bounding box. In this example, crowd 152 is an existing crowd relevant to the bounding box 150. Crowds are indicated by dashed circles, crowd centers are indicated by cross-hairs (+), and users are indicated as dots. Next, as illustrated in FIG. 25B, the crowd analyzer 58 creates crowds 154 through 158 of one user each for the individual users, and the optimal inclusion distances of the crowds 154 through 158 are set to the initial optimal inclusion distance. As discussed above, the initial optimal inclusion distance is computed by the crowd analyzer 58 based on a density of users within the bounding box 150.
  • The crowd analyzer 58 then identifies the two closest crowds 154 and 156 in the bounding box 150 and determines a distance between the two closest crowds 154 and 156. In this example, the distance between the two closest crowds 154 and 156 is less than the optimal inclusion distance. As such, the two closest crowds 154 and 156 are merged and a new crowd center and new optimal inclusion distance are computed, as illustrated in FIG. 25C. The crowd analyzer 58 then repeats the process such that the two closest crowds 154 and 158 in the bounding box 150 are again merged, as illustrated in FIG. 25D. At this point, the distance between the two closest crowds 152 and 154 is greater than the appropriate optimal inclusion distance. As such, the crowd formation process is complete.
  • FIGS. 26A through 26F graphically illustrate the crowd formation process of FIGS. 24A through 24D for a scenario where the new and old bounding boxes overlap. As illustrated in FIG. 26A, a user moves from an old location to a new location, as indicated by an arrow. The crowd analyzer 58 receives a location update for the user giving the new location of the user. In response, the crowd analyzer 58 creates an old bounding box 160 for the old location of the user and a new bounding box 162 for the new location of the user. Crowd 164 exists in the old bounding box 160, and crowd 166 exists in the new bounding box 162.
  • Since the old bounding box 160 and the new bounding box 162 overlap, the crowd analyzer 58 creates a bounding box 168 that encompasses both the old bounding box 160 and the new bounding box 162, as illustrated in FIG. 26B. In addition, the crowd analyzer 58 creates crowds 170 through 176 for individual users currently located within the bounding box 168. The optimal inclusion distances of the crowds 170 through 176 are set to the initial optimal inclusion distance computed by the crowd analyzer 58 based on the density of users in the bounding box 168.
  • Next, the crowd analyzer 58 analyzes the crowds 164, 166, and 170 through 176 to determine whether any members of the crowds 164, 166, and 170 through 176 violate the optimal inclusion distances of the crowds 164, 166, and 170 through 176. In this example, as a result of the user leaving the crowd 164 and moving to his new location, both of the remaining members of the crowd 164 violate the optimal inclusion distance of the crowd 164. As such, the crowd analyzer 58 removes the remaining users from the crowd 164 and creates crowds 178 and 180 of one user each for those users, as illustrated in FIG. 26C.
  • The crowd analyzer 58 then identifies the two closest crowds in the bounding box 168, which in this example are the crowds 174 and 176. Next, the crowd analyzer 58 computes a distance between the two crowds 174 and 176. In this example, the distance between the two crowds 174 and 176 is less than the initial optimal inclusion distance and, as such, the two crowds 174 and 176 are combined. In this example, crowds are combined by merging the smaller crowd into the larger crowd. Since the two crowds 174 and 176 are of the same size, the crowd analyzer 58 merges the crowd 176 into the crowd 174, as illustrated in FIG. 26D. A new crowd center and new optimal inclusion distance are then computed for the crowd 174.
  • At this point, the crowd analyzer 58 repeats the process and determines that the crowds 166 and 172 are now the two closest crowds. In this example, the distance between the two crowds 166 and 172 is less than the optimal inclusion distance of the larger of the two crowds 166 and 172, which is the crowd 166. As such, the crowd 172 is merged into the crowd 166 and a new crowd center and optimal inclusion distance are computed for the crowd 166, as illustrated in FIG. 26E. At this point, there are no two crowds closer than the optimal inclusion distance of the larger of the two crowds. As such, the crowd analyzer 58 discards any crowds having less than three members, as illustrated in FIG. 26F. In this example, the crowds 170, 174, 178, and 180 have less than three members and are therefore removed. The crowd 166 has three or more members and, as such, is not removed. At this point, the crowd formation process is complete.
  • FIGS. 27A through 27E graphically illustrate the crowd formation process of FIGS. 24A through 24D in a scenario where the new and old bounding boxes do not overlap. As illustrated in FIG. 27A, in this example, the user moves from an old location to a new location. The crowd analyzer 58 creates an old bounding box 182 for the old location of the user and a new bounding box 184 for the new location of the user. Crowds 186 and 188 exist in the old bounding box 182, and crowd 190 exists in the new bounding box 184. In this example, since the old and new bounding boxes 182 and 184 do not overlap, the crowd analyzer 58 processes the old and new bounding boxes 182 and 184 separately.
  • More specifically, as illustrated in FIG. 27B, as a result of the movement of the user from the old location to the new location, the remaining users in the crowd 186 no longer satisfy the optimal inclusion distance for the crowd 186. As such, the remaining users in the crowd 186 are removed from the crowd 186, and crowds 192 and 194 of one user each are created for the removed users as shown in FIG. 27C. In this example, no two crowds in the old bounding box 182 are close enough to be combined. As such, processing of the old bounding box 182 is complete, and the crowd analyzer 58 proceeds to process the new bounding box 184.
  • As illustrated in FIG. 27D, processing of the new bounding box 184 begins by the crowd analyzer 58 creating a crowd 196 of one user for the user. The crowd analyzer 58 then identifies the crowds 190 and 196 as the two closest crowds in the new bounding box 184 and determines a distance between the two crowds 190 and 196. In this example, the distance between the two crowds 190 and 196 is less than the optimal inclusion distance of the larger crowd, which is the crowd 190. As such, the crowd analyzer 58 combines the crowds 190 and 196 by merging the crowd 196 into the crowd 190, as illustrated in FIG. 27E. A new crowd center and new optimal inclusion distance are then computed for the crowd 190. At this point, the crowd formation process is complete.
  • Before proceeding, a variation of the spatial crowd formation process discussed above with respect to FIGS. 24A through 24D, 25A through 25D, 26A through 26F, and 27A through 27E will be described. In this alternative embodiment, a location accuracy of the location update from the user received in step 2300 is considered. More specifically, in step 2300, the location update received by the MAP server 12 includes the updated location of the user 20-1 as well as a location accuracy for the location of the user 20-1, which may be expressed as, for example, a radius in meters from the location of the user 20-1. In the embodiment where the location of the user 20-1 is obtained from a GPS receiver of the mobile device 18-1, the location accuracy of the location of the user 20-1 may be provided by the GPS receiver or derived from data from the GPS receiver as will be appreciated by one having ordinary skill in the art.
  • Then, in steps 2304 and 2306, sizes of the new and old bounding boxes centered at the new and old locations of the user 20-1 are set as a function of the location accuracy of the new and old locations of the user 20-1. If the new location of the user 20-1 is inaccurate, then the new bounding box will be large. If the new location of the user 20-1 is accurate, then the new bounding box will be small. For example, the length and width of the new bounding box may be set to M times the location accuracy of the new location of the user 20-1, where the location accuracy is expressed as a radius in meters from the new location of the user 20-1. The number M may be any desired number. For example, the number M may be 5. In a similar manner, the location accuracy of the old location of the user 20-1 may be used to set the length and width of the old bounding box.
  • In addition, the location accuracy may be considered when computing the initial optimal inclusion distances used for crowds of one user in steps 2314 and 2344. As discussed above, the initial optimal inclusion distance is computed based on the following equation:
  • $\text{initial\_optimal\_inclusion\_dist} = a \cdot \sqrt{\dfrac{A_{\text{BoundingBox}}}{\text{number\_of\_users}}}$,
  • where a is a number between 0 and 1, A_BoundingBox is the area of the bounding box, and number_of_users is the total number of users in the bounding box. The total number of users in the bounding box includes both individual users that are not already in a crowd and users that are already in a crowd. In one embodiment, a is ⅔. However, if the computed initial optimal inclusion distance is less than the location accuracy of the current location of the individual user in a crowd, then the location accuracy, rather than the computed value, is used for the initial optimal inclusion distance for that crowd. As such, as location accuracy decreases, crowds become larger and more inclusive. In contrast, as location accuracy increases, crowds become smaller and less inclusive. In other words, the granularity with which crowds are formed is a function of the location accuracy.
  • Likewise, when new optimal inclusion distances for crowds are recomputed in steps 2334 and 2364, location accuracy may also be considered. As discussed above, the new optimal inclusion distance may first be computed based on the following equation:
  • $\text{average} = \dfrac{1}{n+1} \cdot \left( \text{initial\_optimal\_inclusion\_dist} + \sum_{i=1}^{n} d_i \right), \qquad \text{optimal\_inclusion\_dist} = \text{average} + \sqrt{\dfrac{1}{n} \cdot \sum_{i=1}^{n} \left( d_i - \text{average} \right)^2}$,
  • where n is the number of users in the crowd and d_i is a distance between the i-th user and the crowd center. In other words, the new optimal inclusion distance is computed as the average of the initial optimal inclusion distance and the distances between the users in the crowd and the crowd center, plus one standard deviation. However, if the computed value for the new optimal inclusion distance is less than an average location accuracy of the users in the crowd, the average location accuracy of the users in the crowd, rather than the computed value, is used as the new optimal inclusion distance.
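  • The accuracy-aware variation can be summarized by the following sketch; the multiplier M = 5, the example accuracy values, and the helper names are assumptions used only to illustrate how the reported accuracy scales the bounding boxes and floors the inclusion distances.

```python
# Location-accuracy adjustments (illustrative sketch).
def bounding_box_edge(accuracy_radius_m, m=5):
    # bounding box edge length grows as the reported location gets less accurate
    return m * accuracy_radius_m

def floored_initial_dist(computed_dist, user_accuracy_m):
    # the accuracy radius is a lower bound on the initial optimal inclusion distance
    return max(computed_dist, user_accuracy_m)

def floored_new_dist(computed_dist, member_accuracies_m):
    # the average accuracy of the crowd members floors the new optimal inclusion distance
    avg_accuracy = sum(member_accuracies_m) / len(member_accuracies_m)
    return max(computed_dist, avg_accuracy)

print(bounding_box_edge(10))                    # 50 m edge for a 10 m accuracy radius
print(floored_initial_dist(6.67, 12.0))         # 12.0 -> accuracy dominates
print(floored_new_dist(8.2, [3.0, 5.0, 4.0]))   # 8.2  -> computed value kept
```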
  • FIG. 28 illustrates the operation of the system 10 of FIG. 1 to enable the mobile devices 18-1 through 18-N to request crowd data for currently formed crowds according to one embodiment of the present disclosure. Note that while in this example the request is initiated by the MAP application 32-1 of the mobile device 18-1, this discussion is equally applicable to the MAP applications 32-2 through 32-N of the other mobile devices 18-2 through 18-N. In addition, in a similar manner, requests may be received from the third-party applications 34-1 through 34-N.
  • First, the MAP application 32-1 sends a crowd request to the MAP client 30-1 (step 2400). The crowd request is a request for crowd data for crowds currently formed near a specified POI or within a specified AOI. The crowd request may be initiated by the user 20-1 of the mobile device 18-1 via the MAP application 32-1 or may be initiated automatically by the MAP application 32-1 in response to an event such as, for example, start-up of the MAP application 32-1, movement of the user 20-1, or the like. In one embodiment, the crowd request is for a POI, where the POI is a POI corresponding to the current location of the user 20-1, a POI selected from a list of POIs defined by the user 20-1, a POI selected from a list of POIs defined by the MAP application 32-1 or the MAP server 12, a POI selected by the user 20-1 from a map, a POI implicitly defined via a separate application (e.g., POI is implicitly defined as the location of the nearest Starbucks coffee house in response to the user 20-1 performing a Google search for “Starbucks”), or the like. If the POI is selected from a list of POIs, the list of POIs may include static POIs which may be defined by street addresses or latitude and longitude coordinates, dynamic POIs which may be defined as the current locations of one or more friends of the user 20-1, or both. Note that in some embodiments, the user 20-1 may be enabled to define a POI by selecting a crowd center of a crowd as a POI, where the POI would thereafter remain static at that point and would not follow the crowd.
  • In another embodiment, the crowd request is for an AOI, where the AOI may be an AOI of a predefined shape and size centered at the current location of the user 20-1, an AOI selected from a list of AOIs defined by the user 20-1, an AOI selected from a list of AOIs defined by the MAP application 32-1 or the MAP server 12, an AOI selected by the user 20-1 from a map, an AOI implicitly defined via a separate application (e.g., AOI is implicitly defined as an area of a predefined shape and size centered at the location of the nearest Starbucks coffee house in response to the user 20-1 performing a Google search for “Starbucks”), or the like. If the AOI is selected from a list of AOIs, the list of AOIs may include static AOIs, dynamic AOIs which may be defined as areas of a predefined shape and size centered at the current locations of one or more friends of the user 20-1, or both. Note that in some embodiments, the user 20-1 may be enabled to define an AOI by selecting a crowd such that an AOI is created of a predefined shape and size centered at the crowd center of the selected crowd. The AOI would thereafter remain static and would not follow the crowd. The POI or the AOI of the crowd request may be selected by the user 20-1 via the MAP application 32-1. In yet another embodiment, the MAP application 32-1 automatically uses the current location of the user 20-1 as the POI or as a center point for an AOI of a predefined shape and size.
  • Upon receiving the crowd request, the MAP client 30-1 forwards the crowd request to the MAP server 12 (step 2402). Note that in some embodiments, the MAP client 30-1 may process the crowd request before forwarding the crowd request to the MAP server 12. For example, in some embodiments, the crowd request may include more than one POI or more than one AOI. As such, the MAP client 30-1 may generate a separate crowd request for each POI or each AOI.
  • In response to receiving the crowd request from the MAP client 30-1, the MAP server 12 identifies one or more crowds relevant to the crowd request (step 2404). More specifically, in one embodiment, the crowd analyzer 58 performs a crowd formation process such as that described above in FIG. 22 to form one or more crowds relevant to the POI or the AOI of the crowd request. In another embodiment, the crowd analyzer 58 proactively forms crowds using a process such as that described above in FIGS. 24A through 24D and stores corresponding crowd records in the datastore 64 of the MAP server 12. Then, rather than forming the relevant crowds in response to the crowd request, the crowd analyzer 58 queries the datastore 64 to identify the crowds that are relevant to the crowd request. The crowds relevant to the crowd request may be those crowds within or intersecting a bounding region, such as a bounding box, for the crowd request. If the crowd request is for a POI, the bounding region is a geographic region of a predefined shape and size centered at the POI. If the crowd request is for an AOI, the bounding region is the AOI.
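  • When crowds are formed proactively, the relevance check reduces to a simple spatial filter, sketched below under an assumed record layout (the datastore 64 query itself is not shown):

```python
# Keep any crowd with at least one user located within the bounding region.
def point_in_box(point, box):
    x, y = point
    min_x, min_y, max_x, max_y = box
    return min_x <= x <= max_x and min_y <= y <= max_y

def relevant_crowds(crowds, bounding_region):
    return [crowd for crowd in crowds
            if any(point_in_box(loc, bounding_region) for loc in crowd["user_locations"])]

crowds = [{"id": 1, "user_locations": [(5, 5), (6, 4), (5, 6)]},
          {"id": 2, "user_locations": [(90, 90), (91, 92), (89, 90)]}]
print(relevant_crowds(crowds, (0, 0, 20, 20)))  # only crowd 1 is relevant
```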
  • Once the crowd analyzer 58 has identified the crowds relevant to the crowd request, the MAP server 12 generates crowd data for the identified crowds (step 2406). As discussed below in detail, the crowd data for the identified crowds may include aggregate profiles for the crowds, information characterizing the crowds, or both. In addition, the crowd data may include spatial information defining the locations of the crowds, the number of users in the crowds, the amount of time the crowds have been located at or near the POI or within the AOI of the crowd request, or the like. The MAP server 12 then returns the crowd data to the MAP client 30-1 (step 2408).
  • Upon receiving the crowd data, the MAP client 30-1 forwards the crowd data to the MAP application 32-1 (step 2410). Note that in some embodiments the MAP client 30-1 may process the crowd data before sending the crowd data to the MAP application 32-1. The MAP application 32-1 then presents the crowd data to the user 20-1 (step 2412). The manner in which the crowd data is presented depends on the particular implementation of the MAP application 32-1. In one embodiment, the crowd data is overlaid upon a map. For example, the crowds may be represented by corresponding indicators overlaid on a map. The user 20-1 may then select a crowd in order to view additional crowd data regarding that crowd such as, for example, the aggregate profile of that crowd, characteristics of that crowd, or the like.
  • Note that in one embodiment, the MAP application 32-1 may operate to roll-up the aggregate profiles for multiple crowds into a rolled-up aggregate profile for those crowds. The rolled-up aggregate profile may be the average of the aggregate profiles of the crowds. For example, the MAP application 32-1 may roll-up the aggregate profiles for multiple crowds at a POI and present the rolled-up aggregate profile for the multiple crowds at the POI to the user 20-1. In a similar manner, the MAP application 32-1 may provide a rolled-up aggregate profile for an AOI. In another embodiment, the MAP server 12 may roll-up crowds for a POI or an AOI and provide the rolled-up aggregate profile in addition to or as an alternative to the aggregate profiles for the individual crowds.
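  • For example, a rolled-up aggregate profile may simply average the per-crowd aggregate profiles, as in this minimal sketch (the two-field profile shape is an assumption):

```python
# Average several crowd aggregate profiles into one rolled-up profile.
def roll_up(aggregate_profiles):
    n = len(aggregate_profiles)
    return {"avg_user_matches": sum(p["user_matches"] for p in aggregate_profiles) / n,
            "avg_total_users": sum(p["total_users"] for p in aggregate_profiles) / n}

print(roll_up([{"user_matches": 3, "total_users": 5},
               {"user_matches": 6, "total_users": 10}]))
# {'avg_user_matches': 4.5, 'avg_total_users': 7.5}
```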
  • FIG. 29A is a flow chart illustrating step 2406 of FIG. 28 in more detail according to one embodiment of the present disclosure. In this embodiment, the crowd data returned by the MAP server 12 includes aggregate profiles for the crowds identified for the POI or the AOI. In this embodiment, upon receiving the crowd request, the MAP server 12 triggers the crowd analyzer 58 to identify crowds relevant to the current request, and then passes the identified crowds to the aggregation engine 60 in order to generate aggregate profiles for the identified crowds.
  • More specifically, after the crowd analyzer 58 has identified the crowds relevant to the current request, the identified crowds are passed to the aggregation engine 60. The aggregation engine 60 selects a next crowd to process, which for the first iteration is the first crowd (step 2500-A). The aggregation engine 60 then selects the next user in the crowd (step 2502-A). Next, the aggregation engine 60 compares the user profile of the user in the crowd to the user profile of the requesting user, which for this example is the user 20-1 of the mobile device 18-1, or a select subset of the user profile of the requesting user (step 2504-A). In some embodiments, the user 20-1 may be enabled to select a subset of his user profile to be used for generation of the aggregate profile. For example, in the embodiment where user profiles are expressed as keywords in a number of profile categories, the user 20-1 may select one or more of the profile categories to be used for aggregate profile generation. When comparing the user profile of the user in the crowd to the user profile of the user 20-1, the aggregation engine 60 identifies matches between the user profile of the user in the crowd and the user profile of the user 20-1 or the select subset of the user profile of the user 20-1. In one embodiment, the user profiles are expressed as keywords in a number of profile categories. The aggregation engine 60 may then make a list of keywords from the user profile of the user in the crowd that match keywords in the user profile of the user 20-1 or the select subset of the user profile of the user 20-1.
  • Next, the aggregation engine 60 determines whether there are more users in the crowd (step 2506-A). If so, the process returns to step 2502-A and is repeated for the next user in the crowd. Once all of the users in the crowd have been processed, the aggregation engine 60 generates an aggregate profile for the crowd based on data resulting from the comparisons of the user profiles of the users in the crowd to the user profile of the user 20-1 or the select subset of the user profile of the user 20-1 (step 2508-A). In an alternative embodiment, the aggregation engine 60 generates an aggregate profile for the crowd based on data resulting from the comparisons of the user profiles of the users in the crowd to a target user profile defined or otherwise specified by the user 20-1. In one embodiment, the data resulting from the comparisons is a list of matching keywords for each of the users in the crowd. The aggregate profile may then include a number of user matches over all keywords and/or a ratio of the number of user matches over all keywords to the number of users in the crowd. The number of user matches over all keywords is a number of users in the crowd having at least one keyword in their user profile that matches a keyword in the user profile of the user 20-1 or the select subset of the user profile of the user 20-1. The aggregate profile may additionally or alternatively include, for each keyword in the user profile of the user 20-1 or the select subset of the user profile of the user 20-1, a number of user matches for the keyword or a ratio of the number of user matches for the keyword to the number of users in the crowd. Note that keywords in the user profile of the user 20-1 or the select subset of the user profile of the user 20-1 that have no user matches may be excluded from the aggregate profile. In addition, the aggregate profile for the crowd may include a total number of users in the crowd.
  • The aggregate profile for the crowd may additionally or alternatively include a match strength that is indicative of a degree of similarity between the user profiles of the users in the crowd and the user profile of the user 20-1. The match strength may be computed as a ratio of the number of user matches to the total number of users in the crowd. Alternatively, the match strength may be computed as a function of the number of user matches per keyword and keyword weights assigned to the keywords. The keyword weights may be assigned by the user 20-1.
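  • The two match-strength options mentioned above might be computed as in the following sketch; the normalized weighted sum of per-keyword match ratios is one possible reading of the keyword-weighted variant, not a formula required by the disclosure.

```python
# Match strength for a crowd relative to the requesting user's profile.
def match_strength_ratio(user_matches, crowd_size):
    # ratio of users with at least one matching keyword to the crowd size
    return user_matches / crowd_size

def match_strength_weighted(per_keyword_matches, keyword_weights, crowd_size):
    # weight each keyword's match ratio by the weight assigned by the requesting user
    total_weight = sum(keyword_weights.values())
    return sum(keyword_weights[kw] * (matches / crowd_size)
               for kw, matches in per_keyword_matches.items()) / total_weight

print(match_strength_ratio(6, 10))                                                  # 0.6
print(match_strength_weighted({"jazz": 4, "golf": 1}, {"jazz": 2, "golf": 1}, 10))   # 0.3
```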
  • Once the aggregate profile of the crowd is generated, the aggregation engine 60 determines whether there are more crowds to process (step 2510-A). If so, the process returns to step 2500-A and is repeated for the next crowd. Once aggregate profiles have been generated for all of the crowds relevant to the current request, the aggregate profiles for the crowds are returned (step 2512-A). More specifically, the aggregate profiles are included in the crowd data returned to the MAP client 30-1 in response to the current request.
  • Note that in some embodiments the user 20-1 is enabled to activate a “nearby POIs” feature. If this feature is enabled, the crowds identified by the crowd analyzer 58 and processed by the aggregation engine 60 to produce corresponding aggregate profiles may also include crowds located at or near any nearby POIs. The nearby POIs may be POIs predefined by the user 20-1, the MAP application 32-1, and/or the MAP server 12 that are within a predefined distance from the POI or the AOI of the current request.
  • FIG. 29B is a flow chart illustrating step 2406 of FIG. 28 in more detail according to another embodiment of the present disclosure. In this embodiment, the crowd data returned by the MAP server 12 includes aggregate profiles for the crowds identified for the POI or the AOI. In this embodiment, upon receiving the crowd request, the MAP server 12 triggers the crowd analyzer 58 to identify crowds relevant to the current request, and then passes the identified crowds to the aggregation engine 60 in order to generate aggregate profiles for the identified crowds.
  • More specifically, after the crowd analyzer 58 has identified the crowds relevant to the current request, the identified crowds are passed to the aggregation engine 60. The aggregation engine 60 selects a next crowd to process, which for the first iteration is the first crowd (step 2500-B). The aggregation engine 60 then selects the next user in the crowd (step 2502-B). Next, the aggregation engine 60 compares the user profile of the user in the crowd to the user profile of the requesting user, which for this example is the user 20-1 of the mobile device 18-1, or a select subset of the user profile of the requesting user (step 2504-B). In some embodiments, the user 20-1 may be enabled to select a subset of his user profile to be used for generation of the aggregate profile. For example, in the embodiment where user profiles are expressed as keywords in a number of profile categories, the user 20-1 may select one or more of the profile categories to be used for aggregate profile generation. When comparing the user profile of the user in the crowd to the user profile of the user 20-1, the aggregation engine 60 identifies matches between the user profile of the user in the crowd and the user profile of the user 20-1 or the select subset of the user profile of the user 20-1. In this embodiment, the user profiles are expressed as keywords in a number of profile categories. The aggregation engine 60 may then make a list of keywords from the user profile of the user in the crowd that match keywords in the user profile of the user 20-1 or the select subset of the user profile of the user 20-1.
  • Next, the aggregation engine 60 determines whether there are more users in the crowd (step 2506-B). If so, the process returns to step 2502-B and is repeated for the next user in the crowd. Once all of the users in the crowd have been processed, the aggregation engine 60 generates an aggregate profile for the crowd based on data resulting from the comparisons of the user profiles of the users in the crowd to the user profile of the user 20-1 or the select subset of the user profile of the user 20-1 (step 2508-B). In an alternative embodiment, the aggregation engine 60 generates an aggregate profile for the crowd based on data resulting from the comparisons of the user profiles of the users in the crowd to a target user profile defined or otherwise specified by the user 20-1. In this embodiment, the data resulting from the comparisons is a list of matching keywords for each of the users in the crowd. The aggregate profile may then include a number of user matches over all keywords and/or a ratio of the number of user matches over all keywords to the number of users in the crowd. The number of user matches over all keywords is a number of users in the crowd having at least one keyword in their user profile that matches a keyword in the user profile of the user 20-1 or the select subset of the user profile of the user 20-1. The aggregate profile may additionally or alternatively include, for each keyword in the user profile of the user 20-1 or the select subset of the user profile of the user 20-1, a number of user matches for the keyword or a ratio of the number of user matches for the keyword to the number of users in the crowd. Note that keywords in the user profile of the user 20-1 or the select subset of the user profile of the user 20-1 that have no user matches may be excluded from the aggregate profile. In addition, the aggregate profile for the crowd may include a total number of users in the crowd.
  • The aggregate profile for the crowd may additionally or alternatively include a match strength that is indicative of a degree of similarity between the user profiles of the users in the crowd and the user profile of the user 20-1. The match strength may be computed as a ratio of the number of user matches to the total number of users in the crowd. Alternatively, the match strength may be computed as a function of the number of user matches per keyword and keyword weights assigned to the keywords. The keyword weights may be assigned by the user 20-1.
  • Once the aggregate profile of the crowd is generated, in this embodiment, the aggregation engine 60 compares the user profiles of the users in the crowd to one another to determine N keywords having the highest number of user matches among the users in the crowd (step 2510-B). Here, N may be, for example, five. The aggregation engine 60 then adds any of the N keywords that are not already in the aggregate profile to the aggregate profile and flags those keywords as non-matching keywords (step 2512-B). These keywords are flagged as non-matching because they do not match any of the keywords in the user profile, or select subset thereof, of the user 20-1. The non-matching keywords are preferably differentiated from the matching keywords in the aggregate profile when presented to the user 20-1. The non-matching keywords are particularly beneficial where there are few or no matching keywords between the user profile of the user 20-1 and the user profiles of the users in the crowd. In this situation, the non-matching keywords would allow the user 20-1 to gain some understanding of the interests of the users in the crowd.
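  • Steps 2510-B and 2512-B can be illustrated with a short sketch; the Counter-based counting and N = 2 in the example are assumptions for illustration.

```python
# Find the N keywords with the most user matches in the crowd and return the
# ones not already present in the aggregate profile (the non-matching keywords).
from collections import Counter

def top_non_matching_keywords(crowd_profiles, matching_keywords, n=5):
    counts = Counter()
    for profile in crowd_profiles:
        keywords = {kw for kws in profile.values() for kw in kws}
        counts.update(keywords)          # count each user at most once per keyword
    top_n = [kw for kw, _ in counts.most_common(n)]
    return [kw for kw in top_n if kw not in matching_keywords]

crowd = [{"music": ["jazz", "blues"]},
         {"music": ["jazz"], "sports": ["golf"]},
         {"sports": ["golf"], "music": ["metal"]}]
print(top_non_matching_keywords(crowd, matching_keywords={"jazz"}, n=2))  # ['golf']
```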
  • Next, the aggregation engine 60 determines whether there are more crowds to process (step 2514-B). If so, the process returns to step 2500-B and is repeated for the next crowd. Once aggregate profiles have been generated for all of the crowds relevant to the current request, the aggregate profiles for the crowds are returned (step 2516-B). More specifically, the aggregate profiles are included in the crowd data returned to the MAP client 30-1 in response to the current request.
  • Note that in some embodiments the user 20-1 is enabled to activate a “nearby POIs” feature. If this feature is enabled, the crowds identified by the crowd analyzer 58 and processed by the aggregation engine 60 to produce corresponding aggregate profiles may also include crowds located at or near any nearby POIs. The nearby POIs may be POIs predefined by the user 20-1, the MAP application 32-1, and/or the MAP server 12 that are within a predefined distance from the POI or the AOI of the current request.
  • FIG. 30 illustrates the operation of the system 10 of FIG. 1 to enable the subscriber device 22 to request information regarding current crowds according to one embodiment of the present disclosure. First, the subscriber device 22 sends a crowd request to the MAP server 12 (step 2600). The crowd request is a request for current crowds at a specified POI or AOI. The crowd request may be initiated by the subscriber 24 at the subscriber device 22 via the web browser 38 or a custom application enabled to access the MAP server 12. Preferably, the subscriber 24 is enabled to identify the POI or the AOI for the crowd request by, for example, selecting the POI or the AOI on a map, selecting a crowd center of an existing crowd as a POI, selecting a crowd location of an existing crowd as a center of an AOI, selecting the POI or the AOI from a predefined list of POIs and/or AOIs, or the like. The predefined list of POIs and/or AOIs may be defined by, for example, the subscriber 24 and/or the MAP server 12.
  • In response to receiving the crowd request from the subscriber device 22, the MAP server 12 identifies one or more crowds relevant to the crowd request (step 2602). More specifically, in one embodiment, the crowd analyzer 58 performs a crowd formation process such as that described above in FIG. 22 to form one or more crowds relevant to the POI or the AOI of the crowd request. In another embodiment, the crowd analyzer 58 proactively forms crowds using a process such as that described above in FIGS. 24A through 24C and stores corresponding crowd records in the datastore 64 of the MAP server 12. Then, rather than forming the relevant crowds in response to the crowd request, the crowd analyzer 58 queries the datastore 64 to identify the crowds that are relevant to the crowd request. The crowds relevant to the crowd request may be those crowds within or overlapping a bounding region, such as a bounding box, for the crowd request. If the crowd request is for a POI, the bounding region is a geographic region of a predefined shape and size centered at the POI. If the crowd request is for an AOI, the bounding region is the AOI.
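• A minimal sketch of the bounding-region test described above is shown below, assuming simple planar coordinates and axis-aligned boxes; the class and function names are illustrative, and a deployed system would use geographic coordinates rather than this simplified representation.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class BoundingBox:
    """Axis-aligned bounding region in planar coordinates (an assumption)."""
    min_x: float
    min_y: float
    max_x: float
    max_y: float

    def intersects(self, other: "BoundingBox") -> bool:
        # Two boxes overlap unless one lies entirely to one side of the other.
        return not (self.max_x < other.min_x or other.max_x < self.min_x or
                    self.max_y < other.min_y or other.max_y < self.min_y)

def bounding_region_for_poi(x: float, y: float, half_size: float) -> BoundingBox:
    """Bounding region of a predefined size centered at a POI; for an AOI,
    the AOI itself would be used as the bounding region."""
    return BoundingBox(x - half_size, y - half_size, x + half_size, y + half_size)

def relevant_crowds(request_region: BoundingBox,
                    crowd_regions: Dict[str, BoundingBox]) -> List[str]:
    """Crowds whose regions are within or overlap the bounding region of the request."""
    return [crowd_id for crowd_id, region in crowd_regions.items()
            if request_region.intersects(region)]
```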
  • Once the crowd analyzer 58 has identified the crowds relevant to the crowd request, the MAP server 12 generates crowd data for the identified crowds (step 2604). The crowd data for the identified crowds may include aggregate profiles for the crowds, information characterizing the crowds, or both. In addition, the crowd data may include the locations of the crowds, the number of users in the crowds, the amount of time the crowds have been located at or near the POI or within the AOI, or the like. The MAP server 12 then returns the crowd data to the subscriber device 22 (step 2606). In the embodiment where the subscriber 24 accesses the MAP server 12 via the web browser 38 at the subscriber device 22, the MAP server 12 formats the crowd data into a suitable web format before sending the crowd data to the subscriber device 22. The manner in which the crowd data is formatted depends on the particular implementation. In one embodiment, the crowd data is overlaid upon a map. For example, in one embodiment, the MAP server 12 may provide the crowd data to the subscriber device 22 via one or more web pages. Using the one or more web pages, crowd indicators representative of the locations of the crowds may be overlaid on a map. The subscriber 24 may then select a crowd in order to view additional crowd data regarding that crowd such as, for example, the aggregate profile of that crowd, characteristics of that crowd, or the like. Upon receiving the crowd data, the subscriber device 22 presents the crowd data to the subscriber 24 (step 2608). Note that in one embodiment, the MAP server 12 may roll up the aggregate profiles for multiple crowds at a POI or in an AOI to provide a rolled-up aggregate profile that may be returned in addition to or as an alternative to the aggregate profiles of the individual crowds.
  • It should be noted that in some embodiments, the subscriber 24 may be enabled to specify filtering criteria via the web browser 38 or a custom application for interacting with the MAP server 12. For example, the subscriber 24 may specify filtering criteria regarding types of crowds in which the subscriber 24 is or is not interested. For instance, the crowd data may be presented to the subscriber 24 via one or more web pages that enable the subscriber 24 to select a filtering feature. In response, a list of keywords appearing in the user profiles of the crowds identified as being relevant to the current request may be presented to the subscriber 24. The subscriber 24 may then specify one or more keywords from the list such that crowds having users with user profiles that do not include any of the specified keywords are filtered, or removed, and are therefore not considered when generating the crowd data in response to a crowd request.
  • FIG. 31 is a flow chart illustrating step 2604 of FIG. 30 in more detail according to one embodiment of the present disclosure. In this embodiment, the crowd data returned by the MAP server 12 includes aggregate profiles for the crowds identified for the POI or the AOI. In this embodiment, upon receiving the crowd request, the MAP server 12 triggers the crowd analyzer 58 to identify crowds relevant to the crowd request, and then passes the identified crowds to the aggregation engine 60 in order to generate aggregate profiles for the identified crowds.
  • More specifically, after the crowd analyzer 58 has identified the crowds relevant to the crowd request, the identified crowds are passed to the aggregation engine 60. The aggregation engine 60 selects a next crowd to process, which for the first iteration is the first crowd (step 2700). The aggregation engine 60 then generates an aggregate profile for the crowd based on a comparison of the user profiles of the users in the crowd to one another (step 2702). Note that in an alternative embodiment, the aggregation engine 60 then generates an aggregate profile for the crowd based on a comparison of the user profiles of the users in the crowd to a target user profile defined by the subscriber 24.
  • In one embodiment, in order to generate the aggregate profile for the crowd, the user profiles are expressed as keywords for each of a number of profile categories. Then, the aggregation engine 60 may determine an aggregate list of keywords for the crowd. The aggregate list of keywords is a list of all keywords appearing in the user profiles of the users in the crowd. The aggregate profile for the crowd may then include a number of user matches for each keyword in the aggregate list of keywords for the crowd. The number of user matches for a keyword is the number of users in the crowd having a user profile that includes that keyword. The aggregate profile may include the number of user matches for all keywords in the aggregate list of keywords for the crowd or the number of user matches for keywords in the aggregate list of keywords for the crowd having more than a predefined number of user matches (e.g., more than 1 user match). The aggregate profile may also include the number of users in the crowd. In addition or alternatively, the aggregate profile may include, for each keyword in the aggregate list or each keyword in the aggregate list having more than a predefined number of user matches, a ratio of the number of user matches for the keyword to the number of users in the crowd.
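• The keyword counting described in this embodiment can be sketched as follows; the dictionary output format, the parameter names, and the optional minimum-match threshold are illustrative assumptions rather than a definitive implementation.

```python
from collections import Counter
from typing import Dict, List, Set

def aggregate_profile(crowd_profiles: List[Set[str]],
                      min_user_matches: int = 0) -> Dict[str, Dict[str, float]]:
    """Build an aggregate profile for a crowd by counting, for every keyword in
    the aggregate list of keywords, the number of users whose profile contains
    that keyword, along with the ratio of that count to the crowd size.
    Keywords with min_user_matches or fewer matches are omitted (e.g., pass 1
    to keep only keywords having more than one user match)."""
    crowd_size = len(crowd_profiles)
    counts: Counter = Counter()
    for profile in crowd_profiles:
        counts.update(profile)  # sets ensure one count per user per keyword
    return {keyword: {"user_matches": count, "ratio": count / crowd_size}
            for keyword, count in counts.items()
            if count > min_user_matches}
```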
  • Once the aggregate profile of the crowd is generated, the aggregation engine 60 determines whether there are more crowds to process (step 2704). If so, the process returns to step 2700 and is repeated for the next crowd. Once aggregate profiles have been generated for all of the crowds relevant to the crowd request, the aggregate profiles for the crowds are returned (step 2706). Note that in some embodiments the subscriber 24 is enabled to activate a “nearby POIs” feature. If this feature is enabled, the crowds identified by the crowd analyzer 58 and processed by the aggregation engine 60 to produce corresponding aggregate profiles may also include crowds located at or near any nearby POIs. The nearby POIs may be POIs predefined by the subscriber 24 and/or the MAP server 12 that are within a predefined distance from the POI or the AOI of the crowd request.
  • FIGS. 32A through 32E illustrate a GUI 198 for an exemplary embodiment of the MAP application 32-1 of the mobile device 18-1 (FIG. 1). As illustrated in FIG. 32A, the GUI 198 includes a settings screen 198-1 that is presented in response to selection of a corresponding settings button 200 by the user 20-1. A navigation button 202 may be selected to view a map and perform navigation functions such as obtaining directions to a desired location. A list button 204 enables the user 20-1 to view a list of friends, crowds, POIs, and AOIs, as discussed below. Regarding the settings displayed in the settings screen 198-1 of the GUI 198, the user 20-1 is enabled to provide his Facebook® login information which, as described above, enables the user profile of the user 20-1 to be obtained from the Facebook® social networking service. In this example, the user 20-1 has already been logged in to Facebook. As such, the user 20-1 may log out of Facebook by selecting a logout button 206. In addition, by selecting a profile setting 208, the user 20-1 is enabled to view his profile and select one or more profile categories to be used for aggregate profile generation.
  • The settings screen 198-1 also enables the user 20-1 to configure a number of privacy settings. Namely, the settings screen 198-1 enables the user 20-1 to set a stealth mode switch 210 to either an on position or an off position. When the stealth mode switch 210 is in the on position, the location of the user 20-1 is not reported to the friends of the user 20-1. However, the location of the user 20-1 is still reported for use by the MAP server 12. The privacy settings also include a location refresh setting 212 that enables the user 20-1 to configure how often location updates are to be sent by the MAP application 32-1. Lastly, the settings screen 198-1 includes an alerts setting 214 that enables the user 20-1 to configure one or more alerts. As discussed below, an alert can be tied to a particular POI or AOI such that the user 20-1 is alerted, or notified, when a crowd at the particular POI or AOI satisfies one or more specified criteria. Alternatively, an alert can be tied to a particular crowd such that the user 20-1 is alerted, or notified, when the crowd satisfies one or more specified criteria.
  • Returning to the profile setting 208, if the user 20-1 selects the profile setting 208, a user profile screen 198-2 is presented to the user 20-1 via the GUI 198, as illustrated in FIG. 32B. The user profile screen 198-2 shows a number of profile categories 216A through 216E and corresponding lists of keywords 218A through 218E, which form the user profile of the user 20-1. The user 20-1 is enabled to select one or more of the profile categories 216A through 216E to be used for aggregate profile generation (i.e., comparison to user profiles for history objects and crowds to create corresponding aggregate profiles for the user 20-1). In this example, the user 20-1 has selected his "My Interests" profile category 216C, where the corresponding list of keywords 218C defines general interests of the user 20-1. In the user profile screen 198-2, the user 20-1 can return to the settings screen 198-1 by selecting a settings button 220.
  • FIGS. 32C and 32D illustrate a list screen 198-3 that is presented to the user 20-1 via the GUI 198 in response to selecting the list button 204. The list screen 198-3 includes a friends button 222, a crowds button 224, a POI button 226, an areas button 228, and an all button 230. The list screen 198-3 enables the user 20-1 to view a list of his friends by selecting the friends button 222, a list of crowds at POIs or within AOIs of the user 20-1 by selecting the crowds button 224, a list of POIs of the user 20-1 by selecting the POI button 226, or a list of AOIs of the user 20-1 by selecting the areas button 228. In addition, the list screen 198-3 enables the user 20-1 to view a list that includes the friends of the user, the crowds at POIs or within AOIs of the user 20-1, the POIs of the user 20-1, and the AOIs of the user 20-1 by selecting the all button 230.
  • In this example, the user 20-1 has selected the all button 230. As such, the list screen 198-3 presents an AOI list 232 that includes a number of AOIs previously defined by the user 20-1. Note that each of the AOIs may be a static AOI defining a static geographic area or a dynamic AOI that is defined relative to a dynamic location such as a location of a friend of the user 20-1. For instance, in this example, the “Near Jack Shephard” AOI is a geographic area of a defined shape and size that is centered at the current location of the user's friend Jack Shephard. Note that in one embodiment, persons whose current locations may be used for dynamic AOIs are limited to the friends of the user 20-1. The user 20-1 may select an AOI from the AOI list 232 in order to view crowd data for the AOI. For example, by selecting the My Neighborhood AOI, the GUI 198 may present a map including the My Neighborhood AOI. Crowds relevant to the My Neighborhood AOI are presented on the map. The user 20-1 may then select a desired crowd in order to view detailed information regarding that crowd such as, for example, the aggregate profile of the crowd, characteristics of the crowd, or both.
  • The list screen 198-3 also presents a crowds list 234 that includes a number of crowds that are at the POIs or within the AOIs of the user 20-1. In this example, there are twelve crowds. The GUI 198 enables the user 20-1 to select a crowd from the crowds list 234 in order to view additional information regarding the crowd. For example, by selecting the Crowd of 6, the user 20-1 may be presented with a map showing the current location of the Crowd of 6 and detailed information regarding the Crowd of 6 such as, for example, the aggregate profile of the Crowd of 6, characteristics of the Crowd of 6, or both.
  • The list screen 198-3 also includes a friends list 236, as illustrated in FIG. 32D. The user 20-1 may select a friend from the friends list 236 in order to view crowds nearby that friend. In other words, the current locations of the friends of the user 20-1 are treated as temporary or dynamic POIs such that crowd data for current locations of the friends of the user 20-1 is obtained from the MAP server 12. In addition, the user 20-1 may choose to define an AOI centered at the current location of a friend to create a dynamic AOI, as discussed above. The friends list 236 also presents the current location of the friends of the user 20-1 relative to the current location of the user 20-1.
  • The list screen 198-3 also includes a POI list 238 that includes a number of POIs of the user 20-1. The user 20-1 may select a POI from the POI list 238 in order to view crowd data for the POI. For example, by selecting the Steve's house POI, the GUI 198 may present a map including the Steve's house POI. Crowds at or near the Steve's house POI are presented on the map. The user 20-1 may then select a desired crowd in order to view detailed information regarding that crowd such as, for example, the aggregate profile of the crowd, characteristics of the crowd, or both. Lastly, returning to FIG. 32C, the list screen 198-3 includes a You item 240 that may be selected by the user 20-1 to access the user profile screen 198-2 (FIG. 32B).
  • FIG. 32E is a crowd data display screen 198-4 presented by the GUI 198. In this example, the user 20-1 has selected the Around You AOI from the AOI list 232 (FIG. 32C). As a result, the GUI 198 presents the crowd data display screen 198-4 for the Around You AOI. The crowd data display screen 198-4 includes a map area 242. In this example, the current location of the user 20-1 is used as the center of the Around You AOI. The current location of the user 20-1 is represented in the map area 242 by a corresponding indicator 244. Crowds in the Around You AOI are represented in the map area by crowd indicators 246 through 250. In this embodiment, the crowd indicators 246 through 250 show the locations of the crowds as well as match strengths for the crowds. The locations of the crowds are included in the crowd data. The match strengths for the crowds may be included in the aggregate profiles for the crowds or may be determined based on the aggregate profiles for the crowds. In this embodiment, the match strength of a crowd is computed as a ratio of the number of user matches over all keywords to the number of users in the crowd. A ratio of one results in a highest match strength, and a ratio of zero results in a lowest match strength.
  • Using the GUI 198, the user 20-1 is enabled to select a particular crowd in the map area 242 to view more detailed information for that crowd in a crowd detail area 252 of the crowd data display screen 198-4. In this example, the user 20-1 has selected the crowd indicator 246. As a result, more detailed information for the crowd represented by the crowd indicator 246 is presented in the crowd detail area 252. The more detailed information for the crowd is from the crowd data for the crowd or derived from the crowd data for the crowd. In this example, the aggregate profile of the crowd is used to derive the match strength for the crowd, and the match strength is presented in the crowd detail area 252. In addition, the crowd size and number of user matches over all keywords are obtained from the aggregate profile for the crowd and presented in the crowd detail area 252. In this example, a quality factor for the crowd is also presented. As discussed below in detail, the quality factor of the crowd may be an average of a quality or confidence of the current locations of the users in the crowd. Still further, the crowd data display screen 198-4 includes a keyword matches area 254 for presenting keyword matches for the selected crowd. In this example, a font size of the keywords in the keyword matches area 254 reflects the number of user matches for that keyword. Therefore, in this example, the number of user matches for the keyword “technology” is greater than the number of user matches for the keyword “books.”
  • FIGS. 33A through 33C illustrate an exemplary web interface 256 provided by the MAP server 12 and presented to the subscriber 24 at the subscriber device 22. The web interface 256 includes a number of tabs 258 through 272, namely, a home tab 258, a realtime tab 260, a historical tab 262, a watch zones tab 264, an alerts tab 266, a filters tab 268, a reports tab 270, and an account tab 272. The home tab 258 enables the subscriber 24 to view a home screen. The home screen may include any desired information such as, for example, a link to a Frequently Asked Question (FAQ) page, instructions on how to use the web interface 256, or the like. The realtime tab 260 enables the subscriber to view realtime crowd data for POIs and/or AOIs of the subscriber 24. The historical tab 262 enables the subscriber 24 to view historical data for a POI or an AOI in a time context and/or a geographic context in the manner described above. The watch zones tab 264 enables the subscriber 24 to select POIs and/or AOIs of interest to the subscriber 24. The alerts tab 266 enables the subscriber 24 to configure one or more alerts. The filters tab 268 enables the subscriber 24 to configure filters and/or select filters to be applied to the crowd data in the realtime or historical view. The reports tab 270 enables the subscriber 24 to access reports previously generated for crowds of interest, POIs, and/or AOIs. Lastly, the account tab 272 enables the subscriber 24 to manage the subscriber's account.
  • More specifically, FIG. 33A illustrates the web interface 256 when the realtime tab 260 has been selected by the subscriber 24. When the realtime tab 260 is selected, the web interface 256 presents a map area 274 that shows an AOI 276 and a number of crowds 278 through 282 currently located within the AOI 276. In addition, in this exemplary embodiment, crowds 284 and 286 that are outside the AOI 276 are also illustrated. The crowds 284 and 286 are crowds located at other POIs or within other AOIs of the subscriber 24 that are not currently being viewed by the subscriber 24. The subscriber 24 may view another POI or AOI by selecting the desired POI or AOI from a list presented in response to selection of a button 288. In this example, POIs and AOIs are generically referred to as watch zones.
  • In this example, the subscriber 24 selects the crowd 278. In response, the web interface 256 presents an aggregate profile window 290 to the subscriber 24, as illustrated in FIG. 33B. The aggregate profile window 290 presents an aggregate profile of the crowd 278, where in this embodiment the aggregate profile is in the form of an interest histogram showing the number of user matches in the crowd 278 for each of a number of keywords. The subscriber 24 may be enabled to create an alert for the crowd 278 by selecting a create an alert button 292. In response, the subscriber 24 may be enabled to utilize the keywords in the aggregate profile window 290 to create an alert. For example, the subscriber 24 may create an alert such that the subscriber 24 is notified when the number of user matches for the keyword “Sushi” in the crowd 278 reaches one hundred. The subscriber 24 may also be enabled to create a report for the crowd 278 by selecting a create a report button 294. The report may, for example, include details about the crowd 278 such as, for example, the location of the crowd 278, the size of the crowd 278, the aggregate profile of the crowd 278, the current time and date, or the like, where the report may be saved or printed by the subscriber 24.
  • In addition, the subscriber 24 may be enabled to create a filter by selecting a create a filter button 296. In response to selecting the create a filter button 296, a new filter screen 298 is presented to the subscriber 24, as illustrated in FIG. 33C. The subscriber 24 may then select keywords from the interest histogram for the crowd 278 to be used for the filter. In addition, the subscriber 24 may be enabled to add new keywords to the filter by selecting an add keywords button 300. Once the subscriber 24 has configured the filter, the subscriber 24 is enabled to create the filter by selecting a create button 302. Once the filter is created, the filter may be used to filter crowds for any AOI or POI of the subscriber 24.
  • FIGS. 34 through 45 describe the operation of the crowd analyzer 58 of the MAP server 12 to characterize crowds according to another embodiment of the present disclosure. More specifically, the crowd analyzer 58 may determine a degree-of-fragmentation, best and worst case average DOS, and/or a degree of bidirectionality for crowds. This information may then be included in crowd data for those crowds returned to the mobile devices 18-1 through 18-N and/or the subscriber device 22. In addition or alternatively, the data characterizing crowds may be used to filter crowds. For example, a filter may be applied such that crowds having a worst-case average DOS greater than a defined threshold are not presented to a user/subscriber. The filtering may be performed by the MAP server 12 before returning crowd data to the requesting device (i.e., one of the mobile devices 18-1 through 18-N, the subscriber device 22, or a device hosting the third-party service 26). Alternatively, the filtering may be performed by the mobile devices 18-1 through 18-N, the subscriber device 22, or a device hosting the third-party service 26.
  • FIG. 34 is a flow chart illustrating a spatial crowd fragmentation process according to one embodiment of the present disclosure. This process is similar to the spatial crowd formation process discussed above with respect to FIG. 22. First, the crowd analyzer 58 creates a crowd fragment of one user for each user in a crowd (step 2800). Note that this spatial crowd fragmentation process may be performed reactively in response to a current request for crowd data for a POI or an AOI or performed proactively. Next, the crowd analyzer 58 determines the two closest crowd fragments in the crowd (step 2802) and a distance between the two closest crowd fragments (step 2804). The distance between the two closest crowd fragments is the distance between the crowd fragment centers of the two closest crowd fragments. The crowd fragment center for a crowd fragment having only one user is the current location of that one user.
  • The crowd analyzer 58 then determines whether the distance between the two closest crowd fragments is less than an optimal inclusion distance for a crowd fragment (step 2806). In one embodiment, the optimal inclusion distance for a crowd fragment is a predefined static value. In another embodiment, the optimal inclusion distance of the crowd may vary. For example, if the spatial crowd formation process of FIGS. 24A through 24D is used for proactive crowd formation, then the optimal inclusion distance for the crowd may vary. As such, the optimal inclusion distance for a crowd fragment within the crowd may be defined as a fraction of the optimal inclusion distance of the crowd such that the optimal inclusion distance for a crowd fragment within the crowd varies along with the optimal inclusion distance for the crowd itself.
  • If the distance between the two closest crowd fragments is less than the optimal inclusion distance for a crowd fragment, then the two closest crowd fragments are combined (step 2808) and a new crowd fragment center is computed for the resulting crowd fragment (step 2810). The crowd fragment center may be computed using, for example, a center of mass algorithm. At this point the process returns to step 2802 and is repeated. Once the two closest crowd fragments in the crowd are separated by more than the optimal inclusion distance for a crowd fragment, the process ends. At this point, the crowd analyzer 58 has created the crowd fragments or defined the crowd fragments for the crowd. The crowd analyzer 58 may then represent the degree of fragmentation of the crowd based on the number of crowd fragments in the crowd and, optionally, an average number of users per crowd fragment. The degree of fragmentation of the crowd may be included in the crowd data returned to the requesting device in response to a crowd request for a POI or an AOI to which the crowd is relevant.
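• A minimal Python sketch of this spatial crowd fragmentation process is given below, assuming planar user locations and a simple center-of-mass calculation for crowd fragment centers; the function names and the exhaustive pairwise search are illustrative, not a definitive implementation.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def centroid(points: List[Point]) -> Point:
    """Center of mass of a fragment's member locations."""
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

def spatial_fragments(user_locations: List[Point],
                      optimal_inclusion_distance: float) -> List[List[Point]]:
    """Start with one fragment per user, then repeatedly merge the two closest
    fragments until their centers are separated by at least the optimal
    inclusion distance for a crowd fragment."""
    fragments = [[loc] for loc in user_locations]
    while len(fragments) > 1:
        best_pair = None
        best_dist = math.inf
        for i in range(len(fragments)):
            for j in range(i + 1, len(fragments)):
                d = math.dist(centroid(fragments[i]), centroid(fragments[j]))
                if d < best_dist:
                    best_dist, best_pair = d, (i, j)
        if best_dist >= optimal_inclusion_distance:
            break  # the two closest fragments are too far apart to combine
        i, j = best_pair
        fragments[i].extend(fragments[j])
        del fragments[j]
    return fragments
```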
  • FIGS. 35A and 35B graphically illustrate the spatial crowd fragmentation process of FIG. 34 for an exemplary crowd 304 having a bounding box 305. FIG. 35A illustrates the crowd 304 before spatial crowd fragmentation. FIG. 35B illustrates the crowd 304 after spatial crowd fragmentation. As illustrated, after spatial crowd fragmentation, the crowd 304 includes a number of crowd fragments 306 through 314. As such, the crowd 304 has a degree of fragmentation of five crowd fragments with an average of approximately two users per crowd fragment. Thus, the crowd 304 has a moderately high degree of fragmentation. The highest degree of fragmentation for the crowd 304 would be to have eleven crowd fragments with an average of one user per crowd fragment. The lowest degree of fragmentation for the crowd 304 would be to have one crowd fragment with an average of eleven users per crowd fragment.
  • FIG. 36 illustrates a connectivity-based crowd fragmentation process according to one embodiment of the present disclosure. First, the crowd analyzer 58 creates a crowd fragment for each user in the crowd (step 2900). Note that this connectivity-based crowd fragmentation process may be performed reactively in response to a current request for crowd data for a POI or an AOI or performed proactively. Next, the crowd analyzer 58 selects a next pair of crowd fragments in the crowd (step 2902) and then selects one user from each of those crowd fragments (step 2904). The crowd analyzer 58 then determines a DOS between the users from the pair of crowd fragments (step 2906). More specifically, as will be appreciated by one of ordinary skill in the art, DOS is a measure of the degree to which the two users are related in a social network (e.g., the Facebook® social network, the MySpace® social network, or the LinkedIN® social network). The two users have a DOS of one if one of the users is a friend of the other user, a DOS of two if one of the users is a friend of a friend of the other user, a DOS of three if one of the users is a friend of a friend of a friend of the other user, etc. If the two users are not related in a social network or have an unknown DOS, the DOS for the two users is set to a value equal to or greater than the maximum DOS for a crowd fragment.
  • The crowd analyzer 58 then determines whether the DOS between the two users is less than a predefined maximum DOS for a crowd fragment (step 2908). For example, the predefined maximum DOS may be three. However, other maximum DOS values may be used to achieve the desired crowd fragmentation. If the DOS between the two users is not less than the predefined maximum DOS, the process proceeds to step 2916. If the DOS between the two users is less than the predefined maximum DOS, the crowd analyzer 58 determines whether a bidirectionality requirement is satisfied (step 2910). The bidirectionality requirement specifies whether the relationship between the two users must be bidirectional (i.e., the first user must directly or indirectly know the second user and the second user must directly or indirectly know the first user). Bidirectionality may or may not be required depending on the particular embodiment. If the two users satisfy the bidirectionality requirement, the crowd analyzer 58 combines the pair of crowd fragments (step 2912) and computes a new crowd fragment center for the resulting crowd fragment (step 2914). The process then returns to step 2902 and is repeated for a next pair of crowd fragments. If the two users do not satisfy the bidirectionality requirement, the process proceeds to step 2916.
  • At this point, whether proceeding from step 2908 or step 2910, the crowd analyzer 58 determines whether all user pairs from the two crowd fragments have been processed (step 2916). If not, the process returns to step 2904 and is repeated for a new pair of users from the two crowd fragments. If all user pairs from the two crowd fragments have been processed, the crowd analyzer 58 then determines whether all crowd fragments have been processed (step 2918). If not, the process returns to step 2902 and is repeated until all crowd fragments have been processed. Once this process is complete, the crowd analyzer 58 has determined the number of crowd fragments in the crowd. The degree of fragmentation of the crowd may then be provided as the number of crowd fragments and the average number of users per crowd fragment.
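• The connectivity-based fragmentation can be sketched as follows, assuming callback functions that report the DOS between two users and whether one user lists another as a friend; these callbacks, the function names, and the default maximum DOS of three are assumptions made for illustration.

```python
from typing import Callable, List, Optional

def connectivity_fragments(users: List[str],
                           degree_of_separation: Callable[[str, str], Optional[int]],
                           knows: Callable[[str, str], bool],
                           max_dos: int = 3,
                           require_bidirectional: bool = True) -> List[List[str]]:
    """Start with one fragment per user and merge two fragments whenever some
    user pair across them has a DOS below the maximum (and, if required, a
    bidirectional relationship), repeating until no more merges are possible."""
    fragments = [[u] for u in users]
    merged = True
    while merged:
        merged = False
        for i in range(len(fragments)):
            for j in range(i + 1, len(fragments)):
                if _should_merge(fragments[i], fragments[j], degree_of_separation,
                                 knows, max_dos, require_bidirectional):
                    fragments[i].extend(fragments[j])
                    del fragments[j]
                    merged = True
                    break
            if merged:
                break
    return fragments

def _should_merge(frag_a, frag_b, degree_of_separation, knows,
                  max_dos, require_bidirectional) -> bool:
    """True if any user pair across the two fragments satisfies the DOS and
    (optional) bidirectionality requirements."""
    for a in frag_a:
        for b in frag_b:
            dos = degree_of_separation(a, b)
            if dos is None or dos >= max_dos:
                continue  # unknown or too distant
            if require_bidirectional and not (knows(a, b) and knows(b, a)):
                continue
            return True
    return False
```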
  • FIGS. 37A and 37B graphically illustrate the connectivity-based crowd fragmentation process of FIG. 36. FIG. 37A illustrates a crowd 316 having a number of users and a bounding box 317. FIG. 37B illustrates the crowd 316 after the connectivity-based crowd fragmentation process has been performed. As illustrated, there are three crowd fragments resulting from the connectivity-based crowd fragmentation process. Namely, crowd fragment A has four users marked as “A,” crowd fragment B has five users marked as “B,” and crowd fragment C has three users marked as “C.” As illustrated, the users in a particular crowd fragment may not be close to one another spatially since, in this embodiment, there is no spatial requirement for users of the crowd fragment other than that the users of the crowd fragment are in the same crowd.
  • FIG. 38 is a flow chart illustrating a recursive crowd fragmentation process that uses both spatial crowd fragmentation and connectivity-based crowd fragmentation according to one embodiment of the present disclosure. First, the crowd analyzer 58 performs a spatial crowd fragmentation process to create a number of crowd fragments for a crowd (step 3000). The spatial crowd fragmentation process may be the spatial crowd fragmentation process of FIG. 34. The crowd analyzer 58 then selects a next crowd fragment of the crowd fragments created for the crowd (step 3002). Next, the crowd analyzer 58 performs a connectivity-based crowd fragmentation process to create a number of sub-fragments for the crowd fragment of the crowd (step 3004). The connectivity-based crowd fragmentation process may be the connectivity-based crowd fragmentation process of FIG. 36. The crowd analyzer 58 then determines whether the last crowd fragment of the crowd has been processed (step 3006). If not, the process returns to step 3002 and is repeated until the last crowd fragment of the crowd has been processed. At that point, the process is complete. The degree of fragmentation for the crowd may then include the number of sub-fragments and average number of users per sub-fragment for each crowd fragment.
  • FIG. 39 is a flow chart illustrating a recursive crowd fragmentation process that uses both spatial crowd fragmentation and connectivity-based crowd fragmentation according to another embodiment of the present disclosure. First, the crowd analyzer 58 performs a connectivity-based crowd fragmentation process to create a number of crowd fragments for a crowd (step 3100). The connectivity-based crowd fragmentation process may be the connectivity-based crowd fragmentation process of FIG. 36. The crowd analyzer 58 then selects a next crowd fragment of the crowd fragments created for the crowd (step 3102). Next, the crowd analyzer 58 performs a spatial crowd fragmentation process to create a number of sub-fragments for the crowd fragment of the crowd (step 3104). The spatial crowd fragmentation process may be the spatial crowd fragmentation process of FIG. 34. The crowd analyzer 58 then determines whether the last crowd fragment of the crowd has been processed (step 3106). If not, the process returns to step 3102 and is repeated until the last crowd fragment of the crowd has been processed. At that point, the process is complete. The degree of fragmentation for the crowd may then include the number of sub-fragments and average number of users per sub-fragment for each crowd fragment.
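• The recursive processes of FIGS. 38 and 39 compose the two fragmentation steps; a schematic sketch is shown below, where the two callables are assumed wrappers around spatial and connectivity-based fragmentation routines such as those sketched earlier, and the summary format for the degree of fragmentation is illustrative.

```python
from typing import Callable, Dict, List

Fragmenter = Callable[[List[str]], List[List[str]]]

def recursive_fragmentation(crowd_users: List[str],
                            first_pass: Fragmenter,
                            second_pass: Fragmenter) -> List[List[List[str]]]:
    """Fragment the crowd with the first pass, then split each resulting
    fragment with the second pass. Passing the spatial fragmenter first gives
    the FIG. 38 variant; reversing the order gives the FIG. 39 variant."""
    return [second_pass(fragment) for fragment in first_pass(crowd_users)]

def degree_of_fragmentation(sub_fragments: List[List[List[str]]]) -> Dict[str, float]:
    """Summary of the degree of fragmentation: the number of sub-fragments and
    the average number of users per sub-fragment."""
    flat = [sub for fragment in sub_fragments for sub in fragment]
    total_users = sum(len(sub) for sub in flat)
    return {"sub_fragments": len(flat),
            "avg_users_per_sub_fragment": total_users / len(flat) if flat else 0.0}
```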
  • FIGS. 40A and 40B illustrate an exemplary graphical representation of the degree of fragmentation for a crowd. This exemplary graphical representation may be presented by the MAP application 32-1 based on corresponding crowd data provided by the MAP server 12 in response to a crowd request or presented by the MAP server 12 to the subscriber 24 via the web browser 38 of the subscriber device 22. FIG. 40A illustrates a graphical representation of the degree of fragmentation for a crowd having two crowd fragments with an average of twenty-five users per crowd fragment. FIG. 40B illustrates a graphical representation of the degree of fragmentation for a crowd having twenty-five crowd fragments with an average of two users per crowd fragment.
  • FIG. 41 is a flow chart for a process for determining a best-case and worst-case average DOS for a crowd fragment of a crowd according to one embodiment of the present disclosure. The crowd analyzer 58 counts the number of 1 DOS, 2 DOS, . . . , M DOS relationships in a crowd fragment (step 3200) and the number of user pairs in the crowd fragment for which explicit relationships are not defined or known (step 3202). More specifically, for each pair of users in the crowd fragment, the crowd analyzer 58 determines the DOS between the pair of users if the DOS between the pair of users is known or determines that the DOS between the pair of users is not defined or known if the DOS between the pair of users is in fact not defined or known. Based on these determinations, the crowd analyzer 58 counts the number of user pairs having a DOS of 1, the number of user pairs having a DOS of 2, etc. In addition, the crowd analyzer 58 counts the number of user pairs for which no relationship is defined or known.
  • The crowd analyzer 58 then computes a best-case average DOS for the crowd fragment using a best-case DOS for the user pairs in the crowd fragment for which explicit relationships are not defined (step 3204). In this embodiment, the best-case DOS is 1. The best-case average DOS may be computed as:
  • $$\mathrm{AverageDOS}_{\mathrm{BestCase}} = \frac{\sum_{i=1}^{M} \left( i \cdot \mathrm{DOS\_count}_i \right) + \mathrm{DOS}_{\mathrm{BestCase}} \cdot \mathrm{Num\_Unknown}}{\sum_{i=1}^{M} \left( \mathrm{DOS\_count}_i \right) + \mathrm{Num\_Unknown}},$$
  • where $\mathrm{AverageDOS}_{\mathrm{BestCase}}$ is the best-case average DOS for the crowd fragment, $\mathrm{DOS\_count}_i$ is the number of user pairs for the $i$th DOS, $\mathrm{DOS}_{\mathrm{BestCase}}$ is the best-case DOS, and $\mathrm{Num\_Unknown}$ is the number of user pairs for which a relationship is not defined or is unknown.
  • The crowd analyzer 58 also computes the worst-case average DOS for the crowd fragment using a worst-case DOS for the user pairs in the crowd fragment for which explicit relationships are not defined (step 3206). In this embodiment, the worst-case DOS is a greatest possible DOS that the crowd analyzer 58 considers, which may be, for example, a DOS of greater than or equal to 7. For instance, the worst-case DOS may be 10. However, other values for the worst-case DOS may be used. The worst-case average DOS may be computed as:
  • $$\mathrm{AverageDOS}_{\mathrm{WorstCase}} = \frac{\sum_{i=1}^{M} \left( i \cdot \mathrm{DOS\_count}_i \right) + \mathrm{DOS}_{\mathrm{WorstCase}} \cdot \mathrm{Num\_Unknown}}{\sum_{i=1}^{M} \left( \mathrm{DOS\_count}_i \right) + \mathrm{Num\_Unknown}},$$
  • where $\mathrm{AverageDOS}_{\mathrm{WorstCase}}$ is the worst-case average DOS for the crowd fragment, $\mathrm{DOS\_count}_i$ is the number of user pairs for the $i$th DOS, $\mathrm{DOS}_{\mathrm{WorstCase}}$ is the worst-case DOS, and $\mathrm{Num\_Unknown}$ is the number of user pairs for which a relationship is not defined or is unknown.
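• A worked sketch of these two computations is given below; the dictionary of per-DOS pair counts, the function name, and the example numbers are illustrative assumptions.

```python
from typing import Dict

def average_dos(dos_counts: Dict[int, int],
                num_unknown: int,
                assumed_dos_for_unknown: int) -> float:
    """Average DOS for a crowd fragment where user pairs with no known
    relationship are assigned an assumed DOS (1 for the best case, a large
    value such as 10 for the worst case)."""
    known_pairs = sum(dos_counts.values())
    total_pairs = known_pairs + num_unknown
    if total_pairs == 0:
        return 0.0
    weighted = sum(dos * count for dos, count in dos_counts.items())
    return (weighted + assumed_dos_for_unknown * num_unknown) / total_pairs

# Example: 4 pairs at DOS 1, 3 pairs at DOS 2, and 5 pairs with unknown DOS.
counts = {1: 4, 2: 3}
best_case = average_dos(counts, num_unknown=5, assumed_dos_for_unknown=1)    # 1.25
worst_case = average_dos(counts, num_unknown=5, assumed_dos_for_unknown=10)  # 5.0
```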
  • FIG. 42 is a more detailed flow chart illustrating the process for determining a best-case and worst-case average DOS for a crowd fragment according to one embodiment of the present disclosure. First, the crowd analyzer 58 selects the next user in the crowd fragment, which for the first iteration is the first user in the crowd fragment (step 3300), and clears a found member list (step 3302). The crowd analyzer 58 then sets a current DOS to one (step 3304). Next, the crowd analyzer 58 selects a next friend of the user (step 3306). Note that, in one embodiment, information identifying the friends of the user is obtained from the one or more profile servers 14 along with the user profile of the user. The crowd analyzer 58 then determines whether the friend of the user is also a member of the crowd fragment (step 3308). If not, the process proceeds to step 3314. If the friend is also a member of the crowd fragment, the crowd analyzer 58 determines whether the friend is already in the found member list (step 3310). If so, the process proceeds to step 3314. If the friend is also a member of the crowd fragment and is not already in the found member list, the crowd analyzer 58 increments a found count for the current DOS and adds the friend to the found member list (step 3312). At this point, whether proceeding from step 3308 or step 3310, the crowd analyzer 58 then determines whether the user has more friends to process (step 3314). If so, the process returns to step 3306 and is repeated for the next friend of the user.
  • Once all of the friends of the user have been processed, the crowd analyzer 58 performs steps 3306 through 3314 recursively for each newly found friend, incrementing the current DOS for each recursion, up to a maximum number of recursions (step 3316). Newly found friends are friends added to the found member list in the iteration or recursion of steps 3306 through 3314 just completed. In more general terms, steps 3306 through 3316 operate to find friends of the user selected in step 3300 that are also members of the crowd fragment and increment the found count for a DOS of 1 for each of the found friends of the user. Then, for each friend of the user that was found to also be a member of the crowd fragment, the crowd analyzer 58 finds friends of that friend of the user that are also members of the crowd fragment and increments the found count for a DOS of 2 for each of the found friends of the friend of the user. The process continues in this manner to count the number of user relationships between the user selected in step 3300 and other members in the crowd fragment up to the Mth DOS.
  • Next, the crowd analyzer 58 determines a count of users in the crowd fragment that were not found as being directly or indirectly related to the user selected in step 3300 (step 3318). More specifically, by looking at the found member list and the total number of users in the crowd fragment, the crowd analyzer 58 is enabled to determine the count of users in the crowd fragment that were not found as being directly or indirectly related to the user.
  • At this point, the crowd analyzer 58 determines whether there are more users in the crowd fragment to process (step 3320). If so, the process returns to step 3300 and is repeated for the next user in the crowd fragment. Once all of the users in the crowd fragment have been processed, the crowd analyzer 58 computes a best-case average DOS for the crowd fragment (step 3322). Again, in one embodiment, the best-case average DOS for the crowd fragment is computed as:
  • $$\mathrm{AverageDOS}_{\mathrm{BestCase}} = \frac{\sum_{i=1}^{M} \left( i \cdot \mathrm{found\_count}_{\mathrm{DOS}_i} \right) + \mathrm{DOS}_{\mathrm{BestCase}} \cdot \mathrm{Num\_Unknown}}{\sum_{i=1}^{M} \left( \mathrm{found\_count}_{\mathrm{DOS}_i} \right) + \mathrm{Num\_Unknown}},$$
  • where $\mathrm{AverageDOS}_{\mathrm{BestCase}}$ is the best-case average DOS for the crowd fragment, $\mathrm{found\_count}_{\mathrm{DOS}_i}$ is the found count for the $i$th DOS, $\mathrm{DOS}_{\mathrm{BestCase}}$ is the best-case DOS which may be set to, for example, 1, and $\mathrm{Num\_Unknown}$ is the total count of user pairs in the crowd fragment that were not found as being directly or indirectly related.
  • In addition, the crowd analyzer 58 computes a worst-case average DOS for the crowd fragment (step 3324). Again, in one embodiment, the worst-case average DOS for the crowd fragment is computed as:
  • $$\mathrm{AverageDOS}_{\mathrm{WorstCase}} = \frac{\sum_{i=1}^{M} \left( i \cdot \mathrm{found\_count}_{\mathrm{DOS}_i} \right) + \mathrm{DOS}_{\mathrm{WorstCase}} \cdot \mathrm{Num\_Unknown}}{\sum_{i=1}^{M} \left( \mathrm{found\_count}_{\mathrm{DOS}_i} \right) + \mathrm{Num\_Unknown}},$$
  • where $\mathrm{AverageDOS}_{\mathrm{WorstCase}}$ is the worst-case average DOS for the crowd fragment, $\mathrm{found\_count}_{\mathrm{DOS}_i}$ is the found count for the $i$th DOS, $\mathrm{DOS}_{\mathrm{WorstCase}}$ is the worst-case DOS which may be set to, for example, 10, and $\mathrm{Num\_Unknown}$ is the total count of user pairs in the crowd fragment that were not found as being directly or indirectly related. At this point the process is complete and the best-case and worst-case average DOS for the crowd fragment may be returned as part of the crowd data for the corresponding crowd. It should be noted that while the processes of FIGS. 41 and 42 were described above as being performed on a crowd fragment, the same processes may be performed on a crowd in order to determine a best-case and worst-case average DOS for the crowd.
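• The per-user search of FIG. 42 can be sketched as a bounded breadth-first traversal over friend links restricted to members of the crowd fragment, as shown below; the friends_of mapping is an assumed stand-in for friend data obtained from the profile servers, and the returned counts mirror the per-user accounting of the flow chart.

```python
from typing import Dict, Set, Tuple

def dos_found_counts(fragment_members: Set[str],
                     friends_of: Dict[str, Set[str]],
                     max_dos: int) -> Tuple[Dict[int, int], int]:
    """For each user in the crowd fragment, count the other members found at
    each DOS (1..max_dos) by expanding friend links through fragment members
    only, and accumulate the number of members never found (the unknowns)."""
    found_counts = {dos: 0 for dos in range(1, max_dos + 1)}
    num_unknown = 0
    for user in fragment_members:
        found: Set[str] = set()
        frontier = {user}
        for dos in range(1, max_dos + 1):
            next_frontier: Set[str] = set()
            for current in frontier:
                for friend in friends_of.get(current, set()):
                    if (friend in fragment_members and friend != user
                            and friend not in found):
                        found.add(friend)
                        found_counts[dos] += 1
                        next_frontier.add(friend)
            frontier = next_frontier
        # Members of the fragment never reached from this user are "unknown".
        num_unknown += len(fragment_members) - 1 - len(found)
    return found_counts, num_unknown
```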
  • FIGS. 43A through 43D illustrate an exemplary graphical representation of the best-case and worst-case average DOS for a crowd fragment according to one embodiment of the present disclosure. Such graphical representations may be presented to the mobile users 20-1 through 20-N by the MAP applications 32-1 through 32-N or presented to the subscriber 24 by the MAP server 12 via the web browser 38 at the subscriber device 22 based on data included in the crowd data for corresponding crowds. FIG. 43A illustrates the graphical representation for a crowd fragment wherein all users in the crowd fragment are friends with one another. As such, both the best-case and worst-case average DOS for the crowd fragment are 1. FIG. 43B illustrates the graphical representation for a crowd fragment wherein the best-case average DOS is 2 and the worst-case average DOS is 3. FIG. 43C illustrates the graphical representation for a crowd fragment wherein the best-case average DOS is 4 and the worst-case average DOS is greater than 7. Lastly, FIG. 43D illustrates the graphical representation for a crowd fragment wherein the best-case average DOS is 6 and the worst-case average DOS is 7. Again, while in these examples the graphical representations are for the best-case and worst-case average DOS for a crowd fragment, best-case and worst-case average DOS for a crowd may additionally or alternatively be computed by the MAP server 12 and presented to the users 20-1 through 20-N or the subscriber 24.
  • FIG. 44 is a flow chart for a process of determining a degree of bidirectionality of relationships between users in a crowd fragment according to one embodiment of the present disclosure. Note, however, that this same process may be used to determine a degree of bidirectionality of relationships between users in a crowd. First, the crowd analyzer 58 selects the next user in a crowd fragment, which for the first iteration is the first user in the crowd fragment (step 3400). The crowd analyzer 58 then selects the next friend of the user (step 3402). Again, note that friends of the users 20-1 through 20-N may have previously been obtained from the one or more profile servers 14 along with the user profiles of the users 20-1 through 20-N and provided to the MAP server 12. The crowd analyzer 58 then determines whether the friend of the user is a member of the crowd fragment (step 3404). If not, the process proceeds to step 3412. If the friend of the user is a member of the crowd fragment, the crowd analyzer 58 increments a connection count (step 3406). In addition, the crowd analyzer 58 determines whether the relationship between the user and the friend is bidirectional (step 3408). In other words, the crowd analyzer 58 determines whether the user is also a friend of that friend. If not, the process proceeds to step 3412. If so, the crowd analyzer 58 increments a bidirectional count (step 3410).
  • At this point, whether proceeding from step 3404, step 3408, or step 3410, the crowd analyzer 58 determines whether the user has more friends to process (step 3412). If so, the process returns to step 3402 and is repeated for the next friend of the user. Once all of the friends of the user have been processed, the crowd analyzer 58 determines whether there are more users in the crowd fragment (step 3414). If so, the process returns to step 3400 and is repeated for the next user in the crowd fragment. Once steps 3402 through 3412 have been performed for all of the users in the crowd fragment, the crowd analyzer 58 computes a ratio of the bidirectional count (i.e., the number of bidirectional friend relationships) over the connection count (i.e., the number of unidirectional and bidirectional friend relationships) for the crowd fragment (step 3416). At this point, the process ends. In this embodiment, the ratio of the bidirectionality count to the connection count reflects the degree of bidirectionality of friendship relationships for the crowd fragment and may be returned to the requesting user or subscriber in the crowd data for the corresponding crowd.
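• A minimal sketch of the bidirectionality computation of FIG. 44 follows; the friends_of mapping is an assumed representation of the (possibly one-way) friend lists obtained from the profile servers, and the function name is illustrative.

```python
from typing import Dict, Set

def bidirectionality_ratio(fragment_members: Set[str],
                           friends_of: Dict[str, Set[str]]) -> float:
    """Ratio of bidirectional friend relationships to all friend relationships
    among members of a crowd fragment, following the per-user accounting of
    FIG. 44 (each link is examined from both endpoints)."""
    connection_count = 0
    bidirectional_count = 0
    for user in fragment_members:
        for friend in friends_of.get(user, set()):
            if friend not in fragment_members:
                continue
            connection_count += 1
            if user in friends_of.get(friend, set()):
                bidirectional_count += 1
    return bidirectional_count / connection_count if connection_count else 0.0
```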
  • FIGS. 45A through 45C illustrate an exemplary graphical representation of the degree of bidirectionality of friendship relationships for a crowd fragment according to one embodiment of the present disclosure. Note that this graphical representation may also be used to present the degree of bidirectionality of friendship relationships for a crowd. FIG. 45A illustrates the graphical representation for a crowd having a ratio of bidirectional friend relationships to total friend relationships of approximately 0.5. FIG. 45B illustrates the graphical representation for a crowd having a ratio of bidirectional friend relationships to total friend relationships of approximately 0.2. FIG. 45C illustrates the graphical representation for a crowd having a ratio of bidirectional friend relationships to total friend relationships of approximately 0.95. Graphical representations such as those in FIGS. 45A through 45C may be presented to the mobile users 20-1 through 20-N by the MAP applications 32-1 through 32-N or presented to the subscriber 24 by the MAP server 12 via the web browser 38 at the subscriber device 22 based on data included in the crowd data for corresponding crowds.
  • FIGS. 46 through 51 describe embodiments of the present disclosure where confidence levels for the current locations of users in a crowd are determined and utilized to provide a quality level for the aggregate profile for the crowd and/or confidence levels for individual keywords included in the aggregate profile for the crowd. In general, in many implementations, the current locations of the users 20-1 through 20-N are not updated instantaneously or even substantially instantaneously. There are many reasons why the current locations of the users 20-1 through 20-N are not and possibly cannot be updated instantaneously. For example, battery life and performance limitations, non-continuous network connectivity, platform limitations such as the inability to run applications in the background, and security architectures (e.g., J2ME MIDP2.0 security architecture) may all limit the ability of the mobile devices 18-1 through 18-N to provide continuous location updates to the MAP server 12. As a result, the users 20-1 through 20-N may move from their current locations stored by the MAP server 12 well before corresponding location updates are received by the MAP server 12. For instance, if the user 20-1 turns the mobile device 18-1 off, then the mobile device 18-1 is unable to send location updates for the user 20-1. As such, the current location stored for the user 20-1 at the MAP server 12 will no longer be accurate if the user 20-1 moves to a new location while the mobile device 18-1 is off.
  • FIGS. 46 through 51 describe embodiments where the contribution of the user profiles of the users 20-1 through 20-N to aggregate profiles of corresponding crowds is modified based on an amount of time that has expired since receiving location updates for the users 20-1 through 20-N. More specifically, FIG. 46 is a flow chart for a process for generating a quality level for an aggregate profile for a crowd according to one embodiment of the present disclosure. As discussed above, the crowd analyzer 58 of the MAP server 12 creates an aggregate profile for one or more crowds relevant to a POI or an AOI in response to a crowd request from a requestor (i.e., one of the users 20-1 through 20-N, the subscriber 24, or the third-party service 26). Depending on the particular embodiment, the aggregate profile may be generated based on comparisons of the user profiles of the users in the crowd to a user profile or a select subset of the user profile of a requesting user (e.g., one of the users 20-1 through 20-N for which the aggregate profile is generated), comparisons of the user profiles of the users in the crowd to a target user profile, or comparisons of the user profiles of the users in the crowd to one another. Using the following process, the crowd analyzer 58 can generate a quality level for the aggregate profile for one or more such crowds. Note that the quality level for the aggregate profile of a crowd may also be viewed as a quality level for the crowd itself particularly where a spatial crowd formation process has been used to form the crowd.
  • First, the crowd analyzer 58 of the MAP server 12 computes confidence levels for the current locations of the users in the crowd (step 3500). In one embodiment, the confidence level for the current location of a user ranges from 0 to 1, where the confidence level is set to 1 when the current location is updated and then linearly decreases to 0 over some desired period of time. As such, the confidence level of the current location of a user may be computed based on the following equation:

  • $$\mathrm{CL}_{\mathrm{LOCATION}} = -\Delta t \cdot \mathrm{DR} + \mathrm{CL}_{\mathrm{LOCATION,PREVIOUS}},$$
  • where $\mathrm{CL}_{\mathrm{LOCATION}}$ is the confidence level of the current location of the user, $\Delta t$ is an amount of time that has elapsed since the confidence level of the current location of the user was last computed, $\mathrm{DR}$ is a predefined decrease rate or rate at which the confidence level is to decrease over time, and $\mathrm{CL}_{\mathrm{LOCATION,PREVIOUS}}$ is the previous confidence level of the current location of the user. The decrease rate (DR) is preferably selected such that the confidence level (CL) of the current location of the user will decrease from 1 to 0 over a desired amount of time. Note that the decrease rate (DR) may be defined separately for each user or may be the same for all users. If defined separately, the decrease rate (DR) for a user may be defined once and re-used or defined on a case-by-case basis based on the user's current and past locations, profile, history, or the like. The desired amount of time may be any desired amount of time such as, but not limited to, a desired number of hours. As an example, the desired amount of time may be 12 hours, and the corresponding decrease rate (DR) is 1/12 if time is measured in hours and 1/(12×60×60×1000) if time is measured in milliseconds. Note that the MAP server 12 stores the confidence level (CL) of the user, a timestamp indicating when the confidence level (CL) was computed, and optionally a timestamp indicating when the current location of the user was last updated. This information may be stored in the user record for the user. Alternatively, only the timestamp of the last location update is stored in the user record for the user. If the initial confidence level (CL) varies per user, the initial confidence level (CL) is also stored in the user record. The current confidence level (CL) is determined whenever it is needed by retrieving the last location update timestamp from the user record, determining an amount of elapsed time between the current time and the time of the last location update, and calculating the new confidence level based on the decrease rate (DR) and the initial confidence level (CL). Also note that while the confidence levels of the current locations of the users in the crowd are computed using a linear algorithm in the exemplary embodiment described above, nonlinear algorithms may alternatively be used.
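• The linear decay described above can be sketched as shown below; the hour-based units and the 12-hour default follow the example, while the function name and the clamping of the result to the range 0 to 1 are assumptions.

```python
def location_confidence(elapsed_hours: float,
                        decrease_rate_per_hour: float = 1.0 / 12.0,
                        initial_confidence: float = 1.0) -> float:
    """Linear decay of the confidence level of a user's current location: the
    level starts at the initial confidence when a location update is received
    and decreases to zero at the chosen rate (here, over 12 hours by default)."""
    confidence = initial_confidence - elapsed_hours * decrease_rate_per_hour
    return max(0.0, min(1.0, confidence))

# Example: 6 hours after the last location update, the confidence level is 0.5.
print(location_confidence(6.0))  # 0.5
```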
  • When computing the confidence levels for the current locations of the users in the crowds, the crowd analyzer 58 may also consider location confidence events. Note that timestamps of such location confidence events and the location confidence events themselves may also be stored to enable correct calculation of the confidence levels. The location confidence events may include negative location confidence events such as, but not limited to, the passing of a known closing time of a business (e.g., restaurant, bar, shopping mall, etc.) at which a user is located or movement of a crowd with which a user has a high affinity. The location confidence events may additionally or alternatively include positive location confidence events such as, but not limited to, frequent interaction with the corresponding MAP application by the user. Frequent interaction with the MAP application by the user may be indicated by reception of frequent location updates for the user. Note that, in addition to or as an alternative to using location confidence events, other information such as location profiles, event information (e.g., live music event, open-mic night, etc.), current and past crowd histories, or the like may be used when computing the confidence levels for the current locations of the users in the crowds.
  • The manner in which the crowd analyzer 58 handles positive and/or negative location confidence events when computing the confidence levels of the users in the crowd may vary. In one embodiment, in response to detecting a negative location confidence event with respect to a user, the crowd analyzer 58 may increase the decrease rate (DR) used to compute the confidence level (CL) of the current location of the user. Similarly, in response to detecting a positive location confidence event with respect to a user, the crowd analyzer 58 may decrease the decrease rate (DR) used to compute the confidence level (CL) of the current location of the user or replace the decrease rate (DR) with an increase rate such that the confidence level of the user increases in response to the location confidence event or while the location confidence event continues (e.g., increase while the user frequently interacts with the MAP application).
  • In another embodiment, in response to detecting a negative location confidence event with respect to a user, the crowd analyzer 58 may decrease the confidence level (CL) of the current location of the user by a predefined amount. For example, if the negative location event is the passing of a closing time of a business at which the user is located, the crowd analyzer 58 may decrease the confidence level (CL) of the user to zero. Similarly, in response to detecting a positive location confidence event with respect to a user, the crowd analyzer 58 may increase the confidence level (CL) of the current location of the user by a predefined amount. For example, in response to detecting that the user is frequently interacting with the MAP application at his mobile device, the crowd analyzer 58 may increase the confidence level (CL) of the current location of the user by 0.1.
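For illustration, one way the crowd analyzer 58 might apply both kinds of adjustment described above is sketched below; the specific amounts (dropping the confidence level to zero and doubling the decrease rate on a negative event, adding 0.1 and halving the decrease rate on a positive event) are assumptions, not values from the disclosure.

```python
def apply_location_confidence_event(cl, decrease_rate, event):
    """Return an adjusted (confidence level, decrease rate) pair for one event.

    Illustrative policy only: a negative event (e.g., a business closing time passing)
    zeroes the confidence level and doubles the decrease rate; a positive event
    (e.g., frequent MAP application interaction) adds 0.1 and halves the decrease rate.
    """
    if event == "negative":
        return 0.0, decrease_rate * 2.0
    if event == "positive":
        return min(1.0, cl + 0.1), decrease_rate / 2.0
    return cl, decrease_rate
```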
  • Once the confidence levels of the current locations of the users in the crowd are computed, the crowd analyzer 58 determines a quality level for the aggregate profile of the crowd (step 3502). In one embodiment, the quality level for the crowd is computed as an average of the confidence levels of the current locations of the users in the crowd. The quality level of the aggregate profile may then be provided along with the aggregate profile in the crowd data for the crowd returned to the requestor.
  • FIG. 47 illustrates an exemplary GUI 318 for presenting an aggregate profile 320 for a crowd and a quality level 322 of the aggregate profile 320 generated using the process of FIG. 46 according to one embodiment of the present disclosure. FIG. 48 illustrates another exemplary GUI 324 for presenting an aggregate profile 326 for a crowd and a quality level 328 of the aggregate profile 326 generated using the process of FIG. 46 according to one embodiment of the present disclosure. However, in the GUI 324, the aggregate profile 326 also indicates a relative number of user matches for each of a number of keywords in the aggregate profile 326. More specifically, in a keyword area 330 of the GUI 324, the sizes of the keywords indicate the relative number of user matches for the keywords. Therefore, in this example, the keyword "books" has a larger number of user matches than the keyword "politics," as indicated by the size, or font size, of the two keywords in the keyword area 330 of the GUI 324.
  • FIG. 49 illustrates a flow chart for a process for generating confidence factors for keywords included in an aggregate profile for a crowd based on confidence levels for current locations of users in the crowd according to one embodiment of the present disclosure. As discussed above, the crowd analyzer 58 creates an aggregate profile for one or more crowds relevant to a POI or an AOI in response to a crowd request from a requestor (i.e., one of the users 20-1 through 20-N, the subscriber 24, or the third-party service 26). Depending on the particular embodiment, the aggregate profile may be generated based on comparisons of the user profiles of the users in the crowd to a user profile or a select subset of the user profile of a requesting user (e.g., one of the users 20-1 through 20-N for which the aggregate profile is generated), comparisons of the user profiles of the users in the crowd to a target user profile, or comparisons of the user profiles of the users in the crowd to one another. As also discussed above, in one embodiment, the aggregate profile for a crowd includes a number of user matches for each of a number of keywords and/or a ratio of the number of user matches to the total number of users in the crowd for each of a number of keywords.
  • In order to generate confidence factors for each keyword in an aggregate profile for a crowd, the crowd analyzer 58 of the MAP server 12 computes confidence levels for the current locations of the users in the crowd (step 3600). The confidence levels for the current locations of the users may be computed as discussed above with respect to step 3500 of FIG. 46. In general, the confidence levels for the current locations of the users may be computed based on an amount of time since the current location of the user was last updated, location confidence events, or both. Once the confidence levels of the current locations of the users in the crowd are computed, the crowd analyzer 58 determines a confidence level for each keyword in the aggregate profile of the crowd based on the confidence levels for the current locations of the corresponding users (step 3602). In one embodiment, for each keyword, the confidence level for the keyword is computed as an average of the confidence levels of the current locations of the users in the crowd having user profiles including the keyword. In other words, for each keyword, there are a number of user matches. The confidence levels of the current locations of the users corresponding to the user matches for the keyword are averaged to provide the confidence level for the keyword.
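Both the quality level of step 3502 and the per-keyword confidence levels of step 3602 are simple averages, as the following sketch shows; the dictionary-based representation of user matches is an assumption made for illustration.

```python
from statistics import mean

def crowd_quality_level(user_confidences):
    """Quality level of the aggregate profile: average location confidence over the crowd."""
    return mean(user_confidences.values())

def keyword_confidence_levels(keyword_matches, user_confidences):
    """Per-keyword confidence: average confidence of the users whose profiles matched it."""
    return {kw: mean(user_confidences[u] for u in users)
            for kw, users in keyword_matches.items() if users}

user_confidences = {"alice": 0.9, "bob": 0.4, "carol": 0.7}
keyword_matches = {"books": ["alice", "bob"], "politics": ["bob"], "photography": ["alice", "carol"]}
print(crowd_quality_level(user_confidences))                         # about 0.67
print(keyword_confidence_levels(keyword_matches, user_confidences))  # books 0.65, politics 0.4, photography 0.8
```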
  • FIG. 50 illustrates an exemplary GUI 332 for presenting an aggregate profile 334 for a crowd including an indication of a confidence level for each of a number of keywords in the aggregate profile 334 according to one embodiment of the present disclosure. More specifically, in this embodiment, the aggregate profile 334 includes a quality level 336 of the aggregate profile 334 generated using the process of FIG. 46. However, the quality level 336 of the aggregate profile 334 is optional. The GUI 332 includes a keyword area 338 that graphically illustrates the keywords in the aggregate profile 334 and the confidence levels of the keywords. In this embodiment, the confidence levels of the keywords are graphically indicated via opacity of the keywords in the keyword area 338. The lighter the text of the keyword, the lesser the confidence level of the keyword. Conversely, the darker the text of the keyword, the greater the confidence level of the keyword. Thus, in this example, the confidence level for the keyword “books” is greater than the confidence level of the keyword “politics,” and the confidence level of the keyword “photography” is greater than the confidence levels of the keywords “books” and “politics.” In addition, in this embodiment, the size of the keywords in the keyword area 338 is indicative of the number of user matches for the keywords, as discussed above with respect to FIG. 48. Note that in an alternative embodiment, the size of the keywords in the keyword area 338 may be indicative of the confidence levels of the keywords rather than the number of user matches for the keywords.
  • FIG. 51 graphically illustrates modification of the confidence level of the current location of a user according to one embodiment of the present disclosure. As illustrated, at time 0, a location update for the user is received by the MAP server 12 and, as such, the confidence level of the current location of the user is set to 1. At time 1, a positive location confidence event is detected. This positive location confidence event may be detected when, for example, the crowd analyzer 58 is generating an aggregate profile for a crowd in which the user is included and the user has been frequently interacting with the MAP application of his mobile device. As a result of the positive location confidence event, in this embodiment, the confidence level for the current location of the user at time 1 is computed using an increase rate (i.e., a positive rate of change) rather than a decrease rate (DR). As such, the confidence level of the current location of the user increases from time 0 to time 1 as shown. Alternatively, in response to the positive location confidence event, the confidence level for the current location of the user at time 1 may be increased by a predefined amount such as, for example, 0.1 points. Next, at time 2, another positive location confidence event is detected. As a result of this second positive location confidence event, in this embodiment, the increase rate is further increased, and the confidence level for the current location of the user at time 2 is computed using the new increase rate. As such, the confidence level of the current location of the user further increases from time 1 to time 2. Alternatively, in response to the positive location confidence event, the confidence level for the current location of the user at time 2 may be further increased by the predefined amount such as, for example, 0.1 points.
  • At time 3, the confidence level of the current location of the user is updated. The confidence level of the current location of the user may be updated by the crowd analyzer 58 before generating an aggregate profile for a crowd in which the user is included. In this example, since a location confidence event is not detected at time 3, the confidence level for the current location of the user is computed based on the previous confidence level computed at time 2 and a predefined decrease rate. As such, the confidence level for the current location of the user at time 3 is less than the confidence level for the current location of the user at time 2.
  • At time 4, a negative location confidence event is detected. As a result, in this example, the decrease rate is increased, and the confidence level for the current location of the user at time 4 is computed based on the new decrease rate. As such, the confidence level for the current location of the user at time 4 is less than the confidence level for the current location of the user at time 3. Based on the new decrease rate, the confidence level for the current location of the user continues to decrease until reaching 0 at approximately 4.5 hours after time 0. Alternatively, in response to the negative location confidence event, the confidence level for the current location of the user at time 4 may be decreased by a predefined amount in addition to or as an alternative to decreasing the confidence level by an amount determined by the amount of time that has elapsed between time 3 and time 4 and the decrease rate.
  • FIG. 52 illustrates an exemplary third-party application 34-1 that utilizes data from the MAP server 12 to control sharing of a number of sharable items 340 according to one embodiment of the present disclosure. In this embodiment, the third-party application 34-1 is generally any type of application that enables sharing of the sharable items 340 and is preferably implemented in software. As illustrated, the third-party application 34-1 includes the sharable items 340 and a MAP gatekeeper module 342 (hereinafter “gatekeeper module 342”). The sharable items 340 may be any type of digital item such as, for example, a user profile of the user 20-1, a component of a user profile of the user 20-1, a media item, or the like. Here, the user profile of the user 20-1 may be the same user profile used by the MAP server 12 for the user 20-1, the same user profile of the user 20-1 that is obtained from the profile server 14 and processed by the MAP server 12 to provide the user profile used by the MAP server 12, or a different user profile defined by the user 20-1 for the third-party application 34-1. As used herein, a media item is a video item such as, for example, a user-generated video, a television program, a movie, or a video clip; an audio item such as, for example, a song, a podcast, or an audio clip; a picture (i.e., a digital image); or the like.
  • The gatekeeper module 342 is preferably implemented in software. In general, the gatekeeper module 342 enables the third-party application 34-1 to control sharing of the sharable items 340 based on data obtained from the MAP server 12, where this data is also referred to herein as MAP data. In this embodiment, the gatekeeper module 342 includes a number of sharing rules 344 and a MAP resolution component 346 (hereinafter referred to as “resolution component 346”). The sharing rules 344 may be manually defined by the user 20-1 or automatically configured by the third-party application 34-1. Each of the sharing rules 344 is associated with, or mapped to, one or more of the sharable items 340 and defines conditions under which the sharable item(s) 340 to which it is mapped are to be shared. At least one of the sharing rules 344, but potentially all of the sharing rules 344, is based on MAP data obtained from the MAP server 12. As used herein, the MAP data includes historical aggregate profile data relevant to the current location of the user 20-1, aggregate profile data for current crowds relevant to the current location of the user 20-1, crowd characteristics of current crowds relevant to the current location of the user 20-1, or any combination thereof.
  • Specifically, in one embodiment, each of the sharing rules 344 identifies the sharable item 340 to which the sharing rule 344 applies, a sharing action, and a predicate that defines when the sharing action is to be performed. Note that while the discussion herein primarily focuses on the embodiment where each of the sharing rules 344 is mapped to one of the sharable items 340, the sharing rules 344 are not limited thereto. Each of the sharing rules 344 may be mapped to one or more of the sharable items 340. The sharing action may be, for example, permit sharing, deny sharing, or prompt the user 20-1 for a decision as to whether the corresponding sharable item 340 is to be shared. The predicate of at least one, but potentially all, of the sharing rules 344 is based on one or more MAP data elements from the MAP server 12. Thus, as an example, one of the sharing rules 344 may be:
  • First Name; Permit Sharing; if the best-case average DOS <5, where First Name is the first name of the user 20-1 and is the sharable item 340 for which the sharing rule 344 is defined, Permit Sharing is the sharing action, and the best-case average DOS <5 is the predicate for the sharing rule 344. According to this sharing rule 344, the first name of the user 20-1 (i.e., the sharable item 340) will be shared if the best-case average DOS for the crowd(s) currently located at or near the current location of the user 20-1 is less than five (5). The best-case average DOS is referred to herein as the MAP data, or more specifically the MAP data element, upon which the sharing rule 344 is based. Note that while the predicate in this example is based on a single MAP data element, the present disclosure is not limited thereto. The predicate may be based on one or more MAP data elements.
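A sharing rule of this item/action/predicate form might be represented as in the sketch below; the class name, field names, and the best_case_avg_dos key are hypothetical and only illustrate the idea.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SharingRule:
    sharable_item: str                 # e.g., "First Name"
    sharing_action: str                # "permit", "deny", or "prompt"
    predicate: Callable[[dict], bool]  # evaluated against MAP data elements

# Example rule: permit sharing of the first name if the best-case average DOS is below 5.
rule = SharingRule(
    sharable_item="First Name",
    sharing_action="permit",
    predicate=lambda map_data: map_data.get("best_case_avg_dos", float("inf")) < 5,
)

map_data = {"best_case_avg_dos": 3.2}
if rule.predicate(map_data):
    print(rule.sharing_action, rule.sharable_item)  # permit First Name
```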
  • While some of the sharing rules 344 are based on MAP data, some of the other sharing rules 344 may be based on other types of data provided by or otherwise accessible to the third-party application 34-1. For example, the third-party application 34-1 may be a social networking application, and the sharing rule 344 for one of the sharable items 340 may state that the sharable item 340 is to be shared with a particular sharing partner if that sharing partner is within three DOS from the user 20-1 in the social network associated with the social networking application. Still further, some of the sharing rules 344 may be based on both MAP data and other data provided by or otherwise accessible to the third-party application 34-1.
  • Note that the sharing rules 344 do not necessarily include sharing rules for all of the sharable items 340. A default sharing action (e.g., permit sharing, deny sharing, or prompt user) may be used for any of the sharable items 340 for which a sharing rule is not defined. It should also be noted that a particular sharable item 340 may have more than one sharing rule 344 in which case the sharing rules 344 for the sharable item 340 are preferably prioritized.
  • The resolution component 346 is preferably implemented in software. In general, the resolution component 346 operates to resolve the sharing rules 344. Specifically, for the sharing rules 344 that are based on MAP data, the resolution component 346 operates to obtain the MAP data needed for the sharing rules 344 from the MAP server 12 and resolve the sharing rules 344 based on the MAP data. In this embodiment, the resolution component 346 obtains the MAP data needed to resolve the sharing rules 344 from the MAP server 12 via the MAP client 30-1. However, in another embodiment, the functionality of the MAP client 30-1 needed to obtain the MAP data from the MAP server 12 may be incorporated into the resolution component 346 such that the resolution component 346 may obtain the MAP data directly from the MAP server 12 via the network 28.
  • As stated above, in this embodiment, the third-party application 34-1 is generally any type of application that shares the sharable items 340 with other users such as, for example, the users 20-2 through 20-N of the other mobile devices 18-2 through 18-N. As an example, the third-party application 34-1 may be a social networking application, where the sharable items 340 include a user profile of the user 20-1 or components of the user profile of the user 20-1. As another example, the third-party application 34-1 may be a media sharing application, where the sharable items 340 are media items such as, for example, video items (e.g., user-generated videos, television programs, movies, or video clips), audio items (e.g., songs, podcasts, or audio clips), pictures (i.e., digital images), or the like.
  • FIG. 53 illustrates the operation of the third-party application 34-1 of FIG. 52 within the system 10 of FIG. 1 according to one embodiment of the present disclosure. As illustrated, first, the third-party application 34-1 configures the sharing rules 344 (step 3700). In one embodiment, the sharing rules 344 are configured manually by the user 20-1. More specifically, the third-party application 34-1 may enable the user 20-1 to configure the sharing rule 344 for the sharable item 340 by selecting a desired sharing action and defining a desired predicate for the sharing rule 344. The desired sharing action may be manually defined by the user 20-1 or selected from a system-defined list of possible sharing actions. The desired predicate may be defined by enabling the user 20-1 to select a desired MAP data element from a system-defined list of available MAP data elements and a logical condition for when the desired sharing action is to be performed based on the desired MAP data element. The available MAP data elements may vary depending on the particular implementation. In general, the available MAP data elements may include crowd characteristics such as degree of fragmentation, worst-case average DOS, best-case average DOS, degree of bidirectionality of social network relationships, or the like. In addition or alternatively, the available MAP data elements may include elements of a historical aggregate profile generated for the current location of the user 20-1 such as, for example, an average number of user matches over all keywords in the user profile of the user 20-1, an average ratio of user matches to a total number of users over all keywords in the user profile of the user 20-1, an average number of user matches for each individual keyword in the user profile of the user 20-1, an average ratio of user matches to total users for each individual keyword in the user profile of the user 20-1, or the like. Still further, the available MAP data elements may additionally or alternatively include elements of aggregate profile data for current crowds relevant to the current location of the user 20-1 such as, for example, a number of user matches over all keywords in the user profile of the user 20-1, a ratio of user matches to a total number of users over all keywords in the user profile of the user 20-1, a number of user matches for each individual keyword in the user profile of the user 20-1, a ratio of user matches to total users for each individual keyword in the user profile of the user 20-1, or the like.
  • In another embodiment, the sharing rules 344 may be configured automatically by the third-party application 34-1. The details of automatically configuring the sharing rules 344 are provided below with respect to FIG. 54. Once automatically configured, the sharing rules 344 may be modified by the user 20-1 as desired. In yet another embodiment, some of the sharing rules 344 may be manually configured by the user 20-1 while others are automatically configured by the third-party application 34-1.
  • At some point after the sharing rules 344 are configured, the third-party application 34-1, and specifically the resolution component 346, sends a MAP data request to the MAP client 30-1 (step 3702). More specifically, when the third-party application 34-1 desires to resolve one or more of the sharing rules 344, the resolution component 346 then determines whether the one or more sharing rules 344 are based on MAP data. If so, the resolution component 346 sends a request (i.e., the MAP data request) to the MAP client 30-1 for the particular MAP data elements needed to resolve the one or more sharing rules 344. The MAP client 30-1 then sends the MAP data request to the MAP server 12 (step 3704).
  • In response to the MAP data request, the MAP server 12 obtains the requested MAP data (step 3706). The MAP server 12 obtains the requested MAP data in the manner described above. For example, if the requested MAP data is historical aggregate profile data for the current location of the user 20-1, the MAP server 12 generates the historical aggregate profile in the manner described above. Once the MAP server 12 has obtained the requested MAP data, the MAP server 12 returns the MAP data to the MAP client 30-1 (step 3708). The MAP client 30-1 then returns the MAP data to the third-party application 34-1 (step 3710).
  • Upon receiving the MAP data, the third-party application 34-1, and specifically the resolution component 346, resolves the one or more sharing rules 344 based on the MAP data to provide corresponding resolution results (step 3712). For each of the one or more sharing rules 344, resolving the sharing rule 344 generally refers to determining whether the corresponding sharable item 340 is to be shared based on the MAP data according to the sharing rule 344. More specifically, in one embodiment, resolving the sharing rule 344 includes determining whether the sharing action defined by the sharing rule 344 is to be performed based on the MAP data and the predicate defined by the sharing rule 344. The results of resolving the one or more sharing rules 344 identify whether sharing of the corresponding sharable items 340 is permitted or denied or whether the user 20-1 is to be prompted for a final decision regarding whether the corresponding sharable items 340 are to be shared.
  • Lastly, the third-party application 34-1 controls sharing of the one or more sharable items 340 for which the one or more sharing rules 344 were resolved based on the results of resolving the one or more sharing rules 344 (step 3714). Thus, for example, if the results of resolving the one or more sharing rules 344 indicate that a particular sharable item 340 is not to be shared, then the third-party application 34-1 does not share that sharable item 340. In contrast, if the results indicate that the sharable item 340 is to be shared, then the third-party application 34-1 permits sharing of that sharable item 340. Finally, if the results indicate that the user 20-1 is to be prompted, then the third-party application 34-1 prompts the user 20-1 for a decision as to whether sharing of the sharable item 340 is permitted. If the user 20-1 chooses to permit sharing, then the third-party application 34-1 permits sharing of the sharable item 340. Otherwise, the third-party application 34-1 does not permit sharing of the sharable item 340.
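Steps 3702 through 3714 might then be sketched roughly as follows, reusing the SharingRule sketch above; the map_client.get_map_data() call and the deny-by-default fallback are assumptions and do not describe the actual MAP client interface.

```python
def resolve_sharing_rules(rules, map_client):
    """Obtain MAP data once and resolve each rule against it (sketch of steps 3702-3712)."""
    map_data = map_client.get_map_data()  # hypothetical stand-in for the MAP client request
    results = {}
    for rule in rules:
        if rule.predicate(map_data):
            results[rule.sharable_item] = rule.sharing_action  # permit / deny / prompt
        else:
            results[rule.sharable_item] = "deny"               # assumed default when the predicate fails
    return results

def control_sharing(results, prompt_user):
    """Share only the items whose resolution permits it (sketch of step 3714)."""
    shared = []
    for item, action in results.items():
        if action == "permit" or (action == "prompt" and prompt_user(item)):
            shared.append(item)
    return shared
```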
  • FIG. 54 is a flow chart illustrating the operation of the third-party application 34-1 to automatically configure the sharing rules 344 for the sharable items 340 according to one embodiment of the present disclosure. As illustrated, first, the third-party application 34-1, and more specifically the gatekeeper module 342, retrieves one of the sharable items 340 (step 3800). For this retrieval step, the gatekeeper module 342 may obtain the sharable item 340 and/or, if available, metadata describing the sharable item 340. In general, the metadata describing the sharable item 340 may include any data describing the sharable item 340 that may be used to configure, or create, a sharing rule 344 for the sharable item 340.
  • Next, the gatekeeper module 342 determines whether the sharable item 340 corresponds to, or matches, any of the available MAP data elements based on the sharable item 340 itself, metadata describing the sharable item 340, or both (step 3802). If not, the process proceeds to step 3806. If the sharable item 340 corresponds to an available MAP data element, the gatekeeper module 342 creates a sharing rule 344 for the sharable item 340 based on the corresponding MAP data element (step 3804). More specifically, for this automatic sharing rule configuration process, the available MAP data elements preferably include: (1) historical aggregate profile data elements for the average number of user matches for each individual keyword in the user profile of the user 20-1, (2) historical aggregate profile data elements for an average ratio of user matches to total users for each individual keyword in the user profile of the user 20-1, (3) aggregate profile data elements for the number of user matches for each individual keyword in the user profile of the user 20-1 for crowd(s) located at or near the current location of the user 20-1, and/or (4) aggregate profile data elements for the ratio of user matches to total users for each individual keyword in the user profile of the user 20-1 for crowd(s) located at or near the current location of the user 20-1. The gatekeeper module 342 compares the sharable item 340 and/or the metadata describing the sharable item 340 to the available MAP data elements to determine whether the sharable item 340 matches any of the available MAP data elements. If so, the gatekeeper module 342 then creates the sharing rule 344 for the sharable item 340 based on the matching MAP data element(s). For instance, the sharing rule 344 for the sharable item 340 may be set to a predefined default sharing rule for the matching MAP data element(s).
  • As an example, assume that the sharable item 340 is the keyword NCSU (North Carolina State University) from a user profile of the user 20-1 used by the third-party application 34-1, which may or may not be the same as the user profile of the user 20-1 used by the MAP server 12. Then, in this example, the gatekeeper module 342 compares the keyword NCSU to the available MAP data elements. If the user profile of the user 20-1 used by the MAP server 12 also includes the keyword NCSU and the available MAP data elements include a number of user matches for the keyword NCSU for crowd(s) located at or near the current location of the user 20-1, then there is a match. As such, a sharing rule 344 will be automatically configured for the keyword NCSU such that a default sharing action (e.g., Permit Sharing) will be performed in response to satisfaction of a default predicate (e.g., if number of user matches >5).
  • As another example, assume that the sharable item 340 is the song "I Gotta Feeling" by the Black Eyed Peas and that the song is legally sharable. In this example, the gatekeeper module 342 compares metadata describing the song (e.g., title and artist) to the available MAP data elements. If any of the available MAP data elements match the song, a sharing rule 344 for the song will be automatically configured. So, if, for example, the user profile of the user 20-1 used by the MAP server 12 includes the keyword Black Eyed Peas and the available MAP data includes the number of user matches for the keyword Black Eyed Peas for crowd(s) currently located at or near the current location of the user 20-1, then there is a match. As such, a sharing rule 344 for the song is automatically configured, or created, using a default sharing action (e.g., Permit Sharing) and a default predicate (e.g., if ratio of user matches to total users >50%).
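Under these examples, the automatic configuration of steps 3800 through 3804 might be sketched as below, assuming exact keyword matching and the default action and predicate shown (permit sharing when there are more than five user matches); all names are illustrative.

```python
def default_predicate(map_data, element):
    """Assumed default predicate: more than five user matches for the matching element."""
    return map_data.get(element, 0) > 5

def auto_configure_rules(sharable_items, available_map_elements):
    """Create a default SharingRule for each sharable item that matches a MAP data element."""
    rules = []
    for item in sharable_items:
        # Exact match against the available MAP data elements; see the ontology-based
        # variant sketched after the next paragraph for non-exact matches.
        if item in available_map_elements:
            rules.append(SharingRule(
                sharable_item=item,
                sharing_action="permit",
                predicate=lambda map_data, element=item: default_predicate(map_data, element),
            ))
    return rules

rules = auto_configure_rules(["NCSU", "Black Eyed Peas", "gardening"],
                             available_map_elements={"NCSU", "Black Eyed Peas"})
```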
  • Note that while exact matching between the sharable item 340 and a MAP data element is used in the examples above, the present disclosure is not limited thereto. A match may also occur if the sharable item 340 is sufficiently related to a MAP data element. For example, if the sharable item 340 is a song by Fergie and the available MAP data elements include a number of user matches for Black Eyed Peas, then a determination may be made that there is a match. In order to determine that closely related terms match for purposes of this automatic sharing rule configuration process, an ontology or similar data structure that defines relationships between terms, together with processing such as, for example, natural language processing, may be used. Using such techniques, the sharable item 340 and a MAP data element may be determined to be matching if there is a direct match or if they are related within a predefined number of DOS within the ontology.
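One simple way to treat closely related terms as matching, along the lines suggested above, is a breadth-first search over an ontology graph with a hop (DOS) limit; the tiny ontology below is purely illustrative.

```python
from collections import deque

ONTOLOGY = {"Fergie": ["Black Eyed Peas"], "Black Eyed Peas": ["Fergie", "pop music"]}

def related_within(term_a, term_b, max_dos, ontology=ONTOLOGY):
    """True if term_b equals term_a or is reachable from it within max_dos hops."""
    queue, seen = deque([(term_a, 0)]), {term_a}
    while queue:
        term, dos = queue.popleft()
        if term == term_b:
            return True
        if dos < max_dos:
            for neighbor in ontology.get(term, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append((neighbor, dos + 1))
    return False

print(related_within("Fergie", "Black Eyed Peas", max_dos=1))  # True
```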
  • At this point, whether proceeding from step 3802 or 3804, the gatekeeper module 342 determines whether there are more sharable items 340 to be processed (step 3806). If so, the process returns to step 3800 and is repeated for the next sharable item 340. Once all of the sharable items 340 have been processed, the process ends.
  • FIG. 55 is a flow chart illustrating the operation of the third-party application 34-1 of FIG. 52 according to one embodiment of the present disclosure. Once the sharing rules 344 have been configured, the third-party application 34-1 checks for sharing partners (step 3900). As used herein, a sharing partner is a user of another device with which the user 20-1 is enabled to share the sharable items 340 via the third-party application 34-1. The third-party application 34-1 then determines whether a sharing partner has been found (step 3902). If not, the process returns to step 3900 and is repeated.
  • If a sharing partner is found, the gatekeeper module 342 utilizes the sharing rules 344 to determine which, if any, of the sharable items 340 are permitted to be shared at this time. More specifically, the resolution component 346 retrieves or otherwise obtains one of the sharing rules 344 (step 3904). Next, the resolution component 346 determines whether the sharing rule 344 is based on MAP data from the MAP server 12 (step 3906). If not, the process proceeds to step 3910. If the sharing rule 344 is based on MAP data, the resolution component 346 obtains the MAP data needed to resolve the sharing rule 344 (step 3908). More specifically, in one embodiment, the resolution component 346 obtains the MAP data needed to resolve the sharing rule 344 from the MAP server 12 via the MAP client 30-1.
  • At this point, whether proceeding from step 3906 or 3908, the resolution component 346 resolves the sharing rule 344 to provide a corresponding result and adds the result to a set of compiled results for the resolutions of the sharing rules 344 (steps 3910 and 3912). Generally, the result of resolving the sharing rule 344 indicates whether sharing of the sharable item 340 is permitted or denied. In some embodiments, the result may state that the user 20-1 is to be prompted for a decision as to whether sharing is permitted or denied.
  • Next, the resolution component 346 determines whether there are more sharing rules 344 to process (step 3914). If so, the process returns to step 3904 and is repeated for the next sharing rule 344. Once all of the sharing rules 344 have been processed, the resolution component 346 reports the compiled results for the resolutions of the sharing rules 344 to the third-party application 34-1 (step 3916). The third-party application 34-1 then shares the sharable items 340 according to the results of the resolutions of the sharing rules 344 (step 3918). Thus, for example, if the result of the sharing rule 344 for one of the sharable items 340 is to permit sharing, then the third-party application 34-1 permits sharing of the sharable item 340. The process then returns to step 3900 and is repeated when a new sharing partner is found, or detected.
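The per-partner loop of FIG. 55 can reuse the resolution sketch given earlier; find_sharing_partner, share, and prompt_user are placeholder callables standing in for however the third-party application 34-1 detects partners, transfers items, and prompts the user 20-1.

```python
def sharing_loop(rules, map_client, find_sharing_partner, share, prompt_user):
    """Sketch of FIG. 55: when a sharing partner is found, resolve the rules and share accordingly."""
    while True:
        partner = find_sharing_partner()  # placeholder; returns None until a partner is detected
        if partner is None:
            continue
        results = resolve_sharing_rules(rules, map_client)
        for item in control_sharing(results, prompt_user):
            share(partner, item)
```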
  • FIG. 56 is a block diagram of the MAP server 12 according to one embodiment of the present disclosure. As illustrated, the MAP server 12 includes a controller 348 connected to memory 350, one or more secondary storage devices 352, and a communication interface 354 by a bus 356 or similar mechanism. The controller 348 is a microprocessor, digital Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA), or the like. In this embodiment, the controller 348 is a microprocessor, and the application layer 40, the business logic layer 42, and the object mapping layer 62 (FIG. 2) are implemented in software and stored in the memory 350 for execution by the controller 348. Further, the datastore 64 (FIG. 2) may be implemented in the one or more secondary storage devices 352. The secondary storage devices 352 are digital data storage devices such as, for example, one or more hard disk drives. The communication interface 354 is a wired or wireless communication interface that communicatively couples the MAP server 12 to the network 28 (FIG. 1). For example, the communication interface 354 may be an Ethernet interface, local wireless interface such as a wireless interface operating according to one of the suite of IEEE 802.11 standards, or the like.
  • FIG. 57 is a block diagram of the mobile device 18-1 according to one embodiment of the present disclosure. This discussion is equally applicable to the other mobile devices 18-2 through 18-N. As illustrated, the mobile device 18-1 includes a controller 358 connected to memory 360, a communication interface 362, one or more user interface components 364, and the location function 36-1 by a bus 366 or similar mechanism. The controller 358 is a microprocessor, digital ASIC, FPGA, or the like. In this embodiment, the controller 358 is a microprocessor, and the MAP client 30-1, the MAP application 32-1, and the third-party application(s) 34-1 are implemented in software and stored in the memory 360 for execution by the controller 358. In this embodiment, the location function 36-1 is a hardware component such as, for example, a GPS receiver. The communication interface 362 is a wireless communication interface that communicatively couples the mobile device 18-1 to the network 28 (FIG. 1). For example, the communication interface 362 may be a local wireless interface such as a wireless interface operating according to one of the suite of IEEE 802.11 standards, a mobile communications interface such as a cellular telecommunications interface, or the like. The one or more user interface components 364 include, for example, a touchscreen, a display, one or more user input components (e.g., a keypad), a speaker, or the like, or any combination thereof.
  • FIG. 58 is a block diagram of the subscriber device 22 according to one embodiment of the present disclosure. As illustrated, the subscriber device 22 includes a controller 368 connected to memory 370, one or more secondary storage devices 372, a communication interface 374, and one or more user interface components 376 by a bus 378 or similar mechanism. The controller 368 is a microprocessor, digital ASIC, FPGA, or the like. In this embodiment, the controller 368 is a microprocessor, and the web browser 38 (FIG. 1) is implemented in software and stored in the memory 370 for execution by the controller 368. The one or more secondary storage devices 372 are digital storage devices such as, for example, one or more hard disk drives. The communication interface 374 is a wired or wireless communication interface that communicatively couples the subscriber device 22 to the network 28 (FIG. 1). For example, the communication interface 374 may be an Ethernet interface, local wireless interface such as a wireless interface operating according to one of the suite of IEEE 802.11 standards, a mobile communications interface such as a cellular telecommunications interface, or the like. The one or more user interface components 376 include, for example, a touchscreen, a display, one or more user input components (e.g., a keypad), a speaker, or the like, or any combination thereof.
  • FIG. 59 is a block diagram of a computing device 380 operating to host the third-party service 26 according to one embodiment of the present disclosure. The computing device 380 may be, for example, a physical server. As illustrated, the computing device 380 includes a controller 382 connected to memory 384, one or more secondary storage devices 386, a communication interface 388, and one or more user interface components 390 by a bus 392 or similar mechanism. The controller 382 is a microprocessor, digital ASIC, FPGA, or the like. In this embodiment, the controller 382 is a microprocessor, and the third-party service 26 is implemented in software and stored in the memory 384 for execution by the controller 382. The one or more secondary storage devices 386 are digital storage devices such as, for example, one or more hard disk drives. The communication interface 388 is a wired or wireless communication interface that communicatively couples the computing device 380 to the network 28 (FIG. 1). For example, the communication interface 388 may be an Ethernet interface, local wireless interface such as a wireless interface operating according to one of the suite of IEEE 802.11 standards, a mobile communications interface such as a cellular telecommunications interface, or the like. The one or more user interface components 390 include, for example, a touchscreen, a display, one or more user input components (e.g., a keypad), a speaker, or the like, or any combination thereof.
  • Those skilled in the art will recognize improvements and modifications to the preferred embodiments of the present invention. All such improvements and modifications are considered within the scope of the concepts disclosed herein and the claims that follow.

Claims (27)

What is claimed is:
1. A method of operation for a computing device of a user, comprising:
configuring a sharing rule for a sharable item stored by the computing device, the sharing rule being based on an element of aggregate profile data for a current location of the user;
obtaining the element of the aggregate profile data for the current location of the user;
resolving the sharing rule for the sharable item based on the element of the aggregate profile data for the current location of the user to provide a resolution result; and
sharing the sharable item according to the resolution result for the sharing rule.
2. The method of claim 1 wherein the aggregate profile data is an aggregate profile for one or more crowds of users currently relevant to the current location of the user.
3. The method of claim 2 wherein the element of the aggregate profile for the one or more crowds of users is a number of user matches over all keywords in a user profile of the user.
4. The method of claim 2 wherein the element of the aggregate profile for the one or more crowds of users is a number of user matches for a keyword in a user profile of the user.
5. The method of claim 2 wherein the element of the aggregate profile for the one or more crowds of users is a ratio of a number of user matches to a total number of users over all keywords in a user profile of the user.
6. The method of claim 2 wherein the element of the aggregate profile for the one or more crowds of users is a ratio of a number of user matches to a total number of users for a keyword in a user profile of the user.
7. The method of claim 2 wherein the one or more crowds of users currently relevant to the current location of the user are one or more crowds that are at least partially within a bounding region corresponding to the current location of the user.
8. The method of claim 1 wherein the aggregate profile data is a historical aggregate profile for the current location of the user.
9. The method of claim 8 wherein the element of the historical aggregate profile is an average number of user matches over all keywords in a user profile of the user for a defined time window.
10. The method of claim 8 wherein the element of the historical aggregate profile is an average number of user matches for a keyword in a user profile of the user for a defined time window.
11. The method of claim 8 wherein the element of the historical aggregate profile is an average ratio of a number of user matches to a total number of users over all keywords in a user profile of the user for a defined time window.
12. The method of claim 8 wherein the element of the historical aggregate profile is an average ratio of a number of user matches to a total number of users for a keyword in a user profile of the user for a defined time window.
13. The method of claim 1 wherein configuring the sharing rule for the sharable item comprises enabling the user to manually configure the sharing rule for the sharable item.
14. The method of claim 1 wherein configuring the sharing rule for the sharable item comprises automatically configuring the sharing rule for the sharable item.
15. The method of claim 14 wherein automatically configuring the sharing rule comprises:
identifying one of a plurality of available elements of the aggregate profile data that corresponds to the sharable item; and
setting the sharing rule for the sharable item to a default sharing rule for the one of the plurality of available elements of the aggregate profile data that corresponds to the sharable item.
16. The method of claim 1 wherein obtaining the element of the aggregate profile data for the current location of the user comprises obtaining the element of the aggregate profile data for the current location of the user from a remote server.
17. The method of claim 1 wherein the sharing rule is mapped to the sharable item and comprises a sharing action and a predicate that defines when the sharing action is to be performed based on the element of the aggregate profile data.
18. The method of claim 1 wherein the sharing rule for the sharable item is further based on at least one crowd characteristic of one or more crowds currently relevant to the current location of the user.
19. A computing device of a user, comprising:
a communication interface enabling the computing device to share a sharable item stored by the computing device with a computing device of another user; and
a controller associated with the communication interface and adapted to:
configure a sharing rule for the sharable item, the sharing rule being based on an element of aggregate profile data for a current location of the user;
obtain the element of the aggregate profile data for the current location of the user;
resolve the sharing rule for the sharable item based on the element of the aggregate profile data for the current location of the user to provide a resolution result; and
share the sharable item according to the resolution result for the sharing rule.
20. A computer readable medium storing software for instructing a controller of a computing device of a user to:
configure a sharing rule for a sharable item stored by the computing device, the sharing rule being based on an element of aggregate profile data for a current location of the user;
obtain the element of the aggregate profile data for the current location of the user;
resolve the sharing rule for the sharable item based on the element of the aggregate profile data for the current location of the user to provide a resolution result; and
share the sharable item according to the resolution result for the sharing rule.
21. A method of operation for a computing device of a user, comprising:
configuring a sharing rule for a sharable item stored by the computing device, the sharing rule being based on a crowd characteristic of one or more crowds of users currently relevant to a current location of the user;
obtaining the crowd characteristic for the one or more crowds of users;
resolving the sharing rule for the sharable item based on the crowd characteristic of the one or more crowds of users to provide a resolution result; and
sharing the sharable item according to the resolution result for the sharing rule.
22. The method of claim 21 wherein the crowd characteristic is a degree of fragmentation.
23. The method of claim 21 wherein the crowd characteristic is a best-case average Degree of Separation (DOS).
24. The method of claim 21 wherein the crowd characteristic is a worst-case average Degree of Separation (DOS).
25. The method of claim 21 wherein the crowd characteristic is a degree of bidirectionality of social network relationships.
26. A computing device of a user, comprising:
a communication interface enabling the computing device to share a sharable item stored by the computing device with a computing device of another user; and
a controller associated with the communication interface and adapted to:
configure a sharing rule for the sharable item, the sharing rule being based on a crowd characteristic of one or more crowds of users currently relevant to a current location of the user;
obtain the crowd characteristic for the one or more crowds of users;
resolve the sharing rule for the sharable item based on the crowd characteristic of the one or more crowds of users to provide a resolution result; and
share the sharable item according to the resolution result for the sharing rule.
27. A computer readable medium storing software for instructing a controller of a computing device of a user to:
configure a sharing rule for a sharable item stored by the computing device, the sharing rule being based on a crowd characteristic of one or more crowds of users currently relevant to a current location of the user;
obtain the crowd characteristic for the one or more crowds of users;
resolve the sharing rule for the sharable item based on the crowd characteristic of the one or more crowds of users to provide a resolution result; and
share the sharable item according to the resolution result for the sharing rule.
US12/694,551 2009-02-02 2010-01-27 System and method for information gatekeeper based on aggregate profile data Abandoned US20120041983A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/694,551 US20120041983A1 (en) 2009-02-02 2010-01-27 System and method for information gatekeeper based on aggregate profile data

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US14920509P 2009-02-02 2009-02-02
US22719209P 2009-07-21 2009-07-21
US23629609P 2009-08-24 2009-08-24
US12/694,551 US20120041983A1 (en) 2009-02-02 2010-01-27 System and method for information gatekeeper based on aggregate profile data

Publications (1)

Publication Number Publication Date
US20120041983A1 true US20120041983A1 (en) 2012-02-16

Family

ID=42398131

Family Applications (13)

Application Number Title Priority Date Filing Date
US12/645,556 Active 2031-08-01 US9397890B2 (en) 2009-02-02 2009-12-23 Serving a request for data from a historical record of anonymized user profile data in a mobile environment
US12/645,560 Active 2030-11-05 US8321509B2 (en) 2009-02-02 2009-12-23 Handling crowd requests for large geographic areas
US12/645,546 Abandoned US20100198917A1 (en) 2009-02-02 2009-12-23 Crowd formation for mobile device users
US12/645,535 Active 2031-04-24 US8495065B2 (en) 2009-02-02 2009-12-23 Maintaining a historical record of anonymized user profile data by location for users in a mobile environment
US12/645,532 Active 2031-01-30 US9098723B2 (en) 2009-02-02 2009-12-23 Forming crowds and providing access to crowd data in a mobile environment
US12/645,544 Active 2032-12-24 US8825074B2 (en) 2009-02-02 2009-12-23 Modifying a user's contribution to an aggregate profile based on time between location updates and external events
US12/645,539 Active 2030-09-30 US8208943B2 (en) 2009-02-02 2009-12-23 Anonymous crowd tracking
US12/691,008 Active 2037-02-04 US10530654B2 (en) 2009-02-02 2010-01-21 System and method for filtering and creating points-of-interest
US12/694,551 Abandoned US20120041983A1 (en) 2009-02-02 2010-01-27 System and method for information gatekeeper based on aggregate profile data
US13/613,666 Active 2031-10-09 US9515885B2 (en) 2009-02-02 2012-09-13 Handling crowd requests for large geographic areas
US13/919,174 Active US8918398B2 (en) 2009-02-02 2013-06-17 Maintaining a historical record of anonymized user profile data by location for users in a mobile environment
US14/457,499 Active US9092641B2 (en) 2009-02-02 2014-08-12 Modifying a user's contribution to an aggregate profile based on time between location updates and external events
US14/816,563 Active US9641393B2 (en) 2009-02-02 2015-08-03 Forming crowds and providing access to crowd data in a mobile environment

Family Applications Before (8)

Application Number Title Priority Date Filing Date
US12/645,556 Active 2031-08-01 US9397890B2 (en) 2009-02-02 2009-12-23 Serving a request for data from a historical record of anonymized user profile data in a mobile environment
US12/645,560 Active 2030-11-05 US8321509B2 (en) 2009-02-02 2009-12-23 Handling crowd requests for large geographic areas
US12/645,546 Abandoned US20100198917A1 (en) 2009-02-02 2009-12-23 Crowd formation for mobile device users
US12/645,535 Active 2031-04-24 US8495065B2 (en) 2009-02-02 2009-12-23 Maintaining a historical record of anonymized user profile data by location for users in a mobile environment
US12/645,532 Active 2031-01-30 US9098723B2 (en) 2009-02-02 2009-12-23 Forming crowds and providing access to crowd data in a mobile environment
US12/645,544 Active 2032-12-24 US8825074B2 (en) 2009-02-02 2009-12-23 Modifying a user's contribution to an aggregate profile based on time between location updates and external events
US12/645,539 Active 2030-09-30 US8208943B2 (en) 2009-02-02 2009-12-23 Anonymous crowd tracking
US12/691,008 Active 2037-02-04 US10530654B2 (en) 2009-02-02 2010-01-21 System and method for filtering and creating points-of-interest

Family Applications After (4)

Application Number Title Priority Date Filing Date
US13/613,666 Active 2031-10-09 US9515885B2 (en) 2009-02-02 2012-09-13 Handling crowd requests for large geographic areas
US13/919,174 Active US8918398B2 (en) 2009-02-02 2013-06-17 Maintaining a historical record of anonymized user profile data by location for users in a mobile environment
US14/457,499 Active US9092641B2 (en) 2009-02-02 2014-08-12 Modifying a user's contribution to an aggregate profile based on time between location updates and external events
US14/816,563 Active US9641393B2 (en) 2009-02-02 2015-08-03 Forming crowds and providing access to crowd data in a mobile environment

Country Status (1)

Country Link
US (13) US9397890B2 (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100197319A1 (en) * 2009-02-02 2010-08-05 Kota Enterprises, Llc Modifying a user's contribution to an aggregate profile based on time between location updates and external events
US20110072360A1 (en) * 2003-12-15 2011-03-24 J2 Global Communications Method and apparatus for automatically performing an online content distribution campaign
US20110153665A1 (en) * 2009-12-18 2011-06-23 Electronics And Telecommunications Research Institute Apparatus for providing social network service using relationship of ontology and method thereof
US20120066312A1 (en) * 2010-03-03 2012-03-15 Waldeck Technology, Llc Ad-hoc micro-blogging groups
US20120066067A1 (en) * 2009-12-22 2012-03-15 Waldeck Technology, Llc Fragmented advertisements for co-located social groups
US20120124176A1 (en) * 2010-11-11 2012-05-17 Teaneck Enterprises, Llc Automatic check-ins and status updates
US20120150870A1 (en) * 2010-12-10 2012-06-14 Ting-Yee Liao Image display device controlled responsive to sharing breadth
US8473512B2 (en) 2009-11-06 2013-06-25 Waldeck Technology, Llc Dynamic profile slice
US20130218969A1 (en) * 2012-02-16 2013-08-22 Gface Gmbh Method and system for associating user interests with zones and maps
US8554770B2 (en) 2009-04-29 2013-10-08 Waldeck Technology, Llc Profile construction using location-based aggregate profile information
US8589330B2 (en) 2009-03-25 2013-11-19 Waldeck Technology, Llc Predicting or recommending a users future location based on crowd data
US20140289872A1 (en) * 2013-03-25 2014-09-25 Samsung Electronics Co., Ltd. Data sharing control method and data sharing control terminal
US20150324389A1 (en) * 2014-05-12 2015-11-12 Naver Corporation Method, system and recording medium for providing map service, and file distribution system
US9763048B2 (en) 2009-07-21 2017-09-12 Waldeck Technology, Llc Secondary indications of user locations and use thereof by a location-based service
US20170272892A1 (en) * 2010-07-21 2017-09-21 Sensoriant, Inc. Allowing or disallowing access to resources based on sensor and state information
CN107809619A (en) * 2017-11-15 2018-03-16 株洲华通科技有限责任公司 A kind of method and gateway exchange system that outgoing access is realized by multimedia gateway
US9930522B2 (en) 2010-07-21 2018-03-27 Sensoriant, Inc. System and method for controlling mobile services using sensor information
US10096041B2 (en) 2012-07-31 2018-10-09 The Spoken Thought, Inc. Method of advertising to a targeted buyer
US10181148B2 (en) 2010-07-21 2019-01-15 Sensoriant, Inc. System and method for control and management of resources for consumers of information
US10390289B2 (en) 2014-07-11 2019-08-20 Sensoriant, Inc. Systems and methods for mediating representations allowing control of devices located in an environment having broadcasting devices
US10614473B2 (en) 2014-07-11 2020-04-07 Sensoriant, Inc. System and method for mediating representations with respect to user preferences
US10701165B2 (en) 2015-09-23 2020-06-30 Sensoriant, Inc. Method and system for using device states and user preferences to create user-friendly environments
US10917481B2 (en) * 2014-12-09 2021-02-09 Facebook, Inc. Generating business insights using beacons on online social networks

Families Citing this family (414)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7933395B1 (en) * 2005-06-27 2011-04-26 Google Inc. Virtual tour of user-defined paths in a geographic information system
US8554868B2 (en) 2007-01-05 2013-10-08 Yahoo! Inc. Simultaneous sharing communication interface
KR101255422B1 (en) * 2008-03-05 2013-04-17 엔이씨 유럽 리미티드 Method and communication device for protecting a user's privacy
US8265658B2 (en) 2009-02-02 2012-09-11 Waldeck Technology, Llc System and method for automated location-based widgets
US9014832B2 (en) 2009-02-02 2015-04-21 Eloy Technology, Llc Augmenting media content in a media sharing group
US20100250367A1 (en) * 2009-03-31 2010-09-30 Microsoft Corporation Relevancy of virtual markers
US9049543B2 (en) * 2009-04-29 2015-06-02 Blackberry Limited Method and apparatus for location notification using location context information
US9104695B1 (en) 2009-07-27 2015-08-11 Palantir Technologies, Inc. Geotagging structured data
US9286624B2 (en) 2009-09-10 2016-03-15 Google Inc. System and method of displaying annotations on geographic object surfaces
US8121618B2 (en) * 2009-10-28 2012-02-21 Digimarc Corporation Intuitive computing methods and systems
US8370062B1 (en) * 2010-02-09 2013-02-05 Google Inc. Switching between location contexts
US9230258B2 (en) 2010-04-01 2016-01-05 International Business Machines Corporation Space and time for entity resolution
US8626901B2 (en) * 2010-04-05 2014-01-07 Comscore, Inc. Measurements based on panel and census data
US9202230B2 (en) 2010-04-06 2015-12-01 Intel Corporation Techniques for monetizing anonymized context
US9633121B2 (en) 2010-04-19 2017-04-25 Facebook, Inc. Personalizing default search queries on online social networks
US20110270517A1 (en) * 2010-04-29 2011-11-03 Nokia Corporation Method and apparatus for providing personalized presentations based on navigation information
US8406730B1 (en) * 2010-06-10 2013-03-26 Sprint Communications Company L.P. Possession confidence metric for a wireless communication device
US20110321167A1 (en) * 2010-06-23 2011-12-29 Google Inc. Ad privacy management
US9160806B1 (en) * 2010-08-04 2015-10-13 Open Invention Network, Llc Method and apparatus of organizing and delivering data to intended recipients
US8688774B2 (en) 2010-08-09 2014-04-01 Eustace Prince Isidore Method, system, and devices for facilitating real-time social and business interactions/networking
CN101924996A (en) * 2010-09-21 2010-12-22 北京开心人信息技术有限公司 Topic grouping method and system based on geographic position information
US20120117110A1 (en) 2010-09-29 2012-05-10 Eloy Technology, Llc Dynamic location-based media collection aggregation
US7986126B1 (en) 2010-10-01 2011-07-26 Toyota Motor Sales, U.S.A., Inc. Automated system for determining whether vehicle charge station is publicly accessible
US8543460B2 (en) 2010-11-11 2013-09-24 Teaneck Enterprises, Llc Serving ad requests using user generated photo ads
US20120136565A1 (en) * 2010-11-30 2012-05-31 Sony Corporation Filtering social networking information to provide customized mapping
US20120197986A1 (en) * 2011-01-28 2012-08-02 Yahoo! Inc. User-customizable social grouping techniques
US8954566B1 (en) 2011-02-10 2015-02-10 Google Inc. Method for counting without the use of unique identifiers
CN102158802B (en) * 2011-02-15 2015-02-18 广州市动景计算机科技有限公司 Information distribution method and device
US20120209668A1 (en) * 2011-02-15 2012-08-16 Terry Angelos Dynamically serving content to social network members
US9119054B2 (en) * 2011-03-09 2015-08-25 The Boston Consulting Group, Inc. Communication systems and methods
US9507498B2 (en) * 2011-03-31 2016-11-29 Nokia Technologies Oy Method and apparatus for discovering similar content or search results
US9131343B2 (en) 2011-03-31 2015-09-08 Teaneck Enterprises, Llc System and method for automated proximity-based social check-ins
US8738638B1 (en) * 2011-04-06 2014-05-27 Google Inc. Map usage visualization
CL2012000933A1 (en) * 2011-04-14 2014-07-25 Harnischfeger Tech Inc A method and a cable shovel for generating an ideal path, comprising: an oscillation engine, a hoisting engine, a feed motor, and a bucket for digging and emptying materials; positioning the shovel by operating the hoisting engine, feed motor, and oscillation engine; and a controller that includes an ideal path generator module.
US20140201339A1 (en) * 2011-05-27 2014-07-17 Telefonaktiebolaget L M Ericsson (Publ) Method of conditioning communication network data relating to a distribution of network entities across a space
US9537965B2 (en) 2011-05-31 2017-01-03 Microsoft Technology Licensing, Llc Techniques for managing and applying an availability profile
US10034135B1 (en) * 2011-06-08 2018-07-24 Dstillery Inc. Privacy-sensitive methods, systems, and media for geo-social targeting
US20160197929A9 (en) * 2011-06-22 2016-07-07 Billy G. Tiller Authenticated information exchange
US20130031160A1 (en) * 2011-06-27 2013-01-31 Christopher Carmichael Web 3.0 Content Aggregation, Delivery and Navigation System
US8990709B2 (en) * 2011-07-08 2015-03-24 Net Power And Light, Inc. Method and system for representing audiences in ensemble experiences
BR112014000615B1 (en) 2011-07-12 2021-07-13 Snap Inc Method to select visual content editing functions, method to adjust visual content, and system to provide a plurality of visual content editing functions
US10356106B2 (en) * 2011-07-26 2019-07-16 Palo Alto Networks (Israel Analytics) Ltd. Detecting anomaly action within a computer network
US8195665B1 (en) 2011-07-29 2012-06-05 Google Inc. Dynamic bitwise sharding of live stream comment groups
EP2742364A1 (en) * 2011-08-09 2014-06-18 BlackBerry Limited Harvesting communication parameter observations in GNSS-denied environments
US8600956B2 (en) 2011-08-16 2013-12-03 Nokia Corporation Method, apparatus and computer program product for providing conflicting point of interest information
US9401100B2 (en) 2011-08-17 2016-07-26 Adtile Technologies, Inc. Selective map marker aggregation
US9058565B2 (en) 2011-08-17 2015-06-16 At&T Intellectual Property I, L.P. Opportunistic crowd-based service platform
US8713004B1 (en) * 2011-08-26 2014-04-29 Google Inc. Method and system for prioritizing points of interest for display in a map using category score
US8649806B2 (en) * 2011-09-02 2014-02-11 Telecommunication Systems, Inc. Aggregate location dynometer (ALD)
US8730264B1 (en) * 2011-09-26 2014-05-20 Google Inc. Determining when image elements intersect
US10091322B2 (en) * 2011-10-18 2018-10-02 Qualcomm Incorporated Method and apparatus for improving a user experience or device performance using an enriched user profile
US9489530B2 (en) * 2011-11-17 2016-11-08 Good Technology Corporation Methods and apparatus for anonymising user data by aggregation
US9058573B2 (en) * 2011-11-21 2015-06-16 Facebook, Inc. Network traffic-analysis-based suggestion generation
US9875448B2 (en) 2011-11-30 2018-01-23 At&T Intellectual Property I, L.P. Mobile service platform
US8666989B1 (en) * 2011-12-02 2014-03-04 Google Inc. Adaptive distributed indexing of local adverts
US8627488B2 (en) 2011-12-05 2014-01-07 At&T Intellectual Property I, L.P. Methods and apparatus to anonymize a dataset of spatial data
US8990370B2 (en) 2011-12-16 2015-03-24 Nokia Corporation Method and apparatus for providing information collection using template-based user tasks
ES2427690B1 (en) * 2012-01-20 2014-06-05 Telefónica, S.A. Method for automatic detection and labeling of user points of interest
GB2499288A (en) 2012-02-09 2013-08-14 Sita Inf Networking Computing Usa Inc Path determination
US8972357B2 (en) 2012-02-24 2015-03-03 Placed, Inc. System and method for data collection to validate location data
US11734712B2 (en) 2012-02-24 2023-08-22 Foursquare Labs, Inc. Attributing in-store visits to media consumption based on data collected from user devices
US9325797B2 (en) 2012-02-29 2016-04-26 Google Inc. System and method for requesting an updated user location
JP6622962B2 (en) * 2012-05-02 2019-12-18 株式会社電通 Information distribution system
WO2013166588A1 (en) 2012-05-08 2013-11-14 Bitstrips Inc. System and method for adaptable avatars
US8681957B2 (en) 2012-05-10 2014-03-25 International Business Machines Corporation Extracting social relations from calling time data
US10091323B2 (en) * 2012-05-17 2018-10-02 The Meet Group, Inc. Social discovery feed for facilitating social exploration in social networking environments
CN107273437B (en) * 2012-06-22 2020-09-29 谷歌有限责任公司 Method and system for providing information related to places a user may visit
US20150134460A1 (en) * 2012-06-29 2015-05-14 Fengzhan Phil Tian Method and apparatus for selecting an advertisement for display on a digital sign
US20140006096A1 (en) * 2012-06-29 2014-01-02 Mastercard International Incorporated System and method for determining congestion using point of sale authorization data
US8935255B2 (en) 2012-07-27 2015-01-13 Facebook, Inc. Social static ranking for search
US10387780B2 (en) 2012-08-14 2019-08-20 International Business Machines Corporation Context accumulation based on properties of entity features
US9411967B2 (en) * 2012-08-24 2016-08-09 Environmental Systems Research Institute (ESRI) Systems and methods for managing location data and providing a privacy framework
US10148709B2 (en) * 2012-08-31 2018-12-04 Here Global B.V. Method and apparatus for updating or validating a geographic record based on crowdsourced location data
US9037592B2 (en) 2012-09-12 2015-05-19 Flipboard, Inc. Generating an implied object graph based on user behavior
CN102868596B (en) * 2012-09-20 2015-07-29 腾讯科技(深圳)有限公司 A network social interaction method, and related device and system
US8925054B2 (en) 2012-10-08 2014-12-30 Comcast Cable Communications, Llc Authenticating credentials for mobile platforms
US9449121B2 (en) * 2012-10-30 2016-09-20 Apple Inc. Venue based real time crowd modeling and forecasting
TWI526963B (en) 2012-11-13 2016-03-21 財團法人資訊工業策進會 A method, a device and recording media for searching target clients
US8914393B2 (en) * 2012-11-26 2014-12-16 Facebook, Inc. Search results using density-based map tiles
JP2014106585A (en) * 2012-11-26 2014-06-09 Sony Corp Information processing device, terminal device, information processing method and program
US9398104B2 (en) 2012-12-20 2016-07-19 Facebook, Inc. Ranking test framework for search results on an online social network
US9151824B2 (en) 2012-12-21 2015-10-06 Qualcomm Incorporated Adaptive control of crowdsourcing data using mobile device generated parameters
US9501507B1 (en) 2012-12-27 2016-11-22 Palantir Technologies Inc. Geo-temporal indexing and searching
WO2014111863A1 (en) 2013-01-16 2014-07-24 Light Cyber Ltd. Automated forensics of computer systems using behavioral intelligence
US9380431B1 (en) 2013-01-31 2016-06-28 Palantir Technologies, Inc. Use of teams in a mobile application
US20160007184A1 (en) * 2013-02-25 2016-01-07 Radius Mobile, Inc. Identifying computer devices in proximity to a given origin
US9223826B2 (en) 2013-02-25 2015-12-29 Facebook, Inc. Pushing suggested search queries to mobile devices
US9311640B2 (en) 2014-02-11 2016-04-12 Digimarc Corporation Methods and arrangements for smartphone payments and transactions
US9935996B2 (en) * 2013-02-28 2018-04-03 Open Text Sa Ulc Systems, methods and computer program products for dynamic user profile enrichment and data integration
US20140248908A1 (en) 2013-03-01 2014-09-04 Uniloc Luxembourg S.A. Pedestrian traffic monitoring and analysis
US11025521B1 (en) * 2013-03-15 2021-06-01 CSC Holdings, LLC Dynamic sample selection based on geospatial area and selection predicates
US20140303806A1 (en) * 2013-04-04 2014-10-09 GM Global Technology Operations LLC Apparatus and methods for providing tailored information to vehicle users based on vehicle community input
CN103248723B (en) * 2013-04-10 2015-11-25 腾讯科技(深圳)有限公司 A method and device for determining the region of an IP address
US9910887B2 (en) 2013-04-25 2018-03-06 Facebook, Inc. Variable search query vertical access
US8799799B1 (en) 2013-05-07 2014-08-05 Palantir Technologies Inc. Interactive geospatial map
US9330183B2 (en) 2013-05-08 2016-05-03 Facebook, Inc. Approximate privacy indexing for search queries on online social networks
US9223898B2 (en) 2013-05-08 2015-12-29 Facebook, Inc. Filtering suggested structured queries on online social networks
EP2806386A1 (en) * 2013-05-23 2014-11-26 Let Network Incorporated Method and system for automatically indicating an event from files received on a computer server
US20140380489A1 (en) * 2013-06-20 2014-12-25 Alcatel-Lucent Bell Labs France Systems and methods for data anonymization
WO2015007945A1 (en) * 2013-07-18 2015-01-22 Nokia Corporation Method and apparatus for updating points of interest information via crowdsourcing
US9305322B2 (en) 2013-07-23 2016-04-05 Facebook, Inc. Native application testing
US9303996B2 (en) * 2013-09-10 2016-04-05 Apple Inc. Point of interest location determination based on application usage
US9892200B2 (en) * 2013-09-18 2018-02-13 Ebay Inc. Location-based and alter-ego queries
US9270451B2 (en) * 2013-10-03 2016-02-23 Globalfoundries Inc. Privacy enhanced spatial analytics
US8924872B1 (en) 2013-10-18 2014-12-30 Palantir Technologies Inc. Overview user interface of emergency call data of a law enforcement agency
US9021384B1 (en) 2013-11-04 2015-04-28 Palantir Technologies Inc. Interactive vehicle information map
US9438647B2 (en) 2013-11-14 2016-09-06 At&T Intellectual Property I, L.P. Method and apparatus for distributing content
US9403482B2 (en) 2013-11-22 2016-08-02 At&T Intellectual Property I, L.P. Enhanced view for connected cars
US9635507B2 (en) * 2013-11-26 2017-04-25 Globalfoundries Inc. Mobile device analytics
US9607015B2 (en) * 2013-12-20 2017-03-28 Qualcomm Incorporated Systems, methods, and apparatus for encoding object formations
WO2015101802A1 (en) 2013-12-31 2015-07-09 Turkcell Teknoloji Arastirma Ve Gelistirme A.S. A system for determining subscriber density on wireless networks
WO2015101803A1 (en) * 2013-12-31 2015-07-09 Turkcell Teknoloji Arastirma Ve Gelistirme A.S. A system for location, activity and subscriber data analysis in mobile networks
US9628950B1 (en) 2014-01-12 2017-04-18 Investment Asset Holdings Llc Location-based messaging
US9336300B2 (en) 2014-01-17 2016-05-10 Facebook, Inc. Client-side search templates for online social networks
GB2523134A (en) * 2014-02-13 2015-08-19 Spatineo Oy Service level monitoring for geospatial web services
US9672291B2 (en) 2014-02-19 2017-06-06 Google Inc. Summarizing social interactions between users
EP3132592B1 (en) * 2014-04-18 2018-02-21 Telecom Italia S.p.A. Method and system for identifying significant locations through data obtainable from a telecommunication network
US20150302439A1 (en) * 2014-04-22 2015-10-22 Optifi Inc. System and method for monitoring mobile device activity
US10466056B2 (en) 2014-04-25 2019-11-05 Samsung Electronics Co., Ltd. Trajectory matching using ambient signals
US9510154B2 (en) 2014-04-28 2016-11-29 Samsung Electronics Co., Ltd Location determination, mapping, and data management through crowdsourcing
US9863773B2 (en) 2014-04-29 2018-01-09 Samsung Electronics Co., Ltd. Indoor global positioning system
US9537811B2 (en) 2014-10-02 2017-01-03 Snap Inc. Ephemeral gallery of ephemeral messages
US9396354B1 (en) 2014-05-28 2016-07-19 Snapchat, Inc. Apparatus and method for automated privacy protection in distributed images
US9913100B2 (en) 2014-05-30 2018-03-06 Apple Inc. Techniques for generating maps of venues including buildings and floors
US10108748B2 (en) 2014-05-30 2018-10-23 Apple Inc. Most relevant application recommendation based on crowd-sourced application usage data
IL239237B (en) 2014-06-05 2018-12-31 Rotem Efrat Web document enhancement
US9113301B1 (en) 2014-06-13 2015-08-18 Snapchat, Inc. Geo-location based event gallery
US9781697B2 (en) 2014-06-20 2017-10-03 Samsung Electronics Co., Ltd. Localization using converged platforms
US9129219B1 (en) 2014-06-30 2015-09-08 Palantir Technologies, Inc. Crime risk forecasting
US9225897B1 (en) 2014-07-07 2015-12-29 Snapchat, Inc. Apparatus and method for supplying content aware photo filters
US10028245B2 (en) 2014-07-16 2018-07-17 Samsung Electronics Co., Ltd. Maintaining point of interest data using wireless access points
US10282478B2 (en) * 2014-08-18 2019-05-07 Perry Street Software, Inc. Density modified search results
US10296549B2 (en) * 2014-08-18 2019-05-21 Perry Street Software, Inc. Density dependent search functions
US10373192B2 (en) * 2014-08-18 2019-08-06 Google Llc Matching conversions from applications to selected content items
US10296550B2 (en) * 2014-08-18 2019-05-21 Perry Street Software, Inc. Selective inclusion of members in a results list
US11494390B2 (en) 2014-08-21 2022-11-08 Affectomatics Ltd. Crowd-based scores for hotels from measurements of affective response
US11269891B2 (en) 2014-08-21 2022-03-08 Affectomatics Ltd. Crowd-based scores for experiences from measurements of affective response
DE102015113931A1 (en) 2014-08-21 2016-02-25 Affectomatics Ltd. Calculation of after-effects from affective reactions
US9959289B2 (en) * 2014-08-29 2018-05-01 Telenav, Inc. Navigation system with content delivery mechanism and method of operation thereof
US9541404B2 (en) 2014-08-29 2017-01-10 Samsung Electronics Co., Ltd. System for determining the location of entrances and areas of interest
WO2016038412A1 (en) * 2014-09-10 2016-03-17 Umm Al-Qura University A spatio-temporal method and system to implement boundary regulation
US10423983B2 (en) 2014-09-16 2019-09-24 Snap Inc. Determining targeting information based on a predictive targeting model
US10824654B2 (en) 2014-09-18 2020-11-03 Snap Inc. Geolocation-based pictographs
US11216869B2 (en) 2014-09-23 2022-01-04 Snap Inc. User interface to augment an image using geolocation
US10284508B1 (en) 2014-10-02 2019-05-07 Snap Inc. Ephemeral gallery of ephemeral messages with opt-in permanence
US9374675B2 (en) 2014-11-06 2016-06-21 International Business Machines Corporation Public service awareness of crowd movement and concentration
US9015285B1 (en) 2014-11-12 2015-04-21 Snapchat, Inc. User interface for accessing media at a geographic location
CN105681007B (en) * 2014-11-19 2020-11-06 北京三星通信技术研究有限公司 Reference signal sending and receiving method and device, and scheduling method and device
US10409873B2 (en) 2014-11-26 2019-09-10 Facebook, Inc. Searching for content by key-authors on online social networks
US10552759B2 (en) 2014-12-01 2020-02-04 Facebook, Inc. Iterative classifier training on online social networks
US9679024B2 (en) 2014-12-01 2017-06-13 Facebook, Inc. Social-based spelling correction for online social networks
US9990441B2 (en) 2014-12-05 2018-06-05 Facebook, Inc. Suggested keywords for searching content on online social networks
CN104573473B (en) 2014-12-05 2018-02-02 小米科技有限责任公司 A method and authenticating device for unlocking administration authority
US9385983B1 (en) 2014-12-19 2016-07-05 Snapchat, Inc. Gallery of messages from individuals with a shared interest
US10311916B2 (en) 2014-12-19 2019-06-04 Snap Inc. Gallery of videos set to an audio time line
US10102273B2 (en) 2014-12-30 2018-10-16 Facebook, Inc. Suggested queries for locating posts on online social networks
US9955299B2 (en) * 2014-12-30 2018-04-24 Telecom Italia S.P.A. Method and system for a posteriori computation of origin-destination matrices relating to gathering of people through analysis of mobile communication network data
US10372879B2 (en) 2014-12-31 2019-08-06 Palantir Technologies Inc. Medical claims lead summary report generation
US9754355B2 (en) 2015-01-09 2017-09-05 Snap Inc. Object recognition based photo filters
US9838396B2 (en) * 2015-01-09 2017-12-05 Facebook, Inc. Controlling content-sharing using a privacy list snapshot
US11388226B1 (en) 2015-01-13 2022-07-12 Snap Inc. Guided personal identity based actions
US10133705B1 (en) 2015-01-19 2018-11-20 Snap Inc. Multichannel system
US9521515B2 (en) 2015-01-26 2016-12-13 Mobli Technologies 2010 Ltd. Content request by location
US10061856B2 (en) 2015-01-29 2018-08-28 Facebook, Inc. Multimedia search using reshare text on online social networks
US10997257B2 (en) 2015-02-06 2021-05-04 Facebook, Inc. Aggregating news events on online social networks
US10223397B1 (en) 2015-03-13 2019-03-05 Snap Inc. Social graph based co-location of network users
EP3611632A1 (en) 2015-03-16 2020-02-19 Palantir Technologies Inc. Displaying attribute and event data along paths
WO2016149594A1 (en) 2015-03-18 2016-09-22 Allen Nicholas Richard Geo-fence authorization provisioning
US9692967B1 (en) 2015-03-23 2017-06-27 Snap Inc. Systems and methods for reducing boot time and power consumption in camera systems
US10102296B2 (en) 2015-03-26 2018-10-16 International Business Machines Corporation Utilizing attributes of users to cluster users at a waypoint
JP6277325B2 (en) * 2015-03-26 2018-02-07 株式会社日立製作所 Closed space estimation system and closed space estimation method
US10049099B2 (en) 2015-04-10 2018-08-14 Facebook, Inc. Spell correction with hidden markov models on online social networks
US10095683B2 (en) 2015-04-10 2018-10-09 Facebook, Inc. Contextual speller models on online social networks
US9547538B2 (en) 2015-04-16 2017-01-17 Nokia Technologies Oy Sending of map interaction data to a program
US9426627B1 (en) 2015-04-21 2016-08-23 Twin Harbor Labs, LLC Queue information and prediction system
US10628636B2 (en) 2015-04-24 2020-04-21 Facebook, Inc. Live-conversation modules on online social networks
US10135949B1 (en) 2015-05-05 2018-11-20 Snap Inc. Systems and methods for story and sub-story navigation
US9881094B2 (en) 2015-05-05 2018-01-30 Snap Inc. Systems and methods for automated local story generation and curation
US10298535B2 (en) 2015-05-19 2019-05-21 Facebook, Inc. Civic issues platforms on online social networks
US10075461B2 (en) 2015-05-31 2018-09-11 Palo Alto Networks (Israel Analytics) Ltd. Detection of anomalous administrative actions
US9460175B1 (en) 2015-06-03 2016-10-04 Palantir Technologies Inc. Server implemented geographic information system with graphical interface
CN105100430B (en) * 2015-06-09 2017-12-08 北京橙鑫数据科技有限公司 Information switching method and device
US10397167B2 (en) 2015-06-19 2019-08-27 Facebook, Inc. Live social modules on online social networks
US10122805B2 (en) 2015-06-30 2018-11-06 International Business Machines Corporation Identification of collaborating and gathering entities
US10509832B2 (en) 2015-07-13 2019-12-17 Facebook, Inc. Generating snippet modules on online social networks
US10993069B2 (en) 2015-07-16 2021-04-27 Snap Inc. Dynamically adaptive media content delivery
US10817898B2 (en) 2015-08-13 2020-10-27 Placed, Llc Determining exposures to content presented by physical objects
US9600146B2 (en) 2015-08-17 2017-03-21 Palantir Technologies Inc. Interactive geospatial map
US10268664B2 (en) 2015-08-25 2019-04-23 Facebook, Inc. Embedding links in user-created content on online social networks
US10706434B1 (en) 2015-09-01 2020-07-07 Palantir Technologies Inc. Methods and systems for determining location information
US9639580B1 (en) 2015-09-04 2017-05-02 Palantir Technologies, Inc. Computer-implemented systems and methods for data management and visualization
US10810217B2 (en) 2015-10-07 2020-10-20 Facebook, Inc. Optionalization and fuzzy search on online social networks
US9930494B2 (en) 2015-10-13 2018-03-27 Cisco Technology, Inc. Leveraging location data from mobile devices for user classification
US9699603B2 (en) 2015-10-14 2017-07-04 Cisco Technology, Inc. Utilizing mobile wireless devices to analyze movement of crowds
US10762132B2 (en) * 2015-10-29 2020-09-01 Pixured, Inc. System for referring to and/or embedding posts, videos or digital media within another post, video, digital data or digital media within 2D, 3D, 360 degree or spherical applications whereby to reach convergence or grouping
US9652896B1 (en) 2015-10-30 2017-05-16 Snap Inc. Image based tracking in augmented reality systems
US10795936B2 (en) 2015-11-06 2020-10-06 Facebook, Inc. Suppressing entity suggestions on online social networks
US10270868B2 (en) 2015-11-06 2019-04-23 Facebook, Inc. Ranking of place-entities on online social networks
US9602965B1 (en) 2015-11-06 2017-03-21 Facebook, Inc. Location-based place determination using online social networks
US10534814B2 (en) 2015-11-11 2020-01-14 Facebook, Inc. Generating snippets on online social networks
US10415978B2 (en) 2015-11-20 2019-09-17 Samsung Electronics Co., Ltd. Landmark location determination
US10387511B2 (en) 2015-11-25 2019-08-20 Facebook, Inc. Text-to-media indexes on online social networks
US10474321B2 (en) 2015-11-30 2019-11-12 Snap Inc. Network resource location linking and visual content sharing
US9984499B1 (en) 2015-11-30 2018-05-29 Snap Inc. Image and point cloud based tracking and in augmented reality systems
USD776147S1 (en) * 2015-12-05 2017-01-10 Velvet Ropes, Inc. Mobile device having graphical user interface
US20170169025A1 (en) * 2015-12-14 2017-06-15 Google Inc. Estimating Geographic Entity Capacity
US10872353B2 (en) 2015-12-14 2020-12-22 Google Llc Providing content to store visitors without requiring proactive information sharing
US10592913B2 (en) 2015-12-14 2020-03-17 Google Llc Store visit data creation and management
WO2017102629A1 (en) * 2015-12-15 2017-06-22 Philips Lighting Holding B.V. Incident prediction system
US10354425B2 (en) 2015-12-18 2019-07-16 Snap Inc. Method and system for providing context relevant media augmentation
US10109094B2 (en) 2015-12-21 2018-10-23 Palantir Technologies Inc. Interface to index and display geospatial data
US10740368B2 (en) 2015-12-29 2020-08-11 Facebook, Inc. Query-composition platforms on online social networks
CN105594235B (en) * 2016-01-04 2019-03-26 薛俊华 Navigation ad supplying system based on geographical location
US10282434B2 (en) 2016-01-11 2019-05-07 Facebook, Inc. Suppression and deduplication of place-entities on online social networks
US9709660B1 (en) * 2016-01-11 2017-07-18 Qualcomm Incorporated Crowdsourced user density applications
US10162899B2 (en) 2016-01-15 2018-12-25 Facebook, Inc. Typeahead intent icons and snippets on online social networks
US10262039B1 (en) 2016-01-15 2019-04-16 Facebook, Inc. Proximity-based searching on online social networks
US10740375B2 (en) 2016-01-20 2020-08-11 Facebook, Inc. Generating answers to questions using information posted by users on online social networks
US10242074B2 (en) 2016-02-03 2019-03-26 Facebook, Inc. Search-results interfaces for content-item-specific modules on online social networks
US10270882B2 (en) 2016-02-03 2019-04-23 Facebook, Inc. Mentions-modules on online social networks
US10157224B2 (en) 2016-02-03 2018-12-18 Facebook, Inc. Quotations-modules on online social networks
US10216850B2 (en) 2016-02-03 2019-02-26 Facebook, Inc. Sentiment-modules on online social networks
US11023514B2 (en) 2016-02-26 2021-06-01 Snap Inc. Methods and systems for generation, curation, and presentation of media collections
US10679389B2 (en) 2016-02-26 2020-06-09 Snap Inc. Methods and systems for generation, curation, and presentation of media collections
US10285001B2 (en) 2016-02-26 2019-05-07 Snap Inc. Generation, curation, and presentation of media collections
US10339365B2 (en) 2016-03-31 2019-07-02 Snap Inc. Automated avatar generation
US10452671B2 (en) 2016-04-26 2019-10-22 Facebook, Inc. Recommendations from comments on online social networks
US10068199B1 (en) 2016-05-13 2018-09-04 Palantir Technologies Inc. System to catalogue tracking data
US11044393B1 (en) 2016-06-20 2021-06-22 Pipbin, Inc. System for curation and display of location-dependent augmented reality content in an augmented estate system
US11876941B1 (en) 2016-06-20 2024-01-16 Pipbin, Inc. Clickable augmented reality content manager, system, and network
US10638256B1 (en) 2016-06-20 2020-04-28 Pipbin, Inc. System for distribution and display of mobile targeted augmented reality content
US10805696B1 (en) 2016-06-20 2020-10-13 Pipbin, Inc. System for recording and targeting tagged content of user interest
US11785161B1 (en) 2016-06-20 2023-10-10 Pipbin, Inc. System for user accessibility of tagged curated augmented reality content
US10334134B1 (en) 2016-06-20 2019-06-25 Maximillian John Suiter Augmented real estate with location and chattel tagging system and apparatus for virtual diary, scrapbooking, game play, messaging, canvassing, advertising and social interaction
US11201981B1 (en) 2016-06-20 2021-12-14 Pipbin, Inc. System for notification of user accessibility of curated location-dependent content in an augmented estate
US9681265B1 (en) 2016-06-28 2017-06-13 Snap Inc. System to track engagement of media items
US10430838B1 (en) 2016-06-28 2019-10-01 Snap Inc. Methods and systems for generation, curation, and presentation of media collections with automated advertising
US10552870B1 (en) 2016-06-30 2020-02-04 Quantcast Corporation Privacy-safe frequency distribution of geo-features for mobile devices
US10733255B1 (en) 2016-06-30 2020-08-04 Snap Inc. Systems and methods for content navigation with automated curation
US10635661B2 (en) 2016-07-11 2020-04-28 Facebook, Inc. Keyboard-based corrections for search queries on online social networks
US10855632B2 (en) 2016-07-19 2020-12-01 Snap Inc. Displaying customized electronic messaging graphics
US10091550B2 (en) * 2016-08-02 2018-10-02 At&T Intellectual Property I, L.P. Automated content selection for groups
US9686357B1 (en) 2016-08-02 2017-06-20 Palantir Technologies Inc. Mapping content delivery
US10223464B2 (en) 2016-08-04 2019-03-05 Facebook, Inc. Suggesting filters for search on online social networks
US10282483B2 (en) 2016-08-04 2019-05-07 Facebook, Inc. Client-side caching of search keywords for online social networks
RU2658876C1 (en) * 2016-08-11 2018-06-25 Общество С Ограниченной Ответственностью "Яндекс" Method of processing wireless device sensor data, and server for creating an object vector associated with a physical position
US9703775B1 (en) 2016-08-16 2017-07-11 Facebook, Inc. Crowdsourcing translations on online social networks
US10437840B1 (en) 2016-08-19 2019-10-08 Palantir Technologies Inc. Focused probabilistic entity resolution from multiple data sources
US10726022B2 (en) 2016-08-26 2020-07-28 Facebook, Inc. Classifying search queries on online social networks
US10534815B2 (en) 2016-08-30 2020-01-14 Facebook, Inc. Customized keyword query suggestions on online social networks
KR102606785B1 (en) 2016-08-30 2023-11-29 스냅 인코포레이티드 Systems and methods for simultaneous localization and mapping
US10686829B2 (en) 2016-09-05 2020-06-16 Palo Alto Networks (Israel Analytics) Ltd. Identifying changes in use of user credentials
US10102255B2 (en) 2016-09-08 2018-10-16 Facebook, Inc. Categorizing objects for queries on online social networks
US10645142B2 (en) 2016-09-20 2020-05-05 Facebook, Inc. Video keyframes display on online social networks
US10083379B2 (en) 2016-09-27 2018-09-25 Facebook, Inc. Training image-recognition systems based on search queries on online social networks
US10026021B2 (en) 2016-09-27 2018-07-17 Facebook, Inc. Training image-recognition systems using a joint embedding model on online social networks
US10579688B2 (en) 2016-10-05 2020-03-03 Facebook, Inc. Search ranking and recommendations for online social networks based on reconstructed embeddings
US10432559B2 (en) 2016-10-24 2019-10-01 Snap Inc. Generating and displaying customized avatars in electronic messages
CN112738408B (en) 2016-11-07 2022-09-16 斯纳普公司 Selective identification and ordering of image modifiers
US11089126B1 (en) 2016-11-09 2021-08-10 StratoKey Pty Ltd. Proxy computer system to provide direct links for bypass
US10311117B2 (en) 2016-11-18 2019-06-04 Facebook, Inc. Entity linking to query terms on online social networks
US10650009B2 (en) 2016-11-22 2020-05-12 Facebook, Inc. Generating news headlines on online social networks
US10185763B2 (en) 2016-11-30 2019-01-22 Facebook, Inc. Syntactic models for parsing search queries on online social networks
US10162886B2 (en) 2016-11-30 2018-12-25 Facebook, Inc. Embedding-based parsing of search queries on online social networks
US10235469B2 (en) 2016-11-30 2019-03-19 Facebook, Inc. Searching for posts by related entities on online social networks
US10313456B2 (en) 2016-11-30 2019-06-04 Facebook, Inc. Multi-stage filtering for recommended user connections on online social networks
US10203855B2 (en) 2016-12-09 2019-02-12 Snap Inc. Customized user-controlled media overlays
US10515433B1 (en) 2016-12-13 2019-12-24 Palantir Technologies Inc. Zoom-adaptive data granularity to achieve a flexible high-performance interface for a geospatial mapping system
US10270727B2 (en) 2016-12-20 2019-04-23 Palantir Technologies, Inc. Short message communication within a mobile graphical map
US10318763B2 (en) * 2016-12-20 2019-06-11 Privacy Analytics Inc. Smart de-identification using date jittering
US11223699B1 (en) 2016-12-21 2022-01-11 Facebook, Inc. Multiple user recognition with voiceprints on online social networks
US10607148B1 (en) 2016-12-21 2020-03-31 Facebook, Inc. User identification with voiceprints on online social networks
US10535106B2 (en) 2016-12-28 2020-01-14 Facebook, Inc. Selecting user posts related to trending topics on online social networks
US10460602B1 (en) 2016-12-28 2019-10-29 Palantir Technologies Inc. Interactive vehicle information mapping system
US11616745B2 (en) 2017-01-09 2023-03-28 Snap Inc. Contextual generation and selection of customized media content
US10454857B1 (en) 2017-01-23 2019-10-22 Snap Inc. Customized digital avatar accessories
US10915911B2 (en) 2017-02-03 2021-02-09 Snap Inc. System to determine a price-schedule to distribute media content
US10489472B2 (en) 2017-02-13 2019-11-26 Facebook, Inc. Context-based search suggestions on online social networks
US11250075B1 (en) 2017-02-17 2022-02-15 Snap Inc. Searching social media content
US10319149B1 (en) 2017-02-17 2019-06-11 Snap Inc. Augmented reality anamorphosis system
US10074381B1 (en) 2017-02-20 2018-09-11 Snap Inc. Augmented reality speech balloon system
US10565795B2 (en) 2017-03-06 2020-02-18 Snap Inc. Virtual vision system
US10523625B1 (en) 2017-03-09 2019-12-31 Snap Inc. Restricted group content collection
US10614141B2 (en) 2017-03-15 2020-04-07 Facebook, Inc. Vital author snippets on online social networks
US10769222B2 (en) 2017-03-20 2020-09-08 Facebook, Inc. Search result ranking based on post classifiers on online social networks
US10579239B1 (en) 2017-03-23 2020-03-03 Palantir Technologies Inc. Systems and methods for production and display of dynamically linked slide presentations
US10582277B2 (en) 2017-03-27 2020-03-03 Snap Inc. Generating a stitched data stream
US10581782B2 (en) 2017-03-27 2020-03-03 Snap Inc. Generating a stitched data stream
US11170393B1 (en) 2017-04-11 2021-11-09 Snap Inc. System to calculate an engagement score of location based media content
US10387730B1 (en) 2017-04-20 2019-08-20 Snap Inc. Augmented reality typography personalization system
CN117520684A (en) 2017-04-27 2024-02-06 斯纳普公司 Location privacy association on map-based social media platform
US11893647B2 (en) 2017-04-27 2024-02-06 Snap Inc. Location-based virtual avatars
US10212541B1 (en) 2017-04-27 2019-02-19 Snap Inc. Selective location-based identity communication
US10467147B1 (en) 2017-04-28 2019-11-05 Snap Inc. Precaching unlockable data elements
US11379861B2 (en) 2017-05-16 2022-07-05 Meta Platforms, Inc. Classifying post types on online social networks
US10248645B2 (en) 2017-05-30 2019-04-02 Facebook, Inc. Measuring phrase association on online social networks
US11334216B2 (en) 2017-05-30 2022-05-17 Palantir Technologies Inc. Systems and methods for visually presenting geospatial information
US10895946B2 (en) 2017-05-30 2021-01-19 Palantir Technologies Inc. Systems and methods for using tiled data
US10803120B1 (en) 2017-05-31 2020-10-13 Snap Inc. Geolocation based playlists
CN107402955B (en) * 2017-06-02 2020-04-14 阿里巴巴集团控股有限公司 Method and apparatus for determining index grid of geo-fence
US10268646B2 (en) 2017-06-06 2019-04-23 Facebook, Inc. Tensor-based deep relevance model for search on online social networks
US10803125B2 (en) 2017-06-30 2020-10-13 Microsoft Technology Licensing, Llc Rendering locations on map based on location data
US10700864B2 (en) * 2017-07-12 2020-06-30 International Business Machines Corporation Anonymous encrypted data
US10403011B1 (en) 2017-07-18 2019-09-03 Palantir Technologies Inc. Passing system with an interactive user interface
US10489468B2 (en) 2017-08-22 2019-11-26 Facebook, Inc. Similarity search using progressive inner products and bounds
US11475254B1 (en) 2017-09-08 2022-10-18 Snap Inc. Multimodal entity identification
US10776437B2 (en) 2017-09-12 2020-09-15 Facebook, Inc. Time-window counters for search results on online social networks
US10740974B1 (en) 2017-09-15 2020-08-11 Snap Inc. Augmented reality system
CA3212390A1 (en) 2017-09-26 2019-04-04 Rise Buildings, Llc Systems and methods for location-based application management
US10678786B2 (en) 2017-10-09 2020-06-09 Facebook, Inc. Translating search queries on online social networks
US10499191B1 (en) 2017-10-09 2019-12-03 Snap Inc. Context sensitive presentation of content
US10573043B2 (en) 2017-10-30 2020-02-25 Snap Inc. Mobile-based cartographic control of display content
US11222359B2 (en) 2017-10-31 2022-01-11 Mastercard International Incorporated Systems and methods for location related event detection
CN107948408B (en) * 2017-11-14 2020-10-09 维沃移动通信有限公司 Media file playing method and device
CN110070371B (en) * 2017-11-20 2022-11-18 腾讯科技(深圳)有限公司 Method and device for establishing a data prediction model, and associated storage medium and server
US10679306B2 (en) * 2017-11-21 2020-06-09 International Business Machines Corporation Focus-object-determined communities for augmented reality users
US10810214B2 (en) 2017-11-22 2020-10-20 Facebook, Inc. Determining related query terms through query-post associations on online social networks
US10371537B1 (en) 2017-11-29 2019-08-06 Palantir Technologies Inc. Systems and methods for flexible route planning
US10963514B2 (en) 2017-11-30 2021-03-30 Facebook, Inc. Using related mentions to enhance link probability on online social networks
US11265273B1 (en) 2017-12-01 2022-03-01 Snap, Inc. Dynamic media overlay with smart widget
US11599706B1 (en) 2017-12-06 2023-03-07 Palantir Technologies Inc. Systems and methods for providing a view of geospatial information
US10129705B1 (en) 2017-12-11 2018-11-13 Facebook, Inc. Location prediction using wireless signals on online social networks
US11604968B2 (en) 2017-12-11 2023-03-14 Meta Platforms, Inc. Prediction of next place visits on online social networks
US10698756B1 (en) 2017-12-15 2020-06-30 Palantir Technologies Inc. Linking related events for various devices and services in computer log files on a centralized server
CN107979649B (en) * 2017-12-15 2021-01-12 武汉精测电子集团股份有限公司 AOI multi-line multi-station system and method based on Inline server
US11017173B1 (en) 2017-12-22 2021-05-25 Snap Inc. Named entity recognition visual context and caption data
US10009301B1 (en) * 2018-01-02 2018-06-26 Spearhead Inc. Peer-to-peer location-based messaging
US10678818B2 (en) 2018-01-03 2020-06-09 Snap Inc. Tag distribution visualization system
WO2019113612A1 (en) * 2018-02-07 2019-06-13 BlackBook Media Inc. Managing event calendars using histogram-based analysis
US11507614B1 (en) 2018-02-13 2022-11-22 Snap Inc. Icon based tagging
US10885136B1 (en) 2018-02-28 2021-01-05 Snap Inc. Audience filtering system
US10979752B1 (en) 2018-02-28 2021-04-13 Snap Inc. Generating media content items based on location information
US10327096B1 (en) 2018-03-06 2019-06-18 Snap Inc. Geo-fence selection system
KR102494540B1 (en) 2018-03-14 2023-02-06 스냅 인코포레이티드 Creation of collectible items based on location information
US10896234B2 (en) 2018-03-29 2021-01-19 Palantir Technologies Inc. Interactive geographical map
US11163941B1 (en) 2018-03-30 2021-11-02 Snap Inc. Annotating a collection of media content items
US10830599B2 (en) 2018-04-03 2020-11-10 Palantir Technologies Inc. Systems and methods for alternative projections of geographical information
US11585672B1 (en) 2018-04-11 2023-02-21 Palantir Technologies Inc. Three-dimensional representations of routes
US10999304B2 (en) 2018-04-11 2021-05-04 Palo Alto Networks (Israel Analytics) Ltd. Bind shell attack detection
US10219111B1 (en) 2018-04-18 2019-02-26 Snap Inc. Visitation tracking system
US10896197B1 (en) 2018-05-22 2021-01-19 Snap Inc. Event detection system
US10429197B1 (en) 2018-05-29 2019-10-01 Palantir Technologies Inc. Terrain analysis for automatic route determination
US20190377981A1 (en) * 2018-06-11 2019-12-12 Venkata Subbarao Veeravasarapu System and Method for Generating Simulated Scenes from Open Map Data for Machine Learning
US10679393B2 (en) 2018-07-24 2020-06-09 Snap Inc. Conditional modification of augmented reality object
US10997760B2 (en) 2018-08-31 2021-05-04 Snap Inc. Augmented reality anthropomorphization system
US10950231B1 (en) * 2018-09-04 2021-03-16 Amazon Technologies, Inc. Skill enablement
US10698583B2 (en) 2018-09-28 2020-06-30 Snap Inc. Collaborative achievement interface
US10467435B1 (en) 2018-10-24 2019-11-05 Palantir Technologies Inc. Approaches for managing restrictions for middleware applications
US11025672B2 (en) 2018-10-25 2021-06-01 Palantir Technologies Inc. Approaches for securing middleware data access
US10778623B1 (en) 2018-10-31 2020-09-15 Snap Inc. Messaging and gaming applications communication platform
CN109359170B (en) * 2018-11-02 2020-05-12 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
US11199957B1 (en) 2018-11-30 2021-12-14 Snap Inc. Generating customized avatars based on location information
US10939236B1 (en) 2018-11-30 2021-03-02 Snap Inc. Position service to determine relative position to map features
US10936751B1 (en) * 2018-12-14 2021-03-02 StratoKey Pty Ltd. Selective anonymization of data maintained by third-party network services
US11032670B1 (en) 2019-01-14 2021-06-08 Snap Inc. Destination sharing in location sharing system
US10939246B1 (en) 2019-01-16 2021-03-02 Snap Inc. Location-based context information sharing in a messaging system
US11184376B2 (en) 2019-01-30 2021-11-23 Palo Alto Networks (Israel Analytics) Ltd. Port scan detection using destination profiles
US11316872B2 (en) 2019-01-30 2022-04-26 Palo Alto Networks (Israel Analytics) Ltd. Malicious port scan detection using port profiles
US11184377B2 (en) 2019-01-30 2021-11-23 Palo Alto Networks (Israel Analytics) Ltd. Malicious port scan detection using source profiles
US11184378B2 (en) 2019-01-30 2021-11-23 Palo Alto Networks (Israel Analytics) Ltd. Scanner probe detection
US11070569B2 (en) 2019-01-30 2021-07-20 Palo Alto Networks (Israel Analytics) Ltd. Detecting outlier pairs of scanned ports
US11294936B1 (en) 2019-01-30 2022-04-05 Snap Inc. Adaptive spatial density based clustering
US10936066B1 (en) 2019-02-13 2021-03-02 Snap Inc. Sleep detection in a location sharing system
US10838599B2 (en) 2019-02-25 2020-11-17 Snap Inc. Custom media overlay system
US10964082B2 (en) 2019-02-26 2021-03-30 Snap Inc. Avatar based on weather
US10852918B1 (en) 2019-03-08 2020-12-01 Snap Inc. Contextual information in chat
US11868414B1 (en) 2019-03-14 2024-01-09 Snap Inc. Graph-based prediction for contact suggestion in a location sharing system
US11852554B1 (en) 2019-03-21 2023-12-26 Snap Inc. Barometer calibration in a location sharing system
US11249614B2 (en) 2019-03-28 2022-02-15 Snap Inc. Generating personalized map interface with enhanced icons
US10810782B1 (en) 2019-04-01 2020-10-20 Snap Inc. Semantic texture mapping system
US11248929B2 (en) * 2019-05-15 2022-02-15 Here Global B.V. Methods and systems for chronicled history information in a map
US10560898B1 (en) 2019-05-30 2020-02-11 Snap Inc. Wearable device location systems
US10582453B1 (en) 2019-05-30 2020-03-03 Snap Inc. Wearable device location systems architecture
US10893385B1 (en) 2019-06-07 2021-01-12 Snap Inc. Detection of a physical collision between two client devices in a location sharing system
US11144536B2 (en) * 2019-06-26 2021-10-12 Nice Ltd. Systems and methods for real-time analytics detection for a transaction utilizing synchronously updated statistical aggregation data
US11307747B2 (en) 2019-07-11 2022-04-19 Snap Inc. Edge gesture interface with smart interactions
US11157748B2 (en) 2019-09-16 2021-10-26 International Business Machines Corporation Crowd counting
US11821742B2 (en) 2019-09-26 2023-11-21 Snap Inc. Travel based notifications
US11218838B2 (en) 2019-10-31 2022-01-04 Snap Inc. Focused map-based context information surfacing
CN110996148B (en) * 2019-11-27 2021-11-30 重庆特斯联智慧科技股份有限公司 Scenic spot multimedia image stream playing system and method based on face recognition
US11263347B2 (en) * 2019-12-03 2022-03-01 Truata Limited System and method for improving security of personally identifiable information
US11012492B1 (en) 2019-12-26 2021-05-18 Palo Alto Networks (Israel Analytics) Ltd. Human activity detection in computing device transmissions
US11416874B1 (en) 2019-12-26 2022-08-16 StratoKey Pty Ltd. Compliance management system
US11128715B1 (en) 2019-12-30 2021-09-21 Snap Inc. Physical friend proximity in chat
US11429618B2 (en) 2019-12-30 2022-08-30 Snap Inc. Surfacing augmented reality objects
US11343323B2 (en) 2019-12-31 2022-05-24 Snap Inc. Augmented reality objects registry
US11169658B2 (en) 2019-12-31 2021-11-09 Snap Inc. Combined map icon with action indicator
US11748329B2 (en) * 2020-01-31 2023-09-05 Salesforce, Inc. Updating a multi-tenant database concurrent with tenant cloning
US11228551B1 (en) 2020-02-12 2022-01-18 Snap Inc. Multiple gateway message exchange
US11516167B2 (en) 2020-03-05 2022-11-29 Snap Inc. Storing data based on device location
US11619501B2 (en) 2020-03-11 2023-04-04 Snap Inc. Avatar based on trip
US11430091B2 (en) 2020-03-27 2022-08-30 Snap Inc. Location mapping for large scale augmented-reality
US10956743B1 (en) 2020-03-27 2021-03-23 Snap Inc. Shared augmented reality system
US11503432B2 (en) 2020-06-15 2022-11-15 Snap Inc. Scalable real-time location sharing framework
US11483267B2 (en) 2020-06-15 2022-10-25 Snap Inc. Location sharing using different rate-limited links
US11314776B2 (en) 2020-06-15 2022-04-26 Snap Inc. Location sharing using friend list versions
US11290851B2 (en) 2020-06-15 2022-03-29 Snap Inc. Location sharing using offline and online objects
US11212640B1 (en) * 2020-06-24 2021-12-28 Charles Isgar Data gathering zone system
US11308327B2 (en) 2020-06-29 2022-04-19 Snap Inc. Providing travel-based augmented reality content with a captured image
US10877948B1 (en) 2020-07-01 2020-12-29 Tamr, Inc. Method and computer program product for geospatial binning
US20220044267A1 (en) * 2020-08-04 2022-02-10 The Stable Group, Llc Dynamic data attribution of points of interest
US11349797B2 (en) 2020-08-31 2022-05-31 Snap Inc. Co-location connection service
US11509680B2 (en) 2020-09-30 2022-11-22 Palo Alto Networks (Israel Analytics) Ltd. Classification of cyber-alerts into security incidents
KR20220057280A (en) * 2020-10-29 2022-05-09 삼성전자주식회사 Method for providing point of interest information and electronic device supporting the same
US11477615B2 (en) * 2020-10-30 2022-10-18 Hewlett Packard Enterprise Development Lp Alerting mobile devices based on location and duration data
EP4047959A1 (en) * 2021-02-23 2022-08-24 Telia Company AB Generation of information related to geographical area
US11606756B2 (en) 2021-03-29 2023-03-14 Snap Inc. Scheduling requests for location data
US11645324B2 (en) 2021-03-31 2023-05-09 Snap Inc. Location-based timeline media content system
US11388248B1 (en) 2021-08-18 2022-07-12 StratoKey Pty Ltd. Dynamic domain discovery and proxy configuration
US11829834B2 (en) 2021-10-29 2023-11-28 Snap Inc. Extended QR code
US11886453B2 (en) * 2021-10-29 2024-01-30 Splunk Inc. Quantization of data streams of instrumented software and handling of delayed or late data
US11799880B2 (en) 2022-01-10 2023-10-24 Palo Alto Networks (Israel Analytics) Ltd. Network adaptive alert prioritization system
WO2023161912A1 (en) * 2022-02-28 2023-08-31 Niantic, Inc. Anonymizing user location data in a location-based application
CN114726595B (en) * 2022-03-24 2023-09-29 中科吉芯(昆山)信息科技有限公司 Method for man-machine identity authentication using spatio-temporal information

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080086741A1 (en) * 2006-10-10 2008-04-10 Quantcast Corporation Audience commonality and measurement
US20100185605A1 (en) * 2007-07-03 2010-07-22 John Chu Method and system for continuous, dynamic, adaptive searching based on a continuously evolving personal region of interest

Family Cites Families (167)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US624069A (en) * 1899-05-02 Safety-pocket
US5539232A (en) * 1994-05-31 1996-07-23 Kabushiki Kaisha Toshiba MOS composite type semiconductor device
US20010013009A1 (en) * 1997-05-20 2001-08-09 Daniel R. Greening System and method for computer-based marketing
US6199014B1 (en) * 1997-12-23 2001-03-06 Walker Digital, Llc System for providing driving directions with visual cues
US6359896B1 (en) 1998-02-27 2002-03-19 Avaya Technology Corp. Dynamic selection of interworking functions in a communication system
US6189008B1 (en) 1998-04-03 2001-02-13 Intertainer, Inc. Dynamic digital asset management
US6240069B1 (en) * 1998-06-16 2001-05-29 Ericsson Inc. System and method for location-based group services
US6539080B1 (en) * 1998-07-14 2003-03-25 Ameritech Corporation Method and system for providing quick directions
US20040181668A1 (en) * 1999-06-30 2004-09-16 Blew Edwin O. Methods for conducting server-side encryption/decryption-on-demand
US6549768B1 (en) * 1999-08-24 2003-04-15 Nokia Corp Mobile communications matching system
ATE253283T1 (en) 1999-09-29 2003-11-15 Swisscom Mobile Ag Method for finding members of a common interest group
US6204844B1 (en) 1999-10-08 2001-03-20 Motorola, Inc. Method and apparatus for dynamically grouping communication units in a communication system
US6819919B1 (en) * 1999-10-29 2004-11-16 Telcontar Method for providing matching and introduction services to proximate mobile users and service providers
US6708172B1 (en) 1999-12-22 2004-03-16 Urbanpixel, Inc. Community-based shared multiple browser environment
WO2001071939A1 (en) * 2000-03-21 2001-09-27 Ehud Shapiro Community co-presence system and method having virtual groups
US7124164B1 (en) * 2001-04-17 2006-10-17 Chemtob Helen J Method and apparatus for providing group interaction via communications networks
US20020010628A1 (en) * 2000-05-24 2002-01-24 Alan Burns Method of advertising and polling
US6539232B2 (en) 2000-06-10 2003-03-25 Telcontar Method and system for connecting mobile users based on degree of separation
US20020049690A1 (en) 2000-06-16 2002-04-25 Masanori Takano Method of expressing crowd movement in game, storage medium, and information processing apparatus
US6968179B1 (en) 2000-07-27 2005-11-22 Microsoft Corporation Place specific buddy list services
US8117281B2 (en) 2006-11-02 2012-02-14 Addnclick, Inc. Using internet content as a means to establish live social networks by linking internet users to each other who are simultaneously engaged in the same and/or similar content
US7062469B2 (en) * 2001-01-02 2006-06-13 Nokia Corporation System and method for public wireless network access subsidized by dynamic display advertising
WO2002057946A1 (en) 2001-01-18 2002-07-25 The Board Of Trustees Of The University Of Illinois Method for optimizing a solution set
US6529136B2 (en) * 2001-02-28 2003-03-04 International Business Machines Corporation Group notification system and method for implementing and indicating the proximity of individuals or groups to other individuals or groups
US6820081B1 (en) 2001-03-19 2004-11-16 Attenex Corporation System and method for evaluating a structured message store for message redundancy
US7035653B2 (en) * 2001-04-13 2006-04-25 Leap Wireless International, Inc. Method and system to facilitate interaction between and content delivery to users of a wireless communications network
US6757517B2 (en) 2001-05-10 2004-06-29 Chin-Chi Chang Apparatus and method for coordinated music playback in wireless ad-hoc networks
JP2003203084A (en) 2001-06-29 2003-07-18 Hitachi Ltd Information terminal device, server, and information distributing device and method
US7123918B1 (en) 2001-08-20 2006-10-17 Verizon Services Corp. Methods and apparatus for extrapolating person and device counts
US20050231425A1 (en) * 2001-09-10 2005-10-20 American Gnc Corporation Wireless wide area networked precision geolocation
AU2002363055A1 (en) * 2001-10-19 2003-05-06 Bank Of America Corporation System and method for interactive advertising
DE60334974D1 (en) * 2002-03-01 2010-12-30 Telecomm Systems Inc Method and device for sending, receiving and planning site-relevant information
US20040025185A1 (en) * 2002-04-29 2004-02-05 John Goci Digital video jukebox network enterprise system
US7024207B2 (en) * 2002-04-30 2006-04-04 Motorola, Inc. Method of targeting a message to a communication device selected from among a set of communication devices
US7254406B2 (en) 2002-06-10 2007-08-07 Suman Beros Method and apparatus for effecting a detection of mobile devices that are proximate and exhibit commonalities between specific data sets, or profiles, associated with the persons transporting the mobile devices
US7444655B2 (en) 2002-06-11 2008-10-28 Microsoft Corporation Anonymous aggregated data collection
US7116985B2 (en) * 2002-06-14 2006-10-03 Cingular Wireless Ii, Llc Method for providing location-based services in a wireless network, such as varying levels of services
US6961562B2 (en) * 2002-06-19 2005-11-01 Openwave Systems Inc. Method and apparatus for acquiring, processing, using and brokering location information associated with mobile communication devices
US7247024B2 (en) * 2002-11-22 2007-07-24 Ut-Battelle, Llc Method for spatially distributing a population
JP2004241866A (en) * 2003-02-03 2004-08-26 Alpine Electronics Inc Inter-vehicle communication system
US7787886B2 (en) * 2003-02-24 2010-08-31 Invisitrack, Inc. System and method for locating a target using RFID
US8423042B2 (en) 2004-02-24 2013-04-16 Invisitrack, Inc. Method and system for positional finding using RF, continuous and/or combined movement
US7158798B2 (en) 2003-02-28 2007-01-02 Lucent Technologies Inc. Location-based ad-hoc game services
FI118494B (en) 2003-03-26 2007-11-30 Teliasonera Finland Oyj A method for monitoring traffic flows of mobile users
EP1631932A4 (en) 2003-06-12 2010-10-27 Honda Motor Co Ltd Systems and methods for using visual hulls to determine the number of people in a crowd
US20050038876A1 (en) * 2003-08-15 2005-02-17 Aloke Chaudhuri System and method for instant match based on location, presence, personalization and communication
US7428417B2 (en) 2003-09-26 2008-09-23 Siemens Communications, Inc. System and method for presence perimeter rule downloading
US20040107283A1 (en) 2003-10-06 2004-06-03 Trilibis Inc. System and method for the aggregation and matching of personal information
US20050130634A1 (en) * 2003-10-31 2005-06-16 Globespanvirata, Inc. Location awareness in wireless networks
US7359724B2 (en) * 2003-11-20 2008-04-15 Nokia Corporation Method and system for location based group formation
US20070162328A1 (en) * 2004-01-20 2007-07-12 Nooly Technologies, Ltd. LBS nowcasting sensitive advertising and promotion system and method
US7398081B2 (en) 2004-02-04 2008-07-08 Modu Ltd. Device and system for selective wireless communication with contact list memory
US7545784B2 (en) * 2004-02-11 2009-06-09 Yahoo! Inc. System and method for wireless communication between previously known and unknown users
US7272394B2 (en) 2004-02-11 2007-09-18 Avaya Technology Corp. Location estimation of wireless terminals in a multi-story environment
US8014763B2 (en) 2004-02-28 2011-09-06 Charles Martin Hymes Wireless communications with proximal targets identified visually, aurally, or positionally
US20050256813A1 (en) * 2004-03-26 2005-11-17 Radvan Bahbouh Method and system for data understanding using sociomapping
US7593740B2 (en) * 2004-05-12 2009-09-22 Google, Inc. Location-based social software for mobile devices
US7509131B2 (en) 2004-06-29 2009-03-24 Microsoft Corporation Proximity detection using wireless signal strengths
US7444315B2 (en) * 2004-08-12 2008-10-28 Sap Aktiengesellschaft Virtual community generation
US20060046743A1 (en) * 2004-08-24 2006-03-02 Mirho Charles A Group organization according to device location
US8126441B2 (en) * 2004-09-21 2012-02-28 Advanced Ground Information Systems, Inc. Method of establishing a cell phone network of participants with a common interest
US7692684B2 (en) 2004-09-27 2010-04-06 Point Grey Research Inc. People counting systems and methods
US11283885B2 (en) * 2004-10-19 2022-03-22 Verizon Patent And Licensing Inc. System and method for location based matching and promotion
KR101061265B1 (en) * 2004-10-19 2011-08-31 야후! 인크. System and method for location based social networking
US7707413B2 (en) * 2004-12-02 2010-04-27 Palo Alto Research Center Incorporated Systems and methods for protecting private information in a mobile environment
US20060229058A1 (en) * 2005-10-29 2006-10-12 Outland Research Real-time person-to-person communication using geospatial addressing
US20060195361A1 (en) * 2005-10-01 2006-08-31 Outland Research Location-based demographic profiling system and method of use
US7853268B2 (en) 2005-01-26 2010-12-14 Broadcom Corporation GPS enabled cell phone location tracking for security purposes
US7423580B2 (en) 2005-03-14 2008-09-09 Invisitrack, Inc. Method and system of three-dimensional positional finding
US7353034B2 (en) 2005-04-04 2008-04-01 X One, Inc. Location sharing and tracking using mobile phones or other wireless devices
US20060256008A1 (en) 2005-05-13 2006-11-16 Outland Research, Llc Pointing interface for person-to-person information exchange
US20070210937A1 (en) 2005-04-21 2007-09-13 Microsoft Corporation Dynamic rendering of map information
US7489240B2 (en) 2005-05-03 2009-02-10 Qualcomm, Inc. System and method for 3-D position determination using RFID
US20060266830A1 (en) * 2005-05-31 2006-11-30 Horozov Tzvetan T Location-based recommendation system
US20070005419A1 (en) 2005-06-30 2007-01-04 Microsoft Corporation Recommending location and services via geospatial collaborative filtering
US20070015518A1 (en) 2005-07-15 2007-01-18 Agilis Systems, Inc. Mobile resource location-based customer contact systems
US8150416B2 (en) * 2005-08-08 2012-04-03 Jambo Networks, Inc. System and method for providing communication services to mobile device users incorporating proximity determination
CN101366045A (en) 2005-11-23 2009-02-11 实物视频影像公司 Object density estimation in video
US7558404B2 (en) * 2005-11-28 2009-07-07 Honeywell International Inc. Detection of abnormal crowd behavior
WO2007070505A2 (en) * 2005-12-13 2007-06-21 Squareloop, Inc. System, apparatus, and methods for location managed message processing
US20070135138A1 (en) 2005-12-13 2007-06-14 International Business Machines Corporation Methods, systems, and computer program products for providing location based subscription services
US7774001B2 (en) * 2005-12-16 2010-08-10 Sony Ericsson Mobile Communications Ab Device and method for determining where crowds exist
US7801542B1 (en) 2005-12-19 2010-09-21 Stewart Brett B Automatic management of geographic information pertaining to social networks, groups of users, or assets
US7620404B2 (en) 2005-12-22 2009-11-17 Pascal Chesnais Methods and apparatus for organizing and presenting contact information in a mobile communication system
US20070218900A1 (en) * 2006-03-17 2007-09-20 Raj Vasant Abhyanker Map based neighborhood search and community contribution
US7466986B2 (en) 2006-01-19 2008-12-16 International Business Machines Corporation On-device mapping of WIFI hotspots via direct connection of WIFI-enabled and GPS-enabled mobile devices
US20070174243A1 (en) 2006-01-20 2007-07-26 Fritz Charles W Mobile social search using physical identifiers
US7856360B2 (en) * 2006-01-30 2010-12-21 Hoozware, Inc. System for providing a service to venues where people aggregate
US20070179863A1 (en) * 2006-01-30 2007-08-02 Goseetell Network, Inc. Collective intelligence recommender system for travel information and travel industry marketing platform
US8352183B2 (en) 2006-02-04 2013-01-08 Microsoft Corporation Maps for social networking and geo blogs
US7743056B2 (en) 2006-03-31 2010-06-22 Aol Inc. Identifying a result responsive to a current location of a client device
US9100454B2 (en) * 2006-04-07 2015-08-04 Groupon, Inc. Method and system for enabling the creation and maintenance of proximity-related user groups
US7840224B2 (en) * 2006-04-07 2010-11-23 Pelago, Inc. Facilitating user interactions based on proximity
US20070250476A1 (en) 2006-04-21 2007-10-25 Lockheed Martin Corporation Approximate nearest neighbor search in metric space
US8046411B2 (en) * 2006-04-28 2011-10-25 Yahoo! Inc. Multimedia sharing in social networks for mobile devices
US20070282621A1 (en) * 2006-06-01 2007-12-06 Flipt, Inc Mobile dating system incorporating user location information
US8571580B2 (en) * 2006-06-01 2013-10-29 Loopt Llc. Displaying the location of individuals on an interactive map display on a mobile communication device
US20070290832A1 (en) * 2006-06-16 2007-12-20 Fmr Corp. Invoking actionable alerts
WO2008000044A1 (en) 2006-06-29 2008-01-03 Relevancenow Pty Limited Cyberpersonalities in artificial reality
WO2008000043A1 (en) * 2006-06-30 2008-01-03 Eccosphere International Pty Ltd Method of social interaction between communication device users
US7932831B2 (en) * 2006-07-11 2011-04-26 At&T Intellectual Property I, L.P. Crowd determination
US7680959B2 (en) * 2006-07-11 2010-03-16 Napo Enterprises, Llc P2P network for providing real time media recommendations
DE102006037250A1 (en) * 2006-08-09 2008-04-10 Müller, Thomas Methods and devices for identity verification
US20100153213A1 (en) 2006-08-24 2010-06-17 Kevin Pomplun Systems and Methods for Dynamic Content Selection and Distribution
US20080182563A1 (en) 2006-09-15 2008-07-31 Wugofski Theodore D Method and system for social networking over mobile devices using profiles
US20080097999A1 (en) * 2006-10-10 2008-04-24 Tim Horan Dynamic creation of information sharing social networks
US20080113674A1 (en) * 2006-11-10 2008-05-15 Mohammad Faisal Baig Vicinity-based community for wireless users
US20080242317A1 (en) * 2007-03-26 2008-10-02 Fatdoor, Inc. Mobile content creation, sharing, and commerce in a geo-spatial environment
US8116564B2 (en) 2006-11-22 2012-02-14 Regents Of The University Of Minnesota Crowd counting and monitoring
US8108414B2 (en) 2006-11-29 2012-01-31 David Stackpole Dynamic location-based social networking
US20080126113A1 (en) * 2006-11-29 2008-05-29 Steve Manning Systems and methods for creating and participating in ad-hoc virtual communities
WO2008076827A1 (en) * 2006-12-13 2008-06-26 Synthesis Studios, Inc. Mobile proximity-based notifications
US20080146250A1 (en) 2006-12-15 2008-06-19 Jeffrey Aaron Method and System for Creating and Using a Location Safety Indicator
US8224359B2 (en) 2006-12-22 2012-07-17 Yahoo! Inc. Provisioning my status information to others in my social network
US20080188261A1 (en) 2007-02-02 2008-08-07 Miles Arnone Mediated social network
US8112720B2 (en) * 2007-04-05 2012-02-07 Napo Enterprises, Llc System and method for automatically and graphically associating programmatically-generated media item recommendations related to a user's socially recommended media items
US8229458B2 (en) * 2007-04-08 2012-07-24 Enhanced Geographic Llc Systems and methods to determine the name of a location visited by a user of a wireless device
US9140552B2 (en) 2008-07-02 2015-09-22 Qualcomm Incorporated User defined names for displaying monitored location
WO2008128133A1 (en) 2007-04-13 2008-10-23 Pelago, Inc. Location-based information determination
WO2008134595A1 (en) 2007-04-27 2008-11-06 Pelago, Inc. Determining locations of interest based on user visits
TWI346479B (en) 2007-05-07 2011-08-01 Ind Tech Res Inst Method for grouping wireless devices and apparatus thereof
EP2007114B1 (en) * 2007-06-22 2016-08-10 Alcatel Lucent A system for providing information to users sharing a nomadic experience
US8185137B2 (en) 2007-06-25 2012-05-22 Microsoft Corporation Intensity-based maps
US8165808B2 (en) * 2007-07-17 2012-04-24 Yahoo! Inc. Techniques for representing location information
US7962155B2 (en) * 2007-07-18 2011-06-14 Hewlett-Packard Development Company, L.P. Location awareness of devices
US20090030999A1 (en) 2007-07-27 2009-01-29 Gatzke Alan D Contact Proximity Notification
CN101779180B (en) * 2007-08-08 2012-08-15 贝诺特公司 Method and apparatus for context-based content recommendation
US8050690B2 (en) * 2007-08-14 2011-11-01 Mpanion, Inc. Location based presence and privacy management
US8924250B2 (en) 2007-09-13 2014-12-30 International Business Machines Corporation Advertising in virtual environments based on crowd statistics
WO2009039350A1 (en) 2007-09-19 2009-03-26 Micro Target Media Holdings Inc. System and method for estimating characteristics of persons or things
US8224353B2 (en) 2007-09-20 2012-07-17 Aegis Mobility, Inc. Disseminating targeted location-based content to mobile device users
US8923887B2 (en) * 2007-09-24 2014-12-30 Alcatel Lucent Social networking on a wireless communication system
JP4858400B2 (en) 2007-10-17 2012-01-18 ソニー株式会社 Information providing system, information providing apparatus, and information providing method
WO2009055501A1 (en) 2007-10-22 2009-04-30 Pelago, Inc. Providing aggregate user-supplied information related to locations on a map
US8031175B2 (en) 2008-04-21 2011-10-04 Panasonic Corporation Touch sensitive remote control system that detects hand size characteristics of user and adapts mapping to screen display
US20090110177A1 (en) 2007-10-31 2009-04-30 Nokia Corporation Dynamic Secondary Phone Book
US8467955B2 (en) 2007-10-31 2013-06-18 Microsoft Corporation Map-centric service for social events
US8624733B2 (en) 2007-11-05 2014-01-07 Francis John Cusack, JR. Device for electronic access control with integrated surveillance
US8195598B2 (en) 2007-11-16 2012-06-05 Agilence, Inc. Method of and system for hierarchical human/crowd behavior detection
US8620996B2 (en) 2007-11-19 2013-12-31 Motorola Mobility Llc Method and apparatus for determining a group preference in a social network
US9269089B2 (en) 2007-11-22 2016-02-23 Yahoo! Inc. Method and system for media promotion
US20100020776A1 (en) * 2007-11-27 2010-01-28 Google Inc. Wireless network-based location approximation
US7895049B2 (en) 2007-11-30 2011-02-22 Yahoo! Inc. Dynamic representation of group activity through reactive personas
US8307029B2 (en) 2007-12-10 2012-11-06 Yahoo! Inc. System and method for conditional delivery of messages
US8862622B2 (en) 2007-12-10 2014-10-14 Sprylogics International Corp. Analysis, inference, and visualization of social networks
FI20085399A0 (en) 2007-12-14 2008-04-30 Xtract Oy A method and arrangement for segmenting clients in a client management system
US8010601B2 (en) 2007-12-21 2011-08-30 Waldeck Technology, Llc Contiguous location-based user networks
US7822426B1 (en) * 2008-01-17 2010-10-26 Where, Inc. System and method for snapping a user location to a landmark of known location
US8060018B2 (en) 2008-02-08 2011-11-15 Yahoo! Inc. Data sharing based on proximity-based ad hoc network
US20090210480A1 (en) 2008-02-14 2009-08-20 Suthaharan Sivasubramaniam Method and system for collective socializing using a mobile social network
US20090287687A1 (en) * 2008-04-14 2009-11-19 Gianni Martire System and method for recommending venues and events of interest to a user
US20090286550A1 (en) 2008-05-15 2009-11-19 Brane World Ltd. Tempo Spatial Data Extraction From Network Connected Devices
US10163113B2 (en) * 2008-05-27 2018-12-25 Qualcomm Incorporated Methods and apparatus for generating user profile based on periodic location fixes
US20090307263A1 (en) * 2008-06-06 2009-12-10 Sense Networks, Inc. System And Method Of Performing Location Analytics
US20090315766A1 (en) * 2008-06-19 2009-12-24 Microsoft Corporation Source switching for devices supporting dynamic direction information
EP2289234A4 (en) 2008-07-09 2014-02-12 Loopt Inc Social networking services for a location-aware mobile communication device
US20100017261A1 (en) * 2008-07-17 2010-01-21 Kota Enterprises, Llc Expert system and service for location-based content influence for narrowcast
US8401771B2 (en) * 2008-07-22 2013-03-19 Microsoft Corporation Discovering points of interest from users map annotations
US8386211B2 (en) 2008-08-15 2013-02-26 International Business Machines Corporation Monitoring virtual worlds to detect events and determine their type
US8620624B2 (en) * 2008-09-30 2013-12-31 Sense Networks, Inc. Event identification in sensor analytics
US8645283B2 (en) * 2008-11-24 2014-02-04 Nokia Corporation Determination of event of interest
US9397890B2 (en) 2009-02-02 2016-07-19 Waldeck Technology Llc Serving a request for data from a historical record of anonymized user profile data in a mobile environment
US20120047087A1 (en) * 2009-03-25 2012-02-23 Waldeck Technology Llc Smart encounters
US8577405B2 (en) * 2009-06-12 2013-11-05 Qualcomm Incorporated Systems, methods, and machine-readable media providing location-enabled group management
US20120066138A1 (en) 2009-08-24 2012-03-15 Waldeck Technology, Llc User affinity concentrations as social topography
US8473512B2 (en) 2009-11-06 2013-06-25 Waldeck Technology, Llc Dynamic profile slice
US20120063367A1 (en) 2009-12-22 2012-03-15 Waldeck Technology, Llc Crowd and profile based communication addresses

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080086741A1 (en) * 2006-10-10 2008-04-10 Quantcast Corporation Audience commonality and measurement
US20100185605A1 (en) * 2007-07-03 2010-07-22 John Chu Method and system for continuous, dynamic, adaptive searching based on a continuously evolving personal region of interest

Cited By (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110072360A1 (en) * 2003-12-15 2011-03-24 J2 Global Communications Method and apparatus for automatically performing an online content distribution campaign
US8208943B2 (en) 2009-02-02 2012-06-26 Waldeck Technology, Llc Anonymous crowd tracking
US8321509B2 (en) 2009-02-02 2012-11-27 Waldeck Technology, Llc Handling crowd requests for large geographic areas
US20100197318A1 (en) * 2009-02-02 2010-08-05 Kota Enterprises, Llc Anonymous crowd tracking
US20100198826A1 (en) * 2009-02-02 2010-08-05 Kota Enterprises, Llc Maintaining a historical record of anonymized user profile data by location for users in a mobile environment
US9092641B2 (en) 2009-02-02 2015-07-28 Waldeck Technology, Llc Modifying a user's contribution to an aggregate profile based on time between location updates and external events
US9515885B2 (en) 2009-02-02 2016-12-06 Waldeck Technology, Llc Handling crowd requests for large geographic areas
US20100197319A1 (en) * 2009-02-02 2010-08-05 Kota Enterprises, Llc Modifying a user's contribution to an aggregate profile based on time between location updates and external events
US8825074B2 (en) 2009-02-02 2014-09-02 Waldeck Technology, Llc Modifying a user'S contribution to an aggregate profile based on time between location updates and external events
US9098723B2 (en) 2009-02-02 2015-08-04 Waldeck Technology, Llc Forming crowds and providing access to crowd data in a mobile environment
US20100198862A1 (en) * 2009-02-02 2010-08-05 Kota Enterprises, Llc Handling crowd requests for large geographic areas
US9641393B2 (en) 2009-02-02 2017-05-02 Waldeck Technology, Llc Forming crowds and providing access to crowd data in a mobile environment
US8918398B2 (en) 2009-02-02 2014-12-23 Waldeck Technology, Llc Maintaining a historical record of anonymized user profile data by location for users in a mobile environment
US9397890B2 (en) 2009-02-02 2016-07-19 Waldeck Technology Llc Serving a request for data from a historical record of anonymized user profile data in a mobile environment
US8495065B2 (en) 2009-02-02 2013-07-23 Waldeck Technology, Llc Maintaining a historical record of anonymized user profile data by location for users in a mobile environment
US8589330B2 (en) 2009-03-25 2013-11-19 Waldeck Technology, Llc Predicting or recommending a users future location based on crowd data
US9410814B2 (en) 2009-03-25 2016-08-09 Waldeck Technology, Llc Passive crowd-sourced map updates and alternate route recommendations
US9140566B1 (en) 2009-03-25 2015-09-22 Waldeck Technology, Llc Passive crowd-sourced map updates and alternative route recommendations
US8620532B2 (en) 2009-03-25 2013-12-31 Waldeck Technology, Llc Passive crowd-sourced map updates and alternate route recommendations
US8554770B2 (en) 2009-04-29 2013-10-08 Waldeck Technology, Llc Profile construction using location-based aggregate profile information
US9053169B2 (en) 2009-04-29 2015-06-09 Waldeck Technology, Llc Profile construction using location-based aggregate profile information
US9763048B2 (en) 2009-07-21 2017-09-12 Waldeck Technology, Llc Secondary indications of user locations and use thereof by a location-based service
US8560608B2 (en) 2009-11-06 2013-10-15 Waldeck Technology, Llc Crowd formation based on physical boundaries and other rules
US8473512B2 (en) 2009-11-06 2013-06-25 Waldeck Technology, Llc Dynamic profile slice
US9300704B2 (en) 2009-11-06 2016-03-29 Waldeck Technology, Llc Crowd formation based on physical boundaries and other rules
US8463812B2 (en) * 2009-12-18 2013-06-11 Electronics And Telecommunications Research Institute Apparatus for providing social network service using relationship of ontology and method thereof
US20110153665A1 (en) * 2009-12-18 2011-06-23 Electronics And Telecommunications Research Institute Apparatus for providing social network service using relationship of ontology and method thereof
US20120066067A1 (en) * 2009-12-22 2012-03-15 Waldeck Technology, Llc Fragmented advertisements for co-located social groups
US9046987B2 (en) 2009-12-22 2015-06-02 Waldeck Technology, Llc Crowd formation based on wireless context information
US8782560B2 (en) 2009-12-22 2014-07-15 Waldeck Technology, Llc Relative item of interest explorer interface
US8711737B2 (en) 2009-12-22 2014-04-29 Waldeck Technology, Llc Crowd formation based on wireless context information
US9203793B2 (en) * 2010-03-03 2015-12-01 Waldeck Technology, Llc Ad-hoc micro-blogging groups
US9407598B2 (en) 2010-03-03 2016-08-02 Waldeck Technology, Llc Ad-hoc micro-blogging groups
US8898288B2 (en) 2010-03-03 2014-11-25 Waldeck Technology, Llc Status update propagation based on crowd or POI similarity
US20120066312A1 (en) * 2010-03-03 2012-03-15 Waldeck Technology, Llc Ad-hoc micro-blogging groups
US9930522B2 (en) 2010-07-21 2018-03-27 Sensoriant, Inc. System and method for controlling mobile services using sensor information
US10104518B2 (en) 2010-07-21 2018-10-16 Sensoriant, Inc. System and method for provisioning user computing devices based on sensor and state information
US10003948B2 (en) 2010-07-21 2018-06-19 Sensoriant, Inc. System and method for provisioning user computing devices based on sensor and state information
US9949060B2 (en) * 2010-07-21 2018-04-17 Sensoriant, Inc. System allowing or disallowing access to resources based on sensor and state information
US11140516B2 (en) 2010-07-21 2021-10-05 Sensoriant, Inc. System and method for controlling mobile services using sensor information
US10602314B2 (en) 2010-07-21 2020-03-24 Sensoriant, Inc. System and method for controlling mobile services using sensor information
US20170272892A1 (en) * 2010-07-21 2017-09-21 Sensoriant, Inc. Allowing or disallowing access to resources based on sensor and state information
US20170353816A1 (en) * 2010-07-21 2017-12-07 Sensoriant, Inc. Controlling functions of a user device utilizing an environment map
US10405157B2 (en) * 2010-07-21 2019-09-03 Sensoriant, Inc. System and method for provisioning user computing devices based on sensor and state information
US9913069B2 (en) 2010-07-21 2018-03-06 Sensoriant, Inc. System and method for provisioning user computing devices based on sensor and state information
US9913071B2 (en) * 2010-07-21 2018-03-06 Sensoriant, Inc. Controlling functions of a user device utilizing an environment map
US9913070B2 (en) * 2010-07-21 2018-03-06 Sensoriant, Inc. Allowing or disallowing access to resources based on sensor and state information
US20190028867A1 (en) * 2010-07-21 2019-01-24 Sensoriant, Inc. System and method for provisioning user computing devices based on sensor and state information
US10181148B2 (en) 2010-07-21 2019-01-15 Sensoriant, Inc. System and method for control and management of resources for consumers of information
US9886727B2 (en) * 2010-11-11 2018-02-06 Ikorongo Technology, LLC Automatic check-ins and status updates
US20120124176A1 (en) * 2010-11-11 2012-05-17 Teaneck Enterprises, Llc Automatic check-ins and status updates
US20120150870A1 (en) * 2010-12-10 2012-06-14 Ting-Yee Liao Image display device controlled responsive to sharing breadth
US20130218969A1 (en) * 2012-02-16 2013-08-22 Gface Gmbh Method and system for associating user interests with zones and maps
US10096041B2 (en) 2012-07-31 2018-10-09 The Spoken Thought, Inc. Method of advertising to a targeted buyer
US9454674B2 (en) * 2013-03-25 2016-09-27 Samsung Electronics Co., Ltd. Data sharing control method and data sharing control terminal
US20140289872A1 (en) * 2013-03-25 2014-09-25 Samsung Electronics Co., Ltd. Data sharing control method and data sharing control terminal
US20150324389A1 (en) * 2014-05-12 2015-11-12 Naver Corporation Method, system and recording medium for providing map service, and file distribution system
US11880417B2 (en) * 2014-05-12 2024-01-23 Naver Corporation Method, system and recording medium for providing map service, and file distribution system
US10390289B2 (en) 2014-07-11 2019-08-20 Sensoriant, Inc. Systems and methods for mediating representations allowing control of devices located in an environment having broadcasting devices
US10614473B2 (en) 2014-07-11 2020-04-07 Sensoriant, Inc. System and method for mediating representations with respect to user preferences
US10869260B2 (en) 2014-07-11 2020-12-15 Sensoriant, Inc. Systems and methods for mediating representations allowing control of devices located in an environment having broadcasting devices
US10917481B2 (en) * 2014-12-09 2021-02-09 Facebook, Inc. Generating business insights using beacons on online social networks
US10701165B2 (en) 2015-09-23 2020-06-30 Sensoriant, Inc. Method and system for using device states and user preferences to create user-friendly environments
US11178240B2 (en) 2015-09-23 2021-11-16 Sensoriant, Inc. Method and system for using device states and user preferences to create user-friendly environments
CN107809619A (en) * 2017-11-15 2018-03-16 株洲华通科技有限责任公司 Method and gateway switching system for implementing outbound access through a multimedia gateway

Also Published As

Publication number Publication date
US20130282723A1 (en) 2013-10-24
US10530654B2 (en) 2020-01-07
US9092641B2 (en) 2015-07-28
US20160036639A1 (en) 2016-02-04
US20130017843A1 (en) 2013-01-17
US9641393B2 (en) 2017-05-02
US20100197319A1 (en) 2010-08-05
US20140349679A1 (en) 2014-11-27
US8825074B2 (en) 2014-09-02
US9397890B2 (en) 2016-07-19
US20100198814A1 (en) 2010-08-05
US20100198828A1 (en) 2010-08-05
US20100198870A1 (en) 2010-08-05
US20100198826A1 (en) 2010-08-05
US9098723B2 (en) 2015-08-04
US8208943B2 (en) 2012-06-26
US8495065B2 (en) 2013-07-23
US20100197318A1 (en) 2010-08-05
US8321509B2 (en) 2012-11-27
US20100198862A1 (en) 2010-08-05
US9515885B2 (en) 2016-12-06
US8918398B2 (en) 2014-12-23
US20100198917A1 (en) 2010-08-05

Similar Documents

Publication Publication Date Title
US9641393B2 (en) Forming crowds and providing access to crowd data in a mobile environment
US20120046017A1 (en) System and method for prevention of indirect user tracking through aggregate profile data
US8473512B2 (en) Dynamic profile slice
US8589330B2 (en) Predicting or recommending a user's future location based on crowd data
US8898288B2 (en) Status update propagation based on crowd or POI similarity
US20120064919A1 (en) Crowd creation system for an aggregate profiling service
US20120041672A1 (en) Automated social routing
US20120063367A1 (en) Crowd and profile based communication addresses
US20210173887A1 (en) System and method for filtering and creating points-of-interest
US20240152563A9 (en) System and method for filtering and creating points-of-interest

Legal Events

Date Code Title Description
AS Assignment

Owner name: KOTA ENTERPRISES, LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JENNINGS, KENNETH;REEL/FRAME:023857/0201

Effective date: 20100126

AS Assignment

Owner name: WALDECK TECHNOLOGY, LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KOTA ENTERPRISES, LLC;REEL/FRAME:024859/0855

Effective date: 20100730

AS Assignment

Owner name: CONCERT DEBT, LLC, NEW HAMPSHIRE

Free format text: SECURITY INTEREST;ASSIGNOR:WALDECK TECHNOLOGY, LLC;REEL/FRAME:036433/0313

Effective date: 20150501

Owner name: CONCERT DEBT, LLC, NEW HAMPSHIRE

Free format text: SECURITY INTEREST;ASSIGNOR:WALDECK TECHNOLOGY, LLC;REEL/FRAME:036433/0382

Effective date: 20150801

AS Assignment

Owner name: CONCERT DEBT, LLC, NEW HAMPSHIRE

Free format text: SECURITY INTEREST;ASSIGNOR:CONCERT TECHNOLOGY CORPORATION;REEL/FRAME:036515/0471

Effective date: 20150501

Owner name: CONCERT DEBT, LLC, NEW HAMPSHIRE

Free format text: SECURITY INTEREST;ASSIGNOR:CONCERT TECHNOLOGY CORPORATION;REEL/FRAME:036515/0495

Effective date: 20150801

AS Assignment

Owner name: WALDECK TECHNOLOGY, LLC, NEW HAMPSHIRE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CONCERT DEBT, LLC;REEL/FRAME:044391/0407

Effective date: 20171213

Owner name: CONCERT TECHNOLOGY CORPORATION, NEW HAMPSHIRE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CONCERT DEBT, LLC;REEL/FRAME:044391/0438

Effective date: 20171213

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: CONCERT TECHNOLOGY CORPORATION, NEW HAMPSHIRE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CONCERT DEBT, LLC;REEL/FRAME:044591/0775

Effective date: 20171221

Owner name: WALDECK TECHNOLOGY, LLC, NEW HAMPSHIRE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CONCERT DEBT, LLC;REEL/FRAME:044591/0845

Effective date: 20171221

AS Assignment

Owner name: IP3 2017, SERIES 200 OF ALLIED SECURITY TRUST I, C

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WALDECK TECHNOLOGY, LLC;REEL/FRAME:045061/0144

Effective date: 20180205