US20150142891A1 - Anticipatory Environment for Collaboration and Data Sharing - Google Patents

Anticipatory Environment for Collaboration and Data Sharing

Info

Publication number
US20150142891A1
US20150142891A1 (application US14/546,480)
Authority
US
United States
Prior art keywords
user
users
data
user devices
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/546,480
Inventor
Shafkat ul Haque
Ray KUO
Miao Wang
Raphael Dokyun Kim
Abhiram Bojadla
Bindu Madhavan
Sujit Saraf
Sanjay Rajagopalan
Till Pieper
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SAP SE
Original Assignee
SAP SE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SAP SE filed Critical SAP SE
Priority to US14/546,480
Assigned to SAP SE. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HAQUE, SHAFKAT UL; MADHAVAN, BINDU; WANG, MIAO; KIM, RAPHAEL DOKYUN; KUO, RAY; BOJADLA, ABHIRAM; PIEPER, TILL; SARAF, SUJIT
Publication of US20150142891A1
Status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H04L 63/0861: Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43: Querying
    • G06F 16/438: Presentation of query results
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/48: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/14: Session management
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/54: Presence management, e.g. monitoring or registration for receipt of user log-on information, or the connection status of the users
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/482: End-user interface for program selection
    • H04N 21/4828: End-user interface for program selection for searching program descriptors
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00: Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/06: Authentication
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02: Services making use of location information
    • H04W 4/021: Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences

Definitions

  • a user information data store 116 may store biometric information for each user 10 in the enterprise of the anticipatory smart space 100 ; e.g., all the employees in an enterprise.
  • Biometric information refers to measurable characteristics of an individual that can serve to uniquely identify the individual.
  • biometric information may comprise measurements of characteristics such as fingerprint, face recognition, palm print, hand geometry, iris recognition, retina patterns, voice patterns (e.g., tone, pitch, cadence, etc.), and so on.
  • the anticipatory smart spaces 100 a , 100 b can be located anywhere, ranging from locations around the world, to locations in a region (e.g., within a state, a city, etc.), to local deployments such as different locations on the same campus.
  • a communication network 20 may provide data connections among the anticipatory smart spaces 100 a , 100 b .
  • the communication network 20 may comprise a combination of local networks (e.g., to interconnect components comprising an anticipatory smart space) and a wide area network and/or a public network to provide communication (data, voice, video) among anticipatory smart spaces 100 a , 100 b .
  • Common enterprise-wide data may be provided by a storage server 202 .
  • some of the anticipatory smart spaces 100 a , 100 b may have localized storage 204 at their respective locations.
  • an illustrative implementation of server 118 may include a computer system 302 having a processing unit 312 , a system memory 314 , and a system bus 311 .
  • the server 118 may be based on the nodeJS® runtime environment using a socket.io based engine for real-time bidirectional event-based communication.
  • the server 118 may receive and send location information (discussed in more detail below) through sockets as an interface to the various sensors 104 a - 104 e.
  • the system bus 311 may connect various system components including, but not limited to, the processing unit 312 , the system memory 314 , an internal data storage device 316 , and a communication interface 313 .
  • the internal data storage 316 may or may not be included.
  • the processing unit 312 may comprise a single-processor configuration, or may be a multi-processor architecture.
  • the system memory 314 may include read-only memory (ROM) and random access memory (RAM).
  • the internal data storage device 316 may be an internal hard disk drive (HDD), a magnetic floppy disk drive (FDD, e.g., to read from or write to a removable diskette), an optical disk drive (e.g., for reading a CD-ROM disk, or to read from or write to other high capacity optical media such as the DVD, and so on).
  • the internal data storage 316 may be a flash drive.
  • the internal data storage device 316 and its associated non-transitory computer-readable storage media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth.
  • computer-readable media refers to an HDD, a removable magnetic diskette, and removable optical media such as a CD or DVD.
  • Other types of media which are readable by a computer, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used; further, any such media may contain computer-executable instructions for performing the methods disclosed herein.
  • the system memory 314 and/or the internal data storage device 316 may store a number of program modules, including an operating system 332 , one or more application programs 334 , program data 336 , and other program/system modules 338 .
  • the application programs 334 , when executed, may cause the computer system 302 to provide the functional capabilities 102 , context analysis 106 , and authentication 108 described above.
  • External data storage device 342 may represent the data storage devices 112 , 114 , 116 described above.
  • the data storage devices 112 , 114 , 116 may connect to computer system 302 over communication network 352 .
  • Access to the computer system 302 may be provided by a suitable input device 344 (e.g., keyboard, mouse, touch pad, etc.) and a suitable output device 346 (e.g., display screen).
  • input and output may be provided by a touch sensitive display.
  • the computer system 302 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers (not shown) over a communication network 352 .
  • the communication network 352 may be a local area network (LAN) and/or larger networks, such as a wide area network (WAN).
  • the sensors 104 a - 104 e may connect to the computer system 302 over communication network 352 .
  • Users may interact with the anticipatory smart space (e.g., 100 ) which, in accordance with the present disclosure, can react to their presence and context using natural user interfaces, like voice recognition or gesture controls.
  • It is important that the anticipatory smart space be able to correctly identify the users in the first place, so that it can provide them with relevant, targeted information and functionality. Proper identification is a special consideration when confidential information is involved.
  • the identification and authentication process 108 may be based on a combination of multiple methods that identify unique features of users; by making use of the individual strengths of each component, it achieves superior recognition performance and usability.
  • the process is especially valuable in the context of smart spaces, e.g., where users just approach public digital devices without having their own private computers with them and without being able to (or wanting to) enter logins and passwords.
  • FIG. 4 illustrates, at a high level, a process for identification and authentication in accordance with the present disclosure.
  • the solution aims to narrow the population of all users 402 in an environment down to a limited number of users who potentially want access to a digital solution, based on contextual information; e.g., a proximity filtering process 404 identifies users in proximity around a device.
  • the subset of users 406 may be reduced to the individual by capturing and analyzing very specific, local information of the user.
  • biometric filtering 408 may be applied to the subset of users 406 ; e.g., by collecting biometric information of the user to be authenticated. Based on the limited subset of users, the system is able to quickly and more accurately identify the user.
  • the workflow can be performed by server 118 ( FIG. 1 ) executing suitably configured program code (e.g., applications 334 , FIG. 3 ).
  • The devices that users carry can serve to “announce” their presence to the smart space.
  • various sensors deployed about the smart space may sense the users' devices as they come into proximity of the sensors.
  • the users' devices may provide location information about respective users of the devices, which the smart space can use to determine whether a user is within proximity of the smart space.
  • the smart space can access the device database 114 and the user information database 116 to identify the user who is associated with the device. This can be performed for each device that the smart space determines to be in its proximity. In this way, the subset of users 406 can be compiled from among the population of all users (e.g., as identified in the user information database 116 ).
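  • As an illustrative sketch of this lookup (shown in Objective-C for consistency with the code fragments later in this document, although the server itself is described as Node.js-based; the dictionary contents and variable names are hypothetical, not taken from this disclosure):
      // Map device identifiers detected in proximity (e.g., RFID tag IDs, MAC addresses)
      // to user IDs via the device database 114, yielding the subset of users 406.
      NSDictionary<NSString *, NSString *> *deviceDB = @{ @"00:11:22:33:44:55" : @"user-0042",
                                                          @"RFID-7731"         : @"user-0017" };
      NSArray<NSString *> *detectedDeviceIDs = @[ @"00:11:22:33:44:55" ];

      NSMutableSet<NSString *> *subsetOfUsers = [NSMutableSet set];
      for (NSString *deviceID in detectedDeviceIDs) {
          NSString *userID = deviceDB[deviceID];   // nil if the device is unknown
          if (userID != nil) {
              [subsetOfUsers addObject:userID];
          }
      }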
  • a target user may want to gain access to the smart space; e.g., in order to indicate they have arrived for a meeting, to access documents and other information, etc.
  • the smart space may capture biometric information from the target user. In some embodiments, this can be a passive activity so as not to burden the target user with having to consciously “log” into the smart space. More detail about this aspect of the present disclosure will be given below.
  • the smart space can identify/authenticate the target user from the subset of users 406 .
  • the biometric features of each candidate user in the subset of users 406 can be compared to the captured biometric information.
  • this approach can drastically improve the reliability and security of the identification and authentication of users, because the users (in particular, their devices) have to be present in a certain space and false identifications are reduced to a minimum.
  • the approach is very user-friendly because it requires at most minimal (if any) manual interactions from the user.
  • the approach has major advantages over existing solutions because users have control over when they want to be identified. If they are not shortlisted (e.g., listed in the subset of users 406 ) at 506 (e.g., using proximity detection), the feature recognition at 512 will not identify them.
  • the approach can prevent identity theft where someone might attempt to fake the biometric properties of the target user; e.g., by printing a picture of a person and holding it in front of a camera to simulate another face when face recognition is used for user identification.
  • Another advantage is the increased convenience for end users. Just approaching a smart space without any manual interactions can be sufficient to get access to features and data from the smart space that the user might want to access.
  • the recognition process can happen very quickly, because blocks 504 and 506 serve to reduce the amount of processed data in the subsequent steps, namely by reducing the search space from the population of all users in the enterprise to the subset of users who are in proximity of the smart space. This can significantly reduce the time for matching biometric information.
  • FIG. 6 illustrates a GPS embodiment, in accordance with embodiments of the present disclosure, in which a system of GPS satellites transmits GPS location data. If the user has a mobile device 62 capable of using GPS, this is the initial step in identifying the subset of users 406 that will at a later time be analyzed further to identify the target user.
  • the mobile device 62 can be any mobile computing device.
  • Typical GPS-capable mobile devices 62 are mobile phones; these do not have the FAA-standard Wide Area Augmentation System (WAAS).
  • the WAAS specification requires that a device must be accurate to 7.6 meters or better at least 95% of the time.
  • GLONASS (Global Navigation Satellite System) is available worldwide, while supporting near-WAAS accuracy.
  • an application on the mobile device requests coordinates from the GPS chipset. For example, assuming location services are available, the application would first create a location manager object 602 . Using the location manager 602 , the user (via the application) can set the desired accuracy and ask the location manager to start updating the location.
  • the following code fragment illustrates how the location manager 602 can start location updates in the mobile device 62 :
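  • A minimal sketch of such a fragment, assuming Apple's CoreLocation framework (the same framework used by Code Fragment II below); the method name startStandardUpdates and the specific accuracy and distance-filter values are illustrative assumptions:
      #import <CoreLocation/CoreLocation.h>

      // CODE FRAGMENT I (illustrative sketch): create the location manager and start updates.
      - (void)startStandardUpdates {
          if (self.locationManager == nil) {
              self.locationManager = [[CLLocationManager alloc] init];
          }
          self.locationManager.delegate = self;
          // Coarse accuracy is sufficient for proximity detection.
          self.locationManager.desiredAccuracy = kCLLocationAccuracyHundredMeters;
          // Only deliver events when the device moves at least 100 meters.
          self.locationManager.distanceFilter = 100.0;
          [self.locationManager startUpdatingLocation];
      }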
  • the location manager 602 can then start receiving location data, as illustrated for example by the following code fragment:
  • CODE FRAGMENT II
      - (void)locationManager:(CLLocationManager *)manager
           didUpdateLocations:(NSArray *)locations {
          // If it's a relatively recent event, turn off updates to save power.
          CLLocation *location = [locations lastObject];
          if (fabs([location.timestamp timeIntervalSinceNow]) < 15.0) {
              [manager stopUpdatingLocation];
          }
      }
  • the mobile device 62 can then update its location on a central server (e.g., server 118 , FIG. 1 ), for example, by transmitting the location data and an identifier of the mobile device 62 to the central server. This information can be used to build up the subset of users 406 .
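  • A hedged sketch of such an update, assuming the central server exposes an HTTP endpoint for location reports (the URL and JSON field names are illustrative, not taken from this disclosure):
      // Report the device identifier and current location to the central server.
      - (void)reportLocation:(CLLocation *)location forDevice:(NSString *)deviceID {
          NSDictionary *payload = @{ @"deviceId"  : deviceID,
                                     @"latitude"  : @(location.coordinate.latitude),
                                     @"longitude" : @(location.coordinate.longitude) };
          NSData *body = [NSJSONSerialization dataWithJSONObject:payload options:0 error:nil];

          NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
              [NSURL URLWithString:@"https://central-server.example/location"]];   // placeholder URL
          request.HTTPMethod = @"POST";
          [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
          request.HTTPBody = body;

          [[[NSURLSession sharedSession] dataTaskWithRequest:request] resume];
      }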
  • the user may have an option for simply updating the location with a Boolean value. For example, a value indicating TRUE can be sent to the central server to indicate the user is in proximity of the smart space, and FALSE to indicate the user is not in proximity of the smart space.
  • the mobile device 62 may make this determination. However, since this can be a computationally challenging task for the mobile device 62 , the determination can be made closer to the central server.
  • the location data received by the mobile device 62 can be provided to a delegator 604 to determine whether the location indicated by the location data lies in proximity of the smart space.
  • “proximity” can be based on being inside a predetermined radius of a center of the smart space.
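  • Such a radius test could look roughly like the following (a sketch; the center coordinates and the 50-meter radius are placeholders):
      // Proximity defined as being inside a predetermined radius of the smart space's center.
      - (BOOL)isInProximityOfSmartSpace:(CLLocation *)location {
          CLLocationCoordinate2D center = CLLocationCoordinate2DMake(37.3949, -122.1503);  // placeholder
          CLCircularRegion *region = [[CLCircularRegion alloc] initWithCenter:center
                                                                       radius:50.0
                                                                   identifier:@"smart-space"];
          return [region containsCoordinate:location.coordinate];
      }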
  • FIG. 7 illustrates an access point embodiment for proximity detection in accordance with embodiments of the present disclosure.
  • the smart space can have several wireless access points (“hotspots”) 702 deployed about the smart space. To the extent that the access points 702 are in fixed locations in the smart space, their locations are predetermined. Their locations can be strategically determined to provide maximum coverage of the smart space.
  • the access points 702 can broadcast their WiFi signals, which can be detected by mobile devices 72 that are equipped with WiFi chips. The mobile device 72 will typically connect to the strongest signal available.
  • the mobile device 72 can query the access point 702 a for its Media Access Control (MAC) address. When the MAC address has been obtained, the mobile device 72 can then look up the specific location of the access point 702 a , for example, in the device database 114 . The mobile device 72 can then post its identification and its location to the central server (e.g., server 118 , FIG. 1 ), as sketched below. This information can be used to build up the subset of users 406 .
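  • On an iOS device, for example, the MAC address (BSSID) of the currently connected access point could be obtained roughly as follows (an illustrative sketch using the CaptiveNetwork API; the lookup in device database 114 and the post to the central server are omitted):
      #import <SystemConfiguration/CaptiveNetwork.h>

      // Return the BSSID (MAC address) of the access point the device is connected to, or nil.
      NSString *BSSIDOfCurrentAccessPoint(void) {
          NSArray *interfaces = CFBridgingRelease(CNCopySupportedInterfaces());
          for (NSString *interfaceName in interfaces) {
              NSDictionary *info =
                  CFBridgingRelease(CNCopyCurrentNetworkInfo((__bridge CFStringRef)interfaceName));
              NSString *bssid = info[(NSString *)kCNNetworkInfoKeyBSSID];
              if (bssid != nil) {
                  return bssid;
              }
          }
          return nil;
      }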
  • this process can be done on a periodic basis that is either predetermined or set by the user. There can also be certain flagged events that will trigger a location update of the mobile device 72 .
  • FIG. 8 illustrates a Bluetooth embodiment for proximity detection in accordance with embodiments of the present disclosure.
  • the smart space may be configured with a deployment of devices having Bluetooth Low Energy (BLE) emitters 802 . These emitters 802 send out a message periodically.
  • the message embeds an identification of specific locations in the smart space; e.g., a meeting room, the cafeteria, and so on.
  • a mobile device 82 may be configured with a BLE receiver that listens for the emitters 802 .
  • the mobile device 82 can assess the signal strength. The strength may be compared to a threshold to get the distance between the emitter 802 a and the BLE receiver. If the threshold is met, the mobile device 82 may send its identification to a central server (e.g., server 118 , FIG. 1 ), to indicate that the mobile device is in proximity.
  • the devices that are deployed in the smart space may be Linux-based devices with a BLE dongle, and run a daemon process to emit the identification of the location.
  • the mobile device 82 can listen on its Bluetooth port continuously, and decompose received data to deduce the location identification and the distance, as sketched below.
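  • Listening on the mobile device could be sketched with CoreBluetooth as follows (illustrative; filtering by a specific service UUID and decoding of the location identification from the advertisement data are omitted):
      #import <CoreBluetooth/CoreBluetooth.h>

      // Start scanning once Bluetooth is powered on.
      - (void)centralManagerDidUpdateState:(CBCentralManager *)central {
          if (central.state == CBManagerStatePoweredOn) {
              [central scanForPeripheralsWithServices:nil options:nil];
          }
      }

      // Called for each discovered emitter; the RSSI feeds the distance estimate below.
      - (void)centralManager:(CBCentralManager *)central
       didDiscoverPeripheral:(CBPeripheral *)peripheral
           advertisementData:(NSDictionary *)advertisementData
                        RSSI:(NSNumber *)RSSI {
          NSLog(@"Discovered %@ with RSSI %@", peripheral.name, RSSI);
      }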
  • the distance calculation may be done as follows:
  • the strength of the received signal is given in decibels, dB, and is referred to as the received signal strength indication (RSSI).
  • the distance is given by the strength difference between the emitting power and the received power (radio_dB).
  • the emitting power is calibrated by its strength received at 1 meter (calibrated_Power), thus:
      radio_dB = calibrated_Power - RSSI
  • The linearized value of radio_dB is:
      radio_linearized = 10^(radio_dB / 10)
  • the signal is a radio wave and falls off as 1/distance^2 in a spherical space, such that:
      distance = sqrt(radio_linearized)
  • that is, the distance is the square root of the linearized difference between the calibrated signal strength and the received signal strength.
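  • Expressed as code, the calculation above is a small helper (illustrative; the function name is not from this disclosure):
      #include <math.h>

      // Estimate the distance in meters from the received signal strength (RSSI, in dB)
      // and the emitter's calibrated power, i.e., its strength received at 1 meter.
      double estimated_distance(double calibrated_power_dB, double rssi_dB) {
          double radio_dB = calibrated_power_dB - rssi_dB;
          double radio_linearized = pow(10.0, radio_dB / 10.0);
          return sqrt(radio_linearized);   // 1/distance^2 spherical propagation model
      }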
  • the mobile device 82 can send its identification and location to the central server. This information can be used to build up the subset of users 406 .
  • FIG. 9 illustrates an RFID embodiment for proximity detection in accordance with embodiments of the present disclosure.
  • the RFID reader 902 can send signals periodically so that it is waiting for RFID tags (e.g., in personal device 92 ) in proximity to respond. For this, the user does not even have to be in the line of sight, but merely less than 3 meters away from the RFID reader 902 .
  • the RFID reader 902 can send the meta data received from the tag 92 to a central server (e.g., server 118 , FIG. 1 ), in addition to other data such as an identifier of the tag 92 and the RSSI (received signal strength indication).
  • the central server 118 can process this information, e.g., using a maximum likelihood (ML) location estimation algorithm, to determine the proximity and the relative location of the user within the smart space.
  • the central server also maintains a list of references between RFID tags and users (e.g., in device DB 114 ). The central server can therefore determine which users are in proximity to which RFID reader 902 . This information can be used to build up the subset of users 406 .
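  • A much-simplified sketch of such an estimation is shown below, assuming a log-distance path-loss model with Gaussian measurement noise, under which the maximum-likelihood estimate is the candidate position minimizing the squared error between measured and predicted RSSI; the reader positions, candidate grid and model constants are placeholders:
      #include <math.h>

      typedef struct { double x, y; } Point;

      // RSSI predicted at distance d under a log-distance path-loss model.
      static double predicted_rssi(double rssi_at_1m, double path_loss_exponent, double d) {
          return rssi_at_1m - 10.0 * path_loss_exponent * log10(fmax(d, 0.1));
      }

      // Grid-search ML estimate of the tag position from RSSI readings at several readers.
      Point ml_estimate(const Point readers[], const double rssi[], int n_readers,
                        const Point grid[], int n_grid,
                        double rssi_at_1m, double path_loss_exponent) {
          Point best = grid[0];
          double best_err = INFINITY;
          for (int g = 0; g < n_grid; g++) {
              double err = 0.0;
              for (int r = 0; r < n_readers; r++) {
                  double dx = grid[g].x - readers[r].x;
                  double dy = grid[g].y - readers[r].y;
                  double d = sqrt(dx * dx + dy * dy);
                  double diff = rssi[r] - predicted_rssi(rssi_at_1m, path_loss_exponent, d);
                  err += diff * diff;
              }
              if (err < best_err) { best_err = err; best = grid[g]; }
          }
          return best;
      }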
  • the smart space can filter all frequencies below 20 kHz and perform a fast Fourier transform (FFT) to get the peaks of received frequencies in order to obtain the amplitude of all the components of the sound signal.
  • the characteristics of the signal, which are unique to the personal device, can be sent to the server 118 , where they are compared with the reference entries in the device DB 114 to identify a user associated with the personal device 22 . This information can be used to build up the subset of users 406 .
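  • As a rough illustration of the frequency analysis, the following uses a naive discrete Fourier transform instead of an optimized FFT, for clarity; the buffer, sample rate and handling of the 20 kHz cutoff are simplified assumptions:
      #include <math.h>

      // Return the frequency (Hz) of the strongest component above 20 kHz in a mono buffer.
      // Requires sample_rate > 40 kHz so that bins above 20 kHz exist.
      double dominant_frequency_above_20khz(const float *samples, int n, double sample_rate) {
          int k_min = (int)ceil(20000.0 * n / sample_rate);
          double best_mag = 0.0;
          int best_k = k_min;
          for (int k = k_min; k < n / 2; k++) {        // naive DFT; an FFT would be used in practice
              double re = 0.0, im = 0.0;
              for (int t = 0; t < n; t++) {
                  double phase = 2.0 * M_PI * k * t / n;
                  re += samples[t] * cos(phase);
                  im -= samples[t] * sin(phase);
              }
              double mag = sqrt(re * re + im * im);
              if (mag > best_mag) { best_mag = mag; best_k = k; }
          }
          return best_k * sample_rate / n;
      }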
  • an access security system in the smart space may be employed to register or otherwise identify users entering the smart space; e.g. by swiping a badge. This data can be used as an indication in which area a user is currently located and thus be used to build up the subset of users 406 .
  • the information is consolidated and correlated to identify which users are in proximity of which smart space and which are not.
  • the combination and the enhancement of the various technologies reduce the risk of errors if one or several of them fail.
  • the GPS signal might only be reliable if a user (carrying a GPS-enabled personal device) is close to a window, or at least not too far inside a building. Otherwise, the GPS information might incorrectly indicate that a user is somewhere outside of a building.
  • If WiFi-based information indicates that a user is indeed in a building (e.g., because their personal device is connected to an access point which is located in the middle of a building close to a smart space), this information is more likely to be accurate and can be given a higher weighting than the GPS-based information.
  • Even if one of the technologies fails or is unavailable, the location detection would still work based on the other technologies.
  • the smart space may include a face recognition system, which in a particular embodiment, uses two sensors: a depth sensor and a webcam.
  • the depth sensor can be used to certify that identification conditions have been met.
  • the webcam takes the photos that are used for identification.
  • the photos can be provided to the server 118 ( FIG. 1 ) for recognition.
  • the server 118 can return the name of the person in the form of a greeting; e.g., “Hi [name]”.
  • user details can be fed into the server 118 for further processing; e.g. to offer user-specific features and data.
  • Before face recognition can happen in a smart space, users may participate in a training phase to train the smart space.
  • the user types their name into the system, then stands before a screen and follows a moving red dot with the eyes and face.
  • the webcam takes a photo.
  • This photo can be digitally processed as follows: first, the eyes are identified; then, using the eyes as the center, the face is rotated, scaled and translated so the eyes line up in the correct position in the frame. This ensures that the eyes are horizontally placed, and that the center of the frame is between the eyes.
  • the image is then cropped, tagged with the name of the person, and stored in a face library.
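  • The alignment just described amounts to a similarity transform determined by the two eye positions; a sketch of the underlying mapping is shown here (the helper and the canonical eye geometry are illustrative, not taken from this disclosure):
      #include <math.h>

      typedef struct { double x, y; } Pt;

      // Map a source-image point into an aligned frame in which the eye line is horizontal,
      // the eyes are desired_eye_dist apart, and their midpoint sits at desired_center.
      Pt align_point(Pt p, Pt left_eye, Pt right_eye, Pt desired_center, double desired_eye_dist) {
          double dx = right_eye.x - left_eye.x, dy = right_eye.y - left_eye.y;
          double angle = atan2(dy, dx);                       // current tilt of the eye line
          double scale = desired_eye_dist / sqrt(dx * dx + dy * dy);
          Pt center = { (left_eye.x + right_eye.x) / 2.0, (left_eye.y + right_eye.y) / 2.0 };
          double c = cos(-angle), s = sin(-angle);
          double tx = p.x - center.x, ty = p.y - center.y;    // translate so the eye midpoint is the origin
          Pt out = { desired_center.x + scale * (c * tx - s * ty),
                     desired_center.y + scale * (s * tx + c * ty) };
          return out;
      }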
  • Training can happen quickly (e.g., less than a minute), and needs to be done only once. For example, in some embodiments, a total of 10 photos might be taken, processed, tagged and stored in the face library (e.g., user information data store 116 ).
  • the identification process can start with the depth sensor detecting the presence of human skeletons.
  • the depth sensor certifies that only one skeleton is present, and that the skeleton is standing at the correct distance (currently, less than 1 meter). If too many people approach the depth sensor together, it registers this and provides feedback. Once the depth sensor outputs have helped determine that conditions are correct, the webcam takes a series of test photos to carry out face recognition.
  • Each test photo taken by the webcam is digitally processed (in the same manner that face library photos were processed): first, the eyes are identified; then, using the eyes as the center, the face is rotated, scaled and translated so the eyes line up in the correct position in the frame; finally, the image is cropped.
  • This processed test photo is then matched against face library photos using a “k-nearest-neighbor” algorithm.
  • the winning face library photo is returned. If the same library photo wins 4 or more times (when matched against 5 test photos), the name associated with that library photo is returned as the identified face and therefore the identified user.
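  • The voting step can be illustrated with a small helper (a sketch; the per-test-photo winner indices would come from the k-nearest-neighbor matcher):
      // Return the index of the library photo that wins at least 4 of the 5 test photos,
      // or -1 if no library photo is matched consistently enough.
      int identify_by_voting(const int winners[5], int library_size) {
          for (int candidate = 0; candidate < library_size; candidate++) {
              int votes = 0;
              for (int i = 0; i < 5; i++) {
                  if (winners[i] == candidate) votes++;
              }
              if (votes >= 4) return candidate;
          }
          return -1;
      }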
  • the system can identify which users are not part of the face library. For this, the test photo is re-projected on to the winning library photo.
  • the eigenvalues of the library photo (represented as a matrix) are compared with the eigenvalues of the test photo, and a score is computed. If the score exceeds a certain threshold, the system can say “Hi Stranger”, indicating that the best match for this face is not very good (i.e. this face is a stranger who is not in the face library).
  • Speaker Recognition can be used as an alternative way to identify users in an intuitive, yet safe way. The technology is able to recognize a speaker based on the individual characteristics of his voice.
  • the smart space can be equipped with microphones, which are able to record the voice of any user of the smart space in high quality.
  • the audio data can be streamed to the central server where it is analyzed, for example, using ALIZE, a platform for biometric authentication systems written in C++, after the related features of the voice were extracted.
  • ALIZE provides the necessary set of high- and low-level tools.
  • the low-level library includes a statistic engine, while the high level library and the LIA_RAL package provide necessary functions to teach the system the users' voices, in addition to parameter normalization, score normalization, and various other capabilities.
  • the live recorded voice is only compared to the users who were previously identified to be in proximity to the smart space. This increases the security of the solution, as well as the performance of the recognition process. At the end of the recognition process a score is provided together with the identified users. The solution can not only identify whether one of the previously located users has said something, but also, with a high likelihood, which one it was. See, for example, FIG. 10 .
  • the solution can provide improved results if the speaker recognition process is only applied to specific keywords, which the user taught the system in a one-time setup process. A user could therefore say a specific password, so that the system can verify not only that a specific user is speaking, but also that this specific user knows the specific password.
  • the discussion will now turn to a description of some details of the anticipatory smart space 100 ( FIG. 1 ) in accordance with the present disclosure.
  • the anticipatory smart space described herein supports a business environment that adapts to and anticipates users' needs based on presence, proximity, context and business function, and facilitates business interactions. It will be appreciated that the anticipatory smart space can be configured in other collaborative contexts.
  • the anticipatory smart space 100 can make use of various technologies to achieve superior usability in comparison to other collaboration systems. It saves users training effort, setup effort, navigational steps and cognitive effort by making use of natural user interfaces and smart back-end algorithms, providing the user with an absolutely seamless and targeted user experience.
  • the business environment can perceive the presence and context of users by answering a set of questions, amongst others, such as:
  • Using signals received via its sensors, the anticipatory smart space 100 attempts to anticipate the needs of the user (or users).
  • the primary use-case for the environment is a business meeting, planned or ad-hoc. Some user needs in a meeting are quite obvious, e.g. turn on the lights; others are indirect, e.g. bring up relevant documents on a large screen.
  • a face recognition system identifies and greets them. On a video wall in the space, the system brings up documents relevant to their meeting. These documents are displayed on the part of the video wall that the user is looking at. The user can turn on privacy settings by voice command, causing documents to fade away when the user looks away or leaves the room.
  • the two users interact with these documents using voice, gestures, touch, keyboards or mouse.
  • the video wall changes its behavior based on which users are looking at it, how long they have looked at which screen, and how far they are standing from the screen.
  • Remote participants are automatically dialed in and identified by the system (for instance, a name tag appears over their heads in the video wall). Users may conduct private interactions with sections of the wall (e.g., look up something on the Internet, recall parts of a previous meeting, etc.).
  • additional data in large text is displayed on screens in users' peripheral vision. If a user turns to look at the peripheral screens, the screens respond by presenting denser data.
  • Context analysis for feature and data anticipation may be provided by backend systems; e.g., server 118 .
  • Cues about the users and environment may be collected by the various sensors 104 a - 104 e deployed about the anticipatory smart space 100 in order to anticipate users' needs. For example, the questions listed above may be considered in the analysis.
  • real time image and audio analysis for presence detection, gaze detection, face detection, face recognition, voice recognition, voice control, etc. can also feed into the analysis.
  • Information about users such as name, their roles in the organization, their connections with other participants, their emotional state, biometric data, etc. can be used to assess and anticipate their information needs. Relevant contexts include the location where the meeting is taking place, the time and day of the meeting, the available display devices (e.g., are the participants carrying smartphones, computer tablets, etc.), the documents being looked at or shared, content written on the smartboard, etc.

Abstract

A collaborative environment adapts to and anticipates users' needs based on the presence of participants, proximity, and context, in order to facilitate interaction among participants. An aspect of the collaborative environment includes multifactor user recognition.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • Pursuant to 35 U.S.C. §119(e), this application is entitled to and claims the benefit of the filing date of U.S. Provisional App. No. 61/906,327 filed Nov. 19, 2013, the content of which is incorporated herein by reference in its entirety for all purposes.
  • BACKGROUND
  • Unless otherwise indicated, the foregoing is not admitted to be prior art to the claims recited herein and should not be construed as such.
  • The role of software in a collaborative effort is largely perceived at a conscious level. In current software systems, users stare at screens. Users are aware they are using software, that they are using computers.
  • In a social network, for example, the user goes to the computer screen or phone screen to access an application (e.g., a Facebook® app) and that is the extent of the interaction. The interaction with the user's friend is secondary. The software is not so deeply ingrained in the interaction with the user's friend that the user is no longer aware of it. The software is always perceived as something separate, a layer.
  • Communication applications (e.g., Skype, Viber, etc.) allow the user to have a conversation with someone, but nonetheless the user is very much aware of the application, staring at a screen, pushing buttons, etc. The application remains a noticeable part of the interaction. Oblong Industries, Inc. provides the g-speak framework which supports a spatial operating environment that can be used for collaboration in large meeting rooms. But users still need to explicitly interact with a system; e.g., by using a mouse and keyboard, or by swiping a badge, etc. Software and hardware should facilitate human interaction to a level so deep that it is just part of what is happening in the room, not something separate and not something that creates a layer between something as basic as eye contact.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • With respect to the discussion to follow and in particular to the drawings, it is stressed that the particulars shown represent examples for purposes of illustrative discussion, and are presented in the cause of providing a description of principles and conceptual aspects of the present disclosure. In this regard, no attempt is made to show implementation details beyond what is needed for a fundamental understanding of the present disclosure. The discussion to follow, in conjunction with the drawings, makes apparent to those of skill in the art how embodiments in accordance with the present disclosure may be practiced. In the accompanying drawings:
  • FIG. 1 is an illustrative embodiment of an anticipatory smart space in accordance with the present disclosure.
  • FIG. 1A illustrates an example of a configuration of smartboards in accordance with the present disclosure.
  • FIG. 2 shows a configuration of anticipatory smart spaces in accordance with the present disclosure.
  • FIG. 3 shows a computer system in accordance with the present disclosure.
  • FIG. 4 illustrates a high level workflow for multifactor identification in accordance with the present disclosure.
  • FIG. 5 shows details for multifactor identification in accordance with the present disclosure.
  • FIG. 6 shows GPS-based location detection in accordance with the present disclosure.
  • FIG. 7 shows WiFi-based location detection in accordance with the present disclosure.
  • FIG. 8 shows Bluetooth-based location detection in accordance with the present disclosure.
  • FIG. 9 shows RFID-based location detection in accordance with the present disclosure.
  • FIG. 10 illustrates an example of speech recognition.
  • DETAILED DESCRIPTION
  • In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be evident, however, to one skilled in the art that the present disclosure as expressed in the claims may include some or all of the features in these examples, alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein.
  • FIG. 1 shows an anticipatory smart space 100 in accordance with the present disclosure. In some embodiments, the anticipatory smart space 100 may be configured to support a business environment. In other embodiments, the anticipatory smart space 100 can support other enterprises such as educational environments, research labs, medical facilities such as hospitals, and so on. The anticipatory smart space 100 can provide an environment for collaboration that can adapt to and anticipate users' needs based on presence of users 10, proximity among users, context, business function, etc., and can facilitate business interactions. In some embodiments, the anticipatory smart space 100 may be a designated meeting or conference room. In other embodiments, the anticipatory smart space 100 may be a common area with no particular designated use other than as a place to relax, for impromptu gatherings, and so on.
  • The anticipatory smart space 100 may identify users 10 by their biometric features such as face recognition, speech patterns, fingerprints, and so on. A user's presence may be indicated by the items they carry. In some environments, users 10 may carry badges; e.g., employee identification badges. The badges may embed radio frequency identification (RFID) tags or other similar technology that can be sensed by sensors deployed about the anticipatory smart space 100. Users 10 typically carry various mobile computing devices 22, such as smart phones, computer tablets, and so on. Technology supported by such devices 22 may be used to indicate their presence in the anticipatory smart space 100. For example, devices 22 may support location technologies such as global positioning systems (GPS), allowing the anticipatory smart space 100 to detect the presence of such devices based on location. The devices 22 may support wireless communication technologies such as Bluetooth (BT), WiFi, and the like, allowing such devices to announce or otherwise indicate their location to the anticipatory smart space 100. These aspects of the present disclosure will be discussed in more detail below.
  • The anticipatory smart space 100 may be viewed as a multi-modal environment that supports users' interactions, both locally and remotely, with minimal input from the user 10. The environment can provide this support by perceiving the presence and identities of users, and then anticipating their needs. In various embodiments, the anticipatory smart space 100 makes use of various technologies to achieve superior usability in comparison to other collaboration systems. It saves users training effort, setup effort, navigational steps and cognitive effort by making use of natural user interfaces and smart back-end algorithms, providing the user with a seamless and targeted user experience.
  • In some embodiments, the anticipatory smart space 100 may expose various functional capabilities 102 to end users 10. For example, the anticipatory smart space 100 may comprise a server 118 that provides functional capabilities 102 such as access to applications and data (business data in our illustrative example), content sharing between users, and the like. Telepresence functionality may provide interactive video and audio between users at separate locations, typically with high definition (and in some embodiments, near life size) video images and high quality audio. Immersive telepresence can provide a meeting environment that surrounds users with high quality video and audio streams from remote locations, giving users the impression they are physically next to the remote participants. Meetings using immersive telepresence show participants in life-size images and videos, and enable natural interactions as with physical presence.
  • Smartboard functionality can be used to electronically capture content produced during a collaborative effort. The smartboard is an electronic whiteboard that can digitally capture content written on it. In some embodiments, the smartboard may be augmented with additional functionality to enhance the telepresence experience. For example, a smartboard in one location may be connected to a smartboard at a remote location so that content provided on the smartboard at one location can appear at the other location. The smartboard may incorporate multi-touch capability, allowing users to manually interact with the content captured and digitized on the smartboard. Various granularities in the level of interaction can be provided, ranging from very rough movements (e.g., gestures, hand waving, etc.), to rough gestures (e.g., pointing with the finger), to precise gestures (e.g., pointing with a stylus). Input modes may include paint and drawing capabilities, in addition to handwritten input, to provide additional means for users to express their thoughts.
  • Refer to FIG. 1A for a moment, for an illustrative embodiment of a smartboard configuration in accordance with the present disclosure. Two or more smartboards 102 a, 102 b may be located in different locations A, B. The smartboards 102 a, 102 b may be equipped with cameras 122 configured for high-definition audio/visual capture. Images captured by cameras 122 in smartboard 102 a at location A may be communicated to location B and displayed on smartboard 102 b, and vice-versa. An immersive telepresence experience may be achieved if the meeting space at each location A, B is configured with wall-to-wall, full height smartboards, giving participants in one location the impression that their counterparts at the other location(s) are physically with them. For example, the participants can appear as life-size images on the smartboard. Other features of the smartboard may include:
  • Augmenting telepresence
  • Shared, connected whiteboard with overlay of remote participants
  • Multi-touch capabilities
  • Life-size
  • Multiple spaces can be connected
  • Interaction very rough (gesture), rough (finger) and precise (stylus) possible
  • Paint/draw
  • Shared displays
  • Augmented content
  • Returning to FIG. 1, the anticipatory smart space 100 may include peripheral vision displays. A user's extreme peripheral vision can detect motion but little else. As the image approaches the user's fovea, color, shape and text become apparent, respectively. This knowledge can be used by the smartboard to provide the user with different types of information on different display surfaces, after determining which surface the user is currently looking at. Data from cameras and depth sensors can be used to determine where a user is looking, as well as what the user's eyes are focusing on. Thus, different types of information on different display surfaces of the smartboard can be displayed. While the main surface might display a document, a surface in the peri-foveal region could display a stock quote in large letters, or the outside temperature shown as a color, or an alert that dances when a meeting is imminent.
  • The anticipatory smart space 100 may include reactive screens. People are more likely to notice what is displayed on a screen if it reacts to their presence. Thus, the anticipatory smart space 100 may compute a user's distance from a smartboard and select appropriate content. For example, a news headline is shown when the user is far away, but can be replaced with a more detailed story as the user walks toward the screen. As another example, a single number (e.g., current outside temperature) can be displayed when the user is far away, to be replaced with a detailed weather report as the user approaches. In some embodiments, the reactive screen can recognize a user using face-recognition software or using a near field communication (NFC) reader for the user's badge. At this point, user-specific information such as an email preview, the next meeting, the time to the next meeting, and the location of the next meeting can be displayed. Because the information will appear on the screen only when the user is standing before it (or in a fixed spot), privacy is assured.
  • Interactive recordings and replay can facilitate retrospection of prior encounters, which can facilitate creative collaboration. In various embodiments, interactive recordings and replay provide for multiple perspectives. Recordings may be tagged according to subject matter, speaker, and other criteria. This aspect of the present disclosure is disclosed in more detail in a commonly owned, concurrently filed, co-pending application, entitled “ ” (Atty. Docket No. 000005-041002US).
  • In various embodiments, the anticipatory smart space 100 may further include various sensory devices (sensors) deployed about the anticipatory smart space to enable perceptive and anticipatory user interfaces. In some embodiments, the sensors include cameras 104 a, 104 b, microphones 104 c, touch sensitive surfaces 104 d, Bluetooth and other wireless technology devices 104 e, and so on. The sensors may be deployed in and around the anticipatory smart space 100. For example, there may be ceiling mounted devices such as projectors, cameras, microphones and speakers; floor sensors may be deployed; and so on. The sensory devices may be in data communication with the server 118 to provide the server with information about detected devices, biometric information of users, and so on.
  • The server 118 may provide a context analysis functionality 106 for the anticipatory smart space 100. The context analysis functionality 106 may work with an automated user identification and authentication function 108 to determine who is present in the anticipatory smart space 100. The context analysis functionality 106 may determine various contexts from interactions among users in the anticipatory smart space 100 and anticipate users' data needs, anticipate tasks, anticipate scheduling requirements, and the like.
  • The automated user identification and authentication function 108 may detect the presence of users 10 and confirm their identities. This aspect of the present disclosure will be discussed in more detail below.
  • The anticipatory smart space 100 may include a data storage system comprising one or more data storage devices 112, 114, 116. Data storage device 112 may store recordings and other data that users 10 may want to retrieve; e.g., documents, email, etc. Data storage device 114 may serve as a device database that stores device information for all the devices 22 that users 10 may carry. The device information may include device identification data that identifies each device 22. For example, in the case of employee badges, the device identification data may be an employee number stored on an RFID tag 24 embedded in the badge. Smartphone devices may be identified via their MAC addresses, so the device identification data for a smartphone device may be the MAC address, and so on. The device information may also include user ID data that identify the users associated with the devices 22.
  • A user information data store 116 may store biometric information for each user 10 in the enterprise of the anticipatory smart space 100; e.g., all the employees in an enterprise. Biometric information refers to measurable characteristics of an individual that can serve to uniquely identify the individual. For example, biometric information may comprise measurements of characteristics such as fingerprint, face recognition, palm print, hand geometry, iris recognition, retina patterns, voice patterns (e.g., tone, pitch, cadence, etc.), and so on.
  • Referring to FIG. 2, an example of a network of two or more anticipatory smart spaces 100 a, 100 b in accordance with the present disclosure is illustrated. The anticipatory smart spaces 100 a, 100 b can be located anywhere, ranging from locations around the world, locations in a region (e.g., within a state, a city, etc.), or locally such as different locations in the same campus. A communication network 20 may provide data connections among the anticipatory smart spaces 100 a, 100 b. The communication network 20 may comprise a combination of local networks (e.g., to interconnect components comprising an anticipatory smart space) and a wide area network and/or a public network to provide communication (data, voice, video) among anticipatory smart spaces 100 a, 100 b. Common enterprise-wide data may be provided by a storage server 202. In some embodiments, some of the anticipatory smart spaces 100 a, 100 b may have localized storage 204 at their respective locations.
  • Referring to FIG. 3, an illustrative implementation of server 118 (FIG. 1) may include a computer system 302 having a processing unit 312, a system memory 314, and a system bus 311. In a particular implementation, the server 118 may be based on the nodeJS® runtime environment using a socket.io based engine for real-time bidirectional event-based communication. The server 118 may receive and send location information (discussed in more detail below) through sockets as an interface to the various sensors 104 a-104 e.
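  • Merely as an illustrative sketch (not part of the original disclosure), the following TypeScript fragment shows how a nodeJS® server using socket.io might receive location information through a socket; the event name "location-update", the payload shape, and the port are assumptions made here for illustration only:
    import { Server } from "socket.io";

    // Assumed payload shape for a location update sent by a user device.
    interface LocationUpdate {
      deviceId: string;       // device identification data (e.g., a MAC address)
      latitude?: number;      // optional coordinates
      longitude?: number;
      inProximity?: boolean;  // optional privacy-preserving Boolean (discussed below)
    }

    const io = new Server(3000);

    io.on("connection", (socket) => {
      socket.on("location-update", (update: LocationUpdate) => {
        // Hand the update to the proximity-filtering logic (blocks 504/506 of FIG. 5).
        console.log(`device ${update.deviceId} reported a location update`);
      });
    });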
  • The system bus 311 may connect various system components including, but not limited to, the processing unit 312, the system memory 314, an internal data storage device 316, and a communication interface 313. In a configuration where the computer system 302 is a mobile device (e.g., smartphone, computer tablet), the internal data storage 316 may or may not be included.
  • The processing unit 312 may comprise a single-processor configuration, or may be a multi-processor architecture. The system memory 314 may include read-only memory (ROM) and random access memory (RAM). The internal data storage device 316 may be an internal hard disk drive (HDD), a magnetic floppy disk drive (FDD, e.g., to read from or write to a removable diskette), an optical disk drive (e.g., for reading a CD-ROM disk, or to read from or write to other high capacity optical media such as the DVD, and so on). In a configuration where the computer system 302 is a mobile device, the internal data storage 316 may be a flash drive.
  • The internal data storage device 316 and its associated non-transitory computer-readable storage media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth. Although the description of computer-readable media above refers to a HDD, a removable magnetic diskette, and a removable optical media such as a CD or DVD, it is noted that other types of media which are readable by a computer, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used, and further, that any such media may contain computer-executable instructions for performing the methods disclosed herein.
  • The system memory 314 and/or the internal data storage device 316 may store a number of program modules, including an operating system 332, one or more application programs 334, program data 336, and other program/system modules 338. For example, the application programs 334, when executed, may cause the computer system 302 to provide the functional capabilities 102, context analysis 106, and authentication 108 described above.
  • External data storage device 342 may represent the data storage devices 112, 114, 116 described above. In some embodiments, the data storage devices 112, 114, 116 may connect to computer system 302 over communication network 352.
  • Access to the computer system 302 may be provided by a suitable input device 344 (e.g., keyboard, mouse, touch pad, etc.) and a suitable output device 346 (e.g., display screen). In a configuration where the computer system 302 is a mobile device, input and output may be provided by a touch sensitive display.
  • The computer system 302 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers (not shown) over a communication network 352. The communication network 352 may be a local area network (LAN) and/or larger networks, such as a wide area network (WAN). The sensors 104 a-104 e (FIG. 1) may connect to the computer system 302 over communication network 352.
  • The discussion will now turn to a description of user identification and authentication processing (108, FIG. 1). Users may interact with the anticipatory smart space (e.g., 100) which, in accordance with the present disclosure, can react to their presence and context using natural user interfaces, like voice recognition or gesture controls. However, it is important that the anticipatory smart space be able to correctly identify the users in the first place in order to provide them with relevant, targeted information and functionality. Proper identification is a special consideration when confidential information is involved.
  • Traditionally, users identify themselves to digital systems using, for example, combinations of a username and a password. This is a largely manual process and it has several flaws:
      • It is not user friendly, as it requires the users to memorize certain information (e.g. passwords) or to carry around additional devices (like badges).
      • It is not safe, because badges and logins can easily be stolen or logged.
      • Manual interactions are required; e.g., multiple taps or clicks via keyboard or mouse.
  • Various alternative user identification methods (e.g., face recognition, fingerprint recognition, etc.) also exhibit certain drawbacks:
      • Human features can be faked; e.g., by printing out a photo of someone's face.
      • They take too long; for example, multiple seconds can be required to retrieve the matching face from a library with millions of faces and users.
      • Rate of misrecognition (e.g. false positive/negative) can be unacceptable.
  • To summarize, current user identification solutions can have unacceptable shortcomings and may not provide users with a safe, fast, and reliable way to identify themselves when interacting with digital systems.
  • Accordingly, in some embodiments of the present disclosure, the identification and authentication process 108 may be based on a combination of multiple methods for identifying unique features of users; by making use of the individual strengths of each component, it achieves superior recognition performance and usability. The process is especially valuable in the context of smart spaces, e.g., where users simply approach public digital devices without having their own private computers with them and without being able (or wanting) to enter logins and passwords.
  • FIG. 4 illustrates, at a high level, a process for identification and authentication in accordance with the present disclosure. In a first step of the user identification process, the solution aims to find, based on contextual information, a limited number of users from among the population of all users 402 in an environment who may want access to a digital solution. In some embodiments, a proximity filtering process 404 (e.g., identifying users in proximity of a device) may be applied to reduce the selection of users from the population of all users 402 (e.g., millions of users) to a smaller subset of users 406 (e.g., tens of users). Since the proximity filtering may involve large volumes of data, manual interactions from users should be minimized, if not eliminated, in order for the filtering to happen efficiently.
  • Next, the subset of users 406 may be reduced to the individual by capturing and analyzing very specific, local information of the user. In accordance with the present disclosure, biometric filtering 408 may be applied to the subset of users 406; e.g., by collecting biometric information of the user to be authenticated. Based on the limited subset of users, the system is able to quickly and more accurately identify the user.
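  • As a non-limiting sketch of this two-stage narrowing (not taken from the disclosure), the TypeScript fragment below first filters the population of all users down to those whose registered devices were sensed in proximity, and then matches captured biometric features only against that candidate subset; the data shapes, the helper names, and the simple feature-vector comparison are assumptions for illustration only:
    interface User {
      userId: string;
      deviceIds: string[];
      biometricTemplate: number[]; // e.g., a stored feature vector (assumed representation)
    }

    // Stage 1: proximity filtering (404) - keep only users whose registered
    // devices were sensed near the smart space.
    function proximityFilter(allUsers: User[], devicesInProximity: Set<string>): User[] {
      return allUsers.filter((u) => u.deviceIds.some((d) => devicesInProximity.has(d)));
    }

    // Stage 2: biometric filtering (408) - compare the captured biometric
    // features only against the small candidate subset.
    function biometricFilter(candidates: User[], sensed: number[]): User | undefined {
      let best: User | undefined;
      let bestScore = Infinity;
      for (const user of candidates) {
        const score = euclideanDistance(user.biometricTemplate, sensed);
        if (score < bestScore) {
          bestScore = score;
          best = user;
        }
      }
      return best;
    }

    function euclideanDistance(a: number[], b: number[]): number {
      return Math.sqrt(a.reduce((sum, v, i) => sum + (v - (b[i] ?? 0)) ** 2, 0));
    }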
  • Referring now to FIG. 5, details of a workflow for identification and authentication in accordance with the present disclosure will now be described. In some embodiments, the workflow can be performed by server 118 (FIG. 1) executing suitably configured program code (e.g., applications 334, FIG. 3). At 502, as users approach the anticipatory smart space (“smart space”), devices they carry can serve to “announce” their presence to the smart space.
  • At 504, various sensors deployed about the smart space may sense the users' devices as they come into proximity of the sensors. As will be explained in more detail below, the users' devices may provide location information about respective users of the devices, which the smart space can use to determine whether a user is within proximity of the smart space.
  • At 506, when a device is deemed to be in proximity of the smart space, the smart space can access the device database 114 and the user information database 116 to identify the user who is associated with the device. This can be performed for each device that the smart space determines to be in its proximity. In this way, the subset of users 406 can be compiled from among the population of all users (e.g., as identified in the user information database 116).
  • At 508, a target user may want to gain access to the smart space; e.g., in order to indicate they have arrived for a meeting, to access documents and other information, etc. At 510, the smart space may capture biometric information from the target user. In some embodiments, this can be a passive activity so as not to burden the target user with having to consciously “log” into the smart space. More detail about this aspect of the present disclosure will be given below.
  • At 512, the smart space can identify/authenticate the target user from the subset of users 406. In some embodiments, for example, the biometric features of each candidate user in the subset of users 406, can be compared to the captured biometric information.
  • Various benefits can be obtained from the foregoing multifactor identification and authentication. For example, this approach can drastically improve the reliability and security of the identification and authentication of users, because the users (in particular, their devices) have to be present in a certain space and false identifications are reduced to a minimum. In addition, the approach is very user-friendly because it requires at most minimal (if any) manual interactions from the user. Furthermore, in terms of privacy, the approach has major advantages over existing solutions because users have control over when they want to be identified. If they are not shortlisted (e.g., listed in the subset of users 406) at 506 (e.g., using proximity detection), the feature recognition at 512 will not identify them.
  • The approach can prevent identity theft where someone might attempt to fake the biometric properties of the target user; e.g., by printing a picture of a person and holding it in front of a camera to simulate another face when face recognition is used for user identification.
  • Another advantage is the increased convenience for end users. Just approaching a smart space without any manual interactions can be sufficient to get access to features and data from the smart space that the user might want to access.
  • In addition, the recognition process can happen very quickly, because blocks 504 and 506 serve to reduce the amount of data processed in the subsequent steps, namely by reducing the search space from the population of all users in the enterprise to the subset of users who are in proximity of the smart space. This can significantly reduce the time for matching biometric information.
  • The discussion will now turn to a description of various embodiments for sensing a user's device and determining proximity (blocks 504, 506). FIG. 6, for example, illustrates a GPS embodiment, in accordance with embodiments of the present disclosure, in which a system of GPS satellites transmits GPS location data. If the user has a mobile device 62 capable of using GPS, receiving this location data is the initial step in identifying the subset of users 406 that will later be analyzed further to identify the target user. The mobile device 62 can be any mobile computing device.
  • Generally, GPS-capable mobile devices 62 are mobile phones, which typically do not implement the FAA-standard Wide Area Augmentation System (WAAS). The WAAS specification requires that a device be accurate to 7.6 meters or better at least 95% of the time. Mobile phones that do not support WAAS may instead support the Global Navigation Satellite System (GLONASS), which, unlike WAAS, is available worldwide while providing accuracy approaching that of WAAS.
  • In order to use the GPS chipset on a mobile device 62, an application on the mobile device requests coordinates from the GPS chipset. For example, assuming location services are available, the application would first create a location manager object 602. Using the location manager 602, the user (via the application) can set the desired accuracy and ask the location manager to start updating the location. Merely as an example, the following code fragment illustrates how the location manager 602 can start location updates on the mobile device 62:
  • CODE FRAGMENT I
    - (void)startStandardUpdates
    {
        // If the location manager doesn't already exist, create one.
        if (nil == locationManager) {
            locationManager = [[CLLocationManager alloc] init];
        }
        locationManager.delegate = self;
        // Set the desired accuracy and distance filter.
        locationManager.desiredAccuracy = kCLLocationAccuracyKilometer;
        locationManager.distanceFilter = 500; // meters
        [locationManager startUpdatingLocation];
    }
  • The location manager 602 can then start receiving location data, as illustrated for example by the following code fragment:
  • CODE FRAGMENT II
    - (void)locationManager:(CLLocationManager *)manager
         didUpdateLocations:(NSArray *)locations
    {
        // Examine the most recent location event and use it only if it is recent.
        CLLocation *location = [locations lastObject];
        NSDate *eventDate = location.timestamp;
        NSTimeInterval howRecent = [eventDate timeIntervalSinceNow];
        if (fabs(howRecent) < 15.0)
        {
            // If the event is recent, do something with it; here, log the coordinates.
            NSLog(@"latitude %+.6f, longitude %+.6f\n",
                  location.coordinate.latitude,
                  location.coordinate.longitude);
        }
    }
  • When a location is received, the mobile device 62 can then update its location on a central server (e.g., server 118, FIG. 1), for example, by transmitting the location data and an identifier of the mobile device 62 to the central server. This information can be used to build up the subset of users 406.
  • In some embodiments, the user, for privacy concerns, may have an option for simply updating the location with a Boolean value. For example, a value indicating TRUE can be sent to the central server to indicate the user is in proximity of the smart space, and FALSE to indicate the user is not in proximity of the smart space. In some embodiments, the mobile device 62 may make this determination. However, since this can be a computationally challenging task for the mobile device 62, the determination can be made closer to the central server. In some embodiments, for example, the location data received by the mobile device 62 can be provided to a delegator 604 to determine whether the location indicated by the location data lies in proximity of the smart space. In some embodiments, “proximity” can be based on being inside a predetermined radius of a center of the smart space.
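  • The following TypeScript fragment is a minimal sketch of such a proximity test, reporting only a Boolean based on whether reported coordinates fall inside a predetermined radius around the center of the smart space; the coordinates, the 100-meter radius, and the use of the haversine formula are illustrative assumptions rather than part of the disclosure:
    // Hypothetical center of the smart space and proximity radius.
    const SMART_SPACE_CENTER = { lat: 49.2934, lon: 8.6418 };
    const PROXIMITY_RADIUS_METERS = 100;

    // Great-circle distance between two coordinates, in meters.
    function haversineMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
      const toRad = (d: number) => (d * Math.PI) / 180;
      const R = 6371000; // mean Earth radius in meters
      const dLat = toRad(lat2 - lat1);
      const dLon = toRad(lon2 - lon1);
      const a =
        Math.sin(dLat / 2) ** 2 +
        Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(a));
    }

    // Boolean sent to the central server instead of raw coordinates.
    function isInProximity(lat: number, lon: number): boolean {
      return (
        haversineMeters(lat, lon, SMART_SPACE_CENTER.lat, SMART_SPACE_CENTER.lon) <=
        PROXIMITY_RADIUS_METERS
      );
    }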
  • FIG. 7 illustrates an access point embodiment for proximity detection in accordance with embodiments of the present disclosure. In this embodiment, the smart space can have several wireless access points (“hotspots”) 702 deployed about the smart space. To the extent that the access points 702 are in fixed locations in the smart space, their locations are predetermined. Their locations can be strategically chosen to provide maximum coverage of the smart space. The access points 702 can broadcast their WiFi signals, which can be detected by mobile devices 72 that are equipped with WiFi chips. The mobile device 72 will typically connect to the strongest signal available.
  • Connecting to the strongest signal means that the mobile device 72 usually connects to the access point 702 a closest to the user; devices can connect to WiFi signals up to 100 meters away. After a connection has been established, the mobile device 72 can query the access point 702 a for its Media Access Control (MAC) address. When the MAC address has been obtained, the mobile device 72 can then look up the specific location of the access point 702 a, for example, in the device database 114. The mobile device 72 can then post its identification and its location to the central server (e.g., server 118, FIG. 1). This information can be used to build up the subset of users 406.
  • In some embodiments, this process can be done on a periodic basis that is either predetermined or set by the user. There can also be certain flagged events that will trigger a location update of the mobile device 72.
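  • As an illustrative sketch of this lookup-and-report step (the table contents, the lookup source, and the reporting mechanism are assumptions standing in for the device database 114 and the call to server 118), the access point's MAC address can be mapped to a known location and then posted to the central server:
    // Hypothetical mapping from access point MAC address to a known location.
    const accessPointLocations: Record<string, string> = {
      "00:1a:2b:3c:4d:5e": "Building 3, Smart Space A",
      "00:1a:2b:3c:4d:5f": "Building 3, Cafeteria",
    };

    function reportLocation(deviceId: string, apMacAddress: string): void {
      const location = accessPointLocations[apMacAddress];
      if (location !== undefined) {
        // In a real deployment this would be an HTTP or socket call to server 118.
        console.log(`device ${deviceId} is near ${location}`);
      }
    }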
  • FIG. 8 illustrates a Bluetooth embodiment for proximity detection in accordance with embodiments of the present disclosure. The smart space may be configured with a deployment of devices having Bluetooth Low Energy (BLE) emitters 802. These emitters 802 send out a message periodically. The message embeds an identification of a specific location in the smart space; e.g., a meeting room, the cafeteria, and so on.
  • On the client side, a mobile device 82 may be configured with a BLE receiver that listens for the emitters 802. When a signal is detected from an emitter 802 a, the mobile device 82 can assess the signal strength. The strength may be used to estimate the distance between the emitter 802 a and the BLE receiver and compared against a threshold. If the threshold is met, the mobile device 82 may send its identification to a central server (e.g., server 118, FIG. 1) to indicate that the mobile device is in proximity.
  • In some embodiments, the devices that are deployed in the smart space may be Linux-based devices, each with a BLE dongle and running a daemon process to emit the identification of the location. The mobile device 82 can listen continuously on its Bluetooth port and decompose the received data to deduce the location identification and the distance. In some embodiments, for example, the distance calculation may be done as follows:
  • The strength of the received signal is given in decibels, dB, and is referred to as the received signal strength indication (RSSI). The distance is given by the strength difference between the emitting power and the received power (radio_dB). The emitting power is calibrated by its strength received at 1 meter (calibrated_Power), thus:

  • radio_dB = calibrated_Power − RSSI
  • The linearized value of radio_dB is:

  • radio_linearized = 10^(radio_dB/10)
  • The signal is a radio wave whose power falls off according to a 1/distance^2 model in spherical space, such that:

  • Power = Power_at_one_meter/distance^2

  • So:

  • distance = radio_linearized^(1/2)
  • The distance is thus the square root of the linearized difference between the calibrated signal strength and the received signal strength. When the mobile device 82 is in range of a Bluetooth-equipped location, it can send its identification and location to the central server. This information can be used to build up the subset of users 406.
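  • The calculation above can be expressed compactly; the following TypeScript sketch mirrors the formulas, with the calibration value in the usage example being an assumption for illustration:
    function estimateDistanceMeters(calibratedPower: number, rssi: number): number {
      const radioDb = calibratedPower - rssi;        // radio_dB = calibrated_Power - RSSI
      const radioLinearized = Math.pow(10, radioDb / 10);
      return Math.sqrt(radioLinearized);             // distance = radio_linearized^(1/2)
    }

    // Example: a beacon calibrated at -59 dBm at 1 meter and received at -65 dBm
    // yields an estimated distance of roughly 2 meters.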
  • FIG. 9 illustrates an RFID embodiment for proximity detection in accordance with embodiments of the present disclosure. By attaching small passive RFID tags to personal devices or badges 92 of users, the proximity between them and smart spaces can be determined by deploying RFID readers 902 in the smart space. The RFID reader 902 can send signals periodically so that it is waiting for RFID tags (e.g., in personal device 92) in proximity to respond. To do this, the user does not even have to be in the line of sight, but merely less than 3 meters away from the RFID reader 902. As soon as a tag is found, the RFID reader 902 can send the metadata received from the tag 92 to a central server (e.g., server 118, FIG. 1), in addition to other data such as an identifier of the tag 92 and the RSSI (received signal strength indication).
  • The central server 118 can process this information, e.g., using a maximum likelihood (ML) location estimation algorithm, to determine the proximity and the relative location of the user within the smart space. The central server also maintains a list of references between RFID tags and users (e.g., in device DB 114). The central server can therefore determine which users are in proximity to which RFID reader 902. This information can be used to build up the subset of users 406.
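  • The disclosure does not detail the maximum likelihood estimation; as a simplified stand-in (not the actual algorithm), the sketch below estimates a tag's position as the RSSI-weighted centroid of the readers that detected it, with the reader coordinates and the weighting model assumed purely for illustration:
    interface RfidReading {
      readerX: number;  // known position of the RFID reader (assumed coordinates)
      readerY: number;
      rssi: number;     // received signal strength in dBm
    }

    function estimateTagPosition(readings: RfidReading[]): { x: number; y: number } {
      if (readings.length === 0) {
        throw new Error("no readings available for this tag");
      }
      let weightSum = 0;
      let x = 0;
      let y = 0;
      for (const r of readings) {
        const weight = Math.pow(10, r.rssi / 10); // stronger (less negative) signal => larger weight
        weightSum += weight;
        x += weight * r.readerX;
        y += weight * r.readerY;
      }
      return { x: x / weightSum, y: y / weightSum };
    }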
  • Referring back to FIG. 1, the anticipatory smart space 100 may include several microphones 104 c deployed about the smart space. The microphones 104 c may be configured to listen for ultrasound (over 20 kHz) emitted by a personal device 22. Each personal device 22 may emit an ultrasound signal through a speaker with a different pattern and different pitch. An area in the smart space 100 having a microphone 104 c that receives the sound signal can therefore detect a personal device 22.
  • The smart space can filter all frequencies below 20 kHz and perform a fast Fourier transform (FFT) to get the peaks of received frequencies in order to obtain the amplitude of all the components of the sound signal. The characteristics of the signal, which are unique to the personal device, can be sent to the server 118, where they are compared with the reference entries in the device DB 114 to identify a user associated with the personal device 22. This information can be used to build up the subset of users 406.
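  • As a hedged sketch of this detection step, the fragment below uses the Goertzel algorithm (a narrow-band alternative to the full FFT described above) to measure the power at one assumed device signature frequency; the sample rate, target frequency, and detection threshold are illustrative assumptions:
    // Power of one target frequency in a block of audio samples (Goertzel algorithm).
    function goertzelPower(samples: number[], sampleRate: number, targetHz: number): number {
      const omega = (2 * Math.PI * targetHz) / sampleRate;
      const coeff = 2 * Math.cos(omega);
      let sPrev = 0;
      let sPrev2 = 0;
      for (const x of samples) {
        const s = x + coeff * sPrev - sPrev2;
        sPrev2 = sPrev;
        sPrev = s;
      }
      return sPrev2 * sPrev2 + sPrev * sPrev - coeff * sPrev * sPrev2;
    }

    // True if the ultrasonic signature (e.g., an assumed 20.5 kHz tone) is present.
    function detectsDevice(samples: number[], sampleRate: number, deviceHz: number): boolean {
      return goertzelPower(samples, sampleRate, deviceHz) > 1e6; // illustrative threshold
    }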
  • It can be appreciated from the foregoing that, in addition to the technological embodiments described above, other sources of information can be used to build up the subset of users 406. For example, an access security system in the smart space may be employed to register or otherwise identify users entering the smart space; e.g., by swiping a badge. This data can be used as an indication of the area in which a user is currently located and thus be used to build up the subset of users 406.
  • Based on input from the previously outlined processes for locating users relative to the smart space, the information is consolidated and correlated to identify which users are in proximity of which smart space and which are not. Combining and cross-checking the various technologies reduces the risk of errors if one or several of them fail.
  • For example, the GPS signal might only be reliable if a user (carrying a GPS-enabled personal device) is close to a window, or at least not too far inside a building. Otherwise, the GPS information might incorrectly indicate that the user is somewhere outside the building. However, if WiFi-based information indicates that the user is indeed in the building (e.g., because their personal device is connected to an access point located in the middle of the building, close to a smart space), this information is more likely to be accurate and can be given a higher weighting than the GPS-based information. Also, if a user is not carrying a badge with an RFID tag, the location detection would still work based on the other technologies.
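  • A minimal sketch of such a weighted consolidation is shown below; the per-technology weights and the decision threshold are assumptions chosen only to illustrate that, for instance, WiFi evidence can outweigh GPS evidence:
    interface ProximityEvidence {
      source: "gps" | "wifi" | "ble" | "rfid" | "ultrasound";
      inProximity: boolean;
    }

    // Hypothetical weights; WiFi is trusted more than GPS per the example above.
    const SOURCE_WEIGHTS: Record<ProximityEvidence["source"], number> = {
      gps: 0.5,
      wifi: 1.0,
      ble: 0.9,
      rfid: 0.9,
      ultrasound: 0.7,
    };

    function consolidateProximity(evidence: ProximityEvidence[]): boolean {
      if (evidence.length === 0) return false;
      let weighted = 0;
      let total = 0;
      for (const e of evidence) {
        const w = SOURCE_WEIGHTS[e.source];
        weighted += w * (e.inProximity ? 1 : 0);
        total += w;
      }
      return weighted / total >= 0.5; // weighted majority of the available evidence
    }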
  • It is worth noting that none of these technologies require manual interaction by the user; the overall solution is user-friendly and largely transparent. On the other hand, the technologies can easily be deactivated by the user, e.g., if the user wants to ensure their privacy. In addition, optional filters can be made available that automatically disable or limit the location detection if a user is not on a campus with a corresponding smart space.
  • The discussion will now turn to blocks 510 and 512, namely feature recognition, and in particular face recognition and speech recognition. The smart space may include a face recognition system, which in a particular embodiment, uses two sensors: a depth sensor and a webcam. The depth sensor can be used to certify that identification conditions have been met. The webcam takes the photos that are used for identification. When a user stands before the face recognition system and identification conditions are met, the server 118 (FIG. 1) can return the name of the person in the form of a greeting; e.g., “Hi [name]”. In addition, user details can be fed into the server 118 for further processing; e.g. to offer user-specific features and data.
  • Before face recognition can happen in a smart space, users may participate in a training phase to train the smart space. In an embodiment, for example, the user types their name into the system, then stands before a screen and follows a moving red dot with the eyes and face. At every position of the face, the webcam takes a photo. This photo can be digitally processed as follows: first, the eyes are identified; then, using the eyes as the center, the face is rotated, scaled and translated so the eyes line up in the correct position in the frame. This ensures that the eyes are horizontally placed, and that the center of the frame is between the eyes. The image is then cropped, tagged with the name of the person, and stored in a face library. Training can happen quickly (e.g., less than a minute), and needs to be done only once. For example, in some embodiments, a total of 10 photos might be taken, processed, tagged and stored in the face library (e.g., user information data store 116).
  • The identification process can start with the depth sensor detecting the presence of human skeletons. The depth sensor certifies that only one skeleton is present, and that the skeleton is standing at the correct distance (currently, less than 1 meter). If too many people approach the depth sensor together, it registers this and provides feedback. Once the depth sensor outputs have helped determine that conditions are correct, the webcam takes a series of test photos to carry out face recognition.
  • Each test photo taken by the webcam is digitally processed (in the same manner that face library photos were processed)—first, the eyes are identified; then, using the eyes as the center, the face is rotated, scaled and translated so the eyes line up in the correct position in the frame; finally, the image is cropped. This processed test photo is then matched against the face library photos using a “k-nearest-neighbor” algorithm.
  • The winning face library photo is returned. If the same library photo wins 4 or more times (when matched against 5 test photos), the name associated with that library photo is returned as the identified face and therefore the identified user. The system can also identify users who are not part of the face library. For this, the test photo is re-projected onto the winning library photo. The eigenvalues of the library photo (represented as a matrix) are compared with the eigenvalues of the test photo, and a score is computed. If the score exceeds a certain threshold, the system can say “Hi Stranger”, indicating that the best match for this face is not very good (i.e., this face is a stranger who is not in the face library).
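  • The voting step can be sketched as follows; the feature-vector representation, the distance metric standing in for the k-nearest-neighbor matcher, and the omission of the eigenvalue re-projection check are simplifying assumptions made only for illustration:
    interface LibraryPhoto {
      userName: string;
      features: number[]; // assumed feature-vector representation of a library photo
    }

    function distance(a: number[], b: number[]): number {
      return Math.sqrt(a.reduce((sum, v, i) => sum + (v - (b[i] ?? 0)) ** 2, 0));
    }

    // Nearest library photo to one processed test photo (assumes a non-empty library).
    function nearestLibraryPhoto(test: number[], library: LibraryPhoto[]): LibraryPhoto {
      return library.reduce((best, photo) =>
        distance(photo.features, test) < distance(best.features, test) ? photo : best
      );
    }

    // A user is reported only if the same identity wins at least 4 of the 5 test photos.
    function identifyFace(testPhotos: number[][], library: LibraryPhoto[]): string {
      const votes = new Map<string, number>();
      for (const test of testPhotos) {
        const winner = nearestLibraryPhoto(test, library);
        votes.set(winner.userName, (votes.get(winner.userName) ?? 0) + 1);
      }
      for (const [name, count] of votes) {
        if (count >= 4) return name;
      }
      return "Stranger"; // no sufficiently consistent match
    }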
  • Speaker Recognition—Speaker recognition can be used as an alternative, intuitive yet safe way to identify users. The technology is able to recognize a speaker based on the individual characteristics of the speaker's voice.
  • The smart space can be equipped with microphones, which are able to record the voice of any user of the smart space in high quality. The audio data can be streamed to the central server where, after the relevant voice features are extracted, it is analyzed using, for example, ALIZE, a biometric authentication platform written in C++.
  • ALIZE provides the necessary set of high- and low-level tools. The low-level library includes a statistical engine, while the high-level library and the LIA_RAL package provide the functions needed to teach the system the users' voices, in addition to parameter normalization, score normalization, and various other capabilities.
  • In some embodiments, the live recorded voice is compared only to the users who were previously identified as being in proximity to the smart space. This increases the security of the solution as well as the performance of the recognition process. At the end of the recognition process, a score is provided together with the identified users. The solution can not only identify whether one of the previously located users has said something, but also, with high likelihood, which one it was. See, for example, FIG. 10.
  • In some embodiments, the solution can provide improved results if the speaker recognition process is applied only to specific keywords, which the user taught the system in a one-time setup process. For example, a user could say a specific password, so that the system can verify not only that it is a specific user, but also that this specific user knows the specific password.
  • The discussion will now turn to a description of some details of the anticipatory smart space 100 (FIG. 1) in accordance with the present disclosure. For the purposes of explanation, the anticipatory smart space described herein supports a business environment that adapts to and anticipates users' needs based on presence, proximity, context and business function, and facilitates business interactions. It will be appreciated that the anticipatory smart space can be configured in other collaborative contexts.
  • In accordance with the present disclosure, the anticipatory smart space can be a multi-modal environment that supports users' interactions, both locally and remotely, with minimal input from the user. The environment can provide this support by perceiving the presence and identities of users (e.g., per the multifactor identification and authentication described above), and then anticipating their needs.
  • As shown in FIG. 1, the anticipatory smart space 100 can make use of various technologies to achieve superior usability in comparison to other collaboration systems. It saves users training effort, setup effort, navigational steps and cognitive effort by making use of natural user interfaces and smart back-end algorithms, providing the user with an absolutely seamless and targeted user experience.
  • The business environment can perceive the presence and context of users by answering a set of questions, amongst others, such as:
      • Is a user present?
  • Which user is present?
  • Where is the user standing and where is the user located in general?
  • Is the user moving?
  • Where is the user looking?
  • Which user is speaking?
  • What is the user saying?
  • Is the user gesturing and how?
  • Does the user interact physically with the smart space (e.g. touching its surface)?
  • What did the user do in the past?
  • Which data has the user accessed in the past?
  • With whom has the user interacted in the past?
  • Where was the user in the past?
  • Using signals received via its sensors, the anticipatory smart space 100 attempts to anticipate the needs of the user (or users). The primary use-case for the environment is a business meeting, planned or ad-hoc. Some user needs in a meeting are quite obvious, e.g. turn on the lights; others are indirect, e.g. bring up relevant documents on a large screen. An array of algorithms, some simple and some more involved, help the smart space anticipate users' needs and act on them. Examples include:
      • Turn on displays, activate microphones
      • Reserve a screen for user emails, and bring up the emails only when the user stands in a certain place and looks at the screen
  • Bring up the Outlook calendars of participants
      • Display which participants are present
      • Display who is not present, and provide a voice-activated option to call them
      • If permissible, display where in the building the missing participants are
      • Based on recent emails and meeting requests in Outlook, bring up and open all documents relevant to the meeting, including recordings of previous meetings
      • Display everyone's calendar and suggest the next meeting date
  • Identify relevant previous meetings and provide a voice-activated option to replay them
      • Provide a voice-activated option to search the internet.
  • Consider the following illustrative use case, for example. Two users approach an anticipatory smart space. A face recognition system identifies and greets them. On a video wall in the space, the system brings up documents relevant to their meeting. These documents are displayed on the part of the video wall that the user is looking at. The user can turn on privacy settings by voice command, causing documents to fade away when the user looks away or leaves the room.
  • The two users interact with these documents using voice, gestures, touch, keyboards or mouse. The video wall changes its behavior based on which users are looking at it, how long they have looked at which screen, and how far they are standing from the screen.
  • Remote participants are automatically dialed in and identified by the system (for instance, a name tag appears over their heads in the video wall). Users may conduct private interactions with sections of the wall (e.g., look up something on the Internet, recall parts of a previous meeting, etc.). Optionally, additional data in large text is displayed on screens in users' peripheral vision. If a user turns to look at the peripheral screens, the screens respond by presenting denser data.
  • With such seamless interactions, the usual flurry of sharing documents and dialing into remote meetings will go away.
  • Context analysis for feature and data anticipation may be provided by backend systems; e.g., server 118. Cues about the users and environment may be collected by the various sensors 104 a-104 e deployed about the anticipatory smart space 100 in order to anticipate users' needs. For example, the questions listed above may be considered in the analysis. In addition, real time image and audio analysis for presence detection, gaze detection, face detection, face recognition, voice recognition, voice control, etc. can also feed into the analysis. Information about users such as their names, their roles in the organization, their connections with other participants, their emotional state, biometric data, etc. can be used to assess and anticipate their information needs. Relevant contexts include the location where the meeting is taking place, the time and day of the meeting, the available display devices (e.g., are the participants carrying smartphones, computer tablets, etc.), the documents being looked at or shared, the content written on the smartboard, etc.
  • The above description illustrates various embodiments of the present disclosure along with examples of how aspects of the particular embodiments may be implemented. The above examples should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the particular embodiments as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents may be employed without departing from the scope of the present disclosure as defined by the claims.

Claims (19)

We claim the following:
1. A method in a computer system to facilitate interactions among users in a conference area comprising operating the computer system to perform steps of:
storing in a first data store a plurality of device information for a corresponding plurality of user devices, the plurality of device information including device identification (ID) data that identify the corresponding user devices and user ID data that identify users associated with the corresponding user devices;
receiving device ID data from a plurality of first user devices, the plurality of first user devices being a subset of the plurality of user devices;
using the device ID data received from the first user devices to access corresponding device information from the first data store;
using identification information obtained from the accessed device information to create a list of candidate users who are associated with the first user devices;
receiving sensed biometric information representative of a first user from one or more sensory devices deployed about the conference area;
accessing a second data store having stored thereon a plurality of biometric information for a corresponding plurality of users to access biometric information of only those users who are in the list of candidate users; and
comparing the sensed biometric information against the accessed biometric information to identify the first user.
2. The method in claim 1 further comprising receiving location information from one of the first user devices and selectively including a user associated with said one of the first user devices in the list of candidate users depending on whether a location represented by the location information is within a predetermined perimeter about the conference area.
3. The method in claim 2 wherein the location information represents a location of said one of the first user devices.
4. The method in claim 2 wherein the location information represents a location of a device in data communication with said one of the first user devices.
5. The method in claim 2 wherein receiving the sensed biometric information representative of the first user includes initiating sensing of the first user absent interaction with the first user.
6. The method in claim 5 wherein sensing the first user includes capturing an image and/or speech of the first user.
7. The method in claim 2 wherein receiving the sensed biometric information representative of the first user includes interacting with the user to initiate activity to obtain identification information from the first user.
8. The method in claim 7 wherein the activity includes capturing an image and/or speech of the first user.
9. The method in claim 2 wherein the first and second data store are in one data storage system.
10. A computer system comprising:
means for storing in a first data store a plurality of device information for a corresponding plurality of user devices, the plurality of device information including device identification (ID) data that identify the corresponding user devices and user ID data that identify users associated with the corresponding user devices;
means for receiving device ID data from a plurality of first user devices, the plurality of first user devices being at most a subset of the plurality of user devices;
means for accessing corresponding device information from the first data store using the device ID data received from the first user devices;
means for creating a list of candidate users who are associated with the first user devices using identification information obtained from the accessed device information;
means for receiving sensed biometric information from one or more sensory devices deployed about the conference area, the sensed biometric information representative of a first user;
means for accessing a second data store having stored thereon a plurality of biometric information for a corresponding plurality of users to access biometric information of only those users who are in the list of candidate users; and
means for comparing the sensed biometric information against the accessed biometric information to identify the first user.
11. The computer system of claim 10 further comprising means for receiving location information from one of the first user devices and selectively including a user associated with said one of the first user devices in the list of candidate users depending on whether a location represented by the location information is within a predetermined perimeter about the conference area.
12. The computer system of claim 11 wherein the location information represents a location of said one of the first user devices.
13. The computer system of claim 11 wherein the location information represents a location of a device in data communication with said one of the first user devices.
14. The computer system of claim 10 wherein the means for receiving the sensed biometric information representative of the first user includes means for initiating sensing of the first user absent interaction with the first user.
15. The computer system of claim 14 wherein the means for sensing the first user includes capturing an image and/or speech of the first user.
16. A computer system to facilitate interactions among users in a conference area, comprising:
first means for receiving data from a plurality of sensing devices deployed about the conference area;
second means, in communication with the first means, for detecting interactions among a plurality of first users;
third means, in communication with the second means, for accessing a data store of multimedia documents to access first multimedia documents based on the interactions among the first users and presenting the first multimedia documents on one or more output devices proximate the first users;
fourth means, in communication with the first means, for detecting a second user and for accessing the data store of multimedia documents to access second multimedia documents based on past activities of the second user.
17. The computer system of claim 16 wherein the interactions among the first users include conversations among the first users, wherein the first multimedia documents are based on the content of the conversations.
18. The computer system of claim 16 wherein the second user is among the plurality of first users.
19. The computer system of claim 16 wherein the past activities of the second user include previously accessed data, previous interactions with other users, and previously visited locations.
US14/546,480 2013-11-19 2014-11-18 Anticipatory Environment for Collaboration and Data Sharing Abandoned US20150142891A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/546,480 US20150142891A1 (en) 2013-11-19 2014-11-18 Anticipatory Environment for Collaboration and Data Sharing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361906327P 2013-11-19 2013-11-19
US14/546,480 US20150142891A1 (en) 2013-11-19 2014-11-18 Anticipatory Environment for Collaboration and Data Sharing

Publications (1)

Publication Number Publication Date
US20150142891A1 true US20150142891A1 (en) 2015-05-21

Family

ID=53174416

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/546,480 Abandoned US20150142891A1 (en) 2013-11-19 2014-11-18 Anticipatory Environment for Collaboration and Data Sharing
US14/546,521 Abandoned US20170134819A9 (en) 2013-11-19 2014-11-18 Apparatus and Method for Context-based Storage and Retrieval of Multimedia Content
US16/686,684 Active 2035-01-17 US11070553B2 (en) 2013-11-19 2019-11-18 Apparatus and method for context-based storage and retrieval of multimedia content

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/546,521 Abandoned US20170134819A9 (en) 2013-11-19 2014-11-18 Apparatus and Method for Context-based Storage and Retrieval of Multimedia Content
US16/686,684 Active 2035-01-17 US11070553B2 (en) 2013-11-19 2019-11-18 Apparatus and method for context-based storage and retrieval of multimedia content

Country Status (1)

Country Link
US (3) US20150142891A1 (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150198455A1 (en) * 2014-01-14 2015-07-16 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
USD768024S1 (en) 2014-09-22 2016-10-04 Toyota Motor Engineering & Manufacturing North America, Inc. Necklace with a built in guidance device
US9578307B2 (en) 2014-01-14 2017-02-21 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US9576460B2 (en) 2015-01-21 2017-02-21 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable smart device for hazard detection and warning based on image and audio data
US9586318B2 (en) 2015-02-27 2017-03-07 Toyota Motor Engineering & Manufacturing North America, Inc. Modular robot with smart device
US9629774B2 (en) 2014-01-14 2017-04-25 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US9677901B2 (en) 2015-03-10 2017-06-13 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for providing navigation instructions at optimal times
US9811752B2 (en) 2015-03-10 2017-11-07 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable smart device and method for redundant object identification
US9898039B2 (en) 2015-08-03 2018-02-20 Toyota Motor Engineering & Manufacturing North America, Inc. Modular smart necklace
US9922236B2 (en) 2014-09-17 2018-03-20 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable eyeglasses for providing social and environmental awareness
US9958275B2 (en) 2016-05-31 2018-05-01 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for wearable smart device communications
US9972216B2 (en) 2015-03-20 2018-05-15 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for storing and playback of information for blind users
US10012505B2 (en) 2016-11-11 2018-07-03 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable system for providing walking directions
US10024679B2 (en) 2014-01-14 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US10024680B2 (en) 2016-03-11 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Step based guidance system
US10024678B2 (en) 2014-09-17 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable clip for providing social and environmental awareness
US10024667B2 (en) 2014-08-01 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable earpiece for providing social and environmental awareness
US20180285525A1 (en) * 2017-03-31 2018-10-04 Ricoh Company, Ltd. Approach for displaying information on interactive whiteboard (iwb) appliances
US20180300324A1 (en) * 2017-04-17 2018-10-18 Microstrategy Incorporated Contextually relevant document recommendations
US10172760B2 (en) 2017-01-19 2019-01-08 Jennifer Hendrix Responsive route guidance and identification system
US10248856B2 (en) 2014-01-14 2019-04-02 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US10360907B2 (en) 2014-01-14 2019-07-23 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US10432851B2 (en) 2016-10-28 2019-10-01 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable computing device for detecting photography
US10490102B2 (en) 2015-02-10 2019-11-26 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for braille assistance
US10521669B2 (en) 2016-11-14 2019-12-31 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for providing guidance or feedback to a user
US10561519B2 (en) 2016-07-20 2020-02-18 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable computing device having a curved back to reduce pressure on vertebrae
EP3591570A4 (en) * 2017-03-22 2020-03-18 Huawei Technologies Co., Ltd. Method for determining terminal held by subject in picture and terminal
US20200128004A1 (en) * 2018-10-18 2020-04-23 International Business Machines Corporation User authentication by emotional response
US11743723B2 (en) 2019-09-16 2023-08-29 Microstrategy Incorporated Predictively providing access to resources
US11743064B2 (en) * 2019-11-04 2023-08-29 Meta Platforms Technologies, Llc Private collaboration spaces for computing systems

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10133538B2 (en) * 2015-03-27 2018-11-20 Sri International Semi-supervised speaker diarization
US20170034312A1 (en) * 2015-07-29 2017-02-02 Anthony I. Lopez, JR. Texting Communications System and Method for Storage and Retrieval of Structured Content originating from a Secure Content Management System
US10846475B2 (en) * 2015-12-23 2020-11-24 Beijing Xinmei Hutong Technology Co., Ltd. Emoji input method and device thereof
CN105930527B (en) * 2016-06-01 2019-09-20 北京百度网讯科技有限公司 Searching method and device
US20180160200A1 (en) * 2016-12-03 2018-06-07 Streamingo Solutions Private Limited Methods and systems for identifying, incorporating, streamlining viewer intent when consuming media
GB201621768D0 (en) * 2016-12-20 2017-02-01 Really Neural Ltd A method and system for digital linear media retrieval
US20190114131A1 (en) * 2017-10-13 2019-04-18 Microsoft Technology Licensing, Llc Context based operation execution
US10942963B1 (en) * 2018-04-05 2021-03-09 Intuit Inc. Method and system for generating topic names for groups of terms
US11256764B2 (en) * 2018-05-03 2022-02-22 EMC IP Holding Company LLC Managing content searches in computing environments
US11024291B2 (en) 2018-11-21 2021-06-01 Sri International Real-time class recognition for an audio stream
US10735811B2 (en) 2018-12-10 2020-08-04 At&T Intellectual Property I, L.P. System for content curation with user context and content leverage
US11205047B2 (en) * 2019-09-05 2021-12-21 Servicenow, Inc. Hierarchical search for improved search relevance
US11573995B2 (en) * 2019-09-10 2023-02-07 International Business Machines Corporation Analyzing the tone of textual data
US11558208B2 (en) * 2019-09-24 2023-01-17 International Business Machines Corporation Proximity based audio collaboration
EP3951775A4 (en) * 2020-06-16 2022-08-10 Minds Lab Inc. Method for generating speaker-marked text
US11301503B2 (en) * 2020-07-10 2022-04-12 Servicenow, Inc. Autonomous content orchestration
US11475058B1 (en) * 2021-10-19 2022-10-18 Rovi Guides, Inc. Systems and methods for generating a dynamic timeline of related media content based on tagged content

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6564213B1 (en) * 2000-04-18 2003-05-13 Amazon.Com, Inc. Search query autocompletion
US7836044B2 (en) * 2004-06-22 2010-11-16 Google Inc. Anticipated query generation and processing in a search engine
US7487145B1 (en) * 2004-06-22 2009-02-03 Google Inc. Method and system for autocompletion using ranked results
US8019749B2 (en) * 2005-03-17 2011-09-13 Roy Leban System, method, and user interface for organizing and searching information
US9703892B2 (en) * 2005-09-14 2017-07-11 Millennial Media Llc Predictive text completion for a mobile communication facility
US7657518B2 (en) * 2006-01-31 2010-02-02 Northwestern University Chaining context-sensitive search results
US8695031B2 (en) * 2006-08-02 2014-04-08 Concurrent Computer Corporation System, device, and method for delivering multimedia
US20080046925A1 (en) * 2006-08-17 2008-02-21 Microsoft Corporation Temporal and spatial in-video marking, indexing, and searching
US8041730B1 (en) * 2006-10-24 2011-10-18 Google Inc. Using geographic data to identify correlated geographic synonyms
EP2014816B1 (en) * 2007-07-10 2010-04-14 Clariant Finance (BVI) Limited Method for measuring octanol-water distribution coefficients of surfactants
US20090043741A1 (en) * 2007-08-09 2009-02-12 Dohyung Kim Autocompletion and Automatic Input Method Correction for Partially Entered Search Query
US8275764B2 (en) * 2007-08-24 2012-09-25 Google Inc. Recommending media programs based on media program popularity
WO2009040574A1 (en) * 2007-09-24 2009-04-02 Taptu Ltd Search results with search query suggestions
DE212011100017U1 (en) * 2010-08-19 2012-04-03 David Black Predictive query completion and predictive search results
FR2977249B1 (en) * 2011-07-01 2014-09-26 Serac Group PACKAGING INSTALLATION COMPRISING FILLING BITS EQUIPPED WITH CONNECTING DUCTING PIPES
US9785718B2 (en) * 2011-07-22 2017-10-10 Nhn Corporation System and method for providing location-sensitive auto-complete query
US20130080423A1 (en) * 2011-09-23 2013-03-28 Ebay Inc. Recommendations for search queries
US9384279B2 (en) * 2012-12-07 2016-07-05 Charles Reed Method and system for previewing search results
US9852233B2 (en) * 2013-03-15 2017-12-26 Ebay Inc. Autocomplete using social activity signals
US10185748B1 (en) * 2013-08-22 2019-01-22 Evernote Corporation Combining natural language and keyword search queries for personal content collections
US10210215B2 (en) * 2015-04-29 2019-02-19 Ebay Inc. Enhancing search queries using user implicit data

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080172733A1 (en) * 2007-01-12 2008-07-17 David Coriaty Identification and verification method and system for use in a secure workstation
US20090123035A1 (en) * 2007-11-13 2009-05-14 Cisco Technology, Inc. Automated Video Presence Detection
US20100161664A1 (en) * 2008-12-22 2010-06-24 General Instrument Corporation Method and System of Authenticating the Identity of a User of a Public Computer Terminal
US20100228825A1 (en) * 2009-03-06 2010-09-09 Microsoft Corporation Smart meeting room
US20100315483A1 (en) * 2009-03-20 2010-12-16 King Keith C Automatic Conferencing Based on Participant Presence
US8558864B1 (en) * 2010-10-12 2013-10-15 Sprint Communications Company L.P. Identifying video conference participants
US20120250950A1 (en) * 2011-03-29 2012-10-04 Phaedra Papakipos Face Recognition Based on Spatial and Temporal Proximity
US20120321143A1 (en) * 2011-06-17 2012-12-20 Microsoft Corporation Broadcast Identifier Enhanced Facial Recognition of Images
US20130208952A1 (en) * 2012-02-13 2013-08-15 Geoffrey Auchinleck Method and Apparatus for Improving Accuracy of Biometric Identification in Specimen Collection Applications
US20130251216A1 (en) * 2012-03-23 2013-09-26 Microsoft Corporation Personal Identification Combining Proximity Sensing with Biometrics
US20150042747A1 (en) * 2012-04-03 2015-02-12 Lg Electronics Inc. Electronic device and method of controlling the same
US20140107846A1 (en) * 2012-10-12 2014-04-17 Telefonaktiebolaget L M Ericsson (Publ) Method for synergistic occupancy sensing in commercial real estates
US20140109210A1 (en) * 2012-10-14 2014-04-17 Citrix Systems, Inc. Automated Meeting Room
US20160104051A1 (en) * 2012-11-07 2016-04-14 Panasonic Intellectual Property Corporation Of America Smartlight Interaction System
US20150067890A1 (en) * 2013-08-29 2015-03-05 Accenture Global Services Limited Identification system
US20150085058A1 (en) * 2013-09-22 2015-03-26 Cisco Technology, Inc. Classes of meeting participant interaction
US20140074874A1 (en) * 2013-10-02 2014-03-13 Federico Fraccaroli Method, system and apparatus for location-based machine-assisted interactions
US20150169946A1 (en) * 2013-12-12 2015-06-18 Evernote Corporation User discovery via digital id and face recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Buthpitiya et al., "HyPhIVE: A Hybrid Virtual-Physical Collaboration Environment," 2010 Third International Conference on Advances in Computer-Human Interactions, 2010, pp. 199-204 *
Mikic et al., "Activity monitoring and summarization for an intelligent meeting room", Proceedings Workshop on Human Motion, Dec. 2000, pp. 107-112 *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150198455A1 (en) * 2014-01-14 2015-07-16 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US10360907B2 (en) 2014-01-14 2019-07-23 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US9578307B2 (en) 2014-01-14 2017-02-21 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US9915545B2 (en) * 2014-01-14 2018-03-13 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US10248856B2 (en) 2014-01-14 2019-04-02 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US9629774B2 (en) 2014-01-14 2017-04-25 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US10024679B2 (en) 2014-01-14 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Smart necklace with stereo vision and onboard processing
US10024667B2 (en) 2014-08-01 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable earpiece for providing social and environmental awareness
US10024678B2 (en) 2014-09-17 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable clip for providing social and environmental awareness
US9922236B2 (en) 2014-09-17 2018-03-20 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable eyeglasses for providing social and environmental awareness
USD768024S1 (en) 2014-09-22 2016-10-04 Toyota Motor Engineering & Manufacturing North America, Inc. Necklace with a built in guidance device
US9576460B2 (en) 2015-01-21 2017-02-21 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable smart device for hazard detection and warning based on image and audio data
US10490102B2 (en) 2015-02-10 2019-11-26 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for braille assistance
US9586318B2 (en) 2015-02-27 2017-03-07 Toyota Motor Engineering & Manufacturing North America, Inc. Modular robot with smart device
US10391631B2 (en) 2015-02-27 2019-08-27 Toyota Motor Engineering & Manufacturing North America, Inc. Modular robot with smart device
US9811752B2 (en) 2015-03-10 2017-11-07 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable smart device and method for redundant object identification
US9677901B2 (en) 2015-03-10 2017-06-13 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for providing navigation instructions at optimal times
US9972216B2 (en) 2015-03-20 2018-05-15 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for storing and playback of information for blind users
US9898039B2 (en) 2015-08-03 2018-02-20 Toyota Motor Engineering & Manufacturing North America, Inc. Modular smart necklace
US10024680B2 (en) 2016-03-11 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Step based guidance system
US9958275B2 (en) 2016-05-31 2018-05-01 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for wearable smart device communications
US10561519B2 (en) 2016-07-20 2020-02-18 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable computing device having a curved back to reduce pressure on vertebrae
US10432851B2 (en) 2016-10-28 2019-10-01 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable computing device for detecting photography
US10012505B2 (en) 2016-11-11 2018-07-03 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable system for providing walking directions
US10521669B2 (en) 2016-11-14 2019-12-31 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for providing guidance or feedback to a user
US10172760B2 (en) 2017-01-19 2019-01-08 Jennifer Hendrix Responsive route guidance and identification system
EP3591570A4 (en) * 2017-03-22 2020-03-18 Huawei Technologies Co., Ltd. Method for determining terminal held by subject in picture and terminal
US11790647B2 (en) 2017-03-22 2023-10-17 Huawei Technologies Co., Ltd. Object recognition in photographs and automatic sharing
US20180285525A1 (en) * 2017-03-31 2018-10-04 Ricoh Company, Ltd. Approach for displaying information on interactive whiteboard (iwb) appliances
US20180300324A1 (en) * 2017-04-17 2018-10-18 Microstrategy Incorporated Contextually relevant document recommendations
US20200128004A1 (en) * 2018-10-18 2020-04-23 International Business Machines Corporation User authentication by emotional response
US11115409B2 (en) * 2018-10-18 2021-09-07 International Business Machines Corporation User authentication by emotional response
US11743723B2 (en) 2019-09-16 2023-08-29 Microstrategy Incorporated Predictively providing access to resources
US11743064B2 (en) * 2019-11-04 2023-08-29 Meta Platforms Technologies, Llc Private collaboration spaces for computing systems

Also Published As

Publication number Publication date
US20160142787A1 (en) 2016-05-19
US20170134819A9 (en) 2017-05-11
US20200196020A1 (en) 2020-06-18
US11070553B2 (en) 2021-07-20

Similar Documents

Publication Publication Date Title
US20150142891A1 (en) Anticipatory Environment for Collaboration and Data Sharing
US9553994B2 (en) Speaker identification for use in multi-media conference call system
US8843649B2 (en) Establishment of a pairing relationship between two or more communication devices
CN106663245B (en) Social alerts
US11789697B2 (en) Methods and systems for attending to a presenting user
EP3619923B1 (en) Coupled interactive devices
US20210344434A1 (en) Method, device, system, and storage medium for live broadcast detection and data processing
US10531048B2 (en) System and method for identifying a person, object, or entity (POE) of interest outside of a moving vehicle
US10133304B2 (en) Portable electronic device proximity sensors and mode switching functionality
US8780162B2 (en) Method and system for locating an individual
Tan et al. The sound of silence
US11445147B2 (en) Providing for cognitive recognition in a collaboration environment
JP2015526933A (en) Sending start details from a mobile device
US10735916B2 (en) Two-way communication interface for vision-based monitoring system
US11289086B2 (en) Selective response rendering for virtual assistants
US11057702B1 (en) Method and system for reducing audio feedback
US20190386840A1 (en) Collaboration systems with automatic command implementation capabilities
US20220236942A1 (en) System and method for casting content
US11112943B1 (en) Electronic devices and corresponding methods for using episodic data in media content transmission preclusion overrides
US20240106969A1 (en) Eye Contact Optimization

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAP SE, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HAQUE, SHAFKAT UL;KUO, RAY;WANG, MIAO;AND OTHERS;SIGNING DATES FROM 20141008 TO 20141107;REEL/FRAME:034199/0611

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION