US20160105470A1 - Single button mobile telephone using server-based call routing - Google Patents
Single button mobile telephone using server-based call routing
- Publication number
- US20160105470A1 (U.S. patent application Ser. No. 14/841,138)
- Authority
- US
- United States
- Prior art keywords
- user
- dataset
- computing system
- event
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/1066—Session management
- H04L65/1069—Session establishment or de-establishment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72418—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting emergency services
- H04M1/72424—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting emergency services with manual activation of emergency-service functions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/50—Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
- H04M3/51—Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
- H04M3/5116—Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing for emergency applications
-
- H04W4/005—
-
- H04W4/008—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/70—Services for machine-to-machine communication [M2M] or machine type communication [MTC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/80—Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W72/00—Local resource management
- H04W72/04—Wireless resource allocation
- H04W72/044—Wireless resource allocation based on the type of the allocated resource
- H04W72/0453—Resources in frequency domain, e.g. a carrier in FDMA
-
- H04W76/02—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W76/00—Connection management
- H04W76/10—Connection setup
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/63—Routing a service request depending on the request content or context
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72418—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting emergency services
- H04M1/72421—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting emergency services with automatic activation of emergency service functions, e.g. upon sensing an alarm
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2242/00—Special services or facilities
- H04M2242/04—Special services or facilities for emergency applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/02—Details of telephonic subscriber devices including a Bluetooth interface
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/10—Details of telephonic subscriber devices including a GPS signal receiver
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/12—Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
Definitions
- PSAP: public safety access point
- the wearable device may be configured to call a single destination (e.g., a PSAP) in response to a user request (e.g., in response to the user pushing the button) and may not be able to initiate voice calls to other destinations in response to the user request.
- FIG. 1 is a block diagram illustrating one embodiment of a system for detecting a predefined user state.
- FIG. 3 is a block diagram illustrating one embodiment of a distributed cloud computing system.
- the cloud computing system is configured to receive datasets of raw measurements based on an event from the wearable device via the network, where one of the datasets is audio.
- the datasets may include audio recorded by an audio-capturing module such as a microphone, and one or both of acceleration from an accelerometer and change in orientation (e.g., change in rotation angles) calculated from accelerometer, magnetometer, and gyroscope measurements.
- the audio data may originate from the user's voice, the user's body, and the environment.
- the datasets may include data received from other sensors, such as data from external health sensors (e.g., an EKG, blood pressure device/sphygmomanometer, a weight scale, a glucometer, a pulse oximeter).
- the cloud computing system may determine whether the event is an activity of daily life (ADL), a fall or other type of accident, or an inconclusive event.
- ADL: activity of daily life
- each of the wearable devices 12a-12n is operable to communicate with a corresponding one of users 16a-16n (e.g., via a microphone, speaker, and voice recognition software), external health sensors 18a-18n (e.g., an EKG, blood pressure device, weight scale, glucometer) via, for example, a short-range over the air (OTA) transmission method (e.g., BlueTooth, WiFi, etc.), a call center 30, a first-to-answer system 32, and a caregiver and/or family member 34, and the distributed cloud computing system 14 via, for example, a long range OTA transmission method (e.g., over a 3rd Generation (3G) or 4th Generation (4G) cellular transmission network 20, such as a Long Term Evolution (LTE) network, a Code Division Multiple Access (CDMA) network, etc.).
- 3G: 3rd Generation
- 4G: 4th Generation
- the wearable user device 12 may include an accelerometer for measuring an acceleration of the user, a magnetometer for measuring a magnetic field associated with the user's change of orientation, a gyroscope for providing a more precise determination of orientation of the user, and a microphone for receiving audio. Based on data received from the above sensors, the wearable device 12 may identify a suspected user state, and then categorize the suspected user state as an activity of daily life, a confirmed predefined user state, or an inconclusive event. The wearable user device 12 may then communicate with the distributed cloud computing system 14 to obtain a re-confirmation or change of classification from the distributed cloud computing system 14. In another embodiment, the wearable user device 12 transmits data provided by the sensors to the distributed cloud computing system 14, which then determines a user state based on this data.
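The application does not disclose the device's classification algorithm or thresholds. As a hedged sketch of the three-way categorization described above (activity of daily life, confirmed predefined state, inconclusive), a rule-based classifier over two sensor-fused features might look like the following; the feature names and threshold values are hypothetical:

```python
# Illustrative sketch only: the application does not disclose its
# classification logic. The 3.0 g impact and 60-degree orientation-change
# thresholds below are hypothetical values chosen for the example.

def classify_suspected_state(peak_accel_g, orientation_change_deg):
    """Categorize a suspected user state from sensor-fused features.

    Returns one of "adl" (activity of daily life), "confirmed_fall",
    or "inconclusive" (deferred to the cloud system for analysis).
    """
    HIGH_IMPACT_G = 3.0          # hypothetical impact threshold
    LARGE_ROTATION_DEG = 60.0    # hypothetical orientation-change threshold

    if peak_accel_g >= HIGH_IMPACT_G and orientation_change_deg >= LARGE_ROTATION_DEG:
        return "confirmed_fall"
    if peak_accel_g < HIGH_IMPACT_G and orientation_change_deg < LARGE_ROTATION_DEG:
        return "adl"
    # One strong signal without the other: let the cloud system re-confirm
    return "inconclusive"
```

In the two-stage design described above, an "inconclusive" result is exactly the case the device would forward to the distributed cloud computing system for re-confirmation or reclassification.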
- the wearable user device 12 may also obtain audio data from one or more microphones on the wearable device 12.
- the wearable user device 12 may record the user's voice and/or sounds which are captured by the one or more microphones, and may provide the recorded sounds and/or voice to the distributed cloud computing system 14 for processing (e.g., for voice or speech recognition).
- the wearable devices 12a-12n may continually or periodically gather/obtain data from the sensors and/or the one or more microphones (e.g., gather/obtain datasets and audio data) and the wearable devices 12a-12n may transmit these datasets to the distributed cloud computing system 14.
- the datasets may be transmitted to the distributed cloud computing system 14 at periodic intervals, or when a particular event occurs (e.g., the user pushes a button on the wearable device 12a-12n or a fall is detected).
- the datasets may include data indicative of measurements or information obtained by the sensors which may be within or coupled to the wearable devices 12a-12n.
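The application does not define a wire format for these datasets. The sketch below shows one plausible per-sensor payload; every field name is an assumption introduced for illustration, not something disclosed in the application:

```python
import json
import time

def build_dataset(device_id, sensor, readings, event=None):
    """Assemble one per-sensor dataset for transmission to the cloud
    computing system. All field names are hypothetical; the application
    does not specify a payload format."""
    return {
        "device_id": device_id,
        "sensor": sensor,        # e.g. "accelerometer", "agps", "microphone"
        "timestamp": time.time(),
        "readings": readings,    # raw measurements from the sensor
        "event": event,          # e.g. "button_press", "fall_detected", or None
    }

# Example: an accelerometer dataset sent when the user presses the button
payload = build_dataset("wearable-12a", "accelerometer",
                        [0.1, 0.0, 9.8], event="button_press")
encoded = json.dumps(payload)
```

A device could emit one such payload per sensor (as in the per-sensor embodiment) or merge several sensors' readings into a single combined dataset.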
- Cloud computing provides computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services.
- the term “cloud” refers to one or more computational services (e.g., servers) connected by a computer network.
- the wearable devices 12a-12n may include a button, which a user 16 may use to initiate voice calls. For example, a user 16a may push the button on the device 12a to initiate a voice call in order to obtain assistance or help (e.g., because the user has slipped or fallen, or because the user requires medical assistance). As discussed above, the wearable devices 12a-12n may periodically transmit datasets to the distributed cloud computing system 14. In one embodiment, the wearable devices 12a-12n may also transmit datasets to the distributed cloud computing system 14 when the user presses or pushes the button on the wearable devices 12a-12n. In one embodiment, the wearable devices 12a-12n may be single-button devices (e.g., devices which only have one button) which provide a simplified interface to users.
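The single-button flow just described (press, transmit current datasets, let the server pick the destination) can be sketched on the device side as follows; all the callables are hypothetical stand-ins for the device's transport layer, not APIs from the application:

```python
def on_button_press(device_id, read_sensors, upload, request_call):
    """Device-side sketch of the single-button flow: on a press, the
    wearable uploads its current datasets and asks the server to
    initiate (and route) a voice call. `read_sensors`, `upload`, and
    `request_call` are hypothetical stand-ins for device transports."""
    datasets = read_sensors()        # gather current sensor datasets
    upload(device_id, datasets)      # send datasets to the cloud system
    return request_call(device_id)   # server chooses the call destination

# Example wiring with stub transports
log = []
result = on_button_press(
    "wearable-12a",
    read_sensors=lambda: {"accelerometer": [0.1, 0.0, 9.8]},
    upload=lambda d, ds: log.append((d, ds)),
    request_call=lambda d: "call_initiated",
)
```

The key point the sketch captures is that the button itself encodes no destination: the device only uploads data and requests a call, and routing happens server-side.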
- the distributed cloud computing system 14 may identify a first-to-answer system 32 (e.g., a 911 or emergency response call center) as the destination for the voice call.
- the distributed cloud computing system 14 may identify a family member 34 as the destination for the voice call. After identifying a destination for the voice call, the distributed cloud computing system 14 routes the voice call to the identified destination.
- the distributed cloud computing system 14 may also analyze audio data received from a wearable device 12 to determine what event has happened to a user.
- the wearable device 12 may provide audio data (e.g., a recording of the user's voice or other sounds) to the distributed cloud computing system 14 .
- the distributed cloud computing system 14 may analyze the sound data and may determine that a user is asking for help (e.g., based on the user's words in the recording).
- the distributed cloud computing system 14 may identify a destination for the voice call, based on the audio data and/or the datasets received from the wearable device 12 and may route the voice call to the identified destination.
- the audio data may be used in conjunction with the datasets to identify a destination for routing the voice call.
- the distributed cloud computing system 14 may monitor the status of the voice call after it routes the voice call to the identified destination. For example, the distributed cloud computing system 14 may route (either automatically or based on an input from an administrator or call center agent) the voice call to a family member 34. The distributed cloud computing system 14 may monitor the voice call and may determine that the family member 34 did not answer the voice call. The distributed cloud computing system 14 may then route the voice call to a second destination (e.g., to a call center 30), based on the status of the voice call (e.g., based on the voice call failing to connect at the first destination).
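The monitor-and-reroute behavior above amounts to an ordered fallback over destinations. A minimal sketch, where `place_call` is a hypothetical stand-in for the telephony layer and the destination names are illustrative:

```python
def route_with_fallback(destinations, place_call):
    """Try each destination in order and return the first that answers.

    `place_call` is a hypothetical callable that attempts a call and
    returns True if the call connects; returns None if no destination
    answers. The destination ordering is illustrative.
    """
    for dest in destinations:
        if place_call(dest):
            return dest
    return None

# Example: try a family member first, then fall back to the call center
answered = {"family_member_34": False, "call_center_30": True}
connected = route_with_fallback(["family_member_34", "call_center_30"],
                                lambda dest: answered[dest])
```

In the server described here the retry would be driven by live call status rather than an immediate return value, but the ordered-fallback structure is the same.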
- the distributed cloud computing system 14 may determine that the wearable device 12 (which is worn by the user 16 ) is not located within the home of the user 16 (e.g., the user 16 has left or is outside of a specific geographic region such as the user's home), and may route the voice call to a call center 30 . If the wearable device 12 is located within the user's home, the distributed cloud computing system 14 may route the voice call to a family member 34 .
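A minimal sketch of this location-based selection, assuming a simple circular geofence around the user's home; the 100-meter radius, the destination labels, and the flat-earth distance approximation are illustrative choices, not details from the application:

```python
import math

def destination_for_location(lat, lon, home_lat, home_lon, radius_m=100.0):
    """Pick a call destination based on whether the wearer is inside a
    home geofence. The 100 m default radius and the equirectangular
    distance approximation (adequate for short distances) are
    illustrative assumptions."""
    dlat = math.radians(lat - home_lat)
    dlon = math.radians(lon - home_lon) * math.cos(math.radians(home_lat))
    distance_m = 6371000.0 * math.hypot(dlat, dlon)  # Earth radius in meters
    return "family_member" if distance_m <= radius_m else "call_center"
```

This mirrors the example in the text: inside the home, route to a family member; outside a specific geographic region, route to the call center instead.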
- FIG. 2 is a block diagram illustrating one embodiment of a wearable device 12a (e.g., wearable device 12a shown in FIG. 1).
- the wearable device 12a may include a low-power processor 38 communicatively connected to an accelerometer 40 (e.g., a two- or more-axis accelerometer) for detecting acceleration events (e.g., high, low, positive, negative, oscillating, etc.), a magnetometer 42 (preferably a 3-axis magnetometer) for assessing an orientation of the wearable device 12a, and a gyroscope 44 for providing a more precise determination of orientation of the wearable device 12a.
- the cellular module 46 is also configured to receive commands from and transmit data to the distributed cloud computing system 14 via a 3G, 4G, and/or other wireless protocol transceiver 50 over the cellular transmission network 20 .
- the cellular module 46 is further configured to communicate with and receive position data from an aGPS receiver 52, and to receive measurements from the external health sensors (e.g., sensors 18a-18n shown in FIG. 1) via a short-range BlueTooth transceiver 54 (or other equivalent short-range transceiver such as a WiFi transceiver) or via a direct connection to one or more health sensors (e.g., the health sensors may be directly attached/coupled to the wearable device 12a).
- the telephony server 305 may receive requests from users/wearable devices to initiate voice calls (e.g., user pushes a button on the wearable device to initiate a voice call) and may route the voice calls to one or more destinations, based on datasets such as audio data, health data, movement/orientation data, and location data, received from a wearable device.
- the telephony server 305 includes a call receiver 325 , a PSTN interface 310 , an IP telephony interface 315 , and a call monitor 320 .
- the subscription database 396 may store information or data related to user subscriptions and accounts.
- the subscription database 396 may store data related to the level of service a user is subscribed to. For example, a user may have a higher level (e.g., premium) subscription which indicates that calls should be routed to a PSAP, rather than routed to an automated call center.
- a user may have a lower level (e.g., lower tier) subscription which indicates that a call should be routed to a family member first, and that the call should be routed to a PSAP only if the family member does not answer the call.
- the subscription database 396 may also store rules and/or preferences for determining destinations for routing a voice call.
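The tiered routing examples above can be expressed as a small rules table. The tier names, destination labels, and default fallback below are assumptions introduced for illustration, not values from the subscription database described in the application:

```python
# Hypothetical routing preferences per subscription level, mirroring the
# examples in the text: a premium subscriber is routed straight to a PSAP,
# while a lower tier tries a family member first and escalates to a PSAP
# only if the family member does not answer.
ROUTING_RULES = {
    "premium": ["psap"],
    "lower_tier": ["family_member", "psap"],
}

def routing_order(subscription_level):
    """Return the ordered list of destinations for a subscriber.
    The call-center default for unknown levels is an assumption."""
    return ROUTING_RULES.get(subscription_level, ["call_center"])
```

Keeping the preferences in a table like this matches the description of the subscription database storing rules and/or preferences that the routing logic consults at call time.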
- FIG. 5 is a flow diagram illustrating a method 500 of routing calls, according to another embodiment.
- the method 500 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof.
- the method 500 is performed by a computing system (e.g., the distributed cloud computing system 14 of FIG. 1 or the telephony server 22 of FIG. 1).
- the computing system may analyze one or more datasets and the audio data to determine whether the event experienced by the user is an ADL, a confirmed fall or other type of accident, and/or inconclusive, at block 520 .
- one or more of health data, orientation/movement data, location data, and time data (e.g., the time of day) may be used to determine what type of event the user experienced.
- the computing system may process the audio data to determine what type of event the user experienced.
- the audio data may include a user's cries for help, and the computing system may process the audio data and determine that the event was a fall or other type of accident.
- the computing system may determine that the user is unconscious and unable to respond, and may determine that a fall or other type of accident has occurred.
- the computing system identifies one of a plurality of destinations, based on the datasets and the audio. For example, the computing system may determine that the datasets and audio data indicated a fall or other type of accident (e.g., based on a sudden change in orientation and a user's cries for help), and the computing system may identify a PSAP as a destination for routing the voice call. The computing system routes the voice call to the identified destination at block 530 .
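The identification step at block 525 maps the determined event type (plus any audio cues) to one of the plurality of destinations. The sketch below is an illustrative reading of the examples in the text, not the application's actual decision logic; the destination labels are assumptions:

```python
def identify_destination(event_type, cries_for_help=False):
    """Map a determined event type to a call destination.

    Illustrative mapping only: confirmed falls or accidents (or audible
    cries for help in the audio data) go to a PSAP; inconclusive events
    go to a call center for human triage; routine events (ADLs) go to a
    family member. Labels and policy are assumptions.
    """
    if event_type == "fall_or_accident" or cries_for_help:
        return "psap"
    if event_type == "adl":
        return "family_member"
    return "call_center"
```

In the full method, the chosen destination would then be handed to the routing step at block 530.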
- the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a smart phone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
- PC: personal computer
- PDA: Personal Digital Assistant
- the instructions 726 may include instructions to execute a server such as the telephony server 22 , the real time data monitoring server 36 , and/or the web server 28 of FIG. 1 .
- While the computer-readable storage medium 724 is shown in an exemplary embodiment to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
- the term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
- the term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Marketing (AREA)
- Human Computer Interaction (AREA)
- General Business, Economics & Management (AREA)
- Multimedia (AREA)
- Alarm Systems (AREA)
- Telephonic Communication Services (AREA)
Abstract
Implementations disclose a single button mobile telephone using server-based call routing. A method of an implementation of the disclosure includes receiving, by a processing device, datasets from an apparatus worn by a user, wherein the datasets correspond to an event experienced by the user; in response to a signal to initiate a communication with the user, analyzing, by the processing device, the datasets to determine a type of the event; identifying, by the processing device, one of a plurality of destinations based on the analysis of the datasets and the determined type of the event; and routing, by the processing device, the signal to the identified destination.
Description
- This application is a continuation of U.S. patent application Ser. No. 13/439,571, filed on Apr. 4, 2012, which claims the benefit of U.S. Provisional Patent Application No. 61/516,478, filed Apr. 4, 2011, the disclosures of each of which are incorporated herein by reference in their entirety.
- Embodiments of the present invention relate generally to health care-based monitoring systems, and more specifically, to a single button mobile telephone using server-based call routing.
- For certain age groups, such as the elderly, or people that engage in certain dangerous activities, such as firefighters and soldiers, it is desirable to track and understand human activity automatically. For example, a person that has fallen may be injured, unconscious, etc., and may need emergency assistance. In such circumstances, relying on the person to initiate a call to a public safety access point (PSAP) (e.g., 9-1-1 emergency services, an automated emergency call center, etc.) is not practical. Moreover, even if the person is capable of placing the call, the PSAP may be located outside the geographical jurisdiction for providing emergency services. An emergency services person located at a PSAP may need to manually place a second call to the local fire station, police, or Emergency Medical Services (EMS) squad, thereby wasting precious time that could be used to save the person's life. Further, if the person is unconscious, they would not be able to relate the nature of their injuries or their physical location.
- A wearable device may be worn by the user and the wearable device may monitor the activities and/or health of the user using a variety of sensors and/or components (e.g., GPS units, a blood pressure unit, an accelerometer, etc.). The wearable device may also provide a simple interface (e.g., a single button) to allow a user to initiate a voice call (e.g., to request help). However, these simplified interfaces (e.g., the single button) may not allow a user to choose a destination for the voice call. The wearable device may be configured to call a single destination (e.g., a PSAP) in response to a user request (e.g., in response to the user pushing the button) and may not be able to initiate voice calls to other destinations in response to the user request.
- Embodiments of the present invention will be more readily understood from the detailed description of exemplary embodiments presented below, considered in conjunction with the attached drawings.
- FIG. 1 is a block diagram illustrating one embodiment of a system for detecting a predefined user state.
- FIG. 2 is a block diagram illustrating one embodiment of a wearable device.
- FIG. 3 is a block diagram illustrating one embodiment of a distributed cloud computing system.
- FIG. 4 is a flow diagram illustrating a method of routing calls, according to one embodiment.
- FIG. 5 is a flow diagram illustrating a method of routing calls, according to another embodiment.
- FIG. 6 is a flow diagram illustrating a method of routing calls, according to a further embodiment.
- FIG. 7 is a block diagram of an exemplary computer system that may perform one or more of the operations described herein.
- Embodiments of the invention provide an apparatus wearable by a user for automatically contacting a public safety access point (PSAP). The wearable device includes several sensors for obtaining datasets. One of the datasets is a location of the user obtained from an aGPS receiver. The wearable device also includes a cellular transceiver. The cellular transceiver transmits the datasets to a cloud computing system, receives emergency assistance instructions from the cloud computing system, and contacts a public safety access point (PSAP) (e.g., 9-1-1 emergency services, an automated emergency call center, etc.) based on one or more of the location of the user, data/datasets received from the wearable device, and a subscription level/account of the user. The wearable device further includes a button which the user may use (e.g., by pressing the button) to initiate a voice call.
- In one embodiment, the cloud computing system is configured to receive datasets of raw measurements based on an event from the wearable device via the network, where one of the datasets is audio. In one embodiment, the datasets may include audio recorded by an audio-capturing module such as microphones; and one or both of acceleration from an accelerometer and change in orientation (e.g., change in rotation angles) calculated based on accelerometer, magnetometer, and gyroscope measurements. The audio data may originate from the user's voice, the user's body, and the environment. Optionally, the datasets may include data received from other sensors, such as data from external health sensors (e.g., an EKG, blood pressure device/sphygmomanometer, a weight scale, a glucometer, a pulse oximeter). The cloud computing system may determine whether the event is an activity of daily life (ADL), a fall or other type of accident, or an inconclusive event.
- In one embodiment, the cloud computing system may route the voice call to a destination, based on the datasets and audio data received from the wearable device. The cloud computing system may additionally use account or subscription information/data to identify a destination for routing the voice call. In one embodiment, the cloud computing system may monitor the status of the voice call and may route the voice call to a second destination, based on the status of the voice call.
-
FIG. 1 is a block diagram illustrating one embodiment of asystem 10 for detecting a predefined user state. Thesystem 10 includes wearable devices 12 a-12 n communicatively connected to a distributedcloud computing system 14. A wearable device 12 may be a small-size computing device that can be worn as a watch, a pendant, a ring, a pager, or the like, and can be held in any orientation. - In one embodiment, each of the wearable devices 12 a-12 n is operable to communicate with a corresponding one of users 16 a-16 n (e.g., via a microphone, speaker, and voice recognition software), external health sensors 18 a-18 n (e.g., an EKG, blood pressure device, weight scale, glucometer) via, for example, a short-range over the air (OTA) transmission method (e.g., BlueTooth, WiFi, etc.), a call center 30, a first-to-
answer system 32, and care giver and/orfamily member 34, and the distributedcloud computing system 14 via, for example, a long range OTA transmission method (e.g., over a 3rd Generation (3G) or 4th Generation (4G)cellular transmission network 20, such as a Long Term Evolution (LTE) network, a Code Division Multiple Access (CDMA) network, etc.). - Each wearable device 12 is configured to detect a predefined state of a user. The predefined state may include a user physical state (e.g., a user fall inside or outside a building, a user fall from a bicycle, a car incident involving a user, a user taking a shower, etc.) or an emotional state (e.g., a user screaming, a user crying, etc.). As will be discussed in more detail below, the wearable device 12 may include multiple sensors for detecting a predefined user state. For example, the wearable user device 12 may include an accelerometer for measuring an acceleration of the user, a magnetometer for measuring a magnetic field associated with the user's change of orientation, a gyroscope for providing a more precise determination of orientation of the user, and a microphone for receiving audio. Based on data received from the above sensors, the wearable device 12 may identify a suspected user state, and then categorize the suspected user state as an activity of daily life, a confirmed predefined user state, or an inconclusive event. The wearable user device 12 may then communicate with the distributed
cloud computing system 14 to obtain a re-confirmation or change of classification from the distributedcloud computing system 14. In another embodiment, the wearable user device 12 transmits data provided by the sensors to the distributedcloud computing system 14, which then determines a user state based on this data. - In one embodiment, the wearable device 12 includes a low-power processor (e.g., low-power processing device) to process data receive from sensors and/or detect anomalous sensor inputs. The low-power processor may cause a second processing device to further analyze the sensor inputs (e.g., may wake up a main CPU). If the second processing device determines that there is possibly an anomalous event in progress the second processing device may send dataset to the distributed
cloud computing system 14. In one embodiment, if the distributedcloud computing system 14 concludes there is an anomalous event, the distributedcloud computing system 14 may instruct the wearable device 12 to initiate a voice call. - In one embodiment, the wearable user device 12 may also obtain audio data from one or more microphones on the wearable device 12. For example, the wearable user device 12 may record the user's voice and/or sounds which are captured by the one or more microphones, and may provide the recorded sounds and/or voice to the distributed
cloud computing system 14 for processing (e.g., for voice or speech recognition). - In one embodiment, the wearable devices 12 a-12 n may continually or periodically gather/obtain data from the sensors and/or the one or more microphones (e.g., gather/obtain datasets and audio data) and the wearable devices 12 a-12 n may transmit these datasets to the distributed
cloud computing system 14. The datasets may be transmitted to the distributed cloud computing system 14 at periodic intervals, or when a particular event occurs (e.g., the user pushes a button on the wearable device 12 a-12 n or a fall is detected). In one embodiment, the datasets may include data indicative of measurements or information obtained by the sensors which may be within or coupled to the wearable devices 12 a-12 n. For example, the datasets may include temperature readings (e.g., 98.5 degrees Fahrenheit), measurements obtained from an accelerometer (e.g., a rate of acceleration), a GPS location (e.g., GPS or longitude/latitude coordinates), etc. In one embodiment, the wearable device 12 a-12 n may transmit a dataset per sensor (e.g., one dataset for the accelerometer, one dataset for an aGPS receiver, etc.). In another embodiment, the wearable device 12 a-12 n may combine data received from multiple sensors into a dataset. - Cloud computing provides computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. The term “cloud” refers to one or more computational services (e.g., servers) connected by a computer network.
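- As an illustration of the combined-dataset option described above, a wearable device might package readings from several sensors into a single payload before transmission. This is a minimal sketch; the build_dataset helper and its field names are assumptions for illustration, not the patent's actual data format:

```python
import json
import time

def build_dataset(device_id, readings):
    """Combine readings from multiple sensors into one dataset for
    transmission to the distributed cloud computing system."""
    return {
        "device_id": device_id,
        "timestamp": time.time(),  # when the readings were gathered
        "sensors": readings,       # one entry per sensor
    }

# Illustrative readings (values are made up)
readings = {
    "temperature_f": 98.5,                  # degrees Fahrenheit
    "acceleration_g": [0.01, -0.02, 1.0],   # x/y/z acceleration in g
    "gps": {"lat": 37.7749, "lon": -122.4194},
}

payload = json.dumps(build_dataset("wearable-12a", readings))
```

The alternative per-sensor option would simply call build_dataset once per sensor with a single-entry readings dictionary.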
- The distributed
cloud computing system 14 may include one or more computers configured as a telephony server 22 communicatively connected to the wearable devices 12 a-12 n, the Internet 24, and one or more cellular communication networks 20, including, for example, the public circuit-switched telephone network (PSTN) 26. The distributed cloud computing system 14 may further include one or more computers configured as a Web server 28 communicatively connected to the Internet 24 for permitting each of the users 16 a-16 n to communicate with a call center 30, first-to-answer systems 32, and care givers and/or family 34. The web server 28 may also provide an interface for users to interact with the distributed cloud computing system 14 (e.g., to access their accounts, profiles, or subscriptions, to access stored datasets and/or audio data, etc.). The distributed cloud computing system 14 may further include one or more computers configured as a real-time data monitoring and computation server 36 communicatively connected to the wearable devices 12 a-12 n for receiving measurement data (e.g., datasets), for processing measurement data to draw conclusions concerning a potential predefined user state, for transmitting user state confirmation results and other commands back to the wearable devices 12 a-12 n, for storing and retrieving present and past historical predefined user state data from a database 37 which may be employed in the user state confirmation process, and in retraining further optimized and individualized classifiers that can in turn be transmitted to the wearable devices 12 a-12 n. In one embodiment, the web server 28 may store and retrieve present and past historical predefined user state data, instead of the real-time data monitoring and computation server 36 or the database 37. - In one embodiment, the wearable devices 12 a-12 n may include a button, which a user 16 may use to initiate voice calls. For example, a
user 16 a may push the button on the device 12 a to initiate a voice call in order to obtain assistance or help (e.g., because the user has slipped or fallen, or because the user requires medical assistance). As discussed above, the wearable devices 12 a-12 n may periodically transmit datasets to the distributed cloud computing system 14. In one embodiment, the wearable devices 12 a-12 n may also transmit datasets to the distributed cloud computing system 14 when the user presses or pushes the button on the wearable devices 12 a-12 n. In one embodiment, the wearable devices 12 a-12 n may be single-button devices (e.g., devices which only have one button) which provide a simplified interface to users. - In one embodiment, the distributed
cloud computing system 14 may receive a request from the wearable device 12 a-12 n to initiate the voice call. The distributed cloud computing system 14 may also receive datasets from the wearable device 12 a-12 n associated with an event experienced by the user. After receiving the request to initiate the voice call, the distributed cloud computing system 14 may analyze the datasets to determine whether the event experienced by the user is an activity of daily life (ADL), a confirmed fall, or an inconclusive event. In another embodiment, the distributed cloud computing system 14 may identify a destination for routing the voice call, based on the analysis of the datasets. For example, if the distributed cloud computing system 14 analyzes the datasets and determines that the event is a confirmed fall, the distributed cloud computing system 14 may identify a first-to-answer system 32 (e.g., a 911 or emergency response call center) as a destination for the voice call. In another example, if the distributed cloud computing system 14 analyzes the datasets and is unable to determine what event occurred (e.g., an inconclusive event), the distributed cloud computing system 14 may identify a family member 34 as a destination for the voice call. After identifying a destination for the voice call, the distributed cloud computing system 14 routes the voice call to the identified destination. - In one embodiment, the distributed
cloud computing system 14 may also analyze audio data received from a wearable device 12 to determine what event has happened to a user. For example, the wearable device 12 may provide audio data (e.g., a recording of the user's voice or other sounds) to the distributed cloud computing system 14. The distributed cloud computing system 14 may analyze the sound data and may determine that a user is asking for help (e.g., based on the user's words in the recording). The distributed cloud computing system 14 may identify a destination for the voice call, based on the audio data and/or the datasets received from the wearable device 12 and may route the voice call to the identified destination. The audio data may be used in conjunction with the datasets to identify a destination for routing the voice call. - In another embodiment, the distributed
cloud computing system 14 may monitor the status of the voice call, after it routes the voice call to the identified destination. For example, the distributed cloud computing system 14 may route (either automatically or based on an input from an administrator or call center agent) the voice call to a family member 34. The distributed cloud computing system 14 may monitor the voice call and may determine that the family member 34 did not answer the voice call. The distributed cloud computing system 14 may then route the voice call to a second destination (e.g., to a call center 30), based on the status of the voice call (e.g., based on the voice call failing to connect at the first destination). - In one embodiment, the distributed
cloud computing system 14 may also use subscription data (e.g., information associated with a user's account or subscription to a service) to identify destinations for routing the voice call. For example, a user may have a higher tier/level subscription which specifies that voice calls initiated by the user (via the button on the wearable device 12) should be routed to a live person, such as a call center 30 or a first-to-answer system 32 (e.g., a 911 response center). In another example, a user may have a lower tier/level subscription which specifies that voice calls initiated by the user (via the button on the wearable device 12) should be routed to a family member 34 first, and then to a call center 30 if the family member 34 is not able to answer the voice call. The subscription data may be used in conjunction with the datasets and/or audio data to identify a destination for routing the voice call. - In a further embodiment, the distributed
cloud computing system 14 may also use a time of day and/or a geographic location to identify destinations for routing a voice call. For example, if a request to initiate a voice call is received in the evening (e.g., 7:00 PM), the distributed cloud computing system 14 may route the voice call to a call center 30, but if a request to initiate a voice call is received during the morning (e.g., 10:30 AM), the distributed cloud computing system 14 may route the voice call to a family member 34. In a further example, the distributed cloud computing system 14 may determine that the wearable device 12 (which is worn by the user 16) is not located within the home of the user 16 (e.g., the user 16 has left or is outside of a specific geographic region such as the user's home), and may route the voice call to a call center 30. If the wearable device 12 is located within the user's home, the distributed cloud computing system 14 may route the voice call to a family member 34. -
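To make the time-of-day and geofence examples above concrete, the selection logic might be sketched as follows. The destination names, the evening cutoff, and the pick_destination helper are assumptions chosen to mirror the examples in the text:

```python
from datetime import time as dtime

EVENING_START = dtime(18, 0)  # assumed start of "evening" hours

def pick_destination(call_time, inside_home):
    """Pick a routing destination from the time of day and whether the
    wearable device is inside the user's home geographic region."""
    if not inside_home:
        return "call_center"      # user has left the home region
    if call_time >= EVENING_START:
        return "call_center"      # evening call: route to call center
    return "family_member"        # daytime call at home: family first
```

For example, a 7:00 PM request at home goes to the call center, while a 10:30 AM request at home goes to a family member, matching the examples above.
-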
FIG. 2 is a block diagram illustrating one embodiment of a wearable device 12 a (e.g., wearable device 12 a shown in FIG. 1). The wearable device 12 a may include a low-power processor 38 communicatively connected to an accelerometer 40 (e.g., a two- or more-axis accelerometer) for detecting acceleration events (e.g., high, low, positive, negative, oscillating, etc.), a magnetometer 42 (preferably a 3-axis magnetometer) for assessing an orientation of the wearable device 12 a, and a gyroscope 44 for providing a more precise determination of orientation of the wearable device 12 a. The low-power processor 38 is configured to receive continuous or near-continuous real-time measurement data from the accelerometer 40, the magnetometer 42, and the gyroscope 44 for rendering tentative decisions concerning predefined user states. By utilizing the above components, the wearable device 12 is able to render these decisions in a relatively computationally inexpensive, low-power manner and minimize false positive and false negative errors. A cellular module 46, such as the 3G IEM 6270 manufactured by Qualcomm®, includes a high-computationally-powered microprocessor element and internal memory that are adapted to receive the suspected fall events from the low-power processor 38 and to further correlate orientation data received from the optional gyroscope 44 with digitized audio data received from one or more microphones 48 (preferably, but not limited to, micro-electro-mechanical systems-based (MEMS) microphones). The audio data may include the type, number, and frequency of sounds originating from the user's voice, the user's body, and the environment. - The cellular module 46 is also configured to receive commands from and transmit data to the distributed
cloud computing system 14 via a 3G, 4G, and/or other wireless protocol transceiver 50 over the cellular transmission network 20. The cellular module 46 is further configured to communicate with and receive position data from an aGPS receiver 52, and to receive measurements from the external health sensors (e.g., sensors 18 a-18 n shown in FIG. 1) via a short-range BlueTooth transceiver 54 (or other equivalent short range transceiver such as a WiFi transceiver) or via a direct connection to one or more health sensors (e.g., the health sensors may be directly attached/coupled to the wearable device 12 a). - In addition to recording audio data for fall analysis, the cellular module 46 is further configured to permit direct voice communication between the
user 16 a and the PSAP (e.g., 9-1-1, an emergency response center, etc., not shown in the figures), a call center 30, first-to-answer systems 32 (e.g., a fire station, a police station, a physician's office, a hospital, etc.), or care givers and/or family 34 via a built-in speaker 58 and an amplifier 60. Either directly or via the distributed cloud computing system 14, the cellular module 46 is further configured to permit the user 16 a to conduct a conference connection with one or more of a PSAP, the call center 30, the first-to-answer systems 32, and/or care givers and/or family 34. The cellular module 46 may receive/operate one or more input and output indicators 62 (e.g., one or more mechanical and touch switches (not shown), a vibrator, LEDs, etc.). The wearable device 12 a also includes an on-board battery power module 64. - The
wearable device 12 a may also include a button 62. The button 62 may allow a user to provide user input to the wearable device 12 a. For example, the user may press or push the button to initiate a voice call to one or more of a call center 30, first-to-answer systems 32 (e.g., a fire station, a police station, a physician's office, a hospital, etc.), or care givers and/or family 34. In another example, a user may use the button 62 to answer questions during a voice call (e.g., push the button 62 once for “yes” and push the button 62 twice for “no”). In another example, the user may indicate that the wearable device should start collecting data (e.g., datasets such as health data, audio data, location data, etc.) and/or send data to the distributed cloud computing system 14, using the button 62. - The
wearable device 12 a may also include empty expansion slots and/or connectors (not shown) to collect readings from other sensors (e.g., an inertial measurement unit, a pressure sensor for measuring air pressure or altitude, a heart rate sensor, a blood perfusion sensor, a temperature sensor, etc.). These other sensors may be coupled to the device via the expansion slots and/or connectors to provide additional datasets or information to the distributed cloud computing system 14. - In one embodiment, the
wearable device 12 a may collect, gather, and/or obtain information using a variety of components. For example, the wearable device 12 a may obtain orientation and/or movement data (e.g., information about how a user who is wearing the wearable device 12 a has moved) using the accelerometer 40, the magnetometer 42, and/or the gyroscope 44. In another example, the wearable device 12 a may determine the location (e.g., location data, such as GPS coordinates) of the wearable device 12 a (and the user who is wearing or holding the wearable device 12 a) using the aGPS receiver 52. In a further example, the wearable device may collect health data (e.g., heart rate, blood pressure, sugar levels, temperature, etc.) using sensors (not shown in the figures) which may be attached to the wearable device 12 a and/or may communicate with the wearable device 12 a using the Bluetooth transceiver 54. In yet another example, the wearable device 12 a may obtain audio data (e.g., voice and/or sounds) using the microphone 48 or a plurality of microphones (not shown in the figures). - In one embodiment, the
wearable device 12 a may obtain and/or generate datasets (e.g., orientation/movement data, health data, location data, audio data) using these components and may transmit these datasets to the distributed cloud computing system 14. In another embodiment, the wearable device 12 a may periodically transmit datasets to the distributed cloud computing system 14. For example, the wearable device 12 a may transmit the datasets once every 5 seconds, or once every 30 seconds. In another embodiment, the wearable device 12 a may transmit the datasets when certain criteria are met (e.g., when an accelerometer detects an acceleration above a certain threshold indicating a possible fall, or when the aGPS receiver determines that the wearable device has left a certain location). In a further embodiment, the wearable device 12 a may transmit datasets when a user input is received. For example, the wearable device 12 a may send the datasets when the user presses or pushes the button 62, in order to initiate a voice call. - In one embodiment, the
wearable device 12 a may process the datasets, prior to providing the datasets to the distributed cloud computing system 14. For example, the wearable device 12 a may process motion and/or orientation data to make an initial determination as to whether a user event (e.g., a fall or some other accident) has occurred. The distributed cloud computing system 14 may further process the datasets, in addition to the processing performed by the wearable device 12 a. In another embodiment, the wearable device 12 a may provide the datasets to the distributed cloud computing system 14 without first processing the datasets, and may allow the distributed cloud computing system 14 to process the datasets. In one embodiment, the distributed cloud computing system 14 may have more processing power (e.g., more CPUs) and may be better able to process and/or analyze the datasets than the wearable device 12 a. -
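The on-device initial determination described above might amount to a cheap screening test that flags suspicious motion before the datasets are handed to the cloud for fuller analysis. A minimal sketch, assuming a simple acceleration-magnitude threshold (the threshold value and names are illustrative, not the patent's method):

```python
import math

FALL_G_THRESHOLD = 2.5  # assumed magnitude (in g) suggesting a possible fall

def screen_sample(accel_xyz):
    """Initial on-device determination: flag a possible fall so the
    distributed cloud computing system can perform the fuller analysis."""
    magnitude = math.sqrt(sum(a * a for a in accel_xyz))
    return "suspected_fall" if magnitude >= FALL_G_THRESHOLD else "adl"
```

A reading near 1 g (ordinary standing or walking) passes through as an activity of daily life, while a large spike is escalated.
-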
FIG. 3 is a block diagram illustrating one embodiment of a distributed cloud computing system 300. The distributed computing system 300 may include a telephony server 305, a data monitoring server 345, and a web server 385. More or fewer components may be included in the distributed cloud computing system 300 without loss of generality. - The
telephony server 305 may receive requests from users/wearable devices to initiate voice calls (e.g., a user pushes a button on the wearable device to initiate a voice call) and may route the voice calls to one or more destinations, based on datasets such as audio data, health data, movement/orientation data, and location data received from a wearable device. The telephony server 305 includes a call receiver 325, a PSTN interface 310, an IP telephony interface 315, and a call monitor 320. - The
call receiver 325 may receive the request from the user (e.g., from the wearable device) to initiate the voice call. For example, the user may push a button on the wearable device to initiate a voice call and the call receiver 325 may receive the request to initiate the voice call from the wearable device. In one embodiment, the wearable device may provide the datasets (e.g., via a data connection) directly to the data monitoring server 345 and/or the web server 385 for processing. In an alternative embodiment, the call receiver 325 may receive datasets from the wearable device and forward them to the data monitoring server 345 and/or the web server 385 for processing. The call receiver 325 may provide the datasets to the data monitoring server 345 for processing. The data monitoring server 345 may analyze the datasets and/or audio data received from the wearable device to determine whether an event experienced by a user is an ADL, a confirmed fall, or an inconclusive event, as discussed in more detail below. Based on the determination about the event, the data monitoring server 345 may instruct the telephony server 305 to route the voice call to one or more destinations. For example, if the data monitoring server 345 determines that an event is a fall, the data monitoring server 345 may instruct the telephony server 305 to route the call to a first-to-answer system or to an emergency call center. In another example, if the data monitoring server 345 determines that an event is an ADL, the data monitoring server 345 may instruct the telephony server 305 to route the call to a family member, a general call center, or an automated answering service. Destinations may include, but are not limited to, landline telephones, cellular telephones, IP phones, call centers, first-to-answer systems, and/or public safety access points (PSAPs) (e.g., 9-1-1 emergency services, an automated emergency call center, etc.). - The
PSTN interface 310 may route the call to a public circuit-switched telephone network (e.g., a landline telephone). For example, the PSTN interface 310 may be used to route the call to a person's home phone number. The IP telephony interface 315 may route the call to an IP telephone system. The IP telephony interface 315 may also encode/decode audio data (e.g., analog voice data) into digital data. In one embodiment, one or more of the PSTN interface 310 and the IP telephony interface 315 may route a voice call to a cellular phone network (e.g., route the call to a cellular phone). - The call monitor 320 may monitor the status of a call (e.g., whether a call is answered by the destination, whether a call is dropped, monitor voice quality, etc.). Based on the status of the call, the
telephony server 305 may route the call to a second destination. For example, if a call is routed to a first destination (e.g., a family member's cell phone), and the call monitor 320 determines that the call was not answered, the telephony server may re-route the call to a second destination (e.g., a call center). In another example, if the call monitor 320 determines that a call was dropped (e.g., a cell phone call drops), the call monitor 320 may re-route the call to a second destination (e.g., from a family member's cell phone to a first-to-answer system). The destination database 330 may store a list of destinations (e.g., a list of phone numbers, call centers, etc.) which may be used to route the voice call. In one embodiment, the telephony server 305 may route voice calls using the list of destinations stored in the destination database 330. In another embodiment, the data monitoring server 345 may provide the telephony server 305 with a destination to use for routing the voice call. - As discussed above, the
data monitoring server 345 may analyze and/or process datasets (such as location data, health data, time data, orientation/movement data, etc.) and/or audio data to determine whether an event experienced by a user is an ADL, a confirmed fall or other accident, or an inconclusive event. The data monitoring server 345 includes a dataset analyzer 350 and an audio data analyzer 355. - The
dataset analyzer 350 may analyze the datasets provided by the wearable device to classify an event and/or determine what event occurred. For example, the dataset analyzer 350 may analyze motion data (e.g., acceleration, change in direction or orientation, etc.) to determine whether a user has fallen. In another example, the dataset analyzer 350 may also use health data (e.g., elevated heart rate, increase in body temperature) to determine whether a user has fallen or experienced some other type of accident or event. In a further example, the dataset analyzer 350 may analyze location data (e.g., GPS coordinates) to identify where an event occurred. This may allow the distributed computing system to route a voice call to a destination (e.g., a call center) geographically close to the location of the event or to route the voice call to the nearest family member. The audio data analyzer 355 may analyze audio data (e.g., voices or sounds) to determine whether an event such as a fall or an accident has occurred. For example, the audio data analyzer 355 may analyze the speech of a user (e.g., may determine whether a user is yelling for help). The data monitoring server 345 may also provide an interactive voice response system which may be used to gather user input to help determine whether an event such as a fall or an accident has occurred. For example, the interactive voice response system may ask the user “do you require medical attention” or “have you suffered an accident.” A user may provide answers or feedback using the wearable device (e.g., using the microphones/speakers in the wearable device) and the audio data analyzer 355 may process the user's answers or feedback. - The
web server 385 may provide an external interface for users to interact with the distributed cloud computing system 300. The web server 385 includes a portal 390 (e.g., a web portal or other type of interface/application), a data storage 393, and a subscription database 396. The web server 385 may allow a user to update subscription information (e.g., change to a higher or lower subscription), view datasets and audio data stored on the data monitoring server 345, and/or set preferences for destinations when routing voice calls. - The
subscription database 396 may store information or data related to user subscriptions and accounts. In one embodiment, the subscription database 396 may store data related to the level of service a user is subscribed to. For example, a user may have a higher level (e.g., premium) subscription which indicates that calls should be routed to a PSAP, rather than routed to an automated call center. In another example, a user may have a lower level (e.g., lower tier) subscription which indicates that a call should be routed to a family member first, and that the call should be routed to a PSAP only if the family member does not answer the call. The subscription database 396 may also store rules and/or preferences for determining destinations for routing a voice call. For example, the subscription database 396 may store a rule indicating that at certain times of the day, voice calls should be routed to different destinations. In another example, the subscription database 396 may store a rule which indicates different destinations based on the type of the event (e.g., whether the event is a fall, an ADL, or an inconclusive event). - The
data storage 393 may store datasets, audio data, and/or other data received from the wearable devices. The datasets and data stored in the data storage 393 may be used to maintain a record of events and activities which a user experiences. - In one embodiment, the
web server 385 and the data monitoring server 345 may store information and may process/use information using representational state transfer architecture (REST). In another embodiment, the web server 385 and the data monitoring server 345 may store information and may process/use information using other types of systems such as relational databases, hierarchical databases, etc. - In one embodiment, the
telephony server 305 may receive destinations for routing voice calls from the data monitoring server 345. In another embodiment, the telephony server 305 may use the analysis or determinations obtained by the data monitoring server 345 (e.g., by analyzing datasets and/or audio data) and may identify which destination should be used for routing the voice calls. Although the telephony server 305, the data monitoring server 345, and the web server 385 are shown as separate servers, in other embodiments, one or more of the telephony server 305, the data monitoring server 345, and the web server 385 could be combined into a single device (e.g., a single server). -
FIG. 4 is a flow diagram illustrating a method 400 of routing calls, according to one embodiment. The method 400 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof. In one embodiment, the method 400 is performed by a computing system (e.g., the distributed cloud computing system 14 of FIG. 1 or the telephony server 22 of FIG. 1). - Referring to
FIG. 4, the method 400 starts with the computing system receiving datasets associated with an event experienced by a user (block 405). As discussed above, the datasets may be transmitted periodically to the computing system from a wearable device, or may be transmitted in response to a user input (e.g., the user pushes a button on the wearable device). At block 410, the computing system receives a request to initiate a voice call. For example, the user may push a button on the wearable device and the wearable device may send a request to initiate a voice call to the computing system. In response to the request, the computing system may analyze one or more datasets to determine whether the event experienced by the user is an ADL, a confirmed fall or other type of accident, and/or inconclusive, at block 415. For example, the computing system may analyze the datasets to determine whether a user has slipped or fallen (e.g., whether movement data indicates a sudden increase in acceleration or change in orientation). In another example, the computing system may also analyze health data (e.g., heart rate, blood pressure, etc.) to determine what type of event the user has experienced. In a further example, the computing system may use location data (e.g., the GPS location of a user) and/or the time of day to determine what type of event the user has experienced. - In one embodiment, the computing system may employ machine learning when processing and/or analyzing datasets. The computing system may initially process “training” datasets in order to “train” the computing system. For example, the computing system may process the training datasets, which may have expected results. The computing system may be trained to reach the expected results (e.g., the computing system may store rules, thresholds, state machines, etc., generated using the training datasets). The computing system may also use user input to refine the processing and/or analyzing of the datasets.
For example, a user (e.g., an administrator) may analyze datasets and classify an event. The computing system may store an association between the datasets and the event type and may use this association when processing future datasets.
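- The training step described above can be illustrated with a toy nearest-centroid scheme: labeled datasets are reduced to a mean feature vector per event type, and new datasets are classified by the closest centroid. The feature choice and labels are assumptions for illustration, not the classifier the patent specifies:

```python
# Labeled training datasets: [peak acceleration in g, orientation change]
TRAINING = {
    "adl":            [[0.2, 0.1], [0.3, 0.2]],
    "confirmed_fall": [[3.1, 0.9], [2.8, 1.0]],
}

def train(samples_by_label):
    """Store one mean feature vector (centroid) per labeled event type."""
    return {
        label: [sum(col) / len(samples) for col in zip(*samples)]
        for label, samples in samples_by_label.items()
    }

def classify(centroids, features):
    """Classify a new dataset by its nearest centroid."""
    def sq_dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=sq_dist)

centroids = train(TRAINING)
```

An administrator's corrections could be folded in by appending the relabeled datasets to TRAINING and retraining, refining the stored associations over time.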
- At
block 420, the computing system identifies one of a plurality of destinations, based on the datasets. For example, the computing system may determine that the datasets indicated a fall or other type of accident, and the computing system may identify a PSAP as a destination for routing the voice call. In another example, the computing system may determine that the datasets indicate an ADL, and may identify a call center or an automated answering service as a destination for routing the voice call. The computing system routes the voice call to the identified destination at block 425. -
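The mapping from classified event type to destination at block 420 might be a simple lookup with a safe default. The table entries below are assumptions that mirror the examples above:

```python
# Assumed event-type -> destination routing table (illustrative only)
DESTINATIONS = {
    "confirmed_fall": "psap",              # emergency services
    "adl":            "automated_service", # routine activity of daily life
    "inconclusive":   "call_center",       # let a live agent triage
}

def identify_destination(event_type):
    """Identify one of a plurality of destinations from the event type,
    falling back to the call center for unrecognized classifications."""
    return DESTINATIONS.get(event_type, "call_center")
```
-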
FIG. 5 is a flow diagram illustrating a method 500 of routing calls, according to another embodiment. The method 500 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof. In one embodiment, the method 500 is performed by a computing system (e.g., the distributed cloud computing system 14 of FIG. 1 or the telephony server 22 of FIG. 1). - Referring to
FIG. 5, the method 500 starts with the computing system receiving datasets associated with an event experienced by a user (block 505). At block 510, the computing system receives a request to initiate a voice call. For example, the user may push a button on the wearable device and the wearable device may send a request to initiate a voice call to the computing system. The computing system receives audio data from the wearable device at block 515. For example, the computing system may receive recorded voices and/or sounds which the wearable device records after the user pushes a button to initiate a voice call. In response to the request, the computing system may analyze one or more datasets and the audio data to determine whether the event experienced by the user is an ADL, a confirmed fall or other type of accident, and/or inconclusive, at block 520. For example, one or more of health data, orientation/movement data, location data, and time data (e.g., the time of day) may be used to determine what type of event the user experienced. In one embodiment, the computing system may process the audio data to determine what type of event the user experienced. For example, the audio data may include a user's cries for help, and the computing system may process the audio data and determine that the event was a fall or other type of accident. In another example, if the audio data indicates that the user is not speaking or making any noises, the computing system may determine that the user is unconscious and unable to respond, and may determine that a fall or other type of accident has occurred. - At
block 525, the computing system identifies one of a plurality of destinations, based on the datasets and the audio data. For example, the computing system may determine that the datasets and audio data indicated a fall or other type of accident (e.g., based on a sudden change in orientation and a user's cries for help), and the computing system may identify a PSAP as a destination for routing the voice call. The computing system routes the voice call to the identified destination at block 530. -
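One way to combine the audio analysis with the dataset classification, as described above, is to let a cry for help, or total silence, upgrade an otherwise inconclusive event. The keyword list and the fuse helper are assumptions for illustration, not the patent's speech-recognition method:

```python
HELP_KEYWORDS = {"help", "fall", "fallen", "hurt"}  # assumed vocabulary

def fuse(dataset_verdict, transcript):
    """Combine the dataset-based classification with a crude audio check:
    cries for help, or no response at all, escalate the event."""
    words = set(transcript.lower().split())
    if dataset_verdict == "confirmed_fall":
        return "confirmed_fall"
    if words & HELP_KEYWORDS or not words:  # cries for help, or silence
        return "confirmed_fall"
    return dataset_verdict
```
-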
FIG. 6 is a flow diagram illustrating a method 600 of routing calls, according to a further embodiment. The method 600 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof. In one embodiment, the method 600 is performed by a computing system (e.g., the distributed cloud computing system 14 of FIG. 1 or the telephony server 22 of FIG. 1 ). - Referring to
FIG. 6 , the method 600 starts with the computing system receiving datasets associated with an event experienced by a user (block 605). At block 610, the computing system receives a request to initiate a voice call. In response to the request, the computing system may analyze one or more datasets to determine whether the event experienced by the user is an ADL, a confirmed fall or other type of accident, and/or inconclusive, at block 615. At block 620, the computing system identifies one of a plurality of destinations, based on the datasets. The computing system routes the voice call to the identified destination at block 625. - At
block 630, the computing system monitors the status of the call to determine whether the voice call was answered at the identified destination. If the voice call was answered at the first destination, the method 600 ends. If the voice call was not answered at the first destination, the computing system routes the voice call to a second destination from the plurality of destinations at block 635. In one embodiment, the computing system may identify the second destination based on one or more of a user's subscription (e.g., subscription/account information), rules or preferences associated with the destinations, datasets (e.g., location data, health data, orientation data, etc.) received from the wearable device, and audio data received from the wearable device. -
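The fallback behavior of blocks 630 and 635 amounts to trying an ordered list of candidate destinations until one answers. A minimal sketch follows; the ranking function and the call-placement callback are assumed placeholders, since the disclosure leaves the subscription rules and telephony-server behavior implementation-defined.

```python
# Hypothetical sketch of the method-600 fallback routing (blocks 630-635).
# rank_destinations and place_call stand in for the subscription rules and
# telephony-server call monitoring that the disclosure does not specify.

def rank_destinations(destinations, subscription):
    """Order candidates by the user's subscription preferences; unlisted
    destinations sort after the preferred ones."""
    prefs = subscription.get("preferred_order", [])
    return sorted(destinations,
                  key=lambda d: prefs.index(d) if d in prefs else len(prefs))

def route_with_fallback(destinations, subscription, place_call):
    """Try each destination in turn until the call is answered (block 635).

    place_call(dest) places the voice call and returns True if answered."""
    for dest in rank_destinations(destinations, subscription):
        if place_call(dest):
            return dest      # call answered here; method 600 ends
    return None              # no destination answered
```

In use, if a PSAP is preferred but does not answer, the call falls through to the next-ranked destination (for example, a caregiver), mirroring the rerouting at block 635.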
FIG. 7 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 700 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a smart phone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. - The
exemplary computer system 700 includes a processing device (processor) 702, a main memory 704 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 706 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 716, which communicate with each other via a bus 708. -
Processor 702 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 702 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processor 702 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processor 702 is configured to execute instructions 726 for performing the operations and steps discussed herein. - The
computer system 700 may further include a network interface device 722. The computer system 700 also may include a video display unit 710 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 712 (e.g., a keyboard), a cursor control device 714 (e.g., a mouse), and a signal generation device 720 (e.g., a speaker). In one embodiment, the video display 710, the alphanumeric input device 712 and the cursor control device 714 may be combined into a single device, such as a touch screen. - The
data storage device 716 may include a computer-readable storage medium 724 on which is stored one or more sets of instructions 726 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 726 may also reside, completely or at least partially, within the main memory 704 and/or within the processor 702 during execution thereof by the computer system 700, the main memory 704 and the processor 702 also constituting computer-readable storage media. The instructions 726 may further be transmitted or received over a network 721 via the network interface device 722. - In one embodiment, the
instructions 726 may include instructions to execute a server such as the telephony server 22, the real time data monitoring server 36, and/or the web server 28 of FIG. 1 . While the computer-readable storage medium 724 is shown in an exemplary embodiment to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. - In the foregoing description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure.
- Some portions of the detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
- It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “initiating”, “identifying”, “receiving”, “analyzing”, “routing,” “monitoring”, or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
- The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
- Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. In addition, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” Moreover, the words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion.
- It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Claims (21)
1. (canceled)
2. A method comprising:
receiving, using a first device, a first dataset from an apparatus worn by a user, wherein the first dataset corresponds to an event experienced by the user;
in response to a signal from the user to initiate a communication, determining, using a processing device, a type of the event experienced by the user; and
selecting, using the processing device, a first destination for the communication from multiple available destinations, the selecting based on the determined type of the event experienced by the user.
3. The method of claim 2 , wherein the receiving the first dataset includes receiving the first dataset via a first wireless communication protocol.
4. The method of claim 2 , wherein the receiving the first dataset includes receiving the first dataset via a Bluetooth communication protocol.
5. The method of claim 2 , further comprising:
transmitting, from the first device, information about the first dataset to a remote computing system, and wherein the remote computing system performs one or more of the determining the type of the event and the selecting the first destination.
6. The method of claim 5 , wherein the transmitting information about the first dataset to the remote computing system includes using a wireless communication protocol.
7. The method of claim 6 , wherein the using the wireless communication protocol includes using a cellular communication protocol.
8. The method of claim 2 , wherein the signal from the user to initiate a communication includes a user input to the first device to initiate a voice call, and wherein the selecting the first destination includes selecting a destination for the voice call.
9. The method of claim 8 , further comprising initiating the voice call between the first device and a remote caregiver.
10. The method of claim 2 , further comprising:
transmitting, from the first device, information about the first dataset to the selected first destination;
wherein the receiving the first dataset includes receiving the first dataset via a first wireless communication protocol; and
wherein the transmitting the information about the first dataset to the selected first destination includes transmitting the information via a different second wireless communication protocol.
11. The method of claim 2 , further comprising:
establishing a voice call between the first device and the selected first destination;
wherein the receiving the first dataset includes receiving the first dataset via a first wireless communication protocol; and
wherein the establishing the voice call includes establishing the voice call via a different second wireless communication protocol.
12. The method of claim 2 , wherein the receiving the first dataset includes receiving health data from a sensor configured to sense information about the user.
13. The method of claim 2 , wherein the receiving the first dataset includes receiving location information about the first device.
14. The method of claim 2 , wherein the receiving the first dataset includes receiving audio signal information from the first device.
15. The method of claim 2 , wherein the receiving the first dataset includes receiving orientation information about the first device.
16. A system comprising:
a first wireless sensor configured to generate a first dataset that includes information about a user or device status, the first dataset corresponding to an event experienced by the user;
a wearable apparatus including a wireless transceiver circuit, the wearable apparatus configured to wirelessly receive the first dataset from the first wireless sensor using the wireless transceiver circuit; and
a remote system configured to coordinate communication between the wearable apparatus and a destination terminal, the remote system including a processor circuit configured to:
based on the first dataset, determine a type of the event experienced by the user; and
based on the determined type of the event experienced by the user, select a first communication channel from among multiple available communication channels for use between the wearable apparatus and the destination terminal.
17. The system of claim 16 , wherein the processor circuit is configured to select the first communication channel in response to a user input to the wearable apparatus.
18. The system of claim 16 , wherein the wireless transceiver circuit includes a Bluetooth transceiver circuit configured to wirelessly receive the first dataset from the first wireless sensor using a Bluetooth communication protocol.
19. The system of claim 16 , wherein the processor circuit is further configured to initiate a voice call between the user and the destination terminal using the selected first communication channel.
20. The system of claim 16 , wherein the remote system comprises a distributed cloud computing system;
wherein the distributed cloud computing system is configured to route data or voice communication between the wearable apparatus and the destination terminal.
21. A wearable device associated with a user, the wearable device comprising:
a user input device;
a wireless transceiver circuit configured to receive first data from a sensor about the user's status or the sensor's status, the first data corresponding to multiple events experienced by the user;
a microphone configured to receive audio information from or about a user;
a first processor circuit configured to receive the first data and identify, among the data corresponding to the multiple events, data corresponding to a suspect event; and
a second processor circuit configured to:
in response to an indication from the user at the user input device, identify an event type for the suspect event; and
based on the identified event type, select one of multiple available communication channels and initiate a voice call over the selected one of the communication channels between the wearable device and a remote terminal, the voice call including the audio information from or about the user received by the microphone.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/841,138 US20160105470A1 (en) | 2011-04-04 | 2015-08-31 | Single button mobile telephone using server-based call routing |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161516478P | 2011-04-04 | 2011-04-04 | |
US13/439,571 US8811964B2 (en) | 2011-04-04 | 2012-04-04 | Single button mobile telephone using server-based call routing |
US14/452,932 US9143600B2 (en) | 2011-04-04 | 2014-08-06 | Single button mobile telephone using server-based call routing |
US14/841,138 US20160105470A1 (en) | 2011-04-04 | 2015-08-31 | Single button mobile telephone using server-based call routing |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/452,932 Continuation US9143600B2 (en) | 2011-04-04 | 2014-08-06 | Single button mobile telephone using server-based call routing |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160105470A1 true US20160105470A1 (en) | 2016-04-14 |
Family
ID=47354057
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/439,571 Active 2032-06-15 US8811964B2 (en) | 2011-04-04 | 2012-04-04 | Single button mobile telephone using server-based call routing |
US14/452,932 Active US9143600B2 (en) | 2011-04-04 | 2014-08-06 | Single button mobile telephone using server-based call routing |
US14/841,138 Abandoned US20160105470A1 (en) | 2011-04-04 | 2015-08-31 | Single button mobile telephone using server-based call routing |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/439,571 Active 2032-06-15 US8811964B2 (en) | 2011-04-04 | 2012-04-04 | Single button mobile telephone using server-based call routing |
US14/452,932 Active US9143600B2 (en) | 2011-04-04 | 2014-08-06 | Single button mobile telephone using server-based call routing |
Country Status (1)
Country | Link |
---|---|
US (3) | US8811964B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170235907A1 (en) * | 2015-09-16 | 2017-08-17 | Kersti A. Peter | Remote healthcare system for family care |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
MX342113B (en) * | 2012-08-16 | 2016-09-14 | Schlage Lock Co Llc | Wireless electronic lock system and method. |
US9282436B2 (en) | 2012-10-17 | 2016-03-08 | Cellco Partnership | Method and system for adaptive location determination for mobile device |
US9295087B2 (en) | 2012-10-17 | 2016-03-22 | Cellco Partnership | Mobile device smart button that adapts to device status |
US11157436B2 (en) | 2012-11-20 | 2021-10-26 | Samsung Electronics Company, Ltd. | Services associated with wearable electronic device |
US11372536B2 (en) | 2012-11-20 | 2022-06-28 | Samsung Electronics Company, Ltd. | Transition and interaction model for wearable electronic device |
US10423214B2 (en) * | 2012-11-20 | 2019-09-24 | Samsung Electronics Company, Ltd | Delegating processing from wearable electronic device |
US9477313B2 (en) | 2012-11-20 | 2016-10-25 | Samsung Electronics Co., Ltd. | User gesture input to wearable electronic device involving outward-facing sensor of device |
US8994827B2 (en) | 2012-11-20 | 2015-03-31 | Samsung Electronics Co., Ltd | Wearable electronic device |
US10551928B2 (en) | 2012-11-20 | 2020-02-04 | Samsung Electronics Company, Ltd. | GUI transitions on wearable electronic device |
US11237719B2 (en) | 2012-11-20 | 2022-02-01 | Samsung Electronics Company, Ltd. | Controlling remote electronic device with wearable electronic device |
US10185416B2 (en) | 2012-11-20 | 2019-01-22 | Samsung Electronics Co., Ltd. | User gesture input to wearable electronic device involving movement of device |
US20140275863A1 (en) * | 2013-03-14 | 2014-09-18 | Vital Herd, Inc. | Fluid analysis device and related method |
US20140357215A1 (en) * | 2013-05-30 | 2014-12-04 | Avaya Inc. | Method and apparatus to allow a psap to derive useful information from accelerometer data transmitted by a caller's device |
US10691332B2 (en) | 2014-02-28 | 2020-06-23 | Samsung Electronics Company, Ltd. | Text input on an interactive display |
US9432498B2 (en) * | 2014-07-02 | 2016-08-30 | Sony Corporation | Gesture detection to pair two wearable devices and perform an action between them and a wearable device, a method and a system using heat as a means for communication |
WO2016048345A1 (en) * | 2014-09-26 | 2016-03-31 | Hewlett Packard Enterprise Development Lp | Computing nodes |
US9572503B2 (en) | 2014-11-14 | 2017-02-21 | Eric DeForest | Personal safety and security mobile application responsive to changes in heart rate |
CN104964383B (en) * | 2015-04-30 | 2018-04-27 | 广东美的制冷设备有限公司 | The matching method and device of air conditioner and wearable device |
US11310290B2 (en) | 2016-07-15 | 2022-04-19 | Samsung Electronics Co., Ltd. | System and method for establishing first-to-answer call in mission critical push to talk communication |
CN106961588A (en) * | 2017-04-24 | 2017-07-18 | 青岛研创电子科技有限公司 | A kind of intelligent power monitoring system |
US11158179B2 (en) | 2017-07-27 | 2021-10-26 | NXT-ID, Inc. | Method and system to improve accuracy of fall detection using multi-sensor fusion |
US20190051144A1 (en) | 2017-07-27 | 2019-02-14 | NXT-ID, Inc. | Social Network for Responding to Event-Driven Notifications |
US11382511B2 (en) | 2017-07-27 | 2022-07-12 | Logicmark, Inc. | Method and system to reduce infrastructure costs with simplified indoor location and reliable communications |
US10812651B2 (en) * | 2018-04-12 | 2020-10-20 | Exotel Techcom Pvt. Ltd. | System and method for monitoring telephony communications in real time |
US10631257B1 (en) * | 2018-11-27 | 2020-04-21 | Avaya Inc. | System and method for providing enhanced routing in a contact center |
US11228624B1 (en) * | 2019-11-13 | 2022-01-18 | Amazon Technologies, Inc. | Overlay data during communications session |
CN111986460A (en) * | 2020-07-30 | 2020-11-24 | 华北电力大学(保定) | Intelligent alarm insole based on acceleration sensor |
CN115376276A (en) * | 2022-07-25 | 2022-11-22 | 苏州智瞳威视科技有限公司 | Old man falling monitoring device adopting artificial intelligence voice interaction mode |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010027384A1 (en) * | 2000-03-01 | 2001-10-04 | Schulze Arthur E. | Wireless internet bio-telemetry monitoring system and method |
US20060282021A1 (en) * | 2005-05-03 | 2006-12-14 | Devaul Richard W | Method and system for fall detection and motion analysis |
US20080001735A1 (en) * | 2006-06-30 | 2008-01-03 | Bao Tran | Mesh network personal emergency response appliance |
US20080129518A1 (en) * | 2006-12-05 | 2008-06-05 | John Carlton-Foss | Method and system for fall detection |
US20080294020A1 (en) * | 2007-01-25 | 2008-11-27 | Demetrios Sapounas | System and method for physlological data readings, transmission and presentation |
US20090322540A1 (en) * | 2008-06-27 | 2009-12-31 | Richardson Neal T | Autonomous fall monitor |
US20100285771A1 (en) * | 2009-05-11 | 2010-11-11 | Peabody Steven R | System containing location-based personal emergency response device |
US20110111736A1 (en) * | 2009-11-06 | 2011-05-12 | ActiveCare, Inc. | Systems and Devices for Emergency Tracking and Health Monitoring |
US20130141233A1 (en) * | 2011-02-23 | 2013-06-06 | Embedrf Llc | Position tracking and mobility assessment system |
US9060683B2 (en) * | 2006-05-12 | 2015-06-23 | Bao Tran | Mobile wireless appliance |
US20150223705A1 (en) * | 2010-03-12 | 2015-08-13 | Rajendra Padma Sadhu | Multi-functional user wearable portable device |
US20160328529A1 (en) * | 2011-03-25 | 2016-11-10 | Zoll Medical Corporation | System and method for adapting alarms in a wearable medical device |
US20170086672A1 (en) * | 2007-05-24 | 2017-03-30 | Bao Tran | Wireless monitoring |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090023425A1 (en) * | 2007-07-20 | 2009-01-22 | Syed Zaeem Hosain | System and method for mobile terminated event communication correlation |
Also Published As
Publication number | Publication date |
---|---|
US9143600B2 (en) | 2015-09-22 |
US20120322430A1 (en) | 2012-12-20 |
US20140349699A1 (en) | 2014-11-27 |
US8811964B2 (en) | 2014-08-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9143600B2 (en) | Single button mobile telephone using server-based call routing | |
US9526420B2 (en) | Management, control and communication with sensors | |
US9462444B1 (en) | Cloud based collaborative mobile emergency call initiation and handling distribution system | |
US8907783B2 (en) | Multiple-application attachment mechanism for health monitoring electronic devices | |
US11409816B2 (en) | Methods and systems for determining an action to be taken in response to a user query as a function of pre-query context information | |
US11382511B2 (en) | Method and system to reduce infrastructure costs with simplified indoor location and reliable communications | |
AU2018331264B2 (en) | Method and device for responding to an audio inquiry | |
US10332378B2 (en) | Determining user risk | |
US10510240B2 (en) | Methods and systems for evaluating compliance of communication of a dispatcher | |
CA3065096C (en) | Adaptation of the auditory output of an electronic digital assistant in accordance with an indication of the acoustic environment | |
US20160285800A1 (en) | Processing Method For Providing Health Support For User and Terminal | |
WO2018236514A1 (en) | Methods and systems for delivering a voice message | |
WO2019132682A1 (en) | Methods and systems for simultaneously monitoring multiple talkgroups | |
EP2809057A1 (en) | Method and apparatus to allow a PSAP to derive useful information from accelerometer data transmitted by a caller's device | |
US11290862B2 (en) | Methods and systems for generating time-synchronized audio messages of different content in a talkgroup | |
KR102553745B1 (en) | System for providing emergency alarming service using voice message | |
US11036742B2 (en) | Query result allocation based on cognitive load |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |